Saturday, February 15, 2014

Delete Huge Dynamics AX Log Tables

Hi

In the last few days, I analyzed a customer database that had grown very fast (see the size query sketch after the list below).

The top tables were:
1- AIFResponse
2- AifMessageLog
3- SalesParmLine
4- BatchJobHistory
5- InventSumLogTTS
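
To see which tables are actually eating the space, a query along these lines can be run against the AX database (just a minimal sketch based on sys.dm_db_partition_stats):

-- List the largest tables by reserved space (8 KB pages converted to MB)
SELECT t.name AS TableName,
       SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS [RowCount],
       SUM(ps.reserved_page_count) * 8 / 1024 AS ReservedMB
FROM sys.dm_db_partition_stats AS ps
JOIN sys.tables AS t ON t.object_id = ps.object_id
GROUP BY t.name
ORDER BY ReservedMB DESC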

For the first two tables, which belong to the AIF interface log, I built a script to run on the SQL Server side.

USE DatabaseName   -- the AX database

DECLARE @continue INT
DECLARE @rowcount INT
SET @continue = 1

WHILE @continue = 1
BEGIN
 PRINT GETDATE()
 SET ROWCOUNT 10000   -- limit each DELETE to batches of 10,000 rows

 BEGIN TRANSACTION
 -- run once for AIFRESPONSE and once for AIFMESSAGELOG
 DELETE FROM AIFRESPONSE WHERE CREATEDDATETIME < '2014-01-01'

 SET @rowcount = @@ROWCOUNT
 COMMIT

 PRINT GETDATE()
 IF @rowcount = 0
 BEGIN
  SET @continue = 0
 END
END

SET ROWCOUNT 0   -- reset the row limit
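
A note on this approach: SET ROWCOUNT is deprecated for DELETE/INSERT/UPDATE statements, so on newer SQL Server versions the same batching can be achieved with DELETE TOP (10000). Either way, the point of the loop is to keep every transaction small, so locks are released quickly and the transaction log does not grow out of control.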

For the SalesParmLine table, which stores temporary sales order line entries while a sales order is being posted, I ran the cleanup job from the menu "XXX/Sales and marketing/Area page/Periodic/Sales update history cleanup".

For the BatchJobHistory table, I opened the Batch job history form and deleted all ended jobs.

For the InventSumLogTTS table, the delete function is triggered by the setup of the dynamic master plan, see classes\ReqCalcScheduleItem\InsertItemSched.
So, to get this cleanup working, go to Master planning/Setup/Master planning parameters and set the current dynamic master plan to the same plan as the static plan.

Finally, you must shrink the database.
First of all I set the recovery model to Simple, executed the shrink process and then switched the recovery model back to Full.
I also stopped the AOS before the shrink process.
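
In T-SQL terms the sequence looks roughly like this (just a sketch; DatabaseName is a placeholder, and take a new full backup afterwards to restart the log backup chain):

USE master
GO
ALTER DATABASE DatabaseName SET RECOVERY SIMPLE
GO
DBCC SHRINKDATABASE (DatabaseName)
GO
ALTER DATABASE DatabaseName SET RECOVERY FULL
GO
-- take a new FULL backup here, otherwise the log backup chain stays broken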

Update 1:

Take a look at the link below for a way to delete a huge volume of data from big tables.

Concurrency Week: How to Delete Just Some Rows from a Really Big Table
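
If I read the article correctly, the trick is to delete through a view that only exposes the oldest batch of rows, along these lines (table, column and batch size are just examples):

CREATE VIEW AIFRESPONSE_DeleteBatch
AS
SELECT TOP (10000) *
FROM AIFRESPONSE
ORDER BY CREATEDDATETIME
GO

-- repeat this DELETE until it affects 0 rows
DELETE FROM AIFRESPONSE_DeleteBatch WHERE CREATEDDATETIME < '2014-01-01'
GO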

That's it!

6 comments:

tommy.skaue said...

If you do a shrink, make sure you understand what it will do with the indexes (http://www.sqlservercentral.com/blogs/steve_jones/2010/09/14/common-sql-server-mistakes-_2D00_-shrinking-databases/).

Also, know that setting recovery mode to simple breaks the backup chain, so make sure you immediately do a new full backup after setting it back to full recovery mode in order to build a valid new recovery chain.

Denis Macchinetti said...

Thanks Tommy!

Thank you for your suggestion.

Denis

Saul said...

Better than changing the recovery mode is doing at least one LOG file backup. Once you have a log file backup, try shrinking the log file. If it still doesn't shrink, back up the log again (it will be smaller). The shrink will then work, or should. See the BACKUP LOG, DBCC SHRINKFILE and DBCC LOGINFO commands for more.

Also

http://stackoverflow.com/questions/7193445/dbcc-shrinkfile-on-log-file-not-reducing-size-even-after-backup-log-to-disk
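
For reference, that sequence looks roughly like this (database name, logical log file name and backup path are placeholders):

BACKUP LOG DatabaseName TO DISK = N'D:\Backup\DatabaseName_log.trn'
GO
DBCC SHRINKFILE (DatabaseName_log, 1024)   -- target size in MB
GO
-- if the file does not shrink, back up the log again and retry;
-- DBCC LOGINFO shows which virtual log files are still in use
DBCC LOGINFO
GO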

stephen joens(Synergy software systems) said...

This is not good practice: deleting millions of rows in one transaction can throttle a SQL Server.
TRUNCATE TABLE – We will presume that in this example TRUNCATE TABLE is not available due to permissions, that foreign keys prevent this operation from being executed or that this operation is unsuitable for purpose because we don’t want to remove all rows.
When you run something like the following to remove all rows from your table in a single transaction,

DELETE FROM ExampleTable
SQL Server sets about writing to the transaction log all of the changes to be applied to the physical data.
It will also decide how to lock the data.
It’s highly likely that the optimizer will decide that a complete table lock will be the most efficient way to handle the transaction.
There are potentially some big problems here.
1. Your transaction log may grow to accommodate the changes being written to it.
2. If your table is huge, you run the risk of consuming all the space on your transaction log disk.
3. If the AX application(s) still requires access to the table and a table lock has been placed on it, then it has to wait until the table becomes available. This could take some time, resulting in application timeouts and frustrated users.
4. The transaction log disk will be working hard during this period as the transaction log grows. This could decrease performance across all databases that share that disk for their transaction logs.

• Depending on how much memory is allocated to the SQL Server buffer pool, there could be significant drops in page life expectancy, reducing performance for other queries.

• When you realise that a big performance issue is occurring, it is tempting to kill the query.
• The trouble with that is it can delay things even more, because the server then has to roll back the transaction. Depending on how far along the operation is, this could add even more time on top of what the delete was originally going to take.
For example, if you kill the query when it is 90% done, the server has to roll back a 90% completed transaction. This will vary, but the rollback can take as much time as the delete had already been running! (check using KILL n WITH STATUSONLY)
Suggested ways to delete millions of rows using T-SQL loops and TOP
Use a loop combined with TOP and delete rows in smaller transactions.
Here are a couple of variations of the same thing.
Note that I have arbitrarily chosen 1000 as a figure for demonstration purposes.

SELECT 1
WHILE @@ROWCOUNT > 0
BEGIN
 DELETE TOP (1000)
 FROM LargeTable
END
And another way…

DoItAgain:
DELETE TOP (1000)
FROM ExampleTable

IF @@ROWCOUNT > 0
GOTO DoItAgain
These are simple examples just to demonstrate.
You can add WHERE clauses and JOINs to help filter out the specific rows to remove.
Also add error handling/transactions (COMMIT/ROLLBACK).
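
For example, a batched delete with an explicit transaction and basic error handling might look something like this (table, filter and batch size are placeholders):

WHILE 1 = 1
BEGIN
 BEGIN TRY
  BEGIN TRANSACTION
  DELETE TOP (1000)
  FROM ExampleTable
  WHERE CREATEDDATETIME < '2014-01-01'

  IF @@ROWCOUNT = 0
  BEGIN
   COMMIT
   BREAK
  END
  COMMIT
 END TRY
 BEGIN CATCH
  IF @@TRANCOUNT > 0 ROLLBACK;
  THROW;   -- SQL Server 2012+; use RAISERROR on older versions
 END CATCH
END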
Summary
It's a bad idea to delete millions of rows in one transaction, and whilst this might sound like a no-brainer, people do try to do it and then wonder why things start to go bad.
Breaking the delete operation down into smaller transactions will help reduce contention for your table, reduce probability of the transaction log becoming too large for its disk and reduce performance impact in general.
The transaction log disk will still be working hard as the refined delete routine removes the rows from your table, so run this task during maintenance windows, which typically fall in off-peak periods.

Also check out the size of the smmTransLog table.

John H. said...

For BatchHistory you can remove rows on some basis... but if you want to keep the history for, say, 30 days, it can still be very large (ours is 145 GB keeping 30 days of history). The real space is used by the PARAMETERS column and the INFO column: both are varbinary(max), the data in these columns can be quite large, and we have about 3 million rows for 30 days.

NOTE: Being varbinary(max), data larger than 8000 bytes is kept in the SQL LOB storage area rather than inline with the row, so some storage information can deceive you.
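
To see how the space splits between in-row and LOB storage, a query along these lines can help (schema and table name may differ in your environment):

SELECT OBJECT_NAME(ps.object_id) AS TableName,
       SUM(ps.in_row_reserved_page_count) * 8 / 1024 AS InRowMB,
       SUM(ps.lob_reserved_page_count) * 8 / 1024 AS LobMB
FROM sys.dm_db_partition_stats AS ps
WHERE ps.object_id = OBJECT_ID('dbo.BATCHHISTORY')
GROUP BY ps.object_id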

Does anyone know what these columns are for? If they are not important, can we just null them out to regain space?

Unknown said...

Hi John,
The BatchHistory.PARAMETERS column holds the parameters used when the batch was launched. These are typically the values filled into the controls of the dialog that appears when the user launches the functionality in batch mode. This information is (probably) very important to understand what the parameters were when the user launched the batch, which is useful to understand the batch behaviour.
The BatchHistory.INFO column holds information collected during the batch run, the same information the user would see if the functionality were launched interactively. This information too could be vital to understand what happened during the run.

Best regards,
Dario
