I have an Access 2010 database stored on a network drive. It does the following:
- Pull a list of 250,000 parts (this is fine)
- Analyse the part numbers in VBA
- Save the results of the analysis to a 2D array
- Write the list back out to a table in the database for use by other databases
If I run this process with the database saved to my desktop, it takes about 50 seconds to complete.
If I run it with the database saved on the network drive, it takes about 300 times as long. The step that seems to take the longest is writing the results from the array back to the table:
Sub WritePMROutput()
    Status "Writing output to tblPmrParts"
    Dim db As DAO.Database
    Dim rstOutput As DAO.Recordset
    Set db = CurrentDb
    Set rstOutput = db.OpenRecordset("tblPMRParts")
    db.Execute "DELETE tblPMRParts.* FROM tblPMRParts;" 'clear the table of all data before writing new data
    intPMR = 0
    Do Until intPMR = intArrPMRSze 'loop until the counter reaches the size of the array
        With rstOutput
            .AddNew 'new row in the output table
            !PMRID = arrPMR(intPMR, 0)
            !strayChars = arrPMR(intPMR, 1)
            !partno = arrPMR(intPMR, 2)
            !extension = arrPMR(intPMR, 3)
            'the If checks seem to be required here, as assigning "" to a number field caused data type errors
            If Not arrPMR(intPMR, 4) = "" Then !PartPatternID = arrPMR(intPMR, 4)
            If Not arrPMR(intPMR, 5) = "" Then !ExtPatternID = arrPMR(intPMR, 5)
            .Update
            intPMR = intPMR + 1
        End With
    Loop
    LogUpdate "PDM", "tblPMRParts"
End Sub
Can anyone suggest how I can improve on the efficiency of this operation? Or am I simply limited by the speed of my network?
Interestingly, I can copy the database to my desktop, run it, compact it, and copy it back to the network in about 5-10 mins.
Thanks in advance!
EDIT:
Following the very successful implementation of the fix suggested by Kim, I thought I would report back with the results. The situation was: on the local PC, it took about 60 seconds to run; across the network, it took about 15,000 seconds.
After wrapping the sub in a simple transaction (begin at the start of the sub, commit at the end): on the local PC, 35 seconds (nearly twice as fast); across the network, 500 seconds (30 times faster!).
The only issue I had with the implementation was that the number of record locks required threw an error, but a quick Google search yielded a simple line of code to temporarily increase the record lock limit:

DAO.DBEngine.SetOption dbMaxLocksPerFile, 300000
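For reference, the change amounts to wrapping the existing sub in a DAO transaction on the default workspace. A minimal sketch of that wrapper (the explicit workspace handling and the error-rollback are my additions, not part of the original code):

```vba
Sub WritePMROutput()
    Dim ws As DAO.Workspace
    Set ws = DBEngine.Workspaces(0)
    DAO.DBEngine.SetOption dbMaxLocksPerFile, 300000 'temporarily raise the lock limit
    On Error GoTo RollbackOut
    ws.BeginTrans 'buffer all writes until CommitTrans
    '...existing DELETE and AddNew loop, unchanged...
    ws.CommitTrans 'write everything to the file in one operation
    Exit Sub
RollbackOut:
    ws.Rollback 'undo any partial writes if something fails mid-loop
End Sub
```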
Hope this helps someone else in future!
I refer to this source as it explains the technique pretty well:
http://support.microsoft.com/kb/146908
Example syntax is provided there too.
You can use database transactions to reduce the number of read-writes to your table.
You perform your loops and logic in memory, but only commit your recordset to the database once.
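In DAO, the pattern looks something like this (a sketch using the default workspace; the table and field names are placeholders, not from the original code):

```vba
Dim ws As DAO.Workspace
Dim rst As DAO.Recordset
Dim i As Long
Set ws = DBEngine.Workspaces(0)
Set rst = CurrentDb.OpenRecordset("tblExample") 'placeholder table name
ws.BeginTrans 'start buffering writes in memory
For i = 0 To 9999
    rst.AddNew
    rst!SomeField = i 'placeholder field name
    rst.Update 'still buffered, not yet written to the file
Next i
ws.CommitTrans 'flush all writes to the database in one go
rst.Close
```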
Quote from the website:
The problem with Access is that it's a Client-Side Database. Whenever Access runs a process against a table, it has to pull the entire contents of the table to your local space, perform the operation, and then push the results back out to the database. This is why most people end up opting to use SQL Server as a back-end, as it's Server-Side and can do processing (in the form of Stored Procedures and Views) on the server to minimize what you bring to the front-end.
The most efficient way to delete records in a table is using a pre-constructed DELETE query. You may pick up a little time by doing that, as opposed to using a db.Execute statement. However, you can also try changing that statement to:
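(The exact replacement statement is cut off above; a common variant, offered here as my assumption rather than the original author's text, is to drop the redundant table qualifier and add the dbFailOnError option so failures raise a trappable error:)

```vba
db.Execute "DELETE * FROM tblPMRParts;", dbFailOnError
```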