I am querying a pipe-delimited text file using VBA with ADODB. I have tried both the ACE and JET engines, but the results are the same, and not correct. I saved a schema.ini file in the directory telling the engine that the file is pipe delimited. When the query finishes, though, some of the data has shifted columns and other fields come back blank even though I can see the data in the text file. I use the CopyFromRecordset
method to transfer the results to my workbook. Can anyone see something off in my code? Or has anyone run into this?
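For reference, the schema.ini is along these lines (the file name here is just a stand-in for the real one, and I have left any column definitions out):

[report.txt]
ColNameHeader=True
Format=Delimited(|)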
My connection string:
With Conn
    .Provider = "Microsoft.ACE.OLEDB.12.0"
    .ConnectionString = "Data Source=" & strPath & ";" & _
                        "Extended Properties=""text;HDR=YES;FMT=Delimited"""
    .Open
End With
Value of "strPath":
strPath = "H:\Folder\"
SQL String:
strSQL = "SELECT * FROM " & strFile & ";"
Data transfer:
With ThisWorkbook
    With .Sheets("CashFlows")
        ' Dump the data starting in row 2, then write the field names into row 1
        .Range("a2").CopyFromRecordset Rst
        With .Range("a1")
            For i = 0 To Rst.Fields.Count - 1
                .Offset(0, i) = Rst.Fields(i).Name
            Next i
        End With
    End With
End With
I have never had issues with ADO (either ACE or Jet), and I use it quite frequently. I have not, however, used it for pipe-delimited files before.
Please leave a comment if I am unclear in my explanation, or if you require more code.
EDIT See the two lines out of the pipe-delimited text file below. The first line parses normally. The second drops the "651111100" and "654444475". The strangest thing is that these lines parse totally fine when I open the file directly in Excel and use Text to Columns.
08/31/2015|000|000|Recital #5546|0000000012|88885463|123334563
08/31/2015|000|000|DII #7412|651111100|654444475|00000326541
I re-checked, and the issue is only that it is dropping columns, not that it is shifting them around. Sorry for any confusion.
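If it helps, splitting the problem line manually in VBA returns all seven fields, which again suggests the raw text itself is fine (this is just a throwaway check in the Immediate window, not part of the import):

Dim parts() As String
parts = Split("08/31/2015|000|000|DII #7412|651111100|654444475|00000326541", "|")
Debug.Print UBound(parts) + 1   ' prints 7, including the 651111100 and 654444475 fields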
UPDATE
I did a find/replace in the text file and put in "^" instead of "|". I then tried to run this again and I ran into the same issue!
While I am still not sure what the problem might be, the following is a usable workaround. A test run on my computer completed the import of one million rows in an 80MB file within 12 seconds. While I am sure there are better ways to do this that outperform it, you might want to give it a try in the meantime.
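The idea is simply to skip the text driver and parse the file in plain VBA. The code below is only a simplified sketch of that approach; the column count, file name, and hard-coded values are placeholders rather than my exact code:

Sub ImportPipeFile()
    Const COL_COUNT As Long = 7              ' placeholder: number of fields per row
    Dim txt As String
    Dim lines() As String, fields() As String
    Dim data() As Variant
    Dim r As Long, c As Long, f As Integer

    ' Read the whole file into memory in one go
    f = FreeFile
    Open "H:\Folder\report.txt" For Input As #f   ' placeholder file name
    txt = Input$(LOF(f), #f)
    Close #f

    lines = Split(txt, vbCrLf)

    ' Size the output array (line 0 is the header row)
    ReDim data(0 To UBound(lines), 1 To COL_COUNT)

    For r = 0 To UBound(lines)
        If Len(lines(r)) > 0 Then
            fields = Split(lines(r), "|")
            For c = 0 To UBound(fields)
                If c < COL_COUNT Then data(r, c + 1) = fields(c)
            Next c
        End If
    Next r

    ' Write headers and data to the sheet in a single assignment
    ThisWorkbook.Sheets("CashFlows").Range("A1") _
        .Resize(UBound(lines) + 1, COL_COUNT).Value = data
End Sub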