I'm working with some multi-gigabyte text files and want to do some stream processing on them using PowerShell. It's simple stuff, just parsing each line and pulling out some data, then storing it in a database.
Unfortunately, get-content | %{ whatever($_) } appears to keep the entire set of lines in memory at this stage of the pipe. It's also surprisingly slow, taking a very long time to actually read it all in.
So my question has two parts:
- How can I make it process the stream line by line and not keep the entire thing buffered in memory? I would like to avoid using up several gigs of RAM for this purpose.
- How can I make it run faster? PowerShell iterating over get-content appears to be 100x slower than a C# script.
I'm hoping there's something dumb I'm doing here, like missing a -LineBufferSize parameter or something...
If you want to use straight PowerShell, check out the code below. System.IO.File.ReadLines() is perfect for this scenario. It returns all the lines of a file, but lets you begin iterating over the lines immediately, which means it does not have to store the entire contents in memory. Requires .NET 4.0 or higher.
http://msdn.microsoft.com/en-us/library/dd383503.aspx
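A minimal example along those lines (the file path and the per-line work are placeholders for your own):

```powershell
# ReadLines() returns an IEnumerable[string], so lines are streamed lazily
# instead of being loaded into memory all at once.
[System.IO.File]::ReadLines("C:\path\to\file.txt") | ForEach-Object {
    # Replace this with your own parsing / database work.
    $_
}
```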
If you are really about to work on multi-gigabyte text files, then do not use PowerShell. Even if you find a way to read the files faster, processing a huge number of lines will be slow in PowerShell anyway, and you cannot avoid this. Even simple loops are expensive; for 10 million iterations (quite realistic in your case) the loop overhead alone is significant.
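You can measure that overhead yourself with something like the following sketch; the exact timings depend on your machine and PowerShell version:

```powershell
# Time an "empty" loop of 10 million iterations - pure loop overhead.
Measure-Command { for ($i = 0; $i -lt 10000000; ++$i) {} }

# Time the same loop doing a trivial piece of work per iteration.
Measure-Command { for ($i = 0; $i -lt 10000000; ++$i) { $i } }
```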
UPDATE: If you are still not scared, then try to use the .NET reader:
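Something along these lines, where my.log stands in for your file and the line is simply written to output where your own processing would go:

```powershell
$reader = [System.IO.File]::OpenText("my.log")
try {
    # for() with no clauses loops until we break out explicitly.
    for() {
        $line = $reader.ReadLine()
        if ($line -eq $null) { break }    # end of file reached
        # process the line; here it is simply written to output
        $line
    }
}
finally {
    $reader.Close()
}
```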
UPDATE 2: There are comments about possibly better / shorter code. There is nothing wrong with the original code with for, and it is not pseudo-code. But a shorter (the shortest?) variant of the reading loop is:
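Something like this, again with my.log as a placeholder path:

```powershell
$reader = [System.IO.File]::OpenText("my.log")
# Assign and test in one expression: loop until ReadLine() returns $null.
while ($null -ne ($line = $reader.ReadLine())) {
    $line
}
```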