I'm trying to determine the best way to get rid of newlines when reading newline-delimited files in Python.
What I've come up with is the following code, including some throwaway code to test it.
import os

def getfile(filename, results):
    f = open(filename)
    filecontents = f.readlines()
    for line in filecontents:
        foo = line.strip('\n')
        results.append(foo)
    return results

blahblah = []
getfile('/tmp/foo', blahblah)
for x in blahblah:
    print(x)
Suggestions?
lines = open(filename).read().splitlines()
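As a quick illustration of that one-liner (the sample string here is made up), splitlines() handles both '\n' and '\r\n' endings and drops them:

print('foo\nbar\r\nbaz\n'.splitlines())   # ['foo', 'bar', 'baz']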
Here's a generator that does what you requested. In this case, using rstrip is sufficient and slightly faster than strip.
lines = (line.rstrip('\n') for line in open(filename))
However, you'll most likely want to use this to get rid of trailing whitespace too.
lines = (line.rstrip() for line in open(filename))
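If it helps, a small sketch of the difference (the sample string is made up): rstrip('\n') removes only the newline, while rstrip() removes all trailing whitespace:

s = 'some text \t\r\n'
print(repr(s.rstrip('\n')))   # 'some text \t\r'
print(repr(s.rstrip()))       # 'some text'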
What do you think about this approach?
with open(filename) as data:
    datalines = (line.rstrip('\r\n') for line in data)
    for line in datalines:
        ...do something awesome...
The generator expression avoids loading the whole file into memory, and the with statement ensures the file is closed.
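Putting that together, a minimal runnable sketch of the pattern might look like this (the path '/tmp/foo' and the print call are placeholders for your own file and processing):

def process(filename):
    with open(filename) as data:                        # with closes the file even on errors
        datalines = (line.rstrip('\r\n') for line in data)
        for line in datalines:
            print(line)                                 # ...do something awesome...

process('/tmp/foo')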
for line in open('/tmp/foo'):
    print(line.strip('\n'))
Just use generator expressions:
blahblah = (l.rstrip() for l in open(filename))
for x in blahblah:
    print(x)
Also, I'd advise against reading the whole file into memory; looping over a generator is much more efficient on big datasets.
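To illustrate the memory point, here is a sketch that never holds more than one line at a time (the path and the counting are just for illustration):

stripped = (l.rstrip() for l in open('/tmp/foo'))
nonempty = sum(1 for line in stripped if line)   # consumes the generator lazily
print(nonempty)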
I use this
def cleaned( aFile ):
    for line in aFile:
        yield line.strip()
Then I can do things like this.
lines = list( cleaned( open("file","r") ) )
Or, I can extend cleaned with extra functions to, for example, drop blank lines or skip comment lines or whatever.
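For example, a variant of cleaned that also drops blank lines and skips comment lines might look like this (treating '#' as the comment marker is my assumption):

def cleaned(aFile):
    for line in aFile:
        line = line.strip()
        if not line:                  # drop blank lines
            continue
        if line.startswith('#'):      # skip comment lines (assumed '#' convention)
            continue
        yield line

lines = list(cleaned(open("file", "r")))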
I'd do it like this:
f = open('test.txt')
l = [line.rstrip('\n') for line in f if line.strip()]  # strip newlines and drop blank lines
f.close()
print(l)