Best method for reading newline delimited files and discarding the newlines?

Published 2019-01-07 05:30

Question:

I am trying to determine the best way to handle getting rid of newlines when reading in newline delimited files in Python.

What I've come up with is the following code, including some throwaway code to test it.

def getfile(filename, results):
    f = open(filename)
    filecontents = f.readlines()
    f.close()
    for line in filecontents:
        results.append(line.strip('\n'))
    return results

blahblah = []

getfile('/tmp/foo', blahblah)

for x in blahblah:
    print(x)

Suggestions?

Answer 1:

lines = open(filename).read().splitlines()
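This reads the whole file into memory at once, and splitlines() drops the line endings itself, handling '\n', '\r\n', and '\r' alike. A quick illustration of that behavior (using a string literal in place of a file):

text = 'first\nsecond\r\nthird\n'
print(text.splitlines())  # ['first', 'second', 'third']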


Answer 2:

Here's a generator that does what you requested. In this case, using rstrip is sufficient and slightly faster than strip.

lines = (line.rstrip('\n') for line in open(filename))

However, you'll most likely want to get rid of trailing whitespace too:

lines = (line.rstrip() for line in open(filename))
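To see the difference, here's a small demonstration on a line with trailing spaces (illustrative values):

line = 'some data   \n'
print(repr(line.rstrip('\n')))  # 'some data   ' -- newline removed, spaces kept
print(repr(line.rstrip()))      # 'some data'    -- all trailing whitespace removed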


Answer 3:

What do you think about this approach?

with open(filename) as data:
    datalines = (line.rstrip('\r\n') for line in data)
    for line in datalines:
        ...  # do something awesome

The generator expression avoids loading the whole file into memory, and the with statement ensures the file is closed.
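For example, the placeholder could be filled in like this (a hypothetical task, counting non-empty lines, just to show the pattern in use):

with open(filename) as data:
    datalines = (line.rstrip('\r\n') for line in data)
    # the file is streamed, so only one line at a time is held in memory
    count = sum(1 for line in datalines if line)
    print(count)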



Answer 4:

for line in open('/tmp/foo'):
    print(line.strip('\n'))


Answer 5:

Just use generator expressions:

blahblah = (l.rstrip() for l in open(filename))
for x in blahblah:
    print(x)

I'd also advise against reading the whole file into memory; looping over a generator is much more efficient on big datasets.



Answer 6:

I use this:

def cleaned(aFile):
    for line in aFile:
        yield line.strip()

Then I can do things like this:

lines = list(cleaned(open("file", "r")))

Or I can extend cleaned with extra filtering to, for example, drop blank lines or skip comment lines; see the sketch below.
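For instance, an extended version might look like this (a sketch of my own; treating lines that start with '#' as comments is an assumption, not part of the original answer):

def cleaned(aFile):
    for line in aFile:
        line = line.strip()
        # drop blank lines and skip comment lines ('#' convention assumed)
        if line and not line.startswith('#'):
            yield line

with open('file', 'r') as source:
    lines = list(cleaned(source))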



Answer 7:

I'd do it like this:

with open('test.txt') as f:
    # keep only non-blank lines, with the trailing newline removed
    l = [line.rstrip('\n') for line in f if line.strip()]
print(l)