I'm reading lines from a file in order to work with them. Each line is composed solely of floating-point numbers.
I have pretty much everything sorted out for converting the lines into arrays.
I basically do (pseudo-Python code):
lines = file.readlines()
for line in lines:
    values = line.split(' ')  # or whatever separator
    array = np.array(values)
    # and then iterate over every value, casting it to float
    for i, value in enumerate(array):
        newarray[i] = float(value)
This works, but it seems a bit counterintuitive and unpythonic. I wanted to know if there is a better way to handle the input from a file so that I end up with an array full of floats.
One possible one-liner:
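The original snippet was lost in extraction; given the note about map() that follows, it was presumably along these lines (io.StringIO stands in for a real file handle, which you would get from open()):

```python
import io

# Stand-in for a real file handle; a real script would use open("data.txt")
f = io.StringIO("1.0 2.5 3.75\n4.0 5.5 6.25\n")

# One list of floats per line; list() is needed on Python 3, where map() is lazy
rows = [list(map(float, line.split())) for line in f]
print(rows)  # [[1.0, 2.5, 3.75], [4.0, 5.5, 6.25]]
```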
Note that I used map() here instead of a nested list comprehension to aid readability.

If you want a numpy array:
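The code here was also lost; presumably it wraps the same comprehension in np.array (sample data via io.StringIO is illustrative):

```python
import io
import numpy as np

f = io.StringIO("1.0 2.5\n3.0 4.5\n")
# Wrap the per-line float lists in np.array to get a 2-D array
arr = np.array([list(map(float, line.split())) for line in f])
print(arr.shape)  # (2, 2)
```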
If you want a numpy array and each row in the text file has the same number of values:
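That description matches numpy's loadtxt; a sketch of how it was likely used (the sample data is illustrative):

```python
import io
import numpy as np

f = io.StringIO("1.0 2.0 3.0\n4.0 5.0 6.0\n")
# loadtxt parses whitespace-separated floats, one row per line,
# and requires every row to have the same number of columns
arr = np.loadtxt(f)
print(arr.shape)  # (2, 3)
```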
Without numpy:
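The pure-Python variant was presumably a nested list comprehension, something like:

```python
import io

f = io.StringIO("1.5 2.5\n3.5 4.5\n")
# Pure-Python version: a nested list comprehension, no numpy required
data = [[float(x) for x in line.split()] for line in f]
print(data)  # [[1.5, 2.5], [3.5, 4.5]]
```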
Or just:
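The exact snippet is lost; one plausible reading is the flattened variant, which ignores line structure entirely:

```python
import io

f = io.StringIO("1.0 2.0\n3.0 4.0\n")
# read() pulls the whole file in; split() ignores line breaks,
# so this yields one flat list of floats
values = [float(x) for x in f.read().split()]
print(values)  # [1.0, 2.0, 3.0, 4.0]
```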
How about the following:
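This answer's code is also missing; one common suggestion in this situation is numpy's genfromtxt, so the following is only a guess at what was proposed:

```python
import io
import numpy as np

f = io.StringIO("1.0 2.0\n3.0 4.0\n")
# genfromtxt behaves like loadtxt but tolerates missing/invalid entries (as nan)
arr = np.genfromtxt(f)
print(arr.shape)  # (2, 2)
```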
I would use regular expressions
import re
First merge the lines into one long string, then extract only the expressions corresponding to floats (the pattern '[\d.E+-]+' also covers scientific notation; use '[\d.]+' for plain float expressions).
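A minimal sketch of that approach, using the pattern quoted above on illustrative sample data:

```python
import io
import re

text = io.StringIO("1.0 -2.5E3\n4.2 7.0\n").read()
# '[\d.E+-]+' matches runs of digits, dots, signs and exponent markers
floats = [float(tok) for tok in re.findall(r'[\d.E+-]+', text)]
print(floats)  # [1.0, -2500.0, 4.2, 7.0]
```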
Quick answer:
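The quick answer's code is lost; a plausible reconstruction is to tokenize the whole file and let numpy do the casting in one call:

```python
import io
import numpy as np

f = io.StringIO("1.0 2.0\n3.0 4.0\n")
# Split the whole file into tokens and let numpy cast them all at once
arr = np.array(f.read().split(), dtype=float)
print(arr.shape)  # (4,) -- a flat 1-D array of every value
```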
If you often process this kind of data, the csv module will help.
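A sketch of the csv-based version (the space delimiter and sample data are assumptions; the original file may use another separator):

```python
import csv
import io

f = io.StringIO("1.0 2.0\n3.0 4.0\n")
reader = csv.reader(f, delimiter=' ')  # space-separated instead of commas
rows = [[float(x) for x in row] for row in reader]
print(rows)  # [[1.0, 2.0], [3.0, 4.0]]
```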
If you feel wild, you can even make this completely declarative:
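"Declarative" here presumably meant expressing the pipeline as lazy generator expressions; a sketch of that idea:

```python
import csv
import io

f = io.StringIO("1.0 2.0\n3.0 4.0\n")
reader = csv.reader(f, delimiter=' ')
# A generator expression: nothing is read or parsed until you iterate
rows = ([float(x) for x in row] for row in reader)
result = list(rows)
print(result)  # [[1.0, 2.0], [3.0, 4.0]]
```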
And if you really want your colleagues to hate you, you can make a one-liner (NOT PYTHONIC AT ALL :-):
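Something along these lines, with everything crammed into a single expression (io.StringIO stands in for open('data.txt')):

```python
import csv
import io

# Everything on one line, purely to prove it can be done
rows = [[float(x) for x in r] for r in csv.reader(io.StringIO("1 2\n3 4\n"), delimiter=' ')]
print(rows)  # [[1.0, 2.0], [3.0, 4.0]]
```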
Stripping all the boilerplate and flexibility, you can end up with a clean and quite readable one-liner. I wouldn't use it, because I like the refactoring potential of using csv, but it can be good enough. It's a grey zone here, so I wouldn't say it's Pythonic, but it's definitely handy.
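The one-liner being described was likely the bare split-and-cast comprehension, without csv at all:

```python
import io

f = io.StringIO("1.0 2.0\n3.0 4.0\n")
# The stripped-down version: split each line and cast, nothing else
floats = [[float(x) for x in line.split()] for line in f]
print(floats)  # [[1.0, 2.0], [3.0, 4.0]]
```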