In my program I need to query through metadata. I read the data into a numpy record array `A` from a CSV-like text file **without duplicate rows**:

    var1|var2|var3|var4|var5|var6
    'a1'|'b1'|'c1'|1.2|2.2|3.4
    'a1'|'b1'|'c4'|3.2|6.2|3.2
    'a2'|''|'c1'|1.4|5.7|3.8
    'a2'|'b1'|'c2'|1.2|2.2|3.4
    'a3'|''|'c2'|1.2|2.2|3.4
    'a1'|'b2'|'c4'|7.2|6.2|3.2
    ...
There are millions of rows, and the query in nested loops can run up to a billion times (mostly matching the first 3 columns), so efficiency becomes critical.
There are 3 types of queries, and the first one is the most frequent: get the rows matching one or more of the first 3 columns against given strings. For example, to match records where var1='a2' and var2='b1':

    ind = np.logical_and(A['var1'] == 'a2', A['var2'] == 'b1')

To match records where var1='a2', var2='b1' and var3='c1':

    ind = np.logical_and(np.logical_and(A['var1'] == 'a2', A['var2'] == 'b1'), A['var3'] == 'c1')
As one can see, each query compares all the elements of the columns against the given strings.
I thought a mapping could be a more efficient way of indexing, so I converted the recarray `A` to a dict `D = {'var1_var2_var3': [var4, var5, var6], ...}` and searched through the keys with `fnmatch(keys, pat)`. I'm not sure that is a better way.
Or I could make a hierarchical dict `{'var1': {'var2': {'var3': [], ...}, ...}, ...}` or an in-memory hdf5 hierarchy `/var1/var2/var3` and just try to get the item if it exists (see the sketch below). Would that be the fastest way?
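For concreteness, here is a minimal sketch of the lookup idea; the `build_index` helper, the tuple keys and the sample query are only illustrative, not code I already have:

```python
import numpy as np
from collections import defaultdict

def build_index(A):
    """Map (var1, var2, var3) -> list of row indices into A (illustrative helper)."""
    index = defaultdict(list)
    for i, row in enumerate(A):
        index[(row['var1'], row['var2'], row['var3'])].append(i)
    return index

# one O(n) pass to build, then each exact match is a single dict lookup
D = build_index(A)
rows = A[D.get(('a2', 'b1', 'c1'), [])]   # rows matching all 3 string columns
# partial matches (e.g. only var1, var2 given) still need a scan over the keys,
# e.g. with fnmatch, or separate per-column dicts
```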
The other two types of queries are not very frequent, and for them I can accept the plain numpy recarray comparison.
Get all rows whose numeric values in the latter columns fall in a specific range, e.g., get the rows where 1<var4<3 and 0<var5<3:

    ind = np.logical_and(np.logical_and(1 < A['var4'], A['var4'] < 3),
                         np.logical_and(0 < A['var5'], A['var5'] < 3))
A combination of the above two, e.g., get the rows where var2='b1', 1<var4<3 and 0<var5<3:

    ind = np.logical_and.reduce([A['var2'] == 'b1',
                                 1 < A['var4'], A['var4'] < 3,
                                 0 < A['var5'], A['var5'] < 3])
SQL could be a good way, but it looks too heavy to use a database for this small task, and I don't have the authority to install database support everywhere.
Any suggestions for a data structure for fast in-memory queries? (If a simple custom implementation is hard, `sqlite3` and `pandas.DataFrame` seem to be possible solutions, as suggested.)
With your file sample (a `b` bytes string for py3), a simple read leaves me with the double layer of quoting.
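Roughly like this (the exact `genfromtxt` options here are my guess; the point is that the single quotes survive the read):

```python
import numpy as np
from io import BytesIO

# the sample file as bytes ('b' for py3)
txt = b"""var1|var2|var3|var4|var5|var6
'a1'|'b1'|'c1'|1.2|2.2|3.4
'a1'|'b1'|'c4'|3.2|6.2|3.2
'a2'|''|'c1'|1.4|5.7|3.8
'a2'|'b1'|'c2'|1.2|2.2|3.4
'a3'|''|'c2'|1.2|2.2|3.4
'a1'|'b2'|'c4'|7.2|6.2|3.2"""

A = np.genfromtxt(BytesIO(txt), delimiter='|', names=True, dtype=None)
# the string fields keep their quotes, e.g. A['var1'][0] == b"'a1'"
```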
So I'll define a converter to strip those quotes (a `csv` reader might do it as well), and then I can write tests like the ones sketched below.
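A sketch along these lines (the converter name and exact options are my own; note the `b'...'` byte-string comparisons on py3):

```python
# continuing from the snippet above (np, BytesIO and txt already defined)
def rmquotes(astr):
    """Strip the surrounding single quotes from a byte-string field."""
    return astr.strip(b"'")

A = np.genfromtxt(BytesIO(txt), delimiter='|', names=True, dtype=None,
                  converters={0: rmquotes, 1: rmquotes, 2: rmquotes})

# whole-column boolean tests, combined with logical_and or &
ind = np.logical_and(A['var1'] == b'a1', A['var2'] == b'b1')
A[ind]                                     # the 'a1'/'b1' rows
ind = (A['var1'] == b'a2') & (A['var3'] == b'c1')
A[ind]
```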
The tests over all records of `A` are done in compiled `numpy` code, so they shouldn't be that slow.

This data could also be viewed as 2 multicolumn fields, or loaded directly with such a compound dtype. Then tests could be written as shown below.
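One way this might look; the field names `'labels'`/`'data'` and the `'S2'` string width are my assumptions based on the sample:

```python
# continuing from the converter-based read above
dt = np.dtype([('labels', 'S2', (3,)), ('data', float, (3,))])
A2 = A.view(dt)        # reinterpret var1..var3 / var4..var6 as 2 fields
# (the same dtype could in principle be used to load the file directly)

# string test on the first two label columns (var1, var2)
ind_s = np.logical_and(A2['labels'][:, 0] == b'a2', A2['labels'][:, 1] == b'b1')
# range test on the first data column (var4)
ind_r = np.logical_and(A2['data'][:, 0] > 1, A2['data'][:, 0] < 3)
A2[np.logical_and(ind_s, ind_r)]
```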
Sometimes it might be clearer to give the individual columns their own identifiers:
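For example (the variable names are my own choice):

```python
v1, v2, v4, v5 = A['var1'], A['var2'], A['var4'], A['var5']

ind1 = (v1 == b'a2') & (v2 == b'b1')                               # string match
ind3 = (v2 == b'b1') & (1 < v4) & (v4 < 3) & (0 < v5) & (v5 < 3)   # combined query
```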
The boolean masks from repeated queries, or combinations of them, could be saved and reused.
I believe `pandas` stores its series in `numpy` arrays, with a different dtype for each column (and object dtype if the type varies within a column). But I haven't seen much discussion of `pandas` speed and speed tricks. I don't expect much speed improvement unless it provides some sort of indexing.

I can imagine writing this data to a database.
`sqlite3` is built in and has an in-memory (`:memory:`) mode, so you don't need file access. But I'm sufficiently out of practice with that code that I'll pass on demonstrating it. Nor do I have a sense of how easy or fast it is to do these sorts of queries.

https://mail.scipy.org/pipermail/scipy-user/2007-August/013350.html has some code that can save a structured array to a sqlite3 database. It includes a function that converts a `dtype` into a table creation statement.

====================
I've got that pipermail example working with python3. The test example has 11 fields. With 5000 records, a boolean field test on the structured array is about 6x faster than a corresponding sqlite3 query (with an existing cursor).
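A rough sketch of the kind of comparison described, using this question's columns instead of the 11-field test file (the table name and helper are my own):

```python
import sqlite3
import numpy as np

# build an in-memory table mirroring the structured array A from above
con = sqlite3.connect(':memory:')
cur = con.cursor()
cur.execute("CREATE TABLE tab (var1 TEXT, var2 TEXT, var3 TEXT, "
            "var4 REAL, var5 REAL, var6 REAL)")

def to_sql_row(rec):
    # decode the byte-string labels so sqlite3 stores TEXT rather than BLOBs
    return (rec['var1'].decode(), rec['var2'].decode(), rec['var3'].decode(),
            float(rec['var4']), float(rec['var5']), float(rec['var6']))

cur.executemany("INSERT INTO tab VALUES (?,?,?,?,?,?)", (to_sql_row(r) for r in A))
con.commit()

# numpy side: one vectorized comparison over the whole columns
rows_np = A[np.logical_and(A['var1'] == b'a2', A['var2'] == b'b1')]

# sqlite3 side: reusing the existing cursor
rows_sql = cur.execute("SELECT * FROM tab WHERE var1=? AND var2=?",
                       ('a2', 'b1')).fetchall()
```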
Use Pandas, it is built for tasks like this:
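A sketch of the usual pandas pattern for these queries (the file name `'data.txt'` and the MultiIndex choice are my own):

```python
import pandas as pd

# '|'-separated file; quotechar strips the single quotes around the strings
df = pd.read_csv('data.txt', sep='|', quotechar="'")

# plain boolean queries, analogous to the numpy versions
df[(df['var1'] == 'a2') & (df['var2'] == 'b1')]
df[(df['var2'] == 'b1') & (df['var4'] > 1) & (df['var4'] < 3)
   & (df['var5'] > 0) & (df['var5'] < 3)]

# for many repeated exact-match lookups, index on the first 3 columns
dfi = df.set_index(['var1', 'var2', 'var3']).sort_index()
dfi.loc[('a2', 'b1', 'c1')]      # label-based lookup on all 3 keys
dfi.loc[('a2', 'b1')]            # partial match on the leading levels
```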
More information on indexing and selecting is in the pandas documentation under "Indexing and selecting data".