I'm in the process of trying to switch from R to Python (mainly for reasons of general flexibility). With NumPy, matplotlib and IPython, I'm able to cover all my use cases save for merging 'datasets'. I would like to simulate SQL's join clause (inner, outer, full) purely in Python. R handles this with the 'merge' function.
I've tried numpy.lib.recfunctions' join_by, but it has critical issues with duplicates along the 'key':
    join_by(key, r1, r2, jointype='inner', r1postfix='1', r2postfix='2',
            defaults=None, usemask=True, asrecarray=False)

Join arrays r1 and r2 on key key. The key should be either a string or a sequence of strings corresponding to the fields used to join the array. An exception is raised if the key field cannot be found in the two input arrays. Neither r1 nor r2 should have any duplicates along key: the presence of duplicates will make the output quite unreliable. Note that duplicates are not looked for by the algorithm.
source: http://presbrey.mit.edu:1234/numpy.lib.recfunctions.html
Any pointers or help will be most appreciated!
Suppose you represent the equivalent of a SQL table, in Python, as a list of dicts, all dicts having the same (assume string) keys (other representations, including those enabled by numpy, can be logically boiled down to an equivalent form). Now, an inner join is (again, from a logical point of view) a projection of their Cartesian product. In the general case, taking a predicate argument on (which takes two arguments, one "record" [[dict]] from each table, and returns a true value if the two records need to be joined), a simple approach would be (using per-table prefixes to disambiguate, against the risk that the two tables might otherwise have homonymous "fields"):
def inner_join(tab1, tab2, prefix1, prefix2, on):
    for r1 in tab1:
        for r2 in tab2:
            if on(r1, r2):
                row = dict((prefix1 + k1, v1) for k1, v1 in r1.items())
                row.update((prefix2 + k2, v2) for k2, v2 in r2.items())
                yield row
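For example, with an equality predicate on a shared key (the sample tables here are illustrative, not from the question; the definition is repeated so the snippet runs on its own):

```python
def inner_join(tab1, tab2, prefix1, prefix2, on):
    # Same generator as above, repeated so this snippet is self-contained.
    for r1 in tab1:
        for r2 in tab2:
            if on(r1, r2):
                row = dict((prefix1 + k1, v1) for k1, v1 in r1.items())
                row.update((prefix2 + k2, v2) for k2, v2 in r2.items())
                yield row

# Illustrative tables -- note the duplicate key 1 in orders.
employees = [{'id': 1, 'name': 'Ann'}, {'id': 2, 'name': 'Bob'}]
orders = [{'id': 1, 'total': 250}, {'id': 1, 'total': 75}]

rows = list(inner_join(employees, orders, 'e_', 'o_',
                       on=lambda r1, r2: r1['id'] == r2['id']))
# Ann joins both of her orders; Bob, with no match, is simply absent,
# exactly as in a SQL inner join -- duplicate keys pose no problem here.
```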
Now, of course you don't want to do it this way, because performance is O(M * N) -- but, for the generality you've specified ("simulate SQL's join clause (inner, outer, full)"), there is really no alternative, because the ON clause of a JOIN is pretty unrestricted.
For outer and full joins, you need in addition to keep info identifying which records [[from one or both tables]] have not been yielded yet, and otherwise yield -- e.g. for a left join you'd add a bool, reset to yielded = False before the for r2 inner loop, set to True if the yield executes, and, after the inner loop, if not yielded, produce an artificial joined record (presumably using None to stand for NULL in place of the missing v2 values, since there's no r2 to actually use for the purpose).
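The left-join variant just described might be sketched like this (a minimal sketch; the name left_join and the sample data are my own, not from the question):

```python
def left_join(tab1, tab2, prefix1, prefix2, on):
    # Collect tab2's field names up front so unmatched rows can be
    # padded with None (standing in for SQL's NULL).
    fields2 = set(k for r2 in tab2 for k in r2)
    for r1 in tab1:
        yielded = False          # reset before scanning tab2 for this r1
        for r2 in tab2:
            if on(r1, r2):
                yielded = True   # set whenever the yield executes
                row = dict((prefix1 + k1, v1) for k1, v1 in r1.items())
                row.update((prefix2 + k2, v2) for k2, v2 in r2.items())
                yield row
        if not yielded:
            # No match found: emit an artificial joined record.
            row = dict((prefix1 + k1, v1) for k1, v1 in r1.items())
            row.update((prefix2 + k2, None) for k2 in fields2)
            yield row

# Illustrative data: Cy has no matching order.
people = [{'id': 1, 'name': 'Ann'}, {'id': 3, 'name': 'Cy'}]
orders = [{'id': 1, 'total': 250}]
rows = list(left_join(people, orders, 'p_', 'o_',
                      on=lambda r1, r2: r1['id'] == r2['id']))
```

A right join is the same with the table arguments swapped; a full join additionally tracks which tab2 records never matched and emits them padded on the tab1 side.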
To get any substantial efficiency improvement, you need to clarify what constraints you're willing to abide by regarding the on predicate and the tables -- we already know from your question that you can't live with a unique constraint on either table's keys, but there are many other constraints that could potentially help, and having us guess at which such constraints actually apply in your case would be a pretty unproductive endeavor.
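For instance, if the on predicate is restricted to plain equality on a named key field (an equi-join), one can drop to roughly O(M + N) by indexing one table in a dict first. A sketch under that assumed constraint (equi_join and the data are illustrative names):

```python
from collections import defaultdict

def equi_join(tab1, tab2, prefix1, prefix2, key):
    # Index tab2 by key once: O(N) to build, O(1) expected lookup per row.
    # A list per key keeps duplicate keys fully supported.
    index = defaultdict(list)
    for r2 in tab2:
        index[r2[key]].append(r2)
    for r1 in tab1:
        for r2 in index.get(r1[key], ()):
            row = dict((prefix1 + k1, v1) for k1, v1 in r1.items())
            row.update((prefix2 + k2, v2) for k2, v2 in r2.items())
            yield row

people = [{'id': 1, 'name': 'Ann'}, {'id': 2, 'name': 'Bob'}]
orders = [{'id': 1, 'total': 250}, {'id': 1, 'total': 75}]
rows = list(equi_join(people, orders, 'p_', 'o_', key='id'))
```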
Revisiting this question I originally asked...
The pandas library is a perfect solution. It supplies a DataFrame class and a merge method that handles inner, outer, left and right joins, duplicates along the key included.
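For example (assuming pandas is installed; the column names are illustrative):

```python
import pandas as pd

left = pd.DataFrame({'key': [1, 1, 2], 'lval': ['a', 'b', 'c']})
right = pd.DataFrame({'key': [1, 3], 'rval': ['x', 'y']})

# Duplicate keys are handled: key 1 appears twice on the left,
# so the inner join produces two matched rows.
inner = left.merge(right, on='key', how='inner')

# how='left', how='right', and how='outer' cover the other join types;
# the outer join keeps the unmatched keys 2 and 3 with NaN fill-ins.
outer = left.merge(right, on='key', how='outer')
```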