I have a bottleneck in my program which is caused by the following:
```python
import numpy
A = numpy.array([10,4,6,7,1,5,3,4,24,1,1,9,10,10,18])
B = numpy.array([1,4,5,6,7,8,9])
C = numpy.array([i for i in A if i in B])
```
The expected outcome for `C` is the following: `C = [4 6 7 1 5 4 1 1 9]`
Is there a more efficient way of doing this operation?
Note that array `A` contains repeating values, and they need to be taken into account. I wasn't able to use set intersection, since taking the intersection omits the repeating values, returning just `[1,4,5,6,7,9]`.

Also note that this is only a simple demonstration; the actual array sizes can be on the order of thousands to well over millions.
We can use `np.searchsorted` for a performance boost, more so for the case when the lookup array has sorted unique values. An `assume_unique` flag makes it work both for the generic case and for the special case of `B` being unique and sorted.
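A minimal sketch of the idea, assuming the helper name `in1d_searchsorted` (my own label, not necessarily the answer's original code):

```python
import numpy as np

def in1d_searchsorted(A, B, assume_unique=False):
    # Sort and deduplicate B unless the caller guarantees it already is.
    B_ar = B if assume_unique else np.unique(B)
    # Insertion index of every element of A into the sorted lookup array.
    idx = np.searchsorted(B_ar, A)
    # Indices equal to len(B_ar) would be out of bounds; any valid index
    # works as a stand-in, since the equality check below rejects them.
    idx[idx == len(B_ar)] = 0
    # Keep the elements of A that actually match the looked-up value,
    # preserving duplicates and original order.
    return A[B_ar[idx] == A]

A = np.array([10,4,6,7,1,5,3,4,24,1,1,9,10,10,18])
B = np.array([1,4,5,6,7,8,9])
print(in1d_searchsorted(A, B, assume_unique=True))  # [4 6 7 1 5 4 1 1 9]
```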
For large arrays, it is worth timing this against the vectorized `np.in1d`-based solution (listed in two other answers) for both cases.

You can use
`np.in1d`:

`np.in1d` returns a boolean array indicating whether each value of `A` also appears in `B`. This array can then be used to index `A` and return the common values. It's not relevant to your example, but it's also worth mentioning that if
`A` and `B` each contain unique values, then `np.in1d` can be sped up by setting `assume_unique=True`.
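For instance, with the question's arrays (a sketch; on newer NumPy versions `np.isin` is the equivalent spelling):

```python
import numpy as np

A = np.array([10,4,6,7,1,5,3,4,24,1,1,9,10,10,18])
B = np.array([1,4,5,6,7,8,9])

mask = np.in1d(A, B)  # one boolean per element of A
C = A[mask]           # duplicates and original order are preserved
print(C)              # [4 6 7 1 5 4 1 1 9]

# Only valid when both arrays are duplicate-free, so not for this A:
# mask = np.in1d(A, B, assume_unique=True)
```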
You might also be interested in `np.intersect1d`, which returns an array of the unique values common to both arrays (sorted by value).

Use `numpy.in1d`:

If you check only for existence in `B` (`if i in B`), then obviously you can use a `set` for this. It doesn't matter how many fours there are in `B` as long as there is at least one. Of course you are right that you can't use two sets and an intersection. But even one `set` should improve performance, as the search complexity drops below O(n) (a set membership test is O(1) on average):
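A sketch of that change, keeping the question's list comprehension but testing membership against a set built once:

```python
import numpy

A = numpy.array([10,4,6,7,1,5,3,4,24,1,1,9,10,10,18])
B = numpy.array([1,4,5,6,7,8,9])

B_set = set(B)  # built once, in O(len(B))
# Average O(1) membership test per element of A instead of O(len(B)).
C = numpy.array([i for i in A if i in B_set])
print(C)  # [4 6 7 1 5 4 1 1 9]
```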