pandas: How to limit the results of str.contains?

Published 2019-07-04 04:01

Question:

I have a DataFrame with >1M rows. I'd like to select all the rows where a certain column contains a certain substring:

matching = df['col2'].str.contains('substr', case=True, regex=False)
rows = df[matching].col1.drop_duplicates()

But this selection is slow and I'd like to speed it up. Let's say I only need the first n results. Is there a way to stop matching after getting n results? I've tried:

matching = df['col2'].str.contains('substr', case=True, regex=False).head(n)

and:

matching = df['col2'].str.contains('substr', case=True, regex=False).sample(n)

but neither is any faster, because str.contains still evaluates the entire column before head/sample is applied. The second statement (the boolean selection) is already very fast. How can I speed up the first statement?

Answer 1:

Believe it or not, the .str accessor is slow. List comprehensions often give better performance.

import numpy as np
import pandas as pd

df = pd.DataFrame({'col2': np.random.choice(['substring','midstring','nostring','substrate'], 100000)})

Test for equality

all(df['col2'].str.contains('substr', case=True, regex=False) ==
    pd.Series(['substr' in i for i in df['col2']]))

Output:

True

Timings:

%timeit df['col2'].str.contains('substr', case=True, regex=False)
10 loops, best of 3: 37.9 ms per loop

versus

%timeit pd.Series(['substr' in i for i in df['col2']])
100 loops, best of 3: 19.1 ms per loop

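Neither answer actually stops scanning once n matches are found, which is what the question asks for. One hedged option (not from either answer, and only worth it when matches appear early in the column) is to drive a plain-Python generator with itertools.islice, which short-circuits as soon as n matching positions have been yielded. A minimal sketch on a small made-up frame:

```python
from itertools import islice

import pandas as pd

# Hypothetical small frame standing in for the >1M-row original.
df = pd.DataFrame({'col1': range(24),
                   'col2': ['substring', 'nostring', 'substrate', 'midstring'] * 6})

n = 3
# The generator yields positional indices of matching rows; islice stops
# after the n-th hit, so rows past that point are never examined.
positions = list(islice(
    (i for i, s in enumerate(df['col2']) if 'substr' in s), n))
rows = df['col1'].iloc[positions]
```

If the n-th match tends to occur near the end of the column, the vectorized mask will still win, since the generator pays Python-level overhead per row.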

Answer 2:

You can speed it up with:

matching = df['col2'].head(n).str.contains('substr', case=True, regex=False)
rows = df['col1'].head(n)[matching]

However, this solution retrieves the matches found within the first n rows, not the first n matching results.

If you actually want the first n matching results, use:

rows = df['col1'][df['col2'].str.contains("substr")].head(n)

But of course this option is much slower.
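To make the distinction concrete, here is a tiny made-up frame (the data is illustrative, not from the question) where the two selections return different results:

```python
import pandas as pd

df = pd.DataFrame({'col1': list('abcdef'),
                   'col2': ['x', 'substr', 'x', 'substr', 'substr', 'x']})
n = 2

# Matches *within* the first n rows: only rows 0..n-1 are scanned,
# so fewer than n results may come back.
within = df['col1'].head(n)[df['col2'].head(n).str.contains('substr', regex=False)]

# The *first n matches*: the whole column is scanned, then truncated.
first_n = df['col1'][df['col2'].str.contains('substr', regex=False)].head(n)
```

Here within contains only 'b' (one match in the first two rows), while first_n contains 'b' and 'd' (the first two matches overall).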

Inspired by @ScottBoston's answer, you can use the following approach for a faster complete solution:

rows = df['col1'][pd.Series(['substr' in i for i in df['col2']])].head(n)

This is faster than the original, though not as fast as scanning only the first n rows, and it does return the first n matching results.

The test code below compares the speed and output of each solution:

import pandas as pd
import time

n = 10
a = ["Result", "from", "first", "column", "for", "this", "matching", "test", "end"]
b = ["This", "is", "a", "test", "has substr", "also has substr", "end", "of", "test"]

col1 = a*1000000
col2 = b*1000000

df = pd.DataFrame({"col1":col1,"col2":col2})

# Original option
start_time = time.time()
matching = df['col2'].str.contains('substr', case=True, regex=False)
rows = df[matching].col1.drop_duplicates()
print("--- %s seconds ---" % (time.time() - start_time))

# Faster option
start_time = time.time()
matching_fast = df['col2'].head(n).str.contains('substr', case=True, regex=False)
rows_fast = df['col1'].head(n)[matching_fast]
print("--- %s seconds for fast solution ---" % (time.time() - start_time))


# Other option
start_time = time.time()
rows_other = df['col1'][df['col2'].str.contains("substr")].head(n)
print("--- %s seconds for other solution ---" % (time.time() - start_time))

# Complete option
start_time = time.time()
rows_complete = df['col1'][pd.Series(['substr' in i for i in df['col2']])].head(n)
print("--- %s seconds for complete solution ---" % (time.time() - start_time))

This would output:

>>> 
--- 2.33899998665 seconds ---
--- 0.302999973297 seconds for fast solution ---
--- 4.56700015068 seconds for other solution ---
--- 1.61599993706 seconds for complete solution ---

And the resulting Series would be:

>>> rows
4     for
5    this
Name: col1, dtype: object
>>> rows_fast
4     for
5    this
Name: col1, dtype: object
>>> rows_other
4      for
5     this
13     for
14    this
22     for
23    this
31     for
32    this
40     for
41    this
Name: col1, dtype: object
>>> rows_complete
4      for
5     this
13     for
14    this
22     for
23    this
31     for
32    this
40     for
41    this
Name: col1, dtype: object