Building on this answer, and given that
>>> df
  columnA  columnB  columnC
0    cat1        3      400
1    cat1        2       20
2    cat1        5     3029
3    cat2        1      492
4    cat2        4       30
5    cat3        2      203
6    cat3        6      402
7    cat3        4      391
>>> df.groupby(['columnA']).agg({'columnA':'size','columnB':'min'}).rename(columns={'columnA':'size'})
         size  min
columnA
cat1        3    2
cat2        2    1
cat3        3    2
I want to obtain a DataFrame that also contains the value of columnC from the same row as the displayed minimum of columnB, that is:
         size  min  columnC
columnA
cat1        3    2       20
cat2        2    1      492
cat3        3    2      203
Of course, this is only possible for aggregating functions (like min or max) that 'pick' a value from the group, rather than ones that 'aggregate' over it (like sum or average).
Any clue?
Thanks in advance.
Since the result you are looking for is essentially a join on ['columnA', 'columnB'], you can obtain the desired DataFrame using

result = pd.merge(result, df, on=['columnA', 'columnB'], how='left')

provided we set up result with the right column names:
import pandas as pd

df = pd.DataFrame(
    {'columnA': ['cat1', 'cat1', 'cat1', 'cat2', 'cat2', 'cat3', 'cat3', 'cat3'],
     'columnB': [3, 2, 5, 1, 4, 2, 6, 4],
     'columnC': [400, 20, 3029, 492, 30, 203, 402, 391]})

# group size and per-group minimum of columnB
result = df.groupby('columnA').agg({'columnA': 'size', 'columnB': 'min'})
result = result.rename(columns={'columnA': 'size'})
result = result.reset_index()
# join back against df to pick up columnC from the rows holding the minima
result = pd.merge(result, df, on=['columnA', 'columnB'], how='left')
result = result.set_index('columnA')
result = result.rename(columns={'columnB': 'min'})
print(result)
yields
         size  min  columnC
columnA
cat1        3    2       20
cat2        2    1      492
cat3        3    2      203
One reason you might want to use pd.merge instead of groupby/apply is that groupby/apply calls a function once for each group. If there are a lot of groups, this can be slow. For example, with a 10000-row DataFrame containing 1000 groups,
import numpy as np
import pandas as pd

N = 10000
df = pd.DataFrame(
    {'columnA': np.random.choice(['cat{}'.format(i) for i in range(N//10)],
                                 size=N),
     'columnB': np.random.randint(10, size=N),
     'columnC': np.random.randint(100, size=N)})
then using_merge (below) is ~250x faster than using_apply:
def using_merge(df):
    result = df.groupby('columnA').agg({'columnA': 'size', 'columnB': 'min'})
    result = result.rename(columns={'columnA': 'size'})
    result = result.reset_index()
    result = pd.merge(result, df, on=['columnA', 'columnB'], how='left')
    result = result.set_index('columnA')
    result = result.rename(columns={'columnB': 'min'})
    return result

def using_apply(df):
    return (df.groupby('columnA')
              .apply(lambda g: (g[g.columnB == g.columnB.min()]
                                .assign(size=g.columnA.size)
                                .rename(columns={'columnB': 'min'})
                                .drop(columns='columnA')))
              .reset_index(level=1, drop=True))
In [80]: %timeit using_merge(df)
100 loops, best of 3: 7.99 ms per loop
In [81]: %timeit using_apply(df)
1 loop, best of 3: 2.06 s per loop
In [82]: 2060/7.99
Out[82]: 257.8222778473091
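To convince yourself that the two functions agree, here is a quick check on the small example frame from above (a sketch; it assumes no ties on the per-group minimum, since ties make both functions return several rows per group):

small = pd.DataFrame(
    {'columnA': ['cat1', 'cat1', 'cat1', 'cat2', 'cat2', 'cat3', 'cat3', 'cat3'],
     'columnB': [3, 2, 5, 1, 4, 2, 6, 4],
     'columnC': [400, 20, 3029, 492, 30, 203, 402, 391]})

# reindex the columns so the comparison ignores column order
cols = ['size', 'min', 'columnC']
print(using_merge(small)[cols].equals(using_apply(small)[cols]))  # True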
You can use idxmin to pull out the labels of the rows where each group's minimum occurs (since these are labels, loc is the safe accessor; it coincides with positions here only because df has the default RangeIndex):
In [11]: g = df.groupby(['columnA'])

In [12]: res = g.agg({'columnA': 'size', 'columnB': 'min'})

In [13]: g['columnB'].idxmin()
Out[13]:
columnA
cat1    1
cat2    3
cat3    5
Name: columnB, dtype: int64

In [14]: df["columnC"].loc[g['columnB'].idxmin()]
Out[14]:
1     20
3    492
5    203
Name: columnC, dtype: int64
You can append this as a column to res:

In [15]: res["columnC"] = df["columnC"].loc[g['columnB'].idxmin()].values
In [16]: res
Out[16]:
         columnA  columnB  columnC
columnA
cat1           3        2       20
cat2           2        1      492
cat3           3        2      203
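Note that res still shows the columnA/columnB headers here; to match the size/min names asked for in the question, one more rename should do it:

In [17]: res.rename(columns={'columnA': 'size', 'columnB': 'min'})
Out[17]:
         size  min  columnC
columnA
cat1        3    2       20
cat2        2    1      492
cat3        3    2      203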