I have a data frame like this:

import pandas as pd

lvl1 = ['l1A', 'l1A', 'l1B', 'l1C', 'l1D']
lvl2 = ['l2A', 'l2A', 'l2A', 'l26', 'l27']
wgt = [.2, .3, .15, .05, .3]
lvls = [lvl1, lvl2]
df = pd.DataFrame(wgt, lvls).reset_index()
df.columns = ['lvl' + str(i) for i in range(1, 3)] + ['wgt']
df
lvl1 lvl2 wgt
0 l1A l2A 0.20
1 l1A l2A 0.30
2 l1B l2A 0.15
3 l1C l26 0.05
4 l1D l27 0.30
I want to get the average weight at each level and add the averages as separate columns to this data frame. For two levels I can do it like this:

pd.concat([df,
           df.groupby('lvl1')[['wgt']].transform('mean').add_suffix('_l1avg'),
           df.groupby('lvl2')[['wgt']].transform('mean').add_suffix('_l2avg')],
          axis=1)
lvl1 lvl2 wgt wgt_l1avg wgt_l2avg
0 l1A l2A 0.20 0.25 0.216667
1 l1A l2A 0.30 0.25 0.216667
2 l1B l2A 0.15 0.15 0.216667
3 l1C l26 0.05 0.05 0.050000
4 l1D l27 0.30 0.30 0.300000
There can be more than two levels, so I would like to do this with a variable number of level columns instead. What is the most efficient way to do this, given that the dataset will grow very large? The averages don't necessarily need to be in the same data frame; a separate n x m matrix of average weights (2 x 5 in this case) would also work.
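To illustrate, the standalone per-level lookup I mean would just be the group means, e.g. for lvl1:

df.groupby('lvl1')['wgt'].mean()
lvl1
l1A    0.25
l1B    0.15
l1C    0.05
l1D    0.30
Name: wgt, dtype: float64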
Use a list comprehension:

cols = ['lvl1', 'lvl2']
k = ['{}_avg'.format(x) for x in cols]
df = df.join(pd.concat([df.groupby(c)['wgt'].transform('mean') for c in cols],
                       axis=1, keys=k))
print(df)
lvl1 lvl2 wgt lvl1_avg lvl2_avg
0 l1A l2A 0.20 0.25 0.216667
1 l1A l2A 0.30 0.25 0.216667
2 l1B l2A 0.15 0.15 0.216667
3 l1C l26 0.05 0.05 0.050000
4 l1D l27 0.30 0.30 0.300000
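If the level columns always follow the lvl1, lvl2, ... naming pattern (an assumption about the real data), cols doesn't need to be hard-coded:

cols = [c for c in df.columns if c.startswith('lvl')]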
Another option is to build up the pieces in a loop, then concatenate once:

l = [df]
for x, y in enumerate(df.columns[:-1]):
    l.append(df.groupby(y)[['wgt']].transform('mean').add_suffix('_{}1avg'.format(x + 1)))
pd.concat(l, axis=1)
lvl1 lvl2 wgt wgt_11avg wgt_21avg
0 l1A l2A 0.20 0.25 0.216667
1 l1A l2A 0.30 0.25 0.216667
2 l1B l2A 0.15 0.15 0.216667
3 l1C l26 0.05 0.05 0.050000
4 l1D l27 0.30 0.30 0.300000
Here is a non-pandas solution. From the resulting dictionary, it's possible to efficiently map to columns.
from collections import defaultdict

import numpy as np
import pandas as pd

df = pd.DataFrame([['l1A', 'l2A', 0.20],
                   ['l1A', 'l2A', 0.30],
                   ['l1B', 'l2A', 0.15],
                   ['l1C', 'l26', 0.05],
                   ['l1D', 'l27', 0.30]],
                  columns=['lvl1', 'lvl2', 'wgt'])

results = defaultdict(lambda: defaultdict(float))
arr = df.values

# for each level column, compute the mean wgt per unique value,
# then map the means back onto the frame
for i in range(1, 3):
    for x in sorted(np.unique(arr[:, i-1])):
        results[i][x] = np.mean(arr[np.where(arr[:, i-1] == x)][:, 2])
    df['avg_lvl' + str(i)] = df['lvl' + str(i)].map(results[i])
# lvl1 lvl2 wgt avg_lvl1 avg_lvl2
# 0 l1A l2A 0.20 0.25 0.216667
# 1 l1A l2A 0.30 0.25 0.216667
# 2 l1B l2A 0.15 0.15 0.216667
# 3 l1C l26 0.05 0.05 0.050000
# 4 l1D l27 0.30 0.30 0.300000
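Note that results on its own already gives the separate per-level lookup the question mentions; each inner dictionary maps a level value to its average weight:

results[1]['l1A']   # 0.25
results[2]['l2A']   # 0.216667 (rounded)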
For this miniature dataset, I see the following performance for the three answers:
%timeit pandas1(df) # wen
# 10 loops, best of 3: 35 ms per loop
%timeit pandas2(df) # jezrael
# 100 loops, best of 3: 4.54 ms per loop
%timeit numpy1(df) # jp_data_analysis
# 1000 loops, best of 3: 1.88 ms per loop
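For reproducibility, here is a minimal sketch of the wrappers behind those %timeit calls (an assumption on my part: each answer factored into a function matching the names above, with the bodies taken from the answers):

import numpy as np
import pandas as pd
from collections import defaultdict

def pandas1(df):
    # wen: grow a list of transformed frames, then concat once
    l = [df]
    for x, y in enumerate(df.columns[:-1]):
        l.append(df.groupby(y)[['wgt']].transform('mean')
                   .add_suffix('_{}1avg'.format(x + 1)))
    return pd.concat(l, axis=1)

def pandas2(df):
    # jezrael: one transform per level column, joined in a single concat
    cols = ['lvl1', 'lvl2']
    k = ['{}_avg'.format(x) for x in cols]
    return df.join(pd.concat([df.groupby(c)['wgt'].transform('mean')
                              for c in cols], axis=1, keys=k))

def numpy1(df):
    # jp_data_analysis: per-value means via numpy, mapped back onto a copy
    df = df.copy()
    results = defaultdict(lambda: defaultdict(float))
    arr = df.values
    for i in range(1, 3):
        for x in sorted(np.unique(arr[:, i-1])):
            results[i][x] = np.mean(arr[np.where(arr[:, i-1] == x)][:, 2])
        df['avg_lvl' + str(i)] = df['lvl' + str(i)].map(results[i])
    return df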