How to replace values in a Pandas series `s` via a dictionary `d` has been asked and re-asked many times. The recommended method (1, 2, 3, 4) is to either use `s.replace(d)` or, occasionally, use `s.map(d)` if all your series values are found in the dictionary keys.

However, `s.replace` is often unreasonably slow, frequently 5-10x slower than a simple list comprehension. The alternative, `s.map(d)`, has good performance, but is only recommended when all keys are found in the dictionary.

Why is `s.replace` so slow, and how can performance be improved?
```python
import pandas as pd, numpy as np

df = pd.DataFrame({'A': np.random.randint(0, 1000, 1000000)})
lst = df['A'].values.tolist()

##### TEST 1 #####
d = {i: i+1 for i in range(1000)}

%timeit df['A'].replace(d)          # 1.98 s
%timeit [d[i] for i in lst]         # 134 ms

##### TEST 2 #####
d = {i: i+1 for i in range(10)}

%timeit df['A'].replace(d)          # 20.1 ms
%timeit [d.get(i, i) for i in lst]  # 243 ms
```
Note: This question is not marked as a duplicate because it is looking for specific advice on when to use different methods given different datasets. This is made explicit in the answer and is an aspect not usually addressed in other questions.
One trivial solution is to choose a method based on an estimate of how completely the series values are covered by the dictionary keys (a dispatch helper along these lines is sketched after the recommendations below).
**General case**

- `df['A'].map(d)` if all values are mapped; or
- `df['A'].map(d).fillna(df['A']).astype(int)` if more than ~5% of values are mapped (the `astype(int)` cast is needed because `fillna` upcasts integers to float).

**Few (e.g. < 5%) values in `d`**

- `df['A'].replace(d)`

The "crossover point" of ~5% is specific to the Benchmarking section below.
Interestingly, a simple list comprehension generally underperforms `map` in either scenario.

**Benchmarking**
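The exact figures depend on machine and pandas version; a sketch of the kind of benchmark behind the ~5% crossover, timing both approaches as the fraction of mapped values varies, might look like this:

```python
import timeit
import numpy as np
import pandas as pd

s = pd.Series(np.random.randint(0, 1000, 1_000_000))

for frac in (0.01, 0.05, 0.25, 1.0):
    # Dictionary covering roughly `frac` of the 1000 distinct values
    d = {i: i + 1 for i in range(int(1000 * frac))}
    t_replace = timeit.timeit(lambda: s.replace(d), number=3)
    t_map = timeit.timeit(lambda: s.map(d).fillna(s).astype(int), number=3)
    print(f'{frac:>4.0%}  replace: {t_replace:.2f}s  map+fillna: {t_map:.2f}s')
```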
**Explanation**
The reason why `s.replace` is so slow is that it does much more than simply map a dictionary. It deals with edge cases and arguably rare situations, which typically merit more care in any case.

Looking at `replace()` in `pandas\generic.py`, there appear to be many steps involved: the dictionary is split into separate key and value lists, the values are checked for being dictionaries themselves (to support nested, per-column replacements), and the work is then dispatched to more general masking and recursive replace logic.
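A quick illustration of the extra behaviour `replace` supports and `map` does not: unmatched values pass through unchanged, nested dictionaries restrict replacement to particular columns, and regular expressions are understood:

```python
import pandas as pd

s = pd.Series([0, 1, 2])
s.map({0: 10})      # unmatched values become NaN: [10.0, NaN, NaN]
s.replace({0: 10})  # unmatched values are kept:   [10, 1, 2]

# Nested dictionaries replace per column (here only in 'A')
df = pd.DataFrame({'A': ['foo', 'bar'], 'B': ['foo', 'baz']})
df.replace({'A': {'foo': 'qux'}})

# replace also accepts regular expressions
pd.Series(['ab', 'cd']).replace({r'^a.*': 'X'}, regex=True)  # ['X', 'cd']
```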
This can be compared to the much leaner code of `map()` in `pandas\series.py`, which essentially converts the dictionary into a Series, aligns the series values against its index, and takes the mapped values by position.
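As a rough illustration of what that leaner path boils down to (a sketch, not the actual pandas source): the dictionary keys become an index, the series values are aligned against it via an indexer, and the mapped values are taken by position:

```python
import numpy as np
import pandas as pd

s = pd.Series([0, 1, 2, 1])
d = {0: 10, 1: 11}

mapper = pd.Series(d)                             # dict keys become the index
indexer = mapper.index.get_indexer(s.to_numpy())  # -1 where no key matches
mapped = np.where(indexer == -1, np.nan, mapper.to_numpy()[indexer])
# mapped -> [10., 11., nan, 11.]
```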