I'm new to pandas.
I have a very simple dataframe named dlf
with an index and two columns, about 40k rows. It is loaded like so:
d = pd.DataFrame.from_csv(csvsLocation + 'name.csv', index_col='ID', infer_datetime_format=True)
d['LAST'] = pd.to_datetime(d['LAST'], format = '%d-%b-%y')
d['FIRST'] = pd.to_datetime(d['FIRST'], format = '%d-%b-%y')
dlf = d[['LAST', 'FIRST']]
It looks something like this:
LAST FIRST
ID
1 1997-04-17 1991-10-04
3 2009-02-13 1988-07-07
5 2009-10-24 1995-12-06
6 1996-04-30 1989-03-14
Running this apply method takes 5 seconds:
year = 1997
dlf[str(year)] = dlf.apply(lambda row: 1*(year >= row['FIRST'].year and year <= row['LAST'].year), axis=1)
I need this sped up because I intend to run it hundreds of times.
I suspect the issue is in using lambda.
What have I done wrong, and/or how can I speed it up?
Solution
You can access the year via dt.year
on both date columns:
year = 1999
df[str(year)] = 1 * ((df['FIRST'].dt.year <= year) & (df['LAST'].dt.year >= year))
print(df)
Output:
LAST FIRST 1999
ID
1 1997-04-17 1991-10-14 0
3 2009-02-13 1988-07-07 1
5 2009-10-24 1995-10-06 1
6 1996-04-30 1969-03-14 0
You can also keep the result as booleans:
df[str(year)] = (df['FIRST'].dt.year <= year) & (df['LAST'].dt.year >= year)
print(df)
Output:
LAST FIRST 1999
ID
1 1997-04-17 1991-10-14 False
3 2009-02-13 1988-07-07 True
5 2009-10-24 1995-10-06 True
6 1996-04-30 1969-03-14 False
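Either way, the new column can be used directly as a mask or counted. A small illustration (the names active_1999 and n_active are just hypothetical):
active_1999 = df[df['1999']]   # rows whose FIRST/LAST range covers 1999
n_active = df['1999'].sum()    # how many such rows (True counts as 1)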
Performance
Measuring performance is always fun, but it can be tricky. If we just use our tiny example dataframe with 4 rows, the vectorized version actually looks slightly slower:
%timeit dlf[str(year)] = dlf.apply(lambda row: 1*(year >= row['FIRST'].year and year <= row['LAST'].year), axis=1)
1000 loops, best of 3: 1.27 ms per loop
%timeit df[str(year)] = 1 * ((df['FIRST'].dt.year <= year) & (df['LAST'].dt.year >= year))
100 loops, best of 3: 1.7 ms per loop
But let's have a look at 40k rows:
big = pd.concat([df] * 10000)
>>> big.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 40000 entries, 1 to 6
Data columns (total 4 columns):
LAST 40000 non-null datetime64[ns]
FIRST 40000 non-null datetime64[ns]
1999 40000 non-null bool
1997 40000 non-null int64
dtypes: bool(1), datetime64[ns](2), int64(1)
memory usage: 1.3 MB
Now we can see a significant speedup:
%timeit big[str(year)] = big.apply(lambda row: 1*(year >= row['FIRST'].year and year <= row['LAST'].year), axis=1)
1 loops, best of 3: 6.51 s per loop
%timeit big[str(year)] = 1 * ((big['FIRST'].dt.year <= year) & (big['LAST'].dt.year >= year))
100 loops, best of 3: 8.33 ms per loop
This is about 780 times faster.
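For reference, a minimal self-contained script to reproduce the comparison (the construction of df just mirrors the example data above, and exact timings will vary by machine):
import timeit
import pandas as pd

df = pd.DataFrame({
    'LAST': pd.to_datetime(['1997-04-17', '2009-02-13', '2009-10-24', '1996-04-30']),
    'FIRST': pd.to_datetime(['1991-10-04', '1988-07-07', '1995-12-06', '1989-03-14']),
}, index=[1, 3, 5, 6])
big = pd.concat([df] * 10000)   # 40k rows

year = 1997
# one run of the row-wise apply version
apply_time = timeit.timeit(
    lambda: big.apply(lambda row: 1*(year >= row['FIRST'].year and year <= row['LAST'].year), axis=1),
    number=1)
# average of 100 runs of the vectorized version
vector_time = timeit.timeit(
    lambda: 1 * ((big['FIRST'].dt.year <= year) & (big['LAST'].dt.year >= year)),
    number=100) / 100
print(apply_time, vector_time, apply_time / vector_time)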
I would pre-calculate first_year
and last_year
to simplify the comparisons:
dlf[str(year)] = ((dlf['first_year'] <= year) & (dlf['last_year'] >= year)).astype(int)
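A fuller sketch of that idea, extracting the years once and then looping over whatever years you need (the range below is only an assumed example):
dlf['first_year'] = dlf['FIRST'].dt.year
dlf['last_year'] = dlf['LAST'].dt.year
for year in range(1950, 2017):   # substitute your own list of years
    dlf[str(year)] = ((dlf['first_year'] <= year) & (dlf['last_year'] >= year)).astype(int)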
If I understood your question correctly, you are going to add multiple columns (multiple years). Here is a generic vectorized solution, so you don't need to repeat it hundreds of times:
years = [1997, 2016, 2000, 1989]
years = sorted(years)
dfy = pd.DataFrame([years] * len(df), columns=years)
df = df.join(dfy.apply(lambda x: x.between(df.FIRST.dt.year, df.LAST.dt.year)).astype(int))
df.columns = df.columns.astype(str)
Step by step:
In [160]: years = [1997, 2016, 2000, 1989]
In [161]: years = sorted(years)
In [162]: dfy = pd.DataFrame([years] * len(df), columns=years)
In [163]: dfy
Out[163]:
1989 1997 2000 2016
0 1989 1997 2000 2016
1 1989 1997 2000 2016
2 1989 1997 2000 2016
3 1989 1997 2000 2016
In [164]: dfy.apply(lambda x: x.between(df.FIRST.dt.year, df.LAST.dt.year)).astype(int)
Out[164]:
1989 1997 2000 2016
0 0 1 0 0
1 1 1 1 0
2 0 1 1 0
3 1 0 0 0
In [165]: df = df.join(dfy.apply(lambda x: x.between(df.FIRST.dt.year, df.LAST.dt.year)).astype(int))
In [166]: df.columns = df.columns.astype(str)
In [167]: df
Out[167]:
FIRST LAST 1989 1997 2000 2016
0 1991-10-04 1997-04-17 0 1 0 0
1 1988-07-07 2009-02-13 1 1 1 0
2 1995-12-06 2009-10-24 0 1 1 0
3 1989-03-14 1996-04-30 1 0 0 0
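If building the intermediate dfy frame feels heavy, the same result can be obtained with plain numpy broadcasting. This is a sketch under the same assumptions (df is the original frame with datetime FIRST/LAST columns, before the join above, and years is the sorted list):
import numpy as np

first = df['FIRST'].dt.year.to_numpy()[:, None]   # shape (n_rows, 1)
last = df['LAST'].dt.year.to_numpy()[:, None]
yrs = np.array(years)                              # shape (n_years,)
flags = ((first <= yrs) & (yrs <= last)).astype(int)
df = df.join(pd.DataFrame(flags, index=df.index, columns=[str(y) for y in years]))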