I'm trying to understand why calling count() directly on a group returns the correct answer (in this example, 2 rows in that group), while calling count() via a lambda passed to agg() returns a timestamp just after the epoch ("1970-01-01 00:00:00.000000002").
# Using groupby(lambda x: True) below just as an illustrative example;
# it will always create a single group.
import numpy as np
from pandas import DataFrame

x = DataFrame({'time': [np.datetime64('2005-02-25'), np.datetime64('2006-03-30')]}).groupby(lambda x: True)
display(x.count())
>>time
>>True 2
display(x.agg(lambda x: x.count()))
>>time
>>True 1970-01-01 00:00:00.000000002
Could this be a bug in pandas? I am using pandas 0.16.1, IPython 3.1.0, and numpy 1.9.2.
I get the same result regardless of whether I use the standard library datetime, np.datetime64, or the pandas Timestamp.
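The loop below is my own condensed illustration of what I tried, not my exact code:
import datetime
import numpy as np
import pandas as pd

for dt in (datetime.datetime(2005, 2, 25),
           np.datetime64('2005-02-25'),
           pd.Timestamp('2005-02-25')):
    frame = pd.DataFrame({'time': [dt, dt]})
    # pandas stores all three variants as a datetime64[ns] column,
    # so the coercion happens in every case
    print(frame.groupby(lambda x: True).agg(lambda s: s.count()))
>>time
>>True 1970-01-01 00:00:00.000000002
(the same output is printed for all three variants)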
EDIT (per the accepted answer from @jeff, it looks like I may need to coerce to dtype object before applying an aggregation function that doesn't return a datetime type):
import datetime

dt = [datetime.datetime(2012, 5, 1)] * 2
x = DataFrame({'time': dt})
# time2 holds the same values as dtype object, so agg() won't coerce results back to datetime64
x['time2'] = x['time'].astype(object)
display(x)
y = x.groupby(lambda x: True)
y.agg(lambda x: x.count())
>>time time2
>>True 1970-01-01 00:00:00.000000002 2
Here x is the original frame from above (without your groupby). Passing a UDF such as the lambda calls it on each Series, so the intermediate result is simply whatever the function returns.
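A minimal sketch of that step, assuming x is the plain frame from your EDIT:
x['time'].count()
>>2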
Then that result is coerced back to the original dtype of the Series. So the result is:
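A rough illustration of that coercion (my own reconstruction, not the actual groupby internals):
from pandas import Series

# the integer count 2 is reinterpreted as 2 nanoseconds since the epoch
Series([2]).astype('datetime64[ns]')
>>0   1970-01-01 00:00:00.000000002
>>dtype: datetime64[ns]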
which is exactly what you are seeing: the integer count 2 reinterpreted as 2 nanoseconds after the epoch. The point of coercing back to the original dtype is to preserve it if at all possible; not doing this would make the groupby results even more magical.
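If you just want counts and don't care about preserving the datetime dtype, calling the named aggregations directly avoids the coercion path entirely. A sketch, assuming the y from your EDIT:
y['time'].count()
>>True    2
>>Name: time, dtype: int64
y.size()
>>True    2
>>dtype: int64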