Pandas: Counting the frequency of datetime objects in a column

Published 2020-07-23 04:16

Question:

I have a column (from my original data) that I have converted from a string to a datetime-object in Pandas.

The column looks like this:

0     2012-01-15 11:10:12
1     2012-01-15 11:15:01
2     2012-01-16 11:15:12
3     2012-01-16 11:25:01
...
4     2012-01-22 11:25:11
5     2012-01-22 11:40:01
6     2012-01-22 11:40:18
7     2012-01-23 11:40:23
8     2012-01-23 11:40:23
...
9     2012-01-30 11:50:02
10    2012-01-30 11:50:41
11    2012-01-30 12:00:01
12    2012-01-30 12:00:34
13    2012-01-30 12:45:01
...
14    2012-02-05 12:45:13
15    2012-02-05 12:55:01
16    2012-02-05 12:55:01
17    2012-02-05 12:56:11
18    2012-02-05 13:10:01
...
19    2012-02-11 13:10:11
...
20    2012-02-20 13:25:02
21    2012-02-20 13:26:14
22    2012-02-20 13:30:01
...
23    2012-02-25 13:30:08
24    2012-02-25 13:30:08
25    2012-02-25 13:30:08
26    2012-02-26 13:30:08
27    2012-02-27 13:30:08
28    2012-02-27 13:30:08
29    2012-02-27 13:30:25
30    2012-02-27 13:30:25

What I would like to do is count how many times each date occurs. As you can see, I have left some dates out, but if I were to compute the counts manually for the visible values, I would have:

2012-01-15 - 2
2012-01-16 - 2
2012-01-22 - 3
2012-01-23 - 2
2012-01-30 - 5
2012-02-05 - 5
2012-02-11 - 1
2012-02-20 - 3
2012-02-25 - 3
2012-02-26 - 1
2012-02-27 - 4

This is the daily frequency that I would like to compute. So far I have tried:

df[df.str.contains(r'^\d\d\d\d-\d\d-\d\d')].value_counts()

I know it fails because these are not 'string' objects, but I am not sure how else to count this.

I have also looked at the .dt property, but the Pandas documentation is very verbose on these simple frequency calculations.


Also, to generalize this, how would I do the following (one possible approach is sketched at the end of this question):

  1. Roll the daily frequency up to a weekly frequency (e.g. Monday to Sunday)
  2. Roll the daily frequency up to a monthly frequency (e.g. how many times "2012-01-**" appears in my column)
  3. Apply the daily/weekly/monthly restriction to other columns (e.g. if I have a column that contains "GET requests", how many occurred daily, then weekly and then monthly)
  4. Combine the weekly restriction with another restriction (e.g. if I have a column that records "404 Not Found" responses, how many "404 Not Found" responses did I receive per week)

Perhaps the solution is a long one and I may need to do a lot of split-apply-combine, but I was led to believe that Pandas simplifies/abstracts away a lot of this work, which is why I am stuck now.

The source of this data can be considered roughly equivalent to a server log file.
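
For the four generalizations, one common direction is to bucket the timestamps into pandas Period objects (weekly or monthly) and count those, combining that with groupby for the per-column breakdowns. Below is a minimal sketch; the frame log and the columns timestamp and status are illustrative assumptions, not the actual data:

import pandas as pd

# Hypothetical stand-in for the server-log data described above;
# 'timestamp' is the parsed datetime column, 'status' an example extra column.
log = pd.DataFrame({
    'timestamp': pd.to_datetime([
        '2012-01-15 11:10:12', '2012-01-15 11:15:01',
        '2012-02-05 12:45:13', '2012-02-27 13:30:25',
    ]),
    'status': ['200 OK', '404 Not Found', '404 Not Found', '200 OK'],
})
ts = log['timestamp']

# 1. Weekly counts: 'W-SUN' periods run Monday through Sunday.
weekly = ts.dt.to_period('W-SUN').value_counts().sort_index()

# 2. Monthly counts: bucket by calendar month.
monthly = ts.dt.to_period('M').value_counts().sort_index()

# 3. Counts of another column's values per week: group on the weekly period
#    and the other column together (swap in ts.dt.date or 'M' for daily/monthly).
per_week_status = log.groupby([ts.dt.to_period('W-SUN'), 'status']).size()

# 4. '404 Not Found' per week: filter first, then count per weekly period.
weekly_404 = (log.loc[log['status'] == '404 Not Found', 'timestamp']
                 .dt.to_period('W-SUN')
                 .value_counts()
                 .sort_index())

If the timestamps are set as the index, resample('W-SUN').size() gives the same weekly counts.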

Answer 1:

You can first get the date part of the datetime, and then use value_counts:

s.dt.date.value_counts()

Small example:

In [12]: s = pd.Series(pd.date_range('2012-01-01', freq='11H', periods=6))

In [13]: s
Out[13]:
0   2012-01-01 00:00:00
1   2012-01-01 11:00:00
2   2012-01-01 22:00:00
3   2012-01-02 09:00:00
4   2012-01-02 20:00:00
5   2012-01-03 07:00:00
dtype: datetime64[ns]

In [14]: s.dt.date
Out[14]:
0    2012-01-01
1    2012-01-01
2    2012-01-01
3    2012-01-02
4    2012-01-02
5    2012-01-03
dtype: object

In [15]: s.dt.date.value_counts()
Out[15]:
2012-01-01    3
2012-01-02    2
2012-01-03    1
dtype: int64
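
One thing to note: .dt.date returns Python date objects, so the result above has object dtype. If you would rather keep the datetime64 dtype, normalizing the timestamps to midnight should give the same counts, for example:

# keeps datetime64[ns] dtype instead of object
s.dt.normalize().value_counts()

# equivalent daily counts via groupby/size, ordered by date
s.groupby(s.dt.date).size()

Both return the same three counts as Out[15], just with a different index type or ordering.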