Hours, Date, Day Count Calculation

Published 2019-07-28 12:37

Question:

I have a huge dataset of login logs spanning several days, with timestamps in UNIX (epoch seconds) format.

The code is supposed to group logs by start and end time and report log counts and unique ID counts.

I am trying to get stats like:

total log count per hour and unique login IDs per hour;

log counts for a chosen interval, i.e. 24 hrs, 12 hrs, 6 hrs, 1 hr, etc., and by day of the week.

I am able to split the data by start and end hours, but I am not able to compute the counts of logs and unique IDs.

Code:

from datetime import datetime,time

# This splits data from start to end time 
start = time(8,0,0)
end =   time(20,0,0)

with open('input', 'r') as infile, open('output', 'w') as outfile:
    for row in infile:
        col = row.split(',')  # fields are comma-separated
        # Columns: UserID, StartTime, StopTime, GPS1, GPS2
        t1 = datetime.fromtimestamp(float(col[1])).time()
        t2 = datetime.fromtimestamp(float(col[2])).time()
        print(t1 >= start and t2 <= end)
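As a minimal sketch of the counting part (not the asker's code), the logs can be bucketed by the hour of their start timestamp with plain dictionaries. This assumes UTC and assigns each log to the bucket containing its StartTime:

```python
from collections import defaultdict
from datetime import datetime, timezone

# A few sample rows in the question's format:
# UserID, StartTime, StopTime, GPS1, GPS2
rows = [
    "00022d9064bc,1073260801,1073260803,819251,440006",
    "00022d9064bc,1073260803,1073260810,819213,439954",
    "00904b4557d3,1073260803,1073261920,817526,439458",
]

log_count = defaultdict(int)    # hour bucket -> number of logs
unique_ids = defaultdict(set)   # hour bucket -> set of user IDs

for row in rows:
    user, start_ts, *_ = row.split(',')
    # Floor the start time to the hour (UTC) to pick the bucket
    start = datetime.fromtimestamp(int(start_ts), tz=timezone.utc)
    bucket = start.replace(minute=0, second=0, microsecond=0)
    log_count[bucket] += 1
    unique_ids[bucket].add(user)

for bucket in sorted(log_count):
    print(bucket, log_count[bucket], len(unique_ids[bucket]))
```

For the three sample rows this yields one bucket with 3 logs and 2 unique IDs.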

Input data format: the data has no headers; the fields are given below. The number of days in the input is not known in advance.

UserID, StartTime, StopTime, GPS1, GPS2
00022d9064bc,1073260801,1073260803,819251,440006
00022d9064bc,1073260803,1073260810,819213,439954
00904b4557d3,1073260803,1073261920,817526,439458
00022de73863,1073260804,1073265410,817558,439525
00904b14b494,1073260804,1073262625,817558,439525
00022d1406df,1073260807,1073260809,820428,438735
00022d9064bc,1073260801,1073260803,819251,440006
00022dba8f51,1073260801,1073260803,819251,440006
00022de1c6c1,1073260801,1073260803,819251,440006
003065f30f37,1073260801,1073260803,819251,440006
00904b48a3b6,1073260801,1073260803,819251,440006
00904b83a0ea,1073260803,1073260810,819213,439954
00904b85d3cf,1073260803,1073261920,817526,439458
00904b14b494,1073260804,1073265410,817558,439525
00904b99499c,1073260804,1073262625,817558,439525
00904bb96e83,1073260804,1073265163,817558,439525
00904bf91b75,1073260804,1073263786,817558,439525

Expected output (example):

StartTime, EndTime, Day, LogCount, UniqueIDCount

00:00:00, 01:00:00, Mon, 349, 30  

StartTime and EndTime should be in human-readable format.

Filtering the data by a time range is already achieved; now I am trying to round off the times and calculate the counts of logs and unique IDs. A solution with Pandas is also welcome.

Edit One: some more details.

StartTime         --> EndTime
1/5/2004, 5:30:01 --> 1/5/2004, 5:30:03

This log falls between 5:00:00 and 6:00:00, so it should be counted in that hour's bucket. The count of all logs in each time range is what I am trying to find. Similarly for other ranges, like:

5:00:00 --> 6:00:00 Hourly Count 
00:00:00 --> 6:00:00 Every 6 hours 
00:00:00 --> 12:00:00 Every 12 hours 

5 Jan 2004, Mon --> count 
6 Jan 2004, Tue --> Count

And so on. I am looking for a generic program where I can change the time/hour range as needed.
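One hedged sketch of such a generic program: if it is acceptable to assign each log to the bucket that contains its StartTime (ignoring logs that span a bucket boundary), pandas can produce the whole report with a single groupby, and the `freq` string is the only knob to change for hourly, 6-hourly, 12-hourly, or daily counts:

```python
import io

import pandas as pd

# A small in-memory sample of the question's data (no header);
# in practice this would be the CSV file path
data = io.StringIO(
    "00022d9064bc,1073260801,1073260803,819251,440006\n"
    "00022d9064bc,1073260803,1073260810,819213,439954\n"
    "00904b4557d3,1073260803,1073261920,817526,439458\n"
)
cols = ['UserID', 'StartTime', 'StopTime', 'GPS1', 'GPS2']
df = pd.read_csv(data, header=None, names=cols)

# Convert epoch seconds to (naive UTC) datetimes
df['start_dt'] = pd.to_datetime(df.StartTime, unit='s')

freq = '1h'  # change to '6h', '12h', 'D', etc. as needed
report = (df.groupby(df.start_dt.dt.floor(freq))
            .agg(LogCount=('UserID', 'size'),
                 UniqueIDCount=('UserID', 'nunique')))
report['Day'] = report.index.day_name().str[:3]
print(report)
```

Note this counts each log once, in its start bucket only; the answer below instead tests interval overlap, so a log spanning two hours contributes to both.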

Answer 1:

Unfortunately I couldn't find any elegant solution.

Here is my attempt:

import numpy as np
import pandas as pd

fn = r'D:\temp\.data\dart_small.csv'
cols = ['UserID','StartTime','StopTime','GPS1','GPS2']
df = pd.read_csv(fn, header=None, names=cols)

df['m'] = df.StopTime + df.StartTime
df['d'] = df.StopTime - df.StartTime

# 'start' and 'end' for the reporting DF: `r`
# which will contain equal intervals (1 hour in this case)
start = pd.to_datetime(df.StartTime.min(), unit='s').date()
end = pd.to_datetime(df.StopTime.max(), unit='s').date() + pd.Timedelta(days=1)

# building reporting DF: `r`
freq = '1h'  # 1-hour frequency
idx = pd.date_range(start, end, freq=freq)
r = pd.DataFrame(index=idx)
r['start'] = (r.index - pd.Timestamp('1970-01-01')).total_seconds().astype(np.int64)

# 1 hour in seconds, minus one second (so that we will not count it twice)
interval = 60*60 - 1

r['LogCount'] = 0
r['UniqueIDCount'] = 0


for i, row in r.iterrows():
    # intervals overlap test
    # https://en.wikipedia.org/wiki/Interval_tree#Overlap_test
    # i've slightly simplified the calculations of m and d
    # by getting rid of the division by 2,
    # because it cancels out of the common terms
    u = df[np.abs(df.m - 2*row.start - interval) < df.d + interval].UserID
    r.loc[i, ['LogCount', 'UniqueIDCount']] = [len(u), u.nunique()]

r['Day'] = pd.to_datetime(r.start, unit='s').dt.day_name().str[:3]
r['StartTime'] = pd.to_datetime(r.start, unit='s').dt.time
r['EndTime'] = pd.to_datetime(r.start + interval + 1, unit='s').dt.time

print(r[r.LogCount > 0])
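For reference, the overlap test inside the loop can be checked in isolation: two intervals [a1, b1] and [a2, b2] intersect iff the distance between their doubled midpoints is less than the sum of their lengths. This hypothetical helper only illustrates that identity and is not part of the answer's code:

```python
def overlaps(a1, b1, a2, b2):
    # [a1, b1] and [a2, b2] intersect iff |m1 - m2| < d1 + d2,
    # where m = a + b (doubled midpoint) and d = b - a (length);
    # the factor of 2 cancels on both sides of the inequality
    m1, d1 = a1 + b1, b1 - a1
    m2, d2 = a2 + b2, b2 - a2
    return abs(m1 - m2) < d1 + d2

print(overlaps(0, 10, 5, 15))   # True
print(overlaps(0, 10, 20, 30))  # False
```

The strict inequality means intervals that merely touch at an endpoint do not count as overlapping, which is why the code above shaves one second off `interval`.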

PS: the fewer periods you have in the reporting DF r, the faster it will run. So you may want to drop rows (times) if you know beforehand that those timeframes won't contain any data (for example weekends, holidays, etc.).

Result:

                          start  LogCount  UniqueIDCount  Day StartTime   EndTime
2004-01-05 00:00:00  1073260800        24             15  Mon  00:00:00  01:00:00
2004-01-05 01:00:00  1073264400         5              5  Mon  01:00:00  02:00:00
2004-01-05 02:00:00  1073268000         3              3  Mon  02:00:00  03:00:00
2004-01-05 03:00:00  1073271600         3              3  Mon  03:00:00  04:00:00
2004-01-05 04:00:00  1073275200         2              2  Mon  04:00:00  05:00:00
2004-01-06 12:00:00  1073390400        22             12  Tue  12:00:00  13:00:00
2004-01-06 13:00:00  1073394000         3              2  Tue  13:00:00  14:00:00
2004-01-06 14:00:00  1073397600         3              2  Tue  14:00:00  15:00:00
2004-01-06 15:00:00  1073401200         3              2  Tue  15:00:00  16:00:00
2004-01-10 16:00:00  1073750400        20             11  Sat  16:00:00  17:00:00
2004-01-14 23:00:00  1074121200       218             69  Wed  23:00:00  00:00:00
2004-01-15 00:00:00  1074124800        12             11  Thu  00:00:00  01:00:00
2004-01-15 01:00:00  1074128400         1              1  Thu  01:00:00  02:00:00