Pandas groupby and sum total of group

Posted 2019-05-10 07:36

I have a Pandas DataFrame with customer refund reasons. It contains these example data rows:

    **case_type**       **claim_type**
1   service             service
2   service             service
3   chargeback          service
4   chargeback          local_charges
5   service             supplier_service
6   chargeback          service
7   chargeback          service
8   chargeback          service
9   chargeback          service
10  chargeback          service
11  service             service_not_used
12  service             service_not_used
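
For reproducibility, the sample rows above can be rebuilt as a DataFrame roughly like this (a minimal sketch; the real data presumably has more columns):

import pandas as pd

# minimal reconstruction of the 12 sample rows shown above
df = pd.DataFrame({
    "case_type": ["service", "service", "chargeback", "chargeback", "service",
                  "chargeback", "chargeback", "chargeback", "chargeback",
                  "chargeback", "service", "service"],
    "claim_type": ["service", "service", "service", "local_charges",
                   "supplier_service", "service", "service", "service",
                   "service", "service", "service_not_used", "service_not_used"],
})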

I would like to compare the customer's reason with some sort of labeled reason. That is no problem, but I would also like to see the total number of records per group (customer reason).

case_claim_type = df[["case_type", "claim_type"]]
case_claim_type.groupby(by=["case_type", "claim_type"])["case_type"].count()

Which gives me this output, for example:

**case_type**     **claim_type**                 
service           service                         2
                  supplier_service                1
                  service_not_used                2
chargeback        service                         6
                  local_charges                   1

I would also like to have the sum of the output per case_type. Something like:

**case_type**     **claim_type**                 
service           service                         2
                  supplier_service                1
                  service_not_used                2
                  total:                          5
chargeback        service                         6
                  local_charges                   1
                  total:                          7

It doesn't necessarily have to be in this exact output format; a column with the (aggregated) totals per case_type is also fine.

2 answers
Anthone
answered 2019-05-10 08:03

You can use:

df = case_claim_type.groupby(by=["case_type", "claim_type"])["case_type"].count()
print (df)
case_type   claim_type      
chargeback  local_charges       1
            service             1
service     service             2
            supplier_service    1
Name: case_type, dtype: int64

You can create a new Series by aggregating the sum per level, then rebuild a MultiIndex with MultiIndex.from_tuples:

# sum per first index level
df1 = df.sum(level=0)
# same as
# df1 = df.groupby(level=0).sum()
new_cols = list(zip(df1.index.get_level_values(0), ['total'] * len(df1.index)))
df1.index = pd.MultiIndex.from_tuples(new_cols)
print (df1)
chargeback  total    2
service     total    3
Name: case_type, dtype: int64

Then concat both together and finally sort_index:

df2 = pd.concat([df,df1]).sort_index()
print (df2)
case_type   claim_type      
chargeback  local_charges       1
            service             1
            total               2
service     service             2
            supplier_service    1
            total               3
Name: case_type, dtype: int64
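
Note: newer pandas versions deprecate and eventually remove the level argument of Series.sum, so the groupby(level=0) form from the comment above is the one to use there. A sketch of the same three steps with it, reusing the df Series from the first snippet:

# sum per first index level without Series.sum(level=...)
df1 = df.groupby(level=0).sum()
# append a second index level holding the 'total' label
df1.index = pd.MultiIndex.from_product([df1.index, ['total']])
# concatenate with the original counts and sort
df2 = pd.concat([df, df1]).sort_index()
print (df2)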
放我归山
answered 2019-05-10 08:09

Where:

df = pd.DataFrame({'case_type': ['Service'] * 20 + ['chargeback'] * 9,
                   'claim_type': ['service'] * 5 + ['local_charges'] * 5 +
                                 ['service_not_used'] * 5 + ['supplier_service'] * 5 +
                                 ['service'] * 8 + ['local_charges']})

df_out = df.groupby(by=["case_type", "claim_type"])["case_type"].count()

Let's use pd.concat, sum with the level parameter, and assign:

(pd.concat([df_out.to_frame(),
           df_out.sum(level=0).to_frame()
                 .assign(claim_type= "total")
                 .set_index('claim_type', append=True)])
  .sort_index())

Output:

                             case_type
case_type  claim_type                 
Service    local_charges             5
           service                   5
           service_not_used          5
           supplier_service          5
           total                    20
chargeback local_charges             1
           service                   8
           total                     9
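
The question also says a column with the aggregated totals per case_type would be fine. A sketch of that variant with groupby(...).transform (the column name total_per_case_type is just illustrative):

# counts per (case_type, claim_type) plus a column with the total per case_type
counts = df.groupby(["case_type", "claim_type"])["case_type"].count().to_frame("count")
counts["total_per_case_type"] = counts.groupby(level="case_type")["count"].transform("sum")
print(counts)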