Removing duplicates from Pandas DataFrame with condition

Published 2020-07-14 09:15

Question:

Assuming I have the following DataFrame:

 A | B
 1 | Ms
 1 | PhD
 2 | Ms
 2 | Bs

I want to remove the duplicate rows with respect to column A, retaining the row whose column B value is 'PhD'. If no 'PhD' row exists for a given A value, I want to retain the row with 'Bs' in column B instead.

I am trying to use

 df.drop_duplicates('A') 

with a condition

Answer 1:

>>> df
    A   B
0   1   Ms
1   1   Ms
2   1   Ms
3   1   Ms
4   1   PhD
5   2   Ms
6   2   Ms
7   2   Bs
8   2   PhD

Sorting a dataframe with a custom function:

def sort_df(df, column_idx, key):
    '''Takes a dataframe, a column label and a custom key function for sorting,
    returns a dataframe sorted by that column using that function'''

    col = df[column_idx]
    # Sort (value, position) pairs with the custom key, then reorder the
    # rows by position. (df.ix has been removed from pandas; use .iloc.)
    order = [i for _, i in sorted(zip(col, range(len(col))), key=key)]
    return df.iloc[order]

Our function for sorting:

# The key receives (value, position) tuples, so 'PhD' in x is tuple membership
cmp = lambda x: 2 if 'PhD' in x else 1 if 'Bs' in x else 0

In action:

sort_df(df, 'B', cmp).drop_duplicates('A', keep='last')  # take_last=True was removed; use keep='last'

    A   B
4   1   PhD
8   2   PhD
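On newer pandas (1.1+), the same sort-then-dedupe idea can be written without a helper by using the `key` parameter of `sort_values`; a minimal sketch, with the rank mapping as an assumption mirroring the answer's priorities:

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 1, 2, 2], 'B': ['Ms', 'PhD', 'Ms', 'Bs']})

# Rank degrees so the preferred one sorts last, then keep the last row per A.
rank = {'PhD': 2, 'Bs': 1, 'Ms': 0}
result = (df.sort_values('B', key=lambda s: s.map(rank))
            .drop_duplicates('A', keep='last')
            .sort_values('A'))
# result:
#    A    B
#    1  PhD
#    2   Bs
```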


Answer 2:

Assuming B values are unique for a given A value, and that every A value has a row with 'Bs' in column B:

df2 = df[df['B']=="PhD"]

will give you a dataframe with the PhD rows you want.

Then remove all the PhD and Ms from df:

df = df[df['B']=="Bs"]

Then concatenate df and df2:

df3 = pd.concat([df2, df])

Then you can use drop_duplicates like you wanted:

df3.drop_duplicates('A', inplace=True)
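Putting the steps above together as one runnable sketch (note `concat` lives in the `pandas` namespace, hence `pd.concat`), under the same assumption that every A value has a 'Bs' row:

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 1, 2, 2], 'B': ['Ms', 'PhD', 'Ms', 'Bs']})

df2 = df[df['B'] == 'PhD']       # the PhD rows we prefer
df_bs = df[df['B'] == 'Bs']      # the Bs fallback rows
df3 = pd.concat([df2, df_bs])    # PhD rows come first...
df3 = df3.drop_duplicates('A')   # ...so they survive the dedupe
```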


Answer 3:

Consider using Categoricals. They're a nice way to group / order text non-alphabetically (among other things).

import pandas as pd
df = pd.DataFrame([(1, 'Ms'), (1, 'PhD'), (2, 'Ms'), (2, 'Bs'), (3, 'PhD'),
                   (3, 'Bs'), (4, 'Ms'), (4, 'PhD'), (4, 'Bs')],
                  columns=['A', 'B'])
df['B']=df['B'].astype('category')
# after setting the column's type to 'category', you can set the order
df['B']=df['B'].cat.set_categories(['PhD', 'Bs', 'Ms'], ordered=True)
df.sort_values(['A', 'B'], inplace=True)  # df.sort was removed from pandas; use sort_values
df_unique = df.drop_duplicates('A')
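Run end to end on a current pandas release (where `df.sort` no longer exists, so `sort_values` is used), the ordered categories make `drop_duplicates('A')` keep the highest-ranked degree per group:

```python
import pandas as pd

df = pd.DataFrame([(1, 'Ms'), (1, 'PhD'), (2, 'Ms'), (2, 'Bs'), (3, 'PhD'),
                   (3, 'Bs'), (4, 'Ms'), (4, 'PhD'), (4, 'Bs')],
                  columns=['A', 'B'])
df['B'] = df['B'].astype('category')
# PhD < Bs < Ms in sort order, so the preferred degree comes first per group
df['B'] = df['B'].cat.set_categories(['PhD', 'Bs', 'Ms'], ordered=True)
df = df.sort_values(['A', 'B'])
df_unique = df.drop_duplicates('A')
# df_unique:
#    A    B
#    1  PhD
#    2   Bs
#    3  PhD
#    4  PhD
```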