Count the frequency that a value occurs in a dataframe column

Posted 2019-01-01 00:39

Question:

I have a dataset

category
cat a
cat b
cat a

I'd like to be able to return something like this (showing unique values and their frequency):

category   freq
cat a      2
cat b      1

Answer 1:

Use groupby and count:

In [37]:
df = pd.DataFrame({'a': list('abssbab')})
df.groupby('a').count()

Out[37]:

   a
a   
a  2
b  3
s  2

[3 rows x 1 columns]

See the online docs: http://pandas.pydata.org/pandas-docs/stable/groupby.html

Also value_counts(), as @DSM has commented; there are many ways to skin a cat here:

In [38]:
df['a'].value_counts()

Out[38]:

b    3
a    2
s    2
dtype: int64

If you wanted to add frequency back to the original dataframe use transform to return an aligned index:

In [41]:
df['freq'] = df.groupby('a')['a'].transform('count')
df

Out[41]:

   a freq
0  a    2
1  b    3
2  s    2
3  s    2
4  b    3
5  a    2
6  b    3

[7 rows x 2 columns]


Answer 2:

If you want to apply this to all columns, you can use:

df.apply(pd.value_counts)

This will apply a column-based aggregation function (in this case value_counts) to each of the columns.
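For example, a minimal sketch (with a made-up two-column frame) of what this produces:

import pandas as pd

df = pd.DataFrame({'a': list('abssbab'), 'b': list('xyxxyxy')})
df.apply(pd.value_counts)
# Each column is counted independently; the index is the union of the
# unique values, with NaN where a value never occurs in that column.
# Roughly:
#      a    b
# a  2.0  NaN
# b  3.0  NaN
# s  2.0  NaN
# x  NaN  4.0
# y  NaN  3.0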



Answer 3:

df.category.value_counts()

This short line of code will give you the output you want.
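For instance, on the example data from the question, a quick sketch:

import pandas as pd

df = pd.DataFrame({'category': ['cat a', 'cat b', 'cat a']})
df.category.value_counts()
# roughly:
# cat a    2
# cat b    1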



Answer 4:

df.apply(pd.value_counts).fillna(0)

value_counts - returns an object containing counts of unique values

apply - counts the frequency in every column. If you set axis=1, you get the frequency in every row

fillna(0) - makes the output tidier by replacing NaN with 0
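A minimal sketch (made-up frame) of the difference fillna(0) makes:

import pandas as pd

df = pd.DataFrame({'a': ['x', 'y', 'x'], 'b': ['y', 'y', 'z']})
df.apply(pd.value_counts)
# roughly:
#      a    b
# x  2.0  NaN
# y  1.0  2.0
# z  NaN  1.0

df.apply(pd.value_counts).fillna(0)
# roughly:
#      a    b
# x  2.0  0.0
# y  1.0  2.0
# z  0.0  1.0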



Answer 5:

In pandas 0.18.1, groupby together with count does not give the frequency of unique values:

>>> df
   a
0  a
1  b
2  s
3  s
4  b
5  a
6  b

>>> df.groupby('a').count()
Empty DataFrame
Columns: []
Index: [a, b, s]

However, the unique values and their frequencies are easily determined using size:

>>> df.groupby('a').size()
a
a    2
b    3
s    2

With df.a.value_counts(), sorted values (in descending order, i.e. largest value first) are returned by default.
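value_counts also takes sort and ascending flags if you want a different order; a quick sketch in the same session (the relative order of tied counts is not guaranteed):

>>> df.a.value_counts(ascending=True)
a    2
s    2
b    3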



Answer 6:

Using a list comprehension and value_counts for multiple columns in a df:

[my_series[c].value_counts() for c in list(my_series.select_dtypes(include=['O']).columns)]

https://stackoverflow.com/a/28192263/786326
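A usage sketch (note that my_series above is actually a DataFrame, despite the name); a dict comprehension keyed by column name may be more convenient:

import pandas as pd

df = pd.DataFrame({'x': ['a', 'b', 'a'], 'y': ['c', 'c', 'd']})
counts = {c: df[c].value_counts() for c in df.select_dtypes(include=['O']).columns}
counts['x']
# roughly:
# a    2
# b    1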



Answer 7:

This should work:

df.groupby(\'category\').size()
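To match the question's desired output with a named freq column, one common idiom is to reset the index of the resulting Series (a sketch):

df.groupby('category').size().reset_index(name='freq')
#   category  freq
# 0    cat a     2
# 1    cat b     1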


Answer 8:

If your DataFrame holds values of the same type, you can also set return_counts=True in numpy.unique().

index, counts = np.unique(df.values, return_counts=True)

np.bincount() could be faster if your values are integers.
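A small sketch of the bincount route, assuming non-negative integer values:

import numpy as np

values = np.array([0, 1, 1, 3, 3, 3])
counts = np.bincount(values)
# counts[v] is the frequency of value v:
# array([1, 2, 0, 3])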



Answer 9:

Without any libraries, you could do this instead:

def to_frequency_table(data):
    frequencytable = {}
    for key in data:
        if key in frequencytable:
            frequencytable[key] += 1
        else:
            frequencytable[key] = 1
    return frequencytable

Example:

>>> to_frequency_table([1, 1, 1, 1, 2, 3, 4, 4])
{1: 4, 2: 1, 3: 1, 4: 2}
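For comparison, the standard library's collections.Counter does the same thing in one call (see also Answer 14):

from collections import Counter

Counter([1, 1, 1, 1, 2, 3, 4, 4])
# roughly:
# Counter({1: 4, 4: 2, 2: 1, 3: 1})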


Answer 10:

Use the size() method:

import pandas as pd

# where df is your dataframe
print(df.groupby('category').size())


Answer 11:

You can also do this with pandas by casting your columns to the category dtype first, e.g.

cats = ['client', 'hotel', 'currency', 'ota', 'user_country']

df[cats] = df[cats].astype('category')

and then calling describe:

df[cats].describe()

This will give you a nice table of value counts and a bit more :):

        client   hotel  currency     ota  user_country
count   852845  852845    852845  852845        852845
unique    2554   17477       132      14           219
top       2198   13202       USD   Hades            US
freq    102562    8847    516500  242734        340992


Answer 12:

Assuming you have a pandas DataFrame df, try:

df.category.value_counts()

The Pandas Manual provides more information.



Answer 13:

n_values = data.income.value_counts()

# first unique value count
n_at_most_50k = n_values[0]

# second unique value count
n_greater_50k = n_values[1]

n_values

Output:

<=50K    34014
>50K     11208
Name: income, dtype: int64

(n_greater_50k, n_at_most_50k)

Output:

(11208, 34014)


Answer 14:

@metatoaster has already pointed this out. Go for Counter. It's blazing fast.

import pandas as pd
from collections import Counter
import timeit
import numpy as np

df = pd.DataFrame(np.random.randint(1, 10000, (100, 2)), columns=[\"NumA\", \"NumB\"])

Timings:

%timeit -n 10000 df['NumA'].value_counts()
# 10000 loops, best of 3: 715 µs per loop

%timeit -n 10000 df['NumA'].value_counts().to_dict()
# 10000 loops, best of 3: 796 µs per loop

%timeit -n 10000 Counter(df['NumA'])
# 10000 loops, best of 3: 74 µs per loop

%timeit -n 10000 df.groupby(['NumA']).count()
# 10000 loops, best of 3: 1.29 ms per loop
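If you need the result back as a pandas object, a Counter converts cleanly since it is a dict subclass; a quick sketch reusing the imports above:

counts = pd.Series(Counter(df['NumA'])).sort_values(ascending=False)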

Cheers!



Tags: python pandas