I have a large pyspark.sql.dataframe.DataFrame
and I want to keep (so filter
) all rows where the URL saved in the location
column contains a pre-determined string, e.g. 'google.com'.
I have tried df.filter(sf.col('location').contains('google.com')), but this throws a
TypeError: 'Column' object is not callable
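For reference, here is a minimal sketch of the kind of setup I'm running; the toy URLs, the example DataFrame, and the sf import alias are made up just for illustration:

    import pyspark.sql.functions as sf
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # toy DataFrame standing in for my real data
    df = spark.createDataFrame(
        [('https://www.google.com/search?q=spark',),
         ('https://example.org/home',)],
        ['location'],
    )

    # the filter I attempted; this is the line that raises the TypeError on my setup
    df.filter(sf.col('location').contains('google.com')).show()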
How do I go about filtering my df properly? Many thanks in advance!