How can we find the number of words in a column of a Spark DataFrame without using the REPLACE() function of SQL? Below are the code and input I am working with, but the replace() function does not work.
from pyspark.sql import SparkSession
my_spark = SparkSession \
.builder \
.appName("Python Spark SQL example") \
.enableHiveSupport() \
.getOrCreate()
parqFileName = 'gs://caserta-pyspark-eval/train.pqt'
tuesdayDF = my_spark.read.parquet(parqFileName)
tuesdayDF.createOrReplaceTempView("parquetFile")
tuesdaycrimes = my_spark.sql("SELECT LENGTH(Address) - LENGTH(REPLACE(Address, ' ', '')) + 1 FROM parquetFile")
tuesdaycrimes.show()
+-------------------+--------------+--------------------+---------+----------+--------------+--------------------+-----------+---------+
| Dates| Category| Descript|DayOfWeek|PdDistrict| Resolution| Address| X| Y|
+-------------------+--------------+--------------------+---------+----------+--------------+--------------------+-----------+---------+
|2015-05-14 03:53:00| WARRANTS| WARRANT ARREST|Wednesday| NORTHERN|ARREST, BOOKED| OAK ST / LAGUNA ST| -122.42589|37.774597|
|2015-05-14 03:53:00|OTHER OFFENSES|TRAFFIC VIOLATION...|Wednesday| NORTHERN|ARREST, BOOKED| OAK ST / LAGUNA ST| -122.42589|37.774597|
|2015-05-14 03:33:00|OTHER OFFENSES|TRAFFIC VIOLATION...|Wednesday| NORTHERN|ARREST, BOOKED|VANNESS AV / GREE...| -122.42436|37.800415|
+-------------------+--------------+--------------------+---------+----------+--------------+--------------------+-----------+---------+
You can define a udf function and call it using the .withColumn function. And if you want a distinct count of words, you can change the udf function to use a set.
You can do it using just the split and size functions of the pyspark API.

There are a number of ways to count the words using pyspark DataFrame functions, depending on what it is you are looking for.
Create Example Data

In this example, we will count the words in the Description column.

Count in each row
If you wanted the count of words in the specified column for each row, you can create a new column using withColumn() and do the following:

- use pyspark.sql.functions.split() to break the string into a list
- use pyspark.sql.functions.size() to count the length of the list

For example:
Sum word count over all rows

If you wanted to count the total number of words in the column across the entire DataFrame, you can use pyspark.sql.functions.sum():

Count occurrence of each word
If you wanted the count of each word in the entire DataFrame, you can use split() and pyspark.sql.functions.explode(), followed by a groupBy and count().