Filter PySpark DataFrame by checking if string appears in array column

Posted 2020-02-14 10:28

I'm new to Spark and playing around with filtering. I have a pyspark.sql DataFrame created by reading in a JSON file. Part of the schema is shown below:

root
 |-- authors: array (nullable = true)
 |    |-- element: string (containsNull = true)

I would like to filter this DataFrame, selecting all rows pertaining to a particular author. Whether that author is listed first in authors or nth, the row should be included as long as their name appears. So something along the lines of

df.filter(df['authors'].getItem(i) == 'Some Author')

where i would iterate over every position in authors for that row (the array length is not constant across rows).
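
For illustration, that per-index idea can be emulated by OR-ing a fixed range of positions — assuming a hypothetical MAX_AUTHORS upper bound, since getItem past the end of an array returns null and the filter drops null rows — but it is clumsy:

from functools import reduce

# Assumed upper bound on array length; rows with more authors would be missed
MAX_AUTHORS = 10
cond = reduce(
    lambda a, b: a | b,
    [df['authors'].getItem(i) == 'Some Author' for i in range(MAX_AUTHORS)],
)
df.filter(cond)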

I tried implementing the solution given in "PySpark DataFrames: filter where some value is in array column", but it gives me

ValueError: Some of types cannot be determined by the first 100 rows, please try again with sampling

Is there a succinct way to implement this filter?

1 Answer
手持菜刀,她持情操
Answered 2020-02-14 11:10

You can use the pyspark.sql.functions.array_contains method:

df.filter(array_contains(df['authors'], 'Some Author'))

from pyspark.sql import SparkSession
from pyspark.sql.functions import array_contains
from pyspark.sql.types import StructType, StructField, ArrayType, StringType

spark = SparkSession.builder.getOrCreate()

# Each tuple holds the single array-of-strings column "authors"
lst = [(["author 1", "author 2"],), (["author 2"],), (["author 1"],)]
schema = StructType([StructField("authors", ArrayType(StringType()), True)])
df = spark.createDataFrame(lst, schema)
df.show()
+--------------------+
|             authors|
+--------------------+
|[author 1, author 2]|
|          [author 2]|
|          [author 1]|
+--------------------+

df.printSchema()
root
 |-- authors: array (nullable = true)
 |    |-- element: string (containsNull = true)

df.filter(array_contains(df.authors, "author 1")).show()
+--------------------+
|             authors|
+--------------------+
|[author 1, author 2]|
|          [author 1]|
+--------------------+
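
If you prefer SQL-style expressions, the same predicate can be written via pyspark.sql.functions.expr — a minimal equivalent sketch:

from pyspark.sql.functions import expr

# expr parses the SQL string into a Column; array_contains is also a SQL function
df.filter(expr("array_contains(authors, 'author 1')")).show()

Both forms compile to the same array_contains expression, so pick whichever reads better in your pipeline.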