Split Contents of String column in PySpark DataFrame

Posted 2019-01-15 17:10

I have a PySpark data frame which has a column containing strings. I want to split this column into arrays of words.

Code:

>>> sentenceData = sqlContext.read.load('file://sample1.csv', format='com.databricks.spark.csv', header='true', inferSchema='true')
>>> sentenceData.show(truncate=False)
+---+---------------------------+
|key|desc                       |
+---+---------------------------+
|1  |Virat is good batsman      |
|2  |sachin was good            |
|3  |but modi sucks big big time|
|4  |I love the formulas        |
+---+---------------------------+


Expected Output
---------------

>>> sentenceData.show(truncate=False)
+---+-------------------------------------+
|key|desc                                 |
+---+-------------------------------------+
|1  |[Virat,is,good,batsman]              |
|2  |[sachin,was,good]                    |
|3  |....                                 |
|4  |...                                  |
+---+-------------------------------------+

How can I achieve this?

1 Answer

Juvenile、少年°
Answered 2019-01-15 18:06

Use the split function:

from pyspark.sql.functions import split

# Split on runs of whitespace; use a raw string so the regex backslash survives.
sentenceData = sentenceData.withColumn("desc", split("desc", r"\s+"))