Using Spark 1.6, I have a Spark DataFrame column (let's call it col1) with the values A, B, C, DS, DNS, E, F, G and H, and I want to create a new column (say col2) with the values from the dict below. How do I map this? (For instance, 'A' needs to be mapped to 'S', etc.)
dict = {'A': 'S', 'B': 'S', 'C': 'S', 'DS': 'S', 'DNS': 'S', 'E': 'NS', 'F': 'NS', 'G': 'NS', 'H': 'NS'}
Sounds like the simplest solution would be to use the replace function: http://spark.apache.org/docs/2.4.0/api/python/pyspark.sql.html#pyspark.sql.DataFrame.replace
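A minimal sketch of that approach (the `mapping` name, the toy `df`, and its rows are illustrative, not from the question; it assumes Spark 2.x semantics as in the linked docs, where a dict can be passed directly to `replace`, and a `sqlContext` as provided by the pyspark shell):

```python
from pyspark.sql.functions import col

# The question's dict, renamed so it does not shadow the builtin `dict`
mapping = {'A': 'S', 'B': 'S', 'C': 'S', 'DS': 'S', 'DNS': 'S',
           'E': 'NS', 'F': 'NS', 'G': 'NS', 'H': 'NS'}

# Toy data standing in for the real table
df = sqlContext.createDataFrame([('A',), ('DS',), ('H',), ('X',)], ['col1'])

# replace() rewrites values in place, so copy col1 into col2 first and
# restrict the replacement to the new column via `subset`; note that
# unmapped values (e.g. 'X') are left as-is rather than set to null.
df.withColumn('col2', col('col1')).replace(mapping, subset=['col2']).show()
```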
Inefficient solution with UDF (version independent):
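One possible shape for such a UDF, reusing `mapping` and `df` from the sketch above (the `translate` helper is illustrative, not taken from the original answer):

```python
from pyspark.sql.functions import col, udf
from pyspark.sql.types import StringType

def translate(mapping):
    def translate_(value):
        # dict.get returns None for unmapped values, which becomes null in the column
        return mapping.get(value)
    return udf(translate_, StringType())

df.withColumn('col2', translate(mapping)(col('col1'))).show()
```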
with the result (shown for the illustrative sample rows used in the sketch above; the unmapped value 'X' becomes null):
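```
+----+----+
|col1|col2|
+----+----+
|   A|   S|
|  DS|   S|
|   H|  NS|
|   X|null|
+----+----+
```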
Much more efficient (Spark 2.0+ only) is to create a MapType literal, as sketched below.
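A sketch of that approach, again reusing `mapping` and `df` from above (`create_map` and `getItem` are standard pyspark.sql APIs; the `mapping_expr` name is illustrative):

```python
from itertools import chain
from pyspark.sql.functions import col, create_map, lit

# Flatten the dict into alternating key/value literals: lit('A'), lit('S'), lit('B'), lit('S'), ...
mapping_expr = create_map(*[lit(x) for x in chain(*mapping.items())])

# Looking up a key that is not in the map yields null, like the UDF version
df.withColumn('col2', mapping_expr.getItem(col('col1'))).show()
```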
This produces the same result as the UDF version, but with a more efficient execution plan: for the map-literal version, df.explain() shows only a Project over the source scan, evaluated entirely inside the JVM, whereas the UDF version inserts an extra BatchEvalPython step, so every row has to be serialized, shipped to a Python worker for evaluation, and sent back.
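A quick way to compare the two plans side by side, reusing the names defined in the sketches above:

```python
# Map-literal version: the lookup is a Catalyst expression inside a Project
df.withColumn('col2', mapping_expr.getItem(col('col1'))).explain()

# UDF version: the printed plan gains an extra BatchEvalPython step
df.withColumn('col2', translate(mapping)(col('col1'))).explain()
```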