pyspark convert row to json with nulls

Posted 2019-03-02 02:44

Question:

Goal: For a dataframe with schema

id:string
Cold:string
Medium:string
Hot:string
IsNull:string
annual_sales_c:string
average_check_c:string
credit_rating_c:string
cuisine_c:string
dayparts_c:string
location_name_c:string
market_category_c:string
market_segment_list_c:string
menu_items_c:string
msa_name_c:string
name:string
number_of_employees_c:string
number_of_rooms_c:string
Months In Role:integer
Tenured Status:string
IsCustomer:integer
units_c:string
years_in_business_c:string
medium_interactions_c:string
hot_interactions_c:string
cold_interactions_c:string
is_null_interactions_c:string

I want to add a new column that is a JSON string of all keys and values for the columns. I have used the approach from this post, PySpark - Convert to JSON row by row, and related questions. My code:

df = df.withColumn("JSON", func.to_json(func.struct([df[x] for x in df.columns])))

I am having one issue:

Issue: When any row has a null value for a column (and my data has many...), the JSON string doesn't contain that key. That is, if only 9 of the 27 columns have values, the JSON string has only 9 keys. What I would like is to keep all keys, but pass an empty string "" for the null values.
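
For example (a minimal repro of the behavior, with hypothetical column names, assuming an active SparkSession named spark):

from pyspark.sql import functions as func

# Two-column toy frame with a null in the second row.
demo = spark.createDataFrame([("a", "x"), ("b", None)], ["k1", "k2"])
demo.withColumn(
    "JSON", func.to_json(func.struct([demo[c] for c in demo.columns]))
).show(truncate=False)
# The first row yields {"k1":"a","k2":"x"}, but the row with a null
# yields {"k1":"b"} -- the "k2" key is silently dropped.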

Any tips?

Answer 1:

You should be able to just modify the answer on the question you linked using pyspark.sql.functions.when.

Consider the following example DataFrame:

data = [
    ('one', 1, 10),
    (None, 2, 20),
    ('three', None, 30),
    (None, None, 40)
]

sdf = spark.createDataFrame(data, ["A", "B", "C"])
sdf.printSchema()
#root
# |-- A: string (nullable = true)
# |-- B: long (nullable = true)
# |-- C: long (nullable = true)

Use when to implement if-then-else logic: use the column if it is not null; otherwise return an empty string. (Since both branches of when must share a type, this casts the numeric columns to strings, which is why B and C appear quoted in the JSON output below.)

from pyspark.sql.functions import col, to_json, struct, when, lit
sdf = sdf.withColumn(
    "JSON",
    to_json(
        struct(
            [
                when(
                    col(x).isNotNull(),
                    col(x)
                ).otherwise(lit("")).alias(x)
                for x in sdf.columns
            ]
        )
    )
)
sdf.show()
#+-----+----+---+-----------------------------+
#|A    |B   |C  |JSON                         |
#+-----+----+---+-----------------------------+
#|one  |1   |10 |{"A":"one","B":"1","C":"10"} |
#|null |2   |20 |{"A":"","B":"2","C":"20"}    |
#|three|null|30 |{"A":"three","B":"","C":"30"}|
#|null |null|40 |{"A":"","B":"","C":"40"}     |
#+-----+----+---+-----------------------------+

Another option is to use pyspark.sql.functions.coalesce instead of when:

from pyspark.sql.functions import coalesce

sdf.withColumn(
    "JSON",
    to_json(
        struct(
            [coalesce(col(x), lit("")).alias(x) for x in sdf.columns]
        )
    )
).show(truncate=False)
## Same as above
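
Note: if you are on Spark 3.0 or later (an assumption about your environment), to_json also accepts an ignoreNullFields option. Setting it to "false" keeps every key, although null columns are emitted as JSON null rather than an empty string:

# Spark 3.0+ (assumed): keep null fields instead of dropping them.
sdf.withColumn(
    "JSON",
    to_json(struct([col(x) for x in sdf.columns]), {"ignoreNullFields": "false"})
).show(truncate=False)
# e.g. the second row becomes {"A":null,"B":2,"C":20}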