Can pyspark.sql.functions be used in a udf?

Posted 2020-02-12 21:31

Question:

I define a function like

getDate = udf(lambda x : to_date(x))

When I use it in

df.select(getDate("time")).show()

I met

File ".../pyspark/sql/functions.py", in to_date
return Column(sc._jvm.functions.to_date(_to_java_column(col)))
AttributeError: 'NoneType' object has no attribute '_jvm'

Does that mean that I cannot use pyspark.sql.functions in my own udf?

This is not about a specific problem; I want to understand why this happens.

Answer 1:

Functions from pyspark.sql.functions are wrappers around JVM functions and are designed to operate on pyspark.sql.Column. You cannot use them:

  • To transform local Python objects: they take a Column and return a Column, not plain Python values.
  • Inside a udf: the udf body is executed on the workers, where there is no active SparkContext in which these functions could be evaluated (see the sketch below).
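
To illustrate, here is a minimal sketch (the sample data and the column name "time" are assumptions for demonstration): pyspark.sql.functions must be applied to Column expressions on the driver, while a udf body must be pure Python that can run on the workers.

import datetime

from pyspark.sql import SparkSession
import pyspark.sql.functions as F
from pyspark.sql.types import DateType

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("2020-02-12",)], ["time"])

# Works: to_date builds a Column expression that the JVM evaluates.
df.select(F.to_date(df["time"])).show()

# Works: the udf body is pure Python, so it can run on the workers.
parseDate = F.udf(lambda s: datetime.datetime.strptime(s, "%Y-%m-%d").date(), DateType())
df.select(parseDate("time")).show()

# Fails: inside the udf there is no SparkContext, so to_date raises
# AttributeError: 'NoneType' object has no attribute '_jvm'.
# getDate = F.udf(lambda x: F.to_date(x))
# df.select(getDate("time")).show()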


Answer 2:

Looking at the error, the problem is with sc: 'NoneType' object has no attribute '_jvm' means that sc is None at the point where to_date calls sc._jvm. Inside a udf the code runs on the workers, where no SparkContext is available, so sc is NoneType there.

And there is no need to write a udf for this anyway; you can use the function directly:

import pyspark.sql.functions as F
df.select(F.to_date(df.time)).show()
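
As a self-contained check (the sample timestamp row is an assumption), this is how that suggestion runs end to end; keeping the work in to_date also stays entirely in the JVM, avoiding the Python serialization overhead a udf would add:

import pyspark.sql.functions as F
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("2020-02-12 21:31:00",)], ["time"])

# to_date casts the timestamp string down to a date; alias names the result column.
df.select(F.to_date(df.time).alias("date")).show()
# Expected: a single-column DataFrame containing the date 2020-02-12.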