Pyspark dataframe orderBy list of columns [duplicate]

Posted 2019-08-30 23:00

Question:

This question already has an answer here:

  • How to select and order multiple columns in a Pyspark Dataframe after a join

I am trying to use the orderBy function on a pyspark dataframe before I write it to csv, but I am not sure how to use orderBy when I have a list of columns.

Code:

cols = ['col1', 'col2', 'col3']
df = df.orderBy(cols, ascending=False)

Answer 1:

As per docstring / signature:

Signature: df.orderBy(*cols, **kwargs)
Docstring:
Returns a new :class:`DataFrame` sorted by the specified column(s).
:param cols: list of :class:`Column` or column names to sort by.
:param ascending: boolean or list of boolean (default True).
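
Per that docstring, ascending can also be a list of booleans, one per sort column, to mix directions. A minimal sketch (the DataFrame and column names here are made up purely for illustration):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
sample_df = spark.createDataFrame(
    [(1, "a", 3.0), (2, "b", 1.0)],
    ["col1", "col2", "col3"],
)

# col1 descending, col2 ascending, col3 descending
sample_df.orderBy(["col1", "col2", "col3"], ascending=[False, True, False])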

Both

df = spark.createDataFrame([(1, 2, 3)])
cols = ["_1", "_2", "_3"]

df.orderBy(cols, ascending=False)

and

df.orderBy(*cols, ascending=False)

are valid, as are the equivalent calls that pass a list of pyspark.sql.Column objects.
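
For instance, a quick sketch of that Column-based form, continuing from the df and cols defined above and using the standard pyspark.sql.functions.col helper:

from pyspark.sql.functions import col

# Column objects with the direction supplied via the ascending keyword
df.orderBy([col(c) for c in cols], ascending=False)

# or put the direction on the columns themselves
df.orderBy([col(c).desc() for c in cols])

Either call sorts every column in descending order, matching the name-based versions above.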