Spark converting Pandas df to S3

Posted 2019-09-16 11:44

Question:

Currently I am using Spark together with Pandas. How can I convert a Pandas DataFrame in a convenient way so that it can be written to S3?

I have tried the option below, but I get an error because df is a Pandas DataFrame and has no write attribute.

df.write()
    .format("com.databricks.spark.csv")
    .option("header", "true")
    .save("123.csv");

Answer 1:

As you are running this in Spark, one approach would be to convert the Pandas DataFrame into a Spark DataFrame and then save it to S3.

The code snippet below creates the pdf Pandas DataFrame and converts it into the df Spark DataFrame.

import numpy as np
import pandas as pd

# Create Pandas DataFrame
d = {'one' : pd.Series([1., 2., 3.], index=['a', 'b', 'c']),
     'two' : pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])}
pdf = pd.DataFrame(d)

# Convert Pandas DataFrame to Spark DataFrame
df = spark.createDataFrame(pdf)
df.printSchema()

To validate the conversion, the df.printSchema() call above prints the schema of the Spark DataFrame, with the output below.

root
 |-- one: double (nullable = true)
 |-- two: double (nullable = true)
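Note that the snippet above assumes spark is an existing SparkSession, which is already defined in the pyspark shell and in Databricks notebooks. In a standalone script you would create one first; a minimal sketch (the app name is just a placeholder):

from pyspark.sql import SparkSession

# Create (or reuse) a SparkSession; skip this where spark already exists,
# e.g. in the pyspark shell or a Databricks notebook.
spark = SparkSession.builder \
    .appName("pandas-to-s3") \
    .getOrCreate()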

Now that it is a Spark DataFrame, you can use the spark-csv package to save it as a CSV file, as shown in the example below.

# Save the Spark DataFrame as CSV (replace '123.csv' with an s3a:// path to write to S3)
df.write.format('com.databricks.spark.csv').options(header='true').save('123.csv')
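To actually land the file in S3, point the save path at your bucket via an s3a:// URI (the cluster needs S3 credentials configured). On Spark 2.0+ the CSV writer is built in, so the spark-csv package is no longer required. A minimal sketch, assuming a hypothetical bucket named my-bucket:

# Write directly to S3 using the built-in CSV writer (Spark 2.0+).
# 'my-bucket' is a placeholder; S3 credentials must be configured on the cluster.
df.write.csv('s3a://my-bucket/output/123.csv', header=True, mode='overwrite')

Keep in mind that Spark writes the output as a directory of part files rather than a single 123.csv; call df.coalesce(1) before writing if a single output file is required.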