I am trying to write a dataframe to a text file. If the file contains a single column then I am able to write it to a text file. If the file contains multiple columns then I am facing this error:

Text data source supports only a single column, and you have 2 columns.
import org.apache.log4j.{Level, Logger}
import org.apache.spark.sql.{Row, SparkSession}
import org.apache.spark.sql.types.{LongType, StructField, StructType}

object replace {
  def main(args: Array[String]): Unit = {
    Logger.getLogger("org").setLevel(Level.ERROR)
    val spark = SparkSession.builder.master("local[1]").appName("Decimal Field Validation").getOrCreate()
    var sourcefile = spark.read.option("header", "true").text("C:/Users/phadpa01/Desktop/inputfiles/decimalvalues.txt")
    // adding prgrefnbr as a 1-based row index
    val rowRDD = sourcefile.rdd.zipWithIndex().map(indexedRow => Row.fromSeq((indexedRow._2.toLong + 1) +: indexedRow._1.toSeq))
    // add column for prgrefnbr in schema
    val newstructure = StructType(Array(StructField("PRGREFNBR", LongType)).++(sourcefile.schema.fields))
    // create new dataframe containing prgrefnbr
    sourcefile = spark.createDataFrame(rowRDD, newstructure)
    val op = sourcefile.write.mode("overwrite").format("text").save("C:/Users/phadpa01/Desktop/op")
  }
}
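For reference, the error occurs because the text data source writes exactly one string column. One way around it, as a sketch (assuming Spark 2+; the sample data and column names are made up for illustration), is to concatenate all columns into a single column with concat_ws before writing:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, concat_ws}

object ConcatThenWrite {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.master("local[1]").appName("concat-demo").getOrCreate()
    import spark.implicits._

    // hypothetical data standing in for the real dataframe
    val df = Seq((1L, "a", "b"), (2L, "c", "d")).toDF("PRGREFNBR", "col1", "col2")

    // Collapse all columns into one comma-separated string column,
    // which the text data source accepts.
    df.select(concat_ws(",", df.columns.map(col): _*).as("value"))
      .write.mode("overwrite").text("C:/Users/phadpa01/Desktop/op")

    spark.stop()
  }
}
```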
You can convert the dataframe to an RDD, convert each Row to a string, and write the last line as
val op= sourcefile.rdd.map(_.toString()).saveAsTextFile("C:/Users/phadpa01/Desktop/op")
Edit:

As @philantrovert and @Pravinkumar have pointed out, the above would append [ and ] in the output file, which is true. The solution would be to replace them with empty strings, as
val op= sourcefile.rdd.map(_.toString().replace("[","").replace("]", "")).saveAsTextFile("C:/Users/phadpa01/Desktop/op")
One can even use regex
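As a sketch of the regex variant: Row.toString produces strings like "[1,abc,2.0]", and a single anchored replaceAll can strip both outer brackets in one pass (plain-Scala demo of the string logic; in practice it would go inside the .map above, and the sample row string is made up):

```scala
object RegexStripDemo {
  def main(args: Array[String]): Unit = {
    // What Row.toString typically yields for a three-column row
    val rowString = "[1,abc,2.0]"

    // One regex pass removes the leading "[" and the trailing "]";
    // the ^ and $ anchors leave any brackets inside field values alone
    val cleaned = rowString.replaceAll("^\\[|\\]$", "")

    println(cleaned) // 1,abc,2.0
  }
}
```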
I would recommend using CSV or another delimited format. The following is an example of the most concise/elegant way to write to .tsv in Spark 2+:
val tsvWithHeaderOptions: Map[String, String] = Map(
  ("delimiter", "\t"), // Uses "\t" delimiter instead of default ","
  ("header", "true"))  // Writes a header record with column names

df.coalesce(1) // Writes to a single file
  .write
  .mode(SaveMode.Overwrite)
  .options(tsvWithHeaderOptions)
  .csv("output/path")
I think using "substring" is more appropriate for all scenarios. Please check the code below.
sourcefile.rdd
.map(r => { val x = r.toString; x.substring(1, x.length-1)})
.saveAsTextFile("C:/Users/phadpa01/Desktop/op")
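A note on why substring can be the safer choice: it only strips the outer brackets that Row.toString adds, while the chained replace calls above also delete brackets that appear inside field values (plain-Scala demo of the string logic; the sample row string is made up):

```scala
object SubstringVsReplace {
  def main(args: Array[String]): Unit = {
    // A row string whose middle field itself contains brackets
    val rowString = "[1,a[x]b,2.0]"

    // substring keeps the inner brackets intact
    val bySubstring = rowString.substring(1, rowString.length - 1)
    println(bySubstring) // 1,a[x]b,2.0

    // replace removes every bracket, corrupting the field value
    val byReplace = rowString.replace("[", "").replace("]", "")
    println(byReplace) // 1,axb,2.0
  }
}
```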
You can save the file as a CSV text file (.format("csv")). The result will be a text file in CSV format; each column will be separated by a comma.
val op = sourcefile.write.mode("overwrite").format("csv").save("C:/Users/phadpa01/Desktop/op")
More info can be found in the Spark programming guide.
I use the Databricks spark-csv API to save my DF output to a text file.
myDF.write.format("com.databricks.spark.csv").option("header", "true").save("output.csv")