Apply logic to a particular column in a DataFrame

Posted 2019-08-18 00:33

Question:

I have a DataFrame that was imported from MySQL:

dataframe_mysql.show()
+----+---------+-------------------------------------------------------+
|  id|accountid|                                                xmldata|
+----+---------+-------------------------------------------------------+
|1001|    12346|<AccountSetup xmlns:xsi="test"><Customers test="test...|
|1002|    12346|<AccountSetup xmlns:xsi="test"><Customers test="test...|
|1003|    12346|<AccountSetup xmlns:xsi="test"><Customers test="test...|
|1004|    12347|<AccountSetup xmlns:xsi="test"><Customers test="test...|
+----+---------+-------------------------------------------------------+
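
For reference, a DataFrame like this can be read with Spark's JDBC source; the connection details below are placeholders, not the asker's actual setup:

 // Minimal sketch of loading the table over JDBC (placeholder URL/credentials)
 val dataframe_mysql = spark.read
   .format("jdbc")
   .option("url", "jdbc:mysql://localhost:3306/mydb")
   .option("dbtable", "accounts")
   .option("user", "user")
   .option("password", "password")
   .load()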

The xmldata column contains XML tags, and I need to parse it into structured data in a separate DataFrame.

Previously I had the XML alone in a text file, and loaded it into a Spark DataFrame using "com.databricks.spark.xml":

 spark-shell --packages com.databricks:spark-xml_2.10:0.4.1,com.databricks:spark-csv_2.10:1.5.0

 val sqlContext = new org.apache.spark.sql.SQLContext(sc)

 val df = sqlContext.read.format("com.databricks.spark.xml")
 .option("rowTag","Account").load("mypath/Account.xml")

The final output I got was structured:

df.show()

 +----------+--------------------+--------------------+--------------+--------------------+-------+....
 |   AcctNbr|         AddlParties|           Addresses|ApplicationInd|       Beneficiaries|ClassCd|....
 +----------+--------------------+--------------------+--------------+--------------------+-------+....
 |AAAAAAAAAA|[[Securities Amer...|[WrappedArray([D,...|             T|[WrappedArray([11...|     35|....
 +----------+--------------------+--------------------+--------------+--------------------+-------+....

Please advise how to achieve this when the XML content is inside a DataFrame.

Answer 1:

Since you are trying to pull the XML data column out into a separate DataFrame, you can still use the spark-xml package. You just need to use its reader directly.

// Sample data mimicking the MySQL table; in spark-shell,
// spark.implicits._ is already in scope for .toDF
case class Data(id: Int, accountid: Int, xmldata: String)
val df = Seq(
    Data(1001, 12345, "<AccountSetup xmlns:xsi=\"test\"><Customers test=\"a\">d</Customers></AccountSetup>"),
    Data(1002, 12345, "<AccountSetup xmlns:xsi=\"test\"><Customers test=\"b\">e</Customers></AccountSetup>"),
    Data(1003, 12345, "<AccountSetup xmlns:xsi=\"test\"><Customers test=\"c\">f</Customers></AccountSetup>")
).toDF


import com.databricks.spark.xml.XmlReader

// Configure the reader to treat each <AccountSetup> element as a row
val reader = new XmlReader().withRowTag("AccountSetup")

// Extract the xmldata column as an RDD[String] and parse it with spark-xml
val rdd = df.select("xmldata").map(r => r.getString(0)).rdd
val xmlDF = reader.xmlRdd(spark.sqlContext, rdd)
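
As a side note, newer spark-xml releases (0.5.0+, assuming your environment can use them) also expose a from_xml column function, which parses the column in place without the detour through an RDD. A minimal sketch, with the schema hand-written to match the sample XML above:

import org.apache.spark.sql.types._
import com.databricks.spark.xml.functions.from_xml

// Hand-written schema for the sample XML; spark-xml maps attributes to
// "_"-prefixed fields and element text to "_VALUE" by default
val xmlSchema = StructType(Seq(
  StructField("Customers", StructType(Seq(
    StructField("_test", StringType),
    StructField("_VALUE", StringType))))))

// id and accountid stay alongside the parsed struct
val parsed = df.withColumn("parsed", from_xml($"xmldata", xmlSchema))
parsed.select($"id", $"accountid", $"parsed.Customers._VALUE").show()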

However, a UDF with custom XML parsing, as philantrovert suggests, would probably be cleaner in the long run. Reference link for the reader class here
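
For illustration, a minimal sketch of that UDF approach using scala-xml; the extractCustomer name and the extracted fields are assumptions based on the sample data:

import org.apache.spark.sql.functions.udf
import scala.xml.XML

// Hypothetical UDF: pull the "test" attribute and the element text
// out of <Customers>; the tuple becomes a struct with fields _1 and _2
val extractCustomer = udf { (xml: String) =>
  val customers = XML.loadString(xml) \ "Customers"
  (customers \@ "test", customers.text)
}

val parsedDF = df
  .withColumn("customer", extractCustomer($"xmldata"))
  .select($"id", $"accountid",
    $"customer._1".as("test"), $"customer._2".as("value"))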



Answer 2:

I tried the code below:

val dff1 = Seq(
  Data(1001, 12345, "<AccountSetup xmlns:xsi=\"test\"><Customers test=\"a\">d</Customers></AccountSetup>"),
  Data(1002, 12345, "<AccountSetup xmlns:xsi=\"test\"><Customers test=\"b\">e</Customers></AccountSetup>"),
  Data(1003, 12345, "<AccountSetup xmlns:xsi=\"test\"><Customers test=\"c\">f</Customers></AccountSetup>")
).toDF

dff1.show()
val reader = new XmlReader().withRowTag("AccountSetup")
val xmlrdd = dff1.select("xmldata").map(a => a.getString(0)).rdd
xmlrdd.toDF("newRowXml").show()
val xmldf = reader.xmlRdd(spark.sqlContext, xmlrdd)
xmldf.show()

I got the output for dff1.show() and xmlrdd.toDF("newRowXml").show():

//dff1.show()
+----+---------+--------------------+
|  id|accountid|             xmldata|
+----+---------+--------------------+
|1001|    12345|<AccountSetup xml...|
|1002|    12345|<AccountSetup xml...|
|1003|    12345|<AccountSetup xml...|
+----+---------+--------------------+

xmlrdd.toDF("newRowXml").show()
+--------------------+
|           newRowXml|
+--------------------+
|<AccountSetup xml...|
|<AccountSetup xml...|
|<AccountSetup xml...|
+--------------------+
18/09/20 19:30:29 INFO SparkUI: Stopped Spark web UI at http://192.168.56.1:4040
18/09/20 19:30:29 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
18/09/20 19:30:29 INFO MemoryStore: MemoryStore cleared
18/09/20 19:30:29 INFO BlockManager: BlockManager stopped
18/09/20 19:30:29 INFO BlockManagerMaster: BlockManagerMaster stopped
18/09/20 19:30:29 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
18/09/20 19:30:29 INFO SparkContext: Successfully stopped SparkContext
18/09/20 19:30:29 INFO ShutdownHookManager: Shutdown hook called
18/09/20 19:30:29 INFO ShutdownHookManager: Deleting directory C:\Users\rajkiranu\AppData\Local\Temp\spark-16433b5e-01b7-472b-9b88-fea0a67a991a

Process finished with exit code 1

I am not able to see the output of xmldf.show().