Does spark-sql support multiple delimiters in the input data?

Posted 2019-03-02 06:59

Question:

I have input data with multiple single-character delimiters, as follows:

col1data1"col2data1;col3data1"col4data1
col1data2"col2data2;col3data2"col4data2
col1data3"col2data3;col3data3"col4data3

In the above data the ["] ,[;] are my delimiters.

Is there any way in sparkSQL to directly convert the input data (which is in a file) into a table with the column names col1, col2, col3, col4?

Answer 1:

The answer is no, spark-sql does not support multiple delimiters, but one way around it is to read your file into an RDD and then parse it with regular string-splitting methods:

import org.apache.spark.rdd.RDD

val rdd: RDD[String] = ??? // your lines, e.g. loaded with sparkContext.textFile(...)
val s = rdd.first()
// res1: String = "This is one example. This is another"

Let's say that you want to split on spaces and periods.

We can apply our split on the s value as follows:

s.split(" |\\.")
// res2: Array[String] = Array(This, is, one, example, "", This, is, another)
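Note that splitting on adjacent delimiters leaves empty strings in the result (the "" above); if you want to drop them, a minimal sketch is:

s.split(" |\\.").filter(_.nonEmpty)
// Array[String] = Array(This, is, one, example, This, is, another)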

Now we can apply the function on the whole RDD:

rdd.map(_.split(" |\\."))
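The result is an RDD[Array[String]]. To sanity-check a few rows before going further, a small sketch:

rdd.map(_.split(" |\\.")).take(2).foreach(arr => println(arr.mkString(" | ")))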

An example with your data:

scala> val s = "col1data1\"col2data1;col3data1\"col4data1"
scala> s.split(";|\"")
res4: Array[String] = Array(col1data1, col2data1, col3data1, col4data1)

More on string splitting:

  • A Scala split String example.
  • How to split String in Scala but keep the part matching the regular expression?

Just remember that anything you can apply to a regular data type you can apply to a whole RDD; then all you have to do is convert your RDD to a DataFrame.
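For example, here is a minimal sketch of the whole pipeline, assuming the data sits in a plain-text file at a hypothetical path input.txt and that every line has exactly four fields:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("multi-delimiter").getOrCreate()
import spark.implicits._

// Read the raw lines, split each line on either delimiter, and name the columns.
val df = spark.sparkContext
  .textFile("input.txt")                                 // hypothetical path
  .map(_.split(";|\""))
  .map { case Array(c1, c2, c3, c4) => (c1, c2, c3, c4) } // assumes 4 fields per line
  .toDF("col1", "col2", "col3", "col4")

df.show(false)

If you want to query it with spark-sql afterwards, you can register it as a temporary view with df.createOrReplaceTempView("my_table") and run SQL against my_table.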