Spark SQL - load data with JDBC using SQL statement

Posted 2019-01-04 03:47

I think I am missing something but can't figure out what. I want to load data via SQLContext and JDBC using a specific SQL statement, such as:

select top 1000 text from table1 with (nolock)
where threadid in (
  select distinct id from table2 with (nolock)
  where flag = 2 and date >= '1/1/2015' and userid in (1, 2, 3)
)

Which method of SQLContext should I use? The examples I have seen always specify a table name and lower and upper bounds.

Thanks in advance.

2 Answers
ら.Afraid
#2 · 2019-01-04 04:22
val url = "jdbc:postgresql://localhost/scala_db?user=scala_user"

// Class.forName / DriverManager.getConnection are not needed here:
// spark.read loads the JDBC driver itself, as long as it is on the classpath.
val df2 = spark.read
      .format("jdbc")
      .option("url", url)
      .option("dbtable", "(select id, last_name from emps) e")
      .option("user", "scala_user")
      .load()

The key is "(select id, last_name from emps) e": you can pass a parenthesized subquery followed by an alias anywhere a table name is expected.
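As a minimal sketch of that pattern applied to the original question's query (the `asDbTable` helper and the alias `t` are illustrative, not part of Spark's API):

```scala
// Minimal sketch: wrap an arbitrary SELECT so it can be passed as "dbtable".
// The helper name `asDbTable` is illustrative, not a Spark API.
def asDbTable(query: String, alias: String): String =
  s"($query) $alias"

val subquery =
  """select top 1000 text from table1 with (nolock)
    |where threadid in (
    |  select distinct id from table2 with (nolock)
    |  where flag = 2 and date >= '1/1/2015' and userid in (1, 2, 3)
    |)""".stripMargin

val dbtable = asDbTable(subquery, "t")

// Then, assuming a SparkSession `spark` and a SQL Server JDBC URL `url`:
// spark.read.format("jdbc").option("url", url).option("dbtable", dbtable).load()
```

The parentheses and the alias are both required, since the string is spliced into a `SELECT ... FROM (subquery) alias` issued against the database.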

你好瞎i
#3 · 2019-01-04 04:31

You should pass a valid subquery as the dbtable argument. For example, in Scala:

val query = """(SELECT TOP 1000 
  -- and the rest of your query
  -- ...
) AS tmp  -- alias is mandatory*"""   

val url: String = ??? 

val jdbcDF = sqlContext.read.format("jdbc")
  .options(Map("url" -> url, "dbtable" -> query))
  .load()

* Hive Language Manual SubQueries: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+SubQueries
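As an aside not covered by either answer: Spark 2.4 and later also accept a `query` option on the JDBC source, which takes the bare SELECT with no wrapping parentheses or alias (a sketch; the URL is illustrative):

```scala
// Sketch assuming Spark 2.4+: the JDBC source's "query" option takes a bare
// SELECT statement, so no wrapping parentheses or alias are needed.
val url = "jdbc:sqlserver://localhost;databaseName=mydb" // illustrative URL
val query = "SELECT TOP 1000 text FROM table1 WITH (NOLOCK)"

val jdbcOptions = Map("url" -> url, "query" -> query)

// Assuming a SparkSession `spark`:
// val df = spark.read.format("jdbc").options(jdbcOptions).load()
```

Note that `query` and `dbtable` are mutually exclusive: specify one or the other in a single read, not both.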
