I am trying to run an ETL process on Amazon Redshift, written in Apache Spark. The same code works fine against Postgres, but against Redshift it throws a SQLFeatureNotSupportedException: [Amazon][JDBC](10220) Driver not capable error.
I am reading data from flat files and writing it to Redshift tables. The Spark code looks like this:
spark.read
  .schema(getFileNameAndSchema(table)._2)
  .csv(getFileNameAndSchema(table)._1)
  .write
  .mode(SaveMode.Overwrite)
  .jdbc("jdbc:redshift://url:5429", table, dbProperties())
While googling I found that this type of exception is thrown when the database does not support ResultSet.TYPE_SCROLL_INSENSITIVE / ResultSet.CONCUR_UPDATABLE cursors; a rough illustration of that scenario is below. If that is indeed the case with Redshift, what is the workaround?
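For reference, the scenario those posts describe boils down to something like the plain-JDBC sketch below (an illustration only, not code from my job; the URL and dbProperties() are the same placeholders as above):

import java.sql.{Connection, DriverManager, ResultSet}

// Request a scroll-insensitive, updatable cursor from the driver.
// If the driver cannot provide this combination, it can fail with
// SQLFeatureNotSupportedException ("Driver not capable").
val conn: Connection = DriverManager.getConnection("jdbc:redshift://url:5429", dbProperties())
val stmt = conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_UPDATABLE)
stmt.close()
conn.close()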