I'm working with Scala and Spark and need to access Azure Blob Storage and list its files. What is the best way to do that, knowing the Spark build is for Scala 2.11?
1 Answer
- For Spark running locally, there is an official blog post that introduces how to access Azure Blob Storage from Spark. The key is to configure your Azure Storage account as HDFS-compatible storage in the core-site.xml file and to add two JARs, hadoop-azure and azure-storage, to your classpath so that HDFS can be accessed via the wasb[s] protocol. You can refer to the official tutorial on HDFS-compatible storage with wasb, and to the blog about configuration for HDInsight, for more details.
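The core-site.xml configuration mentioned above can be sketched as follows. The storage account name `myaccount` and the key placeholder are assumptions for illustration, not values from the original answer:

```xml
<!-- core-site.xml: register the storage account key so Hadoop can
     resolve wasb[s]:// URLs for this account (name is hypothetical). -->
<configuration>
  <property>
    <name>fs.azure.account.key.myaccount.blob.core.windows.net</name>
    <value>YOUR_STORAGE_ACCOUNT_KEY</value>
  </property>
</configuration>
```

The hadoop-azure and azure-storage JARs can alternatively be pulled in at submit time, e.g. with `spark-submit --packages org.apache.hadoop:hadoop-azure:<version>,com.microsoft.azure:azure-storage:<version>` (versions must match your Hadoop distribution).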
- For Spark running on Azure, the only difference is that you access HDFS with wasb; the other preparations have already been done by Azure when the HDInsight cluster with Spark is created. Files can then be listed with SparkContext's wholeTextFiles, or enumerated with listFiles from the Hadoop FileSystem API.
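Both listing approaches can be sketched in Scala as below. The container name `mycontainer`, account name `myaccount`, and the helper `wasbUrl` are hypothetical, and the sketch assumes the hadoop-azure/azure-storage setup described above is already in place; it is not a definitive implementation:

```scala
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.spark.{SparkConf, SparkContext}

object ListBlobFiles {
  // Hypothetical helper: build a wasbs:// URL for a container/account/path.
  def wasbUrl(container: String, account: String, path: String): String =
    s"wasbs://$container@$account.blob.core.windows.net/$path"

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("list-blobs"))
    val root = new Path(wasbUrl("mycontainer", "myaccount", ""))

    // Option 1: enumerate paths via the Hadoop FileSystem API
    // (listFiles returns a remote iterator; recursive = true).
    val fs = root.getFileSystem(sc.hadoopConfiguration)
    val it = fs.listFiles(root, true)
    while (it.hasNext) println(it.next().getPath)

    // Option 2: SparkContext.wholeTextFiles yields (path, content)
    // pairs, so the keys give the file listing.
    sc.wholeTextFiles(wasbUrl("mycontainer", "myaccount", "*"))
      .keys
      .collect()
      .foreach(println)

    sc.stop()
  }
}
```

Option 1 only touches metadata and is cheaper for a pure listing; Option 2 also reads the file contents, so prefer it only when you need the data as well as the names.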