How can I make Apache Spark use multipart uploads when saving data to Amazon S3? Spark writes data using the RDD.saveAs...File methods. When the destination starts with s3n://, Spark automatically uses JetS3t to do the upload, but this fails for files larger than 5 GB. Large files must be uploaded to S3 with multipart upload, which is supposed to be beneficial for smaller files as well. JetS3t supports multipart uploads via MultipartUtils, but Spark does not use this in the default configuration. Is there a way to make it use this functionality?
Answer 1:
This is a limitation of s3n; you can use the newer s3a protocol to access your files in S3. s3a is based on the aws-sdk library and supports many more features, including multipart upload. More details in this link:
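As a sketch of how this could look (assuming a Hadoop build with the hadoop-aws JAR and its AWS SDK dependency on the classpath), the s3a connector and its multipart behavior are driven by Hadoop properties, which Spark can pick up from spark-defaults.conf via the spark.hadoop. prefix; all values below are illustrative:

```
# spark-defaults.conf -- illustrative values, adjust for your setup
spark.hadoop.fs.s3a.access.key           YOUR_ACCESS_KEY
spark.hadoop.fs.s3a.secret.key           YOUR_SECRET_KEY
# Size in bytes of each multipart part (here 100 MB; S3 requires at least 5 MB)
spark.hadoop.fs.s3a.multipart.size       104857600
# Uploads larger than this many bytes switch to multipart
spark.hadoop.fs.s3a.multipart.threshold  2147483647
```

Jobs then write to s3a:// paths, e.g. rdd.saveAsTextFile("s3a://my-bucket/output") (my-bucket is a placeholder), and the connector performs multipart uploads transparently for large files instead of hitting the 5 GB single-PUT limit.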
Answer 2:
s3n seems to be on the deprecation path.
From their documentation:

"Amazon EMR used the S3 Native FileSystem with the URI scheme, s3n. While this still works, we recommend that you use the s3 URI scheme for the best performance, security, and reliability."