I'm trying to import a SQL file from an S3 bucket into a MySQL database on an EC2 instance.
The SQL file is publicly accessible, and using the mysql client installed on the instance I'm executing the following command.
mysql> source https://s3-ap-southeast-1.amazonaws.com/{sql-file}
When I do, I get the following error.
ERROR:
Failed to open file 'https://s3-ap-southeast-1.amazonaws.com/{sql-file}', error: 2
I'm not an expert, but is what I'm trying even possible?
You can't reliably do this in one step: the source command only reads local files (error 2 is "No such file or directory"). You have to download the file, then load the local (downloaded) copy into MySQL.
$ wget https://s3-ap-southeast-1.amazonaws.com/{sql-file} -O some-local-filename.sql
$ mysql [options]
mysql> source some-local-filename.sql
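Since the file is already in S3, the download step can also be done with the AWS CLI instead of wget, if the CLI is installed on the instance. The bucket and key below are placeholders, not the actual names from the question:
$ aws s3 cp s3://your-bucket/your-dump.sql some-local-filename.sql
$ mysql [options]
mysql> source some-local-filename.sql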
What about not downloading the file, but piping it directly into the MySQL DB?
aws s3 cp s3://'<bucket name>'/'<database name>'.sql.gz - | gunzip | mysql \
    --host=127.0.0.1 \
    --user='<user name>' \
    --password='<password>' \
    --port=3306
or,
aws s3 cp s3://'<bucket name>'/'<database name>'.sql - | mysql \
    --host=127.0.0.1 \
    --user='<user name>' \
    --password='<password>' \
    --port=3306
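Note that both commands assume the dump itself contains CREATE DATABASE / USE statements. If it doesn't, append the target database name to the mysql command, for example (same placeholders as above):
aws s3 cp s3://'<bucket name>'/'<database name>'.sql - | mysql \
    --host=127.0.0.1 \
    --user='<user name>' \
    --password='<password>' \
    --port=3306 \
    '<database name>'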
There is a longer procedure for doing this; the best details are given by AWS themselves. Please read:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Procedural.Importing.html