Amazon Redshift - Unload to S3 - Dynamic S3 file name

Posted 2019-09-07 13:14

I have been using the UNLOAD statement in Redshift for a while now; it makes it easy to dump a file to S3 so that people can analyze it.

The time has come to automate it. We have Amazon Data Pipeline running for several tasks, and I wanted to run a SQLActivity to execute UNLOAD automatically. I use a SQL script hosted in S3.

The query itself is correct, but what I have been trying to figure out is how to dynamically assign the name of the file. For example:

UNLOAD('<the_query>')
TO 's3://my-bucket/' || to_char(current_date)
WITH CREDENTIALS '<credentials>'
ALLOWOVERWRITE
PARALLEL OFF

doesn't work, and of course I suspect that you can't execute functions (to_char) in the TO line. Is there any other way I can do it?

And if UNLOAD is not the way, what other options do I have for automating such tasks with the currently available infrastructure (Redshift + S3 + Data Pipeline; our Amazon EMR is not active yet)?

The only thing I thought could work (but I'm not sure) is, instead of pointing to a script file, to copy the script into the Script option of the SQLActivity (at the moment it points to a file) and reference #{@scheduledStartTime}.
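To make the idea concrete, the inline script could look something like this, with the expression evaluated by Data Pipeline before the SQL reaches Redshift (the query and credentials are the same placeholders as above; I have not verified this myself):

```sql
UNLOAD('<the_query>')
TO 's3://my-bucket/#{format(@scheduledStartTime, 'YYYY-MM-dd')}'
WITH CREDENTIALS '<credentials>'
ALLOWOVERWRITE
PARALLEL OFF
```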

2 Answers
聊天终结者
Answer 2 · 2019-09-07 14:00

Are you able to SSH into a machine that can reach the cluster? If so, I would suggest writing a shell script where you can create variables and whatnot, then pass those variables into the connection's statement/query.
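A minimal sketch of that approach, assuming psql is installed and the cluster endpoint, port, user, and database below are replaced with your own (the query and credentials placeholders mirror the question):

```shell
#!/bin/sh
# Build the date-based S3 prefix in the shell, sidestepping the
# restriction that the TO clause must be a string literal.
EXPORT_DATE=$(date +%Y-%m-%d)
S3_PATH="s3://my-bucket/${EXPORT_DATE}/"

# Assemble the UNLOAD statement with the path already interpolated.
SQL="UNLOAD('<the_query>')
TO '${S3_PATH}'
WITH CREDENTIALS '<credentials>'
ALLOWOVERWRITE
PARALLEL OFF;"

echo "$SQL"

# Redshift speaks the PostgreSQL wire protocol, so psql can execute it
# (endpoint, user, and dbname here are placeholders):
# psql -h my-cluster.example.us-east-1.redshift.amazonaws.com \
#      -p 5439 -U user -d dbname -c "$SQL"
```

The same script can then be scheduled with cron or wrapped in a Data Pipeline ShellCommandActivity.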

孤傲高冷的网名
Answer 3 · 2019-09-07 14:05

Why not use RedshiftCopyActivity to copy from Redshift to S3? The input is a RedshiftDataNode and the output is an S3DataNode, where you can specify an expression for directoryPath.

You can also specify the transformSql property on RedshiftCopyActivity to override the default query of "select * from" + inputRedshiftTable.
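As an illustration, a RedshiftCopyActivity fragment with transformSql set might look like this (the filter query is hypothetical; adjust it to your table and columns):

```json
{
  "id": "RedshiftCopyActivityId1",
  "type": "RedshiftCopyActivity",
  "input": { "ref": "RedshiftDataNodeId1" },
  "output": { "ref": "S3DataNodeId1" },
  "transformSql": "select * from orders where order_date >= current_date - 1"
}
```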

Sample pipeline:

{
  "objects": [
    {
      "id": "CSVId1",
      "name": "DefaultCSV1",
      "type": "CSV"
    },
    {
      "id": "RedshiftDatabaseId1",
      "databaseName": "dbname",
      "username": "user",
      "name": "DefaultRedshiftDatabase1",
      "*password": "password",
      "type": "RedshiftDatabase",
      "clusterId": "redshiftclusterId"
    },
    {
      "id": "Default",
      "scheduleType": "timeseries",
      "failureAndRerunMode": "CASCADE",
      "name": "Default",
      "role": "DataPipelineDefaultRole",
      "resourceRole": "DataPipelineDefaultResourceRole"
    },
    {
      "id": "RedshiftDataNodeId1",
      "schedule": { "ref": "ScheduleId1" },
      "tableName": "orders",
      "name": "DefaultRedshiftDataNode1",
      "type": "RedshiftDataNode",
      "database": { "ref": "RedshiftDatabaseId1" }
    },
    {
      "id": "Ec2ResourceId1",
      "schedule": { "ref": "ScheduleId1" },
      "securityGroups": "MySecurityGroup",
      "name": "DefaultEc2Resource1",
      "role": "DataPipelineDefaultRole",
      "logUri": "s3://myLogs",
      "resourceRole": "DataPipelineDefaultResourceRole",
      "type": "Ec2Resource"
    },
    {
      "myComment": "This object is used to control the task schedule.",
      "id": "ScheduleId1",
      "name": "RunOnce",
      "occurrences": "1",
      "period": "1 Day",
      "type": "Schedule",
      "startAt": "FIRST_ACTIVATION_DATE_TIME"
    },
    {
      "id": "S3DataNodeId1",
      "schedule": { "ref": "ScheduleId1" },
      "directoryPath": "s3://my-bucket/#{format(@scheduledStartTime, 'YYYY-MM-dd-HH-mm-ss')}",
      "name": "DefaultS3DataNode1",
      "dataFormat": { "ref": "CSVId1" },
      "type": "S3DataNode"
    },
    {
      "id": "RedshiftCopyActivityId1",
      "output": { "ref": "S3DataNodeId1" },
      "input": { "ref": "RedshiftDataNodeId1" },
      "schedule": { "ref": "ScheduleId1" },
      "name": "DefaultRedshiftCopyActivity1",
      "runsOn": { "ref": "Ec2ResourceId1" },
      "type": "RedshiftCopyActivity"
    }
  ]
}
