Loading JSON data to AWS Redshift results in NULL

Posted 2020-02-10 18:11

I am trying to perform a load/copy operation to import data from JSON files in an S3 bucket directly to Redshift. The COPY operation succeeds, and after the COPY, the table has the correct number of rows/records, but every record is NULL!

The load takes the expected amount of time, the COPY command returns OK, and the Redshift console reports the load as successful with no errors... but if I perform a simple query against the table, it returns only NULL values.

The JSON is very simple and flat, and formatted correctly (according to the examples I found here: http://docs.aws.amazon.com/redshift/latest/dg/r_COPY_command_examples.html).

Basically, it is one JSON object per line, formatted like:

{ "col1": "val1", "col2": "val2", ... }
{ "col1": "val1", "col2": "val2", ... }
{ "col1": "val1", "col2": "val2", ... }

I have tried things like rewriting the schema based on the values and data types found in the JSON objects, and also copying from uncompressed files. I thought perhaps the JSON was not being parsed correctly on load, but presumably an error would be raised if the objects could not be parsed.
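As I understand it, records that fail to parse should be reported in the stl_load_errors system table, so a query along these lines should surface them:

-- check the most recent load errors, newest first
select starttime, filename, line_number, colname, err_reason
from stl_load_errors
order by starttime desc
limit 10;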

My COPY command looks like this:

copy events from 's3://mybucket/json/prefix' 
with credentials 'aws_access_key_id=xxx;aws_secret_access_key=xxx'
json 'auto' gzip;

Any guidance would be appreciated! Thanks.

3 Answers
干净又极端 · 2020-02-10 18:51

For cases where the JSON object keys don't correspond directly to the column names, you can use a JSONPaths file to map the JSON elements to columns, as mentioned by TimZ and described here.
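A minimal sketch, assuming hypothetical JSON keys Col1 and Col2 being mapped onto a two-column table:

{
  "jsonpaths": [
    "$['Col1']",
    "$['Col2']"
  ]
}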

女痞 · 2020-02-10 18:58

COPY maps the data elements in the JSON source data to the columns in the target table by matching object keys, or names, in the source name/value pairs to the names of columns in the target table. The matching is case-sensitive. Column names in Amazon Redshift tables are always lowercase, so when you use the ‘auto’ option, matching JSON field names must also be lowercase. If the JSON field name keys aren't all lowercase, you can use a JSONPaths file to explicitly map column names to JSON field name keys.
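If your Redshift release supports it, the simplest fix is the 'auto ignorecase' argument, which tells COPY to match JSON keys to column names without regard to case. A sketch against the table from the question:

copy events from 's3://mybucket/json/prefix'
with credentials 'aws_access_key_id=xxx;aws_secret_access_key=xxx'
json 'auto ignorecase' gzip;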

If that option isn't available, the solution is to use a JSONPaths file.

Example JSON:

{
  "Name": "Major",
  "Age": 19,
  "Add": {
    "street": {
      "st": "5 maint st",
      "ci": "Dub"
    },
    "city": "Dublin"
  },
  "Category_Name": ["MLB", "GBM"]
}

Example table:

create table customer (
  name varchar,
  age int,
  address varchar,
  catname varchar
);

Example JSONPaths file:

{
  "jsonpaths": [
    "$['Name']",
    "$['Age']",
    "$['Add']",
    "$['Category_Name']"
  ]
}

Example COPY command:

copy customer
from 's3://mybucket/customer.json'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
json 's3://mybucket/jpath.json'; -- JSONPaths file to map fields
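Note that the mapping from a JSONPaths file is positional: the first expression in the jsonpaths array fills the first column of customer (name), the second fills age, and so on, so the order of the expressions must match the table's column order.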

Examples are taken from here

再贱就再见 · 2020-02-10 19:00

So I have discovered the cause. It would not have been evident from the description in my original post.

When you create a table in Redshift, the column names are converted to lowercase. When you perform a COPY with json 'auto', however, the matching of JSON keys to column names is case-sensitive.

The input data I have been trying to load uses camelCase for its keys, so when I perform the COPY, the keys do not match up with the defined schema (which now uses all-lowercase column names).

The operation does not raise an error, though. It just leaves NULLs in every column that did not match (in this case, all of them).
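To make that concrete (using hypothetical camelCase keys eventId and eventType; substitute your real ones), a JSONPaths file can bridge the case mismatch to the lowercase columns eventid and eventtype:

{
  "jsonpaths": [
    "$['eventId']",
    "$['eventType']"
  ]
}

copy events from 's3://mybucket/json/prefix'
with credentials 'aws_access_key_id=xxx;aws_secret_access_key=xxx'
json 's3://mybucket/events_jsonpaths.json' gzip;

(The JSONPaths file location above is also hypothetical.) Alternatively, renaming the keys in the source data to all-lowercase makes json 'auto' work as-is.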

Hope this helps somebody avoid the same confusion!
