Hive FAILED: ParseException line 2:0 cannot recognize input near ''macaddress'' 'CHAR' '(' in column specification

Posted 2019-02-23 00:12

Question:

I've tried running hive -v -f sqlfile.sql

Here is the content of the file:

CREATE TABLE UpStreamParam (
'macaddress' CHAR(50),
'datats' BIGINT,
'cmtstimestamp' BIGINT,
'modulation' INT,
'chnlidx'   INT,
'severity' BIGINT,
'rxpower'  FLOAT,
'sigqnoise' FLOAT,
'noisedeviation'  FLOAT,
'prefecber'  FLOAT,
'postfecber'  FLOAT,
'txpower'  FLOAT,
'txpowerdrop' FLOAT,
'nmter'  FLOAT,
'premtter'  FLOAT,
'postmtter'  FLOAT,
'unerroreds'  BIGINT,
'corrected'  BIGINT,
'uncorrectables'  BIGINT)
STORED AS ORC TBLPROPERTIES ("orc.compress"="SNAPPY","orc.bloom.filters.columns"="macaddress")
PARTITIONED BY ('cmtsid' CHAR(50),' date' INT)
LOCATION '/usr/hive/warehouse/UpStreamParam' ;

And I'm getting the following error:

FAILED: ParseException line 2:0 cannot recognize input near ''macaddress'' 'CHAR' '(' in column specification

Answer 1:

First, column names must be surrounded by ` (backticks), not ' (single quotes).

Therefore you have to replace 'macaddress' with `macaddress`, and likewise for all the other column names.
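
To illustrate, a minimal sketch (the table name demo is hypothetical, not from the question): in HiveQL a single-quoted token is a string literal, while a backtick-quoted token is an identifier, which is why the parser rejects a quoted string where it expects a column name.

-- Fails with a ParseException: 'col1' is a string literal, not a column name
CREATE TABLE demo ('col1' INT);

-- Parses correctly: backticks quote the identifier
CREATE TABLE demo (`col1` INT);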

Second, the order of the STORED AS, TBLPROPERTIES, PARTITIONED BY, and LOCATION clauses is wrong. The correct order is PARTITIONED BY, STORED AS, LOCATION, TBLPROPERTIES.

See the Hive language manual for details: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-CreateTable

So the correct code is:

CREATE TABLE UpStreamParam (
`macaddress` CHAR(50),
`datats` BIGINT,
`cmtstimestamp` BIGINT,
`modulation` INT,
`chnlidx`   INT,
`severity` BIGINT,
`rxpower`  FLOAT,
`sigqnoise` FLOAT,
`noisedeviation`  FLOAT,
`prefecber`  FLOAT,
`postfecber`  FLOAT,
`txpower`  FLOAT,
`txpowerdrop` FLOAT,
`nmter`  FLOAT,
`premtter`  FLOAT,
`postmtter`  FLOAT,
`unerroreds`  BIGINT,
`corrected`  BIGINT,
`uncorrectables`  BIGINT)
PARTITIONED BY (`cmtsid` CHAR(50), `date` INT)
STORED AS ORC
LOCATION '/usr/hive/warehouse/UpStreamParam'
TBLPROPERTIES ("orc.compress"="SNAPPY","orc.bloom.filters.columns"="macaddress");


Answer 2:

The problem could also be due to the Hive version: the CHAR data type is only supported from Hive 0.13 onward.

If you are using a lower version, try the STRING or VARCHAR data type instead.
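
For example, a minimal sketch of the same statement rewritten for an older Hive, trimmed to the first few columns for brevity (note that VARCHAR itself requires Hive 0.12 or later, while STRING works on all versions):

CREATE TABLE UpStreamParam (
`macaddress` STRING,    -- STRING instead of CHAR(50)
`datats` BIGINT,
`cmtstimestamp` BIGINT)
PARTITIONED BY (`cmtsid` STRING, `date` INT)
STORED AS ORC
LOCATION '/usr/hive/warehouse/UpStreamParam'
TBLPROPERTIES ("orc.compress"="SNAPPY","orc.bloom.filters.columns"="macaddress");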

Please refer to https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-Char