I'm new to Hadoop. I know that HCatalog is a table and storage management layer for Hadoop, but how exactly does it work and how do I use it? Please give a simple example.
HCatalog supports reading and writing files in any format for which a Hive SerDe (serializer-deserializer) can be written. By default, HCatalog supports RCFile, CSV, JSON, and SequenceFile formats. To use a custom format, you must provide the InputFormat, OutputFormat, and SerDe.
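To illustrate, a custom format is wired up in the table DDL. This is only a sketch; the com.example classes are placeholders for your own implementations:
CREATE TABLE custom_events (line STRING)
ROW FORMAT SERDE 'com.example.MyCustomSerDe'
STORED AS INPUTFORMAT 'com.example.MyInputFormat'
OUTPUTFORMAT 'com.example.MyOutputFormat';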
HCatalog is built on top of the Hive metastore and incorporates components from the Hive DDL. HCatalog provides read and write interfaces for Pig and MapReduce and uses Hive’s command line interface for issuing data definition and metadata exploration commands.
It also presents a REST interface to allow external tools access to Hive DDL (Data Definition Language) operations, such as “create table” and “describe table”.
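For example, assuming WebHCat is running on its default port 50111, describing a table over REST looks roughly like this (host and user.name are placeholders):
curl -s 'http://localhost:50111/templeton/v1/ddl/database/default/table/student?user.name=hcat'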
HCatalog presents a relational view of data. Data is stored in tables and these tables can be placed into databases. Tables can also be partitioned on one or more keys. For a given value of a key (or set of keys) there will be one partition that contains all rows with that value (or set of values).
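As a quick illustration of partitioning (a sketch; the table and column names are made up), every distinct value of dt below gets its own partition:
CREATE TABLE page_views (user_id STRING, url STRING)
PARTITIONED BY (dt STRING)
STORED AS RCFILE;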
Edit: Most of the text is from https://cwiki.apache.org/confluence/display/Hive/HCatalog+UsingHCat.
HCatalog is the metadata management layer for the Hadoop file system. It can be accessed through WebHCat, which exposes a REST API. Any table created in HCatalog can be accessed through Hive and Pig.
As you mentioned, HCatalog is "a table and storage management layer for Hadoop": it gives a high-level abstraction to other frameworks such as MapReduce, Spark, and Pig by performing the I/O operations against the distributed storage layer for Hive tables.
HCatalog comprises three key elements: the shared Hive metastore for schemas and data types; read/write interfaces for Pig and MapReduce (HCatLoader/HCatStorer and HCatInputFormat/HCatOutputFormat); and command-line/REST access through the hcat CLI and WebHCat.
Once HCatalog is installed and running successfully, you can issue Hive DDL commands from the CLI.
Example:
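A minimal sketch using the hcat command, which executes Hive DDL; the student table definition here is an assumption that matches the Pig example further below:
hcat -e "CREATE TABLE student (name STRING, marks INT) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' STORED AS TEXTFILE;"
hcat -e "SHOW TABLES;"
hcat -e "DESCRIBE student;"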
In short, HCatalog opens up the Hive metadata to other MapReduce tools. Every MapReduce tool has its own notion of HDFS data (for example, Pig sees HDFS data as a set of files, while Hive sees it as tables). With a table-based abstraction, HCatalog-supported MapReduce tools do not need to care about where the data is stored, in which format, or in which storage layer (HBase or HDFS).
You also get WebHCat, which lets you submit jobs in a RESTful way if you configure it alongside HCatalog.
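For instance, a Pig job can be submitted through WebHCat roughly like this (a sketch: port 50111 is the WebHCat default, and user.name and statusdir are placeholders):
curl -s -d user.name=hcat \
     -d arg=-useHCatalog \
     --data-urlencode "execute=A = LOAD 'student' USING org.apache.hcatalog.pig.HCatLoader(); DUMP A;" \
     -d statusdir=/tmp/pig.output \
     'http://localhost:50111/templeton/v1/pig'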
Here is a very basic example of how to use HCatalog.
I have a table in Hive named STUDENT, stored at an HDFS location:
neethu   90
malini   90
sunitha  98
mrinal   56
ravi     90
joshua   8
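For reference, data like this could have been loaded into the table from a tab-delimited file (a sketch; the HDFS path is a placeholder):
hive -e "LOAD DATA INPATH '/user/hive/input/student.txt' INTO TABLE student;"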
Now suppose I want to load this table into Pig for further transformation of the data. In this scenario I can use HCatalog:
When using table information from the Hive metastore with Pig, add the -useHCatalog option when invoking pig:
pig -useHCatalog
(you may need to export HCAT_HOME first, e.g. export HCAT_HOME=/usr/lib/hive-hcatalog/)
Now load the table into Pig:
A = LOAD 'student' USING org.apache.hcatalog.pig.HCatLoader();
Now you have loaded the table into Pig. To check the schema, just do a DESCRIBE on the relation. (In newer Hive releases, the loader class is org.apache.hive.hcatalog.pig.HCatLoader.)
DESCRIBE A;
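Assuming the student schema sketched earlier, DESCRIBE would print something like A: {name: chararray, marks: int}. You can then transform the relation and write the result back to a Hive table with HCatStorer; the top_students table below is an assumption and must already exist in Hive with a matching schema:
B = FILTER A BY marks >= 90;
STORE B INTO 'top_students' USING org.apache.hcatalog.pig.HCatStorer();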
Thanks