Hive in Detail: Introduction, Installation, and Deployment
I. What is Hive?
1. Hive is a data warehouse tool built on top of Hadoop.
2. It maps structured data files onto database tables and provides SQL-like query capabilities.
3. It translates SQL statements into MapReduce jobs for execution.
4. It can be used for data extraction, transformation, and loading (ETL).
5. Hive is a SQL parsing engine: it converts SQL statements into MapReduce jobs and runs them on Hadoop.
A Hive table is really just an HDFS directory, and the data in a Hive table is simply the files under that directory, one directory per table. For a partitioned table, each partition value is a sub-directory, and MapReduce jobs can read these files directly (see the directory-layout sketch after this list).
6. Strengths and weaknesses of Hive:
SQL-like statements make simple MapReduce aggregations quick to write, with no need to develop a dedicated MapReduce application.
Real-time (low-latency) queries are not supported.
7. Hive data falls into two parts: the actual stored data and the metadata.
The actual data is stored in HDFS; the metadata is stored in MySQL.
The metastore is the metadata database: Hive keeps its metadata in a relational database such as MySQL or Derby.
The metadata includes table names, columns, partitions and their properties, table properties (whether a table is external, and so on), and the directory where each table's data lives.
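To make point 5 concrete, here is an illustrative sketch of the directory layout (the warehouse root is controlled by hive.metastore.warehouse.dir; the table names come from examples later in this article, and the partition value shown is only an assumption):

/user/hive/warehouse/teacherq/                        one directory per table in the default database
/user/hive/warehouse/teacherq/stdent.txt              the table's data is simply the files in that directory
/user/hive/warehouse/td_part/logdate=2016-12-05/      for a partitioned table, each partition value is a sub-directory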
II. Hive architecture:
User interfaces: the CLI (shell), JDBC/ODBC, and a web UI (accessed through a browser).
Metadata store: usually a relational database such as MySQL or Derby.
Interpreter, compiler, optimizer, and executor: these take an HQL statement through parsing, compilation, optimization, and query-plan generation; the generated plan is stored in HDFS and subsequently executed by MapReduce.
Hadoop: HDFS provides the storage and MapReduce provides the computation (a star query such as select * from teacher does not generate a MapReduce job; it is just a full table scan).
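A quick way to see what the compiler and optimizer produce is to ask Hive for the query plan. A minimal sketch, assuming the teacher table from the parenthetical above exists:

hive> explain select count(*) from teacher;   -- prints the stages of the generated MapReduce plan
hive> explain select * from teacher;          -- typically shows only a fetch/table-scan stage, with no MapReduce job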
Before starting, make sure that:
Hadoop, ZooKeeper, Spark, Kafka, and MySQL are already up and running.
III. Installing and deploying Hive
Base dependencies:
1. JDK 1.6+   2. Hadoop 2.x   3. Hive 0.13-0.19 (the walkthrough below installs 2.1.0)   4. MySQL (plus the mysql-connector JDBC jar)
The environment variables are set up as follows:
#java
export JAVA_HOME=/soft/jdk1.7.0_79/
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
#hadoop
export HADOOP_HOME=/usr/local/hadoop/hadoop
#scala
export SCALA_HOME=/usr/local/hadoop/scala
#spark
export SPARK_HOME=/usr/local/hadoop/spark
#hive
export HIVE_HOME=/usr/local/hadoop/hive
#bin (set PATH after the *_HOME variables so they are already defined)
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$SCALA_HOME/bin:$SPARK_HOME/bin:$HIVE_HOME/bin
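These exports only take effect in a new shell. Assuming they were added to a profile file such as /etc/profile or ~/.bashrc (the original does not say which file was used), reload it before continuing:

source /etc/profile    # or: source ~/.bashrc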
Part 1: Installation
1. Download:
https://hive.apache.org/downloads.html
Unpack:
tar xvf apache-hive-2.1.0-bin.tar.gz -C /usr/local/hadoop/
cd /usr/local/hadoop/
mv apache-hive-2.1.0-bin hive
2. Edit the configuration
Edit the startup environment:
cd /usr/local/hadoop/hive
vim bin/hive-config.sh

#java
export JAVA_HOME=/soft/jdk1.7.0_79/
#hadoop
export HADOOP_HOME=/usr/local/hadoop/hadoop
#hive
export HIVE_HOME=/usr/local/hadoop/hive
Edit the main configuration file:
cd /usr/local/hadoop/hive
vim conf/hive-site.xml

<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://master:3306/hive?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
    <description>Username to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>xujun</value>
    <description>password to use against metastore database</description>
  </property>
</configuration>
3. Change the temp directory
In hive-site.xml, change the value of every configuration entry that contains "${system:java.io.tmpdir}" to a concrete local directory, for example:
/tmp/hive
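For example, the affected entries can be pointed at that directory like this (a sketch only; these property names are standard Hive 2.x settings whose defaults reference ${system:java.io.tmpdir}, not values copied from the author's file). They go inside the existing <configuration> element:

<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/tmp/hive</value>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/tmp/hive/${hive.session.id}_resources</value>
</property>
<property>
  <name>hive.querylog.location</name>
  <value>/tmp/hive</value>
</property>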
4. Install the MySQL JDBC driver
Download mysql-connector-java-5.1.40.zip from the MySQL website.
unzip mysql-connector-java-5.1.40.zip
cp mysql-connector-java-5.1.40-bin.jar /usr/local/hadoop/hive/lib/
Part 2: Install MySQL and make sure it is running
1. Create the metastore database and user:
create database hive;
grant all on *.* to hive@'%' identified by 'hive';
flush privileges;
Note: the password granted here must match javax.jdo.option.ConnectionPassword in hive-site.xml; in this article the two values differ ('hive' vs 'xujun'), so align them before initializing the schema.
Part 3: Initialize Hive (initialize the metastore schema)
cd /usr/local/hadoop/hive
bin/schematool -initSchema -dbType mysql
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL:     jdbc:mysql://hadoop3:3306/hive?createDatabaseInfoNotExist=true
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User:    hive
Starting metastore schema initialization to 2.1.0
Initialization script hive-schema-2.1.0.mysql.sql
Initialization script completed
schemaTool completed
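As a sanity check (not part of the original walkthrough), you can confirm the schema was created by listing the metastore tables in MySQL; tables such as DBS and TBLS should now exist:

mysql -u hive -p -e 'use hive; show tables;'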
Part 4: Start Hive
[hadoop@hadoop1 hadoop]$ hive/bin/hive
which: no hbase in (/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin://soft/jdk1.7.0_79//bin:/bin:/bin:/bin:/usr/local/hadoop/hive/bin:/home/hadoop/bin)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop/hive/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Logging initialized using configuration in jar:file:/usr/local/hadoop/hive/lib/hive-common-2.1.0.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. tez, spark) or using Hive 1.X releases.
hive> show databases;
OK
default
Time taken: 1.184 seconds, Fetched: 1 row(s)
hive>

Part 5: Hands-on practice
Creating tables with Hive. The first two settings below only affect the current session/terminal.
1. Show the current database name in the prompt:
hive> set hive.cli.print.current.db=true;
hive (default)>
2. Print the column names in query results when using select:
hive (default)> set hive.cli.print.header=true;
3. Create tables and load data:
hive> create table teacherq(id bigint,name string) row format delimited fields terminated by '\t';
OK
hive> create table people (id int ,name string);
OK
Time taken: 3.363 seconds
hive> SHOW TABLES;
OK
people
teacherq
student
Time taken: 0.283 seconds, Fetched: 1 row(s)
Load data:
hive> load data local inpath '/root/stdent.txt' into table teacherq;
Note: if you started Hive as an ordinary (non-root) user, use a relative path to load the local data:
mv stdent.txt /usr/local/hadoop/hive/
cd /usr/local/hadoop/hive
hive> load data local inpath 'stdent.txt' into table teacherq;
Loading data to table default.teacherq
OK
Time taken: 2.631 seconds
hive> select * from teacherq;
OK
1    zhangsan
2    lisi
3    wangwu
4    libai
Time taken: 1.219 seconds, Fetched: 4 row(s)
hive>
4. Creating tables (a managed/internal table by default)
Use this when you create the table first and load data into it afterwards.
create table trade_detail(id bigint, account string, income double, expenses double, time string) row format delimited fields terminated by '\t';
Loading data into an ordinary (managed) table:
load data local inpath '/root/student.txt' into table student;
Creating an external table
Use this when the data already exists in HDFS and the table is created afterwards to query, analyze, and manage it.
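A minimal sketch of that "data first, table second" flow (the directory and file name are illustrative, chosen to match the create statement below):

hadoop fs -mkdir -p /td_ext
hadoop fs -put /root/student.txt /td_ext/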
create external table td_ext(id bigint, account string, income double, expenses double, time string) row format delimited fields terminated by '\t' location '/td_ext';
Loading additional data into the external table:
load data local inpath '/root/student.txt' into table td_ext;
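The practical difference between the two table types shows up when a table is dropped, which the original text does not demonstrate, so treat this as a hedged note: dropping a managed table deletes both the metadata and the data files under the warehouse directory, while dropping an external table deletes only the metadata and leaves the HDFS files in place.

drop table trade_detail;   -- managed table: metadata and warehouse files are removed
drop table td_ext;         -- external table: metadata removed, files under /td_ext remain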
Creating a partitioned table
Method 1: create the partitioned table first, then load data.
Partitions exist to assist querying: they narrow the range a query has to scan, speed up retrieval, and let the data be managed according to a chosen scheme or condition.
create table td_part(id bigint, account string, income double, expenses double, time string) partitioned by (logdate string) row format delimited fields terminated by '\t';
Loading data into a partitioned table (the session below switches to a different sample table, beat; a matching load for td_part is sketched after the query output):
load data local inpath '/root/data.am' into table beat partition (nation="USA");
hive (itcast)> select * from beat;
OK
beat.id    beat.name    beat.size    beat.nation
1       glm     22.0    china
2       slsl    21.0    china
3       sdsd    20.0    china
NULL    www     19.0    china
Time taken: 0.22 seconds, Fetched: 4 row(s)
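For completeness, here is what loading into the td_part table created above would look like; this is only a sketch, and the local file path and logdate value are hypothetical rather than taken from the author's session:

load data local inpath '/root/trade.txt' into table td_part partition (logdate='2016-12-05');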
Method 2: create the directory in HDFS first, put the data into it, and finally update Hive's metadata.
1. Create the partition directory:
hive (itcast)> dfs -mkdir /beat/nation=japan;
hive (itcast)> dfs -ls /beat;
Found 2 items
drwxr-xr-x - hadoop supergroup 0 2016-12-05 16:07 /beat/nation=china
drwxr-xr-x - hadoop supergroup 0 2016-12-05 16:16 /beat/nation=japan
2. Put data into the partition directory:
hive (itcast)> dfs -put d.c /beat/nation=japan;
Querying at this point shows that the data has not been picked up yet, because the metastore does not know about the new partition:
hive (itcast)> dfs -ls /beat/nation=japan;
Found 1 items
-rw-r--r-- 3 hadoop supergroup 20 2016-12-05 16:16 /beat/nation=japan/d.c
hive (itcast)> select * from beat;
OK
beat.id    beat.name    beat.size    beat.nation
1       glm     22.0    china
2       slsl    21.0    china
3       sdsd    20.0    china
NULL    www     19.0    china
Time taken: 0.198 seconds, Fetched: 4 row(s)
3. Manually update the table's metadata to register the new partition:
hive (itcast)> alter table beat add partition (nation='japan') location "/beat/nation=japan";
OK
Time taken: 0.089 seconds
hive (itcast)> select * from beat;
OK
beat.id    beat.name    beat.size    beat.nation
1       glm     22.0      china
2       slsl    21.0      china
3       sdsd    20.0      china
NULL    www     19.0      china
7       ab      111.0     japan
8       rb      23234.0   japan
Time taken: 0.228 seconds, Fetched: 6 row(s)
At this point the partition's data is visible.
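A hedged aside, not used in the original walkthrough: instead of registering each new directory by hand with alter table ... add partition, Hive can scan the table's location and register any key=value partition directories it finds there (this assumes the table's location is the /beat directory used above):

msck repair table beat;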
Dropping a partition
Use ALTER TABLE ... DROP PARTITION to drop a partition; the partition's metadata and its data are deleted together.
Example:
ALTER TABLE beat DROP PARTITION (nation='japan');
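To confirm what remains after the drop, show partitions (a standard HiveQL statement, though not shown in the original) lists the partitions the metastore still knows about:

show partitions beat;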
A special case:
1. You want one of the table's own columns to double as the partition column; by default Hive refuses to create such a table:
hive (itcast)> create table sms(id bigint ,content string,area string) partitioned by (area string) row format delimited fields terminated by '\t' ;
FAILED: SemanticException [Error 10035]: Column repeated in partitioning columns
Workaround:
add a redundant field instead, e.g. use area_pat as the partition column,
or modify the Hive source code.
hive (itcast)> create table sms(id bigint ,content string,area string) partitioned by (area_pat string) row format delimited fields terminated by '\t' ;
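Queries then filter on the renamed partition column. A small illustration; the partition value 'beijing' is hypothetical:

select * from sms where area_pat = 'beijing';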