
Hive Installation

Posted: 2015-06-18 20:10:04

Tags: linux, database, profile, user login, identified


  1. Install the MySQL database
         Installation order:
              (1) mysql-server
              (2) mysql-client
              (3) Start the mysql service
              (4) Log in as root and, via a grant, create the hive metastore database and the hive user:
         grant all on hive.* to 'hive'@'%' identified by 'hive';
         This hive database, together with its user and password, is needed later when configuring Hive!
    Note:
    For installing MySQL on Red Hat Linux, see http://blog.itpub.net/28929558/viewspace-1192693/
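Steps (1)-(4) can be sketched as the shell session below. The package and service names (`mysql-server`, `mysqld`) are assumptions for a Red Hat-style system and may differ on your distribution, so the install and login commands are left commented; only the grant script itself is written out.

```shell
# Write the statements from step (4) to a file so they can be fed to mysql.
# GRANT ... IDENTIFIED BY implicitly creates the user on MySQL 5.x.
cat > /tmp/create_hive_metastore.sql <<'SQL'
CREATE DATABASE IF NOT EXISTS hive;
GRANT ALL ON hive.* TO 'hive'@'%' IDENTIFIED BY 'hive';
FLUSH PRIVILEGES;
SQL

# On a live Red Hat-style host (assumed package/service names):
# yum install -y mysql-server mysql                    # (1) server, (2) client
# service mysqld start                                 # (3) start the service
# mysql -u root -p < /tmp/create_hive_metastore.sql    # (4) run as root

cat /tmp/create_hive_metastore.sql
```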

2. Add the environment variables to /etc/profile

  

HADOOP_PREFIX=/opt/hadoop
JAVA_HOME=/opt/jdk18
ZOOKEEPER_HOME=/opt/zookeeper
HBASE_HOME=/opt/hbase
HIVE_HOME=/opt/hive

PATH=$PATH:$JAVA_HOME/bin:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin:$ZOOKEEPER_HOME/bin:$HBASE_HOME/bin:$HIVE_HOME/bin
export HADOOP_PREFIX JAVA_HOME ZOOKEEPER_HOME HBASE_HOME HIVE_HOME PATH
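After editing /etc/profile, reload it (or log in again) so the variables take effect. The check below replicates the relevant entries for verification only; the paths are this guide's and should be adjusted to your install locations.

```shell
# source /etc/profile          # reload the profile in an existing session
# Mirror the profile entries to confirm they resolve as expected:
export HIVE_HOME=/opt/hive
export PATH="$PATH:$HIVE_HOME/bin"
echo "$HIVE_HOME"
```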



3. In $HIVE_HOME/conf, copy and rename the two configuration templates:


  cp hive-env.sh.template hive-env.sh
  cp hive-default.xml.template hive-site.xml

4. Configure hive-env.sh: set HADOOP_HOME to the Hadoop install directory


  # Set HADOOP_HOME to point to a specific hadoop install directory
  HADOOP_HOME=/opt/hadoop
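Steps 3 and 4 together amount to two copies and a one-line append. A sketch, run against a scratch directory with stand-in template files so it is safe to try anywhere; substitute your real $HIVE_HOME/conf directory:

```shell
# Scratch stand-in for $HIVE_HOME/conf; replace with the real directory.
CONF=${CONF:-/tmp/hive-conf-demo}
mkdir -p "$CONF"
touch "$CONF/hive-env.sh.template" "$CONF/hive-default.xml.template"  # stand-ins

cp "$CONF/hive-env.sh.template" "$CONF/hive-env.sh"         # step 3
cp "$CONF/hive-default.xml.template" "$CONF/hive-site.xml"
echo 'HADOOP_HOME=/opt/hadoop' >> "$CONF/hive-env.sh"       # step 4
grep '^HADOOP_HOME=' "$CONF/hive-env.sh"
```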

5. Configure hive-site.xml: specify the MySQL JDBC driver class, database URL, username, and password.
The modified properties are as follows:

  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
    <description>username to use against metastore database</description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive</value>
    <description>password to use against metastore database</description>
  </property>

  <property>
    <name>hive.metastore.local</name>
    <value>true</value>
    <description></description>
  </property>

These properties are unchanged from version 0.11.0 (note that the startup log later in this guide shows hive.metastore.local is deprecated and ignored in 0.14; set hive.metastore.uris instead when connecting to a remote metastore). The following properties, however, do need attention: create a temporary IO directory under the hive home directory, then point these parameters at it:

  <property>
    <name>hive.querylog.location</name>
    <value>/home/zhang/hive/iotmp</value>
    <description>Location of Hive run time structured log file</description>
  </property>

  <property>
    <name>hive.exec.local.scratchdir</name>
    <value>/home/zhang/hive/iotmp</value>
    <description>Local scratch space for Hive jobs</description>
  </property>

  <property>
    <name>hive.downloaded.resources.dir</name>
    <value>/home/zhang/hive/iotmp</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
  </property>

Note: if these are not configured, Hive will throw errors on startup or when running commands.
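The directory itself must exist before Hive starts. A minimal sketch; the path defaults to a scratch location here, while in this guide it is /home/zhang/hive/iotmp, and whichever you use must match all three property values above:

```shell
# Create the local temp directory referenced by hive.querylog.location,
# hive.exec.local.scratchdir, and hive.downloaded.resources.dir.
HIVE_IOTMP=${HIVE_IOTMP:-/tmp/hive/iotmp}   # /home/zhang/hive/iotmp in this guide
mkdir -p "$HIVE_IOTMP"
ls -ld "$HIVE_IOTMP"
```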


6. Download mysql-connector-java-5.1.34-bin.jar and place it in the $HIVE_HOME/lib directory (it can be downloaded from the MySQL website).
Without this jar, starting Hive fails with the following error:

  Caused by: org.datanucleus.store.rdbms.connectionpool.DatastoreDriverNotFoundException: The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
          at org.datanucleus.store.rdbms.connectionpool.AbstractConnectionPoolFactory.loadDriver(AbstractConnectionPoolFactory.java:58)
          at org.datanucleus.store.rdbms.connectionpool.BoneCPConnectionPoolFactory.createConnectionPool(BoneCPConnectionPoolFactory.java:54)
          at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:238)
          ... 66 more


At this point deployment is complete. Start Hive to test it;
output like the following indicates a successful start:

  [zhang@namenode ~]$ hive
  14/12/17 18:48:22 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
  14/12/17 18:48:22 WARN conf.HiveConf: HiveConf of name hive.metastore.local does not exist

  Logging initialized using configuration in jar:file:/home/zhang/hive/lib/hive-common-0.14.0.jar!/hive-log4j.properties
  SLF4J: Class path contains multiple SLF4J bindings.
  SLF4J: Found binding in [jar:file:/home/zhang/hadoop-2.5.2/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
  SLF4J: Found binding in [jar:file:/home/zhang/hive/lib/hive-jdbc-0.14.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
  SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
  SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
  hive> show tables;
  OK
  Time taken: 0.924 seconds
  hive> show databases;
  OK
  default
  Time taken: 0.051 seconds, Fetched: 1 row(s)


Create a table to test. If this part completes without errors, go check whether the corresponding files were generated on HDFS:

  hive> create table test(t_id int,t_name string) row format delimited fields terminated by '|' stored as textfile;
  OK
  Time taken: 1.586 seconds
  hive> show tables;
  OK
  test
  Time taken: 0.078 seconds, Fetched: 1 row(s)
  hive> select * from test;
  OK
  Time taken: 0.599 seconds
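The select returns no rows because the table is still empty. As a hypothetical follow-up, you could build a pipe-delimited data file matching the table's "fields terminated by '|'" clause and load it from the Hive shell; the file path and sample rows below are illustrative, not part of the original walkthrough:

```shell
# Sample rows matching the test(t_id int, t_name string) schema, '|'-delimited.
printf '1|alice\n2|bob\n' > /tmp/test_data.txt
cat /tmp/test_data.txt

# Then, inside the hive shell (requires the running deployment above):
#   LOAD DATA LOCAL INPATH '/tmp/test_data.txt' INTO TABLE test;
#   SELECT * FROM test;
```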

Check whether the related files were generated on HDFS:

  [zhang@datanode01 ~]$ hdfs dfs -ls /
  14/12/17 18:55:05 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  Found 3 items
  drwxr-xr-x   - zhang supergroup          0 2014-12-16 02:21 /input
  drwx-wx-wx   - zhang supergroup          0 2014-12-17 01:25 /tmp
  drwxr-xr-x   - zhang supergroup          0 2014-12-17 18:54 /user
  [zhang@datanode01 ~]$ hdfs dfs -ls /user/
  14/12/17 18:55:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  Found 1 items
  drwxr-xr-x   - zhang supergroup          0 2014-12-17 18:54 /user/hive
  [zhang@datanode01 ~]$ hdfs dfs -ls /user/hive/
  14/12/17 18:55:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  Found 1 items
  drwxr-xr-x   - zhang supergroup          0 2014-12-17 18:54 /user/hive/warehouse
  [zhang@datanode01 ~]$ hdfs dfs -ls /user/hive/warehouse/
  14/12/17 18:55:34 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  Found 1 items
  drwxr-xr-x   - zhang supergroup          0 2014-12-17 18:54 /user/hive/warehouse/test

The /user/hive/warehouse/test directory was created; verification is complete.

References:

http://www.linuxidc.com/Linux/2014-08/105363.htm


http://www.2cto.com/database/201305/215701.html


http://blog.chinaunix.net/uid-77311-id-4580099.html

  

 


Original post: http://7090376.blog.51cto.com/7080376/1663204
