
Hive-1.2.1_01 Installation and Deployment



 

Preface: this article builds on the earlier Hadoop2.7.6_01 deployment post and assumes that cluster is already in place.

 

1. Hive Basics

1.1. What is Hive

       Hive is a data warehouse tool built on top of Hadoop. It maps structured data files onto database tables and provides an SQL-like query capability.

 

1.2. Why Use Hive

Problems with using Hadoop directly:

       The learning curve for staff is too steep.

       Project timelines are too short.

       Implementing complex query logic directly in MapReduce is too hard to develop.

 

Why Hive helps:

       Its interface uses SQL-like syntax, which enables rapid development.

       It avoids hand-written MapReduce, reducing the learning cost for developers.

       It is easy to extend with additional functionality.

 

1.3. Hive Features

Scalable

       Hive can scale the cluster freely; in most cases no service restart is required.

 

Extensible

       Hive supports user-defined functions, so users can implement functions tailored to their own needs.

 

Fault-tolerant

       Hive has good fault tolerance: a SQL query can still run to completion when a node fails.

 

1.4. Basic Components

    User interfaces: CLI, JDBC/ODBC, and WebGUI.

    Metadata store: usually a relational database such as MySQL or Derby.

    Interpreter, compiler, optimizer, and executor.

 

1.5. What Each Component Does

    There are three main user interfaces: CLI, JDBC/ODBC, and WebGUI. The CLI is the shell command line; JDBC/ODBC is Hive's Java client interface, analogous to a traditional database's JDBC driver; the WebGUI accesses Hive through a browser.

    Metadata store: Hive keeps its metadata in a database. The metadata includes table names, columns and partitions with their properties, table attributes (for example, whether a table is external), and the directory in which each table's data is stored.

    The interpreter, compiler, and optimizer take a HiveQL statement through lexical analysis, parsing, compilation, optimization, and query-plan generation. The generated plan is stored in HDFS and subsequently executed by MapReduce.
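    Once Hive is installed (section 4), the generated plan can be inspected from the CLI with EXPLAIN. A minimal sketch, assuming a hypothetical table t_user already exists:

hive> EXPLAIN SELECT count(*) FROM t_user;   -- prints the stage plan (a map-reduce stage plus a fetch stage) that Hive will submit to Hadoop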

 

 

1.6. Hive Data Storage

1. All Hive data is stored in HDFS; Hive has no storage format of its own (Text, SequenceFile, ParquetFile, RCFile and others are supported).

2. You only need to tell Hive the column delimiter and row delimiter when creating a table, and Hive can then parse the data.

3. Hive has the following data model: DB, Table, External Table, Partition, and Bucket (a HiveQL sketch follows this list).

        db: appears in HDFS as a folder under the ${hive.metastore.warehouse.dir} directory

        table: appears in HDFS as a folder under its db directory

        external table: like a table, except that its data can be placed at any specified path

              ordinary table: dropping the table also deletes its files on HDFS

              external table: dropping the table leaves the files on HDFS; only the metadata (the table description) in the metastore database is removed

        partition: appears in HDFS as a sub-directory under the table directory

        bucket: appears in HDFS as multiple files under the same table directory; rows are hashed and written into the different files accordingly
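
A minimal HiveQL sketch tying these concepts together; the database name, table name, columns, delimiter and HDFS path below are hypothetical examples, not part of the original deployment:

hive> CREATE DATABASE IF NOT EXISTS testdb;                    -- db: a folder under ${hive.metastore.warehouse.dir}
hive> CREATE EXTERNAL TABLE testdb.t_log (id INT, msg STRING)  -- external table: data stays put when the table is dropped
          PARTITIONED BY (dt STRING)                           -- partition: one sub-directory per dt value
          CLUSTERED BY (id) INTO 4 BUCKETS                     -- bucket: rows hashed on id into 4 files per partition
          ROW FORMAT DELIMITED FIELDS TERMINATED BY ','        -- column delimiter Hive uses to parse the files
          LOCATION '/data/t_log';                              -- any HDFS path, left in place on DROP TABLE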

 

 

2. Host Plan

Hostname   External IP   Internal IP   OS           Software installed
mini01     10.0.0.11     172.16.1.11   CentOS 7.4   Hadoop [NameNode, SecondaryNameNode], Hive
mini02     10.0.0.12     172.16.1.12   CentOS 7.4   Hadoop [ResourceManager]
mini03     10.0.0.13     172.16.1.13   CentOS 7.4   Hadoop [DataNode, NodeManager], MariaDB
mini04     10.0.0.14     172.16.1.14   CentOS 7.4   Hadoop [DataNode, NodeManager]
mini05     10.0.0.15     172.16.1.15   CentOS 7.4   Hadoop [DataNode, NodeManager]
 

Add the hosts entries on Linux so that every host can ping every other host (a quick connectivity check follows the listing below).

1 [yun@mini03 ~]$ cat /etc/hosts
2 127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
3 ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
4 
5 10.0.0.11    mini01
6 10.0.0.12    mini02
7 10.0.0.13    mini03
8 10.0.0.14    mini04
9 10.0.0.15    mini05
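
A quick way to confirm connectivity (hostnames as in the hosts file above); a small sketch, to be repeated on each machine:

[yun@mini01 ~]$ for h in mini01 mini02 mini03 mini04 mini05; do
>   ping -c 1 -W 1 $h > /dev/null && echo "$h ok" || echo "$h unreachable"
> done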

 

Modify the Windows hosts file

1 # File location: C:\Windows\System32\drivers\etc   append the following entries to the hosts file
2 ………………
3 10.0.0.11    mini01
4 10.0.0.12    mini02
5 10.0.0.13    mini03
6 10.0.0.14    mini04
7 10.0.0.15    mini05

 

 

3. MariaDB Installation and Configuration

3.1. Database Installation

 1 # Install mariadb on the mini03 machine
 2 [root@mini03 ~]# cat /etc/redhat-release   # other versions can also be used
 3 CentOS Linux release 7.4.1708 (Core)
 4 [root@mini03 ~]# yum install -y mariadb mariadb-server  # on CentOS 7 the MySQL-compatible database is MariaDB
 5 ………………
 6 [root@mini03 ~]# systemctl status mariadb.service 
 7 ● mariadb.service - MariaDB database server
 8    Loaded: loaded (/usr/lib/systemd/system/mariadb.service; disabled; vendor preset: disabled)
 9    Active: inactive (dead)
10 [root@mini03 ~]# systemctl enable mariadb.service  # enable start on boot
11 Created symlink from /etc/systemd/system/multi-user.target.wants/mariadb.service to /usr/lib/systemd/system/mariadb.service.
12 [root@mini03 ~]# systemctl start mariadb.service  # start mariadb
13 [root@mini03 ~]# systemctl status mariadb.service # check the mariadb service status
14 ● mariadb.service - MariaDB database server
15    Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled; vendor preset: disabled)
16    Active: active (running) since Mon 2018-07-02 23:36:37 CST; 2s ago
17   Process: 2072 ExecStartPost=/usr/libexec/mariadb-wait-ready $MAINPID (code=exited, status=0/SUCCESS)
18   Process: 1992 ExecStartPre=/usr/libexec/mariadb-prepare-db-dir %n (code=exited, status=0/SUCCESS)
19  Main PID: 2071 (mysqld_safe)
20    ………………

 

3.2. Reviewing the Database Configuration

 1 # Connect to the database
 2 [root@mini03 ~]# mysql
 3 Welcome to the MariaDB monitor.  Commands end with ; or \g.
 4 Your MariaDB connection id is 2
 5 Server version: 5.5.56-MariaDB MariaDB Server
 6 
 7 Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.
 8 
 9 Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
10 
11 # Check the database version
12 MariaDB [(none)]> select version(); 
13 +----------------+
14 | version()      |
15 +----------------+
16 | 5.5.56-MariaDB |
17 +----------------+
18 1 row in set (0.00 sec)
19 
20 # Which character sets are supported
21 MariaDB [(none)]> show  CHARACTER SET;   
22 ## columns: charset, description, default collation, maximum bytes per character
23 +----------+-----------------------------+---------------------+--------+
24 | Charset  | Description                 | Default collation   | Maxlen |
25 +----------+-----------------------------+---------------------+--------+
26 | big5     | Big5 Traditional Chinese    | big5_chinese_ci     |      2 |
27 | dec8     | DEC West European           | dec8_swedish_ci     |      1 |
28 | cp850    | DOS West European           | cp850_general_ci    |      1 |
29 | hp8      | HP West European            | hp8_english_ci      |      1 |
30 | koi8r    | KOI8-R Relcom Russian       | koi8r_general_ci    |      1 |
31 | latin1   | cp1252 West European        | latin1_swedish_ci   |      1 |
32 | latin2   | ISO 8859-2 Central European | latin2_general_ci   |      1 |
33 | swe7     | 7bit Swedish                | swe7_swedish_ci     |      1 |
34 | ascii    | US ASCII                    | ascii_general_ci    |      1 |
35 | ujis     | EUC-JP Japanese             | ujis_japanese_ci    |      3 |
36 | sjis     | Shift-JIS Japanese          | sjis_japanese_ci    |      2 |
37 | hebrew   | ISO 8859-8 Hebrew           | hebrew_general_ci   |      1 |
38 | tis620   | TIS620 Thai                 | tis620_thai_ci      |      1 |
39 | euckr    | EUC-KR Korean               | euckr_korean_ci     |      2 |
40 | koi8u    | KOI8-U Ukrainian            | koi8u_general_ci    |      1 |
41 | gb2312   | GB2312 Simplified Chinese   | gb2312_chinese_ci   |      2 |
42 | greek    | ISO 8859-7 Greek            | greek_general_ci    |      1 |
43 | cp1250   | Windows Central European    | cp1250_general_ci   |      1 |
44 | gbk      | GBK Simplified Chinese      | gbk_chinese_ci      |      2 |
45 | latin5   | ISO 8859-9 Turkish          | latin5_turkish_ci   |      1 |
46 | armscii8 | ARMSCII-8 Armenian          | armscii8_general_ci |      1 |
47 | utf8     | UTF-8 Unicode               | utf8_general_ci     |      3 |
48 | ucs2     | UCS-2 Unicode               | ucs2_general_ci     |      2 |
49 | cp866    | DOS Russian                 | cp866_general_ci    |      1 |
50 | keybcs2  | DOS Kamenicky Czech-Slovak  | keybcs2_general_ci  |      1 |
51 | macce    | Mac Central European        | macce_general_ci    |      1 |
52 | macroman | Mac West European           | macroman_general_ci |      1 |
53 | cp852    | DOS Central European        | cp852_general_ci    |      1 |
54 | latin7   | ISO 8859-13 Baltic          | latin7_general_ci   |      1 |
55 | utf8mb4  | UTF-8 Unicode               | utf8mb4_general_ci  |      4 |
56 | cp1251   | Windows Cyrillic            | cp1251_general_ci   |      1 |
57 | utf16    | UTF-16 Unicode              | utf16_general_ci    |      4 |
58 | cp1256   | Windows Arabic              | cp1256_general_ci   |      1 |
59 | cp1257   | Windows Baltic              | cp1257_general_ci   |      1 |
60 | utf32    | UTF-32 Unicode              | utf32_general_ci    |      4 |
61 | binary   | Binary pseudo charset       | binary              |      1 |
62 | geostd8  | GEOSTD8 Georgian            | geostd8_general_ci  |      1 |
63 | cp932    | SJIS for Windows Japanese   | cp932_japanese_ci   |      2 |
64 | eucjpms  | UJIS for Windows Japanese   | eucjpms_japanese_ci |      3 |
65 +----------+-----------------------------+---------------------+--------+
66 39 rows in set (0.00 sec)
67 
68 # Default character sets of the current server
69 MariaDB [(none)]> show variables like '%character_set%';
70 +--------------------------+----------------------------+
71 | Variable_name            | Value                      |
72 +--------------------------+----------------------------+
73 | character_set_client     | utf8                       | ## charset of data arriving from the client
74 | character_set_connection | utf8                       | ## connection-layer charset
75 | character_set_database   | latin1                     | ## default charset of the currently selected database
76 | character_set_filesystem | binary                     |
77 | character_set_results    | utf8                       | ## charset used for query results returned to the client
78 | character_set_server     | latin1                     | ## default internal (server-side) charset
79 | character_set_system     | utf8                       | ## charset for system metadata such as column names (follows the OS charset)
80 | character_sets_dir       | /usr/share/mysql/charsets/ |
81 +--------------------------+----------------------------+
82 8 rows in set (0.00 sec)

       Note: it is best not to change the server's character-set defaults, because the hive metastore database must be created with the latin1 character set.
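
If you would rather create the metastore database yourself instead of letting Hive auto-create it later (section 4.3 relies on createDatabaseIfNotExist=true), a minimal sketch that pins the charset explicitly:

MariaDB [(none)]> CREATE DATABASE hive DEFAULT CHARACTER SET latin1;   -- metastore schema expects latin1
MariaDB [(none)]> SHOW CREATE DATABASE hive;                           -- should report DEFAULT CHARACTER SET latin1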

 

3.3. Creating the Database and Granting Privileges

 1 # The hive database is not created here; it can be created automatically when Hive first starts
 2 MariaDB [(none)]> show databases;  
 3 +--------------------+
 4 | Database           |
 5 +--------------------+
 6 | information_schema |
 7 | mysql              |
 8 | performance_schema |
 9 | test               |
10 +--------------------+
11 4 rows in set (0.00 sec)
12 
13 # Create the user and grant privileges for remote access.  Connection format: mysql -hmini03 -uhive -phive  or  mysql -h10.0.0.13 -uhive -phive
14 MariaDB [(none)]> grant all on hive.* to 'hive'@'%' identified by 'hive';   
15 Query OK, 0 rows affected (0.00 sec)
16 
17 # Create the user and grant privileges for local access.  Same connection format as above
18 MariaDB [(none)]> grant all on hive.* to 'hive'@'mini03' identified by 'hive';  
19 Query OK, 0 rows affected (0.00 sec)
20 
21 # Reload the privilege tables
22 MariaDB [(none)]> flush privileges;  
23 Query OK, 0 rows affected (0.00 sec)
24 
25 MariaDB [(none)]> show grants for 'hive'@'%';  
26 +-------------------------------------------------------------------------------------------------------------+
27 | Grants for hive@%                                                                                           |
28 +-------------------------------------------------------------------------------------------------------------+
29 | GRANT USAGE ON *.* TO 'hive'@'%' IDENTIFIED BY PASSWORD '*4DF1D66463C18D44E3B001A8FB1BBFBEA13E27FC'         |
30 | GRANT ALL PRIVILEGES ON `hive`.* TO 'hive'@'%'                                                              |
31 +-------------------------------------------------------------------------------------------------------------+
32 2 rows in set (0.00 sec)
33 
34 # Check the user table
35 MariaDB [(none)]> select user,host from mysql.user;  
36 +------+-----------+
37 | user | host      |
38 +------+-----------+
39 | hive | %         |
40 | root | 127.0.0.1 |
41 | root | ::1       |
42 |      | localhost |
43 | root | localhost |
44 |      | mini03    |
45 | hive | mini03    |
46 | root | mini03    |
47 +------+-----------+
48 8 rows in set (0.00 sec)
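
Before configuring Hive, it is worth testing the grant from the node that will run Hive (mini01), using the connection format noted above; a sketch, assuming the mariadb client package is installed on mini01:

[yun@mini01 ~]$ mysql -hmini03 -uhive -phive -e 'show databases;'   # should connect and list the databases visible to the hive user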

 

 

4. Hive 1.2.1 Installation and Configuration

4.1. Software Installation

 1 [yun@mini01 software]$ pwd
 2 /app/software
 3 [yun@mini01 software]$ ll
 4 total 421924
 5 -rw-r--r-- 1 yun yun  92834839 May 14 00:23 apache-hive-1.2.1-bin.tar.gz
 6 -rw-r--r-- 1 yun yun 198811365 Jun  8 16:36 CentOS7.4_hadoop-2.7.6.tar.gz
 7 -rw-r--r-- 1 yun yun   1004838 Jul  2 22:59 mysql-connector-java-5.1.46.jar
 8 drwxrwxr-x 2 yun yun        19 Jun 18 10:48 zhangliang 
 9 [yun@mini01 software]$ 
10 [yun@mini01 software]$ tar xf apache-hive-1.2.1-bin.tar.gz 
11 [yun@mini01 software]$ mv apache-hive-1.2.1-bin /app/hive-1.2.1  
12 [yun@mini01 software]$ cd /app/
13 [yun@mini01 app]$ ln -s hive-1.2.1/ hive
14 [yun@mini01 app]$ ll
15 total 8
16 drwxr-xr-x  3 yun yun   23 May 26 14:30 bigdata
17 lrwxrwxrwx  1 yun yun   13 Jun  8 20:43 hadoop -> hadoop-2.7.6/
18 drwxr-xr-x 11 yun yun  172 Jun  9 17:54 hadoop-2.7.6
19 lrwxrwxrwx  1 yun yun   11 Jul  3 14:23 hive -> hive-1.2.1/
20 drwxrwxr-x  8 yun yun  159 Jul  3 14:22 hive-1.2.1
21 lrwxrwxrwx  1 yun yun   12 May 26 11:18 jdk -> jdk1.8.0_112
22 drwxr-xr-x  8 yun yun  255 Sep 23  2016 jdk1.8.0_112

 

4.2. Environment Variables

1 # Done with root privileges
2 [root@mini01 profile.d]# pwd
3 /etc/profile.d
4 [root@mini01 profile.d]# cat hive.sh 
5 export HIVE_HOME="/app/hive"
6 export PATH=$HIVE_HOME/bin:$PATH
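
After creating hive.sh, start a new login shell or source the profile as the yun user so the variables take effect; a quick check (the expected paths follow from the settings above):

[yun@mini01 ~]$ source /etc/profile
[yun@mini01 ~]$ echo $HIVE_HOME
/app/hive
[yun@mini01 ~]$ which hive
/app/hive/bin/hive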

 

4.3. Configuration Changes

 1 [yun@mini01 conf]$ pwd
 2 /app/hive/conf
 3 [yun@mini01 conf]$ vim hive-site.xml  # does not exist by default; create this configuration file 
 4 <configuration>
 5   <property>
 6     <name>javax.jdo.option.ConnectionURL</name>
 7     <!-- create the hive database if it does not already exist -->
 8     <value>jdbc:mysql://mini03:3306/hive?createDatabaseIfNotExist=true</value>
 9     <description>JDBC connect string for a JDBC metastore</description>
10   </property>
11 
12   <property>
13     <name>javax.jdo.option.ConnectionDriverName</name>
14     <value>com.mysql.jdbc.Driver</value>
15     <description>Driver class name for a JDBC metastore</description>
16   </property>
17 
18   <property>
19     <name>javax.jdo.option.ConnectionUserName</name>
20     <value>hive</value>
21     <description>username to use against metastore database</description>
22   </property>
23 
24   <property>
25     <name>javax.jdo.option.ConnectionPassword</name>
26     <value>hive</value>
27     <description>password to use against metastore database</description>
28   </property>
29 
30   <!-- show the current database in the CLI prompt -->
31   <property>
32     <name>hive.cli.print.current.db</name>
33     <value>true</value>
34     <description>Whether to include the current database in the Hive prompt.</description>
35   </property>
36 
37 </configuration>
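
Any property, whether set in hive-site.xml or left at its default, can be checked from the Hive CLI once it starts (section 4.4). For example, the warehouse location mentioned in section 1.6, which defaults to /user/hive/warehouse on HDFS:

hive> set hive.metastore.warehouse.dir;
hive.metastore.warehouse.dir=/user/hive/warehouse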

 

4.4. Startup

  1 # the environment variables have been added
  2 [yun@mini01 ~]$ hive
  3 
  4 Logging initialized using configuration in jar:file:/app/hive-1.2.1/lib/hive-common-1.2.1.jar!/hive-log4j.properties
  5 Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
  6     at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
  7     at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
  8     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
  9     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 10     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 11     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 12     at java.lang.reflect.Method.invoke(Method.java:498)
 13     at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
 14     at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
 15 Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
 16     at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1523)
 17     at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
 18     at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
 19     at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
 20     at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3005)
 21     at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3024)
 22     at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503)
 23     ... 8 more
 24 Caused by: java.lang.reflect.InvocationTargetException
 25     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 26     at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 27     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 28     at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
 29     at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
 30     ... 14 more
 31 Caused by: javax.jdo.JDOFatalInternalException: Error creating transactional connection factory
 32 NestedThrowables:
 33 java.lang.reflect.InvocationTargetException
 34     at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:587)
 35     at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:788)
 36     at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:333)
 37     at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:202)
 38     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 39     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 40     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 41     at java.lang.reflect.Method.invoke(Method.java:498)
 42     at javax.jdo.JDOHelper$16.run(JDOHelper.java:1965)
 43     at java.security.AccessController.doPrivileged(Native Method)
 44     at javax.jdo.JDOHelper.invoke(JDOHelper.java:1960)
 45     at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1166)
 46     at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
 47     at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
 48     at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:365)
 49     at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:394)
 50     at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:291)
 51     at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:258)
 52     at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
 53     at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
 54     at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:57)
 55     at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:66)
 56     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:593)
 57     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:571)
 58     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:624)
 59     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:461)
 60     at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:66)
 61     at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
 62     at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5762)
 63     at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:199)
 64     at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
 65     ... 19 more
 66 Caused by: java.lang.reflect.InvocationTargetException
 67     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 68     at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 69     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 70     at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
 71     at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
 72     at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:325)
 73     at org.datanucleus.store.AbstractStoreManager.registerConnectionFactory(AbstractStoreManager.java:282)
 74     at org.datanucleus.store.AbstractStoreManager.<init>(AbstractStoreManager.java:240)
 75     at org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:286)
 76     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 77     at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 78     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 79     at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
 80     at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
 81     at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:301)
 82     at org.datanucleus.NucleusContext.createStoreManagerForProperties(NucleusContext.java:1187)
 83     at org.datanucleus.NucleusContext.initialise(NucleusContext.java:356)
 84     at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:775)
 85     ... 48 more
 86 Caused by: org.datanucleus.exceptions.NucleusException: Attempt to invoke the "BONECP" plugin to create a ConnectionPool gave an error : The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
 87     at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:259)
 88     at org.datanucleus.store.rdbms.ConnectionFactoryImpl.initialiseDataSources(ConnectionFactoryImpl.java:131)
 89     at org.datanucleus.store.rdbms.ConnectionFactoryImpl.<init>(ConnectionFactoryImpl.java:85)
 90     ... 66 more
 91 Caused by: org.datanucleus.store.rdbms.connectionpool.DatastoreDriverNotFoundException: The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
 92     at org.datanucleus.store.rdbms.connectionpool.AbstractConnectionPoolFactory.loadDriver(AbstractConnectionPoolFactory.java:58)
 93     at org.datanucleus.store.rdbms.connectionpool.BoneCPConnectionPoolFactory.createConnectionPool(BoneCPConnectionPoolFactory.java:54)
 94     at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:238)
 95     ... 68 more
 96 
 97 # Cause: the JDBC driver jar is missing; copy /app/software/mysql-connector-java-5.1.46.jar into Hive's lib directory  
 98 [yun@mini01 ~]$ cp -a /app/software/mysql-connector-java-5.1.46.jar /app/hive/lib/  
 99 [yun@mini01 ~]$ hive  # start hive again 
100 
101 Logging initialized using configuration in jar:file:/app/hive-1.2.1/lib/hive-common-1.2.1.jar!/hive-log4j.properties
102 # started successfully
103 hive> show databases;
104 OK
105 default
106 Time taken: 0.774 seconds, Fetched: 1 row(s)

 

4.4.1. Viewed in Navicat: the hive database has been created

[Screenshot: Navicat showing the newly created hive database]

 

4.4.2. Checking the CREATE DATABASE statement on mini03

 1 [root@mini03 ~]# mysql
 2 Welcome to the MariaDB monitor.  Commands end with ; or \g.
 3 Your MariaDB connection id is 20
 4 Server version: 5.5.56-MariaDB MariaDB Server
 5 
 6 Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.
 7 
 8 Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
 9 
10 MariaDB [(none)]> 
11 MariaDB [(none)]> show databases;
12 +--------------------+
13 | Database           |
14 +--------------------+
15 | information_schema |
16 | hive               |
17 | mysql              |
18 | performance_schema |
19 | test               |
20 +--------------------+
21 5 rows in set (0.00 sec)
22 
23 MariaDB [(none)]> show create database hive;  
24 # Note that the character set is latin1; otherwise errors may occur when Hive is used ★★★
25 +----------+-----------------------------------------------------------------+
26 | Database | Create Database                                                 |
27 +----------+-----------------------------------------------------------------+
28 | hive     | CREATE DATABASE `hive` /*!40100 DEFAULT CHARACTER SET latin1 */ |
29 +----------+-----------------------------------------------------------------+
30 1 row in set (0.00 sec)
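
As a final smoke test, create a table from the Hive CLI and confirm that a matching directory appears under the HDFS warehouse path and that its metadata lands in the MariaDB hive database; the table name t_test below is just an example:

hive> create table t_test(id int, name string) row format delimited fields terminated by ',';
hive> show tables;

[yun@mini01 ~]$ hadoop fs -ls /user/hive/warehouse              # a t_test directory should appear here

[root@mini03 ~]# mysql hive -e 'select TBL_NAME, TBL_TYPE from TBLS;'   # the metastore table TBLS should now list t_test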

 

Original post: https://www.cnblogs.com/zhanglianghhh/p/9348947.html
