Start the Hive command-line interface:

```
./hive
hive --service cli
```
```
[hadoop@ywendeng hive]$ hive
hive> show tables;
OK
stock
stock_partition
tst
Time taken: 1.088 seconds, Fetched: 3 row(s)
hive> select * from tst;
OK
Time taken: 0.934 seconds
hive> exit;
[hadoop@djt11 hive]$
```
```
hive --service hwi
```
Exception:

```
16/05/31 20:24:52 FATAL hwi.HWIServer: HWI WAR file not found at /home/hadoop/app/hive/home/hadoop/app/hive/lib/hive-hwi-0.12.0.war
```

Fix: copy the HWI-related properties from hive-default.xml into hive-site.xml. Example:

```xml
<property>
  <name>hive.hwi.war.file</name>
  <value>lib/hive-hwi-0.12.0-SNAPSHOT.war</value>
  <description>This sets the path to the HWI war file, relative to ${HIVE_HOME}.</description>
</property>
<property>
  <name>hive.hwi.listen.host</name>
  <value>0.0.0.0</value>
  <description>This is the host address the Hive Web Interface will listen on</description>
</property>
<property>
  <name>hive.hwi.listen.port</name>
  <value>9999</value>
  <description>This is the port the Hive Web Interface will listen on</description>
</property>
```
```
nohup hive --service hiveserver2 &   # HiveServer2 has been available since Hive 0.11.0
```
```
hive --service hiveserver2 &                                             # default port 10000
hive --service hiveserver2 --hiveconf hive.server2.thrift.port=10002 &   # override the port to 10002 from the command line
```
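The `--hiveconf key=value` flag simply overrides one configuration property for that process. As a toy sketch of that mechanism (this is not Hive's actual argument parser; `HiveconfSketch` and `parseHiveconf` are made-up names for illustration), the overrides can be collected into a map that is consulted before the default:

```java
import java.util.HashMap;
import java.util.Map;

public class HiveconfSketch {

    // Collect every "--hiveconf key=value" pair from a command line into a map.
    static Map<String, String> parseHiveconf(String[] args) {
        Map<String, String> overrides = new HashMap<>();
        for (int i = 0; i < args.length - 1; i++) {
            if (args[i].equals("--hiveconf")) {
                String[] kv = args[i + 1].split("=", 2);
                if (kv.length == 2) {
                    overrides.put(kv[0], kv[1]);
                }
            }
        }
        return overrides;
    }

    public static void main(String[] args) {
        String[] cmd = {"--hiveconf", "hive.server2.thrift.port=10002"};
        Map<String, String> conf = parseHiveconf(cmd);
        // Fall back to the default port 10000 when no override is given.
        int port = Integer.parseInt(conf.getOrDefault("hive.server2.thrift.port", "10000"));
        System.out.println(port);
    }
}
```

With no `--hiveconf` on the command line, the map stays empty and the default 10000 from hive-site.xml (below) wins.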
```xml
<property>
  <name>hive.server2.thrift.port</name>
  <value>10000</value>
  <description>Port number of HiveServer2 Thrift interface when hive.server2.transport.mode is 'binary'.</description>
</property>
```
```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class HiveJdbcClient {
    // Hive driver class name for Hive 0.11.0 and later
    private static String driverName = "org.apache.hive.jdbc.HiveDriver";
    // Driver class name for versions before Hive 0.11.0:
    // private static String driverName = "org.apache.hadoop.hive.jdbc.HiveDriver";

    public static void main(String[] args) throws SQLException {
        try {
            Class.forName(driverName);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
            System.exit(1);
        }
        // 1st argument: jdbc:hive2://djt11:10000/default -- the HiveServer2 connection URL
        // 2nd argument: hadoop -- a user with access rights on HDFS
        // 3rd argument: the user's password; in non-secure mode the password is ignored
        Connection con = DriverManager.getConnection("jdbc:hive2://djt11:10000/default", "hadoop", "");
        System.out.print(con.getClientInfo());
    }
}
```
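Opening the connection is only the first step. As a hedged sketch of what comes next (the host `djt11` and table `tst` come from the sessions above; `HiveQueryClient` and `buildUrl` are illustrative names, and running `main` requires a live HiveServer2 plus the Hive JDBC jar on the classpath), a query can be executed over the same connection with the standard JDBC `Statement`/`ResultSet` API:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class HiveQueryClient {

    // Assemble a HiveServer2 JDBC URL; a plain string helper, shown for clarity.
    static String buildUrl(String host, int port, String db) {
        return "jdbc:hive2://" + host + ":" + port + "/" + db;
    }

    public static void main(String[] args) throws SQLException {
        // Requires a running HiveServer2 at djt11:10000 (started as shown above).
        Connection con = DriverManager.getConnection(buildUrl("djt11", 10000, "default"), "hadoop", "");
        try (Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("select * from tst")) {
            while (rs.next()) {
                // Print the first column of each returned row.
                System.out.println(rs.getString(1));
            }
        }
        con.close();
    }
}
```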
```
rpm -qa | grep mysql                               # check whether MySQL is already installed
rpm -e mysql-libs-5.1.66-2.el6_3.i686 --nodeps     # remove it if present; otherwise skip this step
rpm -ivh MySQL-server-5.1.73-1.glibc23.i386.rpm
rpm -ivh MySQL-client-5.1.73-1.glibc23.i386.rpm
```
```
[root@ywendeng app]# service mysqld start
Initializing MySQL database:  Installing MySQL system tables...
OK
Filling help tables...
OK
To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system
```
```
[root@ywendeng app]# mysql -u root -p
Enter password:        <-- the default password is empty; just press Enter
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.1.73 Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

mysql> set password for root@localhost=password('root');   -- set the root password to root
```
```
[root@ywendeng app]# mysql -u root -proot
mysql> create user 'hive' identified by 'hive';   -- create an account: user hive, password hive
mysql> grant all on *.* to 'hive'@'localhost' identified by 'hive';   -- grant privileges to the hive user at host localhost
Query OK, 0 rows affected (0.00 sec)
-- Note: *.* means every table in every database; % matches any IP address or host.
-- To grant access from any host instead, run:
-- grant all on *.* to 'hive'@'%' identified by 'hive';
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
```
```sql
use mysql;                     -- switch to the mysql system database
select host, user from user;   -- list accounts and the hosts they may connect from
```
```
[root@ywendeng ~]# mysql -u root -proot
mysql> set password for hive@localhost=password('hive');
```
```
[root@ywendeng ~]# mysql -u hive -p      <-- log in as user hive; the password is hive
Enter password:
mysql> create database hive;             -- create a database named hive
Query OK, 1 row affected (0.00 sec)
```
```
[hadoop@ywendeng conf]$ vi hive-site.xml
```

```xml
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/hive?characterEncoding=UTF-8</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
  <description>Username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
  <description>password to use against metastore database</description>
</property>
```
```
[hadoop@ywendeng conf]$ cp hive-default.xml.template hive-site.xml
```
```
[hadoop@ywendeng lib]$ rz      <-- press Enter and pick the downloaded MySQL driver jar
[hadoop@ywendeng lib]$ ls
mysql-connector-java-5.1.21.jar
```
```
[hadoop@ywendeng hive]$ hive
hive> show databases;
```
```
[hadoop@ywendeng hive]$ mkdir iotmp
[hadoop@ywendeng hive]$ ls
bin   derby.log  hcatalog  lib      metastore_db  README.txt         scripts
conf  examples   iotmp     LICENSE  NOTICE        RELEASE_NOTES.txt
```
```
[hadoop@ywendeng conf]$ vi hive-site.xml
```

```xml
<property>
  <name>hive.querylog.location</name>
  <value>/home/hadoop/app/hive/iotmp</value>
  <description>Location of Hive run time structured log file</description>
</property>
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/home/hadoop/app/hive/iotmp</value>
  <description>Local scratch space for Hive jobs</description>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/home/hadoop/app/hive/iotmp</value>
  <description>Temporary local directory for added resources in the remote file system.</description>
</property>
```
```
[hadoop@ywendeng hive]$ hive
hive> show databases;
OK
default
Time taken: 3.684 seconds, Fetched: 1 row(s)
hive>
```
- Merge multiple two-way joins into a single multi-way join;
- re-partition join, group-by, and custom map-reduce operations;
- prune unnecessary columns;
- push predicates down into table-scan operations;
- for partitioned tables, prune unnecessary partitions;
- in sampling queries, prune unnecessary buckets.
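As a toy illustration of the partition-pruning idea above (this is not Hive's actual implementation; `PartitionPruningSketch`, `prunePartitions`, and the `dt=` values are made up for the example), a predicate over the partition key lets whole partitions be discarded before any data is read:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class PartitionPruningSketch {

    // Keep only the partitions whose key satisfies the query predicate;
    // every partition dropped here never has to be scanned at all.
    static List<String> prunePartitions(List<String> partitionKeys, Predicate<String> predicate) {
        List<String> kept = new ArrayList<>();
        for (String key : partitionKeys) {
            if (predicate.test(key)) {
                kept.add(key);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        List<String> partitions = List.of("dt=2016-05-29", "dt=2016-05-30", "dt=2016-05-31");
        // Predicate corresponding to: WHERE dt = '2016-05-31'
        List<String> kept = prunePartitions(partitions, p -> p.equals("dt=2016-05-31"));
        System.out.println(kept);   // only one partition is left to scan
    }
}
```

Column pruning and bucket pruning follow the same pattern: structural metadata (column list, bucket number) is matched against the query before the expensive scan happens.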
1. TEXTFILE
2. SEQUENCEFILE
3. RCFILE
4. ORCFILE (introduced in Hive 0.11)
Example:

```sql
create table if not exists textfile_table(
  site string,
  url string,
  pv bigint,
  label string)
row format delimited fields terminated by '\t'
stored as textfile;
```

Loading data:

```sql
set hive.exec.compress.output=true;
set mapred.output.compress=true;
set mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;
set io.compression.codecs=org.apache.hadoop.io.compress.GzipCodec;
insert overwrite table textfile_table select * from textfile_table;
```
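The settings above make Hive gzip its output. To see why that matters for plain-text storage, here is a hedged, self-contained sketch using the JDK's own `GZIPOutputStream` (Hive itself uses Hadoop's `GzipCodec`, not this class, and `GzipSketch` is a made-up name, but the compression effect on repetitive tab-delimited rows is the same):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class GzipSketch {

    // Gzip-compress a byte array in memory and return the compressed bytes.
    static byte[] gzip(byte[] input) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(input);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Repetitive, TEXTFILE-like rows (site \t url \t pv \t label) compress very well.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1000; i++) {
            sb.append("www.example.com\t/index.html\t").append(i).append("\tlabel\n");
        }
        byte[] raw = sb.toString().getBytes(StandardCharsets.UTF_8);
        byte[] compressed = gzip(raw);
        // The gzipped form is far smaller than the raw text.
        System.out.println("raw=" + raw.length + " gzip=" + compressed.length);
    }
}
```

The trade-off the section is pointing at: gzipped TEXTFILE saves disk, but a gzip stream is not splittable, which is one reason SEQUENCEFILE with BLOCK compression (next example) is often preferred.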
Example:

```sql
create table if not exists seqfile_table(
  site string,
  url string,
  pv bigint,
  label string)
row format delimited fields terminated by '\t'
stored as sequencefile;
```

Loading data:

```sql
set hive.exec.compress.output=true;
set mapred.output.compress=true;
set mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;
set io.compression.codecs=org.apache.hadoop.io.compress.GzipCodec;
set mapred.output.compression.type=BLOCK;
insert overwrite table seqfile_table select * from textfile_table;
```
RCFILE example:

```sql
create table if not exists rcfile_table(
  site string,
  url string,
  pv bigint,
  label string)
row format delimited fields terminated by '\t'
stored as rcfile;
```

Loading data:

```sql
set hive.exec.compress.output=true;
set mapred.output.compress=true;
set mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;
set io.compression.codecs=org.apache.hadoop.io.compress.GzipCodec;
insert overwrite table rcfile_table select * from textfile_table;
```
```
[hadoop@ywendeng app]$ tar -zxvf apache-hive-1.0.0-bin.tar.gz
[hadoop@ywendeng app]$ mv apache-hive-1.0.0-bin hive
```
```
[root@ywendeng ~]$ vi /etc/profile
HIVE_HOME=/home/hadoop/app/hive
PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HIVE_HOME/bin:$PATH
export HIVE_HOME
```
```
[root@ywendeng ~]$ source /etc/profile
```
```
[hadoop@ywendeng]$ hive
hive>
```
[Error 1]

```
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:604)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:699)
    at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1462)
    at org.apache.hadoop.ipc.Client.call(Client.java:1381)
```

Cause: Hadoop is not running.
Fix: start Hadoop from the Hadoop installation directory with `sbin/start-all.sh`.
[Error 2]

```
[ERROR] Terminal initialization failed; falling back to unsupported
java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected
    at jline.TerminalFactory.create(TerminalFactory.java:101)
    at jline.TerminalFactory.get(TerminalFactory.java:158)
    at jline.console.ConsoleReader.<init>(ConsoleReader.java:229)
    at jline.console.ConsoleReader.<init>(ConsoleReader.java:221)
    at jline.console.ConsoleReader.<init>(ConsoleReader.java:209)
    at org.apache.hadoop.hive.cli.CliDriver.setupConsoleReader(CliDriver.java:787)
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:721)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Exception in thread "main" java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected
    at jline.console.ConsoleReader.<init>(ConsoleReader.java:230)
    at jline.console.ConsoleReader.<init>(ConsoleReader.java:221)
    at jline.console.ConsoleReader.<init>(ConsoleReader.java:209)
    at org.apache.hadoop.hive.cli.CliDriver.setupConsoleReader(CliDriver.java:787)
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:721)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
```

Fix: copy the newer jline jar shipped with Hive into Hadoop's yarn lib directory:

```
cp /hive/apache-hive-1.2.1-bin/lib/jline-2.12.jar /cloud/hadoop-2.4.1/share/hadoop/yarn/lib
```
```
hive> show tables;
OK
Time taken: 0.043 seconds
```
```
hive> create table test_table (id int, name string, no int);
OK
Time taken: 0.5 seconds
```
```
hive> select * from test_table;
OK
Time taken: 0.953 seconds
```
Original article: https://www.cnblogs.com/ceshi2016/p/12124081.html