
Fixing the steadily growing ZooKeeper connections caused by accessing HBase-backed tables through HiveServer2 via JDBC

Posted: 2016-02-24 09:43:37

Recently our monitoring showed that the number of ZooKeeper connections from HiveServer2 kept climbing. This was puzzling: HiveServer2 does support concurrent connections and uses ZooKeeper to manage read/write locks on Hive tables, but our environment does not need any of that, and we had already disabled the concurrency feature. Below is the production configuration; we had even marked these values as final.

[Screenshot: production hive-site.xml concurrency settings]
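The screenshot is not reproduced here. As a rough sketch, disabling Hive's concurrency support normally comes down to something like the following in hive-site.xml; the exact property list and values shown in the screenshot are an assumption, not a copy of it:

<!-- hive-site.xml (assumed reconstruction): turn off Hive's concurrency/locking support -->
<property>
  <name>hive.support.concurrency</name>
  <value>false</value>
  <!-- the post notes these values were also marked final -->
  <final>true</final>
</property>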


Yet the ZooKeeper connections kept growing anyway. Thinking it over: the tables we access are Hive tables mapped onto HBase, so when does HiveServer2 connect to ZooKeeper, and what does it connect for? We started with the logs, switched the production log level to DEBUG, and found the following in hiveserver2.log:

2016-02-23 14:03:30,271 DEBUG [HiveServer2-Background-Pool: Thread-598-SendThread(hadoop002:2181)]: zookeeper.ClientCnxn (ClientCnxn.java:readResponse(717)) - Got ping response for sessionid: 0x252fd37100600d2 after 0ms
2016-02-23 14:03:30,325 DEBUG [HiveServer2-Background-Pool: Thread-797-SendThread(hadoop003:2181)]: zookeeper.ClientCnxn (ClientCnxn.java:readResponse(717)) - Got ping response for sessionid: 0x352fd3707b600e3 after 0ms
2016-02-23 14:03:30,626 DEBUG [HiveServer2-Background-Pool: Thread-1138-SendThread(hadoop003:2181)]: zookeeper.ClientCnxn (ClientCnxn.java:readResponse(717)) - Got ping response for sessionid: 0x352fd3707b600e8 after 0ms
2016-02-23 14:03:30,768 DEBUG [HiveServer2-Background-Pool: Thread-730-SendThread(hadoop001:2181)]: zookeeper.ClientCnxn (ClientCnxn.java:readResponse(717)) - Got ping response for sessionid: 0x152fd3707c800db after 0ms
2016-02-23 14:03:32,751 DEBUG [HiveServer2-Background-Pool: Thread-461-SendThread(hadoop001:2181)]: zookeeper.ClientCnxn (ClientCnxn.java:readResponse(717)) - Got ping response for sessionid: 0x152fd3707c800d5 after 0ms
2016-02-23 14:03:33,057 DEBUG [HiveServer2-Background-Pool: Thread-1211-SendThread(hadoop002:2181)]: zookeeper.ClientCnxn (ClientCnxn.java:readResponse(717)) - Got ping response for sessionid: 0x252fd37100600dd after 0ms


These connections come from a thread pool created by the SessionManager, but the logs alone did not make it obvious when they were created, so we attached a remote debugger to HiveServer2 in our test environment. To enable remote debugging:

Add the following near the top of hive-env.sh under /etc/hive/conf/conf.server:

#add by lidong for remote debug
export HADOOP_OPTS="$HADOOP_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8888 -XX:NewRatio=12 -XX:MaxHeapFreeRatio=40 -XX:MinHeapFreeRatio=15 -XX:-UseGCOverheadLimit"

After nearly two more days of digging, it finally became clear that the ZooKeeper connection is created in the execute(DriverContext driverContext) method of MapRedTask in the Hive code base:

...

      if (!runningViaChild) { // this check is what the fix hinges on
        // we are not running this mapred task via child jvm
        // so directly invoke ExecDriver
        return super.execute(driverContext); // this call goes through Hadoop's JobClient to submitJob(job),
                                             // which is where the ZooKeeper connection gets created
      }


...

That ZooKeeper connection is never cleaned up afterwards, so it is simply left behind.

With the cause understood, I went for the simpler fix: set the parameter that controls runningViaChild to true, so that every job submitted through HiveServer2 runs in a child JVM; when the child process exits, all of its resources, including the ZooKeeper connection, are released.



The fix:

In hive-site.xml, set hive.exec.submitviachild to true.
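For reference, a minimal sketch of that entry as a Hadoop-style property in hive-site.xml (the property name comes from the post itself; the XML wrapping is assumed boilerplate):

<!-- hive-site.xml: submit each map/reduce job from a separate child JVM -->
<property>
  <name>hive.exec.submitviachild</name>
  <value>true</value>
</property>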



For the record, the stack trace captured during the debugging session:

[Screenshot: debugger stack trace, not reproduced here]


Original post: http://blog.csdn.net/odailidong/article/details/50723398
