
heartbeat_check in oslo_messaging

Posted: 2016-11-08 14:29:10

Tags: oslo_messaging

Recently I have been testing high availability of the OpenStack control plane (three controller nodes). When one of the controller nodes is shut down, nova service-list shows every nova service as down, and the nova-compute log fills up with errors like these:

2016-11-08 03:46:23.887 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.275 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.276 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.276 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.277 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.277 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.278 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.278 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe


The exception above was traced to oslo_messaging/_drivers/impl_rabbit.py:

    def _heartbeat_thread_job(self):
        """Thread that maintains inactive connections
        """
        while not self._heartbeat_exit_event.is_set():
            with self._connection_lock.for_heartbeat():
                recoverable_errors = (
                    self.connection.recoverable_channel_errors +
                    self.connection.recoverable_connection_errors)
                try:
                    try:
                        self._heartbeat_check()
                        # NOTE(sileht): We need to drain event to receive
                        # heartbeat from the broker but don't hold the
                        # connection too much times. In amqpdriver a connection
                        # is used exclusivly for read or for write, so we have
                        # to do this for connection used for write drain_events
                        # already do that for other connection
                        try:
                            self.connection.drain_events(timeout=0.001)
                        except socket.timeout:
                            pass
                    except recoverable_errors as exc:
                        LOG.info(_LI("A recoverable connection/channel error "
                                     "occurred, trying to reconnect: %s"), exc)
                        self.ensure_connection()
                        
                except Exception:
                    LOG.warning(_LW("Unexpected error during heartbeart "
                                    "thread processing, retrying..."))
                    LOG.debug('Exception', exc_info=True)
            self._heartbeat_exit_event.wait(
                timeout=self._heartbeat_wait_timeout)
        self._heartbeat_exit_event.clear()
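
For reference, the wait at the bottom of the loop is what paces the heartbeat thread: self._heartbeat_wait_timeout is derived from the heartbeat_timeout_threshold and heartbeat_rate options in the [oslo_messaging_rabbit] section of the service config. The snippet below is only a sketch of that relationship as I understand it for oslo.messaging releases from around this time, not verbatim library code:

# Sketch only (assumed defaults, not verbatim oslo.messaging code): how the
# per-iteration wait is typically derived from the rabbit heartbeat options.
heartbeat_timeout_threshold = 60.0  # seconds without a heartbeat before the connection is considered dead
heartbeat_rate = 2.0                # how many heartbeat checks to run per threshold window

heartbeat_wait_timeout = heartbeat_timeout_threshold / heartbeat_rate / 2.0
print(heartbeat_wait_timeout)       # 15.0 -> the thread wakes up roughly every 15 seconds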

The heartbeat check exists to verify that the connection between a service and the RabbitMQ server is still alive, and the heartbeat_check task in oslo_messaging runs in the background from the moment the service starts. Shutting down a controller node also shuts down one of the RabbitMQ server nodes, so the thread gets stuck in this loop, repeatedly hitting the exceptions matched by recoverable_errors; the while loop only exits once self._heartbeat_exit_event.is_set() returns True. Arguably there should be a timeout here so the thread does not keep spinning in the loop; as it stands, it takes several minutes before things recover.
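
To make that suggestion concrete, here is a minimal standalone sketch of the idea: bound the reconnect attempts with an overall deadline instead of retrying forever. The function name and parameters are hypothetical; this is an illustration of the pattern, not a patch to oslo.messaging:

import time

def reconnect_with_deadline(ensure_connection, deadline=60.0, interval=2.0):
    """Retry ensure_connection(), but give up once the deadline has passed."""
    start = time.time()
    while True:
        try:
            ensure_connection()   # raises on recoverable connection/channel errors
            return True
        except Exception:
            if time.time() - start > deadline:
                return False      # let the caller mark the connection dead or fail over
            time.sleep(interval)  # back off briefly before the next attempt

With something like this, the heartbeat thread could give up on a dead broker node after a bounded time and let the normal connection logic move on to another RabbitMQ node, instead of spinning on the same broken connection for several minutes.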

This article originally appeared on the "the-way-to-cloud" blog; please keep this attribution when reposting: http://iceyao.blog.51cto.com/9426658/1870593
