
Python Day11

Date: 2017-06-17 21:41:30


I. Introduction to RabbitMQ

RabbitMQ is an open-source implementation, written in Erlang, of AMQP (the Advanced Message Queuing Protocol). AMQP arose from a real need: while the synchronous-messaging world had plenty of open standards (such as CORBA's IIOP, or SOAP), asynchronous messaging did not - there were only commercial products from big vendors (Microsoft's MSMQ, IBM's WebSphere MQ, and so on). So in June 2006, Cisco, Red Hat, iMatix and others jointly drafted the open AMQP standard.
 
  RabbitMQ was developed and commercially supported by Rabbit Technologies Ltd. The company was acquired in April 2010 by SpringSource (a division of VMware) and folded into Pivotal in May 2013. VMware, Pivotal and EMC are essentially one family: VMware is an independently listed subsidiary, while Pivotal combined certain EMC assets and had not gone public.
 
  The official site is http://www.rabbitmq.com; see it for Windows installation instructions.
 

 

II. RabbitMQ Use Cases

 

A large software system consists of many components, modules, or subsystems. How should these modules communicate? This differs greatly from traditional IPC: traditional IPC mostly happens on a single machine, couples modules tightly, and does not scale well. Sockets do let you deploy modules on different machines, but many problems remain, for example:
 
 1) How do sender and receiver maintain their connection, and how is data kept from being lost while one side is disconnected?
 
 2) How can sender and receiver be decoupled?
 
 3) How can higher-priority receivers get data first?
 
 4) How do you load-balance effectively across receivers?
 
 5) How do you deliver data only to the relevant receivers - that is, let receivers subscribe to different data, with effective filtering?
 
 6) How do you scale out, even spreading the messaging layer across a cluster?
 
 7) How do you guarantee that a receiver gets complete, correct data?
 
  The AMQP protocol addresses these problems, and RabbitMQ implements AMQP.
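The decoupling idea behind a broker can be seen in miniature with an in-process queue. Below, Python's standard queue.Queue stands in for the broker (this is an analogy only, not RabbitMQ or pika code): producer and consumer never reference each other, only the queue between them.

```python
import queue
import threading

buf = queue.Queue()                # the "broker" standing between the two sides

def producer():
    for i in range(3):
        buf.put("task %d" % i)     # the sender doesn't know who consumes
    buf.put(None)                  # sentinel: no more work

def consumer(results):
    while True:
        item = buf.get()
        if item is None:
            break
        results.append(item)       # the receiver doesn't know who produced

results = []
t = threading.Thread(target=consumer, args=(results,))
t.start()
producer()
t.join()
print(results)  # ['task 0', 'task 1', 'task 2']
```

RabbitMQ plays the same role, but across processes and machines, with the durability, acknowledgement and routing features described below.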
 

 

III. Using RabbitMQ


1. A basic example
A minimal RabbitMQ send/receive pair.
send side:
import pika

connection = pika.BlockingConnection(
    pika.ConnectionParameters('localhost')
)
# Open a channel
channel = connection.channel()

# Declare the queue
channel.queue_declare(queue='hello')

# In RabbitMQ a message can never be sent directly to the queue;
# it always needs to go through an exchange.
channel.basic_publish(
    exchange='',            # the default exchange; more on exchanges later
    routing_key='hello',    # queue name
    body='Hello World!'     # message body
)
print(" [x] Sent 'Hello World!'")
connection.close()

 

receive side:

import pika

connection = pika.BlockingConnection(
    pika.ConnectionParameters('localhost')
)

channel = connection.channel()

# You may ask why we declare the queue again - we already declared it in the
# send code. We could avoid that if we were sure the queue already exists,
# e.g. if send.py had been run first. But we're not yet sure which program
# runs first, so it's good practice to declare the queue in both programs.
channel.queue_declare(queue='hello')

def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)

channel.basic_consume(
    callback,
    queue='hello',
    no_ack=True
)

print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()
 

 
2. Round-robin message dispatch

 

In this mode RabbitMQ by default dispatches the producer's (P) messages to the consumers (C) in turn, much like simple load balancing.
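The round-robin behaviour can be sketched without a broker; here a plain itertools.cycle stands in for RabbitMQ's dispatcher (an illustration only, not pika code):

```python
from itertools import cycle

consumers = {'c1': [], 'c2': [], 'c3': []}
rr = cycle(consumers)              # dispatch order: c1, c2, c3, c1, c2, ...

for i in range(6):                 # the producer publishes six messages
    consumers[next(rr)].append("msg %d" % i)

print(consumers)
# {'c1': ['msg 0', 'msg 3'], 'c2': ['msg 1', 'msg 4'], 'c3': ['msg 2', 'msg 5']}
```

Note this ignores how busy each consumer is; section 4 below shows how prefetch_count fixes that.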
send side:

 

import pika

connection = pika.BlockingConnection(
    pika.ConnectionParameters('localhost')
)
# Open a channel
channel = connection.channel()

# Declare the queue
channel.queue_declare(queue='task')

# In RabbitMQ a message can never be sent directly to the queue;
# it always needs to go through an exchange.
channel.basic_publish(
    exchange='',            # the default exchange
    routing_key='task',     # queue name
    body='Hello World!'     # message body
)
print(" [x] Sent 'Hello World!'")
connection.close()

receive side:

import pika
import time

connection = pika.BlockingConnection(
    pika.ConnectionParameters('localhost')
)

channel = connection.channel()

# Declared in both programs, in case the consumer starts first.
channel.queue_declare(queue='task')

def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)
    time.sleep(10)
    ch.basic_ack(delivery_tag=method.delivery_tag)  # confirm the task is done

channel.basic_consume(
    callback,
    queue='task',
    # With no_ack=True the consumer never sends an acknowledgement, so the
    # queue removes a message as soon as it is delivered, processed or not.
    # The default is False: the consumer confirms completion with
    # ch.basic_ack(delivery_tag=method.delivery_tag). If the consumer dies
    # mid-task, RabbitMQ notices the closed socket and hands the message to
    # the next consumer, until someone finishes it and acks.
    # no_ack=True
)

print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()

Note: ch.basic_ack(delivery_tag=method.delivery_tag) acknowledges that a task is done, and only then is the task deleted from the RabbitMQ queue. So if the machine processing a task dies, the task does not disappear - RabbitMQ redelivers it to the next machine, and keeps doing so until the task completes and is acknowledged. How does RabbitMQ know the machine died? Through the socket: when a machine goes down its socket closes, and RabbitMQ reclaims the task and dispatches it to the next consumer.
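The broker's bookkeeping described above can be modelled in a few lines of plain Python (a sketch with invented deliver/ack/consumer_died helpers, not the pika API): an unacked message is held aside rather than deleted, and goes back to the queue if its consumer dies.

```python
from collections import deque

pending = deque(['task-A', 'task-B'])   # messages waiting in the queue
unacked = {}                            # delivery_tag -> message held aside

def deliver(tag):
    msg = pending.popleft()
    unacked[tag] = msg                  # held until acked, not yet deleted
    return msg

def ack(tag):
    del unacked[tag]                    # only now is the message gone for good

def consumer_died(tag):
    # The broker sees the socket close and requeues anything unacked
    pending.appendleft(unacked.pop(tag))

msg = deliver(1)        # 'task-A' goes to consumer 1
consumer_died(1)        # consumer 1 crashes before acking
msg = deliver(2)        # 'task-A' is redelivered, this time to consumer 2
ack(2)                  # now it is removed for good
print(msg, list(pending))  # task-A ['task-B']
```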

 

 
3. Durability
Sometimes you need queues or messages to survive a broker restart by being written to disk; that is what durability is for.
send side:
import pika

connection = pika.BlockingConnection(
    pika.ConnectionParameters('localhost')
)
# Open a channel
channel = connection.channel()

# durable=True makes the queue itself survive a broker restart
channel.queue_declare(queue='forever1', durable=True)

channel.basic_publish(
    exchange='',
    routing_key='forever1',
    body='Hello World!',
    properties=pika.BasicProperties(
        delivery_mode=2,  # make the message persistent
    )
)
print(" [x] Sent 'Hello World!'")
connection.close()

receive side:

import pika
import time

connection = pika.BlockingConnection(
    pika.ConnectionParameters('localhost')
)

channel = connection.channel()

# Declared in both programs, in case the consumer starts first; the
# durable flag must match the declaration on the send side.
channel.queue_declare(queue='forever1', durable=True)

def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)
    time.sleep(10)
    ch.basic_ack(delivery_tag=method.delivery_tag)  # confirm the task is done

channel.basic_consume(
    callback,
    queue='forever1',
    # no_ack defaults to False: the consumer confirms completion with
    # ch.basic_ack(delivery_tag=method.delivery_tag); an unacked message is
    # redelivered to another consumer if this one dies.
    # no_ack=True
)

print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()
Note: channel.queue_declare(queue='forever1', durable=True) makes the queue durable; delivery_mode=2 in properties=pika.BasicProperties(delivery_mode=2) makes the message persistent. Both are needed - a durable queue alone does not persist the messages inside it.
 

 
4. Fair dispatch
If RabbitMQ simply hands out messages in order, ignoring consumer load, a low-spec consumer can pile up messages it cannot finish while a high-spec one stays idle. To fix this, set prefetch_count=1 on each consumer, which tells RabbitMQ: do not send me a new message until I have finished (acknowledged) the current one.

 

The core line:

channel.basic_qos(prefetch_count=1)

send side (essentially unchanged):

import pika

connection = pika.BlockingConnection(
    pika.ConnectionParameters('localhost')
)
# Open a channel
channel = connection.channel()

# durable=True makes the queue survive a broker restart
channel.queue_declare(queue='forever1', durable=True)

channel.basic_publish(
    exchange='',
    routing_key='forever1',
    body='Hello World!',
    properties=pika.BasicProperties(
        delivery_mode=2,  # make the message persistent
    )
)
print(" [x] Sent 'Hello World!'")
connection.close()

receive side:

import pika
import time

connection = pika.BlockingConnection(
    pika.ConnectionParameters('localhost')
)

channel = connection.channel()

# Declared in both programs, in case the consumer starts first.
channel.queue_declare(queue='forever1', durable=True)

def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)
    time.sleep(30)
    ch.basic_ack(delivery_tag=method.delivery_tag)  # confirm the task is done

channel.basic_qos(prefetch_count=1)  # at most one unacked message at a time
channel.basic_consume(
    callback,
    queue='forever1',
    # no_ack defaults to False: unacked messages are redelivered to another
    # consumer if this one dies before acknowledging.
    # no_ack=True
)

print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()

Note: in production you can give high-spec machines a larger prefetch (more messages in flight at once) and low-spec machines a smaller one, to avoid backlogs.
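To see why prefetch_count=1 helps, here is a small scheduling simulation (pure Python with invented names, not pika): each message goes to whichever worker becomes free first, so a worker that is 5x slower ends up handling far fewer messages instead of half of them.

```python
import heapq

def fair_dispatch(n_tasks, speeds):
    """Simulate prefetch_count=1: each message goes to the worker that
    becomes idle earliest. speeds maps worker name -> seconds per task."""
    free_at = [(0, name) for name in speeds]    # (time the worker is free, name)
    heapq.heapify(free_at)
    counts = {name: 0 for name in speeds}
    for _ in range(n_tasks):
        t, name = heapq.heappop(free_at)        # earliest-idle worker
        counts[name] += 1
        heapq.heappush(free_at, (t + speeds[name], name))
    return counts

print(fair_dispatch(12, {'fast': 1, 'slow': 5}))
# {'fast': 10, 'slow': 2}  -- versus 6/6 under plain round-robin
```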

 


 
5. Publish/subscribe
The examples so far were essentially one-to-one: each message goes to one named queue. Sometimes you want every queue to receive the message, like a broadcast - that is what exchanges are for.
 
An exchange is a very simple thing. On one side it receives messages from producers, and on the other side it pushes them to queues. The exchange must know exactly what to do with a message it receives. Should it be appended to a particular queue? Should it be appended to many queues? Or should it be discarded? The rules for that are defined by the exchange type.
 
An exchange is declared with a type, which determines which queues qualify to receive a message:
 
  • fanout: every queue bound to this exchange receives the message
  • direct: only the queue(s) whose binding key exactly equals the message's routing key receive it
  • topic: every queue whose binding key (which may be a pattern) matches the routing key receives it
 
Pattern symbols (words in a key are separated by dots): * substitutes for exactly one word, # substitutes for zero or more words.
Example: #.a matches a.a, aa.a, aaa.a, ...
         *.a matches a.a, b.a, c.a, ...
Note: a topic exchange with a binding key of # behaves like fanout.
 
  • headers: routes based on the message's headers instead of the routing key
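The * and # word-matching rules can be expressed as a short matcher. This is a simplified pure-Python sketch of topic semantics for illustration, not what RabbitMQ runs internally:

```python
def topic_matches(binding, key):
    """True if routing key `key` matches binding pattern `binding`:
    words are dot-separated, '*' = exactly one word, '#' = zero or more."""
    def match(b, k):
        if not b:
            return not k                # both exhausted -> match
        if b[0] == '#':
            # '#' may absorb 0..len(k) words: try every split
            return any(match(b[1:], k[i:]) for i in range(len(k) + 1))
        if not k:
            return False                # pattern left over, key exhausted
        if b[0] == '*' or b[0] == k[0]:
            return match(b[1:], k[1:])  # consume one word on each side
        return False
    return match(binding.split('.'), key.split('.'))

print(topic_matches('#.a', 'aa.a'))          # True
print(topic_matches('*.a', 'b.a'))           # True
print(topic_matches('*.a', 'x.y.a'))         # False: '*' is one word only
print(topic_matches('#', 'kern.critical'))   # True: '#' alone acts like fanout
```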
 

 
5.1 fanout
Every queue bound to the same exchange receives the publisher's messages.

 

send side:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='localhost'))
channel = connection.channel()

# Declare the exchange and its type
channel.exchange_declare(
    exchange='logs',
    type='fanout'
)

message = "info: Hello World!2"

channel.basic_publish(
    exchange='logs',     # exchange name
    routing_key='',      # required, and must be empty for fanout
    body=message         # message body
)
print(" [x] Sent %r" % message)

connection.close()

receive side:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='localhost'))
channel = connection.channel()

channel.exchange_declare(
    exchange='logs',
    type='fanout'
)
# No queue name is given, so RabbitMQ assigns a random one; exclusive=True
# deletes the queue automatically once its consumer disconnects.
result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue
print("random queue name:", queue_name)

channel.queue_bind(
    exchange='logs',
    queue=queue_name
)

print(' [*] Waiting for logs. To exit press CTRL+C')

def callback(ch, method, properties, body):
    print(" [x] %r" % body)

channel.basic_consume(
    callback,
    queue=queue_name,
    no_ack=True
)

channel.start_consuming()

Note: publishing and subscribing are both real-time, like a radio broadcast - a consumer only receives messages published while it is connected.

 

5.2 direct (selective receiving)
RabbitMQ also supports keyword routing: queues are bound to the exchange with a binding key, the sender publishes with a routing key, and the exchange uses the key to decide which queue(s) should get the message.
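The routing rule of a direct exchange is easy to model in plain Python (an illustrative sketch with invented helper names, not the pika API): the exchange copies a message to every queue whose binding key equals the routing key, and drops unroutable messages.

```python
from collections import defaultdict

bindings = defaultdict(list)            # routing key -> list of bound queues

def queue_bind(queue, routing_key):
    bindings[routing_key].append(queue)

def publish(routing_key, message):
    # A direct exchange delivers a copy to every queue whose binding key
    # equals the routing key; messages matching nothing are dropped.
    for q in bindings[routing_key]:
        q.append(message)

errors, all_logs = [], []
queue_bind(errors, 'error')             # this queue only wants errors
for sev in ('info', 'warning', 'error'):
    queue_bind(all_logs, sev)           # this queue wants every severity

publish('error', 'disk full')
publish('info', 'started')
print(errors, all_logs)  # ['disk full'] ['disk full', 'started']
```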

send side:

import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='direct_logs', type='direct')

severity = sys.argv[1] if len(sys.argv) > 1 else 'info'
message = ' '.join(sys.argv[2:]) or 'Hello World!'

channel.basic_publish(exchange='direct_logs', routing_key=severity, body=message)
print(" [x] Sent %r:%r" % (severity, message))
connection.close()

receive side:

import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='direct_logs', type='direct')

result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue

severities = sys.argv[1:]
if not severities:
    sys.stderr.write("Usage: %s [info] [warning] [error]\n" % sys.argv[0])
    sys.exit(1)
print(severities)
for severity in severities:
    channel.queue_bind(exchange='direct_logs', queue=queue_name, routing_key=severity)

print(' [*] Waiting for logs. To exit press CTRL+C')

def callback(ch, method, properties, body):
    print(" [x] %r:%r" % (method.routing_key, body))

channel.basic_consume(callback, queue=queue_name, no_ack=True)

channel.start_consuming()

 

5.3 topic (finer-grained filtering)

Although using the direct exchange improved our system, it still has limitations - it can't do routing based on multiple criteria.
 
In our logging system we might want to subscribe to not only logs based on severity, but also based on the source which emitted the log. You might know this concept from the syslog unix tool, which routes logs based on both severity (info/warn/crit...) and facility (auth/cron/kern...).
That would give us a lot of flexibility - we may want to listen to just critical errors coming from 'cron' but also all logs from 'kern'.

 

send side:

import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='topic_logs', type='topic')

routing_key = sys.argv[1] if len(sys.argv) > 1 else 'anonymous.info'

message = ' '.join(sys.argv[2:]) or 'Hello World!'
channel.basic_publish(exchange='topic_logs', routing_key=routing_key, body=message)
print(" [x] Sent %r:%r" % (routing_key, message))
connection.close()

receive side:

import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='topic_logs', type='topic')

result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue

binding_keys = sys.argv[1:]
if not binding_keys:
    sys.stderr.write("Usage: %s [binding_key]...\n" % sys.argv[0])
    sys.exit(1)

for binding_key in binding_keys:
    channel.queue_bind(exchange='topic_logs', queue=queue_name, routing_key=binding_key)

print(' [*] Waiting for logs. To exit press CTRL+C')

def callback(ch, method, properties, body):
    print(" [x] %r:%r" % (method.routing_key, body))

channel.basic_consume(callback, queue=queue_name, no_ack=True)

channel.start_consuming()

 


 

6. RPC (Remote Procedure Call)


server side:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))

channel = connection.channel()
channel.queue_declare(queue='rpc_queue')

def fib(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fib(n - 1) + fib(n - 2)

def on_request(ch, method, props, body):
    n = int(body)

    print(" [.] fib(%s)" % n)
    response = fib(n)

    # Reply on the queue the client named, echoing its correlation_id
    ch.basic_publish(exchange='',
                     routing_key=props.reply_to,
                     properties=pika.BasicProperties(correlation_id=props.correlation_id),
                     body=str(response))
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(on_request, queue='rpc_queue')

print(" [x] Awaiting RPC requests")
channel.start_consuming()

client side:

import pika
import uuid

class FibonacciRpcClient(object):
    def __init__(self):
        self.connection = pika.BlockingConnection(pika.ConnectionParameters(
            host='localhost'))
        self.channel = self.connection.channel()
        result = self.channel.queue_declare(exclusive=True)
        self.callback_queue = result.method.queue

        self.channel.basic_consume(self.on_response,  # fires as soon as a reply arrives
                                   no_ack=True,
                                   queue=self.callback_queue)

    def on_response(self, ch, method, props, body):
        # Only accept the reply that matches the request we just sent
        if self.corr_id == props.correlation_id:
            self.response = body

    def call(self, n):
        self.response = None
        self.corr_id = str(uuid.uuid4())

        self.channel.basic_publish(exchange='',
                                   routing_key='rpc_queue',
                                   properties=pika.BasicProperties(
                                       reply_to=self.callback_queue,
                                       correlation_id=self.corr_id,
                                   ),
                                   body=str(n))

        while self.response is None:
            self.connection.process_data_events()  # a non-blocking start_consuming()
        return int(self.response)

fibonacci_rpc = FibonacciRpcClient()

print(" [x] Requesting fib(30)")
response = fibonacci_rpc.call(30)
print(" [.] Got %r" % response)

print(" [x] Requesting fib(20)")
response = fibonacci_rpc.call(20)
print(" [.] Got %r" % response)
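The whole request/reply pattern above - a shared request queue, a private callback queue per client, and a correlation_id to pair replies with requests - can be mimicked in-process with queue.Queue and a thread (a sketch to show the mechanics, not pika code):

```python
import queue
import threading
import uuid

rpc_queue = queue.Queue()          # the shared request queue the server consumes

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def server():
    while True:
        props, body = rpc_queue.get()
        if body is None:           # shutdown sentinel
            break
        # Reply on the caller's private callback queue, echoing its id
        props['reply_to'].put((props['correlation_id'], str(fib(int(body)))))

def call(n):
    callback_queue = queue.Queue() # private per-request reply queue
    corr_id = str(uuid.uuid4())
    rpc_queue.put(({'reply_to': callback_queue, 'correlation_id': corr_id}, str(n)))
    while True:
        got_id, body = callback_queue.get()
        if got_id == corr_id:      # ignore stale or unrelated replies
            return int(body)

t = threading.Thread(target=server)
t.start()
print(call(10))  # 55
rpc_queue.put(({}, None))          # tell the server to stop
t.join()
```

The correlation_id check is the key detail: without it a client that crashed and restarted could mistake a late reply to an old request for the answer to its new one.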

 

 

 

Original post: http://www.cnblogs.com/breakering/p/7041215.html
