multiprocessing supports child processes, inter-process communication and data sharing, and different forms of synchronization, providing components such as Process, Queue, Pipe and Lock.

Constructor of the process class: Process([group[, target[, name[, args[, kwargs]]]]])
group: the "thread group"; not implemented yet, and the library reference notes it must be None.
target: the callable object to invoke.
name: the process name (an alias).
args: the tuple of positional arguments passed to the callable.
kwargs: the dict of keyword arguments passed to the callable.

Instance methods:
is_alive(): returns whether the process is running.
join([timeout]): blocks the calling process until the process whose join() was called terminates, or until the optional timeout is reached.
start(): the process is ready and waits to be scheduled by the CPU; a process is launched by calling start().
run(): called by start(); if no target was given when the Process instance was created, start() executes the default run() method.
terminate(): stops the worker process immediately, whether or not its task has finished.

Attributes:
authkey
daemon: like a thread's setDaemon; a daemon process is terminated automatically when its parent terminates, it cannot create child processes of its own, and it must be set before start() is called (see the sketch below).
exitcode: None while the process is running; -N means the process was terminated by signal N.
name: the process name.
pid: the process id.
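The daemon behaviour described above is easy to trip over, so here is a minimal sketch (an illustrative addition, not from the original post; the name background_task is made up) showing that daemon must be set before start() and that the daemon child is killed when the parent exits:

import multiprocessing
import time

def background_task():
    # loops forever; it only stops because the daemon flag ties its
    # lifetime to the parent process
    while True:
        print("daemon still running...")
        time.sleep(1)

if __name__ == '__main__':
    p = multiprocessing.Process(target=background_task)
    p.daemon = True        # must be set before start()
    p.start()
    time.sleep(3)
    print("main process exits; the daemon child is terminated with it")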
# Creating processes by calling a function
import multiprocessing
import time

def worker_1(interval):
    print("worker_1")
    time.sleep(interval)
    print("end worker_1")

def worker_2(interval):
    print("worker_2")
    time.sleep(interval)
    print("end worker_2")

if __name__ == "__main__":
    p1 = multiprocessing.Process(target=worker_1, args=(2,))
    p2 = multiprocessing.Process(target=worker_2, args=(3,))
    p1.start()
    p2.start()
    p1.join()
    p2.join()
    print('finish end')

# Defining the process as a class (subclass Process and override run())
import multiprocessing
import time

class ClockProcess(multiprocessing.Process):
    def __init__(self, interval):
        multiprocessing.Process.__init__(self)
        self.interval = interval

    def run(self):
        n = 5
        while n > 0:
            print("the time is {0}".format(time.ctime()))
            time.sleep(self.interval)
            n -= 1

if __name__ == '__main__':
    p = ClockProcess(3)
    p.start()
Note: the lock has to be passed into the function, because separate processes are involved and the lock object is copied into each child when it is created.
from multiprocessing import Process, Lock

def f(l, i):
    with l:                     # acquire the lock, release it on exit
        print('hello world %s' % i)

if __name__ == '__main__':
    lock = Lock()
    for num in range(10):
        Process(target=f, args=(lock, num)).start()
from multiprocessing import Process, Queue
import queue

def f(q, n):
    # q.put([123, 456, 'hello'])
    q.put(n * n + 1)
    print("son process", id(q))

if __name__ == '__main__':
    q = Queue()
    # q = queue.Queue()   # the standard-library queue.Queue cannot be shared between processes
    print("main process", id(q))

    for i in range(3):
        p = Process(target=f, args=(q, i))
        p.start()

    print(q.get())
    print(q.get())
    print(q.get())
The Pipe() function returns a pair of connection objects connected by a pipe which by default is duplex (two-way). For example:
from multiprocessing import Process, Pipe

def f(conn):
    conn.send([12, {"name": "yuan"}, 'hello'])
    response = conn.recv()
    print("response", response)
    conn.close()
    print("q_ID2:", id(conn))     # id of the child's copy of the connection

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    print("q_ID1:", id(child_conn))
    p = Process(target=f, args=(child_conn,))
    p.start()
    print(parent_conn.recv())     # prints "[12, {'name': 'yuan'}, 'hello']"
    parent_conn.send("Hello, son!")
    p.join()
The two connection objects returned by Pipe() represent the two ends of the pipe. Each connection object has send() and recv() methods (among others). Note that data in a pipe may become corrupted if two processes (or threads) try to read from or write to the same end of the pipe at the same time. Of course there is no risk of corruption from processes using different ends of the pipe at the same time.
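If several processes really must share the same end of a pipe, one common workaround (a minimal sketch of my own, not from the original post; worker and the message text are made up) is to serialize access to that end with a Lock:

from multiprocessing import Process, Pipe, Lock

def worker(conn, lock, n):
    # several children write to the same end, so take the lock
    # to keep each send atomic with respect to the others
    with lock:
        conn.send("message from worker %s" % n)

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    lock = Lock()
    workers = [Process(target=worker, args=(child_conn, lock, i)) for i in range(3)]
    for w in workers:
        w.start()
    for _ in workers:
        print(parent_conn.recv())
    for w in workers:
        w.join()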
Queue and Pipe only implement data exchange between processes; they do not implement data sharing, i.e. letting one process modify another process's data directly.
A manager object returned by Manager() controls a server process which holds Python objects and allows other processes to manipulate them using proxies. A manager returned by Manager() will support types list, dict, Namespace, Lock, RLock, Semaphore, BoundedSemaphore, Condition, Event, Barrier, Queue, Value and Array. For example:
from multiprocessing import Process, Manager

def f(d, l, n):
    d[n] = '1'
    d['2'] = 2
    d[0.25] = None
    l.append(n)
    # print(l)
    print("son process:", id(d), id(l))

if __name__ == '__main__':
    with Manager() as manager:
        d = manager.dict()
        l = manager.list(range(5))
        print("main process:", id(d), id(l))

        p_list = []
        for i in range(10):
            p = Process(target=f, args=(d, l, i))
            p.start()
            p_list.append(p)

        for res in p_list:
            res.join()

        print(d)
        print(l)
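The example above only exercises the dict and list proxies. As a rough illustration of some of the other manager types listed earlier (an added sketch, not from the original post), a shared counter can be built from manager.Value guarded by a manager Lock:

from multiprocessing import Process, Manager

def increment(counter, lock):
    for _ in range(100):
        with lock:                    # guard the read-modify-write on the shared value
            counter.value += 1

if __name__ == '__main__':
    with Manager() as manager:
        counter = manager.Value('i', 0)   # proxy for a shared integer
        lock = manager.Lock()
        workers = [Process(target=increment, args=(counter, lock)) for _ in range(4)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        print(counter.value)              # expected: 400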
A process pool maintains a set of worker processes internally. When a task is submitted, a process is taken from the pool; if no process in the pool is available, the program waits until one becomes available.
The pool provides two submission methods: apply (blocking) and apply_async (non-blocking). The example below uses apply_async with a callback; a short comparison sketch follows it.
from multiprocessing import Process, Pool
import time, os

def Foo(i):
    time.sleep(1)
    print(i)
    return i + 100

def Bar(arg):
    # the callback runs in the main process, with Foo's return value as its argument
    print(os.getpid())
    print(os.getppid())
    print('logger:', arg)

if __name__ == '__main__':
    pool = Pool(5)
    Bar(1)
    print("----------------")

    for i in range(10):
        # pool.apply(func=Foo, args=(i,))        # blocking
        # pool.apply_async(func=Foo, args=(i,))  # non-blocking
        pool.apply_async(func=Foo, args=(i,), callback=Bar)

    pool.close()
    pool.join()
    print('end')
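Since the example above only runs apply_async, here is a short comparison of the two submission methods (an added sketch, not from the original post, reusing a Foo like the one above):

from multiprocessing import Pool
import time

def Foo(i):
    time.sleep(1)
    return i + 100

if __name__ == '__main__':
    pool = Pool(3)
    # apply: blocking -- the caller waits for the task and gets its return value directly
    print(pool.apply(Foo, (1,)))          # prints 101 after about 1 second
    # apply_async: non-blocking -- returns an AsyncResult immediately
    result = pool.apply_async(Foo, (2,))
    print(result.get(timeout=5))          # prints 102
    pool.close()
    pool.join()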
Original article: https://www.cnblogs.com/-wenli/p/10819978.html