
Fetching web pages with multiple threads in Python

Posted: 2014-06-29 22:15:45


The Python crawler below is fairly basic: each thread only follows the links found on its own seed page, so as long as the list of seed URLs is large enough, the crawl can continue indefinitely. The code follows:


# coding: utf-8
'''
	Crawl web pages indefinitely
	@author wangbingyu
	@date 2014-06-26
'''
import re
import time
import threading
import urllib.request

class Download(threading.Thread):
	'''Downloader thread: repeatedly fetches its seed URL and saves the pages it links to.'''
	def __init__(self, url, thread_name):
		threading.Thread.__init__(self, name=thread_name)
		self.thread_stop = False
		self.url = url

	def run(self):
		while not self.thread_stop:
			links = self.get_urls(self.url)
			self.download_all(links)

	def stop(self):
		self.thread_stop = True

	def download_all(self, links):
		try:
			# Iterate over every link (the original loop stopped one short)
			for link in links:
				urllib.request.urlretrieve(link, r'E:\upload\download\%s.html' % time.time())
		except Exception as ex:
			print('download_all:', ex)

	def get_urls(self, url):
		result = []
		try:
			s = urllib.request.urlopen(url).read().decode('utf-8', errors='ignore')
			# Capture the href value of every absolute http(s) link on the page
			for link in re.findall(r'<a[^>]*href="(https?://[^"]+)"', s, re.I):
				result.append(link)
		except Exception as ex:
			print('get_urls:', ex)
		return result

if __name__ == '__main__':
	seeds = ['http://www.baidu.com', 'http://www.qq.com', 'http://www.taobao.com', 'http://www.sina.com.cn']
	threads = [Download(seed, 'thread%s' % i) for i, seed in enumerate(seeds)]
	for t in threads:
		t.start()
	for t in threads:
		t.join()
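The same per-URL fan-out can also be expressed with the standard library's thread pool instead of subclassing threading.Thread. This is a minimal offline sketch: the `extract_links` helper is a hypothetical name (not from the original code), and it runs over already-fetched HTML strings so the example works without network access.

```python
import re
from concurrent.futures import ThreadPoolExecutor

def extract_links(html):
    """Return every absolute http(s) link found in an HTML string.
    (Hypothetical helper mirroring the crawler's get_urls regex.)"""
    return re.findall(r'<a[^>]*href="(https?://[^"]+)"', html, re.I)

# Stand-ins for pages the crawler would have downloaded
pages = [
    '<a href="http://www.baidu.com">baidu</a>',
    '<p>no links here</p>',
    '<a href="https://www.qq.com">qq</a><a href="/relative">skipped</a>',
]

# map() runs extract_links on the worker threads, preserving input order
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(extract_links, pages))

print(results)
```

The pool reuses a fixed number of worker threads, so the crawl no longer spawns one thread per seed, and `pool.map` keeps results in the same order as the inputs.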





Original article: http://blog.csdn.net/u014649204/article/details/35558985
