
A Preliminary Baidu Crawler

Posted: 2015-12-06 22:34:46

from bs4 import BeautifulSoup
import urllib
import urllib2
import re
import urlparse

param = raw_input("Please input what you want to search: ")
# Search URL format: www.baidu.com/s?&wd=<keyword>
yeshu = int(raw_input("Please input the number of pages (1-10): "))
# Pagination format: www.baidu.com/s?wd=<keyword>&pn=20  (pn advances in steps of 10)

for page in range(yeshu):
    pn = page * 10
    # URL-encode the keyword so non-ASCII queries do not break the request
    url = "http://www.baidu.com/s?&wd=" + urllib.quote(param) + "&pn=" + str(pn)
    try:
        req = urllib2.urlopen(url)
    except urllib2.URLError:
        continue
    content = req.read()

    soup = BeautifulSoup(content, "html.parser")

    # Each result title on Baidu's result page sits in an element with class "t"
    links = soup.find_all(class_="t")

    href = []
    pattern = re.compile(r'href="(.+?)"')
    for link in links:
        rs = pattern.findall(str(link))
        if len(rs) == 0:
            continue          # skip results without an extractable link
        href.append(str(rs[0]))

    for t in range(len(href)):
        try:
            ss = urllib2.urlopen(href[t])
        except urllib2.URLError:
            continue
        real = ss.geturl()                    # follow Baidu's redirect to the real URL
        domain = urlparse.urlparse(real)
        realdomain = domain.netloc
        fp = open("url.txt", "a+")            # append each result domain to url.txt
        fp.write(realdomain + "\n")
        fp.close()
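
The script above targets Python 2 (urllib2, raw_input). For reference, here is a minimal sketch of the same flow on Python 3's standard library (urllib.request / urllib.parse) plus bs4. The class="t" selector and the redirect-to-real-URL behaviour are assumptions carried over from the original post and may no longer match Baidu's current result markup, which also tends to require a User-Agent header.

# A rough Python 3 port of the crawler above -- a sketch, not a tested drop-in.
import re
import urllib.error
import urllib.parse
import urllib.request

from bs4 import BeautifulSoup

keyword = input("Please input what you want to search: ")
pages = int(input("Please input the number of pages (1-10): "))

pattern = re.compile(r'href="(.+?)"')

for page in range(pages):
    # Same query scheme as above: wd=<keyword>, pn in steps of 10
    url = ("http://www.baidu.com/s?wd=" + urllib.parse.quote(keyword)
           + "&pn=" + str(page * 10))
    try:
        with urllib.request.urlopen(url) as resp:
            content = resp.read()
    except urllib.error.URLError:
        continue

    soup = BeautifulSoup(content, "html.parser")
    # Collect the href of every element with class "t" (assumed result-title class)
    hrefs = [m for tag in soup.find_all(class_="t")
             for m in pattern.findall(str(tag))]

    with open("url.txt", "a") as fp:
        for link in hrefs:
            try:
                with urllib.request.urlopen(link) as resp:
                    real = resp.geturl()   # resolve Baidu's redirect to the real page
            except (urllib.error.URLError, ValueError):
                continue
            fp.write(urllib.parse.urlparse(real).netloc + "\n")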

    

 


Original post: http://www.cnblogs.com/elliottc/p/5024437.html
