
Scraping Qiushibaike Articles with an IP Proxy Pool and a User-Agent Pool



A simple crawler that uses an IP proxy pool and a user-agent pool.

import re
import random
import urllib.request as urlreq
import urllib.error as urlerr

# User-agent pool
uapools = [
    "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Firefox/52.0",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.79 Safari/537.36 Edge/14.14393"
]
# IP proxy pool (filled at runtime)
ipools = []

# Pick a random user agent and install it on a global opener
def get_ua(uapools):
    thisua = random.choice(uapools)
    header = ("User-Agent", thisua)
    url_opener = urlreq.build_opener()
    url_opener.addheaders = [header]
    urlreq.install_opener(url_opener)

# Build the IP pool: scrape the proxies listed on xicidaili's front page and store them as "ip:port" strings
def get_ipools(ipurl):
    get_ua(uapools)
    data = urlreq.urlopen(ipurl).read().decode("utf-8","ignore")
    pat = "/></td>.*?<td>(.*?)</td>.*?<td>(.*?)</td>"
    ret = re.compile(pat, re.S).findall(data)
    # print(ret)
    for i in ret:
        ips = i[0] + ":" + i[1]
        ipools.append(ips)
    return ipools

# Parse the article text out of a Qiushibaike page
def get_article(data):
    pat = '<div class="content">.*?<span>(.*?)</span>.*?</div>'
    rst = re.compile(pat, re.S).findall(data)
    print(rst)
    # down_file(rst, i)

def get_html(urlweb):
    for i in range(1, 6):     # scrape the first five pages of articles
        while 1:
            try:
                page = urlweb + str(i)
                thisua = random.choice(uapools)
                header = ("User-Agent", thisua)               #构建用户代理
                ip = random.choice(ipools)
                print("当前使用的ip为" + ip)
                proxy = urlreq.ProxyHandler({"http": ip})   #构建IP代理
                url_opener = urlreq.build_opener(proxy, urlreq.HTTPHandler)   #添加IP代理头
                url_opener.addheaders = [header]                           #添加用户代理头
                urlreq.install_opener(url_opener)                             #设为全局变量
                data = urlreq.urlopen(page).read().decode("utf-8","ignore")
            except Exception as e:
                print(e)
                ipools.remove(ip)   # on failure, drop the bad IP from the pool and retry this page
                continue
            get_article(data)   # parse the articles on this page
            break               # this page is done, move on to the next

if __name__ == "__main__":
    ipurl = "https://www.xicidaili.com/nn/"
    ipools = get_ipools(ipurl)        # build the IP pool
    urlweb = "https://www.qiushibaike.com/text/page/"
    get_html(urlweb)
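
The pool above accepts every IP scraped from xicidaili and only discards one after a request fails mid-crawl. Below is a minimal sketch of pre-filtering dead proxies before crawling; it reuses the urlreq alias imported at the top of the script, and the helper name filter_ipools, the test URL http://httpbin.org/ip, and the 5-second timeout are illustrative assumptions, not part of the original code.

# Hypothetical helper (not in the original script): keep only proxies that
# answer a small test request within the timeout.
def filter_ipools(ipools, test_url="http://httpbin.org/ip", timeout=5):
    alive = []
    for ip in ipools:
        proxy = urlreq.ProxyHandler({"http": ip})            # route the test request through this proxy
        opener = urlreq.build_opener(proxy, urlreq.HTTPHandler)
        try:
            opener.open(test_url, timeout=timeout)            # raises URLError/HTTPError on failure
            alive.append(ip)
        except Exception:
            pass                                              # unreachable proxy, skip it
    return alive

# Example usage: ipools = filter_ipools(get_ipools(ipurl))

Calling it once right after get_ipools trims the pool up front, so get_html spends less time cycling through proxies that were never reachable.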


Original article: https://blog.51cto.com/yinsuifeng/2397031
