
A Simple Python Crawler for Qiushibaike (糗事百科)

Date: 2015-08-06 18:35:20      Views: 281      Comments: 0

Tags: linux   crawler   python

# coding:utf-8


import time
import random
import urllib2
from bs4 import BeautifulSoup

# BeautifulSoup is used to parse the fetched HTML

# base URL of the text-only section; the page number is appended
url = 'http://www.qiushibaike.com/text/page/'

# pool of User-Agent strings, one picked at random per request

my_headers = [
    'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:39.0) Gecko/20100101 Firefox/39.0',
    'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/6.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)',
    'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0)',
    'ELinks/0.12pre5 (textmode; Linux; -)'
]


# fetch a page, sending a randomly chosen User-Agent
def get_con(url, headers):
    random_header = random.choice(headers)
    req = urllib2.Request(url)
    req.add_header('User-Agent', random_header)
    req.add_header('Host', 'www.qiushibaike.com')
    req.add_header('Referer', 'http://www.qiushibaike.com/')
    content = urllib2.urlopen(req).read()
    return content
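For readers on Python 3, where `urllib2` was merged into `urllib.request`, the same request-building logic might look like the sketch below. `build_request` is a hypothetical helper name, and only two of the User-Agent strings from the pool above are repeated here for brevity:

```python
import random
import urllib.request

# subset of the User-Agent pool defined in the article above
USER_AGENTS = [
    'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:39.0) Gecko/20100101 Firefox/39.0',
    'ELinks/0.12pre5 (textmode; Linux; -)',
]

def build_request(url, user_agents=USER_AGENTS):
    # attach a randomly chosen User-Agent and the same Referer as above
    req = urllib.request.Request(url)
    req.add_header('User-Agent', random.choice(user_agents))
    req.add_header('Referer', 'http://www.qiushibaike.com/')
    return req

def get_con(url):
    # urlopen returns bytes in Python 3, so decode before parsing
    with urllib.request.urlopen(build_request(url), timeout=10) as resp:
        return resp.read().decode('utf-8', errors='replace')
```

Note that `urllib.request` stores header names capitalized (e.g. `User-agent`), which matters if you inspect the request afterwards.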

# print every story on the page

def get_txt(haha):
    soup = BeautifulSoup(haha, 'html.parser')
    all_txt = soup.find_all('div', class_="content")
    i = 1
    for txt in all_txt:
        # get_text() is more robust than slicing the raw HTML by offset
        con = txt.get_text().strip()
        print str(i), con
        i = i + 1
        time.sleep(3)
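If BeautifulSoup is not installed, the same extraction can be sketched with the standard library's `html.parser`. `ContentExtractor` is a hypothetical name; it tracks nested `<div>` depth so that an inner `</div>` does not end the block early, and it assumes the class attribute is exactly `"content"`:

```python
from html.parser import HTMLParser

class ContentExtractor(HTMLParser):
    """Collect the text inside every <div class="content"> block."""

    def __init__(self):
        super().__init__()
        self.depth = 0      # nesting depth inside a content div, 0 = outside
        self.items = []     # extracted story texts
        self._buf = []

    def handle_starttag(self, tag, attrs):
        if self.depth:
            if tag == 'div':
                self.depth += 1          # nested div inside the block
        elif tag == 'div' and dict(attrs).get('class') == 'content':
            self.depth = 1               # entering a content block
            self._buf = []

    def handle_endtag(self, tag):
        if self.depth and tag == 'div':
            self.depth -= 1
            if self.depth == 0:          # content block closed
                self.items.append(''.join(self._buf).strip())

    def handle_data(self, data):
        if self.depth:
            self._buf.append(data)
```

Feed it the fetched HTML with `parser.feed(haha)` and read the stories from `parser.items`.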

# read the starting page number from the user

page = raw_input("Please input a number:")
p = int(page)


# loop from the chosen page to the last one, printing every story
while p < 36:
    haha = get_con(url + str(p) + '?s=4796159', my_headers)
    get_txt(haha)
    print "Page " + str(p)
    p = p + 1
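The loop above hard-codes the page limit and mixes URL building with fetching. A small sketch of the same pagination in Python 3 separates the two; `page_urls` and `crawl` are hypothetical names, and `?s=4796159` is the query string from the original URL:

```python
import time

def page_urls(base, start, last, query='?s=4796159'):
    # one URL per page number, from start to last inclusive
    return [base + str(p) + query for p in range(start, last + 1)]

def crawl(base, start, last, fetch, show, delay=1.0):
    # fetch/show are injected, so the loop itself needs no network access
    for page_url in page_urls(base, start, last):
        show(fetch(page_url))
        time.sleep(delay)  # be polite between requests
```

Injecting `fetch` and `show` also makes the loop easy to test with stubs instead of live requests.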
   

This article is from the "World" blog; please keep this attribution: http://xiajie.blog.51cto.com/6044823/1682355
