
pyspider Example Code, Part 3: Parsing Page Data with PyQuery


This series of articles records and explains pyspider example code, in the hope of offering a useful starting point. The official example site is http://demo.pyspider.org/, but it hosts too many examples to know where to begin, so I have picked out a few classic ones and explain them briefly, hoping they help newcomers.

Example overview:

This example focuses on parsing the returned response page data with PyQuery. response.doc is pyspider's primary way of parsing page data, and its basic usage should be mastered. Examples of the other return types will follow in later articles.

The content pyspider crawls is returned through the callback's response parameter, which offers several ways to parse it (a combined sketch follows the list):
1. response.json parses JSON data
2. response.doc returns a PyQuery object
3. response.etree returns an lxml object
4. response.text returns the unicode text
5. response.content returns the raw bytes
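
As a combined illustration, here is a minimal sketch of a handler touching each accessor; the URL is a placeholder rather than one of the sites crawled in the examples below.

#!/usr/bin/env python
# -*- encoding: utf-8 -*-
# A minimal sketch of the response accessors; the URL is a placeholder.
from pyspider.libs.base_handler import *

class DemoHandler(BaseHandler):
    def on_start(self):
        self.crawl('http://example.com/', callback=self.parse_page)

    def parse_page(self, response):
        return {
            'title': response.doc('title').text(),  # response.doc -> PyQuery object
            'chars': len(response.text),            # response.text -> unicode text
            'bytes': len(response.content),         # response.content -> raw bytes
            # for a JSON endpoint, response.json would return the parsed object,
            # and response.etree gives the underlying lxml document
        }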

Usage:

PyQuery accepts CSS selectors as arguments when parsing a page. For more PyQuery usage, see the complete PyQuery API.

Official PyQuery reference manual: https://pythonhosted.org/pyquery/

response.doc('.ml.mlt.mtw.cl > li').items()
response.doc('.pti > .pdbt > .authi > em > span').attr('title')
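
The same two calls can be tried outside pyspider with the pyquery library itself. A minimal standalone sketch; the HTML and its content are made up for illustration:

# Standalone sketch of .items() and .attr(); the HTML is made up.
from pyquery import PyQuery as pq

html = '''
<ul class="ml mlt mtw cl">
  <li><span title="2016-11-29">first post</span></li>
  <li><span title="2016-11-30">second post</span></li>
</ul>
'''
doc = pq(html)
for li in doc('.ml.mlt.mtw.cl > li').items():   # .items() yields PyQuery objects
    print(li('span').attr('title'), li.text())  # .attr() reads an attribute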

CSS selectors: for more details, see the CSS Selector Reference.

Selector             | Example              | Description
.class               | .intro               | Selects all elements with class="intro"
#id                  | #firstname           | Selects the element with id="firstname"
element              | p                    | Selects all <p> elements
element,element      | div, p               | Selects all <div> elements and all <p> elements
element element      | div p                | Selects all <p> elements inside <div> elements
element>element      | div > p              | Selects all <p> elements whose parent is a <div> element
[attribute]          | [target]             | Selects all elements with a target attribute
[attribute=value]    | [target=_blank]      | Selects all elements with target="_blank"
[attribute^=value]   | a[href^="https"]     | Selects every <a> element whose href attribute value begins with "https"
[attribute$=value]   | a[href$=".pdf"]      | Selects every <a> element whose href attribute value ends with ".pdf"
[attribute*=value]   | a[href*="w3schools"] | Selects every <a> element whose href attribute value contains the substring "w3schools"
:checked             | input:checked        | Selects every checked <input> element
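
The attribute selectors from the table work unchanged in PyQuery. A minimal sketch, again on made-up HTML:

# Sketch of attribute selectors in pyquery; the HTML is made up.
from pyquery import PyQuery as pq

html = '''
<div>
  <a href="https://example.com/a.pdf" target="_blank">secure pdf</a>
  <a href="http://example.com/page.html">plain page</a>
</div>
'''
doc = pq(html)
print(doc('a[href^="https"]').text())  # begins with "https" -> secure pdf
print(doc('a[href$=".pdf"]').text())   # ends with ".pdf"    -> secure pdf
print(doc('[target=_blank]').text())   # has target="_blank" -> secure pdf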

Example code:

1. Basic PyQuery usage

#!/usr/bin/env python
# -*- encoding: utf-8 -*-
# Created on 2016-10-09 15:16:04
# Project: douban_rent

from pyspider.libs.base_handler import *

# Douban group name -> discussion list URL (pages are selected via the start parameter)
groups = {u'上海租房': 'https://www.douban.com/group/shanghaizufang/discussion?start=',
          u'上海租房@长宁租房/徐汇/静安租房': 'https://www.douban.com/group/zufan/discussion?start=',
          u'上海短租日租小组': 'https://www.douban.com/group/chuzu/discussion?start=',
          u'上海短租': 'https://www.douban.com/group/275832/discussion?start=',
          }

class Handler(BaseHandler):
    crawl_config = {
    }

    @every(minutes=24 * 60)
    def on_start(self):
        for i in groups:
            url = groups[i] + '0'  # first page: start=0
            self.crawl(url, callback=self.index_page, validate_cert=False)

    @config(age=10 * 24 * 60 * 60)
    def index_page(self, response):
        for each in response.doc('.olt .title a').items():
            self.crawl(each.attr.href, callback=self.detail_page)

        # next page
        for each in response.doc('.next a').items():
            self.crawl(each.attr.href, callback=self.index_page)

    @config(priority=2)
    def detail_page(self, response):
        count_not = 0
        notlist = []
        # count replies whose author differs from the original poster
        for i in response.doc('.bg-img-green a').items():
            if i.text() != response.doc('.from a').text():
                count_not += 1
                notlist.append(i.text())
        for i in notlist:
            print(i)

        return {
            "url": response.url,
            "title": response.doc('title').text(),
            "author": response.doc('.from a').text(),
            "time": response.doc('.color-green').text(),
            "content": response.doc('#link-report p').text(),
            "回应数": len([x.text() for x in response.doc('h4').items()]),  # number of replies
            # "最后回帖时间": [x for x in response.doc('.pubtime').items()][-1].text(),  # time of last reply
            "非lz回帖数": count_not,  # replies not by the original poster
            "非lz回帖人数": len(set(notlist)),  # distinct repliers other than the original poster
            # "主演": [x.text() for x in response.doc('.actor a').items()],  # starring actors
        }
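
One detail worth noting above: each.attr.href is pyquery's attribute-style shorthand for each.attr('href'). A one-line check on made-up HTML:

# attr.href and attr('href') are equivalent in pyquery; the HTML is made up.
from pyquery import PyQuery as pq
link = pq('<a href="https://www.douban.com/group/example/">topic</a>')
assert link.attr.href == link.attr('href')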


2. PyQuery selectors with lxml element traversal

#!/usr/bin/env python
# -*- encoding: utf-8 -*-
# Created on 2015-10-09 09:00:10
# Project: learn_pyspider_db

from pyspider.libs.base_handler import *
import re


class Handler(BaseHandler):
    crawl_config = {
    }

    @every(minutes=24 * 60)
    def on_start(self):
        self.crawl('http://movie.douban.com/tag/', callback=self.index_page)

    @config(age=10 * 24 * 60 * 60)
    def index_page(self, response):
        for each in response.doc('.tagCol a').items():
            self.crawl(each.attr.href, callback=self.list_tag_page)

    @config(age=10 * 24 * 60 * 60)
    def list_tag_page(self, response):
        all = response.doc('.more-links')
        self.crawl(all.attr.href, callback=self.list_page)

    @config(age=10 * 24 * 60 * 60)
    def list_page(self, response):
        for each in response.doc('.title').items():
            self.crawl(each.attr.href, callback=self.detail_page)
        # crawl the next page
        next_page = response.doc('.next > a')
        self.crawl(next_page.attr.href, callback=self.list_page)

    @config(priority=2)
    def detail_page(self, response):
        director = ''
        # iterating a PyQuery result directly yields raw lxml elements
        for item in response.doc('#info > span > .pl'):
            if item.text == u'导演':  # u'导演' means "Director"
                next_item = item.getnext()
                director = next_item.getchildren()[0].text
                break
        return {
            "url": response.url,
            "title": response.doc('title').text(),
            "director": director,
        }
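
detail_page above depends on the fact that iterating a PyQuery result yields raw lxml elements, which expose .text, .getnext() and .getchildren(). A standalone sketch of that traversal, on made-up HTML shaped like a Douban info block:

# Sketch of the lxml traversal used in detail_page; the HTML is made up.
from pyquery import PyQuery as pq

html = u'''
<div id="info">
  <span><span class="pl">导演</span> <span class="attrs"><a href="#">Ang Lee</a></span></span>
</div>
'''
doc = pq(html)
for item in doc('#info > span > .pl'):        # lxml elements, not PyQuery
    if item.text == u'导演':                  # "Director"
        sibling = item.getnext()              # next sibling element: span.attrs
        print(sibling.getchildren()[0].text)  # -> Ang Lee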

3. Complex PyQuery selectors and pagination

#!/usr/bin/env python
# -*- encoding: utf-8 -*-
# Created on 2016-10-12 06:39:31
# Project: qx_zj_poi2

import re
from pyspider.libs.base_handler import *


class Handler(BaseHandler):
    crawl_config = {
    }

    @every(minutes=24 * 60)
    def on_start(self):
        self.crawl('http://www.poi86.com/poi/amap/province/330000.html', callback=self.city_page)

    @config(age=24 * 60 * 60)
    def city_page(self, response):
        for each in response.doc('body > div:nth-child(2) > div > div.panel-body > ul > li > a').items():
            self.crawl(each.attr.href, callback=self.district_page)

    @config(age=24 * 60 * 60)
    def district_page(self, response):
        for each in response.doc('body > div:nth-child(2) > div > div.panel-body > ul > li > a').items():
            self.crawl(each.attr.href, callback=self.poi_idx_page)

    @config(age=24 * 60 * 60)
    def poi_idx_page(self, response):
        for each in response.doc('td > a').items():
            self.crawl(each.attr.href, callback=self.poi_dtl_page)
        # next page
        for each in response.doc('body > div:nth-child(2) > div > div.panel-body > div > ul > li:nth-child(13) > a').items():
            self.crawl(each.attr.href, callback=self.poi_idx_page)

    @config(priority=100)
    def poi_dtl_page(self, response):
        return {
            "url": response.url,
            "id": re.findall(r'\d+', response.url)[1],  # [0] is the '86' in poi86.com
            "name": response.doc('body > div:nth-child(2) > div:nth-child(2) > div.panel-heading > h1').text(),
            "province": response.doc('body > div:nth-child(2) > div:nth-child(2) > div.panel-body > ul > li:nth-child(1) > a').text(),
            "city": response.doc('body > div:nth-child(2) > div:nth-child(2) > div.panel-body > ul > li:nth-child(2) > a').text(),
            "district": response.doc('body > div:nth-child(2) > div:nth-child(2) > div.panel-body > ul > li:nth-child(3) > a').text(),
            "addr": response.doc('body > div:nth-child(2) > div:nth-child(2) > div.panel-body > ul > li:nth-child(4)').text(),
            "tel": response.doc('body > div:nth-child(2) > div:nth-child(2) > div.panel-body > ul > li:nth-child(5)').text(),
            "type": response.doc('body > div:nth-child(2) > div:nth-child(2) > div.panel-body > ul > li:nth-child(6)').text(),
            "dd_map": response.doc('body > div:nth-child(2) > div:nth-child(2) > div.panel-body > ul > li:nth-child(7)').text(),
            "hx_map": response.doc('body > div:nth-child(2) > div:nth-child(2) > div.panel-body > ul > li:nth-child(8)').text(),
            "bd_map": response.doc('body > div:nth-child(2) > div:nth-child(2) > div.panel-body > ul > li:nth-child(9)').text(),
        }
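
A note on the "id" field above: r'\d+' also matches the 86 in the poi86.com domain, so the first match is never the POI id, which is why the code takes index [1]. A quick sketch; the detail URL below is a guess at the site's URL scheme:

# Why index [1]: the domain itself contains digits. The URL is a guessed example.
import re
url = 'http://www.poi86.com/poi/amap/detail/45186409.html'
print(re.findall(r'\d+', url))  # ['86', '45186409'] -> [1] is the POI id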


Original article: http://www.cnblogs.com/microman/p/6111711.html
