
Ebrun (亿邦动力) scraping example, continuously updated

Posted: 2020-02-09 09:18:31


# -*- coding: utf-8 -*-
import scrapy
from ybdlspider.items import YbdlspiderItem
import re


class YbSpider(scrapy.Spider):
    name = "yb"
    allowed_domains = ["ebrun.com"]
    start_urls = ["http://www.ebrun.com/retail/1"]  # first page of the retail channel

    def parse(self, response):  # titles and detail-page URLs from the list page
        url_list = response.xpath('//div/a[@eb="com_chan_lcol_fylb"]')
        for i in url_list:
            item = YbdlspiderItem()
            item["title"] = i.xpath("./@title").extract_first()
            item["href"] = i.xpath("./@href").extract_first()

            yield scrapy.Request(item["href"], callback=self.parse_detail, meta={"item": item})
        # pagination: read the current page number out of the URL and request the next page
        beforeurl = response.url
        pat1 = r"/retail/(\d+)"
        page = re.search(pat1, beforeurl).group(1)
        page = int(page) + 1
        if page < 3:  # page limit
            nexturl = "http://www.ebrun.com/retail/" + str(page)
            yield scrapy.Request(nexturl, callback=self.parse)

    def parse_detail(self, response):  # body text and publish time from the detail page
        item = response.meta["item"]
        item["content"] = response.xpath('//section/article/div[@class="post-text"]//p/text()').extract()
        item["time"] = response.xpath('//html/body/main/section/article/div/p/span[@class="f-right"]/text()').extract_first()
        print(item)  # debug output; prints to stdout regardless of LOG_LEVEL
        yield item
        
spider (yb.py)
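The spider imports YbdlspiderItem from ybdlspider.items, but the post does not show that file. Here is a minimal sketch of what it would need to contain, assuming only the four fields the spider actually fills in (title, href, content, time):

# -*- coding: utf-8 -*-
# items.py -- sketch only; this file is not shown in the original post.
# Field names follow the keys the spider assigns above.
import scrapy

class YbdlspiderItem(scrapy.Item):
    title = scrapy.Field()    # article title from the list-page link
    href = scrapy.Field()     # detail-page URL
    content = scrapy.Field()  # list of paragraph strings from the detail page
    time = scrapy.Field()     # publish-time string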
# -*- coding: utf-8 -*-

# Scrapy settings for ybdlspider project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = "ybdlspider"

SPIDER_MODULES = ["ybdlspider.spiders"]
NEWSPIDER_MODULE = "ybdlspider.spiders"


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'ybdlspider (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
LOG_LEVEL = "WARNING"
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
# mobile UC Browser user agent
USER_AGENT = "Mozilla/5.0 (Linux; U; Android 8.0.0; zh-CN; MHA-AL00 Build/HUAWEIMHA-AL00) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/57.0.2987.108 UCBrowser/12.1.4.994 Mobile Safari/537.36"
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# DEFAULT_REQUEST_HEADERS = {
#     'User-Agent': 'Mozilla/5.0 (Linux; U; Android 8.0.0; zh-CN; MHA-AL00 Build/HUAWEIMHA-AL00) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/57.0.2987.108 UCBrowser/12.1.4.994 Mobile Safari/537.36',
#     }
# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'ybdlspider.middlewares.YbdlspiderSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'ybdlspider.middlewares.YbdlspiderDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    "ybdlspider.pipelines.YbdlspiderPipeline": 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
settings.py
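ITEM_PIPELINES enables ybdlspider.pipelines.YbdlspiderPipeline at priority 300, but the pipeline itself is not included in the post. A minimal sketch of one plausible implementation that writes each item as a line of JSON (the output file name yb_items.jl is an assumption, not from the original):

# -*- coding: utf-8 -*-
# pipelines.py -- sketch only; the original post does not include this file.
import json

class YbdlspiderPipeline(object):
    def open_spider(self, spider):
        # output path is a placeholder choice, not from the original post
        self.file = open("yb_items.jl", "w", encoding="utf-8")

    def process_item(self, item, spider):
        # one JSON object per line; ensure_ascii=False keeps Chinese text readable
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + "\n")
        return item

    def close_spider(self, spider):
        self.file.close()

With these files in place, the crawl is started from the project root with scrapy crawl yb; since LOG_LEVEL is "WARNING", the console shows only the print(item) output and any warnings.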

 



Original post: https://www.cnblogs.com/lizhen2020/p/12286014.html
