Pipeline-based persistent storage
Full-site data crawling
The five core components
Passing parameters between requests (request meta)
Large-file downloads in scrapy (implemented with a special form of pipeline class)
In the spider class, store the parsed image URLs in the item, then submit the item to the designated pipeline
In the pipeline file, import: from scrapy.pipelines.images import ImagesPipeline
Define a custom pipeline class that inherits from ImagesPipeline
Override the following three methods in the pipeline class:
def file_path(self, request, response=None, info=None): specify the file path (file name)
def get_media_requests(self, item, info): send a GET request for the image URL stored in the item
def item_completed(self, results, item, info): return the item
Example code: binary (image/video) download
spidersName
# -*- coding: utf-8 -*-
import scrapy
from imgPro.items import ImgproItem

class ImgdemoSpider(scrapy.Spider):
    name = 'imgDemo'
    # allowed_domains = ['www.xxx.com']
    start_urls = ['http://www.521609.com/daxuemeinv/']

    def parse(self, response):
        li_list = response.xpath('//*[@id="content"]/div[2]/div[2]/ul/li')
        for li in li_list:
            img_src = 'http://www.521609.com' + li.xpath('./a[1]/img/@src').extract_first()
            img_name = li.xpath('./a[2]/b/text() | ./a[2]/text()').extract_first() + '.jpg'
            print(img_name)
            item = ImgproItem()
            item['img_src'] = img_src
            item['img_name'] = img_name
            yield item
items
# -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html
import scrapy

class ImgproItem(scrapy.Item):
    # define the fields for your item here like:
    img_src = scrapy.Field()
    img_name = scrapy.Field()
pipelines
# -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
from scrapy.pipelines.images import ImagesPipeline
import scrapy

# class ImgproPipeline(object):
#     def process_item(self, item, spider):
#         return item

class ImgproPipeline(ImagesPipeline):
    # Specify the storage directory (file name)
    def file_path(self, request, response=None, info=None):
        # receive the meta attached to the request
        item = request.meta['item']
        return item['img_name']

    # Send a request for the specified resource
    def get_media_requests(self, item, info):
        # meta is passed along to file_path
        yield scrapy.Request(item['img_src'], meta={'item': item})

    # Return the item, handing it to the next pipeline class to be executed
    def item_completed(self, results, item, info):
        return item
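For the custom ImagesPipeline above to take effect, the project settings must register it and point IMAGES_STORE at a storage directory. A minimal sketch (the pipeline path matches the imgPro project above; the folder name and the priority 300 are illustrative assumptions):

```python
# settings.py (sketch)
ITEM_PIPELINES = {
    'imgPro.pipelines.ImgproPipeline': 300,   # enable the custom image pipeline
}
# ImagesPipeline refuses to store anything unless a storage root is configured;
# file_path() return values are interpreted relative to this directory
IMAGES_STORE = './imgs'   # hypothetical local folder
```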
Using middleware
Purpose: intercept all requests and responses
Intercepting requests: process_request intercepts normal requests, process_exception intercepts failed requests
Intercepting responses
Tampering with response data
Requirement: crawl all news titles and contents under five sections of NetEase News: domestic, international, military, aviation, and drones
1. How to swap out unsatisfactory response objects through a middleware
2. How to plug selenium into scrapy
Code: spidersName
# -*- coding: utf-8 -*-
import scrapy
from selenium import webdriver
from wangyiPro.items import WangyiproItem

class WangyiSpider(scrapy.Spider):
    name = 'wangyi'
    # allowed_domains = ['www.xxx.com']
    start_urls = ['https://news.163.com/']
    # Number of response objects involved in the whole project:
    # - 1 + 5 + n
    # Parsing: extract the urls of the five news sections
    five_model_urls = []
    bro = webdriver.Chrome(executable_path=r'C:\Users\Administrator\Desktop\爬虫+数据+算法\chromedriver.exe')

    # This method is called only once, when the spider closes
    def closed(self, spider):
        self.bro.quit()

    def parse(self, response):
        li_list = response.xpath('//*[@id="index2016_wrap"]/div[1]/div[2]/div[2]/div[2]/div[2]/div/ul/li')
        model_indexs = [3, 4, 6, 7, 8]
        for index in model_indexs:
            li_tag = li_list[index]
            # Parse out the url of each section
            model_url = li_tag.xpath('./a/@href').extract_first()
            self.five_model_urls.append(model_url)
            # Manually send a request for each section's url
            yield scrapy.Request(model_url, callback=self.parse_model)

    # Parsing: the news title and detail-page url in each section (both values are loaded dynamically)
    def parse_model(self, response):
        # The response parameter of this method is exactly the response object that does not meet our needs
        div_list = response.xpath('/html/body/div/div[3]/div[4]/div[1]/div/div/ul/li/div/div')
        for div in div_list:
            title = div.xpath('./div/div[1]/h3/a/text()').extract_first()
            detail_url = div.xpath('./div/div[1]/h3/a/@href').extract_first()
            item = WangyiproItem()
            item['title'] = title
            if detail_url:
                yield scrapy.Request(detail_url, callback=self.parse_detail, meta={'item': item})

    def parse_detail(self, response):
        item = response.meta['item']
        content = response.xpath('//*[@id="endText"]//text()').extract()
        content = ''.join(content)
        item['content'] = content
        yield item
items
# -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html
import scrapy

class WangyiproItem(scrapy.Item):
    # define the fields for your item here like:
    title = scrapy.Field()
    content = scrapy.Field()
middlewares
# -*- coding: utf-8 -*-
# Define here the models for your spider middleware
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html
from scrapy import signals
from scrapy.http import HtmlResponse
from time import sleep

class WangyiproDownloaderMiddleware(object):
    def process_request(self, request, spider):
        return None

    # Intercepts all responses (1 + 5 + n); only 5 of them fail to meet our needs
    def process_response(self, request, response, spider):
        # 1. Out of all intercepted responses, pick the 5 response objects that do not meet our needs
        #    request.url: the url each response corresponds to
        #    spider.five_model_urls: the urls of the 5 sections
        if request.url in spider.five_model_urls:
            # A response satisfying this condition belongs to one of the 5 sections
            spider.bro.get(request.url)  # load the section url in the selenium browser
            sleep(3)
            spider.bro.execute_script('window.scrollTo(0,document.body.scrollHeight)')
            sleep(2)
            spider.bro.execute_script('window.scrollTo(0,document.body.scrollHeight)')
            sleep(2)
            page_text = spider.bro.page_source
            # 2. Discard these 5 response objects and instantiate 5 new ones
            # 3. Make sure the 5 new response objects contain the dynamically loaded news titles
            # 4. Return the 5 new, satisfactory response objects
            new_response = HtmlResponse(url=request.url, body=page_text, encoding='utf-8', request=request)
            return new_response
        else:
            return response

    def process_exception(self, request, exception, spider):
        pass
Analysis: the workflow for wiring selenium into scrapy:
- Instantiate the browser object once, as a class attribute of the spider
- Quit the browser in the spider's closed(self, spider) method, which runs once when the spider finishes
- In the downloader middleware's process_response, drive spider.bro to load the page, then wrap bro.page_source in a new HtmlResponse and return it
Middleware (middlewares.py) code example
from scrapy import signals
import random

class MiddleproDownloaderMiddleware(object):
    user_agent_list = [
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 "
        "(KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
        "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 "
    ]

    # Intercepts normal requests
    # Parameter request: the intercepted request
    def process_request(self, request, spider):
        print('process_request!!!')
        # UA spoofing
        request.headers['User-Agent'] = random.choice(self.user_agent_list)
        return None

    # Intercepts all responses
    def process_response(self, request, response, spider):
        return response

    # Intercepts failed requests; the goal is to repair the failed request and re-send the corrected request
    def process_exception(self, request, exception, spider):
        # proxy operation
        # request.meta['proxy'] = 'http://ip:port'
        print('i am exception!!!')
        return request
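Downloader middlewares are disabled by default, so the class above must be registered in the project settings. A sketch of the settings.py entry (the project name middlePro and the priority 543 are assumptions based on Scrapy's project template):

```python
# settings.py (sketch): register the custom downloader middleware;
# lower priority numbers run closer to the engine
DOWNLOADER_MIDDLEWARES = {
    'middlePro.middlewares.MiddleproDownloaderMiddleware': 543,
}
```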
CrawlSpider: a subclass of Spider
Implements full-site data crawling
Workflow:
LinkExtractor: the link extractor
Rule: the rule parser
How to implement deep (detail-page) crawling with CrawlSpider
spiderName
# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from sunPro.items import SunproItem_content, SunproItem

class SunSpider(CrawlSpider):
    name = 'sun'
    # allowed_domains = ['www.xxx.com']
    start_urls = ['http://wz.sun0769.com/index.php/question/questionType?type=4&page=']
    # Instantiate link extractor objects
    # Purpose: extract links according to the given rule (allow=(regex))
    link = LinkExtractor(allow=r'type=4&page=\d+')  # extract pagination links
    link_detail = LinkExtractor(allow=r'question/\d+/\d+\.shtml')
    rules = (
        # Rule parser
        # Purpose: sends requests for the links the extractor pulled out, and parses
        # the data according to the given rule (callback)
        Rule(link, callback='parse_item', follow=False),
        Rule(link_detail, callback='parse_detail'),
    )

    # This method is called once per request sent
    def parse_item(self, response):
        tr_list = response.xpath('//*[@id="morelist"]/div/table[2]//tr/td/table//tr')
        for tr in tr_list:
            title = tr.xpath('./td[2]/a[2]/@title').extract_first()
            status = tr.xpath('./td[3]/span/text()').extract_first()
            detail_url = 'xxxx'
            item = SunproItem()
            item['title'] = title
            item['status'] = status
            yield item

    def parse_detail(self, response):
        content = response.xpath('/html/body/div[9]/table[2]//tr[1]').extract()
        content = ''.join(content)
        item = SunproItem_content()
        item['content'] = content
        yield item
items
# -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html
import scrapy

class SunproItem(scrapy.Item):
    # define the fields for your item here like:
    title = scrapy.Field()
    status = scrapy.Field()

class SunproItem_content(scrapy.Item):
    # define the fields for your item here like:
    content = scrapy.Field()
pipelines
# -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

class SunproPipeline(object):
    def process_item(self, item, spider):
        if item.__class__.__name__ == 'SunproItem_content':
            print(item['content'])
        else:
            print(item['title'], item['status'])
        return item
Distributed crawling
Concept: multiple machines can be assembled into a distributed cluster that runs the same program and jointly crawls the same set of network resources.
Native scrapy cannot be made distributed on its own
Distribution is implemented with scrapy + redis (scrapy & the scrapy-redis component)
Role of the scrapy-redis component: provides the scheduler and pipeline that can be shared across the cluster
Environment setup: pip install scrapy-redis
Workflow:
1. Create the project
2. cd proName
3. Create a crawlspider-based spider file
4. Modify the spider class:
    - Import: from scrapy_redis.spiders import RedisCrawlSpider
    - Change the spider class's parent class to RedisCrawlSpider
    - Delete allowed_domains and start_urls
    - Add a new attribute: redis_key = 'xxxx', the name of the shared scheduler queue
5. Modify settings.py
    - Specify the pipeline
    ITEM_PIPELINES = {
        'scrapy_redis.pipelines.RedisPipeline': 400
    }
    - Specify the scheduler
    # Add a dedupe-container class configuration: use a Redis set to store request fingerprints, making deduplication persistent
    DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
    # Use the scheduler that ships with the scrapy-redis component
    SCHEDULER = "scrapy_redis.scheduler.Scheduler"
    # Whether the scheduler should persist, i.e. whether to keep the Redis request queue and fingerprint set when the crawl ends. True means persist (do not clear the data); False means clear it
    SCHEDULER_PERSIST = True
    - Specify the redis database
    REDIS_HOST = 'ip address of the redis server'
    REDIS_PORT = 6379
6. Configure the redis database (redis.windows.conf)
    - Disable the default binding
        - line 56: #bind 127.0.0.1
    - Disable protected mode
        - line 75: protected-mode no
7. Start the redis server (with the config file) and a client
    - redis-server.exe redis.windows.conf
    - redis-cli
8. Run the project
    - scrapy runspider spider.py
9. Push the start url into the shared scheduler queue (sun)
    - in redis-cli: lpush sun www.xxx.com
10. In redis:
    - xxx:items stores the scraped data
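The request deduplication that RFPDupeFilter performs against the shared Redis set can be sketched in plain Python. Here an in-memory set stands in for Redis, and the URL-only fingerprint is a simplification (the real filter also hashes the request method and body):

```python
import hashlib

def fingerprint(url: str) -> str:
    # Reduce a request to a fixed-size fingerprint, as a dupefilter does
    return hashlib.sha1(url.encode('utf-8')).hexdigest()

seen = set()  # stands in for the shared Redis set "<spider>:dupefilter"

def should_crawl(url: str) -> bool:
    fp = fingerprint(url)
    if fp in seen:       # in Redis terms: SADD returned 0
        return False     # duplicate: the request is dropped
    seen.add(fp)         # in Redis terms: SADD returned 1
    return True

print(should_crawl('http://wz.sun0769.com/?page=1'))  # True: first sighting
print(should_crawl('http://wz.sun0769.com/?page=1'))  # False: filtered out
```

Because the set lives in Redis rather than in one process, every node in the cluster consults the same fingerprints, which is what prevents two machines from crawling the same url.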
Incremental crawling
Original post: https://www.cnblogs.com/zhaoganggang/p/13202237.html