
Scrapy Framework Basics



I. Installation
Prebuilt .whl files for Python modules can be downloaded from https://www.lfd.uci.edu/~gohlke/pythonlibs/; place the downloaded file under your Python Scripts directory.

Scrapy depends on Twisted, which needs to be downloaded from the site above, placed under Scripts, and installed first:
pip install C:\python\Anaconda3\Twisted-18.7.0-cp36-cp36m-win_amd64.whl
pip install scrapy
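To confirm the installation succeeded, you can ask Scrapy for its version:

scrapy version

If this prints a version string, Scrapy and its Twisted dependency are installed correctly.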
 
II. Creating a Scrapy Project
1. PyCharm has no built-in Scrapy integration, so create the project from the command line, then open the generated folder in a new PyCharm window:
scrapy startproject projectname
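For reference, startproject generates a layout roughly like the following (the exact files may vary slightly between Scrapy versions):

projectname/
    scrapy.cfg            # deployment configuration
    projectname/
        __init__.py
        items.py          # item definitions
        middlewares.py    # downloader / spider middlewares
        pipelines.py      # item pipelines
        settings.py       # project settings
        spiders/          # spider modules live here
            __init__.py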
 
2. Create a spider file with one of the following commands (format: command, spider name, site to crawl):
scrapy genspider baidu baidu.com
scrapy genspider -t crawl baidu baidu.com
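The first command uses the basic template and produces a skeleton roughly like this sketch (the exact boilerplate depends on your Scrapy version):

import scrapy


class BaiduSpider(scrapy.Spider):
    name = 'baidu'
    allowed_domains = ['baidu.com']
    start_urls = ['http://baidu.com/']

    def parse(self, response):
        pass

The -t crawl variant instead generates a CrawlSpider subclass with a rules tuple, like the one in section IV below.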
 
3. Edit the configuration in settings.py:
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.79 Safari/537.36 Maxthon/5.2.3.6000'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False
DOWNLOAD_DELAY = 3
ITEM_PIPELINES = {
    'xiaoshuo_pc.pipelines.XiaoshuoPcPipeline': 300,
}
 
4. Run the spider:
scrapy crawl name                      (name is the value of the spider's name attribute)
scrapy crawl name -o book.json         (export the results to a file; json, xml and csv are supported)
scrapy crawl name -o book.json -t json (-t sets the output format explicitly and can usually be omitted)
** On the first run I hit a "No module named win32api" error. Python does not ship with a library for the Windows system API, so the third-party pywin32 package is needed. Download the installer matching your Python version from http://sourceforge.net/projects/pywin32/files%2Fpywin32/ , place it in the Scripts directory and double-click it to run (or simply pip install pypiwin32).
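Besides the command line, a crawl can also be started programmatically. A minimal sketch using Scrapy's CrawlerProcess (the spider name 'shiqik' is the one defined in section III below):

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

# load the project's settings.py so pipelines and middlewares still apply
process = CrawlerProcess(get_project_settings())
process.crawl('shiqik')  # spider name, not the file name
process.start()          # blocks until the crawl finishes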
 
III. Novel-Scraping Example (Spider)
Create an entry-point file main.py:

from scrapy.cmdline import execute
execute("scrapy crawl shiqik".split())  # shiqik is the name defined in the spider file

The spider file:

import scrapy


class ShiqikSpider(scrapy.Spider):
    name = 'shiqik'
    # note: allowed_domains does not match start_urls here; the follow-up
    # requests only survive because dont_filter=True bypasses the offsite filter
    allowed_domains = ['17k.com']
    start_urls = ['https://www.81zw.us/book/1379/6970209.html']

    def parse(self, response):
        # chapter title
        title = response.xpath('//div[@class="bookname"]/h1/text()').extract_first()
        # chapter body; the site indents paragraphs with runs of spaces
        content = ''.join(response.xpath('//div[@id="content"]/text()').extract()).replace('   ', '\n')
        yield {"title": title, "content": content}
        # link behind the "next chapter" button
        next_page = response.xpath('//div[@class="bottem2"]/a[3]/@href').extract_first()
        if next_page.find(".html") != -1:  # the last chapter links back to the index instead of a .html page
            print("continuing to the next url")
            new_url = response.urljoin(next_page)
            yield scrapy.Request(new_url, callback=self.parse, dont_filter=True)
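The ITEM_PIPELINES setting above references xiaoshuo_pc.pipelines.XiaoshuoPcPipeline, whose code the original post does not show. A minimal sketch of what such a pipeline might look like, assuming the goal is to append each chapter to a single text file:

class XiaoshuoPcPipeline(object):
    def open_spider(self, spider):
        # opened once when the spider starts
        self.file = open('book.txt', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        # called for every {"title": ..., "content": ...} dict the spider yields
        self.file.write(item['title'] + '\n' + item['content'] + '\n\n')
        return item

    def close_spider(self, spider):
        self.file.close()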
 
IV. Novel-Scraping Example (CrawlSpider)

import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class BayizhongwenSpider(CrawlSpider):
    name = 'bayizhongwen'
    allowed_domains = ['81zw.us']
    # start_urls = ['https://www.81zw.us/book/1215/863759.html']
    start_urls = ['https://www.81zw.us/book/1215']

    rules = (
        # link to the first chapter on the book's index page
        Rule(LinkExtractor(restrict_xpaths=r'//dl/dd[2]/a'), callback='parse_item', follow=True),
        # "next chapter" button on each chapter page
        Rule(LinkExtractor(restrict_xpaths=r'//div[@class="bottem1"]/a[3]'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        title = response.xpath('//div[@class="bookname"]/h1/text()').extract_first()
        content = ''.join(response.xpath('//div[@id="content"]/text()').extract()).replace('   ', '\n')
        print({"title": title, "content": content})
        yield {"title": title, "content": content}
 
 
 
The following walkthrough is a second example: downloading wallpaper images from desk.zol.com.cn.

I. Create the project
(venv) C:\Users\noc\PycharmProjects>scrapy startproject tupian

II. Create the spider
(venv) C:\Users\noc\PycharmProjects\tupian>scrapy genspider zol zol.com.cn
 
III. Edit the configuration
settings.py:
# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False
DOWNLOAD_DELAY = 3

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    # 'tupian.pipelines.TupianPipeline': 300,
    'scrapy.pipelines.images.ImagesPipeline': 300,  # the old scrapy.contrib.* path is deprecated
}
# directory where downloaded images are saved
IMAGES_STORE = 'e:/img'
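Note that the built-in ImagesPipeline requires the Pillow imaging library, so install it first if it is missing:

pip install pillow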
 
IV. Create the entry-point file start.py
from scrapy.cmdline import execute
execute("scrapy crawl zol".split())  # zol is the name defined in the spider file

V. The spider code:

import scrapy


class ZolSpider(scrapy.Spider):
    name = 'zol'
    allowed_domains = ['zol.com.cn']
    start_urls = ['http://desk.zol.com.cn/bizhi/7239_89590_2.html']  # first wallpaper page to crawl

    def parse(self, response):
        # the built-in ImagesPipeline looks for an "image_urls" field, so yield under that key
        image_urls = response.xpath('//img[@id="bigImg"]/@src').extract()  # address of the current image
        image_name = response.xpath('string(//h3)').extract_first()  # image title
        yield {"image_urls": image_urls, "image_name": image_name}
        # address behind the "next image" button
        next_page = response.xpath('//a[@id="pageNext"]/@href').extract_first()
        if next_page.find('.html') != -1:  # on the last image the link no longer points at a .html page
            yield scrapy.Request(response.urljoin(next_page), callback=self.parse)
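By default ImagesPipeline names saved files after a hash of the URL. To name them after image_name instead, a sketch of a custom subclass (RenameImagesPipeline is a hypothetical name; register it in ITEM_PIPELINES in place of the built-in one):

import scrapy
from scrapy.pipelines.images import ImagesPipeline


class RenameImagesPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        # pass the title along to file_path() via request.meta
        for url in item['image_urls']:
            yield scrapy.Request(url, meta={'name': item['image_name']})

    def file_path(self, request, response=None, info=None):
        # sanitize the title so it is a valid file name
        name = request.meta['name'].strip().replace('/', '_')
        return '%s.jpg' % name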

 
VI. The middlewares.py file

from random import choice

from fake_useragent import UserAgent

from tupian.settings import USER_AGENT


# rotate the User-Agent header on every request
class UserAgentDownloaderMiddleware(object):
    def process_request(self, request, spider):
        # to pick from a list defined in settings.py instead:
        # request.headers.setdefault(b'User-Agent', choice(USER_AGENT))
        request.headers.setdefault(b'User-Agent', UserAgent().random)


# route requests through a proxy
class ProxyMiddleware(object):
    def process_request(self, request, spider):
        # request.meta['proxy'] = 'http://ip:port'
        request.meta['proxy'] = 'http://124.235.145.79:80'
        # with authentication:
        # request.meta['proxy'] = 'http://user:passwd@ip:port'
        # request.meta['proxy'] = 'http://398707160:j8inhg2g@139.224.116.10:16816'
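These middlewares only take effect once they are registered in settings.py; lower numbers run closer to the engine:

DOWNLOADER_MIDDLEWARES = {
    'tupian.middlewares.UserAgentDownloaderMiddleware': 543,
    'tupian.middlewares.ProxyMiddleware': 544,
}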



Original article: https://www.cnblogs.com/returnes/p/9851197.html
