
A First Look at a Node.js Crawler System

Posted: 2015-12-21 21:41:41


0. Overview

A crawler is a program that automatically fetches web page content. It is a key component of search engines, which is why much of search engine optimization is really optimization aimed at crawlers.
robots.txt is a plain text file and a convention, not a command. It is the first file a crawler is expected to check: it tells the crawler which files on the server may be fetched, and well-behaved robots use it to determine the scope of their visit.
To find a site's robots.txt, append /robots.txt to the domain. For example, for www.qq.com:
http://www.qq.com/robots.txt
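
As an illustration (a hypothetical file, not qq.com's actual one), a robots.txt usually looks something like this, listing per-crawler rules for which paths may be fetched:

# hypothetical robots.txt, for illustration only
User-agent: *                                # the rules below apply to every crawler
Disallow: /admin/                            # do not fetch anything under /admin/
Allow: /                                     # everything else may be fetched
Sitemap: http://www.example.com/sitemap.xml  # where the sitemap lives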

1. Setting Up the Crawler and the Development Environment

Required Node modules:

  • Express
  • Request
  • Cheerio

Create the spider project directly on the Desktop:

[KANO@kelvin 桌面]$ express spider
bash: express: command not found...
Install package 'nodejs-express' to provide command 'express'? [N/y] y

 * Waiting in queue...
The following packages have to be installed:
 nodejs-buffer-crc32-0.2.1-8.fc21.noarch  A pure JavaScript CRC32 algorithm that plays nice with binary data
 nodejs-commander-2.2.0-2.fc21.noarch  Node.js command-line interfaces made easy
 ……………………(omitted)……………………
 nodejs-vhost-1.0.0-2.fc21.noarch  Virtual domain hosting middleware for Node.js and Connect
 nodejs-compressible-1.0.1-2.fc21.noarch  Compressible Content-Type/MIME checking for Node.js
 nodejs-negotiator-0.4.3-2.fc21.noarch  An HTTP content negotiator for Node.js
   create : spider/app.js
   create : spider/public
   create : spider/public/images
   create : spider/routes
   create : spider/routes/index.js
   create : spider/routes/user.js
   create : spider/public/stylesheets
   create : spider/public/stylesheets/style.css
   create : spider/views
   create : spider/views/layout.jade
   create : spider/views/index.jade
   create : spider/public/javascripts

   install dependencies:
     $ cd spider && npm install

   run the app:
     $ node app

Then change into the project directory and install the dependencies:

[KANO@kelvin 桌面]$ cd spider/
[KANO@kelvin spider]$ sudo npm install
[sudo] password for KANO:
npm http GET https://registry.npmjs.org/express/3.5.2
npm http GET https://registry.npmjs.org/jade
……………………(omitted)…………………………
npm http 200 https://registry.npmjs.org/negotiator/-/negotiator-0.3.0.tgz
jade@1.11.0 node_modules/jade
├── character-parser@1.2.1
├── void-elements@2.0.1
├── commander@2.6.0
├── mkdirp@0.5.1 (minimist@0.0.8)
├── jstransformer@0.0.2 (is-promise@2.1.0, promise@6.1.0)
├── clean-css@3.4.8 (commander@2.8.1, source-map@0.4.4)
├── constantinople@3.0.2 (acorn@2.6.4)
├── with@4.0.3 (acorn@1.2.2, acorn-globals@1.0.9)
├── transformers@2.1.0 (promise@2.0.0, css@1.0.8, uglify-js@2.2.5)
└── uglify-js@2.6.1 (uglify-to-browserify@1.0.2, async@0.2.10, source-map@0.5.3, yargs@3.10.0)

express@3.5.2 node_modules/express
├── methods@0.1.0
├── merge-descriptors@0.0.2
├── cookie@0.1.2
├── debug@0.8.1
├── cookie-signature@1.0.3
├── range-parser@1.0.0
├── fresh@0.2.2
├── buffer-crc32@0.2.1
├── mkdirp@0.4.0
├── commander@1.3.2 (keypress@0.1.0)
├── send@0.3.0 (debug@0.8.0, mime@1.2.11)
└── connect@2.14.5 (response-time@1.0.0, pause@0.0.1, connect-timeout@1.0.0, method-override@1.0.0, vhost@1.0.0, qs@0.6.6, basic-auth-connect@1.0.0, bytes@0.3.0, static-favicon@1.0.2, raw-body@1.1.4, errorhandler@1.0.0, setimmediate@1.0.1, cookie-parser@1.0.1, morgan@1.0.0, serve-static@1.1.0, express-session@1.0.2, csurf@1.1.0, serve-index@1.0.1, multiparty@2.2.0, compression@1.0.0)

After the installation finishes, start the app:

[KANO@kelvin spider]$ node app
Express server listening on port 3000
GET / 200 793ms - 170b
GET /stylesheets/style.css 200 20ms - 110b

By default the app listens on port 3000.
kelvin is my hostname; if you are not sure what yours is, run:

[KANO@kelvin spider]$ hostname
kelvin

Next, install request:

[KANO@kelvin spider]$ sudo npm install request --save-dev
[sudo] password for KANO:
npm http GET https://registry.npmjs.org/request
npm http 200 https://registry.npmjs.org/request
……………………(omitted)…………………………
npm http 200 https://registry.npmjs.org/ansi-regex/-/ansi-regex-2.0.0.tgz
request@2.67.0 node_modules/request
├── is-typedarray@1.0.0
├── aws-sign2@0.6.0
├── forever-agent@0.6.1
├── caseless@0.11.0
├── stringstream@0.0.5
├── tunnel-agent@0.4.2
├── oauth-sign@0.8.0
├── isstream@0.1.2
├── json-stringify-safe@5.0.1
├── extend@3.0.0
├── node-uuid@1.4.7
├── qs@5.2.0
├── tough-cookie@2.2.1
├── form-data@1.0.0-rc3 (async@1.5.0)
├── mime-types@2.1.8 (mime-db@1.20.0)
├── combined-stream@1.0.5 (delayed-stream@1.0.0)
├── bl@1.0.0 (readable-stream@2.0.5)
├── hawk@3.1.2 (cryptiles@2.0.5, sntp@1.0.9, boom@2.10.1, hoek@2.16.3)
├── http-signature@1.1.0 (assert-plus@0.1.5, jsprim@1.2.2, sshpk@1.7.1)
└── har-validator@2.0.3 (commander@2.9.0, pinkie-promise@2.0.0, is-my-json-valid@2.12.3, chalk@1.1.1)

The request module is now installed.

Install cheerio:

[KANO@kelvin spider]$ sudo npm install cheerio --save-dev
[sudo] password for KANO:
npm http GET https://registry.npmjs.org/cheerio
npm http 200 https://registry.npmjs.org/cheerio
npm http GET https://registry.npmjs.org/css-select
……………………(omitted)…………………………
npm http 304 https://registry.npmjs.org/isarray/0.0.1
npm http 304 https://registry.npmjs.org/core-util-is
cheerio@0.19.0 node_modules/cheerio
├── entities@1.1.1
├── lodash@3.10.1
├── css-select@1.0.0 (boolbase@1.0.0, css-what@1.0.0, nth-check@1.0.1, domutils@1.4.3)
├── dom-serializer@0.1.0 (domelementtype@1.1.3)
└── htmlparser2@3.8.3 (domelementtype@1.3.0, domutils@1.5.1, entities@1.0.0, domhandler@2.3.0, readable-stream@1.1.13)

Now that both cheerio and request are installed, the whole development environment is ready.

2. A Crawler in Practice

The Express.js documentation (Chinese): www.expressjs.com.cn
Take the basic "hello world" example from the docs and put it into app.js, replacing the generated contents.
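
For reference, a minimal app.js along those lines would look roughly like the sketch below (the same skeleton that gets extended with request and cheerio later in this post):

var express = require('express');
var app = express();

// respond to GET / with a plain greeting
app.get('/', function (req, res) {
  res.send('hello world');
});

// listen on the same port the generated project uses
app.listen(3000);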

Monitor the node process with supervisor (node-supervisor is not part of the generated project; it can be installed globally with npm install supervisor -g):

[KANO@kelvin spider]$ supervisor start app.js

Running node-supervisor with
  program 'app.js'
  --watch '.'
  --extensions 'node,js'
  --exec 'node'

Starting child process with 'node app.js'
Watching directory '/home/KANO/桌面/spider' for changes.
Express server listening on port 3000

Refresh the browser window.

The request documentation: https://www.npmjs.com/package/request
Copy its basic example into app.js and point it at the Guokr MOOC course list to crawl that page:

var express = require('express');
var app = express();
var request = require('request');

app.get('/', function (req, res) {
  request('http://mooc.guokr.com/course/', function (error, response, body) {
    if (!error && response.statusCode == 200) {
      console.log(body); // print the HTML of the Guokr course page to the terminal
      res.send('hello world');
    }
  });
});

app.listen(3000);

Refresh kelvin:3000 again.
The fetched page HTML is printed to the terminal, while the browser just shows "hello world".
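(As a trivial variation, not from the original post: replacing res.send('hello world') with res.send(body) would echo the fetched HTML back to the browser instead of only logging it.)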

Next, use cheerio to select just the content we want from the page.

The cheerio documentation: https://www.npmjs.com/package/cheerio

Analyze the page structure.
To scrape the course titles, we find that each title sits in a <span> inside an <h3 class="course-title"> element.

var express = require('express');
var app = express();
var request = require('request');
var cheerio = require('cheerio');

app.get('/', function (req, res) {
  request('http://mooc.guokr.com/course/', function (error, response, body) {
    if (!error && response.statusCode == 200) {
      var $ = cheerio.load(body); // $ is now a jQuery-like selector loaded with the whole page
      res.json({
        'course': $('.course-title span').text()
      });
    }
  });
});

app.listen(3000);

Refresh the page, and the course titles come back as a JSON response.
A simple crawler is now complete.
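
One caveat: calling .text() on a cheerio selection concatenates the text of every matched element into a single string. If separate titles are wanted, a small variation (a sketch using cheerio's jQuery-style .map()/.get()) returns them as an array instead:

// inside the request callback, after loading the body into $
var courses = $('.course-title span').map(function () {
  return $(this).text().trim(); // one course title per matched <span>
}).get();                       // convert the cheerio selection to a plain array

res.json({ course: courses });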

However, the code still needs changes if multiple asynchronous requests are required, and the scraped data still needs further processing... There is plenty left to improve, but that is it for today's first look at a Node.js crawler.
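
As a rough sketch of that multiple-requests direction (assuming a Node version with native Promise support; the second URL is only a placeholder), one could wrap request in a Promise and fetch several pages in parallel:

var request = require('request');

// wrap a single request in a Promise so several fetches can be combined
function fetchPage(url) {
  return new Promise(function (resolve, reject) {
    request(url, function (error, response, body) {
      if (!error && response.statusCode == 200) {
        resolve(body);
      } else {
        reject(error || new Error('HTTP ' + response.statusCode));
      }
    });
  });
}

// fetch two pages in parallel and handle the results together
Promise.all([
  fetchPage('http://mooc.guokr.com/course/'),
  fetchPage('http://mooc.guokr.com/course/?page=2') // hypothetical second page
]).then(function (bodies) {
  console.log('fetched ' + bodies.length + ' pages');
}).catch(function (err) {
  console.error(err);
});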


Original post: http://www.cnblogs.com/XBlack/p/5064707.html
