A simple crawler for Douban movies using Scala and Jsoup; the techniques involved are straightforward and easy to learn.
To learn Jsoup, see the jsoup Cookbook (Chinese edition): http://www.open-open.com/jsoup/
Here I only introduce the four functions I actually used:
1. The first function: Jsoup.connect(url)
val doc: Document = Jsoup.connect(url).get() // fetch and parse an HTML document from a site using GET; put plainly, this gives you the page's source
// Special case: with parameters, using POST
val doc: Document = Jsoup.connect("http://example.com")
  .data("query", "Java")
  .userAgent("Mozilla")
  .cookie("auth", "token")
  .timeout(3000)
  .post()
2. The second function: Element.select(String selector)
doc.select("a.nbg")//通过使用CSS(或Jquery)selector syntax 获得你想要操作元素,这里获得的是说有class=nbg的<a/>标签。
3. The third function: public String attr(String attributeKey)
The attr function on Elements returns, for the given attribute key, the value from the first element that has it. For example, elem.select("a.nbg").attr("title") gets the title of the <a> tag.
4. The fourth function: public String html()
Returns the HTML contained inside the element.
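Before moving on, here is a minimal, self-contained sketch of how select, attr and html combine. The HTML fragment, the title, and the numbers in it are made up for illustration; only the class names match the real Douban list page:

import org.jsoup.Jsoup

object SelectDemo {
  def main(args: Array[String]): Unit = {
    // A made-up fragment standing in for one entry of the Douban list page
    val html =
      """<table>
        |  <tr class="item">
        |    <td><a class="nbg" href="https://movie.douban.com/subject/1/" title="Some Movie"></a></td>
        |    <td><span class="rating_nums">9.7</span><span class="pl">(12345 ratings)</span></td>
        |  </tr>
        |</table>""".stripMargin
    val doc = Jsoup.parse(html)
    val link = doc.select("a.nbg")               // all <a> elements with class="nbg"
    println(link.attr("title"))                  // Some Movie
    println(link.attr("href"))                   // https://movie.douban.com/subject/1/
    println(doc.select("span.rating_nums").html) // 9.7
    println(doc.select("span.pl").html)          // (12345 ratings)
  }
}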
The HTML here is fairly simple: we only need the four pieces marked in Figure 1, and only the last three of the four methods introduced above. (Note that iterating Jsoup's Elements with a Scala for loop, as below, relies on the scala.collection.JavaConversions._ import shown in the full listing.)
// Parse the Document; refer to the page source while writing the parsing code
def parseDoc(doc: Document, movies: ConcurrentHashMap[String, String]) = {
  var count = 0
  for (elem <- doc.select("tr.item")) { // every movie entry on the page
    val nbg = elem.select("a.nbg") // select once and reuse
    movies.put(nbg.attr("title"), nbg.attr("title") + "\t" // title
      + nbg.attr("href") + "\t" // Douban link
      // + elem.select("p.pl").html + "\t" // synopsis
      + elem.select("span.rating_nums").html + "\t" // rating
      + elem.select("span.pl").html // number of ratings
    )
    count += 1
  }
  count
}
Scala's Try is used here; briefly: when Jsoup.connect(url).get() throws, the pattern match hits Failure(e) and the exception is bound to e in the case class; on success it matches Success(doc), and the fetched HTML Document is bound to doc.
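For readers new to Scala, a minimal illustration of the Try/match pattern, independent of the crawler (runnable in the Scala REPL):

import scala.util.{Failure, Success, Try}

Try("42".toInt) match {
  case Success(n) => println(s"parsed: $n")              // the result is bound to n
  case Failure(e) => println(s"failed: ${e.getMessage}") // the exception is bound to e
}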
// counters for the total number scraped and the number of failures
val sum, fail: AtomicInteger = new AtomicInteger(0)
/**
 * On exception, wait 10 s and retry, up to 100 times
 * @param times  retries remaining
 * @param delay  delay before retrying (ms)
 * @param url    the URL to fetch
 * @param movies map collecting the scraped entries
 */
def requestGetUrl(times: Int = 100, delay: Long = 10000)(url: String, movies: ConcurrentHashMap[String, String]): Unit = {
  Try(Jsoup.connect(url).get()) match { // wrap the fetch in Try so we can branch on success/failure
    case Failure(e) =>
      if (times != 0) {
        println(e.getMessage)
        fail.addAndGet(1)
        Thread.sleep(delay)
        requestGetUrl(times - 1, delay)(url, movies)
      } else throw e
    case Success(doc) =>
      val count = parseDoc(doc, movies)
      if (count == 0) { // nothing parsed: wait, then retry
        Thread.sleep(delay)
        requestGetUrl(times - 1, delay)(url, movies)
      }
      sum.addAndGet(count)
  }
}
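Note the two parameter lists: the retry policy (times, delay) is curried separately from the target, so callers can leave it at its defaults. For illustration (the URL is what the crawler itself would generate for tag 经典, page 0):

val movies = new ConcurrentHashMap[String, String]()
// empty first argument list: times = 100, delay = 10000 ms
requestGetUrl()("https://movie.douban.com/tag/%E7%BB%8F%E5%85%B8?start=0&type=T", movies)
// or override the retry policy explicitly:
requestGetUrl(times = 3, delay = 5000)("https://movie.douban.com/tag/%E7%BB%8F%E5%85%B8?start=0&type=T", movies)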
To speed up scraping, Scala's parallel collections (.par) are used; they are backed by the same idea as Java's fork/join framework.
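A minimal demonstration of .par with a bounded pool, independent of the crawler (runnable in the REPL):

import scala.collection.parallel.ForkJoinTaskSupport
import scala.concurrent.forkjoin.ForkJoinPool

val range = (1 to 8).par
range.tasksupport = new ForkJoinTaskSupport(new ForkJoinPool(4)) // cap the pool at 4 worker threads
range.foreach(i => println(s"${Thread.currentThread.getName} -> $i")) // elements are processed concurrently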
/**
 * Multi-threaded scraping
 * @param url       the URL template
 * @param tag       movie tag
 * @param maxPage   number of pages
 * @param threadNum number of threads
 * @param movies    concurrent map collecting the scraped entries
 */
def concurrentCrawler(url: String, tag: String, maxPage: Int, threadNum: Int, movies: ConcurrentHashMap[String, String]) = {
  val loopPar = (0 to maxPage).par
  loopPar.tasksupport = new ForkJoinTaskSupport(new ForkJoinPool(threadNum)) // set the number of worker threads
  loopPar.foreach(i => requestGetUrl()(url.format(URLEncoder.encode(tag, "UTF-8"), 20 * i), movies)) // fetch all pages in parallel; the start offset is 20 * i, one page per task
  saveFile1(tag, movies) // save to file
}
To run the crawler, just call concurrentCrawler(URL, tag, page, Thread_Num, new ConcurrentHashMap[String, String]()).
def main(args: Array[String]): Unit = {
  val Thread_Num = 30 // number of threads to run concurrently
  val t1 = System.currentTimeMillis
  for ((tag, page) <- tags)
    concurrentCrawler(URL, tag, page, Thread_Num, new ConcurrentHashMap[String, String]()) // scrape concurrently
  val t2 = System.currentTimeMillis
  println(s"scraped: $sum retries: $fail elapsed (s): " + (t2 - t1) / 1000)
}
Run results:
scraped: 793 retries: 0 elapsed (s): 4
This article comes from 伊豚wpeace (blog.wpeace.cn). The complete code follows:
import java.io.{File, PrintWriter}
import java.net.URLEncoder
import java.text.SimpleDateFormat
import java.util.Date
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.atomic.AtomicInteger
import org.jsoup.Jsoup
import org.jsoup.nodes.Document
import scala.collection.JavaConversions._
import scala.collection.mutable.ArrayBuffer
import scala.collection.parallel.ForkJoinTaskSupport
import scala.concurrent.forkjoin.ForkJoinPool
import scala.util.{Failure, Success, Try}
/**
* Created by peace on 2017/3/5.
*/
object Douban {
val URL = "https://movie.douban.com/tag/%s?start=%d&type=T"
// the URL template to fetch
// the tags to scrape and the number of pages for each
val tags = Map(
  "经典" -> 4, // tag -> number of pages
  "爱情" -> 4,
  "动作" -> 4,
  "剧情" -> 4,
  "悬疑" -> 4,
  "文艺" -> 4,
  "搞笑" -> 4,
  "战争" -> 4
)
// Parse the Document; refer to the page source while writing the parsing code
def parseDoc(doc: Document, movies: ConcurrentHashMap[String, String]) = {
  var count = 0
  for (elem <- doc.select("tr.item")) { // every movie entry on the page
    val nbg = elem.select("a.nbg") // select once and reuse
    movies.put(nbg.attr("title"), nbg.attr("title") + "\t" // title
      + nbg.attr("href") + "\t" // Douban link
      // + elem.select("p.pl").html + "\t" // synopsis
      + elem.select("span.rating_nums").html + "\t" // rating
      + elem.select("span.pl").html // number of ratings
    )
    count += 1
  }
  count
}
// counters for the total number scraped and the number of failures
val sum, fail: AtomicInteger = new AtomicInteger(0)
/**
 * On exception, wait 10 s and retry, up to 100 times
 * @param times  retries remaining
 * @param delay  delay before retrying (ms)
 * @param url    the URL to fetch
 * @param movies map collecting the scraped entries
 */
def requestGetUrl(times: Int = 100, delay: Long = 10000)(url: String, movies: ConcurrentHashMap[String, String]): Unit = {
  Try(Jsoup.connect(url).get()) match { // wrap the fetch in Try so we can branch on success/failure
    case Failure(e) =>
      if (times != 0) {
        println(e.getMessage)
        fail.addAndGet(1)
        Thread.sleep(delay)
        requestGetUrl(times - 1, delay)(url, movies)
      } else throw e
    case Success(doc) =>
      val count = parseDoc(doc, movies)
      if (count == 0) { // nothing parsed: wait, then retry
        Thread.sleep(delay)
        requestGetUrl(times - 1, delay)(url, movies)
      }
      sum.addAndGet(count)
  }
}
/**
 * Multi-threaded scraping
 * @param url       the URL template
 * @param tag       movie tag
 * @param maxPage   number of pages
 * @param threadNum number of threads
 * @param movies    concurrent map collecting the scraped entries
 */
def concurrentCrawler(url: String, tag: String, maxPage: Int, threadNum: Int, movies: ConcurrentHashMap[String, String]) = {
  val loopPar = (0 to maxPage).par
  loopPar.tasksupport = new ForkJoinTaskSupport(new ForkJoinPool(threadNum)) // set the number of worker threads
  loopPar.foreach(i => requestGetUrl()(url.format(URLEncoder.encode(tag, "UTF-8"), 20 * i), movies)) // fetch all pages in parallel; the start offset is 20 * i, one page per task
  saveFile1(tag, movies) // save to file
}
// write entries out unsorted
def saveFile(file: String, movies: ConcurrentHashMap[String, String]) = {
  val writer = new PrintWriter(new File(new SimpleDateFormat("yyyyMMdd").format(new Date()) + "_" + file + ".txt"))
  for ((_, value) <- movies) writer.println(value)
  writer.close()
}
// sort by rating, then write to file
def saveFile1(file: String, movies: ConcurrentHashMap[String, String]) = {
  val writer = new PrintWriter(new File(new SimpleDateFormat("yyyyMMdd").format(new Date()) + "_" + file + ".txt"))
  val col = new ArrayBuffer[String]()
  for ((_, value) <- movies)
    col += value
  val sort = col.sortWith( // descending by rating, field 2 of the tab-separated record
    (o1, o2) => {
      val s1 = o1.split("\t")(2)
      val s2 = o2.split("\t")(2)
      if (s1 == null || s2 == null || s1.isEmpty || s2.isEmpty) {
        true // entries with a missing rating are not ordered further
      } else {
        s1.toFloat > s2.toFloat
      }
    }
  )
  sort.foreach(writer.println(_))
  writer.close()
}
def main(args: Array[String]): Unit = {
  val Thread_Num = 30 // number of threads to run concurrently
  val t1 = System.currentTimeMillis
  for ((tag, page) <- tags)
    concurrentCrawler(URL, tag, page, Thread_Num, new ConcurrentHashMap[String, String]()) // scrape concurrently
  val t2 = System.currentTimeMillis
  println(s"scraped: $sum retries: $fail elapsed (s): " + (t2 - t1) / 1000)
}
}
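For completeness: the only third-party dependency is jsoup, so a minimal build.sbt could look like the sketch below. The version numbers are my assumptions, not from the original post (the scala.concurrent.forkjoin.ForkJoinPool import implies a 2.11-era Scala):

// build.sbt -- minimal sketch; version numbers are assumptions
scalaVersion := "2.11.8"

libraryDependencies += "org.jsoup" % "jsoup" % "1.10.2"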
Original post: http://blog.csdn.net/peace1213/article/details/62231260