Build it yourself:
sbt compile
sbt test
sbt package
This produces ApacheLogParser.jar.
For simple access-log analysis, tools like grep work well enough, but more complex queries call for Spark.
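One way to make the parser available inside the Spark shell is to put the jar on the classpath when launching (a minimal sketch; the original post does not show the launch command, and it assumes a Spark 1.x spark-shell where `sc` is predefined and the jar sits in the current directory):

spark-shell --jars ApacheLogParser.jar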
import com.alvinalexander.accesslogparser._
val p = new AccessLogParser
val log = sc.textFile("log.small")
//log.count
// how many 404s are there in the Apache log?
def getStatusCode(line: Option[AccessLogRecord]) = {
  line match {
    case Some(l) => l.httpStatusCode
    case None => "0"
  }
}
log.filter(line => getStatusCode(p.parseRecord(line)) == "404").count
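Counting a single status code generalizes easily. As a side note (my own sketch, not part of the original post), the same getStatusCode helper can produce a frequency table over all status codes, which is often a useful first look at a log:

// hypothetical extension: distribution of every status code in the log
val statusCounts = log.map(line => getStatusCode(p.parseRecord(line))).countByValue()
statusCounts.toSeq.sortBy(-_._2).foreach(println)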
/* To find out which URLs are problematic (for example, a URL containing a space that leads to a 404), the obvious steps are:
 * filter out all the 404 records
 * get the request field from each 404 record (the URL string the parser extracted; check it for spaces, etc.) and drop duplicate records
 */
// get the `request` field from an access log record
def getRequest(rawAccessLogString: String): Option[String] = {
  val accessLogRecordOption = p.parseRecord(rawAccessLogString)
  accessLogRecordOption match {
    case Some(rec) => Some(rec.request)
    case None => None
  }
}
// count the 404 requests
log.filter(line => getStatusCode(p.parseRecord(line)) == "404").map(getRequest(_)).count
// keep the request field of each 404 record
val recs = log.filter(line => getStatusCode(p.parseRecord(line)) == "404").map(getRequest(_))
// the same, with duplicates removed
val distinctRecs = log.filter(line => getStatusCode(p.parseRecord(line)) == "404").map(getRequest(_)).distinct
distinctRecs.foreach(println)
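Beyond listing the distinct bad requests, a natural follow-up (again my own sketch, not from the original example) is to rank the 404-ed requests by how often they occur; flatMap over the Option drops unparseable lines:

// hypothetical extension: the ten most frequent 404 requests
val top404 = log.filter(line => getStatusCode(p.parseRecord(line)) == "404")
  .flatMap(line => getRequest(line))   // Option -> 0 or 1 elements, so None is dropped
  .map(req => (req, 1))
  .reduceByKey(_ + _)
  .sortBy(_._2, ascending = false)
  .take(10)
top404.foreach(println)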
Done! A simple example, relying mainly on the log-parsing package, available at: https://github.com/jinhang/ScalaApacheAccessLogParser
Next time I'll cover how to analyze logs on a lambda architecture: Kafka and Spark Streaming for real-time analysis, Hadoop and Spark SQL for offline analysis, MySQL to persist the analysis results, and a Flask web UI to visualize them. Off to bed!
Original post: http://blog.csdn.net/jianghuxiaojin/article/details/51414223