Collecting Data into HDFS
1. Install Flume
In the hdp-1 virtual machine, open an SFTP-hdp-1 window and upload the Flume archive into the /root/ directory of hdp-1.
Unpack the archive into /root/apps/ with:
tar -xvzf apache-flume-1.6.0-bin.tar.gz -C apps/
Then rename the apache-flume-1.6.0-bin directory (from inside the apps/ directory) to flume-1.6.0:
mv apache-flume-1.6.0-bin flume-1.6.0
2. Configure Flume
Under /root/apps/flume-1.6.0/, create two new files, dir-hdfs.conf and tail-hdfs.conf.
Contents of dir-hdfs.conf:
# Name the components of agent ag1
ag1.sources = source1
ag1.sinks = sink1
ag1.channels = channel1

# Source: spooling directory -- pick up new files dropped into /root/log
ag1.sources.source1.type = spooldir
ag1.sources.source1.spoolDir = /root/log
ag1.sources.source1.fileSuffix = .FINISHED
ag1.sources.source1.deserializer.maxLineLength = 5129

# Sink: write events into time-bucketed HDFS directories
ag1.sinks.sink1.type = hdfs
ag1.sinks.sink1.hdfs.path = hdfs://hdp-1:9000/access_log/%y-%m-%d/%H-%M
ag1.sinks.sink1.hdfs.filePrefix = app_log
ag1.sinks.sink1.hdfs.fileSuffix = .log
ag1.sinks.sink1.hdfs.batchSize = 100
ag1.sinks.sink1.hdfs.fileType = DataStream
ag1.sinks.sink1.hdfs.writeFormat = Text

# Roll to a new file every 512 KB, 1,000,000 events, or 60 seconds
ag1.sinks.sink1.hdfs.rollSize = 512000
ag1.sinks.sink1.hdfs.rollCount = 1000000
ag1.sinks.sink1.hdfs.rollInterval = 60

# Round the timestamp in the path down to 10-minute buckets
ag1.sinks.sink1.hdfs.round = true
ag1.sinks.sink1.hdfs.roundValue = 10
ag1.sinks.sink1.hdfs.roundUnit = minute
ag1.sinks.sink1.hdfs.useLocalTimeStamp = true

# Channel: in-memory buffer between source and sink
ag1.channels.channel1.type = memory
ag1.channels.channel1.capacity = 500000
ag1.channels.channel1.transactionCapacity = 600

# Bind the source and sink to the channel
ag1.sources.source1.channels = channel1
ag1.sinks.sink1.channel = channel1
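With round = true, roundValue = 10, and roundUnit = minute, the %H-%M part of the HDFS path is floored to the nearest 10 minutes, so an event arriving at 14:37 lands under .../14-30. A minimal shell sketch of that bucketing (a hypothetical helper to illustrate the rule, not part of Flume):

```shell
# Floor a minute value to its 10-minute bucket, mirroring what
# hdfs.round / roundValue=10 / roundUnit=minute do to %M in the sink path.
round_minute() {
  # 10# forces base 10 so zero-padded inputs like "07" are not read as octal
  printf '%02d' $(( 10#$1 / 10 * 10 ))
}

round_minute 37   # -> 30: an event at 14:37 is written under .../14-30
```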
Contents of tail-hdfs.conf:
# Same layout as dir-hdfs.conf, but the source tails a single file
ag1.sources = source1
ag1.sinks = sink1
ag1.channels = channel1

# Source: exec -- follow the Nginx access log as it grows
ag1.sources.source1.type = exec
ag1.sources.source1.command = tail -F /root/log/access.log

# Sink: the same time-bucketed HDFS sink as in dir-hdfs.conf
ag1.sinks.sink1.type = hdfs
ag1.sinks.sink1.hdfs.path = hdfs://hdp-1:9000/access_log/%y-%m-%d/%H-%M
ag1.sinks.sink1.hdfs.filePrefix = app_log
ag1.sinks.sink1.hdfs.fileSuffix = .log
ag1.sinks.sink1.hdfs.batchSize = 100
ag1.sinks.sink1.hdfs.fileType = DataStream
ag1.sinks.sink1.hdfs.writeFormat = Text
ag1.sinks.sink1.hdfs.rollSize = 512000
ag1.sinks.sink1.hdfs.rollCount = 1000000
ag1.sinks.sink1.hdfs.rollInterval = 60
ag1.sinks.sink1.hdfs.round = true
ag1.sinks.sink1.hdfs.roundValue = 10
ag1.sinks.sink1.hdfs.roundUnit = minute
ag1.sinks.sink1.hdfs.useLocalTimeStamp = true

# Channel: in-memory buffer
ag1.channels.channel1.type = memory
ag1.channels.channel1.capacity = 500000
ag1.channels.channel1.transactionCapacity = 600

# Bindings
ag1.sources.source1.channels = channel1
ag1.sinks.sink1.channel = channel1
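The exec source only sees lines appended after tail -F starts, so something must be writing to /root/log/access.log for the agent to have anything to ship. A throwaway generator like the following can smoke-test the agent before Nginx is wired in (the /tmp/access.log path and the log line contents are placeholders, not part of the original setup):

```shell
# Append a few fake access-log lines for tail -F to pick up.
# LOG defaults to /tmp/access.log here to avoid touching /root/log.
LOG=${LOG:-/tmp/access.log}
for i in 1 2 3 4 5; do
  echo "192.168.1.$i - - [$(date '+%d/%b/%Y:%H:%M:%S %z')] \"GET / HTTP/1.1\" 200 612" >> "$LOG"
done
```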
3. Install and Configure Nginx
Install make:
yum -y install gcc automake autoconf libtool make
Install g++:
yum install gcc gcc-c++
Install OpenSSL:
yum -y install openssl openssl-devel
Install the PCRE library
Place the archive pcre-8.39.tar.gz into /root/ on hdp-1, then unpack and install it by running the following commands in order:
tar -zxvf pcre-8.39.tar.gz -C apps/
cd apps/pcre-8.39/
./configure
make
make install
Install the zlib library
Place the archive zlib-1.2.11.tar.gz into /root/ on hdp-1, then unpack and install it by running the following commands in order:
tar -zxvf zlib-1.2.11.tar.gz -C apps/
cd apps/zlib-1.2.11/
./configure
make
make install
Install Nginx
Place the archive nginx-1.1.10.tar.gz into /root/ on hdp-1, then unpack and install it by running the following commands in order:
tar -zxvf nginx-1.1.10.tar.gz -C apps/
cd apps/nginx-1.1.10
./configure
make
make install
4. Start Nginx
cd /usr/local/nginx/sbin
./nginx
Enter hdp-1 in a browser to check whether Nginx started successfully.
./nginx — start Nginx
./nginx -s stop — fast shutdown: equivalent to looking up the Nginx process ID and force-killing it with kill
./nginx -s quit — graceful shutdown: Nginx stops only after finishing its in-flight requests
./nginx -s reload — reload the configuration
ps aux | grep nginx — list the Nginx processes
5. Test the frame project
Package the frame project into a jar file with IDEA and place it in the /root/ directory of hdp-3,
then start the project on the hdp-3 virtual machine.
Enter hdp-3:8180 in the browser's address bar to verify.
6. Configure Nginx on hdp-1 to test the frame project
cd /usr/local/nginx/conf/
Edit the nginx.conf file:
vi nginx.conf
Uncomment the log format:
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';
Add the following configuration:
upstream frame-tomcat {
    server hdp-3:8180;
}
server {
    listen       80;
    server_name  hdp-1;

    #charset koi8-r;

    access_log  logs/log.frame.access.log  main;

    location / {
        # root   html;
        # index  index.html index.htm;
        proxy_pass http://frame-tomcat;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   html;
    }
}
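The upstream block is what lets Nginx front the Tomcat instance. If more frame instances existed (hdp-4 below is hypothetical, not part of this cluster), the same block would round-robin requests across them:

```nginx
upstream frame-tomcat {
    server hdp-3:8180 weight=2;   # existing instance, gets 2 of every 3 requests
    server hdp-4:8180;            # hypothetical second instance, default weight 1
}
```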
Reload Nginx (./nginx -s reload).
Enter hdp-1 in a browser to verify that the request is proxied to the frame project.
7. Use Flume to collect the Nginx access log into HDFS
Start HDFS:
start-dfs.sh
Start YARN:
start-yarn.sh
Start the frame project on hdp-3.
Start Nginx on hdp-1.
Follow the log file log.frame.access.log.
Open the browser and click the login button:
the log file is updated.
Clone two terminal sessions on hdp-1.
In the first hdp-1 session, look at the files currently on HDFS.
In the second hdp-1 session, start the Flume agent from flume-1.6.0/bin/:
./flume-ng agent -c ../conf/ -f ../tail-hdfs.conf -n ag1 -Dflume.root.logger=INFO,console
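The command above runs the tail agent. dir-hdfs.conf, created in step 2 but never launched above, would be started the same way (also from flume-1.6.0/bin/, against the same running cluster; shown here as a sketch since it needs the cluster up):

```shell
./flume-ng agent -c ../conf/ -f ../dir-hdfs.conf -n ag1 -Dflume.root.logger=INFO,console
```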
Once the agent is running, check the files on HDFS again in the first hdp-1 session.
Continue interacting with frame, e.g., click the school button to view the school page,
then re-check the files on HDFS in the first hdp-1 session.
Original article: https://www.cnblogs.com/caiwuzi/p/13181362.html