1. Run the JMX exporter as a Java agent in all four daemons (NameNode, DataNode, ResourceManager, NodeManager). To do this, add the -javaagent option to each daemon's *_OPTS variable in hadoop-env.sh and yarn-env.sh (a sketch of the referenced prometheus_config.yml follows these files):
[root@cloud01 hadoop]# cat yarn-env.sh | egrep -v '^$|#'
export YARN_RESOURCEMANAGER_OPTS="$YARN_RESOURCEMANAGER_OPTS -javaagent:/home/ec2-user/jmx_exporter/jmx_prometheus_javaagent-0.3.1.jar=9104:/home/ec2-user/jmx_exporter/prometheus_config.yml"
export YARN_NODEMANAGER_OPTS="$YARN_NODEMANAGER_OPTS -javaagent:/home/ec2-user/jmx_exporter/jmx_prometheus_javaagent-0.3.1.jar=9105:/home/ec2-user/jmx_exporter/prometheus_config.yml"
[root@do1cloud01 hadoop]# cat hadoop-env.sh | egrep -v '^$|#'
JAVA_HOME=/do1cloud/jdk1.8.0_151
export HADOOP_OS_TYPE=${HADOOP_OS_TYPE:-$(uname -s)}
case ${HADOOP_OS_TYPE} in
  Darwin*)
    export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.realm= "
    export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.kdc= "
    export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.conf= "
  ;;
esac
export HADOOP_NAMENODE_OPTS="$HADOOP_NAMENODE_OPTS -javaagent:/home/ec2-user/jmx_exporter/jmx_prometheus_javaagent-0.3.1.jar=9102:/home/ec2-user/jmx_exporter/prometheus_config.yml"
export HADOOP_DATANODE_OPTS="$HADOOP_DATANODE_OPTS -javaagent:/home/ec2-user/jmx_exporter/jmx_prometheus_javaagent-0.3.1.jar=9103:/home/ec2-user/jmx_exporter/prometheus_config.yml"
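The prometheus_config.yml passed to the agent is not shown in the original post; a minimal sketch that simply exposes all MBeans (assuming no renaming or filtering rules are needed) could look like this:

# /home/ec2-user/jmx_exporter/prometheus_config.yml (hypothetical minimal content)
lowercaseOutputName: true
lowercaseOutputLabelNames: true
rules:
  - pattern: ".*"

After editing the env files, restart the NameNode, DataNode, ResourceManager and NodeManager so the agent is loaded; each daemon then serves Prometheus metrics on its assigned port (9102-9105).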
2. Prometheus configuration (a quick validation sketch follows the file):
[root@do1cloud03 prometheus]# cat prometheus.yml | egrep -v '^$|#'
global:
rule_files:
  - "rules/cpu2mem.yml"
scrape_configs:
  - job_name: 'federate'
    scrape_interval: 10s
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job=~".+"}'
  - job_name: 'haya44'
    static_configs:
      - targets: ['192.168.1.45:9102','192.168.1.44:9103','192.168.1.44:9104','192.168.1.44:9105']
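A quick way to verify the setup, assuming the daemons were restarted with the agent and Prometheus runs on the host above (the 9102 endpoint on 192.168.1.45 is the NameNode per hadoop-env.sh):

# validate the configuration before reloading Prometheus (promtool ships with Prometheus)
./promtool check config prometheus.yml
# spot-check one exporter endpoint directly
curl -s http://192.168.1.45:9102/metrics | head

If everything is wired up, the four targets show as UP on the Prometheus /targets page and the Hadoop JMX metrics become queryable.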
Original post: https://www.cnblogs.com/hixiaowei/p/11647553.html