$ sudo yum install hue
Next we need to give the hue user proxy access to HDFS.
Edit /etc/hadoop-httpfs/conf/httpfs-site.xml:
<!-- Hue HttpFS proxy user setting -->
<property>
  <name>httpfs.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>httpfs.proxyuser.hue.groups</name>
  <value>*</value>
</property>

Then configure the HttpFS address in Hue by editing /etc/hue/conf/hue.ini. Since I installed HttpFS on host2 in the previous lesson, the address points at host2:
[hadoop]

  # Configuration for HDFS NameNode
  # ------------------------------------------------------------------------

  [[hdfs_clusters]]
    # HA support by using HttpFs

    [[[default]]]
      # Enter the filesystem uri
      fs_defaultfs=hdfs://mycluster

      # NameNode logical name.
      ## logical_name=

      # Use WebHdfs/HttpFs as the communication mechanism.
      # Domain should be the NameNode or HttpFs host.
      # Default port is 14000 for HttpFs.
      webhdfs_url=http://host2:14000/webhdfs/v1
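After changing httpfs-site.xml, HttpFS needs a restart for the proxy-user setting to take effect, and it is worth checking that the endpoint Hue will use actually answers. A small sketch, assuming the CDH init script name hadoop-httpfs and pseudo (user.name) authentication:

# restart HttpFS on host2 so the proxyuser change is picked up
sudo service hadoop-httpfs restart
# list the HDFS root through HttpFS as the hue user; should return a JSON FileStatuses listing
curl "http://host2:14000/webhdfs/v1/?op=LISTSTATUS&user.name=hue"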
  [[yarn_clusters]]

    [[[default]]]
      # Enter the host on which you are running the ResourceManager
      resourcemanager_host=host1

      # The port where the ResourceManager IPC listens on
      ## resourcemanager_port=8032

      # Whether to submit jobs to this cluster
      submit_to=True

      # Resource Manager logical name (required for HA)
      ## logical_name=

      # Change this if your YARN cluster is Kerberos-secured
      ## security_enabled=false

      # URL of the ResourceManager API
      resourcemanager_api_url=http://host1:8088

      # URL of the ProxyServer API
      proxy_api_url=http://host1:8088

      # URL of the HistoryServer API
      history_server_api_url=http://host1:19888
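As a quick check that Hue will be able to reach the ResourceManager API configured above, you can query it directly (this endpoint is part of the standard YARN REST API):

# should return cluster metadata as JSON
curl http://host1:8088/ws/v1/cluster/info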
[zookeeper]

  [[clusters]]

    [[[default]]]
      # Zookeeper ensemble. Comma separated list of Host/Port.
      # e.g. localhost:2181,localhost:2182,localhost:2183
      host_ports=host1:2181,host2:2181
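You can also confirm that the ZooKeeper ensemble listed above is reachable with the ruok four-letter command (assuming nc is installed; a healthy server replies with imok):

echo ruok | nc host1 2181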
Hue's HBase browser goes through the HBase Thrift server, so start that first:

service hbase-thrift start
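Before pointing Hue at it, you can confirm the Thrift server is listening on its default port 9090 (assuming netstat is available on the box):

sudo netstat -tlnp | grep 9090

With the Thrift server up, point Hue at it in the [hbase] section of hue.ini: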
[hbase]
  # Comma-separated list of HBase Thrift servers for clusters in the format of '(name|host:port)'.
  # Use full hostname with security.
  hbase_clusters=(Cluster|host1:9090)

  # HBase configuration directory, where hbase-site.xml is located.
  ## hbase_conf_dir=/etc/hbase/conf

  # Hard limit of rows or columns per row fetched before truncating.
  ## truncate_limit = 500

  # 'buffered' is the default of the HBase Thrift Server and supports security.
  # 'framed' can be used to chunk up responses,
  # which is useful when used in conjunction with the nonblocking server in Thrift.
  ## thrift_transport=buffered
[beeswax]
  # Host where HiveServer2 is running.
  # If Kerberos security is enabled, use fully-qualified domain name (FQDN).
  hive_server_host=host1

  # Port where HiveServer2 Thrift server runs on.
  ## hive_server_port=10000

  # Hive configuration directory, where hive-site.xml is located
  ## hive_conf_dir=/etc/hive/conf
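If you want to verify the HiveServer2 address before going through Hue, beeline can connect to the same host and port (assuming beeline is installed on the machine you run this from):

beeline -u "jdbc:hive2://host1:10000"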
[impala]
  # Host of the Impala Server (one of the Impalad)
  server_host=host2
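Similarly, impala-shell can confirm that an impalad is actually running on host2 (it connects to port 21000 by default, while Hue uses the port configured in hue.ini):

impala-shell -i host2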
Hue's Spark app talks to a REST Spark Job Server, which we build with sbt. Install sbt first:

curl https://bintray.com/sbt/rpm/rpm > bintray-sbt-rpm.repo
sudo mv bintray-sbt-rpm.repo /etc/yum.repos.d/
sudo yum install sbt
git clone https://github.com/ooyala/spark-jobserver.git
cd spark-jobserver
sbt re-start

Edit the job server's configuration file to connect it to your spark-master: edit job-server/src/main/resources/application.conf and set
master = "spark://xmseapp03:7077"
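For context, that master line sits inside the spark block of application.conf. A rough sketch of the relevant part (exact contents vary between spark-jobserver versions; the port shown is the job server default that Hue's [spark] section points at):

spark {
  # point the job server at your standalone Spark master
  master = "spark://xmseapp03:7077"

  jobserver {
    port = 8090
  }
}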
Then point Hue at the job server in hue.ini:

[spark]
  # URL of the REST Spark Job Server.
  server_url=http://host1:8090/
Also set a random secret_key in the [desktop] section of hue.ini:

secret_key=qpbdxoewsqlkhztybvfidtvwekftusgdlofbcfghaswuicmqp
service hue start
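If the service comes up cleanly, the web UI should answer on Hue's default port 8888 (http_port in hue.ini). A quick check, assuming Hue was installed on host1:

curl -I http://host1:8888/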
Hue's runtime log is written to /var/log/hue/runcpserver.log.
Hue supports an HBase query syntax. For example, in the screenshot below I look up all rowkeys that start with row1 and scan 50 rows down.
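The screenshot shows Hue's HBase browser; for comparison, roughly the same query in the hbase shell would be a prefix scan with a row limit (the table name 'test' here is just a placeholder):

scan 'test', {FILTER => "PrefixFilter('row1')", LIMIT => 50}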
Alex's Hadoop Tutorial for Beginners, Lesson 19: The Gorgeous Console HUE, Installation and Usage Guide
Original article: http://blog.csdn.net/nsrainbow/article/details/43677077