Hue is an open-source Apache Hadoop UI system that evolved from Cloudera Desktop; Cloudera later contributed it to the Apache Foundation's Hadoop community. It is built on the Python web framework Django.
With Hue, you can interact with a Hadoop cluster through a visual interface in a web browser to analyze and process data, for example browsing data on HDFS, running MapReduce jobs, and viewing data in HBase.
(1) Download
http://archive.cloudera.com/cdh5/cdh/5/
Download the latest Hue for CDH 5.11.1 (version 3.9.0) from the link above, upload it to the server, and extract it into the app directory.
(2) Prepare the required components
A MySQL database must be installed first.
The following components also need to be installed; a sketch of the typical dependency list follows.
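The original post's component list is not reproduced here. As a rough guide, the build dependencies that the Hue documentation lists for CentOS/RHEL look like this (package names vary by distribution, so treat this as a sketch rather than an exact requirement):
yum install -y ant asciidoc cyrus-sasl-devel cyrus-sasl-gssapi gcc gcc-c++ krb5-devel libffi-devel libxml2-devel libxslt-devel make mysql mysql-devel openldap-devel python-devel sqlite-devel gmp-devel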
(3) Build
Go to the Hue root directory and run:
make apps
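For reference, the whole build step might look like this, assuming the tarball was extracted under /home/hadoop/app (the exact directory name below is hypothetical):
cd /home/hadoop/app/hue-3.9.0-cdh5.11.1   # hypothetical extraction path
make apps
On success, the build creates a Python virtualenv under build/env, which also holds the startup scripts used later.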
(4) Configure
Basic configuration: open the desktop/conf/hue.ini file.
[desktop]
# Set this to a random string, the longer the better.
# This is used for secure hashing in the session store.
secret_key=jFE93j;2[290-eiw.KEiwN2s3['d;/.q[eIW^y#e=+Iei*@Mn<qW5o
# Webserver listens on this address and port
http_host=hadoop001
http_port=8888
# Time zone name
time_zone=Asia/Shanghai
# Enable or disable Django debug mode.
django_debug_mode=false
# Enable or disable backtrace for server error
http_500_debug_mode=false
# Enable or disable memory profiling.
## memory_profiler=false
# Server email for internal error messages
## django_server_email='hue@localhost.localdomain'
# Email backend
## django_email_backend=django.core.mail.backends.smtp.EmailBackend
# Webserver runs as this user
server_user=hue
server_group=hue
# This should be the Hue admin and proxy user
## default_user=hue
# This should be the hadoop cluster admin
#default_hdfs_superuser=hadoop
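The secret_key above is simply the string from the original post. If you want to generate your own random value, one simple option is:
openssl rand -base64 48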
Configure Hue to integrate with Hadoop
First, set up a proxy user on the Hadoop side by adding the following to Hadoop's core-site.xml:
<property>
    <name>hadoop.proxyuser.hue.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.hue.groups</name>
    <value>*</value>
</property>
Adding these two properties is sufficient; the property names follow the pattern hadoop.proxyuser.<user>.hosts/groups, where <user> is the user the Hue server runs as.
Then restart the Hadoop cluster:
sbin/stop-dfs.sh
sbin/stop-yarn.sh
sbin/start-dfs.sh
sbin/start-yarn.sh
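A full restart is the simplest approach. Alternatively, on a running cluster the proxy-user settings can be reloaded in place with the standard Hadoop admin commands (assuming you have admin rights on an unsecured cluster):
bin/hdfs dfsadmin -refreshSuperUserGroupsConfiguration
bin/yarn rmadmin -refreshSuperUserGroupsConfiguration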
Next, configure the Hadoop integration on the Hue side, again in hue.ini:
[hadoop]
# Configuration for HDFS NameNode
# ------------------------------------------------------------------------
[[hdfs_clusters]]
# HA support by using HttpFs
[[[default]]]
# Enter the filesystem uri
fs_defaultfs=hdfs://hadoop001:8020
# NameNode logical name.
## logical_name=
# Use WebHdfs/HttpFs as the communication mechanism.
# Domain should be the NameNode or HttpFs host.
# Default port is 14000 for HttpFs.
webhdfs_url=http://hadoop001:50070/webhdfs/v1
# Change this if your HDFS cluster is Kerberos-secured
## security_enabled=false
# Default umask for file and directory creation, specified in an octal value.
## umask=022
# Directory of the Hadoop configuration
hadoop_conf_dir=/home/hadoop/app/hadoop/etc/hadoop

# Configuration for YARN (MR2)
# ------------------------------------------------------------------------
[[yarn_clusters]]
[[[default]]]
# Enter the host on which you are running the ResourceManager
resourcemanager_host=hadoop002
# The port where the ResourceManager IPC listens on
resourcemanager_port=8032
# Whether to submit jobs to this cluster
submit_to=True
# Resource Manager logical name (required for HA)
## logical_name=
# Change this if your YARN cluster is Kerberos-secured
## security_enabled=false
# URL of the ResourceManager API
resourcemanager_api_url=http://hadoop002:8088
# URL of the ProxyServer API
proxy_api_url=http://hadoop002:8088
# URL of the HistoryServer API
history_server_api_url=http://hadoop002:19888
# In secure mode (HTTPS), if SSL certificates from Resource Manager's
# Rest Server have to be verified against certificate authority
## ssl_cert_ca_verify=False

# HA support by specifying multiple clusters
# e.g.
# [[[ha]]]
# Resource Manager logical name (required for HA)
## logical_name=my-rm-name

# Configuration for MapReduce (MR1)
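To verify that the webhdfs_url configured above is reachable, a plain WebHDFS call can be used (the user.name parameter is illustrative; any valid user works on an unsecured cluster):
curl "http://hadoop001:50070/webhdfs/v1/?op=LISTSTATUS&user.name=hue"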
Configure Hue to integrate with Hive
[beeswax]
# Host where HiveServer2 is running.
# If Kerberos security is enabled, use fully-qualified domain name (FQDN).
hive_server_host=hadoop001
# Port where HiveServer2 Thrift server runs on.
hive_server_port=10000
# Hive configuration directory, where hive-site.xml is located
hive_conf_dir=/home/hadoop/app/hive/conf
# Timeout in seconds for thrift calls to Hive service
server_conn_timeout=120
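Before pointing Hue at HiveServer2, it can help to confirm the Thrift endpoint with beeline (the -n user below is a placeholder; use whatever user your cluster expects):
beeline -u jdbc:hive2://hadoop001:10000 -n hadoop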
(5) Start Hue
First start Hive's metastore service and the hiveserver2 service; a command sketch follows.
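A minimal startup sequence, assuming the hive command is on the PATH and you are in the Hue root directory (the log file names are arbitrary):
nohup hive --service metastore > metastore.log 2>&1 &
nohup hive --service hiveserver2 > hiveserver2.log 2>&1 &
build/env/bin/supervisor
Here build/env/bin/supervisor is the launcher that the make apps build produces; it starts the Hue web server on the host and port configured in hue.ini.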
(6) Access Hue
http://hadoop004:8888 (the host and port come from the http_host/http_port settings in hue.ini)
A problem you may encounter:
Failed to contact an active Resource Manager: YARN RM returned a failed response: {
  "RemoteException" : {
    "message" : "User: hue is not allowed to impersonate admin",
    "exception" : "AuthorizationException",
    "javaClassName" : "org.apache.hadoop.security.authorize.AuthorizationException"
  }
} (error 403)
This error is caused by a mismatch between the proxy user configured in Hadoop's core-site.xml and the user configured in Hue's configuration file.
For example, suppose Hadoop's core-site.xml is configured like this:
<property>
<name>hadoop.proxyuser.hue.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hue.groups</name>
<value>*</value>
</property>
so the proxy user is hue.
But inside Hue the configuration looks like this:
# Webserver runs as this user
#server_user=hue
#server_group=hue
You need to uncomment server_user and server_group and set them both to hue, as shown below; that resolves the error.
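In desktop/conf/hue.ini, matching the hadoop.proxyuser.hue.* properties in core-site.xml:
# Webserver runs as this user
server_user=hue
server_group=hue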
Original post: https://www.cnblogs.com/nicekk/p/9028606.html