This article summarizes the ports used by the components of the Hadoop ecosystem, covering HDFS, MapReduce, HBase, Hive, Spark, WebHCat, Impala, Alluxio, Sqoop, and more; it will be updated over time.
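Before digging into any one component, it is often useful to verify from a client machine that a given port is actually reachable. A minimal sketch of a plain TCP probe (the host and port here are placeholders; substitute your own cluster's addresses):

```python
import socket

def is_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Example: probe the default NameNode web UI port on this machine.
# The result depends entirely on what is running locally.
print(is_port_open("127.0.0.1", 50070))
```

This only tests TCP reachability (useful for spotting firewall problems); it says nothing about whether the service behind the port is healthy.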
HDFS Ports:

| Service | Servers | Default Ports Used | Protocol | Description | Need End User Access? | Configuration Parameters |
|---|---|---|---|---|---|---|
| NameNode WebUI | Master Nodes (NameNode and any back-up NameNodes) | 50070 | http | Web UI to look at current status of HDFS, explore file system | Yes (Typically admins, Dev/Support teams) | dfs.http.address |
| | | 50470 | https | Secure http service | | dfs.https.address |
| NameNode metadata service | Master Nodes (NameNode and any back-up NameNodes) | 8020/9000 | IPC | File system metadata operations | Yes (All clients who directly need to interact with HDFS) | Embedded in URI specified by fs.default.name |
| DataNode | All Slave Nodes | 50075 | http | DataNode WebUI to access the status, logs, etc. | Yes (Typically admins, Dev/Support teams) | dfs.datanode.http.address |
| | | 50475 | https | Secure http service | | dfs.datanode.https.address |
| | | 50010 | | Data transfer | | dfs.datanode.address |
| | | 50020 | IPC | Metadata operations | No | dfs.datanode.ipc.address |
| Secondary NameNode | Secondary NameNode and any backup Secondary NameNodes | 50090 | http | Checkpoint for NameNode metadata | No | dfs.secondary.http.address |
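The two HDFS ports clients touch most often are the metadata IPC port and the NameNode web UI. A sketch of the client-side values implied by the defaults above (the hostname is a placeholder):

```python
# Derive common HDFS client addresses from the default ports in the table above.
# NAMENODE_HOST is a placeholder; substitute your own NameNode address.
NAMENODE_HOST = "namenode.example.com"

# Value typically placed in fs.default.name (core-site.xml); 8020 or 9000 by default.
fs_default_name = f"hdfs://{NAMENODE_HOST}:8020"

# NameNode web UI address (dfs.http.address).
namenode_web_ui = f"http://{NAMENODE_HOST}:50070"

print(fs_default_name)
print(namenode_web_ui)
```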
Map Reduce Ports:

| Service | Servers | Default Ports Used | Protocol | Description | Need End User Access? | Configuration Parameters |
|---|---|---|---|---|---|---|
| JobTracker WebUI | Master Nodes (JobTracker Node and any back-up JobTracker node) | 50030 | http | Web UI for JobTracker | Yes | mapred.job.tracker.http.address |
| JobTracker | Master Nodes (JobTracker Node) | 8021 | IPC | For job submissions | Yes (All clients who need to submit MapReduce jobs, including Hive, Hive server, Pig) | Embedded in URI specified by mapred.job.tracker |
| TaskTracker Web UI and Shuffle | All Slave Nodes | 50060 | http | TaskTracker Web UI to access status, logs, etc. | Yes (Typically admins, Dev/Support teams) | mapred.task.tracker.http.address |
| History Server WebUI | | 51111 | http | Web UI for Job History | Yes | mapreduce.history.server.http.address |
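For MRv1 clients, the key setting is mapred.job.tracker, whose value is a plain host:port pair rather than a URI with a scheme. A sketch with placeholder hostnames:

```python
# Client-side values implied by the MapReduce (MRv1) table above.
# JOBTRACKER_HOST is a placeholder.
JOBTRACKER_HOST = "jobtracker.example.com"

# Value of mapred.job.tracker in mapred-site.xml: the IPC endpoint on port 8021.
mapred_job_tracker = f"{JOBTRACKER_HOST}:8021"

# JobTracker web UI (mapred.job.tracker.http.address).
jobtracker_web_ui = f"http://{JOBTRACKER_HOST}:50030"

print(mapred_job_tracker, jobtracker_web_ui)
```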
HBase Ports:

| Service | Servers | Default Ports Used | Protocol | Description | Need End User Access? | Configuration Parameters |
|---|---|---|---|---|---|---|
| HMaster | Master Nodes (HBase Master Node and any back-up HBase Master node) | 60000 | | | Yes | hbase.master.port |
| HMaster Info Web UI | Master Nodes (HBase Master Node and back-up HBase Master node if any) | 60010 | http | The port for the HBase Master web UI. Set to -1 if you do not want the info server to run. | Yes | hbase.master.info.port |
| Region Server | All Slave Nodes | 60020 | | | Yes (Typically admins, dev/support teams) | hbase.regionserver.port |
| Region Server Web UI | All Slave Nodes | 60030 | http | | Yes (Typically admins, dev/support teams) | hbase.regionserver.info.port |
| ZooKeeper | All ZooKeeper Nodes | 2888 | | Port used by ZooKeeper peers to talk to each other | No | hbase.zookeeper.peerport |
| | All ZooKeeper Nodes | 3888 | | Port used by ZooKeeper peers for leader election | No | hbase.zookeeper.leaderport |
| | | 2181 | | Property from ZooKeeper's config zoo.cfg; the port at which clients connect | No | hbase.zookeeper.property.clientPort |
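HBase clients never talk to ports 2888/3888 (those carry ZooKeeper's internal peer and leader-election traffic); what a client needs is the quorum of ZooKeeper hosts on the client port 2181. A sketch of building that connect string, with placeholder hostnames:

```python
# Build the ZooKeeper quorum string an HBase client needs
# (hbase.zookeeper.quorum hosts + clientPort 2181). Hostnames are placeholders.
zk_hosts = ["zk1.example.com", "zk2.example.com", "zk3.example.com"]
CLIENT_PORT = 2181  # hbase.zookeeper.property.clientPort

connect_string = ",".join(f"{h}:{CLIENT_PORT}" for h in zk_hosts)
print(connect_string)
```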
Hive Ports:

| Service | Servers | Default Ports Used | Protocol | Description | Need End User Access? | Configuration Parameters |
|---|---|---|---|---|---|---|
| HiveServer2 | Hive Server machine (usually a utility machine) | 10000 | thrift | Service for programmatically (Thrift/JDBC) connecting to Hive | Yes (Clients who need to connect to Hive either programmatically or through UI SQL tools that use JDBC) | ENV variable HIVE_PORT |
| Hive Metastore | | 9083 | thrift | | Yes (Clients that run Hive, Pig and potentially M/R jobs that use HCatalog) | hive.metastore.uris |
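In practice these two ports show up in two different connection strings: the HiveServer2 JDBC URL and the remote-metastore URI. A sketch with a placeholder hostname and the default database name:

```python
# Client-side connection strings implied by the Hive table above.
# HIVE_HOST is a placeholder.
HIVE_HOST = "hive.example.com"

# JDBC URL for HiveServer2 (Thrift on port 10000), pointing at the default database.
jdbc_url = f"jdbc:hive2://{HIVE_HOST}:10000/default"

# Typical value of hive.metastore.uris for clients using a remote metastore.
metastore_uri = f"thrift://{HIVE_HOST}:9083"

print(jdbc_url)
print(metastore_uri)
```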
WebHCat Ports:

| Service | Servers | Default Ports Used | Protocol | Description | Need End User Access? |
|---|---|---|---|---|---|
| WebHCat Server | Any utility machine | 50111 | http | Web API on top of HCatalog and other Hadoop services | Yes |
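Since WebHCat (Templeton) is a plain REST API, a simple smoke test is to hit its status endpoint on port 50111. A sketch building that URL (the hostname is a placeholder):

```python
# WebHCat (Templeton) exposes a REST API on port 50111; the status endpoint
# is a common smoke test. WEBHCAT_HOST is a placeholder.
WEBHCAT_HOST = "utility.example.com"

status_url = f"http://{WEBHCAT_HOST}:50111/templeton/v1/status"
print(status_url)
# A healthy server typically answers a GET on this URL with a small JSON
# status document.
```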
Spark Ports:

| Service | Servers | Default Ports Used | Description |
|---|---|---|---|
| Spark Master | Nodes running the Spark standalone master | 7077 | Standalone master port that workers and spark-submit connect to (the monitoring web UI usually listens on 8080) |
Impala Ports:

| Service | Servers | Default Ports Used | Description |
|---|---|---|---|
| Impala Daemon | Nodes running the Impala daemon | 21000 | Used by impala-shell to transmit commands and receive results |
| Impala Daemon | Nodes running the Impala daemon | 21050 | Used by applications through JDBC |
| Impala Daemon | Nodes running the Impala daemon | 25000 | Impala web interface for monitoring and troubleshooting |
| Impala StateStore Daemon | Nodes running the Impala StateStore daemon | 25010 | StateStore web interface for monitoring and troubleshooting |
| Impala Catalog Daemon | Nodes running the Impala catalog daemon | 25020 | Catalog service web interface for monitoring and troubleshooting |
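A common source of confusion with Impala is that the daemon exposes different ports for different client types. A small lookup sketch based on the table above:

```python
# Pick the right Impala daemon port for a given client type, per the table above.
IMPALA_PORTS = {
    "impala-shell": 21000,  # Thrift port used by impala-shell
    "jdbc": 21050,          # HiveServer2-compatible Thrift port for JDBC/ODBC apps
    "web-ui": 25000,        # debug/monitoring web interface
}

def impala_port(client):
    """Return the default Impala daemon port for the given client type."""
    return IMPALA_PORTS[client]

print(impala_port("jdbc"))  # 21050
```

Note in particular that generic JDBC tools should use 21050, not the impala-shell port 21000.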
Alluxio Ports:

| Service | Servers | Default Ports Used | Protocol | Description | Need End User Access? |
|---|---|---|---|---|---|
| Alluxio Web GUI | Any utility machine | 19999 | http | Web GUI to check Alluxio status | Yes |
| Alluxio API | Any utility machine | 19998 | TCP | API to access data on Alluxio | No |
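Applications address Alluxio data with the alluxio:// scheme against the master's API port, while humans use the web GUI port. A sketch with placeholder host and path:

```python
# Alluxio addresses implied by the table above. ALLUXIO_MASTER and the
# example path are placeholders.
ALLUXIO_MASTER = "alluxio-master.example.com"

# URI an application would use to read data through the Alluxio API port.
data_uri = f"alluxio://{ALLUXIO_MASTER}:19998/user/data/input.csv"

# Web GUI for checking Alluxio status.
web_ui = f"http://{ALLUXIO_MASTER}:19999"

print(data_uri)
print(web_ui)
```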
Sqoop Ports:

| Service | Servers | Default Ports Used | Description |
|---|---|---|---|
| Sqoop Server | Nodes running Sqoop | 12000 | Used by the Sqoop client to access the Sqoop server |
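A Sqoop2 client is pointed at the server's REST endpoint on port 12000. A sketch, assuming the default /sqoop context path (the hostname is a placeholder):

```python
# Build the Sqoop2 server URL a client would be configured with.
# SQOOP_HOST is a placeholder; /sqoop is the default Sqoop2 context path.
SQOOP_HOST = "sqoop.example.com"

server_url = f"http://{SQOOP_HOST}:12000/sqoop"
print(server_url)
```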
Original article: http://www.cnblogs.com/allanli/p/hadoop_ecosystem_com_ports.html