Background:
A recent security scan flagged Hadoop with the finding "Hadoop unauthorized access [principle scan]". Following the official documentation and a few other references, I enabled service-level authorization in a test environment. Along the way I ran into quite a few pitfalls, or at least things I had not fully thought through, so I am recording them here; the whole exercise took two days.
Environment:
Hadoop version: 2.6.2
Steps:
1. To enable service-level authorization, set hadoop.security.authorization to true in core-site.xml:
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
  <description>Is service-level authorization enabled?</description>
</property>
Note: per the official documentation, this property only switches on service-level authorization; the authentication type stays at the default simple, i.e. the identity checked is the client's OS user. With this property set to true, service-level authorization is now in effect.
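For reference, the authentication mode itself is governed by a separate core-site.xml property, hadoop.security.authentication, whose default value is simple; a sketch of making the default explicit (not required for this exercise):

<property>
  <name>hadoop.security.authentication</name>
  <value>simple</value>
  <description>simple (OS-user based, the default) or kerberos</description>
</property>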
After adding this property, restart the NameNode:
sbin/hadoop-daemon.sh stop namenode
sbin/hadoop-daemon.sh start namenode
How do you know the configuration really took effect? Check the Hadoop security audit log SecurityAuth-aiprd.audit: if new entries containing authorization information are being appended, the switch is active.
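To watch the audit log in real time, something like the following works (a sketch; the exact file name depends on the hadoop.security.logger setting and on the user the NameNode runs as, here aiprd):

# on the NameNode host
tail -f $HADOOP_HOME/logs/SecurityAuth-aiprd.audit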
2. The per-service ACLs live in hadoop-policy.xml:
<configuration>
  <property> <name>security.client.protocol.acl</name> <value>*</value> <description>ACL for ClientProtocol, which is used by user code via the DistributedFileSystem. The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.</description> </property>
  <property> <name>security.client.datanode.protocol.acl</name> <value>*</value> <description>ACL for ClientDatanodeProtocol, the client-to-datanode protocol for block recovery. The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.</description> </property>
  <property> <name>security.datanode.protocol.acl</name> <value>*</value> <description>ACL for DatanodeProtocol, which is used by datanodes to communicate with the namenode. The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.</description> </property>
  <property> <name>security.inter.datanode.protocol.acl</name> <value>*</value> <description>ACL for InterDatanodeProtocol, the inter-datanode protocol for updating generation timestamp. The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.</description> </property>
  <property> <name>security.namenode.protocol.acl</name> <value>*</value> <description>ACL for NamenodeProtocol, the protocol used by the secondary namenode to communicate with the namenode. The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.</description> </property>
  <property> <name>security.admin.operations.protocol.acl</name> <value>*</value> <description>ACL for AdminOperationsProtocol. Used for admin commands. The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.</description> </property>
  <property> <name>security.refresh.user.mappings.protocol.acl</name> <value>*</value> <description>ACL for RefreshUserMappingsProtocol. Used to refresh users mappings. The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.</description> </property>
  <property> <name>security.refresh.policy.protocol.acl</name> <value>*</value> <description>ACL for RefreshAuthorizationPolicyProtocol, used by the dfsadmin and mradmin commands to refresh the security policy in-effect. The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.</description> </property>
  <property> <name>security.ha.service.protocol.acl</name> <value>*</value> <description>ACL for HAService protocol used by HAAdmin to manage the active and stand-by states of namenode.</description> </property>
  <property> <name>security.zkfc.protocol.acl</name> <value>*</value> <description>ACL for access to the ZK Failover Controller</description> </property>
  <property> <name>security.qjournal.service.protocol.acl</name> <value>*</value> <description>ACL for QJournalProtocol, used by the NN to communicate with JNs when using the QuorumJournalManager for edit logs.</description> </property>
  <property> <name>security.mrhs.client.protocol.acl</name> <value>*</value> <description>ACL for HSClientProtocol, used by job clients to communciate with the MR History Server job status etc. The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.</description> </property>
  <!-- YARN Protocols -->
  <property> <name>security.resourcetracker.protocol.acl</name> <value>*</value> <description>ACL for ResourceTrackerProtocol, used by the ResourceManager and NodeManager to communicate with each other. The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.</description> </property>
  <property> <name>security.resourcemanager-administration.protocol.acl</name> <value>*</value> <description>ACL for ResourceManagerAdministrationProtocol, for admin commands. The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.</description> </property>
  <property> <name>security.applicationclient.protocol.acl</name> <value>*</value> <description>ACL for ApplicationClientProtocol, used by the ResourceManager and applications submission clients to communicate with each other. The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.</description> </property>
  <property> <name>security.applicationmaster.protocol.acl</name> <value>*</value> <description>ACL for ApplicationMasterProtocol, used by the ResourceManager and ApplicationMasters to communicate with each other. The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.</description> </property>
  <property> <name>security.containermanagement.protocol.acl</name> <value>*</value> <description>ACL for ContainerManagementProtocol protocol, used by the NodeManager and ApplicationMasters to communicate with each other. The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.</description> </property>
  <property> <name>security.resourcelocalizer.protocol.acl</name> <value>*</value> <description>ACL for ResourceLocalizer protocol, used by the NodeManager and ResourceLocalizer to communicate with each other. The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.</description> </property>
  <property> <name>security.job.task.protocol.acl</name> <value>*</value> <description>ACL for TaskUmbilicalProtocol, used by the map and reduce tasks to communicate with the parent tasktracker. The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.</description> </property>
  <property> <name>security.job.client.protocol.acl</name> <value>*</value> <description>ACL for MRClientProtocol, used by job clients to communciate with the MR ApplicationMaster to query job status etc. The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.</description> </property>
  <property> <name>security.applicationhistory.protocol.acl</name> <value>*</value> <description>ACL for ApplicationHistoryProtocol, used by the timeline server and the generic history service client to communicate with each other. The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.</description> </property>
</configuration>
Note: hadoop-policy.xml ships with an ACL entry for each of the protocols above, and every one defaults to *, which means any user is allowed to access that service.
3. For now I only need to control which client users can access the NameNode, i.e. change the value of security.client.protocol.acl:
<property>
  <name>security.client.protocol.acl</name>
  <value>aiprd</value>
  <description>ACL for ClientProtocol, which is used by user code via the DistributedFileSystem.</description>
</property>
Note: this means a client whose connecting OS user is aiprd can access the NameNode.
Refresh the ACL configuration:
bin/hdfs dfsadmin -refreshServiceAcl
The value format is as follows:
<property>
  <name>security.job.submission.protocol.acl</name>
  <value>user1,user2 group1,group2</value>
</property>
Note: users are separated by commas, groups are separated by commas, and the user list is separated from the group list by a single blank. If there are no users, the value must start with a blank followed by the group list.
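For example, a groups-only ACL would start with a blank before the group list (the group names here are made up for illustration):

<property>
  <name>security.client.protocol.acl</name>
  <!-- leading blank: no users, only members of groups hadoop and ops are allowed -->
  <value> hadoop,ops</value>
</property>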
4. Verify by accessing files in HDFS from a remote client.
[aiprd@localhost ~]$ hdfs dfs -ls hdfs://hadoop1:9000/
Found 10 items
drwxr-xr-x   - aiprd  supergroup          0 2019-08-14 04:31 hdfs://hadoop1:9000/hbase
drwxr-xr-x   - aiprd  hadoop              0 2019-08-14 06:40 hdfs://hadoop1:9000/test01
drwxr-xr-x   - aiprd  supergroup          0 2019-08-14 06:22 hdfs://hadoop1:9000/test02
drwxr-xr-x   - aiprd  supergroup          0 2019-08-14 23:39 hdfs://hadoop1:9000/test03
drwxr-xr-x   - aiprd  supergroup          0 2019-08-14 06:30 hdfs://hadoop1:9000/test07
drwxr-xr-x   - aiprd  supergroup          0 2019-08-14 06:31 hdfs://hadoop1:9000/test08
drwxr-xr-x   - aiprd  supergroup          0 2019-08-14 06:32 hdfs://hadoop1:9000/test09
drwxr-xr-x   - aiprd  supergroup          0 2019-08-14 06:41 hdfs://hadoop1:9000/test10
drwxrwx---   - aiprd  supergroup          0 2019-08-14 07:06 hdfs://hadoop1:9000/test11
drwxr-xr-x   - aiprd1 supergroup          0 2019-08-15 00:10 hdfs://hadoop1:9000/test12
Note: on the client machine the Hadoop program is deployed under the aiprd user, and running the command as that user lists the files and directories. aiprd is also the user that starts the NameNode, i.e. the Hadoop superuser, which is why most of the listed entries are owned by aiprd.
5. Test whether adding or using a different user works.
<property>
  <name>security.client.protocol.acl</name>
  <value>aiprd1</value>
  <description>ACL for ClientProtocol, which is used by user code via the DistributedFileSystem.</description>
</property>
Refresh the ACL configuration:
bin/hdfs dfsadmin -refreshServiceAcl
The ACL user has been changed to aiprd1, so only client programs running as OS user aiprd1 should be able to connect.
6. On the client, keep using the Hadoop client previously deployed under the aiprd user:
[aiprd@localhost ~]$ hdfs dfs -ls hdfs://hadoop1:9000/
ls: User aiprd (auth:SIMPLE) is not authorized for protocol interface org.apache.hadoop.hdfs.protocol.ClientProtocol, expected client Kerberos principal is null
Note: the aiprd user can no longer access HDFS.
7. On the client, deploy the Hadoop client under the aiprd1 user and access HDFS again:
[aiprd1@localhost ~]$ hdfs dfs -ls hdfs://hadoop1:9000/test12
Found 6 items
drwxr-xr-x   - aiprd  supergroup          0 2019-08-14 23:43 hdfs://hadoop1:9000/test12/01
drwxr-xr-x   - aiprd  supergroup          0 2019-08-14 23:43 hdfs://hadoop1:9000/test12/02
drwxr-xr-x   - aiprd  supergroup          0 2019-08-14 23:43 hdfs://hadoop1:9000/test12/03
drwxr-xr-x   - aiprd  supergroup          0 2019-08-14 23:44 hdfs://hadoop1:9000/test12/04
drwxr-xr-x   - aiprd  supergroup          0 2019-08-14 23:49 hdfs://hadoop1:9000/test12/05
drwxr-xr-x   - aiprd1 supergroup          0 2019-08-15 00:10 hdfs://hadoop1:9000/test12/10
Note: access succeeds. So when authorizing by user, the OS user that the client program runs as must match a user configured in hadoop-policy.xml; otherwise access is denied.
Since a service-level ACL can contain both users and groups, and users have now been verified, the next step is to verify groups. This is where most of the pitfalls appeared.
1. Still the same parameter as before, security.client.protocol.acl, but this time with a group in the value:
<property>
  <name>security.client.protocol.acl</name>
  <value>aiprd hadoop</value>
  <description>ACL for ClientProtocol, which is used by user code via the DistributedFileSystem.</description>
</property>
Refresh the ACL configuration:
bin/hdfs dfsadmin -refreshServiceAcl
Now the question: the user check was based on the OS user, and the group check should presumably work the same way, i.e. check whether the connecting user is actually a member of the configured group.
2. On the client, the program under the aiprd user can still access HDFS; this was already verified above.
3. On the client, the Hadoop client deployed under aiprd1 normally cannot access HDFS; after adding aiprd1 to the hadoop group (sketched below), it should in theory be able to.
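The client-side group membership was set up with something along these lines (a sketch; run as root on the client host):

# add the existing user aiprd1 to the supplementary group hadoop
usermod -aG hadoop aiprd1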
[aiprd1@localhost ~]$ id aiprd1
uid=1001(aiprd1) gid=1001(aiprd1) groups=1001(aiprd1),1002(hadoop)
[aiprd1@localhost ~]$ hdfs dfs -ls hdfs://hadoop1:9000/test12
ls: User aiprd1 (auth:SIMPLE) is not authorized for protocol interface org.apache.hadoop.hdfs.protocol.ClientProtocol, expected client Kerberos principal is null
Verification shows it still cannot, so the hadoop group membership on the client had no effect.
I tried a number of things without success, so as a last resort I enabled DEBUG logging on the NameNode; the excerpt below is what it produced.
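(One possible way to raise the IPC server's log level at runtime, shown only as a sketch rather than as what was necessarily done here, and assuming the NameNode web UI is on its default port 50070:)

bin/hadoop daemonlog -setlevel hadoop1:50070 org.apache.hadoop.ipc.Server DEBUG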
2019-08-15 15:12:27,188 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user aiprd1: id: aiprd1: No such user
2019-08-15 15:12:27,188 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user aiprd1
2019-08-15 15:12:27,188 DEBUG org.apache.hadoop.ipc.Server: Socket Reader #1 for port 9000: responding to null from 192.168.30.1:61985 Call#-3 Retry#-1
2019-08-15 15:12:27,188 DEBUG org.apache.hadoop.ipc.Server: Socket Reader #1 for port 9000: responding to null from 192.168.30.1:61985 Call#-3 Retry#-1 Wrote 243 bytes.
org.apache.hadoop.security.authorize.AuthorizationException: User aiprd1 (auth:SIMPLE) is not authorized for protocol interface org.apache.hadoop.hdfs.protocol.ClientProtocol, expected client Kerberos principal is null
It says that when Hadoop tried to look up the groups for this user, the user did not exist, which seemed odd because the user clearly does exist. After searching around on this error, the following article provided a hint:
https://www.e-learn.cn/content/wangluowenzhang/1136832
To accomplish your goal you'd need to add your user account (clott) on the NameNode machine and add it to hadoop group there. If you are going to run MapReduce with your user, you'd need your user account to be configured on NodeManager hosts as well.
4. Following that advice, create the aiprd1 user on the NameNode host and add it to the hadoop group.
[root@hadoop1 ~]# useradd -G hadoop aiprd1
[root@hadoop1 ~]# id aiprd1
uid=503(aiprd1) gid=503(aiprd1) groups=503(aiprd1),502(hadoop)
[root@hadoop1 ~]# su - aiprd
[aiprd@hadoop1 ~]$ jps
15289 NameNode
15644 Jps
Note: this host runs the NameNode.
5. On the Hadoop client, run the query again as aiprd1:
[aiprd1@localhost ~]$ hdfs dfs -ls hdfs://hadoop1:9000/test12
Found 6 items
drwxr-xr-x   - aiprd  supergroup          0 2019-08-14 23:43 hdfs://hadoop1:9000/test12/01
drwxr-xr-x   - aiprd  supergroup          0 2019-08-14 23:43 hdfs://hadoop1:9000/test12/02
drwxr-xr-x   - aiprd  supergroup          0 2019-08-14 23:43 hdfs://hadoop1:9000/test12/03
drwxr-xr-x   - aiprd  supergroup          0 2019-08-14 23:44 hdfs://hadoop1:9000/test12/04
drwxr-xr-x   - aiprd  supergroup          0 2019-08-14 23:49 hdfs://hadoop1:9000/test12/05
drwxr-xr-x   - aiprd1 supergroup          0 2019-08-15 00:10 hdfs://hadoop1:9000/test12/10
The query now succeeds.
On the client, remove aiprd1 from the hadoop group (one way to do this is sketched below) and check:
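One way to drop the supplementary group on the client (a sketch; run as root on the client host):

# remove aiprd1 from the hadoop group
gpasswd -d aiprd1 hadoop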
[aiprd1@localhost ~]$ id
uid=1001(aiprd1) gid=1001(aiprd1) groups=1001(aiprd1)
Run the query again:
[aiprd1@localhost ~]$ hdfs dfs -ls hdfs://hadoop1:9000/test12
Found 6 items
drwxr-xr-x   - aiprd  supergroup          0 2019-08-14 23:43 hdfs://hadoop1:9000/test12/01
drwxr-xr-x   - aiprd  supergroup          0 2019-08-14 23:43 hdfs://hadoop1:9000/test12/02
drwxr-xr-x   - aiprd  supergroup          0 2019-08-14 23:43 hdfs://hadoop1:9000/test12/03
drwxr-xr-x   - aiprd  supergroup          0 2019-08-14 23:44 hdfs://hadoop1:9000/test12/04
drwxr-xr-x   - aiprd  supergroup          0 2019-08-14 23:49 hdfs://hadoop1:9000/test12/05
drwxr-xr-x   - aiprd1 supergroup          0 2019-08-15 00:10 hdfs://hadoop1:9000/test12/10
It still works. Clearly the group in the ACL has nothing to do with which groups the user belongs to on the client; the membership has to be configured on the NameNode host.
The official documentation explains this as follows:
Once a username has been determined as described above, the list of groups is determined by a group mapping service, configured by the hadoop.security.group.mapping property. The default implementation, org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback, will determine if the Java Native Interface (JNI) is available. If JNI is available, the implementation will use the API within hadoop to resolve a list of groups for a user. If JNI is not available then the shell implementation, org.apache.hadoop.security.ShellBasedUnixGroupsMapping, is used. This implementation shells out with the bash -c groups command (for a Linux/Unix environment) or the net group command (for a Windows environment) to resolve a list of groups for a user.
An alternate implementation, which connects directly to an LDAP server to resolve the list of groups, is available via org.apache.hadoop.security.LdapGroupsMapping. However, this provider should only be used if the required groups reside exclusively in LDAP, and are not materialized on the Unix servers. More information on configuring the group mapping service is available in the Javadocs.
For HDFS, the mapping of users to groups is performed on the NameNode. Thus, the host system configuration of the NameNode determines the group mappings for the users.
Note that HDFS stores the user and group of a file or directory as strings; there is no conversion from user and group identity numbers as is conventional in Unix.
In other words, for HDFS the user-to-group mapping is performed on the NameNode, so the host configuration of the NameNode determines the group memberships. I only really understood this after the experiment; I had assumed the client resolved the user's groups locally and sent them to the NameNode for the check.
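A convenient way to see which groups the NameNode actually resolves for a user is the hdfs groups command (a sketch; run from any machine that can reach the cluster):

# ask the NameNode which groups it maps these users to
bin/hdfs groups aiprd aiprd1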
At this point both user-based and group-based service-level ACLs work; the other services can be configured in the same way as needed. All this layer controls is which users and groups are allowed to connect to each service.
Summary:
1. Set hadoop.security.authorization to true to enable service-level authorization (authentication stays at the default simple, i.e. based on OS users); restart the NameNode after changing it.
2. For user entries in an ACL, access is granted as long as the OS user of the client process matches a user configured in that service's ACL.
3. For group entries, if the client connects as user A, then user A must also exist on the NameNode host and be a member of the ACL group there. The check works like this: the NameNode takes the client's user name (A) and resolves A's groups on the NameNode host; if A does not exist there, authorization fails; if A exists but is not in any configured group, authorization fails; if A exists and is in a configured group, authorization succeeds.
4. The groups configured in an ACL are unrelated to the groups that the client program's user belongs to on the client host.
5. After every change to hadoop-policy.xml, remember to run the refresh command (hdfs dfsadmin -refreshServiceAcl).
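Putting it together, a minimal recap of the procedure under the assumptions used above (NameNode host hadoop1, ACL value "aiprd hadoop"):

# 1. core-site.xml: set hadoop.security.authorization=true, then restart the NameNode
sbin/hadoop-daemon.sh stop namenode
sbin/hadoop-daemon.sh start namenode
# 2. hadoop-policy.xml: set security.client.protocol.acl to "aiprd hadoop", then push it to the running NameNode
bin/hdfs dfsadmin -refreshServiceAcl
# 3. make sure the NameNode host itself resolves the group for every allowed client user
useradd -G hadoop aiprd1   # on the NameNode host, as root
bin/hdfs groups aiprd1     # should include hadoop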
Also note that parameter names and configuration may differ between versions; always consult the documentation that matches your own Hadoop version:
https://hadoop.apache.org/docs/r2.6.2/hadoop-project-dist/hadoop-common/ServiceLevelAuth.html
Document created: 2019-08-15 17:30:24
Enabling Hadoop Service Level Authorization (SIMPLE authentication): pitfalls along the way
Original post: https://www.cnblogs.com/chuanzhang053/p/11359439.html