Hive authorization involves the concepts of users, groups, and roles.
Hive is currently used with authorization mainly in the following three modes:
1.Storage Based Authorization in the Metastore Server
Storage-based authorization in the Metastore server: by default Hive manages privileges like a DBMS, but a user who has been granted a privilege in Hive may still lack the corresponding HDFS permission. The current metadata authorization approach therefore combines DBMS-style privilege management with the permissions of the underlying storage.
When Hive grants a privilege to a user, it first checks whether the user has the corresponding permission on the matching HDFS directory; if so, the grant proceeds, otherwise an error is raised. For example, when granting a user SELECT on table test.t1, the statement fails immediately if the user cannot access the hdfs://user/hive/warehouse/test/t1 directory. This mode is disabled by default.
Related configuration:

| Property | Value |
|---|---|
| hive.metastore.pre.event.listeners | org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener |
| hive.security.metastore.authorization.manager | org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider |
| hive.security.metastore.authenticator.manager | org.apache.hadoop.hive.ql.security.HadoopDefaultMetastoreAuthenticator |
| hive.security.metastore.authorization.auth.reads | true |
With the settings above, storage-based authorization is enabled in the Metastore.
The Hive CLI and HCatalog use this authorization mode.
(For details see https://cwiki.apache.org/confluence/display/Hive/HCatalog+Authorization)
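In hive-site.xml these settings take the same `<property>` form used elsewhere in this post; a sketch:

```xml
<property>
  <name>hive.metastore.pre.event.listeners</name>
  <value>org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener</value>
</property>
<property>
  <name>hive.security.metastore.authorization.manager</name>
  <value>org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider</value>
</property>
<property>
  <name>hive.security.metastore.authenticator.manager</name>
  <value>org.apache.hadoop.hive.ql.security.HadoopDefaultMetastoreAuthenticator</value>
</property>
<property>
  <name>hive.security.metastore.authorization.auth.reads</name>
  <value>true</value>
</property>
```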
2.SQL Standards Based Authorization in HiveServer2
SQL-standards-based authorization in HiveServer2: storage-based authorization can only check whether a user may access a directory or file; it cannot enforce finer-grained controls such as column-level access. For that, SQL-standards-based authorization is needed. HiveServer2 provides a fully SQL-like privilege model: you can define roles and users, and grant privileges at the table and column level.
Note that the Hive CLI does not support SQL-standards-based authorization, because a CLI user could change the authorization settings or disable them entirely.
Features
Users and roles
Very much like a relational database: users can be granted roles, and a role is a named set of privileges.
Two roles are built in: public and admin. Every user has the public role, which allows basic operations; admin is the administrative role. After logging in to HiveServer2, run SHOW CURRENT ROLES;
to list your current roles, and use SET ROLE to switch roles.
Users with the admin role can create other roles. Before performing admin operations, run SET ROLE to switch into the admin role.
Role names are case-insensitive, but user names are case-sensitive.
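A short session sketch (the role name analyst, the user tom, and table test.t1 are hypothetical):

```sql
SHOW CURRENT ROLES;
SET ROLE admin;
CREATE ROLE analyst;
GRANT ROLE analyst TO USER tom;
GRANT SELECT ON TABLE test.t1 TO ROLE analyst;
```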
Configuration
hive-site.xml:

| Property | Value |
|---|---|
| hive.server2.enable.doAs | false |
| hive.users.in.admin.role | |
| hive.security.metastore.authorization.manager | org.apache.hadoop.hive.ql.security.authorization.MetaStoreAuthzAPIAuthorizerEmbedOnly |
| hive.security.authorization.manager | org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdConfOnlyAuthorizerFactory |
hiveserver2-site.xml (or the corresponding -hiveconf options when starting HiveServer2):
-hiveconf hive.security.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactory
-hiveconf hive.security.authorization.enabled=true
-hiveconf hive.security.authenticator.manager=org.apache.hadoop.hive.ql.security.SessionStateUserAuthenticator
-hiveconf hive.metastore.uris=' '
Note: these settings require hive.server2.enable.doAs=false.
Granting privileges
The usual GRANT/REVOKE operations apply; see the documentation for specifics.
(For details see https://cwiki.apache.org/confluence/display/Hive/SQL+Standard+Based+Hive+Authorization)
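For example (the table test.t1 and users irwin/analyst are hypothetical):

```sql
GRANT SELECT ON TABLE test.t1 TO USER irwin;
GRANT ALL ON TABLE test.t1 TO ROLE analyst WITH GRANT OPTION;
SHOW GRANT USER irwin ON TABLE test.t1;
REVOKE SELECT ON TABLE test.t1 FROM USER irwin;
```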
Summary: the Hive CLI and HCatalog use HDFS permissions plus a DBMS-like check, while HiveServer2 uses a fully DBMS-style SQL authorization model (provided the HDFS permission check passes).
By default Hive performs no authorization at all; any user with the appropriate HDFS permissions can use it.
Hive has no real superuser by default, but one can be implemented with a custom hook class. Using a superuser requires authorization to be enabled first.
The following settings enable basic authorization. Once enabled, users must be granted (or grant themselves) the corresponding privileges before performing an operation.
<property>
  <name>hive.security.authorization.enabled</name>
  <value>true</value>
  <description>Whether client-side authorization checking is enabled; default false.</description>
</property>
<property>
  <name>hive.security.authorization.createtable.owner.grants</name>
  <value>ALL</value>
  <description>Privileges automatically granted to the owner when a table is created; default empty, meaning the creator gets no read/write privileges on the new table. Values such as ALL, select, drop can be used.</description>
</property>
<property>
  <name>hive.security.authorization.createtable.user.grants</name>
  <value>ALL</value>
  <description>Privileges automatically granted to specific users when a table is created; default empty. Example: irwin,hadoop:select;tom:create</description>
</property>
<property>
  <name>hive.security.authorization.createtable.role.grants</name>
  <value>ALL</value>
  <description>Privileges automatically granted to specific roles when a table is created; default empty.</description>
</property>
<property>
  <name>hive.security.authorization.task.factory</name>
  <value>org.apache.hadoop.hive.ql.parse.authorization.HiveAuthorizationTaskFactoryImpl</value>
  <description>Overrides the default handling of authorization DDL; must be an implementation of the org.apache.hadoop.hive.ql.parse.authorization.HiveAuthorizationTaskFactory interface.</description>
</property>
<property>
  <name>hive.security.command.whitelist</name>
  <value>set</value>
  <description>Comma-separated list of non-SQL commands that authorized users may execute; default set,reset,dfs,add,delete.</description>
</property>
<property>
  <name>hive.conf.restricted.list</name>
  <value>hive.security.authorization.manager,hive.security.authenticator.manager,hive.users.in.admin.role</value>
  <description>Properties restricted from runtime modification; unless removed from this list, they cannot be changed or reset from the client.</description>
</property>
Quoted from:
http://blog.csdn.net/kwu_ganymede/article/details/52733021
package com.ganymede.hiveauth;

import org.apache.hadoop.hive.ql.parse.ASTNode;
import org.apache.hadoop.hive.ql.parse.AbstractSemanticAnalyzerHook;
import org.apache.hadoop.hive.ql.parse.HiveParser;
import org.apache.hadoop.hive.ql.parse.HiveSemanticAnalyzerHookContext;
import org.apache.hadoop.hive.ql.parse.SemanticException;
import org.apache.hadoop.hive.ql.session.SessionState;

/**
 * Created by Ganymede on 2016/10/4.
 */
public class MyAuthHook extends AbstractSemanticAnalyzerHook {
    // Hive administrators allowed to run admin DDL
    private static final String[] ADMINS = {"root", "hadoop", "hive"};

    @Override
    public ASTNode preAnalyze(HiveSemanticAnalyzerHookContext context, ASTNode ast)
            throws SemanticException {
        switch (ast.getToken().getType()) {
            case HiveParser.TOK_CREATEDATABASE:
            case HiveParser.TOK_DROPDATABASE:
            case HiveParser.TOK_CREATEROLE:
            case HiveParser.TOK_DROPROLE:
            case HiveParser.TOK_GRANT:
            case HiveParser.TOK_REVOKE:
            case HiveParser.TOK_GRANT_ROLE:
            case HiveParser.TOK_REVOKE_ROLE:
                String userName = null;
                if (SessionState.get() != null
                        && SessionState.get().getAuthenticator() != null) {
                    userName = SessionState.get().getAuthenticator().getUserName();
                }
                if (!isAdmin(userName)) {
                    throw new SemanticException(userName + " can't use ADMIN options, except "
                            + String.join(",", ADMINS) + ".");
                }
                break;
            default:
                break;
        }
        return ast;
    }

    private static boolean isAdmin(String userName) {
        for (String admin : ADMINS) {
            if (admin.equalsIgnoreCase(userName)) {
                return true;
            }
        }
        return false;
    }

    // Quick local check of the admin logic
    public static void main(String[] args) throws SemanticException {
        String userName = "root1";
        if (!isAdmin(userName)) {
            throw new SemanticException(userName + " can't use ADMIN options, except "
                    + String.join(",", ADMINS) + ".");
        }
    }
}
Package the class into a jar, place it under $HIVE_HOME/lib, then edit hive-site.xml:
<!-- This setting takes effect for HiveServer2 -->
<property>
  <name>hive.aux.jars.path</name>
  <value>file:///opt/mllib/hive-app.jar</value>
</property>
<!-- Register the superuser hook class -->
<property>
  <name>hive.semantic.analyzer.hook</name>
  <value>com.ganymede.hiveauth.MyAuthHook</value>
</property>
Once the superuser hook is in place, all other users must be granted privileges by a superuser before they can perform the restricted operations.
Next, configure HiveServer2's authentication mode, implementation class, and the username/password file:
<property>
  <name>hive.server2.authentication</name>
  <value>CUSTOM</value>
  <description>Authentication mode; default NONE, other options include NOSASL, KERBEROS, LDAP, PAM and CUSTOM.</description>
</property>
<property>
  <name>hive.server2.custom.authentication.class</name>
  <value>com.bqjr.bigdata.hive.hiveserver2.CustomHiveServer2Auth</value>
  <description>Custom class implementing HiveServer2 username/password checking.</description>
</property>
<property>
  <name>hive.server2.custom.authentication.file</name>
  <value>/usr/local/hiveserver2/hive.server2.users.conf</value>
  <description>HiveServer2 username/password file.</description>
</property>
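Judging from the parsing code below, each line of hive.server2.users.conf holds a username and the MD5 hex digest of the password, separated by a comma. For example (the user irwin is hypothetical; the hash shown is the MD5 of "123456"):

```
irwin,e10adc3949ba59abbe56e057f20f883e
```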
The custom HiveServer2 username/password authentication class:
package com.bqjr.bigdata.hive.hiveserver2;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hive.service.auth.PasswdAuthenticationProvider;

import javax.security.sasl.AuthenticationException;
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;

/**
 * Created by jian.zhang02 on 2016/12/21.
 */
public class CustomHiveServer2Auth implements PasswdAuthenticationProvider {
    @Override
    public void Authenticate(String username, String password) throws AuthenticationException {
        boolean ok = false;
        // MD5 is a small helper class (not shown in the original post)
        // that returns the hex digest of its argument
        String passMd5 = new MD5().md5(password);
        HiveConf hiveConf = new HiveConf();
        Configuration conf = new Configuration(hiveConf);
        String filePath = conf.get("hive.server2.custom.authentication.file");
        System.out.println("hive.server2.custom.authentication.file [" + filePath + "] ..");
        File file = new File(filePath);
        BufferedReader reader = null;
        try {
            reader = new BufferedReader(new FileReader(file));
            String line;
            while ((line = reader.readLine()) != null) {
                // each line: username,md5-of-password
                String[] parts = line.split(",", -1);
                if (parts.length != 2) {
                    continue;
                }
                if (parts[0].equals(username) && parts[1].equals(passMd5)) {
                    ok = true;
                    break;
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
            throw new AuthenticationException("read auth config file error, [" + filePath + "] ..", e);
        } finally {
            if (reader != null) {
                try {
                    reader.close();
                } catch (IOException ignored) {
                }
            }
        }
        if (ok) {
            System.out.println("user [" + username + "] auth check ok .. ");
        } else {
            System.out.println("user [" + username + "] auth check fail .. ");
            throw new AuthenticationException("user [" + username + "] auth check fail .. ");
        }
    }
}
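The class above calls an MD5 helper that the original post does not include. A minimal sketch of such a helper, assuming it returns a 32-character lowercase hex digest (the format the password-file entries would need to use):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class MD5 {
    // Returns the MD5 digest of the input as 32 lowercase hex characters,
    // padding each byte to two digits so leading zeros are preserved.
    public String md5(String input) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder(32);
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            // MD5 is available in every standard JRE
            throw new IllegalStateException(e);
        }
    }
}
```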
Grants under the default (legacy) authorization mode.
CREATE ROLE role_name
DROP ROLE role_name
GRANT ROLE role_name [, role_name] …
TO principal_specification [, principal_specification] …
[WITH ADMIN OPTION]
REVOKE [ADMIN OPTION FOR] ROLE role_name [, role_name] …
FROM principal_specification [, principal_specification] …
principal_specification:
USER user
| GROUP group
| ROLE role
SHOW ROLE GRANT principal_specification
principal_specification:
USER user
| GROUP group
| ROLE role
GRANT
priv_type [(column_list)]
[, priv_type [(column_list)]] …
[ON object_specification]
TO principal_specification [, principal_specification] …
[WITH GRANT OPTION]
REVOKE [GRANT OPTION FOR]
priv_type [(column_list)]
[, priv_type [(column_list)]] …
[ON object_specification]
FROM principal_specification [, principal_specification] …
REVOKE ALL PRIVILEGES, GRANT OPTION
FROM user [, user] …
priv_type:
ALL | ALTER | UPDATE | CREATE | DROP
| INDEX | LOCK | SELECT | SHOW_DATABASE
object_specification:
TABLE tbl_name
| DATABASE db_name
principal_specification:
USER user
| GROUP group
| ROLE role
SHOW GRANT principal_specification
[ON object_specification [(column_list)]]
principal_specification:
USER user
| GROUP group
| ROLE role
object_specification:
TABLE tbl_name
| DATABASE db_name
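A few concrete statements in this mode (the role, users, and table names are hypothetical):

```sql
CREATE ROLE analyst;
GRANT ROLE analyst TO USER irwin;
GRANT SELECT ON TABLE test.t1 TO ROLE analyst;
GRANT SELECT, DROP ON TABLE test.t1 TO USER tom;
SHOW GRANT USER tom ON TABLE test.t1;
REVOKE SELECT ON TABLE test.t1 FROM USER tom;
```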
| Operation | Required privilege |
|---|---|
| LOAD | UPDATE |
| EXPORT | SELECT |
| IMPORT | UPDATE, CREATE |
| CREATE TABLE | CREATE |
| CREATE TABLE AS SELECT | CREATE, SELECT |
| DROP TABLE | DROP |
| SELECT | SELECT |
| ALTER TABLE ADD COLUMN | ALTER |
| ALTER TABLE REPLACE COLUMN | ALTER |
| ALTER TABLE RENAME | ALTER |
| ALTER TABLE ADD PARTITION | CREATE |
| ALTER TABLE DROP PARTITION | DROP |
| ALTER TABLE ARCHIVE | ALTER |
| ALTER TABLE UNARCHIVE | ALTER |
| ALTER TABLE SET PROPERTIES | ALTER |
| ALTER TABLE SET SERDE | ALTER |
| ALTER TABLE SET SERDEPROPERTIES | ALTER |
| ALTER TABLE CLUSTER BY | ALTER |
| ALTER TABLE PROTECT MODE | ALTER |
| ALTER PARTITION PROTECT MODE | ALTER |
| ALTER TABLE SET FILEFORMAT | ALTER |
| ALTER PARTITION SET FILEFORMAT | ALTER |
| ALTER TABLE SET LOCATION | ALTER |
| ALTER PARTITION SET LOCATION | ALTER |
| ALTER TABLE CONCATENATE | ALTER |
| ALTER PARTITION CONCATENATE | ALTER |
| SHOW DATABASES | SHOW_DATABASE |
| LOCK TABLE | LOCK |
| UNLOCK TABLE | LOCK |
Original post: http://www.cnblogs.com/skyrim/p/7455270.html