LDAP User/Group filter for Hive

I have an LDAP server and a group, and I want to set up LDAP authentication for Hive on AWS using that group.
Please find the details below:
**CN=hadoop-admins
OU=Groups,OU=Root
DC=int,DC=domain,DC=com**
I have put the values in the following hive properties:
<property>
<name>hive.server2.authentication.ldap.groupDNPattern</name>
<value>uid=%s,OU=Groups,OU=Root,DC=int,DC=domain,DC=com:CN=%s,
CN=Groups,ou=Root,DC=int,DC=domain,DC=com</value>
</property>
<property>
<name>hive.server2.authentication.ldap.groupFilter</name>
<value>hadoop-admins</value>
</property>
But the authentication failed with the below error:
**2017-12-28T12:45:27,064 INFO [HiveServer2-Handler-Pool: Thread-33([])]: ldap.LdapSearch (LdapSearch.java:findUserDn(100)) - Expected exactly one user result for the user: Aditya.Tiwari, but got 0. Returning null**
I am part of the LDAP group, but it is still failing. Can someone please help me out with this?
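As a side note on how these properties behave: HiveServer2 substitutes the short username for %s in each colon-separated pattern and then searches for exactly one matching entry. The sketch below (plain Python mimicking that expansion, not Hive's actual code; the userDNPattern value with OU=Users is a made-up example) shows why a missing or wrong hive.server2.authentication.ldap.userDNPattern makes findUserDn return 0 results for Aditya.Tiwari:

```python
# Hypothetical helper mimicking HiveServer2's DN-pattern expansion (not Hive code).
def expand_dn_patterns(pattern_string, username):
    """Substitute the username into each colon-separated DN pattern."""
    return [p.strip() % username for p in pattern_string.split(":")]

# Group pattern matching the group posted above.
print(expand_dn_patterns("CN=%s,OU=Groups,OU=Root,DC=int,DC=domain,DC=com",
                         "hadoop-admins"))
# -> ['CN=hadoop-admins,OU=Groups,OU=Root,DC=int,DC=domain,DC=com']

# A userDNPattern is needed too, so the user lookup can resolve the login name.
# 'OU=Users' here is an assumption -- use the container your users actually live in.
print(expand_dn_patterns("uid=%s,OU=Users,OU=Root,DC=int,DC=domain,DC=com",
                         "Aditya.Tiwari"))
# -> ['uid=Aditya.Tiwari,OU=Users,OU=Root,DC=int,DC=domain,DC=com']
```

If the expanded user DN does not exist in the directory, the search returns zero entries, which matches the "Expected exactly one user result ... but got 0" log line above.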

Related

HBase Storage Handler: UnknownProtocolException: No coprocessor found for name AuthenticationService hbase:meta

I am receiving this error with the HBase storage handler in Hive when I run a query in a Kerberized environment, on HBase 1.5:
Caused by: org.apache.hadoop.hbase.exceptions.UnknownProtocolException: org.apache.hadoop.hbase.exceptions.UnknownProtocolException:
No registered coprocessor service found for name AuthenticationService in region hbase:meta,,1
at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8499)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2282)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2264)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36808)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2399)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:311)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:291)
The important part being:
No registered coprocessor service found for name AuthenticationService
in region hbase:meta,,1
I did some reading and learned that AuthenticationService is provided by the TokenProvider coprocessor.
In hbase-site.xml, ensure these options are configured:
hadoop.security.authentication
hbase.coprocessor.master.classes
hbase.coprocessor.region.classes
Ensure values are configured as follows:
<property>
<name>hadoop.security.authentication</name>
<value>kerberos</value>
</property>
<property>
<name>hbase.coprocessor.master.classes</name>
<value>org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
<property>
<name>hbase.coprocessor.region.classes</name>
<value>org.apache.hadoop.hbase.security.token.TokenProvider,org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
Note:
In older versions of HBase the setting hbase.coprocessor.regionserver.classes was used; make sure you are using the correct one: hbase.coprocessor.region.classes
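To double-check a cluster quickly, one can parse hbase-site.xml and compare it against the values above. This is just a sanity-check sketch (the REQUIRED dictionary mirrors the three properties listed above, nothing more):

```python
import xml.etree.ElementTree as ET

# Expected values, mirroring the three properties listed above.
REQUIRED = {
    "hadoop.security.authentication": "kerberos",
    "hbase.coprocessor.master.classes":
        "org.apache.hadoop.hbase.security.access.AccessController",
    "hbase.coprocessor.region.classes":
        "org.apache.hadoop.hbase.security.token.TokenProvider,"
        "org.apache.hadoop.hbase.security.access.AccessController",
}

def missing_or_wrong(xml_text):
    """Return the property names whose value is absent or unexpected."""
    conf = {p.findtext("name"): p.findtext("value")
            for p in ET.fromstring(xml_text).iter("property")}
    return [name for name, want in REQUIRED.items() if conf.get(name) != want]
```

Feed it the contents of hbase-site.xml; an empty list means all three settings are in place.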

Apache hive-metastore standalone showing error "Kerberos principal should have 3 parts:"

I am deploying hive-metastore-3.0.0 with Kerberos on DC/OS. I have generated the principal and keytab correctly and verified them, but after providing the respective settings in metastore-site.xml the server still shows the error "Kerberos principal should have 3 parts:". By default it picks up the user ("nobody" or "root") that I run the service as, not the principal.
Request: please help; is there any additional property I have to set?
My metastore-site.xml is:
<property>
<name>hive.metastore.sasl.enabled</name>
<value>true</value>
<description>If true, the metastore thrift interface will be secured with SASL. Clients must authenticate with Kerberos.</description>
</property>
<property>
<name>hive.metastore.kerberos.keytab.file</name>
<value>hive-metastore.keytab</value>
<description>The path to the Kerberos Keytab file containing the metastore thrift server's service principal.</description>
</property>
<property>
<name>hive.metastore.kerberos.principal</name>
<value>hive-metastore/node-0-server.hive-metastore.autoip.dcos.thisdcos.directory@LOCAL</value>
<description>The service principal for the metastore thrift server. The special string _HOST will be replaced automatically with the correct host name.</description>
</property>
<property>
<name>hive.metastore.authentication</name>
<value>KERBEROS</value>
<description>authentication type</description>
</property>
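For context on the error message itself: a service principal has the form primary/instance@REALM, i.e. three parts, while a plain OS user like root has only one. A minimal sketch of that check (my own simplification, not the Hadoop code; the hostname in the example is shortened):

```python
def principal_parts(principal):
    """Split a Kerberos principal of the form primary/instance@REALM."""
    name, _, realm = principal.partition("@")
    primary, _, instance = name.partition("/")
    return [part for part in (primary, instance, realm) if part]

# A well-formed service principal has three parts (note the '@' before the realm).
print(principal_parts("hive-metastore/node-0-server.example.com@LOCAL"))
# -> ['hive-metastore', 'node-0-server.example.com', 'LOCAL']

# A bare OS user such as 'root' has only one part, which is exactly what
# triggers "Kerberos principal should have 3 parts:" when the configured
# principal is not picked up and the process falls back to the service user.
print(len(principal_parts("root")))  # -> 1
```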

Configuration: Hiveserver2 & Beeline

I am trying to connect Beeline with HiveServer2 and I am getting the alert below. I need help connecting Beeline with HiveServer2.
[hdpsysuser@hdpmaster bin]$ beeline
which: no hbase in (/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:/home/hdpuser/.local/bin:/home/hdpuser/bin:/home/hdpsysuser/.local/bin:/home/hdpsysuser/bin:/usr/hadoopsw/hadoop-2.7.3/sbin:/usr/hadoopsw/hadoop-2.7.3/bin:/usr/hadoopsw/hive/bin:/usr/hadoopsw/db-derby-10.13.1.1-bin/bin)
Beeline version 2.1.1 by Apache Hive
beeline> show tables;
No current connection
beeline> !connect jdbc:hive2://hdpmaster:10000
Connecting to jdbc:hive2://hdpmaster:10000
Enter username for jdbc:hive2://hdpmaster:10000: hdpsysuser
Enter password for jdbc:hive2://hdpmaster:10000: **********
17/05/09 01:51:20 [main]: WARN jdbc.HiveConnection: Failed to connect to
hdpmaster:10000
Error: Could not open client transport with JDBC Uri:
jdbc:hive2://hdpmaster:10000: Failed to open new session: java.lang.RuntimeException:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: hdpsysuser is not allowed to impersonate hdpsysuser (state=08S01,code=0)
Add the below property to hive-site.xml in your Hive conf directory:
<property>
<name>hive.server2.enable.doAs</name>
<value>true</value>
</property>
Also, if you want user ABC to be able to impersonate all users (*), add the below properties to your core-site.xml:
<property>
<name>hadoop.proxyuser.ABC.groups</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.ABC.hosts</name>
<value>*</value>
</property>
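The way Hadoop evaluates these proxyuser properties can be sketched roughly like this (a simplification of the real ProxyUsers check, not the actual implementation; ABC stands for the service user as above):

```python
# Rough sketch of Hadoop's proxyuser (impersonation) check: the service user may
# impersonate the end user only if core-site.xml allows it via
# hadoop.proxyuser.<serviceuser>.hosts and .groups, where '*' matches anything.
def can_impersonate(conf, service_user, client_host, client_groups):
    hosts = conf.get("hadoop.proxyuser.%s.hosts" % service_user, "")
    groups = conf.get("hadoop.proxyuser.%s.groups" % service_user, "")
    host_ok = hosts == "*" or client_host in hosts.split(",")
    group_ok = groups == "*" or bool(set(client_groups) & set(groups.split(",")))
    return host_ok and group_ok

conf = {"hadoop.proxyuser.ABC.hosts": "*", "hadoop.proxyuser.ABC.groups": "*"}
print(can_impersonate(conf, "ABC", "hdpmaster", ["hdpsysuser"]))  # -> True
# With no proxyuser entries at all, impersonation is denied -- which is the
# "is not allowed to impersonate" error above.
print(can_impersonate({}, "ABC", "hdpmaster", ["hdpsysuser"]))  # -> False
```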

Hue connecting to Hive gives an error

Failed to open new session: java.lang.RuntimeException:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
User: hadoop is not allowed to impersonate cheng
The user hadoop is the user my Hadoop installation runs as, and cheng is an Ubuntu user.
I already have the following configuration in my core-site.xml:
<property>
<name>hadoop.proxyuser.hive.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hive.groups</name>
<value>*</value>
</property>
The hive user did not exist before, so I changed hadoop.proxyuser.hive.groups to hadoop.proxyuser.hadoop.groups (and so on), and set the Hue user in the Hue config hue.ini.
That solved the problem.

Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient

I just installed Hive and MySQL and copied the MySQL connector JAR to the HIVE_HOME/lib folder, but when I try show databases and create table commands at the hive> prompt, I get the error below:
create database saty;
FAILED: Error in metadata: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
and my hive-site.xml is:
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost:3306/hadoop?CreateDatabaseIfNotExist=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
<description>location of default database for the warehouse</description>
</property>
I don't have a directory called /user/hive/warehouse in my file system, so I created the path with mkdir and tried again after a reboot, but I am still getting the error.
Try specifying these two properties:
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>username</value>
<description>Username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>password</value>
<description>Password to use against metastore database</description>
</property>
Similarly, the username of your MySQL login should have permission to access the database specified in the JDBC connection string. This can be achieved with the following command:
GRANT ALL ON Databasename.* TO 'username'@'%' IDENTIFIED BY 'password';
The answer is located at http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH5/5.0/CDH5-Installation-Guide/cdh5ig_hive_schema_tool.html
To suppress the schema check and allow the metastore to implicitly modify the schema, you need to set the hive.metastore.schema.verification configuration property to false in hive-site.xml.
Rerun Hive with -hiveconf hive.root.logger=WARN,console, then look for the detailed reason why the Hive metastore client could not be instantiated.
The problem I met was a wrong MySQL configuration; the error message was "Binary logging not possible. Message: Transaction level 'READ-COMMITTED' in InnoDB is not safe for binlog mode 'STATEMENT'". When I changed binlog_format from "statement" to "mixed", the Hive metastore client instantiated successfully.
Hope it works for you.
I had a similar issue; the cause in my case was SELinux, which prevented Postgres from running properly.
I inserted the following line as the first line of /etc/rc3.d/S64postgresql:
echo 0 > /selinux/enforce # Disable selinux temporarily
and restarted the node running the Hive metastore.
So generally you can check two things:
Check whether the DB for the metastore is running properly
Whether the user/password from the properties are correct
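For the first check, here is a tiny sketch that simply probes the database port before digging into Hive itself (host and port are whatever your ConnectionURL points at, e.g. localhost:3306 for the MySQL setup above):

```python
import socket

def db_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to the database port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A False result means the metastore database is down or unreachable, so no hive-site.xml change will help until that is fixed.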