We have put these entries in hive-site.xml:
hive.server2.authentication : KERBEROS
hive.server2.authentication.kerberos.keytab : /tmp/hive.keytab
hive.server2.authentication.kerberos.principal : hive/<FQDN of the hive VM>@xxxxxxxx.COM
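For reference, a Kerberos service principal has three parts: service/host@REALM. One quick check (a sketch; the keytab path and placeholder names are the ones above) is to list the keytab and confirm each entry is fully qualified:
klist -kt /tmp/hive.keytab
# Each entry should look like hive/<FQDN of the hive VM>@xxxxxxxx.COM;
# a bare "hive" with no host/realm part would explain the error shown below.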
Using the kinit command on the Hive VM, we have verified that the Kerberos principal and the keytab file are valid:
kinit -t FILE:/tmp/hive.keytab -k hive/<FQDN of the hive VM>@xxxxxxxx.COM
Then, if we run:
klist
it shows the same principal as the default principal in the ticket cache.
But when we try to start HiveServer2 using:
sudo service hive-server2 start
it throws this exception:
Starting HiveServer2
javax.security.auth.login.LoginException: Kerberos principal should have 3 parts: hive
at org.apache.hive.service.auth.HiveAuthFactory.getAuthTransFactory(HiveAuthFactory.java:127)
at org.apache.hive.service.cli.thrift.ThriftCLIService.run(ThriftCLIService.java:505)
at java.lang.Thread.run(Thread.java:679)
When we try to start the service (using ./hiveserver2) as any other logged-in user, say User123, it throws the same exception, but naming that user:
Starting HiveServer2
javax.security.auth.login.LoginException: Kerberos principal should have 3 parts: User123
at org.apache.hive.service.auth.HiveAuthFactory.getAuthTransFactory(HiveAuthFactory.java:127)
at org.apache.hive.service.cli.thrift.ThriftCLIService.run(ThriftCLIService.java:505)
at java.lang.Thread.run(Thread.java:679)
Shouldn't the Kerberos principal be picked up from hive-site.xml rather than from the login user? Are we missing something?
--
I have created a principal hive/<FQDN of the hive VM>@xxxxxxxx.COM in advance and created a keytab file for it.
We are on CDH 4.7 (not installed using CM), OEL6, and Kerberos 5.
Kerberos security should be configured for HDFS and MapReduce too, not just Hive.
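A quick way to confirm that on a node (a sketch; the path is the typical CDH client-configuration location and may differ on your cluster):
grep -A 1 hadoop.security.authentication /etc/hadoop/conf/core-site.xml
# Expected output when Hadoop security is enabled:
#   <name>hadoop.security.authentication</name>
#   <value>kerberos</value>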
Related
I got the error "Credential cache is empty" during ODBC Hive tests. The full error detail:
ODBC Hive - Test Results
[Cloudera][Hardy] (34) Error from server: SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Credential cache is empty).
Do you have any experience with this?
I tested different settings of MIT Kerberos on Windows, e.g.:
generated a Kerberos ticket: kinit.exe -k -t app_store.keytab app_store@HW.PROD.BDP
checked the Kerberos tickets in the cache: klist.exe
set KRB5CCNAME=C:\cache\krb5cache and KRB5_CONFIG=C:\ProgramData\MIT\Kerberos5\krb5.ini
I see a few possible issues:
You have to check that krb5cache is a file (not a directory); this is an important point.
The path to the cache has to be different for each user, so use KRB5CCNAME=%USERPROFILE%\krb5cache for this setting.
You have to generate a Kerberos ticket before running the ODBC test, e.g.:
kinit.exe -k -t "c:\Apps\MIT\Kerberos\store.keytab" store@HW.PROD.BDP
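Putting those points together, one working order of operations looks roughly like this (a sketch for Windows cmd; the paths, keytab, and principal are the examples above):
set KRB5_CONFIG=C:\ProgramData\MIT\Kerberos5\krb5.ini
set KRB5CCNAME=%USERPROFILE%\krb5cache
rem KRB5CCNAME must point to a file; delete any directory named krb5cache first.
kinit.exe -k -t "c:\Apps\MIT\Kerberos\store.keytab" store@HW.PROD.BDP
klist.exe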
Within a bash script, I am invoking the yarn application command in order to get the applications currently running on a Cloudera Hadoop cluster secured by Kerberos. If my application is not running, it needs to be restarted:
spark_rtp_app_array=( $(yarn application --list -appTypes SPARK -appStates ACCEPTED,RUNNING | awk -F "\t" ' /'my_user'/ && /'my_app'/ {print $1}') )
Whenever the Kerberos ticket has expired, I need to invoke the kinit command to renew it before calling yarn application --list:
kinit -kt my_keytab_file.keytab my_kerberos_user
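Inside the script this can be guarded so that kinit only runs when the cache no longer holds a valid ticket; klist -s is silent and returns a non-zero exit status in that case (a sketch using the same placeholder keytab and principal):
if ! klist -s; then
    kinit -kt my_keytab_file.keytab my_kerberos_user
fi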
Otherwise, I can end up with an authentication error that repeats indefinitely with the following traces:
19/02/13 15:00:22 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
19/02/13 15:00:22 WARN security.UserGroupInformation: PriviledgedActionException as:my_kerberos_user (auth:KERBEROS) cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
[...]
Is there any way of setting a maximum number of connection retries to YARN?
The bash script is executed from a cron task, so it must not hang under any circumstances.
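One script-level safeguard, independent of any YARN retry setting (my suggestion, not part of the original setup): wrap the call in coreutils timeout so the cron job can never block indefinitely:
# Give the listing two minutes, then fail instead of hanging.
timeout 120 yarn application --list -appTypes SPARK -appStates ACCEPTED,RUNNING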
I am still seeing the following exception while trying to access SQL Server using Kerberos. What am I missing?
Connecting to jdbc:sqlserver://SERVER:PORT;databaseName=DB_NAME;integratedSecurity=true;authenticationScheme=JavaKerberos;applicationName=GAA-MFI-Switches; using com.microsoft.sqlserver.jdbc.SQLServerDriver = USER
Integrated authentication failed. ClientConnectionId:4d83d195-c50c-404e-8bb0-39d90d1b9fda
Some notes:
I created my keytab file KEY_TAB.keytab
Confirmed that my user has permission to access the database through SSMS
Initialized the Kerberos credential cache like this:
kinit -k -t KEY_TAB.keytab USER@DOMAIN.COM
Ran klist and verified that I can see my principal there:
>klist
Ticket cache: FILE:/tmp/krb5cc_cdc104145_9Z6n4S
Default principal: USER#DOMAIN.COM
Valid starting       Expires              Service principal
12/01/2017 14:19:10  12/02/2017 00:19:10  krbtgt/DOMAIN.COM@DOMAIN.COM
        renew until 12/08/2017 14:19:10
12/01/2017 14:19:38  12/02/2017 00:19:10  MSSQLSvc/[PLACEHOLDER].com:1433@DOMAIN.COM
        renew until 12/08/2017 14:19:10
12/01/2017 14:19:48  12/02/2017 00:19:10  HTTP/[PLACEHOLDER].com@DOMAIN.COM
        renew until 12/08/2017 14:19:10
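One extra diagnostic that may help here (an assumption on my part, not something tried in the notes above): run the client JVM with the JDK's Kerberos tracing enabled and allow JGSS to use the ticket cache; the jar and class names are placeholders:
java -Dsun.security.krb5.debug=true \
     -Djavax.security.auth.useSubjectCredsOnly=false \
     -cp app.jar:mssql-jdbc.jar MyApp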
What am I missing?
My Hive server has both SSL and Kerberos enabled. But when I try to connect to HiveServer2 via Beeline using the following command:
!connect jdbc:hive2://<hostnameOfServer>:10000/hive;ssl=true;sslTrustStore=<keystorePath>;trustStorePassword=<keystore password>;principal=<Kerberos hive principal> <database username> <database password> org.apache.hive.jdbc.HiveDriver
I get the following error:
Error: Could not open client transport with JDBC Uri: jdbc:hive2://<hostnameOfServer>:10000/hive;ssl=true;sslTrustStore=<keystorePath>;trustStorePassword=<keystore password>;principal=<Kerberos hive principal> <database username> <database password> org.apache.hive.jdbc.HiveDriver: Invalid status 21 (state=08S01,code=0)
I also tried the following URL in Beeline:
jdbc:hive2://<hostnameOfServer>:10000/hive;principal=<Kerberos hive principal>?transportMode=https;httpPath=cliservice;auth=kerberos;sasl.qop=auth
But I got the same error.
Are SSL and Kerberos compatible with each other?
Yes, they are compatible as of Hive 2.0.0. See the JIRA issue below for more information:
https://issues.apache.org/jira/browse/HIVE-14019
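For reference, a URL combining both over the binary transport would look roughly like this (a sketch; every angle-bracketed value is a placeholder):
beeline -u "jdbc:hive2://<hostnameOfServer>:10000/hive;ssl=true;sslTrustStore=<keystorePath>;trustStorePassword=<keystore password>;principal=hive/<hostnameOfServer>@<REALM>"
Note that with a Kerberos principal in the URL the ticket cache is used for authentication, so no database username or password is passed.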
I'm trying to connect a Spark application to HBase with Kerberos enabled. The Spark version is 1.5.0 (CDH 5.5.2), and it runs in YARN cluster mode.
When the HBaseContext is initialized, it throws this error:
ERROR ipc.AbstractRpcClient: SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
I have tried to do the authentication in the code by adding:
UserGroupInformation.setConfiguration(config)
UserGroupInformation.loginUserFromKeytab(principalName, keytabFilename)
I distribute the keytab file with the --files option in spark-submit.
Now, the error is:
java.io.IOException: Login failure for usercomp#COMPANY.CORP from keytab krb5.usercomp.keytab: javax.security.auth.login.LoginException: Unable to obtain password from user
...
Caused by: javax.security.auth.login.LoginException: Unable to obtain password from user
at com.sun.security.auth.module.Krb5LoginModule.promptForPass(Krb5LoginModule.java:856)
Is this the right way to connect to Kerberized HBase from a Spark app?
Please check the example configuration below in case you are missing anything, such as hadoop.security.authentication:
import org.apache.hadoop.hbase.HBaseConfiguration

val conf = HBaseConfiguration.create()
conf.set("hbase.zookeeper.quorum", "list of ip's")
conf.set("hbase.zookeeper.property.clientPort", "2181")
conf.set("hbase.master", "masterIP:60000")
// Tell the Hadoop security layer to authenticate with Kerberos.
conf.set("hadoop.security.authentication", "kerberos")
Actually, try putting your hbase-site.xml directly in the SPARK_CONF directory of your edge node (it should be something like /etc/spark/conf or /etc/spark2/conf).
You can also use loginUserFromKeytabAndReturnUGI and then run the HBase calls inside ugi.doAs.
Or you could add the HBase classpath to SPARK_DIST_CLASSPATH.
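Putting the pieces together, a submit command along these lines often sidesteps the keytab problem entirely, because YARN handles the login and ticket renewal (a sketch; the file names and principal are the placeholders from the question, and --principal/--keytab are available for YARN mode in Spark 1.5):
spark-submit \
  --master yarn --deploy-mode cluster \
  --principal usercomp@COMPANY.CORP \
  --keytab /path/to/krb5.usercomp.keytab \
  --files /etc/hbase/conf/hbase-site.xml \
  your-application.jar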