ODBC Hive, "Credential cache is empty"

I got the error "Credential cache is empty" during ODBC Hive tests. See the full error detail:
ODBC Hive - Test Results
[Cloudera][Hardy] (34) Error from server: SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Credential cache is empty).
Do you have any experience with this?
I tested different MIT Kerberos settings on Windows, e.g.:
generating a Kerberos ticket: kinit.exe -k -t app_store.keytab app_store@HW.PROD.BDP
checking the Kerberos tickets in the cache: klist.exe
setting KRB5CCNAME=C:\cache\krb5cache and KRB5_CONFIG=c:\ProgramData\MIT\Kerberos5\krb5.ini

I see a few possible issues:
Check that krb5cache is a file (not a directory); this is an important point.
The cache path has to be different for each user, so use KRB5CCNAME=%USERPROFILE%\krb5cache for this variable.
You have to generate a Kerberos ticket before running the ODBC test, e.g.:
kinit.exe -k -t "c:\Apps\MIT\Kerberos\store.keytab" store@HW.PROD.BDP
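Putting the three points together, a minimal sketch of the whole sequence in a Windows command prompt (paths and the principal are taken from the question and may differ in your environment):

```shell
:: Hypothetical paths/principal, based on the question above.
:: 1) Point MIT Kerberos at a per-user cache FILE and at krb5.ini:
set KRB5CCNAME=%USERPROFILE%\krb5cache
set KRB5_CONFIG=C:\ProgramData\MIT\Kerberos5\krb5.ini
:: 2) Obtain a ticket from the keytab BEFORE starting the ODBC test:
kinit.exe -k -t "C:\Apps\MIT\Kerberos\store.keytab" store@HW.PROD.BDP
:: 3) Confirm the ticket landed in the cache the ODBC driver will read:
klist.exe
```

The environment variables must be set in (or inherited by) the same session that runs the ODBC test, otherwise the driver will look at a different cache and still report it as empty.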

Related

yarn application command hangs due to absence of Kerberos ticket

Within a bash script, I am invoking yarn application command in order to get the current applications running on a Cloudera Hadoop cluster secured by Kerberos. In case my application is not running, it is necessary to restart it:
spark_rtp_app_array=( $(yarn application --list -appTypes SPARK -appStates ACCEPTED,RUNNING | awk -F "\t" ' /'my_user'/ && /'my_app'/ {print $1}') )
Whenever the Kerberos ticket has expired, I need to invoke the kinit command to renew it before calling yarn application --list:
kinit -kt my_keytab_file.keytab my_kerberos_user
Otherwise, I could end up with an authentication error that keeps repeating indefinitely with the following traces:
19/02/13 15:00:22 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
19/02/13 15:00:22 WARN security.UserGroupInformation: PriviledgedActionException as:my_kerberos_user (auth:KERBEROS) cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
[...]
[...]
Is there any way of setting a maximum number of connection retries to YARN?
The bash script is being executed in a cron task, so it should not be hung in any way.
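One way to keep the cron job from hanging, independent of any YARN setting, is to bound the calls in the script itself. A sketch (the retry limit, timeout, and the commented-out cluster commands are assumptions, not YARN configuration):

```shell
# Sketch: retry a command a bounded number of times so a cron job
# cannot hang indefinitely. Limits and wrapped commands are assumptions.
run_with_retries() {
  max_tries=$1; shift
  n=1
  until "$@"; do
    [ "$n" -ge "$max_tries" ] && return 1
    n=$((n + 1))
    sleep 1
  done
}

# Intended use (commented out; requires a Kerberized cluster):
#   klist -s || kinit -kt my_keytab_file.keytab my_kerberos_user
#   run_with_retries 3 timeout 60 yarn application --list -appTypes SPARK
```

klist -s exits non-zero when there is no valid ticket, so the kinit only runs when renewal is actually needed; the timeout around the yarn call guarantees an upper bound even if the IPC client keeps retrying internally.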

mbsync authentication failed

I was able to configure mbsync and mu4e to use my Gmail account (so far everything works fine). I am now in the process of using mu4e-context to control multiple accounts.
I cannot retrieve emails from my openmailbox account; instead, I receive this error:
Reading configuration file .mbsyncrc
Channel ombx
Opening master ombx-remote...
Resolving imap.ombx.io... ok
Connecting to imap.ombx.io (*.*.10*.16*:*9*)...
Opening slave ombx-local...
Connection is now encrypted
Logging in...
IMAP command 'LOGIN <user> <pass>' returned an error: NO [AUTHENTICATIONFAILED] Authentication failed.
In other posts I've seen people suggesting AuthMechs LOGIN or PLAIN, but mbsync doesn't recognize the command. Here is my .mbsyncrc file:
IMAPAccount openmailbox
Host imap.ombx.io
User user@openmailbox.org
UseIMAPS yes
# AuthMechs LOGIN
RequireSSl yes
PassCmd "echo ${PASSWORD:-$(gpg2 --no-tty -qd ~/.authinfo.gpg | sed -n 's,^machine imap.ombx.io .*password \\([^ ]*\\).*,\\1,p')}"
IMAPStore ombx-remote
Account openmailbox
MaildirStore ombx-local
Path ~/Mail/user@openmailbox.org/
Inbox ~/Mail/user@openmailbox.org/Inbox/
Channel ombx
Master :ombx-remote:
Slave :ombx-local:
# Exclude everything under the internal [Gmail] folder, except the interesting folders
Patterns *
Create Slave
Expunge Both
Sync All
SyncState *
I am using Linux Mint and my isync version is 1.1.2.
Thanks in advance for any help.
EDIT: I have run it with a debug option and upgraded isync to version 1.2.1.
This is what the debug returned:
Reading configuration file .mbsyncrc
Channel ombx
Opening master store ombx-remote...
Resolving imap.ombx.io... ok
Connecting to imap.ombx.io (*.*.10*.16*:*9*)...
Opening slave store ombx-local...
pattern '*' (effective '*'): Path, no INBOX
got mailbox list from slave:
Connection is now encrypted
* OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE AUTH=PLAIN AUTH=LOGIN] Openmailbox is ready to handle your requests.
Logging in...
Authenticating with SASL mechanism PLAIN...
>>> 1 AUTHENTICATE PLAIN <authdata>
1 NO [AUTHENTICATIONFAILED] Authentication failed.
IMAP command 'AUTHENTICATE PLAIN <authdata>' returned an error: NO [AUTHENTICATIONFAILED] Authentication failed.
My .mbsyncrc file now contains these options instead:
SSLType IMAPS
SSLVersions TLSv1.2
AuthMechs PLAIN
In the end, the solution was to use the correct password. Since openmailbox uses an application password for third-party e-mail clients, I was using the wrong (original) password instead of the application password.
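When an AUTHENTICATIONFAILED persists, it can help to take mbsync out of the picture and attempt the login by hand. A sketch using openssl s_client (assuming the standard IMAPS port 993; the account name and application password are placeholders):

```shell
# Manual IMAP login test against the server from the log above.
# The greeting should advertise AUTH=PLAIN / AUTH=LOGIN as in the debug output.
openssl s_client -connect imap.ombx.io:993 -quiet
# Then type, substituting real values for the placeholders:
#   a1 LOGIN user@openmailbox.org app-password
# "a1 OK" means the credentials are accepted; "a1 NO" reproduces the
# failure outside mbsync, pointing at the credentials rather than the config.
```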

Are SSL and Kerberos compatible with each other on Hive Server?

My Hive server is SSL- as well as Kerberos-enabled, but when I try to connect to hiveserver2 via beeline using the following command:
!connect jdbc:hive2://<hostnameOfServer>:10000/hive;ssl=true;sslTrustStore=<keystorePath>;trustStorePassword=<keystore password>;principal=<Kerberos hive principal> <database username> <database password> org.apache.hive.jdbc.HiveDriver
I get following error :
Error: Could not open client transport with JDBC Uri: jdbc:hive2://hostnameOfServer:10000/hive;ssl=true;sslTrustStore=keystorePath;trustStorePassword=passwordfor keystore;principal=Kerberos hive principal database username database password org.apache.hive.jdbc.HiveDriver: Invalid status 21 (state=08S01,code=0)
I also tried the following command on beeline:
jdbc:hive2://<hostnameOfServer>:10000/hive;principal=<Kerberos hive principal>?transportMode=https;httpPath=cliservice;auth=kerberos;sasl.qop=auth
But I got the same error.
Are SSL and Kerberos compatible with each other?
Yes, they are compatible as of Hive 2.0.0. Check the JIRA task below for more information:
https://issues.apache.org/jira/browse/HIVE-14019
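As a sketch of working syntax (hostname, truststore path, password, and principal below are placeholders): SSL and Kerberos properties go into a single semicolon-separated JDBC URL, which must be quoted so the shell does not split it, and with Kerberos there is no database username/password to pass:

```shell
# Hypothetical values; quote the URL so ';' is not interpreted by the shell.
# With principal=... beeline authenticates via the Kerberos ticket cache,
# so run kinit first and omit -n/-p.
beeline -u "jdbc:hive2://hostnameOfServer:10000/hive;ssl=true;sslTrustStore=/path/to/truststore.jks;trustStorePassword=changeit;principal=hive/hostnameOfServer@EXAMPLE.COM"
```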

How to connect spark application to secure HBase with Kerberos

I'm trying to connect a Spark application to HBase with Kerberos enabled. The Spark version is 1.5.0 on CDH 5.5.2, and it is executed in yarn-cluster mode.
When HbaseContext is initialized, it throws this error:
ERROR ipc.AbstractRpcClient: SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
I have tried to do the authentication in the code, adding:
UserGroupInformation.setConfiguration(config)
UserGroupInformation.loginUserFromKeytab(principalName, keytabFilename)
I distribute the keytab file with --files option in spark-submit.
Now, the error is:
java.io.IOException: Login failure for usercomp@COMPANY.CORP from keytab krb5.usercomp.keytab: javax.security.auth.login.LoginException: Unable to obtain password from user
...
Caused by: javax.security.auth.login.LoginException: Unable to obtain password from user
at com.sun.security.auth.module.Krb5LoginModule.promptForPass(Krb5LoginModule.java:856)
Is this the way to connect to Kerberized HBase from a Spark app?
Please see the example configuration below in case you are missing anything, such as hadoop.security.authentication:
val conf = HBaseConfiguration.create()
conf.set("hbase.zookeeper.quorum", "<list of IPs>")
conf.set("hbase.zookeeper.property.clientPort", "2181")
conf.set("hbase.master", "masterIP:60000")
conf.set("hadoop.security.authentication", "kerberos")
Actually, try putting your hbase-site.xml directly in the SPARK_CONF directory of your edge node (it should be something like /etc/spark/conf or /etc/spark2/conf).
You can use loginUserFromKeytabAndReturnUGI and then ugi.doAs.
Or you could add your HBase classpath to SPARK_DIST_CLASSPATH.
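An alternative to calling loginUserFromKeytab in application code is to let spark-submit handle the Kerberos login and ticket renewal on YARN via its built-in options. A sketch with placeholder class, jar, and keytab path (the keytab must be readable on the submitting host, and the keytab should then not also be passed via --files):

```shell
# Hypothetical application and paths; --principal/--keytab make Spark on
# YARN obtain and periodically renew Kerberos tickets for the job.
spark-submit \
  --master yarn --deploy-mode cluster \
  --principal usercomp@COMPANY.CORP \
  --keytab /path/to/krb5.usercomp.keytab \
  --class com.example.MyHBaseApp \
  my-app.jar
```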

OpenDJ multi-master replication fails (hangs at "Initializing registration information" step): javax.naming.AuthenticationException

I am using OpenDJ-2.4.6 along with Oracle JDK 7.80 and I want to run Multi-master replication on 2 of my servers, the OS for these servers is Amazon Linux.
The OpenDJ setup runs perfectly fine; I can start the server too without any errors.
The problem occurs when I run the "dsreplication" script as follows:
./dsreplication enable --host1 server1.example.com --port1 4444 --bindDN1 "cn=Directory Manager" --bindPassword1 "Passw0rd" --replicationPort1 1388 --host2 server2.example.com --port2 4444 --bindDN2 "cn=Directory Manager" --bindPassword2 "Passw0rd" --replicationPort2 1388 --adminUID admin --adminPassword "Passw0rd" --baseDN "dc=example,dc=com"
the script hangs on the following step:
Initializing registration information on server server2.example.com:4444 with the contents of server server1.example.com:4444 .....
And on checking the logs, there is no error reported there.
But, when I run the following command:
./dsreplication status -h localhost -p 4444 --adminUID admin --adminPassword "Passw0rd" -X
it throws the following error:
The displayed information might not be complete because the following errors were encountered reading the configuration of the existing servers: Error on server2.example.com:4444: An error occurred connecting to the server. Details: javax.naming.AuthenticationException: [LDAP: error code 49 - Invalid Credentials] Error on server:4444: An error occurred connecting to the server. Details: javax.naming.AuthenticationException: [LDAP: error code 49 - Invalid Credentials]
Please help me.
Thanks in advance.
The error could not be more explicit: "Invalid Credentials" on server 2.
Check that the bindDN and bindPassword are valid against server 2.
When doing replication with OpenDJ, the hostnames must be resolvable and addressable from both machines. Have you checked that this is the case with your Amazon Linux servers?
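Both checks can be done from the command line. A sketch using the hostnames and credentials from the question (the LDAP port 1389 is an assumption, since only the admin port 4444 appears above; adjust to your listener port):

```shell
# 1) Verify the bind credentials directly against server 2; LDAP error 49
#    here confirms the DN/password is wrong on that server specifically.
ldapsearch -H ldap://server2.example.com:1389 \
  -D "cn=Directory Manager" -w "Passw0rd" -b "" -s base "(objectclass=*)"

# 2) From EACH machine, verify the other host resolves and the admin
#    port is reachable:
getent hosts server1.example.com server2.example.com
nc -vz server2.example.com 4444
```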