I have a kerberized Hortonworks Hadoop cluster running. Beeline works fine, but when I launch hive it fails with the following error:
Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: GSS initiate failed
[root@hdpm1 ~]# su - hive
[hive@hdpm1 ~]$ hive
Before running hive (or beeline) you must get a TGT using kinit.
Example for the hive user using the service keytab:
kinit -kt <path_to_keytab> <principal_name>
kinit -kt /etc/security/keytabs/hive.service.keytab hive/<host>
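For example, the full sequence for the session shown above would be roughly as follows (a sketch; the keytab path and principal are the HDP defaults and should be verified on your cluster):
su - hive
kinit -kt /etc/security/keytabs/hive.service.keytab hive/<host>
klist   # confirm the TGT is now in the credential cache
hive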
I am currently using the Trivy scanner to scan images in the pipeline. This has worked very well until now, but recently it has become necessary to scan an image from an internal OpenShift registry.
Unfortunately, I do not know how to authenticate Trivy against the internal registry. The documentation does not give any information regarding OpenShift; it describes Azure and AWS as well as GitHub.
My scan command currently looks like this in groovy:
trivy image --ignore-unfixed --format template --template "path for output" --output trivy_image_report.html --skip-update --offline-scan $image
Output:
INFO Vulnerability scanning is enabled
INFO Secret scanning is enabled
INFO If your scanning is slow, please try '--security-checks vuln' to disable secret scanning
INFO Please see also https://aquasecurity.github.io/trivy/v0.31.3/docs/secret/scanning/#recommendation for faster secret detection
FATAL image scan error: scan error: unable to initialize a scanner: unable to initialize a docker scanner: 4 errors occurred:
* unable to inspect the image (openshiftregistry/namespace/imagestream:tag): Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
* unable to initialize Podman client: no podman socket found: stat podman/podman.sock: no such file or directory
* containerd socket not found: /run/containerd/containerd.sock
* GET https://openshiftregistry/v2/namespace/imagestream/manifests/tag: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:namespace/imagestream Type:repository]]
The image is stored within an ImageStream in OpenShift. Is there something I can add to the trivy command to authenticate the service against the registry, or is there something else that has to be done before I use the command in groovy?
Thanks for the help
Thanks to Will Gordon in the comments. This link was very helpful: Access the Registry (OpenShift).
These lines helped me (more information can be found on the linked site):
oc login -u kubeadmin -p <password_from_install_log> https://api-int.<cluster_name>.<base_domain>:6443
And
podman login -u kubeadmin -p $(oc whoami -t) image-registry.openshift-image-registry.svc:5000
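If you prefer to let Trivy authenticate on its own instead of relying on a prior podman/docker login, Trivy can also read registry credentials from environment variables; a rough sketch, assuming the kubeadmin user and the token from oc whoami -t are accepted by the internal registry:
export TRIVY_USERNAME=kubeadmin
export TRIVY_PASSWORD=$(oc whoami -t)
trivy image --ignore-unfixed --format template --template "path for output" --output trivy_image_report.html --skip-update --offline-scan $image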
Thanks
I am trying to connect to Hive with Kerberos authentication using beeline. I have initialized a ticket with
kinit -V --kdc-hostname=<HOSTNAME> -kt /etc/krb5.keytab <USER@REALM>
and I can see it is active when I run klist but when I try to connect to Hive, I get the well known error message:
SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
I changed the log4j level to debug, and found the following:
DEBUG HiveAuthFactory: Cannot find private method "getKeytab" in class:org.apache.hadoop.security.UserGroupInformation
and after this, beeline is trying to use my unix username to authenticate, which is obviously failing. So I think the problem is that beeline doesn't find my keytab file.
Most probably the problem is with the beeline command.
Make sure you provide the authentication parameter correctly and put double quotes around the connection string.
beeline -u "jdbc:hive2://HOSTNAME:10000/default;principal=hive/hostname#Example.com"
Also check whether your Kerberos principal has permission to access Hive.
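If you are not sure which principal your keytab actually holds, list it first and use exactly that name with kinit before connecting (a sketch; /etc/krb5.keytab is the path from the question):
klist -kt /etc/krb5.keytab
kinit -kt /etc/krb5.keytab <principal_from_keytab>
beeline -u "jdbc:hive2://HOSTNAME:10000/default;principal=hive/hostname@Example.com"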
How do I set up the Spark Thrift Server on EMR? I am trying to make a JDBC/ODBC connection to EMR using the Spark Thrift Server, for example:
beeline> !connect jdbc:hive2://10.253.3.5:10015
We execute the following to restart HiveServer2:
sudo stop hive-server2
sudo stop hive-hcatalog-server
sudo start hive-hcatalog-server
sudo start hive-server2
I am not sure which services need to be restarted for the Spark Thrift Server on AWS EMR, or how to set up the user ID and password.
We need to start the Spark Thrift Server by executing the following on EMR:
sudo /usr/lib/spark/sbin/start-thriftserver.sh --master yarn-client
The default port is 10001.
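If you need a different port (for example the 10015 used in the question), it can presumably be overridden at startup via the standard HiveServer2 property; a sketch:
sudo /usr/lib/spark/sbin/start-thriftserver.sh --master yarn-client --hiveconf hive.server2.thrift.port=10015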
Test the connection as below:
/usr/lib/spark/bin/beeline -u 'jdbc:hive2://x.x.x.x:10001/default' -e "show databases;"
The Spark JDBC driver can be used to connect to the Thrift Server from any application.
I have installed Aerospike on my Mac by following these installation steps.
All the validations are working fine. I am able to connect to the cluster using the Chrome browser. Below is the screenshot.
I have also installed the AQL tools following the instructions here.
But I'm unable to connect to the local node from aql.
$ aql
2017-11-21 16:06:09 WARN Failed to connect to seed 127.0.0.1 3000.
AEROSPIKE_ERR_CONNECTION Bad file descriptor, 127.0.0.1:3000
Error -1: Failed to connect
$ asadm
Aerospike Interactive Shell, version 0.1.11
ERROR: Not able to connect any cluster.
Also, I have noticed the Java client is giving an error.
AerospikeClient client = new AerospikeClient("localhost", 3000);
When I changed localhost to the actual IP returned by vagrant ssh -c "ip addr" | grep 'global eth1', it works fine.
How to connect with aql using customer parameters? I want to pass ip address and port as parameters to aql. Any suggestions.
$ aql --help
https://www.aerospike.com/docs/tools/aql/index.html discusses all the various command-line options.
$ aql -h a.b.c.d -p 1234
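aql can also execute a single statement non-interactively with -c, which is handy for quick checks or scripts (the host and port below are placeholders):
aql -h a.b.c.d -p 1234 -c "show namespaces"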
There is another possibility: you may be using your own port instead of the default 3000, so when you try to connect to Aerospike, try running a command like: aql -p 4000
Hope this helps.
Seems like the port is not getting freed even after exiting the vagrant console.
Tried closing all the terminal windows and then starting again. But no luck.
Finally, restarting the system resolved the issue.
When opening a HiveServer2 connection using beeline, setting the Hive scratch dir is not working. I am running HDP 2.4 with hive.server2.enable.doAs enabled. When I execute
beeline -u "jdbc:hive2://localhost:10000/;principal=hive/_HOST#COMPANY.COM" \
--hiveconf hive.exec.scratchdir=/tmp/user/somedir
I get a Ranger security permission error when writing to /tmp/hive. The restricted properties don't contain hive.exec.scratchdir.
How can I configure/set/override this setting at runtime?
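One variant that may be worth trying (only a sketch of the standard HiveServer2 JDBC URL syntax, not a confirmed fix for the Ranger denial itself) is passing the property in the hive-conf part of the URL after the ? separator instead of via --hiveconf:
beeline -u "jdbc:hive2://localhost:10000/default;principal=hive/_HOST@COMPANY.COM?hive.exec.scratchdir=/tmp/user/somedir"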