Can't get Kerberos realm for hive jdbc - hive

I am using hive in Kerberos environment, and get the following issue:
beeline> !connect jdbc:hive2://xx.xx.xx.xx:10000/default;principal=hive/emr-header-1.xxx@EMR.46727.COM
Connecting to jdbc:hive2://xx.xx.xx.xx:10000/default;principal=hive/emr-header-1.xxx@EMR.46727.COM
21/01/12 23:00:35 [main]: WARN jdbc.HiveConnection: Failed to connect to xx.xx.xx.xx:10000
Error: Could not open client transport with JDBC Uri: jdbc:hive2://xx.xx.xx.xx:10000/default;principal=hive/emr-header-1.xx@EMR.46727.COM: Can't get Kerberos realm (state=08S01,code=0)
The special thing about my cluster is that krb5.conf is not at /etc/krb5.conf; it is in another location. I have already exported KRB5_CONFIG with the new path, but it still doesn't work. What do I need to do to use a custom krb5.conf? Thanks.

Finally found the solution: I can specify the custom krb5.conf via the Java system property java.security.krb5.conf, e.g. -Djava.security.krb5.conf=/etc/myconf/krb5.conf
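For example, one way to pass that property to beeline is through the client JVM options (a sketch; it assumes your beeline launcher honors HADOOP_CLIENT_OPTS, as the stock Hive/Hadoop wrapper scripts do):

export HADOOP_CLIENT_OPTS="-Djava.security.krb5.conf=/etc/myconf/krb5.conf"
beeline -u "jdbc:hive2://xx.xx.xx.xx:10000/default;principal=hive/emr-header-1.xxx@EMR.46727.COM"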

Sometimes
export KRB5_CONFIG=/etc/krb5/krb5.conf does not work!
In that case, create a soft link at the default krb5.conf location:
rm -f /etc/krb5.conf && ln -s /etc/krb5/krb5.conf /etc/krb5.conf
Here /etc/krb5/krb5.conf is my custom krb5.conf file.

Related

Does libgit2 support a ssh config host alias?

I am using an ssh host alias, as in:
.ssh/config
Host my-git-host
    HostName github.com
And command-line git works for push/pull/clone using my-git-host as a remote, etc.
But would this also work through libgit2 as-is?
I am using an application that uses libgit2, and it fails to push/pull with an error, as though it were trying to DNS-resolve my-git-host.
thx,
OK, I think I discovered that the application I'm using uses git2-rs, which uses libgit2 but perhaps not with ssh support.
thx
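If libgit2 ends up ignoring ~/.ssh/config, one workaround (a sketch; the remote name and repository path are placeholders) is to point the remote at the real host so no alias resolution is needed:

git remote set-url origin git@github.com:your-user/your-repo.git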

LDAP: Unable to start OpenLDAP on Windows

I watched YouTube videos as a reference to install OpenLDAP on Windows,
and I also followed the tutorial on zytrax.com.
C:\OpenLDAP>slaptest -f slapd.conf -F slapd.d
5c9eec00 using config directory slapd.d, error 0
config file testing succeeded
There is this part: "Conversion to slapd.d is trivial. After modifying the slapd.conf file as required simply create a new directory/folder called slapd.d. Open a command line (dos box for us oldies), navigate to c:\OpenLDAP (or wherever you put your installation) and enter:", which I don't understand. What do I need to configure in slapd.conf?
C:\OpenLDAP>slapd -d 8 -h "ldaps://localhost/ ldap://localhost/"
5c9ef038 OpenLDAP 2.4.42 Standalone LDAP Server (slapd)
5c9ef038 daemon: bind(2) failed errno=10013 (WSAEACCES)
5c9ef038 daemon: bind(3) failed errno=10013 (WSAEACCES)
5c9ef038 slapd stopped.
5c9ef038 connections_destroy: nothing to destroy.
How do I get my LDAP server to start?
I had the same issue; in my case, the ports were already in use by another service. Try specifying other ports when starting the slapd server:
slapd -d 8 -h "ldaps://localhost:6866/ ldap://localhost:3899/"
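To find out which process is already holding the default LDAP ports (a sketch; 389 and 636 assume the defaults implied by the ldap:// and ldaps:// URLs above), a quick check on Windows is:

netstat -ano | findstr ":389 :636"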

AEROSPIKE_ERR_CONNECTION Bad file descriptor, 127.0.0.1:3000; Not able to connect to local node from aql

I have installed Aerospike on my Mac by following these installation steps.
All the validations are working fine, and I am able to connect to the cluster from the Chrome browser.
I have also installed the AQL tools following the instructions here.
But I'm unable to connect to the local node from aql.
$ aql
2017-11-21 16:06:09 WARN Failed to connect to seed 127.0.0.1 3000.
AEROSPIKE_ERR_CONNECTION Bad file descriptor, 127.0.0.1:3000
Error -1: Failed to connect
$ asadm
Aerospike Interactive Shell, version 0.1.11
ERROR: Not able to connect any cluster.
Also, I have noticed the Java client is giving an error:
AerospikeClient client = new AerospikeClient("localhost", 3000);
When I changed localhost to the actual IP returned by vagrant ssh -c "ip addr" | grep 'global eth1', it works fine.
How do I connect with aql using custom parameters? I want to pass the IP address and port as parameters to aql. Any suggestions?
$ aql --help
https://www.aerospike.com/docs/tools/aql/index.html discusses all the various command-line options.
$ aql -h a.b.c.d -p 1234
There is another possibility: you are using your own port instead of the default 3000. In that case, try connecting to Aerospike with a command like: aql -p 4000
Hope this helps.
It seems the port was not freed even after exiting the vagrant console.
I tried closing all the terminal windows and starting again, but no luck.
Finally, restarting the system resolved the issue.
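Before resorting to a reboot, it can be worth checking which process is still holding the port (a sketch; it assumes lsof is available, as it is on macOS and most Linux distributions):

lsof -i :3000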

RabbitMQ 3.3.1 can not login with guest/guest

I have installed the latest version of RabbitMQ on a VPS Debian Linux box. I tried to log in with guest/guest but got a "login failed" message. I did a little research and found that, for security reasons, logging in remotely via guest/guest is prohibited.
I also tried enabling the guest user on this version for remote login by creating a rabbitmq.config file manually (because the installation didn't create one) and placing only the following entry in it:
[{rabbit, [{loopback_users, []}]}].
after restarting rabbitmq with the following commands:
invoke-rc.d rabbitmq-server stop
invoke-rc.d rabbitmq-server start
It still doesn't log me in with guest/guest. I also tried installing RabbitMQ on a Windows VPS and logging in via guest/guest on localhost, but again I got the same "login failed" message.
Please also point me to a source from which I could install an old version of RabbitMQ that still supports logging in remotely via guest/guest.
I had the same problem.
I installed RabbitMQ and enabled the web interface too, but I still couldn't sign in with any user I had newly created; this is because you need to be an administrator to access it.
Do not create any config file or mess with it.
This is what I did instead:
Add a new/fresh user, say user test and password test:
rabbitmqctl add_user test test
Give administrative access to the new user:
rabbitmqctl set_user_tags test administrator
Set permissions for the newly created user:
rabbitmqctl set_permissions -p / test ".*" ".*" ".*"
That's it, enjoy :)
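To confirm the new account took effect, you can query the broker with standard rabbitmqctl subcommands:

rabbitmqctl list_users
rabbitmqctl list_permissions -p /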
I tried the same configuration on Debian with the following steps:
Installed RabbitMQ.
Enabled the web-management plug-in (not strictly necessary).
When I tried to log in, I got the same error.
So I created a rabbitmq.config file (classic configuration format) inside the /etc/rabbitmq directory with the following content (note the final dot):
[{rabbit, [{loopback_users, []}]}].
Alternatively, one can create instead a rabbitmq.conf file (new configuration file) inside the same directory with the following content:
loopback_users = none
Then I executed the invoke-rc.d rabbitmq-server start command, and both the console and the Java client were able to connect using the guest/guest credentials.
So I think you have some other problem if this procedure doesn't work. For example, your RabbitMQ might be unable to read the configuration file if for some reason you have changed the RABBITMQ_CONFIG_FILE environment variable.
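One quick way to see which config file the broker actually read is its startup log, which prints the path (a sketch; the log location assumes a default Debian install and the default rabbit@<hostname> node name):

grep -i "config file" /var/log/rabbitmq/rabbit@$(hostname -s).log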
This is a new feature since version 3.3.0. You can only log in using guest/guest on localhost. To log in from other machines or via an IP address, you'll have to create users and assign them permissions. This can be done as follows:
rabbitmqctl add_user test test
rabbitmqctl set_user_tags test administrator
rabbitmqctl set_permissions -p / test ".*" ".*" ".*"
Adding the line below to the config file and restarting the server worked for me. Kindly try it in your setup.
loopback_users.guest = false
I got this line from the example RabbitMQ config file on GitHub, as linked here.
Note: check that your port is 15672 (version > 3.3) if 5672 doesn't work.
First of all, check the chosen answer above:
rabbitmqctl add_user test test
rabbitmqctl set_user_tags test administrator
rabbitmqctl set_permissions -p / test ".*" ".*" ".*"
and if the connection still doesn't work, check that your port is correct!
For me, this command works:
$ rabbitmqadmin -H 10.140.0.2 -P 15672 -u test -p test list vhosts
+------+----------+
| name | messages |
+------+----------+
| / | |
+------+----------+
For the complete list of ports, check this:
What ports does RabbitMQ use?
To verify your RabbitMQ server version, check this: Verify version of rabbitmq
P.S.
For me, after I created the "test" user and ran set_user_tags and set_permissions, I could not connect to RabbitMQ via port 5672, but I could connect via 15672.
However, port 15672 always gave me a "blank response", and my code stopped working.
So about 5 minutes later, I switched back to 5672 and everything worked!
Very weird problem. I have no time to dig deeper, so I wrote it down here for anyone who meets the same problem.
For other folks who use Ansible for RabbitMQ provisioning: what I was missing for the rabbitmq_user module was tags: administrator.
Here is my working Ansible configuration to recreate the "guest" user (for development environments only; don't do this in production):
- name: Create RabbitMQ user "guest"
  become: yes
  rabbitmq_user:
    user: guest
    password: guest
    vhost: /
    configure_priv: .*
    read_priv: .*
    write_priv: .*
    tags: administrator
    force: yes  # recreate existing user
    state: present
and I also had to set up a file /etc/rabbitmq/rabbitmq.config containing the following:
[{rabbit, [{loopback_users, []}]}].
in order to be able to log in as "guest"/"guest" from outside of localhost.
Create a rabbitmq.conf file with:
loopback_users = none
Dockerfile:
FROM rabbitmq:3.7-management
# RabbitMQ config
COPY rabbitmq.conf /etc/rabbitmq/rabbitmq.conf
# Install vim (to edit files inside the container)
RUN ["apt-get", "update"]
RUN ["apt-get", "-y", "install", "vim"]
# Enable RabbitMQ plugins
RUN rabbitmq-plugins enable --offline rabbitmq_mqtt rabbitmq_federation_management rabbitmq_stomp
Run:
$ docker build -t my-rabbitmq-image .
$ docker run -d --hostname my-rabbit --name some-rabbit -p 8080:15672 my-rabbitmq-image
Check that the rabbitmq.conf file has been copied correctly:
$ docker exec -it my_container_id /bin/bash
$ vim /etc/rabbitmq/rabbitmq.conf
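With the port mapping above, the management UI and its HTTP API are exposed on host port 8080, so a quick smoke test from the host (assuming the container from the docker run above and the default guest credentials) is:

$ curl -s -u guest:guest http://localhost:8080/api/overview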
I had the same problem. I tried what was suggested by Gas and ran "invoke-rc.d rabbitmq-server start", but it didn't start. I rebooted the server, and then the web UI worked with the guest user. Maybe after adding the rabbitmq.config file, something else also needed to be restarted.
I used RabbitMQ version 3.5.3.
One more thing to note: if you're using an AWS instance, you need to open inbound port 15672. (The port for RabbitMQ versions prior to 3.0 is 55672.)
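If you manage the security group from the AWS CLI, the inbound rule can be added like this (a sketch; the group ID and source CIDR are placeholders you must replace):

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 15672 --cidr 203.0.113.0/24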
Students and I stared at this problem for an hour. Be sure you've named your files correctly. In the /etc/rabbitmq directory there are two distinct files: /etc/rabbitmq/rabbitmq.config, which you should edit to set the loopback users as described, and another file called rabbitmq-env.conf. Many folks were using tab completion and just adding "ig", which isn't the right file. Double check!
Sometimes you don't need the trailing comma that is in the configuration file by default. If nothing else is configured below the rabbit tag, the broker will crash at startup with an entry like
{loopback_users, []} ,
left in place. I have spent many hours forgetting this and later removing the comma; the same applies to all other configuration entries, including SSL.
Try restarting your RabbitMQ and logging in again; that worked for me.
For a slightly different use case, but this might be useful for anyone accessing the API for monitoring purposes:
I can confirm the answer given by @Oliboy50 works well; however, make sure you enable it for each vhost you want the user to be able to monitor, such as:
permissions:
  - vhost: "{{item.name}}"
    configure_priv: .*
    write_priv: .*
    read_priv: .*
state: present
tags: management
with_items: "{{user_system_users}}"
With this loop I was able to get past the "401 Unauthorized" error when using the API for any vhost.
By default, the guest user is prohibited from connecting from remote hosts; it can only connect over a loopback interface (i.e. localhost). This applies to connections regardless of the protocol. Any other users will not (by default) be restricted in this way.
It is possible to allow the guest user to connect from a remote host by setting the loopback_users configuration to none:
# DANGER ZONE!
#
# allowing remote connections for default user is highly discouraged
# as it dramatically decreases the security of the system. Delete the user
# instead and create a new one with generated secure credentials.
loopback_users = none
Or, in the classic config file format (rabbitmq.config):
%% DANGER ZONE!
%%
%% Allowing remote connections for default user is highly discouraged
%% as it dramatically decreases the security of the system. Delete the user
%% instead and create a new one with generated secure credentials.
[{rabbit, [{loopback_users, []}]}].
See at "guest" user can only connect from localhost
TIP: It is advisable to delete the guest user or at least change its password to reasonably secure generated value that won't be known to the public.
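Both options from that tip map to standard rabbitmqctl subcommands (the generated password here is just one way to produce a random value):

rabbitmqctl delete_user guest
# or, to keep the user but rotate its password:
rabbitmqctl change_password guest "$(openssl rand -base64 24)"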
If you check the log file, under the info report you will see this:
config file(s) : /etc/rabbitmq/rabbitmq.config (not found)
Change the config file permissions using the command below, then log in using guest; it will work:
sudo chmod 777 /etc/rabbitmq/rabbitmq.config

Amazon AWS EC2 Instance - Can't connect with SSH

This shouldn't be this hard. I cannot connect to a new AWS EC2 instance via SSH clients. I am connecting from a Win 7 box.
Instance OS: Debian 6
AMI: debian-squeeze-i386-20121119-e4554303-3a9d-412e-9604-eae67dde7b76-ami-1977f070.1 (ami-a121a6c8)
User: tried root and also ec2-user
Using .pem keypair that AWS generated and I downloaded
Confirmed the security group and Key Pair Name on the instance
SSH port 22 is OPEN: Nmap says so, and Telnet gets a welcome reply
Using 3 different clients: all clients connect OK
PuTTY replies: Server refused our key
MindTerm Java browser add-in replies: Authentication failed, permission denied
Bitvise SSH replies: Attempting 'publickey' auth; auth failed;
Rebooted the instance, wash, rinse, repeat...
REBUILT a new instance with a new keypair, wash, rinse, repeat...
Connecting isn't the issue. Why would the instance not accept the .pem file as the password? Is there an additional step I am missing? I followed every guide I could find on Google. AWS support is a joke. stackoverflow to the rescue...
TIA.
According to the Debian wiki, which has documentation on the AMI you are using, the username you need to use to log in is 'admin'.
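For example (a sketch; the key filename and public DNS name are placeholders for your own values):

ssh -i the-keypair.pem admin@ec2-xx-xx-xx-xx.compute-1.amazonaws.com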
I have had many issues with connecting to EC2 via ssh.
ssh -i the-keypair-filename root@yourdomain.com
- The keypair file must be in the same directory.
- I just used the terminal to connect.
Make sure you generate or assign the keypair when launching the instance.
You can also verify the keypair you have set in the AWS Management Console: select the running instance and look for "Key Pair Name:".
I hope this is helpful.
My problem was that I didn't add a volume that was expected in the fstab file, so the server didn't start fully and the sshd daemon wasn't running.
Check with:
telnet HOST 22
Check the server logs to make sure it starts properly before you waste lots of time like I did.
Amazon Linux AMIs that use the ec2-user account are listed at the bottom of this page:
http://aws.amazon.com/amazon-linux-ami/
Check that you are using one of those if you are trying to use ec2-user, or check the documentation for the AMI you are using.
Teri
Try using the "admin" username and ignore the username suggested by Amazon.
I had a similar problem, and I solved the issue with the following approach.
1) Edited the knife.rb file in my chef folder, i.e. C:\Users\Administrator\chef-starter\chef-repo\.chef\knife.rb, as below:
knife[:aws_access_key_id] = "xxxxxxxxxxxxxxxxxxxx"
knife[:aws_secret_access_key] = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
knife[:region] = 'ap-southeast-1'
knife[:aws_ssh_key_id] = "ChefUser"
knife[:ssh_user]="ec2-user"
In the command prompt, I issued the command to create an EC2 server:
knife ec2 server create -r "role[webserver]" --image ami-abcd1234 --flavor t1.micro -G ChefClient -x root -N server01 -i H:\Chef-files\ChefUser.pem
Note that, even though I had given all the details in the knife.rb file, I had to give the .pem file path on the command line through the -i option. That solved my problem.
Check if this solution helps you too.
Cheers,
Chandan
Logging in as "ubuntu" worked for me:
ssh -i private_key.pem ubuntu#myubuntuserver
Hope this helps
--Erin