I'm trying to connect to a Redis server from two different servers, call them server1 and server2.
From server1 I cannot log in; using the right or the wrong password I always get:
user@server1:~$ redis-cli -h my-redis-server.com
my-redis-server.com:6379> auth rightpassword
(error) WRONGPASS invalid username-password pair
From server2 I can log in:
user@server2:~$ redis-cli -h my-redis-server.com
my-redis-server.com:6379> auth rightpassword
OK
But the funny thing is that the error when trying to log in from server2 with the wrong password is different:
user@server2:~$ redis-cli -h my-redis-server.com
my-redis-server.com:6379> auth wrongpassword
(error) ERR invalid password
Using the MONITOR command on the Redis server, the login attempts from server1 are not printed, while the login attempts (successful or not) from server2 are.
It seems the firewall is not blocking connections from server1, and the Redis server is configured to accept connections from anywhere ("bind 0.0.0.0"). It actually looks like connections from server1 are accepted, but somehow Redis refuses to run commands coming from it :-/ From what I've seen, Redis doesn't have a way of blocking access per IP other than the "bind" config, and even that should result in a connection refused rather than a wrong-password error. I also think that if the firewall were blocking, I would get a connection refused.
Geez, I must be missing something. Does anybody have a clue about what could be going on here?
PS: I wonder why Redis even has two different wrong-password errors :-|
Redis version 6+ added Access Control Lists (ACLs), which allow restricting users to specific commands (read, write, key-constrained, etc.) based on each user's permissions.
This may be why this error is being displayed:
(error) WRONGPASS invalid username-password pair
The AUTH command is slightly different for Redis version 6+:
AUTH command documentation: https://redis.io/commands/auth/
AUTH [username] password
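For example, the single-argument form authenticates against the built-in "default" user, while the two-argument form names a user explicitly. A rough illustration (the "app-user" name is made up):
my-redis-server.com:6379> AUTH rightpassword
OK
my-redis-server.com:6379> AUTH app-user apppassword
OK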
ACL Documentation:
https://redis.io/docs/manual/security/acl/
If both of the Redis servers are the same version (i.e. 6+), then I would guess that server #2 has the default user enabled, which is why the AUTH command works. The default user is Redis' way of maintaining backwards compatibility with versions prior to 6, and the way server #2 is operating is the default configuration for Redis. From what you mentioned in your original post, it seems like server #1 has the default user disabled and another user was created instead, possibly with different permissions.
For server #1, you may be able to run:
ACL WHOAMI
This should return the username that can then be used with this command:
AUTH [username] password
It may also be helpful to run:
ACL LIST
to view the current users and their permissions.
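For illustration only, the output might look something like this (the user name and rules here are made up):
my-redis-server.com:6379> ACL WHOAMI
"app-user"
my-redis-server.com:6379> ACL LIST
1) "user app-user on #<hashed password> ~app:* +@read +@write"
2) "user default off nopass ~* +@all"
In a listing like this, single-argument AUTH would be expected to fail because the default user is off, while AUTH app-user <password> would succeed.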
When I ssh to git@github.com, I get a message that looks like this:
Hi <my username>! You've successfully authenticated, but GitHub does not provide shell access.
The connection is then closed. I understand this is intentional behavior, but how do they do it? Is there a config option in sshd_config? Is it a different or proprietary package to manage ssh connections? How do they change the message to include the username?
I have no idea what to look up to find these answers. Any searches involving TTY allocation seem to only return troubleshooting for servers that shouldn't be doing that.
It's either that the user shell is set to /bin/false (or something else that does nothing) and there is an sshd "banner" or "motd" (message of the day) containing that message,
or that the user shell is set to a program that emits that message and exits.
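A rough sketch of the second approach (the paths, the greeter script, and the idea of tying each key to a username via a forced command are assumptions for illustration, not how GitHub actually does it). OpenSSH's authorized_keys supports a per-key forced command, which is one way to both suppress the shell and know which account owner a key belongs to:
# /home/git/.ssh/authorized_keys -- each uploaded key carries its owner's username
command="/usr/local/bin/greeter alice",no-pty,no-port-forwarding,no-X11-forwarding ssh-ed25519 AAAA... alice
# /usr/local/bin/greeter
#!/bin/sh
echo "Hi $1! You've successfully authenticated, but this server does not provide shell access."
exit 0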
I created an application on an existing OpenShift project by pulling a Docker image from a remote repo.
The pod is created but fails with STATUS "CrashLoopBackOff".
Investigating the reason using
oc logs <pod id> -p
what appears is a list of unsuccessful "chown: changing ownership of '...': Operation not permitted" errors.
I found this is due to the non-predictable user id the container runs as.
According to
https://docs.openshift.com/container-platform/3.6/admin_guide/manage_scc.html and various posts here and there,
it seems the solution is to relax the security policy:
oc login -u system:admin https://<remote openshift endpoint>
oadm policy add-scc-to-group anyuid system:authenticated
Whether this is the solution, I don't know, because I cannot get past the first problem:
oc login -u system:admin
asks for login/password and then prints an error:
error: username system:admin is invalid for basic auth
I guess there is the need of a certificate, a token, something secure, but I cannot understand how to generate it from OpenShift, or
whether there is a key pair to generate locally (and of which kind) and how to bind the key to the user. Furthermore, checking in the web console,
I cannot see that kind of user (system:admin).
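(For what it's worth, system:admin is normally a certificate-authenticated virtual user rather than one backed by a login/password identity provider; on a default 3.x master the admin client certificate typically lives in a generated kubeconfig, so something along these lines would presumably be needed, run on the master node. The path below is an assumption for an Ansible-based install:)
# run on the master node as root
oc --config=/etc/origin/master/admin.kubeconfig get nodes
oc --config=/etc/origin/master/admin.kubeconfig adm policy add-scc-to-group anyuid system:authenticated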
Am I missing something?
Thanks a lot,
Lorenzo
I configured OpenLDAP 2.4 on RHEL 6.5.
I applied the default password policy on my LDAP tree.
However, the account lock is effectively applied only when I do su - username with a wrong password.
But when I try to check by logging in with a PuTTY session or direct SSH, it is not applied.
Can anyone please help me with the above issue?
When I tried using sudo su - testuser2.4:
pwdFailureTime: 20150427095439Z
pwdFailureTime: 20150427095445Z
pwdFailureTime: 20150427095451Z
pwdAccountLockedTime: 20150427095451Z
But when I tried direct SSH or a PuTTY session with 3 failures, the policy was still not applied.
You have to avoid using the managerDN user. That's for use by OpenLDAP itself, and it bypasses all overlays, specifically this one. The overlay will work if you're logging in as a user within the DIT.
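A quick way to see the overlay doing its job is to bind as the user's own DN instead of the manager; the DNs below are placeholders for whatever your tree actually uses:
# a failed simple bind as the user itself is recorded by the ppolicy overlay
ldapsearch -x -H ldap://localhost -D "uid=testuser2.4,ou=People,dc=example,dc=com" -w wrongpassword -b "" -s base
# repeat until pwdMaxFailure is reached, then inspect the operational attributes as an authorized DN
ldapsearch -x -H ldap://localhost -D "cn=Manager,dc=example,dc=com" -W -b "uid=testuser2.4,ou=People,dc=example,dc=com" pwdFailureTime pwdAccountLockedTime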
I'm having trouble connecting with Navicat using an SSH tunnel and seem to have all my ducks in a row, so I'm wondering if anyone else who has done this has had success:
I set up a normal (TCP) user and checked the connection (host, port, user, password, and remote access IP added in cPanel) to make sure it worked.
As per the instructions, I then went to the SSH tab and enabled it ([x] SSH Tunnel).
I added the same IP for host, then 22 for port, then added root as user, selected password as authentication and then entered the root password.
I keep getting a "host.mydomainame.com cannot connect to this MySQL host" error.
I know it is working because:
a) if I use the wrong user/pwd I simply get a 'could not create tunnel' error
b) my host confirms that an SSH connection IS created the moment I connect with the correct root/pwd combo (even though the error message is generated on my side)
BTW, as per Navicat's instructions, I ensured that AllowTcpForwarding is set to yes.
I also confirmed using bithive that I can connect to the same server from the same IP with the same user.
Figured this out, so I thought I'd update so anyone else having this issue can make it work. The answer turns out to be pretty basic.
The 'General' tab where you set your MySQL user has to have localhost as the host, not the hostname or IP as it usually does, since the 'SSH Tunnel' tab creates the connection to that host first.
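To see why, it helps to look at what the tunnel actually does; the rough command-line equivalent is something like this (the local port 3307 and the hostname are placeholders):
ssh -L 3307:localhost:3306 root@host.mydomainame.com
# anything sent to 127.0.0.1:3307 on your machine comes out on the remote server
# and connects to localhost:3306 there, so MySQL sees a connection from localhost
mysql -h 127.0.0.1 -P 3307 -u mytcpuser -p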
In my case, I used PuTTYgen -> Load an existing private key file -> Conversions -> Export ssh.com key, and that solved my issue!
I'm running a mongodb process with the following line:
/usr/bin/mongod --dbpath /var/db/mongo --journal
According to MongoDB's docs:
http://www.mongodb.org/display/DOCS/Http+Interface
I should be able to access the HTTP console at http://myhost:28017
When I attempt to access the page it asks for authentication.
According to the docs, if security is configured I would need to authenticate. But after looking at mongodb.org/display/DOCS/Security+and+Authentication, it seems clear to me that I'm not using any authentication. I don't run the process with the --auth option, nor are there any users when I run a db.system.users.find() command.
What's going on here?
I have been able to reproduce this, and this is not the intended behavior. I have filed https://jira.mongodb.org/browse/SERVER-4601; the fix version is 2.1.1.
Thank you for bringing this to our attention!
In the meantime, there are two work-arounds:
1) Enter the credentials for authentication in the browser pop-up window
2) Remove all user credentials from each of your DBs (including admin) using db.system.users.remove()
Either of these should allow you to view the http console.
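For the second work-around, a minimal mongo-shell sketch from that era (repeat the last two lines for each of your databases; "mydb" is a placeholder):
> use admin
switched to db admin
> db.system.users.remove()
> use mydb
switched to db mydb
> db.system.users.remove()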
Greetings Brain,
I am using Mongo v2.4.6 on the default port 27017. Its HTTP console is enabled by default, but when you try to access it from the network it asks for a password, and I don't know why, as I am new to this and don't know the exact reason. But I have a way to access it.
Create a tunnel to your Mongo server, and when you access the console through it, it won't ask for a password. If you are using PuTTY:
Enter the host name.
Go to SSH in the left menu options and click +.
Then click on Tunnels.
In Source port, type 28017.
In Destination, type localhost:28017.
Now click Open and provide your SSH username and password.
Now open a browser on the PC from where you are doing SSH and go to localhost:28017.
And boom, it's accessible and won't ask for a username and password. Hope it works for you; let me know if you need any help.
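For anyone doing this from a machine with a command-line ssh client instead of PuTTY, the equivalent of the steps above is roughly (user and host names are placeholders):
ssh -L 28017:localhost:28017 youruser@your-mongo-server
# leave that session open, then browse to http://localhost:28017 on the same PC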