Cannot run Lua script passed to Redis: "This Redis command is not allowed from scripts"

I have three Redis server instances running under Sentinel, and a Lua script that lets redis_exporter gather the list of clients connected to the Redis server. But when I pass the script to redis-cli, I get the following error:
(error) ERR Error running script (call to f_4c6be190ef2981eda70d58ec0c959bd1ca2c5352): #user_script:10: #user_script: 10: This Redis command is not allowed from scripts
This is my Lua script:
local r = redis.call("CLIENT", "LIST")
return r
Is there any way to fix this problem?

A quick Google search suggests the issue comes from the Redis server itself, not the library.
The CLIENT command has the no-script flag, so it cannot be called from a Lua script:
https://github.com/antirez/redis/blob/fe5aea38c35e3fc35a744ad2de73543df553ae48/src/sentinel.c
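Since no-script only blocks the command inside EVAL/EVALSHA, one workaround (a minimal sketch, assuming the exporter can be given a plain command instead of a script) is to run it directly with redis-cli rather than through Lua:
# CLIENT LIST is allowed outside of scripting; host and port are placeholders
redis-cli -h 127.0.0.1 -p 6379 CLIENT LIST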


SSH login error - "libintl.so.9" not found

I'm logging in to a vSphere VM (FreeBSD) using SSH and getting the following error:
Shared object "libintl.so.9" not found, required by "bash"
Connection to xxx closed.
I mistakenly changed the root user's shell to bash. I was able to log in earlier using csh. I can't run chsh or any other command because I am not able to log in to the server.
Is there a way to revert the shell change, or to specify a shell at SSH login? I have tried rebooting the VM from vSphere but still get the same error.
I have also tried SFTP using FileZilla, but since it uses SSH, I get the following error:
Status: Connected to xxx
Error: FATAL ERROR: Received unexpected end-of-file from SFTP server
Error: Could not connect to server
To fix this, you will need to shut down the VM from vSphere, reboot, and then choose "Single User" mode. Once at the single-user shell, change root's shell to /bin/sh or /bin/csh. Don't use third-party shells for root.
Also, you get the error because your bash binary is out of date and not ABI compatible with the OS it's installed on. Running pkg update and then pkg upgrade should help once you regain access.
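A minimal sketch of the single-user recovery (assuming a standard FreeBSD layout; the root filesystem must be remounted read-write before the shell can be changed):
# At the single-user mode prompt:
mount -u /              # remount the root filesystem read-write
mount -a                # mount the remaining filesystems
chsh -s /bin/csh root   # point root back at a base-system shell
reboot                  # then log in normally over SSH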

Cannot ssh into Google-Engine, connecting in a loop

I am unable to connect through SSH to my GCE instance. I was connecting without any problem; the only thing I changed was my username, via the top right corner of the browser, where I selected "Change Linux Username".
When I try to SSH into my instance via the browser, I keep getting the following message in an endless loop:
When I try to SSH via Cloud Shell, I also get the following error message (serial console output):
Permission denied (publickey).
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
Is there any way to fix this problem? Since I have no access to the instance now, I don't know what to do.
However, you can always get access back through the serial console, and from there you can troubleshoot the user/SSH issue internally.
1) $ gcloud compute instances add-metadata [INSTANCE_NAME] --metadata=serial-port-enable=1
You can then connect to the instance through the serial port
NOTE: The root password must already have been set in order to use the serial port.
2) $ gcloud compute connect-to-serial-port [INSTANCE_NAME]
If you never set the root password, you can set it by adding a startup script to your instance that sets the root password, using the command below:
NOTE: the instance must be rebooted in order to run the startup script.
3) $ gcloud compute instances add-metadata [instance name] --metadata startup-script='echo "root:YourPasswdHere" | chpasswd'
Reboot the instance, run the command from step 2), and authenticate yourself as root with the password that you set in the startup script in step 3).
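Once you are on the serial console as root, a sketch of what the internal troubleshooting can look like (the usernames and paths below are assumptions; the idea is that the SSH key may have been provisioned under the old Linux username):
# List home directories to spot the old and new usernames
ls /home/
# Check which account the authorized key actually landed under
grep -l "ssh-" /home/*/.ssh/authorized_keys 2>/dev/null
# If the key is still under the old user, copy it over to the new one
install -d -m 700 -o newuser -g newuser /home/newuser/.ssh
cp /home/olduser/.ssh/authorized_keys /home/newuser/.ssh/
chown newuser:newuser /home/newuser/.ssh/authorized_keys
chmod 600 /home/newuser/.ssh/authorized_keys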
I had the same problem. It took me several days to figure out what was happening in my case.
To find out, I created a new instance from scratch and applied, one by one, all the modifications I had made to the instances I eventually couldn't connect to, exiting the SSH connection and re-entering after each change to test it.
I tried it a couple of times, and in both cases the connection became impossible after uninstalling Python (I only needed version 3.7, so I was uninstalling all the others and installing the one I needed).
My command for uninstalling it was
sudo apt purge python2.7-minimal
and
sudo apt purge python3.5-minimal
I don't know if it was specifically because of deleting Python, or because of using purge (in which case this problem might also appear when purging another program).
I don't even know why this would affect the SSH connection.
Could it be that Google Cloud somehow uses the destination machine's Python for the browser SSH?
In any case, if you are experiencing this problem, try to avoid uninstalling anything from the base VM.
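If you do need to remove a package, one way to check the blast radius first (a sketch; the package name is just an example) is apt's dry-run mode:
# -s / --simulate only prints what would be removed, without doing it
sudo apt-get -s purge python3.5-minimal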

Ansible SSH error - Node getting scaled down (behind AWS ELB) before code is deployed

I am currently facing an issue during an Ansible rolling deployment, as described below:
I generate a dynamic inventory and pass the file to the deployment playbook.
Before the deployment finishes, some of the nodes get scaled down (auto scaling policy), and hence Ansible throws an SSH error for those nodes.
Ansible then skips the remaining nodes in the inventory file and terminates.
Is there any way to skip the specific node (the one being scaled down during deployment) and continue the deployment with the other nodes in the inventory file?
Thank you for your time!
I think you should try Ansible's dynamic inventory so that it automatically fetches the inventory based on tags in real time.
Ref:
http://docs.ansible.com/ansible/latest/intro_dynamic_inventory.html
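For example, with the EC2 dynamic inventory script described in that document (a sketch; the script path, playbook name, and tag group are assumptions for illustration):
# ec2.py builds the inventory from the AWS API at run time, so terminated
# instances simply never appear in the host list
ansible-playbook -i ec2.py deploy.yml --limit "tag_Role_webserver"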
You can try running the same playbook for the hosts which are still pending using the --limit option, in case you want to rerun your playbook for the rest of the hosts.
Example: ansible-playbook site.yml --limit @/some-path/site.retry
You can consider setting ignore_errors: yes on the tasks that are failing. This will let the playbook continue for the rest of the hosts.
Ref:
http://docs.ansible.com/ansible/latest/playbooks_error_handling.html

Rundeck - reboot server job

I have a Rundeck job that reboots a server; it sends the command "sudo reboot". This works and the server reboots.
The problem is that Rundeck doesn't get a signal back, so the job fails.
Is there a way to make this work and get a completion signal back in Rundeck?
Perhaps wrap your command in a script, background the reboot operation, and return 0? I'm doing something similar with a set of development VMs, but I'm using virsh. I don't see why this couldn't be done with a physical server:
#!/bin/bash
ssh rundeck@yourserver sudo reboot &
exit 0
You may need to experiment a bit with the ssh options (perhaps '-f' and/or '-n') to get this to work properly.
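For instance (a sketch; the user and host names are placeholders), -n detaches the remote command from stdin and -f sends ssh to the background right after authentication:
ssh -n -f rundeck@yourserver "sudo reboot"
exit 0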
Well, playing around with it, I just used the following as a Local Command step:
ssh ${node.username}@${node.hostname} "reboot & exit"
The return code is ZERO and everybody is happy.

I am trying to SSH to a Windows machine from Ansible; the program gets stuck

tasks:
  - name: Connect to Windows host machine through an ssh script
    script: windows_connect.ssh
  - name: Run the bat file
    script: C:\OV\sentenv.bat
  - name: To exit from the remote machine
    shell: exit
This is what my playbook looks like.
Windows_connect.sh contains a script to connect to a Windows machine via SSH:
ssh root@host
So ideally, shouldn't Ansible prompt me for a password?
Instead, it gets stuck.
Please help me with this.
Ansible opens at least one new SSH connection for every task in the play.
So the idea of opening an SSH connection with the first task and then reusing that connection in later tasks will not work. In fact, Ansible won't even be able to execute the first task, because the script module you are trying to use already needs a working connection to the remote host.
I have found little information on how many SSH connections Ansible uses to execute a task, so I might be wrong here.
You should consult the Ansible documentation on Windows support and implement a solution based on the winrm Python module.
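A minimal sketch of what that can look like (the host address, credentials, and inventory file name here are assumptions; WinRM must already be enabled on the Windows machine):
# Install WinRM support for Ansible on the control node
pip install pywinrm

# hosts.ini: tell Ansible to talk to the Windows host over WinRM instead of SSH
cat > hosts.ini <<'EOF'
[windows]
winhost ansible_host=192.0.2.10

[windows:vars]
ansible_user=Administrator
ansible_password=YourPasswordHere
ansible_connection=winrm
ansible_winrm_server_cert_validation=ignore
EOF

# Verify connectivity with the Windows-specific ping module
ansible -i hosts.ini windows -m win_ping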