How to execute multiple commands in StackExchange.Redis CLI in Windows? - redis

I am trying to execute multiple commands one after the other in redis-cli, via a batch file and via Redis pipelining, but could not find a working solution.
Here is my command:
127.0.0.1:6379>$(Auth Test12\r\nPING\r\n;)
Error: Invalid Arguments
$ (printf "PING\r\nPING\r\nPING\r\n"; sleep 1)
Error: Invalid Arguments
Using cmd.exe in ProcessStartInfo in C#:
FileName: cmd.exe
Arguments: @"/c cd c:\program files\redis && call redis-cli -a Test12 config set requirepass Test1234"
I tried the same command in a .bat file as well.
Error: 'redis-cli' is not recognized as an internal or external command, operable program or batch file
My requirement is to authenticate with the existing password and set a new password on the Redis server from C# code.
If anyone could suggest a solution, it would be a great help.
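One way to get past the "not recognized" error is sketched below; it assumes redis-cli.exe sits under C:\Program Files\redis (adjust the path to your installation) and calls the executable by its full, quoted path so cmd.exe does not depend on PATH or on a prior cd:
:: set-password.bat -- sketch only; the install path is an assumption
"C:\Program Files\redis\redis-cli.exe" -a Test12 config set requirepass Test1234
:: Several commands can also be piped into a single redis-cli invocation:
(echo AUTH Test12& echo CONFIG SET requirepass Test1234& echo PING) | "C:\Program Files\redis\redis-cli.exe"
The same full path can be used directly as the FileName in ProcessStartInfo (with the rest as Arguments), which avoids going through cmd /c at all.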

Related

Apache Airflow command not found with SSHOperator

I am trying to use the SSHOperator to SSH into a remote machine and run an external application through the command line. I have set up the SSH connection via the admin page.
This section of code is used to define the commands and the SSH connection to the external machine.
sshHook = SSHHook(ssh_conn_id='remote_comp')
command_1 ="""
cd /files/232-065/Rans
bash run.sh
"""
where 'run.sh' is the following shell script:
#!/bin/sh
starccm+ -batch run_export.java Rans_Model.sim
Which simply runs the commercial software starccm+ with some options I have specified.
This section defines the task:
inlet_profile = SSHOperator(
    task_id='inlet_profile',
    ssh_hook=sshHook,
    command=command_1
)
I have confirmed the SSH connection works by giving a simple 'ls' command and checking the output.
The error that I get is:
bash run.sh, error: run.sh: line 2: starccm+: command not found
The command in 'run.sh' works when I am logged into the machine (it does not require a GUI). This makes me think the SSH session that Apache Airflow opens is not the same as the one I log into, but I am not sure how to solve this problem.
Does anyone have any experience with this?
There is no issue with the SSH connection (at least judging from the error message); the issue is with the starccm+ installation path.
Please check the installation path of starccm+.
Check whether the installation path is part of the $PATH environment variable:
$ echo $PATH
If not, install it in a standard location like /bin or /usr/bin (provided those are included in $PATH), or export the installation directory into the PATH variable like this:
$ export PATH=$PATH:/<absolute_path>
It is not ideal, but if you struggle with setting the PATH variable, you can run starccm+ by stating the full path, like:
/directory/where/star/is/installed/starccm+ -batch run_export.java Rans_Model.sim
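Since the SSHOperator opens a non-interactive, non-login session, a PATH set in ~/.bash_profile or /etc/profile may not be sourced at all. A minimal sketch of run.sh that sets PATH itself; /opt/starccm/bin is an assumed location, substitute whatever `which starccm+` prints in an interactive login:
#!/bin/sh
# Add the STAR-CCM+ install directory (assumed location) to PATH so the non-interactive job can find the binary
export PATH=$PATH:/opt/starccm/bin
starccm+ -batch run_export.java Rans_Model.sim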

Getting "Server unexpectedly closed network connection" after executing a remote command with Plink

I am using Plink to execute a remote command.
When using a remote command (text file), this error occurs:
FATAL ERROR: Server unexpectedly closed network connection
test.bat
"C:\Program Files (x86)\PuTTY\plink.exe" XX.XX.XX.XX -l userID -pw password -m "D:\FindingLog\test.txt"
test.txt
cd log
When I remove -m "D:\FindingLog\test.txt" from the batch file, it works (successful login).
What's the problem?
The SSH session closes (and Plink with it) as soon as the command finishes. Normally the "command" is the shell. As you have overridden this default "command", yet you seem to want to run the shell nevertheless, you have to execute the shell explicitly yourself:
cd log
/bin/bash
Also, as the use of the -m switch implies a non-interactive session, you probably want to force an interactive terminal back using the -t switch.
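Put together, a sketch using the same paths as in the question: test.txt ends by starting a shell, and test.bat adds -t so Plink requests an interactive terminal:
test.txt
cd log
/bin/bash
test.bat
"C:\Program Files (x86)\PuTTY\plink.exe" -t XX.XX.XX.XX -l userID -pw password -m "D:\FindingLog\test.txt"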
See also How to prevent PuTTY shell from auto-exit after executing command from batch file in Windows?
Upgrading to Plink 0.74 (from the much older 0.60) fixed this issue for me.

ORA-12545: Connect failed because target host or object does not exist while connecting through the shell

I am trying to run SQL scripts from a shell script. The scripts work fine: they connect to the database and apply the SQL files. The only thing I cannot understand is why the error message below is logged every time.
Error Message :
ERROR:
ORA-12545: Connect failed because target host or object does not exist
Shell Script:
/opt/ORACLE/app/oracle/product/11.2.0/client_1/bin/sqlplus -s <<eoj >>$LOG_FIL 2>&1
${DBUSER1}/${DBPASS}@${hostBillingDBSID}
@${SQLParm} $RPT_FIL
eoj
Try the below.
Shell Script:
# Include the Oracle installation directory in the PATH variable
export PATH=$PATH:/opt/ORACLE/app/oracle/product/11.2.0/client_1/bin
# Now just use sqlplus, instead of the full path reference.
sqlplus -s ${DBUSER1}/${DBPASS}@${hostBillingDBSID} <<eoj >>$LOG_FIL 2>&1
@${SQLParm} $RPT_FIL
eoj
The user/password (connection string) has to be passed as a command-line argument to sqlplus.
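If the error still shows up after that change, ORA-12545 generally means the host named in the connect identifier cannot be resolved from the client machine. A quick check, assuming $hostBillingDBSID is a TNS alias and the Oracle client tools are on PATH:
# Resolve the alias and confirm the listener host answers
tnsping ${hostBillingDBSID}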

SGE Command Not Found, Undefined Variable

I'm attempting to set up a new compute cluster and am currently experiencing errors when using the qsub command in SGE. Here's a simple experiment that shows the problem:
test.sh
#!/usr/bin/zsh
test="hello"
echo "${test}"
test.sh.eXX
test=hello: Command not found.
test: Undefined variable.
test.sh.oXX
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
If I run the script on the head node (sh test.sh), the output is correct. I submit the job to SGE by typing "qsub test.sh".
If I submit the exact same script in the same way on an established compute cluster like HPC, it works perfectly as expected. What setting could be causing this problem?
Thanks for any help on this matter.
Most likely the queues on your cluster are set to posix_compliant mode with a default shell of /bin/csh. The posix_compliant setting means your #! line is ignored. You can either change the queues to unix_behavior or specify the required shell using qsub's -S option.
#$ -S /bin/sh
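A sketch of both variants, assuming zsh really is installed at /usr/bin/zsh on the execution hosts: either embed the shell as an SGE directive in the script, or pass it on the qsub command line.
#!/usr/bin/zsh
#$ -S /usr/bin/zsh    # posix_compliant queues ignore the #! line, so tell SGE the shell explicitly
test="hello"
echo "${test}"
Or, without editing the script:
qsub -S /usr/bin/zsh test.sh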

Passing shell script file

I have a Linux shell script which collects various data from a Linux server (services, processes, free space, etc.).
From Windows, to collect the data, we use Plink to connect to the Linux boxes and run the shell script:
plink root@servername -pw Password -noagent -m Batch-File
and then use pscp to copy the output file to a Windows location.
Now when I try to do the same for ESXi, the plink command fails with the error below:
FATAL ERROR: Server unexpectedly closed network connection
However, if I give a direct command as below,
plink root@servername -pw Password -noagent ls /etc
it works.
Let me know how to use plink with ESXi, if possible.
After checking the messages log, it looks like the issue is with ESXi's limit on reading long character strings: the log shows the session failing with "String Too Long" and then a message about closing the connection.
Thus the approach was to copy the shell script over with pscp, give it executable permission and run it, collect the gathered data, and then delete the file from the system.
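A sketch of that workflow from the Windows side; the script name collect.sh, the remote path /tmp, and the local destination D:\Reports are placeholders:
:: Copy the script to the ESXi host, run it, fetch the result, then clean up
pscp -pw Password collect.sh root@servername:/tmp/collect.sh
plink root@servername -pw Password -noagent "chmod +x /tmp/collect.sh && /tmp/collect.sh > /tmp/report.txt"
pscp -pw Password root@servername:/tmp/report.txt D:\Reports\report.txt
plink root@servername -pw Password -noagent "rm -f /tmp/collect.sh /tmp/report.txt"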