A Redis command: "ERR wrong number of arguments" - redis

I'm using hiredis to pass a command to redis-server.
My code:
redisContext *c = redisConnect("127.0.0.1", 6379);
char y[15] = "pointx";
strcat(y, " 2");
redisReply *reply = (redisReply *)redisCommand(c, "set %s", y);
printf("%s\n", reply->str);
The output is "ERR wrong number of arguments for 'set' command".
However, it works when I change the code like this:
redisContext *c = redisConnect("127.0.0.1", 6379);
char y[15] = "pointx";
char x[5] = "2";
redisReply *reply = (redisReply *)redisCommand(c, "set %s %s", y, x);
printf("%s\n", reply->str);
The output is "OK".
why??

The Redis server does not parse the command built with redisCommand. The server only accepts the Redis protocol, with parameters already delimited.
Parsing therefore occurs in hiredis, and it applies only to the format string, in a single pass. For performance reasons, hiredis avoids multiple formatting passes (or a recursive implementation), so the parameters are not expanded before parsing but while parsing is ongoing - contrary to what you assumed.
Imagine your objects were very big (say several MB): you would not want them parsed on every query. This is why hiredis parses only the format string, never the parameters.
In your first example, hiredis parses a format string containing a single specifier, builds a message with only one argument, and Redis receives:
$ netcat -l -p 6379
*2
$3
set
$8
pointx 2
which is an ill-formed SET command (only one argument: the key "pointx 2", with no value).
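The fix is to give hiredis one format specifier per Redis argument, as in your second example. A minimal sketch (error handling mostly omitted):

#include <stdio.h>
#include <string.h>
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) return 1;

    /* One specifier per argument: the server sees SET with a key and a value. */
    redisReply *reply = (redisReply *)redisCommand(c, "SET %s %s", "pointx", "2");
    printf("%s\n", reply->str);  /* OK */
    freeReplyObject(reply);

    /* %b is the binary-safe variant: a pointer plus a length, so the value
       may contain spaces or even NUL bytes without being split. */
    const char *val = "2 3";
    reply = (redisReply *)redisCommand(c, "SET %s %b", "pointx", val, strlen(val));
    freeReplyObject(reply);

    redisFree(c);
    return 0;
}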

Related

Running command in perl6, commands that work in shell produce failure when run inside perl6

I'm trying to run a series of shell commands with Perl 6, assigning each to the variable $cmd; they look like:
databricks jobs run-now --job-id 35 --notebook-params '{"directory": "s3://bucket", "output": "s3://bucket/extension", "sampleID_to_canonical_id_map": "s3://somefile.csv"}'
Splitting the command into the part up to --notebook-params and the rest:
my $cmd0 = 'databricks jobs run-now --job-id 35 --notebook-params ';
my $args = "'{\"directory\": \"$in-dir\", \"output\": \"$out-dir\", \"sampleID_to_canonical_id_map\": \"$map\"}'";
my $run = run $cmd0, $args, :err, :out;
This fails: no answer is given by either Databricks or the shell, and stdout and stderr are empty.
Splitting the entire command by whitespace:
my @cmd = $cmd.split(/\s+/);
my $run = run $cmd, :err, :out;
Error: Got unexpected extra arguments ("s3://bucket", "output": "s3://bucket/extension", "sampleID_to_canonical_id_map": "s3://somefile.csv"}'
Submitting the command as a string
my $cmd = "$cmd0\"$in-dir\", \"output\": \"$out-dir\", \"sampleID_to_canonical_id_map\": \"$map\"}'";
Again, stdout and stderr are empty, and the exit code is 1.
This is something about how run can only accept arrays, and not strings (I'm curious why).
If I copy and paste the command that was given to Perl 6's run, it works when entered in the shell, but it doesn't work when issued through Perl 6. This isn't good, because I have to execute this command hundreds of times.
Perhaps Perl 6's shell https://docs.perl6.org/routine/shell would be better? I didn't use that, because the manual suggests that run is safer. I want to capture both stdout and stderr in a Proc object.
EDIT: I've gotten this running with shell, but have encountered other problems not related to what I originally posted. I'm not sure if this qualifies as being answered then. I just decided to use backticks with perl5. Yes, backticks are deprecated, but they get the job done.
I'm trying to run a series of shell commands
To run shell commands, call the shell routine. It passes the positional argument you provide it, coerced to a single string, to the shell of the system you're running the P6 program on.
For running commands without involving a shell, call the run routine. The first positional argument is coerced to a string and passed to the operating system as the filename of the program you want run. The remaining arguments are concatenated together with a space in between each argument to form a single string that is passed as a command line to the program being run.
my $cmd0 = 'databricks jobs run-now --job-id 35 --notebook-params ';
That's wrong for both shell and run:
shell only accepts one argument and $cmd0 is incomplete.
The first argument for run is a string interpreted by the OS as the filename of a program to be run and $cmd0 isn't a filename.
So in both cases you'll get either no result or nonsense results.
Your other two experiments are also invalid in their own ways as you discovered.
This is something about how run can only accept arrays, and not strings (I'm curious why)
run can accept a single argument. It would be passed to the OS as the name of the program to be run.
It can accept two arguments. The first would be the program name, the second the command line passed to the program.
It can accept three or more arguments. The first would be the program name, the rest would be concatenated to form the command line passed to the program. (There are cases where this is more convenient coding wise than the two argument form.)
run can also accept a single array. The first element would be the program name and the rest the command line passed to it. (There are cases where this is more convenient.)
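For instance (a sketch; /bin/echo stands in for whatever program you actually want to run):

my $p1 = run '/bin/echo';                    # one argument: just the program
my $p2 = run '/bin/echo', 'hello';           # program name plus command line
my $p3 = run '/bin/echo', 'hello', 'world';  # three or more arguments
my @cmd = '/bin/echo', 'hello', 'world';
my $p4 = run @cmd;                           # single array: first element is the program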
I just decided to use backticks with perl5. Yes, backticks are deprecated, but they get the job done.
Backticks are subject to code injection and shell interpolation attacks and errors. But yes, if they work, they work.
P6 has direct equivalents of most P5 features. This includes backticks. P6 has two variants:
The safer P6 alternative to backticks is qx. The qx quoting construct calls the shell but does not interpolate P6 variables so it has the same sort of level of danger as using shell with a single quoted string.
The qqx variant is the direct equivalent of P5 backticks or using shell with a double quoted string so it suffers from the same security dangers.
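A short illustration of the two constructs (echo is just a stand-in command):

my $out = qx{echo 42};    # the shell is called; no Raku interpolation happens
my $n = 42;
my $out2 = qqx{echo $n};  # $n is interpolated first, then the shell is called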
Two mistakes:
the simplistic split cuts up the last, single parameter into multiple arguments
you are passing $cmd to run, not @cmd
use strict;
my @cmd = ('/tmp/dummy.sh', '--param1', 'param2 with spaces');
my $run = run @cmd, :err, :out;
print(@cmd ~ "\n");
print("EXIT_CODE:\t" ~ $run.exitcode ~ "\n");
print("STDOUT:\t" ~ $run.out.slurp ~ "\n");
print("STDERR:\t" ~ $run.err.slurp ~ "\n");
output:
$ cat /tmp/dummy.sh
#!/bin/bash
echo "prog: '$0'"
echo "arg1: '$1'"
echo "arg2: '$2'"
exit 0
$ perl6 dummy.pl
/tmp/dummy.sh --param1 param2 with spaces
EXIT_CODE: 0
STDOUT: prog: '/tmp/dummy.sh'
arg1: '--param1'
arg2: 'param2 with spaces'
STDERR:
If you can avoid generating $cmd as a single string, I would generate it into @cmd directly. Otherwise you'll have to implement a complex split operation that handles quoting.
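Applied to the command from the question, that could look like the following sketch ($in-dir, $out-dir and $map are the question's own variables):

my $params = '{"directory": "' ~ $in-dir ~ '", "output": "' ~ $out-dir ~ '", "sampleID_to_canonical_id_map": "' ~ $map ~ '"}';
my @cmd = 'databricks', 'jobs', 'run-now', '--job-id', '35', '--notebook-params', $params;
my $run = run @cmd, :err, :out;  # no shell involved, so the JSON needs no extra quoting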

How do I store the value returned by either run or shell?

Let's say I have this script:
# prog.p6
my $info = run "uname";
When I run prog.p6, I get:
$ perl6 prog.p6
Linux
Is there a way to store a stringified version of the returned value and prevent it from being output to the terminal?
There's already a similar question but it doesn't provide a specific answer.
You need to enable the stdout pipe, which otherwise defaults to $*OUT, by setting :out. So:
my $proc = run("uname", :out);
my $stdout = $proc.out;
say $stdout.slurp;
$stdout.close;
which can be shortened to:
my $proc = run("uname", :out);
say $proc.out.slurp(:close);
If you want to capture output on stderr separately from stdout, you can do:
my $proc = run("uname", :out, :err);
say "[stdout] " ~ $proc.out.slurp(:close);
say "[stderr] " ~ $proc.err.slurp(:close);
or if you want to capture stdout and stderr to one pipe, then:
my $proc = run("uname", :merge);
say "[stdout and stderr] " ~ $proc.out.slurp(:close);
Finally, if you don't want to capture the output and don't want it output to the terminal:
my $proc = run("uname", :!out, :!err);
exit( $proc.exitcode );
The solution covered in this answer is concise.
This sometimes outweighs its disadvantages:
Doesn't store the result code. If you need that, use ugexe's solution instead.
Doesn't store output to stderr. If you need that, use ugexe's solution instead.
Potential vulnerability. This is explained below. Consider ugexe's solution instead.
Documentation of the features explained below starts with the quote adverb :exec.
Safest unsafe variant: q
The safest variant uses a single q:
say qx[ echo 42 ] # 42
If there's an error then the construct returns an empty string and any error message will appear on stderr.
This safest variant is analogous to a single quoted string like 'foo' passed to the shell. Single quoted strings don't interpolate so there's no vulnerability to a code injection attack.
That said, you're passing a single string to the shell which may not be the shell you're expecting so it may not parse the string as you're expecting.
Least safe unsafe variant: qq
The following line produces the same result as the q line but uses the least safe variant:
say qqx[ echo 42 ]
This double q variant is analogous to a double quoted string ("foo"). This form of string quoting does interpolate which means it is subject to a code injection attack if you include a variable in the string passed to the shell.
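A harmless demonstration of that risk (the variable stands in for attacker-controlled input):

my $name = 'world; echo INJECTED';  # imagine this arrived from a user
say qqx[ echo hello $name ];        # the shell runs both commands: "hello world", then "INJECTED"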
By default run just passes the STDOUT and STDERR to the parent process's STDOUT and STDERR.
You have to tell it to do something else.
The simplest is to just give it :out to tell it to keep STDOUT. (Short for :out(True))
my $proc = run 'uname', :out;
my $result = $proc.out.slurp(:close);
my $proc = run 'uname', :out;
for $proc.out.lines(:close) {
    .say;
}
You can also effectively tell it to just send STDOUT to /dev/null with :!out. (Short for :out(False))
There are more things you can do with :out
{
    my $file will leave {.close} = open :w, 'test.out';
    run 'uname', :out($file); # write directly to a file
}
print slurp 'test.out'; # Linux
my $proc = run 'uname', :out;
react {
    whenever $proc.out.Supply {
        .print;
        LAST {
            $proc.out.close;
            done; # in case there are other whenevers
        }
    }
}
If you are going to do that last one, it is probably better to use Proc::Async.
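A minimal Proc::Async sketch of the same idea:

my $proc = Proc::Async.new('uname');
react {
    whenever $proc.stdout.lines {  # stdout arrives asynchronously, line by line
        .say;
    }
    whenever $proc.start {         # the Promise from .start is kept when the process exits
        done;
    }
}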

Command in expect for grep keyword

Expect script query:
In one of my Expect scripts I have to pick a keyword from the output of a send command and store it in a file. Could someone help me?
send "me\n"
Output:
EM/X Nmis Ssh Session/2; Userid =
Impact = ; Scope = ; CustomerId = 0
Here I want to pick the keyword Nmis Ssh Session/2,
and my goal is to create a new command in the Expect script:
send "set Nmis Ssh Session/2 \n"
so this value, Nmis Ssh Session/2, should be stored in a variable. Could someone help me?
I'm not quite sure exactly what information is produced by which side, but maybe something like this will do:
expect -re {EM/X ([^;]+);}
set theVariable $expect_out(1,string)
The key is that we use the -re option to pass a regular expression to the expect command. The text that matches the parenthesized part of the pattern (a sequence of non-semicolon characters) is stored in the variable expect_out(1,string) (many other things are stored in the expect_out array; see the documentation). Copying it from there into a named variable for storage and further manipulation is trivial.
I do not know if the RE is the right one; there's something of an art in choosing the right one, and it takes quite a lot of knowledge about what the possible output of the other side could be.
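Putting the pieces together, the whole exchange might look like this sketch (the trailing trim and the exact output format are assumptions):

send "me\n"
expect -re {EM/X ([^;]+);}
set theVariable [string trim $expect_out(1,string)]
send "set $theVariable \n"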

How to store a result to a variable in HP OpenVMS DCL?

I want to save the output of a program to a variable.
I used the following approach, but it fails.
$ PIPE RUN TEST | DEFINE/JOB VALUE @SYS$PIPE
$ x = f$logical("VALUE")
I got an error: %DCL-W-MAXPARM, too many parameters - reenter command with fewer parameters
\WORLD\
Reference:
How to assign the output of a program to a variable in a DCL com script on VMS?
The usual way to do this is to write the output to a file, then read it back from the file into a DCL symbol (or logical). Although not obvious, you can do this with the PIPE command as well:
$ pipe r 2words
hello world
$ pipe r 2words |(read sys$pipe line ; line=""""+line+"""" ; def/job value &line )
$ sh log value
"VALUE" = "hello world" (LNM$JOB_85AB4440)
$
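For reference, the write-to-a-file approach mentioned above might look like this sketch (the file name and program are illustrative):

$ PIPE RUN TEST > TMP.OUT
$ OPEN/READ IN TMP.OUT
$ READ IN LINE            ! read the first output record into the symbol LINE
$ CLOSE IN
$ DELETE TMP.OUT;*
$ SHOW SYMBOL LINE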
If you are able to change the program, add some code to it to write the required values into symbols or logicals (see the LIB$ routines).
If you can modify the program, using LIB$SET_SYMBOL in the program defines a DCL symbol (what you are calling a variable) for DCL. That's the cleanest way to do this. If it really needs to be a logical, then there are system calls that define logicals.
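If the program is written in C, the call might look roughly like this sketch (treat the argument details as an assumption and check the LIB$SET_SYMBOL documentation):

#include <descrip.h>
#include <lib$routines.h>

int main(void) {
    $DESCRIPTOR(name_dsc, "VALUE");            /* symbol name */
    $DESCRIPTOR(value_dsc, "hello world");     /* symbol value */
    lib$set_symbol(&name_dsc, &value_dsc, 0);  /* define the DCL symbol VALUE */
    return 1;                                  /* odd status = success on VMS */
}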

How can I use the value of mp2t.af.pcr as a Tshark field?

I have a wireshark capture that contains an RTP multicast stream (plus some other incidental data).
Using a Tshark command like the following, I can produce a CSV of the RTP timestamp compared with the packet capture time:
tshark.exe -r "capture.pcap" -Eseparator=, -Tfields -e rtp.timestamp -e frame.time_epoch -d udp.port==5000,rtp
This decodes the UDP packets as RTP, and successfully prints out the two fields as expected.
Now, my question: The payload of the RTP stream is an MPEG2 Transport Stream, and I also want to print the PCR value (if there is one) alongside the packet and RTP timestamps.
In wireshark, I can see the PCR being decoded correctly, however using a command like the following:
tshark.exe -r "HBO HD CZ.pcap" -Eseparator=, -Tfields -e rtp.timestamp -e frame.time_epoch -e mp2t.af.pcr -d udp.port==5000,mp2t
...only prints out a "1" if there is a PCR present, not the actual value. I have also checked the .pcr_flag field to confirm that the two are not swapped, but I still see the same result.
The documentation seems to call mp2t.af.pcr a "Label", does this mean that Tshark is not able to use it as a field? Is there a way to generate a CSV with these values?
(What part of the documentation calls it a "Label"? That's a somewhat odd description of a named field.)
The problem is that the value that Wireshark displays after "base(XXX)*300 + ext(YYY)" is calculated and displayed, but the field itself isn't given an integral type and is instead given a type that doesn't have a value. Arguably, it should be an FT_UINT64 field and should be given a value, so that you can filter on it and can print the value in TShark.
Please file an enhancement request for this on the Wireshark Bugzilla.