Expect script query:
In one of my Expect scripts I have to pick a keyword from the output of a send command and store it in a file. Could someone help me?
send "me\n"
Output:
EM/X Nmis Ssh Session/2; Userid =
Impact = ; Scope = ; CustomerId = 0
Here I want to pick the keyword: Nmis Ssh Session/2
and my goal is to create a new command in the Expect script:
send "set Nmis Ssh Session/2 \n"
so this value, Nmis Ssh Session/2, should be stored in a variable. Could someone help me?
I'm not quite sure what information is produced by which side, but maybe something like this will do:
expect -re {EM/X ([^;]+);}
set theVariable $expect_out(1,string)
The key is that we use the -re option to pass a regular expression to the expect command. That makes the text matching the parenthesized part of the pattern (a sequence of non-semicolon characters) get stored in the variable expect_out(1,string). (Many other things are stored in the expect_out array; see the documentation.) Copying it from there to a named variable for storage and further manipulation is trivial.
I do not know if the RE is the right one; there's something of an art to choosing it, and it takes quite a lot of knowledge about what the possible output of the other side could be.
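Putting it together, a minimal sketch (the exact prompts and the final set command syntax are assumptions based on the question):

send "me\n"
expect -re {EM/X ([^;]+);}
set theVariable $expect_out(1,string)
send "set $theVariable \n"

Here the regular expression captures everything between "EM/X " and the next semicolon, which for the output shown is Nmis Ssh Session/2.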
I'm trying to run a series of shell commands with Perl6, assigned to the variable $cmd, which look like
databricks jobs run-now --job-id 35 --notebook-params '{"directory": "s3://bucket", "output": "s3://bucket/extension", "sampleID_to_canonical_id_map": "s3://somefile.csv"}'
Splitting the command at everything after --notebook-params:
my $cmd0 = 'databricks jobs run-now --job-id 35 --notebook-params ';
my $args = "'{\"directory\": \"$in-dir\", \"output\": \"$out-dir\", \"sampleID_to_canonical_id_map\": \"$map\"}'";
my $run = run $cmd0, $args, :err, :out;
This fails. No answer is given by either Databricks or the shell; stdout and stderr are empty.
Splitting the entire command by white space
my @cmd = $cmd.split(/\s+/);
my $run = run $cmd, :err, :out
Error: Got unexpected extra arguments ("s3://bucket", "output": "s3://bucket/extension", "sampleID_to_canonical_id_map": "s3://somefile.csv"}'
Submitting the command as a string
my $cmd = "$cmd0\"$in-dir\", \"output\": \"$out-dir\", \"sampleID_to_canonical_id_map\": \"$map\"}'";
Again, stdout and stderr are empty. Exit code 1.
This is something about how run can only accept arrays, not strings (I'm curious why).
If I copy and paste the command that was given to Perl6's run, it works when given from the shell. It doesn't work when given through perl6. This isn't good, because I have to execute this command hundreds of times.
Perhaps Perl6's shell https://docs.perl6.org/routine/shell would be better? I didn't use that because the manual suggests that run is safer. I want to capture both stdout and stderr inside a Proc class.
EDIT: I've gotten this running with shell but have encountered other problems not related to what I originally posted. I'm not sure if this qualifies as being answered then. I just decided to use backticks with perl5. Yes, backticks are deprecated, but they get the job done.
I'm trying to run a series of shell commands
To run shell commands, call the shell routine. It passes the positional argument you provide it, coerced to a single string, to the shell of the system you're running the P6 program on.
For running commands without involving a shell, call the run routine. The first positional argument is coerced to a string and passed to the operating system as the filename of the program you want run. The remaining arguments are concatenated together with a space in between each argument to form a single string that is passed as a command line to the program being run.
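For example, a minimal sketch of the difference (assuming a Unix-like system where ls is an installed program):

shell 'ls -l /tmp';        # one string, coerced and handed to the system shell
run 'ls', '-l', '/tmp';    # no shell: 'ls' names the program, the rest is its command line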
my $cmd0 = 'databricks jobs run-now --job-id 35 --notebook-params ';
That's wrong for both shell and run:
shell only accepts one argument and $cmd0 is incomplete.
The first argument for run is a string interpreted by the OS as the filename of a program to be run and $cmd0 isn't a filename.
So in both cases you'll get either no result or nonsense results.
Your other two experiments are also invalid in their own ways as you discovered.
this is something about how run can only accept arrays, and not strings (I'm curious why)
run can accept a single argument. It would be passed to the OS as the name of the program to be run.
It can accept two arguments. The first would be the program name, the second the command line passed to the program.
It can accept three or more arguments. The first would be the program name, the rest would be concatenated to form the command line passed to the program. (There are cases where this is more convenient coding wise than the two argument form.)
run can also accept a single array. The first element would be the program name and the rest the command line passed to it. (There are cases where this is more convenient.)
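A minimal sketch of those calling forms, using ls as a stand-in program:

run 'ls';                            # single argument: the program name
run 'ls', '-l';                      # two arguments: program name plus command line
run 'ls', '-l', '/tmp';              # three or more: the rest form the command line
my @cmd = 'ls', '-l', '/tmp';
run @cmd;                            # single array: first element is the program name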
I just decided to use backticks with perl5. Yes, backticks are deprecated, but they get the job done.
Backticks are subject to code injection and shell interpolation attacks and errors. But yes, if they work, they work.
P6 has direct equivalents of most P5 features. This includes backticks. P6 has two variants:
The safer P6 alternative to backticks is qx. The qx quoting construct calls the shell but does not interpolate P6 variables so it has the same sort of level of danger as using shell with a single quoted string.
The qqx variant is the direct equivalent of P5 backticks or using shell with a double quoted string so it suffers from the same security dangers.
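A quick sketch of both, assuming a shell where ls exists:

my $dir = '/tmp';
my $out1 = qx{ls /tmp};     # shell, but no interpolation of P6 variables
my $out2 = qqx{ls $dir};    # shell plus interpolation, like P5 backticks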
Two mistakes:
the simplistic split cuts up the last, single parameter into multiple arguments
you are passing $cmd to run, not @cmd
use strict;
my @cmd = ('/tmp/dummy.sh', '--param1', 'param2 with spaces');
my $run = run @cmd, :err, :out;
print(@cmd ~ "\n");
print("EXIT_CODE:\t" ~ $run.exitcode ~ "\n");
print("STDOUT:\t" ~ $run.out.slurp ~ "\n");
print("STDERR:\t" ~ $run.err.slurp ~ "\n");
output:
$ cat /tmp/dummy.sh
#!/bin/bash
echo "prog: '$0'"
echo "arg1: '$1'"
echo "arg2: '$2'"
exit 0
$ perl6 dummy.pl
/tmp/dummy.sh --param1 param2 with spaces
EXIT_CODE: 0
STDOUT: prog: '/tmp/dummy.sh'
arg1: '--param1'
arg2: 'param2 with spaces'
STDERR:
If you can avoid generating $cmd as a single string, I would generate it into @cmd directly. Otherwise you'll have to implement a complex split operation that handles quoting.
Hashcat doesn't support the target application I'm trying to crack, but I'm wondering whether the mask function can be 'fed' the list of passwords and parsed through the rockyou rule to generate an effective wordlist for me?
If so, how can this be done? The documentation leaves a lot to be desired!
Many thanks
I used HashCatRulesEngine:
https://github.com/llamasoft/HashcatRulesEngine
You can chain all the Hashcat rules together; it then union-selects them, weeds out any duplicates, and takes your sample password file as input.
It then generates all possible permutations.
For instance:
echo "password">foo
./hcre /Users/chris/Downloads/hashcat-4.0.0/rules/Incisive-leetspeak.rule /Users/chris/Downloads/hashcat-4.0.0/rules/InsidePro-HashManager.rule /Users/chris/Downloads/hashcat-4.0.0/rules/InsidePro-PasswordsPro.rule /Users/chris/Downloads/hashcat-4.0.0/rules/T0XlC-insert_00-99_1950-2050_toprules_0_F.rule /Users/chris/Downloads/hashcat-4.0.0/rules/T0XlC-insert_space_and_special_0_F.rule /Users/chris/Downloads/hashcat-4.0.0/rules/T0XlC-insert_top_100_passwords_1_G.rule /Users/chris/Downloads/hashcat-4.0.0/rules/T0XlC.rule /Users/chris/Downloads/hashcat-4.0.0/rules/T0XlCv1.rule /Users/chris/Downloads/hashcat-4.0.0/rules/best64.rule /Users/chris/Downloads/hashcat-4.0.0/rules/combinator.rule /Users/chris/Downloads/hashcat-4.0.0/rules/d3ad0ne.rule /Users/chris/Downloads/hashcat-4.0.0/rules/dive.rule /Users/chris/Downloads/hashcat-4.0.0/rules/generated.rule /Users/chris/Downloads/hashcat-4.0.0/rules/generated2.rule /Users/chris/Downloads/hashcat-4.0.0/rules/hybrid /Users/chris/Downloads/hashcat-4.0.0/rules/leetspeak.rule /Users/chris/Downloads/hashcat-4.0.0/rules/oscommerce.rule /Users/chris/Downloads/hashcat-4.0.0/rules/rockyou-30000.rule /Users/chris/Downloads/hashcat-4.0.0/rules/specific.rule /Users/chris/Downloads/hashcat-4.0.0/rules/toggles1.rule /Users/chris/Downloads/hashcat-4.0.0/rules/toggles2.rule /Users/chris/Downloads/hashcat-4.0.0/rules/toggles3.rule /Users/chris/Downloads/hashcat-4.0.0/rules/toggles4.rule /Users/chris/Downloads/hashcat-4.0.0/rules/toggles5.rule /Users/chris/Downloads/hashcat-4.0.0/rules/unix-ninja-leetspeak.rule < foo >passwordsx
1 password, the word "password", was permutated a total of:
bash-3.2# wc -l passwordsx
227235 passwordsx
bash-3.2#
times. Meaning each word you feed into this generates 227235 possible combinations, roughly giving you full coverage.
You can use hashcat itself as a candidate generator by adding the --stdout switch (then pipe to your file or program of choice). I haven't tried all the possibilities, but it should work with any of the supported hashcat modes.
Here's an example using a ruleset: https://hashcat.net/wiki/doku.php?id=rule_based_attack#debugging_rules
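For instance, a minimal sketch (the wordlist and rule paths here are assumptions; adjust them to your install):

hashcat --stdout -r rules/best64.rule rockyou.txt > wordlist.txt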
I want to save the output of a program to a variable.
I use the following approach, but it fails.
$ PIPE RUN TEST | DEFINE/JOB VALUE #SYS$PIPE
$ x = f$logical("VALUE")
I got an error: %DCL-W-MAXPARM, too many parameters - reenter command with fewer parameters \WORLD\
Reference: How to assign the output of a program to a variable in a DCL com script on VMS?
The usual way to do this is to write the output to a file, then read from the file and put that into a DCL symbol (or logical). Although not obvious, you can do this with the PIPE command as well:
$ pipe r 2words
hello world
$ pipe r 2words |(read sys$pipe line ; line=""""+line+"""" ; def/job value &line )
$ sh log value
"VALUE" = "hello world" (LNM$JOB_85AB4440)
$
If you are able to change the program, add some code to it to write the required values into symbols or logicals (see the LIB$ routines).
If you can modify the program, using LIB$SET_SYMBOL in the program defines a DCL symbol (what you are calling a variable) for DCL. That's the cleanest way to do this. If it really needs to be a logical, then there are system calls that define logicals.
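For instance, a minimal C sketch of the LIB$SET_SYMBOL approach (the symbol name and value are illustrative; status checking is omitted):

#include <descrip.h>
#include <lib$routines.h>

int main(void)
{
    $DESCRIPTOR(sym_name, "VALUE");
    $DESCRIPTOR(sym_val, "hello world");
    /* Defines the DCL symbol VALUE for the calling process; the omitted
       third argument selects the symbol table (local by default). */
    lib$set_symbol(&sym_name, &sym_val, 0);
    return 0;
}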
I have seen this used to pass results to another command in Redis,
and via the command line this command works well:
src/redis-cli keys '*' | xargs src/redis-cli mget
However, how can we achieve the same effect via Lettuce (I started trying out 4.0.2.Final)?
Also, a solution to this is particularly important in the following scenario:
Say we are using geolocation capabilities, and we add a set of locations of "my-location-category"
using GEOADD
GEOADD "category-1" 8.6638775 49.5282537 "location-id:1" 8.3796281 48.9978127 "location-id:2" 8.665351 49.553302 "location-id:3"
Next, say we do a GEORADIUS to get locations within a 10 km radius of 8.6582361 49.5285495 for "category-1".
Now we get back "location-id:1" & "location-id:3".
Given that I have already set values for the above keys "location-id:1" & "location-id:3",
I want to pipe commands to do the GEORADIUS as well as an MGET on all the matching results.
Does Redis provide a feature to do that?
And/or how can we achieve this via the Lettuce client library without first manually iterating through the results of GEORADIUS and then doing a manual MGET?
That would give better performance for the program that uses it.
Does anyone know how we can do this ?
Update
This is the piped command for the scenario I discussed above:
src/redis-cli GEORADIUS "category-1" 8.6582361 49.5285495 10 km | xargs src/redis-cli mget
Now we need to know how to do this via Lettuce
IMPORTANT: never use KEYS, always use SCAN instead if you must.
This isn't really a question about Lettuce nor Java so I can actually answer it :)
What you're trying to do is use the results from a read operation (GEORADIUS) as input (key names) for another read operation (MGET). This type of flow can't be pipelined, well, just because of that - pipelining means that you don't need the answers for operations right away, but in your case you do.
However.
Since you're reading String keys with MGET, you might as well just denormalize everything (remember, we're NoSQL) and store the contents of these keys in the Sorted Set's members, e.g.:
GEOADD "category-1" 8.6638775 49.5282537 "location-id:1:moredata:evenmoredata:{maybe a JSON document here}:orperhapsmsgpack"
This will allow you to get the locations and their "data" with one GEORADIUS call. Of course, any updates to location:1's data will need to be done across all categories.
A note about Lua scripts: while a Lua script could definitely save on the back and forth in this case, any such script will be against best practices/not cluster safe.
After digging around and studying Lua scripting, my conclusion is that removing round-trips in such a way can only be done via Lua scripts, as suggested by Itamar Haber.
I ended up creating a Lua script file (myscript.lua) as below:
local locationKeys = redis.call('GEORADIUS', 'category-1', '8.6582361', '49.5285495', '10', 'km' )
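-- unpack() of an empty table yields nil, so the next test guards the no-matches case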
if unpack(locationKeys) == nil then
return nil
else
return redis.call('MGET', unpack(locationKeys))
end
** Of course we should be sending in parameters to this... this is just a PoC :)
Now you can execute it via the command:
src/redis-cli EVAL "$(cat myscript.lua)" 0
Then, to reduce the network overhead of sending the entire script across to Redis for every execution, we have the option of registering the script with Redis.
Redis will give us back a sha1 digest of the script, which can be used to reference it in subsequent calls.
This can be done as below:
src/redis-cli SCRIPT LOAD "$(cat myscript.lua)"
This should give back a sha1 code, something like: 49730aa2ed3034ee48f818e486b2bdf1b500b19e
Subsequent calls can then be made using this code, e.g.
src/redis-cli evalsha 49730aa2ed3034ee48f818e486b2bdf1b500b19e 0
The sad part here, however, is that the sha1 digest is remembered only as long as the Redis instance is running. If it is restarted, the sha1 digest is lost; then you do the SCRIPT LOAD once again. If nothing changes in the script, the sha1 digest will be the same.
Ideally, when going through the client API, we should first attempt EVALSHA; if that returns a "No matching script" error, then as a fallback do SCRIPT LOAD, procure the sha1 code once again, keep an internal map of it, and use that sha1 code for further calls.
This can well be done via Lettuce; I could find the methods for all of those. A sketch follows below. Hope this gives a good insight into a solution for the problem.
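As a rough sketch against Lettuce 4.x (the com.lambdaworks namespace; class name, connection URI, and retry handling are illustrative, not the library's prescribed pattern):

import java.util.List;
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisCommandExecutionException;
import com.lambdaworks.redis.ScriptOutputType;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class GeoRadiusMget {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        RedisCommands<String, String> redis = client.connect().sync();

        // Same script as myscript.lua above, inlined for brevity
        String script = "local locationKeys = redis.call('GEORADIUS', 'category-1', "
            + "'8.6582361', '49.5285495', '10', 'km') "
            + "if unpack(locationKeys) == nil then return nil "
            + "else return redis.call('MGET', unpack(locationKeys)) end";

        String sha = redis.scriptLoad(script);  // register once, keep the digest

        List<String> values;
        try {
            values = redis.evalsha(sha, ScriptOutputType.MULTI);
        } catch (RedisCommandExecutionException e) {
            // NOSCRIPT after a server restart: re-register and retry once
            if (e.getMessage() != null && e.getMessage().startsWith("NOSCRIPT")) {
                sha = redis.scriptLoad(script);
                values = redis.evalsha(sha, ScriptOutputType.MULTI);
            } else {
                throw e;
            }
        }
        System.out.println(values);  // null if no locations matched
        client.shutdown();
    }
}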
Let me start with: I am very new to PowerShell, and to programming for that matter. I have a PowerShell script that takes some arguments and outputs a value.
The result of the script is going to be something like 9/10, where 9 would be the number of active nodes out of the total number of nodes. I want to assign the output to a variable so I can then call another script based on the value.
This is what I have tried, but it does not work:
$active = (./MyScript.ps1 lb uid **** site)
I have also tried the following, which seems to assign the variable an empty string:
$active = (./MyScript.ps1 lb uid **** site | out-string)
In both cases they run and give me the value immediately instead of assigning it to the variable. When I call the variable, I get no data.
I would embrace PowerShell's object-oriented nature and, rather than output a string like "9/10", create an object with properties like NumActiveNodes and TotalNodes, e.g. in your script, output like so:
new-object psobject -Property @{NumActiveNodes = 9; TotalNodes = 10}
Of course, substitute in the dynamic values for num active and total nodes. Note that uncaptured objects will automatically appear on your script's output. Then, if this is your script's only output, you can do this:
$obj = .\MyScript.ps1
$obj.NumActiveNodes
9
$obj.TotalNodes
10
It will make it nicer for those consuming the output of your script. In fact the output is somewhat self-documenting e.g.:
C:\PS> .\MyScript.ps1
NumActiveNodes TotalNodes
-------------- ----------
9 10
P.S. When did StackOverflow start sucking so badly at formatting PowerShell script?
If you don't want to change the script (and assuming that only the $avail_count/$total_count line is written by the script), you can do:
$var = powershell .\MyScript.ps1
Or just drop the write-host and have just $avail_count/$total_count
and then do:
$var = .\MyScript.ps1
You could just set a $global:foobar in your script and it will persist after the script has finished, for example:
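A minimal sketch of that (the variable name is arbitrary):

# Inside MyScript.ps1
$global:active = "$avail_count/$total_count"

# Back in the calling session, after running .\MyScript.ps1
$active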
I know the question is a bit older, but it might help someone find the right answer.
I had a similar problem with executing a PS script from another PS script and saving the output into a variable; here are 2 VERY good answers:
Mathias
mklement0
Hope it helps!
Please up-vote them if so, because they are really good!