RedisSearch Module for Redis cluster - redis

I am struggling to install the RedisSearch module on a Redis cluster (local environment). I am using the command below but keep getting an error, as I don't know what the values for OPT1 and OPT2 should be:
redis-cli --cluster call 127.0.0.1:30001 MODULE LOAD redisearch.so OPT1 OPT2

redisearch.so OPT1 OPT2
This command is used to tune the run-time configuration options when loading the module. OPT1 and OPT2 are simply a configuration key and its corresponding value.
Example: redis-server --loadmodule ./redisearch.so TIMEOUT 100
Here OPT1 is TIMEOUT and OPT2 is 100, where TIMEOUT is the maximum amount of time in milliseconds that a search query is allowed to run.
You can find more of the configuration options at [0]
[0] https://oss.redislabs.com/redisearch/Configuring.html
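Applied to the cluster-call form from the question, a minimal sketch would be (the module path is a placeholder, and MODULE LOAD generally needs a path that each node can resolve; TIMEOUT 100 is just the example option from above):
redis-cli --cluster call 127.0.0.1:30001 MODULE LOAD /path/to/redisearch.so TIMEOUT 100
Alternatively, the same key/value pairs can follow a loadmodule /path/to/redisearch.so line in each node's redis.conf so the module is loaded at startup.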

Related

Snakemake explicit handling for Out Of Memory (OOM) failures

A Snakemake workflow can re-attempt a job after any type of failure, including an Out Of Memory (OOM) error, by scaling a resource with the attempt number, e.g.
def get_mem_mb(wildcards, attempt):
    return attempt * 100

rule:
    input: ...
    output: ...
    resources:
        mem_mb=get_mem_mb
    shell:
        "..."
Is there any way in Snakemake to deal explicitly with a memory-related error, as Nextflow does, e.g. retrying only when the exit status is memory related (137 on an LSF system)?
process foo {
    memory { 2.GB * task.attempt }
    time { 1.hour * task.attempt }
    errorStrategy { task.exitStatus in 137..140 ? 'retry' : 'terminate' }
    maxRetries 3
    script:
    <your job here>
}
I could not find this information anywhere, thanks.
I am not sure if there is an explicit way for Snakemake to handle out of memory errors. However, the memory function you have in your code example is what I've done to handle memory issues using Snakemake.
To make use of the function, you need to run Snakemake with --restart-times, which controls how many times a failed job is retried; each retry increments the attempt value passed to the resource function. You can additionally pass --rerun-incomplete so that jobs whose output was left incomplete are rerun.
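A minimal invocation along those lines might look like this (the restart count of 3 is just an example):
snakemake -s Snakefile --restart-times 3 --rerun-incomplete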

springboot use lua to create redis bloom filter : @user_script:1: ERR bad error rate

I use the Spring Boot provided RedisTemplate to execute the Lua script:
return redis.call('bf.reserve', KEYS[1],ARGV[1],ARGV[2])
but it keeps failing with:
ERR Error running script (call to f_264cca3824c7a277f5d3cf63f1b2642a0750e989): @user_script:1: ERR bad error rate.
This is my Docker image:
redislabs/rebloom:2.2.5
If I run this script from the Linux command line, it works:
[root@daice ~]# redis-cli --eval a.lua city , 0.001 100000
OK
[root@daice ~]# redis-cli
127.0.0.1:6379> keys *
1) "city"
I just looked up the error in this link; the snippet looks like:
if (RedisModule_StringToDouble(argv[2], &error_rate) != REDISMODULE_OK) {
    return RedisModule_ReplyWithError(ctx, "ERR bad error rate");
}
I assume the argument that you are providing for error_rate does not convert to a double value.
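One way to reproduce that check outside of Spring is to call the module command directly (the command name comes from the bf.reserve call above; the second invocation is a hypothetical illustration of an argument that is not a plain numeric string, e.g. a value a serializer has wrapped in quotes):
redis-cli BF.RESERVE city1 0.001 100000
redis-cli BF.RESERVE city2 '"0.001"' 100000
The first should succeed, while the second should fail with the same "ERR bad error rate", so it is worth checking exactly what bytes your RedisTemplate serializer sends for ARGV[1] and ARGV[2].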

Handling SIGPIPE error in snakemake

The following snakemake script:
rule all:
    input:
        'test.done'

rule pipe:
    output:
        'test.done'
    shell:
        """
        seq 1 10000 | head > test.done
        """
fails with the following error:
snakemake -s test.snake
Provided cores: 1
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 all
1 pipe
2
rule pipe:
output: test.done
jobid: 1
Error in job pipe while creating output file test.done.
RuleException:
CalledProcessError in line 9 of /Users/db291g/Tritume/test.snake:
Command '
seq 1 10000 | head > test.done
' returned non-zero exit status 141.
File "/Users/db291g/Tritume/test.snake", line 9, in __rule_pipe
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/concurrent/futures/thread.py", line 55, in run
Removing output files of failed job pipe since they might be corrupted:
test.done
Will exit after finishing currently running jobs.
Exiting because a job execution failed. Look above for error message
The explanation returned non-zero exit status 141 seems to say that snakemake has caught the SIGPIPE failure sent by head. I guess strictly speaking snakemake is doing the right thing in catching the failure, but I wonder if it would be possible to ignore some types of errors like this one. I have a snakemake script using the head command and I'm trying to find a workaround for this error.
Yes, Snakemake sets pipefail by default, because in most cases this is what people implicitly expect. You can always deactivate it for specific commands by prepending set +o pipefail; to the shell command.
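Applied to the rule above, that would look like this (a sketch; only the set +o pipefail; prefix is new):
rule pipe:
    output:
        'test.done'
    shell:
        """
        set +o pipefail; seq 1 10000 | head > test.done
        """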
A somewhat clunky solution is to append || true to the script. This makes the command always exit cleanly, which by itself is not acceptable, so to check whether the script actually succeeded you can query the array variable ${PIPESTATUS[@]} and make sure it contains the expected exit codes:
This script is ok:
seq 1 10000 | head | grep 1 > test.done || true
echo ${PIPESTATUS[@]}
141 0 0
This is not ok:
seq 1 10000 | head | FOOBAR > test.done || true
echo ${PIPESTATUS[@]}
0

Using argparse to accept only one group of required arguments

I'm trying to use parser = argparse.ArgumentParser for a little program I'm writing.
The program accepts as input EITHER (a path to a txt file) OR (opt1 && opt2 && opt3).
Meaning, if the user wants to use a txt file as input, he can't provide any of the opts, and if he provides any opt he has to provide all 3 of them and can't provide a path to a txt file.
I tried using add_mutually_exclusive_group but I'm not sure how, because the second group of arguments is a group itself.
This is what I tried so far:
import argparse
parser = argparse.ArgumentParser(description='this is the description',)
root_group = parser.add_mutually_exclusive_group()
group_list = root_group.add_mutually_exclusive_group()
group_list.add_argument('-path', help='path to the txt file')
group_list = root_group.add_mutually_exclusive_group()
group_list.add_argument('-opt1', help='opt1')
group_list.add_argument('-opt2', help='opt2')
group_list.add_argument('-opt3', help='opt3')
args = parser.parse_args()
-
python tests.py -path txt -opt1 asdasd
usage: tests.py [-h] [[-path PATH] [-opt1 OPT1 | -opt2 OPT2 | -opt3 OPT3]
tests.py: error: argument -opt1: not allowed with argument -path
path is not allowed with any of the opts - that's exactly what I want.
But I want that if the user supplies even one opt, he will have to provide all of them.
I also want at least one of the groups to be satisfied.
Mutually exclusive groups aren't designed for nesting. Your code is accepted, but the net effect is to make all 4 arguments exclusive: it will accept only one of path, opt1, opt2, etc.
While I have explored defining nested groups, allowing any/or/and combinations within groups, such a feature is a long way off.
Since your user has to provide all 3 opt values, I'd suggest condensing them into one argument:
root_group.add_argument('--opt', nargs=3)
root_group.add_argument('--path')
Usage should look something like
usage: tests.py [-h] [--path PATH | --opt OPT OPT OPT]
Contrast that with a hypothetical usage that allows nested inclusive groups:
[-path PATH | [-opt1 OPT1 & -opt2 OPT2 & -opt3 OPT3]]
===========
With a tuple metavar, the usage can be refined to:
g.add_argument('--opt',nargs=3,metavar=('OPT1','OPT2','OPT3'))
usage: ipython3 [-h] [--path PATH | --opt OPT1 OPT2 OPT3]
=============
Your other option is to write a custom usage message and perform your own logical tests after parsing, e.g. as in the sketch below.
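A minimal sketch of that approach (the usage string and error messages are illustrative; argument names follow the question):
import argparse

parser = argparse.ArgumentParser(
    usage='tests.py [-h] (-path PATH | -opt1 OPT1 -opt2 OPT2 -opt3 OPT3)')
parser.add_argument('-path')
parser.add_argument('-opt1')
parser.add_argument('-opt2')
parser.add_argument('-opt3')
args = parser.parse_args()

opts = [args.opt1, args.opt2, args.opt3]
if args.path is not None and any(o is not None for o in opts):
    parser.error('-path cannot be combined with -opt1/-opt2/-opt3')
if args.path is None and not all(o is not None for o in opts):
    parser.error('provide either -path or all of -opt1, -opt2 and -opt3')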
I would use a subcommand parser instead. Your "options" aren't really optional; they are 3 required arguments in a particular context.
import argparse
p = argparse.ArgumentParser()
sp = p.add_subparsers()
p1 = sp.add_parser('file')
p1.add_argument('path')
p2 = sp.add_parser('opts')
p2.add_argument('opt1')
p2.add_argument('opt2')
p2.add_argument('opt3')
args = p.parse_args()
Then you would invoke your script with
python tmp.py file foo.txt
or
python tmp.py opts 1 2 3
The help will tell you about the required positional argument whose value is either file or opts:
% python tmp.py -h
usage: tmp.py [-h] {file,opts} ...
positional arguments:
{file,opts}
optional arguments:
-h, --help show this help message and exit
and each subcommand has its own usage message:
% python tmp.py file -h
usage: tmp.py file [-h] path
positional arguments:
path
optional arguments:
-h, --help show this help message and exit
% python tmp.py opts -h
usage: tmp.py opts [-h] opt1 opt2 opt3
positional arguments:
opt1
opt2
opt3
optional arguments:
-h, --help show this help message and exit

Wrong qstat GPU resource count SGE

I have a GPU resource called gpus. When I run qstat -F gpus I get weird output of the format "qc:gpus=-1", i.e. a negative number of available GPUs is reported, while qstat -g c says I have multiple GPUs available. Multiple jobs fail because of "unavailable gpus". It's as if the counting of GPUs starts from 1 instead of 8 on each node, so as soon as more than 1 is used the count goes negative. My queue is:
hostlist node-01 node-02 node-03 node-04 node-05
seq_no 0
load_thresholds NONE
suspend_thresholds NONE
nsuspend 1
suspend_interval 00:05:00
priority 0
min_cpu_interval 00:05:00
processors UNDEFINED
qtype BATCH INTERACTIVE
ckpt_list NONE
pe_list smp mpich2
rerun FALSE
slots 1,[node-01=8],[node-02=8],[node-03=8],[node-04=8],[node-05=8]
Does anyone have any idea why this is happening?
I believe the "gpus" complex has to be set in the host configuration. You can see it if you do
qconf -se node-01
And you can check the definition of the "gpus" complex with
qconf -sc
For instance, my UGE has this definition for the "ngpus" complex:
#name shortcut type relop requestable consumable default urgency
ngpus gpu INT <= YES YES 0 1000
And an example node "qconf -se gpu01":
hostname gpu01.cm.cluster
...
complex_values exclusive=true,m_mem_free=65490.000000M, \
m_mem_free_n0=32722.546875M,m_mem_free_n1=32768.000000M, \
ngpus=2,slots=16,vendor=intel
You can modify the value by "qconf -me node-01". See the man page complex(5) for details.
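For the queue above, fixing the per-host count would look something like this (a sketch; the value 8 matches the slots per node in the question, and your complex may be named gpus rather than ngpus): run
qconf -me node-01
and set, in the editor that opens, a line such as
complex_values gpus=8
After saving this for each node, qstat -F gpus should report the full count and only decrease as running jobs consume the resource.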