AFAIK Redis is single-threaded and uses an event loop under the hood. I would like to understand two things:
Are all Redis commands synchronous?
If they are asynchronous, then given:
SET mykey "Hello" (first command)
GET mykey (second command)
there is a possibility for the second command to return nil if the SET command hasn't been executed yet. Is that correct?
Redis is single-threaded, which means each command executes atomically.
In your example above: if the SET command is executed first, the GET command will wait until the SET completes; if the GET command is executed first, it will return nil and the SET will be executed afterwards. Either way, each command's execution is atomic.
For details, refer to the documentation: https://redis.io/topics/faq
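As an illustrative sketch (using the redis-py client, which is an assumption, with the key from your example): because Redis executes commands one at a time, a GET issued after a SET on the same connection always observes the written value.

import redis

r = redis.Redis()          # assumes a Redis server on localhost:6379
r.set("mykey", "Hello")    # processed first, atomically
print(r.get("mykey"))      # b'Hello' -- never nil on the same connection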
P.S.: As of Redis 4.0 there is some multi-threading capability; refer to the documentation for details.
I am using Redis 7.0.4 and trying to use the RESP3 protocol so that the Lua script responses make more sense. When I use the HELLO command in a Lua script, it throws the error:
This Redis command is not allowed from script
If I set it from the command line, it seems to be temporary and falls back to 2.
What is the right way to set it, so my script can take advantage of it?
HELLO is not allowed in a Lua script. If you want to switch the RESP version inside a Lua script, you should call redis.setresp(version) instead.
-- use RESP3
redis.setresp(3)
Also, RESP3 supports some new types, so you need to be careful with the RESP3 data type conversions from/to Lua types. Check this for details.
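For illustration, here is a minimal sketch driven from the redis-py client (the key and field names are hypothetical): after redis.setresp(3), a map reply such as HGETALL's arrives in Lua as a table with a single 'map' field, instead of a flat array of alternating fields and values.

import redis

LUA = """
redis.setresp(3)
local reply = redis.call('HGETALL', KEYS[1])
-- under RESP3, reply.map is a Lua table keyed by field name
return reply.map['greeting']
"""

r = redis.Redis()
r.hset("myhash", "greeting", "hello")
script = r.register_script(LUA)
print(script(keys=["myhash"]))  # b'hello'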
I'm using Redis INCR as our request counter, as I've researched that INCR is atomic and thread-safe. Now I want to add an expiry time for each key, but this process does not seem to be thread-safe; for example, Redis could crash after the INCR is done but before the EXPIRE command runs. The basic pseudocode is as below:
value := redisClient.getValue(key)
if value > common.ChatConfig.SendMsgRetryCfg.RetryCount {
    return error
}
value, err := redisClient.Incr(key).Result()
if err == nil {
    redisClient.Expire(key, 24*time.Hour)
}
How can I change my code to make the process atomic and thread-safe? Thank you.
To make the two commands "atomic", use a Redis transaction or a Lua script. This is thread-safe and fault-tolerant, as any changes will be persisted only after all commands (in the tx/script) have finished.
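A minimal sketch of the Lua script approach, shown here with the redis-py client (your snippet is Go, and the key name and TTL below are hypothetical, but the script itself is client-agnostic): the whole script runs atomically on the server, so nothing can interleave between the INCR and the EXPIRE.

import redis

INCR_WITH_TTL = """
-- both commands execute as one atomic unit on the server
local count = redis.call('INCR', KEYS[1])
redis.call('EXPIRE', KEYS[1], ARGV[1])
return count
"""

r = redis.Redis()
incr_with_ttl = r.register_script(INCR_WITH_TTL)
count = incr_with_ttl(keys=["request:counter"], args=[24 * 60 * 60])

A MULTI/EXEC transaction achieves the same for a fixed pair of commands; guarding the EXPIRE with if count == 1 would set the TTL only when the key is first created instead of refreshing it on every increment.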
I'm working on a project using a Redis cache with CachingFramework.Redis.
I already have the get function working with Redis (using FetchObject),
but I need to update the Save function to save to the DB and override/update the key/value in Redis.
Should I use SetObject, or do I need to call Remove(key) first?
It really depends on the when parameter, but by default the SET operation in Redis overrides the current value, regardless of its type. Here is the documentation.
So you don't need to call the Remove method.
You can check here how the StackExchange.Redis library chooses between the different SET commands (SET, SETNX, SETEX) depending on the parameters.
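To illustrate that behavior at the Redis level (a sketch with the redis-py client; the key and values are hypothetical): a plain SET replaces whatever the key holds, even a value of a different type, while the NX flag gives the SETNX behavior of only setting a missing key.

import redis

r = redis.Redis()
r.rpush("mykey", "a", "b")      # the key currently holds a list
r.set("mykey", "new value")     # SET overwrites it unconditionally
r.set("mykey", "x", nx=True)    # SETNX variant: no-op, the key already exists
print(r.get("mykey"))           # b'new value'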
I want to save some information within the Python code that is part of my Snakefile, and have this information available to the Python code in every instance that Snakemake creates when it is running the workflow. But a separate run of the workflow should have its own separate instance of the information.
For example, say I were to create a UUID in my Python code and then use it later in that code. I want the UUID to be the same one in all running instances of the workflow. Instead, a new UUID gets created each time an instance is started.
If I start Snakemake twice at the same time, I would want each of the two runs to create its own UUID, but within each run, all instances created by the run would use the same UUID. How can I do this? Is there an identifier somewhere in the snakemake object that remains the same within one run across all instances, but changes from run to run?
Here's an example that fails with a 'No rule to produce' error:
import uuid

ID = str(uuid.uuid4())
print("ID:", ID)

rule all:
    output: ID
    run: print("Hello world")
If instead of 'run' the rule uses 'shell', it works fine, so I assume that Snakemake is re-running the Snakefile code when it executes the 'run' portion of the rule. How could this be modified to retain the first UUID value instead of generating a second one? Also, why isn't the ID specified for output in the rule captured when the rule is first processed, without requiring a second invocation of the Python code? Since it works with 'shell', the second invocation is not needed specifically for processing the 'output' statement.
Indeed, when you use a run block, Snakemake will invoke itself to execute that job, meaning that it also reparses the Snakefile, generating a new UUID. The same will happen on the cluster. There are good technical reasons for doing it like this (performance, the Python GIL, restrictions with pickling, simplicity and robustness of the implementation).
I am not sure what exactly you want to achieve, but it might help to look at this: http://snakemake.readthedocs.io/en/stable/project_info/faq.html#i-want-to-pass-variables-between-rules-is-that-possible
I've found a method that seems to work: use the process group ID:
import os

ID = str(os.getpgrp())
Multiple instances of the same pipeline have the same group ID. However, I'm not sure whether this remains true on a cluster; probably not. In my case that didn't matter.
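An alternative sketch, not from the answers above (the file name is hypothetical): persist the ID in the working directory the first time the Snakefile is parsed, so that re-parses triggered by 'run' blocks reuse the same value. Unlike the process-group approach, though, two runs started from the same working directory would share the ID unless the file is cleaned up between runs.

import os
import uuid

ID_FILE = ".run_id"  # hypothetical marker file; remove it when the run completes
if os.path.exists(ID_FILE):
    # a re-parse of the Snakefile: reuse the ID from the first parse
    with open(ID_FILE) as f:
        ID = f.read().strip()
else:
    # first parse of this run: create and persist a fresh ID
    ID = str(uuid.uuid4())
    with open(ID_FILE, "w") as f:
        f.write(ID)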
I'm writing a primitive that takes in two agentsets and a command block. It needs to call a few functions, execute the command block in the current context, and then call another function. Here's what I have so far:
class WithContext(pushGraphContext: GraphContext => Unit, popGraphContext: api.World => GraphContext)
    extends api.DefaultCommand {

  override def getSyntax = commandSyntax(
    Array(AgentsetType, AgentsetType, CommandBlockType))

  def perform(args: Array[Argument], context: Context) {
    val turtleSet = args(0).getAgentSet.requireTurtleSet
    val linkSet = args(1).getAgentSet.requireLinkSet
    val world = linkSet.world
    val gc = new GraphContext(world, turtleSet, linkSet)
    val extContext = context.asInstanceOf[ExtensionContext]
    val nvmContext = extContext.nvmContext
    pushGraphContext(gc)
    // execute command block here
    popGraphContext(world)
  }
}
I looked at some examples that used nvmContext.runExclusively, but that looked like it's specifically for having a given agentset run the command block. I want the current agent (possibly the observer) to run it. Should I wrap nvm.agent in an agentset and pass that to nvmContext.runExclusively? If so, what's the easiest way to wrap an agent in an agentset? If not, what should I do?
Method #1
The quicker-but-arguably-dirtier method is to use runExclusiveJob, as demonstrated in (e.g.) the create-red-turtles command in https://github.com/NetLogo/Sample-Scala-Extension/blob/master/src/SampleScalaExtension.scala.
To wrap the current agent in an agentset, you can use agent.AgentSetBuilder. (You could also pass an Array[Agent] of length 1 to one of the ArrayAgentSet constructors, but I'd recommend AgentSetBuilder since it's less reliant on internal implementation details which are likely to change.)
Method #2
The disadvantage of method #1 is the slight constant overhead associated with creating and setting up the extra AgentSet, Job, and Context objects and directing execution through them.
Creating and running a separate job isn't actually how built-in commands like if and while work. Instead of making a new job, they remain in the current job and cause commands in a command block to run (or not run) by manipulating the instruction pointer (nvm.Context.ip) to jump into them or skip over them.
I believe an extension command could do the same. I'm not certain if it has been tried before, but I can't see any reason it wouldn't work.
Doing it this way would involve understanding more about NetLogo engine internals, as documented at https://github.com/NetLogo/NetLogo/wiki/Engine-architecture. You'd model your primitive after e.g. https://github.com/NetLogo/NetLogo/blob/5.0.x/src/main/org/nlogo/prim/etc/_if.java, including altering your implementation of nvm.CustomAssembled. (Note that prim._extern, which runs extension commands, delegates its assemble method to the wrapped command's own assemble method, so this should work.) In your assemble method, instead of calling done() at the end to terminate the job, you'd just allow execution to fall through.
I could try to construct an example that works this way, but it'd take me a couple hours; it's probably not worth me doing unless there's a real need.