I'm using Redis INCR as our request counter because, from what I've researched, INCR is atomic and thread-safe. Now I want to add an expiration time for each key, but this two-step process doesn't seem to be safe: for example, Redis could crash after the INCR completes but before the EXPIRE command runs. The basic pseudocode is below:
// Read the current counter and enforce the limit.
value := redisClient.getValue(key)
if value > common.ChatConfig.SendMsgRetryCfg.RetryCount {
    return errors.New("retry count exceeded")
}
// Not atomic: Redis may fail after the INCR succeeds but before the EXPIRE
// runs, leaving the counter without a TTL.
value, err := redisClient.Incr(key).Result()
if err == nil {
    redisClient.Expire(key, 24*time.Hour)
}
I'd like to know how to change my code so that the whole process is atomic and thread-safe. Thank you!
To make the two commands "atomic", use a Redis transaction (MULTI/EXEC) or a Lua script. Either is thread-safe and fault-tolerant: the changes take effect only after all the commands in the transaction/script have finished.
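For example, here is a minimal sketch of the Lua-script approach, assuming the go-redis v7 client that the pseudocode above resembles (the TTL handling and helper name are illustrative):

import (
    "time"

    "github.com/go-redis/redis/v7"
)

// INCR and EXPIRE run inside a single Lua script, so Redis executes them as
// one atomic unit; the TTL is only set when the key is first created.
var incrWithTTL = redis.NewScript(`
    local count = redis.call("INCR", KEYS[1])
    if count == 1 then
        redis.call("EXPIRE", KEYS[1], ARGV[1])
    end
    return count
`)

func incrRequestCounter(redisClient *redis.Client, key string) (int64, error) {
    ttlSeconds := int((24 * time.Hour).Seconds())
    res, err := incrWithTTL.Run(redisClient, []string{key}, ttlSeconds).Result()
    if err != nil {
        return 0, err
    }
    return res.(int64), nil
}

Alternatively, redisClient.TxPipeline() can wrap the same two commands in a MULTI/EXEC block, though the EXPIRE would then run on every call rather than only on the first increment.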
AFAIK Redis is single-threaded and uses an event loop under the hood. I would like to understand two things:
Are all Redis commands synchronous?
If they are asynchronous, and I run
SET mykey "Hello" (first command)
GET mykey (second command)
is there a possibility for the second command to return nil because the SET command hasn't executed yet? Is that correct?
Redis is single-threaded, which means each command is executed atomically.
In your example above, if the SET command happens to execute first, the GET command waits until the SET completes; if the GET command happens to execute first, it returns nil and the SET is executed afterwards. Either way, each command execution is atomic.
For details, refer to the documentation: https://redis.io/topics/faq
P.S.: Since Redis 4.0 there is some multi-threading capability; refer to the documentation for details.
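To illustrate the in-order behaviour on a single connection, here is a minimal sketch in Go (using the same go-redis client style as the first question; the key and value are just the ones from your example):

// On one connection the server processes commands in the order they are
// issued, so this GET can never miss the value written by the preceding SET.
if err := redisClient.Set("mykey", "Hello", 0).Err(); err != nil {
    return err
}
val, err := redisClient.Get("mykey").Result()
// val == "Hello"; err would be redis.Nil only if the key did not exist at all

Two different clients can of course still interleave their commands; only each individual command is atomic.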
I'd like to organize a thread barrier: given a single lock object, any thread can obtain it and continue its chain further, but any other thread will stay dormant on the same lock object until the first thread finishes and releases the lock.
Let me express my intention in code (log() simply prints a string to the log):
val mutex = Semaphore(1) // number of permits is 1

source
    .subscribeOn(Schedulers.newThread()) // any unbounded scheduler (io, newThread)
    .flatMap {
        log("#1")
        mutex.acquireUninterruptibly()
        log("#2")
        innerSource
            .doOnSubscribe { log("#3") }
            .doFinally {
                mutex.release()
                log("#4")
            }
    }
    .subscribe()
It actually works well: I can see multiple threads log "#1" while only one of them propagates further after obtaining the mutex lock object; when it releases the lock I see the other logs, and the next thread comes into play. OK.
But sometimes, when the pressure is quite high and the number of threads is greater, say 4-5, I experience a deadlock:
The thread that has acquired the lock prints "#1" and "#2" but then never prints "#3" (so doOnSubscribe() is not called); it simply stops and does nothing, never subscribing to innerSource inside flatMap. So all threads are blocked and the app is not responsive at all.
My question: is it safe to have a blocking operation inside flatMap? I dug into the flatMap source code and I see the place where it internally subscribes:
if (!isDisposed()) {
    o.subscribe(new FlatMapSingleObserver<R>(this, downstream));
}
Is it possible that the subscription of the thread that acquired the lock was somehow disposed?
You can use flatMap's second parameter, maxConcurrency, and set it to 1; it then does what you want without any manual locking.
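A minimal sketch of that approach, reusing the names from the question (source, innerSource, log) and assuming source is an RxJava 2 Observable or Flowable; the Semaphore disappears entirely:

source
    .subscribeOn(Schedulers.newThread())
    .flatMap(
        {
            log("#1")
            innerSource
                .doOnSubscribe { log("#3") }
                .doFinally { log("#4") }
        },
        1 // maxConcurrency = 1: at most one innerSource is subscribed at a time
    )
    .subscribe()

With maxConcurrency = 1, flatMap itself queues the upstream items and only subscribes to the next inner source after the previous one terminates, so no thread ever blocks.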
I'm writing a native WebRTC application that forwards pre-encoded frames for a client. That part is all fine, but I'm hitting segfaults every time I attempt to exit my application, specifically around how I'm destroying my WebRtcPeerConnectionFactory.
I instantiate this object by first launching separate threads for networking, signaling, and working respectively, and in my destructor I kill these threads by calling thread->Quit() before setting my webRtcPeerConnectionFactory to a nullptr (as I've seen examples in the source code do in their conductor.cc files), but I either segfault or hang indefinitely depending on the order in which those two actions are taken.
At a high level, is there a correct way to gracefully destroy the factory object, or is there some cleanup function I'm not calling? I can't find any other examples online that take advantage of the WebRTC threading model, so I'm not sure where to go from here. Thanks!
My instantiation of the object is performed like so:
rtc_network_thread_ = rtc::Thread::CreateWithSocketServer();
rtc_worker_thread_ = rtc::Thread::Create();
rtc_signaling_thread_ = rtc::Thread::Create();

if (!rtc_network_thread_->Start() || !rtc_worker_thread_->Start() ||
    !rtc_signaling_thread_->Start()) {
  // error handling
}

peer_connection_factory_ = webrtc::CreatePeerConnectionFactory(
    rtc_network_thread_.get(), rtc_worker_thread_.get(), rtc_signaling_thread_.get(),
    nullptr /* default_adm */,
    webrtc::CreateBuiltinAudioEncoderFactory(), webrtc::CreateBuiltinAudioDecoderFactory(),
    dummy_encoder_factory_.get(), nullptr /* video_decoder_factory */);
And my subsequent cleanup looks like this:
rtc_worker_thread_->Quit();
rtc_network_thread_->Quit();
rtc_signaling_thread_->Quit();

if (peer_connection_factory_) {
  // errors occur here, either with setting to nullptr or with threads
  // quitting if I quit the threads after setting my factory to a nullptr
  peer_connection_factory_ = nullptr;
}
Following the guidelines here, I'm able to set the "consumer_cancel_notify" property for my client connection, but when the queue is deleted the client still doesn't notice. I'm guessing I probably have to override some method or set a callback somewhere, but after digging through the source code I'm lost as to where I'd do this. Does anybody know offhand where I'd listen for this notification?
OK, here's how I got it to work:
When creating the queue (i.e. "declaring" the queue), add a callback for the AMQP_CANCEL event.
Inside AMQPQueue::sendConsumeCommand(), inside the while(1) loop where the code checks the different frame.payload.method.id values, add a check for AMQP_BASIC_CANCEL_METHOD, e.g.
if (frame.payload.method.id == AMQP_BASIC_CANCEL_METHOD) {
    cout << "AMQP_BASIC_CANCEL_METHOD received" << endl;
    if (events.find(AMQP_CANCEL) != events.end()) {
        (*events[AMQP_CANCEL])(pmessage);
    }
    continue;
}
That's it.
For my purposes I wanted to redeclare the queue if it got deleted so that I could keep consuming messages, so inside my callback I simply redeclared the queue, set up the bindings, added the events, set the consumer tag, and consumed again, roughly as sketched below.
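For illustration only, here is a rough sketch of such a cancel callback. The queue/exchange/tag names and the onMessage handler are placeholders, and the exact amqpcpp method signatures are assumptions that may differ in your version of the library:

int onMessage(AMQPMessage* message);   // existing message handler (placeholder)
AMQPQueue* queue = nullptr;            // set when the queue is first declared

int onCancel(AMQPMessage* /*message*/) {
    // The broker cancelled our consumer (e.g. the queue was deleted), so
    // redeclare the queue, restore the bindings and events, and consume again.
    queue->Declare();
    queue->Bind("my_exchange", "my_routing_key");
    queue->setConsumerTag("my_consumer_tag");
    queue->addEvent(AMQP_MESSAGE, onMessage);
    queue->addEvent(AMQP_CANCEL, onCancel);
    queue->Consume(AMQP_NOACK);
    return 0;
}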
I need help realizing some fairly complex business logic which operates on many tables and executes quite a few SQL commands. I want to be sure the data is never left in an inconsistent state, and so far I don't see a solution that wouldn't require nested transactions. I wrote simple pseudocode that illustrates a scenario similar to what I want to accomplish:
Dictionary<int, bool> opSucceeded = new Dictionary<int, bool>();

for (int i = 0; i < 10; i++)
{
    try
    {
        // this operation must be atomic
        Operation(dbContext, i);
        // commit (?)
        opSucceeded[i] = true;
    }
    catch
    {
        // ignore
    }
}

try
{
    // this operation must know which Operation(i) has succeeded;
    // it also must be atomic
    FinalOperation(dbContext, opSucceeded);
    // commit all
}
catch
{
    // rollback FinalOperation and Operation(i) where opSucceeded[i] == true
}
The biggest problem for me is: how do I ensure that if FinalOperation fails, all of the Operation(i) calls that succeeded are rolled back? Note that I would also like to be able to ignore failures of any single Operation(i).
Is it possible to achieve this using nested TransactionScope objects, and if not, how would you approach such a problem?
If I am following your question, you want a series of operations against the database, and you capture enough information to determine whether each operation succeeds or fails (the dictionary in your simplified code).
From there, you have a final operation that, if it fails itself, must roll back all of the successful earlier operations.
This seems to be exactly the kind of case a simple transaction is for. There is no need to keep track of the success or failure of the child/earlier operations as long as a failure of the final operation rolls the entire transaction back (assuming here that FinalOperation isn't using that information for other reasons).
Simply start a transaction before you enter the block described, and commit or roll back the entire thing once you know the status of your FinalOperation. There is no need to nest the child operations as far as I can see from your current description.
Perhaps I am missing something? (Note: if you wanted to RETAIN the earlier/child operations, that would be something different entirely, but a failure of the final op rolling the whole package of operations back makes a simple transaction usable.)
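For illustration, a minimal sketch of that idea using a single System.Transactions.TransactionScope around the pseudocode from the question (Operation, FinalOperation, and dbContext are the question's own placeholders; the connection must enlist in the ambient transaction):

var opSucceeded = new Dictionary<int, bool>();

using (var scope = new TransactionScope())
{
    for (int i = 0; i < 10; i++)
    {
        try
        {
            Operation(dbContext, i);
            opSucceeded[i] = true;
        }
        catch
        {
            // ignore the failure of this single operation
        }
    }

    // FinalOperation still receives the success map it needs.
    FinalOperation(dbContext, opSucceeded);

    // Reached only if FinalOperation didn't throw; otherwise the scope is
    // disposed without Complete() and everything above is rolled back.
    scope.Complete();
}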