How to use Pub/sub with hiredis in C++? - redis

I am trying to test the pub/sub functionality of Redis with the hiredis client via C++.
I can see that subscribing to a certain channel is easy enough to do through the redisCommand API.
However, I am wondering how the reply comes back when somebody publishes to that channel.
Thank you

https://github.com/redis/hiredis/issues/55
aluiken commented on Mar 2, 2012
void onMessage(redisAsyncContext *c, void *reply, void *privdata) {
    redisReply *r = (redisReply *) reply; // hiredis hands the reply over as void*, so cast it
    if (reply == NULL) return;
    if (r->type == REDIS_REPLY_ARRAY) {
        // Each published message arrives as a three-element array:
        // "message", the channel name, and the payload.
        for (size_t j = 0; j < r->elements; j++) {
            printf("%zu) %s\n", j, r->element[j]->str);
        }
    }
}

int main(int argc, char **argv) {
    signal(SIGPIPE, SIG_IGN);
    struct event_base *base = event_base_new();

    redisAsyncContext *c = redisAsyncConnect("127.0.0.1", 6379);
    if (c->err) {
        printf("error: %s\n", c->errstr);
        return 1;
    }

    redisLibeventAttach(c, base);
    redisAsyncCommand(c, onMessage, NULL, "SUBSCRIBE testtopic");
    event_base_dispatch(base);
    return 0;
}
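
The publishing side can be a plain synchronous hiredis client (or simply redis-cli publish testtopic hello from a shell). A minimal sketch, assuming a local Redis on the default port:

#include <hiredis/hiredis.h>
#include <stdio.h>

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) return 1;
    // Every subscriber of "testtopic" receives this payload in its
    // onMessage callback as a ["message", "testtopic", "hello"] array.
    redisReply *r = (redisReply *) redisCommand(c, "PUBLISH testtopic %s", "hello");
    if (r) freeReplyObject(r);
    redisFree(c);
    return 0;
}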

This is a late answer, but you can try redis-plus-plus, which is based on hiredis and written in C++11.
Disclaimer: I'm the author of this library. If you have any problem with this client, feel free to let me know. If you like it, also feel free to star it :)
Sample code:
#include <sw/redis++/redis++.h>
using namespace sw::redis;

Redis redis("tcp://127.0.0.1:6379");

// Create a Subscriber.
auto sub = redis.subscriber();

// Set callback functions.
sub.on_message([](std::string channel, std::string msg) {
    // Process message of MESSAGE type.
});

sub.on_pmessage([](std::string pattern, std::string channel, std::string msg) {
    // Process message of PMESSAGE type.
});

sub.on_meta([](Subscriber::MsgType type, OptionalString channel, long long num) {
    // Process message of META type.
});

// Subscribe to channels and patterns.
sub.subscribe("channel1");
sub.subscribe({"channel2", "channel3"});
sub.psubscribe("pattern1*");

// Consume messages in a loop.
while (true) {
    try {
        sub.consume();
    } catch (...) {
        // Handle exceptions.
    }
}
Check the doc for details.
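
One practical note: consume() blocks until a message arrives or the underlying socket times out. If you set a socket timeout on the connection, the loop can treat a timeout as a chance to do periodic work. A sketch, relying on redis-plus-plus throwing sw::redis::TimeoutError when consume() times out:

while (true) {
    try {
        sub.consume();
    } catch (const sw::redis::TimeoutError &) {
        // No message within the socket timeout; loop again (or do periodic work here).
        continue;
    } catch (const sw::redis::Error &err) {
        // Other errors, e.g. connection problems.
        break;
    }
}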

The pub/sub feature of Redis is an instance of the Observer pattern.
All subscribers are observers, and the subject is the channel being modified by publishers.
When a publisher modifies a channel, i.e. executes a command like redis-cli> publish foo value,
the change is communicated by the Redis server to all observers (i.e. subscribers).
So the Redis server keeps a list of all observers for a particular channel.
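
To make the analogy concrete, here is a tiny, self-contained C++ sketch of the same pattern (hypothetical names, no Redis involved): the Channel is the subject, the registered callbacks are the observers, and publish notifies them all.

#include <functional>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical in-process analogue of a Redis channel.
class Channel {
public:
    using Callback = std::function<void(const std::string &)>;

    // SUBSCRIBE: remember the observer.
    void subscribe(Callback cb) { subscribers_.push_back(std::move(cb)); }

    // PUBLISH: notify every registered observer.
    void publish(const std::string &msg) {
        for (auto &cb : subscribers_) cb(msg);
    }

private:
    std::vector<Callback> subscribers_;
};

int main() {
    Channel foo;
    foo.subscribe([](const std::string &m) { std::cout << "observer 1: " << m << "\n"; });
    foo.subscribe([](const std::string &m) { std::cout << "observer 2: " << m << "\n"; });
    foo.publish("value"); // like: redis-cli> publish foo value
}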

Related

Lock-free thread safety in console apps

To ensure thread safety, I'm trying to find a generic cross-platform approach to
execute all delegates asynchronously in the main thread, or ...
execute delegates in a background thread and pass the results to the main one.
Considering that console apps do not have a synchronization context, I create a new context when the app is loading and then use one of the following methods:
1. Set and restore a custom SC, as described in the article Await, SynchronizationContext, and Console Apps by Stephen Toub
2. Marshal all delegates to the main thread using a context.Post call, as described in the article ExecutionContext vs SynchronizationContext by Stephen Toub
3. Use a background thread with a producer-consumer collection, as described in Basic synchronization by Joe Albahari
Question
Ideas #1 and #2 set the context correctly only if it's done synchronously. If they're called from inside Parallel.For(0, 100), then the synchronization context starts using all threads available in the thread pool. Idea #3 always performs tasks within a dedicated thread as expected, unfortunately not in the main thread. Combining idea #3 with IOCompletionPortTaskScheduler, I can achieve asynchrony and single-threading; unfortunately, that approach only works on Windows. Is there a way to combine these solutions to meet the requirements at the top of the post, including being cross-platform?
Scheduler
public class SomeScheduler
{
    public Task<T> RunInTheMainThread<T>(Func<T> action, SynchronizationContext sc)
    {
        var res = new TaskCompletionSource<T>();
        SynchronizationContext.SetSynchronizationContext(sc); // Idea #1
        sc.Post(o => res.SetResult(action()), null); // Idea #2
        ThreadPool.QueueUserWorkItem(state => res.SetResult(action())); // Idea #3
        return res.Task;
    }
}
Main
var scheduler = new SomeScheduler();
var sc = SynchronizationContext.Current ?? new SynchronizationContext();
new Thread(async () =>
{
    var res = await scheduler.RunInTheMainThread(() => 5, sc);
}).Start();
You can use lock/Monitor.Pulse/Monitor.Wait and a Queue.
I know the title says lock-free, but my guess is that you want the UI updates to occur outside the locks, or that worker threads should be able to continue working without having to wait for the main thread to update the UI (at least this is how I understand the requirement).
Here the locks are never held while producing items or updating the UI; they are held only for the short duration it takes to enqueue/dequeue an item.
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using static System.Threading.Thread;

namespace ConsoleApp1
{
    internal static class Program
    {
        private class WorkItem
        {
            public string SomeData { get; init; }
        }

        private static readonly Queue<WorkItem> s_workQueue = new Queue<WorkItem>();

        private static void Worker()
        {
            var random = new Random();
            // Simulate some work
            Sleep(random.Next(1000));
            // Produce the work item outside the lock
            var workItem = new WorkItem
            {
                SomeData = $"data produced from thread {CurrentThread.ManagedThreadId}"
            };
            // Acquire the lock only for the short time needed to add the work item to the queue
            lock (s_workQueue)
            {
                s_workQueue.Enqueue(workItem);
                // Notify the main thread that a new item was added to the queue, causing it to wake up
                Monitor.Pulse(s_workQueue);
            }
            // The work item is now queued; no need to wait for the main thread to finish updating the UI
            // Continue work here
        }

        private static WorkItem GetWorkItem()
        {
            // Acquire the lock only for the duration needed to get an item from the queue
            lock (s_workQueue)
            {
                WorkItem result;
                // Try to get an item from the queue
                while (!s_workQueue.TryDequeue(out result))
                {
                    // The lock is released during the Wait call...
                    Monitor.Wait(s_workQueue);
                    // ...and acquired again after the Wait call
                }
                return result;
            }
        }

        private static void Main(string[] args)
        {
            const int totalTasks = 10;
            for (var i = 0; i < totalTasks; i++)
            {
                _ = Task.Run(Worker);
            }
            var remainingTasks = totalTasks;
            // Main loop (similar to a message loop)
            while (remainingTasks > 0)
            {
                var item = GetWorkItem();
                // Update the UI
                Console.WriteLine("Got {0} and updated UI on thread {1}.", item.SomeData, CurrentThread.ManagedThreadId);
                remainingTasks--;
            }
            Console.WriteLine("Done");
        }
    }
}
Update
Since you don't want to have the main thread Wait for an event, you can change the code as follows:
private static WorkItem? GetWorkItem()
{
    // Acquire the lock only for the duration needed to get an item from the queue
    lock (s_workQueue)
    {
        // Try to get an item from the queue; result is null when the queue is empty
        s_workQueue.TryDequeue(out var result);
        return result;
    }
}

private static void Main(string[] args)
{
    const int totalTasks = 10;
    for (var i = 0; i < totalTasks; i++)
    {
        _ = Task.Run(Worker);
    }
    var remainingTasks = totalTasks;
    // Main loop (similar to a message loop)
    while (remainingTasks > 0)
    {
        var item = GetWorkItem();
        if (item != null)
        {
            // Update the UI
            Console.WriteLine("Got {0} and updated UI on thread {1}.", item.SomeData, CurrentThread.ManagedThreadId);
            remainingTasks--;
        }
        else
        {
            // The queue is empty, so do some other work here, then try again after that work is done
            // Sleep to simulate some work being done by the main thread
            Thread.Sleep(100);
        }
    }
    Console.WriteLine("Done");
}
The problem with the above solution is that the main thread should do only part of the work it is supposed to do, then call GetWorkItem to check whether the queue has something, before resuming whatever it was doing. That is doable if you can divide the work into small pieces that don't take too long.
I don't know if my answer here is what you want. What do you imagine the main thread would be doing when there are no work items in the queue?
If you think it should be doing nothing (i.e. waiting), then the Wait solution should be fine.
If you think it should be doing something, then maybe the work it should be doing can be queued as a work item as well.

Automatically pausing/continuing a service when OS suspends

I made a Windows Service process that can be started/stopped/paused/continued.
The service is created with CreateService(), and it registers a service control handler with RegisterServiceCtrlHandlerExA().
Even though the service can subscribe to power setting notifications using RegisterPowerSettingNotification(), I find that these only represent events like switching between battery and mains power on laptops, and such; they do not cover suspend/sleep of the OS.
How can I tell the SCM to automatically pause my service before the OS suspends/sleeps? And continue my service after it wakes up again?
This requires calling the PowerRegisterSuspendResumeNotification() function.
For this, you need to #include <powrprof.h> and link against powrprof.lib.
The callback itself looks like:
static ULONG DeviceNotifyCallbackRoutine
(
    PVOID Context,
    ULONG Type,    // PBT_APMSUSPEND, PBT_APMRESUMESUSPEND, or PBT_APMRESUMEAUTOMATIC
    PVOID Setting  // Unused
)
{
    LOGI("DeviceNotifyCallbackRoutine");
    if (Type == PBT_APMSUSPEND)
    {
        turboledz_pause_all_devices();
        LOGI("Devices paused.");
    }
    if (Type == PBT_APMRESUMEAUTOMATIC)
    {
        turboledz_paused = 0;
        LOGI("Device unpaused.");
    }
    return 0;
}

static DEVICE_NOTIFY_SUBSCRIBE_PARAMETERS notifycb =
{
    DeviceNotifyCallbackRoutine,
    NULL,
};
And then register it with:
HPOWERNOTIFY registration;
const DWORD registered = PowerRegisterSuspendResumeNotification
(
    DEVICE_NOTIFY_CALLBACK,
    &notifycb,
    &registration
);
if (registered != ERROR_SUCCESS)
{
    // PowerRegisterSuspendResumeNotification returns the error code directly,
    // so there is no need to call GetLastError() here.
    LOGI("PowerRegisterSuspendResumeNotification failed with error 0x%lx", registered);
}
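
When the service stops (or pauses), the registration can be released again. A small sketch, assuming the registration handle from above:

// e.g. in the SERVICE_CONTROL_STOP handler
if (PowerUnregisterSuspendResumeNotification(registration) != ERROR_SUCCESS)
{
    LOGI("PowerUnregisterSuspendResumeNotification failed.");
}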

boost::asio: how can I make some clients listen to the server and other clients read/write to the server at the same time

I am a novice with boost::asio. I wrote a server; some clients can connect to it and keep listening.
class socket_server {
public:
    ~socket_server() { io_context.stop(); };
    int server_process();

private:
    boost::asio::io_context io_context;
};

int socket_server::server_process() {
    try {
        unlink("/var/run/socket");
        server s(io_context, "/var/run/socket");
        INFO("server_process, start run\n");
        io_context.run();
    } catch (std::exception &e) {
        std::cerr << "Exception: " << e.what() << "\n";
    }
    return 0;
}

class server {
public:
    server(boost::asio::io_context &io_context, const std::string &file)
        : acceptor_(io_context, stream_protocol::endpoint(file)), socket_id_(0) {
        do_accept();
    }

private:
    void do_accept();
    stream_protocol::acceptor acceptor_;
    int socket_id_;
};

void server::do_accept() {
    INFO("do accept\n");
    acceptor_.async_accept(
        [this](std::error_code ec, stream_protocol::socket socket) {
            if (!ec) {
                INFO("new session create\n");
                std::make_shared<session>(std::move(socket), socket_id_++)->start();
            }
            do_accept();
        });
}

class session : public std::enable_shared_from_this<session> {
public:
    session(stream_protocol::socket sock, int socket_id)
        : socket_(std::move(sock)), socket_id_(socket_id) {}
    ~session() { socket_id_--; }
    void start();

private:
    void do_read();
    void do_write(std::array<char, 1024> data);
    int get_id() { return socket_id_; }

    // The socket used to communicate with the client.
    stream_protocol::socket socket_;
    // Buffer used to store data received from the client.
    std::array<char, 1024> data_;
    int socket_id_;
};

void session::start() { do_read(); }

void session::do_read() {
    INFO("in do_read\n");
    auto self(shared_from_this());
    socket_.async_read_some(
        boost::asio::buffer(data_),
        [this, self](std::error_code ec, std::size_t length) {
            if (!ec) {
                if (request.find("listen") != std::string::npos) {
                    std::unique_lock<std::mutex> lock(unsol_mutex);
                    unsol_cond.wait(lock);
                    do_write(get_unsol_data());
                } else {
                    std::unique_lock<std::mutex> lock(send_mutex);
                    if (send_cond.wait_for(lock, std::chrono::seconds(2)) ==
                        std::cv_status::timeout) {
                        ERROR("response time out\n");
                    }
                    do_write(get_write_data());
                }
            }
        });
}
In do_read(), I found that when a client is listening (blocked in unsol_cond.wait(lock)), another client cannot get into do_read().
Is it due to the make_shared session? Is there a better implementation suggestion?
Thanks~
You're using blocking synchronization primitives in async code. That's an anti-pattern.
Firstly, as you noticed, the blocking operations will prevent the event loop from progressing.
Secondly, holding locks across async calls is often a bug (it doesn't guard the critical section during the execution of the async operation).
For simple integration with Asio's proactor model, you can often use a strand instead.
Under the hood, it will end up using mutexes, just like now, but only if the concurrency model requires it. That mainly depends on the execution context used and/or how many threads are running the services.
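
A minimal standalone sketch of the strand idea (hypothetical setup; make_strand needs Boost 1.66 or later): all handlers posted through one strand are serialized, so the shared state they touch needs no mutex.

#include <boost/asio.hpp>
#include <iostream>
#include <thread>

int main() {
    boost::asio::io_context io;
    // Handlers posted through this strand never run concurrently,
    // even when io.run() is called from multiple threads.
    auto strand = boost::asio::make_strand(io);

    int counter = 0; // shared state, only ever touched from the strand
    for (int i = 0; i < 5; ++i) {
        boost::asio::post(strand, [&counter] { ++counter; });
    }

    std::thread t([&io] { io.run(); });
    io.run();
    t.join();
    std::cout << "counter = " << counter << "\n";
}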
Alternatively, use a queue with an async send-chain. I have quite a few answers on this site that show you how to do that.
I would gladly demonstrate on your code, but it is too incomplete, and the naming doesn't really give me an idea what things mean ("listen"/"unsol"? Nothing ever signals those conditions, so it's hard to guess what they do in reality).
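
In general terms, though, such a send-chain looks like the sketch below (hypothetical members strand_, outbox_ and socket_ on a session like yours; the invariant is that exactly one async_write is in flight whenever the outbox is non-empty):

// Queue a message for sending; safe to call from anywhere.
void session::deliver(std::string msg) {
    boost::asio::post(strand_, [this, self = shared_from_this(), m = std::move(msg)]() mutable {
        outbox_.push_back(std::move(m));
        // If no write is in flight, start the chain; otherwise the
        // running chain will pick this message up in due course.
        if (outbox_.size() == 1)
            do_write();
    });
}

void session::do_write() {
    boost::asio::async_write(
        socket_, boost::asio::buffer(outbox_.front()),
        boost::asio::bind_executor(
            strand_, [this, self = shared_from_this()](std::error_code ec, std::size_t) {
                if (ec) return; // drop the session on error
                outbox_.pop_front();
                if (!outbox_.empty())
                    do_write(); // keep the chain going
            }));
}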

WxSocket (Was not declared in this scope)

Hello, if I try to build this code, I get an error and don't know what to do.
void wxsocket_test_finalFrame::OnServerStart(wxCommandEvent& WXUNUSED(event))
{
    // Create the address - defaults to localhost:0 initially
    wxIPV4address addr;
    addr.Service(3000);
    // Create the socket. We maintain a class pointer so we can
    // shut it down
    m_server = new wxSocketServer(addr);
    // We use Ok() here to see if the server is really listening
    if (! m_server->Ok())
    {
        return;
    }
    // Set up the event handler and subscribe to connection events
    m_server->SetEventHandler(*this, SERVER_ID);
    m_server->SetNotify(wxSOCKET_CONNECTION_FLAG);
    m_server->Notify(true);
}

void wxsocket_test_finalFrame::OnServerEvent(wxSocketEvent& WXUNUSED(event))
{
    // Accept the new connection and get the socket pointer
    wxSocketBase* sock = m_server->Accept(false);
    // Tell the new socket how and where to process its events
    sock->SetEventHandler(*this, SOCKET_ID);
    sock->SetNotify(wxSOCKET_INPUT_FLAG | wxSOCKET_LOST_FLAG);
    sock->Notify(true);
}

void wxsocket_test_finalFrame::OnSocketEvent(wxSocketEvent& event)
{
    wxSocketBase *sock = event.GetSocket();
    // Process the event
    switch (event.GetSocketEvent())
    {
        case wxSOCKET_INPUT:
        {
            char buf[10];
            // Read the data
            sock->Read(buf, sizeof(buf));
            // Write it back
            sock->Write(buf, sizeof(buf));
            // We are done with the socket, destroy it
            sock->Destroy();
            break;
        }
        case wxSOCKET_LOST:
        {
            sock->Destroy();
            break;
        }
    }
}
\wxsocket_test_finalMain.cpp|99|error: 'm_server' was not declared in this scope|
OS: Windows
Compiler: gcc version 8.1.0 (x86_64-posix-seh-rev0, Built by MinGW-W64 project)
I'm a bloody newbie and can't figure out what is happening here. Does someone have a clue?
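
For what it's worth, this error usually means the compiler never saw a declaration of m_server. It is used as a member of wxsocket_test_finalFrame, so the frame class (in its header) needs a member declaration along these lines (a sketch, with names assumed from the code above):

class wxsocket_test_finalFrame : public wxFrame
{
    // ... constructors, event table, other members ...

private:
    wxSocketServer *m_server = nullptr; // the pointer used in OnServerStart/OnServerEvent
};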

Pattern for limiting number of simultaneous asynchronous calls

I need to retrieve multiple objects from an external system. The external system supports multiple simultaneous requests (i.e. threads), but it is possible to flood it, so I want to retrieve multiple objects asynchronously while throttling the number of simultaneous async requests: I need to retrieve 100 items but don't want to be retrieving more than 25 of them at once. When each request of the 25 completes, I want to trigger another retrieval, and once they are all complete I want to return all of the results in the order they were requested (there is no point returning the results until the entire call has returned). Are there any recommended patterns for this sort of thing?
Would something like this be appropriate (pseudocode, obviously)?
private List<externalSystemObjects> returnedObjects = new List<externalSystemObjects>();

public List<externalSystemObjects> GetObjects(List<string> ids)
{
    const int maxCallCount = 25;
    var handles = new List<WaitHandle>();

    foreach (var id in ids)
    {
        if (handles.Count >= maxCallCount)
        {
            // Wait for one outstanding call to return before issuing another
            int returnedCallId = WaitHandle.WaitAny(handles.ToArray());
            handles.RemoveAt(returnedCallId);
        }
        handles.Add(executeCall(id, callback));
    }

    WaitHandle.WaitAll(handles.ToArray());
    return returnedObjects;
}

public void callback(object result)
{
    // Called from multiple threads, so the list needs a lock
    lock (returnedObjects)
    {
        returnedObjects.Add((externalSystemObjects)result);
    }
}
Consider the list of items to process as a queue from which 25 processing threads dequeue tasks, process a task, add the result, then repeat until the queue is empty:
class Program
{
    class State
    {
        public EventWaitHandle Done;
        public int runningThreads;
        public List<string> itemsToProcess;
        public List<string> itemsResponses;
    }

    static void Main(string[] args)
    {
        State state = new State();
        state.itemsResponses = new List<string>(1000);
        state.itemsToProcess = new List<string>(1000);
        for (int i = 0; i < 1000; ++i)
        {
            state.itemsToProcess.Add(String.Format("Request {0}", i));
        }
        state.runningThreads = 25;
        state.Done = new AutoResetEvent(false);
        for (int i = 0; i < 25; ++i)
        {
            Thread t = new Thread(new ParameterizedThreadStart(Processing));
            t.Start(state);
        }
        state.Done.WaitOne();
        foreach (string s in state.itemsResponses)
        {
            Console.WriteLine("{0}", s);
        }
    }

    private static void Processing(object param)
    {
        Debug.Assert(param is State);
        State state = param as State;
        try
        {
            do
            {
                string item = null;
                lock (state.itemsToProcess)
                {
                    if (state.itemsToProcess.Count > 0)
                    {
                        item = state.itemsToProcess[0];
                        state.itemsToProcess.RemoveAt(0);
                    }
                }
                if (null == item)
                {
                    break;
                }
                // Simulate some processing
                Thread.Sleep(10);
                string response = String.Format("Response for {0} on thread: {1}", item, Thread.CurrentThread.ManagedThreadId);
                lock (state.itemsResponses)
                {
                    state.itemsResponses.Add(response);
                }
            } while (true);
        }
        catch (Exception)
        {
            // ...
        }
        finally
        {
            int threadsLeft = Interlocked.Decrement(ref state.runningThreads);
            if (0 == threadsLeft)
            {
                state.Done.Set();
            }
        }
    }
}
You can do the same using asynchronous callbacks; there is no need to use threads.
Having some queue-like structure to hold the pending requests is a pretty common pattern. In web apps, where there may be several layers of processing, you see a "funnel"-style approach, with the early parts of the processing chain having larger queues. There may also be some kind of prioritisation applied to the queues, with higher-priority requests being shuffled to the top.
One important thing to consider in your solution is that if the request arrival rate is higher than your processing rate (this might be due to a denial-of-service attack, or just because some part of the processing is unusually slow today), then your queues will grow without bound. You need to have some policy, such as refusing new requests immediately when the queue depth exceeds some value.
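
As a concrete illustration of such a policy (language-agnostic; sketched here in C++ with hypothetical names), a bounded queue can simply refuse new requests once the depth limit is reached:

#include <cstddef>
#include <deque>
#include <mutex>
#include <string>

// Hypothetical bounded queue: enqueue fails fast once the depth limit is hit,
// which is one way to implement the "refuse new requests" policy.
class BoundedQueue {
public:
    explicit BoundedQueue(std::size_t maxDepth) : maxDepth_(maxDepth) {}

    bool TryEnqueue(std::string request) {
        std::lock_guard<std::mutex> lock(mutex_);
        if (queue_.size() >= maxDepth_)
            return false; // over the limit: reject instead of growing without bound
        queue_.push_back(std::move(request));
        return true;
    }

private:
    std::size_t maxDepth_;
    std::mutex mutex_;
    std::deque<std::string> queue_;
};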