Event-based mailing with Redis, Python and PHP

I want to frequently publish messages to a Redis server from a Python component. Then, in another PHP component that also has access to this Redis server, I want to send emails to users based on the messages that are stored in Redis. If I'm not wrong, there are two ways of doing that: a pull design and a push design:
Pull design
The PHP component frequently polls the Redis server, and when there is a new message, it performs the action.
Push design
No frequent requests from the PHP component are needed: whenever a new message is published in Redis, the action in the PHP component is triggered.
My question
How can we implement the push design? When I read the Redis documentation about the Pub/Sub model, I don't find anything about it, so I don't really understand how we can trigger an action in any way other than making frequent requests to the Redis server.

You need to run a long-running PHP process as a daemon, using a solution such as nohup, supervisor, or upstart. This process keeps working as a daemon to consume your Redis channel: Python keeps producing, PHP keeps consuming.
There are several libraries for running a PHP process as a daemon, such as ReactPHP; or, if you are using a framework such as Laravel, it offers a good pub/sub interface.
It will be something like this:
The PHP part subscribes to mychannel
127.0.0.1:6379> SUBSCRIBE mychannel
Reading messages... (press Ctrl-C to quit)
1) "subscribe"
2) "mychannel"
3) (integer) 1
The Python part publishes to mychannel
127.0.0.1:6379> publish mychannel myemailjson
(integer) 1
127.0.0.1:6379> publish mychannel myanotheremailjson
(integer) 1
127.0.0.1:6379>
In the meantime, your PHP process receives those messages:
1) "message"
2) "mychannel"
3) "myemailjson"
1) "message"
2) "mychannel"
3) "myanotheremailjson"
In that subscriber PHP process you call/trigger/dispatch your email delivery jobs (probably asynchronously) whenever it receives a message.
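For illustration, here is a minimal sketch of the same flow using redis-py; the channel name mychannel, the JSON payload fields, and the dispatch_email helper are assumptions, and the subscriber loop shown in Python mirrors what the long-running PHP daemon would do with its own Redis client (e.g. Predis or ReactPHP):

import json
import redis

r = redis.Redis(host="localhost", port=6379)

# Producer side (the Python component): publish one email job as JSON.
r.publish("mychannel", json.dumps({"to": "user@example.com", "template": "welcome"}))

# Subscriber loop (the role of the PHP daemon, shown in Python for brevity):
# block on the channel and dispatch an email job for every message received.
p = r.pubsub()
p.subscribe("mychannel")
for message in p.listen():
    if message["type"] == "message":
        job = json.loads(message["data"])
        # dispatch_email(job)  # hypothetical helper that queues the actual send

Keep in mind that Pub/Sub is fire-and-forget: if the subscriber process is down when a message is published, that message is not delivered later, so a Redis list used as a queue is an alternative if you need persistence.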

Related

ServiceStack Redis Mq: is eventual consistency an issue?

I'm looking at turning a monolithic application into a microservice-oriented application, and in doing so I will need a robust messaging system for interprocess communication. The idea is for the microservice processes to run on a cluster of servers for HA, with the requests to be processed added to a message queue that all the applications can access. I'm looking at using Redis both as a KV store for transient data and as a message broker, using the ServiceStack framework for .NET, but I worry that the eventual consistency applied by Redis will make processing of the requests unreliable. This is how I understand Redis to function with regard to MQ:
Client 1 posts a request to a queue on node 1.
Node 1 informs all listeners on that queue, using pub/sub, of the existence of the request, and also pushes the request to node 2 asynchronously.
The listeners on node 1 pull the request from the node; only one of them obtains it, as it should be. An update about the removal of the request is sent to node 2 asynchronously, but it will take some time to arrive.
The initial request is received by node 2 (assuming a bit of a delay in RTT), which goes ahead and informs the listeners connected to it using pub/sub. Before the update from node 1 about the removal of the request from the queue is received, a listener on node 2 may also pull the request. The result is that two listeners end up processing the same request, which would cause havoc in our system.
Is there anything in Redis or in the implementation of ServiceStack Redis MQ that would prevent the described scenario from occurring? Or is there something about replication in Redis that I have misunderstood? Or should I abandon the Redis/ServiceStack approach for MQ and use something like RabbitMQ instead, which I understand to be ACID-compliant?
It's not possible for the same message to be processed twice in Redis MQ: the message worker pops the message off the Redis-List-backed MQ, and since all Redis operations are atomic, no other message worker will have access to messages that have already been removed from the List.
ServiceStack.Redis (which Redis MQ uses) only supports Redis Sentinel for HA. Although Redis supports multiple replicas, they only contain a read-only view of the master dataset, so all write operations, such as List add/remove operations, can only happen on the single master instance.
One notable difference between Redis MQ and a purpose-built MQ like RabbitMQ is that Redis doesn't support ACKs: if the message worker process that pops a message off the MQ crashes, its message is lost, whereas in RabbitMQ, if the stateful connection holding an un-acked message dies, the message is restored by the RabbitMQ server back to the MQ.
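To make the atomicity point concrete, here is a minimal sketch of the underlying list pop as it would look in redis-py; the queue key name and the process helper are assumptions, not ServiceStack Redis MQ's actual key layout:

import redis

r = redis.Redis()

queue = "mq:myrequests.inq"  # hypothetical queue key, not ServiceStack's real naming

# BRPOP removes and returns the item in a single atomic server-side step, so two
# workers blocking on the same list can never receive the same message.
item = r.brpop(queue, timeout=5)
if item is not None:
    _key, payload = item
    # process(payload)  # if the worker crashes here, the message is simply lost,
    #                   # since there is no ACK/redelivery as in RabbitMQ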

When Redis is updated, Redis should send the update to my gRPC server. How to implement it?

When there's an update in Redis, Redis should send the update to my gRPC server. How can I implement this?
It looks like the Redis MONITOR command can show all updates happening in Redis. I thought I could parse the data from MONITOR and send it to the gRPC server.
Is there a better solution?
It sounds like you want to be notified when a value gets updated in Redis. If so, you can use Redis keyspace notifications. You subscribe to these events, and Redis publishes a notification whenever an update is done. You can use any client (for example, in Node.js) to subscribe to those events, and then do whatever you need from there.
You can find a detailed explanation and examples on the following page:
Reference: https://redis.io/topics/notifications
FYI: these notifications are disabled by default; if you want them, you need to enable them by updating the Redis configuration, and a restart is needed to apply this configuration change.
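As a rough illustration (in Python with redis-py rather than Node.js, purely for brevity), a keyspace-notification subscriber could look like the following; the event flags, the database number, and the forward_to_grpc hook are assumptions:

import redis

r = redis.Redis()

# Enable keyspace event notifications at runtime with CONFIG SET
# (the answer above enables them in redis.conf instead):
# K = keyspace channel, E = keyevent channel, A = all command classes.
r.config_set("notify-keyspace-events", "KEA")

p = r.pubsub()
# __keyspace@0__:<key> fires once per touched key in database 0,
# with the command name (e.g. "set", "del") as the message payload.
p.psubscribe("__keyspace@0__:*")

for event in p.listen():
    if event["type"] == "pmessage":
        key = event["channel"].decode().split(":", 1)[1]
        command = event["data"].decode()
        # forward_to_grpc(key, command)  # hypothetical call into your gRPC client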

How to subscribe redis channel from nuxt app

I have a Nuxt.js frontend app and a PHP backend running on a different server.
I'm setting up a real-time chat. The backend publishes to a Redis server once a new message has been sent (i.e. "hey, I received a new message for room 'foo'; get it and go notify the other recipients"). So I'm supposed to subscribe to a specific Redis channel and then notify the others.
My question is more about how I should approach this.
Do I need something like socket.io to talk to my Redis server?
Should I only use redis.createClient() to initiate a Redis instance and then subscribe (dynamically) to each and every new room I am already in or get added to?

How to get notifications about failovers in Redis cluster to the client application

Is there a way for a client to get notified about failover events in the Redis cluster? If so, which client library would support this? I am currently using Jedis but have the flexibility to switch to any other Java client.
There are two ways that I can think of to check this. One of them is to grep for master nodes on the cluster, keeping track of their IDs; if the port changed for any of them, then a failover happened.
$ redis-cli -p {PORT} cluster nodes | grep master
Another way, though not as robust a solution, is using the consistency-checker Ruby script, which will start showing write errors in its output. You can monitor that and send notifications based on it, since it happens when the read (replica) server is trying to take over its master's role.
Sentinel (http://redis.io/topics/sentinel) has the ability to monitor the cluster members and to send a publish/subscribe notification upon failure. The link contains a more in-depth explanation and a tutorial.
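For example, Sentinel publishes a +switch-master event when it promotes a new master, and a client can subscribe to that channel and react. Below is a minimal sketch in Python with redis-py (the Sentinel address and the notify_application hook are assumptions; the same channel can be consumed from Jedis or any other client):

import redis

# Connect to a Sentinel instance (assumed to listen on localhost:26379)
# and subscribe to the channel on which it publishes failover events.
sentinel = redis.Redis(host="localhost", port=26379)
p = sentinel.pubsub()
p.subscribe("+switch-master")

for event in p.listen():
    if event["type"] == "message":
        # Payload format: "<master-name> <old-ip> <old-port> <new-ip> <new-port>"
        name, old_ip, old_port, new_ip, new_port = event["data"].decode().split()
        # notify_application(name, new_ip, new_port)  # hypothetical hook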

Deploying java client, RabbitMQ, and Celery to server

I have a Java API on my server, and I want it to create tasks and add them to Celery via RabbitMQ. I followed this tutorial, http://www.rabbitmq.com/tutorials/tutorial-two-python.html, where I used Java for the client (send.java) and Python to receive (receive.py). In receive.py, where the callback method is invoked, I call a method that I've annotated with @celery.task so that the task is added to Celery.
I'm wondering how all of this is deployed on a server, though. Specifically, why is there a receive.py file? Is receive.py a process that must continually run on the server? Is there a way to configure RabbitMQ so that it automatically routes the Java client's tasks to Celery?
Thanks!
RabbitMQ is just a message queue. Producers put messages and consumers get them on demand. You can only restrict access for specific queues via RabbitMQ's auth options.
As for deployment: yes, receive.py needs to run continuously; it is Celery's job to do that. See the Workers Guide for info on running a worker.
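A minimal sketch of that setup, assuming the tutorial's task_queue name, a Celery app on the same RabbitMQ broker, and pika for the AMQP side (the file and function names here are illustrative, not from the question):

# worker_bridge.py - plays the long-running "receive.py" role
import pika
from celery import Celery

app = Celery("worker_bridge", broker="amqp://guest@localhost//")

@app.task
def process_request(payload):
    # the actual work requested by the Java client goes here
    print("processing", payload)

def main():
    # Consume the messages the Java client sends and hand each one to Celery.
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="task_queue", durable=True)

    def callback(ch, method, properties, body):
        process_request.delay(body.decode())          # enqueue the Celery task
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue="task_queue", on_message_callback=callback)
    channel.start_consuming()

if __name__ == "__main__":
    main()

This script has to keep running (e.g. under supervisor), and a Celery worker (started with celery -A worker_bridge worker) also runs continuously to execute the dispatched tasks.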