Publish a message from Redis / a Redis stream to a microservice

I have a scenario where I would be storing details in the Redis database, and this would publish messages to external APIs.
Question:
Is it feasible to publish messages from Redis to the outside world? All the pub/sub examples I have seen are within redis-cli.

Yes -- you can pub/sub to clients, but those still have to be Redis clients that also know about pub/sub. Per your account handle, you probably need to look for a simple-but-sufficient Java library.
I wrote about a simple application I created for my own use: a listener process fetches market data (for less than a handful of contracts) and publishes it to Redis, from where clients (in a different process and/or on a different machine) can subscribe. Works great. I do this in R because I had already written a (C++ based) client for R (around hiredis) to which we then added pub/sub.
The blog post is here, and I also wrote about Redis for market monitoring in this short arXiv paper -- and there is a corresponding short arXiv paper introducing Redis.
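For a concrete (if minimal) sketch, here is what that could look like with the Jedis client library; the channel name, key and payload are made up for the example, and the call to the external API is only hinted at in a comment:

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class RedisPubSubSketch {
    public static void main(String[] args) throws InterruptedException {
        // Subscriber: typically a separate service, process or machine.
        new Thread(() -> {
            try (Jedis subscriber = new Jedis("localhost", 6379)) {
                subscriber.subscribe(new JedisPubSub() {
                    @Override
                    public void onMessage(String channel, String message) {
                        // e.g. call the external API from here
                        System.out.println("Got " + message + " on " + channel);
                    }
                }, "details-updated"); // blocks for as long as the subscription is active
            }
        }).start();

        Thread.sleep(500); // crude wait so the subscriber is ready in this demo

        // Publisher: the code that stores details in Redis also publishes an event.
        try (Jedis publisher = new Jedis("localhost", 6379)) {
            publisher.hset("details:42", "status", "created");
            publisher.publish("details-updated", "details:42");
        }
    }
}
```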

Related

Handling of pubsub subscribers for distributed long-running tasks

I am evaluating the use of pubsub for long-running tasks such as video transcoding, where a particular transcode may take between 2 and 10 minutes. Is pubsub a good approach for such task distribution? For example, let's say I have five servers:
- publisher1
- publisher2
- publisher3
- publisher4
- publisher5
And a topic called "videos". Would it be possible to spread out the messages equally across those five servers? What about when servers are added or removed? What would be a good approach to doing this, or is pubsub not the right tool for something like this?
This does sound like a reasonable use case for pubsub. Specifically, if you use a pull subscriber, you can configure flow control settings to allow at most one outstanding message per server, and configure the max ack extension period (in Java) to be a reasonable upper bound of your processing time. This API is described here: http://googleapis.github.io/google-cloud-java/google-cloud-clients/apidocs/index.html?com/google/cloud/pubsub/v1/package-summary.html
This should effectively load balance across your servers by default if you use the same subscriber id for all jobs. If a server is added and a backlog exists, it will receive a new entry. If a server is removed, it will no longer be sent messages. If it is removed while processing, or crashes, the message it was working on will be resent to another server.
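As a rough sketch of that configuration, assuming the google-cloud-pubsub Java client documented at the link above (the project id, subscription id and transcode step are placeholders):

```java
import com.google.api.gax.batching.FlowControlSettings;
import com.google.cloud.pubsub.v1.AckReplyConsumer;
import com.google.cloud.pubsub.v1.MessageReceiver;
import com.google.cloud.pubsub.v1.Subscriber;
import com.google.pubsub.v1.ProjectSubscriptionName;
import com.google.pubsub.v1.PubsubMessage;
import org.threeten.bp.Duration;

public class TranscodeWorker {
    public static void main(String[] args) {
        ProjectSubscriptionName subscription =
                ProjectSubscriptionName.of("my-project", "videos-sub"); // placeholders

        MessageReceiver receiver = (PubsubMessage message, AckReplyConsumer consumer) -> {
            transcode(message.getData().toStringUtf8()); // placeholder for the 2-10 minute job
            consumer.ack(); // ack only after the work is done
        };

        Subscriber subscriber = Subscriber.newBuilder(subscription, receiver)
                // At most one outstanding (unacked) message per worker process.
                .setFlowControlSettings(FlowControlSettings.newBuilder()
                        .setMaxOutstandingElementCount(1L)
                        .build())
                // Upper bound on how long the client keeps extending the ack deadline.
                .setMaxAckExtensionPeriod(Duration.ofMinutes(15))
                .build();

        subscriber.startAsync().awaitRunning();
        subscriber.awaitTerminated();
    }

    private static void transcode(String videoLocation) {
        System.out.println("Transcoding " + videoLocation);
    }
}
```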
One concern, however, is that pubsub has a limit of 10MB per message. You might consider instead putting the data itself in a Google Cloud Storage bucket. Cloud Storage can publish the file location to a pubsub topic when an upload is complete: https://cloud.google.com/storage/docs/pubsub-notifications

Ready-made solution for notification microservice

I have a microservice architecture and now I need to introduce a notification center. The requirements are: any service is able to send a notification, any service is able to subscribe to any kind of notification, and the UI (web) is able to subscribe to notifications (websockets are preferred). Of course I can write such a service myself, but maybe there is a ready-made, robust solution for that.
UPD: I'm not looking for a pub/sub messaging system, as that is too low-level for a notification center.
What you are looking for is publish-subscribe messaging. If you are using the AWS stack, then I can recommend Amazon SNS or Amazon SQS. I think Amazon SNS is more suitable because it is push-based.
Amazon SNS allows applications to send time-critical messages to multiple subscribers through a “push” mechanism, eliminating the need to periodically check or “poll” for updates.
Amazon SQS is a message queue service used by distributed applications to exchange messages through a polling model, and can be used to decouple sending and receiving components, without requiring each component to be concurrently available.
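For illustration only, a minimal sketch of publishing a notification to an SNS topic with the AWS SDK for Java; the topic ARN is a placeholder, and every subscriber attached to the topic (an SQS queue, an HTTP endpoint, etc.) receives the message via push:

```java
import com.amazonaws.services.sns.AmazonSNS;
import com.amazonaws.services.sns.AmazonSNSClientBuilder;
import com.amazonaws.services.sns.model.PublishResult;

public class NotificationPublisher {
    public static void main(String[] args) {
        // Region and credentials come from the default provider chain.
        AmazonSNS sns = AmazonSNSClientBuilder.defaultClient();

        // Placeholder topic ARN; all attached subscribers get this message pushed to them.
        String topicArn = "arn:aws:sns:us-east-1:123456789012:notifications";

        PublishResult result = sns.publish(topicArn,
                "{\"event\":\"order-created\",\"orderId\":42}");
        System.out.println("Published message id: " + result.getMessageId());
    }
}
```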
Outside of the Amazon Web Services stack, there are a number of free messaging solutions:
RabbitMQ is one of the leading implementations of the AMQP protocol (along with Apache Qpid). It therefore implements a broker architecture, meaning that messages are queued on a central node before being sent to clients. This approach makes RabbitMQ very easy to use and deploy, because advanced scenarios like routing, load balancing or persistent message queuing are supported in just a few lines of code. However, it also makes it less scalable and “slower”, because the central node adds latency and message envelopes are quite big.
ZeroMQ is a very lightweight messaging system specially designed for high throughput/low latency scenarios like the ones you can find in the financial world. ZeroMQ supports many advanced messaging scenarios but, contrary to RabbitMQ, you’ll have to implement most of them yourself by combining various pieces of the framework (e.g. sockets and devices).
ActiveMQ is in the middle ground. Like ZeroMQ, it can be deployed with both broker and P2P topologies. Like RabbitMQ, it’s easier to implement advanced scenarios but usually at the cost of raw performance.
Now that you know what you need, I would recommend reading through each technology and deciding which one serves your goal most accurately. If that isn't worth your time and your requirement is specific and relatively small, then you can go for writing something of your own.
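To make the notification-center case concrete, here is a minimal sketch using the RabbitMQ Java client; the exchange name, temporary queue and payload are made up for the example. A fanout exchange delivers every notification to all subscribed services:

```java
import com.rabbitmq.client.*;

import java.nio.charset.StandardCharsets;

public class NotificationCenterSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: a local RabbitMQ broker

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            // A fanout exchange copies every message to all bound queues,
            // so any service can subscribe to all notifications.
            channel.exchangeDeclare("notifications", BuiltinExchangeType.FANOUT);

            // Each subscribing service gets its own (here: temporary) queue.
            String queue = channel.queueDeclare().getQueue();
            channel.queueBind(queue, "notifications", "");

            channel.basicConsume(queue, true,
                    (consumerTag, delivery) -> System.out.println(
                            "Received: " + new String(delivery.getBody(), StandardCharsets.UTF_8)),
                    consumerTag -> { });

            // Any service can publish a notification to the exchange.
            channel.basicPublish("notifications", "", null,
                    "user-registered".getBytes(StandardCharsets.UTF_8));

            Thread.sleep(1000); // give the consumer a moment before closing
        }
    }
}
```

A durable, named queue per service (instead of the temporary one used here) would let notifications survive subscriber restarts.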

Microservices: Why Use RabbitMQ?

I haven't found an existing post asking this, but I apologize if I missed it.
I'm trying to get my head round microservices and have come across articles where RabbitMQ is used. I'm confused why RabbitMQ is needed. Is the intention that the services will use a web api to communicate with the outside world and RabbitMQ to communicate with each other?
In Microservices architecture you have two ways to communicate between the microservices:
Synchronous - that is, each service calls the other microservice directly, which results in a dependency between the services
Asynchronous - you have some central hub (or message queue) where you place all requests between the microservices, and the corresponding service takes the request, processes it and returns the result to the caller. This is what RabbitMQ (or any other message queue - MSMQ and Apache Kafka are good alternatives) is used for. In this case all microservices know only about the existence of the hub.
microservices.io has some very nice articles about using microservices
A message queue provides an asynchronous communications protocol - you have the option to send a message from one service to another without having to know whether the other service is able to handle it immediately or not. Messages can wait until the responsible service is ready. A service publishing a message does not need to know anything about the inner workings of the services that will process that message. This way of handling messages decouples the producer from the consumer.
A message queue will keep the processes in your application separated and independent of each other; this way of handling messages could create a system that is easy to maintain and easy to scale.
Simply put, two obvious cases can be used as examples of when message queues really shine:
For long-running processes and background jobs
As the middleman in between microservices
For long-running processes and background jobs:
When requests take a significant amount of time, it is the perfect scenario to incorporate a message queue.
Imagine a web service that handles multiple requests per second and cannot under any circumstances lose one. Plus the requests are handled through time-consuming processes, but the system cannot afford to be bogged down. Some real-life examples could include:
Image scaling
Sending large/many emails (like newsletters)
Search engine indexing
File scanning
Video encoding
Delivering notifications
PDF processing
Calculations
The middleman in between microservices:
For communication and integration within and between applications, i.e. as the middleman between microservices, a message queue is also useful. Think of a system that needs to notify another part of the system to start to work on a task or when there are a lot of requests coming in at the same time, as in the following scenarios:
Order handling (Order placed, update order status, send an order, payment, etc.)
Food delivery service (Place an order, prepare an order, deliver food)
Any web service that needs to handle multiple requests
Here is a story explaining how Parkster (a digital parking service) are breaking down their system into multiple microservices by using RabbitMQ.
This guide follows a scenario where a web application allows users to upload information to a web site. The site will handle this information, generate a PDF and email it back to the user. Handling the information, generating the PDF and sending the email will in this example take several seconds, and that is one of the reasons why a message queue is used.
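As a rough sketch of that kind of flow, assuming the RabbitMQ Java client (the queue name and the job payload are made up): the web endpoint only enqueues the job and returns, while a separate worker process picks jobs up at its own pace.

```java
import com.rabbitmq.client.*;

import java.nio.charset.StandardCharsets;

public class BackgroundJobQueueSketch {
    private static final String QUEUE = "pdf-jobs"; // illustrative queue name

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: local broker

        // Producer: the web service only enqueues the job and returns immediately.
        try (Connection conn = factory.newConnection(); Channel ch = conn.createChannel()) {
            ch.queueDeclare(QUEUE, true, false, false, null); // durable queue
            ch.basicPublish("", QUEUE,
                    MessageProperties.PERSISTENT_TEXT_PLAIN,
                    "{\"userId\":7,\"template\":\"invoice\"}".getBytes(StandardCharsets.UTF_8));
        }

        // Consumer: a separate worker process handles jobs one at a time.
        Connection conn = factory.newConnection();
        Channel ch = conn.createChannel();
        ch.queueDeclare(QUEUE, true, false, false, null);
        ch.basicQos(1); // at most one unacknowledged job per worker
        ch.basicConsume(QUEUE, false, (tag, delivery) -> {
            String job = new String(delivery.getBody(), StandardCharsets.UTF_8);
            System.out.println("Generating PDF for: " + job); // long-running work here
            ch.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        }, tag -> { });
    }
}
```

The basicQos(1) call is what spreads long-running jobs evenly: a worker is only handed a new job after it has acknowledged the previous one.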
Here is a story about how and why CloudAMQP used message queues and RabbitMQ between microservices.
Here is a story about the usage of RabbitMQ in an event-based microservices architecture to support 100 million users a month.
And finally a link to Kontena, about why they chose RabbitMQ for their microservice architecture: "Because we needed a stable, manageable and highly-available solution for messaging.".
Please note that I work for the company behind CloudAMQP (hosting provider of RabbitMQ).
The same question could be asked of why REST is necessary for microservices. The microservice concept is nothing new under the sun. Distributing a workflow across components has long been used in backend engineering for asynchronous request processing; a microservice is the same kind of component in a separate JVM, matching the S (single responsibility) in SOLID. What makes it a micro SERVICE is that it is load balanced. And that is all! In particular, it can be a REST service built on Spring Cloud, registered with Eureka, with a proxy gateway and load balancing via Zuul and Ribbon. But that is not the whole world of microservices.

By the way, asynchronous distributed processing is one of the tasks microservices are used for. Long ago, services (components) in separate JVMs were integrated over messaging, and that pattern is known as ESB; microservices are the same kind of participants in that pattern. Due to the fashion for Spring Cloud, REST can seem like the only way to build microservices. Nope! Message-based asynchronous microservice architecture is supported by Vert.x, for example: https://dzone.com/articles/asynchronous-microservices-with-vertx. Why not use RabbitMQ as the message channel? In that case load balancing can be provided by building a RabbitMQ cluster. For example: https://codeburst.io/using-rabbitmq-for-microservices-communication-on-docker-a43840401819. So the world is much wider than that.

NServiceBus routing

We have multiple web and Windows applications, deployed to different servers, that we are planning to integrate using NServiceBus so that all apps can pub/sub messages between them. I think the pub/sub pattern with the MSMQ transport will be a good fit. One thing I am not clear about is whether there is a way to avoid hard-coding the subscription endpoint as MSMQ QueueName#ServerName, which has the server name in it directly, when the publisher is on another server. In 6-pre I saw the idea of setting an endpoint name and then using routing to delegate to the transport-level address; is that a solution for this? Or is the gateway the only solution? Is a broker a good idea? What is the best practice for this scenario?
When using pub/sub, the subscriber currently needs to know the location of the publisher's queue. The subscriber then sends a subscription message to that queue, every single time it starts up. It cannot know whether it has already subscribed, or whether it has subscribed to all the messages, since you might have added/configured some new ones.
The publisher reads these subscriptions messages and stores the subscription in storage. NServiceBus does this for you, so there's no need to write code for this. The only thing you need is configuration in the subscriber as to where the (queue of the) publisher is.
I wrote a tutorial myself which you can find here : http://dennis.bloggingabout.net/2015/10/28/nservicebus-publish-subscribe-tutorial/
That being said, you should take special care related to issues regarding websites that publish messages. More information on that can be found here : http://docs.particular.net/nservicebus/hosting/publishing-from-web-applications
In a scale out situation with MSMQ, you can also use the distributor : http://docs.particular.net/nservicebus/scalability-and-ha/distributor/
As a final note: It depends on the situation, but I would not worry too much about knowing locations of endpoints (or their queues). I would most likely not use pub/sub just for this 'technical issue'. But again, it completely depends on the situation. I can understand that rich-clients which spawn randomly might want this. But there are other solutions as well, with a more centralized storage and an API that is accessed by all the rich clients.

CQRS using Redis MQ

I have been working on a CQRS project (my first) over the last 9 months, which has been a steep learning curve. I am currently using JOliver's excellent EventStore in my write model and PostgreSQL for my read model.
Both my read and write databases are on the same machine which means that when a change is made to the write database, in the same synchronous call a change is made to the read model.
As I was learning CQRS I felt this was the best way to go as I had no experience with message queue/service bus frameworks such as MassTransit, NServiceBus etc.
I am now at a point with most of my architecture in place to introduce a message queue framework.
Today, I came across Redis MQ, which is part of ServiceStack, and as we are already using ServiceStack for our REST-based HTTP clients, this seems like the right way to go.
My question is more about understanding what I need to know (or if I have any misunderstandings) to implement Redis MQ and whether Redis MQ is the right choice?
Now from what I understand, I would use Redis MQ as a durable queue between the write and read database. Once my event store has recorded that something has happened in my domain, it will publish to Redis MQ. The services listening for events/messages would receive the event/message from Redis MQ and, once they have processed it (i.e. updated or written to the read model), a notification/response goes back to the event store to tell the event store that the message has been received and processed by the listener/subscriber.
Does this sound correct?
Also would the Redis MQ architecture give me everything that NSB, RavenDB, MassTransit etc offer?
Also, I will be deploying to Windows Server 2008 and 2003. Is Redis stable on these OSs?
I think the ServiceStack implementation of message queueing in Redis is more appropriate for job-queue scenarios - it pushes a message onto the end of a Redis list and then uses Redis pub-sub to notify listening subscribers that there is a message to pull from the queue. Any consumers would be competing for messages.
For event sourcing, you may be more interested in a type of fanout or topic based messaging topology as offered by RabbitMQ, not that that precludes you from building that sort of thing using Redis data structures yourself.
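Purely for orientation, here is a rough sketch of that list-plus-notification pattern using the Jedis client; this is not ServiceStack's actual API, and the key and channel names are invented. With a blocking pop the separate notification channel becomes optional, but it mirrors the idea of waking idle workers:

```java
import redis.clients.jedis.Jedis;

import java.util.List;

public class RedisListQueueSketch {

    // Producer side: append the serialized message to a list, then notify.
    static void enqueue(Jedis jedis, String message) {
        jedis.lpush("mq:events", message);      // the list acts as the durable queue
        jedis.publish("mq:events:notify", "");  // optional wake-up for idle workers
    }

    // Consumer side: competing workers each pop one message at a time.
    static void workerLoop(Jedis jedis) {
        while (true) {
            // BRPOP blocks until a message is available (timeout 0 = wait forever)
            // and returns [key, value]; with a blocking pop the notify channel
            // above is not strictly needed in this simplified sketch.
            List<String> popped = jedis.brpop(0, "mq:events");
            String event = popped.get(1);
            System.out.println("Updating read model for event: " + event);
        }
    }

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            enqueue(jedis, "{\"type\":\"OrderPlaced\",\"id\":42}");
        }
        // Each worker process would run: workerLoop(new Jedis("localhost", 6379));
    }
}
```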
Now from what I understand, I would use Redis MQ as a durable queue between the write and read database.
Yes this is correct.
Once my event store has recorded that something has happened in my domain, it will publish to Redis MQ.
Yes and this can be done in several ways. It can either happen as part of the transaction which persists to the event store or you can have an out of band process which continuously publishes events from the event store.
a notification/response goes back to the event store to tell the event store that the message has been received and processed by the listener/subscriber.
The response back to the event publisher is usually omitted. This truly decouples the publishers from subscribers. You make the assumption that once the message is published, all interested subscribers will handle it. If something happens, an error should be logged.
Also would the Redis MQ architecture give me everything that NSB, RavenDB, MassTransit etc offer?
I don't have experience running Redis MQ, but I do know that Redis supports pub/sub, which is one of the value propositions of NSB and MassTransit (as opposed to, say, bare-bones MSMQ). What MT and NSB offer beyond pub/sub are sagas, and it doesn't seem like Redis MQ supports those out of the box, at least. You may not ever have a need for sagas, so this should not automatically be a deterrent. RavenDB is not a message queue, so it doesn't apply here.
Also, I will be deploying to Windows Server 2008 and 2003. Is Redis stable on these OSs?
I've run Redis on 2008 R2 and it has been stable so I would think Redis MQ would be stable as well.
You may be interested in a little side project of mine on GitHub which is a queue and persistence implementation for NServiceBus using Redis. https://github.com/mackie1001/NServicebus.Redis
I wouldn't call it production-ready, and I want to port it to NSB 4 and do some thorough testing, but the meat of it is done.