Custom Sink: Multiple Backends - tap

I have a scenario where I need to update or insert data into multiple datastores. There is no orchestration of logic, just insert/update of data, but against multiple backends, say Oracle, Mongo, DB2, etc. There are multiple ways to achieve this, and we are looking at these options:
1) Use a tap on the main stream to update the different backends. I am concerned that this is an anti-pattern with respect to the Wire Tap pattern. Please advise.
2) The cleaner approach, as I see it, is to develop a custom sink that updates the different backends using Spring Integration. Is this a valid/agreeable pattern?
Please provide your input on which is the right way to achieve this use case.
Thanks
Karthik

1) Use a tap on the main stream to update the different backends. I am concerned that this is an anti-pattern with respect to the Wire Tap pattern. Please advise.
I'm not sure I understand what you mean by anti-pattern. The way a tap works in Spring XD is that you connect to a pub/sub endpoint on the message bus (a topic for Kafka, an exchange for Rabbit) to receive the tapped data.
The actual wire-tap channel (the SI channel created for the module's message output) still has only a single secondary channel that sends the messages to the pub/sub endpoint on the message bus. All you do is connect to that endpoint with as many tap streams (as subscribers) as you need to receive the data.
Hence I believe creating taps (e.g. a stream defined as tap:stream:yourstream > sink) would make sense in your case.

Thanks for your time; I appreciate it.
I found a cleaner way to update multiple backends: a dynamic router as a sink. The dynamic router then routes data, based on a header value, to stream A and stream B, which have sinks that update the backend RDBMS (see the sketch after this message).
As for the tap, I am not convinced about updating business-critical data in the backends through what is a wire-tap implementation.
Thanks
Karthik
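
For reference, the header-based routing can be expressed in a few lines of plain Spring Integration (the same idea as option 2's custom sink). This is a minimal sketch, assuming a hypothetical "backend" header and hypothetical channel names; each mapped channel would feed the outbound adapter that performs the actual insert/update:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.router.HeaderValueRouter;

@Configuration
public class BackendRouterConfig {

    // Routes each message arriving on the sink's input channel to a
    // backend-specific channel, keyed on the (hypothetical) "backend" header.
    @Bean
    @ServiceActivator(inputChannel = "sinkInput")
    public HeaderValueRouter backendRouter() {
        HeaderValueRouter router = new HeaderValueRouter("backend");
        router.setChannelMapping("oracle", "oracleChannel");
        router.setChannelMapping("mongo", "mongoChannel");
        return router;
    }
}
```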

Long Link RMQ clustering

Sorry for what may seem a simple question to some, but here goes.
We currently use RMQ for pretty simple client/queue/consumer-type transactions. Some use return message queues, while others are just simple 'jobs'.
We are looking to distribute between 'sites', i.e. use cases of RMQ clustering with nodes that are not co-located but connected over a WAN.
Has anyone done such a thing, or should I bite the bullet and move to ActiveMQ/Artemis?
Thank you for any insights.
ActiveMQ 5's network connectors are designed to support the "long link" or WAN connectivity pattern (along with others). The messaging pattern is known as 'store-and-forward'. It supports one-way push, bi-directional, and pull approaches.
Network of Brokers
ref: https://activemq.apache.org/networks-of-brokers
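
As an illustration of the network-of-brokers setup, a store-and-forward link between two sites can be configured with a network connector (the same connector can be declared in activemq.xml). A minimal embedded-broker sketch, with hypothetical broker names and hostnames:

```java
import org.apache.activemq.broker.BrokerService;

public class SiteABroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("siteA");
        // Local transport for producers/consumers at this site.
        broker.addConnector("tcp://0.0.0.0:61616");
        // The "long link": messages are stored locally and forwarded to
        // siteB whenever the WAN connection is available.
        broker.addNetworkConnector("static:(tcp://siteB.example.com:61616)");
        broker.start();
        broker.waitUntilStopped();
    }
}
```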

How would I go about creating a streaming API that receives data from a POST request and pushes it out?

I am interested in developing an API that is capable of receiving data in real time and pushing it out to clients connected to an endpoint. I have looked into socket.io and WebSockets; however, these depend on events being triggered to send/receive data, which isn't ideal for my use case. What alternatives are there for achieving this?
Any help and advice are greatly appreciated.
So if I understand it right, you want to write a streaming service that can push updates on some data in real time over an endpoint exposed to the clients. Looking at your problem statement, I think webhooks might be a solution. I'd also recommend looking at https://www.youtube.com/watch?v=63grynZmo7c; it covers the basics of creating a webhook and starting to receive real-time updates on it.
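
To make the webhook idea concrete, here is a minimal sketch using only the JDK (Java 11+): an /ingest endpoint accepts POSTed data and immediately re-POSTs it to every registered subscriber URL. All paths and the hard-coded subscriber are hypothetical; a real service would expose a registration endpoint for clients to submit their callback URLs:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class WebhookRelay {
    // Hypothetical subscriber list; in practice clients register these URLs.
    private static final List<URI> subscribers = new CopyOnWriteArrayList<>();
    private static final HttpClient client = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        subscribers.add(URI.create("http://localhost:9000/hook")); // example

        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/ingest", exchange -> {
            byte[] body = exchange.getRequestBody().readAllBytes();
            // Fan the payload out to every registered webhook, asynchronously,
            // so the producer's POST returns immediately.
            for (URI target : subscribers) {
                HttpRequest req = HttpRequest.newBuilder(target)
                        .POST(HttpRequest.BodyPublishers.ofByteArray(body))
                        .build();
                client.sendAsync(req, HttpResponse.BodyHandlers.discarding());
            }
            exchange.sendResponseHeaders(202, -1); // 202 Accepted, no body
            exchange.close();
        });
        server.start();
    }
}
```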

Two microservices linked with RabbitMQ

I have two microservices. One is connected to a MongoDB database and the other to PostgreSQL. I need to transfer information from the second microservice to the first and vice versa, and for this I used RabbitMQ. Is it possible to use RabbitMQ for such purposes or not? (Everything works for me; I'm only interested in whether I used RabbitMQ correctly.)
There are two ways to communicate/transfer data:
1) Expose an HTTP endpoint (GET in your case), so any other microservice can fetch the information over HTTP.
2) As you have already implemented, publish an event carrying the data; the other microservice listens for the event and syncs its data (see the sketch below).
As you mentioned in a comment, asynchronous communication is the best option for your requirement.
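
For completeness, the event-based approach in option 2 comes down to very little code with the RabbitMQ Java client. A minimal sketch; the queue name and payload are made up, and in reality the publish and consume sides live in the two separate services:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.nio.charset.StandardCharsets;

public class EventBridge {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            // Durable queue shared by the two services (name is hypothetical).
            channel.queueDeclare("user.events", true, false, false, null);

            // The PostgreSQL-side service publishes a change event...
            channel.basicPublish("", "user.events", null,
                    "user 42 updated".getBytes(StandardCharsets.UTF_8));

            // ...and the MongoDB-side service consumes it and syncs its store.
            channel.basicConsume("user.events", true,
                    (consumerTag, delivery) -> System.out.println(
                            new String(delivery.getBody(), StandardCharsets.UTF_8)),
                    consumerTag -> { });

            Thread.sleep(1000); // give the consumer a moment before closing
        }
    }
}
```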

Redis keyspace notifications - get values (small size) of SET operations

I'm working on creating a DB with Redis.
One of my requirements is that all the clients in the system be able to listen for SET events and get information about both the key and the changed value.
I know that a published value may be big (up to 512 MB), but in my system the size of a value will never exceed 100 characters.
I have three possible solutions and wonder which one is better, or whether I should consider others:
1) After each SET operation, the client also publishes it (PUB/SUB).
2) Edit the setGenericCommand function to publish the value as well, and use keyspace notifications.
3) After a client receives a keyspace notification, it fetches the value with a GET operation.
I would like to understand which approach is better.
Thank you!
So, first and foremost, remember that Pub/Sub is at-most-once delivery. If you really need to process every change in the client, you should consider a more resilient way to do so.
That said, assuming you're OK with Pub/Sub's promises, option 1 is the simplest and I'd go with that. At most, I'd provide the clients with a Lua wrapper that combines the SET and PUBLISH commands (sketched below). This, of course, removes the need to actually listen to keyspace notifications, as you are basically implementing them yourself.
Option 2 means hacking Redis, which is great fun, but it also means you'll have to maintain your own fork, which is meh.
Option 3 is also simple enough, but with option 1 you get away with a single round trip instead of two.
Another approach (4) is to write a custom module, but IMO that's too complex for this need. Go with option 1 and Lua, and may the force be with you.
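
A minimal sketch of that option-1 wrapper from Java via Jedis; the script performs the SET and the PUBLISH atomically on the server, in one round trip, with the key name doubling as the channel name (the key and value here are made up):

```java
import redis.clients.jedis.Jedis;

public class SetAndPublish {
    // SET the key, then PUBLISH the new value on a channel named after the
    // key, in a single atomic server-side step (one client round trip).
    private static final String SCRIPT =
            "redis.call('SET', KEYS[1], ARGV[1]) " +
            "return redis.call('PUBLISH', KEYS[1], ARGV[1])";

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // PUBLISH returns the number of subscribers that received the value.
            Object receivers = jedis.eval(SCRIPT, 1, "sensor:42", "97.5");
            System.out.println("delivered to " + receivers + " subscribers");
        }
    }
}
```

Clients then simply SUBSCRIBE to the keys they care about, with no keyspace notifications needed.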

Best way to display dynamic data in a webpage

My goal is to visualize an incoming data stream in a browser. I have used ActiveMQ to queue the stream. A single message consumed from the queue looks like this: "int,date/time,int,string". I have to update a line graph in the browser every 100 ms. Any ideas?
It sounds like a use case for WebSocket.
There are many ways to implement it, but a rather nice blog post on the topic is presented here.
Another way is to use MQTT directly from the browser via JavaScript and subscribe to a topic carrying your updates. In this case you have to forward your data to that topic; for that, you can use composite queues with forwardOnly=false.
If you're using ActiveMQ, you could enable its WebSocket interface: http://activemq.apache.org/websockets.html
In your browser code, use the STOMP over WebSocket library to subscribe to the queue. http://jmesnil.net/stomp-websocket/doc/
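
On the server side, whatever process produces the samples has to publish them to the destination the browser subscribes to. A minimal JMS sketch; the topic name and payload are made up, and the browser's STOMP subscription would point at the matching /topic/chart.updates destination:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ChartFeed {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection conn = factory.createConnection();
        try {
            conn.start();
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // Hypothetical topic; the browser subscribes to it over
            // STOMP/WebSocket and redraws the line graph on each message.
            MessageProducer producer =
                    session.createProducer(session.createTopic("chart.updates"));
            // One "int,date/time,int,string" sample, as in the question.
            producer.send(session.createTextMessage(
                    "42,2021-06-01T12:00:00,7,sensorA"));
        } finally {
            conn.close();
        }
    }
}
```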