Apache NiFi TCP Client/Server

Can I simulate a TCP client/server interaction using Apache NiFi processors alone or do I have to write code for this? The processors to be considered here are ListenTCP, PutTCP, and GetTCP. In particular, I want to simulate and show a POC for sending HL7 messages from a TCP client to a TCP server. Anyone done this before using NiFi? Any help would be appreciated. Thanks.

ListenTCP starts a server socket that waits for incoming TCP connections. Your client can connect to the hostname where NiFi is running, on the port specified in ListenTCP. If your client needs to send multiple pieces of data over a single connection, it must send a new-line between each message. You can simulate a client in NiFi by using PutTCP and pointing it at the same host/port where ListenTCP is running.
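If you want to drive it from outside NiFi instead, the client side is just a socket write. Here is a minimal Java sketch of that; the host, port, and sample HL7 segment are placeholders, and PutTCP does essentially the same thing for you:

    import java.io.OutputStream;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    public class Hl7TcpClient {
        public static void main(String[] args) throws Exception {
            // Connect to the host/port configured in ListenTCP (placeholders).
            try (Socket socket = new Socket("nifi-host", 9999)) {
                OutputStream out = socket.getOutputStream();
                String hl7 = "MSH|^~\\&|SENDER|FAC|RECEIVER|FAC|20240101||ADT^A01|1|P|2.3";
                // ListenTCP splits incoming bytes on new-lines, so terminate
                // each message with one.
                out.write((hl7 + "\n").getBytes(StandardCharsets.UTF_8));
                out.flush();
            }
        }
    }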
UPDATE - Here is an example of the flow: a PutTCP processor (the simulated client) pointed at the host/port of ListenTCP (the server). [flow screenshot omitted]

Related

Envoy proxy with RabbitMQ

We have been migrating to .NET Core console-app microservices. Currently, each microservice works in a chain: it puts messages in RabbitMQ, the next service picks a message from RabbitMQ, processes it, then puts it in another RabbitMQ queue, and so on. We have around nine services.
We are seeing issues where services fail and we have no idea why, but we often see problems with RabbitMQ connections or network issues reaching the next server (some VMs have all services hosted on the same box; others are distributed between boxes).
I've been looking at Envoy proxy, as it handles circuit breaking and similar concerns and claims to provide observability.
However, I cannot find anything online about anyone using Envoy proxy with RabbitMQ.
Can Envoy proxy be used with RabbitMQ in this manner?
Or does Envoy proxy act as the queue?
We currently handle about 4,000 messages a second, and we need to process them in as near real-time as possible.
Envoy does not act as the queue, so it can't replace your message-based communication system. It can, however, proxy traffic to/from the RabbitMQ servers to give you some bits of what you're looking for.
What you'd do is use the TCP proxy capability to set up TCP reverse proxies to RabbitMQ. Your services would then all connect to the Envoy proxy rather than directly to the message queue. Envoy's built-in stats will then output metrics on the TCP connections it handles (all the RabbitMQ protocols appear to be TCP-based). It also natively supports circuit breakers, timeouts, retries, and so on, so you'll get all of those, though you'll definitely have to tune them for your particular deployment.
We've done this pattern multiple times at my company, just with Kafka rather than RabbitMQ. However, since they're both TCP-based, it should work similarly.
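For illustration, a minimal sketch of such a TCP proxy listener in Envoy's v3 config; the addresses, ports, and cluster names here are placeholders, not taken from the question:

    static_resources:
      listeners:
      - name: rabbitmq_listener
        address:
          socket_address: { address: 0.0.0.0, port_value: 5672 }
        filter_chains:
        - filters:
          - name: envoy.filters.network.tcp_proxy
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
              stat_prefix: rabbitmq        # prefix for the emitted TCP connection stats
              cluster: rabbitmq_cluster
      clusters:
      - name: rabbitmq_cluster
        connect_timeout: 5s
        type: STRICT_DNS
        load_assignment:
          cluster_name: rabbitmq_cluster
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address: { address: rabbitmq1.internal, port_value: 5672 }

Your services would point their RabbitMQ connection strings at the Envoy host on port 5672 instead of at the broker directly.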

Cannot receive UDP when different sockets bind to same port

I came across an interesting observation in an SO post, where there are two client processes (the client is behind a NAT) that both bind locally to the same port (reusing the port); one process uses a UDP socket to send data, and the other receives.
It turns out, the receive process could not receive the data.
Client Process (Send) --- Port 5000 ---> NAT --- Port 5333 (say) ---> Server     (this works)
Server --- Port 5333 ---> NAT --- Port ?? ---> Client Process (Recv)             (this doesn't work)
It seems that if a single client process uses the same socket for both send and receive, it does receive the data from the server.
Why this behavior? If both the client send and receive processes were bound to the same port, shouldn't things have worked?
Why does using different processes cause this? It looks as if, because they are separate processes, different ports end up being used despite port reuse?
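For reference, a rough Java sketch of the kind of setup being described (the port number is an assumption); each of the two client processes would run something like this:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetSocketAddress;

    public class ReuseBoundClient {
        public static void main(String[] args) throws Exception {
            DatagramSocket sock = new DatagramSocket(null); // create unbound
            sock.setReuseAddress(true);                     // must precede bind()
            sock.bind(new InetSocketAddress(5000));         // same port in both processes

            // As observed in the question, an incoming unicast datagram is
            // delivered to only one of the sockets sharing the port - the
            // receive-only process may never see it.
            DatagramPacket pkt = new DatagramPacket(new byte[1500], 1500);
            sock.receive(pkt);
            System.out.println("received " + pkt.getLength() + " bytes");
        }
    }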

ActiveMQ replicated LevelDB with ZooKeeper, client must know all brokers?

The client must know all brokers when using the Failover Transport, right? Like this:
failover:(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)
Is there an optimization, so that the client does not have to know about the existence of each broker?
Put a TCP load balancer in front of the brokers and only forward requests to the master broker. The LB can determine which broker is online by checking the "Slave" attribute of the broker via Jolokia/JMX.
A standalone approach would be to provide a URL that points to a comma-separated list of broker URLs to try in case of failure. This can be done using the updateURIsURL option in the failover URI.
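A sketch of what that might look like from a JMS client; the URL is a placeholder, and the referenced file would contain a comma-separated broker list like the one above:

    import javax.jms.Connection;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class FailoverClient {
        public static void main(String[] args) throws Exception {
            // brokers.txt holds a comma-separated list, e.g.
            // tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616
            String uri = "failover:()?updateURIsURL=http://config-host/brokers.txt";
            ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(uri);
            Connection connection = factory.createConnection();
            connection.start();
            // ... create sessions/producers/consumers as usual ...
            connection.close();
        }
    }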
There are also possibilities to auto-discover brokers using multicast or by querying an LDAP directory, but that requires certain infrastructure to be in place; see the ActiveMQ documentation on discovery for details.
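For completeness, the multicast variant on the client side is typically just a discovery URI (this assumes the brokers advertise themselves via a multicast discoveryUri on their transport connector):

    // Client connection URI using multicast discovery instead of a fixed list
    String uri = "discovery:(multicast://default)";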

Spring STOMP Broker Relay + RabbitMQ Cluster with HA Proxy fronting each for load balancing

I am designing a system where a huge amount of real-time data generated by devices is to be transferred to subscribers, preferably over WebSockets. I have decided to use Spring STOMP WebSockets, as it was quicker to set up and understand, and it supports a few things out of the box, like RabbitMQ and security. Also, the plan is to use Spring for another REST API, so Spring is already the tech stack of choice. RabbitMQ is the message broker I have decided on. However, I cannot find much guidance on how to scale such a system.
The possible solution I am thinking of is to add HAProxy in front of the STOMP broker instances, and also between the STOMP brokers and a RabbitMQ cluster; HAProxy will act as a load balancer in both cases. The Spring STOMP broker relay will then point to HAProxy as its broker relay host. The requirement is high availability and no data loss.
As I do not have prior experience with WebSockets, I would like guidance on whether this solution sounds correct, or whether there is anything I am missing here.
Note: In this system, both the message producers and the consumers are actually WebSocket Java clients. I took the sample code from https://github.com/nickebbutt/stomp-websockets-java-client and created two separate clients - one that only sends messages, i.e. device data (the producer), and another that subscribes to those messages (the consumer). Both connect to the same STOMP broker using the same WebSocket URL. With the above system implementation, the clients will point to HAProxy for the WebSocket connection.
Just an update on this: I experimented by creating the above setup and it worked, i.e. I was able to connect to the WebSocket STOMP server and send/receive data via the RabbitMQ broker, with HAProxy load balancing as described. The broker host/port configured in Spring pointed to HAProxy, which in turn forwarded requests to the RabbitMQ backend. Similarly, the WebSocket clients connected to the Spring STOMP WebSocket server application via HAProxy.
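For reference, a minimal sketch of the Spring relay configuration described above; the HAProxy host name and the ports are placeholders (61613 is RabbitMQ's usual STOMP plugin port):

    import org.springframework.context.annotation.Configuration;
    import org.springframework.messaging.simp.config.MessageBrokerRegistry;
    import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
    import org.springframework.web.socket.config.annotation.StompEndpointRegistry;
    import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

    @Configuration
    @EnableWebSocketMessageBroker
    public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

        @Override
        public void configureMessageBroker(MessageBrokerRegistry registry) {
            // Point the broker relay at HAProxy, which forwards to the
            // RabbitMQ cluster's STOMP listeners (placeholder host/port).
            registry.enableStompBrokerRelay("/topic", "/queue")
                    .setRelayHost("haproxy.internal")
                    .setRelayPort(61613);
            registry.setApplicationDestinationPrefixes("/app");
        }

        @Override
        public void registerStompEndpoints(StompEndpointRegistry registry) {
            // WebSocket clients also reach this endpoint via HAProxy.
            registry.addEndpoint("/ws").withSockJS();
        }
    }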

recv() fails on UDP

I’m writing a simple client-server app which for the time being will be for my own personal use. I’m using Winsock for the net communication. I have not done any networking for the last 10 years, so I am quite rusty. I’d like to use as little external code as possible, so I have written a home-made server discovery mechanism, as follows.
The client broadcasts a message containing the ‘name’ of a client UDP socket bound to an arbitrary port, which I will call the client’s discovery socket. The server recv()s the broadcast and then sendto()s the ‘name’ of its listening socket to the client’s discovery socket. The client then uses this info to connect to the server (on a different socket). This mechanism should allow the server to bind its listening socket to the first port it can get within the dynamic port range (49152-65535), and the clients to discover where the server is and on which port it is listening.
The server part works fine: the server receives the broadcast messages and successfully sends its response.
On the client side, the firewall log shows that the server’s response arrives at the machine and that it is addressed to the correct port (the client’s discovery socket).
But the message never makes it to the client app. I’ve tried doing a recv() in blocking and non-blocking mode, and there is never any data available. ioctlsocket() always shows no data available, even though I know the packet got to the machine.
The server succeeds in doing a recv() on broadcast data, but the client fails to recv() the server’s response, which is addressed to its discovery socket.
The question is admittedly vague: what gotchas should I watch for in this scenario? Why would recv() fail to get a packet that has actually arrived at the machine? The sockets are UDP, so the fact that they are not connected should be irrelevant. Or is it?
Many thanks in advance.
The client broadcasts a message containing the ‘name’ of a client UDP socket bound to an arbitrary port, which I will call the client’s discovery socket.
The message doesn't need to contain anything. Just broadcast an empty message from the 'discovery socket'. recvfrom() will tell the server where it came from, and it can just reply directly.
The server recv()s the broadcast and then sendto()s the ‘name’ of its listening socket to the client’s discovery socket.
Fair enough, although actually the server could just broadcast its own TCP listening port every 5 seconds or whatever.
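To make the simplified exchange concrete, here is a rough Java sketch of the same idea (the discovery port is a placeholder; the original code is Winsock, but the shape is the same):

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.charset.StandardCharsets;

    public class Discovery {
        static final int DISCOVERY_PORT = 49200; // placeholder

        // Client: broadcast an empty datagram from the discovery socket,
        // then wait on the same socket for the server's direct reply.
        static int findServer() throws Exception {
            try (DatagramSocket discovery = new DatagramSocket()) { // ephemeral port
                discovery.setBroadcast(true);
                discovery.send(new DatagramPacket(new byte[0], 0,
                        InetAddress.getByName("255.255.255.255"), DISCOVERY_PORT));
                DatagramPacket reply = new DatagramPacket(new byte[16], 16);
                discovery.receive(reply); // reply carries the server's listening port
                return Integer.parseInt(new String(reply.getData(), 0,
                        reply.getLength(), StandardCharsets.US_ASCII));
            }
        }

        // Server: the received packet's source address/port IS the client's
        // discovery socket, so reply to it directly - no broadcast needed.
        static void answerOnce(int listeningPort) throws Exception {
            try (DatagramSocket server = new DatagramSocket(DISCOVERY_PORT)) {
                DatagramPacket probe = new DatagramPacket(new byte[16], 16);
                server.receive(probe);
                byte[] port = Integer.toString(listeningPort)
                        .getBytes(StandardCharsets.US_ASCII);
                server.send(new DatagramPacket(port, port.length,
                        probe.getSocketAddress()));
            }
        }
    }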
On the client side the firewall log shows that the server’s response arrives to the machine and that it is addressed to the correct port (to the client’s discovery socket). But the message never makes it to the client app
If it got to the host it must get to the application. You must have got the ports mixed up somehow. Simplify it as above and retry.
Well, it was one of those stupid situations: Windows Firewall was active in addition to the other firewall, and it was silently dropping packets. Deactivating it solved the problem.
But I still don’t understand how it works, as it was allowing the server to receive packets sent through broadcasting. And when, at my wits’ end, I set the server to answer back through a broadcast, THOSE packets got dropped.
Two days of frustration. I hope someone profits from my experience.