By default, ActiveMQ uses the TCP transport, but I have now changed it to use SSL.
If I deploy the publisher and the server on one machine, it makes no difference to the speed. But after I deploy them on different machines, SSL is much slower than TCP. Is this normal? If not, what is probably wrong with my code?
Thanks.
It depends on how much slower your application is running.
If you process huge data volumes, SSL will take a decent amount of CPU cycles to encrypt (and decrypt) the data. Is it the ActiveMQ server that is slower, or the client? Profile the system setup to get an overview of where the bottlenecks are.
Another possibility is frequent handshakes. If your client code (can you post it?) sends messages by opening a new connection for each message, the latency per message will suffer from the increased SSL handshake time compared to plain TCP.
UPDATE:
A speedup would be to reuse your connection. In your case an SSL handshake has to be done for every message sent, which involves CPU-expensive asymmetric crypto and a few more round trips than plain TCP. Reusing connections is easy with the pooled connection factory provided by ActiveMQ. This example does not alter your code much:
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;

public class MySender {
    // The pooled factory keeps the underlying SSL connection open between
    // send() calls, so the expensive handshake is not repeated per message.
    private static ConnectionFactory factory =
            new org.apache.activemq.pool.PooledConnectionFactory("ssl://192.168.0.111:61616");

    // newDataEvent and xstream come from your existing code.
    public void send() throws JMSException {
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic(newDataEvent.getDataType().getType());
        MessageProducer producer = session.createProducer(topic);
        TextMessage message = session.createTextMessage();
        message.setText(xstream.toXML(newDataEvent));
        producer.send(message);
        session.close();
        connection.close(); // returns the connection to the pool instead of closing it
    }
}
In one of our Spring Boot apps used in Spring Cloud Data Flow streams, we are currently using HapiContext to construct a new HL7 client and establish a connection from it to send HL7 messages to a TCP host and port.
@Autowired
HapiContext context;
Connection connection = context.newClient(host, Integer.parseInt(port), false);
// The initiator which will be used to transmit our message
Initiator initiator = connection.getInitiator();
Message response = initiator.sendAndReceive(adtMessage);
Currently we are not using SSL/TLS for this connection and call, but we now have a requirement that the call be changed to an SSL-based one.
I have tried doing a lot of searches on the Internet, but I am not able to find any documentation on how to achieve this.
Is there any way to get this done?
How are you creating the HapiContext?
The DefaultHapiContext seems to provide for creating a client with a tls parameter.
Look up ca.uhn.hl7v2.hoh.sockets.CustomCertificateTlsSocketFactory; it should have a createClientSocket method which will add the necessary SSL context.
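As a minimal sketch (assuming the standard DefaultHapiContext newClient API the answer above refers to, and that the server's certificate is trusted by the JVM's truststore, e.g. via -Djavax.net.ssl.trustStore), enabling TLS can be as simple as flipping the third argument of newClient:
// Sketch only: same call as before, but with tls = true.
// Assumes the server certificate is trusted via the default or JVM-configured truststore.
Connection connection = context.newClient(host, Integer.parseInt(port), true);
Initiator initiator = connection.getInitiator();
Message response = initiator.sendAndReceive(adtMessage);
If you need a custom certificate or truststore, that is where a TLS socket factory such as CustomCertificateTlsSocketFactory comes in; how it is wired into the context depends on the HAPI version, so check the documentation for your release.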
I have a chat implementation working with CometD.
On the front end I have a Client that has clientId=123 and is talking to VirtualMachine-1.
The long-polling connection between VirtualMachine-1 and the Client is done through the clientId. When the connection is established during the handshake, VirtualMachine-1 registers the clientId 123 as its own and accepts its data.
If VM-1 is restarted or fails, the long-polling connection between the Client and VM-1 is disconnected (since VirtualMachine-1 is dead, the heartbeats fail, so the connection drops).
In that case, the CometD load balancer will re-route the Client communication to a new VirtualMachine-2. However, since VirtualMachine-2 does not know this clientId, it is not able to understand the "123" coming from the Client.
My question is: what is the CometD behavior in this case? How does it re-route the traffic from VM-1 to a new VM-2 so that the handshake process succeeds?
When a CometD client is redirected to the second server by the load balancer, the second server does not know about this client.
The client will send a /meta/connect message with clientId=123, and the second server will reply with a 402::unknown_session and advice: {reconnect: "handshake"}.
When receiving the advice to re-handshake, the client will send a /meta/handshake message and will get a new clientId=456 from the second server.
Upon handshake, a well-written CometD application will re-subscribe (even for dynamic subscriptions) to all needed channels, and eventually be restored to function as before, almost transparently.
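For example, a minimal sketch with the CometD 3.x/4.x Java client (the JavaScript client has an equivalent /meta/handshake listener); the URL and channel name here are made up for illustration:
import java.util.HashMap;

import org.cometd.bayeux.Channel;
import org.cometd.bayeux.client.ClientSessionChannel;
import org.cometd.client.BayeuxClient;
import org.cometd.client.transport.LongPollingTransport;
import org.eclipse.jetty.client.HttpClient;

public class ChatClient {
    public static void main(String[] args) throws Exception {
        HttpClient httpClient = new HttpClient();
        httpClient.start();

        // Hypothetical load-balancer URL.
        BayeuxClient client = new BayeuxClient("http://lb.example.com/cometd",
                new LongPollingTransport(new HashMap<>(), httpClient));

        // Runs on the first handshake AND on every re-handshake (e.g. after a
        // 402::unknown_session reply from a server that does not know the clientId),
        // so subscriptions are restored on the new server.
        client.getChannel(Channel.META_HANDSHAKE).addListener(
                (ClientSessionChannel.MessageListener) (channel, message) -> {
                    if (message.isSuccessful()) {
                        client.getChannel("/chat/room1").subscribe(
                                (ch, msg) -> System.out.println("chat: " + msg.getData()));
                    }
                });

        client.handshake();
    }
}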
Messages published to the client during the switch from one server to the other are completely lost: CometD does not implement any persistent feature.
However, persisting messages until the client acknowledged them is possible: CometD offers a number of listeners that are invoked by the CometD implementation, and through these listeners an application can persist messages (or other information) into their own choice of persistent (and possibly distributed) store: Redis, RDBMS, etc.
CometD handles reconnection transparently for you; it just takes a few messages between the client and the new server.
You also want to read about CometD's in-memory clustering features.
At the low level of the SSL protocol there are 4 types of messages:
Handshake Protocol
ChangeCipherSpec Protocol
Alert Protocol
Application Data Protocol
After the handshake is completed and the symmetric key has been exchanged, the client will send Application Data messages to the server.
However, the same server can handle multiple clients, and each of those clients has its own symmetric key.
Does the server keep the connection open with all of the clients? If not, how does the server know what symmetric key to use for an incoming connection? Does the Application Data Protocol provide some sort of session id that the server can use to map to the right key?
Does the server keep the connection open with all of the clients?
It can, depending on how the server is implemented.
how does the server know what symmetric key to use for an incoming connection?
Imagine you have a multiplayer game. Your server code typically looks something like this:
sock = socket.listen(port)
clients = []
if sock.new_client_waiting:
    new_client = sock.accept()
    clients.append(new_client)
for client in clients:
    if client.new_data_waiting:
        data = client.read()
        # handle incoming actions...
So if there are two clients, the server will just know about both and have a socket object for each. Therein lies your answer: the OS (its TCP stack) handles the concept of connections, and by giving you a socket it lets you read from and write to that connection, and you know which client it originates from (to some degree of certainty, anyway).
Many servers work differently, e.g. web server code is much more like this:
sock = socket.listen(port)
while True:  # infinitely loop...
    client = sock.accept()  # This call will block until a client is available
    spawn_new_http_handler(client)
Whenever a new person connects, a worker thread will pick it up and manage things from there. But still, it will have its socket to read from and write to.
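The same per-connection model holds once TLS is in the picture. Here is a rough Java sketch (not from the question; the port, threading choice, and Java 9+ features are just for illustration) showing that each accepted socket carries its own TLS session state, so the server never has to look up which key belongs to which client:
import javax.net.ssl.SSLServerSocket;
import javax.net.ssl.SSLServerSocketFactory;
import javax.net.ssl.SSLSocket;

public class TlsEchoServer {
    public static void main(String[] args) throws Exception {
        // Assumes a server certificate configured via the usual
        // -Djavax.net.ssl.keyStore / -Djavax.net.ssl.keyStorePassword properties.
        SSLServerSocketFactory factory =
                (SSLServerSocketFactory) SSLServerSocketFactory.getDefault();
        try (SSLServerSocket serverSocket =
                     (SSLServerSocket) factory.createServerSocket(8443)) {
            while (true) {
                // Each accepted SSLSocket has its own TLS session and thus its own
                // symmetric keys; the per-connection state lives with the socket.
                SSLSocket client = (SSLSocket) serverSocket.accept();
                new Thread(() -> handle(client)).start();
            }
        }
    }

    private static void handle(SSLSocket client) {
        try (client) {
            // Echo application data back; encryption and decryption with this
            // client's keys happen transparently inside the SSLSocket streams.
            client.getInputStream().transferTo(client.getOutputStream());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}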
Does Application Data Protocol provide some sort of session id that the server can use to map to the right key?
I do not know these specs by heart, but I am pretty sure the answer is no. Session resumption is done earlier, at the handshake stage, and is meant for returning clients. E.g. if I connected to https://example.com 30 minutes ago and return now, the server might resume my session so we don't need to do the whole handshake again. It doesn't have anything to do with telling clients apart.
I am using the Jedis client for connecting to my Redis server. The following are the settings I'm using for connecting with Jedis (using the Apache Commons pool):
JedisPoolConfig poolConfig = new JedisPoolConfig();
poolConfig.setTestOnBorrow(true);
poolConfig.setTestOnReturn(true);
poolConfig.setMaxIdle(400);
// Tests whether connections are dead during idle periods
poolConfig.setTestWhileIdle(true);
poolConfig.setMaxTotal(400);
// configuring it for some good max value so that timeout don't occur
poolConfig.setMaxWaitMillis(120000);
So far with these settings I'm not facing any issues in terms of reliability (I can always get a Jedis connection whenever I want), but I am seeing a certain lag in Jedis performance.
Can anyone suggest some more optimizations for achieving higher performance?
You have 3 tests configured:
TestOnBorrow - Sends a PING request when you ask for the resource.
TestOnReturn - Sends a PING when you return a resource to the pool.
TestWhileIdle - Sends periodic PINGs from idle resources in the pool.
While it is nice to know your connections are still alive, those onBorrow PING requests waste an RTT before your request, and the other two tests waste valuable Redis resources. In theory, a connection can go bad even after the PING test, so you should catch connection exceptions in your code and deal with them even if you send a PING. If your network is stable and you do not have too many drops, you should remove those tests and handle this scenario only in your exception handling.
Also, by setting MaxIdle == MaxTotal, there will be no eviction of resources from your pool (good or bad? It depends on your usage). And when your pool is exhausted, an attempt to get a resource will end up timing out after 2 minutes of waiting for a free resource.
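A minimal sketch of that suggestion (the host, port, and key names are made up; it assumes a recent Jedis version where Jedis is Closeable and broken connections are discarded when returned to the pool):
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;
import redis.clients.jedis.exceptions.JedisConnectionException;

JedisPoolConfig poolConfig = new JedisPoolConfig();
poolConfig.setMaxTotal(400);
poolConfig.setMaxIdle(400);
poolConfig.setMaxWaitMillis(120000);
// No testOnBorrow/testOnReturn/testWhileIdle: skip the extra PING round trips
// and rely on exception handling instead.

JedisPool pool = new JedisPool(poolConfig, "localhost", 6379);

try (Jedis jedis = pool.getResource()) {
    jedis.set("some-key", "some-value");
} catch (JedisConnectionException e) {
    // The connection turned out to be dead after all: log it and retry,
    // e.g. by getting a fresh resource from the pool.
}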
I have a lot of client programs and one service.
These client programs communicate with the server over an HTTP channel with WCF.
The clients have dynamic IP.
They are online 24h/day.
I need the following:
The server should notify all the clients at a 3-minute interval. If a client is new (just started), it should be notified immediately.
But because the clients have dynamic IPs, they run 24h/day, and sometimes the connection is unstable, is it a good idea to use WCF duplex?
What happens when the connection goes down? Will it automatically recover?
Is it a good idea to use a remote MSMQ for this type of notification?
Regards,
WCF duplex is very resource hungry, and as a rule of thumb you should not use more than 10. There is a lot of overhead involved with duplex channels. Also, there is no auto-recover.
If you know the interval is 3 minutes and you want the client to get the information when it starts, why not let the client poll the information from the server?
When the connection goes down the callback will throw an exception and the channel will close.
I am not sure MSMQ will work for you unless each client creates an MSMQ queue and you push messages to each one of them. Again, with an unreliable connection it will not help. I don't think you can "push" the data if you lose the connection to a client, or if a client goes offline or changes its IP without notifying your system.