Netty UDP can not receive or send messages

I am using Netty UDP as the IoT server for our company's equipment.
But every few days the UDP server can no longer send or receive messages, and our devices can't connect to the server.
Sometimes it recovers after a while.
Sometimes it still doesn't work after a while, so the server has to be restarted.
I don't know why.
My guess is that the cause is this line:
ByteBuf buf = packet.copy().content();
If the packet is too long, does it make the port 'play dead' (unable to receive or send messages)?
How can I limit the size of the packet?
Here is my code:
@Override
public void channelRead0(ChannelHandlerContext channelHandlerContext, DatagramPacket packet) {
ByteBuf buf = packet.copy().content();
// packets larger than 1000 bytes are not processed (packet too long!)
if (buf.capacity() > 1000) {
log.warn("received an oversized packet; discarding it without processing (packet too big)");
return;
}
try {
byte[] b = new byte[buf.readableBytes()];
buf.readBytes(b);
// do sth....
}
catch (Exception e) {
log.error(e.getMessage(), e);
}
finally {
buf.release();
}
}
}
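The original post does not include the bootstrap code, so as a hedged sketch only: one common way to cap how much memory Netty allocates per received datagram is to set a fixed receive-buffer allocator on the Bootstrap; datagrams longer than that size are truncated. UdpServerHandler and port 2000 below are hypothetical placeholders:
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelOption;
import io.netty.channel.FixedRecvByteBufAllocator;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioDatagramChannel;

public class UdpServerBootstrap {
    public static void main(String[] args) throws Exception {
        Bootstrap bootstrap = new Bootstrap()
                .group(new NioEventLoopGroup())
                .channel(NioDatagramChannel.class)
                // allocate at most 1024 bytes per received datagram
                .option(ChannelOption.RCVBUF_ALLOCATOR, new FixedRecvByteBufAllocator(1024))
                .handler(new UdpServerHandler()); // hypothetical handler class, standing in for the handler above
        bootstrap.bind(2000).sync().channel().closeFuture().await();
    }
}
Note too that the ByteBuf obtained from packet.copy().content() has to be released on every code path, including early returns, or the copies accumulate.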

Related

How to move messages from one queue to another in RabbitMQ

In RabbitMQ, I have a failure queue, in which I have all the failed messages from different queues. Now I want to provide 'Retry' functionality, so that an administrator can move the failed messages back to their respective queues. The idea is something like this:
(The original post shows a diagram of the failure queue structure here.) After clicking the Retry link, a message should move back into its original queue, i.e. queue1, queue2, etc.
If you are looking for Java code to do this, you simply have to consume the messages you want to move and publish them to the required queue. Look at the Tutorials page of RabbitMQ if you are unfamiliar with basic consuming and publishing operations.
It's not a straightforward consume-and-publish. RabbitMQ is not designed that way: it takes into account that both the exchange and the queue could be temporary and may be deleted, and this is baked into the channel, which loses its association with the queue after a single publish.
Assumptions:
- You have a durable queue and exchange for the destination (to send to)
- You have a durable queue for the target (to take from)
Here is the code to do so:
import java.io.IOException;
import java.util.concurrent.TimeoutException;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.QueueingConsumer;
import org.apache.commons.lang.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessageProperties;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.support.converter.SerializerMessageConverter;
public Object shovelMessage(
String exchange,
String targetQueue,
String destinationQueue,
String host,
Integer port,
String user,
String pass,
int count) throws IOException, TimeoutException, InterruptedException {
if(StringUtils.isEmpty(exchange) || StringUtils.isEmpty(targetQueue) || StringUtils.isEmpty(destinationQueue)) {
return null;
}
CachingConnectionFactory factory = new CachingConnectionFactory();
factory.setHost(StringUtils.isEmpty(host)?internalHost.split(":")[0]:host);
factory.setPort(port>0 ? port: Integer.parseInt(internalPort.split(":")[1]));
factory.setUsername(StringUtils.isEmpty(user)? this.user: user);
factory.setPassword(StringUtils.isEmpty(pass)? this.pass: pass);
Channel tgtChannel = null;
try {
org.springframework.amqp.rabbit.connection.Connection connection = factory.createConnection();
tgtChannel = connection.createChannel(false);
tgtChannel.queueDeclarePassive(targetQueue);
QueueingConsumer consumer = new QueueingConsumer(tgtChannel);
tgtChannel.basicQos(1);
tgtChannel.basicConsume(targetQueue, false, consumer);
for (int i = 0; i < count; i++) {
QueueingConsumer.Delivery msg = consumer.nextDelivery(500);
if(msg == null) {
// if no message found, break from the loop.
break;
}
//Send it to destination Queue
// Re-creating the channel on each iteration is required because the channel
// loses its association with the queue after a single publish and starts
// throwing "queue or exchange not found" errors.
Channel destChannel = connection.createChannel(false);
try {
destChannel.queueDeclarePassive(destinationQueue);
SerializerMessageConverter serializerMessageConverter = new SerializerMessageConverter();
Message message = new Message(msg.getBody(), new MessageProperties());
Object o = serializerMessageConverter.fromMessage(message);
// for some reason msg.getBody() returns a byte array that is read back as a raw
// byte array on the consumer end, which is why this double conversion is needed
destChannel.basicPublish(exchange, destinationQueue, null, serializerMessageConverter.toMessage(o, new MessageProperties()).getBody());
tgtChannel.basicAck(msg.getEnvelope().getDeliveryTag(), false);
} catch (Exception ex) {
// Send Nack if not able to publish so that retry is attempted
tgtChannel.basicNack(msg.getEnvelope().getDeliveryTag(), true, true);
log.error("Exception while producing message ", ex);
} finally {
try {
destChannel.close();
} catch (Exception e) {
log.error("Exception while closing destination channel ", e);
}
}
}
} catch (Exception ex) {
log.error("Exception while creating consumer ", ex);
} finally {
try {
tgtChannel.close();
} catch (Exception e) {
log.error("Exception while closing destination channel ", e);
}
}
return null;
}
To requeue a message you can use the receiveAndReply method on Spring AMQP's RabbitTemplate. The following Kotlin snippet will move all messages from the dlq queue to the queue queue:
do {
val movedToQueue = rabbitTemplate.receiveAndReply<String, String>(dlq, { it }, "", queue)
} while (movedToQueue)
In the code example above, dlq is the source queue, { it } is the identity function (you could transform the message here), "" is the default exchange and queue is the destination queue.
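For reference, a rough Java equivalent of the Kotlin snippet above, as a minimal sketch; it assumes an already configured RabbitTemplate instance and hypothetical queue names dlq and queue:
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class DlqMover {

    // Moves messages one by one until the source queue is empty.
    public static void moveAll(RabbitTemplate rabbitTemplate, String dlq, String queue) {
        boolean moved;
        do {
            // identity callback: republish the payload unchanged via the default
            // exchange ("") with the destination queue name as the routing key
            moved = rabbitTemplate.receiveAndReply(dlq, (String payload) -> payload, "", queue);
        } while (moved);
    }
}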
I also have implemented something like that, so I can move messages from a dlq back to processing. Link: https://github.com/kestraa/rabbit-move-messages
Here is a more generic tool for some administrative/support tasks that the management UI is not capable of.
Link: https://github.com/bkrieger1991/rabbitcli
It also allows you to fetch/move/dump messages from queues even with a filter on message-content or message-headers :)

What is the use case of BrokerService in ActiveMQ and how to use it correctly

I am new to ActiveMQ. I'm trying to learn how it works from the example code provided by Apache at this link:
http://activemq.apache.org/how-should-i-implement-request-response-with-jms.html
public class Server implements MessageListener {
private static int ackMode;
private static String messageQueueName;
private static String messageBrokerUrl;
private Session session;
private boolean transacted = false;
private MessageProducer replyProducer;
private MessageProtocol messageProtocol;
static {
messageBrokerUrl = "tcp://localhost:61616";
messageQueueName = "client.messages";
ackMode = Session.AUTO_ACKNOWLEDGE;
}
public Server() {
try {
//This message broker is embedded
BrokerService broker = new BrokerService();
broker.setPersistent(false);
broker.setUseJmx(false);
broker.addConnector(messageBrokerUrl);
broker.start();
} catch (Exception e) {
System.out.println("Exception: "+e.getMessage());
//Handle the exception appropriately
}
//Delegating the handling of messages to another class, instantiate it before setting up JMS so it
//is ready to handle messages
this.messageProtocol = new MessageProtocol();
this.setupMessageQueueConsumer();
}
private void setupMessageQueueConsumer() {
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(messageBrokerUrl);
Connection connection;
try {
connection = connectionFactory.createConnection();
connection.start();
this.session = connection.createSession(this.transacted, ackMode);
Destination adminQueue = this.session.createQueue(messageQueueName);
//Setup a message producer to respond to messages from clients, we will get the destination
//to send to from the JMSReplyTo header field from a Message
this.replyProducer = this.session.createProducer(null);
this.replyProducer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
//Set up a consumer to consume messages off of the admin queue
MessageConsumer consumer = this.session.createConsumer(adminQueue);
consumer.setMessageListener(this);
} catch (JMSException e) {
System.out.println("Exception: "+e.getMessage());
}
}
public void onMessage(Message message) {
try {
TextMessage response = this.session.createTextMessage();
if (message instanceof TextMessage) {
TextMessage txtMsg = (TextMessage) message;
String messageText = txtMsg.getText();
response.setText(this.messageProtocol.handleProtocolMessage(messageText));
}
//Set the correlation ID from the received message to be the correlation id of the response message
//this lets the client identify which message this is a response to if it has more than
//one outstanding message to the server
response.setJMSCorrelationID(message.getJMSCorrelationID());
//Send the response to the Destination specified by the JMSReplyTo field of the received message,
//this is presumably a temporary queue created by the client
this.replyProducer.send(message.getJMSReplyTo(), response);
} catch (JMSException e) {
System.out.println("Exception: "+e.getMessage());
}
}
public static void main(String[] args) {
new Server();
}
}
My confusion is about messageBrokerUrl = "tcp://localhost:61616";. The ActiveMQ service already runs on port 61616 by default, so why does this example choose the same port? If I try to run the code, it throws an exception:
Exception: Failed to bind to server socket: tcp://localhost:61616 due to: java.net.BindException: Address already in use: JVM_Bind
Perhaps if I change the port number, I can execute the code.
Please let me know why it is like this in the example and how to work with BrokerService.
The BrokerService in this example creates an in-memory ActiveMQ broker for use in the example. Given the error you are seeing, I'd guess you already have an ActiveMQ broker running on the machine bound to port 61616, the default port, so the two conflict. You could either stop the external broker and run the example, or modify the example not to run the embedded broker and just rely on your external broker instance.
Embedded brokers are great for unit testing or for creating examples that don't require the user to have a broker installed and running.
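As an illustration (not part of the original example), the embedded broker can simply be bound to a connector URL that does not clash with the already-running external broker, and the connection factory then has to use the same URL. The port 61617 below is an arbitrary choice:
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.broker.BrokerService;

public class EmbeddedBrokerExample {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setPersistent(false);
        broker.setUseJmx(false);
        // any free port works; 61617 avoids the default 61616 used by an external broker
        broker.addConnector("tcp://localhost:61617");
        broker.start();

        // clients (and the Server class above) must point at the same URL
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61617");
    }
}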

TCP Server configuration in Mule - writing into client socket

I am trying to create a Mule flow with a TCP inbound endpoint which is a TCP server that listens on a port. When a successful client connection is identified, before receiving any request from the client, I need to write a message into the socket (which lets the client know that I am listening); only after that does the client send me further requests. This is how I do it with a sample Java program:
import java.net.*;
import java.io.*;
public class TCPServer
{
public static void main(String[] args) throws IOException
{
ServerSocket serverSocket = null;
try {
serverSocket = new ServerSocket(4445);
}
catch (IOException e)
{
System.err.println("Could not listen on port: 4445.");
System.exit(1);
}
Socket clientSocket = null;
System.out.println ("Waiting for connection.....");
try {
clientSocket = serverSocket.accept();
}
catch (IOException e)
{
System.err.println("Accept failed.");
System.exit(1);
}
System.out.println ("Connection successful");
System.out.println ("Sending output message - .....");
//Sending a message to the client to indicate that the server is active
PrintStream pingStream = new PrintStream(clientSocket.getOutputStream());
pingStream.print("Server listening");
pingStream.flush();
//Now start listening for messages
System.out.println ("Waiting for incoming message - .....");
PrintWriter out = new PrintWriter(clientSocket.getOutputStream(),true);
BufferedReader in = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
String inputLine;
while ((inputLine = in.readLine()) != null)
{
System.out.println ("Server: " + inputLine);
out.println(inputLine);
if (inputLine.equals("Bye."))
break;
}
out.close();
in.close();
clientSocket.close();
serverSocket.close();
}
}
I have tried to use Mule's TCP inbound endpoint as a server, but I cannot see how to identify a successful connection from the client in order to trigger the outbound message. The flow gets triggered only when a message is sent across from the client. Is there a way I can extend the functionality of the Mule TCP connector and have a listener which could meet the above requirement?
Based on the answer provided, this is how I implemented this -
public class TCPMuleOut extends TcpMessageReceiver {
boolean InitConnection = false;
Socket clientSocket = null;
public TCPMuleOut(Connector connector, FlowConstruct flowConstruct,
InboundEndpoint endpoint) throws CreateException {
super(connector, flowConstruct, endpoint);
}
protected Work createWork(Socket socket) throws IOException {
return new MyTcpWorker(socket, this);
}
protected class MyTcpWorker extends TcpMessageReceiver.TcpWorker {
public MyTcpWorker(Socket socket, AbstractMessageReceiver receiver)
throws IOException {
super(socket, receiver);
// TODO Auto-generated constructor stub
}
@Override
protected Object getNextMessage(Object resource) throws Exception {
if (InitConnection == false) {
clientSocket = this.socket;
logger.debug("Sending logon message");
PrintStream pingStream = new PrintStream(
clientSocket.getOutputStream());
pingStream.print("Log on message");
pingStream.flush();
InitConnection = true;
}
long keepAliveTimeout = ((TcpConnector) connector)
.getKeepAliveTimeout();
Object readMsg = null;
try {
// Create a monitor if expiry was set
if (keepAliveTimeout > 0) {
((TcpConnector) connector).getKeepAliveMonitor()
.addExpirable(keepAliveTimeout,
TimeUnit.MILLISECONDS, this);
}
readMsg = protocol.read(dataIn);
// There was some action so we can clear the monitor
((TcpConnector) connector).getKeepAliveMonitor()
.removeExpirable(this);
if (dataIn.isStreaming()) {
}
return readMsg;
} catch (SocketTimeoutException e) {
((TcpConnector) connector).getKeepAliveMonitor()
.removeExpirable(this);
System.out.println("Socket timeout");
} finally {
if (readMsg == null) {
// Protocols can return a null object, which means we're
// done
// reading messages for now and can mark the stream for
// closing later.
// Also, exceptions can be thrown, in which case we're done
// reading.
dataIn.close();
InitConnection = false;
logger.debug("Client closed");
}
}
return null;
}
}
}
And the TCP connector is as below:
<tcp:connector name="TCP" doc:name="TCP connector"
clientSoTimeout="100000" receiveBacklog="0" receiveBufferSize="0"
sendBufferSize="0" serverSoTimeout="100000" socketSoLinger="0"
validateConnections="true" keepAlive="true">
<receiver-threading-profile
maxThreadsActive="5" maxThreadsIdle="5" />
<reconnect-forever />
<service-overrides messageReceiver="TCPMuleOut" />
<tcp:direct-protocol payloadOnly="true" />
</tcp:connector>
What you're trying to do is a little difficult to accomplish but not impossible. The messages are received by the org.mule.transport.tcp.TcpMessageReceiver class, and this class always consumes the data in the input stream to create the message that injects in the flow.
However, you could extend that receiver and instruct the TCP module to use yours by adding a service-overrides tag in your flow's tcp connector (documented here) and replacing the messageReceiver element.
In your extended receiver you should change the TcpWorker.getNextMessage method so that it sends the ack message before reading from the input stream.
HTH, Marcos.

UDP DatagramPacket not received from external client

I have a game with a UDP/TCP server and client. One UDP port (2406) for updating the client's location, and one TCP port (2407) for the chat. The problem here is at 2406.
When I play the clients on my local network, everything runs fine. But when an external client wants to join, I only receive the first packet (the join command) and after that... nothing. I (logged in on the local network) cannot see the external player. BUT they can see me. The chat works for both sides, so it's really related to the DatagramSocket. I'll try to post as much info as possible related to the UDP part and not the TCP.
Anyone knows what the problem is here?
Ports are forwarded like UDP 2406, TCP 2407.
Server, sockets:
DatagramSocket socket = new DatagramSocket(2406, InetAddress.getLocalHost());
ServerSocket serversocket_chat = new ServerSocket(2407, 0, InetAddress.getLocalHost());
Server, receive thread:
byte[] buffer = new byte[1024];
DatagramPacket dp = new DatagramPacket(buffer, 1024);
while(true){
try{
this.socket.receive(dp);
String data = new String(dp.getData(), 0, dp.getLength()).trim();
String[] args = data.split(":");
String command = args[0];
String reply = null;
try{
reply = handleCommand(dp, command, args);
} catch( Exception e ){
System.err.println("Error while handling command: " + command);
e.printStackTrace();
}
if(reply != null){
reply += "\n";
DatagramPacket reply_packet = new DatagramPacket(reply.getBytes(), reply.length(), dp.getSocketAddress());
this.socket.send(reply_packet);
}
} catch (IOException e){
e.printStackTrace();
}
}
new Thread(chat_receive).start();
As soon as someone sends a message, the method handleCommand figures out what it is. Every message is a byte[] derived from a String. If the message is "cj:Hello", handleCommand finds command cj and username Hello. THIS is received by the server. After that, if that same person sends something, nothing is received.
Client sockets:
private DatagramSocket socket;
private Socket socket_chat;
Client connecting:
this.socket = new DatagramSocket();
this.socket_chat = new Socket(ip, port+1);
Client sending:
private Runnable send = new Runnable() {
@Override
public void run() {
DatagramPacket dp;
String sendStringBuffered;
while(true){
if(sendString != null){
sendStringBuffered = sendString;
dp = new DatagramPacket(sendStringBuffered.getBytes(), sendStringBuffered.length(), ip, port);
try {
socket.send(dp);
} catch (IOException ex) {
Logger.getLogger(NewClient.class.getName()).log(Level.SEVERE, null, ex);
}
sendString = null;
}
}
}
};
Two things come to mind:
- UDP is not reliable. A datagram may get lost at any time.
- UDP packets usually do not traverse NATs just like that.
For the sake of systematic troubleshooting, make sure the packets really get to their destination using a packet sniffer/analyzer (like tcpdump or Wireshark).

Java NIO ByteBuffer : read the message size on head before read the complete message

I'm making a Java NIO server which receives messages. Each message carries its size in its header, which is why I first read into a buffer with a default size (44), get the complete size from that buffer, then create a new buffer which is supposed to receive the rest of the message (the body), and finally use System.arraycopy() to build an array with the complete message. These operations work, but the problem is that the second buffer (the message body) has the right size yet does not contain the right data.
Please look at my code and tell me if you see something wrong:
public void getMessageByMessageSize(SelectionKey key) {
socket = (SocketChannel) key.channel();
int nBytes = 0;
byte[] message = null;
try {
nBytes = socket.read(headBuffer);
if (nBytes < 0) {
try {
key.channel().close();
key.cancel();
return;
} catch (IOException e) {
e.printStackTrace();
}
}
//size of the message body
int corpMessageSize = MessageUtils.getMessageSize(headBuffer)
- HEADER_SIZE;
ByteBuffer corpsBuffer = ByteBuffer.allocate(corpMessageSize);
headBuffer.flip();
nBytes += socket.read(corpsBuffer);
corpsBuffer.flip();
byte[] corp=corpsBuffer.array();
message = new byte[nBytes];
System.arraycopy(headBuffer.array(), 0, message, 0, HEADER_SIZE);
System.arraycopy(corpsBuffer.array(), 0, message, HEADER_SIZE,
nBytes - HEADER_SIZE);
System.out.println(nBytes);
headBuffer.clear();
corpsBuffer.clear();
} catch (IOException e) {
e.printStackTrace();
try {
key.channel().close();
key.cancel();
return;
} catch (IOException ex) {
e.printStackTrace();
}
}
this.worker.verifyConnection(this,message, key);
//this.worker.processData(this, socket, message, nBytes);
}
I have a simple client which creates a byte message, puts its size in the header, and then sends it.
Thanks.
nBytes += socket.read(corpsBuffer);
You are assuming that this reads the entire message and fills the buffer. Nothing in TCP/IP guarantees that. You have to loop. If you're in non-blocking mode, you have to re-select if you get a zero length read.
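To make the looping concrete, here is a hypothetical helper (not from the question) showing the pattern for a buffer of known size; on a non-blocking channel a zero-length read means there is nothing more to read right now, so you return to the selector and continue on the next OP_READ:
import java.io.EOFException;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public final class ReadHelper {

    // Returns true once the buffer is completely filled, false if the channel has
    // no more data available right now (non-blocking mode: re-select and retry).
    static boolean fillBuffer(SocketChannel channel, ByteBuffer buffer) throws IOException {
        while (buffer.hasRemaining()) {
            int n = channel.read(buffer);
            if (n == -1) {
                throw new EOFException("peer closed the connection mid-message");
            }
            if (n == 0) {
                return false; // nothing to read yet; wait for the next OP_READ event
            }
        }
        return true; // header or body fully read; safe to flip() and parse
    }
}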