I'm working on a new TCP socket server, using as a starting point some old code I wrote a few years ago for a UDP socket server. In that project, I checked the CRC as the first action in messageReceived(), then looked at the message type to forward the message to the right service. Since I receive messages of various sizes, it is very convenient that the IoBuffer fits the exact size of the received message: I can always find the checksum at the end of the message.
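Roughly, the handler looks like this (a simplified sketch; the class name and the 2-byte CRC layout are just for illustration):

    import org.apache.mina.core.buffer.IoBuffer;
    import org.apache.mina.core.service.IoHandlerAdapter;
    import org.apache.mina.core.session.IoSession;

    public class MessageHandler extends IoHandlerAdapter {
        @Override
        public void messageReceived(IoSession session, Object message) throws Exception {
            IoBuffer buf = (IoBuffer) message;
            // This only works if the buffer is exactly the size of the datagram:
            // the CRC is assumed to sit in the last 2 bytes.
            int crc = buf.getUnsignedShort(buf.limit() - 2);
            // ... verify the CRC, then dispatch on the message type ...
        }
    }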
But in my new project, the IoBuffer is always 2048 bytes by default. I tried to figure out which option I had used in the old project, until I realized that the behavior is not the same in MINA 2.0.9 as in 2.0.4.
I ran some tests and saw the following:
TCP with version 2.0.4: the IoBuffer has a fixed size of 2048
UDP with version 2.0.4: the IoBuffer matches the message size
TCP with version 2.0.9: the IoBuffer has a fixed size of 2048
UDP with version 2.0.9: the IoBuffer has a fixed size of 2048
Log message from my old app with MINA 2.0.4:
16:44:31.800 [NioDatagramAcceptor-1] INFO log1 - RECEIVED: HeapBuffer[pos=0 lim=4 cap=**4**: 61 7A 64 0A]
The same, with only the MINA version changed to 2.0.9:
16:47:40.890 [NioDatagramAcceptor-1] INFO log1 - RECEIVED: HeapBuffer[pos=0 lim=4 cap=**2048**: 61 7A 64 0A]
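The test setup is essentially just this (sketch; the port is arbitrary):

    import java.net.InetSocketAddress;
    import org.apache.mina.core.service.IoHandlerAdapter;
    import org.apache.mina.filter.logging.LoggingFilter;
    import org.apache.mina.transport.socket.nio.NioDatagramAcceptor;

    public class UdpLogTest {
        public static void main(String[] args) throws Exception {
            NioDatagramAcceptor acceptor = new NioDatagramAcceptor();
            // The LoggingFilter produces the "RECEIVED: HeapBuffer[...]" lines above.
            acceptor.getFilterChain().addLast("logger", new LoggingFilter());
            acceptor.setHandler(new IoHandlerAdapter());
            acceptor.bind(new InetSocketAddress(9999));
        }
    }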
I can't find anything about this in the release notes. Does anyone know about this change? There must have been a good reason for it. Am I going about this the wrong way? I'm pretty sure the first action should be to check the CRC before trying to handle the message by type, but perhaps that's not the usual pattern.
The other bad thing is that if I update the MINA version in my old app, it won't work anymore...
I hope that's clear. Thanks for your help.
You can define the receive buffer size via DatagramConnector.getSessionConfig().setReceiveBufferSize(). Just set it to whatever you want it to be.
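For example (a minimal sketch; the same setter is available on the acceptor's session config, and 64 is an arbitrary size):

    import org.apache.mina.transport.socket.nio.NioDatagramConnector;

    public class ReceiveBufferExample {
        public static void main(String[] args) {
            NioDatagramConnector connector = new NioDatagramConnector();
            // Size the receive buffer to the largest datagram you expect
            // instead of relying on the 2048-byte default.
            connector.getSessionConfig().setReceiveBufferSize(64);
        }
    }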
For the last couple of weeks I've been developing a bootloader that performs a firmware update on a certain device. The setup is as follows:
The firmware binary and its respective SHA1 hash are stored on a web server;
The device is composed of an ESP8266 and an STM32 microcontroller (STM32F401 or STM32F030; there are two hardware versions, but the one I'm using is the F401). The ESP is used only via AT commands, i.e., I did not build its firmware, I just used the latest version from Espressif.
The idea is that the STM32 bootloader uses the ESP to download the firmware hash and binary from the web server, and then boots the firmware if the hash is OK. The download is done with the ESP in passive mode, i.e. the STM32 has to explicitly request X bytes to read from the ESP's buffer; currently I request 1 MTU (1460 bytes) at a time.
At first, the connection to the web server was made using HTTP and everything worked perfectly. However, I had to change it to HTTPS, and that's where the problem starts. After the STM32 has received around 100 kB of the firmware (which is 110 kB in total), the ESP only provides 30 bytes per request (instead of roughly 1 MTU), making the download time extremely long.
I've already done some digging to find out whether this is related to the ESP, but didn't find anything. Also, the point where the 30-byte rate kicks in isn't always at the 100 kB mark; I've tested with a 170 kB firmware and it started at around 160 kB, so it looks like it's always the last 10 kB.
I've also added some delays in the bootloader when the packet size becomes smaller than 1 MTU, to give the ESP more time to process the packet, since the SSL decryption takes longer to process; but it did not help.
My question is: is there some characteristic of the HTTPS/SSL protocols that reduces the packet length? What could be the cause of what is happening here?
Using this API: https://developer.mozilla.org/en-US/docs/Web/API/Network_Information_API
You can run navigator.connection in a browser console to see the various values regarding your network connection.
However, the downlink attribute maxes out at 10 (i.e. 10 Mbps). Why is it capped here? That doesn't really help me, since I need more information: I am deciding whether a client can handle HD video, which may very well require over 10 Mbps. Thanks.
I found the answer in the comments to this answer: https://stackoverflow.com/a/47511842/3973137
Turns out Chrome caps it at 10 Mbps to prevent fingerprinting.
I am currently working on a Java application to control my RGBWW light strips.
The information has to be sent via UDP packets in order to be understood by the controller.
Unfortunately, the hex value 0x80 has to be sent, which is causing some problems.
Whenever I send a byte array containing only values from 0x00 to 0x7F (using DatagramPacket and DatagramSocket), I see a UDP packet pop up in my network monitor.
As soon as I include the value 0x80 or anything higher, I see two things happen:
1: I no longer get plain UDP; the packets are displayed as RTP/RTCP most of the time.
2: The method Integer.toHexString() does not display "80", but gives me "ffffff80".
My question: is there something I am missing when it comes to sending such bytes over UDP? Or is there another way of sending them, ideally avoiding the annoyingly signed bytes?
Unfortunately, I did not find any information that significantly helped me with this issue, but I hope you can help me!
Thanks in advance!
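For reference, a stripped-down version of what I am trying (the controller address and port are placeholders):

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    public class UdpSendDemo {
        public static void main(String[] args) throws Exception {
            // 0x80 is outside the range of a signed Java byte, so it needs a cast.
            byte[] data = { 0x31, 0x00, (byte) 0x80 };
            try (DatagramSocket socket = new DatagramSocket()) {
                DatagramPacket packet = new DatagramPacket(
                        data, data.length,
                        InetAddress.getByName("192.168.1.50"), 5577); // placeholder address/port
                socket.send(packet);
            }
            // data[2] is -128; Integer.toHexString sign-extends it to an int,
            // hence "ffffff80". Masking with 0xFF prints "80" instead.
            System.out.println(Integer.toHexString(data[2] & 0xFF));
        }
    }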
I would like to use the WinPcap library for "reliable" UDP receiving in my C++ application. All the examples I found use the library for capturing and then processing afterwards. Is there any way (an example) to configure pcap for streaming mode, receiving only UDP and only on a user-defined port, or another way to solve this? At the moment I have a reliable UDP server able to receive 0.5 Gb/s, but on a slower PC I get packet loss: I can see the packets in Ethereal but not in the application.
thanks
vsm
I assume that you have already tried all of the more standard methods of increasing the number of datagrams that you are able to process? Things like increasing the recv buffer size, speeding up the per-datagram processing, using IOCP to bring more threads to bear on the problem, or using RIO if you can target Windows 8?
If so, then using WinPcap might work, but it sounds like a bit of an extreme solution.
What you need to do is create a filter so that you only capture the datagrams that you are interested in (e.g. a filter expression like "udp and dst port 1234"). The docs include examples that use filters.
I have the server from here: http://www.gamedev.net/topic/533159-article-using-udp-with-iocp/. This code works with IOCP. It works fine on Windows XP; there is no problem receiving 0.5 Gb/s. But on Windows 7 it is a little unreliable: sometimes packets show up out of order. (My device generates UDP packets whose payload contains a PacketNumber, a number that increases with each packet. When an error occurs I write all the packet numbers to a file, and I see, for example: 10, 11, 290, 13, 14...). Are there any known differences between Windows XP and Windows 7 regarding IOCP and multithreading? Or do you know of any free UDP server with IOCP processing?
In the processing loop I only add packets to a buffer and check their numbers.
There is a built-in limitation of 2 MB in the IBM WebSphere MQ JMS interface.
http://www-01.ibm.com/support/docview.wss?uid=swg21221260
Is there a way to bypass that limitation?
The limitation applied to the WMQ versions distributed with WAS back at V5.1.1, many years ago. If this is the problem, upgrading to a current version of WMQ will resolve it. The current version of WMQ is V7.0.1; V6.0.2 is also still current but will be out of service in September of 2012. V6 and V7 can send and receive messages up to 100 MB, but WMQ itself defaults to 4 MB out of the box. It is necessary to tune parameters of the QMgr, queues, and channels if messages larger than 4 MB are required, but JMS is not a limitation at modern versions.
The WMQ Java/JMS manuals do not specifically mention a maximum message size because it is the same as the native WMQ maximum of 100 MB. However, the WMQ V6 Performance Report provides benchmarks for JMS messages up to 64 MB.
Whatever is preventing you from putting a 3 MB message, it isn't a message-size limitation of WMQ's JMS implementation. If you have checked MAXMSGL on all of the channels and queues and the QMgr, then it's something less obvious, but it is configuration.
This might sound arduous, but it is a solution:
Take your message content, convert it into a byte array.
Split the byte array into n sub-arrays of roughly 1.9 MB or less each.
Start a JMS transaction and send each sub-array in a BytesMessage, incrementing the group sequence:
e.g.
message.setStringProperty("JMSXGroupID", groupId);
message.setIntProperty("JMSXGroupSeq", i);
On the receiver side, you implement a selector to get all the messages in the group as soon as you receive the first message. Retrieve all the messages in the group (hopefully you get them all), sort them by sequence number, re-create the big byte array, unmarshal it, and you're done.
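A rough sketch of the send side under those assumptions (an existing transacted Session and a Queue are assumed; error handling omitted):

    import javax.jms.BytesMessage;
    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;

    public class GroupedSender {
        // Split `payload` into ~1.9 MB chunks and send them all as one group.
        static void sendInChunks(Session session, Queue queue,
                                 byte[] payload, String groupId) throws JMSException {
            int chunkSize = 1900 * 1024; // stay under the 2 MB limit
            MessageProducer producer = session.createProducer(queue);
            int seq = 1;
            for (int off = 0; off < payload.length; off += chunkSize) {
                int len = Math.min(chunkSize, payload.length - off);
                BytesMessage msg = session.createBytesMessage();
                msg.setStringProperty("JMSXGroupID", groupId);
                msg.setIntProperty("JMSXGroupSeq", seq++);
                msg.writeBytes(payload, off, len);
                producer.send(msg);
            }
            session.commit(); // transacted session assumed
        }
    }

On the receiving end, a consumer created with a selector such as session.createConsumer(queue, "JMSXGroupID = '" + groupId + "'") would pick up only that group's chunks.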
Trivial really.....
Here's a better example.