UDP malformed packets - udp

I use a C# program for a client UDP application. The application listens for a connection, and then communicates.
Socket udpClient = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
udpClient.Bind(new IPEndPoint(IPAddress.Any, ListenPort));
udpClient.Blocking = true;
int count = 0;
while (count == 0) count = udpClient.ReceiveFrom(receiveBuffer, ref ePoint); // wait for the first datagram
udpClient.SendTo(data, endPoint);
udpClient.ReceiveFrom(receiveBuffer, ref ep);
...
I use Wireshark to debug the application. The problem is that after some time my application starts sending malformed STUN packets, and I think that is why they get rejected by a router on the Internet.
The question: is it possible to prevent sending malformed UDP/STUN packets?

If your application sends malformed UDP packets, it has a bug. The minimal fragment of your code has only one SendTo call, so that is the place to look. You can add a check function for the content and length of data before it is sent, as sketched below.
BTW: UDP is connectionless. I would say your application waits for a request or some kind of start command, not for a connection.
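For example, here is a minimal sketch of such a check for STUN messages (per RFC 5389). The method name IsValidStunMessage and the idea of calling it before every SendTo are illustrative, not part of your code:
static bool IsValidStunMessage(byte[] data, int length)
{
    if (length < 20 || length > data.Length) return false;   // STUN header is 20 bytes
    if ((data[0] & 0xC0) != 0) return false;                  // first two bits must be zero
    int messageLength = (data[2] << 8) | data[3];             // big-endian length field
    if (messageLength % 4 != 0) return false;                 // attributes are padded to 4 bytes
    if (messageLength != length - 20) return false;           // length excludes the header
    uint magicCookie = ((uint)data[4] << 24) | ((uint)data[5] << 16) | ((uint)data[6] << 8) | data[7];
    return magicCookie == 0x2112A442;                         // STUN magic cookie
}
Calling it as if (IsValidStunMessage(data, data.Length)) udpClient.SendTo(data, endPoint); would at least stop a corrupted buffer from leaving the machine and tell you where the corruption happens.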

Related

OCaml-ssl with Unix.select leads to read errors

I'm using SSL for reading data from various remote services over secure websockets as follows: I create the socket, embed it in the SSL context and add the socket to the reading list for Unix.select. When the socket becomes readable, I use Ssl.read to get the data.
Four services work well, but with one I get Ssl.Read_error.Error_syscall: error:00000000:lib(0):func(0):reason(0) after receiving each websocket frame (size ~5-6 KB). The frames here are much bigger than on the other services, but I'm not sure that's the reason.
I ignore the syscall errors (and most probably lose some data) because frames continue to arrive. Then, always after one minute, I get Ssl.Read_error.Error_zero_return: error:00000000:lib(0):func(0):reason(0), which means the peer closed the SSL socket for writing, and I have to restart the process because no new data will be received from this socket.
The problem is perfectly reproducible. At the same time, the examples provided for this service and my own test implementation in Node.js receive data for hours without any problems.
I assume I am doing something wrong, or that my socket/SSL setup (see below) is too simplistic.
Any help or ideas would be greatly appreciated.
let sock = Unix.socket PF_INET SOCK_STREAM 0 in
let laddr = Unix.inet_addr_of_string p.interface in
Unix.bind sock (ADDR_INET (laddr, 0));
Unix.connect sock addr;
let (sock, res) =
  let req = Bytes.of_string http_request in
  if ssl then begin
    Ssl.init ();
    let ctx = create_context TLSv1_2 Client_context in
    let sock = Ssl.embed_socket sock ctx in
    Ssl.connect sock;
    (SslSock sock, (write sock req 0 http_request_len))
  end else
    (UnixSock sock, (Unix.write sock req 0 http_request_len))
Wireshark did the trick: this "bad" service sends two websocket frames in one TCP packet, where the second frame has a zero payload length. My websocket implementation handled zero-payload frames improperly, which led to missed Ping frames and, eventually, to the remote server closing the TCP connection.
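For reference, a minimal sketch of frame-header parsing that tolerates empty frames; read_exact and send_pong are hypothetical stand-ins for the application's own I/O helpers, and masking and the 64-bit length case are omitted:
let handle_frame read_exact send_pong =
  let hdr = read_exact 2 in
  let opcode = Char.code (Bytes.get hdr 0) land 0x0f in
  let len = Char.code (Bytes.get hdr 1) land 0x7f in
  let len =
    if len = 126 then begin
      (* 16-bit extended payload length *)
      let ext = read_exact 2 in
      (Char.code (Bytes.get ext 0) lsl 8) lor Char.code (Bytes.get ext 1)
    end
    else len
  in
  (* a zero-length payload is legal: skip the read instead of failing *)
  let payload = if len = 0 then Bytes.create 0 else read_exact len in
  (* Ping frames (opcode 0x9) must be answered even when empty, otherwise
     the server eventually gives up and closes the connection *)
  if opcode = 0x09 then send_pong payload;
  (opcode, payload)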

OpenThread CLI UDP communication from main.c (nRF52840)

We are working with nRF52840 dongles and want them to automatically relay data over an OpenThread mesh network through UDP. Within the OpenThread API we found a solid udp.h header with all the UDP functions we need to write code that runs on the dongles from main.c.
Below is our code, which should broadcast the message "Hallo" to all nodes that have an open socket on port 1994.
We have read that the IPv6 address ff03::1 is reserved for multicast UDP broadcasting, and it works perfectly when performed manually with the CLI udp commands.
CLI: udp open, udp send ff03::1 1994 Hallo
All the nodes that have run udp open and udp bind :: 1994 then receive the Hallo message from the sending node.
We are trying to recreate this in the main.c of our nodes so that we can provide the nodes with some intelligence of their own.
This piece of code is run once when the push button on the dongle is pressed.
The code compiles cleanly, and we have tested the functions that return a value with the RGB LED (green OK, red not OK) to confirm that no errors were produced (sadly, not all functions return an error code).
void udpSend(){
    const char *buf = "Hallo";
    otMessageInfo messageInfo;
    otInstance *myInstance;
    myInstance = thread_ot_instance_get();
    otUdpSocket mySocket;
    memset(&messageInfo, 0, sizeof(messageInfo));
    // messageInfo.mPeerAddr = otIp6GetUnicastAddresses(myInstance)->mNext->mNext->mAddress;
    otIp6AddressFromString("ff03::1", &messageInfo.mPeerAddr);
    messageInfo.mPeerPort = 1994;
    messageInfo.mInterfaceId = OT_NETIF_INTERFACE_ID_THREAD;
    otUdpOpen(myInstance, &mySocket, NULL, NULL);
    otMessage *test_Message = otUdpNewMessage(myInstance, NULL);
    otMessageSetLength(test_Message, sizeof(buf));
    if (otMessageAppend(test_Message, &buf, sizeof(buf)) == OT_ERROR_NONE){
        nrf_gpio_pin_write(LED2_G, 0);
    }
    else{
        nrf_gpio_pin_write(LED2_R, 0);
    }
    otUdpSend(&mySocket, test_Message, &messageInfo);
    otCliUartOutputFormat("Done.\0");
    otUdpClose(&mySocket);
}
Now, we aren't exactly experts, so we are not sure why this isn't working; we had a lot of trouble figuring out how everything is called and initialised.
We hope to create a way to send and receive data over UDP from the code, so that the nodes can operate autonomously.
We would really appreciate it if someone could assist us with our project!
Thanks!
Jonathan
There are a few errors in your code:
Remove the call to otMessageSetLength(). The message length is automatically increased as part of otMessageAppend().
The call to otMessageAppend() should be: otMessageAppend(test_Message, buf, (uint16_t)strlen(buf)).
Remove the & before buf.
Replace sizeof() with strlen().
A couple of other things you should consider:
After calling otUdpNewMessage(), if any following call returns an error, make sure to call otMessageFree() on the message buffer.
Custody is only given to OpenThread after a successful call to otUdpSend().
Do not call udpSend() from interrupt context; the OpenThread library was designed to assume a single thread of execution.
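Putting all of that together, a minimal corrected sketch of udpSend() might look like this (error handling abbreviated; the same includes and setup as your original file are assumed):
void udpSend(void)
{
    const char *buf = "Hallo";
    otInstance *myInstance = thread_ot_instance_get();

    otMessageInfo messageInfo;
    memset(&messageInfo, 0, sizeof(messageInfo));
    otIp6AddressFromString("ff03::1", &messageInfo.mPeerAddr);
    messageInfo.mPeerPort    = 1994;
    messageInfo.mInterfaceId = OT_NETIF_INTERFACE_ID_THREAD;

    otUdpSocket mySocket;
    otUdpOpen(myInstance, &mySocket, NULL, NULL);

    otMessage *message = otUdpNewMessage(myInstance, NULL);
    if (message != NULL)
    {
        /* no otMessageSetLength(): otMessageAppend() grows the message itself */
        if (otMessageAppend(message, buf, (uint16_t)strlen(buf)) == OT_ERROR_NONE &&
            otUdpSend(&mySocket, message, &messageInfo) == OT_ERROR_NONE)
        {
            nrf_gpio_pin_write(LED2_G, 0);   /* sent: custody now belongs to OpenThread */
        }
        else
        {
            otMessageFree(message);          /* append or send failed: free the buffer */
            nrf_gpio_pin_write(LED2_R, 0);
        }
    }
    otUdpClose(&mySocket);
}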
Hope that helps.

Signaling on RTCPeerConnection

I wrote some simple code for testing RTCPeerConnection:
peer = new RTCPeerConnection(...);
peer.onicecandidate = function(evt){
    console.log(evt.candidate);
    // into the console
    // RTCIceCandidate {candidate: "candidate:... 1 udp ", sdpMLineIndex:0, sdpMid:"data"}
    // RTCIceCandidate {candidate: "candidate:... 2 udp ", sdpMLineIndex:0, sdpMid:"data"}
}
I want to send this to the signaling server, but I receive it 2 times with 2 different values.
Do I have to record all the candidate values?
When I receive candidate information from the signaling server, do I have to receive all the values for the same peer?
I also have localDescription:
// {type: "offer", sdp: "v=0↵o=- 6483...48 2 IN IP4 ...}
Do I have to send it to the signaling server and receive the description of the other peer?
I think you need to do a bit more reading up on WebRTC. For a single PeerConnection there can be many ICE candidates; you usually send each one to the remote peer through some signaling server. There is no point storing them on the server, as they probably expire after a period of time. A rough sketch of the flow is below.
This and this might be good places to read up on the basics.
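As a sketch of that flow (the signaling object here is a hypothetical wrapper around whatever transport you use, e.g. a WebSocket, and configuration is your own ICE server config):
var pc = new RTCPeerConnection(configuration);

// send every candidate as it is gathered; the last event fires with a null candidate
pc.onicecandidate = function (evt) {
  if (evt.candidate) {
    signaling.send(JSON.stringify({ type: 'candidate', candidate: evt.candidate }));
  }
};

// offerer side: create the offer once and send the local description
pc.createOffer(function (offer) {
  pc.setLocalDescription(offer);
  signaling.send(JSON.stringify(offer));
}, function (err) { console.log(err); });

// messages coming back from the signaling server
signaling.onmessage = function (msg) {
  var data = JSON.parse(msg.data);
  if (data.type === 'candidate') {
    pc.addIceCandidate(new RTCIceCandidate(data.candidate));
  } else {
    // "offer" or "answer": the other peer's description
    pc.setRemoteDescription(new RTCSessionDescription(data));
  }
};
So yes: you send every candidate (and your localDescription) to the other peer as you get them, and you apply everything you receive about that peer; there is no need to keep them around afterwards.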

The receiveBufferSize not being honored. UDP packet truncated

netty 4.0.24
I am passing XML over UDP. When receiving the UDP packet, the packet is always of length 2048, truncating the message, even though I have attempted to set the receive buffer size to something larger (4096, 8192, 65536); it is not being honored.
I have verified the UDP sender using another UDP ingest mechanism: a standalone Java app using java.net.DatagramSocket. The XML is around 45 KB.
I was able to trace the stack to DatagramSocketImpl.createChannel (line 281). Stepping into DatagramChannelConfig, it has a receiveBufferSize of whatever I set (great), but a rcvBufAllocator of 2048.
Does the rcvBufAllocator override the receiveBufferSize (SO_RCVBUF)? Is the message coming in multiple buffers?
Any feedback or alternative solutions would be greatly appreciated.
I should also mention that I am using an ESB called vert.x, which uses netty heavily. Since I was able to trace the problem down to netty, I was hopeful that I could find help here.
The maximum size of incoming datagrams copied out of the socket is actually not a socket option, but rather a parameter of the socket read() function that your client passes in each time it wants to read a datagram. One advantage of this interface is that programs accepting datagrams of unknown/varying lengths can adaptively change the size of the memory allocated for incoming datagram copies such that they do not over-allocate memory while still getting the whole datagram. (In netty this allocation/prediction is done by implementors of io.netty.channel.RecvByteBufAllocator.)
In contrast, SO_RCVBUF is the size of a buffer that holds all of the datagrams your client hasn't read yet.
Here's an example of how to configure a UDP service with a fixed max incoming datagram size with netty 4.x using a Bootstrap:
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelOption;
import io.netty.channel.FixedRecvByteBufAllocator;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.DatagramPacket;
import io.netty.channel.socket.nio.NioDatagramChannel;
import java.net.InetSocketAddress;
int maxDatagramSize = 4092;
String bindAddr = "0.0.0.0";
int port = 1234;
SimpleChannelInboundHandler<DatagramPacket> handler = . . .;
InetSocketAddress address = new InetSocketAddress(bindAddr, port);
NioEventLoopGroup group = new NioEventLoopGroup();
Bootstrap b = new Bootstrap()
    .group(group)
    .channel(NioDatagramChannel.class)
    .handler(handler);
b.option(ChannelOption.RCVBUF_ALLOCATOR, new FixedRecvByteBufAllocator(maxDatagramSize));
b.bind(address).sync().channel().closeFuture().await();
You could also configure the allocator with ChannelConfig.setRecvByteBufAllocator().
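For example (a sketch; ch is just the channel returned by bind() in the snippet above):
Channel ch = b.bind(address).sync().channel();
ch.config().setRecvByteBufAllocator(new FixedRecvByteBufAllocator(maxDatagramSize));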

Can I call peerconnection->addStream(stream) in OnAddStream() event

I am trying to make a peer-to-peer connection between a server and a client. I send a local video stream, through the peer connection,
from the client to the server, and once the server has received it in the OnAddStream() event it takes the stream and adds it back to the peer connection with AddStream() to send it back to the client it originally came from. The source on the server side looks like this:
void ServerPeerConnection::OnAddStream(webrtc::MediaStreamInterface* stream)
{
    this->AddStream(stream);
}
I know it seems senseless, but it's the first step to implement before going further.
So I'm asking whether this sequence is allowed. Should I call AddStream() before the SDP parameters are transferred between the peers, or can I call it afterwards? Doing it this way, I get the following error log:
Error(statscollector.cc:192): The SSRC 2128160837 is not associated with a track
Error(statscollector.cc:192): The SSRC 0 is not associated with a track
Transport::ConnectChannels_w: No local description has been set. Will generate one.
Jingle:Channel[audio|1|]: NULL DTLS identity supplied. Not doing DTLS
Jingle:Channel[audio|2|]: NULL DTLS identity supplied. Not doing DTLS
You can attach the remote stream like this:
var MediaStream = window.webkitMediaStream || window.MediaStream;
firstPeer.onaddstream = function (event) {
    var remoteStream = new MediaStream(event.stream.getTracks());
    otherPeer.addStream(remoteStream); /* attaching remote stream */
};
https://github.com/muaz-khan/WebRTC-Experiment/issues/2