LWIP - how to get UDP source port of received packet? - udp

SOLVED: how to identify the source port of the sender of the packet you just received:
Use example "udp_server"...
...and anywhere after the line int len = recvfrom(sock, rx_buffer, sizeof(rx_buffer) - 1, 0, (struct sockaddr *)&source_addr, &socklen);
...add the line u16_t client_port = ntohs(((struct sockaddr_in *)&source_addr)->sin_port); (sin_port is in network byte order, so ntohs() is the proper conversion; htons() happens to produce the same result).
The sender's IP address is already extracted by the existing line inet_ntoa_r(((struct sockaddr_in *)&source_addr)->sin_addr, addr_str, sizeof(addr_str) - 1);
So:
char client_IP[16];     // address of the client who sent the request
u16_t client_port = 0;  // port of the client who sent the request
ESP_LOGI(TAG, "Received %d bytes from %s:%d", len, client_IP, client_port);
will give you:
UDP Rx: Received 28 bytes from 192.168.0.110:55124
Once a UDP packet is received, you now know the ip:port to reply to.
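Putting it together, here is a minimal consolidated sketch of the receive path, assuming the socket setup from the udp_server example; buffer sizes and names are illustrative:

// Minimal sketch based on the ESP-IDF "udp_server" example; assumes `sock`
// is an already-created and bound UDP socket. Names are illustrative.
char rx_buffer[128];
char addr_str[16];                    // textual IP of the client who sent the request
struct sockaddr_storage source_addr;  // large enough for either IPv4 or IPv6
socklen_t socklen = sizeof(source_addr);

int len = recvfrom(sock, rx_buffer, sizeof(rx_buffer) - 1, 0,
                   (struct sockaddr *)&source_addr, &socklen);
if (len > 0) {
    // Extract the sender's IP and port (IPv4 case)
    inet_ntoa_r(((struct sockaddr_in *)&source_addr)->sin_addr,
                addr_str, sizeof(addr_str) - 1);
    u16_t client_port = ntohs(((struct sockaddr_in *)&source_addr)->sin_port);
    ESP_LOGI(TAG, "Received %d bytes from %s:%d", len, addr_str, client_port);

    // source_addr can be passed straight back to sendto() for the reply.
}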

Related

CMSIS USB firmware for stm32f401. Fails to send device descriptor with EOVERFLOW (-75) error

I am trying to get USB device code working on my stm32f401 microcontroller. So far, I'm able to read the packets from the host, answer the first device descriptor request, and receive and apply the assigned address. After that, the host requests the device descriptor again, and my device doesn't respond to it for some reason. I've been fighting this issue for weeks now, and it seems like I need help with it.
In my code, after initializing the peripherals, I set up endpoint 0 and set its Tx FIFO size to 16. When the device receives a packet, it reads the packet status and then reads the packet content in setup_packet_handler(), which prints the content word by word:
uint32_t data = *fifo;
printf("Rx from FIFO: 0x%08x\n", data);
void rxflvl_handler() {
uint32_t packet_status = USB_OTG_FS->GRXSTSP;
uint8_t endpoint_number = (packet_status & USB_OTG_GRXSTSP_EPNUM_Msk) >> USB_OTG_GRXSTSP_EPNUM_Pos;
uint16_t bcnt = (packet_status & USB_OTG_GRXSTSP_BCNT_Msk) >> USB_OTG_GRXSTSP_BCNT_Pos;
uint16_t pktsts = (packet_status & USB_OTG_GRXSTSP_PKTSTS_Msk) >> USB_OTG_GRXSTSP_PKTSTS_Pos;
printf("Received packet of status 0x%02x\n", pktsts);
switch (pktsts) {
case 0x06: // SETUP packet (includes data)
setup_packet_handler(endpoint_number, bcnt);
break;
case 0x02: // OUT packet (includes data)
break;
case 0x04: // SETUP stage has completed
case 0x03: // OUT transfer has completed
// Re-enable the endpoint
OUT_ENDPOINT(endpoint_number)->DOEPCTL |= USB_OTG_DOEPCTL_CNAK;
OUT_ENDPOINT(endpoint_number)->DOEPCTL |= USB_OTG_DOEPCTL_EPENA;
break;
}
}
My main() function watches the current number of packets to send and the free space of the endpoint 0 Tx FIFO:
int main(void)
{
/* Pins initialization */
/* USB initialization */
in_packets = (in_endpoint->DIEPTSIZ & USB_OTG_DIEPTSIZ_PKTCNT_Msk) >> USB_OTG_DIEPTSIZ_PKTCNT_Pos;
fifo_space = in_endpoint->DTXFSTS & USB_OTG_DTXFSTS_INEPTFSAV_Msk;
printf("FIFO free space: %i\n", fifo_space);
while (1)
{
// Checking for USB interrupts
usb_poll();
// Notify if packets number changed
uint32_t npkts = (in_endpoint->DIEPTSIZ & USB_OTG_DIEPTSIZ_PKTCNT_Msk) >> USB_OTG_DIEPTSIZ_PKTCNT_Pos;
if (npkts != in_packets) {
printf("Packets number update: %i\n", npkts);
in_packets = npkts;
}
// Notify if FIFO space changed
uint16_t new_space = in_endpoint->DTXFSTS & USB_OTG_DTXFSTS_INEPTFSAV_Msk;
if (new_space != fifo_space) {
fifo_space = new_space;
printf("FIFO free space: %i\n", new_space);
}
}
}
Now the broken part: the code which sends the device descriptor. The function handle_descriptor_request() is called when the device descriptor request is received.
The function sets the transfer size to 18, the packet count to 3 (endpoint 0's max packet size is configured to 8), enables the endpoint, and fills the FIFO. After that, it prints the FIFO free space to confirm that it has decreased.
void handle_descriptor_request(void) {
printf("Received descriptor request\n");
uint32_t* fifo = (uint32_t*)(USB_OTG_FS_PERIPH_BASE + USB_OTG_FIFO_BASE +
0 * 0x1000);
USB_OTG_INEndpointTypeDef* in_endpoint = (USB_OTG_INEndpointTypeDef*)(
USB_OTG_FS_PERIPH_BASE +
USB_OTG_IN_ENDPOINT_BASE +
0
);
// flush_txfifo(0);
// in_endpoint->DIEPTSIZ = 0;
// in_endpoint->DIEPCTL |= USB_OTG_DIEPCTL_SNAK;
// in_endpoint->DIEPCTL |= USB_OTG_DIEPCTL_EPDIS;
printf("Manual PKTCNT to 3\n");
// Set Tx size to 18
in_endpoint->DIEPTSIZ &= ~USB_OTG_DIEPTSIZ_XFRSIZ_Msk;
in_endpoint->DIEPTSIZ &= ~USB_OTG_DIEPTSIZ_PKTCNT_Msk;
in_endpoint->DIEPTSIZ |= 18 << USB_OTG_DIEPTSIZ_XFRSIZ_Pos;
// Send 3 packets (8 bytes max for each)
in_endpoint->DIEPTSIZ |= 3 << USB_OTG_DIEPTSIZ_PKTCNT_Pos;
// Enable transmission
in_endpoint->DIEPCTL |= USB_OTG_DIEPCTL_CNAK;
in_endpoint->DIEPCTL |= USB_OTG_DIEPCTL_EPENA;
// Write a valid device descriptor to the Tx FIFO
*fifo = 0x02000112;
*fifo = 0x08000000;
*fifo = 0x13aa6666;
*fifo = 0x00000000;
*fifo = 0x0000012c;
fifo_space = in_endpoint->DTXFSTS & USB_OTG_DTXFSTS_INEPTFSAV_Msk;
printf("FIFO free space: %i\n", fifo_space);
}
Below is the log I'm getting from the device. I see that the first descriptor request is answered successfully: the FIFO free space drops from 16 to 11, then back to 16. The number of packets to send decreases from 3 to 2 (although it should decrease to 0, right?). I also see this descriptor in Wireshark. Then the device address is received and answered with a zero-length packet; then something goes wrong. My FIFO space keeps decreasing with every descriptor request, and the packets don't seem to leave the device.
USB reset received
FIFO free space: 16
INEPNE Endpoint interrupt
TXFE Endpoint interrupt
// First descriptor request. Sent successfully
Received packet of status 0x06
// Device descriptor request in binary form
Rx from FIFO: 0x01000680
Rx from FIFO: 0x00400000
// Output from handle_descriptor_request()
Received descriptor request
Manual PKTCNT to 3
FIFO free space: 11
// Output from main()
Packets number update: 2
FIFO free space: 16
// Output from rxflvl_handler()
Received packet of status 0x04
Received packet of status 0x02
Received packet of status 0x03
INEPNE Endpoint interrupt
TXFE Endpoint interrupt
// Received address
Received packet of status 0x06
// Set address request in binary form
Rx from FIFO: 0x001d0500
Rx from FIFO: 0x00000000
Received address: 29
Writing zero-length packet
Manual PKTCNT to 1
Packets number update: 1
Received packet of status 0x04
Packets number update: 0
TXFRC Endpoint interrupt
TXFE Endpoint interrupt
// Another device descriptor request
Received packet of status 0x06
Rx from FIFO: 0x01000680
Rx from FIFO: 0x00120000
Received descriptor request
Manual PKTCNT to 3
FIFO free space: 11
Packets number update: 3
Received packet of status 0x04
// Looks unsuccessful. Try again...
Received packet of status 0x06
Rx from FIFO: 0x01000680
Rx from FIFO: 0x00120000
Received descriptor request
Manual PKTCNT to 3
FIFO free space: 6
// And so on. The FIFO free space decreases to 0
Here are the Wireshark views of the first and second descriptor responses. The second one fails with EOVERFLOW (-75). Sometimes I see a -71 error when I run 'dmesg' on my host. I can't figure out the reason for that.
If I flush FIFO 0 before the descriptor transmission, i.e. uncomment
flush_txfifo(0);
things don't change much: the FIFO free space stops decreasing below 11, but the packets still don't seem to be sent.
If I recover the endpoint before the descriptor transmission by uncommenting
in_endpoint->DIEPTSIZ = 0;
in_endpoint->DIEPCTL |= USB_OTG_DIEPCTL_SNAK;
in_endpoint->DIEPCTL |= USB_OTG_DIEPCTL_EPDIS;
then, starting with the second try, the device seems to start sending something, but something is still wrong. The FIFO free space only climbs back to 15 words instead of 16:
// ...
Rx from FIFO: 0x01000680
Rx from FIFO: 0x00120000
Received descriptor request
Manual PKTCNT to 3
FIFO free space: 11
FIFO free space: 15
Received packet of status 0x04
Received packet of status 0x02
Received packet of status 0x03
Also, I can't understand why the number of packets drops only from 3 to 2 for the successful transmission. I can see the whole descriptor (18 bytes) in Wireshark, and the endpoint max packet size is configured to 8.
Could you point me to things I may need to check?

Sending message from udp client to udp server without sending protocol address of client

I have a query: is it possible to send data from a UDP client to a UDP server without sending the protocol address of the client, and then send a reply from the server back to the client?
Server.c
if(bind(socket_desc, (struct sockaddr*)&server_addr, sizeof(server_addr)) < 0){
printf("Couldn't bind to the port\n");
return -1;
}
printf("Done with binding\n");
printf("Listening for incoming messages...\n\n");
// Receive client's message:
if (recvfrom(socket_desc, client_message, sizeof(client_message), 0,
(struct sockaddr*)NULL, NULL) < 0){
printf("Couldn't receive\n");
return -1;
}
// printf("Received message from IP: %s and port: %i\n",
// inet_ntoa(client_addr.sin_addr), ntohs(client_addr.sin_port));
printf("Msg from client: %s\n", client_message);
// Respond to client:
int n = atoi(client_message);
int m =0, sum = 0;
while(n>0){
m=n%10;
sum=sum+m;
n=n/10;
}
sprintf(server_message, "%d", sum);
if (sendto(socket_desc, server_message, strlen(server_message), 0,
(struct sockaddr*)&client_addr, client_struct_length) < 0){
printf("Can't send\n");
return -1;
}
When I send data from the client to the server, the server prints what I sent from the client, but when I then try to send the modified data back to the client, the send fails.
client_addr does not magically get filled in with the client's address. That has to be done explicitly in recvfrom, but you pass NULL there instead and thus throw away this information:
if (recvfrom(socket_desc, client_message, sizeof(client_message), 0,
(struct sockaddr*)NULL, NULL) < 0){
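A minimal corrected sketch of that call, reusing the client_addr and client_struct_length that the later sendto() already expects (as hinted at by the commented-out printf in the question):

// Capture the client's address in recvfrom() so sendto() has a valid destination later.
struct sockaddr_in client_addr;
socklen_t client_struct_length = sizeof(client_addr);

if (recvfrom(socket_desc, client_message, sizeof(client_message), 0,
             (struct sockaddr*)&client_addr, &client_struct_length) < 0){
    printf("Couldn't receive\n");
    return -1;
}
printf("Received message from IP: %s and port: %i\n",
       inet_ntoa(client_addr.sin_addr), ntohs(client_addr.sin_port));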

(UDP) GVCP BroadCast not functioning

Environment: Windows Socket Programming using VC++ 2010
GVCP : GigE Vision Control Protocol
GVCP = UDP + (GVCP header data + payload data), so basically it is just UDP on top.
To detect a GigE sensor (camera), I first need to broadcast a GVCP packet (containing the GVCP payload data) using the broadcast address 255.255.255.255.
But I am only able to broadcast via 192.168.1.255 (as seen in Wireshark); when I change the broadcast address to 255.255.255.255, nothing is visible in Wireshark or on the other machine.
So the problem is not being able to broadcast to 255.255.255.255 using UDP/WinSock.
I am now able to start broadcasting the GVCP packet; it was just a socket setup error. The corrected code is below:
//---------------------DATA SENDER------------------------------
struct sockaddr_in Sender_addr;
int Sender_addrlen = sizeof(Sender_addr);
Sender_addr.sin_family = AF_INET;
Sender_addr.sin_port = htons(CAMPORT); //BROADCAST_PORT);
Sender_addr.sin_addr.s_addr = inet_addr("255.255.255.255"); // Broadcast IP here
//---------------------DATA RECEIVER----------------------------
struct sockaddr_in Recv_addr;
int Recv_addrlen = sizeof(Recv_addr);
Recv_addr.sin_family = AF_INET;
Recv_addr.sin_port = htons(PCPORT);
Recv_addr.sin_addr.s_addr = INADDR_ANY;
if(bind(sock,(sockaddr*)&Recv_addr,sizeof(Recv_addr))<0)
{
perror("bind");
_getch();
closesocket(sock);
}
//and then send command for GVCP packet (GVCP packet Structure is )
TxBuff[0] = 0x42;
TxBuff[1] = 0x01;
TxBuff[2] = 0x00;
TxBuff[3] = 0x02;
TxBuff[4] = 0x00;
TxBuff[5] = 0x00;
TxBuff[6] = 0x00;
TxBuff[7] = 0x02;
if(sendto(sock, TxBuff, TxBuffSize, 0, (struct sockaddr *)&Sender_addr, sizeof(Sender_addr)) < 0)
{
perror("send: error ");
_getch();
closesocket(sock);
}
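The posted snippet does not show the option that allows sending to the limited-broadcast address: on WinSock, a sendto() to 255.255.255.255 is normally rejected unless SO_BROADCAST is enabled on the socket. A sketch of that setup step, assuming the same sock is used for sending, would be:

// Enable broadcast on the sending socket before sendto() to 255.255.255.255;
// without SO_BROADCAST, WinSock typically fails the send.
BOOL broadcast_enable = TRUE;
if (setsockopt(sock, SOL_SOCKET, SO_BROADCAST,
               (const char*)&broadcast_enable, sizeof(broadcast_enable)) == SOCKET_ERROR)
{
    perror("setsockopt(SO_BROADCAST)");
    closesocket(sock);
}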

Checksum calculation issue for ICMPv6 using Asio Boost

I have used the ICMP example provided in the Asio documentation to create a simple ping utility. However, the example covers IPv4 only, and I am having a hard time making it work with IPv6.
Upgrading the ICMP header class to support IPv6 requires a minor change: the only difference between the ICMP and ICMPv6 headers is the different enumeration of ICMP types. However, I have a problem computing the checksum that needs to be incorporated into the ICMPv6 header.
For IPv4 the checksum is based on the ICMP header and payload. For IPv6, however, the checksum should also cover the IPv6 pseudo-header that precedes the ICMPv6 header and payload. The ICMPv6 checksum function therefore needs to know the source and destination addresses that will be in the IPv6 header, but we have no control over what goes into the IPv6 header. How can this be done in Boost.Asio?
For reference please find below the function for IPv4 checksum calculation.
template <typename Iterator>
void compute_checksum(icmp_header& header, Iterator body_begin, Iterator body_end)
{
unsigned int sum = (header.type() << 8) + header.code()
+ header.identifier() + header.sequence_number();
Iterator body_iter = body_begin;
while (body_iter != body_end)
{
sum += (static_cast<unsigned char>(*body_iter++) << 8);
if (body_iter != body_end)
sum += static_cast<unsigned char>(*body_iter++);
}
sum = (sum >> 16) + (sum & 0xFFFF);
sum += (sum >> 16);
header.checksum(static_cast<unsigned short>(~sum));
}
[EDIT]
What are the consequences if the checksum is not calculated correctly? Will the target host send echo reply if the echo request has invalid checksum?
If the checksum is incorrect, a typical IPv6 implementation will drop the packet. So, it is a serious issue.
If you insist on crafting the packet yourself, you'll have to do it completely. This includes finding the source IP address, to put it in the pseudo-header before computing the checksum. Here is how I do it in C, by calling connect() for my intended destination address (even when I use UDP, so it should work for ICMP):
/* Get the source IP address chosen by the system (for verbose display, and
* for checksumming) */
if (connect(sd, destination->ai_addr, destination->ai_addrlen) < 0) {
fprintf(stderr, "Cannot connect the socket: %s\n", strerror(errno));
abort();
}
source = malloc(sizeof(struct addrinfo));
source->ai_addr = malloc(sizeof(struct sockaddr_storage));
source_len = sizeof(struct sockaddr_storage);
if (getsockname(sd, source->ai_addr, &source_len) < 0) {
fprintf(stderr, "Cannot getsockname: %s\n", strerror(errno));
abort();
}
then, later:
sockaddr6 = (struct sockaddr_in6 *) source->ai_addr;
op6.ip.ip6_src = sockaddr6->sin6_addr;
and:
op6.udp.check =
checksum6(op6.ip, op6.udp, (u_int8_t *) & message, messagesize);
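For reference, here is a sketch of the full ICMPv6 checksum computed over the IPv6 pseudo-header (source address, destination address, upper-layer length, next header 58) followed by the ICMPv6 message, per RFC 4443; the function and parameter names are illustrative and not part of the Asio example:

#include <stdint.h>
#include <stddef.h>
#include <netinet/in.h>

/* Illustrative sketch: 16-bit one's-complement sum over the IPv6
 * pseudo-header and the ICMPv6 message (whose checksum field must be
 * zero while summing). src/dst would come from getsockname() and the
 * destination address, as in the answer above. */
static uint16_t icmp6_checksum(const struct in6_addr *src,
                               const struct in6_addr *dst,
                               const uint8_t *msg, size_t len)
{
    uint32_t sum = 0;
    size_t i;

    /* Pseudo-header: source and destination addresses */
    for (i = 0; i < 16; i += 2) {
        sum += (src->s6_addr[i] << 8) | src->s6_addr[i + 1];
        sum += (dst->s6_addr[i] << 8) | dst->s6_addr[i + 1];
    }
    /* Pseudo-header: upper-layer packet length and next header (58 = ICMPv6) */
    sum += (uint32_t)(len >> 16) & 0xFFFF;
    sum += (uint32_t)(len & 0xFFFF);
    sum += 58;

    /* ICMPv6 header and payload */
    for (i = 0; i + 1 < len; i += 2)
        sum += (msg[i] << 8) | msg[i + 1];
    if (len & 1)
        sum += msg[len - 1] << 8;   /* pad odd trailing byte */

    /* Fold carries and return the one's complement */
    while (sum >> 16)
        sum = (sum & 0xFFFF) + (sum >> 16);
    return (uint16_t)~sum;
}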

winsock - Sender UDP socket not bound to the desired port

Below you can see my code that implements a pretty basic UDP sender in C++ with Winsock. The thing is that no matter how many times I run the code, the socket (the listenSocket) gets bound to a different UDP port. Is there any specific reason for this? Am I doing some mistake in my code?
thanks
#include <cstdlib>
#include <iostream>
#include <windows.h>
#include <winsock2.h>
#include <ws2tcpip.h>
#include <stdio.h>
using namespace std;
int main(int argc, char *argv[])
{
WSADATA wsaData;
SOCKADDR_IN myAddress;
SOCKADDR_IN targetAddress;
int myPort = 60888;
const char *myIP = "192.168.0.1";
int remotePort = 2048;
const char *remoteIP = "192.168.0.2";
SOCKET ListenSocket = INVALID_SOCKET;
SOCKET SendSocket = INVALID_SOCKET;
SOCKET acceptSocket;
char cBuffer[1024] = "Test Buffer";
int nBytesSent = 0;
int nBufSize = strlen(cBuffer);
int iResult;
// Initialize Winsock
if( WSAStartup( MAKEWORD(2, 2), &wsaData ) != NO_ERROR )
{
cerr<<"Socket Initialization: Error with WSAStartup\n";
system("pause");
WSACleanup();
exit(10);
}
ListenSocket = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
SendSocket = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
if (ListenSocket == INVALID_SOCKET or SendSocket == INVALID_SOCKET)
{
cerr<<"Socket Initialization: Error creating socket"<<endl;
system("pause");
WSACleanup();
exit(11);
}
//bind
myAddress.sin_family = AF_INET;
myAddress.sin_addr.s_addr = inet_addr(myIP);
myAddress.sin_port = htons(myPort);
targetAddress.sin_family = AF_INET;
targetAddress.sin_addr.s_addr = inet_addr(remoteIP);
targetAddress.sin_port = htons(remotePort);
if(bind(ListenSocket, (SOCKADDR*) &myAddress, sizeof(myAddress)) == SOCKET_ERROR)
{
cerr<<"ServerSocket: Failed to connect\n";
system("pause");
WSACleanup();
exit(14);
}
else
printf("Server: bind() is OK.\n");
nBytesSent = sendto(SendSocket, cBuffer, nBufSize, 0,
(SOCKADDR *) &targetAddress,
sizeof(SOCKADDR_IN));
printf("Everything is ok\n");
system("PAUSE");
closesocket(ListenSocket);
closesocket(SendSocket);
return EXIT_SUCCESS;
}
EDIT: Maybe I was not so clear. What I do with this code is send some data to a remote PC. But what is required is that the UDP segments should appear to originate from a specific port. How can this be done? Is what I'm doing here wrong? Now that I'm thinking about it, I guess it is wrong indeed. The SendSocket and ListenSocket don't have any connection, correct? So, how can I make the UDP segments appear to originate from a specific UDP port? Thanks!
You are not calling bind() on SendSocket before sending data with it, so WinSock is free to bind that socket to whatever random local IP/Port it needs to. If you have to send data with a specific source IP/Port every time, you have to bind() to that IP/Port first. If that local IP/Port is the same pair you are binding ListenSocket to, then you don't need to use two separate sockets to begin with. You can send data with the same socket that is listening for incoming data.
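A minimal sketch of that suggestion applied to the question's code: drop SendSocket and send on the already-bound ListenSocket, so the datagrams carry the source port 60888 (myPort):

/* Send on the socket that is already bound to myAddress, so the
 * outgoing datagrams originate from the desired local IP/port.
 * SendSocket then becomes unnecessary. */
nBytesSent = sendto(ListenSocket, cBuffer, nBufSize, 0,
                    (SOCKADDR *)&targetAddress, sizeof(SOCKADDR_IN));
if (nBytesSent == SOCKET_ERROR)
    printf("sendto() failed with error %d\n", WSAGetLastError());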