I want to simulate a UDP flooding attack in Contiki.
I set up a button event. The default SEND_INTERVAL is 60 seconds; when I press the button, the send interval is set to 30 seconds. But in Cooja, after I press the button, the node doesn't send 10 UDP packets in 5 minutes.
The number of packets sent is chaotic: sometimes it is 3, sometimes it is 7. It should be 10, since 300/30 = 10.
Are there any other methods to implement a UDP flooding attack?
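For reference, the pattern looks roughly like this (a minimal sketch assuming Contiki's simple-udp API and the button sensor; conn and server_addr are placeholders, and the connection is assumed to be registered elsewhere):

#include "contiki.h"
#include "simple-udp.h"
#include "dev/button-sensor.h"

#define NORMAL_INTERVAL (60 * CLOCK_SECOND)
#define FLOOD_INTERVAL  (30 * CLOCK_SECOND)

static struct simple_udp_connection conn;  /* assumed registered elsewhere */
static uip_ipaddr_t server_addr;           /* placeholder: the sink's address */

PROCESS(udp_flood_process, "UDP flood client");
AUTOSTART_PROCESSES(&udp_flood_process);

PROCESS_THREAD(udp_flood_process, ev, data)
{
  static struct etimer periodic;

  PROCESS_BEGIN();

  SENSORS_ACTIVATE(button_sensor);
  etimer_set(&periodic, NORMAL_INTERVAL);

  while(1) {
    PROCESS_WAIT_EVENT();

    if(ev == sensors_event && data == &button_sensor) {
      /* Button pressed: restart the timer with the shorter interval. */
      etimer_set(&periodic, FLOOD_INTERVAL);
    } else if(etimer_expired(&periodic)) {
      simple_udp_sendto(&conn, "flood", 5, &server_addr);
      /* etimer_reset() re-arms from the previous expiry time, so the
         period stays exact even if the process was woken late. */
      etimer_reset(&periodic);
    }
  }

  PROCESS_END();
}

One possible cause of the chaotic count: if the timer is re-armed with etimer_set() on every wake-up (including button presses and other events), the period restarts each time; checking etimer_expired() and re-arming with etimer_reset() keeps the period exact.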
I'm working on a client/server project. In the project I need to check whether TCP and UDP flows are finished or not.
Since UDP has no FIN bit, is there a simple way to detect whether a UDP flow is complete?
The easiest way to detect the end of a UDP flow is to add some extra information to the UDP packets. For example, you can set a FIN flag in the final packet.
A better way is to add a small header to every UDP payload, so that you can do things like signalling the end of the flow or limiting the speed. That is how reliable-UDP protocols work.
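As a concrete illustration, the extra header could be as small as this (a C sketch; the layout and field names are invented for the example, not any standard):

#include <stdint.h>

/* Example application-level header prepended to every UDP payload.
   Setting FLAG_FIN in the last packet marks the end of the flow. */
#define FLAG_FIN 0x01

struct app_header {
    uint32_t seq;    /* sequence number, for ordering and loss detection */
    uint8_t  flags;  /* FLAG_FIN set on the final packet of the flow */
} __attribute__((packed));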
I have 1 server and several (maybe up to 20) clients. All clients send UDP datagrams at random times. Each datagram is quite short (about 10 B), but I must make sure all the data from each client is received correctly.
If I let all clients send datagrams to the same port, and client B sends its datagram at the exact time the server is receiving data from client A, it seems the server will miss the data from client A.
So what's the correct method to do this job? Do I need to create a listener for each of the 20 clients?
When you bind a UDP socket to a port, the networking stack allocates a buffer that holds a finite number of incoming UDP packets for you, so (assuming you call recv() in a relatively timely manner) no incoming packets should get lost.
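To make that concrete, here is a minimal single-socket sketch in C (port 5000 is an arbitrary example); recvfrom() reports each datagram's source address, so all 20 clients can share one port and no per-client listener is needed:

#include <stdio.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <sys/types.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);               /* arbitrary example port */
    bind(sock, (struct sockaddr *)&addr, sizeof(addr));

    for (;;) {
        char buf[64];
        struct sockaddr_in src;
        socklen_t srclen = sizeof(src);
        ssize_t n = recvfrom(sock, buf, sizeof(buf), 0,
                             (struct sockaddr *)&src, &srclen);
        if (n < 0)
            continue;
        /* src identifies which client this datagram came from */
        printf("got %zd bytes from %s:%d\n", n,
               inet_ntoa(src.sin_addr), ntohs(src.sin_port));
    }
}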
If you want to see your buffer sizes in a terminal, you can take a look at:
/proc/sys/net/core/rmem_default for recv
and
/proc/sys/net/core/wmem_default for send
I think the default buffer size on Linux is 131071 B.
On Linux, you can change the maximum UDP receive buffer size (e.g. to 26214400 bytes) by running (as root):
sysctl -w net.core.rmem_max=26214400
You can also make it permanent by adding this line to /etc/sysctl.conf:
net.core.rmem_max=26214400
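Per socket, you can also ask for a larger receive buffer with SO_RCVBUF; the kernel clamps the request to net.core.rmem_max. A sketch:

#include <stdio.h>
#include <sys/socket.h>

/* Enlarge the receive buffer of an already-created UDP socket.
   The kernel silently caps the value at net.core.rmem_max. */
static int enlarge_rcvbuf(int sock, int bytes)
{
    if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes)) < 0) {
        perror("setsockopt(SO_RCVBUF)");
        return -1;
    }
    return 0;
}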
Since each packet is only 10 B, this shouldn't be a problem.
If you are still worried about packet loss, you could implement a protocol where your client waits for an ACK from the server and resends otherwise (see the sketch below). Many protocols use such a feature, but it is only possible if timing allows it; in streaming, for example, it is not useful because there is no time to resend.
Or consider using TCP (if it is an option).
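A minimal stop-and-wait version of that idea might look like this (a C sketch; the 500 ms timeout and the retry count are arbitrary, and sock is assumed to be a connected UDP socket):

#include <stddef.h>
#include <sys/socket.h>
#include <sys/time.h>

/* Send one datagram and wait for a 1-byte ACK, resending on timeout.
   Returns 0 on success, -1 after max_retries timeouts. */
static int send_with_ack(int sock, const void *data, size_t len, int max_retries)
{
    struct timeval tv = { 0, 500 * 1000 };  /* 500 ms receive timeout */
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    for (int i = 0; i < max_retries; i++) {
        char ack;
        send(sock, data, len, 0);
        if (recv(sock, &ack, 1, 0) == 1)    /* got the ACK */
            return 0;
        /* timeout or error: loop around and resend */
    }
    return -1;
}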
We are developing a project where we send UDP packets, and we are able to send them successfully. But we also want to fragment a packet if it exceeds some limit. The listener we communicate with expects packets of 1024 bytes, and depending on the content a packet may get bigger than that; when it does, it should be fragmented, show up in Wireshark as 2 message fragments, and be reassembled at the receiving end. I am developing in VB.NET.
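For what it's worth: if you simply send one UDP datagram larger than the link MTU, the IP layer fragments it automatically and Wireshark will show the fragments and their reassembly. If the listener instead expects application-level chunks of at most 1024 bytes, you have to split the payload yourself. The chunking logic is language-neutral, so here is a sketch in C rather than VB.NET, with an invented 4-byte fragment header:

#include <stdint.h>
#include <string.h>
#include <sys/socket.h>

#define CHUNK_SIZE 1024  /* the listener's expected maximum, per the question */

/* Invented header: fragment index and total fragment count, both
   16-bit big-endian. The receiver reassembles the message once it
   has collected all 'total' fragments. Assumes a connected socket. */
static void send_fragmented(int sock, const uint8_t *msg, size_t len)
{
    size_t payload = CHUNK_SIZE - 4;             /* room left after header */
    uint16_t total = (uint16_t)((len + payload - 1) / payload);

    for (uint16_t idx = 0; idx < total; idx++) {
        uint8_t pkt[CHUNK_SIZE];
        size_t off = (size_t)idx * payload;
        size_t n = len - off < payload ? len - off : payload;

        pkt[0] = idx >> 8;   pkt[1] = idx & 0xff;
        pkt[2] = total >> 8; pkt[3] = total & 0xff;
        memcpy(pkt + 4, msg + off, n);
        send(sock, pkt, n + 4, 0);
    }
}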
In the OpenFlow 1.3.3 spec you have the Receive Dropped and Receive Overrun Errors counters for a port. Under what conditions do these 2 counters increment?
Thanks
Receive Overrun Errors: this warning is related to load on the COM port and the CPU's ability to service the COM port interrupts (i.e. CPU load and interrupt priorities).
Receive Dropped: I think this increments when a switch receives a packet but then drops it. In other words, it receives a packet but doesn't forward it out of any of its ports.
Edit:
Here is what I think:
Note that there are 2 Received Packets counters.
Received Packets per flow entry: incremented when a packet is received and matched to that flow.
Received Packets per port: this one doesn't care which flow the packet will match; if a packet is received on a port, it is incremented.
Receive Dropped is per port, so Receive Dropped should always be less than or equal to the port's Received Packets.
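For reference, these per-port counters are fields of ofp_port_stats in the 1.3 spec, roughly:

#include <stdint.h>

/* Body of the reply to an OFPMP_PORT_STATS request (OpenFlow 1.3). */
struct ofp_port_stats {
    uint32_t port_no;
    uint8_t  pad[4];
    uint64_t rx_packets;   /* Received Packets (per port) */
    uint64_t tx_packets;
    uint64_t rx_bytes;
    uint64_t tx_bytes;
    uint64_t rx_dropped;   /* Receive Dropped: received but not forwarded */
    uint64_t tx_dropped;
    uint64_t rx_errors;
    uint64_t tx_errors;
    uint64_t rx_frame_err;
    uint64_t rx_over_err;  /* Receive Overrun Errors */
    uint64_t rx_crc_err;
    uint64_t collisions;
    uint32_t duration_sec;
    uint32_t duration_nsec;
};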
I'm trying to make a TCP connection between a server (in this case, my PC) and my Telit GL865-DUAL modem.
I connect the modem via a serial port (FTDI adapter) and send or receive data and commands directly from my computer.
The connection can be established and data transmission works both ways. But when the modem sends data, there is a delay of at least 3-5 seconds, whereas the server's answer shows up on the module within milliseconds.
The commands I use (>> indicates the response from the module):
AT#SD=1,0,4444,"myserversip"
>> CONNECT
Is there a way to make the module's send timing match the server's?
Thanks.
This must be due to a slow network.
Check your network speed; if it is good, then you have to look at your server-side code to find why it's delaying.