In my mind, UDP is fast but not reliable, yet in a lot of places JGroups is based on UDP. Is that reliable? I have seen a lot of places use JGroups to transfer caching information. Does JGroups have to use TCP to make transmission reliable?
JGroups provides protocol components that implement reliability on top of UDP; see here for some details:
http://www.jgroups.org/manual/html_single/index.html#d0e5392
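For a concrete picture, here is a minimal sketch of a node using the classic JGroups 3.x/4.x JChannel API with the stock udp.xml stack (protocols such as NAKACK2 and UNICAST3 in that stack add retransmission, ordering and duplicate detection on top of UDP). The class name and cluster name are made up:

```java
import org.jgroups.JChannel;
import org.jgroups.Message;
import org.jgroups.ReceiverAdapter;

public class CacheNode {
    public static void main(String[] args) throws Exception {
        // "udp.xml" is the stock UDP-based stack; its protocols (NAKACK2, UNICAST3, ...)
        // provide retransmission, ordering and duplicate elimination above plain UDP.
        JChannel channel = new JChannel("udp.xml");
        channel.setReceiver(new ReceiverAdapter() {
            @Override
            public void receive(Message msg) {
                System.out.println("got: " + msg.getObject());
            }
        });
        channel.connect("cache-cluster");                       // join (or create) the cluster
        channel.send(new Message(null, "invalidate key 42"));   // null destination = send to all members
        Thread.sleep(5000);
        channel.close();
    }
}
```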
Related
UDP has one good feature: it is connectionless. But it has many bad features: packets can be lost, packets can arrive multiple times, and there is no packet sequencing, so packet 2 can arrive before packet 1. How do you keep the good and remove the bad? Are there any good implementations that provide a reliable transport protocol on top of UDP, so that we are still connectionless but without the problems mentioned? One example of what can be done with this is mosh.
What you describe as bad isn't necessarily bad; it depends on the context.
For example, UDP is used a lot in real-time streaming, where delivery confirmation and retransmission are useless.
That being said, there are a few implementations that you might want to look at:
ENet (http://enet.bespin.org/)
RUDP (https://en.wikipedia.org/wiki/Reliable_User_Datagram_Protocol)
UDT (https://en.wikipedia.org/wiki/UDP-based_Data_Transfer_Protocol)
I work in an embedded context:
CoAP (https://en.wikipedia.org/wiki/Constrained_Application_Protocol) also implements a lot of these features, so it's worth a look.
What is your reason for not choosing TCP?
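To make the "keep the connectionless nature, remove the bad parts" idea concrete, here is a rough stop-and-wait sketch in Java over a plain DatagramSocket: each datagram carries a sequence number and is resent until the matching ACK comes back. This is illustrative only; the libraries above add sliding windows, congestion control and duplicate filtering on top of the same idea, and the peer address and timeout below are made up.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetSocketAddress;
import java.net.SocketTimeoutException;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class StopAndWaitSender {
    public static void main(String[] args) throws Exception {
        InetSocketAddress peer = new InetSocketAddress("127.0.0.1", 9999);  // illustrative peer
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setSoTimeout(500);                    // retransmit after 500 ms without an ACK
            String[] messages = {"hello", "world"};
            int seq = 0;
            for (String text : messages) {
                byte[] payload = text.getBytes(StandardCharsets.UTF_8);
                ByteBuffer buf = ByteBuffer.allocate(4 + payload.length);
                buf.putInt(seq).put(payload);            // 4-byte sequence number + payload
                DatagramPacket out = new DatagramPacket(buf.array(), buf.position(), peer);

                boolean acked = false;
                while (!acked) {
                    socket.send(out);
                    try {
                        byte[] ackBuf = new byte[4];
                        DatagramPacket ack = new DatagramPacket(ackBuf, ackBuf.length);
                        socket.receive(ack);             // wait for the ACK or time out
                        acked = ByteBuffer.wrap(ackBuf).getInt() == seq;  // ignore stale ACKs
                    } catch (SocketTimeoutException e) {
                        // timeout: loop and resend the same datagram
                    }
                }
                seq++;
            }
        }
    }
}
```

The receiving side would deliver a payload only when it sees the next expected sequence number and reply with a 4-byte ACK carrying that number, which filters out duplicates and reordering at the cost of throughput.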
According to my knowledge, UDP does not use the path MTU to avoid fragmentation, whereas TCP does. I am trying to come up with a reason for this particular design.
TCP needs to avoid fragmentation because it has to retransmit the whole segment even if just one fragment is dropped. UDP, on the other hand, does not need to retransmit datagrams; it's up to the application layer to ensure integrity.
In conclusion, fragmentation will slow down TCP but not UDP.
Here comes the problem: for communication that needs integrity, whether you employ TCP, which naturally guarantees integrity, or build an application-layer retransmission protocol on top of UDP, the whole datagram has to be resent if it is not ACKed. Fragmentation should therefore slow down an application-layer retransmission protocol over UDP just as much as it slows down TCP.
What's wrong with my reasoning?
UDP is a datagram protocol where each packet is a single entity, independent of the other packets (UDP does not detect duplication, reordering, etc.). TCP, on the other hand, is a stream protocol: the whole transfer consists of a single unstructured octet stream, similar to a large file. To make the transfer of this stream more efficient it makes sense to detect the MTU of the path and try to send mostly packets which max out this MTU, thus reducing the per-packet overhead of the transfer. To reduce the overhead further, TCP will merge multiple consecutive writes into as few packets (up to the MTU) as possible.
UDP, by contrast, cannot avoid fragmentation by itself, because it transmits each datagram as it is; the datagram boundary is the packet boundary. Any optimization to reduce overhead has to be done by the application itself.
Thus TCP is best suited for applications where its features, like guaranteed and ordered delivery and efficient use of bandwidth, are needed. Unfortunately these features come with drawbacks like comparatively slow connection setup, higher latency in case of packet loss, etc. But there are applications which don't need all the good parts and have to avoid the bad parts. For example, real-time audio and video can deal with packet loss but need low latency: it does not matter if all the data arrive, but what arrives has to arrive fast. In these cases the simpler UDP protocol is better suited.
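The stream-versus-datagram difference is easy to see in code. In this toy Java sketch (loopback only, no error handling), two small TCP writes typically arrive as a single read because the stream has no message boundaries, while two UDP sends always arrive as two separate datagrams:

```java
import java.io.OutputStream;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class BoundariesDemo {
    public static void main(String[] args) throws Exception {
        // TCP: a byte stream. The two writes below carry no boundary on the wire;
        // the receiver can get both of them back from a single read().
        try (ServerSocket listener = new ServerSocket(0);
             Socket sender = new Socket("127.0.0.1", listener.getLocalPort());
             Socket receiver = listener.accept()) {
            OutputStream out = sender.getOutputStream();
            out.write("hello ".getBytes());
            out.write("world".getBytes());
            out.flush();
            Thread.sleep(100);                           // let both writes reach the receive buffer
            byte[] buf = new byte[64];
            int n = receiver.getInputStream().read(buf);
            System.out.println("TCP read returned " + n + " bytes: " + new String(buf, 0, n));
        }

        // UDP: each send() is one datagram; boundaries are preserved on receive.
        try (DatagramSocket receiver = new DatagramSocket();
             DatagramSocket sender = new DatagramSocket()) {
            InetSocketAddress dest = new InetSocketAddress("127.0.0.1", receiver.getLocalPort());
            sender.send(new DatagramPacket("hello ".getBytes(), 6, dest));
            sender.send(new DatagramPacket("world".getBytes(), 5, dest));
            for (int i = 0; i < 2; i++) {
                DatagramPacket p = new DatagramPacket(new byte[64], 64);
                receiver.receive(p);                     // always one datagram per receive
                System.out.println("UDP datagram " + i + ": " + new String(p.getData(), 0, p.getLength()));
            }
        }
    }
}
```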
Because there's nothing useful it can do with the MTU. It's a datagram protocol.
Because there is no path. It's a connectionless protocol.
I have made a peer-to-peer program in TCP and then moved it to UDP. I would like to change it so you can send files; however, UDP does not support this, and TCP is slower than UDP. Since these are the most common transport-layer protocols, I was curious to learn about any other layers that I do not know about.
So what other layers are there that would work with VB.net?
I would probably use HTTP for this.
The things you are thinking about inventing have already been invented there.
This could be a good starting point for your server side:
http://msdn.microsoft.com/en-us/library/system.net.httplistener(VS.80).aspx
And the client:
http://msdn.microsoft.com/en-us/library/system.net.webclient.aspx
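For illustration, here is the same idea sketched in Java (the linked HttpListener and WebClient classes play the corresponding roles in .NET; the port and file names below are made up): a tiny HTTP server hands out a file and the client downloads it, with TCP underneath providing the reliability you would otherwise have to build on top of UDP.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class FileOverHttp {
    public static void main(String[] args) throws Exception {
        Path file = Paths.get("shared.bin");                  // illustrative file to share

        // Server side: serve the file at /download (HttpListener plays this role in .NET).
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/download", exchange -> {
            byte[] data = Files.readAllBytes(file);
            exchange.sendResponseHeaders(200, data.length);
            try (OutputStream body = exchange.getResponseBody()) {
                body.write(data);
            }
        });
        server.start();

        // Client side: download it (WebClient.DownloadFile plays this role in .NET).
        try (InputStream in = new URL("http://localhost:8080/download").openStream()) {
            Files.copy(in, Paths.get("copy.bin"), StandardCopyOption.REPLACE_EXISTING);
        }
        server.stop(0);
    }
}
```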
I had a discussion with my manager. He said TCP is faster than HTTP because TCP works on a lower layer than HTTP.
Then I remembered the OSI model that I learnt at university, so I think what he meant is that because HTTP works on the application layer but TCP works on the transport layer (which is a couple of layers below), TCP is faster...
So my questions are:
Do lower layers work faster than upper layers because fewer layers need to be traversed when transferring data between two computers?
If so, does that mean that when we use TCP (e.g. with WCF), the communication starts at the transport layer => goes down to the physical layer => across to the other computer's physical layer => back up to its transport layer? But I thought the data still needs to be understood by the application, so it still has to go up to the application layer?
Thanks in advance.
There is always a layer above TCP. The question is really about how much overhead the stuff above TCP adds. HTTP is relatively chunky because each transmission requires a bunch of header cruft in both the request and the response. It also tends to be used in a stateless mode, whereby each request/response uses a separate TCP session. Keep-alives can ameliorate the session-per-request, but not the headers.
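To get a feel for that header overhead, this small sketch builds a fairly minimal HTTP/1.1 request/response pair by hand around a 4-byte payload and compares it with a bare length-prefixed frame over raw TCP (the header set shown is illustrative; real HTTP traffic usually carries more headers, not fewer):

```java
import java.nio.charset.StandardCharsets;

public class OverheadDemo {
    public static void main(String[] args) {
        byte[] payload = "ping".getBytes(StandardCharsets.UTF_8);

        // A fairly minimal HTTP/1.1 request and response around a 4-byte payload.
        String request =
                "POST /api/ping HTTP/1.1\r\n" +
                "Host: example.org\r\n" +
                "Content-Type: text/plain\r\n" +
                "Content-Length: " + payload.length + "\r\n" +
                "Connection: keep-alive\r\n\r\n" + "ping";
        String response =
                "HTTP/1.1 200 OK\r\n" +
                "Content-Type: text/plain\r\n" +
                "Content-Length: 4\r\n\r\n" + "pong";

        // A bare custom framing over raw TCP: 4-byte length prefix + payload.
        int framed = 4 + payload.length;

        System.out.println("HTTP request bytes:  " + request.getBytes(StandardCharsets.UTF_8).length);
        System.out.println("HTTP response bytes: " + response.getBytes(StandardCharsets.UTF_8).length);
        System.out.println("Raw framed bytes:    " + framed + " each way");
    }
}
```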
I noted the WCF tag, so I guess you're comparing NetTcp to, for example, BasicHttp. As @Marcelo Cantos pointed out, both run on top of the TCP protocol.
The BasicHttpBinding uses HTTP for transport: a message is first encapsulated in XML (which is pretty verbose and data-hungry) and then sent over HTTP, spending lots of additional bytes on headers.
NetTcp, on the other hand, uses a (proprietary?) binary protocol, where the message encoding and headers are specifically designed to reduce bandwidth usage.
In a common scenario you won't see any difference, but when dealing with lots of requests or larger amounts of data (especially binary data, which has to be encoded to fit into XML and therefore grows in size), you may gain benefits by using NetTcp.
You are correct: TCP and HTTP are protocols that operate on different layers.
In general: in order for applications to utilize networking, they need to operate at the application layer. The speed of any given protocol depends on the overhead it demands. HTTP typically operates over TCP, so it requires all of the overhead of TCP, all of the overhead of the layers under TCP, and all of the overhead that HTTP requires itself (it has some rather large headers).
You're essentially comparing apples to oranges in comparing the speeds of TCP and HTTP. It makes more sense to compare TCP vs UDP vs other transport layer protocols -- and HTTP vs FTP vs other application layer protocols.
I typically use hardwired serial port connections between embedded devices for custom command/response/status protocols.
In this application I plan to use the Microchip TCP/IP stack and Wi-Fi module with no OS to exchange short (<= 100 byte) commands and responses.
The stack is up and running on a Microchip Ethernet development kit and I am able to ping it from my desktop (not using the Wi-Fi module just yet).
I suppose I could hack into ping (Microchip provides the C source for the stack) and add the messages I need to it, but I am looking for the correct/simplest/best method.
Correct / simplest / best aren't necessarily the same thing. But if I were you, I would consider using UDP instead of TCP.
UDP is a datagram protocol; TCP is stream-oriented and has a lot more overhead (and benefits that come with the overhead). But UDP more closely matches the current serial port byte-oriented (packet-oriented) approach you have today.
Chances are you have some higher-level protocol that receives/buffers/checksums/delimits/parses the data stream you receive from the UART. If you use UDP, you can mimic this nicely with a lean, lightweight UDP implementation. With UDP you just shoot out the bytes (packets) & cross your fingers that they got to the other end (a lot like serial).
TCP is heavier-weight connection-based protocol with built-in resending, acknowledgments, in-order delivery, timers, back-off algorithms, etc. On most embedded systems I've worked with using TCP (several different stacks), UDP is lighter-weight & outperforms TCP in terms of code size & throughput.
Also don't forget that TCP is used as the backbone of the internet; some packets pass through a dozen or more hops (routers/gateways) en route to the final destination. Lots of places for packets to get dropped, so TCP handles a lot of messy details transparently. I'd guess in your system/situation, we're talking about a LAN (everyone on the same wire) and transmission will be pretty reliable... thus the TCP overhead isn't really necessary.
There are times when the benefits of TCP justify the overhead, but from what you've written I think you should consider a basic UDP datagram setup. Just google "simple udp example" and you'll see the basic structure. For example, a simple UDP client/server can be written in just 43 lines (server) and 30 lines (client).
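For a feel of the shape of such an exchange, here is a sketch of a desktop-side peer in Java (address, port and command bytes are made up; on the PIC, the Microchip stack's UDP API would play the role of the socket calls):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetSocketAddress;

public class CommandClient {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setSoTimeout(1000);                           // give up on a lost reply after 1 s

            byte[] command = "STATUS?".getBytes();               // <= 100-byte command, as in the question
            InetSocketAddress device = new InetSocketAddress("192.168.1.50", 5000);  // illustrative device
            socket.send(new DatagramPacket(command, command.length, device));

            byte[] reply = new byte[100];
            DatagramPacket in = new DatagramPacket(reply, reply.length);
            socket.receive(in);                                  // blocks until the device answers or times out
            System.out.println(new String(in.getData(), 0, in.getLength()));
        }
    }
}
```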
When you have a TCP/IP stack, it should provide a send() function to send some data messages.
Some small devices come with only a simple UDP/IP implementation, since this is a lot simpler. If you don't need the sequence control and reliability of TCP, you could consider using UDP to send short messages. This is much better than hacking the ICMP messages.
But if you need the comfort and reliability of the TCP stream protocol, don't re-invent it on top of UDP. Usually you can't do it better, more efficiently, or with less effort.