I had a discussion with my manager. He said TCP is faster than HTTP because TCP works on a lower layer than HTTP.
Then I remembered the OSI model I learned at university, so I think what he meant is that HTTP works at the application layer while TCP works at the transport layer (two layers below), and that is why it is faster...
So my questions are:
Do lower layers work faster than upper layers because fewer layers need to be traversed when transferring data between two computers?
If so, does that mean that when we use TCP (e.g. with WCF), the communication starts at the transport layer => goes down to the physical layer => crosses to the other computer's physical layer => and comes back up to the transport layer? But I thought the data still needs to be understood by the application, so doesn't it still have to go up to the application layer?
Thanks in advance.
There is always a layer above TCP. The question is really about how much overhead the stuff above TCP adds. HTTP is relatively chunky because each transmission requires a bunch of header cruft in both the request and the response. It also tends to be used in a stateless mode, whereby each request/response uses a separate TCP session. Keep-alives can ameliorate the session-per-request, but not the headers.
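To make the header overhead concrete, here is a rough illustration (the host, path, and header set are made-up examples): even a minimal GET spends well over a hundred bytes of protocol text before any application data moves, and the response adds its own status line and headers on the way back.

    # Rough illustration of per-request HTTP overhead; the host, path, and
    # headers below are arbitrary examples, not a real endpoint.
    request = (
        b"GET /api/items/42 HTTP/1.1\r\n"
        b"Host: example.com\r\n"
        b"User-Agent: demo-client/1.0\r\n"
        b"Accept: application/json\r\n"
        b"Connection: keep-alive\r\n"
        b"\r\n"
    )
    print(len(request), "bytes of request text before any payload")
    # A raw message over a bare TCP socket carrying the same query could be a
    # handful of bytes, e.g. a 2-byte opcode plus a 4-byte item id.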
I noted the WCF tag, so I guess you're comparing NetTcp to, for example, BasicHttp. As @Marcelo Cantos pointed out, both ride on the TCP protocol.
With the BasicHttpBinding, which uses HTTP for transport, a message is first encapsulated in XML (which is pretty verbose and data-hungry) and then sent over HTTP, spending a lot of extra bytes on headers.
By contrast, NetTcp uses a (proprietary?) protocol in which the message encoding and headers are specifically designed to reduce bandwidth usage.
In a common scenario you won't see any difference, but when dealing with lots of requests or larger amounts of data (especially binary data, which has to be encoded to fit into XML, increasing its size), you may see real benefits from NetTcp.
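As a rough illustration of the binary-data point: binary content has to be turned into text (typically Base64) to sit inside an XML body, which inflates it by about a third before any envelope or HTTP headers are even counted.

    import base64

    # Illustration only: encoding binary data as text (as an XML message body
    # requires) inflates it by roughly 33% even before any envelope/headers.
    binary = bytes(range(256)) * 4            # 1024 bytes of arbitrary binary data
    as_text = base64.b64encode(binary)        # what a text-based encoding has to do
    print(len(binary), "->", len(as_text))    # 1024 -> 1368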
You are correct: TCP and HTTP are protocols that operate on different layers.
In general: in order for applications to use networking, they need to operate at the application layer. How fast any given protocol goes depends on the overhead it demands. HTTP typically operates over TCP, so it requires all of the overhead of TCP, all of the overhead of the layers under TCP, plus the overhead that HTTP adds itself (it has some rather large headers).
You're essentially comparing apples to oranges in comparing the speeds of TCP and HTTP. It makes more sense to compare TCP vs UDP vs other transport layer protocols -- and HTTP vs FTP vs other application layer protocols.
Related
The closest I came across is this question on SO, but that only covers the basics.
My question is: when Media Source Extensions (MSE) are used and the media is fetched from a remote endpoint, for example through AJAX, the fetch API, or even a WebSocket, the media is sent over TCP.
That will handle packet loss and sequencing, so a protocol like RTP with RTCP is not used. Is that correct?
But this will result in delay, so it cannot truly be used for real-time communication. Yes?
There is no security/encryption requirement for MSE like in WebRTC (DTLS/SRTP). Yes?
One cannot, for example, mix a remote audio source from MSE with an audio MediaStreamTrack from an RTCPeerConnection, as they do not share any common parameter like a CNAME (RTCP) and are not part of the same MediaStream. In other words, the worlds of MSE and WebRTC cannot mix unless synchronization is not important. Correct?
That will handle packet loss and sequencing, so a protocol like RTP with RTCP is not used. Is that correct?
AJAX and Fetch are just JavaScript APIs for making HTTP requests. WebSocket is just an API and a protocol bootstrapped from an initial HTTP request. HTTP uses TCP. TCP takes care of ensuring that packets arrive, and arrive in order. So, yes, you won't need to worry about packet loss and such, but not because of MSE.
But this will result in delay, so it cannot truly be used for real-time communication. Yes?
That depends entirely on your goals. It's a myth that TCP isn't fast, or that TCP adds latency to every packet. What is true is that the initial 3-way handshake costs a round trip or so before data flows. It's also true that if a packet does get dropped, the application sees latency spike sharply until the lost packet is retransmitted and arrives.
If your goals are something like a telephony application, where the loss of a packet or two is meaningless overall, then UDP is more appropriate. (In voice communications, we talk slowly enough that if a few milliseconds of sound go missing, we can still decipher what was being said. Our spoken language is robust enough that if entire words get garbled or go silent, we can figure out the gist from context.) It's also important that immediate continuity be kept for voice communications. The tradeoff is that being real-time matters more than accuracy at any particular instant/packet.
However, if you're doing something like a one-way stream, you might choose a protocol that runs over TCP. In this case it may be important to be as real-time as possible, but more important that the audio/video doesn't glitch out. Consider the Super Bowl, or some other large sporting event. It's a live event and important that it stays real-time. However, if the viewer's time reference is only 3-5 seconds behind live, it's still "live" enough for them. The viewer would be far angrier if the video glitched out and they missed something happening in the game than if they were just a few seconds behind. Since it's one-way streaming and there is no communication feedback loop, trading extreme low latency for reliability and quality makes sense.
There is no security/encryption requirement for MSE like in WebRTC (DTLS/SRTP). Yes?
MSE doesn't know or care how you get your data.
One cannot, for example, mix a remote audio source from MSE with an audio MediaStreamTrack from an RTCPeerConnection, as they do not share any common parameter like a CNAME (RTCP) and are not part of the same MediaStream. In other words, the worlds of MSE and WebRTC cannot mix unless synchronization is not important. Correct?
Mix, where? Synchronization, where? No matter what you do, if you have streams coming from different places... or even different devices without sync/gen lock, they're out of sync. However, if you can define a point of reference where you consider things "synchronized", then it's all good. You could, for example, have independent streams going into a server and the server uses its current timestamps to set everything up and distribute together via WebRTC.
How you do this, or what you do, depends on the specifics of your application.
UDP has one good feature: it is connectionless. But it has several bad features: packets can be lost, they can arrive multiple times, and there is no sequencing, so packet 2 can arrive before packet 1. How can I keep the good and remove the bad? Are there any good implementations that provide a reliable transport protocol on top of UDP, so that we stay connectionless but without the problems mentioned? One example of what can be done with this approach is mosh.
What you describe as bad isn't really bad, depending on the context.
For example, UDP is used a lot in real-time streaming, where delivery confirmation and resending are useless.
That being said, there are a few implementations that you might want to look at (a minimal sketch of the underlying idea follows the list):
ENet (http://enet.bespin.org/)
RUDP (https://en.wikipedia.org/wiki/Reliable_User_Datagram_Protocol)
UDT (https://en.wikipedia.org/wiki/UDP-based_Data_Transfer_Protocol)
Since I work in an embedded context:
CoAP (https://en.wikipedia.org/wiki/Constrained_Application_Protocol) also implements a lot of these features, so it's worth a look.
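To give a feel for the underlying idea these libraries build on (with far more care: sliding windows, congestion control, connection management), here is a minimal stop-and-wait sketch over plain UDP. The one-byte sequence number, timeout, and retry count are arbitrary choices for illustration.

    import socket

    # Minimal stop-and-wait sketch: one sequence byte per datagram, resend on
    # timeout, ACK and de-duplicate on the receiving side. Illustration only;
    # real libraries like ENet or UDT do much more.

    def reliable_send(sock, addr, payload, seq, timeout=0.5, retries=5):
        packet = bytes([seq]) + payload
        sock.settimeout(timeout)
        for _ in range(retries):
            sock.sendto(packet, addr)
            try:
                ack, _ = sock.recvfrom(16)
                if ack and ack[0] == seq:        # matching ACK: delivered
                    return True
            except socket.timeout:
                continue                         # packet or ACK lost: resend
        return False

    def reliable_recv(sock, expected_seq):
        while True:
            packet, addr = sock.recvfrom(65535)
            seq, payload = packet[0], packet[1:]
            sock.sendto(bytes([seq]), addr)      # ACK whatever arrives
            if seq == expected_seq:              # ignore duplicates and stale packets
                return payload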
What is your reason for not choosing TCP?
As far as I know, UDP does not use the path MTU to avoid fragmentation, whereas TCP does. I am trying to come up with a reason for this particular design.
TCP needs to avoid fragmentation because it has to retransmit the whole datagram even if just one fragment is dropped. UDP, on the other hand, does not need to retransmit datagrams; it is up to the application layer to ensure integrity.
In conclusion, fragmentation slows down TCP but not UDP.
Here is the problem: for communication that needs integrity, whether you employ TCP (which naturally guarantees integrity) or develop an application-layer retransmit protocol on top of UDP, the whole datagram has to be resent if it is not ACKed. So fragmentation will slow down an application-layer retransmit protocol on top of UDP just as much as it slows down TCP.
What's wrong with my reasoning?
UDP is a datagram protocol where each packet is a single entity, independent of the other packets (UDP does not detect duplication, reordering, etc.). TCP, by contrast, is a stream protocol: the whole transfer consists of a single unstructured octet stream, similar to a large file. To make the transfer of this stream more efficient, it makes sense to detect the MTU of the connection and try to send mostly packets that max out this MTU, thus reducing the per-packet overhead of the transfer. To reduce the overhead further, TCP merges multiple consecutive writes into as few (MTU-sized) packets as possible.
UDP, on the other hand, cannot avoid fragmentation by itself because it transmits each datagram as-is, that is, the datagram boundary is the packet boundary. Any optimization to reduce overhead has to be done by the application itself.
Thus TCP is best suited for applications that need its features, such as guaranteed, ordered delivery and efficient use of bandwidth. Unfortunately, these features come with drawbacks such as comparatively slow connection setup, higher latency in the case of packet loss, etc. But there are applications that don't need the good parts and have to avoid the bad parts. For example, real-time audio and video can deal with packet loss but need low latency; it does not matter whether every byte arrives, but what does arrive has to arrive fast. In these cases the simpler UDP protocol is better suited.
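As a small sketch of the stream-versus-datagram distinction described above (the addresses and ports are placeholders; you would need your own listeners running there to observe anything):

    import socket

    # TCP is a byte stream: consecutive writes may be coalesced into one
    # MTU-sized segment, and the receiver just sees bytes.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(("127.0.0.1", 9000))           # placeholder address/port
    tcp.sendall(b"hello")
    tcp.sendall(b"world")                      # receiver may get b"helloworld" in one recv()

    # UDP preserves message boundaries: one sendto() is one datagram on the
    # wire, fragmented by IP (not by UDP) if it exceeds the path MTU.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"hello", ("127.0.0.1", 9001))  # placeholder address/port
    udp.sendto(b"world", ("127.0.0.1", 9001))  # always two separate datagrams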
Because there's nothing useful it can do with the MTU. It's a datagram protocol.
Because there is no path. It's a connectionless protocol.
I have made a peer-to-peer program in TCP and then migrated it to UDP. I would like to change it so you can send files; however, UDP does not support this and TCP is slower than UDP. Since these are the most common transport protocols, I was curious to learn about any others that I do not know about.
So what other protocols are there that would work with VB.NET?
I would probably use HTTP for this.
The things you are thinking about inventing have already been invented there.
This could be a good starting point for your server side:
http://msdn.microsoft.com/en-us/library/system.net.httplistener(VS.80).aspx
And the client:
http://msdn.microsoft.com/en-us/library/system.net.webclient.aspx
I typically use hardwired serial port connections between embedded devices for custom command/response/status protocols.
In this application I plan to use the Microchip TCP/IP stack and Wi-Fi module with no OS to exchange short (<= 100 byte) commands and responses.
The stack is up and running on a Microchip Ethernet development kit and I am able to ping it from my desktop (not using the Wi-Fi module just yet).
I suppose I could hack into ping (Microchip provides the C source for the stack) and add the messages I need to it, but I am looking for the correct/simplest/best method.
Correct / simplest / best aren't necessarily the same thing. But if I were you, I would consider using UDP instead of TCP.
UDP is a datagram protocol; TCP is stream-oriented and has a lot more overhead (and benefits that come with that overhead). But UDP more closely matches the byte-oriented (packet-oriented) serial-port approach you have today.
Chances are you have some higher-level protocol that receives/buffers/checksums/delimits/parses the data stream you receive from the UART. If you use UDP, you can mimic this nicely with a lean, lightweight UDP implementation. With UDP you just shoot out the bytes (packets) & cross your fingers that they got to the other end (a lot like serial).
TCP is heavier-weight connection-based protocol with built-in resending, acknowledgments, in-order delivery, timers, back-off algorithms, etc. On most embedded systems I've worked with using TCP (several different stacks), UDP is lighter-weight & outperforms TCP in terms of code size & throughput.
Also don't forget that TCP is used as the backbone of the internet; some packets pass through a dozen or more hops (routers/gateways) en route to the final destination. Lots of places for packets to get dropped, so TCP handles a lot of messy details transparently. I'd guess in your system/situation, we're talking about a LAN (everyone on the same wire) and transmission will be pretty reliable... thus the TCP overhead isn't really necessary.
There are times when the benefits of TCP justify the overhead, but from what you've written I think you should consider a basic UDP datagram setup. Just google "simple udp example" and you'll see the basic structure. For example, here is a simple UDP client/server example using just 43 lines (server) and 30 lines (client).
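Separately from the linked example, here is a very small sketch, in Python rather than the C of the Microchip stack, of the kind of short command/response exchange described in the question; the port number, command bytes, and timeout are made up for illustration.

    import socket

    # Device side: wait for a short command, answer with a short status.
    def device_loop(port=5005):                      # port chosen arbitrarily
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", port))
        while True:
            cmd, addr = sock.recvfrom(100)           # commands are <= 100 bytes
            if cmd == b"STATUS?":
                sock.sendto(b"OK temp=23", addr)

    # Host side: send a command and wait briefly for the reply.
    def query(host, port=5005, timeout=0.5):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        sock.sendto(b"STATUS?", (host, port))
        try:
            reply, _ = sock.recvfrom(100)
            return reply
        except socket.timeout:
            return None                              # like serial: no delivery guarantee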
When you have a TCP/IP stack, it should provide a send() function to send some data messages.
Some small devices come with only a simple UDP/IP implementation, since that is a lot simpler. If you don't need the sequence control and reliability of TCP, you could consider using UDP to send short messages. This is much better than hacking the ICMP (ping) messages.
But if you need the comfort and reliability of the TCP stream protocol, don't reinvent it on top of UDP. You usually can't do it better, more efficiently, or with less effort.