I'm writing both client and server code using WCF, where I need to know the "perceived" bandwidth of traffic between the client and server. I could use ping statistics to gather this information separately, but I wonder if there is a way to configure the channel stack in WCF so that the same statistics can be gathered simultaneously while performing my web service invocations. This would be particularly useful in cases where ICMP is disabled (e.g. ping won't work).
In short, while making my regular business-related web service calls (REST calls to be precise), is there a way to collect connection speed data implicitly?
Certainly I could time the web-service round trip and compare it against the amount of data exchanged to get an idea of throughput - but I won't know how much of that perceived bandwidth was network-related and how much was simply server-processing latency. I could perhaps solve that by having the server send back a time delta representing its processing time, so that the client can compute the actual network time. If a more sophisticated approach is not available, that might be my answer...
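That last idea can be sketched as follows. This is a minimal illustration, not WCF code: `fake_call` is a made-up stand-in for a real REST invocation that returns the response size plus the server-reported processing delta, and everything that remains after subtracting the delta is attributed to the network:

```python
import time

def measure_throughput(call, payload_bytes):
    """Time one round trip and subtract the server-reported processing delta.

    `call` is a stand-in for a real REST invocation; it must return
    (response_bytes, server_delta_seconds).
    """
    start = time.perf_counter()
    response_bytes, server_delta = call()
    elapsed = time.perf_counter() - start
    # Whatever is left after removing server processing is network time.
    network_time = max(elapsed - server_delta, 1e-9)
    # Bits moved in both directions over the network-only portion.
    throughput_bps = 8 * (payload_bytes + response_bytes) / network_time
    return network_time, throughput_bps

# Simulated call: ~50 ms total, of which 30 ms is "server processing".
def fake_call():
    time.sleep(0.05)
    return 10_000, 0.03   # 10 kB response, 30 ms server-reported delta

net_t, bps = measure_throughput(fake_call, payload_bytes=2_000)
print(f"network time ~{net_t * 1000:.0f} ms, throughput ~{bps / 1e6:.2f} Mbps")
```

As the answer below notes, a single small round trip gives a very noisy estimate; averaging over many calls (or larger payloads) is needed before the number means anything.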
ICMP was not created to measure connection speed; it was designed to report whether a valid path exists between two hosts.
My best guess is that the amount of data sent in those REST calls or ICMP packets is not enough to calculate a meaningful connection speed / bandwidth.
If you calculate from such small samples, you will get estimates that swing wildly between very high and very low; the notoriously inaccurate file-copy progress dialog in Windows XP is a good example of this. You need a constant and substantial amount of data in flight in order to calculate valid throughput statistics.
Related
I use winsock2 with C++ to connect thousands of clients to my server using the TCP protocol.
The clients rarely send packets to the server: there is about one minute between two packets from a given client.
Right now I iterate over each client with non-blocking sockets to check if the client has sent anything to the server.
But I think a much better design of the server would be to wait until it receives a packet from the thousands of clients and then ask for the client's socket. That would be better because the server wouldn't have to iterate over each client, where 99.99% of the time nothing happens. And with a million clients, a complete iteration might take seconds....
Is there a way to receive data from each client without iterating over each client's socket?
What you want is to use IO completion ports, I think. See https://learn.microsoft.com/en-us/windows/win32/fileio/i-o-completion-ports. Microsoft even has an example at GitHub for this: https://github.com/microsoft/Windows-classic-samples/blob/main/Samples/Win7Samples/netds/winsock/iocp/server/IocpServer.Cpp
BTW, do not expect to serve a million connections on a single machine. Generally there are limits: available memory (both user space and kernel space), handle limits, etc. If you are careful, you can probably serve tens of thousands of connections per process.
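The question is about C++/winsock, where the linked IOCP sample is the right tool; just to illustrate the wait-instead-of-poll idea in a runnable form, here is a minimal sketch using Python's portable `selectors` module, with a local socket pair standing in for one of the thousands of clients:

```python
import selectors
import socket

# Event-driven alternative to polling every client socket: register all
# sockets with a selector and block until at least one is readable.
# (On Windows at this scale you would use IOCP from C++, as linked above;
# this just demonstrates the same idea portably.)
sel = selectors.DefaultSelector()

# A connected socket pair stands in for one of the thousands of clients.
server_side, client_side = socket.socketpair()
sel.register(server_side, selectors.EVENT_READ)

client_side.sendall(b"ping")   # the client rarely sends anything

# select() returns only the sockets that actually have data ready --
# no iteration over the idle 99.99% of clients.
for key, _ in sel.select(timeout=1):
    data = key.fileobj.recv(4096)
    print("received:", data)

sel.unregister(server_side)
server_side.close()
client_side.close()
```

The key property is that the cost of one `select()` wakeup is proportional to the number of ready sockets, not the number of registered ones (and IOCP/epoll/kqueue backends make registration itself cheap too).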
Currently I am working my way into the topic of load and performance testing. In our planning, however, the customer now wants us to name indicators for the load and performance test. Here I am personally out of my depth. What exactly are the performance indicators within a load and performance test?
You can separate the performance indicators into client-side and server-side indicators:
1. Client Side Indicators : JMeter Dashboard
Average Response Time
Minimum Response Time
Maximum Response Time
90th Percentile
95th Percentile
99th Percentile
Throughput
Network Bytes Sent
Network Bytes Received
Error% and different types of Error received
Response Time Over Time
Active Threads Over Time
Latencies Over Time
Connect Time Over Time
Hits Per Second
Codes Per Second
Transactions Per Second
Total Transactions Per Second etc.
You can also obtain Composite Graphs for better understanding.
2. Server Side Indicators :
CPU Utilization
Memory Utilization
Disk Details
Filesystem Details
Network Traffic Details
Network Socket
Network Netstat
Network TCP
Network UDP
Network ICMP etc.
3. Component Level Monitoring :
Language-specific metrics (e.g. Java, .NET, Python)
Database Server
Web Server
Application Server
Broker Statistics
Load Balancers etc.
Just to name a few.
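For reference, here is how the percentile indicators in that client-side list are derived from raw response times. The sample data is made up and the nearest-rank method is used for illustration; JMeter's dashboard does this aggregation for you:

```python
import math

# Made-up response-time samples in milliseconds.
response_times_ms = sorted([120, 95, 110, 300, 150, 105, 98, 250, 130, 115])

def percentile(sorted_samples, p):
    """Nearest-rank percentile: the smallest sample with at least
    p% of all samples at or below it."""
    rank = math.ceil(p / 100 * len(sorted_samples))
    return sorted_samples[rank - 1]

print("avg:", sum(response_times_ms) / len(response_times_ms))
print("min:", response_times_ms[0])
print("max:", response_times_ms[-1])
print("p90:", percentile(response_times_ms, 90))
print("p95:", percentile(response_times_ms, 95))
print("p99:", percentile(response_times_ms, 99))
```

The spread between the average and the high percentiles is exactly why reports include both: a healthy average can hide a painful tail.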
Say I have built a WebRTC video chat website. After the handshake (ICE candidate exchange), some connections will go directly p2p, some will use a STUN server, and some will fall back to the "last resort", a TURN server, to establish the connection. A TURN-based connection is very expensive compared to direct and STUN-assisted connections (which are effectively free) because all traffic must actually go through the TURN server.
How can we estimate the percentage of connections of random users that will need to go via TURN? Imagine we know very little about the expected audience, except that the majority is in the US. I believe it must be difficult to estimate, but my current estimate is somewhere between 1% and 99%, which is just too wide; can this at least be narrowed down?
https://medium.com/the-making-of-appear-in/what-kind-of-turn-server-is-being-used-d67dbfc2ff5d has some numbers from appear.in which show around 20%. That is a global statistic; the numbers for the US might differ.
I have a bunch of WCF REST services hosted on Azure that access a SQL Azure database. I see that ServicePointManager.UseNagleAlgorithm is set to true. I understand that setting it to false would speed up calls (inserts of records < 1460 bytes) to table storage - the following link talks about it.
My Question - Would disabling the Nagle Algorithm also speed up my calls to SQL Azure?
Nagle's algorithm is all about buffering TCP-level data into a smaller number of packets, and is not tied to record size. You could be writing rows to Table Storage of, say, 1300 bytes of data, but once you include TCP header info, content serialization, etc., the data transmitted could be larger than the 1460-byte threshold.
In any case: the net result is that you could be seeing write delays of up to 500 ms when the algorithm is enabled, as data is buffered, resulting in fewer TCP packets over the wire.
It's possible that disabling Nagle's algorithm would help with your access to SQL Azure, but you'd probably need to do some benchmarking to see whether your throughput is actually affected, based on the type of reads/writes you're doing. It's possible that the calls to SQL Azure, with the requisite SQL command text, result in large enough packets that disabling Nagle wouldn't make a difference.
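At the socket level, disabling Nagle means setting the TCP_NODELAY option (this is what `ServicePointManager.UseNagleAlgorithm = false` ultimately does in .NET). A minimal illustration in Python:

```python
import socket

# Disabling Nagle's algorithm on a socket: with TCP_NODELAY set, small
# writes are sent immediately instead of being buffered into larger
# segments while waiting for an ACK.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Verify the option took effect (non-zero means Nagle is disabled).
nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print("TCP_NODELAY:", bool(nodelay))
sock.close()
```

The trade-off is exactly the one described above: lower latency for small writes versus fewer, better-filled packets on the wire.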
I have a lot of clients (around 4000).
Each client pings my server every 2 seconds.
Can these ping requests put a load on the server and slow it down?
How can I monitor this load?
Right now the server responds slowly, but the processor is almost idle and there is plenty of free memory.
I'm running Apache on Ubuntu.
Assuming you mean a UDP/ICMP ping just to see if the host is alive, 4000 hosts probably isn't much load, and it's fairly easy to calculate. CPU- and memory-wise, ping is handled by your kernel and should be optimized to not take many resources. So you need to look at network resources. The most critical point is if you have a half-duplex link: because all of your hosts are chatty, you'll cause a lot of collisions and retransmissions (and dropped pings). If the links are all full duplex, let's calculate the actual amount of bandwidth required at the server.
4000 clients pinging every 2 seconds
Each ping is 74 bytes on the wire (32 bytes data + 8 bytes ICMP header + 20 bytes IP header + 14 bytes Ethernet header). * You might have some additional overhead if you use VLAN tagging or UDP-based pings
If we can assume the pings are randomly distributed, we would have 2000 pings per second × 74 bytes = 148,000 bytes per second
Multiply by 8 to get bits per second: 1,184,000 bps, or about 1.2 Mbps.
On a 100 Mbps LAN, this would be about 1.2% utilization just for the pings.
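The same back-of-the-envelope calculation, spelled out as code (summing the header sizes listed in the breakdown above):

```python
# Bandwidth estimate for periodic pings, using the figures from the answer.
clients = 4000
interval_s = 2
bytes_per_ping = 32 + 8 + 20 + 14   # data + ICMP + IP + Ethernet headers

pings_per_second = clients / interval_s           # 2000 pings/s on average
bytes_per_second = pings_per_second * bytes_per_ping
bits_per_second = bytes_per_second * 8

lan_capacity_bps = 100e6                          # 100 Mbps LAN
utilization = bits_per_second / lan_capacity_bps

print(f"{bits_per_second / 1e6:.2f} Mbps, "
      f"{utilization:.1%} of a 100 Mbps LAN")
```

Swapping in the capacity of a slower uplink (e.g. a 1.5 Mbps T1) for `lan_capacity_bps` shows immediately why the same traffic that is negligible on a LAN would saturate a WAN link.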
If this is a LAN environment, I'd say this is basically no load at all; if it's going across a T1, then it's an immense amount of load. So you should run the same calculation on any other network links that may be a bottleneck.
Lastly, if you're not using ICMP pings to check the host but have an application-level ping, you will have all the overhead of whatever protocol you are using, the ping will need to go all the way up the protocol stack, and your application needs to respond. Again, this could be a very minimal load or it could be immense, depending on the implementation details and the network speed. If the host is idle, I doubt this is a problem for you.
Yes, they can. A ping request does not put much CPU load on, but it certainly takes up bandwidth and a nominal amount of CPU.
If you want to monitor this, you might use either tcpdump or wireshark, or perhaps set up a firewall rule and monitor the number of packets it matches.
The other problem apart from bandwidth is the CPU. If a ping is directed up to the CPU for processing, thousands of these can cause a load on any CPU. It's worth monitoring - but as you said yours is almost idle so it's probably going to be able to cope. Worth keeping in mind though.
Depending on the clients, ping packets can be different sizes: their payload could be just "aaaaaaaaa", but some may be "thequickbrownfoxjumpedoverthelazydog", which obviously adds further bandwidth requirements again.