What is the minimum amount of time you need to wait before calling out to an NTP server again? (wait at least 64/128/10000/etc. seconds) - hardware

What is the minimum amount of time you need to wait before requesting time again from an NTP server? I've seen a wide variety of answers, but I'm looking to get a good ballpark.
Is it OK to query an NTP server every 5 minutes? I've heard that if you query too often they'll ban your IP address, and since I'm using the NIST NTP server I'm worried about getting banned from a US government server.
I'm looking to not have to deal with an RTC in my hardware prototype.

It depends on the server; a typical NTP client starts by polling every 64 seconds and backs off to 1024 seconds once the clock has stabilized. Cisco uses iburst by default, but those devices tend to stay up; on hosts that restart frequently, iburst can cause issues or get you blocked.
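For a sense of what that looks like in practice, here is a minimal ntp.conf sketch (the pool hostname is just a stand-in rather than the NIST server, and the values shown are ntpd's usual defaults): minpoll 6 and maxpoll 10 correspond to 2^6 = 64 and 2^10 = 1024 seconds, and iburst only speeds up the initial synchronization, not the steady-state query rate.

    # /etc/ntp.conf (sketch) - poll no more often than every 64 s,
    # backing off to 1024 s once the clock is stable
    server 0.pool.ntp.org iburst minpoll 6 maxpoll 10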

Related

What may be the expected percentage of connections that will fallback to TURN?

Say I have built a WebRTC video chat website. After the handshake (ICE candidate exchange), some connections will go directly p2p, some will use the STUN server, and some will use the "last resort", the TURN server, to establish the connection. A TURN-based connection is very expensive compared to a direct connection or a STUN-assisted one (which are free), because all traffic must actually be relayed through the TURN server.
How can we estimate the percentage of connections from random users that will need to go via TURN? Imagine we know very little about the expected audience, except that the majority is in the US. I believe it must be difficult to figure out, but my current estimate is somewhere between 1% and 99%, which is just too wide; can this at least be narrowed down?
https://medium.com/the-making-of-appear-in/what-kind-of-turn-server-is-being-used-d67dbfc2ff5d has some numbers from appear.in which show around 20%. Those are global statistics; the stats for the US might be different.

Apache KeepAlive on API Server

I have a LAMP server (Quad Core Debian with 4GB RAM, Apache 2.2 and PHP 5.3) with Rackspace which is used as an API Server. I would like to know what is the best KeepAlive option for Apache given our setup.
The API server hosts a single PHP file which responds with plain JSON. This is a fairly hefty file which performs some MySQL reads/writes and quite a few Memcache lookups.
We have about 90 clients that are logged into the system at any one time.
Roughly 1/3rd of clients would be idle.
The active clients (roughly 60) each send a request to the API every 3 seconds.
Clients switch from active to idle and vice versa every 15 or 20 minutes or so.
With KeepAlive On, the server goes nuts and memory peaks at close to 4GB (swap is engaged etc).
With KeepAlive Off, the memory sits at 3GB however I notice that Apache is constantly killing and creating new processes to handle each connection.
So, my three options are:
KeepAlive On and KeepAliveTimeout Default - In this case I guess I will just need to get more RAM.
KeepAlive On and KeepAliveTimeout Low (perhaps 10 seconds?) - If KeepAliveTimeout is set to 10 seconds, will a client maintain a constant connection to that one process by accessing the resource at regular 3-second intervals? When that client becomes idle for longer than 10 seconds, will the process then be killed? If so, I guess option 2 looks like the best one to go for?
KeepAlive Off This is clearly best for RAM, but will it have an impact on the response times due to the work involved in setting up a new process for each request?
Which option is best?
It looks like your PHP script is leaking memory. Before making them long-running processes, you should get to grips with that.
If you don't have a good idea of the memory usage per request, and from request to request, adding memory is not a real solution. It might help for now and break again next week.
I would keep running separate processes until memory management is under control. If you currently have response-time problems, your best bet is to add another server to spread the load.
The very first thing you should be checking is whether the clients are actually using the keepalive functionality at all. I'm not sure what you mean by an 'API server', but if it's some sort of web service then (in my experience) it's rather difficult to implement well-behaved clients using keepalives. (See the %k directive for mod_log_config.)
Also, we really need to know what your objectives and constraints are: performance, capacity, low cost?
Is this running over HTTP or HTTPS - there's a big difference in latency.
I'd have said that a keepalive time of 10 seconds is ridiculously high - not low at all.
Even if you've got 90 clients holding connections open, 4GB seems a rather large amount of memory for them to be using - I've run systems with 150-200 concurrent connections to complex PHP scripts using approximately 0.5GB over resting usage. Your figures of 250MB + 90 x 20MB only give you a footprint of about 2GB (I know it's not that simple - but it's not much more complicated).
For the figures you've given I wouldn't expect any benefit - but a significantly bigger memory footprint - from using anything over 5 seconds for the keepalive. You could probably use a keepalive time of 2 seconds without any significant loss of throughput, but there's no substitute for measuring the effectiveness of various configs and analysing the data to find the optimal one.
Certainly, if you find that your clients are able to take advantage of keepalives and get a measurable benefit from doing so, then you need to find the best way of accommodating that. Using a threaded server might help a little with memory usage, but you'll probably find a lot more benefit in running a reverse proxy in front of the webserver - particularly with SSL.
Besides that you may get significant benefits through normal tuning - code profiling, output compression etc.
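To make that measuring-first approach concrete, here is a hedged httpd.conf sketch: a short keepalive plus a log format that records %k (the number of keepalive requests already served on the connection), so you can see whether clients actually reuse connections before committing to a setting. The values are illustrative, not recommendations for this specific workload.

    # Sketch only - tune against your own measurements
    KeepAlive On
    KeepAliveTimeout 2
    MaxKeepAliveRequests 100

    # %k logs how many keepalive requests were already handled on this
    # connection; zeros everywhere mean clients are not reusing connections
    LogFormat "%h %l %u %t \"%r\" %>s %b keepalive=%k" with_keepalive
    CustomLog logs/access_log with_keepalive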
Instead of managing the KeepAlive settings, which clearly offer no real advantage in your particular situation whichever of the 3 options you pick, you should consider switching Apache to an event- or thread-based MPM, where you could easily use KeepAlive On and set the Timeout value high.
I would go as far as also considering a switch to Apache on Windows. The benefit here is that its MPM is completely thread-based and takes advantage of Windows' preference for threads over processes. You can easily do 512 threads with KeepAlive On and a Timeout of 3-10 seconds on 1-2GB of RAM.
WampDeveloper Pro, XAMPP, WampServer
Otherwise, your only other options are to switch MPM from Prefork to Worker...
http://httpd.apache.org/docs/2.2/mod/worker.html
Or to Event (which also got better with Apache 2.4)...
http://httpd.apache.org/docs/2.2/mod/event.html
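As a rough illustration of that switch, a worker MPM block in httpd.conf might look like the sketch below; the thread and client counts are placeholders that would need to be sized against the measured per-thread memory of your PHP workload rather than taken as-is.

    # Sketch of a worker MPM configuration (Apache 2.2); numbers are illustrative
    <IfModule mpm_worker_module>
        StartServers          2
        MaxClients          150
        MinSpareThreads      25
        MaxSpareThreads      75
        ThreadsPerChild      25
        MaxRequestsPerChild   0
    </IfModule>

    KeepAlive On
    KeepAliveTimeout 10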

Keep time in sync among servers without internet connection

I have 5 servers on a LAN without Internet connection. I need them to keep the clock in sync among them.
I could configure them as NTP peers, and set a high stratum for the local clock of one of them. In this way, the other four would sync with that clock.
What I actually want is for them to agree on a time using all 5 local clocks (i.e. doing some kind of averaging), for reasons of robustness and precision. Is this possible with NTP?
PS: I do not want to use an external clock source.
EDIT: and no scripting outside NTP features, that could only make precision worse :)
If you average 5 drifting clocks, the only thing you get is another drifting clock that's harder to correct. It won't be more precise. NTP uses multiple servers to increase precision because it takes network latency into account. Since all your systems are on a fast local network, you just need one server.
Set up two systems to be NTP servers: one primary and, if you feel the need, one backup. Have all other systems synchronize to them. This will be significantly easier to set up than the clock-averaging solution, and you won't have to develop any crazy scripts.
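A minimal sketch of that layout with ntpd, using placeholder addresses (192.168.1.1 as the primary, 192.168.1.2 as the backup):

    # /etc/ntp.conf on the primary (and similarly on the backup):
    # fall back to the undisciplined local clock driver, at a deliberately
    # high stratum so any real time source would always win
    server 127.127.1.0
    fudge  127.127.1.0 stratum 10

    # /etc/ntp.conf on the other servers:
    server 192.168.1.1 iburst    # primary
    server 192.168.1.2 iburst    # backup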
You might be able to have one of them listen for the times from each computer, perform an average, set the average as its own time, and broadcast that time to all the other computers. It seems a little excessive, though.
You can set up one of them as an NTP server which broadcasts its time on the local network, and the others as slaves that listen on the local network.
edit:
I missed the averaging part. Well, in that case, you can probably write a script on the local server to collect times from all the slaves, compute the average, and update its own time with that value.
You may even want to get rid of NTP in that case and just use the script to update the time on all the servers.
I wish I could give a definitive proposal, but I don't know enough about your environment. No matter what you'll likely be doing some sort of script kung fu.
If it's Unix/Linux, I would set everyone up with SSH authorized keys to poll each other's date +%s command (to get the epoch), average those times with awk or something, and then set the machine's own local date.
Or perhaps it would be more secure (and reliable) to have one authoritative machine check everyone's time in the same manner, average it, and then provision itself and every other host to that average.
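For what it's worth, a sketch of that second variant might look like the following, assuming passwordless SSH and GNU date; the hostnames are placeholders, and note that it ignores the SSH round-trip latency, which is part of why plain NTP will usually give better precision.

    #!/bin/bash
    # Sketch: one authoritative machine averages everyone's epoch time
    # and pushes the result back out. Hostnames are placeholders.
    HOSTS="server2 server3 server4 server5"

    # Collect each remote epoch plus our own, then average with awk.
    times=$( { for h in $HOSTS; do ssh "$h" date +%s; done; date +%s; } )
    avg=$(echo "$times" | awk '{ sum += $1; n++ } END { printf "%d\n", sum / n }')

    # Set our own clock, then provision every other host to the average.
    date -s "@$avg"
    for h in $HOSTS; do ssh "$h" "date -s @$avg"; done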
On Windows you'll probably be looking into VBScript and WMI.
EDIT:
You may run into some weird problems if anyone's clock drifts forward from the average and my guess is about half of them will ;). Future timestamps can be rather strange. It will be up to you to determine how frequently this synchronization will need to occur.

Do ping requests put a load on a server?

I have a lot of clients (around 4000).
Each client pings my server every 2 seconds.
Can these ping requests put a load on the server and slow it down?
How can I monitor this load?
Right now the server responds slowly, but the processor is almost idle and the free memory is OK.
I'm running Apache on Ubuntu.
Assuming you mean a UDP/ICMP ping just to see if the host is alive, 4000 hosts probably isn't much load, and it's fairly easy to calculate. CPU- and memory-wise, ping is handled by your kernel and should be optimized to not take many resources, so you need to look at network resources. The most critical point will be if you have a half-duplex link: because all of your hosts are chatty, you'll cause a lot of collisions and retransmissions (and dropped pings). If the links are all full duplex, let's calculate the actual amount of bandwidth required at the server.
4000 clients / 2 seconds = 2000 pings per second.
Each ping is 74 bytes on the wire (32 bytes data + 8 bytes ICMP header + 20 bytes IP header + 14 bytes Ethernet). You might have some additional overhead if you use VLAN tagging or UDP-based pings.
If we can assume the pings are randomly distributed, we would have 2000 pings per second x 74 bytes = 148,000 bytes per second.
Multiply by 8 to get bits per second: 1,184,000 bps, or about 1.2 Mbps.
On a 100 Mbps LAN, this would be about 1.2% utilization just for the pings.
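The same arithmetic as a quick shell check (all figures as assumed above):

    # 4000 clients / 2 s interval = 2000 pings/s, 74 bytes each, times 8 bits
    echo $(( 4000 / 2 * 74 * 8 ))    # 1184000 bits per second, ~1.2 Mbps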
If this is a LAN environment, I'd say this is basically no load at all; if it's going across a T1 then it's an immense amount of load. So you should run the same calculation for any other network links that may be a bottleneck.
Lastly, if you're not using ICMP pings to check the host but have an application-level ping, you will have all the overhead of whatever protocol you are using, the ping will need to go all the way up the protocol stack, and your application needs to respond. Again, this could be a very minimal load or it could be immense, depending on the implementation details and the network speed. If the host is idle, I doubt this is a problem for you.
Yes, they can. A ping request does not put much CPU load on the server, but it certainly takes up bandwidth and a nominal amount of CPU time.
If you want to monitor this, you might use either tcpdump or wireshark, or perhaps set up a firewall rule and monitor the number of packets it matches.
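As a concrete (if rough) example of the firewall-rule approach on Linux, you could add an iptables rule purely to expose packet counters, or watch the traffic directly; the interface name and chain below are assumptions about your setup:

    # Insert an ACCEPT rule so its packet/byte counters track echo requests
    iptables -I INPUT -p icmp --icmp-type echo-request -j ACCEPT

    # Read the counters back later
    iptables -L INPUT -v -n | grep icmp

    # Or watch the pings live
    tcpdump -n -i eth0 icmp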
The other problem apart from bandwidth is the CPU. If a ping is directed up to the CPU for processing, thousands of these can cause a load on any CPU. It's worth monitoring - but as you said yours is almost idle so it's probably going to be able to cope. Worth keeping in mind though.
Depending on the clients, ping packets can be different sizes - their payload could be just "aaaaaaaaa", but some may be "thequickbrownfoxjumpedoverthelazydog" - which obviously adds further bandwidth requirements.

Is there a limit with the number of SSL connections?

Is there a limit with the number of SSL connections?
We are trying to connect through SSL with 2000 sessions. We have tried it a couple of times but it always dies at 1062nd. Is there a limit?
If you are on Linux, your operating system will have a limit on the number of open files.
ulimit -a will show your various limits.
I imagine yours is set to 1024, and some of the sessions just happened to have closed, allowing the figure of 1062 (this last bit is a guess).
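For reference, a quick way to check and tentatively raise that limit on Linux; the account name and values below are only placeholders:

    # Current per-process open-file limit (1024 is a common default)
    ulimit -n

    # Raise it for the current shell session
    ulimit -n 4096

    # To make it persistent, entries like these in /etc/security/limits.conf
    # ("appuser" is a placeholder account):
    #   appuser  soft  nofile  4096
    #   appuser  hard  nofile  8192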
Yes, everything has a limit. As far as I'm aware, there is no inherent limit in "SSL"; it is, after all, just a protocol.
But there is a limited amount of memory, ports, and CPU on the machine you are connecting to, the machine you are connecting from, and every single one in between.
The actual server you are connected to may have an arbitrary limit set too.
This question doesn't have enough information to answer beyond "YES".
SSL itself doesn't have any limitations, but there are some practical limits you may be running into:
SSL connections require more resources on both ends of the connection, so you may be hitting some built-in server limit.
TCP/IP uses a 16-bit port number to identify connections, only some of which (around 16,000) are used for dynamic client connections. This would limit the number of active connections a single client could make to the same server.
On Linux, each process has a maximum number of file descriptors that it can have open, and each network connection uses one file descriptor. I imagine Windows has a similar limit.
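On the client side, the ephemeral port range mentioned above can be inspected and, if necessary, widened on Linux; the values shown are only illustrative:

    # Show the dynamic (ephemeral) port range used for outgoing connections
    sysctl net.ipv4.ip_local_port_range

    # Widen it if a single client needs more simultaneous connections
    sysctl -w net.ipv4.ip_local_port_range="15000 65000"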