I run Mac OS X Server (Yosemite) with 6 virtual hosts.
Pretty normal WordPress stuff, nothing special.
For 2 days now, when I start the server, the CPU usage immediately goes up to 750% (I have an Intel quad-core) and the pages are very slow; sometimes I even get a "MySQL too many connections" error.
My machine is a Mac mini quad-core 2.3 GHz with 16 GB of RAM.
I had the server running for 3 months without any similar problems.
Do you think it's a DDoS attack?
Thx,
Matthias
I found the reason: it was a brute-force attack on my WordPress site. A bot was hammering the xmlrpc.php file in the root folder of my WordPress installation, guessing usernames and passwords, and every attempt hit MySQL.
I simply added
<?php exit; ?>
at the beginning of xmlrpc.php, and now there is peace on earth again.
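If you'd rather not edit a WordPress core file (the change is lost on the next update), a sketch of an alternative, assuming Apache 2.4, is to deny access to the file at the web-server level:

# in the site's vhost config or .htaccess (the latter needs AllowOverride)
# rejects every request to xmlrpc.php before PHP is even invoked
<Files "xmlrpc.php">
    Require all denied
</Files>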
In my Ubuntu System Monitor I am seeing 17 PHP processes.
They are taking up a large share of my CPU and memory.
Some are labelled "php" and others "php7.1".
Is this normal? If not, what is the suggested solution?
System: Ubuntu 14.04, Apache 2.4.7 with PHP 7.1 as a module.
It depends on how PHP is integrated into your web server environment.
When 20 requests that involve PHP are handled at the same time, PHP is executed 20 times.
The number of PHP processes by itself should not be a problem. The more interesting question is which PHP application needs that much CPU and memory.
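To see which processes (and which scripts) are actually responsible, a quick sketch using standard tools, with the output limit chosen arbitrarily:

# list PHP-related processes sorted by CPU usage, with their full command lines
# (the [p]hp pattern keeps grep itself out of the results)
ps -eo pid,ppid,%cpu,%mem,etime,cmd --sort=-%cpu | grep -E '[p]hp' | head -n 20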
I have a CentOS 7 (64bit) VPS powered by OpenVZ. 1.70 GHz CPU, 1GB RAM, 1TB SSD, 1 Gbps port speed. I'm running Webmin and Apache (2.4.6) as a virtual host. No other software is running on the VPS, and I'm using it as a file server to link directly to MP3 files over HTTP (I have around 50GB of MP3s hosted).
The MP3 files are podcasts, typically around 50–100 MB in size. The problem I'm having is that it can take 5 or 6 seconds of buffering before an MP3 file starts to stream. The domain I'm using is set up with Cloudflare and loads very quickly; download speeds and ping times are also good (around 50 ms), but the delay before an MP3 starts to stream is a bit of an inconvenience.
Is there anything I can do in Apache to speed up the buffering? Or is the buffering just a result of having a low-spec VPS?
All settings in Webmin and Apache are pretty much at their defaults, as I'm more used to working with IIS.
To answer my own question: I solved this by disabling file streaming through the Cloudflare proxy. Streaming files this way can cause performance issues, as outlined on this page:
https://support.cloudflare.com/hc/en-us/articles/200169706-Can-I-use-Cloudflare-with-a-streaming-music-or-video-site-
I also upgraded my VPS to double the amount of RAM. I apologise for this not being an Apache / CentOS issue, but hopefully someone with a similar issue will find this helpful.
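For anyone debugging a similar delay: one way to check whether the time is lost waiting for the first byte (at the proxy or origin) rather than during the transfer itself is curl's timing variables; the URL below is just a placeholder:

# compare time-to-first-byte with total download time for one file
curl -o /dev/null -s -w 'ttfb: %{time_starttransfer}s  total: %{time_total}s\n' https://example.com/podcasts/episode1.mp3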
We are seeing some strange behaviour on our OpenStack-based servers. There is an nginx load balancer in front of two Apache web servers. The setup is automated with Ansible and worked fine on our previous cloud.
After moving to another cloud, all static files are served extremely slowly. Slow means 35 seconds for a 140 kB JS file. And what makes it really strange is that only Windows devices have this problem.
Downloading the static file from a Windows VM running on a Mac is fast (around 100 ms), while downloading it from a Linux VM running on a Windows host is very slow (35 s). So it seems to depend on the physical host system.
We have no idea where to start searching. Every tip is welcome.
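One thing that may be worth checking in a setup like this (an assumption on my part, not a confirmed diagnosis) is the path MTU: overlay networks in OpenStack clouds often reduce the usable MTU, and that tends to show up exactly as large responses stalling for some client stacks while small ones work. A quick test from an affected client, with the hostname as a placeholder and 1472 chosen to fill a standard 1500-byte frame:

# Linux client: send a 1472-byte payload with the don't-fragment flag set
ping -M do -s 1472 static.example.com
# Windows client: the equivalent test
ping -f -l 1472 static.example.com
# if this size fails while smaller sizes work, the instances' MTU (or the
# load balancer's TCP MSS clamping) is a likely place to look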
I'm running some websites on a dedicated Ubuntu web server. If I'm remembering correctly, it has 8 cores and 16 GB of memory, and runs 64-bit Ubuntu. Content and files are delivered quickly to web browsers. Everything seems like a dream... until I run gzip or zip to back up an 8.6 GB website.
When running gzip or zip, Apache stops delivering content. Internal server error messages are returned until the compression process is complete. During the process I can log in via SSH without delays and run the top command. I can see that the zip process is taking about 50% CPU (I'm guessing that's 50% of a single core, not all 8?).
At first I thought this could be a log issue, with Apache logs growing out of control and not wanting to be messed with. The log files are under 5 MB though, and are rotated when they hit 5 MB. Another current thought is that Apache only wants to run on one CPU and lets any other process take the lead. I'm not sure where to look to address that yet.
Any thoughts on how to troubleshoot this issue? Taking all my sites down while backups run is not an option, and I can't seem to reproduce the issue on my local machines (granted, it's different hardware and configuration). My hope is that this question is not too vague. I'm happy to provide additional details as needed.
Thanks for your brains in advance!
I'd suggest running your backup script under the "ionice" command. It will help prevent the backup from starving httpd of I/O.
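A minimal sketch of what that could look like, with the paths and archive name as examples only; I/O class 3 ("idle") only lets the backup touch the disk when nothing else wants it, and nice also lowers its CPU priority:

# run the backup with idle I/O priority and the lowest CPU priority
ionice -c3 nice -n 19 tar -czf /backups/site-$(date +%F).tar.gz /var/www/mysite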
We're experiencing a strange problem with our current Varnish configuration.
4x web servers (IIS 6.5 on Windows Server 2003, each installed on an Intel(R) Xeon(R) CPU E5450 @ 3.00GHz quad-core, 4GB RAM)
3x Varnish servers (varnish-3.0.3 revision 9e6a70f on Ubuntu 12.04.2 LTS - 64 bit/precise, kernel Linux 3.2.0-29-generic, each installed on an Intel(R) Xeon(R) CPU E5450 @ 3.00GHz quad-core, 4GB RAM)
The 3 Varnish servers have a pretty much standard, vanilla config: the only thing we changed was vcl_recv and vcl_fetch in order to handle the session cookies. They are currently configured to use an in-memory cache, but we already tried switching to an HDD cache on a high-performance RAID drive, with exactly the same results.
We had put this setup in place almost two years ago on our old web farm without problems, and everything worked beautifully. Now, using the machines described above and after a clean reinstall, our customers are experiencing a lot of connection problems (pending requests on clients, 404 errors, missing files, etc.) when our websites are under heavy traffic. From the console log we can clearly see that these issues start happening when each Varnish reaches roughly 700 requests per second: it just seems like they can't handle anything more. We can easily reproduce the critical scenario at any time by shutting down one or two Varnish servers and watching how the others react: they always start to stumble whenever the requests-per-second count reaches about 700. Considering what we've experienced in the past, and looking at the Varnish specs, this doesn't seem normal at all.
We're trying to improve our Varnish servers' performance and/or understand where the problem actually lies. To do that, we could really use some kind of benchmark from other companies using Varnish in a similar fashion, to help us understand how far we are from the expected performance (I assume we are well below it).
EDIT (added CFG files):
This is our default.vcl file.
This is the output of the varnishadm param.show console command.
I'll also try to post a small part of our varnishlog file.
Thanks in advance,
To answer the question in the headline: A single Varnish server with the specifications you describe should easily serve 20k+ requests/sec with no other tuning than increasing the number of threads.
You don't give enough information (vcl, varnishlog) to answer your remaining questions.
My guess would be that you somehow end up serialising the backend requests. Check out your hit_for_pass objects and make sure they have a valid TTL set. (120s is fine)
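As a sketch of those two knobs, assuming Varnish 3.x started via /etc/default/varnish and a vcl_fetch close to the stock one (the values are examples, not recommendations):

# /etc/default/varnish -- raise the worker-thread limits
DAEMON_OPTS="-a :80 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -s malloc,2G \
             -p thread_pools=4 \
             -p thread_pool_min=200 \
             -p thread_pool_max=4000"

# default.vcl -- give hit_for_pass objects an explicit TTL, mirroring the
# built-in Varnish 3 vcl_fetch, so uncacheable responses don't serialise
# backend requests
sub vcl_fetch {
    if (beresp.ttl <= 0s || beresp.http.Set-Cookie) {
        set beresp.ttl = 120s;
        return (hit_for_pass);
    }
    return (deliver);
}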