Speed up direct MP3 streaming on Apache? - apache

I have a CentOS 7 (64bit) VPS powered by OpenVZ. 1.70 GHz CPU, 1GB RAM, 1TB SSD, 1 Gbps port speed. I'm running Webmin and Apache (2.4.6) as a virtual host. No other software is running on the VPS, and I'm using it as a file server to link directly to MP3 files over HTTP (I have around 50GB of MP3s hosted).
The MP3 files are podcasts, so typically around 50MB to 100MB in size. The problem I'm having is that it can take 5 or 6 seconds of buffering before an MP3 file starts to stream. The domain I'm using is set up with Cloudflare and loads very quickly, and download speeds and ping times are also good (around 50ms), but the delay before an MP3 starts to stream is a bit of an inconvenience.
Is there anything I can do in Apache to speed up the buffering? Or is the buffering just a result of having a low-spec VPS?
All settings in Webmin and Apache are pretty much default, as I'm more used to working with IIS.

To answer my own question: I solved this by disabling file streaming through the Cloudflare proxy. Streaming files in this way can cause performance issues, as outlined on this page:
https://support.cloudflare.com/hc/en-us/articles/200169706-Can-I-use-Cloudflare-with-a-streaming-music-or-video-site-
I also upgraded my VPS to double the amount of RAM. I apologise for this not being an Apache / CentOS issue, but hopefully someone with a similar issue will find this helpful.
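For anyone else making the same change, a quick way to confirm the MP3 host is no longer going through the Cloudflare proxy is to look at the response headers: proxied responses carry a CF-RAY header and report "server: cloudflare", and both disappear once the record is switched to "DNS only". The URL below is just a placeholder, not one of my actual files:

# Placeholder URL -- check whether the response still shows Cloudflare proxy headers
curl -sI https://example.com/podcasts/episode-001.mp3 | grep -iE 'cf-ray|^server'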

Related

Commit transfer performance for large files to HTTP+SVN server

I have an SVN repository behind an Apache HTTPS server that stores both small and large (>1GB) files. When I commit a large file, the transfer speed is about 10MB/sec (over a 1Gbit network link). When I look at CPU utilization on the server, it is saturated, with about 85% consumed by apache2 and some 15% by the disk driver.
I have already tried disabling Apache logging and SSL, but that didn't improve the transfer speed, which makes me think that mod_dav_svn is using most of the CPU. I have also tried increasing the number of available cores on the server (default = 1 core), but this mysteriously slows down the commits while httpd keeps using only 1 core. Setting SVNCompressionLevel 0 also didn't result in any noticeable speed improvement.
Is there any way to significantly increase the transfer speed through parallelization or some other optimization?
Server:
Debian 9.3
Apache 2.4.25
libapache2-mod-svn 1.9.5
svn repository: default FSFS config (i.e. everything commented out in fsfs.conf). The HDD can write up to 30MB/sec (hardware limited) without saturating the CPU (tested by copying). The FS is NTFS, mounted via ntfs-3g with big_writes enabled, which uses some 10-15% CPU while writing at ~10MB/sec.
Client:
svn 1.8.13
CPU: first generation Intel Core @ 3.20GHz
Obviously, I would be very pleased if I could transfer at 25-30MB/sec.
Is there any way to significantly increase the transfer speed through parallelization or some other optimization?
Yes, there is. However, the question lacks necessary details about the SVN client and server versions, the server and FSFS repository configuration, and the hardware it runs on, so it is hard to tell which optimizations will help in your case. You may want to upgrade your server and client to the latest versions and disable compression in the server's config.
FYI: VisualSVN Server can deliver 1Gbps speeds in my tests.
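For reference, this is roughly where the compression settings live; the location, paths and file names below are placeholders rather than values taken from the question:

# Hypothetical mod_dav_svn location block (e.g. /etc/apache2/mods-available/dav_svn.conf)
<Location /svn>
    DAV svn
    SVNParentPath /srv/svn
    # Send data uncompressed: trades network bandwidth for server CPU
    SVNCompressionLevel 0
</Location>

# Client side, in ~/.subversion/servers:
# [global]
# http-compression = no

The http-compression option in the client's servers file controls whether the client requests compressed network transfers at all, so it is the other half of the "disable compression" suggestion.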

Httpd processes bring CPU down (99% usage)

I run a Mac OS X Server Yosemite with 6 virtual hosts.
Pretty normal Wordpress stuff, nothing very special.
For 2 days now, when I start the server, the CPU usage goes up to 750% immediately (I have an Intel quad-core) and the pages are very slow; sometimes I even get a "MySQL too many connections" error.
My machine is a Mac mini quad core 2.3 GHz with 16GB of RAM.
I had the server running for 3 months without any similar problems.
Do you think it's a DDoS attack?
Thx,
Matthias
I found the reason: it was a DDoS attack on my WordPress site. A bot was trying to brute-force logins through the xmlrpc.php file in the root folder of my WordPress installation, guessing usernames and passwords against MySQL.
I simply added
// in xmlrpc.php
<?php exit; ?>
at the beginning of xmlrpc.php, and now there is peace on earth again.
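If you would rather not edit a WordPress core file (the change will be overwritten by the next WordPress update), an alternative along the same lines is to block the file at the web server level so the requests never reach PHP or MySQL at all. A minimal sketch in Apache 2.4 syntax, for the site's vhost or .htaccess:

# Deny every request to xmlrpc.php before it reaches PHP
<Files "xmlrpc.php">
    Require all denied
</Files>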

Serving static files slow only for Windows devices

We are seeing some strange behaviour on our OpenStack-based servers. There is an nginx load balancer in front of two Apache web servers. This setup is automated with Ansible and worked fine in our previous cloud.
After moving to another cloud, all static files are served extremely slowly. Slow means 35 seconds for a 140k JS file. What makes it really strange is that only Windows devices have this problem.
When you download that static file on a Windows VM running on a Mac, it comes down at normal speed (100ms). Downloading it on a Linux VM on a Windows host is very slow (35s). So it depends on the physical machine.
We have no idea where to start searching. Every tip is welcome.

Apache hangs/times out when backing up website with gzip or zip?

I'm running some websites on a dedicated Ubuntu web server. If I'm remembering correctly, it has 8 cores, 16GB of memory, and runs 64-bit Ubuntu. Content and files are delivered quickly to web browsers. Everything seems like a dream... until I run gzip or zip to back up an 8.6GB website.
When running gzip or zip, Apache stops delivering content. Internal server error messages are delivered until the compression process is complete. During the process, I can login via ssh without delays and run the top command. I can see that the zip process is taking about 50% CPU (I'm guessing that's 50% of a single CPU, not all 8?).
At first I thought this could be a log issue, with Apache logs growing out of control and not wanting to be messed with. The log files are under 5MB though, and are rotated when they hit 5MB. Another thought is that Apache only wants to run on one CPU and lets any other process take the lead; I'm not sure where to look to address that yet.
Any thoughts on how to troubleshoot this issue? Taking all my sites down while backups run is not an option, and I can't seem to reproduce the issue on my local machines (granted, it's different hardware and configuration). My hope is that this question is not too vague. I'm happy to provide additional details as needed.
Thanks for your brains in advance!
I'd suggest running your backup script under the "ionice" command. It will help keep the backup from starving httpd of I/O.
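A minimal sketch of what that could look like (the paths are placeholders, not taken from the question):

# Run the backup in the idle I/O class and at the lowest CPU priority
ionice -c3 nice -n 19 tar -czf /backups/site-$(date +%F).tar.gz /var/www/mysite

ionice -c3 puts the process in the idle I/O class, so it only gets disk time when nothing else wants it, and nice -n 19 keeps the gzip compression from competing with httpd for CPU.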

Varnish: how many req per second peak to (reasonably) expect?

We're experiencing a strange problem with our current Varnish configuration.
4x Web Servers (IIS 6.5 on Windows 2003 Server, each installed on an Intel(R) Xeon(R) CPU E5450 @ 3.00GHz quad core, 4GB RAM)
3x Varnish Servers (varnish-3.0.3 revision 9e6a70f on Ubuntu 12.04.2 LTS 64-bit/precise, kernel Linux 3.2.0-29-generic, each installed on an Intel(R) Xeon(R) CPU E5450 @ 3.00GHz quad core, 4GB RAM)
The 3 Varnish servers have a pretty much standard, vanilla config: the only thing we changed was vcl_recv and vcl_fetch in order to handle the session cookies. They are currently configured to use the in-memory cache, but we have already tried switching to HDD cache on a high-performance RAID drive, with exactly the same results.
We had put this in place almost two years ago without problems on our old web farm, and everything worked like a blast. Now, using the machines described above and after a clean reinstall, our customers are experiencing a lot of connection problems (pending requests on clients, 404 errors, missing files, etc.) when our websites are under heavy traffic. From the console log we can clearly see that these issues start happening when each Varnish reaches roughly 700 requests per second: it just seems like they can't handle anything more. We can easily reproduce the critical scenario at any time by shutting down one or two Varnish servers and watching how the others react: they always start to skip beats every time the requests-per-second count reaches 700. Considering what we've experienced in the past, and looking at the Varnish specs, this doesn't seem normal at all.
We're trying to improve our Varnish servers' performance and/or understand where the problem actually is. To do that, we could really use some kind of "benchmark" from other companies using Varnish in a similar fashion, to help us understand how far we are from the expected performance (I assume we are well below it).
EDIT (added CFG files):
This is our default.vcl file.
This is the output of the param.show command from the varnishadm console.
I'll also try to post a small part of our varnishlog file.
Thanks in advance,
To answer the question in the headline: A single Varnish server with the specifications you describe should easily serve 20k+ requests/sec with no other tuning than increasing the number of threads.
You don't give enough information (vcl, varnishlog) to answer your remaining questions.
My guess would be that you somehow end up serialising the backend requests. Check out your hit_for_pass objects and make sure they have a valid TTL set. (120s is fine)
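To illustrate the hit_for_pass point: in the Varnish 3 default VCL, uncacheable responses are turned into hit-for-pass objects with a 120s TTL in vcl_fetch, which lets later requests for the same URL go to the backend in parallel instead of queueing up behind one another. A sketch along those lines (illustrative only, mirroring the Varnish 3 builtin rather than the poster's default.vcl):

sub vcl_fetch {
    if (beresp.ttl <= 0 s || beresp.http.Set-Cookie || beresp.http.Vary == "*") {
        # Mark the object as hit-for-pass for the next two minutes
        set beresp.ttl = 120 s;
        return (hit_for_pass);
    }
    return (deliver);
}

The thread count mentioned above is controlled by the thread_pools, thread_pool_min and thread_pool_max runtime parameters (visible in the param.show output), which can be raised well above their defaults on hardware like this.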