What is causing the "load spike" issue on my server? - shared-hosting

I have a list of websites (around 50+) on a shared hosting package with my hosting provider. Recently, many of the sites received the notice below: "note that your account has been suspended due to higher resource usage which causes load spikes in the server and lets the other sites gets down".
All these sites are built with Joomla and regular PHP code. I'm not sure what I'm supposed to do on the hosting side. Any thoughts?

I think you should discuss the problem with your current hosting provider, change provider, or, if you have a lot of traffic coming to your sites, move to a better hosting solution.

Related

Verifying individual servers in a load balancing configuration

Here is my situation. Recently, my production environment has been burned by a few Windows updates that caused some production servers to stop responding. While we have since resolved the issue of both of the servers (which are in a load-balancing configuration) getting updates on the same day, the question arose: how do we check that the application running on each server is still working? If we call the load-balancing IP, we may or may not hit a server that is working. So if an update takes out the application on one server, how do we know that this has happened?
The only idea I have for this is to purchase two more SSL certificates, allocate two IP addresses, and assign one to each server. That way I would be guaranteed to know whether each server is up (we have a third-party service pinging our servers). But I have to believe that there is a better way to do this.
Please note that I am a .NET developer by trade with only an extremely small smattering of networking and IIS experience, but I'm what my small company has. So please assume I don't know where a lot of stuff is and dumb down the answer.
The load balancer maintains the live status of the servers (based on timeouts or HTTP health checks) and uses this status to route traffic only to active servers.
Generally, LBs have a dashboard through which you can check this status. If not, you can check its logs.
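If the load balancer doesn't expose that status anywhere convenient, you can also probe each server directly at its own address, which is essentially what you were proposing with the extra IPs, just without the extra certificates. As a rough Node.js sketch (the internal IPs, the /health path, and the host name are all made up; adapt them to your environment):

    // check-backends.js -- hypothetical sketch: probe each server behind the
    // load balancer directly, so a dead node is noticed even while the
    // balanced IP still answers. IPs, host name, and /health path are made up.
    const http = require('http');

    const servers = ['10.0.0.11', '10.0.0.12'];   // hypothetical internal addresses
    const siteHost = 'www.example.com';           // hypothetical Host header for the app

    servers.forEach(function (host) {
      const req = http.get(
        { host: host, path: '/health', headers: { Host: siteHost }, timeout: 5000 },
        function (res) {
          console.log(host + ' answered with HTTP ' + res.statusCode);
          res.resume();                           // discard the response body
        }
      );
      req.on('timeout', function () {
        console.error(host + ' timed out');
        req.destroy();                            // give up on this request
      });
      req.on('error', function (err) {
        console.error(host + ' is DOWN: ' + err.message);
      });
    });

Pointing your existing third-party monitoring at each server's own address (rather than only at the balanced IP) achieves the same thing on a schedule.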

Serving two sites (Apache and Node.js) from one server

I am on a Dreamhost VPS with root access. It runs Apache and hosts a site, "www.example.com". At the same time, I am developing a Node.js website and binding Node.js to port 3456 (for example), so the Node.js site is accessible by typing "www.example.com:3456".
These are two distinct websites. I don't ever want users of "www.example.com" accessing my Node.js website (which will be migrated to Nodejitsu after development).
Will I run into any problems with this setup?
I do not believe this will be a problem, unless one of your visitors happens to end up at port 3456. To mitigate this, you should think about writing your own small piece of middleware to whitelist your IP (thus rejecting anyone else). You can see an example at: http://www.hacksparrow.com/how-to-write-midddleware-for-connect-express-js.html. I'm sure you won't have a problem modifying this to your needs.
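For illustration, a minimal Connect/Express-style sketch of such whitelisting middleware (the allowed IP is made up, and the port is the one from your question; the linked article covers the same idea in more detail):

    // Hypothetical sketch: let only your own IP reach the Node.js site while it
    // is still in development; everyone else gets a 403.
    const express = require('express');
    const app = express();

    const ALLOWED_IPS = ['203.0.113.7'];          // made up: your own public IP

    app.use(function (req, res, next) {
      // req.ip can look like ::ffff:203.0.113.7 when IPv4 arrives over IPv6
      const ip = req.ip.replace(/^::ffff:/, '');
      if (ALLOWED_IPS.indexOf(ip) !== -1) {
        return next();                            // whitelisted: continue to the app
      }
      res.status(403).send('Forbidden');          // anyone else is rejected
    });

    app.get('/', function (req, res) {
      res.send('development site');
    });

    app.listen(3456);                             // the port from the question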

Avoiding bandwidth throttling by an ISP on an Apache HTTP web server: can encryption methods do it?

I have a working Apache HTTP web server on my computer so that a friend (and only him, no one else), who has no computer at home, can get my files directly, as from a website, from an Internet café.
I did some speed tests on my computer at home and on my computer at my workplace and found that, in both cases, I get almost full bandwidth (~7 MB/s) when using the protocol encryption methods in some P2P software (BitTorrent, eMule). This leads me to believe that this happens because the data is hidden from the ISPs.
Yet at the very same moment, downloading from my web server at home to my workplace is sluggish as hell (~90 KB/s)...
Is there a protocol encryption method like the one in P2P to prevent my Apache web server from being slowed down by the ISP? Or at least some alternative way to achieve better speed in this situation? I tried HTTPS, but it didn't seem to help.
Download != upload. Your upload at home will most likely be 1 Mbit/s (do you have an ADSL connection?): 1 Mbit/s divided by 8 is 125 KB/s, and after protocol overhead that comes down to roughly 90-110 KB/s, which matches what you are seeing. Your P2P tests measured your download side, but serving files from your home server is limited by your upload side, so encryption won't change it.
But this doesn't belong on SO. :-)

Server configuration for high traffic website

I'm managing a hosting server, and one of my customers will launch a high-traffic PHP website. It's a penny auction website, and we expect between 25k and 30k visitors per day.
Can you please tell me what I should change in my server configuration (PHP and Apache) to avoid problems? I'm afraid that the server will crash with a large number of visitors.
Thank you
Using a lighter web server such as nginx as a reverse proxy and static content server should keep Apache's memory and CPU usage to a minimum, which otherwise becomes a problem on larger sites.
APC as an opcode cache will also be useful on a large site, because compiling the PHP scripts to opcode on every request is expensive.
Which Apache forking model are you using for the server? The Event and Worker MPMs will probably work better for larger sites with more concurrent connections.
How is PHP set up within Apache, i.e. FastCGI/CGI/DSO/suPHP/FPM? suPHP will be the slowest, while FastCGI, FPM, and DSO will give you much better performance and allow you to use opcode caches.
If you don't need SSL support on the site, a free service like https://www.cloudflare.com/ will also lessen the load on your servers.
You could put an opcode cache into use; eAccelerator is a good one for this purpose.
You may also want to consider creating Apache vhosts from which static content like images/CSS/JavaScript is served. If these can be put on a CDN, even better.
There are other tools available for benchmarking, including the Apache benchmarking tool "ab" (for example, ab -n 1000 -c 50 http://www.example.com/ sends 1,000 requests at a concurrency of 50). You can use this to stress-test your site; a rough scripted alternative is sketched below.
There are several areas in which tuning can take place, not just PHP.
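If you would rather script a quick smoke test yourself, here is a crude Node.js sketch (the URL and concurrency are made up, and this is no substitute for ab or a real load-testing tool) that fires a batch of concurrent requests and reports how each one fared:

    // crude-load-test.js -- hypothetical sketch: fire CONCURRENCY simultaneous
    // GETs at the site and report the status code and elapsed time for each.
    const http = require('http');

    const TARGET = 'http://www.example.com/';     // made-up URL: point it at your site
    const CONCURRENCY = 50;                       // made-up value: tune to taste

    let finished = 0;
    for (let i = 0; i < CONCURRENCY; i++) {
      const started = Date.now();
      http.get(TARGET, function (res) {
        res.resume();                             // drain the body so the socket frees up
        res.on('end', function () {
          console.log('#' + i + ' HTTP ' + res.statusCode + ' in ' + (Date.now() - started) + ' ms');
          if (++finished === CONCURRENCY) console.log('all requests finished');
        });
      }).on('error', function (err) {
        console.error('#' + i + ' failed: ' + err.message);
      });
    }

Whichever tool you use, run the test against a staging copy of the site rather than the live server.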

Why do some setups front-end Glassfish with Apache?

I've been trying to mug up on Glassfish and one thing that keeps coming up is the "how-to" on fronting Glassfish with Apache. Unfortunately, I have yet to find a description of why you would want to do this!
From my experimentation, Glassfish seems like a pretty fully featured web-server-type service, but I might be missing a lot. So is the notion of fronting Glassfish more of a solution for integrating it with an existing architecture, or does fronting it (in a pure Java environment) provide extra benefits?
There's also another valid use case for fronting Glassfish with Apache. Apache in this instance functions as a reverse proxy that adds a layer of security in front of Glassfish. The RP is configured to allow only certain URLs to be passed through to the application server. For example, you may have the app contexts /myApp and /myPrivApp deployed in Glassfish. On the RP server, you configure only /myApp to be passed to Glassfish. Anybody requesting /myPrivApp would see a 404 because the request stops right at the RP level.
In one of my deployments, I have a bunch of WARs deployed, some for users coming from the internet and some for the intranet only. I have two RPs running, one for internet users and the other for the intranet. I configure the internet RP to allow only URLs for approved internet applications to pass through, while intranet users get to see everything.
Hope that helps.
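In practice you would set this up with Apache's mod_proxy (ProxyPass directives) or nginx; purely to illustrate the URL-filtering idea in code, here is a hypothetical minimal Node.js reverse proxy (the backend host, port, and context names are made up) that forwards only /myApp and answers 404 for everything else:

    // rp-sketch.js -- illustration only; in a real deployment Apache mod_proxy
    // or nginx plays this role. Only /myApp reaches the application server;
    // /myPrivApp never leaves the proxy.
    const http = require('http');

    const BACKEND = { host: 'glassfish.internal', port: 8080 };  // made-up app server

    http.createServer(function (clientReq, clientRes) {
      if (clientReq.url.indexOf('/myApp') !== 0) {
        clientRes.writeHead(404);
        return clientRes.end('Not Found');
      }

      const proxyReq = http.request({
        host: BACKEND.host,
        port: BACKEND.port,
        path: clientReq.url,
        method: clientReq.method,
        headers: clientReq.headers
      }, function (proxyRes) {
        clientRes.writeHead(proxyRes.statusCode, proxyRes.headers);
        proxyRes.pipe(clientRes);                 // stream the backend response back
      });

      proxyReq.on('error', function () {
        clientRes.writeHead(502);
        clientRes.end('Bad Gateway');
      });

      clientReq.pipe(proxyReq);                   // stream the request body to the backend
    }).listen(80);                                // the proxy, not Glassfish, owns port 80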
It is usually used to speed things up. Since Apache is a very fast web server, it is used to deliver static content like images, CSS files, and so on, while Glassfish serves the dynamic content (servlets, JSPs) in this scenario.
Another reason for using Apache as a front end to Glassfish is the possibility of providing load balancing across a Glassfish cluster. See http://tiainen.sertik.net/2011/03/load-balancing-with-glassfish-31-and.html for details.
Another reason is that Glassfish cannot (easily) run on port 80 without giving it root rights, of course.
So, for most users it's easier to run a proxy of some sort (Apache, nginx, Varnish) in front of Glassfish and have both servers run under a normal user.
You then get the additional advantage of your front end's configuration options; like others mentioned, caching, for example.