web2py Rocket or Apache

Web2py's home page advertises a 'fast multi-threaded web server', and I have seen text claiming that the Rocket server performs as well as Apache. Web2py also claims to treat security as a high priority.
So why do people recommend that Rocket not be used in production? What are the disadvantages? If the answer is handling a lot of traffic, roughly what level of traffic would that be?
(I am looking at switching a Django application to web2py, and if I could use the Rocket server for a small application it would simplify the initial transition.)

The main reason people advise against Rocket in production is that it has few configuration / customization options. With Apache or nginx or whatever, you have an endless supply of pluggable modules and built-in capabilities for security, caching, rewrites, logging, tuning, threading, error handling ... many diverse capabilities. Rocket is more of a good, basic web server with just a few of those options.
We use the Rocket server all the time in development environments -- it works very well in a low-load, functionality-centric environment. However, we replace it with Apache by the time we reach security and load testing environments, because we need several capabilities that Apache provides and Rocket doesn't.
There is nothing wrong with Rocket. It would probably be fine in a low-load production environment. If you ever decide you do need options Rocket doesn't have, change your choice of web server then.

Related

Bottle WSGI server vs Apache

I don't actually have a problem; I'm just a bit curious.
I'm building a Python web framework based on Bottle (http://bottlepy.org/). Today I did a quick comparison of Bottle's WSGI server and Apache. I'm working on Lubuntu 12.04, using Apache 2, Python 2.7 and the Bottle development version (0.12), and got this surprising result:
As stated in the Bottle documentation, the included WSGI server is only intended for development purposes. The question is: why is the development server faster than the deployment one (Apache)?
As far as I know, a development server is usually slower, since it provides some "debugging" features.
Also, I never get a response in less than 100 ms when developing a PHP application. But look, it is just 13 ms in Bottle.
Can anybody please explain this? It just doesn't make sense to me. A deployment server should be faster than the development one.
Development servers are not necessarily faster than production grade servers, so such an answer is a bit misleading.
The real reason in this case is likely going to be due to lazy loading of your web application on the first request that hits a process. Especially if you don't configure Apache correctly, you could hit this lazy loading quite a bit if your site doesn't get much traffic.
I would suggest you go watch my PyCon talk which deals with some of these issues.
http://lanyrd.com/2013/pycon/scdyzk/
Especially make sure you aren't using prefork MPM. Use mod_wsgi daemon mode in preference.
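For reference, a minimal sketch of what mod_wsgi daemon mode looks like in the Apache config (the names and paths here are hypothetical, and the processes/threads values are purely illustrative):

    # Run the WSGI app in its own daemon process group instead of inside the
    # Apache worker processes (avoids prefork-related lazy-loading surprises).
    WSGIDaemonProcess myapp processes=2 threads=15
    WSGIProcessGroup myapp
    WSGIScriptAlias / /var/www/myapp/app.wsgi

    # Optionally preload the application when the daemon starts, rather than
    # lazily on the first request:
    WSGIImportScript /var/www/myapp/app.wsgi process-group=myapp application-group=%{GLOBAL}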
A deployment server should be faster than the development one.
True. And it generally is faster... in a "typical" web server environment. To test this, try spinning up 20 concurrent clients and have them make continuous requests to each version of your server. You see, you've only tested 1 request at a time--certainly not a typical web environment. I suspect you'll see different results (we're thinking of both latency AND throughput here) with tens or hundreds of concurrent requests per second.
To put it another way: At 10, 20, 100 requests per second, you might still see ~200ms latency from Apache, but you'd see much worse latency from Bottle's server.
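If you want to run that experiment, here is a rough sketch of such a concurrent benchmark in Python 3 (the URL, concurrency level and per-worker request count are placeholders; dedicated tools like ab or Siege will give more rigorous numbers, but this shows the idea):

    # Rough concurrency benchmark sketch (Python 3). URL, CONCURRENCY and
    # REQUESTS_PER_WORKER are placeholders; point it at whichever server you test.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://127.0.0.1:8080/"
    CONCURRENCY = 20
    REQUESTS_PER_WORKER = 50

    def worker():
        latencies = []
        for _ in range(REQUESTS_PER_WORKER):
            start = time.time()
            urllib.request.urlopen(URL).read()
            latencies.append(time.time() - start)
        return latencies

    start = time.time()
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        futures = [pool.submit(worker) for _ in range(CONCURRENCY)]
        latencies = [lat for fut in futures for lat in fut.result()]
    elapsed = time.time() - start

    print("requests:     %d" % len(latencies))
    print("throughput:   %.1f req/s" % (len(latencies) / elapsed))
    print("mean latency: %.1f ms" % (1000 * sum(latencies) / len(latencies)))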
Incidentally, the Bottle docs do refer to concurrency:
"The built-in default server is based on wsgiref WSGIServer. This non-threading HTTP server is perfectly fine for development and early production, but may become a performance bottleneck when server load increases."
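If the default server does become the bottleneck, Bottle lets you swap in a multi-threaded backend without changing the application. A minimal sketch, assuming the paste package is installed ('cherrypy' and 'waitress' work the same way via the server argument):

    # Run the same Bottle app on a threaded backend instead of the default
    # single-threaded wsgiref server (assumes the 'paste' package is installed).
    from bottle import Bottle, run

    app = Bottle()

    @app.route('/')
    def index():
        return 'hello'

    # server='wsgiref' is the default; 'paste', 'cherrypy' and 'waitress' are threaded.
    run(app, host='127.0.0.1', port=8080, server='paste')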
It's also worth noting that Apache is doing a lot more than the Bottle reference server is (checking .htaccess files, dispatching to child process/thread, robust logging, etc.) and all those features necessarily add to request latency.
Finally, I'd ask whether you tuned the Apache installation. It's possible that you could configure it to be faster than it is now, e.g. by tuning the MPM, simplifying logging, disabling .htaccess checks.
Hope this helps. And if you do run a concurrent benchmark, please do share the results with us.

Server Setup: Based on Apache and Tomcat needs

I'm trying to setup a server based on our needs for a new website. Basically, I need to build a website based on social engine, and according to the platform's requirements (found here: http://www.socialengine.net/support/documentation/article?q=152&question=SocialEngine-Requirements) it requires the webserver to be Apache based.
Now my issue comes with the addition of a web application that needs to be included in the site. The web application requires the server to be capable of Asynchronous Request Processing, and is currently only supported by Tomcat or GlassFish.
I found a couple of tutorials, such as this one http://www.serverwatch.com/tutorials/article.php/2203891/Integrating-Tomcat-with-Apache.htm, that explain how to "integrate" Tomcat into Apache. Would a server running Tomcat alone be able to handle the applet needs as well as serve the Apache (assuming HTTP) needs of the SocialEngine platform? Are there any hosting providers any of you would recommend?
Although I've done a lot of front-end stuff before, this is the first time I have to deal with any of the back-end details, so my knowledge of server-side functionality is really garbage. Please let me know if I'm not asking the right questions.
Thanks
You wouldn't really be able to use Tomcat for both apps, since the other one needs PHP. It's pretty common to have both Tomcat and Apache running on the same server. You might want to look up more recent documentation on mixing them, but definitely have a look at mod_proxy_ajp.
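To give a feel for it, the Apache side of a mod_proxy_ajp setup can be as small as the sketch below (the /webapp context path is hypothetical; Tomcat's AJP connector listens on port 8009 by default):

    # Forward requests for /webapp to Tomcat's AJP connector; everything else
    # (PHP, static files) is still served by Apache as usual.
    # Requires mod_proxy and mod_proxy_ajp to be enabled.
    ProxyPass /webapp ajp://localhost:8009/webapp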
What's the other application? It's a little tricky to set up Asynchronous Request Processing if you are new to server apps, but there is also a lot of documentation, so if you're game, you can probably figure it out OK. You might also want to see if that app would work with node.js (hosting info here)
If you want to set it all up yourself, you could get a virtual private server from Rackspace Cloud or similar host or get a shared host that has the required apps already set up, which would limit your ability to customize the environment and may require 2 hosting plans, but would be easier to set up. It also somewhat depends on if both apps need to be on the same machine for any reason and/or on the same domain.
A regular LAMP stack will run SE4 just fine; however, you will need to do some tuning to get the page loads under 3 seconds. You will want to remove any Apache modules that you aren't using with a2dismod. For instance, if you're not using any Ruby on the site, a2dismod ruby. This will help get memory usage under control. APC is a must.
For a much more in depth read on tuning php/apache, please read this: Performance tuning on Apache, PHP, MySQL, WordPress v1.1 – Updated

Can I Replace Apache with Node.js?

I have a website running on CentOS using the usual suspects (Apache, MySQL, and PHP). Since the time this website was originally launched, it has evolved quite a bit and now I'd like to do fancier things with it, namely real-time notifications. From what I've read, Apache handles this poorly. I'm wondering if I can replace just Apache with Node.js (so instead of "LAMP" it would be "LNMP").
I've tried searching online for a solution, but haven't found one. If I'm correctly interpreting the things that I've read, it seems that most people are saying that Node.js can replace both Apache and PHP together. I have a lot of existing PHP code, though, so I'd prefer to keep it.
In case it's not already obvious, I'm pretty confused and could use some enlightenment. Thanks very much!
If you're prepared to re-write your PHP in JavaScript, then yes, Node.js can replace your Apache.
If you place an Apache or NGINX instance running in reverse-proxy mode between your servers and your clients, you could handle some requests in JavaScript on Node.js and some requests in your Apache-hosted PHP, until you can completely replace all your PHP with JavaScript code. This might be the happy medium: do your WebSockets work in Node.js, more mundane work in Apache + PHP.
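As a rough sketch of that happy medium, the Apache side of such a reverse proxy could look like the following (the /realtime path and port 3000 are hypothetical, and WebSocket traffic needs extra care, e.g. mod_proxy_wstunnel on Apache 2.4):

    # Hand /realtime off to a local Node.js process; the rest of the site stays
    # with Apache + PHP. Requires mod_proxy and mod_proxy_http to be enabled.
    ProxyPass        /realtime/ http://127.0.0.1:3000/
    ProxyPassReverse /realtime/ http://127.0.0.1:3000/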
Node.js may be faster than Apache thanks to its evented/non-blocking architecture, but you may have problems finding modules/libraries that substitute for some of Apache's functionality.
Node.js itself is a lightweight, low-level framework that enables you to build server-side logic and the real-time parts of your web applications relatively quickly, but Apache offers much broader configuration options and "classical" web-server-oriented features.
I would say that unless you want to replace PHP with a Node.js-based web application framework like Express.js, you should stay with Apache (or think about migrating to nginx if you have performance problems).
I believe Node.js is the future in web serving, but if you have a lot of existing PHP code, Apache/MySQL are your best bet. Apache can be configured to proxy requests to Node.js, or Node.js can proxy requests to Apache, but I believe some performance is lost in both cases, especially in the first one. Not a big deal if you aren't running a very high traffic website though.
I just registered to stackoverflow, and I can't comment on the accepted answer yet, but today I created a simple Node.js script that actually uses sendfile() to serve files through the HTTP protocol. (The existing example that the accepted answer links to only uses bare TCP protocol to send the file, and I could not find an example for HTTP, so I wrote it myself.)
So I thought someone might find this useful. Serving files through the sendfile() OS call is not necessarily faster than when data is copied through "user land", but it ends up using less CPU and RAM, and can therefore handle a larger number of connections than the classic way.
The link: https://gist.github.com/1350901
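Since the gist itself is Node.js, here is a rough Python analogue of the same idea, just to illustrate what serving a file via the sendfile() OS call means (Linux, Python 3.3+; the file path and port are placeholders):

    import os
    import socket

    # Minimal illustration: serve one file over HTTP using the sendfile() syscall.
    FILE_PATH = "/tmp/example.bin"  # placeholder

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("127.0.0.1", 8080))
    sock.listen(5)

    while True:
        conn, _ = sock.accept()
        with open(FILE_PATH, "rb") as f:
            size = os.fstat(f.fileno()).st_size
            header = ("HTTP/1.1 200 OK\r\n"
                      "Content-Length: %d\r\n"
                      "Connection: close\r\n\r\n" % size)
            conn.sendall(header.encode("ascii"))
            # os.sendfile() lets the kernel copy the file straight to the socket,
            # with no round trip through a user-space buffer.
            offset = 0
            while offset < size:
                offset += os.sendfile(conn.fileno(), f.fileno(), offset, size - offset)
        conn.close()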
A previous SO post describes exactly what I'm saying (PHP + socket.io + Node).
I think you could put up a Node server on somehost:8000 with socket.io, slap the socket.io client code into your <script> tags, and with minimal work get your existing app rocking with socket.io (realtime, baby).
While Node can be your only backend server, remember that Node likes to live up to its name and become a node. I checked out a talk a while back that Ryan Dahl gave to a PHP users' group, and he mentioned that the name Node relates to a vision of several node processes doing work and talking with each other.
It's LAMP versus MEAN nowadays. For a direct comparison, see http://tamas.io/what-is-the-mean-stack.
Of course, M, E and A are somewhat variable. For example, the more recent Koa may replace (E)xpress.
However, just replacing Apache with Node.js is probably not the right way to modernize your web stack.

Stress testing a server and VPS's vs. Dedicated servers

We used to have a dedicated server (1&1) and very infrequently ran into problems with the server having issues.
Recently, we migrated to a VPS (Wiredtree.com) with similar specs to our old dedicated server, but we notice frequent problems with running out of memory, MySQL having to restart, etc., both when knowingly running intensive scripts and also just randomly during normal use.
Because of this, we're considering migrating to another VPS, this time at Slicehost, to see if it performs better.
My question is two fold...
Are there straightforward ways we could stress test a VPS at Slicehost to see if the same issues occur, without having to actually migrate everything over?
Also, is it possible that the issues we're facing aren't due to the provider (Wiredtree) but simply the difference between a dedicated box and a VPS (despite having similar specs)?
The best way to stress test an environment is to put it under load. If this VPS is hosting a web application, use one of the many available web server benchmark tools: ab, httperf, Siege or http_load. You don't necessarily care that much about the statistics from the tool itself, but more that it puts a predictable load on the server so that you can tune Apache to handle it, or at least not crash and burn.
The one problem you have with testing against Slicehost is that you are at the mercy of the Internet and your bandwidth to Slicehost. You may not be able to put enough load on the server to reach a meaningful conclusion.
Instead, you might find it just as valuable to run one of the many virtualization products on the market and set up a VM with comparable specs to the VPS plan you're considering. Local testing over your LAN will allow you to put a higher and more predictable load on the server.
In either case, you don't need to migrate everything, but you will need to set up an environment for your application to run in, with representative data in your database.
A VPS with similar specs to a dedicated server should perform approximately the same, but in order to get good performance, you still need to tune Apache, MySQL and any other long-lived server processes. In my experience, the out-of-the-box configuration of Apache in many Linux distributions is not ideal and will allow far too many child processes, overcommitting memory and sending the server into a swap-death spiral.
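For what it's worth, this is the kind of prefork MPM stanza I mean; the numbers below are purely illustrative and should be derived from your Apache processes' actual memory footprint and the RAM the VPS really has:

    # Keep MaxClients low enough that (MaxClients x per-process memory) fits in RAM,
    # so Apache can't drive the VPS into swap under load.
    <IfModule mpm_prefork_module>
        StartServers          4
        MinSpareServers       4
        MaxSpareServers       8
        MaxClients           40
        MaxRequestsPerChild 500
    </IfModule>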

Why use Glassfish instead of Apache? What's it strengths and weaknesses?

Sorry for my ignorance here, but when I hear the word webserver, I immediately imagine Apache, although I know people use Microsoft's IIS too. However since I've been hanging out here at Stackoverflow I've noticed lots of people use Glassfish.
Which made me wonder: why would I want to use GlassFish (in the sense that I'm interested, but I don't really understand why it might make my life easier)? From what I read, it's Sun's open-source derivative of Apache's Tomcat, so I imagine it's a good (or great) quality product. But since I don't know its strengths and weaknesses, I don't know when it would be wise to choose GlassFish over another server. Could anyone elaborate?
GlassFish is an Application Server which can also be used as a Web Server (Http Server).
A web Server means: Handling HTTP requests (usually from browsers).
A Servlet Container (e.g. Tomcat) means: It can handle servlets & JSP.
An Application Server (e.g. GlassFish) means: It can manage Java EE applications (usually both servlet/JSP and EJBs).
You should use GlassFish for Java EE enterprise applications.
A separate web server is mostly needed in a production environment. You would normally find an application server sufficient for most of your development needs. A web server is capable of holding a larger number of active sessions and connections, thus providing the necessary balance without performance costs.
Stick to a simple web server if you are only working with servlets/JSPs. It is also worth noting that in a NetBeans environment, GlassFish has better support than other app servers. In the context of Eclipse, though, WSAD and JBoss seem to be the preferred options.
GlassFish will soon release its modular kernel.
This means that the containers you need start up and shut down as you need them; i.e. if no EAR is deployed, the EJB container won't start up. This seems to have made it very good for development, as it can start and stop very quickly. That takes it a lot closer to development environments like Rails (where redeployment is a massive part of your development).
I have used the GlassFish server for developing web services.
It provides a very interactive admin console where an admin can test the web services.
I really find it helpful while developing web services.