Ultra-small HTTP server with support for Lua scripts - apache

I have a Renesas R5F571M processor that has 4MB of flash and 512K of RAM.
I need to run FreeRTOS and also have a web server that can run Lua scripts in order to interface to the hardware with custom C code.
Can anyone suggest a very compact HTTP+Lua server that I could use?
The Barracuda Application Server looks ideal but at around $20K is way out of my reach.
I would love to be able to use Nginx and PHP but the resource constraints preclude that option.

Once upon a time, I worked with the Lighttpd web server. Under certain conditions you could compile it to a binary as small as ~400KB (400KB << 4MB). On the backend you could interface it with the FastCGI C library, and write the backend itself in C.
You can skip the Lua scripts, in my opinion. Or, if you still want to use them, you could use Lighttpd's mod_magnet module, which works directly with Lua, so you can skip the FastCGI library. Lighttpd also has a smaller memory footprint than Nginx, although I am not sure whether it is small enough to fit in the 512KB of RAM.
p.s. Lighttpd is free.

On the compact side:
bozohttpd (http://www.eterna.com.au/bozohttpd/): an HTTP/1.x web server that can run Lua scripts, but it forks on every request, so it is stateless in that sense.
lhttpd (https://github.com/danristea/lhttpd): an HTTP/1.x and HTTP/2 web server that was inspired by bozohttpd but integrates Lua coroutines with an event system (kqueue/epoll) to achieve non-blocking, stateful execution.
(disclaimer: I am the author of lhttpd)

Related

Script for sending many HTTP requests at a time to check server load

I want to check that my AWS autoscaling is working fine when CPU utilization is greater than 80%, so I want a script that sends many HTTP requests to my AWS server for testing. Please help me.
There are several tools to accomplish this task, two of them are
ab, a command-line tool bundled with Apache:
ab is a tool for benchmarking your Apache Hypertext Transfer Protocol (HTTP) server. It is designed to give you an impression of how your current Apache installation performs. This especially shows you how many requests per second your Apache installation is capable of serving.
JMeter, a graphical testing tool written in Java:
The Apache JMeter™ application is open source software, a 100% pure Java application designed to load test functional behavior and measure performance. It was originally designed for testing Web Applications but has since expanded to other test functions.
Handle them with care, as they really do what you are requesting. I've managed to brute-force my test server with ab until it collapsed, quite easily...
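Since the question literally asks for a script, here is a minimal Python sketch of the same idea as ab: fire many GET requests concurrently at a URL. The URL, request count, and concurrency level shown are placeholders you would point at your own AWS endpoint; this is a sketch, not a replacement for a real load-testing tool.

```python
# Minimal concurrent load-test sketch. Fires `total` GET requests at `url`,
# `concurrency` at a time, and returns the HTTP status codes so you can see
# whether the server kept answering under load.
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def hammer(url, total=100, concurrency=10, timeout=5):
    def one(_):
        with urlopen(url, timeout=timeout) as resp:
            return resp.status
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(one, range(total)))
```

Usage would be something like `hammer("http://my-elb.example.com/", total=1000, concurrency=50)`; crank `total` and `concurrency` up until the CPU alarm you are testing actually fires.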

Twisted and Tornado difference when deploying?

I only have a little knowledge of Tornado, and when it comes to deployment, it is supposedly better to use Nginx as a load balancer in front of a number of Tornado processes.
How about Twisted? Is it deployed the same way?
If I'm tracking your question right, you seem to be asking: "Should Tornado be front-ended with Nginx, and how about Twisted?"
If that's really where the question is going, then it's got an "it depends" answer, but perhaps not in the way you might expect. In a runtime sense, Twisted, Tornado, and Nginx are, in more ways than not, the same thing.
All three of these systems use the same programming pattern at their core: something the OO people call the Reactor pattern, which is often also known as asynchronous I/O event programming, and which old-school Unix people would call a select-style event loop (done via select / epoll / kqueue / WaitForMultipleObjects / etc.).
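To make that concrete, here is a minimal sketch of such a select-style loop using Python's `selectors` module (which picks epoll/kqueue/select for you). It is a bare echo server, not HTTP, and the structure and names are mine rather than anything from Twisted, Tornado, or Nginx:

```python
# One process, one core, many sockets: no threads, no blocking calls.
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server):
    conn, _ = server.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, echo)

def echo(conn):
    data = conn.recv(1024)
    if data:
        conn.sendall(data)       # echo the bytes straight back
    else:
        sel.unregister(conn)     # peer closed: clean up
        conn.close()

def make_server(host="127.0.0.1", port=0):
    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((host, port))
    server.listen()
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ, accept)
    return server

def run(rounds):
    # The reactor loop itself: wait for readiness, dispatch the callback
    # that was stored alongside each registered socket.
    for _ in range(rounds):
        for key, _ in sel.select(timeout=0.1):
            key.data(key.fileobj)
```

Everything below in this answer is a variation on this one loop, differing mainly in what the callbacks do and what language they are written in.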
To build up to the answer, some background:
Twisted is a Reactor-based framework that was created for writing Python-based asynchronous I/O projects in their most generic form. So while it's extremely good for web applications (via the Twisted Web sub-module), it's equally good for dealing with serial data (via the SerialPort sub-module) or for implementing a totally different networking protocol like SSH.
Twisted excels in flexibility. If you need to glue many I/O types together, and particularly if you want to connect them to the web, it is fantastic. As noted in remudada's answer, it also has an application tool built in (twistd).
As an async I/O framework, Twisted has very few weaknesses. As a web framework, though (while it is actively continuing to evolve), it feels decidedly behind the curve, particularly compared to plugin-rich frameworks like Flask, and Tornado definitely has many web conveniences over Twisted Web.
Tornado is a Reactor-based framework in Python that was created for serving webpages/webapps extremely fast (basically as fast as Python can be made to serve up webpages). The folks who wrote it were aiming to build a system so fast that the production code itself could be Python.
The Tornado core is nearly a logical match to the core of Twisted. The cores of these projects are so similar that on modern releases you can run Twisted inside of Tornado or run a Tornado port inside of Twisted.
Tornado is single-mindedly focused on serving webpages/webapps fast. In most applications it will be 20%-ish faster than Twisted Web, but nowhere near as flexible for supporting other async I/O uses. It (like Twisted) is still Python-based, so if a given webapp does too much CPU-bound work, its performance will fall quickly.
Nginx is a Reactor-based application, written in C, that was created for serving webpages and connection redirection. While Twisted and Tornado use the Reactor pattern to make Python go fast, Nginx takes things to the next step and uses that logic in C.
When people compare Python and C they often talk about Python being 1.2 to 100 times slower. However, in the Reactor pattern (which, when done right, spends most of its time in the operating system) language inefficiency is minimized, as long as not too much logic happens outside of the reactor.
I don't have hard data to back this up, but my expectation is that you would find the simplest "Hello world" (i.e. serving static text) running no more than 50% slower on Tornado than on Nginx (with Twisted Web being 20% slower than Tornado on average).
Different speeds of the same thing: where does that leave us?
You asked "it is better to use Nginx as a load balancer with the number of Tornado process", so to answer that I need to ask you a question back.
Are you deploying in a way where it's critical that you take advantage of multiple cores?
In exchange for blazing async IO speed, the Reactor pattern has a weakness:
Each Reactor can only take advantage of one process core.
As you might guess, the flip side of that weakness is that the Reactor pattern uses that core extremely efficiently, and if you have the load, it should be able to drive that core to nearly 100% use.
This leads back to the type of design you're asking about, but the reason for all the background in this answer is that the layering of these services (Nginx in front of Tornado or Twisted) should only be done to take advantage of multi-core machines.
If you're running on a single-core system (the lowest class of cloud server, or an embedded platform like a Raspberry Pi) you SHOULD NOT front-end a reactor. Doing so will simply slow the whole system down.
If you run more (heavily loaded) reactor services than CPU cores, you're also going to be working against yourself.
So:
If you're deploying on a single-core system:
Run one instance of either Tornado or Twisted (or Nginx alone if it's static pages).
If you're trying to fully exploit multiple cores:
Run Nginx (or Twisted) on the application port, and run one instance of Tornado or Twisted for each remaining processor core.
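As a sketch, that multi-core front-end might look like this in Nginx configuration (the ports and the three-backend count are assumptions for a hypothetical four-core box, with Nginx itself taking the fourth core):

```nginx
# Hypothetical: Nginx on port 80 fans requests out to three
# Tornado/Twisted processes, one per remaining core.
upstream app_backends {
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;
}
server {
    listen 80;
    location / {
        proxy_pass http://app_backends;
    }
}
```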
Twisted is a good enough application server on its own, and I would rather use it as it is (unlike Tornado).
If you look at the official guide http://twistedmatrix.com/documents/11.1.0/core/howto/application.html
you can see how it is set up. Of course you can use uWSGI / Nginx / Emperor with Twisted, since it can be run as a standard application, but I would suggest you do this only when you really need the scaling and load balancing.

Bottle WSGI server vs Apache

I don't actually have any problem, just a bit curious of things.
I made a Python web framework based on Bottle (http://bottlepy.org/). Today I tried a small comparison of the Bottle WSGI server and Apache server performance. I work on Lubuntu 12.04, using Apache 2, Python 2.7, and the Bottle development version (0.12), and got this surprising result:
As stated in the bottle documentation, the included WSGI Server is only intended for development purpose. The question is, why the development server is faster than the deployment one (apache)?
As far as I know, a development server is usually slower, since it provides some "debugging" features.
Also, I have never had any response in less than 100 ms when developing PHP applications. But look, it is just 13 ms in Bottle.
Can anybody please explain this? It just doesn't make sense to me. A deployment server should be faster than the development one.
Development servers are not necessarily faster than production-grade servers, so such an answer is a bit misleading.
The real reason in this case is likely going to be due to lazy loading of your web application on the first request that hits a process. Especially if you don't configure Apache correctly, you could hit this lazy loading quite a bit if your site doesn't get much traffic.
I would suggest you go watch my PyCon talk which deals with some of these issues.
http://lanyrd.com/2013/pycon/scdyzk/
Especially make sure you aren't using prefork MPM. Use mod_wsgi daemon mode in preference.
A deployment server should be faster than the development one.
True. And it generally is faster... in a "typical" web server environment. To test this, try spinning up 20 concurrent clients and have them make continuous requests to each version of your server. You see, you've only tested one request at a time, which is certainly not a typical web environment. I suspect you'll see different results (we're thinking of both latency AND throughput here) with tens or hundreds of concurrent requests per second.
To put it another way: At 10, 20, 100 requests per second, you might still see ~200ms latency from Apache, but you'd see much worse latency from Bottle's server.
Incidentally, the Bottle docs do refer to concurrency:
The built-in default server is based on wsgiref WSGIServer. This non-threading HTTP server is perfectly fine for development and early production, but may become a performance bottleneck when server load increases.
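To see how much of the gap is down to that single-threaded design, you can serve a WSGI app through a thread-per-request variant built entirely from the standard library. This is a sketch: the trivial `app` below is a stand-in for whatever your real Bottle application is.

```python
# Hedged sketch: wsgiref's default server handles one request at a time.
# Mixing in ThreadingMixIn gives a thread-per-request server, a fairer
# baseline for a concurrent benchmark.
from socketserver import ThreadingMixIn
from wsgiref.simple_server import WSGIServer, make_server

class ThreadingWSGIServer(ThreadingMixIn, WSGIServer):
    daemon_threads = True

def app(environ, start_response):
    # Stand-in for a real Bottle/WSGI application.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

def serve(host="127.0.0.1", port=0):
    return make_server(host, port, app, server_class=ThreadingWSGIServer)
```

Benchmarking this variant against the wsgiref default under concurrent load should make the "non-threading bottleneck" the docs mention visible.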
It's also worth noting that Apache is doing a lot more than the Bottle reference server is (checking .htaccess files, dispatching to child process/thread, robust logging, etc.) and all those features necessarily add to request latency.
Finally, I'd ask whether you tuned the Apache installation. It's possible that you could configure it to be faster than it is now, e.g. by tuning the MPM, simplifying logging, disabling .htaccess checks.
Hope this helps. And if you do run a concurrent benchmark, please do share the results with us.

Can I Replace Apache with Node.js?

I have a website running on CentOS using the usual suspects (Apache, MySQL, and PHP). Since the time this website was originally launched, it has evolved quite a bit, and now I'd like to do fancier things with it, namely real-time notifications. From what I've read, Apache handles this poorly. I'm wondering if I can replace just Apache with Node.js (so instead of "LAMP" it would be "LNMP").
I've tried searching online for a solution, but haven't found one. If I'm correctly interpreting the things that I've read, it seems that most people are saying that Node.js can replace both Apache and PHP together. I have a lot of existing PHP code, though, so I'd prefer to keep it.
In case it's not already obvious, I'm pretty confused and could use some enlightenment. Thanks very much!
If you're prepared to re-write your PHP in JavaScript, then yes, Node.js can replace your Apache.
If you place an Apache or NGINX instance running in reverse-proxy mode between your servers and your clients, you could handle some requests in JavaScript on Node.js and some requests in your Apache-hosted PHP, until you can completely replace all your PHP with JavaScript code. This might be the happy medium: do your WebSockets work in Node.js, more mundane work in Apache + PHP.
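In Apache configuration terms, that split might look something like this (the `/realtime/` path and port 3000 are assumptions, and mod_proxy/mod_proxy_http must be enabled):

```apache
# Hypothetical split: real-time/WebSocket traffic goes to Node.js on :3000,
# everything else stays with Apache + PHP exactly as before.
ProxyPass        /realtime/ http://127.0.0.1:3000/
ProxyPassReverse /realtime/ http://127.0.0.1:3000/
```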
Node.js may be faster than Apache thanks to its evented/non-blocking architecture, but you may have problems finding modules/libraries to substitute for some of Apache's functionality.
Node.js itself is a lightweight low-level framework which enables you to relatively quickly build server-side stuff and real-time parts of your web applications, but Apache offers much broader configuration options and "classical" web server oriented features.
I would say that unless you want to replace PHP with a Node.js-based web application framework like Express, you should stay with Apache (or think about migrating to Nginx if you have performance problems).
I believe Node.js is the future in web serving, but if you have a lot of existing PHP code, Apache/MySQL are your best bet. Apache can be configured to proxy requests to Node.js, or Node.js can proxy requests to Apache, but I believe some performance is lost in both cases, especially in the first one. Not a big deal if you aren't running a very high traffic website though.
I just registered on Stack Overflow, and I can't comment on the accepted answer yet, but today I created a simple Node.js script that actually uses sendfile() to serve files over HTTP. (The existing example that the accepted answer links to only uses the bare TCP protocol to send the file, and I could not find an example for HTTP, so I wrote it myself.)
So I thought someone might find this useful. Serving files through the sendfile() OS call is not necessarily faster than when data is copied through "user land", but it uses less CPU and RAM, and is thus able to handle a larger number of connections than the classic way.
The link: https://gist.github.com/1350901
Previous SO post describing exactly what I'm saying (PHP + socket.io + node)
I think you could put up a Node server on somehost:8000 with socket.io, slap the socket.io client code into your <script> tags, and with minimal work get your existing app rocking with socket.io (realtime, baby!).
While Node can be your only backend server, remember that Node likes to live up to its name and become a node. I checked out a talk a while back that Ryan Dahl gave to a PHP users' group, and he mentioned that the name relates to a vision of several node processes doing work and talking with each other.
It's LAMP versus MEAN nowadays. For a direct comparison, see http://tamas.io/what-is-the-mean-stack.
Of course M, E, and A are somewhat variable. For example, the more recent Koa may replace (E)xpress.
However, just replacing Apache with Node.js is probably not the right way to modernize your web stack.

Designing Web Interface for Embedded System

OS: Linux.
I'm trying to find possible ways to implement a web interface for my embedded system.
Currently there is a shell (text based) and a small set of commands are implemented to query the device.
I'm new to web development, my questions are:
What web server should I use? (I got Apache up on my development setup and tried using CGI to fetch some pages, but it seems like this is not the right choice for embedded systems.)
Assuming I'm using CGI, what strategy can be used for passing data between the CGI and the main app?
I intend to create a thread in the main app to handle queries from the CGI script. This thread would call interfaces in the main app, retrieve the data, and pass it to the CGI.
We use Lighttpd on our embedded systems; it's small and very easy to integrate. There are also specialized web servers with features especially geared to embedding, like AppWeb, which in my opinion is also a very nice product.
For the communication between the main application and the CGIs you can use sockets, or System V message queues if those are available on your embedded platform. The advantage of SysV message queues is that they're very easy to use and manage, and messages in the queues survive restarts of the applications, but they also have some quirks (for example, you can't select() on them).
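For the socket option, the handshake between a short-lived CGI process and the long-running main application can be sketched like this (Python for brevity; in an embedded product this would likely be C. The socket path, the one-line text protocol, and the command names are all assumptions, not from the answer):

```python
# Sketch of CGI <-> main-app communication over a Unix-domain socket.
import os
import socket

SOCK_PATH = "/tmp/mainapp.sock"  # hypothetical rendezvous point

def query_main_app(command, path=SOCK_PATH, timeout=2.0):
    """CGI side: connect, send one command line, read back one reply line."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        s.connect(path)
        s.sendall(command.encode() + b"\n")
        return s.makefile("r").readline().strip()

def serve_queries(handler, path=SOCK_PATH):
    """Main-app side: run in a dedicated thread; answer one command per
    connection by calling back into the application via `handler`."""
    try:
        os.unlink(path)          # remove a stale socket from a previous run
    except FileNotFoundError:
        pass
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(path)
    srv.listen()
    while True:
        conn, _ = srv.accept()
        with conn:
            cmd = conn.makefile("r").readline().strip()
            conn.sendall((handler(cmd) + "\n").encode())
```

The CGI script would call something like `query_main_app("get_temp")` and render the reply; the main app's handler maps command names onto its internal interfaces.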
Another web server choice is thttpd. I use it successfully in an industrial product.
For the communication between the CGI and the main application, sockets are the right choice.
There is no web server you must use, but there are better choices for embedded than Apache. Apache is not designed for embedded use and is bigger and slower.
I would not recommend CGI. It is slow to run and slow to develop. I can speak for Appweb, for which I'm one of the developers. Appweb has two good web frameworks:
Ejscript, which is a server-side JavaScript framework for Appweb
ESP, which is an MVC C-language web framework
ESP is extremely fast and provides easy binding of controllers to URLs. Ejscript is bigger and has a more extensive class library. Both are designed for embedded use. Both are much better than CGI and execute 20+ times faster.
I work with LuCI, which is a light CGI framework for embedded devices. Actually it is for OpenWrt, an open-source wireless-router project.
The server is uhttpd: light and powerful.
The CGI scripts are written in Lua, whose interpreter is no more than 10K. Delicate, right? And it is implemented in C and can communicate with C. Powerful.
So this is my suggestion.
We use JUCI with OpenWrt. It is written in JavaScript that runs in the client browser and communicates with the web server over a JSON REST API. The backend can be implemented in any language, but we use reusable components written in C that plug into the system bus (ubus). This means that the relevant services expose their functionality through ubus, which can be used both through the CLI and through the REST API. It's actually pretty nice.