PyWSGI (Gevent) Virtual hosts

Is it possible to host more than one site within Gevent's pywsgi server? I have a machine with bottlepy and gevent pywsgi server and am curious how I would go about setting up a second site. The only thing I can think of is using something like nginx as a front end and running each gevent server/site on a different internal port. Is this really the best way to approach this?

Virtual hosting is not part of the WSGI protocol.
If you don't want to use nginx or another frontend server, you can write your own WSGI middleware, or use an existing one, that dispatches to several underlying WSGI apps.
Something like this (I have not tested it):
http://discorporate.us/jek/projects/wfront/
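In the same untested spirit, here is a minimal sketch of such a dispatching middleware; the host names and app objects are placeholders:

class HostDispatcher(object):
    # WSGI middleware that picks an underlying app from the Host header.
    def __init__(self, apps, default):
        self.apps = apps        # dict mapping host name -> WSGI app
        self.default = default  # fallback app for unknown hosts

    def __call__(self, environ, start_response):
        host = environ.get('HTTP_HOST', '').split(':')[0].lower()
        app = self.apps.get(host, self.default)
        return app(environ, start_response)

# Hypothetical usage with gevent's pywsgi and two bottle apps:
# from gevent.pywsgi import WSGIServer
# dispatcher = HostDispatcher({'www.site1.com': app1, 'www.site2.com': app2}, app1)
# WSGIServer(('', 8080), dispatcher).serve_forever()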
However, WSGI servers are mostly meant to be used as app servers, not frontend servers. I would use nginx, Apache, lighttpd, or another well-tested frontend server and let it do its job (see the nginx sketch after the list below).
A few reasons for using a frontend server:
they check request integrity for security
they support SSL
they are usually more robust
they can act as load balancers across several WSGI processes in order to scale
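For the nginx route mentioned in the question (each gevent site on its own internal port), the front-end configuration might look roughly like this; the domains and ports are assumptions:

# One server block per site, each proxying to its own gevent process
server {
    listen 80;
    server_name www.domain1.com;
    location / {
        proxy_pass http://127.0.0.1:8001;
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name www.domain2.com;
    location / {
        proxy_pass http://127.0.0.1:8002;
        proxy_set_header Host $host;
    }
}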

If you're open to CherryPy (as the WSGI server) with Bottle (as the application), I have used this combination for a while and found it pretty stable.
The following is an example for multiple virtual hosts.
import cherrypy
from bottle import Bottle

app1 = Bottle()
app2 = Bottle()

@app1.route('/')
def homePage():
    return "========= home1 ==============="

@app2.route('/')
def homePage_2():
    return "========= home2 ==============="

# Dispatch on the Host header to the matching Bottle app.
vhost = cherrypy._cpwsgi.VirtualHost(
    None,
    domains={
        'www.domain1.com': app1,
        'www.domain2.com': app2,
    })

cherrypy.tree.graft(vhost)
cherrypy.config.update({
    'server.socket_host': '192.168.1.4',
    'server.socket_port': 80,
})
cherrypy.engine.start()
cherrypy.engine.block()

Related

mod_wsgi and Apache configuration

I'm facing the following issue: I have a public web server running at a given URL, say, www.mysite.com.
It uses apache2.
I've developed a Python web app and I want to make it publicly accessible.
Locally, I use the command
mod_wsgi-express start-server wsgi.py
to start the server, and everything works.
However, I would like to link only a specific URL to my app, such as mysite/my_test, leaving apache2 to serve all the other requests.
In other words, I would like to set the server URL for mod_wsgi-express to mysite/my_test on port 80.
By default I get Server URL: http://localhost:8000, and I would like to change this.
I've tried the --mount-point option, but I didn't see any difference.
I know I can change the apache configuration by adding WSGIScriptAlias, but I'm facing multiple issues, so I'm looking for the quickest and easiest way.
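The WSGIScriptAlias configuration I'm referring to would be something roughly like this (the paths are placeholders):

# Mount the app under /my_test only; apache2 keeps serving everything else
WSGIScriptAlias /my_test /var/www/my_test/wsgi.py
<Directory /var/www/my_test>
    Require all granted
</Directory>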
Hope this is clear.
Thanks.

Why using NGINX or how to deploy Meteor app correctly?

I am going to finish my Meteor app in a few weeks, so the problem I will face is how to make the app available to other people.
First I bought a droplet on Digital Ocean, and started reading about ways to deploy a Meteor app to a production server.
I found two totally different ways to do it!
The first one is pretty simple (and so I really love it). Here is the link. I have to do a few steps: create a droplet with Ubuntu 14.04, connect to the droplet via ssh, then install and run mup. After that anybody can access my app. I worried that there was no SSL support (my project is e-commerce, so I really need an https connection), but then I found a short article in the mup docs, How to set up SSL with Mup. So everything is perfect at first glance.
But then I found another way to deploy a Meteor app. Here is the link. It is much more complicated. First I need to install node and mongo on my droplet, then install and configure nginx, and then, after many steps, comes the Meteor installation. The author doesn't explain why people should deploy the app this way, assuming it is obvious to everyone. His explanation is: "The problem with this is that it isn't wise to run an application like Meteor through your public port (which is 80)".
I admit I have no experience or knowledge in such matters. The one thing I can say for certain is that I need a really proper way to deploy an e-commerce Meteor app, and it doesn't matter if I lose many hours of sleep doing it.
So the question is: which way is the proper one? And (this is important) why?
Both security and performance are important for this project. I am also going to use prerender.io or spiderable (for SEO purposes) and fast render, in case that influences your answers. Thank you for your answers, guys!
You can deploy your Meteor app on a server via different mechanisms; there are lots of ways to do the same thing.
As you said, you found two of them.
With the first link you used Meteor Up to deploy your application, and that deployment succeeded.
In the second approach you need to first log in to the server, create a user, install everything the server machine needs, and after that set up Nginx.
So, as I understand it, your question is really about Nginx, and you want to know:
1) Why do we need to use Nginx?
2) Which approach is better?
The answer to your first question is as follows:
Nginx (pronounced "engine x") is a web server used for many purposes, here mainly as a reverse proxy (proxy_pass). Using nginx you can route a request from one URL to another, and the actual backend address stays hidden from the client (for security and for redirection).
In Meteor, for example, your app runs on port 3000 by default, so one option is to open port 3000 and run your application on that port. But via nginx you can serve your app on port 80 and configure in nginx the address where you want each request to be sent,
for example port 3000.
Users then don't know where the request actually goes: they see port 80, but the request really goes to port 3000. That is one advantage of using nginx; there are lots more.
To configure nginx you just need to install it; on Ubuntu that is a simple command:
sudo apt-get install nginx
Then adjust the settings in the nginx configuration file, which is under the following path:
/etc/nginx/sites-enabled/default
Just open this file and set up your configuration there, for example:
server {
    listen 80;
    server_name localhost;

    root /home/parveen/meteor/app;

    location / {
        index /index.html;
    }

    location /api {
        proxy_pass http://localhost:3000;
    }
}
In this way you can configure nginx as you want; please read the nginx documentation for details.
After that you need to start your server using forever or nohup (whichever you prefer), so that your server will not stop when you log out of the server.
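For example, starting a plain Meteor bundle with nohup might look roughly like this (paths, port, and URLs are placeholders):

# after extracting the bundle and installing its server dependencies
cd /home/parveen/meteor/app/bundle
PORT=3000 ROOT_URL=http://example.com MONGO_URL=mongodb://localhost:27017/meteor nohup node main.js > app.log 2>&1 &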
Conclusion:
In the second approach you need to install everything yourself over an ssh login to your server, then configure nginx, and then run your server.
If you make any changes you need to push them to the server, stop the Meteor app, and restart it. But this is the more secure approach, and you can do whatever you want.
In the first approach, mup (Meteor Up) does much of the work for you. You just need a little configuration; you can use Docker or, as described in the blog (droplet) link you shared, just run the meteor up command, which first creates a bundle of your app and then runs it. So with the first approach, if you make changes you don't need to log in to your server and update them by hand; you just run the same command again, and it creates a new bundle with the updates and runs your project. But I don't think that is more secure.
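For reference, a classic mup.json looks roughly like this (all values are placeholders; check the mup documentation for the exact current format):

{
  "servers": [{ "host": "1.2.3.4", "username": "root", "password": "secret" }],
  "setupMongo": true,
  "setupNode": true,
  "appName": "myapp",
  "app": "/path/to/your/meteor/app",
  "env": { "PORT": 80, "ROOT_URL": "http://example.com" }
}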
So it depends on your requirements and on which approach you prefer.
If you have any questions, you are most welcome.
Hope this would help!
Thanks

What configuration should be specified to bring different servers in same URL space in CloudBees PaaS

I am trying to use the CloudBees PaaS (RUN#CloudBees) to consolidate three distinct uses under the same URL space:
root (/) main landing, marketing page
app (/app) java app running in CloudBees
blog (/blog) another java app running in cloudbees or possibly outside (example.wordpress.com)
If I were doing it myself in a datacenter or in AWS, I would set up a reverse proxy (possibly Varnish) and configure it to map the URL space as follows:
root (/): www.example.com/ --> CMS running as cloudbees app example-cms.cloudbees.net
app (/app) java app running in CloudBees www.example.com/app -> app.example.com
blog (/blog) similarly www.example.com/blog -> example.wordpress.com or exampleblog.cloudbees.net
How can I achieve the same with CloudBees? Can it be done? Is this too much to expect from a PaaS vendor?
An interesting problem, and a few solutions:
Use domains instead of paths (eg blog.example.com etc) - so you can use DNS to direct things
Build an app that essentially proxies traffic for you (this could run on cloudbees or elsewhere) - there are lots of ways to do this.
Use some routing/proxy service (like CloudFlare) which may let you set up routing rules (so it can proxy traffic).
My preference would always be for number 1 - DNS is a great way to do things like this.
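For option 1, the DNS records might look roughly like this in zone-file terms (all names are placeholders, reusing the hosts from the question):

; each subdomain points at the app that serves it
www.example.com.    IN CNAME  example-cms.cloudbees.net.
app.example.com.    IN CNAME  example-app.cloudbees.net.
blog.example.com.   IN CNAME  example.wordpress.com.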
With this approach you can still have /blog-style URLs in your PaaS application and have them do a 302 redirect to the real blog.example.com, which gives you a bit of both.
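As a sketch of that redirect idea, using Bottle from earlier on this page (the blog host is a placeholder; the same pattern applies in any framework):

from bottle import Bottle, redirect

app = Bottle()

@app.route('/blog')
@app.route('/blog/<path:path>')
def blog_redirect(path=''):
    # explicit 302 to the real blog host
    redirect('http://blog.example.com/' + path, 302)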

Apache Jakarta (Tomcat) Connector to forward traffic to specific Tomcat

There is one use case I am unable to solve so far with the Apache Jakarta (Tomcat) Connector load balancing feature.
I have one IIS site which has one Apache Tomcat Connector attached to it. I need to "forward" the traffic to a different Tomcat depending on the URI that is requested. It is pretty simple to do when you only have apps with a specific context (like /app1, /app2, etc). My problem is that I have one app (in one Tomcat) that is at / (ROOT) and one other app (in another Tomcat) that is at /app1.
I have tried the following config in uriworkermap.properties:
/*=loadBalancer1
/app1/*=loadBalancer2
But this doesn't work, because "loadBalancer1" takes all the traffic. "loadBalancer2" is being ignored, which makes sense, since /app1/* also matches /* (pattern-wise).
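(For reference, the two balancer workers above would be defined in workers.properties along these lines; hosts and ports are placeholders:)

worker.list=loadBalancer1,loadBalancer2

worker.tomcatRoot.type=ajp13
worker.tomcatRoot.host=localhost
worker.tomcatRoot.port=8009
worker.loadBalancer1.type=lb
worker.loadBalancer1.balance_workers=tomcatRoot

worker.tomcatApp1.type=ajp13
worker.tomcatApp1.host=localhost
worker.tomcatApp1.port=8010
worker.loadBalancer2.type=lb
worker.loadBalancer2.balance_workers=tomcatApp1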
I also tried adding an exclusion, like so:
/*=loadBalancer1
!/app1/*=loadBalancer1
/app1/*=loadBalancer2
But that doesn't work either: "loadBalancer1" is still taking all the traffic while simply ignoring the "/app1/*" URI pattern, and "loadBalancer2" is ignored again.
Any suggestion, keeping in mind that I cannot have 2 IIS sites, nor can I move the app that is at / (ROOT) to a different context path in Tomcat?
Thank you
Edit:
Instead of using just one Apache Tomcat Connector, I use one connector for each Tomcat on the same IIS site.
uriworkermap.properties #1: for the Tomcat with the app at / (ROOT)
/*=wlb
!/app1/*=wlb
uriworkermap.properties #2: for the Tomcat with the app at /app1
/app1/*=wlb
Connector #1 will ignore traffic on URI "/app1/*", but connector #2 will catch it (and vice versa).
Now I can set different VM options and memory allocation to my apps!
I am open to comments or better solutions.
Does it work when you reverse the order, like this?
/app1/*=loadBalancer2
/*=loadBalancer1

How can I test a comet ajax site on a single host and work around browser simultaneous connection limit?

I am using the comet long-polling technique with Apache, PHP, and jQuery.
I've got a basic comet update running and it works great. I'm now attempting to build a more complex comet script, and I want a better way to debug.
My comet scripts use $.ajax() with a long timeout, and the server side just sleeps until it either runs up to the timeout or has an event to send to the client. The comet requests go to a different subdomain than the main ajax requests.
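(For context, the server-side pattern just described, sketched in Python rather than my actual PHP; the per-client event queue is hypothetical:)

import queue

def wait_for_event(events, timeout=25.0):
    # Block until an event arrives or the long-poll timeout expires.
    try:
        return events.get(timeout=timeout)  # event arrived: respond immediately
    except queue.Empty:
        return None  # timed out: respond empty so the client reconnects

# Hypothetical usage: one queue.Queue per connected client; the handler
# sends the event (or nothing) and the browser immediately re-polls.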
For normal pages I edit and test on a linux laptop. I've got apache, mysql, and php with a test database and mirror image of the site. I can edit, save, and see the changes with no upload step. For the comet stuff I've been having to upload to a server to test. This requires me to set up a few fake servers, but mostly it requires me to upload changed files for each test. I've got a mostly automatic upload script, but it's still too slow.
The problem with testing locally is the long timeout. The browser won't open another connection to the same server while the comet request is still open. I don't have a subdomain locally, so all the requests go to the same server and basically block each other.
I've tried a number of things to make this work and none really do it. First I tried changing my browser's setting for the number of simultaneous connections. This didn't work in Firefox on Linux, and I didn't find anything about changing this limit in other browsers.
I tried setting my hosts file to give me two names that map to my IP address. Then I tried configuring VirtualHost directives in Apache, but that didn't work; I think because Apache looks for an actual DNS server to tell it the hostname, not just my /etc/hosts file. Maybe I could run a local DNS server to fool Apache into thinking my box has two names, but that seems like a really long way around this problem.
So, does anyone have an idea of how to make this work on one ip address/host?
I'm new to the comet thing, so maybe I've just got the wrong idea about something. Maybe this isn't even possible. Either way, it's time to just ask if this is already a solved problem.
It really should be possible to use /etc/hosts to fool Apache. It certainly does work on Ubuntu Hardy with Apache 2.2.
Try giving different hostnames to your local address. Simply add a line like this to /etc/hosts:
127.0.0.1 a.example.com b.example.com c.example.com d.example.com
(Note: use a tab after the IP address.)
Validate this with a ping
ping a.example.com
In your Apache configuration, you may use a wildcard alias together with a name-based virtual host:
<VirtualHost *:80>
    ServerName example.com
    ServerAlias *.example.com
    ## snip ##
</VirtualHost>
Instead of using example.com, you might want to use something that's under your control. I use a local subdomain of our company's domain (e.g. something.local.molindo.at).
Now you can use different subdomains for your test, each with its own limitation on concurrent connections.
You may need to restart your browser to get this working.
I have made something similar, and my hosting provider reports that my maximum query limit is reached, which actually should not happen. But I have read that if your PHP code sits in an infinite loop (i.e. the sleep mode), the host can detect it and count the open database connection as using more queries than allowed. That is a lot to presume, but I found a workaround based on the same speculation.