HHVM serving multiple domains - virtual hosts

I'm trying to host several domains on the same VPS, using HHVM to serve the pages.
I'm wondering how I can write the VirtualHost entries so that each domain points to the right folder in my /var/www directory.
For example, xxx.domain.com >> /var/www/domain.com/

Good news: since the release of HHVM 2.3 (Dec 13, 2013), you can run HHVM in FastCGI mode. Put either Nginx or Apache in front and it works wonderfully.
Reference: http://www.hhvm.com/blog/1817/fastercgi-with-hhvm
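In FastCGI mode, each domain simply gets its own nginx server block with its own root, and HHVM executes whatever script nginx names in SCRIPT_FILENAME. A minimal sketch for one domain, assuming HHVM's FastCGI server listens on 127.0.0.1:9000 (the address and paths are assumptions to adjust):
server {
    listen 80;
    server_name xxx.domain.com;
    root /var/www/domain.com;
    index index.php;

    location ~ \.php$ {
        # HHVM in FastCGI mode (assumed address)
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
Repeat the block per domain, changing only server_name and root.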
With an older version of HHVM you can run multiple server instances on internal ports, e.g. 8001, 8002, etc., and configure Nginx as a reverse proxy (Apache can do that too):
upstream node1 {
    server 127.0.0.1:8001;
}
upstream node2 {
    server 127.0.0.1:8002;
}
server {
    ...
    server_name server1.com;
    location ~ \.php$ {
        proxy_pass http://node1;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Ssl on; # only for https
    }
}
server {
    ...
    server_name server2.com;
    location ~ \.php$ {
        proxy_pass http://node2;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Ssl on; # only for https
    }
}
Of course this setup takes up a lot of memory. Go with 2.3 if you can upgrade.

Apparently this is not yet possible. According to the official GitHub repository where the code is hosted, there is an open issue about exactly this problem, tagged as a wishlist / feature request.
Probably the best way to solve it is to run an HHVM server for each domain (meaning each domain gets its own root folder) and use Apache or Nginx as a proxy.
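With Apache in front, that per-domain proxy could look something like this hedged sketch (one HHVM instance per domain on internal ports, as in the Nginx example above; requires mod_proxy and mod_proxy_http):
<VirtualHost *:80>
    ServerName server1.com
    # Everything for this domain goes to its own HHVM instance
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:8001/
    ProxyPassReverse / http://127.0.0.1:8001/
</VirtualHost>

<VirtualHost *:80>
    ServerName server2.com
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:8002/
    ProxyPassReverse / http://127.0.0.1:8002/
</VirtualHost>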

On Nginx, the only way I was able to get this to work was to use / as the SourceRoot for HHVM, and to add a / in fastcgi_param SCRIPT_FILENAME /$document_root$fastcgi_script_name; in my /etc/nginx/hhvm.conf file. With that combination I'm running ~7 sites without a problem so far, on Ubuntu 13.10 64-bit.
In /etc/hhvm/server.hdf, change SourceRoot = /var/www to SourceRoot = /:
Server {
    Port = 9000
    SourceRoot = /
    DefaultDocument = index.php
}
In /etc/nginx/hhvm.conf, add a / in front of $document_root$fastcgi_script_name;:
location ~ \.php$ {
    fastcgi_split_path_info ^(.+?\.php)(/.*)$;
    fastcgi_keep_conn on;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME /$document_root$fastcgi_script_name;
    fastcgi_intercept_errors on;
    fastcgi_read_timeout 300;
    include fastcgi_params;
}
You may also need to change fastcgi_param SCRIPT_FILENAME $fastcgi_script_name; to fastcgi_param SCRIPT_FILENAME /$document_root$fastcgi_script_name; - at least I had to with mine.
There may be security implications to using / as your SourceRoot. I mitigate this as much as I can by firewalling port 9000 so that only localhost can reach it; alternatively, you can use a socket instead, as sketched below. Not fool-proof, but from what I've seen so far it's OK.
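For the socket variant, nginx can reach HHVM over a Unix domain socket so the FastCGI endpoint never touches the network at all. A sketch, under the assumption that your HHVM version supports a FileSocket setting in place of Port (check your version's docs; the socket path is made up):
# /etc/hhvm/server.hdf - socket instead of a TCP port (assumed setting)
Server {
    FileSocket = /var/run/hhvm/server.sock
    SourceRoot = /
    DefaultDocument = index.php
}
# /etc/nginx/hhvm.conf - point fastcgi_pass at the socket
location ~ \.php$ {
    fastcgi_pass unix:/var/run/hhvm/server.sock;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME /$document_root$fastcgi_script_name;
    include fastcgi_params;
}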

Related

Harbor 2.5.0 behind Apache reverse proxy

I installed Harbor on a server inside the company farm, and I can use it without problems through https://my-internal-server.com/harbor.
I tried to add reverse proxy rules to Apache to access it through the public server for the harbor, v2, chartrepo, and service endpoints, like https://my-public-server.com/harbor, but this doesn't work.
For example:
ProxyPass /harbor https://eslregistry.eng.it/harbor
ProxyPassReverse /harbor https://eslregistry.eng.it/harbor
I also set in harbor.yaml:
external_url: https://my-public-server.com
When I try to access https://my-public-server.com/harbor with the browser, I see a Loading... page and 404 errors for static resources, because it tries to fetch them with this GET:
https://my-public-server.com/scripts.a459d5a2820e9a99.js
How can I configure it to work?
You should pass the whole domain, not only the path. Take a look at the official Nginx config to get an idea of how this might look:
upstream harbor {
    server harbor_proxy_ip:8080;
}
server {
    listen 443 ssl;
    server_name harbor.mycomp.com;
    ssl_certificate /etc/nginx/conf.d/mycomp.com.crt;
    ssl_certificate_key /etc/nginx/conf.d/mycomp.com.key;
    client_max_body_size 0;
    chunked_transfer_encoding on;
    location / {
        proxy_pass http://harbor/;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering off;
        proxy_request_buffering off;
    }
}
Note that you should disable proxy buffering and request buffering, as in the location block above.
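Since the question fronts Harbor with Apache, a hedged Apache translation of the same idea (certificate paths are placeholders; requires mod_proxy, mod_proxy_http, mod_ssl and mod_headers):
<VirtualHost *:443>
    ServerName my-public-server.com
    SSLEngine on
    SSLCertificateFile /etc/httpd/ssl/my-public-server.com.crt
    SSLCertificateKeyFile /etc/httpd/ssl/my-public-server.com.key

    # Proxy the whole site at the root, not just /harbor,
    # and pass the original Host header through
    ProxyPreserveHost On
    ProxyRequests Off
    ProxyPass / http://harbor_proxy_ip:8080/
    ProxyPassReverse / http://harbor_proxy_ip:8080/
    RequestHeader set X-Forwarded-Proto "https"
</VirtualHost>
With external_url: https://my-public-server.com in harbor.yaml, the static assets are then requested from the root and resolve correctly.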

Nginx load balancing with Node.js and Google OAuth

I created a Node.js site that uses Google authentication. The site is used by 100+ users concurrently, which affects performance. I understand that Nginx could help scale the site by running multiple instances of the Node.js app on multiple ports and acting as a load balancer in front of them.
So I configured Nginx, but the issue is that it does not seem to work with Google authentication. I am able to see the first page of my site and to log in via Google, but nothing works after that point.
Any suggestions as to what could be missing to make this work?
This is my configuration file:
upstream my_app {
    least_conn;       # Use Least Connections strategy
    server ip:3001;   # NodeJS Server 2 (I changed the actual IP)
    server ip:3002;   # NodeJS Server 3
    server ip:3003;   # NodeJS Server 4
    server ip:3004;   # NodeJS Server 5
    keepalive 256;
}
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    expires epoch;
    add_header Cache-Control "no-cache, public, must-revalidate, proxy-revalidate";
    server_name ip;
    access_log /var/log/nginx/example.com-access.log;
    error_log /var/log/nginx/example.com-error.log error;

    # Browsers and robots always look for these;
    # turn off logging for them
    location = /favicon.ico { log_not_found off; access_log off; }
    location = /robots.txt { log_not_found off; access_log off; }

    # Pass the request to the node.js server
    # with the correct headers for proxy-awareness
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_buffers 8 16k;
        proxy_buffer_size 32k;
        proxy_pass http://my_app;
        proxy_redirect off;
        add_header Pragma "no-cache";

        # Handle WebSocket connections
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
I just started learning about nginx. I checked that the setup works when the upstream has only one IP address, i.e. it works as a reverse proxy but not as a load balancer, and my guess is that this is due to the nature of Google authentication.
The error I receive in the error log is connection refused.
Thanks.
I figured out what was wrong. The least_conn load-balancing technique was not the right one to choose, since it does not persist sessions. I changed it to ip_hash (or hash $remote_addr;) and it is working.
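For reference, a sketch of the corrected upstream block ("ip" is the redacted address from the question):
upstream my_app {
    ip_hash;          # the same client IP always reaches the same Node.js instance
    server ip:3001;
    server ip:3002;
    server ip:3003;
    server ip:3004;
    keepalive 256;    # the balancing method must be declared before keepalive
}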

How to install GitLab separately on CentOS 7?

I wish to install GitLab on my CentOS 7 server, but I need to keep the GitLab and Apache folders separate. That is, when I type localhost I should get the index page in the HTML folder, and when I type git.example.com I should get the GitLab page. Is there any way to do this?
Might not be the best solution, but what I did was to set up a "front NGINX" to proxy my 3 services: Apache (at www), Redmine (at issues) and GitLab (at git).
Then I configured my Apache to listen on another port (say 808) and my GitLab to listen on its own port (say 809).
And I added a server configuration in NGINX with a proxy_pass, using something like this:
server {
    listen 80;
    server_name www.example.com;
    location / {
        access_log off;
        proxy_pass http://localhost:808;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
and one for the GitLab as:
server {
    listen 80;
    server_name git.example.com;
    location / {
        access_log off;
        proxy_pass http://localhost:809;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    error_page 502 /502.html;
    location = /502.html {
        root /opt/gitlab/error_pages;
    }
}

nginx location directive: authentication happening in the wrong location block?

I'm flummoxed.
I have a server that is primarily running couchdb over ssl (using nginx to proxy the ssl connection) but also has to serve some apache stuff.
Basically I want everything that DOESN'T start /www to be sent to the couchdb backend. If a url DOES start /www then it should be mapped to the local apache server on port 8080.
My config below works, with the exception that I'm getting prompted for authentication on the /www paths as well. I'm a bit more used to configuring Apache than nginx, so I suspect I'm misunderstanding something, but if anyone can see what is wrong in my configuration (below) I'd be most grateful.
To clarify my use scenario:
https://my-domain.com/www/script.cgi should be proxied to
http://localhost:8080/script.cgi
https://my-domain.com/anythingelse should be proxied to
http://localhost:5984/anythingelse
ONLY the second should require authentication. It is the authentication issue that is causing problems - as I mentioned, I am being challenged on https://my-domain.com/www/anything as well :-(
Here's the config, thanks for any insight.
server {
    listen 443;
    ssl on;

    # Any url starting /www needs to be mapped to the root
    # of the back end application server on 8080
    location ^~ /www/ {
        proxy_pass http://localhost:8080/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Everything else has to be sent to the couchdb server running on
    # port 5984 and, for security, is protected with auth_basic
    # authentication.
    location / {
        auth_basic "Restricted";
        auth_basic_user_file /path-to-passwords;
        proxy_pass http://localhost:5984;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Ssl on;
    }
}
Maxim helpfully answered this for me by pointing out that the config was correct in other respects, and that browsers requesting the favicon would trigger this behaviour: /favicon.ico does not match ^~ /www/, so it falls into the protected location / block and prompts for authentication.
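If the favicon-triggered prompt is a nuisance, an exact-match location can exempt it from authentication; a minimal sketch (answering with an empty 204 is just one option):
location = /favicon.ico {
    auth_basic off;      # no challenge for the favicon
    access_log off;
    log_not_found off;
    return 204;          # empty response instead of proxying to couchdb
}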

Nginx configuration leads to endless redirect loop

So I've looked at every sample configuration I could find, and yet every time I try to view a page that requires SSL, I end up in a redirect loop. I'm running nginx/0.8.53 and Passenger 3.0.2.
Here's the ssl config
server {
    listen 443 default ssl;
    server_name <redacted>.com www.<redacted>.com;
    root /home/app/<redacted>/public;
    passenger_enabled on;
    rails_env production;
    ssl_certificate /home/app/ssl/<redacted>.com.pem;
    ssl_certificate_key /home/app/ssl/<redacted>.key;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X_FORWARDED_PROTO https;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-Url-Scheme $scheme;
    proxy_redirect off;
    proxy_max_temp_file_size 0;

    location /blog {
        rewrite ^/blog(/.*)?$ http://blog.<redacted>.com/$1 permanent;
    }

    location ~* \.(js|css|jpg|jpeg|gif|png)$ {
        if (-f $request_filename) {
            expires max;
            break;
        }
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}
Here's the non-ssl config
server {
    listen 80;
    server_name <redacted>.com www.<redacted>.com;
    root /home/app/<redacted>/public;
    passenger_enabled on;
    rails_env production;

    location /blog {
        rewrite ^/blog(/.*)?$ http://blog.<redacted>.com/$1 permanent;
    }

    location ~* \.(js|css|jpg|jpeg|gif|png)$ {
        if (-f $request_filename) {
            expires max;
            break;
        }
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}
Let me know if there's any additional info I can give to help diagnose the issue.
It's your line here:
listen 443 default ssl;
Change it to:
listen 443;
ssl on;
(I'll call this the old style.)
Also, that along with
proxy_set_header X_FORWARDED_PROTO https;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-Url-Scheme $scheme;
proxy_redirect off;
proxy_max_temp_file_size 0;
did the trick for me. I see now I am missing the real-IP line you have, but so far this got rid of my infinite loop problem with ssl_requirement and ssl_enforcer.
I've toyed around with a bunch of these answers, but nothing worked for me. Then I realized that since I use Cloudflare, the problem might not be in the server but with Cloudflare. Lo and behold, when I set my SSL mode to Full (Strict), everything works as it should!
I found that it was this line:
proxy_set_header Host $http_host;
which should be changed to:
proxy_set_header Host $host;
According to the nginx documentation, using $http_host passes the "unchanged request-header".
Have you tried using "X-Forwarded-Proto" instead of X_FORWARDED_PROTO? I've run into a problem with this header before; it wasn't causing redirects, but changing it fixed things for me.
Since you have a rewrite statement in both the ssl and non-ssl sections:
location /blog {
    rewrite ^/blog(/.*)?$ http://blog.<redacted>.com/$1 permanent;
}
where is the server section for blog.<redacted>.com? Could that be the source of the issue?
I had a similar issue with my Symfony2 application, albeit from a different cause: I had set fastcgi_param HTTPS off; when I of course needed fastcgi_param HTTPS on; in my nginx configuration.
location ~ ^/(app|app_dev|config)\.php(/|$) {
    satisfy any;
    allow all;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_split_path_info ^(.+\.php)(/.*)$;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param HTTPS on;
}
In case someone else stumbles on this: I was attempting to configure both http and https via the same server {} block, but only added the "listen 443" directive, believing that "this line is default and implied" meant it would also listen on 80. It didn't. Uncommenting the "listen 80" line so that both listen lines were present corrected the infinite loop. No idea why it was getting a redirect at all, but it was.
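In other words, a server block that should answer on both ports needs both listen directives; a minimal sketch (names and paths are placeholders):
server {
    listen 80;
    listen 443 ssl;
    server_name example.com;
    ssl_certificate /etc/nginx/ssl/example.com.pem;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
    root /var/www/example.com;
}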
For those who are desperately searching for why their ownCloud keeps producing a redirect loop in spite of a good configuration file, I've found why it's not working.
My config: nginx + php-fpm + mysql on a fresh CentOS 6.5.
When installing php-fpm and nginx, the default ownership of /var/lib/php/session/ is root:apache.
php-fpm behind nginx stores PHP sessions there; if nginx does not have permission to write to that directory, it fails miserably to keep any login session, resulting in an infinite loop.
So just add nginx to the apache group (usermod -a -G apache nginx) or change the ownership of that folder.
Have a nice day.
X_FORWARDED_PROTO, as written in your file, can cause errors, and it did in my case. X-Forwarded-Proto is correct; the hyphens matter more than uppercase or lowercase letters.
You can avoid those problems by sticking to conventions ;)
See also: Custom HTTP headers: naming conventions, and http://www.ietf.org/rfc/rfc2047.txt