Proxy pass to multiple upstreams - apache

Is there a directive in apache or nginx (preferably) that allows replicating an incoming stream to multiple upstreams simultaneously?
The reason I need this: I want to stream live video content from one client to a number of Flash RTMP servers that will make that content available to a number of clients.
This setup is working on one streaming server, but I want to add more.
Any help is greatly appreciated.

I am assuming you are using something similar to this (the proxy_pass directive):

location / {
    proxy_pass http://192.168.1.11:8000;
    proxy_set_header X-Real-IP $remote_addr;
}
Use this instead (see http://wiki.nginx.org/NginxHttpUpstreamModule):
upstream backend {
    server backend1.example.com weight=5;
    server backend2.example.com:8080;
    server unix:/tmp/backend3;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
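Note that an upstream block load-balances each request across the listed servers; it does not duplicate traffic to all of them. If you genuinely need to replicate every incoming request to several backends at once, newer nginx releases (1.13.4+) ship the ngx_http_mirror_module. A minimal sketch, assuming HTTP ingest; the second backend address is made up for illustration:

location / {
    proxy_pass http://192.168.1.11:8000;   # primary backend serves the response
    mirror /mirror_a;                      # re-issue a copy of each request
}

location = /mirror_a {
    internal;
    # hypothetical second backend; responses to mirrored requests are discarded
    proxy_pass http://192.168.1.12:8000$request_uri;
}

Keep in mind this only covers HTTP traffic; if your ingest is RTMP rather than HTTP, mirroring at the nginx HTTP layer will not help.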

Related

Nginx - proxy_pass to google storage bucket static page: VueJS sub paths cause 404 error, VueJS router not kicking in

I'm hosting a VueJS app in a Google Cloud Storage bucket. The app works only when the domain name is used without a subpath (www.domain.com). When using a URL like www.domain.com/sub/path I get a 404 error, as it seems that NGINX is looking for this path in the bucket instead of letting the VueJS router take over.
I tried to follow an older thread, but in my case it did not help.
Any ideas how to fix this?
location = / {
    proxy_pass https://gcs/mygoogle-cloud-bucket/main.html;
    proxy_set_header Host storage.googleapis.com;
}

location / {
    rewrite /(.*) /$1 break;
    proxy_pass https://gcs/mygoogle-cloud-bucket/$1$is_args$args;
    proxy_redirect off;
    index main.html;
    proxy_set_header Host storage.googleapis.com;
}
It seems like what you need to do is to create a Static Website using Cloud Storage and VueJS.
With this being the case, there are a few things that need to be clarified:
Cloud Storage doesn't support HTTPS, so you need to use a Load Balancer.
Make sure the objects in your bucket are public.
Build the Vue project with Relative Paths.
It is also recommended to set the special pages, but this is not necessary.
Set up your load balancer and the SSL certificate as mentioned here.
Configure routing rules.
Make sure you have connected your custom domain to your load balancer.
This should get you going with your site. If you would like to check a working example, you can take a look at this one.
Your code should look something like:
location / {
    rewrite /$ $uri$index_name;
    proxy_set_header Host storage.googleapis.com;
    proxy_pass https://gs/$bucket_name$uri;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
}
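If the remaining 404s come from the bucket simply not containing objects for the Vue router's sub-paths, one common pattern is to intercept the bucket's 404 and fall back to main.html so the client-side router can take over. A sketch reusing the gcs upstream and bucket name from the question (the /spa_fallback location name is made up; untested against a real bucket):

location / {
    proxy_set_header Host storage.googleapis.com;
    proxy_pass https://gcs/mygoogle-cloud-bucket$uri;
    proxy_intercept_errors on;        # let nginx handle upstream errors
    error_page 404 = /spa_fallback;   # hand unknown paths to the SPA entry point
}

location = /spa_fallback {
    internal;
    proxy_set_header Host storage.googleapis.com;
    proxy_pass https://gcs/mygoogle-cloud-bucket/main.html;
}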

NGINX - Using variable in proxy_pass breaks routing

I'm trying to get NGINX's resolver to automatically update the DNS resolution cache, so I'm transitioning to using a variable as the proxy_pass value to achieve that. However, when I use a variable, every request goes to the root endpoint of the upstream and any additional path in the URL is cut off. Here's my config:
resolver 10.0.0.2 valid=10s;

server {
    listen 80;
    server_name localhost;

    location /api/test-service/ {
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # If these 2 lines are uncommented, 'http://example.com/api/test-service/test' goes to 'http://example.com/api/test-service/'
        set $pass_url http://test-microservice.example.com:80/;
        proxy_pass $pass_url;

        # If this line is uncommented, things work as expected. 'http://example.com/api/test-service/test' goes to 'http://example.com/api/test-service/test'
        # proxy_pass http://test-microservice.example.com:80/;
    }
}
This doesn't make any sense to me because the hardcoded URL and the value of the variable are identical. Is there something I'm missing?
EDIT: Ah, so I've found the issue. But I'm not entirely sure how to handle it. Since this is a reverse proxy, I need the proxy_pass to REMOVE the /api/test-service/ from the URI before it passes it to the proxy. So..
This:
http://example.com/api/test-service/test
Should proxy to this:
http://test-microservice.example.com:80/test
But instead proxies to this:
http://test-microservice.example.com:80/api/test-service/test
When I'm not using a variable, it drops it no problem. But the variable adds it. Is that just inherently what using the variable will do?
There is a small point you missed in the documentation.
When variables are used in proxy_pass:
location /name/ {
    proxy_pass http://127.0.0.1$request_uri;
}
In this case, if URI is specified in the directive, it is passed to the server as is, replacing the original request URI.
So your config needs to be changed to:

set $pass_url http://test-microservice.example.com:80$request_uri;
proxy_pass $pass_url;
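Note that $request_uri still contains the /api/test-service/ prefix, so the line above alone will not strip it. Building on the documented behavior (when variables are used and a URI is specified in the directive, it replaces the original request URI), a regex capture can rebuild the upstream URI without the prefix. A minimal sketch, untested, reusing the hostname and resolver from the question:

location ~ ^/api/test-service/(?<rest>.*)$ {
    # rebuild the upstream URI from the captured remainder, preserving the query string
    set $pass_url http://test-microservice.example.com:80/$rest$is_args$args;
    proxy_pass $pass_url;
}

With this, http://example.com/api/test-service/test should proxy to http://test-microservice.example.com:80/test.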

Getting real IP with MUP and SSL

We are using MUP for Meteor deployment to AWS. A couple of weeks ago we got excited that we can now switch to a free cert, thanks to Letsencrypt and Kadira. Everything was working very nicely, until I realized from the logs that the client IP is no longer being passed through the proxy... No matter what I do, I see 127.0.0.1 as my client IP. I was trying to get it in methods using this.connection.clientIP or the headers package.
Well, after doing much research and learning in depth how stud and nginx work, I came to the conclusion that this was never working.
The best solution I came up with is to use proxy_protocol as described by Chris, but I could not get it to work.
I have played with settings of /opt/stud/stud.conf and attempted to turn write-proxy and proxy-proxy settings on.
This is what my nginx config looks like:
server {
    listen 80 proxy_protocol;
    server_name www.example.com example.com;

    set_real_ip_from 127.0.0.1;
    real_ip_header proxy_protocol;

    access_log /var/log/nginx/example.access.log;
    error_log /var/log/nginx/example.error.log;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto http;
    }
}
Here is what my headers look like on production EC2 server:
accept:"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8"
accept-encoding:"gzip, deflate, sdch"
accept-language:"en-US,en;q=0.8"
cache-control:"no-cache"
connection:"upgrade"
host:"127.0.0.1:3000"
pragma:"no-cache"
upgrade-insecure-requests:"1"
x-forwarded-for:"127.0.0.1"
x-forwarded-proto:"http"
x-ip-chain:"127.0.0.1,127.0.0.1"
x-real-ip:"127.0.0.1"
So, the question of the day: using MUP with SSL, is there a way to get a pass-through client IP address?
I know you said you have tried using headers, but you may give it another shot and see if you can get something this way. I was having a lot of problems with x-forwarded-for counts not staying consistent, but if I pull from the header chain, [0] is always the client IP.
Put this code in your /server folder:
Meteor.methods({
    getIP: function() {
        var header = this.connection.httpHeaders;
        var ipAddress = header['x-forwarded-for'].split(',')[0];
        return ipAddress;
    }
});
In your browser console:
Meteor.call('getIP', function(err, result) {
    if (!err) {
        console.log(result);
    } else {
        console.log(err);
    }
});
See what you get from that response. If that works, you can just call the method on Template.rendered or whenever you need the IP.
Otherwise, I'm pretty sure you should be able to set the IP to an arbitrary header in your nginx conf and then access it directly in the req object.
By the way, in the nginx config you included, I think you need to use real_ip_header X-Forwarded-For; so that the real_ip module will use that header to locate the client IP, and you should also set real_ip_recursive on; so that it ignores your trusted set_real_ip_from address, as sketched below.
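In config terms, that suggestion would look something like this (a sketch, assuming nginx was built with the real_ip module):

set_real_ip_from 127.0.0.1;       # trust the local stud/proxy hop
real_ip_header X-Forwarded-For;   # take the client IP from this header
real_ip_recursive on;             # skip past trusted addresses in the chain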
Alright, so after a sleepless night and learning everything I could about the way STUD and the HAProxy PROXY protocol work, I came to a simple conclusion: it's simply not supported.
I knew I could easily go back to have SSL termination at Nginx, but I wanted to make sure that my deployment has automation as MUP.
Solution? MUPX. The next version of MUP, but still in development. It uses Docker and has SSL termination directly at Nginx.
So there you have it. Lesson? Stable is not always a solution. :)

Laravel routes behind reverse proxy

Ok, so for development purposes, we have a dedicated web server. It's not currently connected directly to the internet, so I've set up an apache reverse proxy on another server, which forwards to the development server.
This way, I can get web access to the server.
The problem is that the routes in Laravel are now being prefixed with the internal server IP address or the server's computer name.
For example, I go to http://subdomain.test.com, but all the routes generated using the route() helper display the following URL: http://10.47.32.22, not http://subdomain.test.com.
The reverse proxy is setup as such:
<VirtualHost *:80>
    ServerName igateway.somedomain.com
    ProxyRequests Off

    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>

    ProxyPass / http://10.47.32.22:80/
    ProxyPassReverse / http://10.47.32.22:80/

    <Location />
        Order allow,deny
        Allow from all
    </Location>
</VirtualHost>
I have set the actual domain name in config/app.php.
Question
How can I set the default URL to use in routing? I don't want it using the internal addresses, because that defeats the point of the reverse proxy.
I've tried enclosing all my routes in a Route::group(['domain' ... group, which doesn't work either.
I ran into the same (or a similar) problem when a Laravel 5 application was not aware of being behind an SSL load-balancer.
I have the following design:
client talks to an SSL load balancer over HTTPS
SSL load balancer talks to a back-end server over HTTP
That, however, causes all the URLs in the HTML code to be generated with the http:// schema.
The following is a quick'n'dirty workaround to make this work, including the schema (http vs. https):
Place the following code at the top of app/Http/routes.php (in the latest versions of Laravel, use routes/web.php):
$proxy_url = getenv('PROXY_URL');
$proxy_schema = getenv('PROXY_SCHEMA');

if (!empty($proxy_url)) {
    URL::forceRootUrl($proxy_url);
}

if (!empty($proxy_schema)) {
    URL::forceSchema($proxy_schema);
}
Then add the following line to your .env file:
PROXY_URL = http://igateway.somedomain.com
If you also need to change the schema in the generated HTML code from http:// to https://, just add the following line as well:
PROXY_SCHEMA = https
In the latest versions of Laravel, the forceSchema method has been renamed to forceScheme, and the code above should look like this:

if (!empty($proxy_schema)) {
    URL::forceScheme($proxy_schema);
}
Ok, so I got it. Hopefully this will help someone in the future.
It seems like Laravel ignores the url property in the config/app.php file for HTTP requests (the docs state it's only used by artisan), and instead uses either the HTTP_HOST or SERVER_NAME value provided by apache to generate the domain for URLs.
To override this default URL, go to your routes.php file and use the following method:
URL::forceRootUrl('http://subdomain.newurl.com');
This will then force the URL generator to use the new url instead of the HTTP_HOST or SERVER_NAME value.
Go to app/Http/Middleware/TrustProxies.php and change the protected variable $proxies like this:
protected $proxies = ['127.0.0.1'];
Just this! Be happy!
This is because the Laravel route is generated from the host the server reports, not from config/app.php itself. My solution is to add the proxy_set_header Host directive to nginx's config:
server {
    listen 80;
    server_name my.domain.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host my.domain.com;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:8000;
    }
}
I'm using Laravel 8 with nginx in Docker, running behind another nginx on the host machine, so yes, it's double nginx.
Add this code to the App\Providers\AppServiceProvider class:
// Requires: use Illuminate\Support\Str; use Illuminate\Support\Facades\{Config, URL};
if (Str::contains(Config::get('app.url'), 'https://')) {
    URL::forceScheme('https');
}
It seems Laravel has a more convenient solution. Check the answer here: How do I configure SSL with Laravel 5 behind a load balancer (ssl_termination)?
Following up on TimeLord's solution:
In the latest version of Laravel, the method for forcing the schema has been renamed; it is now:
URL::forceScheme()
I know this topic is old, but I've been solving this issue by replacing the following line in my DatabaseSessionHandler.php (Illuminate/Session):

protected function ipAddress()
{
    return $_SERVER['HTTP_X_FORWARDED_FOR'];
    // return $this->container->make('request')->ip();
}
Of course, you need to migrate the session table first and set up the config (.env variable SESSION_DRIVER=database).
For nginx, you don't need to do anything extra in Laravel. The fix can be done from the nginx config:
server {
    listen 80;
    listen [::]:80 ipv6only=on;
    server_name sub.domain.dev;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host sub.domain.dev;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:8000;
    }
}
Figured out a way that's much cleaner and only does exactly what the load balancer tells it to. Add this function to your RouteServiceProvider:
protected function enforceProtocol()
{
    if (request()->server->has('HTTP_X_FORWARDED_PROTO')) {
        URL::forceScheme(request()->server()['HTTP_X_FORWARDED_PROTO']);
    }
}
and in the boot section, simply call it like so:
public function boot()
{
    $this->enforceProtocol();
    // other stuff
}

How can I host multiple Rails apps with nginx and Unicorn?

I currently have one site up and running thanks to "Deploying to a VPS".
I have searched, but I need a step-by-step guide to get this working. The results I found were not explained well enough to help me understand how to accomplish this.
Basically, you do the same thing you did to get your first application running, minus the Nginx installation. So, however you got the Unicorn instance for your first application running, do it again for your next application.
You can then just add another server block into your Nginx config with an upstream that points to that new Unicorn instance.
One Nginx running for the entire machine will do fine, with one Unicorn running per application.
Hope this helps some.
Here is a sample of the additional server block you would need to add for Nginx to serve additional applications:
upstream unicorn_app_x {
    server unix:/path/to/unicorn/socket/or/http/url/here/unicorn.sock;
}

server {
    listen 127.0.0.1:80;
    server_name mysitehere.com aliasfor.mysitehere.com;
    root /path/to/rails/app/public;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;

        if (!-f $request_filename) {
            proxy_pass http://unicorn_app_x;
            break;
        }
    }
}
The instructions provided above were not enough.
My startup file, /etc/init.d/unicorn, had several references to a single host's configuration. With those configurations, it would not serve a second host.
So I created a new startup instance of unicorn:
cp /etc/init.d/unicorn /etc/init.d/unicorn_app_x
Then I edited /etc/init.d/unicorn_app_x, replacing references to the first site with references to the second, including the unique socket.
Then I added the file to start up automatically: update-rc.d unicorn_app_x defaults
It finally worked!