How to configure rtmp://example.com when using SRS livestream - rtmp

I use Nginx as a proxy and SRS as the livestream server. Here is my Nginx config for the server block:
```
server {
    listen 80;
    listen 443 ssl http2;
    server_name example.com;
    ssl_certificate /usr/local/srs/conf/server.crt;
    ssl_certificate_key /usr/local/srs/conf/server.key;

    # For SRS homepage, console and players
    # http://r.ossrs.net/console/
    # http://r.ossrs.net/players/
    location / {
        proxy_pass http://127.0.0.1:8080/;
    }

    # For SRS streaming, for example:
    # http://r.ossrs.net/live/livestream.flv
    # http://r.ossrs.net/live/livestream.m3u8
    location ~ /.+/.*\.(flv|m3u8|ts|aac|mp3)$ {
        proxy_pass http://127.0.0.1:8080$request_uri;
    }

    # For SRS backend API for console.
    location /api/ {
        proxy_pass http://127.0.0.1:1985/api/;
    }

    # For SRS WebRTC publish/play API.
    location /rtc/ {
        proxy_pass http://127.0.0.1:1985/rtc/;
    }
}
```
With this config, VLC plays the livestream fine from a URL like
https://example.com/live/livestream.m3u8
But to publish the stream from my OBS software, I have to use the IP instead of the domain for it to work properly, like rtmp://my_public_ip/live.
If I replace it with a URL like rtmp://example.com/live, OBS doesn't work!
How can I publish a stream from OBS via my domain?
I tried the vhost config, but it didn't work.

Please publish to SRS directly, not through Nginx, and use this config for OBS:
Server: rtmp://srs_server_ip/live
Stream Key: livestream
Note that you should never put the stream name (livestream) in the Server field; it belongs in the Stream Key (OBS joins the two, so the full publish URL is rtmp://srs_server_ip/live/livestream), which is confusing.
Please read more here.
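If you do need OBS to publish through the domain name, note that RTMP is plain TCP on port 1935, which the HTTP location blocks above cannot proxy. A minimal sketch using Nginx's stream module (my own assumption, not part of the answer above; it requires nginx built with the stream module, and SRS's RTMP listener moved off port 1935, e.g. to 19350, since two processes cannot bind the same port on one host):

```
# In nginx.conf, at the same level as the http {} block.
# RTMP is raw TCP, so it has to be forwarded at layer 4:
stream {
    server {
        listen 1935;                 # OBS publishes to rtmp://example.com/live
        proxy_pass 127.0.0.1:19350;  # hypothetical SRS RTMP port
    }
}
```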

Related

Why am I receiving a 404 when using proxy_pass with NginX?

I'm trying to use Nginx to expose my Web APIs on port 80 using proxy_pass. The Web APIs are written in Node using Express and they are all running on separate port numbers.
I have locations working in the nginx.conf file when pulling static files from the root and /test, but receive a 404 error when trying to redirect to the API. The API I'm testing with runs on port 8080 and I'm able to access and test it using Postman.
This is Nginx 1.16.1, hosted on a Windows Server 2016 machine.
```
http {
    include mime.types;
    default_type application/octet-stream;
    #access_log logs/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name localhost crowdtrades.com;

        # Root and /test locations are working correctly
        location / {
            root c:/CrowdTrades;
            index index.html index.htm;
        }

        location /test/ {
            root c:/CrowdTrades/test;
            index test.html;
        }

        # Test2: this is the location I'm not able to get working
        location /test2/ {
            proxy_set_header Host $host;
            proxy_pass http://localhost:8080/api/signup/;
        }
    }
}
```
So after trying all kinds of configuration changes, restarting Nginx each time, I gave up for the night. My cloud VM is scheduled to shut down overnight, and when I picked this up in the morning it was working. I have no idea why it's working now, but restarting the server seemed to help.
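For reference, when proxy_pass carries a URI part (as /test2/ does above), Nginx replaces the matched location prefix with that URI. A small sketch of the intended mapping:

```
# proxy_pass with a URI part replaces the matched prefix:
location /test2/ {
    proxy_set_header Host $host;
    proxy_pass http://localhost:8080/api/signup/;
    # GET /test2/check  ->  http://localhost:8080/api/signup/check
}
```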

Securing Nginx with SSL

I'm securing an Nginx server with SSL and I have a question. I have two virtual servers, one for HTTP listening on port 80 and one for HTTPS listening on 443, like this:
```
# HTTP server
server {
    listen 80;
    server_name localhost;
    ...
    # many configuration rules here for caching, etc.
}

# HTTPS server
server {
    listen 443 ssl;
    server_name localhost;
    ...
}
```
The question is: do I need to duplicate all the configuration rules I have in the HTTP version in my HTTPS version? Is there any way to avoid duplicating all these rules?
UPDATE
I'm trying to configure this with an include, per #ibueker's answer. It looks easy, but somehow it's not working. Does the include need to be inside a location? Example attached:
```
# HTTP server
server {
    listen 80;
    server_name localhost;
    ...
    include ./wpo;
}
```
The wpo file is in the same directory, and it looks like this:
```
# Expire rules for static content
# RCM: WPO

# Images
location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
    root /home/ubuntu/env/production/www/yanpy/app;
    expires 1w;
    add_header Cache-Control "public";
}

# CSS and Javascript
location ~* \.(?:css|js)$ {
    root /home/ubuntu/env/production/www/yanpy/app;
    expires 1w;
    add_header Cache-Control "public";
}

# cache.appcache, your document html and data
location ~* \.(?:manifest|appcache|html?|xml|json)$ {
    root /home/ubuntu/env/production/www/yanpy/app;
    expires -1;
}
```
You can put them in another file and include it from both server blocks.
```
include /path/to/file;
```
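A minimal sketch of the end state, assuming the shared rules are saved as /etc/nginx/snippets/wpo.conf (a hypothetical path). Note that include takes effect at the level where it appears; since the wpo file contains location blocks, it must be included at server level, not inside a location:

```
# Shared rules included by both virtual servers:
server {
    listen 80;
    server_name localhost;
    include /etc/nginx/snippets/wpo.conf;
}

server {
    listen 443 ssl;
    server_name localhost;
    ssl_certificate     /path/to/server.crt;   # hypothetical paths
    ssl_certificate_key /path/to/server.key;
    include /etc/nginx/snippets/wpo.conf;
}
```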

nginx: proxy_pass subdirectories to other servers

I'm using nginx as a web server and reverse proxy with SSL enabled.
The web server serves a WoltLab Suite Forum 5.0.0 (formerly known as Burning Board) and proxies some subdomains to different hosts, like a NodeJS backend, a Tomcat backend, and many other services.
This worked great so far, but now I have the problem that I can no longer use subdomains to accomplish this.
Please don't ask why, please don't.
Now that I can no longer use subdomains, I'm trying to get it working with subdirectories.
An example:
I had xyz.example.com point to my nginx server at 12.13.14.15.
nginx proxied all requests to xyz.example.com through to 10.20.30.40:1234.
Now I want nginx to proxy all requests for example.com/xyz/ to 10.20.30.40:1234.
I got this working with Apache Archiva as the backend service, but all other services, like my NodeJS backend, refuse to work correctly with my current configuration.
It sends me to the BurningBoard, which shows me its page-not-found page.
example.com/xyz/admin/index.php becomes example.com/admin/index.php, which won't work, of course.
The directory that proxies to Archiva has exactly the same configuration, just with other directory names, of course.
The Archiva URL looks like this after I call it from the web:
example.com/repo/ becomes example.com/repo/#welcome and shows me Archiva's welcome page.
This is exactly what I want for my other services too.
Here are my current configuration files for nginx (sensitive data replaced with X):
<=== sites-available/default ===>
```
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name XXXX.XX www.XXXX.XX;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    include snippets/ssl-XXXXX.XX.conf;
    include snippets/ssl-params.conf;
    root /var/www/html;
    include /etc/nginx/snippets/proxies.conf;

    # the last try_files argument is for SEO friendly URLs for the WSF 5.0
    location / {
        index index.php;
        try_files $uri $uri/ /index.php?$uri&$args;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
    }

    # Lets Encrypt automation
    location ~ /.well-known {
        allow all;
    }

    location ~ /\.ht {
        deny all;
    }
}
```
<=== snippets/proxies.conf ===>
```
# Apache Archiva
location /repo {
    rewrite ^/repo(/.*)$ $1 break;
    proxy_pass http://XXXXXXXX:XXXXX;
    return 404;
}

# Git Solution
location /git {
    rewrite ^/git(/.*)$ $1 break;
    proxy_pass http://XXXXXXXX:XXXXX;
    return 404;
}

# Filehosting
location /cloud {
    rewrite ^/cloud(/.*)$ $1 break;
    proxy_pass http://XXXXXXXX:XXXXX;
    return 404;
}

# NodeJS
location /webinterface {
    rewrite ^/webinterface(/.*)$ $1 break;
    proxy_pass https://XXXXXXXX:XXXXX;
    include /etc/nginx/snippets/websocket-magic.conf;
    return 404;
}
```
Any ideas how to solve this problem?
Also, please tell me if you need more information, like the nginx version or the like.
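For what it's worth, the rewrite ... break plus proxy_pass pattern above strips the /xyz/ prefix before forwarding, so any absolute link or redirect the backend generates (such as /admin/index.php) comes back without the prefix; Archiva happens to survive because it navigates with fragment URLs like #welcome. A sketch of the same prefix stripping done by proxy_pass alone (backend address hypothetical):

```
# A URI part on proxy_pass replaces the matched prefix:
location /xyz/ {
    proxy_pass http://10.20.30.40:1234/;
    # GET /xyz/admin/index.php -> http://10.20.30.40:1234/admin/index.php
    # Redirects and absolute links from the backend will still lack /xyz/
    # unless the app itself is configured with that base path.
}
```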

Configure proxy_pass for intermittent service

I'm using Nginx within a Docker container to host my application.
I'm trying to configure Nginx to proxy traffic for the /.well-known/ directory to another container that handles the Let's Encrypt process to set up and renew SSL certificates, but I don't need that container to be running all the time, only when renewing the certificates.
My idea was to use proxy_pass for that directory-specific traffic through to the Let's Encrypt container, but since it's not always running, the Nginx process exits, complaining that the upstream is not available.
Is there a way to configure Nginx not to check the status of the upstream for the proxy_pass setting?
Here's the current config, if it's useful…
```
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    server_name domain.com;
    root /var/www/html/web;

    location / {
        return 301 https://$host$request_uri;
    }

    location ^~ /.well-known/ {
        proxy_pass http://letsencrypt/.well-known/;
    }
}
```
I guess I could use in-app forwarding of files, but that feels clunky. I'd rather configure it within Nginx.
```
location ^~ /.well-known/ {
    resolver 127.0.0.11;  # Docker's embedded DNS, so the container name resolves
    set $upstream letsencrypt;
    proxy_pass http://$upstream/.well-known/;  # use variables to make nginx startable
}
```
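Because proxy_pass uses a variable here, Nginx resolves the name per request instead of validating the upstream at startup, so it starts cleanly while the letsencrypt container is down; requests to /.well-known/ simply fail (with a 502) until the container is up.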

'ERR_TOO_MANY_REDIRECTS' error nginx docker

I am using an nginx Docker container to deploy my app on an AWS server. I have to access my API through an Nginx proxy URL that looks like https://domain.com/api/. Since this is an HTTPS request, I have to proxy it to another port where the API service is running; that service runs in another Docker container on the same server instance. So my nginx conf file looks like this:
```
server {
    listen 80;
    server_name domain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    server_name domain.com;

    # add Strict-Transport-Security to prevent man-in-the-middle attacks
    add_header Strict-Transport-Security "max-age=31536000";

    location /api/ {
        proxy_pass http://my-public-ip-address:3000;
    }
}
```
My problem is that when I try to access the API endpoint using the above URL, it shows ERR_TOO_MANY_REDIRECTS. Does anyone know about this issue? I also went through all the articles mentioning the same issue, but had no luck.
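One common cause of a loop like this behind a TLS-terminating proxy is that the backend never learns the original request was HTTPS, so it issues its own HTTP-to-HTTPS redirect on every call. A sketch of forwarding the original scheme (this is an assumption about the cause; the backend must also honor X-Forwarded-Proto):

```
location /api/ {
    proxy_pass http://my-public-ip-address:3000;
    proxy_set_header Host              $host;
    proxy_set_header X-Real-IP         $remote_addr;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;  # tell the backend the client used HTTPS
}
```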