Trying to modify SRS: where is the directory ./objs?

I am running SRS from a Docker container. On Manjaro I can issue a docker command in terminal, and stream video from OBS to the SRS instance I just launched. I can then view live video in a web page, from that server.
BUT!
The configuration is a bit opaque. The config files refer to a directory ("./objs") that is likely defined in a config file, but I can't find it.
AND!
I'm still a bit confused about how Docker works - isn't there a straightforward way to edit an image or container?
I can see all the config files in /etc/srs (like srs.conf, rtmp.conf and so on). The config files refer to the directory ./objs for the location of (for instance) the nginx root, index.html. But I cannot find that directory.
I have also installed SRS from the official repos, on the same machine. I launched srs from /usr/bin/srs (not from the Docker image), but it immediately quit, so I gave up for now. Running it from Docker seems to work pretty well.

When you start SRS with Docker, it is actually the same as starting it in a shell:
cd /usr/local/srs && ./objs/srs -c conf/srs.conf
You can also use docker inspect ossrs/srs:5 to get the information:
"Cmd": ["./objs/srs", "-c", "conf/srs.conf"],
"WorkingDir": "/usr/local/srs",
The current working directory is /usr/local/srs, so ./objs is /usr/local/srs/objs, and I will explain below why relative directories are used.
The configuration file is conf/srs.conf, which is actually /usr/local/srs/conf/srs.conf, and there are lots of configuration examples in ./conf.
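If you want to browse those examples, you can list them inside a running container (a quick sketch; <container id> is whatever docker ps reports):
docker exec -it <container id> ls /usr/local/srs/conf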
The examples on ossrs.io are in the same format: each uses a configuration file and depends on the working directory.
SRS uses the relative directories ./objs and ./conf because some features depend on them. See full.conf for all supported configuration options.
We use relative directories so that you can easily change to another directory, or start more than one SRS server. Of course, you could change the relative directories to absolute ones; search for ./objs in full.conf, for example:
pid ./objs/srs.pid;
ff_log_dir ./objs;
srs_log_file ./objs/srs.log;
http_server {
    dir ./objs/nginx/html;
}
vhost __defaultVhost__ {
    hls {
        hls_path ./objs/nginx/html;
    }
}
Please note that you can use a Docker volume to mount a host directory into the container.
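For example, a minimal sketch that mounts a host config and HTML root over the bundled ones (the host paths are illustrative; 1935 and 8080 are SRS's default RTMP and HTTP ports):
docker run --rm -it -p 1935:1935 -p 8080:8080 \
    -v $(pwd)/srs.conf:/usr/local/srs/conf/srs.conf \
    -v $(pwd)/html:/usr/local/srs/objs/nginx/html \
    ossrs/srs:5 ./objs/srs -c conf/srs.conf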

Related

How to force OpenAPI Generator CLI to use pre-downloaded .jar file?

I have installed (via npm) openapi-generator-cli on my WSL running Ubuntu 22.04 image, with correctly configured HTTP_PROXY and HTTPS_PROXY environment variables.
The problem is, running any command (including sudo openapi-generator-cli help) results in OAG-CLI attempting to download the .jar file from maven.org, which ends with the connection being refused for an unknown reason (SSL cert not listed as trusted? WSL-exclusive bug? corporate proxy having an edge case?).
Instead of dealing with all that, I realised I can just download the latest (as per official website) .jar file:
https://repo1.maven.org/maven2/org/openapitools/openapi-generator-cli/6.2.1/openapi-generator-cli-6.2.1.jar
via browser and place it manually for OAG-CLI to use.
I have edited the auto-generated openapitools.json just so:
{
  "$schema": "node_modules/@openapitools/openapi-generator-cli/config.schema.json",
  "spaces": 2,
  "generator-cli": {
    "version": "6.2.1", // same version as .jar
    "storageDir": "." // see below
  }
}
Unfortunately, despite placing two copies of the .jar file (one named openapi-generator-cli-6.2.1.jar, one named openapi-generator-cli.jar) in both the "current" folder and /usr/libs/openapi, and trying the following values for storageDir:
.
./
/usr/libs/openapi
/usr/libs/openapi/
~/usr/libs/openapi
~/usr/libs/openapi/
every single run of sudo openapi-generator-cli help resulted in an immediate Downloading 6.2.1 ... message (followed by a connection refused error some time later).
What else do I need to do to make OAG-CLI use the .jar within storageDir instead of trying to download a new copy?
(Answer containing just the structure and contents of a folder created by "storageDir": "~/foo" would allow me to reverse-engineer a working setup.)

How to get Docker Desktop on Mac to point to new location of VM Image?

Docker Desktop supports moving the VM Image that it uses onto another drive if needed. On my Mac Mini (2018) I have moved it to an external SSD in order to enlarge it more than the internal storage would have allowed.
The external SSD was named "Extra Space", which (ironically) became a problem when I also tried to use the SSD for other non-Docker development and discovered that some of the Ruby Gems I am using have problems with spaces in path names.
I decided that I would have to rename the drive to "ExtraSpace" (without the "extra" space), but then Docker was not able to find its VM Image. I was unable to use the built-in location chooser ("Preferences" -> "Resources" -> "Advanced" -> "Disk Image Location") because that tool assumes it is moving the image from one location to another; in my case the image was not moving, only the path to the existing image was changing.
I looked through the Docker configuration in ~/Library/Containers/com.docker.desktop/ and found the path to the image in Library/Containers/com.docker.docker/Data/vms/0/hyperkit.json. I tried changing it there, but Docker Desktop would not start.
In the error logs, I found lots of messages like this:
time="2021-10-31T15:06:43-04:00" level=error msg="(5487d9bc) 4ecbf40e-BackendAPI S->C f68f0c84-DriverCMD GET /vm/disk (925.402µs): mkdir /Volumes/Extra Space: permission denied"
common/cmd/com.docker.backend/internal/handlers.(*VMInitHandler).VMDiskInfo(0x58c13b8, {0x58b94a0, 0xc0001d82a0})
common/cmd/com.docker.backend/internal/handlers/vminit.go:39 +0x42
Why does Docker Desktop not follow the VM configuration file to find the location of the image? Is there somewhere else I have to change it?
After a lot more searching, I found the following additional files that need to be updated with the new path:
~/Library/Preferences/com.electron.docker-frontend.plist
~/Library/Preferences/com.electron.dockerdesktop.plist
~/Library/Group Containers/group.com.docker/settings.json
Once I had updated all of these files with the new path, Docker Desktop was able to start up correctly.
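Before restarting Docker Desktop, you can confirm that no file still references the old name (a rough sketch; "Extra Space" was my old volume name, and the .plist files may be binary, though grep still matches the raw bytes):
grep -l "Extra Space" \
    ~/Library/Preferences/com.electron.docker-frontend.plist \
    ~/Library/Preferences/com.electron.dockerdesktop.plist \
    ~/Library/Group\ Containers/group.com.docker/settings.json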

Adding LESS file to HTML [duplicate]

I'm trying to load a 3D model, stored locally on my computer, into Three.js with JSONLoader, and that 3D model is in the same directory as the entire website.
I'm getting the "Cross origin requests are only supported for HTTP." error, but I don't know what's causing it nor how to fix it.
My crystal ball says that you are loading the model using either file:// or C:/, which is consistent with the error message, as neither is http://.
So you can either install a web server on your local PC, or upload the model somewhere else, use JSONP, and change the URL to http://example.com/path/to/model
Origin is defined in RFC-6454 as:
"...they have the same scheme, host, and port. (See Section 4 for full details.)"
So even though your file originates from the same host (localhost), as long as the schemes are different (http / file), they are treated as different origins.
Just to be explicit - Yes, the error is saying you cannot point your browser directly at file://some/path/some.html
Here are some options to quickly spin up a local web server to let your browser render local files
Python 2
If you have Python installed...
Change directory into the folder where your file some.html or file(s) exist using the command cd /path/to/your/folder
Start up a Python web server using the command python -m SimpleHTTPServer
This will start a web server to host your entire directory listing at http://localhost:8000
You can use a custom port python -m SimpleHTTPServer 9000 giving you link: http://localhost:9000
This approach is built in to any Python installation.
Python 3
Do the same steps, but use the following command instead python3 -m http.server
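Putting those steps together (the port argument is optional, exactly as in Python 2):
cd /path/to/your/folder
python3 -m http.server 9000
Then browse to http://localhost:9000.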
VSCode
If you are using Visual Studio Code you can install the Live Server extension which provides a local web server environment.
Node.js
Alternatively, if you demand a more responsive setup and already use nodejs...
Install http-server by typing npm install -g http-server
Change into your working directory, where yoursome.html lives
Start your http server by issuing http-server -c-1
This spins up a Node.js httpd which serves the files in your directory as static files accessible from http://localhost:8080
Ruby
If your preferred language is Ruby ... the Ruby Gods say this works as well:
ruby -run -e httpd . -p 8080
PHP
Of course PHP also has its solution.
php -S localhost:8000
In Chrome you can use this flag:
--allow-file-access-from-files
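For example, on Linux you might launch it like this (a sketch; quit all running Chrome instances first, and use this only for local development, since the flag relaxes browser security):
google-chrome --allow-file-access-from-files file:///path/to/some.html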
Ran into this today.
I wrote some code that looked like this:
app.controller('ctrlr', function($scope, $http){
$http.get('localhost:3000').success(function(data) {
$scope.stuff = data;
});
});
...but it should've looked like this:
app.controller('ctrlr', function($scope, $http){
$http.get('http://localhost:3000').success(function(data) {
$scope.stuff = data;
});
});
The only difference was the lack of http:// in the second snippet of code.
Just wanted to put that out there in case there are others with a similar issue.
Just change the url to http://localhost instead of localhost. If you open the HTML file locally, you should create a local server to serve it; the simplest way is using Web Server for Chrome. That will fix the issue.
I'm going to list 3 different approaches to solve this issue:
Using a very lightweight npm package: Install live-server using npm install -g live-server. Then go to that directory, open the terminal, type live-server, and hit enter; the page will be served at localhost:8080. BONUS: It also supports hot reloading by default.
Using a lightweight Google Chrome app developed by Google: Install the app, then go to the apps tab in Chrome and open the app. In the app point it to the right folder. Your page will be served!
Modifying Chrome shortcut in windows: Create a Chrome browser's shortcut. Right-click on the icon and open properties. In properties, edit target to "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --disable-web-security --user-data-dir="C:/ChromeDevSession" and save. Then using Chrome open the page using ctrl+o. NOTE: Do NOT use this shortcut for regular browsing.
Note: use an explicit http:// prefix, like http://localhost:8080, in case you face the error.
Use http:// or https:// to create the URL.
error: localhost:8080
solution: http://localhost:8080
In an Android app (for example, to allow JavaScript to access assets via file:///android_asset/), use setAllowFileAccessFromFileURLs(true) on the WebSettings that you get from calling getSettings() on the WebView.
The fastest way for me was:
For Windows users: run your file in Firefox, problem solved; or,
if you want to use Chrome, the easiest way for me was to install Python 3, then run python -m http.server from the command prompt, go to http://localhost:8000/, and navigate to your files:
python -m http.server
An easy solution for those using VS Code
I'd been getting this error for a while. Most of the answers work. But I found a different solution: if you don't want to deal with node.js or any other solution here, and you are working with an HTML file (calling functions from another js file or fetching json APIs), try the Live Server extension.
It lets you open a live server easily. And because it creates a localhost server, the problem goes away. Simply open an HTML file, right-click in the editor, and click Open with Live Server.
It basically loads the files using http://localhost/index.html instead of file://....
EDIT
It is not necessary to have a .html file. You can start the Live Server with shortcuts.
Hit (alt+L, alt+O) to open the server and (alt+L, alt+C) to stop it. [On Mac: cmd+L, cmd+O and cmd+L, cmd+C]
Hope it will help someone :)
If you use an old version of Mozilla Firefox (pre-2019), it will work as expected without any issues.
P.S. Surprisingly, old versions of Internet Explorer & Edge work absolutely fine too.
For those on Windows without Python or Node.js, there is still a lightweight solution: Mongoose.
All you do is drag the executable to wherever the root of the server should be, and run it. An icon will appear in the taskbar and it'll navigate to the server in the default browser.
Also, Z-WAMP is a 100% portable WAMP that runs in a single folder; it's awesome. That's an option if you need a quick PHP and MySQL server, though it hasn't been updated since 2013. A modern alternative would be Laragon or WinNMP. I haven't tested them, but they are portable and worth mentioning.
Also, if you only want the absolute basics (HTML+JS), here's a tiny PowerShell script that doesn't need anything to be installed or downloaded:
# Minimal static server built on .NET's HttpListener.
$Srv = New-Object Net.HttpListener;
$Srv.Prefixes.Add("http://localhost:8080/");
$Srv.Start();
Start-Process "http://localhost:8080/index.html";
While ($Srv.IsListening) {
    $Ctx = $Srv.GetContext();
    # Map the requested URL onto the current directory and stream the file back.
    $Buf = [System.IO.File]::OpenRead((Join-Path $Pwd $Ctx.Request.RawUrl));
    $Ctx.Response.ContentLength64 = $Buf.Length;
    $Ctx.Response.Headers.Add("Content-Type", "text/html");
    $Buf.CopyTo($Ctx.Response.OutputStream);
    $Buf.Close();
    $Ctx.Response.Close();
};
This method is very barebones; it cannot show directories or other fancy stuff, but it handles these CORS errors just fine.
Save the script as server.ps1 and run it in the root of your project. It will launch index.html in the directory it is placed in.
I suspect it's already mentioned in some of the answers, but I'll slightly modify this into a complete working answer (easier to find and use).
Go to https://nodejs.org/en/download/ and install Node.js.
Install http-server by running npm install -g http-server from the command prompt.
Change into your working directory, where index.html/yoursome.html resides.
Start your HTTP server by running http-server -c-1
Open web browser to http://localhost:8080
or http://localhost:8080/yoursome.html - depending on your html filename.
I was getting this exact error when loading an HTML file on the browser that was using a json file from the local directory. In my case, I was able to solve this by creating a simple node server that allowed to server static content. I left the code for this at this other answer.
It simply says that the application should be run on a web server. I had the same problem with Chrome; I started Tomcat and moved my application there, and it worked.
I suggest you use a mini-server to run these kinds of applications on localhost (if you are not using some inbuilt server).
Here's one that is very simple to setup and run:
https://www.npmjs.com/package/tiny-server
Experienced this when I downloaded a page for offline viewing.
I just had to remove the integrity="*****" and crossorigin="anonymous" attributes from all <link> and <script> tags
If you insist on running the .html file locally and not serving it with a webserver, you can prevent those cross origin requests from happening in the first place by making the problematic resources available inline.
I had this problem when trying to serve .js files through file://. My solution was to update my build script to replace <script src="..."> tags with <script>...</script>.
Here's a gulp approach for doing that:
1. Run npm install --save-dev gulp gulp-inline del.
2. Create a gulpfile.js in the root directory, then add the following code (just change the file paths to whatever suits you):
let gulp = require('gulp');
let inline = require('gulp-inline');
let del = require('del');

gulp.task('inline', function (done) {
    gulp.src('dist/index.html')
        .pipe(inline({
            base: 'dist/',
            disabledTypes: 'css, svg, img'
        }))
        .pipe(gulp.dest('dist/').on('finish', function () {
            done();
        }));
});

gulp.task('clean', function (done) {
    del(['dist/*.js']);
    done();
});

gulp.task('bundle-for-local', gulp.series('inline', 'clean'));
Either run gulp bundle-for-local or update your build script to run it automatically.
You can see the detailed problem and solution for my case here.
For all y'all on MacOS... set up a simple LaunchAgent to enable these glamorous capabilities in your own copy of Chrome...
Save a plist named whatever you like (launch.chrome.dev.mode.plist, for example) in ~/Library/LaunchAgents with content similar to...
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>launch.chrome.dev.mode</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Applications/Google Chrome.app/Contents/MacOS/Google Chrome</string>
        <string>-allow-file-access-from-files</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
It should launch at startup, but you can force it to do so at any time with the terminal command
launchctl load -w ~/Library/LaunchAgents/launch.chrome.dev.mode.plist
TADA!
It's not possible to load static local files (e.g. svg) without a server. If you have NPM/Yarn installed on your machine, you can set up a simple HTTP server using "http-server":
npm install http-server -g
http-server [path] [options]
Or open a terminal in that project folder and type "hs". It will automatically start an HTTP live server.
I just found some official words: "Attempting to load unbuilt, remote AMD modules that use the dojo/text plugin will fail due to cross-origin security restrictions. (Built versions of AMD modules are unaffected because the calls to dojo/text are eliminated by the build system.)" https://dojotoolkit.org/documentation/tutorials/1.10/cdn/
One way that worked for me was to load local files from within the project folder instead of from outside it. Create a folder under your project (for example, "files"), similar to the way we create one for images, then replace the full local path with a relative URL to the file under the project folder.
It worked for me.
Install a local web server for Java, e.g. Tomcat; for PHP you can use LAMP, etc.
Drop the JSON file into the publicly accessible app server directory.
Start the app server, and you should be able to access the file from localhost.
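For instance, with Tomcat the flow might look like this (a rough sketch; data.json and the CATALINA_HOME layout are illustrative, and 8080 is Tomcat's default port):
cp data.json $CATALINA_HOME/webapps/ROOT/
$CATALINA_HOME/bin/startup.sh
curl http://localhost:8080/data.json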
For Linux Python users:
import webbrowser
browser = webbrowser.get('google-chrome --allow-file-access-from-files %s')
browser.open(url)
url should be like:
createUserURL = "http://www.localhost:3000/api/angular/users"
instead of:
createUserURL = "localhost:3000/api/angular/users"
There are many causes for this problem; mine was a missing '/'. Example:
jquery-1.10.2.js:8720 XMLHttpRequest cannot load http://localhost:xxxProduct/getList_tagLabels/
It must be: http://localhost:xxx/Product/getList_tagLabels/
I hope this helps whoever meets this problem.
I have also been able to recreate this error message when using an anchor tag with the following href:
<a href="...">Example a tag</a>
In my case the a tag was being used to get the 'pointer' cursor, and the event was actually controlled by a jQuery on-click handler. I removed the href and added a class that applies:
cursor: pointer;
Cordova achieves this. I still cannot figure out how Cordova did it; it does not even go through shouldInterceptRequest.
Later I found out that the key to loading any file from local storage is: myWebView.getSettings().setAllowUniversalAccessFromFileURLs(true);
And when you want to access any http resource, the WebView will do a check with the OPTIONS method, which you can grant through WebViewClient.shouldInterceptRequest by returning a response; for the following GET/POST methods, you can just return null.
If you are searching for a solution for Firebase Hosting, you can run the
firebase serve --only hosting command from the Firebase CLI.
That's what I came here for, so I thought I'd just leave it here to help others like me.
If you're using VS Code, just try loading a live server in there. It fixed my problem immediately.

Changing permissions of added file to a Docker volume

In the Docker best practices guide it states:
You are strongly encouraged to use VOLUME for any mutable and/or user-serviceable parts of your image.
And by looking at the source code of e.g. the cpuguy83/nagios image, this can clearly be seen done: everything from the nagios to the apache config directories is made available as a volume.
However, looking at the same image, the apache service (and cgi-scripts for nagios) run as the nagios user by default. So now I'm in a pickle, as I can't seem to figure out how to add my own config files in order to e.g. define more hosts for nagios monitoring. I've tried:
FROM cpuguy83/nagios
ADD my_custom_config.cfg /opt/nagios/etc/conf.d/
RUN chown nagios: /opt/nagios/etc/conf.d/my_custom_config.cfg
CMD ["/opt/local/bin/start_nagios"]
I build as normal, and try to run it with docker run -d -p 8000:80 <image_hash>, however I get the following error:
Error: Cannot open config file '/opt/nagios/etc/conf.d/my_custom_config.cfg' for reading: Permission denied
And sure enough, the permissions in the folder look like this (whilst the apache process runs as nagios):
# ls -l /opt/nagios/etc/conf.d/
-rw-rw---- 1 root root 861 Jan 5 13:43 my_custom_config.cfg
Now, this has been answered before (why doesn't chown work in Dockerfile), but no proper solution other than "change the original Dockerfile" has been proposed.
To be honest, I think there's some core concept here I haven't grasped (as I can't see the point of declaring config directories as VOLUME, nor of running services as anything other than root), so given a Dockerfile as above (which follows Docker best practices by adding multiple volumes), is the solution/problem:
To change NAGIOS_USER/APACHE_RUN_USER to 'root' and run everything as root?
To remove the VOLUME declarations in the Dockerfile for nagios?
Other approaches?
How would you extend the nagios dockerfile above with your own config file?
Since you are adding your own my_custom_config.cfg file directly into the container at build time, just change the permissions of the my_custom_config.cfg file on your host machine and then build your image using docker build. The host machine permissions are copied into the container image.
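A minimal sketch of that flow (the my-nagios tag is illustrative):
chmod 0644 my_custom_config.cfg    # world-readable, so the nagios user can read the baked-in copy
docker build -t my-nagios .
docker run -d -p 8000:80 my-nagios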

Docker: How to live sync host folder with container folder?

I am working on a website powered by Node. So I have made a simple Dockerfile that adds my site's files to the container's FS, installs Node and runs the app when I run the container, exposing the private port 80.
But if I want to change a file for that app, I have to rebuild the container image and re-run it. That takes some seconds.
Is there an easy way to have some sort of "live sync", NFS-like, so that my host system's app files stay in sync with the ones in the running container?
This way I only have to relaunch it to have changes apply, or even better, if I use something like supervisor, it will be done automatically.
You can use volumes in order to do this. You have two options:
Docker managed volumes:
docker run -v /src/path nodejsapp
docker run -i -t --volumes-from <container id> bash
The file you edit in the second container will update the first one.
Host directory volume:
docker run -v `pwd`/host/src/path:/container/src/path nodejsapp
The changes you make on the host will update the container.
If you are on OSX, those kinds of volume shares can become very slow, especially with Node-based apps (a lot of files). For this issue, http://docker-sync.io can help, by providing a volume-share-like synchronisation without using volume shares; this usually speeds up the container's read/write speed of the code directory by 50-80 times, depending on which docker-machine you use.
For performance, see https://github.com/EugenMayer/docker-sync/wiki/4.-Performance, and for easy examples of how to use it, see the boilerplates at https://github.com/EugenMayer/docker-sync-boilerplate. For your case, the unison example https://github.com/EugenMayer/docker-sync-boilerplate/tree/master/unison is the one you would need for NFS-like sync.
docker run -dit -v ~/my/local/path:/container/path/ myimageId
For /container/path/ you could use for instance /usr/src/app.
The flags:
-d = detached mode,
-it = interactive,
-v + paths = specifies the volume.
(If you just care about the volume, you can drop the -dit flag.)
Docker run reference
I use Skaffold's File Sync functionality for this. It gets the job done, and without needing overly complex configuration.
Setting up Skaffold in my project was as simple as installing Skaffold (through Chocolatey, since I'm on Windows), running skaffold init --generate-manifests in my project folder, and answering a couple of questions it asked. A rough sketch of the resulting workflow is below.
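(This sketch assumes a Kubernetes cluster is available for Skaffold to deploy to, and that the project has a Dockerfile for skaffold init to detect.)
choco install skaffold
skaffold init --generate-manifests
skaffold dev    # builds, deploys, and keeps syncing changed files into the running container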