We want to be able to start the N|Solid Proxy/Hub from an arbitrary folder. When we try this, it fails because it cannot find .nsolid-proxyrc in any of the parent folders.
We took a look at the N|Solid Proxy source code, and the configuration library it uses, rc, allows end users to specify a file location, but N|Solid Proxy doesn't accept a CLI argument for it. This should be easy functionality to add, but it appears to be a closed-source project.
TL;DR: We need to be able to specify the exact location of .nsolid-proxyrc when starting the hub. Is there a known workaround for this, or a way we could request that this feature be added to the project?
You can specify the configuration file with the --config flag (which comes from rc) when starting N|Solid Proxy:
$ nsolid proxy.js --config /path/to/config/file
By default it looks for .nsolid-proxyrc in the current working directory, then walks up the folder tree (the way node_modules resolution does), and finally falls back to the following locations:
$HOME/.nsolid-proxyrc
$HOME/.nsolid-proxy/config
$HOME/.config/.nsolid-proxy
$HOME/.config/.nsolid-proxy/config
/etc/nsolid-proxyrc
/etc/nsolid-proxy/config
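Since rc parses the file as plain JSON or INI, .nsolid-proxyrc itself can be minimal. A JSON sketch (the keys below are made-up placeholders, not documented N|Solid Proxy options):

{
  "port": 4000,
  "hostname": "0.0.0.0"
}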
I'm trying to build the example to run an Erlang websocket server.
I created all of those files, put them into one folder, added the rebar file, and ran
./rebar get-deps
inside that folder.
But there is no
make
make runconsole
and nothing happens.
Is it also possible to create that websocket server using IntelliJ? I tried putting those three .erl files into IntelliJ and building the project, but I receive
erlc: 2: Warning: behaviour cowboy_http_handler undefined
The make command reads a file called a Makefile, which is written in a specific format and tells make what it is supposed to do, e.g. compile the files with the listed names using the listed commands. Because no Makefile is listed in that tutorial, you should have gotten an error something like this:
No targets specified and no makefile found.
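For a sense of what make expects, a Makefile just maps target names to the shell commands that build them. A toy sketch (file and module names are invented; recipe lines must be indented with a tab):

# build hello.beam from hello.erl; `make run` compiles and runs it
hello.beam: hello.erl
	erlc hello.erl

run: hello.beam
	erl -noshell -s hello start -s init stop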
You can contact the author of the tutorial through his GitHub account and ask where the Makefile is. Actually, the Makefile for the tutorial is here:
https://github.com/marcelog/erws
I created all of those files and put them into one folder
The instructions in the Makefile depend on the exact directory structure that the author has here:
https://github.com/marcelog/erws
I tried using rebar3 and changing some stuff in the Makefile, but I still got errors. The problem is the rel directory: I don't know how to create all the stuff in there. You need to use rebar and reltool for that:
https://gist.github.com/FabioBatSilva/f1d1c4ea250302fed8c2
Here is a cowboy websocket example that I came up with last year; see if it helps:
How to Connect Cowboy (Erlang) websocket to webflow.io generated webpage
It uses the Erlang.mk build system as described in the cowboy docs here:
https://ninenines.eu/docs/en/cowboy/2.5/guide/getting_started/
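With Erlang.mk the Makefile stays tiny, because the build system itself does the heavy lifting. A sketch along the lines of what that guide walks you through (the project name, description, and cowboy version are placeholders):

PROJECT = erws
PROJECT_DESCRIPTION = Cowboy websocket example
PROJECT_VERSION = 0.1.0

# fetch and build cowboy as a dependency
DEPS = cowboy
dep_cowboy_commit = 2.5.0

include erlang.mk

With that in place, make fetches the dependencies and compiles everything, and make run starts the application in an Erlang shell.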
I want to create two versions of an app with slightly different content. Therefore I thought about having two "www" directories (let's say "www-foo" and "www-bar") and telling Capacitor in capacitor.config.json which one to use (via the "webDir" setting). The "appId" should also be different.
So I guessed the easiest way would be to have two capacitor.config.json files with different "appId" and "webDir" settings and, when running the build script, to specify which config file to use (as is possible in webpack with the --config flag). But I can't find any information on whether it's possible to specify the config file to use for building the app.
Is it just not possible (yet) or am I too stupid to find it? :)
Otherwise I would try to create the capacitor.config.json file with webpack before running the capacitor build script.
I used this article as a guide for my project.
I don't know if any option for two or more appId/webDir values exists in Capacitor.
But given your need, my suggestion is to create a custom build script in Node.js that changes the info in capacitor.config.ts (appId), builds with the right webDir (www/www-two), and then runs sync & copy to the platform.
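A rough sketch of that idea, assuming capacitor.config.json as in the question (the variant names and app IDs below are made up):

// build-variant.js - run as: node build-variant.js foo
const fs = require('fs');
const { execSync } = require('child_process');

// Per-variant settings; the values are placeholders.
const variants = {
  foo: { appId: 'com.example.foo', webDir: 'www-foo' },
  bar: { appId: 'com.example.bar', webDir: 'www-bar' },
};

const variant = variants[process.argv[2]];
if (!variant) {
  console.error('Usage: node build-variant.js <foo|bar>');
  process.exit(1);
}

// Rewrite capacitor.config.json with this variant's appId and webDir.
const config = JSON.parse(fs.readFileSync('capacitor.config.json', 'utf8'));
config.appId = variant.appId;
config.webDir = variant.webDir;
fs.writeFileSync('capacitor.config.json', JSON.stringify(config, null, 2));

// Copy the web assets and update the native platforms.
execSync('npx cap sync', { stdio: 'inherit' });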
I am attempting to load a file from a remote URL during the build so it can be webpacked. The file is an MDX file, and I am using the MDX vue-loader to load it for use within the Vue application.
The system I am deploying is tenanted with a headless CMS powering some pages across the system. I would like to explore the possibilities of loading the MDX files at build time from a remote URL.
I have placed the MDX files on GitHub Pages with the remote URL passed in as an environment variable at build time.
The result is something like this (the idea here is that I can swap the domain during build to satisfy the tenanted site requirement):
import('https://somedomain.com/content/home.mdx');
During the build, this fails with the typical error:
dependencies not found please install them using npm --save https://somedomain.com/content/home.mdx
I can have webpack ignore this import, which allows the build to complete, but then it fails to load in the browser, as browsers will only load external modules served with a JavaScript MIME type. Not to mention that the file hasn't been through the MDX loader, so even if I could get the browser to load it, I suspect it would not have been parsed into something usable.
I realise I could copy these files in from the remote during the build stage, but I was hopeful there might be a way either to let the browser pull the remote file or to have webpack download it and pack it into the output.
Does anyone have any ideas if this might be possible? Many thanks in advance.
As MDX needs pre-processing during the build, I think integration with webpack is the only way.
You can try the SaveRemoteFilePlugin webpack plugin, which lets you download the file from the remote to the local file system. But it may not be what you want, as it seems to push downloaded files directly into the dist folder without passing them through the rest of the webpack pipeline...
So probably the better option is val-loader, which allows executing your own Node scripts during the build; here you can find an example which does almost what you need: Fetching Remote data during build.
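A minimal sketch of that val-loader approach adapted to the MDX case above (the loader chain and file names are assumptions, not something tested against the MDX vue-loader):

// webpack.config.js
module.exports = {
  module: {
    rules: [
      {
        // home.mdx.js runs in Node at build time via val-loader; the MDX
        // source it returns is then compiled by the MDX loader.
        test: /\.mdx\.js$/,
        use: ['@mdx-js/vue-loader', 'val-loader'],
      },
    ],
  },
};

// home.mdx.js - executed during the build, not in the browser
const fetch = require('node-fetch');

// CONTENT_BASE_URL is the tenant domain passed in as an env var at build time.
module.exports = () =>
  fetch(`${process.env.CONTENT_BASE_URL}/content/home.mdx`)
    .then((res) => res.text())
    .then((mdx) => ({
      code: mdx,        // val-loader hands this to the next loader in the chain
      cacheable: false, // re-fetch on every build
    }));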
Trying to get the sidekick image built and having some issues. Is there any documentation other than the README.md file?
My current problem is getting the JRE requirement working, but there are others. The page says "download Oracle JRE and place it inside the working directory. Optionally if you have a company wide distribution url, use that one at a later step." and the help says "Java (JRE) download url or path inside working directory". I have not been able to get this to work.
I went to the JRE link provided and was presented with options to download an rpm file or a tar.gz file. Which one is expected? (I was unable to get either working.)
It says to place the file in the "working directory" but not exactly where. I tried both the sidekick folder and sidekick/jre, without success, no matter what I used after the -j option. Is it just the path, or should the filename be included as well? Can I get an example?
I'm running this script using my login but noticed the output folder is being created with root user and group. I see no indication that this should be run with sudo. What is the correct way to run this script?
Using debug, I see the function "download if not cached". Can I save these files (JRE, Bamboo jar file, etc.) somewhere so I don't have to worry about downloading them? If so, where should they go? It looks like I might have a problem with the wget to download the jar file, so I would like to just be able to place all of these in a folder and be done with it.
It looks like the major problem is that the script doesn't clean up after itself when it fails. Once it failed the first time, the leftover output folder caused the subsequent failures. Removing this directory between attempts helped.
As for the correct syntax for the -j JRE option: I manually downloaded the JRE and placed it in a folder called per-build-container/sidekick/stuff/. On the command line it is not just the path but the file name as well (the tar.gz, not the RPM). In my case it was
-j stuff/jre-8u251-linux-x64.tar.gz
Note that I also ran the script with sudo. That wasn't stated anywhere, but it seemed to work OK.
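Putting those pieces together, the invocation looked something like this (the script name is a stand-in for the repository's actual build script):

$ sudo ./build.sh -j stuff/jre-8u251-linux-x64.tar.gz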
Another issue I ran into was the download of the agent jar file. There is a redirect in the wget call that was not working for us. I ended up editing the script and replacing the Atlassian-based URL with the redirected one.
This addresses all the issues I ran into with the initial question.
I am trying to use the Docker API for Go that is available from github.com/docker/docker/client. So far I am able to start containers on the port that is predefined during image build. I am trying to map the port at runtime using the API; something equivalent to
docker run -p 8083:8082 -d myImage:1.0.0
I tried to do something like the following for mapping the ports:
// Equivalent of -p 8983:8082 - bind container port 8082/tcp
// to port 8983 on every host interface.
hostConfig := &container.HostConfig{
	PortBindings: nat.PortMap{
		"8082/tcp": []nat.PortBinding{
			{
				HostIP:   "0.0.0.0",
				HostPort: "8983",
			},
		},
	},
}
The problem here is that the package "nat" lives inside the API's vendor folder, and I couldn't import it directly from there. Someone on Stack Overflow suggested copying the go-connections folder into my github.com folder and removing the nested vendor directory. I did as suggested, which gives the import path:
"github.com/docker/go-connections/nat"
Now I get the following compile-time error:
src\main\createcontainer1.go:53: cannot use "github.com/docker/go-connections/nat".PortSet literal (type "github.com/docker/go-connections/nat".PortSet) as type "github.com/docker/docker/vendor/github.com/docker/go-connections/nat".PortSet in field value
src\main\createcontainer1.go:65: cannot use "github.com/docker/go-connections/nat".PortMap literal (type "github.com/docker/go-connections/nat".PortMap) as type "github.com/docker/docker/vendor/github.com/docker/go-connections/nat".PortMap in field value
Has anyone faced this issue and overcome it? I am using Go 1.8.
So you need to do more than just copy it; you need to move it. The same package located in two different locations is two different packages to the go tool (because it can't guarantee they are identical, it uses fully-qualified import paths).
If a package you're using has a vendor directory, and you need to use the packages in it, you have two options:
Move everything out of the vendor directory in that package into your $GOPATH/src
Vendor the package itself, and then move everything from the package's vendor directory into your project's vendor directory (<project root>/vendor). This is known as "flattening" your vendored dependencies, and most Go vendoring utilities (ex. Govendor or Godep) can do this, either automatically or with a flag. You can also do it manually, though.
The latter is generally the recommended strategy. The key, though, is that the package itself cannot keep a version of that library in its own vendor directory, as the go tool automatically uses the deepest vendored version of a package that it can access.
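Done manually, the flattening for the go-connections example above looks roughly like this from the project root (assuming your own vendor tree doesn't already hold conflicting copies):

# move everything from docker's nested vendor directory into the project's
# own vendor tree, then delete the nested copy so only one version remains
cp -R vendor/github.com/docker/docker/vendor/. vendor/
rm -rf vendor/github.com/docker/docker/vendor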