Issue while using the Docker API for Go - cannot import "nat"

I am trying to use the Docker API for Go that is available from github.com/docker/docker/client. So far I am able to start containers on the port that is predefined at image build time. I am trying to map the port at runtime using the API; something equivalent to
docker run -p 8083:8082 -d myImage:1.0.0
I tried to do something like the following for mapping the ports:
host_config := &container.HostConfig{
    PortBindings: nat.PortMap{
        "8082/tcp": []nat.PortBinding{
            {
                HostIP:   "0.0.0.0",
                HostPort: "8983",
            },
        },
    },
}
The problem here is that the "nat" package lives inside the vendor folder of the API, and I couldn't import anything directly from that vendor folder. Someone on Stack Overflow suggested copying the go-connections folder into the github.com folder and removing the nested vendor directory. I did as suggested and created the following path:
"github.com/docker/go-connections/nat"
Now I get the following errors at compile time:
src\main\createcontainer1.go:53: cannot use "github.com/docker/go-connections/nat".PortSet literal (type "github.com/docker/go-connections/nat".PortSet) as type "github.com/docker/docker/vendor/github.com/docker/go-connections/nat".PortSet in field value
src\main\createcontainer1.go:65: cannot use "github.com/docker/go-connections/nat".PortMap literal (type "github.com/docker/go-connections/nat".PortMap) as type "github.com/docker/docker/vendor/github.com/docker/go-connections/nat".PortMap in field value
Has anyone faced this issue and overcome it? I am using Go 1.8.

So you need to do more than just copy it; you need to move it. The same package located in two different locations is treated as two different packages by the go tool (because it can't guarantee they are identical, it uses fully-qualified import paths).
If a package you're using has a vendor directory, and you need to use the packages in it, you have two options:
Move everything out of the vendor directory in that package into your $GOPATH/src
Vendor the package itself, and then move everything from the package's vendor directory into your project's vendor directory (<project root>/vendor). This is known as "flattening" your vendored dependencies, and most Go vendoring utilities (e.g. govendor or godep) can do this, either automatically or with a flag. You can also do it manually, though.
The latter is generally the recommended strategy. The key point, though, is that the package you depend on cannot keep its own copy of that library in its vendor directory, as the go tool automatically uses the deepest vendored version of a package that it can access.
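To illustrate, here is a rough sketch of the whole call once the import path is flattened. The image name and ports are taken from the question; the exact ContainerCreate signature depends on which docker/docker revision you vendor, so treat this as an outline rather than a drop-in:

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
	"github.com/docker/go-connections/nat" // the single, flattened copy
)

func main() {
	ctx := context.Background()

	cli, err := client.NewEnvClient()
	if err != nil {
		panic(err)
	}

	// Equivalent of: docker run -p 8983:8082 -d myImage:1.0.0
	config := &container.Config{
		Image:        "myImage:1.0.0",
		ExposedPorts: nat.PortSet{"8082/tcp": struct{}{}},
	}
	hostConfig := &container.HostConfig{
		PortBindings: nat.PortMap{
			"8082/tcp": []nat.PortBinding{
				{HostIP: "0.0.0.0", HostPort: "8983"},
			},
		},
	}

	resp, err := cli.ContainerCreate(ctx, config, hostConfig, nil, "")
	if err != nil {
		panic(err)
	}
	if err := cli.ContainerStart(ctx, resp.ID, types.ContainerStartOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("started container", resp.ID)
}

This compiles only once both your code and the Docker client resolve nat to the same copy, which is exactly what the flattening step above achieves.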

Related

Getting an installed module to recognize changes to config files

I have a package that uses config.json for some of its settings. I keep the package locally rather than installing it from CPAN. My problem is that when I make changes to config.json, the package doesn't recognize them, since the config file is cached elsewhere, forcing me to run zef install --force-install or delete the precomp. How can I ensure that the package always recognizes updates to the config file?
When you install packages using zef, it keeps them in the filesystem, but their names are converted into SHA1 hashes, something like
/home/jmerelo/.rakudobrew/moar-2018.03/install/share/perl6/site/sources/81436475BD18D66BFD96BBCEE07CCCDC0F368879
zef keeps track of them, however, and you can locate them using zef locate, for instance:
zef locate lib/Zef/CLI.pm6
You can run that from a program, for instance this way:
sub MAIN( Str $file ) {
    # Ask zef where the installed copy of $file lives
    my $location = qqx/zef locate $file/;
    # Grab everything after the "=>" in zef's output
    my $sha1 = ($location ~~ /\s+ \=\> \s+ (.+)/);
    say "$file → $sha1[0]";
}
which will print pretty much the same thing, except it gives you the location of the file you pass on the command line:
lib/Zef/CLI.pm6 → /home/jmerelo/.rakudobrew/moar-2018.03/install/share/perl6/site/sources/81436475BD18D66BFD96BBCEE07CCCDC0F368879
You probably need to install your config.json file in a resources directory (which is the preferred location) and then use something like that.
That said, actually installing a module you're still testing is probably not the best strategy. While you're testing, it's better to keep it in the directory you're working with and use perl6 -I<that directory>, or else use lib '<that directory>' in your code. You can just delete that when you release, or keep it, since it only adds another directory to the search path and will not harm the released module.
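If you do go the resources route mentioned above, a minimal sketch (the module name My::Module and the reader sub are illustrative) is to list the file in META6.json and read it through %?RESOURCES, which always points at the installed copy:

# META6.json must declare it:
#   "resources": [ "config.json" ]

unit module My::Module;

# %?RESOURCES resolves to the copy that zef installed with this
# distribution, so reinstalling is what refreshes it.
our sub config-text() {
    %?RESOURCES<config.json>.slurp
}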

Unable to specify the location of .nsolid-proxyrc for N|Solid Proxy

We have a situation where we want to be able to start the N|Solid Proxy/Hub from an arbitrary folder. When we try to do this, it fails because it cannot find .nsolid-proxyrc in one of the parent folders.
We took a look at the source code for N|Solid Proxy, and it looks like the library it uses, rc, allows end users to specify a file location, but N|Solid Proxy doesn't accept a CLI argument that lets us specify it. It should be easy functionality to add, but it appears to be a closed-source project.
TL;DR: We need to be able to specify the exact location of .nsolid-proxyrc when starting the hub. Is there a known workaround for this, or is there a way we could request that this feature be added to the project?
You can specify the configuration file by using the --config flag (from rc) when starting N|Solid Proxy:
$ nsolid proxy.js --config /path/to/config/file
By default it will look in the current working directory and then walk up the folder tree (the same way node_modules resolution works), then fall back to the following locations:
$HOME/.nsolid-proxyrc
$HOME/.nsolid-proxy/config
$HOME/.config/.nsolid-proxy
$HOME/.config/.nsolid-proxy/config
/etc/nsolid-proxyrc
/etc/nsolid-proxy/config

Why won't OSA_LIBRARY_PATH work as documented for JXA?

According to Apple's developer docs, the Library global allows one to import compiled scripts so they can be used as a library in one's current script. This works just fine if you do something like the code below, with myLibName.scpt located in ~/Library/Script Libraries:
myLib = Library('myLibName');
myLib.myLibMethod() // Works just fine
But the docs also claim that one can export an environment variable, OSA_LIBRARY_PATH, containing a string of colon-delimited paths, and that Library() will then defer to that list of paths before falling back to its default path, ~/Library/Script Libraries (you know, like the bash PATH variable). Here's the relevant piece of documentation; it describes the search hierarchy:
The basic requirement for a script to be a script library is its location: it must be a script document in a “Script Libraries” folder in one of the following folders. When searching for a library, the locations are searched in the order listed, and the first matching script is used:
1. If the script that references the library is a bundle, the script’s bundle Resources directory. This means that scripts may be packaged and distributed with the libraries they use.
2. If the application running the script is a bundle, the application’s bundle Resources directory. This means that script applications (“applets” and “droplets”) may be packaged and distributed with the libraries they use. It also enables applications that run scripts to provide libraries for use by those scripts.
3. Any folders specified in the environment variable OSA_LIBRARY_PATH. This allows using a library without installing it in one of the usual locations. The value of this variable is a colon-separated list of paths, such as /opt/local/Script Libraries:/usr/local/Script Libraries. Unlike the other library locations, paths specified in OSA_LIBRARY_PATH are used exactly as-is, without appending “Script Libraries”. Supported in OS X v10.11 and later.
4. The Library folder in the user’s home directory, ~/Library. This is the location to install libraries for use by a single user, and is the recommended location during library development.
5. The computer Library folder, /Library. Libraries located here are available to all users of the computer.
6. The network Library folder, /Network/Library. Libraries located here are available to multiple computers on a network.
7. The system Library folder, /System/Library. These are libraries provided by OS X.
8. Any installed application bundle, in the application’s bundle Library directory. This allows distributing libraries that are associated with an application, or creating applications that exist solely to distribute libraries. Supported in OS X v10.11 and later.
The problem is that it doesn't work. I've tried exporting the OSA_LIBRARY_PATH variable (globally, via my .zshrc file) and then running a sample script just like the one above via both Script Editor and the osascript executable. Nothing works; I get a "file not found" error. I found a thread online where the participants give up hope; it doesn't explain much. Any thoughts?
On a somewhat related note, the Scripting Additions suite provides two other methods, loadScript and storeScript, that seem like they might be useful here. Unfortunately, when you try to use them, osascript gives you the finger. I did manage to return what looked like a hexadecimal buffer from a compiled script using loadScript, though. Anyway, any insight you can shed on this would be much appreciated. Thanks.
The OSA_LIBRARY_PATH environment variable is ignored by restricted executables when running with System Integrity Protection enabled.
To work around this limitation, you can either turn off SIP, or you can use an unrestricted executable.
For instance, to make osascript unrestricted, you should first make a copy, and then re-sign it with an ad-hoc signature:
cp /usr/bin/osascript ./osascript
codesign -f -s - ./osascript
Once you have the unrestricted osascript, you can run it with the OSA_LIBRARY_PATH environment variable set like this:
OSA_LIBRARY_PATH="/path/to/libs" ./osascript path/to/script.scpt
As a lousy alternative, you can put a symlink at one of the "Script Libraries" folders that osascript would look at and point it to the folder you want. Note that the symlink must replace the entire folder; it can't just exist inside of it.
rm -rf ~/Library/Script\ Libraries
ln -s "/Your/Custom/Path/Goes/Here/" ~/Library/Script\ Libraries
Tested on 10.13.2
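To put the pieces together, here is a small illustrative example (the library name, method, and paths are made up): compile your library to /path/to/libs/myLibName.scpt, save the caller below as script.js, and run it through the re-signed copy of osascript.

// script.js: the library is resolved via OSA_LIBRARY_PATH
// instead of ~/Library/Script Libraries
var myLib = Library('myLibName');
myLib.myLibMethod();

OSA_LIBRARY_PATH="/path/to/libs" ./osascript -l JavaScript script.js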

How to create RPM subpackages using the same paths for different envs?

I would like to use an RPM spec to build subpackages for different environments (live, testing, developer) that ship the same files, so I'd have a package called name-config-live, one called name-config-testing and one called name-config-developer, each containing the same paths but with the configs corresponding to the environment it's named after.
As an example, let's say on all environments I have a file called /etc/name.conf, and on testing I want it to contain "1", on development "2" and on live "3". Is it possible to do this in the same spec, given that subpackage generation only happens at the end, not in the order I enter it? (And hopefully not with %post -n.)
I tried using BuildRoot, but it seems that's a global attribute.
I don't think there's a native way; I would do a %post like you noted.
However, I would do this (similar to something I do with an internal-only package I develop for work):
Three separate files /etc/name.conf-developer, /etc/name.conf-live, etc.
Have all three packages provide a virtual package, e.g. name-config
Have the main package require name-config
This will make rpm, yum, or whatever require that at least one of them be installed in the same transaction
Have all three packages conflict with each other
Have each config package's %post (and possibly %verify) symlink /etc/name.conf to the proper config (see the sketch after the cons below)
This also helps show the user what is happening
Cons:
It's a little hackish
rpm -q --whatprovides /etc/name.conf will say it is not owned by any package
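For reference, a rough sketch of what one of the config subpackages might look like in the spec, using placeholder names (the -testing and -developer subpackages follow the same pattern):

%package config-live
Summary:   name configuration for the live environment
Provides:  name-config
Conflicts: name-config-testing, name-config-developer

%description config-live
Configuration for the live environment.

%files config-live
/etc/name.conf-live

%post config-live
# repoint the shared path at this environment's file
ln -sf /etc/name.conf-live /etc/name.conf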

How to get RavenDB to recognize a plugin?

I'm trying to set up the Versioning bundle in RavenDB: http://ravendb.net/bundles/versioning
The installation instructions are pretty straight forward:
Simply place the Raven.Bundles.Versioning.dll in the Plugins
directory.
I've tried to do this by creating a "Plugins" directory under the Server directory (the Server directory contains Raven.Server.exe) and dropping Raven.Client.Versioning.dll into that Plugins directory.
However, when I run RavenDB after that (either from the command line or as a service), it doesn't give me any indication that it has recognized the plugin, and when I save/edit new documents no versioning is being applied.
I've tried running with the default plugin directory settings (which supposedly automatically looks in the Plugins directory), and I've tried manually adding the PluginsDirectory setting to Raven.Server.exe.config, to no avail.
Has anyone been able to get plugins working, specifically the versioning bundle? Do you have to do anything special?
Mike,
It is supposed to just work. Take a look at the statistics; you should see the versioning trigger registered there.
It is important to ensure that:
You are using the same version of the DLLs
You restart RavenDB after copying the DLLs into the Plugins directory
You don't reference another Raven/PluginsDirectory in the configuration
It is probably better to follow this up in the mailing list.
For Raven v2, you'll also need to add the bundle name to the Raven/ActiveBundles property on the database document. The names should be semicolon-delimited.
For example, I have a database called MidwestAnimalRescue. To enable the Periodic Backup bundle and the Versioning bundle, my document will look like this:
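Roughly, the Settings on that database document would carry the bundle names; the exact strings ("PeriodicBackup" and "Versioning" are assumed here) depend on your Raven version:

{
    "Settings": {
        "Raven/DataDir": "~\\Databases\\MidwestAnimalRescue",
        "Raven/ActiveBundles": "PeriodicBackup;Versioning"
    }
}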