I've read that it's good practice to use a fresh image every day and load your code from a repository. The only way I know to add a repository is to select a package and press "+Repository". But can I add a repo when I have a clean image and no packages?
What Lukas described is standard practice, and I would start there. But the literal answer to your question is, for example:
repo := MCHttpRepository
    location: 'http://www.squeaksource.com/[whatever]'
    user: ''
    password: ''.
MCRepositoryGroup default addRepository: repo.
You could use a Gofer expression to load packages. Gofer automatically assigns the specified repositories to the loaded packages. Have a look at this collection of scripts to see how this works in practice.
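A minimal Gofer load script might look like this (the repository URL and package name are placeholders, not taken from the question):

Gofer new
    url: 'http://www.squeaksource.com/MyProject';
    package: 'MyPackage';
    load.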
Metacello also adds the necessary repositories automatically, but it requires you to create a configuration of the packages, versions and repositories first.
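Once such a configuration exists, loading it is a couple of expressions. As a sketch (the repository URL, configuration name, and version are placeholders):

Gofer new
    url: 'http://www.squeaksource.com/MyProject';
    package: 'ConfigurationOfMyProject';
    load.
((Smalltalk at: #ConfigurationOfMyProject) project version: '1.0') load.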
Is it possible to create a local dbt deps repository so that the "dbt deps" command downloads libraries from a local repository?
N.B.: our client does not want to connect to any external network.
Yes, this is possible, provided that the repositories have already been cloned or copied locally.
The dbt docs page on Packages tells you exactly how to do this.
Packages that you have stored locally can be installed by specifying the path to the project, like so:
packages:
  - local: /opt/dbt/redshift # use a local path
Local packages should only be used for specific situations, for example, when testing local changes to a package.
Note: I think it is worth reiterating the caveat given in the docs. You will now own cloning the correct versions of the packages, along with the ongoing work of keeping them up to date.
As for how this works in practice, consider the following example:
/Users/michelle/repos/my_dbt_project is where my dbt project lives (it contains dbt_project.yml and packages.yml)
/Users/michelle/repos/dbt_utils is the location where I previously cloned the dbt-utils repo
In this example my packages.yml should look like:
packages:
  - local: /Users/michelle/repos/dbt_utils # use a local path
Please note that the external package does not live within my dbt project directory but outside of it. While it should work to keep it inside the repo, that is not best practice. This external package development article goes into more depth.
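With the packages.yml above in place, running dbt deps from the project root installs the local package:

cd /Users/michelle/repos/my_dbt_project
dbt deps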
I have an NPM package with a small user base. Yesterday I created a new version and wanted to release it. I thought I might as well make use of the new GitHub Packages, so I set everything up as GitHub suggested and released!
Now the problem is that the old NPM page is still on version 2.0.2, which everyone currently uses as their dependency, while the new GitHub package is on 2.0.4. Is there a way to 'synchronize' the two? Of course, GitHub Packages uses <USER>/<PACKAGE> labeling while NPM just uses <NAME>.
Is the only option to publish on both GitHub Packages and NPM and try to move users away from the NPM page?
If you're publishing a public package, you're better off just publishing it on NPM, as that is what most developers are used to.
I use GitHub Packages at work, and the only advantage is that it is effectively free for hosting internal packages, since we are already paying for GitHub anyway. If it weren't for the zero price we wouldn't be using it.
If you really want to force all your users to migrate to GitHub Packages, and have them set up npm to work with it, you could mark your old versions as deprecated on npm and use the deprecation message to point people to the new location.
https://docs.npmjs.com/cli/v6/commands/npm-deprecate
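For example (a sketch; the package name, version range, and message are placeholders):

npm deprecate my-package@"<= 2.0.2" "my-package has moved to GitHub Packages: https://github.com/schotsl/my-package"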
Here is another solution, but there is a catch.
Change your registry.npmjs.org package content to:
index.js
export * from '@schotsl/my-package';
Now your registry.npmjs.org package is (almost) pointing to your npm.pkg.github.com package.
It is only almost pointing there because any development directory for a project downstream of registry.npmjs.org/my-package must configure the scope-to-server mapping for @schotsl/my-package to npm.pkg.github.com in a package manager config file.
In the case of the package managers npm and yarn (v1), that can be done in an .npmrc file at the same level as package.json. The required .npmrc content is:
@schotsl:registry=https://npm.pkg.github.com
# GitHub PAT token, read:packages authorization only is ok
//npm.pkg.github.com/:_authToken="ghp_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
The first line is the scope-to-server mapping.
The second line is a GitHub personal access token (PAT) with at least the read:packages permission. It is actually pretty liberal: a PAT with read:packages issued from any GitHub account will allow read access to every GitHub account's packages.
For yarn v2, the .npmrc file does not work; instead a couple of keys need to be set in .yarnrc.yml.
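As a sketch, the equivalent .yarnrc.yml entries would look something like this (scope and token are the same placeholders as above):

npmScopes:
  schotsl:
    npmRegistryServer: "https://npm.pkg.github.com"
    npmAuthToken: "ghp_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"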
Unfortunately there is no way to set the scope-to-server mapping and the token inside the registry.npmjs.org/my-package package itself.
Putting the .npmrc file in there doesn't work; it is ignored. And that wouldn't be a good solution anyway, because not all package managers read .npmrc files.
That is the 'catch': using npm.pkg.github.com packages requires package-manager-specific config settings to be made by every downstream developer.
In addition, what if two different upstream packages have colliding scope names, each mapping to a different server? The current config methodology fails in that case.
Feature proposal (not current behavior)
Ideally, there would be a common interface agreed upon by all package managers inside package.json, and the scope-to-server mapping would be defined in the package that directly references the scope. For example, in the package.json of my-package on registry.npmjs.org:
{
  "dependencies": {
    "@schotsl/my-package": "1.0.0"
  },
  "registries": {
    "@schotsl/my-package": "https://npm.pkg.github.com"
  },
  "auths": {
    "https://npm.pkg.github.com": "ghp_XXXXXXXXXXXXXXXX"
  }
}
Then downstream users would not need to configure each scope, and predictable (and risky) problems with scope or package name collisions would not occur.
But that is not the way it is. Therefore GitHub Packages (npm.pkg.github.com) doesn't really seem to be a feasible way to provide public packages that may become dependencies of other public packages. It is no problem for private packages, though.
I want to make some changes to Jitsi and install it from my own repository.
After making the changes, I want to install it on many servers, each with its own unique domain.
You can change the front end in two ways:
Option 1 (recommended)
Follow the manual installation documentation and replace the «jitsi-meet» component, which lives in /srv/jitsi-meet.
This directory must contain your version of jitsi-meet (you must build the project; follow this step, but clone your repository instead of the official one).
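For example (a sketch; the fork URL is a placeholder), building your own copy into /srv/jitsi-meet could look like:

git clone https://github.com/yourname/jitsi-meet.git /srv/jitsi-meet
cd /srv/jitsi-meet
npm install
make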
Option 2 (not recommended)
Follow the quick installation, then replace /usr/share/jitsi-meet.
This is the same step but in another directory.
Be careful: this option is faster, but each update of the jitsi-meet package (apt) can break your installation and overwrite the /usr/share/jitsi-meet directory.
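As a sketch with placeholder paths, swapping in your own build might look like:

mv /usr/share/jitsi-meet /usr/share/jitsi-meet.orig
cp -r /path/to/your/jitsi-meet /usr/share/jitsi-meet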
I am trying to use the Docker API for Go that is available from github.com/docker/docker/client. So far I am able to start containers on the port that is predefined during the image build. I am trying to map the port at runtime using the API; something equivalent to:
docker run -p 8083:8082 -d myImage:1.0.0
I tried something like the following for mapping the ports:
host_config := &container.HostConfig{
    PortBindings: nat.PortMap{
        "8082/tcp": []nat.PortBinding{
            {
                HostIP:   "0.0.0.0",
                HostPort: "8983",
            },
        },
    },
}
The problem here is that the nat package lives inside the vendor folder of the API, and I can't import anything directly from a vendor folder. Someone on Stack Overflow suggested copying the go-connections folder into the github.com folder and removing the nested vendor directory. I did as suggested, giving the import path:
"github.com/docker/go-connections/nat"
Now I get the following error at compile time:
src\main\createcontainer1.go:53: cannot use "github.com/docker/go-connections/nat".PortSet literal (type "github.com/docker/go-connections/nat".PortSet) as type "github.com/docker/docker/vendor/github.com/docker/go-connections/nat".PortSet in field value
src\main\createcontainer1.go:65: cannot use "github.com/docker/go-connections/nat".PortMap literal (type "github.com/docker/go-connections/nat".PortMap) as type "github.com/docker/docker/vendor/github.com/docker/go-connections/nat".PortMap in field value
Has anyone faced this issue and overcome it? I am using Go 1.8.
You need to do more than just copy it; you need to move it. The same package located in two different locations is two different packages to the go tool (because it can't guarantee they are identical, so it uses fully qualified import paths).
If a package you're using has a vendor directory, and you need to use the packages in it, you have two options:
Move everything out of the vendor directory in that package into your $GOPATH/src
Vendor the package itself, and then move everything from the package's vendor directory into your project's vendor directory (<project root>/vendor). This is known as "flattening" your vendored dependencies, and most Go vendoring utilities (e.g., govendor or godep) can do this, either automatically or with a flag. You can also do it manually, though.
The latter is generally the recommended strategy. The important point, though, is that the package itself cannot keep a copy of that library in its own vendor directory, as the go tool automatically uses the deepest vendored version of a package that it can access.
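As a sketch of doing the flattening manually (the paths assume the standard GOPATH layout of a Go 1.8 setup, run from your project root):

# copy docker's vendored go-connections into your project's vendor tree
mkdir -p vendor/github.com/docker
cp -r $GOPATH/src/github.com/docker/docker/vendor/github.com/docker/go-connections vendor/github.com/docker/
# then delete it from docker's vendor directory so only one copy remains
rm -rf $GOPATH/src/github.com/docker/docker/vendor/github.com/docker/go-connections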
I'm just wondering whether darcs has anything equivalent to Git's submodules.
For example, let's say I have a repo (myapp) with a folder in it called mylibrary. mylibrary doesn't really have anything to do with myapp's development; it just has to be included. mylibrary's development happens in its own repo, but when someone pulls myapp, it should also pull an up-to-date version of mylibrary. Any ideas?
My first thought: since darcs is simpler than git (i.e., no branches and remotes--instead you just use directories and URLs, and it's your task to manage them), a darcs submodule would not give much more than what you can achieve with standard things like subdirectories or files inside your darcs repo.
If you needed a submodule in order to fix a certain state of the source of the used library, you could simply put a copy of the library's repo as a subdirectory and add it to your project's darcs. Compared to git, this has the disadvantage of bloating the data transfer when someone gets your repo.
If you needed a submodule to tell those who get your repo where to get the updated source of the library (without bloating the size of your repo), you could simply put a URL and instructions into a README file, or a script, or whatever. Compared to git, the disadvantage is that the state of the library's source as it was when you used it wouldn't be recorded in your commit, so people might get another version of the library, the compilation might fail, and it wouldn't be clear why.
So, the really interesting goal of a submodule could be not just to tell people where to get the library source from (as you write in the question), but to record the state of the subproject that you have actually used for compiling your project, and not to bloat your repo for those who don't want to get the source of the subproject.
Probably, this goal could also be achieved by storing more complex metadata about the state of the subproject, and a more complex hook to get exactly that state (or--by choice--another state) of the subproject. AFAIK from the docs, there is no built-in mechanism for such submodules.
Update (found on the darcs site):
http://darcs.net/Ideas/Subrepositories
http://darcs.net/Ideas/NestedRepositories
So, darcs will notice another darcs repo inside your working tree and won't touch it. That closes off the first way I suggested above (if you leave the darcs metadata there).
The second way is like something suggested in one of the sections of the latter link. (They suggest an "uglu" script for something like this.)
Another (3rd) idea
Import the patches from the repo you intend to have as a submodule, but first move all its files into a subdirectory. If it were possible to apply such a moving patch just once, and have it be effective for all the patches you import from the repo intended as a submodule, but not for the patches you import from a "branch" of the main repo...
...well, that could be a special variant of the pull command (say, import) and of the push command (say, export) that would additionally translate the paths accordingly.
I don't know of any submodule concept for darcs, which means the usual way to refer to another (shared) repo from a darcs repo would be through symlinks.
Since symlinks aren't supported by darcs, you need to put in place a "posthook sh update-symlinks.sh" hook script to restore those links.
But you could also add to this hook a check to see first which version of the linked repo is currently present, and update it if needed (provided you have stored, in one way or another, the exact version you need for that shared repo).
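A minimal sketch of such an update-symlinks.sh (the directory layout and repo URL are assumptions, not from the question):

#!/bin/sh
# restore the symlink to the shared library repo after darcs runs the hook
ln -sfn ../mylibrary mylibrary
# fetch the shared repo first if it is missing
[ -d ../mylibrary ] || darcs get http://example.com/mylibrary ../mylibrary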
That last suggestion is actually close to the implementation of Git submodules or Mercurial subrepos.