Configure Sonatype Nexus 3 privileges for hosted Docker registry namespace using wildcard permissions

I have installed Sonatype Nexus 3 OSS with a hosted repository for Docker (a private Docker registry). I want to have a couple of users who will be able to pull/push Docker images based on their permissions.
The first way I can do it is to create several hosted repositories for Docker and then, via Security -> Privileges, use repository-view privileges to configure permissions per exact repository:
username:  repository name:    permission:
user1      docker-internal-1   nexus:repository-view:docker:docker-internal-1:read
user2      docker-internal-1   nexus:repository-view:docker:docker-internal-1:add
user3      docker-internal-2   nexus:repository-view:docker:docker-internal-2:read
user4      docker-internal-2   nexus:repository-view:docker:docker-internal-2:add
This approach works, but it requires having multiple hosted repositories for docker.
My question is: is it somehow possible to have one single hosted repository for Docker and then configure permissions based on the Docker repository namespace?
So let's say I have a repository called docker-internal and permissions like these:
username:  repository name:   permission:
user1      docker-internal    nexus:repository-view:docker:docker-internal/namespace1:read
user2      docker-internal    nexus:repository-view:docker:docker-internal/namespace1:add
user3      docker-internal    nexus:repository-view:docker:docker-internal/namespace2:read
user4      docker-internal    nexus:repository-view:docker:docker-internal/namespace2:add
Unfortunately, in the Nexus 3 documentation I haven't found a way to do this with repository-view permissions, because they only allow you to specify a repository name, not a namespace. Then there is such a thing as a wildcard, which the Sonatype docs describe as "Wildcard -> These are privileges that use patterns to group other privileges." So I've tried to create a pattern like this:
nexus:repository-view:docker:docker-internal/namespace1:read
And unfortunately it doesn't work.

We found a way to combine content selectors and privileges to support image-level permissions.
First you have to create two content selectors:
"docker-login-all" with the expression format=="docker" and path=~"/v2/". If you are support v1 protocol too, make sure to create another selector for it.
"docker-foo-selector" with an expression matching the image you want to grant access. For example to select all the releases of foo/bar-linux, the expression is format=="docker" and path=~".*/foo/bar-linux/.*"
The first selector is very important, as without it you are not able to create a rule that allow your users to login.
Then create two privileges based on content-selectors:
"docker-login-all-privilege" based on "Docker-login-all" applied on all the docker registries, with read grants. This will grant the ability to login via docker cli.
"docker-foo-privilege" based on "docker-foo-selector" applied on all the docker registries, with read grants. This will allow your users to pull only foo/bar-linux images.
Then create a role with only the two privileges, and associate it to the users. It should work.
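If you prefer to script the setup, here is a minimal sketch using the Nexus 3 REST API. This assumes a reasonably recent Nexus 3 that ships the v1 security endpoints; the host, credentials and the "*" (all repositories) value are assumptions to adapt:

# Hypothetical host and admin credentials; adjust to your instance.
NEXUS=https://nexus.example.com

# Content selector that matches the /v2/ login endpoint.
curl -u admin:ADMIN_PASSWORD -X POST "$NEXUS/service/rest/v1/security/content-selectors" \
  -H 'Content-Type: application/json' \
  -d '{"name": "docker-login-all", "description": "docker login", "expression": "format == \"docker\" and path =~ \"/v2/\""}'

# Content selector that matches one image namespace.
curl -u admin:ADMIN_PASSWORD -X POST "$NEXUS/service/rest/v1/security/content-selectors" \
  -H 'Content-Type: application/json' \
  -d '{"name": "docker-foo-selector", "description": "foo/bar-linux images", "expression": "format == \"docker\" and path =~ \".*/foo/bar-linux/.*\""}'

# Content-selector privilege granting READ on all Docker repositories;
# create docker-foo-privilege analogously with the second selector.
curl -u admin:ADMIN_PASSWORD -X POST "$NEXUS/service/rest/v1/security/privileges/repository-content-selector" \
  -H 'Content-Type: application/json' \
  -d '{"name": "docker-login-all-privilege", "description": "docker login", "actions": ["READ"], "format": "docker", "repository": "*", "contentSelector": "docker-login-all"}'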
Please be aware of unexpected behaviours when using some commands: https://issues.sonatype.org/browse/NEXUS-12220

Based on an answer from Sonatype Nexus support, it's currently not possible to do this via a wildcard and namespace in a Docker registry. So the only working way is to use separate Docker repositories and repository-view permissions.

Related

Gitlab server: giving access to only certain ssh keys rather than any key that the user uploads

So, I am new to the GitLab server. Now, what I want to achieve is this:
Allow access to repositories only for certain SSH keys. There are a limited number of machines and a limited number of users, so if a user adds an SSH key outside these sets of keys, the repo should not clone there. Because my team size is small, I am okay with adding only those public keys to the accounts myself.
I am fine with the idea of SSH access, but currently, as an admin, I lose the freedom to conveniently track or choose which SSH keys can access my repo. Can I disable users from adding SSH keys?
Is there any other way to ensure this? Instead of SSH access, would HTTPS with IP whitelisting work?
GitLab was, in the beginning (2011), based upon gitolite, but switched to its own mechanism in 2013.
Nowadays, it is best to declare a GitLab project private and add users to said project: that way you won't have to manage SSH or HTTPS access: any user who is not part of that project won't be able to see it/clone it (HTTPS or SSH).
In other words, repository access is no longer based on SSH keys (not for years), but is based on project visibility.
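If you want to script this, the GitLab REST API can set the visibility and manage membership. A minimal sketch (the project ID, user ID and token are placeholders):

# Make project 42 private, then add user 7 as a Developer (access_level 30).
curl --request PUT --header "PRIVATE-TOKEN: <token>" \
  "https://gitlab.example.com/api/v4/projects/42?visibility=private"
curl --request POST --header "PRIVATE-TOKEN: <token>" \
  --data "user_id=7&access_level=30" \
  "https://gitlab.example.com/api/v4/projects/42/members"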
The OP adds:
even if a user is part of a project, he should only be able to clone the project on certain remote machines.
That is not a Git or GitLab feature, which means you need:
to restrict Git protocols on GitLab to SSH only
to change the gitlab-shell SSH forced-command script so that it allows only commands coming from certain IPs (see the sketch below)
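A minimal sketch of such a forced-command wrapper, assuming gitlab-shell lives at the Omnibus path below. Note that gitlab-shell manages authorized_keys itself, so a customisation like this can be overwritten by upgrades; treat it as an illustration, not a supported configuration:

#!/bin/sh
# Hypothetical wrapper installed as the SSH forced command in front of gitlab-shell.
# SSH_CONNECTION is "client_ip client_port server_ip server_port".
CLIENT_IP="${SSH_CONNECTION%% *}"
for allowed in 192.0.2.10 192.0.2.11; do
  if [ "$CLIENT_IP" = "$allowed" ]; then
    # Hand off to the real gitlab-shell invocation.
    exec /opt/gitlab/embedded/service/gitlab-shell/bin/gitlab-shell "$@"
  fi
done
echo "SSH access denied from $CLIENT_IP" >&2
exit 1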
There is a "restrict group access by IP address" feature since GitLab 12.0 (June 2019), but... only in GitLab Ultimate (meaning: not free).

Azure DevOps make project read only

We have some old ADO/VSTS projects that we want to archive and make read only. Each project has work items, builds, git repos, etc...
At the moment the only methods I have found are painful:
Remove all groups except a read-only group and add users there. This is too painful and slow; we have over 300 projects to make read-only.
Create a new group, add in the other groups (e.g. project admins, contributors, etc.), then add this group to the top-level area/git repo path and set everything to DENY.
I tried this with git repos, and there are some issues with it: some permissions are not inherited down to individual users who created a git repo, and they are still able to check in.
For example, I created a READONLY group and set everything to DENY except Read permissions (the members of this group are the default groups, e.g. contributors, build admins, project admins).
However, I had a repo created by a test user before I created the READONLY group, and it seems that user still has permissions on that repo.
OK, I understand that if the permissions are set at the lower level, then they won't be overridden by what is inherited from the top-level parent. I could create a script that checks the users of every git repo and sets their check-in permissions to deny, but that is painful and I would prefer not to do it. Likewise, some projects have over 300 git repos.
FYI: I want to make the whole project read-only, not just the git repos.
Azure DevOps now has a feature called "Disable Repository".
Disable access to the repository (including builds, pull requests, etc.) but keep the repository discoverable with a warning.
It means your repo will not allow commits; even builds and pipelines cannot use it. Just go to your DevOps "Project Settings", scroll down to the "Repositories" menu and select the repo you want to disable.
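For many repos this can be scripted: the isDisabled flag on the repository object appears to be settable through the repositories REST API (this is an assumption to verify against your API version; org, project and repo ID below are placeholders, and $PAT is a personal access token):

# Disable a single repository via the Git repositories update API.
curl -u :$PAT -X PATCH \
  -H "Content-Type: application/json" \
  -d '{ "isDisabled": true }' \
  "https://dev.azure.com/myorg/MyProject/_apis/git/repositories/<repoId>?api-version=6.0"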
Yeah, you've found one of the nasty features of the Azure DevOps permission model. More specific ACLs trump less specific ACLs. Even for DENY rules.
When there is an explicit ALLOW rule on a more specific ACL, it will override the DENY on a less specific ACL.
Specificity for git is based on:
Server (TFS only)
Organization / Project Collection
Project
Default repo settings
Specific repo settings
Branch folder settings (only settable through API)
Specific branch settings
Similar hierarchies exist for other securables.
There is no easy way to strip these all, apart from scripting the action.
The Azure CLI has a devops extension which will allow you to script out what you want and can output JSON to make it easier to script.
You can use az devops security permission list to list all permissions defined for an identity (group or user), and az devops security permission reset or az devops security permission update to unset or override a given permission.
Other calls you will probably need:
az devops security group list
az devops user list
az devops security group membership (list/add/remove)
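A sketch of what such a script could look like. The namespace ID, group descriptor, project ID and repo ID are placeholders you have to look up first; bit 4 is GenericContribute in the Git namespace, which you can verify with az devops security permission namespace show:

# Find the security namespace ID for Git repositories.
az devops security permission namespace list \
  --organization https://dev.azure.com/myorg \
  --query "[?name=='Git Repositories'].namespaceId"

# List what a group can do on a repo, then deny the contribute bit.
az devops security permission list --id <namespaceId> \
  --subject <groupDescriptor> --organization https://dev.azure.com/myorg
az devops security permission update --id <namespaceId> \
  --subject <groupDescriptor> --token "repoV2/<projectId>/<repoId>" \
  --deny-bit 4 --organization https://dev.azure.com/myorg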
You can use the Azure DevOps disable repository option, which has the disadvantage that the repo no longer shows up in the list of repos under the project. This might not be desired if the code should still be readable for reference purposes.
The other method explained in one of the answers is to manually remove any write permissions using the repository settings UI. If you have a lot of access control lists on your repos or even need to do this on multiple repos, the manual approach can become time consuming. Therefore I wrote a script to automate this: https://github.com/ckadluba/RemoveAzureGitRepoWritePermissions.
It basically works like this:
.\Remove-AzureGitRepoWritePermissions.ps1 -OrgName "myorganisation" -ProjectName "MyProject" -RepoName "MyRepo"
It sets an explicit deny for the permissions GenericContribute, ForcePush, CreateBranch, CreateTag, ManageNote, PolicyExempt, PullRequestContribute and PullRequestBypassPolicy.

Sharing data between several Google projects

A question about Google Storage:
Is it possible to give r/o access to a (not world-accessible) storage bucket to a user from another Google project?
If yes, how?
I want to back up data to another Google project, in case somebody accidentally deletes all storage buckets in our project.
Yes. Access to Google Cloud Storage buckets and objects is controlled by ACLs that allow you to specify individual users, service accounts, groups, or project roles.
You can add users to any existing object through the UI, the gsutil command-line utility, or via any of the APIs.
If you want to grant one specific user the ability to write objects into project X, you need only specify the user's email:
$> gsutil acl ch -u bob.smith@gmail.com:W gs://bucket-in-project-x
If you want to say that every editor of the project my-project is permitted to write into some bucket in a different project, you can do that as well:
$> gsutil acl ch -p editors-my-project:W gs://bucket-in-project-x
The "-u" means user, "-p" means 'project'. User names are just email addresses. Project names are the strings "owners-", "viewers-", or "editors-" and then the project's ID. The ":W" bit at the end means "WRITE" permission. You could also use O or R or OWNER or READ or WRITE instead.
You can find out more by reading the help page: $> gsutil help acl ch

How to force HDFS to use LDAP user's UID

I have a Cloudera cluster with HDFS and Hue services, and I'm trying to unify authentication using LDAP.
I have my LDAP server running thanks to 389-ds (not sure if it is the best way) and I can log into Hue with users from the LDAP server. When I log in for the first time, Hue creates the home directory in HDFS.
But it is not using the UID I set when I added the user to the LDAP server.
It wouldn't be a problem if I only accessed HDFS via Hue, but I also have a machine with HDFS mounted via NFS.
I'm also having problems adding LDAP authentication on the machine with the NFS mount. I can do su username (username being a user on the LDAP server) and the system adds a home directory, but I cannot authenticate via SSH using LDAP users. I need this to avoid adding local users too.
My main question is: how can I force HDFS or Hue to use the same UID I set when I create LDAP users?
More details:
I have configured LDAP in Cloudera for both Hue and Hadoop (not sure if the latter is using it properly).
I know I could, maybe, change the UID a posteriori to the one set by Hue at the first login, but that is more a workaround than a clean solution.
For example, the potato user has UID 10104, but if I do ls -la /users/potato in the NFS mount, it says that the folder belongs to a user with UID 3312528423.

Artifactory - remote repos permissions

I'm using Artifactory 3.0.3 (open source).
In our company, we have two repositories of our own, and both are on a different machine than Artifactory. Let's call them:
OurRepo1 - public, any developer can download artifacts from it
OurRepo2 - private, only some developers are allowed to access and download artifacts from it
And here's the thing:
For security reasons we want OurRepo2 not to be cached by Artifactory (easy to do), but now, how can I define permissions so that OurRepo2 is accessible only by some users?
When I create a new permission target, I can select only local repos and caches of remote repos (e.g. OurRepo1-cache). But I don't want either of those: I want to limit access to the physical OurRepo2 itself.
Is it possible with Artifactory?
In this case, I'd use an HTTP proxy like nginx in front of your Artifactory instances, and use rewrite rules to direct traffic to the correct back-end repo. You can then insist on certain auth credentials when trying to access OurRepo2 whilst leaving OurRepo1 free of authentication.
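A minimal sketch of such an nginx front end (the host name, port and repo paths are assumptions about your layout):

server {
    listen 443 ssl;
    server_name artifactory.example.com;

    # Public repo: pass through without extra authentication.
    location /artifactory/OurRepo1/ {
        proxy_pass http://127.0.0.1:8081/artifactory/OurRepo1/;
    }

    # Private repo: require credentials at the proxy before Artifactory sees the request.
    location /artifactory/OurRepo2/ {
        auth_basic "OurRepo2 restricted";
        auth_basic_user_file /etc/nginx/ourrepo2.htpasswd;
        proxy_pass http://127.0.0.1:8081/artifactory/OurRepo2/;
    }
}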
I have helped to manage such an "nginx + Artifactory" combination in an organisation with 100+ developers, and it worked very well.