Artifactory - remote repos permissions

I'm using Artifactory 3.0.3 (open source).
In our company, we have two repositories of our own, both hosted on a different machine than Artifactory. Let's call them:
OurRepo1 - public, any developer can download artifacts from it
OurRepo2 - private, only some developers are allowed to access and download artifacts from it
And here's the thing:
For security reasons we want OurRepo2 not to be cached by Artifactory (easy to do), but now: how can I define permissions for this OurRepo2 so that it is accessible only by some users?
When creating a new permission target I can select only local repos and caches of remote repos (e.g. OurRepo1-cache). But I don't want either of those: I want to restrict access to the remote OurRepo2 itself.
Is it possible with Artifactory?

In this case, I'd use an HTTP proxy like nginx in front of your Artifactory instances, and use rewrite rules to direct traffic to the correct back-end repo. You can then insist on certain auth credentials when trying to access OurRepo2 whilst leaving OurRepo1 free of authentication.
I have helped to manage such an "nginx + Artifactory" combination in an organisation with 100+ developers, and it worked very well.
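For illustration, here is a minimal nginx sketch of that setup - the server name, port and repository paths below are assumptions, not details from the question:

server {
    listen 443 ssl;
    server_name repo.example.com;

    # OurRepo1 is public: proxy straight through to Artifactory
    location /artifactory/OurRepo1/ {
        proxy_pass http://localhost:8081;
    }

    # OurRepo2: demand HTTP Basic credentials before proxying
    location /artifactory/OurRepo2/ {
        auth_basic           "OurRepo2 - restricted";
        auth_basic_user_file /etc/nginx/ourrepo2.htpasswd;
        proxy_pass http://localhost:8081;
    }
}

The htpasswd file can be maintained with the standard htpasswd utility.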

Related

Gitlab server: giving access to only certain ssh keys rather than any key that the user uploads

So, I am new to the GitLab server. Now, what I want to achieve is this:
Allow access to repositories only via certain SSH keys. There is a limited number of machines and a limited number of users, so if a user adds an SSH key outside this set of keys, the repo should not be cloneable there. Because my team is small, I am okay with adding those public keys to the accounts myself.
I am fine with the idea of SSH access, but currently, as an admin, I lose the ability to conveniently track or choose which SSH keys can access my repo. Can I prevent users from adding their own SSH keys?
Is there any other way to ensure this? Would HTTPS access with IP whitelisting work instead of SSH?
GitLab was, in the beginning (2011), based upon gitolite, but switched to its own mechanism in 2013.
Nowadays, it is best to declare a GitLab project private and add users to said project: that way you won't have to manage SSH or HTTPS access, and any user who is not part of that project won't be able to see or clone it (HTTPS or SSH).
In other words, repository access is no longer based on SSH keys (not for years), but is based on project visibility.
The OP adds:
even if a user is part of a project, he should only be able to clone the project on certain remote machines.
That is not a Git or GitLab feature, which means you need:
to restrict Git protocols on GitLab to SSH only
to change the gitlab-shell SSH forced-command script in order to allow only commands coming from certain IPs (see the sketch below)
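As a rough illustration of that second point, a forced-command wrapper could check the client address before handing over to gitlab-shell. This is a hypothetical sketch - the gitlab-shell path and the allow-list are assumptions, not GitLab-supplied code:

#!/bin/sh
# Hypothetical wrapper: only let listed client IPs through to gitlab-shell.
# $SSH_CONNECTION is "client_ip client_port server_ip server_port".
ALLOWED_IPS="10.0.0.5 10.0.0.6"
CLIENT_IP=${SSH_CONNECTION%% *}
for ip in $ALLOWED_IPS; do
    if [ "$CLIENT_IP" = "$ip" ]; then
        exec /opt/gitlab/embedded/service/gitlab-shell/bin/gitlab-shell "$@"
    fi
done
echo "Access denied from $CLIENT_IP" >&2
exit 1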
There is a "restrict group access by IP address" feature since GitLab 12.0 (June 2019), but... only in GitLab Ultimate (meaning: "not free").

private yum/rpm repository in AWS S3 with basic authentication

I need to create a yum repository. I want to store the files in S3 because: A) we already have a ton of files there; and B) the price is not bad (according to some definition of "not bad").
The repository needs to be private - yum clients will need to provide some kind of credentials to access it.
yum allows you to use Basic HTTP Authentication for private repos:
baseurl=https://user:pass@s3-site.whatever.com
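For reference, a minimal .repo sketch using such a URL - the repository id, name and host are placeholders:

# /etc/yum.repos.d/private.repo (hypothetical)
[private-repo]
name=Private S3-backed repo
baseurl=https://user:pass@repo.example.com/el6/
enabled=1
gpgcheck=1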
I could enable Static Website Hosting on an S3 bucket, but it doesn't seem to support Basic Auth.
There's a yum plugin called yum-s3-iam that allows you to set up access control based on the IAM role of the instance where yum is running, but this only works with instances in Amazon.
I could create a front-end instance, mount the S3 bucket on it with s3fs, and install Apache with Basic Auth, but this would require running that front-end, which I'm trying to avoid (I want to reduce the number of moving parts).
Then there's the s3auth proxy which basically does the same thing, but at a higher level. It still requires a front-end instance.
Is there a better solution, something that avoids the need to create a front-end instance?

Can I change gerrit authentication type from openid to ldap?

Our team is planning to use Gerrit. So, to get introduced, I set up a server, used OpenID for authentication, and created some test users and test projects in it.
Now we are ready to use it. But we actually prefer LDAP for real use.
So, can I change my authentication system from OpenID to LDAP? What will happen to the current users then?
I want to clear the test projects and changes. How can I do that?
Can I completely delete the existing Gerrit setup and initiate a fresh setup on the same machine? (I tried extracting the JAR in a different folder, but I faced some problems with it.)
I am using Ubuntu 12.04 as my server.
Please help.
Delete the database (you're not using the embedded H2 database anymore, but some MySQL or PostgreSQL server, aren't you?) plus the directory where Gerrit is running (the -d parameter, see the docs). Additionally, remove the Git repos if you configured them to be located on a different path.
Then all your data is gone and you can start from scratch.
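A rough shell sketch of that cleanup, assuming a MySQL reviewdb and a site directory under /home/gerrit (the names and paths are assumptions - adjust them to your installation):

# Drop the review database (skip if you stayed on embedded H2;
# deleting the site directory below covers that case).
mysql -u root -p -e 'DROP DATABASE reviewdb;'
# Remove the Gerrit site directory (whatever you passed to -d).
rm -rf /home/gerrit/review_site
# Remove the Git repos if they live outside the site directory.
rm -rf /home/gerrit/git
# Re-initialize from scratch and pick LDAP when asked for the auth type.
java -jar gerrit.war init -d /home/gerrit/review_site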

Managing commit rights in svn by delegating to project managers

We have multiple projects in an SVN repo, and for each project there are many users. As the number of users is large, it's troublesome to manage their commit rights using the auth file.
I have read somewhere that we can delegate managing users' rights to their managers by creating a text file, but I am not sure how to achieve this; perhaps hooks need to be configured for it. As I am new to SVN, I need your expert advice. Please guide me on how to achieve this, and if you already have such a hook configured, kindly share it.
How to setup access control in SVN?
I have seen this link, and the answer by VonC in it is great and perfect for me, but I don't know how to start. Can anybody help me out here, as I am not a pro in SVN and Unix?
Thanks in advance
Preface
Using a single repository for multiple projects is a Bad Idea (tm): one repo, one project
Forget immediately about the mammoth-aged SVN 1.5 - use at least 1.6 on client and server (1.8 may be the best choice)
Face
Simplified user management for SVN users can be achieved by using LDAP-based authentication instead of an ordinary file (in the case of "repository per project", the <location> from that answer will be the location of each repo with SVNPath; in the case of the old structure, a <location> must be linked to every project root) and by having different groups for different repositories in the Require ldap-group directive - read also the Apache 2.2 docs on the mod_authnz_ldap module (a sketch follows after this list). From a management POV, LDAP auth and permissions mean: each developer must be in the LDAP tree, included in one or more of the groups related to repositories
In case of an additional requirement for path-based authorization within repositories, using groups inside the authz file, you may find the LDAP Groups to Subversion Authz Groups Bridge useful, which allows you to regenerate the authz groups from LDAP data
As a result, most (if not all) SVN-related ACLs can be managed on the LDAP side only
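A minimal Apache 2.2 sketch of such a setup, assuming mod_dav_svn and mod_authnz_ldap are loaded - the repo path, LDAP URL and group DN are placeholders:

# Hypothetical httpd.conf fragment: one repo, one LDAP group
<Location /svn/project1>
    DAV svn
    SVNPath /var/svn/project1
    AuthType Basic
    AuthName "project1 repository"
    AuthBasicProvider ldap
    AuthLDAPURL "ldap://ldap.example.com/ou=people,dc=example,dc=com?uid"
    Require ldap-group cn=project1-devs,ou=groups,dc=example,dc=com
</Location>

With this in place, granting or revoking commit rights is just a matter of editing LDAP group membership, which is exactly the kind of task a project manager can be delegated.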

Should I use an FTP server as a maven host?

I would like to host a Maven repository for a framework we're working on and its dependencies. Can I just deploy my artifacts to my FTP host using mvn deploy, or should I manually deploy and/or set some things up before being able to deploy artifacts? I only have FTP access to the server I want to host the Maven repo on.
The online repository I want to use is not hosted by myself. As I say, I only have FTP access, so, if possible, I would like to use that FTP space as a Maven repository. The tools mentioned seem to work when you have full control over the host machine, or at least more than just FTP access, since you need to configure the local directories where the repositories will be placed. Is this possible?
You might want to have a look at Nexus, a Maven repository manager. We've replaced our local Maven repository with a Nexus-based one and find it tremendously useful.
I've successfully used Archiva as my repository for several years ... see http://archiva.apache.org/. It's easy to administer and allows you to configure as many repositories as you need (SNAPSHOT, internal, external, etc).
According to the book "Better Builds with Maven", the most common type of repository is HTTP, this paragraph describes what I think you need:
This chapter will assume the repositories are running from http://localhost:8081/ and that artifacts are deployed to the repositories using the file system. However, it is possible to use a repository on another server with any combination of supported protocols including http, ftp, scp, sftp and more. For more information, refer to Chapter 3.
A Maven 2 repository is simply a specific directory structure, so once you get the transport and server specifications right for the repository and deployment portion of your POMs, it should be completely transparent to your users.
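If your FTP host does allow writes (see the caveat in the last answer below), the wiring would look roughly like this - the repository id, URL, credentials and wagon-ftp version are placeholders, not tested against your setup. In the project's pom.xml:

<distributionManagement>
  <repository>
    <id>ftp-repo</id>
    <url>ftp://ftp.example.com/public_html/maven2</url>
  </repository>
</distributionManagement>
<build>
  <extensions>
    <!-- wagon-ftp provides the ftp:// transport for mvn deploy -->
    <extension>
      <groupId>org.apache.maven.wagon</groupId>
      <artifactId>wagon-ftp</artifactId>
      <version>2.10</version>
    </extension>
  </extensions>
</build>

and the credentials go into settings.xml, keyed by the same id:

<server>
  <id>ftp-repo</id>
  <username>ftpuser</username>
  <password>ftppass</password>
</server>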
You can even use Dropbox. All that you need is a public address to access the files generated with mvn deploy, with any of the protocols in the accepted answer.
I guess there are more services that can work in the same way, but I'm not certain about the URL schemes that alternatives to Dropbox may use.
https://maven.apache.org/wagon/wagon-providers/wagon-ftp/ will tell you that you can use FTP to read from an existing repository, but not to create a new one. I don't think it is impossible in principle, but no one has cared to write all the fiddly code to do the directory management via FTP.