I am an experienced CI/CD engineer with a Configuration Management background in processes, methods, and tools.
However, I am not so experienced in using GitLab CI/CD for automation. I have plenty of experience using Jenkins, though, and I find myself a bit confused, as there do not seem to be truly user-private variables in GitLab pipeline setups.
Am I right?
All the variables I can manage to see, as administrator, are visible to all other administrators.
Therefore I am quite reluctant to adopt a solution such as the one proposed in https://gitlab.com/gitlab-examples/ssh-private-key/-/blob/master/.gitlab-ci.yml
Am I missing something, or will the SSH_PRIVATE_KEY not be accessible to all administrators in our setup?
Masking only goes as far as blocking visibility in logs, which is fine, but I do not want anyone to be able to access the variable content unless they log in with the account for which the private SSH key is used.
Hardly the visibility I want to ensure for an account that has SSH access to ALL environment hosts, from dev and test to QA and PROD...
Your assessment is correct: users with sufficient access (especially administrators) can reveal the values of variables set in CI/CD settings.
To meet your requirements and for more advanced control over secret access, consider using the HashiCorp Vault integration with user-based claim boundaries. Similarly, you can use OpenID Connect (OIDC) claims to authenticate to cloud providers, which can in turn be used to retrieve secrets stored with your cloud provider (e.g. AWS Secrets Manager).
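As a hedged sketch, the Vault integration can look roughly like this in .gitlab-ci.yml. All names here are placeholders: it assumes a Vault server at https://vault.example.com with a JWT/OIDC auth method configured for your GitLab instance and a KV secrets engine mounted at kv.

```yaml
deploy:
  id_tokens:
    VAULT_ID_TOKEN:
      aud: https://vault.example.com     # placeholder Vault server URL
  secrets:
    SSH_PRIVATE_KEY:
      # format: <path>/<field>@<engine mount> -- all placeholders here
      vault: production/deploy/ssh_key@kv
      token: $VAULT_ID_TOKEN
      file: true                         # value is written to a temp file
  script:
    # the variable holds the path of the temp file, not the key itself
    - chmod 600 "$SSH_PRIVATE_KEY"
```

Because the secret is fetched from Vault at job runtime under Vault's own policies, it never sits in GitLab's CI/CD variable store, so GitLab administrators cannot reveal it from the UI; access control moves to Vault's policy layer.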
Related
I want to ask a conceptual question and get advice about a possible system design.
The plan is basically to authenticate specific Gmail users to use my serverless backend application. I'm thinking about either forwarding users directly to my VPC, or authenticating them on my hosting provider's server and then forwarding them to the VPC (or directly to the Cloud Run service?).
I'd be really glad if someone experienced could guide me on the concepts and suggest design ideas for this.
As commented by @John Hanley, your question includes concepts that do not exist.
To restrict your serverless backend application on Cloud Run to specific authenticated users, work through the following possible system designs:
1) Design the IAM roles that are associated with Cloud Run, and list the permissions that are contained in each role.
2) Design how to secure and configure Cloud Run to limit access to the Cloud Run service with Identity-Aware Proxy (IAP).
3) Design how to create a Serverless VPC Access connector, and learn how to use IAP for TCP forwarding within a VPC Service Controls perimeter.
4) Implement, step by step, how to use IAP to secure access without using a Virtual Private Network (VPN). IAP simplifies implementing a zero-trust access model and takes less time to set up than a VPN, with a single point of control for managing access to your apps for remote workers both on-premises and in cloud environments.
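As a hedged illustration of limiting the service to specific users: the simplest IAM-only variant looks like the commands below (SERVICE, PROJECT_ID, and the user email are placeholders). Note that a full IAP setup as in step 2 additionally requires an HTTPS load balancer in front of the service; this sketch only covers the per-user invoker binding.

```
# Remove public access so only authenticated invocations are allowed:
gcloud run services remove-iam-policy-binding SERVICE \
  --member="allUsers" --role="roles/run.invoker" --project=PROJECT_ID

# Grant a specific Gmail user permission to invoke the service:
gcloud run services add-iam-policy-binding SERVICE \
  --member="user:someone@gmail.com" --role="roles/run.invoker" \
  --project=PROJECT_ID
```

Callers then authenticate with a Google identity token; Cloud Run rejects requests from identities without the invoker role.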
The solution to what I had in mind could be accomplished with Identity-Aware Proxy.
First, I feel very silly.
For fun/slight profit, I rent a VPS which hosts an email and web server and which I use largely as a study aid. Recently, I was in the middle of working on something and managed to lose my connection to the box directly after having accidentally changed the ownership of my home folder to an arbitrary non-root, incorrect user. As SSH denies root login and anything but public-key authentication, I'm in a bad way. Though the machine is up, I can't access it!
Assuming this is the only issue, a single chown should fix the problem, but I haven't been able to convince my provider's support team to do this.
So my question is this: have I officially goofed, or is there some novel way I can fix my setup?
I have all the passwords and reasonable knowledge of how all the following public facing services are configured:
Roundcube mail
Dovecot and Postfix running IMAPS, SMTPS and SMTP
Apache (but my websites are all located in that same home folder, and so aren't accessible. At least I now get why this was a very bad idea...)
Baikal calendar setup in a very basic fashion
phpMyAdmin, but with MySQL's file creation locked to a folder which Apache isn't serving
I've investigated some very simple ways to 'abuse' some of the other services in a way that might allow me either shell access, or some kind of chown primitive, but this isn't really my area.
Thanks!!
None of these will help you; of the services you listed, none has the ability to restore the permissions.
All the VPS providers I've used give "console" access through the web interface. This is equivalent to sitting down at the machine, including the ability to log in or reboot into recovery mode. Your hosting provider probably offers some similar functionality (for situations just like this, or for installing the operating system, etc.), and it is going to be your easiest and most effective means of recovery. Log in there as root and restore your user's permissions.
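In concrete terms, assuming the locked-out account is named alice (illustrative), the fix from the provider's console is a one-liner run as root:

```
# Run as root from the provider's out-of-band console; "alice" stands in
# for your real username.
chown -R alice:alice /home/alice

# Sanity check: the home directory should now list the correct owner again.
ls -ld /home/alice
```

If you also changed ownership of files outside your home folder, repeat for those paths; SSH in particular refuses public-key login if ~/.ssh or the home directory itself has the wrong owner or overly loose permissions.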
One thing struck me as odd:
"I haven't been able to convince my provider's support team to do this."
Is that because they don't want to do anything on your server which you aren't paying them to manage, or because they don't understand what you're asking? The latter would be quite odd to me, but the former scenario would be very typical of an unmanaged VPS setup (you have root, console access, and anything more than that is your problem).
From what I have read here, the recommendation is to use the Secret Manager to store secrets during development and then use environment variables when you deploy to IIS. I am not quite sure of the best way to go about this: I need to be able to set the same variable to different values in different IIS applications, so a system-wide environment variable setting is not going to work.
I understand that I can set variables for the application in web.config, but VS overwrites web.config on the server when you do a web deploy, even if there is no web.config in the project. I know it may not be good practice to use Web Deploy to deploy to production, but we want to do it for staging environments etc.
Is there a way to stop web deploy from overwriting web.config if it already exists on the target site?
Web.config is not used by ASP.NET Core. It's added as part of publishing for compatibility with hosting in IIS only. It's not intended for you to modify or use in any meaningful way.
The built-in options for config providers in ASP.NET Core are: JSON, command-line arguments, User Secrets, environment variables, and Azure Key Vault. However, User Secrets is only for development, and command-line arguments are difficult to utilize when hosting in IIS. (Technically, you can modify the Web.config to add command-line arguments, but as mentioned, that will be overwritten on the next publish, so it's not the most tenable approach.)
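For reference, the Web.config mechanism mentioned in the parenthetical looks roughly like this; MyApp.dll and --MySetting are illustrative, and this file is regenerated on every publish:

```xml
<configuration>
  <system.webServer>
    <handlers>
      <add name="aspNetCore" path="*" verb="*"
           modules="AspNetCoreModuleV2" resourceType="Unspecified" />
    </handlers>
    <!-- extra arguments after the dll are passed to the app and picked up
         by the command-line configuration provider -->
    <aspNetCore processPath="dotnet"
                arguments=".\MyApp.dll --MySetting=SomeValue"
                stdoutLogEnabled="false" />
  </system.webServer>
</configuration>
```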
Of the remaining choices of JSON, environment variables, and Azure Key Vault, only Azure Key Vault supports encryption. You can utilize Azure Key Vault whether you're actually hosting in Azure or not, but it's not free. It's not really expensive either, though, so it might be worth looking into.
JSON is probably the worst option, security-wise, mostly because it requires you to store your secrets in your source control, which is (or should be) a non-starter. Environment variables, while not encrypted, can be protected. However, it's difficult (though technically not impossible) to run multiple apps on the same server if they each require different secrets under the same environment variable names. You can technically set environment variables at the user level, and then you can also assign an app pool to run as a specific user. That's a lot of setup, though.
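A rough sketch of that user-level setup on the IIS host, assuming a dedicated local account svc-app1 (the account name and connection string are illustrative):

```
:: 1. Set a user-level variable while running as the app-pool account.
::    ASP.NET Core maps "__" to ":" in configuration keys, so this surfaces
::    to the app as ConnectionStrings:Default.
runas /user:.\svc-app1 cmd
setx ConnectionStrings__Default "Server=db1;Database=App1;User Id=app1;Password=..."

:: 2. In IIS Manager: set the app pool's Identity to .\svc-app1 and enable
::    "Load User Profile" so user-level environment variables are loaded
::    into the worker process.
```

A second app can then run under a different account with a different value for the same variable name, which is exactly the per-application separation asked about.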
That said, it's also entirely possible to create your own config providers, which means you can technically use whatever you like. For example, you could write a SQL Server config provider and store your credentials there. You'd still need to configure the connection string somewhere, but perhaps you could use the same connection string for the config for all the sites.
web.config cannot be used for secrets, as it's a non-encrypted plain-text file.
Any other file-based config sources (like appsettings.json) that can be added via the new Configuration API should not be used unless they are encrypted.
It is OK to use environment variables, but only if you can guarantee that it's not possible to take a snapshot of environment variables on the prod machine. See "Environment Variables Considered Harmful for Your Secrets" for more details about this risk.
If you are hosting on Azure, look into Azure Key Vault.
There are a lot of tools/services, like HashiCorp Vault, that help deal with sensitive data.
I'm setting up a Jenkins server for a project of my company.
I configured the security realm to use LDAP and had no problems until we decided to hire an external development team along with our devs.
We cannot create LDAP accounts for them for various reasons; however, it is essential that everyone uses the CI server together to collaborate and to get the benefits of a CI server.
Is it possible to add external users who are not in LDAP?
I can think of only one solution so far:
use Jenkins's own user database instead of LDAP and create all users manually.
Any other solutions for this situation?
Seems like PAM is the way to go.
I haven't done it and am looking into doing it, but here is a suggestion from the lead Jenkins developer: http://jenkins-ci.361315.n4.nabble.com/Mixed-mode-authentication-td3447248.html
I don't think so. Probably the best you can do is try to persuade your network security team to add the external development team to your LDAP system, giving them a different security role.
For example you could create roles for jenkins_admin, jenkins_staff, jenkins_contractor and then give them different privileges but without rights to other resources.
How will you allow your external development team to commit to your SCM?
For those in the know, what recommendations do you have for storing passwords in Windows Azure configuration file (which is accessed via RoleManager)? It's important that:
1) Developers should be able to connect to all production databases while testing on their own local boxes, which means using the same configuration file, and
2) Because developers need the same configuration file (or a very similar one) as what is deployed, passwords should not be legible.
I understand that even if passwords in the configuration were not legible, developers could still debug/watch to grab the connection strings, and while this is not desirable, it is at least acceptable. What is not acceptable is people being able to read these files and grab connection strings (or other locations that require passwords).
Best recommendations?
Thanks,
Aaron
Hmm, devs are not supposed to have access to production databases in the first place. That's inherently insecure, no matter whether it's on Azure or somewhere else. Performing live debugging against a production database is risky business, as a simple mistake is likely to trash your whole production. Instead, I would suggest duplicating the production data (possibly as an overnight process) and letting the devs work against a non-prod copy.
I think it may be partially solved by a kind of credential storage service.
I mean a kind of service that does not need passwords, but allows access only to machines and SSPI-authenticated users that are white-listed.
This service could be a simple Web API hosted behind SSL, with principles like these:
1) The secured pieces have a kind of ACL: an IP whitelist, a machine-name-based or certificate-based whitelist per named resource, or a mix of these.
2) All changes to the stored data are made only via RDP or SSH access to the server hosting the service.
3) The secured pieces of information are accessed only via SSL, and the API is read-only.
4) The client must pre-confirm its own permissions and obtain a temporary token with a call to an API such as
https://s.product.com/
5) The client must provide a certificate, and the machine identity must match the logical whitelist data for the resource on each call.
6) Requesting data then looks like this:
Url: https://s.product.com/resource-name
Header: X-Ticket: the value obtained at step 4, until it expires
Certificate: the same certificate as used at step 4
So, instead of a username and password, it is possible to store an alias for such a secured resource in the connection string; in code, this alias is replaced by the real username and password, obtained as in step 6, in a SQL connection factory. The alias can be specified as a username in a special format like obscured#s.product.com/product1/dev/resource-name
Dev and prod instances can have different credential aliases, like product1/dev/resource1 and product1/staging/resource1 and so on.
So, only by debugging the prod server, sniffing its traffic, or embedding logging/emailing code at compilation time would it be possible to learn the production credentials for an actual secured resource.
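To make the idea concrete, here is a minimal sketch in Python of the client-side alias handling. It only parses the hypothetical alias format from above (obscured#s.product.com/product1/dev/resource-name); the two HTTP calls (obtain a ticket, then fetch the credentials with an X-Ticket header) are indicated in comments, since the service itself is hypothetical.

```python
def parse_alias(alias: str) -> tuple[str, str]:
    """Split 'obscured#<host>/<resource-path>' into (host, resource_path)."""
    marker, sep, rest = alias.partition("#")
    if marker != "obscured" or not sep or "/" not in rest:
        raise ValueError(f"not a secured-resource alias: {alias!r}")
    host, _, resource = rest.partition("/")
    return host, resource

host, resource = parse_alias("obscured#s.product.com/product1/dev/resource-name")
print(host)      # s.product.com
print(resource)  # product1/dev/resource-name

# A real SQL connection factory would then:
#   1. GET https://<host>/ with a client certificate to obtain an X-Ticket token
#   2. GET https://<host>/<resource> with the X-Ticket header and the same
#      certificate, and substitute the returned username/password into the
#      connection string before opening it.
```

This keeps real credentials out of config files entirely; only the alias, which is useless without a whitelisted machine and certificate, ever appears in the connection string.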