Compute Engine: Restricting SSH usernames - authentication

I want to use OS Login with GCP because we use IAM for scoping access to all other resources within GCP (storage buckets, SQL, Redis, etc.). I understand how to restrict users from accessing machines using service accounts and roles.
But, I don't understand how to restrict the possible usernames that someone can use to SSH into our Compute Engine machines. Assume we have a VM configured with OS Login. The problem is that everyone connects using a CLI string like
gcloud compute ssh $MACHINE_NAME, which (possibly creates and then) logs in to a /home/$USER_DOMAIN_SUFFIX directory. So the team's shell history, relevant home directory contents (downloaded files, created scripts, etc.), and running processes all end up under a different scope (UID) for each person. We could soft-enforce that everyone runs something like gcloud compute ssh $SPECIAL_USERNAME@$MACHINE_NAME, where everyone uses the same $SPECIAL_USERNAME value, but that doesn't prevent new home directories from being provisioned. It's a convention, not a software policy.
Is there a way to accomplish what I want, where I can freely choose the value of $SPECIAL_USERNAME? I don't want to be locked in to the generated usernames based on the user/service account email.
Using root for everything is unacceptable for a number of reasons (we want to use a non-root container runtime and we want to limit potential damage done by this $SPECIAL_USERNAME).
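For reference, a quick way to confirm which POSIX username OS Login will generate for a given identity is to inspect its profile; this only reads the profile, it doesn't change it:
# Show the OS Login profile for the currently authenticated gcloud account,
# including the generated posixAccounts username (e.g. user_example_com)
gcloud compute os-login describe-profile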

Related

Gitlab server: giving access to only certain ssh keys rather than any key that the user uploads

So, I am new to the GitLab server. Now, what I want to achieve is this:
Allow access to repositories only from certain SSH keys. There are a limited number of machines and a limited number of users, so if a user adds an SSH key outside this set of keys, the repository should not be cloneable there. Because my team is small, I am okay with being the one who adds those public keys to the accounts.
I am fine with the idea of SSH access, but currently, as an admin, I lose the ability to conveniently track or choose which SSH keys can access my repo. Can I disable users from adding SSH keys?
Is there any other way to ensure this? Would HTTPS access with IP whitelisting, instead of SSH, work?
GitLab was, in the beginning (2011), based upon gitolite, but switched to its own mechanism in 2013.
Nowadays, it is best to declare a GitLab project private and add users to said project: that way you won't have to manage SSH or HTTPS access, and any user who is not part of that project won't be able to see or clone it (HTTPS or SSH).
In other words, repository access is no longer based on SSH keys (not for years), but is based on project visibility.
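For example, flipping an existing project to private can be done through the UI or the Projects API; a minimal sketch (the host, project ID 42, and token are placeholders):
# Hypothetical example: mark project 42 as private on a self-hosted GitLab
curl --request PUT --header "PRIVATE-TOKEN: <your-token>" \
  "https://gitlab.example.com/api/v4/projects/42?visibility=private"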
The OP adds:
even if a user is part of a project, he should only be able to clone the project on certain remote machines.
That is not a Git or GitLab feature, which means you need:
to restrict Git protocols on GitLab to SSH only, and
to change the gitlab-shell SSH forced-command script so that it only allows commands coming from certain IPs (see the sketch below).
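A rough sketch of such a wrapper, assuming an Omnibus install where gitlab-shell lives under /opt/gitlab/embedded/service/gitlab-shell (that path and the allowed IPs are assumptions, and this is not an officially supported setup):
#!/bin/sh
# Hypothetical wrapper run as the forced command instead of gitlab-shell directly.
# The allowed source IPs and the gitlab-shell path are placeholders.
ALLOWED_IPS="203.0.113.10 203.0.113.11"
CLIENT_IP=$(echo "$SSH_CLIENT" | awk '{print $1}')
for ip in $ALLOWED_IPS; do
  if [ "$CLIENT_IP" = "$ip" ]; then
    exec /opt/gitlab/embedded/service/gitlab-shell/bin/gitlab-shell "$@"
  fi
done
echo "SSH access denied from $CLIENT_IP" >&2
exit 1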
There has been a "restrict group access by IP address" feature since GitLab 12.0 (June 2019), but... only in GitLab Ultimate (meaning: "not free").

gcloud created a new account when submitting a new SSH key-pair and now I cannot access the original one

I just started with the Google Cloud Platform and created my first VM instance (Debian).
It all worked in a pretty straightforward way: I hit the SSH button next to my instance and it opened up a command-line interface in a new browser window. My username was the handle (pre-@) part of my gmail address.
However, I wanted to use Terminal on my MacBook as a CLI for accessing it. Looking at the guides, this seemed to be a long, convoluted process. I followed this process (detailed below), but now I can only access some new account on the VM; the username is my full gmail address this time (but with underscores replacing non-alphabet characters, so like the original but with "_gmail_com" tacked on to the end). I can no longer access the original account, which seems to be the proper account with admin privileges. Note that I can sudo into the root account and open up the directories and files owned by the original account, but this seems very dumb.
I've tried posting in the forum for this stuff, Google's group for Google Compute, "gce-discussion", but my posts are held for approval for some reason. It's as though Google are just hoping I cave and pay for technical support.
My aim is to have a python session running a discord bot that continues while I log off. It'd also be good to be able to serve up files (images) via http.
Thank you for any help you can be!
The steps I followed in the convoluted process given in the guide are as follows:
I created an SSH keypair (private and public)
I downloaded and installed the Google SDK to get the gcloud CLI application
I issued the gcloud command to set the public key up on my instance
it had me log in at a google page (OAuth-like thing)
I started an SSH session on Terminal, invoking the file containing my private key, trying with different permutations of options
finally got it to connect and log in using my-handle_gmail_com (i.e. the second username on my instance)
when I tried to use SSH from within the Google Cloud Platform page, the browser-based CLI automatically logged me into this same second account, "my-handle_gmail_com". So now I have no access to the original.
Thanks!
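(For context, the guide's process corresponds roughly to the commands below; the key path, comment, zone, and instance name are placeholders rather than the guide's exact values.)
# Generate a key pair, authenticate gcloud, then connect with an explicit private key
ssh-keygen -t rsa -f ~/.ssh/gce-key -C "my-handle@gmail.com"
gcloud auth login
gcloud compute ssh my-instance --zone=us-central1-a --ssh-key-file=~/.ssh/gce-key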

How to disable Google compute engine from resetting SFTP folder permissions when using SSH-Key

Currently running a Google compute engine instance and using SFTP on the server.
Followed details to lock a user to the SFTP path using steps listed here: https://bensmann.no/restrict-sftp-users-to-home-folder/
To lock the user to a directory, the home directory of that user needs to be owned by root. Initially, the setup worked correctly, but I found that Google Compute Engine sporadically "auto-resets" the permissions back to the user.
I am using an SSH key that is set in the Google Cloud Console, and that key is associated with the username. My guess is that Google Compute Engine is using this metadata and reconfiguring the folder permissions to match the user associated with the SSH key.
Is there any way to disable this "auto-reset"? Or, rather, is there a better method to hosting SFTP and locking a single user to a SFTP path without having to change the home folder ownership to root?
Set your sshd rule to apply to the google-sudoers group.
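For example, a Match block in sshd_config along the lines of the linked guide might look like this (the group to match on and the chroot path are assumptions; adapt them to your setup):
# Hypothetical sshd_config snippet: chroot members of a group into an SFTP-only tree
Match Group google-sudoers
    ChrootDirectory /srv/sftp/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no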
The tool that manages user accounts is the accounts daemon. You can turn it off temporarily, but it's not recommended. The tool syncs the instance metadata's SSH keys with the Linux accounts on the VM. If you do this, any account changes won't be picked up, and SSH from the Cloud Console will probably stop working.
sudo systemctl stop google-accounts-daemon.service
That said, it may be what you want if you ultimately want to block SSH access to the VM.
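If you do want that to persist across reboots, the usual systemd pattern applies (again, only if you accept the consequences above):
# Stop the daemon now and keep it from starting at the next boot
sudo systemctl stop google-accounts-daemon.service
sudo systemctl disable google-accounts-daemon.service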

S3 and semi-public bucket

I am building some small devices running Debian. They need to sync an S3 bucket to a local folder. I have installed s3tools, and s3cmd sync seems to be the perfect tool. But I have to supply the access credentials, and that seems VERY insecure. I will not be controlling the units once they ship, so I need to somehow use the tool without supplying the credentials - AND I need to make sure the credentials cannot delete anything in the bucket.
Does anyone have an idea as to how I go about this?
Regards, Jacob
Use IAM. It allows creation of AWS credentials with predefined permissions, which are under your control.
So you will create one identity per device. You are free to restrict access to only certain buckets and keys.
You will not be able to update the "device" credentials on your devices (this is simply your constraint), but in case some of your credentials turn out to be compromised, you still have the option to block them via IAM.
And for your primary "root" identity, I strongly recommend using two-factor authentication (and of course, never put it on a device you do not have control of).
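As a sketch of what such a per-device identity could look like, a read/list-only policy with no delete permissions might be created and attached like this (the user name, policy name, and bucket name are placeholders):
# Hypothetical per-device IAM user with read-only access to one bucket (no delete)
aws iam create-user --user-name device-001
aws iam put-user-policy --user-name device-001 --policy-name s3-read-only --policy-document '{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": ["s3:ListBucket"], "Resource": "arn:aws:s3:::example-device-bucket"},
    {"Effect": "Allow", "Action": ["s3:GetObject"], "Resource": "arn:aws:s3:::example-device-bucket/*"}
  ]
}'
aws iam create-access-key --user-name device-001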

Should I use the account-level access keys in AWS or should I stick with user-specific ones?

I'm storing all my content in AWS S3 and I would like to know which is the best approach to retrieve my images:
should I use the account access keys or should I create a user with the correct policies and then use the access keys for that "user"?
Always always always create users with their own IAM policies. You should never use the root account credentials to do anything if you can help it.
It's like permanently running commands on your local machine as the root user. The account-level access and secret access keys are the absolute keys to the kingdom. With them, a hacker, malicious employee, or well-intentioned-but-prone-to-accidents administrator could completely destroy every AWS resource you have, download anything off them, and in general cause chaos and discord. Even machines with pem files aren't safe. A root-level user could just cut an AMI off an existing machine.
Take a look at the IAM policy generator. Writing JSON policies is not fun and is error-prone, but tools like that one will help you get most of the way there.
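For instance, a minimal read-only policy of the kind the generator produces might look like this for a bucket of images (the bucket name YOUR-IMAGE-BUCKET is a placeholder):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::YOUR-IMAGE-BUCKET/*"
    }
  ]
}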