How to force HDFS to use LDAP user's UID - authentication

I have a Cloudera cluster with HDFS and Hue services, and I'm trying to unify authentication using LDAP.
I have my LDAP server running thanks to 389-ds (not sure if it is the best way), and I can log into Hue with users from the LDAP server. When I log in for the first time, Hue creates the home directory in HDFS.
But it is not using the UID I set when I added the user to the LDAP server.
It wouldn't be a problem if I only accessed HDFS via Hue, but I also have a machine with HDFS mounted via NFS.
I'm also having problems adding LDAP authentication on the machine with the NFS mount. I can do su username (username being a user in the LDAP server) and the system adds a home directory, but I cannot authenticate via SSH using LDAP users. I need this to avoid adding local users too.
My main question is: how do I force HDFS or Hue to use the same UID I set when I create LDAP users?
More details:
I have configured LDAP in Cloudera for both Hue and Hadoop (not sure if the latter is using it properly).
I know I could, maybe, change the UID a posteriori to the one set by Hue at the first login, but that is more of a workaround than a clean solution.
For example, the potato user has UID 10104 in LDAP, but if I do ls -la /users/potato in the NFS mount, it says the folder belongs to a user with UID 3312528423.
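For what it's worth, HDFS stores file owners as user names rather than numeric IDs; it is the NFS gateway that maps those names to UIDs using the identity services of the gateway host (typically via getent). So a first check, assuming the user name and UID from above, is to see what the NFS-mount machine itself resolves:
getent passwd potato
If that returns nothing or the wrong uidNumber, pointing the host at LDAP via SSSD should line the UIDs up. A minimal sketch of /etc/sssd/sssd.conf, where the server URI and search base are assumptions to adapt:
[sssd]
services = nss, pam
domains = LDAP
[domain/LDAP]
id_provider = ldap
auth_provider = ldap
ldap_uri = ldap://ldap.example.com
ldap_search_base = dc=example,dc=com
Enabling the pam service there is also what would let SSH logins authenticate against LDAP, as asked above.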

Related

Compute Engine: Restricting SSH usernames

I want to use OS Login with GCP because we use IAM for scoping access to all other resources within GCP (storage buckets, SQL, Redis, etc.). I understand how to restrict users from accessing machines using service accounts and roles.
But I don't understand how to restrict the possible usernames that someone can use to SSH into our Compute Engine machines. Assume we have a VM configured with OS Login. The problem is that everyone connects using a CLI string like
gcloud compute ssh $MACHINE_NAME
which (possibly creates and then) logs in to a /home/$USER_DOMAIN_SUFFIX directory. So the team's shell history, relevant home directory contents (downloaded files, created scripts, etc.), and running processes are all split across different scopes (UIDs). We could soft-enforce that everyone runs something like
gcloud compute ssh $SPECIAL_USERNAME@$MACHINE_NAME
where everyone uses the same $SPECIAL_USERNAME value. But that doesn't prevent new home directories from being provisioned. It's a convention, not a software policy.
Is there a way to accomplish what I want, where I can freely choose the value of $SPECIAL_USERNAME? I don't want to be locked into the usernames generated from the user/service account email.
Using root for everything is unacceptable for a number of reasons (we want to use a non-root container runtime, and we want to limit the potential damage that can be done by this $SPECIAL_USERNAME).
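For reference, the POSIX username that OS Login generated for the calling account can be inspected with the standard profile command (shown as a sketch; the exact output fields may vary):
gcloud compute os-login describe-profile
The posixAccounts section of the output contains the username the SSH flow will use.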

gcloud compute ssh with local key & project restrictions

We have a user that is allowed to SSH into a VM on the Google Cloud Platform.
His key is added to the VM and he can SSH using
gcloud compute ssh name-of-vm
However, connecting in this way always has gcloud try to update the project-wide metadata:
Updating project ssh metadata...failed
It fails because he only has rights for accessing and administering this VM.
It's very annoying that every time he connects this way he has to wait for GCP to try to update the metadata, which is not allowed, and then to check the SSH keys on the machine.
Is there a flag in the command to skip checking/updating project-wide SSH keys?
Yes, we can 'block project wide ssh keys' on the instance, but that would mean other project admins cannot log in anymore.
I've also tried to minimise access for this user.
But, ideally, what rights should he have if he is allowed to SSH to the machine, start and stop the instance, and store data in a bucket?
What you can do is enable OS Login (enable-oslogin) for all the users you need, including admins; enabling OS Login on instances disables metadata-based SSH key configurations on those instances.
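For example, OS Login can be switched on per instance or for the whole project through metadata; a small sketch where $INSTANCE_NAME is a placeholder:
gcloud compute instances add-metadata $INSTANCE_NAME --metadata enable-oslogin=TRUE
gcloud compute project-info add-metadata --metadata enable-oslogin=TRUE
The first command covers a single instance; the second sets the project-wide default.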
The role to start, stop and connect via SSH to an instance would be roles/compute.instanceAdmin (take into account that this role is currently in beta). You can check the list of available Compute Engine roles to choose the one that best suits your needs.
To store data into a bucket, I think the most suitable role is roles/storage.objectCreator, which allows users to create objects but not delete or overwrite them.
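Granting that role might look like the following sketch, where $PROJECT_ID and the member address are placeholders:
gcloud projects add-iam-policy-binding $PROJECT_ID --member=user:jane@example.com --role=roles/storage.objectCreator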
I found this solution very useful.
Create a file called config under ~/.ssh and add the following to it. Change nickname to anything you prefer, $IP_OF_INSTANCE to the public IP of the instance, and $USER to your machine username.
Host nickname
    HostName $IP_OF_INSTANCE
    Port 22
    User $USER
    CheckHostIP no
    StrictHostKeyChecking no
    IdentityFile ~/.ssh/google_compute_engine
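To fill in $IP_OF_INSTANCE, one option is the standard describe command (shown as a sketch; $INSTANCE_NAME is a placeholder):
gcloud compute instances describe $INSTANCE_NAME --format='get(networkInterfaces[0].accessConfigs[0].natIP)'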
Now, you can simply SSH using:
ssh nickname
Note that the path on Linux and Mac is ~/.ssh while the path on Windows is something like C:\Users\<user>\.ssh
Re: #1: There's no flag on the command to change this behavior at a per-command level instead of a per-instance level ('block-project-ssh-keys', as you mentioned), but you could file a feature request at https://issuetracker.google.com/savedsearches/559662.
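If the per-instance route turns out to be acceptable after all, that metadata key can also be set from the CLI; a small sketch with $INSTANCE_NAME as a placeholder:
gcloud compute instances add-metadata $INSTANCE_NAME --metadata block-project-ssh-keys=TRUE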

Create an SSH key for other account on Google Cloud Platform

I have installed the Cloud SDK for Google Cloud. I've logged in using gcloud auth login, which redirected me to the Gmail login. I created the SSH key and even logged in via SFTP using FileZilla.
The problem is, when I log in using the Gmail auth, the SDK shell (or PuTTY?) logs me into an account that is not admin. It has created another SSH user account (named 'Acer', after my PC) and logs me into it. Because of this, SFTP starts in the /home/Acer folder. I want access to the /home/admin/web folder, but I don't have it now.
How can I create an SSH key for the admin account so that I can gain access to the folder mentioned above? Otherwise, is it possible to grant 'Acer' the permissions to access all the folders?
I have a few suggestions.
First a bit of background. If you run this command on your home workstation:
sudo find / -iname gcloud
You'll discover a gcloud configuration folder for each user on your home workstation. You'll probably see something like this:
/root/.config/gcloud
/home/Acer/.config/gcloud
If you change directory into /home/Acer/.config/gcloud/configurations you'll see a file named 'config_default'. This file will contain the default account to use for that user ('Acer').
Because you have performed gcloud auth login as that user, and during that process selected your Gmail account, the config file for that user will contain that Gmail ID/account. If you would like a user named 'admin' to log into your project, you could try adding a user named 'admin' to your home workstation and then, before attempting gcloud auth login, switching to user 'admin' on the workstation. This will generate a gcloud configuration on your home workstation for user 'admin' and propagate SSH keys etc., as sketched below.
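A minimal sketch of that flow on the workstation, assuming a Debian-like system where adduser is available:
sudo adduser admin
su - admin
gcloud auth login
After that, gcloud commands run as the local 'admin' user will use the configuration under /home/admin/.config/gcloud.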
If you want to create ssh keys manually there's some useful info here.
(For what it's worth, if you decide to use gcloud compute ssh to log into your instance from your home workstation, you can specify the user you would like to log in as. For example: gcloud compute ssh admin@INSTANCE_NAME.)
I want access to the /home/admin/web folder, but I don't have it now.
Even if you are logged into the machine as a different user (in this case 'Acer'), the folder /home/admin/web should still exist on the instance if it existed previously. If you land in folder /home/Acer, have you tried changing directory to the folder above and then listing the folders to see if /home/admin/ exists?
For example, from /home/Acer run:
$ cd ..
then
$ ls
You should be able to see /home/admin/.
Otherwise, is it possible to grant 'Acer' the permissions to access all the folders?
Yes, this is also possible. Access the instance as the project owner (the easiest way is to log into the Console as the owner of the project and use the SSH functionality in the Console). Now you can run this command:
$ sudo chown -R Acer:Acer /home/admin/web
This will make user 'Acer' owner of directory /home/admin/web and all files/directories below it (thanks to the -R switch).
Now when you next access the instance as user 'Acer', you'll be able to access /home/admin/web, with read and write capabilities, by running:
$ cd /home/admin/web
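To confirm the new ownership took effect, you can run:
$ ls -ld /home/admin/web
The owner column should now show 'Acer'.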

How to stop Google Compute Engine from resetting SFTP folder permissions when using an SSH key

I'm currently running a Google Compute Engine instance and using SFTP on the server.
Followed details to lock a user to the SFTP path using steps listed here: https://bensmann.no/restrict-sftp-users-to-home-folder/
To lock the user to a directory, the home directory of that user needs to be owned by root. Initially the setup worked correctly, but I found that Google Compute Engine sporadically "auto-resets" the permissions back to the user.
I am using an SSH key that is set in the Google Cloud Console, and that key is associated with the username. My guess is that Google Compute Engine is using this metadata and reconfiguring the folder permissions to match the user associated with the SSH key.
Is there any way to disable this "auto-reset"? Or, rather, is there a better method of hosting SFTP and locking a single user to an SFTP path without having to change the home folder ownership to root?
Set your sshd rule to apply to the google-sudoers group.
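A minimal sketch of what that sshd_config rule could look like (the directives are standard sshd_config; applying them to google-sudoers follows the suggestion above, and the chroot layout is an assumption based on the linked guide):
Match Group google-sudoers
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no
Note that sshd still requires any ChrootDirectory target to be root-owned, which is the very requirement the permission resets keep breaking.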
The tool that manages user accounts is the Google accounts daemon. You can turn it off temporarily, but it's not recommended: the daemon syncs the instance metadata's SSH keys with the Linux accounts on the VM, so if you stop it, account changes won't be picked up and SSH from the Cloud Console will probably stop working.
sudo systemctl stop google-accounts-daemon.service
That said, it may be what you want if you ultimately want to block SSH access to the VM.

Gaining root access on ec2

I am trying to write to my /var/www/ folder on the Apache server I set up on EC2. All of the permissions are set to the 'root' user, but Amazon only lets you log in to their AMI as 'ec2-user'.
I am using WinSCP. I logged in as ec2-user over SSH and executed sudo su, so I can gain root access that way. But how do I go about having that same access through SFTP (WinSCP) as well as through PuTTY?
When you SSH in, you can su to the root user whenever necessary.
As for SFTP and SCP, it sounds like you want to make your www folder owned by the user you're going to log in as, rather than by root.
It's your server, you control all the permissions, and can create as many users as you want.
Add ec2-user to the group that owns your document root (the group your Apache user belongs to).
Now you can safely use WinSCP or SFTP.
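A minimal sketch of those steps, assuming the Apache group is named apache (the Amazon Linux default; on Debian/Ubuntu it is usually www-data):
sudo usermod -aG apache ec2-user
sudo chmod -R g+w /var/www
The new group membership only takes effect on the next login, so reconnect in WinSCP before testing.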