How to create a user and copy the corresponding pub file to authorized_keys using AWS CloudFormation?

I am having trouble creating a user and copying the corresponding public key file, named authorized_keys, into the .ssh folder on the instance using AWS CloudFormation. I am doing this because I want to connect as this user over SSH. When I check the system log of the created instance, it does not look like the user is created or that any authorized_keys file is copied into the .ssh directory.
This is my code:
LinuxEC2Instance:
  Type: AWS::EC2::Instance
  Metadata:
    AWS::CloudFormation::Init:
      config:
        users:
          ansible:
            groups:
              - "exampleuser"
            uid: 1
            homeDir: "/home/exampleuser"
        files:
          /home/exampleuser/.ssh/authorized_keys:
            content: !Sub |
              '{{ resolve:secretsmanager:
              arn:aws:secretsmanager:availability-zone:account-id:secret:keyname:
              SecretString:
              keystring }}'
            mode: "000600"
            owner: "exampleuser"
            group: "exampleuser"
Am I missing something needed for the user to be created and the file to be copied?

To use AWS::CloudFormation::Init you have to invoke it explicitly from your UserData via the cfn-init helper script.
An example of such a UserData for Amazon Linux 2 is as follows:
UserData:
  Fn::Base64: !Sub |
    #!/bin/bash -xe
    yum update -y
    yum install -y aws-cfn-bootstrap
    /opt/aws/bin/cfn-init -v \
      --stack ${AWS::StackName} \
      --resource LinuxEC2Instance \
      --region ${AWS::Region}
If there are issues, log in to the instance and inspect log files such as /var/log/cfn-init.log and /var/log/cloud-init-output.log to look for error messages.
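For example, a quick check from a shell on the instance (assuming the default Amazon Linux 2 log locations):
sudo tail -n 100 /var/log/cfn-init.log           # result of each cfn-init config step
sudo tail -n 100 /var/log/cloud-init-output.log  # everything the UserData script printed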

Related

Using the `s3fs` python library with Task IAM role credentials on AWS Batch

I'm trying to get an ML job to run on AWS Batch. The job runs in a docker container, using credentials generated for a Task IAM Role.
I use DVC to manage the large data files needed for the task, which are hosted in an S3 repository. However, when the task tries to pull the data files, it gets an access denied message.
I can verify that the role has permissions to the bucket, because I can access the exact same files if I run an aws s3 cp command (as shown in the example below). But, I need to do it through DVC so that it downloads the right version of each file and puts it in the expected place.
I've been able to trace down the problem to s3fs, which is used by DVC to integrate with S3. As I demonstrate in the example below, it gets an access denied message even when I use s3fs by itself, passing in the credentials explicitly. It seems to fail on this line, where it tries to list the contents of the file after failing to find the object via a head_object call.
I suspect there may be a bug in s3fs, or in the particular combination of boto, http, and s3 libraries. Can anyone help me figure out how to fix this?
Here is a minimal reproducible example:
Shell script for the job:
#!/bin/bash
AWS_CREDENTIALS=$(curl http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI)
export AWS_DEFAULT_REGION=us-east-1
export AWS_ACCESS_KEY_ID=$(echo "$AWS_CREDENTIALS" | jq .AccessKeyId -r)
export AWS_SECRET_ACCESS_KEY=$(echo "$AWS_CREDENTIALS" | jq .SecretAccessKey -r)
export AWS_SESSION_TOKEN=$(echo "$AWS_CREDENTIALS" | jq .Token -r)
echo "AWS_ACCESS_KEY_ID=<$AWS_ACCESS_KEY_ID>"
echo "AWS_SECRET_ACCESS_KEY=<$(cat <(echo "$AWS_SECRET_ACCESS_KEY" | head -c 6) <(echo -n "...") <(echo "$AWS_SECRET_ACCESS_KEY" | tail -c 6))>"
echo "AWS_SESSION_TOKEN=<$(cat <(echo "$AWS_SESSION_TOKEN" | head -c 6) <(echo -n "...") <(echo "$AWS_SESSION_TOKEN" | tail -c 6))>"
dvc doctor
# Succeeds!
aws s3 ls s3://company-dvc/repo/
# Succeeds!
aws s3 cp s3://company-dvc/repo/00/0e4343c163bd70df0a6f9d81e1b4d2 mycopy.txt
# Fails!
python3 download_via_s3fs.py
download_via_s3fs.py:
import os
import s3fs
# Just to make sure we're reading the credentials correctly.
print(os.environ["AWS_ACCESS_KEY_ID"])
print(os.environ["AWS_SECRET_ACCESS_KEY"])
print(os.environ["AWS_SESSION_TOKEN"])
print("running with credentials")
fs = s3fs.S3FileSystem(
    key=os.environ["AWS_ACCESS_KEY_ID"],
    secret=os.environ["AWS_SECRET_ACCESS_KEY"],
    token=os.environ["AWS_SESSION_TOKEN"],
    client_kwargs={"region_name": "us-east-1"},
)
# Fails with "access denied" on ListObjectsV2
print(fs.exists("company-dvc/repo/00/0e4343c163bd70df0a6f9d81e1b4d2"))
Terraform for IAM role:
data "aws_iam_policy_document" "standard-batch-job-role" {
# S3 read access to related buckets
statement {
actions = [
"s3:Get*",
"s3:List*",
]
resources = [
data.aws_s3_bucket.company-dvc.arn,
"${data.aws_s3_bucket.company-dvc.arn}/*",
]
effect = "Allow"
}
}
Environment
OS: Ubuntu 20.04
Python: 3.10
s3fs: 2023.1.0
boto3: 1.24.59
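One thing worth ruling out (a diagnostic sketch, not a confirmed fix): let botocore resolve the Task IAM role credentials itself through the container credentials provider instead of exporting and passing them manually, e.g.:
import s3fs

# No explicit key/secret/token: botocore's default credential chain reads
# AWS_CONTAINER_CREDENTIALS_RELATIVE_URI inside an AWS Batch/ECS task.
fs = s3fs.S3FileSystem(client_kwargs={"region_name": "us-east-1"})
print(fs.exists("company-dvc/repo/00/0e4343c163bd70df0a6f9d81e1b4d2"))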

Adding a file whose content comes from Secrets Manager to a machine on startup using AWS CloudFormation

I am using CloudFormation to provision a Linux instance. During startup I want to add a file to a specific folder, with content that comes from a SecretString stored in Secrets Manager.
I tried to add the file using UserData and Metadata; however, instead of writing the resolved content from Secrets Manager into the file, it just writes the reference string that points at the content. This is my code:
Metadata:
  AWS::CloudFormation::Init:
    config:
      files:
        /home/ansible/.ssh/authorized_keys:
          content: !Sub |
            '{{ resolve:secretsmanager:
            arn:aws:secretsmanager:eu-central-1:account:secret:secretname:
            SecretString:
            secretstringpath }}'
          mode: "000644"
          owner: "ansible"
          group: "ansible"
Properties:
  UserData:
    Fn::Base64: !Sub |
      #!/bin/bash -xe
      yum update -y
      groupadd -g 110 ansible
      adduser ansible -g ansible
      mkdir /home/ansible/.ssh
      yum install -y aws-cfn-bootstrap
      /opt/aws/bin/cfn-init -v \
        --stack ${AWS::StackName} \
        --resource LinuxEC2Instance \
        --region ${AWS::Region}
      cat /home/ansible/.ssh/authorized_keys
The cat command prints this:
{{ resolve:secretsmanager:
arn:aws:secretsmanager:eu-central-1:account:secret:secretname:
SecretString:
secretstringpath }}
instead of the actual secret value.
How do I ensure that the resolved content of the secret is written to the file?
You cannot use dynamic references to Secrets Manager within AWS::CloudFormation::Init.
It could be as simple as switching your quotation marks from single to double:
Metadata:
  AWS::CloudFormation::Init:
    config:
      files:
        /home/ansible/.ssh/authorized_keys:
          content: !Sub |
            "{{ resolve:secretsmanager:
            arn:aws:secretsmanager:eu-central-1:account:secret:secretname:
            SecretString:
            secretstringpath }}"
          mode: "000644"
          owner: "ansible"
          group: "ansible"

Adding a user during startup of a machine using CloudFormation

I am trying to create a user using CloudFormation after startup of a Linux machine.
I use the following code to do so:
Metadata:
  AWS::CloudFormation::Init:
    config:
      groups:
        ansible: {}
      users:
        ansible:
          groups:
            - "ansible"
          homeDir: "/home/ansible"
      files:
        /home/ansible/.ssh/authorized_keys:
          content: !Sub |
            '{{ resolve:secretsmanager:
            arn:aws:secretsmanager:eu-central-1:account:secret:secretname:
            SecretString:
            secretstringpath }}'
          mode: "000644"
          owner: "ansible"
          group: "ansible"
Properties:
  UserData:
    Fn::Base64: !Sub |
      #!/bin/bash -xe
      yum update -y
      yum install -y aws-cfn-bootstrap
      /opt/aws/bin/cfn-init -v \
        --stack ${AWS::StackName} \
        --resource LinuxEC2Instance \
        --region ${AWS::Region}
However, during startup I get the following error message:
[ 96.72999017] cloud-init[2959]: Error occurred during build: Failed to add user ansible
What does this error mean? It does not seem to work as expected the way I am doing it.
Before you can assign users to custom groups, you have to create those groups.
For that, AWS::CloudFormation::Init provides the groups option.
For example:
groups:
  ansible: {}
For anyone coming across the Error occurred during build: Failed to add user error mentioned by Benny above, I managed to solve this by creating a second config and creating the users within it:
Metadata:
  AWS::CloudFormation::Init:
    configSets:
      ascending:
        - "config1"
        - "config2"
      descending:
        - "config2"
        - "config1"
    config1:
      groups:
        ansible: {}
    config2:
      users:
        ansible:
          groups:
            - "ansible"
          homeDir: "/home/ansible"

Azure ACS: Azure file volume not working

I've been following the instructions on GitHub to set up an Azure Files volume.
apiVersion: v1
kind: Secret
metadata:
  name: azure-files-secret
type: Opaque
data:
  azurestorageaccountname: Yn...redacted...=
  azurestorageaccountkey: 3+w52/...redacted...MKeiiJyg==
I then in my pod config have:
...stuff
volumeMounts:
- mountPath: /var/ccd
name: openvpn-ccd
...more stuff
volumes:
- name: openvpn-ccd
azureFile:
secretName: azure-files-secret
shareName: az-files
readOnly: false
Creating the containers then fails:
MountVolume.SetUp failed for volume "kubernetes.io/azure-file/007adb39-30df-11e7-b61e-000d3ab6ece2-openvpn-ccd" (spec.Name: "openvpn-ccd") pod "007adb39-30df-11e7-b61e-000d3ab6ece2" (UID: "007adb39-30df-11e7-b61e-000d3ab6ece2") with: mount failed: exit status 32 Mounting command: mount Mounting arguments: //xxx.file.core.windows.net/az-files /var/lib/kubelet/pods/007adb39-30df-11e7-b61e-000d3ab6ece2/volumes/kubernetes.io~azure-file/openvpn-ccd cifs [vers=3.0,username=xxx,password=xxx,dir_mode=0777,file_mode=0777] Output: mount error(13): Permission denied Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
I was previously getting password errors because I hadn't base64-encoded the account key, but that is resolved now and I get the more generic Permission denied error, which I suspect may be on the mount point rather than on the file storage. In any case, I need advice on how to troubleshoot this further.
This appears to be an authentication error against your storage account. Base64-decode your account key, then validate it using an Ubuntu VM in the same region as the storage account.
Here is a sample script to validate that the Azure Files share mounts correctly:
if [ $# -ne 3 ]
then
  echo "you must pass arguments STORAGEACCOUNT STORAGEACCOUNTKEY SHARE"
  exit 1
fi

ACCOUNT=$1
ACCOUNTKEY=$2
SHARE=$3
MOUNTSHARE=/mnt/${SHARE}

apt-get update && apt-get install -y cifs-utils
mkdir -p /mnt/${SHARE}
mount -t cifs //${ACCOUNT}.file.core.windows.net/${SHARE} ${MOUNTSHARE} -o vers=2.1,username=${ACCOUNT},password=${ACCOUNTKEY}
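For example, save it as mount_test.sh (the filename is arbitrary) and run it as root on the test VM with your real values:
sudo bash mount_test.sh mystorageaccount '<base64-decoded-account-key>' az-files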

Adding custom ssh key to vagrant

I am testing Ansible provisioning locally and using Vagrant to simulate an external machine. How do I add my own key for the vagrant and root users inside the Vagrant box?
In your Vagrantfile you can use something like:
## Ansible Provisioning
cfg.vm.provision :ansible do |ansible|
  ansible.playbook = "vagrant-provision.yml"
  ## Debugging
  ansible.verbose = true
  ansible.verbose = "vvvvv"
end
Create a file called vagrant-provision.yml in the same directory as your Vagrantfile. I am assuming you're using Ubuntu; you might want to amend the groups for other systems.
---
#
# This playbook deploys your keys to the vagrant box
#
- name: Provision my keys
  hosts: all
  become: true
  vars:
    localuser: "{{ lookup('env', 'USER') }}"
  tasks:
    - name: Create your local user
      user:
        name: "{{ localuser }}"
        home: "/home/{{ localuser }}"
        shell: /bin/bash
        append: true
        groups: admin
        comment: "{{ localuser }}"
    - name: Put your authorized_key
      authorized_key:
        key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
        user: "{{ localuser }}"
        manage_dir: true
So when the Vagrant box comes up, it will use the above playbook to deploy your keys.
It can also be done by mixing "file" and "shell" provisioning, e.g.:
$enable_root_passwordless_ssh_access = <<SCRIPT
#vagrant user has sudo passwordless access on precise32.box
[ -d /root ] || sudo mkdir /root
[ -d /root/.ssh ] || sudo mkdir /root/.ssh
[ -f /tmp/id_rsa.pub ] && sudo mv /tmp/id_rsa.pub /root/.ssh/authorized_keys
sudo chmod 0700 /root/.ssh
sudo chmod 0600 /root/.ssh/authorized_keys
sudo chown root:root /root/.ssh/authorized_keys
SCRIPT
machine.vm.provision "file", source: "~/.ssh/id_rsa.pub", destination: "/tmp/id_rsa.pub"
machine.vm.provision "shell", inline: $enable_root_passwordless_ssh_access