Azure ACS: Azure file volume not working - azure-container-service

I've been following the instructions on GitHub to set up an Azure Files volume.
apiVersion: v1
kind: Secret
metadata:
  name: azure-files-secret
type: Opaque
data:
  azurestorageaccountname: Yn...redacted...=
  azurestorageaccountkey: 3+w52/...redacted...MKeiiJyg==
In my pod config I then have:
...stuff
volumeMounts:
- mountPath: /var/ccd
  name: openvpn-ccd
...more stuff
volumes:
- name: openvpn-ccd
  azureFile:
    secretName: azure-files-secret
    shareName: az-files
    readOnly: false
Creating the containers then fails:
MountVolume.SetUp failed for volume "kubernetes.io/azure-file/007adb39-30df-11e7-b61e-000d3ab6ece2-openvpn-ccd" (spec.Name: "openvpn-ccd") pod "007adb39-30df-11e7-b61e-000d3ab6ece2" (UID: "007adb39-30df-11e7-b61e-000d3ab6ece2") with: mount failed: exit status 32 Mounting command: mount Mounting arguments: //xxx.file.core.windows.net/az-files /var/lib/kubelet/pods/007adb39-30df-11e7-b61e-000d3ab6ece2/volumes/kubernetes.io~azure-file/openvpn-ccd cifs [vers=3.0,username=xxx,password=xxx,dir_mode=0777,file_mode=0777] Output: mount error(13): Permission denied Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
I was previously getting password errors because I hadn't base64-encoded the account key, but that is resolved now, and I get the more generic Permission denied error, which I suspect is on the mount point rather than the file storage. In any case, I need advice on how to troubleshoot further, please.

This appears to be an authentication error against your storage account. Base64-decode your key and validate it by mounting the share manually from an Ubuntu VM in the same region as the storage account.
Here is a sample script to validate that the Azure Files share mounts correctly:
#!/bin/bash
# Mount an Azure Files share over CIFS to verify the account name and key.
if [ $# -ne 3 ]; then
    echo "you must pass arguments STORAGEACCOUNT STORAGEACCOUNTKEY SHARE"
    exit 1
fi
ACCOUNT=$1
ACCOUNTKEY=$2   # the raw key, i.e. the base64-decoded value from the Kubernetes secret
SHARE=$3
MOUNTSHARE=/mnt/${SHARE}
apt-get update && apt-get install -y cifs-utils
mkdir -p ${MOUNTSHARE}
mount -t cifs //${ACCOUNT}.file.core.windows.net/${SHARE} ${MOUNTSHARE} -o vers=2.1,username=${ACCOUNT},password=${ACCOUNTKEY}
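For example, assuming the script is saved as check-share.sh (an illustrative name) and run as root, you would decode the value stored in the Kubernetes secret and pass the raw key:
# Decode the base64 value from the secret's data field; the script expects the raw key.
# The account name and encoded key below are placeholders for your redacted values.
ACCOUNTKEY=$(echo '<base64-value-from-secret>' | base64 -d)
sudo bash check-share.sh mystorageaccount "$ACCOUNTKEY" az-files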

Related

Error in libcrypto on Github Actions SSH command

I am setting up automatic deployment to my test server via SSH in GitHub Actions. I set up the connection using a private key. It works correctly locally (tested in the ubuntu:latest Docker image), but when I push my code to the repository I get an error.
Run ssh -i ~/.ssh/private.key -o "StrictHostKeyChecking no" ***@*** -p *** whoami
Warning: Permanently added '[***]:***' (ED25519) to the list of known hosts.
Load key "/home/runner/.ssh/private.key": error in libcrypto
Permission denied, please try again.
Permission denied, please try again.
***@***: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
Error: Process completed with exit code 255.
My workflow code:
name: Testing deploy
on:
  push:
    branches:
      - develop
      - feature/develop-autodeploy
jobs:
  build:
    name: Build and deploy
    runs-on: ubuntu-latest
    steps:
      - run: mkdir -p ~/.ssh/
      - run: echo "{{ secrets.STAGING_KEY }}" > ~/.ssh/private.key
      - run: chmod 600 ~/.ssh/private.key
      - run: ssh -i ~/.ssh/private.key -o "StrictHostKeyChecking no" ${{ secrets.STAGING_USER }}@${{ secrets.STAGING_HOST }} -p ${{ secrets.STAGING_PORT }} whoami
I also tried third-party actions, e.g. D3rHase/ssh-command-action and appleboy/ssh-action, with other errors.
Resolved: in the line where I create the private.key file, the $ character was missing. My bad.
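For clarity, the difference is just the $ that triggers expression expansion (shown here as the shell lines from the workflow; only the second is correct):
# wrong: the literal text {{ secrets.STAGING_KEY }} is written to the file,
# producing an invalid key and the "error in libcrypto" message
echo "{{ secrets.STAGING_KEY }}" > ~/.ssh/private.key
# right: ${{ ... }} is expanded by the Actions runner before the shell runs
echo "${{ secrets.STAGING_KEY }}" > ~/.ssh/private.key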

Giving a container the same permissions as the workflow in GitHub Actions

I am using workload identity federation to provide some permissions to my workflow. This seems to be working fine:
- name: authenticate to gcp
  id: auth
  uses: 'google-github-actions/auth@v0'
  with:
    token_format: 'access_token'
    workload_identity_provider: ${{ env.WORKLOAD_IDENTITY_PROVIDER }}
    service_account: ${{ env.SERVICE_ACCOUNT_EMAIL }}
- run: gcloud projects list
i.e. the gcloud projects list command is successful.
However, in a later step I am running the same command in a container:
- name: run container
  run: docker run my-image:latest
and the process fails (I don't have access to the logs at the moment, but it definitely fails).
Is there a way to give the created container the same auth context as the workflow?
Do I need to bind-mount some generated token, perhaps?
Export the credentials (an option provided by the auth action):
- name: authenticate to gcp
  id: auth
  uses: 'google-github-actions/auth@v0'
  with:
    token_format: 'access_token'
    workload_identity_provider: ${{ env.WORKLOAD_IDENTITY_PROVIDER }}
    service_account: ${{ env.SERVICE_ACCOUNT_EMAIL }}
    create_credentials_file: true
Make the credentials file readable:
# needed in the docker volume creation so that it is read
# by the user with which the image runs (not root)
- name: change permissions of credentials file
  shell: bash
  run: chmod 775 $GOOGLE_GHA_CREDS_PATH
Mount the credentials file and perform a gcloud auth login using this file in the container:
- name: docker run
  run: |
    docker run \
      -v $GOOGLE_GHA_CREDS_PATH:${{ env.CREDENTIALS_MOUNT_PATH }} \
      --entrypoint sh \
      ${{ env.CLUSTER_SCALING_IMAGE }} \
      -c "gcloud auth login --cred-file=${{ env.CREDENTIALS_MOUNT_PATH }} && do whatever"
The entrypoint can of course be modified accordingly to support the case above.
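Applied to the question's image, a minimal sketch (the mount path /tmp/gcp-creds.json is an arbitrary choice, not from the original answer) would be:
# GOOGLE_GHA_CREDS_PATH is exported by the auth action when create_credentials_file is true
docker run \
  -v $GOOGLE_GHA_CREDS_PATH:/tmp/gcp-creds.json \
  --entrypoint sh \
  my-image:latest \
  -c "gcloud auth login --cred-file=/tmp/gcp-creds.json && gcloud projects list"
If gcloud projects list prints the same projects as the workflow step did, the container shares the workflow's auth context.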

How to create a user and copy the corresponding pub file to authorized_keys using AWS CloudFormation?

I am having trouble creating a user and copying the corresponding pub file, called authorized_keys, into the .ssh folder on the instance using AWS CloudFormation. I am doing this because I want to connect with this user using SSH. When I check the system log of the created instance, it does not look like the user is created or any file is copied to authorized_keys in the .ssh directory.
This is my code:
LinuxEC2Instance:
  Type: AWS::EC2::Instance
  Metadata:
    AWS::CloudFormation::Init:
      config:
        users:
          ansible:
            groups:
              - "exampleuser"
            uid: 1
            homeDir: "/home/exampleuser"
        files:
          /home/exampleuser/.ssh/authorized_keys:
            content: !Sub |
              '{{ resolve:secretsmanager:
              arn:aws:secretsmanager:availability-zone:account-id:secret:keyname:
              SecretString:
              keystring }}'
            mode: "000600"
            owner: "exampleuser"
            group: "exampleuser"
Am I missing something so that the user is created and the file is also being copied?
To use AWS::CloudFormation::Init you have to invoke it explicitly from your UserData using the cfn-init helper script.
An example of such UserData for Amazon Linux 2 is as follows:
UserData:
  Fn::Base64: !Sub |
    #!/bin/bash -xe
    yum update -y
    yum install -y aws-cfn-bootstrap
    /opt/aws/bin/cfn-init -v \
      --stack ${AWS::StackName} \
      --resource LinuxEC2Instance \
      --region ${AWS::Region}
If there are issues, log in to the instance and inspect log files such as /var/log/cloud-init-output.log and /var/log/cfn-init.log to look for error messages.
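Once logged in, a few quick checks (a sketch; the user and path names come from the template above) show whether cfn-init actually ran and what it did:
# Did the users: block create the user?
id ansible
# Was the authorized_keys file written with the requested owner and mode?
sudo ls -l /home/exampleuser/.ssh/authorized_keys
# Per-section results of AWS::CloudFormation::Init processing
sudo tail -n 50 /var/log/cfn-init.log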

drone "Insufficient privileges to use privileged mode"

I wrote a .drone.yml in my Gogs git repository, but when I push a change, the Drone web UI tells me Insufficient privileges to use privileged mode. How can I fix it?
This is my .drone.yml:
pipeline:
  build:
    image: test-harbor.cx580.com/centos/centos7:Beat2.0
    privileged: true
    commands:
      - mkdir -p /data/k8s/drone/jar-db/
      - \cp README.md /data/k8s/drone/jar-db/
      - ls /data/k8s/drone/jar-db/
  push:
    image: plugins/docker
    repo: test-harbor.cx580.com/centos/centos7:Beat2.0
    registry: test-harbor.cx580.com
    username: ci
    password: '1qaz!QAZ'
    tags:
      - latest
I searched on Google, and this website told me: Your repository isn't in the trusted list of repositories. Get in touch with DevOps and ask them to trust it. But how can I trust the repository?
I also went into the settings in the Drone web UI and checked Trusted, but it still failed.
Set the drone-server env (my repository is GitLab):
...
- DRONE_OPEN=false
- DRONE_ADMIN=<your gitlab username>
- DRONE_GITLAB_PRIVATE_MODE=true
...
Then enable the Trusted flag in the Drone repository settings: Settings -> Trusted.
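Note that the Trusted toggle only takes effect when set by a Drone admin, which is why DRONE_ADMIN above matters. As a sketch (flag syntax per the Drone 0.8-era CLI; the repository slug is a placeholder), an admin can also set the flag from the CLI:
# Requires DRONE_SERVER and DRONE_TOKEN for an admin account in the environment
drone repo update --trusted=true <owner>/<repo>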

create a Docker Swarm v1.12.3 service and mount a NFS volume

I'm unable to get an NFS volume mounted for a Docker Swarm, and the lack of proper official documentation regarding the --mount syntax (https://docs.docker.com/engine/reference/commandline/service_create/) doesn't help.
I have tried basically this command line to create a simple nginx service with a /kkk directory mounted to an NFS volume:
docker service create --mount type=volume,src=vol_name,volume-driver=local,dst=/kkk,volume-opt=type=nfs,volume-opt=device=192.168.1.1:/your/nfs/path --name test nginx
The command line is accepted and the service is scheduled by Swarm, but the container never reaches the "running" state and Swarm tries to start a new instance every few seconds. I set the daemon to debug, but no error regarding the volume shows up.
What is the right syntax to create a service with an NFS volume?
Thanks a lot
I found an article that shows how to mount an NFS share (and that works for me): http://collabnix.com/docker-1-12-swarm-mode-persistent-storage-using-nfs/
sudo docker service create \
  --mount type=volume,volume-opt=o=addr=192.168.x.x,volume-opt=device=:/data/nfs,volume-opt=type=nfs,source=vol_collab,target=/mount \
  --replicas 3 --name testnfs \
  alpine /bin/sh -c "while true; do echo 'OK'; sleep 2; done"
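As a quick sanity check (not from the article; the service and mount names are taken from the command above), verify that the tasks stay up and the share is visible:
# All replicas should reach and hold the Running state instead of restart-looping
docker service ps testnfs
# On a node running a task, the NFS-backed mount should be browsable
docker exec $(docker ps -q -f name=testnfs | head -n 1) ls /mount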
Update:
In case you want to use it with docker-compose, you can do it as follows:
version: '3'
services:
  alpine:
    image: alpine
    volumes:
      - vol_collab:/mount
    deploy:
      mode: replicated
      replicas: 2
    command: /bin/sh -c "while true; do echo 'OK'; sleep 2; done"
volumes:
  vol_collab:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.xx.xx
      device: ":/data/nfs"
and then run it with
docker stack deploy -c docker-compose.yml test
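To confirm the deployment came up (test is the stack name used above; Swarm prefixes service names with it):
docker stack services test      # replica counts should reach 2/2
docker service ps test_alpine   # per-task state and error messages, if any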
You could also use this in Docker Compose to create an NFS volume:
data:
  driver: local
  driver_opts:
    type: "nfs"
    o: addr=<nfs-Host-domain-name>,rw,sync,nfsvers=4.1
    device: ":<path to directory in nfs server>"