Set mfsymlinks when mounting Azure File volume to ACI instance

Is there a way to specify the mfsymlinks option when mounting an Azure Files share to an ACI container instance?
As shown on learn.microsoft.com, symlinks can be supported in Azure Files when the share is mounted on Linux with the mfsymlinks option, which enables Minshall+French symlinks.
I would like to mount an Azure Files share to an Azure Container Instance, but I need to be able to use symlinks in the mounted file system, and I cannot find a way to specify this option. Does anyone know of a way to do this?

Unfortunately, as far as I know, when you create the container and mount the Azure File Share through the CLI command az container create with parameters such as
--azure-file-volume-account-key
--azure-file-volume-account-name
--azure-file-volume-mount-path
--azure-file-volume-share-name
you cannot enable symlinks, and there is no parameter to set that option.
In addition, if you take a look at the template for Azure Container Instances, you can see that there is no property for a symlink setting either. In my opinion, this means you cannot enable symlinks for an Azure Container Instance the way you want. Hope this helps.

As a workaround that suits my use case, once the file structure, including symlinks, has been created on the container's local FS, I tar up the files onto the Azure Files share:
tar -cpzf /mnt/letsencrypt/etc.tar.gz -C / etc/letsencrypt/
Then when the container runs again, it extracts from the tarball, preserving the symlinks:
tar -xzf /mnt/letsencrypt/etc.tar.gz -C /
I'll leave this open for now to see if ACI comes to support the option natively.
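For reference, a minimal entrypoint sketch of that workaround, assuming the share is mounted at /mnt/letsencrypt and the symlink-heavy tree lives under /etc/letsencrypt (paths taken from the commands above; the certbot call is only a stand-in for whatever the container actually runs):
#!/bin/sh
# Restore-then-archive wrapper (a sketch, not an official ACI feature)
set -e
ARCHIVE=/mnt/letsencrypt/etc.tar.gz   # lives on the Azure Files share

# Restore the previous state, preserving symlinks, if an archive exists
if [ -f "$ARCHIVE" ]; then
  tar -xzf "$ARCHIVE" -C /
fi

# Run the real workload (placeholder)
certbot renew

# Archive the tree back to the share with symlinks and permissions intact
tar -cpzf "$ARCHIVE" -C / etc/letsencrypt/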

Update from the Azure docs (azure-files-volume#mount-options), which cover mount options for an Azure Files persistent volume in AKS:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: azurefile
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  azureFile:
    secretName: azure-secret
    shareName: aksshare
    readOnly: false
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
    - uid=1000
    - gid=1000
    - mfsymlinks
    - nobrl
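For reference, those mountOptions are passed through to the underlying cifs mount; mounting the same share by hand on a Linux host would look roughly like this (storage account, share name and key are placeholders):
sudo mkdir -p /mnt/azfiles
sudo mount -t cifs //<storage-account>.file.core.windows.net/<share-name> /mnt/azfiles \
  -o vers=3.0,username=<storage-account>,password=<storage-account-key>,dir_mode=0777,file_mode=0777,uid=1000,gid=1000,mfsymlinks,nobrl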

Related

Auto reload configuration changes without restarting pod/container using Kubernetes ConfigMap for application with large number of configuration files

Our team is planning the migration of a legacy enterprise application, developed in ASP.NET Web Forms, to .NET 6 using a containerized approach. For this, we will mostly target the Kubernetes container orchestration platform.
The application is highly configurable and can be integrated with related apps to a certain extent. It has a large number of XML-based configuration files (more than 100). The current mode of deployment is IIS (on-premises).
The major technical challenge we are facing is managing our application configuration.
ConfigMap is one of the options available in Kubernetes for configuration management. The ConfigMap API allows you to generate a ConfigMap from environment variables, a YAML file, an existing configuration file, or a directory. The directory-based approach seems most suitable. However, considering the maximum size limit of a ConfigMap, we may end up creating multiple ConfigMaps.
We need to make sure that:
The migrated app can consume its configuration, while the application image stays separate and the configuration is injected from outside.
Configuration changes are reflected in the application without a pod restart.
Since a ConfigMap is effectively a read-only resource once the container starts, I am currently looking for a mechanism to reload configuration without restarting the pod/container.
The initial focus is to achieve this. (The impact of changed configuration on active users who might be using an application feature based on the previous configuration is a different topic altogether.)
You can do it without restarting the pod using a ConfigMap only; however, it still depends largely on your application.
You can inject your ConfigMap and mount it into the pod; Kubernetes auto-reloads the ConfigMap if it is mounted as a directory. Note that this does not work if you are using subPath.
For more on auto-reloading a ConfigMap in Kubernetes without restarting the pod, see: https://medium.com/@harsh.manvar111/update-configmap-without-restarting-pod-56801dce3388
YAML example
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-config
data:
  hello: world
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  selector:
    matchLabels:
      app: test
  replicas: 1
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: configmaptestapp
          image: <Image>
          volumeMounts:
            - mountPath: /config
              name: data-volume
          ports:
            - containerPort: 80
      volumes:
        - name: data-volume
          configMap:
            name: test-config
Official documentation: https://kubernetes.io/docs/concepts/configuration/configmap/#mounted-configmaps-are-updated-automatically
Mounted ConfigMaps are updated automatically: When a ConfigMap currently consumed in a volume is updated, projected keys are eventually updated as well. The kubelet checks whether the mounted ConfigMap is fresh on every periodic sync. However, the kubelet uses its local cache for getting the current value of the ConfigMap. The type of the cache is configurable using the ConfigMapAndSecretChangeDetectionStrategy field in the KubeletConfiguration struct. A ConfigMap can be either propagated by watch (default), ttl-based, or by redirecting all requests directly to the API server. As a result, the total delay from the moment when the ConfigMap is updated to the moment when new keys are projected to the Pod can be as long as the kubelet sync period + cache propagation delay, where the cache propagation delay depends on the chosen cache type (it equals to watch propagation delay, ttl of cache, or zero correspondingly).
ConfigMaps consumed as environment variables are not updated automatically and require a pod restart.
Note: A container using a ConfigMap as a subPath volume mount will not receive ConfigMap updates.
In this case, your application needs to handle the changed content properly, with its own change detection, reload logic, etc.
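A quick way to watch the propagation happen, assuming the Deployment above is running (names taken from the YAML example):
# change a value in the ConfigMap
kubectl patch configmap test-config --type merge -p '{"data":{"hello":"universe"}}'

# re-read the projected file inside the pod; it updates after the kubelet sync period
kubectl exec deploy/test -c configmaptestapp -- cat /config/hello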

How to set the S3v2 API for MinIO (local AWS S3) in a docker-compose

My original docker-compose is:
s3:
  image: minio/minio
  command: server /data --console-address ":9001"
  ports:
    - 9000:9000
    - 9001:9001
  networks:
    - lambda-local
Is it possible to set S3v2 by default when Docker starts?
EDIT
I use some Python AWS Lambda functions and AWS S3 (S3v2) in production. My code is written for S3v2 only.
On my machine (for developing unit tests), I just want to swap S3 for MinIO started by docker-compose.
I do not want to change my Lambda code; I only want to change the local MinIO setup (docker-compose).
Replacing S3 with MinIO must be transparent to my application.
The MinIO server supports both S3v4 and S3v2 without any additional configuration required. As Prakash noted, it's typically an SDK setting.
You can test this yourself using the mc command-line tool:
mc alias set --api "S3v4" myminiov4 http://localhost:9000 minioadmin minioadmin
mc alias set --api "S3v2" myminiov2 http://localhost:9000 minioadmin minioadmin
echo "foo" > foo.txt
echo "bar" > bar.txt
mc mb myminiov4/testv4
mc mb myminiov2/testv2
mc cp foo.txt myminiov4/testv4/foo.txt
mc cp bar.txt myminiov2/testv2/bar.txt
You can read either file using either alias - one of which uses Signature v4 and the other Signature v2.
You should defer to the documentation for your preferred SDK on how to set this value. It's worth noting that Sv2 is deprecated - newer SDK versions might not even support it. So you'll first need to confirm that the version of your preferred SDK supports Sv2 at all, and then enable it when connecting to MinIO. I did not, for example, find an obvious way of setting it in the AWS Java SDK docs (though maybe I just missed it).
MinIO's GO SDK appears to support it via an override, but I haven't tested it myself yet.
If you have a specific application which requires Sv2 that isn't working, you'll need to provide more detail before getting further guidance. From a MinIO perspective, I'm not aware of any specific restrictions on either signature version.
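If the goal is simply to make local tests talk to MinIO instead of S3, the usual approach is to override the endpoint on the client side rather than reconfigure the server; for example with the AWS CLI (minioadmin credentials assumed, matching the mc examples above):
export AWS_ACCESS_KEY_ID=minioadmin
export AWS_SECRET_ACCESS_KEY=minioadmin
export AWS_DEFAULT_REGION=us-east-1

# talk to the docker-compose MinIO instead of AWS S3
aws --endpoint-url http://localhost:9000 s3 mb s3://testbucket
aws --endpoint-url http://localhost:9000 s3 cp foo.txt s3://testbucket/
aws --endpoint-url http://localhost:9000 s3 ls s3://testbucket
The same idea applies to boto3 via its endpoint_url parameter, so the Lambda code can stay unchanged apart from that one setting.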
Disclaimer: I work for MinIO.

Access S3 bucket without running aws configure with kubernetes

I have an S3 bucket with some sql scripts and some backup files using mysqldump.
I also have a .yaml file that deploys a fresh mariadb image.
As I'm not very experienced with kubernetes yet, if I want to restore one of those backup files into the pod, I need to bash into it, run aws cli, insert my credentials, then sync the bucket locally and run mysql < backup.sql
This, obviously, destroys the concept of full automated deployment.
So, the question is... how can I securely make this pod immediately configured to access S3?
I think you should consider mounting the S3 bucket inside the pod.
This can be achieved with, for example, s3fs-fuse.
There are two nice articles about mounting an S3 bucket inside a Kubernetes pod and Kubernetes shared storage with an S3 backend; I recommend reading both to understand how this works.
You basically have to build your own image from a Dockerfile and supply the necessary S3 bucket info and AWS security credentials.
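The core of such an image is just the s3fs mount itself; a minimal sketch of what the entrypoint would run (bucket name, mount point and credentials are placeholders, and the container needs /dev/fuse plus the SYS_ADMIN capability or --privileged):
# credentials file expected by s3fs; must be 0600
echo "<AWS_ACCESS_KEY_ID>:<AWS_SECRET_ACCESS_KEY>" > /etc/passwd-s3fs
chmod 600 /etc/passwd-s3fs

# mount the bucket
mkdir -p /mnt/s3
s3fs <bucket-name> /mnt/s3 -o passwd_file=/etc/passwd-s3fs

# the backups are now visible as regular files
mysql < /mnt/s3/backup.sql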
Once you have the storage mounted, you will be able to call scripts from it in the following way:
apiVersion: v1
kind: Pod
metadata:
  name: test-world
spec: # specification of the pod's contents
  restartPolicy: Never
  containers:
    - name: hello
      image: debian
      command: ["/bin/sh", "-c"]
      args: ["command one; command two && command three"]

Serverless Framework deploy through CircleCI

I'm trying to integrate Serverless into my CircleCI workflow.
I first tried adding both the key and secret under AWS Permissions, but that did not work.
Then I added the key and secret as environment variables, and in my config file:
sudo npm install -g serverless
sls config credentials --provider aws --key $AWS_ACCESS_KEY_ID --secret $AWS_SECRET_ACCESS_KEY
sls deploy -v
But I see the same error:
Serverless Error ---------------------------------------
You are not currently logged in. Follow instructions in http://slss.io/run-in-cicd to setup env vars for authentication.
Anyone had this issue? I could not find an answer or hint online. Thanks.
This likely only applies to those trying to use Serverless Enterprise with the monitoring and dashboards they have set up. @wintvelt's answer wouldn't work for me, because if I deleted the org variable it would likely break the connection needed for Enterprise. So here are the steps for my CircleCI setup:
1. In CircleCI, create a Context for each environment with the AWS Key ID and Secret as environment variables (putting them in a Context is a nice-to-have; you could use other methods of making Circle inject environment variables into builds).
2. In your Serverless Framework dashboard, create a new access key which you will use in Circle.
3. Create a new environment variable SERVERLESS_ACCESS_KEY with the value from step 2.
I got this idea from reading how Seed.run has users integrate with Serverless. For more info read this link: https://seed.run/docs/integrating-with-serverless-enterprise.
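With the Context and SERVERLESS_ACCESS_KEY in place, the deploy step itself can stay minimal; roughly (assuming CircleCI injects AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and SERVERLESS_ACCESS_KEY from the Context):
# credentials come from the CircleCI context; nothing is hard-coded here
sudo npm install -g serverless
sls deploy -v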
Just checked: CircleCI stopped supporting AWS Permissions as a configurable option on the settings page.
You need to set the credentials as environment variables for the project. The credentials should be named exactly AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
That's all you need to do; there are no additional steps. I tried this on my project and it worked.
Your deployment step should simply be
sls deploy
As a follow-up to the previous answer: I had exactly the same error.
I ended up using the solution discussed in the chat.
These are the fixes I applied:
1. In the CircleCI project settings, under "AWS permissions", I added the AWS Access Key ID and Secret Access Key.
2. In the CircleCI project settings, under "Environment variables", I also added the AWS Access Key ID and Secret Access Key.
3. From my serverless.yml file, I deleted the line with the org variable.
For me, 1. and 2. alone were not enough. I also had to remove the line from my yml file to make deployment via CircleCI work.
For those landing here with the same issue, I hope this helps!

Docker for Win acme.json permissions

Traefik v1.3.1
Docker CE for Windows: 17.06.0-ce-win18 (12627)
I have the /acme folder routed to a host volume which contains the file acme.json. With the Traefik 1.3.1 update, I noticed that Traefik gets stuck in an infinite loop complaining that "permissions 755 for /etc/traefik/acme/acme.json are too open, please use 600". The only solution I've found is to remove acme.json and let Traefik re-negotiate the certs. Unfortunately, if I need to restart the container, I have to remove acme.json again or I'm stuck with the same issue!
My guess is that the issue lies with the Windows volume mapped to Docker but I was wondering what the recommended workaround would even be for this?
Can I change permissions on shared volumes for container-specific deployment requirements?
No, at this point, Docker for Windows does not enable you to control (chmod) the Unix-style permissions on shared volumes for deployed containers, but rather sets permissions to a default value of 0755 (read, write, execute permissions for user, read and execute for group) which is not configurable.
Traefik is not compatible with regular Windows due to the POSIX permissions check. It may work in the Windows Subsystem for Linux since that has a Unix-style permission system.
Stumbled across this issue when trying to get Traefik running on Docker for Windows. I ended up getting it working by adding a few lines to a Dockerfile to create acme.json and set its permissions. I then built the image, and despite it throwing the "Docker image from Windows against a non-Windows Docker host" security warning, when I checked permissions on the acme.json file it worked!
I set up a repo and have it auto-building on Docker Hub here for further testing.
https://hub.docker.com/r/guerillamos/traefik/
https://github.com/guerillamos/traefikwin/blob/master/Dockerfile
Once I got that built I switched the image out in my docker-compose file and my DNS challenge to Cloudflare worked like a charm according to the logs.
I hope this helps someone!
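For completeness, a rough sketch of that approach, baking an empty acme.json with 600 permissions into a derived image instead of relying on the Windows-mounted volume (the alpine-based tag is an assumption here, since the scratch-based Traefik image has no shell for RUN):
cat > Dockerfile <<'EOF'
FROM traefik:1.3.1-alpine
RUN mkdir -p /etc/traefik/acme \
 && touch /etc/traefik/acme/acme.json \
 && chmod 600 /etc/traefik/acme/acme.json
EOF

docker build -t traefik-win .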