I want to deploy NextCloud on Azure Container Instances. I was able to set up the container group using Azure CLI like this:
az container create \
  --resource-group NextCloud \
  --name nextcloudcontainer \
  --image nextcloud \
  --dns-name-label somelabel \
  --ports 80 443 \
  --azure-file-volume-account-name myaccountname \
  --azure-file-volume-account-key myaccountkey \
  --azure-file-volume-share-name nextcloudfs \
  --azure-file-volume-mount-path /var/lib/nextcloud/ \
  --os-type Linux \
  --cpu 1 \
  --memory 2 \
  --location germanywestcentral \
  --restart-policy OnFailure
The problem is that the share mounted at /var/lib/nextcloud/ gets permissions 777, but Nextcloud requires 770. This cannot be changed with chmod afterwards, only at deployment time. How can this be achieved?
I saw this post, but I do not understand how that approach would help, since after every container restart I would have to apply it manually again.
There is a way to change permissions on Azure Files at mount time with the mount parameters filemode and dirmode. However, in ACI we don't have the flexibility to change those parameters. We are aware of this request and are working on it.
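For reference, when mounting the same Azure Files share on a VM (outside ACI), the permissions can be set at mount time via the CIFS file_mode and dir_mode options. A minimal sketch, with the storage account name, key, mount point and the uid/gid as placeholders you would adapt to your setup:

# file_mode/dir_mode set the POSIX permissions on the mounted share;
# uid/gid 33 assume the www-data user of the official nextcloud image
sudo mount -t cifs //myaccountname.file.core.windows.net/nextcloudfs /mnt/nextcloudfs \
  -o vers=3.0,username=myaccountname,password=myaccountkey,file_mode=0770,dir_mode=0770,uid=33,gid=33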
Related
I installed Spinnaker on my AWS EC2 instance and logged into the dashboard for the first time, but immediately after I logged out and logged in again using the same base URL, I was directed to a different person's GitHub account. What might have happened? Does it mean my account was hacked? Could somebody please advise?
I am being directed to the link attached below instead of the IP address taking me to the Spinnaker dashboard, even though I am using the correct base address.
These are the instructions I follow for Minnaker on EC2 (ap-southeast-2):
Pre-requisites
Obtain an AWS Elastic IP
From the AWS EC2 console, choose a region (preferably ap-southeast-2) and
launch an EC2 instance with at least 16 GB memory, 4 CPUs and a 60 GB disk.
An initial deployment can be performed using an m4.xlarge instance.
Attach the AWS Elastic IP to the Spinnaker Instance
Access the instance through SSH
Get Minnaker
curl -LO https://github.com/armory/minnaker/releases/latest/download/minnaker.tgz
Untar
tar -xzvf minnaker.tgz
Go to the minnaker directory
cd minnaker
Use the public IP value of the Elastic IP as $PUBLIC_IP.
Obtain the private IP of the instance with hostname -I and set both as local environment variables:
export PRIVATE_IP=$(hostname -I)
export PUBLIC_IP=AWS_ELASTIC_IP
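If you prefer not to paste the Elastic IP by hand, it can usually be read from the EC2 instance metadata service once the Elastic IP is attached (this assumes IMDSv1 is enabled on the instance):

export PUBLIC_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)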
Execute the command below to install Open Source Spinnaker
./scripts/install.sh -o -P $PRIVATE_IP
Validate installation
UI
Validate the installation by going to the generated URL https://PUBLIC_IP
Use the user admin and get the password from /etc/spinnaker/.hal/.secret/spinnaker_password
The UI should load
Kubernetes Deployment
Minnaker is deployed inside the EC2 instance as a lightweight Kubernetes (K3s) cluster
Run kubectl version
Get info about the cluster with kubectl cluster-info
Tweak bash completion and enable a simple alias.
source <(kubectl completion bash)
echo 'source <(kubectl completion bash)' >>~/.bashrc
kubectl completion bash >/etc/bash_completion.d/kubectl
echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -F __start_kubectl k' >>~/.bashrc
Validate Spinnaker is running
k -n spinnaker get pods -o wide
Halyard Config
Validate that a default Halyard config has been set up
sudo chmod 755 /usr/local/bin/hal
#!/bin/bash
set -x
# find the Halyard pod and run hal inside it, forwarding any arguments
HALYARD=$(kubectl -n spinnaker get pod -l app=halyard -oname | cut -d'/' -f 2)
kubectl -n spinnaker exec -it ${HALYARD} -- hal "$@"
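With that wrapper in place, the validation mentioned above can be done directly from the EC2 shell, for example:

# should print the current (default) halconfig
hal config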
Minnaker repo
Clone the repository: git clone https://github.com/armory/minnaker
Go to the scripts directory: cd minnaker/scripts
Add permissions to the installation script: chmod 775 all.sh
References
armory/minnaker
I have a single-master, multi-node Kubernetes cluster going. It works great. However, I want to allow kubectl commands to be run from outside the master server. How do I run kubectl get node from my laptop, for example?
If I install kubectl on my laptop I get the following error:
error: client-key-data or client-key must be specified for kubernetes-admin to use the clientCert authentication method
How do I go about this? I have read through the Kubernetes authorization documentation, but I must say it's a bit Greek to me. I am running version 1.10.2.
Thank you.
To extend sfgroups' answer:
Configurations for all Kubernetes clusters you are managing are stored in the $HOME/.kube/config file. If you have that file on the master node, the easy way is to copy it to $HOME/.kube/config on your local machine.
You can choose another location and then specify it via the KUBECONFIG environment variable:
export KUBECONFIG=/etc/kubernetes/config
or use --kubeconfig command line parameter instead.
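A minimal way to copy the file off the master, assuming SSH access and that the admin kubeconfig lives at ~/.kube/config there (user and host are placeholders):

scp <user>@<master-ip>:~/.kube/config ~/.kube/config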
Cloud providers often give you the possibility to download the config to your local machine from the web interface or with a cloud management command.
For GCP:
gcloud container clusters get-credentials NAME [--region=REGION | --zone=ZONE, -z ZONE] [GCLOUD_WIDE_FLAG …]
For Azure:
az login -u yourazureaccount -p yourpassword
az acs kubernetes get-credentials --resource-group=<cluster-resource-group> --name=<cluster-name>
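If the cluster is a newer AKS cluster rather than the older ACS, the analogous command should be:

az aks get-credentials --resource-group=<cluster-resource-group> --name=<cluster-name>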
If the cluster was created using the kops utility, you can get the config file with:
kops export kubeconfig ${CLUSTER_NAME}
From your master, copy the /root/.kube directory to C:\Users\<username>\.kube on your laptop.
kubectl will pick up the certificates from the config file automatically.
I've installed Docker on CentOS 7; now I'm trying to launch the server in a Docker container.
$ docker run -d --name "openshift-origin" --net=host --privileged \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /tmp/openshift:/tmp/openshift \
openshift/origin start
This is the output:
Post http:///var/run/docker.sock/v1.19/containers/create?name=openshift-origin: dial unix /var/run/docker.sock: permission denied. Are you trying to connect to a TLS-enabled daemon without TLS?
I have tried the same command with sudo and that works fine (I can also run images in the OpenShift bash, etc.). But it feels wrong to use it, am I right? What is a solution to let this work as a normal user?
Docker is running (sudo service docker start). Restarting CentOS did not help.
The error is:
/var/run/docker.sock: permission denied.
That seems pretty clear: the permissions on the Docker socket at /var/run/docker.sock do not permit you to access it. This is reasonably common, because handing someone access to the Docker API is effectively the same as giving them sudo privileges, but without any sort of auditing.
If you are the only person using your system, you can:
Create a docker group or similar if one does not already exist.
Make yourself a member of the docker group
Modify the startup configuration of the docker daemon to make the socket owned by that group by adding -G docker to the options. You'll probably want to edit /etc/sysconfig/docker to make this change, unless it's already configured that way.
With these changes in place, you should be able to access Docker from your user account without requiring sudo.
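On CentOS 7 this typically boils down to something like the following (the docker group name and the OPTIONS line are assumptions based on a default setup):

# create the group if it does not exist and add your user to it
sudo groupadd docker
sudo usermod -aG docker $USER
# ensure the daemon hands the socket to that group, e.g. OPTIONS='-G docker' in /etc/sysconfig/docker,
# then restart the daemon and log out and back in so the new group membership takes effect
sudo systemctl restart docker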
I have a question regarding Docker. The container concept is totally new to me, and I am sure that I haven't yet grasped how things work (containers, Dockerfiles, ...) and how they could work.
Let's say that I would like to host small websites on the same VM that consist of Apache, PHP-FPM, MySQL and possibly Memcached.
This is what I had in mind:
1) One image that contains Apache, PHP, MySQL and Memcache
2) One or more images that contains my websites files
I must find a way to tell Apache, in my first image, where the website folders for the hosted websites are stored. Yet I don't know whether one container can read files inside another container.
Anyone here did something similar?
Thank you
Your container setup should be:
MySQL Container
Memcached Container
Apache, PHP etc
Data Container (Optional)
Run MySQL and expose its port using the -p flag:
docker run -d --name mysql -p 3306:3306 dockerfile/mysql
Run Memcached
docker run -d --name memcached -p 11211:11211 borja/docker-memcached
Run your web container and mount the web files from the host file system into the container. They will be available at /container_fs/web_files/ inside the container. Link to the other containers to be able to communicate with them over TCP.
docker run -d --name web -p 80:80 \
-v /host_fs/web_files:/container_fs/web_files/ \
--link mysql:mysql \
--link memcached:memcached \
your/docker-web-container
Inside your web container, look for the environment variables MYSQL_PORT_3306_TCP_ADDR and MYSQL_PORT_3306_TCP_PORT to tell you where to connect to the MySQL instance, and similarly MEMCACHED_PORT_11211_TCP_ADDR and MEMCACHED_PORT_11211_TCP_PORT to tell you where to connect to Memcached.
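A quick way to confirm those link variables are set (assuming the container is named web as above):

docker exec -it web env | grep -E 'MYSQL|MEMCACHED'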
The idiomatic way of using Docker is to try to keep to one process per container. So, Apache and MySQL etc should be in separate containers.
You can then create a data-container to hold your website files and simply mount the volume in the Webserver container using --volumes-from. For more information see https://docs.docker.com/userguide/dockervolumes/, specifically "Creating and mounting a Data Volume Container".
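A minimal sketch of that pattern, reusing the hypothetical image name from the answer above (the webdata container name and the /container_fs/web_files path are illustrative):

# container whose only job is to own the volume holding the site files
docker create -v /container_fs/web_files --name webdata your/docker-web-container /bin/true
# the web server container mounts that volume via --volumes-from
docker run -d --name web -p 80:80 --volumes-from webdata your/docker-web-container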
I have an NFS partition on the host. If I add it to a container with
docker run -i -t -v /srv/nfs4/dir:/mnt ubuntu
/mnt will contain the shared data, but doesn't that cause conflicts, since it hasn't been mounted with an NFS client inside the container?
Docker uses bind mounts to share host directories with containers. Docker handles namespace permission so that the container can access the mount. Otherwise from the host's perspective, the bind mounted NFS share is just being accessed by another process. It's safe to bind mount an NFS share elsewhere on the filesystem. Using it from within a Docker container is no different.
As of Docker 1.7+ you can use a Volume Plugin. See the Docker Volume Plugin section for details.
As far as NFS goes, you can use the Docker Netshare plugin, which handles mounting NFS, CIFS and AWS EFS file systems.
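Usage is roughly along these lines once the Netshare daemon is running in NFS mode (the nfshost name and export path are placeholders; check the plugin's README for the exact syntax):

# the volume name encodes <server>/<export path>; the plugin mounts it and hands it to the container
docker run -i -t --volume-driver=nfs -v nfshost/srv/nfs4/dir:/mnt ubuntu /bin/bash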
You have to share /srv/nfs4/ in your default Docker Machine VM. Go to VirtualBox > default (or boot2docker) > Settings > Shared Folders.