Docker for Win acme.json permissions - traefik

Traefik v1.3.1
Docker CE for Windows: 17.06.0-ce-win18 (12627)
I have the /acme folder routed to a host volume which contains the file acme.json. With the Traefik 1.3.1 update, Traefik gets stuck in an infinite loop complaining that the "permissions 755 for /etc/traefik/acme/acme.json are too open, please use 600". The only solution I've found is to remove acme.json and let Traefik re-negotiate the certs. Unfortunately, whenever I need to restart the container, I have to remove acme.json again or I'm stuck with the same issue.
My guess is that the issue lies with the Windows volume mapped to Docker, but I was wondering what the recommended workaround for this would be?

Can I change permissions on shared volumes for container-specific deployment requirements?
No, at this point, Docker for Windows does not enable you to control (chmod) the Unix-style permissions on shared volumes for deployed containers, but rather sets permissions to a default value of 0755 (read, write, execute permissions for user, read and execute for group) which is not configurable.
Traefik is not compatible with regular Windows due to the POSIX permissions check. It may work in the Windows Subsystem for Linux since that has a Unix-style permission system.

Stumbled across this issue when trying to get Traefik running on Docker for Windows... I ended up getting it working by adding a few lines to a Dockerfile to create acme.json and set its permissions. I then built the image and, despite it throwing the "Docker image from Windows against a non-Windows Docker host" security warning, the permissions on acme.json were correct and it worked!
I set up a repo and have it auto-building to Docker Hub for further testing:
https://hub.docker.com/r/guerillamos/traefik/
https://github.com/guerillamos/traefikwin/blob/master/Dockerfile
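A Dockerfile along those lines might look roughly like this (a sketch based on the description above, not necessarily the exact contents of the linked repo; an Alpine-based tag is assumed so a shell is available for RUN):
# assumed Alpine-based tag; the scratch-based Traefik image has no shell for RUN
FROM traefik:1.3.1-alpine
# create acme.json inside the image with the 600 permissions Traefik insists on;
# note that bind-mounting a Windows host folder over /etc/traefik/acme would
# bring back the fixed 0755 permissions, so persist the certs some other way
RUN mkdir -p /etc/traefik/acme \
 && touch /etc/traefik/acme/acme.json \
 && chmod 600 /etc/traefik/acme/acme.json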
Once I got that built I switched the image out in my docker-compose file and my DNS challenge to Cloudflare worked like a charm according to the logs.
I hope this helps someone!

Related

Waiting for SSH to be available with docker-machine on Windows 10

I installed Docker Machine and then created a new docker-machine on Windows 10.
I ran docker-machine ls to see the list of docker machines.
Then I ran the following command:
docker-machine start hypervdockermachine
Now I am stuck at this
Waiting for SSH to be available...
Too many retries waiting for SSH to be available. Last error: Maximum number of retries (60) exceeded
I have seen the GitHub issue here, but it is not clear what to do.
Is there a way to solve this problem? I am not good at SSH.
UPDATE
I just found a workaround.
You can run the above commands with git bash.
Most importantly, you must run Git Bash as administrator; otherwise you will end up scratching your head.
Even the basic
docker-machine ls
will not show anything unless you run it as administrator.
Finally, if you are seeing the following error:
Unable to query docker version: Get https://192.168.0.105:2376/v1.15/version: x509: certificate signed by unknown authority
Then you have to look at this issue.
docker-machine regenerate-certs yourdockermachinename
If needed, use the --force option.
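For example, to force regeneration and skip the confirmation prompt (same machine name as above):
docker-machine regenerate-certs --force yourdockermachinename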
I ran into the same problem after I moved .docker to partition D: and created a symlink to it at C:\Users\username\.docker, following this SO answer. I removed the old machines and configured new ones, and tried to regenerate the certs as suggested in the OP's workaround, but the problem was not solved.
After googling, I found this OpenSSH wiki page
and suspected that the cause of the problem was related to permissions.
I was then able to solve the problem by doing two things:
Delete .ssh (source)
Fix the permissions on D:\path\to\.docker, allowing only SYSTEM, Administrators and my user to have full control access (source); a command-line sketch follows below. These permissions were the same as those defined for .docker when it was under C:\Users\username\, but moving the folder to another partition made it inherit different permissions. To avoid dealing too much with it, I kept inheritance enabled and changed the permissions directly on D:\ rather than on the .docker folder.
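For reference, a rough command-line sketch of such a permission fix using icacls (the path is a placeholder; this variant strips inheritance on the folder itself instead of keeping it enabled and adjusting D:\ as described above):
rem grant full control only to SYSTEM, Administrators and the current user
icacls "D:\path\to\.docker" /inheritance:r
icacls "D:\path\to\.docker" /grant "SYSTEM:(OI)(CI)F" "Administrators:(OI)(CI)F" "%USERNAME%:(OI)(CI)F"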

How to run graphical tool (e.g. deja-dup) as root on Ubuntu 17.10

When trying to set up automated backups under Ubuntu 17.10 using Deja Dup, I realized that one cannot back up the root directory, since a normal user starting the deja-dup application does not have the rights to access all files in /.
(A German discussion about a rather similar situation can be found here: https://forum.ubuntuusers.de/topic/wie-sichert-an-mit-deja-dup-ein-systemverzeich/)
The usual workaround of starting the deja-dup application via gksu no longer works on Ubuntu 17.10. It seems that a decision has been made to deliberately prevent users from starting graphical applications as root, since it is often a bad/risky thing to do.
However, to create regular backups of a system's / directory with deja-dup, the application has to be configured and later started as root.
Since the typical approaches like gksu, gksudo, and sudo -H do not work under Ubuntu 17.10, I would highly appreciate any advice on a secure way to run deja-dup as root. Can someone help with advice?
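One workaround that is commonly suggested for Wayland-based sessions such as Ubuntu 17.10 (not from the original question, so treat it as an assumption) is to explicitly allow the local root user to talk to the display server and then launch the application through sudo:
# allow root on the local machine to connect to the display (run as the normal user)
xhost +si:localuser:root
# start deja-dup with root privileges
sudo -H deja-dup
# revoke the grant again afterwards
xhost -si:localuser:root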

youtube-dl not able to authenticate from amazon ec2 machine

I installed youtube-dl on my local machine using curl as mentioned in the official README here.
sudo curl -L https://yt-dl.org/downloads/latest/youtube-dl -o /usr/local/bin/youtube-dl
sudo chmod a+rx /usr/local/bin/youtube-dl
Now when I run below command on my local machine
youtube-dl --cookies cookie.txt https://www.youtube.com/watch?v=x-5V_RS3Q48 -u my_account#gmail.com -p my_pass_word
I am able to download the video without any hassle.
But when I try to download the same video on one of my EC2 instances, I get an authentication error.
The installation procedure on both machines is exactly the same, the youtube-dl version is exactly the same (2017.08.18), and the Python version is the same (2.7.6).
The only difference I could figure out is the kernel versions on both machines:
On My Local: Linux-3.19.0-25-generic-x86_64-with-Ubuntu-14.04-trusty
On EC2 Instance: Linux-3.13.0-74-generic-x86_64-with-Ubuntu-14.04-trusty
Also, the video I am trying to download is private and was uploaded by the same user whose credentials I am providing.
One important point to note is that the EC2 machine is able to download the video without any trouble if I am not using the username and password (which is only possible for videos that are not private).
Thanks
Posting the answer in case someone else is stuck with a similar issue.
The issue was that the cookie file was not being generated on the server-edition Linux OS of the EC2 instance provided by AWS.
From what I learnt recently, there is no Firefox browser on these machines (at least by default), and that is why it was failing to create the cookie file.
Solution
I created the cookie file locally, set its expiry time to 20 years in the future, moved that cookie to the EC2 instance, and used it to sign in rather than creating one on the server.
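In other words, roughly (the host name is a placeholder; the cookie file is created locally, e.g. by a prior local run or a cookies.txt export, with its expiry edited to lie far in the future):
scp cookie.txt ubuntu@<ec2-host>:~/cookie.txt
# on the EC2 instance, reuse the copied cookie instead of letting youtube-dl create one
youtube-dl --cookies ~/cookie.txt https://www.youtube.com/watch?v=x-5V_RS3Q48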
Thanks

Restart Kubernetes API server with different options

I'm pretty new to Kubernetes and clusters so this might be very simple.
I set up a Kubernetes cluster with 5 nodes using kubeadm following this guide. I ran into some issues but it all worked in the end. So now I want to install the Web UI (Dashboard). To do so I need to set up authentication:
Please note, this works only if the apiserver is set up to allow authentication with username and password. This is not currently the case with some setup tools (e.g., kubeadm). Refer to the authentication admin documentation for information on how to configure authentication manually.
So I went to read the authentication page of the documentation, and I decided I want to add authentication via a Static Password File. To do so I have to append the option --basic-auth-file=SOMEFILE to the API server.
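For reference, such a static password file is a CSV with at least three columns per line: password, username, and user id (the values below are made up for illustration):
password123,admin,42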
When I do ps -aux | grep kube-apiserver this is the result, so it is already running (which makes sense, because it is used whenever I call kubectl):
kube-apiserver
--insecure-bind-address=127.0.0.1
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota
--service-cluster-ip-range=10.96.0.0/12
--service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem
--client-ca-file=/etc/kubernetes/pki/ca.pem
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem
--token-auth-file=/etc/kubernetes/pki/tokens.csv
--secure-port=6443
--allow-privileged
--advertise-address=192.168.1.137
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--anonymous-auth=false
--etcd-servers=http://127.0.0.1:2379
A couple of questions I have:
So where are all these options set?
Can I just kill this process and restart it with the option I need?
Will it be started when I reboot the system?
In /etc/kubernetes/manifests there is a file called kube-apiserver.json. This is a JSON file and contains all the options you can set. I appended --basic-auth-file=SOMEFILE and rebooted the system (right after changing the file, kubectl stopped working and the API server was shut down).
After a reboot the whole system was working again.
Update
I didn't manage to run the dashboard using this. What I did in the end was install the dashboard on the cluster, copy the credentials from the master node (/etc/kubernetes/admin.conf) to my laptop, and run kubectl proxy to proxy the dashboard traffic to my local machine. Now I can access it on my laptop through 127.0.0.1:8001/ui.
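Roughly, with the master address as a placeholder:
scp root@<master-node>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf proxy
# then browse to http://127.0.0.1:8001/ui on the laptop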
I just found this for a similar use case, where the API server was crashing after adding an option with a file path.
I was able to solve it and maybe this helps others as well:
As described in https://kubernetes.io/docs/reference/setup-tools/kubeadm/implementation-details/#constants-and-well-known-values-and-paths the files in /etc/kubernetes/manifests are static pod definitions. Therefore container rules apply.
So if you add an option with a file path, make sure you make it available to the pod with a hostPath volume.
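For example, if the password file lives at /etc/kubernetes/basic_auth.csv on the host (a made-up path), the kube-apiserver container in kube-apiserver.json needs a volume mount, roughly:
"volumeMounts": [
  { "name": "basic-auth", "mountPath": "/etc/kubernetes/basic_auth.csv", "readOnly": true }
]
and the pod spec a matching hostPath volume:
"volumes": [
  { "name": "basic-auth", "hostPath": { "path": "/etc/kubernetes/basic_auth.csv" } }
]
with --basic-auth-file=/etc/kubernetes/basic_auth.csv added to the apiserver command. This is a sketch of the relevant stanzas, not the full manifest.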

Remotely create a vhost on a docker container running rabbitmq

I have a Vagrantfile that does 2 important things: first it pulls and runs dockerfile/rabbitmq, then it builds from a custom Dockerfile and runs an application which assumes a vhost on the rabbitmq server, let's say "/foo".
The problem is the vhost is not there.
The container with rabbitmq is running successfully, and the app is linked to it using --link when the built image is run. Using the environment variables Docker sets, I can hit the server. But somewhere in the middle of these operations I need to create the vhost, as my connection is refused, I assume because "/foo" is not there.
How can I get the vhost onto the rabbit server?
Thanks
Note - using the web admin is not an option; this has to be done programmatically.
You can put default_vhost in /etc/rabbitmq/rabbitmq.config: http://www.rabbitmq.com/configure.html
It will then be created on the first run. (Stop the server and delete the mnesia directory if it has already been started.)
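A minimal classic-format rabbitmq.config along those lines, using the "/foo" vhost from the question, would look like this (a sketch):
[
  {rabbit, [
    {default_vhost, <<"/foo">>}
  ]}
].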
There are a few ways to get the desired configuration:
Export/import the whole configuration with rabbitmqadmin, the Management Plugin CLI tool.
or
Use the HTTP API from the management plugin
or
Use the rabbitmqctl CLI tool to manage access control.
BTW, according to the docs here: https://www.rabbitmq.com/vhosts.html
you can do this via curl by using:
curl -u username:pa$sw0rD -X PUT http://rabbitmq.local:15672/api/vhosts/vh1
So it probably doesn't matter whether you are doing this remotely or not.
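For the rabbitmqctl route, since the broker runs in a container, the commands can be executed through docker exec (the container name, user, and permission patterns below are placeholders):
docker exec some-rabbit rabbitmqctl add_vhost /foo
docker exec some-rabbit rabbitmqctl set_permissions -p /foo myuser ".*" ".*" ".*"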