I run the service in Docker Swarm mode with these labels:
- "traefik.docker.network=proxy"
- "traefik.backend=kibana"
- "traefik.frontend.entryPoints=https,http"
- "traefik.frontend.rule=Host:mydomain"
- "traefik.port=5601"
- "traefik.frontend.auth.basic=test:098f6bcd4621d373cade4e832627b4f6"
And I have this problem when using HTTPS:
curl -u test:test https://my-domain.com
401 Unauthorized
With HTTP everything is OK:
curl -u test:test http://my-domain.com
Found
Using htpasswd solved it for me. It seems Traefik hashes the supplied password with the same algorithm and compares it against the stored hash.
apt install apache2-utils
htpasswd -nb your_username "your_password_here"
You will receive the corresponding hash:
your_username:khrglahfslgkha345346
Copy and paste it into your .toml or docker-compose file.
Use your password (not the hash) to log in on your frontend and everything will work fine.
I have recently found out that you have to take care of the dollar signs in the resulting hash: in a docker-compose file each $ must be doubled to $$, and other scenarios need different escaping (see the sketch below).
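A one-liner sketch for generating an already-escaped entry (test/test as placeholder credentials):
# double every "$" so docker-compose does not treat $apr1$... as variable interpolation
htpasswd -nb test test | sed -e 's/\$/\$\$/g'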
I found the cause of the problem: I deployed the service as a stack with the Traefik label "traefik.frontend.auth.basic=test:$$apr1$$EaOXV0L6$DQbzuXBeb6Y8jjI2ZbGsg/", but after deployment the value of this label looked like test:/.
After manually setting the correct value, auth works fine.
I also tried deploying the service with docker service create, and there the label had the correct value.
I'm learning Docker, so I'm sorry if this question sounds silly. Anyway, my goal is to create a LAMP container which handles all the databases in one place, and I also want to set up multiple virtual hosts for many sites. For each of these sites I want to use certbot to request an SSL certificate.
For doing so, I wrote the following docker-compose.yaml:
version: "3"
services:
web:
image: webdevops/php-apache:alpine-php7
ports:
- "80:80"
volumes:
- ./www:/app
- ./php.ini:/opt/docker/etc/php/php.ini
- ./sites-available:/opt/docker/etc/httpd/vhost.common.d
db:
image: mysql
command: --default-authentication-plugin=mysql_native_password
restart: always
environment:
MYSQL_ROOT_PASSWORD: root
ports:
- "3306:3306"
phpmyadmin:
image: phpmyadmin/phpmyadmin
environment:
MYSQL_ROOT_PASSWORD: root
ports:
- "8088:80"
certbot:
image: webdevops/certbot
volumes:
- ./etc/letsencrypt:/etc/letsencrypt
In the first service I'm declaring Apache as web, using the Alpine image created by webdevops (see its documentation). I bind port 80 so I can access Apache externally without specifying a custom port.
In the volumes section I added the www folder, which contains the PHP scripts.
I also specified a custom php.ini to override the default PHP settings. Then, as the last part of volumes, I mounted all the virtual hosts I created inside the sites-available folder into the vhost.common.d directory.
Finally, I have the certbot container as the last part of my docker-compose file, and I would like to do the following:
How can I request a certificate for my subdomains, whose vhosts are stored inside the sites-available folder mounted as a volume of the web container?
How can I set up a cron job, or a similar task, that auto-renews all the certificates?
How can I store the obtained certificates in a volume?
I'll admit, Docker at times is a struggle when piecing together all the appropriate parts. With that said, my answer will not be complete, but hopefully it gets you a step closer.
The following will create a certificate (note the --dry-run; it is highly recommended you use it for your testing, or else you'll get throttled):
docker run -it --rm \
  -v /docker-volumes/etc/letsencrypt:/etc/letsencrypt \
  -v /docker-volumes/var/lib/letsencrypt:/var/lib/letsencrypt \
  -v /vol/to/the/web/root:/data/letsencrypt \
  certbot/certbot certonly \
  --noninteractive \
  --webroot --webroot-path=/data/letsencrypt \
  -d sub.domain.com \
  --dry-run
-v /docker-volumes/etc/letsencrypt:/etc/letsencrypt
this is needed to store the cert itself
-v /docker-volumes/var/lib/letsencrypt:/var/lib/letsencrypt
not required, but useful in case you want to review log messages
-v /vol/to/the/web/root:/data/letsencrypt
you need to give certbot access to your web root so it can create the .well-known directory and do its checks; this one was tricky, as you need to use the same location that your web container uses for its web-root volume
--noninteractive
certbot will bypass asking you questions
--webroot --webroot-path=/data/letsencrypt
tells certbot where to find the webroot (i.e. the path within its own container)
Although not in the command above, you can add the following to help create the cert when prompted for an email address; I'm not sure whether it is a requirement or not:
--email [email_address] --agree-tos --no-eff-email
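For reference, a sketch of the same command with those flags added (the email address is a placeholder):
docker run -it --rm \
  -v /docker-volumes/etc/letsencrypt:/etc/letsencrypt \
  -v /docker-volumes/var/lib/letsencrypt:/var/lib/letsencrypt \
  -v /vol/to/the/web/root:/data/letsencrypt \
  certbot/certbot certonly \
  --noninteractive \
  --webroot --webroot-path=/data/letsencrypt \
  --email [email_address] --agree-tos --no-eff-email \
  -d sub.domain.com \
  --dry-run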
Things to keep in mind:
run certbot in --dry-run mode while testing, or else you will be throttled
certbot needs HTTP access to the host: your vhost declaration should not redirect or deny HTTP requests, at least not to the .well-known directory
you will need to add the appropriate SSL options to your vhost; I think certbot can do this automatically, but I have not used that myself
you will then need to reload Apache, like so: /etc/init.d/apache2 reload
remove -it when/if you are running in cron
explore wrapping the cert creation and renewal in a shell script (a rough sketch follows)
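For example, a rough sketch of such a wrapper, suitable for cron (untested; the volume paths match the command above, and the container name web is an assumption based on the compose file, so adjust both to your setup):
#!/bin/sh
# renew any certificates close to expiry (no -it, since cron has no TTY)
docker run --rm \
  -v /docker-volumes/etc/letsencrypt:/etc/letsencrypt \
  -v /docker-volumes/var/lib/letsencrypt:/var/lib/letsencrypt \
  -v /vol/to/the/web/root:/data/letsencrypt \
  certbot/certbot renew \
  --webroot --webroot-path=/data/letsencrypt
# reload Apache in the web container so it picks up renewed certificates
docker exec web /etc/init.d/apache2 reload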
While I know this is not "the answer", hopefully some of this helps.
I'm currently struggling to get Graylog working over HTTPS in a Docker environment. I'm using jwilder/nginx-proxy and I have the certificates in place.
When I run:
docker run --name=graylog-prod \
  --link mongo-prod:mongo \
  --link elastic-prod:elasticsearch \
  -e VIRTUAL_PORT=9000 \
  -e VIRTUAL_HOST=test.myserver.com \
  -e GRAYLOG_WEB_ENDPOINT_URI="http://test.myserver.com/api" \
  -e GRAYLOG_PASSWORD_SECRET=somepasswordpepper \
  -e GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918 \
  -d graylog2/server
I get the following error:
We are experiencing problems connecting to the Graylog server running
on http://test.myserver.com:9000/api. Please verify that the server is
healthy and working correctly.
You will be automatically redirected to the previous page once we can
connect to the server.
This is the last response we received from the server:
Error message: Bad request
Original Request: GET http://test.myserver.com/api/system/sessions
Status code: undefined
Full error message: Error: Request has been terminated
Possible causes: the network is offline, Origin is not allowed by Access-Control-Allow-Origin, the page is being unloaded, etc.
When I go to the URL in the message, I get the reply: {"session_id":null,"username":null,"is_valid":false}
This is the same reply I get when running Graylog without HTTPS.
Nothing about this is mentioned in the Graylog container's Docker log.
docker ps:
CONTAINER ID   IMAGE             COMMAND                  CREATED         STATUS         PORTS                 NAMES
56c9b3b4fc74   graylog2/server   "/docker-entrypoint.s"   5 minutes ago   Up 5 minutes   9000/tcp, 12900/tcp   graylog-prod
When running Docker with the option -p 9000:9000, everything works fine without HTTPS, but as soon as I force it to go over HTTPS I get this error.
Does anyone have an idea what I'm doing wrong here?
Thanks a lot!
Did you try GRAYLOG_WEB_ENDPOINT_URI="https://test.myserver.com/api"?
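That is, the same run command with only the endpoint scheme changed:
docker run --name=graylog-prod \
  --link mongo-prod:mongo \
  --link elastic-prod:elasticsearch \
  -e VIRTUAL_PORT=9000 \
  -e VIRTUAL_HOST=test.myserver.com \
  -e GRAYLOG_WEB_ENDPOINT_URI="https://test.myserver.com/api" \
  -e GRAYLOG_PASSWORD_SECRET=somepasswordpepper \
  -e GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918 \
  -d graylog2/server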
I would like to deploy my Meteor app to Heroku and make it only accessible through HTTPS. Ideally, I want to do this as cheaply as possible.
Create the Certificate
Run these commands to get certbot-auto; it should work on most systems:
wget https://dl.eff.org/certbot-auto
chmod 755 certbot-auto
This command starts the process of getting your certificate. The -d flag allows you to pass in the domain you would like to secure. Alternatively, without the -d flag, it will pop up a prompt where you can enter the domain.
./certbot-auto certonly --manual -d app.yoursite.com
Then it will ask you the following. Do not hit enter.
Make sure your web server displays the following content at
http://app.yoursite.com/.well-known/acme-challenge/SOME-LENGTHY-KEY before continuing:
SOME-LONGER-KEY
Use Picker
I suggest this method because, on renewal, you will only need to update an environment variable. You can use public/ as described below, but that requires a rebuild of your entire app every time.
Run meteor add meteorhacks:picker
In a server-side file, add the following:
import { Picker } from 'meteor/meteorhacks:picker';

// Answer the ACME challenge with the key stored in an environment variable
Picker.route('/.well-known/acme-challenge/:routeKey', (params, request, response) => {
  response.writeHead(200, {'Content-Type': 'text/plain'});
  response.write(process.env.SSL_PAGE_KEY);
  response.end();
});
Then set an environment variable SSL_PAGE_KEY to SOME-LONGER-KEY with
heroku config:set SSL_PAGE_KEY=SOME-LONGER-KEY
Use public/
Create the directory path in your public folder. If you don't have one, create one.
mkdir -p public/.well-known/acme-challenge/
Then create the file SOME-LENGTHY-KEY and place SOME-LONGER-KEY inside it
echo SOME-LONGER-KEY > public/.well-known/acme-challenge/SOME-LENGTHY-KEY
Commit and push those changes to your Heroku app.
git add public/.well-known
git commit -m "Add Let's Encrypt challenge file"
git push heroku master
Now hit enter to continue the verification process. You should receive a message like this
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at
/etc/letsencrypt/live/app.yoursite.com/fullchain.pem. Your cert will
expire on 2016-04-11. To obtain a new version of the certificate in
the future, simply run Let's Encrypt again.
Upload the Certificate
To upload your certificates to Heroku, first enable the SSL Beta
heroku labs:enable http-sni -a your-app
heroku plugins:install heroku-certs
Then add your fullchain.pem and privkey.pem to Heroku.
sudo heroku _certs:add /etc/letsencrypt/live/app.yoursite.com/fullchain.pem /etc/letsencrypt/live/app.yoursite.com/privkey.pem
You can verify that the certificate was uploaded with
heroku _certs:info
Change your DNS Settings
Update your DNS so that app.yoursite.com points to app.yoursite.com.herokudns.com.
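In zone-file notation the record would be a CNAME, something like this (most DNS providers have their own interface for this):
app.yoursite.com.  3600  IN  CNAME  app.yoursite.com.herokudns.com.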
Verify SSL is working
To check that SSL is set up, run the following. -v gives you verbose output, -I fetches the document info (headers) only, and -H passes a header with the request. The header we're passing ensures that a cache is not used, so you get your new certificate and not an old one.
curl -vI https://app.yoursite.com -H "Cache-Control: no-cache"
Check that the output contains the following
* Server certificate:
* subject: C=US; ST=CA; L=SF; O=SFDC; OU=Heroku; CN=app.yoursite.com
If the subject line does not contain CN=app.yoursite.com, wait 5 to 10 minutes and try again. If it does, you're almost good to go.
Make Meteor Specific Changes
To finish up the process, you'll want to change your ROOT_URL environment variable to the new https version.
heroku config:set ROOT_URL=https://app.yoursite.com
Then you'll want to ensure that your users are always using SSL with the force-ssl package
meteor add force-ssl
Lastly, if you have any OAuth logins set up in your app (Facebook, Google, etc), you'll want to provide them with the new https version of your URL.
Renewal
Run certbot-auto again
./certbot-auto certonly --manual -d app.yoursite.com
It may prompt you for the same endpoint with the same content. If it does, just hit enter. If it does not, you will need to repeat the above steps.
It will then create new certificate files, which you will upload to Heroku with
heroku certs:update /etc/letsencrypt/live/app.yoursite.com/fullchain.pem /etc/letsencrypt/live/app.yoursite.com/privkey.pem
Then, to confirm, run the commands from the "Verify SSL is working" section above.
Sources
https://certbot.eff.org/#ubuntutrusty-other
https://devcenter.heroku.com/articles/ssl-beta
https://themeteorchef.com/blog/securing-meteor-applications/
I'm trying to fetch some data from a Microsoft Dynamics NAV web service.
This service uses NTLM authentication.
If I open the web service URL in a browser and use the given credentials, everything works fine.
While setting up the environment for the web service client, I used the command line to check whether everything was working, and at a specific point I was unable to authenticate.
That's the command I am using:
curl --ntlm -u "DOMAIN\USERNAME" -k -v "http://hostname:port/instance/Odata/Company('CompanyName')/Customer"
The command will prompt for the password.
I paste in the password and everything works fine.
But when I use this command, with the password already included, it stops working and the authentication fails:
curl --ntlm -u "DOMAIN\USERNAME:PASSWORD" -k -v "http://hostname:port/instance/Odata/Company('CompanyName')/Customer"
The password contains some special characters, so I tried percent-encoding it, which had no effect at all.
It is very difficult to research this kind of issue: searching for curl + NTLM authentication issues yields a lot of results, but nothing related to this specific problem.
Has anyone here already run into this kind of issue?
I had a problem with authentication because of cookies. I solved it by keeping the cookies in a txt file and using exactly that file across all requests. For example, after the login request I saved the cookies:
curl -X POST -u username:password https://mysite/login -c cookies.txt
And with the next request I used that file like this:
curl -X POST -u username:password https://mysite/link -b cookies.txt
This solution worked for me. I don't know if your problem is similar, but I think you may try this.
I was struggling with a similar issue for a long time, and finally I found this curl bug report: #1253 NTLM authentication fails when password contains special characters (british pound symbol £).
NTLM authentication in cURL supports only ASCII characters in passwords! This is still the case in version 7.50.1 on Ubuntu; I tested many different distributions and it is always the same. This bug also breaks curl_init() in PHP (tested on PHP 7). The only way around it is to avoid non-ASCII characters in NTLM authentication passwords.
If you are using Python, then you are lucky: the Python NTLM implementation doesn't go through cURL, and it works with non-ASCII characters if you use the HttpNtlmAuth package.
Try the --proxy-ntlm flag.
Something like this:
curl -v --proxy-ntlm -x yourproxy.com:8080 -u 'username:password' someURL
From curl --help:
-x, --proxy [PROTOCOL://]HOST[:PORT] Use proxy on given port
--proxy-anyauth Pick "any" proxy authentication method (H)
--proxy-basic Use Basic authentication on the proxy (H)
--proxy-digest Use Digest authentication on the proxy (H)
--proxy-negotiate Use Negotiate authentication on the proxy (H)
--proxy-ntlm Use NTLM authentication on the proxy (H)
Our Docker images contain closed-source code, so we need to store them somewhere safe, using our own private Docker registry.
We are looking for the simplest way to deploy a private Docker registry with a simple authentication layer.
I found :
this manual way: http://www.activestate.com/blog/2014/01/deploying-your-own-private-docker-registry
and the shipyard/docker-private-registry Docker image, based on stackbrew/registry and adding basic auth via Nginx: https://github.com/shipyard/docker-private-registry
I'm thinking of using shipyard/docker-private-registry, but is there another, better way?
I'm still learning how to run and use Docker, so consider this an idea:
# Run the registry on the server, allow only localhost connection
docker run -p 127.0.0.1:5000:5000 registry
# On the client, set up SSH tunnelling
ssh -N -L 5000:localhost:5000 user@server
The registry is then accessible at localhost:5000; authentication is done through SSH, which you probably already know and use.
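With the tunnel up, you can tag and push through it as if the registry were local (myimage is a placeholder name):
docker tag myimage localhost:5000/myimage
docker push localhost:5000/myimage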
Sources:
https://blog.codecentric.de/en/2014/02/docker-registry-run-private-docker-image-repository/
https://docs.docker.com/userguide/dockerlinks/
You can also use an Nginx front-end with a Basic Auth and an SSL certificate.
Regarding the SSL certificate, I spent a couple of hours trying to get a working self-signed certificate, but Docker wasn't able to work with the registry. To solve this I used a free signed certificate, which works perfectly (I used StartSSL, but there are others).
Also be careful when generating the certificate: if you want the registry running at the URL registry.damienroch.com, you must include the subdomain when requesting it, otherwise it's not going to work.
You can perform all this setup using Docker and my nginx-proxy image (see the README on GitHub: https://github.com/zedtux/nginx-proxy).
This means that, if you have installed nginx using your distribution's package manager, you will replace it with a containerised nginx.
Place your certificate (.crt and .key files) on your server in a folder (I'm using /etc/docker/nginx/ssl/, and the certificate names are private-registry.crt and private-registry.key)
Generate a .htpasswd file and upload it to your server (I'm using /etc/docker/nginx/htpasswd/, and the filename is accounts.htpasswd)
Create a folder where the images will be stored (I'm using /etc/docker/registry/)
Run my nginx-proxy image using docker run
Run the Docker registry with some environment variables that nginx-proxy will use to configure itself
Here is an example of the commands to run for the previous steps:
sudo docker run -d --name nginx \
  -p 80:80 -p 443:443 \
  -v /etc/docker/nginx/ssl/:/etc/nginx/ssl/ \
  -v /var/run/docker.sock:/tmp/docker.sock \
  -v /etc/docker/nginx/htpasswd/:/etc/nginx/htpasswd/ \
  zedtux/nginx-proxy:latest

sudo docker run -d --name registry \
  -e VIRTUAL_HOST=registry.damienroch.com \
  -e MAX_UPLOAD_SIZE=0 \
  -e SSL_FILENAME=private-registry \
  -e HTPASSWD_FILENAME=accounts \
  -e DOCKER_REGISTRY=true \
  -v /etc/docker/registry/data/:/tmp/registry \
  registry
The first command starts nginx and the second one the registry. It's important to run them in this order.
When both are up and running, you should be able to log in with:
docker login https://registry.damienroch.com
I have created an almost ready-to-use, and certainly ready-to-function, setup for running a docker-registry: https://github.com/kwk/docker-registry-setup
Maybe it helps.
Everything (registry, auth server, and LDAP server) runs in containers, which makes the parts replaceable as soon as you're ready. The setup is fully configured to make it easy to get started. There are even demo certificates for HTTPS, but they should be replaced at some point.
If you don't want LDAP authentication but simple static authentication, you can disable it in auth/config/config.yml and put in your own combination of usernames and hashed passwords.
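As a sketch, such a hashed entry could be generated with htpasswd (assuming the config accepts htpasswd-style bcrypt hashes; check the repository's README for the exact format config.yml expects):
# prints a username:hash line you can copy into the config
htpasswd -nB your_username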