Issue with docker push on local registry: HTTPS access to resource denied - SSL

I have a problem with my Docker registry. My "server" VM runs Kali Linux. I created the Docker registry over HTTP and used a CentOS VM as the client. I declared the registry as insecure on the client VM and it worked perfectly.
Now I am trying to move it to HTTPS. To do that, I use nginx as a proxy. I followed this tutorial: Step 5 — Setting Up SSL, except for Part 8, which turns it into a service (I don't know why, but I can't get that part to work).
Because I don't have a domain name, I used a fake one. So that it can be resolved, I added my IP (192.168.X.X) and the domain name I used (myregistryexemple) to the /etc/hosts file on both VMs.
As instructed by the tutorial, I generated the certificate on my "server" VM (the Kali one) and sent it by scp to my client VM. I made the CentOS VM trust the certificate with these commands:
yum install ca-certificates                      # make sure the CA trust tooling is present
update-ca-trust force-enable                     # enable the source/anchors mechanism
cp cert.crt /etc/pki/ca-trust/source/anchors/    # add the registry certificate as a trust anchor
update-ca-trust extract                          # rebuild the system trust store
I restarted the Docker service on the client VM, then launched the Docker registry and the nginx proxy with docker-compose up on my Kali VM.
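My docker-compose.yml follows the tutorial and looks roughly like this (image versions and paths are assumptions based on the tutorial, not my exact file):

nginx:
  image: "nginx:1.9"
  ports:
    - 443:443                      # TLS terminates at the proxy
  links:
    - registry:registry
  volumes:
    - ./nginx/:/etc/nginx/conf.d   # nginx config, incl. the SSL cert paths
registry:
  image: registry:2
  ports:
    - 127.0.0.1:5000:5000          # registry only reachable via the proxy
  environment:
    REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
  volumes:
    - ./data:/data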
I tagged an Ubuntu image and tried to push it to the registry:
docker tag ubuntu myregistryexemple/ubuntu
docker push myregistryexemple/ubuntu
But I got this error:
The push refers to a repository [docker.io/myregistryexemple/ubuntu]
56827159aa8b: Preparing
440e02c3dcde: Preparing
29660d0e5bb2: Preparing
85782553e37a: Preparing
745f5be9952c: Preparing
denied: requested access to the resource is denied
Then I tried to push to localhost directly:
docker tag ubuntu localhost:5000/ubuntu && docker push localhost:5000/ubuntu
Then I ran docker login against the domain from the client VM and it worked, but when I tried to pull from my domain registry on the client VM, Docker could not find the images I had tried to push.
Does anyone have an idea why, and how to fix it?

OK, so I found a way to make it work.
It is quite simple: just follow the complete tutorial quoted in the question (https://www.digitalocean.com/community/tutorials/how-to-set-up-a-private-docker-registry-on-ubuntu-14-04#step-5-%E2%80%94-setting-up-ssl).
After you have created the registry, and before you push/pull a Docker image, you need to edit /etc/hosts on both the client and server VMs.
Add the line (IP first, then the chosen domain): serverVmIp domainChosen
Save and quit.
Now we need the client to trust the generated certificate. To do that, you can use this tutorial: http://kb.kerio.com/product/kerio-connect/server-configuration/ssl-certificates/adding-trusted-root-certificates-to-the-server-1605.html. A Docker-only alternative is sketched below.
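Alternatively, Docker can trust a certificate for a single registry without touching the system trust store, by dropping it under /etc/docker/certs.d/ on the client VM (the directory name below assumes the fake domain from the question):

mkdir -p /etc/docker/certs.d/myregistryexemple              # one directory per registry host
cp cert.crt /etc/docker/certs.d/myregistryexemple/ca.crt    # Docker picks up ca.crt automatically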
Then restart your registry and your Docker daemon, and you should be able to use your domain name to push/pull to your registry over HTTPS.
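One detail worth noting: Docker only treats the first component of an image name as a registry host if it contains a dot or a port (or is localhost); otherwise the push goes to docker.io, which is exactly what the [docker.io/myregistryexemple/ubuntu] line in the error shows. A sketch of the push once everything is in place (the :443 port is an assumption about where the nginx proxy listens):

docker tag ubuntu myregistryexemple:443/ubuntu    # the port forces Docker to treat this as a registry host
docker push myregistryexemple:443/ubuntu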


Compute Engine - GCP Click to Deploy solution crashed

I am using Google Cloud Deployment Manager - the WordPress click to deploy solution.
I installed a certificate through the virtual machine SSH on the Compute Engine page using Certbot. Immediately after I installed the certificate, the page started showing "ssl_error_bad_cert_domain" and didn't open.
I went back to the SSH session and deleted the certificate with the command sudo certbot delete. Since that didn't solve the error, I tried turning the VM off and on and also restarting it, which didn't resolve the issue either.
I could see in the Logs Explorer that there was an error coming from the VM saying: Invalid ssh key entry - expired key:[expired_key], so I requested a new one through the VM SSH using:
ssh-keygen -t rsa -f ~/.ssh/gcloud_instance1 -C username
then printed the contents of the public key with:
cd ~/.ssh && cat gcloud_instance1.pub
and added that to the VM's ssh-keys text area. That did stop the errors in the Logs Explorer, but it didn't solve the issue, since the WordPress deployment still doesn't open.
Another thing to add: when the VM was turned off and on, the IP address changed; I am not sure whether that is also contributing to the crash.
This is how the page currently looks: [screenshot: webpage failing]
This is the logs explorer : https://docs.google.com/spreadsheets/d/1cKXWkfaFbmUFakomwM_-TtoSelxWKBDxNBK7fnMy_PA/edit?usp=sharing
Any ideas what could be happening?
Thanks
The "ssl_error_bad_cert_domain" error can happen when the SSL certificate is not properly installed or is not issued for the correct domain.
My guess is that you had a typo in the domain name or you need a static ip.
You can use this "-sudo certbot certificates" command to verify after the installation.
It is recommended to have a static IP for better stability, especially if you are using SSL certificates.
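A minimal sketch of the checks, assuming the Apache stack that the WordPress click to deploy image typically ships (the domain names are placeholders):

sudo certbot certificates                                  # list issued certs and the domains they cover
sudo certbot delete --cert-name example.com                # remove a cert issued for the wrong name
sudo certbot --apache -d example.com -d www.example.com    # reissue for the correct domain

To stop the IP from changing on restart, you can promote the VM's ephemeral external address to a static one (both values below are placeholders):

gcloud compute addresses create wordpress-ip \
    --addresses=203.0.113.10 --region=us-central1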

Ambari host registration is failing.

I have installed and configured ambari-server as the root user and the ambari agents as a non-root user. Also:
- Passwordless SSH authentication is set up and working fine.
- ntp is installed and running.
- The hostname is updated in /etc/hostname, /etc/hosts, and /etc/sysconfig/network.
- Anaconda Python 2.7.13 is installed as the Python environment and package manager.
- The service was restarted with systemctl restart systemd-hostnamed as well.
- All the sudoers entries were added as per the documentation.
At the host configuration page, it is not able to register the hosts. I get the informational message below, and it eventually times out:
BSHostStatusCollector:55 - Request directory /var/run/ambari-server/bootstrap/6
Since I did a non-root installation for the Ambari agent, I had to choose manual registration instead of automated registration, and after that it worked.
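A sketch of the manual registration path on each host, assuming the defaults from the Ambari docs (the server hostname is a placeholder):

yum install ambari-agent
# point the agent at the Ambari server; the [server] hostname defaults to localhost
sed -i 's/hostname=localhost/hostname=ambari-server.example.com/' /etc/ambari-agent/conf/ambari-agent.ini
ambari-agent start

Then pick "Perform manual registration on hosts and do not use SSH" on the Install Options page.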

Unable to register host while creating Apache Ambari cluster

I am trying to create a localhost (single-node) Apache Ambari cluster on CentOS 7. I am using Ambari 2.2.2 binaries downloaded and installed from the Ambari repository with the following commands:
cd /etc/yum.repos.d/
wget http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.2.2.0/ambari.repo
yum install ambari-server
ambari-server setup
ambari-server start
Before starting the server I did all the necessary preparation steps described by Hortonworks, including the setup of passwordless SSH, which is a frequent cause of problems according to posts found on the internet. I verify it with
ssh root@localhost
During the creation of the cluster in the "Install options" window, I enter the name of the host I want to create (localhost in my case) and have already tried both of the options, which are:
- providing the RSA private key directly - in this case the next window simply gets stuck in the "Installing" stage and does not go any further, showing no errors;
- performing manual registration of hosts.
For the second option I have downloaded and installed ambari-agent
yum install ambari-agent
ambari-agent start
In the case of manual host registration I am getting the following error:
"Host checks were skipped on 1 hosts that failed to register."
When I click on "Failed", which in some cases described on the internet is supposed to give a more precise description of the problem, I see the following:
"Registering with the server...
Registration with the server failed."
As a result I don't even know where to start searching for the possible reasons for this error.
Ambari cluster nodes need to be configured with a Fully Qualified Domain Name (FQDN). localhost is not an FQDN. You will need to configure the node with an FQDN and then retry the installation. You could use something like localhost.local, which is an FQDN. This requirement, and how to configure the node to meet it, are documented in the pre-requirements. From the HDP documentation:
All hosts in your system must be configured for both forward and reverse DNS.
If you are unable to configure DNS in this way, you should edit the /etc/hosts file on every host in your cluster to contain the IP address and Fully Qualified Domain Name of each of your hosts.
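A hypothetical example of giving the node an FQDN and a matching hosts entry (the name and IP are placeholders):

hostnamectl set-hostname node1.cluster.local                 # set an FQDN on CentOS 7
echo "10.0.0.10  node1.cluster.local node1" >> /etc/hosts    # forward mapping for the same name
hostname -f                                                  # should now print the FQDN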
I had the same "Registering with the server... Registration with the server failed." problem just recently.
I found a response on the same topic recommending to take a look at the log file, which is located at /var/log/ambari-agent/ambari-agent.log. From there I was able to see that the hostname had been set up incorrectly during some other phase of the installation (I had something like ambari.hadoop instead of localhost). So I went to /etc/ambari-agent/conf/ambari-agent.ini and fixed it there.
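For reference, the relevant part of ambari-agent.ini looks like this (the ports are the stock defaults; the hostname should be your server's actual FQDN):

[server]
hostname=localhost
url_port=8440
secured_url_port=8441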
I know that I'm digging up quite an old question, but it seems that compiling all of that in one place might help someone with the same problem.

how docker-machine uses docker api to copy certificates

As I understand it, docker-machine uses the Docker Remote API to do whatever it does, for example to regenerate certificates. I have checked the Docker API but couldn't find how it is possible to send certificates to that machine using only the Docker API. Can someone help, please?
The TLS files are hosted locally on the Docker client. For this reason you should protect the files as if they were a root password.
This page will walk you through generating the files needed to negotiate a connection over TLS. Note that the remote daemon must be running TLS.
https://docs.docker.com/engine/security/https/
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=$HOST:2376 version
Note: Docker over TLS should run on TCP port 2376.
Warning: As shown in the example above, you don't have to run the docker client with sudo or the docker group when you use certificate authentication. That means anyone with the keys can give any instructions to your Docker daemon, giving them root access to the machine hosting the daemon. Guard these keys as you would a root password!
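For reference, a condensed sketch of the server-side certificate generation from that page (hostname and validity period are assumptions; see the linked docs for the client certs and the subjectAltName/extended key usage steps):

openssl genrsa -aes256 -out ca-key.pem 4096                            # CA private key
openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem   # self-signed CA cert
openssl genrsa -out server-key.pem 4096                                # daemon's server key
openssl req -subj "/CN=$HOST" -sha256 -new -key server-key.pem -out server.csr   # CSR for the daemon host
openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem \
    -CAkey ca-key.pem -CAcreateserial -out server-cert.pem             # sign the server cert with the CA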

artifactory pro registry docker image

I am trying out the 30-day trial version of the artifactory-registry Docker image to evaluate the Docker repository for our internal use. I am following the documentation https://www.jfrog.com/confluence/display/RTF/Running+with+Docker
After I run the Docker image I am able to access the UI on port 8081; however, when I try to push an image I get the following error:
"The plain HTTP request was sent to HTTPS port"
Here's how I deploy the image:
sudo docker pull mysql
sudo docker tag mysql localhost:5002/mysql
sudo docker push localhost:5002/mysql
Also, the documentation says that Artifactory can be accessed at the following URLs:
http://localhost/artifactory
http://localhost:8081/artifactory
https://localhost:5000/v2 (This is mapped to http://localhost:8081/artifactory/api/docker/docker-remote/v2)
https://localhost:5001/v1 (This is mapped to http://localhost:8081/artifactory/api/docker/docker-prod-local/v1)
https://localhost:5002/v1 (This is mapped to http://localhost:8081/artifactory/api/docker/docker-dev-local/v1)
https://localhost:5001/v2 (This is mapped to http://localhost:8081/artifactory/api/docker/docker-prod-local/v2)
https://localhost:5002/v2 (This is mapped to http://localhost:8081/artifactory/api/docker/docker-dev-local/v2)
But I get a 404 trying to access any of the HTTPS URLs.
What am I missing?
This appears to be an NGINX configuration issue (as described here) with not forwarding HTTPS requests to Artifactory.
Changing the configuration to forward your requests should fix the issue.
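A minimal sketch of the kind of server block that does this, following the port mapping listed in the question (the certificate paths are assumptions):

server {
    listen 5002 ssl;                                  # the docker-dev-local v2 port from the question
    server_name localhost;
    ssl_certificate     /etc/nginx/ssl/artifactory.pem;
    ssl_certificate_key /etc/nginx/ssl/artifactory.key;
    location / {
        # terminate TLS here and forward to Artifactory's Docker API over plain HTTP
        proxy_pass http://127.0.0.1:8081/artifactory/api/docker/docker-dev-local$request_uri;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}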