Error on Harbor as Docker Hub proxy when "Prevent Vulnerable Images" is enabled - harbor

I am using Harbor as a Docker Hub proxy, and the Docker daemon is configured to point at this registry. That part works fine.
But I ran into a problem when I enable the "Prevent Vulnerable Images" option on the "library" project, as shown in the image below.
https://i.ibb.co/fDZKWN1/harbor.png
If this option is checked, the docker pull command fails with the error below; if it is unchecked, the command works fine.
https://i.ibb.co/D9cMCXQ/error.png
- harbor version: v1.8.2 (online)
- docker engine version: 18.06.1-ce
- docker-compose version: 1.24.1
Github issue: https://github.com/goharbor/harbor/issues/9094

So far, Harbor does not fully support the registry mirror/proxy function. The good news is that registry proxy/mirror support is now on the Harbor roadmap; you can check it here.
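For context, pointing the Docker daemon at a registry mirror (as the questioner describes) is usually done in /etc/docker/daemon.json — a minimal sketch, where harbor.example.com is a placeholder for your Harbor hostname:

```json
{
  "registry-mirrors": ["https://harbor.example.com"]
}
```

The daemon must be restarted (e.g. systemctl restart docker) for this setting to take effect.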

Related

testContainers and Rancher

I have a Spring Boot application with integration tests that use Testcontainers.
Until recently I used Docker Desktop and could easily run the tests from within IntelliJ or from the CLI.
Recently I switched my Windows machine to Rancher Desktop.
Now, when trying to run the integration tests with gradle integrationTest, I get this error:
Caused by: java.lang.IllegalStateException: Previous attempts to find a Docker environment failed. Will not retry. Please see logs and check configuration
at org.testcontainers.dockerclient.DockerClientProviderStrategy.getFirstValidStrategy(DockerClientProviderStrategy.java:109)
at org.testcontainers.DockerClientFactory.getOrInitializeStrategy(DockerClientFactory.java:136)
at org.testcontainers.DockerClientFactory.client(DockerClientFactory.java:178)
at org.testcontainers.LazyDockerClient.getDockerClient(LazyDockerClient.java:14)
at org.testcontainers.LazyDockerClient.authConfig(LazyDockerClient.java:12)
at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:310)
Is there an additional configuration that is needed in Intellij or Rancher or Windows to make it work?
UPDATE Feb 2022: As reported here TestContainers works nicely with Rancher Desktop 1.0.1.
Based on the following two closed issues - first, second - in the testcontainers-java GitHub repository, Testcontainers doesn't seem to support Rancher Desktop, at least not officially.
I'm running Rancher Desktop version 1.0.0 on my Windows machine and got Testcontainers to work simply by adding checks.disable=true to .testcontainers.properties (located under C:\Users\<your user>).
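For reference, the workaround above amounts to a one-line entry in the Testcontainers per-user configuration file — a sketch, assuming the default file location in the user's home directory:

```properties
# C:\Users\<your user>\.testcontainers.properties
# Disables the startup environment checks that fail under Rancher Desktop 1.0.0
checks.disable=true
```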
updating Rancher Desktop to version 1.0.1 fixed this issue for me
I got this error because my Rancher Desktop was using containerd. If you also use Rancher Desktop, try switching to dockerd under Settings, but back up your data first, just in case.

Use existing file sharing / volumes in Docker-compose with Docker for Windows WSL2

With Docker for Windows on Windows 10 Pro with Hyper-V, I normally work with file sharing. In the Docker Desktop Resources section I add a folder such as K:\data on my Windows host. This has worked well for me for many years.
So my current configuration is:
Windows 10 pro
Data folder on Windows 10 HOST is e.g. K:\data
Docker for Windows with Hyper-V
Docker compose (stack) file with 2 Docker images: MySql and Jenkins.
Docker components both access the data that is (via a volume specification) residing on the Windows host.
I am investigating whether I can switch to Docker for Windows with WSL2. I would then like to keep using the Docker Compose file with the two Docker containers, and to keep using the data residing on the Windows host, specifically in K:\data.
Is it possible to switch from Docker for Windows with Hyper-V to Docker for Windows WSL2 and still use the existing data residing in the Windows folder?
This is not a duplicate question: I work with Docker Compose, so I don't want the "docker run -v 'host'" solution. In my Docker Compose file I use e.g. the following lines:
volumes:
- //k/data/spring-boot-app:/data/spring-boot-app
This question has become more important because, since version 2.5+, the Hyper-V version barely works on my standard Win10 Pro machine, while WSL2 worked immediately.
I pose this as a simple user question so that others may benefit from it. I know that there is a world behind this topic.
Easy does it: just switching to WSL2 works. I tried both the Docker for Windows 2.4 and 3.0.0 (21-12-2020) versions.
Let me prove and explain this in two simple steps. Pay attention to the way the volumes are specified in the docker-compose.yml file.
First, enable WSL2 in your Docker Desktop.
1 - Demo with a simple app:
version: '3'
services:
  chatbot:
    image: myusernameatdocker/chatbot:1.0-SNAPSHOT
    ports:
      - 8080:8080
    volumes:
      - /k/data/chatbot:/data/chatbot
I created a simple Spring Boot application that writes one line to a log file on Windows. I could read that file perfectly, right at k:\data\chatbot\myfile.txt.
2 - Demo with an existing MySql environment:
version: '3'
services:
  mymysqldb:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=root123
      - MYSQL_DATABASE=mydatabase
      - MYSQL_USER=johan
      - MYSQL_PASSWORD=bladibladibla
    volumes:
      - /k/data/var/mysql-data:/var/lib/mysql
    ports:
      - "3306:3306"
I just started it and it works. Previously I first had to add the K:\data folder when installing Docker; this time I didn't need to do that.
AND ... working with //k/data also still worked, so the MySQL setup I have used for many years keeps working.
Did I get the warning that accessing the Windows file system is slow? Yes. In my case, for development work, that is not a big deal. It works!

Building Docker images with GitLab CI/CD on existing git-runner

I have to build and push a Docker image via gitlab-ci. I have gone through the official documentation:
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html
I want to adopt the shell method, but I already have a working GitLab runner on my server machine. What is the procedure in this case? If I re-register the runner on the same machine, will it affect the old one?
Thanks in advance.
Assuming that you installed gitlab-runner as a system service and not inside a container, you can easily register another shell runner on your server using the command gitlab-ci-multi-runner register.
This is indirectly confirmed by the advanced configuration documentation, which states that the config.toml of the gitlab-runner service may contain multiple [[runners]] sections.
Note: To allow the shell runner to build docker images, you will need to add the gitlab-runner user to the docker group, e.g.:
sudo gpasswd --add gitlab-runner docker
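To illustrate the point about multiple runners coexisting in one config.toml, the file might look roughly like this after registering a second runner (names, URL, and tokens below are placeholders, not real values):

```toml
concurrent = 2

[[runners]]
  name = "existing-runner"
  url = "https://gitlab.example.com/"
  token = "TOKEN_OF_EXISTING_RUNNER"
  executor = "docker"

[[runners]]
  name = "new-shell-runner"
  url = "https://gitlab.example.com/"
  token = "TOKEN_OF_NEW_RUNNER"
  executor = "shell"
```

Each [[runners]] section is independent, which is why registering a second runner does not disturb the first.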

Up-to-date Ignite web console docker image

I'm wondering if there is an up-to-date Docker image for the Ignite web console.
When I pull "docker pull apacheignite/web-console-standalone" I only get an outdated version that isn't compatible with current web agent.
Or is the Dockerfile available so I can build the image myself without starting from scratch?
Maybe there is even a Dockerfile that puts the web agent and console in one image?
Thanks for any help!
apacheignite/web-console-standalone will be updated soon.
You can also build the Docker image yourself:
Check out Apache Ignite master: Ignite Git: https://git-wip-us.apache.org/repos/asf/ignite
cd modules/web-console/docker/standalone/
Run ./build.sh (you may need sudo).

Docker support intellij

I was running Docker inside a VM and was using the Docker integration plugin in IntelliJ (IntelliJ on my host machine, not the VM). I upgraded my OS and can now run my Docker containers directly on my host machine, but I can't figure out how to use the Docker plugin anymore. How can I use the plugin when Docker is running natively? When it was running in a VM, I would go to Settings -> Build, Execution, Deployment -> Clouds and enter MY_VM_IP:2376 under API URL, but now I have no idea what to put there (or even whether that's where I configure it). I tried 127.0.0.1:2376 and also 192.168.99.100:2376; both give me a 'Network is unreachable' error.
I found out how:
I edited the /etc/sysconfig/docker file and added "-H 0.0.0.0:2376 -H unix:///var/run/docker.sock" to OPTIONS. Then I set 127.0.0.1:2376 as the API URL under Settings -> Build, Execution, Deployment -> Clouds, and it's working.
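For reference, the resulting daemon options entry looks something like this (a sketch; the exact file and variable name depend on your distribution):

```shell
# /etc/sysconfig/docker
# Listen on both the TCP socket (for IntelliJ) and the default Unix socket
OPTIONS="-H 0.0.0.0:2376 -H unix:///var/run/docker.sock"
```

Be aware that exposing the Docker API on 0.0.0.0 without TLS is insecure; restrict it to a trusted network or enable TLS verification if the machine is reachable from outside.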