Trivy scan with OpenShift internal registry | How to authenticate against the OpenShift registry with Trivy

I am currently using the Trivy scanner to scan images in the pipeline. This has worked very well until now, but recently it became necessary to scan an image from an internal OpenShift registry.
Unfortunately, I do not know how to authenticate Trivy against the internal registry. The documentation does not give any information regarding OpenShift; it describes Azure and AWS as well as GitHub.
My scan command currently looks like this in Groovy:
trivy image --ignore-unfixed --format template --template "path for output" --output trivy_image_report.html --skip-update --offline-scan $image
Output:
INFO Vulnerability scanning is enabled
INFO Secret scanning is enabled
INFO If your scanning is slow, please try '--security-checks vuln' to disable secret scanning
INFO Please see also https://aquasecurity.github.io/trivy/v0.31.3/docs/secret/scanning/#recommendation for faster secret detection
FATAL image scan error: scan error: unable to initialize a scanner: unable to initialize a docker scanner: 4 errors occurred:
* unable to inspect the image (openshiftregistry/namespace/imagestream:tag): Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
* unable to initialize Podman client: no podman socket found: stat podman/podman.sock: no such file or directory
* containerd socket not found: /run/containerd/containerd.sock
* GET https://openshiftregistry/v2/namespace/imagestream/manifests/tag: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:namespace/imagestream Type:repository]]
The image is stored within an ImageStream in OpenShift. Is there something I can add to the Trivy command to authenticate the service against the registry, or does something else have to be done before I use the command in Groovy?
Thanks for the help.

Thanks to Will Gordon in the comments. This link was very helpful: Access the Registry (OpenShift).
These lines helped me (more information can be found on the linked page):
oc login -u kubeadmin -p <password_from_install_log> https://api-int.<cluster_name>.<base_domain>:6443
And
podman login -u kubeadmin -p $(oc whoami -t) image-registry.openshift-image-registry.svc:5000
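For the Trivy call itself, the token from oc whoami -t can also be handed to Trivy directly through the TRIVY_USERNAME/TRIVY_PASSWORD environment variables described in Trivy's private registry docs. A minimal sketch, assuming the in-cluster registry hostname from above and keeping the placeholders from the original command:
# hand the OpenShift session token to Trivy for registry authentication
export TRIVY_USERNAME=kubeadmin
export TRIVY_PASSWORD=$(oc whoami -t)
trivy image --ignore-unfixed --format template --template "path for output" --output trivy_image_report.html --skip-update --offline-scan image-registry.openshift-image-registry.svc:5000/namespace/imagestream:tag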
Thanks

Related

How can I set up the kube-apiserver to allow kubectl from outside the cluster

I have a single-master, multi-node Kubernetes cluster running. It works great. However, I want to allow kubectl commands to be run from outside the master server. How do I run kubectl get node from my laptop, for example?
If I install kubectl on my laptop I get the following error:
error: client-key-data or client-key must be specified for kubernetes-admin to use the clientCert authentication method
How do I go about this? I have read through the Kubernetes authorization documentation, but I must say it's a bit Greek to me. I am running version 1.10.2.
Thank you.
To extend @sfgroups' answer:
The configurations of all Kubernetes clusters you are managing are stored in the $HOME/.kube/config file. If you have that file on the master node,
the easiest way is to copy it to $HOME/.kube/config on your local machine.
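A minimal sketch of that copy step over SSH, assuming the master is reachable as root at master.example.com (a hypothetical hostname):
# copy the admin kubeconfig from the master to this machine
mkdir -p ~/.kube
scp root@master.example.com:/root/.kube/config ~/.kube/config
# verify that the laptop can now reach the cluster
kubectl get nodes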
You can also keep the file in another place and then specify its location with the KUBECONFIG environment variable:
export KUBECONFIG=/etc/kubernetes/config
or use the --kubeconfig command line parameter instead.
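For example, a one-off call with an explicit config file (the path is just an illustration):
kubectl get nodes --kubeconfig /etc/kubernetes/config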
Cloud providers often give you the possibility to download the config to a local machine from the web interface or with their management CLI.
For GCP:
gcloud container clusters get-credentials NAME [--region=REGION | --zone=ZONE, -z ZONE] [GCLOUD_WIDE_FLAG …]
For Azure:
az login -u yourazureaccount -p yourpassword
az acs kubernetes get-credentials --resource-group=<cluster-resource-group> --name=<cluster-name>
If the cluster was created using the kops utility, you can get the config file with:
kops export kubeconfig ${CLUSTER_NAME}
From your master, copy the /root/.kube directory to C:\Users\<username>\.kube on your laptop.
kubectl will pick up the certificate from the config file automatically.

AEROSPIKE_ERR_CONNECTION Bad file descriptor, 127.0.0.1:3000; Not able to connect to local node from aql

I have installed Aerospike on my Mac by following these installation steps.
All the validations are working fine. I am able to connect to the cluster using the Chrome browser. Below is the screenshot.
I have also installed the AQL tools following the instructions here.
But I'm unable to connect to the local node from aql.
$ aql
2017-11-21 16:06:09 WARN Failed to connect to seed 127.0.0.1 3000.
AEROSPIKE_ERR_CONNECTION Bad file descriptor, 127.0.0.1:3000
Error -1: Failed to connect
$ asadm
Aerospike Interactive Shell, version 0.1.11
ERROR: Not able to connect any cluster.
Also, I have noticed the Java client is giving an error:
AerospikeClient client = new AerospikeClient("localhost", 3000);
When I changed localhost to the actual IP returned by vagrant ssh -c "ip addr" | grep 'global eth1', it works fine.
How do I connect with aql using custom parameters? I want to pass the IP address and port as parameters to aql. Any suggestions?
$ aql --help
https://www.aerospike.com/docs/tools/aql/index.html discusses all the various command line options.
$ aql -h a.b.c.d -p 1234
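For example, if the eth1 address returned by the vagrant ssh command above were 172.28.128.3 (a hypothetical value), the call would look like:
$ aql -h 172.28.128.3 -p 3000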
There is another possibility: you may be running Aerospike on your own port instead of the default 3000, so when you try to connect to Aerospike you can run a command like: aql -p 4000
Hope this helps.
It seems the port was not getting freed even after exiting the Vagrant console.
I tried closing all the terminal windows and then starting again, but no luck.
Finally, restarting the system resolved the issue.
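As a less drastic alternative to a reboot, a sketch of how to find and stop whatever is still holding the port (macOS/Linux; 3000 is the default Aerospike service port, and <PID> is whatever the first command prints):
# show the process listening on port 3000
sudo lsof -nP -iTCP:3000 -sTCP:LISTEN
# then stop it by PID
sudo kill <PID>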

docker-machine create --driver generic kills ssh on google compute engine

Hi, I am still learning Docker's wonderful, magical world. I use Docker on Linux with docker-machine. I have already added two existing Linux servers with docker-machine create and successfully run my containers on them. Now I am trying to do the same with an existing Google Compute Engine machine, which also runs Linux. I use the command:
docker-machine create --driver generic --generic-ip-address ipaddress --generic-ssh-key path_To_Key --generic-ssh-user user_Name machine_Name
And I get an error:
Error creating machine: Error checking the host: Error checking and/or
regenerating the certs: There was an error validating certificates for
host "X.X.X.X:2376": dial tcp X.X.X.X:2376: i/o timeout You can
attempt to regenerate them using 'docker-machine regenerate-certs
[name]'.
Then docker-machine does not know its IP, but I still seem to be able to give it commands through docker-machine ssh.
However, I am not able to log in with SSH anywhere else, and I must stop/remove the created machine and restart it.
Has anyone had a similar problem?
According to the generic driver's page in the Docker docs, try writing --generic-ip-address=ip_address with an equals sign.
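A sketch of the full command in that form, with placeholder values. Note also that an i/o timeout on port 2376 often means a GCE firewall rule is missing, so opening that port (the rule name below is illustrative) may be needed as well:
docker-machine create --driver generic --generic-ip-address=203.0.113.10 --generic-ssh-key ~/.ssh/google_compute_engine --generic-ssh-user user_Name machine_Name
# allow docker-machine to reach the Docker daemon port on the GCE instance
gcloud compute firewall-rules create allow-docker-machine --allow tcp:2376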

Docker login error with Get Started tutorial

I'm trying to follow the beginner tutorial on Docker's website and I'm stuck on an error at login.
The OS is Ubuntu 14.04; I'm not using VirtualBox, I'm not behind any proxy, and I want to push to the regular Docker repository (not a private one).
All the threads I've found mention proxies and private repositories, but that isn't my case; I'm just trying to do the simple beginner tutorial.
Here is my attempt:
$ sudo docker login
[sudo] password for myuname:
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: myDHuname
Password:
Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
My docker info:
Containers: 5
Running: 0
Paused: 0
Stopped: 5
Images: 5
Server Version: 1.11.0
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 28
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge null host
Kernel Version: 3.19.0-58-generic
Operating System: Ubuntu 14.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.686 GiB
Name: myuname-ThinkPad-T420
ID: 6RW3:X3FC:T62N:CWKI:JQW5:YIPY:RAHO:ZHF4:DFZ6:ZL7X:JPOD:V7EC
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Epilogue
Now docker login is passing. I have not touched anything since yesterday when it was broken...
I can't reproduce the behavior anymore.
I encountered this issue when I first used Docker. I had the Shadowsocks proxy on, configured in PAC mode. When I tried to run docker run hello-world, I got this timeout error. When I set the proxy mode to global, the error was still there.
But when I disabled the proxy, Docker ran well and pulled the remote image successfully.
Docker for Windows
Note: Some users reported problems connecting to Docker Hub on the Docker for Windows stable version. This would manifest as an error when trying to run docker commands that pull images from Docker Hub that are not already downloaded, such as a first-time run of docker run hello-world. If you encounter this, reset the DNS server to use the Google DNS fixed address: 8.8.8.8. For more information, see Networking issues in Troubleshooting.
The error Client.Timeout exceeded while awaiting headers indicates one of the following:
The GET request to the registry https://registry-1.docker.io/v2/ timed out
The responsible library (most likely libcurl) timed out before a response was heard
The connection never formed (a proxy/firewall gobbled it up)
If you see the result below, you can rule out the timeout and network connectivity:
$ curl https://registry-1.docker.io/v2/
{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":null}]}
If you get the above response, the next step is to check whether your user environment has some proxy configuration:
env | grep "proxy"
Note: the Docker daemon runs as root, so a proxy set only for your user may not tell the whole story. Maybe you have http_proxy in your env; most likely I am wrong. Anyhow, see what happens with the curl GET request.
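A sketch for checking a daemon-level proxy on Ubuntu 14.04, where Docker is managed by upstart and conventionally reads /etc/default/docker (adjust to your setup):
# a daemon proxy is commonly configured here on 14.04
grep -i proxy /etc/default/docker
# after commenting out any "export http_proxy=..." line, restart the daemon
sudo service docker restart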
Change the proxy settings in Firefox; maybe you are in an access-restricted mode. Just add the server address in Firefox under Settings -> Preferences -> Advanced -> Network -> Connection (Settings). Add the server IP to "No proxy for" and the issue can be resolved.

Ice namespace error

I'm trying to log in to stage1 Bluemix (for IBMers) using the command:
ice --verbose login --user username --psswd password --registry 'registry-ice.bluemix_staging_server'
Once invoked, I'm prompted with:
Namespace(api_key=None, api_url=None, cf=False, cloud=False, host=None, local=False, org=None, psswd='password', reg_host='registry-ice.bluemix_staging_server', skip_docker=False, space=None, subparser_name='login', user='username', verbose=True)
Executing: cf login -u username -p password -a https://api.bluemix_staging_server
API endpoint: https://api.bluemix_staging_server
Authenticating...
OK
Targeted org 'user org'
Select a space (or press enter to skip):
1. dev
2. docker
Once I choose dev or docker, I get the following error:
------------------------
Error response from daemon: Login: You must set a namespace before you login to the registry. See 'ice help namespace' (Code: 404; Headers: map[Server:[nginx] Date:[Tue, 10 Nov 2015 10:54:06 GMT] Content-Type:[text/plain] Content-Length:[84] Connection:[keep-alive]])
docker is not available on your system or is not properly configured
Could not authenticate with cloud registry at registry-ice.bluemix_staging_server
You can still use IBM Containers but will not be able to run local docker containers, push, or pull images
------------------------
I defined a space named 'docker' using the console before logging in.
Any idea what I'm missing here?
Thanks in advance!
You need to set the namespace before doing the login (the namespace is your area within the images registry). It is done just once.
Check the docs here: https://www.ng.bluemix.net/docs/containers/container_cli_ov.html#container_cli_login
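A sketch of setting it via the namespace subcommand the error message points at ('ice help namespace'); the namespace name here is hypothetical:
ice namespace set my_namespace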
If it doesn't ask for a namespace, it means you already have one. Then try: ice login -a api.bluemix_staging_server -H containers-api.bluemix_staging_server/v2/containers -R registry.bluemix_staging_server
Note: replace bluemix_staging_server with the correct hostname you are connecting to.