In Docker, is there a way to retrieve the image or layer UUID? - docker-image

I am looking for a way to retrieve the Docker container image UUID and the layer UUIDs. I saw the commands 'docker images' and 'docker history', but they do not work on an individual image. Is there a command to do this?

You can find more information about an image if you perform the following commands:
$ docker images
mysql latest 2fd136002c22 8 weeks ago 378.4 MB
Inspect the image ID or image name:
$ docker inspect 2fd136002c22
output:
"RootFS": {
"Type": "layers",
"Layers": [
"sha256:4dcab49015d47e8f300ec33400a02cebc7b54cadd09c37e49eccbc655279da90",
"sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
"sha256:47bce276c5783a6cfc88e0ac368af70909144d04780222d134090dbf08f897aa",
"sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
"sha256:093c117bc4d3e1cd6e597a89b1648ebb3543be581c61aba80fc41ff6f7ae8e6d",
"sha256:1028156f10f1a0f79dba5be05e935d5f4588ebe7c25a3581843f7a759a2d7bfb",
...
and a lot more information like:
"Id": "sha256:2fd136002c22c9017ea24544fc15810aad7d88ab9d53da1063d2805ba0f31e9a",
"Volumes": {
"/var/lib/mysql": {}
...
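If you only need the identifiers themselves, a minimal sketch using docker inspect's Go-template formatting (reusing the image ID 2fd136002c22 from the example above) could be:
$ docker inspect --format '{{.Id}}' 2fd136002c22
$ docker inspect --format '{{range .RootFS.Layers}}{{println .}}{{end}}' 2fd136002c22
The first command prints the full "sha256:..." image ID, the second prints the layer digests one per line; docker images --no-trunc likewise shows untruncated image IDs.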

This is how I did it with a running container:
$ docker exec youthful_pike cat /sys/class/dmi/id/product_uuid
E2F747AF-0000-0000-AA77-B856C9D179D8
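If instead you want to map a running container back to the ID of the image it was created from, a small hedged sketch (reusing the container name youthful_pike from above) would be:
$ docker inspect --format '{{.Image}}' youthful_pike
This prints the full sha256 image ID of the container's image.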

Related

GitLab CI/CD: How to enter a container for testing, i.e. get an interactive shell

In Docker we can enter a container and get an interactive shell with:
docker-compose exec containername /bin/bash
Similarly, can we enter the container from the script in GitLab CI/CD, so that it gives us an interactive shell?
Eg:
build:
  stage: build
  script:
    - pwd; ls -al
HERE I WANT TO HAVE AN INTERACTIVE SHELL SO THAT I CAN CHECK A FEW THINGS
I think we need to take a small detour here and explain how jobs work in GitLab CI.
Each job is an encapsulated Docker container. The container only executes what you ask it to execute within the script directive. By default, jobs on shared runners use a Ruby container image.
If you want to check what you have available within your image, or you want to try things out locally, you can do so by running a container with this image locally and mounting your project folder into it.
docker run --rm -v "$(pwd):/build/project" -w "/build/project" -it <the job image> /bin/bash # or /bin/sh or whatever shell is available in the image.
# -v mounts the current directory into /build/project in your container
# -w changes the working directory to the mount point
# /bin/bash starts the shell; there might be other shells available in the image
If you want to use a different Docker image, let's say because you are running some other build tool, you can specify this with the image directive, like:
build:
  image: maven:latest
  script:
    - echo "some output"
You then have the functionality provided by the image available within your job, as the job will run within a container of that image.
You can even use some tools like https://github.com/firecow/gitlab-ci-local to verify this locally. But in the end those are just docker images, and you can easily recreate the flow on your own.
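As an illustration only (assuming the maven:latest image from the snippet above and the script line from the example job), reproducing a job's script locally could look like:
# run the job's script inside the same image GitLab would use for the job
docker run --rm -v "$(pwd):/build/project" -w "/build/project" maven:latest \
  /bin/sh -c 'pwd; ls -al'
This is only a sketch of the flow, not how the runner itself invokes the job.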

openthread/environment docker rsyslogd: imklog: cannot open kernel log (/proc/kmsg): Operation not permitted

I am running the openthread/environment:latest Docker image (as of 2019-06-15).
When starting it on a fresh Ubuntu 18.04 with Docker 18.09 using the command
ubuntu@ip-172-31-37-198:~$ docker run -it --rm openthread/environment bash
I get the following output
Stopping system message bus dbus [ OK ]
Starting system message bus dbus [ OK ]
Starting enhanced syslogd rsyslogd
rsyslogd: imklog: cannot open kernel log (/proc/kmsg): Operation not permitted
rsyslogd: activation of module imklog failed [v8.32.0 try http://www.rsyslog.com/e/2145 ]
Does anyone know whether this is related to the Ubuntu setup or to the Docker container, and how to fix it?
@Reto's answer will work, but you will be editing that file every time you recreate your container. Put this in your Dockerfile and you're all set; the edit will be performed automatically while the image is being built.
RUN sed -i '/imklog/s/^/#/' /etc/rsyslog.conf
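For context, a minimal Dockerfile sketch (assuming you build your own image on top of openthread/environment) that bakes in this fix:
FROM openthread/environment:latest
# comment out the imklog module so rsyslogd no longer tries to open /proc/kmsg
RUN sed -i '/imklog/s/^/#/' /etc/rsyslog.conf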
You will also get rid of this warning if you just comment out the line
module(load="imklog")
inside your Docker container (edit /etc/rsyslog.conf).
I doubt you want to read the kernel messages inside a container ;-)
Try adding the --privileged option.
For example:
docker run -it --rm --privileged openthread/environment bash

MobileFirst Analytics dashboard does not show any data

Our MFP Analytics dashboard was working fine until last week. There is no data shown in the dashboard. Restarting the server does not seem to help either. The cluster status on the server is RED. What can I do to resolve this?
I've learned that when the cluster status is red, it is likely due to one or more unassigned shards. The following commands are quite handy and solved my issue:
To find the list of all shards, use this CURL command:
curl -XGET http://localhost:9500/_cat/shards
To find the list of unassigned shards, use this CURL command:
curl -XGET http://localhost:9500/_cat/shards | grep UNASSIGNED
Once you get the list of unassigned shards, you can initialize them
with the following command:
for shard in $(curl -XGET http://localhost:9500/_cat/shards | grep UNASSIGNED | awk '{print $2}'); do
  curl -XPOST 'localhost:9500/_cluster/reroute' -d '{
    "commands" : [ {
      "allocate" : {
        "index" : "worklight",
        "shard" : '$shard',
        "node" : "worklightNode_1234",
        "allow_primary" : true
      }
    } ]
  }'
  sleep 5
done
You need to replace 'node' with the name of the node that the shards should be allocated to (in my case, worklightNode_1234). You can find the node name in the output from step 1.
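To pull that node name out programmatically, one possible (hedged, untested) helper based on the default _cat/shards columns, where the node name is the last column for assigned shards:
curl -s -XGET 'http://localhost:9500/_cat/shards' | awk '$1 == "worklight" && $4 == "STARTED" {print $8}' | sort -u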
Run the following command to check the status:
curl -XGET http://localhost:9500/_cluster/health?pretty
The server status should be green when all the shards are initialized and assigned.
On MobileFirst Platform 7.1, I solved the problem by changing the configuration in the server.xml file of the Analytics server, reducing the number of shards:
<jndiEntry jndiName="analytics/shards" value="5" />
<jndiEntry jndiName="analytics/replicas_per_shard" value="1" />
For default values refer to: http://www.ibm.com/support/knowledgecenter/SSHSCD_7.1.0/com.ibm.worklight.monitor.doc/monitor/c_op_analytics_properties.html

How to disable elasticsearch 5.0 authentication?

I've just started to use Elasticsearch. I created an index with default settings (5 shards, 1 replica), then indexed ~13G of text files with the attachment plugin. As a result, searching in Kibana Discover became very slow. However, searching from the console is fast:
GET /mytext/_search
{
  "fields": [ "file.name" ],
  "query": {
    "match": {
      "file.content": "foobar"
    }
  },
  "highlight": {
    "fields": {
      "file.content": {}
    }
  }
}
To investigate why it is so slow, I installed X-Pack. The documentation does not seem comprehensive; I did not fully get the security config.
A default install of Elasticsearch does not require logging in, but it does once the X-Pack plugin is installed. I'm confused by the security settings of Elasticsearch, Kibana and X-Pack; do they share the same user accounts? Anyway, I got authentication working with:
curl -XPUT -uelastic:changeme 'localhost:9200/_shield/user/elastic/_password' -d '{ "password" : "newpass1" }'
curl -XPUT -uelastic:newpass1 'localhost:9200/_shield/user/kibana/_password' -d '{ "password" : "newpass2" }'
Here comes the problem: I can't log in using the Java client with org.elasticsearch.plugin:shield. It is likely that the latest version of the shield dependency (2.3.3) is mismatched with the elasticsearch dependency (5.0.0-alpha).
Well, can I just disable the authentication?
From the node config:
GET http://localhost:9200/_nodes
"nodes" : {
"v_XmZh7jQCiIMYCG2AFhJg" : {
"transport_address" : "127.0.0.1:9300",
"version" : "5.0.0-alpha2",
"roles" : [ "master", "data", "ingest" ],
...
"settings" : {
"node" : {
"name" : "Apache Kid"
},
"http" : {
"type" : "security"
},
"transport" : {
"type" : "security",
"service" : {
"type" : "security"
}
},
...
So, can I modify these settings, and what are the possible values?
In a test environment I added the following option to elasticsearch.yml, and/or kibana.yml
xpack.security.enabled: false
If you run Elasticsearch in Docker you can do this (assuming your container name is elasticsearch; you can use the container ID if you prefer).
Get a shell inside the container:
docker exec -i -t elasticsearch /bin/bash
Then remove X-Pack:
elasticsearch-plugin remove x-pack
Exit the container:
exit
And restart the container:
docker restart elasticsearch
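The same steps condensed into two commands (a sketch; it assumes elasticsearch-plugin is on the PATH inside the image, which appears to be the case for the official 5.x images):
docker exec -it elasticsearch elasticsearch-plugin remove x-pack
docker restart elasticsearch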
Disclaimer: solution inspired by Michał Dymel.
When using Docker (in local dev), instead of removing X-Pack you can simply disable it:
docker pull docker.elastic.co/elasticsearch/elasticsearch:5.5.3
docker run -p 9200:9200 \
-p 9300:9300 \
-e "discovery.type=single-node" \
-e "xpack.security.enabled=false" \
docker.elastic.co/elasticsearch/elasticsearch:5.5.3
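As a quick check (hedged: with xpack.security.enabled=false, the root endpoint should answer without credentials):
curl http://localhost:9200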
I've managed to get in by setting xpack.security.enabled to false, but I'm still getting some authentication errors in my Kibana log.
elasticsearch:
  image: elasticsearch:1.7.6
  ports:
    - ${PIM_ELASTICSEARCH_PORT}:9200
    - 9300:9300
kibana:
  image: docker.elastic.co/kibana/kibana:5.4.1
  environment:
    SERVER_NAME: localhost
    ELASTICSEARCH_URL: http://localhost:9200
    XPACK_SECURITY_ENABLED: 'false'
  ports:
    - 5601:5601
  links:
    - elasticsearch
  depends_on:
    - elasticsearch
This is my current setup. In Kibana I can see some errors:
[Kibana dashboard screenshot]
In the Kibana logs I can see:
kibana_1 | {"type":"log","@timestamp":"2017-06-15T07:43:41Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
kibana_1 | {"type":"log","@timestamp":"2017-06-15T07:43:42Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://localhost:9200/"}
So it seems it is still trying to connect using authentication.
I had the same X-Pack issue, but with Kibana; I fixed it with the following command:
docker run docker.elastic.co/kibana/kibana:5.5.1 /bin/bash -c 'bin/kibana-plugin remove x-pack ; /usr/local/bin/kibana-docker'
It starts the container, removes X-Pack, and then starts the normal Kibana process. The same can be done with Elasticsearch and Logstash.
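If you use docker-compose instead, a hypothetical adaptation of the same trick (untested sketch; the image tag and kibana-docker entrypoint path are taken from the command above):
kibana:
  image: docker.elastic.co/kibana/kibana:5.5.1
  command: /bin/bash -c 'bin/kibana-plugin remove x-pack ; /usr/local/bin/kibana-docker'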

Bluemix: service bound to container does not appear in VCAP_SERVICES

I'm trying to use IBM Containers for Bluemix to deploy a container and bind it to a Bluemix service.
I start with an existing Bluemix app, which is bound to the MongoDB service I want. I verify that its VCAP_SERVICES environment variable is correctly populated:
$ cf env mamacdon-app
Getting env variables for app mamacdon-app in org mamacdon@ca.ibm.com / space dev as mamacdon@ca.ibm.com...
OK
System-Provided:
{
  "VCAP_SERVICES": {
    "mongodb-2.4": [
      {
        "credentials": { /*private data hidden*/ },
        "label": "mongodb-2.4",
        "name": "mongodb-1a",
        "plan": "100",
        "tags": [ "nosql", "document", "mongodb" ]
      }
    ]
  }
...
Then I run my image in Bluemix using the ice command, with the --bind mamacdon-app argument to bind it to my CF app:
# --ssh and --publish 22 are there for SSH access
$ ice run --name sshparty \
    --bind mamacdon-app \
    --ssh "$(cat ~/.ssh/id_rsa.pub)" \
    --publish 22 \
    registry-ice.ng.bluemix.net/ibmliberty:latest
As the name suggests, the image is a trivial example based on the IBM Websphere Liberty docker image -- just enough to let me SSH in and poke around.
At this point, the Containers dashboard tells me that the service has been bound to my container.
But when I finally ssh into the container, the environment does not contain the VCAP_SERVICES variable:
$ ssh -i ~/.ssh/id_rsa root@129.41.232.212
root@instance-000123e2:~# env
TERM=xterm
SHELL=/bin/bash
SSH_CLIENT=[private data hidden]
SSH_TTY=/dev/pts/0
USER=root
LS_COLORS=[omitted]
MAIL=/var/mail/root
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/root
LANG=en_CA.UTF-8
SHLVL=1
HOME=/root
LOGNAME=root
SSH_CONNECTION=[private data hidden]
LESSOPEN=| /usr/bin/lesspipe %s
LESSCLOSE=/usr/bin/lesspipe %s %s
_=/usr/bin/env
root@instance-000123e2:~#
I expected the VCAP_SERVICES variable to be injected. What am I doing wrong?
I think there is an issue with the way the ssh daemon is getting launched where it does not have visibility to the VCAP_SERVICES environment variable.
However, you can confirm that the container's command will see the variable with following test:
ice run registry-ice.ng.bluemix.net/ibmliberty --bind mamacdon-app --name vcap_services_party printenv; sleep 60
Then, confirm it in the printenv output with ice logs vcap_services_party
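If you do want to check from an SSH session anyway, one possible workaround (my own assumption, not from the original answer: it relies on PID 1 being the container's main process, which does receive the injected variables) is to read that process's environment directly:
# inside the SSH session: print PID 1's environment, one variable per line
cat /proc/1/environ | tr '\0' '\n' | grep VCAP_SERVICES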
Could you give the following a try:
ice run registry-ice.ng.bluemix.net/lintest/tradelite --bind yourappname --name yournewcontainername
Once the image comes up run the following.
# echo $VCAP_SERVICES
For more info check out the Containers Docs.