I am using Azure Kubernetes Service, and I've found that health checks to SQL Server sometimes fail, after which my API responds to every request with code 400.
In that case a simple pod restart usually helps. I thought liveness/readiness probes would handle such a scenario, but they don't.
Any ideas how I might automate pod restarts if this happens again?
Monitor and restart unhealthy Docker containers. This functionality was proposed for inclusion with the addition of HEALTHCHECK, but it didn't make the cut. This container is a stand-in until there is native support for --exit-on-unhealthy: https://github.com/docker/docker/pull/22719
A sample docker run command is:
docker run -d \
--name autoheal \
--restart=always \
-e AUTOHEAL_CONTAINER_LABEL=all \
-v /var/run/docker.sock:/var/run/docker.sock \
willfarrell/autoheal
Alternatively, define the same container in a docker-compose.yml and execute docker-compose up -d on it.
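For reference, a docker-compose.yml equivalent to the run command above might look like this (a sketch; the image, label value and socket mount are taken directly from the run command):

```yaml
version: "2"
services:
  autoheal:
    image: willfarrell/autoheal
    restart: always
    environment:
      # watch all running containers, as in the docker run example
      - AUTOHEAL_CONTAINER_LABEL=all
    volumes:
      # autoheal needs the Docker socket to restart containers
      - /var/run/docker.sock:/var/run/docker.sock
```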
a) Apply the label autoheal=true to your container to have it watched.
b) Set ENV AUTOHEAL_CONTAINER_LABEL=all to watch all running containers.
c) Set ENV AUTOHEAL_CONTAINER_LABEL to an existing label name whose value is true.
Refer to the official documentation at https://hub.docker.com/r/willfarrell/autoheal/ for more details.
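On the Kubernetes side of the original question: if the API exposes a health endpoint, a livenessProbe in the pod spec makes the kubelet restart the container when the probe fails repeatedly. A minimal sketch (the path /healthz and port 80 are assumptions; point it at whatever endpoint actually reflects SQL Server connectivity):

```yaml
livenessProbe:
  httpGet:
    path: /healthz   # assumption: returns non-2xx when SQL Server is unreachable
    port: 80
  initialDelaySeconds: 30  # give the app time to start before probing
  periodSeconds: 10
  failureThreshold: 3      # restart after 3 consecutive failures
```

Note that if the API keeps answering probes with 2xx while returning 400 to real requests, the probe will never fail; the health endpoint has to surface the SQL Server failure for this to work.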
Background: I need to change the Payara Server master password. According to the docs, the master password must match the password in the keystore & truststore for the SSL certificates to work properly, so that my website runs on https instead of http.
I got Payara-Server running in a Docker Container through the guide:
I tried to change the payaradomain master password, but I keep going around in circles:
1. Made sure payaradomain isn't running:
- ./asadmin stop-domain --force=true payaradomain
When I run this command, domain1 gets killed instead, and then I'm kicked out of the Docker container:
./asadmin stop-domain --kill=true payaradomain
When I execute this command:
./asadmin list-domains
Response:
domain1 running
payaradomain not running
Command list-domains executed successfully.
Then tried command:
./asadmin stop-domain --force=true payaradomain
Response:
CLI306: Warning - The server located at /opt/payara41/glassfish/domains/payaradomain is not running.
I'm happy with that, but when I try:
./asadmin change-master-password payaradomain
I get this response:
Domain payaradomain at /opt/payara41/glassfish/domains/payaradomain is running. Stop it first.
Please help...
If you want to configure Payara Server in Docker, including the master password, you should create your own Docker image by extending the default Payara Docker image. This is the simplest Dockerfile:
FROM payara/server-full
# run payaradomain instead of the default domain1
ENV PAYARA_DOMAIN payaradomain
# specify a new master password "newpassword" instead of the default password "changeit"
# (printf instead of echo, so that \n is interpreted portably)
RUN printf 'AS_ADMIN_MASTERPASSWORD=changeit\nAS_ADMIN_NEWMASTERPASSWORD=newpassword\n' >> /opt/masterpwdfile
# execute the asadmin command to apply the new master password
RUN ${PAYARA_PATH}/bin/asadmin change-master-password --passwordfile=/opt/masterpwdfile payaradomain
Then you can build your custom docker image with:
docker build -t my-payara/server-full .
And then run my-payara/server-full instead of payara/server-full.
Also note that with the default Payara docker image, you should specify the PAYARA_DOMAIN variable to run payaradomain instead of domain1, such as:
docker run --env PAYARA_DOMAIN=payaradomain payara/server-full
The sample Dockerfile above redefines this variable so that payaradomain is used by default, without need to specify it when running the container.
Alternative way to change master password
You can alternatively run the Docker image without starting Payara Server: run a bash shell first, perform the necessary commands in the console, and then run the server from the shell.
To do that, you would run the docker image with:
docker run -t -i --entrypoint /bin/bash payara/server-full
The downside of this approach is that the Docker container runs in the foreground, and if you restart it, Payara Server has to be started again manually, so it's really only for testing purposes.
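Once inside that shell, the sequence could look roughly like this (a sketch; asadmin prompts interactively for the current and new master password, and the paths assume the default image layout):

```shell
cd ${PAYARA_PATH}/bin
./asadmin stop-domain payaradomain            # make sure the domain is stopped first
./asadmin change-master-password payaradomain # prompts for the old and new password
./asadmin start-domain payaradomain
```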
The reason you get the messages saying payaradomain is running is that you have started domain1. payaradomain and domain1 use the same ports, and the check to see whether a domain is running looks at whether the admin port for that domain is in use.
In order to change the master password you must either have both domains stopped or change the admin port for payaradomain.
Instead of echoing passwords in the Dockerfile, it is safer to COPY a file containing the passwords during the build and remove it when the build is finished.
I want to use InfluxDB and Grafana in a Docker environment to show time-series data from JMeter.
I tried the set up from this post: http://www.testautomationguru.com/jmeter-real-time-results-influxdb-grafana/
and the only difference here is that I'm in a Docker environment. So I set up the InfluxDB configuration from the information given on Docker Hub (https://hub.docker.com/_/influxdb/):
I changed the configuration file like this:
and type:
"$ docker run -p 8086:8086 \
-v $PWD/influxdb.conf:/etc/influxdb/influxdb.conf:ro \
influxdb -config /etc/influxdb/influxdb.conf"
in termianl,
And finally, when I want to get the data from localhost:8083, enter the database jmeter, and type "SHOW MEASUREMENTS", nothing shows up.
What might be the reason here?
Port 8086 is used by the HTTP API to add data. If you use the graphite protocol, port 2003 should be enabled and mapped, so
docker run -p 8086:8086 -p 2003:2003 ...
will work.
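For the graphite protocol to be accepted on port 2003, the graphite listener also has to be enabled in influxdb.conf, roughly like this (the database name jmeter is an assumption based on the question):

```
[[graphite]]
  enabled = true
  bind-address = ":2003"
  database = "jmeter"
```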
Also check the JMeter Backend Listener settings: the host should be the IP of the InfluxDB container with the matching port; it shouldn't be localhost.
I've created an ACS instance:
az acs create --orchestrator-type=kubernetes \
--resource-group $group \
--name $k8s_name \
--dns-prefix $kubernetes_server \
--generate-ssh-keys
az acs kubernetes get-credentials --resource-group $group --name $k8s_name
After running helm init, the tiller pod was provisioned fine. I then ran helm install stable/redis and got a Redis deployment up and running (seemingly).
I can kubectl exec -it into the Redis pod and can see it's binding on 0.0.0.0; I can log in with redis-cli -h localhost and redis-cli -h <pod_ip>, but not redis-cli -h <service_ip> (from kubectl get svc).
If I run up another pod (which is how I ran into this issue) I can ping redis.default and it shows the DNS resolving to the correct service IP but gives no response. When I telnet <service_ip> 6379 or redis-cli -h <service_ip> it hangs indefinitely.
I'm at a bit of a loss as to how to debug further. I can't ssh into the node to see what docker is doing.
Also, I'd initially tried this with a standard Alpine Redis image, so the Helm chart was a fallback. When I tried it yesterday the Helm one worked but the manual one didn't; today (on a newly built ACS cluster) neither works.
I'm going to spin up the cluster again to see if it reproduces reliably, but I'm pretty confident something fishy is going on.
PS - I have a VNet with an overlapping subnet 10.0.0.0/16 in a different region; when I go into the address range I do get a warning that there is a clash. Could that affect it?
<EDIT>
Some new insight... It's something to do with alpine based images (which we've been aiming to use)...
So with kubectl run a --image=nginx (which is Ubuntu based) I can shell in, install telnet, and connect to the Redis service.
But with, e.g., kubectl run c --image=rlesouef/alpine-redis, after shelling in, telnet doesn't work against the same Redis service.
</EDIT>
There was a similar issue (https://github.com/Azure/acs-engine/issues/539) that was fixed recently. One thing to verify is whether nslookup works in the container.
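A quick way to check that from outside (the pod name is a placeholder; the service name redis.default is taken from the question):

```shell
kubectl exec -it <pod_name> -- nslookup redis.default
```

If the lookup fails only in Alpine-based pods, that points at the musl DNS behaviour discussed in the linked issue rather than at the Redis service itself.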
I've installed Docker on CentOS7, now I try to launch the server in a Docker container.
$ docker run -d --name "openshift-origin" --net=host --privileged \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /tmp/openshift:/tmp/openshift \
openshift/origin start
This is the output:
Post http:///var/run/docker.sock/v1.19/containers/create?name=openshift-origin: dial unix /var/run/docker.sock: permission denied. Are you trying to connect to a TLS-enabled daemon without TLS?
I have tried the same command with sudo, and that works fine (I can also run images in the OpenShift bash, etc.), but it feels wrong to use it, am I right? What is a solution to make it work as a normal user?
Docker is running (sudo service docker start). Restarting the CentOS did not help.
The error is:
/var/run/docker.sock: permission denied.
That seems pretty clear: the permissions on the Docker socket at /var/run/docker.sock do not permit you to access it. This is reasonably common, because handing someone access to the Docker API is effectively the same as giving them sudo privileges, but without any sort of auditing.
If you are the only person using your system, you can:
Create a docker group or similar if one does not already exist.
Make yourself a member of the docker group
Modify the startup configuration of the docker daemon to make the socket owned by that group by adding -G docker to the options. You'll probably want to edit /etc/sysconfig/docker to make this change, unless it's already configured that way.
With these changes in place, you should be able to access Docker from your user account without requiring sudo.
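The first two steps might look like this on CentOS (a sketch; both commands require root, and you need to log out and back in for the new group membership to take effect):

```shell
sudo groupadd docker           # create the group if it does not already exist
sudo usermod -aG docker $USER  # add your user to the docker group
```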
I have a question regarding Docker. The container concept is totally new to me and I'm sure I haven't grasped yet how things work (containers, Dockerfiles, ...) and how they could work.
Let's say, that I would like to host small websites on the same VM that consist of Apache, PHP-FPM, MySQL and possibly Memcache.
This is what I had in mind:
1) One image that contains Apache, PHP, MySQL and Memcache
2) One or more images that contains my websites files
I must find a way to tell Apache, in my first image, where the website folders of the hosted sites are stored. Yet I don't know whether the first container can read files inside another container.
Anyone here did something similar?
Thank you
Your container setup should be:
MySQL Container
Memcached Container
Apache, PHP etc
Data Container (optional)
Run MySQL and expose its port using the -p flag:
docker run -d --name mysql -p 3306:3306 dockerfile/mysql
Run Memcached
docker run -d --name memcached -p 11211:11211 borja/docker-memcached
Run your web container and mount the web files from the host file system into the container; they will be available at /container_fs/web_files/ inside the container. Link to the other containers to be able to communicate with them over TCP.
docker run -d --name web -p 80:80 \
-v /host_fs/web_files:/container_fs/web_files/ \
--link mysql:mysql \
--link memcached:memcached \
your/docker-web-container
Inside your web container, look for the environment variables MYSQL_PORT_3306_TCP_ADDR and MYSQL_PORT_3306_TCP_PORT to tell you where to connect to the MySQL instance, and similarly MEMCACHED_PORT_11211_TCP_ADDR and MEMCACHED_PORT_11211_TCP_PORT for memcached.
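A small sketch of reading those link variables in a shell script inside the web container (the fallback values after :- are illustrative defaults of my own, not something Docker sets):

```shell
#!/bin/sh
# Use the addresses injected by --link, falling back to localhost defaults
MYSQL_HOST="${MYSQL_PORT_3306_TCP_ADDR:-127.0.0.1}"
MYSQL_PORT="${MYSQL_PORT_3306_TCP_PORT:-3306}"
MEMCACHED_HOST="${MEMCACHED_PORT_11211_TCP_ADDR:-127.0.0.1}"
MEMCACHED_PORT="${MEMCACHED_PORT_11211_TCP_PORT:-11211}"
echo "mysql at ${MYSQL_HOST}:${MYSQL_PORT}, memcached at ${MEMCACHED_HOST}:${MEMCACHED_PORT}"
```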
The idiomatic way of using Docker is to try to keep to one process per container. So, Apache and MySQL etc should be in separate containers.
You can then create a data-container to hold your website files and simply mount the volume in the Webserver container using --volumes-from. For more information see https://docs.docker.com/userguide/dockervolumes/, specifically "Creating and mounting a Data Volume Container".
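A sketch of that data-container pattern (the container names, volume path and web image are illustrative, not taken from the question):

```shell
# data container: exists only to declare the volume holding the web files
docker create --name web_data -v /var/www busybox
# the web server container mounts every volume declared by web_data
docker run -d --name web -p 80:80 --volumes-from web_data your/docker-web-container
```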