How to time the execution of a process in a Docker container?

I want to time the execution of a process in a Docker container.
I tried calculating FinishedAt - StartedAt from docker inspect, but the result isn't exact.
I don't want to run time inside the container.
How can I time it exactly?
EDIT:
The process I want to time is the one given as the cmd parameter of docker create.

The following creates a container that, when started, will wait for two seconds. We then start the container, timing the overall run. It shows that the process execution overhead is about 0.3 seconds.
Create a container
$ docker create -ti ubuntu:12.04 sleep 2
785e9a63629b10676672656bc8412840faa6f00fc83e521628b0f9ca9ba01e14
Time the container run
$ time docker start -i 785
real 0m2.329s
user 0m0.064s
sys 0m0.016s
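If you do want to compute the FinishedAt - StartedAt delta from docker inspect programmatically, the main wrinkle is that Docker reports RFC 3339 timestamps with nanosecond precision, which Python's %f (microseconds) cannot parse directly. A minimal sketch, with hypothetical timestamp values standing in for real docker inspect output:

```python
from datetime import datetime, timezone

def parse_docker_ts(ts: str) -> datetime:
    # Docker reports nanosecond precision; Python's %f only accepts up to
    # microseconds, so truncate the fractional part before parsing.
    base, frac = ts.rstrip("Z").split(".")
    return datetime.strptime(
        f"{base}.{frac[:6]}", "%Y-%m-%dT%H:%M:%S.%f"
    ).replace(tzinfo=timezone.utc)

# Hypothetical values, as docker inspect -f '{{.State.StartedAt}}' etc.
# would report them:
started = parse_docker_ts("2021-05-01T12:00:00.100000000Z")
finished = parse_docker_ts("2021-05-01T12:00:02.430000000Z")
print((finished - started).total_seconds())  # 2.33
```

As the question notes, this measures the container's recorded lifetime, not the process alone, so the `time docker start -i` approach above is closer to what you actually pay.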

Related

Delay in application submission from Oozie and Yarn

We are running an Oozie workflow which has a Shell action and a Spark action, i.e. a shell script and a Spark job that run in sequence.
Running a single workflow:
Total: 3 mins
Shell action: 50 secs
Spark job: 2 mins
The rest of the time goes into initialization by Oozie and container allocation by YARN, which is absolutely fine.
Use case:
We are supposed to run 700 instances of the same workflow at once (split by region, zone and area, which is a business requirement).
When running the 700 instances of the same workflow, we are noticing a delay in the completion of the 700 workflows, although we have scaled the cluster linearly. We expect the 700 workflows to complete in 3 mins, or at least by 5 mins, but this is not the case. There is a delay of 5 mins to launch all 700 workflows, which is fine too; by that measure they should complete within 10 mins, but they do not.
What is actually happening is that when 700 workflows are submitted, it takes around 5-6 mins to launch all the workflows from Oozie (we are OK with this). The overall time to complete the 700 workflows is around 30 mins, which means some workflows that kick-started at 7:00 would complete at 7:30. But the time taken by the actions remains the same: the shell action still takes 50s and the Spark job takes 2-3 mins to complete. We are noticing a delay in starting the shell action and the Spark job even though Oozie has already moved the workflow into the PREP state.
What we have checked so far:
Initially we thought it had to do with Oozie and worked on its configurations.
Later we suspected YARN and tuned some of its configurations.
We also created separate queues, running shell and launcher jobs in one queue and Spark jobs in another.
We have gone through the YARN and Oozie logs too.
Can someone throw some light on this?

Why is GitLab docker-windows executor so slow?

When I run a job on a completely new git repository, containing only README.md and .gitlab-ci.yml, using the standard shell executor in GitLab, the whole job takes 4 seconds. When I do the same using the docker-windows executor, it takes 33 seconds!
My .gitlab-ci.yml:
no_git_nor_submodules:
  image: base_on_python36:ltsc2019
  stage: build
  tags:
    - docker-windows
  variables:
    GIT_SUBMODULE_STRATEGY: none
    GIT_STRATEGY: none
  script:
    - echo test

no_docker_no_git_nor_submodules:
  stage: build
  tags:
    - normal_runner
  variables:
    GIT_SUBMODULE_STRATEGY: none
    GIT_STRATEGY: none
  script:
    - echo test
One potential problem I considered is that Docker images on Windows tend to be huge. The one I've tested with here is 5.8 GB. When I start a container manually on the server, it takes just a few seconds to start. I have also tested with an even larger image, 36 GB, but a job using that image also takes around 33 seconds.
As these jobs don't do anything and don't have any git clone or submodules, what is it that takes so much time?
I know that GitLab uses a mysterious helper image for cloning the git repository and for other things like that. Could it be this image that makes it so slow to run?
Update 2019-11-04
I looked a bit more at this, using docker events. It showed that GitLab starts a total of 7 containers: 6 of their own helper image and one of the image I've defined in .gitlab-ci.yml. Each of these containers takes around 5 seconds to create, run, and destroy, which explains the time. The only question now is whether this is normal behavior for the docker-windows executor, or if I have something set up wrong that makes it this slow.
Short answer: Docker on Windows has a high overhead when starting new containers, and GitLab uses 7 containers per job.
I opened an issue on GitLab here, but I'll post part of my text from there as well:
I looked a bit more at this now, and I think I have figured out at least part of what is going on. There's a command you can run, docker events. It prints every command executed against Docker: creating and destroying containers, volumes, etc. I ran this command and then started a simple job using the docker-windows executor. The output looks like this (cleaned up and filtered a bit):
2019-11-04T16:19:02.179255700+01:00 container create image=sha256:6aff8da9cd6b656b0ea3bd4e919c899fb4d62e5e8ac95b876eb4bfd340ed8345, name=runner-Q1iF4bKz-project-305-concurrent-0-predefined-0)
2019-11-04T16:19:07.217784200+01:00 container create image=sha256:6aff8da9cd6b656b0ea3bd4e919c899fb4d62e5e8ac95b876eb4bfd340ed8345, name=runner-Q1iF4bKz-project-305-concurrent-0-predefined-1)
2019-11-04T16:19:13.190800700+01:00 container create image=sha256:6aff8da9cd6b656b0ea3bd4e919c899fb4d62e5e8ac95b876eb4bfd340ed8345, name=runner-Q1iF4bKz-project-305-concurrent-0-predefined-2)
2019-11-04T16:19:18.183059500+01:00 container create image=sha256:6aff8da9cd6b656b0ea3bd4e919c899fb4d62e5e8ac95b876eb4bfd340ed8345, name=runner-Q1iF4bKz-project-305-concurrent-0-predefined-3)
2019-11-04T16:19:23.192798200+01:00 container create image=sha256:b024a0511db77bf777cee287927151584f49a4018798a2bb1aa31332b766cf14, name=runner-Q1iF4bKz-project-305-concurrent-0-build-4)
2019-11-04T16:19:26.221921000+01:00 container create image=sha256:6aff8da9cd6b656b0ea3bd4e919c899fb4d62e5e8ac95b876eb4bfd340ed8345, name=runner-Q1iF4bKz-project-305-concurrent-0-predefined-5)
2019-11-04T16:19:31.239818900+01:00 container create image=sha256:6aff8da9cd6b656b0ea3bd4e919c899fb4d62e5e8ac95b876eb4bfd340ed8345, name=runner-Q1iF4bKz-project-305-concurrent-0-predefined-6)
There are 7 containers created in total, 6 of which use the GitLab helper image. Notice how it takes around 5 seconds per helper container created. 6 * 5 seconds = 30 seconds, so about the extra overhead I've noticed.
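The per-container gaps can be checked directly from the event timestamps in the log above; a small Python sketch (timestamps copied verbatim from the docker events output):

```python
from datetime import datetime

# Timestamps of the seven "container create" events from the log above.
stamps = [
    "2019-11-04T16:19:02.179255700+01:00",
    "2019-11-04T16:19:07.217784200+01:00",
    "2019-11-04T16:19:13.190800700+01:00",
    "2019-11-04T16:19:18.183059500+01:00",
    "2019-11-04T16:19:23.192798200+01:00",
    "2019-11-04T16:19:26.221921000+01:00",
    "2019-11-04T16:19:31.239818900+01:00",
]

def parse(ts: str) -> datetime:
    # Docker logs nanoseconds; %f takes at most microseconds, so truncate
    # the fractional part while keeping the +01:00 timezone offset.
    head, rest = ts.split(".")
    frac, tz = rest[:9], rest[9:]
    return datetime.strptime(f"{head}.{frac[:6]}{tz}", "%Y-%m-%dT%H:%M:%S.%f%z")

# Gap between each pair of consecutive "container create" events.
gaps = [(parse(b) - parse(a)).total_seconds() for a, b in zip(stamps, stamps[1:])]
print([round(g, 1) for g in gaps])  # [5.0, 6.0, 5.0, 5.0, 3.0, 5.0]
print(round(sum(gaps), 1))          # 29.1 seconds from first to last create
```

So roughly 29 seconds elapse between the first and the last container creation, matching the ~30 seconds of observed overhead.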
I re-tested the performance 5 months ago: our shell executor takes 2 seconds just to echo a message, while the docker-windows executor takes 21 seconds for the same job. The overhead is less than it was two years ago, but still significant.

TFS - 'Run SSH task' option times out

In TFS, I am using the SSH task with the 'Commands' option to connect to a remote machine and run a set of commands. I cd into a particular folder and run a shell script using 'sh '.
This script usually takes around 2 hours to finish. The SSH task times out after 15 minutes and exits. But when I check the machine manually, the process is still running.
Why doesn't the SSH task wait until the script finishes completely?
According to your description, you may have encountered the timeout limitation of the SSH task or of the build definition.
First, please double-check the timeout setting under Control Options.
Specifies the maximum time, in minutes, that a task is allowed to
execute before being cancelled by server. A zero value indicates an
infinite timeout.
Another place to check is the build job timeout, under the settings of your build definition: Options -> Build job timeout in minutes.
Specifies the maximum time a build job is allowed to execute on an
agent before being canceled by the server.
An empty or zero value indicates an infinite timeout.
If both are set properly and you still get the timeout, please attach a more detailed log of the failed build with verbose debug mode enabled, by setting system.debug=true, for troubleshooting.
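If your build definition is YAML-based rather than designer-based, the same per-task limit can be set with timeoutInMinutes. A sketch under that assumption (the endpoint name, folder and script are hypothetical placeholders):

```yaml
steps:
- task: SSH@0
  timeoutInMinutes: 0        # 0 = no per-task time limit
  inputs:
    sshEndpoint: myEndpoint  # hypothetical SSH service connection name
    runOptions: commands
    commands: |
      cd /opt/myapp          # hypothetical folder from the question
      sh run_all.sh          # hypothetical long-running script
```

Remember that the job-level timeout still applies on top of the task-level one, so both need to allow for the 2-hour run.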

How to run forever without sudo on EC2

The title pretty much tells what the question is about. I am trying to use forever to start a script on EC2, but it does not work unless I use sudo.
If I start without sudo, I get
warn: --minUptime not set. Defaulting to: 1000ms
warn: --spinSleepTime not set. Your script will exit if it does not stay up for at least 1000ms
info: Forever processing file: ci.js
But when I do forever list
info: No forever processes running
You should run forever list under the same user you used to start forever (it seems like you are doing that right).
Try checking ps aux | grep node after you do forever start. Maybe you haven't started any process (because of errors on the command line or in your NodeJS file), so forever list returns an empty list.
P.S. I've checked forever on my machine and it behaves exactly as you said: if I run it under my 'ubuntu' user, the list of running processes is empty even though the process is alive... Seems like a bug in forever.

Dockerfile : RUN results in a No op

I have a Dockerfile in which I'm trying to run a daemon that starts a Java process.
If I embed the script in the Dockerfile, like so:
RUN myscript.sh
then when I run /bin/bash in the resulting container, I see no entries from jps.
However, I can easily embed the script as CMD, in which case, when I issue
docker run asdfg
I see the process start normally.
So, my question is: when we start a background async process in a Dockerfile, is it always the case that its side effects will be excluded from the container?
Background processes need to be started at container start, not at image build. A RUN instruction executes in a temporary container during the build, and only its filesystem changes are committed to the image; any processes it started are gone by the time you run the image.
So your script needs to run via CMD or ENTRYPOINT.
CMD or ENTRYPOINT can still be a script containing multiple commands. But I would imagine in your case, if you want several background processes, that using, for example, supervisord would be your best option.
Also, check out some already existing Dockerfiles to get an idea of how it all fits together.
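A minimal Dockerfile sketch of the difference, assuming myscript.sh is the daemon-starting script from the question and that it runs in the foreground (a container exits when its main process does):

```dockerfile
FROM ubuntu:12.04
COPY myscript.sh /usr/local/bin/myscript.sh
# RUN /usr/local/bin/myscript.sh
#   would execute at build time: the process dies when this build step's
#   temporary container is committed and discarded, so only filesystem
#   changes survive into the image.
# CMD runs at container start, so the process lives as long as the container:
CMD ["/usr/local/bin/myscript.sh"]
```

With several background processes, the CMD would instead launch a supervisor (such as supervisord) that keeps them all running in the foreground of the container.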