When I run a Fargate task, 99% of the time it shows an exit code and has logs in CloudWatch.
However, every now and again a task shows no exit code and has no logs in CloudWatch. It appears as though the task was never run, and there is no explanation for why it was not run.
Is it normal to have no logs or exit code? Is there any way I can find out why this task was not run?
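One way to dig further, assuming you still have the task's ARN, is to ask the ECS API for the task's stopped reason, which is recorded even when the container never started and therefore never wrote logs. A sketch with the AWS CLI (the cluster name and task ARN below are placeholders):

# Ask ECS why the task stopped (populated even when no logs were produced).
aws ecs describe-tasks \
  --cluster my-cluster \
  --tasks arn:aws:ecs:us-east-1:123456789012:task/my-cluster/0123456789abcdef0 \
  --query 'tasks[].{stoppedReason:stoppedReason,containerReasons:containers[].reason}'

Note that ECS only returns stopped tasks for a short while (roughly an hour) after they stop, so this has to run soon after the failure; image-pull and resource-initialization errors are the kind of thing that tends to surface here.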
I have a CI job that ran last week.
Is there a way to find out exactly when it finished? I am trying to debug a problem that we just noticed, and knowing if the job finished at 9:00am or 9:06am or 6:23pm a week ago would be useful information.
The output from the job does not appear to indicate what time it started or stopped. When I asked Google, I got information about how to run jobs in serial or parallel and how to create CI jobs, but nothing about getting the time of a job.
For the future, I could put date into script or before_script, but that is not going to help with this job.
This is on a self-hosted GitLab instance. I am not sure of the version or which optional settings have been enabled.
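For what it's worth, past jobs do carry timestamps even when the log output doesn't show them: the GitLab Jobs API returns started_at, finished_at, and duration for a job. A sketch, assuming API access is available (the host, project ID, job ID, and token are placeholders):

# Fetch the job's metadata; started_at/finished_at are ISO 8601 timestamps.
curl --header "PRIVATE-TOKEN: <your_access_token>" \
  "https://gitlab.example.com/api/v4/projects/42/jobs/1234" \
  | jq '{started_at, finished_at, duration}'

The job's page in the web UI also shows when it ran and its duration in the right-hand sidebar.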
We are running an Oozie workflow that has a Shell action and a Spark action, i.e. a shell script and a Spark job that run in sequence.
Running single workflow:
Total: 3 mins
Shell action: 50 secs
Spark job: 2 mins
The rest of the time goes to Oozie initialization and YARN container allocation, which is absolutely fine.
Use case:
We are supposed to run 700 instances of the same workflow at once (split by region, zone, and area, which is a business requirement).
When running the 700 instances of the same workflow, we notice a delay in the completion of all 700 workflows, although we have scaled the cluster linearly. We expect the 700 workflows to complete in 3 minutes, or at worst 5 minutes, but this is not the case. There is a delay of about 5 minutes to launch all 700 workflows, which is also fine; even allowing for that, everything should complete within 10 minutes, but it does not.
What is actually happening: when the 700 workflows are submitted, it takes around 5-6 minutes to launch them all from Oozie (we are OK with this). The overall time taken to complete the 700 workflows is around 30 minutes, which means a workflow kickstarted at 7:00 might not complete until 7:30. Yet the time taken by the actions themselves remains the same: the shell action still takes 50 seconds and the Spark job 2-3 minutes. So we are seeing a delay in starting the shell action and the Spark job, even though Oozie has already taken the workflow into the PREP state.
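One way to pin down where those extra minutes go, as a sketch (the Oozie URL and workflow ID below are placeholders), is to pull the per-action timestamps that Oozie already records for each workflow:

# Shows the workflow's created/started/ended times and, per action,
# when each action started and ended.
oozie job -oozie http://oozie-host:11000/oozie -info 0000123-190101000000000-oozie-oozi-W

Comparing the workflow's start time with each action's start time should show whether the wait is spent in Oozie's PREP state, in YARN allocating the action's launcher container, or inside the action itself.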
What we checked so far:
Initially we thought it had to do with Oozie and worked on its configuration.
Later we suspected YARN and tuned some of its configuration.
We also created queues, so that shell and launcher jobs run in one queue and Spark jobs in another.
We have gone through the YARN and Oozie logs too.
Can someone throw some light on this?
I'm setting up an Azure DevOps pipeline in which I want to include automated tests via the Newman CLI.
Imagine a pipeline like this:
Build Project
Copy build to test folder
Run the application => (API-Server)
Run Newman
Kill API Server Process
On success, copy the build to another folder.
My problem is that my server application is in a waiting state after its initialization.
The next task in my build pipeline won't start.
Is there a way to run multiple command lines asynchronously in Azure DevOps?
Starting the process via "start" won't work, since it throws:
ERROR: Input redirection is not supported, exiting the process immediately.
start "%TESTDIR%\foo\bar.exe"
timeout 10
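Two separate things appear to bite here: cmd's start treats the first quoted argument as the window title, so the line above never actually launches bar.exe, and timeout requires an interactive console, which a build agent does not provide (hence the input-redirection error). A sketch of one way around both; the paths, collection file, and process name are placeholders:

rem An empty title "" makes start treat the quoted path as the program;
rem /b detaches it without opening a new window.
start "" /b "%TESTDIR%\foo\bar.exe"

rem timeout needs an interactive console; ping is the classic CI-safe sleep.
ping -n 11 127.0.0.1 > nul

rem Run the tests against the now-running server, then stop it.
call newman run "%TESTDIR%\collection.json"
taskkill /f /im bar.exe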
In TFS, I am using the SSH task with the 'Commands' option to connect to a remote machine and run a few commands. I cd to a particular folder and run a shell script using 'sh '.
This script usually takes around 2 hours to finish. The SSH task times out after 15 minutes and exits, but when I check the machine manually, the process is still running.
Why doesn't the SSH task wait until the script finishes completely?
According to your description, you may have encountered a timeout limitation of the SSH task or of the build definition.
First, please double-check the timeout setting under Control Options:
Specifies the maximum time, in minutes, that a task is allowed to execute before being cancelled by server. A zero value indicates an infinite timeout.
Another place to check is the build job timeout, under the settings of your build definition: Options -> Build job timeout in minutes.
Specifies the maximum time a build job is allowed to execute on an agent before being canceled by the server. An empty or zero value indicates an infinite timeout.
If both are set properly and you still hit the timeout, please attach a more detailed log of the failed build with verbose debug mode enabled (set system.debug=true) for troubleshooting.
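If neither timeout can be raised far enough, a common workaround, sketched below with placeholder paths and filenames, is to detach the long-running script on the remote machine so the SSH task returns immediately, and then check on it later:

# Detach the script so the SSH session can exit; capture output and PID.
cd /path/to/folder
nohup sh ./long_script.sh > long_script.log 2>&1 &
echo $! > long_script.pid

A later SSH task (or a manual check) can then inspect long_script.log, or test whether the process is still alive with kill -0 "$(cat long_script.pid)".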
The Best practices for running Docker guide states that only one process should run per Docker container. On Ubuntu there are some cron jobs related to the Apache httpd which run daily (located in /etc/cron.daily/apache2).
When using the Apache Docker image from the official repository, those cron jobs are not run; only the httpd process is started, and cron is not running.
Shouldn't the cron jobs stated above be executed?
I have a hard time figuring out how one could execute these cron jobs from another Docker image, as suggested in the best-practices guide, since the cron image would need access to the Apache process in order to run the cron jobs correctly.
For basic Apache there are no cron jobs to run.
If you have cron jobs to run, there is no one "right answer".
If they run daily and only for a certain amount of time, you could certainly just schedule them with an external scheduler instead of using cron.
If they run more frequently, or you don't have a scheduler that can handle that (like AWS Lambda), then it's not against best practices to have your webserver container run them via cron; you would just have to build your own image off of Apache's to handle it.
If your real question is "How do I run cron jobs?", a quick Google search brings up:
https://github.com/aptible/docker-cron-example
https://hub.docker.com/r/hamiltont/docker-cron/
https://getcarina.com/docs/tutorials/schedule-tasks-cron/
You would just modify those to run in the background with & or nohup.
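To make the separate-container approach concrete: busybox crond in an Alpine image can run in the foreground as the container's single process, which keeps the one-process-per-container rule intact. A sketch, where the crontab contents, image tag, and the script it calls are all assumptions:

# busybox cron on Alpine reads root's crontab from /etc/crontabs/root.
echo '15 3 * * * /usr/local/bin/cleanup.sh' > crontab

# Run crond as the sole foreground process (-f) of its own container.
docker run -d --name apache-cron \
  -v "$PWD/crontab:/etc/crontabs/root:ro" \
  alpine:3 crond -f -l 2

For jobs like Ubuntu's /etc/cron.daily/apache2, which cleans the htcacheclean disk cache, the cron container would additionally need the same cache volume mounted as the httpd container, since that shared directory is the only state the two need in common.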
What have you tried?