I am running a Linux Azure Container Instance. This is for a workflow where a container is created, runs a script, and is destroyed: the equivalent of "docker run --rm". Why does Azure take approximately one minute to spin up this container? My container runs in a few seconds, but it takes the fabric at least 50 seconds to respond. The container image is 200 MB at most.
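For context, a rough sketch of that run-once workflow with the Azure CLI; the resource group, container name, and image below are placeholders, and --restart-policy Never mimics the one-shot behaviour of docker run --rm:

# create the container group and run the script once, with no restarts
az container create --resource-group my-rg --name run-once --image myregistry.azurecr.io/my-script:latest --restart-policy Never
# tear the container group down once the script has finished
az container delete --resource-group my-rg --name run-once --yes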
I am trying to install the Dynatrace OneAgent in my ECS Fargate task. Along with the application container, I have added another container definition for the OneAgent, with alpine:latest as the image, and used runtime injection.
While running the task, the OneAgent container is initially in the running state, and after a minute it goes to the stopped state while the application container is still running.
In Dynatrace the same host is available but keeps being recreated every 5-10 minutes.
It turned out the issue I had was that the task was in draining status because of an application issue, which is why the host kept being recreated in Dynatrace. Also, since I used runtime injection for my ECS Fargate task, once the binaries are downloaded and injected into the volume, the OneAgent container definition is expected to stop while the application container keeps running and sending logs to Dynatrace.
I have the same problem. After connecting to the cluster via SSH, I saw that the agent needs to run as privileged. The only thing that worked for me was sending traces and metrics through OpenTelemetry.
https://aws-otel.github.io/docs/components/otlp-exporter
Alternative:
Use sleep infinity in the command field of your OneAgent container.
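For illustration, sleep infinity keeps the container's main process alive indefinitely, so the OneAgent container is never marked as stopped; the plain Docker equivalent of such a container would be something like:

# keeps running until you stop it yourself (e.g. with docker stop)
docker run --rm alpine:latest sleep infinity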
I'm running a task on Fargate with CPU set to 2048 and memory set to 8192. After running for some time, the task is stopped with the error:
container was stopped as it ran out of memory.
The thing is that the task does not fail every time. If I run the same task 10 times, it fails 5 times and works 5 times. However, if I take an EC2 machine with 2 vCPUs and 4 GB of memory and try to run the same container, it runs successfully (in fact, the memory usage on the EC2 instance is very low).
Can somebody please guide me on how to figure out the memory issue while running a Fargate task?
Thanks
The way to start would be enabling memory metrics from Container Insights for your Fargate tasks and then correlating the memory usage graph with your application logs.
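If Container Insights is not already enabled, it can be switched on per cluster with the AWS CLI (the cluster name here is a placeholder):

aws ecs update-cluster-settings --cluster my-cluster --settings name=containerInsights,value=enabled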
The difference between running on EC2 vs. Fargate could be due to the fact that when you run a container on ECS Fargate, it runs on AWS's internal EC2 instances. A noisy-neighbour situation could arise there, although the chances would be pretty low.
What is the minimum/average time for AWS ECS Fargate to boot and run a Docker image?
For argument's sake, the 45 MB anapsix/alpine-java image.
I would like to investigate using ECS Fargate to speed up the process of building software locally on a slow laptop/PC, by having the software built on a faster remote server.
As such, the boot-up time of the image is crucial in making the endeavour worthwhile.
I would disagree with the accepted answer given my experience with Fargate.
I have launched thousands of containers on Fargate, and was even featured in an AWS architecture blog for our usage of Fargate: https://aws.amazon.com/blogs/architecture/building-real-time-ai-with-aws-fargate/
Private subnets behind a NAT gateway have no different launch times for us than containers behind an IGW. If you use single NAT instances, sure, your mileage may vary.
Container launch times in Fargate are entirely determined by how large your container image is. Fargate does not cache images, so every task run results in a docker pull. If your images are based on Ubuntu, you will have a bad time.
We have a mix of Go from-scratch containers and Alpine Node containers.
On average, based on the metrics we have aggregated from thousands of launches, from-scratch containers start and are healthy in the target group in 10-15 seconds.
Alpine containers take on average 30-40 seconds to launch and become healthy.
Anything longer than that and your containers are likely too large for Fargate to make sense, until they offer pre-cached ECR images or something similar.
For your specific example, we have similarly sized containers. If your entrypoint becomes healthy quickly (i.e. not a 60-second Java start time), your 45 MB container should launch and be ready to go in 30-60 seconds.
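As a rough way to sanity-check this, the compressed image size that Fargate actually has to pull can be read from ECR (the repository name here is a placeholder):

aws ecr describe-images --repository-name my-repo --query 'imageDetails[].[imageTags[0],imageSizeInBytes]' --output table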
I am still waiting for the image caching in Fargate that is already available on ECS+EC2; the feature request can be tracked here. It is a pain in the ass that containers take such a long time to boot on AWS Fargate. Google Cloud Platform already offers this feature as generally available in its managed Cloud Run environment, where containers spin up on the fly (~2 seconds) when they receive a request. They go idle after a (configurable) 5 minutes, which means you are only billed for those 5 minutes.
AWS Fargate does not offer such a nice "warm containers" feature yet, although I would highly recommend that they do. It is probably technically difficult to get compute and storage close enough together to accomplish this; it would require an enormous amount of internal bandwidth to load those containers as fast as Google does.
Nevertheless, below is my experience with Docker containers on AWS Fargate. Boot time is highly correlated with container image size as you can see from the following sample of containers I booted (February 2019):
4000 MB ~ 5 minutes
2400 MB ~ 4 minutes
1000 MB ~ 2 minutes
350 MB ~ 50 seconds
I would recommend building your container image on a lightweight base image, such as Minideb or Alpine. This would keep your container image pretty small, ranging from a few tens of MBs to a few hundred MBs. But then again, when you need a JVM or Python with some additional packages and C libraries, you can easily end up at 1000 MB.
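As a rough illustration of the difference the base image alone makes (the tags below are just examples), pull both locally and compare their sizes; Alpine is typically well under 10 MB, while Ubuntu is closer to 80 MB:

docker pull alpine:3.19
docker pull ubuntu:22.04
docker images alpine:3.19
docker images ubuntu:22.04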
I've launched more than 100 containers now in Fargate and on a public VPC they take about 4 mins on average, but I've seen it as long as 7-8 mins on a bad day.
If you launch it on a Private VPC then the timing can go south in a hurry. I've seen it take 2 hours to launch a Fargate container if the NAT instance is overloaded.
Hopefully AWS will speed this up over time. It shouldn't take me longer to launch a Fargate container than it does to upload my docker image to ECR.
One could set ECS_IMAGE_PULL_BEHAVIOR=prefer-cached on the EC2 launch type to reduce container start-up times to a great extent.
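On the EC2 launch type this is an ECS container agent setting; one way to apply it is to append it to /etc/ecs/ecs.config on the container instance (for example via the instance's user data) and restart the agent:

echo "ECS_IMAGE_PULL_BEHAVIOR=prefer-cached" | sudo tee -a /etc/ecs/ecs.config
sudo systemctl restart ecs   # on an ECS-optimized Amazon Linux 2 AMI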
I have a very simple container which says "hello world"
I have successfully run it and scaled it to X instances.
They all seem to be in a cycle where they run, sleep for a bit, and then run again.
The Marathon cycle is: Waiting, Running, Delayed, and repeat.
The Swarm cycle is: Ready, Running, Shutdown, and repeat.
How do I specify that the container should finish after the first execution, whether in Swarm or Marathon?
You cannot; both Swarm and Marathon are designed for long-running services.
For running something just once, you should use a plain docker run command in the Swarm case, and some other framework on Mesos (Marathon runs on Mesos), e.g. Chronos, which is a cron replacement for Mesos and runs tasks periodically.
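For the Swarm case, a one-off run that prints its output and then cleans up after itself would just be something like:

docker run --rm alpine:latest echo "hello world"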
I have a t1.micro instance in Amazon Web Services to handle a virtual image (specifically, a formhub image), and sometimes I get an error about memory not being allocated; I solve it by rebooting the instance. Any clues?
Is it possible to reboot the instance automatically every day?
The micro instances are quite constrained, with only 600 MB or so of RAM. You may solve the problem by moving up to a small or medium instance, or even one of the new T2 instances - even the smallest one has 1 GB of RAM.
If this is not an option for you, you can add a cron job to restart the instance at a particular time of day.
SSH into the instance and type the command:
sudo crontab -e
Enter a line like:
0 5 * * * /sbin/reboot
to restart the system at 5am each day. This is for an Ubuntu system - the reboot command may be elsewhere in other distributions. Run the command which reboot to check.