We have a PHP application whose image is pushed to ECR and run on ECS Fargate. We've configured an ECS task definition for it, and it works fine as a container in ECS.
I've configured the awslogs log driver for the application and it sends the app logs to CloudWatch normally, but I'm wondering how to send the logs written to a file inside the container at
"/var/www/html/app/var/dev.log"
to the same log group that I configured when creating the task definition.
I found the answer at the following link:
https://aws.amazon.com/blogs/devops/send-ecs-container-logs-to-cloudwatch-logs-for-centralized-monitoring/
I just needed to install both syslog and the awslogs agent in the PHP image, then use supervisord to start them along with our PHP app when the container starts. On the task definition side, create a volume and a mount point. A sketch of the supervisord config is below.
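For reference, here is a minimal sketch of the supervisord side; the program commands and paths below are assumptions for illustration, not taken from the blog post:

; supervisord.conf (sketch; commands and paths are assumptions)
[supervisord]
nodaemon=true

[program:php]
command=php-fpm --nodaemonize

[program:syslog]
command=rsyslogd -n

[program:awslogs]
; launcher path depends on how the awslogs agent was installed
command=/var/awslogs/bin/awslogs-agent-launcher.sh

The awslogs agent's own config would then point at /var/www/html/app/var/dev.log and the log group from the task definition.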
I'm running mlflow on my local machine and logging everything through a remote tracking server with my artifacts going to an S3 bucket. I've confirmed that they are present in S3 after a run but when I look at the UI the artifacts section is completely blank. There's no error, just empty space.
Any idea why this is? I've included a picture from the UI.
You should see a 500 response for the artifacts request to the MLflow tracking server, e.g. in the browser console when you open the page of the model of interest. The UI service doesn't know the artifact location (since you set that to be an S3 bucket) and tries to load the defaults.
You need to pass the --artifacts-destination s3://yourPathToArtifacts argument to your mlflow server command. Also, when running the server in your environment, don't forget to supply some common AWS credential provider(s) (such as the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables) as well as the MLFLOW_S3_ENDPOINT_URL environment variable to point at your S3 endpoint.
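For example, assuming placeholder values for the bucket, credentials, endpoint, and backend store:

# placeholders throughout; adjust to your environment
export AWS_ACCESS_KEY_ID=AKIA...
export AWS_SECRET_ACCESS_KEY=...
export MLFLOW_S3_ENDPOINT_URL=https://s3.example.com  # only needed for a non-default endpoint

mlflow server \
  --backend-store-uri sqlite:///mlflow.db \
  --artifacts-destination s3://yourPathToArtifacts \
  --host 0.0.0.0 --port 5000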
I had the same issue with MLflow running on an EC2 instance. I logged into the server and noticed that it was overloaded and no disk space was left. I deleted a few temp files and the MLflow UI started displaying the files again. It seems like MLflow stores tons of tmp files, but that is a separate issue.
I have done the following steps:
Created an EKS cluster
Installed the aws-iam-authenticator client binary
Executed "aws eks update-kubeconfig --name <cluster_name>"
Executed "kubectl get svc"
I am able to view the services available in my cluster. When I look at the ~/.kube/config file, I see it uses an external command called "aws-iam-authenticator".
My understanding is that "aws-iam-authenticator" uses my ~/.aws/credentials and retrieves a token from AWS (aws-iam-authenticator token -i cluster-1), and that token is then used for the "kubectl get svc" command. Is my understanding correct?
If my understanding is correct, where does Heptio come into the picture in this flow? Is the Heptio authenticator deployed automatically when creating the EKS cluster?
Basically, Heptio authenticator = aws-iam-authenticator.
You can check the details here. If your aws-iam-authenticator is working fine, then you don't need to care about Heptio separately. They just renamed it.
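For context, this is roughly the users stanza that aws eks update-kubeconfig writes (a sketch; the exact apiVersion and ARN vary by CLI version and account):

users:
- name: arn:aws:eks:us-east-1:111122223333:cluster/cluster-1  # example ARN
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args: ["token", "-i", "cluster-1"]

kubectl runs this command transparently and sends the returned token with each request, which matches the flow you described.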
I have a docker-compose setup in which I am uploading source code for a server, say a Flask API. Now when I change my Python code, I have to follow steps like this:
Stop the running containers (docker-compose stop)
Build and load the updated code into the containers (docker-compose up --build)
This takes quite a long time. Is there a better way? Like updating the code in the running container and then restarting the Apache server without stopping the whole container?
There are a few dirty ways you can modify the file system of a running container.
First you need to find the path of the directory which is used as the runtime root for the container. Run docker container inspect id/name and look for the key UpperDir in the JSON output. You can edit/copy/delete files in that directory.
Another way is to get the process ID of the process running within the container and go to the /proc/process_id/root directory. This is the root directory of the process running inside Docker; you can edit it on the fly and the changes will appear in the container.
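For example, assuming the overlay2 storage driver and a container named my_container (both assumptions):

# path of the container's writable layer on the host
docker container inspect --format '{{ .GraphDriver.Data.UpperDir }}' my_container

# or via the container's main process (the PID is a host-side PID)
pid=$(docker container inspect --format '{{ .State.Pid }}' my_container)
sudo ls /proc/$pid/root/   # the container's root file system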
You can run docker build while the old container is still running, so your downtime is limited to the changeover period.
It can be helpful, for a couple of reasons, to put a load balancer in front of your container. Depending on your application this could be a "dumb" load balancer like HAProxy, a full web server like nginx, or something your cloud provider makes available. This allows you to have multiple copies of the application running at once, possibly on different hosts (which helps with scaling and reliability). In this case the sequence becomes (a shell sketch follows below):
docker build the new image
docker run it
Attach it to the load balancer (now traffic goes to both old and new containers)
Test that the new container works correctly
Detach the old container from the load balancer
docker stop && docker rm the old container
If you don't mind heavier-weight infrastructure, this sequence is essentially what happens when you change the image tag in a Kubernetes Deployment object, but adopting Kubernetes is a substantial commitment.
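A rough shell sketch of the sequence above; image and container names are placeholders, and the load-balancer attach/detach step depends on your setup:

# build the new image while the old container keeps serving
docker build -t myapp:v2 .

# start the new container alongside the old one
docker run -d --name myapp-v2 myapp:v2

# ...attach myapp-v2 to the load balancer, test it, detach myapp-v1...

# retire the old container
docker stop myapp-v1 && docker rm myapp-v1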
When I serve a Go application from the official Docker Hub image, I wonder what the default working directory is that the application starts up in.
Background: I will have to map a local certificate authority and server keys into the container to serve TLS/HTTPS, and I wonder where to map them so that the application will be able to grab them from its current working directory inside the container.
If you are using the golang:1.x-onbuild image from Docker Hub (https://hub.docker.com/_/golang/), your sources will be copied into
/go/src/app
This means all files and directories from the directory where you run the
docker build
command will be copied into the container.
And the WORKDIR of the base golang images is
/go
Go can report the directory it was started from using
currdir, _ = filepath.Abs(filepath.Dir(os.Args[0]))
(note that this resolves the binary's path; os.Getwd() returns the current working directory directly). Executed within a golang container right after startup, the pwd is set to
/go/src/app
The current working directory of a Go application starting up within such a Docker container is thus /go/src/app. In order to map a file or directory into a container you will have to use the -v switch, as described in the documentation for docker run:
-v /local/file.pem:/go/src/app/file.pem
This will map a local file into the pwd of the dockerized Go app.
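For example, a full docker run invocation with hypothetical file names and image name:

docker run \
  -v /local/ca.pem:/go/src/app/ca.pem \
  -v /local/server.key:/go/src/app/server.key \
  -p 8443:8443 \
  my-golang-app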
I have a Vagrantfile that does two important things: first it pulls and runs the dockerfile/rabbitmq image, then it builds from a custom Dockerfile and runs an application that assumes a vhost exists on the RabbitMQ server, let's say "/foo".
The problem is the vhost is not there.
The container with RabbitMQ is running successfully, and the app is linked to it using --link when the built image is run. Using the environment variables Docker sets, I can hit the server. But somewhere in the middle of these operations I need to create the vhost, as my connection is refused; I assume because "/foo" is not there.
How can I get the vhost onto the rabbit server?
Thanks
Note: using the web admin is not an option; this has to be done programmatically.
You can put default_vhost in /etc/rabbitmq/rabbitmq.config: http://www.rabbitmq.com/configure.html
It will then be created on the first run. (Stop the server and delete the mnesia directory if it has been started already.)
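A minimal sketch in the classic Erlang-term config format, assuming the vhost name /foo:

%% /etc/rabbitmq/rabbitmq.config
[
  {rabbit, [
    {default_vhost, <<"/foo">>}
  ]}
].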
There are a few ways to get the desired configuration:
Export/import the whole configuration with rabbitmqadmin, the Management Plugin CLI tool;
or
use the HTTP API from the management plugin;
or
use the rabbitmqctl CLI tool to manage access control.
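For example, with rabbitmqctl (the user name is a placeholder; run these inside the RabbitMQ container or against it):

rabbitmqctl add_vhost /foo
# grant an existing user full configure/write/read permissions on the new vhost
rabbitmqctl set_permissions -p /foo myuser ".*" ".*" ".*"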
BTW, according to the docs here: https://www.rabbitmq.com/vhosts.html
you can do this via curl:
curl -u username:pa$sw0rD -X PUT http://rabbitmq.local:15672/api/vhosts/vh1
So it probably doesn't matter whether you are doing this remotely or not.