I was wondering if there is a way to restart Apache when an alarm is triggered on an EC2 instance. The process could be kicked off either by the alarm directly or via SNS. In the alarm actions I only see options like Auto Scaling, ECS services, or an EC2 instance reboot. I am trying to see if Lambda + SNS can work, but it doesn't seem appropriate.
I am running Ubuntu instances.
Yes, you can achieve this with a combination of AWS Lambda and the EC2 Run Command service from AWS.
https://aws.amazon.com/blogs/aws/new-ec2-run-command-remote-instance-management-at-scale/
You can create a Lambda function that is triggered by the CloudWatch alarm and, when invoked, runs service apache2 restart on your Ubuntu EC2 instance.
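As a minimal sketch, assuming the instance has the SSM agent installed and an instance profile that allows Run Command, and that the Lambda function is subscribed to the SNS topic the alarm publishes to (the instance ID below is a placeholder), the handler could look like this:

import boto3

ssm = boto3.client('ssm')

# Placeholder instance ID; in a real setup you might resolve it from the
# alarm payload in the SNS event or from instance tags.
INSTANCE_ID = 'i-0123456789abcdef0'

def lambda_handler(event, context):
    # AWS-RunShellScript is a built-in SSM document that runs shell commands.
    response = ssm.send_command(
        InstanceIds=[INSTANCE_ID],
        DocumentName='AWS-RunShellScript',
        Parameters={'commands': ['sudo service apache2 restart']},
    )
    return response['Command']['CommandId']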
I have an EC2 instance in region A and an EKS cluster in region B. The EKS worker nodes need to access a port exposed by the EC2 instance, so I manually maintain the EC2 security group that controls which public IPs (the EKS worker IPs) can reach it. The issue is that I have to update the security group by hand whenever I scale or upgrade the EKS node group; there should be a smarter way. I have some ideas; can anyone give some guidance or best practices?
solution 1: use a Lambda cron job to monitor the EC2 Auto Scaling group and then update the security group (a sketch of this follows below).
solution 2: in Kubernetes, watch for node changes and use OIDC to update the security group.
note: the EC2 instance and the EKS cluster are in different regions.
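For solution 1, a rough sketch of such a Lambda with boto3 (the regions, ASG name, security group ID, and port are placeholders; it naively replaces all rules on that port with the node group's current public IPs, so you would want error handling and pagination for real use):

import boto3

# Placeholders: adjust regions, names, IDs, and port for your setup.
EKS_REGION = 'us-west-2'      # region B (EKS node group's ASG)
EC2_REGION = 'us-east-1'      # region A (the EC2 instance's SG)
ASG_NAME = 'my-eks-node-group-asg'
SG_ID = 'sg-0123456789abcdef0'
PORT = 8080

def lambda_handler(event, context):
    asg = boto3.client('autoscaling', region_name=EKS_REGION)
    ec2_b = boto3.client('ec2', region_name=EKS_REGION)
    ec2_a = boto3.client('ec2', region_name=EC2_REGION)

    # 1. Find the instance IDs currently in the node group's ASG.
    groups = asg.describe_auto_scaling_groups(AutoScalingGroupNames=[ASG_NAME])
    instance_ids = [i['InstanceId']
                    for g in groups['AutoScalingGroups']
                    for i in g['Instances']]

    # 2. Resolve their public IPs.
    ips = set()
    if instance_ids:
        reservations = ec2_b.describe_instances(InstanceIds=instance_ids)
        for r in reservations['Reservations']:
            for inst in r['Instances']:
                ip = inst.get('PublicIpAddress')
                if ip:
                    ips.add(ip + '/32')

    # 3. Naively replace the existing rules on that port with the current set.
    sg = ec2_a.describe_security_groups(GroupIds=[SG_ID])['SecurityGroups'][0]
    old = [p for p in sg['IpPermissions']
           if p.get('FromPort') == PORT and p.get('ToPort') == PORT]
    if old:
        ec2_a.revoke_security_group_ingress(GroupId=SG_ID, IpPermissions=old)
    if ips:
        ec2_a.authorize_security_group_ingress(
            GroupId=SG_ID,
            IpPermissions=[{
                'IpProtocol': 'tcp',
                'FromPort': PORT,
                'ToPort': PORT,
                'IpRanges': [{'CidrIp': ip} for ip in sorted(ips)],
            }],
        )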
I am trying to install the Dynatrace OneAgent in my ECS Fargate task. Along with the application container, I have added another container definition for the OneAgent with the image alpine:latest and used runtime injection.
When the task runs, the OneAgent container is initially in the running state, and after a minute it goes to the stopped state while the application container stays running.
In Dynatrace the same host is available and keeps getting recreated every 5-10 minutes.
It turned out the task was in draining status because of an application issue, which is why the host kept getting recreated in Dynatrace. Also, since I used runtime injection for my ECS Fargate task, once the binaries are downloaded and injected into the volume, the OneAgent container stops while the application container keeps running and sending data to Dynatrace.
I had the same problem, and after connecting to the cluster via SSH I saw that the agent needs to run privileged. The only thing that worked for me was sending traces and metrics through OpenTelemetry.
https://aws-otel.github.io/docs/components/otlp-exporter
Alternative:
use sleep infinity in the command field of your OneAgent container, as in the sketch below.
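For illustration, the relevant container definition fragment might look roughly like this (the container name is a placeholder, and essential: false is an assumption here so the task does not stop if this container ever exits):

{
  "name": "oneagent",
  "image": "alpine:latest",
  "essential": false,
  "command": ["sleep", "infinity"]
}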
I deployed Spinnaker in AWS to run a test in the same account, but I am unable to configure server groups. If I click create, the task is queued with the account configured via hal on the CLI. Is there any way to troubleshoot this? The logs are looking light.
The storage backend needs to be configured correctly:
https://www.spinnaker.io/setup/install/storage/
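For example, to point Spinnaker at an S3 bucket as the storage backend (bucket name and region are placeholders), something like:

$ hal config storage s3 edit --bucket my-spinnaker-bucket --region us-east-1
$ hal config storage edit --type s3
$ hal deploy apply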
I want to deploy an autoscalable Redis in GCP and connect it to my App Engine app via my app.yml.
Is Cloud Launcher the proper way to launch the Redis service? I did so and selected the Redis click-to-deploy option (not the Bitnami one).
I configured the instances and deployed them.
After the instances were ready, the following command appeared:
gcloud compute ssh --project <project-name> --zone <zone-name> <redis-instance-name>
After doing this, do I have to configure the following things?
Instance IP address (I want it to be accessible only from inside my GCP account). Do I need to configure all 3 instances, or does the sentinel take care of the redirection?
Password of the Redis sentinel
Firewall
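For the firewall part, one option is a rule that only allows Redis traffic from inside your network, e.g. (the rule name and source range are placeholders; 6379 is Redis's default port, and 10.128.0.0/9 is the default network's automatic subnet range):

gcloud compute firewall-rules create redis-internal-only --network default --allow tcp:6379 --source-ranges 10.128.0.0/9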
I am working on a Ruby on Rails app with MongoDB. My app is deployed on Heroku, and for delayed jobs I am using Amazon EC2. Things I have doubts about:
1) How do I connect from the app on Heroku to the MongoDB database running on Amazon EC2?
2) When I run delayed jobs, how will they get to the Amazon server, and what changes do I have to make to the app? It would help if somebody could point me to a tutorial for this.
If you want to make your EC2 instance visible to your application on Heroku, you need to authorize Heroku's security group in your instance's security group on Amazon. There are instructions in Heroku's documentation that explain how to connect to external services like this.
https://devcenter.heroku.com/articles/dynos#connecting-to-external-services
In the case of MongoDB running on its default port, you'd want to do something like this:
$ ec2-authorize YOURGROUP -P tcp -p 27017 -u 098166147350 -o default
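Note that ec2-authorize comes from the legacy EC2 API tools; with the current AWS CLI, the equivalent should be something along these lines (YOURGROUP is a placeholder for your security group's name):

$ aws ec2 authorize-security-group-ingress --group-name YOURGROUP --protocol tcp --port 27017 --source-group default --group-owner 098166147350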
As for how to handle your delayed jobs running remotely on the EC2 instance, you might find this article from the Artsy engineering team helpful. It sounds like they developed a fairly similar setup.
http://artsy.github.io/blog/2012/01/31/beyond-heroku-satellite-delayed-job-workers-on-ec2/