AWS Deployment issue: Error code HEALTH_CONSTRAINTS (In-place deployment)

I want to host my website on AWS S3.
When I create a CodeDeploy deployment (I followed this tutorial -> https://aws.amazon.com/getting-started/tutorials/deploy-code-vm/), it shows this error -> Deployment Failed
The overall deployment failed because too many individual instances failed deployment, too few healthy instances are available for deployment, or some instances in your deployment group are experiencing problems. (Error code: HEALTH_CONSTRAINTS)
Error screenshot -> http://i.prntscr.com/oqr4AxEiThuy823jmMck7A.png
Please help me.

If you want to host your website on S3, you should upload your code into an S3 bucket and enable Static Website Hosting for that bucket. If you use CodeDeploy, it will take application code either from an S3 bucket or from GitHub and host it on EC2 instances.
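For the static-hosting route, the bucket's website configuration is a small JSON document; a minimal sketch of what you would pass to s3api put-bucket-website (the document names here are placeholders):

{
  "IndexDocument": { "Suffix": "index.html" },
  "ErrorDocument": { "Key": "error.html" }
}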
I will assume that you want to use CodeDeploy to host your website on a group of EC2 instances. The error that you have mentioned can occur if your EC2 instances do not have the correct permissions through an IAM role.
From the documentation:
The permissions you add to the service role specify the operations AWS CodeDeploy can perform when it accesses your Amazon EC2 instances and Auto Scaling groups. To add these permissions, attach an AWS-supplied policy, AWSCodeDeployRole, to the service role.
If you are following along with the sample deployment from the CodeDeploy wizard, make sure you picked Create A Service Role at the stage where you are required to Select a service role.
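If you create the service role yourself instead, its trust relationship must allow CodeDeploy to assume it. A minimal sketch of that trust policy (the role name is up to you; this is the standard service principal):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "codedeploy.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}

You would then attach the AWS-managed AWSCodeDeployRole policy to this role, as the documentation quoted above describes.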

Related

ECS Fargate, permissions to download a file from S3

I am trying to deploy an ECR image to ECS Fargate. In the Dockerfile I run an AWS CLI command to download a file from S3.
However, I require the relevant permissions to access S3 from ECS. There is a task role (under the ECS task definition, screenshot below) through which I presume I can grant ECS the rights to access S3. However, the dropdown only offered me the default ecsTaskExecutionRole, and not the custom role I created myself.
Is this a bug? Or am I required to add the role elsewhere?
[NOTE] I do not want to include the AWS keys as environment variables in Docker, for security reasons.
[UPDATE]
Added a new ECS role with a permissions boundary for S3. The task role still did not show up.
Did you grant ECS the right to assume your custom role? As per the documentation:
https://docs.aws.amazon.com/AmazonECS/latest/userguide/task-iam-roles.html#create_task_iam_policy_and_role
A trust relationship needs to be established so that the ECS service can assume the role on your behalf.
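Concretely, for a custom role to be selectable as a task role, its trust policy needs the ECS tasks service principal; a minimal sketch:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ecs-tasks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}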

Fargate: unable to load AWS credentials from any provider

I have a Zeppelin Docker image that I am trying to run on ECS with Fargate. For the Zeppelin notebook, I want to configure S3 to store the notebooks,
so I have created a bucket and added permission to read/write the bucket to the IAM role (the same role is used as the ECS task role and execution role).
When I start with ECS+EC2 my application comes up, but when I select the launch type as Fargate, my Zeppelin application throws the error below:
Unable to load AWS credentials from any provider in the chain
Is there any extra configuration required for Fargate?
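For reference, the read/write permission mentioned above would be a policy roughly like the following (a sketch; the bucket name is a placeholder):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::zeppelin-notebook-bucket",
        "arn:aws:s3:::zeppelin-notebook-bucket/*"
      ]
    }
  ]
}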

Stream an S3 file from one AWS subaccount, with Flink deployed on a Kubernetes cluster in another AWS account

I have 2 AWS accounts, Account A and Account B.
Account A has an EKS cluster with a Flink cluster running on it. To manage the IAM roles, we use Kube2iam.
All the pods on the cluster have specific roles assigned to them. For simplicity, let's say the role for one of the pods is Pod-Role.
The K8s worker nodes have the role Worker-Node-role.
Kube2iam is correctly configured to make the proper EC2 metadata calls when required.
Account B has an S3 bucket, which the pod hosted on an Account A worker node needs to read.
Possible solution:
1. Create a role in Account B, let's say AccountB_Bucket_access_role, with a policy that allows reading the bucket. Add Pod-Role as a trusted entity to it.
2. Add a policy to Pod-Role which allows switching to AccountB_Bucket_access_role, basically the STS AssumeRole action.
3. Create an AWS profile in the pod, let's say custom_profile, with role_arn set to AccountB_Bucket_access_role's ARN.
4. While deploying the Flink pod, set AWS_PROFILE=custom_profile. (Sketches of the policies from steps 1 and 2 are shown just below.)
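For step 1, the trust relationship on AccountB_Bucket_access_role would look something like this (account IDs are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::ACCOUNT_A_ID:role/Pod-Role" },
      "Action": "sts:AssumeRole"
    }
  ]
}

And for step 2, the policy attached to Pod-Role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::ACCOUNT_B_ID:role/AccountB_Bucket_access_role"
    }
  ]
}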
QUESTION: Given the above, whenever the Flink app needs to talk to the S3 bucket, it first assumes the AccountB_Bucket_access_role role and is able to read the S3 bucket. But setting AWS_PROFILE actually switches the role for the whole Flink app, so all the Pod-Role permissions are lost, and they are required for the proper functioning of the Flink app.
Is there a way that this AWS custom_profile could be used only when reading the S3 bucket, with the app switching back to Pod-Role after that?
import org.apache.flink.api.java.io.TextInputFormat
import org.apache.flink.core.fs.Path
import org.apache.flink.streaming.api.functions.source.FileProcessingMode
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment

val flinkEnv: StreamExecutionEnvironment = AppUtils.setUpAndGetFlinkEnvRef(config.flink)
val textInputFormat = new TextInputFormat(new Path(config.path))
// Continuously watch the S3 path and re-read it at the configured interval
flinkEnv
  .readFile(
    textInputFormat,
    config.path,
    FileProcessingMode.PROCESS_CONTINUOUSLY,
    config.refreshDurationMs
  )
This is what I use in the Flink job to read the S3 file.
Never mind, we can configure a role in one account to access a particular bucket in another account: Access Bucket from another account
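That is, instead of assuming a second role, Account B can attach a bucket policy that grants Pod-Role from Account A direct read access; a minimal sketch (bucket name and account ID are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::ACCOUNT_A_ID:role/Pod-Role" },
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}

Pod-Role also needs its own IAM policy allowing the same S3 actions on that bucket, since cross-account access requires both sides to permit it.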

Spinnaker Support for App ELB in AWS

I am facing two issues with a new Spinnaker installation.
I cannot see my Application Load Balancers listed in the dropdown of the load balancers tab while creating a pipeline. We currently use only Application Load Balancers in our setup. I tried editing the pipeline's JSON with the config below, and it didn't work. I verified this by checking the ASG created in my AWS account for any associated ELB/target group, but I couldn't see any.
"targetGroups": [
"TG-APP-ELB-NAME-DEV"
],
Hence, I would like to confirm how I can get support for Application Load Balancers in my Spinnaker installation and how to use them.
I have also found an AMI search issue. My current setup, briefly, is below.
One managing account - prod, where my Spinnaker EC2 instance is running and my prod application instances are running.
Two managed accounts - dev & test, where my application test instances are running.
When I create a new AMI in my dev AWS account and try to search for the newly created AMI from Spinnaker, it first fails with an error that it couldn't find the AMI. I then shared the AMI from dev to prod, after which Spinnaker was able to find it, but it failed with an Unauthorized error.
Please help me clarify:
1. Whether sharing is required for any new AMI from dev -> prod, or whether our spinnakerManaged role would take care of the permissions.
2. How to fix this problem and create the AMI successfully.
Regarding #1, did you create the Application Load Balancer through the Spinnaker UI or directly through AWS?
If it is the latter, then make sure it follows the naming convention expected by Spinnaker (I believe the load balancer name should start with the application name).

Instance Types missing while creating Server Group in Spinnaker

I have used an AWS Community AMI to configure Spinnaker. I am able to get the lists of ELBs, AMIs and Security Groups while creating a Server Group, but I am not getting the instance types in the custom dropdown list. Any idea what could be going wrong?
Spinnaker Cluster Error
It looks like you do not have the correct IAM role assigned to the user whose access keys you are using for the Spinnaker integration with AWS.
Check whether that user has enough rights in AWS. If not, create a role or assign AWS PowerUserAccess to your user and then retry the integration.
Spinnaker is a tool that needs at least AWS EC2 full access, as it directly accesses EC2 to spin up its server groups.
Instance types are cached in the browser's local storage. You can explicitly refresh the cache via the 'Refresh all caches' link.
If you open the network tab of your browser's developer tools (prior to clicking 'Refresh all caches'), you should see a request to http://localhost:8084/instanceTypes.
If the response contains your instance types, you should be good to go.