Lock regional AWS IoT Core to a specific CA - ssl-certificate

I am working with a custom CA (certificate authority) on AWS IoT. I wonder if there is a way to lock it down to only my CA, i.e., only allow connections from devices that present a certificate issued by my custom CA (and not AWS IoT's built-in certificates) upon connection initiation.
Thanks

If you generate the device certificates with a particular attribute, then a condition in the IoT policy can restrict connections to certificates carrying that attribute, e.g.:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iot:Connect"
      ],
      "Resource": [
        "arn:aws:iot:us-east-1:123456789012:client/${iot:Connection.Thing.ThingName}"
      ],
      "Condition": {
        "ForAllValues:StringEquals": {
          "iot:Certificate.Subject.Organization.List": [
            "Example Corp",
            "AnyCompany"
          ]
        }
      }
    }
  ]
}
The list of certificate policy variables is at https://docs.aws.amazon.com/iot/latest/developerguide/cert-policy-variables.html
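For context, the organization matched by the condition above comes from the subject of the device certificate your CA issues. A minimal sketch of issuing such a certificate with OpenSSL, assuming hypothetical file names for the CA key pair (rootCA.pem / rootCA.key) and the device key (device.key):
# Create a CSR whose subject carries the organization used in the policy condition
openssl req -new -key device.key -out device.csr -subj "/O=Example Corp/CN=my-device"
# Sign the CSR with the custom CA so the issued certificate keeps that attribute
openssl x509 -req -in device.csr -CA rootCA.pem -CAkey rootCA.key \
  -CAcreateserial -out device.crt -days 365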

Related

AWS CodeBuild ECR CannotPullContainerError

CodeBuild project fails at the Provisioning phase due to the following error:
BUILD_CONTAINER_UNABLE_TO_PULL_IMAGE: Unable to pull customer's container image. CannotPullContainerError: Error response from daemon: pull access denied for <image-name>, repository does not exist or may require 'docker login': denied: User: arn:aws:sts::<id>
The issue was with the image pull credentials.
CodeBuild was using the default AWS CodeBuild credentials to pull the image, while the ECRAccessPolicy was attached to the project service role.
I fixed it by updating the image pull credentials to use the project service role.
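For anyone making the same change from the CLI instead of the console, a rough sketch (assuming a hypothetical project name and ECR image URI; update-project requires the full environment object to be restated):
aws codebuild update-project --name my-project \
  --environment "type=LINUX_CONTAINER,computeType=BUILD_GENERAL1_SMALL,image=ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/REPOSITORY_NAME:latest,imagePullCredentialsType=SERVICE_ROLE"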
To add additional clarity (not enough reputation yet to comment on an existing answer), the CodeBuild project service role needs to have the following permissions if trying to pull from a private repository:
{
  "Action": [
    "ecr:BatchCheckLayerAvailability",
    "ecr:BatchGetImage",
    "ecr:GetDownloadUrlForLayer"
  ],
  "Effect": "Allow",
  "Resource": [
    "arn:aws:ecr:us-east-1:ACCOUNT_ID:repository/REPOSITORY_NAME*"
  ]
}
The ECR repository policy should also look something like this (scope down root if desired):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNT_ID:root"
      },
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer"
      ]
    }
  ]
}
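A sketch of applying a repository policy like the one above with the CLI, assuming it is saved locally as repo-policy.json:
aws ecr set-repository-policy --repository-name REPOSITORY_NAME \
  --policy-text file://repo-policy.json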
FWIW, I stumbled across this issue when using Terraform to create my CodeBuild pipeline.
The setting to change was image_pull_credentials_type, which should be set to SERVICE_ROLE rather than CODEBUILD in the environment block of the "aws_codebuild_project" resource.
Thank you to Chaitanya for the accepted answer, which pointed me in this direction.
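To confirm which pull credentials a project ends up using after applying the Terraform change, one option (a sketch, assuming the project is named my-project) is to query it back with the CLI:
# Should print SERVICE_ROLE once the environment block is updated
aws codebuild batch-get-projects --names my-project \
  --query 'projects[0].environment.imagePullCredentialsType' --output text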

s3/gcs - Copy s3 subdirectory to gcs without access to root? [duplicate]

I'm looking to set up a transfer job to take files stored within an S3 bucket and load them to a GCS bucket. The credentials that I have give me access to the folder that contains the files that I need from S3 but not to the higher level folders.
When I try to set up the transfer job with the S3 bucket name under 'Amazon S3 bucket' and the access key ID & secret access key filled-in, the access is denied as you would expect given the limits of my credentials. However, access is still denied if I add the extra path information as a Prefix item (e.g. 'Production/FTP/CompanyName') and I do have access to this folder.
It seems as though I can't get past the fact that I do not have access to the root directory. Is there any way around this?
According to the documentation link:
The Storage Transfer Service uses the project-[$PROJECT_NUMBER]@storage-transfer-service.iam.gserviceaccount.com service account to move data from a Cloud Storage source bucket. The service account must have the following permissions for the source bucket:
storage.buckets.get: Allows the service account to get the location of the bucket. Always required.
storage.objects.list: Allows the service account to list objects in the bucket. Always required.
storage.objects.get: Allows the service account to read objects in the bucket. Always required.
storage.objects.delete: Allows the service account to delete objects in the bucket. Required if you set deleteObjectsFromSourceAfterTransfer to true.
The roles/storage.objectViewer and roles/storage.legacyBucketReader roles together contain the permissions that are always required. The roles/storage.legacyBucketWriter role contains the storage.objects.delete permissions. The service account used to perform the transfer must be assigned the desired roles.
You have to set these permissions on your AWS bucket.
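If the Cloud Storage side also needs those roles granted explicitly, a sketch with gsutil (assuming a hypothetical project number and bucket name) could look like:
# Grant the always-required read permissions to the Storage Transfer Service account
gsutil iam ch \
  serviceAccount:project-123456789@storage-transfer-service.iam.gserviceaccount.com:roles/storage.objectViewer \
  gs://my-gcs-bucket
gsutil iam ch \
  serviceAccount:project-123456789@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader \
  gs://my-gcs-bucket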
Paul,
Most likely your IAM role is missing the s3:ListBucket permission. Can you update your IAM role to have s3:ListBucket and s3:GetBucketLocation and try again?
The AWS permission policy should look like the one below if you want to give access to a subfolder.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::<bucketname>",
        "arn:aws:s3:::<bucketname>/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:List*",
        "s3:Get*"
      ],
      "Resource": "arn:aws:s3:::<bucketname>",
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "<subfolder>/*"
          ]
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:List*",
        "s3:Get*"
      ],
      "Resource": [
        "arn:aws:s3:::<bucketname>/<subfolder>",
        "arn:aws:s3:::<bucketname>/<subfolder>/*"
      ],
      "Condition": {}
    }
  ]
}
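One way to attach a policy like the above to the IAM user whose access key you give to the transfer job (a sketch, assuming a hypothetical user name and that the JSON is saved as policy.json):
aws iam put-user-policy --user-name gcs-transfer-user \
  --policy-name s3-subfolder-read \
  --policy-document file://policy.json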

ERR_SSL_VERSION_OR_CIPHER_MISMATCH from AWS API Gateway into Lambda

I have set up a Lambda function and attached an API Gateway deployment to it. The tests in the gateway console all work fine. I created an AWS certificate for *.hazeapp.net. I created a custom domain in API Gateway and attached that certificate. In the Route 53 zone, I created the alias record and used the target that came up under API Gateway (the only one available). I named the alias rest.hazeapp.net. My client gets the ERR_SSL_VERSION_OR_CIPHER_MISMATCH error. Curl indicates that the TLS server handshake failed, which agrees with the SSL error. Curl indicates that the certificate CA checks out.
Am I doing something wrong?
I had this problem when my DNS entry pointed directly to the API Gateway deployment rather than to the domain name backing the custom domain.
To find the domain name to point to:
aws apigateway get-domain-name --domain-name "<YOUR DOMAIN>"
The response contains the domain name to use. In my case I had a Regional deployment so the result was:
{
  "domainName": "<DOMAIN_NAME>",
  "certificateUploadDate": 1553011117,
  "regionalDomainName": "<API_GATEWAY_ID>.execute-api.eu-west-1.amazonaws.com",
  "regionalHostedZoneId": "...",
  "regionalCertificateArn": "arn:aws:acm:eu-west-1:<ACCOUNT>:certificate/<CERT_ID>",
  "endpointConfiguration": {
    "types": [
      "REGIONAL"
    ]
  }
}
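The Route 53 alias record should then target the regionalDomainName and regionalHostedZoneId from that response rather than the deployment's own execute-api URL. A sketch of the record update, assuming a hypothetical hosted zone ID and the rest.hazeapp.net name from the question:
aws route53 change-resource-record-sets --hosted-zone-id ZONE_ID --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "rest.hazeapp.net",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "<regionalHostedZoneId from the response>",
        "DNSName": "<regionalDomainName from the response>",
        "EvaluateTargetHealth": false
      }
    }
  }]
}'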

CloudWatch monitoring script errors when attempting to send metrics for ASG

I am trying to use the CloudWatch monitoring script to send metrics for memory, disk, and swap utilization from an EC2 instance to CloudWatch. In order to run the script I need to provide it with AWS credentials or an IAM role. When attempting to use an IAM role, I get the error below:
[ec2-user@ip-x-x-x-x aws-scripts-mon]$ /home/ec2-user/aws-scripts-mon/mon-put-instance-data.pl --mem-util --mem-used --mem-avail --auto-scaling=only --verbose --aws-iam-role=ACCT-CloudWatch-service-role
Using AWS credentials file </home/ec2-user/aws-scripts-mon/awscreds.conf>
WARNING: Failed to call EC2 to obtain Auto Scaling group name. HTTP Status Code: 0. Error Message: Failed to obtain credentials for IAM role ACCT-CloudWatch-service-role. Available roles: ACCT-service-role
WARNING: The Auto Scaling metrics will not be reported this time.
[ec2-user@ip-x-x-x-x aws-scripts-mon]$
This is what my IAM policy looks like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingNotificationTypes",
        "autoscaling:DescribeAutoScalingInstances",
        "ec2:DescribeTags",
        "autoscaling:DescribePolicies",
        "logs:DescribeLogStreams",
        "autoscaling:DescribeTags",
        "autoscaling:DescribeLoadBalancers",
        "autoscaling:*",
        "ssm:GetParameter",
        "logs:CreateLogGroup",
        "logs:PutLogEvents",
        "ssm:PutParameter",
        "logs:CreateLogStream",
        "cloudwatch:*",
        "autoscaling:DescribeAutoScalingGroups",
        "ec2:*",
        "kms:*",
        "autoscaling:DescribeLoadBalancerTargetGroups"
      ],
      "Resource": "*"
    }
  ]
}
What could I be missing?
The message states that the problem comes from the role it tries to use:
Failed to obtain credentials for IAM role ACCT-CloudWatch-service-role. Available roles: ACCT-service-role
Modify that part of your command to --aws-iam-role=ACCT-service-role (I am assuming that this role is the one configured correctly).
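With that single change, the invocation from the question becomes:
/home/ec2-user/aws-scripts-mon/mon-put-instance-data.pl --mem-util --mem-used --mem-avail \
  --auto-scaling=only --verbose --aws-iam-role=ACCT-service-role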

CloudFront SSL Certificate Not Showing up in UI After Uploading

I've been using Cloudfront to terminate SSL for several websites, but I can't seem to get it to recognize my newly uploaded SSL certificate for some reason.
Here's what I've done so far:
Purchased a valid SSL certificate, and uploaded it via the AWS cli tool as follows:
$ aws iam upload-server-certificate \
--server-certificate-name www.codehappy.io \
--certificate-body file://www.codehappy.io.crt \
--private-key file://www.codehappy.io.key \
--certificate-chain file://www.codehappy.io.chain.crt \
--path /cloudfrount/codehappy-www/
For which I get the following output:
{
  "ServerCertificateMetadata": {
    "ServerCertificateId": "ASCAIKR2OSE6GX43URB3E",
    "ServerCertificateName": "www.codehappy.io",
    "Expiration": "2016-10-19T23:59:59Z",
    "Path": "/cloudfrount/codehappy-www/",
    "Arn": "arn:aws:iam::001177337028:server-certificate/cloudfrount/codehappy-www/www.codehappy.io",
    "UploadDate": "2015-10-20T20:02:36.983Z"
  }
}
NOTE: I first ran aws configure and supplied my IAM user's credentials (this worked just fine).
Next, I ran the following command to view a list of all my existing SSL certificates on IAM:
$ aws iam list-server-certificates
{
  "ServerCertificateMetadataList": [
    {
      "ServerCertificateId": "ASCAIIMOAKWFL63EKHK4I",
      "ServerCertificateName": "www.ipify.org",
      "Expiration": "2016-05-25T23:59:59Z",
      "Path": "/cloudfront/ipify-www/",
      "Arn": "arn:aws:iam::001177337028:server-certificate/cloudfront/ipify-www/www.ipify.org",
      "UploadDate": "2015-05-26T04:30:15Z"
    },
    {
      "ServerCertificateId": "ASCAJB4VOWIYAWN5UEQAM",
      "ServerCertificateName": "www.rdegges.com",
      "Expiration": "2016-05-28T23:59:59Z",
      "Path": "/cloudfront/rdegges-www/",
      "Arn": "arn:aws:iam::001177337028:server-certificate/cloudfront/rdegges-www/www.rdegges.com",
      "UploadDate": "2015-05-29T00:11:23Z"
    },
    {
      "ServerCertificateId": "ASCAJCH7BQZU5SZZ52YEG",
      "ServerCertificateName": "www.codehappy.io",
      "Expiration": "2016-10-19T23:59:59Z",
      "Path": "/cloudfrount/codehappy-www/",
      "Arn": "arn:aws:iam::001177337028:server-certificate/cloudfrount/codehappy-www/www.codehappy.io",
      "UploadDate": "2015-10-20T20:09:22Z"
    }
  ]
}
NOTE: As you can see, I'm able to view all three of my SSL certificates, including my newly created one.
Next, I logged into the IAM UI to verify that my IAM user account has administrator access:
As you can see my user is part of an 'Admins' group, which has unlimited Admin access to AWS.
Finally, I log into the Cloudfront UI and attempt to select my new SSL certificate. Unfortunately, this is where things seem to not work =/ Only my other two SSL certs are listed:
Does anyone know what I need to do so I can use my new SSL certificate with Cloudfront?
Thanks so much!
Most likely, the issue is that the path is incorrect: it is not cloudfrount but cloudfront.
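If that is the case, one way to correct it without re-uploading (a sketch, assuming the IAM update-server-certificate call is permitted for your user) is to move the certificate under the /cloudfront/ path:
aws iam update-server-certificate \
  --server-certificate-name www.codehappy.io \
  --new-path /cloudfront/codehappy-www/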
I had a very similar issue, and the problem was directly related to my private key's size. Reissuing the certificate using an RSA 2048-bit key instead of an RSA 4096-bit key solved the issue for me. It could be something else outside of key size as well, such as the formatting of your PEM blocks or using an encrypted private key.
In short, ACM's import filter won't catch everything, nor will it verify validity across all AWS products, so double-check that your key settings are compatible with CloudFront when using external certificates. Here's a list of compatibility requirements for CloudFront; remember that compatibility can vary from product to product, so always double-check: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cnames-and-https-requirements.html
Had I simply read the documentation first, as usual, I would have saved myself a headache. A 4096-bit key is perfectly fine for some ACM functionality, but this does not include CloudFront.
Importing a certificate into AWS Certificate Manager (ACM): public key length must be 1024 or 2048 bits. The limit for a certificate that you use with CloudFront is 2048 bits, even though ACM supports larger keys.
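To check the key size of a certificate before importing it (using the certificate file from the upload step), something like this shows it:
# Look for "Public-Key: (2048 bit)" in the output
openssl x509 -in www.codehappy.io.crt -noout -text | grep "Public-Key"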