I've been using CloudFront to terminate SSL for several websites, but for some reason I can't get it to recognize my newly uploaded SSL certificate.
Here's what I've done so far:
I purchased a valid SSL certificate and uploaded it via the AWS CLI as follows:
$ aws iam upload-server-certificate \
--server-certificate-name www.codehappy.io \
--certificate-body file://www.codehappy.io.crt \
--private-key file://www.codehappy.io.key \
--certificate-chain file://www.codehappy.io.chain.crt \
--path /cloudfrount/codehappy-www/
This produced the following output:
{
"ServerCertificateMetadata": {
"ServerCertificateId": "ASCAIKR2OSE6GX43URB3E",
"ServerCertificateName": "www.codehappy.io",
"Expiration": "2016-10-19T23:59:59Z",
"Path": "/cloudfrount/codehappy-www/",
"Arn": "arn:aws:iam::001177337028:server-certificate/cloudfrount/codehappy-www/www.codehappy.io",
"UploadDate": "2015-10-20T20:02:36.983Z"
}
}
NOTE: I first ran aws configure and supplied my IAM user's credentials (this worked just fine).
Next, I ran the following command to view a list of all my existing SSL certificates on IAM:
$ aws iam list-server-certificates
{
"ServerCertificateMetadataList": [
{
"ServerCertificateId": "ASCAIIMOAKWFL63EKHK4I",
"ServerCertificateName": "www.ipify.org",
"Expiration": "2016-05-25T23:59:59Z",
"Path": "/cloudfront/ipify-www/",
"Arn": "arn:aws:iam::001177337028:server-certificate/cloudfront/ipify-www/www.ipify.org",
"UploadDate": "2015-05-26T04:30:15Z"
},
{
"ServerCertificateId": "ASCAJB4VOWIYAWN5UEQAM",
"ServerCertificateName": "www.rdegges.com",
"Expiration": "2016-05-28T23:59:59Z",
"Path": "/cloudfront/rdegges-www/",
"Arn": "arn:aws:iam::001177337028:server-certificate/cloudfront/rdegges-www/www.rdegges.com",
"UploadDate": "2015-05-29T00:11:23Z"
},
{
"ServerCertificateId": "ASCAJCH7BQZU5SZZ52YEG",
"ServerCertificateName": "www.codehappy.io",
"Expiration": "2016-10-19T23:59:59Z",
"Path": "/cloudfrount/codehappy-www/",
"Arn": "arn:aws:iam::001177337028:server-certificate/cloudfrount/codehappy-www/www.codehappy.io",
"UploadDate": "2015-10-20T20:09:22Z"
}
]
}
NOTE: As you can see, I'm able to view all three of my SSL certificates, including my newly created one.
Next, I logged into the IAM UI to verify that my IAM user account has administrator access:
As you can see, my user is part of an 'Admins' group, which has unlimited admin access to AWS.
Finally, I logged into the CloudFront UI and attempted to select my new SSL certificate. Unfortunately, this is where things stop working =/ Only my other two SSL certs are listed:
Does anyone know what I need to do so I can use my new SSL certificate with CloudFront?
Thanks so much!
Most likely, the issue is that the path is incorrect: it is not cloudfrount but cloudfront.
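If that's the case, here is a minimal sketch (untested, using the certificate name from the question) that moves the existing certificate to the correctly spelled path instead of re-uploading it:
$ aws iam update-server-certificate \
    --server-certificate-name www.codehappy.io \
    --new-path /cloudfront/codehappy-www/
CloudFront only offers certificates whose IAM path begins with /cloudfront/, which is why the mistyped path hides the new certificate from the dropdown.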
I had a very similar issue, and the problem was directly related to my private key. Reissuing the certificate using an RSA 2048-bit key instead of an RSA 4096-bit key solved the issue for me. It could also be something other than key size, such as the formatting of your PEM blocks or an encrypted private key.
In short, ACM's import filter won't catch everything, nor will it verify that a certificate actually works across all AWS products, so double-check that your key size is compatible with CloudFront when using external certificates. Here's a list of the HTTPS requirements for CloudFront; remember that compatibility can vary from product to product, so always double-check: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cnames-and-https-requirements.html
Had I simply read the docs first, as usual, I would have saved myself a headache: 4096-bit keys are perfectly fine for some ACM functionality, but that does not include CloudFront.
Importing a certificate into AWS Certificate Manager (ACM): public key length must be 1024 or 2048 bits. The limit for a certificate that you use with CloudFront is 2048 bits, even though ACM supports larger keys.
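A quick way to check what you actually have (a sketch; the filename is illustrative):
$ openssl x509 -in www.example.com.crt -noout -text | grep 'Public-Key'
If that reports more than 2048 bits, the certificate won't work with CloudFront.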
I'm using a library to generate SSL certificates. My storage generates 4 files: certificate.pem, private_key.pem, chain.pem, and fullchain.pem.
I want to install this certificate in acquia cloud using their Rest API post endpoint to install ssl certificate. The payload looks like the following:
{
"legacy": 0,
"certificate": "pasted the content inside our certificate.pem",
"private_key": "pasted the content inside private_key.pem",
"ca_certificates": "pasted the content inside the fullchain.pem",
"label": "My New Cert"
}
When I sent the request, I received an error telling me to contact the API owner's support, and searching through the server logs I came across this:
Error response: 500 (Internal Server Error). Error message: Site certificate CA chain certificates are out of order..
What exactly does this error mean by 'out of order'?
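For context, a CA bundle is conventionally ordered so that each certificate is immediately followed by the certificate that signed it (intermediates first, root last). A sketch for inspecting the order of the certificates in a bundle:
$ openssl crl2pkcs7 -nocrl -certfile fullchain.pem | \
    openssl pkcs7 -print_certs -noout
This prints each certificate's subject and issuer in file order, so you can see whether the chain steps correctly from one certificate to its issuer.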
The microstack.openstack project recently enabled/required TLS authentication, as outlined here. I am working on deploying an OpenStack cluster to microstack using a terraform example here. As a result of the change, I receive an "x509: certificate signed by unknown authority" error when trying to create an OpenStack networking client data source.
data "openstack_networking_network_v2" "terraform" {
name = "${var.pool}"
}
The error I get when calling terraform plan:
Error: Error creating OpenStack networking client: Post "https://XXX.XXX.XXX.132:5000/v3/auth/tokens": OpenStack connection error, retries exhausted. Aborting. Last error was: x509: certificate signed by unknown authority
with data.openstack_networking_network_v2.terraform,
on datasources.tf line 1, in data "openstack_networking_network_v2" "terraform":
1: data "openstack_networking_network_v2" "terraform" {
Is there a way to ignore the certificate error, so that I can successfully use terraform to create the openstack cluster? I have tried updating the generate-self-signed parameter, but I haven't seen any change in behavior:
sudo snap set microstack config.tls.generate-self-signed=false
I think the insecure provider parameter is what you are looking for:
(Optional) Trust self-signed SSL certificates. If omitted, the OS_INSECURE environment variable is used.
Try:
provider "openstack" {
insecure = true
}
Disclaimer: I haven't tried that.
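Per the provider docs quoted above, the same behavior should also be reachable through the environment instead of the provider block (also untested):
$ export OS_INSECURE=true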
The problem was that I did not source the admin-openrc.sh file that I had downloaded from the horizon web page:
$ source admin-openrc.sh
I faced the same problem; in case it helps, here is my contribution:
sudo snap get microstack config.tls
Key Value
config.tls.cacert-path /var/snap/microstack/common/etc/ssl/certs/cacert.pem
config.tls.cert-path /var/snap/microstack/common/etc/ssl/certs/cert.pem
config.tls.compute {...}
config.tls.generate-self-signed true
config.tls.key-path /var/snap/microstack/common/etc/ssl/private/key.pem
In the terraform directory, do:
cat /var/snap/microstack/common/etc/ssl/certs/cacert.pem and copy/paste the output into cacert.pem
cat /var/snap/microstack/common/etc/ssl/certs/cert.pem and copy/paste the output into cert.pem
cat /var/snap/microstack/common/etc/ssl/private/key.pem and copy/paste the output into key.pem
Then create a main.tf file in your terraform directory:
provider "openstack" {
user_name = "admin"
tenant_name = "admin"
password = "pass" (get with sudo snap get microstack config.credentials.keystone-password)
auth_url = "https://host_ip:5000/v3"
#insecure = true (uncomment & comment cacert_file + key line)
cacert_file = "/terraform_dir/cacert.pem"
#cert = "/terraform_dir/cert.pem" (if needed)
key = "/terraform_dir/private.pem"
region = "microstack" (or regionOne)
}
To finish, run terraform plan / terraform apply.
I am working with a custom CA (certificate authority) on AWS IoT. I wonder if there is a way to lock it down to only my CA, i.e. to only allow connections from devices that present a certificate issued by my custom CA (and not AWS IoT's built-in certs) when they initiate a connection.
Thanks
If you generate the device certificates with a particular attribute, then a condition in the policy can be used to restrict connections to devices whose certificate carries that attribute.
e.g.
{
"Version":"2012-10-17",
"Statement":[
{
"Effect":"Allow",
"Action":[
"iot:Connect"
],
"Resource":[
"arn:aws:iot:us-east-1:123456789012:client/${iot:Connection.Thing.ThingName}"
],
"Condition":{
"ForAllValues:StringEquals":{
"iot:Certificate.Subject.Organization.List":[
"Example Corp",
"AnyCompany"
]
}
}
}
]
}
The list of certificate policy variables is at https://docs.aws.amazon.com/iot/latest/developerguide/cert-policy-variables.html
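As a sketch, here is one way to issue a device certificate whose subject carries the Organization that the policy condition above matches on (the key and file names are illustrative):
$ openssl req -new -key device.key -out device.csr \
    -subj "/O=Example Corp/CN=my-device-01"
$ openssl x509 -req -in device.csr -CA my-root-ca.crt -CAkey my-root-ca.key \
    -CAcreateserial -out device.crt -days 365 -sha256
The O= value ends up in iot:Certificate.Subject.Organization.List, which is what the condition checks.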
I was using Ansible 2.4 and included the letsencrypt module in one of my roles, hoping to get a complete .pem format file at the end (key, chain, cert). There was no problem generating the key or using the CSR to request the new cert, and no problem with the challenge, but when everything was done I was only getting the certificate back, with no chain.
When I tried to use them, Apache would fail to start, saying that the key and the cert did not match. I assumed this was because the chain was missing.
According to the docs here: https://docs.ansible.com/ansible/latest/modules/acme_certificate_module.html the chain|chain_dest and fullchain|fullchain_dest parameters weren't added until Ansible 2.5. So I upgraded to Ansible 2.7 (via git), but I'm still running into the exact same error...
FAILED! => {
"changed": false,
"msg": "
Unsupported parameters for (letsencrypt) module: chain_dest, fullchain_dest
Supported parameters include: account_email, account_key, acme_directory, agreement,
challenge, csr, data, dest, remaining_days"
}
I've tried the aliases and current names for both but nothing is working. Here is my current challenge-response call:
- name: Let the challenge be validated and retrieve the cert and intermediate certificate
letsencrypt:
account_key: /etc/ssl/lets_encrypt.key
account_email: ###########.###
csr: /etc/ssl/{{ myhost.public_hostname }}.csr
dest: /etc/ssl/{{ myhost.public_hostname }}.crt
chain_dest: /etc/ssl/{{ myhost.public_hostname }}.int
fullchain_dest: /etc/ssl/{{ myhost.public_hostname }}.pem
challenge: dns-01
acme_directory: https://acme-v01.api.letsencrypt.org/directory
remaining_days: 60
data: "{{ le_com_challenge }}"
tags: sslcert
The documentation says that this is valid, but the error response does not include chain|chain_dest or fullchain|fullchain_dest as valid parameters.
From the docs, I would expect this call to result in the new certificate being created (.crt), the chain being created (.int), and the fullchain being created (.pem).
Any help would be appreciated.
Should have waited 5 minutes... it seems the newer parameters are only available under the newer module name acme_certificate, even though the docs list letsencrypt as a valid alias. As soon as I updated the module name, it worked.
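For reference, here is the same task as above with only the module name changed, which is what ended up working for me (all paths and variables unchanged):
- name: Let the challenge be validated and retrieve the cert and intermediate certificate
  acme_certificate:
    account_key: /etc/ssl/lets_encrypt.key
    account_email: ###########.###
    csr: /etc/ssl/{{ myhost.public_hostname }}.csr
    dest: /etc/ssl/{{ myhost.public_hostname }}.crt
    chain_dest: /etc/ssl/{{ myhost.public_hostname }}.int
    fullchain_dest: /etc/ssl/{{ myhost.public_hostname }}.pem
    challenge: dns-01
    acme_directory: https://acme-v01.api.letsencrypt.org/directory
    remaining_days: 60
    data: "{{ le_com_challenge }}"
  tags: sslcert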
I have set up a Lambda and attached an API Gateway deployment to it. The tests in the gateway console all work fine. I created an AWS certificate for *.hazeapp.net. I created a custom domain in API Gateway and attached that certificate. In the Route 53 zone, I created the alias record and used the target that came up under API Gateway (the only one available). I named the alias rest.hazeapp.net. My client gets the ERR_SSL_VERSION_OR_CIPHER_MISMATCH error. Curl indicates that the TLS server handshake failed, which agrees with the SSL error, and also that the certificate CA checks out.
Am I doing something wrong?
I had this problem when my DNS entry pointed directly at the API Gateway deployment rather than at the one backing the custom domain name.
To find the domain name to point to:
aws apigateway get-domain-name --domain-name "<YOUR DOMAIN>"
The response contains the domain name to use. In my case I had a Regional deployment so the result was:
{
"domainName": "<DOMAIN_NAME>",
"certificateUploadDate": 1553011117,
"regionalDomainName": "<API_GATEWAY_ID>.execute-api.eu-west-1.amazonaws.com",
"regionalHostedZoneId": "...",
"regionalCertificateArn": "arn:aws:acm:eu-west-1:<ACCOUNT>:certificate/<CERT_ID>",
"endpointConfiguration": {
"types": [
"REGIONAL"
]
}
}
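A sketch of repointing the Route 53 alias at that regional domain name (the zone ID and the angle-bracket placeholders are illustrative, taken from the response above):
$ aws route53 change-resource-record-sets \
    --hosted-zone-id <YOUR_ROUTE53_ZONE_ID> \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "rest.hazeapp.net",
          "Type": "A",
          "AliasTarget": {
            "HostedZoneId": "<regionalHostedZoneId>",
            "DNSName": "<API_GATEWAY_ID>.execute-api.eu-west-1.amazonaws.com",
            "EvaluateTargetHealth": false
          }
        }
      }]
    }'
Note that the AliasTarget HostedZoneId is the regionalHostedZoneId from the get-domain-name response, not your own Route 53 zone ID.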