My SSL Certificate on my AWS Elastic Load Balancer is going to expire very soon and I need to replace it with a new one.
I've got the new certificate / bundle / key, uploaded to IAM but it won't show in the drop down in the Load Balancer settings that should let me choose the certificate to apply.
Here is the output when I run:
aws iam list-server-certificates
To my mind this shows that I have uploaded the new certificate to IAM successfully. The top certificate in the list is the one that is due to expire any moment now, and the other two are ones I have recently uploaded with the intention of replacing it (they are actually two attempts to upload the same PEM files).
The image below shows that only one certificate is available to choose to apply to the load balancer. Unfortunately it is the one that is about to expire.
The one thing that does strike me as a little odd is that the certificate name in the dropdown - ptdsslcert - is different to the names in the aws iam list-server-certificates output, even though it is the same certificate that expires imminently.
I'm really stuck here; if I don't figure this out soon I'm going to have an expired certificate on my domain, so I would really appreciate any help with this.
The AWS CLI uses a provider chain to look for AWS credentials in a number of different places, including system or user environment variables and local AWS configuration files.
http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
Although it's hard to guess the specific local machine configuration issue that caused the behavior observed, as noted in the comments, this appeared to be a case where the AWS CLI was using two different sets of credentials to access the two services, and those two sets of credentials belonged to two different AWS accounts.
The ServerCertificateName returned by the API (accessed through the CLI) should have matched the certificate name shown in the console drop-down for Elastic Load Balancer certificate selection.
The composition of ARNs (Amazon Resource Names) varies by service, but often includes the AWS account number. In this case, the account number shown in the CLI output did not match the one visible in the AWS console, leading to the conclusion that the AWS CLI was accessing an account other than the intended one.
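One quick way to check this (a diagnostic sketch; nothing here is taken from the original output) is to compare the account the CLI authenticates as with the account ID shown in the console:
aws sts get-caller-identity
# Returns UserId, Account, and Arn for whatever credentials the CLI resolved.
# If "Account" differs from the account ID shown in the console, the CLI is
# talking to a different AWS account.
aws iam list-server-certificates --query 'ServerCertificateMetadataList[].Arn'
# The 12-digit account number embedded in each ARN
# (arn:aws:iam::123456789012:server-certificate/...) should also match the console.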
As cross-confirmed by the differing display names, the "existing" certificate, uploaded a year ago, may have had the same content but was in fact a different IAM entity than the one seen in the dropdown, as the two certificates were associated with entirely different accounts.
I've set up my app running on Cloud Run with a Let's Encrypt wildcard certificate to cover subdomains. It works fine, but every time I run testssl.sh or other similar tools they notice 2 certificates: mine and Google's. The second certificate throws errors regarding a name mismatch, and from time to time (I couldn't reproduce it, so it may not be a problem) even browsers notice this and say the cert is not valid, though a refresh fixes it.
Is this something common and should I ignore it? Google's DIG shows that the domain has the correct IP as A record and everything else works fine.
Use only one certificate.
A wildcard certificate provides little benefit with Cloud Run: only domain names that are explicitly mapped are served, so the wildcard does not help, and the downside is that you must manually renew the certificate every 90 days.
Use Google-managed certificates instead.
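For example (a sketch only; the service name, domain, and region are placeholders, and depending on your gcloud version the command may still sit under the beta component), mapping a subdomain to the Cloud Run service lets Google issue and renew the certificate for you:
gcloud beta run domain-mappings create \
  --service my-service \
  --domain app.example.com \
  --region us-central1
# Google provisions a managed certificate for app.example.com and renews it
# automatically, so there is no 90-day manual renewal.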
I'm struggling to use AWS CloudFormer to generate a CloudFormation template. I have already launched the CloudFormer stack twice, attempted to connect to the associated DNS name for the EC2 instance generated each time, and keep receiving the error pictured below.
I have already tried to create a new SSL certificate for the EC2 instance via AWS Certificate Manager, but AWS does not allow this for EC2 instances. I'm not very familiar with SSL/HTTPS processes and would appreciate any guidance on next steps to address or troubleshoot this.
Upon further research into this, I have found the following issue:
Specifically, I'm seeing the following SSL certificate issue:
Has anyone else seen this yet with CloudFormer recently?
CloudFormer uses self-signed certificates that are generated by the stack, and this is the normal browser warning for a self-signed certificate. For your purposes, you can simply click the link at the bottom of the warning page (Proceed to EC2-xxx (Unsafe)) and ignore the warning. You will connect successfully in spite of it.
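If you want to confirm that the warning really is just the stack's self-signed certificate (a diagnostic sketch; substitute your CloudFormer instance's DNS name for the placeholder), you can inspect it from the command line:
openssl s_client -connect ec2-xxx.compute-1.amazonaws.com:443 -showcerts </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer
# For a self-signed certificate the subject and issuer lines are identical.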
SSL certificates require a domain. On AWS you can set one up with AWS Certificate Manager, but it will still have an issue until you correctly configure Route 53 as well.
I am currently struggling with the following task. I don't want to include my TLS certificates in my templates because:
I don't want to check credentials into source control while still checking in the templates
I am using multiple applications with the same certificate and don't want to update every repo just because I distribute a new certificate
Now my approach is this: I am using Jenkins for my build pipelines, and I have a repo that is used just for certificate management. Whenever it is updated, it runs and distributes the certificate and private key to OpenShift secrets on various clusters.
When running the template of an application I retrieve the information from the secret and set the values in the Route. And here's where things get tricky: I can only use single-line values because
OpenShift templates will not accept multiline parameters with oc process
Secrets will not store multiline values
So the solution seemed easy: just store the certificate with \n line breaks and set it in the Route like this. However, OpenShift will not accept single-line certificates, resulting in the error
spec.tls.key: Invalid value: "redacted key data": tls: found a certificate rather than a key in the PEM for the private key
Now the solution could be to insert the certificate as multiple lines directly into the template file before processing and applying it to the cluster, but that seems a little hacky to me. So my question is:
How can you centrally manage TLS certificates for your applications and set them correctly in the templates you're applying?
Secrets can hold multiline values. You can create a secret from a certificate file and mount that secret as a file into your containers. See here for how to create secrets from files:
https://kubernetes.io/docs/concepts/configuration/secret/
Use the OpenShift command-line tool (oc) instead of kubectl.
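For example (a sketch; the secret, namespace, and file names are placeholders), the certificate and key can go into a single TLS secret straight from the PEM files, with no single-line conversion:
oc create secret tls app-tls \
  --cert=tls.crt \
  --key=tls.key \
  -n my-project
# The PEM files keep their original line breaks inside the secret.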
For certificates, there is something called cert-manager:
https://docs.cert-manager.io/en/latest/
This will generate certs as needed. You might want to take a look.
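As a rough sketch of how that can look once cert-manager is installed (the issuer name, email, and solver are assumptions for illustration, not something from this thread), you declare an ACME issuer once and then request certificates per application:
oc apply -f - <<'EOF'
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
    - http01:
        ingress:
          class: nginx
EOF
# cert-manager then issues and renews certificates that reference this issuer.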
To centrally manage TLS certificates for your applications, you can create a shared secret and consume it via a volume mount.
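A minimal sketch of that approach (the deployment, secret, and mount path names are placeholders): mount the shared TLS secret into each pod that needs it.
oc set volume deployment/my-app \
  --add --name=tls \
  --type=secret \
  --secret-name=app-tls \
  --mount-path=/etc/tls
# The container then reads /etc/tls/tls.crt and /etc/tls/tls.key at runtime.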
I have a problem when using a custom CNAME and SSL/HTTPS for a CloudFront distribution. I set up a CloudFront distribution to use as a CDN on my WordPress site, using the W3TC plugin to configure things.
I imported an SSL certificate from my hosting provider to use with the CloudFront distribution. I also configured a CNAME at my hosting for the distribution (e.g., "cdn.example.com") to use in place of the CloudFront domain name (e.g., "d1234.cloudfront.net").
After setting all this up I immediately noticed that all the images were just broken image links. Right-clicking an image to open it in a new browser window resulted in the browser warning me that "the connection is not private" and that the website "may be impersonating cdn.example.com". The source showed that none of the CloudFront CDN resources were being loaded. Chrome reported "Failed to load resource: net::ERR_CERT_COMMON_NAME_INVALID" for several resources.
After experimenting I found that if I stopped using the CNAME (by removing it from the W3TC plugin field) and used the CloudFront domain name (i.e., "d1234.cloudfront.net") instead, everything worked all right. So images loaded successfully from d1234.cloudfront.net, where they wouldn't from cdn.example.com.
I have another site that is set up exactly the same except it doesn't use SSL/HTTPS: the use of a custom CNAME for the CloudFront distribution there doesn't cause any problems at all.
So the problem with CloudFront seems to appear when I try to use SSL/HTTPS and a custom CNAME.
The Chrome error report seems to indicate that there's a problem with the SSL certificate that I imported (what, I don't know - I'm not at all clued-up with SSL certificates). If that's the cause of the problem, should I get a certificate from AWS to enable the use of a custom CNAME? If so, what should I stipulate for the certificate? And I'm not sure how that works having two certificates - one for my domain and another for CloudFront?
It sounds like you may have missed adding your CNAME to the CloudFront distribution, i.e. under 'Alternate Domain Names (CNAMEs)' in the distribution settings.
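If you have CLI access, a quick way to verify this (a sketch; the distribution ID is a placeholder) is to look at the Aliases and ViewerCertificate sections of the distribution config:
aws cloudfront get-distribution-config --id E1234567890ABC \
  --query 'DistributionConfig.[Aliases,ViewerCertificate]'
# Aliases should list cdn.example.com, and ViewerCertificate should reference
# a certificate whose names cover cdn.example.com.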
(I know this is an old question but as it stands unresolved and I just hit the same issue, I think this might help others)
Below are the issues:
The certificate does not match the issuer's name
Google Chrome shows a browser error
The address error is due to the certificate mismatch
Please check that the SSL certificate generated for your domain is valid and that the same certificate is uploaded to CloudFront.
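One way to check this from the command line (a sketch; the hostname is the example name used above) is to see which names the certificate served by CloudFront actually covers:
openssl s_client -connect cdn.example.com:443 -servername cdn.example.com </dev/null 2>/dev/null | openssl x509 -noout -text | grep -E 'Subject:|DNS:'
# The DNS: entries must include cdn.example.com; if they don't, Chrome reports
# ERR_CERT_COMMON_NAME_INVALID exactly as described above.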
Goal: I would like to keep sensitive data in S3 buckets and process it on EC2 instances located in the private cloud. I researched that it is possible to set up an S3 bucket policy by IP and by IAM user ARNs, so I consider the data in the S3 bucket to be 'on the safe side'. But I am worried about the following scenario:
1) there is a VPC
2) inside it there is an EC2 instance
3) there is a user under a controlled (allowed) account with permissions to connect to and work with the EC2 instance and the buckets
The buckets are defined and configured to work only with known (authorized) EC2 instances.
Security leak: the user uploads a malware application to the EC2 instance and, while processing the data, executes it; the malware then transfers the data to other (unauthorized) buckets under a different AWS account. Disabling uploads of data to the EC2 instance is not an option in my case.
Question: is it possible to restrict access at the VPC firewall in such a way that specific S3 buckets are accessible but access to any other bucket is denied? Assume the user might upload a malware application to the EC2 instance and use it to upload data to other buckets (under a third-party AWS account).
There is not really a solution for what you are asking, but then again, you seem to be attempting to solve the wrong problem (if I understand your question correctly).
If you have a situation where untrustworthy users are in a position to "connect and work with ec2 instance and buckets" and to upload and execute application code inside your VPC, then all bets are off and the game is already over. Shutting down your application is the only fix available to you. Trying to limit the damage by preventing the malicious code from uploading sensitive data to other buckets in S3 should be the least of your worries; a malicious user has many options beyond putting the data back into S3 in a different bucket.
It's also possible that I am interpreting "connect and work with ec2 instance and buckets" more broadly than you intended, and all you mean is that users are able to upload data to your application. Well, okay... but your concern still seems to be focused on the wrong point.
I have applications where users can upload data. They can upload all the malware they want, but there's no way any code -- malicious or benign -- that happens to be contained in the data they upload will ever get executed. My systems will never confuse uploaded data with something to be executed or handle it in a way that this is even remotely possible. If your code will, then you again have a problem that can only be fixed by fixing your code -- not by restricting which buckets your instance can access.
Actually, I lied when I said there wasn't a solution. There is one, but it's fairly preposterous:
Set up a reverse web proxy, either in EC2 or somewhere outside, but of course make its configuration inaccessible to the malicious users. Configure the proxy to only allow access to the desired bucket. With Apache, for example, if the bucket were called "mybucket," that might look something like this:
ProxyPass /mybucket http://s3.amazonaws.com/mybucket
Additional configuration on the proxy would deny access to the proxy from anywhere other than your instance. Then, instead of allowing your instance to access the S3 endpoints directly, only allow outbound HTTP toward the proxy (via the security group for the compromised instance). Requests for buckets other than yours will not make it through the proxy, which is now the only way "out." Problem solved. At least, the specific problem you were hoping to solve should be solvable by some variation of this approach.
Update to clarify:
To access the bucket called "mybucket" in the normal way, there are two methods:
http://s3.amazonaws.com/mybucket/object_key
http://mybucket.s3.amazonaws.com/object_key
With this configuration, you would block (not allow) all access to all S3 endpoints from your instances via your security group configuration, which would prevent accessing buckets with either method. You would, instead, allow access from your instances to the proxy.
If the proxy, for example, were at 172.31.31.31 then you would access buckets and their objects like this:
http://172.31.31.31/mybucket/object_key
The proxy, being configured to only permit certain patterns in the path to be forwarded -- and any others denied -- would be what controls whether a particular bucket is accessible or not.
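The security-group side of this might look roughly as follows (a sketch; the group ID and proxy address are placeholders): remove the default allow-all outbound rule and permit outbound HTTP only to the proxy.
aws ec2 revoke-security-group-egress --group-id sg-0123456789abcdef0 \
  --ip-permissions 'IpProtocol=-1,IpRanges=[{CidrIp=0.0.0.0/0}]'
# Drop the default allow-all egress rule, then allow HTTP only to the proxy.
aws ec2 authorize-security-group-egress --group-id sg-0123456789abcdef0 \
  --ip-permissions 'IpProtocol=tcp,FromPort=80,ToPort=80,IpRanges=[{CidrIp=172.31.31.31/32}]'
# Direct requests to the S3 endpoints are now blocked at the security group.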
Use VPC Endpoints. This allows you to restrict which S3 buckets your EC2 instances in a VPC can access. It also allows you to create a private connection between your VPC and the S3 service, so you don't have to allow wide open outbound internet access. There are sample IAM policies showing how to control access to buckets.
An added bonus with VPC Endpoints for S3 is that certain major software repos, such as Amazon's yum repos and Ubuntu's apt-get repos, are hosted in S3, so you can also let your EC2 instances get their patches without giving them wide-open internet access. That's a big win.
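A minimal sketch of that setup (the VPC ID, route table ID, region, and bucket name are placeholders): create the gateway endpoint with a policy that only allows the buckets you intend to reach.
cat > s3-endpoint-policy.json <<'EOF'
{
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::mybucket",
        "arn:aws:s3:::mybucket/*"
      ]
    }
  ]
}
EOF
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0123456789abcdef0 \
  --policy-document file://s3-endpoint-policy.json
# Instances routed through this endpoint can reach "mybucket" but no other S3
# buckets; repo buckets would need to be added to the policy to keep yum/apt
# updates flowing through the endpoint.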