AWS SSM recently added support for on-premises VMs. When following the user guide, I am a bit lost on "Step 3: Install a TLS certificate on On-Premises Servers and VMs". It states that:
On base operating systems, on instances created from AMIs that are not supplied by Amazon, and on your own on-premises servers and VMs, you must install and enable a certificate from Amazon Trust Services
using AWS Certificate Manager (ACM).
Each of your managed instances must have one of the following Transport Layer Security (TLS) certificates installed.
Amazon Root CA 1
Starfield Services Root Certificate Authority - G2
Starfield Class 2 Certificate Authority
Does it mean I need to get a certificate from ACM and install it on the VM if it is to communicate with AWS services? My understanding, however, is that ACM is integrated with AWS services and never gives out private keys. Or do I need to add the CA certificate to the VM?
Please browse through the document below; it will be helpful in understanding.
https://aws.amazon.com/blogs/security/how-to-prepare-for-aws-move-to-its-own-certificate-authority/
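If it is only a matter of trusting those roots, a minimal sketch for a RHEL/CentOS-style on-premises server might look like the following (the download URL is Amazon's public certificate repository; adjust the trust-store commands for your distribution):
# Download the Amazon Root CA 1 certificate (a public CA certificate, no private key involved)
curl -o AmazonRootCA1.pem https://www.amazontrust.com/repository/AmazonRootCA1.pem
# Add it to the system trust store so the SSM Agent can validate AWS endpoints
sudo cp AmazonRootCA1.pem /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust extract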
Related
I recently installed a new self-managed certificate on the Google Cloud Platform. This is because the old one was out of date. I believe that I have done this correctly.
sgnapper#cloudshell:~ (tactical-curve-284112)$ gcloud compute ssl-certificates list \
--global
NAME: eris-sypro
TYPE: SELF_MANAGED
CREATION_TIMESTAMP: 2022-06-23T06:32:33.689-07:00
EXPIRE_TIME: 2023-06-22T16:59:59.000-07:00
MANAGED_STATUS:
Yet I get:
Your connection isn't private
Attackers might be trying to steal your information from syproltd.co.uk (for example, passwords, messages or credit cards).
NET::ERR_CERT_REVOKED
when I try to connect to the site.
I am not familiar with Google Cloud and I wonder if there is a step I have missed.
If anybody can help, I would be grateful.
gcloud compute ssl-certificates create does not automagically provision the SSL certificate to any service; it only adds it to the infrastructure. The certificate behind NET::ERR_CERT_DATE_INVALID expired 9 days ago, and the new one likely isn't provisioned to the load balancer yet. This is explained here: Step 3: Associate an SSL certificate with a target proxy. If the certificate is instead installed on a VM instance (no load balancer), you may run gcloud compute ssl-certificates delete eris-sypro --global and replace the SSL certificate installed on the VM instance directly.
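If the certificate is meant to be served by a load balancer, a minimal sketch of that association step might look like this (eris-sypro comes from the question; the proxy name is a placeholder for your target HTTPS proxy):
# Attach the new certificate to the global target HTTPS proxy that serves the site
gcloud compute target-https-proxies update my-https-proxy \
    --ssl-certificates=eris-sypro \
    --global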
I have a Node.js app using the kafkajs package to connect to AWS MSK.
We are moving to Strimzi Kafka because we already have a Kubernetes cluster and we don't need MSK anymore.
Until now we connected with SSL but didn't have to specify any CA path or anything. We used this connection method both in our Node.js apps and in kafka-ui and it worked with no issues.
We are trying to do the same with Strimzi Kafka, but we get SSL handshake failed.
My understanding is that AWS MSK uses Amazon certificates that are publicly trusted, while Strimzi Kafka generates self-signed certificates, which is fine by us.
How can I keep using the same approach we used with AWS MSK, i.e. just setting ssl: true in kafkajs (which works there)?
Thanks.
The easiest way to use a certificate signed by some public CA is the listener certificate option, which lets you provide your own server certificate for a given listener. I'm not sure how the Amazon CA works, but this blog post shows how to do it, for example, using Cert-Manager and Let's Encrypt.
Keep in mind that to use public CAs, you usually need proper domain names and not just internal Kubernetes services. This might, for example, increase costs or latency if your applications run in the same Kubernetes cluster, because the traffic might need to go through a load balancer or ingress.
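If staying with Strimzi's self-signed certificates is acceptable (as the question suggests), one common alternative is to extract the cluster CA certificate that Strimzi generates and make the client trust it. A minimal sketch, assuming a cluster named my-cluster in the kafka namespace:
# Export the cluster CA certificate from the Secret Strimzi creates alongside the cluster
kubectl get secret my-cluster-cluster-ca-cert -n kafka \
    -o jsonpath='{.data.ca\.crt}' | base64 -d > strimzi-ca.crt
# Then point the kafkajs ssl option at strimzi-ca.crt (ssl: { ca: [...] }) instead of plain ssl: true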
I have created an image of my TWAS application and deployed it in a container inside an OpenShift pod. In my TWAS ND on a virtual machine, I used to go to the admin console WebSphere environment truststore on a node and set up TLS certificates so my application could communicate with external APIs over HTTPS. These certificates are public certificates and don't have any private keys; they are .crt and .pem files. Now I am wondering how I can set up my third-party TLS certificates for my application running inside the pod as a container. I don't want to make any code changes to my J2EE application, which I have migrated from an on-prem VM to OpenShift.
Note: I am using the TWAS base runtime here, not Liberty, for my newly migrated J2EE app on OpenShift.
When you build your application image, you can add a trusted signer and a short script into /work/ prior to configure.sh
https://www.ibm.com/docs/en/was/9.0.5?topic=tool-signercertificatecommands-command-group-admintask-object#rxml_atsignercert__cmd1
AdminTask.addSignerCertificate('[-keyStoreName NodeDefaultTrustStore -certificateAlias signer1 -certificateFilePath /work/signer.pem -base64Encoded true]')
AdminConfig.save()
The root signer might not be the .pem/.crt you have; those could be the issued certificate and the intermediate signers. WebSphere allows you to set up the trust at any level, but it's ideal to trust the root CA that issued the cert.
We've also used a technique of importing a trust store into a Secret and mounting that into the expected location in the pods. This might make sense if you want to isolate any certificate changes from the app build cycle.
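For the Secret-based approach, a minimal sketch with oc (the secret name, deployment name, and mount path are placeholders; point the mount path at wherever your configuration expects the signer files):
# Package the signer certificate (or a whole trust store) into a Secret
oc create secret generic twas-signers --from-file=signer.pem=./signer.pem
# Mount it into the running pods at the expected location
oc set volume deployment/my-twas-app --add --name=signers \
    --type=secret --secret-name=twas-signers --mount-path=/work/certs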
Our company proxy breaks SSL connections and the proxy uses our own CA.
So I always have to tell the applications I use (RubyGems, Python pip, Azure CLI, ...) to use our company CA certificate.
Does anyone know how I can use our CA certificate with a local Terraform installation?
Is the CA deployed to your OS's certificate store, or can you import it? If so, Terraform (and probably other tools) should just be able to work with a proxy like this with no other configuration. If you need further direction, tell us what operating system you are on and what access you typically have to the CA.
Edit:
#Kreikeneka, have you placed the cert in the location CentOS expects and imported it into the store? There is a command you need to run that actually imports it: update-ca-trust. Have you run this? If the cert is being used for SSL and you just need to trust it when going through your proxy, that is all you should need to do. You shouldn't need to tell your tools (Terraform, pip, etc.) to trust it for SSL with the proxy. Once the cert is imported into your certificate store, it should be passively usable by any process making connections from the machine.
If you are using the cert for client authentication to the proxy then just trusting the cert by placing it in the certificate store probably won't work.
I'm not clear from your comments if you need the cert for SSL or for client authentication to the proxy. Check with your IT what it is really used for if you aren't sure and get back to us.
As of CentOS 6+, there is a tool for this. Per this guide, certificates can be installed first by enabling the system shared CA store:
update-ca-trust enable
Then placing the certificates to trust as CAs in /etc/pki/ca-trust/source/anchors/ for high priority (non-overridable), or /usr/share/pki/ca-trust-source/ (lower priority, overridable), and finally updating the system store with:
update-ca-trust extract
Et voila, system tools will now trust those certificates when making secure connections!
Source:
https://serverfault.com/questions/511812/how-does-one-install-a-custom-ca-certificate-on-centos
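Putting that together for the Terraform case, a minimal sketch on CentOS/RHEL (the file name and proxy address are placeholders):
# Trust the company CA system-wide
sudo cp company-root-ca.crt /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust extract
# Terraform honours the standard proxy environment variables; no CA-specific flag should be needed
export HTTPS_PROXY=http://proxy.example.internal:3128
terraform init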
We are planning to create and install self-signed certificates on Azure web roles.
We have a requirement to create the certificate on the web role itself and install it there.
But we cannot find makecert.exe on the Azure web and worker roles. We did remote desktop into an Azure role and found that makecert.exe is missing.
Any direction on creating and installing a certificate on an Azure role would be helpful.
If there are any management APIs available for creating a certificate on a web role, please share them with me, as I am unable to locate them on MSDN.
You have a few options to create self-signed certificates:
Deploy makecert.exe with your application (include it in your VS project, set Copy Local = true)
Write something yourself to generate the certificate (example here: https://github.com/mono/mono/blob/master/mcs/tools/security/makecert.cs)
But there's more to it than simply generating a certificate. What will you do if you have more than one instance running? Will you install the certificate on 1 instance? Or do you need all your instances to have the certificate? What if you redeploy the application? ...
In those cases you might want to look ahead. Would it be an option to store all those certificates in blob storage? Maybe you could have a process running on each instance that 'synchronizes' the certificates with the current instance. You could also use AppFabric ServiceBus Topics to notify other instances when a new certificate has been generated...
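If you do ship makecert.exe with the role, a minimal sketch of a startup task that generates a self-signed certificate into the machine store might look like this (the subject name is a placeholder):
REM startup.cmd, deployed alongside makecert.exe in the role package
makecert.exe -r -pe -n "CN=myapp.cloudapp.net" -sky exchange -len 2048 -ss My -sr LocalMachine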
The direct answer to your question is that Makecert.exe is a utility installed with Visual Studio or the Windows SDK, or downloaded directly from Microsoft sites. A Windows Azure VM is sure not to have makecert.exe because it is not part of the base Windows deployment, so if you want to use/run Makecert in a Windows Azure VM you really need to add it to your project and deploy it.
HOWEVER,
If you need to deploy a certificate to Windows Azure, you really don't need to generate it on the fly (i.e. using Makecert.exe) because there is an easier way to do it. You just need to add (or deploy) your PFX certificate to your Windows Azure Service -> Certificates section, and when your VM is initialized, the certificate will be provisioned to your Windows Azure role (Web, Worker, or VM), so there is no need to add Makecert.exe to your project and then use a startup task to run it.
Instead of depending on Makecert.exe or any other method to get a certificate into your role, I would suggest using the above method, which is actually designed for this requirement. If you don't know how to deploy a certificate to your Windows Azure Service, either directly in the portal or using PowerShell, please let me know.
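For the service-certificate route, a minimal sketch with the classic Azure PowerShell module (the service name, file path, and password are placeholders):
# Upload the PFX to the cloud service's Certificates section; roles pick it up when they are provisioned
Add-AzureCertificate -ServiceName "myservice" -CertToDeploy "C:\certs\mycert.pfx" -Password "pfx-password"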