I read a lot of the questions and answers about ECS/Fargate with a private repo,
and I have been assigned to use ECS with our company's internal repository, Nexus.
Since this Nexus instance serves HTTPS with a self-signed certificate,
it seems ECS does not like self-signed certs.
Is there any way to bypass the SSL certificate check?
The error is below:
CannotPullContainerError: inspect image has been retried 1 time(s): failed to resolve ref "/<repo_acc>/:latest": failed to do request: Head https:///<repo_acc>//manifests/latest: x509: certificate signed by unknown authority
It seems another expert on re:Post has already answered my question: there is no way to bypass this on Fargate. I will convince my team to use ECR as a bridge for the image.
https://repost.aws/questions/QU7VYfP92kSkSqQCNsrCb4vw/aws-fargate-pulling-from-internal-private-repo-possible-with-ssl-cert-bypass
Quoted from the accepted answer:
There is no way to bypass the SSL certificate check.
https://github.com/aws/containers-roadmap/issues/740
Unfortunately, no way to add a private CA certificate is provided either.
https://github.com/aws/containers-roadmap/issues/1301
Related
I am trying to create a GitHub Action that runs on a Windows Server self-hosted runner, and I'm stuck on my checkout failing at the LFS download portion.
I'm using:
- uses: actions/checkout@v3
  with:
    lfs: true
The checkout for the normal code works fine, but when it gets to the LFS download step I get a lot of messages complaining about x509: certificate signed by unknown authority.
LFS: Get "https://github-cloud.githubusercontent.com/alambic/details_changed_to_protect_the_innocent": x509: certificate signed by unknown authority
The self-hosted runner is on a domain behind a firewall that interrogates HTTPS traffic and inserts its own certificate into the chain, so I'm guessing that the unknown authority is that certificate, but I don't know where that certificate needs to be trusted for things to work.
The certificate is trusted by the OS and is installed in the certificate store through a group policy, but it seems that Git LFS verifies the certificate chain separately from that and complains anyway because the certificate is unexpected.
A common solution I've seen floating around for things like this is to just turn off SSL checking, but that feels like a temporary hack rather than a real solution. I would like this to work with all security in place.
As an additional note, this is running on a server that also runs TeamCity, and the TeamCity GitHub configuration is able to clone repos with LFS from that same server, so these problems exist only inside the GitHub Actions runner environment that gets set up.
Since the firewall only inserts its certificate into HTTPS traffic, I was able to get things working using an SSH key. I added the private key as a secret and the public key to the repo's deploy keys, and now everything works as expected.
- uses: actions/checkout@v3
  with:
    lfs: true
    ssh-key: ${{ secrets.repo_ssh }}
I'd like to deploy an LDAP server on my Kubernetes cluster. The server itself is up and running, but I'd like to enable SSL encryption for it as well.
I already have cert-manager up and running, and I also use a multitude of SSL certificates with the ingresses for my HTTP traffic. It would be really nice if I could just use a CertificateRequest with my LDAP server as well, managed and updated by cert-manager.
My problem is that I have no idea how to mount a Certificate to my Kubernetes pod. I know that cert-manager creates a Secret and puts the certificate data in it. The problem with that is that I have no visibility into the certificate's validity this way, and I can't remount/reapply the new certificate when it is renewed.
Has anybody done anything like this? Is there a non-hacky way to have an ingress terminate the SSL encryption instead?
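If it helps, the Secret that cert-manager creates is an ordinary kubernetes.io/tls Secret, so it can be mounted into the pod as a volume like any other Secret, and the renewal concern can be handled on the application side. Below is a minimal Go sketch of that idea, assuming the Secret is mounted at the hypothetical path /etc/ldap-tls with the usual tls.crt/tls.key keys; it is a generic illustration, not specific to any particular LDAP server.

package main

import (
	"crypto/tls"
	"log"
)

func main() {
	// cert-manager stores the issued certificate under the standard
	// tls.crt / tls.key keys; assume the Secret is mounted at /etc/ldap-tls.
	cfg := &tls.Config{
		// Re-read the files on every handshake so a renewed certificate is
		// picked up without restarting the pod or remounting anything.
		GetCertificate: func(_ *tls.ClientHelloInfo) (*tls.Certificate, error) {
			cert, err := tls.LoadX509KeyPair("/etc/ldap-tls/tls.crt", "/etc/ldap-tls/tls.key")
			if err != nil {
				return nil, err
			}
			return &cert, nil
		},
	}

	// 636 is the conventional LDAPS port; whatever LDAP implementation you
	// run would sit behind this listener.
	ln, err := tls.Listen("tcp", ":636", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer ln.Close()

	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Println(err)
			continue
		}
		conn.Close() // placeholder: hand the connection off to the LDAP server logic
	}
}

Since the kubelet updates mounted Secret volumes when the Secret changes, re-reading the files should usually be enough to pick up a renewed certificate.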
I'm currently trying to set up GlusterFS integration for a Kubernetes cluster. Volume provisioning is done with Heketi.
The GlusterFS cluster has a pool of 3 VMs.
The 1st node has the Heketi server and client configured. The Heketi API is secured with a self-signed OpenSSL certificate and can be accessed,
e.g. curl https://heketinodeip:8080/hello -k
returns the expected response.
The StorageClass definition sets "resturl" to the Heketi API, https://heketinodeip:8080.
The StorageClass was created successfully, but when I try to create a PVC, it fails with:
"x509: certificate signed by unknown authority"
This is expected, as usually one either has to allow this insecure HTTPS connection or explicitly import the issuer CA (e.g. a file simply containing the PEM string).
But: how is this done for Kubernetes? How do I allow this insecure self-signed-cert HTTPS connection to Heketi from Kubernetes, or where/how do I import a CA?
It is not a DNS/IP problem; that was resolved with correct subjectAltName settings.
(It seems everybody is using Heketi, and it still seems to be a standard use case for GlusterFS integration, but always without SSL when connected to Kubernetes.)
Thank you!
To skip verification of the server certificate, the caller just needs to specify InsecureSkipVerify: true. Refer to this GitHub issue for more information (https://github.com/heketi/heketi/issues/1467).
This page describes a way to use a self-signed certificate. It is not explained thoroughly, but it can still be useful (https://github.com/gluster/gluster-kubernetes/blob/master/docs/design/tls-security.md#self-signed-keys).
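Both suggestions come down to how the Go TLS client talking to Heketi is configured. Here is a minimal standard-library sketch of the two options, purely as an illustration: the URL is the one from the question, and heketi-ca.pem is a hypothetical file containing the self-signed certificate.

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	// Option 1: skip server-certificate verification entirely.
	// This is what the InsecureSkipVerify: true setting mentioned above does.
	insecureClient := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	// Option 2: keep verification, but trust the self-signed certificate by
	// adding its PEM to a custom root pool (heketi-ca.pem is a placeholder).
	pem, err := os.ReadFile("heketi-ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(pem) {
		log.Fatal("could not parse heketi-ca.pem")
	}
	trustedClient := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}

	for _, c := range []*http.Client{insecureClient, trustedClient} {
		resp, err := c.Get("https://heketinodeip:8080/hello")
		if err != nil {
			log.Println(err)
			continue
		}
		fmt.Println(resp.Status)
		resp.Body.Close()
	}
}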
I am trying to publish my Alexa skill using a Let's Encrypt SSL certificate.
Google Chrome does not show any warning icon if I browse to my HTTPS URL using the Let's Encrypt certificate.
However, when I try to test using the Alexa console, an error occurs:
"SSL Handshake failed".
I see on the Amazon Alexa forums that there is a buzz around Let's Encrypt support.
Some posts say it is supported and some say it isn't.
Could someone here clarify whether a free Let's Encrypt SSL certificate is supported for building custom Alexa skills?
Download the contents of your fullchain.pem certificate from /etc/letsencrypt/live/<domain>/fullchain.pem on your server.
On your skill's configuration page, select the "SSL" tab.
Select "I will upload a self-signed certificate in X.509 format."
Paste in the contents of your fullchain.pem file.
I was able to get a Let's Encrypt wildcard certificate working with an Alexa custom skill after choosing the option "My development endpoint is a sub-domain of a domain that has a wildcard certificate from a certificate authority" in the developer console.
I also had an issue with an Nginx reverse proxy configuration that caused a failure in the Alexa Simulator and provided no helpful error. Fortunately, you can also use the Manual JSON option, which yielded the error "Cannot establish SSL connection to your skill endpoint".
I was able to track the issue down to the ssl_ciphers value in the ssl.conf file. Even though it was compatible with the recommended intermediate security settings for TLS 1.2, I had to comment it out to make it work. I hope someone else can determine why Amazon's servers reject the certificate when this setting is used; it could be that TLS 1.3 is now required.
:)
I'm using GoLang to start an https server.
According to https://www.ssllabs.com/ssltest, my certificate configuration is grade A, with the warning "Chain issues - Contains anchor".
In my previous post (GoLang with http ssl GoDaddy's certificate - This server's certificate chain is incomplete.), @VonC helped me for the second time and also alerted me to that warning.
I tried everything I could think of and couldn't resolve the issue. GoDaddy has a repository of certificates: I tried downloading gdroot-g2.crt and configuring it as RootCAs, and I tried downloading the intermediate certificate gdig2.crt and configuring it as ClientCAs, but the results are the same.
What am I missing?
For the full code, please see my previous Stack Overflow post, GoLang with http ssl GoDaddy's certificate - This server's certificate chain is incomplete.
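For what it's worth, "Contains anchor" usually just means the served chain includes the root (anchor) certificate, which clients don't need; and in Go, RootCAs/ClientCAs only control which peer certificates are trusted, not which chain the server sends. The chain it sends comes from the certificate file itself, so the likely fix is to serve a bundle that contains only the leaf plus GoDaddy's intermediate (gdig2.crt) and not gdroot-g2.crt. A minimal sketch, where chain.pem and server.key are placeholder file names:

package main

import (
	"log"
	"net/http"
)

func main() {
	// chain.pem: the server (leaf) certificate followed by GoDaddy's
	// intermediate (gdig2.crt), but NOT the root (gdroot-g2.crt).
	// Sending the root is what SSL Labs flags as "Contains anchor".
	// server.key: the private key for the leaf certificate.
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	log.Fatal(http.ListenAndServeTLS(":443", "chain.pem", "server.key", mux))
}

Rebuilding the bundle (e.g. cat yourdomain.crt gdig2.crt > chain.pem) and restarting should clear the warning without touching RootCAs or ClientCAs.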