How can I totally disable SSL authentication in Puppet?

In our environment, we often reinstall machines with the same hostname.
So every time I sync the configuration via Puppet, SSL authentication fails.
How can I totally disable SSL authentication?

The assumption that connections are authenticated and identified through trusted SSL certificates runs very deep in Puppet's core. There is no way to make Puppet run through unencrypted channels only.
Things you can try:
holding on to signed certificates and getting them back onto machines just when they are re-provisioned
creating a hook that will make the master revoke the old certificate when a machine is decommissioned/recommissioned
The latter approach makes signing fresh certificates considerably easier. You might even implement an autosigning scheme, although doing that securely is tricky.
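A minimal sketch of such a hook, assuming a reasonably recent Puppet (the script name is hypothetical; the exact CLI differs between versions, with puppet cert clean on releases before Puppet 6 and puppetserver ca clean from Puppet 6 onward):
#!/bin/sh
# decommission-node.sh (illustrative) - run on the master when a machine
# is rebuilt, so its next agent run can request a fresh certificate.
NODE="$1"
# Puppet 6+: revoke the old certificate and remove its files.
puppetserver ca clean --certname "$NODE"
# Equivalent on older Puppet versions:
# puppet cert clean "$NODE"
The next agent run on the rebuilt machine will then submit a new certificate signing request for the master to sign (or autosign).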

Related

Trusting self signed certificate inside pod

In our company's internal network, we have self-signed certificates used for applications that run on DEV or staging environments. On our local machines they are already trusted, because Active Directory provides that via Group Policy Objects. But in the Kubernetes (OpenShift) world, we have to do some additional work to get SSL/TLS traffic working.
In the Dockerfile of the application concerned, we copy the certificate into the container and trust it while building the Docker image. After that, requests from the application running in the container to an HTTPS endpoint served with that self-signed certificate succeed. Without this, we get errors like "SSL/TLS secure channel cannot be established".
# Copy the certificate into the image and register it with the system
# trust store at build time.
COPY ./mycertificate.crt /usr/local/share/ca-certificates/
RUN chmod 644 /usr/local/share/ca-certificates/mycertificate.crt && update-ca-certificates
However, I don't think this is the best way to do it. It requires a lot of operational work whenever the certificate expires. In short, it's hard to manage and maintain. I wonder what the most efficient way to handle this is.
Thanks in advance for your support.
Typically that should be configured cluster-wide by your OpenShift administrators using the following documentation so that your containers trust your internal root CA by default (additionalTrustBundle):
https://docs.openshift.com/container-platform/4.6/networking/configuring-a-custom-pki.html#nw-proxy-configure-object_configuring-a-custom-pki
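Following that documentation, the gist is to publish the internal root CA in a ConfigMap and point the cluster-wide proxy configuration at it; a sketch (the ConfigMap name and file path are examples):
# Put the internal root CA into a ConfigMap in openshift-config:
oc create configmap custom-ca \
    --from-file=ca-bundle.crt=/path/to/internal-root-ca.crt \
    -n openshift-config
# Reference it from the cluster-wide proxy configuration:
oc patch proxy/cluster --type=merge \
    --patch '{"spec":{"trustedCA":{"name":"custom-ca"}}}'
The platform can then distribute the bundle to workloads without baking it into every image.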
"Best" is highly relative, but you could start by pulling the certificate out into a ConfigMap and mounting it into your container(s). That pushes all the work of updating it out to runtime, but introduces a fair bit of complexity. It depends on how often the certificate changes and how much you can automate rebuilds/redeploys when it does.
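A sketch of that approach (resource names are illustrative); note that depending on the base image you may still need to run update-ca-certificates at container start, for example from an entrypoint script, before the mounted file ends up in the consolidated bundle:
# Store the certificate in a ConfigMap:
oc create configmap company-ca --from-file=mycertificate.crt
# Mount it where the image looks for extra CA certificates:
oc set volume deployment/myapp --add --name=company-ca \
    --type=configmap --configmap-name=company-ca \
    --mount-path=/usr/local/share/ca-certificates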

Terraform Init/apply/destroy - SSL Connection Problems

Our company proxy breaks SSL connections and re-signs them using our own CA.
So I always have to tell the applications I use (RubyGems, Python pip, Azure CLI, ...) to use our company CA certificate.
Does anyone know how I can use our CA certificate with a local Terraform installation?
Is the CA certificate deployed to your OS's certificate store, or can you import it? If so, Terraform (and probably most other tools) should just work with a proxy like this, with no extra configuration. If you need further direction, tell us which operating system you are on and what access you have to the CA certificate.
Edit:
@Kreikeneka, have you put the cert in the location CentOS expects? There is a command you need to run that actually imports it: update-ca-trust. Have you run it? If the cert is just being used for SSL and you only need to trust it when going through your proxy, that is all you should need to do. You shouldn't need to tell your tools (Terraform, pip, etc.) to trust it for SSL through the proxy. Once the cert is imported into your certificate store, it should be usable transparently by any process on the machine.
If you are using the cert for client authentication to the proxy then just trusting the cert by placing it in the certificate store probably won't work.
I'm not clear from your comments whether you need the cert for SSL or for client authentication to the proxy. Check with your IT department what it is really used for if you aren't sure, and get back to us.
As of CentOS 6+, there is a tool for this. Per this guide, certificates can be installed by first enabling the system shared CA store:
update-ca-trust enable
Then place the certificates to trust as CAs in /etc/pki/ca-trust/source/anchors/ for high priority (non-overridable), or /usr/share/pki/ca-trust-source/ for lower priority (overridable), and finally update the system store with:
update-ca-trust extract
Et voilà, system tools will now trust those certificates when making secure connections!
Source:
https://serverfault.com/questions/511812/how-does-one-install-a-custom-ca-certificate-on-centos
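Putting that together for Terraform on CentOS/RHEL, a sketch (the certificate filename is an example):
# Import the company CA into the system trust store:
sudo update-ca-trust enable
sudo cp company-root-ca.crt /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust extract
# Terraform is a Go program, and on Linux Go reads the system store by
# default; you can also point it at a bundle explicitly:
export SSL_CERT_FILE=/etc/pki/tls/certs/ca-bundle.crt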

New SSL Certificate for each client deployment?

Context:
I have an application that is deployed to each client as a virtual machine, which the clients install wherever they want (I don't necessarily know the final domain). The application includes a JBoss web server that provides access to a configuration page, protected by SSL. Right now the server is using a self-signed certificate, but I want browsers to stop showing the warnings associated with self-signed certs. I also provide a free version of the application with basic functionality.
Question:
For cases where the client is using the free version (and I want to keep costs down), what is the best approach to the SSL cert when the final domain is unknown (most of the time)?
Is it acceptable to use a self-signed cert? If so, a different one per client install?
Is it best to issue a new cert (maybe a free one) for each deployment?
Is it acceptable to use the same cert, signed by a proper CA, on all of the deployment VMs?
A completely different approach?
Thanks guys!
Is it acceptable to use a self-signed cert? If so, a different one per client install?
Ask your clients: will they put up with a browser warning, or not?
Is it best to issue a new cert (maybe a free one) for each deployment?
It is best for the client to acquire his own SSL certificate. You can't do that for him. Nobody can.
Is it acceptable to use the same cert, signed by a proper CA, on all of the deployment VMs?
No, it entirely defeats the purpose. The certificate and the private key it wraps are supposed to uniquely identify the holder.
A completely different approach?
Handball the whole megillah to the clients. Self-identification is their problem, not yours.
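That said, if you do ship self-signed certs, generating a fresh one per install at first boot (rather than baking a shared key pair into the VM image) at least avoids every client holding the same private key. A minimal openssl sketch, assuming the hostname is known by then (paths are illustrative):
# Hypothetical first-boot step: generate a unique self-signed cert.
HOST="$(hostname -f)"
openssl req -x509 -newkey rsa:4096 -sha256 -nodes \
    -keyout /etc/myapp/server.key \
    -out /etc/myapp/server.crt \
    -days 825 -subj "/CN=${HOST}"
Browsers will still warn about it, of course; this only limits the damage if one VM's key leaks.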

Can anyone explain SSH, SSL, HTTPS in the context of Github or Bitbucket?

I don't really know much about IT, though I have been working in software development for 3 years. I have used version control with GitHub and Bitbucket, but I don't really know how SSH, SSL, and HTTPS work. Can anyone explain them in the context of version control with a cloud service like GitHub? Why is TLS not used? A use-case example would be most helpful. High-level is fine.
Firstly, while a number of people think SSH relies on SSL, it doesn't: it's an entirely different protocol. The fact that OpenSSH relies on OpenSSL might be one of the causes of this confusion (whereas in fact OpenSSL can do much more than SSL).
Secondly, TLS is essentially a newer version of SSL, and HTTPS is HTTP over SSL/TLS. You can read more about this in "What's the difference between SSL, TLS, and HTTPS?" on Security.SE, for example.
One of the major differences (in the context of GitHub and Bitbucket) has to do with the authentication mechanisms. Technically, both password and public-key authentication can be used with or on top of SSL/TLS and SSH, but this is done rather differently. Existing libraries and tool support also matters.
GitHub (with Git) relies on an SSH public key for authentication (so that you don't have to store or use a password every time).
Public key authentication in SSH uses "bare keys", whereas you'd need a certificate for SSL/TLS (and in 99.9% of cases that's going to be an X.509 certificate). (A certificate binds an identity to a public key by signing them together.) GitHub would have to use or set up a CA, or perhaps use various tricks to accept self-signed client certificates. All of this might be technically possible, but it would add to the learning curve (and may also be difficult to implement cleanly, especially if self-signed cert tricks were used).
At the moment, GitHub simply lets you register your SSH public key in your account and uses this for authentication. A number of developers (at least coming from the Git side) would have been familiar with SSH public keys anyway.
Historically, Git over SSH has always worked, whereas support for HTTP came later.
In contrast, Mercurial was mainly an HTTP-based protocol initially. Hence, it was more natural to use what's available on HTTPS (which would rule out using X.509 certificates if they're deemed too complicated). AFAIK, SSH access for Mercurial is also possible.
In both cases (Git and Hg), the SSH public key presented during the connection is what lets the system authenticate the user. On GitHub or GitLab, you always connect as SSH user git; which key you use is what actually determines the user in the system. (Same with Hg on Bitbucket: ssh://hg@bitbucket.org/....)
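You can see this in practice with GitHub's documented connection test: you register your public key with your account, always connect as the git user, and the greeting names the account your key maps to:
# Generate a key pair and register the public half in your GitHub account:
ssh-keygen -t ed25519 -C "you@example.com"
# Test the connection:
ssh -T git@github.com
# Hi <username>! You've successfully authenticated, but GitHub does not
# provide shell access.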
I doubt this is a good question for Stack Overflow, however.
All these protocols are used as (secured) channels for Git data exchange. And when you see 'SSL', most likely SSL/TLS is meant, simply to avoid typing both abbreviations. TLS is a further development of the SSL protocol.

Distributed PKI certificate handling - making it user-friendly

I have a server and N clients installed on different hosts. Each host has its own self-signed certificate, generated during install. Client authentication is turned on at this point, which means the parties can't communicate with each other until these certs are properly imported, as described below.
Now, the server needs to import all the clients' certificates, and all the clients need to import the server's. Doing this during install is really not user-friendly, since a client or the server can be installed independently of each other at any time.
What is a better way to import certs between clients and server without the user having to perform out-of-band manual steps?
PS: The PKI tool I am using can only import/export certificates on the local machine. Assume I can't change this tool at this time.
In general, this is one of the problems with PKI: it is a pain to securely distribute certificates in an automated fashion.
In an Active Directory domain environment you already have a Kerberos trust in place, so you can use Group Policy to securely distribute certificates automatically. I don't know whether that applies to you, because you haven't given any information about your environment/OS etc.