Hyperledger Fabric v2.2 - Adding HSM to an existing network/application

This is a pre-implementation question.
We have a working Fabric (2.2) application with an org containing 2 peers and an intermediate CA, with TLS enabled, and we are now planning to implement an HSM to store org-related private keys. I read in the official docs and other articles that to use an HSM we need PKCS11-enabled Docker CentOS images, and that the setup requires a complete rebuild of the binaries and the network.
1. Can we set up HSM without disturbing the existing network?
2. Does the peer's CouchDB container need to be reconfigured as well, if the answer to Q1 is "yes"?
3. How do we transfer the existing private keys from the local MSP keystore to HSM slots?
4. What points should we take care of when implementing HSM alongside the existing TLS-enabled keys?
5. Do we have a ready-made script for this operation in any of the samples (I did not find one so far)?
6. (removed point 6 and rearranged)
7. I have seen very few people talking about implementing HSM with HLF; is there any major issue with its usage?

Also, please provide some "take care" points before starting this operation.

1. Yes, if you have enough peers or orderers running. You will need to restart each peer or orderer one by one after setting the environment variables or updating the yaml file (or even rebuilding the binary with PKCS11 support); see the core.yaml sketch after this list. Since you do it one by one, the other running nodes keep the network alive.
2. Nope.
3. It depends on the HSM provider. They will give you a tool, either GUI or command line, that usually imports a PKCS#12 file (cert, public and private key); see the import sketch after this list. After the import into the HSM, you may remove the private keys from the MSP keystore, as the orderer and peer binaries will use the HSM for private key operations. (Of course, you should back them up somewhere else first.)
4. Keys for TLS servers or clients have to be stored locally; HSM is not supported for TLS yet.
5. No.
6. I assume this is the same question as number 3.
7. Yes, there are some issues. For example, Java chaincode does not have HSM support, so you will need to write the PKCS11 implementation yourself and override cryptoPrimitive.java.
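Regarding answer 1, here is a minimal sketch of the BCCSP section of a peer's core.yaml when switching to PKCS11. The library path, label and PIN are illustrative and vendor-specific, and the binary must have been built with PKCS11 support (e.g. GO_TAGS=pkcs11):

    peer:
      BCCSP:
        Default: PKCS11
        PKCS11:
          Library: /usr/lib/softhsm/libsofthsm2.so  # your vendor's PKCS11 module
          Label: fabric                             # token label
          Pin: "98765432"                           # token PIN
          Hash: SHA2
          Security: 256

The same settings can be passed as environment variables instead (CORE_PEER_BCCSP_DEFAULT=PKCS11, CORE_PEER_BCCSP_PKCS11_LIBRARY=..., and so on) when restarting each node.

Regarding answer 3, the exact import tool is vendor-specific. As an illustration only, with the SoftHSM software token the flow looks roughly like this; note that Fabric locates a key by the certificate's Subject Key Identifier, so the object's CKA_ID (--id) must be set to the SKI:

    # Illustrative only - real HSMs ship their own import tools.
    softhsm2-util --init-token --free --label fabric --so-pin 1234 --pin 98765432
    softhsm2-util --import msp/keystore/priv_sk \
        --token fabric --label signkey --id <hex-SKI> --pin 98765432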

PKCS11 is a standard interface, not a standard protocol. Each HSM vendor has its own protocol (usually over TCP) and provides a library that speaks this proprietary protocol, which you install in your application environment.
So an HSM is "just" another TCP-based service running outside your cluster. To some extent, you connect to an HSM the same way you would connect to an LDAP server:
Get a PIN to the HSM (similar to a password) and store it... somewhere
Install a library (e.g. hsm-provider.so) and its configuration file in your environment (a quick sanity check for this step is sketched after this list)
Open any firewall, SecurityGroup, VPC and whatnot to ensure TCP/IP connectivity
Let your application talk natively to the device as it would connect to any other service.
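Once the vendor library is installed, one way to check that it loads and the token is reachable is pkcs11-tool (shipped with OpenSC); the module path below reuses the hypothetical hsm-provider.so from above:

    pkcs11-tool --module /usr/lib/hsm-provider.so --list-slots
    pkcs11-tool --module /usr/lib/hsm-provider.so --login --pin <PIN> --list-objects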
The specifics (especially local configuration) depend on the HSM provider. Here is a script that configures an HSM emulated in software for HashiCorp Vault. YMMV, but this software emulator separates the HSM part from the networking part.
Once you figure out the HSM part, I suggest you look into the Utimaco HSM emulator (registration required). You connect to the emulator via a TCP/IP connection, making it as real as can be from Hyperledger's point of view.

kubernetes traffic with own certificates

I have a Kubernetes cluster in a corporate environment, where all HTTPS traffic is man-in-the-middled and the certificates are replaced with the company's own. Right now, all the applications running on the cluster get the company's certificates injected by rebuilding the Docker image or by mounting them from a secret and adding them to the local store. This is painful and makes it harder to just use public Helm charts and Docker images without modifying them.
For example, I'm running Jenkins on the cluster, which tries to install plugins from https://updates.jenkins-ci.org/. This would normally fail in my case with an SSL exception, unless I add the certificates to the Jenkins keystore.
I was wondering if there's a way to set this up at the cluster level, so that some component deals with this and the applications can then access the internet normally, without being aware of the whole certificate situation.
My thoughts were:
A cluster proxy pod that all the applications then use.
An ambassador container on each pod that the apps connect to.
I would imagine I'm not the only one in this situation but couldn't find a generic solution for this.
You could have a look at Istio. It's a service mesh that uses sidecar proxies to (among other things) take over responsibility for encrypting traffic between applications.
The proxies use the concept of mutual TLS (mTLS), where all connections inside the mesh are encrypted out of the box. The applications themselves don't have to bother with certificates and can send messages in plain text.
Istio also provides a mechanism to migrate to mTLS, so you can include your applications in the mesh one by one, switch to mTLS, and disable your own certificate overhead.
You can set everything up with your own PKI, so you're still using your company's certificates. You also get a bunch of other features, like enhanced observability, canary deployments, on-the-fly token-based authentication/authorization, and more.
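As an illustration (not part of the original answer), mesh-wide mTLS in Istio is switched on with a PeerAuthentication resource; PERMISSIVE mode is the migration mechanism mentioned above:

    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: istio-system   # root namespace makes the policy mesh-wide
    spec:
      mtls:
        mode: STRICT            # use PERMISSIVE while migrating workloads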

Programmatically synchronizing keys generated by HSM clients with the RFS server

I am using PKCS11Interop to perform key management operations inside an HSM. The HSM I am using is a network HSM, a Thales nShield. Here are the details of my setup:
1 HSM
1 RFS server
3 clients
My software application is distributed and hosted across the 3 clients. A key is generated on one of the clients and may be used by the application components present on the other clients.
However, I have noticed that a key generated on one client machine is not accessible to the other client machines unless both clients do an rfs-sync.
Question: Is there a way to synchronize the client keys with the RFS using some PKCS11Interop API? If not, in what way can I synchronize the keys between the RFS and the client machines?
I know that an exe can be executed from C# code, but that doesn't look like a clean approach.
What you are trying to do is not part of the PKCS#11 standard. So I doubt that PKCS11Interop will be able to achieve this (from looking at its documentation here).
When you generate an object on the token (Thales nShield) using PKCS#11 (PKCS11Interop), the Thales security manager installed on the client actually performs the generation on the HSM. If I remember correctly, Thales stores these objects on the client machine as flat files encrypted by the security manager's master key, so technically the object is not stored on the HSM. This is the reason you have to do a sync with the RFS, and then update your other clients to see the new keys/objects.
You will have to check with the Thales people to see if they can provide a way to automate this, or implement your own syncing mechanism. Since rfs-sync is a command-line tool Thales provides, you will have to see if you can execute its commands through C#, or check with them whether they have a C# library that does this for you.
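A minimal sketch of the shell-out approach in C#, assuming the stock rfs-sync tool; the install path and flags below are assumptions to verify against your nShield documentation (the usual pattern is --commit on the client that generated the key, then --update on the others):

    // Illustrative sketch: wrapping the rfs-sync CLI from C#.
    using System;
    using System.Diagnostics;

    static class RfsSync
    {
        // e.g. Run("--commit") after generating a key, Run("--update") elsewhere.
        public static void Run(string arguments)
        {
            var psi = new ProcessStartInfo
            {
                // Assumed default install path - check your environment.
                FileName = @"C:\Program Files (x86)\nCipher\nfast\bin\rfs-sync.exe",
                Arguments = arguments,
                UseShellExecute = false,
                RedirectStandardOutput = true,
                RedirectStandardError = true,
            };

            using (var proc = Process.Start(psi))
            {
                string stdout = proc.StandardOutput.ReadToEnd();
                string stderr = proc.StandardError.ReadToEnd();
                proc.WaitForExit();
                if (proc.ExitCode != 0)
                    throw new InvalidOperationException(
                        $"rfs-sync {arguments} failed ({proc.ExitCode}): {stderr}");
            }
        }
    }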

Can anyone explain SSH, SSL, and HTTPS in the context of GitHub or Bitbucket?

I don't really know much about IT and have been working in software development for 3 years. I have used version control with GitHub and Bitbucket, but I really don't know how SSH, SSL, and HTTPS work. Can anyone explain them in the context of version control with a cloud service like GitHub? Why is TLS not used? A use-case example would be most helpful. High-level is fine.
Firstly, while a number of people think SSH relies on SSL, it doesn't: it's an entirely different protocol. The fact that OpenSSH relies on OpenSSL might be one of the causes of this confusion (whereas in fact OpenSSL can do much more than SSL).
Secondly, TLS is essentially a newer version of SSL, and HTTPS is HTTP over SSL/TLS. You can read more about this in "What's the difference between SSL, TLS, and HTTPS?" on Security.SE, for example.
One of the major differences (in the context of GitHub and Bitbucket) has to do with the authentication mechanisms. Technically, both password and public-key authentication can be used with or on top of SSL/TLS and SSH, but this is done rather differently. Existing library and tool support also matters.
GitHub (with Git) relies on an SSH public key for authentication (so that you don't have to store or use a password every time).
Public-key authentication in SSH uses "bare keys", whereas you'd need a certificate for SSL/TLS (and in 99.9% of cases that's going to be an X.509 certificate). (A certificate binds an identity to a public key by signing them together.) GitHub would have to use or set up a CA, or perhaps use various tricks to accept self-signed client certificates. All of this might be technically possible, but it would add to the learning curve (and may also be difficult to implement cleanly, especially if self-signed cert tricks were used).
At the moment, GitHub simply lets you register your SSH public key in your account and uses this for authentication. A number of developers (at least coming from the Git side) would have been familiar with SSH public keys anyway.
Historically, Git over SSH has always worked, whereas support for HTTP came later.
In contrast, Mercurial was mainly an HTTP-based protocol initially. Hence, it was more natural to use what's available on HTTPS (which would rule out using X.509 certificates if they're deemed too complicated). AFAIK, SSH access for Mercurial is also possible.
In both cases (Git and Hg), the SSH public key presented during the connection is what lets the system authenticate the user. On GitHub or GitLab, you always connect as SSH user git, but which key you use is what actually determines the user in the system. (Same with Hg on Bitbucket: ssh://hg@bitbucket.org/....)
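You can see this with GitHub's SSH test command; the greeting below is what GitHub currently prints, and the username in it is determined purely by which key you present:

    $ ssh -T git@github.com
    Hi <username>! You've successfully authenticated, but GitHub does not provide shell access.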
I doubt it is a good question for Stack Overflow, however.
All these protocols are used as a (secured) channel for Git data exchange. And when you see 'SSL', most likely SSL/TLS is meant, simply to avoid typing both abbreviations. TLS is a further development of the SSL protocol.

Storing an X509 Certificate in a MySQL Database

We're working on a TCP server that secures its communication with its clients using TLS/SSL.
Currently we store our public certificate (.cer file) and our private certificate (a password-protected .p12 that includes the private key) in the Windows certificate store. We are going to increase the number of TCP servers soon and, depending on the traffic, we'll be adding more and more over time.
To facilitate the deployment process and periodic certificate changes (or a replacement in case we detect some sort of intrusion), we plan to store both the private and public certificates in the system's common MySQL database, which is accessible by the TCP servers.
Is storing the .cer and password-protected .p12 files in BLOB columns a bad idea from a security point of view?
P.S.: I don't think it is very relevant, but the TCP server is being developed in C#.
Setting the security concerns aside: your language has native support for the Windows certificate store, so with this change you are going to have to roll your own handling (increased complexity). It would be better to update the Windows store as part of server startup.
From a security point of view, you now have additional points where the encrypted keys are accessible. Is your password secure enough? This is not a best practice; it should be managed by the systems admin doing the install and updates. Lastly, this increase in complexity also increases the attack surface.
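For concreteness, a C# sketch of the two approaches; the subject name, password handling and DB fetch are hypothetical stand-ins:

    using System.Security.Cryptography.X509Certificates;

    static class CertLoading
    {
        // Current approach: pull the certificate from the Windows store.
        public static X509Certificate2 FromWindowsStore()
        {
            var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
            store.Open(OpenFlags.ReadOnly);
            try
            {
                return store.Certificates.Find(
                    X509FindType.FindBySubjectName, "my-tcp-server", false)[0];
            }
            finally { store.Close(); }
        }

        // Proposed approach: materialize the certificate from a BLOB read
        // out of MySQL. Note the .p12 password still has to live somewhere,
        // which is the crux of the security question.
        public static X509Certificate2 FromDbBlob(byte[] p12Bytes, string password)
        {
            return new X509Certificate2(p12Bytes, password,
                X509KeyStorageFlags.MachineKeySet);
        }
    }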

Distributed PKI certificate handling - making it user friendly

I have a server and N clients installed on different hosts. Each host has its own self-signed certificate, generated during install. Client authentication is turned ON at this point, which means they can't communicate with each other until these certs are properly imported, as described below.
Now, the server needs to import all the clients' certificates, and likewise all the clients need to import the certificate of this single server. This part is really not user friendly to do during install, as the client and the server can be installed independently of each other at any time.
What is a better way to import certs between the clients and the server without the user having to perform some kind of out-of-band manual steps?
PS: The PKI tool I am using can only import/export certificates on the local machine. Assume I can't change this tool at this time.
In general, this is one of the problems with PKI. It is a pain to securely distribute certificates in an automated fashion.
In an Active Directory domain environment you already have a Kerberos trust in place, so you can use Group Policy to securely distribute certificates automatically. I don't know if that applies to you, because you haven't given any information about your environment/OS, etc.
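If you do end up automating the local half yourself on Windows, the import step is straightforward; here is a hedged C# sketch (the store name and path are illustrative). Transporting the certificate to the machine over a channel you trust remains the out-of-band problem:

    using System.Security.Cryptography.X509Certificates;

    static class CertImport
    {
        // Add a peer's certificate to the local machine's trust store.
        public static void ImportTrustedPeer(string cerPath)
        {
            var cert = new X509Certificate2(cerPath);
            var store = new X509Store(StoreName.TrustedPeople, StoreLocation.LocalMachine);
            store.Open(OpenFlags.ReadWrite);
            try { store.Add(cert); }  // adding an already-present cert is harmless
            finally { store.Close(); }
        }
    }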