Does SSH need certificates?

I have heard that SSH does not need certificates.
But for RSA authentication in SSH, the client has to make sure that the public key really belongs to the server, and that could be done with certificates.
Yet SSH does not use certificates.
So how does it do that?

No. It does NOT NEED them, but it CAN use them (though they are different from the certificates used in SSL, for various reasons). Certificates only help to delegate the verification to some certificate authority. To verify the public key yourself, you just need to obtain it over a secure channel.
So how can you verify the public key of the server you are connecting to?
There are several possibilities. The server admin can send you, over a different secure channel, the public key or a fingerprint of the public key. They can look like this:
Public key: ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAubiN81eDcafrgMeLzaFPsw2kNvEcqTKl/VqLat/MaB33pZy0y3rJZtnqwR2qOOvbwKZYKiEO1O6VqNEBxKvJJelCq0dTXWT5pbO2gDXC6h6QDXCaHo6pOHGPUy+YBaGQRGuSusMEASYiWunYN0vCAI8QaXnWMXNMdFP3jHAJH0eDsoiGnLPBlBp4TNm6rYI74nMzgz3B9IikW4WVK+dc8KZJZWYjAuORU3jc1c/NPskD2ASinf8v3xnfXeukU0sJ5N6m5E8VLjObPEO+mN2t/FZTMZLiFqPWc/ALSqnMnnhwrNi2rbfg/rd/IpL8Le3pSBne8+seeFVBoGqzHM9yXw==
You can store this one directly in your ~/.ssh/known_hosts, prefixed with the server name and a space.
Fingerprint: SHA256:zzXQOXSRBEiUtuE8AikJYKwbHaxvSc0ojez9YXaGp1A bitbucket.org (RSA)
When you connect to the server for the first time, you are asked a similar question:
The authenticity of host 'bitbucket.org (104.192.143.3)' can't be established.
RSA key fingerprint is SHA256:zzXQOXSRBEiUtuE8AikJYKwbHaxvSc0ojez9YXaGp1A.
Are you sure you want to continue connecting (yes/no)?
Then it is your responsibility to verify that this fingerprint is the same as the one you got from your admin.
If you don't do that, you risk that somebody has redirected your connection to a malicious server and you are connecting somewhere else entirely. Host keys are unique, so such an attacker would have a different key (and therefore a different fingerprint), unless he had already compromised the original server (and then you are screwed anyway).
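For example, a rough sketch of how you could do that check yourself (bitbucket.org simply stands in for your server): fetch the key over the still-untrusted network, print its fingerprint, compare it with the one your admin gave you, and only then add the key to ~/.ssh/known_hosts.
ssh-keyscan -t rsa bitbucket.org > bitbucket.pub    # fetch the host key (untrusted channel)
ssh-keygen -lf bitbucket.pub                        # print its SHA256 fingerprint for comparison
cat bitbucket.pub >> ~/.ssh/known_hosts             # append it only after the fingerprints match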
There is also the possibility to publish the host keys in SSHFP DNS records, which removes the burden above (you should have DNSSEC, otherwise the DNS records can be tampered with just like your direct connection). For this to work, you need to turn it on in your ssh_config using the VerifyHostKeyDNS option.
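A minimal sketch, assuming you control the DNS zone and it is DNSSEC-signed (the hostname and key path are placeholders): generate the SSHFP records on the server and paste them into the zone,
ssh-keygen -r myserver.example.com -f /etc/ssh/ssh_host_rsa_key.pub    # prints SSHFP resource records for the zone
then enable the check on the client, e.g. in ~/.ssh/config:
Host myserver.example.com
    VerifyHostKeyDNS yes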
And what about the certificates?
SSH can use certificates. This is common in company environments, where you are already given a known_hosts file configured with the certificate authority that is used to sign all the host keys (and usually also the clients' authentication keys). In that case you don't need any of the above, and connecting to the local infrastructure "just works". Note that these certificates are not the X.509 certificates used in the SSL/TLS PKI. For more info about them, see the manual page for ssh-keygen, which explains them in detail.
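A rough sketch of such a setup (all names and paths here are illustrative, not from the question): the company CA signs the server's host key, the server presents the resulting certificate, and clients trust anything signed by that CA through a single @cert-authority line.
# on the CA machine: sign the server's host key, producing ssh_host_rsa_key-cert.pub
ssh-keygen -s ca_key -I myserver -h -n myserver.example.com /etc/ssh/ssh_host_rsa_key.pub
# on the server: advertise the certificate in sshd_config
HostCertificate /etc/ssh/ssh_host_rsa_key-cert.pub
# on the clients: one known_hosts line (holding the CA's public key) replaces individual host keys
@cert-authority *.example.com ssh-rsa AAAA...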

Related

Can I use my own Certificate Authority for HTTPS over LAN?

I have a server and a few clients, all running in different Docker containers. The users can use the client by entering localhost:3000 in their browser (where the client container is running).
All the containers run on the same LAN. I want to use HTTPS.
Can I sign a public/private key pair using my own CA and then load the CA's public key into the browser?
I want to use the normal flow for public domains, but internally with my own CA.
Or should I look for another solution?
Meta: since you've now disclosed nodejs, that makes it at least borderline for topicality.
In general, the way PKIX (as used in SSL/TLS, including HTTPS) works is that the server must have a private key and a matching certificate; this is the same whether you use a public CA or your own (as you desire). The server should also have any intermediate or 'chain' cert(s) needed to verify its cert. A public CA will always need such chain cert(s), because CABforum rules (codifying common best practice) prohibit issuing 'subscriber' (EE) certs directly from a root. With your own CA this is up to you: you can choose to use intermediate(s) or not, although, as I say, it is considered best practice to use them and keep the root private key 'offline'. In cryptography that means not on any system that communicates with anybody (such as, in this case, servers that request certificates), thus eliminating one avenue of attack; ideally it lives on a specialized device that is 'airgapped' (not connected, or even able to be connected, to any network) and kept in a locked vault, possibly with 'tamper protection', a polite name for self-destruct. As a known example of the rigor needed to secure something as sensitive as the root key of an important CA, compare Stuxnet.
The client(s) do not need, and should not be configured with, the server cert unless you want to do pinning; they do need the CA root cert. Most clients, and particularly browsers, already have many (if not all) of the public CA root certs built in, so using a cert from such a CA does not require any action on the client(s); OTOH using your own CA requires adding your CA cert to the client(s). Chrome on Windows uses the Microsoft-supplied (Windows) store; you can add to this explicitly (using the GUI dialog, the certutil program, or PowerShell), although in domain-managed environments (e.g. businesses) it is also popular to 'push' a CA cert (or certs) using GPO. Firefox uses its own truststore, which you must add to explicitly.
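For example (a sketch, with myCA.crt standing in for your CA certificate file), on Windows you can add it from a command prompt either to the current user's trusted roots, or to the machine store from an elevated prompt:
certutil -addstore -user Root myCA.crt
certutil -addstore Root myCA.crt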
In nodejs you configure the private key, server cert, and if needed the chain cert(s), as documented.
PS: note you generally should, and for Chrome (and new Edge, which is actually Chromium) must, have the SubjectAlternativeName (SAN) extension in the server cert specify its domain name(s), or optionally IP address(es), NOT (or not only) the CommonName (CN) attribute, as you will find in many outdated and/or incompetent instructions and tutorials on the Web. The OpenSSL command line makes it easy to do CommonName but not quite so easy to do SAN; there are many Qs on several Stacks about this. Any public CA after about 2010 does SAN automatically.
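A hedged sketch with openssl (all file names and the myhost.lan / 192.168.0.10 values are placeholders): create your CA, then issue a server cert whose SAN carries the name(s) the clients will actually use. The inline extension file uses bash process substitution.
# create the CA (keep ca.key offline)
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 -subj "/CN=My LAN CA" -keyout ca.key -out ca.crt
# create the server key and certificate request
openssl req -newkey rsa:2048 -nodes -subj "/CN=myhost.lan" -keyout server.key -out server.csr
# sign it with the CA, adding the SAN extension
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 825 \
  -extfile <(printf "subjectAltName=DNS:myhost.lan,IP:192.168.0.10") -out server.crt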

simple Akka ssl encryption

There are several questions on stackoverflow regarding Akka, SSL and certificate management to enable secure (encrypted) peer to peer communication between Akka actors.
The Akka documentation on remoting (http://doc.akka.io/docs/akka/current/scala/remoting.html)
points readers to this resource as an example of how to Generate X.509 Certificates.
http://typesafehub.github.io/ssl-config/CertificateGeneration.html#generating-a-server-ca
Since the actors are running on internal servers, the generation of a server CA for example.com (or really any DNS name) seems unrelated to this use case.
Most servers (for example EC2 instances running on Amazon Web Services) will be run in a VPC and the initial Akka remotes will be private IP addresses like
remote = "akka.tcp://sampleActorSystem#172.16.0.10:2553"
My understanding is that it should be possible to create a self-signed certificate and generate a trust store that all peers share.
As more Akka nodes are brought online, they should (I assume) be able to use the same self-signed certificate and trust store as all the other peers. I also assume there is no need to trust every peer individually with an ever-growing list of certificates, even without a CA, since the trust store would validate that one certificate and avoid man-in-the-middle attacks.
The ideal solution, and hope, is that it is possible to generate a single self-signed certificate (without the CA steps) and a single trust store file, and share them among any combination of Akka remotes (both the client calling the remote and the remote itself, i.e. all peers).
There must be a simple-to-follow process to generate certificates for plain internal encryption and client authentication (just trust all peers the same).
Question: can these all be the same file on every peer, which will ensure they are talking to trusted clients, and enable encryption?
key-store = "/example/path/to/mykeystore.jks"
trust-store = "/example/path/to/mytruststore.jks"
Question: Are the X.509 instructions linked above overkill? Is there a simpler self-signed / trust store approach without the CA steps? Specifically for internal IP addresses only (no DNS), and without an ever-growing list of IP addresses in a cert, since servers could autoscale up and down.
First, I have to admit that I do not know Akka, but I can give you the general guidelines for identification with X.509 certificates in the SSL protocol.
The Akka server configuration requires an SSL certificate bound to a hostname:
You will need a server with a DNS hostname assigned, for hostname verification. In this example, we assume the hostname is example.com.
An SSL certificate can be bound to a DNS name or to an IP address (less common). For the client-side verification to succeed, it must match the IP / hostname of the server.
Akka requires a certificate for each server, issued by a common CA:
CA
- server1: server1.yourdomain.com (or IP1)
- server2: server2.yourdomain.com (or IP2)
To simplify server deployment, you can use a wildcard *.yourdomain.com
CA
- server1: *.yourdomain.com
- server2: *.yourdomain.com
On the client side you need to configure a truststore (a JKS file) that includes the CA certificate. The client will then trust any certificate issued by this CA.
In the scheme you have described, I think you do not need a keystore on the client; it is only needed when you also want to identify the client with a certificate. The SSL-encrypted channel will be established in both cases.
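For example (a sketch; the alias, file names and password are placeholders), the shared truststore can be built once and copied to every client:
keytool -importcert -alias myca -file ca.crt -keystore mytruststore.jks -storepass changeit -noprompt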
If you do not have a domain name like yourdomain.com and you want to use internal IPs, I suggest issuing a certificate for each server and binding it to that server's IP address.
Depending on how Akka verifies the server certificate, it might be possible to use a single self-signed certificate for all servers. Akka probably delegates trust configuration to the JVM defaults, and if you put the self-signed certificate itself into the truststore (rather than a CA), the SSL socket factory will trust connections presenting that certificate, even if it is expired or if the server's hostname does not match the certificate. I do not recommend it.
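If you nevertheless want the simple "one self-signed cert shared by all peers" setup the question asks about, a hedged keytool sketch (aliases, passwords and the CN are placeholders, and it assumes hostname verification is not enforced) could look like this:
# one key pair, self-signed, kept in the keystore every node uses
keytool -genkeypair -alias akka-peer -keyalg RSA -keysize 2048 -validity 3650 \
    -dname "CN=akka-internal" -keystore mykeystore.jks -storepass changeit -keypass changeit
# export the certificate and import it into the shared truststore
keytool -exportcert -alias akka-peer -keystore mykeystore.jks -storepass changeit -file akka-peer.crt
keytool -importcert -alias akka-peer -file akka-peer.crt -keystore mytruststore.jks -storepass changeit -noprompt
Both mykeystore.jks and mytruststore.jks could then be copied to every node and referenced from the key-store / trust-store paths shown in the question.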

How sim800 get ssl certificate?

The SIM800 supports the SSL protocol. The AT command "AT+CIPSSL" sets TCP to use the SSL function.
In "sim800_series_ssl_application_note_v1.01.pdf" it is noted that: "Module will automatic begin SSL certificate after TCP connected."
My problem: What is the exact meaning of "begin SSL certificate"? What does the SIM800 do, exactly? Does the SIM800 get the SSL certificate from the website? Where does the SIM800 save the SSL certificate?
As far as I know, the SIM800 has some certificates built in, and when you use a TCP+SSL or HTTP+SSL connection it will automatically use those certificates.
If those certificates are not OK for you, you will need to use an SD card, save the certificates you want there, and use the command AT+SSLSETCERT to load the certificate you saved on your SD card. Here you can find how to use the File System.
Usually the certificates that come with the module are enough and you won't need this. But, for example, they didn't work for me when I tried to communicate with Azure via MQTT; I had to encrypt the data myself using the wolfSSL library and send it over TCP without SSL.
Note: Not all SIM800 modules have SD card support.
There is very little information about the SIM800 and SSL certificates on the web, and like you I have a lot of questions about it.
About your questions on how the SIM800 gets a certificate and where it saves it: it seems, according to sim800_series_ssl_application_note_v1.01.pdf, that you can create (defining your own path), write and import an SSL certificate yourself with the AT+FSCREATE, AT+FSWRITE and AT+SSLSETCERT commands. An example is provided in paragraph 3.10.
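Roughly, that sequence looks like the lines below; the file path, size and timeout values here are assumptions for illustration, so check them against the application note (paragraph 3.10) before use.
AT+FSCREATE=C:\USER\ca.crt
AT+FSWRITE=C:\USER\ca.crt,0,1374,10
(the module then expects the 1374 certificate bytes within the 10-second window)
AT+SSLSETCERT=C:\USER\ca.crt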
I'm sorry, I can't answer your other questions.
Anyway, if you find further information about the SIM800 and SSL, I would be grateful if you shared it with me.
When you use AT+CIPSSL you tell the SIM module to use SSL on the TCP connection. When you then use the AT+CIPSTART command:
The SIM module requests the TCP connection with the server through SSL.
The server sends its SSL certificate.
The authenticity of that certificate is checked against the internal certificate-authority certificate (the one that resides inside the SIM module), which is cryptographically linked to the server certificate.
If the authenticity of the certificate cannot be confirmed, the SIM module will close the connection, unless you used the command AT+SSLOPT=0,0 (which forces the SIM module to ignore invalid certificate authentication) prior to the AT+CIPSSL command.
Key exchange:
The SIM module then encrypts its master key (already inside the SIM module; it cannot be changed or read) with the public key (which is part of the server certificate already sent) and sends it back to the server.
The server then encrypts its master key with the SIM module's master key and sends it back to the SIM module. The key exchange is now complete, as both the server and the SIM module have received the master keys.
The SIM module currently doesn't support client authentication, which means that the server cannot authenticate the client. So there must be some other means of authentication (for example, in MQTT that can be a username and password that only the client knows).
If you want your module to be able to authenticate the server, you will need to create a self-signed certificate for the server and a certificate-authority certificate (for the SIM module) which is cryptographically linked to the self-signed certificate, and upload them to the server and the SIM module (through the AT+SSLSETCERT command, from the SD card).
If you only want to encrypt the data traffic, you can ignore an invalid certificate (AT+SSLOPT=0,0), since you will receive the public key nevertheless. But if you want to be sure about the server's authenticity, you will need to upload the right certificate to the module.
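Putting those steps together, a typical command sequence might look like the sketch below; the host and port are placeholders, and the exact parameter values should be checked against the SSL application note.
AT+SSLOPT=0,0
AT+CIPSSL=1
AT+CIPSTART="TCP","example.com",443
The first line is only needed if you deliberately want to skip certificate validation, as described above.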

How can I setup an FTPS server on my aws EC2 ubuntu instance

1) I am trying to set up an FTPS server on my EC2 Ubuntu instance. I can only find resources and tutorials for setting up an SFTP server.
2) From what I understand, the SSL certificate is only applicable to the server. When a user tries to FTPS to my server, should he/she upload a certificate or a public/private key file similar to SFTP? Or are hostname, port, username and password sufficient?
You might have better luck searching for "FTP over TLS", which is another name for FTPS. TLS is the successor protocol to SSL, though it is often still referred to casually as "SSL".
I use proftpd, and I mention that primarily because their docs discuss some theory and troubleshooting techniques using openssl s_client -connect, which you will find quite handy regardless of which server you deploy.
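For example, to probe an explicit-TLS FTPS server by hand (ftp.example.com is a placeholder), you can watch the certificate exchange with:
openssl s_client -connect ftp.example.com:21 -starttls ftp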
The SSL cert is only required on the server side, and if you happen to have a web-server "wildcard" cert, you may be able to reuse that and avoid purchasing a new one.
Client certs are optional; a username and password will suffice in many applications. Properly configured, authentication will only happen over encrypted connections. (Don't configure the server to also operate in cleartext mode on the standard FTP port; inevitably you'll find a client who thinks they are using TLS when they are not.)
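As an illustration only (file paths are placeholders; check the mod_tls documentation for your proftpd version), a minimal configuration that enforces TLS looks roughly like this:
<IfModule mod_tls.c>
  TLSEngine                on
  TLSRequired              on
  TLSRSACertificateFile    /etc/ssl/certs/proftpd.crt
  TLSRSACertificateKeyFile /etc/ssl/private/proftpd.key
</IfModule>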
If client certs are required, it is because of your policy rather than for technical reasons. You'll find that SSL client certs operate differently than SSH keys: typically the client certs are signed by a certificate authority that you create, and you trust them because they are signed by your certificate authority, as opposed to your possessing their public key, as in SSH.

Is using SSH (PuTTy) secure if I do not have an SSL certificate on server?

We have a small office server running CentOS Linux for internal use. I can, however, connect to it externally using PuTTY with SSH.
Since the server does not have any kind of SSL certificate, is using PuTTY/SSH still secure?
Thanks
SSH does not depend on the SSL notion of signed certificate chains. SSH uses encryption and key pairs (host keys and, optionally, user keys) rather than X.509 certificates. You can also use user keys (public-key authentication) instead of password-based authentication (recommended).
SSH relies on a TOFU (trust on first use) model for host key validation. When you connect to the server for the first time, the client asks you to verify the host key and then remembers it. If the server suddenly presents a different key (possibly a man-in-the-middle attack), you will be unable to connect without manual intervention.
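If you want the moral equivalent of a certificate check, compare the fingerprint PuTTY shows on first connect with one computed on the server itself; the path below is a common default and may be an ecdsa or ed25519 key on your system:
ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub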