Some questions about SSL

I have a couple questions about SSL.
What happens if someone tampers with or changes the encrypted data? There are many ways the data being transferred can be tampered with, and although the encrypted data will not make any sense to the tamperer, what happens if they alter it anyway? How would I handle such scenarios?
What will happen if a webpage is requested by a browser which does not support SSL? Or if the client accessing the website is actually some kind of malware?
I am pretty new to SSL, so maybe my questions are very trivial, but I don't have answers to them.

The packets, including the URL itself, are encrypted. Modifying the bytes makes a packet invalid; as far as I know, the server will not accept it then.
If a client browser doesn't support your SSL protocol, it can't access the website. The client gets an "Insecure Request Denied" error.
SSL establishes a secured connection. Any software, including malware, that supports the protocol can start a connection. The SSL protocol "just" encrypts the communication so the packets can't be inspected from outside, so your software itself still needs to be protected against attacks.

The tampering will be detected on arrival and the connection automatically dropped, probably resulting in a dialog box to the browser user.
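To make that concrete: TLS protects every record with an integrity check (a MAC or AEAD authentication tag), so a modified record fails verification instead of decrypting into garbage. Here is a minimal sketch in Python, using the third-party cryptography package as a stand-in for what the TLS library does internally:

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

# Mimic what TLS record protection does: encrypt and authenticate the payload.
key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
aead = AESGCM(key)
ciphertext = aead.encrypt(nonce, b"GET /account HTTP/1.1", None)

# An attacker flips a single bit "in transit".
tampered = bytearray(ciphertext)
tampered[0] ^= 0x01

try:
    aead.decrypt(nonce, bytes(tampered), None)
except InvalidTag:
    print("Tampering detected: the record is rejected and the connection is dropped")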
I'm not aware of any browser that doesn't support SSL. Such a thing would be singularly useless.


Which proxy mode to use if host company terminates TLS on reverse proxy

Friendly Disclaimer: I am new to working with Keycloak and IdP in general. So it's likely that I use incorrect terminology and/or am more confused than I think I am. Corrections are gratefully accepted.
My question is conceptual.
I have a TLS certificate that is terminated on my host machine by my host company. My reverse proxy (Traefik) is picking up that certificate.
Which of the following proxy modes should I use now to be able to deploy Keycloak to production: edge, reencrypt or passthrough? (see here for relevant documentation)
I can pretty much rule out passthrough, because as I wrote, the TLS certificate is terminated on the server. But I am unsure if I have to bring my own certificate and reencrypt or if it is considered safe to go along with edge?
I have done my best to keep this question short and general. However, I am happy to share configurations or further details if needed.
As far as I know, most organizations consider a request safe once the proxy has validated and terminated the TLS connection. It also avoids the re-encryption performance overhead (depending on your load). Unless your organization is going for Zero Trust on its internal network, using edge should be totally acceptable.
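To make the edge option a little more concrete: in edge mode the proxy is the only TLS endpoint and forwards plain HTTP to the backend, typically with X-Forwarded-* headers describing the original request. A minimal Python sketch of what the backend sees (the port and handler are purely illustrative, not a Keycloak configuration):

from http.server import BaseHTTPRequestHandler, HTTPServer

class ShowForwardedHeaders(BaseHTTPRequestHandler):
    def do_GET(self):
        # Behind an edge-terminating proxy the request arrives as plain HTTP;
        # the forwarded headers are what tell us it started out as HTTPS.
        proto = self.headers.get("X-Forwarded-Proto", "http")
        client = self.headers.get("X-Forwarded-For", self.client_address[0])
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(f"scheme={proto} client={client}\n".encode())

# Hypothetical internal port the reverse proxy forwards to.
HTTPServer(("127.0.0.1", 8080), ShowForwardedHeaders).serve_forever()

In other words, with edge you are trusting the network segment between Traefik and Keycloak, which is exactly the trade-off described above.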

Is there a way to see sent data when the server is https?

I am trying to see the data (using tcpdump) that my browser sends to a server that uses the HTTPS protocol:
tcpdump -i any -w /tmp/http.log
but the application data is encrypted (as expected).
I am wondering: is there a way to see the data before it is encrypted when the server uses HTTPS?
EDIT: The encrypted traffic is created by common web browsers like Firefox, Chrome, IE...
If you control the server, you can set it to permit the null cipher and then force your client to use the same. The null cipher is just a fancy way of saying "unencrypted". This should NEVER be deployed, as even having it as an option in the ciphers list is HIGHLY insecure.
You could also add a trusted key to the client and have the client use a proxy. The communication with the proxy uses the trusted key you created, and the proxy can look at the data before sending it on, re-encrypted with the key of the destination server. This is, effectively, a "man-in-the-middle attack", and can be defeated by things like certificate pinning. Some companies use this to track employee computer usage (when used in that way, it's somewhat controversial).
Strictly speaking, both of those are attacks to get around the encryption, not ways of looking at the data before it's encrypted. To see it before it's encrypted, you would generally have to modify either the client or the server to record what it's sending (or maybe use a debugger), as the encryption is usually done by a library directly linked to the programme.
EDIT: the developer tools in Chrome and Firefox might be what you're looking for: if you click the request on the "Network" tab (in Chrome; I don't have Firefox open, but it has almost exactly the same thing) you can see almost all aspects of the info being sent and received.
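To illustrate the "modify the client to record what it sends" approach from above, here is a minimal Python sketch (the host, path and payload are made up): the body is plain text to the client right up until the TLS layer wraps it for the wire.

import http.client
import json

# Hypothetical request: everything printed here is still plaintext; it only
# becomes ciphertext once the HTTPS connection encrypts it for the network.
body = json.dumps({"user": "alice", "password": "secret"})
print("About to send:", body)

conn = http.client.HTTPSConnection("example.com")
conn.request("POST", "/login", body=body,
             headers={"Content-Type": "application/json"})
print("Status:", conn.getresponse().status)
conn.close()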
Just use Charles Proxy (free trial) on your computer. If the certificate is pinned this will not work, but that is probably not the case for a browser.

NET::ERR_CERT_REVOKED - only for few clients not all - CentOS Server

Hello,
I installed a *.domain.com SSL certificate on a CentOS 7.1 (Apache 2) server. It was OK at first, but now some of our clients are having problems with it.
I tried updating the server.
I tried clearing the cache in the clients' browsers.
I tried different browsers.
And I checked the date and time.
I actually do not know whether the problem is on the server side or the client side.
Error:
Your connection is not private
Attackers might be trying to steal your information from subdomain.domain.com (for example, passwords, messages, or credit cards). NET::ERR_CERT_REVOKED
subdomain.domain.com normally uses encryption to protect your information. When Chrome tried to connect to subdomain.domain.com this time, the website sent back unusual and incorrect credentials. Either an attacker is trying to pretend to be support.shamal.net, or a Wi-Fi sign-in screen has interrupted the connection. Your information is still secure because Chrome stopped the connection before any data was exchanged.
You cannot visit support.shamal.net right now because this certificate has been revoked. Network errors and attacks are usually temporary, so this page will probably work later.
Your certificate has been revoked. This can be seen here:
https://www.ssllabs.com/ssltest/analyze.html?d=support.shamal.net
This is a problem with the cert configured on the server. You'd need to ask GoDaddy why this happened - it could be that you asked to get it reissued, or it could be that they think you're untrustworthy for some reason.
So in theory EVERY client should get a message like that above. But there are some subtleties, which might explain why this is not the case:
If the browser is unable to contact the CA which issued the cert, then it will assume the best and treat the cert as not revoked (so-called "soft fail").
Unlike other browsers, Chrome does not do real-time checks on whether a cert is revoked by default (as they think it is slow and doesn't add that much protection, because of things like "soft fail" - a contentious opinion). Instead they rely on a concept called CRLSets, which are downloaded periodically by Chrome and contain a list of revoked certificates. So there is obviously a delay before a revocation gets into Chrome, and there is some question as to how complete CRLSets are. This might explain why some Chrome clients are rejecting this cert and some are not. Are the ones that reject it newer versions, perhaps?
Lastly, some tools (particularly antivirus products like Avast, or corporate proxy tools) deliberately replace the cert with one of their own so they can still read the traffic to scan for viruses or for other reasons. Of course they shouldn't do this if a cert is revoked, like in this case, but stranger things have happened. Double-click the green padlock to view the certificate and its chain and see if it was issued by someone other than GoDaddy.
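If you prefer to check the served certificate from a script instead of the browser padlock, here is a small Python sketch (the cryptography package is a third-party assumption; note that Python does not check revocation itself, so this works even for a revoked cert):

import ssl
from cryptography import x509  # third-party 'cryptography' package

host = "support.shamal.net"  # the host from the question
pem = ssl.get_server_certificate((host, 443))  # no validation when no CA file is given
cert = x509.load_pem_x509_certificate(pem.encode())

print("Subject:", cert.subject.rfc4514_string())
# The issuer should be GoDaddy; an antivirus or corporate proxy vendor here
# would point to interception as described above.
print("Issuer:", cert.issuer.rfc4514_string())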

Why can I see SSL communication as a plain text in a sniffer?

I've created a WCF service and I expose it via SSL. I have little knowledge about security, but I'm curious why I can see the whole communication as plain text in HTTP Analyzer, even though the POSTs are sent via HTTPS.
When my client application invokes the WCF service, I can see everything in the sniffer - passwords etc.
Does that mean SSL works only on a lower layer - while transporting data? So any evil application can sniff the communication on the client's side, and the encryption only secures us against a man in the middle?
SSL works indeed on a "lower layer" than HTTP. According to the OSI Model, SSL works on the Session Layer, while HTTP is on the Application Layer.
Most of these clientside HTTP Analyzers work from within the browser, analyzing the HTTP traffic on the application layer, before it is processed by the SSL logic. So it is completely normal to see the plain HTTP request.
Concerning security, an evil application installed within the browser can indeed read the traffic. But once it has been processed by the SSL layer, it becomes much harder for an evil application to read the traffic.
SSL works by first authenticating the server to you as the client (am I talking to the one I really want to talk to?). As you can't know all servers and their certificates beforehand, you use some well-known root certificates, which are pre-installed on your OS. These are used to check whether a server is vouched for by an already well-known authority (I don't know you, but some really important server tells me that you indeed are who you say you are).
This authentication step works independently of the encryption of the traffic. No program can decrypt an arbitrary SSL stream just by "installing a root certificate". (As said, these root certificates are already on your machine from the first moment you install an OS on it =)
But if an evil program is able to make you believe that you are talking to a legitimate server - using a forged root certificate, for example - while you are actually talking to malware, it is able to see the contents of the SSL traffic. But then again, you are talking to the evil program itself, not the server you intended to talk to. This is, however, not the case with HTTP Analyzer.
This is, in short, how SSL works, and hopefully it answers your question.
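As a small illustration of that authentication step, here is a minimal Python sketch (example.com stands in for any HTTPS server): the client validates the server's chain against the pre-installed roots during the handshake, before any application data flows.

import socket
import ssl

# create_default_context() loads the root certificates pre-installed on the OS.
ctx = ssl.create_default_context()

with socket.create_connection(("example.com", 443)) as sock:
    # The handshake verifies that the server's certificate chains up to one of
    # those roots and matches the hostname; a forged or self-signed certificate
    # raises ssl.SSLCertVerificationError and no data is exchanged.
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print("Verified connection using", tls.version(), tls.cipher()[0])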
Most likely HTTP Analyzer installs its own root certificate and intercepts the SSL traffic, working as a man in the middle.

vsftpd : Make sure data transfers are encrypted?

So here is my 'problem': I set up an FTP server with vsftpd so that both logins and data transfers should be encrypted.
Here is the interesting part of my vsftpd.conf file.
ssl_enable=YES
allow_anon_ssl=NO
require_ssl_reuse=NO
force_local_data_ssl=YES
force_local_logins_ssl=YES
ssl_tlsv1=YES
ssl_sslv2=YES
ssl_sslv3=YES
rsa_cert_file=/etc/vsftpd/vsftpd.pem
rsa_private_key_file=/etc/vsftpd/vsftpd.pem
ssl_ciphers=HIGH
I am using FileZilla as an FTP client; the connection is configured like this:
Protocol : FTP - File Transfer Protocol
Encryption : Require explicit FTP over TLS
Logon type: Normal
Some things to note :
Encryption : Plain FTP : does not work and I am happy with that.
(Response: 530 Non-anonymous sessions must use encryption.)
Encryption : Require implicit FTP over TLS : does not work either; the connection is refused by the server. I guess it is because I forced the SSL connection.
Now, once the (explicit) connection is established, Filezilla is showing a small lock icon at the bottom of the window saying The connection is encrypted. Click icon for details.
I wanted to make sure that the data transfers were indeed encrypted and not plain, so I captured everything on my Ethernet card with Wireshark while downloading a file from my server to my computer.
Except that I cannot find a single packet labelled as the SSL protocol; everything is just TCP.
I am out of ideas on how to make sure the data transferred is encrypted, even if FileZilla says so, and each time I google "vsftpd how to make sure data transfers are encrypted", the only answers I get are "ssl_enable=YES" or "check the Use SSL box"...
Thank you in advance for helping me !
After a little more research and especially after following the Complete walk through on http://wiki.wireshark.org/SSL, I have a better understanding of the whole thing.
I am answering my own question hoping this will help someone someday, as long as what follows is correct...
Also, writing this down is a good way for me, I think, to see whether I have clearly understood my problem. Any difficulty in writing this answer will prove me wrong.
First :
Typically, SSL uses TCP as its transport protocol.
SSL is carried inside TCP; that is why I couldn't explicitly observe the SSL protocol while capturing packets.
When analyzing a TCP packet, I could only "Follow TCP stream" but not "Follow SSL stream", which misled me into thinking the packet was not holding encrypted data. That is funny, because the observable data was not human-readable ... so it was encrypted.
To be able to decrypt it, I had to provide Wireshark with the encryption key:
RSA keys list
This option specifies the bindings between an IP address, a port, a protocol and a decryption key.
Then I could observe both the encrypted and the decrypted data.
Also, after reading this on http://wiki.filezilla-project.org/ :
When you apply encryption to your FTP server the CPU will have to do many calculations to encrypt the data being sent and decrypt the data being received.
I simply decided to run the UNIX top command while downloading a file. I was able to observe high CPU usage by the FileZilla client process, in contrast to an unencrypted data transfer. This was a second argument confirming that the data transferred was indeed encrypted, and thus needed to be decrypted.
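For anyone who wants a scripted check on top of the Wireshark and top observations, here is a small Python sketch using ftplib (host and credentials are placeholders): with force_local_data_ssl=YES, transfers should only succeed after the data channel has been switched to TLS with PROT P.

from ftplib import FTP_TLS

ftps = FTP_TLS("ftp.example.com")  # placeholder host
ftps.login("user", "password")     # login() upgrades the control channel to TLS first (AUTH TLS)
ftps.prot_p()                      # switch the data channel to TLS as well (PROT P)
ftps.retrlines("LIST")             # this listing now travels over the encrypted data channel
ftps.quit()

Conversely, skipping prot_p() and attempting a transfer should be rejected by the server when force_local_data_ssl=YES, which is another quick way to confirm the setting is being enforced.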