I'm trying to capture traffic from Insomnia to debug an API. Since the traffic is HTTPS, I need the key log generated by Insomnia when it performs the handshake with the server so that I can see the traffic in plain text.
There is no documentation about that, at least I couldn't find it.
What I do for this purpose in Firefox, for instance, is configure the SSL key log file, so that when Firefox connects to an HTTPS site I can capture the traffic in Wireshark and see the plain HTTP requests/responses (https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/Key_Log_Format).
How can I log the keys used in the SSL/TLS handshake from Insomnia for the same purpose? Thanks.
It appears this has already been implemented in Insomnia.
In my case, once I defined a file path in the SSLKEYLOGFILE environment variable, the file was only populated after launching Insomnia from that same terminal and then making a request.
Opening Insomnia via the GUI doesn't seem to take SSLKEYLOGFILE into account.
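For reference, here's a minimal sketch of launching Insomnia with the variable set from a script; the binary name and log path are assumptions, so adjust them for your install:

```python
import os
import subprocess

# Both paths are assumptions -- adjust for your system.
keylog_path = os.path.expanduser("~/insomnia_keys.log")

# Insomnia must inherit SSLKEYLOGFILE from the environment that launches it;
# starting it from a desktop shortcut won't pass the variable through.
env = dict(os.environ, SSLKEYLOGFILE=keylog_path)
subprocess.Popen(["insomnia"], env=env)
```

Point Wireshark at the same file (Preferences > Protocols > TLS > (Pre)-Master-Secret log filename) and the captured HTTPS requests should show up decrypted.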
Related
By watching Computerphile YouTube videos, I know that today's browsers perform a TLS Handshake with every HTTPS website they display for me on my PC. For this question, let us assume the request is pointed at my server, running an Express API, protected by a valid SSL certificate. Will a TLS Handshake be performed even when I send a POST request with the help of:
Requests module in Python, a simple POST request to my server from a Python script.
NodeJS (Express.JS), a simple POST request (containing username and password) from an HTML webpage to my server.
From a mobile app programmed in MIT App Inventor 2, which gives me an option of making a POST request.
... and not a browser?
I am asking this question in regard to an app I am programming, in which the user has to identify himself with a key and a password, and I want that information (login and the rest) to be securely conveyed to my VPS.
HTTPS means HTTP inside TLS. This means accessing an https:// URL always requires TLS and thus a TLS handshake. It does not matter whether the client is a browser, Python code, NodeJS, or anything else. It does not matter whether it is a GET or a POST request either.
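As an illustration (a minimal sketch; the URL and credentials are placeholders, not your API), a plain Python requests call to an https:// URL triggers the full handshake and certificate check before any of the request is sent:

```python
import requests

# requests performs the TLS handshake and certificate verification before any
# part of the request (method, path, headers, body) goes on the wire.
resp = requests.post(
    "https://api.example.com/login",                   # placeholder URL
    json={"username": "alice", "password": "secret"},  # placeholder credentials
    timeout=10,
)
print(resp.status_code)
```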
I'm trying to build a complete web caching proxy using Boost Asio and LibCURL. I've already built the server and everything works fine: it receives HTTP requests (GET, POST, uploads using POST, ...) correctly and also sends the responses back to the browser correctly.
Now I want to extend it so it can handle HTTPS requests. I read about it on the LibCURL website, http://curl.haxx.se/libcurl/c/libcurl-tutorial.html (proxy section); I understood how it works and I have a clear idea of how it should be done. But I didn't find good documentation about how proxies handle HTTPS requests, specifically:
What are the possible messages (information, format, length, ...) exchanged between the source application and the proxy?
Things to consider.
...
Thanks in advance :-) .
You will receive the CONNECT command in plain text and respond to it in plain text as well; the communications after that will be encrypted. If your proxy is to be an SSL endpoint, which is highly problematic given that HTTPS requires a certificate that matches the target host address, you will then need to enter SSL mode on both connections. More probably you should just start copying bytes in both directions without attempting to process the contents.
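A rough illustration of that flow (a minimal Python sketch rather than Boost Asio/C++; the listening port is made up and there is no error handling or caching) for a CONNECT-only tunnel:

```python
import socket
import threading

def pipe(src, dst):
    # Relay raw bytes until one side closes; the proxy never looks inside.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass

def handle_client(client_sock):
    # The CONNECT line arrives in plain text, e.g. "CONNECT example.com:443 HTTP/1.1".
    request = client_sock.recv(4096).decode("latin-1")
    method, target = request.split()[:2]
    if method != "CONNECT":
        client_sock.close()
        return
    host, port = target.rsplit(":", 1)

    upstream = socket.create_connection((host, int(port)))
    client_sock.sendall(b"HTTP/1.1 200 Connection Established\r\n\r\n")

    # Everything after this point is TLS between the browser and the origin server.
    threading.Thread(target=pipe, args=(upstream, client_sock), daemon=True).start()
    pipe(client_sock, upstream)
    upstream.close()
    client_sock.close()

def main():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", 8888))
    server.listen()
    while True:
        conn, _ = server.accept()
        threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    main()
```

The key point matches the answer: the proxy only ever sees the CONNECT line and the 200 response in plain text; everything after that is opaque TLS traffic it simply relays.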
I'm trying to get a better understanding of how SSL works so I installed a self-signed SSL cert on my server for testing.
When I post data to an HTTPS url on the test server, Chrome developer tools shows all the data in plain text. Is that what I should expect or should the data appear as encrypted in the developer tools?
I tried running a packet sniffer (Cocoa Packet Analyzer) and I don't see any of the data that I'm trying to post in plain text, but some messages do show the domain I'm posting to (only the domain, no query params or other data). Is that normal? I was under the impression that everything, including the URL, would be encrypted.
The Chrome developer tools wouldn't be very helpful if they just showed the encrypted data. The tools hook in inside the browser, before the data gets encrypted and sent to the server.
As you have noticed, a packet sniffer shows that the HTTP message sent over SSL is encrypted on the wire. The domain name is not encrypted because it is needed in plain text for the DNS lookup (and it also appears in the clear in the TLS handshake's SNI field) so that your data reaches the correct server.
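If you do want the sniffer to show the decrypted payload, you need the TLS session keys, as in the first question in this thread. A hedged sketch (the log path and URL are placeholders) using Python's ssl module, which since 3.8 can write the same NSS key log format Firefox uses:

```python
import ssl
import urllib.request

# Write the TLS session secrets to a key log file; point Wireshark's
# "(Pre)-Master-Secret log filename" preference at this file to decrypt the capture.
ctx = ssl.create_default_context()
ctx.keylog_filename = "/tmp/sslkeys.log"   # example path

# On the wire this request is encrypted; only with the key log can the sniffer
# reconstruct the plain HTTP exchange.
with urllib.request.urlopen("https://example.com/", context=ctx) as resp:
    print(resp.status)
```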
Are all URLs encrypted when using TLS/SSL (HTTPS) encryption? I would like to know because I want all URL data to be hidden when using TLS/SSL (HTTPS).
If TLS/SSL gives you total URL encryption then I don't have to worry about hiding confidential information from URLs.
Yes, the SSL connection is between the TCP layer and the HTTP layer. The client and server first establish a secure encrypted TCP connection (via the SSL/TLS protocol) and then the client will send the HTTP request (GET, POST, DELETE...) over that encrypted TCP connection.
Note however (as also noted in the comments) that the domain name part of the URL is sent in clear text during the first part of the TLS negotiation. So, the domain name of the server can be sniffed. But not the rest of the URL.
Since nobody provided a wire capture, here's one.
Server Name (the domain part of the URL) is presented in the ClientHello packet, in plain text.
The following shows a browser request to:
https://i.stack.imgur.com/path/?some=parameters&go=here
See this answer for more on TLS version fields (there are 3 of them - not versions, fields that each contain a version number!)
From https://www.ietf.org/rfc/rfc3546.txt:
3.1. Server Name Indication
[TLS] does not provide a mechanism for a client to tell a server the name of the server it is contacting. It may be desirable for clients to provide this information to facilitate secure connections to servers that host multiple 'virtual' servers at a single underlying network address.
In order to provide the server name, clients MAY include an extension of type "server_name" in the (extended) client hello.
In short:
The FQDN (the domain part of the URL) MAY be transmitted in the clear inside the ClientHello packet if the SNI extension is used.
The rest of the URL (/path/?some=parameters&go=here) has no business being inside the ClientHello, since the request URL is an HTTP thing (OSI Layer 7), so it will never show up in the TLS handshake (Layer 4 or 5). It comes later, in a GET /path/?some=parameters&go=here HTTP/1.1 request, AFTER the secure TLS channel is established.
EXECUTIVE SUMMARY
The domain name MAY be transmitted in the clear (if the SNI extension is used in the TLS handshake), but the rest of the URL (path and parameters) is always encrypted.
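To see where each piece travels, here's a minimal sketch (not the capture setup above; example.com is a placeholder host) using Python's ssl module. The hostname is handed to the TLS layer and shows up in the ClientHello as SNI, while the path and query string are only written after the handshake completes:

```python
import socket
import ssl

host = "example.com"                       # ends up in the ClientHello (SNI), in clear text
path = "/path/?some=parameters&go=here"    # only travels inside the encrypted request

context = ssl.create_default_context()
with socket.create_connection((host, 443)) as tcp:
    with context.wrap_socket(tcp, server_hostname=host) as tls:
        # Everything written here, including the request line with the path and
        # query string, is encrypted; a sniffer sees only TLS records.
        request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        tls.sendall(request.encode())
        print(tls.recv(4096).decode(errors="replace"))
```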
MARCH 2019 UPDATE
Thank you carlin.scott for bringing this one up.
The payload in the SNI extension can now be encrypted via this draft RFC proposal. This capability only exists in TLS 1.3 (as an option and it's up to both ends to implement it) and there is no backwards compatibility with TLS 1.2 and below.
CloudFlare is doing it, and you can read more about the internals here: If the chicken must come before the egg, where do you put the chicken?
In practice this means that instead of transmitting the FQDN in plain text (like the Wireshark capture shows), it is now encrypted.
NOTE: This addresses the privacy aspect more than the security one since a reverse DNS lookup MAY reveal the intended destination host anyway.
SEPTEMBER 2020 UPDATE
There's now a draft RFC for encrypting the entire Client Hello message, not just the SNI part:
https://datatracker.ietf.org/doc/draft-ietf-tls-esni/?include_text=1
At the time of writing, browser support is VERY limited.
As the other answers have already pointed out, https "URLs" are indeed encrypted. However, your DNS request/response when resolving the domain name is probably not, and of course, if you were using a browser, your URLs might be recorded too.
I agree with the previous answers. To be explicit: with TLS, the first part of the URL (https://www.example.com/) is still visible as it builds the connection. The second part (/herearemygetparameters/1/2/3/4) is protected by TLS.
However there are a number of reasons why you should not put parameters in the GET request.
First, as already mentioned by others:
- leakage through browser address bar
- leakage through history
In addition to that, you have leakage of the URL through the HTTP referer: the user views site A over TLS, then clicks a link to site B. If both sites are on TLS, the request to site B will contain the full URL from site A in the Referer header of the request, and the admin of site B can retrieve it from the log files of server B.
The entire request and response are encrypted, including the URL.
Note that when you use an HTTP proxy, it knows the address (domain) of the target server, but it doesn't know the requested path on that server (i.e., the request and response are still encrypted).
Yes and no.
The server address portion is NOT encrypted since it is used to set up the connection.
This may change in the future with encrypted SNI and encrypted DNS, but as of 2018 neither technology is commonly in use.
The path, query string etc. are encrypted.
Note that for GET requests the user will still be able to cut and paste the URL out of the location bar, so you will probably not want to put confidential information in there where it can be seen by anyone looking at the screen.
An addition to the helpful answer from Marc Novakowski - the URL is stored in the logs on the server (e.g., in /etc/httpd/logs/ssl_access_log), so if you don't want the server to maintain the information over the longer term, don't put it in the URL.
It is now 2019 and TLS v1.3 has been released. According to Cloudflare, the Server Name Indication (SNI, aka the hostname) can be encrypted thanks to TLS v1.3. So I told myself: great! Let's see how it looks within the TCP packets of cloudflare.com.
So I captured a "client hello" handshake packet sent to the Cloudflare server, using Google Chrome as the browser and Wireshark as the packet sniffer. I can still read the hostname in plain text within the ClientHello packet, as you can see below. It is not encrypted.
So, beware of what you can read, because this is still not an anonymous connection. A middleware application between the client and the server could log every domain that is requested by a client.
So, it looks like encryption of the SNI requires additional implementation work to go along with TLS v1.3.
UPDATE June 2020:
It looks like the Encrypted SNI is initiated by the browser. Cloudflare has a page for you to check if your browser supports Encrypted SNI:
https://www.cloudflare.com/ssl/encrypted-sni/
At this point, I think Google Chrome does not support it. You can activate Encrypted SNI in Firefox manually. When I tried it, for some reason it didn't work instantly; I had to restart Firefox twice before it worked:
Type: about:config in the URL field.
Check if network.security.esni.enabled is true.
Clear your cache / restart
Go to the website I mentioned before.
As you can see, VPN services are still useful today for people who want to ensure that a coffee shop owner does not log the list of websites that people visit.
A third party that is monitoring traffic may also be able to determine the page visited by examining your traffic and comparing it with the traffic another user has when visiting the site. For example, if a site had only 2 pages, one much larger than the other, then comparing the size of the data transfer would tell which page you visited. There are ways this could be hidden from the third party, but they're not normal server or browser behaviour. See for example this paper from SciRate, https://scirate.com/arxiv/1403.0297.
In general the other answers are correct; practically, though, this paper shows that the pages visited (i.e., the URL) can be determined quite effectively.
You cannot always count on privacy of the full URL either. For instance, as is sometimes the case on enterprise networks, supplied devices like your company PC are configured with an extra "trusted" root certificate so that your browser can quietly trust a proxy's (man-in-the-middle) inspection of HTTPS traffic. This means that the full URL is exposed for inspection, and it is usually saved to a log.
Furthermore, your passwords are also exposed and probably logged and this is another reason to use one time passwords or to change your passwords frequently.
Finally, the request and response content is also exposed if not otherwise encrypted.
One example of the inspection setup is described by Checkpoint here. An old-style "internet café" using supplied PCs may also be set up this way.
Linking to my answer on a duplicate question: not only is the URL available in the browser's history and the server-side logs, it's also sent as the HTTP Referer header, which, if you use third-party content, exposes the URL to sources outside your control.
Although there are some good answers here already, most of them focus on browser navigation. I'm writing this in 2018 and probably someone wants to know about the security of mobile apps.
For mobile apps, if you control both ends of the application (server and app), then as long as you use HTTPS you're secure. iOS and Android will verify the certificate and mitigate possible MitM attacks (that would be the only weak point in all of this). You can send sensitive data through HTTPS connections and it will be encrypted during transport. Only your app and the server will know the parameters sent over HTTPS.
The only "maybe" here would be if the client or server is infected with malicious software that can see the data before it is wrapped in HTTPS. But if someone is infected with that kind of software, they will have access to the data no matter what you use to transport it.
While you already have very good answers, I really like the explanation on this website: https://https.cio.gov/faq/#what-information-does-https-protect
In short, using HTTPS hides:
HTTP method
query params
POST body (if present)
Request headers (cookies included)
Status code
Additionally, if you're building a RESTful API, browser leakage and HTTP referer issues are mostly mitigated, as the client may not be a browser and you may not have people clicking links.
If this is the case, I'd recommend an OAuth2 login to obtain a bearer token, in which case the only sensitive data would be the initial credentials... which should probably be in a POST request anyway.
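A hedged sketch of that flow (the endpoint, client ID, and response shape are assumptions, not a specific provider's API):

```python
import requests

# Hypothetical token endpoint and credentials -- the response shape
# (an "access_token" field) is an assumption about the OAuth2 server.
token_resp = requests.post(
    "https://api.example.com/oauth/token",
    data={
        "grant_type": "password",
        "client_id": "my-client",
        "username": "alice",
        "password": "s3cret",
    },
    timeout=10,
)
token = token_resp.json()["access_token"]

# The bearer token travels in a header, never in the query string, so it stays
# out of address bars, history, and Referer headers.
profile = requests.get(
    "https://api.example.com/v1/profile",
    headers={"Authorization": f"Bearer {token}"},
    timeout=10,
)
print(profile.status_code)
```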
I am creating a secure web-based API that uses HTTPS; however, if I allow users to configure it (including sending a password) using a query string, will this also be secure, or should I force it to be done via a POST?
Yes, it is. But using GET for sensitive data is a bad idea for several reasons:
Mostly HTTP referrer leakage (an external image in the target page might leak the password[1])
Password will be stored in server logs (which is obviously bad)
History caches in browsers
Therefore, even though the query string is encrypted, it's not recommended to transfer sensitive data in the query string.
[1] Although I need to note that the RFC states that browsers should not send referrers from HTTPS to HTTP. But that doesn't mean a bad third-party browser toolbar or an external image/Flash object from an HTTPS site won't leak it.
From a "sniff the network packet" point of view a GET request is safe, as the browser will first establish the secure connection and then send the request containing the GET parameters. But GET url's will be stored in the users browser history / autocomplete, which is not a good place to store e.g. password data in. Of course this only applies if you take the broader "Webservice" definition that might access the service from a browser, if you access it only from your custom application this should not be a problem.
So using post at least for password dialogs should be preferred. Also as pointed out in the link littlegeek posted a GET URL is more likely to be written to your server logs.
Yes, your query strings will be encrypted.
The reason is that query strings are part of the HTTP protocol, which is an application layer protocol, while the security (SSL/TLS) part comes from the transport layer. The SSL connection is established first and then the query parameters (which belong to the HTTP protocol) are sent to the server.
When establishing an SSL connection, your client will perform the following steps in order. Suppose you're trying to log in to a site named example.com and want to send your credentials using query parameters. Your complete URL may look like the following:
https://example.com/login?username=alice&password=12345
Your client (e.g., browser/mobile app) will first resolve your domain name example.com to an IP address (124.21.12.31) using a DNS request. When querying that information, only domain specific information is used, i.e., only example.com will be used.
Now, your client will try to connect to the server with the IP address 124.21.12.31 and will attempt to connect to port 443 (SSL service port not the default HTTP port 80).
Now, the server at example.com will send its certificates to your client.
Your client will verify the certificates and start exchanging a shared secret key for your session.
After successfully establishing a secure connection, only then will your query parameters be sent via the secure connection.
Therefore, you won't expose sensitive data. However, sending your credentials over an HTTPS session using this method is not the best way. You should go for a different approach.
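To make the last step concrete, here's a minimal sketch (placeholder URL and credentials) with the requests library:

```python
import requests

# The TLS handshake described in the steps above happens first; only then does
# requests send "GET /login?username=alice&password=12345 HTTP/1.1" inside the
# encrypted channel. The URL and credentials here are placeholders.
resp = requests.get(
    "https://example.com/login",
    params={"username": "alice", "password": "12345"},
    timeout=10,
)
print(resp.url)   # full URL with query string: visible locally, encrypted on the wire
```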
Yes. The entire text of an HTTPS session is secured by SSL. That includes the query and the headers. In that respect, a POST and a GET would be exactly the same.
As to the security of your method, there's no real way to say without proper inspection.
SSL first connects to the host, so the host name and port number are transferred as clear text. When the host responds and the handshake succeeds, the client encrypts the HTTP request with the actual URL (i.e., anything after the third slash) and sends it to the server.
There are several ways to break this security.
It is possible to configure a proxy to act as a "man in the middle". Basically, the browser sends the request to connect to the real server to the proxy. If the proxy is configured this way, it will connect via SSL to the real server, but the browser will still talk to the proxy. So if an attacker can gain access to the proxy, they can see all the data that flows through it in clear text.
Your requests will also be visible in the browser history. Users might be tempted to bookmark the site. Some users have bookmark sync tools installed, so the password could end up on del.icio.us or some other place.
Lastly, someone might have hacked your computer and installed a keyboard logger or a screen scraper (and a lot of Trojan Horse type viruses do). Since the password is visible directly on the screen (as opposed to "*" in a password dialog), this is another security hole.
Conclusion: When it comes to security, always rely on the beaten path. There is just too much that you don't know, won't think of and which will break your neck.
Yes, as long as no one is looking over your shoulder at the monitor.
I don't agree with the statement about [...] HTTP referrer leakage (an external image in the target page might leak the password) in Slough's response.
The HTTP 1.1 RFC explicitly states:
Clients SHOULD NOT include a Referer header field in a (non-secure) HTTP request if the referring page was transferred with a secure protocol.
Anyway, server logs and browser history are more than sufficient reasons not to put sensitive data in the query string.
Yes, from the moment you establish an HTTPS connection, everything is secure. The query string (GET), like the POST body, is sent over SSL.
You can send the password as an MD5 hash parameter with some salt added. Compare it on the server side for authentication.