Kerberos: how does the client know the service name to request a ticket for? - authentication

Let's assume that the client wants to authenticate itself to an HTTP proxy. The proxy is configured with Kerberos and clearly has the service name HTTP/proxy.foo.bar set in its configs. How does the client know which service name to request the ticket for? Does it request the ticket for the domain name it's making the request to (in this case that is indeed proxy.foo.bar), or does it receive the name in the authentication sequence, in a 407 reply in this case (which does contain the Negotiate challenge, but I just don't know if there's a way to look into it)?
I'm trying to debug Kerberos errors on a proxy which suddenly stopped authenticating some clients. The thing is, looking in Wireshark, I see that the client is requesting a ticket not for the service name configured on the proxy (the same name it's instructed to use), HTTP/proxy.foo.bar, but for the name that the proxy's IP resolves to, HTTP/host.foo.bar (well, at least that's the name the proxy's IP resolves to; the client may get it some other way), and the TGS just cannot find one, so an error happens.

So you've got two questions in here (you didn't ask how to actually solve the problem; to do that, more details would be needed - see comments).
You asked "The proxy is configured with Kerberos and clearly has the service name HTTP/proxy.foo.bar set in its configs. How does the client know which service name to request the ticket for?"
A. It works pretty much like this. The client types in a URL in the web browser or clicks on a hyperlink. It looks up, in DNS, the IP address of the host named in the URL. Then it goes to that host, looking for the service defined in the URL, in this case the HTTP service. If it receives a Negotiate challenge (a 401 from a web server, or a 407 from a proxy like yours) because the service is Kerberos-protected, it goes to its KDC and requests a Kerberos service ticket for HTTP/proxy.foo.bar, zips back to proxy.foo.bar and presents the ticket to that host for the HTTP service running on it. The host validates this ticket and, if all is well, the client web browser renders the HTML. You've seen the Kerberos ticket when you ran klist on the client. I don't have any web references for you; this is all off the top of my head.
You also asked "Does it request the ticket for the domain name it's making the request to (in this case that is indeed proxy.foo.bar), or does it receive the name in the authentication sequence, in a 407 reply in this case (which does contain the Negotiate challenge, but I just don't know if there's a way to look into it)?"
A. Your question was a bit hard to follow, but if I am understanding you correctly, the answer is that the web client requests a ticket as a result of the Negotiate authentication challenge (the 407, in your proxy case) from the server (see above). The challenge itself doesn't carry the service name; the client derives it from the host name in the URL.
There are many diagrams sequencing this process on the web, including here: http://www.zeroshell.org/kerberos/Kerberos-operation/
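Since your Wireshark trace shows the client asking for HTTP/host.foo.bar instead, one likely culprit is hostname canonicalization: many Kerberos clients resolve the URL host through forward (and sometimes reverse) DNS before building the service principal. Here is a minimal Python sketch of that behaviour (it loosely mimics MIT krb5 with dns_canonicalize_hostname and rdns enabled; real clients differ, so treat it as an illustration only):

    import socket

    def guess_spn(url_host, use_rdns=True):
        # Forward resolution; a CNAME may already change the name here.
        info = socket.getaddrinfo(url_host, None, flags=socket.AI_CANONNAME)
        canonical = info[0][3] or url_host
        ip = info[0][4][0]
        if use_rdns:
            # Reverse resolution: the PTR record wins, which is how a
            # ticket request for HTTP/proxy.foo.bar silently becomes
            # a request for HTTP/host.foo.bar.
            try:
                canonical = socket.gethostbyaddr(ip)[0]
            except socket.herror:
                pass
        return "HTTP/" + canonical

    # guess_spn("proxy.foo.bar") may return "HTTP/host.foo.bar" if the
    # PTR record for the proxy's IP points at host.foo.bar.

If that is what's happening, fixing the PTR record, or disabling reverse-DNS canonicalization on the clients (rdns = false in krb5.conf for MIT-based stacks), would make the requested name match the configured SPN again.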

Related

Using a RESTful API - Is it secure?

We are partnering with a service provider which exposes their services via RESTful API.
We can authenticate with the API by passing a username and password as URL parameters.
Example: https://example.com/api/service.json?api_user=Username&api_key=Password
I know this is using SSL. However, since the username and password are part of the URL, couldn't this be intercepted by a third party?
No, a third party will only be able to see the destination (example.com). The rest of the URL is actually embedded inside the request.
It helps to understand the process of how an HTTP (or HTTPS) request is made.
determine protocol (in this case HTTPS, using port 443)
get IP address of server using DNS
establish a TCP connection to server (if SSL is involved, it's a bit more complicated)
issue a request to the server on the new connection, which will look something like
GET /api/service.json?api_user=Username&api_key=Password
Since the actual request is part of the encrypted data stream, there's no way for someone monitoring the connection to extract sensitive information.
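If you want to convince yourself of this, here is a small Python sketch of the same flow (host and parameters taken from the question, purely illustrative): the TLS session is set up against the host first, and the path plus query string is only ever written inside that encrypted channel.

    import http.client

    # The TCP + TLS handshake only needs the host and port; an
    # eavesdropper sees "example.com:443" and encrypted records.
    conn = http.client.HTTPSConnection("example.com", 443)

    # The request line, query string included, is sent *inside*
    # the already-encrypted connection.
    conn.request("GET", "/api/service.json?api_user=Username&api_key=Password")
    print(conn.getresponse().status)
    conn.close()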
The previous answers are both technically correct; if you're using HTTPS, the URL and querystring data will be encrypted prior to transmission and can be considered secure.
However, the fact that an API is asking for a username and password as querystring parameters may indicate a somewhat lax approach to security.
For example, many webservers will log the request querystring parameters by default, which means that your plain-text credentials might be lying around on disk somewhere (and many companies will store, or back up, webserver logs in insecure ways).
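As an illustration (a made-up entry in the common log format, not from a real server), such a log line could look like this, with the credentials sitting in plain text on disk:

    192.0.2.10 - - [10/Oct/2023:13:55:36 +0000] "GET /api/service.json?api_user=Username&api_key=Password HTTP/1.1" 200 512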
In short: passing credentials as querystring parameters isn't a security risk per se, but is generally a bad practice and may be symptomatic of larger security issues.
However, since the username and password are part of the URL, couldn't this be intercepted by a third party?
The URL is sent under encryption as well. In other words, the process that secures the channel occurs before the URL is sent to the server.
You're safe.

Delegation in WCF not working - First hop is NTLM?

I'm working on a project which requires delegation in a double-hop scenario. We have a desktop client, connecting to a WCF service using a net.tcp binding, connecting to a SQL database on another server. Our goal is to use the user's credentials to access the SQL database.
Both the WCF service and SQL database are running under the same domain user, which has delegation enabled for the SQL database. The instructions here have been followed, with no success.
Now, some details recorded in our logs:
The login used on the SQL database appears as the user the WCF service is running under, and uses Kerberos.
The login used on the WCF server appears as the client's user, but uses NTLM.
Using either [OperationBehavior(Impersonation = ImpersonationOption.Allowed)] or using (ServiceSecurityContext.Current.WindowsIdentity.Impersonate()) results in commands being run as the client, on the WCF server. This leads me to believe impersonation is working fine.
So, what could be causing the first hop to fall back to NTLM? We suspect it's an SPN issue, but we've registered the SPNs of both the WCF service and the SQL service to the shared domain user. Also, as per the instructions listed above, we've set the SQL service as trusted for delegation on the domain user.
We've used EndpointIdentity.CreateSpnIdentity on the WCF service to set the SPN, and this is the SPN we've registered to the domain user.
Any suggestions? Thanks in advance!
Edit:
We've found something that may have been an issue - we had not used EndpointIdentity.CreateSpnIdentity on the client. After setting this we receive the error
“call to SSPI failed” with an inner exception of “target principal name is incorrect”. But the SPN we've set in the client and the server match, and both match the hostname of the service. If we set both the client and the server SPN to something completely different, or if the client's specified SPN does not match the server's SPN, authentication falls back to NTLM as it did before. We've done research into the error, but cannot find its cause. Any suggestions?
We've also performed packet captures of both cases - falling back to NTLM and when we receive the "call to SSPI failed" error. In both cases, similar packets are sent and received until, on one, NTLM is mentioned. On the other, a "TURN CHANNEL" packet is sent from the client to the server. The packets contain nothing human readable except the IP address of the server until either NTLM is mentioned, and the username and computer names are sent, or the "TURN CHANNEL" packet is sent, which contains what appears to be the SPN, and possibly the hostname. There doesn't appear to be any human readable error codes or error messages. Any suggestions on what to look for in the packets?
We found our error - the client was creating the connection using the IP address of the server. After switching the IP to a fully qualified domain name, the first hop consistently authenticates using Kerberos.
The IP address resolves to the same string we used in both SPNs, but I suppose the client checks whether the connection string matches the portion of the SPN following the slash before performing any other checks.
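In other words, the check seems to be a plain string comparison between the host the client was told to connect to and the host part of the SPN, before any DNS resolution happens. A hypothetical Python sketch of that logic (the real SSPI behaviour is more involved; the names here are invented):

    # Hypothetical sketch, not the actual SSPI implementation.
    def kerberos_possible(endpoint_host, spn):
        spn_host = spn.split("/", 1)[1]  # "HTTP/host.foo.bar" -> "host.foo.bar"
        # "10.0.0.5" never equals "host.foo.bar", even if the IP
        # resolves to that name, so the client falls back to NTLM.
        return endpoint_host.lower() == spn_host.lower()

    print(kerberos_possible("10.0.0.5", "HTTP/host.foo.bar"))      # False -> NTLM
    print(kerberos_possible("host.foo.bar", "HTTP/host.foo.bar"))  # True  -> Kerberos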
We tested our results using both Network Service and our domain user, and as long as the SPN was registered to the computer or user respectively, there were no issues.
Hopefully this answer will save others some time and hassle!
Additional note: while this enabled Kerberos authentication for all connections, we later discovered that it was unnecessary in our situation. Part of our connection to the database was not inside the impersonation using block, which was causing the table read to fail. We have since removed all delegation- and SPN-related code, and the database connection continued to work correctly. Our first hop is using NTLM. We're not exactly sure how the credentials are being used at the SQL server, as our connection appears to be exactly what is described as the double-hop scenario, which should require Kerberos and delegation, but it's hard to argue with what's working. I suspect it may have something to do with the note located under delegation in this document:
When a client authenticates to the front-end service using a user name and password that correspond to a Windows account on the back-end service, the front-end service can authenticate to the back-end service by reusing the client’s user name and password. This is a particularly powerful form of identity flow, because passing user name and password to the back-end service enables the back-end service to perform impersonation, but it does not constitute delegation because Kerberos is not used. Active Directory controls on delegation do not apply to user name and password authentication.
If anyone has any other suggestions for the reason it's working, I'd love to hear them. However, I don't feel it's worth opening another question for.

Http Request Life Cycle

I have recently started my job as a web application backend developer. I am a bit stuck in understanding the lifecycle of an HTTP request.
What I understood is
Every HTTP request first contacts a DNS server, which resolves the request URL's domain to an IP address.
After fetching the webserver's IP address, the request is forwarded to it (via PUT request). A webserver like Apache handles this request and forwards it to the application which has to handle it.
After this I am lost with
How is the response sent by the application back to the user who requested it, and is Apache involved in this?
Can I see the entire flow in my browser with some debugging tools?
Can someone refer some links to understand this in depth?
I think you are a bit wrong on your understanding of it.
If you go to www.google.com (not using any forms, just wanting the site), this is what happens:
First the browser needs to translate www.google.com to an IP address if it does not already know it. If it knows it, nothing happens at this point. If it does not know it, it contacts a DNS server to resolve the name.
Then the browser will open a TCP connection to the IP address of www.google.com and send an HTTP GET request over it. In this example it will be
GET / HTTP/1.1
Host: www.google.com
The server software will get this HTTP request. It will somehow generate an HTTP response and send that back through the TCP connection. How the server does this is server-software dependent. You can, for example, plug application code into Apache, or just make Apache return a file from the filesystem. PHP is an application called by such server software, which then generates the response sent to the browser. When the response is sent, in HTTP version 1.0 the connection is closed. HTTP 1.1 can have persistent connections, though.
When the browser gets the response, it typically renders it on screen. The HTTP request is now done. A click on "search" will send a new request to the server.
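You can reproduce those two steps (open a TCP connection, send the GET, read the response) without a browser. A short Python sketch:

    import socket

    # Connect to the web server (DNS resolution happens inside
    # create_connection) and speak HTTP/1.1 by hand.
    sock = socket.create_connection(("www.google.com", 80))
    sock.sendall(b"GET / HTTP/1.1\r\nHost: www.google.com\r\nConnection: close\r\n\r\n")

    # Read the raw response: status line, headers, then the body.
    response = b""
    while chunk := sock.recv(4096):
        response += chunk
    sock.close()
    print(response.split(b"\r\n")[0])  # e.g. b'HTTP/1.1 200 OK'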
GET, PUT, POST, DELETE and others are HTTP request methods. They have special meaning which you can see in the RFC.
Cookies are commonly used to identify the same user across multiple HTTP requests, called sessions. These cookies are therefore called session cookies.
You can debug the communication by using a network sniffer tool, for example Wireshark. Firefox has a third party plugin called Tamper Data that can change the request before they are sent to the server.
The HTTP RFC is a good source of how it all works.
When the server receives the request from the browser, the browser is bound to some port on its host; the browser's IP address and port number are attached to the request that is sent to the server. The server sends the response back to that IP address and port number.
This is among the popular interview questions asked in various product-based companies.
HTTP is a request-response protocol. For example, a user agent initiates a request to a server, typically by opening a TCP/IP connection to a particular port on a host (port 80 by default). The request itself comprises:
a request line,
a set of request headers, and
an entity.
An HTTP server listening on that port waits for the client to send a request message. Upon receiving the request, the server sends a response that comprises:
a status line,
a set of response headers, and
an entity.
The entity in the request or response can be thought of simply as the payload, which may be binary data. The other items are readable ASCII characters. When the response has been completed, either the browser or the server may terminate the TCP/IP connection, or the browser can send another request.
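Put together, a minimal exchange looks like this (illustrative and trimmed):

    GET /index.html HTTP/1.1
    Host: www.example.com

    HTTP/1.1 200 OK
    Content-Type: text/html
    Content-Length: 39

    <html><body>Hello, world!</body></html>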
I found this resource very helpful in understanding the steps taken during the HTTP lifecycle. Quite interesting actually; I wasn't aware of all the intermediate steps, especially the cache checking when determining the IP address of a URL.
https://medium.com/@maneesha.wijesinghe1/what-happens-when-you-type-an-url-in-the-browser-and-press-enter-bb0aa2449c1a

Are HTTPS URLs encrypted?

Are all URLs encrypted when using TLS/SSL (HTTPS) encryption? I would like to know because I want all URL data to be hidden when using TLS/SSL (HTTPS).
If TLS/SSL gives you total URL encryption then I don't have to worry about hiding confidential information from URLs.
Yes, the SSL connection is between the TCP layer and the HTTP layer. The client and server first establish a secure encrypted TCP connection (via the SSL/TLS protocol) and then the client will send the HTTP request (GET, POST, DELETE...) over that encrypted TCP connection.
Note however (as also noted in the comments) that the domain name part of the URL is sent in clear text during the first part of the TLS negotiation. So, the domain name of the server can be sniffed. But not the rest of the URL.
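You can see where that plaintext name comes from in code: TLS libraries take the server name as an explicit handshake parameter, separate from the HTTP request that follows. A minimal Python sketch (host and path are placeholders):

    import socket
    import ssl

    context = ssl.create_default_context()
    sock = socket.create_connection(("example.com", 443))

    # server_hostname is what ends up in the ClientHello's SNI field,
    # in plain text, before any encryption keys exist.
    tls = context.wrap_socket(sock, server_hostname="example.com")

    # Only now, inside the encrypted channel, is the path sent.
    tls.sendall(b"GET /secret/path?token=abc HTTP/1.1\r\n"
                b"Host: example.com\r\nConnection: close\r\n\r\n")
    print(tls.recv(200))
    tls.close()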
Since nobody provided a wire capture, here's one.
Server Name (the domain part of the URL) is presented in the ClientHello packet, in plain text.
The following shows a browser request to:
https://i.stack.imgur.com/path/?some=parameters&go=here
See this answer for more on TLS version fields (there are 3 of them - not versions, fields that each contain a version number!)
From https://www.ietf.org/rfc/rfc3546.txt:
3.1. Server Name Indication
[TLS] does not provide a mechanism for a client to tell a server
the name of the server it is contacting. It may be desirable for
clients to provide this information to facilitate secure
connections to servers that host multiple 'virtual' servers at a
single underlying network address.
In order to provide the server name, clients MAY include an
extension of type "server_name" in the (extended) client hello.
In short:
FQDN (the domain part of the URL) MAY be transmitted in clear inside the ClientHello packet if SNI extension is used
The rest of the URL (/path/?some=parameters&go=here) has no business being inside ClientHello, since the request URL is an HTTP thing (OSI layer 7), so it will never show up in a TLS handshake (layer 4 or 5). That will come later, in a GET /path/?some=parameters&go=here HTTP/1.1 request, AFTER the secure TLS channel is established.
EXECUTIVE SUMMARY
Domain name MAY be transmitted in clear (if SNI extension is used in the TLS handshake) but URL (path and parameters) is always encrypted.
MARCH 2019 UPDATE
Thank you carlin.scott for bringing this one up.
The payload in the SNI extension can now be encrypted via this draft RFC proposal. This capability only exists in TLS 1.3 (as an option and it's up to both ends to implement it) and there is no backwards compatibility with TLS 1.2 and below.
CloudFlare is doing it and you can read more about the internals here —
If the chicken must come before the egg, where do you put the chicken?
In practice this means that instead of transmitting the FQDN in plain text (like the Wireshark capture shows), it is now encrypted.
NOTE: This addresses the privacy aspect more than the security one since a reverse DNS lookup MAY reveal the intended destination host anyway.
SEPTEMBER 2020 UPDATE
There's now a draft RFC for encrypting the entire Client Hello message, not just the SNI part:
https://datatracker.ietf.org/doc/draft-ietf-tls-esni/?include_text=1
At the time of writing this browser support is VERY limited.
As the other answers have already pointed out, https "URLs" are indeed encrypted. However, your DNS request/response when resolving the domain name is probably not, and of course, if you were using a browser, your URLs might be recorded too.
I agree with the previous answers. To be explicit:
With TLS, the first part of the URL (https://www.example.com/) is still visible as it builds the connection. The second part (/herearemygetparameters/1/2/3/4) is protected by TLS.
However there are a number of reasons why you should not put parameters in the GET request.
First, as already mentioned by others:
- leakage through browser address bar
- leakage through history
In addition to that you have leakage of the URL through the HTTP referer: the user sees site A over TLS, then clicks a link to site B. If both sites are on TLS, the request to site B will contain the full URL from site A in the Referer header of the request, and the admin of site B can retrieve it from the log files of server B.
Entire request and response is encrypted, including URL.
Note that when you use an HTTP proxy, it knows the address (domain) of the target server, but doesn't know the requested path on that server (i.e. the request and response are always encrypted).
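That is because of how HTTPS traverses a proxy: the browser first issues a plaintext CONNECT naming only the host and port, and everything after the tunnel is established is opaque TLS to the proxy. Roughly:

    CONNECT example.com:443 HTTP/1.1
    Host: example.com:443

    HTTP/1.1 200 Connection Established

    [encrypted TLS bytes follow - the proxy cannot read the path or query string]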
Yes and no.
The server address portion is NOT encrypted since it is used to set up the connection.
This may change in future with encrypted SNI and DNS but as of 2018 both technologies are not commonly in use.
The path, query string etc. are encrypted.
Note that for GET requests, the user will still be able to cut and paste the URL out of the location bar, so you probably don't want to put confidential information in there that can be seen by anyone looking at the screen.
An addition to the helpful answer from Marc Novakowski - the URL is stored in the logs on the server (e.g., in /etc/httpd/logs/ssl_access_log), so if you don't want the server to maintain the information over the longer term, don't put it in the URL.
It is now 2019 and TLS v1.3 has been released. According to Cloudflare, the Server Name Indication (SNI, aka the hostname) can be encrypted thanks to TLS v1.3. So I told myself, great! Let's see how it looks within the TCP packets of cloudflare.com.
So I captured a "Client Hello" handshake packet from a response of the Cloudflare server, using Google Chrome as the browser and Wireshark as the packet sniffer. I could still read the hostname in plain text within the Client Hello packet. It is not encrypted.
So beware of what you can read, because this is still not an anonymous connection. A middleware application between the client and the server could log every domain that is requested by a client.
So it looks like the encryption of the SNI requires additional implementation to work along with TLS v1.3.
UPDATE June 2020:
It looks like the Encrypted SNI is initiated by the browser. Cloudflare has a page for you to check if your browser supports Encrypted SNI:
https://www.cloudflare.com/ssl/encrypted-sni/
At this point, I think Google Chrome does not support it. You can activate Encrypted SNI in Firefox manually. When I tried it, for some reason it didn't work instantly; I restarted Firefox twice before it worked:
Type: about:config in the URL field.
Check if network.security.esni.enabled is true.
Clear your cache / restart
Go to the website I mentioned before.
As you can see VPN services are still useful today for people who want to ensure that a coffee shop owner does not log the list of websites that people visit.
A third party that is monitoring traffic may also be able to determine the page visited by examining your traffic and comparing it with the traffic another user has when visiting the site. For example, if there were only 2 pages on a site, one much larger than the other, then comparing the size of the data transfer would tell which page you visited. There are ways this could be hidden from the third party, but they're not normal server or browser behaviour. See for example this paper from SciRate, https://scirate.com/arxiv/1403.0297.
In general the other answers are correct; practically though, this paper shows that pages visited (i.e. the URL) can be determined quite effectively.
You can not always count on privacy of the full URL either. For instance, as is sometimes the case on enterprise networks, supplied devices like your company PC are configured with an extra "trusted" root certificate so that your browser can quietly trust a proxy (man-in-the-middle) inspection of https traffic. This means that the full URL is exposed for inspection. This is usually saved to a log.
Furthermore, your passwords are also exposed and probably logged and this is another reason to use one time passwords or to change your passwords frequently.
Finally, the request and response content is also exposed if not otherwise encrypted.
One example of the inspection setup is described by Checkpoint here. An old style "internet café" using supplied PC's may also be set up this way.
Linking to my answer on a duplicate question. Not only is the URL available in the browser's history and the server-side logs, it's also sent as the HTTP Referer header, which, if you use third-party content, exposes the URL to sources outside your control.
Although there are some good answers already here, most of them are focusing on browser navigation. I'm writing this in 2018 and probably someone wants to know about the security of mobile apps.
For mobile apps, if you control both ends of the application (server and app), as long as you use HTTPS you're secure. iOS or Android will verify the certificate and mitigate possible MitM attacks (that would be the only weak point in all this). You can send sensitive data through HTTPS connections and it will be encrypted during transport. Only your app and the server will know any parameters sent through HTTPS.
The only "maybe" here would be if the client or server is infected with malicious software that can see the data before it is wrapped in HTTPS. But if someone is infected with that kind of software, they will have access to the data no matter what you use to transport it.
While you already have very good answers, I really like the explanation on this website: https://https.cio.gov/faq/#what-information-does-https-protect
In short, using HTTPS hides:
HTTP method
query params
POST body (if present)
Request headers (cookies included)
Status code
Additionally, if you're building a RESTful API, browser leakage and HTTP referer issues are mostly mitigated, as the client may not be a browser and you may not have people clicking links.
If this is the case, I'd recommend an OAuth2 login to obtain a bearer token. In which case the only sensitive data would be the initial credentials... which should probably be in a POST request anyway.

Is a HTTPS query string secure?

I am creating a secure web-based API that uses HTTPS; however, if I allow the users to configure it (including sending the password) using a query string, will this also be secure, or should I force it to be done via a POST?
Yes, it is. But using GET for sensitive data is a bad idea for several reasons:
Mostly HTTP referrer leakage (an external image in the target page might leak the password[1])
Password will be stored in server logs (which is obviously bad)
History caches in browsers
Therefore, even though the query string is encrypted, it's not recommended to transfer sensitive data over the query string.
[1] Although I need to note that the RFC states browsers should not send referrers from HTTPS to HTTP. But that doesn't mean a bad third-party browser toolbar, or an external image/Flash from an HTTPS site, won't leak it.
From a "sniff the network packet" point of view a GET request is safe, as the browser will first establish the secure connection and then send the request containing the GET parameters. But GET URLs will be stored in the user's browser history / autocomplete, which is not a good place to store e.g. password data. Of course this only applies if you take the broader "webservice" definition that might access the service from a browser; if you access it only from your custom application, this should not be a problem.
So using POST, at least for password dialogs, should be preferred. Also, as pointed out in the link littlegeek posted, a GET URL is more likely to be written to your server logs.
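As a sketch of the preferred shape (URL and field names invented for illustration), the credentials go in the request body rather than in the URL; they are still encrypted by TLS, but they stay out of the address bar, history and, typically, the server's access logs:

    import urllib.parse
    import urllib.request

    # Hypothetical endpoint; the point is only where the data travels.
    body = urllib.parse.urlencode({"username": "alice", "password": "12345"}).encode()
    req = urllib.request.Request("https://example.com/login", data=body, method="POST")
    with urllib.request.urlopen(req) as resp:
        print(resp.status)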
Yes, your query strings will be encrypted.
The reason behind is that query strings are part of the HTTP protocol which is an application layer protocol, while the security (SSL/TLS) part comes from the transport layer. The SSL connection is established first and then the query parameters (which belong to the HTTP protocol) are sent to the server.
When establishing an SSL connection, your client will perform the following steps in order. Suppose you're trying to log in to a site named example.com and want to send your credentials using query parameters. Your complete URL may look like the following:
https://example.com/login?username=alice&password=12345
Your client (e.g., browser/mobile app) will first resolve your domain name example.com to an IP address (124.21.12.31) using a DNS request. When querying that information, only domain-specific information is used, i.e., only example.com will be used.
Now, your client will try to connect to the server with the IP address 124.21.12.31 and will attempt to connect to port 443 (the SSL service port, not the default HTTP port 80).
Now, the server at example.com will send its certificates to your client.
Your client will verify the certificates and start exchanging a shared secret key for your session.
After successfully establishing a secure connection, only then will your query parameters be sent via the secure connection.
Therefore, you won't expose sensitive data. However, sending your credentials over an HTTPS session using this method is not the best way. You should go for a different approach.
Yes. The entire text of an HTTPS session is secured by SSL. That includes the query and the headers. In that respect, a POST and a GET would be exactly the same.
As to the security of your method, there's no real way to say without proper inspection.
SSL first connects to the host, so the host name and port number are transferred as clear text. When the host responds and the challenge succeeds, the client will encrypt the HTTP request with the actual URL (i.e. anything after the third slash) and send it to the server.
There are several ways to break this security.
It is possible to configure a proxy to act as a "man in the middle". Basically, the browser sends the request to connect to the real server to the proxy. If the proxy is configured this way, it will connect via SSL to the real server, but the browser will still talk to the proxy. So if an attacker can gain access to the proxy, he can see all the data that flows through it in clear text.
Your requests will also be visible in the browser history. Users might be tempted to bookmark the site. Some users have bookmark sync tools installed, so the password could end up on del.icio.us or some other place.
Lastly, someone might have hacked your computer and installed a keyboard logger or a screen scraper (and a lot of Trojan Horse type viruses do). Since the password is visible directly on the screen (as opposed to "*" in a password dialog), this is another security hole.
Conclusion: When it comes to security, always rely on the beaten path. There is just too much that you don't know, won't think of and which will break your neck.
Yes, as long as no one is looking over your shoulder at the monitor.
I don't agree with the statement about [...] HTTP referrer leakage (an external image in the target page might leak the password) in Slough's response.
The HTTP 1.1 RFC explicitly states:
Clients SHOULD NOT include a Referer header field in a (non-secure) HTTP request if the referring page was transferred with a secure protocol.
Anyway, server logs and browser history are more than sufficient reasons not to put sensitive data in the query string.
Yes, from the moment you establish an HTTPS connection everything is secure. The query string (GET), just like the POST body, is sent over SSL.
You can send the password as an MD5 hash parameter with some salt added, and compare it on the server side for authentication.