Why does a browser in a different domain not respond at all to the "WWW-Authenticate: Negotiate" header sent by mod_auth_kerb? - apache

I have implemented SSO through mod_auth_kerb in our Apache-Active Directory environment and it works just as expected. However, the following behaviour is bugging me:
I requested a Kerberos-protected page from two client machines: one user belonged to the Kerberos-setup domain and the other user belonged to some other domain.
I then compared the HTTP packets on the two machines. On both machines, after the request for the Kerberos-protected page is sent, the server responds with the following HTTP packet:
HTTP/1.1 401 Authorization Required
Date: Wed, 05 Sep 2012 14:25:20 GMT
Server: Apache
WWW-Authenticate: Negotiate
WWW-Authenticate: Basic realm="Kerberos Login"
Content-Length: 60
Connection: close
Content-Type: text/html; charset=iso-8859-1
However, after the above response from the server, the browser on the client machine belonging to the Kerberos-setup domain responds with an Authorization: Negotiate token, whereas the other client's browser (user belonging to some other domain) does not respond at all.
My understanding is that the client belonging to the other domain should also have responded with its own TGT + session key token, which Active Directory should then have rejected. But why this client does not respond at all to the server's WWW-Authenticate: Negotiate challenge is beyond me.
What is even more confusing is that the server's HTTP response (given above) does not contain any information about the domain it is linked to.
So on what basis does the client browser belonging to the correct domain decide that it has to respond to the server's WWW-Authenticate: Negotiate challenge, and on what basis does the client belonging to some other domain decide not to respond to it?
Note: Both client machines run Windows 7, and the Active Directory server is Windows Server 2008.
I am trying to understand mod_auth_kerb's implementation of SSO, and this particular knowledge is key to that.

The module has the option KrbMethodK5Passwd turned on. It sends a Basic challenge to collect your Kerberos credentials, which is pointless for a non-domain client. Disable this option.
There is a hierarchy of strength among the auth mechanisms, and the browser is obliged to choose the strongest one offered: Negotiate, Digest, NTLM, Basic.
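For illustration only (this is a sketch, not taken from the original answer), a mod_auth_kerb location block with the password fallback disabled might look like the following; the realm, keytab path and protected path are placeholders:
<Location /protected>
    AuthType Kerberos
    AuthName "Kerberos Login"
    KrbMethodNegotiate On
    # No Basic fallback, so the server sends only WWW-Authenticate: Negotiate
    KrbMethodK5Passwd Off
    KrbAuthRealms EXAMPLE.COM
    KrbServiceName HTTP
    Krb5KeyTab /etc/apache2/http.keytab
    Require valid-user
</Location>
With KrbMethodK5Passwd Off, the 401 response no longer carries the WWW-Authenticate: Basic realm="Kerberos Login" header, and only clients that can obtain a Kerberos service ticket will answer the Negotiate challenge.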

Related

start-iap-tunnel unable to connect to a listening port

I'm installing OpenVPN Access Server on a Google Cloud instance. Its webUI listens on port 943 using https. It has a self-signed certificate whose name doesn't match the server's hostname (10.150.0.2). I can't start an SSH tunnel. I'm looking for a way to troubleshoot the connection from the IAP service to my server.
The command I'm running is:
gcloud compute start-iap-tunnel vpn 943 --local-host-port=localhost:943
I receive the normal "Testing if tunnel connection works." message.
It errs out with ERROR: (gcloud.compute.start-iap-tunnel) While checking if a connection can be made: Error while connecting [4003: 'failed to connect to backend']. (Failed to connect to port 943)
If I add --log-http to the command invocation the relevant information follows (it looks like a normal req/resp cycle with a 200 that I assume is from my client to the IAP service):
Testing if tunnel connection works.
=======================
==== request start ====
uri: https://oauth2.googleapis.com/token
method: POST
== headers start ==
b'content-type': b'application/x-www-form-urlencoded'
b'user-agent': b'google-cloud-sdk gcloud/367.0.0 command/gcloud.compute.start-iap-tunnel invocation-id/db27de82264f47fcb63f6680afaa8327 environment/None environment-version/None interactive/False from-script/False python/3.7.9 term/xterm-256color (Macintosh; Intel Mac OS X 21.2.0)'
== headers end ==
== body start ==
Body redacted: Contains oauth token. Set log_http_redact_token property to false to print the body of this request.
== body end ==
==== request end ====
---- response start ----
status: 200
-- headers start --
Alt-Svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Content-Encoding: gzip
Content-Type: application/json; charset=utf-8
Date: Fri, 24 Dec 2021 02:11:52 GMT
Expires: Mon, 01 Jan 1990 00:00:00 GMT
Pragma: no-cache
Server: scaffolding on HTTPServer2
Transfer-Encoding: chunked
Vary: Origin, X-Origin, Referer
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 0
-- headers end --
-- body start --
Body redacted: Contains oauth token. Set log_http_redact_token property to false to print the body of this response.
-- body end --
total round trip time (request+response): 0.246 secs
---- response end ----
----------------------
ERROR: (gcloud.compute.start-iap-tunnel) While checking if a connection can be made: Error while connecting [4003: 'failed to connect to backend']. (Failed to connect to port 943)
To my knowledge this is the limit of easily accessible troubleshooting for start-iap-tunnel.
Moving on to the local machine, we can connect to 10.150.0.2:943, although wget then complains about the certificate.
root@viongier:/usr/local/openvpn_as# wget https://10.150.0.2:943
--2021-12-24 02:01:47-- https://10.150.0.2:943/
Connecting to 10.150.0.2:943... connected.
ERROR: The certificate of ‘10.150.0.2’ is not trusted.
ERROR: The certificate of ‘10.150.0.2’ doesn't have a known issuer.
The certificate's owner does not match hostname ‘10.150.0.2’
It seems to me that my client happily connects to the IAP service, which then fails to connect to my server. I would expect to see an IAP error if it were failing because of the cert. The only thing I can think of to test this is generating a certificate with an issuer Google likes (Let's Encrypt, for example).
This message means that the backend does not have a socket open in the listening state. Common reasons are that no service has been started or a firewall is blocking the port.
To allow the Identity Aware Proxy into your VPC, allow traffic from 35.235.240.0/20.
ERROR: (gcloud.compute.start-iap-tunnel) While checking if a connection can be made: Error while connecting [4003: 'failed to connect to backend']. (Failed to connect to port 943)
This error means that the certificate provided does not match the address that the connection is made to:
ERROR: The certificate of ‘10.150.0.2’ is not trusted.
ERROR: The certificate of ‘10.150.0.2’ doesn't have a known issuer.
The certificate's owner does not match hostname ‘10.150.0.2’
Some clients, such as wget, support ignoring SSL certificate validation. For wget, see the --no-check-certificate flag.
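For instance, the earlier wget test against the self-signed endpoint could be repeated as:
wget --no-check-certificate https://10.150.0.2:943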
Once you solve that problem you will run into another set of problems:
Under normal circumstances, you can not use HTTPS with tunnels. Tunnels are a form of man in the middle. There are tricks that can be employed, none of them secure.
Commercial SSL certificates do not support IP addresses, only public domain names. You would need to create your own self-signed certificate, which would not be trusted, or skip certificate validation altogether.
The last issue is that HTTPS endpoints require encryption negotiation from the client party. The start-iap-tunnel command does not initiate encryption (TLS negotiation). This command also does not do any form of certificate exchange and that is why you do not see an IAP error about certificates. This command only transfers data between the tunnel endpoints.
In summary, you cannot use HTTPS with TCP / SSH tunnels without deploying tricks and/or disabling features which defeats the purpose of HTTPS.
Allowing IAP traffic through the firewall allowed my external client to connect to the internal port 943 via an IAP tunnel.
Allowing port 943 from 35.235.240.0/20 solved my problem.
More information is available in the GCP IAP docs.
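For example (a sketch, not part of the original answers; the rule name and network are placeholders), such a firewall rule can be created with gcloud:
# Allow Google's IAP address range to reach port 943 on instances in the default network
gcloud compute firewall-rules create allow-iap-943 \
  --network=default \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:943 \
  --source-ranges=35.235.240.0/20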

RabbitMQ Publish via Management HTTP API not_authorised but works in Web UI

I tried to publish a message to both the default exchange and another exchange via the HTTP Management API, but I always get back an authorization error.
curl -i -u myuser:mypw -XPOST -d'{"properties":{},"routing_key":"my_key","payload":"my body","payload_encoding":"string"}' https://myinstance.rmq.cloudamqp.com/api/exchanges/vhost/myvhost/publish
HTTP/1.1 401 Unauthorized
Server: nginx/1.14.2
Date: Mon, 01 Apr 2019 05:27:10 GMT
Content-Type: application/json
Content-Length: 53
Connection: keep-alive
content-security-policy: default-src 'self'
vary: accept, accept-encoding, origin
{"error":"not_authorised","reason":"Access refused."}%
I tried it both on a self hosted RabbitMQ (installed via helm on k8s) and our CloudAMQP instance.
But if I log in to the Management Web UI with the very same user, then I can publish a message to the exchange and also consume from a queue.
I expect that the Management Web UI just uses the HTTP API for performing these actions, so I am confused about why it works when I do it via the UI.
Reading all vhosts, on the other hand, also works with the HTTP API.
curl -i -u myuser:mypw https://myinstance.rmq.cloudamqp.com/api/vhosts
HTTP/1.1 200 OK
Can somebody explain to me what's going on there? What puzzles me the most is that it works in the UI using the same user:pw.
I figured out the problem: I used the wrong URL path.
For the vhost / and the default exchange, it should be:
http://myinstance.rmq.cloudamqp.com/api/exchanges/%2F/amq.default/publish
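For example (adapting the curl command from the question; not part of the original answer), the publish call with the vhost / encoded as %2F looks like:
curl -i -u myuser:mypw -XPOST \
  -d'{"properties":{},"routing_key":"my_key","payload":"my body","payload_encoding":"string"}' \
  https://myinstance.rmq.cloudamqp.com/api/exchanges/%2F/amq.default/publish
If the message is routed to a queue, the API should answer with something like {"routed":true}.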
In my case, using the CloudAMQP free plan, I needed to use my user name as the vhost in the URL:
https://myinstance.rmq.cloudamqp.com/api/exchanges/[myrandomusernamefromfreeplan]/amq.default/publish

CouchDB Proxy Authentication Doesn't work

When I send an HTTP request to my CouchDB server as shown in the docs here (CouchDB Proxy Authentication), it doesn't give the response shown in the docs, just empty data. What am I doing wrong?
Also, am I able to start a session with this proxy auth? If I try a POST to /_session, I get a 500 error code.
GET /_session HTTP/1.1
Host: 127.0.0.2:5984
User-Agent: curl/7.51.0
Accept: application/json
Content-Type: application/json; charset=utf-8
X-Auth-CouchDB-UserName: john
X-Auth-CouchDB-Roles: blogger
< HTTP/1.1 200 OK
< Cache-Control: must-revalidate
< Content-Length: 132
< Content-Type: application/json
< Date: Sun, 06 Nov 2016 01:10:58 GMT
< Server: CouchDB/2.0.0 (Erlang OTP/17)
<
{"ok":true,"userCtx":{"name":null,"roles":[]},"info":{"authentication_db":"_users","authentication_handlers":["cookie","default"]}}
I found in the CouchDB issue tracker that the Proxy Authentication is broken in version 2.0.0. Either that or the docs aren't updated to indicate that it only works with clusters or something. I changed back to version 1.6.1 and everything works fine. I must say that the documentation for how Proxy Authentication works is very poor.
How it works is that your third-party authentication server needs to know the "[couch_httpd_auth] secret", and when a client authenticates, you generate an HMAC-SHA1 token from the username and that secret. Then, on any HTTP request you make from the client to the CouchDB server, if you include all of the headers:
X-Auth-CouchDB-Roles
X-Auth-CouchDB-UserName
X-Auth-CouchDB-Token
that request will be authenticated as that user.
Also, it is not mentioned in the docs, but POST on the /_session API using these headers does nothing.
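As an illustration only (not part of the original answer; "mysecret" is a placeholder and the host/port come from the question), the token is the hex HMAC-SHA1 of the username keyed with the shared secret, and the request then carries all three headers:
# Compute the token: hex HMAC-SHA1 of the username, keyed with the [couch_httpd_auth] secret
TOKEN=$(echo -n "john" | openssl dgst -sha1 -hmac "mysecret" | awk '{print $2}')
# Send the three proxy-auth headers with the request
curl http://127.0.0.2:5984/_session \
  -H "X-Auth-CouchDB-UserName: john" \
  -H "X-Auth-CouchDB-Roles: blogger" \
  -H "X-Auth-CouchDB-Token: $TOKEN"
If everything is wired up, the userCtx in the response should show "john" and the "blogger" role instead of null.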
It's not the Proxy Authentication itself which is broken in CouchDB 2.0, it's just that in the current release there's no way to configure the authentication handlers like there was in the old 1.6 days.
There are some patches mentioned in the issue tracker which add proxy authentication to the list of authentication handlers. Furthermore, a pull request that brings back configurability to CouchDB 2.0 has been accepted and merged.
However, in order to take advantage of those, I'm afraid you either have to wait until the next release or build CouchDB 2.0 yourself from the sources.
Proxy authentication is fixed as of CouchDB 2.1.1. The latest (>2.1.1) documentation shows how to configure proxy authentication again, along with the important proxy_use_secret option.
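For reference, a rough sketch of the relevant local.ini settings on a CouchDB >= 2.1.1 node (the secret value is a placeholder, and the section names follow the [couch_httpd_auth] convention used above):
[chttpd]
authentication_handlers = {chttpd_auth, proxy_authentication_handler}, {chttpd_auth, cookie_authentication_handler}, {chttpd_auth, default_authentication_handler}
[couch_httpd_auth]
proxy_use_secret = true
; share this value with the third-party authentication server
secret = mysecret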

Is it possible to make API calls from MAMP PRO

I have a localhost setup using MAMP PRO and XIP.IO for sharing on my local network.
I'm also trying to test API requests from within the same application, but I keep getting the following error in the log file even though I am using the correct API credentials, which work on a remote server.
2015-12-20T12:52:52+00:00 DEBUG (7): HTTP/1.1 401 Unauthorized
Content-type: text/html
Date: Sun, 20 Dec 2015 12:52:52 GMT
Server: nginx
Www-authenticate: Basic realm="very closed site"
Content-length: 188
Connection: keep-alive
<html>
<head><title>401 Authorization Required</title></head>
<body bgcolor="white">
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx</center>
</body>
</html>
If this is indeed due to being on localhost, is there a way to receive API callbacks using MAMP PRO?
If you want the third-party API to be able to send you a POST back, your local website/app must be accessible via your public IP.
So if I understand your problem correctly, you just have to configure your router (or internet provider box) and open a port that you redirect to your local MAMP PRO. You can find a lot of tutorials by searching for "Access MAMP Pro remotely".
WARNING: Do this for testing only, and then close the port you opened so you don't leave a security hole.

NTLM authentication fails but Basic authentication works

Here's what happens on the local server when the application invokes an HTTP request against local IIS.
request.Credentials = CredentialCache.DefaultNetworkCredentials;
request.PreAuthenticate = true;
request.KeepAlive = true;
When I execute the request, I can see the following series of HTTP calls in Fiddler:
Request without authorization header, results in 401 with WWW-Authenticate NTLM+Negotiate
Request with Authorization: Negotiate (Base64 string 1), results in 401 with WWW-Authenticate: Negotiate (Base64 string 2)
Request with Authorization: Negotiate (Base64 string 3), results in 401 with WWW-Authenticate: Negotiate (Base64 string 4)
Request with Authorization: Negotiate (Base64 string 3), results in 401 with WWW-Authenticate NTLM+Negotiate
Apparently the client and the server (both running on the same machine) are trying to handshake, but in the end authorization fails.
What is strange is that if I disable Windows authentication of the site and enable Basic authentication and send user/pwd explicitly, it all works. It also works if I use NTLM authentication and try to access the site from the browser specifying my credentials.
Well, after several hours of struggling I figured out what the problem was. In order to be able to inspect network traffic in Fiddler, I had defined a Fiddler rule:
if (oSession.HostnameIs("MYAPP")) { oSession.host = "127.0.0.1"; }
Then I used "MYAPP" instead of "localhost" in the Web app reference, and Fiddler happily displayed all session information.
But server security was far less happy, so this alias basically broke challenge-response authentication on the local server. Once I replaced the alias with "localhost", it all worked.