In the interest of avoiding yak-shaving, I'll try to provide as much context as possible.
We have an internal application that's also available on the public internet. This application runs on several instances of Apache on the IBM i - most of these instances require HTTP Basic authentication, except for one instance that acts as the 'welcome page', so to speak. This 'welcome page' has no authentication, but acts as a navigation hub with links for the user to go to other parts of the app (which DO have authentication and run on different instances of Apache).
We also have some documentation stored in Confluence (a wiki application) that runs on a separate server. This wiki application can display the documentation without requiring authentication, but if you authenticate, you then have the option to edit the documentation (assuming you're authorized to do so, of course). But the key is that the documentation is visible without requiring authentication.
My problem is: we want the documentation in Confluence to be accessible from within the main application (both when being accessed internally and over the internet) but, because the documentation is somewhat sensitive, we don't want it accessible to the internet at large.
The solution we came up with was to use a reverse proxy - we configure the Apache instances on the main application such that requests to /help/ on the main application are proxied to the Confluence application. Thus, the Confluence application is not directly exposed to the Internet.
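(For illustration, the kind of proxy configuration involved looks roughly like this - the hostname and port are placeholders, not our real setup:)

# On each main-application Apache instance: forward /help/ to Confluence
ProxyPass        /help/ http://confluence.internal.example.com:8090/
ProxyPassReverse /help/ http://confluence.internal.example.com:8090/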
But this is where the problem starts.
If we just proxy /help/ through the main application Apache instance that doesn't require authentication, then the documentation is available from the main application without a problem - but since you don't require authentication, it's available to everyone on the Internet as well - so that's a no-go.
If we instead proxy '/help/' through the main application Apache instances that DO require authentication, it seems as though the Basic authentication information is passed from the main application servers on to the Confluence server, and then we get an authentication failure, because not everyone who uses the main application has an account on the Confluence server. (For those that do, it works fine - but the majority of users won't have a Confluence account.)
(Possible yak shaving alert from this point forward)
So, it seems as though when dealing with HTTP Basic authentication, if you set up proxy configuration from server A to server B, and set up the proxy on server A to require HTTP Basic authentication, then that authentication information is passed straight through to server B, and in this scenario, server B complains since it doesn't expect authentication information.
My solution to that problem was to set up 2 levels of proxying - use the Apache instances requiring authentication to also require authentication for the proxy to /help/, but have /help/ proxy to a different server (Server C). This Server C doesn't require authentication but is not exposed to the internet. And Server C is configured to proxy /help/ to the actual Confluence server.
I did this on the basis of proxy-chain-auth - an environment variable which seems to indicate that by default, if you have a proxy chain, the authentication information is NOT automatically sent along the chain.
Alas, this did not work - I got an authentication error that seems to indicate that Server C did in fact proxy the authentication info onwards, even though I did not set proxy-chain-auth.
So, that's my yak-shaving journey.
I simply want to set up a configuration such that our documentation stored on Confluence requires some sort of authentication, but that authentication comes from the main application, not from Confluence.
(Without the requirement of having it accessible over the internet, none of this would've been an issue since the Confluence server can be viewed by anyone on its network without a problem).
I hope my question is clear enough - I honestly don't mind being pointed in a different direction to achieve the main goal, with the caveat that I can't change the main application (or Confluence for that matter) from using HTTP Basic Authentication.
Ideas, anyone?
PS. To retrieve the documentation from the Confluence server, I'm actually using their REST API to retrieve the page content - I don't know if that has any relevance, but I just wanted that made clear in case it does.
It turns out that the solution to the issue was pretty straightforward.
For my second proxy that does not require authentication, I had to change the Apache configuration to remove any authorization headers.
RequestHeader unset Authorization
This stops the authentication information from being passed from the second proxy onto Confluence.
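In context, the relevant piece of the second proxy's configuration ends up looking roughly like this (the hostname and port are placeholders for the real Confluence server):

# Server C - the second, non-authenticated proxy (placeholder hostname/port)
<Location /help/>
    # Drop the Basic auth credentials forwarded by the authenticated instances
    RequestHeader unset Authorization
    ProxyPass        http://confluence.internal.example.com:8090/
    ProxyPassReverse http://confluence.internal.example.com:8090/
</Location>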
Related
I'm new to the world of load balancing...
I heard about HAProxy and I wonder whether I can achieve the following objective (I haven't found an answer in my searches so far):
- HAProxy receives an MQTT/HTTP connection with Basic authentication (login/password) or token-based authentication
- HAProxy checks the credentials against a database (or LDAP)
- HAProxy manages access depending on the authenticated user
--> all users/credentials and ACLs should be stored in the database.
Is this possible? Does HAProxy have a system of custom plugins/add-ons to extend its behaviour?
I found ways to define a list of ACLs directly in the configuration against a pre-existing list of logins/passwords, but that isn't dynamic (even if it's cached afterwards).
Thanks a lot for your ideas.
I think this is only supported in HAProxy Enterprise:
The HAProxy Single Sign-On solution [...] is also compatible with Microsoft Active Directory or OpenLDAP servers.
https://www.haproxy.com/documentation/hapee/1-8r1/security/using-sso/
The only plugin I found is an HTTP request check that asks an arbitrary endpoint whether the user is authenticated:
https://github.com/TimWolla/haproxy-auth-request
But it requires a separate web app that responds to those authentication requests.
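For reference, a minimal sketch of how the plugin is wired up, based on its README - the backend names, paths and ports below are placeholders, HAProxy needs to be built with Lua support, and you still have to supply the /auth service yourself:

global
    lua-load /usr/share/haproxy/auth-request.lua

frontend fe_app
    bind :8080
    # Ask the auth service whether this request carries valid credentials
    http-request lua.auth-request be_auth /auth
    http-request deny if ! { var(txn.auth_response_successful) -m bool }
    default_backend be_app

backend be_auth
    # Your own small web app that checks the user against the database or LDAP
    server auth_server 127.0.0.1:9000

backend be_app
    server app1 127.0.0.1:9090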
I am using the Apache Solr server and I want to secure it by enabling authentication and authorization. Is there any way to authenticate Solr apart from htaccess and ZooKeeper?
If you need to have the authentication inside Solr itself, your only existing, supported option is to use the built-in authentication and authorization by uploading a security.json file to ZooKeeper. This supports Kerberos and HTTP Basic authentication.
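As a rough illustration (adapted from the Solr reference guide - the credentials value is a placeholder for the base64-encoded, salted SHA-256 hash of the password followed by the base64-encoded salt), a minimal security.json enabling both plugins looks something like:

{
  "authentication": {
    "class": "solr.BasicAuthPlugin",
    "credentials": { "solr": "<base64 sha256 hash> <base64 salt>" }
  },
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "permissions": [ { "name": "security-edit", "role": "admin" } ],
    "user-role": { "solr": "admin" }
  }
}

It is then uploaded to ZooKeeper with something like server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:9983 -cmd putfile /security.json security.json (the script path and ZooKeeper host depend on your installation).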
There's also a hack based on extracting the bundled Jetty and adding Basic authentication to it before repackaging it, but that makes each upgrade something you have to handle specially, re-applying the same adjustments every time.
If you want to add any method outside of this, you're going to have to implement it yourself - either as a service in front of Solr (which is the usual way), or by extending Solr. The hard part about the latter option is that if you're not going through the regular security.json configuration, you may forget to close down API endpoints.
By adding a service in front of Solr and configuring Solr to only bind to localhost - so that it's not accessible from the internet - you can add and customize any authentication and authorization you want. But it will still require you to be careful if you want to control authorization and access to certain cores. If you want inter-node connectivity (sharding, SolrCloud, etc.) to still work, you'll have to account for that and allow Solr to bind to your local network IPs as well.
More of a theoretical question, but I'm really curious!
I have a two part application:
Apache server hosting my UI
Back-end that services all http requests from the UI
The Apache service proxies all HTTP requests from the UI to the server. So, if the user is reasonably adept, they can reverse engineer our API by inspecting the calls in the browser's developer tools.
Thus, how do I prevent a user from using the server API directly and instead force them to use the UI?
The server can't determine whether a call came from the UI or not, because a user can make a call to myapp.com/apache-proxy/blah/blah/blah from outside of the UI; Apache will get the request and forward it to the server, which will have no idea it isn't coming from the UI.
The option I see is to inject a header into the request from the UI that identifies the UI as the origin of the request. That seems ripe for exploitation, though.
To me, this is more of a networking question, since it's something I'd resolve at the network level. If you run your backend application in a private network (or on a public network with firewall rules) you can configure the backend host to only accept communication from your Apache server.
That way the end user can't connect directly to the API, since it's not accessible to the public. Only the allowed Apache server will be able to communicate with the backend API. The Apache server thus acts as an intermediary between the end user (client side) and the backend API server.
An example diagram from AWS.
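As a very rough sketch of the firewall-rule variant on the backend host (the address and port are made up; the same idea maps onto an AWS security group rule):

# Allow only the Apache proxy (here 10.0.1.10) to reach the backend on port 8080, drop everyone else
iptables -A INPUT -p tcp --dport 8080 -s 10.0.1.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 8080 -j DROP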
You could make the backend server require connections to be authenticated before accepting any requests from them. Then make it so only the Apache server can successfully authenticate in a way that end users cannot replicate. For example, by using SSL/TLS between Apache and the backend, where the backend requires client certificates to be used, and then issue Apache a private certificate that the backend will accept. Then end users will not be able to authenticate with the backend directly.
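A rough sketch of what that looks like if both ends are Apache (paths and hostnames are placeholders; a non-Apache backend would enforce the same thing in its own TLS stack):

# On the Apache proxy - present a client certificate when talking to the backend
SSLProxyEngine on
SSLProxyMachineCertificateFile /etc/apache2/ssl/proxy-client-cert-and-key.pem
ProxyPass        /api/ https://backend.internal.example.com/api/
ProxyPassReverse /api/ https://backend.internal.example.com/api/

# On the backend - require and verify that client certificate
SSLEngine on
SSLVerifyClient require
SSLVerifyDepth  1
SSLCACertificateFile /etc/apache2/ssl/internal-ca.pem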
I have an Apache/2.2.15 web server with the modules mod_shib, mod_ssl, and mod_jk. I have a virtual host which is configured (attached below) with AuthType Shibboleth, SSL certificates, and a JkMount to direct requests using AJP to my Tomcat 8 server after a session is successfully established with the correct IdP. When my HTTP request reaches my app server, I can see the various Shib-* headers, along with the attributes my SP requested from the IdP.
Is there a way my app server can validate the shibsession cookie or other headers? I am trying to protect against the scenario where my web server, which resides in the DMZ, is somehow compromised, and an attacker makes requests to my app server, which resides in an internal zone.
Is there a way I can validate a signature of something available in the headers, to guarantee that the contents did indeed originate from the IDP, and were not manufactured by an attacker who took control of my web server?
Is there something in the OpenSAML library I could use to achieve this?
Is there a way my app server can validate the shibsession cookie or other headers?
mod_shib has already done that difficult work for you. After validating the return of information from the Identity Provider (IdP), mod_shib then sets environment variables (cannot be set by the client) for your application to read and trust. Implementing OpenSAML in your application is unnecessary as mod_shib has done the validation work for you.
From the docs:
The safest mechanism, and the default for servers that allow for it, is the use of environment variables. The term is somewhat generic because environment variables don't necessarily always imply the actual process environment in the traditional sense, since there's often no separate process. It really refers to a set of controlled data elements that the web server supplies to applications and that cannot be manipulated in any way from outside the web server. Specifically, the client has no say in them.
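If you configure the SP with the AJP_ attributePrefix, those values reach Tomcat over AJP (mod_jk) as request attributes, which the client cannot set. A minimal sketch of inspecting them from a servlet - the exact attribute names depend on your attribute-map.xml and the prefix configured in shibboleth2.xml, so treat the specifics as assumptions:

// Sketch only: dump the request attributes forwarded over AJP so you can see
// exactly what mod_shib supplies to the application.
import java.io.IOException;
import java.util.Enumeration;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ShibAttributeDumpServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/plain");
        Enumeration<String> names = req.getAttributeNames();
        while (names.hasMoreElements()) {
            String name = names.nextElement();
            // These attributes are set server-side (mod_shib -> AJP); the client cannot inject them.
            resp.getWriter().println(name + " = " + req.getAttribute(name));
        }
    }
}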
Let's say I'm running a dedicated server with ownCloud and Roundcube on it. My first idea was to protect those URLs with some kind of reverse proxy. However, I would like to make it more secure and implement two-factor authentication.
The idea is to redirect clients to a login page (implemented with the Play Framework); once the user is authenticated, they are free to use ownCloud or Roundcube.
I have been thinking about this problem for a while now; here are my thoughts:
- Use the Play router to filter protected pages
- Redirect to a login page built with Play
- [Possible solution: once authenticated, redirect requests to an internal web server running on a different port that cannot be accessed from outside]
The main challenge is that ownCloud is a PHP app running on Apache, so I need some magic to talk to the Apache server (running Play with Apache as a front end is not ruled out). This solution needs to be somewhat generic so that it can be used for other apps in the future.
I hope my idea is clear; you can think of this configuration as a private back end (with applications running in different environments) for a blog.
The question is: do you think this is the best way to go, considering how Play works and the configuration I want to implement?
Thanks!