Apache as reverse proxy with authentication passed from back to front - apache

We have an application running on WebLogic 10.3, with authentication handled by the application itself. We want to put the WebLogic server behind an Apache server. The idea is that we will have some public content on the Apache server, and the application will be accessed through the reverse proxy. That's pretty standard. The issue is that some content on the Apache server can only be accessed if the user has logged in to the application. So basically the Apache server will serve three types of content, on different URIs:
/ -> Will contain the public information, and will be served by Apache
/myApp -> Will be redirected by Apache to the WebLogic server behind it
/private -> Will contain the private static information. This should only be accessed if the user has previously logged in successfully to myApp.
My question (I'm a total newbie with Apache) is whether this is possible. My idea is that the application can set a cookie on its responses indicating that the user has logged in to the application, and that Apache will check for that cookie when the user tries to access /private.
Any thoughts?

The / public information is no problem; it's straightforward. Using ProxyPass or ProxyPassMatch to reverse proxy "/myApp" to your internal WebLogic server is also straightforward. You may need a couple of other options to make sure the proxy hostname and cookie domains are set up correctly. But setting up static protected information in "/private" is going to be a little more tricky.
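A minimal sketch of the "/myApp" proxy part (the host names and port here are placeholders, not from the question):

```apache
# Reverse proxy "/myApp" to the internal WebLogic server (placeholder host/port):
ProxyPass        /myApp http://weblogic.internal:7001/myApp
ProxyPassReverse /myApp http://weblogic.internal:7001/myApp

# Rewrite the Domain attribute of cookies WebLogic issues for its own hostname,
# so the browser sends them back to the Apache front end:
ProxyPassReverseCookieDomain weblogic.internal www.example.com

# Forward the original Host header so the application generates matching URLs:
ProxyPreserveHost On
```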
1) You can check the existence of the cookie set by myApp using mod_rewrite, something like this:
RewriteCond %{HTTP_COOKIE} !the_name_of_the_auth_cookie
RewriteRule ^private - [F,L]
The problem with checking a cookie through something like this is that there's no way to verify that the cookie is actually a valid session. People can arbitrarily create a cookie with that name and be able to access the data in /private.
2) You could set it up so that whenever something in "/private" is accessed, the request is rewritten to a PHP script (or similar) that can check the cookie to ensure that it's a valid session cookie, then serve the requested page. Something like:
RewriteRule ^private/(.*)$ /cookie_check.php?file=$1 [L]
So when someone accesses, for example, "/private/reports.pdf", it gets internally redirected to "/cookie_check.php?file=reports.pdf", and it's up to this PHP script to access whatever it needs in order to validate the cookie that /myApp has set up. If the cookie is a valid session, it reads the "reports.pdf" file and sends it to the browser; otherwise it returns FORBIDDEN.
I think this is the preferable way of handling this.
3) If you can't run PHP or any other scripts, or the cookie cannot be verified locally (e.g. with a database lookup of a session_id or something similar), then you'll have to proxy from within WebLogic. This is more or less the same basic idea as routing access to "/private" through "cookie_check.php", except it's an app on the WebLogic server. Just like /myApp, you'll need to set up a reverse proxy to access it; this app will get the request (which has been internally rewritten from "/private/some_file"), check the cookie's validity, read the "some_file" file ON THE APACHE SERVER, then send it to the browser, or send FORBIDDEN. This is the general idea:
ProxyPass /CheckCookie http://internal_server/check_cookie_app
RewriteCond %{REMOTE_HOST} !internal_server
RewriteRule ^private/(.*)$ /CheckCookie?file=$1 [L]
This condition reroutes all requests for "/private" that didn't originate from "internal_server" through the /CheckCookie app, and since the app is running on "internal_server", it can access the files in "/private" just fine. This is kind of a roundabout way of doing it, but if the validity of session cookies issued by /myApp can only be checked on the WebLogic server, you'll have to reroute requests back and forth like this, or something similar.

Related

Http url going to https url without using permanent redirect

I have just added SSL support to my website.
However, unlike most websites, if you go to the HTTP URL it doesn't automatically change to HTTPS.
The system administrator resolved this by configuring a permanent redirect (301). However, the server is also used to verify licenses from my Java desktop application, and the permanent redirect caused that code to fail because it just receives HTTP response 301, so we had to remove the 301.
So is there another way for a user to enter the non-SSL URL and have it change to the secure version, without breaking my application code that also makes calls to the non-SSL URL?
FOR BROWSER(S) ONLY, you can redirect with a meta-refresh or JavaScript instead of HTTP; Java won't interpret meta or JS even if your response to an application request is HTML, which API responses usually aren't. This is not a permanent redirect, so it satisfies the constraint stated (unnecessarily) in your question. In addition, you could add HSTS (on the HTTPS connection only) so that subsequent browser requests to this domain (and optionally any subdomains) are forced to HTTPS before being sent, for a period of time (commonly several months or a year).
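The HSTS part is a single response header, set only in the HTTPS virtual host so plain-HTTP responses to the Java client are unaffected (a sketch, assuming mod_headers is enabled):

```apache
# Inside the HTTPS <VirtualHost> only; max-age is one year in seconds.
Header always set Strict-Transport-Security "max-age=31536000"

# Optionally cover subdomains as well:
# Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
```

Browsers that have seen this header will rewrite future http:// requests to https:// before the request leaves the machine; non-browser clients such as the Java application simply ignore it.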

Only allow access to files in directory from the website they are a part of

I know there are a lot of similar questions out there, and I've trawled them all, but I can't seem to get any of the solutions to work.
I have a folder on the root of my website containing uploaded files that can be viewed and downloaded from the site when a user is logged in. They are here: https://example.com/uploads (for example). I need the site to continue to be able to access them to display them (some are images) and to provide links for download (PDFs etc.) so the user can download them, but I want to prevent anyone who gets hold of the URL of a particular file from downloading it directly, like this: https://example.com/uploads/2020/02/myfile.pdf. I also want to stop these URLs getting into search engines (or, if they do, have the server prevent them from being accessed directly).
I've tried adding an .htaccess file in the uploads directory with the following content:
Order Deny,Allow
Deny from all
Allow from 127.0.0.1
And I've tried
Order Allow,Deny
Deny from all
Allow from 127.0.0.1
...as I read that might allow HTTPS calls from the site itself as well as local URLs.
But both forbid the site's own requests as well as direct URL requests, which is no good.
Is there a way to do this?
The user interface that provides the 'official' access to the files has user authentication, yes, but the files still exist in a directory that won't stop anyone getting to them if they know the URL.
You need to protect the files using the same authentication system that you are using to protect access to the user interface. The only way you could protect these resources by IP address (the client's IP address) - as you are currently attempting in .htaccess - is if the client's IP were fixed and known in advance (but if that were the case, you wouldn't need another form of authentication to begin with).
So, this will primarily be an exercise in whatever scripting language/CMS is being used to authenticate the "user interface".
What you can use .htaccess for is to rewrite requests for these files to your server-side script that handles the authentication and then serves the file to the client once authenticated.
For example:
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} -f
RewriteRule ^uploads/. /serve-protected-file.php [L]
Any request for /uploads/<something> (e.g. /uploads/2020/02/myfile.pdf) that maps to a valid file is routed to your script: /serve-protected-file.php.
/serve-protected-file.php would then need to do something like the following:
// 1. Parse the file being requested from REQUEST_URI
// 2. Is the requested file "protected"?
// (Serving both protected and public files from the same directory?)
// 3. If not protected then serve/stream the resource to the client. END
// 4. If protected then authenticate the user...
// 5. If user authenticated then serve/stream the resource to the client. END
// 6. Resource is protected and user not authenticated...
// Serve a 403 Forbidden. END
(Ideally, the location of these "protected" resources would be entirely outside of the document root - so they are "private" by default - and the URL the user uses to access these resources is entirely virtual - then you probably wouldn't need any additional coding in .htaccess and everything would be implemented by your front-controller - but that all depends on how your site is implemented and the way in which URLs are routed.)

rewrite subdomain url to www apache php (using slim framework)

I have a website in Angular using an API. Now I want to create automated landing pages.
My API URL is made like this: (https://) system.mydomain.com/api - it's a REST API using the Slim framework.
Now I have created routes for the landing pages like (https://) system.mydomain.com/content/seo-name-of-item.
This works, but I don't want to show "system.mydomain.com" in this case (i.e. for the "content" URIs); instead I want it to be (https://) mydomain.com/content/seo-name-of-item and/or (https://) www.mydomain.com/content/seo-name-of-item.
What is the best approach to get this behaviour?
The most elegant approach is probably to use Apache's proxy module in combination with rewriting rules. That leaves the URL visible in the browser unchanged but internally proxies the requests between otherwise separate HTTP hosts.
Use a rule like this in the www.example.com and/or example.com host:
RewriteEngine on
RewriteRule ^/?content/seo-name-of-item https://system.example.com/api [END,P]
The syntax should work in the real HTTP host configuration or in .htaccess style files. But a general hint: you should always prefer to place such rules inside the HTTP server's host configuration instead of using .htaccess style files. Those files are notoriously error prone, hard to debug, and they really slow down the server. They are only provided as a last option for situations where you do not have control over the host configuration (read: really cheap hosting service providers) or if you have an application that relies on writing its own rewrite rules (which is an obvious security nightmare).
If you get an internal server error with that (http status 500), you might have to replace the END flag with the older L flag.
You need validatable SSL certificates for the externally visible host names, so www.example.com and/or example.com.
You can also decide to use plain HTTP for the internal proxy connection, since SSL encryption does not really add much there.
Oh, and obviously you need the proxy module installed.
An alternative would be to use the proxy module only. Take a look at the documentation and examples of the ProxyPass rule: https://httpd.apache.org/docs/current/mod/mod_proxy.html
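A sketch of that proxy-only variant, assuming the landing pages are served by the system host under the same /content path (adjust the target to match your Slim routes):

```apache
# In the www.example.com / example.com virtual host.
SSLProxyEngine On   # required because the backend is contacted over https
ProxyPass        "/content/" "https://system.example.com/content/"
ProxyPassReverse "/content/" "https://system.example.com/content/"
```

Unlike the RewriteRule approach, this maps the whole /content/ subtree in one directive, so every landing page is covered without per-item rules.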

How to properly cache my Symfony2 APIs?

I'm making classic stateless RESTful APIs on Symfony2: users/apps get an authentication token from the authenticate API and pass it to all other APIs to be logged in and to post data / access protected/private/personal data.
I've got now three concerns regarding this workflow and caching:
How do I use HTTP cache for my 'static' APIs (those that always deliver the same content, regardless of the logged-in user and their token), given that different users pass different tokens in the URL for the same API, so the URL is never the same? How can I use an HTTP shared cache then?
I've got APIs at the same URL that produce different output depending on the logged-in user's rights (I basically have 4 different rights levels). The question is: is this a good pattern? Wouldn't it be better to have 4 different URLs, one for each rights level, that I could cache? If not, how do I implement a proper cache for that?
Does shared HTTP cache work over HTTPS? If not, which type of caching should I implement, and how?
Thanks for your answers and lights on that.
I have had a similar issue (with all 3 scenarios) and have used the following strategy successfully with Symfony's built-in reverse-proxy cache:
If using Apache, update .htaccess to add an environment variable that your application can base the HTTP cache off of (NOTE: Apache automatically adds a REDIRECT_ prefix to the environment variable on internal redirects):
# Add `REDIRECT_CACHE` if API subdomain
RewriteCond %{HTTP_HOST} ^api\.
RewriteRule .* - [E=CACHE:1]
# Add `REDIRECT_CACHE` if API subfolder
RewriteRule ^api(.*)$ - [E=CACHE:1]
Add this to app.php after instantiating AppKernel:
// If environment instructs us to use cache, enable it
if (getenv('CACHE') || getenv('REDIRECT_CACHE')) {
require_once __DIR__.'/../app/AppCache.php';
$kernel = new AppCache($kernel);
}
For your "static" APIs, all you have to do is take your response object and modify it:
$response->setPublic();
$response->setSharedMaxAge(6 * 60 * 60);
Because you have a session, user or security token, Symfony effectively defaults to $response->setPrivate().
Regarding your second point: per REST conventions (as well as reverse-proxy recommendations), GET & HEAD requests aren't meant to change between requests. Because of this, if the content changes based on the logged-in user, you should set the response to private & prevent caching entirely in the reverse-proxy cache.
If caching is required for speed, it should be handled internally & not by the reverse-proxy.
Because we didn't want to introduce URLs based on each user role, we simply cached the response by role internally (using Redis) & returned it directly rather than letting the cache (mis)handle it.
As for your third point: because HTTP & HTTPS traffic hit the same cache & the responses have their public/private & cache-control settings explicitly set, the AppCache serves the same response to both secure & insecure traffic.
I hope this helps as much as it has for me!

Apache mod_rewrite/mod_proxy - re-write last part of URI as query string?

We have a web resource that can be accessed with a URL of the form:
http://[host1]:[port1]/aaa/bbb.ccc?param1=xxx&param2=yyy...
However, we are working with an external (i.e., not developed by us, so not under our control, i.e., we can't change it) client app that is attempting to access our resource with a URL that looks like:
http://[host2]:[port2]/ddd/fff/param1=xxx&param2=yyy...
In other words, the client is including the "query string" (the ?param1=xxx&param2=yyy... part) as if it's part of the URI, instead of as a proper query string.
We have a separate Apache proxy instance, and we're thinking that we could use it with some RewriteCond/RewriteRule directives to take the incoming requests (the ones with the query string at the end of the "URI", without the "?") and rewrite the URI to a "proper" URI with a "proper" query string, then use that modified/re-written URI to access our resource via the proxy.
We'd also like to do that without an HTTP redirect (e.g., 30x) going back to the client, because it appears they may not be able to handle such a redirect.
I've been trying various things, but I'm not that familiar with Apache mod_rewrite, so I was wondering if someone could tell me (1) if this is possible and (2) suggest what RewriteCond/RewriteRule would accomplish this?
P.S. I have made some progress. The following rewrites the URL correctly, but when I test, I see a 302 redirect to the rewritten URL instead of Apache proxying immediately to it. Is it possible to do this without the redirect (302)?
<Location /test/users/>
RewriteEngine on
RewriteCond %{REQUEST_URI} ^/(.*)/param1=
RewriteRule ^/(.*)/param1=(.*) http://192.168.0.xxx:yyyy/aaa/bbbbb.ccc?base=param1=$2
</Location>
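For reference, a RewriteRule whose substitution is an absolute URL triggers an external redirect unless mod_rewrite is told to proxy it; a sketch of the same rule with the P (proxy) flag added, keeping the placeholder host/path from above:

```apache
<Location /test/users/>
RewriteEngine on
RewriteCond %{REQUEST_URI} ^/(.*)/param1=
# The P flag hands the rewritten URL to mod_proxy internally
# instead of sending a 302 back to the client:
RewriteRule ^/(.*)/param1=(.*) http://192.168.0.xxx:yyyy/aaa/bbbbb.ccc?base=param1=$2 [P,L]
</Location>
```

This assumes mod_proxy and mod_proxy_http are loaded on the proxy instance.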
Thanks, Jim