How can I set up HTTP Alternative Service in Apache?

I run a website on the clearnet using Apache and want the connection to be made via the .onion address when a user opens the clearnet URL in Tor Browser.
I know Facebook uses a standard called HTTP Alternative Service, but I don't know how I should implement it myself in Apache.

I found a possible solution, but I can't verify that it actually works as intended.
One can add the header in the .htaccess file (with mod_headers enabled) by adding
Header add Alt-Svc 'h2="example.onion:443"; ma=86400'
to the file. (h2 means HTTP/2 over TLS, so the alternative is advertised on port 443; ma tells the client how long, in seconds, it may cache the alternative.)
This does in fact add the Alt-Svc header to the responses. However, the standard specifies that the URL shown to the user in the browser should remain unchanged even when the connection actually happens via the alternative service.
Thus I have so far been unable to verify whether Tor Browser actually connects via the .onion as intended.
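Whether the header is actually emitted is easy to confirm from any machine, e.g. with curl (example.com stands in for the real domain):
curl -sI https://example.com/ | grep -i alt-svc
Confirming that Tor Browser then really switches to the .onion is harder precisely because the switch is invisible; one option is to watch the onion service's access log (assuming it logs separately from the clearnet site) while browsing the clearnet URL in Tor Browser.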

Related

Windows Authentication issue with .Net Reverse Proxy using IIS custom HTTP module

We use a custom HTTP module in IIS as a reverse proxy for web applications. Generally this works well and has done for some time, but we've come across an issue with Windows Authentication (WA). We're using IE 11, IIS 10 and Server 2016.
When accessing the target site directly, WA works fine - we get a browser login dialog when the initial HTML page is requested and the subsequent requests (CSS, JS, etc) go through fine.
When accessing via our proxy, the same (correct) behaviour occurs for the initial HTML page, and the first CSS/JS request authenticates OK too, but the subsequent ones cause a browser login dialog to pop up.
What seems to happen on the 'bad' requests (i.e. those that cause the login dialog) is:
1) Browser decides it needs to authenticate, so sends an Authorization header (Negotiate, with an NTLM token)
2) Server responds (401) with a WWW-Authenticate: Negotiate response with a full NTLM token
3) Browser re-requests with an Authorization header (Negotiate, with a full NTLM token)
4) Server responds (401) with a WWW-Authenticate: Negotiate (with no token), which causes the browser to show the login dialog
5) With login credentials entered, Browser sends the same request as in (1) - identical NTLM token, server responds as in (2), Browser re-requests as in (3), but this time it works!
We've set up a test web site with one html page, requesting 3 JS and 2 CSS files to replicate this. On our test server we've got two sites, one using our reverse proxy and one using ARR. The ARR site works fine. Also, since step (5) above works, we believe that the proxy pass-through is fundamentally working, i.e. NTLM tokens are not being messed up by dodgy encoding, etc.
One thing that does work: if we use Fiddler and put breakpoints on each request, we're able to hold back the 5 sub-requests (JS & CSS files), letting one go through at a time. If we let each sequence (i.e. the NTLM token exchange for each URL/file, through to the 200 response) complete before releasing the next, then it works. This made us think that there is some interleaving effect (e.g. shared memory corruption) in our proxy; this is still a possibility.
So, we put code at the start of BeginRequest and the end of EndRequest, with a SyncLock and a shared variable storing the path (AppRelativeCurrentExecutionFilePath), to make our code 'single-thread' each of these request/exchanges. This does what we expected, i.e. it only allows one auth exchange to happen and reach a 200 before allowing the next. However, we still have the same problem of the server rejecting the first exchange. So, does this indicate something happening in/before BeginRequest, such that if we hold the requests back in Fiddler they work, but not if we do it in our HTTP module?
Or is there some sort of timing issue where the manual breakpoints in Fiddler also mean we’re doing it at ‘human’ speed and therefore allowing things to work better?
One difference we can see is the 'Connection: Keep-Alive' header. It is present in the request from the browser to our proxy site, but not passed from our proxy to the base site, yet the ARR site does pass it through... It's all HTTP/1.1, and we can't find a way to set Keep-Alive on our outgoing request - could this be it?
Regarding 'things to try', we think we've eliminated factors like the site needing to be in IE's Intranet Zone, since the ARR site works OK with the same IE settings. Clearly, something is not right, so we could have missed something here!
In short, we've been working on this for days, and have tried most of what we can find on SO and elsewhere, but can't figure out what the heck is going on.
Any suggestions - let me know if you want any further info. All help will be very gratefully received!
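One avenue that fits these symptoms: NTLM authenticates the underlying TCP connection, not the individual request, so each token exchange is only valid on the connection it started on. If the proxy opens a fresh upstream connection per request (no Keep-Alive), or hands interleaved exchanges to a shared connection pool, the server will reject the first exchange exactly as described. Assuming the module issues its upstream calls with HttpWebRequest, the relevant settings would look something like this sketch (targetUri and userName are placeholders):
' Imports System.Net
Dim req = CType(WebRequest.Create(targetUri), HttpWebRequest)
req.KeepAlive = True                            ' forward Connection: Keep-Alive to the base site
req.UnsafeAuthenticatedConnectionSharing = True ' allow reuse of NTLM-authenticated connections
req.ConnectionGroupName = userName              ' don't share authenticated connections across users
ConnectionGroupName matters once connection sharing is on, so that one user's authenticated connection is never handed to another.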

Getting mixed-content errors even though I’m only using https URLs

I'm getting SSL warning messages all over my website after switching several assets to SSL:
Mixed Content: The page at 'https://example.com' was loaded over HTTPS,
but requested an insecure script 'http://example.com/script.js'. This
request has been blocked; the content must be served over HTTPS.
I checked the page source, every single script/css is requested over https.
I even checked the dynamically created html by using the code inspector.
I disabled Javascript in case a script was loading these assets dynamically.
None of these things showed a single http:// request. I'm out of ideas to try and find what is causing this. Any ideas or suggestions?
When seeing a mixed-content message about a http://example.com/script.js (non-https) URL that doesn’t actually appear anywhere in your sources, the basic strategy to follow is:
Replace the http in the URL with https and put that into the address bar in your browser: https://example.com/script.js
If your browser redirects from that https://example.com/script.js URL back to (non-https) http://example.com/script.js, then you’ve found the cause: example.com/script.js isn’t actually available from an https URL, and ends up getting served from a http URL even though your source is requesting the https URL.
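The same check can be scripted, which helps when many assets are involved (a quick sketch; replace the URL with the asset being flagged):
curl -sIL https://example.com/script.js | grep -iE '^(HTTP|location)'
A 301/302 status followed by a Location: header pointing at an http:// URL is exactly the redirect described above.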
My 2 cents regarding this issue.
I have a project hosted on one domain that works flawlessly.
I need to make it international, so I am cloning the master branch to a new branch, making some necessary text changes and deploying the new site (new domain) with code from the new branch.
Everything works fine, except one Ajax call (an API route) that gets blocked due to mixed content.
First things first, I checked these 3 things:
I check in the Network tab in dev tools and it is actually loaded through https.
I open the file directly in browser and it is https.
I try to open it as http:// and it automatically redirects to https://
This is very strange because the two domains both use Cloudflare and their backend setup is identical; the code is the same (only text changes for the new one), yet on the new setup there is a console error for one specific API route, and all others (some 20+ Ajax requests across the page) work just fine. They even use the same function to make the Ajax request, so it is definitely not a configuration error.
After doing some investigation I found out the issue:
The 'buggy' call was the one ending in /. For example, all other calls were made to:
https://example.com/api/posts
https://example.com/api/users
And this particular one was making requests to
https://example.com/api/todos/
The slash at the end was making it fail with the mixed-content error. I am not sure why it causes the issue, or why it isn't an issue on the original site (where the same Ajax call works just fine), but removing it definitely fixed my problem.
If I figure out what caused the / to fail so miserably, I will post an update.
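A plausible mechanism, for what it's worth: many frameworks issue a 301 between the trailing-slash and no-slash forms of a route, and an origin sitting behind a TLS-terminating proxy such as Cloudflare may build that redirect with an http:// scheme because it sees the inbound request as plain HTTP. The redirect chain is easy to inspect (the URL is a placeholder):
curl -sI https://example.com/api/todos/ | grep -i location
If that prints a Location: header starting with http://, the trailing slash triggers an insecure redirect, and the browser blocks the hop as mixed content.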

htaccess redirect to shared SSL

Apologies if this is a duplicate, but I couldn't find a question fitting my exact circumstances.
I am redesigning a site, part of which will require SSL coverage. I have set up SSL with our hosting provider, but this is shared SSL. Whereas our current site is at www.companyname.com, the secure server is at companyname.genericssl-host.com.
I believe the best way to proceed is to simply shift all the web files onto the secure server, whether they need to be secure or not, then redirect www.companyname.com to there. However, the provider informs me that if I do that, the URL in the browser address bar will still read companyname.genericssl-host.com once the redirect completes, and that I would need to edit the htaccess file to make it read good ol' www.companyname.com again.
What does the htaccess file need to contain in order to do this?
Not sure what your hosting provider is referring to, but changing it back to "www.companyname.com" defeats the purpose of using SSL at all. What shows up in the browser's address bar is:
what host the browser is going to send a request to
what URI it will request
the query string if there is any
If you change it back to www.companyname.com, it's going to send a non-SSL request to that host, which defeats the purpose of redirecting it to SSL in the first place.
You need to buy a certificate for *.companyname.com and install it on a host specific to your server.
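For completeness: once a certificate for your own hostname is installed, the usual .htaccess recipe for forcing visitors onto HTTPS is a mod_rewrite redirect (a standard sketch, not specific to any provider):
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]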

Google Chrome Forces HTTPS

I am developing a Rails application that uses an SSL connection. I am currently using third-party resources, js and css files, to implement a map (OpenStreetMap). I have already tried to import these resources (js and css) into my application, but the JavaScript code tries to access an external WMS via HTTP.
The problem is that Google Chrome is blocking access to third-party resources from HTTP when the application is in HTTPS.
So I disabled SSL on certain pages of the application and tried to force HTTP or HTTPS as desired.
I followed this blog post, and it works: http://www.simonecarletti.com/blog/2011/05/configuring-rails-3-https-ssl/
But when I force HTTP on the page where these resources are used, Google Chrome forces an HTTPS connection, causing an infinite loop.
If I clear the Chrome cache (having already accessed the same page over HTTPS), accessing it via HTTP works. But once I have accessed an HTTPS page and then try to reach it via HTTP, Chrome forces the HTTPS connection, resulting in the infinite loop.
The question is: Is there something I can set in the request that causes Chrome to accept the connection?
I've been doing some research on this, and it turns out that turning on force_ssl = true on Rails 3 causes the app to send an HSTS header. There's a bit of information about it here: How to disable HTTP Strict Transport Security?
Essentially, the HSTS header tells Chrome (and Firefox) to access your site only through HTTPS for a specific amount of time.
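Concretely, the response carries something like this (max-age is in seconds, here one year):
Strict-Transport-Security: max-age=31536000
Once Chrome has seen that header on an HTTPS response, it internally rewrites plain http:// requests for the host to https:// until the entry expires or is deleted, which is exactly the redirect loop described above.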
So... the answer I have for you now is that you can clear your own HSTS setting by going to about:net-internals within your Chrome browser and removing the HSTS state.
I think the answers here can help you: Rails: activating SSL support gets Chrome confused

SWT Browser Plugin does not prompt for proxy authentication

I have successfully configured my SWT Browser application to use the proxy by setting VM arguments -Dnetwork.proxy_host and -Dnetwork.proxy_port to the according values.
However, the proxy needs authentication, but the username/password prompt does not open. Furthermore, when registering an authentication listener, the listener is never triggered.
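The registration itself uses the standard SWT API (org.eclipse.swt.browser.AuthenticationListener; the credentials here are placeholders):
browser.addAuthenticationListener(new AuthenticationListener() {
    public void authenticate(AuthenticationEvent event) {
        event.user = "proxyUser";      // never called on the Linux setup
        event.password = "proxyPass";
    }
});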
The problem occurred on a 64-bit Linux Debian distribution. When compiling the same application for Windows, everything works fine, i.e. the password prompt opens. The SWT Browser is configured to use MOZILLA, not WEBKIT. Unfortunately I cannot test with WEBKIT, as I am limited to a given environment.
Temporary workaround: when starting the Linux Mozilla browser itself, the prompt comes up. If I enter the correct values there and afterwards start the SWT Browser application, no authentication is needed at all and internet access is possible. But this is not a good solution.
When I register a location listener with addLocationListener to see what happens with the URL calls, I can see that the initial URL (for example www.google.de) results in a call to a certain http page of the proxy server, and that http page is a redirect to an https page of the proxy. The https page then results in calling the http redirect page again, in an endless loop.
I would guess that somewhere in the Java code of the SWT Browser class there is a routine that calls setUrl with those pages (which produces the endless loop) and for some reason skips calling any authentication listener.
Maybe someone has an idea of what is going wrong in this authentication process?
I have no solution but a hint: I'm not sure what you mean by "Linux Mozilla Browser" - I know Firefox and XULRunner. But your workaround suggests that profile information is shared somehow, and that shouldn't happen.
I tried to find some information how to define the profile (where the web browser keeps its cache, config, SSL certificates, plugins, ...) but to no avail.
This entry in the FAQ shows how to set the proxy host: How do I set a proxy for the Browser to use?
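In other words, the application is started with the proxy properties on the command line (host and port are placeholders):
java -Dnetwork.proxy_host=proxy.example.com -Dnetwork.proxy_port=8080 -jar application.jar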
Try to find a way to add the user/password information into the request sent to the proxy server. If that fails, create a local proxy which connects to the real proxy as upstream and which can authenticate itself.
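A minimal version of that second idea could look like the following sketch (Java; upstream host, port and credentials are placeholders, and it assumes the upstream proxy accepts Basic authentication - a real implementation would also need to relay both directions and handle CONNECT for https):
import java.io.*;
import java.net.*;
import java.util.Base64;

public class AuthProxy {
    static final String UPSTREAM_HOST = "proxy.example.com"; // placeholder
    static final int UPSTREAM_PORT = 8080;                   // placeholder
    static final String CREDS =
            Base64.getEncoder().encodeToString("user:password".getBytes());

    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(3128)) { // point the SWT Browser at localhost:3128
            while (true) {
                Socket client = server.accept();
                new Thread(() -> handle(client)).start();
            }
        }
    }

    static void handle(Socket client) {
        try (client; Socket upstream = new Socket(UPSTREAM_HOST, UPSTREAM_PORT)) {
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(client.getInputStream()));
            OutputStream out = upstream.getOutputStream();
            // Forward the request headers, appending our own Proxy-Authorization.
            String line;
            while ((line = in.readLine()) != null && !line.isEmpty()) {
                out.write((line + "\r\n").getBytes());
            }
            out.write(("Proxy-Authorization: Basic " + CREDS + "\r\n\r\n").getBytes());
            out.flush();
            // Relay the response back to the client until the upstream closes.
            upstream.getInputStream().transferTo(client.getOutputStream());
        } catch (IOException ignored) {
            // connection teardown - nothing useful to do in a sketch
        }
    }
}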
Looking at the bug database, there is no support for Browser profiles: Flexible Mozilla profile support - new API request