Relative link (from https) gives 301 Moved Permanently (to http) - apache

I own a domain name (let's call it example.org), and have set up a CNAME pointing from foo.example.org to an AWS ELB FooBar-Load-Balancer-123456789.us-east-1.elb.amazonaws.com. That ELB has a Port Configuration of 443 (HTTPS, ACM Certificate <GUID>) forwarding to 80 (HTTP). The sole EC2 instance behind the ELB runs a Docker image exposing Apache on port 80.
When I open https://foo.example.org in my web browser, everything works fine - the page loads as expected. If I navigate to https://foo.example.org/path, it likewise loads correctly. However, if a page contains <a href="path">, upon clicking that the browser attempts to load http://foo.example.org/path, which (correctly) gives an error - "ERR_CONNECTION_REFUSED" in Chrome, "Unable to connect" in Firefox.
Checking the network activity in Chrome Dev Tools, I see an initial request to https://foo.example.org/path which results in 301 Moved Permanently (from cache) with Location http://foo.example.org/path. This is obviously what's causing the browser's behaviour - is this a misconfiguration on my server (which believes itself to be serving HTTP - or, at least, port 80), on my ELB, or in the site's HTML itself?
I guess I could get around this by using absolute paths, but, given that I want to be able to spin up a Docker image locally (opening <IP Address>/path in my browser) to test before pushing changes, that doesn't sound like a true solution.
EDIT: Inspired by this, I checked behaviour in a fresh Chrome Incognito Mode window and in Chrome after clearing history - same behaviour in all cases.

Posting this as an answer since I'm technically unblocked, although I'd still very much appreciate someone more knowledgeable explaining why these symptoms occurred.
A bit of investigation with curl led me to this:
$ curl -v https://foo.example.org/path
* Trying 52.0.230.252...
* Connected to foo.example.org (52.0.230.252) port 443 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate: example.org
* Server certificate: Amazon
* Server certificate: Amazon Root CA 1
* Server certificate: Starfield Services Root Certificate Authority - G2
> GET /path HTTP/1.1
> Host: foo.example.org
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< Content-Type: text/html; charset=iso-8859-1
< Date: Sun, 05 Jun 2016 18:35:58 GMT
< Location: http://foo.example.org/path/
< Server: Apache/2.4.7 (Ubuntu)
< Content-Length: 336
< Connection: keep-alive
<
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>301 Moved Permanently</title>
</head><body>
<h1>Moved Permanently</h1>
<p>The document has moved here.</p>
<hr>
<address>Apache/2.4.7 (Ubuntu) Server at foo.example.org Port 80</address>
</body></html>
* Connection #0 to host foo.example.org left intact
$ curl https://foo.example.org/path/ # Note trailing slash
<expected html>
So it looks like the cause is making a request to Apache for a directory without a trailing slash ("Directories require a trailing slash, so mod_dir issues a redirect to http://servername/foo/dirname/."). That explains why the Location header in the 301 response used http:// - Apache itself was serving plain HTTP behind the ELB, so it "knew no better". I guess I can solve this by making my anchor tags link explicitly to an href with a trailing slash.
Why is Apache configured this way? Why not just resolve to the appropriate location "internally", without a round-trip through a 301 response? And, most importantly, is there a better way for me to solve this problem? Can ELBs be configured to rewrite Location headers (I guess not - I'm no InfoSec pro, but that strikes me as a vulnerability waiting to happen)? Can Apache?
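One Apache-side option that might work here (an untested sketch; the directives are standard Apache ones, but the vhost layout is an assumption) is to declare the canonical scheme, so that self-referential redirects such as mod_dir's trailing-slash redirect are built with https://:
# e.g. in the vhost the Docker image serves on port 80
<VirtualHost *:80>
    # ServerName may include a scheme; with UseCanonicalName On, Apache uses it
    # when building self-referential URLs such as mod_dir's redirect Location.
    ServerName https://foo.example.org
    UseCanonicalName On
    DocumentRoot /var/www/html
</VirtualHost>
The trade-off is that the same image would then also issue https:// redirects when run locally against a bare IP address, which may not suit the local-testing workflow described above.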

Related

Can the Host Header be different from the URL

We run a website which is hosted using WCF.
The website is hosted on https://foo.com and the SSL certificate is registered using the following command:
netsh http add sslcert hostnameport=foo.com:443
When we browse the website on the server, all is fine, and the certificate is valid.
There is a load balancer in front of the server which listens on bar.com and then forwards the request to our server.
The load balancer doesn't rewrite the GET URL, only the Host header.
The rewritten request looks like this:
GET https://foo.com/ HTTP/1.1
Host: bar.com
Connection: keep-alive
Now we have some issues which indicate that the SSL certificate is invalid in this case.
The load balancer itself has a certificate registered, listening on https://bar.com.
Questions:
Is it OK/allowed for the GET URL and the Host in the HTTP header to be different?
If it is OK to have different values, under which URL should we run the site? The GET URL or the Host URL?
Well, referencing RFC 2616:
If Request-URI is an absolute URI, the host is part of the
Request-URI. Any Host header field value in the request MUST be
ignored.
So, back to your questions:
It is allowed, but a bad idea as it will create confusion; better to use a relative path, i.e.
GET /path HTTP/1.1
instead of
GET https://foo.com/path HTTP/1.1.
Modify the load balancer configuration to do so, or make both values the same.
If the Host header has a value different from the request URI, the URI takes priority over the Host header.
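To see the mismatch from the command line, something like this reproduces it (hostnames are the placeholders from the question; curl's -H flag overrides the Host header it would normally derive from the URL):
curl -v https://foo.com/ -H "Host: bar.com"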

ERR_TOO_MANY_REDIRECTS when disable SSL in all pages on Prestashop

I have disabled SSL on all pages in Prestashop and now I get this error (I do not enter https, but it ends up on https):
I tried deleting the .htaccess and regenerating it, but it didn't work.
This is the Prestashop configuration:
What is the solution?
EDIT: My SSL configuration in the Prestashop configuration tab
I'm not familiar with Prestashop, however the issue is that, at the moment, the site enforces HTTPS. In fact, the non-HTTPS version redirects to HTTPS; there may be other redirects (not enabled now, as I can access the redirect target) that caused the loop.
➜ ~ curl -I http://runvaspain.com
HTTP/1.1 301 Moved Permanently
Server: nginx
Date: Sun, 24 Jan 2016 10:44:09 GMT
Content-Type: text/html; charset=utf-8
Connection: keep-alive
Cache-Control: no-cache
Location: https://runvaspain.com/
X-Powered-By: PleskLin
Vary: Accept-Encoding
Strict-Transport-Security: max-age=15768000;includeSubDomains
Moreover, it looks like the site is setting the HSTS header
Strict-Transport-Security: max-age=15768000;includeSubDomains
This header is ignored when served via HTTP; however, I suppose it was also served via HTTPS, so your browser has probably saved the configuration and is enforcing HTTPS locally (that's what HSTS is about).
You'll have to manually remove the strict transport configuration for the domain in your browser. However, please note that any user who previously accessed your site will have that setting, so they will be forced to use HTTPS for the main site and all subdomains for 6 months (as this is the policy you previously set).
Also note that, since you previously sent that header, HTTPS will be enforced for the entire site (and subdomains too); it's not possible to enable it on a single page (at least for the users who visited before). The best thing to do is to turn HTTPS back on for the entire site.
To solve the first issue (the redirect to HTTPS) you should contact the Prestashop service. However, please note it will be almost irrelevant if the HSTS header was previously sent.
The HTTP version of the website ( http://runvaspain.com ) sends a 301 redirect to the HTTPS version.
The HTTPS version of the website ( https://runvaspain.com ) uses the HSTS header with a max-age of 6 months.
That means anyone who has visited the website is forced to use the HTTPS version.
It's a security feature added by HSTS.
You have two solutions:
Reactivate the HTTPS version (that's my advice, as HTTPS adds significant security for your visitors)
Redirect HTTPS to HTTP AND modify the HSTS header (probably in your Apache configuration) to send a max-age of 0; a rough sketch follows below. You MUST keep sending it for at least 6 months, you can't just remove it! Sending a max-age of 0 tells visitors who already visited the website to forget the HSTS settings. If you just remove it, those visitors will not be able to visit the site for the next six months!
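As a rough illustration of the second option (assuming Apache with mod_headers enabled; the vhost shown is a sketch, not the actual configuration):
<VirtualHost *:443>
    # ... existing SSL configuration ...
    # Keep sending the HSTS header, but tell returning visitors to forget the
    # previously saved policy immediately
    Header always set Strict-Transport-Security "max-age=0"
</VirtualHost>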
In Prestashop, after configuring SSL, the admin dashboard works but the public site doesn't work anymore, because you have to enable SSL for the front pages using the admin dashboard, as below.
Enable SSL - yes
Enable SSL on all pages - yes
Change your shop parameters like this (refer to the image).
I think Chrome just cached the URL. Happens to me all the time during development.
Try:
Click Menu (three dots, or whatever the icon is now in Chrome, in the upper right corner)
Hover over More tools, then click Developer tools
Click the Network tab
Tick the Disable cache checkbox

502 Bad Gateway - Nginx

I am receiving:
14201#0: *16 connect() failed (111: Connection refused) while connecting to
upstream, client: 22.222.222.222, server: myserver.com, request: "GET
/favicon.ico HTTP/1.1", upstream: "https://70.88.100.212:7081/favicon.ico",
host: "myserver.com", referrer: "https://myserver.com/"
This comes from a subdomain of my server. Now, I've looked and I know it's not an issue with FPM, because this happened when I was installing GitLab on a separate subdomain, git.myserver.com. My Plesk control panel said there was a configuration issue and suggested running a configure script, which then broke my subdomain.
Here is the thing: git.myserver.com is still accessible; it actually just broke myserver.com instead. I am not too sure what is going on. I have looked through my /etc/nginx/conf.d/*.conf and everything seems correct.
The layout of that file is:
include /etc/nginx/plesk.conf.d/server.conf;
include /etc/nginx/plesk.conf.d/vhosts/*.conf;
include /etc/nginx/plesk.conf.d/forwarding/*.conf;
include /etc/nginx/plesk.conf.d/webmail.conf;
Any suggestions?
UPDATE
70.88.100.212 is the primary server - I have multiple domains pointed there and webspaces built. Those are still accessible fine.
Check whether port 7081 is listening on IP 70.88.100.212.
Try this command:
netstat -ntlpu
In your Nginx conf it should be:
location / {
    proxy_pass http://70.88.100.212:7081/;
}
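As a quick sanity check (a hypothetical command using the upstream address from the error log), you can also query the upstream directly from the web server to confirm something answers on that port:
curl -v http://70.88.100.212:7081/   # or https://..., matching the proxy_pass scheme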

Why Firefox and Chrome insist on using HTTPS for a manually typed non-SSL website

I would appreciate some help understanding what is going on: both Firefox and Chrome are failing to load my non-SSL website, say subdomain.example.com, with the following SSL errors (both on Ubuntu 14.04 i386):
FF30: ssl_error_rx_record_too_long
Chrome 35: ERR_SSL_PROTOCOL_ERROR
This started to occur after I set (and followed) a redirect (302) to SSL on the parent domain, say http://example.com to https://example.com. It goes back to normal after a full cache clean in the browser, but as soon as I access the parent domain I get the problem on the subdomain again.
I have never entered the subdomain URL with the "https://" scheme prefix. I don't usually type any prefix, and it happens even if I explicitly prefix with "http://". And it is not only in the address bar; the same happens for links.
I am very confident that there is nothing wrong with the non-SSL site on the subdomain.
I thought about filing a bug report, but it is unlikely this is a bug in both browsers; more likely I am missing something.
Is there any rule that if a website on a given domain supports SSL (or redirects http to https), then sites on subdomains are assumed to do so as well?
I later found the cause of the SSL errors. But the problem still persists (now the message is connection refused):
Apache web server was configured to listen on both ports 80 and 443, but with no "SSLEngine on" clause. This effectively makes it serve plain HTTP on port 443.
It is worth mentioning that this Apache configuration mistake is not that hard to fall into. In the default Ubuntu configuration (possibly the same for Debian), it is just a matter of enabling/loading the SSL module without providing a site configuration that uses SSL.
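For illustration, Ubuntu's stock /etc/apache2/ports.conf looks roughly like this (the comment is mine):
Listen 80
<IfModule ssl_module>
    # Loading mod_ssl makes Apache listen on 443, but unless some enabled vhost
    # for *:443 also contains "SSLEngine on", that port serves plain HTTP, which
    # browsers report as ssl_error_rx_record_too_long / ERR_SSL_PROTOCOL_ERROR.
    Listen 443
</IfModule>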
I have just found the cause. The SSL site on the parent domain is including the following STS response header:
Strict-Transport-Security: max-age=31536000 ; includeSubDomains
That triggers the browser behavior by spec.

How to get mod_python site to allow clients to cache selected image content?

I have a small dynamic site implemented in mod_python. I inherited this, and while I have successfully made relatively minor changes to its content and logic, with HTTP caching I'm out of my depth. The site works fine already, so this isn't "the usual question" about how to disable caching for a dynamic site.
My problem is that there is one large banner image on each page (the same image, from the same URL, on every page) which accounts for ~90% of site bandwidth but which, so far as I can tell, isn't being cached: as I browse the site, every time I click to a new page (or back to a previously visited one) it is downloaded yet again.
If I wget the banner's image URL (to see the headers) I see:
$ wget -S http://example.com/site/gif?name=banner.gif
--2012-04-04 23:02:38-- http://example.com/site/gif?name=banner.gif
Resolving example.com... 127.0.0.1
Connecting to example.com|127.0.0.1|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 200 OK
Date: Wed, 04 Apr 2012 22:02:38 GMT
Server: Apache/2.2.14 (Ubuntu)
Content-Location: gif.py
Vary: negotiate
TCN: choice
Set-Cookie: <blah blah blah>
Connection: close
Content-Type: image/gif
Length: unspecified [image/gif]
Saving to: `gif?name=banner.gif'
and the code which is serving it up isn't much more than
req.content_type = 'image/gif'
req.sendfile(fullname)
where fullname is a file-path munged from the request's name parameter.
My question is: is there some quick fix along the lines of setting an Expires: or Vary: field in the image's response which will result in clients being less keen to repeatedly download it?
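What I imagine is something along these lines (an untested guess using mod_python's req.headers_out table; the one-day lifetime is arbitrary), though I don't know whether this is correct or sufficient:
import time
from email.utils import formatdate

req.content_type = 'image/gif'
# Allow clients and proxies to cache the banner
req.headers_out['Cache-Control'] = 'public, max-age=86400'
req.headers_out['Expires'] = formatdate(time.time() + 86400, usegmt=True)
req.sendfile(fullname)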
The site is hosted on Ubuntu 10.04 and doesn't have any non-default apache mods enabled other than rewrite.
I note that most (not all) of the site pages' headers themselves do contain
Pragma: no-cache
Cache-Control: no-cache
Expires: -1
Vary: Accept-Encoding
(and the original site author has clearly thought about this as no-cache is applied selectively to non-static content pages). I don't know enough about caching to know whether this somehow poisons the included .gif IMG into being reloaded every time too though.
I don't know whether my answer will help you or not, but I'll post it anyway.
Instead of serving image files from within the Python application, you can create another virtual host within Apache (on the same server) just to serve static and image files. In your Python application, you can embed the image like this:
<img src="http://img.yoursite.com/banner.gif" alt="banner" />
With a separate virtual host, you can add various headers to various content types using mod_headers, or add another caching layer for your static files.
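For example, a rough sketch of such a virtual host (the hostname is from the example above; the document root, file extensions and cache lifetime are placeholders, and mod_headers must be enabled):
<VirtualHost *:80>
    ServerName img.yoursite.com
    # Assumed location of the static files
    DocumentRoot /var/www/static
    <FilesMatch "\.(gif|png|jpe?g)$">
        # Let browsers and proxies cache images for a day
        Header set Cache-Control "public, max-age=86400"
    </FilesMatch>
</VirtualHost>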
Hope this helps.