I have an ASP.NET 5 MVC 6 site I am working on. I turned on require HTTPS like so:
services.AddMvc(options =>
{
    options.Filters.Add(new RequireHttpsAttribute());
});
It worked great, and I have been working like this for a little while. Today I need to turn it off, so I commented out the options filter, but it is still requiring HTTPS.
I have not used the [RequireHttps] attribute on the controllers or actions themselves.
I have gone into the project properties, unchecked "Enable SSL", and pasted the http URL into the "Launch URL" box.
I have shut down IIS Express and relaunched the site. It doesn't seem to matter what I do; it continues to try to redirect to HTTPS.
Is it possible IIS Express or Kestrel has cached something I need to delete?
Anyone have any suggestions of what else could be forcing it to HTTPS?
The RequireHttpsAttribute implementation sends a permanent redirect response (301) back to the browser:
// redirect to HTTPS version of page
filterContext.Result = new RedirectResult(newUrl, permanent: true);
This means that when you initially had the attribute enabled and requested a URL like http://localhost:62058/, the server responded with:
301 (Moved permanently)
Location: https://localhost:62058/
If you look at the definition of the 301 response code you will see that browsers will cache it by default:
The requested resource has been assigned a new permanent URI and any
future references to this resource SHOULD use one of the returned
URIs. Clients with link editing capabilities ought to automatically
re-link references to the Request-URI to one or more of the new
references returned by the server, where possible. This response is
cacheable unless indicated otherwise.
The new permanent URI SHOULD be given by the Location field in the
response. Unless the request method was HEAD, the entity of the
response SHOULD contain a short hypertext note with a hyperlink to the
new URI(s).
After you remove the RequireHttps filter, the browser will still use the cached response:
So all you need to do after removing the RequireHttps filter is rebuild and clear the browser cache!
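If you want to avoid hitting the cached 301 again in the future, one option is to register the filter only outside of Development. A minimal sketch, assuming the usual ASP.NET 5 Startup conventions (IHostingEnvironment injected into the Startup constructor):
public class Startup
{
    private readonly IHostingEnvironment _env;

    public Startup(IHostingEnvironment env)
    {
        _env = env;
    }

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc(options =>
        {
            // Only enforce HTTPS outside of Development, so local HTTP testing
            // never receives the cacheable 301 in the first place.
            if (!_env.IsDevelopment())
            {
                options.Filters.Add(new RequireHttpsAttribute());
            }
        });
    }
}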
Related
I have an application that was recently given an AWS certificate and put behind an ELB (Classic, I think).
The web application has a Flash movie that makes web calls (to the same site URL) in order to fetch data using Zend Framework 1 models. The page in the browser does not change.
When I request the site over https, all of the imported assets are switched over to the https protocol, but when the Flash movie initializes, it makes non-secure requests over http.
It makes these non-secure requests whether I load the site over http or https.
The reason I mention the AWS ELB is that I was told the ELB is doing some kind of redirect to port 80.
If I request the site over https and immediately print_r the $_SERVER array, I only see HTTPS as part of a REDIRECT key; $_SERVER['HTTPS'] itself is not set, which I think is important.
In summary, the Flash movie, inside a Zend 1.12 site, is making POST requests over http, and I'd like it to make the same requests, but over https.
It is a very old Flash movie, and although I've opened the SWF file with a decompiler, I do not know enough ActionScript to see where (in the many code files) I'd be able to instruct the movie to call https instead of http.
My theory is that once the site is properly running over SSL/https, the Flash movie may possibly start making https calls, since at the moment it appears to use the address bar URL, but the ELB redirect behavior could also be gumming things up.
Update: I found what appears to be evidence that if https is detected in the endpoint URL it is given, it will then make secure requests...
FILE: mx.rpc.remoting.RemoteObject

mx_internal function initEndpoint() : void
{
    var chan:Channel = null;
    if (endpoint != null)
    {
        if (endpoint.indexOf("https") == 0)
        {
            chan = new SecureAMFChannel(null, endpoint);
        }
        else
        {
            chan = new AMFChannel(null, endpoint);
        }
        channelSet = new ChannelSet();
        channelSet.addChannel(chan);
    }
}
Thanks,
Adam
I was able to restore the original, unedited Flash SWF file, and instead modified the PHP code that passes in a variable called "endpoint".
In the code sample I provided, it checks whether endpoint has https in it (which I initially thought it did).
I added code to modify the value of "endpoint" when HTTP_X_FORWARDED_PROTO was "https"; sample below ($request->getBaseUrl() is from Zend Framework):
if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https') {
    $endpoint = sprintf(
        '%s://%s%s',
        $_SERVER['HTTP_X_FORWARDED_PROTO'],
        $_SERVER['HTTP_HOST'],
        $request->getBaseUrl()
    );
} else {
    // use existing (and working) value for endpoint
}
With that code in place, the Flash movie loads and operates properly whether the site is loaded over http or https.
I currently have two web apps that are set up in Cloudflare with the following CNAMEs. Both are KeystoneJS applications.
app1.example.com ===pointing to ===> AWS ALB 1
app2.example.com ===pointing to ===> AWS ALB 2
I have Cloudflare Enterprise set up, so I'm able to use the "Render Override" feature in my page rules. I have 2 page rules set up as follows:
www.example.com ===render override ===> app1.example.com
www.example.com/app2/* ===render override ===> app2.example.com
Now, in order to access the KeystoneJS application on app2.example.com, the application is called using app2.example.com/pa.
The problem I'm facing is that the render override doesn't allow me to use sub-paths, and I do not want to use a forwarding rule. Do I need to make my Keystone application accessible through the root URL, namely app2.example.com/, or is there another way to do this? Otherwise, would I need to use a reverse proxy such as nginx?
Thanks
Note: Since you are an enterprise customer, I highly recommend contacting your Customer Success Manager and/or Solutions Engineer at Cloudflare. They are there to help with exactly these kinds of questions. That said, I'll answer the question here for the benefit of self-serve customers.
I think when you say "Render Override" you actually mean "Resolve Override". This setting changes the DNS lookup for the request such that it is routed to a different origin IP address than it would be normally.
Note that Resolve Override does not rewrite the request in any way; it only routes it to a different server. So, a request to www.example.com/app2/foo will go to the server app2.example.com, but the path will still be /app2/foo (not /foo), and the Host header will still be Host: www.example.com.
It sounds like in your case you really want /app2/* to be rewritten to /pa/*, in addition to redirecting to a different origin. You can accomplish this using Cloudflare Workers, which lets you execute arbitrary JavaScript on Cloudflare's edge. Here's what the script might look like:
addEventListener("fetch", event => {
  event.respondWith(handle(event.request));
});

async function handle(request) {
  let url = new URL(request.url)  // parse the URL
  if (url.pathname.startsWith("/app2/")) {
    // Override the target hostname.
    url.host = "app2.example.com"
    // Replace /app2/ with /pa/ in the path.
    url.pathname = "/pa/" + url.pathname.slice("/app2/".length)
    // Send the request on to origin.
    return fetch(url, request)
  } else {
    // Just override the hostname.
    url.host = "app1.example.com"
    // Send the request on to origin.
    return fetch(url, request)
  }
}
With this deployed, you can remove your Resolve Override page rules, as they are now covered by the Worker script.
Note that the above script actually does rewrite the Host header in addition to the path. If you want the Host header to stay as www.example.com, then you will need to use the cf.resolveOverride option. This is only available to enterprise customers; ask your CSM or SE if you need help using it. But, for most cases, you actually want the Host header to be rewritten, so you probably don't need this.
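For completeness, here is a hedged sketch of that variant, assuming the Enterprise-only cf.resolveOverride fetch option and the same example hostnames and /pa path as above:
// Sketch only: rewrite the path but keep the www.example.com URL (and Host header),
// and use resolveOverride to tell Cloudflare which origin to connect to.
async function handleWithResolveOverride(request) {
  let url = new URL(request.url)
  if (url.pathname.startsWith("/app2/")) {
    url.pathname = "/pa/" + url.pathname.slice("/app2/".length)
    return fetch(new Request(url.toString(), request), {
      cf: { resolveOverride: "app2.example.com" }
    })
  }
  return fetch(request, { cf: { resolveOverride: "app1.example.com" } })
}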
I have a website (userbob.com) that normally serves all pages as https. However, I am trying to have one subdirectory (userbob.com/tools/) always serve content as http. Currently, it seems like Chrome's HSTS feature (which I don't fully understand) is forcing my site's pages to load over https. I can go to chrome://net-internals/#hsts and delete my domain from Chrome's HSTS set, and the next request will work as I want without redirecting to an https version. However, if I try to load the page a second time, it ends up redirecting again. The only way I can get it to work is if I go to chrome://net-internals/#hsts and delete my domain from Chrome's HSTS set after each request. How do I let browsers know that I want all my pages from userbob.com/tools/ to load as http? My site uses an Apache/Tomcat web server.
(Just FYI, the reason I want the pages in the tools directory to serve pages over http instead of https is because some of them are meant to iframe http pages. If I try to iframe an http page from an https page I end up getting mixed-content errors.)
HTTP Strict Transport Security (or HSTS) is a setting your site can send to browsers which says "I only want to use HTTPS on my site - if someone tries to go to an HTTP link, automatically upgrade them to HTTPS before you send the request". It basically won't allow you to send any HTTP traffic, either accidentally or intentionally.
This is a security feature. HTTP traffic can be intercepted, read, altered and redirected to other domains. HTTPS-only websites should redirect HTTP traffic to HTTPS, but there are various security issues/attacks if any requests are still initially sent over HTTP so HSTS prevents this.
The way HSTS works is that your website sends an HTTP header, Strict-Transport-Security, with a value of, for example, max-age=31536000; includeSubDomains, in responses to HTTPS requests. The browser caches this and activates HSTS for 31536000 seconds (1 year), in this example. You can see this HTTP header in your browser's web developer tools or by using a site like https://securityheaders.io . By using the chrome://net-internals/#hsts page you are able to clear that cache and allow HTTP traffic again. However, as soon as you visit the site over HTTPS it will send the header again and the browser will revert back to HTTPS-only.
So to permanently remove this setting you need to stop sending that Strict-Transport-Security header. Find where it is set in your Apache/Tomcat configuration and turn it off. Or, better yet, change it to max-age=0; includeSubDomains for a while first (which tells the browser to expire the cached policy immediately, so HSTS is turned off without having to visit chrome://net-internals/#hsts, as long as you visit the site over HTTPS to pick up this header), and then remove the header completely later.
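For example, if the header is being set by Apache's mod_headers, a minimal sketch of that interim step (assuming the directive lives in your site's VirtualHost or .htaccess) would be:
# Was something like:
# Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
# Temporarily tell browsers to expire their cached HSTS policy immediately:
Header always set Strict-Transport-Security "max-age=0; includeSubDomains"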
Once you turn off HSTS you can revert back to having some pages on HTTPS and some on HTTP with standard redirects.
However, it would be remiss of me not to warn you against going back to HTTP. HTTPS is the new standard and there is a general push to encourage all sites to move to HTTPS and penalise those that do not. Read this post by Troy Hunt for more information:
https://www.troyhunt.com/life-is-about-to-get-harder-for-websites-without-https/
While you are correct that you cannot frame HTTP content on an HTTPS page, you should consider whether there is another way to address this problem. A single HTTP page on your site can cause security problems like leaking cookies (if they are not set up correctly). Plus frames are horrible and shouldn't be used anymore :-)
You can use rewrite rules to redirect https requests to http inside the subdirectory. Create an .htaccess file inside the tools directory and add the following content:
RewriteEngine On
RewriteCond %{HTTPS} on
RewriteRule (.*) http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
Make sure that apache mod_rewrite is enabled.
Basically, any HTTP 301 response to an HTTPS request that redirects to an HTTP target should never be honored by any browser; servers doing that are clearly violating basic security, or are severely compromised.
However a 301 reply to an HTTPS request can still redirect to another HTTPS target (including on another domain, provided that other CORS requirements are met).
If you navigate to an HTTPS link (or via a JavaScript event handler) and the browser starts loading that HTTPS target, which replies with a 301 redirect to HTTP, the browser should behave as if it were a 500 server error or a connection failure (DNS name not resolved, server response timeout).
Such server-side redirects are clearly invalid, and website admins should never do that! If they want to close a service and inform HTTPS users that the service is hosted elsewhere and is no longer secure, they MUST return a valid HTTPS response page with NO redirect at all, and this should really be a 4xx error page (most probably 404 PAGE NOT FOUND); they should not redirect to another HTTPS service (e.g. a third-party hosted search engine or parking page) that does not respect CORS requirements or sends false media types (it is acceptable not to honor the requested language and to display that page in another language).
Browsers that implement HSTS are perfectly correct and going in the right direction. But I really think the CORS specifications are a mess, tweaked just to still allow advertising networks to themselves host and control the ads they broadcast to other websites.
I strongly think that serious websites that still want to display ads (or any tracker for audience measurement) for valid reasons can host these ads/trackers themselves, on their own domain and over the same protocol: servers can fetch the ad content they want to broadcast by downloading/refreshing it themselves and maintaining their own local cache. They can track their audience themselves by collecting the data they need and filtering it on their own server if they want this data to be analysed by a third party: websites will have to seriously implement the privacy requirements themselves.
I now hate the far too many websites that, when visited, are tracked by dozens of third parties, including very intrusive ones like Facebook and most advertising networks, plus many weak third-party services with very poor quality/security that send very bad content they never control (including fake ads, fake news, promotion of illegal activities and illegal businesses, invalid age ratings...).
Let's return to the origin of the web: one site, one domain, one third party. This does not mean that sites cannot link to other third-party sites, but this must be done only with an explicit user action (tapping or clicking), and visitors MUST be able to know where this will go, or which content will be displayed.
This is even possible for embedding videos (e.g. YouTube) in news articles: the news website can itself host a cache of static images for the frame and an icon for the "play" button; when users click that icon, it activates the third-party video, and in that case the third party will interact directly with that user and can collect other data. But the unactivated content will be tracked only by the origin website, under its own published policy.
In my local development environment I use the Apache server. What worked for me was:
Open your config file in sites-available/yoursite.conf, then add the following line inside your VirtualHost:
Header always set Strict-Transport-Security "max-age=0"
Then restart your server.
I'm getting SSL warning messages all over my website after switching to SSL for several assets:
Mixed Content: The page at 'https://example.com' was loaded over HTTPS,
but requested an insecure script 'http://example.com/script.js'. This
request has been blocked; the content must be served over HTTPS.
I checked the page source; every single script/CSS file is requested over https.
I even checked the dynamically created HTML using the code inspector.
I disabled JavaScript in case a script was loading these assets dynamically.
None of these things showed a single http:// request. I'm out of ideas for finding what is causing this. Any ideas or suggestions?
When seeing a mixed-content message about an http://example.com/script.js (non-https) URL that doesn’t actually appear anywhere in your sources, the basic strategy to follow is:
Replace the http in the URL with https and put that into the address bar in your browser: https://example.com/script.js
If your browser redirects from that https://example.com/script.js URL back to (non-https) http://example.com/script.js, then you’ve found the cause: example.com/script.js isn’t actually available from an https URL, and ends up getting served from an http URL even though your source is requesting the https URL.
My 2 cents regarding this issue.
I have a project hosted on one domain that works flawlessly.
I need to make it international, so I am cloning the master branch to a new branch, making some necessary text changes, and deploying a new site (new domain) with code from the new branch.
Everything works fine, except one Ajax call (an API route) that gets blocked due to mixed content.
First things first, I checked these 3 things:
I checked the Network tab in dev tools and it is actually loaded through https.
I opened the file directly in the browser and it is https.
I tried to open it as http:// and it automatically redirects to https://
This is very strange because the two domains are both using Cloudflare and their backend setup is identical, and the code is the same (only text changes for the new one), yet on the new setup there is a console error for one specific API route, and all others (some 20+ Ajax requests across the page) work just fine. They are even using the same function to make the Ajax request, so it is definitely not a configuration error.
After doing some investigation I found out the issue:
The call that was 'buggy' was ending in /. For example, all other calls were made to:
https://example.com/api/posts
https://example.com/api/users
And this particular one was making requests to
https://example.com/api/todos/
The slash at the end was making it fail with the mixed-content issue. I am not sure why this causes a problem, or why it isn't an issue on the original site (where the same Ajax call works just fine), but removing the trailing slash definitely fixed my issue.
If I figure out what caused the / to fail so miserably, I will post an update.
I have an ASP.NET MVC action that sends a GET request to another server via HttpWebRequest. I'd like to include all cookies in the original action's request in the new request. Some of the System.Web.HttpCookies in the original request have empty domain values (i.e. ""), which apparently doesn't cause any issues. When I create a System.Net.Cookie using the name, value, path, and domain of each of these cookies and add it to the request's CookieContainer, I get this error:
"System.ArgumentException: The parameter '{0}' cannot be an empty string. Parameter name: cookie.Domain"
Here's some code that will throw the same error (when the cookie is added):
var request = (HttpWebRequest)WebRequest.Create("http://www.whatever.com");
request.Method = "GET";
request.CookieContainer = new CookieContainer();
request.CookieContainer.Add(new Cookie("MyCookieName", "MyCookieValue", "/", ""));
EDIT
I sort of fixed this by using "localhost" for the domain, instead of the null or empty string value from the original HttpCookie. So, why does an empty domain not work for the CookieContainer? And does HttpCookie use an empty value to signify localhost, or do I need to find another fix for this problem?
Disclaimer:
As stated earlier by @feroze, setting your cookies' domain to localhost is not going to work out so well for you. I'm assuming you're writing a helper that allows you to tunnel HTTP requests out to foreign domains. Note that this is not best practice and in a lot of cases is not needed (i.e. jQuery has a lot of cool cross-domain support built-in, also see the new CORS specification). But sometimes you may be stuck doing this (i.e. the external resource is XML only, and is on a server that doesn't support CORS).
Background Information on Cookie Domains and How They Work:
If you haven't already, take a look at HTTP Cookie: Domain and Path on Wikipedia -- pretty much everything you need to know is in there.
When evaluating a cookie, the Domain and Path are taken into account by both the client (the "local" requester) and the web server (the "foreign" responder). When a client requests a resource, the client should only send cookies where those cookies match the Domain (or a more generic parent domain) and Path (or a more generic parent path) of the URI being requested.
Web browsers handle this correctly. If a web browser has a cookie for the domain "localhost" and you're requesting "google.com", for example, those cookies for the "localhost" domain won't be sent in the request to "google.com". -- In fact, most modern browsers won't just not send them, they'll completely ignore them in Set-Cookie response headers that they receive (these are called third-party cookies -- enabling the acceptance of third party cookies in your web browser is a huge privacy/security concern -- don't do it!).
It works in the other direction as well -- even though it's unlikely for a client to include a third-party cookie in a request, if it does, the foreign web server is supposed to ignore it (and even some cookies for correct domains/paths, so as to prevent the infamous super-cookie issue; e.g. the web server hosting "example.com" should ignore cookies belonging to its parent domain ".com", because ".com" is a "public suffix").
What You Should Do [if you have to]:
The course of action I recommend for you is this: when you read in your client's cookies (I'm not an MVC guy, but in regular ASP.NET this would be in Request.Cookies), loop through them (make sure to filter out your own site's legitimate cookies, especially SessionId, etc -- or use Path properly so they never get sent to this page in the first place), then add them one at a time to the outgoing request's cookie collection, rewriting the domain to be "www.whatever.com" (per your example -- if you're doing this dynamically, load the URL into a new Uri() object and use the .Host property), and then set the Path to "/". -- This will build the "Cookie" header for the outgoing request to the foreign web server.
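A minimal sketch of that outgoing loop (classic ASP.NET MVC / System.Web; the target URL and the session-cookie filter below are placeholder assumptions you'd adapt to your own site):
// Sketch only: copy the caller's cookies onto the outgoing HttpWebRequest,
// rewriting Domain and Path so the CookieContainer will send them to the foreign host.
var targetUri = new Uri("http://www.whatever.com/some/path");   // hypothetical target
var request = (HttpWebRequest)WebRequest.Create(targetUri);
request.Method = "GET";
request.CookieContainer = new CookieContainer();

foreach (string name in Request.Cookies)
{
    // Skip your own site's cookies (session, auth, etc.) -- adjust to your app.
    if (name == "ASP.NET_SessionId") continue;

    HttpCookie original = Request.Cookies[name];
    request.CookieContainer.Add(
        new System.Net.Cookie(original.Name, original.Value, "/", targetUri.Host));
}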
When that request returns to your server, you then need to check its incoming response for new cookies -- those cookies can be repackaged and sent back down to your client in much the same type of loop as I illustrated in the previous paragraph, except you'll want to rewrite the domain to be Request.Url.Host -- and you'll want to set the path back to "/" unless the path to your passthru page is static (I'm guessing it isn't since you're using MVC), in which case you'd want to set it to Request.Url.AbsolutePath, for instance.
Happy Coding!
EDIT:
Also, you'll want to set the X-Forwarded-For header on the outgoing request, so that the website you're calling doesn't think your web server is one single client that's been spamming the crap out of them.
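For instance, continuing the sketch above (X-Forwarded-For is not a restricted header, and Request.UserHostAddress is the client's IP in System.Web):
// Pass the original client's IP along so the foreign server doesn't see your
// web server as one single, very chatty client.
request.Headers.Add("X-Forwarded-For", Request.UserHostAddress);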
Not sure if this solves your problem, but to add cookies without the Domain property you can add the cookies to the headers directly using HttpRequestHeader.Cookie, as follows:
request.Headers.Add(HttpRequestHeader.Cookie, "Your cookies...");
Hope it helps!
Some background
This occurs because CookieContainer is a client-side container designed to be reused across multiple HttpWebRequests. Reusing it provides the expected cookie behavior: cookies set by the remote host are sent back with every subsequent HttpWebRequest targeted at the same host.
As a result of the reuse, a CookieContainer might actually contain cookies from multiple requests and/or hosts.
So, in order to determine which of the cookies in the container need to be sent with a particular HttpWebRequest to some host (domain), CookieContainer examines the Domain and the Path property.
That's why a Cookie in a CookieContainer needs to have a valid Domain.
Conversely, on the server side, cookies are delivered via a different type, CookieCollection, which is a simple list of cookies with no extra logic.
Specifically, in your case, while copying cookies from the CookieCollection to the CookieContainer you need to set the Domain property of every cookie to the domain you are going to forward the request to, so that HttpWebRequest will know to include the cookies when sending the request.
You are trying to get cookies sent to localhost, right?
Why don't you do something like this where you give your own machine a real name:
Edit your hosts file and add a line "127.0.0.1 myname.com"
Test using myname.com - which is actually your localhost.
Your browser or app will not know the difference and send cookies to myname.com if that is where the cookie belongs.
Detailed info:
The hosts file on Windows is located at C:\Windows\System32\drivers\etc\hosts