Let's say I have my website, SiteA.com, running on an Apache web server. I have defined the following in my httpd.conf file:
Header set Access-Control-Allow-Origin "CustomBank.com"
Questions:
Does this mean only CustomBank.com can access my site (SiteA.com) directly? Or does it mean only my site (SiteA.com) can access the CustomBank.com domain directly? I am confused about whether this setting is for inbound or outbound access.
In reality I don't have any CORS requirement for my site, so I didn't implement the setting mentioned above; instead, the header below shows up in my response:
Access-Control-Allow-Origin: *
The penetration testing team said this setting is overly permissive. Do I just need to remove it? If not, what should I do?
It means JavaScript loaded from CustomBank.com can make requests to your site (the site that sends the header) via XMLHttpRequest in the background.
Since XMLHttpRequest will send a user's existing session cookie for your site, malicious scripts could do all kinds of nefarious/misleading things on behalf of your user. That's why * is not normally a suitable fix.
The restrictions also apply to other, more esoteric script-like invocations that you can read about in the specs.
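Since you have no cross-origin callers at all, the usual fix is simply to stop emitting the header rather than widening it. A minimal httpd.conf sketch, assuming the wildcard is being set by a Header directive in your own configuration (and not injected by some upstream proxy or application code):
# Option 1: stop sending the header entirely, so no cross-origin access is granted
Header unset Access-Control-Allow-Origin
# Option 2: if one trusted origin really does need access, name its full origin
# (the value should be a complete origin including the scheme, not just a host name)
# Header set Access-Control-Allow-Origin "https://custombank.com"
Either way, the pen-test finding goes away because the response no longer advertises access to every origin.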
I run a website on the clearnet using Apache and want the connection to be made via the .onion address when a user opens the clearnet URL in Tor Browser.
I know Facebook uses a standard called HTTP Alternative Service, but I don't know how I should implement it myself in Apache.
I found a solution; however, I can't verify whether it actually works as intended.
One can add the header in the .htaccess file by adding
Header add Alt-Svc 'h2="example.onion:80"'
to the file.
This does in fact add the Alt-Svc header to the responses; however, the standard specifies that the URL shown to the user in the browser should remain unchanged, even though the connection actually happens via the alternative service.
Thus I have so far been unable to verify whether Tor Browser actually connects via the .onion address as intended.
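For what it's worth, RFC 7838 also defines an ma (max-age) parameter that tells clients how long to remember the alternative service, and h2 normally implies TLS, so a variant like the one below is sometimes suggested. The :443 port, the one-day ma value and the use of set instead of add are assumptions on my part, not something I have verified against Tor Browser:
# Advertise the onion service as an alternative; clients may cache it for 24 hours
Header always set Alt-Svc 'h2="example.onion:443"; ma=86400'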
I have a website (userbob.com) that normally serves all pages as https. However, I am trying to have one subdirectory (userbob.com/tools/) always serve content as http. Currently, it seems like Chrome's HSTS feature (which I don't fully understand) is forcing my site's pages to load over https. I can go to chrome://net-internals/#hsts and delete my domain from Chrome's HSTS set, and the next request will work as I want without redirecting to an https version. However, if I try to load the page a second time, it ends up redirecting again. The only way I can get it to work is to go to chrome://net-internals/#hsts and delete my domain from Chrome's HSTS set after each request. How do I let browsers know that I want all pages under userbob.com/tools/ to load as http? My site uses an Apache/Tomcat web server.
(Just FYI, the reason I want the pages in the tools directory to serve pages over http instead of https is because some of them are meant to iframe http pages. If I try to iframe an http page from an https page I end up getting mixed-content errors.)
HTTP Strict Transport Security (HSTS) is a setting your site can send to browsers which says "I only want to use HTTPS on my site - if someone tries to go to an HTTP link, automatically upgrade them to HTTPS before you send the request". It basically stops the browser from sending any HTTP traffic to your site, either accidentally or intentionally.
This is a security feature. HTTP traffic can be intercepted, read, altered and redirected to other domains. HTTPS-only websites should redirect HTTP traffic to HTTPS, but there are various security issues/attacks if any requests are still initially sent over HTTP, so HSTS prevents this.
The way HSTS works is that your website sends an HTTP header, Strict-Transport-Security, with a value of, for example, max-age=31536000; includeSubDomains on its HTTPS responses. The browser caches this and activates HSTS for 31536000 seconds (1 year), in this example. You can see this HTTP header in your browser's web developer tools or by using a site like https://securityheaders.io . By using the chrome://net-internals/#hsts page you are able to clear that cache and allow HTTP traffic again. However, as soon as you visit the site over HTTPS it will send the header again and the browser will revert back to HTTPS-only.
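For reference, on Apache that header is typically sent with a single mod_headers directive in the HTTPS virtual host; the one-year value below simply mirrors the example above and is an illustration, not a recommendation for your site:
# Send HSTS on every HTTPS response (1 year, including subdomains)
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"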
So to permanently remove this setting you need to stop sending that Strict-Transport-Security header. Find where it is set in your Apache/Tomcat configuration and turn it off. Or, better yet, change it to max-age=0; includeSubDomains for a while first (which tells the browser to expire its cached policy immediately and so turns HSTS off without you having to visit chrome://net-internals/#hsts, as long as you visit the site over HTTPS to pick up this header), and then remove the header completely later.
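A minimal Apache sketch of that two-step removal, assuming the header is set in your Apache configuration rather than by Tomcat (if Tomcat adds it, remove it there instead):
# Step 1: tell browsers to expire their cached HSTS policy immediately
Header always set Strict-Transport-Security "max-age=0"
# Step 2 (some time later): delete the directive above so the header is no longer sent at all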
Once you turn off HSTS you can revert back to having some pages on HTTPS and some on HTTP with standard redirects.
However it would be remiss of me not to warn you against going back to HTTP. HTTPS is the new standard and there is a general push to encourage all sites to move to HTTPS and penalise those that do not. Read this post for more information:
https://www.troyhunt.com/life-is-about-to-get-harder-for-websites-without-https/
While you are correct that you cannot frame HTTP content on an HTTPS page, you should consider whether there is another way to address this problem. A single HTTP page on your site can cause security problems like leaking cookies (if they are not set up correctly). Plus frames are horrible and shouldn't be used anymore :-)
You can use rewrite rules to redirect HTTPS requests to HTTP inside the subdirectory. Create an .htaccess file inside the tools directory and add the following content:
RewriteEngine On
RewriteCond %{HTTPS} on
RewriteRule (.*) http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
Make sure that Apache's mod_rewrite module is enabled.
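If the rest of the site force-redirects HTTP to HTTPS, you may also need to exclude /tools/ from that site-wide rule so the two redirects don't fight each other. A hypothetical sketch for the main .htaccess or virtual host (the exact form of your existing HTTPS redirect is an assumption here):
RewriteEngine On
# Redirect everything except /tools/ to HTTPS
RewriteCond %{HTTPS} off
RewriteCond %{REQUEST_URI} !^/tools/
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]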
Basically, any HTTP 301 response to an HTTPS request that redirects to an HTTP target should never be honored by any browser; servers doing that are clearly violating basic security, or are severely compromised.
However, a 301 reply to an HTTPS request can still redirect to another HTTPS target (including on another domain, provided that other CORS requirements are met).
If you navigate an HTTPS link (or a JavaScript event handler does) and the browser starts loading that HTTPS target, which then replies with a 301 redirect to HTTP, the behavior of the browser should be as if it were a 500 server error or a connection failure (DNS name not resolved, server not responding, timeout).
Such server-side redirects are clearly invalid, and website admins should never do that! If they want to close a service and inform HTTPS users that the service is hosted elsewhere and no longer secure, they MUST return a valid HTTPS response page with NO redirect at all, and this should really be a 4xx error page (most probably 404 PAGE NOT FOUND). They should not redirect to another HTTPS service (e.g. a third-party hosted search engine or parking page) that does not respect CORS requirements, or that sends false media types (though it is acceptable not to honor the requested language and to display that page in another language).
Browsers that implement HSTS are perfectly correct and going in the right direction. But I really think that the CORS specifications are a mess, tweaked mainly to still allow advertising networks to host and control themselves the ads they broadcast to other websites.
I strongly think that serious websites that still want to display ads (or any tracker for audience measurement) for valid reasons can host these ads/trackers themselves, on their own domain and over the same protocol: servers can still fetch the ad content they want to broadcast by downloading/refreshing it themselves and maintaining their own local cache. They can track their audience themselves by collecting the data they need and want, and filtering it on their own server if they want this data to be analysed by a third party: websites will have to seriously implement the privacy requirements themselves.
I now hate those far too many websites that, when visited, track you via dozens of third parties, including very intrusive ones like Facebook and most advertising networks, plus many very weak third-party services with very poor quality/security that send very bad content they never control (including fake ads, fake news, promotion of illegal activities and illegal businesses, invalid age rating...).
Let's return to the origin of the web: one site, one domain, one third party. This does not mean that sites cannot link to other third-party sites, but this must be done only via an explicit user action (tapping or clicking), and visitors MUST be able to know where this will take them, or which content will be displayed.
This is even possible when embedding videos (e.g. YouTube) in news articles: the news website can itself host a cache of static images for the frame and icons for the "play" button; when users click that icon, it activates the third-party video, and in that case the third party interacts directly with that user and can collect other data. But the unactivated content is tracked only by the origin website, under its own published policy.
In my local development environment I use the Apache server. What worked for me was:
Open your config file in sites-available/yoursite.conf, then add the following line inside your VirtualHost:
Header always set Strict-Transport-Security "max-age=0"
Restart your server.
I am using CloudFront to front requests to our service hosted outside of Amazon. The service is protected and we expect an "Authorization" header to be passed by the applications invoking our service.
We have tried invoking our service through CloudFront, but it looks like the header is getting dropped by CloudFront. Hence the service rejects the request and the client gets a 401 Unauthorized response.
For some static requests, which do not need authorization, we are not getting any error and get a proper response from CloudFront.
I have gone through the CloudFront documentation and there is no specific information available on how headers are handled, so I was hoping they would be passed as-is, but it looks like that's not the case. Any guidance from you folks?
The list of headers CloudFront drops or modifies can be found here:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/RequestAndResponseBehaviorCustomOrigin.html#RequestCustomRemovedHeaders
CloudFront does drop the Authorization header by default and will not pass it to the origin.
If you would like certain headers to be sent to the origin, you can set up a whitelist of headers under CloudFront -> Behavior Settings -> Forward headers. Just select the headers that you would like to be forwarded and CloudFront will do the job for you. I have tested it this way for one of our location-based services and it works like a charm.
One thing that I still need to verify is whether the Authorization header will be included in the cache key and whether it is safe to do that. That is something you might want to watch out for as well.
It makes sense that CloudFront drops the Authorization header by default: imagine two users asking for the same object. The first one is granted access and CloudFront caches the object; the second user would then get the object straight from the cache, without ever being authorized by the origin.
The good news is that, using forwarded headers, you can pass the Authorization header to the origin. This means the object will be cached more than once, as the header value becomes part of the cache "key".
For example, user A GETs private/index.html
Authorization: XXXXXXXXXXXXX
The object will be cached as private/index.html + XXXXXXXXXXXXX (this is the key used to cache the object in CloudFront).
Now a request from a different user arrives at CloudFront:
GET private/index.html
Authorization: YYYYYYYYYYYY
The request will be passed to the origin, as the combination of private/index.html + YYYYYYYYYYYY is not in the CloudFront cache.
CloudFront will then have cached two different objects with the same name (but different cache keys).
In addition to specifying them under the behavior settings mentioned above, you can also add custom headers to your origin configuration. From the AWS documentation for CloudFront custom headers:
If the header names and values that you specify are not already present in the viewer request, CloudFront adds them. If a header is present, CloudFront overwrites the header value before forwarding the request to the origin.
The benefit of this is that you can then use an All/wildcard setting for whitelisting headers in the behavior section.
It sounds like you are trying to serve up dynamic content from CloudFront (at least in the sense that the content is different for authenticated vs unauthenticated users) which is not really what it is designed to do.
CloudFront is a Content Distribution Network (CDN) for caching content at distributed edge servers so that the data is served near your clients rather than hitting your server each time.
You can configure CloudFront to cache pages for a short time if it changes regularly and there are some use cases where this is worthwhile (e.g. a high volume web site where you want to "micro cache" to reduce server load) but it doesn't sound like this is the way you are trying to use it.
In the case you describe:
The user will hit CloudFront for the page.
It won't be in the cache so CloudFront will try to pull a copy from the origin server.
The origin server will reply with a 401 so CloudFront will not cache it.
Even if this worked and headers were passed back and forth in some way, there is simply no point in using CloudFront if every page is going to hit your server anyway; you would just make the page slower because of the extra round trip to your server.
I have built a cookie consent module that is used on many sites, all using the same server architecture, on the same cluster. Visitors of these sites can administer their cookie settings (e.g. no advertising cookies, but allow analytics cookies) on a central domain that keeps track of the user's preferences (and of the sites that have been visited).
When they change their settings, all sites the visitor has been to that use my module (tracked in a cookie) are contacted by loading them with a parameter in hidden iframes. I tried the same with images.
On these sites a rewrite rule is in place that detects that parameter, retracts the cookie (sets its expiry date in the past) and redirects to a page on the module site (or to an image on the module site).
This scheme works in all browsers except IE, as IE needs a P3P policy (the reason it is not working for images is probably similar).
I also tried loading a non-existent image on the source domain (that is, the domain that is using the module) through an image tag, obviously resulting in a 404. This works in all browsers except Safari, which doesn't set cookies on 404s (at least, that is my conclusion).
My question is, how would it be possible to retract the cookie consent cookie on the connected domains, given that all I can change are the rewrite rules?
I hope that I have explained the problem well enough for you guys to give an answer, and that a solution is possible...
I have still not been able to resolve this question, but when looking at it the other way around there is a solution. Using JSONP (for an example, see: Basic example of using .ajax() with JSONP?), the client domain can load information from the master server and compare it to local information.
Based on that, the client site can retract the cookie (or even replace it) and force a reload, which will trigger the rewrite rules...
A drawback of this solution is that it will hit the server on every pageview, and in my case that's a real problem. Checking only every x minutes or so (by setting a temporary cookie) would mitigate this.
Another, even simpler solution would be to expire all the cookies on the client site every x hours. This would force a revisit of the main domain as well.
I'm building classic stateless RESTful APIs on Symfony2: users/apps get an authentication token from the authentication API and pass it to all other APIs to be identified and to post data or access protected/private/personal data.
I now have three concerns regarding this workflow and caching:
How can I use HTTP caching for my 'static' APIs (those that always deliver the same content, regardless of the logged-in user and their token), given that different users pass different tokens in the URL for the same API, so the URL is never the same? How can a shared HTTP cache be used then?
I've got APIs at the same URL that produce different output depending on the logged-in user's rights (I basically have 4 different rights levels). Is this a good pattern? Would it not be better to have 4 different URLs, one for each rights level, that I could cache? If not, how do I implement proper caching for that?
Does a shared HTTP cache work over HTTPS? If not, which type of caching should I implement, and how?
Thanks for your answers and any light you can shed on this.
I have had a similar issue (with all 3 scenarios) and have used the following strategy successfully with Symfony's built-in reverse-proxy cache:
If using Apache, update .htaccess to add an environment variable that your application can base HTTP caching on (NOTE: Apache automatically adds a REDIRECT_ prefix to the environment variable after an internal rewrite):
# Add `REDIRECT_CACHE` if API subdomain
RewriteCond %{HTTP_HOST} ^api\.
RewriteRule .* - [E=CACHE:1]
# Add `REDIRECT_CACHE` if API subfolder
RewriteRule ^api(.*)$ - [E=CACHE:1]
Add this to app.php after instantiating AppKernel:
// If environment instructs us to use cache, enable it
if (getenv('CACHE') || getenv('REDIRECT_CACHE')) {
require_once __DIR__.'/../app/AppCache.php';
$kernel = new AppCache($kernel);
}
For your "static" APIs, all you have to do is take your response object and modify it:
$response->setPublic();
$response->setSharedMaxAge(6 * 60 * 60);
Because you have a session, user or security token, Symfony effectively defaults to $response->setPrivate().
Regarding your second point: per REST conventions (as well as reverse-proxy recommendations), responses to GET & HEAD requests aren't meant to change between requests. Because of this, if content changes based on the logged-in user, you should set the response to private & prevent the reverse-proxy cache from caching it at all.
If caching is required for speed, it should be handled internally & not by the reverse-proxy.
Because we didn't want to introduce URLs based on each user role, we simply cached the response by role internally (using Redis) & returned it directly rather than letting the cache (mis)handle it.
As for your third point: because HTTP & HTTPS traffic hit the same cache & the responses have their public/private & cache-control settings explicitly set, the AppCache serves the same response to both secure & insecure traffic.
I hope this helps as much as it has for me!