AWS Signed Cookies in an IFrame?

Is it possible to support cookies in an IFrame that won't be broken by some of the recent cookie security improvements? The IFrame is embedded on arbitrary domains that we don't control. Beyond the initial request URI being passed, we don't need any special message passing or cross-domain access.
Context: an app I've inherited serves authenticated S3 content in an IFrame to users. The content is proxied by CloudFront, leaning on their Signed Cookies feature to authenticate the initial HTML page, as well as every other asset (CSS, JS, images, etc.) that might be on the page. The cookie is generated/set after a successful auth handshake.
Recently, the move towards blocking third party cookies has broken this model. Users need to downgrade their security settings, and this will flat out stop working soon.
Short of a larger architectural change, is there a way to configure the cookies or CF to work within an IFrame, embedded on domains we don't control? My assumption is that this model is fundamentally broken now, but I wanted to triple check before reaching for a larger architectural change.
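For what it's worth, the only cookie-level configuration I'm aware of is the SameSite attribute. Here's a minimal sketch (Python; the cookie names are CloudFront's standard ones, everything else is illustrative) of how the signed cookies have to be set for a browser to send them from a cross-site iframe at all:

```python
# Simplified sketch, not our actual code. SameSite=None must be paired
# with Secure, and is a prerequisite for the cookies being sent from a
# cross-site iframe. It does not help once a browser blocks third-party
# cookies outright.
def cloudfront_cookie_headers(policy: str, signature: str, key_pair_id: str) -> list[str]:
    attrs = "Path=/; Secure; HttpOnly; SameSite=None"
    return [
        f"Set-Cookie: CloudFront-Policy={policy}; {attrs}",
        f"Set-Cookie: CloudFront-Signature={signature}; {attrs}",
        f"Set-Cookie: CloudFront-Key-Pair-Id={key_pair_id}; {attrs}",
    ]
```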
Thanks

Related

Is it possible to have GitHub Readme images follow redirects?

I'm trying to add a test coverage badge to the Readme of a private repository on GitHub. Our continuous integration process saves the image to a secured Google Cloud Storage bucket that's not accessible to the public, and it should remain that way.
Google's authorization layer is smart enough that if I go to the URL for the image, I'm automatically redirected to the resource with a valid auto-generated signed URL.
E.g., if I go to http://storage.cloud.google.com/secret-files/mysecretfile.png, then if I'm logged in and allowed to view it, I'm automatically redirected to something like https://blahblah-apidata.googleusercontent.com/download/storage/v1/b/secret-files/o/mysecretfile.png?key=verylongkey, where I can load the image.
This seemed perfect. Reference the canonical path in the GitHub Readme, authenticated users see the image, unauthenticated users are still blocked, we don't have to make the file public, and we don't have to do anything complicated.
Except that GitHub is proxying the image request, meaning that it will always be unauthenticated. My browser is loading something like https://camo.githubusercontent.com/mysecretimage.png.
Is there a clever way to work around this? Or do I need to go back to the drawing board?
All images on github.com are proxied through the Camo image proxy. There are a few reasons for this:
1. It preserves the privacy of users. It isn't possible for a document to track users by directing them to a different site or by setting cookies on them.
2. It means images can be cached and served at an appropriate size.
3. GitHub can have a very strict Content Security Policy that does not allow loading from untrusted sites, which means that any sort of accidental security problem (like an XSS) is a lot less likely to work.
Note the last part. Even if you found some sneaky way to get another image URL to render properly in the website, your browser wouldn't load it because it violates the Content-Security-Policy header the site sent, and moreover, your browser would tattle about that to the reporting URL that GitHub provided.
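For concreteness, such a policy looks along these lines (the directive values here are illustrative, not GitHub's actual header): images may only be loaded from whitelisted hosts, and violations are reported to the report-uri.

```http
Content-Security-Policy: img-src 'self' data: *.githubusercontent.com; report-uri /csp-report
```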
So any image URL you provide will need to be readable by GitHub's image proxy and it won't be possible to serve different content to different users.

removing cookies on another domain using mod_rewrite and Apache

I have built a cookie consent module that is used on many sites, all using the same server architecture, on the same cluster. Visitors of these sites can administer their cookie settings (e.g., no advertising cookies, but allow analytics cookies) on a central domain that keeps track of their preferences (and of the sites they have visited).
When they change their settings, all sites the visitor has been to that use my module (tracked in a cookie) are contacted by loading them, with a parameter, in hidden iframes. I tried the same with images.
On these sites a rewrite rule is in place that detects that parameter, retracts the cookie (by setting its expiry date in the past), and redirects to a page on the module site (or an image on the module site).
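For illustration, the rules have roughly this shape (a sketch only; the parameter name, cookie name, and module domain are placeholders, not the real ones):

```apache
RewriteEngine On
# Flag requests that carry the retraction parameter ...
RewriteCond %{QUERY_STRING} (^|&)retract=1(&|$)
RewriteRule ^ - [E=RETRACT:1]
# ... expire the cookie by setting its date in the past ...
Header always set Set-Cookie "consent=deleted; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Path=/" env=RETRACT
# ... and redirect to the module site.
RewriteCond %{QUERY_STRING} (^|&)retract=1(&|$)
RewriteRule ^ https://consent-module.example.com/retracted [R=302,L]
```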
This scheme works in all browsers except IE, which requires a P3P header (the reason it is not working for images is probably similar).
I also tried loading a non-existent image on the source domain (that is, the domain that is using the module) through an image tag, obviously resulting in a 404. This works in all browsers except Safari, which doesn't set cookies on 404s (at least, that is my conclusion).
My question is, how would it be possible to retract the cookie consent cookie on the connected domains, given that all I can change are the rewrite rules?
I hope that I have explained the problem well enough for you guys to give an answer, and that a solution is possible...
I am still not able to resolve this question, but looking at it the other way around, there is a solution. Using JSONP (for an example, see: Basic example of using .ajax() with JSONP?), the client domain can load information from the master server and compare it to local information.
Based on that, the client site can retract the cookie (or even replace it) and force a reload, which will trigger the rewrite rules...
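The master server's side of that JSONP exchange can be as small as wrapping the stored settings in the callback the client page supplied (a minimal sketch; all names are hypothetical):

```python
import json

def jsonp_body(callback: str, settings: dict) -> str:
    # Served with Content-Type: application/javascript; the client page
    # defines the callback function and compares the settings it receives
    # against its local cookie.
    return f"{callback}({json.dumps(settings)});"

# jsonp_body("handleConsent", {"advertising": False, "analytics": True})
# -> 'handleConsent({"advertising": false, "analytics": true});'
```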
A drawback of this solution is that it hits the server on every pageview, and in my case that's a real problem. Checking only every x minutes or so (by setting a temporary cookie) would mitigate it.
Another, even simpler solution would be to expire all the cookies on the client site every x hours. This would force a revisit of the main domain as well.

How do I get placemark icons to load over SSL?

I'm working on a web application that uses the Google Earth plugin. Recently, a requirement was added for non-public users to log on, which means some users now use the site over HTTPS. Among the things that broke in testing were the custom placemark icons (they were working over HTTP).
The icons are hosted on the same server that serves the page.
Here are the URLs for each of the protocols.
http - http://localhost/Images/yellow.png
https - https://localhost/Images/yellow.png
I can follow either link and the image appears as you would expect.
The image hrefs are declared in icon styles in dynamically generated KML.
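The generated KML declares them along these lines (the structure is assumed from the description; only the href is real):

```xml
<Style id="yellowPlacemark">
  <IconStyle>
    <Icon>
      <href>https://localhost/Images/yellow.png</href>
    </Icon>
  </IconStyle>
</Style>
```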
I want to avoid loading the images over HTTP because I think that will cause Internet Explorer to present the user with a mixed-content warning.
How do I get the images to load properly while using https?
I have been wrestling with this myself -- the short answer is that this won't work. If the content is served off of an HTTPS site that generates any kind of error/prompt (authentication, invalid certificate, etc.) the plugin will simply not load the content.
Interestingly, the desktop client works fine and prompts the user for credentials if necessary. However, neither client will allow content to be served off of site with an untrusted certificate.
The only workaround that I have found is:
1. Use a trusted HTTPS certificate on the server hosting the content (either trust the certificate on the client systems or just use a real certificate).
2. Do not use Basic auth over HTTPS, as that will always generate 401/challenge responses, which the web browser client will simply ignore.
3. If authentication is a requirement, use NTLM authentication and common (e.g., domain) logins. If you load the plugin in Internet Explorer (or in a .NET WebBrowserControl), the authentication will be handled properly and the images will show up.
I was at a Google Earth administrator's training last week and the trainer confirmed this "bug". It is supposed to be fixed in the next version of the plugin (it may actually be fixed already -- what version of the plugin are you using?)

How can cookies track users despite the same-origin policy?

This article discusses tactics used by political campaigns:
http://www.nytimes.com/2012/10/14/us/politics/campaigns-mine-personal-lives-to-get-out-vote.html
The part in question is quoted:
The campaigns have planted software known as cookies on voters’ computers to see if they frequent evangelical or erotic Web sites for clues to their moral perspectives. Voters who visit religious Web sites might be greeted with religion-friendly messages when they return to mittromney.com or barackobama.com.
How is that possible? I thought all modern browsers enforce a same-origin policy, so that website A doesn't have access to any information about website B, website C, etc.
The article makes it sound like a user browses:
1. presidentialcandidate.com
2. website2.com
3. website3.com
4. website4.com
5. presidentialcandidate.com
How can a cookie from visit #1 track user history and be revealed in visit #5?
It's true that browsers commonly won't accept or send cookies set for a different domain than the request itself. While actual implementations vary, one straightforward technique is using third-party cookies. If website2.com, website3.com and website4.com all embed resources from presidentialcandidate.com (for example, an advertisement in an iframe, or a 1x1 pixel image) and the user's browser accepts and sends third-party cookies, then presidentialcandidate.com can learn, by setting and retrieving its cookie and reading the HTTP Referer header, that the visitor has previously visited those sites.
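A toy sketch (Python; all names hypothetical) of the candidate site's side of that pixel makes the mechanism concrete:

```python
import uuid

def handle_pixel(cookie_id: str | None, referer: str | None,
                 visits: dict[str, list[str]]) -> str:
    """Handle a request for the 1x1 tracking pixel; return the Set-Cookie value."""
    visitor = cookie_id or uuid.uuid4().hex             # first sighting: mint an ID
    if referer:
        visits.setdefault(visitor, []).append(referer)  # page that embedded the pixel
    return f"vid={visitor}; Path=/; Max-Age=31536000"
```

On the fifth visit in the example above, presidentialcandidate.com reads `vid` back and already has websites 2 through 4 recorded against it.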
RFC 6265 discusses the privacy implications of third-party cookies in greater detail.
It may not always be resources from presidentialcandidate.com that are enabling this process. For example, some services use cookie syncing to align cookie identifiers between services (a description of cookie syncing in one scenario).

Images on SSL enabled site with Internet explorer

I have a problem with my site after implementing SSL: images do not appear. The images come from images.domain.com (hosted on Amazon S3), and my certificate is for www.domain.com.
This problem only seems to happen in IE and not in any other browsers.
The issue is related to "mixed content": HTTPS pages which have HTTP resources (images, scripts, etc.) embedded.
The point of using HTTPS is to ensure that only the originating server and the client have access to the secured page. In theory, however, this security could be compromised if HTTP resources are embedded: an attacker might intercept an unsecured JavaScript file in transit and inject code that alters the secured page on load.
Most browsers will indicate that a secure page has mixed content by altering the "secure lock" icon, either by showing the lock as open or broken, or by making the icon red (Chrome displayed a skull and crossbones for a short time, but they realised that this was a bit too serious for the actual threat level).
Internet Explorer (depending on the version) will display a message asking either whether the insecure content should be shown (IE<=7) or whether only the secure content should be shown (IE>=8). It sounds like you have somehow configured this prompt away so that insecure content is always hidden; however, that's not the default behaviour.
I think the best solution for you is to replace your S3 links with HTTPS versions.
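In markup terms that just means emitting the scheme explicitly (illustrative; the hostname is the one from the question, and note that images.domain.com must itself present a valid certificate over HTTPS, which ties into the answer below):

```html
<!-- before: <img src="http://images.domain.com/photo.jpg"> -->
<img src="https://images.domain.com/photo.jpg" alt="product photo">
```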
I am not a web developer, but someone who often deals with the crap experience that is IE. I am not sure what version you are using, but you do not have a wildcard SSL cert (i.e., *.domain.com), so could it have something to do with an old-school limitation on third-party images?
See here for what I allude to above and a very good explanation of how IE caches cross-domain HTTPS content, specifically images. I am not sure what the solution is, but I was curious so I researched a little myself and this might help.