I'm using a Google API (e.g., for Maps Embed) with a key that is restricted via a list of HTTP referrers. In this case the map is embedded in my.site.com, so within the Google API -> Credentials page I allow access for the referrer *.site.com/*. When I visit my.site.com from most browsers, the Google map displays correctly because the browser sets the referrer field to my.site.com. When using the Brave browser, however, the referrer field is set to the origin of the request target, and the map displays an error:
Request received from IP address 98.229.177.122, with referrer: https://www.google.com/
Of course I could add google.com to the list of allowed referrers, but that defeats the purpose of limiting the use of the API key to my own website - anyone could "borrow" the API key, add it to their own site for the same API, and anyone using Brave would be able to access the feature. Now that each access costs money, I'd rather not do this. Any ideas for a workaround?
Note: @geocodezip - thanks for the reference. Indeed, I forgot to add that when I set the site-specific Shields setting to "All cookies allowed", or even turn Shields off entirely for the site, the behavior is still the same (error). However, when I set the cookies field to "All cookies allowed" in the default Shields settings, it works as intended (maps are displayed), even though the default settings section states:
These are the default Shields settings. They apply to all websites
unless you change something in the Shields panel on a particular site.
Changing these won't affect your existing per-site settings.
which I interpret to mean that the site-specific settings take precedence over the defaults.
So I'm thinking this (the site-specific cookies setting not overriding the default) is a Brave bug, though that is a bit separate from my initial hope for a different approach that doesn't require manual intervention on the user's part.
I have an application with a VueJS-based front end and a NodeJS-based backend API. The client side is a SPA and it communicates with the API to get data. A security scan has now flagged that the app doesn't send a Permissions-Policy HTTP header, and I would like to add it. I'm not sure whether there is an option I can set in VueJS, and I'm confused about whether this is something that needs to be added from the front end. From the Node app it is possible to set the header, but here the pages are not generated server-side. It would be helpful if someone could let me know how I can add these headers to the app.
Technically you can publish the Permissions-Policy header when you send the SPA's initial HTML (you have to use some package or a Node.js server facility to set the response header). Scanners do not execute AJAX anyway, so they will never see the inner pages of your SPA.
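For illustration, here is a minimal sketch of publishing the header from the Node side when serving the built SPA; Express and the example directive values are assumptions on my part, not something from the question:

    // Hypothetical Express server for a built Vue SPA in ./dist,
    // attaching a Permissions-Policy header to every response.
    const express = require('express');
    const path = require('path');

    const app = express();

    app.use((req, res, next) => {
      // Example directives only; list the features your app actually needs.
      res.setHeader('Permissions-Policy', 'geolocation=(), camera=(), microphone=()');
      next();
    });

    app.use(express.static(path.join(__dirname, 'dist')));

    // SPA fallback so client-side routes also receive the header.
    app.use((req, res) => {
      res.sendFile(path.join(__dirname, 'dist', 'index.html'));
    });

    app.listen(3000);

A reverse proxy (nginx, a CDN, etc.) in front of the Node app could add the same header instead; either way, the scanner only looks at the response headers of the initial document.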
But there are some doubts about whether this is worth doing at all.
Permissions Policy is the new name for Feature Policy; below I will use the term Feature Policy, but everything below also applies to Permissions Policy.
Browsers support Feature Policy poorly and do not support Permissions Policy. Only Chrome supports the interest-cohort directive, and you have to set specific flags to enable Permissions Policy support. The Feature Policy / Permissions Policy spec is still under development.
Feature Policy is rarely published via an HTTP header, because it is intended to restrict the capabilities of nested browsing contexts (iframes), not the main page itself. Therefore it is mostly set via the allow="..." attribute on each embedded third-party <iframe>.
But scanners are not aware of this and do not check the allow= attribute.
Scanners don't know much about real security; they are more focused on visual baubles like an A+ grade and green/red labels. Therefore scanners:
do not recognize a Content Security Policy in a meta tag, only in the HTTP header;
require the X-Frame-Options header for every web page despite the presence of CSP's frame-ancestors directive, and ignore the fact that some sites are intended to be embedded (widgets, YouTube/Vimeo videos, etc.);
require a Feature Policy / Permissions Policy header even though these are poorly supported or are published another way.
Mostly, scanner results have nothing to do with real security; it is all about how to get an A+ grade, nothing else (see the related thread "headers manipulation to get Grade A+").
Of course, scanners can draw your attention to some overlooked headers, but the final decision about which headers your web app needs to publish is up to you.
I'm trying to add a test coverage badge to the Readme of a private repository on GitHub. Our continuous integration process saves out the image to a secured Google Cloud Storage bucket that's not accessible to the public, and should remain that way.
Google's authorization layer is smart enough that if I go to the URL for the image, I'm automatically redirected to the resource with a valid auto-generated signed URL.
E.g., if I go to http://storage.cloud.google.com/secret-files/mysecretfile.png, then if I'm logged in and allowed to view it, I'm automatically redirected to something like https://blahblah-apidata.googleusercontent.com/download/storage/v1/b/secret-files/o/mysecretfile.png?key=verylongkey, where I can load the image.
This seemed perfect. Reference the canonical path in the GitHub Readme, authenticated users see the image, unauthenticated users are still blocked, we don't have to make the file public, and we don't have to do anything complicated.
Except that GitHub is proxying the image request, meaning that it will always be unauthenticated. My browser is loading something like https://camo.githubusercontent.com/mysecretimage.png.
Is there a clever way to work around this? Or do I need to go back to the drawing board?
All images on github.com are proxied using the Camo image proxy. There are a couple reasons for this:
It preserves the privacy of users: a document can't track them by directing their image requests to a different site or by using cookies.
It means images can be cached and served at an appropriate size.
GitHub can have a very strict content security policy that does not allow loading from untrusted sites, which means that any sort of accidental security problem (like an XSS) is a lot less likely to work.
Note the last part. Even if you found some sneaky way to get another image URL to render properly in the website, your browser wouldn't load it because it violates the Content-Security-Policy header the site sent, and moreover, your browser would tattle about that to the reporting URL that GitHub provided.
So any image URL you provide will need to be readable by GitHub's image proxy and it won't be possible to serve different content to different users.
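If you do end up needing a URL that Camo can fetch without interactive authentication, one option (my suggestion, not part of the original answer) is to have the CI job generate a time-limited signed URL and write it into the README; here is a minimal Node sketch using @google-cloud/storage, reusing the bucket and object names from the question:

    // Hypothetical sketch: create a time-limited signed URL for the badge image.
    // Anyone holding the URL (including GitHub's Camo proxy) can read the object,
    // so the expiry time is the real access control here.
    const { Storage } = require('@google-cloud/storage');

    async function badgeUrl() {
      const storage = new Storage(); // uses application default credentials
      const [url] = await storage
        .bucket('secret-files')
        .file('mysecretfile.png')
        .getSignedUrl({
          version: 'v4',
          action: 'read',
          expires: Date.now() + 7 * 24 * 60 * 60 * 1000, // v4 URLs max out at 7 days
        });
      console.log(url);
    }

    badgeUrl().catch(console.error);

Because the URL expires, the CI job would have to refresh the README periodically, which may or may not be acceptable in practice.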
I have built a cookie consent module that is used on many sites, all using the same server architecture, on the same cluster. Visitors of these sites can administer their cookie settings (e.g. no advertising cookies, but allow analytics cookies) on a central domain that keeps track of the user preferences (and of the sites that were visited).
When they change their settings, all sites the visitor has been to that use my module (the list is kept in a cookie) are contacted by loading them with a parameter in hidden iframes. I tried the same with images.
On those sites a rewrite rule is in place that detects that parameter, retracts the cookie (sets its expiry date in the past), and redirects to a page on the module site (or to an image on the module site).
This scheme works in all browsers except IE, as IE needs a P3P policy (the reason it is not working for images is probably similar).
I also tried loading a non-existent image on the source domain (that is, the domain that is using the module) through an image tag, obviously resulting in a 404. This works on all browsers, except Safari, which doesn't set cookies on 404's (at least, that is my conclusion).
My question is, how would it be possible to retract the cookie consent cookie on the connected domains, given that all I can change are the rewrite rules?
I hope that I have explained the problem well enough for you guys to give an answer, and that a solution is possible...
I have still not been able to resolve this question directly, but looking at it the other way around, there is a solution. Using JSONP (for an example, see: Basic example of using .ajax() with JSONP?), the client domain can load information from the master server and compare it to the local information.
Based on that, the client site can retract the cookie (or even replace it) and force a reload which will trigger the rewrite rules...
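As a rough illustration of that direction (the endpoint URL, response field, and cookie name below are made up for the example; only the JSONP mechanism comes from this answer), the client site could run something like this on page load:

    // Hypothetical check against the central consent server via JSONP (jQuery).
    // 'https://consent.example.com/status' and 'consentGranted' are assumptions.
    $.ajax({
      url: 'https://consent.example.com/status',
      dataType: 'jsonp',
      success: function (remote) {
        var localConsent = document.cookie.indexOf('consent=granted') !== -1;
        if (localConsent && !remote.consentGranted) {
          // Retract the local consent cookie and reload so the server-side
          // rewrite rules see the changed state on the next request.
          document.cookie = 'consent=; expires=Thu, 01 Jan 1970 00:00:00 GMT; path=/';
          location.reload();
        }
      }
    });

To avoid hitting the central server on every pageview (the drawback mentioned below), this check could be gated behind a short-lived "last checked" cookie.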
A drawback of this solution is that it hits the server on every pageview, and in my case that's a real problem. Only checking every x minutes or so (by setting a temporary cookie) would mitigate that.
Another, even simpler solution would be to expire all the cookies on the client site every x hours. This will force a revisit of the main domain as well.
I want to use an application that checks for broken links, and I understand that Xenu is one such tool. I do not have access to the internal aspx/http files on a drive. The problem I am facing is that the website requires the user to be authenticated; after login I need to crawl the site to determine which links are broken.
As an example, take mail.google.com. We type in the username and password, after which we are served different URLs. If I give Xenu (or a similar program) a link such as mail.google.com, it will not be able to fetch the URLs inside mail.google.com, which are of the form /mail/u/0/?shva=1#inbox/ etc. That is the problem.
With minimal scripting, how can I give Xenu (or another similar app) the ability to log in via the external URL (mail.google.com in this example) so that it can do whatever Xenu has to do?
Xenu can be used with an authenticated user as long as the cookies are persistent. You will need to enable cookies in Xenu and log in once yourself using IE.
From their FAQ:
By default, cookies are disabled, and Xenu rejects all cookies. If you need cookies because
- you have used Internet Explorer to authenticate yourself before starting a run
- you want to prevent the server from delivering URLs with a session ID
then you can enable the cookies in the advanced options dialog. (This has been available since Version 1.2g)
Warning: You should not use this option if you have links that delete data, e.g. a database or a shop - you are risking data loss!!!
You can enable cookies in the Options menu. Click Preferences and switch to the Advanced tab.
For single-page applications (like Gmail) you will also need to configure Xenu to parse JavaScript.
This is done by modifying the ini file (traditionally at C:\Program Files (x86)\Xenu135\Xenu.ini) and adding a line under [Options]:
Javascript=[Jj]ava[Ss]cript: *[_a-zA-Z0-9]+ *\( *['"]((/|ftp://|https?://)[^'"]+)['"]
There are several variations provided in their FAQ, but I didn't get them to work perfectly.
I can do this (modify request headers) in FF and IE, and I know it doesn't exist in Chrome yet. Does anybody know if you can do this in a Safari plugin? I can't find anything that says one way or another in the documentation.
Edit (November 2021): as pointed out in the comments, ParosProxy seems to no longer exist (it was last released around 2006, from what I can see). There are more modern options for debugging on Mac (outside of browser plugins on non-Safari browsers), like Proxyman. Rather than adding another list of links that might expire, I'll advise people to search for "debugging proxy" on their platform of choice instead.
Original Answer (2012):
The Safari "Develop" menu in advanced preferences allows you to partially customize headers (like the user agent), but it is quite limited.
However, if a particular browser or app does not allow you to alter the headers, just take it out of the equation. You can use things like Fiddler or ParosProxy (and many others) to alter the requests regardless of the application sending the request.
They also have the advantage of allowing you to make sure that you are sending the same headers regardless of the application in question and (depending on your requirements) potentially work across multiple browsers and apps without modification.
Safari has added extension support, but its APIs don't give you granular control over requests and responses compared to Chrome/Firefox/Edge.
To get granular control over your requests and responses, you need to set up a system-wide proxy instead.
The Requestly Desktop App does this for you automatically, and on top of that you can make various kinds of modifications, like:
Modify Request/Response Headers
Redirect URLs
Modify Response
Delay Network request
Insert Custom Scripts
Change User-Agent
Here's an article about header modification using Requestly: https://requestly.io/feature/modify-request-response-headers/
Disclaimer: I work at Requestly