I can do this in FF and IE, and I know it doesn't exist in Chrome yet. Anybody know if you can do this in a Safari plugin? I can't find anything that says one way or another in the documentation.
Edit (November 2021): as pointed out in the comments, ParosProxy seems to no longer exist (and was last released ~2006 from what I can see). There are more modern options for debugging on Mac (outside of browser plugins on non-Safari browsers), such as Proxyman. Rather than adding another list of links that might expire, I'll simply advise people to search for "debugging proxy" on their platform of choice.
Original Answer (2012):
The Safari "Develop" menu in advanced preferences allows you to partially customize headers (like the user agent), but it is quite limited.
However, if a particular browser or app does not allow you to alter the headers, just take it out of the equation. You can use things like Fiddler or ParosProxy (and many others) to alter the requests regardless of the application sending the request.
They also have the advantage of allowing you to make sure that you are sending the same headers regardless of the application in question and (depending on your requirements) potentially work across multiple browsers and apps without modification.
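If you want to see the principle these tools rely on, here is a minimal sketch of a header-rewriting forward proxy using only Node's built-in http module. This is only an illustration of the idea, not how Fiddler or ParosProxy actually work: the port, the injected header name and its value are made up, and it handles plain HTTP only (real debugging proxies also handle HTTPS, caching, and so on).

const http = require('http');

http.createServer((clientReq, clientRes) => {
  // A browser configured to use this proxy sends absolute URLs, e.g. http://example.com/page
  const target = new URL(clientReq.url);

  // Copy the original headers and inject/override whatever we need
  const headers = { ...clientReq.headers };
  headers['x-injected-header'] = 'hello-from-proxy'; // made-up header for illustration

  const upstream = http.request({
    hostname: target.hostname,
    port: target.port || 80,
    path: target.pathname + target.search,
    method: clientReq.method,
    headers: headers,
  }, (upstreamRes) => {
    // Relay the upstream response back to the browser unchanged
    clientRes.writeHead(upstreamRes.statusCode, upstreamRes.headers);
    upstreamRes.pipe(clientRes);
  });

  clientReq.pipe(upstream); // forward any request body
}).listen(8080); // point the browser's or OS's HTTP proxy setting at localhost:8080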
Safari has added extension support, but its APIs don't give you granular control over requests and responses compared to Chrome/Firefox/Edge.
To get that level of control, you need to set up a system-wide proxy instead.
The Requestly Desktop App sets this up for you automatically, and on top of that lets you make various kinds of modifications, such as:
Modify Request/Response Headers
Redirect URLs
Modify Response
Delay Network request
Insert Custom Scripts
Change User-Agent
Here's an article about header modification using Requestly:
https://requestly.io/feature/modify-request-response-headers/
Disclaimer: I work at Requestly
I'm currently testing out some content security policies for my React project. I am using a web.config to add the custom headers, as I'm hosting in IIS.
This all works well in Chrome etc., but has no effect in older browsers such as IE11, as they don't support 99% of CSP features.
What is the point if I can just bypass CSPs by using an old browser? Or am I missing some way of enforcing these rules, even for old browsers?
The purpose of CSPs is not to protect against malicious browser users, it's to protect against malicious or buggy websites. (This article has some example attacks that CSPs are designed to mitigate.) Such attackers have no control over what browser a given person uses, so they can't use old browsers to "bypass" CSP protections.
"bypass CSPs" does not means that somebody can hack your web page. It just means that user used obsolete browser is vulnerable to XSS (XSS it's a third-party action). It's user's decision, and you cen do nothing. Also users can use some browsers plugins to remove the CSP header with the same result in modern browsers.
However, you can do your best to protect even users with obsolete browsers. In addition to the CSP header, use the X-XSS-Protection header; IE has supported it since IE 8.
Additionally, you can publish the X-Frame-Options header to prevent clickjacking; IE supports it too.
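Since you are already adding custom headers via web.config in IIS, these fallback headers can simply sit next to your CSP in the customHeaders section. A rough sketch; the header values below are only placeholders, keep your own policy:

<system.webServer>
  <httpProtocol>
    <customHeaders>
      <add name="Content-Security-Policy" value="default-src 'self'" />
      <add name="X-XSS-Protection" value="1; mode=block" />
      <add name="X-Frame-Options" value="SAMEORIGIN" />
    </customHeaders>
  </httpProtocol>
</system.webServer>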
But if your application works with sensitive/financial data, perhaps the best solution is to prohibit the use of outdated browsers, or to warn visitors that they are using an outdated browser and do so at their own risk.
I have an application that has a VueJS-based front end and a NodeJS-based backend API. The client side is a SPA and it communicates with the API to get data. Now a security scan has flagged that the app doesn't send a Permissions-Policy HTTP header, and I would like to add it. I'm not sure whether there is an option I can add in VueJS, and I'm confused about whether this is something that needs to be added from the front end. From the Node app it is possible to set the header, but here the pages are not generated server-side. It would be helpful if someone could let me know how I can add these headers to the app.
Technically you can publish the Permissions-Policy header when you send the SPA's initial HTML (you have to use some package or your Node.js server's own facilities to publish the response header). All the more so since scanners do not execute AJAX and will never see the inner pages of your SPA anyway.
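A minimal sketch, assuming the built SPA (e.g. the Vue dist folder) is served by an Express app; the package, paths and the policy value are only examples:

const express = require('express');
const path = require('path');

const app = express();

// Attach the header to every response, including the initial index.html
app.use((req, res, next) => {
  res.setHeader('Permissions-Policy', 'geolocation=(), camera=(), microphone=()');
  next();
});

// Serve the SPA's static build
app.use(express.static(path.join(__dirname, 'dist')));
app.listen(3000);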
But it is doubtful whether this is worth doing at all.
Permissions Policy is the new name for Feature Policy; below I use the term Feature Policy, but everything also applies to Permissions Policy.
Browsers support Feature Policy poorly and do not support Permissions Policy yet. Only Chrome supports the interest-cohort directive, and you have to set specific flags to enable Permissions Policy support. The Feature Policy / Permissions Policy spec is still under development.
Feature Policy is rarely published via an HTTP header, because it is intended to restrict the capabilities of nested browsing contexts (iframes), not the main page itself. Therefore it is mostly published via the <iframe allow="..."> attribute of each embedded third-party iframe.
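For example, restricting what a single embedded third-party frame may use looks roughly like this (the URL and the feature list are purely illustrative):

<iframe src="https://third-party.example/widget"
        allow="geolocation 'none'; camera 'none'; microphone 'none'"></iframe>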
But the scanners are not aware of this and do not check the allow= attribute.
Scanners don't know much about real security; they are more focused on visual baubles like an A+ grade and green/red labels. Therefore scanners:
do not recognize a Content Security Policy delivered in a meta tag, only in the HTTP header;
require the X-Frame-Options header for any web page despite the presence of CSP's frame-ancestors directive, ignoring the fact that some sites are intended to be embedded (widgets, YouTube/Vimeo video, etc.);
require a Feature Policy / Permissions Policy header even though these are poorly supported or are published another way.
Mostly, scanner results have little to do with real security; it is all about how to get an A+ grade, nothing else (see the related thread "headers manipulation to get Grade A+").
Of course, scanners can draw your attention to some overlooked headers, but the final decision about which headers your web app needs to publish is up to you.
I'm using a Google API (e.g., for Maps Embed) with a key that is restricted via a list of HTTP referrers. In this case, the map is embedded in my.site.com, so within the Google API -> Credentials page I allow access for the referrer *.site.com/*. When I visit my.site.com from most browsers, Google Maps displays correctly, as the browser sets the referrer field to my.site.com. When using the Brave browser, however, it sets the referrer field to the origin, and Google displays an error:
Request received from IP address 98.229.177.122, with referrer: https://www.google.com/
Of course I could add google.com to the list of allowed referrers, but that defeats the purpose of limiting the use of the API key to my own website - anyone could "borrow" the API key, add it to their site for the same API, and anyone using Brave would be able to access the feature. Now that each access costs $, I'd rather not do this. Any ideas for a work-around?
Note: @geocodezip - thanks for the reference. Indeed, I forgot to add that when I set the site-specific Shields setting to "All cookies allowed", or even completely turn Shields off for the site, the behavior is still the same (error). However, when I set the cookies field to "All cookies allowed" in the default Shields settings, then it works as intended (maps are displayed), even though the default settings section states:
These are the default Shields settings. They apply to all websites unless you change something in the Shields panel on a particular site. Changing these won't affect your existing per-site settings.
which I interpret to mean that the site-specific settings take precedence over the defaults.
So I'm thinking this (the site-specific cookies setting not overriding the default) is a Brave bug, though that is a bit separate from my initial hope for a different approach that doesn't require manual intervention on the user's part.
We have recently fixed a nagging error on our website similar to the one described in How to stop javascript injection from vodafone proxy? - basically, the Vodafone mobile network was vandalizing our pages in transit, making edits to the JavaScript which broke viewmodels.
Adding a "Cache-Control: no-transform" header to the page that was experiencing the problem fixed it, which is great.
However, we are concerned that as we do more client-side development using JavaScript MVP techniques, we may see it again.
Is there any reason not to add this header to every page served up by our site?
Are there any useful transformations that this will prevent? Or is it basically just similar examples of carriers making ham-fisted attempts to minify things and potentially breaking them in the process?
The reasons not to add this header are speed and data transfer.
Some proxy / CDN services re-encode media, so if your client is behind such a proxy, or you are using a CDN service, the client may get higher speed and use less data. This header orders the proxy / CDN not to re-encode the media and to leave the data as is.
So if you don't care about this, or your app doesn't serve many files like images or music, or you don't want any transformation of your traffic at all, there is no reason not to do this (on the contrary, it is recommended).
See the RFC here: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.5
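For reference, no-transform is just one directive in the comma-separated Cache-Control value, so it can sit alongside whatever caching directives you already send, for example:

Cache-Control: no-transform, public, max-age=3600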
Google has recently introduced the googleweblight service, so if your pages carry the "Cache-Control: no-transform" header directive, you'll be opting out of having your pages transcoded when the connection comes from a mobile device with a slow internet connection.
More info here:
https://support.google.com/webmasters/answer/6211428?hl=en
I want to use an application that checks for broken links, and I understand Xenu is one such tool. I do not have access to the internal aspx/http files on a drive. The problem I am facing is that the website requires the user to be authenticated; after login I need to crawl the site to determine which links are broken.
As an example, take mail.google.com: we type the username and password, after which we are served different URLs. If I give Xenu (or a similar program) a link such as mail.google.com, it will not be able to fetch URLs inside mail.google.com, which are of the form /mail/u/0/?shva=1#inbox/ etc. There lies the problem.
With minimal scripting, how can I give Xenu (or another similar app) the ability to log in via the external URL (mail.google.com in this example) so it can do whatever Xenu has to do?
Thanks
Balaji S
Xenu can be used with an authenticated user as long as the cookies are persistent. You will need to enable cookies in Xenu and log in once yourself using IE.
From their FAQ:
By default, cookies are disabled, and Xenu rejects all cookies. If you need cookies because
you have used Internet Explorer to authenticate yourself before starting a run
to prevent the server from delivering URLs with a session ID
then you can enable the cookies in the advanced options dialog. (This has been available since Version 1.2g)
Warning: You should not use this option if you have links that delete data, e.g. a database or a shop - you are risking data loss!!!
You can enable cookies in the Options menu. Click Preferences and switch to the Advanced tab.
For single-page applications (like Gmail) you will also need to configure Xenu to parse JavaScript.
This is done by modifying the ini file (traditionally at C:\Program Files (x86)\Xenu135\Xenu.ini) and adding a line under [Options]:
Javascript=[Jj]ava[Ss]cript: *[_a-zA-Z0-9]+ *\( *['"]((/|ftp://|https?://)[^'"]+)['"]
There are several variations provided in their FAQ, but I didn't get them to work perfectly.