"cross domain javascript source file inclusion" reported by OWASP ZAP for trusted resource - http-headers

My host is www.example.com and I'm serving HTML pages at URLs like www.example.com/html-url.html.
In these HTML pages I include a JavaScript file from our S3 bucket, say bucket-name.
The OWASP ZAP scan raises a low-level alert for this - Cross-Domain JavaScript Source File Inclusion.
Evidence: <script src="bucket-name/custom.js" type="text/javascript"/>
Question: Is there any way I can configure a list of trusted resources/URLs? I'm using an NGINX server.
Please note that, for various reasons, I need to get a clean vulnerability scan report.
I understand that there is this HTTP header "Content Security Policy" and on the official site it says "Content Security Policy (CSP) is an HTTP header that allows site operators fine-grained control over where resources on their site can be loaded from."
But I'm not sure if I can make use of this.

You can delegate a custom domain name to the S3 bucket, for example the subdomain cdn.example.com, to avoid the Cross-Domain JavaScript Source File Inclusion alert.
I do not know what is "under the hood" of OWASP ZAP, or whether the alert above can be influenced by a CSP header, but CSP should definitely add some points to your security.
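If you do decide to try CSP, a minimal sketch of the NGINX side could look like this (cdn.example.com is just a placeholder for wherever the script is actually hosted, and the policy itself is only an example):
# inside the server { } block that serves www.example.com
add_header Content-Security-Policy "default-src 'self'; script-src 'self' https://cdn.example.com" always;
Whether this silences the ZAP alert is a separate question, but it does tell browsers which script origins you trust.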

A "clean vulnerability scan report" is desirable but I personally dont think it should be mandatory - web scanners report potential vulnerabilities so some may be false positives or even true positives that are not actually relevant for your situation.
In any case "bucket-name/custom.js" is not cross domain so this looks like a false positive. Have you updated ZAP? If so please raise this as an ZAP issue and we can fix it: https://github.com/zaproxy/zaproxy/issues

Related

Set Permissions-Policy for Vue and NodeJS app

I have an application with a VueJS-based front end and a NodeJS-based backend API. The client side is a SPA that communicates with the API to get data. Now a security scan mentioned that the app doesn't have a Permissions-Policy HTTP header, and I would like to add it. I'm not sure whether there is an option I can add in VueJS, and I'm confused about whether this is something that needs to be added from the front end. From the Node app it is possible to set the header, but here the pages are not generated server-side. It would be helpful if someone could let me know how I can add these headers to the app.
Technically you can publish the Permissions-Policy header when you send the SPA's initial HTML code (you have to use a package or a Node.js server facility to publish the response header). All the more so because scanners do not execute AJAX and will not see the pages of your SPA.
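For illustration only, the header itself is just a list of feature/allowlist pairs; the features chosen here are arbitrary:
Permissions-Policy: camera=(), microphone=(), geolocation=(self)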
But there are some doubts about whether it is worth doing at all.
Permissions Policy is the new name of Feature Policy; below I will use the term Feature Policy, but everything below also applies to Permissions Policy.
Browsers support Feature Policy poorly and do not support Permissions Policy. Only Chrome supports the interest-cohort directive, and you have to set specific flags to enable Permissions Policy support. The Feature Policy / Permissions Policy spec is still under development.
Feature Policy is rarely published via an HTTP header, because it is intended to restrict the capabilities of nested browsing contexts (iframes), and not the main page itself. Therefore it is mostly published via the <iframe allow="..."> attribute on each embedded third-party iframe.
But scanners are not aware of this and do not check the allow= attribute.
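For example (the embed URL here is hypothetical), a third-party player would be restricted like this rather than via a response header:
<iframe src="https://player.example.com/video/123" allow="fullscreen; picture-in-picture"></iframe>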
Scanners don't know much about real security; they are more focused on visual baubles like an A+ grade and green/red labels. Therefore scanners:
do not recognize Content Security Policy in a meta tag, only in the HTTP header;
require the X-Frame-Options header for any web page despite the presence of CSP's frame-ancestors directive, and ignore the fact that some sites are intended to be embedded (widgets, youtube/vimeo video, etc.);
require a Feature Policy / Permissions Policy header even though browser support is lacking or the policy is published another way.
Mostly scanner results have nothing to do with real security; it is all about how to get an A+ grade, nothing else (see the related thread "headers manipulation to get Grade A+").
Of course, scanners can draw your attention to some overlooked headers, but the final decision about which headers your web app needs to publish is up to you.

Is it possible to have GitHub Readme images follow redirects?

I'm trying to add a test coverage badge to the Readme of a private repository on GitHub. Our continuous integration process saves out the image to a secured Google Cloud Storage bucket that's not accessible to the public, and should remain that way.
Google's authorization layer is smart enough that if I go to the URL for the image, I'm automatically redirected to the resource with a valid auto-generated signed URL.
E.g., if I go to http://storage.cloud.google.com/secret-files/mysecretfile.png, then if I'm logged in and allowed to view it, I'm automatically redirected to something like https://blahblah-apidata.googleusercontent.com/download/storage/v1/b/secret-files/o/mysecretfile.png?key=verylongkey, where I can load the image.
This seemed perfect. Reference the canonical path in the GitHub Readme, authenticated users see the image, unauthenticated users are still blocked, we don't have to make the file public, and we don't have to do anything complicated.
Except that GitHub is proxying the image request, meaning that it will always be unauthenticated. My browser is loading something like https://camo.githubusercontent.com/mysecretimage.png.
Is there a clever way to work around this? Or do I need to go back to the drawing board?
All images on github.com are proxied using the Camo image proxy. There are a couple of reasons for this:
It preserves the privacy of users: a document can't track them by directing them to a different site or by using cookies.
It means images can be cached and served at an appropriate size.
GitHub can have a very strict content security policy that does not allow loading from untrusted sites, which means that any sort of accidental security problem (like an XSS) is a lot less likely to work.
Note the last part. Even if you found some sneaky way to get another image URL to render properly in the website, your browser wouldn't load it because it violates the Content-Security-Policy header the site sent, and moreover, your browser would tattle about that to the reporting URL that GitHub provided.
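As a simplified illustration (GitHub's actual policy is longer and more granular), an img-src directive that only permits images from the proxy host looks like:
Content-Security-Policy: default-src 'none'; img-src https://camo.githubusercontent.com
Any <img> pointing at another origin is simply not fetched by the browser.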
So any image URL you provide will need to be readable by GitHub's image proxy and it won't be possible to serve different content to different users.

Refused to load the script 'xyz.js' because it violates the following Content Security Policy directive with ChromeDriver Chrome headless Selenium

I am running Selenium with headless Chrome and getting the error "Refused to load the image, because it violates the following Content Security Policy directive". I am parsing a third-party URL. What change or option do I need to set in Selenium to remove the error?
Content Security Policy
Content Security Policy is a tool which developers can use to lock down their applications in various ways, mitigating the risk of content injection vulnerabilities such as cross-site scripting, and reducing the privilege with which their applications execute.
CSP is not intended as a first line of defense against content injection vulnerabilities. Instead, CSP is best used as defense-in-depth. It reduces the harm that a malicious injection can cause, but it is not a replacement for careful input validation and output encoding.
Content Security Policy Directives
As per Content Security Policy Directives, to mitigate the risk of cross-site scripting attacks, web developers should include directives that regulate the sources of scripts and plugins. They can do so by including:
Both the script-src and object-src directives, or
a default-src directive
In either case, developers should not include either 'unsafe-inline', or data: as valid sources in their policies. Both enable XSS attacks by allowing code to be included directly in the document itself; they are best avoided completely.
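For example (a made-up policy, not one taken from the question), a header that follows this guidance could look like:
Content-Security-Policy: default-src 'none'; script-src 'self' https://scripts.example.com; object-src 'none'
Here default-src 'none' blocks everything by default, and scripts are then explicitly allowed only from the page's own origin and one trusted host.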
Chrome Content Security Policy
In order to mitigate a large class of potential cross-site scripting issues, Chrome's extension system has incorporated the general concept of Content Security Policy (CSP). This introduces some fairly strict policies that will make extensions more secure by default, and provides you with the ability to create and enforce rules governing the types of content that can be loaded and executed by your extensions and applications.
In general, CSP works as a block/allowlisting mechanism for resources loaded or executed by your extensions. Defining a reasonable policy for your extension enables you to carefully consider the resources that your extension requires, and to ask the browser to ensure that those are the only resources your extension has access to. These policies provide security over and above the host permissions your extension requests; they're an additional layer of protection, not a replacement.
On the web, such a policy is defined via an HTTP header or meta element. Inside Chrome's extension system, neither is an appropriate mechanism. Instead, an extension's policy is defined via the extension's manifest.json file as follows:
{
  ...,
  "content_security_policy": "[POLICY STRING GOES HERE]",
  ...
}
Solution
Create a new Chrome Profile
Browse to the extension page for Disable Content-Security-Policy and Add to Chrome permanently.
Use the same customized Chrome Profile along with the extension while initiating your script:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# Point Chrome at the customized profile that already has the
# "Disable Content-Security-Policy" extension installed
options = Options()
options.add_argument("user-data-dir=C:\\Users\\user_name\\AppData\\Local\\Google\\Chrome\\User Data\\Profile 2")

# Start ChromeDriver with that profile and open a page
driver = webdriver.Chrome(executable_path=r'C:\path\to\chromedriver.exe', chrome_options=options)
driver.get("https://www.google.co.in")
tl;dr
An Introduction to Content Security Policy
Issue 546106: CSP: Chrome complains the style-src hashes mismatch even though it shows them to be the same

Safari 9 disallowed running of insecure content?

After upgrading to Safari 9 I'm getting this error in the browser:
[Warning] [blocked] The page at https://localhost:8443/login was not allowed to run insecure content from http://localhost:8080/assets/static/script.js.
Does anyone know how to enable the running of insecure content in the new Safari?
According to the Apple support forums Safari does not allow you to disable the block on mixed content.
Though this is frustrating for usability in legitimate cases like yours, it seems to be part of their effort to force secure content serving / content serving best practices.
As a solution, you can either upgrade the HTTP connection to HTTPS (which it seems you have done) or proxy your content through an HTTPS connection with an HTTPS-enabled service (or, in your case, port).
You can fix the HTTPS problem by using HTTPS locally with a self-signed SSL certificate. Heroku has a great how-to article about generating one.
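For reference, a typical OpenSSL command for generating such a certificate looks something like this (the file names and the localhost common name are just examples):
openssl req -x509 -newkey rsa:2048 -nodes -keyout localhost.key -out localhost.crt -days 365 -subj "/CN=localhost"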
After setting up SSL on all of your development servers, you will still get an error loading the resource in Safari, since an untrusted certificate is being used (self-signed SSL certificates are not trusted by browsers by default because they cannot be verified with a trusted authority). To fix this, load the problematic URL in a new tab in Safari and the browser will prompt you to allow access. If you click "Show Certificate" in the prompt, there is a checkbox in the certificate details view to "Always allow content from localhost". Checking this before allowing access will store the setting in Safari for the future. After allowing access, just reload the page originally exhibiting the problem and you should be good to go.
This is a valid use case as a developer but please make sure you fully understand the security implications and risks you are adding to your system by making this change!
If like me you have
frontend on port1
backend on port2
want to load script http://localhost:port1/app.js from http://localhost:port2/backendPage
I have found an easy workaround: simply redirect, with an HTTP response, all http://localhost:port2/localFrontend/*path requests to http://localhost:port1/*path from your backend server configuration.
Then you could load your script directly from http://localhost:port2/localFrontend/app.js instead of the direct frontend URL (or you could configure a base URL for all your resources).
This way, Safari will be able to load content from another domain/port without needing any https setup.
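A minimal sketch of such a redirect, assuming the backend sits behind NGINX (the frontend port 3000 and the /localFrontend/ prefix are just placeholders):
location /localFrontend/ {
    # issue a 302 so the browser fetches the asset from the frontend dev server
    rewrite ^/localFrontend/(.*)$ http://localhost:3000/$1 redirect;
}
The same idea can be expressed in whatever framework actually serves the backend port.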
For me, disabling the website tracking, i.e. unchecking "Prevent cross-site tracking", worked.

Images on SSL enabled site with Internet explorer

I have a problem with my site after implementing SSL: images do not appear. The scenario is that images come from images.domain.com (hosted on Amazon S3) and my certificate is for www.domain.com.
This problem only seems to happen in IE and not in any other browsers.
The issue is related to "mixed content" - HTTPS pages which have HTTP resources (images, scripts, etc) embedded.
The point of using HTTPS is to ensure that only the originating server and the client have access to the secured page. However, in theory it might be possible for this security to be compromised if HTTP resources are embedded - an attacker might intercept an unsecured JavaScript file and inject some code to alter the secured page on load.
Most browsers will indicate that a secure page has mixed content by altering the "secure lock" icon, either by showing the lock as open or broken, or by making the icon red (Chrome displayed a skull and crossbones for a short time, but they realised that this was a bit serious for the potential threat level).
Internet Explorer (depending on the version) will display a message either asking whether the insecure content should be shown (IE<=7), or whether only the secure content should be shown (IE>=8). It sounds like you have somehow disabled this message to always hide the insecure content, however that's not the default behaviour.
I think the best solution for you is to replace your S3 links with HTTPS versions.
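For example (hypothetical markup), changing
<img src="http://images.domain.com/logo.png">
to
<img src="https://images.domain.com/logo.png">
removes the mixed-content problem, provided the image host actually serves a valid certificate for that hostname (for S3 that may mean using the bucket's HTTPS endpoint or putting a CDN in front of it).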
I am not a web developer, but someone who often deals with the crap experience that is IE. I am not sure what version you are using, but you do not have a wildcard SSL cert (i.e. *.domain.com), so does it have something to do with an old-school limitation in 3rd party images?
See here for what I allude to above and a very good explanation of how IE caches cross-domain HTTPS content, specifically images. I am not sure what the solution is, but I was curious so I researched a little myself and this might help.