I am behind on CSP. This morning, all sites on one of my servers stopped working in Safari with the following error:
[Error] Refused to load the script
'http://code.jquery.com/jquery-1.9.1.js' because it violates the
following Content Security Policy directive: "default-src 'self'".
Note that 'script-src' was not explicitly set, so 'default-src' is
used as a fallback.
How can I fix this server-wide without having to change each site one by one?
As mentioned, I am a bit behind on CSP, so I don't even know where to put the rules.
For future reference, here's what I had done 'incorrectly'.
In the file /usr/local/apache/conf/includes/pre_main_global.conf,
I placed a bunch of default headers to secure the server, including:
Header set X-WebKit-CSP "default-src 'self'"
This caused Safari to refuse any script not hosted on the same origin as the page.
The confusion arose because nobody noticed the problem until a week after the fact.
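For anyone in the same spot, here is a sketch of a less destructive global policy in that same include file; code.jquery.com comes straight from the error message above, and each site would need its own extra sources added:
# Keep default-src locked down but explicitly allow the jQuery CDN for scripts.
# X-WebKit-CSP was the experimental prefix of the era; modern browsers
# use the unprefixed Content-Security-Policy header, so set both.
Header set X-WebKit-CSP "default-src 'self'; script-src 'self' http://code.jquery.com"
Header set Content-Security-Policy "default-src 'self'; script-src 'self' http://code.jquery.com"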
Related
I've discovered an interesting issue when attempting to use a Content Security Policy to secure my site and a ServiceWorker to speed it up and let it run offline.
It's a standard WordPress site, and plugin developers have a naughty habit of using external resources, particularly in the /wp-admin/ section. I don't want to whitelist a ton of stuff on the main site (particularly 'unsafe-eval', a frequent culprit in the admin section), so what I did was set a main CSP, then in /wp-admin/ unset it and set a less restrictive one.
Here's a sample of the code I'm using to unset the CSP when you're in the admin area of the site:
<Location /wp-admin/>
    <IfModule mod_headers.c>
        Header always unset Content-Security-Policy
        Header unset Content-Security-Policy
        Header set Content-Security-Policy "default-src 'self' ps.w.org;"
    </IfModule>
</Location>
And it works fine unless you've been to (or have another tab open to) the main area of the site, at which point the ps.w.org directive (and others) are ignored. A bunch of assets end up blocked, scripts don't work, etc. Refreshing the page while in the admin section temporarily loads the correct CSP, so I know it's being used; it's just being overwritten by the main one. Sometimes the same happens to the main site, loading the admin CSP too.
Is the ServiceWorker caching the CSP, or what exactly is going on here? Is there some way to get the ServiceWorker to respect the CSP that each page should be sending in its headers? For now I've just merged the two CSP settings into one and removed my over-broad rules for the admin area, but it's not ideal.
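For reference, the merged policy looks roughly like this (a sketch; ps.w.org is the only external host from my admin block above, and my other directives are omitted):
# A single site-wide policy, so it no longer matters which page's
# response the ServiceWorker happened to cache.
<IfModule mod_headers.c>
    Header always set Content-Security-Policy "default-src 'self' ps.w.org"
</IfModule>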
I have an Apache server that I'm attempting to send requests to over HTTPS, but I've been struggling to get past cross-origin issues as well as SSL issues.
I'm not exactly sure where the problem lies, as I seem to be getting different responses back from the web consoles (testing with Firefox and Chrome) for the failed request. In Chrome, I simply see that the request sent as a POST is changed to OPTIONS, and the console notes that it failed without much else. In Firefox, I see the following two issues:
In the console, the request says it fails due to CORS:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://123.456.789.001. This can be fixed by moving the resource to the same domain or enabling CORS.
However, inspecting the failed request in the network tab shows the following issue about the certificate:
123.456.789.001 uses an invalid security certificate. The certificate is not trusted because it is self-signed. (Error code: sec_error_unknown_issuer)
After digging, I'm having trouble determining what is actually causing the request to fail: is it because my CORS rules are not set up properly? Or is it because I'm sending requests to a server that uses a self-signed certificate and is therefore not trusted by my request/browser?
I believe CORS is set up properly on my end; here are the contents of the files I'm using to enable CORS:
crossdomain.xml:
<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
    <site-control permitted-cross-domain-policies="all"/>
    <allow-access-from domain="*" secure="false"/>
    <allow-http-request-headers-from domain="*" headers="*" secure="false"/>
</cross-domain-policy>
.htaccess:
# Always set these headers.
Header always set Access-Control-Allow-Origin "*"
Header always set Access-Control-Allow-Methods "POST, GET, OPTIONS, DELETE, PUT"
Header always set Access-Control-Max-Age "1000"
Header always set Access-Control-Allow-Headers "x-requested-with, Content-Type, origin, authorization, accept, client-security-token, Access-Control-Allow-Origin, X-Frame-Options"
# Added a rewrite to respond with a 200 SUCCESS on every OPTIONS request.
RewriteEngine On
RewriteCond %{REQUEST_METHOD} OPTIONS
RewriteRule ^(.*)$ $1 [R=200,L]
Obviously these settings aren't great for production, but after spending hours trying to pinpoint the issue, I went with some of the least restrictive CORS examples I could find, hoping I'd see the requests go through and could then go back and tighten them properly. However, I still see the cross-origin errors in the console with these changes uploaded to the Apache server (and the server restarted after the files changed).
So is there any way to tell whether CORS or the self-signed certificate is causing the issue? I didn't necessarily want to purchase an SSL certificate at this time since I'm still in development, and the site I'm using to host the content is forced to HTTPS, so I can't send the requests over plain HTTP.
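For reference, a sketch of the preflight exchange I believe Chrome is attempting (the path, origin, and header values here are made up):
OPTIONS /api/endpoint HTTP/1.1
Host: 123.456.789.001
Origin: https://my-dev-site.example
Access-Control-Request-Method: POST
Access-Control-Request-Headers: content-type
My understanding is that a failed TLS handshake (e.g. an untrusted self-signed certificate) aborts this exchange before any Access-Control-* headers can be read, so the console reports it as a CORS failure even when the certificate is the real blocker.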
There's probably an answer already on Stack Overflow that I'm missing; sorry in advance for that, I just can't find it.
I have a small TCP server running on my localhost that, for security reasons, will not support CORS.
My question is: if CORS is for cross-domain protection, why is it required when a page on http://localhost/ requests a connection to http://localhost:xxxx?
I know I can turn off the security in my browser, but I'm trying to understand why localhost-to-localhost connections are being treated as cross-origin.
XMLHttpRequest cannot load http://localhost:8000/. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:63342' is therefore not allowed access. The response had HTTP status code 500.
Because localhost (port 80) is a different origin than localhost:8000.
See RFC 6454, Section 5:
If the two origins are scheme/host/port triples, the two origins
are the same if, and only if, they have identical schemes, hosts,
and ports.
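So http://localhost:63342 and http://localhost:8000 are different origins purely because of the port. If the server on :8000 can be modified, the minimal response header that would let the page read the response is the following (the origin value is taken from the error message above; a sketch, since a bare TCP server may not let you add headers at all):
Access-Control-Allow-Origin: http://localhost:63342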
Same-origin Policy
The same-origin policy permits scripts running in a browser to make requests only to pages on the same origin. This means that requests must have the same URI scheme, hostname, and port number. This post on the Mozilla Developer Network clearly defines what an origin is and when requests fail. If you send a request from http://www.example.com/, the following types of requests result in failure.
https://www.example.com/ – Different protocol (or URI scheme).
http://www.example.com:8080/myUrl – Different port (since HTTP requests run on port 80 by default).
http://www.myotherexample.com/ – Different domain.
http://example.com/ – Treated as a different domain, since an exact host match is required (notice there is no www.).
I would appreciate some help understanding what is going on: both Firefox and Chrome are failing to load my non-SSL website, say subdomain.example.com, with the following SSL errors (both on Ubuntu 14.04 i386):
FF30: ssl_error_rx_record_too_long
Chrome 35: ERR_SSL_PROTOCOL_ERROR
This started to occur after I set (and followed) a 302 redirect to SSL on the parent domain, say http://example.com to https://example.com. Things get back to normal after a full cache clean in the browser, but as soon as I access the parent domain again, the problem on the subdomain returns.
I have never entered the subdomain URL with the "https://" scheme prefix. I don't usually type any prefix, and it happens even if I explicitly prefix with "http://". And it is not only in the address bar; the same happens for links.
I am very confident that there is nothing wrong with the non-SSL site on the subdomain.
I thought about filing a bug report, but it is unlikely this is a bug in both browsers; more likely I am missing something.
Is there any rule that if a website on a given domain supports SSL (or redirects HTTP to HTTPS), then sites on its subdomains are assumed to do so as well?
I later found the cause of the SSL errors, but the problem still persists (now the message is "connection refused"):
Apache web server was configured to listen on both ports 80 and 443, but with no "SSLEngine on" clause. This effectively makes it serve plain HTTP on port 443.
It is worth mentioning that this Apache configuration mistake is not that hard to fall into. In the default Ubuntu configuration (possibly the same for Debian), it is just a matter of enabling/loading the SSL module without providing a site configuration that uses SSL.
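For comparison, a minimal sketch of a virtual host that actually serves HTTPS (the certificate paths are the Ubuntu "snakeoil" placeholders installed by the ssl-cert package; substitute real ones):
<VirtualHost *:443>
    # Without "SSLEngine on", Apache answers port 443 in plain HTTP,
    # which is exactly what produced ssl_error_rx_record_too_long.
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
    SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
</VirtualHost>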
I have just found the cause. The SSL site on the parent domain is sending the following HSTS (Strict-Transport-Security) response header:
Strict-Transport-Security: max-age=31536000 ; includeSubDomains
That triggers the browser behavior by spec: includeSubDomains instructs the browser to access the domain and all of its subdomains only over HTTPS for the given max-age, regardless of the scheme typed or linked.
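If the subdomains are not all ready for HTTPS, the fix on the parent domain is to drop the includeSubDomains token; a sketch of the Apache directive (same max-age as the header above):
Header always set Strict-Transport-Security "max-age=31536000"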
I configured Apache Jackrabbit 2.6.3 to use WebDAV in an anonymous mode (empty credentials are mapped to anonymous:anonymous).
If I click on a direct link to some file (e.g. a JPG or DOC), an HTTP 403 error is returned by the GlassFish server. If I press F5, the 403 is still there.
BUT if I simply press Enter in the address bar of my browser on the same URL, everything is OK and the resource is accessible.
I think the only difference is the Referer HTTP header.
I searched for any information about a similar problem, but I couldn't find anything.
Does anybody have an idea how to force WebDAV (or Jackrabbit) to serve files in anonymous mode regardless of the referrer (or whatever else the reason is)?
I found a solution.
In the web.xml file, in the WebDAV section, the following part must be uncommented:
<init-param>
    <param-name>csrf-protection</param-name>
    <param-value>disabled</param-value>
</init-param>
with disabled as the param-value.
As the description says:
Defines the behaviour of the referrer based CSRF protection
1) If omitted or left empty the (default) behaviour is to allow only requests with
an empty referrer header or a referrer host equal to the server host
2) May also contain a comma separated list of additional allowed referrer hosts
3) If set to 'disabled' no referrer checking will be performed at all
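For the record, per option 2 above, a gentler alternative to disabling the check entirely is to keep the protection on and whitelist the referrer hosts that links arrive from (the hostnames here are hypothetical):
<init-param>
    <param-name>csrf-protection</param-name>
    <param-value>www.example.com,intranet.example.com</param-value>
</init-param>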