Apache2 + CloudFront: Fonts not loading (CORS Problem) (2023 new Interface) - apache

Using CloudFront pointing to my Apache2 server is causing CORS errors for my fonts.
I don't understand why this problem affects only fonts.
This is the error message I see in my browser:
This is my current CloudFront behavior configuration. What do I have to change?
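For what it's worth, fonts are one of the few static resources browsers always fetch in CORS mode, which is why only the fonts break. A common starting point is to send an Access-Control-Allow-Origin header for font files on the Apache side and make sure CloudFront does not serve stale responses from before the change; the following .htaccess sketch assumes mod_headers is enabled and uses a wildcard origin you may want to tighten:
<IfModule mod_headers.c>
  <FilesMatch "\.(woff2?|ttf|otf|eot)$">
    # allow cross-origin font requests; replace * with your site's origin if needed
    Header set Access-Control-Allow-Origin "*"
  </FilesMatch>
</IfModule>
If the header then shows up when hitting Apache directly but not through CloudFront, the distribution is most likely still serving cached responses and needs an invalidation.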

Related

MIME type conflict with TYPO3 compressed CSS and JS resources

I am rather new to TYPO3. Recently I noticed some very weird behavior in my installation: some CSS files in the directory typo3temp/assets/compressed got the MIME type text/html instead of the expected text/css. As a result my browser received a 403 Forbidden status code from the webserver for these resources, and some parts of the backend were shown without styling.
I tried clearing all caches and deleting the typo3temp/assets/compressed directory, but now everything in there (CSS and JS) is served with the MIME type text/html. Since the backend now loads without JavaScript, I am basically locked out of it. I can, however, still reach and use the Install Tool.
Do you have any ideas how this might happen and how to fix it?
Some details of my setup:
TYPO3 v10.4.13 (recently updated from 10.4.9)
Apache web server (I don't have access to its config and have to rely on .htaccess files)
I suggest setting
TYPO3_CONF_VARS/FE/compressionLevel=0
TYPO3_CONF_VARS/BE/compressionLevel=0
in order not to have this kind of problem. The issue is that this compression creates pre-compressed files but relies on webserver configuration both to deliver them as text/css and to not apply the webserver's default transport compression on top of them (otherwise they can end up double-compressed and you might not even notice right away - some browsers can deal with that, others cannot).
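To illustrate the kind of webserver configuration this feature relies on, here is a rough .htaccess sketch for delivering pre-compressed assets with the right Content-Type while keeping mod_deflate from compressing them a second time. The file suffix used under typo3temp/assets/compressed depends on the TYPO3 version, so treat the patterns below purely as an example:
<IfModule mod_headers.c>
  <FilesMatch "\.css\.gz(ip)?$">
    ForceType text/css
    Header set Content-Encoding gzip
    # the no-gzip variable tells mod_deflate to leave these files alone
    SetEnv no-gzip 1
  </FilesMatch>
  <FilesMatch "\.js\.gz(ip)?$">
    ForceType application/javascript
    Header set Content-Encoding gzip
    SetEnv no-gzip 1
  </FilesMatch>
</IfModule>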
It is a kind of micro-optimization that sounded useful in times when we avoided https:// because of the processing overhead...
Here's some docs (the first statement is outdated in my opinion): https://docs.typo3.org/m/typo3/reference-skinning/master/en-us/BackendCssApi/CssCompression/Index.html

IIS 7.5 fails LetsEncrypt HTTP validation due to not serving extension-less files

I have been struggling for a few days with generating a LetsEncrypt SAN SSL certificate for SBS 2011. Everything goes fine until the ACME challenge verification.
I cannot use DNS verification, because DNS is hosted at the ISP and it takes days for any change to go live. So only HTTP validation can be used.
Where does IIS get stuck?
Simply put: when IIS tries to serve the extension-less ACME validation file, it returns a 404 error. The file is there - the ACME client generates it just fine in the proper folder - but it does not show up via the web browser, just a 404 error due to the missing MIME type. When testing with a test.html file in the same folder, it gets displayed properly, no problem.
I've already tried:
Adding the MIME type text/plain for the "." and ".*" extensions, but no go
Moving the StaticFile mapping above the ExtensionLessUrlHandlers, but still no go
Editing the applicationhost.config file and setting <section name="handlers" overrideModeDefault="Allow" />
Restarting IIS and the whole server, still to no avail
Using different LE clients, but all of them use IIS and get stuck at the same point
The solution from here does NOT work: IIS: How to serve a file without extension?
When I try locally, I always get this 404 error in the browser:
HTTP Error 404.0 - Not Found
The resource you are looking for has been removed, had its name changed, or is temporarily unavailable.
Module: IIS Web Core
Notification: MapRequestHandler
Handler: StaticFile
Error Code: 0x80070002
Any more ideas?
Sorry, folks! It was my bad for not being careful enough when passing details to you.
The solution of adding "." as MIME type "text/plain" is the only thing needed in my original case.
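For anyone hitting the same wall, that fix can be scoped to just the challenge directory instead of the whole site; a minimal web.config sketch to drop into .well-known/acme-challenge (assuming your ACME client uses that standard path):
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <staticContent>
      <!-- serve the extension-less challenge files as plain text -->
      <mimeMap fileExtension="." mimeType="text/plain" />
    </staticContent>
  </system.webServer>
</configuration>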
What was wrong in my case was the "autodiscover" sub-domain, which I still do not know where it is being served from, but it is definitely NOT from the "Autodiscover" application under the Default Web Site. As of now, when I browse the "autodiscover.domain.com..." link I still get the cached test.html content, even though I have deleted all the test.html files I planted there.
OK, but that's not the subject here.
BTW, the LE test also failed on my firewall's country-blocking rules. Oh my...
Thank you for participating.

Nginx is Slower than Apache downloading main.bundle.js

I have an Angular2 app that I've been developing for a bit now. Locally I run an Nginx server, but the deployment server is using Apache. To unify things I moved the deployment server to Nginx, but I am getting extremely slow results with it.
Apache loads in ~5 seconds (1.1MB transferred)
Nginx loads in 16-20 seconds (5MB transferred)
These are both on the same server pointing to the exact same directory. The actual size of main.bundle.js is 4470365 bytes, so it seems Nginx is transferring the entire file.
How is Apache able to download only 737K?
You can check which features are enabled for the file on both nginx and Apache by clicking on the exact file in the browser's Network tab (Inspect Element), then going to Headers and looking at the Response Headers.
Check whether gzip compression is enabled on only one of the two servers. That is the most likely reason for the smaller transfer size.
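If gzip does turn out to be the difference, enabling it in nginx is only a few directives in the http or server block; the values below are just reasonable defaults to tune:
gzip on;
gzip_comp_level 5;
gzip_min_length 1024;
gzip_types text/css application/javascript application/json image/svg+xml;
# text/html is compressed by default once gzip is on, so it does not need to be listed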

HAProxy + mod_pagespeed

I currently have a 3-web-server setup with HAProxy dividing the traffic between the servers. Each server is running apache2 with mod_pagespeed. HAProxy takes care of the SSL termination, as the web servers are on the local network.
HAProxy sets the X-Forwarded-Proto header on each request and I have enabled "ModPagespeedRespectXForwardedProto on" in each pagespeed configuration.
The Apache services are running on the custom port 8012, and now I am getting an error in the JavaScript console from pagespeed when going to the site:
Mixed Content: The page at 'https://www.example.com/' was loaded over HTTPS, but requested an insecure script 'http://www.example.com:8012/_,Mjo.NZsywmsdso.js.pagespeed.jm.OLNkjPSHpv.js'. This request has been blocked; the content must be served over HTTPS.
Any idea what could still be wrong? Here is the pagespeed HTTPS configuration:
ModPagespeedFetchFromModSpdy on
ModPagespeedFetchHttps enable
ModPagespeedSslCertDirectory /etc/ssl/certs
ModPagespeedSslCertFile /etc/ssl/certs/cert.pem
ModPagespeedMapOriginDomain "http://www.example.com" "https://www.example.com"
ModPagespeedRespectXForwardedProto on
Any help is appreciated!
This question is old, but I am going to describe how I fixed it in my own setup.
The issue arises when you run pagespeed on each web server rather than somehow running it on the HAProxy box itself for caching. Pagespeed saves a copy of every file it optimizes under a rewritten filename and then changes the HTML source to reference that new filename, which on its own works fine. But if pagespeed on web server 1 rewrites the HTML to match the files it compressed in the background (images, JS, CSS, etc.), the user's browser will request those files on later connections that round-robin to web server 2 or 3, where they do not exist. The way around it is to use a shared folder for the pagespeed cache, so that when one server compresses a file into that folder, the other web servers see it through their own pagespeed.
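Concretely, the shared folder is just mod_pagespeed's file cache pointed at storage all of the backends can reach. A sketch, where /mnt/pagespeed-cache is a made-up path standing in for an NFS or similar shared mount:
# same value in the pagespeed configuration on all three Apache servers
ModPagespeedFileCachePath "/mnt/pagespeed-cache"
# alternatively, mod_pagespeed can share a cache via memcached:
# ModPagespeedMemcachedServers memcached-host:11211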

Apache is incorrectly converting JSP pages to "text/plain"

I've got a fairly normal setup in which Apache proxies requests to a servlet running inside Tomcat over the AJP protocol.
We've run this setup on Apache 2.0.46/Tomcat 5.0.28 for ages without problems but have recently updated to Apache 2.2.3/Tomcat 5.5.
The problem is that we've noticed that intermittently (maybe one time in 3) Apache will somehow convert the "Content-Type" HTTP header of a page served by the servlet from "text/html" to "text/plain", which results in the browser displaying the HTML source instead of rendering it.
Has anyone seen this sort of behavior before and know what might be the cause? I suspect we're doing something bad in our servlet code that the old version of Tomcat/Apache was more forgiving of.
Update: I have confirmed that it's Apache changing the headers. If I browse directly to Tomcat the problem doesn't occur.
Some webapps do not properly set the MIME types of the content they serve, but may still work properly when served standalone because client applications like browsers are able to interpret the type of the content. When served behind Apache, however, these apps will not behave correctly, because Apache will provide a default type of text/plain.
A solution is to add a DefaultType None line to the Apache virtual host for these web apps:
DefaultType None
http://httpd.apache.org/docs/2.2/mod/core.html#defaulttype
From my blog post:
http://patternbuffer.wordpress.com/2011/11/30/mime-type-issue-with-apache-mod_jk-and-mod_proxy-serving-plain-text/
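In context, that might look roughly like the following inside the virtual host that proxies to Tomcat; the AJP worker address is only an example:
<VirtualHost *:80>
    ServerName app.example.com
    # stop Apache from falling back to text/plain when the backend sends no Content-Type
    # (None is accepted from Apache 2.2.7 on; in 2.4 the directive is a no-op)
    DefaultType None
    ProxyPass / ajp://localhost:8009/
</VirtualHost>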
If you're seeing this problem intermittently, it's almost certain to be something in the servlet code rather than a misconfiguration of Tomcat or httpd. Do you have logging that you can turn on to print the contents of the HTTP headers?
To isolate the problem a bit further, you could also try bypassing httpd and going direct to the Tomcat URLs for your pages.
I haven't seen this particular behaviour before myself, so sorry I can't be more specific.
By intermittent, do you mean that some pages exhibit this behaviour and others don't, or that there are pages that sometimes exhibit the behaviour and sometimes not?
Can you attach any logging to the AJP layer to log HTTP headers at that level, so you can verify whether it's Apache or Tomcat adding the bogus header?
Are you proxying back to a cluster? Maybe one of the servers is configured wrong.
OK, I figured it out; it was a bug in the servlet code:
We were doing something like this to write serialized Java objects as the result of HTTP requests:
DeflaterOutputStream dos = new DeflaterOutputStream(response.getOutputStream());
ObjectOutputStream oos = new ObjectOutputStream(dos);
response.setContentType("application/x-java-serialized-object");
oos.writeObject(someObject);
What seemed to be happening was that the DeflaterOutputStream and ObjectOutputStream would get garbage-collected three or four requests later, while still attached to the response object's output stream, and this would cause something to happen on that stream that confused Apache and made it rewrite the headers.
I replaced the above with:
// buffer the compressed, serialized payload in memory first
ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
DeflaterOutputStream dos = new DeflaterOutputStream(byteStream);
ObjectOutputStream oos = new ObjectOutputStream(dos);
response.setContentType("application/x-java-serialized-object");
oos.writeObject(someObject);
oos.flush();
dos.finish(); // finish the deflate stream so the compressed data is complete
byteStream.writeTo(response.getOutputStream());
and the problem has gone away.
The following links seem to describe a similar problem:
AJP Flush Packet causing text/plain
ASF Bugzilla – Bug 43478
I was also facing the same issue, and it got resolved.
If the problem occurs in only one folder, then there is some servlet intercepting the request/response and making a customized request/response to Tomcat.
Tomcat 7.0.x