I am trying to get a simple Apache stack running and came across something I have not seen before. This is an AWS instance running the Bitnami LAMP stack. If I create a minimal HTML file such as:
<h1>Something Here</h1>
Apache prepends a <head/> tag to the response, e.g.
<head/><h1>Something Here</h1>
I am serving an Angular 2 app from this stack, and loading of the component templates fails because they are seen as malformed. Does anyone know what Apache setting or module might be doing this?
Thanks
PageSpeed is what is adding that <head/>; it is enabled by default on the Bitnami LAMP stack.
This is added by mod_pagespeed's default add_head filter. You can disable it by adding the line below to /opt/bitnami/apache2/conf/pagespeed.conf:
ModPagespeedDisableFilters add_head
However, note that this filter is required by many other filters, which only write their contents inside that element.
You can also disable PageSpeed entirely, as explained in the guide below, to check that the header disappears:
https://docs.bitnami.com/aws/infrastructure/lamp/#how-to-disable-the-cache-in-the-server
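Alternatively, if you only want to rule PageSpeed out quickly, turning the whole module off in /opt/bitnami/apache2/conf/pagespeed.conf should work as well (a sketch; the restart command assumes a standard Bitnami layout):
ModPagespeed off
Then restart Apache, e.g. with sudo /opt/bitnami/ctlscript.sh restart apache.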
I have some React code (written by someone else) that needs to be served. The preferred method is via a Google Storage bucket fronted by their Cloud CDN, and this works. However, due to some quirks in the code, there is a requirement to override 404s with 200s and serve the homepage content instead (i.e. if a request would produce a 404, don't serve the 404; serve the content of the homepage with a 200 status instead).
(If anyone is interested, this override is currently implemented in CloudFront on AWS. Google Cloud CDN does not provide this functionality yet.)
So, if the code is served at "www.mysite.com/app/" and someone hits "www.mysite.com/app/not-here" (which would return a 404), the response should NOT be a 404 but a 200, with the content served from index.html instead.
I was able to get this working by bundling all the code inside a Docker container and then using the solution here. However, this setup means that every code change requires restarting all the running containers, and the customer expects zero downtime; hence the bucket solution.
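For context, that local-file setup typically comes down to a one-line httpd directive; this is a sketch and may not match the linked solution exactly:
FallbackResource /index.html
With mod_dir's FallbackResource, any path that does not map to an existing file is served from index.html with a 200 status, which is exactly what stops working once the files are no longer on disk.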
So I now need to do the same thing but with the files being proxied in (with the upstream being the CDN).
I cannot use the original solution since the files are no longer local, and httpd cannot check for the existence of something that is not local.
I've tried things like ProxyErrorOverride and ErrorDocument, and managed to get it to redirect, but that is not what is needed.
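For reference, the ProxyErrorOverride/ErrorDocument attempt described above would look roughly like this (cdn.example.com is a placeholder for the real CDN origin):
ProxyPass / https://cdn.example.com/
ProxyPassReverse / https://cdn.example.com/
ProxyErrorOverride On
ErrorDocument 404 https://cdn.example.com/index.html
Apache treats a full URL in ErrorDocument as a client redirect, while a local path keeps the original 404 status, so neither variant yields a 200 with the homepage body.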
Does anyone know how/if this can be done?
If the question is "how do I catch the 404 error that Cloud Storage returns for a missing file, using httpd/Apache?", I don't know.
However, I don't think that is the best solution anyway. Serving files directly from Cloud Storage is convenient but not industrial-grade.
Imagine you deploy several broken files in succession: how do you roll back to a stable state?
It is best to package each code release as an atomic unit, for instance a container. Each version lives in its own container, so a rollback is easier and consistent.
Now, your "container restart" issue. I don't know which platform you run your containers on; if it is a Compute Engine VM, that is probably the worst option. Today there are container orchestration systems that let you deploy, scale containers up and down, and perform progressive rollouts, replacing the running containers with a newer version without downtime.
Cloud Run is a wonderful serverless solution for that; you also have Kubernetes (GKE on Google Cloud), which you can use with Knative for a better developer experience.
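As a sketch, a zero-downtime rollout on Cloud Run is a single command; the service name, image, and region below are placeholders:
gcloud run deploy my-app --image gcr.io/my-project/my-app:v2 --region us-central1
Cloud Run only shifts traffic to the new revision once it is ready, so existing requests keep being served by the old revision in the meantime.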
I'm trying to set up a Shipyard server (controller) at work, but I've run into an issue. The server is up and running, which I can confirm with curl just fine. We've configured Apache httpd to do forwarding, as we intend for the machine running Shipyard not to be directly accessible. So basically we set up a rule for Apache that maps incoming requests to /shipyard onto :8080/, which is where Shipyard is served from. The problem is that I need a way to tell Shipyard to remap "/" to "/shipyard". When I go to the Shipyard homepage, nothing on the page loads correctly. For example, Shipyard tries to load some JS files:
/app/images/images.module.js
But to work with our forwarding, it needs to try to load:
/shipyard/app/images/images.module.js
With the kinds of servers I'm used to working with, this would normally be done by specifying a "context" or "base path" in the server config. I'm wondering how to do something similar for Shipyard?
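For reference, the forwarding rule described above is roughly the following (backend host and port assumed from the description):
ProxyPass /shipyard/ http://localhost:8080/
ProxyPassReverse /shipyard/ http://localhost:8080/
Note that ProxyPassReverse only rewrites HTTP headers such as Location, not the absolute /app/... paths inside the HTML, which is why the assets fail to load unless the app itself supports a base path.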
It turns out there is already a GitHub issue for this exact scenario:
https://github.com/shipyard/shipyard/issues/972
I am using the Bitnami Magento 1.9 Stack and want to disable every kind of cache. I made these attempts:
I disabled Magento caching (as described here)
I disabled Apache2 caching (as described here) for (html|htm|js|css|phtml|php) and checked the HTTP headers as described here; a sketch of this kind of configuration follows this list
I disabled browser caching with Clear Cache
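For reference, the Apache side of step 2 usually looks something like the following with mod_headers enabled; this is a sketch and may not match the linked guide exactly:
<FilesMatch "\.(html|htm|js|css|phtml|php)$">
    Header set Cache-Control "no-cache, no-store, must-revalidate"
    Header set Pragma "no-cache"
    Header set Expires "0"
</FilesMatch>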
Unfortunately, my phtml files are still being cached. The CSS and JS files, thankfully, are no longer affected.
Is there any further cache that I have forgotten to disable?
I found one last cache which I had not yet disabled:
OPcache
It comes from Zend (the Zend OPcache extension).
To disable this cache, open the file php.ini (in my case C:\Bitnami\magento-1.9.2.4-2\php\php.ini) and set
opcache.enable=0
(solution from Bitnami Wiki)
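A slightly fuller sketch of the relevant php.ini entries (opcache.enable_cli only affects command-line scripts, and Apache needs a restart for the change to take effect):
; disable the Zend OPcache for web requests and for CLI scripts
opcache.enable=0
opcache.enable_cli=0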
I am doing some reverse engineering on a website.
We are using a LAMP stack under CentOS 5, without any commercial or open-source framework (Symfony, Laravel, etc.), just plain PHP with an in-house framework.
I wonder if there is any way to know which files in the server have been used to produce a request.
For example, let's say I am requesting http://myserver.com/index.php.
Let's assume that 'index.php' calls other PHP scripts (e.g. to connect to the database and retrieve some info), includes a couple of other HTML files, and so on.
How can I get the list of those accessed files?
I already tried enabling the server-status directive in Apache, and although it works, it doesn't give me what I want (I also passed the 'refresh' parameter).
I also ran lsof -c httpd, as suggested in other forums, but it produces a very large output and I can't find what I'm looking for.
I also read the Apache logs, but they only show the requests the server handled.
Some other users suggested adding PHP directives like 'self', but that means I would need to know beforehand which files to modify to include the directive (which I don't), and that is precisely what I am trying to find out.
Is it actually possible to trace the internal activity of the server and get those file names and locations?
Regards.
Not that I have tried this, but it looks like mod_log_config is the answer to my own question.
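If it pans out, a minimal sketch might be the following (untested); note that %f logs the file httpd mapped the request to, not the files PHP goes on to include internally:
# log remote host, time, request line, final status, and the resolved filename
LogFormat "%h %t \"%r\" %>s %f" whichfile
CustomLog logs/whichfile_log whichfile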
We developed a web application which uses OpenCMIS, and a Windows client which uses DotCMIS. The web application runs behind an Apache httpd.
We are facing the following problem:
Small files (< 1.5 gigabytes) can be uploaded by the client without problems.
However, if we try to upload larger files, we get a "Proxy Error". The stack trace gives no further information.
We also tried uploading via the CMIS Workbench, with the same result...
Are there any Apache configuration parameters we may have overlooked? Or do you think the problem lies elsewhere?
EDIT: I should mention that the file is nevertheless uploaded completely. Also: when we take Apache out of the picture and connect via HTTP instead of HTTPS, the upload works perfectly.
EDIT 2: We found a solution, although it does not seem to be a very good one... We set the following directives in httpd.conf:
Timeout 500 and ProxyTimeout 500 (the default is 60 for both).
This solved the problem. However, it would be nice to know why the problem occurs in the first place.
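For reference, the httpd.conf change is simply (values are in seconds):
Timeout 500
ProxyTimeout 500
A plausible explanation, though only a guess: for files this large the backend takes more than 60 seconds to respond after receiving the upload, so the proxy gives up waiting even though the transfer itself completed, which would match the file arriving intact despite the error.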
Regards