How to Enable CORS for Docker Apache httpd server?

I need to create an Apache server to host my files and fetch them via AJAX, so I'm using Docker to deploy the server.
My Docker image is httpd:2.4.
I deployed the server with the following command:
docker run -p 80:80 -dit --name file-server \
-v /sources/docker/apache-server/www/:/usr/local/apache2/htdocs/ httpd:2.4
But when I make the AJAX request, this is the result:
XMLHttpRequest cannot load http://server/kml/example.kml. No
'Access-Control-Allow-Origin' header is present on the requested
resource. Origin 'null' is therefore not allowed access.
So, I want to follow the steps in How to Enable CORS for Apache httpd server? (Step-By-Step Process). But I do not know how to add that configuration to the container's httpd.conf, and I don't have an httpd.conf template to mount over it with:
-v /sources/docker/apache-server/my-httpd.conf:/usr/local/apache2/conf/httpd.conf
Please help me with this question.

First, enter a shell in the container with docker exec -it nameContainer sh, then write su in the terminal; with those commands you are now the root user inside your container.
Now write a2enmod headers in the terminal and restart your container.
That last command activates mod_headers. Now create a .htaccess file in your project root and write inside:
Header add Access-Control-Allow-Origin "*"
That worked fine for me, and I didn't need to install Apache on my machine.
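Put together, the whole sequence looks roughly like this (nameContainer is a placeholder; note that a2enmod is a Debian/Ubuntu helper script, so this assumes a Debian-based Apache image rather than the stock httpd one):
docker exec -it nameContainer sh
su
a2enmod headers
exit          # leave the root shell
exit          # leave the container
docker restart nameContainer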

This has certainly already been addressed elsewhere and has many different solutions; however, it's the first search hit for 'cors apache docker', so here's my 2 cents:
The best solution (because it is auditable and repeatable) is to use the Apache docker image as a base for your own image, using the docker build command and a Dockerfile. I won't go into that in depth as it's not the direct question here, but the basic bits are the same.
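That said, a minimal sketch of such a Dockerfile (assuming a cors.conf file like the one created below, sitting next to the Dockerfile) might be:
FROM httpd:2.4
# copy a config snippet with the CORS header and include it in the main config
COPY cors.conf /usr/local/apache2/conf/extra/cors.conf
RUN echo "Include conf/extra/cors.conf" >> /usr/local/apache2/conf/httpd.conf
You would then build it with docker build -t file-server . instead of running httpd:2.4 directly.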
Create an "extra" apache config file with the CORS header bits:
me@mymachine:~$ docker exec file-server bash -c 'printf "<Directory \"/usr/local/apache2/htdocs\">\n    Header set Access-Control-Allow-Origin \"*\"\n</Directory>\n" > /usr/local/apache2/conf/extra/cors.conf'
me@mymachine:~$ docker exec file-server bash -c 'echo "Include conf/extra/cors.conf" >> /usr/local/apache2/conf/httpd.conf'
me@mymachine:~$ docker exec file-server bash -c 'tail /usr/local/apache2/conf/extra/cors.conf'
<Directory "/usr/local/apache2/htdocs">
    Header set Access-Control-Allow-Origin "*"
</Directory>
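The Header directive requires mod_headers; if the LoadModule line for it is still commented out in the stock httpd.conf (an assumption about the default config worth checking), uncomment it the same way:
docker exec file-server bash -c "sed -i 's/^#\(LoadModule headers_module\)/\1/' /usr/local/apache2/conf/httpd.conf"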
Restart apache to use this new config:
me@mymachine:~$ docker exec file-server bash -c 'apachectl -k restart'
NOTE: If you haven't configured a ServerName anywhere, then you might see this warning which is unrelated to the CORS configuration:
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.3. Set the 'ServerName' directive globally to suppress this message

Related

Enable mod_gzip in docker apache httpd alpine

I've set up httpd with docker:
FROM httpd:2.4-alpine
# load required modules (unfortunately gzip is not available)
RUN sed -i '/LoadModule rewrite_module/s/^#//g' /usr/local/apache2/conf/httpd.conf
RUN sed -i '/LoadModule deflate_module/s/^#//g' /usr/local/apache2/conf/httpd.conf
# AllowOverride All so that custom .htaccess is applied
RUN sed -i 's#AllowOverride [Nn]one#AllowOverride All#' /usr/local/apache2/conf/httpd.conf
It runs fine, but I need the mod_gzip module enabled, which is not listed in httpd.conf.
What do I need to do in order to get mod_gzip enabled in the official docker httpd:2.4-alpine image?
It seems mod_gzip is obsolete for Apache 2.4, and mod_deflate is the way to go for doing the compression.
mod_deflate is integrated into the default Apache package and just has to be enabled; mod_gzip remained an external extension module and its integration is complex.
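Since the Dockerfile above already uncomments the deflate_module LoadModule line, all that remains is telling Apache which content types to compress, for example by appending a directive to httpd.conf (the type list here is just an illustration):
RUN printf '\nAddOutputFilterByType DEFLATE text/html text/css application/javascript\n' >> /usr/local/apache2/conf/httpd.conf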

Apache configuration: how to get quick feedback?

When writing configuration files for Apache web server I would like to have a quick feedback loop.
For example, I have a script that doesn't seem to work. It is either not picked up, or the variables I use are not set, or maybe overriding is not allowed. How do I debug this?
I expected to at least be able to print some debug log statements like REQUEST_URI: %{REQUEST_URI}, but I can't find such a thing.
apachectl is a front end to the Apache HyperText Transfer Protocol (HTTP) server. It is designed to help the administrator control the functioning of the Apache httpd daemon.
Here is a link to the documentation.
Different platforms might use different binary names, such as apache, apache2 or apache2ctl. To test the configuration, just run:
apachectl configtest
# or, depending on your OS
httpd -t
EDIT
If you are trying to debug your virtual host configuration, you may find the Apache -S command line switch useful. That is, type the following command:
httpd -S
This command will dump out a description of how Apache parsed the configuration file.
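If the goal is also to see request variables such as REQUEST_URI, one option (a sketch, not part of apachectl) is a custom access-log format using mod_log_config placeholders:
LogFormat "REQUEST_URI: %U%q" debugfmt
CustomLog logs/debug_log debugfmt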

Using docker bridge network IP to access vhost in apache container

I'd like to understand how I'm able to access a domain configured in a vhost inside a docker container by providing an entry in my local /etc/hosts file with the docker bridge network IP.
I use docker-compose (v2)
Docker network bridge IP (by default): 172.17.0.1
I have an apache container running on 172.19.0.10
My vhost is simple, like:
<VirtualHost *:80>
    ServerName mywebsite.local
</VirtualHost>
In my local /etc/hosts file, I have: 172.17.0.1 mywebsite.local
And it works... but how?
Is Docker using the port to guess where to forward the traffic (from 172.17.0.1 to 172.19.0.10)?
Can someone give me with some explanations and if possible documentation ?
Thanks.
At some point you had to start your container/docker with something like this:
docker run -d -p 1337:80 coreos/apache /usr/sbin/apache2ctl -D FOREGROUND
1337:80 means that localhost:1337 in your browser gets forwarded to port 80 of the container, i.e. to your apache container.
Hopefully what you had in mind?!
You can try using
docker run -d --network=host <image>
or, in docker compose, network_mode: "host". Documentation on this can be found here.
Both of these put your container on top of the host network stack.
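For the docker-compose (v2) setup from the question, the published port presumably came from something like this (service and image names are assumptions):
version: "2"
services:
  apache:
    image: httpd:2.4
    ports:
      - "80:80"
Publishing a port binds it on all host interfaces, including the docker0 bridge at 172.17.0.1, which is why traffic to 172.17.0.1:80 gets forwarded to the container at 172.19.0.10.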

How to run a bat file in an Apache server using the rotatelogs configuration?

I am trying to rotate the log file in an Apache server. I tried a lot using the rotatelogs configuration in httpd.conf, but I couldn't get it working.
While configuring rotatelogs in httpd.conf, I found that rotatelogs has an option to run a program, which can be used via the parameter -p. So I wrote a bat file to rotate the logs and configured it with the Apache server as in the command below:
ErrorLog "|bin/rotatelogs.exe -l -p logs/apachelog.bat errorlogs 1M"
Is my way of configuration correct or not? Once the error log file reaches that particular file size, I want to execute the bat file. So I tried it like this.
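For reference, a slightly more explicit variant of that directive (the paths are assumptions relative to ServerRoot; rotatelogs invokes the -p program each time it opens a new log file) would be:
ErrorLog "|bin/rotatelogs.exe -l -p logs/apachelog.bat logs/error.%Y-%m-%d.log 1M"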
Thanks in advance.

Force MediaWiki Squid cache to fill up with all pages

To speed up a MediaWiki site whose content uses a lot of templates, but is otherwise pretty much static once the templates have done their job, I'd like to set up a Squid server
see
https://www.mediawiki.org/wiki/Manual:PurgeList.php
and
https://www.mediawiki.org/wiki/Manual:Squid_caching
and then fill the Squid server's cache "automatically" by using a script doing wget/curl calls that hit all pages of the MediaWiki. My expectation would be that after this procedure every single page is in the Squid cache (if I make it big enough) and that each access would then be served by Squid.
How would I get this working?
E.g.:
How do I check my configuration?
How would I find out how much memory is needed?
How could I check that the pages are in the squid3 cache?
What I tried so far
I started out by finding out how to install squid using:
https://wiki.ubuntuusers.de/squid
and
https://www.mediawiki.org/wiki/Manual:Squid_caching
I figured out my ip address xx.xxx.xxx.xxx (not disclosed here)
via ifconfig eth0
in /etc/squid3/squid.conf I put
http_port xx.xxx.xxx.xxx:80 transparent vhost defaultsite=XXXXXX
cache_peer 127.0.0.1 parent 80 3130 originserver
acl manager proto cache_object
acl localhost src 127.0.0.1/32
# Allow access to the web ports
acl web_ports port 80
http_access allow web_ports
# Allow cachemgr access from localhost only for maintenance purposes
http_access allow manager localhost
http_access deny manager
# Allow cache purge requests from MediaWiki/localhost only
acl purge method PURGE
http_access allow purge localhost
http_access deny purge
# And finally deny all other access to this proxy
http_access deny all
Then I configured my apache2 server
# /etc/apache2/sites-enabled/000-default.conf
Listen 127.0.0.1:80
I added
$wgUseSquid = true;
$wgSquidServers = array('xx.xxx.xxx.xxx');
$wgSquidServersNoPurge = array('127.0.0.1');
to my LocalSettings.php
Then I restarted apache2 and started squid3 with
service squid3 restart
and did a first access attempt with
wget --cache=off -r http://XXXXXX/mediawiki
the result is:
Resolving XXXXXXX (XXXXXXX)... xx.xxx.xxx.xxx
Connecting to XXXXXXX (XXXXXXX)|xx.xxx.xx.xxx|:80... failed: Connection refused.
Assuming Apache 2.x.
While not Squid related, you can achieve this using just Apache modules. Have a look at mod_cache here: https://httpd.apache.org/docs/2.2/mod/mod_cache.html
You can simply add this to your Apache configuration and ask Apache to do disk caching of rendered content.
You need to ensure your content has appropriate cache expiry information in the resulting PHP response, MediaWiki should take care of this for you.
Adding such a cache layer may not have the desired outcome: this layer does not know when a page has changed, cache management is difficult here, and it should only be used for genuinely static content.
Ubuntu:
a2enmod cache cache_disk
Apache configuration:
CacheRoot /var/cache/apache2/mod_disk_cache
CacheEnable disk /
I would not recommend pre-filling your cache by accessing every page. This will only cause dormant (not frequently used) pages to take up valuable space / memory. If you still wish to do this, you may look at wget:
Description from: http://www.linuxjournal.com/content/downloading-entire-web-site-wget
$ wget \
--recursive \
--no-clobber \
--page-requisites \
--html-extension \
--convert-links \
--restrict-file-names=windows \
--domains website.org \
--no-parent \
www.website.org/tutorials/html/
This command downloads the Web site www.website.org/tutorials/html/.
The options are:
--recursive: download the entire Web site.
--domains website.org: don't follow links outside website.org.
--no-parent: don't follow links outside the directory tutorials/html/.
--page-requisites: get all the elements that compose the page (images, CSS and so on).
--html-extension: save files with the .html extension.
--convert-links: convert links so that they work locally, off-line.
--restrict-file-names=windows: modify filenames so that they will work in Windows as well.
--no-clobber: don't overwrite any existing files (used in case the download is interrupted and resumed).
A better option: Memcached
MediaWiki also supports the use of Memcached as a very fast in-memory caching service for data and templates only. This is not as blunt as a site-wide cache like Squid or Apache mod_cache: MediaWiki manages Memcached so that any changes are immediately reflected in the cache store, meaning your content will always be valid.
Please see the installation instructions at MediaWiki here: https://www.mediawiki.org/wiki/Memcached
My recommendation is not to use Apache mod_cache or Squid for this task, and instead to install Memcached and configure MediaWiki to use it.
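For what it's worth, the LocalSettings.php side of that Memcached setup (per the manual linked above, assuming memcached listens on its default port 11211) boils down to:
$wgMainCacheType = CACHE_MEMCACHED;
$wgParserCacheType = CACHE_MEMCACHED;
$wgMemCachedServers = array( '127.0.0.1:11211' );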