Share folder from one container to another within ECS Fargate 1.4.0

I have one Nginx and one WordPress (php-fpm) container declared in an ECS Fargate task running on Platform 1.3.0 (or what is marked as LATEST).
I am trying to switch to Platform 1.4.0 for the support of EFS Volumes.
I want the Nginx container to serve static assets directly from the WordPress container. On 1.3.0 I just bind volumes between containers and everything works: the files from `/var/www/html` on the WordPress container are mapped to `/var/www/html` on the Nginx container.
[
  {
    "name": "wordpress",
    "image": "....",
    ...
    "mountPoints": [
      {
        "readOnly": false,
        "sourceVolume": "asset-volume",
        "containerPath": "/var/www/html"
      }
    ]
  },
  {
    "name": "nginx",
    "image": "....",
    ...
    "mountPoints": [
      {
        "sourceVolume": "asset-volume",
        "containerPath": "/var/www/html",
        "readOnly": true
      }
    ]
  }
]
volume {
  name      = "asset-volume"
  host_path = null
}
However, on 1.4.0, instead of mapping the folder from WordPress to Nginx, it seems to create an empty volume on the host and map that to /var/www/html on both containers, thus removing all contents of /var/www/html in the WordPress container.
I've been researching this issue for the last couple of days; one solution is to save the code to /var/www/code and then copy it to /var/www/html at container startup, but that seems like a very bad workaround. I wonder if anyone has managed to share data from one container to another on Fargate 1.4.0, and how they achieved it.
Thank you

I had a similar issue. I also found a conversation on this topic on GitHub:
https://github.com/aws/containers-roadmap/issues/863
tl;dr: Add VOLUME /var/www/html to your wordpress Dockerfile, delete the mountPoints section from the wordpress container definition, and in the nginx container definition replace mountPoints with volumesFrom, like this:
"volumesFrom": [
  {
    "sourceContainer": "wordpress",
    "readOnly": true
  }
]
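For reference, the Dockerfile side of that change might look like the sketch below. The wordpress:fpm base image is only an assumption standing in for whatever your WordPress image builds from; the essential line is the VOLUME directive, which is what lets Fargate 1.4.0 expose the image's /var/www/html contents to the nginx container through volumesFrom.

# Dockerfile for the wordpress container (sketch; base image is an assumption)
FROM wordpress:fpm

# Declare the shared path as a Docker volume so the nginx container can
# mount it via volumesFrom instead of receiving an empty task volume.
VOLUME ["/var/www/html"]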

Please follow this conversation on GitHub:
https://github.com/USACE/instrumentation/issues/88
Ensure the Dockerfiles for both wordpress and nginx each include the following directive:
VOLUME [ "/var/www/html" ]

Related

Nginx Unit: Static file serving with different folder name

I'm rewriting a Flask application to use NGINX Unit and I am now trying to configure static resources. However, I'm using Flask blueprints and the folder structure is different from what NGINX Unit expects:
{
  "match": {
    "uri": "/static/my-module/*"
  },
  "action": {
    "share": "/app/myapp/modules/my-module/static/"
  }
},
What I would like is for everything after /static/my-module/ to be appended to the local path /app/myapp/modules/my-module/static/, like this:
/static/my-module/main.css => /app/myapp/modules/my-module/static/main.css
But what happens is:
/static/my-module/main.css => /app/myapp/modules/my-module/static/static/my-module/main.css
I don't see any way to use a regex, or to make $uri contain only the matching part rather than the full URI.
Given the size of the application it's not trivial to change the local path. I could do exotic things like symlinking, but that's a hassle to maintain.
I'm using unit:1.26.1-python3.9
This is not possible today. In the future you will be able to specify URI rewrites to reconstruct the URI to match the new layout.
For now you can mitigate the symlinking pain by using the $uri variable explicitly.
"share": "/app/myapp/modules/my-module$uri"
And
$ cd /app/myapp/modules/my-module/static
$ ln -s . my-module
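Putting the two pieces together, the route entry from the question would become something like the sketch below (same paths as above). A request for /static/my-module/main.css then maps to /app/myapp/modules/my-module/static/my-module/main.css, and the my-module -> . symlink collapses that to /app/myapp/modules/my-module/static/main.css, which is the file you want.
{
  "match": {
    "uri": "/static/my-module/*"
  },
  "action": {
    "share": "/app/myapp/modules/my-module$uri"
  }
}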

How can I set up a devserver proxy in Vue, but have it ignore local php files?

I have a dev server proxy set up in Vue.js that allows me to talk to my website for APIs and get around CORS problems when working on my local dev build.
However, with this setup, if the requested file exists locally, the dev server proxy doesn't fetch the file from my website (which is what I want for PHP files); instead it tries to serve the local PHP file via the Vue dev server.
My vue.config.js file looks like this:
module.exports = {
  "transpileDependencies": [
    "vuetify"
  ],
  devServer: {
    proxy: 'https://www.example.com',
  }
}
Is it possible to set up exclusions for specific files, or specific file types (i.e. PHP files)?
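One possible direction, sketched below and not taken from the thread: webpack-dev-server hands each key of the devServer.proxy object to http-proxy-middleware as a context, and http-proxy-middleware accepts glob patterns, so requests for .php paths can be forwarded to the remote site explicitly. https://www.example.com stands in for the real backend, the '^/api' prefix is only an assumed example, and this only helps if the dev server isn't also serving those .php files from its static/public directory.

// vue.config.js (sketch, untested)
module.exports = {
  transpileDependencies: ["vuetify"],
  devServer: {
    proxy: {
      // Forward anything ending in .php to the remote site instead of
      // letting the local dev server try to handle it.
      "**/*.php": {
        target: "https://www.example.com",
        changeOrigin: true,
      },
      // Keep proxying API routes as before (prefix is an assumed example).
      "^/api": {
        target: "https://www.example.com",
        changeOrigin: true,
      },
    },
  },
};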

Sendgrid integration: How to host AASA file on same domain as CNAME redirect

How can I host the AASA file on the same domain where the CNAME redirects to thirdparty.bnc.lt? Trying to download the AASA will just always redirect to thirdparty.bnc.lt, won't it?
Here's where I'm stuck:
1. Begin setup of Sendgrid email integration
2. Step 2 (Configure ESP): put in the correct info.
I then get errors (although the CNAME check is happy).
How can the AASA ever be found/valid when the click-tracking domain redirects to thirdparty.bnc.lt?
It could be because the AASA file hosted on your domain is incorrectly formatted. Here is a sample AASA file's contents for your reference:
{
  "applinks": {
    "apps": [],
    "details": [
      {
        "appID": "3XXXXX9M83.io.branch-labs.Branchster",
        "paths": [ "NOT /e/*", "*", "/", "/archives/201?/*" ]
      }
    ]
  }
}
Hope this helps. Please check out our blog to learn more about AASA files.
If you have already tried this and are still facing issues, let us know or write to us at integrations@branch.io and we'll be happy to provide you with the required support!

Firefox 59 and self-signed certificates error on local environment [duplicate]

Suddenly Google Chrome redirects my virtual-host domain myapplication.dev to https://myapplication.dev. I already tried to go to
chrome://net-internals/#hsts
and enter myapplication.dev into the "Delete domain security policies" textbox at the very bottom, but this had no effect.
I also tried to delete the browser data.
I also changed the v-host to .app instead of .dev, but Chrome still redirected me to https:// ...
It's a Laravel application running on Laragon.
On other PCs in the same network, it works perfectly.
There is no way to prevent Chrome (>= 63) from using HTTPS on .dev domain names.
Google now owns the official .dev TLD and has already stated that they will not remove this functionality.
The recommendation is to use another TLD for development purposes, such as .localhost or .test.
More information about this update can be found in this article by Mattias Geniar.
For Firefox:
You can disable the property network.stricttransportsecurity.preloadlist by visiting about:config.
For IE, it seems to still work.
For Chrome there is no solution; I think it's hardcoded in the source code.
See this article: How to prevent Firefox and Chrome from forcing dev and foo domains to use https
This problem can't be fixed. Here's why:
Google owns the .dev gTLD.
Chrome forces HTTP to HTTPS on .dev domains directly within the source code.
From the 2nd link below:
...
// eTLDs
// At the moment, this only includes Google-owned gTLDs,
// but other gTLDs and eTLDs are welcome to preload if they are interested.
{ "name": "google", "include_subdomains": true, "mode": "force-https", "pins": "google" },
{ "name": "dev", "include_subdomains": true, "mode": "force-https" },
{ "name": "foo", "include_subdomains": true, "mode": "force-https" },
{ "name": "page", "include_subdomains": true, "mode": "force-https" },
{ "name": "app", "include_subdomains": true, "mode": "force-https" },
{ "name": "chrome", "include_subdomains": true, "mode": "force-https" },
...
References
ICANN Wiki Google
Chromium Source - transport_security_state_static.json
Check this link:
https://laravel-news.com/chrome-63-now-forces-dev-domains-https
Based on this article by Danny Wahl, the recommendation is to use one of the following: “.localhost”, “.invalid”, “.test”, or “.example”.
Chrome 63 forces .dev domains to HTTPS via preloaded HSTS, and soon all other browsers will follow.
The .dev gTLD has been bought by Google for internal use and can no longer be used with HTTP; only HTTPS is allowed. See this article for further explanation:
https://ma.ttias.be/chrome-force-dev-domains-https-via-preloaded-hsts/
It may be worth noting that there are other TLDs that are forced to HTTPS: https://chromium.googlesource.com/chromium/src.git/+/63.0.3239.118/net/http/transport_security_state_static.json#262
Right now these are google, dev, foo, page, app and chrome.
macOS Sierra, Apache: after Chrome 63 started forcing .dev top-level domains to HTTPS via preloaded HSTS, phpMyAdmin on my Mac stopped working. I read this and just edited the /etc/apache2/extra/httpd-vhosts.conf file:
<VirtualHost *:80>
    DocumentRoot "/Users/.../phpMyAdmin-x.y.z"
    ServerName phpmyadmin.localhost
</VirtualHost>
and restarted Apache (sudo /usr/sbin/apachectl stop; sudo /usr/sbin/apachectl start), and now it works at http://phpmyadmin.localhost :). For Laravel applications the solution is similar.
The nice thing is that with the *.localhost top-level domain, you can forget about editing /etc/hosts when you set up a new project.
How cool is that? :)
There's also an excellent proposal to add the .localhost domain as a new standard, which would be more appropriate here.
UPDATE 2018
Using *.localhost is not a good idea: some applications, such as cURL (used by php-guzzle), will not support it (more details here). It is better to use *.local.

WordPress redirecting to localhost instead of virtual host when being proxied by Webpack Dev Server

I am trying to do a relatively simple set up with:
An Apache server with a virtual host for the HTML.
Webpack and the Webpack Dev Server for asset reloading and generation.
To accomplish this, the Webpack Dev Server proxies the other server to pass through requests that it doesn't know how to handle.
Using a basic python server (which works):
1. Start the python webserver on http://localhost:5000.
2. Run npm start.
3. Webpack Dev Server starts and proxies the python server using http://localhost:9090.
4. Visit http://localhost:9090 and see the HTML result of the python server plus Webpack assets.
Using the Apache server:
1. Start the Apache server.
2. Run npm start.
3. Webpack Dev Server starts and proxies the Apache server using http://localhost:9090.
4. Visit localhost:9090; the browser automatically redirects to http://localhost and displays "It works!".
If I separately visit http://vhost.name in the browser I see the correct HTML. My environment is the default Apache Server version: Apache/2.4.16 (Unix) on Mac OS X El Capitan.
package.json
"scripts": {
  "test": "node server.js",
  "start": "npm run start-dev-server",
  "start-dev-server": "webpack-dev-server 'webpack-dev-server/client?/' --host 0.0.0.0 --port 9090 --progress --colors",
  "build": "webpack"
},
webpack.config.js
/*global require module __dirname*/
var webpack = require('webpack');
module.exports = {
  entry: {
    app: [
      'webpack-dev-server/client?http://localhost:9090',
      'webpack/hot/only-dev-server',
      './src/app.js'
    ]
  },
  output: {
    path: __dirname + '/dist',
    filename: '[name]_bundle.js',
    publicPath: 'http://localhost:9090/dist'
  },
  devServer: {
    historyApiFallback: true,
    proxy: {
      // Python server: works as expected at localhost:9090
      // '*': 'http://localhost:5000'
      // Apache virtual host: redirects to localhost instead of
      // serving localhost:9090
      '*': 'http://vhost.name'
    }
  },
  plugins: [
    new webpack.HotModuleReplacementPlugin()
  ]
};
httpd-vhosts.conf
<VirtualHost vhost.name:80>
    DocumentRoot "/Path/To/vhost.name"
    ServerName vhost.name
    <Directory "/Path/To/vhost.name">
        Options FollowSymLinks Indexes MultiViews
        AllowOverride All
        # OSX 10.10 / Apache 2.4
        Require all granted
    </Directory>
</VirtualHost>
hosts
127.0.0.1 localhost
127.0.0.1 vhost.name
WordPress uses canonical URLs to help normalize URLs for content from different sources:
[Y]ou can’t control who links to you, and third parties can make errors when typing or copy-pasting your URLs. This canonical URL degeneration has a way of propogating. That is, Site A links to your site using a non-canonical URL. Then Site B see’s Site A’s link, and copy-pastes it into their blog. If you allow a non-canonical URL to stay in the address bar, people will use it. That usage is detrimental to your search engine ranking, and damages the web of links that makes up the Web. By redirecting to the canonical URL, we can help stop these errors from propagating, and at least generate 301 redirects for the errors that people may make when linking to your blog.
This canonical redirect is what strips the webpack-dev-server port number when proxying. The solution is to add the following to your theme's functions.php:
remove_filter('template_redirect', 'redirect_canonical');
We obviously don't want this running on the live site, so add a constant to wp-config.php and set it depending on the environment:
wp-config.php
// Toggle to disable canonical URLs to allow port numbers.
// Required for webpack-dev-server proxying.
define('DISABLE_CANONICAL', 'Y');
yourtheme/functions.php
// Hack for the webpack dev server: skip WordPress's canonical redirect.
if (defined('DISABLE_CANONICAL') && DISABLE_CANONICAL == 'Y') {
    remove_filter('template_redirect', 'redirect_canonical');
}
I have experienced issues with the browser "caching" the previous URL redirect, so when you make a change it may not appear immediately. Try opening the URL in incognito mode, using a different browser, or restarting the Webpack and Apache servers.