Sendgrid integration: How to host AASA file on same domain as CNAME redirect - branch.io

How can I host the AASA file on the same domain where the CNAME redirects to thirdparty.bnc.lt? Trying to download the AASA file will just always redirect to thirdparty.bnc.lt, won't it?
Here's where I'm stuck:
1. Begin setup of Sendgrid email integration
2. Step 2 (Configure ESP): put in the correct info (see pic)
Get these errors (although notice the CNAME is happy):
How can the AASA file ever be found/valid when the click-tracking domain redirects to thirdparty.bnc.lt?

It could be because the AASA file hosted on your domain is incorrectly formatted. Here is a sample of AASA file contents for your reference:
{
  "applinks": {
    "apps": [],
    "details": [
      {
        "appID": "3XXXXX9M83.io.branch-labs.Branchster",
        "paths": [ "NOT /e/*", "*", "/", "/archives/201?/*" ]
      }
    ]
  }
}
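As a quick sanity check (just a sketch; click.yourdomain.com is a placeholder for your click-tracking domain), you can confirm what is actually being served at the AASA paths and whether the response is a redirect:
# iOS accepts either location; a 200 with application/json (not a 30x redirect) is what you want to see
curl -sI https://click.yourdomain.com/.well-known/apple-app-site-association
curl -sI https://click.yourdomain.com/apple-app-site-association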
Hope this helps. Please check out our blog to know more about AASA files.
If you have already tried this and are still facing the issue, let us know or write to us at integrations@branch.io and we'll be happy to provide you with the required support!

Related

Remove trailing slashes from AWS S3 static webhosting site

I have generated my WordPress website as static HTML and deployed it to an S3 bucket to serve as a static website.
Here is my website endpoint: wwwalls.com
My SEO team wants the page URLs to not contain trailing slashes, but I could not find any documentation online on how to get rid of the trailing slash, e.g. https://wwwalls.com/demo leads to https://wwwalls.com/demo/.
I tried using redirect rules, but then it ends up in too many redirects (see the rules below).
[
  {
    "Condition": {
      "KeyPrefixEquals": "demo/"
    },
    "Redirect": {
      "ReplaceKeyPrefixWith": "demo"
    }
  }
]
Please advise!
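The rules above loop because the S3 website endpoint's index-document handling redirects /demo back to /demo/, which the rule then rewrites again. A minimal sketch of one possible workaround, assuming the page currently lives at demo/index.html in the bucket, is to also upload the file under the extensionless key demo so S3 can answer /demo directly (key and file names here are assumptions):
# serve /demo without a trailing slash by giving it its own object
aws s3 cp demo/index.html s3://wwwalls.com/demo --content-type text/html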

Share folder from one container to another within ECS Fargate 1.4.0

I have one Nginx and one WordPress (php-fpm) container declared in an ECS Fargate task running on Platform 1.3.0 (or what is marked as LATEST).
I am trying to switch to Platform 1.4.0 for the support of EFS volumes.
I want the Nginx container to serve static assets directly from the WordPress container. On 1.3.0 I just bind volumes between the containers and everything works. The files from /var/www/html on the WordPress container are mapped to /var/www/html on the Nginx container.
[
  {
    "name": "wordpress",
    "image": "....",
    ...
    "mountPoints": [
      {
        "readOnly": false,
        "sourceVolume": "asset-volume",
        "containerPath": "/var/www/html"
      }
    ]
  },
  {
    "name": "nginx",
    "image": "....",
    ...
    "mountPoints": [
      {
        "sourceVolume": "asset-volume",
        "containerPath": "/var/www/html",
        "readOnly": true
      }
    ]
  }
]
volume {
  name      = "asset-volume"
  host_path = null
}
However, on 1.4.0, instead of mapping the folder from WordPress into Nginx, it seems to create an empty volume on the host and map that to /var/www/html on both containers, thus removing all contents of /var/www/html on the WordPress container.
I've been researching this issue for the last couple of days; one solution is to save the code to /var/www/code and then copy it to /var/www/html at container runtime, but that seems like a very bad workaround. I wonder if anyone has managed to share data from one container to another on Fargate 1.4.0 and how they achieved that.
Thank you
I had a similar issue. I also found a conversation on this topic on GitHub:
https://github.com/aws/containers-roadmap/issues/863
tl;dr: Add VOLUME /var/www/html to your wordpress Dockerfile, delete the mountPoints section from the wordpress container definition, and in the nginx container definition replace mountPoints with volumesFrom,
like this:
"volumesFrom": [
{
"sourceContainer": "wordpress",
"readOnly": true
}
]
Please follow this conversation on GitHub:
https://github.com/USACE/instrumentation/issues/88
Ensure the Dockerfiles for both wordpress and nginx each include the following directive:
VOLUME [ "/var/www/html" ]
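For illustration, a minimal sketch of what the wordpress image's Dockerfile could look like (the base image, tag, and build steps are assumptions, not from the original post):
# hypothetical Dockerfile for the wordpress container
FROM wordpress:php7.4-fpm
# bake the application code into the image
COPY . /var/www/html
# declare the volume so the nginx container can mount it via volumesFrom on Fargate 1.4.0
VOLUME [ "/var/www/html" ]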

Restrict Amazon S3 to CloudFront and http referrer

I have an Amazon S3 REST endpoint for images and file assets. I want the S3 bucket to be accessible only by CloudFront and by the website accessing the images (using the HTTP referrer).
This is my bucket policy so far:
{
  "Version": "2008-10-17",
  "Id": "PolicyForCloudFrontPrivateContent",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXXXXXX"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::<DOMAIN>/*",
      "Condition": {
        "StringLike": { "aws:Referer": [ "http://<DOMAIN>/*" ] }
      }
    }
  ]
}
But once I apply the policy, the images are not accessible on the website.
Is this possible to do?
CloudFront strips the Referer header by default, so S3 will not see it.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/RequestAndResponseBehaviorCustomOrigin.html
You need to whitelist the Referer header in CloudFront and invalidate the cache to see if it works.
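For example, once the header is whitelisted, the cache can be invalidated from the CLI (the distribution ID here is a placeholder):
aws cloudfront create-invalidation --distribution-id EXXXXXXXXXXXXX --paths "/*"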
I went about this a little bit differently, instead of a whitelist. The method below only allows CloudFront to access the content, and then you put firewall rules on CloudFront so that only your website (by referrer) can access the cached content.
For the bucket policy, I blocked all access and cleared out the Bucket policy JSON:
In CloudFront, create an origin (under Origins and Origin Groups):
Then choose your bucket from the list in Origin Domain Name.
Origin Path I left blank, and Enable Origin Shield I left as No.
Restrict Bucket Access: Choose Yes
Choose Create a New Identity
Grant Read Permissions on Bucket: choose Yes (this updates the bucket policy on the S3 bucket to allow only CloudFront to get the content).
Everything else I left to default and saved.
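For reference, the bucket policy that the Grant Read Permissions option generates looks roughly like this (the OAI ID and bucket name are placeholders):
{
  "Version": "2008-10-17",
  "Id": "PolicyForCloudFrontPrivateContent",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EXXXXXXXXXXXXX"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}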
Now, to make sure only my website's referrer is allowed, I went to the AWS WAF service.
From here I went to Regex pattern sets on the left menu:
Click on Create regex pattern set.
Name: I put DomainAccess_Only
Description: use whatever you like
Region: important, choose Global (CloudFront)
For the regular expressions, I put .+ and clicked Create regex pattern set.
Web ACL Details:
Name: Whatever you want, leave metric default
Resource type: CloudFront distributions
Add AWS Resources: click it, check your CloudFront distribution, and add it (click Next).
Next, choose Rule builder.
Choose whatever name you want for your rule and choose Regular rule.
Then choose "If a request matches the statement" (unless you have more than one domain).
Inspect: Header
Header field name: Referer
Match type: Starts with string
String to match: https://yourdomain.com (this needs to be exactly what your domain is)
Scroll down and choose Action: Allow.
Then Add rule
Once you have done that, make sure to go to Rules and check that the default action is set to Block.
If it's not set to Block, click edit and change it.
Now your content can only be accessed by your website through CloudFront. Hotlinking and direct access to images will not work unless the request comes from your website.
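For reference, the console steps above correspond roughly to a WAFv2 rule like the following, as it would appear in the web ACL's rule JSON editor (the rule name and domain are placeholders; the web ACL's default action is set to Block separately):
{
  "Name": "AllowOwnReferrer",
  "Priority": 0,
  "Statement": {
    "ByteMatchStatement": {
      "FieldToMatch": { "SingleHeader": { "Name": "referer" } },
      "PositionalConstraint": "STARTS_WITH",
      "SearchString": "https://yourdomain.com",
      "TextTransformations": [ { "Priority": 0, "Type": "NONE" } ]
    }
  },
  "Action": { "Allow": {} },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "AllowOwnReferrer"
  }
}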

Cannot load i18n assets file while hosting on IIS

Update
I added a JSON MIME type in IIS and it worked in Chrome, but in IE it goes to "http://localhost:8080/Advisoryportal" (which is fine) and is then immediately redirected to "http://localhost:8080/", which gives HTTP Error 404.3 - Not Found and ends up at http://localhost:8080/Default.asp.
Can someone please help with this?
===========================================================================
I have a small application I am working on. I can host it on IIS, but en.json is not getting loaded and hence I am not able to support multilingual text. I am getting a 404 error. I tried many solutions I could find on the internet.
"http://localhost:8080/Advisoryportal/assets/i18n/en.json", ok: false,
Webpack version: webpack 4.8.3
Package.json
"assets": [
"src/favicon.ico",
"src/assets/",
"src/assets/i18n",
"src/web.config",
"src/loan-insurance-overview",
"src/assets/i18n/en.json"
],
Module.ts
export function HttpLoaderFactory(httpClient: HttpClient) {
  return new TranslateHttpLoader(httpClient, "./assets/i18n/", ".json");
}
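For context, a factory like this is normally wired into the Angular module along the lines of the standard ngx-translate setup below (the module name and surrounding imports are assumptions, not taken from the original post):
import { NgModule } from '@angular/core';
import { HttpClient, HttpClientModule } from '@angular/common/http';
import { TranslateLoader, TranslateModule } from '@ngx-translate/core';
import { TranslateHttpLoader } from '@ngx-translate/http-loader';

export function HttpLoaderFactory(httpClient: HttpClient) {
  return new TranslateHttpLoader(httpClient, "./assets/i18n/", ".json");
}

@NgModule({
  imports: [
    HttpClientModule,
    // requests ./assets/i18n/<lang>.json over HTTP at runtime, which is why the 404 breaks translations
    TranslateModule.forRoot({
      loader: { provide: TranslateLoader, useFactory: HttpLoaderFactory, deps: [HttpClient] }
    })
  ]
})
export class AppModule {}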
The issue was fixed after adding the application/json MIME type in IIS.
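For anyone applying the fix through configuration rather than the IIS Manager UI, the equivalent web.config entry would look roughly like this (a sketch; where it sits in the site's existing web.config is assumed):
<configuration>
  <system.webServer>
    <staticContent>
      <!-- let IIS serve .json files such as assets/i18n/en.json -->
      <mimeMap fileExtension=".json" mimeType="application/json" />
    </staticContent>
  </system.webServer>
</configuration>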

Firefox 59 and self-signed certificates error on local environment [duplicate]

Suddenly Google Chrome redirects my virtual-host domain myapplication.dev to https://myapplication.dev. I already tried to go to
chrome://net-internals/#hsts
and entered myapplication.dev into the textbox at the very bottom ("Delete domain security policies"), but this had no effect.
I also tried to delete the browser data.
I also tried changing the v-host to .app instead of .dev, but Chrome still redirected me to https:// ...
It's a Laravel application running on Laragon.
On other PCs in the same network, it works perfectly.
There is no way to prevent Chrome (>= 63) from using HTTPS on .dev domain names.
Google now owns the official .dev TLD and has already stated that they will not remove this functionality.
The recommendation is to use another TLD for development purposes, such as .localhost or .test.
More information about this update can be found in this article by Mattias Geniar.
For Firefox:
You can disable the property network.stricttransportsecurity.preloadlist by visiting about:config.
For IE, it seems to still work.
For Chrome there is no solution; I think it's hardcoded in the source code.
See this article: How to prevent Firefox and Chrome from forcing dev and foo domains to use https
This problem can't be fixed. Below is the reason:
Google owns .dev gTLD
Chrome forces HTTP to HTTPS on .dev domains directly within the source code.
From the 2nd link below:
...
// eTLDs
// At the moment, this only includes Google-owned gTLDs,
// but other gTLDs and eTLDs are welcome to preload if they are interested.
{ "name": "google", "include_subdomains": true, "mode": "force-https", "pins": "google" },
{ "name": "dev", "include_subdomains": true, "mode": "force-https" },
{ "name": "foo", "include_subdomains": true, "mode": "force-https" },
{ "name": "page", "include_subdomains": true, "mode": "force-https" },
{ "name": "app", "include_subdomains": true, "mode": "force-https" },
{ "name": "chrome", "include_subdomains": true, "mode": "force-https" },
...
References
ICANN Wiki Google
Chromium Source - transport_security_state_static.json
Check this link:
https://laravel-news.com/chrome-63-now-forces-dev-domains-https
In this article, Danny Wahl recommends you use one of the following: ".localhost", ".invalid", ".test", or ".example".
Chrome 63 forces .dev domains to HTTPS via preloaded HSTS
and soon all other browsers will follow.
The .dev gTLD has been bought by Google for internal use and can no longer be used with HTTP; only HTTPS is allowed. See this article for further explanation:
https://ma.ttias.be/chrome-force-dev-domains-https-via-preloaded-hsts/
It may be worth noting that there are other TLDs that are forced to HTTPS: https://chromium.googlesource.com/chromium/src.git/+/63.0.3239.118/net/http/transport_security_state_static.json#262
Right now these are google, dev, foo, page, app and chrome.
macOS Sierra, Apache: after Chrome 63 started forcing .dev top-level domains to HTTPS via preloaded HSTS, phpMyAdmin on my Mac stopped working. I read this and just edited the /etc/apache2/extra/httpd-vhosts.conf file:
<VirtualHost *:80>
  DocumentRoot "/Users/.../phpMyAdmin-x.y.z"
  ServerName phpmyadmin.localhost
</VirtualHost>
and restarted Apache (with sudo /usr/sbin/apachectl stop; sudo /usr/sbin/apachectl start) - and now it works on http://phpmyadmin.localhost :) . For Laravel applications the solution is similar.
The nice thing about using the *.localhost top-level domain is that when you set up a new project you can forget about editing /etc/hosts.
How cool is that? :)
There's also an excellent proposal to add the .localhost domain as a
new standard, which would be more appropriate here.
UPDATE 2018
Using *.localhost is not good - some applications, like cURL (used by php-guzzle), will not support it - more details here. It is better to use *.local.