Show different website based on the current State on AWS using CloudFront/S3

We want to show a different website depending on the current state (region) of the visiting user. Is that possible? I've seen you can do geo restriction, but I guess this is kind of the opposite.
Doing it without a redirect would be awesome, i.e. serving the sites from different S3 buckets.

You can use Lambda@Edge to inspect the request and route or redirect the user as you wish. A standard pretty-URL redirect example can be adapted to your needs.
You can also add multiple origins (e.g. several S3 buckets) and use cache behaviors with path patterns to direct traffic to a specific origin based on the URL.
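For illustration, here is a minimal sketch of the no-redirect variant as a Lambda@Edge origin-request trigger (Python runtime). It assumes the CloudFront-Viewer-Country-Region geolocation header is forwarded to the origin request via your origin request policy; the bucket domain names are placeholders.

# Hypothetical Lambda@Edge origin-request handler (Python runtime).
# Assumes CloudFront-Viewer-Country-Region is forwarded to the origin
# request; the bucket domain names below are placeholders.
REGION_BUCKETS = {
    'TX': 'texas-site.s3.amazonaws.com',
    'CA': 'california-site.s3.amazonaws.com',
}
DEFAULT_BUCKET = 'default-site.s3.amazonaws.com'

def handler(event, context):
    request = event['Records'][0]['cf']['request']
    headers = request['headers']

    region = ''
    if 'cloudfront-viewer-country-region' in headers:
        region = headers['cloudfront-viewer-country-region'][0]['value']

    domain = REGION_BUCKETS.get(region, DEFAULT_BUCKET)

    # Swap the S3 origin and keep the Host header consistent with it
    request['origin']['s3']['domainName'] = domain
    headers['host'] = [{'key': 'Host', 'value': domain}]
    return request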


Is there a quick way to detect redirections?

I am migrating a website and it has many redirections. I would like to generate a list of all redirects, with their source and target.
I tried using Cyotek WebCopy, but it seems unable to give me the data I need. Is there a crawling method to do that? Or can this perhaps be pulled from the Apache logs?
Of course you can do it by crawling the website, but I advise against it in this specific situation, because there is an easier solution.
You use Apache, so you are (probably) working with HTTP/HTTPS. You can use the HTTP referrer: in PHP, for example, you can reach the previous page via $_SERVER['HTTP_REFERER']. So you will need to do the following (a sketch follows the list):
figure out a way to store previous-next page pairs
at the start of each request store such a pair, knowing what the current URL is and what the previous was
maybe you will need to group your URLs and do some aggregation
load the output somewhere and analyze it
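A minimal sketch of those steps in Python (the answer mentions PHP; the same idea works in any server-side stack). The Flask app and the CSV file name are illustrative assumptions; a real setup would probably batch writes or use a database.

import csv
from flask import Flask, request

app = Flask(__name__)

@app.before_request
def log_referrer_pair():
    # Store (previous URL, current URL) pairs; aggregate the CSV later
    referrer = request.referrer or ''
    if referrer:
        with open('referrer_pairs.csv', 'a', newline='') as f:
            csv.writer(f).writerow([referrer, request.url])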

Pull in the page path with Referrer variable in GTM?

I am trying to collect the referrer URL in Google Tag Manager, and I want it to include the referring page path. I want to do this because I have multiple links from the same domain pointing to one form, and I want to track which page brings in the most form fills so that I can trigger an email series based on the landing page the visitor came from.
For example, I have 3 landing pages directing to one of my forms:
www.site1.com/first-page-path
www.site1.com/second-page-path
www.site1.com/third-page-path
When I check the referrer variable in Google Tag Manager, it simply displays the domain name as follows:
referrer: https://www.site1.com/
How do I collect the full URL, including the page path, so that it shows up like this:
referrer: https://www.site1.com/second-page-path
Any help would be appreciated.
It's limited by the referrer policy. These days browsers set very restrictive defaults (typically strict-origin-when-cross-origin), so only the referring domain is sent cross-site.
If you can manage the referring domain, you can relax its referrer policy there. Alternatively, give each page a different form URL: add a query parameter to the form URL and add the proper setting in GTM (a URL variable that reads that query key) to retrieve it.
In general, the referrer has always been a bit unreliable, and it is now so limited that you probably should not rely on it for business-critical purposes.
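To illustrate both suggestions, a sketch assuming you control the referring pages; the src parameter name and the form URL are made up for the example.

<!-- Option 1: relax the referrer policy on the referring pages (in the
     <head>) so the full path is sent, if that is acceptable for you: -->
<meta name="referrer" content="no-referrer-when-downgrade">

<!-- Option 2 (more reliable): tag each link to the form with a query
     parameter, then read it in GTM with a URL variable (Component Type
     "Query", query key "src"): -->
<a href="https://www.site1.com/form?src=second-page-path">Go to the form</a>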

Hiding the S3 path in an AWS CloudFront URL

I am trying to make sure I did not miss anything in the AWS CloudFront documentation or anywhere else ...
I have a (non-public) S3 bucket configured as the origin in a CloudFront web distribution (I don't think it matters, but I am using signed URLs).
Let's say I have a file in an S3 path like
/someRandomString/someCustomerName/someProductName/somevideo.mp4
So the URL generated by CloudFront would be something like:
https://my.domain.com/someRandomString/someCustomerName/someProductName/somevideo.mp4?Expires=1512062975&Signature=unqsignature&Key-Pair-Id=keyid
Is there a way to obfuscate the path to the actual file in the generated URL? All three parts before the filename can change, so I would rather not use "Origin Path" in the Origin Settings to hide the beginning of the path: with that approach I would have to create many origins mapped to the same bucket with different paths, and the limit of 25 origins per distribution would then be a problem.
Ideally, I would like to get something like
https://my.domain.com/someRandomObfuscatedPath/somevideo.mp4?Expires=1512062975&Signature=unqsignature&Key-Pair-Id=keyid
Note: I am also using my own domain/CNAME.
One way could be to use a Lambda function that receives the S3 file's path, copies the file into an obfuscated directory (keeping a simple mapping from source to copy), and then returns a signed URL for the copied file. This ensures that only the obfuscated path is visible externally.
Of course, this will (potentially) double your storage, so you need some way to clean up the obfuscated folders. That could be done on a schedule: if each signed URL is expected to expire after 24 hours, you could create folders based on the date and delete each obfuscated directory a day or two later.
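A rough sketch of that copy-then-sign idea with boto3. The bucket name is a placeholder, and for simplicity it returns an S3 pre-signed URL; with CloudFront you would sign https://my.domain.com/<hidden_key> instead.

import uuid
import boto3

s3 = boto3.client('s3')
BUCKET = 'my-video-bucket'  # placeholder

def obfuscated_signed_url(real_key, expires=86400):
    # Copy the object under a random, meaningless prefix
    hidden_key = uuid.uuid4().hex + '/' + real_key.rsplit('/', 1)[-1]
    s3.copy_object(Bucket=BUCKET,
                   CopySource={'Bucket': BUCKET, 'Key': real_key},
                   Key=hidden_key)
    # Sign the copy so only the obfuscated path is ever exposed
    return s3.generate_presigned_url('get_object',
                                     Params={'Bucket': BUCKET,
                                             'Key': hidden_key},
                                     ExpiresIn=expires)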
Alternatively, you could use a service like tinyurl.com or something similar to create the mapping. It would be much easier, save on storage, etc. The only downside is that the short URLs would not reflect your domain name.
If you have the ability to modify the routing of your domain then this is a non-issue, but I presume that's not an option.
Obfuscation is not a form of security.
If you wish to control which objects users can access, you should use signed URLs or signed cookies. This way, you can grant access to private objects via S3 or CloudFront without worrying about people obtaining access to other objects.
See: Serving Private Content through CloudFront
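For reference, a sketch of generating a CloudFront signed URL with botocore's CloudFrontSigner; the key-pair ID, private key path and expiry are placeholders, and the path is the example from the question.

from datetime import datetime, timedelta
from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

KEY_PAIR_ID = 'KEYPAIRIDEXAMPLE'  # placeholder

def rsa_signer(message):
    # Sign with the private key matching the CloudFront key pair
    with open('private_key.pem', 'rb') as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)
url = signer.generate_presigned_url(
    'https://my.domain.com/someRandomString/someCustomerName/someProductName/somevideo.mp4',
    date_less_than=datetime.utcnow() + timedelta(hours=24))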

Planning url rewrite for my web app

I'm working on a site which shows different products for different countries. The current URL scheme I'm using is "index.php?country=US" for the main page, and "product.php?country=US&id=1234" to show a product from a specific country.
I'm now planning to implement URL rewriting to get cleaner URLs. The idea is to use each country as a subdomain and the product id as the page, something like this:
us.example.com/1234 -> product.php?country=US&id=1234
I have full control of my DNS records and web server, and I have currently set a wildcard (*) A record pointing to my IP in order to receive *.example.com requests. This seems to work OK.
My question now is what else I need to take care of. Is it right to assume that just adding a .htaccess file would be enough to handle all requests? Do I also need to add a VirtualHost for each subdomain I use? Is there anything else I should add or avoid?
I'm basically trying to figure out the simplest correct way of designing this.
The country you need is already in the request URL (in the hostname), so you could also read it directly server-side; moving it into a GET variable via rewriting introduces additional complications (for example, how you deal with POSTs).
You don't need separate vhosts unless the domains have different SSL certs.
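A minimal .htaccess sketch of the rewrite, assuming mod_rewrite is enabled and the vhost has a wildcard ServerAlias (*.example.com):

RewriteEngine On
# Capture the two-letter country from the subdomain...
RewriteCond %{HTTP_HOST} ^([a-z]{2})\.example\.com$ [NC]
# ...and the numeric product id from the path (normalise case in PHP)
RewriteRule ^(\d+)$ product.php?country=%1&id=$1 [L,QSA]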

Passing the referrer to omniture in server-side code

I'm trying to implement some Omniture requests server-side. I've got the calls set up and the requests make it to Omniture, but the referrer is not showing up there.
Here is an example of one of the Omniture URLs my code creates. Am I missing something?
http://[id].112.2o7.net/b/ss/[group]/1/H23.2/s1328206514850?AQB=1&ndh=1&ns=[id]&g=http%3A%2F%2F[domain]%2Flogin.asp&vid=1328206514850&pageName=Login%20Page%20!test!&r=http%3A%2F%2Ftest.com
The Internal URL Filters in the Report Suite Admin Console specify what your internal domains are (i.e. your own domains). Any referral from any other domain will be recognised as a referrer.
I generally use a Firefox addon like WATS to debug the variables that are on a particular page, including referrer.
Keep in mind that there needs to be a referral from an external site: if you just type in the URL, reload, or click a link on your own site, there is no referral. When testing this, I would create a page on another domain (e.g. localhost) and put a link to my page on it.
https://omniture-help.custhelp.com/app/answers/detail/a_id/1652/kw/JavaScript/related/1
COMPARISON: s.linkInternalFilters vs. Internal URL Filters
s.linkInternalFilters: The linkInternalFilters variable within the s_code.js file is used in exit link tracking. If s.trackExternalLinks is set to true, it is used to determine if a specific link a visitor clicked on is internal to your organization's site or not. Clicked links that match a value in s.linkInternalFilters are ignored, while links that do not match any values are sent to SiteCatalyst as an exit link.
Internal URL Filters: The Internal URL filters within the Admin Console is used in Traffic Sources reports, such as the Referring Domain report. Every s.t() request checks to see if the referring URL (contained within the referrer variable) matches any of the rules set up. Referring URLs that match any of these rules are excluded from all Traffic Sources reports, while referring URLs that do not are included.
It is recommended that s.linkInternalFilters and the Internal URL Filters match each other; however, the two operate completely independently and serve completely different functions.
The last part of that request is the referrer value, r=. Is that the correct value? You should also check the Internal URL Filters in the Admin Console for that report suite: new report suites typically have a value of . (a single period) set there, and if you do, no referrers will be recorded.
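As a sanity check on encoding, a sketch of assembling such a call server-side in Python; the tracking server, report suite and page values mirror the placeholders in the question.

from urllib.parse import urlencode

TRACKING_SERVER = '[id].112.2o7.net'  # placeholder from the question
REPORT_SUITE = '[group]'              # placeholder from the question

params = {
    'AQB': '1',
    'ndh': '1',
    'pageName': 'Login Page !test!',
    'g': 'http://[domain]/login.asp',  # current page URL
    'r': 'http://test.com',            # referrer; urlencode() escapes it
    'vid': '1328206514850',
}
beacon = ('http://' + TRACKING_SERVER + '/b/ss/' + REPORT_SUITE +
          '/1/H23.2/s1328206514850?' + urlencode(params))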