DNS resolves correctly using command-line tools but fails in the browser - amazon-s3

dig, wget, nslookup and curl all work perfectly for a specific URL that I pointed to another server less than 24 hours ago.
The problem is that the browsers (Chrome, Safari and Firefox) just refuse to resolve it. The strangest part is that Postman resolves it successfully (testing the OPTIONS and GET methods separately), yet the browsers still don't return a proper response.
DNS checks come back positive, so I started suspecting the problem actually lies in the headers of the HTTP requests being sent - especially since different responses come back for requests that don't include the default browser headers (issued through the command-line tools and Postman) versus requests that do (issued by the browsers automatically, or manually through the dev tools).
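For the record, this is roughly the kind of comparison I mean (example.dev stands in for the real domain, and the headers shown are just typical browser defaults, not an exact capture):
# bare request, roughly what the command-line tools send
curl -sv -o /dev/null http://example.dev/
# same request with typical browser headers added
curl -sv -o /dev/null http://example.dev/ \
  -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0 Safari/537.36' \
  -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' \
  -H 'Accept-Language: en-US,en;q=0.9' \
  -H 'Upgrade-Insecure-Requests: 1'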
After fully flushing my local system's DNS cache, including the browsers' caches, and even trying another device on another network, I still get no response in the browser.
I kept going and tried to verify that through a VPN (locally - which didn't work) and through an online web proxy tool (which did work).
Finally, I extracted the router's default DNS server address and ran nslookup on the URL again, this time explicitly specifying that DNS server. After getting a successful response with the correct values, I'm now fairly sure the HTTP request itself is causing the problem.
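For reference, this is the kind of lookup I mean (192.168.1.1 standing in for the router's DNS address, example.dev for the real domain):
nslookup example.dev 192.168.1.1
dig @192.168.1.1 example.dev +short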
The URL is hosted using Amazon S3's static website hosting option, which I have used many times before with this exact configuration and never had a problem with. Looking into recently added changes/features suggested that I may need to explicitly set a CORS policy on the newly created bucket, on top of the usual public access policy.
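The CORS policy I'm referring to looks roughly like this when applied through the AWS CLI (bucket name and rules here are placeholders, not my exact configuration):
cat > cors.json <<'EOF'
{
  "CORSRules": [
    {
      "AllowedOrigins": ["*"],
      "AllowedMethods": ["GET", "HEAD"],
      "AllowedHeaders": ["*"],
      "MaxAgeSeconds": 3000
    }
  ]
}
EOF
aws s3api put-bucket-cors --bucket my-example-bucket --cors-configuration file://cors.json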
After applying that as well, it still doesn't seem to work.
As a quick change of direction that might clarify what's going on (I started to think the browser might not be getting the correct Content-Type header in the response - it should be text/html - and therefore doesn't handle the URL as expected), I replaced the static file hosting with a 301 redirect on the S3 bucket. Again, it all works perfectly through the command-line tools, but not through the browsers.
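For completeness, redirecting the whole bucket instead of serving static files looks roughly like this with the AWS CLI (names are placeholders):
cat > website.json <<'EOF'
{
  "RedirectAllRequestsTo": {
    "HostName": "example.com",
    "Protocol": "http"
  }
}
EOF
aws s3api put-bucket-website --bucket my-example-bucket --website-configuration file://website.json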
Anyway, the browser just doesn't seem to complete any of the requests being sent to the URL.
It might be that the OPTIONS pre-flight request is failing, so the browser never goes on to issue the GET request, or that the URL simply isn't found along the DNS route the browser is taking - I currently can't tell which of these it is.
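A quick way to test the pre-flight theory from the command line (again with example.dev as a placeholder):
curl -i -X OPTIONS http://example.dev/ \
  -H 'Origin: http://example.dev' \
  -H 'Access-Control-Request-Method: GET'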
Any ideas? (Besides the fact that some DNS servers along the chosen route can simply take longer to update/refresh their cache - which doesn't appear to affect my local machine's DNS route in this particular case. I say that with caution, having checked the different parts of the DNS configuration and prioritization across my system (Mac OS X), and the response does come back with the correct address.)

Found my answer here:
https://serverfault.com/questions/942030/aws-s3-static-hosting-how-to-debug-connection-timeout
As linked there, more details can be found here:
Non-Authoritative-Reason header field [HTTP]
Solution & Explanation: Because of the domain extension I purchased (.dev), Chrome was silently upgrading the request to HTTPS - the .dev TLD is on Chrome's HTTP Strict Transport Security (HSTS) preload list, so all .dev domains must be served over HTTPS only. That's why the issue kept showing up even when I explicitly typed http:// into the address bar.
This can be resolved by putting a CloudFront distribution with HTTPS support in front of the S3 static hosting, as usual (still, it's worth noting that HSTS listings can cause this in other situations too - the .dev domain extension is just one of them).
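A quick way to see the mismatch from the command line (example.dev as a placeholder): plain HTTP - which is all an S3 static website endpoint offers for a custom domain - responds fine, while the HTTPS request that Chrome silently forces for .dev domains fails:
# works: plain HTTP, which is what the command-line tools were using
curl -I http://example.dev/
# fails (no valid HTTPS endpoint for the custom domain): what Chrome is silently forced to use
curl -Iv https://example.dev/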
Useful Resources (for debugging purposes)
In addition to what is stated here:
https://gist.github.com/stollcri/7c09bafc97223481920e
You can issue a lookup query (and also add or delete entries in your local set of HSTS listings) through Chrome's internal page: chrome://net-internals/#hsts
You can also check the current listings here: https://hstspreload.org/

Related

Cookies starting with HASH_ pointing to the same Path created when accessing my Apache Server from outside my local network

I have software based on Java HttpServlet whose source code I cannot modify.
This software sits behind an SSL enabled Apache Server with a VirtualHost.
When accessing a webpage, JSESSIONID cookies are set for different Paths.
This works fine as long as I am connected to the same network as my server.
When accessing the server from outside of the local network, a counterpart for each JSESSIONID cookie is created with a HASH_ prefix, resulting in many HASH_JSESSIONID cookies.
Additionally, my session is lost/destroyed mid-request when accessing the server from outside, resulting in unwanted Java code being executed.
I suppose that the creation of these HASH_JSESSIONID cookies is somehow linked to the loss of my session, thus resulting in an unstable user experience.
The only difference between both cookies is that the HASH_-cookies are SameSite:"None" and the original-cookies are SameSite:"Lax".
Editing the cookies through Apache via Header edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure;SameSite=Lax only results in the original cookies being edited.
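To check which cookies the directive actually touches, I've been dumping the raw Set-Cookie headers like this (hostname is a placeholder):
curl -s -D - -o /dev/null https://my-server.example.com/app/ | grep -i '^set-cookie'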
Has anyone had any experience with these HASH_-cookies and can point me in the right direction, to e.g. prevent their creation?
Cheers!

Get FQDN from domain

this is my first question here, so I will try my best.
I am trying to get the protocol and the FQDN (fully qualified domain name) from a bunch of domains, e.g. get https://es.aliexpress.com from aliexpress.com.
I have tried Selenium WebDriver, but it takes too long to process all the domains (even with short timeouts and blocking images).
I am asking if someone knows a way to do this without loading the content, something like wget but only for the URL.
Thank you for reading.
Not really...
First of all, http and https have nothing to do with domain names. Those are transfer protocols.
Ignoring that part, what you are calling the FQDN is often generated at the time you access the site.
For instance, many websites redirect the browser from the desktop site to a mobile version (the typical m.something.com) based on your User-Agent string. Which means www.something.com and m.something.com are both valid answers.
In the example you gave, aliexpress.com prepended es., which means there is most likely some code on the server that reads either your location (based on IP address) or a locale setting in your browser and directs you to the es version of the website as opposed to the en or dk version.
These changes can be done via an .htaccess file in the root folder of the website, or via back end code.
Google Chrome itself automatically tries to add www. if it looks like you typed a URL into the everything bar.
It's also possible that the URL is one giant redirect. Some websites buy up extra domain names that all redirect to their core site. So even if you input xyz.com you'll end up at abcd.com.
There is no algorithmic way to go from a base URL to what you're calling the FQDN.
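That said, if all you want is to see where one particular client ends up after the redirects, without downloading the page body, something along these lines works (with the caveat above that the result still depends on the client's headers and location):
curl -sIL -o /dev/null -w '%{url_effective}\n' -A 'Mozilla/5.0' http://aliexpress.com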
P.S. Here is an article about what FQDN means.

How to create a friendly url in Tomcat?

I want to change my application URL from //localhost:8080/monitor/index.html to just monitor, so that typing monitor in the browser opens my application. Is there a way to achieve this? Can someone suggest the configuration changes that would be required?
Can I map my short URL to the existing one, maybe somewhere in web.xml? I am not sure about the approach; any suggestions would be great.
Thanks and regards
Deb
You're mixing up several different protocol layers in your question.
If you enter nothing but "monitor" in the browser URL bar, the browser will first look up "monitor" in DNS and, finding nothing, will probably send a query to Google or your configured search engine. In the past, browsers have taken other steps, such as appending ".com" and prepending "www.", but I don't think modern browsers do that any more.
So far, your server is not even remotely involved.
If you're a large ISP user (TimeWarner, Comcast) and use their DNS it's also possible the ISP will intercept your failed DNS lookup and route the request to a "helpful" search page (i.e. SPAM) of their own.
At this point the request is still nowhere near your server.
I suppose you could mess with the /etc/hosts file on your local system to resolve "monitor" to the proper hostname, but that's an extremely brittle solution that has to be hard coded on each machine you want to have this "shortcut" link (and which breaks when the hostname changes).
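If you did go the /etc/hosts route anyway, the entry would look something like this (per machine, and brittle, as noted above):
127.0.0.1   monitor
Even then, typing just monitor in the address bar may still trigger a search; what actually resolves is something like http://monitor:8080/monitor/index.html, unless you also move Tomcat to port 80 and deploy the application as the root context.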
You're much better off just setting up a web shortcut in your browser that points to the right place.

Liferay using http and https

I'm trying to use Liferay with both http and https.
if I include in portal-ext.properties:
company.security.auth.requires.https=true
web.server.protocol=https
It works fine with https, but with http the themes display incorrectly because it tries to load https://domain.com/theme
If I remove these two lines it works fine for http but not for https.
What can I do?
IMHO mixed mode, i.e. offering http as well as https, never gives you what you expect: you expect security from https, but you always risk leaking session information, e.g. being vulnerable to session-hijacking attacks (à la Firesheep). My actual advice would be to go https-only if you're doing https for security. Read on if that's not an option for you, but don't complain when you find information leaking (this isn't specific to Liferay; it applies to any web-based environment).
What is the exact problem that you have with the themes? (images/css through http?) Which version of Liferay are you using?
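If it's mixed content (images/css requested over http on an https page), a quick way to spot hardcoded http:// resources is something like this (domain.com as in your example; adjust accordingly):
curl -s https://domain.com/ | grep -o 'http://[^"]*' | sort -u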
Before you specify more, you might want to configure your theme's "virtual path"; this will rewrite all the URLs referring to your theme. It's typically used to serve static resources through a webserver or CDN, but it works with any kind of URL. Simply using a protocol-relative URL should work (I love this mostly unknown feature):
Add this to your theme's liferay-look-and-feel.xml:
<look-and-feel>
<theme id="my" name="My Theme">
<virtual-path>//domain.com/myTheme</virtual-path>
</theme>
</look-and-feel>
Note that the URL omits the protocol part (http: or https:), so the browser will use the same protocol that the whole page is loaded with.
Edit: corrected the xml. Will investigate if there's a problem with protocol-relative URLs in themes.
Edit 2: Something is weird. It seems, virtual-path does not work like this, but I recall it did earlier. Do you add domain.com as cdn.host.http or cdn.host.https? (this would be concatenated)
On a related note, please check whether you're running Apache in front of your appserver. In that case you might be forwarding some traffic to the portal (e.g. in the http virtual host) but not forwarding the traffic in the https virtual host.

Strange domains in mod_pagespeed cache folder

About a year ago I installed mod_pagespeed on my VPS, set it up and left it running. Recently I was exploring the files on my server, went into the pagespeed cache folder and discovered some strange subfolders.
The folders are usually named like ,2Fwww.mydomain.com, or ,2F111.111.111.111 for IP addresses. I was surprised to see some domains that do not belong to me, such as:
24x7-allrequestsallowed.com
allrequestsallowed.com
m.odnoklassniki.ru
www.fbi.gov
www.securitylab.ru
It looks like something dodgy is going on. Was my server compromised, or is there a reasonable explanation?
That does look peculiar. Everything in the cache folder should be files that mod_pagespeed tried to rewrite. There are two ways that I know of that this can happen:
1) You reference some third-party resource (say an image from another domain, or google analytics script) and you have explicitly enabled rewriting of that domain with ModPagespeedDomain www.example.com or ModPagespeedDomain *.
2) Your server accepts HTTP requests with invalid Host headers. Try (for example) wget --header="Host: www.fbi.gov" www.yourdomain.com/foo/bar.html. If your server accepts requests like that, it may be providing mod_pagespeed with an incorrect base domain, and subresources would then be fetched from that same domain (so if www.yourdomain.com/foo/bar.html references some.jpeg, and your server accepts invalid Host headers, we could fetch www.fbi.gov/foo/some.jpeg as the resource). There was a recent security release that makes sure all of these subrequests are done against localhost (not arbitrary third-party websites). Please see: https://developers.google.com/speed/docs/mod_pagespeed/CVE-2012-4001
You might want to look through these folders and see what specific resources are in there. I think that the biggest concern you should have is that someone might be trying to perform an XSS attack on your users or maybe a DDoS attack against another website (like www.fbi.gov), using your server as one vector. I do not think that these folders are indicative that your server itself is compromised.
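To get a quick overview of which hostnames ended up in the cache, and what was fetched for a suspicious one, something like this works (the cache path below is a common default - adjust it to whatever file cache path your configuration uses):
ls -d /var/cache/mod_pagespeed/,2F* | sort
find /var/cache/mod_pagespeed/,2Fwww.fbi.gov -type f | head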
If you would like to discuss this more, https://groups.google.com/forum/?fromgroups#!forum/mod-pagespeed-discuss is a good list to join and email.