Acumatica behind reverse proxy causes issues with GetFile.ashx

I am running my dev instances of Acumatica behind a reverse proxy that consists of IIS with Application Request Routing 3.0.
For the most part things run and behave as expected; however, I have issues with images, e.g. logos, inventory pics, etc. The issue is that upon first load the URL delivered to the client is an absolute URL. If I move between branches, the logo URL switches to a relative URL and the image displays properly.
If you would like an example, here is a URL to a test instance:
https://2019r2.acumatica.govelocit.com/test20r1
user: admin
pass: P#ssword1
When you log in, the logo will have a broken-link icon:
Image with Broken Link
If you switch to a new branch, the logo shows:
Working Image
If you switch back to the branch you started with, the logo still displays fine. It is just an initial-load issue.
Thoughts?

The issue here is that the absolute URL is built using not the current URL scheme, but the scheme with which the site was called. Since your reverse proxy calls the site via HTTP, the links generated for images are also HTTP and therefore cannot be loaded. Additionally, you get the security warning because you are loading HTTP content on a site served over HTTPS.
If you just edit the URL scheme in the browser (changing http to https), the image will appear.
There are at least two good solutions to suggest:

1. Point your reverse proxy at the HTTPS site. This is quite a straightforward solution, though it might bring a little configuration headache if your reverse proxy does not like the self-signed IIS certificate. It also would not let you analyze the requests, as all transport will be encrypted.

2. The other solution is a little more sophisticated: keep calling the HTTP site, but make it think it is being called over HTTPS. For this you need to set the X-Forwarded-Proto header to https in your reverse proxy config.
Unfortunately I am not familiar with Application Request Routing 3.0, but for a better understanding, the nginx proxy location would look like this:
location ~ ^/(MySite) {
    proxy_pass http://localhost:82;           # note: the backend is still called over plain HTTP
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto https; # here you are tricking the site
}
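Since you are fronting the site with IIS and ARR rather than nginx, the same trick can be done with a URL Rewrite inbound rule that sets a server variable before forwarding. This is only a sketch, not a tested ARR config: the rule name and port are placeholders, and HTTP_X_FORWARDED_PROTO must first be added to the rule's allowed server variables (URL Rewrite > View Server Variables in IIS Manager):

<rewrite>
  <rules>
    <rule name="ReverseProxyToAcumatica" stopProcessing="true">
      <match url="(.*)" />
      <serverVariables>
        <!-- becomes the X-Forwarded-Proto request header on its way to the backend -->
        <set name="HTTP_X_FORWARDED_PROTO" value="https" />
      </serverVariables>
      <action type="Rewrite" url="http://localhost:82/{R:1}" />
    </rule>
  </rules>
</rewrite>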

Related

Iframe doesn't work in website while hotlinking is deactivated on remote server

I have an unusual problem when using an iframe on a site I'm building. Hotlink protection is off on both servers, yet the iframe still doesn't work. Both are SSL sites. What is strange is that I can add a subdomain to the website where the iframed page lives, redirect it to the other server, and the site shows up in the iframe after that; but pointed at it directly, it doesn't. Is there by chance a setting on the web server that doesn't allow external iframes? Or is it better to just leave this alone and do the subdomain hop (I'm wondering if the web host guys at HostGator did that on purpose for security, and I should just use the hop method I stumbled upon)? Both servers are running nginx; the web server uses nginx + Apache.
Embedding a site in iframes on external sites can be prevented with an HTTP header like X-Frame-Options.
Documentation can be found here:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options
This header can be set by either the web server or the software that is running on the web server.
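For example, since both of your servers run nginx, the header can be added with a single directive. A sketch, using https://www.theothersite.com as a stand-in for the allowed parent site; note that ALLOW-FROM is ignored by modern browsers, which use the Content-Security-Policy frame-ancestors directive instead:

# legacy header, honored by older browsers
add_header X-Frame-Options "ALLOW-FROM https://www.theothersite.com/";
# modern replacement for ALLOW-FROM
add_header Content-Security-Policy "frame-ancestors https://www.theothersite.com";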
Well, I got it working. In the .htaccess (after I turned hotlinking back on) I wrote, after the RewriteEngine on line:
Header set X-Frame-Options "ALLOW-FROM https://www.theothersite.com/"
(I also made sure AllowOverride All was set for that directory in the server config; note that AllowOverride itself is only valid in the server configuration, not inside .htaccess.)
And it works! Of course I added both the http and https URLs to the exception list. Now I can iframe the page and use the document-form POST method.

HTTP Basic Auth in Selenium Grid

I want to implement HTTP Basic Authentication for Selenium Grid; how do I do that? For example, I want requests to the grid to be rejected unless the URL carries credentials. I need to create something like http://username:password@mygrid.com:4444/wd/hub for our internal Selenium Grid.
OK, I was able to achieve what I needed. I installed nginx and added the Selenium Grid endpoint to it. Then I added
auth_basic "Grid's Area";
auth_basic_user_file /etc/apache2/.htpasswd;
in the nginx.conf. That's it.
Please remember the grid has multiple URIs and does not have any root (in nginx terms) URI. So if you proxy, say, /grid to http://localhost:4444/grid/console, the static content cannot be served. In this case we need to proxy / to http://localhost:4001. This is because the static content is served from a different URI; in our case it is served from /grid/resources/org/openqa/grid/images/, which is different from /grid/console.
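Putting those pieces together, a minimal nginx sketch might look like this (assuming the hub listens on localhost:4444 and the .htpasswd file already exists; server names and ports are placeholders to adjust to your setup):

server {
    listen 80;
    server_name mygrid.example.com;

    location / {
        # ask for credentials before anything reaches the grid
        auth_basic "Grid's Area";
        auth_basic_user_file /etc/apache2/.htpasswd;

        # proxy the whole URI space so /grid/console, /grid/resources/... and /wd/hub all resolve
        proxy_pass http://localhost:4444;
        proxy_set_header Host $host;
    }
}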
As far as getting SSL working, I followed this guide and it was super easy.

Prevent http page from redirecting to https page

I have a website (userbob.com) that normally serves all pages as HTTPS. However, I am trying to have one subdirectory (userbob.com/tools/) always serve content as HTTP. Currently, it seems like Chrome's HSTS feature (which I don't understand) is forcing my site's pages to load over HTTPS. I can go to chrome://net-internals/#hsts and delete my domain from Chrome's HSTS set, and the next query will work as I want without redirecting to an HTTPS version. However, if I try to load the page a second time, it ends up redirecting again. The only way I can get it to work is to go to chrome://net-internals/#hsts and delete my domain from Chrome's HSTS set after each request. How do I let browsers know that I want all my pages under userbob.com/tools/ to load as HTTP? My site uses an Apache/Tomcat web server.
(Just FYI, the reason I want the pages in the tools directory to serve pages over http instead of https is because some of them are meant to iframe http pages. If I try to iframe an http page from an https page I end up getting mixed-content errors.)
HTTP Strict Transport Security (or HSTS) is a setting your site can send to browsers which says "I only want to use HTTPS on my site - if someone tries to go to a HTTP link, automatically upgrade them to HTTPS before you send the request". It basically won't allow you to send any HTTP traffic, either accidentally or intentionally.
This is a security feature. HTTP traffic can be intercepted, read, altered and redirected to other domains. HTTPS-only websites should redirect HTTP traffic to HTTPS, but there are various security issues/attacks if any requests are still initially sent over HTTP so HSTS prevents this.
The way HSTS works is that your website sends an HTTP header, Strict-Transport-Security, with a value of, for example, max-age=31536000; includeSubDomains on your HTTPS responses. The browser caches this and activates HSTS for 31536000 seconds (1 year) in this example. You can see this HTTP header in your browser's web developer tools or by using a site like https://securityheaders.io . By using the chrome://net-internals/#hsts page you are able to clear that cache and allow HTTP traffic again. However, as soon as you visit the site over HTTPS it will send the header again and the browser will revert to HTTPS-only.
So to permanently remove this setting you need to stop sending that Strict-Transport-Security header. Find where it is set in your Apache/Tomcat server and turn it off. Or better yet, change it to max-age=0; includeSubDomains for a while first (which tells the browser to expire the cached entry after 0 seconds and so turns HSTS off without anyone having to visit chrome://net-internals/#hsts, as long as they visit the site over HTTPS to pick up this header), and then remove the header completely later.
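For example, if the header happens to be set with Apache's mod_headers (an assumption; it may equally be set by Tomcat or the application itself), the two steps would look like this:

# step 1: tell returning browsers to expire their cached HSTS entry
Header always set Strict-Transport-Security "max-age=0; includeSubDomains"
# step 2, some time later: remove the line above, or explicitly stop sending the header:
# Header always unset Strict-Transport-Security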
Once you turn off HSTS you can revert back to having some pages on HTTPS and some on HTTP with standard redirects.
However, it would be remiss of me not to warn you against going back to HTTP. HTTPS is the new standard, and there is a general push to encourage all sites to move to HTTPS and penalise those that do not. Read this post for more information:
https://www.troyhunt.com/life-is-about-to-get-harder-for-websites-without-https/
While you are correct that you cannot frame HTTP content on a HTTPS page, you should consider if there is another way to address this problem. A single HTTP page on your site can cause security problems like leaking cookies (if they are not set up correctly). Plus frames are horrible and shouldn't be used anymore :-)
You can use rewrite rules to redirect HTTPS requests to HTTP inside the subdirectory. Create an .htaccess file inside the tools directory and add the following content:
RewriteEngine On
RewriteCond %{HTTPS} on
RewriteRule (.*) http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
Make sure that apache mod_rewrite is enabled.
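If it is not enabled yet, it is a one-liner; a sketch, assuming a Debian-style layout (paths and commands differ on other distributions):

# Debian/Ubuntu:
a2enmod rewrite && service apache2 restart
# other layouts: make sure this line in httpd.conf is uncommented:
# LoadModule rewrite_module modules/mod_rewrite.so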
Basically, any HTTP 301 response to an HTTPS request that indicates a redirect target on HTTP should never be honored by any browser; servers doing that are clearly violating basic security, or are severely compromised.
However a 301 reply to an HTTPS request can still redirect to another HTTPS target (including on another domain, provided that other CORS requirements are met).
If you navigate an HTTPS link (or one triggered by a JavaScript event handler) and the browser starts loading that HTTPS target, which replies with a 301 redirect to HTTP, the browser should behave as if it had hit a 500 server error or a connection failure (DNS name not resolved, server not responding, timeout).
Such server-side redirects are clearly invalid, and website admins should never do that! If they want to close a service and inform HTTPS users that the service is hosted elsewhere and no longer secure, they MUST return a valid HTTPS response page with NO redirect at all, and this should really be a 4xx error page (most probably 404 PAGE NOT FOUND); they should not redirect to another HTTPS service (e.g. a third-party-hosted search engine or parking page) which does not respect CORS requirements, or sends false media types (it is acceptable, though, not to honor the requested language and to display that page in another language).
Browsers that implement HSTS are perfectly correct and going in the right direction. But I really think that the CORS specifications are a mess, tweaked just to still allow advertising networks to host, and control themselves, the ads they broadcast to other websites.
I strongly think that serious websites that still want to display ads (or any tracker for audience measurement) for valid reasons can host these ads/trackers themselves, on their own domain and over the same protocol: servers can still fetch the ad content they want to broadcast by downloading/refreshing these ads themselves and maintaining their own local cache. They can track their audience themselves by collecting the data they need and filtering it on their own server if they want this data to be analysed by a third party: websites will have to seriously implement the privacy requirements themselves.
I now hate those too many websites that, when visited, are tracked by dozens of third parties, including very intrusive ones like Facebook and most advertising networks, plus many weak third-party services that have very poor quality/security and send very bad content they never control (including fake ads, fake news, promotion of illegal activities and illegal businesses, invalid age ratings...).
Let's return to the origin of the web: one site, one domain, one third party. This does not mean that sites cannot link to other third-party sites, but this must be done only with an explicit user action (tapping or clicking), and visitors MUST be able to know where this will take them, or which content will be displayed.
This is even possible for embedded videos (e.g. YouTube) in news articles: the news website can itself host a cache of static images for the frame and an icon for the "play" button; when users click that icon, it activates the third-party video, and in that case the third party will interact directly with that user and can collect other data. But the unactivated content will be tracked only by the origin website, under its own published policy.
In my local development environment I use the Apache server. What worked for me was: open your config file in sites-available/yoursite.conf, then add the following line inside your VirtualHost:
Header always set Strict-Transport-Security "max-age=0"
Restart your server.

How to ensure my website loads all resources via https?

URL in question: https://newyorkliquorgiftshop.com/admin/
When you open the above page, you can see in the console that there are lots of error messages saying "...was loaded over HTTPS, but requested an insecure stylesheet.."
This website was working well until this problem showed up all of a sudden. I am not very familiar with HTTPS, but I have contacted GoDaddy: the SSL certificate is valid, and there is no obvious problem with https://newyorkliquorgiftshop.com. I am stuck here. In my experience with HTTPS websites, if the URL of the website's homepage is https, then every resource it loads is via https too. I don't know why my website behaves differently, and I don't know where to start to solve the problem. Any hint is appreciated, especially articles about HTTPS related to my problem. (I have done brief research on HTTPS, but most of the articles I found are about basic concepts.)
If you have access to the code (I'm not sure what you built the website with), try using https instead of http in the URLs you use to load your stylesheets and script files.
For example one of the errors is
Mixed Content: The page at 'https://newyorkliquorgiftshop.com/admin/' was loaded over HTTPS, but requested an insecure script 'http://www.newyorkliquorgiftshop.com/admin/view/javascript/common.js'. This request has been blocked; the content must be served over HTTPS.
You are requesting the .js file over HTTP; try using HTTPS, like so:
https://www.newyorkliquorgiftshop.com/admin/view/javascript/common.js
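Concretely, that means changing the scheme in the tag that loads the file; a sketch of the markup, wherever it lives in your admin template:

<!-- before: blocked as mixed content -->
<script src="http://www.newyorkliquorgiftshop.com/admin/view/javascript/common.js"></script>
<!-- after: served over HTTPS, no longer blocked -->
<script src="https://www.newyorkliquorgiftshop.com/admin/view/javascript/common.js"></script>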

ARR with SSL offloading: app needs to know it was SSL

I have set up a web farm with ARR, using SSL offloading. Although the connection from ARR to the content site is proceeding with just HTTP, the application running on the site needs to know the original URL was HTTPS, so that links given in the result can be HTTPS. Can this be done?
I know I can capture the original HTTPS status as a new server variable (I'm using HTTP-X-ORIGINAL-HTTPS) using URL Rewrite on the ARR server. But how can I restore it to the content site using URL Rewrite? Obviously a redirect rule is not appropriate; a none action that sets server variables seems like it might be. I don't have an SSL binding on the content site. Do I have to make my content application look for the HTTP-X-ORIGINAL-HTTPS? Seems ugly.
Eventually I did -- I made the content application look for the request header HTTPS. (I have also switched from ARR to haproxy because haproxy gives me wildcard-bound TLS termination for free.)
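For reference, the haproxy side of that setup can pass the original scheme along with one directive; a sketch with made-up names, assuming TLS terminates in a haproxy frontend with certificates under /etc/haproxy/certs/:

frontend https_in
    mode http
    bind *:443 ssl crt /etc/haproxy/certs/
    # tell the backend the original request arrived over TLS
    http-request set-header X-Forwarded-Proto https
    default_backend content_site

backend content_site
    mode http
    server app1 127.0.0.1:80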