"This content cannot be displyed in a frame" displayed after a period of time - internet-explorer-11

I have a vendor web application that sits within a frame and is viewed with IE 11 on both Windows 7 and Windows 10. Emulation mode is set to the default (Edge). The application functions properly most of the time; however, it will randomly display the error page "This content cannot be displayed in a frame". The headers being set are:
X-Frame-Options: SAMEORIGIN
X-Powered-By: Servlet/3.1
X-XSS-Protection: 1; mode=block
All content is coming from the same domain.
Everything I have found on this topic so far relates to the issue occurring when the page is initially loaded, not at some random point in time after the page has loaded. It also seems to occur not while the user is on the page, but when they shift focus to another page or browser tab and then return.
Any ideas?

You could try changing the X-XSS-Protection value to 0. From this answer, we can see that:
The token mode=block will prevent browsers (IE8+ and WebKit browsers) from rendering the page (instead of sanitizing it) if a potential reflected (non-persistent) XSS attack is detected.
So this might be the reason why the content can't be displayed sometimes.
Besides, if you're trying to open an HTTPS page from a non-SSL site, you can also get this kind of error. You could refer to this thread.
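For example, if the vendor application or a fronting proxy lets you override that header, the aim would be for the framed responses to carry the value below; exactly how you set it depends on your server, so treat this only as the target value:
X-XSS-Protection: 0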

Related

Page not updating when hitting back button, after submitting form in POST request

I have a simple test web site with 2 different pages, Page1 and Page2. Pages are generated by a web server.
All responses contain these headers:
Cache-control: no-cache, no-store, must-revalidate
Pragma: no-cache
Page1 contains a link to Page2 and a TextArea element, enclosed in a FORM tag. Page2 is a blank page (it doesn't contain anything useful).
Clicking on the Page2 link (inside Page1) will submit the form (and TextArea value) to the server and switch to Page2.
Each time Page1 is requested from the server, a new line is added to the TextArea. This happens at server-side (no DOM manipulation via JavaScript, at browser side).
So, when application starts, Page1's TextArea contains:
"Line 1"
If I click on the link to Page2 and then hit the back button, Page1's TextArea now contains 2 lines, as expected:
"Line 1"
"Line 2"
This shows 2 things:
The BFCache (back-forward cache) is not being used, because each time the back button is hit a new request is sent to the web server. This is also confirmed by the pageshow event's persisted property being false (see the sketch after this list).
The browser is updating the page correctly, because new content added to the page at the server side is shown at the browser side, as expected.
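For reference, the persisted check mentioned in the first point can be done with a handler like the one below; this is only a diagnostic sketch, not code from the test site:
window.addEventListener('pageshow', function (event) {
  // event.persisted is true only when the page was restored from the BFCache
  console.log('Restored from BFCache: ' + event.persisted);
});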
However, if I clear the TextArea, click on the same link to Page2 and then hit the back button, the TextArea still shows empty content. I would expect it to show exactly one line of text (inserted by the server). This happens in all tested browsers, including WebKit browsers, Firefox and even Internet Explorer.
Using the Network tab in Developer Tools, I can see that the server responds with the correct content, and the preview sub-tab (in dev tools/network) shows exactly that.
So, in the second scenario, the browser is definitely retrieving the updated content from the server but refuses to update the page with it. It seems that whenever the page is submitted, the browser uses the submitted page when the user hits the back button, regardless of content updated at the server side.
This issue is very similar to another here: How to prevent content being displayed from Back-Forward cache in Firefox?
with the same symptoms, but in that case there is no form submit and a change in the response headers would fix it.
My questions:
a) Why do browsers retrieve the updated content from the server but not use it to update the page in this particular scenario? Is it documented behavior?
b) Is there a way to change this behavior? I'm particularly thinking of a way to force the browser to update the page with the response from the server (the same way it does when no submit is involved).
BTW, submitting the form via an AJAX request is not a solution but merely a workaround for this issue, bearing in mind that a full postback may be needed in some scenarios.
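Not from the question, but one direction that might be worth testing: browsers restore form-control state on history navigation independently of the HTTP response, so explicitly discarding that restored state could be tried. The sketch below assumes the TextArea has id "log" (an illustrative name, not from the question) and relies on defaultValue reflecting the server-rendered markup; whether the handler runs after the browser's own restoration varies by browser, so this is only a starting point.
window.addEventListener('pageshow', function () {
  var area = document.getElementById('log'); // hypothetical id for the TextArea
  if (area) {
    // value holds whatever the browser restored; defaultValue reflects the
    // text that arrived in the server-rendered markup.
    area.value = area.defaultValue;
  }
});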

ADF Essentials in Glassfish: AUTOSUBMIT and VALUECHANGELISTENER attributes make a message appear in the Web Browser

I have an ADF project in JDeveloper 11.1.2.4.0; one of my pages contains this:
<af:selectOneChoice label="HEllO" value="#{bean.data}" id="id1" autoSubmit="true" valueChangeListener="#{bean.createNewData}">
<f:selectItems value="#{data.list}" id="id2"/>
</af:selectOneChoice>
I deployed it to Weblogic and everything worked fine.
Then I deployed it to Glassfish using the ADF Essentials libraries, and it seems to work fine, but there is unexpected behavior anywhere the autoSubmit attribute is used. Every time the value of a component with autoSubmit="true" changes, I get this behavior...
Firefox: A message saying: "To display this page, Firefox must send information that will repeat any action..."
IE: A message saying: "To display the webpage again, the web browser needs to resend the information you've previously submitted.."
Chrome: It goes back to the previous page.
Opera: It goes back to the previous page.
EDIT: The same happens when I have partialSubmit set to true. I realized that the valueChangeListener attribute has to be present in order to get the message.
autoSubmit="true" will, by default, make your page resubmit entirely. You should use partial triggers to avoid this: set the ID of this component in the partialTriggers attribute of the component you want to refresh (form, table, etc.), and set partialSubmit="true" on the first component.

In QTP how to identify if a web element is visible on the current visible browser window

On my full-screen browser page the header is visible but the footer is not visible in the current window. To see the footer we need to page down N times, because the intermediate content is populated dynamically as we page down. So my problem is knowing how many times I need to page down to see the footer. In addition to this question: is it possible to know whether a web element is below the currently visible browser area?
If you are using QTP to identify and operate on the objects, you need not scroll down. Make sure that you are using strong locator properties (htmlId, ObjectId, etc.) for identifying the element and your code will work just fine. QTP works on the HTML source of the web page, so it is immaterial whether or not the element you want to work on is visible. I am assuming there are no AJAX components here; with AJAX, you need to employ a different strategy.
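If you do need to know whether an element sits below the visible area (the second part of the question), one option is to ask the browser itself. The sketch below is plain JavaScript that could be injected from QTP if your version supports running a script in the page; the "footer" id is an assumed example, adjust it to the real page:
// Diagnostic sketch; returns true when the element starts below the visible viewport.
function isBelowViewport(el) {
  var rect = el.getBoundingClientRect();
  var viewportHeight = window.innerHeight || document.documentElement.clientHeight;
  return rect.top >= viewportHeight;
}
var footer = document.getElementById('footer'); // hypothetical id
if (footer) {
  console.log('Footer below visible area: ' + isBelowViewport(footer));
}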

Rails 3: How to prevent the browser from loading a page from cache on back/forward navigation?

I have the following situation:
Page A: /something/new
Which posts back to: /something/create
Which redirects to Page B: /something/edit
So far it all works. Now, /something/edit is a page that lets you do a bunch of things through AJAX, so it starts up empty, and as you use it it gets "fuller", so to speak. If you reload at any time, you get everything back, rendered by the server.
However, if after being redirected and making modifications to the page you hit Back and then Forward again, the browser (Chrome at least) doesn't hit the server again (not even an ETag check that might result in a 304); it just loads Page B from cache, which shows up empty and can be quite confusing...
When first rendering Page B, the server responds with the following headers:
Cache-Control:must-revalidate, private, max-age=0
Connection:Keep-Alive
Content-Length:18577
Content-Type:text/html; charset=utf-8
Date:Thu, 02 Aug 2012 20:19:59 GMT
Server:WEBrick/1.3.1 (Ruby/1.9.3/2012-04-20)
Set-Cookie: (redacted)
X-Miniprofiler-Ids:["ma2x1rjc0kgrijiug5dj","nnmovj2wz1lux85jwhd3"]
X-Request-Id:2dd3fa62799beadc1b39b8db1aa5f45f
X-Runtime:0.245014
X-Ua-Compatible:IE=Edge
I don't see an ETag or anything similar that could be interfering. Also, if I'm interpreting Cache-Control correctly (I'm not very experienced with it, though), it seems to be saying not to cache, ever...
Is there any way to avoid this behaviour, and have the browser hit the server again on Back/Forward?
Thanks!
Daniel
I would investigate the answers posted here.
Is there a cross-browser onload event when clicking the back button?
I have since changed this answer because some of the utilities I was referring to are pretty old and unmaintained, which did not make for a useful answer.
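One pattern that comes up in the answers linked above is to listen for the pageshow event and force a reload when the page was served from the back-forward cache; a minimal sketch, assuming a full reload of the edit page is acceptable:
window.addEventListener('pageshow', function (event) {
  // event.persisted is true when the page was restored from the back-forward cache
  if (event.persisted) {
    window.location.reload();
  }
});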

Combining age verification and google indexing

As spiders will generally not execute JavaScript, I am thinking of taking one of the options below in order to get them to index the content of a website that requires age verification.
My preferred solution:
Check for an 'ageverification' cookie. If it does not exist, add some JavaScript to redirect the user to ~/verifyage.aspx, which will add the required cookie and redirect the user back to their previous page.
Another solution:
As above, but do not redirect the user. Instead, if the cookie doesn't exist, draw the age verification form 'over the top' of the existing page.
Another solution:
Add a 'Yes I am over 18' anchor link that a crawler can follow. I am slightly skeptical about the legality of this.
Any insight or ideas greatly appreciated.
What I do: I store age verification in session data. If the session variable does not exist, the server appends a div to the end of the body (after the footer) with the "click to verify" or "click to exit" options. I use CSS to have it cover the content.
For the css - I use:
display: block; width: 100%; height: 100%; position: fixed; top: 0px; left: 0px; z-index: 9999;
That causes the div to cover all other content in a graphical browser even though it is placed at the very end of the body.
For users without JS enabled, the "Enter" link points to a web page that sets the session variable and returns the user to the page they requested. That results in two page loads for them to get to the content they want, which is not ideal, but it is the only way to do it for browsers without JS enabled.
For JS-enabled browsers, a small script attached to the page changes the "Enter" link's href to # and attaches a very basic function to the click event, so that clicking "Enter" triggers an XMLHttpRequest telling the server the person clicked "Enter". The server then updates the session and responds to the XMLHttpRequest with a 200 OK, triggering the JavaScript to hide the age-verification div covering the content. The session is thus updated so the server knows the user verified their age, and the user gets to see the content they wanted with no page reload in the browser, a much better user experience.
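A minimal sketch of that client-side piece, assuming a hypothetical /verify-age endpoint, an "Enter" link with id "age-enter" and an overlay div with id "ageverification" (all three names are illustrative, not taken from the setup described above):
document.addEventListener('DOMContentLoaded', function () {
  var enterLink = document.getElementById('age-enter'); // hypothetical id for the "Enter" link
  if (!enterLink) { return; }
  enterLink.setAttribute('href', '#');
  enterLink.addEventListener('click', function (e) {
    e.preventDefault();
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/verify-age'); // hypothetical endpoint that sets the session flag
    xhr.onload = function () {
      if (xhr.status === 200) {
        // Server confirmed the session flag; hide the overlay covering the content.
        document.getElementById('ageverification').style.display = 'none';
      }
    };
    xhr.send();
  });
});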
The age verification thus works without JavaScript by sending the user to the verify page the stateless way, or in a much friendlier way with JavaScript.
When a search spider crawls the site, it gets the age verification div on every page, because a spider will not have the necessary session variable set, but since the div is at the very end of the HTML body the spider still indexes the real content first.
You've got a real problem either way.
If you let the crawler onto the age-verified portion of your site, then it has that content in its index. Which means it will present snippets of that to users who search for things. Who haven't been through your age verification. In the case of Google, this means users actually have access to the entire body of content you were putting behind the verifywall without going through your screener - they can pull it from the Google cache!
No-win situation, sorry. Either have age-verified content or SEO, not both. Even if you somehow tell the search engine not to spit out your content, the mere fact that your URL shows up in search results tells people about your site's (restricted) contents.
Additionally, about your JavaScript idea: this means that users who have JavaScript disabled would get the content without even knowing that there should have been a click-through. And if you display a banner on top, that means that you sent the objectionable content to their computer before they accepted. Which means it's in their browser cache. Or they could just hack out your banner and have at whatever it is you were covering up without clicking 'OK'.
I don't know what it is your website does, but I really suggest forcing users to POST a form to you before they're allowed to view anything mature. Store their acceptance status in a session variable. That's not fakeable. Don't let the search engine in unless it's old enough, too, or you have some strong way to limit what it does with what it sees AND strong information about your own liability.