I'm a web application developer who runs the site http://myfav.es. We've been struggling with this issue for about a month now.
We use the HTML application cache spec (www.w3.org/TR/offline-webapps/) with dynamically generated manifest files (myfav.es/personal.manifest) to speed up page delivery. The manifests are generated per user by PHP and served with the proper headers.
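For context, the generated manifest is just a plain-text response served with the Content-Type: text/cache-manifest header; a minimal sketch of what one might look like (the paths and revision comment below are placeholders, not our real entries):

```
CACHE MANIFEST
# rev 2010-08-26-01 -- version comment so clients notice changes

CACHE:
/css/main.css
/js/app.js

NETWORK:
*
```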
We also use gzip compression to serve the site from a linux/apache host.
For the life of the site, users have reported getting an ERR_FAILED error in Chrome similar to this screenshot: twitpic.com/272237.
The error is intermittent, occurring once every 200-300 visits, but once it appears it persists on every page refresh, including hard refreshes, which presumably means an app cache failure is causing the browser to keep loading a broken version of the site. Mysteriously, just clearing cookies fixes it.
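To see which phase of the cache update is failing, here is a rough diagnostic sketch we could drop into the page, assuming nothing beyond the standard window.applicationCache API:

```js
// Log every application cache lifecycle event and the cache status,
// so we can tell whether the manifest download or a resource fetch fails.
var ac = window.applicationCache;
if (ac) {
  ['checking', 'downloading', 'progress', 'cached',
   'updateready', 'noupdate', 'obsolete', 'error'].forEach(function (name) {
    ac.addEventListener(name, function () {
      console.log('appcache:', name, 'status =', ac.status);
    }, false);
  });

  // If a fresh cache has been downloaded, swap it in and reload once.
  ac.addEventListener('updateready', function () {
    if (ac.status === ac.UPDATEREADY) {
      ac.swapCache();
      window.location.reload();
    }
  }, false);
}
```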
I'm completely out of ideas on how to approach this error, and googling the error message mostly turns up confused users with voodoo-ish fixes. I've seen the error myself, along with a number of complaints from other Chrome users, so I'm fairly certain it isn't caused by one particular user's abnormal settings or browser preferences.
Does anyone have any insight into the cause of this browser error and its origins? Is it likely server-side, or a byproduct of the app design?
Related
What could cause Cloudflare's custom URL cache purge to not work across different users of the same account? Basically, I own the account and the purge works for me (I live in Europe), but for my colleague in the USA it does not.
Could the issue be with account settings? She either gets an error, or sometimes the purge simply doesn't seem to take effect.
I've run into cases in the past where Cloudflare features weren't working because of browser plugins or browser configuration. First, confirm whether the error is actually coming from Cloudflare or from the browser.
Ask her to try a different browser and a different device to confirm whether the issue is tied to the account or to her location/environment.
Inspect the page while performing the action to find out what's happening behind the scenes.
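One quick way to confirm whether Cloudflare is even handling the request is to check its response headers for the purged URL, for example from the browser console on the site itself (the path below is a placeholder):

```js
// Check whether the response passed through Cloudflare and whether it was a cache HIT/MISS.
fetch('/path/that/was/purged', { cache: 'no-store' }).then(function (res) {
  console.log('status:', res.status);
  console.log('cf-cache-status:', res.headers.get('cf-cache-status')); // HIT, MISS, EXPIRED, BYPASS, ...
  console.log('cf-ray:', res.headers.get('cf-ray')); // present when Cloudflare handled the request
});
```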
I'm trying to scrape my own banking information by automating the process using Selenium in Ruby.
I'm running into a bizarre situation where performing the exact same sequence in the browser (whether just the normal browser or private/incognito) works fine, but when I try to log in under a Selenium-controlled browser I get back a strange 500 error from the server.
I've noticed the browser console logs also look different in terms of certain logging messages related to cookies, JS errors, libraries being loaded, etc.
I have found an answer on SO mentioning one possible difference in Chrome, a specific "cdc" string that might be detectable, but is there some corresponding difference in Firefox/Geckodriver that could be used to detect the fact that I'm automating the browser?
I'm not really sure where to look, because my understanding was that running via Selenium should have basically identical behaviour to running the browser by hand.
Would love some guidance on what mechanisms may be in play to explain the difference in behaviour!
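One documented difference I do know of (whether the bank actually uses it is an assumption on my part): the WebDriver spec requires automated browsers, including Firefox under geckodriver, to expose navigator.webdriver as true, and a page can branch on it with something as simple as:

```js
// In a geckodriver/chromedriver-controlled browser this flag is true;
// in a normal browsing session it is false or undefined.
if (navigator.webdriver) {
  console.log('Automation detected via navigator.webdriver');
  // a site could refuse the login attempt or return an error page at this point
}
```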
Using MS Edge and Apache with PHP, I just discovered via access.log that when I have the JavaScript debug panel (i.e. the developer tools) open, the browser makes every HTTP call twice. Closing the panel fixed the issue of all my insert statements being executed twice.
Question: Does this doubling of HTTP calls happen in every/most browsers, so that I need to guard against it, or is it something unique to MS Edge?
I can't speak for all browsers and all developer tools, but for IE and Edge, the first time you open the tools and then open a JS file in the sources view, the browser will request the file again. That request may be served from the local browser cache, or may not be, depending on the cache settings for the file being requested.
The reason the tools need to make this request is that browsers often throw away the original source file once it has been parsed into something they can execute, since the raw text isn't needed to run the page.
However, after you've opened the developer tools, the browser will keep sources around for future navigations, either in the tools front end or elsewhere. Discarding sources is an optimization for the common case: the odds of the tools being used on any given navigation are very low, so there is no point keeping the source text around.
Of course some files are never cached by the browser and will need to be downloaded when requested by the tools, for example sourcemapped files.
In general, any resource on your site that can be accessed by HTTP GET should be idempotent. That is, a GET shouldn't change the resource being requested (or, more generally, the state of your site), so an extra request shouldn't cause a problem.
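As an illustration of that last point (sketched with Node/Express purely for brevity; the same idea applies to PHP handlers, and the in-memory items array is a stand-in for a real database):

```js
const express = require('express');
const app = express();
app.use(express.json());

const items = []; // stand-in for the real data store

// Safe: GET only reads, so a duplicated request from the dev tools changes nothing.
app.get('/items', (req, res) => {
  res.json(items);
});

// The insert lives behind POST, which the tools' source re-requests never trigger.
app.post('/items', (req, res) => {
  items.push(req.body);
  res.status(201).end();
});

app.listen(3000);
```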
We are in the process of implementing the Success Factors LMS, and we are trying to play and view SCORM-compatible files exported from Adobe Captivate 8 and 9 in it.
I get the message: 'ERROR - unable to acquire LMS API, content may not play properly and results may not be recorded. Please contact technical support'.
I have tried SCORM versions 1.2 (v3) and 2004 (2nd and 4th editions). We can view the content; however, it does not track, show as complete, etc.
We also produce SCORM-compliant files using Skillcast and Articulate, but we hit the same issue: we can view the content after closing the API error window, but it still does not track.
Anyone experienced this problem before? Or know of a fix?
Many thanks
Normally this issue comes up when the course is unable to get the SCORM API from the LMS. I have seen a ton of SCORM content running in Success Factors before, so I wonder if the issue is in the setup. Are you seeing any "Access Denied" type errors in the browser's developer tools? I wonder if the course simply cannot find, or does not have access to, the player window. If the course is launching in a new window, you may want to try launching it in the frameset; I have seen folks get around this issue by making sure the player and the SCO are in the same window.
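For reference, SCORM content usually locates the API by climbing its window hierarchy with something like the routine below (a simplified sketch of the standard discovery logic; SCORM 1.2 looks for window.API, SCORM 2004 for window.API_1484_11). If the player window isn't reachable from the content window, this search fails and you get exactly that 'unable to acquire LMS API' message:

```js
// Simplified SCORM 1.2 API discovery: climb parent windows, then try the opener.
function findAPI(win) {
  var hops = 0;
  while (!win.API && win.parent && win.parent !== win && hops < 500) {
    hops++;
    win = win.parent;
  }
  return win.API || null;
}

function getAPI() {
  var api = findAPI(window);
  if (!api && window.opener) {
    api = findAPI(window.opener);
  }
  if (!api) {
    console.error('Unable to acquire LMS API');
  }
  return api;
}
```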
If you want to rule out the content as the cause, you can always test it in SCORM Cloud's free sandbox (https://cloud.scorm.com) to make sure the course is properly asking for the API.
If you have any other questions, we would be happy to help; you can just shoot us an email at support@scorm.com.
Thank you!
Joe
The error occurs because the content is not talking to the Learning Management System (LMS). The code that runs to initialize the session never succeeds; there is no return "ping" from the LMS.
You will get this error when you publish to SCORM and run the content from your desktop, or from a web server that isn't connected to an LMS. If it occurs when launching from an LMS, it can mean either that the SCORM API isn't configured correctly, or that your content server is on a different domain (cross-domain) from your application servers.
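You can often see the cross-domain case directly in the browser console: when the content frame and the LMS player are on different origins, even reading properties off window.parent throws a security error, so the standard API discovery can never reach the API. A rough illustration:

```js
// If the content is served from a different domain than the LMS player,
// the same-origin policy blocks access to the parent window's properties.
try {
  var api = window.parent.API; // throws for a cross-origin parent
  console.log('API found:', !!api);
} catch (e) {
  // Typically a SecurityError such as
  // "Blocked a frame with origin ... from accessing a cross-origin frame"
  console.error('Cross-domain launch, cannot reach the LMS API:', e.message);
}
```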
To test, try launching your content in different browsers. Our system was configured in such a way that Firefox and Chrome treated our content as cross-domain and threw the SCORM API error, while Internet Explorer worked just fine.
In the end, it turned out that our server configuration, in tandem with our firewall and security settings, made the content server appear cross-domain, and we had to redeploy our content servers inside the firewall.
I am part of the developer team for quite a large online system using ASP.NET 4.
ASP.NET AJAX completely breaks down in WebKit browsers: we get full page postbacks where we should be getting partial postbacks for the UpdatePanels.
I am starting to believe it has something to do with my application configuration, mainly for the following reasons:
If I move the AJAX-enabled controls to a new project, they work as expected in all browsers, including WebKit.
I created a static .aspx page with nothing but an UpdatePanel, a ScriptManager and a button that makes a Literal visible on click.
I get no JavaScript errors in any browser, and I see an HTTP request for the ASP.NET AJAX script (ScriptResource.axd) in both Firebug and the Chrome developer tools.
I also tried the old Safari fix from this highly referenced thread.
Edit: After a bit more testing and HTTP sniffing, I noticed a major difference between the test application and the actual application. The test application generates two additional .axd files which are not generated by the actual application. These WebResource.axd files seem to contain data related to the async postback. However, this only happens for WebKit browsers; for Firefox the actual application does generate the WebResource.axd files, as I can see them in Firebug.
What I am asking the community for is any ideas or suggestions as to what could be the cause of this problem, and whether I am correct to assume that the problem is probably on the server side.
Thanks for any help
The problem was due to a deprecated config file, used to limit the content that bots/spiders/crawlers receive, which was being loaded by mistake thanks to our lovely in-house CMS.
In short, if you get behaviour similar to my case, check your browser-capability configs.
I was having a similar issue; however, my problem occurred in all browsers, not just WebKit. I ended up going through the web.config file and found that the line <xhtmlConformance mode="Legacy"> was preventing WebResource.axd from working properly. The fix was to simply remove that line from my web.config.
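For illustration, the offending fragment looked roughly like this; deleting the element (or switching the mode back to the default, Transitional) restores WebResource.axd and ScriptResource.axd:

```xml
<configuration>
  <system.web>
    <!-- Legacy XHTML rendering mode: this is what broke WebResource.axd / ScriptResource.axd.
         Remove the element, or use mode="Transitional" (the default). -->
    <xhtmlConformance mode="Legacy" />
  </system.web>
</configuration>
```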
For a little more information on xhtmlConformance, visit http://technet.microsoft.com/en-us/librarY/ms228268(v=vs.85).aspx.
If you scroll all the way to the bottom, you'll notice it explicitly states that this setting causes issues with WebResource.axd and ScriptResource.axd.