Imagine an Internet connection of 20 KB/s and a 20 MB file: it takes roughly 1000 seconds to download the file completely.
When using proxies like Browsec, the download fails partway through, each time after a random number of seconds. When this happens, the web server log (the server is Apache) shows a 404 error. (Problem 1)
I thought 404 meant the file doesn't exist. Doesn't it? Yet the file does exist, and some percentage of it (again random: once 50%, once 20%, ...) gets downloaded.
I don't really know what happens between the proxy and the server, but I'm sure the browser never reports a failure; it looks as if the file downloaded completely.
Downloading the file directly, without any proxy, there is no problem and it completes. But Chrome can't resume the interrupted download, while Firefox can. Why doesn't Chrome resume the download? (Problem 2)
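As a workaround while problem 2 stands, a command-line client can resume an interrupted download, assuming the server supports HTTP Range requests (Apache does for static files). A minimal sketch; the URL is a placeholder:

    # -C - tells curl to check the size of the partially downloaded
    # local file and continue from that byte offset via a Range
    # header; -O keeps the remote file name.
    curl -C - -O http://example.com/path/to/file.zip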
Any help with these two problems is appreciated.
Thanks.
When I start the test, the browser opens but it does not load the URL.
After 10-15 seconds it stops loading (see screenshot).
I have updated IntelliJ and updated Playwright to version 1.28.1 (this didn't help).
It happens in 7 out of 10 trials.
Any idea why this behavior suddenly started?
Many thanks!
There are three possible causes for this error when hitting the URL:
Your browser version could be outdated and blocked by the site, so try updating the Playwright browsers using the command 'npx playwright install'.
Check whether the site is blocked by a firewall or proxy setting, and if so, get it whitelisted.
As the failure also looks like a timeout, check whether you have a decent internet connection; raising Playwright's navigation timeout can help too, as in the sketch below.
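If a slow connection is the suspect, a quick experiment is to give navigation more time and a lighter completion condition. A minimal sketch, assuming the Node.js flavor of Playwright; the URL is a placeholder:

    import { chromium } from 'playwright';

    (async () => {
      const browser = await chromium.launch();
      const page = await browser.newPage();
      // Allow 60 s instead of the default 30 s, and treat navigation
      // as done once the DOM is parsed rather than on the 'load' event.
      await page.goto('https://example.com', {
        timeout: 60000,
        waitUntil: 'domcontentloaded',
      });
      await browser.close();
    })();

If the page loads reliably with the longer timeout, the problem is bandwidth or latency rather than blocking.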
Using MS Edge and Apache with PHP, I just discovered via access.log that when I have the JavaScript debug panel (i.e. the developer tools) open, the browser makes every HTTP call twice. Closing the panel fixed the issue of all my insert statements being executed twice.
Question: does this doubling of HTTP calls happen in every (or most) browsers, so that I need to look out for it, or is it something unique to MS Edge?
I can't speak for all browsers and all developer tools. But for IE and Edge, the first time you open the tools and then open a JS file in the sources view, the browser will try to request the file again. That request is sometimes served from the local browser cache and sometimes not, depending on the cache settings of the file being requested.
The reason the tools need to make this request is that browsers often throw away the original source file: it isn't needed to execute the page, since the source has already been parsed into something else the engine can work with.
However, once you've opened the developer tools, the browser will keep sources around across future navigations, either in the tools front end or elsewhere. Not keeping sources is a first-use optimization: it saves the browser from holding on to source text given the very low odds of the tools being opened on any given navigation.
Of course, some files are never cached by the browser and will need to be downloaded when requested by the tools, for example source-mapped files.
In general, any resource on your site that can be accessed by HTTP GET should be idempotent. That is, a GET shouldn't change the resource being requested (or, generally, the state of your site), so the additional requests shouldn't be an issue.
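If the duplicate requests show up as real load on the server, one mitigation is to make static files explicitly cacheable so the tools' re-request can be answered from the local cache. A minimal Apache sketch, assuming mod_expires is enabled and the lifetimes suit your deployment:

    # Requires mod_expires (a2enmod expires)
    <IfModule mod_expires.c>
        ExpiresActive On
        # Let browsers cache JS and CSS for a week
        ExpiresByType application/javascript "access plus 7 days"
        ExpiresByType text/css "access plus 7 days"
    </IfModule>

Note this only helps the re-fetch of static sources; it does nothing for non-idempotent endpoints, which shouldn't be exposed over GET in the first place.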
HTTP requests for resources randomly - about 1-5% of the time (per resource, not per page load) - take extremely long to be delivered to the browser (~20 seconds), and not uncommonly hang indefinitely. (Server details are listed at the bottom.)
This results in about every fifth page load appearing to hang, due to a JavaScript resource stalling within the <head> tag.
The resources are CSS, JS, and small image files, served directly by Apache (no scripting language involved), although page loads (involving PHP or Rails) also occasionally hang, with the same odds as any other resource (1-5% of the time), so this seems to be an Apache request-handling issue.
Additional information:
I've checked the idle workers on server-status and, as expected, 98% of the workers are still idle. This may be relevant, since the hangs affect static resources that are not served through FastCGI.
I am not the only one with this problem; someone else is seeing the same thing, from a different IP address.
This happens in both Google Chrome and Firefox as HTTP clients.
I have tried constantly force refreshing the same JS file in a new tab. It eventually led to the same kind of hanging.
The Timing tab in Google Chrome reports 34 ms waiting and 19.27 s receiving for one of these hanging requests. Would that mean Apache already had the file contents ready to deliver, and only had trouble delivering them in a sensible amount of time?
error.log doesn't show anything relevant. There are some expected 404 and 500 errors in it, but those aren't related to the hanging; they are genuine errors for nonexistent pages and PHP fatal errors.
I get some suspicious 206 Partial Content responses, mostly for static content, although the hanging happens more often than those partial responses. I mostly get 200 OK responses everywhere, and I can confirm indefinitely hanging resources that were reported as 200 OK in the Apache access.log.
I do have mod_passenger installed for Redmine. I don't know if that matters, but suspiciously, this server has it installed, unlike all the other servers I've worked with. mod_passenger shouldn't affect static content, though, especially not within a non-Ruby project folder, should it?
The server runs Apache 2.4 with the Event MPM on Ubuntu 13.10, hosted on DigitalOcean.
What might be causing these hangs, and how could I fix them?
I had the same problem. After reading this thread I tried setting KeepAlive Off in my Apache config, which seems to have helped - all resources have the expected waiting times now.
Not a great "fix", but at least I'm one step closer to figuring out the cause, and in the meantime pages aren't taking 15 s to fully load.
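For reference, a minimal sketch of the relevant directives in apache2.conf; the commented alternative and its values are assumptions to tune, not measured settings:

    # The workaround above: disable persistent connections entirely.
    KeepAlive Off

    # A softer variant: keep persistent connections but bound them
    # tightly, which can avoid long-held workers without giving up
    # connection reuse entirely.
    # KeepAlive On
    # KeepAliveTimeout 2
    # MaxKeepAliveRequests 100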
I'm a web application developer who runs the site http://myfav.es. We've been struggling with this issue for about a month now.
We use the HTML application cache spec - www.w3.org/TR/offline-webapps/ - with dynamically generated manifest files - myfav.es/personal.manifest - to speed up page delivery. These dynamically generated manifests are served by PHP with the proper headers, customized per user.
We also serve the site with gzip compression from a Linux/Apache host.
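For context, an application cache manifest is a plain-text response served with the text/cache-manifest content type; a minimal sketch of the shape of such a file (all entries here are placeholders, not our real manifest):

    CACHE MANIFEST
    # v2013-01-15 per-user build id

    CACHE:
    /css/site.css
    /js/app.js

    NETWORK:
    *

Because the browser only re-downloads cached entries when the manifest bytes change, a stale or failed manifest can pin users to a broken copy of the site.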
Throughout the life of the site, users have reported getting an ERR_FAILED in Chrome similar to this screenshot: twitpic.com/272237.
The error is intermittent, occurring once every 200-300 visits, but once it appears it persists across every page refresh, including hard refreshes, which presumably means an app cache failure is causing the browser to continuously load a broken version of the site. However, mysteriously, JUST clearing cookies causes the error to fix itself.
I'm completely out of ideas on how to approach this error, and googling the error message turns up a ton of confused users with voodoo-ish approaches to solving it. I've personally seen the error, along with a number of complaints from other Chrome users, so I'm fairly certain it cannot be caused by a particular user having abnormal settings or browser preferences.
Does anyone have any insight into the cause of this browser error and its origins - whether it's likely server-side or a byproduct of the app design?
I just upgraded my browser to Safari 4 and find that our website has some major issues specific to that browser version. As I click through pages on our site, it takes one or two clicks before the browser window simply goes blank. When the window goes blank there is no source to view, and no matter how many times I reload, or which other pages of the site I try to load, I still get the blank window. It's as if the server takes the request and simply returns a blank page.
If I wait over 15 seconds and then hit refresh again, the page loads fine. Not sure why it starts working again... Maybe a cache issue???
It's a PHP site and I've tried turning on error_reporting(E_ALL);, but that doesn't give any information. I also tried putting an echo statement at the very beginning of the index.php file and verified that the page still goes blank without echoing that statement, so I don't think the problem is PHP-specific. The Apache error log does not show any issues. I have the same site on my local development server and it doesn't have the problem.
Safari 4 is the only browser that shows this problem. Does anyone have ideas on how to debug or fix this?
My web server is Ubuntu Hardy running Apache 2 and MySQL 5.
We have an nginx load balancer in front of the Apache server, and I just figured out that Safari 4 requires the nginx keepalive_timeout setting to be 0. Took all day to figure that one out...
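For anyone hitting the same thing, this is a one-line change in the nginx configuration; a minimal sketch, placed in the http (or server) block:

    # Disable HTTP keep-alive so Safari 4 never tries to reuse a
    # connection the load balancer may have already torn down.
    keepalive_timeout 0;

The trade-off is that every request now pays connection setup cost, so it may be worth scoping the directive to the affected server block only.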
I've been having the same issue with Safari 4 on my site, but I found that when reloading pages that come back blank, the request never even makes it to the server: no entry shows up in Apache's logs.
The keep-alive setting on your load balancer sounds like a direction I could sniff around in. Not sure what leeway I'll have, though, being on shared hosting.
Mike
This looks like a Safari bug. We experience it too, and I have read other reports.
http://discussions.apple.com/thread.jspa?threadID=2064488&start=0&tstart=0