Disable retrying of POST request via AJAX if connection was dropped - apache

Problem
Sometimes important HTTP POST requests sent via AJAX get duplicated, so several entries of the same data end up in the production database, which is of course not what users intend.
What is important: the affected users have a poor internet connection, and this request takes a long time (9-20 seconds). We can't reduce this time because the business logic requires it.
Requests are sent over HTTP, not HTTPS.
Details
We have Apache/2.4.18 (Ubuntu) with PHP module loaded and two frontends: one for desktop (AngularJS) and one for mobile (React) devices. AngularJS is sending requests with the $http service, and React is using whatwg-fetch (tried whatwg-fetch-timeout also).
We know from the Apache access.log and the PHP logs that the same request arrives from the client several times and PHP processes each one without errors. But! These requests are logged with a 200 status code, %b > 0, and %O = 0, which means the connection was aborted before the response could be sent (see the Apache logging format docs).
Reproduce
So we tried to reproduce it, and the same thing does happen sometimes. The case below is just one reproduced example, but this happens on mobile devices (iPhones and Android phones) with various browsers installed. We have also reproduced it in Firefox on Windows.
Environment: Windows; both Chrome and Firefox; React frontend version; no proxy used.
Here is what we found out: Google Chrome shows the request as "Stalled", but internally it keeps resending the request (and each attempt is actually received and processed on the server), because it gets a network error (ERR_CONNECTION_CLOSED). Only when the browser finally manages to fetch a response does it stop resending the request.
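For illustration, one commonly suggested mitigation for this class of problem is a client-generated idempotency key that lets the backend recognise and ignore browser-level retries; a minimal sketch (the endpoint, payload and header name are made up, and the matching server-side deduplication is assumed, not shown):

```typescript
// Sketch only: endpoint, payload and header name are hypothetical, and the
// backend would need matching deduplication logic (store the key, insert once).
const payload = { title: 'example entry', amount: 42 };

// One key per logical submission. Browser-level retransmissions of this same
// request carry the same header, so the server can detect the duplicates
// instead of inserting the same data several times.
const idempotencyKey = crypto.randomUUID();

fetch('/api/entries', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Idempotency-Key': idempotencyKey,
  },
  body: JSON.stringify(payload),
})
  .then((res) => res.json())
  .then((data) => console.log('created', data))
  .catch((err) => console.error('request failed', err));
```

The AngularJS frontend could send the same header through the config object of $http.post.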
Gathered info
URL_REQUEST event log from chrome://net-internals/help.html#events (headers are also available there)
Google Chrome dev tools request screenshots:
Headers tab
Timeline tab
I personally can't reproduce this even once, and I suppose my good internet connection is the reason.
I have googled a lot and even found some similar Chromium bugs, but nothing exactly about this problem.
Thank you in advance for any useful information.
I am also not quite sure which tags I should set for this question, so if I should add or remove some, please tell me.

Related

Correctly timing out XHR Requests in React-Native on Android

We're facing an issue with handling unexpected behaviour when performing XMLHttpRequests on Android devices using React-Native. We've seen the app become unable to complete API calls, even though the device is connected to the internet perfectly well (the browser can access non-cached sites just fine). The only way to resolve this issue for our users has been to completely restart the app.
To understand the issue and its severity, we wrapped all our API calls in a timer function in production and sent reports to Sentry whenever a request took longer than 30 seconds to finish. We've been receiving these kinds of reports in the hundreds per day, with the duration sometimes being hours or even days.
First, instead of using whatwg-fetch, we moved to axios so that we could manually set the timeout of each request, but this ended up not helping at all.
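For reference, the per-request timeout looks roughly like this (endpoint and value are illustrative, not our real code):

```typescript
import axios from 'axios';

// Illustrative endpoint and timeout value only.
async function fetchProfile(userId: string) {
  const response = await axios.get(`https://api.example.com/users/${userId}`, {
    // Rejects the promise after 15 s on the JS side -- which, as described
    // above, did not actually stop the underlying request from hanging.
    timeout: 15000,
  });
  return response.data;
}
```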
Second, we dove deeper into how React-Native actually implements timing out XHR requests on Android, and found that it uses OkHttp3 under the hood. OkHttp has default values for its connect, read and write timeouts, and React-Native allows developers to change the connect timeout here. However, OkHttp also has a method for setting a call timeout (covering everything from connecting to reading the response body), but that defaults to 0 (no timeout) and React-Native doesn't allow users to change it. More on OkHttp timeouts here.
My question is whether this could be the cause of our worries and whether it should be reported to React-Native as a bug. Let's assume the following case:
app makes an API call to a resource
okhttp is able to connect to the resource within specified timeout limit (default 10s)
okhttp is able to write the request body to the server within timeout limit (10s)
the server processes the request but for some reason fails to start sending a response to the client. I guess there could be many reasons for this, like the server being disconnected, crashing, or simply losing the connection to the client without the client noticing it. As there is no timeout here, okhttp will happily wait for days on end for the server to start responding to the request.
Am I missing something here or misunderstanding how timeouts work in okhttp? Is there another, perhaps better, solution than implementing the ability for developers to set callTimeout on API calls performed on Android? And also, isn't it a bit stupid that developers can't set their own write and read timeouts? Can't this lead to unexpected behaviour when you want to send large amounts of data in either direction on a slow connection? 10s is quite long, but perhaps not long enough for all use cases.
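For completeness, a JS-level guard along these lines (hypothetical endpoint) at least lets the app stop waiting, although it merely rejects the promise and cannot abort the hanging native call:

```typescript
// Rejects if the wrapped promise has not settled within `ms` milliseconds.
// This does NOT cancel the underlying native (OkHttp) request -- it only lets
// the app give up, show an error and offer a retry.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`Request timed out after ${ms} ms`)),
      ms,
    );
    promise.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (error) => { clearTimeout(timer); reject(error); },
    );
  });
}

// Usage sketch: give up after 30 s instead of waiting for hours.
async function loadData() {
  const res = await withTimeout(fetch('https://api.example.com/data'), 30000);
  return res.json();
}
```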
Thanks for your help in advance!

Scrapy on Ubuntu web server getting 417 error

I have been developing a crawling script for a number of news websites and using Scrapy to handle the logic.
When I run my script on an Ubuntu web server (Digital Ocean, if that helps), a lot of the websites that return 200 on my local machine return 417 instead.
I was wondering how I should fix this, if it is a problem at all. I'm actually not quite sure whether it is affecting the final output, but it seems like it has been.
Some of my own research has turned up:
http://www.checkupdown.com/status/E417.html . I've tried adding an Expect header to my requests, which hasn't worked
I've heard that it might be a problem with HTTP 1.1 vs 1.0? EDIT: Nope. Scrapy's HTTPDownloaderHandler automatically chooses 1.1 if it is available
417 (Expectation Failed) is the error a web server returns when it cannot meet the expectation your client declared in its Expect request header (most commonly Expect: 100-continue).
This looks like a scrapy bug or, more likely, misconfiguration.
It seems that your public IP address either was already banned, or got banned while you were scraping, by the web server of the page you want to scrape. For the first situation you can reboot your instance to get a new public IP (at least this works on Amazon). For the second scenario, here are some tips from the official documentation to avoid getting banned:
rotate your user agent from a pool of well-known ones from browsers (google around to get a list of them)
disable cookies (see COOKIES_ENABLED) as some sites may use cookies to spot bot behaviour
use download delays (2 or higher). See the DOWNLOAD_DELAY setting.
if possible, use Google cache to fetch pages, instead of hitting the sites directly
use a pool of rotating IPs. For example, the free Tor project or paid services like ProxyMesh
use a highly distributed downloader that circumvents bans internally, so you can just focus on parsing clean pages. One example of such downloaders is Crawlera
Additionally, you can reduce the concurrent requests setting in your spider; that worked for me once.

Capture web driver network traffic across all browsers

I want to capture all the network calls made by WebDriver in Java. I am not doing any UI testing, just testing JS execution and the requests and responses of some network calls.
I tried using Browser Mob as is suggested in most forums, but I need it to work across all browsers. It worked flawlessly with Firefox, but I was facing some issues with the others. The Safari driver doesn't even support a Proxy capability.
I don't want to use Fiddler as it involves some manual steps around invoking it and storing the calls, whereas Browser Mob, being an in-code proxy, can be integrated in a smoother fashion.
I also tried using the RC-like package included in the Selenium standalone server package. But I have some HTTPS calls and some nested iframes across domains. I am particularly interested in a cross-domain POST call, and it doesn't work out that well there. Also, people keep saying it's not recommended to use that package.
So I had an idea for a solution: use a standalone proxy server running on a machine. Using host entries, we'll point WebDriver to hit the proxy instead of the actual server. The proxy will record all the incoming calls and route them to the actual server host. Later, I can make a request to the proxy which will return all the calls it intercepted. I am not sure whether that is still called a proxy or a router.
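To make that idea concrete, here is a minimal sketch of such a recording proxy in Node/TypeScript (the choice of Node, the target host/port and the /__recorded inspection path are all assumptions; a real setup would also have to deal with HTTPS and large or streamed bodies):

```typescript
import * as http from 'http';

// Made-up values: the real application the proxy forwards to.
const TARGET_HOST = 'app.internal.example';
const TARGET_PORT = 80;

type RecordedCall = {
  method?: string;
  url?: string;
  requestHeaders: http.IncomingHttpHeaders;
  status?: number;
};

const recorded: RecordedCall[] = [];

http
  .createServer((clientReq, clientRes) => {
    // Made-up inspection endpoint: the test later asks the proxy what it saw.
    if (clientReq.url === '/__recorded') {
      clientRes.writeHead(200, { 'Content-Type': 'application/json' });
      clientRes.end(JSON.stringify(recorded, null, 2));
      return;
    }

    const entry: RecordedCall = {
      method: clientReq.method,
      url: clientReq.url,
      requestHeaders: clientReq.headers,
    };
    recorded.push(entry);

    // Forward the request to the real host and stream the response back.
    const proxyReq = http.request(
      {
        host: TARGET_HOST,
        port: TARGET_PORT,
        method: clientReq.method,
        path: clientReq.url,
        headers: { ...clientReq.headers, host: TARGET_HOST },
      },
      (proxyRes) => {
        entry.status = proxyRes.statusCode;
        clientRes.writeHead(proxyRes.statusCode ?? 502, proxyRes.headers);
        proxyRes.pipe(clientRes);
      },
    );
    proxyReq.on('error', () => {
      clientRes.writeHead(502);
      clientRes.end();
    });
    clientReq.pipe(proxyReq);
  })
  .listen(8080, () => console.log('recording proxy listening on :8080'));
```

With a host entry pointing the browser at this box, it acts as a drop-in stand-in for the real server, and the recorded calls can be fetched and asserted on from the Java test afterwards.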
I came across TCPmon, but it's no longer supported. Does anyone know of similar tools that can run on Unix systems, or any alternate solutions?
We modified the Fiddler rules script to include a new exec action. If you use their native script editor, it also provides auto-complete, and we were able to find our way around it comfortably. The syntax is similar to that of JavaScript.
The Fiddler package comes with an ExecActions.exe which can be used to pass console arguments to a running Fiddler instance from the command prompt.
The code we wrote processed all the sessions captured by Fiddler and wrote them to a file in a custom JSON format; we later used GSON to deserialize it.
Please let me know if you want further details.

Apache - resources randomly hang (resulting in slow page loads)

HTTP requests for resources randomly - roughly 1-5% of the time (per resource, not per page load) - take extremely long to be delivered to the browser (~20 seconds), and not uncommonly they even hang indefinitely. (Server details are listed at the bottom.)
This results in about every fifth request to any page appearing to hang, due to a JavaScript resource hanging within the <head> tag.
The resources are CSS, JS and small image files served directly by Apache (no scripting language involved), although page loads (involving PHP or Rails) also occasionally hang, with the same odds as any other resource (1-5% of the time), so this seems to be an Apache request-level issue.
Additional information:
I've checked the idle workers on server-status and, as expected, I still have 98% of my workers idle. Although this may be relevant, since the hangs apply to static resources, which are not served by FastCGI.
I am not the only one with this problem. Someone else is also having the same problem, and from a different IP address.
This happens in both Google Chrome and Firefox as HTTP clients.
I have tried constantly force refreshing the same JS file in a new tab. It eventually led to the same kind of hanging.
The Timing tab in Google Chrome reports 34 ms waiting and 19.27 s receiving for one of these hanging requests. Would that mean Apache already had the file contents ready to deliver, and only had trouble delivering them in a sensible amount of time?
error.log doesn't show anything related. There are some expected 404 and 500 entries in it, but those aren't connected to the hanging; they are genuine errors for nonexistent pages and PHP fatal errors.
I get some suspicious 206 Partial Content responses, mostly for static content, although the hangs happen more often than those partial contents. I mostly get 200 OK responses everywhere, and I can confirm indefinitely hanging resources that were reported as 200 OK in the Apache access.log.
I do have mod_passenger installed for Redmine. I don't know if that helps, but suspiciously this server has it installed unlike all the other servers I worked with. Although mod_passenger shouldn't affect static content, especially not within a non-ruby project folder, should it?
The server is using Apache 2.4 Event MPM on Ubuntu 13.10, hosted on Digital Ocean.
What may be causing these hangings and how could I fix this?
I had the same problem, so after reading this thread I tried setting KeepAlive Off in my Apache config, which seems to have helped: all resources have the expected waiting times now.
Not a great "fix", but at least I am one step closer to figuring out the cause, and pages aren't taking 15s to fully load in the meantime.

IE8 Post Body Becomes Empty after form submission

Okay here is our setup:
A simple form is being submitted via AJAX using Prototype 1.7 to an Apache server and captured by ColdFusion. (We have noticed similar bugs on pages that submit form data in the conventional way, but those pages are used far less.)
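For reference, a minimal sketch of the kind of Prototype 1.7 submission described (form id, endpoint and handlers are illustrative, not our exact code):

```typescript
// Prototype 1.7 globals, declared here only so the sketch stands alone.
declare const Ajax: any;
declare const $: any;

// Illustrative form id and ColdFusion endpoint.
new Ajax.Request('/order/save.cfm', {
  method: 'post',
  // serialize(true) returns the form fields as a parameter object, which
  // Prototype url-encodes into the POST body.
  parameters: $('orderForm').serialize(true),
  onSuccess: (response: any) => {
    // update the page with response.responseText
  },
  onFailure: (response: any) => {
    // surface the error to the user
  },
});
```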
Some of our clients are reporting an error. After looking through the logs and doing live testing from their machines, Firebug Lite reports that the request is being sent with the POST data.
However, on the server side the POST data is not present in the raw logs, in ColdFusion's FORM object, or in GetHttpRequestData().
This problem has been isolated to IE only, even when running Chrome Frame, and is intermittent.
We cannot reproduce this error with our IE8 installs on our machines, or on their machines running Firefox or Chrome.
Any thoughts on this extremely difficult bug to track down?
Do you have an HTTP proxy involved in this somewhere? We have had issues in the past - I can't recall the details, but I know it had something to do with using AJAX to POST. The proxy was configured such that a certain combination of headers would make it misbehave. Take a good look at the HTTP headers coming from the browser, comparing a request that works against one that doesn't.