Long-running CSV download works with Chrome but not Postman/application

I have a legacy REST API that allows downloading of CSV files which it generates from a database.
Sometimes, the SQL is long-running and won't return any data for 5 minutes, then it starts to stream out the data.
For some reason, my application that consumes this API, as well as Postman, can't handle this 5-minute scenario. It eventually closes the connection and no data is received.
However, it works fine in Google Chrome: the browser idles for those 5 minutes or so, then the data slowly starts to download. It also seems to work with curl.
I have analyzed and compared the HTTP headers between Chrome and Postman, and they appear identical (including Keep-Alive/timeout, etc.). How is Chrome able to keep the connection open through the longer idle time and then stream the data? How can I change my application or Postman to handle the connection the way Chrome and curl do?
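For illustration, this is the kind of change I imagine making on the application side, assuming a plain Java HttpURLConnection client (our actual stack may differ); the idea is simply to tolerate a much longer idle period before the first byte arrives:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class CsvDownload {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint; the real report URL differs.
            URL url = new URL("https://example.com/api/report/export.csv");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();

            conn.setConnectTimeout(10_000);    // connecting is quick, keep this short
            conn.setReadTimeout(10 * 60_000);  // tolerate up to 10 minutes of idle time
                                               // before the first byte of the CSV

            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line);  // stream rows as they arrive
                }
            }
        }
    }

In Postman the closest equivalent seems to be the "Request timeout in ms" setting (0 = wait indefinitely), though I haven't confirmed that this is what is cutting us off.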

Related

Difference between running via Firefox versus Selenium WebDriver (GeckoDriver)

I'm trying to scrape my own banking information by automating the process using Selenium in Ruby.
I'm running into a bizarre situation where performing the exact same sequence in the browser (whether the normal browser or a private/incognito window) works fine, but when I try to log in under a Selenium-controlled browser I get back a strange 500 error from the server.
I've noticed the browser console logs also look different in terms of certain logging messages related to cookies, JS errors, libraries being loaded, etc.
I have found an answer on SO mentioning one possible difference in Chrome, a specific "cdc" string that might be detectable. Is there some corresponding difference in Firefox/GeckoDriver that could be used to detect the fact that I'm automating the browser?
I'm not really sure where to look, because my understanding was that running via Selenium should behave essentially identically to running the browser itself.
Would love some guidance on what mechanisms may be in play to explain the difference in behaviour!
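For example, I could compare what the page itself sees under automation versus a normal session. Here is a sketch with Selenium's Java bindings (I'm on Ruby, but the check is the same) that reads the navigator.webdriver flag defined by the WebDriver spec, which geckodriver-controlled Firefox reports as true:

    import org.openqa.selenium.JavascriptExecutor;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class WebdriverFlagCheck {
        public static void main(String[] args) {
            WebDriver driver = new FirefoxDriver();  // geckodriver-controlled Firefox
            try {
                driver.get("https://example.com");   // placeholder page, not my bank
                Object flag = ((JavascriptExecutor) driver)
                        .executeScript("return navigator.webdriver;");
                // Under geckodriver this prints true; in a normal session the same
                // expression is false (or undefined in older builds).
                System.out.println("navigator.webdriver = " + flag);
            } finally {
                driver.quit();
            }
        }
    }

Is that flag (or something like it) what the site might be keying off, or are there other mechanisms I should be looking for?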

Disable retrying of POST request via AJAX if connection was dropped

Problem
Sometimes important HTTP POST requests sent via AJAX get duplicated, so several entries of the same data end up in the production database, which is of course not what users intend.
What is important is that users have poor internet connections and this request takes a long time (9-20 seconds). We can't reduce this time because it is a requirement of the business logic.
Requests are sent over HTTP, not HTTPS.
Details
We have Apache/2.4.18 (Ubuntu) with the PHP module loaded and two frontends: one for desktop (AngularJS) and one for mobile (React) devices. AngularJS sends requests with the $http service, and React uses whatwg-fetch (we tried whatwg-fetch-timeout as well).
We know from the Apache access.log and the PHP logs that the same request arrives from the client several times and PHP processes each one without errors. But these requests are logged with a 200 status code, %b > 0, and %O = 0, which means the connection was aborted before the response could be sent (see the Apache logging format docs).
Reproduce
So we tried to reproduce it, and the same thing does happen occasionally. The case below is just one reproduction; the problem occurs on mobile devices (iPhones and Android phones) with various browsers installed, and we have also reproduced it in Firefox on Windows.
Environment: Windows; both Chrome and Firefox; React frontend version; no proxy used.
Here is what we found out: Google Chrome shows the request as "Stalled", but internally it retries the request multiple times (and each attempt is actually received and processed on the server) because it gets a network error (ERR_CONNECTION_CLOSED). Only when the browser succeeds in fetching a response does it stop resending the request.
Gathered info
URL_REQUEST event log from chrome://net-internals/help.html#events (headers are also available there)
Google Chrome dev tools request screenshots:
Headers tab
Timeline tab
I personally can't reproduce this even once, and I suspect a good internet connection is the reason.
I have googled a lot and even found some similar Chromium bugs, but nothing exactly about this problem.
Thank you in advance for any useful information.
I am also not quite sure which tags I should set for this question, so if I should add or remove some, please tell me.

How to terminate an EventSource CGI?

A bit unsure where to look for this one...
Context:
an HTML5 web page that uses the HTML5 EventSource API / server-sent events to get refresh notifications
an OpenWrt Barrier Breaker server, running uHTTPd as the web server
a two-level CGI script that provides the server-sent events:
the CGI is a shell script (ash, not bash) that parses QUERY_STRING, and calls...
a C application that does the actual data extraction (from an SQLite database) and pushes the data to the web page
Everything works, except for one detail: when the web page is closed, the C application keeps running. Since it doesn't expect any user input, its current structure is a simple while(1) loop, so after some time the OpenWrt box has dozens of copies of the app running.
So the question: how can the application be changed to detect that the client isn't there anymore and that it should quit?
Thanks
[Edit]
Since posting this a few hours ago, I investigated whether the information was somehow available in the script's input stream. It appears it isn't.
I also found http://html5doctor.com/server-sent-events/, which describes a strategy for doing exactly this in a Node.js environment, but I have no idea how to translate it to a script-based one.
[/Edit]

Capture web driver network traffic across all browsers

I want to capture all the network calls made through WebDriver in Java. I am not doing any UI testing, just testing JS execution and the requests and responses of some network calls.
I tried using Browser Mob, as is suggested in most forums, but I need it to work across all browsers. It worked flawlessly with Firefox, but I was facing some issues with the others; the Safari driver doesn't even support a Proxy capability.
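For reference, the Firefox setup that works for me is roughly the following (a sketch against the BrowserMob Proxy 2.x and Selenium 3 Java APIs; the URL is a placeholder):

    import net.lightbody.bmp.BrowserMobProxy;
    import net.lightbody.bmp.BrowserMobProxyServer;
    import net.lightbody.bmp.client.ClientUtil;
    import net.lightbody.bmp.core.har.Har;
    import net.lightbody.bmp.proxy.CaptureType;
    import org.openqa.selenium.Proxy;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;
    import org.openqa.selenium.firefox.FirefoxOptions;

    public class HarCapture {
        public static void main(String[] args) {
            // Start the in-code proxy and capture request/response bodies.
            BrowserMobProxy proxy = new BrowserMobProxyServer();
            proxy.start(0);
            proxy.enableHarCaptureTypes(CaptureType.REQUEST_CONTENT,
                                        CaptureType.RESPONSE_CONTENT);

            // Point the browser at the proxy.
            Proxy seleniumProxy = ClientUtil.createSeleniumProxy(proxy);
            FirefoxOptions options = new FirefoxOptions();
            options.setProxy(seleniumProxy);

            WebDriver driver = new FirefoxDriver(options);
            try {
                proxy.newHar("capture");
                driver.get("https://example.com");  // page under test

                Har har = proxy.getHar();
                har.getLog().getEntries().forEach(entry ->
                        System.out.println(entry.getRequest().getMethod() + " "
                                + entry.getRequest().getUrl()));
            } finally {
                driver.quit();
                proxy.stop();
            }
        }
    }

The sticking point is doing the same against the other drivers.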
I don't want to use Fiddler, as it involves some manual steps around invoking it and storing the calls, whereas Browser Mob, being an in-code proxy, can be integrated much more smoothly.
I also tried using the RC-like package included in the Selenium standalone server. But I have some HTTPS calls and some nested iframes across domains; I am particularly interested in a cross-domain POST call, and it doesn't work out that well there. Also, people keep saying it's not recommended to use that package.
So I had a solution in mind: run a standalone proxy server on a machine and, using hosts-file entries, point WebDriver at the proxy instead of the actual server. The proxy records all incoming calls and routes them to the actual server host. Later, I can make a request to the proxy and it will return all the calls it intercepted. I am not sure whether that is still called a proxy or a router.
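To make that idea concrete, here is a minimal sketch of such a recording pass-through proxy using only the JDK (Java 11+); the port and real-backend.example.com are placeholders, headers are not forwarded, and only plain HTTP is handled:

    import com.sun.net.httpserver.HttpServer;
    import java.net.InetSocketAddress;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;

    public class RecordingProxy {
        // Hypothetical upstream that the hosts-file entries would normally resolve to.
        private static final String TARGET = "http://real-backend.example.com";
        private static final List<String> recorded = new CopyOnWriteArrayList<>();

        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpServer server = HttpServer.create(new InetSocketAddress(8888), 0);

            // The test queries this endpoint to read back everything recorded so far.
            server.createContext("/__recorded", exchange -> {
                byte[] body = String.join("\n", recorded).getBytes();
                exchange.sendResponseHeaders(200, body.length);
                exchange.getResponseBody().write(body);
                exchange.close();
            });

            // Everything else is remembered and forwarded to the real host.
            server.createContext("/", exchange -> {
                recorded.add(exchange.getRequestMethod() + " " + exchange.getRequestURI());
                try {
                    HttpRequest upstream = HttpRequest.newBuilder()
                            .uri(URI.create(TARGET + exchange.getRequestURI()))
                            .method(exchange.getRequestMethod(),
                                    HttpRequest.BodyPublishers.ofByteArray(
                                            exchange.getRequestBody().readAllBytes()))
                            .build();
                    HttpResponse<byte[]> resp =
                            client.send(upstream, HttpResponse.BodyHandlers.ofByteArray());
                    exchange.sendResponseHeaders(resp.statusCode(), resp.body().length);
                    exchange.getResponseBody().write(resp.body());
                } catch (Exception e) {
                    exchange.sendResponseHeaders(502, -1);  // upstream failed, no body
                }
                exchange.close();
            });

            server.start();
        }
    }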
I came across TCPmon, but it's no longer being supported. Does anyone know of similar tools that run on Unix systems, or any alternative solutions?
We modified the Fiddler rules script to include a new exec action. If you use their native script editor, it also provides autocomplete, and we were able to get it working comfortably. The syntax is similar to JavaScript.
The Fiddler package comes with an ExecActions.exe, which can be used to pass console arguments to a running Fiddler instance from the command prompt.
The code we wrote processed all the sessions captured by Fiddler and wrote them to a file in a custom JSON format, which we later deserialized with GSON.
Please let me know, if you want further details.

IE8 Post Body Becomes Empty after form submission

Okay here is our setup:
A simple form is submitted via AJAX using Prototype 1.7 to an Apache server and handled by ColdFusion. (We have noticed similar bugs on pages that submit form data in the conventional way, but those pages are used far less.)
Some of our clients are reporting an error. After looking through the logs and doing live testing from their machines, Firebug Lite reports that the request is being sent with the POST data.
However, on the server side the POST data is not present in the raw logs, in ColdFusion's FORM object, or in GetHttpRequestData().
The problem has been isolated to IE only, even when running Chrome Frame, and it is intermittent.
We cannot reproduce this error with our own IE8 installs on our machines, or on their machines running Firefox or Chrome.
Any thoughts on this extremely difficult bug to track down?
Do you have an HTTP proxy involved somewhere? We have had issues in the past; I can't recall the details, but I know it had something to do with using AJAX to POST. The proxy was configured such that a certain combination of headers would make it misbehave. Take a good look at the HTTP headers coming from the browser, comparing a request that works with one that doesn't.