What is the source of /path/(null) requests? - apache

We are beginning to see requests in our Apache Logs in the form
/abc/(null)
These requests all have MSIE 8.0 and Trident 4.0 in the User-Agent field. They began to appear even though we had not deployed any code changes for several weeks.
What is the source of these requests? Is this a bug in MSIE 8?
What is a systematic way to determine whether this is a browser bug, a JavaScript library bug, or an issue with our code?

It's likely either a buggy browser, buggy plugin, or a bot. There are a lot of bots out there crawling websites (badly) and pretending to be browsers.
I wouldn't worry about it unless it's actually causing a problem.
Here's a discussion of the (null) requests and how to block them: http://www.webmasterworld.com/apache/3837651.htm
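If they do become a nuisance, a minimal mod_rewrite sketch along the lines of what that thread discusses could return 403 Forbidden for any path containing the literal string "(null)" (untested, adapt to your own Apache configuration):
# Reject any request whose path contains "(null)"; [NC] = case-insensitive, [F] = 403 Forbidden
RewriteEngine On
RewriteCond %{REQUEST_URI} \(null\) [NC]
RewriteRule .* - [F,L]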

Related

How can I fix the "Lighthouse returned error: NOT_HTML. The page provided is not HTML (served as MIME type )" error for a Square/Weebly website?

I am trying to use PageSpeed Insights in Google Search Console for a Weebly/Square website and I am getting this error:
Lighthouse returned error: NOT_HTML. The page provided is not HTML (served as MIME type )
It worked at first (I tested 2-3 times). Then I resized some images and tried again, and I have been getting this error ever since.
Square's support says the problem is not on their side.
Lighthouse returning NOT_HTML can have at least three causes:
1. The page really is served as text/plain or without any valid Content-Type, potentially because of browser or bot detection.
You might be able to reproduce this by making a request with the same User-Agent that Lighthouse uses:
curl -IA "Mozilla/5.0 (Linux; Android 7.0; Moto G (4)) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.175 Mobile Safari/537.36 Chrome-Lighthouse" 'https://www.rustichappyplace.com/'
2. The web server supports HTTP/2 or QUIC but does not implement the protocol exactly as Lighthouse expects, causing the Content-Type to be misdetected.
You should be able to reproduce the error in the newest Google Chrome or a Chromium nightly build. In that case there is little you can do except ask your hosting provider to disable these features or to update the server software.
3. Lighthouse has a bug that is triggered by some feature the web server uses.
Currently (March 2021) Lighthouse on Google PageSpeed Insights appears to have a bug that produces NOT_HTML in some configurations when HTTP/2 Early Hints are enabled on the web server. I ran into a similar problem today and found that disabling H2EarlyHints in Apache 2.4.46 prevented the issue (the directive is shown below).
If your hosting provider uses that feature to accelerate page loading, ask them to disable it for now.
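For reference, if you (or your hosting provider) have access to the Apache configuration, the change is a single mod_http2 directive in the server or virtual-host config (assumes Apache 2.4.24 or later):
# Stop sending 103 Early Hints responses until the Lighthouse bug is fixed
H2EarlyHints off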

HTTP/2 protocol not recognised

The latest update of PSI is reporting that all our site pages are using HTTP/1.1 when in fact they are running over HTTP/2. We have confirmed this with the domain host and with two separate tests.
Yes, I can confirm you're definitely using HTTP/2.
This looks to be a bug in PageSpeed Insights, and it seems you're not the only one experiencing it.
If you run Lighthouse locally (either through Chrome or from the command line), or through WebPageTest, it doesn't complain about missing HTTP/2, but it does on PageSpeed Insights and web.dev/measure.
I suggest you follow the GitHub issue linked above, and maybe add a comment with your own site.
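A quick way to double-check the negotiated protocol yourself, independent of Lighthouse, is to ask curl which HTTP version the server agreed to (assumes curl 7.50+ built with HTTP/2 support; replace the URL with your own site):
# Prints "2" when HTTP/2 was negotiated, "1.1" otherwise
curl -sI --http2 -o /dev/null -w '%{http_version}\n' https://www.example.com/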

Disable retrying of POST request via AJAX if connection was dropped

Problem
Sometimes important HTTP POST requests sent via AJAX are duplicated, so several entries of the same data end up in the production database, which is of course not what users intend.
Importantly, our users have poor internet connections and this request takes a long time (9-20 seconds). We can't reduce this time because it is required by the business logic.
Requests are sent over HTTP, not HTTPS.
Details
We have Apache/2.4.18 (Ubuntu) with the PHP module loaded and two frontends: one for desktop (AngularJS) and one for mobile (React) devices. AngularJS sends requests with the $http service, and React uses whatwg-fetch (we also tried whatwg-fetch-timeout).
We know from the Apache access.log and the PHP logs that the same request arrives from the client several times and PHP processes each one without errors. But! These requests are logged with a 200 status code, %b > 0 and %O = 0, which means the request was aborted before the response was sent (see the Apache logging format docs; a sample LogFormat is sketched below).
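For context, a log format that records both of those fields might look like the following (a generic sketch, not the poster's actual configuration; %O requires mod_logio):
# %b = size of the response body Apache intended to send, excluding headers
# %O = bytes actually written out on the connection, including headers (mod_logio)
LogFormat "%h %l %u %t \"%r\" %>s %b %O" combined_io
CustomLog ${APACHE_LOG_DIR}/access.log combined_io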
Reproduce
So we tried to reproduce it, and the same thing does happen sometimes. The case below is just one reproduction; the problem also occurs on mobile devices (iPhones and Android phones) with different browsers installed, and we have also reproduced it in Firefox on Windows.
Environment: Windows; both Chrome and Firefox; React frontend version; no proxy used.
Here is what we found out: Google Chrome shows the request as "Stalled", but internally it keeps retrying the request (and each attempt is actually received and processed on the server) because it gets a network error (ERR_CONNECTION_CLOSED). Only when the browser finally succeeds in fetching a response does it stop resending the request.
Gathered info
URL_REQUEST event log from chrome://net-internals/help.html#events (headers are also available there)
Google Chrome dev tools request screenshots:
Headers tab
Timeline tab
I personally can't reproduce this even once, and I suppose my good internet connection is the reason.
I have googled a lot and even found some similar Chromium bugs, but nothing exactly about this problem.
Thank you in advance for any useful information.
I am also not quite sure which tags I should set for this question, so if I should add or remove some, please tell me.

IE8 Post Body Becomes Empty after form submission

Okay here is our setup:
A simple form is submitted via AJAX using Prototype 1.7 to an Apache server and handled by ColdFusion. (We have noticed similar bugs on pages that submit form data in the conventional way, but those pages are used far less.)
Some of our clients are reporting an error. After looking through the logs and doing live testing from their machines, Firebug Lite reports that the request is being sent with the POST data.
However, on the server side the POST data is not present in the raw logs, in ColdFusion's FORM object, or in GetHttpRequestData().
The problem is intermittent and has been isolated to IE only, even when running Chrome Frame.
We cannot reproduce this error with our own IE8 installs, or on the clients' machines running Firefox or Chrome.
Any thoughts on this extremely difficult bug to track down?
Do you have an HTTP proxy involved somewhere? We have had issues in the past; I can't recall the details, but I know it had something to do with using AJAX to POST. The proxy was configured such that a certain combination of headers would make it misbehave. Take a good look at the HTTP headers coming from the browser, comparing a request that works with one that doesn't.
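If the traffic is plain HTTP, one low-level way to see exactly what reaches the server (rather than what the browser claims to send) is a packet capture on the web server itself; a rough sketch, assuming you can run tcpdump there and the site is on port 80:
# Print full packets on port 80 as ASCII so headers and POST bodies are visible;
# add "and host <client IP>" to the filter to narrow it to one client.
tcpdump -A -s 0 -i any 'tcp port 80'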

IE7 and IE8 dropping cookies

We've recently upgraded our production systems from Java 1.5, Apache HTTPD 1.3 and Tomcat (sorry, not sure which version) to Java 1.6, Apache HTTPD 2.2 and the latest version of Tomcat (again, sorry, not sure of the numbers).
Since this upgrade, we've noticed that a (very) small percentage of traffic to our site from IE7 and IE8 drops one of our cookies. The session cookie is always sent back, but sometimes the cookie that determines which of our otherwise load-balanced servers the request should go to is missing.
We can find no explanation for this and can only guess that there's something different in our Apache config that is causing this behaviour, but we have no idea why it happens only in IE7 and IE8, and then only very infrequently.
I know I haven't provided much information to go on, but if anyone has ever heard of this or anything similar happening, please do let me know what you did about it! Or if anyone has particularly in-depth knowledge of the vagaries of IE cookie handling and can provide some insight, please do!
One thing I can say is that I don't think it's anything to do with the underscore-in-domain-name issue I've been reading so much about these past couple of days.
Thanks,
Andy.
You can try the following:
Find out the browser's total cookie size limit, which is something like ~4 KB (I am not sure of the exact figure).
Using Fiddler, verify whether the web browser is sending the cookies and the web server is dropping them, or whether IE itself is not sending the cookies (a server-side check is sketched below).
Make sure your cookies are actually being sent to the browser (and not overwritten by some part of the code). Use Fiddler.
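For the server-side check, you could also log the raw Cookie header that Apache actually receives; a minimal sketch (the format name is arbitrary, adapt the log path to your setup):
# Logs the incoming Cookie request header next to each request line
LogFormat "%h %t \"%r\" %>s \"%{Cookie}i\"" cookie_debug
CustomLog logs/cookie_debug.log cookie_debug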
- ankit