Random high content download time in Chrome? - api

We have an API which randomly takes a high content download time in Chrome. It always works fine in Firefox and takes only a few ms there. The response size is 20 kB uncompressed and 4 kB compressed. The same request also works fine using curl.
Things that we have tried:
1) Disabling the If-None-Match header to prevent cached responses from the browser.
2) Trying various compressions (gzip, deflate, br).
3) Disabling compression.
4) Disabling all Chrome extensions.
The same request sometimes works fine in Chrome but randomly shows a very high content download time.
We are unable to understand the root cause of this issue. What else can we try to minimize this time?
I made three requests here and the third one took the most time (before the last spike). The CPU does not seem to max out for any extended period; most of the time is idle.
Also, when replaying the call via the Replay XHR menu, the content download period drops from 2 s to 200 ms.

Are you by chance trying to implement infinite scrolling? If you are, try dragging the scroll bar instead of using the mouse wheel. For some reason, Chrome seems to struggle with mouse scroll events. If the scroll bar worked just fine, keep reading.
This post provides a detailed walkthrough of someone experiencing something similar - https://github.com/TryGhost/Ghost/issues/7934
I had attached a watcher on the scroll event which would trigger an AJAX request. I had throttled the request and could see that only one was being sent. I watched my dev server return the response within a few ms, but there would be a 2 second delay in Chrome: no render, no API calls, and no scripts executing. Yet the "Content Download" would take 3 seconds for 14 kB. No other browser had this issue.
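For reference, a minimal sketch of a throttled scroll watcher along those lines (purely illustrative; the endpoint and render function are hypothetical, not my exact code):
let lastCall = 0;
window.addEventListener("scroll", () => {
  const now = Date.now();
  if (now - lastCall < 500) return; // throttle: at most one request per 500 ms
  lastCall = now;
  fetch("/api/items?page=next")      // hypothetical endpoint
    .then((res) => res.json())
    .then(appendItems);              // hypothetical render function
});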
I stumbled upon suggestions that using requestAnimationFrame instead of setTimeout would solve the problem. That approach seems to work when the "Waiting" (green) portion is significant, but not so much for the "Content Download" (blue) portion.
After hours of digging, I tried conditionally calling e.preventDefault() on the mousewheel event and to my amazement, it worked.
A few things to note:
1) I did not use the mousewheel event to make the api call. I used the scroll event along with throttling.
2) The mousewheel event is non-standard and should not be used. See https://developer.mozilla.org/en-US/docs/Web/Events/mousewheel
3) BUT in this case, you have to watch and handle the mousewheel event because of Chrome. Other browsers ignore the event if they don't support it, and I have yet to see it cause an issue in another browser.
4) You don't want to call preventDefault() every time, because that disables scrolling with the mouse :) You only want to call it when deltaY is 1 (if you are using vertical scroll). You can see from the attached image that deltaY is 1 when you basically can't scroll anymore; the mousewheel event is still fired even though the page cannot scroll. As a side note, deltaX is -0 when you are scrolling vertically and deltaY is -0 when scrolling horizontally.
My solution:
window.addEventListener("mousewheel", (e) => {
  // deltaY is 1 when the page can't actually scroll any further,
  // yet Chrome still fires the event; suppress its default handling.
  if (e.deltaY === 1) {
    e.preventDefault();
  }
}, { passive: false }); // newer Chrome treats wheel listeners on window as passive by default, so preventDefault() needs this
That has been the only solution that I've seen work and I haven't seen it mentioned or discussed elsewhere. I hope that helps.
(Attached image: console log of the mousewheel event.)

I think you may be doing it wrong.™
Fundamentally, if this really only happens with Chrome, then perhaps the client-side code is to blame, of which you don't reveal any details.
Otherwise, you are trying to debug what you present as a backend condition (based on the choice of the nginx tag) with front-end tools:
Have you tried using tcpdump(8) to troubleshoot the issue? What packets get exchanged, and at what times?
Have you tried logging the times of the request being received and processed by nginx? E.g., $request_time?
Where is the server located? Perhaps you're experiencing packet loss, which may require timeouts and retransmission of some TCP packets, which invariably will introduce a random delay?
Finally, the last possibility is that the field doesn't mean what you think it does: it sounds like it may take a hit from CPU load, as it reflects the XMLHttpRequest (XHR) processing. Perhaps you run some advertising with user tracking that randomly consumes a significant amount of CPU, slowing down your metrics?
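For the $request_time suggestion above, a minimal nginx sketch (goes in the http context; the format name and log path are illustrative, and $upstream_response_time only applies if nginx proxies to an upstream):
log_format timing '$remote_addr [$time_local] "$request" '
                  'req=$request_time upstream=$upstream_response_time';
access_log /var/log/nginx/timing.log timing;
Comparing req= there against the browser's "Content Download" number should tell you whether the delay is server-side at all.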


Ctrl+F5 shows me two different types of display

I'm currently working on a website.
Attached is the part of the website as I see it on my computer monitor.
The changed display below is what I want.
However, if I keep hitting Ctrl+F5, the screen shows me either the unchanged display or the changed display.
I have no idea why it shows me two different types of screen.
As far as I know, Ctrl+F5 deletes the cache and fetches fresh data, but that is not what happens for me.
Ridiculously, if I keep hitting F5, I get only the changed display, as I want.
I guess I have a problem with my CSS, because I get an error message: DevTools failed to load SourceMap: Could not load content for http://localhost:8090/asset/css/sub.css.map: HTTP error: status code 404, net::ERR_HTTP_RESPONSE_CODE_FAILURE.
Does anyone know the key to this problem?
There's not enough info here for anyone to tell you how to fix your problem with any amount of certainty.
What do you mean by changed display? Just the website looking different? Or a different debug preview (like scaling your website down to a phone's screen size)?
If it looks different, that might just have to do with caching.
The difference between Ctrl+F5 and plain F5 is that Ctrl+F5 doesn't use your browser's cache and instead fetches everything fresh from the server, whereas plain F5 does use the cache. Your browser generally keeps track of when it cached things and will automatically fetch data anew if it was cached too long ago.
The former generally takes longer to load, naturally, which might be why the website looks different, at least until everything has been loaded.
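Incidentally, you can reproduce the same distinction from script: the Fetch API exposes cache modes (a hedged sketch; the URL is the one from your error message and purely illustrative):
// Normal load: the browser may serve this from its HTTP cache (roughly F5).
fetch("/asset/css/sub.css");
// Bypass the cache, go to the server, then update the cache (roughly Ctrl+F5).
fetch("/asset/css/sub.css", { cache: "reload" });
// Always revalidate with the server before using a cached copy.
fetch("/asset/css/sub.css", { cache: "no-cache" });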
Other than that, CSS sometimes applies styles in a surprising order. This generally has to do with the order in which, and where in your HTML document, you actually load your stylesheets. Loading all of them in the head of the document is generally a good idea. Complete redefinitions of styles in separate stylesheets can get very weird, even though they should follow normal precedence (Thread on CSS precedence).
Though, again, you'll have to elaborate on your problem further, maybe provide some screenshots, for anyone to be able to definitively help you.

Does Electron have a standard way of killing a useless renderer process?

My app creates a window with a local page that requires node integration to be enabled.
After I click a button on this page, I am navigated to a third party page.
Because I want node to be disabled in this third party page, and I can't toggle node integration in a BrowserWindow, I load this third party page in a sandboxed BrowserView that is embedded inside of the window and is stretched to fit the entire screen.
Now doing this navigates the embedded view, but the BrowserWindow is stuck pointing to the old local page that is no longer relevant.
To prevent this extra page from sitting around in the background, I navigate my BrowserWindow to "about:blank" to effectively clear it out and make room for the BrowserView.
I am realizing now that while this "clears" out the old page, it keeps the renderer process that's associated with it alive. From here:
Chromium creates a renderer process for each instance of a site the user visits
And understandably, navigating to "about:blank" doesn't signal to Electron that it should kill the other process.
I want to get rid of this renderer process, so it doesn't sit around unnecessarily and use CPU and memory when I interact with the window.
Two things that have worked (in both cases I removed the extra navigation to "about:blank", since the process is now being killed outright):
1) When my button in my renderer sends a message to the main process telling it to create a BrowserView and navigate to the new site, I do a process.exit();. I guess a part of me is nervous about the process exit interfering with the message that gets queued up for main, though it seems to work fine.
2) Instead of killing the process from the renderer, I created and navigated my BrowserView and then ran a little browserWindow.webContents.executeJavaScript("process.exit()");. I find this uglier, though it does mitigate my concern above in #1.
There isn't a webContents.destroy() type of method, and I don't know of a way to signal to Electron that it needs to destroy this unnecessary process.
I suppose I might have a pretty unique case, but is there a nicer way (or more standard way) of handling this than explicitly doing a process.exit()?
There is now a WebContents::forcefullyCrashRenderer() API that accomplishes this (introduced by this PR):
Forcefully terminates the renderer process that is currently hosting this webContents. This will cause the render-process-gone event to be emitted with the reason=killed || reason=crashed. Please note that some webContents share renderer processes and therefore calling this method may also crash the host process for other webContents as well.
As of now (July 2021, Electron v13) - there's also an undocumented webContents.destroy().
https://github.com/electron/electron/issues/10096
As the documentation mentions, some webContents share renderer processes, and if you use webContents.forcefullyCrashRenderer() you may terminate them as well.
I'm not sure how webContents.destroy() handles this, but from the name it seems narrower in scope. I would assume that it kills the renderer if this webContents is the only one attached to it (I tested this) and spares the renderer if other webContents are using it (needs confirmation).
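For reference, a minimal main-process sketch of the flow from the question, ending with the forced renderer termination (the IPC channel name and URL handling are illustrative, not from the question):
const { BrowserWindow, BrowserView, ipcMain } = require("electron");

ipcMain.on("open-third-party", (event, url) => {
  const win = BrowserWindow.fromWebContents(event.sender);
  // Sandboxed view for the third-party page, since nodeIntegration
  // cannot be toggled on the existing BrowserWindow.
  const view = new BrowserView({
    webPreferences: { sandbox: true, nodeIntegration: false },
  });
  win.setBrowserView(view);
  const [width, height] = win.getContentSize();
  view.setBounds({ x: 0, y: 0, width, height });
  view.webContents.loadURL(url);
  // Kill the renderer that hosted the now-irrelevant local page.
  // Beware: this can take out other webContents sharing that process.
  win.webContents.forcefullyCrashRenderer();
});
The renderer would trigger this with ipcRenderer.send("open-third-party", url) from the button handler.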

What could be causing this slow fetch in react native?

In the following code, the first console.log message prints pretty much instantly. Then everything just hangs (I initially assumed it was waiting for the body of the response to be returned). The body of the response is only about 26 kB, yet the wait seems to be indefinite UNLESS I shake the phone and interact with the debug menu. As soon as I interact with the debug menu, the promise resolves and everything moves along as expected. My interaction with the debug menu can be trivial, like hiding or showing the inspector; it just takes something to kick the promise resolution into gear, and then everything is fine.
fetch(SEARCH_URL, requestBody)
  .then((response) => { console.log(response); return response.json(); })
  .then((responseData) => {
    debugger
    ...
Note:
Disconnecting from the debugger and running the code does not exhibit the slowness (and when not connected to the debugger, the debugger statements are ignored).
And yes, I have rebooted the computer.
Might have found something in https://github.com/facebook/react-native/issues/6679
As you've found out yourself, this is a known bug that should be fixed in react-native v0.31
It is a known bug that parsing responses can lag badly when remote debugging is enabled. Disabling remote debugging should speed this up a lot.
You can read the issue for details and other workarounds.
What worked for me was moving the fetch calls inside the constructor of a React component; otherwise they never resolved. Hope this helps.
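If you want to confirm that the stall is in body parsing rather than the network, a quick timing sketch (illustrative; SEARCH_URL and requestBody are from the question above):
const t0 = Date.now();
fetch(SEARCH_URL, requestBody)
  .then((response) => {
    console.log("headers received after", Date.now() - t0, "ms");
    return response.json(); // under the remote debugger, this is the step that hangs
  })
  .then((responseData) => {
    console.log("body parsed after", Date.now() - t0, "ms");
  });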

Safari html5 video timeupdate event gets disabled

We are playing videos from a server. We attach an 'ontimeupdate' event handler which fires periodically as the video plays. For slow connections, we can compare where the video currently IS to where it SHOULD be, and then do some other things we need to do if it is lagging. Everything works fine in Chrome, FF, and IE. In Safari, when the connection is slow, the event only fires twice. Why does it get removed? Is there a way to add the event again, inside the handler for the event? Thanks
The HTML5 audio/video element is still less than perfect. The biggest issue I've noticed is that it doesn't always behave the same way in every browser. I do not know why the timeupdate event stops firing in Safari, but one option you have is to monitor whether the video is playing and verify the information independently. For example:
var playing = false; // tracked via the media events below

$(video).bind('play', function() {
  playing = true;
}).bind('pause', function() {
  playing = false;
}).bind('ended', function() {
  playing = false;
});

function yourCheck() {
  if (playing) {
    if (video.currentTime != timeItShouldBe) {
      // video is lagging behind where it should be; do something
    }
  }
  // pass the function itself; calling yourCheck() here would run it immediately
  setTimeout(yourCheck, 100);
}
Something to that effect. It's not perfect, but neither is the current HTML5 audio/video element. Good luck.
The event will not fire if the currentTime does not change, so it may not be firing if the video has stopped playing to buffer. However, there are other events you can listen for:
1) "stalled" - browser is trying to load the video file, but it's not getting anything from the network.
2) "waiting" - playback has stopped because you ran out of buffered data, but it will probably pick up again once more data comes in from the network. This is probably the most useful one for you.
3) "playing" - playback has resumed. Not to be confused with "play" which just means it's "trying" to play. This event fires when the video is actually playing.
4) "progress" - browser got more data from the network. Sometimes just fires every so often, but it can also fire after it recovers from the "stalled" state.
See the spec for reference.
I've heard some people say that these events can be unreliable in some browsers, but they seem to be working just fine here: http://www.w3.org/2010/05/video/mediaevents.html
If you want to be extra cautious, you can also poll periodically (with a timeout, as tpdietz wrote) and check the state of the video. The readyState property will tell you whether you have enough data to show the current frame (>= 2), enough to keep playing at least a little bit into the future (>= 3), or enough to play all the way to the end (probably). You can also use the buffered property to see how much of the video has actually been buffered ahead of where you're playing, so you can roughly estimate the data rate (if you know how big the file is).
MDN has a great reference on all these properties and events.
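Putting those pieces together, a rough sketch of the event-plus-polling approach (plain JS; the element lookup, thresholds, and interval are illustrative):
var video = document.querySelector("video");

video.addEventListener("waiting", function () {
  console.log("buffer ran dry; playback paused while more data loads");
});
video.addEventListener("playing", function () {
  console.log("playback actually resumed");
});

setInterval(function () {
  // readyState >= 3 (HAVE_FUTURE_DATA) means enough data to keep playing for now.
  if (video.readyState < 3) {
    console.log("low on data at", video.currentTime);
  }
  if (video.buffered.length) {
    // How far ahead of the playhead the last buffered range reaches.
    var ahead = video.buffered.end(video.buffered.length - 1) - video.currentTime;
    console.log("buffered ahead:", ahead.toFixed(1), "seconds");
  }
}, 1000);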

Why doesn't refreshing this page refresh the frame in Opera?

On this page in Opera on Windows, pressing the refresh button or F5 doesn't refresh the frame with the game in. Does anyone know why?
It's a bug. When a request for the main FRAMESET file gets a 304 Not Modified response, Opera will not re-load the frames inside. Try right-click, "Frame > Reload" instead.
I thought so. I filed a bug report for it (DSK-320851). Would be great if it were picked up.
As it stands, I rewrote the entire site to use static HTML, with the frames even inlined. So that effectively means Opera can't refresh any of the demos. Kind of sucks... I was afraid it was an edge case (a side effect of inlining the frames), but I guess it's not.
The alternative is to press Enter in the address bar, by the way. That does properly refresh the frames.
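If you control the page, you could also script the reload yourself as a workaround (a sketch, assuming same-origin frames; the force-reload argument to reload() is non-standard but was honored by older browsers):
for (var i = 0; i < window.frames.length; i++) {
  try {
    window.frames[i].location.reload(true); // true = skip the cache
  } catch (e) {
    // cross-origin frames throw a security error; skip them
  }
}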