Single request to a specific API stalled for a long time - asp.net-core

I've built an API application with ASP.NET Core 2.2.
Everything has been fine except one PATCH endpoint, which takes an ID and a list and replaces the list on the corresponding item.
This API works fine with Postman too: simple and fast, just as expected.
However, when run in a browser, the request stalls for one minute before it is sent.
To check whether the problem is in my frontend app, I tried simplifying it down to a single jQuery function, but it still stalls for one minute.
I've looked up "stalled"; people say it can be Chrome's policy of allowing at most six concurrent requests per origin, but that's not my case: there is only this one request at that time, and every other API works fine except this one.
I've also tried other browsers (Firefox and Edge), but the result is the same.
According to the article Chrome provides:
Queueing. The browser queues requests when:
There are higher priority requests.
There are already six TCP connections open for this origin, which is the limit. Applies to HTTP/1.0 and HTTP/1.1 only.
The browser is briefly allocating space in the disk cache.
Stalled. The request could be stalled for any of the reasons described in Queueing.
It seems that being "stalled" for a long time means the request wasn't even sent. Does that mean I can rule out the backend API as the thing to fix?
Also, since there is no other request at the same time, does that most likely point to "The browser is briefly allocating space in the disk cache", or could there be another reason?
I also wonder why only this API has this issue. Is there anything special about the PATCH method?

First, use a stopwatch to measure your code's response time in the browser and in Postman, and compare the two.
If they are the same, don't touch your code; the problem isn't in your method.
If you can, test the endpoint with the HTTP POST attribute instead, to find out whether PATCH itself is the cause.
My guess, however, is that the cause lies elsewhere in your system.
It's also possible your problem can be resolved by changing the pipeline (Startup.cs). There are also issues like CORS that occur only in browsers and not in Postman.
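To illustrate the CORS point: before a cross-origin PATCH with a JSON body, the browser sends an OPTIONS preflight, and if the server never answers it, the request can appear stalled in the network panel. A minimal sketch in Python/WSGI with assumed names (the actual app here is ASP.NET Core, so this only illustrates the preflight exchange, not the real fix):

```python
# Hypothetical WSGI middleware that answers CORS preflight (OPTIONS)
# requests so a browser-issued cross-origin PATCH is not blocked.
CORS_HEADERS = [
    ("Access-Control-Allow-Origin", "*"),
    ("Access-Control-Allow-Methods", "GET, POST, PATCH, OPTIONS"),
    ("Access-Control-Allow-Headers", "Content-Type"),
]

def cors_middleware(app):
    def wrapped(environ, start_response):
        if environ["REQUEST_METHOD"] == "OPTIONS":
            # Answer the preflight directly; the browser issues this
            # automatically before the actual PATCH request.
            start_response("204 No Content", list(CORS_HEADERS))
            return [b""]
        # For real requests, append the CORS headers to the response.
        def sr(status, headers, exc_info=None):
            return start_response(status, headers + CORS_HEADERS, exc_info)
        return app(environ, sr)
    return wrapped
```

If the preflight is the culprit, the stalled PATCH in dev tools is usually preceded by a pending or failed OPTIONS request to the same URL.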

Related

HTML video performance in Safari - repeated byte range requests

I have a page that displays a looping video and the performance of the playback appears less than ideal - the video stutters and lags instead of playing smoothly. I learned through a bit of searching that Safari handles streaming video differently than other browsers because it makes byte range requests and expects the server to respond with status 206. Safari makes a series of requests with a range header set, while Chrome is able to make a single request.
When I view the network requests in Safari dev tools, I see the series of byte range requests happening as expected. But when the video loops back and starts from the beginning, I see the same series of requests happening a second time and continuously.
JSFiddle to reproduce in Safari.
<video src="https://jsoncompare.org/LearningContainer/SampleFiles/Video/MP4/Sample-MP4-Video-File-Download.mp4" autoplay loop muted playsinline preload="auto" controls></video>
Question is: is this by design? It seems inefficient that the browser is re-downloading the pieces of the video every time it plays. Performance wise, I suspect this is what is causing the non-smooth playback. Is caching supported for byte range requests in Safari?
I also suspect this behavior may have to do with the size of the asset. I see the described behavior for a video that’s ~40 MB but smaller videos are downloaded in two requests and the requests don’t repeat.
Helpful resources I came across
https://blog.logrocket.com/streaming-video-in-safari/
https://developer.apple.com/library/archive/documentation/AppleApplications/Reference/SafariWebContent/CreatingVideoforSafarioniPhone/CreatingVideoforSafarioniPhone.html#//apple_ref/doc/uid/TP40006514-SW6
The re-requesting for the byte range requests is by design, or at least by current implementation.
It seems that the mechanism that Safari uses to cache requests does not currently allow for byte ranges - i.e. in simplistic terms, it looks just at the URL so would respond with whatever happened to be in cache for that URL, ignoring the byte range.
It seems this is a limitation (or maybe a very 'pure' interpretation of the specs, not sure...) but the current advice is definitely not to cache when using byte range requests on Apple based solutions:
NSURLRequestReloadIgnoringLocalCacheData = 1
This policy specifies that no existing cache data should be used to satisfy a URL load request.
Important
Always use this policy if you are making HTTP or HTTPS byte-range requests.
(https://developer.apple.com/documentation/foundation/nsurlrequestcachepolicy/nsurlrequestreloadignoringlocalcachedata)
You can see more discussion on this here also in the Apple Developer forum: https://developer.apple.com/forums/thread/92119
I also think this is by design.
Recently I implemented video streaming for a website and saw this behaviour too. In Chrome and Firefox everything works fine; even with byte-range headers they always request small chunks.
The Safari dev tools show that it downloads big chunks and often aborts those requests. This is very strange behaviour, especially when you proxy a video from AWS S3 or something like that: Safari requests a large chunk, the server loads that chunk from S3 and sends it back, but Safari only needs a few bytes.
Here is a good article which goes into detail of this behaviour:
https://www.stevesouders.com/blog/2013/04/21/html5-video-bytes-on-ios/
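For reference, the 206 handling Safari expects can be sketched as a small helper that turns a Range header into a partial response. All names here are illustrative; real servers (nginx, S3, etc.) implement this for you, including multi-range requests and validation, which this sketch omits:

```python
import re

def byte_range_response(range_header, data):
    """Return (status, headers, body) for a single-range request."""
    m = re.fullmatch(r"bytes=(\d*)-(\d*)", range_header or "")
    if not m or (m.group(1) == "" and m.group(2) == ""):
        # No usable Range header: fall back to a full 200 response.
        return ("200 OK", {"Content-Length": str(len(data))}, data)
    first, last = m.group(1), m.group(2)
    if first == "":
        # Suffix range ("bytes=-N"): the last N bytes of the resource.
        start = max(len(data) - int(last), 0)
        end = len(data) - 1
    else:
        start = int(first)
        end = min(int(last), len(data) - 1) if last else len(data) - 1
    body = data[start:end + 1]
    headers = {
        "Content-Range": f"bytes {start}-{end}/{len(data)}",
        "Content-Length": str(len(body)),
        "Accept-Ranges": "bytes",
    }
    return ("206 Partial Content", headers, body)
```

A server that answers Range requests with a plain 200 and the whole file is a common cause of stuttering playback in Safari.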

Why do I get many SSE request that slow down my webpage in my NUXT.js project?

I have a project implemented with NUXT.js (SSR mode). Every time I refresh a page, I get three or four SSE requests (like _loading/sse) in the network console. Those SSE requests are slow, eventually fail, and make page loading slow on my computer (I run the whole project locally).
Does anyone know what those SSE requests are and how to get rid of them?
What you refer to is a loading screen; it looks like something in your app is firing many requests and taking quite a while to render, or failing.
You need to check your app code to find what generates those requests and where they might fail.

How REST API handle continuous data update

I have a REST backend API, and the front end calls the API to get data.
I was wondering how a REST API handles continuous data updates. For example,
in Jenkins, when we execute a build job, we can see continuous log output on the page until the job finishes. How does REST accomplish that?
Jenkins will just continue to send data. That's it. It simply carries on sending (at least that's what I'd presume it does). Normally the response contains a header field indicating how much data the response contains (Content-Length). But this field is not necessary. The server can omit it. In such a case the response body ends when the server closes the connection. See RFC 7230:
Otherwise, this is a response message without a declared message body length, so the message body length is determined by the number of octets received prior to the server closing the connection.
Another possibility would be to use chunked transfer encoding. The server then sends the body as a series of chunks, each prefixed with its own length, and terminates the response by sending a zero-length final chunk.
WebSockets would be a third possibility.
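The chunked framing mentioned above can be sketched as a tiny encoder (illustrative only; real servers and frameworks apply this encoding automatically). Each chunk is its size in hexadecimal, CRLF, the data, CRLF; a zero-size chunk terminates the body (RFC 7230, section 4.1):

```python
def encode_chunked(chunks):
    """Frame an iterable of byte strings as a chunked HTTP body."""
    out = b""
    for chunk in chunks:
        if chunk:  # skip empty chunks so we don't terminate early
            out += f"{len(chunk):x}\r\n".encode() + chunk + b"\r\n"
    # Zero-length chunk plus empty trailer section ends the body.
    return out + b"0\r\n\r\n"
```

With this framing the client can render each log line as it arrives, without knowing the total length in advance.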
I was searching for an answer myself and then the obvious solution struck me. In order to see what type of communication a service is using, you can simply view it from browser side using Developer Tools.
In Google Chrome it will be F12 -> Network.
In case of Jenkins, front-end is sending AJAX requests to backend for data:
every 5 seconds on the Dashboards page
every second during a Pipeline run (the Console Output page you mentioned).
I have also checked the approach in AWS. When checking the status of instances (example: Initializing... , Booting...), it queries the backend every second. It seems to be a standard interval for its services.
Additional note:
When running an AWS Remote Console though, it first sends requests for remote console instance status (backend answers with { status: "BOOTING" }, etc.). After backend returns status as "RUNNING", it starts a WebSocket session between your browser and AWS backend (you can notice it by applying WS filter in developer tools).
Then it is no longer REST API, but WebSockets, that is a different protocol (stateful).
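The polling pattern described above can be sketched generically. Here `fetch_status` is an injected stand-in for an HTTP GET to whatever status endpoint the backend exposes (the endpoint name and the "RUNNING" value mirror the AWS example, but are assumptions):

```python
import time

def poll_until_running(fetch_status, interval=1.0, max_tries=60,
                       sleep=time.sleep):
    """Poll a status source until it reports RUNNING or we give up.

    Returns True once RUNNING is seen, False after max_tries attempts.
    The sleep function is injectable so tests need not actually wait.
    """
    for _ in range(max_tries):
        if fetch_status() == "RUNNING":
            return True
        sleep(interval)
    return False
```

Once this returns True, the client can switch to a stateful channel (the WebSocket session mentioned above) instead of continuing to poll.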

What Is Meant By Server Response Time

I'm doing website optimisations using Google's Pagespeed Insights to test improvements. Among the high-priority fix suggestions, is this:
Reduce server response time
In our test, your server responded in 2.1 seconds.
I read the 'helpful' doc linked in this section, and now I'm really confused.
Is the server response time the DNS response, the time to first-byte, or a combination? Is it purely a server-side thing, or could this be affected by, for example, a slow JavaScript resource or ready events in the DOM?
My first guess would have been that it's the time taken from the moment the request was issued to the first byte received from the server; however, Google's definition is not quite that (from https://developers.google.com/speed/docs/insights/Server):
Server response time measures how long it takes to load the necessary HTML to begin rendering the page from your server, subtracting out the network latency between Google and your server. There may be variance from one run to the next, but the differences should not be too large. In fact, highly variable server response time may indicate an underlying performance issue.
Taking 2.1 seconds suggests to me that your application/web server is buffering its output, so all of your server-side processing happens before it sends any content. If you don't buffer, the HTML can start being sent to the browser sooner, which may help; however, you lose the ability to do things like change response headers late in your logic.
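The buffering trade-off can be sketched in WSGI terms (a stand-in for whatever stack PageSpeed actually measured; `render_header` and `render_slow_body` are hypothetical placeholders for cheap and expensive server-side work):

```python
def render_header():
    return b"<html><head><title>demo</title></head>"

def render_slow_body():
    # Placeholder for expensive server-side work (queries, templates).
    return b"<body>done</body></html>"

# Buffered: all work happens before the first byte leaves the server,
# so time-to-first-byte includes the whole processing cost.
def buffered_app(environ, start_response):
    body = render_header() + render_slow_body()
    start_response("200 OK", [("Content-Type", "text/html")])
    return [body]

# Streaming: yield the <head> immediately so time-to-first-byte drops,
# at the cost of no longer being able to change headers later.
def streaming_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/html")])
    yield render_header()       # sent right away
    yield render_slow_body()    # computed while the client starts rendering
```

Both apps send identical bytes overall; only when the first byte goes out differs.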

page is being received too long

I have rewritten a web application from mod_python to mod_wsgi. The problem is that now it takes at least 15 seconds before any request is served (Firebug hints that almost all of this time is spent receiving data). Before the rewrite it took less than 1 second. I'm using Werkzeug for app development and Apache as the server. Server load seems minimal, and the same goes for memory usage. I'm using apache2-mpm-prefork.
I’m using the default setting for mod_wsgi - I think it’s called the ‘embedded mode’.
I have tested if switching to apache2-mpm-worker would help but it didn’t.
Judging from the app log, the app seems to finish each request quite fast - in less than 1 second.
I have changed the apache logging to debug, but I can’t see anything suspicious.
I have moved the app to run on a different machine but it was all the same.
Thanks in advance for any help.
It sounds a bit like your response's Content-Length doesn't match how much data you are actually sending back, with the declared Content-Length being longer than the body. The browser then waits for more data until it eventually times out.
Use something like:
http://code.google.com/p/modwsgi/wiki/DebuggingTechniques#Tracking_Request_and_Response
to verify what data is being sent back and that things like content length match.
Otherwise it is impossible to guess what the issue is without a small, self-contained example of code illustrating the problem.
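The suggested check can be sketched as a WSGI middleware that compares the declared Content-Length against the bytes the app actually produces (assumed names, in the spirit of the debugging-techniques page linked above):

```python
def verify_content_length(app):
    """Wrap a WSGI app and fail loudly on a Content-Length mismatch."""
    def wrapped(environ, start_response):
        declared = []
        def sr(status, headers, exc_info=None):
            for name, value in headers:
                if name.lower() == "content-length":
                    declared.append(int(value))
            return start_response(status, headers, exc_info)
        body = b"".join(app(environ, sr))
        if declared and declared[0] != len(body):
            # A mismatch like this makes the browser wait for bytes
            # that never arrive - the 15-second symptom described above.
            raise AssertionError(
                f"Content-Length {declared[0]} != actual body {len(body)}")
        return [body]
    return wrapped
```

Wrapping the Werkzeug app in this during development would immediately surface whether the declared length and the real body disagree.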