I can't understand why we use async and await when fetching data from a server.
A network request from a client to a server, possibly over a long distance and a slow connection, can take an eternity on CPU time scales.
If it weren't async, the UI would block until the request is completed.
With async execution the UI thread is free to update a progress bar or render other things while the framework or operating system stack is busy on another thread sending and receiving the request your code made.
Most other calls that reach out to the operating system for files or other resources are async for the same reason. Not all of them are as slow as requests to a remote server, but you often can't know in advance whether they will be fast enough to avoid hurting your frame rate and causing visible jank in the UI.
await makes the code after that statement execute only when the async request has completed. async/await is used to make async code look more like sync code, which makes it easier to write and reason about.
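For illustration, here is a minimal sketch of the same idea in Kotlin (the language used in the code further down this thread) using coroutines; the URL is a placeholder and the JDK 11 HttpClient is just one of many ways to issue the request:

import kotlinx.coroutines.*
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() = runBlocking {
    val client = HttpClient.newHttpClient()
    // Placeholder URL; assume it returns some user data.
    val request = HttpRequest.newBuilder(URI.create("https://example.com/api/user")).build()

    // async starts the request on an I/O dispatcher; this coroutine is not blocked.
    val user = async(Dispatchers.IO) {
        client.send(request, HttpResponse.BodyHandlers.ofString()).body()
    }

    // ...meanwhile, other work (UI updates, a progress bar, more requests) can run here...
    println("request is in flight")

    // Only the code from await() onward has to wait for the response.
    println("response: ${user.await()}")
}

The point is the same as with fetch in JavaScript: the request runs off the main flow of execution, and await marks exactly the spot that actually needs the result.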
Async helps a lot with scalability and responsiveness.
Using synchronous requests blocks the client until a response has been received. As you increase concurrent users, you basically need a thread per user. This creates a lot of idle time and wasted computation: one request gets one response, in the order received.
Using asynchronous requests allows the client to send requests and receive responses in whatever order they are ready to be sent or received. This lets your threads work smarter.
Here's a pretty simple and solid resource from Mozilla:
https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Synchronous_and_Asynchronous_Requests#Asynchronous_request
So I'm trying to make API requests in parallel, but it doesn't seem any faster. Am I doing it wrong? Here is my code.
fun getUserInfo(username: String): Mono<String> {
    return webclient
        // some config and params
        .post()
        .retrieve()
        .bodyToMono(String::class.java)
        .subscribeOn(Schedulers.parallel())
}
fun main() {
    val time = measureTimeMillis {
        Mono.zip(getUserInfo("doge"), getUserInfo("cheems"), etc...)
            .map { tuple -> listOf(tuple.t1, tuple.t2, etc...) }
            .block()
    }
    // gives about the same time; doesn't seem any faster
    // with and without subscribeOn(Schedulers.parallel())
}
It has nothing to do with your code.
You must understand that the time in any I/O work is mostly spent waiting, as in waiting for a response.
If we look at the lifecycle of a thread, it will do a bit of preprocessing and then send the request. When the request has been sent, it has to wait for the response. This is where the majority of the time is spent; then you get a response and process it. Here's the thing: as much as 90% of the request time could be spent just waiting. That is a lot of wasted resources, having the thread do nothing but wait.
This is what is good about WebFlux/Reactor. When a request is sent, the thread will not wait; it will go on to process other requests/responses, and when that first request's response comes back, any free thread can pick it up. It does not have to be the thread that sent the request in the first place.
What I have just described is what is usually called async, or asynchronous, work.
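To see that "any free thread can pick up the response", you can run a tiny simulation; there is no real network call here, Mono.delay simply stands in for the waiting:

import reactor.core.publisher.Mono
import java.time.Duration

fun main() {
    println("started on: ${Thread.currentThread().name}")    // typically "main"

    Mono.delay(Duration.ofMillis(200))                        // simulated I/O wait
        .map { "response" }
        .doOnNext { println("handled on: ${Thread.currentThread().name}") }  // e.g. "parallel-1"
        .block()                                              // only so the demo doesn't exit early
}

The thread that started the pipeline is not the one that handles the result; during those 200 ms it is free to do other work.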
So let's look at what you are doing. You want to run your requests in parallel, meaning you want to utilize multiple threads on multiple cores at the same time.
For this to work, you need to involve the other CPUs and tell them to get ready for incoming work. A number of threads then have to be set up on each CPU, and the data must be distributed to all of them. As you can see, there is setup time involved here.
Then all the requests are made from multiple CPUs at the same time, but the waiting for the responses is constant! It's exactly the same waiting time as before (up to as much as 90% of the total request time). Then, when all the responses are returned, they are collected and processed on multiple CPUs, and finally they are sent back to the original thread on the original CPU.
What have you gained? Most likely almost nothing, but you also most likely utilized a lot more resources for this very, very minimal gain.
Parallelism is usually good if you need raw CPU computing power: calculations of some sort, say a 3D renderer, or, I don't know, cracking hashes, etc. Not I/O work.
I/O work is usually more about orchestration than raw CPU power. Slow responses will not be solved by parallel computing; a slow response is always a slow response. You will just consume more resources for no gain.
This is the reason why plain flatMap is so powerful in Reactor.
It will perform everything async for you without you having to deal with threads, locks, synchronization, joins, etc. It will do all the work asynchronously, as quickly as possible, using as few resources as possible.
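As a sketch of that point, reusing getUserInfo from the question above (the usernames are just examples), flatMap fires all the requests concurrently and merges the responses as they arrive, with no subscribeOn or parallel scheduler needed:

import reactor.core.publisher.Flux
import kotlin.system.measureTimeMillis

fun main() {
    val time = measureTimeMillis {
        val users = Flux.fromIterable(listOf("doge", "cheems"))
            // flatMap subscribes to every inner Mono up front, so the waits overlap;
            // results are merged in whatever order the responses come back.
            .flatMap { name -> getUserInfo(name) }
            .collectList()
            .block()
        println(users)
    }
    println("took $time ms")
}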
I am creating a new .NET Core 2.2 API for use with a JavaScript client. Some examples from Microsoft have the controller using all async methods and some don't. Should the methods on my API be async? I will be using IIS, if that is a factor. One example method will call another API and return the result, while another will make a database request using Entity Framework.
It is best practice to use async for your controller methods, especially if your services are doing things like accessing a database. Whether or not your controller methods are async doesn't matter to IIS; the .NET Core runtime will be invoking them. Both will work, but you should always try to use async when possible.
First, you need to understand what async does. Simply put, it allows the thread handling the request to be returned to the pool to field other requests if the thread enters a wait state. This is almost invariably caused by I/O operations, such as querying a database or writing/reading a file. CPU-bound work such as calculations requires active use of the thread and therefore cannot be handled asynchronously. A side benefit of async is the ability to "cancel" work. If the client closes the connection prematurely, this fires a cancellation token, which can be used by asynchronous methods that support it to cancel work in progress. For example, assuming you passed the cancellation token into a call to something like ToListAsync() and the client closes the connection, EF will see this and subsequently cancel the query. It's actually a little more complex than that, but you get the idea.
Therefore, you need to simply evaluate whether async is beneficial in a particular scenario. If you're going to be doing I/O and/or want to be able to cancel work in progress, then go async. Otherwise, you can stick with sync, if you like.
That said, while there's a slight performance cost to async, it's usually negligible, and the benefits it provides in terms of scalability are generally worth the trade-off. As such, it's pretty much preferred to just always go async. Additionally, if you're doing anything async, then your action should also be async. For example, everything EF Core does is async. The "sync" methods (ToList rather than ToListAsync) merely block on the async methods. As such, if you're doing a query via EF, use async. The sync methods are only there to support certain limited scenarios where there's no choice but to process sync, and in such cases, you should run in a separate thread (Task.Run) to prevent deadlocks.
UPDATE
I should also mention that things are a little murky with actions and particularly Razor Page handlers. There's an entire request pipeline, of which an action/handler is just one part. Having a "sync" action does not preclude doing something async in your view, or in some policy handler, view component, etc. The action itself only needs to be async if it itself is doing some sort of asynchronous work.
Razor Page handlers, in particular, will often be sync, because very little processing typically happens in the handler itself; it's all in subordinate processes.
Async is a very important concept to understand, and Microsoft puts a lot of focus on it, but sometimes we don't realise how important it is. Every time you are not using async, you are blocking the caller thread.
Why Use Async
Even if your API controller performs a single operation (let's say a DB fetch), you should be using async. The reason is that your server has a limited number of threads to handle client requests. Let's assume your application can handle 20 concurrent requests. If you are not using async, you are blocking a handler thread for the duration of the operation (the DB call), waiting that could be done asynchronously instead. In turn your request queue grows, because your threads are busy waiting and cannot look after new requests, and at some stage your application will stop responding. If you use async, the handler thread is free to take on more client requests while the operation runs in the background.
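The answer above is about ASP.NET Core, but the thread-starvation argument is language-neutral. Purely as an illustration, here is a small Kotlin sketch; the two-thread "handler pool" and the one-second "database call" are made-up stand-ins:

import kotlinx.coroutines.*
import java.util.concurrent.Executors
import kotlin.system.measureTimeMillis

// Simulated one-second database call that blocks: the handler thread just sits there.
fun blockingDbCall() = Thread.sleep(1000)

// The same wait, but suspending: the handler thread is released while we wait.
suspend fun suspendingDbCall() = delay(1000)

fun main() {
    // Pretend the server has only 2 request-handling threads.
    val handlers = Executors.newFixedThreadPool(2).asCoroutineDispatcher()

    runBlocking {
        // 10 blocking "requests" on 2 threads: only 2 make progress at a time, roughly 5 s total.
        val blocking = measureTimeMillis {
            (1..10).map { launch(handlers) { blockingDbCall() } }.joinAll()
        }
        println("blocking handlers: $blocking ms")

        // 10 suspending "requests": all 10 waits overlap on the same 2 threads, roughly 1 s total.
        val suspending = measureTimeMillis {
            (1..10).map { launch(handlers) { suspendingDbCall() } }.joinAll()
        }
        println("suspending handlers: $suspending ms")
    }
    handlers.close()
}

The same two threads serve all ten requests in the suspending case, which is exactly the "handler thread is free to handle more client requests" point made above.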
More Resources
I would definitely recommend watching this very informative official video from Microsoft on performance issues:
https://www.youtube.com/watch?v=_5T4sZHbfoQ
I want to stub out a JAX-RS client request. Instead of making an HTTP call, I want to return an immediately-completed client Response. I tried invoking javax.ws.rs.core.Response.ok().build(); unfortunately, when the application invokes Response.getEntity() later on, it gets this exception:
java.lang.IllegalStateException: Method not supported on an outbound message.
at org.glassfish.jersey.message.internal.OutboundJaxrsResponse.readEntity(OutboundJaxrsResponse.java:144)
I dug into Jersey's source code but couldn't figure out a way to do this. How can one translate a server-side Response into a client-side Response in Jersey (or, more generally, JAX-RS)?
Use-case
I want to stub out the HTTP call in a development environment (not a unit test) in order to prove that the network call is responsible for a performance problem I am seeing. I tried using a profiler to do this, but the call is asynchronous with 500+ threads, and some network calls return fast (~100 ms) while others return much slower (~1.5 seconds). Profilers do not follow asynchronous workflows well, and even if they did, they would only display the average time consumed across all invocations. I need to see the timing of each individual call. Stubbing out the network call allows me to test whether the server is returning calls with such a large delta (100 ms to 1.5 seconds) or whether the surrounding code is responsible.
I have an asynchronous controller.
I know the action will work asynchronously (no other action waits for it) and will return after the task completes.
So my question is: how is this different from making an asynchronous Ajax request to an action?
I think both give the same result.
An async request from JavaScript is not the same as an async task on the server.
An async request from JavaScript is still processed synchronously on the server if the action is synchronous, and as such, you may encounter thread pool starvation in large applications.
Having a request that is processed asynchronously on the server is different: it frees up the IIS thread to immediately process other requests while that request, whether it came from JavaScript or a full post/get, is processed in the background.
Some reading that may help:
http://msdn.microsoft.com/en-us/library/ee728598(v=vs.98).aspx#processing_asynchronous_requests
I am curious to know how multiple async NSURLConnection connections are handled internally. I know they use an internal background thread, but let's say I create two async NSURLConnections concurrently in code: will that create two threads internally to run them in parallel, or will the second connection wait for the first to complete? In brief, please confirm how multiple async NSURLConnections achieve concurrency.
I guess they will run in parallel. You can have a look at the WWDC session video about network programming.
An Apple engineer said that handling URL requests one by one is expensive, and that running them in parallel is much more reasonable. The reason is that, when processing a request, most of the time is actually spent on latency, not on logic processing in the device or the server. So handling requests in parallel effectively reduces the time wasted on latency.
So I guess they won't run async NSURLConnections one by one, because that would contradict this basic principle.
Besides, I have tried downloading images asynchronously using NSURLConnection. I sent out a few requests at once, like:
for (i = 1 to 4) {
    send request i
}
The responses also do not come back in sequence.
Each async NSURLConnection runs on its own thread after you start the connection (an async NSURLConnection has to be created and started on the main thread!), and its delegate and data-delegate methods are called on the main thread.
Another option is to use an NSOperationQueue and execute the requests using NSOperations. Please refer to http://www.icodeblog.com/2012/10/19/tutorial-asynchronous-http-client-using-nsoperationqueue/ for more detail.
Thanks,
Jim