Kotlin: most efficient coroutine-based HTTP client

What would you suggest as the most efficient coroutine-based HTTP client for Kotlin (one that runs on Linux)?
An additional requirement is the ability to limit the number of in-progress requests.

Ktor is a pretty standard HTTP client & server library (based on coroutines, using the CIO engine). You can also create a custom plugin which would allow you to limit requests as you see fit.
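For the limiting part, you may not even need a full plugin: a minimal sketch (assuming Ktor 2.x and kotlinx-coroutines; the limit of 10 and the URLs are placeholders) is to gate every call through a coroutine Semaphore:

```kotlin
import io.ktor.client.*
import io.ktor.client.engine.cio.*
import io.ktor.client.request.*
import io.ktor.client.statement.*
import kotlinx.coroutines.*
import kotlinx.coroutines.sync.Semaphore
import kotlinx.coroutines.sync.withPermit

val client = HttpClient(CIO)
val inFlight = Semaphore(permits = 10) // at most 10 requests in progress

suspend fun limitedGet(url: String): HttpResponse =
    inFlight.withPermit { client.get(url) } // suspends while 10 are in flight

fun main() = runBlocking {
    val urls = List(100) { "https://example.com/item/$it" } // placeholder URLs
    val responses = urls.map { async { limitedGet(it) } }.awaitAll()
    println(responses.count { it.status.value == 200 })
    client.close()
}
```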

Related

correct pattern of Ktor's HttpClient usage

What's the correct pattern of usage for HttpClient in Ktor? Should I use it as a singleton per app lifecycle, or should I create one per request?
I would say that you may have more than one client per app if you need to connect to more than one logical service.
But if you are dealing with a single HTTP server, it's better to have one client, because a client establishes and holds a connection with the server. It also allocates the following resources: prepared threads, coroutines and connections. If you have multiple clients you can potentially run out of these resources.
Should I use it as a singleton per app lifecycle, or should I create one per request?
Creating an HTTP client instance is usually somewhat resource-intensive, so you should not create a client instance for every request. You should create just one HTTP client instance per app lifecycle, injected wherever it is required in your app, ensuring that:
you have used the right client configuration (thread pool size, timeouts, etc.)
you release the resources upon the app's shutdown.
The client can be configured with HttpClientEngineConfig or any of its subclasses; see the documentation for more details.
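As a sketch of that pattern (Ktor 2.x assumed; the timeout and connection numbers are illustrative, not recommendations):

```kotlin
import io.ktor.client.*
import io.ktor.client.engine.cio.*
import io.ktor.client.plugins.*

// One application-wide client, configured once and closed on shutdown.
object Http {
    val client = HttpClient(CIO) {
        install(HttpTimeout) {
            connectTimeoutMillis = 5_000
            requestTimeoutMillis = 15_000
        }
        engine {
            maxConnectionsCount = 100 // CIO engine-specific setting
        }
    }
}

// On app shutdown, release the threads, coroutines and connections:
// Http.client.close()
```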
It is better to reuse the HttpClient instance for performance reasons if the requests can be performed with the same configuration/settings.
But in some cases you have to create separate instances, because the features of an HttpClient are determined by the engine and the plugins specified when creating the instance.
For example, when using bearer authentication, HttpClient instances can be reused only when sending requests to the same resource server (with the same authorization configuration).
Similarly, if two requests should use different timeouts, they can be performed only by different HttpClients.
To summarize, an HttpClient instance should be created per "feature set", determined by the required engine and plugins.
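A sketch of the "one client per feature set" idea (Ktor 2.x with the ktor-client-auth plugin assumed; the token is a placeholder):

```kotlin
import io.ktor.client.*
import io.ktor.client.engine.cio.*
import io.ktor.client.plugins.auth.*
import io.ktor.client.plugins.auth.providers.*

// Client dedicated to one resource server, with bearer auth baked in.
val apiClient = HttpClient(CIO) {
    install(Auth) {
        bearer {
            loadTokens {
                BearerTokens(accessToken = "placeholder-token", refreshToken = "")
            }
        }
    }
}

// Separate client for unauthenticated traffic: a different feature set.
val plainClient = HttpClient(CIO)
```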

Can You Use The Server To Bundle Files & Reduce HTTP Requests?

If I understand HTTP requests correctly, they come from the client side and ask the server for the various resources needed to build the website. If this is true, is there a way to use server-side scripts to bundle everything the visitor needs in order to reduce HTTP requests?
Am I misunderstanding the way HTTP requests work?
Is there a drawback to this?
There are several techniques:
you can bundle several of your resources into a single file, for example all CSS in one file, all JavaScript in another (see the sketch after this list)
you can send multiple files over one connection: HTTP pipelining
HTTP/1.1 reuses the connection, which also reduces request/response time
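As an illustration of the first technique, a hypothetical server-side route (sketched here with Ktor's server API in keeping with the rest of this page; the file names are placeholders) can concatenate several stylesheets so the browser fetches one file instead of three:

```kotlin
import io.ktor.http.*
import io.ktor.server.application.*
import io.ktor.server.engine.*
import io.ktor.server.netty.*
import io.ktor.server.response.*
import io.ktor.server.routing.*
import java.io.File

fun main() {
    embeddedServer(Netty, port = 8080) {
        routing {
            // One request serves the concatenation of three stylesheets.
            get("/bundle.css") {
                val bundle = listOf("reset.css", "layout.css", "theme.css")
                    .joinToString("\n") { File("static/$it").readText() }
                call.respondText(bundle, ContentType.Text.CSS)
            }
        }
    }.start(wait = true)
}
```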

Correct way to handle multiple requests in UniMRCP Plugin

I'm trying to create a UniMRCP plugin. It is not clear from the documentation how multiple simultaneous requests from clients are supposed to be handled by the plugin. Which of the options below is the case?
The server creates a plugin on a different thread for each request.
The server queues the requests and sends them serially.
The plugin is supposed to manage the different requests based on session ID.
Some other option.
Based on the answer to the above, what would be the best strategy for implementing the plugin?

need distributed web load testing tool with custom HTTP requests

I searched some of the similar questions, but haven't found the right solution yet.
I need to test a web cluster (which consists of many nodes and provides a set of RESTful APIs).
Beyond HTTP GET requests, I need to generate dynamic POST/PUT requests in various ways. There are many tools, but I couldn't find the right tool for generating POST/PUT requests with non-static data.
Since I need to generate quite a large number of requests, the load test tool should run on distributed nodes. In short:
ability to write custom requests for HTTP GET, POST and PUT (any major language such as Java, Ruby, etc. is okay)
ability to work in a distributed Linux environment (i.e. use multiple nodes to generate the requests)
ability to work over both HTTP and HTTPS
optional: generating nice-looking graphs
optional: constructing a new request and queuing it for later (for stateful API testing)
Based on certain conditions, the request generator needs to parse the JSON document in the HTTP body and process it to make another GET/POST/PUT request.
Check out Tsung, Faban, and Rain. Most likely, you will have to edit some scripts within their frameworks.
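Those tools embed their own scripting, but the "parse JSON, then issue a follow-up request" logic the question describes looks roughly like this sketch (Ktor client with kotlinx-serialization assumed; the URLs and field names are hypothetical):

```kotlin
import io.ktor.client.*
import io.ktor.client.engine.cio.*
import io.ktor.client.request.*
import io.ktor.client.statement.*
import io.ktor.http.*
import kotlinx.coroutines.runBlocking
import kotlinx.serialization.json.*

fun main() = runBlocking {
    val client = HttpClient(CIO)
    // Fetch a resource and inspect its JSON body.
    val body = client.get("https://api.example.com/items/1").bodyAsText()
    val json = Json.parseToJsonElement(body).jsonObject
    // Issue a follow-up request whose payload depends on the response.
    if (json["status"]?.jsonPrimitive?.content == "pending") {
        client.post("https://api.example.com/items/1/confirm") {
            contentType(ContentType.Application.Json)
            setBody("""{"confirmedBy":"load-test"}""")
        }
    }
    client.close()
}
```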

Injecting data caching and other effects into the WCF pipeline

I have a service that always returns the same results for a given parameter, so naturally I would like to cache those results on the client.
Is there a way to introduce caching and other effects inside the WCF pipeline? Perhaps a custom binding class that could sit between the client and the actual HTTP binding.
EDIT:
Just to be clear, I'm not talking about HTTP caching. The endpoint may not necessarily be HTTP and I am looking at far more effects than just caching. For example, one effect I need is to prevent multiple calls with the same parameters.
The WCF service can use Cache-Control directives in the HTTP headers to tell the client how it should use its local cache. There are many options, all part of the HTTP protocol. For example, you can define how long the client may take the data from its local cache instead of making requests to the server. All clients that implement HTTP, like all web browsers, will follow the instructions. If your client uses Ajax requests to the WCF server, the corresponding Ajax calls will just return the data from the local cache.
Moreover, one can implement many interesting caching scenarios. For example, if one sets Cache-Control to max-age=0, the client will always revalidate the cache with the server. Typically the server sends a so-called ETag in the header together with the data. The ETag represents an MD5 hash, or any other value, that changes whenever the data changes. The client automatically sends the ETag it previously received from the server in the If-None-Match header of its GET request. The server can answer with the special response HTTP/1.1 304 Not Modified (instead of the typical HTTP/1.1 200 OK) with an empty body. In that case the client can safely take the data from its local cache.
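The revalidation handshake is plain HTTP and independent of WCF, so it can be sketched with any client stack; here is a minimal illustration of the round trip done by hand, written with the Ktor client discussed earlier on this page (the URL is a placeholder):

```kotlin
import io.ktor.client.*
import io.ktor.client.engine.cio.*
import io.ktor.client.request.*
import io.ktor.client.statement.*
import io.ktor.http.*
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    val client = HttpClient(CIO)
    // First request: remember the body and the ETag the server sent.
    val first = client.get("https://example.com/data")
    val etag = first.headers[HttpHeaders.ETag]
    var cachedBody = first.bodyAsText()

    // Second request: present the ETag for revalidation.
    val second = client.get("https://example.com/data") {
        if (etag != null) header(HttpHeaders.IfNoneMatch, etag)
    }
    if (second.status == HttpStatusCode.NotModified) {
        println("304 Not Modified; serving the cached copy")
    } else {
        cachedBody = second.bodyAsText() // data changed, refresh the cache
    }
    client.close()
}
```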
I use Cache-Control: max-age=0 together with Cache-Control: private, which switches off caching of the data on proxies and declares that the data may be cached, but not shared with other users.
If you want to read more about cache control with respect to HTTP headers, I recommend the following Caching Tutorial.
UPDATED: If you want to implement some general-purpose caching, you can use the Microsoft Enterprise Library, which contains the Caching Application Block. The Microsoft Enterprise Library is published on CodePlex with the source code. As an alternative, in .NET 4.0 you can use System.Runtime.Caching, which can be used not only in ASP.NET.
I still recommend using the HTTP binding with HTTP caching if it is at all possible in your environment. That way you can save a lot of development time and end up with a simpler, more scalable and more effective application. Because HTTP is so important, a lot of useful functionality has already been implemented that you can use out of the box. Caching is only one of those features.