What's the correct usage pattern for HttpClient in Ktor? Should I use it as a singleton per app lifecycle, or should I create one per request?
I would say that you may have more than one client per app if you need to connect to more than one logical service.
But if you are dealing with a single HTTP server, it's better to have one client, because it establishes and holds a connection with the server. It also allocates the following resources: prepared threads, coroutines and connections. If you have multiple clients you can potentially run out of these resources.
Should I use it as a singleton per app lifecycle, or should I create one per request?
Creating an HTTP client instance is usually somewhat resource-intensive, so you should not create a client instance for every request. You should create just one HTTP client instance per app lifecycle, injected wherever it is required in your app, ensuring that:
you have used the right HTTP client configuration (thread pool size, timeouts, etc.)
you are releasing the resources upon the app's shutdown.
The client can be configured with HttpClientEngineConfig (doc) or any of its inheritors. More details are in the documentation here.
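For illustration, here is a minimal sketch of the singleton approach, assuming the CIO engine and the Ktor 2.x plugin API; the engine choice, pool size and timeout values are placeholders, not recommendations:

```kotlin
import io.ktor.client.*
import io.ktor.client.engine.cio.*
import io.ktor.client.plugins.*

// One HttpClient for the whole app lifecycle, configured once
// and closed on shutdown to release threads, coroutines and connections.
object AppHttpClient {
    val client = HttpClient(CIO) {
        engine {
            maxConnectionsCount = 100          // engine-level connection pool size (illustrative)
        }
        install(HttpTimeout) {
            requestTimeoutMillis = 10_000
            connectTimeoutMillis = 5_000
        }
    }

    fun shutdown() {
        client.close()                         // release resources when the app stops
    }
}
```

In a real app you would typically register this client in your dependency injection container instead of a global object, and call close() from the framework's shutdown hook.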
It is better to reuse the HttpClient instance for performance reasons if the requests can be performed using the same configuration/settings.
But in some cases you have to create separate instances, because the features of an HttpClient are determined by the engine and the plugins specified when creating the instance.
For example, when using bearer authentication, HttpClient instances can be reused only for requests to the same resource server (with the same authorization configuration).
Similarly, if two requests should use different timeouts configured at the client level, they can be performed only by different HttpClients.
To summarize, an HttpClient instance should be created per "feature set", determined by the required engine and plugins.
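As a hedged illustration of this "per feature set" idea (again assuming Ktor 2.x and the CIO engine; the token values and timeout are placeholders):

```kotlin
import io.ktor.client.*
import io.ktor.client.engine.cio.*
import io.ktor.client.plugins.*
import io.ktor.client.plugins.auth.*
import io.ktor.client.plugins.auth.providers.*

// Client for the resource server that expects bearer authentication.
val apiClient = HttpClient(CIO) {
    install(Auth) {
        bearer {
            // placeholder tokens; load real ones from your token storage
            loadTokens { BearerTokens(accessToken = "access-token", refreshToken = "refresh-token") }
        }
    }
}

// Separate client for a slow third-party service that needs a longer timeout.
val reportClient = HttpClient(CIO) {
    install(HttpTimeout) {
        requestTimeoutMillis = 60_000
    }
}
```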
I have started to develop an ecommerce application using a microservices architecture. Every microservice will have a separate database. For now, I know I want to use a Node.js microservice to handle products and also serve as a search engine for them. I plan on having a Ruby on Rails server-microservice that should handle all the requests and then, if the request is not meant to be processed by it (e.g. the request is to add a new product), send this information somehow, using RabbitMQ, to the Node.js microservice and let it perform the action. Is this an acceptable architectural design, or am I completely off route?
Ruby on Rails server-microservice that should handle all the requests (You can do better)
A. For this, what you need is a Reverse Proxy.
A reverse proxy is able to forward each incoming request to the microservice that's responsible for processing it.
It can also act as a Load Balancer: it'll distribute the incoming requests across many services (if, for instance, you want to deploy multiple instances of the same service).
...
B. You will also need an API Gateway for managing Authentication & Authorization, and handling Security, Traceability, Logging, ... of the requests.
For (A) & (B), you can use either Nginx or Kong.
Use RabbitMQ in case you want to establish Event-based and/or Asynchronous communication among your microservices. Here's a simple example: every time a user confirms an Order, OrderService informs ProductService to update the quantity of the product that's been ordered.
The advantage of using RabbitMQ here is that OrderService won't sit in a blocking state waiting for ProductService to confirm whether it received the info or updated the quantity; it will move on and handle the other incoming requests.
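As a hedged sketch of that flow, using the RabbitMQ Java client from Kotlin; the queue name, host and JSON payload are illustrative assumptions, not part of the original answer:

```kotlin
import com.rabbitmq.client.ConnectionFactory

// OrderService side: publish an "order confirmed" event and move on.
// ProductService consumes the message on its own schedule.
fun publishOrderConfirmed(productId: String, quantity: Int) {
    val factory = ConnectionFactory().apply { host = "localhost" }
    factory.newConnection().use { connection ->
        connection.createChannel().use { channel ->
            // durable queue so messages survive a broker restart
            channel.queueDeclare("product.quantity.update", true, false, false, null)
            val payload = """{"productId":"$productId","orderedQuantity":$quantity}"""
            channel.basicPublish("", "product.quantity.update", null, payload.toByteArray())
            // No blocking wait for ProductService's reply here.
        }
    }
}
```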
I have a very basic question about how an external HTTP request is processed in an ABAP (S/4) system.
Are the requests handled per process or per thread (terms taken from the Java HTTP world)?
By threads, I mean ones that already have the objects initialised in memory from a previous request.
By process, I mean that the objects are initialised in memory every time, which is obviously time-consuming and less performant.
In the case of a clustered system the request can be load-balanced to a different system, but that is a separate topic.
Best Regards,
Saurav
The Internet Communication Manager (ICM) handles the request and forwards it, based on the URL, to your class, which implements the IF_HTTP_EXTENSION interface (configure it in transaction SICF).
SAP needs authorization to accept an HTTP request. The web logon screen sets a cookie on the client for tracking it. If you configure a static user for your service in transaction SICF, you can add cookies to the client (with an HTTP header in the response) for tracking and checking it.
There is no object cache in this interface, but you can create your own with static class attributes and other general caching capabilities in ABAP. Please check the REST service API below for a sample project:
https://github.com/pacroy/abap-rest-api
Load balancers have cookie-based (session-based) routing capabilities for finding the correct system.
I'm seeing some strange behaviour when using stateless token-based authentication on a REST API written using Spring Boot.
The client includes a JWT token with each request, and a custom filter I've written that extends GenericFilterBean adds an Authentication object, based on the claims in the token, to the security context using the following:
SecurityContextHolder.getContext().setAuthentication(authentication);
And clears the context after processing the request by doing:
SecurityContextHolder.getContext().setAuthentication(null);
However, when the simple app I've developed performs a range of operations, I sometimes see that the security context isn't being set correctly - sometimes it's null for a request that has supplied a token. The filter is being called correctly and setAuthentication() is also being called, but the request fails authentication and a 403 Forbidden is returned.
If I explicitly turn off any HTTP session management by setting the session creation policy to STATELESS, this behaviour stops.
Any ideas what could be happening here? Is the security context being shared between threads dealing with requests in some way?
It seems that the context can be shared, according to the official documentation here:
http://docs.spring.io/spring-security/site/docs/3.1.x/reference/springsecurity-single.html
In an application which receives concurrent requests in a single session, the same SecurityContext instance will be shared between threads. Even though a ThreadLocal is being used, it is the same instance that is retrieved from the HttpSession for each thread. This has implications if you wish to temporarily change the context under which a thread is running. If you just use SecurityContextHolder.getContext(), and call setAuthentication(anAuthentication) on the returned context object, then the Authentication object will change in all concurrent threads which share the same SecurityContext instance. You can customize the behaviour of SecurityContextPersistenceFilter to create a completely new SecurityContext for each request, preventing changes in one thread from affecting another. Alternatively you can create a new instance just at the point where you temporarily change the context. The method SecurityContextHolder.createEmptyContext() always returns a new context instance.
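For reference, a minimal sketch of the stateless fix the asker already found, written as a Kotlin Spring Security configuration. It assumes a recent Spring Security version with the SecurityFilterChain style (the original question predates this), and JwtAuthenticationFilter stands in for the asker's custom GenericFilterBean:

```kotlin
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.security.config.annotation.web.builders.HttpSecurity
import org.springframework.security.config.http.SessionCreationPolicy
import org.springframework.security.web.SecurityFilterChain
import org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter

// Disable HTTP-session storage of the SecurityContext so each request is
// authenticated purely from the JWT set by the custom filter, avoiding a
// SecurityContext instance shared between concurrent requests in one session.
@Configuration
class SecurityConfig(private val jwtFilter: JwtAuthenticationFilter) {

    @Bean
    fun filterChain(http: HttpSecurity): SecurityFilterChain {
        http
            .sessionManagement { it.sessionCreationPolicy(SessionCreationPolicy.STATELESS) }
            .addFilterBefore(jwtFilter, UsernamePasswordAuthenticationFilter::class.java)
            .authorizeHttpRequests { it.anyRequest().authenticated() }
        return http.build()
    }
}
```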
I have a sample Web API hosted in an OWIN process (self-hosted, not in IIS). I get a JWT token in my controller and I want to be able to retrieve it in another part of the application, a class that implements the NServiceBus IMutateOutgoingTransportMessages interface. In my other web application POC (hosted in IIS), I used a simple session variable and it works just fine. But I'd like to know what would be the best way to do it in my new OWIN self-hosted environment. A static property in a static class?
This question is really broad and difficult to answer without detailed knowledge of your specific needs. Here's my interpretation of your issue:
You're already signing each request, perhaps storing the token in the browser sessionStorage (or even localStorage), but this does not suffice
You need to retrieve the token outside of or not in relation to any request cycle (if not, this is probably where you should be looking for answers)
Your application does not need to be stateless
Just one static property for one token in a static class would of course start breaking as soon as more than one request hits the application at the same time. Implementing a class that maintains a list of tokens may be a solution, although I can't tell what key you should use to identify each token. Interface details would vary depending on things like whether you need to retrieve the token more than once.
Thread safety issues would apply to all handling and implementation of such a class. Using Immutable Collections and functional programming practices as an inspiration may help.
If lingering tokens pose a problem (and they probably would, from a security perspective if nothing else), you need to figure out how to make sure that tokens do not outstay their welcome, even if the cycle is for some reason not completed.
Seeing how you used Session as a solution in your POC, I'm assuming you want similar behaviour, and that one user should not be allowed to carry two tokens at the same time. You could store the tokens in a database, or even in the local file system, making maintenance and validity a separate issue altogether.
There are implementations of cache-like functionality already available for OWIN self-hosted applications, and maybe one of those would serve as a shortcut to implementing everything yourself.
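As a language-agnostic sketch of such a store (shown in Kotlin even though the asker's stack is .NET/OWIN; the key choice, TTL and lazy eviction are assumptions made purely for illustration):

```kotlin
import java.time.Duration
import java.time.Instant
import java.util.concurrent.ConcurrentHashMap

// A thread-safe, expiring token store keyed by user id: one token per user,
// with lingering tokens evicted lazily once their TTL has passed.
class TokenStore(private val ttl: Duration = Duration.ofMinutes(15)) {
    private data class Entry(val token: String, val expiresAt: Instant)
    private val tokens = ConcurrentHashMap<String, Entry>()

    fun put(userId: String, token: String) {
        tokens[userId] = Entry(token, Instant.now().plus(ttl))
    }

    fun get(userId: String): String? {
        val entry = tokens[userId] ?: return null
        return if (entry.expiresAt.isAfter(Instant.now())) entry.token
        else { tokens.remove(userId); null }   // drop expired tokens on access
    }
}
```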
If this token business in fact is the only reason for introducing state in your application, then the best solution IMHO would be to rethink your architecture so that the application can remain stateless.
I'm facing a similar dilemma on a server I'm currently developing for a customer. My problem is that the server must make calls to (and retain a live connection with) a legacy, multithreaded DLL (aka the SDK).
I struggled to get this working on IIS with a regular Web API project. It failed badly, since IIS recycles threads when it determines that a thread is going rogue... which is what the SDK thread looks like from that perspective. Also, the SDK must be able to call back to the caller (the client - a single-page app), and for this I'm using SignalR.
I then tried a multi-part system (single page + Web API on IIS + a WCF service for the SDK integration). But it is a real nightmare to manage because of the two-way async communication that must occur between all the apps. Again: failure.
So I reverted to a single self-hosted OWIN + Web API service in a console app (for now). My problem is that some of the calls are lengthy and are processed in a worker thread. I managed to pass the SignalR client id in each AJAX call via headers. I can extract the id in the Web API controller. But when the task goes async, I need to get the id (via a Unity-injected service) from the class that manages the async task. This is where my problem is similar to yours. In IIS-hosted apps we have HttpContext: it is contextualized on each client call and follows any thread changes in the pipeline... but not in self-hosted OWIN apps...
I'm looking into Thread Local Storage, CallContext... and other means of keeping track of the original caller info during the lifecycle of the async call. I have read about the OWIN pipeline, and I can capture the info in an OWIN middleware... but how do I safely keep that info for use in injected services? I'm still searching for an answer...
I was wondering if you have found a solution to this rather interesting problem?
I prefer adding to your thread rather than start another parallel thread / SO question.
Greetings,
in our company we are developing a WCF service. It is used as a server and it works quite well. However, the customer wishes that, after they log in to the application, they could also see which other users are logged in.
I read about CallbackContract (based on a WCF chat application). How can we achieve this goal?
Similar question asked here
You can definitely manage the logged-in users inside the server. I have created a personal pattern for dealing with such situations, and it usually goes like this:
create a client class inside the WCF server that will hold all the needed information about the client.
create 2 methods in the service: logIn and logOut. The logIn method should gather all the information about the client that you want to store. Make sure to define properties that can uniquely identify a client instance. When the client connects to the server it calls the logIn method, allowing the server to gather and save the information from the client. If using callbacks, this is the place to save the callback context object in the client object. You can now save the Client object in the WCF server instance (I use a dictionary). When the client logs out, it calls the logOut method and the server removes the entry.
create a KeepAlive method in the server that regularly checks the connected clients to see if they are still connected (in case of network failure or an app crash a client may not call the logOut method).
I think this is the simplest way (not saying it's the best) to manage clients in the server. There is no problem with having multiple clients from the same computer (you save the Context when a client logs in) as long as you have a way of uniquely identifying clients.
As for your last question, having multiple services should not be a problem. In fact you have the same WCF server with different contracts (and endpoints) for the different services you offer. All the contracts reside in the same WCF server instance, so they can all access the connected client list.
If you have further questions, I would be happy to answer them.
You can find the code you need to actually build the WCF service you require here