I am planning to introduce Apache Ignite into an existing legacy .NET Web API project, to use it as a key/value store for detecting duplicate requests sent to the load-balanced API.
I would like to add minimal overhead to each request.
As I understand it, a client node communicates with the server over TCP.
My current plan is to create a singleton object which establishes a connection to the remote cache, and to register it in my DI container.
Is it OK to leave the node running and the TCP connection open, or should I make the Ignite object scoped so that it starts and closes on each request/response cycle?
Keep it open, as a singleton:
The Ignite object is thread safe.
It is expensive to create and connect to the cluster (in the case of the classic "thick" client).
There is also a "thin" client, which is very lightweight and can be created and disposed often. Note that the thin client is also thread safe.
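For the singleton approach with the thin client, here is a minimal sketch (the class name, cache name and endpoint address are made up; the API shown is the Apache.Ignite.Core thin-client API, whose exact configuration properties can vary slightly between Ignite versions):

```
using System;
using Apache.Ignite.Core;
using Apache.Ignite.Core.Client;
using Apache.Ignite.Core.Client.Cache;

public static class DuplicateRequestStore
{
    // Created once per process; the thin client object is thread safe.
    private static readonly Lazy<IIgniteClient> Client = new Lazy<IIgniteClient>(() =>
        Ignition.StartClient(new IgniteClientConfiguration
        {
            // Address of a remote Ignite server node (placeholder).
            Endpoints = new[] { "ignite-server-1:10800" }
        }));

    // Returns true if the request id was already seen (i.e. a duplicate).
    public static bool IsDuplicate(string requestId)
    {
        ICacheClient<string, bool> cache =
            Client.Value.GetOrCreateCache<string, bool>("seen-requests");

        // PutIfAbsent returns false when the key already exists.
        return !cache.PutIfAbsent(requestId, true);
    }
}
```

You can register the IIgniteClient (or a small wrapper like the one above) as a singleton in your DI container; the thin client then keeps a single TCP connection open for the lifetime of the process instead of reconnecting per request.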
Also, you can try to use REST:
https://apacheignite.readme.io/docs/rest-api
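If you go the REST route, each duplicate check is just an HTTP call against the Ignite REST endpoint. A rough sketch follows (host, port and cache name are placeholders; the cmd/cacheName/key parameter names should be verified against the linked documentation):

```
using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class IgniteRestExample
{
    private static readonly HttpClient Http = new HttpClient();

    public static async Task<string> GetAsync(string key)
    {
        // Returns a JSON envelope describing the cache entry (or a null response).
        string url =
            $"http://ignite-server-1:8080/ignite?cmd=get&cacheName=seen-requests&key={Uri.EscapeDataString(key)}";
        return await Http.GetStringAsync(url);
    }
}
```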
I've started exploring Redis Cluster and its C client (hiredis). I've been unable to find much info about the client's interaction with the Redis cluster. I've got some queries in this regard:
Does the client make a connection with all the nodes of the cluster (masters and slaves) at the beginning?
Is there any coordinator node which proxies the client's request to the correct node?
If not, does the client periodically get info about the hash-slot holdings of each node in the cluster (in order to send its requests to the correct node)?
Which client-cluster connection specific parameters are configurable?
Does the client make a connection with all the nodes?
Yes, the client maintains a connection with all the masters at least.
Is there a coordinator node which proxies the client's request to the correct node?
No, there isn't. By design, Redis Cluster does not have a proxy.
(Aside: there is some talk of developing a proxy solution for Redis, but I don't expect it to be released any time soon.)
Does the client periodically get info about hash slot bindings?
When a client starts up, it builds a cache of hash-slot mappings. Then, at runtime, if a slot is migrated to another master, Redis Cluster returns a specific error (a MOVED redirection) that tells the client the new owner of that slot. The client is then expected to cache the new owner and retry the request against the new node.
As a result of this design, clients usually have a very good cache of every slot and its owner, and there is very little overhead.
Which client-cluster connection parameters are configurable?
The most important parameter is the list of server nodes used to connect to the cluster. You don't have to specify all the nodes: the client can auto-discover all the masters. As long as even one node is active, the client will discover all the other nodes.
Apart from that, you have connection timeout parameters and parameters to control TLS.
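hiredis itself is a C library, but the seed-node behaviour described above can be illustrated from .NET with StackExchange.Redis (a different, cluster-aware client, shown here purely as an illustration): you list a couple of seed nodes and the client discovers the remaining masters, caches the slot map, and follows MOVED redirections at runtime.

```
using StackExchange.Redis;

public static class RedisClusterExample
{
    // Hostnames/ports are placeholders; any reachable node is enough to bootstrap,
    // and the multiplexer is created once and shared, as the library recommends.
    private static readonly ConnectionMultiplexer Mux = ConnectionMultiplexer.Connect(
        new ConfigurationOptions
        {
            EndPoints = { "redis-node-1:6379", "redis-node-2:6379" },
            ConnectTimeout = 5000, // connection timeout (ms)
            Ssl = false            // enable for TLS-protected clusters
        });

    public static string GetValue(string key)
    {
        // The client routes the command to the master that owns the key's hash slot
        // and transparently retries against the new owner after a MOVED redirection.
        return Mux.GetDatabase().StringGet(key);
    }
}
```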
I want to set up a RabbitMQ cluster behind a load balancer and connect to it using Spring AMQP. Questions:
Does the Spring client need to know the address of each node in the RabbitMQ cluster, or is it sufficient for it to know just the address of the load balancer?
If the Spring client is only aware of the load balancer, how will it maintain connections/a connection factory for each node in the cluster?
Is there any code sample which shows how to make the Spring client work with a load balancer?
It only needs the load balancer; however, Spring AMQP maintains a long-lived shared connection, so a load balancer generally doesn't bring much value unless you have multiple applications.
With a single application (with one connection factory), you will only be connected to one broker.
Clarification
See the documentation.
Starting with version 1.3, the CachingConnectionFactory can be configured to cache connections as well as just channels. In this case, each call to createConnection() creates a new connection (or retrieves an idle one from the cache). Closing a connection returns it to the cache (if the cache size has not been reached). Channels created on such connections are cached too. The use of separate connections might be useful in some environments, such as consuming from an HA cluster, in conjunction with a load balancer, to connect to different cluster members. Set the cacheMode to CacheMode.CONNECTION.
By default all components (listener containers, RabbitTemplates) share a single connection to the broker.
Starting with version 2.0.2, RabbitTemplate has a property usePublisherConnection; if this is set to true, publishers will use a separate connection from the listener containers - this is generally recommended, to avoid a blocked publisher connection preventing consumers from receiving messages.
As shown in the quote, the use of a single connection (or two) is controlled by the connection factory's cache mode.
Setting the cache mode to CONNECTION means that each component (listener container consumer, RabbitTemplate) gets its own connection. In practice there will only be one or two publisher connections, because publish operations are generally short-lived and the connection is cached for reuse. You might get one or two more publisher connections if concurrent publish operations are performed.
I have developed a WCF service for consumption within the organization's Ethernet.
The service is currently hosted in a Windows service and is using net.tcp binding.
There are 2 operation contracts defined in the service.
The client connecting to this service is a long-running Windows desktop application.
Employees (>30,000) usually have this client running throughout the week, from Monday morning to Friday evening straight.
During this lifetime there might be a number of calls to the WCF service in question, depending on certain user actions on the main desktop client.
Let us just say 1 in every 3 actions on the main desktop application would trigger a call to our service.
Now we are planning to deploy this Windows service on each employee's desktop.
I am also using `autofac` as the dependency resolver container.
My WCF service instance context is `PerSession`, but for now we have both the client and the service running on the same desktop, so I am planning to inject the same service instance for each new session using the `autofac` container.
I am not changing the `InstanceContext` attribute on the service implementation, because in the future I might deploy the same service in a different hosting environment where I would like a new service object instance for each session.
As mentioned earlier, the client is a long-running desktop application, and I have read that it is good practice to `Open` and `Close` the proxy for each call. But if I leave the service as PerSession, it will create a new service instance for each call, which might not be required given that the service and client have a 1-1 mapping. Another argument is that I am planning to inject the same instance for each session in this environment, so Open & Close for each service call shouldn't matter?
So which approach should I take: make the service `Singleton` and Open/Close the proxy for each call, or open the client-side proxy when the desktop application loads (or on the first service call) and close it only when the desktop application is closed?
My WCF service instance context is PerSession, but ideally speaking we have both the client and service running in the same desktop (for now) so I am planning to inject the same service instance for each new session using autofac container
Generally you want to avoid sharing a WCF client proxy, because if it faults it becomes difficult to push (or in your case re-inject) a new proxy to those parts of the code sharing it. It is better to create a proxy per actor.
Now am not changing the InstanceContext attribute on the service implementation because in future I might deploy the same service in a different hosting environment where I would like to have a new service object instance for each session
I think there may be some confusion here. InstanceContextMode.PerSession means that a server instance is created per WCF client proxy. That means one service instance each time you call new MyClientProxy(), even if you share that proxy with 10 other objects injected with the proxy singleton. This is irrespective of how you host it.
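To make that concrete, here is a minimal sketch (the contract, operation and service names are made up) of where the instancing mode is declared; with PerSession, each opened client proxy/channel gets its own service instance, whatever the host:

```
using System.ServiceModel;

[ServiceContract]
public interface IMyService
{
    [OperationContract]
    string GetData(int value);
}

// One service instance per client session, i.e. per opened client proxy/channel,
// irrespective of whether the host is a Windows service, IIS, or self-hosting.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
public class MyService : IMyService
{
    public string GetData(int value)
    {
        // Per-session state can be kept in instance fields here.
        return value.ToString();
    }
}
```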
As mentioned earlier, the client is a long-running desktop application and I have read that it is good practice to Open and Close the proxy for each call
Incorrect. For a PerSession service that is very expensive. There is a measurable cost in establishing the link to the service, not to mention the overhead of creating the factories. PerSession services are per-session for a reason: it implies that the service is to maintain state between calls. For example, in my PerSession services I like to establish an expensive DB connection in the constructor that can then be utilised very quickly in later service calls. Opening/closing in this example essentially means that a new service instance is created together with a new DB connection. Slow!
Plus sharing a client proxy that is injected elsewhere sort of defeats the purpose of an injected proxy anyway. Not to mention closing it in one thread will cause a potential fault in another thread. Again note that I dislike the idea of shared proxies.
Another argument is that I am planning to inject the same instance for each session in this environment, so Open & Close for each service call shouldn't matter ?
Yes, as I said, if you are going to inject then you should not call Open/Close. Then again, you should not share a proxy in a multi-threaded environment.
So which approach should I take
Follow these guidelines:
Singleton? PerCall? PerSession? That entirely depends on the nature of your service. Does it share state between method calls? Make it PerSession; otherwise you could use PerCall. Don't want to create a new service instance more than once, and want to optionally share globals/singletons between method calls? Make it a Singleton.
Rather than injecting a shared concrete instance of the WCF client proxy, inject a mechanism (a factory) that, when called, allows each recipient to create its own WCF client proxy when required (see the sketch after this list).
Do not call Open/Close after each call; that will hurt performance regardless of the service instance mode. Even if your service is essentially compute-only, repeated Open/Close for each method call on a Singleton service is still slow due to the start-up costs of the client proxy.
Dispose the client proxy ASAP when no longer required. PerSession service instances remain on the server eating up valuable resources throughout the lifetime of the client proxy or until timeout (whichever occurs sooner).
If your service is on the local machine, then consider the NetNamedPipeBinding: it runs in kernel mode, does not use the network redirector, and is faster than TCP. Later, when you deploy a remote service, add the TCP binding.
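Here is the sketch referred to above: a rough Autofac wiring (the contract name, endpoint address and binding are placeholders) where the expensive ChannelFactory<T> is a singleton and each recipient is injected with a delegate it can call to create its own proxy.

```
using System;
using System.ServiceModel;
using Autofac;

// Placeholder contract for the sketch.
[ServiceContract]
public interface IMyService
{
    [OperationContract]
    string GetData(int value);
}

public static class ContainerSetup
{
    public static IContainer Build()
    {
        var builder = new ContainerBuilder();

        // The ChannelFactory is expensive to build, so share one per AppDomain.
        builder.Register(_ => new ChannelFactory<IMyService>(
                new NetNamedPipeBinding(),                        // local machine; swap for NetTcpBinding later
                new EndpointAddress("net.pipe://localhost/MyService")))
            .SingleInstance();

        // Inject a factory delegate; each recipient creates (and owns) its own proxy.
        builder.Register<Func<IMyService>>(ctx =>
        {
            var factory = ctx.Resolve<ChannelFactory<IMyService>>();
            return () => factory.CreateChannel();
        });

        return builder.Build();
    }
}
```

Each recipient then resolves Func<IMyService>, creates a channel when it needs one, and closes (or aborts, on fault) that channel itself when it is genuinely done with it, rather than after every call.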
I recommend this awesome WCF tome
Suppose you expose a WCF Service from one project and consume it in another project using 'Add Service Reference' (in this case a Framework 3.5 WPF Application).
Will ClientBase perform any kind of connection pooling of the underlying channel when you re-instantiate a ClientBase-derived proxy, or will you incur the full overhead of establishing the connection with the service every time? I am especially concerned about this since we are using security mode="Message" with wsHttpBinding.
Please take a look at this article which describes best practices on how to cache your client proxies. If you're creating your proxy directly (MyProxy p = new MyProxy(...)), then it seems that you really can't cache the underlying ChannelFactory, which is the expensive part. But if you use ChannelFactory to create your proxy, the ChannelFactory is cached by the proxy at the AppDomain level, and it's based on the parameters you pass to the proxy (kind of like connection pooling which is based on the connection string).
The article goes through a number of details on what's going on under the covers, but the main point is that you get a performance bump if you use ChannelFactory to create your proxy instead of instantiating it directly.
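A rough sketch of that approach (the contract and the endpoint configuration name are placeholders): build the ChannelFactory once, cache it, and create cheap channels from it per call.

```
using System;
using System.ServiceModel;

// Placeholder contract; normally this comes from the service reference.
[ServiceContract]
public interface IMyService
{
    [OperationContract]
    string GetData(int value);
}

public static class MyServiceClient
{
    // Building the ChannelFactory (contract reflection, binding stack) is the
    // expensive part, so create it once and reuse it across calls.
    private static readonly ChannelFactory<IMyService> Factory =
        new ChannelFactory<IMyService>("WSHttpBinding_IMyService"); // endpoint name from config

    public static string GetData(int value)
    {
        IMyService channel = Factory.CreateChannel();
        try
        {
            string result = channel.GetData(value);
            ((IClientChannel)channel).Close();
            return result;
        }
        catch
        {
            ((IClientChannel)channel).Abort();
            throw;
        }
    }
}
```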
Hope this helps!!
This article explains that yes, there is TCP connection pooling for WCF. What it doesn't explain, though, is in which cases it will take effect. As far as I can tell, as long as you construct your proxy object by providing it a named endpoint (i.e. not using a custom Binding object), connection pooling will work. I tested this by throwing load at my web app and checking open TCP connections with netstat.
But the bottom line is you do not need to cache your proxy objects in order to re-use the TCP connections.
We have a service which is hosted as a Windows service, using netTcpBinding with message security and without reliable sessions.
On the client side we have a collection of proxies cached in a list, since channel creation and disposal are costly operations. My client connects to the server and gets data from it.
Now if I stop the server, CPU usage jumps. The worker thread that consumes the CPU is executing
void System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32, UInt32, NativeOverlapped *)
When I dispose all the proxies, the client application's CPU consumption drops to nothing. I need to know how we can fix this issue in WCF.
One question is why you have a collection of proxies on the client for a single WCF service. Say you have 20 proxies and the WCF service instancing is per-session: it will create 20 instances of the service on your server, each with memory allocated to it. If you are using per-call, then you will have even more instances. Instead of having a list of proxies, can't you reuse one proxy?
I suppose that when you stop the service, the CPU has to clean up (garbage collect) too many instances of the service in a short time, hence the jump.
Unless you close the proxies, their respective instances on the server won't be released. Try making the instancing Singleton.
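Whatever instancing mode you pick, make sure every cached proxy is closed (or aborted when faulted) as soon as you are done with it, so that the corresponding service instances on the server are released. A minimal sketch, assuming the proxies expose ICommunicationObject (ClientBase<T>-derived proxies and channels created by ChannelFactory<T> both do):

```
using System;
using System.ServiceModel;

public static class ProxyCleanup
{
    // Close a proxy gracefully; Abort it if it has faulted or if Close itself fails.
    // Until this happens, a per-session service instance stays alive on the server.
    public static void CloseOrAbort(ICommunicationObject proxy)
    {
        try
        {
            if (proxy.State == CommunicationState.Faulted)
            {
                proxy.Abort();
            }
            else
            {
                proxy.Close();
            }
        }
        catch (CommunicationException)
        {
            proxy.Abort();
        }
        catch (TimeoutException)
        {
            proxy.Abort();
        }
    }
}
```

Running this over the cached list when the client shuts down, or whenever a proxy is no longer needed, should also help avoid faulted channels spinning in I/O-completion callbacks after the server goes away.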