According to MSDN here we should cache the objects used to communicate with Service Bus. It doesn't, however, explain this in any more detail.
To be more specific, I create the MessagingFactory for a given connection string and cache it for as long as possible. I use the factory to create the MessageReceiver and MessageSender instances for different queues and topics. Now my question is: should I also cache them?
I do not call Close on them.
Just to be clear, when we say cache here, what we mean is keep a reference to the object, not store it in a cache [like Redis]. The guidance from Microsoft is just pointing out that establishing a connection to Service Bus is an expensive operation compared to just sending/receiving messages, and there's no benefit to tearing down the connection and reestablishing it on every send/receive.
When I write code using these objects, I usually create a static property on a class and keep it in there, so the objects last for the lifetime of the app domain. In an ASP.NET application, if you don't like the static class approach, you could keep the Service Bus objects in the HttpContext.Application collection, for example, Application["ServiceBusReceiver"] = myServiceBusReceiver; and then you just keep pulling it out when you need it.
(And, yes, there are other ways to do "global" objects in ASP.NET... not looking to wade into that topic here. :-) )
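As a minimal sketch of the static approach, assuming the classic WindowsAzure.ServiceBus SDK (the class name, queue name, and connection string below are illustrative):

    using System;
    using Microsoft.ServiceBus.Messaging;

    // Keeps one MessagingFactory and one MessageSender alive for the
    // lifetime of the app domain; Lazy<T> gives thread-safe, once-only
    // initialization.
    public static class ServiceBusClients
    {
        private static readonly string connectionString =
            "Endpoint=sb://..."; // placeholder: read this from config in practice

        private static readonly Lazy<MessagingFactory> factory =
            new Lazy<MessagingFactory>(() =>
                MessagingFactory.CreateFromConnectionString(connectionString));

        private static readonly Lazy<MessageSender> ordersSender =
            new Lazy<MessageSender>(() =>
                factory.Value.CreateMessageSender("orders")); // "orders" is a made-up queue

        public static MessageSender OrdersSender
        {
            get { return ordersSender.Value; }
        }
    }

Callers then use ServiceBusClients.OrdersSender.Send(...) from anywhere in the app, and the same factory can hand out cached receivers in the same way.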
This is (sort-of) the same idea as SQL connection pooling... once the connections are established, they're kept around and reused. Ultimately, it's not a functional difference, it's just a performance optimization that reduces the number of calls over the network.
Hope that helps,
Scott
Related
How can I make the WCF server instance (the instance of the class in the .svc.cs / .svc.vb file) stay alive between requests?
It's a stateless, read-only type of service: I'm fine with different clients reusing the same instance. However, it's not thread-safe: I don't want two threads to execute a method on this instance concurrently.
Ideally, what I'm looking for is that WCF manages a "worker pool" of these instances. Say, 10. New request comes in: fetch an instance, handle the request. Request over, go back to the pool. Already 10 concurrent requests running? Pause the 11th until a new worker is free.
What I /don't/ want is per-client sessions. Startup for these instances is expensive, I don't want to do that every time a new client connects.
Another thing I don't want: dealing with this client-side. This is not the responsibility of the client, which should know nothing about the implementation of the server. And I can't always control that.
I'm getting a bit lost in unfamiliar terminology from the MSDN docs. I have a lot working, but this pool system I just can't seem to get right.
Do I have to create a static pool and manage it myself?
Thanks
PS: A source of confusion for me is that almost everything I find on this topic points toward configuring the bindings, like basicHttp or wsHttp. But that doesn't sound right: this should sit at a higher level, unrelated to the binding, because it's about how the workers are managed. Or not?
If you have a WCF service that centralizes business logic, provides/controls access to a “single” backend resource (e.g. a data file or network socket), or otherwise contains some type of shared resource, then you most likely need to implement a singleton.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
In general, use a singleton object if it maps well to a natural singleton in the application domain. A singleton implies the singleton has some valuable state that you want to share across multiple clients. The problem is that when multiple clients connect to the singleton, they may all do so concurrently on multiple worker threads. The singleton must synchronize access to its state to avoid state corruption. This in turn means that only one client at a time can access the singleton. This may degrade responsiveness and availability to the point that the singleton is unusable as the system grows.
The singleton service is the ultimate shareable service, which has both pros (as indicated above) and cons (as implied in your question: you have to manage thread safety). When a service is configured as a singleton, all clients are connected to the same single well-known instance independently of each other, regardless of which endpoint of the service they connect to. The singleton service lives forever and is only disposed of once the host shuts down; it is created exactly once, when the host is created.
https://msdn.microsoft.com/en-us/magazine/cc163590.aspx
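As a concrete sketch of how this maps onto the "pool of 10" requirement: a multi-threaded singleton plus WCF's throttling behavior caps concurrency at 10 and makes the 11th caller wait (IWorkerService, WorkerService, and the address below are placeholders):

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Description;

    [ServiceContract]
    public interface IWorkerService
    {
        [OperationContract]
        void DoWork();
    }

    // One shared instance, dispatched to concurrently; the expensive
    // startup happens exactly once, when the host is created.
    [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                     ConcurrencyMode = ConcurrencyMode.Multiple)]
    public class WorkerService : IWorkerService
    {
        public void DoWork() { /* guard any shared state yourself */ }
    }

    public static class Program
    {
        public static void Main()
        {
            var host = new ServiceHost(new WorkerService(),
                new Uri("http://localhost:8000/worker")); // illustrative address
            host.AddServiceEndpoint(typeof(IWorkerService), new BasicHttpBinding(), "");

            // Cap concurrent calls at 10; request #11 queues until a slot frees up.
            var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
            if (throttle == null)
            {
                throttle = new ServiceThrottlingBehavior();
                host.Description.Behaviors.Add(throttle);
            }
            throttle.MaxConcurrentCalls = 10;

            host.Open();
            Console.ReadLine();
            host.Close();
        }
    }

Note this is one shared instance limited to 10 concurrent calls, not a pool of 10 independent instances; WCF has no built-in instance pooling, so truly separate pooled instances would mean managing the pool yourself (for example via a custom IInstanceProvider).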
I have been tasked with creating a set of web services. We are a Microsoft shop, so I will be using WCF for this project. There is an interesting design consideration that I haven't been able to figure out a solution for yet. I'll try to explain it with an example:
My WCF service exposes a method named Foo().
10 different users call Foo() at roughly the same time.
I have 5 special resources called R1, R2, R3, R4, and R5. We don't really need to know what the resource is, other than the fact that a particular resource can only be in use by one caller at a time.
Foo() is responsible for performing an action using one of these special resources. So, in a round-robin fashion, Foo() needs to find a resource that is not in use. If no resources are available, it must wait for one to be freed up.
At first, this seems like an easy task. I could maybe create a singleton that keeps track of which resources are currently in use. The big problem is that I need this solution to be viable in a web farm scenario.
I'm sure there is a good solution to this problem, but I've just never run across this scenario before. I need some sort of resource tracker / provider that can be shared between multiple WCF hosts.
Any ideas from the architects out there would be greatly appreciated!
Create another central service which only the web services know about. This service takes on the role of the resource manager.
All of the web services in the farm will communicate with this central service to query for resource availability and to "check out" and "check in" resources.
You could track the resource usage in a database table, which all the servers on the farm could access.
Each resource would have a record in the database, with fields that indicate whether (or since when) it is in use. You could get fancy and implement a timeout feature as well.
For troubleshooting purposes, you could also record who is using the resource.
If you record when each resource is being used (in another table), you would be able to verify that your round-robin is functioning as you expect, decide whether you should add more copies of the resource, etc.
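A hedged sketch of that check-out/check-in logic against a shared table (the table and column names are made up; the single atomic UPDATE is what keeps two farm servers from claiming the same resource):

    using System;
    using System.Data.SqlClient;

    public static class ResourceBroker
    {
        // Atomically claims any free resource; returns its id, or null if all are busy.
        public static string TryCheckOut(string connectionString, string user)
        {
            const string sql = @"
                UPDATE TOP (1) Resources
                SET InUse = 1, CheckedOutBy = @user, CheckedOutAt = SYSUTCDATETIME()
                OUTPUT inserted.ResourceId
                WHERE InUse = 0;";

            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("@user", user);
                conn.Open();
                return cmd.ExecuteScalar() as string; // null => wait and retry
            }
        }

        public static void CheckIn(string connectionString, string resourceId)
        {
            const string sql =
                "UPDATE Resources SET InUse = 0, CheckedOutBy = NULL WHERE ResourceId = @id;";
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("@id", resourceId);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }

Recording CheckedOutAt also makes the timeout feature straightforward: a cleanup job can release any row checked out longer than some threshold.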
There are any number of approaches to solving this problem, each with their own costs and benefits. For example:
Using MSMQ to queue all requests: worker processes pull messages from the queue, pass them to Rn, and post responses back to a response queue for Foo to dispatch to the appropriate caller.
Using an in-memory or locally persisted message dispatcher to send the next request to an on-box service (e.g. via Named Pipes for perf) based upon some distribution algorithm of your choice.
etc.
Alas, you don't indicate whether your requests have to survive power outages, whether they need to be transactionally aware, whether the callers are also WCF, how quickly these calls will arrive, how long it takes for Rn to return results, etc.
Whichever solution you choose, I strongly encourage you to split your call to Foo() into a RequestFoo() and GetFooResponse() pair or implement a WCF callback hosted by the caller to receive results asynchronously.
If you do NOT do this, you're likely to find that your entire system grinds to a halt the second traffic exceeds your resources' ability to satisfy the workload.
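The suggested split could look roughly like this (a sketch only; the Guid ticket for correlating a request with its response is my assumption, not something specified in the question):

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IFooService
    {
        // Enqueues the work and returns immediately with a correlation ticket.
        [OperationContract]
        Guid RequestFoo(string input);

        // Polled by the caller; returns null until the result is ready.
        [OperationContract]
        string GetFooResponse(Guid ticket);
    }

The callers stay responsive no matter how backed up R1..R5 are, because no call blocks while waiting for a resource.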
I have a web tier that forwards calls onto an application tier. The web tier uses a shared, cached channel to do so. The application tier services in question are stateless and have concurrency enabled.
But they are not being called concurrently.
If I alter the web tier to create a new channel on every call, then I do get concurrent calls onto the application tier. But I want to avoid that cost, since it is functionally unnecessary for my scenario: I have no session state, nor do I need to re-authenticate the caller each time. I understand that the creation of the channel factory is far more expensive than the creation of the channels, but it is still a cost I'd like to avoid if possible.
I found this article on MSDN that states:
"While channels and clients created by the channels are thread-safe, they might not support writing more than one message to the wire concurrently. If you are sending large messages, particularly if streaming, the send operation might block waiting for another send to complete."
Firstly, I'm not sending large messages (just a lot of small ones since I'm doing load testing) but am still seeing the blocking behavior. Secondly, this is rather open-ended and unhelpful documentation. It says they "might not" support writing more than one message but doesn't explain the scenarios under which they would support concurrent messages.
Can anyone shed some light on this?
Addendum: I am also considering creating a pool of channels that the web server uses to fulfill requests. But again, I see no reason why my existing approach should block and I'd rather avoid the complexity if possible.
After much ado, this all came down to the fact that I wasn't calling Open explicitly on the channel before using it. Apparently an implicit Open can preclude concurrency in some scenarios.
You can cache the WCF proxy but still create a channel for each service call: this ensures concurrency, is not very expensive compared with creating the proxy from scratch, and re-authentication for each call will not be necessary. This is explained on Wenlong Dong's blog in "Performance Improvement for WCF Client Proxy Creation in .NET 3.5 and Best Practices" (a much better source of WCF information and guidance than MSDN).
Just for completeness: Here is a blog entry explaining the observed behavior of request serialization when not opening the channel explicitly:
http://blogs.msdn.com/b/wenlong/archive/2007/10/26/best-practice-always-open-wcf-client-proxy-explicitly-when-it-is-shared.aspx
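In code, the fix amounts to this (a sketch; IAppService and the endpoint configuration name are placeholders):

    using System.ServiceModel;

    // Create the factory once (expensive) and cache it.
    var factory = new ChannelFactory<IAppService>("appTierEndpoint");

    IAppService proxy = factory.CreateChannel();

    // Open explicitly: with an implicit Open, the first call opens the
    // channel and WCF serializes all requests through that code path,
    // which is exactly the blocking behavior described above.
    ((IClientChannel)proxy).Open();

    // The opened channel can now safely serve concurrent calls.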
I was told that WCF callbacks are not to be used in situations when the connection is kept for a long time (say, a week) even though the callback operations themselves are short (< 1s). Is this true? Where can I find more information on this?
Since I still got no reply, I'll add my own thoughts.
To answer the actual question, no, WCF connections can be used for long-term connections. Nothing in the design prevents that by itself, and it's not an anti-pattern.
However, since any kind of connection is unstable to a certain degree, it is required to handle (both intended and accidental) connection faults. Clients need to be able to reconnect, and servers should not choke on lost connections. In the specific case of WCF the server should also be able to persist and restore its data no matter when or how it is disposed.
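On the client side, that fault handling might look something like the following sketch (IDuplexService and the reconnect strategy are assumptions, not part of the question):

    using System;
    using System.ServiceModel;

    // Recreates the proxy whenever the channel faults, so a week-long
    // connection can survive transient network failures.
    static IDuplexService CreateProxy(DuplexChannelFactory<IDuplexService> factory)
    {
        IDuplexService proxy = factory.CreateChannel();
        ((ICommunicationObject)proxy).Faulted += (sender, args) =>
        {
            ((ICommunicationObject)sender).Abort(); // discard the faulted channel
            // ...then schedule a reconnect (ideally with backoff) and resubscribe.
        };
        ((ICommunicationObject)proxy).Open();
        return proxy;
    }

The server side needs the matching discipline: treat a dropped callback channel as normal, and persist whatever state must outlive the connection.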
I'm currently playing around a little with WCF, and in doing so I hit a question where I'm not sure if I'm on the right track.
Let's assume a simple setup that looks like this: client -> service1 -> service2.
The communication is tcp-based.
Where I'm not sure is whether it makes sense for service1 to cache the client proxy for service2. That would mean multi-threaded access to that proxy, which I'd have to deal with.
I'd like to take advantage of the TCP session to get better performance, but I'm not sure if this "architecture" is supported by WCF/the network/whatever at all. The problem I see is that all the communication goes over the same channel if I'm not using locks or some other synchronization.
I guess the better idea is to cache the proxy in a thread-static variable.
But before I do that, I wanted to confirm that it's really not a good idea to have only one proxy instance.
Thanks in advance,
Martin
If you don't know that you have a performance problem, then why worry about caching? You're opening yourself to the risk of improperly implementing multithreading code, and without any clear, measurable benefit.
Have you measured performance yet, or profiled the application to see where it's spending its time? If not, then when you do, you may well find that the overhead of multiple TCP sessions is not where your performance problems lie. You may wish you had the time to optimize some other part of your application, but you will have spent that time optimizing something that didn't need to be optimized.
I am already using such a structure. I have one service that collaborates with some other services to realize the implementation. In my case, the client calls a one-way method on the first service. I am getting very good benefit from it. I have also configured it to limit the number of concurrent calls in some of the cases.
Yes, that architecture is supported by WCF. I deal with applications every day that use similar structures, using NetTCPBinding.
The biggest thing to worry about is the ConcurrencyMode of the various services involved, and making sure that they do not block unnecessarily. It is very easy to get into a scenario where you will be guaranteed timeouts, or at the least have poor performance due to multiple, synchronous calls across service boundaries. Even OneWay calls are not guaranteed to immediately return.
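For reference, the combination below is a common way to avoid the accidental call serialization that answer warns about (a sketch; IService1 and its operation are placeholders, and whether Single or PerCall fits depends on the service's state):

    using System.ServiceModel;

    [ServiceContract]
    public interface IService1
    {
        [OperationContract]
        string Forward(string request);
    }

    // One shared instance whose calls are NOT serialized: WCF dispatches
    // them concurrently, so the implementation must be thread-safe.
    [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                     ConcurrencyMode = ConcurrencyMode.Multiple)]
    public class Service1 : IService1
    {
        public string Forward(string request)
        {
            // Synchronous calls out to service2 here can still stack up;
            // keep operations short or use asynchronous patterns.
            return request;
        }
    }

The default ConcurrencyMode is Single, which serializes every call into an instance; that default is the usual source of the timeouts mentioned above.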
Be careful with [ThreadStatic]: .NET can switch the thread servicing a request, so the variable can come back null.
For sessions, perhaps you could use session-enabled calls:
http://msdn.microsoft.com/en-us/library/ms733040.aspx
But I would not recommend doing this unless you actually have a performance issue. I would use the normal approach, or, if service1 is just forwarding calls, you could get that functionality easily with the routing features in WCF 4.0:
http://www.sdn.nl/SDN/Artikelen/tabid/58/view/View/ArticleID/2979/Whats-New-in-WCF-40.aspx
Regards
Firstly, make sure you know about the behaviour of ThreadStatic in ASP.NET applications:
http://piers7.blogspot.com/2005/11/threadstatic-callcontext-and_02.html
The same thread that starts your request may not be the thread that finishes it. Basically, the only safe place for thread-local storage in ASP.NET applications is inside HttpContext. The next obvious approach would be to create a wrapper client that manages your WCF client proxy and ensures each I/O request is thread-safe using locks.
My personal preference, though, would be to use a pool of proxy clients: whenever you need one, pop it off the pool queue, and when you're finished with it, put it back on.
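That pool can be as simple as a BlockingCollection (a sketch; IAppService is a placeholder contract and the pool size of 10 is arbitrary):

    using System.Collections.Concurrent;
    using System.ServiceModel;

    public sealed class ProxyPool
    {
        private readonly BlockingCollection<IAppService> pool =
            new BlockingCollection<IAppService>();

        public ProxyPool(ChannelFactory<IAppService> factory, int size = 10)
        {
            for (int i = 0; i < size; i++)
            {
                IAppService proxy = factory.CreateChannel();
                ((IClientChannel)proxy).Open(); // open explicitly, per the advice above
                pool.Add(proxy);
            }
        }

        // Blocks until a proxy is free: the "pop it off the pool queue" step.
        public IAppService Take() { return pool.Take(); }

        // The "put it back on" step.
        public void Return(IAppService proxy) { pool.Add(proxy); }
    }

Callers should wrap each use in try/finally so the proxy always goes back, and should Abort and replace any proxy whose channel has faulted rather than returning it to the pool.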