I would like to be able to prioritize the outgoing data/messages from a WCF service.
Here's the basic scenario:
Client requests from server a data stream. The data stream is live, large, and potentially unending (equipment monitoring data). We'll call this HighPriorityDataStream.
Client requests additional data. We'll call this LowPriorityData.
The bandwidth is limited (think dial-up modem or satellite). It is very important that the current HighPriorityDataStream not be interrupted or delayed when a request for LowPriorityData is made.
I have a sockets-based legacy system already where this is accomplished by manually controlling the order that data is placed into the socket buffer. High-priority data is placed in the buffer, and if there's room left over, lower priority data is added to fill the rest of the buffer.
I'm trying to reengineer this process with WCF... I don't know of any out-of-the-box solutions and am thinking I may need to write a custom channel behavior, but I'd like to pick the brains of the community before I go that route :)
I think there is no general out-of-the-box solution. The solution depends on your other requirements. Do you want to control bandwidth per client or for the whole server (all clients)? Do you want to call the low-priority operation from the same proxy, or do you start a new proxy for the new operation? Do you want to run multiple high-priority operations at the same time? Do you want to prioritize incoming requests?
The easiest solution assumes that you control bandwidth per client, that you reuse the same proxy for all calls, that only one high-priority operation can run at a time, and that requests are processed in FIFO order. Then you just mark your service implementation with [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession, ConcurrencyMode = ConcurrencyMode.Single)] (this should be the default setting for services exposed over NET.TCP). This setting reuses the same service instance for all calls from the same client proxy, but only one call is processed at a time (the others wait in a queue until they are processed or time out).
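For illustration, a minimal sketch of a service marked that way (the contract and method names are hypothetical):

    using System.ServiceModel;

    // Contract and method names below are hypothetical.
    [ServiceContract(SessionMode = SessionMode.Required)]
    public interface IMonitoringService
    {
        [OperationContract]
        byte[] GetHighPriorityChunk();

        [OperationContract]
        byte[] GetLowPriorityData();
    }

    // One service instance per client session; calls from that session
    // are queued and processed one at a time.
    [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession,
                     ConcurrencyMode = ConcurrencyMode.Single)]
    public class MonitoringService : IMonitoringService
    {
        public byte[] GetHighPriorityChunk() { return new byte[0]; } // stream a chunk here
        public byte[] GetLowPriorityData() { return new byte[0]; }   // fetch extra data here
    }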
Best regards,
Ladislav
After a lot of poking around (thanks Ladislav for your thoughtful ideas), I've come to the conclusion that I'm asking the communication layer to solve a business-layer problem. To better state the problem: there are multiple connections and one data source. The data source must prioritize which data it gathers from its own sources (live data streams and also persisted databases) and send the data back to the various clients based on their priority. To be clear, the clients have a relative priority based on their role-based identity, the data sources have a priority (prefer live data over persisted data), and individual fields within a data source have a priority order (all else being equal, field X must always be sent before field Y).
This is all firmly business logic, and the solution we adopted was a set of priority queues that automatically sorted the input data items based on these priority requirements and then served each request in that order.
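For illustration, a minimal sketch of such a multi-level priority queue (the priority fields are placeholders for the role/source/field rules described above, not our production code):

    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical priority fields: lower value = served first.
    public class OutboundItem
    {
        public int ClientPriority;  // from the client's role-based identity
        public int SourcePriority;  // live data before persisted data
        public int FieldPriority;   // field X before field Y
        public byte[] Payload;
    }

    // Orders pending items by client, then source, then field priority.
    public class OutboundQueue
    {
        private readonly List<OutboundItem> _items = new List<OutboundItem>();
        private readonly object _lock = new object();

        public void Enqueue(OutboundItem item)
        {
            lock (_lock) { _items.Add(item); }
        }

        // Returns the highest-priority pending item, or null if empty.
        public OutboundItem Dequeue()
        {
            lock (_lock)
            {
                var next = _items
                    .OrderBy(i => i.ClientPriority)
                    .ThenBy(i => i.SourcePriority)
                    .ThenBy(i => i.FieldPriority)
                    .FirstOrDefault();
                if (next != null) _items.Remove(next);
                return next;
            }
        }
    }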
Is there an upper limit to the number of unique IEndpointInstances that can be hosted within a single process?
I'm considering a design that will see up to 100 unique IEndpointInstances, all listening on separate queues, active simultaneously.
Will this cause a problem for NServiceBus? Could the process deadlock or spin up so many threads as to be unresponsive and useless?
The question NServiceBus - How to get separate queue for each message type receiver subscribes to? seems to suggest that you cannot have multiple endpoints in a process, but this is an older post. I have built a small sample against NServiceBus 6 beta4 that does work.
There is a similar question, NServiceBus Single Process, but Multiple Input queues, which concluded that, based on the OP's context, using Satellite Features was the recommended approach. However, in my case, I have 100 (functionally different) sagas (one per queue), where each saga could need to receive similar messages, but I need to make sure that only the correct saga receives the message. Therefore, I don't think implementing a custom feature will meet my requirements. Or will Satellite Features support Sagas?
One of the options is self multi-hosting. Using this approach, you host the endpoints yourself in the same process. There are a few things to take into consideration, such as:
Assembly scanning (might require custom scanning logic per endpoint).
Throughput (for high-throughput endpoints I'd recommend a separate hosting process).
To update/redeploy a single endpoint, you'll be taking all of the other 99 endpoints down as well.
While there's no hard limit on how many endpoints can be co-hosted, 100 sounds like a lot. That said, it also depends on how heavy the load on those endpoints is. Whether you process 1 msg/sec or 1K msg/sec determines to a large extent whether this is a viable option or not.
Have a look at the sample that does exactly that.
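For reference, a minimal sketch of what that multi-endpoint self-hosting could look like, assuming NServiceBus 6 with the MSMQ transport (the endpoint names are placeholders):

    using System.Threading.Tasks;
    using NServiceBus;

    public static class MultiHost
    {
        public static async Task Main()
        {
            // Start 100 logically separate endpoints in one process.
            // Endpoint/queue names here are placeholders.
            var endpoints = new IEndpointInstance[100];
            for (var i = 0; i < endpoints.Length; i++)
            {
                var config = new EndpointConfiguration("saga.endpoint." + i);
                config.UseTransport<MsmqTransport>();
                config.SendFailedMessagesTo("error");
                endpoints[i] = await Endpoint.Start(config);
            }

            // ... run until shutdown is requested, then stop each endpoint.
            foreach (var endpoint in endpoints)
                await endpoint.Stop();
        }
    }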
This is a new area for me so hopefully my question makes sense.
In my program I have a large number of clients, which are Windows services running on laptops that are often disconnected. Occasionally they come online, and I want them to receive updates based on user profiles. There are many types of notifications that require the client to perform some work on the local application (i.e., the laptop).
I realize that I could do this with a series of RESTful database queries, but since there are so many clients (upwards of 10,000) and there are lots of different notification types, I was curious whether this is a problem better suited to a messaging product like RabbitMQ or even 0MQ.
But how would one set this up (let's assume RabbitMQ)?
Would each user be assigned their own queue?
Or is it preferable to have each queue be a distinct notification type, and to use some combination of direct exchanges or filtering of messages based on a routing key, where the routing key could be a username?
Since each user may potentially have a different set of notifications based on their user profile, I am thinking that each client/consumer would have a specific message for each notification sitting on a queue waiting for them to come online and process it.
Is this the right way of thinking about the problem? Thanks in advance.
It will be easier for you to balance a lot of queues than to filter long ones, so it's better to use a queue per consumer.
Messages can have arbitrary headers and bodies, so headers are the right place for notification types.
Since you will be using long-lived queues, with messages waiting on disk for consumers, you had better use lazy queues: https://www.rabbitmq.com/lazy-queues.html (available since version 3.6.0).
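To make that concrete, here is a rough sketch of a queue-per-user setup with a direct exchange and a lazy queue, using the RabbitMQ .NET client (the exchange and queue names are made up):

    using System.Collections.Generic;
    using System.Text;
    using RabbitMQ.Client;

    public static class NotificationPublisher
    {
        // Exchange and queue names here are hypothetical.
        public static void Publish(string username, string notificationType, string body)
        {
            var factory = new ConnectionFactory { HostName = "localhost" };
            using (var connection = factory.CreateConnection())
            using (var channel = connection.CreateModel())
            {
                channel.ExchangeDeclare("notifications", ExchangeType.Direct, durable: true);

                // Durable, lazy queue per user: messages wait on disk
                // until the laptop reconnects and consumes them.
                channel.QueueDeclare(
                    queue: "user." + username,
                    durable: true,
                    exclusive: false,
                    autoDelete: false,
                    arguments: new Dictionary<string, object> { { "x-queue-mode", "lazy" } });

                channel.QueueBind("user." + username, "notifications", routingKey: username);

                var props = channel.CreateBasicProperties();
                props.Persistent = true;
                props.Headers = new Dictionary<string, object> { { "type", notificationType } };

                channel.BasicPublish("notifications", username, props, Encoding.UTF8.GetBytes(body));
            }
        }
    }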
I use a queue per message type. I have tended to create a Windows service per queue to process those messages. Is this the best use of resources? I suspect not. How do you decide how many processes should service a queue (or queues)?
One thing to consider here is service levels. Does all of the data represented by the message types require identical processing service levels? Are some messages more important than others? Do some messages have latency requirements for delivery? Are some messages critical to the business whereas others are not? Are the expected volumes of the message types different?
Currently, the way you have things set up means that you can manage each of your message-type channels as a separate concern, which gives you maximum flexibility to support all possible service-level scenarios. However, this comes at the cost of higher resource usage and more moving parts.
I would say that unless resource usage is a concern, your setup is the best possible, as it decouples your data-processing channels from one another very effectively.
Our company leases a music service to its clients. The product consists of an automated mp3 player and daily renewals/updates of the customers' music library (mp3 songs) downloaded to their machines. So far we have used an ugly solution for the mp3 updates, synchronizing server and client folders using GBridge. This is obviously a disadvantage, as we force our clients to download our whole music library (currently 25,000 songs) while most of them will never play songs from all of our music categories (pop, rock, etc.). Most important, we can only offer one subscription package (our whole music library) while our competitors offer packages by category at lower prices. For those reasons we decided to turn to WCF.
The service uses PerCall instancing mode and implements two operations, invoked from a WinForms client application with the classic request-reply pattern.
The first operation retrieves from a database the categories a client is allowed to download from (request) and sends back to the client a list of these categories (reply).
The second operation is used for downloading. The client first downloads an xml version of the server's database. A similar xml file lies on the client side. The client app checks which songs, in each of the categories returned from the first operation, are missing from its own xml compared to the server's xml file. If any files (elements in the xml) are missing, it downloads them one file at a time. After each download, the client updates its xml and does the same comparison again until all files (elements) match in the two xml files.
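For illustration only, a simplified sketch of that comparison step (the element and attribute names are hypothetical; the real schema may differ):

    using System.Collections.Generic;
    using System.Linq;
    using System.Xml.Linq;

    public static class LibraryDiff
    {
        // Returns the ids of songs present on the server but missing on
        // the client, limited to the categories the client may download.
        public static string[] FindMissingSongs(string serverXmlPath, string clientXmlPath,
                                                string[] allowedCategories)
        {
            var server = XDocument.Load(serverXmlPath);
            var client = XDocument.Load(clientXmlPath);

            var clientIds = new HashSet<string>(
                client.Descendants("song").Select(s => (string)s.Attribute("id")));

            return server.Descendants("song")
                .Where(s => allowedCategories.Contains((string)s.Attribute("category")))
                .Select(s => (string)s.Attribute("id"))
                .Where(id => !clientIds.Contains(id))
                .ToArray();
        }
    }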
Long story short: the instancing mode on the service is PerCall, for throughput reasons and to keep memory consumption low, and both my operations use the request-reply pattern, which means that acknowledgement messages are sent back to the client with each response from the service. So if something goes wrong in the connection, or if the client can't reach the service, I can catch the CommunicationObjectFaultedException on the client, reconstruct the proxy, and retry. Given all that, do you think there's a need for reliable sessions in my service implementation? What problems could arise if I don't have reliable sessions in the operations just described?
What problems could arise if I don't have reliable sessions in the operations just described?
I am aware of only a few problems being solved by reliable sessions, while the feature puts a lot of stress on the server.
I would personally go for BasicHttpBinding (for better interoperability) without a reliable session.
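As a rough sketch, this is what self-hosting with BasicHttpBinding (and therefore no reliable session) could look like; the service and contract names are made up:

    using System;
    using System.ServiceModel;

    // Service and contract names here are hypothetical.
    [ServiceContract]
    public interface IMusicService
    {
        [OperationContract]
        string[] GetAllowedCategories(string clientId);
    }

    public class MusicService : IMusicService
    {
        public string[] GetAllowedCategories(string clientId)
        {
            return new[] { "pop", "rock" };
        }
    }

    public static class Host
    {
        public static void Main()
        {
            using (var host = new ServiceHost(typeof(MusicService)))
            {
                // BasicHttpBinding has no reliableSession element at all.
                host.AddServiceEndpoint(typeof(IMusicService),
                                        new BasicHttpBinding(),
                                        "http://localhost:8000/music");
                host.Open();
                Console.ReadLine();
            }
        }
    }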
UPDATE
In order to understand Reliable Sessions, have a read of this and this.
If you are a bank, it makes sense to use Reliable Sessions when you are sending money to and from other banks, as this ensures the message is received by the final party involved. But in most cases, you will not need it.
I created a RESTful service using WCF which calculates some value and then returns a response to the client.
I am expecting a lot of traffic, so I am not sure whether I need to manually implement queues or whether that is unnecessary in order to process all client requests.
Actually, I am receiving measurements from clients which have to be stored in the database - each client sends a measurement every 200 ms, so if there are multiple clients there could be a lot of requests.
There is also another operation performed on the received data. For example, a client could send an instruction like "give me the average of the last 200 measurements", so it could take some time to calculate this value, and in the meantime the same request could come from another client.
I would be very thankful if anyone could give any advice on how to create a reliable service using WCF.
Thanks!
You could use the MsmqBinding and utilize the method implemented by eedsi9n. However, from what I'm gathering from this post, you're looking for something along the lines of a pub/sub type of architecture.
This can be implemented with the WSDualHttpBinding, which allows subscribers to subscribe to events. The publisher then notifies the subscriber when the action is completed.
Therefore you could have MSMQ running behind the scenes. The client subscribes to certain events, then perhaps publishes a message that needs to be processed. The client sits there and does other work (because it's all async), and when the publisher is done working on the message it can publish an event (the event your client subscribed to) letting you know that it's done. That way you don't have to implement a polling strategy.
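As a sketch of the duplex shape this describes (contract and method names are hypothetical, and a real implementation would enqueue the work rather than compute it inline):

    using System.ServiceModel;

    // Contract and method names are hypothetical.
    public interface ICalcCallback
    {
        [OperationContract(IsOneWay = true)]
        void CalculationDone(double result);
    }

    [ServiceContract(CallbackContract = typeof(ICalcCallback))]
    public interface ICalcService
    {
        [OperationContract(IsOneWay = true)]
        void SubmitCalculation(double[] measurements);
    }

    public class CalcService : ICalcService
    {
        public void SubmitCalculation(double[] measurements)
        {
            // Grab the client's callback channel, do (or enqueue) the work,
            // then push the result back instead of making the client poll.
            var callback = OperationContext.Current.GetCallbackChannel<ICalcCallback>();
            var sum = 0.0;
            foreach (var m in measurements) sum += m;
            callback.CalculationDone(measurements.Length == 0 ? 0 : sum / measurements.Length);
        }
    }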
There are pre-canned solutions for this as well, such as NServiceBus, MassTransit, and Rhino Service Bus.
If you are using a web service, Transmission Control Protocol (TCP/IP) will act as the queue to a certain degree.
TCP provides reliable, ordered delivery of a stream of bytes from one program on one computer to another program on another computer.
This guarantees that if the client sends packets A, B, then C, the server will receive them in that order: A, B, then C. If you must reply back to the client in the same order as the requests, then you might need a queue.
By default, the maximum number of ASP.NET worker threads is set to 12 per CPU core. So on a dual-core machine, you can run 24 connections at a time. Depending on how long the calculation takes and what you mean by "a lot of traffic", you could try different strategies.
The simplest one is to use serviceTimeouts and serviceThrottling and only handle what you can handle, and reject the ones you can't.
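For example, the throttle can also be set in code; a sketch, with placeholder numbers you would tune under load testing:

    using System.ServiceModel;
    using System.ServiceModel.Description;

    public static class Throttling
    {
        // The numbers are placeholders to tune under load testing.
        public static void Apply(ServiceHost host)
        {
            host.Description.Behaviors.Add(new ServiceThrottlingBehavior
            {
                MaxConcurrentCalls = 24,
                MaxConcurrentSessions = 24,
                MaxConcurrentInstances = 24
            });
        }
    }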
If that's not an option, increase hardware. That's the second option.
Finally, you could make the service completely asynchronous. Implement two methods: string PostCalc(...) and double GetCalc(string id). PostCalc accepts the parameters, stuffs them into a queue (or a database), and returns a GUID immediately (I like using string instead of Guid). The client can use the returned GUID as a claim ticket and call GetCalc(string id) every few seconds; if the calculation has not finished yet, you can return 404 for REST. The calculation must now be done by a separate process that monitors the queue.
The third option is the most complicated, but the outcome is similar to that of the first option in putting a cap on incoming requests.
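A minimal sketch of that third option using the WCF web programming model (the in-memory collections are stand-ins for the real queue/database shared with the worker process):

    using System;
    using System.Collections.Concurrent;
    using System.Net;
    using System.ServiceModel;
    using System.ServiceModel.Web;

    [ServiceContract]
    public interface ICalcQueue
    {
        [OperationContract]
        [WebInvoke(Method = "POST", UriTemplate = "calc")]
        string PostCalc(double[] values);

        [OperationContract]
        [WebGet(UriTemplate = "calc/{id}")]
        double GetCalc(string id);
    }

    public class CalcQueue : ICalcQueue
    {
        // In-memory stand-ins for the real queue/database that a
        // separate worker process would monitor.
        private static readonly ConcurrentQueue<Tuple<string, double[]>> Pending =
            new ConcurrentQueue<Tuple<string, double[]>>();
        private static readonly ConcurrentDictionary<string, double> Results =
            new ConcurrentDictionary<string, double>();

        public string PostCalc(double[] values)
        {
            var id = Guid.NewGuid().ToString("N"); // the claim ticket
            Pending.Enqueue(Tuple.Create(id, values));
            return id;
        }

        public double GetCalc(string id)
        {
            double result;
            if (Results.TryGetValue(id, out result))
                return result;

            // Not ready yet: answer 404 so the client retries later.
            WebOperationContext.Current.OutgoingResponse.StatusCode = HttpStatusCode.NotFound;
            return 0;
        }
    }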
It will depend on what you mean by "calculates some value" and "a lot of traffic". You could do some load testing and see how the #requests/second evolves with the traffic.
There's nothing WCF-specific here if you are RESTful.
The GET for an average would return a URI where the answer will wait once the server finishes calculating (if it is indeed a long operation).
Regarding getting the measurements: you didn't specify the freshness needed (i.e., when you get a request for an average, how fresh do you need the results to be). Also, you did not specify the relative frequency of queries vs. new measurements.
In any event, you can (and IMHO should) use a queue behind the endpoint (assuming measuring your performance proves you need it). If you change the WCF binding you might still be RESTful, but you will not benefit from the standards-based approach of REST over HTTP.