What is the best way to share cached data across different WPF applications on the same machine?

I am thinking of keeping the data in a DataSet inside a WCF-hosted service; other apps (on the same box) can access the data via named pipes (exposed through the WCF service). Each app then keeps its own copy of the DataSet so it does not re-fetch the data from the service unless the data changes.
Data is retrieved from the server as a DataRow collection, so I am writing it into DataTables and storing those in a DataSet.
The data will rarely change, but when it does I have to tell every client app that has retrieved it to refresh.

I do something similar with an app I wrote.
You can easily let the service update the clients when the data changes by using a callback. When a client connects to the service you store its callback channel, and when the data is updated you fire off the message to each subscribed client.
Here is the contract for the callback:
public interface IServiceMessageCallback
{
    [OperationContract(IsOneWay = true)]
    void OnReceivedServiceMessage(ServiceMessage serviceMessage);
}
Clients implement this interface (the service contract references it as its CallbackContract). The service has this private field:
/// <summary>
/// Holds the callback recipients
/// </summary>
private List<IServiceMessageCallback> callbackMessages =
    new List<IServiceMessageCallback>();
When a client connects, do something like this:
IServiceMessageCallback callback =
    OperationContext.Current.GetCallbackChannel<IServiceMessageCallback>();
callbackMessages.Add(callback);
And finally, whatever method you have that updates the data on the service should also have this:
Action<IServiceMessageCallback> fire =
    delegate(IServiceMessageCallback callback)
    {
        callback.OnReceivedServiceMessage(serviceMessage);
    };
// loop through the callback channels and perform the action
callbackMessages.ForEach(fire);
I sort of patched this code together from a rather hefty service I wrote... hopefully the pieces make sense out of context.
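For reference, here is a minimal sketch of how those pieces might hang together in one service, using the IServiceMessageCallback contract above. The Subscribe operation, the ServiceMessage type, and the singleton instancing are my additions for illustration, not details from the original service:

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class ServiceMessage
{
    // Illustrative payload: tell clients which table changed.
    [DataMember]
    public string ChangedTable { get; set; }
}

[ServiceContract(CallbackContract = typeof(IServiceMessageCallback))]
public interface IDataCacheService
{
    // Clients call this once so the service can capture their callback channel.
    [OperationContract]
    void Subscribe();
}

// A singleton instance keeps one subscriber list alive across calls;
// with PerCall instancing the list would have to be static.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public class DataCacheService : IDataCacheService
{
    private readonly List<IServiceMessageCallback> callbackMessages =
        new List<IServiceMessageCallback>();

    public void Subscribe()
    {
        IServiceMessageCallback callback =
            OperationContext.Current.GetCallbackChannel<IServiceMessageCallback>();
        lock (callbackMessages) // calls can arrive on multiple threads
        {
            if (!callbackMessages.Contains(callback))
                callbackMessages.Add(callback);
        }
    }

    // Call this from whatever method updates the cached data.
    private void NotifySubscribers(ServiceMessage serviceMessage)
    {
        lock (callbackMessages)
        {
            callbackMessages.ForEach(
                cb => cb.OnReceivedServiceMessage(serviceMessage));
        }
    }
}

NetNamedPipeBinding supports duplex contracts natively, so this fits the named-pipe setup from the question without any extra binding configuration.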

Your solution would work. If you have access to a proper cache service, like ScaleOut StateServer, Velocity, or memcached, you could use it for your needs. It would be doing much the same thing, but you would be getting a proven solution with additional features surrounding cache management, as well as the ability to scale to more than a single machine should the need arise.
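If you do adopt one of those products, the access pattern is the usual cache-aside one. Here is a rough sketch; ICacheClient is a hypothetical stand-in for whichever product's client API you pick, not a real library interface:

using System.Data;
using System.IO;

// Hypothetical abstraction over the cache product's client API.
public interface ICacheClient
{
    byte[] Get(string key);            // returns null on a miss
    void Set(string key, byte[] value);
}

public class SharedDataCache
{
    private readonly ICacheClient cache;

    public SharedDataCache(ICacheClient cache) { this.cache = cache; }

    public DataSet GetData()
    {
        byte[] cached = cache.Get("shared-dataset");
        if (cached != null)
            return Deserialize(cached);             // cache hit

        DataSet ds = LoadFromServer();              // miss: fetch and populate
        cache.Set("shared-dataset", Serialize(ds));
        return ds;
    }

    private static byte[] Serialize(DataSet ds)
    {
        using (var ms = new MemoryStream())
        {
            ds.WriteXml(ms, XmlWriteMode.WriteSchema);
            return ms.ToArray();
        }
    }

    private static DataSet Deserialize(byte[] bytes)
    {
        var ds = new DataSet();
        using (var ms = new MemoryStream(bytes))
            ds.ReadXml(ms, XmlReadMode.ReadSchema);
        return ds;
    }

    private static DataSet LoadFromServer()
    {
        return new DataSet(); // stand-in for the real data fetch
    }
}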

Related

Using SharpArch NHibernate with different types of SessionStorage

I have a server application with three scenarios in which I seem to need different kinds of NHibernate sessions:
Calls to the repository directly from the server itself (while bootstrapping)
Calls to the repository coming from a RIA service (default ASP.NET Membership service)
Calls to the repository coming from a WCF Service
Currently I have set up my NHibernate config with SharpArch like this:
/// <summary>
/// Due to issues on IIS7, the NHibernate initialization cannot reside in Init() but
/// must only be called once. Consequently, we invoke a thread-safe singleton class to
/// ensure it's only initialized once.
/// </summary>
protected void Application_BeginRequest(object sender, EventArgs e)
{
    NHibernateInitializer.Instance().InitializeNHibernateOnce(
        () => InitializeNHibernateSession());
    BootStrapOnce();
}

private void InitializeNHibernateSession()
{
    NHibernateSession.Init(
        wcfSessionStorage,
        new string[] { Server.MapPath("~/bin/bla.Interfaces.dll") },
        Server.MapPath("~/Web.config"));
}
This works for the third scenario, but not for the first two.
It seems to need some wcf-session-specific context.
The SharpArch Init method seems to be protected against re-initialization with a different type of session storage.
What is the best way to create a different session for three different kinds of contexts?
This post seems related to this one, and it has helped me look in the right direction, but I have not found a solution so far.
I'm not sure you are going to be able to do what you want with S#. The reason is that you really want three separate NHibernate sessions, each with its own storage mechanism. The current implementation only allows one storage mechanism, regardless of the number of sessions.
I can easily get you numbers 1 and 3, but not 2, since I've never used RIA Services. In the case of 1 and 3, you would need to take the WCF service out of the site and host it in its own site. There is no real way around that, as their session lifecycles are different.
Your other option would be to come up with your own Session Management for NHibernate and not use the default S# one. You could look at the code for the S# version and create your own based on that.
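A very rough sketch of what rolling your own might look like; this is not SharpArch's API, and the names and storage-selection logic are purely illustrative:

using NHibernate;
using NHibernate.Cfg;

// Illustrative stand-in for SharpArch's session storage abstraction.
public interface IMySessionStorage
{
    ISession Current { get; set; }
}

// One factory, but the storage strategy is chosen per call site instead of
// being fixed once at Init time, so each context (bootstrap, RIA, WCF)
// can bring its own.
public static class MySessionManager
{
    private static ISessionFactory factory;

    public static void Init(Configuration config)
    {
        factory = config.BuildSessionFactory();
    }

    public static ISession GetSession(IMySessionStorage storage)
    {
        ISession session = storage.Current;
        if (session == null)
        {
            session = factory.OpenSession();
            storage.Current = session;
        }
        return session;
    }
}

The WCF context would back Current with something like an operation-context extension, the web contexts with HttpContext.Current.Items, and the bootstrap code with a plain field.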

WCF Multiple Interface

I am really trying to get my head around this WCF technology, and it seems the last months of information cramming have somewhat distorted my overall concept of how I should build my client/server application.
I would appreciate it if someone out there could shed some light on best practices for developing my app and implementing a duplex WCF service with multiple interfaces.
General outline: I want to develop an app where users connect to a server and, let's say, add contacts to an SQL database. I have discovered many ways of doing this but would ultimately like to know I'm heading down the right path when it comes time to develop the app further.
Some models I have discovered are:
The client has its own LINQ to SQL classes and handles all data access itself. Bad: really slow, with overhead from LINQ and the SQL connections, on top of a poor implementation of the LINQ Select command.
Another model was to have the service implement the LINQ to SQL commands used for CRUD operations; however, this still doesn't provide live data updates to the other clients connected to the service.
So I made a basic app where, when a client logs in to the service, its callback channel gets added to the callback list. When a client feeds a new contact to the service, the service invokes a callback on all client channels with the new contact, and the client-side function takes care of adding the contact in the right spot.
So now I want to implement a User object and perhaps two more business objects, say Project and Item. My idea is to create my service like this:
[Serializable]
[DataContract]
[ServiceBehavior(
    ConcurrencyMode = ConcurrencyMode.Single,
    InstanceContextMode = InstanceContextMode.PerCall)]
public class Project : IProject
{
    [DataMember()]
    public int projectID;

    public int Insert(objSubItem _objSubItem)
    {
        // code here
    }
etc., and the contract:
[ServiceContract(
    Name = "Project",
    Namespace = "",
    SessionMode = SessionMode.Required,
    CallbackContract = typeof(IProjectCallback))]
public interface IProject
{
    /// <summary>
    /// Inserting a Project record to the database
    /// </summary>
    /// <param name="_project">Project from Client</param>
    /// <return>ProjectID back to the client; if -1 then fail</return>
    [OperationContract()]
    int Insert(Project _project);
and
public interface IProjectCallback
{
    /// <summary>
    /// Notifies the clients that a Project has been added
    /// </summary>
    /// <param name="_project">Inserted Project</param>
    [OperationContract(IsOneWay = true)]
    void NotifyProjectInserted(Project _project);
}
Obviously I have other CRUD functions, plus functions to ensure that both client- and server-side data records are read-only while being edited.
Now, if I have multiple objects, what is the best way to lay it all out?
I'm thinking of creating a Service.cs, an IService.cs, and an IServiceCallback to negotiate the client channel population. Should I also use partial classes of the service to implement IProject and IUser, to properly invoke the service callbacks as well as invoking the objects' Insert methods?
Would I do it like this?
[ServiceContract(Name = "Service",
    Namespace = "",
    SessionMode = SessionMode.Required,
    CallbackContract = typeof(IServiceCallBack))]
[ServiceKnownType(typeof(Project))]
[ServiceKnownType(typeof(User))]
public interface IService
{
    // code here
}
and also
[ServiceBehavior(
    ConcurrencyMode = ConcurrencyMode.Single,
    InstanceContextMode = InstanceContextMode.PerCall)]
public partial class Service : IUser
{
    public int Insert(User _User)
    {
        // code here
    }
}

public partial class Service : IProject
{
    public int Insert(Project _project)
    {
        // code here
    }
}

public partial class Service : IService
{
    // functions here
}
It feels as though the approach would be right for a single interface, but I feel I need some "best practice" assistance.
Many thanks in advance,
Chris Leach
Hi Richard,
I appreciate your response. As you can see, this is my first post, and my third ever on any forum related to programming. I have lived my programming life very close to Google, as my Google autofill history shows, but it's time to start asking questions of my own, so thank you for your assistance so far.

I really want to understand an overall approach to managing data consistency across a distributed client/service application. I am looking into Telerik ORM and also Entity Framework as a solution, exposing the entities through a WCF service, but I lack the understanding to implement data consistency amongst the clients. I have managed to develop a netDualTcp chat application and have used a list of client callback contexts to handle the join/leave and chat functions.

I lack the overall picture, but it seems that if I keep an in-memory (static) version of all of the tables in my SQL database, I could either have the clients bind directly to these lists, if that is possible, or, what seems better, have my custom user controls handle the connections, so the server is aware of who has a particular user control open and can direct changes to those clients registered to the callback contract. That way the clients aren't having to load the entire project every time they open the application.

I am thinking of a multi-purpose application, such as a contact/grant application program, where users work in different parts of the application and do not always need access to all of the information at one time. When a user first logs in, I am hoping the service will attach a callback contract for the client, and several bits of information will be loaded back to the client on authentication, such as a basic state (e.g. if they are an admin they get notifications). Once logged in, they are presented with a blank canvas and begin to load custom user controls into a docking-panel type interface.

I guess this is where I become a little stuck about how best to manage concurrency and consistency whilst minimizing load and data-transfer times and freeing up CPU processing time on the client. I know in programming there are multiple ways of doing this, but I would like to know from the people on this forum what they feel the best approach to this type of solution is. I understand it's a deep topic, but I feel I have come this far, and a guiding hand would be appreciated. Thanks again
Generally I find taking a non-abstract view of a service gets me to the right place. What is it that consumers of my service are going to need to do?
I obviously have internal domain objects that are used by my business layer to create and manipulate the data. However, the way the business layer does things isn't necessarily the best way to partition functionality for my service.
So, for example, if any project should have at least one user in it, then when you create the project you should send over at least one user at the same time. The service operations need to encapsulate all of the data required to carry out a self-contained business transaction.
Similarly, the death knell of many distributed systems is latency: they require lots of round trips to complete something. So, for example, you want to be able to add a user to a project; in reality you probably want to add a number of users to a project. Therefore, you should model the operation to accept a list of users, not a single user via an operation that must be invoked multiple times.
So a project service should allow you to do all the things related to a project, or projects, through a service contract. If users can live independently of projects, then also have a user service. If they cannot, then don't have a user service, as everything needs to be project-focused.
Business transactions are often more than straightforward CRUD operations on domain entities, and the service should model them rather than reflecting the data model.
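To make that concrete, here is a minimal sketch of a coarse-grained, project-focused contract along those lines; the operation and type names are illustrative:

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

[ServiceContract]
public interface IProjectService
{
    // One round trip creates the project together with its initial users.
    [OperationContract]
    int CreateProject(ProjectData project, List<UserData> initialUsers);

    // Batch operation: add many users in one call rather than invoking
    // an AddUser(user) operation once per user.
    [OperationContract]
    void AddUsersToProject(int projectId, List<UserData> users);
}

[DataContract]
public class ProjectData
{
    [DataMember]
    public string Name { get; set; }
}

[DataContract]
public class UserData
{
    [DataMember]
    public string UserName { get; set; }
}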

How to handle multiple storage backends transparently

I'm working with an application right now that uses a third-party API for handling some batch email-related tasks, and in order for that to work, we need to store some information in this service. Unfortunately, this information (first/last name, email address) is also something we want to use from our application. My normal inclination is to pick one canonical data source and stick with it, but round-tripping to a web service every time I want to look up these fields isn't really a viable option (we use some of them quite a bit), and the service's API requires the records to be stored there, so the duplication is sadly necessary.
But I have no interest in peppering every method throughout our business classes with code to synchronize data to the web service any time they might be updated, and I also don't think my entity should be aware of the service to update itself in a property setter (or whatever else is updating the "truth").
We use NHibernate for all of our DAL needs, and to my mind this data replication is really a persistence concern, so I've whipped up a PoC implementation using an EventListener (both PostInsert and PostUpdate) that checks whether the entity is of type X and whether any of fields [Y..Z] have changed, and if so updates the web service with the new state.
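A rough sketch of such a listener, assuming NHibernate's synchronous IPostUpdateEventListener; the Person type, the watched fields, and EmailServiceClient are illustrative stand-ins, not the poster's actual code:

using System;
using NHibernate.Event;

public class EmailServiceSyncListener : IPostUpdateEventListener
{
    // The fields mirrored to the third-party service (illustrative).
    private static readonly string[] WatchedProperties =
        { "FirstName", "LastName", "EmailAddress" };

    public void OnPostUpdate(PostUpdateEvent @event)
    {
        var person = @event.Entity as Person; // the "entity is of type X" check
        if (person == null || @event.OldState == null)
            return;

        string[] names = @event.Persister.PropertyNames;
        for (int i = 0; i < names.Length; i++)
        {
            if (Array.IndexOf(WatchedProperties, names[i]) < 0)
                continue;
            if (!Equals(@event.State[i], @event.OldState[i]))
            {
                // Hypothetical client wrapping the third-party API.
                new EmailServiceClient().UpdateContact(person);
                break;
            }
        }
    }
}

public class Person
{
    public virtual string FirstName { get; set; }
    public virtual string LastName { get; set; }
    public virtual string EmailAddress { get; set; }
}

public class EmailServiceClient
{
    public void UpdateContact(Person person) { /* call the third-party API here */ }
}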
I feel like this strikes a good balance: our data stays the canonical source, replication happens transparently, and the chances of changes falling through the cracks and leaving us in a mismatch situation are minimized (it's not the end of the world if, e.g., the service is unreachable; we just do a manual batch update later, but for everybody's sanity the goal is that in the general case we never have to think about it). Still, my colleagues and I have a degree of discomfort with this way forward.
Is this a horrid idea that will invite raptors into my database at inopportune times? Is it a totally reasonable thing to do with an EventListener? Is it a serviceable solution to a less-than-ideal situation that we can just make do with and move on forever tainted? If we soldier on down this road, are there any gotchas I should be wary of in the Events pipeline?
In the case of unreliable data stores (the web service, in your case), I would introduce a concept of transactions (operations), store them in the local database, then periodically pull them from the DB and execute them against the web service (the other data store).
Something like this:
public class OperationContainer
{
    public Operation Operation; // whatever operations you need: CRUD, or something specific
    public object Data;         // your entity, business object, or whatever
}

public class MyMailService
{
    public void SendMail(MailBusinessObject data)
    {
        DataAcceessLair<MailBusinessObject>.Persist(data);
        OperationContainer operation =
            new OperationContainer { Operation = Operation.Insert, Data = data };
        DataAcceessLair<OperationContainer>.Persist(operation);
    }
}

public class Updater
{
    Timer EverySec; // fires OnEverySec periodically

    public void OnEverySec()
    {
        var data = DataAcceessLair<OperationContainer>.GetFirstIn(); // FIFO
        var webServiceData = WebServiceData.Convert(data); // prepare data for the web service
        try
        {
            new WebService().DoSomething(webServiceData);
            DataAcceessLair<OperationContainer>.Remove(data);
        }
        catch (Exception)
        {
            // Leave the operation in the queue; it will be retried on the next tick.
        }
    }
}
This is actually pretty close to the concept of a smart client (technically, not logically). Take a look at the book .NET Domain-Driven Design with C#: Problem-Design-Solution, chapter 10, or at the source code from the book; it's pretty close to your situation: http://dddpds.codeplex.com/

Best way to share data between .NET application instances?

I have created a WCF service (hosted in a Windows service) on a load-balanced server. Each service instance maintains a list of current users. E.g. instance A has users A001, A002, A005; instance B has users A003, A004, A008; and so on.
Each service has an interface used to get the user list, and I expect this method to return all users across all service instances. E.g. getting the user list from instance A or instance B will return A001, A002, A003, A004, A005, and A008.
Currently I am thinking of storing the list of current users in a database, but this list seems to update very often.
I want to know: is there another way to share data between WCF services that suits my situation?
Personally, the database option sounds like overkill to me just based on the notion of storing current users. If you are actually storing more than that, then using a database may make sense. But assuming you simply want a list of current users from both instances of your WCF service, I would use an in-memory solution, something like a static generic dictionary. As long as the services can be uniquely identified, I'd use the unique service ID as the key into the dictionary and just pair each key with a generic list of user names (or some appropriate user data structure) for that service. Something like:
private static Dictionary<Guid, List<string>> _currentUsers;
Since this dictionary would be shared between two WCF services, you'll need to synchronize access to it. Here's an example.
using System;
using System.Collections;
using System.Collections.Generic;

public class MyWCFService : IMyWCFService
{
    private static Dictionary<Guid, List<string>> _currentUsers =
        new Dictionary<Guid, List<string>>();

    private void AddUser(Guid serviceID, string userName)
    {
        // Synchronize access to the collection via the SyncRoot property.
        lock (((ICollection)_currentUsers).SyncRoot)
        {
            // Check if the service's ID has already been added.
            if (!_currentUsers.ContainsKey(serviceID))
            {
                _currentUsers[serviceID] = new List<string>();
            }

            // Make sure to only store the user name once for each service.
            if (!_currentUsers[serviceID].Contains(userName))
            {
                _currentUsers[serviceID].Add(userName);
            }
        }
    }

    private void RemoveUser(Guid serviceID, string userName)
    {
        // Synchronize access to the collection via the SyncRoot property.
        lock (((ICollection)_currentUsers).SyncRoot)
        {
            // Check if the service's ID has already been added.
            if (_currentUsers.ContainsKey(serviceID))
            {
                // See if the user name exists.
                if (_currentUsers[serviceID].Contains(userName))
                {
                    _currentUsers[serviceID].Remove(userName);
                }
            }
        }
    }
}
Given that you don't want users listed twice for a specific service, it would probably make sense to replace the List<string> with HashSet<string>.
A database would seem to offer a persistent store, which may be useful or important for your application. In addition, it supports transactions, etc., which may also be useful to you. Lots of updates could be a performance problem, but that depends on the exact numbers, the query patterns, the database engine used, locality, and so on.
An alternative to this option might be some sort of in-memory caching server like memcached. Whilst this can be shared and accessed in a similar (sort of) way to a database server, there are some caveats. First, these platforms are generally not backed by permanent storage: what happens when the memcached server dies? Second, they may not be ACID-compliant enough for your use: what happens under load in terms of additions and updates?
I like the in-memory way. Actually, I am designing the same mechanism for one of the projects I'm working on now. This is good for scenarios where you can't access a database, or where people are really reluctant to create a table to store simple info like a list of users against a machine name.
The only update I'd make is that a node would return only the list of its own users to a peer, and the peer would combine that with its existing list, then return its own list to the peer who called. That's how all the peers stay in sync with the same list.
The DB option sounds good. If there are no performance issues, it is a simple design that should work. If you can afford to be semi-realtime and non-persistent, one way would be to maintain the list in memory in each service, with each service updating the others when a new user joins. This can be done as some kind of broadcast via a centralised service, or using MSMQ, etc.
If you reconsider and host in IIS, you will find that with a single line in a config file you can make the ASP.NET Global, Application, and Session objects available. This trick is also very handy because it means you can share session state between an ASP.NET application and a WCF service.
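For reference, that trick is WCF's ASP.NET compatibility mode. A minimal sketch, with the service contract and the Application key invented for illustration:

// The single line, in web.config:
//   <system.serviceModel>
//     <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
//   </system.serviceModel>
using System.ServiceModel;
using System.ServiceModel.Activation;
using System.Web;

[ServiceContract]
public interface IUserListService
{
    [OperationContract]
    string[] GetCurrentUsers();
}

[AspNetCompatibilityRequirements(
    RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]
public class UserListService : IUserListService
{
    public string[] GetCurrentUsers()
    {
        // With compatibility mode on, HttpContext.Current (and with it the
        // Application and Session objects) is available inside the service.
        var users = (string[])HttpContext.Current.Application["CurrentUsers"];
        return users ?? new string[0];
    }
}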

wcf - transfer context into the headers

I am using WCF 4 and trying to transparently transfer context information between client and server.
I was looking at behaviors and was able to pass things around. My problem is how to flow the context received in the incoming headers to the other services that might be called by a service.
In the service behavior I intercept the message and read the headers, but I don't know where to put that data so it is accessible to the next service call that the current service might make.
What I am looking for is something like:
public void DoWork()
{
    var someId = MyContext.SomeId;
    // do something with it here and call another service
    using (var proxy = GetProxy<IAnotherService>())
        proxy.CallSomeOtherMethodThatShouldGetAccessTo_MyContextualObject();
}
If I store the headers in thread-local storage I might have problems due to thread agility (not sure this happens outside ASP.NET, i.e. in custom service hosts). How would you implement MyContext in the code above?
I chose MyContext instead of accessing the headers directly because the initiator of the service call might not be a service, in which case MyContext is backed by, for example, HttpContext for storage.
"In the service behavior I intercept the message and read the headers but don't know where to put that data to be accessible to the next service call."
Typically, you don't have any state between calls. Each call is totally autonomous, each call gets a brand new instance of your service class created from scratch. That's the recommended best practice.
If you need to pass that piece of information (language, settings, whatever) to a second, third, fourth call, do so by passing it in their headers, too. Do not start to put state into the WCF server side! WCF services should always be totally autonomous and not retain any state, if at all possible.
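As for the mechanics, here is a sketch of reading a header from the incoming request and re-attaching it to an outgoing call; the header name, namespace, endpoint name, and service contract are illustrative:

using System.ServiceModel;
using System.ServiceModel.Channels;

// The question's downstream contract, stubbed for completeness.
[ServiceContract]
public interface IAnotherService
{
    [OperationContract]
    void CallSomeOtherMethodThatShouldGetAccessTo_MyContextualObject();
}

public class Worker
{
    // Must run inside a service operation so OperationContext.Current exists.
    public void DoWork()
    {
        // Read the value from the incoming request's headers, if present.
        MessageHeaders incoming = OperationContext.Current.IncomingMessageHeaders;
        int index = incoming.FindHeader("SomeId", "urn:my-context");
        string someId = index < 0 ? null : incoming.GetHeader<string>(index);

        IAnotherService proxy =
            new ChannelFactory<IAnotherService>("anotherEndpoint").CreateChannel();

        // Attach the same value to the outgoing call's headers.
        using (new OperationContextScope((IContextChannel)proxy))
        {
            OperationContext.Current.OutgoingMessageHeaders.Add(
                MessageHeader.CreateHeader("SomeId", "urn:my-context", someId));
            proxy.CallSomeOtherMethodThatShouldGetAccessTo_MyContextualObject();
        }
    }
}

A custom behavior can do the same copying automatically in a message inspector, which keeps the forwarding transparent to the operation code.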
UPDATE: OK, after your comments: what might be of interest to you is the new RoutingService base class that ships with WCF 4. It allows scenarios like the one you describe: getting a message from the outside and forwarding it to another service somewhere in the background. Google for "WCF4 RoutingService" and you should find a number of articles. I couldn't find anything specific about headers, but I guess those would be transported along transparently.
There's also a two-part article series Building a WCF Router Part 1 (and part 2 here) in MSDN Magazine that accomplishes more or less the same in WCF 3.5 - again, not sure about headers, but maybe that could give you an idea.