WCF Multiple Interfaces

I am really trying to get my head around this WCF technology, and the last few months of information cramming have somewhat distorted my overall concept of how I should build my client/server application.
I would appreciate it if someone could shed some light on best practices for developing my app and implementing a duplex WCF service with multiple interfaces.
General outline: I want to develop an app where users connect to a server and, let's say, add contacts to a SQL database. I have discovered many ways of doing this, but I would ultimately like to know I'm heading down the right path when it comes time to develop the app further.
Some models I have considered:
The client has its own LINQ to SQL classes and handles all data to and from the database. Bad: really slow, with the overhead of LINQ and SQL connections on top of a poor implementation of the LINQ Select command.
Another model was to have the service implement the LINQ to SQL commands used for CRUD operations; however, this still doesn't provide live data updates to other clients connected to the service.
So I made a basic app where, when a client logs in to the service, its callback channel gets added to a callback list. When a client feeds a new contact to the service, the service invokes a callback on every channel in the list with the new contact, and a client-side function takes care of adding the contact in the right spot.
So now I want to implement a User object and perhaps two more business objects, say Project and Item. My idea is to structure my service like this:
[DataContract]
[ServiceBehavior(
    ConcurrencyMode = ConcurrencyMode.Single,
    InstanceContextMode = InstanceContextMode.PerCall)]
public class Project : IProject
{
    [DataMember]
    public int projectID;

    public int Insert(Project _project)
    {
        // code here
    }
}
etc., and:
[ServiceContract(
    Name = "Project",
    Namespace = "",
    SessionMode = SessionMode.Required,
    CallbackContract = typeof(IProjectCallback))]
public interface IProject
{
    /// <summary>
    /// Inserts a Project record into the database.
    /// </summary>
    /// <param name="_project">Project from the client</param>
    /// <returns>ProjectID back to the client; -1 means failure</returns>
    [OperationContract]
    int Insert(Project _project);
}
and:
public interface IProjectCallback
{
    /// <summary>
    /// Notifies the clients that a Project has been added.
    /// </summary>
    /// <param name="_project">Inserted Project</param>
    [OperationContract(IsOneWay = true)]
    void NotifyProjectInserted(Project _project);
}
Obviously I have other CRUD functions, plus functions to ensure that both client- and server-side data records are read-only while being edited.
Now, if I have multiple objects, what is the best way to lay them out?
I'm thinking of creating a Service.cs, an IService.cs and an IServiceCallback to negotiate the client channel population. Should I also use partial classes of the service to implement IProject and IUser, to properly invoke the service callbacks as well as invoking the objects' inserts?
Would I do it like this?
[ServiceContract(
    Name = "Service",
    Namespace = "",
    SessionMode = SessionMode.Required,
    CallbackContract = typeof(IServiceCallBack))]
[ServiceKnownType(typeof(Project))]
[ServiceKnownType(typeof(User))]
public interface IService
{
    // code here
}
and also:
[ServiceBehavior(
    ConcurrencyMode = ConcurrencyMode.Single,
    InstanceContextMode = InstanceContextMode.PerCall)]
public partial class Service : IUser
{
    public int Insert(User _user)
    {
        // code here
    }
}

public partial class Service : IProject
{
    public int Insert(Project _project)
    {
        // code here
    }
}

public partial class Service : IService
{
    // functions here
}
The approach feels right if it were for one interface, but I feel I need some "Best Practice" assistance.
Many thanks in advance,
Chris Leach
Hi Richard,
I appreciate your response. As you can see, this is my first post here, and only my third ever on any forum related to programming. I have lived my programming life very close to Google, as my autofill history shows, but it's time to start asking questions of my own, so thank you for your assistance so far.

I really want to understand an overall approach to managing data consistency across a distributed client/service application. I am looking into Telerik ORM and also Entity Framework as a solution, exposing the entities through a WCF service, but I lack the understanding needed to implement data consistency among the clients. I have managed to develop a netDualTcp chat application, using a list of client callback contexts to handle join/leave and chat functions.

I still lack the overall picture, but it seems I could keep an in-memory (static) version of all of the tables in my SQL database and either have the clients bind directly to these lists, if that is possible, or, what seems better, have my custom user controls handle the connections, so the server knows who has a particular user control open and can direct changes to the clients registered to the callback contract. That way the clients aren't loading the entire project every time they open the application.

I am thinking of a multi-purpose application, such as a contact/grant application program, where users work with different parts of the application and do not always need all of the information at one time. When a user first logs in, I am hoping the service will attach a callback contract for the client, and several bits of information will be loaded back to the client on authentication, such as a basic state (e.g. if they are an admin, they get notifications). Once logged in, they are presented with a blank canvas and then begin to load custom user controls into a docking-panel-style interface.
I guess this is where I become a little stuck about how best to manage concurrency and consistency while minimizing load/data-transfer times to the client and freeing up CPU processing time on both the client and the server. I know there are multiple ways of doing this, but I would like to know from the people on this forum what they feel the best approach to this type of solution is. I understand it's a deep topic, but I feel I have come this far and a guiding hand would be appreciated. Thanks again.

Generally I find taking a non-abstract view of a service gets me to the right place: what is it that consumers of my service are going to need to do?
I obviously have internal domain objects that are used by my business layer to create and manipulate the data. However, the way the business layer does things isn't necessarily the best way to partition functionality for my service.
So, for example, if any project should have at least one user in it, then when you create the project you should send over at least one user at the same time. The service operations need to encapsulate all of the data required to carry out a self-contained business transaction.
Similarly, the death knell of many distributed systems is latency: they require lots of round trips to complete something. For example, you want to be able to add a user to a project; in reality you probably want to add a number of users to a project. Therefore, you should model the operation to accept a list of users, not a single user with the operation invoked multiple times.
So a project service should allow you to do all the things related to a project, or projects, through a service contract. If users can live independently of projects, then also have a user service. If they cannot, then don't have a user service, as everything needs to be project-focused.
Business transactions are often more than straightforward CRUD operations on domain entities, and the service should model them rather than reflecting the data model.
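A sketch of what such a coarse-grained contract might look like (all names here are illustrative, not taken from the original post):

```csharp
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

// Illustrative contract: each operation carries everything needed for
// one self-contained business transaction, and collection parameters
// avoid one-call-per-item round trips.
[ServiceContract]
public interface IProjectService
{
    // A project needs at least one user, so the initial users travel
    // with the project in a single request.
    [OperationContract]
    int CreateProject(CreateProjectRequest request);

    // Accepts a batch of users rather than forcing repeated calls.
    [OperationContract]
    void AddUsersToProject(int projectId, List<User> users);
}

[DataContract]
public class CreateProjectRequest
{
    [DataMember] public string Name { get; set; }
    [DataMember] public List<User> InitialUsers { get; set; }
}
```

The point is that the contract's shape follows the business transaction, not the table layout.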

Related

Multi-tenant .Net Core Web API

I have a requirement to build a Web API for an existing system. There are various agencies throughout the country, and each agency has its own database. All databases are on one single server, all are identical in structure, and each has its own username and password. An agency has one or more users, and a user can belong to one or more agencies. There is also one special database which contains a table of all users, a table of all agencies, and a user/agency bridge table.
Currently they are using a traditional Windows desktop application. When a user sets up this Windows program, they log in with a username and password. The system then displays for them a list of all the agencies that they belong to (normally just one, but some "power users" can belong to a few). They pick an agency, and then the program connects to the correct database. For the remainder of the session, everything that the user does will be done on that database.
The client wants to create a web app to eventually replace the Windows program (the two will be running side by side for a while). One developer is creating the front end in Angular 5, and I am developing the API in ASP.NET Core 2.1.
So the web app will function in a similar manner to the Windows app. A user logs in to the web app. The web app, which consumes my Web API, tells the API which user just logged in. The API then checks which agencies this user belongs to, using the database that stores that data, and returns the list of agencies to the web app. There, the user picks an agency. From that point on, the web app includes this agency ID in the header of all API calls, and the API, when it receives a request, knows which database to use based on the agency ID in the request header.
Hope that makes sense...
Obviously this means that I will have to change the connection string of the DbContext on the fly, depending on which database the API must talk to. I've looked at doing this in the controller itself, which worked but would involve a lot of copy-and-paste anti-patterns across all my controllers. So I am trying to move this into the DbContext's OnConfiguring() method. I was thinking it would be best to create a DbContext factory that creates DbContexts with the appropriate connection string. I'm just a bit lost, though. When the web app calls an endpoint on the Web API (say an HTTP GET to fetch a list of accounts), this fires the HttpGet handler in the Accounts controller, and that action method reads the agency ID header. But this is all happening in the controller. If I call the DbContext factory from OnConfiguring(), something would have to pass the agency ID (which was read in the controller) to the factory so that the factory knows which connection string to create. I'm trying not to use global variables, to keep my classes loosely coupled.
Unless I have some service running in the pipeline that intercepts all requests, reads the agency ID header, and somehow gets it injected into the DbContext constructor? I have no idea how I would go about doing this...
In summary, I'm a bit lost. I'm not even sure if this is the correct approach. I've looked at some "multi-tenant" examples, but to be honest, I've found them a bit hard to understand, and I was hoping I could do something a bit simpler for now, and with time, as my knowledge of .Net Core improves, I can look at improving the code correspondingly.
I am working on something similar to what you describe here. As I am also quite near the start, I have no silver bullet yet, but there is one thing where I can help with your approach:
firstly by doing it on the controller itself, which worked but would involve a lot of copy-and-paste anti-patterns in all my controllers.
I took the approach of having a middleware in charge of swapping the DB connection string. Something like this:
public class TenantIdentifier
{
    private readonly RequestDelegate _next;

    public TenantIdentifier(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext httpContext, GlobalDbContext dbContext)
    {
        var tenantGuid = httpContext.Request.Headers["X-Tenant-Guid"].FirstOrDefault();

        if (!string.IsNullOrEmpty(tenantGuid))
        {
            var tenant = dbContext.Tenants.FirstOrDefault(t => t.Guid.ToString() == tenantGuid);
            httpContext.Items["TENANT"] = tenant;
        }

        await _next.Invoke(httpContext);
    }
}

public static class TenantIdentifierExtension
{
    public static IApplicationBuilder UseTenantIdentifier(this IApplicationBuilder app)
    {
        app.UseMiddleware<TenantIdentifier>();
        return app;
    }
}
Here I am using a self-created HTTP header called X-Tenant-Guid to identify the tenant's GUID. Then I make a request against the global database, where I get the connection string of this tenant's DB.
I made the example public here: https://github.com/riscie/ASP.NET-Core-Multi-Tenant-multi-db-Example (it's not yet updated to ASP.NET Core 2.1, but it should not be a problem to do so quickly).
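To get from the tenant stored in HttpContext.Items to a correctly configured DbContext, one option is a small scoped factory built on IHttpContextAccessor, so controllers never touch connection strings themselves. This is a sketch under assumptions: Tenant having a ConnectionString property and AppDbContext being the per-tenant context are invented names, not from the post above.

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.EntityFrameworkCore;

public class TenantDbContextFactory
{
    private readonly IHttpContextAccessor _accessor;

    public TenantDbContextFactory(IHttpContextAccessor accessor)
    {
        _accessor = accessor;
    }

    public AppDbContext Create()
    {
        // The middleware has already resolved the tenant for this request
        // and stashed it in HttpContext.Items["TENANT"].
        var tenant = (Tenant)_accessor.HttpContext.Items["TENANT"];

        var options = new DbContextOptionsBuilder<AppDbContext>()
            .UseSqlServer(tenant.ConnectionString)
            .Options;

        return new AppDbContext(options);
    }
}

// Registration in Startup.ConfigureServices:
//   services.AddHttpContextAccessor();
//   services.AddScoped<TenantDbContextFactory>();
```

Controllers (or services) then take TenantDbContextFactory in their constructor and call Create(), which keeps the header-reading logic out of every action method.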

Web service coordination

We are creating a WCF infrastructure to allow other systems in the organization to consume our business logic. Some of this logic has to do with user authentication, so securing the services is of high concern. The transport layer is secured by certificates. I am more concerned with securing the business layer.
One of our clients calls these services in a certain sequence in order to support a business process. What I would like to do is put some mechanism in place to verify that the sequence is indeed kept. The sequence can be disrupted by developer error on the consuming side or by attackers trying to compromise the system. I do not want to put the logic of the process inside the services themselves, since this would couple them to this specific client's process. I would like to put the logic for coordinating the different services in a separate layer, which would be client-specific (or maybe something more generic, to support any process?).
Can someone point me to specific patterns or resources which discuss this issue?
I have been searching Google for half a day, and I can't seem to find any resources discussing this specific issue.
Most web services should be designed to be called independently, since there is no guarantee in what order the caller will compose them.
That having been said, one way to encourage them to be called in order is to use a design akin to a fluent interface, in which Service A returns an object that is an input parameter to Service B.
[DataContract]
public class ServiceAResult
{
    // ...
}

[DataContract]
public class ServiceBResult
{
    // ...
}

[ServiceContract]
public interface IServiceA
{
    [OperationContract]
    ServiceAResult OperationA();
}

[ServiceContract]
public interface IServiceB
{
    [OperationContract]
    ServiceBResult OperationB(ServiceAResult input);
}
Here, the easiest way to create a ServiceAResult to pass to ServiceB.OperationB is to call ServiceA.OperationA.
I recommend you separate your concerns.
Have a web service whose operations are called in order to perform your business processes.
Have a second service which orchestrates your business processes and which calls the operations of the first service in the required order.
Do not make it the responsibility of the first service to ensure that the second service calls things in the correct order. The responsibility of the order of calls should belong to a different service.
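As a sketch, that split might look like this (all of the operation and type names below are invented for illustration):

```csharp
using System.ServiceModel;

// The business service exposes independent, order-agnostic operations.
[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    OrderResult CreateOrder(OrderRequest request);

    [OperationContract]
    void ApproveOrder(OrderResult order);
}

// The orchestration service owns the sequence. Sequence-sensitive
// clients call only this contract, so they cannot get the order of
// the underlying calls wrong.
[ServiceContract]
public interface IOrderProcessService
{
    [OperationContract]
    void CreateAndApproveOrder(OrderRequest request);
}
```

The first service stays reusable for any consumer; the second encodes one client's process without polluting the first.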

Domain Model – Repositories – Communication across Sub-Systems

I am currently in the process of designing a system which will use multiple data sources to consume the required data. I am attempting to model the concepts shown below (I would post an image, but I don't have enough points!), where a customer can have an association with a number of products. The Customer would be stored in the "Customer subsystem" and the Product and CustomerProduct would be stored in the "Product subsystem".
public class Customer
{
    public string FirstName { get; set; }
    public Guid ID { get; set; }
}

public class CustomerProduct
{
    public Guid ID { get; set; }
    public Customer Customer { get; set; }
    public Product Product { get; set; }
}

public class Product
{
    public string Name { get; set; }
    public double Price { get; set; }
    public Guid ID { get; set; }
}
The "Customer" entity will be physically persisted in a system which must be accessed via a web service. The "CustomerProduct" and "Product" entities will be persisted in a SQL database, using NHibernate as the ORM.
As part of the design I was planning to use Repositories to abstract the data persistence technologies away from the domain model. Therefore I would have 3 repository interfaces, ICustomerRepository, ICustomerProductRepository and IProductRepository. I would then create a concrete NHibernate implementation for the CustomerProduct and Product repositories and a concrete web service access implementation for the Customer repository.
What I am struggling with is how the entities, which are persisted in different sub-systems will interact. Ideally I would like a rich domain model, where the CustomerProduct entity would have a physical “Customer” property which returns a Customer object. However I have no idea how this would work as the Customer entity would need to be accessed from a different data store.
The only way I can see to solve this is to not hold a full Customer reference in the CustomerProduct entity, and instead just hold its ID; then, every time I need the Customer, I go via the Customer repository.
I would be grateful for any suggestions anyone could put forward on how to solve this issue.
Hi, I haven't been in your situation before, but I have designed domains that communicate with other subsystems.
I don't have the whole picture, but it seems the Customer entity is more isolated from the others, CustomerProduct and Product. Am I guessing correctly that you will present the model in a common GUI, and it is only the data sources that are separated?
First, you can solve this in different ways, and you should also ask yourself about non-functional requirements such as maintenance, uptime and support. Will both systems always be up and running simultaneously, or will one system sometimes be taken down? The clue I'm fishing for is whether you should communicate synchronously or asynchronously (message queuing?) with the subsystems. This can be achieved with NServiceBus.
But to focus on your domain: you should aim to make the domain look like it has only one model. This can be accomplished in different ways:
1) Have your ICustomerRepository (an interface contract that acts as if it works against a collection of objects) be implemented by an infrastructure-level repository that consumes the web service of your subsystem. A hint: use GUIDs as keys so that key conflicts don't occur. This approach will not let you have any relationships/associations to Customer from your other entities; they reach it only through the repository. (This is a solution Jimmy Nilsson uses in his book (http://jimmynilsson.com/blog/) to avoid tightening the model with too many bidirectional relationships.)
2) Depending on how your use cases will use the model, you can create an application-wide service layer that resides in one physical place but uses a CustomerService, a CustomerProductService and a ProductService. To prevent domain logic from leaking into the application layer, some of the coordination between these entities can be encapsulated in domain event handlers that coordinate events between the different services.
3) You can also create a CustomerAdapter class that has the other subsystem's customer GUID as a key (it cannot generate keys, since the Customer web service is in control of that). You can map it in NHibernate and have a relationship between CustomerProduct and CustomerAdapter, but when you map CustomerAdapter you load only the GUID. Then make sure you have an ICustomerAdapterService injected into a property, using Spring.NET, Windsor or some other DI tool, and do not map properties like customer name, address, etc. for CustomerAdapter in NHibernate. When you read Address from CustomerAdapter, it will get it from ICustomerAdapterService, which sets all the other values as well.
This is not a recommended solution, since it breaks some DDD rules, like not having services in the domain model. But seen from this perspective it can actually be considered a domain service, since it solves a problem within your distributed domain. However, it includes infrastructure-related things like a WCF service implementation, and therefore the service implementation should live in a separate infrastructure layer/assembly.
Solution 2 is the simplest, if you can accept that the Customer entity will be accessed only by a service in the application layer.
However, this application service layer can also be a good anti-corruption layer between the two subsystems. There is probably a reason why you have two subsystems today.
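For what it's worth, the repository approach (option 1 above) might be sketched like this. ICustomerWebService and the shape of its DTO are assumptions standing in for the real generated proxy:

```csharp
using System;

// The domain depends only on this contract.
public interface ICustomerRepository
{
    Customer GetById(Guid id);
}

// Infrastructure implementation that happens to call the Customer
// subsystem's web service instead of a database.
public class WebServiceCustomerRepository : ICustomerRepository
{
    private readonly ICustomerWebService _service;

    public WebServiceCustomerRepository(ICustomerWebService service)
    {
        _service = service;
    }

    public Customer GetById(Guid id)
    {
        // Translate the service DTO into the domain entity, so the
        // rest of the model never sees the web service types.
        var dto = _service.GetCustomer(id);
        return new Customer { ID = dto.ID, FirstName = dto.FirstName };
    }
}
```

The rest of the domain works against ICustomerRepository and never learns where Customer instances actually come from.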
But here is an example of an interaction flow (with no detailed knowledge of your domain):
The GUI calls the application service CustomerProductService method BuyNewProduct(CustomerDTO customer, ProductDTO newProduct).
CustomerProductService has ICustomerProductRepository and IProductRepository injected into its constructor. It also has an infrastructure service, ICustomerFacadeService (change the name now :-)), injected into a property CustomerFacadeService. The creation of this service is done by a factory that has two creation methods, Create() and CreateExtendedWithCustomerService(); the latter also injects the customer service facade.
The BuyNewProduct(...) method now disassembles the CustomerDTO and uses the customer GUID to load the Customer from CustomerFacadeService, which calls the web service in the other subsystem.
Loading the customer ensures that it actually exists; now we load the Product with IProductRepository.
With both the customer GUID value and the Product entity we create a new CustomerProduct entity (which is really just a mapping between Products and customer GUIDs) and save it through ICustomerProductRepository.
Now you can call another infrastructure service to send an email to your customer, who will be notified of access to the new product. Or you can raise domain events in the CustomerProduct entity that delegate this notification to an event handler (in the application service layer) that has IEmailService injected in its constructor. Then you have encapsulated the domain knowledge of sending notifications when you connect a new customer to a product.
I hope this helps you model your domain with less pain, because DDD is painful. It requires a lot of discussions with colleagues, domain experts and yourself in front of the mirror :) Is this the right path?
Look at DDDsample.net for domain events, or search for Udi Dahan and domain events.
I'll write an answer here, where there is more space.
Regarding CustomerAdapter, also referred to as CustomerFacadeService in the interaction flow example, here is my opinion: how to implement it depends on your application. Will most use cases have the main system calling your "cloud subsystem", and will that subsystem have good uptime? Then you may not need a queue, and a WCF service in the cloud will do. Your CustomerFacadeService will be a service wrapper that just exposes the methods your application layer needs and assembles all the necessary DTO objects.
If your cloud system will also call back into your main system, then you need to expose some of your methods as a service. Then you have the option of exposing an NServiceBus endpoint as a WCF service. This gives you the possibility of taking the main system down without losing information.
But there is always a lot of buts...
You will of course need the WCF service on another machine if your infrastructure guys want to install hotfixes or reboot the main system's web server.
If you have clients waiting for a response while the main system is down, how long will they wait? Not too long, I guess.
So one scenario where I can see a benefit is if you have batches/reports that need to be carried out: if one part of the system is down, the reporting will continue once it is up again.
Here is an example of NServiceBus exposed as a WCF service. I have no experience doing exactly that, just the knowledge that it can be done:
http://docs.particular.net/nservicebus/architecture/nservicebus-and-wcf

Need some advice for a web service API?

My company has a product that I feel can benefit from a web service API. We are using MSMQ to route messages back and forth through the backend system. Currently we are building an ASP.NET application that communicates with a WCF web service that, in turn, talks to MSMQ for us. Later on down the road we may have other client applications (not necessarily written in .NET). The message going into MSMQ is an object that has a property made up of an array of strings. There is also a property that contains the command (a string) that will be routed through the system. Personally I am not a huge fan of this, but I was told it is for scalability and every system can use strings.
My thought, regarding the web services was to model some objects based on our data that can be passed into and out of the web services so they are easily consumed by the client. Initially, I was passing the message object, mentioned above, with the array of strings in it. I was finding that I was creating objects on the client to represent that data, making the client responsible for creating those objects. I feel the web service layer should really be handling this. That is how I have always worked with services. I did this so it was easier for me to move data around the client.
It was recommended to our group we should maintain the “single entry point” into the system by offering an object that contains commands and have one web service to take care of everything. So, the web service would have one method in it, Let’s call it MakeRequest and it would return an object (either serialized XML or JSON). The suggestion was to have a base object that may contain some sort of list of commands that other objects can inherit from. Any other object may have its own command structure, but still inherit base commands. What is passed back from the service is not clear right now, but it could be that “message object” with an object attached to it representing the data. I don’t know.
My recommendation was to model our objects after our actual data and create services for the types of data we are working with. We would create a base service interface that would house any common methods used for all services. So for example, GetById, GetByName, GetAll, Save, etc. Anything specific to a given service would be implemented for that specific implementation. So a User service may have a method GetUserByUsernameAndPassword, but since it implements the base interface it would also contain the “base” methods. We would have several methods in a service that would return the type of object expected, based on the service being called. We could house everything in one service, but I still would like to get something back that is more usable. I feel this approach leaves the client out of making decisions about what commands to be passed. When I connect to a User service and call the method GetById(int id) I would expect to get back a User object.
I had the luxury of working with MS when I started developing WCF services. So, I have a good foundation and understanding of the technology, but I am not the one designing it this time.
So, I am not opposed to the “single entry point” idea, but any thoughts about why either approach is more scalable than the other would be appreciated. I have never worked with such a systematic approach to a service layer before. Maybe I need to get over that?
I think there are merits to both approaches.
Typically, if you are writing an API that is going to be consumed by a completely separate group of developers (perhaps in another company), then you want the API to be as self-explanative and discoverable as possible. Having specific web service methods that return specific objects is much easier to work with from the consumer's perspective.
However, many companies use web services as one of many layers to their applications. In this case, it may reduce maintenance to have a generic API. I've seen some clever mechanisms that require no changes whatsoever to the service in order to add another column to a table that is returned from the database.
My personal preference is for the specific API. I think that the specific methods are much easier to work with - and are largely self-documenting. The specific operation needs to be executed at some point, so why not expose it for what it is? You'd get laughed at if you wrote:
public void MyApiMethod(string operationToPerform, params object[] args)
{
    switch (operationToPerform)
    {
        case "InsertCustomer":
            InsertCustomer(args);
            break;
        case "UpdateCustomer":
            UpdateCustomer(args);
            break;
        // ...
        case "Juggle5BallsAtOnce":
            Juggle5BallsAtOnce(args);
            break;
    }
}
So why do that with a web service? It would be much better to have:

public void InsertCustomer(Customer customer)
{
    // ...
}

public void UpdateCustomer(Customer customer)
{
    // ...
}

// ...

public void Juggle5BallsAtOnce(bool useApplesAndEatThemConcurrently)
{
    // ...
}

What is the best way to keep cached data to be shared across diff WPF applications across same machine?

I am thinking of keeping data in a DataSet in a WCF-hosted service, which other apps (on the same box) can access via named pipes (exposed through the WCF service). The apps then keep a copy of the DataSet so they don't have to re-fetch the data from the WCF service unless it changes.
Data is retrieved from the server as a DataRow collection, so I am writing it into DataTables and storing it as a DataSet.
Data will rarely change, but when it does I have to tell every client app that has retrieved the data to refresh.
I do something similar with an app I wrote.
You can easily let the service update the clients when the data changes by using a callback. When a client connects to the service you store its callback channel, and when the data is updated you just fire off the message to each subscribed client.
Here is the contract for the callback:

public interface IServiceMessageCallback
{
    [OperationContract(IsOneWay = true)]
    void OnReceivedServiceMessage(ServiceMessage serviceMessage);
}

Each client implements this interface; the service declares it as its callback contract. The service has this private field:

/// <summary>
/// Holds the callback recipients.
/// </summary>
private List<IServiceMessageCallback> callbackMessages =
    new List<IServiceMessageCallback>();

When a client connects, do something like this:

IServiceMessageCallback callback =
    OperationContext.Current.GetCallbackChannel<IServiceMessageCallback>();
callbackMessages.Add(callback);

And finally, whatever method you have that updates the data on the service should also do this:

Action<IServiceMessageCallback> fire =
    delegate(IServiceMessageCallback callback)
    { callback.OnReceivedServiceMessage(serviceMessage); };

// Loop through the callback channels and perform the action.
callbackMessages.ForEach(fire);
I sort of patched this code together from a rather hefty service I wrote... hopefully the pieces make sense out of context.
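For completeness, here is a sketch of how a service contract might tie the callback above to the data operations. IDataService and its members are invented names, not from the original service:

```csharp
using System.Data;
using System.ServiceModel;

[ServiceContract(CallbackContract = typeof(IServiceMessageCallback))]
public interface IDataService
{
    // Called once after the client connects, so the service can grab
    // the callback channel and add it to its subscriber list.
    [OperationContract]
    void Subscribe();

    // Returns the cached DataSet to the client.
    [OperationContract]
    DataSet GetData();
}
```

A session-capable binding such as netNamedPipeBinding is needed here, since duplex callbacks require a session.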
Your solution would work. If you have access to a proper cache service, like ScaleOut StateServer, Velocity, or memcached, you could use it for your needs. It would be doing much the same thing, but you would be getting a proven solution with additional features for cache management, as well as the ability to scale to more than a single machine should the need arise.