WCF business logic handling

I have a WCF service that supports about 10 contracts. We have been supporting one client, with all of the business rules specific to that client. Now we have another client who will be using the exact same contracts (so we cannot change those) and who will call the service exactly the same way the previous client did. The only way we can differentiate between the two clients is by one of the input parameters. Based on this parameter we have to apply slightly different business logic: the logic for both clients will be the same about 50% of the time, and the remainder will differ (across the business and DAL layers). I don't want to use if-else statements in each contract implementation to differentiate and reroute the logic, and what happens when yet another client comes along? Is there a clean way of handling a situation like this? I am using .NET Framework 3.5. As I said, I cannot change any of the contracts (service or data contracts) or the current service-calling infrastructure for the new client. Thanks.

Can you possibly host the services twice and have the clients connect to the right one? Apart from that, you have to use some kind of if-else, I guess.

I can't say whether this is applicable to you, but we have solved a similar problem along this path:
We add header information to the message that states in which context some logic is being called.
This information ends up in a RequestContext class.
We delegate responsibility for instantiating the implementation of the contract to a DI container (in our case StructureMap).
We have defined a strategy for how certain components are to be provided by the container:
There is a default for a component of some kind.
Attributes can be placed on specializations that denote for which type of request context a specialization should be used.
This is registered into the container through the available mechanisms.
We make a call to the container by stating ObjectFactory.With(requestContext).GetInstance<CONTRACT>().
Dependencies of the service implementations are now resolved in a way that applies the described process; that is, specializations are provided based ultimately on request information placed in the header.
This is one example of how this may be solved.
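
A rough C# sketch of this idea, using StructureMap named instances as a simpler stand-in for the attribute-driven registration described above (all type names are illustrative, and the exact registration syntax varies between StructureMap versions):

using StructureMap;

// Strategy seam for one piece of logic that differs per client.
public interface IDiscountPolicy
{
    decimal Apply(decimal amount);
}

// Behavior shared by every client (the ~50% common case).
public class DefaultDiscountPolicy : IDiscountPolicy
{
    public decimal Apply(decimal amount) { return amount; }
}

// Specialization for the second client.
public class ClientBDiscountPolicy : IDiscountPolicy
{
    public decimal Apply(decimal amount) { return amount * 0.9m; }
}

public static class ContainerBootstrap
{
    public static void Configure()
    {
        ObjectFactory.Initialize(x =>
        {
            // Fallback used when no client-specific policy is registered.
            x.For<IDiscountPolicy>().Use<DefaultDiscountPolicy>();
            // Specialization keyed by the client identifier.
            x.For<IDiscountPolicy>().Add<ClientBDiscountPolicy>().Named("ClientB");
        });
    }

    // Called once per request at the service boundary, with the client
    // identifier extracted from the differentiating input parameter.
    public static IDiscountPolicy ResolvePolicy(string clientId)
    {
        return ObjectFactory.TryGetInstance<IDiscountPolicy>(clientId)
               ?? ObjectFactory.GetInstance<IDiscountPolicy>();
    }
}

The contract implementations then depend only on the strategy interfaces; supporting a third client means adding and registering one more specialization, with no if-else in the service code.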

Multi-tenancy in Golang

I'm currently writing a service in Go where I need to deal with multiple tenants. I have settled on the single-database, shared-tables approach, using a 'tenant_id' discriminator for tenant separation.
The service is structured like this:
gRPC server      -> gRPC Handlers ----
                                      \_ Managers (SQL)
                                      /
HTTP/JSON server -> Handlers ---------
Two servers, one gRPC (administration) and one HTTP/JSON (public API), each running in its own goroutine and with its own respective handlers that can make use of the functionality of the different managers. The managers (let's call one 'inventory-manager') all live in different root-level packages. These are, as far as I understand it, my domain entities.
In this regard I have some questions:
I cannot find any ORM for Go out there that supports multiple tenants. Is writing my own, perhaps on top of the sqlx package, a valid option?
Other services in the future will require multi-tenant support too, so I guess I would have to create some library/package anyway.
Today, I resolve the tenants by using a ResolveTenantBySubdomain middleware for the public API server. I then place the resolved tenant id in a context value that is sent with the call to the manager. Inside the different methods in the manager, I get the tenant id from the context value. It is then used with every SQL query/exec call, or an error is returned if the tenant id is missing or invalid. Should I even use context for this purpose?
Resolving the tenant on the gRPC server, I believe I have to use the UnaryInterceptor function for middleware handling. Since the gRPC API interface will only be accessed by other backend services, I guess resolving by subdomain is unnecessary here. But how should I embed the tenant id? In the header?
Really hope I'm asking the right questions.
Regards, Karl.
I cannot find any ORM for Go out there that supports multiple tenants. Is writing my own, perhaps on top of the sqlx package, a valid option?
ORMs in Go are a controversial topic! Some Go users love them, others hate them and prefer to write SQL manually. This is a matter of personal preference. Asking for specific library recommendations is off-topic here, and in any event, I don't know of any multi-tenant ORM libraries – but there's nothing to prevent you from using a wrapper around sqlx (I work daily on a system which does exactly this).
Other services in the future will require multi-tenant support too, so I guess I would have to create some library/package anyway.
It would make sense to abstract this behavior from those internal services in a way which suits your programming and interface schemas, but there are no further details here to answer more concretely.
Today, I resolve the tenants by using a ResolveTenantBySubdomain middleware for the public API server. I then place the resolved tenant id in a context value that is sent with the call to the manager. Inside the different methods in the manager, I get the tenant id from the context value. It is then used with every SQL query/exec call, or an error is returned if the tenant id is missing or invalid. Should I even use context for this purpose?
context.Context is mostly about cancellation, not request propagation. While your use is acceptable according to the documentation for the WithValue function, it's widely considered a bad code smell to use the context package as currently implemented to pass values. Rather than use implicit behavior, which lacks type safety and many other properties, why not be explicit in the function signature of your downstream data layers by passing the tenant ID to the relevant function calls?
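A minimal sketch of the explicit alternative, assuming a sqlx-backed manager (all names here are illustrative):

package inventory

import (
	"context"

	"github.com/jmoiron/sqlx"
)

// Item is an illustrative row type for the shared-tables schema.
type Item struct {
	ID       int64  `db:"id"`
	TenantID int64  `db:"tenant_id"`
	Name     string `db:"name"`
}

// Manager is a hypothetical inventory manager backed by sqlx.
type Manager struct {
	db *sqlx.DB
}

// ListItems takes the tenant ID as an explicit parameter instead of
// fishing it out of the context: the dependency is visible in the
// signature and checked by the compiler. The context is kept for what
// it is designed for, cancellation and deadlines.
func (m *Manager) ListItems(ctx context.Context, tenantID int64) ([]Item, error) {
	var items []Item
	err := m.db.SelectContext(ctx, &items,
		`SELECT id, tenant_id, name FROM items WHERE tenant_id = ?`, tenantID)
	return items, err
}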
Resolving the tenant on the gRPC server, I believe I have to use the UnaryInterceptor function for middleware handling. Since the gRPC API interface will only be accessed by other backend services, I guess resolving by subdomain is unnecessary here. But how should I embed the tenant id? In the header?
The gRPC library is not opinionated about your design choice. You can use a header value (to pass the tenant ID as an "ambient" parameter to the request) or explicitly add a tenant ID parameter to each remote method invocation which requires it.
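If you go the header route, a sketch of both sides using grpc-go's metadata package might look like this (the "x-tenant-id" key is an arbitrary choice, not a gRPC convention):

package tenantmw

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/metadata"
	"google.golang.org/grpc/status"
)

// WithTenant attaches the tenant ID to an outgoing call as metadata.
func WithTenant(ctx context.Context, tenantID string) context.Context {
	return metadata.AppendToOutgoingContext(ctx, "x-tenant-id", tenantID)
}

// TenantInterceptor is a server-side unary interceptor that extracts
// the tenant ID from the incoming metadata before invoking the handler.
func TenantInterceptor(
	ctx context.Context,
	req interface{},
	info *grpc.UnaryServerInfo,
	handler grpc.UnaryHandler,
) (interface{}, error) {
	md, ok := metadata.FromIncomingContext(ctx)
	if !ok || len(md.Get("x-tenant-id")) == 0 {
		// Reject calls that do not identify a tenant.
		return nil, status.Error(codes.Unauthenticated, "missing tenant id")
	}
	tenantID := md.Get("x-tenant-id")[0]
	_ = tenantID // hand off explicitly to the handlers/managers
	return handler(ctx, req)
}

The interceptor is installed with grpc.NewServer(grpc.UnaryInterceptor(TenantInterceptor)).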
Note that passing a tenant ID between your services in this way creates external trust between them – if service A makes a request of service B and annotates it with a tenant ID, you assume service A has performed the necessary access control checks to verify a user of that tenant is indeed making the request. There is nothing in this simple model to prevent a rogue service C asking service B for information about some arbitrary tenant ID. An alternative implementation would implement a more complex trust-nobody policy whereby each service is provided with sufficient access control information to make its own policy decision as to whether a particular request scoped to a particular tenant should be fulfilled.

Interaction of services in the service layer

What is the best way to organize interaction between services in the service layer?
For example, I have a document service and a product service. In my case, products can have their own documents, and to manage a product's documents I call the appropriate methods of the document service from the product service. So I need to create an instance of the document service in the product service. I also need to call some methods of the product service from the document service. So each of these services refers to the other, and I get a StackOverflowException.
Which design solutions should I use to eliminate this problem?
Application Services are supposed to provide external clients an API for executing cohesive business operations. An application service method generally matches a use case of your application.
While an application service operation may require calling another service (e.g., the Create Product use case includes the Create Document use case, which can also be called separately), this is not the norm, and you should look to make your application services as cohesive as possible. In particular, just because at some point in your business case you start to manipulate another kind of entity doesn't mean you should delegate that part to another application service; in other words, one application service per entity is not necessarily right.
In any case, from your domain it should appear clearly in which direction the dependency between 2 applications services points. In your example, Product Service seems to depend on Document Service - it's difficult to imagine why it would be the other way around.
If you really need a round-trip between service A and service B (which I wouldn't do unless I have no other option), you could try and have the instance of A inject itself into B instead of relying on a DI container to resolve the dependency with a new instance, solving the stack overflow problem - if that's why you get a stack overflow in the first place.
Obviously, circular dependencies are wrong.
You can use shared identifiers to decouple Products and Documents.
Moreover, you can orchestrate the services' interaction from outside them, in the application: in the ProductService you can have a LoadProducts(ProductIdentifier[] identifiers) method returning an immutable collection of products, and in the DocumentService a LoadDocuments(DocumentIdentifier[] identifiers) method returning an immutable collection of documents.
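A sketch of that arrangement in C# (all type and method names are illustrative):

using System;
using System.Collections.Generic;
using System.Linq;

// Shared identifier types: each service exposes and accepts identifiers,
// never the other service's entities, so neither depends on the other.
public struct ProductIdentifier { public Guid Value; }
public struct DocumentIdentifier { public Guid Value; }

// Illustrative entities; a Product references documents by identifier only.
public class Product
{
    public ProductIdentifier Id;
    public DocumentIdentifier[] DocumentIds;
}

public class Document
{
    public DocumentIdentifier Id;
}

public interface IProductService
{
    IList<Product> LoadProducts(ProductIdentifier[] identifiers);
}

public interface IDocumentService
{
    IList<Document> LoadDocuments(DocumentIdentifier[] identifiers);
}

// The application layer orchestrates both services; they never call
// each other, so the circular dependency (and the stack overflow) is gone.
public class ProductDetailsUseCase
{
    private readonly IProductService products;
    private readonly IDocumentService documents;

    public ProductDetailsUseCase(IProductService products, IDocumentService documents)
    {
        this.products = products;
        this.documents = documents;
    }

    public void Execute(ProductIdentifier id)
    {
        Product product = products.LoadProducts(new[] { id }).Single();
        IList<Document> docs = documents.LoadDocuments(product.DocumentIds);
        // ...present the product together with its documents...
    }
}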

Passing client context using Unity in WCF service application

I have a WCF service application (actually, it uses WCF Web API preview 5) that intercepts each request and extracts several header values passed from the client. The idea is that the 'interceptor' will extract these values and setup a ClientContext object that is then globally available within the application for the duration of the request. The server is stateless, so the context is per-call.
My problem is that the application uses IoC (Unity) for dependency injection, so there is no use of singletons, etc. Any class that needs to use the context receives it via DI.
So, how do I 'dynamically' create a new context object for each request and make sure that it is used by the container for the duration of that request? I also need to be sure that it is completely thread-safe in that each request is truly using the correct instance.
UPDATE
So I realize as I look into the suggestions below that part of my problem is encapsulation. The idea is that the interface used for the context (IClientContext) contains only read-only properties so that the rest of the application code doesn't have the ability to make changes. (And in a team development environment, if the code allows it, someone will inevitably do it.)
As a result, in my message handler that intercepts the request, I can get an instance of the type implementing the interface from the container but I can't make use of it. I still want to only expose a read-only interface to all other code but need a way to set the property values. Any ideas?
I'm considering implementing two interfaces, one that provides read-only access and one that allows me to initialize the instance. Or casting the resolved object to a type that allows me to set the values. Unfortunately, neither is fool-proof, but unless someone has a better idea, it might be the best I can do.
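Something like this is what I have in mind for the two-interface option (the property names are just for illustration):

// Read-only view that the rest of the application codes against.
public interface IClientContext
{
    string UserName { get; }
    string Region { get; }
}

// Initialization-only view used by the message handler that intercepts
// the request; not exposed to the rest of the application.
internal interface IClientContextInitializer
{
    void Initialize(string userName, string region);
}

// One class implements both, so the handler resolves it once, initializes
// it, and everything else sees only the read-only IClientContext.
internal class ClientContext : IClientContext, IClientContextInitializer
{
    public string UserName { get; private set; }
    public string Region { get; private set; }

    public void Initialize(string userName, string region)
    {
        UserName = userName;
        Region = region;
    }
}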
Read Andrew Oakley's blog post on WCF-specific lifetime managers. He creates a UnityOperationContextLifetimeManager:
we came up with the idea to build a Unity lifetime manager tied to
WCF's OperationContext. That way, our container objects would live
only for the lifetime of the request...
Configure your context class with that lifetime manager and then just resolve it. It should give you an "operation singleton".
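Hypothetical wiring, assuming the UnityOperationContextLifetimeManager from that blog post is available in your project (it is not part of Unity itself), and a context pair like the IClientContext/ClientContext sketched above:

using Microsoft.Practices.Unity;

public static class ContainerConfig
{
    public static IUnityContainer Build()
    {
        var container = new UnityContainer();

        // Ties each ClientContext instance to the current WCF
        // OperationContext, so it lives exactly as long as the request.
        container.RegisterType<IClientContext, ClientContext>(
            new UnityOperationContextLifetimeManager());

        return container;
    }
}

Every Resolve<IClientContext>() made during the same operation then returns the same instance: the "operation singleton".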
Sounds like you need a Unity LifetimeManager. See this SO question or this MSDN article.

Is shared assembly the only way to create objects from WCF REST service

I am writing an application that is consuming an in-house WCF-based REST service and I'll admit to being a REST newbie. Since I can't use the "Add Service Reference", I don't have ready-made proxy objects representing the return types from the service methods. So far the only way I've been able to work with the service is by sharing the assembly containing the data types exposed by the service.
My problem with this arrangement is that I see only two possibilities:
Implement DTOs (DataContracts) and expose those types from my service. I would still have to share an assembly, but this approach would limit the types contained in the assembly to the service contract and DTOs. I don't like to use DTOs just for the sake of using them, though, as they add another layer of abstraction and processing time to convert from domain object to DTO and vice versa. Plus, if I want to have business rules, validation, etc. on the client, I'd have to share the domain objects anyway, so is the added complexity necessary?
Support serialization of my domain objects, expose those types and share that assembly. This would allow me to share business and validation logic with the client but it also exposes parts of my domain objects to the client that are meant only for the service app.
Perhaps an example would help the discussion...
My client application will display a list of documents that is obtained from the REST service (a GET operation). The service returns an array of DocumentInfo objects (lightweight, read-only representation of a Document).
When the user selects one of the items, the client retrieves the full Document object from the REST service (GET by id) and displays a data entry form so the user can modify the object. We would want validation rules for a rich user experience.
When the user commits the changes, the Document object is submitted to the REST service (a PUT operation) where it is persisted to the back-end data store.
If the state of the Document allows, the user may "Publish" the Document. In this case, the client POSTs a request to the REST service with the Document.ID value and the service performs the operation by retrieving the server-side Document domain object and calling the Publish method. The Publish method should not be available to the client application.
As I see it, my Document and DocumentInfo objects would have to be in a shared assembly. Doing this makes Document.Publish available to the client. One idea to hide it would be to make the method internal and add an InternalsVisibleTo attribute that allows my service app to call the method and not the client but this seems "smelly."
Am I on the right track or completely missing something?
The classes you use on the server should not be the same classes you use on the client (apart from during the data transfer itself). The best approach is to create a package (assembly/project) containing DTOs, and share it between the server and the client. You mentioned that you don't want to create DTOs for the sake of it, but it is best practice. The performance impact of adding extra layers is negligible, and layering actually helps make your application easier to develop and maintain (avoiding situations like yours, where the client has access to server code).
I suggest starting with the following packages:
Service: Resides on server only, exposes the service and contains server application logic.
DTO: Resides on both server and client. Contains simple classes holding the data that needs to be passed between server and client. These classes have no code apart from properties and are short-lived objects which survive only long enough to transfer the data.
Repository: Resides on client only. Calls the server, and turns Model objects into DTOs (and vice versa).
Model: Resides on client only. Contains classes which represent business objects and relationships. Model objects stay in memory throughout the life of the application.
Your client application code should call into the Repository to get Model objects (you might also consider looking into MVVM if you're not sure how to go about this).
If your service code is sufficiently complex that it needs access to Model classes, you should create a separate Model package (obviously give it a different name) - the only classes which should exist both on server and client are DTO classes.
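A minimal illustration of the DTO/Model split (all names are hypothetical):

using System;
using System.Runtime.Serialization;

// DTO package: shared by server and client; data only, no behavior.
[DataContract]
public class DocumentDto
{
    [DataMember] public Guid Id { get; set; }
    [DataMember] public string Title { get; set; }
    [DataMember] public string Status { get; set; }
}

// Model package: client only. Behavior such as validation lives here;
// server-only operations (like Publish) stay in the server's domain model
// and never reach the client.
public class Document
{
    public Guid Id { get; private set; }
    public string Title { get; set; }
    public string Status { get; private set; }

    // The client-side Repository uses these to translate at the boundary.
    public static Document FromDto(DocumentDto dto)
    {
        return new Document { Id = dto.Id, Title = dto.Title, Status = dto.Status };
    }

    public DocumentDto ToDto()
    {
        return new DocumentDto { Id = Id, Title = Title, Status = Status };
    }
}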
I thought that I'd post the approach I took while giving credit to both Greg and Jake for helping guide me down the path.
While Jake is correct that deserializing the data on the client can be done with any type as long as it implements the same data contract, enforcing this without WSDL can be a bit tricky. I'm in an environment where other developers will be working with my solution, both to support and maintain the existing clients and to create new clients that consume my service. They are used to clicking "Add Service Reference" and going from there.
Greg's points about using different objects on the client and the server were the most helpful. I was trying to minimize duplication by sharing my domain layer between the client and the server, and that was the root of my confusion. As soon as I separated these into two distinct applications and looked at them in isolation, each with its own use cases, the picture became clearer.
As a result, I am now sharing a Contracts assembly which contains my service contracts so that a client can easily create a channel to the server (using WCF on the client-side) and data contracts representing the DTOs passed between client and service.
On the client, I have ViewModel objects which wrap the Model objects (data contracts) for the UI and use a service agent class to communicate with the service using the service contracts from the shared assembly. So when the user clicks the "Publish" button in the UI, the controller (or command in WPF/SL) calls the Publish method on the service agent passing in the ID of the document to publish. The service agent relays the request to the REST API (Publish operation).
On the server, the REST API is implemented using the same service contracts. In this case, the service works with my domain services, repositories and domain objects to carry out the tasks. So when the Publish service operation is invoked, the service retrieves the Document domain object from the DocumentRepository, calls the Publish method on the object which updates the internal state of the object and then the service passes the updated object to the Update method of the repository to persist the changes.
I am pleased with the outcome as I believe this gives me a more robust and extensible architecture to work with. I can change the ViewModels as needed to support the UI with no concern over polluting the service(s) and, likewise, change the internal implementation of the service operations (domain layer) without affecting the client application(s). All that binds the two are the contracts they share. Pretty clean.
You can serialize your domain objects and then de-serialize them into different types on the client. Both types need to implement the same data contract. All serializable types have at least a default data contract that includes all public read/write properties and fields.

Can Server-side and Client-side WCF Share Validation Library?

Greetings!
I am using a WCF library on an application server, which is referenced by an IIS server (which is therefore the client). I would like to put my validation in a place where I can just call .Validate(), which returns a string array of errors (field too short, missing, etc.). The problem is, such functions don't cross the WCF boundary, and I really don't want to code the same logic in both the WCF service and the IIS/WCF client. Is there a way to use extension methods or something similar so both sides can use a .Validate() method which calls the same code?
Many thanks for any ideas!
Steve
If you control both sides of the wire, i.e. the server-side (service) and the client-side, then you could do the following:
put all your service and data contracts into a shared assembly
reference that "Contracts" assembly from both the server and the client
manually create the client proxy (by deriving from ClientBase<T> or by creating it from a ChannelFactory<T>) - do not use "Add Service Reference" or svcutil.exe!
put all validation logic into a shared assembly
reference that shared validation assembly from both projects
If you want to use a shared validation assembly, you must make sure the data types used on your server and client are identical - this can only be accomplished if you also share service and data contracts. Unfortunately, that requires manual creation of the client proxy (which is really not a big deal!).
If you'd use "Add Service Reference", then Visual Studio will inspect the service based on its metadata, and create a new set of client-side objects, which look the same in terms of their fields and all, but they're a separate, distinct type, and thus you wouldn't be able to use your shared validation on both the server-side and the client-side objects.
Do you have a problem with sending the data over to the server to be validated? In other words, your service interface could actually offer a Validate method that takes a data contract full of data, validates it, and returns a List<T>, where T is some kind of custom ValidationResult data contract that contains all the info you need about validation warnings/errors.
In a service architecture, you can't trust the client, who could theoretically be some other company altogether, to have done proper data validation for you. You always need to do it at the service layer and design for communication of those validation issues back to your client. So if you're doing that work at the server anyway, why not open that logic up to the clients so they can use it directly? Certainly the clients can (should) still do some kind of basic input validation such as checking for null values, empty strings, values out of range, etc, but core business logic checks should be shipped off to the service.