How to organize stateful WCF web services with custom session handling - wcf

Consider a company with a large set of functionality to be exposed via web services. A subset of this functionality is used for building up some very complex and computation-intensive scenarios, and requires a session to be maintained during this iterative build-up. Each scenario targets one single base structure, representing, say, a single customer. That is, a scenario is a series of heavy operations on a single customer structure. The operations can be grouped by which area they target, but basically all operations in the same scenario root in the same customer structure.
The following decision is given from the outside and cannot be altered: an already-made custom session handler must be used, which basically operates on a session given a simple GUID token to be sent to/from the client. Therefore, from a technical perspective the session need not be limited to a single service, but can live across multiple services.
Besides the stateful operations, there are also a number of stateless operations.
Given the above decision about the custom session handler, the question is now: how should all these operations be organized? What organization is most elegant?
Some possibilities:
All stateful operations are gathered in one single stateful service, while all stateless operations are grouped into an arbitrary set of services, possibly by which area they target. Possible problem: the single stateful service can become very large.
Both stateful and stateless operations are grouped into smaller services, but stateful and stateless operations are still separated so that no service contains both. Possibly, all session establishment and finalization can be put in a separate thin dedicated service, say SessionService. With this approach we have no huge single stateful service. But is the organization elegant? Why force a strict separation of the stateful and stateless operations at all?
Group all operations by their target, ignoring their statefulness. This gives a number of services with mixed stateful and stateless operations. The former can take the session GUID token as an input argument, and a service behavior can take care of automatically handling the session establishment given some appropriate naming convention for the session key/token (see the sketch after this list).
Similar to above, a separate dedicated service can take care of session establishment and finalization.
Something else, not mentioned above?
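As a rough illustration of the third option above, a mixed contract could look like the sketch below; the service name, operation names and payload types are purely illustrative, and the GUID token is simply whatever the existing custom session handler issues:
using System;
using System.ServiceModel;

[ServiceContract]
public interface ICustomerScenarioService
{
    // Stateless operation: no session token required.
    [OperationContract]
    string GetCustomerSummary(string customerNumber);

    // Stateful operations: the custom session handler resolves the scenario
    // state from the GUID token that the client sends with every call.
    [OperationContract]
    void AddScenarioStep(Guid sessionToken, string stepDefinition);

    [OperationContract]
    string CalculateScenario(Guid sessionToken);
}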
Which organization is the most elegant?

I have implemented what is basically your third option. We have 20+ operations, some of which check the request for a SessionID (GUID) and some of which do not (like Ping() and Login(DeviceID)). All the session handling is "custom" in the sense that we're not using WCF persistent sessions; rather, we've created a Login() function that takes a GUID, ID and Password from client requests for authentication.
On Login(), we validate the DeviceID, UserID and Pwd against the DB, creating a row on the Session table containing StartTime (session only good for <8 hrs) and DeviceID. We then send back to the client a SessionID (GUID) that he/she uses in all subsequent connections, whether uploading or downloading.
As far as organization is concerned, the subs and methods are organized by the device type (iOS, PC, Android) and the type of operation, just to keep the apples from the oranges. But each Session-related function always authenticates the request, validating the inbound SessionID. That may seem wasteful, checking each session with each request (again and again), but because we're using BasicHttpBinding, we're forced into a stateless model.
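A stripped-down sketch of that pattern, with made-up names and the data access reduced to comments, so treat it as an outline rather than the actual implementation:
using System;
using System.ServiceModel;

[ServiceContract]
public interface IDeviceService
{
    [OperationContract]
    Guid Login(Guid deviceId, string userId, string password);   // returns the SessionID

    [OperationContract]
    string Download(Guid sessionId, string query);
}

public class DeviceService : IDeviceService
{
    public Guid Login(Guid deviceId, string userId, string password)
    {
        // Validate DeviceID, UserID and Pwd against the DB (omitted), then insert a
        // Session row containing StartTime and DeviceID.
        var sessionId = Guid.NewGuid();
        // db.InsertSession(sessionId, deviceId, DateTime.UtcNow);   // hypothetical data access
        return sessionId;
    }

    public string Download(Guid sessionId, string query)
    {
        // Every session-related operation re-validates the inbound SessionID,
        // e.g. the Session row must exist and StartTime must be less than 8 hours old.
        // if (!db.IsSessionValid(sessionId, TimeSpan.FromHours(8)))
        //     throw new FaultException("Invalid or expired session.");
        return "...";
    }
}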

Related

Separate or Merge Kafka Consumer and API services together

After recently reading about event-based architecture, I wanted to change my architecture into one that makes use of its strengths.
I have two services that expose an API (crud, graphql), each based around a different entity and using a different database.
However, now whenever someone deletes a certain type of row in service A, I need to delete a coupled row in service B.
So I added Kafka to my design, and whenever I delete the entity in service A, it publishes a notification message into Kafka.
In service B I am currently consuming the same topic, so whenever a new message is received the service will also handle the deletion of the matching entity, because it already has access to that table, as it already exposes the CRUD API to users.
What I'm not sure about is whether putting the Kafka consumer and the API together in the same service is a good design. It contradicts the point of single responsibility in microservices, and if there is an issue in one part of the service, it will likely affect the other.
However, creating a new service will also cause me issues - I will have 2 different services accessing the same table, and I will have to make sure I always maintain them together whenever making changes to the table or database.
What is the best practice in a situation such as this? Is it inevitable that different services have data coupling, or is it not so bad to use the same service for two similar usages?
There is nothing wrong with using Kafka... You could do the same with point-to-point service communication (JSON-RPC / gRPC), however.
The real problem you seem to be asking about is dual-writes or race-conditions leading to data inconsistency.
While you could use a single consumer group and one topic-partition to preserve order and locking across consumers interested in those events, that does not lock out other consumer-groups from interacting with the database to perform the same action. Therefore, Kafka itself won't help with this problem.
You'll need external, distributed locks (e.g. Zookeeper can be used here) that fence off your database clients while you are performing actions against it.
To the original question, Kafka Connect offers an API and is also a Producer and Consumer client (and would be recommended for database interactions). So are Confluent Schema Registry, ksqlDB, etc.
I believe that the consumer of your service B would not be considered "a service" or part of the "service", in that it is not called as part of the code which services requests. Yet it does provide functionality that is required for the domain function of your microservice. So yes, I would consider the consumer part of the microservice in terms of team/domain responsibility.
There may be different opinions on if the consumer code should share the same code base/repo as the "service" code. Some people believe that it is better to limit the repo scope to a single "executable", others believe it is beneficial to keep the domain scope and have everything in a single repo. I probably belong to the latter group but do not have a very strong opinion on it. I would argue it is more important to have a central documentation / wiki for the domain that will point to the repos involved etc.
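For what it's worth, hosting the consumer inside the same service can be as simple as a background loop next to the API. Below is a minimal C# sketch using the Confluent.Kafka client; the topic name, group id, broker address and repository call are all assumptions for illustration:
using System;
using System.Threading;
using Confluent.Kafka;

public class EntityDeletedConsumer
{
    public void Run(CancellationToken token)
    {
        var config = new ConsumerConfig
        {
            BootstrapServers = "localhost:9092",        // hypothetical broker address
            GroupId = "service-b",                      // one consumer group for service B
            AutoOffsetReset = AutoOffsetReset.Earliest
        };

        using (var consumer = new ConsumerBuilder<Ignore, string>(config).Build())
        {
            consumer.Subscribe("entity-deleted");       // hypothetical topic published by service A

            while (!token.IsCancellationRequested)
            {
                var result = consumer.Consume(token);
                // Reuse the same data access the CRUD API already owns,
                // so that only this service writes to the table.
                // repository.DeleteByExternalId(result.Message.Value);   // hypothetical
                Console.WriteLine($"Deleting entity for key {result.Message.Value}");
            }

            consumer.Close();
        }
    }
}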

MVC with Service architecture

I'm creating an MVC project where one of its views will have a search part and a listing part.
At the same time I have an idea of using a service layer (Web API or WCF).
I would like to ask which is the correct way or setup for building this search and listing page?
The way I'm doing it at the moment is to use a partial view for the listing part that gets updated every time a search occurs, and to position the service layer behind the controller (service layer between the controller and the business layer).
Thank you.
MVC Controllers should be thin route drivers. In general your controller actions should look similar to
[Authorize(Roles = "User,Admin")]
[GET("hosts")]
public ActionResult Hosts(int id)
{
    if (false == ModelState.IsValid)
        return new HttpStatusCodeResult(403, "Forbidden for reasons....");

    var bizResponse = bizService.DoThings();

    if (bizResponse == null)
        return HttpNotFound(id + " could not be found");
    if (false == bizResponse.Success)
        return new HttpStatusCodeResult(400, "Bad request for reasons....");

    return View(bizResponse);
}
You can also generalize the model state checking and response object checking (if you use a common contract - base type or interface) to simply have:
[Authorize(Roles = "User,Admin")]
[GET("hosts")]
[AutoServiceResponseActionFilter]
public ActionResult Hosts(int id)
{
    var bizResponse = bizService.DoThings();
    return View(bizResponse);
}
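One possible shape for that filter, assuming a common response contract; both the attribute and the IServiceResponse interface below are illustrative, not an existing library:
using System.Web.Mvc;

public interface IServiceResponse
{
    bool Success { get; }
}

public class AutoServiceResponseActionFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // Centralised model-state check: short-circuit before the action runs.
        if (!filterContext.Controller.ViewData.ModelState.IsValid)
            filterContext.Result = new HttpStatusCodeResult(400, "Bad request for reasons....");
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        // Centralised response check: a failed IServiceResponse becomes a 400 here
        // instead of being handled in every action.
        var viewResult = filterContext.Result as ViewResult;
        var response = viewResult?.ViewData.Model as IServiceResponse;
        if (response != null && !response.Success)
            filterContext.Result = new HttpStatusCodeResult(400, "Bad request for reasons....");
    }
}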
I am a proponent of using serialization to pass from the business layer to the HTTP/MVC/ASP.NET layer. Anything that you use should not generate any HTTP or TCP requests if it is in-process, and should use named pipes for in-memory transport. WCF with the IDesign InProcFactory gives you this out of the box; you can't emulate this very well with Web API, and you may be able to emulate it with NFX or ServiceStack, but I am not sure offhand.
If you want the bizService to be hosted out of process, the best transport at this point is to use a message bus or message queue to the bizService. Generally when working with this architecture you need a truly asynchronous UI, so that once the HTTP endpoint accepts the request, the UI can immediately receive the HTTP OK or HTTP ACCEPTED response and be informed later of the execution of the action.
In general an MVC controller / ASP.NET HTTP endpoint should never initiate an HTTP request. Your bizService, if necessary, is free to call a third-party HTTP service. Ultimately, round-trip network calls are what kills the perceived performance of everything. If you cannot avoid round-trip calls you should strive to limit them to at most one for read and at most one for write. If you find yourself needing to invoke multiple read and multiple write calls over the wire, that is highly illustrative of a bad architectural design of the business system.
Lastly, in a well-designed SOA, your system is much more functional than OO. Functional logic with immutable data and a lack of shared state is what scales. The more dependent you are on any shared state, the more fragile the system is, and it starts to actively become anti-scale. Being highly stateful can easily lead to systems that fracture at the 20-50 req/s range. Nominally, a single-server system should handle 300-500 req/s of real-world usage.
The reason to proxy business services such as this is to follow the trusted subsystem pattern. No user is ever able to authenticate to your business service; only your application is able to authenticate. No user is ever able to determine where your business services are hosted. Related to this, users should never be authorized against the business service itself; a business service action should be able to authorize the originator of the request if necessary. In general this is only needed for fine-grained control, such as when individual records can be barred from a user.
Since clients are remote and untrustworthy (users can maliciously manipulate them whether they're JavaScript or compiled binaries), they should never have any knowledge of your service layer. The service layer itself could literally be firewalled off from the entire internet, only allowing your web servers to communicate with the service layer. Your web server may have some presentation-building logic in it, such as seeding your client with userId, name, security tokens etc., but it will likely be minimal. It is the web server, acting as a proxy, that needs to initiate calls to the service layer.
Short version, only a controller should call your service layer.
One exception: if you use a message queuing system like Azure Service Bus, for example, depending on security constraints it could be fine for your UI to directly enqueue messages to the ASB, as the ASB can be treated as a DMZ and still shields your services from any client knowledge. The main risk of direct queue access is that a malicious user could flood your queue for a denial-of-service type attack (and cost you money). A non-malicious risk is that if you change the queue contract, out-of-date clients could produce numerous dead letters or poison messages.
I really believe the future of all development is clients that directly enqueue messages, but current technology is very lacking for doing this easily and securely. Direct queue access will be imperative for the future of the Internet of Things. Web servers just do not have the capacity to receive continuous streams of events from thousands or millions of light bulbs and refrigerators.

How to connect separate microservice applications?

I am building a huge application using a microservices architecture. The application will consist of multiple backend microservices (deployed on multiple cloud instances), some of which I would like to connect using REST APIs in order to pass data between them.
The application will also expose public api for third parties, but the above mentioned endpoints should be restricted ONLY to other microservices within the same application creating some kind of a private network.
So, my question is:
How to achieve that restricted api access to other microservices within the same application?
If there are better ways to connect microservices than using http transport layer, please mention them.
Please keep the answers server/language agnostic if possible.
Thanks.
Yeah easy. Each client of a micro service has an API key. Micro services only accept requests from clients with a valid API Key.
Also, it's good to know that REST is simply an architectural style that allows communication between bounded contexts.
It doesn't have to be over HTTP. The requirement is that it has a uniform interface (this is why HTTP is used, with its PUT, POST, GET, DELETE... methods) and that it is stateless (all required state being transferred with each request).
So if all your micro services run on the same box, all you need to do is something like this:
class SomeClass implements RestfulMethods {
    public function get(params)    { /* return something */ }
    public function post(params)   { /* add something */ }
    public function put(params)    { /* update something */ }
    public function delete(params) { /* delete something */ }
}
Microservices then communicate by interacting with the RestfulMethods implementations of other services.
But if your microservices are on different machines, it's probably best to use HTTP as the transport mechanism.
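Not language-agnostic, but just to show how small the API-key gate from the first paragraph can be, here is a C# / ASP.NET MVC sketch; the header name and the hard-coded keys are stand-ins for whatever configuration or secrets store you actually use:
using System.Collections.Generic;
using System.Web;
using System.Web.Mvc;

public class RequireApiKeyAttribute : AuthorizeAttribute
{
    // In practice these would come from configuration or a secrets store.
    static readonly HashSet<string> ValidKeys = new HashSet<string> { "service-a-key", "service-b-key" };

    protected override bool AuthorizeCore(HttpContextBase httpContext)
    {
        // Reject any caller that does not present a known key.
        var key = httpContext.Request.Headers["X-Api-Key"];
        return key != null && ValidKeys.Contains(key);
    }
}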
One way is to use HTTPS for internal MS communication. Lock down the access (using a trust store) to only your services. You can share a certificate among the services for backend communication, preferably a wildcard certificate. Then it should work as long as your services can be addressed under the same domain, like *.yourcompany.com.
Once you have it all in place, it should work fine. HTTPS sessions do imply some overhead, but that's primarily in the handshake process. Using keep-alive on your sessions, there shouldn't be much overhead from the encrypted channels.
Of course, you can simply add some credentials to your http headers as well. That would be less secure.
A REST API is not the only way to do it. One of the ideas I have seen is to use a service registry like Eureka (Netflix), Zookeeper (Apache) and others.
Here is an example:
https://github.com/tiarebalbi/qcon2015-sao-paolo-microservices-workshop
...the above mentioned endpoints should be restricted ONLY to other microservices within the same application...
What you are talking about in a broad sense is authorisation.
Authorisation is the granting or denying of "powers" or "abilities" within your application to authentic users.
Therefore the job of any authorisation mechanism is to validate the "claim" implicit in any inbound API request - that the user is allowed to do the thing encoded in the request.
As an example, imagine I turned up at your API with a PUT request for Widget 1234:
PUT /widgetservice/widget/1234 HTTP/1.1
This could be interpreted as me (Bob Smith, a known user) making a claim that I am allowed to make changes to a widget in your system with id 1234.
Whatever you do to validate this claim, I hope you can see this needs to be done at the application level, rather than at the API level. In fact, authorisation is an application-level concern, rather than an API-level concern (unlike authentication, which is very much an API level concern).
To demonstrate, in our example above, it's theoretically possible that I'm allowed to create a new widget, but not to update an existing widget:
POST /widgetservice/widget/1234 HTTP/1.1
Or even that I'm allowed to update only widget 1234, and requests to change other widgets should not be allowed:
PUT /widgetservice/widget/5678 HTTP/1.1
How to achieve that restricted api access to other microservices within the same application?
So this becomes a question about how can you build authorisation into your application so that you can validate individual requests coming from known users (in your case your other services in your ecosystem are just another kind of known user).
Well, and apologies, but I'm going to be prescriptive here: you could use a claims-based authorisation service, which stores valid claims based on user identity or membership of roles.
It depends largely on how you are handling authentication, and whether or not you are supporting roles as part of that process. You could store claims against individual users, but this becomes arduous as the number of users increases. OAuth, despite being pretty heavy to implement, is a leading platform for this.
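As a tiny illustration of what validating such a claim can look like in code, assuming claims were attached to the caller at authentication time (the claim type and role name are invented for the example):
using System.Security.Claims;

public static class WidgetAuthorisation
{
    public static bool CanUpdateWidget(ClaimsPrincipal caller, int widgetId)
    {
        // Allow the update if the caller holds a claim for this specific widget,
        // or belongs to a role that may update any widget.
        return caller.HasClaim("widget:update", widgetId.ToString())
            || caller.IsInRole("WidgetAdmin");
    }
}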
I am building huge application using microservices architecture
The only thing I will say here is read this first.
The easiest way is to only enable access from the IP address that your microservices are running on.
I know I'm super late to this question, but for anyone who comes across this thread, Kafka is a great option for operations similar to the one in this question.
Based on Kafka's own introduction:
Kafka is generally used for two broad classes of applications:
Building real-time streaming data pipelines that reliably get data between systems or applications
Building real-time streaming applications that transform or react to the streams of data
Side note: Kafka was created at LinkedIn and is being used by many huge companies, so it's kind of battle-tested.
You can use RabbitMQ: publish your requests to a queue and then consume them as tasks.

NServiceBus Sagas and REST API Integration best-practices

What is the most sensible approach to integrate/interact NServiceBus Sagas with REST APIs?
The scenario is as follows,
We have a load balanced REST API. Depending on the load we can add more nodes.
REST API is a wrapper around a DomainServices API. This means the API can be consumed directly.
We would like to use Sagas for workflow and implement NServiceBus Distributor to scale-out.
The question is: if we use the REST API from Sagas, the actual processing happens in the API farm. This in a way defeats the purpose of implementing the distributor pattern.
On the other hand, using the DomainServices API directly from Sagas allows processing locally within the worker nodes. With this approach we will have to maintain API assemblies in multiple locations, but the throughput could be higher.
I am trying to understand the best approach. Personally, I'd prefer to consume the API (if readily available), but this could introduce chattiness to the system and could take longer to complete as compared to in-process calls.
A typical sequence could be similar to publishing an online advertisement:
Advertiser submits a new advertisement request via a web application.
Web application invokes the relevant API endpoint and sends a command message.
Command message initiates a new publish-advertisement Saga instance.
Saga sends a command to validate caller permissions (in-process/out-of-process API call).
Saga sends a command to validate the advertisement data (in-process/out-of-process API call).
Saga sends a command to the fraud service (third-party service).
Once the content and fraud verifications are successful, Saga sends a command to the billing system.
Saga invokes an API call to save ad details (in-process/out-of-process API call).
And this goes on until the advertisement expires; there are a number of retry and failure-condition paths.
After a number of design iterations we came up with the following guidelines:
Treat REST API layer as the integration platform.
Assume API endpoints are capable of abstracting fairly complex micro work-flows. Micro work-flows are operations that execute in a single burst (not interruptible) and complete within a short time span (<1 second).
Assume API farm is capable of serving many concurrent requests and can be easily scaled-out.
Favor synchronous invocations over asynchronous message based invocations when the target operation is fairly straightforward.
When asynchronous processing is required, use a single message handler and invoke the API from the handlers (see the sketch after this list). This will delegate work to the API farm. This will also eliminate the need for a distributor and extra hardware resources.
Avoid Sagas unless the business work-flow contains multiple transactions, compensation logic and resumes. Tests reveal Sagas do not perform well under load.
Avoid consuming DomainServices directly from a message handler. This will do the work locally and also introduces a deployment hassle by distributing business logic.
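A bare-bones sketch of that "single message handler delegating to the API farm" guideline; the command type, endpoint URL and payload handling are assumptions, and the NServiceBus 6+ handler signature is used:
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using NServiceBus;

public class PublishAdvertisementCommand : ICommand
{
    public Guid AdvertisementId { get; set; }
    public string Payload { get; set; }
}

public class PublishAdvertisementHandler : IHandleMessages<PublishAdvertisementCommand>
{
    static readonly HttpClient Client = new HttpClient();

    public async Task Handle(PublishAdvertisementCommand message, IMessageHandlerContext context)
    {
        // Delegate the real work to the load-balanced REST API farm instead of
        // running domain logic inside the worker process.
        var content = new StringContent(message.Payload, Encoding.UTF8, "application/json");
        var response = await Client.PostAsync("https://api.example.internal/advertisements", content);
        response.EnsureSuccessStatusCode();
    }
}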
Happy to hear your thoughts.
You are right on with identifying that you will need Sagas to manage workflow. I'm willing to bet that your Domain hooks up to a common database. If that is true then it will be faster to use your Domain directly and remove the serialization/network overhead. You will also lose the ability to easily manage the transactions at the database level.
Assuming you are directly calling your Domain, the performance becomes a question of how the Domain performs. You may take steps to optimize the database, drive down distributed-transaction costs, shard the data, etc. You may end up using the Distributor to have multiple Saga processing nodes, but it sounds like you have some more testing to do once a design is chosen.
Generically speaking, we use REST APIs to model the commands as resources (via POST) to allow interaction with NSB from clients who don't have direct access to messaging. This is a potential solution to get things onto NSB from your web app.

WCF Design questions

I am designing a WCF service.
I am using netTCP binding.
The Service could be called from multi-threaded clients.
The multi-threaded clients are not sharing the proxy.
1. WCF Service design question.
The client has to send these 2 values in every call: UserID and SourceSystemID. This will help the Service identify the user and the system he belongs to.
Instead of passing these 2 values in every call, I decided to have them cached with the Service for the duration of the call from the client.
I decided to have a parameterized constructor for the Service and store these values in the ChannelContext as explained in this article.
http://www.danrigsby.com/blog/index.php/2008/09/21/using-icontextchannel-extensions-to-store-custom-data/
Initially I wanted to go with storing the values in the Session and have a method for initialization and termination. But there I found that I need to manually clean up the session in each case. When I am storing values in the channel context, I don’t have to clean it up every time and when the channel closes the values stored are already destroyed.
Can somebody please make sure that I am correct in my assumption?
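For reference, the channel-extension approach from that article boils down to something like the sketch below; the CallerContext name and its properties are illustrative, not taken from the article:
using System.ServiceModel;

public class CallerContext : IExtension<IContextChannel>
{
    public int UserId { get; set; }
    public int SourceSystemId { get; set; }

    public void Attach(IContextChannel owner) { }
    public void Detach(IContextChannel owner) { }

    public static CallerContext Current
    {
        get
        {
            // Stored per channel, so it is discarded automatically when the channel
            // closes; no explicit session clean-up is needed.
            var context = OperationContext.Current.Channel.Extensions.Find<CallerContext>();
            if (context == null)
            {
                context = new CallerContext();
                OperationContext.Current.Channel.Extensions.Add(context);
            }
            return context;
        }
    }
}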
2. Should I use SessionMode?
For my contract, I tried both with [ServiceContract(SessionMode = SessionMode.Required)] and without this service attribute.
Irrespective of my choice, I always find a value for System.ServiceModel.OperationContext.Current.SessionId.
How can this be explained?
When I say SessionMode.Required, does my InstanceContextMode automatically change to PerSession?
3. InstanceContextMode to be used?
My service is stateless except that I am storing some values in the Channel Context as mentioned in (1).
Should I use PerCall or PerSession as the InstanceContextMode?
The netTcp always has a transport-level session going - so that's why you always have a SessionId. So basically, no matter what you choose, with netTcp, you've got a session-ful connection right from the transport level on up.
As for InstanceContextMode - as long as you don't need anything else from a session except the SessionId - no reliable messaging etc. - then I'd typically pick PerCall: it's more scalable, it typically performs better, and it gives you less "glue" to worry about and fewer bits and pieces that you need to manage.
I would use an explicitly required session only if you need to turn on reliable messaging or something else that absolutely requires a WCF session. If you don't - then it's just unnecessary overhead, in my opinion.
Setting SessionMode to SessionMode.Required will enforce using bindings which support sessions, like NetTcpBinding, WSHttpBinding, etc. In fact, if you try using a non-session-enabled binding, the runtime will throw an exception when you try to open the host.
Setting InstanceContextMode to PerSession means that only one instance of the service will be created per session, and that instance will serve all the requests coming from that session.
Having SessionId set by the runtime means that you might have a transport session, a reliable session or a security session. Having those does not necessarily mean you have an application session, that is, a single service object serving the requests per proxy. In other words, you might switch off the application session by setting InstanceContextMode = PerCall, forcing the creation of a new service object for every call, while maintaining a transport session due to using netTcpBinding, or a reliable or security session.
Think of the application session that is configured by InstanceContextMode and SessionMode as a higher-level session, relying on a lower-level session (security, transport or reliable). An application session cannot actually be established without having one of the other sessions in place; hence the requirement for a session-capable binding.
It is getting a bit long already, but for simple values I would recommend passing those values every time instead of creating an application session. That will ensure the service objects have a short lifetime and no unnecessary resources are kept alive on the server. It makes a lot of sense with more clients or proxies talking to your service. And you could always cache the values in the clients, or even pass them as custom headers if you want.
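To make that recommendation concrete, here is a minimal sketch of the per-call, pass-the-values-every-time style; the contract and operation names are illustrative only:
using System.ServiceModel;

[ServiceContract(SessionMode = SessionMode.Allowed)]   // no application session required
public interface ILookupService
{
    [OperationContract]
    string Lookup(int userId, int sourceSystemId, string key);   // caller identification passed on every call
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]   // new service object per call
public class LookupService : ILookupService
{
    public string Lookup(int userId, int sourceSystemId, string key)
    {
        // Even with PerCall, netTcpBinding keeps a transport-level session underneath,
        // which is why OperationContext.Current.SessionId is still populated.
        return "...";
    }
}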