I have an overloaded query method on the server side. I wanted to know whether I can overload the async callback depending on the method signature, or whether it is advisable to define two different AsyncCallbacks. These are my two methods on the server:
public String fetchInADay(String startTime, String endTime) {}
public String fetchInADay(String startTime, String endTime, String type) {}
As an aside, please comment:
If I am required to make two different callbacks, isn't this against the principles of OO?
There's no way to overload the async callback in this situation because the onSuccess methods will have the same signature.
You can pass the same AsyncCallback object to multiple services, but it won't be able to tell which service or function called it. If you want different behavior for your different service calls, you need two different callbacks.
I assume you plan to reuse the logic implemented in onSuccess(String result). This works fine independent of what service method you call. You could even share the same instance across multiple calls.
Since JavaScript is single-threaded, you're on the safe side: the responses (onSuccess() calls) of multiple async calls won't interfere with each other. But because of the asynchronous nature of these calls, the order of their callbacks isn't guaranteed.
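To make the "share the logic, not the instance" idea concrete, here is a framework-free sketch. The AsyncCallback interface below is a minimal stand-in for GWT's com.google.gwt.user.client.rpc.AsyncCallback so the code compiles without GWT; a small factory keeps the shared onSuccess logic in one place while still giving each service call its own callback instance that knows which call it belongs to:

```java
import java.util.ArrayList;
import java.util.List;

public class CallbackSketch {
    // Minimal stand-in for GWT's AsyncCallback<T>, so this sketch is self-contained.
    interface AsyncCallback<T> {
        void onSuccess(T result);
        void onFailure(Throwable caught);
    }

    static final List<String> log = new ArrayList<>();

    // Shared handling lives here once; the captured tag provides per-call context.
    static AsyncCallback<String> fetchCallback(String tag) {
        return new AsyncCallback<String>() {
            public void onSuccess(String result) {
                log.add(tag + ":" + result);
            }
            public void onFailure(Throwable caught) {
                log.add(tag + ":failed");
            }
        };
    }

    public static void main(String[] args) {
        // Two "service calls" get two distinct callback instances...
        AsyncCallback<String> twoArg = fetchCallback("fetchInADay/2");
        AsyncCallback<String> threeArg = fetchCallback("fetchInADay/3");
        // ...so when the (simulated) responses arrive, each is attributable.
        twoArg.onSuccess("resultA");
        threeArg.onSuccess("resultB");
        System.out.println(log);
    }
}
```

One shared instance passed to both calls would run the same code either way, but it could never tell which service method produced the result; the factory avoids the code duplication without losing that information.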
Perhaps I am confused or misunderstanding some basic principles. When applying DDD in an Axon-based event sourcing project, you define a number of aggregates, each of them having several command handler methods that verify whether a state change of the aggregate is valid, and a corresponding collection of event handler methods applying each requested state change.
EDIT START:
So an aggregate could look like
@Aggregate
public class Customer {

    @AggregateIdentifier
    private CustomerId customerId;
    private Name name;

    @CommandHandler
    private Customer(CreateCustomerCommand command) {
        AggregateLifecycle.apply(new CustomerCreatedEvent(
                command.getCustomerId(), command.getName()));
    }

    @EventSourcingHandler
    private void handleEvent(CustomerCreatedEvent event) {
        this.customerId = event.getCustomerId();
        this.name = event.getName();
    }
}
EDIT END
So my first question is: am I correct to conclude that the aggregate does not implement any state-changing methods directly (typical public methods altering the properties of the aggregate instance), but that all state-changing methods are defined in a separate domain service interacting with Axon's command gateway?
EDIT START:
In other words, should an aggregate define setters responsible for sending a command to the framework, which will result in the corresponding command handler and event handler being called, by adding the code below to the aggregate?
public void setName(String name) {
    commandGateway.send(new UpdateNameCommand(this.customerId, name));
}

@CommandHandler
private void handleCommand(UpdateNameCommand command) {
    AggregateLifecycle.apply(new NameUpdatedEvent(command.getCustomerId(), command.getName()));
}

@EventSourcingHandler
private void handleEvent(NameUpdatedEvent event) {
    this.name = event.getName();
}
This seems to violate some recommendations, since a reference to the gateway is needed from within the aggregate.
Or do you typically define such methods in a separate service class, which then sends the command to the framework?
@Service
public class CustomerService {

    @Autowired
    private CommandGateway gateway;

    public void createCustomer(String name) {
        CreateCustomerCommand command = new CreateCustomerCommand(name);
        gateway.send(command);
    }

    public void changeName(CustomerId customerId, String name) {
        UpdateNameCommand command = new UpdateNameCommand(customerId, name);
        gateway.send(command);
    }
}
This seems like the correct approach to me. It appears (at least in my opinion) to make the aggregate's behavior not directly accessible to the outside world (all command and event handler methods can be made private), unlike more "traditional" objects, which are the entry point for requesting state changes...
EDIT END
And secondly: isn't this in contradiction with the OOP principle that each class defines the (public) interface methods through which it is interacted with? In other words, doesn't this approach turn an object into a more or less dumb object with which direct interaction is impossible?
Thanks for your feedback,
Kurt
So my first question is: am I correct to conclude that the aggregate does not implement any state-changing methods directly (typical public methods altering the properties of the aggregate instance), but that all state-changing methods are defined in a separate domain service interacting with Axon's command gateway?
Absolutely not! The aggregate itself is responsible for any state changes.
Probably, the misconception is around the Command Handler (method) within the aggregate, when using event sourcing. In that case, the Command Handler (method) should not directly apply changes. Instead, it should apply an Event, which then invokes an Event Sourcing Handler (method) within that same aggregate instance to apply the state changes.
Whether using Event Sourcing or not, the aggregate should expose actions (e.g. Command Handlers) only, and make its decisions based on those actions. Preferably, the Aggregate (or a Command Handler) doesn't expose any state outside of the Aggregate boundaries.
And secondly: isn't this in contradiction with the OOP principle that each class defines the (public) interface methods through which it is interacted with? In other words, doesn't this approach turn an object into a more or less dumb object with which direct interaction is impossible?
That would have been the case if the aggregate relied on external components to manage its state. But it doesn't.
additional reactions after question edit
So my first question is: am I correct to conclude that the aggregate does not implement any state-changing methods directly (typical public methods altering the properties of the aggregate instance), but that all state-changing methods are defined in a separate domain service interacting with Axon's command gateway?
I think it's exactly the opposite. The first aggregate in the question is an example of an aggregate that has all state-changing operations inside of itself. Exposing setters to "change state" is a very bad idea: it forces the logic that decides when to change this value outside of the Aggregate. That, in my opinion, is a "violation" of OOP.
Aggregates, in a DDD and/or CQRS context, should not be instructed to change state. Instead, they should receive the actual business intent to react to. That is what a Command should reflect: the business intent. As a result, the Aggregate may change some attributes, but only to ensure that any commands happening after that behave in a way that reflects what has happened before. With event-sourced aggregates, there is an extra intermediate step: applying an event. That is needed to ensure that sourcing an aggregate from past decisions yields exactly the same state. Also note that these events are not "state change decisions" but "business decisions"; the state changes are a result of them.
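To make that contrast concrete, here is a framework-free sketch of an event-sourced aggregate reacting to business intent rather than to a setter. The class, record, and method names are illustrative stand-ins, not Axon API; in Axon, the apply/on wiring shown here is done by AggregateLifecycle.apply together with @CommandHandler and @EventSourcingHandler:

```java
import java.util.ArrayList;
import java.util.List;

// Framework-free sketch of the pattern Axon wires up with annotations.
public class CustomerSketch {
    record ChangeNameCommand(String customerId, String name) {}
    record NameChangedEvent(String customerId, String name) {}

    private final String customerId;
    private String name;
    final List<Object> appliedEvents = new ArrayList<>();

    CustomerSketch(String customerId) {
        this.customerId = customerId;
    }

    // The aggregate itself decides: it reacts to business intent,
    // not to a "set this field" instruction from outside.
    void handle(ChangeNameCommand command) {
        if (command.name() == null || command.name().isBlank()) {
            throw new IllegalArgumentException("name must not be blank");
        }
        // Event sourcing: don't mutate directly; apply an event and let
        // the event sourcing handler do the mutation.
        apply(new NameChangedEvent(customerId, command.name()));
    }

    private void apply(Object event) {
        appliedEvents.add(event); // in Axon: AggregateLifecycle.apply(...)
        if (event instanceof NameChangedEvent e) {
            on(e);
        }
    }

    // Replaying past events through this handler rebuilds the same state.
    private void on(NameChangedEvent event) {
        this.name = event.name();
    }

    String name() {
        return name;
    }
}
```

The decision logic (validation, whether the change is allowed) sits in the command handler; the state mutation sits in the event handler, so replaying stored events reproduces the state without re-running the decisions.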
Final comments
The service class shown at the end would be the typical interaction: a component sending commands, not directly interacting with the Aggregate instance. However, the "UpdateNameCommand", as a comparison with the "setName" in the previous example, put me off, since Commands should not be C(R)UD operations but actual business operations. It may be the case that UpdateName is such a business operation.
If I have some WCF methods like
GetEmployeeDetailsResponse GetEmployeeDetails(GetEmployeeDetailsRequest request)
GetCustomerDetailsResponse GetCustomerDetails(GetCustomerDetailsRequest request)
and I need to perform input validation on the Request objects, can I use static methods?
Many of the validations will be common, like the request object not being null and the employee ID/customer ID (in the request message) not being 0, and things like that. I am guessing that since the Request objects themselves are separate objects, passing them into a static method should not cause any thread-safety issues.
I am using Per-Call services.
Thanks
Vikas
Yes, you can.
But think about situations where you are validating request #1 and receive request #2 before request #1 is done.
If your static method does something shared between both of these requests, you may find yourself reaching for locks.
Using some kind of inspector, like IClientMessageInspector, would be the better choice for such things, IMO.
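The thread-safety point generalizes beyond WCF; here is a small sketch in Java rather than C# (the request record and method names are illustrative). A static validation method is safe under concurrent per-call requests exactly when it is stateless: it reads only its arguments and locals and touches no shared mutable state:

```java
public class RequestValidation {
    // Common shape shared by the request messages, for the common checks.
    interface HasId {
        long id();
    }

    record GetEmployeeDetailsRequest(long id) implements HasId {}
    record GetCustomerDetailsRequest(long id) implements HasId {}

    // Stateless: only reads its parameter, writes nothing shared, so
    // concurrent calls from per-call service instances cannot interfere.
    static void validateCommon(HasId request) {
        if (request == null) {
            throw new IllegalArgumentException("request must not be null");
        }
        if (request.id() == 0) {
            throw new IllegalArgumentException("id must not be 0");
        }
    }
}
```

The moment such a method writes to a static field, cache, or counter, the locking concerns mentioned above apply.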
I am trying to create a WCF service that supports asynchronous calls. I followed all samples and tutorials I could find, and all of them have the customary pattern of one synchronous method, and the async Begin and End such as:
[OperationContract(AsyncPattern = false)]
string GetData(int value);
[OperationContract(AsyncPattern = true)]
IAsyncResult BeginGetData(int value, AsyncCallback callback, object asyncState);
string EndGetData(IAsyncResult result);
However, only the synchronous GetData gets called, no matter what I do on the client side. Fiddler tells me that the message is always the same:
<s:Envelope
xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"><s:Body><GetData
xmlns="http://tempuri.org/"><value>0</value></GetData></s:Body></s:Envelope>
When I remove the synchronous GetData interface, the async method now is properly called.
Is this normal behavior? Is there anything else I should do to support sync and async versions of a method?
This is a common misconception. You assume that you need to make the server asynchronous in order for the client to be able to make async calls. This is not true. Server and client are 100% independent. They are separated by a binary wire protocol.
The message that you see in Fiddler is always the same because SOAP does not know anything about sync or async. At the SOAP level your decision does not manifest itself. For that reason the client cannot observe your server-side decision, either.
This means you can make the server synchronous and still have a truly async client, or the other way around.
In any case, you should only implement one pattern on the server: Either sync or async. Never both. Get rid of one of your implementations. From a functional standpoint it doesn't matter which one stays.
I'm pulling up important information from the comments here:
It is hard to fit an explanation about when to use server-side async into this comment box. In short, don't use it on the server by default; use it if special circumstances make it attractive or necessary.
On a meta level, let me point out that async IO has become a fad that should not be followed lightly. The community is in a very unfortunate state of misinformation about this right now.
I am trying to solve a problem where I have a WCF system for which I have built a custom host, factory host, instance providers and service behaviors to do authentication and dependency injection.
However, I have run into a problem at the authorisation level, as I would like to do authorisation at the level of the method being called.
For example
[OperationContract]
[WebGet(UriTemplate = "/{ConstituentNumber}/")]
public Constituent GetConstituent(string ConstituentNumber)
{
Authorisation.Factory.Instance.IsAuthorised(MethodBase.GetCurrentMethod().Name, WebOperationContext.Current.IncomingRequest.Headers["Authorization"]);
return constituentSoapService.GetConstituentDetails(ConstituentNumber);
}
Basically I now have to copy the call to IsAuthorised across every web method I have. This has two problems:
It is not very testable. I have extracted the dependencies as best I can, but this setup means that I have to mock out calls to the database and calls to the WebOperationContext.
I have to copy that method over and over again.
What I would like to know is: is there a spot in the WCF pipeline that enables me to know which method is about to be called, execute the authorisation request, and then execute the method based on the true/false value of the authorisation response?
Even better if I can build an attribute that says how to evaluate the method.
One possible way to do what you want might be by intercepting requests with a custom IDispatchMessageInspector (or similar WCF extension point).
The trick there, however, is that all you get is the raw message, but not where it will be processed (i.e. the method name). With a bit of work, however, it should be possible to build a map of URIs/actions and the matching method names (this is how you'd do it for SOAP, though I haven't tried it for WebGet/WebInvoke yet).
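A sketch of that mapping idea, in Java rather than C# for illustration (the map contents, interface, and method names are all hypothetical): the interception point only sees the raw request, here reduced to its action/URI template, so a pre-built map translates it to the target method name, and authorization runs before dispatch:

```java
import java.util.Map;

public class AuthInspectorSketch {
    // Built once at startup, e.g. by reflecting over the service contract's
    // operation metadata (the equivalent of [WebGet] URI templates).
    static final Map<String, String> actionToMethod = Map.of(
            "/{ConstituentNumber}/", "GetConstituent");

    interface Authoriser {
        boolean isAuthorised(String methodName, String authHeader);
    }

    // The equivalent of AfterReceiveRequest in a message inspector:
    // resolve the method name from the raw request, then gate on it.
    static String beforeDispatch(String action, String authHeader, Authoriser auth) {
        String method = actionToMethod.get(action);
        if (method == null || !auth.isAuthorised(method, authHeader)) {
            throw new SecurityException("not authorised for " + action);
        }
        return method; // dispatch proceeds to this method
    }
}
```

With this in place, the per-method IsAuthorised calls disappear from the operation bodies, and the authoriser can be mocked through the small interface, which addresses both problems from the question.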
I have an idea, but I need help implementing it.
WCF does not support delegates in its contracts.
Instead it has a cumbersome callback contracts mechanism, and I'm looking for a way to overcome this limitation.
I thought about using a IDataContractSurrogate to replace each delegate in the contract with a token that will be serialized to the remote endpoint. There, the token will be deserialized into a generated delegate. This generated delegate will send a generic callback message which encapsulates all the arguments (that the delegate was invoked with).
The generic callback message will reach the first endpoint, and there the original delegate would be invoked with the arguments.
Here is the proposed (simplified) sequence:
1. A calls B-proxy.Foo(callback)
2. callback is serialized through a DelegateSurrogate
3. The DelegateSurrogate stores the delegate in a dedicated delegate storage and replaces it with a token
4. The message arrives at B's endpoint
5. The token is deserialized through a DelegateSurrogate
6. The DelegateSurrogate constructs a generated delegate
7. B.Foo(generatedCallback) is invoked
8. Later, B invokes generatedCallback(args)
9. generatedCallback(args) calls a dedicated generic contract on A's endpoint: CallbackContract-proxy.GenericCallback(args)
10. CallbackContract.GenericCallback(args) is invoked on A's endpoint
11. The original callback is retrieved from the storage and invoked: callback(args)
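The delegate-storage part of the sequence (parking the delegate, shipping a token, and invoking via the token when the generic callback arrives) can be sketched without any WCF machinery. This is in Java rather than C#, all names are illustrative, and the surrogate/serialization wiring is deliberately left out:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Consumer;

// Delegates are parked in a registry and travel across the wire
// as opaque tokens.
public class DelegateTokenRegistry {
    private static final AtomicLong nextToken = new AtomicLong();
    private static final Map<Long, Consumer<Object>> storage = new ConcurrentHashMap<>();

    // Serialization side: swap the delegate for a token.
    static long store(Consumer<Object> callback) {
        long token = nextToken.incrementAndGet();
        storage.put(token, callback);
        return token;
    }

    // Callback side: the generic callback message carries the token and the
    // arguments; look the original delegate up and invoke it.
    static void invoke(long token, Object args) {
        Consumer<Object> callback = storage.remove(token);
        if (callback == null) {
            throw new IllegalStateException("unknown token " + token);
        }
        callback.accept(args);
    }
}
```

One design point this surfaces: the registry owns the delegate's lifetime, so you must decide whether a token is one-shot (removed on invoke, as here) or reusable, and how stale tokens are evicted if the remote side never calls back.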
I have already implemented this previously using a service bus (NServiceBus), but I want to adapt the idea to WCF and I'm having a hard time. I know how to implement steps 3, 6, 9 and 11. I don't yet know how to wire everything up in WCF - especially the surrogate part.
That's it - I hope my question made sense, and that the collective wisdom here will be able to help me build this up.
Here's a sample usage for my desired solution:
// client side
remoteSvc.GetEmployeeById(17, emp =>
{
employees.Add(emp);
logger.log("Result received");
});
// server side
public void GetEmployeeById(int id, Action<Employee> callback)
{
var emp = getEmpFromDb(id);
callback(emp);
}
Actually, in this scenario I would look into the Expression API. Unlike a delegate, an Expression can be deconstructed at runtime. You can't serialize them by default, but a lot of work has been done in that space. It is also a bit like what a lot of LINQ providers do in the background, for example WCF Data Services.
Of course, another approach is simply to use a lambda expression as the hook for RPC, which is what I describe here. The code that implements this is freely available in the protobuf-net tree. You could customize this by using an attribute to associate your token with the method, and obtain the attribute from the MethodInfo.
IMO, the problem with delegates is that they are too tightly coupled to the implementation, so you can't have different implementations at each end (which is a common requirement).
Expressions have the advantage that lambdas still support intellisense etc, so you can do things like:
client.Invoke(svc => svc.Foo(123, "abc"));
and from that obtain Foo (the MethodInfo), 123 and "abc" separately, including captured variables, ref/out, etc. It all works.