Spring Webflux HttpResponse - spring-webflux

import org.springframework.web.reactive.function.server.ServerRequest;
import org.springframework.web.reactive.function.server.ServerResponse;
import reactor.core.publisher.Mono;

class Test {
    public Mono<ServerResponse> test(ServerRequest req) {
        Mono<String> data = Mono.just("test");
        System.out.print(data); // prints the Mono itself, e.g. "MonoJust"
        return ServerResponse.ok().body(data, String.class);
    }
}
When the client makes a request, "MonoJust" is printed by the System.out.print call, but "test" is returned in the HTTP response body.
I know that a publisher doesn't produce data before a subscription, so why does the HTTP response contain "test" and not "MonoJust"?

This behaviour might look a bit odd because you've just used a Mono to wrap an actual value - but that is not what Reactor (and reactive frameworks in general) is designed for.
Remember that a Mono is a publisher which may or may not emit an element in the future, not just a wrapper for a given value. When you return ServerResponse.ok().body(), you're explicitly stating that you want the body to contain the result emitted by the publisher that you're passing in - the method then returns another publisher, Mono<ServerResponse>, that publishes the required server response when your data publisher emits a value.
System.out.print, on the other hand, implicitly calls the toString() method on the Mono, which needs to produce a value now, without blocking or waiting (it returns a String after all, not a Mono<String>). It can't print the value, since the value won't necessarily be there yet, so instead it just prints the name of its class (MonoJust in this case refers to the fact that you've instantiated the Mono with Mono.just()).
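To see this in isolation, here's a minimal, self-contained sketch using plain Reactor (no WebFlux involved): printing the Mono only shows its toString(), while subscribing - which is essentially what the framework does when it writes the response - actually triggers the value to be emitted.
import reactor.core.publisher.Mono;

public class MonoToStringDemo {
    public static void main(String[] args) {
        Mono<String> data = Mono.just("test");

        // No subscription here: this only calls toString() on the publisher, printing e.g. "MonoJust".
        System.out.println(data);

        // Subscribing is what makes the publisher emit; this prints "test".
        data.subscribe(System.out::println);
    }
}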

template method design pattern - how to save and feed information into next step

I have an interface RequestProcessorInterface. There are different scenarios for processing a JSON request, e.g. async vs. synchronous requests. I am trying to use the template method pattern, where the steps are validate, preProcess, saveInDatabase, postProcess, etc.:
ProcessingContext processingContext = validate(storeContentRequestContainer);
processingContext = preProcess(storeContentRequestContainer, processingContext);
saveInDatabase(storeContentRequestContainer, processingContext);
return postProcess(storeContentRequestContainer, processingContext);
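To make the flow concrete, the template method driving these steps looks roughly like this (a sketch; apart from RequestProcessorInterface and ProcessingContext, the names are illustrative):
// Sketch of the template method that fixes the order of the steps (illustrative names).
public abstract class AbstractRequestProcessor implements RequestProcessorInterface {

    public final ProcessingContext process(StoreContentRequestContainer storeContentRequestContainer) {
        ProcessingContext processingContext = validate(storeContentRequestContainer);
        processingContext = preProcess(storeContentRequestContainer, processingContext);
        saveInDatabase(storeContentRequestContainer, processingContext);
        return postProcess(storeContentRequestContainer, processingContext);
    }

    protected abstract ProcessingContext validate(StoreContentRequestContainer request);
    protected abstract ProcessingContext preProcess(StoreContentRequestContainer request, ProcessingContext context);
    protected abstract void saveInDatabase(StoreContentRequestContainer request, ProcessingContext context);
    protected abstract ProcessingContext postProcess(StoreContentRequestContainer request, ProcessingContext context);
}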
My ProcessingContext class has these attributes:
Map<String, String> errors;
String contentId;
String correlationId;
String presignedUrl;
String objectKey;
String bucketLocation;
DbEntity dbEntity; // Entity for database storage
When the JSON request is parsed, I assign values to the 'processingContext' object. To keep the system flexible and not have to worry about which step might need the parsed information, I am encapsulating the extracted information in the context object.
Also, I am passing the context object to every step so that, in future, every step has this information readily available. I was going in the direction of having most of the steps read the context, update some attribute and return the modified context, so that subsequent steps have access to attributes populated earlier.
I have a feeling that accepting a context object (which is mutable), modifying it and returning it is not a good idea. This context object is only going to exist in the method scope of a singleton class (Spring Boot). It will not be something lingering around forever, and that should make things simpler.
How do I achieve this flexibility of multiple steps being able to augment/update information in a context object? Would it make sense to make this context object immutable?
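For reference, a step currently looks roughly like this (a sketch; the helper methods are placeholders):
// Sketch of a typical step: it reads the context, augments a couple of attributes
// and returns the (mutated) instance for the next step. The helper methods are placeholders.
protected ProcessingContext preProcess(StoreContentRequestContainer request, ProcessingContext context) {
    context.setObjectKey(buildObjectKey(request));                       // placeholder helper
    context.setPresignedUrl(createPresignedUrl(context.getObjectKey())); // placeholder helper
    return context;
}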

How can I split a GroupedExchangeAggregationStrategy aggregate exchange into the original exchanges?

After aggregating exchanges using a GroupedExchangeAggregationStrategy, I need to split them back apart into the original exchanges (to emit individual processing-time metrics).
I tried splitting with the following, but the resulting split exchange wraps the original exchange and puts it in the Message body.
Is it possible to split a GroupedExchangeAggregationStrategy aggregate exchange into the original exchanges without the wrapper exchange? I need to use the original exchange properties and would like to do so with a SpEL expression.
.aggregate(constant(true), myGroupedExchangeAggregationStrategy)
.completionInterval(1000)
.completeAllOnStop()
.process { /* do stuff */ }
.split(exchangeProperty(Exchange.GROUPED_EXCHANGE))
.to(/* micrometer timer metric using SpEL expression */)
// ^- the resulting split exchange is wrapped in another exchange
In the event that this isn't currently supported, I'm trying to figure out the best way to implement this behavior on my own without creating a custom Splitter processor for this single feature. I was hoping to somehow override the SplitterIterable that does the wrapping but it doesn't appear to be possible.
Yeah, the GroupedExchangeAggregationStrategy does nothing more than create a java.util.List of all Exchanges. The Splitter EIP, on the other hand, by default splits a List into its elements and puts each element into the message body. Therefore you end up with an Exchange that contains an Exchange in its body.
What you need is an AggregationStrategy that collects all body Objects in a List instead of all Exchanges.
You could try to use Camel's FlexibleAggregationStrategy, which is configurable through a fluent API.
new FlexibleAggregationStrategy()
    .storeInBody()
    .accumulateInCollection(ArrayList.class)
    .pick(new SimpleExpression("${body}"));
This should create an AggregationStrategy that extracts the body of every message (you can perhaps omit the pick method since body extraction is the pick default), collects them in a List and stores the aggregation in the message body.
To split this aggregate again, a simple split(body()) should be enough.
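Put together, the route could look roughly like this (a sketch assuming Camel 2.x package names and placeholder endpoints):
import java.util.ArrayList;

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.util.toolbox.FlexibleAggregationStrategy;

public class BodyAggregationRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:start")                                   // placeholder endpoint
            .aggregate(constant(true), new FlexibleAggregationStrategy<Object>()
                .storeInBody()
                .accumulateInCollection(ArrayList.class))
            .completionInterval(1000)
            .process(exchange -> { /* do stuff with the aggregated List in the body */ })
            .split(body())                                     // each list element becomes a new message body
            .to("mock:metrics");                               // placeholder endpoint
    }
}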
EDIT due to comment
Yes, you are right: a side effect of my solution is that you lose the properties and headers of the original messages, because it only aggregates the message bodies.
What you want to do is split the List of Exchanges back into the originals, i.e. the Splitter must not create new Exchanges but reuse the ones already present, throwing away the aggregated wrapper Exchange.
As far as I can see in the source code of the Splitter, this is currently not possible:
Exchange newExchange = ExchangeHelper.createCorrelatedCopy(copy, false);
...
if (part instanceof Message) {
    newExchange.setIn((Message) part);
} else {
    Message in = newExchange.getIn();
    in.setBody(part);
}
Per the accepted answer, it doesn't appear to be natively supported.
This custom processor will unwrap a split exchange (i.e. copying the nested exchange Message and properties to the root exchange). The unwrapped exchange will be nearly identical to the original -- it will retain all non-conflicting properties from the root exchange (e.g. Splitter-related properties like split index, etc.)
class ExchangeUnwrapper : Processor {
    override fun process(exchange: Exchange) {
        val wrappedExchange = exchange.`in`.body as Exchange
        ExchangeHelper.copyResultsPreservePattern(exchange, wrappedExchange)
    }
}
// Route.kt
from(...)
    .aggregate(...)
    .process { /* do things with aggregate */ }
    .split(exchangeProperty(Exchange.GROUPED_EXCHANGE))
    .process(ExchangeUnwrapper())
    .process { /* do something with the original exchange */ }
    .end()

WCF Service Contract

I have a problem using an custom data type in a WCF service method, below is my sample code
[ServiceContract()]
public class SampleServise : ISampleServise
{
    public void GetSomething(ICustomData objectData)
    {
        // Do Something
    }
}
What shall I do with the ICustomData interface?
Thanks
Afshin
WCF is based on message passing, and that message passing is modelled using XML schema (XSD). As such, whatever can be expressed in XML schema can be used in WCF.
This also means: interfaces are not supported. You need to use actual, concrete types for the parameters in your WCF service methods.
In your case, create a concrete class that implements ICustomData and then use that class as the parameter type.
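For example, something along these lines (a sketch; the concrete class and its member are placeholders):
using System.Runtime.Serialization;
using System.ServiceModel;

// A concrete, serializable implementation of the interface.
[DataContract]
public class CustomData : ICustomData
{
    [DataMember]
    public string Name { get; set; }   // placeholder member
}

[ServiceContract]
public interface ISampleServise
{
    [OperationContract]
    void GetSomething(CustomData objectData);   // concrete type instead of ICustomData
}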
For a good reference, read MSDN Designing Service Contracts which states for parameters:
Parameters and Return Values
Each operation has a return value and a parameter, even if these are void. However, unlike a local method, in which you can pass references to objects from one object to another, service operations do not pass references to objects. Instead, they pass copies of the objects.
This is significant because each type used in a parameter or return value must be serializable; that is, it must be possible to convert an object of that type into a stream of bytes and from a stream of bytes into an object.

Handling WCF Faults

I am working on a client that is consuming a WCF service. In various cases, the service simply raises a FaultException with an associated message informing of the reason behind the given fault.
Some of these faults are things that can be handled by our client application, but I'm hesitant to simply try to perform some string matching on the FaultException's Message or Reason to determine whether it is something we can cater for or not.
I was hoping that the FaultCode on FaultException could be used to identify a specific type of Fault that we could handle, but it seems that this is purely to identify a handful of SOAP faults. Please correct me if my interpretation of this is incorrect.
I know that a typed FaultException could be raised, but I feel it is unrealistic to expect that a new type should be created for each reason behind a fault.
How do you handle this situation? As a contrived example, consider a service that provides the following methods:
RecordDetails GetRecordById(string id)
void Add(RecordDetails record)
void Update(RecordDetailsUpdateRequest rdur)
Now in the example above, if you call GetRecordById with an id that doesn't exist, you receive a FaultException with a Message stating "Record cannot be found". Similarly, if you call Add for a record that already exists, or Update for a record that doesn't exist, you simply get a FaultException with a Message/Reason detailing the reason for failure. I need to know if a record exists or not to determine whether I should update or insert. As I mentioned, I'm hesitant to simply match on strings as I have no control over whether they will remain the same or not.
What would you expect in this situation: a type associated with the FaultException (RecordNotFoundException etc.), or some generic type associated with the FaultException that defines specific details relating to the error? For example, a RecordOperationException class with members Code (a constant or enum identifying the reason for failure) and a user-friendly message.
At least this way, I could identify the error cause without having to resort to string matching.
Your thoughts are appreciated.
I would go with what you said above - a type associated with the FaultException. You can create any number of classes represented as a DataContract to handle various faults, and then assign them to the WCF Service operations.
[DataContract]
public class RecordOperationException
{
    private int code;
    private string message;

    [DataMember]
    public int Code
    {
        get { return code; }
        set { code = value; }
    }

    [DataMember]
    public string Message
    {
        get { return message; }
        set { message = value; }
    }
}
Then you can assign this class as the fault detail via a FaultContract on each operation:
[OperationContract]
[FaultContract(typeof(RecordOperationException))]
RecordDetails GetRecordById(string id)
[OperationContract]
[FaultContract(typeof(RecordOperationException))]
void Add(RecordDetails record)
[OperationContract]
[FaultContract(typeof(RecordOperationException))]
void Update(RecordDetailsUpdateRequest rdur)
You can then throw the appropriate FaultException in your methods, as desired.
This will eliminate the need to compare strings (which is a good idea, IMO).
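For example, GetRecordById could then do something along these lines (a sketch; the repository and the code value are placeholders):
public RecordDetails GetRecordById(string id)
{
    RecordDetails record = repository.FindById(id);   // placeholder data access
    if (record == null)
    {
        var detail = new RecordOperationException
        {
            Code = 1001,   // e.g. an agreed "record not found" code
            Message = "Record cannot be found"
        };
        // The client can catch FaultException<RecordOperationException> and switch on Code.
        throw new FaultException<RecordOperationException>(detail, new FaultReason(detail.Message));
    }
    return record;
}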
I always use FaultExceptions and advertise them as part of the OperationContract, as your code does.
However, I think that there is more to it than this.
We all know that separation of concerns is a good thing, and the way you can achieve this with your services is by creating classes that implement IErrorHandler.
These can then be used with your service class, and your error handling can be separated from your logic, making for a cleaner design. It also means that you don't have to repeat identical error-handling blocks all over your code.
This can be used with the generic FaultException as well.
A good resource is: http://msdn.microsoft.com/en-us/library/system.servicemodel.dispatcher.ierrorhandler.aspx
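A minimal sketch of such an error handler (the fault mapping here is just an example; it still needs to be registered on the ChannelDispatcher via a service behavior):
using System;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

public class GlobalErrorHandler : IErrorHandler
{
    // Hook for logging; returning true means the error is considered handled.
    public bool HandleError(Exception error)
    {
        // log the exception here
        return true;
    }

    // Translates unexpected exceptions into a fault message sent back to the client.
    public void ProvideFault(Exception error, MessageVersion version, ref Message fault)
    {
        if (error is FaultException)
        {
            return; // already a declared fault - let WCF serialize it as-is
        }
        var faultException = new FaultException("An unexpected error occurred.");
        MessageFault messageFault = faultException.CreateMessageFault();
        fault = Message.CreateMessage(version, messageFault, faultException.Action);
    }
}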

best practice for return value in WCF services

I have a WCF service sitting in the cloud.
And my application makes several calls to this WCF service.
Is it best practice:
1] to always use the return value as a bool which indicates whether the operation was successful or not, and
2] to return the values you actually meant to return as OUT parameters?
I would:
return an atomic value (bool, string, int) if appropriate
return a complex type (a class instance) if I need to return more than one value - make sure to mark that class with [DataContract] and its properties with [DataMember] (see the sketch after this list)
return a SOAP fault FaultException<T> when an error occurs; the <T> part allows you to define your own custom error classes - again, don't forget to mark them with [DataContract] / [DataMember] and declare them as a FaultContract on your operations
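As a sketch of the second and third points (all names here are placeholders):
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class OrderResult            // complex return type
{
    [DataMember]
    public int OrderId { get; set; }

    [DataMember]
    public string Status { get; set; }
}

[DataContract]
public class OrderFault             // custom error details for the fault contract
{
    [DataMember]
    public string Reason { get; set; }
}

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    [FaultContract(typeof(OrderFault))]
    OrderResult PlaceOrder(string productCode);
}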
1] to always use return value as bool which indicates if the operation was sucessful or not
Yes, if the operation isn't time consuming AND the return status is always relevant:
Waiting on a return value can affect both client and service host (server) performance/scalability. E.g. in a Request-Response call, requests can keep connections open for a long period of time waiting for operation completion. You could implement it in a way similar to how the "HTTP 202 Accepted" status code is used (i.e. the operation has received its arguments and has started (partially), but the call does not wait for completion).
No, if the operation logic only makes sense when synchronous.
No, if you are keen on refactorability/maintainability, e.g. when you want to include an error message/code in the return value.
2] returning the values you meant to return as the OUT parameters
Yes, this makes the service operation more WSDL-compliant and easier to read.
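As an illustration (placeholder names): out parameters are declared on the operation just like in plain C#, and WCF maps them to additional elements of the response message.
using System.ServiceModel;

[ServiceContract]
public interface ICustomerService
{
    // The bool return signals success; the actual data comes back via the out parameters.
    [OperationContract]
    bool TryGetCustomer(int customerId, out string name, out string email);
}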