How to log the MessageBodyReader used for a Web Service Call in Apache TomEE+ - jax-rs

I own a few web services in a large code base. I see several custom MessageBodyReaders attached to my application.
I would like to know which one is used for a web service call.
I've looked around, but I can't seem to find a way to log that.

Your question is somewhat broad and therefore hard to answer, but I'll try anyway.
You have multiple services with multiple MessageBodyReader<>s, possibly distributed across several applications. While CXF itself supports tracing and message logging, there is no switch you can simply turn on to see which MessageBodyReader implementations report themselves as readable and which do not.
Approach 1 - Adding logging to the implementing classes
One might think this is a good idea, and at first sight it is.
Adding log output to every isReadable(..) method of every implementing class would solve your issue, but it is not practical: it involves far too much manual logging and too many code changes.
Approach 2 - AOP to the rescue!
Aspect Oriented Programming specifically addresses issues like this: you want to take an action (logging) across several classes and even log what happened in those classes. These requirements are aspects, as they don't add functionality to your classes but to your program, from a technical point of view.
It basically involves adding proxies around the isReadable(..) method of every MessageBodyReader implementation and logging whether it returned true or false.
Let's start
Assume a simple MessageBodyReader<InputStream> like the example I took from the CXF online documentation:
@Consumes("application/octet-stream")
@Provider
public class MyReader implements MessageBodyReader<InputStream> {

    @Override
    public boolean isReadable(Class<?> aClass, Type type, Annotation[] annotations, MediaType mediaType) {
        return type == MyEntity.class;
    }

    @Override
    public InputStream readFrom(Class<InputStream> clazz, Type t, Annotation[] a, MediaType mt,
                                MultivaluedMap<String, String> headers, InputStream is) throws IOException {
        return new FilterInputStream(is) {
            @Override
            public int read(byte[] b) throws IOException {
                return is.read(b);
            }
        };
    }
}
To reach this class and every other implementation, we create an Advice with the following Pointcut:
execution(public boolean javax.ws.rs.ext.MessageBodyReader+.isReadable(..))
For the sake of simplicity we will just log what was returned. Using Spring AOP this is ridiculously easy: we just use the @AfterReturning annotation:
@Slf4j
@Component
@Aspect
public class MyReaderLoggerAspect {

    @AfterReturning(
            value = "execution(public boolean javax.ws.rs.ext.MessageBodyReader+.isReadable(..))",
            argNames = "joinPoint,called",
            returning = "called")
    public void logReaderName(JoinPoint joinPoint, boolean called) {
        log.info(String.format("MessageBodyReader %s executed: %s",
                joinPoint.getTarget().getClass().getName(), called));
    }
}
And our logging output looks like:
2020-07-06 17:56:04.305 INFO 9204 --- [ restartedMain] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2020-07-06 17:56:04.313 INFO 9204 --- [ restartedMain] com.example.demo.DemoApplication : Started DemoApplication in 2.424 seconds (JVM running for 3.676)
2020-07-06 18:02:03.480 INFO 9204 --- [ scheduling-1] c.e.demo.test.MyReaderLoggerAspect : MessageBodyReader com.example.demo.test.MyReader executed: false
2020-07-06 18:03:44.191 INFO 9204 --- [ scheduling-1] c.e.demo.test.MyReaderLoggerAspect : MessageBodyReader com.example.demo.test.MyReader executed: true
2020-07-06 18:03:44.192 INFO 9204 --- [ scheduling-1] c.e.demo.test.MyReaderLoggerAspect : MessageBodyReader com.example.demo.test.MyOtherReader executed: false
This is an easy approach for logging possible candidates for MessageBodyReaders and other automagically invoked implementation classes. You can even log method parameters, change the invocation, and do much more.
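For illustration, here is a sketch of an @Around advice that also logs the probed media type. It assumes the same Spring AOP setup as above; the class and method names are made up:

@Slf4j
@Component
@Aspect
public class MyReaderParameterLoggerAspect {

    // Sketch: log the MediaType argument before delegating to the real isReadable(..)
    @Around("execution(public boolean javax.ws.rs.ext.MessageBodyReader+.isReadable(..))")
    public Object logReaderParameters(ProceedingJoinPoint joinPoint) throws Throwable {
        // isReadable(Class<?>, Type, Annotation[], MediaType) -- MediaType is the fourth argument
        Object mediaType = joinPoint.getArgs()[3];
        log.info("{} probed for media type {}", joinPoint.getTarget().getClass().getName(), mediaType);
        return joinPoint.proceed();
    }
}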
What's next?
You may of course want to add more logging to your Advices, because the class name and a simple true or false won't meet real-world needs, but this is a good starting point for adding manual tracing to existing software architectures.
Speaking of tracing - that's only a few steps from using common solutions like OpenTracing in your services. While it generally focuses on identifying bottlenecks and issues in long chains of service calls, it would also let you answer questions like:
Why is this request taking so long?
What filter is involved in that call chain? <-- You are here
Where is this message processor involved?
Who would be affected by a major change in my service?
And so on

Related

How to convert Flux<Item> to List<Item> by blocking

Background
I have a legacy application where I need to return a List<Item>.
There are many different Service classes, each belonging to an ItemType.
Each service class calls a few different backend APIs and collects the responses to create a SubType of the Item.
So we can say each service class implementation returns an Item.
All backend API access code uses WebClient, which returns a Mono of some type, and I can zip all the Monos within the service to create an Item.
The user should be able to look up many different types of items in one call, which requires many backend calls.
So for performance's sake, I wanted to make this all asynchronous using Reactor, so I introduced Spring Reactive code.
Problem
If my endpoint returned Flux<Item>, this code would work fine, but this is service code used by other legacy callers.
So eventually I want to return a List<Item>, but when I try to convert my Flux into a List I get this error:
"message": "block()/blockFirst()/blockLast() are blocking,
which is not supported in thread reactor-http-nio-3",
Here is the service, which is calling a few other service classes.
Flux<Item> itemFlux = Flux.fromIterable(searchRequestByItemType.entrySet())
        .flatMap(e -> getService(e.getKey()).searchItems(e.getValue()))
        .subscribeOn(Schedulers.boundedElastic());

List<Item> itemList = itemFlux
        .collectList()
        .block(); // This line throws the error
Here is what the above service is calling
default Flux<Item> searchItems(List<SingleItemSearchRequest> requests) {
    return Flux.fromIterable(requests)
            .flatMap(this::searchItem)
            .subscribeOn(Schedulers.boundedElastic());
}
Here is the single-item search used by the above:
public Mono<Item> searchItem(SingleItemSearchRequest sisr) {
    return Mono.zip(backendApi.getItemANameApi(sisr.getItemIdentifiers().getItemId()),
                    sisr.isAddXXXDetails()
                            ? backendApi.getItemAXXXApi(sisr.getItemIdentifiers().getItemId())
                            : Mono.empty(),
                    sisr.isAddYYYDetails()
                            ? backendApi.getItemAYYYApi(sisr.getItemIdentifiers().getItemId())
                            : Mono.empty())
            .map(tuple3 -> Item.builder()
                    .name(tuple3.getT1())
                    .xxxDetails(tuple3.getT2())
                    .yyyDetails(tuple3.getT3())
                    .build());
}
Sample project to replicate the problem:
https://github.com/mps-learning/spring-reactive-example
I’m new to spring reactor, feel free to pinpoint ALL errors in the code.
UPDATE
As per Patrick Hooijer's bonus suggestion, updating the Mono.zip entries to always contain a default:
@Override
public Mono<Item> searchItem(SingleItemSearchRequest sisr) {
    System.out.println("\t\tInside " + supportedItem() + " searchItem with thread " + Thread.currentThread().toString());
    // TODO: how to make these XXX YYY calls conditional in a clear way?
    return Mono.zip(getNameDetails(sisr).defaultIfEmpty("Default Name"),
                    getXXXDetails(sisr).defaultIfEmpty("Default XXX Details"),
                    getYYYDetails(sisr).defaultIfEmpty("Default YYY Details"))
            .map(tuple3 -> Item.builder()
                    .name(tuple3.getT1())
                    .xxxDetails(tuple3.getT2())
                    .yyyDetails(tuple3.getT3())
                    .build());
}
private Mono<String> getNameDetails(SingleItemSearchRequest sisr) {
    return mockBackendApi.getItemCNameApi(sisr.getItemIdentifiers().getItemId());
}

private Mono<String> getYYYDetails(SingleItemSearchRequest sisr) {
    return sisr.isAddYYYDetails()
            ? mockBackendApi.getItemCYYYApi(sisr.getItemIdentifiers().getItemId())
            : Mono.empty();
}

private Mono<String> getXXXDetails(SingleItemSearchRequest sisr) {
    return sisr.isAddXXXDetails()
            ? mockBackendApi.getItemCXXXApi(sisr.getItemIdentifiers().getItemId())
            : Mono.empty();
}
Edit: The answer below does not solve the issue, but it contains useful information about thread switching. It does not work because .block() is not a problem for non-blocking Schedulers if it's used to switch to synchronous code.
This is because the block operator inherited the reactor-http-nio-3 Thread from backendApi.getItemANameApi (or one of the other calls in Mono.zip), which is non-blocking.
Most operators continue working on the Thread on which the previous operator executed, because the Thread is linked to the emitted item. There are two groups of operators where the Thread of the output item differs from the input:
flatMap, concatMap, zip, etc: Operators that emit items from other Publishers will keep the Thread link they received from this inner Publisher, not from the input.
Time based operators like delayElements, interval, buffer(Duration), etc. will schedule their tasks on the provided Scheduler, or Schedulers.parallel() if none provided. The emitted items will then be linked to the Thread the task was scheduled on.
In your case, Mono.zip emits items from backendApi.getItemANameApi linked to reactor-http-nio-3, and this gets propagated downstream, out of the flatMap in searchItems and the one in itemFlux, until it reaches your block operator.
You can solve this by placing a .publishOn(Schedulers.boundedElastic()), either in searchItem, searchItems or itemFlux. This will cause the item to switch to a Thread in the provided Scheduler.
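A sketch of the itemFlux variant, reusing the names from the question:

// Sketch: publishOn switches downstream processing off the Netty event loop,
// so the later block() no longer runs on a reactor-http-nio thread.
Flux<Item> itemFlux = Flux.fromIterable(searchRequestByItemType.entrySet())
        .flatMap(e -> getService(e.getKey()).searchItems(e.getValue()))
        .publishOn(Schedulers.boundedElastic());

List<Item> items = itemFlux
        .collectList()
        .block();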
Bonus: since you asked for errors to be pinpointed: your Mono.zip will not work if sisr.isAddXXXDetails() is false, as Mono.zip discards any element it could not zip. Since you return Mono.empty() in that case, no items can be zipped and it will return an empty Mono.
If we have only spring-boot-starter-webflux defined as an application dependency, then Spring Boot spins up a Netty server.
One is not expected to block() in a reactive application using a non-blocking server.
However, once we add the spring-boot-starter-web dependency, then even with spring-boot-starter-webflux present, Spring Boot spins up a Tomcat server, which uses a thread-per-request model and is expected to have blocking calls.
So to solve my problem, all I had to do was add the spring-boot-starter-web dependency in pom.xml. After that the application starts on Tomcat, and with Tomcat .collectList().block() works in the Controller class to return the List<Item>.
With the Netty server I could return only Flux<Item>, not List<Item>, which is expected.
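For reference, a minimal sketch of the relevant pom.xml dependencies (versions are managed by the Spring Boot parent):

<!-- Tomcat (thread-per-request) comes in via the web starter and tolerates block() -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-webflux</artifactId>
</dependency>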

Apache Beam : RabbitMqIO watermark doesn't advance

I need some help please. I'm trying to use Apache Beam with the RabbitMqIO source (version 2.11.0) and the AfterWatermark.pastEndOfWindow trigger. It seems that RabbitMqIO's watermark doesn't advance and remains the same; because of this behavior, the AfterWatermark trigger doesn't fire. When I use other triggers which don't take the watermark into consideration, it works (e.g. AfterProcessingTime, AfterPane). My code is below, thanks:
public class Main {
    private static final Logger LOGGER = LoggerFactory.getLogger(Main.class);

    // Window declaration with trigger
    public static Window<RabbitMqMessage> window() {
        return Window.<RabbitMqMessage>into(FixedWindows.of(Duration.standardSeconds(60)))
                .triggering(AfterWatermark.pastEndOfWindow())
                .withAllowedLateness(Duration.ZERO)
                .accumulatingFiredPanes();
    }

    public static void main(String[] args) {
        SpringApplication.run(Main.class, args);
        // Pipeline creation
        PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
        Pipeline pipeline = Pipeline.create(options);
        // Using RabbitMqIO
        PCollection<RabbitMqMessage> messages = pipeline
                .apply(RabbitMqIO.read().withUri("amqp://guest:guest@localhost:5672").withQueue("test"));
        PCollection<RabbitMqMessage> windowedData = messages.apply("Windowing", window());
        windowedData.apply(Combine.globally(new MyCombine()).withoutDefaults());
        pipeline.run();
    }
}

class MyCombine implements SerializableFunction<Iterable<RabbitMqMessage>, RabbitMqMessage> {
    private static final Logger LOGGER = LoggerFactory.getLogger(MyCombine.class);
    private static final long serialVersionUID = 6143898367853230506L;

    @Override
    public RabbitMqMessage apply(Iterable<RabbitMqMessage> input) {
        LOGGER.info("After trigger launched");
        return null;
    }
}
I spent a lot of time looking into this. After opening https://issues.apache.org/jira/browse/BEAM-8347, I left some notes in the ticket on what I think the problems are with the current implementation.
Restated here:
The documentation for UnboundedSource.getWatermark reads:
[watermark] can be approximate. If records are read that violate this guarantee, they will be considered late, which will affect how they will be processed. ...
However, this value should be as late as possible. Downstream windows may not be able to close until this watermark passes their end.
For example, a source may know that the records it reads will be in timestamp order. In this case, the watermark can be the timestamp of the last record read. For a source that does not have natural timestamps, timestamps can be set to the time of reading, in which case the watermark is the current clock time.
The implementation in UnboundedRabbitMqReader uses the oldest timestamp as the watermark, in violation of the above suggestion.
Further, the timestamp applied is the delivery time, which should be monotonically increasing. We should reliably be able to advance the watermark on every message delivered, which mostly solves the issue.
Finally, we can make provisions for advancing the watermark even when no messages have come in. When there are no new messages, it should be OK to advance the watermark following the approach taken in the Kafka IO TimestampPolicyFactory when the stream is 'caught up'. In this case, we would increment the watermark to, e.g., max(current watermark, NOW - 2 seconds) when we see no new messages, just to ensure windows/triggers can fire without requiring new data.
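A sketch of that idea as a standalone helper (a hypothetical class, since the real UnboundedRabbitMqReader is package-private):

import org.joda.time.Duration;
import org.joda.time.Instant;

// Illustrative only: advance the watermark on every delivery, and creep it
// forward to NOW - 2s when the stream is idle so windows can still fire.
class IdleAdvancingWatermark {
    private Instant watermark = new Instant(Long.MIN_VALUE);

    Instant update(Instant lastDeliveryTimestamp) {
        if (lastDeliveryTimestamp != null && lastDeliveryTimestamp.isAfter(watermark)) {
            // delivery timestamps are (near) monotonic
            watermark = lastDeliveryTimestamp;
        } else if (lastDeliveryTimestamp == null) {
            Instant idle = Instant.now().minus(Duration.standardSeconds(2));
            if (idle.isAfter(watermark)) {
                watermark = idle;
            }
        }
        return watermark;
    }
}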
Unfortunately, it's difficult to make these slight modifications locally, as the Rabbit implementations are closed to extension and are mostly private or package-private.
Update: I've opened a PR upstream to address this. Changes here: https://github.com/apache/beam/pull/9820

Activiti BPMN - How to pass username in variables/expression who have completed task?

I am very new to Activiti BPMN. I am creating a flow diagram in Activiti, and I'm looking for a way to pass the username of the user who completed a task into the shell task arguments, so that I can fetch it and save in the DB which user completed that task.
Any Help would be highly appreciated.
Thanks in advance...
Here's something I prepared for Java developers, based on a blog post I think I saw.
Edit: https://community.alfresco.com/thread/224336-result-variable-in-javadelegate
RESULT VARIABLE
Option (1) – use expression language (EL) in the XML
<serviceTask id="serviceTask"
             activiti:expression="#{myService.toUpperCase(myVar)}"
             activiti:resultVariable="myVar" />
Java
public class MyService {
    public String toUpperCase(String val) {
        return val.toUpperCase();
    }
}
The returned String is assigned to activiti:resultVariable
HACKING THE DATA MODEL DIRECTLY
Option (2) – use the execution environment
Java
public class MyService implements JavaDelegate {
    public void execute(DelegateExecution execution) throws Exception {
        String myVar = (String) execution.getVariable("myVar");
        execution.setVariable("myVar", myVar.toUpperCase());
    }
}
By contrast, here we are passed an 'execution', and we pull values out of it, twiddle them, and put them back.
This is somewhat analogous to a Servlet taking values from the HttpServletRequest and then, based on them, doing different things in the response. (A stronger analogy would be a servlet Filter.)
So in your particular instance (depending on how you are invoking the shell script), using the Expression Language (EL) is probably simplest and easiest.
Of course, the value you want to pass has to be one that the process knows about (otherwise how could it pass a value it doesn't have a variable for?).
Hope that helps. :D
Usually in BPM engines you have a way to hook a listener into these kinds of events. In Activiti, if you are embedding it inside your service, you can add an extra EventListener and then record the taskCompleted events, which contain the currently logged-in user.
https://www.activiti.org/userguide/#eventDispatcher
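A minimal sketch of such a listener (the class name is made up; register it via processEngineConfiguration.setEventListeners(..) or the engine configuration XML):

import org.activiti.engine.delegate.event.ActivitiEntityEvent;
import org.activiti.engine.delegate.event.ActivitiEvent;
import org.activiti.engine.delegate.event.ActivitiEventListener;
import org.activiti.engine.delegate.event.ActivitiEventType;
import org.activiti.engine.task.Task;

public class TaskCompletionListener implements ActivitiEventListener {

    @Override
    public void onEvent(ActivitiEvent event) {
        if (event.getType() == ActivitiEventType.TASK_COMPLETED) {
            // the completed task entity carries the assignee who completed it
            Task task = (Task) ((ActivitiEntityEvent) event).getEntity();
            System.out.println("Task '" + task.getName() + "' completed by " + task.getAssignee());
            // replace the println with your own persistence code
        }
    }

    @Override
    public boolean isFailOnException() {
        return false; // don't fail the engine operation if our listener breaks
    }
}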
Hope this helps.
I have used an activiti:taskListener. From the Activiti app you need to configure the properties below:
1. I changed the properties in the task listener.
2. I used a JavaScript variable to hold the task.assignee value.

JAX-RS return a Map<String,String>

I want to return a Map<String,String> from a JAX-RS service (text/xml):
@GET
public Map<String, String> getMap() {
}
but I am getting the error below:
0000001e FlushResultHa E org.apache.wink.server.internal.handlers.FlushResultHandler handleResponse The system could not find a javax.ws.rs.ext.MessageBodyWriter or a DataSourceProvider class for the java.util.HashMap type and application/x-ms-application mediaType. Ensure that a javax.ws.rs.ext.MessageBodyWriter exists in the JAX-RS application for the type and media type specified.
[10:43:52:885 IST 07/02/12] 0000001e RequestProces I org.apache.wink.server.internal.RequestProcessor logException The following error occurred during the invocation of the handlers chain: WebApplicationException (500 - Internal Server Error) with message 'null' while processing GET request sent to http://localhost:9080/jaxrs_module/echo/upload/getSiteNames
The solution I chose is to wrap the Map and use the wrapper as the return type.
@XmlRootElement
public class JaxrsMapWrapper {
    private Map<String, String> map;

    public JaxrsMapWrapper() {
    }

    public void setMap(Map<String, String> map) {
        this.map = map;
    }

    public Map<String, String> getMap() {
        return map;
    }
}
and the method signature will look like this:
@GET
public JaxrsMapWrapper getMap()
Your problem is that the default serialization strategy (using JAXB) means that you can't serialize that map directly. There are two main ways to deal with this.
Write an XmlAdapter
There are a number of questions on this on SO, but the nicest explanation I've seen so far is on the CXF users mailing list from a few years ago. The one tricky bit (since you don't want an extra wrapper element) is that once you've got yourself a type adapter, you've got to install it using a package-level annotation (on the right package, which might take some effort to figure out). Those are relatively exotic.
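For illustration, a rough sketch of such an adapter (all names are made up; the package-level @XmlJavaTypeAdapters registration in package-info.java is omitted):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.xml.bind.annotation.adapters.XmlAdapter;

// JAXB can't marshal a Map directly, so adapt it to a bean it can handle.
public class MapAdapter extends XmlAdapter<MapAdapter.Entries, Map<String, String>> {

    public static class Entry {
        public String key;
        public String value;
    }

    public static class Entries {
        public List<Entry> entry = new ArrayList<Entry>();
    }

    @Override
    public Entries marshal(Map<String, String> map) {
        Entries entries = new Entries();
        for (Map.Entry<String, String> e : map.entrySet()) {
            Entry entry = new Entry();
            entry.key = e.getKey();
            entry.value = e.getValue();
            entries.entry.add(entry);
        }
        return entries;
    }

    @Override
    public Map<String, String> unmarshal(Entries entries) {
        Map<String, String> map = new HashMap<String, String>();
        for (Entry e : entries.entry) {
            map.put(e.key, e.value);
        }
        return map;
    }
}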
Write a custom MessageBodyWriter
It might well be easier to write your own code to do the serialization. To do this, you implement javax.ws.rs.ext.MessageBodyWriter and tag it with @Provider (assuming that you are using an engine that uses that to manage registration; not all do, for complex reasons that don't matter too much here). This will let you produce exactly the document you want from any arbitrary type, at the cost of more complexity when writing (but at least you won't be having complex JAXB problems). There are many ways to actually generate XML, with the choice between them depending on the data to be serialized.
Note that if you were streaming the data out rather than assembling everything in memory, you'd have to implement this interface.
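A rough sketch of such a writer (the class name and element names are arbitrary; real code must also XML-escape keys and values):

import java.io.IOException;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.lang.annotation.Annotation;
import java.lang.reflect.Type;
import java.util.Map;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.MultivaluedMap;
import javax.ws.rs.ext.MessageBodyWriter;
import javax.ws.rs.ext.Provider;

@Provider
@Produces(MediaType.TEXT_XML)
public class MapBodyWriter implements MessageBodyWriter<Map<String, String>> {

    @Override
    public boolean isWriteable(Class<?> type, Type genericType, Annotation[] annotations, MediaType mediaType) {
        return Map.class.isAssignableFrom(type);
    }

    @Override
    public long getSize(Map<String, String> map, Class<?> type, Type genericType,
                        Annotation[] annotations, MediaType mediaType) {
        return -1; // size is not known in advance
    }

    @Override
    public void writeTo(Map<String, String> map, Class<?> type, Type genericType,
                        Annotation[] annotations, MediaType mediaType,
                        MultivaluedMap<String, Object> httpHeaders, OutputStream out) throws IOException {
        Writer writer = new OutputStreamWriter(out, "UTF-8");
        writer.write("<map>");
        for (Map.Entry<String, String> e : map.entrySet()) {
            writer.write("<entry key=\"" + e.getKey() + "\">" + e.getValue() + "</entry>");
        }
        writer.write("</map>");
        writer.flush();
    }
}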
CXF 2.4.2 supports returning a Map from the API. I use jackson-jaxrs 1.9.6 for serialization.
#Path("participation")
#Consumes({"application/json"})
#Produces({"application/json"})
public interface SurveyParticipationApi {
#GET
#Path("appParameters")
Map<String,String> getAppParameters();
....
}
With CXF 2.7.x use
WebClient.postCollection(Object collection, Class<T> memberClass, Class<T> responseClass)
in your REST client code, like this:
(Map<String, Region>) client.postCollection(regionCodes, String.class, Map.class);
For other collections use WebClient.postAndGetCollection().

SessionFactory - one factory for multiple databases

We have a situation where we have multiple databases with identical schema, but different data in each. We're creating a single session factory to handle this.
The problem is that we don't know which database we'll connect to until runtime, when we can provide that. But on startup, to get the factory built, we need to connect to a database with that schema. We currently do this by creating the schema in a known location and using that, but we'd like to remove that requirement.
I haven't been able to find a way to create the session factory without specifying a connection. We don't expect to be able to use the OpenSession method with no parameters, and that's ok.
Any ideas?
Thanks
Andy
Either implement your own IConnectionProvider or pass your own connection to ISessionFactory.OpenSession(IDbConnection) (but read the method's comments about connection tracking)
The solution we came up with was to create a class which manages this for us. The class can use some information in the method call to do some routing logic to figure out where the database is, and then call OpenSession passing the connection string.
You could also use the great NuGet package from Brady Gaster for this. I made my own implementation from his NHQS package and it works very well.
You can find it here:
http://www.bradygaster.com/Tags/nhqs
Good luck!
Came across this and thought I'd add my solution for future readers. It is basically what Mauricio Scheffer has suggested, encapsulating the 'switching' of connection strings and providing a single point of management (I like this better than having to pass a connection into each session call; there is less to miss and get wrong).
I obtain the connection string during authentication of the client and set it on the context. Then, using the following IConnectionProvider implementation, that value is used as the connection string whenever a session is opened:
/// <summary>
/// Provides the ability to switch connection strings of an NHibernate session factory
/// (use the same factory for multiple, dynamically specified, database connections)
/// </summary>
public class DynamicDriverConnectionProvider : DriverConnectionProvider, IConnectionProvider
{
    protected override string ConnectionString
    {
        get
        {
            var cxnObj = IsWebContext ?
                HttpContext.Current.Items["RequestConnectionString"] :
                System.Runtime.Remoting.Messaging.CallContext.GetData("RequestConnectionString");

            if (cxnObj != null)
                return cxnObj.ToString();

            // catches app startup, when no request connection string has been set yet
            return base.ConnectionString;
        }
    }

    private static bool IsWebContext
    {
        get { return (HttpContext.Current != null); }
    }
}
Then wire it in during NHConfig:
var configuration = Fluently.Configure()
.Database(MsSqlConfiguration.MsSql2005
.Provider<DynamicDriverConnectionProvider>() //Like so