What is the difference between ProjectReactor.io vs Spring WebFlux?
I've read the documentation here: https://docs.spring.io/spring-framework/docs/current/reference/html/web-reactive.html and https://projectreactor.io/, and to me the two look very similar. I'd like to understand the key differences between them.
They are at different abstraction levels, so they can't really be compared as such.
Project Reactor is a general-purpose reactive library. Similarly to RxJava, it is based on the Reactive Streams specification. It is like Java 8's Stream and Optional, except that it supports asynchronous programming, has built-in error handling, supports backpressure, and provides a large number of operators (map, filter and many more).
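For a feel of the operator style, here's a minimal sketch using reactor-core (the pipeline itself is just an illustration, not taken from either doc):

```java
import reactor.core.publisher.Flux;

public class ReactorBasics {
    public static void main(String[] args) {
        Flux.range(1, 10)                       // publisher emitting 1..10
            .filter(n -> n % 2 == 0)            // keep even numbers
            .map(n -> n * n)                    // square them
            .onErrorResume(e -> Flux.empty())   // error handling is part of the pipeline
            .subscribe(System.out::println);    // nothing runs until something subscribes
    }
}
```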
Spring WebFlux is a framework for creating web services with reactive libraries. Its main goal is to ensure high scalability with low resource usage (i.e. a small number of threads). Under the hood it uses Project Reactor; however, you can also use it with RxJava (or any other Reactive Streams implementation), and it works well even with Kotlin coroutines.
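A minimal WebFlux endpoint looks something like this (assuming a Spring Boot app with spring-boot-starter-webflux; the path and class name are made up):

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Mono;

@RestController
public class GreetingController {

    // Returning a Mono instead of a plain String lets WebFlux complete the
    // request without tying up a thread while the value is being produced.
    @GetMapping("/greetings/{name}")
    public Mono<String> greet(@PathVariable String name) {
        return Mono.just("Hello, " + name);
    }
}
```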
I am new to reactive concepts and these terms all look similar to me. Am I correct in stating that Spring Reactor, Akka, and RxJava are similar? How are they different from gRPC? Can I use Project Reactor with gRPC and RSocket? This is getting overwhelming. How are these related? Any explanation would be very helpful.
Am I correct in stating that Spring Reactor, Akka, and RxJava are similar?
Yes, to a point. Reactor & RxJava are pretty much one and the same, conceptually (though I'm sure I'll attract some stick from at least someone for saying that...!)
Akka is a bit different - it's a more fully featured framework that uses reactive concepts. It's also written in Scala rather than Java, and while it works just fine in Java, it's a reasonably unusual choice in my experience.
How are they different from gRPC?
gRPC is a completely different beast - it's a framework meant for connecting different services together. You can build your services in any number of languages, Java included, and then choose whether to make blocking or non-blocking calls to those services from the framework you've constructed (the non-blocking calls could then be trivially interfaced with a reactive framework if that was required.) Conceptually it's much more similar to REST / SOAP than it is to Reactor or RxJava.
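To illustrate that last parenthetical, here's a rough sketch of adapting a gRPC async stub call into a Reactor Mono. GreeterGrpc, HelloRequest and HelloReply stand in for the classes protoc would generate from a hypothetical greeter.proto, so treat this as the shape of the bridge rather than a drop-in example:

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.stub.StreamObserver;
import reactor.core.publisher.Mono;

public class GrpcToReactorBridge {

    // GreeterGrpc, HelloRequest, HelloReply: hypothetical protoc-generated classes.
    static Mono<HelloReply> sayHello(GreeterGrpc.GreeterStub stub, String name) {
        HelloRequest request = HelloRequest.newBuilder().setName(name).build();
        // Adapt gRPC's callback-based async API to a Mono.
        return Mono.create(sink -> stub.sayHello(request, new StreamObserver<HelloReply>() {
            @Override public void onNext(HelloReply reply) { sink.success(reply); }
            @Override public void onError(Throwable t)     { sink.error(t); }
            @Override public void onCompleted()            { /* single reply already emitted */ }
        }));
    }

    public static void main(String[] args) {
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("localhost", 50051)   // made-up address
                .usePlaintext()
                .build();
        GreeterGrpc.GreeterStub stub = GreeterGrpc.newStub(channel);
        sayHello(stub, "world").subscribe(reply -> System.out.println(reply.getMessage()));
    }
}
```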
RSocket is different again - it's more a competitor to HTTP for reactive service intercommunication than anything else. It's a communications protocol (rather than a framework) that services can use to inter-communicate, and it's designed specifically to be efficient while supporting reactive semantics at the stream level. RSocket can for example manage flow control, passing backpressure messages between services, to try to ensure upstream reactive services don't overwhelm downstream services - not something you can do with HTTP (at least, not without adding another protocol on top.)
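From the client side, a request-stream interaction looks roughly like this with rsocket-java (assuming the rsocket-core and Netty TCP transport modules, and a made-up server on localhost:7000 that answers request-stream with text payloads):

```java
import io.rsocket.RSocket;
import io.rsocket.core.RSocketConnector;
import io.rsocket.transport.netty.client.TcpClientTransport;
import io.rsocket.util.DefaultPayload;
import reactor.core.publisher.Flux;

public class RSocketStreamExample {
    public static void main(String[] args) {
        // Connect over TCP; connectWith returns a Mono<RSocket>.
        RSocket rSocket = RSocketConnector
                .connectWith(TcpClientTransport.create("localhost", 7000))
                .block();

        // requestStream gives back a Flux<Payload>; the demand signalled by
        // take(10) is propagated to the server as protocol-level backpressure.
        Flux<String> responses = rSocket
                .requestStream(DefaultPayload.create("tick"))
                .map(payload -> payload.getDataUtf8())
                .take(10);

        responses.doOnNext(System.out::println).blockLast();
        rSocket.dispose();
    }
}
```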
Overall, if you're new to reactive generally and want to start somewhere (and keep in Java land), then my advice would be to stick with Reactor for the time being to avoid getting overwhelmed - it's probably the most used framework in that regard since it's built right into Spring. Once you're familiar with the fundamentals, other related components like RSocket will start to make a lot more sense.
gRPC is not reactive; it's basically HTTP/2 + Protobuf.
Moving from JSF to Wicket, I want to keep my habit of putting all JPA operations in an EJB facade and using the container's transaction management. I use and know wicket-cdi for injection, which works fine.
Unfortunately, if I inject an EJB into a Wicket page, Wicket's serialization checks complain that it is not serializable. This is true for EJBs, I suppose, since they are proxied.
I'm stuck at this point. How can I use JPA with container-managed transactions in Wicket? All the examples I googled either just read data or use Spring, which I do not want to use.
Thank You
Dieter
I asked the question again on the wicket-users mailing list, and it turned into an interesting thread with three solutions.
One of them is my idea of encapsulating the EJB in a LoadableDetachableModel and performing the load via a JNDI lookup of the bean. See http://mail-archives.apache.org/mod_mbox/wicket-users/201210.mbox/%3C5072F013.9040702%40tremel-computer.de%3E
I posted a slightly more generic solution on my blog, though it is only in German.
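For reference, the LoadableDetachableModel idea boils down to something like this sketch (CustomerFacade and the JNDI name are placeholders for your real facade bean):

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;
import org.apache.wicket.model.LoadableDetachableModel;

// Hypothetical business interface of the EJB facade.
interface CustomerFacade {
    // ... business methods ...
}

public class EjbFacadeModel extends LoadableDetachableModel<CustomerFacade> {

    @Override
    protected CustomerFacade load() {
        try {
            // Look the bean up again on demand instead of holding the
            // non-serializable proxy in the page.
            return (CustomerFacade) new InitialContext()
                    .lookup("java:global/myapp/CustomerFacade");
        } catch (NamingException e) {
            throw new RuntimeException("EJB lookup failed", e);
        }
    }
}
```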
We run multiple websites which use the same rich functional backend running as a library. The backend is composed of multiple components with a lot of objects shared between them. Now we need to separate a stateless rule-execution component into a different container for security reasons. It would be great if I could access all the backend objects seamlessly from the rules component (rather than defining a new interface and objects/adapters).
I would like to use an RPC mechanism that will seamlessly support passing our Java POJOs (some of them are Hibernate beans) over the wire. Web service stacks like JAXB, Axis, etc. need quite a bit of boilerplate and configuration for each object, whereas those using Java serialization seem straightforward, but I am concerned about backward/forward compatibility issues.
We are using XStream to serialize our objects into the persistence store and have been happy so far. But none of the popular RPC/web service frameworks seem to use XStream for serialization. Is it OK to use XStream and send my objects over HTTP using my own implementation? Or will Java serialization just work? Or are there better alternatives?
Thanks in advance for your advice.
The good thing about standard Java serialization is that it produces a binary stream, which is quite a bit more space- and bandwidth-efficient than any of these XML serialization mechanisms. But as you wrote, XML can be friendlier with respect to backward/forward compatibility, and it's easier to parse and modify by hand and/or with scripts if the need arises. It's a trade-off; if you need long-term storage, it's advisable to avoid plain serialization.
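For what it's worth, the XStream side of such a custom round trip is tiny; here's a rough sketch (Order is a stand-in for your real POJOs, and the XML string is what you'd put in the HTTP body):

```java
import com.thoughtworks.xstream.XStream;

public class XStreamRoundTrip {

    // Hypothetical POJO standing in for the real domain objects.
    static class Order {
        String id;
        int quantity;
        Order(String id, int quantity) { this.id = id; this.quantity = quantity; }
    }

    public static void main(String[] args) {
        XStream xstream = new XStream();
        // Recent XStream versions deny unknown types on deserialization by
        // default, so allow the ones you expect.
        xstream.allowTypes(new Class[] { Order.class });

        Order original = new Order("A-42", 3);
        String xml = xstream.toXML(original);        // POJO -> XML, send over HTTP
        Order copy = (Order) xstream.fromXML(xml);   // XML -> POJO on the other side

        System.out.println(xml);
        System.out.println(copy.id + " x" + copy.quantity);
    }
}
```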
I'm a happy XStream user. Zero problems so far.
AOP is an interesting programming paradigm in my opinion. However, there haven't been discussions about it yet here on stackoverflow (at least I couldn't find them). What do you think about it in general? Do you use AOP in your projects? Or do you think it's rather a niche technology that won't be around for a long time or won't make it into the mainstream (like OOP did, at least in theory ;))?
If you do use AOP then please let us know which tools you use as well. Thanks!
Python supports AOP by letting you dynamically modify its classes at runtime (which in Python is typically called monkeypatching rather than AOP). Here are some of my AOP use cases:
I have a website in which every page is generated by a Python function. I'd like to take a class and make all of the webpages generated by that class password-protected. AOP comes to the rescue; before each function is called, I do the appropriate session checking and redirect if necessary.
I'd like to do some logging and profiling on a bunch of functions in my program during its actual usage. AOP lets me calculate timing and print data to log files without actually modifying any of these functions.
I have a module or class full of non-thread-safe functions and I find myself using it in some multi-threaded code. Some AOP adds locking around these function calls without having to go into the library and change anything.
This kind of thing doesn't come up very often, but whenever it does, monkeypatching is VERY useful. Python also has decorators which implement the Decorator design pattern (http://en.wikipedia.org/wiki/Decorator_pattern) to accomplish similar things.
Note that dynamically modifying classes can also let you work around bugs or add features to a third-party library without actually having to modify that library. I almost never need to do this, but the few times it's come up it's been incredibly useful.
Yes.
Orthogonal concerns, like security, are best done with AOP-style interception. Whether that is done automatically (through something like a dependency injection container) or manually is unimportant to the end goal.
One example: the "before/after" attributes in xUnit.net (an open source project I run) are a form of AOP-style method interception. You decorate your test methods with these attributes, and just before and after that test method runs, your code is called. It can be used for things like setting up a database and rolling back the results, changing the security context in which the test runs, etc.
Another example: the filter attributes in ASP.NET MVC also act like specialized AOP-style method interceptors. One, for instance, allows you to say how unhandled errors should be treated, if they happen in your action method.
Many dependency injection containers, including Castle Windsor and Unity, support this behavior either "in the box" or through the use of extensions.
I don't understand how one can handle cross-cutting concerns like logging, security, transaction management, exception-handling in a clean fashion without using AOP.
Anyone using the Spring framework (probably about 50% of Java enterprise developers) is using AOP whether they know it or not.
At Terracotta we use AOP and bytecode instrumentation pretty extensively to integrate with and instrument third-party software. For example, our Spring integration is accomplished in large part by using AspectWerkz. In a nutshell, we need to intercept calls to Spring beans and bean factories at various points in order to cluster them.
So AOP can be useful for integrating with third party code that can't otherwise be modified. However, we've found there is a huge pitfall - if possible, only use the third party public API in your join points, otherwise you risk having your code broken by a change to some private method in the next minor release, and it becomes a maintenance nightmare.
AOP and transaction demarcation are a match made in heaven. We use Spring AOP's @Transactional annotations; they make for easier and more intuitive transaction demarcation than anything else I've seen.
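For anyone who hasn't seen it, this is roughly what that looks like (AccountRepository and the method bodies are made up; the point is that the transactional plumbing lives entirely in the annotation):

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical repository interface.
interface AccountRepository {
    void withdraw(long accountId, long amountCents);
    void deposit(long accountId, long amountCents);
}

@Service
public class TransferService {

    private final AccountRepository accounts;

    public TransferService(AccountRepository accounts) {
        this.accounts = accounts;
    }

    // Spring wraps this call in a transaction via AOP: commit on normal
    // return, rollback on a runtime exception, no plumbing code here.
    @Transactional
    public void transfer(long fromId, long toId, long amountCents) {
        accounts.withdraw(fromId, amountCents);
        accounts.deposit(toId, amountCents);
    }
}
```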
We used AspectJ in one of my big projects for quite some time. The project was made up of several web services, each with several functions, which together formed the front end for a complicated document processing/querying system - somewhere around 75k lines of code. We used aspects for two relatively minor pieces of functionality.
First was tracing application flow. We created an aspect that ran before and after each function call to print out "entered 'function'" and "exited 'function'". With the function selector thing (pointcut maybe? I don't remember the right name) we were able to use this as a debugging tool, selecting only functions that we wanted to trace at a given time. This was a really nice use for aspects in our project.
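In annotation-style AspectJ, such a tracing aspect looks roughly like this (the package in the pointcut is made up):

```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class TraceAspect {

    // Pointcut: every public method of every class under the (hypothetical)
    // com.example.docservice package and its sub-packages.
    @Around("execution(public * com.example.docservice..*.*(..))")
    public Object trace(ProceedingJoinPoint pjp) throws Throwable {
        String name = pjp.getSignature().toShortString();
        System.out.println("entered " + name);
        try {
            return pjp.proceed();   // run the intercepted method
        } finally {
            System.out.println("exited " + name);
        }
    }
}
```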
The second thing we did was application specific metrics. We put aspects around our web service methods to capture timing, object information, etc. and dump the results in a database. This was nice because we could capture this information, but still keep all of that capture code separate from the "real" code that did the work.
I've read about some nice solutions that aspects can bring to the table, but I'm still not convinced that they can really do anything that you couldn't do (maybe better) with "normal" technology. For example, I couldn't think of any major feature or functionality that any of our projects needed that couldn't be done just as easily without aspects - where I've found aspects useful are the kind of minor things that I've mentioned.
I use AOP heavily in my C# applications. I'm not a huge fan of having to use Attributes, so I used Castle DynamicProxy and Boo to apply aspects at runtime without polluting my code.
We use AOP in our session facade to provide a consistent framework for our customers to customize our application. This allows us to expose a single point of customization without having to add manual hook support in for each method.
Additionally, AOP provides a single point of configuration for additional transaction setup and teardown, and the usual logging things. All told, much more maintainable than doing all of this by hand.
The main application I work on includes a script host. AOP allows the host to examine the properties of a script before deciding whether or not to load the script into the Application Domain. Since some of the scripts are quite cumbersome, this makes for much faster loading at run-time.
We also use and plan to use a significant number of attributes for things like compiler control, flow control and in-IDE debugging, which do not need to be part of the final distributed application.
We use PostSharp for our AOP solution. We have caching, error handling, and database retry aspects that we currently use and are in the process of making our security checks an Aspect.
Works great for us. Developers really do like the separation of concerns. The Architects really like having the platform level logic consolidated in one location.
The PostSharp library is a post-compiler that injects the code. It has a library of pre-defined intercepts that are brain-dead easy to implement. It feels like wiring in event handlers.
Yes, we do use AOP in application programming. I prefer AspectJ for integrating AOP into my Spring applications. Have a look at this article for a broader perspective:
http://codemodeweb.blogspot.in/2018/03/spring-aop-and-aspectj-framework.html