What is the difference between ProjectReactor.io vs Spring WebFlux?
I've read the documentation here: https://docs.spring.io/spring-framework/docs/current/reference/html/web-reactive.html and https://projectreactor.io/, and to me both seem very similar to each other. I am interested in learning the highlights of the differences.
They are at different abstraction levels, so they can't really be compared as such.
Project Reactor is a general-purpose reactive library. Like RxJava, it is based on the Reactive Streams specification. It is similar to Java 8 Stream and Optional, except that it supports asynchronous programming, has built-in error handling, supports backpressure, and offers a large number of operators (map, filter and many more).
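To make the operator style concrete, here is a minimal Reactor sketch (values and operators are arbitrary, chosen only to show the Stream-like API with built-in error handling):

```kotlin
import reactor.core.publisher.Flux

fun main() {
    Flux.range(1, 10)                            // emit 1..10
        .map { it * 2 }                          // transform each element, like Stream.map
        .filter { it % 3 == 0 }                  // keep only multiples of 3
        .onErrorResume { Flux.empty<Int>() }     // built-in error handling: fall back to an empty stream
        .subscribe { println(it) }               // prints 6, 12, 18
}
```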
Spring WebFlux is a framework for creating web services using reactive libraries. Its main goal is to ensure high scalability with low resource usage (i.e. a small number of threads). Under the hood it uses Project Reactor; however, you can also use it with RxJava (or any other Reactive Streams implementation), and it works well even with Kotlin Coroutines.
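And a minimal WebFlux sketch, assuming a Spring Boot WebFlux project (the controller name and endpoint are made up). The point is that a handler returns a reactive type, here a Reactor Flux, rather than a fully materialized response, so the small thread pool is never blocked while elements are produced:

```kotlin
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RestController
import reactor.core.publisher.Flux
import java.time.Duration

@RestController
class QuoteController {                        // hypothetical controller

    @GetMapping("/quotes")
    fun quotes(): Flux<String> =
        Flux.interval(Duration.ofSeconds(1))   // emit 0, 1, 2, ... once per second
            .map { "quote-$it" }
            .take(3)                           // complete after three elements
}
```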
Related
I am new to reactive concepts and these terminologies look similar to me. Am I correct in stating that Spring Reactor, Akka, RxJava are similar? But how is this different from gRPC? Can I use Project Reactor with gRPC and RSocket? This is getting overwhelming. How are these related? Any explanation on this will be very useful to me.
Am I correct in stating that Spring Reactor, Akka, RxJava are similar
Yes, to a point. Reactor & RxJava are pretty much one and the same, conceptually (though I'm sure I'll attract some stick from at least someone for saying that...!)
Akka is a bit different - it's a more fully featured framework that uses reactive concepts. It's also written in Scala rather than Java, and while it works just fine in Java, it's a reasonably unusual choice in my experience.
how is this different from gRPC
gRPC is a completely different beast - it's a framework meant for connecting different services together. You can build your services in any number of languages, Java included, and then choose whether you make blocking or non-blocking calls to those services from the framework you've constructed (the non-blocking calls could then be trivially interfaced with a reactive framework if that was required.) It's much more similar conceptually to REST / SOAP than it is to Reactor or RxJava.
RSocket is different again - it's more a competitor to HTTP for reactive service intercommunication than anything else. It's a communications protocol (rather than a framework) that services can use to inter-communicate, and it's designed specifically to be efficient while supporting reactive semantics at the stream level. RSocket can for example manage flow control, passing backpressure messages between services, to try to ensure upstream reactive services don't overwhelm downstream services - not something you can do with HTTP (at least, not without adding another protocol on top.)
Overall, if you're new to reactive generally and want to start somewhere (and keep in Java land), then my advice would be to stick with Reactor for the time being to avoid getting overwhelmed - it's probably the most used framework in that regard since it's built right into Spring. Once you're familiar with the fundamentals, other related components like RSocket will start to make a lot more sense.
gRPC is not reactive; it's basically HTTP/2 + protobuf.
I have read the Spring guide with Kotlin and it says data classes for JPA are not recommended,
but I am quite confused after seeing some tutorials and videos using data classes for JPA.
Did Spring find a way to deal with data classes in newer versions?
We have developed several services with Spring and Kotlin and used data classes as e.g. entities. This works fine and leads to a lot less boilerplate. You do, however, need to configure your project with these build options/dependencies to avoid Spring interoperability issues:
https://kotlinlang.org/docs/all-open-plugin.html
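As a sketch, the usual way to wire this up in a Gradle Kotlin DSL build is through the kotlin-spring and kotlin-jpa compiler plugins, which wrap the all-open and no-arg plugins with sensible defaults (the version number below is only illustrative):

```kotlin
// build.gradle.kts
plugins {
    kotlin("jvm") version "1.9.24"            // illustrative version
    kotlin("plugin.spring") version "1.9.24"  // all-open for @Component, @Configuration, @Transactional, ...
    kotlin("plugin.jpa") version "1.9.24"     // no-arg constructors for @Entity, @Embeddable, @MappedSuperclass
}
```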
You can use Kotlin for Spring Data entities. This is true for all Spring Data modules, including JPA, where Spring Data is not the one doing the mapping; your JPA implementation does that.
The problem is that all the libraries involved are developed mainly with Java in mind, and Kotlin isn't developed with Hibernate or Spring Data in mind. Therefore problems are bound to occur.
For example, Kotlin generates a lot of stuff that isn't visible to normal users, like special constructors. But it is visible to reflection, so in the past we had situations where the developer only saw a single constructor, but Spring Data saw multiple constructors and couldn't decide which one to use.
So you may use Kotlin, but especially when the next Kotlin version comes along, you might experience some extra pain.
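For reference, a minimal sketch of what such a Kotlin entity can look like (names are made up; with the kotlin-jpa/no-arg plugin applied, the JPA provider gets the synthetic zero-argument constructor it needs, while the class stays a concise data class):

```kotlin
import jakarta.persistence.Entity
import jakarta.persistence.GeneratedValue
import jakarta.persistence.Id

// Hypothetical entity; use javax.persistence imports on older Spring Boot / JPA versions.
@Entity
data class Customer(
    @Id @GeneratedValue
    val id: Long? = null,
    val name: String = ""
)
```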
This question probably applies to other libraries as well, but using Cassandra as a specific example to try to ensure I'm asking an answerable question:
With Kotlin, I can either use Cassandra's async methods, then wrap them with the ListenableFuture integration, or I can use Cassandra's synchronous methods and wrap their usage with a suspending method and launch/async.
I'm guessing that the better technique is to use a library's existing async methods, presuming that would more easily avoid deadlocks and be faster, but I'm speculating and am new to coroutines.
Is this an obvious answer for people more experienced with coroutines, or are there specific areas where "it depends"?
It depends on the internal details of the library you are using and on your performance/scalability goals:
If your library is internally asynchronous then it would be always advisable to use it via its native asynchronous API. Disclaimer: I have no idea how Cassandra is structured internally (sync or async).
If your library is internally synchronous/blocking (and most legacy libraries are), then it depends:
If your application is IO-bound (reads/writes a lot of bytes to/from network/disk) and you are optimizing it for throughput (maximizing the number of bytes processed on large batch loads), then, as a rule of thumb, you'll be better off using synchronous/blocking APIs.
If your application is memory-bound and you want to scale it to more concurrent connections/requests, then, as a rule of thumb, you'll be better off using asynchronous APIs.
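As a rough sketch of both wrapping options (the UserStore interface and its methods are made-up stand-ins for whatever your driver actually exposes; the await() extension comes from the kotlinx-coroutines-guava artifact):

```kotlin
import com.google.common.util.concurrent.ListenableFuture
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.guava.await
import kotlinx.coroutines.withContext

// Hypothetical client; only the two wrapping patterns are the point here.
interface UserStore {
    fun fetchAsync(id: String): ListenableFuture<String>
    fun fetchBlocking(id: String): String
}

// Library is internally asynchronous: adapt its future directly;
// no thread is parked while waiting for the result.
suspend fun fetchViaAsyncApi(store: UserStore, id: String): String =
    store.fetchAsync(id).await()

// Library is blocking: confine the call to a dispatcher sized for blocking IO
// so it doesn't starve the default coroutine dispatcher.
suspend fun fetchViaBlockingApi(store: UserStore, id: String): String =
    withContext(Dispatchers.IO) { store.fetchBlocking(id) }
```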
I am trying to build an object-oriented wrapper which will wrap an API specification; this includes many structures, events, and APIs.
This API specification will be revised every year, thereby releasing a new specification; the updates are likely to add newer structures, events and APIs. Updates will also include changes to existing structures, events and APIs: the APIs as such do not change, but since they take various structures as parameters, they are affected whenever those structures are updated.
The challenges
The API specification is nothing but an SDK to a lower layer; what I am trying to build is also an SDK, but it will be an object-oriented wrapper over this SDK.
The requirement is that the users want objects and methods, and no "C"-like structures and APIs
The frequent version changes should not have any impact on the high-level application, which should work seamlessly with any underlying API version
Older application should work on newer APIs
Newer application should work on older APIs
The last one is a tricky one; what I mean is that when the newer application sees that it is running against an older version of the SDK, it should somehow transform itself to work with that older version of the API.
Is there any design pattern which will help me achieve this task, cope with the frequent changes to internal data, and also achieve backward compatibility and forward compatibility?
OS: Windows
Dev Environment : Visual C++
Your problem is too high level to be answerable by a design pattern.
What you are asking for are architectural principles.
These you should base on your well-founded design decisions ("API is backwards compatible using versioning because...") which in turn are based on your requirements (e.g. "Older application should work on newer APIs").
Look into this (a presentation keynote about API design by Joshua Bloch):
How to Design a Good API and Why it Matters
1) All that comes to mind at the moment, if the SDK API involves manual resource allocation:
RAII, or ctor,dtor resource management: https://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization
2-5) Determine a function decomposition of the API you're building, that becomes expressible in terms of each version tier of the SDK API. Some details on semi-formal function decomposition here (towards the bottom):
http://jfeltz.com/posts/2015-08-30-cost-decreasing-software-architecture.html
You can then take the resulting function compositions and make them construct-able objects if you have to. Don't worry about the final object model until you have a working understanding of the function compositions involved. This is hard at first, but trust me, it is far more powerful than iterating through several possible object model designs.
For C++, you'll probably need to perform #define pre-processing against a scheme of versions for each upstream SDK API, unless your SDK encodes its version in a file somewhere, such that you can do DLL loading instead (in which case, this may be the Factory design pattern), but I suspect you already knew that.
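The question targets Visual C++, but here is a language-neutral sketch (shown in Kotlin, all names made up) of that factory-per-version idea: the high-level applications code against one stable wrapper interface, and an adapter per SDK version is selected at runtime.

```kotlin
// Stable surface that high-level applications depend on.
interface DeviceApi {
    fun readStatus(): String
}

// One adapter per SDK version, each translating to that version's structures/calls.
class V1Adapter : DeviceApi {
    override fun readStatus() = "status via v1 structures"
}

class V2Adapter : DeviceApi {
    override fun readStatus() = "status via v2 structures"
}

object DeviceApiFactory {
    // detectedVersion stands in for however the real SDK reports its version
    // (version file, exported symbol, DLL name, ...).
    fun create(detectedVersion: Int): DeviceApi =
        if (detectedVersion >= 2) V2Adapter() else V1Adapter()
}
```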
We run multiple websites which use the same rich functional backend running as a library. The backend is comprised of multiple components with a lot of objects shared between them. Now, we need to separate a stateless rule execution component into a different container for security reasons. It would be great if I could have access to all the backend objects seamlessly in the rules component (rather than defining a new interface and objects/adapters).
I would like to use an RPC mechanism that will seamlessly support passing our Java POJOs (some of them are Hibernate beans) over the wire. Web service stacks like JAXB, Axis, etc. need quite a bit of boilerplate and configuration for each object, whereas those using Java serialization seem straightforward, but I am concerned about backward/forward compatibility issues.
We are using XStream for serializing our objects into a persistence store and have been happy so far. But none of the popular RPC/web service frameworks seem to use XStream for serialization. Is it OK to use XStream and send my objects over HTTP using my custom implementation? Or will Java serialization just work? Or are there better alternatives?
Thanks in advance for your advice.
The good thing about standard Java serialization is that it produces a binary stream, which is quite a bit more space- and bandwidth-efficient than any of these XML serialization mechanisms. But as you wrote, XML can be friendlier with regard to backward/forward compatibility, and it's easier to parse and modify by hand and/or with scripts if the need arises. It's a trade-off; if you need long-term storage, then it's advisable to avoid plain serialization.
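If you do go the custom XStream-over-HTTP route, the round-trip itself is small; a sketch (the Invoice class is made up, and recent XStream releases require the deserializable types to be allowed explicitly):

```kotlin
import com.thoughtworks.xstream.XStream

// Hypothetical POJO standing in for the shared backend objects.
data class Invoice(val id: Long = 0, val customer: String = "")

fun main() {
    val xstream = XStream()
    xstream.allowTypes(arrayOf<Class<*>>(Invoice::class.java))  // security whitelist in newer XStream releases

    val xml = xstream.toXML(Invoice(42, "ACME"))       // XML payload that would travel over HTTP
    val back = xstream.fromXML(xml) as Invoice         // reconstructed on the receiving side
    println(xml)
    println(back)
}
```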
I'm a happy XStream user. Zero problems so far.