Difference between Project Reactor and gRPC - spring-webflux

I am new to reactive concepts, and these terminologies all look similar to me. Am I correct in stating that Spring Reactor, Akka, and RxJava are similar? How are they different from gRPC? Can I use Project Reactor with gRPC and RSocket? This is getting overwhelming. How are these related? Any explanation of this would be very helpful to me.

Am I correct in stating that Spring Reactor, Akka, RxJava are similar
Yes, to a point. Reactor & RxJava are pretty much one and the same, conceptually (though I'm sure I'll attract some stick from at least someone for saying that...!)
Akka is a bit different - it's a more fully featured framework that uses reactive concepts. It's also written in Scala rather than Java, and while it works just fine in Java, it's a reasonably unusual choice in my experience.
how is this different from gRPC
gRPC is a completely different beast - it's a framework meant for connecting different services together. You can build your services in any number of languages, Java included, and then choose whether you make blocking or non-blocking calls to those services from the framework you've constructed (the non-blocking calls could then be trivially interfaced with a reactive framework if that was required.) It's much more similar conceptually to REST / SOAP than it is Reactor or RxJava.
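To make that last point concrete, here is a rough sketch (not from the question; GreeterGrpc, HelloRequest and HelloReply are hypothetical classes that protoc would generate from a .proto file) of bridging a non-blocking gRPC call into a Reactor Mono:

```java
import io.grpc.stub.StreamObserver;
import reactor.core.publisher.Mono;

class GrpcReactorBridge {

    // Wraps the callback-style async stub in a Mono so it can join a reactive pipeline.
    static Mono<HelloReply> sayHello(GreeterGrpc.GreeterStub asyncStub, String name) {
        return Mono.create(sink ->
            asyncStub.sayHello(
                HelloRequest.newBuilder().setName(name).build(),
                new StreamObserver<HelloReply>() {
                    @Override public void onNext(HelloReply value) { sink.success(value); }
                    @Override public void onError(Throwable t)     { sink.error(t); }
                    @Override public void onCompleted()            { /* single response already emitted */ }
                }));
    }
}
```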
RSocket is different again - it's more a competitor to HTTP for reactive service intercommunication than anything else. It's a communications protocol (rather than a framework) that services can use to inter-communicate, and it's designed specifically to be efficient while supporting reactive semantics at the stream level. RSocket can for example manage flow control, passing backpressure messages between services, to try to ensure upstream reactive services don't overwhelm downstream services - not something you can do with HTTP (at least, not without adding another protocol on top.)
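As a taste of what that looks like in code, here is a minimal request-stream sketch using the rsocket-java client. It assumes an RSocket server is already listening on localhost:7000 and that a "prices" payload means something to it; both details are made up for illustration:

```java
import io.rsocket.Payload;
import io.rsocket.RSocket;
import io.rsocket.core.RSocketConnector;
import io.rsocket.transport.netty.client.TcpClientTransport;
import io.rsocket.util.DefaultPayload;

public class RSocketStreamDemo {
    public static void main(String[] args) {
        RSocket rSocket = RSocketConnector.create()
                .connect(TcpClientTransport.create("localhost", 7000))
                .block();

        rSocket.requestStream(DefaultPayload.create("prices"))
                .map(Payload::getDataUtf8)
                .limitRate(16)   // request 16 items at a time; RSocket carries this demand to the server
                .doOnNext(System.out::println)
                .blockLast();
    }
}
```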
Overall, if you're new to reactive generally and want to start somewhere (and keep in Java land), then my advice would be to stick with Reactor for the time being to avoid getting overwhelmed - it's probably the most used framework in that regard since it's built right into Spring. Once you're familiar with the fundamentals, other related components like RSocket will start to make a lot more sense.

gRPC is not reactive; it's basically HTTP/2 + Protobuf.

What is the difference between ProjectReactor.io vs Spring WebFlux?
I've read the documentation here: https://docs.spring.io/spring-framework/docs/current/reference/html/web-reactive.html and https://projectreactor.io/, and to me both look very similar to each other. I am interested in learning the key differences between them.
They are at different abstraction levels, so they can't really be compared as such.
Project Reactor is a general-purpose reactive library. Similarly to RxJava, it is based on the reactive-streams specification. It is like Java 8 Stream and Optional, except it supports asynchronous programming, has error handling built in, supports backpressure, and offers a large number of operators (map, filter and many more).
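As a rough illustration, here is a minimal sketch of a Reactor pipeline (the numbers and delays are purely illustrative):

```java
import reactor.core.publisher.Flux;
import java.time.Duration;

public class ReactorDemo {
    public static void main(String[] args) {
        Flux.range(1, 100)                         // a reactive sequence of 1..100
            .map(i -> i * 2)                       // same idea as Stream.map
            .filter(i -> i % 3 == 0)               // same idea as Stream.filter
            .delayElements(Duration.ofMillis(10))  // asynchronous: elements arrive over time
            .onErrorResume(e -> Flux.empty())      // error handling is part of the pipeline
            .doOnNext(System.out::println)
            .blockLast();                          // block only so this demo doesn't exit early
    }
}
```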
Spring WebFlux is a framework for creating web services using reactive libraries. Its main goal is to ensure high scalability with low resource usage (i.e. a small number of threads). Under the hood it uses Project Reactor; however, you can also use it with RxJava (or any other reactive-streams implementation), and it works well even with Kotlin Coroutines.
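For example, a minimal WebFlux-style controller might look like the sketch below (class and path names are made up, and it assumes a Spring Boot application with the spring-boot-starter-webflux dependency):

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;
import java.time.Duration;

@RestController
class GreetingController {

    // Returning a Flux instead of a List lets WebFlux stream the response
    // without tying a thread to the request while elements are produced.
    @GetMapping("/greetings")
    Flux<String> greetings() {
        return Flux.just("hello", "bonjour", "hola")
                   .delayElements(Duration.ofMillis(100));
    }
}
```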

RESTful for Axon Repositories

PROBLEM: The application uses Axon Framework and org.axonframework.eventsourcing.EventSourcingRepository, and responses need _links built in HAL format.
RESEARCH: This can be done with Spring HATEOAS, but a lot has to be hand-coded in the REST controller. Spring Data REST offers auto-generation of links with just an annotation on a CRUD repository, but the project is not RDBMS- and JPA-based, so Spring Data REST is not an option.
QUESTION: Does Axon offer any RESTful solutions out of the box, or is there a better auto-configured alternative to Spring HATEOAS?
Gotcha, so you are essentially looking to expose a service's capabilities when it comes to which commands can be handled by a given Command Handling Component, disregarding whether that component is an Aggregate or an External Command Handler.
Note that the interaction between a component which dispatches commands and one which handles them resides within the CommandBus. When an Axon application starts up, it is the CommandBus which receives all the registrations for known command handlers.
That way, the CommandBus provides the location transparency for this part of the application. And it's location transparency which provides clear and cleanly segregated components; essentially what will help you to take an evolutionary microservices approach (as AxonIQ describes here).
I'd thus question the necessity of sharing all known command handlers on a given service/aggregate through REST.
Regardless, whether it makes sense is always a question of "it depends". I for one have created a means to share the known commands a service can handle as a JSON schema, as you can see here in a sample project I helped build between AxonIQ and Pivotal.
So, to come round to your question:
QUESTION: Does Axon offer any RESTful solutions out of the box, or is there a better auto-configured alternative to Spring HATEOAS?
No, Axon does not provide something like this out of the box, as it expects you to use the CommandBus for communication. I do know you might need a starting point somewhere, for which REST makes sense, but even then, exposing all known commands can be regarded as exposing your internal domain to the outside world. In the majority of scenarios that would be undesirable, but as stated, this highly depends on your use case.
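If you do decide to expose a REST entry point as that starting place, a common shape (sketched below with made-up command and endpoint names, assuming Axon 4 with Spring Boot) is a thin controller that only dispatches commands via the CommandGateway, so the CommandBus still does the actual routing:

```java
import org.axonframework.commandhandling.gateway.CommandGateway;
import org.axonframework.modelling.command.TargetAggregateIdentifier;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import java.util.concurrent.CompletableFuture;

// Hypothetical command; the aggregate's @CommandHandler for it lives elsewhere.
class CreateOrderCommand {
    @TargetAggregateIdentifier
    private final String orderId;

    CreateOrderCommand(String orderId) { this.orderId = orderId; }

    public String getOrderId() { return orderId; }
}

@RestController
class OrderController {

    private final CommandGateway commandGateway;

    OrderController(CommandGateway commandGateway) {
        this.commandGateway = commandGateway;
    }

    // The REST layer is only an entry point; command routing stays on the CommandBus.
    @PostMapping("/orders")
    CompletableFuture<String> createOrder(@RequestParam String orderId) {
        return commandGateway.send(new CreateOrderCommand(orderId));
    }
}
```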

Akka http vs Lagom

Please help me understand:
1) How to choose between Akka HTTP and Lagom for building a microservice.
2) Is there any difference between a REST API and an Akka HTTP/Lagom-based microservice?
Thanks
This is a very broad question, but let me give you some pointers.
1) Akka is a library (or, as the Akka team calls it, a toolkit), while Lagom is a framework. What's the difference? To quote Martin Fowler:
A library is essentially a set of functions that you can call, these days usually organized into classes. [..]
A framework embodies some abstract design, with more behavior built in.
Akka gives you everything you need to write a reactive microservice if you know what you're doing. Lagom tells you, to some extent, how to write a reactive microservice. For example, it prescribes a certain project structure and provides ready-made implementations for common patterns in microservices, such as service lookup, circuit breakers, asynchronous messaging, and even event sourcing and CQRS. You can do all this with Akka as well (in fact, that's what Lagom uses underneath), but you'll end up implementing a lot of it yourself. If you're not very experienced with Akka (and you probably aren't, otherwise you wouldn't be asking the question), I would recommend you give Lagom a shot.
2) A microservice is an application that is concerned with one business capability, and interacts with other microservices to form a functional system. REST is an architectural style for accessing and manipulating resources. These are completely independent: you can do microservices without REST, and REST without microservices. But you can also combine them, i.e. build your microservice as a REST service. This, or more specifically REST over HTTP using JSON, is very common for public-facing microservices that don't only interact with other microservices, but are also called from web front-ends or arbitrary client applications. So yes, there is a difference; in fact they have nothing to do with each other, but you can use Lagom (or Akka HTTP) to build a REST API.
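To illustrate that last point, a Lagom service descriptor (using Lagom's Java API) that exposes a plain REST endpoint might look roughly like the sketch below; the service and path names are invented:

```java
import akka.NotUsed;
import com.lightbend.lagom.javadsl.api.Descriptor;
import com.lightbend.lagom.javadsl.api.Service;
import com.lightbend.lagom.javadsl.api.ServiceCall;
import com.lightbend.lagom.javadsl.api.transport.Method;

import static com.lightbend.lagom.javadsl.api.Service.named;
import static com.lightbend.lagom.javadsl.api.Service.restCall;

// Lagom generates the HTTP plumbing from this descriptor; the implementation class
// (not shown) just provides the hello() ServiceCall.
public interface HelloService extends Service {

    ServiceCall<NotUsed, String> hello(String id);

    @Override
    default Descriptor descriptor() {
        // Maps GET /api/hello/:id onto hello(), i.e. an ordinary REST endpoint.
        return named("hello-service")
                .withCalls(restCall(Method.GET, "/api/hello/:id", this::hello))
                .withAutoAcl(true);
    }
}
```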

WCF; what's the big deal?

I'm just about getting into WCF; but from what I've read so far, like the sample scenarios I found on MSDN and some other sites, I can do all that with web services and applications that call those web services. So why the need for an elaborate layer like WCF?
Most of the comparisons I've googled for explain it more from a programming point of view. I'm still trying to find answers, without much success, as to when it makes business (and of course programming) sense to use the WCF layer as opposed to the traditional application-to-web-services model.
Anyone here with experience in both who can advise on how to go about choosing either plain web services or the WCF way? What are the things that absolutely can't be done using just plain old web services called by applications, and where will the WCF layer save the day?
You've fallen for the Microsoft trap of "it's just about web services" :-)
It's actually a lot more:
it's about service-oriented programming in general (not just web services - you can also write TCP/IP based services, MSMQ queue-based messaging and a lot more)
it's about unifying all the diverse programming models that existed so far (ASMX, Enterprise Services, DCOM, .NET remoting)
it's about providing a lot of ready-made and ready-to-use plumbing which can handle things like reliable messaging, transaction support, security in any shape or form you'd like, service discovery, and a lot more
it's about separating the service implementation from the details of how clients will call it and making this a configurable stack of protocols, encodings etc.
Sure - most of this stuff can be done in ASMX, or .NET remoting - but try to convert an ASMX web service to be callable in your intranet using TCP/IP and transport security... Many of those "older" technologies have a very intricate and direct link to how they're being used - you can't easily change that without changing the whole service code.
WCF separates all these "plumbing details" (what endpoint to call, what protocol to use to call it, how to handle security, etc.) out into a "WCF stack" that's configurable and composable, so you can easily switch your service XYZ from using HTTP and allowing anonymous users to call it, to using TCP/IP with Windows credentials required - your service code won't change a bit; it's only configuration of the plumbing.
That to me is the most compelling reason for WCF - I can totally concentrate on my actual service code, and not pollute it with lots of plumbing stuff - how to handle transports and text encodings and all that. And I can easily change that and adapt to new requirements and needs in deployment without having to touch my actual service code.
Plus, the second major point is extensibility - most of the older technologies just had their one, set way of doing things and many didn't lend themselves to being extended. You had to either adapt to use it the way they did it - or forget about it. WCF has a vast and very intricate system for extending just about anything - you can create your own transport protocol (people have created UDP or SMTP based bindings), you can create your own message encoders (like I had to do to talk to a web service which could only understand ISO-8859-1 encoded messages), and you can extend just about anything else in WCF - all in an organized, well-documented, very stable and safe way.
So these two things - separating out plumbing into configurable layers, and extensibility to the maximum - are the most compelling reasons for me to use WCF.
Edit: Kobi's link above is a far better answer than mine.
WCF is basically a better architecture for supporting communications. It breaks out many dependencies such as hosting (it is not IIS-dependent), transport, security, and addressing into plugin components, and allows customisation to a very high degree.
Yes, you can do a lot with traditional technologies; however, you can do more with WCF. If you don't need the features now then of course you can continue with legacy technologies. However, if you prefer, you can opt for a better architecture now with an eye on the future, though it comes at the cost of having to switch technologies now.
Take this example. If you have a legacy ASMX web service, how easily can you offer the same service via an MSMQ endpoint? With WCF it's as simple as adding new config settings.
I assume that you are not asking "why not just stick with SOAP/HTTP". WCF allows you to choose a number of different transports rather than just simple HTTP, but as you observe, the WS-* technologies allow you to do all that. So I think you're asking why use a powerful but complex framework when the raw technologies are not impossibly complex?
You could ask this same question of any Framework. You could just use the basic technologies and avoid the learning curve of adopting the framework.
Frameworks such as WCF do have a learning curve, but consider what happens if you don't use them:
You find that you write boilerplate code for each service invocation. You then either accept duplication or begin to refactor and build your own libraries. Before long you've developed your own framework, but it's not the same as anybody else's. So any new team member has to learn your local framework: a serious learning curve.
Note also that WCF addresses issues such as the monitoring of the deployed solution.
The biggest appeal to me is testability. Services are defined by a CLR interface, which is quite easy to mock inside a test harness. Some words of warning, however. With great flexibility comes some pain in the configuration process, along with a few "gotchas". An example of a gotcha is that WCF, adhering closely to a "best practice", requires an active SSL connection in order to pass SOAP authentication credentials over HTTP. This hinders testing quite a bit.

Has anybody compared WCF and ZeroC ICE?

ZeroC's ICE (www.zeroc.com) looks interesting and I am interested in looking at it and comparing it to our existing software that uses WCF. In particular, our WCF app uses server callbacks (via HTTP).
Anybody who's compared them? How did it go? I'm particularly interested in the performance aspect, since interoperability isn't much of a concern for us right now. Thanks!
I did a very terse review of ICE a few years ago, and although I haven't compared them directly before, having reasonable knowledge of WCF my thoughts might have some relevance.
Firstly, it's not entirely fair to compare WCF with ICE, as ICE is a specific remote communication mechanism and WCF is a higher-level remote communications framework.
While WCF is often thought of as implementing SOAP web services, and that is indeed its main use to date, it can also be used for implementing remote services using all manner of encodings and transport channels, which means it can theoretically be used for performant comms between applications.
In comparison, ICE is a cross-platform remote communication mechanism that uses binary encoding for performant communications between applications. It's something of a simplified evolution of CORBA and is more directly comparable to CORBA, DCOM, .NET Remoting, and JNI.
However, even though there's no direct correspondence between ICE and WCF, if you need your .NET app to communicate remotely then they're both contenders. Some of the decision points you might want to consider include:
Resourcing. It'll be easier to find developers with WCF experience than ICE experience.
Performance. If you want performance then ICE performs fast, but WCF can also be used in a performant configuration. Alternatively, .NET Remoting can provide very good performance, and whatever the MS-sponsored benchmarks say I've seen it outperform WCF by 10%.
Cross-platform. If you need to communicate with non-Windows applications then you're limited in the WCF options you can use. In addition, since every SOAP stack seems to implement the standards differently, it can be a pain creating truly generic web services (though WS-I helps).
If you don't need every ounce of performance from day one, then I'd personally plump for WCF to start with, and then consider ICE if performance ever becomes critical. Even then it might be cheaper to scale out your service boxes than it is to move to ICE, and if you don't have any exotic cross-platform needs then you could always look at reconfiguring WCF for binary encoding, etc.
Michi Henning from ZeroC has recently published a white paper on just this topic -- "Choosing Middleware: Why Performance and Scalability do (and do not) Matter". It compares Ice, WCF (binary & SOAP), and RMI with various performance metrics, platforms, languages, etc. There's more information on Michi's blog, but the white paper is also quite readable, with all the standard caveats of any benchmark.
Disclaimer: I've used Ice and RMI extensively, but never WCF.
Apache Thrift is another contender to ICE and WCF. It was developed and open-sourced by Facebook. Apache Thrift is nice in some ways because it's not only extremely efficient on the encoding side, it also supports adding fields to structures without breaking all of the clients (something we found extremely useful for our projects).
Google Protocol Buffers would seem to be not really a contender, as it doesn't mention .NET support on its home page. However, some community add-ons support C#. In addition, ICE provides emulation for Google Protocol Buffers if you're working with existing services.
Data point: we just converted a callback multi-platform and multi-language project from Ice to Thrift with pretty good results. Ice does a lot for you, so we had to implement disconnection listeners, connection events, etc. ourselves. And in one case we got bitten in the proverbial by a big object lock that Ice was letting us get away with; this caused a deadlock in the Thrift server, but it was easily fixed by less lazy coding on the C# side.
I've just finished benchmarking, and in our application anything that pushes large amounts of data is faster than, or on par with, Ice. Shorter messages with more overhead (i.e., a "heartbeat" that updates a status over the protocol) are a bit slower.
The most important bit was that in order to implement the callback service correctly we had to extend Thrift interfaces and define our own protocol, along with a Thrift "Processor" and callback client-server. But I freely admit our application is /very/ special. The existing protocols and servers should be sufficient. But extending them, even to use multiplex sockets from .Net, was not terribly difficult.
We are using ICE to integrate modules written in C++, Java, and C#. The nice thing is that our server can access components on remote machines as well, so if we need more performance we can shift processing to different machines.
I've used both WCF and ICE, and I'd say that ICE is cleaner on the implementation side. ICE also has very detailed and readable documentation.
ICE supports some things that WCF cannot do, including load balancing, automated remote client updates, etc.