Why Does Ignite Use the Spring Framework? - ignite

I have used the Spring framework in my apps, and while it is nice conceptually, it is not suitable for real-time apps due to its run-time overhead. For instance, http://apache-ignite-users.70518.x6.nabble.com/Failed-to-map-keys-for-cache-all-partition-nodes-left-the-grid-td23510.html shows an actual run-time Spring call stack.
The Spring features that Ignite uses for loading application-defined beans are just many layers wrapped around simple Java reflection. So why does Ignite use Spring instead of plain Java reflection?
To make Ignite more performant, is there a plan to switch from the Spring framework to plain Java reflection?
Similarly, if Ignite uses Spring Boot to handle port requests, why does it not use a lightweight framework such as www.sparkjava.com?

Ignite uses Spring only to convert XML configuration files into configuration beans during startup. This way Ignite provides a convenient, well-known way of configuring nodes instead of introducing a custom one. At runtime, after the node is started, Spring is not used for anything.
In the thread you provided it's actually the other way around: Spring invokes Ignite. Apparently, that's a Spring application with an embedded Ignite node.
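To illustrate the startup-only role of Spring, here is a minimal sketch (the configuration file name is hypothetical): Spring is used exactly once, to parse the XML into an IgniteConfiguration bean, and plays no part after Ignition.start() returns.

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class StartNode {
    public static void main(String[] args) {
        // "ignite-config.xml" is a hypothetical Spring XML file defining a
        // single IgniteConfiguration bean; Spring parses it here, once.
        Ignite ignite = Ignition.start("ignite-config.xml");

        // Spring-free alternative: build the configuration programmatically.
        // Ignite ignite = Ignition.start(new IgniteConfiguration());

        System.out.println("Node started: " + ignite.name());
    }
}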

Related

Integration tests with Cucumber using embedded GemFire for a Spring Boot application deployed in an Apache Geode client/server topology

I intend to write integration tests with Cucumber for a GemFire cache client application built with Spring Boot and deployed in an Apache Geode client/server topology. I referred to the question How to start Spring Boot app without depending on Pivotal GemFire cache, which was answered in 2018, and also to the integration test documentation here: Integration Testing with STDG.
The link to an example concrete client/server integration test extending STDG's ForkingClientServerIntegrationTestsSupport class appears to be broken.
The purpose of my integration tests would be to:
run an embedded locator and a server during the integration test phase
define the regions for the servers using cluster.xml
create, read, update and delete cache entries and verify the different use cases
Any help regarding the ideal approach to write integration tests (probably using an embedded GemFire locator and server) will be very helpful.
I tried an embedded GemFire CacheServer instance for integration tests using the @CacheServerApplication annotation, but I am not sure how to create ClientCache objects that use the embedded GemFire, or whether this is the right way to write the integration tests.
Edit: Also came across this - Is it possible to start a Pivotal GemFire Server, Locator and Client in one JVM? - where it is mentioned: In short, NO, you cannot have a peer Cache instance (with embedded Locator) and a ClientCache instance in the same JVM (or Java application process).
DISCLAIMER: I do not have experience with Cucumber...
However, it is not difficult to spin up multiple GemFire or Geode server-side processes, such as 1 or more Locators and [multiple] CacheServers, in a single test class. The Locators can be standalone JVM processes or embedded, as part of the servers.
In this typical test configuration arrangement the GemFire or Geode server-side processes are forked, yet coordinated, and the test class itself acts as the ClientCache instance.
You can see 1 such test configuration in the SBDG Multi-site Caching sample, here.
The key to this test configuration is the extension of the ForkingClientServerIntegrationTestsSupport class from STDG, as well as the forking of the 2 clusters in the test class setup method.
The configuration for each cluster is handled by Spring config, and the coordination is all handled using GemFire/Geode properties combined with some Spring profiles to control which configuration gets applied to each GemFire/Geode JVM process.
Of course, this example and test configuration are quite complex given that the test also employs GemFire/Geode's WAN capabilities, hence the "multi-site" caching reference, but it serves to demonstrate that Spring and SBDG/SDG/STDG support as complex or as simple a setup as your testing needs require.
You can start any number of GemFire/Geode processes (Locators, CacheServers, etc.). And, in nearly all cases, the test class (JVM) itself acts as the cache client (a ClientCache instance).
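To make the basic arrangement concrete, here is a rough skeleton built on the STDG APIs (the region name, nested config classes, and test name are hypothetical, and the exact client/server port wiring is handled by STDG and varies by version): the server JVM is forked in the @BeforeClass method, and the test class itself runs as the ClientCache.

import org.apache.geode.cache.GemFireCache;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientRegionShortcut;
import org.junit.BeforeClass;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.data.gemfire.ReplicatedRegionFactoryBean;
import org.springframework.data.gemfire.client.ClientRegionFactoryBean;
import org.springframework.data.gemfire.config.annotation.CacheServerApplication;
import org.springframework.data.gemfire.config.annotation.ClientCacheApplication;
import org.springframework.data.gemfire.tests.integration.ForkingClientServerIntegrationTestsSupport;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringRunner;

import static org.junit.Assert.assertEquals;

@RunWith(SpringRunner.class)
@ContextConfiguration(classes = ClientServerCrudTests.TestClientConfiguration.class)
public class ClientServerCrudTests extends ForkingClientServerIntegrationTestsSupport {

    @BeforeClass
    public static void startGeodeServer() throws Exception {
        // Forks a separate JVM running the @CacheServerApplication config below.
        startGemFireServer(TestServerConfiguration.class);
    }

    @Autowired
    private Region<String, String> example;

    @Test
    public void createReadUpdateDelete() {
        example.put("key", "value");
        assertEquals("value", example.get("key"));
        example.remove("key");
    }

    @ClientCacheApplication
    static class TestClientConfiguration {

        // Client-side PROXY region; all operations go to the forked server.
        @Bean("Example")
        ClientRegionFactoryBean<String, String> exampleRegion(GemFireCache cache) {
            ClientRegionFactoryBean<String, String> region = new ClientRegionFactoryBean<>();
            region.setCache(cache);
            region.setShortcut(ClientRegionShortcut.PROXY);
            return region;
        }
    }

    @CacheServerApplication
    static class TestServerConfiguration {

        // STDG launches this main method in the forked server JVM.
        public static void main(String[] args) {
            new AnnotationConfigApplicationContext(TestServerConfiguration.class)
                .registerShutdownHook();
        }

        // Server-side region matching the client's "Example" PROXY region.
        @Bean("Example")
        ReplicatedRegionFactoryBean<String, String> exampleRegion(GemFireCache cache) {
            ReplicatedRegionFactoryBean<String, String> region = new ReplicatedRegionFactoryBean<>();
            region.setCache(cache);
            region.setPersistent(false);
            return region;
        }
    }
}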
Here are a couple more examples from the Spring Data for Apache Geode (SDG) codebase and test suite: here and here.
I am certain I have another test class or example (somewhere) that started a single Locator, joined 2 CacheServer instances to it, and then had the test (JVM process) proceed as the ClientCache instance, but I cannot seem to find it at the moment.
In any case, I hope this gives you some ideas.

Apache Ignite with Spring framework

Does Apache Ignite operate on a Spring framework basis?
Can I register a Spring controller on the classpath of a remote server node and use it (using a component annotation like @Controller)?
Apache Ignite is integrated with Spring but isn't based on it.
You can register Spring beans when starting a remote node (using the normal Spring approach) and then use them from, for example, compute tasks or distributed services.
I'm not sure if you can register beans remotely at runtime, but I don't see why not.
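As a sketch of that "normal Spring approach" (the service interface, bean name, and config class are all hypothetical): start the node with IgniteSpring.start(...) so Ignite knows about your application context, and let Ignite inject the bean into a compute job via @SpringResource.

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteSpring;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.lang.IgniteRunnable;
import org.apache.ignite.resources.SpringResource;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

public class SpringBeanFromCompute {

    // Hypothetical application bean used from an Ignite compute job.
    public interface GreetingService {
        String greet(String name);
    }

    @Configuration
    public static class AppConfig {
        @Bean
        public GreetingService greetingService() {
            return name -> "Hello, " + name;
        }
    }

    public static class GreetingJob implements IgniteRunnable {
        // Injected by Ignite from the Spring context of the executing node;
        // transient because the job itself is serialized to remote nodes.
        @SpringResource(resourceName = "greetingService")
        private transient GreetingService greetingService;

        @Override
        public void run() {
            System.out.println(greetingService.greet("Ignite"));
        }
    }

    public static void main(String[] args) throws Exception {
        AnnotationConfigApplicationContext ctx =
            new AnnotationConfigApplicationContext(AppConfig.class);

        // Start the node with the Spring context so @SpringResource can resolve beans.
        try (Ignite ignite = IgniteSpring.start(new IgniteConfiguration(), ctx)) {
            ignite.compute().broadcast(new GreetingJob());
        }
    }
}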

Hystrix command does not run in Hystrix environment

I am having an issue with my Hystrix commands. If the call to a Hystrix-wrapped method comes from within the same class, the method does not run in the Hystrix environment.
In that case I see logs like:
05-02-2018 22:51:25.809 [http-nio-auto-1-exec-3] INFO c.i.q.v.e.ConnectorImpl.populateFIDSchema -
populating FID Schema
But if I call the same method from outside the class, I see it running in the Hystrix environment:
05-02-2018 22:54:53.735 [hystrix-ConnectorImpl-1] INFO c.i.q.v.e.ConnectorImpl.populateFIDSchema -
populating FID Schema
I am wrapping my method with @HystrixCommand like this:
@HystrixCommand(commandKey = "getSchemaCommand", fallbackMethod = "getSchemaCommandFallback")
Any ideas?
Contrary to @pvpkiran's answer, this is not a limitation of AspectJ, but a limitation of Spring AOP. Spring AOP is a solution that implements a subset of AspectJ through proxies, and this proxy-based approach is what causes the advice not to be invoked when calls are not made through the proxy.
See Spring AOP capabilities and goals and AOP Proxies in the Spring Framework Reference for more details.
AspectJ, on the other hand, directly modifies the bytecode of the advised class, involves no proxies at all, and doesn't suffer from the limitations of proxy-based Spring AOP.
AspectJ is superior to Spring AOP in pretty much every respect, so I would advise you to switch over from Spring AOP to AspectJ (you don't need to ditch Spring for this, as Spring and AspectJ work together very well).
This is a limitation of Spring AOP (Hystrix-Javanica is based on Spring AOP).
When you call a method locally, the call doesn't go through the proxy, so it doesn't run in the Hystrix environment; it runs as an ordinary method call.
But when you make the call from outside the class, it goes through the proxy, and hence it works.
This is true of many other functionalities as well; another example is @Cacheable.
When you call from outside the class, Hystrix (via Spring AOP) intercepts the call and wraps it in its own environment. But when you make a local call, it cannot intercept the call.
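A minimal sketch of this self-invocation trap (the class name mirrors the logs above; the method bodies are hypothetical):

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import org.springframework.stereotype.Service;

@Service
public class ConnectorImpl {

    // External callers enter through the Spring proxy, so the Hystrix
    // interceptor wraps this call and it runs on a hystrix-* thread.
    @HystrixCommand(commandKey = "getSchemaCommand", fallbackMethod = "getSchemaCommandFallback")
    public String populateFIDSchema() {
        return "schema";
    }

    public String getSchemaCommandFallback() {
        return "fallback-schema";
    }

    // Self-invocation: "this" is the raw target object, not the proxy, so the
    // interceptor never sees the call and the method runs on the caller's
    // thread (e.g. http-nio-*), exactly as in the first log line above.
    public String refreshSchema() {
        return populateFIDSchema();
    }
}

Typical workarounds are to move the wrapped method into a separate bean, to inject the bean into itself and call through the injected proxy, or to call through AopContext.currentProxy() with @EnableAspectJAutoProxy(exposeProxy = true).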

AspectJ Spring AOP pointcut Hibernate entity functions

How do I pointcut the execution of methods defined in Hibernate entities, which are not created or loaded as Spring beans? I couldn't find any help on the internet on how to do this.
Is there a way to use Spring to pointcut Hibernate entities?
This is what I found, but no solution.
With Spring AOP, you cannot do it. Spring AOP is a limited AOP solution that is only similar to AspectJ. Spring AOP is less capable than AspectJ in a number of ways:
Spring AOP supports only a limited subset of the AspectJ pointcuts (only the execution kind of pointcut)
Spring AOP has different semantics compared to AspectJ, because it uses dynamic proxies instead of direct bytecode manipulation. With the proxy-based solution Spring AOP uses, advice is not executed when the control flow doesn't leave the proxied object, as when invoking another method on the same object via this.someOtherMethod()
Spring AOP only works for Spring-managed beans. Hibernate entities are not Spring-managed beans, so Spring AOP doesn't apply to them.
I encourage you to switch over to native AspectJ to be able to advise Hibernate entities or any other non-Spring-managed beans. Spring supports AspectJ nicely, and you should be able to change your configuration to use native AspectJ instead of Spring AOP.
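As a sketch of what native AspectJ allows (the entity class and package are hypothetical), an aspect like the following can advise entity methods once compile-time weaving (e.g. the aspectj-maven-plugin) or load-time weaving (-javaagent:aspectjweaver.jar plus a META-INF/aop.xml) is enabled:

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class EntityAuditAspect {

    // Matches any public method on the (hypothetical) Hibernate entity class;
    // with bytecode weaving this applies even though the entity is not a Spring bean.
    @Before("execution(public * com.example.domain.Order.*(..))")
    public void beforeEntityMethod(JoinPoint joinPoint) {
        System.out.println("Entity method called: " + joinPoint.getSignature());
    }
}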

Consuming Spring Boot "metricsChannel" via Apache Camel/RabbitMQ

Spring Boot publishes all metrics events to a message channel named "metricsChannel" when a dependency on spring-messaging is present. In my project I am using Apache Camel along with RabbitMQ as the broker. Is there any way to consume these metrics messages using purely Camel and not Spring Integration?
I can see Apache Camel has a SpringIntegration component; however, I would like to know if there is a way to access these messages directly via the RabbitMQ component, or do I have to add a dependency on the Camel spring-integration component as well?
It would be far easier for you to implement MetricWriter and do whatever you want with Camel instead of trying to bridge the result with Spring Integration. Look at MessageChannelMetricWriter; the implementation is quite straightforward.
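A rough sketch of that idea against the Spring Boot 1.x actuator API (the class name and the direct:metrics endpoint URI are hypothetical): a MetricWriter that hands each metric straight to a Camel route, bypassing the Spring Integration metricsChannel entirely.

import org.apache.camel.ProducerTemplate;
import org.springframework.boot.actuate.metrics.Metric;
import org.springframework.boot.actuate.metrics.writer.Delta;
import org.springframework.boot.actuate.metrics.writer.MetricWriter;

public class CamelMetricWriter implements MetricWriter {

    private final ProducerTemplate producer;

    public CamelMetricWriter(ProducerTemplate producer) {
        this.producer = producer;
    }

    @Override
    public void set(Metric<?> value) {
        // Hand each gauge value to a Camel route, which can forward to RabbitMQ.
        producer.sendBody("direct:metrics", value);
    }

    @Override
    public void increment(Delta<?> delta) {
        producer.sendBody("direct:metrics", delta);
    }

    @Override
    public void reset(String metricName) {
        // Nothing to do in this sketch.
    }
}

Registering the bean with @ExportMetricWriter (Spring Boot 1.x) should make the actuator push metrics into it, and a Camel route consuming from direct:metrics can then route them to the rabbitmq: endpoint.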