Getting ClassNotFoundException when storing a list of custom objects in an Ignite cache, although it works fine without the list

I have a non-persistent Ignite cache that stores the following elements:
Key --- java.lang.String
Value --- a custom class:
public class Transaction {
    private int counter;

    public Transaction(int counter) {
        this.counter = counter;
    }

    @Override
    public String toString() {
        return String.valueOf(counter);
    }
}
The code below works fine, even though I am putting a custom object into Ignite:
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setClientMode(true);
cfg.setPeerClassLoadingEnabled(true);
cfg.setDeploymentMode(DeploymentMode.CONTINUOUS);
TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
ipFinder.setAddresses(Collections.singletonList("127.0.0.1:47500..47509"));
cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));
Ignite ignite = Ignition.start(cfg);
IgniteCache<String, Transaction> cache = ignite.getOrCreateCache("blocked");
cache.put("c1234_t2345_p3456", new Transaction(100));
The code below fails with ClassNotFoundException when I try to put a list of objects instead. It is exactly the same as the code above, except for the list. Why does a list of objects fail when custom objects stored directly work fine?
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setClientMode(true);
cfg.setPeerClassLoadingEnabled(true);
cfg.setDeploymentMode(DeploymentMode.CONTINUOUS);
TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
ipFinder.setAddresses(Collections.singletonList("127.0.0.1:47500..47509"));
cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));
Ignite ignite = Ignition.start(cfg);
IgniteCache<String, List<Transaction>> cache = ignite.getOrCreateCache("blocked");
cache.put("c1234_t2345_p3456", Arrays.asList(new Transaction(100)));
Storing custom objects in-memory in Ignite worked, but storing List objects instead caused a ClassNotFoundException on the server. I was able to solve this by copying the custom class definition to "/ignite_home/bin/libs", but I am curious to know why the first case worked and the second didn't. Can anyone help me understand what's happening in this case? Is there any other way to resolve this issue?

OK, after many trials, I have an observation that evens out the differences between the two scenarios above. When I declare the cache dynamically in code, as I did earlier, Ignite raises the error unless I keep the custom classes in the bin/libs folder. But if I define the cache in ignite-config.xml, Ignite handles both use cases evenly and doesn't throw the ClassNotFoundException at all. My takeaway is that pre-declared caches are safer, since caches created dynamically from code behave differently. So I changed the cache declaration to the declarative model, and now both use cases work fine.
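For completeness, the XML declaration has a programmatic counterpart: registering the cache configuration on the IgniteConfiguration before the node starts. Below is a minimal sketch of that approach, reusing the cache name "blocked" and the Transaction class from the question; that it sidesteps the ClassNotFoundException the same way the XML declaration does is my assumption, not something verified here.
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setClientMode(true);
cfg.setPeerClassLoadingEnabled(true);
// Declare the cache up front (the programmatic analogue of defining it in
// ignite-config.xml) instead of creating it dynamically after start.
CacheConfiguration<String, List<Transaction>> cacheCfg = new CacheConfiguration<>("blocked");
cfg.setCacheConfiguration(cacheCfg);
Ignite ignite = Ignition.start(cfg);
IgniteCache<String, List<Transaction>> cache = ignite.cache("blocked");
cache.put("c1234_t2345_p3456", Arrays.asList(new Transaction(100)));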

Related

How to evenly distribute data and compute across an Apache Ignite cluster

The purpose is to demonstrate data balancing and compute collocation. To that end, I want to load, say, 100,000 records into the Ignite cluster (using IgniteRepository from ignite-spring) and then call affinityRun with an IgniteRunnable that retrieves data by some search condition and processes it.
Ignite consistently passes the compute job to another node (different from the one where I submit it); however, all 100K records are processed on that single node.
So either my data is not balanced, or affinityRun is not taking effect.
Thanks in advance for any help!
Ignite config
@Bean
public Ignite igniteInstance() {
    IgniteConfiguration config = new IgniteConfiguration();
    CacheConfiguration cache = new CacheConfiguration("ruleCache");
    cache.setIndexedTypes(String.class, RuleDO.class);
    //config.setPeerClassLoadingEnabled(true);
    cache.setRebalanceBatchSize(24);
    config.setCacheConfiguration(cache);
    Ignite ignite = Ignition.start(config);
    return ignite;
}
RestController method to trigger processing
@RequestMapping("/processOnNode")
public String processOnNode(@RequestParam("time") String time) throws Exception {
    IgniteCache<Integer, String> cache = igniteInstance.cache("ruleCache");
    igniteInstance.compute().affinityRun(Collections.singletonList("ruleCache"), 0, new NodeRunnable(time));
    return "done";
}
NodeRunnable -> run()
@Override
public void run() {
    final RuleIgniteRepository igniteRepository = SpringContext.getBean(RuleIgniteRepository.class);
    igniteRepository.findByTime(time).stream().forEach(ruleDO -> System.out.println(ruleDO.getId() + " : " + ruleDO));
    System.out.println("done on the node");
}
I expect 100k processing to be evenly distributed on my 3 nodes.
You execute the logic for a single partition only, partition 0:
igniteInstance.compute().affinityRun(Collections.singletonList("ruleCache"), 0, new NodeRunnable(time));
The data gets distributed across 1024 partitions (by default), and the primary copy of partition 0 is stored on one of the nodes. This code needs to be executed for several partitions or different affinity keys if you want every node to take part in the calculation.
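To make that concrete, here is a minimal sketch that executes the job once per partition, so every node owning a primary copy takes part; it assumes the cache name "ruleCache" and the NodeRunnable class from the question.
// Affinity.partitions() returns the partition count (1024 by default).
int parts = igniteInstance.affinity("ruleCache").partitions();
for (int p = 0; p < parts; p++) {
    // Each call is routed to the node that holds the primary copy of partition p.
    igniteInstance.compute().affinityRun(Collections.singletonList("ruleCache"), p, new NodeRunnable(time));
}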
Thanks all for the help! Especially @dmagda: broadcast worked well; however, with the repository method it ran over the whole data set on the cluster, defeating the purpose of collocation.
I had to throw out JPA and use cache methods instead, which worked wonders.
This is the IgniteRunnable class:
@Override
public void run() {
    final Ignite ignite = SpringContext.getBean(Ignite.class);
    IgniteCache<String, RuleDO> cache = ignite.cache("ruleCache");
    cache.localEntries(CachePeekMode.ALL)
        .forEach(entry -> System.out.println(
            "working on local data, key, value: " + entry.getKey() + " : " + entry.getValue()));
}
And instead of affinityRun I am calling broadcast:
igniteInstance.compute().broadcast(new NodeRunnable(time));
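One caveat, which is my observation rather than part of the original answer: CachePeekMode.ALL also visits backup copies on each node, so with backups configured the broadcast above can process the same entry on more than one node. Scanning only primary copies avoids that:
// Visit only the entries for which this node is the primary owner.
cache.localEntries(CachePeekMode.PRIMARY)
    .forEach(entry -> System.out.println(entry.getKey() + " : " + entry.getValue()));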

Apache Ignite Exception - Failed to initialize cache store (data source is not provided)

I am trying to implement a persistent store for my Ignite cache using CacheJdbcPojoStoreFactory. My cache store factory initialization looks like this:
@Autowired
DataSource dataSource;

@Bean
public CacheJdbcPojoStoreFactory<?, ?> cacheJdbcdPojoStorefactory() {
    CacheJdbcPojoStoreFactory<?, ?> factory = new CacheJdbcPojoStoreFactory<>();
    factory.setDataSource(dataSource);
    return factory;
}
My implementation of the cache looks like this:
CacheConfiguration pesonConfig = new CacheConfiguration();
pesonConfig.setName("personCache");
cacheJdbcdPojoStorefactory.setTypes(jdbcTypes.toArray(new JdbcType[jdbcTypes.size()]));
Collection<QueryEntity> qryEntities = new ArrayList<>();
qryEntities.add(qryEntity);
pesonConfig.setQueryEntities(qryEntities);
pesonConfig.setCacheStoreFactory((Factory<? extends CacheStore<Integer, Person>>) cacheJdbcdPojoStorefactory);
ROCCache<Integer, Person> personCache = rocCachemanager.createCache(pesonConfig);
personCache.put(1, p1);
personCache.put(2, p2);
(I am passing the correct query entities and JdbcTypes; for simplicity I have not shown that code here.)
But when I run this code, I get the stack trace below:
Failed to initialize cache store (data source is not provided).
at org.apache.ignite.internal.util.IgniteUtils.startLifecycleAware(IgniteUtils.java:8385)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.createCache(GridCacheProcessor.java:1269)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1638)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCachesStart(GridCacheProcessor.java:1563)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.startCaches(GridDhtPartitionsExchangeFuture.java:944)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:511)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1297)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)
Caused by: class org.apache.ignite.IgniteException: Failed to initialize cache store (datasource is not provided).
at org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.start(CacheAbstractJdbcStore.java:297)
at org.apache.ignite.internal.util.IgniteUtils.startLifecycleAware(IgniteUtils.java:8381)
... 8 more
When I debug, I can see that my data source parameters are correctly set inside the cacheJdbcdPojoStorefactory object. Where am I going wrong?
Instead of wiring the data source bean and setting it on the factory, you can provide its bean ID, and the factory will fetch it from the application context. Here is an example:
@Bean
public CacheJdbcPojoStoreFactory<?, ?> cacheJdbcdPojoStorefactory() {
    CacheJdbcPojoStoreFactory<?, ?> factory = new CacheJdbcPojoStoreFactory<>();
    factory.setDataSourceBean("data-source-bean");
    return factory;
}
The issue is that the factory is serialized when the cache configuration is deployed, but the data source field is transient, so it is lost along the way. This makes the setDataSource() property very confusing; I think it should be deprecated and reworked.
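To make that concrete: the string passed to setDataSourceBean() must match the name of a DataSource bean registered in the same application context. A minimal sketch of such a bean is shown below; the bean name "data-source-bean" mirrors the answer above, and DriverManagerDataSource with an H2 in-memory URL is a placeholder chosen purely for illustration.
// Any javax.sql.DataSource implementation works here; DriverManagerDataSource
// is a simple, non-pooling one from spring-jdbc.
@Bean(name = "data-source-bean")
public DataSource dataSource() {
    DriverManagerDataSource ds = new DriverManagerDataSource();
    ds.setDriverClassName("org.h2.Driver");
    ds.setUrl("jdbc:h2:mem:test");
    return ds;
}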

Hazelcast No DataSerializerFactory registered for namespace: 0 on standalone process

Trying to set up a Hazelcast cluster with tcp-ip enabled on a standalone process.
My class looks like this:
public class Person implements Serializable {
    private static final long serialVersionUID = 1L;
    int personId;
    String name;

    Person() {}
    //getters and setters
}
Hazelcast is loaded as:
final Config config = createNewConfig(mapName);
HazelcastInstance node = Hazelcast.newHazelcastInstance(config);
Config createNewConfig(String mapName) {
    final PersonStore personStore = new PersonStore();
    XmlConfigBuilder configBuilder = new XmlConfigBuilder();
    Config config = configBuilder.build();
    config.setClassLoader(LoadAll.class.getClassLoader());
    MapConfig mapConfig = config.getMapConfig(mapName);
    MapStoreConfig mapStoreConfig = new MapStoreConfig();
    mapStoreConfig.setImplementation(personStore);
    // attach the store config to the map; this line was missing in the original snippet
    mapConfig.setMapStoreConfig(mapStoreConfig);
    return config;
}
and my Hazelcast config has this:
<tcp-ip enabled="true">
    <member>machine-1</member>
    <member>machine-2</member>
</tcp-ip>
Do I need to populate this tag in my xml?
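For reference, the same join section can also be set up programmatically; this is a minimal sketch against the Hazelcast Config API (my addition, not from the original question), with multicast disabled explicitly because only one join mechanism may be enabled at a time:
// Programmatic equivalent of the <tcp-ip> XML section above.
JoinConfig join = config.getNetworkConfig().getJoin();
join.getMulticastConfig().setEnabled(false);
join.getTcpIpConfig().setEnabled(true)
    .addMember("machine-1")
    .addMember("machine-2");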
I get this error when a second instance is brought up:
com.hazelcast.nio.serialization.HazelcastSerializationException: No DataSerializerFactory registered for namespace: 0
at com.hazelcast.nio.serialization.DataSerializer.read(DataSerializer.java:98)
at com.hazelcast.nio.serialization.DataSerializer.read(DataSerializer.java:39)
at com.hazelcast.nio.serialization.StreamSerializerAdapter.read(StreamSerializerAdapter.java:41)
at com.hazelcast.nio.serialization.SerializationServiceImpl.toObject(SerializationServiceImpl.java:276)
Any help is highly appreciated.
Solved my problem: my pom.xml only referenced hazelcast-wm, so the actual hazelcast jar was not in my bundled jar. Including it fixed the issue.
Note that this same "No DataSerializerFactory registered for namespace: 0" error message can also occur in an OSGi environment when you attempt to use more than one Hazelcast instance within the same VM but initialize the instances from different bundles. The reason is that the com.hazelcast.util.ServiceLoader.findHighestReachableClassLoader() method sometimes picks the wrong class loader during Hazelcast initialization (it won't always pick the class loader you set on the config), and it then ends up with an empty list of DataSerializerFactory instances, hence the error message saying it can't find the requested factory with id 0.
The following shows a way to work around that problem by taking advantage of Java's context class loader:
private HazelcastInstance createHazelcastInstance() {
    // Use the following if you're only using the Hazelcast data serializers
    final ClassLoader classLoader = Hazelcast.class.getClassLoader();
    // Use the following if you have custom data serializers that you need
    // final ClassLoader classLoader = this.getClass().getClassLoader();
    final com.hazelcast.config.Config config = new com.hazelcast.config.Config();
    config.setClassLoader(classLoader);
    final ClassLoader previousContextClassLoader = Thread.currentThread().getContextClassLoader();
    try {
        Thread.currentThread().setContextClassLoader(classLoader);
        return Hazelcast.newHazelcastInstance(config);
    } finally {
        if (previousContextClassLoader != null) {
            Thread.currentThread().setContextClassLoader(previousContextClassLoader);
        }
    }
}

Ninject Inject Common DbContext Into Numerous Repositories

There’s something which I am doing that is working, but I think it can probably be done a lot better (and therefore, with more maintainability).
I am using Ninject to inject various things into a controller. The problem which I needed to solve is that the DbContext for each repository needed to be the same. That is, the same object in memory.
Whilst the following code does achieve that, my common Ninject config file has started to get quite messy, as I have to write similar code for each controller:
kernel.Bind<OrderController>().ToMethod(ctx =>
{
    var sharedContext = ctx.Kernel.Get<TTSWebinarsContext>();
    var userAccountService = kernel.Get<UserAccountService>();
    ILogger logger = new Log4NetLogger(typeof(Nml.OrderController));
    ILogger loggerForOrderManagementService = new Log4NetLogger(typeof(OrderManagementService));
    var orderManagementService = new OrderManagementService(
        new AffiliateRepository(sharedContext),
        new RegTypeRepository(sharedContext),
        new OrderRepository(sharedContext),
        new RefDataRepository(),
        new WebUserRepository(sharedContext),
        new WebinarRepository(sharedContext),
        loggerForOrderManagementService,
        ttsConfig
    );
    var membershipService = new MembershipService(
        new InstitutionRepository(sharedContext),
        new RefDataRepository(),
        new SamAuthenticationService(userAccountService),
        userAccountService,
        new WebUserRepository(sharedContext)
    );
    return new OrderController(membershipService, orderManagementService, kernel.Get<IStateService>(), logger);
}).InRequestScope();
Is there a neater way of doing this?
Edit
Tried the following code. As soon as I make a second request, an exception is thrown saying that the DbContext has already been disposed.
kernel.Bind<TTSWebinarsContext>().ToSelf().InRequestScope();
string baseUrl = HttpRuntime.AppDomainAppPath;
kernel.Bind<IStateService>().To<StateService>().InRequestScope();
kernel.Bind<IRefDataRepository>().To<RefDataRepository>().InRequestScope().WithConstructorArgument("context", kernel.Get<TTSWebinarsContext>());
var config = MembershipRebootConfig.Create(baseUrl, kernel.Get<IStateService>(), kernel.Get<IRefDataRepository>());
var ttsConfig = TtsConfig.Create(baseUrl);
kernel.Bind<MembershipRebootConfiguration>().ToConstant(config);
kernel.Bind<TtsConfiguration>().ToConstant(ttsConfig);
kernel.Bind<IAffiliateRepository>().To<AffiliateRepository>().InRequestScope().WithConstructorArgument("context", kernel.Get<TTSWebinarsContext>());
kernel.Bind<IWebinarRepository>().To<WebinarRepository>().InRequestScope().WithConstructorArgument("context", kernel.Get<TTSWebinarsContext>());
kernel.Bind<IWebUserRepository>().To<WebUserRepository>().InRequestScope().WithConstructorArgument("context", kernel.Get<TTSWebinarsContext>());
kernel.Bind<IOrderRepository>().To<OrderRepository>().InRequestScope().WithConstructorArgument("context", kernel.Get<TTSWebinarsContext>());
kernel.Bind<IInstitutionRepository>().To<InstitutionRepository>().WithConstructorArgument("context", kernel.Get<TTSWebinarsContext>());
kernel.Bind<IUserAccountRepository>().To<DefaultUserAccountRepository>().InRequestScope();
kernel.Bind<IRegTypeRepository>().To<RegTypeRepository>().InRequestScope().WithConstructorArgument("context", kernel.Get<TTSWebinarsContext>());
kernel.Bind<UserAccountService>().ToMethod(ctx =>
{
    var userAccountService = new UserAccountService(config, ctx.Kernel.Get<IUserAccountRepository>());
    return userAccountService;
});
kernel.Bind<IOrderManagementService>().To<OrderManagementService>().InRequestScope();
//RegisterControllers(kernel, ttsConfig);
kernel.Bind<AuthenticationService>().To<SamAuthenticationService>().InRequestScope();
kernel.Bind<IMembershipService>().To<MembershipService>().InRequestScope();
There's something about InRequestScope I'm misunderstanding.
Edit:
.InRequestScope() ensures that everything injected by such a binding receives exactly the same instance whenever HttpContext.Current is the same at injection (creation) time. That means when a client makes a request and the kernel is asked to provide instances bound .InRequestScope(), it returns the same instance for that exact request. When a client makes another request, another unique instance is created.
When the request ends, Ninject will dispose the instance if it implements IDisposable.
However, consider the following scenario:
public class A
{
    private readonly DbContext dbContext;

    public A(DbContext dbContext)
    {
        this.dbContext = dbContext;
    }
}
and binding:
IBindingRoot.Bind<DbContext>().ToSelf().InRequestScope();
IBindingRoot.Bind<A>().ToSelf().InSingletonScope();
You've got yourself a major problem. There are two scenarios in which this can pan out:
- You try to create an A outside of a request. It will fail: when instantiating the DbContext, Ninject looks for HttpContext.Current, which is null at the time, and throws an exception.
- You try to create an A during a request. Instantiation succeeds. However, when you try to use some functionality of A (which accesses the DbContext in turn) after the request or during a new request, it will throw an ObjectDisposedException.
To sum it up, an ObjectDisposedException when you access the DbContext can only be caused by two scenarios:
- you are disposing the DbContext (or some component which in turn disposes the DbContext) before the request is over.
- you are keeping a reference to the DbContext (again, or to some component which in turn references the DbContext) across request boundaries.
That's it. There is nothing complicated about this; it comes down to your object graph.
So what would help is drawing an object graph. Start from the root / request root. When you're done, start from the DbContext and check who calls Dispose() on it. If there is no such usage in your code, it must be Ninject cleaning up when the request ends. That means you need to check all references to the DbContext: someone is keeping a reference across requests.
Original Answer:
You should look into scopes: https://github.com/ninject/ninject/wiki/Object-Scopes
Specifically, .InRequestScope() - or, in case that is not applicable to your problem, .InCallScope() - should be interesting to you.
As you are already using .InRequestScope() for the original binding, I suggest that binding the shared context type .InRequestScope() as well should be sufficient. It means every dependency of the OrderController will receive the same webinar context instance. Furthermore, if someone else in the same request wants a webinar context injected, they will also get the same instance.

NHibernate: Why is my entity not loaded into the first-level cache?

I'm loading an instance twice from the same session, but NHibernate returns two instances, which I assume means that the entity is not in the first-level cache. What can cause this sort of behaviour?
Test:
using (new TransactionScope())
{
    // arrange
    NewSessionUnitOfWorkFactory factory = CreateUnitOfWorkFactory();
    const int WorkItemId = 1;
    const string OriginalDescription = "A";
    WorkItemRepository repository = new WorkItemRepository(factory);
    WorkItem workItem = WorkItem.Create(WorkItemId, OriginalDescription);
    repository.Commit(workItem);

    // act
    using (IUnitOfWork uow = factory.Create())
    {
        workItem = repository.Get(WorkItemId);
        WorkItem secondInstance = repository.Get(WorkItemId);

        // assert
        Assert.AreSame(workItem, secondInstance);
    }
}
Update
The reason for this odd behaviour was this line of code:
NewSessionUnitOfWorkFactory factory = CreateUnitOfWorkFactory();
When I replaced it with this factory implementation:
ExistingSessionAwareUnitOfWorkFactory factory = new ExistingSessionAwareUnitOfWorkFactory(CreateUnitOfWorkFactory(), new NonTransactionalChildUnitOfWorkFactory());
It works as expected.
I'm just guessing here, as you did not include the code for your Repository/UnitOfWork implementations. Reading this bit of code though, how does your Repository know which UnitOfWork it should be acting against?
The first-level cache is at the Session level, and I am assuming the Session is held in your IUnitOfWork. The only thing set on the Repository is the Factory, so my next assumption is that the code for repository.Get() instantiates a new Session and loads the object through it. The next call to Get() will then instantiate another new Session and load the object again. Two different first-level caches, two different objects retrieved.
Of course, if your UnitOfWork is actually encapsulating Transaction, and the Factory is encapsulating Session, then this doesn't actually apply :)