IgniteJdbcThinDriver does not support transactional mode - ignite

We are planning to use the Apache Ignite in-memory database as a replacement for the RDBMS we are currently using.
I am using the IgniteJdbcThinDriver to query and modify data in the database. Although the queries are executed within a transaction scope, they actually run in atomic mode, so rollbacks do not work.
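For reference, this is roughly the pattern we are using (a minimal sketch; the connection URL and the accounts table are placeholders, not our actual schema):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Standard JDBC transaction demarcation; the thin driver accepts these calls,
// but each statement still executes atomically, so the rollback has no effect.
Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1")) {
    conn.setAutoCommit(false);
    try (Statement stmt = conn.createStatement()) {
        stmt.executeUpdate("UPDATE accounts SET balance = balance - 100 WHERE id = 1");
        stmt.executeUpdate("UPDATE accounts SET balance = balance + 100 WHERE id = 2");
    }
    conn.rollback(); // expected to undo both updates, but the changes remain visible
}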
I found out from the documentation that this is a known issue and that the feature would be available in v2.5, so I tried the latest Ignite version (v2.5), but that didn't work either.
Since the documentation states that the Ignite in-memory database is transactional, is there a fix or workaround we can implement to achieve this with Ignite? This feature is crucial for us, and any help would be appreciated.
Thanks

This feature is still in beta, and its release has been delayed because of its complexity. It should be included in 2.6: IGNITE-3478.

Related

DHIS2 and MySQL?

The DHIS2 documentation mentions that it supports MySQL (https://docs.dhis2.org/2.28/en/implementer/html/installation.html); however, that is the last place MySQL is ever mentioned.
Does the current version really support MySQL? If it does, will GIS still work?
From a direct DHIS2 support email:
Up until and including version 2.28, MySQL should work.
However, from version 2.29 we require PostgreSQL as the database platform, together with the PostGIS spatial extension. This means that MySQL is no longer supported.
The minimum version required is PostgreSQL 9.1. However, we recommend upgrading to a later version, as we plan to take advantage of some of the useful features of PostgreSQL 10, such as logical replication and native partitioning, in future versions of DHIS 2.
First of all, it is recommended to use PostgreSQL.
Secondly, most of the testing and QA is done on instances running PostgreSQL.
Thirdly, the PostGIS extension is available only for PostgreSQL, which could become a hurdle for you at a later stage.
Fourthly, the GIS data points and boundaries are stored in a format that is better handled by PostgreSQL's database structures.
So please go with PostgreSQL and chill.

Mondrian support for NoSQL databases like MongoDB

Does the current version of Mondrian support NoSQL databases like MongoDB? I have read some blogs and bug reports about this.
Any help is appreciated.
thanks
Lokesh
Please read the following blog post from Julian Hyde, the creator of Mondrian:
http://julianhyde.blogspot.com.es/2014/03/improvements-to-optiqs-mongodb-adapter.html
There you can see that Julian has been working on a new approach that converts even complex SQL queries into MongoDB queries behind the scenes.
Mondrian does not directly support MongoDB at the moment, since MongoDB does not have a JDBC implementation.
There are a few options. One of them can be set up if you have access to a Pentaho Data Integration server: you can use its thin JDBC implementation, which allows Mondrian to access a SQL-to-Mongo bridge.
There are certainly other ways to set this up, since there are a lot of data federation engines out there.
Not directly, as far as I know. Maybe someone is working on a dialect; is that even possible? Interesting question, though. It may be worth linking the blogs you have found so far.
One solution, however, could be to use the Kettle JDBC driver: it works with Mondrian, and the other end can be any ETL process, so you could use a MongoDB input step, etc.
There is also Apache Drill. You can query MongoDB through standard SQL, and Drill has a JDBC driver, so it may be possible for Mondrian to use that driver.
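For illustration, something along these lines might work (an untested sketch; it assumes a running Drillbit with the mongo storage plugin enabled, and mongo.mydb.sales is a hypothetical plugin.database.collection path):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Requires the Drill JDBC driver on the classpath; Drill translates the SQL
// into MongoDB queries behind the scenes.
Class.forName("org.apache.drill.jdbc.Driver");
try (Connection conn = DriverManager.getConnection("jdbc:drill:drillbit=localhost");
     Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery(
             "SELECT region, SUM(amount) AS total FROM mongo.mydb.sales GROUP BY region")) {
    while (rs.next()) {
        System.out.println(rs.getString("region") + ": " + rs.getDouble("total"));
    }
}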
Uwe

Infinispan keySet() not suitable for production

I decided to use the Infinispan distributed grid to extend my application to support clustering, but I encountered a limitation when using this kind of shared resource.
How can I retrieve all the values or keys in the distributed cache? I am asking because, according to the documentation, the collection methods (such as keySet()) are not recommended for use in production.
Right now I have a local bucket/cache with key/value pairs, but in order to process the values I need to retrieve the keys and iterate through the set.
Set<String> keys = cache.keySet(); // String keys assumed for illustration
With a large number of entries in the local cache, keySet() returns a copy, which puts a heavy load on memory.
I tried the query feature, but it makes network calls when looking up the values, and I don't need that. The query feature also does not support complex filters.
Do you know the best approach for using Infinispan in production?
As this is an experimental phase, I am using the latest Infinispan version.
Thanks a lot.
The Map/Reduce functionality allows you to iterate over all the stored entries, and it also moves the logic to where the data is, so it does not add much of a burden.
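For example, here is a rough sketch against the MapReduceTask API of the Infinispan 5.x-7.x line (the class names and the sum-of-values task are purely illustrative; later versions replaced this API with distributed streams):

import java.util.Iterator;
import java.util.Map;
import org.infinispan.Cache;
import org.infinispan.distexec.mapreduce.Collector;
import org.infinispan.distexec.mapreduce.MapReduceTask;
import org.infinispan.distexec.mapreduce.Mapper;
import org.infinispan.distexec.mapreduce.Reducer;

// The mapper runs on every node, but only against locally stored entries,
// so no keys or values are shipped across the network just to be iterated.
public class ValueSumMapper implements Mapper<String, Integer, String, Integer> {
    @Override
    public void map(String key, Integer value, Collector<String, Integer> collector) {
        collector.emit("sum", value); // emit under one key to aggregate everything
    }
}

public class ValueSumReducer implements Reducer<String, Integer> {
    @Override
    public Integer reduce(String reducedKey, Iterator<Integer> iter) {
        int sum = 0;
        while (iter.hasNext()) {
            sum += iter.next();
        }
        return sum;
    }
}

// Usage: sums all values without materializing keySet() on any single node.
Map<String, Integer> result =
        new MapReduceTask<String, Integer, String, Integer>(cache)
                .mappedWith(new ValueSumMapper())
                .reducedWith(new ValueSumReducer())
                .execute();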
We are using keySet() in production for informational purposes only. Performance does not seem to be a big issue under low data loads, but of course you should use such methods with great care, because they can have a large performance impact depending on how you use the cache. Remote cache queries seem like a pretty handy feature to me.

Grails Updating Production Database

What happens in Grails when you update your model and deploy it to your web server? Does the existing data get overwritten?
If your model has changed, you need to migrate your database; you can use this plugin.
Make sure you change your dbCreate setting from create-drop to update.
There are a couple of good guides you can look at; here is a good one.
Good luck!
The behavior is defined in the GORM configuration, via the dbCreate property of the datasource.
Basically, the default for development is create-drop, which drops and recreates the tables, erasing all content.
For more stable releases, update might be a good setting, keeping in mind that Hibernate prefers to fail rather than risk a conflict.
As always, I would recommend making a backup of the DB before performing such an operation on pre-production and production systems.

JBoss TreeCache vs PojoCache when using invalidation rather than replication

We are setting up a JBoss cluster, and we are building our own distributed cache solution on top of JBoss Cache (we can't use it as a 2nd-level cache for the ORM layer in our case). We want to use invalidation rather than replication as the cache mode. As far as I can see after (very) little testing, both solutions seem to work: objects are put into the cache, and objects seem to be evicted when they are updated on any of the servers.
This leads me to believe that PojoCache with AOP instrumentation is only needed when using replication, so that you can replicate only the updated field values rather than whole objects. Am I correct here, or are there other advantages to using PojoCache over TreeCache in our scenario? And if PojoCache has advantages, do we still need AOP instrumentation and to annotate our entities with @PojoCacheable (yes, we are using JBoss Cache 1.4.1), given that we are not using replication?
Regards
Jonas Heineson
PojoCache has the ability, through AOP, to:
replicate only changed fields and not whole objects. This makes a difference if, for example, your person object contains a huge image of the person and you only change the password.
detect changes, and thus automatically put them on the list to be replicated.
Plain TreeCache does not need AOP, but consequently it cannot replicate individual fields or detect what has changed, so you need to trigger replication yourself.
If you don't replicate, those points are probably irrelevant.
IIRC, you don't need the @PojoCacheable annotation for PojoCache; without it, you need to specify the classes to be enhanced in a different way.
I have the feeling that if you are not replicating, the plain TreeCache will be enough.
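If it helps, here is a rough sketch (from memory, against the JBoss Cache 1.4-era API; the config file name is a placeholder) of using plain TreeCache in invalidation mode:

import org.jboss.cache.PropertyConfigurator;
import org.jboss.cache.TreeCache;

// Plain TreeCache in invalidation mode: every node keeps its own copy of the
// data, and a modification on one node invalidates that entry on its peers,
// forcing them to re-read from the canonical store on the next access.
TreeCache cache = new TreeCache();
// placeholder config file with CacheMode set to INVALIDATION_SYNC
new PropertyConfigurator().configure(cache, "treecache-invalidation.xml");
cache.startService();

cache.put("/users/42", "name", "Jonas"); // stored locally; invalidates /users/42 elsewhere
Object name = cache.get("/users/42", "name");

cache.stopService();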