I am new to Apache Ignite. We are exploring Ignite as a key-value DB to replace the Berkeley DB in our existing application.
Currently, Berkeley DB is embedded in the application, and database operations are performed using the Berkeley DB APIs; we would need similar functionality from Ignite.
The idea is to replace the Berkeley DB API calls with Ignite API calls and use Ignite as the key-value DB.
I could not find any docs on how to use the Ignite libraries within an application.
Any help?
You can find comprehensive documentation on data manipulation here: https://apacheignite.readme.io/docs/data-grid
Also, you can find some examples of Ignite API usage in the GitHub repository:
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/datagrid/CacheApiExample.java
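For a quick feel of the key-value API, here is a minimal embedded sketch (the cache name and values are made up for illustration):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class KeyValueExample {
    public static void main(String[] args) {
        // Start an Ignite node embedded in the application (default configuration).
        try (Ignite ignite = Ignition.start()) {
            // A key-value cache, roughly analogous to a Berkeley DB database handle.
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");

            cache.put(1, "Hello");        // insert
            String value = cache.get(1);  // lookup
            System.out.println(value);

            cache.remove(1);              // delete
        }
    }
}
```

The same cache handle also exposes bulk and atomic operations (putAll, getAndPut, etc.), which map naturally onto typical Berkeley DB usage.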
Can I get some advice on whether it is possible to proceed as in the steps below?
1. SQL Server data is loaded into the Ignite cluster.
2. The data in SQL Server is then changed directly.
-> Is there a way to reflect the changed data in Ignite without reloading it from SQL Server?
When Ignite is used as a cache in front of the database, and changes are made directly to the DB without going through the Ignite cluster, can the already loaded data in the Ignite cache be updated to match?
Is it possible to set only the changed values without loading the data again?
If possible, which part should I configure? Please advise.
I suppose the real question is how to propagate changes applied to SQL Server first to the Apache Ignite cluster. The short answer is that you need to do it yourself, i.e. implement some synchronization logic between the two databases. This should not be a complex task if most data updates come through Ignite and SQL Server-first updates are rare.
As for the general approach, look at implementations of the Change Data Capture (CDC) pattern. There are multiple articles on how to achieve this with external tools, for example, CDC Between MySQL and GridGain With Debezium or this video.
It's worth mentioning that the Apache Ignite community is currently working on its own native CDC implementation.
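As a rough illustration of such synchronization logic, here is a hedged sketch that polls a SQL Server rowversion column and pushes changed rows into an Ignite cache (the table, columns, cache name, and connection string are all assumptions):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class SqlServerToIgniteSync {
    public static void main(String[] args) throws Exception {
        try (Ignite ignite = Ignition.start();
             Connection conn = DriverManager.getConnection(
                 "jdbc:sqlserver://localhost:1433;databaseName=appdb", "user", "password")) {

            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("employees");
            byte[] lastVersion = new byte[8]; // highest rowversion seen so far

            // One polling pass: fetch rows whose rowversion advanced since the last pass.
            try (PreparedStatement ps = conn.prepareStatement(
                     "SELECT id, name, row_ver FROM employees WHERE row_ver > ? ORDER BY row_ver")) {
                ps.setBytes(1, lastVersion);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        cache.put(rs.getInt("id"), rs.getString("name")); // refresh the changed entry
                        lastVersion = rs.getBytes("row_ver");
                    }
                }
            }
        }
    }
}
```

In a real deployment this pass would run on a schedule, and lastVersion would be persisted between runs.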
Take a look at Ignite's external storage integration and the read-through/write-through features. See: https://ignite.apache.org/docs/latest/persistence/external-storage
and https://ignite.apache.org/docs/latest/persistence/custom-cache-store
examples here: https://github.com/apache/ignite/tree/master/examples/src/main/java/org/apache/ignite/examples/datagrid/store
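To make the read/write-through idea concrete, here is a minimal sketch of a custom CacheStore backed by SQL Server (the cache name, store class, and SQL comments are placeholders; real JDBC code would go in the stubs):

```java
import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

public class ReadWriteThroughExample {
    // Skeletal store: the JDBC calls against SQL Server would go in load/write/delete.
    public static class SqlServerStore extends CacheStoreAdapter<Integer, String> {
        @Override public String load(Integer key) {
            // SELECT value FROM my_table WHERE id = ?  (sketch only)
            return null;
        }

        @Override public void write(javax.cache.Cache.Entry<? extends Integer, ? extends String> entry) {
            // INSERT or UPDATE against SQL Server (sketch only)
        }

        @Override public void delete(Object key) {
            // DELETE against SQL Server (sketch only)
        }
    }

    public static void main(String[] args) {
        CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("sqlBackedCache");
        cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(SqlServerStore.class));
        cfg.setReadThrough(true);   // cache misses fall through to SQL Server
        cfg.setWriteThrough(true);  // writes via Ignite are propagated to SQL Server

        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cfg);
            cache.get(42); // a miss triggers SqlServerStore.load(42)
        }
    }
}
```

Note that read/write-through keeps the two stores in sync only for operations that go through Ignite; it does not by itself capture changes made directly in SQL Server.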
I am new to Apache Ignite. Can you please suggest a way to get a large data set (preferably CSVs along with Ignite-compliant DDL statements) that I could use to create the schema and tables in Ignite (with native persistence), to test a few use cases that I have?
You can use Web Console to copy data from a relational DB into Apache Ignite, creating the data structure and project files along the way.
Apply it to an existing database or something like the MySQL Employees sample database.
Web Console connects to an existing internally deployed database using an 'agent' program run locally.
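If you would rather script it, here is a hedged sketch using Ignite's JDBC thin driver to run DDL and bulk-load a CSV (the table, columns, and file path are made up; the COPY command is only supported over the thin driver):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CsvLoadExample {
    public static void main(String[] args) throws Exception {
        // Connect through the Ignite JDBC thin driver (node assumed on localhost).
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
             Statement stmt = conn.createStatement()) {

            // Ignite-compliant DDL; the table definition is illustrative.
            stmt.executeUpdate(
                "CREATE TABLE IF NOT EXISTS employees (" +
                "  emp_no INT PRIMARY KEY," +
                "  first_name VARCHAR," +
                "  last_name VARCHAR)");

            // Bulk-load a local CSV file into the table.
            stmt.executeUpdate(
                "COPY FROM '/data/employees.csv' " +
                "INTO employees (emp_no, first_name, last_name) FORMAT CSV");
        }
    }
}
```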
I want to connect to a specific database in our new Azure Redis cache but can't seem to figure out how to do it.
I've tried adding the database ID to the connection string in various forms, as well as looking for a GetDatabase(dbid) method on the IDistributedCache object (which doesn't seem to exist).
FYI, I want to use the same cache for both testing and production without paying for an additional Redis cache, so I'm open to alternative approaches.
You could also use the ConnectionMultiplexer object directly and access the Database via the GetDatabase method.
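Since this thread is about the .NET client, here is a minimal StackExchange.Redis sketch in C# (the connection string and database indexes are placeholders):

```csharp
using StackExchange.Redis;

class Program
{
    static void Main()
    {
        // One multiplexer shared by the whole app; the connection string is a placeholder.
        var muxer = ConnectionMultiplexer.Connect(
            "mycache.redis.cache.windows.net:6380,password=<secret>,ssl=True,abortConnect=False");

        // Pick a logical database per environment instead of paying for a second cache.
        IDatabase prod = muxer.GetDatabase(0); // production keys
        IDatabase test = muxer.GetDatabase(1); // test keys

        test.StringSet("greeting", "hello");
        string value = test.StringGet("greeting");
        System.Console.WriteLine(value);
    }
}
```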
I am new to Magnolia CMS and the Apache Jackrabbit content repository concepts.
There is a web application which uses Magnolia CMS. Magnolia uses a SQL Server 2012 database as its persistence manager.
The content repository is implemented with Apache Jackrabbit. There are two separate configurations of Magnolia CMS used by the application, referred to as the public and author instances.
Now we are trying to replace the existing Magnolia CMS with a custom ASP.NET MVC 5 application providing all of the same functionality.
I analysed the tables in the SQL Server database and found that the data is stored in a Node_ID / Bundle_Data format which is very difficult to analyse. In short, it is not easy to interpret.
Based on the custom CMS, a new database model for the author instance (SQL Server 2012) has been developed.
Hence, as part of the migration task, I am trying to migrate the old data, stored in SQL Server by the Apache Jackrabbit content repository implementation, to a normal SQL Server 2012 database (per the new database model).
Can anyone tell me whether there are any proven methods or tools available to accomplish this task?
The question is more on the Jackrabbit side than on the Magnolia side, especially since you want to replace Magnolia entirely, not just the persistence layer:
Now we are trying to replace the existing Magnolia CMS with a custom ASP.NET MVC 5 application providing all of the same functionality.
That said, my real question is whether you actually want to replace Jackrabbit entirely, or would rather keep Jackrabbit with your ASP.NET application but on an MS SQL Server data store (which would be my personal suggestion). Otherwise you will lose all of the benefits that Jackrabbit provides.
Jackrabbit does support SQL Server, and I would suggest using it. From https://wiki.apache.org/jackrabbit/DataStore#Configuration-1:
Currently supported are: db2, derby, h2, mssql, mysql, oracle, sqlserver.
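For illustration, a hedged Java sketch of the DbDataStore bean configuration for SQL Server (in practice this is usually declared in repository.xml; the JDBC URL, credentials, and threshold are placeholders):

```java
import org.apache.jackrabbit.core.data.db.DbDataStore;

public class DataStoreSetup {
    public static DbDataStore sqlServerDataStore() {
        DbDataStore ds = new DbDataStore();
        ds.setDatabaseType("sqlserver");
        ds.setDriver("com.microsoft.sqlserver.jdbc.SQLServerDriver");
        ds.setUrl("jdbc:sqlserver://localhost:1433;databaseName=jackrabbit");
        ds.setUser("user");
        ds.setPassword("password");
        ds.setMinRecordLength(1024); // objects smaller than this are stored inline, not in the data store
        return ds;
    }
}
```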
Developing a WebCMS with just ASP.NET and SQL Server, without a content repository layer in between, sounds like developing everything a WebCMS usually comes with from scratch, especially if you want all the functionality that Magnolia offers (versioning, history, search, etc.).
You can check the details regarding the Jackrabbit data store here: http://wiki.apache.org/jackrabbit/DataStore. I am wondering, though, why you or your customer would want to change the content repository's data store to SQL Server. I assume you are not speaking of using SQL Server for persisting the metadata, but of actually storing the binary content in it (a mistake that, by the way, OpenCms, another Java-based open-source WebCMS, made in its architecture design, imho).
Note that with Magnolia, large files are usually not stored in the database itself but on the file system.
https://wiki.magnolia-cms.com/display/WIKI/Setting+up+a+Jackrabbit+persistence+manager#SettingupaJackrabbitpersistencemanager-Datastorageandbackup:
BLOBs are not stored in the database by default when they exceed a certain threshold defined in your Jackrabbit configuration; instead they are saved on the file system. The default threshold used by a Magnolia installation is 1024 bytes. All files above the defined threshold are put onto the file system and not into the database.
In case you really want to get rid of Jackrabbit entirely, use only SQL Server as the persistence layer, and store all binary content in it regardless of size (not recommended), I would write a custom export/import script: query the Jackrabbit repo (standard CMIS protocol), take the content from the file system, read it as a FileInputStream, and write it to the SQL Server DB (example: http://www.java2s.com/Code/Java/Database-SQL-JDBC/StoreBLOBsdataintodatabase.htm). This would be my suggested method.
I don't think there are any out-of-the-box tools for that.
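As a starting point for such a script, here is a hedged Java sketch of the import side, streaming an exported binary into SQL Server (the table, columns, path, and connection string are assumptions):

```java
import java.io.File;
import java.io.FileInputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BlobImport {
    public static void main(String[] args) throws Exception {
        // A binary previously exported from the Jackrabbit data store (path is made up).
        File file = new File("/export/datastore/document.bin");

        try (Connection conn = DriverManager.getConnection(
                 "jdbc:sqlserver://localhost:1433;databaseName=newcms", "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO documents (name, content) VALUES (?, ?)");
             FileInputStream in = new FileInputStream(file)) {

            ps.setString(1, file.getName());
            ps.setBinaryStream(2, in, file.length()); // stream into a VARBINARY(MAX) column
            ps.executeUpdate();
        }
    }
}
```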
I just want to clarify something I read in the Spring Data Neo4j 4.0.0 documentation. The provided way to configure an index and a unique constraint is to define it directly in the web console using a Cypher query, and no longer inside the application (as the indexing annotations previously allowed). Is that correct?
Thank you in advance; your response would be really appreciated!
That's right. Index maintenance and configuration is not the responsibility of the OGM or Spring Data. It can be configured, as you said, via the shell, or you can use Session/Neo4jTemplate.execute with your Cypher statement.
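For example, a hedged sketch using the OGM Session to run the Cypher statements (assuming an OGM version where query(String, Map) is available, as older versions exposed execute instead; the domain package, label, and property are made up):

```java
import java.util.Collections;
import org.neo4j.ogm.session.Session;
import org.neo4j.ogm.session.SessionFactory;

public class IndexSetup {
    public static void main(String[] args) {
        SessionFactory sessionFactory = new SessionFactory("org.example.domain");
        Session session = sessionFactory.openSession();

        // Create a schema index via plain Cypher (label and property are examples).
        session.query("CREATE INDEX ON :Person(email)", Collections.emptyMap());

        // Likewise for a unique constraint.
        session.query("CREATE CONSTRAINT ON (p:Person) ASSERT p.email IS UNIQUE",
                      Collections.emptyMap());
    }
}
```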
This is clearly mentioned in the official docs:
Neo4j's schema indexes are used automatically by Cypher when set up in your database. Spring Data Neo4j (version 4) does not provide facilities for handling that setup out of the box.