I am new to Apache Ignite. I created an Ignite cluster and connected my Node.js thin client to it. It works fine, but it only creates the caches specified in the JS file. Now I want to sync my SQL Server data with Ignite. Any idea how I can do that?
I also tried GridGain, but it does not allow me to create a free cluster.
Please refer to the 3rd Party Persistence documentation regarding RDBMS integration.
GridGain Web Console can help you set up the database integration by generating a Maven project corresponding to your RDBMS data model.
GridGain Community Edition is free to use as long as you deploy it yourself, but 3rd party persistence is also supported by stock Apache Ignite.
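If you'd rather wire it up by hand instead, below is a minimal sketch of a read/write-through Ignite cache backed by SQL Server via CacheJdbcPojoStore. The connection details, the dbo.PERSON table, its ID/NAME columns, and the Person class are all assumptions for illustration:

    import java.sql.Types;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory;
    import org.apache.ignite.cache.store.jdbc.JdbcType;
    import org.apache.ignite.cache.store.jdbc.JdbcTypeField;
    import org.apache.ignite.cache.store.jdbc.dialect.SQLServerDialect;
    import org.apache.ignite.configuration.CacheConfiguration;
    import com.microsoft.sqlserver.jdbc.SQLServerDataSource;

    public class SqlServerSync {
        public static void main(String[] args) {
            CacheJdbcPojoStoreFactory<Long, Person> storeFactory = new CacheJdbcPojoStoreFactory<>();
            storeFactory.setDialect(new SQLServerDialect());
            // Connection details below are placeholders.
            storeFactory.setDataSourceFactory(() -> {
                SQLServerDataSource ds = new SQLServerDataSource();
                ds.setURL("jdbc:sqlserver://localhost:1433;databaseName=mydb");
                ds.setUser("ignite");
                ds.setPassword("secret");
                return ds;
            });

            // Map the assumed dbo.PERSON table to the Person class.
            JdbcType personType = new JdbcType();
            personType.setCacheName("PersonCache");
            personType.setDatabaseSchema("dbo");
            personType.setDatabaseTable("PERSON");
            personType.setKeyType(Long.class);
            personType.setValueType(Person.class);
            personType.setKeyFields(new JdbcTypeField(Types.BIGINT, "ID", Long.class, "id"));
            personType.setValueFields(new JdbcTypeField(Types.VARCHAR, "NAME", String.class, "name"));
            storeFactory.setTypes(personType);

            CacheConfiguration<Long, Person> ccfg = new CacheConfiguration<>("PersonCache");
            ccfg.setCacheStoreFactory(storeFactory);
            ccfg.setReadThrough(true);   // cache misses are loaded from SQL Server
            ccfg.setWriteThrough(true);  // cache updates are written back to SQL Server

            try (Ignite ignite = Ignition.start()) {
                IgniteCache<Long, Person> cache = ignite.getOrCreateCache(ccfg);
                cache.loadCache(null); // preload all rows from SQL Server
            }
        }
    }

    class Person implements java.io.Serializable {
        public long id;
        public String name;
    }

The Node.js thin client can then read and write PersonCache as usual; the cache store itself runs on the server nodes.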
I'm new to IBM IIDR and am considering using it to replicate data between Db2, Kafka, and PostgreSQL, but I can't find an easy way to test this software. I know that the Management Console and Access Server can be obtained from IBM Fix Central, but how can I get the CDC engines to test on my local machine?
Any help would be much appreciated.
You can find the replication engines for Db2, Kafka, and PostgreSQL on Fix Central as well.
For example, the IBM InfoSphere Data Replication CDC for all Linux agents 11.4.0.2 Build x installer has all the Linux x64 engines.
The installer will ask you which database type you would like to use. If you will be replicating from PostgreSQL, please select "PostgreSQL source". If you will replicate to PostgreSQL, select "FlexRep". For Kafka and Db2, simply select the matching entry.
To get started with CDC for Kafka, I recommend starting with this CDC Kafka Installation and Configuration guide. More resources are available on the IBM Data Replication wiki.
To get started with CDC for PostgreSQL as a target, see the JDBC configuration information in Knowledge Center. For PostgreSQL as a source, check here for required database user privileges and settings.
CDC for Db2 has a number of deployment options to choose from, described here.
If you can't find the info you need, reach out to the IBM Data Replication support team.
Hope that helps,
Sarah
IBM Data Replication development
How can I create a Maven project in Java to load an Oracle database table onto the Apache Ignite server?
Also, I'm supposed to create the project on my local machine, while Apache Ignite runs on a remote machine to which I have an SSH connection.
You can use Ignite Web Console to do that. There is a public Ignite Web Console hosted by GridGain.
It will ask you to download the Ignite Web Console Agent, connect to your Oracle database, analyze your data structure, and output a zipped Maven project with data-load functionality out of the box (via loadCache).
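For reference, the load step in the generated project boils down to something like the following sketch (the config file name is a placeholder for the XML the Web Console generates):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;

    public class LoadCaches {
        public static void main(String[] args) {
            // Placeholder for the generated Spring XML configuration.
            try (Ignite ignite = Ignition.start("generated-config.xml")) {
                for (String cacheName : ignite.cacheNames())
                    // Pulls rows from Oracle via the generated CacheJdbcPojoStore.
                    ignite.cache(cacheName).loadCache(null);
            }
        }
    }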
Deployment of the project to the remote machine is out of scope of this exercise.
I have developed my web application using spring-boot and spring-data-jpa and an in-memory database, and I have a couple of questions:
How can I now switch to a persistent, let's say, MySQL database? What do I have to change in my configuration?
Can spring-boot set a database up for me with a specific port, and where does it get stored in my file system?
Does IntelliJ provide a datasource browser for the created database?
I am sure this must be covered somewhere in the endless jungle of spring-boot documentation.
You can change the application properties for the datasource according to the link Gabor Bakos already provided.
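For the MySQL case, a minimal application.properties sketch (host, port, database name, and credentials are placeholders; you also need the MySQL JDBC driver, e.g. the mysql-connector-java artifact, on the classpath):

    # Placeholders: adjust host, port, database name, and credentials.
    spring.datasource.url=jdbc:mysql://localhost:3306/mydb
    spring.datasource.username=dbuser
    spring.datasource.password=dbpass
    spring.datasource.driver-class-name=com.mysql.jdbc.Driver
    # Let Hibernate create/update the schema from your JPA entities.
    spring.jpa.hibernate.ddl-auto=update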
That depends on the type of the database you want to use. HSQLDB and H2 allow you to specify a file path for the database file; however, the database instance itself still runs within your application process. With a full RDBMS like MySQL you have to install and configure the MySQL server yourself and provide the connection data to your Spring Boot application.
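For example, a file-based H2 database can be configured with a single property (the path is a placeholder; H2 stores its files under it):

    spring.datasource.url=jdbc:h2:file:./data/mydb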
Yes, IntelliJ has a datasource browser for all major databases (maybe you have to download the database driver).
I am building an Oracle ADF application that should report whether a WebLogic server is running or not. The WebLogic servers are installed on many different machines, so for each server I need to find out whether it is running, how much OS RAM is available, and so on. All the data will be shown in an ADF table.
Any ideas how to do this?
It looks like you are trying to reimplement Oracle Enterprise Manager.
If you need a monitoring solution, use that one.
Otherwise, it's not an ADF question.
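That said, if you do end up collecting such metrics yourself, the usual route is plain JMX rather than ADF. A minimal sketch, assuming remote JMX is enabled on the monitored JVM (host, port, and the absence of credentials are placeholder assumptions):

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class ServerProbe {
        public static void main(String[] args) throws Exception {
            // Standard JMX-over-RMI URL; host and port are placeholders.
            JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://wls-host:9010/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url, null)) {
                MBeanServerConnection conn = connector.getMBeanServerConnection();
                // The platform OperatingSystem MBean exposes free physical memory.
                ObjectName os = new ObjectName("java.lang:type=OperatingSystem");
                Object freeRam = conn.getAttribute(os, "FreePhysicalMemorySize");
                System.out.println("Free RAM (bytes): " + freeRam);
            }
            // If connect() throws, the JVM (and hence the server) is unreachable.
        }
    }

Querying WebLogic's own runtime MBeans works the same way, but that requires the WebLogic client libraries and its t3-based JMX service URL.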
I wanted to broach the issue of SQL Server's Hadoop distribution called HDInsight.
Given that there is a connection provided to Hadoop, does anyone have experience with HDInsight, and particularly a comparison between the Hadoop / SQL Server connector and HDInsight / SQL Server, from a real-life DTP scenario or a personal one-node installation?
http://sqlmag.com/blog/use-ssis-etl-hadoop
http://www.microsoft.com/en-us/download/details.aspx?id=27584
http://www.microsoft.com/en-us/sqlserver/solutions-technologies/business-intelligence/big-data.aspx
HDInsight is the distribution of Hadoop that Microsoft maintains for use in Azure. You could roughly compare this to Amazon Elastic MapReduce. They both serve the purpose of being a hosted Hadoop service that has almost no management overhead.
The Hortonworks Data Platform for Windows contains the open source changes that Hortonworks and Microsoft have collaborated on to make Hadoop run well on Windows. HDP isn't HDInsight.
In short - you don't need to use HDInsight if you want to run Hadoop in a Windows environment.
While I can't speak directly to using HDInsight and moving data back and forth to SQL Server, I have implemented a data processing solution using SQL Server, Hadoop, and Elastic MapReduce. Barring some data quality issues and BULK INSERT weirdness, the process was painless.
Finally, you ask "do we really want to run Hadoop-size datasets on Windows servers?" Windows performs well and has solid tooling around it. I've been somewhat skeptical about running Hadoop and other Java platform software on Windows because of legacy Java I/O issues and a lack of community support, not because of any performance issues.
The largest issue that Windows shops will find moving to Hadoop is that support in community forums and channels becomes limited once the problem is a Hadoop + Windows issue. It's very easy for people to throw their hands up and say "Nope, not helping out, don't have Windows." With time and adoption, this problem goes away. Besides, nothing says you have to finish on the same platform you start with. You could easily deploy with HDP on Windows and move to HDP on Linux at a later date.
I have put together some SQL Server and Hadoop basics for DBAs that should be helpful.