I need to configure Ignite with multiple caches, each with a different name, within one cluster using Spring XML. The goal is eventually to have one jar file per cache, for ease of deployment.
I used the GridGain Web Console to generate the cluster configuration code for each cache. So, for example, I create two jars for two different caches, and each jar contains an XML file with that cache's configuration.
I copied the two jars into the GridGain libs directory.
I started Ignite from bin/ignite.sh. My understanding was that Ignite would automatically load the two caches, but it doesn't seem to do so.
I have noticed that I should pass a config path when running the ignite.sh script, but I am not sure how to pass multiple files. Should I create a root XML file that wildcard-imports the XML configurations from their various locations, and pass that root file to ignite.sh?
Any help or suggestions on how should I approach this?
I tried the solution below when I had a similar requirement.
1. If you want to use only XML, then you need to pass at least one XML file defining the IgniteConfiguration bean. Note that you can also start Ignite using pure Java configuration, since any XML configuration is fully convertible to Java code.
2. Once your node is started with the basic IgniteConfiguration, you can load additional XML files containing other bean definitions, such as CacheConfiguration. Load those beans using the standard Spring mechanisms for reading beans from XML, and then use the loaded beans to create caches on the Ignite instance started in step 1.
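As a minimal sketch of the two files described above (all file names, bean ids, and the cache name are assumptions, not from the original answer): a basic configuration passed to ignite.sh, plus a separate XML holding a CacheConfiguration bean that can later be loaded with standard Spring bean loading (e.g. Ignition.loadSpringBean) and passed to ignite.getOrCreateCache(...).

```xml
<!-- ignite-base-config.xml: hypothetical basic config passed to ignite.sh (step 1) -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean class="org.apache.ignite.configuration.IgniteConfiguration"/>
</beans>

<!-- cache-a-config.xml: hypothetical cache config packaged in one of the jars (step 2) -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean id="cacheA" class="org.apache.ignite.configuration.CacheConfiguration">
        <property name="name" value="cacheA"/>
    </bean>
</beans>
```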
I have been trying to read about the precedence with which Spring Cloud Config loads several property files, but I cannot find documentation that covers my case. My setup is the following:
I have these property files in the Spring Cloud Config application:
application.properties
application-dev.properties
nameOfApplicationXX.properties
nameOfApplicationXX-dev.properties
I am launching the app nameOfApplicationXX with the dev profile. The problem is that a property in application-dev.properties is not being overridden by the same property present in nameOfApplicationXX.properties. So does application-dev.properties take precedence over nameOfApplicationXX.properties because the former specifies a profile?
What is the precedence of each one? Do you know where this is documented? I cannot find it.
Thanks
If I understood your problem correctly, then the text below, from the Spring Cloud Config reference documentation, is the answer:
"If the repository is file-based, the server creates an Environment from application.yml (shared between all clients) and foo.yml (with foo.yml taking precedence). If the YAML files have documents inside them that point to Spring profiles, those are applied with higher precedence (in order of the profiles listed). If there are profile-specific YAML (or properties) files, these are also applied with higher precedence than the defaults. Higher precedence translates to a PropertySource listed earlier in the Environment. (These same rules apply in a standalone Spring Boot application.)"
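The quoted rules can be sketched with plain java.util.Map layering: sources are applied from lowest to highest precedence, so a later put wins, mirroring a PropertySource listed earlier in the Environment. The file names follow the question; the keys and values are made up for illustration.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PrecedenceSketch {

    // Lowest to highest precedence, per the quoted documentation:
    // application.properties < nameOfApplicationXX.properties
    //   < application-dev.properties < nameOfApplicationXX-dev.properties
    static Map<String, String> resolve() {
        Map<String, String> effective = new LinkedHashMap<>();
        effective.putAll(Map.of("db.url", "default", "timeout", "10")); // application.properties
        effective.putAll(Map.of("db.url", "per-app"));                  // nameOfApplicationXX.properties
        effective.putAll(Map.of("db.url", "shared-dev"));               // application-dev.properties
        effective.putAll(Map.of("timeout", "30"));                      // nameOfApplicationXX-dev.properties
        return effective;
    }

    public static void main(String[] args) {
        // application-dev.properties wins over nameOfApplicationXX.properties,
        // because profile-specific files beat profile-less ones.
        System.out.println(resolve().get("db.url"));  // shared-dev
        System.out.println(resolve().get("timeout")); // 30
    }
}
```

This matches what you observed: the profile-specific shared file overrides the app-specific default file.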
Spring Cloud Config reference link : Documentation
Note: from the problem statement above, it appears you are using file-based profiles in the Spring Cloud Config server. The server returns a list of property sources, one per matching file, loaded as classpath resources.
To override the default implementation, I have done the same, and the reference code is available on GitHub: Source Code
Not a similar issue, but it may help you: reference issue
Hope this helps you fix the problem described above.
I need to implement Apache Ignite in an existing Node.js project. I have read the Apache Ignite documentation but was unable to find where to start or how to migrate my existing queries. I created a node in Java, but how do I connect the Node.js thin client to that node?
Have you tried copying and modifying some of the Node.js examples, such as SqlExample.js?
More are available in the Ignite Node.js docs section.
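As a starting point, a thin-client connection modelled on those examples might look like the sketch below. The host/port, cache name, and SQL are placeholders; the package is the official apache-ignite-client npm module, and the Java node you started must have the thin-client port open (10800 by default).

```javascript
// Hypothetical sketch based on the official Node.js thin client examples.
// Install first: npm install apache-ignite-client
const { IgniteClient, IgniteClientConfiguration, SqlFieldsQuery } =
    require('apache-ignite-client');

async function run() {
    const client = new IgniteClient();
    try {
        // Connect to the node you created in Java (default thin-client port).
        await client.connect(new IgniteClientConfiguration('127.0.0.1:10800'));
        const cache = await client.getOrCreateCache('PersonCache');

        // Run one of your existing SQL queries through the thin client.
        const cursor = await cache.query(new SqlFieldsQuery('SELECT name FROM Person'));
        for (const row of await cursor.getAll()) {
            console.log(row);
        }
    } finally {
        client.disconnect();
    }
}

run().catch(console.error);
```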
I am using https://apacheignite-mix.readme.io/v1.7/docs/automatic-persistence to load data from a database into the Ignite cache.
I run my code in Ignite client mode and want to load the data into the Ignite cluster.
It looks like I have to put my user-code jar and its dependencies into $IGNITE_HOME/libs to make my code work.
I would ask:
Did I do the right thing by putting these jars into $IGNITE_HOME/libs to make my code work?
If I have to put my jars into $IGNITE_HOME/libs, then whenever more tables need to be loaded after some tables are already in the cache, do I have to shut down and restart the cluster so the servers pick up the new classes? If so, the data already in the cache will be lost, since it resides in memory, and will have to be reloaded?
There are two ways to load the data: through IgniteDataStreamer and through a CacheStore implementation. See this page for details: https://apacheignite.readme.io/docs/data-loading
In the case of IgniteDataStreamer, you load the data from the DB on the client and stream it into the cluster. In this case you don't need to add any classes to the server classpath.
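A minimal client-side streaming sketch follows; the config file name, cache name, and data are assumptions, and in a real loader the loop body would iterate a JDBC ResultSet instead of generating values.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class StreamFromDb {
    public static void main(String[] args) {
        Ignition.setClientMode(true); // join the cluster as a client node
        // Assumes a cache named "personCache" already exists in the cluster.
        try (Ignite ignite = Ignition.start("client-config.xml");
             IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("personCache")) {
            // In a real loader you would read rows from a JDBC ResultSet here.
            for (long id = 0; id < 1_000; id++) {
                streamer.addData(id, "person-" + id); // batched and streamed to servers
            }
        } // close() flushes any remaining buffered entries
    }
}
```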
In the case of CacheStore, you load data from the DB on the server side. Here you need to explicitly deploy (add to the libs folder) the implementation of the CacheStore and anything it depends on. If you're using Automatic Persistence, the implementation is already there, so there is nothing to deploy in that case either.
You're never required to have model classes on server classpath and you can dynamically change the schema without cluster restart. See this page for more information: https://apacheignite.readme.io/docs/binary-marshaller
I publish a RESTful web service with Talend ESB and want to run it in the Talend Runtime.
I want to use some variables from my own custom config file, e.g. database credentials.
This file should be external to the OSGi deployment artifact so it can be modified after compilation.
Where can I place this file, and how do I reference it in the Talend job design?
There are two ways we can load external config files into a Talend job:
1. Using the Implicit Context Load option, as shown below
2. Using tFileInputProperties and tContextLoad
Talend has a built-in method (called Implicit Context Load) for importing your own configuration file and accessing those values in your code. It works the same for both Talend ESB and the Platform for Data Management, and takes literally a couple of minutes at most to set up.
In ESB Studio, go to File --> Edit Project Properties. In the Project Settings window, select Job Settings --> Implicit Context Load. Choose the File option, set the path, and choose a field separator. The file layout is simple: a key and a value separated by the field separator you chose.
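For example, if you chose ; as the field separator, the external file could look like this (the path, keys, and values are made up; each key must also exist as a context variable in the job):

```
db_host;localhost
db_port;5432
db_user;talend
db_password;secret
```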
I use this for database credentials and other things, as you mentioned. In your job you need to add each key as a context variable, and Talend will automatically load the values for you at run time. It makes no difference whether it is a Data Integration job or an ESB service running on OSGi; this uniformity across products is a great benefit of using Talend.
I want to know if it's possible to create a JDBC Realm configuration in GlassFish 3.1 without the admin console, similar to creating a Data Source with glassfish-resources.xml.
When developers download my Git repository, they don't want to configure GlassFish by hand; it should be configured at deployment time.
Best regards
Mounir
I'd create a shell script or batch file which runs the required asadmin commands.
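For example, a script along these lines; the realm name, JNDI name, and table/column names are placeholders, so check `asadmin create-auth-realm --help` for the exact options in your GlassFish version:

```shell
#!/bin/sh
# Creates a JDBC auth realm without the admin console (all names are placeholders).
asadmin create-auth-realm \
  --classname com.sun.enterprise.security.auth.realm.jdbc.JDBCRealm \
  --property 'jaas-context=jdbcRealm:datasource-jndi=jdbc/mydb:user-table=users:user-name-column=username:password-column=password:group-table=user_groups:group-name-column=groupname:digest-algorithm=SHA-256' \
  myJdbcRealm
```

Developers can then run this script once against their local domain instead of clicking through the admin console.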
Here you can find a complete example: Creating JDBC Objects Using asadmin
(By the way, the DTD of the GlassFish Resources Descriptor does not contain any realm-related tag, so there is no glassfish-resources.xml equivalent of create-auth-realm.)