I am using https://apacheignite-mix.readme.io/v1.7/docs/automatic-persistence to load data from a database into an Ignite cache.
I run my code in Ignite client mode and want to load the data into the Ignite cluster.
It looks like I have to put my user code JAR and its dependent JARs into $IGNITE_HOME/libs to make my code work.
I would ask:
Did I do the right thing by putting these JARs into $IGNITE_HOME/libs to make my code work?
If I have to put my JARs into $IGNITE_HOME/libs, what happens when more tables need to be loaded after I have already loaded some tables into the cache? Do I have to shut down and restart the cluster so that the servers load the new classes? If so, the data in the cache will be lost, since it resides in memory, and will have to be reloaded.
There are two ways to load the data: through IgniteDataStreamer and through a CacheStore implementation. See this page for details: https://apacheignite.readme.io/docs/data-loading
In the case of IgniteDataStreamer, you load the data from the DB on the client and stream it into the cluster. In this case you don't need to add any classes to the server classpath.
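For illustration, here is a minimal, hedged sketch of the streamer approach; the cache name, table, SQL query, and JDBC URL are placeholders rather than anything from the original question:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteDataStreamer;
    import org.apache.ignite.Ignition;

    public class LoadWithStreamer {
        public static void main(String[] args) throws Exception {
            Ignition.setClientMode(true);                      // this process joins the cluster as a client node
            try (Ignite ignite = Ignition.start("client-config.xml");
                 IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("personCache");
                 Connection conn = DriverManager.getConnection("jdbc:h2:mem:testdb");
                 ResultSet rs = conn.createStatement().executeQuery("SELECT id, name FROM person")) {
                while (rs.next()) {
                    // addData buffers entries locally and ships them to the server nodes in batches
                    streamer.addData(rs.getLong("id"), rs.getString("name"));
                }
            }
        }
    }

The cache ("personCache" here) only has to be defined in the cluster; none of the application classes used on the client need to be on the server classpath for this to work.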
In the case of CacheStore, the data is loaded from the DB on the server side. In this case you need to explicitly deploy (add to the libs folder) the CacheStore implementation and anything it depends on. If you're using Automatic Persistence, the implementation is already there, so there is nothing extra to deploy.
You're never required to have model classes on the server classpath, and you can dynamically change the schema without a cluster restart. See this page for more information: https://apacheignite.readme.io/docs/binary-marshaller
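To make the binary-objects point more concrete, here is a small hedged sketch, assuming ignite is an already started node; the cache name, type name, and fields are made up for illustration:

    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.binary.BinaryObject;

    // Build and store an entry without deploying any model class to the servers.
    IgniteCache<Long, BinaryObject> cache = ignite.cache("personCache").withKeepBinary();

    BinaryObject person = ignite.binary().builder("Person")    // type name only, no Person class required
        .setField("name", "John")
        .setField("salary", 1000)
        .build();

    cache.put(1L, person);

Adding or dropping fields on such binary objects does not require redeploying classes or restarting server nodes.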
When I try to execute a JMeter (version 5.1.1 and 5.2.1) recorded script in non-GUI mode using distributed testing, it displays the "java.lang.NullPointerException" error shown below while generating the HTML report. Also, the JTL report is created as an empty file without any data.
Note: this error occurs only when I place a CSV Data Set Config (Config Element) in the test plan. When I remove/disable it, the HTML and JTL reports are generated without any error. However, I can't skip this CSV Data Set Config element during test execution.
Please let me know if there is any other solution to overcome this issue.
Thanks in advance.
You're highlighting not the cause but rather a consequence. You should instead be paying attention to the Summariser output, which states summary = 0,
which basically means that no Samplers were executed, so your test script execution on the slaves failed somewhere. First of all, I would recommend checking jmeter.log on the master and the jmeter-server.log files on the remote machines; most probably you will be able to figure out the root cause from there.
Quick checklist:
Make sure to use the same Java version on the master and the slaves
Make sure to use the same JMeter version (preferably the latest one) on the master and the slaves
If your test relies on JMeter Plugins - you need to install all the plugins used in the test onto all the slaves
If you define some properties in the user.properties file, you need to do the same on all the remote machines (or alternatively pass them via the -G command-line argument)
If you're using external 3rd-party files (CSV files, files to be uploaded, etc.) - you will need to manually copy them to the slave machines
Double check Remote hosts and RMI configuration to ensure that the slaves can communicate with the master in order to send Sample Results back to it. Also make sure that the relevant ports are open in Windows Firewall
More information: How to Perform Distributed Testing in JMeter
The issue seems to be with the CSV file path. Make sure you are providing the correct path in the CSV Data Set Config. Normally this happens when JMeter is not able to read the data from that location.
I need to configure Ignite with multiple caches with different names within one cluster using Spring XML. The goal is to eventually have one JAR file per cache for ease of deployment.
I used the GridGain Web Console to generate the cluster configuration code for each cache. So, for example, I created two JARs for two different caches; each JAR has an XML file with the cache configs.
I copied the two JARs to the GridGain/libs directory.
I started Ignite from bin/ignite.sh. My understanding is that Ignite should automatically load the two caches, but it doesn't seem to do so.
I have noticed that I should pass the config path when running the ignite.sh script; however, I am not sure how to pass multiple files. Should I create a root XML file that wildcard-imports multiple XML configurations from multiple locations and pass that root XML to the ignite.sh script?
Any help or suggestions on how should I approach this?
I tried the solution below when I had a similar requirement.
1- If you want to use only XML, then you need to pass at least one XML file specifying the IgniteConfiguration bean. Please note that you can also start Ignite using pure Java config, since the XML configuration is fully convertible to Java code.
2- Once you start your node with that basic IgniteConfiguration, you can load other XML files containing additional bean configs such as CacheConfiguration. Load those beans using the classic Spring methods (for loading a bean from XML) and use the loaded beans to create the caches on the Ignite instance started in step 1 (see the sketch below).
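A rough sketch of this in Java, assuming the base node config lives in base-config.xml and each cache JAR ships an XML file defining a CacheConfiguration bean; all file names and bean ids below are placeholders, not something generated by the Web Console:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class StartNodeWithCaches {
        public static void main(String[] args) {
            // Step 1: start the node from the base configuration (IgniteConfiguration only).
            Ignite ignite = Ignition.start("base-config.xml");

            // Step 2: load CacheConfiguration beans from the per-cache XML files and create the caches.
            CacheConfiguration<?, ?> cfg1 = Ignition.loadSpringBean("cache1-config.xml", "cache1");
            CacheConfiguration<?, ?> cfg2 = Ignition.loadSpringBean("cache2-config.xml", "cache2");

            ignite.getOrCreateCache(cfg1);
            ignite.getOrCreateCache(cfg2);
        }
    }

With this approach, a new cache definition can be added by dropping in another XML/JAR and calling getOrCreateCache, without changing the base configuration passed to ignite.sh.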
I need to configure JBoss AS 7.2 to start up only the services required by the project.
What is the best approach to customize the JBoss AS 7.2 configuration and set it to a user-defined configuration?
I'm intending to use:
JAAS, EJB, JSF ..
If I understand your request correctly, what you want is to remove the unnecessary subsystems.
In standalone mode, you can maintain multiple XML configurations for your different uses and decide which standalone config to use at startup, which allows you to adapt to various needs quickly.
In domain mode, it's even easier, as you can define various profiles, each of them with specific subsystems present or not, and then assign the appropriate profile to your server group.
Removing subsystems through XML modification or the CLI is simple; adding them back through the CLI sometimes requires figuring out some of the "default" entries expected when recreating the subsystems, but once those are figured out, it is easy as well.
The key element in your case is to make sure you do not remove too many of the subsystems, as they sometimes have dependencies on one another.
I am designing an AWS deployment solution for a new dynamic website project. I have acquired an EC2 instance for testing the environment. I need some help on how to do load testing on an EC2 instance to determine how many HTTP requests it can safely handle... P.S. I am new to the AWS platform.
Thanks...
RedLine offers an EC2 Load Testing solution that will automate the distribution of load tests on your own EC2 instances.
Late to the party but could help someone in the future:
A possible tool for load tests, stress tests, whatever you may call them, is Apache JMeter, but there are plenty of alternatives.
A simple starting setup, further explained in this excellent tutorial on DigitalOcean, can consist of a Thread Group containing an HTTP Request sampler and a View Results in Table listener. The Thread Group is used to configure the number of "clients" you want to simulate. The HTTP Request sampler is used to configure the server's properties (hostname, path, etc.). The View Results in Table listener outputs a handy CSV file that can be used to calculate means, compare different types of EC2 instances, etc.
JMeter is a beautiful program with a GUI that can be run on your local workstation, producing an XML test plan that can then be executed on another EC2 instance, for example. You can even make simple manual edits to the XML file on your server afterward, if necessary.
Take a look at Amazon's testing policy to make sure you're not doing anything illegal.
A couple of quick points:
Set the environment up exactly like it's supposed to run. If there's a database involved, you'll want to involve that in the testing too. Synthetic CPU-based benchmarks won't help you much, since normally very little of the time spent replying to HTTP requests is actual CPU time.
A recommendation is to use a service for the benchmarking. Setting up load testing is not without its complexities, and unless you consider benchmarking your core business, you're probably better off using something like Neustar to load and measure your site (there are many such services; this one isn't necessarily the best fit for you, it's just one pulled from memory).
Of course you can set up a load test yourself, but getting it done right is not something that can be described in a few sentences. There are very well-paid people who do only that for a living :)
There is good experience using the curl-loader (aka Davilka) tool, also in an Amazon EC2 environment:
http://curl-loader.sourceforge.net
Currently I am using MSDeploy to build and deploy to several machines using TeamCity. In my current scenario, I need to build, package, and deploy to Dev. After this I need to deploy the same package to the Test and Live servers (which are on a different domain). I understand how we do it, but the problem is that the web.config transformation for the Test and Live configs only happens if we build a package for them. It means the package that is created for Dev cannot be reused, as the web.config transformation was only applied for the Dev config. I also know that we can change the web.config when unpackaging, but those parameters are very limited. We have a lot of changes, not just the connection string or DB changes.
Another solution is to add another step that builds packages for Test and Live as part of the Dev deployment, but that means a lot of copying to the remote servers, once for Test and once for Live, which is very time-consuming due to the different domains.
Can you please advise what the best solution is in this scenario, so that I can use TeamCity to publish to Dev, Test, and Live using the same package and different web configs in one go?
To configure items at deployment time that are not automatically created for you, you can add a file named parameters.xml to your project and extend what you want to make available at deployment time.
Here's some documentation on the approach: Using Deployment Parameters for Web.Config File Settings.