When I try to run two Ignite servers, I get the following errors:
1) Failed to find class with given class loader for unmarshalling.
2) Caused by: java.lang.ClassNotFoundException: rg.netlink.app.config.ServerConfigurationFactory$2
Even with peerClassLoadingEnabled set to true on both servers, this error persists.
Please help.
How can I run two Ignite servers? Has anybody successfully run two Ignite servers?
Can you figure out what ServerConfigurationFactory$2 is? The $2 suffix means it is an anonymous inner class declared inside ServerConfigurationFactory.
I would imagine that for some reason your Ignite node's configuration contains a class that is absent on the other nodes. Nodes pass their configuration to each other on discovery, so this will cause problems. Make sure you only use stock Ignite configuration classes and do not override them with custom implementations/wrappers.
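As a rough illustration (Java; the discovery settings are just an example), keep the configuration built only from stock Ignite classes. As far as I understand, peer class loading only helps for compute tasks and closures, not for classes referenced from the node configuration that is exchanged on discovery:

import java.util.Arrays;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

// Sketch: a configuration that uses only stock Ignite classes, so every node can deserialize it.
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setPeerClassLoadingEnabled(true); // covers compute tasks/closures, not configuration classes
cfg.setDiscoverySpi(new TcpDiscoverySpi()
    .setIpFinder(new TcpDiscoveryVmIpFinder()
        .setAddresses(Arrays.asList("127.0.0.1:47500..47509")))); // example addresses
Ignite ignite = Ignition.start(cfg);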
Related
While upgrading the Ignite deployment in Kubernetes (EKS) for the Log4j vulnerability, I get the error below:
[ignite-1] Caused by: class org.apache.ignite.spi.IgniteSpiException: BaselineTopology of joining node (54b55de4-7742-4e82-9212-7158bf51b4a9) is not compatible with BaselineTopology in the cluster. Joining node BlT id (4) is greater than cluster BlT id (3). New BaselineTopology was set on joining node with set-baseline command. Consider cleaning persistent storage of the node and adding it to the cluster again.
The setup is a 3-node cluster with native persistence enabled (PVC). This has happened many times in our journey with Apache Ignite, even though we followed the official guide.
I cannot clean the storage because the pod keeps restarting; by the time I get a shell into the pod, it crashes and restarts again.
This might be due to a wrong startup order; starting the nodes manually in reverse order may resolve it, but I'm not sure that is possible in K8s. Another possible cause is baseline auto-adjustment, which might change your baseline unexpectedly; I suggest you turn it off if it's enabled.
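For what it's worth, a minimal sketch of turning the auto-adjustment off from code (this assumes Ignite 2.8 or newer, where the baseline auto-adjust API exists):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

// Disable baseline auto-adjustment on an already running node.
Ignite ignite = Ignition.ignite();                  // obtain the local node instance
ignite.cluster().baselineAutoAdjustEnabled(false);  // stop automatic baseline changes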
One workaround to clean the DB of a failing pod (quite tricky) is to replace the Ignite image with a simple image like plain Debian or Alpine (just to get CLI access) while keeping the same PVC attached, and once you fix the persistence issue, set the Ignite image back. The other option is to access the underlying PV directly, if possible, and do the surgery in place.
I have two different Ignite deployments. In both, the Apache Ignite server is started from a Java program. The program sets the work directory, configures the logger, and then starts the server.
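The startup code looks roughly like this (the path and logger choice here are illustrative, not the exact production code):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.logger.slf4j.Slf4jLogger;

// Simplified server startup: explicit work directory, custom logger, then start.
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setWorkDirectory("/opt/app/ignite-work");  // example path
cfg.setGridLogger(new Slf4jLogger());          // logger from the ignite-slf4j module
Ignite server = Ignition.start(cfg);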
I have a web application (an Apache Ignite client) that connects to the respective Apache Ignite server and performs operations on a cache.
What I am observing is that in one environment some files are created inside the work/marshaller directory, while in the other deployment the marshaller folder is empty.
Persistence is not enabled.
Can anyone explain?
Thanks
Ignite writes to the marshaller directory when a corresponding type is used. This is because it is possible for all nodes that knew which class corresponds to a given typeId to leave the cluster, and then the remaining nodes can no longer make sense of the data they possess.
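For example (a rough sketch; the cache and class names are made up), the first time a node serializes a user type, the typeId-to-class-name mapping is written under work/marshaller, even with persistence disabled:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class MarshallerDemo {
    static class Person {
        String name;
        Person(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<Integer, Person> cache = ignite.getOrCreateCache("people"); // example cache name
        cache.put(1, new Person("Alice")); // first use of Person writes a mapping under work/marshaller
    }
}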
I've noticed strange behaviour in Apache Ignite that occurs fairly reliably on my 5-node Apache Ignite cluster but can be replicated even with a two-node cluster. I use Apache Ignite 2.7 for .NET on Linux, deployed in a Kubernetes cluster (each pod hosts one node).
The problem is as follows. Assume we've got a cluster consisting of two Apache Ignite nodes, A and B. Both nodes start and initialize. A couple of Ignite services are deployed on each node during the initialization phase; among them, a service named QuoteService is deployed on node B.
So far so good; the cluster works as expected. Then node B crashes or gets stopped for whatever reason and restarts. All the Ignite services hosted on node B get redeployed, and the node rejoins the cluster.
However, when a service on node A tries to call the QuoteService expected to be available on node B, an exception is thrown with the following message: Failed to find deployed service: QuoteService. This is strange, as the line registering the service did run during the restart of node B:
services.DeployMultiple("QuoteGenerator", new Services.Ignite.QuoteGenerator(), 8, 2);
(Deploying the service as a singleton does not make any difference.)
A restart of either node A or node B separately does not help. The problem can only be resolved by shutting down the entire Ignite cluster and restarting all the nodes.
This condition can be reproduced even when 5 nodes are running.
This bug report may look a bit unspecific, but it is hard to give concrete reproduction steps, as replication involves setting up at least two Ignite nodes and stopping and restarting them in a sequence. So let me pose the questions this way:
1. Have you ever noticed such a condition, or have you received similar reports from other users?
2. If so, what steps can you recommend to address this problem?
3. Should I wait for the next version of Apache Ignite, as I read that the service deployment mechanism is currently being overhauled?
UPD:
I am getting a similar problem on a running cluster even when I don't stop/start nodes. It seems to have a different genesis, so I will open a separate question about it.
I've figured out what caused the described behavior (although I don't understand why exactly).
I wanted to ensure that the Ignite service is deployed only on the current node, so I used the following C# code to deploy the service:
var services = ignite.GetCluster().ForLocal().GetServices();
services.DeployMultiple("FlatFileService", new Services.Ignite.FlatFileService(), 8, 2);
When I changed my code to rely only on a NodeFilter to limit the deployment of the service to a specific set of nodes and got rid of "GetCluster().ForLocal().", the bug disappeared. The final code is as follows:
var flatFileServiceCfg = new ServiceConfiguration
{
    Service = new Services.Ignite.FlatFileService(),
    Name = "FlatFileService",
    NodeFilter = new ProductServiceNodeFilter(),
    MaxPerNodeCount = 2,
    TotalCount = 8
};

var services = ignite.GetServices();
services.DeployAll(new[] { flatFileServiceCfg, ... other services... });
It is still strange, however, that the old code worked until the topology changed.
I am wondering if it's possible to start an Apache Ignite client node by passing configuration parameters to the JVM. For instance, we can start a server node by running "org.apache.ignite.startup.cmdline.CommandLineStartup" and passing config parameters to it.
I know it's possible to start a node from inside a class implementation by initializing the Ignite interface and explicitly joining a cluster.
The easiest way to start a client node is to invoke the Ignition.start(..) method. For more details, you can refer to any example shipped with Ignite and to this documentation page: https://apacheignite.readme.io/docs/clients-vs-servers
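For illustration (the file name is just an example), you can either point Ignition.start(..) at a Spring XML configuration or build the configuration in code and mark the node as a client:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

// Option 1: start a client node from a Spring XML config file (name is illustrative).
Ignite clientFromXml = Ignition.start("client-config.xml");

// Option 2: build the configuration in code and mark the node as a client.
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setClientMode(true);
Ignite client = Ignition.start(cfg);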
What's needed to run multiple GemFire/Geode clusters on one machine? I'm trying to test WAN gateways locally before setting them up on servers.
I have one cluster (i.e. gemfire.distributed-system-id=1) up and running with one locator and one server.
I am trying to set up a second cluster (i.e. gemfire.distributed-system-id=2), but receive the following error when attempting to connect to the locator in cluster 2:
Exception caused JMX Manager startup to fail because: 'HTTP service failed to start'
I assume the error is due to a JMX Manager already running in cluster 1, so I'm guessing I need to start a second JMX Manager on a different port in cluster 2. Is this a correct assumption? If so, how do I set up the second JMX Manager?
Your assumption is correct: the exception is thrown because the first cluster's members already started some services (Pulse, JMX manager, etc.) on the default ports.
You basically want to make sure the properties http-service-port and jmx-manager-port (not an exhaustive list; there are other properties you need to look at) are different in the second cluster.
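For example, a rough sketch of the second cluster's properties (the port values are arbitrary, just different from the defaults used by cluster 1):

# gemfire.properties for cluster 2 (port values are illustrative)
distributed-system-id=2
# move the JMX manager off the default port 1099
jmx-manager-port=2099
# move the embedded HTTP service (Pulse, etc.) off the default port 7070
http-service-port=7071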
Hope this helps.
Cheers.