IgniteSpiException while trying to connect to Ignite cache from my local machine

I have an application that uses Ignite as a cache provider. It uses the discovery URL:
127.0.0.1:47500..47509.
Now I want to connect to this cache using Java code in Eclipse. I have written the following code:
IgniteConfiguration cfg = new IgniteConfiguration().setBinaryConfiguration(
        new BinaryConfiguration().setNameMapper(new BinaryBasicNameMapper(true)));
cfg.setPeerClassLoadingEnabled(true);
TcpDiscoveryMulticastIpFinder discoveryMulticastIpFinder = new TcpDiscoveryMulticastIpFinder();
Set<String> set = new HashSet<>();
set.add("127.0.0.1:47500..47509");
discoveryMulticastIpFinder.setAddresses(set);
TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
discoverySpi.setNetworkTimeout(5000);
discoverySpi.setClientReconnectDisabled(true);
discoverySpi.setIpFinder(discoveryMulticastIpFinder);
cfg.setDiscoverySpi(discoverySpi);
Ignite ignite = Ignition.getOrStart(cfg);
IgniteCache<Integer, Person> cache = ignite.getOrCreateCache("person");
// Code to call cache put or get here
// putCache(cache);
// getCache(cache);
System.out.println("All available caches on server: " + ignite.cacheNames());
But on running it, I get the following error:
Caused by: class org.apache.ignite.spi.IgniteSpiException: Local node and remote node have different version numbers
(node will not join, Ignite does not support rolling updates, so versions must be exactly the same)
[locBuildVer=2.7.5, rmtBuildVer=2.8.0,
locNodeAddrs=[aschauha-t470.apac.tibco.com/0:0:0:0:0:0:0:1, aschauha-t470.apac.tibco.com/10.98.51.252, /127.0.0.1, /192.168.0.101],
rmtNodeAddrs=[aschauha-t470.apac.tibco.com/0:0:0:0:0:0:0:1, aschauha-t470.apac.tibco.com/10.98.51.252, /127.0.0.1, /192.168.0.101],
locNodeId=e66eeea7-5427-4fe7-8368-884641af534b, rmtNodeId=35ad5deb-d212-4a85-812e-ec7d44caa4a8]
at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.checkFailedError(TcpDiscoverySpi.java:1997)
at org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:1116)
at org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:427)
at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2099)
at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:297)
... 10 more
Please help me resolve this issue.
Also, is the code above the right way of connecting to Ignite acting as a cache for an application?

The error message tells you everything you need:
Caused by: class org.apache.ignite.spi.IgniteSpiException: Local node and remote node have different version numbers
(node will not join, Ignite does not support rolling updates, so versions must be exactly the same)
[locBuildVer=2.7.5, rmtBuildVer=2.8.0,
Ignite only supports clusters whose nodes all run the same version, so you should either upgrade your local node to 2.8.0 or downgrade the remote node to 2.7.5.
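For scripting around this, the two build versions can be read straight off the exception text. A small helper along these lines (the class and method names are mine, not part of the Ignite API) makes the check explicit:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class VersionCheck {
    // Pull locBuildVer / rmtBuildVer out of the discovery error message so a
    // startup script can fail fast with a clear diagnostic.
    static String[] extractVersions(String msg) {
        Matcher m = Pattern
            .compile("locBuildVer=([^,\\]]+).*?rmtBuildVer=([^,\\]]+)")
            .matcher(msg);
        return m.find() ? new String[] { m.group(1), m.group(2) } : null;
    }

    public static void main(String[] args) {
        String line = "[locBuildVer=2.7.5, rmtBuildVer=2.8.0,";
        String[] v = extractVersions(line);
        System.out.println("local=" + v[0] + " remote=" + v[1]
            + " match=" + v[0].equals(v[1])); // prints local=2.7.5 remote=2.8.0 match=false
    }
}
```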

Cannot connect Flink to Elasticache Redis cluster - FlinkJedisClusterConfig unable to parse cport in CLUSTER NODES response

How can I use an Elasticache Redis Replication Group as a data sink in Flink for Kinesis Analytics?
I have created an Elasticache Redis Replication Group, and would like to compute something in Flink and store the results in this group.
My Java code:
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.DataStreamSink;
import org.apache.flink.streaming.connectors.redis.RedisSink;
import org.apache.flink.streaming.connectors.redis.common.config.FlinkJedisClusterConfig;
import java.net.InetSocketAddress;
import java.util.Set;
...
var endpoint = "foo.bar.clustercfg.usw2.cache.amazonaws.com";
var port = 6379;
var node = new InetSocketAddress(endpoint, port);
var jedisConfig = new FlinkJedisClusterConfig.Builder().setNodes(Set.of(node))
.build();
var redisMapper = new MyRedisMapper();
var redisSink = new RedisSink<>(jedisConfig, redisMapper);
This gives me the following error:
java.lang.NumberFormatException: For input string: "6379#1122"
at java.base/java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.base/java.lang.Integer.parseInt(Integer.java:652)
at java.base/java.lang.Integer.valueOf(Integer.java:983)
at redis.clients.util.ClusterNodeInformationParser.getHostAndPortFromNodeLine(ClusterNodeInformationParser.java:39)
at redis.clients.util.ClusterNodeInformationParser.parse(ClusterNodeInformationParser.java:14)
at redis.clients.jedis.JedisClusterInfoCache.discoverClusterNodesAndSlots(JedisClusterInfoCache.java:50)
at redis.clients.jedis.JedisClusterConnectionHandler.initializeSlotsCache(JedisClusterConnectionHandler.java:39)
at redis.clients.jedis.JedisClusterConnectionHandler.<init>(JedisClusterConnectionHandler.java:28)
at redis.clients.jedis.JedisSlotBasedConnectionHandler.<init>(JedisSlotBasedConnectionHandler.java:21)
at redis.clients.jedis.JedisSlotBasedConnectionHandler.<init>(JedisSlotBasedConnectionHandler.java:16)
at redis.clients.jedis.BinaryJedisCluster.<init>(BinaryJedisCluster.java:39)
at redis.clients.jedis.JedisCluster.<init>(JedisCluster.java:45)
This occurs while parsing the response of CLUSTER NODES. An ip:port#cport entry is expected as part of the response (see https://redis.io/commands/cluster-nodes/), but Jedis is unable to parse it.
Am I doing something wrong here, or is this a bug in Jedis?
After a little digging I found that this is a bug which affects Jedis 2.8 and earlier when using Redis 4.0 or later. https://github.com/redis/jedis/issues/1958
My Redis cluster is running 6.2.6, and my Apache Flink is 1.13, which is old but is the newest version currently supported by AWS.
To solve this issue, I had to upgrade Jedis to the latest 2.x version, which fixes the bug while remaining compatible with the Flink 1.13 libraries. Upgrading Jedis to a 3.x or 4.x version broke Flink.
<!-- https://mvnrepository.com/artifact/redis.clients/jedis -->
<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
    <version>2.10.2</version>
</dependency>
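The fix in newer Jedis versions amounts to stripping the cluster-bus-port suffix from the node line before parsing the port. A minimal sketch of the idea (my own code, not Jedis's actual implementation, using the '#' separator seen in the error above):

```java
public class PortParse {
    // Old Jedis tried to parse "6379#1122" as a single integer and threw
    // NumberFormatException; tolerating the suffix means cutting at the
    // separator first and parsing only the port part.
    static int parsePort(String token) {
        int cut = token.indexOf('#'); // cluster-bus-port separator
        return Integer.parseInt(cut < 0 ? token : token.substring(0, cut));
    }

    public static void main(String[] args) {
        System.out.println(parsePort("6379#1122")); // prints 6379
    }
}
```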

Trying to disable the metrics logging in Ignite but failed

Found that the Ignite metrics logging is a bit excessive, so I decided to disable it.
As indicated in the screenshot, this should be done by setting setMetricsLogFrequency to 0.
However, it does not work. Below is my code for creating the IgniteConfiguration. Note that Ignite is created in client mode.
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setMetricsLogFrequency(0); // Trying to disable it!
cfg.setIgniteInstanceName("IgnitePod");
cfg.setClientMode(true);
cfg.setAuthenticationEnabled(true);
// Ignite persistence configuration.
DataStorageConfiguration storageCfg = new DataStorageConfiguration();
storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
cfg.setDataStorageConfiguration(storageCfg);
cfg.setDiscoverySpi(spi); // 'spi' is a discovery SPI configured elsewhere
Ignite ignite = Ignition.start(cfg);
Any idea on how to solve this?
It is a different Ignite instance: yours is called "IgnitePod", but this one is "CacheManager_0". You need to adjust its configuration, too.
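"CacheManager_0" is the name Ignite typically gives an instance started through the JCache (JSR-107) API. If that is where the second instance comes from, you can point its CacheManager at an Ignite config that also sets the frequency to 0. A sketch, where the config file name is an assumption and the referenced XML must define an IgniteConfiguration with metricsLogFrequency set to 0:

```java
import java.net.URI;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.spi.CachingProvider;

public class QuietCacheManager {
    public static void main(String[] args) throws Exception {
        // Ignite's CachingProvider interprets the URI as the path to an
        // Ignite XML config; make that config disable metrics logging too.
        CachingProvider provider = Caching.getCachingProvider();
        CacheManager mgr = provider.getCacheManager(
            new URI("ignite-quiet-config.xml"), null);
    }
}
```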

ClassNotFoundException while deploying new application version (with changed session object) in active Apache Ignite grid

We are currently integrating Apache Ignite in our application to share sessions in a cluster.
At this point we can successfully share sessions between two local tomcat instances, but there's one use case, which is not working so far.
When running the two local instances with the exact same code, it all works great. But when the Ignite logic is integrated in our production cluster, we'll encounter the following use case:
Node 1 and Node 2, runs version 1 of the application
At this point we'd like to deploy version 2 of the application
Tomcat is stopped at Node 1, version 2 is deployed, and at the end of the deployment Tomcat at Node 1 is started again.
We now have Node 1 with version 2 of the code and Node 2, still with version 1
Tomcat is stopped at Node 2, version 2 is deployed, and at the end of the deployment Tomcat at Node 2 is started again.
We now have Node 1 with version 2 of the code and Node 2, with version 2
Deployment is finished
When reproducing this use case locally with two Tomcat instances in the same grid, the Ignite web session clustering fails. What I tested was removing one String property of a class (Profile) which resided in the user's session. When starting Node 1 with this changed class, I get the following exception:
Caused by: java.lang.ClassNotFoundException:
Optimized stream class checksum mismatch
(is same version of marshalled class present on all nodes?) [expected=4981, actual=-27920, cls=class nl.package.profile.Profile]
This will be a common use case for our deployments. My question is: how do we handle this use case? Are there ways in Ignite to resolve or work around this kind of issue?
In my understanding, your use case is a perfect fit for Ignite binary objects [1].
This feature allows you to store objects in a class-free format and to modify an object's structure at runtime, without a full cluster restart, when the object's version changes.
Your Profile class should implement the org.apache.ignite.binary.Binarylizable interface, which gives you full control over the serialization and deserialization logic. With this interface you can even have two nodes in the cluster that use different versions of the Profile class at both serialization and deserialization time, by reading/writing only the required fields from/to the binary format.
[1] https://apacheignite.readme.io/docs/binary-marshaller
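A minimal sketch of what that could look like for the Profile class from the question (field names are illustrative, and this requires the ignite-core dependency on the classpath):

```java
import org.apache.ignite.binary.BinaryReader;
import org.apache.ignite.binary.BinaryWriter;
import org.apache.ignite.binary.Binarylizable;

// Version-tolerant session object: nodes running version 1 of the
// application simply never wrote "nickname"; version-2 nodes read it
// defensively and get null instead of a checksum-mismatch error.
public class Profile implements Binarylizable {
    private String name;
    private String nickname; // field added in version 2 of the application

    @Override public void writeBinary(BinaryWriter writer) {
        writer.writeString("name", name);
        writer.writeString("nickname", nickname);
    }

    @Override public void readBinary(BinaryReader reader) {
        name = reader.readString("name");
        nickname = reader.readString("nickname"); // null if written by an old node
    }
}
```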

GridGain - programmatically opening nodes using SSH through Grid.startNodes API

I am using Grid.startNodes(java.util.Collection, java.util.Map, boolean, int, int)
as defined here: http://gridgain.com/api/javadoc/org/gridgain/grid/Grid.html#startNodes(java.util.Collection, java.util.Map, boolean, int, int)
Code I am using:
GridConfiguration cfg = GridCfgGenerator.GetConfigurations(true);
Grid grid = GridGain.start(cfg);
Collection<Map<String,Object>> coll = new ArrayList<>();
Map<String, Object> host = new HashMap<String, Object>();
//host.put("host", "23.101.201.136");
host.put("host", "10.0.0.4");
host.put("port", 22);
host.put("uname", "username");
host.put("passwd", "password");
host.put("nodes", 7);
//host.put("ggHome", null); /* don't state so that it will use GRIDGAIN_HOME enviroment var */
host.put("cfg", "/config/partitioned.xml");
coll.add(host);
GridFuture f = grid.startNodes(coll, null, false, 3600 * 3600, 4);
System.out.println("before f.get()");
f.get();
I ran the above code on a VM with IP 10.0.0.7.
I have remote desktop into the VM whose host IP is 10.0.0.4 and see no change in state. The code completes and exits. Both VMs are able to run GridGain locally and can discover each other's nodes if I start it using bin/ggstart.bat.
I can manually start a node on 10.0.0.4 (the machine I am trying to SSH into via this API) by running $GG_HOME/bin/ggstart.bat $GG_HOME/config/partitioned.xml, so there is no issue in the configuration file.
I am not quite sure how to debug this, as I get no errors.
Successful completion of the future returned from the startNodes(..) method means that your local node has established an SSH session and executed a command for each node it was going to start. But successful execution of a command doesn't mean that the node will actually be started - it can fail for several reasons (e.g., a wrong GRIDGAIN_HOME).
You should check the following:
Are there GridGain logs created in the GRIDGAIN_HOME/work/log directory? If yes, check them - there could be an exception during the startup process.
If there are no new logs, something is wrong with the executed command. The command can be found in the local node's logs - search for "Starting remote node with SSH command: ..." lines. You can try to create an SSH connection in a terminal, run this command and see what happens.
Also you may want to check your SSH logs to see whether there are any errors.

Running embedded HotRod Infinispan server

I'm trying to deploy an Infinispan cluster where each node runs an embedded HotRod server:
new HotRodServer().start(new HotRodServerConfigurationBuilder().host("10.1.1.6").port(11322).build(),
new DefaultCacheManager("infinispan.xml"));
Client:
remoteCache = new RemoteCacheManager().getCache("myCache");
Client config:
infinispan.client.hotrod.server_list=10.1.1.6:11322
The problem is that remoteCache is empty, although I'm sure the JPA loader defined in infinispan.xml does its work, and I see a positive entity count in JConsole for the myCache entry.
Am I missing something?
Another question: how do I expose a REST endpoint on a HotRod server started this way?
Dependencies:
'org.infinispan:infinispan-server-hotrod:5.3.0.Final',
'org.infinispan:infinispan-core:5.3.0.Final',
'org.infinispan:infinispan-cachestore-jpa:5.3.0.Final',
'org.infinispan:infinispan-client-hotrod:5.3.0.Final',
Thanks