Apache Ignite: Failed to complete exchange process

Apache Ignite .NET client node fails to start with the error below. Any idea what the reason could be?
Exception: class org.apache.ignite.IgniteCheckedException: Failed to complete exchange process. Apache.Ignite.Core.Cache.CacheException
StackTrace:
at Apache.Ignite.Core.Impl.PlatformJniTarget.InStreamOutObject(Int32 type, Action`1 writeAction)
at Apache.Ignite.Core.Impl.Ignite.GetOrCreateCache[TK,TV](CacheConfiguration configuration, NearCacheConfiguration nearConfiguration, PlatformCacheConfiguration platformCacheConfiguration, Op op)
at Apache.Ignite.Core.Impl.Ignite.GetOrCreateCache[TK,TV](CacheConfiguration configuration, NearCacheConfiguration nearConfiguration, PlatformCacheConfiguration platformCacheConfiguration)
at Apache.Ignite.Core.Impl.Ignite.GetOrCreateCache[TK,TV](CacheConfiguration configuration, NearCacheConfiguration nearConfiguration)
NOTE: I am trying to implement read/write-through in Ignite .NET 5.

Apache Kafka doesn't start after SSL configuration

I have Apache Kafka (v. 2.13-3.0.0) installed on a remote Ubuntu server.
I followed this tutorial to secure my cluster:
https://medium.com/egen/securing-kafka-cluster-using-sasl-acl-and-ssl-dec15b439f9d
but when I try to start Kafka with the JAAS config file using the commands:
export KAFKA_OPTS=-Djava.security.auth.login.config=<kafka-binary-dir>/config/kafka_server_jaas.conf
./bin/kafka-server-start.sh ./config/server.properties
I receive the error:
[2021-11-12 10:30:47,864] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2021-11-12 10:30:48,089] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2021-11-12 10:30:48,099] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
java.lang.ClassNotFoundException: kafka.security.auth.SimpleAclAuthorizer
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:398)
at org.apache.kafka.common.utils.Utils.loadClass(Utils.java:417)
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:406)
at kafka.security.authorizer.AuthorizerUtils$.createAuthorizer(AuthorizerUtils.scala:31)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1583)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1394)
at kafka.Kafka$.buildServer(Kafka.scala:67)
at kafka.Kafka$.main(Kafka.scala:87)
at kafka.Kafka.main(Kafka.scala)
This is the SSL config in the server.properties file:
########### SECURITY using SCRAM-SHA-512 and SSL
listeners=PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093,SASL_SSL://localhost:9094
advertised.listeners=PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093,SASL_SSL://localhost:9094
security.inter.broker.protocol=SASL_SSL
ssl.endpoint.identification.algorithm=
ssl.client.auth=required
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
sasl.enabled.mechanisms=SCRAM-SHA-512
# Broker security settings
ssl.truststore.location=/home/kafka/Downloads/kafka_2.13-3.0.0/config/truststore/kafka.truststore.jks
ssl.truststore.password=giuseppe
ssl.keystore.location=/home/kafka/Downloads/kafka_2.13-3.0.0/config/keystore/kafka.keystore.jks
ssl.keystore.password=giuseppe
ssl.key.password=giuseppe
# ACLs
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:admin
#zookeeper SASL
zookeeper.set.acl=false
########### SECURITY using SCRAM-SHA-512 and SSL
If I comment out the 2 ACL rows, I receive the error:
[2021-11-12 11:05:29,301] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2021-11-12 11:05:29,331] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: Failed to acquire lock on file .lock in /tmp/kafka-logs. A Kafka instance in another process or thread is using this directory.
at kafka.log.LogManager.$anonfun$lockLogDirs$1(LogManager.scala:241)
at scala.collection.StrictOptimizedIterableOps.flatMap(StrictOptimizedIterableOps.scala:117)
at scala.collection.StrictOptimizedIterableOps.flatMap$(StrictOptimizedIterableOps.scala:104)
at scala.collection.mutable.ArraySeq.flatMap(ArraySeq.scala:37)
at kafka.log.LogManager.lockLogDirs(LogManager.scala:236)
at kafka.log.LogManager.<init>(LogManager.scala:112)
at kafka.log.LogManager$.apply(LogManager.scala:1283)
at kafka.server.KafkaServer.startup(KafkaServer.scala:254)
at kafka.Kafka$.main(Kafka.scala:109)
at kafka.Kafka.main(Kafka.scala)
What is the cause? Could it be a wrong configuration?
Thanks.
Update:
Changing the row to:
# ACLs
authorizer.class.name=org.apache.kafka.server.authorizer.Authorizer
I receive this new error:
[2021-11-12 16:51:57,613] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
org.apache.kafka.common.KafkaException: Could not find a public no-argument constructor for org.apache.kafka.server.authorizer.Authorizer
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:392)
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:406)
at kafka.security.authorizer.AuthorizerUtils$.createAuthorizer(AuthorizerUtils.scala:31)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1583)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1394)
at kafka.Kafka$.buildServer(Kafka.scala:67)
at kafka.Kafka$.main(Kafka.scala:87)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.lang.NoSuchMethodException: org.apache.kafka.server.authorizer.Authorizer.<init>()
at java.base/java.lang.Class.getConstructor0(Class.java:3508)
at java.base/java.lang.Class.getDeclaredConstructor(Class.java:2711)
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:390)
... 7 more
It seems that if you change
kafka.security.auth.SimpleAclAuthorizer
to
kafka.security.authorizer.AclAuthorizer
it should work; it worked for me.
Kafka 3.0 removed SimpleAclAuthorizer.
Pull request - https://github.com/apache/kafka/commit/976e78e405d57943b989ac487b7f49119b0f4af4#diff-e0ccf1b5c964d2c303b6a69a8b8b67df5a6bfbae8aa514f580d353c4c6bf8e36
The blog seems to be using version 2.2.0.
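For reference, the relevant lines in server.properties would then read as follows (a minimal sketch assuming Kafka 3.0.0, whose built-in ZooKeeper-backed ACL authorizer is kafka.security.authorizer.AclAuthorizer):
# ACLs
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
super.users=User:admin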

What causes this "Unexpected element" that throws XMLStreamValidationException in JBoss config?

I am trying to migrate some software from JBoss 5 to JBoss 7. I am stuck, as my deployment fails with the exceptions below. Keep in mind that the software works in JBoss 5, so anything that is not working should, I assume, be due to differences between JBoss 5 and 7.
The line in question (line 12, as pointed to in the exception) is the following:
<application-policy xmlns="urn:jboss:security-beans:1.0" name="MyProjectDatabaseLogin">
The errors/exceptions are:
ERROR [org.jboss.msc.service.fail] (MSC service thread 1-4) MSC000001: Failed to start service jboss.deployment.unit."myear.ear".PARSE: org.jboss.msc.service.StartException in service jboss.deployment.unit."myear.ear".PARSE: WFLYSRV0153: Failed to process phase PARSE of deployment "myear.ear"
[stack trace omitted]
Caused by: org.jboss.as.server.deployment.DeploymentUnitProcessingException: WFLYPOJO0038: Exception while parsing POJO descriptor file: "/content/myear.ear/META-INF/myproject-auth-jboss-beans.xml"
[stack trace omitted]
Caused by: org.projectodd.vdx.core.XMLStreamValidationException: ParseError at [row,col]:[12,4]
Message: ParseError at [row,col]:[12,4]
Message: WFLYCTL0198: Unexpected element '{urn:jboss:security-beans:1.0}application-policy' encountered
[stack trace omitted]
ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("deploy") failed - address: ([("deployment" => "myear.ear")]) - failure description: {"WFLYCTL0080: Failed services" => {"jboss.deployment.unit.\"myear.ear\".PARSE" => "WFLYSRV0153: Failed to process phase PARSE of deployment \"myear.ear\"
Caused by: org.jboss.as.server.deployment.DeploymentUnitProcessingException: WFLYPOJO0038: Exception while parsing POJO descriptor file: \"/content/myear.ear/META-INF/myproject-auth-jboss-beans.xml\"
Caused by: org.projectodd.vdx.core.XMLStreamValidationException: ParseError at [row,col]:[12,4]
Message: ParseError at [row,col]:[12,4]
Message: WFLYCTL0198: Unexpected element '{urn:jboss:security-beans:1.0}application-policy' encountered"}}
Why is application-policy (or the xmlns value for it) unexpected here? What is causing this exception?
I had to manually type the above XML line and errors/exceptions, so it is possible there are typos here that are not present in the original and do not contribute to the problem; I have reread my question several times and I don't think I made any.
I eventually figured out that these configurable items are no longer supposed to be in the same file. This information is now supposed to be in the server's configuration file, so you will probably put it into either the domain.xml file or the standalone.xml file.
This is a security application policy, so the contents of this tag now go into the <security-domains> section in a <security-domain> tag.
So it would be like the following. Notice that <application-policy ...> is now <security-domain ...> and it sits within <security-domains>. Also, my security application-policy from before had two <login-module> elements within it, but if the new Elytron security system is used, then only one <login-module> tag is allowed in the security-domain...
...
<security-domains> <!-- Search for this in the file and put the migrated part into here -->
    <security-domain name="MySecurityDomain">
        ... all the stuff that used to be in the security application policy
        ... note that some of the stuff you put in here might need to change depending on the security system used
    </security-domain>
</security-domains>
...
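As a concrete illustration, a migrated database login module in the legacy (non-Elytron) security subsystem of standalone.xml might look roughly like this; the namespace version, datasource JNDI name and queries are made-up placeholders, not values from the original application-policy:
<subsystem xmlns="urn:jboss:domain:security:1.2">
    <security-domains>
        <security-domain name="MyProjectDatabaseLogin" cache-type="default">
            <authentication>
                <!-- one entry per login-module from the old application-policy -->
                <login-module code="Database" flag="required">
                    <module-option name="dsJndiName" value="java:jboss/datasources/MyProjectDS"/>
                    <module-option name="principalsQuery" value="SELECT password FROM users WHERE username = ?"/>
                    <module-option name="rolesQuery" value="SELECT role, 'Roles' FROM user_roles WHERE username = ?"/>
                </login-module>
            </authentication>
        </security-domain>
    </security-domains>
</subsystem>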

MassTransit Request Response sample

I'm learning about MassTransit, so I downloaded the sample they have; however, it doesn't seem to be working for me. I'm getting the following error when I try to start the service:
An exception occurred
MassTransit.RabbitMqTransport.RabbitMqConnectionException: Connect failed: igor#localhost:5672/test ---> RabbitMQ.Client.Exceptions.BrokerUnreachableException: None of the specified endpoints were reachable ---> RabbitMQ.Client.Exceptions.OperationInterruptedException: The AMQP operation was interrupted: AMQP close-reason, initiated by Peer, code=530, text="NOT_ALLOWED - vhost test not found", classId=10, methodId=40, cause=
at RabbitMQ.Client.Impl.SimpleBlockingRpcContinuation.GetReply(TimeSpan timeout)
When I try other examples using older versions of MassTransit, they work fine.
The sample uses a RabbitMQ URI that includes the test virtual host. Since you have not created it, your code fails, and the error actually tells you exactly this: vhost test not found.
Here is the app.config from that sample:
<appSettings>
<add key="RabbitMQHost" value="rabbitmq://localhost/test"/>
<add key="ServiceQueueName" value="request_service"/>
</appSettings>
Note that the client uses the same URI, so both of them will fail to start.
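To get the sample running, the test vhost has to exist and the connecting user needs permissions on it. Assuming a local RabbitMQ broker and the igor user from the error message, something along these lines should do it:
rabbitmqctl add_vhost test
rabbitmqctl set_permissions -p test igor ".*" ".*" ".*"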

Deploying a service on Apache Ignite

I have a util method that deploys a service on Ignite:
private static void startNewService(String queryId, String sqlQuery, long timeInterval) {
    QueryServiceImpl cepService = new QueryServiceImpl(queryId, sqlQuery, timeInterval);
    ServiceConfiguration cfg = new ServiceConfiguration();
    cfg.setService(cepService);
    cfg.setName(queryId);
    cfg.setTotalCount(1);
    cfg.setMaxPerNodeCount(1);
    System.out.println("---- Deploying the service. " + queryId);
    services.deploy(cfg);
    System.out.println("---- Deployed the service. " + queryId);
}
When I run this from my client machine, I get the following error on the server machines:
[12:13:26,640][SEVERE][srvc-deploy-#35%myGrid%][GridServiceProcessor] Failed to initialize service (service will not be deployed): Query1
class org.apache.ignite.IgniteCheckedException: com.demo.ignite.service.QueryServiceImpl
at org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:9739)
at org.apache.ignite.internal.processors.service.GridServiceProcessor.copyAndInject(GridServiceProcessor.java:1206)
at org.apache.ignite.internal.processors.service.GridServiceProcessor.redeploy(GridServiceProcessor.java:1127)
at org.apache.ignite.internal.processors.service.GridServiceProcessor.processAssignment(GridServiceProcessor.java:1750)
......
Caused by: class org.apache.ignite.binary.BinaryInvalidTypeException: com.demo.ignite.service.QueryServiceImpl
at org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:695)
at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1755)
....
Caused by: java.lang.ClassNotFoundException: com.demo.ignite.service.QueryServiceImpl
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
....
at org.apache.ignite.internal.util.IgniteUtils.forName(IgniteUtils.java:8465)
at org.apache.ignite.internal.MarshallerContextImpl.getClass(MarshallerContextImpl.java:347)
at org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:686)
My QueryServiceImpl implements Service, QueryService, where QueryService is an interface with a method runContinuousQuery().
Please note that I have not manually copied this jar/class to the server classpath. I am expecting Ignite to load the required classes onto the server nodes and run the service there. How can I do this?
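For context, a minimal skeleton of such a service might look like the following; this is only an illustration assuming the standard org.apache.ignite.services.Service contract, not the poster's actual implementation:
import org.apache.ignite.services.Service;
import org.apache.ignite.services.ServiceContext;

// Hypothetical skeleton for illustration; names mirror the question, bodies are placeholders.
interface QueryService {
    void runContinuousQuery();
}

public class QueryServiceImpl implements Service, QueryService {
    private final String queryId;
    private final String sqlQuery;
    private final long timeInterval;

    public QueryServiceImpl(String queryId, String sqlQuery, long timeInterval) {
        this.queryId = queryId;
        this.sqlQuery = sqlQuery;
        this.timeInterval = timeInterval;
    }

    @Override public void init(ServiceContext ctx) throws Exception {
        // Prepare resources before the service starts executing.
    }

    @Override public void execute(ServiceContext ctx) throws Exception {
        // Entry point invoked by Ignite once the service is deployed.
        runContinuousQuery();
    }

    @Override public void cancel(ServiceContext ctx) {
        // Stop the continuous query and release resources.
    }

    @Override public void runContinuousQuery() {
        // Query logic for queryId/sqlQuery/timeInterval goes here.
    }
}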
Currently peer deployment is not supported for services, so your service should be on the classpath for every node. You can find a note about it in the documentation: https://apacheignite.readme.io/docs/service-grid
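A minimal way to satisfy that requirement (assuming the standard Ignite distribution layout, where jars under $IGNITE_HOME/libs are added to the node classpath at startup) is to package the service classes into a jar and copy it to every server node before starting it; the jar name below is just an example:
cp query-service.jar $IGNITE_HOME/libs/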

RMI Replication EhCache

I am working on Ehcache replication using RMI for the first time. I am not able to get past the error below.
The root cause from the stack trace below is:
Caused by: java.lang.ClassNotFoundException: net.sf.ehcache.distribution.RMICachePeer_Stub (no security manager: RMI class loader disabled)
I am using JDK 1.6.0_16 with WebLogic 12c and I have already tried the approaches below, without success.
Added the ehcache jar to the classpath: set CLASSPATH=%CLASSPATH%;C:\Oracle\Middleware\wlserver_12.1\server\lib\ehcache-2.8.3.jar;
set JAVA_OPTIONS=%JAVA_OPTIONS% -Djava.security.manager -Djava.security.policy==C:/Oracle/Middleware/wlserver_12.1/server/lib/weblogic.policy -Djava.rmi.server.codebase=file:///C:/Oracle/Middleware/wlserver_12.1/server/lib/ehcache-2.8.3.jar
Set System.setSecurityManager(new RMISecurityManager()) at application startup.
Any help is appreciated.
Caused by: org.hibernate.cache.CacheException: net.sf.ehcache.CacheException: Problem starting listener for RMICachePeer //localhost:40001/com.wipro.dms.digi.domain.Location. Initial cause was
RemoteException occurred in server thread; nested exception is:
java.rmi.UnmarshalException: error unmarshalling arguments; nested exception is:
java.lang.ClassNotFoundException: net.sf.ehcache.distribution.RMICachePeer_Stub (no security manager: RMI class loader disabled)