Unable to join Akka.NET cluster - akka.net

I am having a problem joining an Akka.NET cluster, and with debugging the join. I am using version 1.3.8. My setup is as follows:
Lighthouse
Almost the default code from GitHub, running as a console application. Its akka.hocon is the following:
lighthouse {
  actorsystem: "sng"
}
petabridge.cmd {
  host = "0.0.0.0"
  port = 9110
}
akka {
  loglevel = DEBUG
  loggers = ["Akka.Logger.Serilog.SerilogLogger, Akka.Logger.Serilog"]
  actor {
    provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
    debug {
      receive = on
      autoreceive = on
      lifecycle = on
      event-stream = on
      unhandled = on
    }
  }
  remote {
    log-sent-messages = on
    log-received-messages = on
    log-remote-lifecycle-events = on
    enabled-transports = ["akka.remote.dot-netty.tcp"]
    dot-netty.tcp {
      transport-class = "Akka.Remote.Transport.DotNetty.TcpTransport, Akka.Remote"
      applied-adapters = []
      transport-protocol = tcp
      hostname = "0.0.0.0"
      port = 4053
    }
    log-remote-lifecycle-events = DEBUG
  }
  cluster {
    auto-down-unreachable-after = 5s
    seed-nodes = []
    roles = [lighthouse]
  }
}
Working node
Also a console (net461) application, with as simple a startup and join as possible. It works as expected. akka.hocon:
akka {
  loglevel = DEBUG
  loggers = ["Akka.Logger.Serilog.SerilogLogger, Akka.Logger.Serilog"]
  actor {
    provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
  }
  remote {
    log-sent-messages = on
    log-received-messages = on
    log-remote-lifecycle-events = on
    dot-netty.tcp {
      transport-class = "Akka.Remote.Transport.DotNetty.TcpTransport, Akka.Remote"
      applied-adapters = []
      transport-protocol = tcp
      hostname = "0.0.0.0"
      port = 0
    }
  }
  cluster {
    auto-down-unreachable-after = 5s
    seed-nodes = ["akka.tcp://sng@127.0.0.1:4053"]
    roles = [monitor]
  }
}
Non-working node
A .NET 4.6.1 library, registered as a COM component and started inside another application (MediaMonkey) with VBA code:
Sub OnStartup
Set o = CreateObject("MediaMonkey.Akka.Agent.MediaMonkeyAkkaProxy")
o.Init(SDB)
End Sub
The Akka system is, as in the console application, created with the standard ActorSystem.Create("sng", config);
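In outline, the startup inside the COM object looks roughly like this (a simplified sketch; the file reading and the class shape are approximations, not the exact project code):

using System.IO;
using Akka.Actor;
using Akka.Configuration;

public class MediaMonkeyAkkaProxy
{
    private ActorSystem system;

    public void Init(object sdb)
    {
        // Load the HOCON shown below; the hard-coded relative path is an assumption
        var hocon = File.ReadAllText("akka.hocon");
        var config = ConfigurationFactory.ParseString(hocon);

        // Same call as in the working console node
        system = ActorSystem.Create("sng", config);
    }
}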
akka.hocon:
akka {
  loglevel = DEBUG
  loggers = ["Akka.Logger.Serilog.SerilogLogger, Akka.Logger.Serilog"]
  actor {
    provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
  }
  remote {
    log-sent-messages = on
    log-received-messages = on
    log-remote-lifecycle-events = on
    dot-netty.tcp {
      transport-class = "Akka.Remote.Transport.DotNetty.TcpTransport, Akka.Remote"
      applied-adapters = []
      transport-protocol = tcp
      hostname = "0.0.0.0"
      port = 0
    }
  }
  cluster {
    auto-down-unreachable-after = 5s
    seed-nodes = ["akka.tcp://sng@127.0.0.1:4053"]
    roles = [mediamonkey]
  }
}
Debugging workflow
Starting the Lighthouse application:
Configuration Result:
[Success] Name sng.Lighthouse
[Success] ServiceName sng.Lighthouse
Topshelf v4.0.0.0, .NET Framework v4.0.30319.42000
[Lighthouse] ActorSystem: sng; IP: 127.0.0.1; PORT: 4053
[Lighthouse] Performing pre-boot sanity check. Should be able to parse address [akka.tcp://sng#127.0.0.1:4053]
[Lighthouse] Parse successful.
[21:01:35 INF] Starting remoting
[21:01:35 INF] Remoting started; listening on addresses : [akka.tcp://sng#127.0.0.1:4053]
[21:01:35 INF] Remoting now listens on addresses: [akka.tcp://sng#127.0.0.1:4053]
[21:01:35 INF] Cluster Node [akka.tcp://sng#127.0.0.1:4053] - Starting up...
[21:01:35 INF] Cluster Node [akka.tcp://sng#127.0.0.1:4053] - Started up successfully
The sng.Lighthouse service is now running, press Control+C to exit.
[21:01:35 INF] petabridge.cmd host bound to [0.0.0.0:9110]
[21:01:35 INF] Node [akka.tcp://sng#127.0.0.1:4053] is JOINING, roles [lighthouse]
[21:01:35 INF] Leader is moving node [akka.tcp://sng#127.0.0.1:4053] to [Up]
Started and stopped working console node
Lighthouse logs:
[21:05:40 INF] Node [akka.tcp://sng#0.0.0.0:37516] is JOINING, roles [monitor]
[21:05:40 INF] Leader is moving node [akka.tcp://sng#0.0.0.0:37516] to [Up]
[21:05:54 INF] Connection was reset by the remote peer. Channel [[::ffff:127.0.0.1]:4053->[::ffff:127.0.0.1]:37517](Id=1293c63a)
[21:05:54 INF] Message AckIdleCheckTimer from akka://sng/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2Fsng%400.0.0.0%3A37516-1/endpointWriter to akka://sng/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2Fsng%400.0.0.0%3A37516-1/endpointWriter was not delivered. 1 dead letters encountered.
[21:05:55 INF] Message GossipStatus from akka://sng/system/cluster/core/daemon to akka://sng/deadLetters was not delivered. 2 dead letters encountered.
[21:05:55 INF] Message Heartbeat from akka://sng/system/cluster/core/daemon/heartbeatSender to akka://sng/deadLetters was not delivered. 3 dead letters encountered.
[21:05:56 INF] Message GossipStatus from akka://sng/system/cluster/core/daemon to akka://sng/deadLetters was not delivered. 4 dead letters encountered.
[21:05:56 INF] Message Heartbeat from akka://sng/system/cluster/core/daemon/heartbeatSender to akka://sng/deadLetters was not delivered. 5 dead letters encountered.
[21:05:57 INF] Message GossipStatus from akka://sng/system/cluster/core/daemon to akka://sng/deadLetters was not delivered. 6 dead letters encountered.
[21:05:57 INF] Message Heartbeat from akka://sng/system/cluster/core/daemon/heartbeatSender to akka://sng/deadLetters was not delivered. 7 dead letters encountered.
[21:05:58 INF] Message GossipStatus from akka://sng/system/cluster/core/daemon to akka://sng/deadLetters was not delivered. 8 dead letters encountered.
[21:05:58 INF] Message Heartbeat from akka://sng/system/cluster/core/daemon/heartbeatSender to akka://sng/deadLetters was not delivered. 9 dead letters encountered.
[21:05:59 WRN] Cluster Node [akka.tcp://sng#127.0.0.1:4053] - Marking node(s) as UNREACHABLE [Member(address = akka.tcp://sng#0.0.0.0:37516, Uid=1060233119 status = Up, role=[monitor], upNumber=2)]. Node roles [lighthouse]
[21:06:01 WRN] AssociationError [akka.tcp://sng#127.0.0.1:4053] -> akka.tcp://sng#0.0.0.0:37516: Error [Association failed with akka.tcp://sng#0.0.0.0:37516] []
[21:06:01 WRN] Tried to associate with unreachable remote address [akka.tcp://sng#0.0.0.0:37516]. Address is now gated for 5000 ms, all messages to this address will be delivered to dead letters. Reason: [Association failed with akka.tcp://sng#0.0.0.0:37516] Caused by: [System.AggregateException: One or more errors occurred. ---> Akka.Remote.Transport.InvalidAssociationException: No connection could be made because the target machine actively refused it tcp://sng#0.0.0.0:37516
at Akka.Remote.Transport.DotNetty.TcpTransport.<AssociateInternal>d__1.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Akka.Remote.Transport.DotNetty.DotNettyTransport.<Associate>d__22.MoveNext()
--- End of inner exception stack trace ---
at System.Threading.Tasks.Task`1.GetResultCore(Boolean waitCompletionNotification)
at Akka.Remote.Transport.ProtocolStateActor.<>c.<InitializeFSM>b__11_54(Task`1 result)
at System.Threading.Tasks.ContinuationResultTaskFromResultTask`2.InnerInvoke()
at System.Threading.Tasks.Task.Execute()
---> (Inner Exception #0) Akka.Remote.Transport.InvalidAssociationException: No connection could be made because the target machine actively refused it tcp://sng#0.0.0.0:37516
at Akka.Remote.Transport.DotNetty.TcpTransport.<AssociateInternal>d__1.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Akka.Remote.Transport.DotNetty.DotNettyTransport.<Associate>d__22.MoveNext()<---
]
[21:06:04 INF] Cluster Node [akka.tcp://sng#127.0.0.1:4053] - Leader is auto-downing unreachable node [akka.tcp://sng#127.0.0.1:4053]
[21:06:04 INF] Marking unreachable node [akka.tcp://sng#0.0.0.0:37516] as [Down]
[21:06:05 INF] Leader is removing unreachable node [akka.tcp://sng#0.0.0.0:37516]
[21:06:05 WRN] Association to [akka.tcp://sng#0.0.0.0:37516] having UID [1060233119] is irrecoverably failed. UID is now quarantined and all messages to this UID will be delivered to dead letters. Remote actorsystem must be restarted to recover from this situation.
Working node logs:
[21:05:38 INF] Starting remoting
[21:05:38 INF] Remoting started; listening on addresses : [akka.tcp://sng#0.0.0.0:37516]
[21:05:38 INF] Remoting now listens on addresses: [akka.tcp://sng#0.0.0.0:37516]
[21:05:38 INF] Cluster Node [akka.tcp://sng#0.0.0.0:37516] - Starting up...
[21:05:38 INF] Cluster Node [akka.tcp://sng#0.0.0.0:37516] - Started up successfully
[21:05:40 INF] Welcome from [akka.tcp://sng#127.0.0.1:4053]
[21:05:40 INF] Member is Up: Member(address = akka.tcp://sng#127.0.0.1:4053, Uid=439782041 status = Up, role=[lighthouse], upNumber=1)
[21:05:40 INF] Member is Up: Member(address = akka.tcp://sng#0.0.0.0:37516, Uid=1060233119 status = Up, role=[monitor], upNumber=2)
//shutdown logs are missing
Started and stopped COM node
Lighthouse logs:
[21:12:02 INF] Connection was reset by the remote peer. Channel [::ffff:127.0.0.1]:4053->[::ffff:127.0.0.1]:37546](Id=4ca91e15)
COM node logs:
[WARNING][18. 07. 2018 19:11:15][Thread 0001][ActorSystem(sng)] The type name for serializer 'hyperion' did not resolve to an actual Type: 'Akka.Serialization.HyperionSerializer, Akka.Serialization.Hyperion'
[WARNING][18. 07. 2018 19:11:15][Thread 0001][ActorSystem(sng)] Serialization binding to non existing serializer: 'hyperion'
[21:11:15 DBG] Logger log1-SerilogLogger [SerilogLogger] started
[21:11:15 DBG] StandardOutLogger being removed
[21:11:15 DBG] Default Loggers started
[21:11:15 INF] Starting remoting
[21:11:15 DBG] Starting prune timer for endpoint manager...
[21:11:15 INF] Remoting started; listening on addresses : [akka.tcp://sng#0.0.0.0:37543]
[21:11:15 INF] Remoting now listens on addresses: [akka.tcp://sng#0.0.0.0:37543]
[21:11:15 INF] Cluster Node [akka.tcp://sng#0.0.0.0:37543] - Starting up...
[21:11:15 INF] Cluster Node [akka.tcp://sng#0.0.0.0:37543] - Started up successfully
[21:11:15 DBG] [Uninitialized] Received Akka.Cluster.InternalClusterAction+Subscribe
[21:11:15 DBG] [Uninitialized] Received Akka.Cluster.InternalClusterAction+Subscribe
[21:11:16 DBG] [Uninitialized] Received Akka.Cluster.InternalClusterAction+JoinSeedNodes
[21:11:16 DBG] [Uninitialized] Received Akka.Cluster.InternalClusterAction+Subscribe
[21:11:26 WRN] Couldn't join seed nodes after [2] attempts, will try again. seed-nodes=[akka.tcp://sng#127.0.0.1:4053]
[21:11:31 WRN] Couldn't join seed nodes after [3] attempts, will try again. seed-nodes=[akka.tcp://sng#127.0.0.1:4053]
[21:11:36 WRN] Couldn't join seed nodes after [4] attempts, will try again. seed-nodes=[akka.tcp://sng#127.0.0.1:4053]
[21:11:40 ERR] No response from remote. Handshake timed out or transport failure detector triggered.
[21:11:40 WRN] AssociationError [akka.tcp://sng#0.0.0.0:37543] -> akka.tcp://sng#127.0.0.1:4053: Error [Association failed with akka.tcp://sng#127.0.0.1:4053] []
[21:11:40 WRN] Tried to associate with unreachable remote address [akka.tcp://sng#127.0.0.1:4053]. Address is now gated for 5000 ms, all messages to this address will be delivered to dead letters. Reason: [Association failed with akka.tcp://sng#127.0.0.1:4053] Caused by: [Akka.Remote.Transport.AkkaProtocolException: No response from remote. Handshake timed out or transport failure detector triggered.
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Akka.Remote.Transport.AkkaProtocolTransport.<Associate>d__19.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Akka.Remote.EndpointWriter.<AssociateAsync>d__23.MoveNext()]
[21:11:40 DBG] Disassociated [akka.tcp://sng#0.0.0.0:37543] -> akka.tcp://sng#127.0.0.1:4053
[21:11:40 INF] Message InitJoin from akka://sng/system/cluster/core/daemon/joinSeedNodeProcess-1 to akka://sng/deadLetters was not delivered. 1 dead letters encountered.
[21:11:40 INF] Message InitJoin from akka://sng/system/cluster/core/daemon/joinSeedNodeProcess-1 to akka://sng/deadLetters was not delivered. 2 dead letters encountered.
[21:11:40 INF] Message InitJoin from akka://sng/system/cluster/core/daemon/joinSeedNodeProcess-1 to akka://sng/deadLetters was not delivered. 3 dead letters encountered.
[21:11:40 INF] Message InitJoin from akka://sng/system/cluster/core/daemon/joinSeedNodeProcess-1 to akka://sng/deadLetters was not delivered. 4 dead letters encountered.
[21:11:40 INF] Message InitJoin from akka://sng/system/cluster/core/daemon/joinSeedNodeProcess-1 to akka://sng/deadLetters was not delivered. 5 dead letters encountered.
[21:11:40 INF] Message AckIdleCheckTimer from akka://sng/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2Fsng%40127.0.0.1%3A4053-1/endpointWriter to akka://sng/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2Fsng%40127.0.0.1%3A4053-1/endpointWriter was not delivered. 6 dead letters encountered.
[21:11:41 WRN] Couldn't join seed nodes after [5] attempts, will try again. seed-nodes=[akka.tcp://sng#127.0.0.1:4053]
[21:11:41 INF] Message InitJoin from akka://sng/system/cluster/core/daemon/joinSeedNodeProcess-1 to akka://sng/deadLetters was not delivered. 7 dead letters encountered.
[21:11:46 WRN] Couldn't join seed nodes after [6] attempts, will try again. seed-nodes=[akka.tcp://sng#127.0.0.1:4053]
[21:11:51 WRN] Couldn't join seed nodes after [7] attempts, will try again. seed-nodes=[akka.tcp://sng#127.0.0.1:4053]
Do you have any idea how to debug and/or resolve this?

The first thing I notice is that in the non-working node the HOCON configuration contains a different "seed-nodes" address from the working node.
IMHO the "seed-nodes" in all the applications (the nodes, as they are called in the cluster) within the cluster need to be the same. So in the non-working node, instead of
seed-nodes = ["akka.tcp://songoulash@127.0.0.1:4053"]
use the value from the working node:
seed-nodes = ["akka.tcp://sng@127.0.0.1:4053"]
Also, please check this GitHub link for a sample: https://github.com/AJEETX/Akka.Cluster
and another one: https://github.com/AJEETX/AkkaNet.Cluster.RoundRobinGroup
@Rok, kindly let me know if this was helpful, or I can try to investigate further.

Related

RabbitMQ TLS Authentication

The task is to configure some web services to use certificate-based authorization.
Installed:
Erlang 22.3.3
RabbitMQ 3.8.3
There is no need to describe their installation.
What was done next:
1. In accordance with the article (https://www.rabbitmq.com/ssl.html) we perform the following actions:
git clone https://github.com/michaelklishin/tls-gen tls-gen
cd tls-gen/basic
CN=client PASSWORD=123 make
make verify
make info
Copy the created certificates and change the owner:
mv testca/ /etc/rabbitmq/
mv server/ /etc/rabbitmq/
mv client/ /etc/rabbitmq/
chown -R rabbitmq: /etc/rabbitmq/testca
chown -R rabbitmq: /etc/rabbitmq/server
chown -R rabbitmq: /etc/rabbitmq/client
We bring the configuration file (/etc/rabbitmq/rabbitmq.config) to the following form:
[
 {ssl, [{versions, ['tlsv1.2', 'tlsv1.1', tlsv1]}]},
 {rabbit, [
   {ssl_listeners, [5671]},
   {auth_mechanisms, ['PLAIN', 'AMQPLAIN', 'EXTERNAL']},
   {ssl_cert_login_from, 'client'},
   {ssl_options, [{cacertfile, "/etc/rabbitmq/testca/cacert.pem"},
                  {certfile, "/etc/rabbitmq/server/cert.pem"},
                  {keyfile, "/etc/rabbitmq/server/key.pem"},
                  {verify, verify_peer},
                  {fail_if_no_peer_cert, true}]}]}
].
We start the server and try to connect from the client. We get the error:
2020-05-18 17:21:57.166 +03:00 [ERR] Failed to connect to broker 10.10.11.16, port 5671, vhost dmz
RabbitMQ.Client.Exceptions.BrokerUnreachableException: None of the specified endpoints were reachable
---> RabbitMQ.Client.Exceptions.PossibleAuthenticationFailureException: Possibly caused by authentication failure
---> RabbitMQ.Client.Exceptions.OperationInterruptedException: The AMQP operation was interrupted: AMQP close-reason, initiated by Library, code=0, text='End of stream', classId=0, methodId=0, cause=System.IO.EndOfStreamException: Reached the end of the stream. Possible authentication failure.
at RabbitMQ.Client.Impl.InboundFrame.ReadFrom (Stream reader)
at RabbitMQ.Client.Impl.SocketFrameHandler.ReadFrame ()
at RabbitMQ.Client.Framing.Impl.Connection.MainLoopIteration ()
at RabbitMQ.Client.Framing.Impl.Connection.MainLoop ()
at RabbitMQ.Client.Impl.SimpleBlockingRpcContinuation.GetReply (TimeSpan timeout)
at RabbitMQ.Client.Impl.ModelBase.ConnectionStartOk (IDictionary`2 clientProperties, String mechanism, Byte [] response, String locale)
at RabbitMQ.Client.Framing.Impl.Connection.StartAndTune ()
--- End of inner exception stack trace ---
at RabbitMQ.Client.Framing.Impl.Connection.StartAndTune ()
at RabbitMQ.Client.Framing.Impl.Connection.Open (Boolean insist)
at RabbitMQ.Client.Framing.Impl.Connection..ctor (IConnectionFactory factory, Boolean insist, IFrameHandler frameHandler, String clientProvidedName)
at RabbitMQ.Client.Framing.Impl.ProtocolBase.CreateConnection (IConnectionFactory factory, Boolean insist, IFrameHandler frameHandler, String clientProvidedName)
at RabbitMQ.Client.ConnectionFactory.CreateConnection (IEndpointResolver endpointResolver, String clientProvidedName)
--- End of inner exception stack trace ---
at RabbitMQ.Client.ConnectionFactory.CreateConnection (IEndpointResolver endpointResolver, String clientProvidedName)
at RabbitMQ.Client.ConnectionFactory.CreateConnection (String clientProvidedName)
at EasyNetQ.ConnectionFactoryWrapper.CreateConnection ()
at EasyNetQ.PersistentConnection.TryToConnect ()
In the rabbitmq log:
2020-05-18 17:24:59.880 [info] <0.3442.0> accepting AMQP connection <0.3442.0> (10.10.15.14:... -> 10.10.11.16:5671)
2020-05-18 17:25:02.887 [error] <0.3442.0> closing AMQP connection <0.3442.0> (10.10.15.14:... -> 10.10.11.16:5671):
{handshake_error, starting, 0, {error, function_clause, 'connection.start_ok', [{rabbit_ssl, peer_cert_auth_name, [client, <<...DER-encoded client certificate bytes...>>]}, ...]}}
UPDATE
New rabbitmq.config:
[
 {rabbit, [
   {auth_backends, [rabbit_auth_backend_internal]},
   {auth_mechanisms, ['PLAIN', 'AMQPLAIN', 'EXTERNAL']},
   {ssl_listeners, [5671]},
   {ssl_options, [
     {versions, ['tlsv1.2', 'tlsv1.1']},
     {cacertfile, "/etc/rabbitmq/testca/cacert.pem"},
     {certfile, "/etc/rabbitmq/server/cert.pem"},
     {keyfile, "/etc/rabbitmq/server/key.pem"},
     {verify, verify_peer},
     {fail_if_no_peer_cert, true}]}
 ]}
].
New error:
2020-05-18 18:48:56.681 [info] <0.1410.0> Connection <0.1410.0> (10.10.15.14:52744 -> 10.10.11.16:5671) has a client-provided name: Viber.CallbackService.dll
2020-05-18 18:48:56.682 [error] <0.1410.0> Error on AMQP connection <0.1410.0> (10.10.15.14:52744 -> 10.10.11.16:5671, state: starting):
EXTERNAL login refused: user 'O=client,CN=client' - invalid credentials
Have you enabled the ssl plugin and restarted the broker?
sudo rabbitmq-plugins enable rabbitmq_auth_mechanism_ssl
sudo systemctl restart rabbitmq-server
You may also try and set the following in rabbitmq.conf:
ssl_cert_login_from = common_name
ssl_options.password = 123
And create a user called client in the broker to match the CN name in your certificate.
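For example, something along these lines on the broker side (a sketch; it assumes the default vhost "/" and that the CN in the client certificate is literally "client"):
sudo rabbitmqctl add_user client temppassword
sudo rabbitmqctl clear_password client
sudo rabbitmqctl set_permissions -p / client ".*" ".*" ".*"
With ssl_cert_login_from = common_name the broker looks the user up by the certificate's CN ("client") instead of the full distinguished name ("O=client,CN=client") shown in the error above, and clearing the password leaves the account usable only via EXTERNAL (certificate) authentication.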

BizTalk receiving from RabbitMQ

I'm new to RabbitMQ, but I have now installed it onto a Windows server and have a couple of demo console apps (C#) that happily write to and read from a queue.
The following code works to pull messages from a queue called "RabbitPoCQueue_2" on the local server:
string queueName = "RabbitPoCQueue_2";
var factory = new ConnectionFactory();
bool keepGoing = true;
factory.HostName = "127.0.0.1";
try
{
    using (var connection = factory.CreateConnection())
    using (var channel = connection.CreateModel())
    {
        const bool durable = false;
        channel.QueueDeclare(queueName, durable, false, false, null);
        System.Console.WriteLine(" [*] Waiting for messages.");
        while (keepGoing)
        {
            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (model, ea) =>
            {
                var body = ea.Body;
                var message = Encoding.UTF8.GetString(body);
                System.Console.WriteLine(" [x] Received {0}", message);
            };
            channel.BasicConsume(queue: queueName,
                                 autoAck: true,
                                 consumer: consumer);
            channel.BasicGet(queue: queueName, autoAck: true);
            System.Console.WriteLine("Press Y to continue or any other key to exit");
            keepGoing = System.Console.ReadKey().Key == ConsoleKey.Y;
        }
    }
}
catch (Exception ex)
{
    System.Console.WriteLine(ex.Message);
}
I now need to configure a BizTalk (2016 FP3 CU5) receive location to do the same. I have ensured I've stopped the console receiver and that I have messages sat on the queue for BizTalk to collect.
I followed the article https://social.technet.microsoft.com/wiki/contents/articles/7401.biztalk-server-and-rabbitmq.aspx
The problem is that when I start the receive location, I get no errors, but nothing is received.
The config for the WCF receive location was captured in screenshots, along with a pic from the RabbitMQ management console showing messages sat on the queue (screenshots not reproduced here).
When I look in the RabbitMQ log file, I see 2 rows on starting the receive location. I see 3 rows when starting the .Net console app (using RabbitMQ API), as shown below - first 2 rows are from BizTalk, last 3 from the console app:
2019-08-28 16:17:45.693 [info] <0.13361.2> connection <0.13361.2> ([::1]:16807 -> [::1]:5672): user 'guest' authenticated and granted access to vhost '/' ** Start of Receive location
2019-08-28 16:19:57.958 [info] <0.13452.2> accepting AMQP connection <0.13452.2> (127.0.0.1:17173 -> 127.0.0.1:5672)
2019-08-28 16:19:58.026 [info] <0.13452.2> connection <0.13452.2> (127.0.0.1:17173 -> 127.0.0.1:5672): user 'guest' authenticated and granted access to vhost '/' ** Receive from command line
2019-08-28 18:56:26.267 [info] <0.13452.2> closing AMQP connection <0.13452.2> (127.0.0.1:17173 -> 127.0.0.1:5672, vhost: '/', user: 'guest')
2019-08-28 18:56:39.815 [info] <0.17923.2> accepting AMQP connection <0.17923.2> (127.0.0.1:41103 -> 127.0.0.1:5672)
Can anyone spot where I went wrong?

When using the node driver, notarisation in flows hangs with a handshake failure

Whenever I try to test using the node driver, I find that my flows hang at the point of notarisation.
After examining the node logs, it shows that the notary's message broker was unreachable:
[INFO ] 09:33:26,653 [nioEventLoopGroup-3-3] (AMQPClient.kt:91) netty.AMQPClient.run - Retry connect {}
[INFO ] 09:33:26,657 [nioEventLoopGroup-3-4] (AMQPClient.kt:76) netty.AMQPClient.operationComplete - Connected to localhost:10001 {}
[INFO ] 09:33:26,658 [nioEventLoopGroup-3-4] (AMQPChannelHandler.kt:49) O=Notary Service, L=Zurich, C=CH.channelActive - New client connection db926eb8 from localhost/127.0.0.1:10001 to /127.0.0.1:63781 {}
[INFO ] 09:33:26,658 [nioEventLoopGroup-3-4] (AMQPClient.kt:86) netty.AMQPClient.operationComplete - Disconnected from localhost:10001 {}
[ERROR] 09:33:26,658 [nioEventLoopGroup-3-4] (AMQPChannelHandler.kt:98) O=Notary Service, L=Zurich, C=CH.userEventTriggered - Handshake failure SslHandshakeCompletionEvent(java.nio.channels.ClosedChannelException) {}
[INFO ] 09:33:26,659 [nioEventLoopGroup-3-4] (AMQPChannelHandler.kt:74) O=Notary Service, L=Zurich, C=CH.channelInactive - Closed client connection db926eb8 from localhost/127.0.0.1:10001 to /127.0.0.1:63781 {}
[INFO ] 09:33:26,659 [nioEventLoopGroup-3-4] (AMQPBridgeManager.kt:115) peers.DLF1ZmHt1DXc9HbxzDNm6VHduUABBbNsp7Mh4DhoBs6ifd -> localhost:10001:O=Notary Service, L=Zurich, C=CH.onSocketConnected - Bridge Disconnected {}
While the notary logs display the following:
[INFO ] 13:24:21,735 [main] (ActiveMQServerImpl.java:540) core.server.internalStart - AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.2.0 [localhost, nodeID=7b3df3b8-98aa-11e8-83bd-ead493c8221e] {}
[DEBUG] 13:24:21,735 [main] (ArtemisRpcBroker.kt:51) rpc.ArtemisRpcBroker.start - Artemis RPC broker is started. {}
[INFO ] 13:24:21,737 [main] (ArtemisMessagingClient.kt:28) internal.ArtemisMessagingClient.start - Connecting to message broker: localhost:10001 {}
[ERROR] 13:24:22,298 [main] (NettyConnector.java:713) core.client.createConnection - AMQ214016: Failed to create netty connection {} java.nio.channels.ClosedChannelException: null
    at io.netty.handler.ssl.SslHandler.channelInactive(...)(Unknown Source) ~[netty-all-4.1.9.Final.jar:4.1.9.Final]
[DEBUG] 13:24:22,362 [main] (PersistentIdentityService.kt:137) identity.PersistentIdentityService.verifyAndRegisterIdentity - Registering identity O=Notary Service, L=Zurich, C=CH {}
[WARN ] 13:24:22,363 [main] (AppendOnlyPersistentMap.kt:79) utilities.AppendOnlyPersistentMapBase.set - Double insert in net.corda.node.utilities.AppendOnlyPersistentMap for entity class class net.corda.node.services.identity.PersistentIdentityService$PersistentIdentity key 69ACAA32A0C7934D9454CB53EEA6CA6CCD8E4090B30C560A5A36EA10F3DC13E8, not inserting the second time {}
[ERROR] 13:24:22,368 [main] (NodeStartup.kt:125) internal.Node.run - Exception during node startup {}
org.apache.activemq.artemis.api.core.ActiveMQNotConnectedException: AMQ119007: Cannot connect to server(s). Tried with all available servers.
    at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:787) ~[artemis-core-client-2.2.0.jar:2.2.0]
    at net.corda.nodeapi.internal.ArtemisMessagingClient.start(ArtemisMessagingClient.kt:39) ~[corda-node-api-3.2-corda.jar:?]
    at net.corda.nodeapi.internal.bridging.AMQPBridgeManager.start(AMQPBridgeManager.kt:195) ~[corda-node-api-3.2-corda.jar:?]
    at net.corda.nodeapi.internal.bridging.BridgeControlListener.start(BridgeControlListener.kt:35) ~[corda-node-api-3.2-corda.jar:?]
    at net.corda.node.internal.Node.startMessagingService(Node.kt:301) ~[corda-node-3.2-corda.jar:?]
How do I fix this?
IntelliJ Ultimate ships with the YourKit profiler, which by default starts when IntelliJ starts and listens on port 10001 - the default port for the notary in the driver.
You can locate the config for this here and alter it to use a different port as per this.
Your new config line will look something like this:
-agentlib:yjpagent=delay=10000,probe_disable=*,port=30000

Rabbitmq server crash randomly occur and I don't know why

I want to know the cause of a RabbitMQ crash that occurs randomly. Can you let me know what kinds of causes should be considered?
Also, my team currently has to restart RabbitMQ manually whenever the crash happens, so I want to know if there is a way to restart the RabbitMQ server automatically.
Here is the error report from when the RabbitMQ crash occurs:
=WARNING REPORT==== 6-Dec-2017::07:56:43 ===
closing AMQP connection <0.4387.0> (000000:23070 -> 00000:5672, vhost: '/', user: '00000'):
client unexpectedly closed TCP connection
Also, this is part of the sasl.gsd file:
=SUPERVISOR REPORT==== 7-Dec-2017::10:03:15 ===
Supervisor: {local,sockjs_session_sup}
Context: child_terminated
Reason: {function_clause,
[{gen_server,cast,
[{},sockjs_closed],
[{file,"gen_server.erl"},{line,218}]},
{rabbit_ws_sockjs,service_stomp,3,
[{file,"src/rabbit_ws_sockjs.erl"},{line,150}]},
{sockjs_session,emit,2,
[{file,"src/sockjs_session.erl"},{line,173}]},
{sockjs_session,terminate,2,
[{file,"src/sockjs_session.erl"},{line,311}]},
{gen_server,try_terminate,3,
[{file,"gen_server.erl"},{line,629}]},
{gen_server,terminate,7,
[{file,"gen_server.erl"},{line,795}]},
{proc_lib,init_p_do_apply,3,
[{file,"proc_lib.erl"},{line,247}]}]}
Offender: [{pid,<0.20883.1160>},
{id,undefined},
{mfargs,
{sockjs_session,start_link,
["pd4tvvi0",
{service,"/stomp",
#Fun<rabbit_ws_sockjs.1.47892404>,{},
"//cdn.jsdelivr.net/sockjs/1.0.3/sockjs.min.js",
false,true,5000,25000,131072,
#Fun<rabbit_ws_sockjs.0.47892404>,undefined},
[{peername,{{172,31,6,213},9910}},
{sockname,{{172,31,5,49},15674}},
{path,"/stomp/744/pd4tvvi0/htmlfile"},
{headers,[]},
{socket,#Port<0.12491352>}]]}},
{restart_type,transient},
{shutdown,5000},
{child_type,worker}]
=CRASH REPORT==== 7-Dec-2017::10:03:20 ===
crasher:
initial call: sockjs_session:init/1
pid: <0.25851.1160>
registered_name: []
exception exit: {function_clause,
[{gen_server,cast,
[{},sockjs_closed],
[{file,"gen_server.erl"},{line,218}]},
{rabbit_ws_sockjs,service_stomp,3,
[{file,"src/rabbit_ws_sockjs.erl"},{line,150}]},
{sockjs_session,emit,2,
[{file,"src/sockjs_session.erl"},{line,173}]},
{sockjs_session,terminate,2,
[{file,"src/sockjs_session.erl"},{line,311}]},
{gen_server,try_terminate,3,
[{file,"gen_server.erl"},{line,629}]},
{gen_server,terminate,7,
[{file,"gen_server.erl"},{line,795}]},
{proc_lib,init_p_do_apply,3,
[{file,"proc_lib.erl"},{line,247}]}]}
in function gen_server:terminate/7 (gen_server.erl, line 800)
ancestors: [sockjs_session_sup,<0.177.0>]
messages: []
links: [<0.178.0>]
dictionary: []
trap_exit: true
status: running
heap_size: 987
stack_size: 27
reductions: 175
neighbours:
Please check out the error report I posted above and let me know the possible causes of the RabbitMQ crash and a way to automatically restart the RabbitMQ server.
Thanks!!

Starting Server node on a cluster through Service and trying loadcache

I debugged the code and found all parameters being set appropriately, and even in the console we can see that the server on the remote node has started and the cache has been initialized.
All the required parameters are passed through the DB.
On trying to assert on the cache without lazy load (hot-loading it from the persistent store), I get the error log shown further below.
I am not able to understand what is going wrong in the cluster, so I have only attached the code that does the job of starting the servers.
initializeCache internally calls loadCache after all the key fields and JDBC types are set.
class {
    private void startNodes() {
        logger.info("Starting Ignite Nodes");
        IgniteCluster igniteCluster = rocCachemanager.getCluster();

        // HashMaps for holding host and default configurations
        HashMap<String, Object> defaults = new HashMap<>();
        HashMap<String, Object> hmHosts;

        // get Ignite configuration from DB
        List<IgniteConfigPojo> list = igniteConfigImpl.getIgniteConfigList();
        IgniteConfigPojo configPojo = list.get(0);
        List<IgniteNodeMapPojo> listNodeMap = configPojo.getIgniteNodeMap();

        // Collection of Host configuration
        Collection<Map<String, Object>> hosts = new ArrayList<>();

        // Prepare the map with all the ignite server host information
        prepareHostList(listNodeMap, hosts);

        // Actual start of remote nodes via ssh call
        try {
            if (listNodeMap.size() != igniteCluster.forServers().nodes().size()) {
                Collection<ClusterStartNodeResult> result = igniteCluster.startNodes(hosts, defaults, false, 10000, 1);
                for (ClusterStartNodeResult res : result) {
                    if (!res.isSuccess()) {
                        throw new ROCCacheException(res.getError());
                    } else {
                        logger.info("Ignite server start successfully triggered on machine " + res.getHostName());
                    }
                }
            }
            int waitTime = 0;
            while (listNodeMap.size() != igniteCluster.forServers().nodes().size()) {
                if (waitTime >= MAX_TIME_FOR_SERVER_START) {
                    int serverNodes = igniteCluster.forServers().nodes().size();
                    throw new ROCCacheException("All the Server nodes have not joined the Ignite Cluster, Expected servers :"
                            + listNodeMap.size() + " , actual :" + serverNodes);
                }
                synchronized (this) {
                    wait(2000);
                }
                waitTime += 2000;
            }
            logger.info("Successfully started all the ignite servers");
        } catch (IgniteException e) {
            throw new ROCCacheException("Error while starting the Ignite Servers", e);
        } catch (InterruptedException e) {
            throw new ROCCacheException("Error while starting the Ignite Servers,Received Interrupt signal", e);
        }
    }

    @Override
    public void onLeaderStart() {
        startNodes();
        initializeBookeeperCache();
        initializeCaches();
    }
}

@Test
@Transactional(propagation = Propagation.SUPPORTS)
public void startNodeTest() {
    try {
        roccacheservice.onLeaderStart();

        Collection<ClusterNode> colClusterClientNodes = rocCacheManager.getCluster().forClients().nodes();
        for (ClusterNode clientNode : colClusterClientNodes) {
            assertEquals(clientNode.addresses().contains("10.113.56.110"), true);
        }

        Collection<ClusterNode> colClusterServerNodes = rocCacheManager.getCluster().forServers().nodes();
        for (ClusterNode serverNode : colClusterServerNodes) {
            assertEquals(serverNode.addresses().contains("10.113.56.231"), true);
            System.out.println(serverNode.metrics());
        }

        // ****************************works fine till here****************************

        ROCCacheConfiguration<Long, PersonPojo> new4 = new ROCCacheConfiguration<>();
        new4.setName("Person");
        ROCCache<Long, PersonPojo> orgCache4 = rocCacheManager.createCache(new4);
        assertEquals(orgCache4.get(1L).getName(), "Abhishek");
        assertEquals(orgCache4.get(1L).getAge(), 25);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
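For context, what initializeCache()/initializeCaches() boils down to is roughly the following (a sketch using a plain CacheJdbcPojoStoreFactory with assumed names; it is not the actual project code):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory;
import org.apache.ignite.configuration.CacheConfiguration;

class CacheInitSketch {
    // Sketch: configure a read-through, JDBC-backed cache and warm it up with loadCache().
    // The data source bean name and the omitted JdbcType/key-field setup are assumptions.
    static void initializePersonCache(Ignite ignite) {
        CacheConfiguration<Long, PersonPojo> cfg = new CacheConfiguration<>("Person");

        CacheJdbcPojoStoreFactory<Long, PersonPojo> storeFactory = new CacheJdbcPojoStoreFactory<>();
        storeFactory.setDataSourceBean("personDataSource"); // assumed Spring bean name
        // storeFactory.setTypes(...) would set the key fields and JDBC types here

        cfg.setCacheStoreFactory(storeFactory);
        cfg.setReadThrough(true);

        IgniteCache<Long, PersonPojo> cache = ignite.getOrCreateCache(cfg);

        // loadCache() runs the store's load logic on every server node that owns the cache,
        // so a failure inside it is reported back to the client as a failed compute job.
        cache.loadCache(null);
    }
}

Seen that way, the NullPointerException below is thrown on the remote server node while it executes the load closure, which matches the "Remote job threw user exception" message in the log.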
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/abhisheks/.m2/repository/org/slf4j/slf4j-simple/1.7.19/slf4j-simple-1.7.19.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/abhisheks/.m2/repository/org/slf4j/slf4j-log4j12/1.7.10/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]
[main] INFO org.springframework.test.context.support.DefaultTestContextBootstrapper - Loaded default TestExecutionListener class names from location [META-INF/spring.factories]: [org.springframework.test.context.web.ServletTestExecutionListener, org.springframework.test.context.support.DirtiesContextBeforeModesTestExecutionListener, org.springframework.test.context.support.DependencyInjectionTestExecutionListener, org.springframework.test.context.support.DirtiesContextTestExecutionListener, org.springframework.test.context.transaction.TransactionalTestExecutionListener, org.springframework.test.context.jdbc.SqlScriptsTestExecutionListener]
[main] INFO org.springframework.test.context.support.DefaultTestContextBootstrapper - Could not instantiate TestExecutionListener [org.springframework.test.context.web.ServletTestExecutionListener]. Specify custom listener classes or make the default listener classes (and their required dependencies) available. Offending class: [org/springframework/web/context/request/RequestAttributes]
[main] INFO org.springframework.test.context.support.DefaultTestContextBootstrapper - Using TestExecutionListeners: [org.springframework.test.context.support.DirtiesContextBeforeModesTestExecutionListener#5383967b, org.springframework.test.context.support.DependencyInjectionTestExecutionListener#2ac273d3, org.springframework.test.context.support.DirtiesContextTestExecutionListener#71423665, org.springframework.test.context.transaction.TransactionalTestExecutionListener#20398b7c, org.springframework.test.context.jdbc.SqlScriptsTestExecutionListener#6fc6f14e]
[main] INFO org.springframework.context.support.GenericApplicationContext - Refreshing org.springframework.context.support.GenericApplicationContext#d44fc21: startup date [Tue Apr 19 15:32:01 IST 2016]; root of context hierarchy
[main] WARN org.springframework.context.annotation.ConfigurationClassEnhancer - @Bean method IgniteStoreConfig.getPropertySourcesPlaceholderConfigurer is non-static and returns an object assignable to Spring's BeanFactoryPostProcessor interface. This will result in a failure to process annotations such as @Autowired, @Resource and @PostConstruct within the method's declaring @Configuration class. Add the 'static' modifier to this method to avoid these container lifecycle issues; see @Bean javadoc for complete details.
[main] INFO org.springframework.context.support.PropertySourcesPlaceholderConfigurer - Loading properties file from class path resource [ignitePersistentStore.properties]
[main] INFO org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor - JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
[main] INFO org.springframework.jdbc.datasource.DriverManagerDataSource - Loaded JDBC driver: com.mysql.jdbc.Driver
[main] INFO org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean - Building JPA container EntityManagerFactory for persistence unit 'ben'
HHH000204: Processing PersistenceUnitInfo [
name: ben
...]
HHH000412: Hibernate Core {5.0.7.Final}
HHH000206: hibernate.properties not found
HHH000021: Bytecode provider name : javassist
HCANN000001: Hibernate Commons Annotations {5.0.1.Final}
HHH000400: Using dialect: org.hibernate.dialect.MySQLDialect
HHH000457: Joined inheritance hierarchy [com.subex.roc.schema.md.TraitValue] defined explicit #DiscriminatorColumn. Legacy Hibernate behavior was to ignore the #DiscriminatorColumn. However, as part of issue HHH-6911 we now apply the explicit #DiscriminatorColumn. If you would prefer the legacy behavior, enable the `hibernate.discriminator.ignore_explicit_for_joined` setting (hibernate.discriminator.ignore_explicit_for_joined=true)
HHH000228: Running hbm2ddl schema update
HHH000262: Table not found: SREG_Field
HHH000262: Table not found: SREG_Field
HHH000262: Table not found: SREG_Model
HHH000262: Table not found: SREG_Model
HHH000262: Table not found: SREG_Trait
HHH000262: Table not found: SREG_Trait
HHH000262: Table not found: SREG_TraitGroup
HHH000262: Table not found: SREG_TraitGroup
HHH000262: Table not found: SREG_TraitMultiValue
HHH000262: Table not found: SREG_TraitMultiValue
HHH000262: Table not found: SREG_TraitSingleValue
HHH000262: Table not found: SREG_TraitSingleValue
HHH000262: Table not found: SREG_TraitValueBase
HHH000262: Table not found: SREG_TraitValueBase
HHH000262: Table not found: SREG_TraitValueStore
HHH000262: Table not found: SREG_TraitValueStore
HHH000397: Using ASTQueryTranslatorFactory
Hibernate: select igniteconf0_.icf_id as icf_id1_0_, igniteconf0_.enable_peerclassload as enable_p2_0_, igniteconf0_.grid_name as grid_nam3_0_, igniteconf0_.join_timeout as join_tim4_0_ from ignite_config igniteconf0_
Hibernate: select ignitenode0_.icf_id as icf_id3_1_0_, ignitenode0_.inm_id as inm_id1_1_0_, ignitenode0_.inm_id as inm_id1_1_1_, ignitenode0_.icf_id as icf_id3_1_1_, ignitenode0_.nod_id as nod_id4_1_1_, ignitenode0_.port_range as port_ran2_1_1_, rocnodepoj1_.nod_id as nod_id1_4_2_, rocnodepoj1_.nod_address as nod_addr2_4_2_, rocnodedea2_.rnd_id as rnd_id1_3_3_, rocnodedea2_.nod_id as nod_id2_3_3_, rocnodedea2_.rnd_ignite_home as rnd_igni3_3_3_, rocnodedea2_.rnd_numberof_nodes as rnd_numb4_3_3_, rocnodedea2_.rnd_password as rnd_pass5_3_3_, rocnodedea2_.rnd_ssh_port as rnd_ssh_6_3_3_, rocnodedea2_.rnd_user_name as rnd_user7_3_3_ from ignite_node_map ignitenode0_ left outer join roc_nodes rocnodepoj1_ on ignitenode0_.nod_id=rocnodepoj1_.nod_id left outer join roc_node_detail rocnodedea2_ on rocnodepoj1_.nod_id=rocnodedea2_.rnd_id where ignitenode0_.icf_id=?
[main] INFO org.springframework.beans.factory.xml.XmlBeanDefinitionReader - Loading XML bean definitions from URL [file:/home/abhisheks/Desktop/apache-ignite-fabric-1.5.0.final-bin/conf/spring_igniteConfig.xml]
[main] INFO org.springframework.beans.factory.xml.XmlBeanDefinitionReader - Loading XML bean definitions from URL [file:/home/abhisheks/Desktop/apache-ignite-fabric-1.5.0.final-bin/conf/ignite_Config.xml]
[main] INFO org.springframework.context.support.GenericApplicationContext - Refreshing org.springframework.context.support.GenericApplicationContext#5d8ab698: startup date [Tue Apr 19 15:32:04 IST 2016]; root of context hierarchy
[main] INFO org.springframework.jdbc.datasource.DriverManagerDataSource - Loaded JDBC driver: com.mysql.jdbc.Driver
>>> __________ ________________
>>> / _/ ___/ |/ / _/_ __/ __/
>>> _/ // (7 7 // / / / / _/
>>> /___/\___/_/|_/___/ /_/ /___/
>>>
>>> ver. 1.5.0-final#20151229-sha1:f1f8cda2
>>> 2015 Copyright(C) Apache Software Foundation
>>>
>>> Ignite documentation: http://ignite.apache.org
Config URL: file:/home/abhisheks/Desktop/apache-ignite-fabric-1.5.0.final-bin/conf/spring_igniteConfig.xml
Daemon mode: off
OS: Linux 2.6.32-504.el6.x86_64 amd64
OS user: abhisheks
Language runtime: Java Platform API Specification ver. 1.8
VM information: Java(TM) SE Runtime Environment 1.8.0_66-b17 Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.66-b17
VM total memory: 1.7GB
Remote Management [restart: off, REST: on, JMX (remote: off)]
IGNITE_HOME=/home/abhisheks/Desktop/apache-ignite-fabric-1.5.0.final-bin
VM arguments: [-Dfile.encoding=UTF-8]
Configured caches ['ignite-marshaller-sys-cache', 'ignite-sys-cache', 'ignite-atomics-sys-cache']
3-rd party licenses can be found at: /home/abhisheks/Desktop/apache-ignite-fabric-1.5.0.final-bin/libs/licenses
Initial heap size is 122MB (should be no less than 512MB, use -Xms512m -Xmx512m).
Non-loopback local IPs: 10.113.56.110, 192.168.122.1, fe80:0:0:0:c634:6bff:fe4f:784d%eth1
Enabled local MACs: 5254004ABB26, C4346B4F784D
Configured plugins:
^-- None
IPC shared memory server endpoint started [port=48100, tokDir=/home/abhisheks/Desktop/apache-ignite-fabric-1.5.0.final-bin/work/ipc/shmem/8f12688b-fef6-4981-a5f4-aa6781438930-23547]
Successfully bound shared memory communication to TCP port [port=48100, locHost=0.0.0.0/0.0.0.0]
Successfully bound to TCP port [port=47100, locHost=0.0.0.0/0.0.0.0]
Checkpoints are disabled (to enable configure any GridCheckpointSpi implementation)
Collision resolution is disabled (all jobs will be activated upon arrival).
Swap space is disabled. To enable use FileSwapSpaceSpi.
Security status [authentication=off, tls/ssl=off]
Command protocol successfully started [name=TCP binary, host=0.0.0.0/0.0.0.0, port=11211]
Successfully bound to TCP port [port=47500, localHost=0.0.0.0/0.0.0.0]
Started cache [name=ignite-sys-cache, mode=REPLICATED]
Started cache [name=ignite-atomics-sys-cache, mode=PARTITIONED]
Started cache [name=ignite-marshaller-sys-cache, mode=REPLICATED]
Performance suggestions for grid 'subexIgnite' (fix if possible)
To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
^-- Disable grid events (remove 'includeEventTypes' from configuration)
^-- Enable client mode for TcpDiscoverySpi (set TcpDiscoverySpi.forceServerMode to false)
To start Console Management & Monitoring run ignitevisorcmd.{sh|bat}
>>> +----------------------------------------------------------------------------+
>>> Ignite ver. 1.5.0-final#20151229-sha1:f1f8cda2f3f62231f42a59951bf34c39577c1bec
>>> +----------------------------------------------------------------------------+
>>> OS name: Linux 2.6.32-504.el6.x86_64 amd64
>>> CPU(s): 8
>>> Heap: 1.7GB
>>> VM name: 23547#abhisheks
>>> Grid name: subexIgnite
>>> Local node [ID=8F12688B-FEF6-4981-A5F4-AA6781438930, order=1, clientMode=true]
>>> Local node addresses: [192.168.122.1/0:0:0:0:0:0:0:1%lo, abhisheks/10.113.56.110, /127.0.0.1, /192.168.122.1]
>>> Local ports: TCP:11211 TCP:47100 TCP:47500 TCP:48100
Topology snapshot [ver=1, servers=0, clients=1, CPUs=8, heap=1.7GB]
[main] INFO org.springframework.test.context.transaction.TransactionContext - Began transaction (1) for test context [DefaultTestContext#2478b629 testClass = StartServiceTest, testInstance = com.subex.roc.cache.startserviceintegration.StartServiceTest#39023dbf, testMethod = startNodeTest#StartServiceTest, testException = [null], mergedContextConfiguration = [MergedContextConfiguration#2c2c3947 testClass = StartServiceTest, locations = '{}', classes = '{class com.subex.roc.cache.IgniteJPAConfiguration, class com.subex.roc.cache.IgniteEnvConfiguration}', contextInitializerClasses = '[]', activeProfiles = '{}', propertySourceLocations = '{}', propertySourceProperties = '{}', contextLoader = 'org.springframework.test.context.support.DelegatingSmartContextLoader', parent = [null]]]; transaction manager [org.springframework.orm.jpa.JpaTransactionManager#1a2ac487]; rollback [true]
[main] INFO com.subex.roc.cache.ROCCacheService - Starting Ignite Nodes
Hibernate: select igniteconf0_.icf_id as icf_id1_0_, igniteconf0_.enable_peerclassload as enable_p2_0_, igniteconf0_.grid_name as grid_nam3_0_, igniteconf0_.join_timeout as join_tim4_0_ from ignite_config igniteconf0_
Hibernate: select ignitenode0_.icf_id as icf_id3_1_0_, ignitenode0_.inm_id as inm_id1_1_0_, ignitenode0_.inm_id as inm_id1_1_1_, ignitenode0_.icf_id as icf_id3_1_1_, ignitenode0_.nod_id as nod_id4_1_1_, ignitenode0_.port_range as port_ran2_1_1_, rocnodepoj1_.nod_id as nod_id1_4_2_, rocnodepoj1_.nod_address as nod_addr2_4_2_, rocnodedea2_.rnd_id as rnd_id1_3_3_, rocnodedea2_.nod_id as nod_id2_3_3_, rocnodedea2_.rnd_ignite_home as rnd_igni3_3_3_, rocnodedea2_.rnd_numberof_nodes as rnd_numb4_3_3_, rocnodedea2_.rnd_password as rnd_pass5_3_3_, rocnodedea2_.rnd_ssh_port as rnd_ssh_6_3_3_, rocnodedea2_.rnd_user_name as rnd_user7_3_3_ from ignite_node_map ignitenode0_ left outer join roc_nodes rocnodepoj1_ on ignitenode0_.nod_id=rocnodepoj1_.nod_id left outer join roc_node_detail rocnodedea2_ on rocnodepoj1_.nod_id=rocnodedea2_.rnd_id where ignitenode0_.icf_id=?
Starting remote node with SSH command: nohup "/home/benakaraj/Downloads/apache-ignite-fabric-1.5.0.final-bin/bin/ignite.sh" -v "conf/spring_igniteConfig.xml" -J-DIGNITE_SSH_HOST="10.113.56.231" -J-DIGNITE_SSH_USER_NAME="root" > ignite-startNodes/04-19-2016--15-32-05-521bc7ca.log 2>& 1 &
[main] INFO com.subex.roc.cache.ROCCacheService - Ignite server start successfully triggered on machine 10.113.56.231
Your version is up to date.
Local java version is different from remote [loc=8, rmt=7]
Added new node to topology: TcpDiscoveryNode [id=e93bc2fa-8a37-4a50-9a22-071abece643f, addrs=[0:0:0:0:0:0:0:1%1, 10.113.56.231, 127.0.0.1, 192.168.122.1], sockAddrs=[/192.168.122.1:47500, /0:0:0:0:0:0:0:1%1:47500, /10.113.56.231:47500, /10.113.56.231:47500, /127.0.0.1:47500, /192.168.122.1:47500], discPort=47500, order=2, intOrder=2, lastExchangeTime=1461060127285, loc=false, ver=1.5.0#20151229-sha1:f1f8cda2, isClient=false]
Topology snapshot [ver=2, servers=1, clients=1, CPUs=16, heap=2.7GB]
[main] INFO com.subex.roc.cache.ROCCacheService - Successfully started all the ignite servers
Started cache [name=bookeeperCache, mode=PARTITIONED]
Hibernate: select roccacheco0_.rcc_id as rcc_id1_2_, roccacheco0_.automicity_mode as automici2_2_, roccacheco0_.backup_count as backup_c3_2_, roccacheco0_.cache_mode as cache_mo4_2_, roccacheco0_.cache_writeorder_mode as cache_wr5_2_, roccacheco0_.eviction_policy as eviction6_2_, roccacheco0_.filterClass as filterCl7_2_, roccacheco0_.is_lazy_load as is_lazy_8_2_, roccacheco0_.is_near_cache as is_near_9_2_, roccacheco0_.is_read_through as is_read10_2_, roccacheco0_.is_write_behind as is_writ11_2_, roccacheco0_.is_write_through as is_writ12_2_, roccacheco0_.key_class as key_cla13_2_, roccacheco0_.max_cache_entries as max_cac14_2_, roccacheco0_.rcc_cache_name as rcc_cac15_2_, roccacheco0_.rcc_table_name as rcc_tab16_2_, roccacheco0_.schema_version as schema_17_2_, roccacheco0_.value_class as value_c18_2_, roccacheco0_.writebehind_batch_size as writebe19_2_, roccacheco0_.writebehind_flush_freq as writebe20_2_, roccacheco0_.writebehind_flush_size as writebe21_2_ from roc_cache_config roccacheco0_
Hibernate: select model0_.id as id1_6_, model0_.description as descript2_6_, model0_.name as name3_6_, model0_.version as version4_6_ from SREG_Model model0_ where model0_.name=? and model0_.version=?
Hibernate: select fields0_.model_id as model_id5_5_0_, fields0_.id as id1_5_0_, fields0_.id as id1_5_1_, fields0_.name as name2_5_1_, fields0_.position as position3_5_1_, fields0_.type as type4_5_1_ from SREG_Field fields0_ where fields0_.model_id=?
Hibernate: select traitgroup0_.field_id as field_id3_8_0_, traitgroup0_.id as id1_8_0_, traitgroup0_.id as id1_8_1_, traitgroup0_.name as name2_8_1_ from SREG_TraitGroup traitgroup0_ where traitgroup0_.field_id=?
Hibernate: select traitgroup0_.field_id as field_id3_8_0_, traitgroup0_.id as id1_8_0_, traitgroup0_.id as id1_8_1_, traitgroup0_.name as name2_8_1_ from SREG_TraitGroup traitgroup0_ where traitgroup0_.field_id=?
Hibernate: select traitgroup0_.field_id as field_id3_8_0_, traitgroup0_.id as id1_8_0_, traitgroup0_.id as id1_8_1_, traitgroup0_.name as name2_8_1_ from SREG_TraitGroup traitgroup0_ where traitgroup0_.field_id=?
Hibernate: select traitgroup0_.model_id as model_id4_8_0_, traitgroup0_.id as id1_8_0_, traitgroup0_.id as id1_8_1_, traitgroup0_.name as name2_8_1_ from SREG_TraitGroup traitgroup0_ where traitgroup0_.model_id=?
Hibernate: select traits0_.group_id as group_id5_7_0_, traits0_.id as id1_7_0_, traits0_.id as id1_7_1_, traits0_.data_type as data_typ2_7_1_, traits0_.name as name3_7_1_, traits0_.trait_id as trait_id4_7_1_, traitvalue1_.id as id2_11_2_, traitvalue1_2_.value as value1_10_2_, traitvalue1_.trait_type as trait_ty1_11_2_ from SREG_Trait traits0_ left outer join SREG_TraitValueBase traitvalue1_ on traits0_.trait_id=traitvalue1_.id left outer join SREG_TraitMultiValue traitvalue1_1_ on traitvalue1_.id=traitvalue1_1_.id left outer join SREG_TraitSingleValue traitvalue1_2_ on traitvalue1_.id=traitvalue1_2_.id where traits0_.group_id=?
Started cache [name=Person, mode=REPLICATED]
Failed to obtain remote job result policy for result from ComputeTask.result(..) method (will fail the whole task): GridJobResultImpl [job=C2 [], sib=GridJobSiblingImpl [sesId=e4f38fd2451-8f12688b-fef6-4981-a5f4-aa6781438930, jobId=15f38fd2451-e93bc2fa-8a37-4a50-9a22-071abece643f, nodeId=e93bc2fa-8a37-4a50-9a22-071abece643f, isJobDone=false], jobCtx=GridJobContextImpl [jobId=15f38fd2451-e93bc2fa-8a37-4a50-9a22-071abece643f, timeoutObj=null, attrs={}], node=TcpDiscoveryNode [id=e93bc2fa-8a37-4a50-9a22-071abece643f, addrs=[0:0:0:0:0:0:0:1%1, 10.113.56.231, 127.0.0.1, 192.168.122.1], sockAddrs=[/192.168.122.1:47500, /0:0:0:0:0:0:0:1%1:47500, /10.113.56.231:47500, /10.113.56.231:47500, /127.0.0.1:47500, /192.168.122.1:47500], discPort=47500, order=2, intOrder=2, lastExchangeTime=1461060127285, loc=false, ver=1.5.0#20151229-sha1:f1f8cda2, isClient=false], ex=class o.a.i.IgniteException: null, hasRes=true, isCancelled=false, isOccupied=true]
class org.apache.ignite.IgniteException: Remote job threw user exception (override or implement ComputeTask.result(..) method if you would like to have automatic failover for this exception).
at org.apache.ignite.compute.ComputeTaskAdapter.result(ComputeTaskAdapter.java:101)
at org.apache.ignite.internal.processors.task.GridTaskWorker$3.apply(GridTaskWorker.java:909)
at org.apache.ignite.internal.processors.task.GridTaskWorker$3.apply(GridTaskWorker.java:902)
at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6429)
at org.apache.ignite.internal.processors.task.GridTaskWorker.result(GridTaskWorker.java:902)
at org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:798)
at org.apache.ignite.internal.processors.task.GridTaskProcessor.processJobExecuteResponse(GridTaskProcessor.java:995)
at org.apache.ignite.internal.processors.task.GridTaskProcessor$JobMessageListener.onMessage(GridTaskProcessor.java:1219)
at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:821)
at org.apache.ignite.internal.managers.communication.GridIoManager.access$1600(GridIoManager.java:103)
at org.apache.ignite.internal.managers.communication.GridIoManager$5.run(GridIoManager.java:784)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: class org.apache.ignite.IgniteException: null
at org.apache.ignite.internal.processors.closure.GridClosureProcessor$C2.execute(GridClosureProcessor.java:1792)
at org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:509)
at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6397)
at org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:503)
at org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:456)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1166)
at org.apache.ignite.internal.processors.job.GridJobProcessor$JobExecutionListener.onMessage(GridJobProcessor.java:1770)
at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:821)
at org.apache.ignite.internal.managers.communication.GridIoManager.access$1600(GridIoManager.java:103)
at org.apache.ignite.internal.managers.communication.GridIoManager$5.run(GridIoManager.java:784)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
... 1 more
Caused by: java.lang.NullPointerException
at org.apache.ignite.internal.processors.cache.GridCacheAdapter$LoadCacheClosure.call(GridCacheAdapter.java:5769)
at org.apache.ignite.internal.processors.cache.GridCacheAdapter$LoadCacheClosure.call(GridCacheAdapter.java:5716)
at org.apache.ignite.internal.processors.closure.GridClosureProcessor$C2.execute(GridClosureProcessor.java:1789)
... 13 more
java.lang.NullPointerException
at org.apache.ignite.internal.processors.cache.GridCacheAdapter$LoadCacheClosure.call(GridCacheAdapter.java:5769)
at org.apache.ignite.internal.processors.cache.GridCacheAdapter$LoadCacheClosure.call(GridCacheAdapter.java:5716)
at org.apache.ignite.internal.processors.closure.GridClosureProcessor$C2.execute(GridClosureProcessor.java:1789)
at org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:509)
at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6397)
at org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:503)
at org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:456)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1166)
at org.apache.ignite.internal.processors.job.GridJobProcessor$JobExecutionListener.onMessage(GridJobProcessor.java:1770)
at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:821)
at org.apache.ignite.internal.managers.communication.GridIoManager.access$1600(GridIoManager.java:103)
at org.apache.ignite.internal.managers.communication.GridIoManager$5.run(GridIoManager.java:784)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[main] INFO org.springframework.test.context.transaction.TransactionContext - Rolled back transaction for test context [DefaultTestContext#2478b629 testClass = StartServiceTest, testInstance = com.subex.roc.cache.startserviceintegration.StartServiceTest#39023dbf, testMethod = startNodeTest#StartServiceTest, testException = [null], mergedContextConfiguration = [MergedContextConfiguration#2c2c3947 testClass = StartServiceTest, locations = '{}', classes = '{class com.subex.roc.cache.IgniteJPAConfiguration, class com.subex.roc.cache.IgniteEnvConfiguration}', contextInitializerClasses = '[]', activeProfiles = '{}', propertySourceLocations = '{}', propertySourceProperties = '{}', contextLoader = 'org.springframework.test.context.support.DelegatingSmartContextLoader', parent = [null]]].
Invoking shutdown hook...
[Thread-3] INFO org.springframework.context.support.GenericApplicationContext - Closing org.springframework.context.support.GenericApplicationContext#d44fc21: startup date [Tue Apr 19 15:32:01 IST 2016]; root of context hierarchy
Command protocol successfully stopped: TCP binary
Stopped cache: ignite-marshaller-sys-cache
Stopped cache: ignite-sys-cache
Stopped cache: ignite-atomics-sys-cache
Stopped cache: bookeeperCache
Stopped cache: Person
>>> +---------------------------------------------------------------------------------------+
>>> Ignite ver. 1.5.0-final#20151229-sha1:f1f8cda2f3f62231f42a59951bf34c39577c1bec stopped OK
>>> +---------------------------------------------------------------------------------------+
>>> Grid name: subexIgnite
>>> Grid uptime: 00:00:14:747
[Thread-3] INFO org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean - Closing JPA EntityManagerFactory for persistence unit 'ben'
The possibility of this NPE is removed in the latest Ignite version (1.6.0). It can be downloaded here: ignite.apache.org/download.cgi#binaries