OpsCenter not getting data after restart of server - datastax

We are using DataStax Enterprise edition, running a 2-node cluster. After restarting the OpsCenter node, we get the error below:
2017-03-20 14:49:45,819 [opscenterd] ERROR: Unhandled error in
Deferred: There are no clusters with name or ID 'tracking'
File "/usr/share/opscenter/lib/py/twisted/internet/defer.py", line 1124, in _inlineCallbacks
result = g.send(result)
File "/usr/share/opscenter/jython/Lib/site-packages/opscenterd/WebServer.py",
line 523, in ClusterController
File "/usr/share/opscenter/jython/Lib/site-packages/opscenterd/ClusterServices.py",
line 181, in __getitem__
(MainThread)
Agent log:
WARN [async-dispatch-23] 2017-03-20 17:13:45,230 Attempted to ping opscenterd on stomp but did not receive a reply in time, will retry again later.
ERROR [StompConnection receiver] 2017-03-20 17:13:45,230 Mar 20, 2017 5:13:45 PM org.jgroups.client.StompConnection run
SEVERE: JGRP000112: Connection closed unexpectedly:
java.net.SocketException: Socket closed
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.net.SocketInputStream.read(SocketInputStream.java:223)
at java.io.FilterInputStream.read(FilterInputStream.java:83)
at org.jgroups.util.Util.readLine(Util.java:2825)
at org.jgroups.protocols.STOMP.readFrame(STOMP.java:240)
at org.jgroups.client.StompConnection.run(StompConnection.java:274)
at java.lang.Thread.run(Thread.java:745)
INFO [async-dispatch-23] 2017-03-20 17:13:45,236 Starting DynamicEnvironmentComponent
INFO [async-dispatch-23] 2017-03-20 17:13:45,512 Dynamic environment script output: paths:
cassandra-conf: /etc/dse//cassandra
cassandra-log: /var/log/cassandra
hadoop-log: /var/log/hadoop/userlogs
spark-log: /var/log/spark
dse-env: /etc/dse
dse-conf: /etc/dse/
hadoop-conf: /etc/dse/hadoop2-client
spark-conf: /etc/dse//spark
INFO [async-dispatch-23] 2017-03-20 17:13:45,522 Starting storage database connection.
ERROR [async-dispatch-23] 2017-03-20 17:13:47,737 Can't connect to Cassandra (All host(s) tried for query failed (tried: /127.0.0.1:9042 (com.datastax.driver.core.exceptions.TransportException: [/127.0.0.1:9042] Cannot connect))), retrying soon.
INFO [async-dispatch-23] 2017-03-20 17:13:47,738 Starting monitored database connection.
ERROR [async-dispatch-23] 2017-03-20 17:13:49,965 Can't connect to Cassandra, authentication error, please carefully check your Auth settings, retrying soon.
INFO [async-dispatch-23] 2017-03-20 17:13:49,967 Starting RepairComponent
INFO [async-dispatch-23] 2017-03-20 17:13:49,970 Finished starting system.
INFO [async-dispatch-26] 2017-03-20 17:13:59,971 Starting system.
INFO [async-dispatch-26] 2017-03-20 17:13:59,973 Configuration change for component class opsagent.nodedetails.repair.RepairComponent: before: {:send-repair-fn #object[opsagent.nodedetails.repair.jmx$send_repair 0x76028b5c "opsagent.nodedetails.repair.jmx$send_repair#76028b5c"], :parse-notification-fn #object[opsagent.nodedetails.repair.jmx$parse_notification 0x5e84cf80 "opsagent.nodedetails.repair.jmx$parse_notification#5e84cf80"]}, after: {:send-repair-fn nil, :parse-notification-fn nil}
INFO [async-dispatch-26] 2017-03-20 17:13:59,974 The following components have had a config change and will be rebuilt and restarted: (:repair-component)
INFO [async-dispatch-26] 2017-03-20 17:13:59,975 The component restart for (:repair-component) when accounting for dependencies requires these components to be restarted #{:repair-component :http-server}
INFO [async-dispatch-26] 2017-03-20 17:13:59,976 Stopping RepairComponent.
INFO [async-dispatch-26] 2017-03-20 17:13:59,977 Starting StompComponent
INFO [async-dispatch-26] 2017-03-20 17:13:59,978 SSL communication is disabled
INFO [async-dispatch-26] 2017-03-20 17:13:59,978 Creating stomp connection to 192.168.136.250:61620
ERROR [async-dispatch-26] 2017-03-20 17:13:59,980 Mar 20, 2017 5:13:59 PM org.jgroups.client.StompConnection connect
INFO: Connected to 192.168.136.250:61620
I am not able to understand what's wrong with the agent and OpsCenter.

Related

● libvirtd.service ,Active: inactive (dead), Initialization of QEMU state driver failed: invalid argument: Failed to parse user 'libvirt-qemu'

When I check the status of libvirtd using sudo systemctl status libvirtd, the output is as follows:
● libvirtd.service - Virtualization daemon
Loaded: loaded (/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Thu 2021-07-22 18:00:59 EDT; 1min 4s ago
TriggeredBy: ● libvirtd-admin.socket
● libvirtd.socket
● libvirtd-ro.socket
Docs: man:libvirtd(8)
https://libvirt.org
Process: 1717 ExecStart=/usr/sbin/libvirtd $libvirtd_opts (code=exited, status=0/SUCCESS)
Main PID: 1717 (code=exited, status=0/SUCCESS)
Jul 22 18:00:59 eb2-2259-lin04 systemd[1]: Starting Virtualization daemon...
Jul 22 18:00:59 eb2-2259-lin04 systemd[1]: Started Virtualization daemon.
Jul 22 18:00:59 eb2-2259-lin04 libvirtd[1717]: libvirt version: 6.0.0, package: 0ubuntu8.11 (Christian Ehrhardt <christian.ehrhardt@canonical.com> Tue, 05 Jan 2021 13:48:48 +0100)
Jul 22 18:00:59 eb2-2259-lin04 libvirtd[1717]: hostname: eb2-2259-lin04
Jul 22 18:00:59 eb2-2259-lin04 libvirtd[1717]: invalid argument: Failed to parse user 'libvirt-qemu'
Jul 22 18:00:59 eb2-2259-lin04 libvirtd[1717]: Initialization of QEMU state driver failed: invalid argument: Failed to parse user 'libvirt-qemu'
Jul 22 18:00:59 eb2-2259-lin04 libvirtd[1717]: Driver state initialization failed
Jul 22 18:00:59 eb2-2259-lin04 systemd[1]: libvirtd.service: Succeeded.
The status is always inactive (dead), and I keep getting the lines invalid argument: Failed to parse user 'libvirt-qemu', Initialization of QEMU state driver failed: invalid argument: Failed to parse user 'libvirt-qemu' and Driver state initialization failed.
I also tried sudo systemctl start libvirtd followed by sudo systemctl status libvirtd, but the issue doesn't get resolved.
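The Failed to parse user 'libvirt-qemu' error is libvirtd failing to resolve that account name to a UID (the name typically comes from the user = setting in /etc/libvirt/qemu.conf). As a first, hedged diagnostic, it's worth checking whether the account actually resolves on this machine:

```shell
# Check whether the 'libvirt-qemu' account named in the error can be resolved;
# if getent prints nothing, the lookup libvirtd performs will fail the same way.
getent passwd libvirt-qemu && echo "libvirt-qemu resolves" || echo "libvirt-qemu missing"
```

If the lookup fails even though the user is listed in /etc/passwd, an NSS misconfiguration (for example in /etc/nsswitch.conf) can also produce this symptom.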
I was actually installing the KVM2 driver for GPU support within minikube, following https://help.ubuntu.com/community/KVM/Installation. According to that guide, KVM is successfully installed on Ubuntu if the virsh list --all command does not return an error. For me, every step completed without error except virsh list --all, which returned the following:
error: failed to connect to the hypervisor
error: Cannot recv data: Connection reset by peer
and sometimes returns the following error
error: failed to connect to the hypervisor
error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Connection refused
When I start minikube with kvm2 as the driver using the command minikube start --driver kvm2, I get the following error:
😄 minikube v1.15.1 on Ubuntu 20.04
💢 Exiting due to GUEST_DRIVER_MISMATCH: The existing "minikube" cluster was created using the "virtualbox" driver, which is incompatible with requested "kvm2" driver.
💡 Suggestion: Delete the existing 'minikube' cluster using: 'minikube delete', or start the existing 'minikube' cluster using: 'minikube start --driver=virtualbox'
Please suggest how I can start minikube with kvm2 as the driver.
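The GUEST_DRIVER_MISMATCH message itself points at the fix: a minikube profile keeps the driver it was created with, so the old VirtualBox-backed cluster has to be deleted before a kvm2-backed one can be created. A minimal sketch (note this destroys the existing cluster and its state):

```shell
# Delete the existing VirtualBox-backed 'minikube' profile (removes its state),
# then recreate the cluster with the kvm2 driver.
minikube delete
minikube start --driver=kvm2
```

This only helps once libvirtd itself is healthy: the kvm2 driver needs a working connection to /var/run/libvirt/libvirt-sock, which the errors above show is currently being refused.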

run rabbitmq on windows

I was using RabbitMQ on Windows about six months ago. Now I am reinstalling RabbitMQ and Erlang, but I get this error:
C:\Program Files\new_RabbitMQ Server\rabbitmq_server-3.8.12\sbin>rabbitmq-server
Configuring logger redirection
12:49:39.059 [warning] Using RABBITMQ_ADVANCED_CONFIG_FILE: c:/Users/ALFA RAYAN/AppData/Roaming/RabbitMQ/advanced.config
12:49:39.810 [error]
BOOT FAILED
12:49:39.810 [error] BOOT FAILED
===========
12:49:39.810 [error] ===========
ERROR: distribution port 25672 in use by another node: rabbit#Ali
12:49:39.810 [error] ERROR: distribution port 25672 in use by another node: rabbit#Ali
12:49:39.810 [error]
12:49:40.826 [error] Supervisor rabbit_prelaunch_sup had child prelaunch started with rabbit_prelaunch:run_prelaunch_first_phase() at undefined exit with reason {dist_port_already_used,25672,"rabbit","Ali"} in context start_error
12:49:40.826 [error] CRASH REPORT Process <0.150.0> with 0 neighbours exited with reason: {{shutdown,{failed_to_start_child,prelaunch,{dist_port_already_used,25672,"rabbit","Ali"}}},{rabbit_prelaunch_app,start,[normal,[]]}} in application_master:init/4 line 138
{"Kernel pid terminated",application_controller,"{application_start_failure,rabbitmq_prelaunch,{{shutdown,{failed_to_start_child,prelaunch,{dist_port_already_used,25672,\"rabbit\",\"Ali\"}}},{rabbit_prelaunch_app,start,[normal,[]]}}}"}
Kernel pid terminated (application_controller) ({application_start_failure,rabbitmq_prelaunch,{{shutdown,{failed_to_start_child,prelaunch,{dist_port_already_used,25672,"rabbit","Ali"}}},{rabbit_prelau
Crash dump is being written to: erl_crash.dump...done
I am using otp_win64_23.2 and rabbitmq-server-3.8.12.
Go to Windows Services and look for RabbitMQ. Right-click it and stop the service.
Then restart the RabbitMQ server from an administrator prompt. This worked for me.

Rabbitmq server failed to boot

I'm trying to start my RabbitMQ server on Ubuntu 20.04 with the command sudo rabbitmq-server, but it crashes, and I have no clue what to do.
RabbitMQ version: 3.8.5, Erlang version: 23
17:41:07.587 [error]
17:41:07.591 [error] BOOT FAILED
BOOT FAILED
17:41:07.592 [error] ===========
===========
17:41:07.592 [error] ERROR: distribution port 25672 in use by rabbit#nadaanbaalak
ERROR: distribution port 25672 in use by rabbit#nadaanbaalak
17:41:07.592 [error]
17:41:08.594 [error] Supervisor rabbit_prelaunch_sup had child prelaunch started with rabbit_prelaunch:run_prelaunch_first_phase() at undefined exit with reason {dist_port_already_used,25672,"rabbit","nadaanbaalak"} in context start_error
17:41:08.594 [error] CRASH REPORT Process <0.153.0> with 0 neighbours exited with reason: {{shutdown,{failed_to_start_child,prelaunch,{dist_port_already_used,25672,"rabbit","nadaanbaalak"}}},{rabbit_prelaunch_app,start,[normal,[]]}} in application_master:init/4 line 138
{"Kernel pid terminated",application_controller,"{application_start_failure,rabbitmq_prelaunch,{{shutdown,{failed_to_start_child,prelaunch,{dist_port_already_used,25672,\"rabbit\",\"nadaanbaalak\"}}},{rabbit_prelaunch_app,start,[normal,[]]}}}"}
Kernel pid terminated (application_controller) ({application_start_failure,rabbitmq_prelaunch,{{shutdown,{failed_to_start_child,prelaunch,{dist_port_already_used,25672,"rabbit","nadaanbaalak"}}},{rabb
Crash dump is being written to: erl_crash.dump...done
Any help would be great.
It's pretty simple: the answer was in the logs. I killed the process holding port 25672 and restarted the RabbitMQ server.
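That sequence can be sketched as shell commands (assuming lsof is available; fuser -k 25672/tcp is an alternative):

```shell
# Find the process holding RabbitMQ's distribution port 25672 and kill it,
# then start the server again.
sudo lsof -t -i :25672 | xargs -r sudo kill
sudo rabbitmq-server
```

If the port is held by a RabbitMQ instance running as a service, sudo systemctl stop rabbitmq-server is the cleaner way to release it.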

Apache flink - Timeout after submitting job on hadoop / yarn cluster

I am trying to upgrade our job from Flink 1.4.2 to 1.7.1, but I keep running into timeouts after submitting the job. The Flink job runs on our Hadoop cluster (version 2.7) with YARN.
I've seen the following behavior:
Using the same flink-conf.yaml as we used in 1.4.2: versions 1.5.6 / 1.6.3 / 1.7.1 all time out, while 1.4.2 works.
Using 1.5.6 with "mode: legacy" (to switch off FLIP-6) works.
Using 1.7.1 with "mode: legacy" gives a timeout (I assume this option was removed, but then the documentation is outdated? https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#legacy)
When the timeout happens I get the following stacktrace:
INFO class java.time.Instant does not contain a getter for field seconds
INFO class com.bol.fin_hdp.cm1.domain.Cm1Transportable does not contain a getter for field globalId
INFO Submitting job 5af931bcef395a78b5af2b97e92dcffe (detached: false).
INFO ------------------------------------------------------------
INFO The program finished with the following exception:
INFO org.apache.flink.client.program.ProgramInvocationException: The main method caused an error.
INFO at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:545)
INFO at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:420)
INFO at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:404)
INFO at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:798)
INFO at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:289)
INFO at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:215)
INFO at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1035)
INFO at org.apache.flink.client.cli.CliFrontend.lambda$main$9(CliFrontend.java:1111)
INFO at java.security.AccessController.doPrivileged(Native Method)
INFO at javax.security.auth.Subject.doAs(Subject.java:422)
INFO at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
INFO at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
INFO at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1111)
INFO Caused by: java.lang.RuntimeException: org.apache.flink.client.program.ProgramInvocationException: Could not retrieve the execution result.
INFO at com.bol.fin_hdp.job.starter.IntervalJobStarter.startJob(IntervalJobStarter.java:43)
INFO at com.bol.fin_hdp.job.starter.IntervalJobStarter.startJobWithConfig(IntervalJobStarter.java:32)
INFO at com.bol.fin_hdp.Main.main(Main.java:8)
INFO at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
INFO at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
INFO at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
INFO at java.lang.reflect.Method.invoke(Method.java:498)
INFO at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:528)
INFO ... 12 more
INFO Caused by: org.apache.flink.client.program.ProgramInvocationException: Could not retrieve the execution result.
INFO at org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:258)
INFO at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:464)
INFO at org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:66)
INFO at com.bol.fin_hdp.cm1.job.Job.execute(Job.java:54)
INFO at com.bol.fin_hdp.job.starter.IntervalJobStarter.startJob(IntervalJobStarter.java:41)
INFO ... 19 more
INFO Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
INFO at org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$8(RestClusterClient.java:371)
INFO at java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
INFO at java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
INFO at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
INFO at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
INFO at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$5(FutureUtils.java:216)
INFO at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
INFO at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
INFO at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
INFO at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
INFO at org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$1(RestClient.java:301)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424)
INFO at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:214)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
INFO at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
INFO at java.lang.Thread.run(Thread.java:748)
INFO Caused by: org.apache.flink.runtime.concurrent.FutureUtils$RetryException: Could not complete the operation. Number of retries has been exhausted.
INFO at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$5(FutureUtils.java:213)
INFO ... 17 more
INFO Caused by: java.util.concurrent.CompletionException: org.apache.flink.shaded.netty4.io.netty.channel.ConnectTimeoutException: connection timed out: shd-hdp-b-slave-01...
INFO at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
INFO at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
INFO at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:943)
INFO at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)
INFO ... 15 more
INFO Caused by: org.apache.flink.shaded.netty4.io.netty.channel.ConnectTimeoutException: connection timed out: shd-hdp-b-slave-017.example.com/some.ip.address:46500
INFO at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:212)
INFO ... 7 more
What changed in FLIP-6 that might cause this behavior, and how can I fix it?
For our jobs on YARN with Flink 1.6, we had to bump up the web.timeout setting via -yD web.timeout=100000.
In our case, there was a firewall between the machine submitting the job and our Hadoop cluster.
In newer Flink versions (1.7 and up), Flink uses REST to submit jobs. On YARN setups the port number of this REST service is random and could not be set.
Flink 1.8.0 introduced a config option to pin it to a port or port range:
rest.bind-port: 55520-55530
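Putting the two answers together, a hedged flink-conf.yaml sketch (the port range and timeout values are illustrative, not taken from the original cluster):

```yaml
# flink-conf.yaml (Flink 1.8+): pin the REST job-submission endpoint to a fixed
# port range so a firewall rule between the client and the cluster can cover it.
rest.bind-port: 55520-55530
# Raise the web/REST timeout (in ms), as was needed on YARN with Flink 1.6.
web.timeout: 100000
```

With this in place, the firewall only needs to allow the client to reach the JobManager host on 55520-55530 instead of an arbitrary ephemeral port.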

cannot register a node (VM) with selenium grid 2 hub (physical machine)

I am trying to register a node (a Windows 7 VM) with a hub that runs on a physical machine, also Windows 7.
The hub server starts successfully on port 4444.
I run this command to register the node; this is what I get from the node console:
C:\grid>java -jar selenium-server-standalone-2.35.0.jar -role node -hub http://hubip:4444/grid/register -browser browserName=firefox,platform=WINDOWS -browser browserName=chrome,platform=WINDOWS -browser "browserName=internet explorer,platform=WINDOWS" -remoteHost nodeip:4444
26 sept. 2013 11:28:40 org.openqa.grid.selenium.GridLauncher main
INFO: Launching a selenium grid node
26 sept. 2013 11:28:40 org.openqa.grid.common.RegistrationRequest addCapabilityFromString
INFO: Adding browserName=firefox,platform=WINDOWS
26 sept. 2013 11:28:40 org.openqa.grid.common.RegistrationRequest addCapabilityFromString
INFO: Adding browserName=chrome,platform=WINDOWS
26 sept. 2013 11:28:40 org.openqa.grid.common.RegistrationRequest addCapabilityFromString
INFO: Adding browserName=internet explorer,platform=WINDOWS
11:28:41.375 INFO - Java: Sun Microsystems Inc. 20.13-b02
11:28:41.375 INFO - OS: Windows 7 6.1 x86
11:28:41.375 INFO - v2.35.0, with Core v2.35.0. Built from revision c916b9d
11:28:41.531 INFO - RemoteWebDriver instances should connect to: http://127.0.0.1:5555/wd/hub
11:28:41.531 INFO - Version Jetty/5.1.x
11:28:41.531 INFO - Started HttpContext[/selenium-server/driver,/selenium-server/driver]
11:28:41.531 INFO - Started HttpContext[/selenium-server,/selenium-server]
11:28:41.531 INFO - Started HttpContext[/,/]
11:28:41.546 INFO - Started org.openqa.jetty.jetty.servlet.ServletHandler#111a34
11:28:41.546 INFO - Started HttpContext[/wd,/wd]
11:28:41.546 INFO - Started SocketListener on 0.0.0.0:5555
11:28:41.546 INFO - Started org.openqa.jetty.jetty.Server#a9ae05
11:28:41.546 INFO - using the json request : {"class":"org.openqa.grid.common.RegistrationRequest","capabilities":[{"seleniumProtocol":"WebDriver","platform":"WINDOWS","browserName":"firefox"},{"seleniumProtocol":"WebDriver","platform":"WINDOWS","browserName":"chrome"},{"seleniumProtocol":"WebDriver","platform":"WINDOWS","browserName":"internet explorer"}],"configuration":{"port":5555,"host":"10.128.120","hubHost":"192.168.1.110","registerCycle":5000,"hub":"http://hubip:4444/grid/register","url":"nodeip:4444","remoteHost":"nodeip:4444","register":true,"proxy":"org.openqa.grid.selenium.proxy.DefaultRemoteProxy","maxSession":5,"browser":"browserName=firefox,platform=WINDOWS","role":"node","hubPort":4444}}
11:28:41.546 INFO - Starting auto register thread. Will try to register every 5000 ms.
11:28:41.546 INFO - Registering the node to hub: http://hubip:4444/grid/register
11:29:56.910 INFO - Registering the node to hub: http://hubip:4444/grid/register
11:31:12.274 INFO - Registering the node to hub: http://hubip:4444/grid/register
11:32:27.684 INFO - Registering the node to hub: http://hubip:4444/grid/register
11:33:43.204 INFO - Registering the node to hub: http://hubip:4444/grid/register
11:34:58.599 INFO - Registering the node to hub: http://hubip:4444/grid/register
The node keeps trying to register, but the registration is refused.
Here is what I get from the hub console:
Warning: Unregistering the node. It's been down for 60113 milliseconds.
sept. 26, 2013 12:00:04 PM org.openqa.grid.internal.Registry removeIfPresent
Warning: Proxy 'nodeip:4444 time out : 300000' was previously registered. Cleaning up any stale test sessions.
sept. 26, 2013 12:00:07 PM org.openqa.grid.internal.BaseRemoteProxy <init>
Warning: Max instance not specified. Using default = 1 instance
sept. 26, 2013 12:00:07 PM org.openqa.grid.internal.BaseRemoteProxy <init>
Warning: Max instance not specified. Using default = 1 instance
sept. 26, 2013 12:00:07 PM org.openqa.grid.internal.BaseRemoteProxy <init>
Warning: Max instance not specified. Using default = 1 instance
sept. 26, 2013 12:00:13 PM org.openqa.grid.selenium.proxy.DefaultRemoteProxy isAlive
Warning: Failed to check status of node: Connection to nodeip:4444 refused
sept. 26, 2013 12:00:19 PM org.openqa.grid.selenium.proxy.DefaultRemoteProxy isAlive
Warning: Failed to check status of node: Connection to nodeip:4444 refused
sept. 26, 2013 12:00:19 PM org.openqa.grid.selenium.proxy.DefaultRemoteProxy onEvent
Warning: Marking the node as down. Cannot reach the node for 2 tries.
sept. 26, 2013 12:00:25 PM org.openqa.grid.selenium.proxy.DefaultRemoteProxy isAlive
Warning: Failed to check status of node: Connection to nodeip:4444 refused
sept. 26, 2013 12:00:31 PM