Hi all, I am facing an issue with my clusters.
Whenever the Ignite nodes go idle they get disconnected, and after that, when I try to connect, my first request fails. After that first request, all requests succeed until the nodes become idle again for some time.
Please help.
SEVERE: Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.transaction.CannotCreateTransactionException: Could not open JDBC Connection for transaction; nested exception is java.sql.SQLException: Failed to communicate with Ignite cluster.] with root cause
java.net.SocketException: Connection reset
    at java.base/java.net.SocketInputStream.read(SocketInputStream.java:186)
    at java.base/java.net.SocketInputStream.read(SocketInputStream.java:140)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:252)
    at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:292)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:351)
    at org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.read
You should not keep raw JDBC connections around indefinitely; instead, use a proper connection pool such as c3p0 that supports connection validation and recycling.
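For example, here is a minimal sketch of putting the Ignite thin JDBC driver behind a c3p0 pool with connection validation enabled (the host/port and the timeout values are illustrative assumptions, not taken from your setup):

import java.beans.PropertyVetoException;
import com.mchange.v2.c3p0.ComboPooledDataSource;

ComboPooledDataSource ds = new ComboPooledDataSource();
try {
    ds.setDriverClass("org.apache.ignite.IgniteJdbcThinDriver");
} catch (PropertyVetoException e) {
    throw new IllegalStateException(e);
}
ds.setJdbcUrl("jdbc:ignite:thin://127.0.0.1:10800"); // assumed address of an Ignite node
ds.setTestConnectionOnCheckout(true);                // validate a connection before handing it out
ds.setIdleConnectionTestPeriod(60);                  // re-test idle connections every 60 seconds
ds.setMaxIdleTime(300);                              // recycle connections idle for more than 5 minutes

Pointing the Spring transaction manager at a DataSource like this means a connection that was killed while idle is detected and replaced before your first request uses it.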
Well, it's not about the DataSource; setting the idle connection timeout to a very large value worked for me:
TcpCommunicationSpi spi = new TcpCommunicationSpi();
// Use separate connections for outgoing and incoming messages between nodes.
spi.setUsePairedConnections(true);
// Effectively disable the idle timeout so connections are not dropped after inactivity.
spi.setIdleConnectionTimeout(999999999999999999L);
cfg.setCommunicationSpi(spi);
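Note that TcpCommunicationSpi governs node-to-node links; for the thin JDBC driver the corresponding server-side knob lives in ClientConnectorConfiguration. A minimal sketch, assuming Ignite 2.4+ (its default idle timeout there is already 0, i.e. disabled, so this only matters if your deployment overrides it):

import org.apache.ignite.configuration.ClientConnectorConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

IgniteConfiguration cfg = new IgniteConfiguration();
ClientConnectorConfiguration clientCfg = new ClientConnectorConfiguration();
// 0 means no idle timeout for thin JDBC/ODBC client connections (this is also the default).
clientCfg.setIdleTimeout(0);
cfg.setClientConnectorConfiguration(clientCfg);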
Related
Whenever HA failover occurs, the ActiveMQ broker gets stuck with the messages it holds and cannot restart by itself.
The messages are processed successfully once we restart ActiveMQ.
The bean below is configured to stop and start the connectors in case of IOExceptions.
bean id="ioExceptionHandler" class="org.apache.activemq.util.DefaultIOExceptionHandler"
property name="ignoreSQLExceptions"value=false property
property name="stopStartConnectors" value=true property
bean
We are getting connection-closed exceptions when this failover occurs, as shown below:
Initiating stop/restart of transports on BrokerService[localhost] due to IO exception, java.io.IOException: The connection is closed. | org.apache.activemq.util.DefaultIOExceptionHandler | ActiveMQ Transport: tcp:///hostname:52272#8501
java.io.IOException: The connection is closed.
Later it tries to restart the transport connectors as a result of this config, but it is not able to continue further:
INFO | waiting for broker persistence adapter checkpoint to succeed before restarting transports | org.apache.activemq.util.DefaultIOExceptionHandler | IOExceptionHandler: restart transports.
Please let us know if any configuration is required for the broker to restart and process the messages it holds.
I have set up multiple ActiveMQ instances to achieve failover in master/slave mode on Windows.
While setting this up, I simply created 3 instances under the bin folder without changing any ports and started all 3 instances one by one. The first instance became master and the remaining ones stayed in slave mode until I stopped the master instance.
Now I am trying to achieve the same in a Linux environment. The first instance starts successfully, but when I start the second instance in a different window it throws the error below:
ERROR | Failed to start Apache ActiveMQ ([instance2, ID:132vm6-57227-1478597606120-0:1], java.io.IOException: Transport Connector could not be registered in JMX: java.io.IOException: Failed to bind to server socket: tcp://0.0.0.0:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600 due to: java.net.BindException: Address already in use)
INFO | Apache ActiveMQ 5.14.0 (instance2, ID:132vm6-57227-1478597606120-0:1) is shutting down
INFO | Connector openwire stopped
INFO | Connector amqp stopped
INFO | Connector stomp stopped
INFO | Connector mqtt stopped
INFO | Connector ws stopped
INFO | PListStore:[/opt/apache-activemq-5.14.0/bin/instance2/data/instance2/tmp_storage] stopped
INFO | Stopping async queue tasks
INFO | Stopping async topic tasks
INFO | Stopped KahaDB
INFO | Apache ActiveMQ 5.14.0 (instance2, ID:132vm6-57227-1478597606120-0:1) uptime 0.585 seconds
INFO | Apache ActiveMQ 5.14.0 (instance2, ID:132vm6-57227-1478597606120-0:1) is shutdown
INFO | Closing org.apache.activemq.xbean.XBeanBrokerFactory$1#4233871a: startup date [Tue Nov 08 15:03:24 IST 2016]; root of context hierarchy
WARN | Exception thrown from LifecycleProcessor on context close
java.lang.IllegalStateException: LifecycleProcessor not initialized - call 'refresh' before invoking lifecycle methods via the context: org.apache.activemq.xbean.XBeanBrokerFactory$1#4233871a: startup date [Tue Nov 08 15:03:24 IST 2016]; root of context hierarchy
at org.springframework.context.support.AbstractApplicationContext.getLifecycleProcessor(AbstractApplicationContext.java:357)[spring-context-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.doClose(AbstractApplicationContext.java:884)[spring-context-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.close(AbstractApplicationContext.java:843)[spring-context-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.apache.activemq.hooks.SpringContextHook.run(SpringContextHook.java:30)[activemq-spring-5.14.0.jar:5.14.0]
at org.apache.activemq.broker.BrokerService.stop(BrokerService.java:875)[activemq-broker-5.14.0.jar:5.14.0]
at org.apache.activemq.xbean.XBeanBrokerService.stop(XBeanBrokerService.java:122)[activemq-spring-5.14.0.jar:5.14.0]
at org.apache.activemq.broker.BrokerService.start(BrokerService.java:629)[activemq-broker-5.14.0.jar:5.14.0]
at org.apache.activemq.xbean.XBeanBrokerService.afterPropertiesSet(XBeanBrokerService.java:73)[activemq-spring-5.14.0.jar:5.14.0]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[:1.7.0_65]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)[:1.7.0_65]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)[:1.7.0_65]
at java.lang.reflect.Method.invoke(Method.java:606)[:1.7.0_65]
I am using ActiveMQ 5.14 version.
If anybody has encountered a similar issue, kindly provide your inputs.
To get multiple instances of ActiveMQ running on the same machine, you need to change the ports that they try to open. There are (at least) 3 ports that need to be changed:
The transportConnector ports that accept messaging traffic. These are defined in the activemq.xml file. Typically you only need the openwire one - this is 61616 by default; I usually change this in the other ActiveMQ instances to 61626, 61636, etc. You can usually comment out the others if you don't intend to use them.
The Jetty HTTP port. This is defined in the jetty.xml file. The default is 8161; set the next ones to 8162, 8163, etc.
The JMX port. This one's a bit tricky, as you need to stick a piece of config into the activemq.xml to explicitly define it as follows:
<managementContext>
<managementContext createConnector="true" connectorPort="1099"/>
</managementContext>
You can then change this to 1199, 1299 on the other instances. Hope this helps.
I have encountered an issue with ActiveMQ where the entire cluster will fail when the master Zookeeper node goes offline.
We have a 3-node ActiveMQ cluster set up in our development environment. Each node has ActiveMQ 5.12.0 and Zookeeper 3.4.6 (note: we have done some testing with Zookeeper 3.4.7, but this has failed to resolve the issue; time constraints have so far prevented us from testing ActiveMQ 5.13).
What we have found is that when we stop the master ZooKeeper process (via the "end process tree" command in Task Manager), the remaining two ZooKeeper nodes continue to function as normal. Sometimes the ActiveMQ cluster is able to handle this, but sometimes it is not.
When the cluster fails, we typically see this in the ActiveMQ log:
2015-12-18 09:08:45,157 | WARN | Too many cluster members are connected. Expected at most 3 members but there are 4 connected. | org.apache.activemq.leveldb.replicated.MasterElector | WrapperSimpleAppMain-EventThread
...
...
2015-12-18 09:27:09,722 | WARN | Session 0x351b43b4a560016 for server null, unexpected error, closing socket connection and attempting reconnect | org.apache.zookeeper.ClientCnxn | WrapperSimpleAppMain-SendThread(192.168.0.10:2181)
java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)[:1.7.0_79]
at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source)[:1.7.0_79]
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)[zookeeper-3.4.6.jar:3.4.6-1569965]
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)[zookeeper-3.4.6.jar:3.4.6-1569965]
We were immediately concerned by the fact that (A) ActiveMQ seems to think there are four members in the cluster when it is only configured with 3, and (B) when the exception is raised, the server appears to be null. We then increased ActiveMQ's logging level to DEBUG in order to display the list of members:
2015-12-18 09:33:04,236 | DEBUG | ZooKeeper group changed: Map(localhost -> ListBuffer((0000000156,{"id":"localhost","container":null,"address":null,"position":-1,"weight":5,"elected":null}), (0000000157,{"id":"localhost","container":null,"address":null,"position":-1,"weight":1,"elected":null}), (0000000158,{"id":"localhost","container":null,"address":"tcp://192.168.0.11:61619","position":-1,"weight":10,"elected":null}), (0000000159,{"id":"localhost","container":null,"address":null,"position":-1,"weight":10,"elected":null}))) | org.apache.activemq.leveldb.replicated.MasterElector | ActiveMQ BrokerService[localhost] Task-14
Can anyone suggest why this may be happening and/or suggest a way to resolve this? Our configurations are shown below:
ZooKeeper:
tickTime=2000
dataDir=C:\\zookeeper-3.4.7\\data
clientPort=2181
initLimit=5
syncLimit=2
server.1=192.168.0.10:2888:3888
server.2=192.168.0.11:2888:3888
server.3=192.168.0.12:2888:3888
ActiveMQ (server.1):
<persistenceAdapter>
<replicatedLevelDB
directory="activemq-data"
replicas="3"
bind="tcp://0.0.0.0:61619"
zkAddress="192.168.0.11:2181,192.168.0.10:2181,192.168.0.12:2181"
zkPath="/activemq/leveldb-stores"
hostname="192.168.0.10"
weight="5"/>
<!-- server.2 has a weight of 10, server.3 has a weight of 1 -->
</persistenceAdapter>
I'm using Apache Tomcat JDBC connection pooling in my project. I'm confused because under heavy load I keep seeing the following error:
12:26:36,410 ERROR [] (http-/XX.XXX.XXX.X:XXXXX-X) org.apache.tomcat.jdbc.pool.PoolExhaustedException: [http-/XX.XXX.XXX.X:XXXXX-X] Timeout: Pool empty. Unable to fetch a connection in 10 seconds, none available[size:4; busy:4; idle:0; lastwait:10000].
12:26:36,411 ERROR [org.apache.catalina.core.ContainerBase.[jboss.web].[default-host].[/APP].[AppConf]] (http-/XX.XXX.XXX.X:XXXXX-X) JBWEB000236: Servlet.service() for servlet AppConf threw exception: org.jboss.resteasy.spi.UnhandledException: java.lang.NullPointerException
My expectation was that with pooling, requests for new connections would be held in a queue until a connection became available. Instead it seems that requests are rejected when the pool has reached capacity. Can this behaviour be changed?
Thanks,
Dal
This is my pool configuration:
PoolProperties p = new PoolProperties();
p.setUrl("jdbc:oracle:thin:#" + server + ":" + port + ":" + SID_SVC);
p.setDriverClassName("oracle.jdbc.driver.OracleDriver");
p.setUsername(username);
p.setPassword(password);
p.setMaxActive(4);
p.setInitialSize(1);
p.setMaxWait(10000);
p.setRemoveAbandonedTimeout(300);
p.setMinEvictableIdleTimeMillis(150000);
p.setTestOnBorrow(true);
p.setValidationQuery("SELECT 1 from dual");
p.setMinIdle(1);
p.setMaxIdle(2);
p.setRemoveAbandoned(true);
p.setJdbcInterceptors(
"org.apache.tomcat.jdbc.pool.interceptor.ConnectionState;"
+ "org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer;"
+ "org.apache.tomcat.jdbc.pool.interceptor.ResetAbandonedTimer");
This is working as per design/implementation. The log says Timeout: Pool empty. Unable to fetch a connection in 10 seconds, and your configuration has p.setMaxWait(10000); the requesting thread waits 10 seconds (10000 milliseconds, maxWait) before giving up on getting a connection.
Now you have two options: increase the maxActive connection count (and/or the maxWait), or check whether there are connection leaks or long-running queries that you do not expect.
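A minimal sketch of the knobs involved (the numbers below are illustrative assumptions, not tuned recommendations for your workload):

import org.apache.tomcat.jdbc.pool.PoolProperties;

PoolProperties p = new PoolProperties();
// Allow more concurrent connections so bursts are less likely to exhaust the pool.
p.setMaxActive(20);
p.setMaxIdle(10);
// Let a requesting thread queue for up to 30 seconds before PoolExhaustedException is thrown.
p.setMaxWait(30000);
// Keep abandoned-connection detection on so leaked connections are reclaimed and logged.
p.setRemoveAbandoned(true);
p.setRemoveAbandonedTimeout(300);
p.setLogAbandoned(true);

If the pool still empties with a higher maxActive, that usually points to connections being held (leaks or long-running queries) rather than genuine demand.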
I regularly see damaged queues in our ActiveMQ 5.4.2, meaning the queues are malformed, and I have had to remove the KahaDB files and bounce the broker to resolve this. All messages stored in the queues are lost in the process. How can these damaged queues be prevented without losing data?
Below are the broker logs:
ERROR | Failed to reset batching | org.apache.activemq.store.kahadb.KahaDBStore | ActiveMQ Broker[AMQBROKER-TEST] Scheduler
java.lang.IllegalStateException: PageFile is not loaded
at org.apache.kahadb.page.PageFile.assertLoaded(PageFile.java:715)
at org.apache.kahadb.page.PageFile.tx(PageFile.java:239)
at org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore.resetBatching(KahaDBStore.java:510)
at org.apache.activemq.store.ProxyMessageStore.resetBatching(ProxyMessageStore.java:93)
at org.apache.activemq.broker.region.cursors.QueueStorePrefetch.resetBatch(QueueStorePrefetch.java:85)
at org.apache.activemq.broker.region.cursors.AbstractStoreCursor.fillBatch(AbstractStoreCursor.java:254)
at org.apache.activemq.broker.region.cursors.AbstractStoreCursor.reset(AbstractStoreCursor.java:108)
at org.apache.activemq.broker.region.cursors.StoreQueueCursor.reset(StoreQueueCursor.java:157)
at org.apache.activemq.broker.region.Queue.doBrowse(Queue.java:1026)
at org.apache.activemq.broker.region.Queue.expireMessages(Queue.java:783)
at org.apache.activemq.broker.region.Queue.access$100(Queue.java:83)
at org.apache.activemq.broker.region.Queue$2.run(Queue.java:123)
at org.apache.activemq.thread.SchedulerTimerTask.run(SchedulerTimerTask.java:33)
at java.util.TimerThread.mainLoop(Timer.java:512)
at java.util.TimerThread.run(Timer.java:462)
INFO | Transport failed: java.net.SocketException: Broken pipe | org.apache.activemq.broker.TransportConnection.Transport | Async Exception Handler
WARN | Failed to register MBean: org.apache.activemq:BrokerName=AMQBROKER-TEST,Type=Queue,Destination=_ onEvent&X171249188Y1Z
INFO | Transport failed: java.net.SocketException: Broken pipe
INFO | Transport failed: java.net.SocketException: Connection reset
INFO | Transport failed: java.io.EOFException