ActiveMQ LeaseDatabaseLocker warning for lease duration

I have a slightly strange warning from ActiveMQ 5.9.0 with JDBC Oracle backed persistence...
WARN [org.apache.activemq.store.jdbc.LeaseDatabaseLocker] LockableService
keep alive period: 2000, which renews the lease, is less than
lockAcquireSleepInterval: 1000, the lease duration.
These values will allow the lease to expire.
My question is: why is LockableService reporting that 2000 is less than 1000? I think it should say "LockableService keep alive period: 2000, which renews the lease, is greater than lockAcquireSleepInterval: 1000, the lease duration. These values will allow the lease to expire." What do you think? Maybe I'm reading this wrong...
I do see a problem with my current settings (I have a Master and a Slave; when I shut down the Master, the Slave takes over, but when I start the Master again it doesn't become a Slave)... So I obviously need to tweak my settings. Here is the current relevant configuration...
<bean id="jdbcPersistenceAdapter" class="org.apache.activemq.store.jdbc.JDBCPersistenceAdapter">
<property name="brokerName" value="messageCentreBroker" />
<property name="createTablesOnStartup" value="true" />
<property name="dataSource" ref="activeMqDataSource" />
<property name="lockKeepAlivePeriod" value="2000" />
<property name="locker" ref="leaseDatabaseLocker" />
</bean>
<bean id="leaseDatabaseLocker" class="org.apache.activemq.store.jdbc.LeaseDatabaseLocker">
<property name="lockAcquireSleepInterval" value="1000" />
</bean>
So I guess my lockAcquireSleepInterval should be greater than 2000? I'll try this, but I'm interested to hear thoughts on the WARN message too; it seems wrong?
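If I'm reading the locker right, the lease duration (lockAcquireSleepInterval) has to exceed the keep-alive period so the lease is always renewed before it can expire. A minimal sketch of the equivalent programmatic configuration (the 5000 ms value is illustrative, not something I've verified):

import org.apache.activemq.store.jdbc.JDBCPersistenceAdapter;
import org.apache.activemq.store.jdbc.LeaseDatabaseLocker;

public class LockerConfig {
    public static JDBCPersistenceAdapter configure() throws Exception {
        JDBCPersistenceAdapter adapter = new JDBCPersistenceAdapter();
        adapter.setLockKeepAlivePeriod(2000); // renew the lease every 2 seconds

        LeaseDatabaseLocker locker = new LeaseDatabaseLocker();
        locker.setLockAcquireSleepInterval(5000); // lease duration: 5 s, comfortably above the 2 s keep-alive
        adapter.setLocker(locker);
        return adapter;
    }
}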

I think it's just a typo. Nothing more.

Related

Ignite Cache Persistence server for DB with servers for compute

I'm using Ignite 2.5 and have deployed a couple of servers like this:
One computer acts as DB server with persistence enabled.
Three other computers are compute servers with same cache as on DB server but without persistence.
I have classes like this:
public class Address implements Serializable
{
    String streetName;
    String houseNumber;
    String cityName;
    String countryName;
}
public class Person implements Serializable
{
    @QuerySqlField
    String firstName;
    @QuerySqlField
    String lastName;
    @QuerySqlField
    Address homeAddress;
}
The cache is configured on all servers with this XML:
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="Persons" />
<property name="cacheMode" value="PARTITIONED" />
<property name="backups" value="0" />
<property name="storeKeepBinary" value="true" />
<property name="atomicityMode" value="TRANSACTIONAL"/>
<property name="writeSynchronizationMode" value="FULL_SYNC"/>
<property name="indexedTypes">
<list>
<value>java.lang.String</value>
<value>Person</value>
</list>
</property>
</bean>
On the DB server, in addition, persistence is enabled like this:
<property name="dataStorageConfiguration">
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
<property name="storagePath" value="/data/Storage" />
<property name="walPath" value="/data/Wal" />
<property name="walArchivePath" value="/data/WalArchive" />
<property name="defaultDataRegionConfiguration">
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="initialSize" value="536870912" />
<property name="maxSize" value="1073741824" />
<property name="persistenceEnabled" value="true" />
</bean>
</property>
</bean>
</property>
<property name="binaryConfiguration">
<bean class="org.apache.ignite.configuration.BinaryConfiguration">
<property name="compactFooter" value="false" />
</bean>
</property>
The cache is used with put/get but also with SqlQuery and SqlFieldsQuery.
From time to time I have to update the class definitions, e.g. add another field. I'm fine with shutting down the whole cluster to update the classes, as it requires an application update anyway.
I believe the above configuration is generally OK to use for Ignite?
Do I understand this other question (Apache Ignite persistent store recommended way for class versions) correctly that on the DB server I should not have the Person classes on the classpath? Wouldn't the XML config then fail because it's missing the index classes?
On the compute servers, should I also avoid using the Person classes and instead read from the cache into BinaryObject? Is the idea to manually fill my Person class from the BinaryObject?
Currently when I update a field in the Person class I get strange errors like:
Unknown pair [platformId=0, typeId=1968448811]
Sorry if there are multiple questions here; I am somewhat lost with the "Unknown pair" issues and am now questioning whether my complete setup is right.
Thanks for any advice.
I believe the above configuration is generally OK to use for Ignite?
No, you can't configure persistence for only one node. In your case all nodes will store data, but only one node will persist its data, so only part of the data will be persisted, and this can lead to unpredictable consequences. If you want only one node to store data, you need to configure a node filter.
With the node filter, the cache will be located on only one node and that node will store the data; however, in this case your compute nodes will have to do network I/O to read from the cache.
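For illustration, a minimal sketch of such a node filter using a custom user attribute (the attribute name db.node is made up for this example; AttributeNodeFilter ships with recent Ignite versions, and any Serializable IgnitePredicate<ClusterNode> would work too):

import java.util.Collections;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.util.AttributeNodeFilter;

// On the DB node only: mark the node with a custom attribute.
IgniteConfiguration dbNodeCfg = new IgniteConfiguration()
    .setUserAttributes(Collections.singletonMap("db.node", true));

// In the cache configuration (identical on all nodes): restrict the cache
// to nodes carrying that attribute.
CacheConfiguration<String, Object> cacheCfg = new CacheConfiguration<>("Persons");
cacheCfg.setNodeFilter(new AttributeNodeFilter("db.node", true));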
Do I understand this other question (Apache Ignite persistent store recommended way for class versions) correctly that on the DB server I should not have the Person classes on the classpath? Wouldn't the XML config then fail because it's missing the index classes?
You don't need your model classes on the classpath, but please make sure that you work with BinaryObjects only on the server side, so all compute tasks should use BinaryObjects. Also, as you suspected, that indexedTypes configuration won't work without the classes; you need to use QueryEntity for the index configuration instead.
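A minimal QueryEntity sketch that declares key/value types and queryable fields by name only, so the Person class is not needed on the server (field list abbreviated; the Address field is omitted here):

import java.util.Collections;
import java.util.LinkedHashMap;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<String, Object> cfg = new CacheConfiguration<>("Persons");

// Types are given as strings, so no classes are required on this node.
QueryEntity entity = new QueryEntity("java.lang.String", "Person");
LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("firstName", "java.lang.String");
fields.put("lastName", "java.lang.String");
entity.setFields(fields);

cfg.setQueryEntities(Collections.singletonList(entity));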
On the compute servers, should I also avoid using the Person classes and instead read from the cache into BinaryObject? Is the idea to manually fill my Person class from the BinaryObject?
Well, if you don't have the Person class on the server side, you simply can't create a Person instance there; you need to use BinaryObject in your compute jobs.
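For example, a short sketch of a compute job reading and updating the cache through BinaryObject, given an Ignite instance named ignite (the key and field values are illustrative):

import org.apache.ignite.IgniteCache;
import org.apache.ignite.binary.BinaryObject;

// Ask for binary representations instead of deserialized Person instances.
IgniteCache<String, BinaryObject> cache =
    ignite.<String, BinaryObject>cache("Persons").withKeepBinary();

BinaryObject person = cache.get("person-1");
String firstName = person.field("firstName");

// Modify a field without having the Person class on the classpath.
BinaryObject updated = person.toBuilder()
    .setField("lastName", "Smith")
    .build();
cache.put("person-1", updated);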
Currently when I update a field in the Person class I get strange errors like: Unknown pair [platformId=0, typeId=1968448811]
Could you please provide the full stacktrace and say on what operation you get this error?

Hibernate Search & Lucene - set write lock timeout

I have an
org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@/XXXXX/User_Index/write.lock
exception, and I read that the write lock timeout should be increased from the default 1 second.
(It is interesting that I didn't have this exception before; it appeared while I was working on a task to introduce Spring into the project. There is a small chance that there are more competing transactions trying to get access to the index...? I don't think so; I believe the Spring transaction is configured properly:
<!-- for the @Transactional annotations -->
<tx:annotation-driven />
<context:component-scan base-package="XXX.audit, XXX.authorization, XXX.policy, XXX.printing, XXX.provisioning, XXX.service.plainspring" />
<!-- defining Transaction Manager for Spring -->
<bean id="transactionManager" class="org.springframework.orm.hibernate4.HibernateTransactionManager">
  <property name="dataSource" ref="dataSource" />
  <property name="sessionFactory" ref="sessionFactory" />
</bean>
)
So I tried to configure the write lock timeout like
<bean id="sessionFactory" class="org.springframework.orm.hibernate4.LocalSessionFactoryBean" lazy-init="true">
...
<property name="hibernateProperties">
<props>
...
<prop key="hibernate.search.lucene_version">LUCENE_35</prop>
<prop key="hibernate.search.default.indexwriter.writeLockTimeout">20000</prop>
...
</property>
<property name="dataSource">
<ref bean="dataSource"/>
</property>
</bean>
but no success. Apache Lucene doesn't have a config file, and there is no direct Lucene code here; only Hibernate Search is used (i.e. it's not possible to set the value on an IndexWriter directly).
How can I configure the write lock timeout?
Apache Lucene 3.5
Hibernate Search 4.1.1
Thanks,
V.
There is no option to configure the IndexWriter lock timeout, as this should never be needed.
If you see such a timeout happening, it's usually because of one of the following:
There is a lock file in the index directory as a left over from a crashed JVM
The configuration isn't suitable for the architecture of the application
Check the leftover scenario first: shut down your application and see if there is a file named write.lock. If the application is not running, it's safe to delete this file.
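For example, a throwaway sketch to remove the stale lock (the index path is the redacted one from the question; run it only while the application is stopped):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class RemoveStaleLock {
    public static void main(String[] args) throws Exception {
        // Only safe when no JVM is currently using the index.
        Path lock = Paths.get("/XXXXX/User_Index", "write.lock");
        System.out.println("deleted: " + Files.deleteIfExists(lock));
    }
}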
If that's not the case then you probably have two different instances of Hibernate Search attempting to use the same index directory, and both attempting to write to it.
That's not a valid configuration, and you're getting the exception because the index is already locked by the other instance; increasing the lock timeout would only make you wait for a very long time, possibly until the other application is shut down.
Don't share indexes among applications; if you really need to do so, check the manual for the JMS-based backends or other non-default backends which allow multiple applications to share a single IndexWriter.
Finally, please consider upgrading. These versions are extremely old.

Properly Shutting Down ActiveMQ and Spring DefaultMessageListenerContainer

Our system will not shut down when a "Stop" command is issued from the Tomcat Manager. I have determined that it is related to ActiveMQ/Spring. I have even figured out how to get it to shut down; however, my solution is a hack (at least I hope this isn't the "correct" way to do it). I would like to know the proper way to shut down ActiveMQ so that I can remove my hack.
I inherited this component and have no information about why certain architectural decisions were made. After a lot of digging I think I understand the original author's thoughts, but I could be missing something. In other words, the real problem could be in the way that we are trying to use ActiveMQ/Spring.
We run in a servlet container (Tomcat 6/7) and use ActiveMQ 5.9.1 and Spring 3.0.0. Multiple instances of our application can run in a "group", with each instance running on its own server. ActiveMQ is used to facilitate communication between the instances. Each instance has its own embedded broker and its own set of queues. Every queue on every instance has exactly one org.springframework.jms.listener.DefaultMessageListenerContainer listening to it, so 5 queues = 5 DefaultMessageListenerContainers, for example.
Our system shut down properly until we fixed a bug by adding queuePrefetch="0" to the ConnectionFactory. At first I assumed that this change was incorrect in some way, but now that I understand the situation, I am confident that we should not be using the prefetch functionality.
I have created a test application to replicate the issue. Note that the information below makes no mention of message producers; that is because I can replicate the issue without ever sending/processing a single message. Simply creating the Broker, ConnectionFactory, Queues and Listeners during boot is enough to keep the system from stopping properly.
Here is my sample configuration from my Spring XML. I will be happy to provide my entire project if someone wants it:
<amq:broker persistent="false" id="mybroker">
<amq:transportConnectors>
<amq:transportConnector uri="tcp://0.0.0.0:61616"/>
</amq:transportConnectors>
</amq:broker>
<amq:connectionFactory id="ConnectionFactory" brokerURL="vm://localhost?broker.persistent=false" >
<amq:prefetchPolicy>
<amq:prefetchPolicy queuePrefetch="0"/>
</amq:prefetchPolicy>
</amq:connectionFactory>
<amq:queue id="lookup.mdb.queue.cat" physicalName="DogQueue"/>
<amq:queue id="lookup.mdb.queue.dog" physicalName="CatQueue"/>
<amq:queue id="lookup.mdb.queue.fish" physicalName="FishQueue"/>
<bean id="messageListener" class="org.springframework.jms.listener.DefaultMessageListenerContainer" abstract="true">
<property name="connectionFactory" ref="ConnectionFactory"/>
</bean>
<bean parent="messageListener" id="cat">
<property name="destination" ref="lookup.mdb.queue.dog"/>
<property name="messageListener">
<bean class="com.acteksoft.common.remote.jms.WorkerMessageListener"/>
</property>
<property name="concurrentConsumers" value="200"/>
<property name="maxConcurrentConsumers" value="200"/>
</bean>
<bean parent="messageListener" id="dog">
<property name="destination" ref="lookup.mdb.queue.cat"/>
<property name="messageListener">
<bean class="com.acteksoft.common.remote.jms.WorkerMessageListener"/>
</property>
<property name="concurrentConsumers" value="200"/>
<property name="maxConcurrentConsumers" value="200"/>
</bean>
<bean parent="messageListener" id="fish">
<property name="destination" ref="lookup.mdb.queue.fish"/>
<property name="messageListener">
<bean class="com.acteksoft.common.remote.jms.WorkerMessageListener"/>
</property>
<property name="concurrentConsumers" value="200"/>
<property name="maxConcurrentConsumers" value="200"/>
</bean>
My hack involves using a ServletContextListener to manually stop the objects. The hacky part is that I have to spawn additional threads to stop the DefaultMessageListenerContainers (a sketch of what I mean is below). Perhaps I'm stopping the objects in the wrong order, but I've tried everything I can imagine. If I attempt to stop the objects in the main thread, they hang indefinitely.
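Here is a minimal sketch of the kind of listener I mean (class name and structure are illustrative, not my real code):

import java.util.Map;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import org.springframework.jms.listener.DefaultMessageListenerContainer;
import org.springframework.web.context.WebApplicationContext;
import org.springframework.web.context.support.WebApplicationContextUtils;

public class JmsShutdownListener implements ServletContextListener {
    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        WebApplicationContext ctx =
            WebApplicationContextUtils.getWebApplicationContext(sce.getServletContext());
        Map<String, DefaultMessageListenerContainer> containers =
            ctx.getBeansOfType(DefaultMessageListenerContainer.class);
        // Stopping from extra threads; doing this on the main thread hangs indefinitely.
        for (final DefaultMessageListenerContainer container : containers.values()) {
            new Thread(new Runnable() {
                public void run() {
                    container.stop();
                }
            }).start();
        }
    }

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // nothing to do on startup
    }
}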
Thank you in advance!
UPDATE
I have tried the following, based on boday's recommendation, but it didn't work. I have also tried specifying the amq:transportConnector uri as tcp://0.0.0.0:61616?transport.daemon=true.
<amq:broker persistent="false" id="mybroker" brokerName="localhost">
<amq:transportConnectors>
<amq:transportConnector uri="tcp://0.0.0.0:61616?daemon=true"/>
</amq:transportConnectors>
</amq:broker>
<amq:connectionFactory id="connectionFactory" brokerURL="vm://localhost" >
<amq:prefetchPolicy>
<amq:prefetchPolicy queuePrefetch="0"/>
</amq:prefetchPolicy>
</amq:connectionFactory>
At one point I tried adding similar properties to the brokerURL parameter in the amq:connectionFactory element, and shutdown worked properly; however, after further testing I learned that the properties caused an exception to be thrown from VMTransportFactory. This resulted in improper initialization, and basic message functionality didn't work.
In case anyone else is wondering, as far as I can see it's not possible to have a daemon ListenerContainer using ActiveMQ.
When the ActiveMQConnection is started, it creates a ThreadPoolExecutor with a non-daemon thread. This is seemingly to avoid issues when failing over the connection from one broker to another.
https://issues.apache.org/jira/browse/AMQ-796
executor = new ThreadPoolExecutor(1, 1, 5, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>(), new ThreadFactory() {
    @Override
    public Thread newThread(Runnable r) {
        Thread thread = new Thread(r, "ActiveMQ Connection Executor: " + transport);
        //Don't make these daemon threads - see https://issues.apache.org/jira/browse/AMQ-796
        //thread.setDaemon(true);
        return thread;
    }
});
Try setting daemon=true on your TCP transport; this allows the transport to run as a daemon thread, which won't block the shutdown of your container.
see http://activemq.apache.org/tcp-transport-reference.html

C3p0 APPARENT DEADLOCK exception even though statement caching is disabled

I am using the C3p0 connection pool and I am observing the following exception in my server.
com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector@ed866f -- APPARENT DEADLOCK!!! Complete Status:
Managed Threads: 1
Active Threads: 1
Active Tasks:
com.mchange.v2.resourcepool.BasicResourcePool$1DestroyResourceTask@995ec2 (com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#0)
Pending Tasks:
com.mchange.v2.resourcepool.BasicResourcePool$1RefurbishCheckinResourceTask@11c3ceb
com.mchange.v2.resourcepool.BasicResourcePool$1RefurbishCheckinResourceTask@e392ed
com.mchange.v2.resourcepool.BasicResourcePool$1RefurbishCheckinResourceTask@c33853
com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask@a6f77d
com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask@1c25bfc
com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask@12534a9
com.mchange.v2.resourcepool.BasicResourcePool$1RefurbishCheckinResourceTask@1a4564c
Pool thread stack traces:
Thread[com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#0,5,main]
oracle.jdbc.driver.OracleStatement.close(OracleStatement.java:1338)
com.mchange.v2.c3p0.impl.NewPooledConnection.cleanupUncachedStatements(NewPooledConnection.java:651)
com.mchange.v2.c3p0.impl.NewPooledConnection.close(NewPooledConnection.java:539)
com.mchange.v2.c3p0.impl.NewPooledConnection.close(NewPooledConnection.java:234)
com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.destroyResource(C3P0PooledConnectionPool.java:470)
com.mchange.v2.resourcepool.BasicResourcePool$1DestroyResourceTask.run(BasicResourcePool.java:964)
com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:547)
My configuration is
<property name="initialPoolSize" value="1" />
<property name="minPoolSize" value="1" />
<property name="maxPoolSize" value="20" />
<property name="maxIdleTime" value="240" />
<property name="checkoutTimeout" value="60000" />
<property name="acquireRetryAttempts" value="0" />
<property name="acquireRetryDelay" value="1000" />
<property name="debugUnreturnedConnectionStackTraces" value="true" />
<property name="unreturnedConnectionTimeout" value="300" />
<property name="numHelperThreads" value="1" />
<property name="preferredTestQuery" value="SELECT 1 FROM DUAL" />
After some googling I found the following thread:
https://forum.hibernate.org/viewtopic.php?t=947246
There it is suggested to disable statement caching to avoid this problem. But in my configuration I didn't enable statement caching (maxStatements and maxStatementsPerConnection are 0 by default).
Please suggest any alternatives.
Don't set numHelperThreads to 1.
If you set numHelperThreads to 1, any slow task will trigger an APPARENT DEADLOCK. c3p0 relies on an internal thread pool, and one thread does not make a pool. Let numHelperThreads keep at least its default value of 3.
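For example (a sketch assuming a ComboPooledDataSource; the corresponding XML property would simply set numHelperThreads to 3):

import com.mchange.v2.c3p0.ComboPooledDataSource;

ComboPooledDataSource ds = new ComboPooledDataSource();
ds.setNumHelperThreads(3); // the default; gives slow close()/acquire tasks room to run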
Specifically, what is going on in your stack trace is that an attempt by the connection pool to close() a Connection is taking a long time, for some reason. There could be a real problem here if this happens a lot, but it could also just be an occasional slow operation, which a reasonably sized thread pool would handle and shrug off.
Good luck!

Restarting activemq route in camel when receiving a "Connection.start()" message

I have an ActiveMQ Camel route that stops receiving messages at some point and requires a restart. I am not sure how to programmatically detect and fix this situation.
My route looks like so:
from("activemq:queue:Consumer.app.VirtualTopic.msg?messageConverter=#convertMsg")
It's all configured in Spring like so:
<!-- Configure the Message Bus Factory -->
<bean id="jmsFactory" class="com.local.messaging.activemq.SpringSslContextConnectionFactory">
  <property name="brokerURL" value="${jms.broker.url}" />
  <property name="sslContext" ref="sslContext" />
</bean>
<!-- Connect the Message Bus Factory to Camel. The 'activemq' bean
     name is necessary for Camel to pick it up automatically -->
<bean id="activemq" class="org.apache.activemq.camel.component.ActiveMQComponent" depends-on="jmsFactory">
  <property name="usePooledConnection" value="true" />
  <property name="connectionFactory">
    <bean class="org.apache.activemq.pool.PooledConnectionFactory">
      <property name="maxConnections" value="20" />
      <property name="maximumActive" value="10" />
      <property name="connectionFactory" ref="jmsFactory" />
    </bean>
  </property>
</bean>
Finally, the broker URL is configured like so:
jms.broker.url=failover://(ssl://amq1:61616,ssl://amq1:61616)
This starts up just fine and works like a champ most of the time. Every so often, though, I see this message in the logs:
Received a message on a connection which is not yet started. Have you forgotten to call Connection.start()? Connection: ActiveMQConnection {<details>}
I strongly suspect that this happens after the message bus has restarted, but as I have no direct access to the message bus, I don't know that for certain. I also don't know whether that matters.
The keys for me are:
How do I programmatically detect this situation? There doesn't appear to be any exception thrown or the like, and the only way I've seen this is by parsing through log files.
After detecting it, how do I fix it? Do I need to stop() and start() the route, or is there a cleaner way?
Finally, I did see some suggestions that this case should be handled by ActiveMQ, using the failover scheme. As shown above, I am using failover and this still happens.