NHibernate SysCache2 and SqlDependency problems - sql-server-2005

I've set enable_broker on my SQL Server 2008 to use SqlDependency.
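(For reference, the broker was enabled with something along these lines; the database name is a placeholder:)
-- rough sketch: enable Service Broker so SqlDependency notifications can be delivered
ALTER DATABASE MyBlogDb SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;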
I've configured my .NET app to use SysCache2 with a cache region as follows:
<syscache2>
<cacheRegion name="BlogEntriesCacheRegion" priority="High">
<dependencies>
<commands>
<add name="BlogEntries"
command="Select EntryId from dbo.Blog_Entries where ENABLED=1"
/>
</commands>
</dependencies>
</cacheRegion>
</syscache2>
My Hbm file looks like this:
<?xml version="1.0" encoding="utf-8"?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
<class name="BlogEntry" table="Blog_Entries">
<cache usage="nonstrict-read-write" region="BlogEntriesCacheRegion"/>
....
</class>
</hibernate-mapping>
I also have query caching enabled for queries against BlogEntry.
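(Query caching is switched on in the NHibernate configuration roughly like this, with the queries themselves marked cacheable against the same region:)
<property name="cache.use_query_cache">true</property>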
When I first query, the results are cached in the second-level cache, as expected.
If I now go and change a row in blog_entries, everything works as expected: the cache is expired and I get this message:
2010-03-03 12:56:50,583 [7] DEBUG NHibernate.Caches.SysCache2.SysCacheRegion - Cache items for region 'BlogEntriesCacheRegion' have been removed from the cache for the following reason : DependencyChanged
I expect that. On the next page request, the query and its results are stored back in the cache. However, the cache is immediately invalidated again, even though nothing further has changed.
DEBUG NHibernate.Caches.SysCache2.SysCacheRegion - Cache items for region 'BlogEntriesCacheRegion' have been removed from the cache for the following reason : DependencyChanged
My cache is now constantly invalidated on every subsequent request, with no changes to the underlying data. Only a restart of the application allows the cache to operate again - and even then only until the cache is first dirtied, after which it never works again.
Has anyone seen this problem or got any ideas what it could be? I was thinking that SysCache2 needs to handle the SqlDependency OnChange event, which it probably is doing - so I don't understand why SQL Server keeps sending SqlDependency DependencyChanged notifications.
thanks

We are getting the same problem on one database instance, but not on the other. It definitely seems to be some kind of permission problem on the database end, because the exact same NHibernate configuration is used in both cases.
In the working case the cache behaves as expected; in the other (a database instance with much stricter permissions) we get exactly the behaviour you describe.
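If it does turn out to be permissions, the grants usually involved for SqlDependency / query notifications look roughly like this (the user name is a placeholder, and the exact set depends on how SqlDependency is started):
-- rough sketch: permissions commonly needed by the account that registers the dependency
GRANT SUBSCRIBE QUERY NOTIFICATIONS TO [app_user];
GRANT CREATE QUEUE TO [app_user];
GRANT CREATE SERVICE TO [app_user];
GRANT CREATE PROCEDURE TO [app_user];
GRANT REFERENCES ON CONTRACT::[http://schemas.microsoft.com/SQL/Notifications/PostQueryNotification] TO [app_user];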

Related

Solr DataImportHandler - batchSize="-1" does not work

I am using Apache Solr 6.4.1.
Because I am using a really big database (over 3 million rows), I would like to add batchSize="-1" in the db-data-config.xml.
But if I do this, it does not work. Without batchSize I can get the first 2k rows, then I get a "java.lang.RuntimeException: java.lang.StackOverflowError" error.
In solrconfig.xml:
<requestHandler name="/dataimport" class="solr.DataImportHandler">
<lst name="defaults">
<str name="config">db-data-config.xml</str>
</lst>
</requestHandler>
In db-data-config.xml:
<dataConfig>
<dataSource type="JdbcDataSource"
driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
url="jdbc:sqlserver://***:1433;integratedSecurity=true;
Initial Catalog=***;"
batchSize="-1"/>
...
Why does batchSize="-1" not work? (batchSize="200" or other values work.)
UPDATE
If I set debug to false in the DataImportHandler, it works!
I don't think setting batchSize to '-1' will help in your situation. This is from the source code of the Solr DataImportHandler:
if (batchSize == -1)
  batchSize = Integer.MIN_VALUE;
[... omitted ...]
Statement statement = c.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
statement.setFetchSize(batchSize);
So double-check what kind of values the MS JDBC driver accepts for the setFetchSize method.
setFetchSize - Gives the JDBC driver a hint as to the number of rows that should be fetched from the database when more rows are needed for ResultSet objects generated by this Statement. If the value specified is zero, then the hint is ignored. The default value is zero.
So the driver is free to ignore this hint; maybe it is just reading in the whole table. You could also try changing the version of your JDBC driver...
I think you should first adapt the value to the network latency and the number of records you want to retrieve on each round trip.
Indexing performance and MSSQL server load depend on the batch size. Try starting with a small size and then gradually increase it.
If this does not work, try switching to a different JDBC driver altogether.
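As a sketch of a more conservative starting point (the batchSize value is illustrative, and selectMethod=cursor / responseBuffering=adaptive are driver-level hints you would need to verify against your driver version):
<dataSource type="JdbcDataSource"
driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
url="jdbc:sqlserver://***:1433;integratedSecurity=true;Initial Catalog=***;selectMethod=cursor;responseBuffering=adaptive;"
batchSize="500"/>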
Returning to the batchSize parameter, there are only a few cases where you don't need it:
if you have configured your JVM with enough memory to read the entire table
if your JDBC driver would raise an exception when setFetchSize() is invoked
if you're dealing with the MySQL JDBC driver, which has a known bug

How to get the number of rows updated/added when making a DB call via JCA files in an OSB proxy service

As a client, I am inserting/updating/fetching values to/from a back-end DB via JCA files, by creating a business service and making the call. I am facing a problem with insert/update calls: for every request I get a success response, irrespective of whether anything in the DB was actually added or updated. If there were a way to find out how many rows were updated after an insert/update, that would confirm the operation was successful.
Below is the simple JCA file used to update the DB. Can you please let me know what extra configuration I need in order to get the number of rows updated?
<adapter-config name="RetrieveSecCustRelationship" adapter="Database Adapter" wsdlLocation="RetrieveSecCustRelationship.wsdl" xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
<connection-factory location="eis/DB/Database" UIConnectionName="Database" adapterRef=""/>
<endpoint-interaction portType="RetrieveSecCustRelationship_ptt" operation="RetrieveSecCustRelationship">
<interaction-spec className="oracle.tip.adapter.db.DBPureSQLInteractionSpec">
<property name="SqlString" value="update CUSTOMER_INSTALLED_PRODUCT set CUSTOMER_ID=? where CUSTOMER_ID=?"/>
<property name="GetActiveUnitOfWork" value="false"/>
<property name="QueryTimeout" value="6"/>
</interaction-spec>
<input/>
<output/>
</endpoint-interaction>
</adapter-config>
Thanks & Regards
I'm afraid you will need to wrap it in PL/SQL and then extend that PL/SQL so that the number of affected rows is returned. You could then extract that value from the response variable with XPath.
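A minimal sketch of such a wrapper (the procedure and parameter names are placeholders) could look like this, with SQL%ROWCOUNT supplying the count:
CREATE OR REPLACE PROCEDURE update_customer_installed_product (
  p_new_customer_id IN NUMBER,
  p_old_customer_id IN NUMBER,
  p_rows_updated    OUT NUMBER
) AS
BEGIN
  UPDATE CUSTOMER_INSTALLED_PRODUCT
     SET CUSTOMER_ID = p_new_customer_id
   WHERE CUSTOMER_ID = p_old_customer_id;
  -- SQL%ROWCOUNT holds the number of rows touched by the last DML statement
  p_rows_updated := SQL%ROWCOUNT;
END;
The business service would then invoke this procedure via the DB adapter's stored-procedure interaction instead of pure SQL, and p_rows_updated would come back in the response payload for the proxy to read.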

Getting "Can not convert the given object to query." with ColdFusion ORM

This is happening intermittently (usually at startup). I get the above error message when executing the following code.
var arr = ORMExecuteQuery( "FROM priority WHERE active = 1 ORDER BY sortOrder" );
var qry = entityToQuery( arr );
The first line executes fine, but the second line blows up. The solution is to run ormreload();
The problem keeps coming up in an unpredictable way, though, even when no changes have been made to the beans or gateways that use ORM. It is completely unpredictable and impossible to replicate on purpose. Is there something else that can mess with the Hibernate mappings and cause this type of problem?
Other info that may be pertinent:
This is a MURA plugin based on a recent version of FW/1.
ormreload() is a persistent fix (until it fails again)
My current solution is to put ormreload() in the setupApplication() method of application.cfc
I just want to understand better what could be causing this problem.

Why is NHibernate's adonet.batch_size setting ignored - session.SetBatchSize() throws an exception?

I am using NHibernate and switched from my local SQL Express database to Oracle 11g.
My code started to complain. The session object's SetBatchSize() method throws a System.NotSupportedException:
No batch size was defined for the session factory, batching is disabled. Set adonet.batch_size = 1 to enable batching.
It worked with the SQL Express database. OK, so I added this
<property name="adonet.batch_size">1</property>
to the config, but it still throws the same exception. The session's Batcher property is set to this:
Value: {NHibernate.AdoNet.NonBatchingBatcher}
Type: NHibernate.Engine.IBatcher {NHibernate.AdoNet.NonBatchingBatcher}
It makes no difference whether I try to set the batch size inside or outside the transaction.
NHibernate only has batchers for some RDBMSs. If it doesn't find one for the database in question, it defaults to the NonBatchingBatcher, which is not able to batch at all. You could implement your own IBatcher.
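As a sketch (the factory class name needs to be checked against the NHibernate version in use, which may or may not ship an Oracle batcher), the batcher can also be swapped via configuration rather than code:
<property name="adonet.batch_size">20</property>
<property name="adonet.factory_class">NHibernate.AdoNet.OracleDataClientBatchingBatcherFactory, NHibernate</property>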

Maximum number of messages sent to a Queue in OpenMQ?

I am currently using Glassfish v2.1 and I have set up a queue to send and receive messages from, with session beans and MDBs respectively. However, I have noticed that I can send only a maximum of 1000 messages to the queue. Is there any reason why I cannot send more than 1000 messages to the queue? I do have a "developer" profile set up for the Glassfish domain. Could that be the reason? Or is there some resource configuration setting that I need to modify?
I have set up the sun-resources.xml configuration properties as follows:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE resources PUBLIC "-//Sun Microsystems, Inc.//DTD Application Server 9.0 Resource Definitions //EN" "http://www.sun.com/software/appserver/dtds/sun-resources_1_3.dtd">
<resources>
<admin-object-resource
enabled="true"
jndi-name="jms/UpdateQueue"
object-type="user"
res-adapter="jmsra"
res-type="javax.jms.Queue">
<description/>
<property name="Name" value="UpdatePhysicalQueue"/>
</admin-object-resource>
<connector-resource
enabled="true" jndi-name="jms/UpdateQueueFactory"
object-type="user"
pool-name="jms/UpdateQueueFactoryPool">
<description/>
</connector-resource>
<connector-connection-pool
associate-with-thread="false"
connection-creation-retry-attempts="0"
connection-creation-retry-interval-in-seconds="10"
connection-definition-name="javax.jms.QueueConnectionFactory"
connection-leak-reclaim="false"
connection-leak-timeout-in-seconds="0"
fail-all-connections="false"
idle-timeout-in-seconds="300"
is-connection-validation-required="false"
lazy-connection-association="false"
lazy-connection-enlistment="false"
match-connections="true"
max-connection-usage-count="0"
max-pool-size="32"
max-wait-time-in-millis="60000"
name="jms/UpdateFactoryPool"
pool-resize-quantity="2"
resource-adapter-name="jmsra"
steady-pool-size="8"
validate-atmost-once-period-in-seconds="0"/>
</resources>
Hmm .. further investigation revealed the following in the imq logs:
[17/Nov/2009:10:27:57 CST] ERROR sendMessage: Sending message failed. Connection ID: 427038234214377984:
com.sun.messaging.jmq.jmsserver.util.BrokerException: transaction failed: [B4303]: The maximum number of messages [1,000] that the producer can process in a single transaction (TID=427038234364096768) has been exceeded. Please either limit the # of messages per transaction or increase the imq.transaction.producer.maxNumMsgs property.
So what would I do if I needed to send more than 5000 messages at a time?
What I am trying to do is read all the records in a table and update a particular field of each record based on the corresponding value of that record in a legacy table to which I have only read-only access. This table has more than 10k records in it. As of now, I am sequentially going through each record in a for loop, getting the corresponding record from the legacy table, comparing the field values, updating the record if necessary and adding corresponding new records in other tables.
However, I was hoping to improve performance by processing all the records asynchronously. To do that I was thinking of sending each record info as a separate message and hence requiring so many messages.
To configure OpenMQ and set arbitrary broker properties, have a look at this blog post.
But actually, I wouldn't advise increasing the imq.transaction.producer.maxNumMsgs property, at least not above the value recommended in the documentation:
The maximum number of messages that a producer can process in a single transaction. It is recommended that the value be less than 5000 to prevent the exhausting of resources.
If you need to send more messages, consider doing it in several transactions.
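If chunking the send into several transactions fits your flow, a rough sketch (the JNDI lookups, error handling and record type are placeholders) could look like this:
// Send records in chunks, committing the transacted session every few hundred
// messages so no single transaction exceeds imq.transaction.producer.maxNumMsgs.
QueueConnection connection = queueConnectionFactory.createQueueConnection();
QueueSession session = connection.createQueueSession(true, Session.SESSION_TRANSACTED);
QueueSender sender = session.createSender(updateQueue);
int sent = 0;
for (Serializable record : records) {
    sender.send(session.createObjectMessage(record));
    if (++sent % 500 == 0) {
        session.commit(); // stay well below the 1,000-message-per-transaction limit
    }
}
session.commit(); // commit the final partial chunk
connection.close(); // closing the connection also closes the session and sender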