I have been working on a project using EJB 3.1 running on JBoss EAP (http://www.jboss.org/products/eap), with Hibernate as my persistence layer and Hibernate Search for full-text indexing. What I am looking for is a way to trigger manual indexing from my EJBs without getting a transaction timeout as a result.
10:55:28,557 WARN [org.hibernate.engine.jdbc.spi.SqlExceptionHelper] (Hibernate Search: collectionsloader-6) SQL Error: 0, SQLState: null
10:55:28,557 ERROR [org.hibernate.engine.jdbc.spi.SqlExceptionHelper] (Hibernate Search: collectionsloader-6) javax.resource.ResourceException: IJ000460: Error checking for a transaction
I have been searching for examples, and I don't want to force every transaction in my JBoss instance to have a longer-than-usual timeout just for this.
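The closest thing I have found so far is JBoss's vendor-specific per-method transaction timeout, which at least would not be global. A sketch, assuming the EAP 6 annotation (the package and signature have moved between JBoss versions, so this is not guaranteed for other releases, and the bean name is made up):

import java.util.concurrent.TimeUnit;
import javax.ejb.Stateless;
import org.jboss.ejb3.annotation.TransactionTimeout;

@Stateless
public class ReindexingBean {

    // Vendor-specific: raises the JTA timeout for this method only,
    // leaving the server-wide default untouched.
    @TransactionTimeout(value = 2, unit = TimeUnit.HOURS)
    public void reindexInsideTransaction() {
        // long-running indexing work goes here
    }
}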
Why does Hibernate Search take so much time to build an index?
On the other hand, I have been thinking about launching a batch process from my EJB, but I am not really sure how to do it, and I haven't found anything useful on the web.
Any ideas on how to call the mass indexer in order to regenerate my indexes?
fullTextSession.createIndexer().startAndWait();
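To make the intent concrete, here is the shape I have in mind: a stateless bean whose method suspends the caller's JTA transaction, since the MassIndexer manages its own threads and transactions. The names are mine, the tuning values are only examples, and transactionTimeout() is only available in newer Hibernate Search versions:

import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.hibernate.search.jpa.FullTextEntityManager;
import org.hibernate.search.jpa.Search;

@Stateless
public class ReindexService {

    @PersistenceContext
    private EntityManager em;

    // NOT_SUPPORTED suspends the caller's JTA transaction: the MassIndexer
    // spawns its own worker threads and short transactions, so the calling
    // method should not hold a transaction open for the whole rebuild.
    @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
    public void rebuildIndexes() throws InterruptedException {
        FullTextEntityManager ftem = Search.getFullTextEntityManager(em);
        ftem.createIndexer()
            .batchSizeToLoadObjects(25)   // example value, tune for your data
            .threadsToLoadObjects(4)
            .transactionTimeout(1800)     // seconds; only relevant in JTA environments
            .startAndWait();
    }
}

With NOT_SUPPORTED on the method, the indexer is no longer bounded by the caller's transaction timeout at all.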
Or, in general, can anybody tell me the best way to launch a batch process from an EJB?
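From what I can tell, the EJB 3.1 way to fire off such a batch job is an @Asynchronous method, so the caller returns immediately while the work runs on a container-managed thread. A sketch on top of the bean above (names again mine):

import java.util.concurrent.Future;
import javax.ejb.AsyncResult;
import javax.ejb.Asynchronous;
import javax.ejb.EJB;
import javax.ejb.Stateless;

@Stateless
public class BatchLauncher {

    @EJB
    private ReindexService reindexService; // the bean sketched above

    // Returns immediately; the container runs the body on a worker thread,
    // so the calling request does not hold a transaction open for the rebuild.
    @Asynchronous
    public Future<Void> launchReindex() {
        try {
            reindexService.rebuildIndexes();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // preserve the interrupt status
        }
        return new AsyncResult<Void>(null);
    }
}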
Thanks in advance,
Hibernator,
Related
I have a project with a requirement that response time stay under 0.5 seconds under a load of 3000 concurrent users.
I have a few API endpoints that run aggregation queries against SQL Server.
When we test with 3000 CCU, the average response time is about 15 seconds, and we also get 500 errors because SQL Server can't handle that many requests (the requests to SQL Server are aborted with timeouts).
Our current instance is an r4.2xlarge with 8 CPUs and 61 GB of memory.
All code is asynchronous, with no blocking operations.
We run our app behind a load balancer with 10 instances, which works out to 300 CCU per instance; utilization on the instances is about 30%. The bottleneck is currently SQL Server.
I see a few solutions: set up a bigger SQL Server machine, clustering, or sharding; I'm not really sure, I'm not strong in that area.
Or use a cache for the requests: our data is mostly read-only, and we can cache it after aggregation.
Update:
I need a solution that caches the SQL responses themselves, so that I can keep working with them later through LINQ.
But it seems there is no ready-made solution for that.
I found a promising attempt called CacheManager, but it has a few problems:
It works with Redis only in sync mode, i.e. it uses the sync commands instead of the async ones.
There is no distributed concurrency lock, which we need because we have 10 instances; we need something that works as a distributed cache (the lock pattern I have in mind is sketched below).
There are a few bugs in how it uses the Redis multiplexer, so you constantly run into connection problems.
Please advise how to overcome this issue. How did you solve it? I'm sure there are people who have already solved it somehow.
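For reference, the distributed lock I mean is the standard Redis pattern: SET with NX and PX plus a per-owner token, released with a compare-and-delete script so one instance cannot release another instance's lock. A minimal sketch (written in Java against the Jedis 3.x client purely to show the shape of the pattern; in our stack it would need an async .NET equivalent):

import java.util.Collections;
import java.util.UUID;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public final class RedisLock {

    // Delete the key only if it still holds our token, so a lock that
    // expired and was re-acquired by another instance is left alone.
    private static final String RELEASE_SCRIPT =
        "if redis.call('get', KEYS[1]) == ARGV[1] " +
        "then return redis.call('del', KEYS[1]) else return 0 end";

    // Returns the owner token if the lock was acquired, or null if another
    // instance holds it. The TTL guards against a crashed lock holder.
    public static String tryAcquire(Jedis jedis, String key, long ttlMillis) {
        String token = UUID.randomUUID().toString();
        String reply = jedis.set(key, token, SetParams.setParams().nx().px(ttlMillis));
        return "OK".equals(reply) ? token : null;
    }

    public static void release(Jedis jedis, String key, String token) {
        jedis.eval(RELEASE_SCRIPT, Collections.singletonList(key),
                   Collections.singletonList(token));
    }
}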
I enabled the Query Store on SQL Server and monitored all the missing indexes. EF Core generates some of the queries in a completely different way than expected; after creating the missing indexes, performance became much better, but I still had a problem handling the required CCU.
I explored all the existing solutions that extend EF Core with caching. Most of them were written synchronously, which just can't exploit the benefits of async, and I also didn't find any distributed cache that implements a distributed lock.
So I finally created this library, which extends EF Core with a distributed cache in Redis. The cache lets us scale a lot more, and now everything just flies ;) I'll leave it here for anyone who has performance issues like mine: https://github.com/grinay/Microsoft.EntityFrameworkCore.DistributedCache
We have recently upgraded our NServiceBus project from version 4 to version 5. We are using NHibernate for data storage to a SQL Server database. Since the upgrade we have started to encounter errors around connection timeouts and the TimeoutEntity table. The NServiceBus services run fine for a while, at least a couple of hours, and then they stop.
When investigating the cause, it seems to come down to the polling query against the TimeoutEntity table: the query runs every minute, and if it takes more than 2 seconds to complete, an error is raised and CriticalError.Raise is called, which causes NServiceBus to stop the service.
One route of investigation is to find the cause of the timeouts, but we would also like to know why this behaviour was changed: in the previous version of NServiceBus, Logger.Warn was called rather than CriticalError.Raise. Would anybody know why this change was made in NServiceBus 5 and what we can do to mitigate it?
You can configure the time to wait before a critical error is raised; see http://docs.particular.net/nservicebus/errors/critical-exception-for-timeout-outages for how to do it.
You can also define your own critical error action by using
config.DefineCriticalErrorAction((message, exception) =>
{
    // Handle the critical error here, e.g. log it and decide
    // whether the endpoint really needs to shut down.
});
We are experimenting with using RavenDB as a log target within our current NLog usage. We created a new RavenDB server and a database using Voron as the storage engine. Then it was just a matter of updating our NLog configuration, which we did.
For a while everything was great, but then we ran into a situation where we are calling Trace() within a database transaction. With RavenDB set up as the log target, this means the SaveChanges() call is made within that transaction as well.
Everything is great until we call Transaction.Commit(). RavenDB throws a 500 server error at that point, because DTC is not supported with the Voron engine (and is slated to be removed everywhere, from what I understand). I'm fine with this. I don't particularly care - writing to the log should not be part of the transaction anyway, because if something does go wrong, we don't want to remove the related log entries.
I searched the documentation hoping to find a configuration option I could set that would just tell RavenDB to ignore transactions, or at least ignore DTC, but I can't find anything. Is there any way I can get it to stop throwing these 500 server errors?
Set EnlistInDistributedTransactions = false on the document store, and it will work for you. With that disabled, the client no longer enlists its sessions in the ambient System.Transactions transaction, so the log writes commit independently of it.
The question is mostly in the title, but after some research I can't really find any deeper information about this. Mainly I want to know: if a deadlock occurs, does Breeze automatically reattempt the commit, or does it just return an error to the front end so it can try saving again? Any documentation or articles going deeper into this would be appreciated!
To a certain extent this depends on the server backend that you are using, but in general Breeze will NOT attempt to retry after a deadlock failure; it will instead return an exception indicating that a deadlock occurred. You can then retry the save yourself by handling the client-side exception and re-issuing the save.
Note that because most Breeze servers automatically toposort the entities in a save request, deadlocks are much less likely than they would be without such a sort. The idea here is that by ensuring that multiple instances of a program use the same ordering when updating the same set of tables, we reduce the possibility of a deadlock.
This toposorting is part of any Entity Framework based backend as well as the Breeze Node/Sequelize (MySQL, Postgres) provider, and is likely to be added to the Breeze NHibernate and MongoDB providers in the near future.
We're using Sitecore 6.5, and each time we start to publish items, users who are browsing the website get server 500 errors, which end up being:
Transaction (Process ID ##) was deadlocked on lock resources with
another process and has been chosen as the deadlock victim. Rerun the
transaction.
How can we set up SQL Server to give priority to a specific application? We cannot modify any queries or code, so it has to be done via SQL Server (or the connection string).
I've seen the question '"deadlock victim" in transaction, how to change the priority?' and looked at http://msdn.microsoft.com/en-us/library/ms186736(v=SQL.105).aspx, but these seem to be per session, not global.
I don't care if it's a fix/change to Sitecore or a SQL solution.
I don't think you can set the deadlock priority globally; it's a session-only setting. There are no connection string settings for it that I know of. The list of possible SqlConnection string settings can be found here.
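To make the session scope concrete: each connection has to opt in itself, which is exactly what you cannot do without touching code. A sketch of what the opt-in looks like from application code (plain JDBC here, purely for illustration; the same SET statement works from any client):

import java.sql.Connection;
import java.sql.Statement;
import javax.sql.DataSource;

public class LowPriorityWork {

    // DEADLOCK_PRIORITY is a per-session setting: it applies only to the
    // connection that issues it, which is why there is no global or
    // connection-string way to set it for a whole application.
    public void run(DataSource dataSource, String batchSql) throws Exception {
        try (Connection conn = dataSource.getConnection();
             Statement stmt = conn.createStatement()) {
            stmt.execute("SET DEADLOCK_PRIORITY LOW"); // prefer to be the victim
            stmt.execute(batchSql);
        }
    }
}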
It sounds to me like you're actually having a problem with the cache: every time you publish, the cache is cleared, and you get deadlocks from all the resulting calls being made at the same time. I haven't seen this sort of thing happen with 6.5, so you might also want to check into your caching. It would help a lot to look at your Sitecore logs and see if this is happening when caches are being created. Either way, check the caching guide on the SDN and see if that helps.