Cannot start/stop cache within lock or transaction - locking

I have created an IgniteCache named MYIGNITECACHE1 in a single thread and locked one row entry in it.
During that locking period, and in the same thread, I am creating another IgniteCache named MYIGNITECACHE2.
But while creating the second cache, I am getting an IgniteException: Cannot start/stop cache within lock or transaction.
I am creating Cache as,
ignite.getOrCreateCache("MYIGNITECACHE2");

This is correct behavior. To avoid it you can either create a separate thread and create the cache there, or create all required caches before acquiring the lock.
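A minimal sketch of the second option (the cache holding the lock is made TRANSACTIONAL so that the entry lock is available; all names are only for illustration). The point is simply that getOrCreateCache() is never called while the lock is held:

import java.util.concurrent.locks.Lock;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class CreateCachesBeforeLocking {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Create (or get) every cache the job will need BEFORE taking the lock.
            CacheConfiguration<Integer, String> cfg1 = new CacheConfiguration<>("MYIGNITECACHE1");
            cfg1.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); // lock() requires a TRANSACTIONAL cache
            IgniteCache<Integer, String> cache1 = ignite.getOrCreateCache(cfg1);
            IgniteCache<Integer, String> cache2 = ignite.getOrCreateCache("MYIGNITECACHE2");

            cache1.put(1, "value-1");

            // Only now acquire the entry lock; no cache is started or stopped inside it.
            Lock lock = cache1.lock(1);
            lock.lock();
            try {
                cache2.put(1, "written while the lock on MYIGNITECACHE1 is held");
            } finally {
                lock.unlock();
            }
        }
    }
}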

Related

How to prevent locks in Redshift (shared lock stopping a write job)

I have a data warehouse that is used by multiple downstream users. They read the data from the Redshift table. When they read the data, a shared lock is placed on the table. At that time, my daily job that is supposed to write to the table cannot write, because it cannot take an exclusive lock until the shared lock is released.
Ideally my write job should take priority over any read job. Can I enforce this in some way?
Usually this is done by your update process not requiring an exclusive lock, or by managing the need for locks so that the update process isn't blocked.
Can you describe your update process and which steps require the exclusive locks?
Look at the locks and the statements causing them when things stop making forward progress. Reworking these parts should let you keep your updates moving while the read sessions act on the versions of the data they started with.
It is also important not to have user transactions that hang around for days on end. This can happen when interactive sessions are simply left open mid-transaction. Avoiding this also prevents errors caused by some sessions seeing very old versions of the data.
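If it helps, here is a small JDBC sketch (connection details are placeholders) that dumps Redshift's STV_LOCKS view so you can see which sessions hold locks while the write job is blocked:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class ShowRedshiftLocks {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; adjust host, database, user and password.
        String url = "jdbc:redshift://my-cluster.example.com:5439/mydb";
        try (Connection conn = DriverManager.getConnection(url, "my_user", "my_password");
             Statement stmt = conn.createStatement();
             // STV_LOCKS lists the table locks currently held in the cluster.
             ResultSet rs = stmt.executeQuery("SELECT * FROM stv_locks")) {
            ResultSetMetaData meta = rs.getMetaData();
            while (rs.next()) {
                StringBuilder row = new StringBuilder();
                for (int i = 1; i <= meta.getColumnCount(); i++) {
                    row.append(meta.getColumnName(i)).append('=')
                       .append(rs.getString(i)).append(' ');
                }
                System.out.println(row);
            }
        }
    }
}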

Implementing a mutual exclusion system / distributed queue in Postgres

I want to implement a mutual exclusion system in PostgreSQL where multiple worker processes will temporarily lock resources (rows) from a table (queue) while they work on them. If the worker processes crash, I want the lock to be cleanly released and not have to rely on another process to clean up the leaked locks.
What I have come up with so far is to use a SELECT ... FOR UPDATE SKIP LOCKED query within a transaction, which locks the row it finds and skips any other locked row.
It works well but one of the issues is that the worker might take a while to do its task and I need to keep the transaction open for the entire duration of its task.
Another problem is that the workers work incrementally and persist their state to the database, so that if they're stopped or crash they can resume quickly where they were. The row being locked makes it impossible to persist their state in the same table (though I think I can get around that by using another table to persist the state).
I've searched the Web for how to implement a semaphore or a resource-borrowing system in SQL/PostgreSQL, but I haven't found anything that fits my needs. Is there a simple way of achieving this with PostgreSQL?
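For illustration, a minimal JDBC sketch of the SELECT ... FOR UPDATE SKIP LOCKED approach described above, assuming a hypothetical task_queue(id, payload) table; the claim on the row lasts exactly as long as the transaction that took it, so a crashed worker releases it automatically:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SkipLockedWorker {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details and a hypothetical task_queue(id, payload) table.
        String url = "jdbc:postgresql://localhost:5432/mydb";
        try (Connection conn = DriverManager.getConnection(url, "worker", "secret")) {
            conn.setAutoCommit(false); // the claim only lasts as long as this transaction
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "SELECT id, payload FROM task_queue " +
                         "LIMIT 1 FOR UPDATE SKIP LOCKED")) {
                if (rs.next()) {
                    long id = rs.getLong("id");
                    String payload = rs.getString("payload");
                    // ... do the work here; the row stays locked until commit/rollback ...
                    System.out.println("working on task " + id + ": " + payload);
                }
            }
            conn.commit(); // releases the row lock; a crash or dropped connection releases it implicitly
        }
    }
}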

Apache Ignite Spring Data save method transaction behaviour with Map parameter

As per the Apache Ignite Spring Data documentation, there are two methods to save data in the Ignite cache:
1. org.apache.ignite.springdata.repository.IgniteRepository.save(key, value)
and
2. org.apache.ignite.springdata.repository.IgniteRepository.save(Map<ID, S> entities)
So, I just want to understand the 2nd method's transaction behavior. Suppose we are going to save 100 records using the save(Map<Id,S>) method and, for some reason, some nodes go down after 70 records. In this case, will it roll back all the 70 records?
Note: As per the 1st method's behavior, if we use @Transactional at the method level then it will roll back the particular entity.
First of all, you should read about the transaction mechanism used in Apache Ignite. It is described very well in the articles presented here:
https://apacheignite.readme.io/v1.0/docs/transactions#section-two-phase-commit-2pc
The most interesting parts for you are "Backup Node Failures" and "Primary Node Failures":
Backup Node Failures
If a backup node fails during either "Prepare" phase or "Commit" phase, then no special handling is needed. The data will still be committed on the nodes that are alive. GridGain will then, in the background, designate a new backup node and the data will be copied there outside of the transaction scope.
Primary Node Failures
If a primary node fails before or during the "Prepare" phase, then the coordinator will designate one of the backup nodes to become primary and retry the "Prepare" phase. If the failure happens before or during the "Commit" phase, then the backup nodes will detect the crash and send a message to the Coordinator node to find out whether to commit or rollback. The transaction still completes and the data within distributed cache remains consistent.
In your case, all updates for all values in the map should be committed in one transaction or rolled back together. I think these articles answer your question.
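If you want the all-or-nothing behaviour to be explicit rather than implied, one option is to run the bulk update inside an explicit Ignite transaction. A sketch, assuming the underlying cache is configured as TRANSACTIONAL (save(Map) is essentially a bulk put at the cache level):

import java.util.HashMap;
import java.util.Map;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

public class BulkSaveInTransaction {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // The cache must be TRANSACTIONAL, otherwise the bulk put is not atomic.
            CacheConfiguration<Long, String> cfg = new CacheConfiguration<>("personCache");
            cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
            IgniteCache<Long, String> cache = ignite.getOrCreateCache(cfg);

            Map<Long, String> batch = new HashMap<>();
            for (long id = 1; id <= 100; id++) {
                batch.put(id, "person-" + id);
            }

            // All 100 entries commit together or not at all.
            try (Transaction tx = ignite.transactions().txStart(
                    TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ)) {
                cache.putAll(batch); // the cache-level equivalent of save(Map)
                tx.commit();
            }
        }
    }
}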

Updating on commit to avoid deadlocks

I have a table that tracks the last update time of another table's partitions so our reconciler need only check the partitions that have been updated since the last reconcile. There are multiple threads updating the partitioned table and therefore updating the same row of the latest update time table several times each. This is obviously causing deadlocks. Is there a way to prevent these deadlocks by only updating once on commit?
I was thinking of maybe using a session local temporary table, but not sure how to transfer the values to the global table on commit.
There is no way to trigger a process on commit so that approach probably won't work.
Potentially, you could have each of the writer processes write to an Oracle Advanced Queue (AQ) and then have another process that de-queues the messages and actually applies them to the current table. That would mean that there would be some lag between the writer session committing and the AQ processor picking up and processing the message but that lag shouldn't be too long. You could do the same thing by having each writer thread insert into a queue-like table and having a separate thread process that table if you don't want to use AQ.
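A rough JDBC sketch of the queue-like-table variant (all table and column names are made up): each writer thread only inserts into a staging table, and a single background pass collapses those rows into the latest-update-time table, so the hot row is only ever touched by one session:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.Timestamp;

public class PartitionUpdateQueue {
    // Writer threads call this instead of updating the hot row directly.
    static void recordPartitionTouch(Connection conn, String partitionName) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO partition_update_queue (partition_name, touched_at) " +
                "VALUES (?, SYSTIMESTAMP)")) {
            ps.setString(1, partitionName);
            ps.executeUpdate();
        }
        conn.commit();
    }

    // A single background pass drains the queue and applies one update per partition.
    // Simplified: it assumes writers commit promptly after inserting, as recordPartitionTouch does.
    static void applyQueuedUpdates(Connection conn) throws Exception {
        Timestamp cutoff;
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT SYSTIMESTAMP FROM dual")) {
            rs.next();
            cutoff = rs.getTimestamp(1);
        }
        try (PreparedStatement merge = conn.prepareStatement(
                "MERGE INTO partition_last_update t " +
                "USING (SELECT partition_name, MAX(touched_at) AS touched_at " +
                "         FROM partition_update_queue WHERE touched_at <= ? " +
                "        GROUP BY partition_name) q " +
                "   ON (t.partition_name = q.partition_name) " +
                " WHEN MATCHED THEN UPDATE SET t.last_update = q.touched_at " +
                " WHEN NOT MATCHED THEN INSERT (partition_name, last_update) " +
                "      VALUES (q.partition_name, q.touched_at)");
             PreparedStatement purge = conn.prepareStatement(
                "DELETE FROM partition_update_queue WHERE touched_at <= ?")) {
            merge.setTimestamp(1, cutoff);
            merge.executeUpdate();
            purge.setTimestamp(1, cutoff);
            purge.executeUpdate();
        }
        conn.commit();
    }

    public static void main(String[] args) throws Exception {
        // Placeholder connection details.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/ORCLPDB1", "app", "secret")) {
            conn.setAutoCommit(false);
            recordPartitionTouch(conn, "SALES_2024_06");
            applyQueuedUpdates(conn);
        }
    }
}
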
I'm confused, though, by how the process you are describing could cause a deadlock. Are you really talking about a deadlock (i.e. an ORA-00060 error is thrown and a deadlock trace file is generated)? What you are describing should lead to blocking locks, not deadlocks, unless there is more going on than you have told us.

What are the First and Second Level caches in (N)Hibernate?

Can anyone explain in simple words what First and Second Level caching in Hibernate/NHibernate are?
1.1) First-level cache
The first-level cache is always associated with the Session object. Hibernate uses this cache by default. It works one transaction at a time and mainly reduces the number of SQL statements Hibernate needs to generate within a given transaction: instead of writing to the database after every modification made in the transaction, the changes are pushed out only at the end of the transaction.
1.2) Second-level cache
The second-level cache is always associated with the SessionFactory object. While running transactions, it loads objects at the SessionFactory level, so those objects are available to the entire application and not bound to a single user. Since the objects are already loaded in the cache, whenever an object is returned by a query there is no need to go to the database. That is how the second-level cache works. A query-level cache can be used here as well.
Quoted from: http://javabeat.net/introduction-to-hibernate-caching/
There's a pretty good explanation of first level caching on the Streamline Logic blog.
Basically, first-level caching happens on a per-session basis, whereas second-level caching can be shared across multiple sessions.
Here is some basic explanation of the Hibernate cache...
The first-level cache is associated with the Session object.
The scope of the cached objects is the session. Once the session is closed, the cached objects are gone forever.
The first-level cache is enabled by default and you cannot disable it.
When we query an entity for the first time, it is retrieved from the database and stored in the first-level cache associated with the Hibernate session.
If we query the same object again with the same session object, it is loaded from the cache and no SQL query is executed.
A loaded entity can be removed from the session using the evict() method. The next load of this entity will make a database call again if it has been removed with evict().
The whole session cache can be emptied using the clear() method. It removes all the entities stored in the cache.
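A small illustration of those points, assuming a mapped Employee entity and an already built SessionFactory (both are placeholders):

import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.Session;
import org.hibernate.SessionFactory;

// Placeholder entity used only for this sketch.
@Entity
class Employee {
    @Id
    Long id;
    String name;
}

public class FirstLevelCacheDemo {
    static void demo(SessionFactory sessionFactory) {
        Session session = sessionFactory.openSession();
        try {
            Employee e1 = session.get(Employee.class, 1L); // hits the database
            Employee e2 = session.get(Employee.class, 1L); // served from the first-level cache, no SQL

            session.evict(e1);                             // remove this one entity from the session cache
            Employee e3 = session.get(Employee.class, 1L); // hits the database again

            session.clear();                               // empty the whole session cache
        } finally {
            session.close();
        }
    }
}
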
The second-level cache is separate from the first-level cache and is available globally, in the session-factory scope.
The second-level cache is created in the session-factory scope and is available to all sessions created with that particular session factory.
This also means that once the session factory is closed, all cache associated with it dies and the cache manager is shut down as well.
Whenever a Hibernate session tries to load an entity, the very first place it looks is the first-level cache (associated with that particular Hibernate session).
If a cached copy of the entity is present in the first-level cache, it is returned as the result of the load method.
If there is no cached entity in the first-level cache, the second-level cache is looked up for the cached entity.
If the second-level cache has the cached entity, it is returned as the result of the load method. But before being returned, the entity is also stored in the first-level cache, so that the next invocation of the load method for this entity returns it from the first-level cache itself, without going to the second-level cache again.
If the entity is found in neither the first-level cache nor the second-level cache, a database query is executed and the entity is stored in both cache levels before being returned as the response of the load() method.
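A sketch of that lookup order across two sessions, reusing the placeholder Employee entity from the previous sketch and assuming the second-level cache has been enabled for it:

import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class SecondLevelCacheLookupDemo {
    static void demo(SessionFactory sessionFactory) {
        Session first = sessionFactory.openSession();
        first.get(Employee.class, 1L);  // not in L1 or L2 yet -> database, then stored in both
        first.close();                  // closing the session throws away its first-level cache

        Session second = sessionFactory.openSession();
        second.get(Employee.class, 1L); // not in this session's L1 -> found in L2, no SQL executed
        second.close();
    }
}
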
First-level cache
Hibernate tries to defer the Persistence Context flushing up until the last possible moment. This strategy has been traditionally known as transactional write-behind.
The write-behind is more related to Hibernate flushing rather than any logical or physical transaction. During a transaction, the flush may occur multiple times.
The flushed changes are visible only for the current database transaction. Until the current transaction is committed, no change is visible by other concurrent transactions.
Due to the first-level cache, Hibernate can do several optimizations:
JDBC statement batching
prevent lost update anomalies
Second-level cache
A proper caching solution would have to span across multiple Hibernate Sessions and that’s the reason Hibernate supports an additional second-level cache as well.
The second-level cache is bound to the SessionFactory life-cycle, so it’s destroyed only when the SessionFactory is closed (typically when the application is shutting down). The second-level cache is primarily entity-based oriented, although it supports an optional query-caching solution as well.
When loading an entity, Hibernate will execute the following actions:
If the entity is stored in the first-level cache, then the cached object reference is returned. This ensures application-level repeatable reads.
If the entity is not stored in the first-level cache and the second-level cache is activated, then Hibernate checks whether the entity has been cached in the second-level cache, and if it has, returns it to the caller.
Otherwise, if the entity is not stored in the first or second-level cache, it will be loaded from the DB.
By default, NHibernate uses first-level caching, which is based on the Session object. But if you are running in a multi-server environment, the first-level cache may not be very scalable and can come with performance issues, because it has to make very frequent trips to the database as the data is distributed over multiple servers. In other words, NHibernate provides a basic, not-so-sophisticated in-process L1 cache out of the box. However, it doesn't provide the features a caching solution must have in order to make a notable impact on application performance.
So the answer to all these problems is an L2 cache, which is associated with the session factory object. It reduces the time-consuming trips to the database and so ultimately improves the application's response time.
First Level Cache
The Session object holds the first-level cache data. It is enabled by default. The first-level cache data is not available to the entire application. An application can use many session objects.
Second Level Cache
The SessionFactory object holds the second-level cache data. The data stored in the second-level cache is available to the entire application. But we need to enable it explicitly.
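For example, with Hibernate and the Ehcache provider, enabling it might look roughly like this (the region factory class and the exact property set vary between versions, so treat it as a sketch; the Product entity is made up):

import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.SessionFactory;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;
import org.hibernate.cfg.Configuration;

// The entity opts in to the second-level cache.
@Entity
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
class Product {
    @Id
    Long id;
    String name;
}

public class SecondLevelCacheConfig {
    static SessionFactory build() {
        return new Configuration()
                .addAnnotatedClass(Product.class)
                .setProperty("hibernate.cache.use_second_level_cache", "true")
                .setProperty("hibernate.cache.use_query_cache", "true")
                // Provider class depends on the Hibernate/Ehcache versions in use.
                .setProperty("hibernate.cache.region.factory_class",
                             "org.hibernate.cache.ehcache.EhCacheRegionFactory")
                .buildSessionFactory();
    }
}
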
In the second-level cache, domain hbm files can set the mutable attribute to false.
For example, in a domain class where some values, such as the duration of a day, remain constant as a universal truth, the class can be marked as immutable across the application.
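An illustrative hbm mapping for such a class (the class and columns are made up), with mutable set to false so the entity is treated as read-only and can be cached safely:

<!-- Day.hbm.xml: a hypothetical read-only entity -->
<hibernate-mapping>
  <class name="Day" table="DAY" mutable="false">
    <cache usage="read-only"/>
    <id name="id" column="ID"/>
    <property name="hoursInDay" column="HOURS_IN_DAY"/>
  </class>
</hibernate-mapping>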