How do I update counter_cache when running raw SQL?

I've got a few counter caches that are updated when a model is saved, updated, or destroyed, thanks to after_* hooks. If I create, update, or destroy a record using raw SQL, how can I update the counter caches automatically?

This should help you
http://apidock.com/rails/ActiveRecord/CounterCache/ClassMethods/reset_counters
# For the Post record with id 1, reset the comments_count
Post.reset_counters(1, :comments)
Resets one or more counter caches to their correct value using an SQL count query. This is useful when adding new counter caches, or if the counter has been corrupted or modified directly by SQL.
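For example, a minimal sketch of repairing the cache after a raw insert could look like this (the Post/Comment models, the column list, and the NOW() calls are placeholders for illustration, not taken from the question):

post = Post.find(1)

# after_* hooks and the counter cache are bypassed when going through raw SQL
ActiveRecord::Base.connection.execute(<<~SQL)
  INSERT INTO comments (post_id, body, created_at, updated_at)
  VALUES (#{post.id}, 'added via raw SQL', NOW(), NOW())
SQL

# comments_count is now stale, so recalculate it with a COUNT query
Post.reset_counters(post.id, :comments)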

Related

Can I use Postgres transactions only for write queries and run read queries without a transaction?

What happens if I use transactions for write operations but don't use them for read operations?
My use case:
get some data1 from db (without transaction)
create some data2 using data1 (with transaction)
get some data3 from db (without transaction)
create some data4 using data2 and data3 (with transaction)
If there is no error, commit; otherwise, roll back.
Is there anything wrong with not using a transaction for the two read queries?
Edit/Add/Delete Records
A transaction is used when you want to ensure that a group of row edit/add/delete queries is committed to the db together. In other words, you want all SQL commands in that group to run successfully, or else not commit any of them. E.g. you are saving a new record to a users table and a users address table together, but you might not want to write to the users table if the address record fails for some reason. In this case you would use a transaction for both commands.
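A minimal SQL sketch of that pattern (table and column names are invented for illustration):

-- both inserts become permanent together, or not at all
BEGIN;
INSERT INTO users (id, name) VALUES (1, 'Alice');
INSERT INTO user_addresses (user_id, city) VALUES (1, 'Berlin');
COMMIT;
-- if the address insert had failed, issuing ROLLBACK instead of COMMIT
-- would also undo the insert into users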
Read Records
If you understand the above, you know you don't need transactions for read-only SQL commands.
Whether that sequence is fine depends on your requirements. With your current procedure, the following can happen:
if you encounter an error before step 2 (creating data2) finishes, nothing has changed
if you encounter an error before step 5 (the final commit) finishes, you have only data2, but not data4
if no error happens before step 5 has completed, you have both data2 and data4
If that is fine for you, there is no problem with what you are doing.
So if you're going to query the database for the same rows that you just inserted inside a transaction, but haven't committed that transaction yet, then you should read from the database using that same transaction.
E.g. you create a user, then you need to create an external account for this user, and the method that creates the external account reads the user from the database instead of receiving it as a parameter. You can either modify the create-external-account method so it takes the user as a parameter and pass it the just-created user, or you can keep the method as it is, but then you have to make sure you pass the transaction to it. Otherwise, if the transaction is not committed and is not passed to the read query, the created user won't be found.
Ideally you should avoid this by passing the input data to the create-external-account method too, so you don't need to read the user from the db at all; but if for some reason that is not possible, make sure you read from the db using the transaction.
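A minimal SQL sketch of that situation (table and column names invented for illustration): within an open transaction the same connection sees its own uncommitted rows, while a different connection does not until the transaction commits.

-- Session A
BEGIN;
INSERT INTO users (id, name) VALUES (42, 'Bob');
SELECT * FROM users WHERE id = 42;  -- sees the new row (same transaction)

-- Session B (a different connection), at the same time
SELECT * FROM users WHERE id = 42;  -- sees nothing yet

-- Session A
COMMIT;  -- only now does the row become visible to session B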

Understanding locks and query status in Snowflake (multiple updates to a single table)

While using the Python connector for Snowflake with queries of the form
UPDATE X.TABLEY SET STATUS = %(status)s, STATUS_DETAILS = %(status_details)s WHERE ID = %(entry_id)s
, sometimes I get the following message:
(snowflake.connector.errors.ProgrammingError) 000625 (57014): Statement 'X' has locked table 'XX' in transaction 1588294931722 and this lock has not yet been released.
and soon after that
Your statement 'X' was aborted because the number of waiters for this lock exceeds the 20 statements limit
This usually happens when multiple queries try to update a single table at the same time. What I don't understand is that when I look at the query history in Snowflake, it says the query finished successfully (Succeeded status), but in reality the UPDATE never happened, because the table was not altered.
So according to https://community.snowflake.com/s/article/how-to-resolve-blocked-queries I used
SELECT SYSTEM$ABORT_TRANSACTION(<transaction_id>);
to release the lock, but still nothing happened; even with the Succeeded status, the query seems not to have executed at all. So my question is: how does this really work, and how can a lock be released without losing the execution of the query? (Also, what happens to the other 20+ queries that are queued because of the lock? Sometimes it seems that when the lock is released, the next one takes the lock and has to be aborted as well.)
I would appreciate it if you could help me. Thanks!
Not sure if Sergio got an answer to this. The problem in this case is not with the table. Based on my experience with Snowflake, below is my understanding.
In Snowflake, every table operation also involves a change in the metadata table that keeps track of micro-partitions and min/max values. This metadata table supports only 20 concurrent DML statements by default. So if a table is continuously being updated and hit at the same partition, there is a chance that this limit will be exceeded. In that case, we should look at redesigning the table update/insert logic. In one of our use cases, we increased the limit to 50 after speaking to the Snowflake support team.
UPDATE, DELETE, and MERGE cannot run concurrently on a single table; they will be serialized, as only one can take a lock on a table at a time. Others will queue up in the "blocked" state until it is their turn to take the lock. There is a limit on the number of queries that can be waiting on a single lock.
If you see an update finish successfully but don't see the updated data in the table, then you are most likely not COMMITting your transactions. Make sure you run COMMIT after an update so that the new data is committed to the table and the lock is released.
Alternatively, you can make sure AUTOCOMMIT is enabled so that DML will commit automatically after completion. You can enable it with ALTER SESSION SET AUTOCOMMIT=TRUE; in any sessions that are going to run an UPDATE.
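A minimal sketch with the Python connector (the connection parameters, status values, and entry id are placeholders, not taken from the question):

import snowflake.connector

# placeholder credentials/identifiers; adjust for your account
conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="my_password",
    warehouse="my_wh",
    database="MY_DB",
    schema="X",
)

try:
    cur = conn.cursor()

    # Option 1: commit explicitly so the lock is released as soon as the UPDATE finishes
    cur.execute(
        "UPDATE X.TABLEY SET STATUS = %(status)s, STATUS_DETAILS = %(status_details)s "
        "WHERE ID = %(entry_id)s",
        {"status": "DONE", "status_details": "ok", "entry_id": 123},
    )
    conn.commit()

    # Option 2: enable autocommit for the rest of the session instead
    cur.execute("ALTER SESSION SET AUTOCOMMIT = TRUE")
finally:
    conn.close()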

SQL Transaction within DataReader or outside of it? And Function Sequence Error?

Currently my code structure (in VB.NET) is as follows -
Using reader As IfxDataReader = command.ExecuteReader()
    If reader.HasRows Then
        Do While reader.Read()
            Using transaction As IfxTransaction = conn.BeginTransaction(System.Data.IsolationLevel.ReadCommitted)
                'multiple update statements
                transaction.Commit()
            End Using
        Loop
    End If
End Using
The reader is reading multiple records, and for every record there are multiple update statements to be run. I figure that it would be better to begin a transaction for each record, commit after it is done, then move on to the next record and create a new transaction for that, and "rinse and repeat".
Everything works fine and is committed to the database, but when the reader checks for more rows after the last record, this peculiar error shows up -
ERROR [HY010][Informix .NET provider] Function sequence error.
After doing some research, the IBM website says that I would have to update to CSDK 3.5 or higher (http://www-01.ibm.com/support/docview.wss?uid=swg1IC58696). However, this seems a bit unnecessary to me since the code is working fine; it's just throwing that error at the end.
Would it be better to have the transaction OUTSIDE of the reader, go through all the records in the table, and THEN commit all at once? Or is the way it is now the most efficient/optimal (in other words, going through each record, with all the necessary update statements for that record, and committing one at a time)? Secondly, would the former choice resolve the function sequence error?
If you plan to target your application for a 64-bit architecture or .NET Framework 4.x, then you may consider using CSDK 4.10 xC2 or above.
Within the code, there was a DataReader, and inside the DataReader were some update statements. I changed the way the code was structured by separating these functions: first read all the data and store it in objects, then, once the reader is done and closed, run the update statements while iterating through each object. That solved the function sequence error.
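A rough sketch of that restructuring (the EntryRecord class, column indexes, and UPDATE statement are invented placeholders):

' hypothetical record type for the rows being processed
Public Class EntryRecord
    Public Property Id As Integer
    Public Property Amount As Decimal
End Class

Dim entries As New List(Of EntryRecord)()

' 1. Read everything first and let the reader close
Using reader As IfxDataReader = command.ExecuteReader()
    While reader.Read()
        entries.Add(New EntryRecord With {
            .Id = reader.GetInt32(0),
            .Amount = reader.GetDecimal(1)
        })
    End While
End Using

' 2. With the reader closed, run the updates, one transaction per record
For Each entry In entries
    Using transaction As IfxTransaction = conn.BeginTransaction(System.Data.IsolationLevel.ReadCommitted)
        Using updateCmd As IfxCommand = conn.CreateCommand()
            updateCmd.Transaction = transaction
            updateCmd.CommandText = "UPDATE entries SET amount = ? WHERE id = ?"
            updateCmd.Parameters.Add(New IfxParameter With {.Value = entry.Amount})
            updateCmd.Parameters.Add(New IfxParameter With {.Value = entry.Id})
            updateCmd.ExecuteNonQuery()
        End Using
        transaction.Commit()
    End Using
Next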

READ COMMITTED database isolation level in oracle

I'm working on a web app connected to Oracle. We have a table in Oracle with a column "activated". Only one row can have this column set to 1 at any one time. To enforce this, we have been using the SERIALIZABLE isolation level from Java, however we are running into the "cannot serialize transaction" error and cannot work out why.
We were wondering if an isolation level of READ COMMITTED would do the job. So my question is this:
If we have a transaction which involves the following SQL:
SELECT *
FROM MODEL;
UPDATE MODEL
SET ACTIVATED = 0;
UPDATE MODEL
SET ACTIVATED = 1
WHERE RISK_MODEL_ID = ?;
COMMIT;
Given that it is possible for more than one of these transactions to be executing at the same time, would it be possible for more than one MODEL row to have the activated flag set to 1 ?
Any help would be appreciated.
Your solution should work: your first UPDATE will lock every row in the table. If another transaction is not finished, the UPDATE will wait. Your second UPDATE then guarantees that only one row will have the value 1, because you hold locks on all existing rows (this doesn't prevent INSERT statements, however).
You should also make sure that a row with that RISK_MODEL_ID exists (or you will end up with zero rows having the value 1 at the end of your transaction).
To prevent concurrent INSERT statements, you would LOCK the table (in EXCLUSIVE MODE).
You could consider using a unique, function-based index to let Oracle enforce the constraint that only one row can have the activated flag set to 1.
CREATE UNIQUE INDEX MODEL_IX ON MODEL ( DECODE(ACTIVATED, 1, 1, NULL));
This would stop more than one row having the flag set to 1, but does not mean that there is always one row with the flag set to 1.
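For illustration, once that index exists a second active row is rejected (the index owner is shown as a placeholder):

-- activating the first row succeeds
UPDATE MODEL SET ACTIVATED = 1 WHERE RISK_MODEL_ID = 1;

-- attempting to activate a second row now fails with something like
-- ORA-00001: unique constraint (MYSCHEMA.MODEL_IX) violated
UPDATE MODEL SET ACTIVATED = 1 WHERE RISK_MODEL_ID = 2;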
If what you want is to ensure that only one transaction can run at a time then you can use the FOR UPDATE syntax. As you have a single row which needs locking this is a very efficient approach.
declare
  cursor c is
    select activated
    from model
    where activated = 1
    for update of activated;
  r c%rowtype;
begin
  open c;
  -- this will wait here if another transaction holds the lock
  -- (with NOWAIT it would fail immediately instead)
  fetch c into r;
  ....
  update model
  set activated = 0
  where current of c;
  update model
  set activated = 1
  where risk_model_id = ?;
  close c;
  commit;
end;
/
The commit frees the lock.
The default behaviour is to wait until the row is freed. Alternatively we can specify NOWAIT, in which case any other session attempting to update the current active row will fail immediately, or we can add a WAIT option with a timeout in seconds. NOWAIT is the option to choose to absolutely avoid the risk of hanging, and it also gives us the chance to inform the user that someone else is updating the table, which they might want to know.
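For illustration, the NOWAIT variant of the cursor above would be declared like this:

cursor c is
  select activated
  from model
  where activated = 1
  for update of activated nowait;
-- if another session already holds the lock, opening this cursor fails
-- immediately with ORA-00054 instead of waiting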
This approach is much more scalable than updating all the rows in the table. Use a function-based index as WW showed to enforce the rule that only one row can have ACTIVATED=1.

NHibernate - Getting a list

I am trying to fetch a list from my database fulfilling a given criterion. The statement I am using is :
var products = session
    .CreateCriteria(typeof(Product))
    .Add(Restrictions.Eq("Category", category))
    .List();
where Product is my domain object and session is the current active session.
Whenever I use this statement, NHibernate queries the database every time to fetch the list, instead of doing it just the first time and then returning the result from the cache from the second time onwards. Is there anything I am doing incorrectly?
It has to hit the database, but only to retrieve the PK values in the query results.
Demonstration: set a breakpoint on this line, execute it once, then pause before it executes again. Modify the database directly to change one of the object's values, then run the line again. Check the results. The entities returned should not reflect the changes you made to the database (i.e., they came from the session cache).
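If you actually want the query itself to be served from the cache on later executions, a minimal sketch would be to mark the criteria query as cacheable (this assumes the second-level cache and the query cache are enabled in your NHibernate configuration and that Product is mapped as cacheable, none of which is shown in the question):

var products = session
    .CreateCriteria(typeof(Product))
    .Add(Restrictions.Eq("Category", category))
    .SetCacheable(true)   // allow the result set to be stored in the query cache
    .List();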