Can we create a database transaction and commit/rollback it later? I mean, we would not commit/rollback on the same machine/host/server. Let's say we return the transaction and let another party decide whether to commit or rollback based on a transaction ID. How would we do this in Go with the sql library?
No.
A transaction allows you to perform a series of commands atomically, i.e., without another command seeing data that's half updated, and without another command changing the underlying data partway through the series.
It is something you want to be over and done with quickly, because transactions lock the underlying tables.
Imagine your transaction was to insert a row into Table A. You start the transaction, insert the row, then don't commit or rollback. Nobody else can use Table A until you have done so (except in particular circumstances); they will sit there waiting (blocked). You could also get deadlocks if concurrent transactions try to put data into tables in different orders, in which case transactions are automatically rolled back without user input.
There's a great video by Brent Ozar explaining and showing deadlocks. It's worth watching on its own, and it also demonstrates what happens if you don't commit transactions.
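In Go's database/sql, this is also why a *sql.Tx cannot be handed off: it is pinned to a single pooled connection in the process that created it, and there is no transaction ID to resume from elsewhere. A minimal sketch of the usual pattern, assuming a Postgres driver and an invented accounts table:

```go
package main

import (
	"context"
	"database/sql"
	"log"

	_ "github.com/lib/pq" // illustrative driver choice; any database/sql driver works
)

func transfer(ctx context.Context, db *sql.DB) error {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	// Rollback is a no-op after a successful Commit, so deferring it is safe.
	defer tx.Rollback()

	// "accounts" is an invented table; every statement runs on the same connection.
	if _, err := tx.ExecContext(ctx, "UPDATE accounts SET balance = balance - 10 WHERE id = $1", 1); err != nil {
		return err
	}
	if _, err := tx.ExecContext(ctx, "UPDATE accounts SET balance = balance + 10 WHERE id = $1", 2); err != nil {
		return err
	}
	// Commit must happen here, in the same process that called BeginTx.
	return tx.Commit()
}

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/example?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if err := transfer(context.Background(), db); err != nil {
		log.Fatal(err)
	}
}
```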
If you want a queuing or approving mechanism for changes, you'll need to build it yourself, e.g.,
Putting changes into a 'queue' to be done later, or
Doing the data changes but flagging them as 'draft' in a column in the relevant table(s). The rest of your code then has to decide whether or not to include draft data.
tl;dr version: Transactions in databases are a data-level feature to ensure your data is consistent. Approval mechanisms and the like belong at the business-logic level.
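If you went down the 'draft' flag route, a rough sketch might look like this; every write is its own short transaction, and the articles table, column names, and placeholders are invented for illustration:

```go
package draftflow

import (
	"context"
	"database/sql"
)

// saveDraft commits the user's change immediately, but marks it as a draft.
func saveDraft(ctx context.Context, db *sql.DB, body string) error {
	_, err := db.ExecContext(ctx,
		`INSERT INTO articles (body, status) VALUES ($1, 'draft')`, body)
	return err
}

// approve is the business-level "commit": a reviewer flips the flag later.
func approve(ctx context.Context, db *sql.DB, id int64) error {
	_, err := db.ExecContext(ctx,
		`UPDATE articles SET status = 'approved' WHERE id = $1`, id)
	return err
}

// approvedArticles shows the read side choosing to exclude drafts.
// Callers must Close the returned rows.
func approvedArticles(ctx context.Context, db *sql.DB) (*sql.Rows, error) {
	return db.QueryContext(ctx,
		`SELECT id, body FROM articles WHERE status = 'approved'`)
}
```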
I have a general question: why would I need a trigger if I can do the checks and validations in a procedure?
Stored procedures are meant to process records and perform operations on data that would otherwise be difficult to do in a SQL query on its own. E.g., you can define your own user-defined exceptions, handle file operations, call REST APIs, etc., inside a stored procedure.
Triggers, on the other hand, are part of the transaction boundary and are invoked on INSERT, UPDATE, or DELETE of entries in the table. A stored procedure can be invoked when a trigger fires. There are also triggers that are invoked on DDL commands. One use case for triggers is auditing records (say, populating an audit history of changes to the table).
Another quirk of triggers: in Oracle they can possibly fire more than once; check how this happens here:
https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:237924300346045037
In your case of validating data, I would do the following. First, ensure all validations are set up as constraints on the table (check constraints, NOT NULL, uniqueness, etc.).
If necessary, create further validations in a stored procedure and have that stored procedure invoked by a trigger. I wouldn't do anything non-transactional in a trigger (e.g., sending alert mails), because it sits inside the transaction boundary: the trigger would already have fired even if you later roll the transaction back.
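To make the constraints-first advice concrete, here is a sketch of setting such constraints up through Go's database/sql; the employees table, the driver choice, and the connection string are all invented for illustration:

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // illustrative driver choice
)

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/example?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Prefer declarative constraints over trigger logic: the database
	// enforces them inside every transaction, with no extra code to maintain.
	stmts := []string{
		// NOT NULL and UNIQUE declared at creation time.
		`CREATE TABLE IF NOT EXISTS employees (
			id     BIGINT PRIMARY KEY,
			email  TEXT NOT NULL UNIQUE,
			salary NUMERIC NOT NULL
		)`,
		// A CHECK constraint expressing a business rule.
		`ALTER TABLE employees
			ADD CONSTRAINT salary_positive CHECK (salary > 0)`,
	}
	for _, s := range stmts {
		if _, err := db.Exec(s); err != nil {
			log.Fatal(err)
		}
	}
}
```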
I think this is a typical question about how modern databases handle concurrency.
Say we have a process P1 modifying the table inside a transaction (insert or delete): begin, run the SQL, then commit. What if, before P1 commits, another process P2 starts a read transaction on the same table? What happens? Can P2 still read the table?
Will the table be locked so that P2 cannot read until P1 finishes, or will P2 read the table and see the uncommitted changes introduced by P1?
This behavior depends on database implementation details and timing. In general, until P1 commits, its results are not valid, so the table will not be exclusively locked against readers. P2 will most likely not encounter any lock and will read the old data.
I'm saying "most likely" because this also depends on the isolation levels configured in the database. No serious production database survives for long when configured to be "serializable", which implies perfect isolation between transactions. So, depending on the situation, a "phantom read" or other weird things may occur. This is the trade-off between locking continuously and accepting a potential weirdness every now and then.
As @Smutje mentioned in the comments, do consider reading https://en.wikipedia.org/wiki/Isolation_(database_systems) in full; it's mandatory knowledge once you contemplate questions like this.
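If you're coming at this from Go's database/sql, the isolation level is something you can request per transaction via sql.TxOptions, and it is exactly what decides whether P2 blocks, sees old data, or risks anomalies. A sketch, with an invented orders table and driver support permitting:

```go
package main

import (
	"context"
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // illustrative driver choice
)

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/example?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	ctx := context.Background()

	// P2's read transaction: at ReadCommitted (a common default) it sees
	// only committed data, so P1's in-flight changes are invisible to it.
	tx, err := db.BeginTx(ctx, &sql.TxOptions{
		Isolation: sql.LevelReadCommitted, // try sql.LevelSerializable to compare
		ReadOnly:  true,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer tx.Rollback()

	var n int
	if err := tx.QueryRowContext(ctx, "SELECT count(*) FROM orders").Scan(&n); err != nil {
		log.Fatal(err)
	}
	fmt.Println("rows visible to P2:", n)
}
```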
I am a noob in DBMS. I am developing a project which has multiple readers and writers.
Here are the steps of the project.
User logs in
Makes changes
Clicks submit
Admins review the changes and merge them into the main DB.
So I thought: let's open a transaction for each user when they log in, because a transaction takes a snapshot and commits the data only if all the queries execute without error.
If two users try to write to the same row, the transaction throws an error, which is exactly the behavior the project needs.
Now my question: if such an error occurs, I want only that query to fail; I still want the transaction to continue for the statements that have no error.
You are trying to use the concept of a database transaction in the wrong way. Database transactions should be very short (sub-second) and never involve user interaction. The idea is to group statements that belong together so that either all of them succeed or all fail.
What you want to do is application logic and should be handled by the application. That doesn't mean that you cannot use the database. For example, your table could have a column that persists the status (entered by client, approved, ...).
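A sketch of that status-column idea in Go: each statement is its own short, sub-second transaction, and the review step is ordinary application logic. The changes table and its columns are invented:

```go
package reviewflow

import (
	"context"
	"database/sql"
	"fmt"
)

// submitChange commits the user's edit immediately, flagged for review.
func submitChange(ctx context.Context, db *sql.DB, userID int64, payload string) error {
	_, err := db.ExecContext(ctx,
		`INSERT INTO changes (user_id, payload, status) VALUES ($1, $2, 'entered')`,
		userID, payload)
	return err
}

// approveChange is the admin's "merge". Conflicts (e.g., a change already
// handled by another admin) are detected here, in application logic, rather
// than by holding a database transaction open for the whole session.
func approveChange(ctx context.Context, db *sql.DB, changeID int64) error {
	res, err := db.ExecContext(ctx,
		`UPDATE changes SET status = 'approved' WHERE id = $1 AND status = 'entered'`,
		changeID)
	if err != nil {
		return err
	}
	if n, err := res.RowsAffected(); err != nil {
		return err
	} else if n == 0 {
		return fmt.Errorf("change %d was not pending review", changeID)
	}
	return nil
}
```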
If I have a transaction on two docs, A and B, and doc A can run into the one-write-per-second limit, does the transaction fail in that case?
I don't care if doc A ends up with an inaccurate value; I just want the write to doc B (which is a create-document action) not to fail. Is that the case?
I tried some manual tests and it looks like the transaction does not fail. Thanks.
The limit on document write throughput in Firestore is not hard-coded or enforced by any software. It is literally the physical limit of the hardware (or physics) due to the distributed nature of the database, and the consistency guarantees it offers.
A simple test is unlikely to trigger any problematic behavior. If you do more writes than can be committed, they will just queue up and be committed when there is bandwidth/space. So while you may see a delay, you typically won't see an error.
The only case where I can imagine seeing errors is if a queue somewhere overflows. There's no specific way to handle this, as it'll most likely surface as a memory/buffer overflow, or some sort of time-out.
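For completeness, here is roughly what a transaction spanning both docs looks like with the Firestore Go client; the project ID, collection names, and field names are placeholders. Firestore retries a contended transaction rather than committing it partially, so the update of A and the create of B succeed or fail together:

```go
package main

import (
	"context"
	"log"

	"cloud.google.com/go/firestore"
)

func main() {
	ctx := context.Background()
	client, err := firestore.NewClient(ctx, "my-project") // placeholder project ID
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	docA := client.Collection("counters").Doc("A") // the frequently written doc
	docB := client.Collection("events").Doc("B")   // the create we care about

	// The whole function either commits atomically or is retried on
	// contention; doc B is never created without doc A's update.
	err = client.RunTransaction(ctx, func(ctx context.Context, tx *firestore.Transaction) error {
		snap, err := tx.Get(docA) // reads must come before writes in a transaction
		if err != nil {
			return err
		}
		// A missing or non-integer field reads as zero in this sketch.
		count, _ := snap.Data()["count"].(int64)
		if err := tx.Update(docA, []firestore.Update{{Path: "count", Value: count + 1}}); err != nil {
			return err
		}
		return tx.Create(docB, map[string]interface{}{"created": firestore.ServerTimestamp})
	})
	if err != nil {
		log.Fatal(err)
	}
}
```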
I have databases on two different servers. I need to regularly retrieve new records from a table on server A, and process them in order to use them to update a table on server B (with a different schema). I was going to use a trigger for this, but if this fails, the inserts on server A are rolled back. The inserts on table A must not fail, so the update of server B needs to be as decoupled from this as possible. I am now thinking of using a scheduled sproc on server B to retrieve the results from server A and update server B. This would need to run every 30 seconds. Is there anything wrong with this approach, or is there a better or more 'correct' way of achieving this?
I think creating a scheduled job in SQL Server Agent is the way to go here. This can execute a simple stored procedure (if the logic is relatively simple) or an SSIS package (where it is more complex).
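If, for some reason, an application-side scheduler were preferred over SQL Server Agent, the same pull-and-apply idea can be sketched as a polling loop in Go. Everything here (the driver, table and column names, connection strings, and the timestamp-watermark scheme) is invented for illustration:

```go
package main

import (
	"context"
	"database/sql"
	"log"
	"time"

	_ "github.com/denisenkom/go-mssqldb" // illustrative SQL Server driver
)

// pull reads rows newer than the watermark from server A and applies them
// to server B's differently shaped table.
func pull(ctx context.Context, srcA, dstB *sql.DB, since time.Time) (time.Time, error) {
	rows, err := srcA.QueryContext(ctx,
		"SELECT id, payload, created_at FROM source_table WHERE created_at > @p1", since)
	if err != nil {
		return since, err
	}
	defer rows.Close()

	last := since
	for rows.Next() {
		var id int64
		var payload string
		var createdAt time.Time
		if err := rows.Scan(&id, &payload, &createdAt); err != nil {
			return last, err
		}
		// Transform for server B's schema, then apply there.
		if _, err := dstB.ExecContext(ctx,
			"UPDATE target_table SET payload = @p1 WHERE source_id = @p2", payload, id); err != nil {
			return last, err
		}
		if createdAt.After(last) {
			last = createdAt
		}
	}
	return last, rows.Err()
}

func main() {
	srcA, err := sql.Open("sqlserver", "sqlserver://user:pass@serverA?database=db")
	if err != nil {
		log.Fatal(err)
	}
	dstB, err := sql.Open("sqlserver", "sqlserver://user:pass@serverB?database=db")
	if err != nil {
		log.Fatal(err)
	}
	since := time.Now().Add(-time.Hour)

	// Failures here never touch server A's inserts: the two are decoupled.
	for range time.Tick(30 * time.Second) {
		if since, err = pull(context.Background(), srcA, dstB, since); err != nil {
			log.Println("sync failed, will retry:", err)
		}
	}
}
```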
Just a final note on triggers: if possible, I have always tried to avoid using them. They can have what appear to be "unintended" or "mysterious" side effects, they can be difficult to debug, and developers often forget to check for triggers when trying to resolve an issue. That's not to say they don't offer benefits too, but I think you need to be wary of them.