I think this is a typical question about how modern databases handle concurrency.
Say a process P1 runs a modifying transaction (insert or delete) against a table: it begins, runs its SQL, then commits. What happens if, before P1 commits, another process P2 starts a read transaction against the same table? Can P2 still read the table?
Will the table be locked, so P2 cannot read until P1 finishes? Or will P2 read the table, possibly seeing the uncommitted changes introduced by P1?
This behavior depends on database implementation details and timing. In general, until P1 commits, its changes are not yet valid, so it will usually not hold a lock that blocks readers of the table. P2 will most likely not encounter any lock and will read the old data.
I'm saying "most likely" because this also depends on the isolation level configured in the database. No serious production database survives for long when configured as "serializable", which implies perfect isolation between transactions. So, depending on the situation, a "phantom read" or other anomalies may occur. This is the trade-off between locking continuously and accepting potential weirdness every now and then.
As @Smutje mentioned in the comment, do consider reading https://en.wikipedia.org/wiki/Isolation_(database_systems) in full; it's mandatory knowledge once you contemplate questions like this.
Can we create a database transaction and commit or roll it back later? I mean, we would not commit/roll back on the same machine/host/server. Let's say we hand the transaction over and let someone else decide to commit or roll back based on a transaction ID. How would we do that in Go with the database/sql library?
No.
A transaction allows for doing a series of commands atomically e.g., without another command getting data that's half updated, and without another command changing the underlying data within the series of commands.
It is something you want to be over and done with quickly, because open transactions lock the underlying tables.
Imagine your transaction is to insert a row into Table A. You start the transaction, insert the row, then don't commit or roll back. Nobody else can use Table A until you do (except in particular circumstances). They will sit there waiting (blocked). You can also get deadlocks if concurrent transactions try to put data into tables in different orders, in which case a transaction is automatically rolled back without user input.
There's a great video by Brent Ozar explaining and showing deadlocks - worth watching on its own, but also demonstrates what happens if you don't commit transactions.
If you want a queuing or approval mechanism for changes, you'll need to build that yourself, e.g.,
Putting changes into a 'queue' to be done later, or
Doing the data changes but flagging them as 'draft' in a column in relevant table(s). The rest of your code then has to include whether they want to include draft data or not.
tl;dr version: Transactions in databases are a data-level feature to ensure your data is consistent. Using approval/etc is at the business logic level.
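The "draft column" option above can be sketched as follows, using SQLite via Python's sqlite3 as a stand-in; the table name, column names, and status values are hypothetical:

```python
import sqlite3

# Hypothetical schema: each change row carries a status flag instead of
# sitting inside an open transaction waiting for approval.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE changes (
    id INTEGER PRIMARY KEY,
    payload TEXT,
    status TEXT NOT NULL DEFAULT 'draft')""")

# A user submits a change: it is committed immediately, but only as a draft.
conn.execute("INSERT INTO changes (payload) VALUES ('add widget')")
conn.commit()

# Normal reads include only approved data, so the draft stays invisible.
live = conn.execute(
    "SELECT COUNT(*) FROM changes WHERE status = 'approved'").fetchone()[0]

# Later, an approver promotes the draft; no table stays locked meanwhile.
conn.execute("UPDATE changes SET status = 'approved' WHERE status = 'draft'")
conn.commit()
live_after = conn.execute(
    "SELECT COUNT(*) FROM changes WHERE status = 'approved'").fetchone()[0]
```

The key point is that every statement commits promptly; the "pending vs. approved" state lives in the data, not in a long-lived transaction.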
I am a noob in DBMS. I am developing a project which has multiple readers and writers.
Here are the steps of the project.
User Logins
Makes changes
Then clicks submit
Admins review the changes and merges with the main DB.
So I thought: let's open a transaction for each user when they log in to my project, because a transaction takes a snapshot and commits the data only if all the queries execute without any error.
If two users try to write to the same row, the transaction throws an error, which is exactly what the project requires.
Now my question is: if such an error occurs, I want only that query to fail; I still want the rest of the transaction to continue if it has no error.
You are trying to use the concept of a database transaction in the wrong way. Database transactions should be very short (sub-second) and never involve user interaction. The idea is to group statements that belong together so that either all of them succeed or all of them fail.
What you want to do is application logic and should be handled by the application. That doesn't mean that you cannot use the database. For example, your table could have a column that persists the status (entered by client, approved, ...).
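One common way to get the "only this write fails" behavior at the application level is optimistic locking with a version column. Here is a minimal sketch using SQLite via Python's sqlite3; the schema and the `save` helper are hypothetical, not part of any library:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE doc (id INTEGER PRIMARY KEY, body TEXT, version INTEGER NOT NULL)")
conn.execute("INSERT INTO doc VALUES (1, 'original', 1)")
conn.commit()

def save(conn, doc_id, new_body, seen_version):
    # The UPDATE succeeds only if the row still has the version we read;
    # a concurrent writer bumps the version and turns our update into a no-op.
    cur = conn.execute(
        "UPDATE doc SET body = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_body, doc_id, seen_version))
    conn.commit()
    return cur.rowcount == 1  # False signals a conflict on this row only

first = save(conn, 1, "user A's edit", 1)   # succeeds, version becomes 2
second = save(conn, 1, "user B's edit", 1)  # conflicts: version moved on
```

Each call commits on its own, so a conflict on one row never aborts the other users' work; the application decides how to handle the rejected write (retry, merge, or report).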
If I have a transaction on 2 docs, A and B, and doc A could hit the 1 write per second limit, does the transaction fail in that case?
I don't care if doc A ends up with an inaccurate value; I want doc B (a create-document action) not to fail. Is that how it works?
I tried some manual tests and it looks like the transaction does not fail. Thanks.
The limit on document write throughput in Firestore is not hard-coded or enforced by any software. It is literally the physical limit of the hardware (or physics) due to the distributed nature of the database, and the consistency guarantees it offers.
A simple test is unlikely to trigger any problematic behavior. If you do more writes than can be committed, they will just queue up and be committed when there is bandwidth/space. So while you may see a delay, you typically won't see an error.
The only case where I can imagine seeing errors is if a queue somewhere overflows. There's no specific way to handle this, as it'll most likely surface as a memory/buffer overflow, or some sort of time-out.
I am working on my database class project. I am reading the PostgreSQL write-ahead logging README, and it mentions several SQL commands:
BEGIN
COMMIT
ROLLBACK
SAVEPOINT
ROLLBACK
RELEASE
I didn't see those commands in the SQL standard, which confuses me. What are the differences between those commands and a standard command like SELECT? Could anyone tell me more about them? Can they be used the same way as standard SQL?
The ANSI SQL Standard [http://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt] is also your friend, and you can find these keywords defined there.
Generally all these keywords behave similarly across platforms, but beware the subtle differences in their function, performance or usage.
For example: SAVEPOINT has a similar meaning across different platforms (albeit with possibly differing implementations or context), so you need to refer to your platform docs for specifics.
In this case, per the Postgres 9.1 manual [http://www.postgresql.org/docs/9.1/] (the one I have bookmarked), the ROLLBACK and RELEASE keywords are used together with other modifiers to apply to a SAVEPOINT within a transaction.
OTOH: T-SQL (MS-SQL Server) requires SAVE|ROLLBACK TRANSACTION when operating on a SAVEPOINT [http://msdn.microsoft.com/en-us/library/ms188378.aspx].
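As a concrete illustration of how these keywords fit together, here is a short demo using SQLite through Python's sqlite3 module; SQLite's savepoint syntax follows the same pattern Postgres uses, though details differ per platform:

```python
import sqlite3

# isolation_level=None lets us issue BEGIN/COMMIT ourselves.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (x INTEGER)")

conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (1)")
conn.execute("SAVEPOINT sp1")             # mark a point inside the transaction
conn.execute("INSERT INTO t VALUES (2)")
conn.execute("ROLLBACK TO sp1")           # undoes only the work after sp1
conn.execute("RELEASE sp1")               # discards the savepoint marker
conn.execute("COMMIT")                    # the first insert survives

rows = [r[0] for r in conn.execute("SELECT x FROM t ORDER BY x")]
```

After the COMMIT, only the row inserted before the savepoint remains; the partial rollback removed the second insert without aborting the whole transaction.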
Hope that helps!
I have a production db that I'd like to copy to dev. Unfortunately it takes about an hour to do this operation via mysqldump | mysql and I am curious if there is a faster way to do this via direct sql commands within mysql since this is going into the same dbms and not moving to another dbms elsewhere.
Any thoughts / ideas on a streamlined process to perform this inside of the dbms so as to eliminate the long wait time?
NOTE: The primary goal here is to avoid hour-long copies, as we need some data from production in the dev db very quickly. This is not a question about locking or replication. I wanted to clarify that, based on some comments, since my initial post included more info / ancillary remarks than it should have.
You could set up a slave to replicate the production db, then take dumps from the slave. This would allow your production database to continue operating normally.
After the slave is done performing a backup, it will catch back up with the master.
http://dev.mysql.com/doc/refman/5.0/en/replication-solutions-backups-mysqldump.html