If I have a transaction on two docs, A and B, and doc A may run into the one-write-per-second limit, does the transaction fail in that case?
I don't care if doc A ends up with a slightly inaccurate value; I just want the write to doc B (which is a document create) not to fail. Is that the case?
I tried some manual tests and it looks like the transaction does not fail. Thanks.
The limit on document write throughput in Firestore is not hard-coded or enforced by any software. It is literally the physical limit of the hardware (or physics) due to the distributed nature of the database, and the consistency guarantees it offers.
A simple test is unlikely to trigger any problematic behavior. If you do more writes than can be committed, they will just queue up and be committed when there is bandwidth/space. So while you may see a delay, you typically won't see an error.
The only case where I can imagine seeing errors is if a queue somewhere overflows. There's no specific way to handle this, as it'll most likely surface as a memory/buffer overflow, or some sort of time-out.
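To make the atomicity point concrete, here is a minimal sketch using the Firestore Java Admin SDK (collection and field names are hypothetical). Because the two writes commit atomically, doc B's create cannot succeed or fail independently of doc A's update; contention on the hot document shows up as extra latency on the whole commit, not as a partial failure.

```java
import com.google.cloud.firestore.DocumentReference;
import com.google.cloud.firestore.Firestore;
import com.google.cloud.firestore.FirestoreOptions;
import java.util.Map;

public class TwoDocTransaction {
  public static void main(String[] args) throws Exception {
    Firestore db = FirestoreOptions.getDefaultInstance().getService();
    DocumentReference docA = db.collection("counters").document("A"); // hot document
    DocumentReference docB = db.collection("events").document("B");   // document to create

    // Both writes commit atomically: either both apply or neither does.
    db.runTransaction(tx -> {
      Long count = tx.get(docA).get().getLong("count");               // reads come first
      tx.update(docA, "count", (count == null ? 0 : count) + 1);      // may be slowed by contention
      tx.create(docB, Map.of("created", true));                       // fails only if B already exists
      return null;
    }).get(); // blocks until the commit succeeds; contention surfaces as delay, not an error
  }
}
```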
I think this is a typical question about how modern databases handle concurrency.
Say we have a process P1 running a transaction that modifies a table (insert or delete): the transaction would begin -- sql -- commit. Before P1 commits the transaction, what happens if another process P2 comes in with a read transaction on the same table? Can P2 still read the table?
Will the table be locked so that P2 cannot read until P1 finishes, or will P2 read the table, possibly seeing the changes introduced by P1?
This behavior depends on database implementation details and timing. In general, until P1 commits, its results are not valid, so it will not have taken an exclusive lock that blocks reads of the table. P2 will most likely not encounter any lock and will read the old data.
I say "most likely" because this also depends on the isolation levels configured in the database. No serious production database survives for long when configured as "serializable", which implies perfect isolation between transactions. So, depending on the situation, a "phantom read" or other oddities may occur. This is the trade-off between locking continuously and accepting a potential anomaly every now and then.
As @Smutje mentioned in the comments, do consider reading https://en.wikipedia.org/wiki/Isolation_(database_systems) in full; it's mandatory knowledge once you contemplate questions like this.
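As an illustration, here is a hedged JDBC sketch against a hypothetical PostgreSQL table accounts(id, balance). On MVCC databases at READ COMMITTED, P2 reads the last committed value without blocking; note that on SQL Server's default locking READ COMMITTED, P2 would instead wait for P1's lock (unless READ_COMMITTED_SNAPSHOT is enabled).

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ReadDuringWriteDemo {
  public static void main(String[] args) throws Exception {
    String url = "jdbc:postgresql://localhost/demo"; // hypothetical database
    try (Connection p1 = DriverManager.getConnection(url, "user", "pass");
         Connection p2 = DriverManager.getConnection(url, "user", "pass")) {

      // P1: open a write transaction and leave it uncommitted for now.
      p1.setAutoCommit(false);
      try (Statement s1 = p1.createStatement()) {
        s1.executeUpdate("UPDATE accounts SET balance = balance - 100 WHERE id = 1");
      }

      // P2: READ COMMITTED (a common default) sees the last committed value,
      // not P1's in-flight change -- no dirty read, and on MVCC databases no blocking.
      p2.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
      try (Statement s2 = p2.createStatement();
           ResultSet rs = s2.executeQuery("SELECT balance FROM accounts WHERE id = 1")) {
        rs.next();
        System.out.println("P2 sees: " + rs.getLong(1)); // the pre-update balance
      }

      p1.commit(); // only now does P1's change become visible to other transactions
    }
  }
}
```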
In a scheduled task that runs on all hosts at the same time, I am currently using xdmp:lock-acquire to lock the collection so that cts:uris will not pick up duplicate URIs. But because of this lock, the scheduler effectively runs in sequence. Is there an alternative that avoids the lock, so that all hosts can run in parallel?
It's a bit unclear what you are trying to do, but it sounds like you have documents in the database that you are processing using a scheduled task that runs on all hosts, and your existing query makes it possible for two tasks to attempt to process the same document.
The easiest approach would be to generate the list of forests on the host the task is running on using xdmp:host-forests, and pass that list into cts:uris as $forest-ids:
$forest-ids: A sequence of IDs of forests to which the search will be constrained. An empty sequence means to search all forests in the database. The default is ().
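Since the scheduled task runs server-side, here is a sketch in XQuery (the collection name is a placeholder). Each forest is attached to exactly one host, so constraining cts:uris to the local host's forests means no two hosts can pick up the same URI, and no lock is needed:

```xquery
(: process only documents stored in forests local to this host :)
let $local-forests := xdmp:host-forests(xdmp:host())
return
  cts:uris((), (), cts:collection-query("pending"), (), $local-forests)
```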
I am a noob in DBMS. I am developing a project which has multiple readers and writers.
Here are the steps of the project.
User Logins
Makes changes
Then clicks submit
Admins review the changes and merges with the main DB.
So I thought: let's use a transaction for each user when they log in to my project, because a transaction takes a snapshot and commits the data only if all the queries execute without any error.
If two users try to write to the same row, the transaction throws an error, which is exactly what the project requires.
My question: if such an error occurs, I want only that query to fail; I still want the transaction to continue for the queries that have no error.
You are trying to use the concept of a database transaction in the wrong way. Database transactions should be very short (sub-second) and never involve user interaction. The idea is to group statements that belong together so that either all of them succeed or all of them fail.
What you want to do is application logic and should be handled by the application. That doesn't mean that you cannot use the database. For example, your table could have a column that persists the status (entered by client, approved, ...).
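A minimal sketch of that pattern, assuming JDBC and a hypothetical changes(id, payload, status) table: each user action becomes its own short transaction, and the admin review is just a status change rather than a long-lived database transaction.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class ChangeWorkflow {
  private static final String URL = "jdbc:postgresql://localhost/demo"; // hypothetical

  // Called when the user clicks submit: one short transaction, committed immediately.
  static void submitChange(String payload) throws Exception {
    try (Connection conn = DriverManager.getConnection(URL, "user", "pass");
         PreparedStatement ps = conn.prepareStatement(
             "INSERT INTO changes (payload, status) VALUES (?, 'submitted')")) {
      ps.setString(1, payload);
      ps.executeUpdate(); // auto-commit: the transaction lasts milliseconds, not a session
    }
  }

  // Called when an admin approves a change: another short, independent transaction.
  static void approveChange(long changeId) throws Exception {
    try (Connection conn = DriverManager.getConnection(URL, "user", "pass");
         PreparedStatement ps = conn.prepareStatement(
             "UPDATE changes SET status = 'approved' WHERE id = ?")) {
      ps.setLong(1, changeId);
      ps.executeUpdate();
    }
  }
}
```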
I'd like to delay all the queries I receive in my test database for a specified amount of time. My intent in doing this is to test the "loading" feature in my program. I do not want to alter my queries though! WAITFOR doesn't work for me. If possible, the ideal would be to delay all the queries of a specific connection.
Summarizing: I'd like to delay all the queries of my database via some kind of configuration.
How to do that in SQL Server?
To the best of my knowledge, this is not an out-of-the-box feature.
Most people who want to test their data access code write specific test cases to do that. Again, there are lots of different scenarios; the closest to what you describe would be to capture all the requests going to your server, and then write a harness to replay those queries under test conditions.
Is it really needed? Is there a way to put the delay at the code level instead? I mean, do something like this before the database request:
Thread.Sleep(milliseconds);
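That snippet is C#; here is an equivalent hedged sketch in Java that wraps the data access call, so the queries themselves stay unmodified (the helper and property names are made up):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public final class SlowQueries {
  // Artificial latency for the test environment, configurable in one place.
  private static final long DELAY_MILLIS = Long.getLong("test.query.delay", 0);

  // Route every test query through this helper instead of calling JDBC directly.
  public static ResultSet run(Connection conn, String sql) throws Exception {
    if (DELAY_MILLIS > 0) {
      Thread.sleep(DELAY_MILLIS); // simulate a slow server before each query
    }
    Statement st = conn.createStatement();
    return st.executeQuery(sql);
  }
}
```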
I am going to develop some business logic. During the requirements phase, I need to provide an SLA.
The business always wants everything in less than a second, which is sometimes very frustrating; they are not bothered about the complexity involved.
My DB is SQL Server 2012, and it is a transactional DB.
Is there any formula that takes the number of tables, columns, etc. and provides an estimate?
No, you won't be able to get an execution time that way. Not only do the number of tables and joins factor in, but also how much data is returned, network speed, load on the server, and many other factors. What you can get is a query plan: SQL Server generates one for every query it executes, and the plan gives you a "cost" value that you can use as a VERY general guideline about the query's performance. Check out these two links for more information...
1) this is a good StackOverflow question/answer.
2) sys.dm_exec_query_plan
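As a quick illustration, here is a hedged sketch of pulling the estimated plan over JDBC using SET SHOWPLAN_XML (the connection string and query are placeholders). While SHOWPLAN_XML is ON, SQL Server returns plan XML, including estimated subtree costs, instead of executing the statements:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class EstimatedPlan {
  public static void main(String[] args) throws Exception {
    String url = "jdbc:sqlserver://localhost;databaseName=demo"; // hypothetical server
    try (Connection conn = DriverManager.getConnection(url, "user", "pass");
         Statement st = conn.createStatement()) {
      st.execute("SET SHOWPLAN_XML ON"); // following statements return plans, not rows
      try (ResultSet rs = st.executeQuery(
          "SELECT * FROM Orders o JOIN Customers c ON o.CustomerId = c.Id")) {
        rs.next();
        System.out.println(rs.getString(1)); // the estimated plan as XML
      }
      st.execute("SET SHOWPLAN_XML OFF");
    }
  }
}
```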