Concurrent calls to SQL - Get Unique Number

What I am trying to achieve:
In a Web App (Blazor Server), we generate some documents.
Each document should have a unique sequence number.
The Sequence Number must be between X and Y, where Y is always > X.
The range (X and Y) is set at the beginning of each year by a Superadmin.
The values are saved in a SQL Server or PostgreSQL database, via EF Core 6.
Normal flow:
First document is generated: gets the ID X.
Second document is generated from the app: gets the ID X + 1.
Last document is generated from the app: gets the ID Y.
Last document + 1: gets an error.
Now, my only problem is: how can I handle concurrent calls?
What happens if two users try to get the next ID in the same millisecond?
Can SQL guarantee that each of them will get its own, distinct next Sequence Number?
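A minimal sketch of one common way to do this, assuming a table sequence_config(year, current_value, upper_range) that the Superadmin fills each year (the table and column names are my assumption, not from the post): let the database hand out the number inside a single atomic UPDATE, so the row lock serialises concurrent callers and no two of them can ever read the same value. PostgreSQL syntax shown; SQL Server can do the equivalent with UPDATE ... SET @next = current_value = current_value + 1, or with a native SEQUENCE object.
-- Hypothetical table: sequence_config(year, current_value, upper_range),
-- with current_value initialised to X - 1 for the year.
-- The single atomic UPDATE increments and returns the value; the row lock it
-- takes means two concurrent callers can never be given the same number.
UPDATE sequence_config
SET    current_value = current_value + 1
WHERE  year = 2024
  AND  current_value < upper_range          -- refuse to go past Y
RETURNING current_value AS next_id;         -- no row returned = range exhausted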

Related

Auto-incrementing a Firebird field value when using UPDATE OR INSERT INTO

I have been a Delphi programmer for 25 years, but managed to avoid SQL until now. I was a dBase expert back in the day. I am using Firebird 3.0 SuperServer as a service on a Windows server 2012 box. I run a UDP listener service written in Delphi 2007 to receive status info from a software product we publish.
The FB database is fairly simple. I use the user's IP address as the primary key and record reports as they come in. I am currently getting about 150,000 reports a day and they are logged in a text file.
Rather than insert every report into a table, I would like to increment an integer value in a single record with a "running total" of reports received from each IP address. It would save a LOT of data.
The table has fields for IP address (Primary Key), LastSeen (timestamp), and Hits (integer). There are a few other fields but they aren't important.
I use UPDATE OR INSERT INTO when the report is received. If the IP address does not exist, a new row is inserted. If it does exist, then the record is updated.
I would like it to increment the "Hits" field by +1 every time I receive a report. In other words, if "Hits" already = 1, then I want to inc(Hits) on UPDATE to 2. And so on. Basically, the "Hits" field would be a running total of the number of times an IP address sends a report.
Adding 3 million rows a month just so I can get a COUNT for a specific IP address does not seem efficient at all!
Is there a way to do this?
The UPDATE OR INSERT statement is not suitable for this: you have to specify the values to update or insert, so you would end up with the same value for both the insert and the update. You could address this by creating a before-insert trigger that always assigns 1 to the field holding the count (ignoring the value provided by the statement for the insert), but it is probably better to use MERGE, as it gives you more control over the resulting action.
For example:
merge into user_stats
using (
  select '127.0.0.1' as ipaddress, timestamp '2021-05-30 17:38' as lastseen
  from rdb$database
) as src
on user_stats.ipaddress = src.ipaddress
when matched then
  update set
    user_stats.hits = user_stats.hits + 1,
    user_stats.lastseen = maxvalue(user_stats.lastseen, src.lastseen)
when not matched then
  insert (ipaddress, hits, lastseen) values (src.ipaddress, 1, src.lastseen)
However, if you get a lot of updates for the same IP address, and those updates are processed concurrently, this can be rather error-prone due to update conflicts. You can address that by inserting the individual hits and having a background process summarize those records into a single record (e.g. daily), as sketched below.
Also keep in mind that keeping only a single record removes the possibility of doing further analysis later (e.g. the distribution of hits, the number of hits on day X or at time HH:mm, etc.).
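Purely as an illustration of that last suggestion (not part of the original answer; the raw_hits table and its columns are assumptions), the daily summarising job could look roughly like this in Firebird:
-- raw_hits(ipaddress, seen_at) receives one row per report; a daily job
-- folds it into user_stats and then clears what it has processed.
merge into user_stats
using (
  select ipaddress, count(*) as cnt, max(seen_at) as lastseen
  from raw_hits
  group by ipaddress
) as src
on user_stats.ipaddress = src.ipaddress
when matched then
  update set
    user_stats.hits = user_stats.hits + src.cnt,
    user_stats.lastseen = maxvalue(user_stats.lastseen, src.lastseen)
when not matched then
  insert (ipaddress, hits, lastseen) values (src.ipaddress, src.cnt, src.lastseen);

delete from raw_hits;  -- in practice, delete only the rows that were just summarised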

Replication/save conflicts of documents

I have two servers, let's call them server A and server B. On A, I have order documents, and B is a replica of A (A replicates to B every minute). On B, I have a Java agent which is scheduled every 5 minutes and sends a document to a website, but also puts a flag in a field of the document. Many times now I get a save/replication conflict on server A for that particular document which has been accessed by server B, because others are also editing the same document on server A. How can this problem be solved?
If the documents on A are created using a form, enable "Merge conflicts" in the form properties. If the documents are created with an agent, add the reserved field doc.~$ConflictAction = "1".

Selenium IDE - verifyText in a multiple entries in a table (dynamically created ID's)

The system should make an entry in the database (let's say a car with a registration number).
The box in the table with the registration number has, for example, the ID232. I have no problem verifying the registration number of the first car that comes up in the results (the verification is done based on a search which brings results from the database). The problem comes when I want to verify the next car based on its registration number, because the second registration number box has the same ID.
An example:
Car ID   Registration Number
1        BS2344   <--- ID232
2        BS3224   <--- ID232
The Selenium IDE can verify the first entry, but the second verifyText will fail because it only ever verifies the first one (the second box has the same ID). The only difference is an automatically incrementing ID (Car ID) that I could use, but then I would have to input it manually (and the whole point of automation is gone). The whole test process is to create multiple cars and then verify them.
Use a loop and verify the same ID as many times as there are entries in the database. As the car code is generated randomly, a different car code will appear under the same ID on each iteration, so you will be able to check all of the entries.
I hope you get my point, and that this answer helps you!

randomly generating unique number between 1-999 for primary key in table

I have a problem I'm not sure how to solve elegantly.
Background Information
I have a table of widgets. Each widget is assigned an ID from a range of numbers, let's say between 1-999. The values 1 and 999 are saved in my database as "lower_range" and "upper_range" in a table called "config".
When a user requests to create a new widget using my web app, I need to be able to do the following:
generate a random number between 1 and 999 using Lua's math.random function, or maybe a random number generator in SQLite (so far, in my tests, Lua's math.random always returns the same value... but that's a different issue)
do a select statement to see if there is already a widget with this number assigned...
if not, create the new widget.
otherwise repeat the process until you get a number that is not currently in use.
Problem
The problem I see with the above logic is two-fold:
the algorithm can potentially take a long time because I have to keep searching until I find a unique value.
How do I prevent simultaneous requests for new widget numbers generating the same value?
Any suggestions would be appreciated.
Thanks
Generate your random numbers ahead of time and store them in a table; make sure the numbers are unique. Then when you need to get the next number, just check how many have already been assigned and get the next number from your table. So, instead of
Generate a number between 1-999
Check if it's already assigned
Generate a new number, and so on.
do this:
Generate an array of 999 elements that holds the values 1-999 in some random order
Your GetNextId function becomes return ids[currentMaxId + 1]
To manage simultaneous requests, you need to have some resource that generates a proper sequence. The easiest is probably to use a key in your widget table as the index in the ids array. So, add a record to the widgets table first, get its key and then generate widget ID using ids[key].
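Not part of the original answer, but the same idea can be expressed directly in SQL. Here is a rough SQLite sketch with made-up tables shuffled_ids(idx, value), filled once with the numbers 1-999 in random order, and widgets(widget_id, name):
-- Insert the widget first so it gets its own auto-assigned rowid...
INSERT INTO widgets (name) VALUES ('new widget');

-- ...then use that rowid as the index into the pre-shuffled list.
UPDATE widgets
SET    widget_id = (SELECT value FROM shuffled_ids WHERE idx = widgets.rowid)
WHERE  rowid = last_insert_rowid();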
Create a table to store the keys and the 'used' property.
CREATE TABLE KEYS ("id" INTEGER, "used" INTEGER);
Then use the following to find a new key
select id
from KEYS
where used = 0
order by RANDOM()
limit 1
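The answer doesn't show how the table gets filled or how a key is flagged once it has been handed out; purely as an illustration (SQLite syntax, and :picked_id is a made-up parameter name):
-- Fill KEYS once with the ids 1..999, all unused (needs SQLite 3.8.3+ for CTEs):
WITH RECURSIVE seq(n) AS (
  SELECT 1 UNION ALL SELECT n + 1 FROM seq WHERE n < 999
)
INSERT INTO KEYS (id, used) SELECT n, 0 FROM seq;

-- After handing an id out, flag it so it cannot be picked again:
UPDATE KEYS SET used = 1 WHERE id = :picked_id;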
Don't generate a random number, just pick the number off a list that's in random order.
For example, make a list of numbers 1 - 999. Shuffle that list using Fisher-Yates or equivalent (see also Randomize a List in C# even if you're not using C#).
Now you can just keep track of the most recently used index into your list. (Shuffling the list should occur exactly once, then you store and reuse the result).
Rough pseudo-code:
If config-file does not contain list of indices
    create a list with numbers 1 - 999
    Use Fisher-Yates to shuffle that list
    // list now looks like 12, 97, 251, 3, ...
    Write the list to the config file
    Set 'last index used' to 0 and write to config file
end if
To use this,
NextPK = myList[last-index-used]
last-index-used = last-index-used + 1
write last-index-used to config file
To get and flag an ID as used at the same time (expanding on Declan_K's answer):
replace into random_sequence values ((select id from random_sequence where used=0 order by random()), 1);
select id from random_sequence where rowid = last_insert_rowid();
(example output: 6)
When you run out of "unused" sequence-table entries, the select will return "blank".
I use replace into because update doesn't have a last_insert_rowid() equivalent that I can see.
You can get SQL to create a primary key that will increase by one every time you add a row to the database.
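For instance (my example, not the answerer's; SQLite syntax), although note that this gives sequential ids, not random ones within 1-999:
CREATE TABLE widgets (
  id   INTEGER PRIMARY KEY AUTOINCREMENT,  -- assigned automatically on INSERT
  name TEXT
);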

SQL - maintain sort order for paginating data that changes in real time

I'm implementing a PHP page that displays data in pages. My problem is that these data change in real time, so when the user requests the next page, I submit the last id and a query retrieves 10 more rows after that id, ordered by a column whose value changes in real time. For example, I have 20 rows:
Id   col_real_time
1    5
2    3
3    11
...
I get the data sorted by col_real_time in ascending order, so the result is
id 2, id 1, id 3
Now, in real time, id 2's col_real_time changes to 29 before the user requests the next page. When the user then asks for the next results, id 2 (value 29) sorts after the last id submitted, so it shows up again even though the user has already seen it.
What can I do?
"Now in realtime id 2 change"
You basically have to take a snapshot of the data if you don't want the data to appear to change to the user. This isn't something that you can do very efficiently in SQL, so I'd recommend downloading the entire result set into a PHP session variable that can be persisted across pages. That way, you can just get rows on demand. There are Javascript widgets that will effectively do the same thing, but will send the entire result set to the client which is a bad idea if you have a lot of data.
This is not as easy to do as pure SQL pagination, as you will have to take responsibility for cleaning the stored var out when it's no longer needed. Otherwise, you'll rather quickly run out of memory.
If you have just a few pages, you could:
Save it to the session and page over it, instead of going back to the database server.
Save it to a JSON object list and use jQuery to read it and page over it.
Save it to a temp table indicating generation timestamp, user_id and session_id, and page over it (see the sketch below).
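To illustrate the temp-table option (my sketch only; source_table, page_snapshot and the literal user/session values are invented, MySQL-style syntax):
-- Take the snapshot once, when the user opens page 1:
CREATE TABLE page_snapshot AS
SELECT s.id,
       s.col_real_time,
       NOW()      AS generated_at,
       42         AS user_id,      -- the requesting user
       'abc123'   AS session_id    -- the PHP session id
FROM   source_table s;

-- Every "next page" request then reads the frozen copy, not the live table:
SELECT id, col_real_time
FROM   page_snapshot
WHERE  user_id = 42 AND session_id = 'abc123'
ORDER  BY col_real_time
LIMIT  10 OFFSET 10;   -- second page of 10 rows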