I'm developing a small app in Pro*C, and I have a question about one of its main features.
I have an auto-increment trigger for the identifier of each table, so before a row is inserted into the table, the trigger sets the row's id from a sequence.
The problem is that I want to retrieve the value of the sequence after an insert (to get the id of the inserted row), but what happens when two transactions try to insert a row at the same time? If I use the read-committed isolation level and commit the transaction after inserting the row, can retrieving the sequence value cause any problems? What should I do? Thanks!
It's safe for two sessions to independently insert rows and refer to currval, as it's local to the session.
The documentation doesn't quite state that clearly:
... Any reference to CURRVAL always returns the current value of the sequence, which is the value returned by the last reference to NEXTVAL.
Before you use CURRVAL for a sequence in your session, you must first initialize the sequence with NEXTVAL.
Taken together these show it's safe, but the first part doesn't really make it clear that it is the last reference to NEXTVAL in the current session. It does, however, say:
A sequence can be accessed by many users concurrently with no waiting or locking.
However, you don't need a separate query to get the ID; you can use the RETURNING INTO clause:
insert into your_table (col1, ...) values (:val1, ...)
returning id into :id;
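The session-local behaviour of CURRVAL can be illustrated with a rough analogy in Python's sqlite3. This is not an Oracle sequence: SQLite's lastrowid stands in for "the last id generated by this session", but the isolation property is the same, so treat this only as a sketch of the concept:

```python
import os
import sqlite3
import tempfile

# Two separate connections stand in for two database sessions.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
a = sqlite3.connect(path)
b = sqlite3.connect(path)

a.execute("CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, val TEXT)")
a.commit()

# Session a inserts and commits, then session b does the same.
ca = a.execute("INSERT INTO t (val) VALUES ('from session a')")
a.commit()
cb = b.execute("INSERT INTO t (val) VALUES ('from session b')")
b.commit()

# Each cursor reports the id generated by *its own* insert: a still sees 1
# even though the most recently generated id in the table is now 2.
print(ca.lastrowid, cb.lastrowid)
```

Just as with CURRVAL, session a's view of "the last generated id" is not disturbed by session b's concurrent insert.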
Related
Is it possible to increase the value of a number in a column with a trigger every time it gets selected? We have special tables where we store the next id, and when we update it in the app we tend to get conflicts before the update happens, even though it all takes less than a second. So I was wondering whether the database can be set to increase the value after every SELECT on that column. Don't ask me why we don't use auto-increment for ids, because I don't know.
Informix provides the SERIAL and BIGSERIAL types (and also SERIAL8, but don't use that), which provide auto-increment support. It also provides SEQUENCE objects for more sophisticated auto-increment needs. You should aim to use one of those.
Trying to use a SELECT trigger to update the table being selected from is, at best, fraught with problems about transactions and the like (problems which both the types and sequences carefully avoid).
If your design team needs help making effective use of these, ask a new question outlining what you want to achieve.
Normally, the correct way to proceed is to make the ID column in each table that defines 'something' (the Orders table, the Customer table, …) into a SERIAL column, and either not insert a value into the ID column or insert 0 into it. The generated value can be retrieved and used when creating auxiliary information — order items, etc.
Note that you could think about using:
CREATE TABLE xyz_sequence
(
xyz SERIAL NOT NULL PRIMARY KEY
);
and using:
INSERT INTO xyz_sequence VALUES(0);
and then retrieving the inserted value — in Informix ESQL/C, you'd use sqlca.sqlerrd[1], in other languages, other techniques. You can also delete the newly inserted record, or even all the records in the table. You can afford to ignore errors from the DELETE statement; sooner or later, the rows will be deleted. The next value inserted will continue where the prior ones left off.
In a stored procedure, you'd use DBINFO('sqlca.sqlerrd1') to get the inserted value. You'd use DBINFO('bigserial') to get the value if you use a BIGSERIAL type.
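The xyz_sequence pattern above can be sketched in Python's sqlite3, where AUTOINCREMENT plays the role of SERIAL and lastrowid plays the role of sqlca.sqlerrd[1]. This is only an analogy for the Informix behaviour described, not Informix syntax:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# AUTOINCREMENT, like SERIAL, never reuses a value,
# even after the rows that carried it are deleted.
conn.execute("CREATE TABLE xyz_sequence (xyz INTEGER PRIMARY KEY AUTOINCREMENT)")

id1 = conn.execute("INSERT INTO xyz_sequence DEFAULT VALUES").lastrowid

# Housekeeping: throw away the rows; the counter is unaffected.
conn.execute("DELETE FROM xyz_sequence")

id2 = conn.execute("INSERT INTO xyz_sequence DEFAULT VALUES").lastrowid
print(id1, id2)  # the second value continues where the first left off
```

As in the Informix version, the DELETE is pure cleanup: the next generated value carries on from the highest one ever issued.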
I found a possible answer in this question: update with return value. Instead of doing it with a select, it seems better to return the value directly from the update; since an update takes locks, it should be safer even in a multithreaded application. But these are just my assumptions. Hopefully it will help someone.
I have a bigint column which is not unique. I want to be able to set the value of this column on inserts but when no value is provided, I would like to auto generate the next sequence in the column of numbers.
Is this possible to do in a synchronised way? The new value needs to be unique with no possibility of the same number being generated when two records are inserted simultaneously.
Define a Sequence object in your database and when no value has been explicitly provided to the insert statement, retrieve the next value from the Sequence to insert instead. The logic for this can be implemented in a trigger.
https://msdn.microsoft.com/en-us/library/ff878058.aspx
As others have suggested, an Identity column would be better, though.
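The sequence-plus-trigger idea can be sketched in Python's sqlite3. Here a one-row counter table stands in for the SEQUENCE object, and an AFTER INSERT trigger fills the column only when no value was supplied; the table and trigger names are made up for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (seq_no BIGINT, payload TEXT);
CREATE TABLE counter (val BIGINT);        -- stands in for the Sequence object
INSERT INTO counter VALUES (0);

-- Fill seq_no from the counter only when the caller did not supply one.
CREATE TRIGGER fill_seq AFTER INSERT ON items
WHEN NEW.seq_no IS NULL
BEGIN
    UPDATE counter SET val = val + 1;
    UPDATE items SET seq_no = (SELECT val FROM counter)
        WHERE rowid = NEW.rowid;
END;
""")

conn.execute("INSERT INTO items (seq_no, payload) VALUES (42, 'explicit')")
conn.execute("INSERT INTO items (payload) VALUES ('auto')")
conn.execute("INSERT INTO items (payload) VALUES ('auto')")

seqs = [r[0] for r in conn.execute("SELECT seq_no FROM items ORDER BY rowid")]
print(seqs)
```

An explicit value passes through untouched; missing values are generated in order. In SQL Server the counter table would be a real SEQUENCE, which also guarantees uniqueness under concurrent inserts.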
I have two tables: table1 (column id: seed 1000, increment 1) and table2 (column id: seed 2000, increment 1). First I insert some records into table1 and table2. Second, I insert the records from table2 into table1 (using IDENTITY_INSERT ON) and get something like:
1000 first_record_table1
1001 second_record_table1
1002 third_record_table1
2000 first_record_table2
2001 second_record_table2
Third: if I add a new record to table1, I expect to get 1003 as the new id, but I get 2002.
(1003 is not breaking the unique id rule and is the correct increment value for table1, so I don't see any reason to jump to the last record and increment 1, as the database seems to do).
Question: How can I get 1003 as the new id?
The documentation on identity is quite clear:
The identity property on a column does not guarantee the following:
Uniqueness of the value – Uniqueness must be enforced by using a PRIMARY KEY or UNIQUE constraint or UNIQUE index.
Consecutive values within a transaction – A transaction inserting multiple rows is not guaranteed to get consecutive values for the rows because other concurrent inserts might occur on the table. If values must be consecutive then the transaction should use an exclusive lock on the table or use the SERIALIZABLE isolation level.
Consecutive values after server restart or other failures – SQL Server might cache identity values for performance reasons and some of the assigned values can be lost during a database failure or server restart. This can result in gaps in the identity value upon insert. If gaps are not acceptable then the application should use a sequence generator with the NOCACHE option or use their own mechanism to generate key values.
Reuse of values – For a given identity property with specific seed/increment, the identity values are not reused by the engine. If a particular insert statement fails or if the insert statement is rolled back then the consumed identity values are lost and will not be generated again. This can result in gaps when the subsequent identity values are generated.
In most cases, an identity primary key column does just what it is intended to do. It creates a new value bigger than any previous value when a new row is inserted. There may be gaps, but that is not a problem for a primary key.
If you want a column that fills in gaps, then you will have to write a trigger to assign the values.
Alternatively, you could just use row_number() in queries to get sequential values.
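The row_number() approach can be sketched in Python's sqlite3 (SQLite supports the same window function); the stored ids keep their gaps while the query hands out consecutive numbers:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1000, "a"), (1001, "b"), (2000, "c")])

# Gap-free numbering is computed at query time; nothing is stored.
rows = conn.execute(
    "SELECT row_number() OVER (ORDER BY id) AS rn, name FROM t"
).fetchall()
print(rows)
```

This avoids fighting the identity mechanism: the physical ids stay as they are, and the consecutive numbering exists only in the result set.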
Increment 1 just means the next record will have an ID one larger than the largest one, not that all numbers will be used. If you want 1003 as the new ID, you'll have to do some programming yourself, and handle the case that the new ID is already taken once you reach 2000.
But you should never ever rely on any generated IDs being without gaps. Imagine you have 2 sessions. First session inserts something and does not commit nor rollback yet. Second session inserts something and commits. First session rolls back. How do you want to handle this case? You will need to assign some ID to the first session, and a different ID to the second one. Now the first session rolls back, so its ID is unused. Do you want to subtract one from the second session's ID? Very bad idea. Block the 2nd session until the 1st one has either rolled back or committed? Very bad idea as well.
So please forget about consecutive IDs, they will never work. Unique IDs are what you normally need, and SQL Server guarantees them.
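The "jump to 2002" behaviour the question describes is easy to reproduce in miniature with Python's sqlite3, whose AUTOINCREMENT counter likewise continues from the largest value ever inserted rather than filling the gap (the table and values below mirror the question, not any particular SQL Server setup):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (id INTEGER PRIMARY KEY AUTOINCREMENT, val TEXT)")

# The original rows, plus the ones copied in with explicit high ids.
conn.executemany("INSERT INTO table1 (id, val) VALUES (?, ?)",
                 [(1000, "first"), (1001, "second"), (1002, "third"),
                  (2000, "copied"), (2001, "copied")])

# The next generated id continues from the largest value ever inserted,
# not from the gap left behind at 1003.
new_id = conn.execute("INSERT INTO table1 (val) VALUES ('new')").lastrowid
print(new_id)
```

The engine tracks one high-water mark per table, so once the explicit 2001 has been inserted, 1003 is never considered.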
We have a table and a set of procedures that are used for generating pk ids. The table holds the last id, and the procedure gets the id, increments it, updates the table, and then returns the newly incremented id.
This procedure could potentially run within a transaction. The problem is that if we have a rollback, the table could be rolled back to an id older than ids that came into use during the transaction (say, generated by a different user or thread). Then when the id is incremented again, it will cause duplicates.
Is there any way to exclude the id generating table from a parent transaction to prevent this from happening?
To add detail our current problem...
First, we have a system we are preparing to migrate a lot of data into. The system consists of an MS-SQL (2008) database and a TEXTML database. The SQL database houses data less than 3 days old, while TEXTML acts as an archive for anything older. The TEXTML db also relies on the SQL db to provide ids for particular fields. These fields are currently Identity PKs, and are generated on insertion before publishing to the TEXTML db. We do not want to wash all our migrated data through SQL, since the records would flood the current system, both in terms of traffic and data. But at the same time we have no way of generating these ids, since they are auto-incremented values that SQL Server controls.
Secondly, we have a system requirement which needs us to be able to pull an old asset out of the TEXTML database and insert it back into the SQL database with the original ids. This is done for correction and editing purposes, and if we alter the ids it will break relations downstream on clients' systems, which we have no control over. Of course all this is an issue because the id columns are Identity columns.
procedure gets the id, increments it, updates the table, and then returns the newly incremented id
This will cause deadlocks. The procedure must increment and return in one single, atomic step, e.g. by using the OUTPUT clause in SQL Server:
update ids
set id = id + 1
output inserted.id
where name = @name;
You don't have to worry about concurrency. The fact that you generate ids this way implies that only one transaction at a time can increment an id, because the update locks the row exclusively. You cannot get duplicates. You do get complete serialization of all operations (i.e. poor performance and low throughput), but that is a different issue. And this is why you should use the built-in mechanisms for generating sequences and identities. These are specific to each platform: AUTO_INCREMENT in MySQL, SEQUENCE in Oracle, IDENTITY and SEQUENCE in SQL Server (SEQUENCE only in Denali), etc.
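The increment-and-return-in-one-step idea can be sketched in Python's sqlite3. SQLite has no OUTPUT clause (and RETURNING only in newer versions), so this sketch instead wraps the increment and the read in a single transaction; because the UPDATE takes the write lock first, no other writer can interleave between the two statements:

```python
import sqlite3

def next_id(conn, name):
    """Increment and read a named counter inside one transaction."""
    with conn:  # BEGIN ... COMMIT; the UPDATE takes the write lock first
        conn.execute("UPDATE ids SET id = id + 1 WHERE name = ?", (name,))
        return conn.execute(
            "SELECT id FROM ids WHERE name = ?", (name,)
        ).fetchone()[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ids (name TEXT PRIMARY KEY, id INTEGER)")
conn.execute("INSERT INTO ids VALUES ('orders', 0)")
conn.commit()

print(next_id(conn, "orders"), next_id(conn, "orders"))
```

The same caveat from the answer applies: every caller queues up behind that one row, so this serializes id generation completely; the built-in identity/sequence mechanisms avoid that bottleneck.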
Updated
As I read your edit, the only reason why you want control of the generated identities is to be able to insert back archived records. This is already possible; simply use IDENTITY_INSERT:
Allows explicit values to be inserted
into the identity column of a table
Turn it on when you insert back the old record, then turn it back off:
SET IDENTITY_INSERT recordstable ON;
INSERT INTO recordstable (id, ...) values (@oldid, ...);
SET IDENTITY_INSERT recordstable OFF;
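The round-trip can be sketched in Python's sqlite3. SQLite needs no ON/OFF switch, since it always accepts an explicit value in an autoincrement column, but the effect mirrors the IDENTITY_INSERT pattern: the archived row keeps its original id, and normal inserts continue past it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE recordstable (id INTEGER PRIMARY KEY AUTOINCREMENT, doc TEXT)")

# Re-insert an archived record under its original id.
conn.execute("INSERT INTO recordstable (id, doc) VALUES (?, ?)",
             (1000, "restored from archive"))

# Normal inserts keep working and continue past the explicit value.
new_id = conn.execute("INSERT INTO recordstable (doc) VALUES ('fresh')").lastrowid
print(new_id)
```

Downstream relations keep pointing at id 1000, exactly as the requirement demands, and the generator is not disturbed.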
As for why manually generated ids serialize all operations: any transaction that generates an id will exclusively lock the row in the ids table. No other transaction can read or write that row until the first transaction commits or rolls back. Therefore there can be only one transaction generating an id on a table at any moment, ie. serialization.
I am using DB2 v9 on LUW.
I have a column defined like this:
"ID" BIGINT NOT NULL GENERATED BY DEFAULT
AS IDENTITY (START WITH 1, INCREMENT BY 1, CACHE 20,
NO MINVALUE, NO MAXVALUE, NO CYCLE, NO ORDER),
I would like to know the best way to determine what the next value will be for the ID column next time a record is inserted into the table.
I will use this information to write a script to do a "sanity" check on the table that IDENTITY is still intact and that its next value is one greater than the highest value in the ID column.
I do not want to just reset the value blindly. If the table does not pass the sanity check I want to be notified so I can determine what is causing the IDENTITY to be "wacked".
You cannot determine the next identity. Even if you could, you run the risk of the data being out of sync by the time you try to create a new record. The only thing to do is to create a new record and get the new identity, do your check, and then update the record with the rest of the data.
You could use SELECT IDENT_CURRENT('yourtablename') to get the last one generated, with the same caveat as above. That works in T-SQL; I'm not sure about the DB2 flavor.
I don't think this will work as you expect. Consider the case where a row is inserted, then, before another row is inserted, that row is deleted. At that point, the autogenerated id will be (at least) 2 greater than the highest value in the DB, AND it will be correct. If you can guarantee that no deletes take place, it might work, but I'm not sure what use it would be.
Essentially, you're checking if the very basic operations of the DB software are working and, if they aren't, what are you going to do? Change vendors?
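The delete scenario is easy to reproduce. Here is a sketch in Python's sqlite3, where the identity counter happens to be visible in the sqlite_sequence table; DB2 exposes no direct equivalent, so this is only an analogy for why the "next id equals max(id) + 1" sanity check is unreliable:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, v TEXT)")
conn.executemany("INSERT INTO t (v) VALUES (?)", [("a",), ("b",), ("c",)])

# Delete the most recent row: the counter does not move backwards.
conn.execute("DELETE FROM t WHERE id = 3")

last_used = conn.execute(
    "SELECT seq FROM sqlite_sequence WHERE name = 't'").fetchone()[0]
max_id = conn.execute("SELECT max(id) FROM t").fetchone()[0]

# The naive check "last used id == max(id)" now fails even though the
# counter is perfectly healthy.
print(last_used, max_id)
```

A single delete of the newest row makes a healthy counter look "wacked" to the proposed check, which is exactly the objection raised above.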
If the case is that you simply want to reseed the identity column, then do a select max(id) and reseed the column within the same transaction. You can be sure that no new records are inserted while the column is being reseeded by enforcing serializable isolation level transaction semantics.
If the ID column is set to GENERATED ALWAYS, you would not have a problem with an improper load/import. Also, the IDENTITY_VAL_LOCAL function can be used to get the identity value.
More about this function is in the DB2 documentation for IDENTITY_VAL_LOCAL.