AS400 DB2 Duplicate Key Error during Insert in Table with PK Identity Column

I have a table with an auto-increment column that looks like this:
ALTER TABLE SOMESCHEMA.SOMETABLE
ALTER COLUMN ID
SET DATA TYPE INTEGER GENERATED BY DEFAULT
SET INCREMENT BY 1
SET NO ORDER
SET NO CYCLE
SET MINVALUE 1
SET MAXVALUE 2147483647
SET NO CACHE;
As long as I let the DBMS generate the IDs, everything works fine and I can get the generated ID via:
SELECT IDENTITY_VAL_LOCAL() FROM sysibm.sysdummy1
But sometimes I need to insert a row with an ID of my choice, and that's where I get into trouble.
Let's say we have a single row in the table with ID 1. Now I insert a new row with a manually assigned ID of 2. The next time I try to insert a new row without a preset ID, I get the error SQL0803 "DUPLICATE KEY".
I assume the internal "next ID" counter for that auto-increment column doesn't update itself when the ID of a row is set manually.
So I tried resetting this counter with:
ALTER TABLE SOMESCHEMA.SOMETABLE ALTER COLUMN ID RESTART WITH 3
But this runs into a permanent table lock, which I don't know how to release.
How can I get this "mixed-mode" ID column working? Is it possible to make it work like MySQL, where the DBMS automatically updates the next ID after an insert with a manually assigned ID? If not, how can I release that {insert swear-word here} lock that pops up when I try to reset the next ID?

SQL0913 isn't creating a lock; it is reporting that a lock already exists. ALTER TABLE needs an exclusive lock on the table in order to reset the ID number. A table can be locked by another process that has it open, or by this process if there are uncommitted rows.
There is another reason the table can be in use: soft close (or pseudo-close). For performance reasons, DB2 for i keeps cursors in memory so that they can be reused as efficiently as possible. So even if you say CLOSE CURSOR, DB2 keeps it in memory. These soft-closed cursors can be closed with the command
ALCOBJ OBJ((SOMESCHEMA/SOMETABLE *FILE *EXCL)) WAIT(1) CONFLICT(*RQSRLS)
The CONFLICT(*RQSRLS) parameter tells DB2 to close all soft-closed cursors.
So the root of the issue is that DB2 wants exclusive access to the table. That is really a design question, because typically one doesn't change a table's structure during the working day. It sounds as though this table is sometimes a parent and sometimes a child when it comes to ID numbers. If that is the case, may I suggest that you ALTER the table again?
I think the implementation might be better if you used a trigger rather than an auto-increment column. Fire the trigger on INSERT: if an ID is supplied, do nothing; if no ID is supplied, SELECT MAX(ID) + 1 and use that as the ID you actually commit to the database.
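A rough sketch of that idea on DB2 for i, assuming ID is changed to a plain INTEGER column (identity attribute removed) so the trigger is the only thing assigning it; the exact restrictions on BEFORE triggers should be verified on your release:
CREATE TRIGGER SOMESCHEMA.SOMETABLE_SET_ID
BEFORE INSERT ON SOMESCHEMA.SOMETABLE
REFERENCING NEW AS N
FOR EACH ROW
WHEN (N.ID IS NULL)
    -- assign the next free ID only when the caller did not supply one
    SET N.ID = (SELECT COALESCE(MAX(ID), 0) + 1 FROM SOMESCHEMA.SOMETABLE);
Note that concurrent inserts can race on MAX(ID) + 1, so this fits best where inserts are serialized or otherwise coordinated.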

ALTER TABLE table_name ALTER COLUMN column_name RESTART WITH 99999;
Fixed my issue. 99999 is just an example; it is the next ID that will be used.

Related

Executing Insert command at the same time writes the data twice to database despite a check

I need to check whether similar data exists in the database and skip that insert. This question might look like a duplicate but I did not find any solutions.
I am using PostgreSQL DB and I have an SQL query
INSERT INTO table_name (name, dob, mobile)
VALUES ('sam', '23-05-2000', '8070605040');
If I run this command twice then it should only be inserted once after checking for the combined uniqueness of dob and mobile i.e. if I enter data like ('tom', '23-05-2000', '8070605040'), it should not be entered. My existing code works when the command runs one after the other.
But if I run the command at the same time on 2 devices by the press of a button then the record is entered twice.
Result after running the code one after the other:

name   dob          mobile
sam    23-05-2000   8070605040

Result if the command is executed at the same time:

name   dob          mobile
sam    23-05-2000   8070605040
sam    23-05-2000   8070605040
If there is even a second's delay, the existing code works fine. But when the commands run at literally the same time, the check fails because the first row has not been written to the database yet.
I would also prefer not to add a unique constraint on the table itself, as I don't fully understand the data yet, and would like to do this with an SQL query.
How can I check for this and prevent it from happening?
Thanks.
If I run this command twice then it should only be inserted once after checking for the combined uniqueness of dob and mobile
This is only true if dob/mobile is declared unique or has a unique index. Presumably, you need:
alter table table_name add constraint unq_table_name_dob_mobile unique (dob, mobile);
With a unique index/constraint, the database ensures the data integrity. The database will not allow duplicates into the table, no matter how hard the application tries.
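Once that constraint exists, the INSERT itself can be told to skip duplicates instead of raising an error (PostgreSQL 9.5 and later); a minimal sketch using the question's columns:
INSERT INTO table_name (name, dob, mobile)
VALUES ('sam', '23-05-2000', '8070605040')
ON CONFLICT (dob, mobile) DO NOTHING;  -- relies on the unique constraint above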
If you want to manually check the uniqueness of the record, you can use a transaction and lock the whole table:
BEGIN;
LOCK TABLE tbl IN EXCLUSIVE MODE; -- blocks concurrent writers; reads are still allowed
-- insert only if no matching row exists yet
INSERT INTO tbl (a, b)
SELECT 1, 2
WHERE NOT EXISTS (SELECT 1 FROM tbl WHERE a = 1 AND b = 2);
COMMIT;
This has poor performance and should only be used for this one specific need. The best solution is always to use a UNIQUE index or constraint.

SAP HANA hdbsequence trigger reset_by manually

In the project I'm working on, the IDs for certain insert statements are managed by hdbsequences. Now I want to create a sequence for another table that already has data in it, and I want it to start with the maximum ID value present in that table.
I know I could just manually set the "start_with" property, but that is not an option, because we need to transport the sequence to another system later, where the data in the corresponding table is not the same as on the current system (and therefore the maximum ID is different).
I also know of the "reset_by" property, in which I can select the max value of the table; the problem is that I don't know how to trigger that explicitly.
What I have found out is that the "reset_by" query is evaluated whenever the database is restarted, but unfortunately that is also not an option, because we can't restart the database without disrupting the other systems.
Thanks in advance for your time and help.
You can do an ALTER SEQUENCE and set the value to be used by the next sequence usage with the option "restart with".
For instance (schema name and sequence name have to be replaced):
alter sequence "<schema name>"."<sequence name>" restart with 100;
The integer value after "restart with" has to be the value that should be used next. So if your last ID is 100, set it to 101; 101 is then the value returned by the next NEXTVAL call on the sequence.
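If the restart value has to come from the table's current data (as described in the question), one option is to compute it on the target system and issue the ALTER SEQUENCE dynamically. A rough sketch as an anonymous SQLScript block; the schema, table, and sequence names are placeholders, and the block syntax should be checked against your HANA version:
DO BEGIN
    DECLARE next_id BIGINT;
    -- next value = current maximum ID in the table + 1
    SELECT IFNULL(MAX("ID"), 0) + 1 INTO next_id FROM "<schema name>"."<table name>";
    EXEC 'ALTER SEQUENCE "<schema name>"."<sequence name>" RESTART WITH ' || TO_VARCHAR(:next_id);
END;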

How can I get the last issued sequence ID in vertica?

Background: I am migrating from PostgreSQL to Vertica and found that there are some issues with IDENTITY or AUTO_INCREMENT columns. One of these issues is that Vertica cannot assign values to IDENTITY columns or alter a column that already has data into an IDENTITY column. Therefore I created a sequence, set it as the default value of the column, and made the column unique:
SELECT MAX(id_column) FROM MY_SCHEMA.my_table;  -- returns 12345
CREATE SEQUENCE MY_SCHEMA.seq_id_column MINVALUE 12346 CACHE 1;
ALTER TABLE MY_SCHEMA.my_table
ALTER COLUMN id_column SET DEFAULT(MY_SCHEMA.seq_id_column.nextval);
ALTER TABLE MY_SCHEMA.my_table ADD UNIQUE(id_column);
Which works as expected. In this case I have the cache deactivated, as I am on a single-node installation and I want my ID column to be contiguous. However, this is not an option on a cluster installation, as the lock that is needed leads to a bottleneck.
Question: In a Vertica cluster with several nodes, how can I access the ID of the last insert in a session (without an additional select)?
E.g. in PostgreSQL I could do something like:
INSERT INTO MY_SCHEMA.my_table RETURNING id_column;
which does not work in Vertica. Furthermore, Vertica's LAST_INSERT_ID() function does not work for named sequences. I also suspect that querying the current_value of MY_SCHEMA.seq_id_column could give wrong results due to caching, but I am unsure about this.
Why no additional SELECT?
To my knowledge, the select will only give correct values after a commit. I cannot do a commit after every single insert due to performance.
The comments from LukStorms pointed me in the right direction.
The NEXTVAL() function (as far as I have tested) gives contiguous values when a single session queries them. On concurrent access, CURRVAL issued after an insert retrieves the session's cached value, which is guaranteed to be unique but not necessarily contiguous. Since I never call NEXTVAL anywhere other than in my default clause, this solves the problem for me, although there may be cases where an additional call to NEXTVAL between inserts increments the sequence counter.
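In practice this means the ID produced by the column default can be read back right after the INSERT in the same session; a small sketch against the table from the question (the non-ID column is a placeholder):
-- the DEFAULT on id_column calls MY_SCHEMA.seq_id_column.nextval
INSERT INTO MY_SCHEMA.my_table (some_column) VALUES ('example');
-- CURRVAL returns the value that NEXTVAL just produced in this session
SELECT CURRVAL('MY_SCHEMA.seq_id_column');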
One case I can think of (and that I will test in the future) is what happens if AUTO COMMIT is set to OFF, which is ON by default for the vertica client drivers.
UPDATE:
This even seems to work with AUTOCOMMIT set to OFF (shown using the vertica-python client driver, where C is the connection and cur is the cursor):
cur.execute("SELECT NEXTVAL('my_schema.my_sequence');")
cur.fetchall()
--> 1
cur.execute("SELECT CURRVAL('my_schema.my_sequence');")
cur.fetchall()
--> 1
cur.execute("SET SESSION AUTOCOMMIT TO OFF")
cur.execute("SELECT NEXTVAL('my_schema.my_sequence');")
cur.execute("SELECT NEXTVAL('my_schema.my_sequence');")
cur.execute("SELECT NEXTVAL('my_schema.my_sequence');")
cur.execute("SELECT CURRVAL('my_schema.my_sequence');")
cur.fetchall()
--> 4
However, this seems to be unaffected by a rollback on the connection. So the following happens:
C.rollback()
cur.execute("SELECT CURRVAL('my_schema.my_sequence');")
cur.fetchall()
--> 4

Keep a shadow copy of a table while retaining records removed from the original

This is probably laughably easy for an SQL expert, but SQL (although I can use it) is not really my thing.
I've got a table in a DB. (Let's call it COMPUTERS)
About 10,000 rows. 25 columns. 1 unique key: the ASSETS column.
Occasionally an external program will delete one or more of the rows, but it isn't supposed to do that, because we still need some info from those rows before we can really delete the items.
We can't control the behavior of the external application so we came up with a different idea:
We want to create a second identical table (COMPUTERS_BACKUP) and initially fill this with a one-on-one copy of COMPUTERS.
After that, once a day copy new records from COMPUTERS to COMPUTERS_BACKUP and update those records in COMPUTERS_BACKUP where the original in COMPUTERS has changed (ASSETS column will never change).
That way we keep the last state of a record deleted from COMPUTERS.
Can someone supply the code for a stored procedure that can be scheduled to run once a day? I can probably figure this out myself, but it would take me several hours or so and I'm very pressed for time.
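For reference, the daily copy/update described above can usually be written as a single MERGE inside a stored procedure and scheduled as a SQL Server Agent job; a rough sketch, with ASSETS as the key and the remaining column names as placeholders:
MERGE COMPUTERS_BACKUP AS b
USING COMPUTERS AS c
    ON b.ASSETS = c.ASSETS
WHEN MATCHED THEN
    UPDATE SET b.Col1 = c.Col1, b.Col2 = c.Col2   -- list the other columns here
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ASSETS, Col1, Col2)                   -- same column list
    VALUES (c.ASSETS, c.Col1, c.Col2);
-- no WHEN NOT MATCHED BY SOURCE clause, so rows deleted from COMPUTERS stay in the backup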
Just create an insert trigger on the Computers table:
CREATE TRIGGER newComputer
ON [Computers]
AFTER INSERT
AS
BEGIN
    INSERT INTO COMPUTERS_BACKUP
    SELECT * FROM Inserted
END
It will fire when you insert a new computer into the Computers table, and it will also insert the record into the backup table.
When you update Computers, you can keep Computers_BACKUP in sync with an update trigger as well:
CREATE TRIGGER updatedComputer
ON [Computers]
AFTER UPDATE
AS
BEGIN
    -- Deleted holds the rows as they were before the update,
    -- Inserted holds them as they are after the update
    UPDATE b
    SET b.Col1 = i.Col1,
        b.Col2 = i.Col2          -- list the remaining columns here
    FROM Computers_BACKUP AS b
    JOIN Inserted AS i ON i.ASSETS = b.ASSETS
END
Finally, I guess you don't want to delete the backup row when the original record is deleted from the Computers table. You can check more examples of using triggers on MSDN.
When a record is removed from the Computers table:
CREATE TRIGGER computerDeleted
ON [Computers]
AFTER DELETE
AS
BEGIN
    INSERT INTO Computers_BACKUP
    SELECT * FROM Deleted
END
Besides creating triggers, you may want to look into enabling Change Data Capture, which is available in SQL Server Enterprise Edition. It may be overkill, but it deserves a mention and you may find it useful for other tables and objects.
IMHO a possible solution, if your application never deletes records from that table (only updates them), is to introduce an INSTEAD OF DELETE trigger:
CREATE TRIGGER tg_computers_delete ON computers
INSTEAD OF DELETE AS
DELETE computers WHERE 1=2;
It will prevent the deletion of the records.
A trigger on the delete event can help you guard this table:
CREATE TRIGGER backup_row_before_delete ON COMPUTERS_Table FOR DELETE
AS
    INSERT INTO Computers_Backup
    SELECT Deleted.* FROM Deleted
You can change Deleted.* to Deleted.col1, Deleted.col2, ... if you want to keep only certain columns.
will delete 1 or more of the rows, but isn't supposed to do that
Then you have permission and integrity issues.
You can most certainly use a trigger to record deletions (and updates of course) but I would not recommend you use it purely to keep a copy of stuff you didn't want deleted in the first place!
Remove delete permissions if you have to or beef up your data integrity if you can. Without your schema it's hard to tell exactly how though.
Finally, use your (INSTEAD OF) trigger to check whatever conditions you need to prevent the delete when appropriate.
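For example, a guarded variant of the INSTEAD OF trigger might only let deletes through once a row is actually safe to lose; a sketch in which the flag column is purely hypothetical:
CREATE TRIGGER tg_computers_guarded_delete ON computers
INSTEAD OF DELETE AS
BEGIN
    -- re-issue the delete only for rows that meet your condition;
    -- everything else is silently kept
    DELETE c
    FROM computers AS c
    JOIN Deleted AS d ON d.ASSETS = c.ASSETS
    WHERE c.ReadyToDelete = 1;  -- hypothetical flag/condition
END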

How to delete a large record from SQL Server?

In a database for a forum I mistakenly set the body column to nvarchar(MAX). Well, someone posted the Encyclopaedia Britannica, of course. So now there is a forum topic that won't load because of this one post. I have identified the post and run a delete query on it, but for some reason the query just sits and spins. I have let it go for a couple of hours and it just sits there. Eventually it times out.
I have tried editing the body of the post as well, but that also sits and hangs. While my query runs, the entire database hangs, so I shut down the site in the meantime to prevent further requests while it does its thinking. If I cancel my query, the site resumes as normal, and all queries for records that don't involve the one in question work fantastically.
Has anyone else had this issue? Is there an easy way to smash this evil record to bits?
Update: Sorry, the version of SQL Server is 2008.
Here is the query I am running to delete the record:
DELETE FROM [u413].[replies] WHERE replyID=13461
I have also tried deleting the topic itself, which has a relationship to replies (deletes on topics cascade to the related replies). This hangs as well.
Option 1. This depends on how big the table itself is and how big the rows are.
Copy data to a new table:
SELECT *
INTO tempTable
FROM replies WITH (NOLOCK)
WHERE replyID != 13461
Although it will take time, the table should not be locked during the copy process.
Drop old table
DROP TABLE replies
Before you drop:
- script current indexes and triggers so you are able to recreate them later
- script and drop all the foreign keys to the table
Rename the new table
sp_rename 'tempTable', 'replies'
Recreate all the foreign keys, indexes and triggers.
Option 2. Partitioning.
Add a new bit column, called, let's say, Partition. Set it to 0 for all rows except the bad one, and to 1 for the bad one.
Create partitioning function so there would be two partitions 0 and 1.
Create a temp table with the same structure as the original table.
Switch partition 1 from original table to the new temp table.
Drop temp table.
Remove partitioning from the source table and remove new column.
The topic of partitioning is not simple. There are some examples on the internet, e.g. "Partition switching in SQL Server 2005".
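A very rough sketch of the switch itself (the object names are hypothetical, and step 2 below, moving the table onto the partition scheme, is the bulk of the real work):
-- 1. Function/scheme on the new bit column: partition 1 holds 0 (good rows), partition 2 holds 1 (the bad row)
CREATE PARTITION FUNCTION pf_bad_row (bit) AS RANGE LEFT FOR VALUES (0);
CREATE PARTITION SCHEME ps_bad_row AS PARTITION pf_bad_row ALL TO ([PRIMARY]);
-- 2. Rebuild [u413].[replies] (its clustered index) on ps_bad_row(Partition),
--    and create tempTable with exactly the same structure on [PRIMARY].
-- 3. Move the bad row out and throw it away
ALTER TABLE [u413].[replies] SWITCH PARTITION 2 TO tempTable;
DROP TABLE tempTable;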
Start by checking if your transaction is being blocked by another process. To do this, you can run this command..
SELECT * FROM sys.dm_os_waiting_tasks WHERE session_id = {spid}
Replace {spid} with the correct spid number of the connection running your DELETE command. To get that value, run SELECT @@SPID before the DELETE command.
If the column sys.dm_os_waiting_tasks.blocking_session_id has a value, you can use activity monitor to see what that process is doing.
To open activity monitor, right-click on the server name in SSMS' Object Explorer and choose Activity Monitor. The Processes and Resource Waits sections are the ones you want.
Since you're having issues deleting the record and recreating the table, have you tried updating the record?
Something like this (change "body" to whatever the field is actually called in the table):
update [u413].[replies] set body='' WHERE replyID=13461
Once you clear out the text from that single reply record, you should be able to alter the data type of the column to set an upper bound. Something like:
alter table [u413].[replies] alter column body nvarchar(100)