Deleting an entry and releasing from unique key index - sql

For a project with offline storage, I created a database with a primary key for the id and also a unique field holding the date.
When I delete an entry, I cannot create a new one with the same date the deleted entry had.
What can I do to get the date out of the index when I delete that entry?
This is how I create Table:
CREATE TABLE IF NOT EXISTS userdata (id INTEGER PRIMARY KEY ASC, entrydate DATE UNIQUE, strength, comments);
I wonder if I need to add something to tell the database to allow me to use the same value again as soon as it is free. Maybe I need to run some kind of update where SQLite refreshes its internal records.

I think there are a few possibilities here.
- Your delete statement is in an uncommitted transaction, so the unique value hasn't actually been removed from the table before your attempt to insert.
- The value you are deleting and the new value you are inserting are not actually the same value. Run a select and make sure the value you are attempting to insert is actually available.
- You have a corrupt index and need to reindex it.
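For example, a quick way to rule these three out in SQLite, using the table from the question (the date literal is only an illustration):
-- 1. Finish any pending transaction before the insert (only needed if one is open).
COMMIT;
-- 2. Check whether the date really is gone from the table.
SELECT id, entrydate FROM userdata WHERE entrydate = '2021-03-01';
-- 3. Rebuild the table's indexes in case one is corrupt.
REINDEX userdata;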

Saving change history

Background:
I am trying to solve a simple problem. I have a database with two tables: one stores text (something like articles), and the other stores the category the text belongs to. Users can make changes to the text, and I need to record who made the changes and when; when saving changes, the user also writes a comment on the changes, which I save as well.
As I have done now:
I added another table to which I save everything related to changes, who made the changes and when, as well as a comment on the changes, and the ID of the text to which the changes apply.
What is the problem:
Deleting the text also needs to be recorded in the history, but since the history records have a foreign key constraint referencing the text, I have to delete the entire history associated with the text to avoid an error.
What I have tried else:
I tried adding a "Deleted" attribute to the text table, so a row is not physically deleted; the flag "Deleted" = 1 is simply set, and I can keep the history, and even record the moment of deletion. But there is another problem: the text table has a "Name" attribute which must be unique, and if records are not physically deleted, then when I try to insert a new record with a "Name" value that already exists, I get a uniqueness error, even though the old record with that name is considered deleted.
Question:
What approaches are there for keeping the history of changes in a separate table even after records are deleted from the main table, while preserving the uniqueness of certain attributes of the main table and maintaining data integrity?
I would be grateful for any options and hints.
A good practice is to use a unique identifier such as a UUID as the primary key for your primary record (i.e. your text record). That way, you can safely soft delete the primary record, and any associated metadata can be kept without fear of collisions in the future.
If you need to enforce uniqueness of certain attributes (such as the Name you mentioned), you can create a secondary index (a non-clustered index in SQL Server terminology) on that column and then, when performing the soft delete, set the Name to NULL and record the old Name value in some other column. In SQL Server (since 2008), in order to allow multiple NULL values in a unique index you need to create what is called a filtered index, where you explicitly say you want to ignore NULL values.
In other words, your schema would consist of something like this:
a UUID as primary key for the text record
change metadata would have a foreign key relation to text record via the UUID
a Name column with a non-clustered UNIQUE index
a DeletedName column that will store the Name when record is deleted
a Deleted bit column that can be NULL for non-deleted records and set to 1 for deleted
When you do a soft-delete, you would execute an atomic transaction that would:
set the DeletedName = Name
set Name = NULL (so as not to break the UNIQUE index)
mark record as deleted by setting Deleted = 1
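Put together, a minimal sketch of this, assuming SQL Server; the table name, key column, and the parameter @TextId are illustrative:
-- Illustrative schema for the text record.
CREATE TABLE TextRecord (
    Id UNIQUEIDENTIFIER NOT NULL PRIMARY KEY DEFAULT NEWID(),
    Name NVARCHAR(200) NULL,
    DeletedName NVARCHAR(200) NULL,
    Deleted BIT NULL
);
-- Filtered unique index: uniqueness is enforced only for non-NULL names.
CREATE UNIQUE NONCLUSTERED INDEX UX_TextRecord_Name
    ON TextRecord (Name)
    WHERE Name IS NOT NULL;
-- The soft delete; a single UPDATE statement is atomic on its own.
UPDATE TextRecord
SET DeletedName = Name,
    Name = NULL,
    Deleted = 1
WHERE Id = @TextId;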
There are other ways too but this one seems to be easily achievable based on what you already have.
In my opinion, you can do it in one of two ways:
Use audit tables that mirror the main tables and include an action field, filled by the DELETE, INSERT, and UPDATE triggers of the main tables:
ArticlesTable(Id,Name) -> AuditArticlesTable(Id,Name,Action,User,ModifiedDate)
You can use a filtered unique index (https://learn.microsoft.com/en-us/sql/relational-databases/indexes/create-filtered-indexes?view=sql-server-ver15) on the "Name" field to solve the issue of adding a name that already exists on a record marked as deleted.
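A minimal sketch of the trigger option, using the tables from the example above (SQL Server syntax; the user and date columns are assumptions):
CREATE TRIGGER TR_ArticlesTable_Audit
ON ArticlesTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    -- Rows only in inserted => INSERT; only in deleted => DELETE; in both => UPDATE.
    INSERT INTO AuditArticlesTable (Id, Name, [Action], [User], ModifiedDate)
    SELECT COALESCE(i.Id, d.Id),
           COALESCE(i.Name, d.Name),
           CASE WHEN i.Id IS NOT NULL AND d.Id IS NOT NULL THEN 'UPDATE'
                WHEN i.Id IS NOT NULL THEN 'INSERT'
                ELSE 'DELETE' END,
           SUSER_SNAME(),
           GETDATE()
    FROM inserted i
    FULL OUTER JOIN deleted d ON d.Id = i.Id;
END;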

How can I delete a database record and reuse the deleted primary key?

I've created a database in VS 2016. I also wrote a WebApplication (ASP.Net/C#/Entity Framework) where I can enter different values in different textboxes. These get saved in my database. Every record receives an ID (Primary Key).
Let's say I have 5 records with the IDs 1-5. If I create a new record after deleting those 5, the ID of the new one is 6, but I'd like it to reuse the ID 1.
How can I achieve this?
I presume you are using auto-increment to assign a new key value to your Id column. You are assuming that since there are no records then the next free value would be 1, but it just keeps incrementing from the previous highest value.
It's a bad idea, but if you really wish to recycle your key values you could switch off auto-increment and manage the key values manually yourself; this is error prone and difficult.
Do you really need to do this? INT and BIGINT can hold very large numbers; are you ever likely to run out of key values so that recycling might be required?
If you just want to reset your auto-increment back to 1 I suggest you look at this post
It's not a problem. Your database has a primary key with IDENTITY(1,1), so every new record will have an ID greater than the last one added.
If you are deleting all the rows, you can use
TRUNCATE TABLE tablename
Otherwise, you might think of turning off auto-increment and providing the ID yourself with a function that looks for the smallest free ID in your table.
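For example, a query along these lines finds the smallest free ID (assuming the key column is named Id; note that this is race-prone under concurrent inserts):
-- Smallest positive Id not currently in use (returns 1 for an empty table).
SELECT MIN(candidate) AS NextFreeId
FROM (
    SELECT 1 AS candidate
    UNION ALL
    SELECT Id + 1 FROM tablename
) AS gaps
WHERE candidate NOT IN (SELECT Id FROM tablename);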
Let's say I have 5 records with the IDs 1-5. If I create a new record after deleting these 5 IDs, the ID of the new one is 6, but it should be one.
It seems your id column is an identity column. You need to run the command below every time after deleting the rows to start the values again from 1 (reseeding to 0 makes the next inserted row receive 1):
DBCC CHECKIDENT (yourtable, RESEED, 0);
This won't work unless you truncate the table: since you have a primary key on Id, reused values would collide, so you will need to work on your primary key strategy. SQL Server does not track which identity values were deleted, so resetting the counter is your only option:
DBCC CHECKIDENT (yourtable, RESEED, 0);
This is not what you are looking for, but SQL Server simply does not keep track of deleted values.

Create autoserial column in informix

Is it possible to create an autoserial index in order 1,2,3,4... in Informix, and what would be the syntax? I have a query and some of my timestamps are identical, so I was unable to query using a timestamp variable. Thanks!
These are the commands that I ran to add an id field to an existing table. While logged in to the dbaccess console.
alter table my_table add id integer before some_field;
create sequence myseq;
update my_table set id = myseq.nextval;
drop sequence myseq;
alter table my_table modify (id serial not null);
Thanks to @ricardo-henriques for pointing me in the right direction. These commands will allow you to run the instructions explained in his answer on your database.
That would be the SERIAL data type.
You can use, as @RET mentioned, the SERIAL data type.
Next you will struggle with the fact that you can't add a SERIAL column to an existing table. Ways to work around that:
Add an INTEGER column, populate with sequential numbers and then alter the column to SERIAL.
Unload the data to a file, drop the table and recreate it with the new column.
Create a new table with the new column, populate the new table with the data from the old, drop the old and rename the new.
...
Bear in mind that the values may not be unique. Hence you have to create a unique index, a primary key, or a unique constraint on the column to prevent duplicates.
Other notes you should be aware of:
- Primary keys don't allow NULLs; unique indexes and unique constraints do (as long as there is only one NULL record), so you should specify NOT NULL in the column definition.
- If you use a primary key or a unique constraint, you can create a foreign key to it.
- With a primary key or a unique constraint, the uniqueness validation is done at the end of the DML statement; with a unique index it is done row by row.
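For example, continuing the my_table example (Informix syntax; pick one of the two, and the constraint and index names are illustrative):
-- Either a primary key (which disallows NULLs by itself) ...
alter table my_table add constraint primary key (id) constraint my_table_pk;
-- ... or a unique index on the column.
create unique index my_table_id_ix on my_table (id);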
Seems you're getting your first touch with Informix, welcome. Yes, it can be a little hard at the beginning; just remember:
Always search before asking, really search.
When in doubt, or when you've reached a dead end, ask away.
Try to trim down your case scenario; build your own test case as simply as you can. This will not only help us help you, but you will get practice and in some cases find the solution by yourself.
When an error is involved, always give the error code; Informix gives at least one error code and sometimes an ISAM error too.
Kind regards.

SQL Trigger: On update of primary key, how to determine which "deleted" record corresponds to which "inserted" record?

Assume that I know that updating a primary key is bad.
There are other questions which imply that the inserted and deleted table records match by position (the first of one matches the first of the other). Is this a fact or a coincidence?
Is there anything that can join the two tables together when the primary key changes in an update?
There is no positional match between the inserted and deleted virtual table rows, so no, you can't match the rows directly.
Some options:
there is another unique unchanging (for that update) key to link rows
limit to single row actions.
use a stored procedure with the OUTPUT clause to capture before and after keys (see the sketch after this list)
INSTEAD OF trigger with OUTPUT clause (TBH not sure if you can do this)
disallow primary key updates (added after comment)
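A minimal sketch of the OUTPUT-clause option, assuming SQL Server; the table dbo.Articles, the column Id, and the renumbering rule are all illustrative, and Id must not be an identity column for the update to be legal:
DECLARE @KeyMap TABLE (OldId INT, NewId INT);
-- OUTPUT can read both virtual tables in one statement,
-- pairing each old key value with its new one.
UPDATE dbo.Articles
SET Id = Id + 1000
OUTPUT deleted.Id, inserted.Id INTO @KeyMap (OldId, NewId)
WHERE Id BETWEEN 1 AND 5;
SELECT OldId, NewId FROM @KeyMap;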
Each table is allowed to have one identity column. Identity columns are not updateable; they are assigned a value when the records are inserted (or when the column is added), and they can never change. If the primary key is updateable, it must not be an identity column, so either the table has another column which is an identity column, or you can add one to it. There is no rule that says the identity column has to be the primary key. Then, in the trigger, rows in inserted and deleted that have the same identity value are the same row, and you can support updating the primary key on multiple rows at a time.
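For instance, a sketch of that trigger, assuming a hypothetical identity column RowId next to the updatable key Id, and a log table of your own (names are illustrative; SQL Server syntax):
CREATE TABLE KeyChangeLog (OldId INT, NewId INT, ChangedAt DATETIME2 DEFAULT SYSDATETIME());
GO
CREATE TRIGGER TR_Articles_KeyChange
ON dbo.Articles
AFTER UPDATE
AS
BEGIN
    -- RowId never changes, so it safely pairs each before-image with its after-image.
    INSERT INTO KeyChangeLog (OldId, NewId)
    SELECT d.Id, i.Id
    FROM deleted d
    JOIN inserted i ON i.RowId = d.RowId
    WHERE d.Id <> i.Id;
END;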
Yes -- create an "old_primary_key" field in the table you're updating, and populate it first.
There is nothing you can do to match up the inserted and deleted pseudo-table record keys -- even if you store their data in a log table somewhere.
I guess alternatively, you could create a separate log table that tracked changes to primary keys (old and new). This might be more useful than adding a field to the table you're updating as I suggested right at first, as it would allow you to track more than one change for a given record. Just depends on your situation, I guess.
But that said -- before you do anything, please go find a chalk board and write this 100 times:
I know that updating a primary key is bad.
I know that updating a primary key is bad.
I know that updating a primary key is bad.
I know that updating a primary key is bad.
I know that updating a primary key is bad.
...
:-) (just kidding)

Change an autoincrementing field to one previous

One day, WordPress suddenly jumped from post id 9110 to post 890000000.
Days later, I'd like to move new posts back to continue from id 9111.
I'm sure the ids will never legitimately reach 890000000, so there's no problem there, but id is an auto-increment field and ALTER TABLE wp8_posts AUTO_INCREMENT = 9111 is not working.
Can I force id to continue from 9111?
You probably need to renumber the already existing post so that it has an id of 9111, and then issue your ALTER TABLE command; you'll also have to change the ID in all the other tables that point to it. If this still doesn't work, you could rename the table to something like wp8_posts_backup, with:
RENAME TABLE wp8_posts TO wp8_posts_backup
Then, create another table with the same schema with:
CREATE TABLE wp8_posts LIKE wp8_posts_backup;
and then copy the data from the backup to the new one. All this still requires renumbering the old post, or deleting and recreating it, because the database knows there is a record with an ID of 890000000 and will always try to go above that when generating the next ID. I believe it uses the index on the column to find the highest number and calculate the next id, rather than storing the value somewhere else, which is why a unique index is necessary on any auto-incrementing field.
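A minimal sketch of that renumbering, assuming MySQL and the stock WordPress schema (wp8_postmeta is shown as one example of a referencing table; plugins may add others):
-- Move the stray post back into the normal range.
UPDATE wp8_posts SET ID = 9111 WHERE ID = 890000000;
UPDATE wp8_postmeta SET post_id = 9111 WHERE post_id = 890000000;
-- Now the counter can be lowered, since no row exceeds it.
ALTER TABLE wp8_posts AUTO_INCREMENT = 9112;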