One day, WordPress suddenly jumped from post id 9110 to post id 890000000.
Days later, I'd like to move new posts back to continue from id 9111.
I'm sure the id will never actually reach 890000000, so no problem there, but id is an auto-increment field and ALTER TABLE wp8_posts AUTO_INCREMENT = 9111 is not working.
Can I force id to continue from 9111?
You probably need to renumber the existing post so that it has an id of 9111, and then issue your ALTER TABLE command. You'll also have to change the ID in every other table that points to this ID. If the ALTER TABLE still doesn't work after that, you could rename the table to something like wp8_posts_backup, with:
RENAME TABLE wp8_posts TO wp8_posts_backup
Then, create another table with the same schema with:
CREATE TABLE wp8_posts LIKE wp8_posts_backup;
and then copy the data from the backup to the new one. All this still requires renumbering the old post, or deleting it and recreating it, because the database knows that there is a record with an ID of 890000000 and will always try to go above that when generating the next ID. I believe it uses the index on the column to find the highest number and calculate the next id, rather than storing the value somewhere else, which is why it's necessary to have a unique index on any auto-incrementing field.
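For illustration, a minimal sketch of that renumbering, assuming the stray post sits at id 890000000 and that wp8_postmeta and wp8_comments are the tables referencing it (check your own schema for others):
-- renumber the post itself, then every table that points at it
UPDATE wp8_posts SET ID = 9111 WHERE ID = 890000000;
UPDATE wp8_postmeta SET post_id = 9111 WHERE post_id = 890000000;
UPDATE wp8_comments SET comment_post_ID = 9111 WHERE comment_post_ID = 890000000;
-- with no row above 9111 left, the counter can be lowered;
-- the next insert then gets id 9112
ALTER TABLE wp8_posts AUTO_INCREMENT = 9112;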
Background:
I am trying to solve one simple problem. I have a database with two tables: one stores texts (something like articles), and the other stores the category each text belongs to. Users can make changes to a text, and I need to record who made the changes and when; when saving changes, the user also writes a comment on the changes, which I save too.
What I have done so far:
I added another table in which I save everything related to changes: who made them and when, the comment on the changes, and the ID of the text the changes apply to.
What is the problem:
Deleting a text also needs to be recorded in the history, but since the history records carry a foreign key constraint on the text, I have to delete all the history associated with a text first, so as not to get an error.
What else I have tried:
I tried adding a "Deleted" attribute to the text table, so a row is not physically deleted but simply gets the "Deleted" = 1 flag set; that way I can keep the history and even record the moment of deletion. But there is another problem: the text table has a "Name" attribute that must be unique, and since the record is not physically deleted, inserting a new record with a "Name" value that already exists raises a uniqueness error, even though the old record with that name is considered deleted.
Question:
What approaches are there that allow the change history to be kept in another table even after records are deleted from the main table, while preserving the uniqueness of certain attributes of the main table and maintaining data integrity?
I would be grateful for any options and hints.
A good practice is to use a unique identifier such as a UUID as the primary key for your primary record (i.e. your text record). That way, you can safely soft delete the primary record, and any associated metadata can be kept without fear of collisions in the future.
If you need to enforce uniqueness of certain attributes (such as the Name you mentioned), you can create a secondary index (a non-clustered index in SQL Server terminology) on that column, and when performing the soft delete you can set the Name to NULL and record the old Name value in some other column. In SQL Server (since 2008), in order to allow multiple NULL values in a unique index you need to create what is called a filtered index, where you explicitly say you want to ignore NULL values.
In other words, your schema would consist of something like this:
a UUID as primary key for the text record
change metadata would have a foreign key relation to text record via the UUID
a Name column with a non-clustered UNIQUE index
a DeletedName column that will store the Name when record is deleted
a Deleted bit column that can be NULL for non-deleted records and set to 1 for deleted
When you do a soft delete, you would execute an atomic transaction (sketched after this list) that would:
set the DeletedName = Name
set Name = NULL (so as not to break the UNIQUE index)
mark record as deleted by setting Deleted = 1
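A minimal T-SQL sketch of that transaction, assuming a hypothetical TextRecord table with the Id, Name, DeletedName and Deleted columns listed above:
-- the filtered unique index enforces uniqueness for live names only
CREATE UNIQUE NONCLUSTERED INDEX UX_TextRecord_Name
    ON TextRecord (Name)
    WHERE Name IS NOT NULL;

-- soft delete as one atomic transaction (@TextId is the record to delete)
BEGIN TRANSACTION;
UPDATE TextRecord
SET DeletedName = Name,  -- preserve the old name for the history
    Name        = NULL,  -- frees the name without breaking the unique index
    Deleted     = 1      -- mark the record as deleted
WHERE Id = @TextId;
COMMIT TRANSACTION;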
There are other ways too but this one seems to be easily achievable based on what you already have.
In my opinion, you can do it in one of two ways:
Use audit tables corresponding to the main tables, each including an action field, and fill them from the delete, insert, and update triggers of the main tables (see the trigger sketch after this list).
ArticlesTable(Id,Name) -> AuditArticlesTable(Id,Name,Action,User,ModifiedDate)
You can use a filtered unique index (https://learn.microsoft.com/en-us/sql/relational-databases/indexes/create-filtered-indexes?view=sql-server-ver15) on the "Name" field to solve your issue of adding the same name when another instance exists as a deleted record.
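For the first option, a minimal T-SQL trigger sketch using the ArticlesTable / AuditArticlesTable layout above (all names are illustrative):
CREATE TRIGGER trg_Articles_Audit
ON ArticlesTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- rows present only in `inserted` are INSERTs; rows in both are UPDATEs
    INSERT INTO AuditArticlesTable (Id, Name, Action, [User], ModifiedDate)
    SELECT i.Id, i.Name,
           CASE WHEN d.Id IS NULL THEN 'INSERT' ELSE 'UPDATE' END,
           SUSER_SNAME(), SYSUTCDATETIME()
    FROM inserted i
    LEFT JOIN deleted d ON d.Id = i.Id;
    -- rows present only in `deleted` are DELETEs
    INSERT INTO AuditArticlesTable (Id, Name, Action, [User], ModifiedDate)
    SELECT d.Id, d.Name, 'DELETE', SUSER_SNAME(), SYSUTCDATETIME()
    FROM deleted d
    WHERE NOT EXISTS (SELECT 1 FROM inserted i WHERE i.Id = d.Id);
END;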
Let me describe my scenario here.
I have a table with multiple records where the name is the same, as these are records for the same person, updated on a daily basis.
Right now, I am trying to find the easiest way to update all the names accordingly.
Name is going to be updated (TestName -> RealName)
I want this change to be applied to all the records with the same name, in this case, "TestName"
I can do it with a single query, but I am trying to find out if there's an automatic way to do this correctly.
I've been trying to use triggers, but in most cases I end up with an infinite loop, since I am updating the very table the trigger is bound to, which invokes another update, and so on.
I don't need an exact solution; just give me some pointers on how it can be achieved, please.
The problem may be simply resolved by using the function pg_trigger_depth() in the trigger, e.g.:
create trigger before_update_on_my_table
before update on my_table
for each row
when (pg_trigger_depth() = 0) -- this prevents recursion
execute procedure before_update_on_my_table();
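The snippet assumes the trigger function already exists; a minimal sketch of it, created before the trigger, assuming my_table has id and name columns and the goal from the question (propagating a rename to all rows carrying the old name):
create function before_update_on_my_table() returns trigger
language plpgsql as $$
begin
    -- propagate the rename to every other row carrying the old name;
    -- the WHEN (pg_trigger_depth() = 0) clause on the trigger stops
    -- this UPDATE from firing the trigger again
    update my_table
       set name = new.name
     where name = old.name
       and id <> new.id;
    return new;
end;
$$;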
However, it seems that the table is poorly designed. It should not contain names. Create a table with names (say user_name) and in the old table store a reference to the new one, e.g.:
create table user_name(id serial primary key, name text);
create table my_table(id serial primary key, user_id int references user_name(id));
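With that design, the rename from the question becomes a single-row update and needs no trigger at all:
update user_name set name = 'RealName' where name = 'TestName';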
You can use event triggers in PostgreSQL: https://www.postgresql.org/docs/9.3/sql-createeventtrigger.html
I'm learning SQLite from this website: SQLite Tutorial.
I was reading the article they had on the AUTOINCREMENT command.
My question had to do with their explanation of why this feature is useful:
The main purpose of using AUTOINCREMENT attribute is…
To prevent SQLite to reuse value that has not been used or from the previously deleted row.
I'm confused by this explanation, as it doesn't spell out what the implications of this statement are.
Could someone please give more detail about what happens in the background, and whether this feature is implemented differently on different platforms or in specific packagings of the engine (npm packages etc.)?
Also, more importantly, could you give examples of use cases where this feature would be necessary, and what the proper and improper ways of using it would be?
Thanks to all!
To prevent SQLite to reuse value that has not been used or from the previously deleted row.
The AUTOINCREMENT property ensures that a newly generated id is unique: it will not be any id already used in that column, nor an id from a row that has been deleted. It is mostly used on a table's primary key, where we need a value that has never been used before.
Most relational databases have an AutoIncrement property; in Oracle, I've seen Sequence, which acts much like AutoIncrement.
For example: if you have 10 rows with an AutoIncrement column called id holding values 1 to 10, and you delete all the rows and insert a new one, the new row will have id = 11, because 1 to 10 have already been used. You do not need to specify the id value; it is filled in automatically based on the previously inserted values.
This feature is usually used on the table's primary key (I personally prefer to name it ID), like this:
CREATE TABLE MYTABLE(
ID INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
...
);
If you are learning SQLite, you should know that the table's primary key absolutely must be unique for each record in the table.
So if you are inserting a record into a table without AUTOINCREMENT on its primary key, the database will force you to specify the ID of each new record.
If there are already some records in your table, you may ask yourself, "What ID should I put in the record to ensure that it will be unique?"
This is what AUTOINCREMENT was created for. If AUTOINCREMENT is set on the table's primary key, you no longer need to specify it when inserting a record, so you no longer need to think about what ID to put there.
Now, how does it work? If AUTOINCREMENT is set on the table's primary key, a special count of added records (let's call it added) is stored along with the table's data in the database. When you issue an INSERT command on this table, the new ID is calculated as
added + 1
and the added variable is incremented (hence autoINCREMENT).
Initially, added's value is 0.
For example, as Akash KC already said, if 10 records were added to the table, the next record's ID will be 11.
The detail is that AUTOINCREMENT doesn't mind deletions: if you take an empty table, add 10 records to it, delete one of them with ID 5 (for example) and then add a new one, its ID will be 11 as well.
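A minimal SQLite sketch of that behaviour (table and values are illustrative):
CREATE TABLE demo(id INTEGER PRIMARY KEY AUTOINCREMENT, val TEXT);
INSERT INTO demo(val) VALUES ('a'),('b'),('c'),('d'),('e'),
                             ('f'),('g'),('h'),('i'),('j');  -- ids 1 to 10
DELETE FROM demo WHERE id = 5;
INSERT INTO demo(val) VALUES ('k');
SELECT id FROM demo WHERE val = 'k';  -- returns 11: id 5 is never reused
-- without AUTOINCREMENT, a plain INTEGER PRIMARY KEY may hand out the
-- largest id again after the row holding it is deleted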
I have a simple setup of two CentOS servers, both running postgres-9.4, to simulate the FDW scenario in Postgres-9.4.
I used fdw to link a simple table to another database on another server. Reading worked perfectly from both ends; the issue was with the serial primary key, which was not kept in sync. In other words, if I inserted from the original table after inserting from the foreign table, the counters did not stay in sync, and vice versa.
Based on the comment I got from Nick Barnes: yes, I do need to keep the counters on both sides in sync, so I made a function that queries the actual database for the latest index on every insert, so that it always inserts with the correct id.
I am still not sure if this will hold up, but I'll put it into production really soon.
I blogged about it here with a table example.
I had the same problem, and tried it the way Negma suggested in his blog. That solution only works if you insert a single row. If you insert more rows in the same transaction, select max(id) will keep returning the same id and you will get non-unique ids.
I solved this by changing the type of the id from serial/integer to uuid. Then you can do the same as Negma suggested, but with gen_random_uuid() from the pgcrypto extension.
So at the foreign server I did:
ALTER TABLE tablename ALTER COLUMN columnname SET DEFAULT gen_random_uuid();
And the same for the foreign table.
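A minimal sketch of the pieces, with tablename and a payload column standing in for the real schema (migrating existing integer ids to uuid is a separate exercise):
-- on the server that owns the table:
CREATE EXTENSION IF NOT EXISTS pgcrypto;
CREATE TABLE tablename (
    id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    payload text
);
-- on the server with the foreign table, install pgcrypto as well and set
-- the same default locally, since defaults are applied on the inserting side:
ALTER FOREIGN TABLE tablename ALTER COLUMN id SET DEFAULT gen_random_uuid();
-- each side now generates its own uuid locally, so inserts through the
-- foreign table and inserts at the origin can no longer collide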
For a project with offline storage, I created a database with a primary key for the id and also a unique field holding the date.
When I delete an entry, I cannot create a new one with the same date the deleted entry had.
What can I do to get the date out of the index when I delete that entry?
This is how I create Table:
CREATE TABLE IF NOT EXISTS userdata (id INTEGER PRIMARY KEY ASC, entrydate DATE UNIQUE, strength, comments);
I wonder if I need to add something to tell the DB to allow me to use the same value again as soon as it is free. Maybe I need to run some kind of update, where SQLite refreshes its internal records.
I think there are a few possibilities here.
Your delete statement is in an uncommitted transaction. So the unique value hasn't actually been removed from the table before your attempt to insert.
The value you are deleting and the new value you are inserting are not actually the same value. Run a select and make sure the value you are attempting to insert is actually available.
You have a corrupt index and need to reindex it.
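Quick checks for each of these in SQLite, using the userdata table from the question (the date value is hypothetical):
COMMIT;                           -- 1. make sure the DELETE was committed
SELECT * FROM userdata
 WHERE entrydate = '2021-06-01';  -- 2. is the value really gone?
PRAGMA integrity_check;           -- 3. reports index corruption, if any
REINDEX userdata;                 -- rebuilds the indexes on the table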