I am terribly sorry if this is a supremely easy question. It's just such a weird case that I have trouble even figuring out how to search for it on Google. I'll describe the situation and what I want to do, since I don't know how to put it as a question...
The situation is this. I have a MySQL table, service_logs, where log_id is the primary key with AUTO_INCREMENT set on it. So the various logs there have log_id 1, 2, 3, 4, 5 and so on, with various data. But over time, many individual logs were deleted. So now I have:
log_id 1: content
log_id 2: content
log_id 10: content
log_id 11: content
log_id 40: content
and so on.
I want to fill the gaps. I want the entry with log_id 10 reassigned to id 3, the entry with log_id 11 reassigned to id 4, and so on, so there are no gaps left.
And yes I know it's dumb and shouldn't be done. I just have a friend who needs these without gaps for some of his Excel stuff :/
Yes, this is a bad idea and not necessary, since you can always add a ROW_NUMBER() to get the numbering you want when you export.
That said.
You can remove AUTO_INCREMENT and the primary key,
renumber the column,
add AUTO_INCREMENT and the primary key back,
and set the "correct" next AUTO_INCREMENT value.
It must be clear that, while you do this, no further actions should be run against the server.
If you only want consecutive numbers, you can also choose to make a second log table with row numbers in it; that is what the sample CREATE TABLE log2 below does.
CREATE TABLE log (log_id INT PRIMARY KEY AUTO_INCREMENT);
-- drop AUTO_INCREMENT first (an AUTO_INCREMENT column must be a key), then the primary key
ALTER TABLE log MODIFY log_id INT;
ALTER TABLE log DROP PRIMARY KEY;
INSERT INTO log VALUES (1), (2), (5), (10);
-- variant: a second table with gap-free row numbers instead of renumbering in place
CREATE TABLE log2 SELECT ROW_NUMBER() OVER (ORDER BY log_id ASC) AS log_id, 'abs' FROM log;
-- renumber the column in place
UPDATE log
JOIN (SELECT @rank := 0) r
SET log_id = @rank := @rank + 1;
SELECT * FROM log;
| log_id |
| -----: |
| 1 |
| 2 |
| 3 |
| 4 |
ALTER TABLE log MODIFY log_id INT PRIMARY KEY AUTO_INCREMENT;
SELECT @max := MAX(log_id) + 1 FROM log;
PREPARE stmt FROM 'ALTER TABLE log AUTO_INCREMENT = ?';
EXECUTE stmt USING @max;
DEALLOCATE PREPARE stmt;
| @max := MAX(log_id) + 1 |
| ----------------------: |
| 5 |
SELECT * FROM log2
log_id | abs
-----: | :--
1 | abs
2 | abs
3 | abs
4 | abs
db<>fiddle here
I am really struggling with how to implement a requirement that is best described with an example.
Consider everything below to be written in pseudocode, although I am interested in solutions for Postgres.
| id | id_for_user | note | created_by |
| -- | ----------- | ---- | ---------- |
| 1 | 1 | Buy milk | 1 |
| 2 | 2 | Winter tyres | 1 |
| 3 | 3 | Read for 1h | 1 |
| 4 | 1 | Clean dishes | 2 |
| 5 | 2 | Learn how magnets work | 2 |
INSERT INTO notes VALUES (note: 'Learn icelandic', created_by: 1);
| id | id_for_user | note | created_by |
| -- | ----------- | ---- | ---------- |
| 1 | 1 | Buy milk | 1 |
| 2 | 2 | Winter tyres | 1 |
| 3 | 3 | Read for 1h | 1 |
| 4 | 1 | Clean dishes | 2 |
| 5 | 2 | Learn how magnets work | 2 |
| 6 | 4 | Learn Icelandic | 1 |
INSERT INTO notes VALUES (note: 'Are birds real?', created_by: 2);
| id | id_for_user | note | created_by |
| -- | ----------- | ---- | ---------- |
| 1 | 1 | Buy milk | 1 |
| 2 | 2 | Winter tyres | 1 |
| 3 | 3 | Read for 1h | 1 |
| 4 | 1 | Clean dishes | 2 |
| 5 | 2 | Learn how magnets work | 2 |
| 6 | 4 | Learn Icelandic | 1 |
| 7 | 3 | Are birds real? | 2 |
I would like to achieve something like this:
CREATE TABLE notes (
    id SERIAL,
    id_for_user INT DEFAULT nextval(created_by), -- dynamic sequence name so every user gets their own
    note VARCHAR,
    created_by INT,
    PRIMARY KEY(id, id_for_user),
    CONSTRAINT fk_notes_created_by
        FOREIGN KEY(created_by)
        REFERENCES users(created_by)
);
So that user 1 sees (notice how id_for_user is just id on the front end)
| id | note |
| -- | ---- |
| 1 | Buy milk |
| 2 | Winter tyres |
| 3 | Read for 1h |
| 4 | Learn Icelandic |
And user 2
| id | note |
| -- | ---- |
| 1 | Clean dishes |
| 2 | Learn how magnets work |
| 3 | Are birds real? |
Basically, I want to have an auto-incremented field for each user.
I am then also probably going to query for records by id_for_user, filling created_by on the backend based on which user made the request.
Is something like this even possible? What are my options? I would really like to have this logic at the database level.
https://www.db-fiddle.com/f/6eBvq4VCQPTmmR3W6fCnEm/2
Try a sequence; this object will control the auto-numbering of the ID.
Example:
CREATE SEQUENCE sequence_notes1
INCREMENT BY 1
MINVALUE 1
MAXVALUE 100;
CREATE SEQUENCE sequence_notes2
INCREMENT BY 1
MINVALUE 1
MAXVALUE 100;
CREATE TABLE notes (
id SERIAL,
id_for_user INT,
note VARCHAR,
created_by INT,
PRIMARY KEY(id)
);
INSERT INTO notes (id_for_user, note, created_by) VALUES (nextval('sequence_notes1'),'Foo', 1);
INSERT INTO notes (id_for_user, note, created_by) VALUES (nextval('sequence_notes1'),'Moo', 1);
INSERT INTO notes (id_for_user, note, created_by) VALUES (nextval('sequence_notes2'),'Boo', 2);
INSERT INTO notes (id_for_user, note, created_by) VALUES (nextval('sequence_notes2'),'Loo', 2);
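Note that with this design a matching sequence has to exist for every user before their first insert. A minimal sketch of automating that, assuming a users table with an integer id column and PostgreSQL 11+ (the function and trigger names are only illustrative):
CREATE OR REPLACE FUNCTION create_user_sequence() RETURNS trigger AS $$
BEGIN
    -- build the per-user sequence name dynamically, e.g. sequence_notes1, sequence_notes2, ...
    EXECUTE format('CREATE SEQUENCE sequence_notes%s INCREMENT BY 1 MINVALUE 1', NEW.id);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trg_users_create_sequence
AFTER INSERT ON users
FOR EACH ROW EXECUTE FUNCTION create_user_sequence();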
You can have a separate table to store "the next ordinal value for each user". A trigger can then fill in the value and increment the counter in that table.
For example:
create table usr (
id int primary key,
next_ordinal int default 1
);
create table note (
id int primary key,
note varchar(100),
created_by int references usr (id),
user_ord int
);
create or replace function add_user_ord() returns trigger as $$
begin
select next_ordinal into new.user_ord from usr where id = new.created_by;
update usr set next_ordinal = next_ordinal + 1 where id = new.created_by;
return new;
end;
$$ language plpgsql;
create trigger trg_note1 before insert on note
for each row execute procedure add_user_ord();
Then, the trigger will add the correct ordinal numbers automatically behind the scenes during INSERTs:
insert into usr (id) values (10), (20);
insert into note (id, note, created_by) values (1, 'Hello', 10);
insert into note (id, note, created_by) values (2, 'Lorem', 20);
insert into note (id, note, created_by) values (3, 'World', 10);
insert into note (id, note, created_by) values (4, 'Ipsum', 20);
Result:
id note created_by user_ord
-- ----- ---------- --------
1 Hello 10 1
2 Lorem 20 1
3 World 10 2
4 Ipsum 20 2
Note: This solution does not address concurrent (multi-threaded) inserts. If your application needs that, you'll need to add some isolation (either pessimistic or optimistic locking) for it.
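One way to close that gap, as a sketch rather than the author's code, is to fold the read and the increment into a single UPDATE ... RETURNING inside the trigger; the row lock taken by the UPDATE then serializes concurrent inserts for the same user:
create or replace function add_user_ord() returns trigger as $$
begin
  -- increment and read back in one statement; the row lock on usr blocks
  -- a concurrent insert for the same user until this transaction finishes
  update usr
     set next_ordinal = next_ordinal + 1
   where id = new.created_by
   returning next_ordinal - 1 into new.user_ord;
  return new;
end;
$$ language plpgsql;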
In MySQL, this is supported in the MyISAM storage engine.
https://dev.mysql.com/doc/refman/8.0/en/example-auto-increment.html says:
For MyISAM tables, you can specify AUTO_INCREMENT on a secondary
column in a multiple-column index. In this case, the generated value
for the AUTO_INCREMENT column is calculated as
MAX(auto_increment_column) + 1 WHERE prefix=given-prefix. This is
useful when you want to put data into ordered groups.
CREATE TABLE animals (
grp ENUM('fish','mammal','bird') NOT NULL,
id MEDIUMINT NOT NULL AUTO_INCREMENT,
name CHAR(30) NOT NULL,
PRIMARY KEY (grp,id)
) ENGINE=MyISAM;
INSERT INTO animals (grp,name) VALUES
('mammal','dog'),('mammal','cat'),
('bird','penguin'),('fish','lax'),('mammal','whale'),
('bird','ostrich');
SELECT * FROM animals ORDER BY grp, id;
Which returns:
+--------+----+---------+
| grp | id | name |
+--------+----+---------+
| fish | 1 | lax |
| mammal | 1 | dog |
| mammal | 2 | cat |
| mammal | 3 | whale |
| bird | 1 | penguin |
| bird | 2 | ostrich |
+--------+----+---------+
The reason this works in MyISAM is that MyISAM only supports table-level locking.
In a storage engine with row-level locking, you get race conditions if you try to have a primary key that works like this. This is why others on this thread have commented that implementing this with triggers requires some pessimistic locking. You have to use locking to ensure that only one client at a time is inserting, so they don't allocate the same value.
This will be limiting in a high-traffic application. InnoDB's auto-increment is implemented the way it is to allow applications in which many client threads are executing inserts concurrently.
So you could use MyISAM or you could use InnoDB and invent your own way of allocating new id's per user, but either way it will severely limit your app's scalability.
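For illustration only (the table, column names, and values below are made up), the "invent your own" route on InnoDB usually means a per-user counter table plus pessimistic locking, along these lines:
CREATE TABLE user_note_counter (
  user_id INT PRIMARY KEY,
  next_id INT NOT NULL DEFAULT 1
);
START TRANSACTION;
-- lock this user's counter row so only one client allocates a value at a time
SELECT next_id FROM user_note_counter WHERE user_id = 42 FOR UPDATE;
UPDATE user_note_counter SET next_id = next_id + 1 WHERE user_id = 42;
-- insert the note here using the value just read, then COMMIT to release the lock
COMMIT;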
SQL table:
id | name
----+--------
1 | apple
2 | orange
3 | apricot
The id is the primary key, unique, and could be SERIAL. The goal is to insert a new row with id 2 and shift the existing rows below it, that is ids 2 and 3, to positions 3 and 4.
I have tried shifting the rows before inserting the new row:
"UPDATE some_table SET id = id + 1 WHERE id >= id"
but an error occurred:
org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "some_table_pkey"
Detail: Key (id)=(3) already exists.
Is there some effective way to do such an operation?
The table should look like this after update:
id | name
----+--------
1 | apple
2 | cherry
3 | orange
4 | apricot
While I think the attempt is futile, you can achieve that by marking the primary key constraint as deferrable:
CREATE TABLE some_table
(
id int,
name text
);
alter table some_table
add constraint pk_some_table
primary key (id)
deferrable initially immediate; --<< HERE
In that case the PK constraint is evaluated per statement, not per row.
Online example: https://rextester.com/JSIV60771
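With the deferrable constraint in place, the shift-then-insert from the question should go through, because uniqueness is only checked once the whole UPDATE has finished (a sketch against the question's data):
update some_table set id = id + 1 where id >= 2;
insert into some_table (id, name) values (2, 'cherry');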
update Names
set id=id+1
where id in
(select id from Names where id>=2 order by id desc);
Here you first update the ids and then you can insert:
insert into Names (id,name) values(2,'cherry');
I have a table where the primary key is just a synthetic key to keep track of each user's tracked orders. A user cannot have duplicate orders, but can track as many different orders as they want. My problem is that I want to increase performance by moving my current solution (checking for duplicates in the query) into a CHECK constraint or something similar.
Table
| TrackingId | OrderId | UserId |
+------------+---------+--------+
| 1 | 37 | 144 |
| 2 | 41 | 144 |
| 3 | 37 | 144 | -- DUPLICATE
| 4 | 41 | 26 | -- But this is fine
Is it possible to create a CHECK constraint to prevent inserting data into the table in this case?
To clarify, instead of having to add the process of checking for an already existing record in my SQL code for my application, can I add code into the database schema to check so it will just throw an error if it fails the CHECK constraint?
e.g.
DECLARE @userId INT
DECLARE @orderId INT
--(the database handles the check for you)
INSERT INTO Tracking
...
instead of:
DECLARE @userId INT
DECLARE @orderId INT
IF (SELECT COUNT(*)
    FROM Tracking
    WHERE UserId = @userId AND OrderId = @orderId
    GROUP BY UserId, OrderId
    HAVING COUNT(*) > 0) > 0
    -- Check failed (user already has a record for that particular OrderId, can't add it again)
    THROW 51000, 'Cannot insert duplicate order for user.', 1;
ELSE
(
INSERT INTO Tracking
...
)
You seem to want a unique constraint, not a check constraint:
alter table tracking
add constraint unq_tracking_userid_orderid unique (userid, orderid);
You can see the concept of unique constraints/indices here:
SQL Server Create Unique Indexes
In your case the code should be:
CREATE UNIQUE INDEX ix_tracking_userid_orderid ON tracking (userid, orderid);
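With the unique constraint (or unique index) in place, the application-side pre-check can go away entirely: a duplicate insert fails with error 2627 (unique constraint) or 2601 (unique index), which you can catch if you still want the custom message. A sketch, assuming TrackingId is an identity column:
BEGIN TRY
    INSERT INTO Tracking (UserId, OrderId) VALUES (@userId, @orderId);
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() IN (2601, 2627)
    BEGIN
        THROW 51000, 'Cannot insert duplicate order for user.', 1;
    END;
    THROW;  -- re-raise anything else unchanged
END CATCH;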
Postgres database
I'm trying to find a faster way to create a new column in a table which is a copy of the table's primary key column. So if I have the following columns in a table named students:
student_id Integer Auto-Increment -- Primary key
name varchar
Then I would like to create a new column named old_student_id which has all the same values as student_id.
To do this I create the column and then execute the following update statement:
update student set old_student_id=student_id
This works, but on my biggest table it takes over an hour, and I feel like I should be able to use some kind of alternative approach to get that down to a few minutes; I just don't know what.
So what I want at the end of the day is something that looks like this:
+------------+-----+---------------+
| student_id | name| old_student_id|
+------------+-----+---------------+
| 1 | bob | 1 |
+------------+-----+---------------+
| 2 | tod | 2 |
+------------+-----+---------------+
| 3 | joe | 3 |
+------------+-----+---------------+
| 4 | tim | 4 |
+------------+-----+---------------+
To speed things up a bit before I run the update, I drop all the FKs and indexes on the table, then reapply them when it finishes. Also, I'm on AWS RDS, so I have set up a parameter group with synchronous_commit=off, turned off backups, and increased working memory a bit for the duration of this update.
For context, this is actually happening to every table in the database, across three databases. The old ids are used as references by several external systems, so I need to keep track of them in order to update those systems as well. I have an 8-hour downtime window; currently merging the databases takes ~3 hours, and a whole hour of that is spent creating these ids.
If you will not need to update the old_student_id column in the future, then you can use a generated column in PostgreSQL.
CREATE TABLE table2 (
id serial4 NOT NULL,
val1 int4 NULL,
val2 int4 NULL,
total int4 NULL GENERATED ALWAYS AS (id) STORED
);
During the insert, the total field will be set to the same value as the id field. But you cannot update this field later, because it is a generated column.
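Applied to the question's table, that would be a single statement (a sketch, PostgreSQL 12+; note that adding a stored generated column rewrites the whole table, so it is not necessarily faster than the UPDATE, and the copy will always track student_id rather than preserving an old value):
ALTER TABLE student
    ADD COLUMN old_student_id int GENERATED ALWAYS AS (student_id) STORED;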
An alternative method is using triggers. In that case you can still update the field later. See this example:
First we need to create a trigger function which will be called before each insert into the table.
CREATE OR REPLACE FUNCTION table2_insert()
RETURNS trigger
LANGUAGE plpgsql
AS $function$
begin
new.total = new.val1 * new.val2;
return new;
END;
$function$
;
Then:
CREATE TABLE table2 (
id serial4 NOT NULL,
val1 int4 NULL,
val2 int4 NULL,
total int4 NULL
);
create trigger my_trigger before
insert
on
table2 for each row execute function table2_insert();
With both methods, you don't have to update many records every time.
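For completeness, a sketch of the trigger variant applied to the question's student table; the NEW record in a BEFORE INSERT trigger already contains the value produced by the student_id default, so it can simply be copied (names here mirror the question):
CREATE OR REPLACE FUNCTION student_copy_id() RETURNS trigger AS $$
BEGIN
    NEW.old_student_id := NEW.student_id;  -- copy the freshly assigned id
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trg_student_copy_id
BEFORE INSERT ON student
FOR EACH ROW EXECUTE FUNCTION student_copy_id();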
I am trying to create a table with an auto-increment column as below. Since Redshift doesn't support SERIAL, I had to use the IDENTITY data type:
IDENTITY(seed, step)
Clause that specifies that the column is an IDENTITY column. An IDENTITY column contains unique auto-generated values. These values start with the value specified as seed and increment by the number specified as step. The data type for an IDENTITY column must be either INT or BIGINT.
My create table statement looks like this:
CREATE TABLE my_table(
id INT IDENTITY(1,1),
name CHARACTER VARYING(255) NOT NULL,
PRIMARY KEY( id )
);
However, when I tried to insert data into my_table, the ids incremented only in steps of two, like below:
id | name |
----+------+
2 | anna |
4 | tom |
6 | adam |
8 | bob |
10 | rob |
My insert statements look like below:
INSERT INTO my_table ( name )
VALUES ( 'anna' ), ('tom') , ('adam') , ('bob') , ('rob' );
I am also having trouble with bringing the id column back to start with 1. There are solutions for SERIAL data type, but I haven't seen any documentation for IDENTITY.
Any suggestions would be much appreciated!
You have to set your identity as follows:
id INT IDENTITY(0,1)
Source: http://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_TABLE_examples.html
And you can't reset the id to 0. You will have to drop the table and create it again.
Set your seed value to 1 and your step value to 1.
Create table
CREATE table my_table(
id bigint identity(1, 1),
name varchar(100),
primary key(id));
Insert rows
INSERT INTO my_table ( name )
VALUES ('anna'), ('tom') , ('adam'), ('bob'), ('rob');
Results
id | name |
----+------+
1 | anna |
2 | tom |
3 | adam |
4 | bob |
5 | rob |
For some reason, if you set your seed value to 0 and your step value to 1 then the integer will increase in steps of 2.
Create table
CREATE table my_table(
id bigint identity(0, 1),
name varchar(100),
primary key(id));
Insert rows
INSERT INTO my_table ( name )
VALUES ('anna'), ('tom') , ('adam'), ('bob'), ('rob');
Results
id | name |
----+------+
0 | anna |
2 | tom |
4 | adam |
6 | bob |
8 | rob |
This issue is discussed at length in AWS forum.
https://forums.aws.amazon.com/message.jspa?messageID=623201
The answer from the AWS.
Short answer to your question is seed and step are only honored if you
disable both parallelism and the COMPUPDATE option in your COPY.
Parallelism is disabled if and only if you're loading your data from a
single file, which is what we normally do not recommend, and hence
will be an unlikely scenario for most users.
Parallelism impacts things because in order to ensure that there is no
single point of contention in assigning identity values to rows, there
end up being gaps in the value assignment. When parallelism is
disabled, the load is happening serially, and therefore, there is no
issue with assigning different id values in parallel.
The reason COMPUPDATE impacts things is when it's enabled, the COPY is
actually making 2 passes over your data. During the first pass, it
internally increments the identity values, and as a result, your
initial value starts with a larger value than you'd expect.
We'll update the doc to reflect this.
Also, using multiple nodes seems to cause this effect with IDENTITY columns. In essence, IDENTITY can only provide you with guaranteed unique IDs, not gap-free or strictly sequential ones.
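Putting that explanation into practice, a hedged example of a COPY that should honor the seed and step: a single input file (so no parallel slices) and compression analysis turned off. The bucket, file name, and IAM role below are placeholders:
COPY my_table (name)
FROM 's3://my-bucket/names-single-file.csv'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
FORMAT AS CSV
COMPUPDATE OFF;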