PostgreSQL: Why does the sequence increase only once with two "nextval" calls?

I am implementing a board with a comment function.
I created a table like this, with nextval as the column default:
create table tr_board
(
b_id varchar(20) not null default nextval('seq_board'),
title varchar(256),
content varchar(500),
user_id varchar(30),
update_date timestamp not null,
is_complete varchar(1) default 0,
is_private varchar(1) default 0
);
And this is my insert SQL:
<insert id="addBoard" parameterType="java.util.HashMap">
<selectKey keyProperty="id" resultType="int" order="BEFORE">
select nextval('seq_board') as id;
</selectKey>
insert into tr_board
(
b_id,
title,
content,
user_id,
is_private,
update_date
)
values
(
#{id},
#{title},
#{content},
#{user_id},
#{is_private},
current_timestamp at time zone 'utc'
)
</insert>
I used nextval both in the column default of the CREATE TABLE and in the INSERT.
So I thought the sequence would increase twice.
But why does the sequence increase only once, even though nextval appears twice?
I would appreciate your reply.

The nextval() in the column default only gets called if a value isn't provided for the column, or if you explicitly ask for the default to be used. Here you fetch a sequence value in the selectKey and then pass it as the value for the b_id column, so the default never fires and no second call to nextval() is made. The sequence would only advance twice if you fetched a value and then did nothing with it, so that the default had to be used.
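For example, against the tr_board table above (a quick sketch; the literal values are made up), each insert consumes exactly one sequence value no matter which side supplies it:
-- no b_id given: the default fires, one nextval() call
insert into tr_board (title, content, user_id, update_date)
values ('t1', 'c1', 'u1', current_timestamp);
-- b_id supplied (as the mapper does): the selectKey's nextval() is the only call, the default never runs
insert into tr_board (b_id, title, content, user_id, update_date)
values (nextval('seq_board'), 't2', 'c2', 'u1', current_timestamp);
-- explicit DEFAULT keyword: again a single nextval() call, this time via the default
insert into tr_board (b_id, title, content, user_id, update_date)
values (default, 't3', 'c3', 'u1', current_timestamp);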

Related

How to generate a unique numeric ID in SQL Server (not using identity)?

I need a unique numeric id for my table. Usually I would use IDENTITY in SQL Server, but there is a catch to my use case: I would like to know the id before the row is created (to be able to reference it in other records in memory, before committing everything to the database).
I don't know if that is possible with IDENTITY, but I could not figure it out.
So my next best guess is that I need a table that will store one value, keep incrementing it, and return a new value for the id. Access would have to be locked so that no two operations can get the same value.
I am thinking of using e.g. sp_getapplock @Resource = 'MyUniqueId' to prevent the same number from being returned to two callers. Perhaps I can use ordinary locking in transactions for that as well.
Is there any better approach to the problem?
You can create a SEQUENCE object that produces incrementing values. A SEQUENCE can be used independently or as a default value for one or more tables.
You can create a sequence with CREATE SEQUENCE:
CREATE SEQUENCE Audit.EventCounter
AS int
START WITH 1
INCREMENT BY 1 ;
You can retrieve the next value atomically with NEXT VALUE FOR and use it in multiple statements, e.g.:
DECLARE @NextID int;
SET @NextID = NEXT VALUE FOR Audit.EventCounter;
Rolling back a transaction doesn't affect a SEQUENCE. From the docs:
Sequence numbers are generated outside the scope of the current transaction. They are consumed whether the transaction using the sequence number is committed or rolled back.
You can use NEXT VALUE FOR as a default in multiple tables. In the documentation example, three different types of event table use the same SEQUENCE allowing all events to get unique numbers:
CREATE TABLE Audit.ProcessEvents
(
EventID int PRIMARY KEY CLUSTERED
DEFAULT (NEXT VALUE FOR Audit.EventCounter),
EventTime datetime NOT NULL DEFAULT (getdate()),
EventCode nvarchar(5) NOT NULL,
Description nvarchar(300) NULL
) ;
GO
CREATE TABLE Audit.ErrorEvents
(
EventID int PRIMARY KEY CLUSTERED
DEFAULT (NEXT VALUE FOR Audit.EventCounter),
EventTime datetime NOT NULL DEFAULT (getdate()),
EquipmentID int NULL,
ErrorNumber int NOT NULL,
EventDesc nvarchar(256) NULL
) ;
GO
CREATE TABLE Audit.StartStopEvents
(
EventID int PRIMARY KEY CLUSTERED
DEFAULT (NEXT VALUE FOR Audit.EventCounter),
EventTime datetime NOT NULL DEFAULT (getdate()),
EquipmentID int NOT NULL,
StartOrStop bit NOT NULL
) ;
GO
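For example (a minimal sketch; the event code and description are invented), the original requirement of knowing the key before the row exists looks like this with the objects above:
DECLARE @NextID int;
SET @NextID = NEXT VALUE FOR Audit.EventCounter;
-- @NextID is known here, so other in-memory records can reference it
-- before anything is committed to the database.
BEGIN TRANSACTION;
INSERT INTO Audit.ProcessEvents (EventID, EventCode, Description)
VALUES (@NextID, N'EV001', N'row inserted with a pre-fetched key');
COMMIT;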
One option here would be to use a UUID to represent each unique record. Should you want to generate the UUID within SQL Server, you could use the NEWID() function (see the documentation for more information). If the value is generated by your application code, you can convert it to the uniqueidentifier type within SQL Server using CONVERT.
For reference, a UUID is a 16-byte unique identifier. It is extremely unlikely that your application or SQL Server would ever generate the same UUID more than once. They look like this:
773c1570-1076-4e19-b728-6d7b0b20895a
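A minimal sketch of both routes (the table dbo.Orders is made up for illustration):
-- generating the UUID inside SQL Server via a column default
CREATE TABLE dbo.Orders
(
OrderID uniqueidentifier NOT NULL PRIMARY KEY DEFAULT NEWID(),
Descr nvarchar(100) NULL
);
-- converting a UUID string produced by application code
INSERT INTO dbo.Orders (OrderID, Descr)
VALUES (CONVERT(uniqueidentifier, '773c1570-1076-4e19-b728-6d7b0b20895a'), N'id generated in the application');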
If you want behaviour that matches that of an IDENTITY column, try:
CREATE SEQUENCE mydb.dbo.mysequence;
And then, repeatedly:
SELECT NEXT VALUE FOR mysequence;
And, if you want to play some more, see here:
https://learn.microsoft.com/en-us/sql/t-sql/statements/create-sequence-transact-sql?view=sql-server-ver15
happy playing ...

Block parallel inserts in a TABLE

Is there a way to block parallel inserts into a table, and not just at the row-lock level?
The insert is very fast (millisecond level), but I want some sort of guarantee that only one row can be inserted for a particular millisecond.
By design it already makes sure the data will never be inconsistent (see load_id_by_date):
CREATE TABLE my_table
(
load_id uniqueidentifier NOT NULL,
load_date datetime NOT NULL DEFAULT (GETDATE()),
load_id_by_date bigint NOT NULL DEFAULT (CAST(GETDATE() as decimal(19,9)) * 1000000000) UNIQUE,
is_processed bit DEFAULT(0),
PRIMARY KEY (load_id_by_date)
)
But I was just wondering if there is a way to stop parallel inserts from happening from multi-threaded calls. A simple (single-threaded) simulation below highlights the issue.
-- TO TEST:
WHILE (1=1)
BEGIN
INSERT INTO my_table (load_id)
SELECT NEWID()
END
will eventually fail with
Msg 2627, Level 14, State 1, Line 6
Violation of UNIQUE KEY constraint 'UQ__config_l__A307163DB6D0D819'. Cannot insert duplicate key in object 'my_table.config_load_id_toprocess'. The duplicate key value is (43507564143441).
But now I am thinking that relying on timestamp uniqueness might be the wrong way to go. The actual calls will not be that fast (two seconds between calls at the fastest), but they are multi-threaded.
Thanks @Mitch Wheat for pointing out the XY problem. I have narrowed down what I needed to do.
The load_id_by_int (formerly load_id_by_date) is now generated from a bigint representation of NEWID(). The chance of a conflict is now acceptable (at least in my opinion). Thanks for the assistance, everyone who commented.
CREATE TABLE my_table
(
load_id uniqueidentifier NOT NULL,
load_date datetime NOT NULL DEFAULT (GETDATE()),
load_id_by_int bigint NOT NULL DEFAULT (ABS(convert(bigint, convert (varbinary(8), NEWID(), 1)))),
is_processed bit DEFAULT(0),
PRIMARY KEY (load_id_by_int)
)
The concept was derived from Convert from UniqueIdentifier to BigInt and Back?
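For reference, the default expression can be tried standalone (a sketch; note that only 8 of the GUID's 16 bytes survive the varbinary(8) conversion, which is where the small residual collision risk comes from):
DECLARE @guid uniqueidentifier = NEWID();
DECLARE @asBigint bigint = ABS(CONVERT(bigint, CONVERT(varbinary(8), @guid)));
SELECT @guid AS source_guid, @asBigint AS derived_bigint;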

Ensure SQLite table only has one row

How can I enforce that a table has only one row? Below is what I tried. The UPDATE trigger might work; however, the INSERT trigger definitely will not. For the INSERT, I would like to use SET, but SET is not supported by SQLite.
CREATE TABLE IF NOT EXISTS `config` (
`id` TINYINT NOT NULL DEFAULT 0,
`subdomain` VARCHAR(45) NOT NULL,
`timezone` CHAR(3) NOT NULL,
`timeout` TINYINT NOT NULL,
`offline` TINYINT NOT NULL,
`hash_config` CHAR(32) NOT NULL,
`hash_points` CHAR(32) NOT NULL,
PRIMARY KEY (`id`));
INSERT INTO config(id,subdomain,timezone,timeout,offline,hash_config,hash_points) VALUES(0,'subdomain','UTC',5,0,'hash_config','hash_points');
CREATE TRIGGER `config_insert_zero`
BEFORE INSERT ON `config`
FOR EACH ROW
BEGIN
-- SET NEW.id=0;
NEW.id=OLD.id;
END;
CREATE TRIGGER `config_update_zero`
BEFORE UPDATE ON `config`
FOR EACH ROW
BEGIN
-- SET NEW.id=0;
NEW.id=OLD.id;
END;
In the general case, to limit the number of rows in a table, you have to prevent any further insert.
In SQLite, this is done with RAISE():
CREATE TRIGGER config_no_insert
BEFORE INSERT ON config
WHEN (SELECT COUNT(*) FROM config) >= 1 -- limit here
BEGIN
SELECT RAISE(FAIL, 'only one row!');
END;
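With that trigger in place, a second insert into the config table from the question is rejected, for example:
INSERT INTO config (id, subdomain, timezone, timeout, offline, hash_config, hash_points)
VALUES (1, 'other', 'UTC', 5, 0, 'hash_config', 'hash_points');
-- Error: only one row!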
However, if the limit is one, you could instead simply constrain the primary key to a fixed value:
CREATE TABLE config (
id INTEGER PRIMARY KEY CHECK (id = 0),
[...]
);
One idea you may want to consider is to make it appear like the table has only one row. In reality, you keep all previous rows because it's quite possible you will one day want to maintain a history of all past values.
Since there is only one row, there really is no need for an ID column, the purpose of which is to uniquely differentiate each row from all the others. You do, however, need a timestamp, which will be used to identify the "one row": the latest row written to the table.
CREATE TABLE `config_history` (
`created` timestamp default current_timestamp,
`subdomain` VARCHAR(45) NOT NULL,
`timezone` CHAR(3) NOT NULL,
`timeout` TINYINT NOT NULL,
`offline` TINYINT NOT NULL,
`hash_config` CHAR(32) NOT NULL,
`hash_points` CHAR(32) NOT NULL,
PRIMARY KEY (`created`)
);
Since you are normally interested in only the last row written (the latest version), the query selects the row with the latest creation date:
select ch.created effective_date, ch.subdomain, ch.timezone, ch.timeout,
ch.offline, ch.hash_config, ch.hash_points
from config_history ch
where ch.created =(
select max( created )
from config_history );
Put a CREATE VIEW config AS in front of this query and you have a view that selects only one row, the latest, from the table. Any query against the view returns that one row:
select *
from config;
An INSTEAD OF trigger on the view can convert updates into inserts: you don't actually want to change a value, just write a new row with the new values. That row then becomes the new "current" row; see the sketch below.
Now you have what appears to be a table with only one row, but you also maintain a complete history of all the past changes ever made to that row.
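A minimal sketch of that view and trigger, built on the config_history table above (the trigger name is an invented example):
CREATE VIEW config AS
select ch.created effective_date, ch.subdomain, ch.timezone, ch.timeout,
ch.offline, ch.hash_config, ch.hash_points
from config_history ch
where ch.created = (select max(created) from config_history);
CREATE TRIGGER config_update INSTEAD OF UPDATE ON config
FOR EACH ROW
BEGIN
-- an "update" becomes a new history row; created defaults to the current timestamp
INSERT INTO config_history (subdomain, timezone, timeout, offline, hash_config, hash_points)
VALUES (NEW.subdomain, NEW.timezone, NEW.timeout, NEW.offline, NEW.hash_config, NEW.hash_points);
END;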

SQL - date column automatically changing each time I update table

I have a date column in my table. Each time I perform an update query on a row, the date gets refreshed to the current date. I have set the column's default value to CURRENT_TIMESTAMP, so why is this happening on every update?
UPDATE
My create query:
CREATE TABLE `ACCOUNTS` (
`id` bigint(7) unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(30) DEFAULT NULL,
`date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`status` varchar(1) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=1234567 DEFAULT CHARSET=latin1
It's probably something to do with date being a keyword. Try changing it to some_date. Was CURRENT_TIMESTAMP intentional?
When you set the default value to CURRENT_TIMESTAMP, SQL will insert the current timestamp only when a new record is created, and will never update it unless you update it yourself. Refreshing will not update the timestamp.
In case you use MariaDB, this documentation page https://mariadb.com/kb/en/library/timestamp/ may have some surprising information for you:
"The timestamp field is generally used to define at which moment in time a row was added or updated and by default will automatically be assigned the current datetime when a record is inserted or updated. The automatic properties only apply to the first TIMESTAMP in the record; subsequent TIMESTAMP columns will not be changed."
Hope this bit helps the next developer who runs into this fantastic feature...
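One way to confirm and fix this in MySQL/MariaDB (a sketch): check whether the column silently picked up the automatic ON UPDATE attribute, and if so redefine it with only an explicit DEFAULT.
SHOW CREATE TABLE `ACCOUNTS`;
-- look for "ON UPDATE CURRENT_TIMESTAMP" on the `date` column
ALTER TABLE `ACCOUNTS`
MODIFY `date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP;
-- explicit DEFAULT, no ON UPDATE clause, so updates no longer touch the column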

What options are available for applying a set level constraint in PostgreSQL?

I have a situation where I need to ensure that there is only one active record with the same object_id and user_id at any time. Here is a representative table:
CREATE TABLE actions (
id SERIAL PRIMARY KEY,
object_id integer,
user_id integer,
active boolean default true,
created_at timestamptz default now()
);
By only one active record at a time, I mean you could have a sequence of inserts like the following:
insert into actions (object_id, user_id, active) values (1, 1, true);
insert into actions (object_id, user_id, active) values (1, 1, false);
but doing a subsequent
insert into actions (object_id, user_id, active) values (1, 1, true);
should fail because at this point in time, there already exists 1 active tuple with object_id = 1 and user_id = 1.
I'm using PostgreSQL 8.4.
I saw this post which looks interesting, but its Oracle specific.
I also saw this post but it requires more care regarding the transaction isolation level. I don't think it would work as-is in read committed mode.
My question is: what other options are available to ensure this kind of constraint?
Edit: Removed the third insert in the first set. I think it was confusing the example. I also added the created_at time stamp to help with the context. To reiterate, there can be multiple (object_id, user_id, false) tuples, but only one (object_id, user_id, true) tuple.
Update: I accepted Craig's answer, but for others who may stumble upon something similar, here is another possible (though suboptimal) solution.
CREATE TABLE action_consistency (
object_id integer,
user_id integer,
count integer default 0,
primary key (object_id, user_id),
check (count >= 0 AND count <= 1)
);
CREATE OR REPLACE FUNCTION keep_action_consistency()
RETURNS TRIGGER AS
$BODY$
BEGIN
IF NEW.active THEN
UPDATE action_consistency
SET count = count + 1
WHERE object_id = NEW.object_id AND
user_id = NEW.user_id;
INSERT INTO action_consistency (object_id, user_id, count)
SELECT NEW.object_id, NEW.user_id, 1
WHERE NOT EXISTS (SELECT 1
FROM action_consistency
WHERE object_id = NEW.object_id AND
user_id = NEW.user_id);
ELSE
-- assuming insert will be active for simplicity
UPDATE action_consistency
SET count = count - 1
WHERE object_id = NEW.object_id AND
user_id = NEW.user_id;
END IF;
RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql;
CREATE TRIGGER ensure_action_consistency AFTER INSERT OR UPDATE ON actions
FOR EACH ROW EXECUTE PROCEDURE keep_action_consistency();
It requires the use of a tracking table. For what I hope are obvious reasons, this is not at all desirable. It means that you have an additional row for each distinct (object_id, user_id) in actions.
Another reason why I accepted @Craig Ringer's answer is that there are foreign key references to actions.id in other tables that are also rendered inactive when a given action tuple changes state. This is why the history table is less ideal in this scenario. Thank you for the comments and answers.
Given your specification that you want to limit only one entry to being active at a time, try:
CREATE TABLE actions (
id SERIAL PRIMARY KEY,
object_id integer,
user_id integer,
active boolean default true,
created_at timestamptz default now()
);
CREATE UNIQUE INDEX actions_unique_active_y ON actions(object_id,user_id) WHERE (active = 't');
This is a partial unique index, a PostgreSQL-specific feature (see partial indexes). It constrains the set such that only one (object_id, user_id) tuple may exist where active is true.
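With that index in place, the insert sequence from the question behaves as required:
insert into actions (object_id, user_id, active) values (1, 1, true);   -- ok
insert into actions (object_id, user_id, active) values (1, 1, false);  -- ok, inactive rows are not constrained
insert into actions (object_id, user_id, active) values (1, 1, true);   -- fails: duplicate key value violates the partial unique index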
While that strictly answers your question as you explained further in comments, I think wildplasser's answer describes the more correct choice and best approach.
You can use a UNIQUE constraint to ensure that a set of columns contains only unique values...
Here, the combination of object_id and user_id has been made unique:
CREATE TABLE actions (
id SERIAL PRIMARY KEY,
object_id integer,
user_id integer,
active boolean default true,
UNIQUE (object_id , user_id )
);
Check Out SQLFIDDLE
Similarly, if you want to make the set of object_id, user_id and active UNIQUE, you can simply add the column name to the UNIQUE list.
CREATE TABLE actions (
id SERIAL PRIMARY KEY,
object_id integer,
user_id integer,
active boolean default true,
UNIQUE (object_id , user_id,active )
);
Check Out SQLFIDDLE
Original:
CREATE TABLE actions (
id SERIAL PRIMARY KEY,
object_id integer,
user_id integer,
active boolean default true
);
my version:
CREATE TABLE actions (
object_id integer NOT NULL REFERENCES objects (id),
user_id integer NOT NULL REFERENCES users(id),
PRIMARY KEY (user_id, object_id)
);
What are the differences:
omitted the surrogate key. It is useless, it enforces no constraint, and nobody will ever reference it
added a (composite) primary key, which happens to be the logical key
changed the two fields to NOT NULL, and made them into foreign keys (what would be the meaning of a row that did not exist in the users or objects table?)
removed the boolean flag. What is the semantic difference between a {user_id,object_id} tuple that does not exist versus one that does exist but has its "active" flag set to false? Why create three states when you only need two?