I'm trying to formulate some check constraints in SQL Anywhere 9.0.
Basically I have a schema like this:
CREATE TABLE limits (
    id INT IDENTITY PRIMARY KEY,
    count INT NOT NULL
);
CREATE TABLE sum (
    user INT,
    limit INT,
    my_number INT NOT NULL CHECK (my_number > 0),
    PRIMARY KEY (user, limit)
);
I'm trying to enforce a constraint so that the sum of my_number for each limit is at most count in the limits table.
I've tried
CHECK ((SELECT sum(my_number) FROM sum WHERE limit = limit) <= (SELECT count FROM limits WHERE id = limit))
and
CHECK (((SELECT sum(my_number) FROM sum WHERE limit = limit) + my_number) <= (SELECT count FROM limits WHERE id = limit))
and they both seem not to do the correct thing: they are both off by one (meaning the insertion only fails once you would get a negative number, not before that).
So my question is: against which version of the table are these subqueries executed? Is it the table before the insertion happens, or does the subquery check for consistency after the insert happens and roll back if it finds the result invalid?
I do not really understand what you are trying to enforce here, but based on this help topic:
Using CHECK constraints on columns
Once a CHECK condition is in place, future values are evaluated
against the condition before a row is modified.
I would go for a BEFORE INSERT trigger. You have more options and can produce a better error message.
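For example, a rough sketch (the trigger name and error number/message are placeholders; sum, limit, and count are reserved words, hence the double quotes, and the exact trigger/RAISERROR syntax may need adjusting for your SQL Anywhere version):
CREATE TRIGGER check_limit BEFORE INSERT ON "sum"
REFERENCING NEW AS new_row
FOR EACH ROW
BEGIN
    DECLARE current_total INT;
    DECLARE max_count INT;
    -- reference the NEW row's limit explicitly, unlike the ambiguous
    -- "limit = limit" in the CHECK attempts above
    SELECT COALESCE(SUM(my_number), 0) INTO current_total
      FROM "sum" WHERE "limit" = new_row."limit";
    SELECT "count" INTO max_count
      FROM limits WHERE id = new_row."limit";
    IF current_total + new_row.my_number > max_count THEN
        RAISERROR 99999 'my_number total would exceed the limit';
    END IF;
END;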
I have a simple table:
CREATE TABLE [dbo].[Word] (
    [WordId] INT IDENTITY (1, 1) NOT NULL,
    [NameId] INT NOT NULL,
    PRIMARY KEY CLUSTERED ([WordId] ASC)
);
I have a unique index on NameId
I am trying to update this table and change one column to a random number:
UPDATE Word SET NameId = ROUND(RAND()*2147483647,0)
I realize there is a very, very small chance this will not work, but it's actually failing every time. Even though the table has only a very small number of rows, the update always fails and says there's a duplicate.
Can anyone tell me what's happening here, and also suggest a way to update this table so that no duplicate values of NameId are created most of the time?
You are updating every NameId with the same value: RAND() is evaluated once per statement, not once per row. Use a WHERE clause to update only one row.
EDIT: This should do the trick you are looking for; NEWID() generates a new value for each row:
UPDATE Word SET NameId = abs(checksum(NewId()) % 2147483647)
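As a quick sanity check, this should return a different candidate value for every row, unlike the RAND() version:
SELECT TOP 10 ABS(CHECKSUM(NEWID()) % 2147483647) AS candidate
FROM Word;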
How many rows do you have? It could be a case of the birthday paradox.
Have you tried doing this:
SELECT ROUND(RAND()*2147483647,0) FROM Word
You will see the same value repeated for every row, because RAND() is evaluated once per statement.
Do the numbers really need to be pseudo-random? You could use ROW_NUMBER() to make them increment.
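For example, a sketch using an updatable CTE in SQL Server (ordering by WordId is an assumption):
WITH numbered AS (
    SELECT NameId, ROW_NUMBER() OVER (ORDER BY WordId) AS rn
    FROM Word
)
UPDATE numbered SET NameId = rn;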
I have a table, something like:
create table state (foo int not null, bar int not null, baz varchar(32));
create unique index state_foo_bar on state (foo, bar);
I'd like to lock a unique record in this table. However, if there's no existing record, I'd like to prevent anyone else from inserting one, but without inserting it myself.
I'd use "FOR UPDATE WITH RS USE AND KEEP EXCLUSIVE LOCKS" but that only seems to work if the record exists.
A) You can let DB2 create every ID number. Let's say you have defined your Customer table
CREATE TABLE Customers
( CustomerID   Int NOT NULL
               GENERATED ALWAYS AS IDENTITY
               PRIMARY KEY
, Name         Varchar(50)
, Billing_Type Char(1)
, Balance      Dec(9,2) NOT NULL DEFAULT
);
Insert rows without specifying the CustomerID, since DB2 will always produce the value for you.
INSERT INTO Customers
(Name, Billing_Type)
VALUES
(:cname, :billtype);
If you need to know what the last value assigned in your session was, you can then use the IDENTITY_VAL_LOCAL() function.
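For example, right after the INSERT in the same session:
SELECT IDENTITY_VAL_LOCAL() AS last_customer_id
FROM SYSIBM.SYSDUMMY1;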
B) In my environment, I generally specify GENERATED BY DEFAULT. This is in part due to the nature of our principal programming language, ILE RPG-IV, where developers have traditionally allowed the compiler to use the entire record definition. This means I can tell everyone to use a sequence to generate ID values for a given table or set of tables.
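A sketch of that sequence approach (the sequence name is a placeholder):
CREATE SEQUENCE customer_id_seq AS INT START WITH 1 INCREMENT BY 1;
INSERT INTO Customers (CustomerID, Name, Billing_Type)
VALUES (NEXT VALUE FOR customer_id_seq, :cname, :billtype);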
You can restrict privileges so that only your user can insert, but if there are others with SECADM or other elevated authorities, they could still insert.
You can do something with a trigger: check the current session user, and only if it is your user does the insert proceed.
if (SESSION_USER <> 'Alex') then
    rollback -- or generate an exception
end if;
It seems that you also want to keep just one row; you can control that in the trigger as well (a fuller sketch follows below):
select count(0) into cnt from state;
if (cnt > 1) then
    rollback -- or generate an exception
end if;
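Putting both checks together, a rough sketch of such a trigger for DB2 LUW (the trigger name, user name, and SQLSTATE values are placeholders; adjust for your platform):
CREATE TRIGGER state_guard
    NO CASCADE BEFORE INSERT ON state
    REFERENCING NEW AS n
    FOR EACH ROW
BEGIN ATOMIC
    IF SESSION_USER <> 'ALEX' THEN
        SIGNAL SQLSTATE '75001' SET MESSAGE_TEXT = 'only ALEX may insert';
    END IF;
    IF (SELECT COUNT(*) FROM state) >= 1 THEN
        SIGNAL SQLSTATE '75002' SET MESSAGE_TEXT = 'state already holds a row';
    END IF;
END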
I would like to insert a record into a table and if the record is already present get its id, otherwise run the insert and get the new record's id.
I will be inserting millions of records and have no idea how to do this in an efficient manner. What I am doing now is to run a select to check if the record is already present, and if not, insert it and get the inserted record's id. As the table is growing I imagine that SELECT is going to kill me.
What I am doing now in python with psycopg2 looks like this:
select = ("SELECT id FROM ... WHERE ...", [...])
cur.execute(*select)
if not cur.rowcount:
insert = ("INSERT INTO ... VALUES ... RETURNING id", [...])
cur.execute(*insert)
rid = cur.fetchone()[0]
Is it maybe possible to do something in a stored procedure like this:
BEGIN
    EXECUTE sql_insert;
    RETURN id;
EXCEPTION WHEN unique_violation THEN
    -- return id of already existing record
    -- from the exception info ?
END;
Any ideas of how to optimize a case like this?
First off, this is obviously not an UPSERT as UPDATE was never mentioned. Similar concurrency issues apply, though.
There will always be a race condition for this kind of task, but you can minimize it to an extremely tiny time slot, while at the same time querying for the ID only once with a data-modifying CTE (introduced with PostgreSQL 9.1):
Given a table tbl:
CREATE TABLE tbl(tbl_id serial PRIMARY KEY, some_col text UNIQUE);
Use this query:
WITH x AS (SELECT 'baz'::text AS some_col) -- enter value(s) once
, y AS (
SELECT x.some_col
, (SELECT t.tbl_id FROM tbl t WHERE t.some_col = x.some_col) AS tbl_id
FROM x
)
, z AS (
INSERT INTO tbl(some_col)
SELECT y.some_col
FROM y
WHERE y.tbl_id IS NULL
RETURNING tbl_id
)
SELECT COALESCE(
(SELECT tbl_id FROM z)
,(SELECT tbl_id FROM y)
);
CTE x is only for convenience: enter values once.
CTE y retrieves tbl_id - if it already exists.
CTE z inserts the new row - if it doesn't.
The final SELECT uses the COALESCE construct to avoid running another query on the table.
Now, this can still fail if a concurrent transaction commits a new row with some_col = 'baz' exactly between CTE y and z, but that's extremely unlikely. If it happens you get a duplicate key violation and have to retry. Nothing lost. If you don't face concurrent writes, you can just forget about this.
You can put this into a plpgsql function and rerun the query on duplicate key error automatically.
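A minimal sketch of such a function (the function name is a placeholder; it uses a plain select-then-insert loop rather than the CTE query, retrying on unique_violation):
CREATE OR REPLACE FUNCTION f_tbl_id(_some_col text)
    RETURNS int AS
$func$
DECLARE
    _tbl_id int;
BEGIN
    LOOP
        -- fast path: the row already exists
        SELECT tbl_id INTO _tbl_id FROM tbl WHERE some_col = _some_col;
        IF FOUND THEN
            RETURN _tbl_id;
        END IF;
        BEGIN
            INSERT INTO tbl (some_col) VALUES (_some_col)
            RETURNING tbl_id INTO _tbl_id;
            RETURN _tbl_id;
        EXCEPTION WHEN unique_violation THEN
            NULL;  -- a concurrent insert won the race; loop and re-select
        END;
    END LOOP;
END
$func$ LANGUAGE plpgsql;
Call it as: SELECT f_tbl_id('baz');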
It goes without saying that you need two indexes in this setup (as displayed in my CREATE TABLE statement above):
a UNIQUE or PRIMARY KEY constraint on tbl_id (which is of serial type!)
another UNIQUE or PRIMARY KEY constraint on some_col
Both implement an index automatically.
I have a table with a column which contains a 'valid until' date, and I want to make sure that it can only be set to NULL in a single row within the table. Is there an easy way to do this?
My table looks like this (postgres):
CREATE TABLE "123".myTable(
    some_id integer NOT NULL,
    valid_from timestamp without time zone NOT NULL DEFAULT now(),
    valid_until timestamp without time zone,
    someString character varying
);
some_id and valid_from are my PK. I want nobody to be able to enter a row with a NULL value in column valid_until if there is already a row with NULL for this PK.
Thank you
In PostgreSQL, you have two basic approaches.
Use 'infinity' instead of null. Then your unique constraint works as expected. Or if you cannot do that:
CREATE UNIQUE INDEX null_valid_until ON myTable (some_id) WHERE valid_until IS NULL;
I have used both approaches. I find the first approach is usually cleaner, and it allows you to make better use of range types and exclusion constraints in newer versions of PostgreSQL (to ensure no two time ranges overlap for a given some_id), but the second approach is often useful where the first cannot be done.
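A rough sketch of that combination (PostgreSQL 9.2+ with the btree_gist extension; the column layout follows the question's table, with 'infinity' standing in for NULL):
CREATE EXTENSION IF NOT EXISTS btree_gist;
CREATE TABLE myTable (
    some_id     integer NOT NULL,
    valid_from  timestamp NOT NULL DEFAULT now(),
    valid_until timestamp NOT NULL DEFAULT 'infinity',
    someString  varchar,
    PRIMARY KEY (some_id, valid_from),
    EXCLUDE USING gist (some_id WITH =,
                        tsrange(valid_from, valid_until) WITH &&)
);
Since an open-ended row runs to 'infinity', two open rows for the same some_id would overlap, and the second one is rejected.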
Depending on the database, you can't have NULL in a primary key (I don't know about all databases, but in SQL Server you can't). The easiest way around this I can think of is to set the datetime to the minimum value, and then add a unique constraint on it or make it part of the primary key.
I suppose another way would be to set up a trigger that checks the other values in the table to see if another entry is NULL, and if there is one, disallow the insert.
As Kevin said in his answer, you can set up a database trigger to stop someone from inserting more than one row where the valid until date is NULL.
The SQL statement that checks for this condition is:
SELECT COUNT(*)
FROM myTable
WHERE valid_until IS NULL;
If the count is not equal to 1, then your table has a problem.
The process that adds a row to this table has to perform the following (a sketch in SQL follows the list):
Find the row where the valid until value is NULL
Update the valid until value to the current date, or some other meaningful date
Insert the new row with the valid until value set to NULL
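In SQL, against the myTable example above, that could look like this (the some_id value and the string are placeholders):
BEGIN;
UPDATE myTable
   SET valid_until = now()
 WHERE some_id = 42 AND valid_until IS NULL;
INSERT INTO myTable (some_id, someString)
VALUES (42, 'new current row');  -- valid_until stays NULL
COMMIT;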
I'm assuming you are storing effective-dated records and are also using a valid-from date.
If so, you could use CRUD stored procedures to enforce this compliance, e.g. the insert closes off any NULL valid-until dates before inserting a new record with a NULL valid-until date.
You will probably need other stored-procedure validation to avoid overlapping records and to allow deleting and editing records. It may be more efficient (in terms of WHERE clauses / faster queries) to use a date far in the future rather than using NULL.
I know only Oracle in sufficient detail, but the same might work in other databases:
Create another column which always contains a fixed value (say '0') and include this column in your unique key.
Don't use NULL but a specific very high or low value. In many cases this is actually easier to use than a NULL value.
Make a function-based unique index on a function that converts the date, including the NULL value, to some other value (e.g. a string representation for dates and 'x' for NULL).
Make a materialized view which gets updated on every change on your main table and put a constraint on that view:
select count(*) cnt from table where valid_until is NULL
might work as the SELECT statement, together with a check constraint limiting the cnt value to 0 and 1.
I would suggest inserting into that table through an SP and putting your constraint in there, as triggers are quite hidden and will likely be forgotten about. If that's not an option, the following trigger will work:
CREATE TABLE dbo.TESTTRIGGER
(
    YourDate Date NULL
)
GO
CREATE TRIGGER DupNullDates
ON dbo.TESTTRIGGER
FOR INSERT, UPDATE
AS
DECLARE @nullCount int
SELECT @nullCount = (SELECT COUNT(*) FROM TESTTRIGGER WHERE YourDate IS NULL)
IF (@nullCount > 1)
BEGIN
    RAISERROR('Cannot have Multiple Nulls', 16, 1)
    ROLLBACK TRAN
END
GO
Well, if you use MS SQL you can just add a unique index on that column; that will allow only one NULL. I wouldn't assume other RDBMSs behave the same way, though: many of them follow the standard and allow multiple NULLs under a unique index.
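For example (index names are placeholders). Note that a plain unique index also forbids duplicate non-NULL dates; a filtered index (SQL Server 2008+) restricts uniqueness to the NULL rows only, which is closer to what the question asks:
CREATE UNIQUE INDEX UQ_myTable_valid_until ON myTable (valid_until);
-- or, constraining only the NULL rows:
CREATE UNIQUE INDEX UQ_myTable_one_null
ON myTable (valid_until)
WHERE valid_until IS NULL;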
I just came across this very simple situation where I needed to shift a primary key up by a certain value. Suppose the following table:
CREATE TABLE Test (
    Id INTEGER PRIMARY KEY,
    Desc TEXT);
Loaded with the following values:
INSERT INTO Test VALUES (0,'one');
INSERT INTO Test VALUES (1,'two');
If there's an attempt at updating the primary key, it will, of course, fail:
UPDATE Test SET Id = Id+1;
Error: column id is not unique
Is there some way to suspend the uniqueness check until after the update query has run?
Find a nice pivot point, and move the data around that pivot. For example, if all your IDs are positive, a good pivot is 0.
When you would normally do
UPDATE Test SET Id = Id+1;
Do this sequence instead
UPDATE Test SET Id = -Id;
UPDATE Test SET Id = -Id +1;
For dates and times, you can find a similar pivot point, but the formula is just a tad harder.
Without understanding the fundamental problem (and yeah, you seem like a victim of code-and-run on this one!), multiplying the ID by one more than the largest value in the table should work:
update test
set id = id * (select max(id) + 1 from test)
However, it's dirty, and really, databases make it hard to change primary keys for a reason...
OK. Second attempt. Try this:
Get the MAX of the key column, then:
UPDATE table SET key = key + max + 1
UPDATE table SET key = key - max
This will avoid duplicate keys at any point in the update process by moving the window far enough.
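Applied to the Test table from the question, where MAX(Id) is 1, the sequence is:
-- Step 1: move everything above the old range (Id + max + 1):
UPDATE Test SET Id = Id + 2;
-- Step 2: shift back down (Id - max); the net effect is Id + 1:
UPDATE Test SET Id = Id - 1;
No duplicate exists at any intermediate point, because after step 1 every key sits above the old maximum.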