I'd like to define a constraint between two nullable FKs in a table: if one is null the other needs a value, but both can't be null and both can't have values. The logic is that the derived table inherits data from either of the FK tables to determine its type. Also, for fun bonus points, is this a bad idea?
One way to achieve it is to simply write down what "exclusive OR" actually means:
CHECK (
(FK1 IS NOT NULL AND FK2 IS NULL)
OR (FK1 IS NULL AND FK2 IS NOT NULL)
)
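For the two-FK case in the question, here is a minimal sketch of how that check might sit in a full table definition. The parent tables dbo.TypeA and dbo.TypeB, their key columns, and all the names below are made up for illustration:
CREATE TABLE dbo.Derived (
    DerivedID INT NOT NULL PRIMARY KEY,
    FK1 INT NULL REFERENCES dbo.TypeA (TypeAID),   -- hypothetical parent table
    FK2 INT NULL REFERENCES dbo.TypeB (TypeBID),   -- hypothetical parent table
    CONSTRAINT ck_exactly_one_fk CHECK (
        (FK1 IS NOT NULL AND FK2 IS NULL)
        OR (FK1 IS NULL AND FK2 IS NOT NULL)
    )
);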
However, if you have many FKs, the above method can quickly become unwieldy, in which case you can do something like this:
CHECK (
1 = (
(CASE WHEN FK1 IS NULL THEN 0 ELSE 1 END)
+ (CASE WHEN FK2 IS NULL THEN 0 ELSE 1 END)
+ (CASE WHEN FK3 IS NULL THEN 0 ELSE 1 END)
+ (CASE WHEN FK4 IS NULL THEN 0 ELSE 1 END)
...
)
)
BTW, there are legitimate uses for that pattern, for example this one (albeit not applicable to MS SQL Server due to the lack of deferred constraints). Whether it is legitimate in your particular case, I can't judge based on the information you provided so far.
You can use a check constraint:
create table #t (
a int,
b int);
alter table #t add constraint c1
check ( coalesce(a, b) is not null   -- at least one of a, b is non-null
        and a*b is null );           -- a*b is NULL whenever either column is NULL, so at least one is null
insert into #t values ( 1,null);
insert into #t values ( null ,null);
Running the second insert fails with:
The INSERT statement conflicted with the CHECK constraint "c1".
An alternative is to enforce this check in a stored procedure: before a record is inserted into the derived table, the condition must be satisfied, otherwise the insert fails or returns an error.
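A rough sketch of that procedural approach, assuming a permanent dbo.t table with the same two columns as the temp table above (the procedure name, table name, and error text are illustrative):
CREATE PROCEDURE dbo.insert_t   -- hypothetical procedure name
    @a INT,
    @b INT
AS
BEGIN
    -- enforce the exclusive-OR rule before inserting
    IF (@a IS NULL AND @b IS NULL) OR (@a IS NOT NULL AND @b IS NOT NULL)
    BEGIN
        RAISERROR ('Exactly one of @a and @b must have a value.', 16, 1);
        RETURN;
    END;

    INSERT INTO dbo.t (a, b) VALUES (@a, @b);
END;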
Related
I want to allow only a set number of values to be inserted into column A and, depending on the value entered, allow only a certain range of values to be inserted into column B.
For example
If A = 1, B can be between 1 and 9
If A = 2, B can be between 10 AND 19
If A = 3, B can be between 20 AND 29
How can I achieve this?
I figured check constraints are the best place to start. A simple constraint will ensure only values 1-3 can be added to column A. Such as:
CREATE TABLE dbo.test (
col_a INT,
col_b INT,
CONSTRAINT ch_col_a_valid_range CHECK (col_a BETWEEN 1 AND 3)
)
GO
Then I figured I could use a scalar function to determine whether col_b is valid, passing in the values of col_a and col_b.
CREATE FUNCTION dbo.value_is_valid (
@a INT,
@b INT
)
RETURNS BIT
AS
BEGIN
IF (@a = 1 AND @b BETWEEN 1 AND 9) RETURN 1;
IF (@a = 2 AND @b BETWEEN 10 AND 19) RETURN 1;
IF (@a = 3 AND @b BETWEEN 20 AND 29) RETURN 1;
RETURN 0;
END
GO
Then add the constraint to the table, and call the function as part of the check.
CREATE TABLE dbo.test (
col_a INT,
col_b INT,
CONSTRAINT ch_col_a_valid_range CHECK (col_a BETWEEN 1 AND 3),
CONSTRAINT ch_col_b_valid_based_on_a CHECK(dbo.value_is_valid(col_a, col_b) = 1)
)
GO
However, the following insert fails, complaining about a conflict with the ch_col_b_valid_based_on_a constraint that was added.
INSERT INTO dbo.test (
col_a,
col_b
)
VALUES (1, 9)
The INSERT statement conflicted with the CHECK constraint "ch_col_b_valid_based_on_a". The conflict occurred in database " MyDB", table "dbo.test".
What can I do to work around this and achieve the result mentioned above?
On looking back, this approach of using a scalar function in a check constraint works exactly as expected.
One method is a check constraint:
CREATE TABLE dbo.test (
col_a INT,
col_b INT,
CONSTRAINT ch_col_a_valid_range CHECK (col_a BETWEEN 1 AND 3),
CONSTRAINT chk_col_a_colb
CHECK ( (col_a = 1 AND col_b BETWEEN 1 AND 9) OR
(col_a = 2 AND col_b BETWEEN 10 AND 19) OR
(col_a = 3 AND col_b BETWEEN 20 AND 29)
)
);
However, I might be inclined to create an AB_valid table with a list of valid pairs and use a foreign key constraint. That way, the list of valid values could be maintained dynamically rather than requiring modification to the table definition.
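A sketch of that lookup-table idea (the table and constraint names are illustrative); the valid pairs are then maintained as data rather than as DDL:
CREATE TABLE dbo.AB_valid (
    col_a INT NOT NULL,
    col_b INT NOT NULL,
    CONSTRAINT pk_AB_valid PRIMARY KEY (col_a, col_b)
);

-- populate with the currently valid pairs, e.g. (1,1) .. (1,9), (2,10) .. (2,19), (3,20) .. (3,29)

CREATE TABLE dbo.test (
    col_a INT,
    col_b INT,
    CONSTRAINT fk_test_AB_valid FOREIGN KEY (col_a, col_b)
        REFERENCES dbo.AB_valid (col_a, col_b)
);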
You could use some math to create a simpler constraint.
CREATE TABLE dbo.test (
col_a INT,
col_b INT,
CONSTRAINT ch_col_a_valid_range CHECK (col_a BETWEEN 1 AND 3),
CONSTRAINT ch_col_b_valid_based_on_a CHECK(col_b/10 + 1 = col_a)
);
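To see how the arithmetic plays out, a few illustrative inserts (integer division truncates, so col_b/10 is 0 for 1-9, 1 for 10-19, and 2 for 20-29):
INSERT INTO dbo.test (col_a, col_b) VALUES (1, 9);   -- 9/10 + 1 = 1, passes
INSERT INTO dbo.test (col_a, col_b) VALUES (2, 15);  -- 15/10 + 1 = 2, passes
INSERT INTO dbo.test (col_a, col_b) VALUES (1, 15);  -- 15/10 + 1 = 2 <> 1, violates ch_col_b_valid_based_on_a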
I have a column Col1 nvarchar(10) null
I have to write a check constraint or trigger (I think it's not possible with a check constraint) that will change Col1 from null to not null when and only when some data is entered into the field; or rather, it will prevent the column from being set back to null after some non-null value has been entered into it.
This is because of an application that first checks whether that field is null and, if it is, adds some value to it. After that the field cannot be changed back to null.
For now I have the following:
create trigger [TRG_Col1_NotNull] on my.Table
instead of update
as
begin
if exists (
select * from inserted as i
where i.Col1 is null
)
raiserror ('You can not change the value of Col1 to null', 16, 1)
rollback transaction
end
Is this the best (or even correct) way to do this or is there any better and easier solution for this (maybe check constraint somehow)?
OK! The update!
The application works like this:
It first saves data to the table, with the PK column, Col1, Col2, Col3 getting the values 1, null, text, date. After that it checks whether Col1 is null, reads the PK column, and writes its value to Col1. So I get 1, 1, text, date.
This could do what you asked (I know it's an AFTER UPDATE trigger, so values are actually changed twice, but I would not use INSTEAD OF: what if other columns should be updated as well?).
CREATE TABLE TES1 (ID INT, COL1 VARCHAR(10));
INSERT INTO TES1 VALUES (1,'X');
INSERT INTO TES1 VALUES (2,NULL);
CREATE TRIGGER TRG1 ON TES1
AFTER UPDATE
AS
BEGIN
-- keep the old value whenever it was already non-null; otherwise accept the new one
UPDATE A SET COL1 = CASE WHEN d.COL1 IS NULL THEN i.COL1 ELSE d.COL1 END
FROM TES1 A
INNER JOIN DELETED d ON A.ID = d.ID
INNER JOIN INSERTED i ON A.ID = i.ID;
END
Sample updates (given the trigger, ID = 1 should keep 'X' throughout, while ID = 2 should take 'B', its first non-null value, and then keep it):
UPDATE TES1 SET COL1 = NULL WHERE ID=1;
SELECT * FROM TES1;
UPDATE TES1 SET COL1 = 'A' WHERE ID=1;
SELECT * FROM TES1;
UPDATE TES1 SET COL1 = 'B' WHERE ID=2;
SELECT * FROM TES1;
UPDATE TES1 SET COL1 = 'C' WHERE ID=2;
SELECT * FROM TES1;
You can create a CHECK constraint that will work only for new values.
ALTER TABLE [dbo].[Test] WITH NOCHECK ADD CONSTRAINT [CK_Test] CHECK (([Col1] IS NOT NULL))
GO
ALTER TABLE [dbo].[Test] CHECK CONSTRAINT [CK_Test]
GO
The WITH NOCHECK option means the constraint will be created successfully even if the table already has NULL values.
But after this constraint is created, any attempt to insert a new NULL value or update an existing value to NULL will fail.
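A quick demonstration of that behaviour, assuming for simplicity that dbo.Test has just the nullable Col1 column and already contains a NULL row from before the constraint was added:
-- the pre-existing NULL row is tolerated because the constraint was added WITH NOCHECK
SELECT * FROM dbo.Test;

-- but any new NULL is rejected
INSERT INTO dbo.Test (Col1) VALUES (NULL);   -- fails: conflicts with CK_Test
UPDATE dbo.Test SET Col1 = NULL;             -- fails for the same reason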
I need to write T-SQL that checks where T1.PercentComplete falls between T2.StageFrom and T2.StageTo, and then gets the matching T2.Bonus_Prec joined back to T1.
T1:
T2:
The desired result for T2.Bonus_Prec is 0.02 since T1.Percent_Complete is .27, which is between 0 and 1.
The thing is that each Key can have a different number of T2.StageID values, between 1 and 6.
If a Key has just one T2.StageID, it will be 0 (a fast way for me to know that there is only one bonus option).
If it has more than one, the StageIDs will start at 1. (This can be changed if needed.)
T1:
DROP TABLE T1;
CREATE TABLE T1(
Key VARCHAR(10) NOT NULL PRIMARY KEY
,Percent_Complete_ NUMBER(16,2) NOT NULL
);
INSERT INTO T1(Key,Percent_Complete_) VALUES ('Key Vendor',Percent_Complete);
INSERT INTO T1(Key,Percent_Complete_) VALUES ('***',0.27);
T2:
DROP TABLE T2;
CREATE TABLE T2(
Key VARCHAR(50) NOT NULL
,StageID INT NOT NULL
,Stage_From NUMERIC(10,2) NOT NULL
,Stage_To NUMERIC(8,2) NOT NULL
,Stage_Bonus_Prec NUMERIC(16,2) NOT NULL
);
INSERT INTO T2(Key,StageID,Stage_From,Stage_To,Stage_Bonus_Prec) VALUES ('Key',Stage_Id,Stage_From,Stage_To,Stage_Bonus_Prec);
INSERT INTO T2(Key,StageID,Stage_From,Stage_To,Stage_Bonus_Prec) VALUES ('***',1,0,0.8,0.02);
INSERT INTO T2(Key,StageID,Stage_From,Stage_To,Stage_Bonus_Prec) VALUES ('***',2,0.8,1,0.035);
INSERT INTO T2(Key,StageID,Stage_From,Stage_To,Stage_Bonus_Prec) VALUES ('***',3,1,-1,0.05);
OUTPUT:
+-----+------------------+--------------------+
| Key | Percent_Complete | [Stage_Bonus_Prec] |
+-----+------------------+--------------------+
| *** | 0.27             | 0.02               |
+-----+------------------+--------------------+
Here is a SQLFiddle with these values
It is still not clear what you are trying to do, but I made an attempt. Please notice I also corrected a number of issues with the DDL and sample data you posted.
if OBJECT_ID('T1') is not null
drop table T1
CREATE TABLE T1(
KeyVendor VARCHAR(10) NOT NULL PRIMARY KEY
,PercentComplete VARCHAR(16) NOT NULL
);
INSERT INTO T1(KeyVendor,PercentComplete) VALUES ('***','0.27');
if OBJECT_ID('T2') is not null
drop table T2
CREATE TABLE T2(
MyKey VARCHAR(50) NOT NULL
,StageID INT NOT NULL
,Stage_From NUMERIC(10,0) NOT NULL
,Stage_To NUMERIC(8,0) NOT NULL
,Stage_Bonus_Prec NUMERIC(16,3) NOT NULL
);
INSERT INTO T2(MyKey,StageID,Stage_From,Stage_To,Stage_Bonus_Prec) VALUES ('***',1,0,0.8,0.02);
INSERT INTO T2(MyKey,StageID,Stage_From,Stage_To,Stage_Bonus_Prec) VALUES ('***',2,0.8,1,0.035);
INSERT INTO T2(MyKey,StageID,Stage_From,Stage_To,Stage_Bonus_Prec) VALUES ('***',3,1,-1,0.05);
select *
from T1
cross apply
(
select top 1 Stage_Bonus_Prec
from T2
where t1.PercentComplete >= t2.Stage_Bonus_Prec
and t1.KeyVendor = t2.MyKey
order by Stage_Bonus_Prec
) x
Taking a shot at this as well, since it's still a bit unclear:
SELECT t1.percent_complete, t2.Stage_Bonus_Prec
FROM T1 INNER JOIN T2
ON T1.[key vendor] = T2.[Key] AND
T1.[percent_complete] BETWEEN T2.Stage_From AND T2.Stage_To
Joining T1 and T2 on [Key Vendor] and [Key] and using the BETWEEN operator to find the percent_complete value that is between Stage_From and Stage_To.
I think I'm still missing something, since I'm confused about where the Key value of *** comes from in your desired results.
SQLFiddle of this in action, based on a slightly fixed up version of your DDL (you put your field names in their own data record, I've removed them since they don't belong there).
I have to create a table with either the first name and last name of a person, or the name of an organization. There has to be exactly one of the two. For example, one row of the table is -
first_name last_name organization
---------- --------- ------------
John Smith null
or another row can be -
first_name last_name organization
---------- --------- --------------------
null null HappyStrawberry inc.
Is there a way to define this in SQL language? Or should I just define all three columns being able to get null values?
Your situation is a classical example of what some ER dialects call "entity subtyping".
You have an entity called "Person" (or "Party" or something of that ilk), and you have two distinct sub-entities called "NaturalPerson" and "LegalPerson", respectively.
The canonical way to model ER entity subtypes in a relational database is using three tables: one for the "Person" entity with all columns that are "common" to both NaturalPerson and LegalPerson (i.e. that exist for Persons, regardless of their type), and one per identified sub-entity holding all the columns that pertain to that sub-entity in particular.
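A minimal sketch of that three-table shape (names and columns are illustrative, not a full design):
CREATE TABLE Party (
    party_id INT NOT NULL PRIMARY KEY
    -- columns common to both kinds of party go here
);

CREATE TABLE NaturalPerson (
    party_id   INT NOT NULL PRIMARY KEY REFERENCES Party (party_id),
    first_name VARCHAR(50) NOT NULL,
    last_name  VARCHAR(50) NOT NULL
);

CREATE TABLE LegalPerson (
    party_id     INT NOT NULL PRIMARY KEY REFERENCES Party (party_id),
    organization VARCHAR(50) NOT NULL
);
Note that this guarantees each Party has at most one row in each subtype table; guaranteeing that every Party has exactly one subtype row takes extra machinery not shown here.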
You can read more on this in Fabian Pascal, "Practical Issues in Database Management".
You could use a check constraint, like:
create table YourTable (
col1 varchar(50)
, col2 varchar(50)
, col3 varchar(50)
, constraint TheConstraint check ( 1 =
case when col1 is null then 1 else 0 end +
case when col2 is null then 1 else 0 end +
case when col3 is null then 1 else 0 end )
)
Another way is to add a type column (EAV method):
create table YourTable (
type varchar(10) check (type in ('FirstName', 'LastName', 'Organization'))
, value varchar(50))
insert YourTable values ('LastName', 'Obama')
insert YourTable values ('FirstName', 'Barack')
insert YourTable values ('Organization', 'White House')
You can do this using a constraint:
CREATE TABLE [dbo].[Contact](
[first_name] [varchar](50) NULL,
[last_name] [varchar](50) NULL,
[organization] [varchar](50) NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Contact] WITH CHECK ADD CONSTRAINT [CK_Contact] CHECK (([first_name] IS NOT NULL OR [last_name] IS NOT NULL OR [organization] IS NOT NULL))
GO
ALTER TABLE [dbo].[Contact] CHECK CONSTRAINT [CK_Contact]
GO
The CK_Contact constraint ensures that at least one value was entered.
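A few inserts illustrating which rows that constraint accepts and rejects:
INSERT INTO dbo.Contact (first_name, last_name, organization) VALUES ('John', 'Smith', NULL);              -- passes
INSERT INTO dbo.Contact (first_name, last_name, organization) VALUES (NULL, NULL, 'HappyStrawberry inc.'); -- passes
INSERT INTO dbo.Contact (first_name, last_name, organization) VALUES (NULL, NULL, NULL);                   -- fails: conflicts with CK_Contact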
Is it possible to create a table which has one column (but not the primary key column) that is auto-increment, so that when I insert a row I don't need to fill in the value myself, and the DB will fill in that column's value for me (and increment it with every new insert)?
Thank you.
Yes, of course it is possible. Just make this column a unique key (not a primary key) and declare it with a special attribute: "IDENTITY" for SQL Server, or "AUTO_INCREMENT" for MySQL (see the example below). Another column can then be the primary key.
On MySQL database the table could be declared like this:
CREATE TABLE `mytable` (
`Name` VARCHAR(50) NOT NULL,
`My_autoincrement_column` INTEGER(11) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`Name`),
UNIQUE KEY `My_autoincrement_column` (`My_autoincrement_column`)
);
Yes, you can do this. Here is a sample for SQL Server using IDENTITY:
CREATE TABLE MyTable (
PrimaryKey varchar(10) PRIMARY KEY,
IdentityColumn int IDENTITY(1,1) NOT NULL,
DefaultColumn CHAR(1) NOT NULL DEFAULT ('N')
)
INSERT INTO MyTable (PrimaryKey) VALUES ('A')
INSERT INTO MyTable (PrimaryKey) VALUES ('B')
INSERT INTO MyTable (PrimaryKey, DefaultColumn) VALUES ('C', 'Y')
INSERT INTO MyTable (PrimaryKey, DefaultColumn) VALUES ('D', 'Y')
INSERT INTO MyTable (PrimaryKey, DefaultColumn) VALUES ('E', DEFAULT)
--INSERT INTO MyTable (PrimaryKey, DefaultColumn) VALUES ('F', NULL) -- ERROR
--> Cannot insert the value NULL into column 'DefaultColumn', table 'tempdb.dbo.MyTable'; column does not allow nulls. INSERT fails.
SELECT * FROM MyTable
Here is an example using SQL Server functions to roll your own incrementing column. This is by no means fault tolerant, nor the way I would do it (I'd use the identity feature). However, it is good to know that you can use functions to return default values.
DROP TABLE MyTable
GO
DROP FUNCTION get_default_for_mytable
GO
CREATE FUNCTION get_default_for_mytable
()
RETURNS INT
AS
BEGIN
-- Declare the return variable here
DECLARE @ResultVar int
-- Add the T-SQL statements to compute the return value here
SET @ResultVar = COALESCE((SELECT MAX(HomeBrewedIdentityColumn) FROM MyTable),0) + 1
-- Return the result of the function
RETURN @ResultVar
END
GO
CREATE TABLE MyTable (
PrimaryKey varchar(10) PRIMARY KEY,
IdentityColumn int IDENTITY(1,1) NOT NULL,
DefaultColumn CHAR(1) NOT NULL DEFAULT ('N'),
HomeBrewedIdentityColumn int NOT NULL DEFAULT(dbo.get_default_for_mytable())
)
GO
INSERT INTO MyTable (PrimaryKey) VALUES ('A')
INSERT INTO MyTable (PrimaryKey) VALUES ('B')
INSERT INTO MyTable (PrimaryKey, DefaultColumn) VALUES ('C', 'Y')
INSERT INTO MyTable (PrimaryKey, DefaultColumn) VALUES ('D', 'Y')
INSERT INTO MyTable (PrimaryKey, DefaultColumn) VALUES ('E', DEFAULT)
--INSERT INTO MyTable (PrimaryKey, DefaultColumn) VALUES ('F', NULL) -- ERROR
--> Cannot insert the value NULL into column 'DefaultColumn', table 'tempdb.dbo.MyTable'; column does not allow nulls. INSERT fails.
SELECT * FROM MyTable
Results
PrimaryKey IdentityColumn DefaultColumn HomeBrewedIdentityColumn
---------- -------------- ------------- ------------------------
A 1 N 1
B 2 N 2
C 3 Y 3
D 4 Y 4
E 5 N 5
You can have only one IDENTITY (auto-increment) column per table. This column doesn't have to be the primary key, but that would mean you have to insert the primary key values yourself.
If you already have a primary key which is auto-increment, then I would try to use that if possible.
If you are trying to get a row ID to range over for querying, then I would look at creating a view which has the row ID in it (not SQL 2000 or below); see the sketch below.
Could you add what your primary key is and what you intend to use the auto-increment column for? It might help come up with a solution.
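A rough sketch of that view idea, using ROW_NUMBER() and borrowing the MyTable example from the answer above (the view name is made up; requires SQL Server 2005 or later):
CREATE VIEW dbo.MyTableWithRowId
AS
SELECT PrimaryKey,
       DefaultColumn,
       ROW_NUMBER() OVER (ORDER BY PrimaryKey) AS RowId   -- computed at query time, not stored
FROM dbo.MyTable;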
On SQL Server this is called an identity column.
Oracle and DB2 have sequences, but I think you are looking for identity, and all major DBMSs (MySQL, SQL Server, DB2, Oracle) support it.