Consider this table definition:
CREATE TABLE foo (
a int not null, -- Implicit not null constraint
b int check (b is not null), -- Explicit not null constraint
c int check (c > 1) -- Explicit constraint
);
I want to discover all the explicit check constraints, i.e. constraints that the user defined in their DDL statement by using the CHECK syntax. Those constraints may or may not be named. In the above example, they're not named. How can I discover only the "explicit" check constraints, ignoring the implicit ones?
E.g. when I query ALL_CONSTRAINTS:
SELECT *
FROM all_constraints
WHERE constraint_type = 'C'
AND table_name = 'FOO';
I don't see any way to distinguish the explicit constraints from the implicit ones:
CONSTRAINT_NAME    SEARCH_CONDITION    GENERATED
----------------------------------------------------
SYS_C00120656      "A" IS NOT NULL     GENERATED NAME
SYS_C00120657      b is not null       GENERATED NAME
SYS_C00120658      c > 1               GENERATED NAME
I could of course rely on a heuristic based on the unlikelihood of someone writing the exact "COLUMN_NAME" IS NOT NULL syntax (including the double quotes) themselves:
SELECT *
FROM all_constraints
WHERE constraint_type = 'C'
AND table_name = 'FOO'
AND search_condition_vc NOT IN (
SELECT '"' || column_name || '" IS NOT NULL'
FROM all_tab_cols
WHERE table_name = 'FOO'
AND nullable = 'N'
);
This gives me the wanted result:
CONSTRAINT_NAME    SEARCH_CONDITION    GENERATED
----------------------------------------------------
SYS_C00120657      b is not null       GENERATED NAME
SYS_C00120658      c > 1               GENERATED NAME
I'm putting this as an answer here, as this might be good enough for some people, but I'd really like a more reliable solution.
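A related sanity check (a manual one, not a programmatic solution): the DDL generated by DBMS_METADATA normally renders implicit NOT NULL constraints as plain column attributes, while explicit CHECK clauses show up as separate constraint clauses, so eyeballing the generated DDL can at least confirm the heuristic. This is an observation about the default output, not a guaranteed behaviour:
SELECT DBMS_METADATA.GET_DDL('TABLE', 'FOO') FROM dual;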
SYS.CDEF$.TYPE# knows the difference between implicit and explicit check constraints. Implicit check constraints are stored as 7, explicit check constraints are stored as 1.
--Explicit constraints only.
select constraint_name, search_condition
from dba_constraints
where (owner, constraint_name) not in
(
--Implicit constraints.
select dba_users.username, sys.con$.name
from sys.cdef$
join sys.con$
on cdef$.con# = con$.con#
join dba_users
on sys.con$.owner# = dba_users.user_id
where cdef$.type# = 7
)
and constraint_type = 'C'
and table_name = 'FOO'
order by 1;
CONSTRAINT_NAME SEARCH_CONDITION
--------------- ----------------
SYS_C00106940 b is not null
SYS_C00106941 c > 1
This solution has the obvious disadvantage of relying on undocumented tables. But it does appear to be more accurate than relying on the text of the condition. Some implicit check constraints are not created with double quotes. I can't reproduce that issue, but I found it happening to the table SYS.TAB$.
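If you want to sanity-check the TYPE# values for your own table before trusting them, a query along these lines should do it (a sketch, assuming the usual SYS.CDEF$ / SYS.CON$ / SYS.OBJ$ layout and SELECT access on those tables):
--Raw TYPE# values for FOO's constraints (7 = implicit, 1 = explicit check).
select con$.name, cdef$.type#
from sys.cdef$
join sys.con$
on cdef$.con# = con$.con#
join sys.obj$
on cdef$.obj# = obj$.obj#
where obj$.name = 'FOO'
order by 1;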
Idea: you could compare the table with a "shadow" counterpart. CREATE TABLE ... AS SELECT does not preserve user-defined check constraints:
-- original table
CREATE TABLE foo (
id int PRIMARY KEY NOT NULL,
a int not null, -- Implicit not null constraint
b int check (b is not null), -- Explicit not null constraint
c int check (c = 1), -- Explicit constraint
d INT CONSTRAINT my_check CHECK (d = 3)
);
-- clone without data (it should be stored in a different schema than the actual objects)
CREATE TABLE shadow_foo
AS
SELECT *
FROM foo
WHERE 1=2;
-- for Oracle 18c you could consider private temporary tables
CREATE PRIVATE TEMPORARY TABLE ora$shadow_foo ON COMMIT DROP DEFINITION
AS
SELECT * FROM foo WHERE 1=2;
And the main query:
SELECT c.*
FROM (SELECT * FROM all_constraints WHERE TABLE_NAME NOT LIKE 'SHADOW%') c
LEFT JOIN (SELECT * FROM all_constraints WHERE TABLE_NAME LIKE 'SHADOW%') c2
ON c2.table_name = 'SHADOW_' || c.table_name
AND c2.owner = c.owner
AND c2.search_condition_vc = c.search_condition_vc
WHERE c2.owner IS NULL
AND c.constraint_type = 'C'
AND c.owner LIKE 'FIDDLE%'
db<>fiddle demo
Related
Let's say I have these 3 constraints:
ALTER TABLE actor ADD CONSTRAINT PK_ACTORID PRIMARY KEY (actor_id);
ALTER TABLE film ADD CONSTRAINT PK_FILMID PRIMARY KEY (film_id);
ALTER TABLE film_actor ADD CONSTRAINT FK_FILMID1 FOREIGN KEY (film_id) REFERENCES film;
I need to write SQL to show these table constraints:
-- Check which constraints added in ACTOR table
SELECT OWNER, CONSTRAINT_NAME, TABLE_NAME, SEARCH_CONDITION, INDEX_NAME
FROM USER_CONSTRAINTS
WHERE TABLE_NAME = 'ACTOR';
-- Check which constraints added in FILM_ACTOR table
SELECT OWNER, CONSTRAINT_NAME, TABLE_NAME, SEARCH_CONDITION, INDEX_NAME
FROM USER_CONSTRAINTS
WHERE TABLE_NAME = 'FILM_ACTOR';
And the results end up as two separate result sets, one per query.
My question is: how can I combine the two SQL statements I wrote into a single statement, and how can I format the displayed result?
Would changing your WHERE clause work?
Something like this:
SELECT OWNER, CONSTRAINT_NAME, TABLE_NAME, SEARCH_CONDITION, INDEX_NAME
FROM USER_CONSTRAINTS
WHERE TABLE_NAME = 'ACTOR' OR TABLE_NAME = 'FILM' OR TABLE_NAME = 'FILM_ACTOR';
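An IN list with an ORDER BY does the same thing a bit more compactly and groups the output per table (note that Oracle stores unquoted identifiers in uppercase, hence the uppercase literals):
SELECT OWNER, CONSTRAINT_NAME, TABLE_NAME, SEARCH_CONDITION, INDEX_NAME
FROM USER_CONSTRAINTS
WHERE TABLE_NAME IN ('ACTOR', 'FILM', 'FILM_ACTOR')
ORDER BY TABLE_NAME, CONSTRAINT_NAME;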
I have a UNIQUE KEY constraint on my table. For my project I dropped that UNIQUE KEY from the table, but when I try to enter data it still gives a unique key violation error.
What I have already done:
checked that the changes were committed,
refreshed the entire database,
tried to find that constraint in the schema:
SELECT DISTINCT table_name
FROM all_indexes
WHERE index_name = 'CONSTRAINT_NAME';
The above query returns no data (Constraint not found).
I want to be able to insert duplicate data for one employee without getting the unique key violation error.
Below are some queries you can run to check whether the index or constraint still exists on the table; once you find it, you can simply drop it. Maybe your unique index was created before the constraint was created:
SELECT * FROM user_cons_columns WHERE table_name = '<your table name>';
select column_name from user_ind_columns where index_name = '<index_name>';
select column_name from user_cons_columns where constraint_name = '<index_name>';
Use the command below to drop the index:
DROP INDEX index_name;
Use the command below to drop the constraint:
ALTER TABLE <table_name>
DROP CONSTRAINT <constraint_name>;
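If you're not sure which unique index is still enforcing the rule, a query along these lines (a sketch against the standard USER_INDEXES / USER_IND_COLUMNS views) lists the remaining unique indexes and their columns:
SELECT i.index_name, c.column_name
FROM user_indexes i
JOIN user_ind_columns c ON c.index_name = i.index_name
WHERE i.table_name = '<your table name>'
AND i.uniqueness = 'UNIQUE';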
[TL;DR] If you have a UNIQUE INDEX without a UNIQUE CONSTRAINT then you will get the same error message. You need to make sure you have dropped both the index and the constraint.
CREATE TABLE test_data ( id NUMBER );
CREATE UNIQUE INDEX test_data__id__u ON test_data ( id );
ALTER TABLE test_data ADD CONSTRAINT test_data__id__u UNIQUE ( id );
INSERT INTO test_data ( id ) VALUES ( 1 );
INSERT INTO test_data ( id ) VALUES ( 1 );
Will insert one row and will give ORA-00001: unique constraint (FIDDLE_ALGCPMTPWFJZCXIPXNLR.TEST_DATA__ID__U) violated for the second.
If you do:
SELECT * FROM user_constraints WHERE table_name = 'TEST_DATA';
SELECT * FROM user_indexes WHERE table_name = 'TEST_DATA';
Then it will show there is an index and a constraint on the table.
Then if you drop the constraint:
ALTER TABLE test_data DROP CONSTRAINT test_data__id__u;
and try to do:
INSERT INTO test_data ( id ) VALUES ( 1 );
Then you will get:
ORA-00001: unique constraint (FIDDLE_ALGCPMTPWFJZCXIPXNLR.TEST_DATA__ID__U) violated
If you look at the indexes and constraints again:
SELECT * FROM user_constraints WHERE table_name = 'TEST_DATA';
SELECT * FROM user_indexes WHERE table_name = 'TEST_DATA';
Then it will show no constraints but the index is still there. You need to make sure the unique index has been dropped too.
DROP INDEX test_data__id__u;
INSERT INTO test_data ( id ) VALUES ( 1 );
Will then insert the row.
db<>fiddle here
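To see index and constraint side by side (and spot an index that survived without its constraint, as above), USER_CONSTRAINTS exposes the name of the backing index, so a join roughly like this can help (a sketch):
SELECT i.index_name, i.uniqueness, c.constraint_name, c.constraint_type
FROM user_indexes i
LEFT JOIN user_constraints c
ON c.index_name = i.index_name
AND c.table_name = i.table_name
WHERE i.table_name = 'TEST_DATA';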
I created table "A" with a VARCHAR2(3000) column that has a NOT NULL constraint. Table "A" has one column, "someColumn", which is the primary key. As below:
CREATE TABLE A (
"someColumn" VARCHAR2(3000) NOT NULL
)
ALTER TABLE A
ADD CONSTRAINT pk_A PRIMARY KEY (
"someColumn"
)
Now I want to change the datatype from VARCHAR2(3000) to VARCHAR2(4000), but I don't want to change the constraint. So I used:
ALTER TABLE A
MODIFY
(
"someColumn" VARCHAR2(4000)
)
It worked, and now I have DDL like below:
PROMPT CREATE TABLE a
CREATE TABLE a (
"someColumn" VARCHAR2(4000) NOT NULL
)
/
PROMPT ALTER TABLE a ADD CONSTRAINT pk_a PRIMARY KEY
ALTER TABLE a
ADD CONSTRAINT pk_a PRIMARY KEY (
"someColumn"
)
/
Then I used code as below:
ALTER TABLE A
MODIFY
(
"someColumn" VARCHAR2(3000) NULL
)
I got the message "Alter table, executed..." but when I checked the DDL again, I still have the NOT NULL constraint along with the new datatype (4000).
Is this an Oracle error?
To be sure that I'm not looking at cached results in my "SQLTools", I am using:
SELECT * FROM all_tab_cols WHERE table_name = 'A'
Answer:
The change to NULLABLE fails, but it fails silently, so you see no error message.
On a column that is part of the primary key you may change the length, but you can't make it nullable (because a PK column can't be NULL).
Initial state: length 3000, NOT NULL
select COLUMN_NAME, DATA_LENGTH, NULLABLE from user_tab_columns where table_name = 'A';
COLUMN_NAME DATA_LENGTH N
------------------------------ ----------- -
someColumn 3000 N
Change length to 4000 - OK
ALTER TABLE A
MODIFY
(
"someColumn" VARCHAR2(4000)
)
;
select COLUMN_NAME, DATA_LENGTH, NULLABLE from user_tab_columns where table_name = 'A';
COLUMN_NAME DATA_LENGTH N
------------------------------ ----------- -
someColumn 4000 N
Change length to 3000 - OK
make it nullable - fails SILENTLY as PK can't be nullable
ALTER TABLE A
MODIFY
(
"someColumn" VARCHAR2(3000) NULL
);
select COLUMN_NAME, DATA_LENGTH, NULLABLE from user_tab_columns where table_name = 'A';
COLUMN_NAME DATA_LENGTH N
------------------------------ ----------- -
someColumn 3000 N
If you want to see an error message, split the change in two (change the length, then set it nullable).
The first one will pass, the second will explicitly fail.
ALTER TABLE A
MODIFY
(
"someColumn" VARCHAR2(3000)
);
ALTER TABLE A
MODIFY
(
"someColumn" NULL
);
ORA-01451: column to be modified to NULL cannot be modified to NULL
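If the column really does have to become nullable, the primary key has to be dropped first; a sketch (this obviously removes the table's key, so only do it if that is really what you want):
ALTER TABLE A DROP CONSTRAINT pk_A;
ALTER TABLE A
MODIFY
(
"someColumn" NULL
);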
When I create a temp table using a select into in SQL Server, is there a way to specify that a column should be nullable? I have a multi-step process where I'm making a temp table by selecting a lot of columns (which is why I'm not doing a create table #tmp (...)). After I make that temp table, I'm updating some columns and some of those updates might null out a field.
I know I could do an alter table alter column statement to achieve what I want, but I'm curious about whether there's a way to specify this in the select itself. I know you can inline cast your columns to get the desired datatype, but I can't see how you specify nullability.
Nullability is inherited from the source column.
You can lose or gain nullability with an expression:
Example (constant literals appear to be problematic - need a good NOOP function which can return NULL):
CREATE TABLE SO5465245_IN
(
a INT NOT NULL
,b INT NULL
) ;
GO
SELECT COALESCE(a, NULL) AS a
,ISNULL(b, 0) AS b
,COALESCE(10, NULL) AS c1
,COALESCE(ABS(10), NULL) AS c2
,CASE WHEN COALESCE(10, NULL) IS NOT NULL THEN COALESCE(10, NULL) ELSE NULL END AS c3
INTO SO5465245_OUT
FROM SO5465245_IN ;
GO
SELECT TABLE_NAME
,COLUMN_NAME
,IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME LIKE 'SO5465245%'
ORDER BY TABLE_NAME
,ORDINAL_POSITION ;
GO
DROP TABLE SO5465245_IN ;
GO
DROP TABLE SO5465245_OUT ;
GO
This is a solution I've recently come up with and thought I should share:
select top 0
B.*
into
TargetTable
from
SourceTable as A
left join SourceTable as B on 1 = 0
This effectively creates a duplicate of SourceTable's structure in TargetTable with all columns nullable (at least in SQL Server 2008).
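You can confirm the result with the same INFORMATION_SCHEMA check used in the answer above:
SELECT COLUMN_NAME, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'TargetTable'
ORDER BY ORDINAL_POSITION;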
CONVERT will make your columns nullable, and works for literals/constants too. Tested in SQL Server 2005/2008.
SELECT
SomeText = CONVERT(varchar(10), 'literal'),
SomeNumber = CONVERT(int, 0)
INTO SO5465245
INSERT SO5465245 VALUES (null, null)
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'SO5465245'
ORDER BY TABLE_NAME, ORDINAL_POSITION
DROP TABLE SO5465245
If you want the destination columns to be nullable irrespective of the nullability of the source table columns, you can follow this approach.
SELECT COLUMN1, COLUMN2, COLUMN3 INTO DestinationTable from SourceTable
If this was your query, and COLUMN1, COLUMN2, COLUMN3 were not nullable in SourceTable, then change the query to:
SELECT NULL COLUMN1, NULL COLUMN2, NULL COLUMN3 INTO DestinationTable from SourceTable
This will allow you to insert null values into the destination table.
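Note that selecting the literal NULL also replaces the source values. If you need nullable destination columns but still want to keep the data, wrapping each column in a cast to its own type has the same nullability effect, since an expression breaks the inherited NOT NULL (a sketch, assuming the three columns are ints):
SELECT CAST(COLUMN1 AS int) AS COLUMN1,
CAST(COLUMN2 AS int) AS COLUMN2,
CAST(COLUMN3 AS int) AS COLUMN3
INTO DestinationTable
FROM SourceTable;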
I recently had the same issue: I wanted to use "select into", wanted all columns in the target table to be nullable, and wanted a repeatable approach where I didn't have to know the names of the fields in the source table.
select *
into dbo.I_Data
from
(select 1[Z_1]) A
full join (select null[Z_2], * from dbo.S_Data) B on A.Z_1 = B.Z_2
where
dbo.S_Data is the source data table and
[Z_1] & [Z_2] are two dummy columns used for the join
Then to clean up:
(a) Remove the row of nulls
delete dbo.I_Data where [Z_1] = 1
(b) Remove the dummy fields:
alter table dbo.I_Data
drop column [Z_1], [Z_2]
I have a table that holds translations in an entire system and other tables reference to it, for example something like this:
Table "translations"
id | title
----------------------------
1 | First Translation
2 | Second Translation
And second table with foreign key pointing to translations:
Table "article"
id | translation_id | ...
1 | 1 | ...
I would like to get a list of rows that are not referenced by any other table (in this example row with id=2).
The number of tables might change, so I would like a general solution that relies on PostgreSQL's native foreign key mechanism.
I've made the function you need. Below is the sample data I created to test it. In my sample data the returned value should be ID 4 from table t1. In your case, the t1 table would be the translations table.
You will have to adapt it to your tables; it shouldn't be difficult.
create table t1 (
id integer primary key not null,
lang varchar(10)
);
create table t2 (
id integer primary key not null,
id_t1 integer,
constraint fk_t2 foreign key (id_t1) references t1(id)
);
create table t3 (
id integer primary key not null,
id_t1 integer,
constraint fk_t3 foreign key (id_t1) references t1(id)
);
insert into t1 values (1, 'pt'), (2, 'us'), (3,'cn'), (4,'uk');
insert into t2 values (1, 1), (2,2);
insert into t3 values (1, 1), (2,3);
CREATE OR REPLACE FUNCTION listAllReferences()
RETURNS setof integer AS
$$
declare
fullSQL text;
rs RECORD;
begin
fullSQL := '';
for rs in
SELECT 'select t1.id from t1 inner join ' || tc.table_name || ' ON ('||tc.table_name||'.'||kcu.column_name||' = t1.id)' as sel
FROM information_schema.table_constraints AS tc
JOIN information_schema.key_column_usage AS kcu
ON tc.constraint_name = kcu.constraint_name
JOIN information_schema.constraint_column_usage AS ccu
ON ccu.constraint_name = tc.constraint_name
WHERE constraint_type = 'FOREIGN KEY'
AND ccu.table_name='t1' loop
if fullSQL != '' then
fullSQL := fullSQL || ' union ';
end if;
fullSQL := fullSQL || rs.sel;
end loop;
return query
execute 'select t1.id
from t1 left join ('||fullSQL||') alltb on (t1.id = alltb.id)
where alltb.id is null';
return;
end;
$$
LANGUAGE plpgsql;
And to use it just do:
select * from listAllReferences();
It will return:
listallreferences
4
Future tables that reference your language table will also be covered, because I'm getting the data from PostgreSQL's INFORMATION_SCHEMA.
Also, you may have to add another filter to the query in the implicit cursor: AND tc.table_schema = 'yourSchemaName'.
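For comparison, when the set of referencing tables is known and stable, the same result can be obtained without any dynamic SQL (a sketch using the sample tables above):
select t1.id
from t1
where not exists (select 1 from t2 where t2.id_t1 = t1.id)
and not exists (select 1 from t3 where t3.id_t1 = t1.id);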