Dropping unique constraint for column in H2 - sql

I am trying to drop a unique constraint for a column in H2, previously created as info varchar(255) unique.
I tried:
sql> alter table public_partner drop constraint (select distinct unique_index_name from information_schema.constraints where table_name='PUBLIC_PARTNER' and column_list='INFO');
But this failed with the following error:
Syntax error in SQL statement "ALTER TABLE PUBLIC_PARTNER DROP CONSTRAINT ([*]SELECT DISTINCT UNIQUE_INDEX_NAME FROM INFORMATION_SCHEMA.CONSTRAINTS WHERE TABLE_NAME='PUBLIC_PARTNER' AND COLUMN_LIST='INFO')"; expected "identifier"; SQL statement:
alter table public_partner drop constraint (select distinct unique_index_name from information_schema.constraints where table_name='PUBLIC_PARTNER' and column_list='INFO') [42001-160]
How should this constraint be correctly removed?
By the way:
sql> (select unique_index_name from information_schema.constraints where table_name='PUBLIC_PARTNER' and column_list='INFO');
UNIQUE_INDEX_NAME
CONSTRAINT_F574_INDEX_9
(1 row, 0 ms)
seems to return a correct output.

In the SQL language, identifier names can't be expressions. You need to run two statements:
select distinct constraint_name from information_schema.constraints
where table_name='PUBLIC_PARTNER' and column_list='INFO'
and then get the identifier name, and run the statement
ALTER TABLE PUBLIC_PARTNER DROP CONSTRAINT <xxx>
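For example, assuming the first query returned CONSTRAINT_F574 (the actual name will differ in your database):
ALTER TABLE PUBLIC_PARTNER DROP CONSTRAINT CONSTRAINT_F574;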

You could use a user-defined function to execute a dynamically created statement. First, create the EXECUTE alias (only once):
CREATE ALIAS IF NOT EXISTS EXECUTE AS $$
void executeSql(Connection conn, String sql) throws SQLException {
    conn.createStatement().executeUpdate(sql);
}
$$;
Then call this method:
call execute('ALTER TABLE PUBLIC_PARTNER DROP CONSTRAINT ' ||
    (select distinct unique_index_name from information_schema.constraints
     where table_name='PUBLIC_PARTNER' and column_list='INFO'));
... where execute is the user defined function that runs a statement.

If you are using H2 with Spring Boot in PostgreSQL mode, the query has to include the schema public, and the table names are likely lower case (see the application.yml extract below). Check the letter case in information_schema.constraints and use exactly the upper/lower case you see there.
Verbose Query Set
SET @constraint_name = QUOTE_IDENT(
    SELECT DISTINCT constraint_name
    FROM information_schema.constraints
    WHERE table_schema = 'public'
      AND table_name = 'public_partner'
      AND constraint_type = 'UNIQUE'
      AND column_list = 'info');
SET @command = 'ALTER TABLE public.public_partner DROP CONSTRAINT public.' || @constraint_name;
SELECT @command;
EXECUTE IMMEDIATE @command;
Explanation:
SELECT DISTINCT constraint_name [...]
Selects the constraint_name of the UNIQUE constraint from the information schema.
QUOTE_IDENT([...])
Quotes the resulting string as an identifier so it can safely be concatenated into the command.
SET @constraint_name = [...];
Stores the result in the variable @constraint_name.
SET @command = [...];
Composes the whole command by string concatenation and stores it in the variable @command.
SELECT @command;
Shows the composed query on screen, just for debugging.
EXECUTE IMMEDIATE @command;
Executes @command.
Typical H2 Configuration in PostgreSQL Mode in the Spring Boot application.yml (extract)
spring:
  # [...]
  jpa:
    database-platform: org.hibernate.dialect.H2Dialect
    # [...]
  datasource:
    url: jdbc:h2:mem:testdb;MODE=PostgreSQL;DATABASE_TO_LOWER=TRUE;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=false
    username: sa
    password: sa
    # [...]

Related

SQL Server: removes null constraint when changing the column's datatype, Oracle does not

I was reviewing something for a project and noticed that in SQL Server, modifying the datatype of a column removes an existing NOT NULL constraint. I compared the same in Oracle and noticed that the NOT NULL constraint is not removed when a data type change occurs.
My question: is there a reason why SQL Server does not preserve the NOT NULL constraint unless the ALTER statement explicitly declares the column NOT NULL? After some googling, I couldn't really find an answer. Maybe this is specific to a SQL Server setting that is off?
If there is no such setting, there is presumably a good reason for this behavior that I cannot see.
Here is the SQL I was using to compare:
-- SQL Server
CREATE TABLE TestTable (Name varchar(50) NOT NULL);
-- Does not allow null
SELECT COLUMNPROPERTY(OBJECT_ID('dbo.TestTable', 'U'), 'Name', 'AllowsNull');
ALTER TABLE TestTable ALTER COLUMN Name varchar(250);
-- Allows null now
SELECT COLUMNPROPERTY(OBJECT_ID('dbo.TestTable', 'U'), 'Name', 'AllowsNull');
DROP TABLE TestTable;
-- Oracle
CREATE TABLE MYSCHEMA.TestTable (Name VARCHAR2(50) NOT NULL);
select nullable from all_tab_columns where owner = 'MYSCHEMA' and table_name = 'TESTTABLE' and column_name = 'NAME';
ALTER TABLE MYSCHEMA.TestTable MODIFY Name VARCHAR2(250);
select nullable from all_tab_columns where owner = 'MYSCHEMA' and table_name = 'TESTTABLE' and column_name = 'NAME';
DROP TABLE MYSCHEMA.TestTable;
Environment:
SQL Server 2017
Oracle 12c
Both running in docker on linux.
NULL may be the default.
From https://learn.microsoft.com/en-us/sql/t-sql/statements/alter-table-transact-sql?view=sql-server-ver15
When you create or alter a table with the CREATE TABLE or ALTER TABLE
statements, the database and session settings influence and possibly
override the nullability of the data type that's used in a column
definition. Be sure that you always explicitly define a column as NULL
or NOT NULL for noncomputed columns.
Did you try this?
ALTER TABLE TestTable ALTER COLUMN Name varchar(250) NOT NULL;
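As a sketch of the settings the documentation refers to: the session options ANSI_NULL_DFLT_ON and ANSI_NULL_DFLT_OFF control what happens when nullability is omitted (the table names here are illustrative):
-- Most client drivers turn this on, so omitted nullability means NULL:
SET ANSI_NULL_DFLT_ON ON;
CREATE TABLE T1 (Name varchar(50));
SELECT COLUMNPROPERTY(OBJECT_ID('dbo.T1', 'U'), 'Name', 'AllowsNull'); -- 1
DROP TABLE T1;
-- Turning the opposite option on makes omitted nullability mean NOT NULL:
SET ANSI_NULL_DFLT_OFF ON;
CREATE TABLE T2 (Name varchar(50));
SELECT COLUMNPROPERTY(OBJECT_ID('dbo.T2', 'U'), 'Name', 'AllowsNull'); -- 0
DROP TABLE T2;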

SQL Server - Check column if exists >> rename and change type

SQL Server:
Check whether a column exists:
If True: change/modify the column_name and dataType
If False: create the column
Schema name: Setup
Code:
IF EXISTS (SELECT 1 FROM sys.columns
           WHERE Name = N'bitIntialBalance'
             AND Object_ID = Object_ID(N'Setup.LeaveVacationsSubType'))
BEGIN
    ALTER TABLE [Setup].[LeaveVacationsSubType]
    ALTER COLUMN intIntialBalance INT NULL;
    EXEC sp_RENAME 'Setup.LeaveVacationsSubType.bitIntialBalance', 'intIntialBalance', 'COLUMN';
    --ALTER TABLE [Setup].[LeaveVacationsSubType] MODIFY [intIntialBalance] INT; not working
END
GO
IF NOT EXISTS (SELECT 1 FROM sys.columns
               WHERE Name = N'intIntialBalance'
                 AND Object_ID = Object_ID(N'Setup.LeaveVacationsSubType'))
BEGIN
    ALTER TABLE [Setup].[LeaveVacationsSubType]
    ADD intIntialBalance INT NULL;
END
GO
If I guess correctly, the problem is that the query plan is made for the whole batch, and SQL Server also checks that it can actually perform all the operations, even those inside an IF statement. That's why you get an error, even though in reality that statement would never be executed.
One way to get around this issue is to make all those statements dynamic, something like this:
execute ('ALTER TABLE [Setup].[LeaveVacationsSubType] ALTER COLUMN [intIntialBalance] INT')
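A fuller sketch of the same idea (the rename is done first so that the dynamic ALTER COLUMN refers to the column's new name; adjust object names to your schema):
IF EXISTS (SELECT 1 FROM sys.columns
           WHERE name = N'bitIntialBalance'
             AND object_id = OBJECT_ID(N'Setup.LeaveVacationsSubType'))
BEGIN
    -- Rename first, then change the type under the new name.
    EXEC sp_rename 'Setup.LeaveVacationsSubType.bitIntialBalance',
                   'intIntialBalance', 'COLUMN';
    EXEC ('ALTER TABLE [Setup].[LeaveVacationsSubType] ALTER COLUMN [intIntialBalance] INT NULL');
END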

Temp tables not visible when using Execute in SQL Server

I have the query below:
if OBJECT_ID('tempdb..#tmpTables') is not null
    drop table #tmpTables
Execute('select TABLE_NAME into #tmpTables from ' + @dbName + '.INFORMATION_SCHEMA.TABLES')
while (select COUNT(*) from #tmpTables) > 0
begin
    -- here is my statement
end
When I execute this query, I get this error:
Invalid object name '#tmpTables'.
But when the query is changed to this:
if OBJECT_ID('tempdb..#tmpTables') is not null
    drop table #tmpTables
select TABLE_NAME into #tmpTables from INFORMATION_SCHEMA.TABLES
while (select COUNT(*) from #tmpTables) > 0
begin
    -- here is my code
end
It works.
How can I make the dynamic version work?
Table names prefixed with a single number sign (#) are 'Local temporary table' names.
Local temporary tables are visible only to their creators during the same connection to an instance of SQL Server as when the tables were first created or referenced.
Local temporary tables are deleted after the user disconnects from the instance of SQL Server.
And when you create a local temporary table via the EXEC() command, the creator is the inner scope that runs the dynamic statement; that scope ends as soon as the statement finishes, and the temp table is dropped with it.
You can use a table variable like this:
DECLARE @tmpTables TABLE (TABLE_NAME nvarchar(max));
INSERT INTO @tmpTables
SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES;
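If the dynamic database name is still needed, one option (a sketch; @dbName is assumed to hold a valid database name) is to declare the target in the outer scope and fill it with INSERT ... EXEC, since the inserted rows survive the inner scope:
DECLARE @dbName sysname = N'MyDatabase'; -- illustrative value
DECLARE @tmpTables TABLE (TABLE_NAME nvarchar(128));
DECLARE @sql nvarchar(max) =
    N'SELECT TABLE_NAME FROM ' + QUOTENAME(@dbName) + N'.INFORMATION_SCHEMA.TABLES';
INSERT INTO @tmpTables
EXEC (@sql);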

SQL Column Not Found: Added earlier in program

I'm using SQL Server 2008. I have a stored procedure with code that looks like this:
if not exists (select column_name from INFORMATION_SCHEMA.columns
where table_name = 'sample_table' and column_name = 'sample_column')
BEGIN
ALTER TABLE Sample_Table
ADD Sample_Column NVARCHAR(50)
END
Update dbo.Sample_Table
SET Sample_Column = '1'
When I execute, I get a "Column Not Found" error because the column doesn't originally exist in Sample_Table; it's added in the procedure. What's the correct way to get around this?
My workaround (below) is to wrap the update statement in an EXEC statement, so that it is forced to create the code and execute after the ALTER TABLE step. But is there a better method?
EXEC ('
Update dbo.Sample_Table
SET Sample_Column = ''1'' ')
If you do not really like your workaround, the only other option seems to be separating your DDL logic from the DML logic: one stored procedure checks for and creates the column (and any other columns, as necessary), and another sets the value(s).
On the other hand, it looks like you are using your UPDATE statement merely as a means of providing a default value for the newly created column. If that is the case, you might consider an entirely different solution: creating a DEFAULT constraint (no need for the UPDATE statement). Here:
if not exists (select column_name from INFORMATION_SCHEMA.columns
where table_name = 'sample_table' and column_name = 'sample_column')
BEGIN
ALTER TABLE Sample_Table
ADD Sample_Column NVARCHAR(50) NOT NULL
CONSTRAINT DF_SampleTable_SampleColumn DEFAULT ('1');
ALTER TABLE Sample_Table
ALTER COLUMN Sample_Column NVARCHAR(50) NULL;
END
The second ALTER TABLE command is there only to drop the NOT NULL restriction, as it seems like you didn't mean your column to have it. But if you are fine with NOT NULL, then just scrap the second ALTER TABLE.

How can I set all columns' default value equal to null in PostgreSQL

I would like to set the default value for every column in a number of tables equal to Null. I can view the default constraint under information_schema.columns.column_default. When I try to run
update information_schema.columns set column_default = Null where table_name = '[table]'
it throws "ERROR: cannot update a view HINT: You need an unconditional ON UPDATE DO INSTEAD rule."
What is the best way to go about this?
You need to run an ALTER TABLE statement for each column. Never ever try to do something like that by manipulating system tables (even if you find the correct one; INFORMATION_SCHEMA only contains views of the real system catalogs).
But you can generate all needed ALTER TABLE statements based on the data in the information_schema views:
SELECT 'ALTER TABLE '||table_name||' ALTER COLUMN '||column_name||' SET DEFAULT NULL;'
FROM information_schema.columns
WHERE table_name = 'foo';
Save the output as a SQL script and then run that script (don't forget to commit the changes)
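Alternatively, the generate-and-execute loop can be done in one step with a PL/pgSQL DO block (a sketch, assuming the table is named foo and PostgreSQL 9.0 or later):
DO $$
DECLARE
    stmt text;
BEGIN
    FOR stmt IN
        SELECT format('ALTER TABLE %I ALTER COLUMN %I SET DEFAULT NULL',
                      table_name, column_name)
        FROM information_schema.columns
        WHERE table_name = 'foo'
    LOOP
        -- format(%I) quotes each identifier; EXECUTE then runs the ALTER.
        EXECUTE stmt;
    END LOOP;
END $$;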