How can I set all columns' default value equal to null in PostgreSQL

I would like to set the default value for every column in a number of tables to NULL. I can view the current defaults under information_schema.columns.column_default. When I try to run
update information_schema.columns set column_default = Null where table_name = '[table]'
it throws "ERROR: cannot update a view HINT: You need an unconditional ON UPDATE DO INSTEAD rule."
What is the best way to go about this?

You need to run an ALTER TABLE statement for each column. Never ever try to do something like that by manipulating system tables (even if you find the correct one; INFORMATION_SCHEMA only contains views over the real system tables).
But you can generate all needed ALTER TABLE statements based on the data in the information_schema views:
SELECT 'ALTER TABLE '||table_name||' ALTER COLUMN '||column_name||' SET DEFAULT NULL;'
FROM information_schema.columns
WHERE table_name = 'foo';
Save the output as a SQL script and then run that script (don't forget to commit the changes).
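If you'd rather not copy-paste the generated script, a DO block can build and execute the statements in one go. This is only a sketch; it assumes the table lives in the public schema, so adjust the table_schema filter to your setup:
-- Builds one ALTER TABLE per column of "foo" and executes it immediately.
DO $$
DECLARE
    stmt text;
BEGIN
    FOR stmt IN
        SELECT format('ALTER TABLE %I ALTER COLUMN %I SET DEFAULT NULL',
                      table_name, column_name)
        FROM information_schema.columns
        WHERE table_schema = 'public'
          AND table_name   = 'foo'
    LOOP
        EXECUTE stmt;
    END LOOP;
END $$;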

Related

SQL Server : removes null constraint when changing the column's datatype, Oracle does not

I was reviewing something for a project and noticed that in SQL Server, modifying the data type of a column removes an existing NOT NULL constraint. I compared the same thing in Oracle and noticed that the null check is not removed when the data type is changed.
My question: is there a reason why SQL Server does not preserve the null check unless the ALTER statement explicitly makes the column NOT NULL? After some googling I couldn't really find an answer. Maybe this is specific to a SQL Server setting that is off?
If it isn't a configuration issue, perhaps there is a good reason I can't see for why this occurs.
Here is the SQL I was using to compare:
-- SQL Server
CREATE TABLE TestTable (Name varchar(50) NOT NULL);
-- Does not allow null
SELECT COLUMNPROPERTY(OBJECT_ID('dbo.TestTable', 'U'), 'Name', 'AllowsNull');
ALTER TABLE TestTable ALTER COLUMN Name varchar(250);
-- Allows null now
SELECT COLUMNPROPERTY(OBJECT_ID('dbo.TestTable', 'U'), 'Name', 'AllowsNull');
DROP TABLE TestTable;
-- Oracle
CREATE TABLE MYSCHEMA.TestTable (Name VARCHAR2(50) NOT NULL);
select nullable from all_tab_columns where owner = 'MYSCHEMA' and table_name = 'TESTTABLE' and column_name = 'NAME';
ALTER TABLE MYSCHEMA.TestTable MODIFY Name VARCHAR2(250);
select nullable from all_tab_columns where owner = 'MYSCHEMA' and table_name = 'TESTTABLE' and column_name = 'NAME';
Drop Table MYSCHEMA.TestTable;
Environment:
SQL Server 2017
Oracle 12c
Both running in Docker on Linux.
NULL may be the default.
From https://learn.microsoft.com/en-us/sql/t-sql/statements/alter-table-transact-sql?view=sql-server-ver15
When you create or alter a table with the CREATE TABLE or ALTER TABLE
statements, the database and session settings influence and possibly
override the nullability of the data type that's used in a column
definition. Be sure that you always explicitly define a column as NULL
or NOT NULL for noncomputed columns.
Did you try this?
ALTER TABLE TestTable ALTER COLUMN Name varchar(250) NOT NULL;
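For illustration, restating the nullability explicitly keeps the column NOT NULL across the type change; this sketch just reuses the COLUMNPROPERTY check from the question:
CREATE TABLE TestTable (Name varchar(50) NOT NULL);
-- 0 = does not allow NULL
SELECT COLUMNPROPERTY(OBJECT_ID('dbo.TestTable', 'U'), 'Name', 'AllowsNull');
ALTER TABLE TestTable ALTER COLUMN Name varchar(250) NOT NULL;
-- still 0, because the nullability was stated explicitly in the ALTER
SELECT COLUMNPROPERTY(OBJECT_ID('dbo.TestTable', 'U'), 'Name', 'AllowsNull');
DROP TABLE TestTable;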

Check if column exists then alter column from the table?

I want to write an SQL script that checks whether a column exists in a table and removes the column if it does. The database I use is Sybase ASE, and this is the code that I tried to use:
IF EXISTS (SELECT 1 FROM syscolumns WHERE id = object_id('users') AND name = 'maiden_name')
BEGIN
ALTER TABLE security DROP maiden_name
END
The code above executed successfully the first time I ran it. The second time I got the error:
Invalid column name 'maiden_name'
If the column does not exist, the ALTER TABLE block of code shouldn't run. Is there a way to achieve this in Sybase? Thank you.
You can use dynamic SQL:
IF EXISTS (SELECT 1 FROM syscolumns WHERE id = object_id('users') AND name = 'maiden_name')
BEGIN
EXEC('ALTER TABLE security DROP maiden_name')
END;
The problem is that the parser is trying to parse the ALTER during the compilation phase, and it gets an error if the column does not exist.

Converting data types from one database table to another

I did a bulk insert of a large text file that was an update to an existing database. I ran into all kinds of trouble with truncation errors, so I just set everything to varchar(max). Now that everything is in SQL Server, I'd like to convert the data types in database b to match those in database a. If both databases have the same table and field names, what are some methods of getting this done? Or would it be best to have a pre-existing, 'hardcoded' script that you run after the import?
Try something like this, then modify the result and execute it:
declare @sql varchar(max)
set @sql = ''
select @sql = @sql
    + 'Alter table ' + table_name + ' alter column ' + column_name + ' ' + data_type
    -- only character types need a length; -1 in information_schema means (max)
    + case when data_type like '%char%'
           then '(' + case when CHARACTER_MAXIMUM_LENGTH = -1 then 'max'
                           else cast(CHARACTER_MAXIMUM_LENGTH as varchar(10)) end + ')'
           else '' end
    + ';'
from information_schema.columns
where table_name = 'test'
print @sql
You could create a view over the table you're looking to pull from and just CAST() the columns you need to change to their target data types. Then you can do all of your inserts off of that view in the appropriate data types. Hope that helps.
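A minimal sketch of that idea, with made-up names: dbo.staging stands in for the bulk-inserted varchar(max) table and dbo.target for the table with the real types.
-- Hypothetical names: dbo.staging is the varchar(max) import table,
-- dbo.target has the proper data types.
CREATE VIEW dbo.staging_typed AS
SELECT CAST(id         AS int)          AS id,
       CAST(name       AS varchar(50))  AS name,
       CAST(created_on AS datetime)     AS created_on
FROM dbo.staging;
GO

INSERT INTO dbo.target (id, name, created_on)
SELECT id, name, created_on
FROM dbo.staging_typed;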
If you are moving data from one SQL Server database to another, you could use Atlantis Interactive Data Inspector. If you don't want to do that, you can script out the table in database a, paste ALTER TABLE [table] ALTER COLUMN before every column, and run it on database b.

SQL Column Not Found: Added earlier in program

I'm using SQL Server 2008. I have a stored procedure with code that looks like this:
if not exists (select column_name from INFORMATION_SCHEMA.columns
where table_name = 'sample_table' and column_name = 'sample_column')
BEGIN
ALTER TABLE Sample_Table
ADD Sample_Column NVARCHAR(50)
END
Update dbo.Sample_Table
SET Sample_Column = '1'
When I execute, I get a "Column Not Found" error because the column doesn't exist in Sample_Table yet; it's added in the procedure. What's the correct way to get around this?
My workaround (below) is to wrap the update statement in an EXEC statement, so that it is compiled and executed only after the ALTER TABLE step. But is there a better method?
EXEC ('
Update dbo.Sample_Table
SET Sample_Column = ''1'' ')
If you do not really like your workaround, the only other option seems to be separating your DDL logic from the DML: one SP checks for and creates the column (other columns too, as necessary), and another SP sets the value(s).
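A rough sketch of that separation, keeping the names from the question; here the DML half is a follow-up batch rather than a second procedure, so it is only compiled once the column exists:
-- DDL procedure: only makes sure the column is there.
CREATE PROCEDURE dbo.Ensure_Sample_Column
AS
BEGIN
    IF NOT EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS
                   WHERE table_name = 'Sample_Table'
                     AND column_name = 'Sample_Column')
        ALTER TABLE dbo.Sample_Table ADD Sample_Column NVARCHAR(50);
END
GO

-- Run the DDL step first...
EXEC dbo.Ensure_Sample_Column;
GO

-- ...so this batch is compiled after the column exists.
UPDATE dbo.Sample_Table SET Sample_Column = '1';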
On the other hand, it looks like you are using your UPDATE statement merely as a means of providing a default value for the newly created column. If that is the case, you might consider an entirely different solution: creating a DEFAULT constraint (no need for the UPDATE statement). Here:
if not exists (select column_name from INFORMATION_SCHEMA.columns
where table_name = 'sample_table' and column_name = 'sample_column')
BEGIN
ALTER TABLE Sample_Table
ADD Sample_Column NVARCHAR(50) NOT NULL
CONSTRAINT DF_SampleTable_SampleColumn DEFAULT ('1');
ALTER TABLE Sample_Table
ALTER COLUMN Sample_Column NVARCHAR(50) NULL;
END
The second ALTER TABLE command is there only to drop the NOT NULL restriction, as it seems like you didn't mean your column to have it. But if you are fine with NOT NULL, then just scrap the second ALTER TABLE.

MySQL conditional drop foreign keys script

I'm involved in a project to migrate an application from Oracle to MySQL. I have a script called CreateTables.sql that I'm running from the MySQL shell, and it looks like this internally:
source table\DropForeignKeys.sql
source tables\Site.sql
source tables\Language.sql
source tables\Country.sql
source tables\Locale.sql
source tables\Tag.sql
mysql --user=root --password --database=junkdb -vv < CreateTables.sql
What I'm after is a way to make the execution of the first script, DropForeignKeys.sql, conditional on whether the db has any tables or not. Alternatively, it would be nice if there were a way to drop a constraint only if it exists, but no such construct exists in MySQL to my knowledge.
So my question is: how do I make the dropping of foreign key constraints conditional, at the script level or the constraint level, so that I can have a reliable, re-playable script?
What I'm after is a way to make the execution of the first script, DropForeignKeys.sql, conditional on whether the db has any tables or not.
Conditional logic (IF/ELSE) is only supported in functions and stored procedures - you'd have to use something that resembles:
DELIMITER $$
DROP PROCEDURE IF EXISTS upgrade_database $$
CREATE PROCEDURE upgrade_database()
BEGIN
-- ADD THE COLUMN ONLY IF IT DOES NOT ALREADY EXIST
IF((SELECT COUNT(*) AS column_exists
FROM information_schema.columns
WHERE table_name = 'test'
AND column_name = 'test7') = 0) THEN
ALTER TABLE test ADD COLUMN `test7` int(10) NOT NULL;
UPDATE test SET test7 = test;
SELECT 'Altered!';
ELSE
SELECT 'Not altered!';
END IF;
END $$
DELIMITER ;
CALL upgrade_database();
Rather than reference INFORMATION_SCHEMA.COLUMNS, you could reference INFORMATION_SCHEMA.KEY_COLUMN_USAGE.
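For instance, the guard inside a procedure like the one above could check for the foreign key itself. In this sketch the schema comes from the question's --database=junkdb, while the table and constraint names (Site, fk_site_country) are placeholders:
-- Drop the FK only when information_schema says it is present.
IF ((SELECT COUNT(*)
     FROM information_schema.KEY_COLUMN_USAGE
     WHERE table_schema    = 'junkdb'
       AND table_name      = 'Site'
       AND constraint_name = 'fk_site_country'
       AND referenced_table_name IS NOT NULL) > 0) THEN
    ALTER TABLE Site DROP FOREIGN KEY fk_site_country;
END IF;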
Depending on your needs:
ALTER TABLE [table name] DISABLE KEYS
ALTER TABLE [table name] ENABLE KEYS
...will disable and re-enable the keys attached to that table without needing to know each one. You can disable and enable keys on a database level using SET foreign_key_checks = 0; to disable, and SET foreign_key_checks = 1; to enable them.
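In the context of the build script, that setting makes the drop step trivially re-playable. A sketch of a possible preamble for CreateTables.sql, using the table names from the question's source list:
-- With FK checks off, the old tables can be dropped in any order,
-- even though they reference each other.
SET foreign_key_checks = 0;

DROP TABLE IF EXISTS Tag;
DROP TABLE IF EXISTS Locale;
DROP TABLE IF EXISTS Country;
DROP TABLE IF EXISTS Language;
DROP TABLE IF EXISTS Site;

SET foreign_key_checks = 1;
-- The individual CREATE TABLE scripts (Site.sql, Language.sql, ...) then run as before.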
It surprises me that MySQL doesn't seem to have a better way of dealing with this common scripting problem.
Oracle doesn't either, but constraints aren't really something you want to alter blindly without knowing details.
The reason I need the drop foreign keys script is that DROP TABLE yields an error when there are FK attachments. Will disabling FK checks allow me to drop the tables?
Yes, dropping or disabling the constraints will allow you to drop the table, but be aware: in order to re-enable the FK check, you'll need the data in the parent tables to match the existing data in the child tables.
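A quick sanity check before turning the checks back on is an anti-join from child to parent; the column names here are hypothetical, picked only to illustrate the pattern:
-- List Locale rows whose country_id no longer matches a Country row;
-- these would violate the FK once it is enforced again.
SELECT l.*
FROM Locale l
LEFT JOIN Country c ON c.id = l.country_id
WHERE l.country_id IS NOT NULL
  AND c.id IS NULL;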