Looping over columns and changing Null values - sql

I have a table called myTable where some values are null.
I want to replace all null values in all columns with the previous non null value. I found some code that iterates over each row for a specific column, and changes Null Values as I want.
DECLARE @value AS int
UPDATE myTable
SET
@value = COALESCE(col2, @value),
col2 = COALESCE(col2, @value)
Result:
This does what I want it to do, but only for one column at a time. My goal is to alter the code above so that I can automatically loop over each column in the table.
I ran into several issues when trying to achieve this. Here is my attempt
DECLARE @ColNames table (NAMES nvarchar(50), ARRAYINDEX int identity(1,1))
INSERT INTO @ColNames (NAMES)
VALUES ('col1'),('col2'),('col3')
DECLARE @INDEXVAR int
DECLARE @TOTALCOUNT int
SET @INDEXVAR = 0
SELECT @TOTALCOUNT = COUNT(*) FROM @ColNames
WHILE @INDEXVAR < @TOTALCOUNT
BEGIN
DECLARE @curColName nvarchar(50)
SELECT @INDEXVAR = @INDEXVAR + 1
SELECT @curColName = NAMES FROM @ColNames WHERE ARRAYINDEX = @INDEXVAR
DECLARE @value AS int
UPDATE myTable
SET
@value = COALESCE(@curColName, @value),
@curColName = COALESCE(@curColName, @value)
END
The issues that I have found and not been able to solve are the following:
@curColName is just an nvarchar variable and not a reference to my actual column, even if the names are the same. This gives me errors on both lines inside the SET statement.
When hard-coding the column names in my loop inside the BEGIN/END block, the script fills ALL NULL values with a number. So col2 gets the value 3 on ALL rows, not only rows 2 and 3 as in my previous example.
If these two points are hard or impossible to solve, is there an easier way of solving this problem?
Thanks

This is based on your expected results, and assumes that you actually want Col2 to take the value of the previous non-NULL value, when ordered by the column PK.
If so, to achieve this you can use an updatable CTE. The first CTE puts the data into groups, based on the non-NULL values, and then the second gets the MAX value of Col2 in the group (which would be the non-NULL value). Finally you UPDATE against that CTE on rows where Col2 has the value NULL:
CREATE TABLE dbo.YourTable (PK int,
Col1 int,
Col2 int);
INSERT INTO dbo.YourTable (PK,Col1,Col2)
VALUES(1,2,NULL),
(2,NULL,3),
(3,NULL,NULL);
GO
WITH Groups AS(
SELECT Col2,
COUNT(Col2) OVER (ORDER BY PK) AS Grp
FROM dbo.YourTable),
Maxes AS(
SELECT Col2,
MAX(Col2) OVER (PARTITION BY Grp) AS MaxCol2
FROM Groups)
UPDATE Maxes
SET Col2 = MaxCol2
WHERE Col2 IS NULL;
GO
SELECT *
FROM dbo.YourTable;
GO
DROP TABLE dbo.YourTable;
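If you want to experiment with this grouping trick outside SQL Server, here is a sketch of the same idea in Python with SQLite (window functions need SQLite 3.25+). The table mirrors dbo.YourTable above; SQLite has no updatable CTE, so the update goes through a correlated subquery on PK instead.

```python
import sqlite3

# Same sample data as the answer above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE YourTable (PK INTEGER, Col1 INTEGER, Col2 INTEGER);
INSERT INTO YourTable (PK, Col1, Col2) VALUES (1,2,NULL),(2,NULL,3),(3,NULL,NULL);
""")
# COUNT(Col2) OVER (ORDER BY PK) puts each NULL run in the same group as the
# preceding non-NULL value; MAX per group recovers that value.
conn.execute("""
WITH Groups AS (
    SELECT PK, Col2, COUNT(Col2) OVER (ORDER BY PK) AS Grp
    FROM YourTable),
Maxes AS (
    SELECT PK, MAX(Col2) OVER (PARTITION BY Grp) AS MaxCol2
    FROM Groups)
UPDATE YourTable
SET Col2 = (SELECT MaxCol2 FROM Maxes WHERE Maxes.PK = YourTable.PK)
WHERE Col2 IS NULL
""")
rows = conn.execute("SELECT PK, Col2 FROM YourTable ORDER BY PK").fetchall()
```

Row 1 stays NULL because there is no previous non-NULL value; row 3 is filled with 3.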

Related

loop a variable delimited by comma and enter each item to each row of the table SQL

lets say I have the variable x, which is equal to: x='3,4,5,6,7'
Then i have a table #tmpTable with two columns (respID and Responses)
On my #tmpTable the respIDs for each row are null.
I want the ids of each row there to be the values on my x variable above. (for example, row 1's respID=1, row 2's respID=2.. and so on..)
how to do this in SQL?
You can achieve this as below using SSMS:
declare @S varchar(20)
set @S = '1,2,3,4,5'
declare @tempTable as table (col1 varchar(max), col2 varchar(max))
while len(@S) > 0
begin
insert into @tempTable (col1) select left(@S, charindex(',', @S + ',') - 1)
set @S = stuff(@S, 1, charindex(',', @S + ','), '')
end
select * from @tempTable
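The same peel-one-item-off-the-front loop can be sketched in Python against SQLite; `str.partition` plays the role of the LEFT/STUFF pair in the T-SQL above. Table and column names are illustrative.

```python
import sqlite3

s = "1,2,3,4,5"
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tempTable (col1 TEXT, col2 TEXT)")
while len(s) > 0:
    # head = everything before the first comma; s = the remainder
    head, _, s = s.partition(",")
    conn.execute("INSERT INTO tempTable (col1) VALUES (?)", (head,))
items = [r[0] for r in
         conn.execute("SELECT col1 FROM tempTable ORDER BY rowid").fetchall()]
```

One row is inserted per delimited item, in order.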
You can do something like this.
SELECT
Responses.value('(/x/#ID)[1]', 'int') AS [ID],
Responses
FROM YourTable
Sorry, the image in your post has disappeared, so I don't remember the table name or the exact XML. Have a search on Google for "tsql xml xpath".

How to add a row number to new table in SQL?

I'm trying to populate a new table from an existing table using:
INSERT INTO NewTable (...,...)
SELECT * from SampleTable
What I need is to add a record number at the beginning or the end; it really doesn't matter, as long as it's there.
Sample Table
Elizabeth RI 02914
Emily MA 01834
Prospective New Table
1 Elizabeth RI 02914
2 Emily MA 01834
Is that at all possible?
This is what I'm ultimately shooting for... except right now those tables aren't the same size, because I need my ErrorTemporaryTable to have a column in which each row has a number that increments from the previous one by one.
declare @counter int
declare @ClientMessage varchar(255)
declare @TestingMessage varchar(255)
select @counter = (select count(*) + 1 as counter from ErrorValidationTesting)
while @counter <= (select count(*) from ErrorValidationTable ET, ErrorValidationMessage EM where ET.Error = EM.Error_ID)
begin
insert into ErrorValidationTesting (Validation_Error_ID, Program_ID, Displayed_ID, Client_Message, Testing_Message, Create_Date)
select * from ErrorTemporaryTable
select @counter = @counter + 1
end
You can use the INTO clause with an IDENTITY column:
SELECT IDENTITY(int, 1,1) AS ID_Num, col0, col1
INTO NewTable
FROM OldTable;
You can also create table with identity field:
create table NewTable
(
id int IDENTITY,
col0 varchar(30),
col1 varchar(30)
)
and insert:
insert into NewTable (col0, col1)
SELECT col0, col1
FROM OldTable;
or if you have NewTable and you want to add new column see this solution on SO.
INSERT INTO NewTable (...,...)
SELECT ROW_NUMBER() OVER (ORDER BY order_column), * from SampleTable
If you are in SQL Server
INSERT INTO newTable (idCol, c1,c2,...cn)
SELECT ROW_NUMBER() OVER(ORDER BY c1), c1,c2,...cn
FROM oldTable
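The ROW_NUMBER() approach above also works in SQLite (3.25+), so here is a runnable sketch in Python mirroring the sample data from the question; table and column names are made up for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE OldTable (name TEXT, state TEXT);
INSERT INTO OldTable VALUES ('Elizabeth','RI'),('Emily','MA');
CREATE TABLE NewTable (id INTEGER, name TEXT, state TEXT);
""")
# Generate the row number at insert time, ordered by an existing column.
conn.execute("""
INSERT INTO NewTable (id, name, state)
SELECT ROW_NUMBER() OVER (ORDER BY name), name, state FROM OldTable
""")
rows = conn.execute("SELECT * FROM NewTable ORDER BY id").fetchall()
```

Each copied row carries a 1-based sequence number in the new table.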
Try this query to insert 1,2,3... Replace MyTable and ID with your table and column names. Note that this trick relies on the UPDATE's row scan order, which SQL Server does not guarantee.
DECLARE @myVar int
SET @myVar = 0
UPDATE
MyTable
SET
@myVar = @myVar + 1,
ID = @myVar

tsql: can "returning select" come before "update"?

I want to write a t-sql stored procedure (aka sproc) which selects 3 columns from 'MyTable'. In addition, I want to update the table in the same sproc:
I select the third value from the table.
If it equals 'true', I want to update the relevant record in the table to 'false'
I wasn't sure what syntax should I use. Could you help me out?
ALTER procedure [dbo].[My_PROC]
@ID varchar(10)
AS
BEGIN
declare @Col3 bit;
set @Col3 = select Col3
from dbo.MyTable with (nolock)
where @ID = ID
if @Col3 = 'true'
update dbo.dbo.MyTable set col3 = 'false'
where @ID = ID
select Col1,
Col2,
Col3
from dbo.MyTable table with (nolock) where @ID = ID,
table.Col1,
table.Col2,
@Col3
END
edit: I want to return the original Col3 (not the updated value).
Use:
ALTER procedure [dbo].[My_PROC]
@ID varchar(10)
AS
BEGIN
SELECT t.col1,
t.col2,
t.col3
FROM dbo.MyTable AS t WITH (NOLOCK)
WHERE t.id = @ID
-- No need for an IF statement to determine updating...
UPDATE dbo.MyTable
SET col3 = 'false'
WHERE id = @ID
AND col3 = 'true'
END
I don't know what you're intending for the final SELECT, but I can update it once I understand what you intended.
If you are using SQL Server 2005 or later, you can use an OUTPUT clause to output the original value if it was actually updated by the query. OUTPUT won’t give you the original value if the row is not updated by the query, however.
declare @t table (
ID int,
tf bit
);
insert into @t values
(1,0),
(2,1),
(3,0);
declare @ID int = 2;
select * from @t;
update @t set
tf = 0
output deleted.ID, deleted.tf
where ID = @ID;
select * from @t;
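SQLite has no OUTPUT deleted.* clause (its RETURNING clause yields the *new* values), so a sketch of the same "return the original value, then conditionally flip it" behavior in Python reads the old value and updates inside one transaction. Names mirror the example above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (ID INTEGER, tf INTEGER);
INSERT INTO t VALUES (1,0),(2,1),(3,0);
""")
target = 2
with conn:  # read and conditional flip committed together
    old_tf = conn.execute("SELECT tf FROM t WHERE ID = ?", (target,)).fetchone()[0]
    # Only flip rows that were 'true', as in the answers above.
    conn.execute("UPDATE t SET tf = 0 WHERE ID = ? AND tf = 1", (target,))
new_tf = conn.execute("SELECT tf FROM t WHERE ID = ?", (target,)).fetchone()[0]
```

The caller gets the pre-update value (1) even though the row now holds 0.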

'insert into' with array

I'm wondering if there's a way to use 'insert into' on a list of values. I'm trying to do this:
insert into tblMyTable (Col1, Col2, Col3)
values('value1', value2, 'value3')
So, what I'm trying to say is that value2 will be an array of strings. I'm going to put this in C# but the SQL statement is all I need really. I know I could just use a foreach and loop through my array but I figured there might be a better way sort of like the SELECT statement here: SQL SELECT * FROM XXX WHERE columnName in Array. It seems like a single query would be much more efficient than one at a time.
I'm using SQL Server 2008 R2. Thanks fellas!
You can use this type of insert statement
insert into tblMyTable (Col1, Col2, Col3)
select 'value1', value, 'value3'
from dbo.values2table('abc,def,ghi,jkl',',',-1) V
The 'value1', 'value3' and 'abc,def,ghi,jkl' strings are the 3 varchar parameters you need to set in the C# SqlCommand.
This is the supporting function required.
CREATE function dbo.values2table
(
@values varchar(max),
@separator varchar(3),
@limit int -- set to -1 for no limit
) returns @res table (id int identity, [value] varchar(max))
as
begin
declare @value varchar(50)
declare @commapos int, @lastpos int
set @commapos = 0
select @lastpos = @commapos, @commapos = charindex(@separator, @values, @lastpos+1)
while @commapos > @lastpos and @limit <> 0
begin
select @value = substring(@values, @lastpos+1, @commapos-@lastpos-1)
if @value <> '' begin
insert into @res select ltrim(rtrim(@value))
set @limit = @limit-1
end
select @lastpos = @commapos, @commapos = charindex(@separator, @values, @lastpos+1)
end
select @value = substring(@values, @lastpos+1, len(@values))
if @value <> '' insert into @res select ltrim(rtrim(@value))
return
end
GO
The parameters used are:
',' = delimiter
-1 = all values in the array, or N for just first N items
The solution is above; alternatives are below.
Or, if you fancy, a purely CTE approach not backed by any split function (watch comments with <<<)
;WITH T(value,delim) AS (
select 'abc,def,ghi', ',' --- <<< plug in the value array and delimiter here
), CTE(ItemData, Seq, I, J) AS (
SELECT
convert(varchar(max),null),
0,
CharIndex(delim, value)+1,
1--case left(value,1) when ' ' then 2 else 1 end
FROM T
UNION ALL
SELECT
convert(varchar(max), subString(value, J, I-J-1)),
Seq+1,
CharIndex(delim, value, I)+1, I
FROM CTE, T
WHERE I > 1 AND J > 0
UNION ALL
SELECT
SubString(value, J, 2000),
Seq+1,
CharIndex(delim, value, I)+1, 0
FROM CTE, T
WHERE I = 1 AND J > 1
)
--- <<< the final insert statement
insert into tblMyTable (Col1, Col2, Col3)
SELECT 'value1', ItemData, 'value3'
FROM CTE
WHERE Seq>0
XML approach
-- take an XML param
declare @xml xml
set @xml = '<root><item>abc</item><item>def</item><item>ghi</item></root>'
insert into tblMyTable (Col1, Col2, Col3)
SELECT 'value1', n.c.value('.','varchar(max)'), 'value3'
FROM @xml.nodes('/root/item') n(c)
-- heck, start with xml string
declare @xmlstr nvarchar(max)
set @xmlstr = '<root><item>abc</item><item>def</item><item>ghi</item></root>'
insert tblMyTable (Col1, Col2, Col3)
SELECT 'value1', n.c.value('.','varchar(max)'), 'value3'
FROM (select convert(xml,@xmlstr) x) y
cross apply y.x.nodes('/root/item') n(c)
In C# code, you would only use the 4 lines starting with "insert tblMyTable ..." and parameterize the @xmlstr variable.
Since you're using SQL 2008 and C#, your best bet is probably to use a table-valued parameter and then join to it.
This is better than passing a comma delimited string because you don't have to worry about quotes and commas in your values.
update
Another option is to use the xml data type.
Pre-SQL 2005, another option is to pass an XML string and use OPENXML. If you use an XmlWriter to create your string, it will take care of making sure your XML is valid.
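SQLite has no table-valued parameters, but the same "pass the array as a table, then select against it" idea can be sketched in Python by bulk-loading a temp table with executemany and inserting from it. Table and value names are illustrative.

```python
import sqlite3

values = ["abc", "def", "ghi", "jkl"]
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblMyTable (Col1 TEXT, Col2 TEXT, Col3 TEXT)")
# Stand-in for the table-valued parameter: a temp table holding the array.
conn.execute("CREATE TEMP TABLE param_values (value TEXT)")
conn.executemany("INSERT INTO param_values VALUES (?)", [(v,) for v in values])
# One set-based insert instead of one round-trip per item; no quoting worries.
conn.execute("""
INSERT INTO tblMyTable (Col1, Col2, Col3)
SELECT 'value1', value, 'value3' FROM param_values
""")
count = conn.execute("SELECT COUNT(*) FROM tblMyTable").fetchone()[0]
middle = [r[0] for r in
          conn.execute("SELECT Col2 FROM tblMyTable ORDER BY rowid").fetchall()]
```

Because the values travel as bound parameters, commas and quotes inside them need no escaping.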
-- This table is meant to represent the real table you
-- are using, so when you write this replace this one.
DECLARE @tblMyTable TABLE
(
Value1 VARCHAR(200)
, Value2 VARCHAR(200)
, Value3 VARCHAR(200)
);
-- You didn't say how you were going to get the string
-- array, so I can't do anything cool with that. I'm
-- just going to say we've made a table variable to
-- put those values in. A user-defined table type
-- might be in order here.
DECLARE @StringArray TABLE
(
Value VARCHAR(200)
);
INSERT INTO @StringArray
VALUES ('Jeremy'), ('snickered'), ('LittleBobbyTables'), ('xkcd Reference');
DECLARE @Value1 VARCHAR(200) = 'This guy --->';
DECLARE @Value3 VARCHAR(200) = ' <--- Rocks!';
-- I want to cross apply the two constant values, so
-- they go into a CTE, which makes them as good as
-- in a table.
WITH VariablesIntoTable AS
(
SELECT
@Value1 AS Value1
, @Value3 AS Value3
)
-- Cross applying the array couples every row in the
-- array (which is in a table variable) with the two
-- variable values.
, WithStringArray AS
(
SELECT
VariablesIntoTable.Value1
, StringArray.Value AS Value2
, VariablesIntoTable.Value3
FROM VariablesIntoTable
CROSS APPLY @StringArray StringArray
)
INSERT INTO @tblMyTable
-- The output clause allows you to see what you just
-- inserted without a separate select.
OUTPUT inserted.Value1, inserted.Value2, inserted.Value3
SELECT
WithStringArray.Value1
, WithStringArray.Value2
, WithStringArray.Value3
FROM WithStringArray

Insert default value when parameter is null

I have a table that has a column with a default value:
create table t (
value varchar(50) default ('something')
)
I'm using a stored procedure to insert values into this table:
create procedure t_insert (
@value varchar(50) = null
)
as
insert into t (value) values (@value)
The question is, how do I get it to use the default when #value is null? I tried:
insert into t (value) values ( isnull(@value, default) )
That obviously didn't work. Also tried a case statement, but that didn't fair well either. Any other suggestions? Am I going about this the wrong way?
Update: I'm trying to accomplish this without having to:
maintain the default value in multiple places, and
use multiple insert statements.
If this isn't possible, well I guess I'll just have to live with it. It just seems that something this should be attainable.
Note: my actual table has more than one column. I was just quickly writing an example.
Christophe,
The default value on a column is only applied if you don't specify the column in the INSERT statement.
Since you're explicitly listing the column in your INSERT statement, and explicitly setting it to NULL, that overrides the default value for that column.
What you need to do is "if a null is passed into your sproc then don't attempt to insert for that column".
This is a quick and nasty example of how to do that with some dynamic sql.
Create a table with some columns with default values...
CREATE TABLE myTable (
always VARCHAR(50),
value1 VARCHAR(50) DEFAULT ('defaultcol1'),
value2 VARCHAR(50) DEFAULT ('defaultcol2'),
value3 VARCHAR(50) DEFAULT ('defaultcol3')
)
Create a SPROC that dynamically builds and executes your insert statement based on input params
ALTER PROCEDURE t_insert (
@always VARCHAR(50),
@value1 VARCHAR(50) = NULL,
@value2 VARCHAR(50) = NULL,
@value3 VARCHAR(50) = NULL
)
AS
BEGIN
DECLARE @insertpart VARCHAR(500)
DECLARE @valuepart VARCHAR(500)
SET @insertpart = 'INSERT INTO myTable ('
SET @valuepart = 'VALUES ('
IF @value1 IS NOT NULL
BEGIN
SET @insertpart = @insertpart + 'value1,'
SET @valuepart = @valuepart + '''' + @value1 + ''', '
END
IF @value2 IS NOT NULL
BEGIN
SET @insertpart = @insertpart + 'value2,'
SET @valuepart = @valuepart + '''' + @value2 + ''', '
END
IF @value3 IS NOT NULL
BEGIN
SET @insertpart = @insertpart + 'value3,'
SET @valuepart = @valuepart + '''' + @value3 + ''', '
END
SET @insertpart = @insertpart + 'always) '
SET @valuepart = @valuepart + '''' + @always + ''')'
--print @insertpart + @valuepart
EXEC (@insertpart + @valuepart)
END
The following 2 commands should give you an example of what you want as your outputs...
EXEC t_insert 'alwaysvalue'
SELECT * FROM myTable
EXEC t_insert 'alwaysvalue', 'val1'
SELECT * FROM myTable
EXEC t_insert 'alwaysvalue', 'val1', 'val2', 'val3'
SELECT * FROM myTable
I know this is a very convoluted way of doing what you need to do.
You could probably equally select the default value from INFORMATION_SCHEMA for the relevant columns, but to be honest I might consider just adding the default value to the parameter at the top of the procedure.
Try an if statement ...
if @value is null
insert into t (value) values (default)
else
insert into t (value) values (@value)
As far as I know, the default value is only inserted when you don't specify a value in the insert statement. So, for example, you'd need to do something like the following in a table with three fields (value2 being defaulted)
INSERT INTO t (value1, value3) VALUES ('value1', 'value3')
And then value2 would be defaulted. Maybe someone will chime in on how to accomplish this for a table with a single field.
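For the single-column case mentioned above, you can omit the column list entirely with DEFAULT VALUES. SQLite accepts the same syntax, so here is a runnable sketch in Python (the table shape follows the question).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (value TEXT DEFAULT 'something')")
# No column list, no VALUES list: every column takes its default.
conn.execute("INSERT INTO t DEFAULT VALUES")
row = conn.execute("SELECT value FROM t").fetchone()
```

The inserted row holds the column default because no value was supplied at all.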
Probably not the most performance friendly way, but you could create a scalar function that pulls from the information schema with the table and column name, and then call that using the isnull logic you tried earlier:
CREATE FUNCTION GetDefaultValue
(
@TableName VARCHAR(200),
@ColumnName VARCHAR(200)
)
RETURNS VARCHAR(200)
AS
BEGIN
-- you'd probably want to have different functions for different data types if
-- you go this route
RETURN (SELECT TOP 1 REPLACE(REPLACE(REPLACE(COLUMN_DEFAULT, '(', ''), ')', ''), '''', '')
FROM information_schema.columns
WHERE table_name = @TableName AND column_name = @ColumnName)
END
GO
And then call it like this:
INSERT INTO t (value) VALUES ( ISNULL(@value, (SELECT dbo.GetDefaultValue('t', 'value'))) )
This is the best I can come up with. It prevents SQL injection, uses only one INSERT statement, and can be extended with more CASE statements.
CREATE PROCEDURE t_insert ( @value varchar(50) = null )
as
DECLARE @sQuery NVARCHAR(MAX);
SET @sQuery = N'
insert into t (value) values ( ' +
CASE WHEN @value IS NULL THEN ' default ' ELSE ' @value ' END + ' );';
EXEC sp_executesql
@stmt = @sQuery,
@params = N'@value varchar(50)',
@value = @value;
GO
chrisofspades,
As far as I know that behavior is not compatible with the way the db engine works, but there is a simple (I don't know if elegant, but performant) solution to achieve your two objectives of NOT having to:
maintain the default value in multiple places, and
use multiple insert statements.
The solution is to use two fields: one nullable for inserts, and a computed one for selections:
CREATE TABLE t (
insValue VARCHAR(50) NULL
, selValue AS ISNULL(insValue, 'something')
)
DECLARE @d VARCHAR(10)
INSERT INTO t (insValue) VALUES (@d) -- null
SELECT selValue FROM t
This method even lets you centralize the management of business defaults in a parameter table, with an ad hoc function to fetch them, e.g. changing:
selValue AS ISNULL(insValue, 'something')
to
selValue AS ISNULL(insValue, getDef(t,1))
I hope this helps.
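SQLite (3.31+) supports the same computed-column idea via generated columns, so the approach above can be tried in Python as follows; the names mirror the answer's example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# selValue is derived, so the default lives in exactly one place.
conn.execute("""
CREATE TABLE t (
    insValue TEXT,
    selValue TEXT GENERATED ALWAYS AS (COALESCE(insValue, 'something')) VIRTUAL
)
""")
conn.execute("INSERT INTO t (insValue) VALUES (NULL)")
conn.execute("INSERT INTO t (insValue) VALUES ('given')")
vals = [r[0] for r in
        conn.execute("SELECT selValue FROM t ORDER BY rowid").fetchall()]
```

NULL inserts surface the default on read; supplied values pass through unchanged.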
The best option by far is to create an INSTEAD OF INSERT trigger for your table, removing the default values from your table, and moving them into the trigger.
This will look like the following:
create trigger dbo.OnInsertIntoT
ON TablenameT
INSTEAD OF INSERT
AS
insert into TablenameT
select
IsNull(column1 ,<default_value>)
,IsNull(column2 ,<default_value>)
...
from inserted
This makes it work NO MATTER what code tries to insert NULLs into your table, avoids stored procedures, is completely transparent, and you only need to maintain your default values in one place, namely this trigger.
You can use default values for the parameters of stored procedures:
CREATE PROCEDURE MyTestProcedure ( @MyParam1 INT,
@MyParam2 VARCHAR(20) = 'ABC',
@MyParam3 INT = NULL)
AS
BEGIN
-- Procedure body here
END
If @MyParam2 is not supplied, it will have the 'ABC' value...
You can use the COALESCE function in MS SQL.
INSERT INTO t ( value ) VALUES( COALESCE(@value, 'something') )
Personally, I'm not crazy about this solution as it is a maintenance nightmare if you want to change the default value.
My preference would be Mitchel Sellers proposal, but that doesn't work in MS SQL. Can't speak to other SQL dbms.
Don't specify the column or value when inserting and the DEFAULT constaint's value will be substituted for the missing value.
I don't know how this would work in a single column table. I mean: it would, but it wouldn't be very useful.
Hoping to help newbies (like me) who use upsert statements in MSSQL. (I used this code in my project on MSSQL 2008 R2 and it works perfectly. It may not be best practice; execution statistics show 15 milliseconds for the insert statement.)
Just set your column's "Default value or binding" field to whatever you decide to use as the default, set the column to not accept NULL values in the design menu, and create this stored proc:
USE [YourTable]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROC [dbo].[YourTableName]
@Value smallint,
@Value1 bigint,
@Value2 varchar(50),
@Value3 varchar(20),
@Value4 varchar(20),
@Value5 date,
@Value6 varchar(50),
@Value7 tinyint,
@Value8 tinyint,
@Value9 varchar(20),
@Value10 varchar(20),
@Value11 varchar(250),
@Value12 tinyint,
@Value13 varbinary(max)
-- in my project @Value13 is a photo column which storing as byte array..
--And i planned to use a default photo when there is no photo passed
--to sp to store in db..
AS
--SET NOCOUNT ON
IF @Value = 0 BEGIN
INSERT INTO YourTableName (
[TableColumn1],
[TableColumn2],
[TableColumn3],
[TableColumn4],
[TableColumn5],
[TableColumn6],
[TableColumn7],
[TableColumn8],
[TableColumn9],
[TableColumn10],
[TableColumn11],
[TableColumn12],
[TableColumn13]
)
VALUES (
@Value1,
@Value2,
@Value3,
@Value4,
@Value5,
@Value6,
@Value7,
@Value8,
@Value9,
@Value10,
@Value11,
@Value12,
default
)
SELECT SCOPE_IDENTITY() As InsertedID
END
ELSE BEGIN
UPDATE YourTableName SET
[TableColumn1] = @Value1,
[TableColumn2] = @Value2,
[TableColumn3] = @Value3,
[TableColumn4] = @Value4,
[TableColumn5] = @Value5,
[TableColumn6] = @Value6,
[TableColumn7] = @Value7,
[TableColumn8] = @Value8,
[TableColumn9] = @Value9,
[TableColumn10] = @Value10,
[TableColumn11] = @Value11,
[TableColumn12] = @Value12,
[TableColumn13] = @Value13
WHERE [TableColumn] = @Value
END
GO
With enough defaults on a table, you can simply say:
INSERT t DEFAULT VALUES
Note that this is quite an unlikely case, however.
I've only had to use it once in a production environment. We had two closely related tables, and needed to guarantee that neither table had the same UniqueID, so we had a separate table which just had an identity column, and the best way to insert into it was with the syntax above.
The most succinct solution I could come up with is to follow the insert with an update for the column with the default:
IF OBJECT_ID('tempdb..#mytest') IS NOT NULL DROP TABLE #mytest
CREATE TABLE #mytest(f1 INT DEFAULT(1), f2 INT)
INSERT INTO #mytest(f1,f2) VALUES (NULL,2)
INSERT INTO #mytest(f1,f2) VALUES (3,3)
UPDATE #mytest SET f1 = DEFAULT WHERE f1 IS NULL
SELECT * FROM #mytest
The pattern I generally use is to create the row without the columns that have default constraints, then update the columns to replace the default values with supplied values (if not null).
Assuming col1 is the primary key and col4 and col5 have a default constraint:
-- create initial row with default values
insert table1 (col1, col2, col3)
values (@col1, @col2, @col3)
-- update default values, if supplied
update table1
set col4 = isnull(@col4, col4),
col5 = isnull(@col5, col5)
where col1 = @col1
If you want the actual values defaulted into the table ...
-- create initial row with default values
insert table1 (col1, col2, col3)
values (@col1, @col2, @col3)
-- create a container to hold the values actually inserted into the table
declare @inserted table (col4 datetime, col5 varchar(50))
-- update default values, if supplied
update table1
set col4 = isnull(@col4, col4),
col5 = isnull(@col5, col5)
output inserted.col4, inserted.col5 into @inserted (col4, col5)
where col1 = @col1
-- get the values defaulted into the table (optional)
select @col4 = col4, @col5 = col5 from @inserted
Cheers...
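The insert-then-update pattern above translates directly to SQLite: create the row without the defaulted columns, then overwrite them only where a value was actually supplied. This Python sketch uses illustrative names and defaults.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE table1 (
    col1 INTEGER PRIMARY KEY,
    col2 TEXT,
    col4 TEXT DEFAULT 'd4',
    col5 TEXT DEFAULT 'd5'
)
""")
# Stand-ins for the procedure parameters; col4 was not supplied.
p_col1, p_col2, p_col4, p_col5 = 1, "x", None, "given"
# Step 1: insert without the defaulted columns, so the defaults apply.
conn.execute("INSERT INTO table1 (col1, col2) VALUES (?, ?)", (p_col1, p_col2))
# Step 2: overwrite only the columns whose parameter is non-NULL.
conn.execute("""
UPDATE table1 SET col4 = COALESCE(?, col4), col5 = COALESCE(?, col5)
WHERE col1 = ?
""", (p_col4, p_col5, p_col1))
row = conn.execute("SELECT col4, col5 FROM table1").fetchone()
```

col4 keeps its table default while col5 takes the supplied value.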
The easiest way to do this is to modify the table declaration to be
CREATE TABLE Demo
(
MyColumn VARCHAR(10) NOT NULL DEFAULT 'Me'
)
Now, in your stored procedure you can do something like.
CREATE PROCEDURE InsertDemo
@MyColumn VARCHAR(10) = null
AS
INSERT INTO Demo (MyColumn) VALUES(@MyColumn)
However, this method ONLY works if you can't have a null, otherwise, your stored procedure would have to use a different form of insert to trigger a default.
The questioner needs to learn the difference between an empty value provided and null.
Others have posted the right basic answer: A provided value, including a null, is something and therefore it's used. Default ONLY provides a value when none is provided. But the real problem here is lack of understanding of the value of null.