Still returning NULL when default value added - sql

I'm trying to replace NULL with a default value of 'NO', but even after I execute the statement, the column still displays NULL when I view the data. I've already tried dropping the constraint on the column, but it did not work.
/*------------------------
use AuntieB
--alter table charity
-- add STORE char(10);
--update charity
--set STORE = 'YES'
--where Name = 'Salvation Army' or Name = 'Mother Wattles' or Name = 'Fresh Start Charity'
alter table charity
add default 'No' for STORE;
select * from charity
------------------------*/
CharityID Name Address City State Zip Phone ContactID STORE
----------- -------------------- ------------------------------ ------------------------------ ----- ---------- ------------ ----------- ----------
1000 St. Francis Home 45875 West. Hill St. Utica MI 48045 586-795-3486 1025 NULL
1001 Helping Hands 98563 Stadium Detriot MI 48026 313-978-6589 1030 NULL
1002 Boy Scouts 1155 E. Long Lake Rd Troy MI 48085 248-253-9596 1036 NULL
1003 Focus Hope 54362 Grand River Detroit MI 48312 313-478-7895 1041 NULL
1004 Fresh Start Charity 12569 Gratiot Ave. Roseville MI 48084 555-555-2035 1046 YES
1005 St. John Hospital 59652 Shelby Rd. Shelby Twp. MI 48317 586-569-6987 1050 NULL
1006 Salvation Army 56231 Somewhere Blvd. Eastpointe MI 48021 586-555-1212 1056 YES
1007 LA Angels Traders 2468 Halo Park Dr South Los Angelas MI 90234 903-965-3556 2015 NULL
1008 Purple Heart 28765 Van Dyke Sterling Heights MI 48313 586-732-8723 1061 NULL
1009 St. Raja Home 45875 West. Hill St. Utica MI 48045 586-795-3486 1062 NULL
1010 Mother Wattles 4568 Griswold Detroit MI 48205 313-478-9856 2016 YES
1011 Ron McDonald House 649 West Road Utica MI 48045 586-795-9979 1030 NULL
1012 St. Jude 262 Danny Thomas Place Memphis MI 38105 800-822-6344 1030 NULL
(13 rows affected)

From the docs:
When a DEFAULT definition is added to an existing column in a table, by default, the Database Engine applies the new default only to new rows of data that are added to the table. Existing data that was inserted by using the previous DEFAULT definition is unaffected. However, when you add a new column to an existing table, you can specify that the Database Engine insert the default value (specified by the DEFAULT definition) instead of a null value, into the new column for the existing rows in the table.
So when adding a DEFAULT definition to an existing column, you need to fill the existing rows yourself.
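Putting that together, a minimal sketch for the table above (the constraint name is my own choice, not from the original post):

```sql
-- Add the default so future INSERTs get 'NO'...
ALTER TABLE charity
ADD CONSTRAINT df_charity_store DEFAULT 'NO' FOR STORE;

-- ...then backfill the rows that already exist.
UPDATE charity
SET STORE = 'NO'
WHERE STORE IS NULL;
```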

You cannot add a new default constraint to an existing column and update the existing NULL values in the same DDL operation. You'll need to explicitly update the NULL values to the desired value afterward.
You can, however, add a new column with a default constraint and apply the default value to all existing rows by specifying the WITH VALUES clause:
ALTER TABLE dbo.charity
ADD store char(10) NULL
CONSTRAINT df_charity_STORE DEFAULT 'No'
WITH VALUES;
This method allows you to add a new NOT NULL column too.
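For example, a sketch of adding the column as NOT NULL (column and constraint names are illustrative): when the new column is NOT NULL, the default is applied to existing rows even without WITH VALUES, because every row must receive a value.

```sql
-- WITH VALUES is implied for a NOT NULL column:
-- every existing row immediately gets 'No'.
ALTER TABLE dbo.charity
    ADD store char(10) NOT NULL
    CONSTRAINT df_charity_STORE DEFAULT 'No';
```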
If you are running the Enterprise (or Developer) edition of SQL Server, WITH VALUES is a metadata-only operation, which avoids internally updating every row in the table during the operation. In lesser editions, the operation physically updates each row.

There are two options here. You can update all NULL values like this:
update charity
set STORE = 'NO'
where STORE is null
Or, if you only want to replace NULL values when selecting them, you can use the ISNULL function:
select ISNULL(STORE,'NO') from charity
Also, adding a constraint with a default value to an existing column doesn't update existing data; it only sets the default value for newly inserted rows.

Related

Delete table column content

I have the following table:
car     score  description
Opel    30     43
Volvo   500    434
Kia     50     3
Toyota  4      4
Mazda   5000   4
How can I delete all the content of the column score without changing the table structure?
Expected result:
car     score  description
Opel           43
Volvo          434
Kia            3
Toyota         4
Mazda          4
As pointed out by Bergi, you have the option of setting all values in the column to NULL or 0, depending on what you need, or you can delete the entire column.
Solution 1:
UPDATE cars SET score = NULL;
or
UPDATE cars SET score = 0;
This will preserve the score column but set all the values to NULL or 0 respectively. Note that NULL and 0 are different things. NULL means the field is empty but 0 means the field has the numerical value 0.
If you don't need the score column anymore, you can delete it like this:
ALTER TABLE cars
DROP COLUMN score;
This will delete the column score and you will not be able to use it anymore.
I think the answer by gowner is ok.
However, if you have no permission to alter the table structure, you cannot delete the column. And if the score field is not nullable, you cannot update the field to NULL.
You must also be careful: updating the score to 0 may not be ideal. 0 may have a different meaning in your table. Maybe the minimum score is 1 and 0 is not a possible value in the field, or there is a consensus in your organization that -1 means "no value". These conventions should be reflected in the default constraint or in your organization's guidelines.
To be safe, I would prefer:
UPDATE cars SET score = DEFAULT;
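For that to work, a default must actually be defined on the column. A T-SQL sketch (constraint name and value are illustrative, not from the original post):

```sql
-- Define the column default once...
ALTER TABLE cars
ADD CONSTRAINT df_cars_score DEFAULT 0 FOR score;

-- ...then DEFAULT resolves to it for every row.
-- Without a defined default, DEFAULT assigns NULL to a nullable
-- column and raises an error on a NOT NULL column.
UPDATE cars SET score = DEFAULT;
```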

Merge Output - Capture Column Name Updated

I am merging a table and want to retain the previous value in a separate table, along with the column that was updated.
I have got it working to retain the values, but want to know how I can retain the column name.
Existing table: tTable1
qID qUnits qDateTime
1001 4900 2022-09-13 12:00:00.000
1002 6800 2022-09-14 15:00:00.000
1003 7400 2022-09-14 13:00:00.000
Temp Table (holds updated values): #updateValues
qID qOriginalUnit qUpdatedUnit
1001 4900 8900
1002 6800 13400
1002 7400 16500
The code I'm currently using to output the existing value before the update:
DECLARE @auditRecords TABLE(qID INT, PreviousValue VARCHAR(100));
MERGE tTable1 AS TARGET
USING #updateValues AS SOURCE
ON (SOURCE.qID = TARGET.qID)
WHEN MATCHED
THEN UPDATE SET
TARGET.qUnits = SOURCE.qUpdatedUnit
OUTPUT SOURCE.qID, SOURCE.qOriginalUnit INTO @auditRecords(qID, PreviousValue);
I'd like to be able to include the column name that was updated, in this instance qUnits, so the output would look like the following:
qID PreviousValue UpdatedColumn
1001 4900 qUnits
1002 6800 qUnits
1003 7400 qUnits
In this particular example, where you are only updating a single column, you can just hardcode the column name in the OUTPUT results:
OUTPUT SOURCE.qID,
SOURCE.qOriginalUnit,
'qUnits'
INTO @auditRecords(qID, PreviousValue, UpdatedColumn)
If you are updating multiple columns it becomes a bit more cumbersome as you need to compare values column by column.
You could also move this into an UPDATE TRIGGER in which you have access to the results of the COLUMNS_UPDATED function.
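If you do update multiple columns in one MERGE, one approach (a sketch, assuming the audit table has an UpdatedColumn column) is to compare the deleted and inserted pseudo-tables per column inside the OUTPUT clause:

```sql
-- One CASE expression per updated column; a NULL-safe
-- comparison would need extra IS NULL checks.
OUTPUT SOURCE.qID,
       deleted.qUnits,
       CASE WHEN deleted.qUnits <> inserted.qUnits
            THEN 'qUnits' END
INTO @auditRecords(qID, PreviousValue, UpdatedColumn);
```

With more than one column this yields one row per match, so a truly per-column audit would need an UNPIVOT or one OUTPUT row per changed column via a trigger instead.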

How to load grouped data with SSIS

I have a tricky flat file data source. The data is grouped, like this:
Country  City
U.S.     New York
         Washington
         Baltimore
Canada   Toronto
         Vancouver
But I want it to be this format when it's loaded in to the database:
Country  City
U.S.     New York
U.S.     Washington
U.S.     Baltimore
Canada   Toronto
Canada   Vancouver
Has anyone met such a problem before? Any idea how to deal with it?
The only idea I have now is to use a cursor, but it is just too slow.
Thank you!
The answer by cha will work, but here is another in case you need to do it in SSIS without temporary/staging tables:
You can run your dataflow through a Script Transformation that uses a DataFlow-level variable. As each row comes in the script checks the value of the Country column.
If it has a non-blank value, then populate the variable with that value, and pass it along in the dataflow.
If Country has a blank value, then overwrite it with the value of the variable, which will be last non-blank Country value you got.
EDIT: I looked up your error message and learned something new about Script Components (the Data Flow tool, as opposed to Script Tasks, the Control Flow tool):
The collection of ReadWriteVariables is only available in the
PostExecute method to maximize performance and minimize the risk of
locking conflicts. Therefore you cannot directly increment the value
of a package variable as you process each row of data. Increment the
value of a local variable instead, and set the value of the package
variable to the value of the local variable in the PostExecute method
after all data has been processed. You can also use the
VariableDispenser property to work around this limitation, as
described later in this topic. However, writing directly to a package
variable as each row is processed will negatively impact performance
and increase the risk of locking conflicts.
That comes from this MSDN article, which also has more information about the VariableDispenser work-around, if you want to go that route. Apparently I misled you above when I said you can set the value of the package variable in the script: you have to use a variable that is local to the script, and then assign it to the package variable in the PostExecute method. I can't tell from the article whether that means you will not be able to read the variable in the script; if that's the case, the VariableDispenser would be the only option. Or you could create another variable that the script has read-only access to, and set its value to an expression so that it always has the value of the read-write variable. That might work.
Yes, it is possible. First you need to load the data to a table with an IDENTITY column:
-- drop table #t
CREATE TABLE #t (id INTEGER IDENTITY PRIMARY KEY,
Country VARCHAR(20),
City VARCHAR(20))
INSERT INTO #t(Country, City)
SELECT a.Country, a.City
FROM OPENROWSET( BULK 'c:\import.txt',
FORMATFILE = 'c:\format.fmt',
FIRSTROW = 2) AS a;
select * from #t
The result will be:
id Country City
----------- -------------------- --------------------
1 U.S. New York
2 Washington
3 Baltimore
4 Canada Toronto
5 Vancouver
And now with a bit of recursive CTE magic you can populate the missing details:
;WITH a as(
SELECT Country
,City
,ID
FROM #t WHERE ID = 1
UNION ALL
SELECT COALESCE(NULLIF(LTrim(#t.Country), ''),a.Country)
,#t.City
,#t.ID
FROM a INNER JOIN #t ON a.ID+1 = #t.ID
)
SELECT * FROM a
OPTION (MAXRECURSION 0)
Result:
Country City ID
-------------------- -------------------- -----------
U.S. New York 1
U.S. Washington 2
U.S. Baltimore 3
Canada Toronto 4
Canada Vancouver 5
Update:
As Tab Alleman suggested below the same result can be achieved without the recursive query:
SELECT ID
, COALESCE(NULLIF(LTrim(a.Country), ''), (SELECT TOP 1 Country FROM #t t WHERE t.ID < a.ID AND LTrim(t.Country) <> '' ORDER BY t.ID DESC))
, City
FROM #t a
BTW, the format file for your input data is this (if you want to try the scripts save the input data as c:\import.txt and the format file below as c:\format.fmt):
9.0
2
1 SQLCHAR 0 11 "" 1 Country SQL_Latin1_General_CP1_CI_AS
2 SQLCHAR 0 100 "\r\n" 2 City SQL_Latin1_General_CP1_CI_AS

Updating a column in postgresql based on a specific date

I'm wondering if I can use a trigger with PostgreSQL to update a column. I have a column with a date type, and on that date I would like to update another column in another table.
To make it more clear
I do have 2 tables
Works_locations table
walphid wnumberid locationid
DRAR 1012 101
PAPR 1013 105
PAHF 1014 105
ETAR 1007 102
DRWS 1007 102
The locationid attribute refers to 'locid' in the Locations table, shown below:
locid locname
101 Storage
102 Gallary A
103 Gallary B
104 Gallary C
105 Lobby
Exhibition table
exhid exhname description strtdate endDate
101 Famous Blah Blah 2013-07-15 2013-10-13
and here are bridge table to connect the exhibition table with the locations table
locationsid exhibitid
102 102
103 101
104 103
Now, each exhibition has some works, and each work should be placed in one of the locations.
Based on the 'endDate' column in the Exhibition table, which is a date data type, I would like to update the 'locationid' column in the Works_locations table so the works are placed in another location.
In other words: each exhibition has some works, and these works are placed in one of the locations. At the end date of the exhibition, I would like to change the locations; specifically, I would like the works to be returned to the storage.
Any idea how I would do this with PostgreSQL?
Regards
PostgreSQL does not have a built-in task scheduler. Even if there were a scheduler, the commands it ran wouldn't be triggers; they'd just be procedures run by a scheduler.
You can't write triggers that fire at some arbitrary time.
You will need to use cron, Task Scheduler, PgAgent, or similar to run the statements at the desired time. Or you could write a script that checks when the next event is, sleeps until then, runs the desired command and marks that event as done, and sleeps until the next event.
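The statement such a scheduled job would run might look like this (a sketch only: the bridge table name location_exhibition and the 'Storage' lookup are assumptions based on the tables in the question):

```sql
-- Run daily by cron/pgAgent: move works belonging to
-- exhibitions that have ended back to the Storage location.
UPDATE works_locations wl
SET locationid = (SELECT locid FROM locations
                  WHERE locname = 'Storage')
WHERE wl.locationid IN (
    SELECT le.locationsid
    FROM location_exhibition le
    JOIN exhibition e ON e.exhid = le.exhibitid
    WHERE e.endDate <= CURRENT_DATE
);
```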

sql combine two columns that might have null values

This should be an easy thing to do, but I seem to keep getting an extra space. Basically, what I am trying to do is combine multiple columns into one column, but every single one of these columns might be NULL as well. When I combine them, I also want them separated by a space (' ').
What I created is the following query:
select 'All'= ISNULL(Name+' ','')+ISNULL(City+' ','')+ISNULL(CAST(Age as varchar(50))+' ','') from zPerson
and the result is:
All
John Rock Hill 23
Munchen 29
Julie London 35
Fort Mill 27
Bob 29
As you can see: there is an extra space when the name is null. I don't want that.
The initial table is :
id Name City Age InStates AllCombined
1 John Rock Hill 23 1 NULL
2 Munchen 29 0 NULL
3 Julie London 35 0 NULL
4 Fort Mill 27 1 NULL
5 Bob 29 1 NULL
Any ideas?
select 'All' = LTRIM(ISNULL(Name+' ','')+ISNULL(City+' ','')+ISNULL(CAST(Age as varchar(50))+' ','')) from zPerson
See LTRIM().
In the data you have posted, the Name column contains no NULLs. Instead, it contains empty strings, so ISNULL(Name+' ','') evaluates to a single space.
The simplest resolution is to change the data so that empty strings become NULL. This is appropriate in your case, since that is clearly your intention.
UPDATE zPerson SET Name=NULL WHERE Name=''
Repeat this for your City and Age fields if necessary.
Use TRIM() around the ISNULL() function, or LTRIM() around the entire selected term.
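On SQL Server 2017 and later (and in PostgreSQL/MySQL), CONCAT_WS sidesteps the problem entirely, because it skips NULL arguments and only inserts the separator between non-NULL values. A sketch, assuming the empty strings are first converted to NULL with NULLIF:

```sql
-- CONCAT_WS(' ', ...) joins non-NULL arguments with a single
-- space and skips NULLs, so no stray separators appear.
SELECT CONCAT_WS(' ',
                 NULLIF(Name, ''),
                 NULLIF(City, ''),
                 CAST(Age AS varchar(50))) AS [All]
FROM zPerson;
```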