I am merging into a table and want to retain the previous value in a separate table, along with the name of the column that was updated.
I have got it working to retain the values, but want to know how I can retain the column name.
Existing table: tTable1
qID qUnits qDateTime
1001 4900 2022-09-13 12:00:00.000
1002 6800 2022-09-14 15:00:00.000
1003 7400 2022-09-14 13:00:00.000
Temp Table (holds updated values): #updateValues
qID qOriginalUnit qUpdatedUnit
1001 4900 8900
1002 6800 13400
1003 7400 16500
The code I'm currently using to output what the existing value was before the update:
DECLARE @auditRecords TABLE(qID INT, PreviousValue VARCHAR(100));
MERGE tTable1 AS TARGET
USING #updateValues AS SOURCE
ON (SOURCE.qID = TARGET.qID)
WHEN MATCHED
THEN UPDATE SET
TARGET.qUnits = SOURCE.qUpdatedUnit
OUTPUT SOURCE.qID, SOURCE.qOriginalUnit INTO @auditRecords(qID, PreviousValue);
I'd like to include the name of the column that was updated, in this instance qUnits, so the output would look like the following:
qID PreviousValue UpdatedColumn
1001 4900 qUnits
1002 6800 qUnits
1003 7400 qUnits
In this particular example, where you are only updating a single column, you can just hardcode the column name in the OUTPUT results (after adding an UpdatedColumn column to the audit table variable):
OUTPUT SOURCE.qID,
SOURCE.qOriginalUnit,
'qUnits'
INTO @auditRecords(qID, PreviousValue, UpdatedColumn)
If you are updating multiple columns it becomes a bit more cumbersome as you need to compare values column by column.
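For example, a minimal sketch of that comparison using the deleted and inserted pseudo-tables that the OUTPUT clause exposes (qDateTime is assumed here purely for illustration; also note that a CASE like this records only the first changed column per row):
OUTPUT SOURCE.qID,
       SOURCE.qOriginalUnit,
       CASE
           WHEN deleted.qUnits <> inserted.qUnits THEN 'qUnits'
           WHEN deleted.qDateTime <> inserted.qDateTime THEN 'qDateTime'
       END
INTO @auditRecords(qID, PreviousValue, UpdatedColumn)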
You could also move this into an UPDATE trigger, in which you have access to the COLUMNS_UPDATED() function.
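A minimal trigger sketch along those lines (auditRecords is assumed here to be a permanent audit table; UPDATE() is the single-column convenience form of COLUMNS_UPDATED(), and both report columns referenced in the SET list, not necessarily changed values):
CREATE TRIGGER trg_tTable1_audit
ON tTable1
AFTER UPDATE
AS
BEGIN
    IF UPDATE(qUnits)
        INSERT INTO auditRecords (qID, PreviousValue, UpdatedColumn)
        SELECT d.qID, d.qUnits, 'qUnits'
        FROM deleted AS d;
END;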
I have a very big transaction table on DB2 v11, and I need to query a subset of it as efficiently as possible. All I need is the total count of the set (not known in advance; it's based on criteria, let's say 1 day), the ID of the first record, and the ID of the last record.
The old code fetched the entire table, then used just the first record's ID, the last record's ID, and the size, making no use of the rest. Now this code is timing out. It's a complex query with several joins.
Is there a way to fetch the size of the set, the first record, and the last record all in one SELECT query?
I've read that reordering the list in order to fetch the first record (fetch with DESC, then change to ASC) is not efficient.
sample table 1 TRANSACTION_RECORDS:
tdID TIMESTAMP name
-------------------------------
123 2020-03-31 john
234 2020-03-31 dan
456 2020-03-01 Eve
675 2020-04-01 joy
sample table 2 TRANSACTION_TYPE:
invoiceId tdID account
------------------------------
897 123 abc
898 123 def
877 234 mnc
899 456 opp
Sample query
select Min(TR.tdID), Max(TR.tdID)
from TRANSACTION_RECORDS TR
join TRANSACTION_TYPE TT
on TR.tdID=tt.tdID
WHERE Date(TR.TIMESTAMP) = '2020-03-31'
group by tr.tdID
order by TR.tdID ASC
This results in multiple rows (but it requires the GROUP BY):
123,123
234,234
456,456
What I want is:
123,456
As I mentioned in the comments, for this query you need neither GROUP BY nor ORDER BY; just do:
select Min(TR.tdID), Max(TR.tdID)
from TRANSACTION_RECORDS TR
join TRANSACTION_TYPE TT
on TR.tdID=tt.tdID
WHERE Date(TR.TIMESTAMP) = '2020-03-31'
It should work as expected.
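Since you also need the total count of the set, it can come back in the same statement. A sketch (COUNT(DISTINCT ...) is used here because the join can multiply rows when one tdID has several TRANSACTION_TYPE rows):
select count(distinct TR.tdID) as set_size,
       Min(TR.tdID) as first_id,
       Max(TR.tdID) as last_id
from TRANSACTION_RECORDS TR
join TRANSACTION_TYPE TT
on TR.tdID = TT.tdID
WHERE Date(TR.TIMESTAMP) = '2020-03-31'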
I'm trying to replace NULL with a default value of 'NO', but even after I execute the statement it still displays NULL when I view the data. I've already tried dropping the constraint on the column, but that did not work.
/*------------------------
use AuntieB
--alter table charity
-- add STORE char(10);
--update charity
--set STORE = 'YES'
--where Name = 'Salvation Army' or Name = 'Mother Wattles' or Name = 'Fresh Start Charity'
alter table charity
add default 'No' for STORE;
select * from charity
------------------------*/
CharityID Name Address City State Zip Phone ContactID STORE
----------- -------------------- ------------------------------ ------------------------------ ----- ---------- ------------ ----------- ----------
1000 St. Francis Home 45875 West. Hill St. Utica MI 48045 586-795-3486 1025 NULL
1001 Helping Hands 98563 Stadium Detriot MI 48026 313-978-6589 1030 NULL
1002 Boy Scouts 1155 E. Long Lake Rd Troy MI 48085 248-253-9596 1036 NULL
1003 Focus Hope 54362 Grand River Detroit MI 48312 313-478-7895 1041 NULL
1004 Fresh Start Charity 12569 Gratiot Ave. Roseville MI 48084 555-555-2035 1046 YES
1005 St. John Hospital 59652 Shelby Rd. Shelby Twp. MI 48317 586-569-6987 1050 NULL
1006 Salvation Army 56231 Somewhere Blvd. Eastpointe MI 48021 586-555-1212 1056 YES
1007 LA Angels Traders 2468 Halo Park Dr South Los Angelas MI 90234 903-965-3556 2015 NULL
1008 Purple Heart 28765 Van Dyke Sterling Heights MI 48313 586-732-8723 1061 NULL
1009 St. Raja Home 45875 West. Hill St. Utica MI 48045 586-795-3486 1062 NULL
1010 Mother Wattles 4568 Griswold Detroit MI 48205 313-478-9856 2016 YES
1011 Ron McDonald House 649 West Road Utica MI 48045 586-795-9979 1030 NULL
1012 St. Jude 262 Danny Thomas Place Memphis MI 38105 800-822-6344 1030 NULL
(13 rows affected)
From the docs:
When a DEFAULT definition is added to an existing column in a table, by default, the Database Engine applies the new default only to new rows of data that are added to the table. Existing data that was inserted by using the previous DEFAULT definition is unaffected. However, when you add a new column to an existing table, you can specify that the Database Engine insert the default value (specified by the DEFAULT definition) instead of a null value, into the new column for the existing rows in the table.
So when adding a DEFAULT definition to an existing column, you need to fill the existing rows yourself.
You cannot add a new default constraint to an existing column and update existing NULL values in the same DDL operation. You'll need to explicitly update the NULL values to the desired value afterward.
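A sketch of that two-step approach for the existing STORE column (the constraint name is assumed):
ALTER TABLE dbo.charity
ADD CONSTRAINT df_charity_store DEFAULT 'No' FOR STORE;

UPDATE dbo.charity
SET STORE = 'No'
WHERE STORE IS NULL;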
You can, however, add a new column with a default constraint and apply the default value to all existing rows by specifying the WITH VALUES clause:
ALTER TABLE dbo.charity
ADD store char(10) NULL
CONSTRAINT df_charity_STORE DEFAULT 'No'
WITH VALUES;
This method allows you to add a new NOT NULL column too.
If you are running the Enterprise (or Developer) edition of SQL Server, WITH VALUES is a metadata-only operation, which avoids physically updating every row in the table during the operation. In lesser editions, the operation physically updates each row in the table.
There are two options here. You can update all NULL values like this:
update charity
set STORE = 'NO'
where STORE is null
Or, if you want to replace NULL values only when selecting them, you can use the ISNULL function:
select ISNULL(STORE,'NO') from charity
Also, adding a constraint with a default value to an existing column doesn't update existing data. It only sets the default value for newly inserted rows.
Please help me with the below:
The table AR_X_LO is an SCD Type 2 table. There was a bug in the ETL, with the result that changed records have not been end-dated, e.g.
AR_X_LO_TP_ID AR_ID EFF_TMS LO_ID RANK END_TMS ORIG_SRC_STM_ID RT_TMS
------------- ------- ------------------- -------- ---- ---------- --------------- ----------
802 6751231 2016-06-08 00:00:00 39748325 1 NULL 9643 2016-06-09
802 6751231 2015-05-02 00:00:00 29496916 1 NULL 9643 2015-05-04
The ETL was supposed to end-date the changed row with the EFF_TMS of the new row minus 1 day:
AR_X_LO_TP_ID AR_ID EFF_TMS LO_ID RANK END_TMS ORIG_SRC_STM_ID RT_TMS
------------- ------- ------------------- -------- ---- ---------- --------------- ----------
802 6751231 2016-06-08 00:00:00 39748325 1 NULL 9643 2016-06-09
802 6751231 2015-05-02 00:00:00 29496916 1 2016-06-07 9643 2015-05-04
I want to write a SQL query that for each AR_ID, AR_X_LO_TP_ID, RANK, ORIG_SRC_STM_ID combination returns what the END_TMS was supposed to be.
Since you asked for
"a SQL query that [...] returns what the END_TMS was supposed to be"
and specified the SAS tag, the following SAS code will do just that:
proc sql;
create table result as
select t1.*, datepart(t2.EFF_TMS)-1 as END_TMS format=E8601DA.
from AR_X_LO(drop=END_TMS) t1
left join AR_X_LO t2
on t1.AR_ID = t2.AR_ID
and t1.AR_X_LO_TP_ID = t2.AR_X_LO_TP_ID
and t1.RANK= t2.RANK
and t1.ORIG_SRC_STM_ID = t2.ORIG_SRC_STM_ID
and t1.EFF_TMS < t2.EFF_TMS
group by t1.EFF_TMS
having END_TMS=min(END_TMS)
;
quit;
Be aware that this code contains SAS-specific statements/functions (like the datepart() function, the format= option or the drop= dataset option) which will not work in other SQL environments (like Oracle, which you also tagged) and will perform poorly in SAS if you are indeed working against an Oracle backend.
If the latter is true, you could probably do this more elegantly with analytic functions such as LAG, LEAD and PARTITION BY (using SQL pass-through when within SAS).
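For instance, a sketch of the Oracle flavour using LEAD (assuming EFF_TMS is a DATE, so subtracting 1 moves it back one day):
SELECT t.*,
       LEAD(t.EFF_TMS) OVER (
           PARTITION BY t.AR_ID, t.AR_X_LO_TP_ID, t.RANK, t.ORIG_SRC_STM_ID
           ORDER BY t.EFF_TMS
       ) - 1 AS new_END_TMS
FROM AR_X_LO t;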
NOTE: conforming to your provided example of the expected result, I returned END_TMS as a date, even though the name of that variable suggests it should probably be a timestamp (a datetime in SAS).
There seem to be a lot of threads on this topic, but few that work with Excel.
I have a simple table from which I want to select:
ideally all columns, i.e. using * if possible, so that if a user adds new columns they do not need to edit the SQL (is this a pipe dream? if so, a solution specifying all the returned columns is OK)
only rows where [name] & [date] (concatenated) are distinct
for all other columns I don't care which row is returned: first, last, limit 1... anything. They are a mix of all types.
this must not create a new table or delete rows; just selecting and joining
name date sales
andy 01/01/2010 100
andy 01/01/2010 900
andy 05/01/2010 100
alex 02/02/2010 200
alex 02/02/2010 200
alex 05/01/2010 200
dave 09/09/2010 300
dave 09/09/2010 300
dave 01/09/2010 300
Also, code simplicity is preferred over speed. This is going to be left to run overnight, so nice-looking but slow is fine... and Excel doesn't have millions of rows!
Many thanks to everyone in advance.
UPDATE
I would expect the table to look like this:
name date sales
andy 01/01/2010 100
andy 05/01/2010 100
alex 02/02/2010 200
alex 05/01/2010 200
dave 09/09/2010 300
dave 01/09/2010 300
or
andy 01/01/2010 900
andy 05/01/2010 100
alex 02/....
I can select all the unique things with this:
SELECT MAX(joined)
FROM
(SELECT [Single$].[date] AS [date],
[Single$].[name] AS [name],
name & date AS [joined]
FROM [Single$]
)
GROUP BY joined
HAVING MAX(joined) IS NOT NULL
But I don't know how to join this back to the original table, keeping any single row where the join matches. And I don't know if a join is the right way to go about this? Thanks
Simply run an aggregate query grouped by [name] and [date]. For all other columns use an aggregate like MAX() or MIN(), which works on both numeric and string values.
SELECT [Single$].[name] AS [name], [Single$].[date] AS [date],
MAX([Single$].[sales]) As [sales],
MAX(...)
FROM [Single$]
GROUP BY [Single$].[name], [Single$].[date]
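If every returned row must come from a single physical row instead (the join-back idea in the question), a sketch along those lines: aggregate down to one key per group, then join back. Note that exact duplicate rows (like the two alex 02/02/2010 200 rows) will still both match:
SELECT s.*
FROM [Single$] AS s
INNER JOIN
(SELECT [name], [date], MAX([sales]) AS [maxsales]
FROM [Single$]
GROUP BY [name], [date]) AS d
ON s.[name] = d.[name]
AND s.[date] = d.[date]
AND s.[sales] = d.[maxsales]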
I'm wondering whether I can use a trigger in PostgreSQL to update a column. I have a column with a date type, and on that date I would like to update another column in another table.
To make it clearer:
I have 2 tables:
Works_locations table
walphid wnumberid locationid
DRAR 1012 101
PAPR 1013 105
PAHF 1014 105
ETAR 1007 102
DRWS 1007 102
The locationid attribute refers to 'locid' in the Locations table, which is below:
locid locname
101 Storage
102 Gallary A
103 Gallary B
104 Gallary C
105 Lobby
Exhibition table
exhid exhname description strtdate endDate
101 Famous Blah Blah 2013-07-15 2013-10-13
And here is the bridge table connecting the exhibition table with the locations table:
locationsid exhibitid
102 102
103 101
104 103
Now each exhibition has some works, and these should be placed in one of the locations.
On the 'endDate' column in the Exhibition table, which has a date data type, I would like to update the 'locationid' column in the Works_locations table so that the works are placed in another location.
In other words: each exhibition has some works, and these works are placed in one of the locations. At the end date of the exhibition I would like to change the locations; specifically, I would like the works to be returned to the storage.
Any idea how I would do this with PostgreSQL?
Regards
PostgreSQL does not have a built-in task scheduler. Even if there were a scheduler, the commands it ran wouldn't be triggers; they'd just be procedures run by a scheduler.
You can't write triggers that fire at some arbitrary time.
You will need to use cron, Task Scheduler, PgAgent, or similar to run the statements at the desired time. Or you could write a script that checks when the next event is, sleeps until then, runs the desired command and marks that event as done, and sleeps until the next event.
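Whatever runs the job, the statement itself could look something like this sketch (table and column names are taken from the question; the bridge table is assumed to be named exhibition_locations, and locid 101 is the Storage row in the sample data):
UPDATE works_locations wl
SET locationid = 101 -- back to Storage
FROM exhibition_locations el
JOIN exhibition e ON e.exhid = el.exhibitid
WHERE wl.locationid = el.locationsid
AND e.endDate <= current_date;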