Logic to get only one row in result set - sql

I am stuck on how to build the logic. Can anyone help me?
Requirement:
For privacyType = 'Primary Address' if there is >1 row where SI0_ADDR.ADDR_TYPE_CODE = 'M', display the row where currentDate() is between SI0_ADDR.ADDR_EFF_DATE and SI0_ADDR.ADDR_EXPR_DATE.
My query is:
select STU_ID
      ,case when Privacyflag = '' then 'N'
            else Privacyflag
       end Privacyflag
      ,type
from (
    select a.STU_ID, Privacyflag, a.type
          ,ROW_NUMBER() OVER (ORDER BY ADDR_EXPR_DATE DESC) disp_nm
    from (
        select ad.STU_ID
              ,case when ad.ADDR_TYPE_CODE = 'M' then ad.ADDR_PRIVACY_FLAG
                    when ad.ADDR_TYPE_CODE = '' then 'N'
               end Privacyflag
              ,'Primary Phone' type
              ,case when ADDR_EXPR_DATE = '1900-01-01' then '2100-12-31'
                    else ADDR_EXPR_DATE
               end as ADDR_EXPR_DATE
        from SI0_ADDR ad
        where ad.STU_ID = @studentid
    ) a
    where Privacyflag is not null
) ab
where ab.disp_nm = '1'
This logic is not working in some cases.

You're not qualifying by the date in any way, which was part of your requirements. Your query doesn't need so many nested sub-queries. The PrivacyFlag doesn't seem to be part of your question, so I didn't work it into this example. And your example shows Primary Phone, but your question talks about Primary Address, so I'm not sure how to work that out.
In any case, here is a very simple example that shows the data, and how to pull the one record with the latest expiration date, and also the latest effective date (in case there are two records with the same expiration date).
create table #temp (stu_id int, type varchar(10), address varchar(50), eff_date datetime, expr_date datetime)
insert into #temp values
(1, 'primary', '123 NW 52nd', '1/1/2016', '1/1/1900'),
(1, 'primary', '942 SE 33rd', '1/2/2016', '12/31/2016'),
(1, 'primary', '721 SW 22nd', '4/1/2015', '1/1/1900')
select top 1 *
from (
select stu_id
,type
,address
,eff_date
,case when expr_date = '1/1/1900' then '12/31/2100' else expr_date end as expr_date
from #temp
where stu_id = 1
and type = 'primary'
) as a
where getdate() between eff_date and expr_date
order by a.expr_date desc, a.eff_date desc
drop table #temp
You could even do it with zero sub-queries, but then you'd need to duplicate the case statement into the where and order by parts of the query:
select top 1
stu_id
,type
,address
,eff_date
,case when expr_date = '1/1/1900' then '12/31/2100' else expr_date end as expr_date
from #temp
where stu_id = 1
and type = 'primary'
and getdate() between eff_date and case when expr_date = '1/1/1900' then '12/31/2100' else expr_date end
order by case when expr_date = '1/1/1900' then '12/31/2100' else expr_date end desc
,eff_date desc
With such a small dataset, the showplan didn't indicate which query would be more efficient; it produced an identical plan for both queries, except that the first SELECT took 7 CPU cycles to compile the plan and the second took 2. You may want to test against a larger dataset on real tables with indexes, etc., and see which performs best.
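Mapped back onto the SI0_ADDR query from the question (a sketch only, assuming the column names shown there and that a blank ADDR_PRIVACY_FLAG should read as 'N'), the same date-qualified TOP 1 approach would look roughly like this:
select top 1
       ad.STU_ID
      ,isnull(nullif(ad.ADDR_PRIVACY_FLAG, ''), 'N') as Privacyflag  -- blank or missing flag treated as 'N'
      ,'Primary Address' as type
from SI0_ADDR ad
where ad.STU_ID = @studentid
  and ad.ADDR_TYPE_CODE = 'M'
  and getdate() between ad.ADDR_EFF_DATE
                    and case when ad.ADDR_EXPR_DATE = '1900-01-01' then '2100-12-31' else ad.ADDR_EXPR_DATE end
order by case when ad.ADDR_EXPR_DATE = '1900-01-01' then '2100-12-31' else ad.ADDR_EXPR_DATE end desc
        ,ad.ADDR_EFF_DATE desc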

Related

How to query multiple where 'NOT Equal' conditions

I am trying to work out how to pull back records where "Field A" and "Field B" are not a specific combination.
Example: Field A cannot equal "Scheduled" whilst Field B equals "PreliminaryScheduled".
But I do still want to see records where only one of the two matches, i.e. Field A = "Scheduled" with a different Field B, or Field B = "PreliminaryScheduled" with a different Field A.
I hope this makes sense. Please see the script below; I have included a very basic temp table with examples of what I am trying to achieve. I have a workaround using CONCAT, but I don't think it is the best solution.
I know I could exclude these rows with a WHERE NOT EXISTS, but in the actual database this is a big table and I would prefer not to query it twice.
Is there a proper/better way of doing this?
Please see the code and comments.
--=============================
-- Create Table
--=============================
CREATE TABLE #Temp
(
[id] INT IDENTITY(1,1),
[status] nvarchar(100),
[fkstatus] NVARCHAR(200),
[Date] DATE
)
--=============================
-- Insert Into Table
--=============================
INSERT INTO [#Temp]
(
[status],
[fkstatus],
[Date]
)
VALUES
(N'Scheduled', N'PreliminaryScheduled', GETDATE()),
(N'Scheduled', N'PreliminaryScheduled', '2019-01-01'),
(N'Cancelled', N'PreliminaryScheduled', '2019-02-01'),
(N'Complete', N'PreliminaryScheduled', GETDATE()),
(N'Scheduled', N'Other', '2019-03-01')
--=============================
--(A)
-- Brings back what I DO NOT want; these are the items that I want to exclude.
--=============================
SELECT *
FROM [#Temp]
WHERE ([status] = 'Scheduled' AND [fkStatus] = 'PreliminaryScheduled')
--=============================
-- (B)
-- Real-world logic, I believe this should work...
--=============================
SELECT *
FROM [#Temp]
WHERE ([status] <> 'Scheduled' AND [fkStatus] <> 'PreliminaryScheduled')
--=============================
-- (C)
-- Work Around - Or is this the actual way this has to be done?
--=============================
SELECT *
FROM [#Temp]
WHERE CONCAT([status],'-',[fkstatus]) <> 'Scheduled-PreliminaryScheduled'
--=============================
-- (D)
-- Additional with a Date.
--=============================
SELECT *
FROM [#Temp]
WHERE ([status] <> 'Scheduled' or [fkStatus] <> 'PreliminaryScheduled')
AND [Date] < '2019-01-01'
-- I expect this to return the results from point (C), but restricted by the date filter in (D).
You can try using OR instead of AND:
SELECT *
FROM [#Temp]
WHERE ([status] <> 'Scheduled' or [fkStatus] <> 'PreliminaryScheduled')
You can use the NOT operator:
SELECT * FROM [#Temp]
WHERE [Date] < '2019-01-01'
AND NOT ([status] = 'Scheduled' AND [fkStatus] = 'PreliminaryScheduled')
Negate the condition you don't want records to meet using NOT:
SELECT * FROM #temp
WHERE NOT ([status] = 'Scheduled' AND [fkStatus] = 'PreliminaryScheduled')
Alternatively, use OR:
SELECT * FROM #temp
WHERE [status] <> 'Scheduled' OR [fkStatus] <> 'PreliminaryScheduled'
Both result in the same query plan (the NOT version is rewritten to use OR), but you might find the first one a bit clearer.
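One caveat worth adding (my note, not part of the original answers): if [status] or [fkstatus] can be NULL, both the NOT (...) form and the <> ... OR ... <> form evaluate to UNKNOWN for a row where one column is NULL and the other holds the excluded value, so that row is filtered out, whereas the CONCAT workaround would keep it. If such rows should be kept, handle the NULLs explicitly, for example:
SELECT *
FROM [#Temp]
WHERE NOT (ISNULL([status], N'') = N'Scheduled' AND ISNULL([fkstatus], N'') = N'PreliminaryScheduled')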

How to count records in a table based on each (per) hour in SQL Server?

In SQL Server 2008, I have data like this (Case: varchar(20), Time: time):
Case Time
-------------
D1 18:44
D2 19:12
C1 21:20
F2 21:05
...
What I would like to do is count the cases per hour. The result should include all cases.
Expected result:
.... Column18 Column19 Column20 Column21 ...
1 1 0 2
where Column18 refers to the cases between 18:00 and 18:59, and the same logic applies to the others. I have Column0 through Column23, one column per hour...
What I am doing is:
Select
...
, Column18 = sum(CASE WHEN Time like '18:%' THEN 1 ELSE 0 END)
, Column19 = sum(CASE WHEN Time like '19:%' THEN 1 ELSE 0 END)
, Column20 = sum(CASE WHEN Time like '20:%' THEN 1 ELSE 0 END)
, Column21 = sum(CASE WHEN Time like '21:%' THEN 1 ELSE 0 END)
...
from
mytable
Even though my query works, it is long and repetitive, so it does not seem professional to me. I wonder if there is any better way to handle this situation. Any advice would be appreciated.
We can go with a dynamic PIVOT:
declare @ColString varchar(1000) = ''
;with cte as (
    select 0 as X
    union all
    select X + 1 as X
    from cte
    where X < 23
)
select @ColString = @ColString + ',[Column' + cast(X as varchar) + ']' from cte
select @ColString = stuff(@ColString, 1, 1, '')
declare @DynamicQuery varchar(3000) = ''
select @DynamicQuery =
'select *
from (
    select [case], ''Column'' + cast(datepart(hh, [time]) as varchar) as [time]
    from #xyz
) src
pivot
(
    count([case]) for [time] in (' + @ColString + ')
) piv'
exec (@DynamicQuery)
Input data -
create table #xyz ([Case] varchar(10),[Time] time(0))
insert into #xyz
select 'D1','18:44' union all
select 'D2','19:12' union all
select 'C1','21:20' union all
select 'F2','21:05'
Your query is basically fine, but I strongly discourage you from using string functions on date/time columns.
datepart() is definitely one solution:
Select ...,
Column18 = sum(CASE WHEN datepart(hour, Time) = 18 THEN 1 ELSE 0 END),
Column19 = sum(CASE WHEN datepart(hour, Time) = 19 THEN 1 ELSE 0 END)
Direct comparison is more verbose, but more flexible:
select . . .,
sum(case when time >= '18:00' and time < '19:00' then 1 else 0 end) as column18,
sum(case when time >= '19:00' and time < '20:00' then 1 else 0 end) as column19,
Note that this uses AS for the column aliases. SQL Server also supports the alias = expression syntax, but other databases do not, so I prefer to stick with the ANSI-standard method of defining aliases.
Putting the values on rows instead of columns is probably the more "typical" solution:
select datepart(hour, time) as hr, count(*)
from t
group by datepart(hour, time)
order by hr;
As written, this will not return hours with zero counts.
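If the empty hours do need to show up, one option (a sketch, not part of the original answer) is to generate all 24 hour values and LEFT JOIN the data onto them:
;with hours as (
    select 0 as hr
    union all
    select hr + 1 from hours where hr < 23
)
select h.hr, count(t.[Case]) as cnt  -- count(column) ignores the NULLs produced by the outer join
from hours h
left join mytable t on datepart(hour, t.[Time]) = h.hr
group by h.hr
order by h.hr;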
Here is the simplest answer I could come up with. Thanks a lot for all the advice. It looks much better now:
create table #temp (CaseID varchar(20),TheTime time)
insert into #temp values ('A1','03:56')
insert into #temp values ('A2','03:12')
insert into #temp values ('B2','03:21')
insert into #temp values ('C1','05:12')
insert into #temp values ('B3','06:00')
insert into #temp values ('B4','07:14')
insert into #temp values ('B5','07:18')
insert into #temp values ('D1','18:44')
insert into #temp values ('D2','19:54')
insert into #temp values ('C2','21:12')
insert into #temp values ('F4','21:50')
select *
from (
select CaseID, DATEPART(hour,TheTime) as HourOfDay
from #temp
) t
PIVOT
(
Count(CaseID)
FOR HourOfDay IN ([00],[01],[02],[03],[04],[05],[06],[07],[08],
[09],[10],[11],[12],[13],[14],[15],[16],[17],
[18],[19],[20],[21],[22],[23])
) AS PivotTable

Capture current and previous values, across multiple fields in a self joined table

I'm trying to create a report that essentially redisplays the log tables for better readability, so all changes are captured in two columns, one Original Value and one Current Value column, with an attribute column labelling what was changed.
I've created a Temp Table for SQL Server for ease of testing.
declare @T table (Utility_ID int,
Utility_Record_ID int,
Organization_Name nvarchar(255),
Facility_Name nvarchar(255),
Space_Name nvarchar(255),
Utility_Name nvarchar(255),
MeterNumber nvarchar(255),
Data_Type nvarchar(255),
Reading decimal(18,2),
Consumption decimal(18,2),
Begin_Date datetime,
End_Date datetime,
[Estimated/Actual] nvarchar(255),
[Date-Time Of Change] datetime,
[changed By] nvarchar(100),
[Change Type] nVarchar(100)
)
insert into @T
select 11819, 38503, 'The Men Who Stare At Goats', 'BuildingA', NULL, 'Electricty for BuildingA', '9615(small meter)', 'Reading', 2369, 9.00, '2011-06-30 00:00:00.000', NULL, 'Actual', '2012-03-07 09:26:53.590', 'Billy.Bob', 'Insert'
union all
select 11819, 38503, 'The Men Who Stare At Goats', 'BuildingA', NULL, 'Electricty For BuildingA', '9615(small meter)', 'Reading', 2372, 12.00, '2011-06-30 00:00:00.000', NULL, 'Actual', '2012-05-01 10:30:40.237', 'Billy.Jean', 'Update'
union all
select 2000, 12345, 'Hotel', 'Fun House', 'My Gaff', 'Water For My Gaff', '987654321', 'Consumption', NULL, 500.00, '2011-06-30 00:00:00.000', '2011-08-30 00:00:00.000', 'Actual', '2013-05-01 10:30:40.237', 'Billy.NoMates', 'Insert'
My obstacle is that I'm tracking changes across Reading, Consumption, Estimated/Actual, Begin_Date and End_Date. How can I get the other field values to pull through into the CurrentValue and PreviousValue columns if they were changed instead, or changed together with another field?
In my attempt below, which tracks only the Consumption column, I've partitioned by the record ID and ordered by the recorded date-time of change.
-- Get CurrentValue and PreviousValue with a Changed column
;with cte as
(
select *,
row_number() over(partition by Utility_Record_ID order by [Date-Time Of Change]) as rn
from @T
)
select
C.Utility_ID ,
C.Utility_Record_ID,
C.Organization_Name,
C.Facility_Name,
C.Space_Name,
C.Utility_Name,
C.MeterNumber,
C.Data_Type,
C.Reading,
C.Consumption,
C.Begin_Date,
C.End_Date,
C.[Estimated/Actual],
C.[Date-Time Of Change],
C.[changed By],
C.[Change Type],
C.Consumption as CurrentValue,
P.Consumption as PreviousValue,
case C.Consumption when P.Consumption then 0 else 1 end as Changed
from cte as C
LEFT OUTER JOIN cte as P
on C.Utility_Record_ID = P.Utility_Record_ID and
C.rn = P.rn + 1
The catch is that I'm using a read-only access account, so no triggers or views. This must be a SELECT query, pure and simple.
It's not pretty, but assuming the columns you're monitoring are fixed, you could hard-code the checks using CASE expressions. This also assumes that only one of the tracked fields changes between two versions of the record.
-- Get CurrentValue and PreviousValue with a Changed column
;with cte as
(
select *,
row_number() over(partition by Utility_Record_ID order by [Date-Time Of Change]) as rn
from @T
)
select
C.Utility_ID ,
C.Utility_Record_ID,
C.Organization_Name,
C.Facility_Name,
C.Space_Name,
C.Utility_Name,
C.MeterNumber,
C.Data_Type,
C.Reading,
C.Consumption,
C.Begin_Date,
C.End_Date,
C.[Estimated/Actual],
C.[Date-Time Of Change],
C.[changed By],
C.[Change Type],
case
when C.Reading <> P.Reading then 'Reading'
when C.Consumption <> P.Consumption then 'Consumption'
when C.[Estimated/Actual] <> P.[Estimated/Actual] then '[Estimated/Actual]'
when C.Begin_Date <> P.Begin_Date then 'Begin_Date'
when C.End_Date <> P.End_Date then 'End_Date'
end WhatChanged,
case -- each branch is converted to nvarchar so the mixed types (decimal, nvarchar, datetime) can share one column
when C.Reading <> P.Reading then cast(C.Reading as nvarchar(255))
when C.Consumption <> P.Consumption then cast(C.Consumption as nvarchar(255))
when C.[Estimated/Actual] <> P.[Estimated/Actual] then C.[Estimated/Actual]
when C.Begin_Date <> P.Begin_Date then convert(nvarchar(255), C.Begin_Date, 120)
when C.End_Date <> P.End_Date then convert(nvarchar(255), C.End_Date, 120)
end CurrentValue,
case
when C.Reading <> P.Reading then cast(P.Reading as nvarchar(255))
when C.Consumption <> P.Consumption then cast(P.Consumption as nvarchar(255))
when C.[Estimated/Actual] <> P.[Estimated/Actual] then P.[Estimated/Actual]
when C.Begin_Date <> P.Begin_Date then convert(nvarchar(255), P.Begin_Date, 120)
when C.End_Date <> P.End_Date then convert(nvarchar(255), P.End_Date, 120)
end PreviousValue
from cte as C
LEFT OUTER JOIN cte as P
on C.Utility_Record_ID = P.Utility_Record_ID and
C.rn = P.rn + 1
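If more than one of the tracked fields can change between two versions, this hard-coded CASE only reports the first difference it finds. A sketch of an alternative (my addition, not part of the original answer): unpivot the tracked columns with CROSS APPLY (VALUES ...) so that every changed attribute gets its own output row, with everything converted to nvarchar so the mixed types fit a single value column.
;with cte as
(
    select *,
           row_number() over(partition by Utility_Record_ID order by [Date-Time Of Change]) as rn
    from @T
)
select
    C.Utility_ID,
    C.Utility_Record_ID,
    C.[Date-Time Of Change],
    C.[changed By],
    C.[Change Type],
    V.WhatChanged,
    V.CurrentValue,
    V.PreviousValue
from cte as C
join cte as P
  on C.Utility_Record_ID = P.Utility_Record_ID
 and C.rn = P.rn + 1
cross apply (values
    ('Reading',          cast(C.Reading as nvarchar(255)),          cast(P.Reading as nvarchar(255))),
    ('Consumption',      cast(C.Consumption as nvarchar(255)),      cast(P.Consumption as nvarchar(255))),
    ('Estimated/Actual', C.[Estimated/Actual],                      P.[Estimated/Actual]),
    ('Begin_Date',       convert(nvarchar(255), C.Begin_Date, 120), convert(nvarchar(255), P.Begin_Date, 120)),
    ('End_Date',         convert(nvarchar(255), C.End_Date, 120),   convert(nvarchar(255), P.End_Date, 120))
) as V (WhatChanged, CurrentValue, PreviousValue)
where isnull(V.CurrentValue, N'') <> isnull(V.PreviousValue, N'')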

Case Statement Sql Multiple column check for dates

I have a stored procedure that uses a CASE expression as follows. What I am trying to do is evaluate two date columns in testTable. The CASE below says that if stop_dt is null or greater than the current date, then the is_active column is set to 'Y', else 'N'.
What I am also trying to do is evaluate another date column, another_stop_date, check whether it is null or has a date greater than today, and use the same logic to set the is_active column.
I am not sure if we can use multiple CASE conditions to populate a single column?
I have commented the code below where I am not getting the right results.
Basically I need to evaluate the stop_dt and another_stop_date columns from testTable!
USE [Test]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROC [dbo].[p_test]
    @Blank_Row CHAR(1) = 'N'
AS
BEGIN
    SET NOCOUNT ON
    DECLARE @TD DATETIME
    SELECT @TD = GETDATE()
    DECLARE @tempTable TABLE (
        ID INT,
        c_id INT,
        [desc] varchar(40),
        [date] datetime,
        s_col TINYINT,
        is_active char(1),
        stuff VARCHAR(8))
    INSERT INTO @tempTable
    SELECT id, c_id, [desc], MAX([date]), 1,
        CASE WHEN (stop_dt IS NULL OR stop_dt > @TD) THEN 'Y'
        --//Case When (another_stop_date is NULL or another_stop_date > @TD) THEN 'Y' <----- confused here
        ELSE 'N' END,
        stuff
    FROM testTable
    GROUP BY id, stop_dt, c_id, [desc], stuff, another_stop_date
    SELECT * FROM @tempTable
END
You can combine clauses in a case statement with the usual logical operators, as well as having separate cases:
Case
    When (stop_dt is null or stop_dt > @td) and
         (another_stop_date is null or another_stop_date > @td)
    Then 'Y'
    Else 'N'
End
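Dropped into the INSERT ... SELECT from the question (a sketch only, reusing the column names shown there), that looks like:
INSERT INTO @tempTable
SELECT id, c_id, [desc], MAX([date]), 1,
       CASE WHEN (stop_dt IS NULL OR stop_dt > @TD)
             AND (another_stop_date IS NULL OR another_stop_date > @TD)
            THEN 'Y'
            ELSE 'N'
       END,
       stuff
FROM testTable
GROUP BY id, stop_dt, c_id, [desc], stuff, another_stop_date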
A CASE expression works much like an IF statement and can have multiple WHEN clauses.
Case when condition_1 > 1 then 'hi'
     when condition_1 < 14 then 'no'
     when condition_89 > 12 then 'why is this here'
     else 'none'
end
Apply it to your statement:
CASE WHEN (stop_dt IS NULL OR stop_dt > @TD) THEN 'Y'
     WHEN (another_stop_date IS NULL OR another_stop_date > @TD) THEN 'Y'
     ELSE 'N' END

How to combine the values of the same field from several rows into one string in a one-to-many select?

Imagine the following two tables:
create table MainTable (
MainId integer not null, -- This is the index
Data varchar(100) not null
)
create table OtherTable (
MainId integer not null, -- MainId, Name combined are the index.
Name varchar(100) not null,
Status tinyint not null
)
Now I want to select all the rows from MainTable, while combining all the rows that match each MainId from OtherTable into a single field in the result set.
Imagine the data:
MainTable:
1, 'Hi'
2, 'What'
OtherTable:
1, 'Fish', 1
1, 'Horse', 0
2, 'Fish', 0
I want a result set like this:
MainId, Data, Others
1, 'Hi', 'Fish=1,Horse=0'
2, 'What', 'Fish=0'
What is the most elegant way to do this?
(Don't worry about the comma being in front or at the end of the resulting string.)
There is no really elegant way to do this in Sybase. Here is one method, though:
select
mt.MainId,
mt.Data,
Others = stuff((
max(case when seqnum = 1 then ','+Name+'='+cast(status as varchar(255)) else '' end) +
max(case when seqnum = 2 then ','+Name+'='+cast(status as varchar(255)) else '' end) +
max(case when seqnum = 3 then ','+Name+'='+cast(status as varchar(255)) else '' end)
), 1, 1, '')
from MainTable mt
left outer join
(select
ot.*,
row_number() over (partition by MainId order by status desc) as seqnum
from OtherTable ot
) ot
on mt.MainId = ot.MainId
group by
mt.MainId, mt.Data
That is, it enumerates the values in the second table. It then does conditional aggregation to get each value, using the stuff() function to handle the extra comma. The above works for the first three values. If you want more, then you need to add more clauses.
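(For comparison only, since the question is about Sybase: on SQL Server the same list is usually built with a correlated subquery, STUFF() and FOR XML PATH, which is not limited to a fixed number of names.)
select mt.MainId,
       mt.Data,
       stuff((select ',' + ot.Name + '=' + cast(ot.Status as varchar(3))
              from OtherTable ot
              where ot.MainId = mt.MainId
              order by ot.Status desc
              for xml path('')), 1, 1, '') as Others
from MainTable mt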
Well, here is how I implemented it in Sybase 13.x. This code has the advantage of not being limited to a fixed number of Names.
create proc
as
declare
    @MainId int,
    @Name varchar(100),
    @Status tinyint
create table #OtherTable (
MainId int not null,
CombStatus varchar(250) not null
)
declare OtherCursor cursor for
select
MainId, Name, Status
from
OtherTable
open OtherCursor
fetch OtherCursor into @MainId, @Name, @Status
while (@@sqlstatus = 0) begin -- run until there are no more rows
if exists (select 1 from #OtherTable where MainId = @MainId) begin
update #OtherTable
set CombStatus = CombStatus + ','+@Name+'='+convert(varchar, @Status)
where
MainId = @MainId
end else begin
insert into #OtherTable (MainId, CombStatus)
select
MainId = @MainId,
CombStatus = @Name+'='+convert(varchar, @Status)
end
fetch OtherCursor into @MainId, @Name, @Status
end
close OtherCursor
select
mt.MainId,
mt.Data,
ot.CombStatus
from
MainTable mt
left join #OtherTable ot
on mt.MainId = ot.MainId
But it does have the disadvantage of using a cursor and a working table, which can - at least with a lot of data - make the whole process slow.