Transpose rows into columns in SQL Server 2014

I have a CSV file with 51 columns.
In this order, there is a UID column, a Serial column, a Date column, and 48 columns, one for each 30-minute segment of the day (from 00:30 through to 00:00). Each day gets a new row.
So it looks like:
UID | Serial | Date | Val_0030 | Val_0100 | Val_0130 | ..... | Val_0000
123 | 123456 | 2016-01-02 | 56.2 | 23.25 | 32.8 | ..... | 86.23
I need to transpose this data into 4 columns, so that each half hour has its own row with UID, Serial and Date columns. In other words, I need the data to run down instead of across.
To look like this:
UID | Serial | 2016-01-02 00:30 | Value
Rather than each day having a new row as it currently does, the column name will determine the time: Val_0130, for example, means the time is 01:30, which will be concatenated with the date.
I have tried using PIVOT and UNPIVOT without any success. Can anyone advise the best approach to do this?

I would use UNPIVOT and then cut up the column name (e.g. Val_0130) to add the time to the date and get the desired result. This way you only have to write out the 48 columns in one spot.
Here is some test data:
DECLARE @Table AS TABLE (UID INT, Serial INT, Date DATETIME, Val_0030 MONEY, Val_0100 MONEY, Val_0130 MONEY, Val_0000 MONEY)
INSERT INTO @Table (UID, Serial, Date, Val_0030, Val_0100, Val_0130, Val_0000)
VALUES
(123, 123456, '2016-01-02',56.2,23.25,12.34,86.23)
,(231, 234561, '2016-01-05',26.2,13.25,23.45,106.23)
,(312, 345612, '2016-01-07',76.2,3.25,34.56,1010.56)
And the Query
SELECT
UID
,Serial
,DateWithTime = [Date] + CAST((SUBSTRING(ColumnNames,5,2) + ':' + RIGHT(ColumnNames,2)) AS DATETIME)
,Value
FROM
@Table t
UNPIVOT (
Value
FOR ColumnNames IN (Val_0030, Val_0100, Val_0130, Val_0000)
) u
And if you don't want to type out all 48 columns (I wouldn't), just run this query and copy and paste the result into the ColumnNames IN () section of the query above.
DECLARE @ColString VARCHAR(MAX) = ''
DECLARE @DT DATETIME = '00:00'
WHILE @DT < '1900-01-02 00:00:00.000'
BEGIN
IF LEN(@ColString) > 0
BEGIN
SET @ColString += ','
END
SET @ColString += 'Val_' + FORMAT(@DT,'HHmm')
SET @DT = DATEADD(MINUTE,30,@DT)
END
SELECT @ColString
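Going one step further, the generated column list can be spliced into a fully dynamic version of the unpivot query, so nothing has to be typed by hand. This is only a sketch, assuming the data lives in a permanent table named dbo.Readings (a hypothetical name; table variables are not visible inside sp_executesql):

```sql
-- Sketch only: splice @ColString (the comma-separated list built by the
-- generator above) into a dynamic UNPIVOT. dbo.Readings is a hypothetical
-- permanent table holding the CSV rows.
DECLARE @Sql NVARCHAR(MAX) =
N'SELECT
    UID
    ,Serial
    ,DateWithTime = [Date] + CAST((SUBSTRING(ColumnNames,5,2) + '':'' + RIGHT(ColumnNames,2)) AS DATETIME)
    ,Value
FROM dbo.Readings t
UNPIVOT (
    Value
    FOR ColumnNames IN (' + @ColString + N')
) u;';

EXEC sys.sp_executesql @Sql;
```

Note the doubled single quotes inside the string literal; they produce the ':' separator in the final query.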

Matt provides a good answer with UNPIVOT. On platforms where that's not an option, you can get the same effect using a cross join and a CASE expression. Create a table of the 48 half-hours (say HalfHours, with a [time] column), cross join it against the data, and pick out the matching column per row:
SELECT t.UID
, t.Serial
, t.[Date] + CAST(hh.[time] AS DATETIME) AS DateWithTime
, CASE hh.[time] WHEN '00:00' THEN Val_0000
WHEN '00:30' THEN Val_0030
WHEN '01:00' THEN Val_0100
WHEN '01:30' THEN Val_0130
...
END AS Value
FROM data t CROSS JOIN HalfHours hh
Each source row is repeated 48 times, once per half-hour, and the CASE picks the value for that slot.

Related

Inserting a non-calculated Id into a table, Transact-SQL

I have a stored procedure where I receive data as JSON from an API in C#. I insert the data into two tables like this:
INSERT INTO dbo.ServiceRequestHeader(SubscriptionId, CustomerAccountId, ModifiedBy)
OUTPUT Inserted.ServiceRequestHeaderId INTO #TempT
SELECT
SubscriptionId,
CustomerAccountId,
ModifiedBy
FROM
OpenJson(@JsonServiceRequest)
WITH
(SubscriptionId TinyInt,
CustomerAccountId Int)
SELECT @TempId = Id FROM #TempT
INSERT INTO dbo.ServiceRequest(ServiceRequestId, ServiceRequestHeaderId, SubscriptionId)
SELECT
@TempId, -- <= Here I need to modify the ServiceRequestId
@TempId,
SubscriptionId
FROM
OpenJson(@JsonServiceRequest, '$.ServiceRequest')
WITH (SubscriptionId TinyInt,
...)
The thing is that ServiceRequestId is not a calculated field, and it's a special case that depends on ServiceRequestHeaderId.
Example:
If ServiceRequestHeaderId = 1000, the ServiceRequestId values would be 1000 001, 1000 002, ... N.
This is where I can't come up with a way to do it.
You can generate the ServiceRequestIds as shown below. I am using the FORMAT function with '000' to pad with zeros to 3 digits. If you want four digits, use '0000'.
SELECT @TempId = Id FROM #TempT
INSERT INTO dbo.ServiceRequest(ServiceRequestId, ServiceRequestHeaderId, SubscriptionId)
SELECT
CONCAT(@TempId, FORMAT(ROW_NUMBER() OVER(ORDER BY (SELECT NULL)),'000')) AS ServiceRequestId,
@TempId,
SubscriptionId
FROM
OpenJson(@JsonServiceRequest, '$.ServiceRequest')
WITH (SubscriptionId TinyInt,
...)
You will get something like below:
+------------------+
| ServiceRequestId |
+------------------+
| 1000001          |
| 1000002          |
| 1000003          |
+------------------+
Use a CTE to calculate a row number per request and then build the id from it e.g.
with MyCTE as (
select
SubscriptionId
-- Order by whatever makes business sense to you
, row_number() over (order by SubscriptionId) rn
from openjson(@JsonServiceRequest, '$.ServiceRequest')
with (
SubscriptionId tinyint,
...
)
)
insert into dbo.ServiceRequest (ServiceRequestId, ServiceRequestHeaderId, SubscriptionId)
-- Put whatever logic you like here to calculate a row number based id
select convert(varchar(4),@TempId) + ' ' + case when rn >= 100 then convert(varchar(3),rn) when rn >= 10 then '0' + convert(varchar(2),rn) else '00' + convert(varchar(1),rn) end
, @TempId, SubscriptionId
from MyCTE;

SQL script to count the number of :username occurrences in a string

I have a table that stores information whenever a user makes changes to the DB. I want to extract how many times a user made changes on a given date in the application. The info is normally stored for each user in one row, for example:
2019-06-15randomname1:YES I DID IT 2019-06-14randomname2:HHHHHHH JJJJJJ 2019-06-14Urandomnamexxxxxx: COMMENT OF PEOPLE
What I want is to search for :username to detect how many times the user made changes. In this instance the answer is supposed to be 3. How can I do it?
DECLARE @logEntry VARCHAR(4000);
SET @logEntry = ':' + (SELECT PERSON_NAME FROM P_PERSON WHERE PERSON = logged_person)
SELECT
id
,value
,COUNT = (LEN(value) - LEN(REPLACE(value, @logEntry, ''))) / LEN(@logEntry)
FROM table
Should I use a regular expression? For this particular example the answer would be 3, since the name occurs 3 times.
I have decided to search for :username, but I am having a problem with "Subquery returned more than 1 value".
If I understand correctly, you want to count the occurrences of a date in a string. Each occurrence removed shortens the string by the length of the search term (10 here), so the length difference divided by that length is the count:
DECLARE @D VARCHAR(10) = '2019-01-01';
SELECT *, (LEN(V) - LEN(REPLACE(V, @D, ''))) / LEN(@D) Occurrence
FROM (VALUES('A2019-01-01B2019-01-01C2019-01-01D2019-01-01E2019-01-01F2019-01-01'))T(V);
Returns:
+--------------------------------------------------------------------+------------+
| V | Occurrence |
+--------------------------------------------------------------------+------------+
| A2019-01-01B2019-01-01C2019-01-01D2019-01-01E2019-01-01F2019-01-01 | 6          |
+--------------------------------------------------------------------+------------+
Note that this only works when the string doesn't contain white space.
If you have white space, you need to remove it first, as in:
DECLARE @D VARCHAR(10) = '2019-01-01';
SELECT *, (LEN(REPLACE(V, ' ', '')) - LEN(REPLACE(REPLACE(V, ' ', ''), @D, ''))) / LEN(@D) Occurrence
FROM (VALUES('A 2019-01-01 B 2019-01-01 C 2019-01-01 D 2019-01-01 E 2019-01-01 F 2019-01-01'))T(V);
You just changed your question to search by a user name, but since the ':' is fixed, and if you are on SQL Server 2016+, you can do this:
DECLARE @D VARCHAR(10) = 'UserName1';
SELECT *,
(SELECT COUNT(1) FROM STRING_SPLIT(V, ':') WHERE Value LIKE CONCAT('%', @D, '%'))
FROM (VALUES
('2019-06-15UserName1:YES I DID IT 2019-06-14UserName2:HHHHHHH JJJJJJ 2019-06-14UserName1: COMMENT OF PEOPLE')
) T(V);
Finally, I'd recommend re-thinking that design, which is the real issue here, and reading more about normalization.
UPDATE:
Here is how to count the user name with joining the two tables
SELECT *,
(
SELECT COUNT(1)
FROM STRING_SPLIT(Col, ':')
WHERE Value LIKE CONCAT('%', UserName)
) Cnt
FROM Users U JOIN Data D
ON D.Col LIKE CONCAT('%', U.UserName, '%');
Returns:
+----------+----------------------------------------------+-----+
| UserName | Col | Cnt |
+----------+----------------------------------------------+-----+
| User1    | 2019-01-01User1:YES 2019-01-02User2:No       | 1   |
| User2    | 2019-01-01User1:YES 2019-01-02User2:No       | 1   |
| User1    | 2019-01-01User1:YES I 2019-01-02User1:No Way | 2   |
+----------+----------------------------------------------+-----+
First, you have a lousy data model and processing. You should not be just adding substrings to a string. You should be adding new rows to a table. And, you should not be encoding information in a string. You should be using columns for that.
My strongest suggestion is that you fix your data model and processing.
That said, you might be stuck with this situation. The simplest solution is just to compare lengths:
SELECT id, value,
(LEN(REPLACE(value, 'XXXXXXXXXXXXX:', 'XXXXXXXXXXXXX:1')) -
LEN(value)
) as Num_Times
FROM Table;
Of course, this assumes that 'XXXXXXXXXXXXX:' doesn't actually occur in the message. If that is a possibility, see my original comment on the data structure.
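To see the trick against the sample log line from the question (using randomname1 as the stand-in user name): each replacement lengthens the string by exactly one character, so the length difference is the occurrence count.

```sql
-- Each 'randomname1:' becomes 'randomname1:1', adding one character per hit,
-- so the length difference equals the number of occurrences.
SELECT LEN(REPLACE(V, 'randomname1:', 'randomname1:1')) - LEN(V) AS Num_Times
FROM (VALUES ('2019-06-15randomname1:YES I DID IT 2019-06-14randomname1:HHHHHHH')) T(V);
-- Num_Times = 2: the name appears twice in this sample
```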
The following will do as you ask, but you seriously need to reconsider how you store your data. What if instead of someone commenting "I did it", they entered "I did it on 2019-01-01"?
-- DateCount
-- Return the number of occurrences of ####-##-## where # is a digit
create function dbo.DateCount(@s nvarchar(max))
returns int as
begin
declare @k int = 0 -- @k holds the count so far
declare @i int = 1 -- index into the string, start at the first character
while @i <= len(@s)-9 -- keep checking until we reach the end
begin
if substring(@s,@i,10) like '[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]'
set @k = @k + 1 -- increment the count if these 10 characters match
set @i = @i + 1 -- check the next character
end
return @k -- return the count
end
go
select dbo.DateCount( '2019-06-15randomname1:YES I DID IT 2019-06-14random'
+ 'name2:HHHHHHH JJJJJJ 2019-06-14Urandomnamexxxxxx: '
+ 'COMMENT OF PEOPLE' )
-- Result is 3
If you're keen on using a set-based solution instead of a while loop, you can try this:
create function dbo.DateCount(@s nvarchar(max))
returns int as
begin
declare @k int;
with A as ( select 1 as I
union all
select I+1 as I from A where I<=len(@s)-9 )
select @k=count(*) from A
where substring(@s,I,10) like '[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]'
option (maxrecursion 0)
return @k
end
But, in my performance tests, I find that the set-based solution takes 50% longer.

In a table column, row-by-row subtraction to find the value in SQL Server

id patient_date
1 10/5/2017
2 6/6/2017
3 6/10/2017
4 8/7/2017
5 9/19/2017
Output:
id patient_date days
1 10/5/2017 (6/6/2017-10/5/2017)
2 6/6/2017 (6/10/2017-6/6/2017)
3 6/10/2017 (8/7/2017-6/10/2017)
4 8/7/2017 (9/19/2017-8/7/2017)
5 9/19/2017
Here's a query with an extra column, for you to choose from :)
declare @Table table(ID int identity(1,1), patient_date date)
insert into @Table values
('10/5/2017'),
('6/6/2017'),
('6/10/2017'),
('8/7/2017'),
('9/19/2017')
select A.ID,
A.patient_date,
cast(B.patient_date as varchar(10)) + ' - ' + cast(A.patient_date as varchar(10)) as Period, --this column shows exactly what you asked for
abs(datediff(day, B.patient_date, A.patient_date)) as DaysDifference --absolute difference in days between the two dates
from @Table A left join @Table B on A.ID = B.ID - 1
You can try this. It uses LEAD to find the next row's value. The last row has no next value, so LEAD falls back to the default; correct that as you need.
The default date 1900-01-01 should be changed to whatever you want. It could also be NULL, in which case the last row won't be calculated.
DECLARE @table TABLE (ID int, Patient_date date)
INSERT INTO @table VALUES
(1, '10/5/2017'),
(2,'6/6/2017'),
(3,'6/10/2017'),
(4,'8/7/2017'),
(5,'9/19/2017')
select *, DATEDIFF(DD, Patient_date, NextDate) as DaysBetween,
'(' + cast(Patient_date as varchar(50)) + ' - ' + cast(NextDate as varchar(50)) + ')' as DayString
from (
select *, LEAD(Patient_date,1,'1900-01-01') over(order by ID) as NextDate
from @table
) x
In my result I used NULL instead of 1900-01-01. Also notice that I use a different date format than you, but it shouldn't be a problem.

Updating SQL database values

In the table Requisitions I have two columns, RequisitionID and Code.
I need to update the Code value based on the RequisitionID value in this format:
RN-000RequisitionID/2017, so the output will be RN-0001/2017 if, for example, RequisitionID = 1.
I have tried the query below but it didn't work.
update [dbo].[Requisitions] set [Code]='RN-000 "'RequisitionID'"/2017'
A modification to your query:
update [dbo].[Requisitions] set [Code]='RN-000'+RequisitionID+'/2017'
If the above doesn't work (it will fail with a conversion error when RequisitionID is a numeric type), use:
update [dbo].[Requisitions] set [Code]='RN-000'+CONVERT(VARCHAR(10),RequisitionID)+'/2017'
Hope it helps.
You need to use a + to concatenate them. You will also have to cast RequisitionID to a VARCHAR so that the + operator isn't interpreted as addition, but as concatenation.
declare @table table (RequisitionID int, Code varchar(64))
insert into @table values
(1,'RN-000')
update @table
Set Code = Code + cast(RequisitionID as varchar(10)) + '/2017'
Where RequisitionID = 1
select * from @table
+---------------+--------------+
| RequisitionID | Code |
+---------------+--------------+
| 1 | RN-0001/2017 |
+---------------+--------------+

Need Query - Row count with loop

I have a Primary table like
========================================
ID NAME logtime (date time colum)
========================================
1 cat dd/mm/yyyy 10.30
2 cat dd/mm/yyyy 9.20
3 cat dd/mm/yyyy 9.30
4 cat dd/mm/yyyy 7.20
Secondary Table like
---------------------
Name improvement
---------------------
cat 1
Now I want to create a loop to:
Calculate the difference between the first 2 rows; if the difference is >= 1 hour, then update the secondary table to the existing value + 1 (that is, 2), else leave it as is.
Then calculate rows 2 and 3 the same way: if the difference is >= 1 hour, update the secondary table to the existing value + 1.
Then rows 3 and 4.
Once all rows are calculated, exit the loop.
Can anyone give me a query for this?
declare @timediff table(id int identity(1,1), Name varchar(50), logtime datetime)
insert into @timediff Select Name, logtime from timediff
declare @datetimeDiff int
declare @i int = 1
while(@i < (Select count(*) from @timediff))
Begin
Select @datetimeDiff = datediff(hour, logtime,
(Select logtime from @timediff where id = @i + 1))
from @timediff where id = @i
if(@datetimeDiff >= 1)
BEGIN
Select @datetimeDiff
--You can write your update code here
END
Set @i = @i + 1 -- move to the next consecutive pair (1-2, 2-3, 3-4, ...)
END
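For comparison, the same counting can be done without a loop on SQL Server 2012+ using LAG. This is only a sketch, assuming the primary table is named timediff and the secondary table is named Secondary with columns Name and improvement, as described in the question; the difference is taken in minutes so that ">= 1 hour" is exact rather than an hour-boundary crossing.

```sql
-- Set-based sketch (assumes SQL Server 2012+ for LAG; the table names
-- timediff / Secondary are assumptions based on the question).
;WITH Diffs AS (
    SELECT Name,
           ABS(DATEDIFF(MINUTE,
                        LAG(logtime) OVER (PARTITION BY Name ORDER BY id),
                        logtime)) AS MinDiff
    FROM timediff
)
UPDATE s
SET improvement = improvement + (SELECT COUNT(*)
                                 FROM Diffs d
                                 WHERE d.Name = s.Name
                                   AND d.MinDiff >= 60)
FROM Secondary s;
```

Each consecutive pair of rows at least 60 minutes apart adds 1 to the existing improvement value, which is what the loop above does one pair at a time.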