Datetime data type in a CASE expression impacts all other values on SQL Server 2016

I'm attempting to insert a few rows into a table in SQL Server 2016 using the SQL query below. A temp table is used to store the data being inserted, then a loop is used to insert multiple rows into the table.
--Declare temporary table and insert data into it
DECLARE @fruitTransactionData TABLE
(
Category VARCHAR (30),
Species VARCHAR (30),
ArrivalDate DATETIME
)
INSERT INTO @fruitTransactionData ([Category], [Species], [ArrivalDate])
VALUES ('Fruit', 'Apple - Fuji', '2017-06-30')
--Go into loop for each FieldName (there will be 3 rows inserted)
DECLARE @IDColumn INT
SELECT @IDColumn = MIN(ID) FROM FieldNames
WHILE @IDColumn IS NOT NULL
BEGIN
--Insert data into Transactions
INSERT INTO [dbo].[Transactions] ([FieldName], [Result])
SELECT
(SELECT Name FROM FieldNames WHERE ID = @IDColumn),
CASE
WHEN @IDColumn = 1 THEN 1 --Result insert for FieldName 'Category' where ID=1 refers to 'Fruits'
WHEN @IDColumn = 2 THEN 99 --Result insert for FieldName 'Species' where ID=99 refers to 'Apple - Fuji'
WHEN @IDColumn = 3 THEN [data].[ArrivalDate] --Result insert for FieldName 'Date'
ELSE NULL
END
FROM
@fruitTransactionData [data]
--Once a row has been inserted for one FieldName, then move to the next one
SELECT @IDColumn = MIN(ID)
FROM FieldNames
WHERE ID > @IDColumn
END
The data is inserted, but every Result shows a date, even though some of the values weren't meant to be dates.
+-----+------------+---------------------+
| ID | FieldName | Result |
+-----+------------+---------------------+
| 106 | Category | Jan 2 1900 12:00AM |
| 107 | Species | Apr 10 1900 12:00AM |
| 108 | Date | Jun 30 2017 12:00AM |
+-----+------------+---------------------+
If I comment out the insert of the date row, the columns display correctly.
+-----+------------+--------+
| ID | FieldName | Result |
+-----+------------+--------+
| 109 | Category | 1 |
| 110 | Species | 99 |
+-----+------------+--------+
It seems like including the date converts all the Result values to datetime (e.g. Jan 2 1900 12:00AM is the datetime conversion of the number 1).
The result I'm trying to get, as opposed to the above, is this:
+-----+------------+---------------------+
| ID | FieldName | Result |
+-----+------------+---------------------+
| 106 | Category | 1 |
| 107 | Species | 99 |
| 108 | Date | Jun 30 2017 12:00AM |
+-----+------------+---------------------+
Just for clarification, the Transaction table schema is as follows:
[ID] INT IDENTITY(1, 1) CONSTRAINT [PK_Transaction_ID] PRIMARY KEY,
[FieldName] VARCHAR(MAX) NULL,
[Result] VARCHAR(MAX) NULL

SQL Server is guessing at the data type of the CASE expression. It does this based on its internal data type precedence order and the following CASE return type rule:
the highest precedence type from the set of types in
result_expressions and the optional else_result_expression.
Since int has a lower precedence than datetime, SQL Server chooses a datetime return type.
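The rule is easy to see in isolation; here is a minimal illustrative example (not part of the original query):
--datetime outranks int, so the int literal 1 is implicitly converted
SELECT CASE WHEN 1 = 0 THEN GETDATE() ELSE 1 END
--returns Jan 2 1900 12:00AM: the int 1 read as a datetime
--(1 day past SQL Server's 1900-01-01 datetime epoch)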
Explicitly normalizing all branches of your CASE expression to varchar will solve the issue:
CASE WHEN @IDColumn = 1 THEN '1'
WHEN @IDColumn = 2 THEN '99'
WHEN @IDColumn = 3 THEN FORMAT([data].[ArrivalDate], 'MMM d yyyy h:mmtt')
ELSE NULL
END
In case you are interested SQL Server uses the following precedence order for data types:
user-defined data types (highest)
sql_variant
xml
datetimeoffset
datetime2
datetime
smalldatetime
date
time
float
real
decimal
money
smallmoney
bigint
int
smallint
tinyint
bit
ntext
text
image
timestamp
uniqueidentifier
nvarchar (including nvarchar(max) )
nchar
varchar (including varchar(max) )
char
varbinary (including varbinary(max) )
binary (lowest)

It is converting the values because all branches of a CASE expression must resolve to a single data type. I think you want to convert the date to a string (and the numbers too).
SELECT
(SELECT Name FROM FieldNames WHERE ID=@IDColumn)
,CASE WHEN @IDColumn=1 THEN '1' --Result insert for FieldName 'Category' where ID=1 refers to 'Category Fruits'
WHEN @IDColumn=2 THEN '99' --Result insert for FieldName 'Species' where ID=99 refers to 'Apple - Fuji'
WHEN @IDColumn=3 THEN convert(varchar(MAX), [data].[ArrivalDate], 23) --Result insert for FieldName 'Date'
ELSE null
END
FROM @fruitTransactionData [data]

This is too long for a comment.
What you are trying to do just doesn't make sense. The columns you are inserting into are defined by:
INSERT INTO [dbo].[Transactions]([FieldName], [Result])
---------------------------------^ -----------^
The INSERT is inserting rows with two values, one for "FieldName" and the other for "Result".
So the SELECT portion should return two columns, no more, no fewer. Your SELECT appears to have four. Admittedly, the first two are syntactically incorrect CASE expressions, so the count might be off.
It is totally unclear to me what you want to do, so I can't make a more positive suggestion.


How to take values from a column and assign them to other columns

Here's what I have:
DECLARE @keyString2 nvarchar(500)
SET @keyString2 =
(SELECT TOP (1) Key_analysis
FROM testing.dbo.[nameWIthoutSpecialChars])
IF CHARINDEX('Limit of Insurance Relativity Factors', @keyString2) > 0
EXEC sp_rename 'testing.dbo.nameWIthoutSpecialChars.Key2',
'Limit of Insurance Relativity Factors',
'COLUMN';
Basically, what that code does is rename columns using values taken from a different column. But as you can see, there's a hardcoded string in CHARINDEX, so I'd have to already know what's inside the variable, which makes it a very manual process. I could essentially just hardcode the EXEC and run it over and over without even needing the IF statement.
What I'm trying to accomplish is to rename columns based on the values inside another column.
To make it more clear I have a table like this:
+--------------------------------+---------+---------+
| Description | Column2 | Column3 |
+--------------------------------+---------+---------+
| string value 1, string value2 | | |
+--------------------------------+---------+---------+
| string value 1, string value2 | | |
+--------------------------------+---------+---------+
| string value 1, string value 2 | | |
+--------------------------------+---------+---------+
The values in the "Description" column are the same throughout the table. What I want is for those values to replace the other column names, like so:
+--------------------------------+----------------+----------------+
| Description | string value 1 | string value 2 |
+--------------------------------+----------------+----------------+
| string value 1, string value2 | | |
+--------------------------------+----------------+----------------+
| string value 1, string value2 | | |
+--------------------------------+----------------+----------------+
| string value 1, string value 2 | | |
+--------------------------------+----------------+----------------+
The only other caveat is that there may be more or fewer string values than the 2 shown, and I want to run this across multiple tables. Every table has 10 columns just like "Column2" and "Column3" in the example, meaning up to 10 columns may need to be renamed, depending on how many values are in the "Description" column.
Experimental table (I didn't use a #temporary table or a table variable):
create TABLE bbbb (
Description VARCHAR(30) NOT NULL
,column2 VARCHAR(30)
,column3 VARCHAR(30)
);
INSERT INTO bbbb(Description,column2,column3) VALUES
('string value 1,string value2',NULL,NULL),
('string value 1,string value2',NULL,NULL),
('string value 1,string value2',NULL,NULL);
Final query:
declare @a varchar(100);
declare @b varchar(100);
--PARSENAME counts parts from the right, so part 2 is the first value
set @a=(select distinct PARSENAME(REPLACE(Description,',','.'),2) from bbbb)
set @b=(select distinct PARSENAME(REPLACE(Description,',','.'),1) from bbbb)
EXEC sp_rename '[bbbb].[column2]', @a, 'COLUMN';
EXEC sp_rename '[bbbb].[column3]', @b, 'COLUMN';
select * from bbbb
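Note that PARSENAME handles at most four parts, so for the "up to 10 values" case a plain split loop is needed instead. A rough, untested sketch, assuming the renameable columns are named column2 through column11 (that naming is my assumption):
DECLARE @desc varchar(500), @val varchar(128), @old sysname;
DECLARE @pos int, @i int;
SET @i = 1;
SELECT TOP (1) @desc = Description FROM bbbb;
WHILE @i <= 10 AND LEN(@desc) > 0
BEGIN
--take the next comma-separated value (or the remainder)
SET @pos = CHARINDEX(',', @desc);
IF @pos > 0
BEGIN
SET @val = LTRIM(RTRIM(LEFT(@desc, @pos - 1)));
SET @desc = SUBSTRING(@desc, @pos + 1, LEN(@desc));
END
ELSE
BEGIN
SET @val = LTRIM(RTRIM(@desc));
SET @desc = '';
END
--column2 holds the 1st value, column3 the 2nd, and so on
SET @old = 'bbbb.column' + CAST(@i + 1 AS varchar(2));
EXEC sp_rename @old, @val, 'COLUMN';
SET @i = @i + 1;
END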

Oracle: How to query a datetime column for both not-null and null values?

I want to query a datetime column for both non-null and null values.
But my query currently returns only the non-null values; I want both.
Query:
select l.com_code,
l.p_code,
to_char(l.effdate,'dd/mm/yyyy') effdate,to_char(l.expdate,'dd/mm/yyyy') expdate
from RATE_BILL l
where ( to_date('02/06/2016','dd/mm/yyyy') <= to_date(l.effdate,'dd/mm/yyyy')
or to_date('02/06/2016','dd/mm/yyyy') <= to_date(l.expdate,'dd/mm/yyyy') )
Data Sample
com_code | p_code | effdate | expdate
A | TEST01 | 01/01/2016 | 31/05/2016
A | Test01 | 01/06/2016 |
Query Result:
com_code | p_code | effdate | expdate
A | TEST01 | 01/01/2016 | 31/05/2016
A | Test01 | 01/06/2016 |
If expdate is null it should be treated as '31/12/9998', but it is stored in the DB as null.
When querying with the datetime '02/06/2016', the result should be:
com_code | p_code | effdate | expdate
A | Test01 | 01/06/2016 |
But when the where clause is:
where ( to_date('31/05/2016','dd/mm/yyyy') <= to_date(l.effdate,'dd/mm/yyyy') or to_date('31/05/2016','dd/mm/yyyy') <= to_date(l.expdate,'dd/mm/yyyy') )
the result should be:
A | TEST01 | 01/01/2016 | 31/05/2016
A | Test01 | 01/06/2016 |
The datetime compared against is the current datetime.
First of all, I must admit that I am not sure I understand your wording [no offence intended]. Feel free to comment if this answer does not address your needs.
The where condition of a query is built on the columns of the table/view and their sql data types. There is no need to convert datetime columns to the datetime data type.
Moreover, it is potentially harmful here, since it implies an implicit conversion:
date column
-> char /* implicit, default format */
-> date /* express format;
in general will differ from the format the argument
string follows
*/
So change the where condition to:
where to_date('02/06/2016','dd/mm/yyyy') <= l.effdate
or to_date('02/06/2016','dd/mm/yyyy') <= l.expdate
To cater for null values, complement the where condition with a 'sufficiently large' datetime to compare against in case of null values in the db columns:
where to_date('02/06/2016','dd/mm/yyyy') <= nvl(l.effdate, to_date('31/12/9998','dd/mm/yyyy'))
or to_date('02/06/2016','dd/mm/yyyy') <= nvl(l.expdate, to_date('31/12/9998','dd/mm/yyyy'))
You are free to use different cutoff dates. For example you might wish to use expdate from rate_bill when it is not null and the current datetime otherwise:
where to_date('02/06/2016','dd/mm/yyyy') <= nvl(l.effdate, to_date('31/12/9998','dd/mm/yyyy'))
or to_date('02/06/2016','dd/mm/yyyy') <= nvl(l.expdate, sysdate)
I don't understand the details of your problem, but I think you have a problem with comparisons involving null values.
Null values are ignored in comparisons. To select those rows, you should explicitly check l.expdate is null,
e.g.
-- select with expdate < today or with no expdate
select *
from RATE_BILL l
where l.expdate is null or
l.expdate <= trunc(sysdate)

SQL Server dynamic pivot table

In SQL Server, I have two tables, TableA and TableB, from which I need to generate a fairly complex report. After some research I concluded that I have to use a SQL pivot table. I also tried this sample link, but in my case TableB can have any number of child rows, which makes it very complicated, so can anyone help me with this? Please see the details below:
Code
Create table TableA(
ProjectID INT NOT NULL,
ControlID INT NOT NULL,
ControlCode Varchar(2) NOT NULL,
ControlPoint Decimal NULL,
ControlScore Decimal NULL,
ControlValue Varchar(50)
)
Sample Data
ProjectID | ControlID | ControlCode | ControlPoint | ControlScore | ControlValue
P001 1 A 30.44 65 Invalid
P001 2 C 45.30 85 Valid
Code
Create table TableB(
ControlID INT NOT NULL,
ControlChildID INT NOT NULL,
ControlChildValue Varchar(200) NULL
)
Sample Data
ControlID | ControlChildID | ControlChildValue
1 100 Yes
1 101 No
1 102 NA
1 103 Others
2 104 Yes
2 105 SomeValue
The output should be a single row for a given ProjectID, with all its Control values first, followed by the child control values (named from the ControlCode, i.e. ControlCode_Child (1, 2, 3...)), and it should look like this
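One possible starting point for the child values is a dynamic PIVOT; this is an untested sketch, and the generated column names (e.g. A_Child1) are my assumption based on the ControlCode_Child naming described above. The Control values themselves would still need to be joined into the final row:
DECLARE @cols nvarchar(max), @sql nvarchar(max);
--build one column per child row, numbered within its ControlID
SELECT @cols = STUFF((
SELECT ',' + QUOTENAME(a.ControlCode + '_Child'
+ CAST(ROW_NUMBER() OVER (PARTITION BY b.ControlID
ORDER BY b.ControlChildID) AS varchar(10)))
FROM TableB b
JOIN TableA a ON a.ControlID = b.ControlID
FOR XML PATH('')), 1, 1, '');
SET @sql = N'SELECT p.ProjectID, ' + @cols + N'
FROM (
SELECT a.ProjectID,
a.ControlCode + ''_Child''
+ CAST(ROW_NUMBER() OVER (PARTITION BY b.ControlID
ORDER BY b.ControlChildID) AS varchar(10)) AS col,
b.ControlChildValue
FROM TableA a
JOIN TableB b ON b.ControlID = a.ControlID
) src
PIVOT (MAX(ControlChildValue) FOR col IN (' + @cols + N')) p;';
EXEC sp_executesql @sql;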

What is the most efficient way to convert a split-up date to a datetime value in SQL Server 2005?

I'm trying to tie together two SQL Server 2005 databases using a view. The source database splits the date across three int fields.
RecordId | RecordYear | RecordMonth | RecordDay
-----------------------------------------------
000001 | 2001 | 1 | 26
000002 | 2002 | 3 | 10
My goal is to create an easier-to-work-with view with a single datetime field for the date, something like below.
RecordId | RecordDate
---------------------
000001 | 2001/01/26
000002 | 2002/03/10
What is the most efficient way to get this done?
Right now, I'm casting each column as a varchar, concatenating them with slash separators, then casting the full varchar as a datetime, but I feel like there's a more efficient way.
cast(
cast(RecordYear as varchar) + '/' +
cast(RecordMonth as varchar) + '/' +
cast(RecordDay as varchar)
as datetime
) as RecordDate
No, don't cast to string, and definitely not to varchar without a length.
Try:
DECLARE @x TABLE
(
RecordId CHAR(6) PRIMARY KEY,
RecordYear INT,
RecordMonth INT,
RecordDay INT
);
INSERT @x VALUES('000001',2001,1,26);
INSERT @x VALUES('000002',2002,3,10);
SELECT
RecordId,
RecordDate = DATEADD(DAY, RecordDay-1,
DATEADD(MONTH, RecordMonth-1,
DATEADD(YEAR, RecordYear-1900, '19000101'
)))
FROM @x
ORDER BY RecordId;
Results:
RecordId RecordDate
-------- ----------
000001 2001-01-26
000002 2002-03-10
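For what it's worth, the question targets SQL Server 2005, but on SQL Server 2012 and later the same view could use DATEFROMPARTS, which raises an error on invalid components instead of silently shifting the date:
SELECT
RecordId,
RecordDate = DATEFROMPARTS(RecordYear, RecordMonth, RecordDay) --returns date; CAST to datetime if needed
FROM dbo.SourceRecords --hypothetical table name
ORDER BY RecordId;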

Append a zero to value if necessary in SQL statement DB2

I have a complex SQL statement that needs to match up two tables in a join. The first table stores the location number as a SMALLINT, and the second table stores the store number as a CHAR(4). I have been able to cast the smallint to a char(4) like this:
CAST(STR_NBR AS CHAR(4)) = LOCN_NBR
The issue is that because the smallint has no leading '0', the join returns null values from the right-hand side of the LEFT OUTER JOIN.
Example
Table set A(Smallint) Table Set B (Char(4))
| 96 | | 096 |
| 97 | | 097 |
| 99 | | 099 |
| 100 | <- These return -> | 100 |
| 101 | <- These return -> | 101 |
| 102 | <- These return -> | 102 |
I need to make it so that they all return, but since this is in a join, how do you prepend a zero in certain conditions and not in others?
SELECT RIGHT('0000' || RTRIM(CHAR(STR_NBR)), 4)
FROM TABLE_A
Casting Table B's CHAR to SMALLINT (DB2 has no TINYINT) would work as well:
SELECT ...
FROM TABLE_A A
JOIN TABLE_B B
ON A.num = CAST(B.txt AS SMALLINT)
Try the LPAD function:
LPAD(col, 3, '0')
I was able to match them successfully by producing a 3-digit location number at all times, as follows:
STR_NBR was originally defined as a SmallINT(2)
LOCN_NO was originally defined as a Char(4)
SELECT ...
FROM TABLE_A AS A
JOIN TABLE_B AS B
ON CAST(SUBSTR(DIGITS(A.STR_NBR),3,3)AS CHAR(4)) = B.LOCN_NO