This question already has answers here:
SQL Server: Best way to concatenate multiple columns?
(6 answers)
Closed 4 years ago.
I have two columns:
colA colB
a1 b1
NULL b2
a3 NULL
I want to concatenate both columns in a SELECT-query for the following cases:
if value of colA is NULL and colB is NULL return NULL
if value of colA is NULL and colB is NOT NULL return :b1
if value of colA is NOT NULL and colB is NULL return a1
if both values are NOT NULL return a1:b1
How can I select the appropriate values for these cases?
SELECT NULLIF(COALESCE(colA,'')+COALESCE(':'+colB,''), '') FROM myTable
Some explanation:
Some explanation:
COALESCE returns the first non-null argument in its argument list. So the first COALESCE turns a null colA into the empty string.
The second COALESCE first prepends a colon to colB -- but if colB is null, concatenating a string with NULL yields NULL! So the result is again the empty string if colB is null, and a colon plus colB if it isn't.
We concatenate the two COALESCE outputs. We now have everything the OP wanted, except that when both columns are null we have the empty string instead of NULL. NULLIF takes care of that -- if its two arguments are equal, it returns NULL; otherwise it returns the first argument.
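For a quick sanity check, here is a minimal sketch that runs the expression against the four cases from the question (the table variable name @demo is just for illustration):
DECLARE @demo TABLE (colA varchar(10), colB varchar(10));
INSERT INTO @demo VALUES (NULL, NULL), ('a1', NULL), (NULL, 'b1'), ('a1', 'b1');
-- returns NULL, 'a1', ':b1', 'a1:b1' respectively
SELECT NULLIF(COALESCE(colA, '') + COALESCE(':' + colB, ''), '') AS combined
FROM @demo;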
This should do it:
DECLARE @t TABLE (cola VARCHAR(100), colb VARCHAR(100));
INSERT INTO @t VALUES
(NULL, NULL),
('a1', NULL),
(NULL, 'b1'),
('a1', 'b1');
SELECT NULLIF(CONCAT(cola, ':' + colb), '')
FROM @t;
NULL
a1
:b1
a1:b1
Keep in mind that:
the + operator yields NULL if any operand is NULL
CONCAT treats NULL values as empty strings
NULLIF is there to handle the case where both columns are NULL
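A one-line demo of those three points (purely illustrative):
SELECT 'a1' + NULL        AS plus_result,    -- NULL: + propagates NULL
       CONCAT('a1', NULL) AS concat_result,  -- 'a1': CONCAT treats NULL as ''
       NULLIF('', '')     AS nullif_result;  -- NULL: both arguments are equal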
;WITH col_data
AS
(SELECT cols = CASE WHEN colA IS NOT NULL THEN colA ELSE '' END
             + CASE WHEN colB IS NOT NULL THEN ':' + colB ELSE '' END
 FROM myTable
)
SELECT NULLIF(cols, '') FROM col_data;
I have this table:
IF OBJECT_ID('tempdb..#Test') IS NOT NULL
DROP TABLE #Test;
CREATE TABLE #Test (Col VARCHAR(100));
INSERT INTO #Test
VALUES ('1'), ('2'), ('10'), ('A'), ('B'), ('C1'), ('1D'), ('10HH')
SELECT * FROM #Test
I want to sort by numeric value first and then alphabetically.
The sort order I want is:
1
1D
2
10
10HH
A
B
C1
Assume the structure of each entry is one of the following (with no dash, of course):
number
number-string
string-number
string
If there is an entry like string-number-string, assume it is string-number.
It's not pretty, but it works.
SELECT T.Col
FROM #Test T
     -- position of the first non-digit character (0 if the value is all digits)
     CROSS APPLY (VALUES(PATINDEX('%[^0-9]%',T.Col)))PI(I)
     -- numeric prefix as an int: the whole value when it is all digits,
     -- the characters before the first non-digit otherwise, NULL when there is no numeric prefix
     CROSS APPLY (VALUES(TRY_CONVERT(int,NULLIF(LEFT(T.Col,ISNULL(NULLIF(PI.I,0)-1,LEN(T.Col))),''))))TC(L)
ORDER BY CASE WHEN TC.L IS NULL THEN 1 ELSE 0 END,
         TC.L,
         T.Col;
Honestly, I would suggest that if you want to order your data like a numerical value, you actually store the numerical value in a numerical column; clearly the above should be a numerical prefix value and then a string suffix. If you then want the values you have now, use a (PERSISTED) computed column. Like this:
CREATE TABLE #Test (Prefix int NULL,
Suffix varchar(100) NULL,
Col AS CONCAT(Prefix, Suffix) PERSISTED);
INSERT INTO #Test (Prefix, Suffix)
VALUES (1,NULL), (2,NULL), (10,NULL), (NULL,'A'), (NULL,'B'), (NULL,'C1'), (1,'D'), (10,'HH');
SELECT Col
FROM #Test
ORDER BY CASE WHEN Prefix IS NULL THEN 1 ELSE 0 END,
Prefix,
Suffix;
This awful and unintuitive solution, which would be unnecessary if you stored the two pieces of data separately, brought to you by bad idea designs™:
;WITH cte AS
(
SELECT Col, rest = SUBSTRING(Col, pos, 100),
possible_int = TRY_CONVERT(bigint, CASE WHEN pos <> 1 THEN
LEFT(Col, COALESCE(NULLIF(pos,0),100)-1) END)
FROM (SELECT Col, pos = PATINDEX('%[^0-9]%', Col) FROM #Test) AS src
)
SELECT Col FROM cte
ORDER BY CASE
WHEN possible_int IS NULL THEN 2 ELSE 1 END,
possible_int,
rest;
Result:
Col
1
1D
2
10
10HH
A
B
C1
Example db<>fiddle
I'm trying to create some reports for auditing, but I have a very specific question.
There are about 120 columns, each with a specific numeric answer. I'd like to return the column name and the value of each row for that column. I'm aware I'll get a lot of results, but that's not a problem.
For example I have:
KEY |ColumnA | ColumnB
1 |Value A | ValueB
2 |ValueA2 | ValueB2
But what I want is:
1 |ColumnA | Value A
2 |ColumnA | Value A2
1 |ColumnB | Value B
2 |ColumnB | Value B2
I've tried returning all rows and then joining on itself, but it didn't provide me with the output I needed.
A simple UNPIVOT will do the work :)
declare @tbl table ([Key] int, ColumnA varchar(15), ColumnB varchar(15));
insert into @tbl values
(1, 'Value A', 'ValueB'),
(2, 'ValueA2', 'ValueB2');
select [key], [column], [value] from
(select * from @tbl) p
unpivot
([value] for [column] in (ColumnA, ColumnB)) u
order by [column]
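One caveat worth noting for an auditing report: UNPIVOT silently drops rows whose value is NULL. If the NULLs should appear in the output as well, a hedged alternative is CROSS APPLY (VALUES ...), reusing the same sample table:
select t.[Key], v.[column], v.[value]
from @tbl t
cross apply (values ('ColumnA', t.ColumnA),
                    ('ColumnB', t.ColumnB)) v([column], [value])
order by v.[column];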
It's so simple... if you know the column names, you could use a simple UNION ALL:
SELECT [KEY], 'ColumnA' AS [Column], ColumnA AS [Value] FROM tblAuditing
UNION ALL
SELECT [KEY], 'ColumnB', ColumnB FROM tblAuditing
The following query should do what you want - you need a customized sort on the column names:
CREATE TABLE #temp (ColumnA VARCHAR(20), ColumnB VARCHAR(20))
INSERT INTO #temp VALUES ('Value A','Value B'),('Value A2','Value B2')
SELECT T.Col, T.Val
FROM (SELECT *,ROW_NUMBER() OVER (ORDER BY (SELECT 1)) RNO FROM #temp t) tmp
CROSS APPLY (VALUES (tmp.ColumnA,'ColumnA',tmp.RNO),(tmp.ColumnB,'ColumnB',tmp.RNO)) AS T(Val,Col,sort)
ORDER BY T.Col, Sort
The result is as below:
Col Val
ColumnA Value A
ColumnA Value A2
ColumnB Value B
ColumnB Value B2
I have a table with many columns and 2.1M rows. Here are the columns related to my problem:
Column_name  Type     Computed  Length  Prec  Scale  Nullable  TrimTrailingBlanks  FixedLenNullInSource  Collation
id           int      no        4       10    0      no        (n/a)               (n/a)                 NULL
val          varchar  no        15                   yes       no                  yes                   SQL_Latin1_General_CP1_CI_AS
I want to return rows which contain characters other than A-Z, a-z, 0-9, (space) and _ in column val.
Sample data:
INSERT INTO tabl
(id, val)
VALUES (1, 'Extemporè'),
(2, 'Aâkash'),
(3, 'Driver 12'),
(4, 'asd'),
(5, '10'),
(6, 'My_Car'),
(7, 'Johnson & Sons'),
(8, 'Johan''s Service'),
(9, 'Indus Truck')
Expected output :
id val
-- -----------
1 Extemporè
2 Aâkash
7 Johnson & Sons
8 Johan's Service
I found a similar question here, but it also does not give the expected results:
SELECT *
FROM tabl
WHERE val LIKE '%[^A-Z0-9 _]%'
It gives this result:
id val
-- ----------
7 Johnson & Sons
8 Johan's Service
I would do this with the help of a collation like Latin1_General_BIN like this:
SELECT *
FROM tabl
WHERE val COLLATE Latin1_General_BIN LIKE '%[^A-Za-z0-9 _]%'
It would seem easier this way because BIN collations are both case-sensitive and accent-sensitive and, moreover, accented characters are collated separately from non-accented ones. The latter means that it is easy to specify non-accented letters in the form of a range. (But case sensitivity means you also have to specify letters of both cases explicitly, as you can see above.)
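To see why the collation matters for the range pattern, here is a small standalone check (illustrative only -- it compares the same character against the two patterns):
-- Under the dictionary sort order of the default CI_AS collation, an accented letter
-- such as 'è' sorts between plain letters, so it falls inside A-Z and is not flagged;
-- under the binary collation it falls outside A-Za-z and is flagged.
SELECT CASE WHEN 'è' LIKE '%[^A-Z0-9 _]%' COLLATE SQL_Latin1_General_CP1_CI_AS
            THEN 'flagged' ELSE 'not flagged' END AS with_ci_as,
       CASE WHEN 'è' LIKE '%[^A-Za-z0-9 _]%' COLLATE Latin1_General_BIN
            THEN 'flagged' ELSE 'not flagged' END AS with_bin;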
Updated answer: the temporary table is used to exclude values such as "Driver" or "Indus Truck"; it also forces a collation change for values such as "Aâkash" - this is to make sure correct values are not qualified for exclusion in the join.
Note: special characters such as ' or & that are contained in correct values must be manually added to the list (where marked below).
create table #tabl(id int, val varchar(15))
insert #tabl(id, val)
select i.id, cast(i.val as varchar(200)) Collate SQL_Latin1_General_CP1253_CI_AI as val
from tabl i
where i.val <> upper(i.val) Collate SQL_Latin1_General_CP1_CS_AS
and i.val <> lower(i.val) Collate SQL_Latin1_General_CP1_CS_AS
and i.val not like '%[0-9]%'
and i.val not like '%[_]%'
and i.val not like '%[]%'
and i.val not like '%[''&]%' -- add special characters (like ' or &) that are permitted in this list;
-- this is the only "manual" requirement for this solution to work.
select t.id, t.val
from tabl t
left join #tabl tt on t.val = tt.val
where tt.val is null
and t.val <> upper(t.val) Collate SQL_Latin1_General_CP1_CS_AS
and t.val <> lower(t.val) Collate SQL_Latin1_General_CP1_CS_AS
and t.val not like '%[0-9]%'
and t.val not like '%[_]%'
and t.val not like '%[]%'
I have a column in SQL Server that contains integer values. Now I want to replace a range, e.g. values > 55 with the string 'Normal value' and values < 55 with 'abnormal value'. I have tried the REPLACE() function but it didn't work. Any help please?
The query below doesn't change anything in the table; instead of displaying the value, it displays the equivalent string:
SELECT yourValue,
CASE WHEN yourValue > 55 THEN 'Normal' ELSE 'Abnormal' END
FROM tableName
SQLFiddle Demo
56 and up will be Normal
55 and down will be Abnormal
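If you actually need to store the label rather than just display it (the integer column itself can't hold a string), one possible sketch is to add a separate varchar column -- the column name ValueLabel below is hypothetical:
-- Hypothetical column; run the ALTER in its own batch before the UPDATE:
-- ALTER TABLE tableName ADD ValueLabel varchar(20) NULL;
UPDATE tableName
SET ValueLabel = CASE WHEN yourValue > 55 THEN 'Normal value' ELSE 'abnormal value' END;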
If the column allows NULLs then you could try:
SELECT t.ColA, CASE WHEN t.ColA >= 55 THEN 'Normal value' WHEN t.ColA < 55 THEN 'abnormal value' END
FROM MySchema.MyTable AS t
Example:
DECLARE #MyTable TABLE
(
ID INT IDENTITY(1,1) PRIMARY KEY,
ColA INT NULL
);
INSERT INTO #MyTable (ColA) VALUES(11);
INSERT INTO #MyTable (ColA) VALUES(111);
INSERT INTO #MyTable (ColA) VALUES(NULL);
SELECT t.ColA,
CASE
WHEN t.ColA >= 55 THEN 'Normal value'
WHEN t.ColA < 55 THEN 'abnormal value'
-- WHEN t.ColA IS NULL THEN NULL
END AS CaseWhen
FROM #MyTable AS t
Results:
ColA CaseWhen
----------- --------------
11 abnormal value
111 Normal value
NULL NULL
This question already has answers here:
Why does Oracle 9i treat an empty string as NULL?
(10 answers)
Closed 9 years ago.
I have used two queries to update one column to NULL:
update table_name
set col1 = NULL
where col2 = 'MUTHU';
update table_name
set col1 = ''
where col2 = 'MUTHU';
But when I query with the NVL function, I get the same result for both queries.
select nvl(col1, 'value') from table_name;
My question is: what is the difference between NULL and '', and when should each be used?
One difference is that NULL usually propagates, so if you concatenate NULL with another string:
create table t
(
col1 varchar(10),
col2 varchar(10),
col3 varchar(10)
);
insert into t values ( null, '', 'hello' ) ;
select
concat(col1 ,col3),
concat(col2 ,col3)
from t
>> NULL, 'hello'
'' implies that the column has a value, which is an empty string.
NULL, on the other hand, means "a missing unknown value",
so NULL cannot be compared with =, <=, and so on.
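A minimal illustration of that last point, reusing table_name, col1 and col2 from the question:
-- Comparing with = never evaluates to TRUE for NULL, so this returns no rows,
-- even for rows where col1 is NULL:
select col2 from table_name where col1 = NULL;
-- IS NULL is the correct test:
select col2 from table_name where col1 is null;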