Select 2 columns in one and combine them - sql

Is it possible to select 2 columns in just one and combine them?
Example:
select something + somethingElse as onlyOneColumn from someTable

(SELECT column1 as column FROM table )
UNION
(SELECT column2 as column FROM table )

Yes, just like you did:
select something + somethingElse as onlyOneColumn from someTable
If you had run that query against the database, you would have gotten the right answer.
What happens is that you are asking for an expression. A very simple expression is just a column name; a more complicated expression can contain formulas, functions, and so on.
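For instance, still against the question's hypothetical someTable (and assuming numeric columns here), every item in the select list is just an expression:
select something,                                  -- a simple expression: just a column name
       something + somethingElse as onlyOneColumn, -- a combined expression
       something * 2 as doubled                    -- a formula expression
from someTable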

Yes,
SELECT CONCAT(field1, field2) AS WHOLENAME FROM TABLE
WHERE ...
will result in a data set like:
WHOLENAME
field1field2

None of the other answers worked for me but this did:
SELECT CONCAT(Cust_First, ' ', Cust_Last) AS CustName FROM customer

Yes it's possible, as long as the datatypes are compatible. If they aren't, use a CONVERT() or CAST()
SELECT firstname + ' ' + lastname AS name FROM customers
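If one of the columns were numeric instead, a sketch with CAST might look like this (age is a hypothetical numeric column, not one from the question):
SELECT firstname + ' ' + CAST(age AS VARCHAR(10)) AS name_and_age FROM customers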

The + operator should do the trick just fine. Keep one thing in mind, though: if one of the columns is NULL or does not have any value, the result will be NULL. Instead, combine + with the COALESCE function and you'll be set.
Here is an example:
SELECT COALESCE(column1,'') + COALESCE(column2,'') FROM table1
In this example, if column1 is NULL, the contents of column2 will still show up, instead of a plain NULL.
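A minimal side-by-side sketch, assuming a hypothetical person table where last_name may be NULL:
SELECT first_name + ' ' + last_name AS raw_concat,                             -- NULL whenever last_name is NULL
       COALESCE(first_name, '') + ' ' + COALESCE(last_name, '') AS safe_concat -- never NULL
FROM person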
Hope this helps!

To complete the answer of @Pete Carter, I would add an "ALL" to the UNION (if you need to keep duplicate entries).
(SELECT column1 as column FROM table )
UNION ALL
(SELECT column2 as column FROM table )
DROP TABLE IF EXISTS #9
CREATE TABLE #9
(
USER1 int
,USER2 int
)
INSERT INTO #9
VALUES(1, 2), (1, 3), (1, 4), (2, 3)
------------------------------------------------
(SELECT USER1 AS 'column' from #9)
UNION ALL
(SELECT USER2 AS 'column' from #9)
This would then return all eight values from both columns, duplicates included: 1, 1, 1, 2, 2, 3, 4, 3.

Yes, you can combine columns easily enough such as concatenating character data:
select col1 || col2 as bothcols from tbl ...
or adding (for example) numeric data:
select col1 + col2 as bothcols from tbl ...
In both those cases, you end up with a single column bothcols, which contains the combined data. You may have to coerce the data type if the columns are not compatible.

If one of the columns is numeric, I have found that Oracle treats '+' as the addition operator instead of concatenation.
e.g.:
select (id + name) as one from table1; (id is numeric)
throws an invalid number exception.
In such a case you can use the || operator, which is concatenation:
select (id || name) as one from table1;
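Alternatively, you can convert the number explicitly and then concatenate (same hypothetical columns):
select (to_char(id) || name) as one from table1;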

Your syntax should work; maybe add a space between the columns, like this:
SELECT something + ' ' + somethingElse as onlyOneColumn FROM someTable

I hope this answer helps:
SELECT (CAST(id AS NVARCHAR)+','+name) AS COMBINED_COLUMN FROM TABLENAME;

select column1 || ' ' || column2 as whole_name FROM tablename;
Here || is the concatenation operator used to combine the two columns into a single column, and the ' ' between the two || operators adds a space between them.

SELECT firstname || ' ' || lastname FROM users;

Related

How to write different data type in one result column

For example, I have two columns in one table named tab1:
The 1st column is of type int; it is the PK column.
The 2nd one is of type nvarchar.
ID Name
1 Anna
2 Vladimir
What I want as a result:
ID_Name
1 Anna
2 Vladimir
Use CONCAT function in your select:
Select Concat(ID, ' ', Name) AS ID_Name FROM tab1
Your first column is of type int, so you need to convert it to varchar or nvarchar to concatenate it with the Name column, which is nvarchar:
SELECT CONVERT(VARCHAR,ID) + ' ' + Name AS ID_Name
FROM my_table
also you can use CONCAT like this
SELECT CONCAT(ID,' ',Name) FROM my_table
select concat(column1,' ',column2) from table
In your case, it will be:
select concat(ID,' ',Name) from tab1
SELECT CONVERT(VARCHAR(30),ID) + ' ' + Name AS ID_Name
FROM my_table
You can also use CONCAT(column1, column2) AS column_name (without quotes around the column names, otherwise you would concatenate the literal strings rather than the column values).
You need to use the CONVERT() or CAST() function when you want to concatenate an integer column with a VARCHAR or NVARCHAR column.
From SQL Server 2012 onward, you can use the CONCAT() string function, which takes care of the integer-to-string conversion for you.
Please check below select script.
SELECT
*
INTO #tblA
FROM
(
SELECT 1 ID,'Anna' Name UNION ALL
SELECT 2 ID,'Vladimir' Name
) A
SELECT
CONVERT(NVARCHAR(11),t.ID) + ' ' + t.Name AS ID_Name
--CONCAT(t.ID,' ',t.Name) AS ID_Name /*SQL Server 2012 Onwards*/
FROM #tblA t

SQL Server converting string to numeric and sorting

I am attempting to sort a column of strings, each of which contains one alphabetic character followed by numbers. If I run this:
select
column_name
from
table_name
where
item_type = 'ABC'
and item_sub_type = 'DEF'
order by
cast(replace([column_name], 'A', '') as Numeric(10, 0)) desc
I get the correct sorted output:
A218
A217
A216
but if I try to grab the top row
select top 1
column_name
from
table_name
where
item_type = 'ABC'
and item_sub_type = 'DEF'
order by
cast(replace([column_name], 'A', '') as numeric(10, 0)) desc
it fails with the following error:
Error converting data type varchar to numeric
Any ideas on how I can select the top row?
Thanks!
I think the problem is with your data - not all of the values match your pattern.
You can check which values are not valid using:
select column_name from table_name where ISNUMERIC(replace([column_name],'A','')) = 0
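On SQL Server 2012 or later, TRY_CONVERT returns NULL instead of raising an error, so a similar check (a sketch reusing the columns from the question) would be:
select column_name
from table_name
where item_type = 'ABC'
  and item_sub_type = 'DEF'
  and try_convert(numeric(10, 0), replace([column_name], 'A', '')) is null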
I think this is the optimizer not executing your query the way you would hope. What I mean is that SQL is a declarative language - your query merely states what you are trying to accomplish, not how you are trying to accomplish it (in most cases). Thus, the optimizer determines the best way and, in cases like yours, may do things in an order that causes errors. Try to force your logic with a CTE:
with cte as(
select column_name, [item_value]
from table_name
where item_type='ABC' and item_sub_type='DEF')
select top 1 column_name
from cte
ORDER BY CAST(replace([item_value],'A','') AS Numeric(10,0)) desc
SQL Server 2012/2016
select top 1 column_name
from table_name
where item_type='ABC' and item_sub_type='DEF'
order by TRY_CONVERT(Numeric(10,0),replace([column_name],'A','')) desc
Order by dropping the prefix and casting to int; provided that the prefix is the only non-numeric part, the rest should be pure numbers.
select
column_name
from
table_name
where
item_type = 'ABC'
and item_sub_type = 'DEF'
order by
cast(replace([column_name], 'A', '') as Int) desc
The script above should just work; I reckon there is something wrong with your data, such as values that do not match the pattern. See the example below:
declare @mytable table
(
code varchar(10)
)
insert into @mytable
values
('A323'),
('A223'),
('A123'),
('A553'),
('A923'),
('A23'),
('A235')
select
code
from
@mytable
order by
cast(replace(code, 'A', '') as Int) desc
code
----------
A923
A553
A323
A235
A223
A123
A23

Aggregating dates in one column

For reporting purposes I need to present data from this table:
table A (column1, column2, date1, date2, date3,...,dateN)
My query needs to present all the dates in one column, separated by # :
(YYYY-MM-DD# YYYY-MM-DD#..)
But the problem is that the number of date columns is not fixed, because different products can have a different number of dates.
Any ideas?
This statement works for PostgreSQL, but you can replace the || with + for SQL Server, I think; just try it and you should be able to figure out the rest.
|| is the concatenation operator in Postgres, or you could also use the concatenation function that is available.
SELECT column1 || CASE WHEN column2 IS NOT NULL THEN '#' || column2 ELSE '' END || ... FROM tablehere
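For instance, a minimal sketch assuming three hypothetical date columns date1..date3 and the YYYY-MM-DD format from the question (PostgreSQL syntax; NULL dates simply drop out):
SELECT COALESCE(to_char(date1, 'YYYY-MM-DD') || '# ', '')
    || COALESCE(to_char(date2, 'YYYY-MM-DD') || '# ', '')
    || COALESCE(to_char(date3, 'YYYY-MM-DD') || '# ', '') AS all_dates
FROM tablehere;
On SQL Server 2017+ and recent PostgreSQL, CONCAT_WS('# ', date1, date2, date3) skips NULLs and gives much the same result more compactly.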
Hope it helps!
After consideration, I want to change my answer. Perhaps something a little lighter:
Declare @YourTable table (Column1 int,Column2 int,date1 date,date2 date,date3 date)
Insert Into @YourTable values
(1,25,'2016-01-15','2016-03-22',null),
(2,50,'2016-04-15','2016-07-29','2016-09-30')
Select Column1
,Column2
,Dates=Replace(Replace((
Select x=format(date1,'yyyy-MM-dd# ')
,x=format(date2,'yyyy-MM-dd# ')
,x=format(date3,'yyyy-MM-dd# ')
--x=format(dateNN,'yyyy-MM-dd# ')
For XML Path('')
) ,'<x>',''),'</x>','')
From @YourTable
Returns
Column1 Column2 Dates
1 25 2016-01-15# 2016-03-22#
2 50 2016-04-15# 2016-07-29# 2016-09-30#

Not Null - Spaces on field

I have the below query that shows me the records in Oracle that are not null, but some of the records contain only spaces, such as '', ' ', etc.
How can I modify the query so it will ignore empty spaces?
select * from table where field1 is not null
Many Thanks.
If your problem is empty values or extra spaces, you can do something like this:
select * from table where replace(field1,' ','') is not null
You should use the trim or replace function,
e.g.
1.
select * from table
where field1 is not null
and trim(field1) is not null -- in Oracle an empty string is NULL, so comparing with '' would never match
;
2.
select * from table
where field1 is not null
and replace(field1, ' ', '') is not null
;
P.S. NULL is not empty data! It is unknown.
select * from table where field1 is not null and trim(field1) is not null

Count the Null columns in a row in SQL

I was wondering whether it is possible to count the NULL columns of a row in SQL. I have a table Customer that has nullable columns, and I simply want a query that returns an int: the number of NULL columns for a certain row (a certain customer).
This method assigns a 1 or 0 for null columns, and adds them all together. Hopefully you don't have too many nullable columns to add up here...
SELECT
((CASE WHEN col1 IS NULL THEN 1 ELSE 0 END)
+ (CASE WHEN col2 IS NULL THEN 1 ELSE 0 END)
+ (CASE WHEN col3 IS NULL THEN 1 ELSE 0 END)
...
...
+ (CASE WHEN col10 IS NULL THEN 1 ELSE 0 END)) AS sum_of_nulls
FROM table
WHERE Customer=some_cust_id
Note, you can also do this perhaps a little more syntactically cleanly with IF() if your RDBMS supports it.
SELECT
(IF(col1 IS NULL, 1, 0)
+ IF(col2 IS NULL, 1, 0)
+ IF(col3 IS NULL, 1, 0)
...
...
+ IF(col10 IS NULL, 1, 0)) AS sum_of_nulls
FROM table
WHERE Customer=some_cust_id
I tested this pattern against a table and it appears to work properly.
My answer builds on Michael Berkowski's answer, but to avoid having to type out hundreds of column names, what I did was this:
Step 1: Get a list of all of the columns in your table
SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'myTable';
Step 2: Paste the list in Notepad++ (any editor that supports regular expression replacement will work). Then use this replacement pattern
Search:
^(.*)$
Replace:
\(CASE WHEN \1 IS NULL THEN 1 ELSE 0 END\) +
Step 3: Prepend SELECT identityColumnName, and change the very last + to AS NullCount FROM myTable and optionally add an ORDER BY...
SELECT
identityColumnName,
(CASE WHEN column001 IS NULL THEN 1 ELSE 0 END) +
-- ...
(CASE WHEN column200 IS NULL THEN 1 ELSE 0 END) AS NullCount
FROM
myTable
ORDER BY
NullCount DESC
For ORACLE-DBMS only.
You can use the NVL2 function:
NVL2( string1, value_if_not_null, value_if_null )
Here is a select with a similar approach to the one Michael Berkowski suggested:
SELECT (NVL2(col1, 0, 1)
+ NVL2(col2, 0, 1)
+ NVL2(col3, 0, 1)
...
...
+ NVL2(col10, 0, 1)
) AS sum_of_nulls
FROM table
WHERE Customer=some_cust_id
A more generic approach would be to write a PL/SQL-block and use dynamic SQL. You have to build a SELECT string with the NVL2 method from above for every column in the all_tab_columns of a specific table.
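A rough, untested sketch of that idea, assuming a CUSTOMER table with a CUSTOMER_ID key column (both names are placeholders):
DECLARE
  v_sql   VARCHAR2(32767);
  v_nulls NUMBER;
BEGIN
  -- build "NVL2(col1,0,1) + NVL2(col2,0,1) + ..." from the data dictionary
  SELECT 'SELECT ' || LISTAGG('NVL2(' || column_name || ',0,1)', ' + ')
                        WITHIN GROUP (ORDER BY column_id)
                   || ' FROM customer WHERE customer_id = :id'
    INTO v_sql
    FROM all_tab_columns
   WHERE table_name = 'CUSTOMER';

  EXECUTE IMMEDIATE v_sql INTO v_nulls USING 42;  -- 42 is a sample customer id
  DBMS_OUTPUT.PUT_LINE('null columns: ' || v_nulls);
END;
/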
Unfortunately, in a standard SQL statement you will have to enter each column you want to test; to test them all programmatically you could use T-SQL. A word of warning though: ensure you are working with genuine NULLs; you can have blank stored values that the database will not recognise as a true NULL (I know this sounds strange).
You can avoid this by capturing both the blank values and the NULLs in a statement like this:
CASE WHEN COALESCE(col1, '') = '' THEN 1 ELSE 0 END
Or in some databases, such as Oracle, where an empty string is already treated as NULL, you would use:
CASE WHEN TRIM(col1) IS NULL THEN 1 ELSE 0 END
You don't state RDBMS. For SQL Server 2008...
SELECT CustomerId,
(SELECT COUNT(*) - COUNT(C)
FROM (VALUES(CAST(Col1 AS SQL_VARIANT)),
(Col2),
/*....*/
(Col9),
(Col10)) T(C)) AS NumberOfNulls
FROM Customer
Depending on what you want to do, and if you ignore mavens, and if you use SQL Server 2012, you could do it another way.
The total number of candidate columns ("slots") must be known.
1. Select all the known "slots" column by column (they're known).
2. Unpivot that result to get a table with one row per original column. This works because the null columns don't unpivot, and you know all the column names.
3. Count(*) the result to get the number of non-nulls; subtract that from the total number of slots to get your answer.
Like this, for 4 "seats" in a car:
select 'empty seats' = 4 - count(*)
from
(
select carId, seat1,seat2,seat3,seat4 from cars where carId = @carId
) carSpec
unpivot (FieldValue FOR seat in ([seat1],[seat2],[seat3],[seat4])) AS results
This is useful if you may need to do more later than just count the number of non-null columns, as it gives you a way to manipulate the columns as a set too.
This will give you the number of columns which are not NULL (COUNT(col) only counts non-NULL values), which you can then subtract from the total number of columns checked:
SELECT COUNT(col1) + COUNT(col2) + COUNT(col3) AS non_null_count
FROM TABLENAME
WHERE ID=1
The below script gives you the NULL value count within a row i.e. how many columns do not have values.
SELECT
*,
(SELECT COUNT(*)
FROM (VALUES (Tab.Col1)
,(Tab.Col2)
,(Tab.Col3)
,(Tab.Col4)) InnerTab(Col)
WHERE Col IS NULL) NullColumnCount
FROM (VALUES(1,2,3,4)
,(NULL,2,NULL,4)
,(1,NULL,NULL,NULL)) Tab(Col1,Col2,Col3,Col4)
Just to demonstrate, I am using an inline table in my example.
Try to cast or convert all of the column values to a common type; it will help you compare columns of different types.
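For instance, with the VALUES approach shown above, you can force a common type by casting the first value to SQL_VARIANT (a sketch with hypothetical int, date and varchar columns):
SELECT t.ID,
       (SELECT COUNT(*)
        FROM (VALUES (CAST(t.IntCol AS SQL_VARIANT)),
                     (t.DateCol),
                     (t.VarcharCol)) v(Col)
        WHERE Col IS NULL) AS null_count
FROM dbo.MyTable t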
I haven't tested it yet, but I'd try to do it using a PL/SQL function:
CREATE OR REPLACE TYPE ANYARRAY AS TABLE OF ANYDATA
;
CREATE OR REPLACE Function COUNT_NULL
( ARR IN ANYARRAY )
RETURN number
IS
cnumber number := 0;
BEGIN
for i in 1 .. ARR.count loop
if ARR(i).column_value is null then
cnumber := cnumber + 1;
end if;
end loop;
RETURN cnumber;
EXCEPTION
WHEN OTHERS THEN
raise_application_error
(-20001,'An error was encountered - '
||SQLCODE||' -ERROR- '||SQLERRM);
END
;
Then use it in a select query like this
CREATE TABLE TEST (A NUMBER, B NUMBER, C NUMBER);
INSERT INTO TEST VALUES (NULL,NULL,NULL);
INSERT INTO TEST VALUES (1 ,NULL,NULL);
INSERT INTO TEST VALUES (1 ,2 ,NULL);
INSERT INTO TEST VALUES (1 ,2 ,3 );
SELECT ROWNUM,COUNT_NULL(A,B,C) AS NULL_COUNT FROM TEST;
Expected output
ROWNUM | NULL_COUNT
-------+-----------
1 | 3
2 | 2
3 | 1
4 | 0
This is how I tried it:
CREATE TABLE #temptablelocal (id int NOT NULL, column1 varchar(10) NULL, column2 varchar(10) NULL, column3 varchar(10) NULL, column4 varchar(10) NULL, column5 varchar(10) NULL, column6 varchar(10) NULL);
INSERT INTO #temptablelocal
VALUES (1,
NULL,
'a',
NULL,
'b',
NULL,
'c')
SELECT *
FROM #temptablelocal
WHERE id =1
SELECT count(1) countnull
FROM
(SELECT a.ID,
b.column_title,
column_val = CASE b.column_title
WHEN 'column1' THEN a.column1
WHEN 'column2' THEN a.column2
WHEN 'column3' THEN a.column3
WHEN 'column4' THEN a.column4
WHEN 'column5' THEN a.column5
WHEN 'column6' THEN a.column6
END
FROM
( SELECT id,
column1,
column2,
column3,
column4,
column5,
column6
FROM #temptablelocal
WHERE id =1 ) a
CROSS JOIN
( SELECT 'column1'
UNION ALL SELECT 'column2'
UNION ALL SELECT 'column3'
UNION ALL SELECT 'column4'
UNION ALL SELECT 'column5'
UNION ALL SELECT 'column6' ) b (column_title) ) AS pop WHERE column_val IS NULL
DROP TABLE #temptablelocal
Similarly, but dynamically:
drop table if exists myschema.table_with_nulls;
create table myschema.table_with_nulls as
select
n1::integer,
n2::integer,
n3::integer,
n4::integer,
c1::character varying,
c2::character varying,
c3::character varying,
c4::character varying
from
(
values
(1,2,3,4,'a','b','c','d'),
(1,2,3,null,'a','b','c',null),
(1,2,null,null,'a','b',null,null),
(1,null,null,null,'a',null,null,null)
) as test_records(n1, n2, n3, n4, c1, c2, c3, c4);
drop function if exists myschema.count_nulls(varchar,varchar);
create function myschema.count_nulls(schemaname varchar, tablename varchar) returns void as
$BODY$
declare
calc varchar;
sqlstring varchar;
begin
select
array_to_string(array_agg('(' || trim(column_name) || ' is null)::integer'),' + ')
into
calc
from
information_schema.columns
where
table_schema in ('myschema')
and table_name in ('table_with_nulls');
sqlstring = 'create temp view count_nulls as select *, ' || calc || '::integer as count_nulls from myschema.table_with_nulls';
execute sqlstring;
return;
end;
$BODY$ LANGUAGE plpgsql STRICT;
select * from myschema.count_nulls('myschema'::varchar,'table_with_nulls'::varchar);
select
*
from
count_nulls;
Though I see that I didn't finish parameterising the function.
My answer builds on Drew Chapin's answer, but with changes to get the result using a single script:
use <add_database_here>;
Declare @val Varchar(MAX);
Select @val = COALESCE(@val + str, str) From
(SELECT
'(CASE WHEN '+COLUMN_NAME+' IS NULL THEN 1 ELSE 0 END) +' str
FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = '<add table name here>'
) t1 -- getting column names and adding the CASE WHEN to replace NULLs with zeros or ones
Select @val = SUBSTRING(@val,1,LEN(@val) - 1) -- removing trailing plus sign
Select @val = 'SELECT <add_identity_column_here>, ' + @val + ' AS NullCount FROM <add table name here>' -- adding the SELECT for the identity column, the alias for the null count column, and the FROM
EXEC (@val) -- executing the resulting SQL
With ORACLE:
select <number_of_columns> - json_value(json_array(<comma-separated list of columns>), '$.size()') from your_table
json_array will build an array containing only the non-null columns, and the json_value expression will give you the size of that array.
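For instance, for a hypothetical customer table with three nullable columns c1, c2, c3 (on recent Oracle releases that support the size() item method):
select 3 - json_value(json_array(c1, c2, c3), '$.size()') as null_col_count
from customer;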
There isn't a straightforward way of doing so like there would be with counting rows. Basically, you have to enumerate all the columns that might be null in one expression.
So for a table with possibly null columns a, b, c, you could do this:
SELECT key_column,
       (CASE WHEN a IS NULL THEN 1 ELSE 0 END)
     + (CASE WHEN b IS NULL THEN 1 ELSE 0 END)
     + (CASE WHEN c IS NULL THEN 1 ELSE 0 END) AS null_col_count
FROM my_table