Insert records into a table with a condition in SQL Server 2016

I have a table that has columns that I want to insert into another table with certain conditions. The closest I could get was using this post.
This is how I want to do it:
if (cnt > 50, avg_account, else null) as avg_account
if (cnt > 50, avg_change, else null) as avg_change
As shown above, I want to insert some columns from Table1 into Table2, and in particular I want the avg_account and avg_change values only when the cnt column in Table1 is > 50.
I tried the following, but I am not getting my desired output:
SELECT
id, branch, account, total, change,
'avg_account' = CASE
WHEN cnt > 50 THEN avg_account -- How can I refer to avg_account values?
ELSE 'Null'
END,
'avg_change' = CASE
WHEN cnt > 50 THEN avg_change -- How can I refer to avg_change values?
ELSE 'Null'
END
INTO
Table2
FROM
Table1
PRINT ' ' + CONVERT(VARCHAR, @@ROWCOUNT) + ' rows updated'
Is my approach correct? Can I get the values as indicated in the code snippet, or should I use a where clause or a subquery?
Any help would be appreciated.

I think this is what you want:
select id, branch, account, total, change,
(case when cnt > 50 then avg_account end) as avg_account,
(case when cnt > 50 then avg_change end) as avg_change
into Table2
from Table1;
Notes:
With no else clause, a case expression returns NULL.
NULL and 'NULL' are not the same thing. The latter is a string. The former is probably what you want.
Only use single quotes for string and date constants. Don't use them for column aliases -- because the distinction between a column name and a string is not always obvious.
Don't escape names that do not need to be escaped.
If you want to filter out rows with a cnt <= 50 entirely, then use where rather than case (a sketch follows below).
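A minimal sketch of that filtering variant, reusing the table and column names from the question (here rows with cnt <= 50 are simply not copied, instead of getting NULL averages):
select id, branch, account, total, change, avg_account, avg_change
into Table2
from Table1
where cnt > 50;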

Another approach is to put your condition into a where clause and use two statements:
select id, branch, account, total, change, avg_account, avg_change into table2 from table1 where cnt > 50
insert into table2
select id, branch, account, total, change, null, null from table1 where cnt <= 50
Your original query has several bugs; here is a corrected version:
select id, branch, account, total, change,
CASE WHEN cnt > 50 THEN avg_account
ELSE NULL
END as avg_account,
CASE WHEN cnt > 50 THEN avg_change
ELSE NULL
END as avg_change
into Table2
from Table1

I think you just need a WHERE clause, like:
INSERT INTO TABLE2
(id, branch, account, total, change, avg_account, avg_change)
SELECT
id, branch, account, total, change,
avg_account, avg_change
FROM
TABLE1
WHERE
cnt > 50
I may also be misunderstanding your situation. Can you elaborate a little more?

Almost correct. You actually have the logic correct. Your syntax is your problem:
When you enter 'Null' into your database, it will enter the STRING VALUE 'Null', and NOT the NULL Value. Take the quotes off your Nulls.
select id, branch, account, total, change,
'avg_account' = CASE
WHEN cnt > 50 THEN avg_account
ELSE Null
END,
'avg_change' = CASE
WHEN cnt > 50 THEN avg_change ELSE Null
END
into Table2
from Table1
print ' ' + convert(varchar, @@rowcount) + ' rows updated'
Assuming avg_account and avg_change are columns in table 1.

Related

How to check unique values in SQL

I have a table named Bank that contains a Bank_Values column. I need a calculated Bank_Value_Unique column to show whether each Bank_Value exists somewhere else in the table (i.e. whether its count is greater than 1).
I prepared this query, but it does not work. Could anyone help me with this and/or modify this query?
SELECT
CASE
WHEN NULLIF(LTRIM(RTRIM(Bank_Value)), '') =
(SELECT Bank_Value
FROM [Bank]
GROUP BY Bank_Value
HAVING COUNT(*) = 1)
THEN '0' ELSE '1'
END AS Bank_Key_Unique
FROM [Bank]
A windowed count should work:
SELECT
*,
CASE
COUNT(*) OVER (PARTITION BY Bank_Value)
WHEN 1 THEN 1 ELSE 0
END AS Bank_Value_Unique
FROM
Bank
;
That works too, but I also found this solution:
select CASE WHEN NULLIF(LTRIM(RTRIM(Bank_Value)),'') =
(select Bank_Value
from Bank
group by Bank_Value
having (count(distinct Bank_Value) > 2 )) THEN '1' ELSE '0' END AS
Bank_Value_Uniquness
from Bank
My original query was missing "distinct" in the having clause.
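For reference, if a per-row uniqueness flag is the goal, here is another untested sketch using a correlated count instead of comparing against a single-value subquery (table and column names as in the question; the '0'/'1' values follow the original query's convention, with '1' meaning the value appears more than once):
select b.Bank_Value,
       case when (select count(*)
                  from [Bank] b2
                  where b2.Bank_Value = b.Bank_Value) > 1
            then '1' else '0'
       end as Bank_Value_Unique
from [Bank] b;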

Sum distinct records in a table with duplicates in Teradata

I have a table that has some duplicates. I can count the distinct records to get the Total Volume. But when I try to SUM the rows where the CompTIA code is B92 and add DISTINCT, it still counts the dupes.
Here is the query:
select
a.repair_week_period,
count(distinct a.notif_id) as Total_Volume,
sum(distinct case when a.header_comptia_cd = 'B92' then 1 else 0 end) as B92_Sum
FROM artemis_biz_app.aca_service_event a
where a.Sales_Org_Cd = '8210'
and a.notif_creation_dt >= current_date - 180
group by 1
order by 1
;
Is there a way to only SUM the distinct records for B92?
I also tried inner joining the table to itself by selecting the distinct notification id and joining on that notification id, but I am still getting the wrong counts.
Thanks!
Your B92_Sum currently returns either 0 or 1 (it sums the distinct values of the CASE, which are at most 0 and 1); this is definitely not the sum you want.
To sum distinct values you need something like
sum(distinct case when a.header_comptia_cd = 'B92' then column_to_sum else 0 end)
If this column_to_sum is actually the notif_id you get a conditional count but not a sum.
Otherwise the distinct might remove too many values, and then you probably need a Derived Table where you remove duplicates before aggregation:
select
repair_week_period,
--no more distinct needed
count(a.notif_id) as Total_Volume,
sum(case when a.header_comptia_cd = 'B92' then column_to_sum else 0 end) as B92_Sum
FROM
(
select repair_week_period,
notif_id,
header_comptia_cd,
column_to_sum
from artemis_biz_app.aca_service_event
where Sales_Org_Cd = '8210'
and notif_creation_dt >= current_date - 180
-- only one row per notif_id
qualify row_number() over (partition by notif_id order by ???) = 1
) a
group by 1
order by 1
;
@dnoeth It seems the solution to my problem was not to SUM the data, but to COUNT DISTINCT it.
This is how I resolved my problem:
count(distinct case when a.header_comptia_cd = 'B92' then a.notif_id else NULL end) as B92_Sum
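Putting that expression back into the original query from the question, the full statement would look something like this (untested sketch; everything else is unchanged):
select
a.repair_week_period,
count(distinct a.notif_id) as Total_Volume,
count(distinct case when a.header_comptia_cd = 'B92' then a.notif_id else NULL end) as B92_Sum
FROM artemis_biz_app.aca_service_event a
where a.Sales_Org_Cd = '8210'
and a.notif_creation_dt >= current_date - 180
group by 1
order by 1
;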

Check and Change for empty or null value of column in SQL?

How can I change the column text to 'Not Exists' when it is empty or null?
My query:
Select TOP 1 ISNULL(NULLIF(DR.Name,''),'Not Exists') as Name,
DR.Name as Name ,Coalesce(NullIf(rtrim(DR.Name),''),'Not Exist') as Name,
Name = case when DR.Name is null then 'Not Exists'
when DR.Name='' then 'Not Exists' else DR.Name end
from Transfer TR
join Driver DR on DR.OID=TR.DriverID
WHERE TR.TruckID=51 AND TR.Statues<>7 and TR.DateScheduled<GETDATE()
AND TR.DateScheduled>=DATEADD(DAY,-7,GETDATE()) ORDER BY TR.OID DESC
Result: (screenshot omitted)
If you just need a single column, then you can use a sub-select; this way, when no rows are returned by the query, you will still get 'Not Exists':
SELECT Name = ISNULL(( SELECT TOP 1 NULLIF(DR.Name,'')
FROM Transfer AS TR
INNER JOIN Driver AS DR
ON DR.OID = TR.DriverID
WHERE TR.TruckID = 51
AND TR.Statues <> 7
AND TR.DateScheduled < GETDATE()
AND TR.DateScheduled >= DATEADD(DAY, -7, GETDATE())
ORDER BY TR.OID DESC), 'Not Exists');
If you need multiple columns then you could union your Not Exists record to the bottom of the query, place all this inside a subquery then select the top 1 again, ensuring that your actual value takes precedence (by adding the column SortOrder):
SELECT TOP 1 Name, SomeOtherColumn
FROM ( SELECT Name, SomeOtherColumn, SortOrder
       FROM ( SELECT TOP 1
                     Name = NULLIF(DR.Name,''),
                     SomeOtherColumn,
                     SortOrder = 0
              FROM Transfer AS TR
              INNER JOIN Driver AS DR
                  ON DR.OID = TR.DriverID
              WHERE TR.TruckID = 51
              AND TR.Statues <> 7
              AND TR.DateScheduled < GETDATE()
              AND TR.DateScheduled >= DATEADD(DAY, -7, GETDATE())
              ORDER BY TR.OID DESC ) AS latest -- the TOP 1/ORDER BY branch is wrapped in a derived table because ORDER BY is not allowed directly inside a UNION branch
       UNION ALL
       SELECT 'Not Exists', NULL, 1
     ) AS t
ORDER BY SortOrder;
I'm not entirely sure I understand your question, but if you are trying to catch nulls and empty strings "in one go", try this:
select TOP 1
case when len(ltrim(rtrim(coalesce(DR.Name, '')))) = 0 then
'Not Exists'
else
DR.Name
end as Name
....
The coalesce catches the NULLs and sets a replacement value, the trims get rid of any padding, and len checks whether what is left is an empty string --> so this covers nulls, padded and non-padded trivial strings.
Assuming the value has regular spaces, the following would keep your approach:
Select TOP 1 ISNULL(NULLIF(ltrim(rtrim((DR.Name))), ''), 'Not Exists') as Name,
I would probably go with the more explicit:
select top 1 (case when ltrim(rtrim(DR.Name)) = '' or DR.Name is null then 'Not Exists'
else DR.Name end) as Name
Unless you also wanted the spaces removed from Name in the output.
If you have other characters, then you can use ASCII() to find them. Something like:
select ASCII(LEFT(DR.Name, 1))
. . .
where LEFT(DR.Name, 1) NOT LIKE '[a-zA-Z0-9 ]' -- you can expand this list of allowed characters
It seems to me you are not actually looking for a way to replace an empty string with 'Not Exists', but an empty result set.
In other words: It looks like you are looking for a way to show 'Not Exists' in case your query returns no rows. If it is this what you are looking for, then first "add" a 'Not Exists' record to your result set and then show the best row, i.e. the desired row in case such a row exists, else your 'Not Exists' row.
select top 1 name
from
(
select name, tr.oid
from transfer tr
join driver dr on dr.oid=tr.driverid
where tr.truckid=51 and tr.statues<>7 and tr.datescheduled<getdate()
and tr.datescheduled>=dateadd(day,-7,getdate())
union all
select 'Not Exists', -1
) t
order by oid desc;
I chose -1 for the dummy OID. It must be a value smaller than any real OID. So if you have negative values, make that value even smaller.

SQL Nested Select statements with COUNT()

I'll try to describe as best I can, but it's hard for me to wrap my whole head around this problem let alone describe it....
I am trying to select multiple results in one query to display the current status of a database. The first column is one type of record, and the second column is a sub-category of the first. The subcategory is then linked to more records underneath it, distinguished by status, forming several more columns.

I need to display every main-category/subcategory combination, and then the count of each sub-status beneath that subcategory in the subsequent columns. I've got it so that I can display the unique combinations, but I'm not sure how to nest the select statements so that I can select the count from a completely different table than the main query. My problem lies in that to display the main category and subcategory I can pull from one table, but I need to count from a different table. Any ideas on the matter would be greatly appreciated.
Here's what I have. The count statements would be replaced with the count of each status:
SELECT wave_num "WAVE NUMBER",
int_tasktype "INT / TaskType",
COUNT (1) total,
COUNT (1) "LOCKED/DISABLED",
COUNT (1) released,
COUNT (1) "PARTIALLY ASSEMBLED",
COUNT (1) assembled
FROM (SELECT DISTINCT
(t.invn_need_type || ' / ' || s.code_desc) int_tasktype,
t.task_genrtn_ref_nbr wave_num
FROM sys_code s, task_hdr t
WHERE t.task_genrtn_ref_nbr IN
(SELECT ship_wave_nbr
FROM ship_wave_parm
WHERE TRUNC (create_date_time) LIKE SYSDATE - 7)
AND s.code_type = '590'
AND s.rec_type = 'S'
AND s.code_id = t.task_type),
ship_wave_parm swp
GROUP BY wave_num, int_tasktype
ORDER BY wave_num
Image here: http://i.imgur.com/JX334.png
Guessing a bit, both regarding your problem and Oracle (which I've - unfortunately - never used), but hopefully it will give you some ideas. Sorry for completely messing up the way you write SQL; SELECT ... FROM (SELECT ... WHERE ... IN (SELECT ...)) simply confuses me, so I have to restructure:
with tmp(int_tasktype, wave_num) as
(select distinct (t.invn_need_type || ' / ' || s.code_desc), t.task_genrtn_ref_nbr
from sys_code s
join task_hdr t
on s.code_id = t.task_type
where s.code_type = '590'
and s.rec_type = 'S'
and exists(select 1 from ship_wave_parm p
where t.task_genrtn_ref_nbr = p.ship_wave_nbr
and trunc(p.create_date_time) = trunc(sysdate - 7)))
select t.wave_num "WAVE NUMBER", t.int_tasktype "INT / TaskType",
count(*) TOTAL,
sum(case when sst.sub_status = 'LOCKED' then 1 end) "LOCKED/DISABLED",
sum(case when sst.sub_status = 'RELEASED' then 1 end) RELEASED,
sum(case when sst.sub_status = 'PARTIAL' then 1 end) "PARTIALLY ASSEMBLED",
sum(case when sst.sub_status = 'ASSEMBLED' then 1 end) ASSEMBLED
from tmp t
join sub_status_table sst
on t.wave_num = sst.wave_num
group by t.wave_num, t.int_tasktype
order by t.wave_num
As you notice, I don't know anything about the table with the substatuses.
You can use inner join, grouping and count to get your result:
Suppose the tables are as follows:
cat (1)--->(n) subcat (1)----->(n) subcat_detail.
So the query would be:
select cat.title as cat_title, subcat.title as subcat_title, count(*) as cnt
from cat
inner join subcat on subcat.cat_id = cat.id
inner join subcat_detail on subcat_detail.subcat_id = subcat.id
group by cat.title, subcat.title
Generally when you need different counts in a single query, you combine CASE with an aggregate:
select count(*) as total
, sum(case when field1 = 'test' then 1 else 0 end) as testcount
, sum(case when field2 = 'yes' then 1 else 0 end) as field2count
FROM table1

Counting null and non-null values in a single query

I have a table
create table us
(
a number
);
Now I have data like:
a
1
2
3
4
null
null
null
8
9
Now I need a single query to count null and not null values in column a
This works for Oracle and SQL Server (you might be able to get it to work on another RDBMS):
select sum(case when a is null then 1 else 0 end) count_nulls
, count(a) count_not_nulls
from us;
Or:
select count(*) - count(a), count(a) from us;
If I understood correctly you want to count all NULL and all NOT NULL in a column...
If that is correct:
SELECT count(*) FROM us WHERE a IS NULL
UNION ALL
SELECT count(*) FROM us WHERE a IS NOT NULL
Edited to have the full query, after reading the comments :]
SELECT COUNT(*), 'null_tally' AS narrative
FROM us
WHERE a IS NULL
UNION
SELECT COUNT(*), 'not_null_tally' AS narrative
FROM us
WHERE a IS NOT NULL;
Here is a quick and dirty version for Oracle:
select sum(case when a is null then 1 else 0 end) "Null values",
       sum(case when a is not null then 1 else 0 end) "Non-null values"
from us
for non nulls
select count(a)
from us
for nulls, subtract the non-null count from the total:
select count(*) - count(a)
from us
Hence
SELECT COUNT(A) NOT_NULLS
FROM US
UNION
SELECT COUNT(*) - COUNT(A) NULLS
FROM US
ought to do the job
Better in that the column titles come out correct.
SELECT COUNT(A) NOT_NULL, COUNT(*) - COUNT(A) NULLS
FROM US
In some testing on my system, it costs a full table scan.
As I understood your question, just run this script to get the total Null and total NotNull rows:
select count(*) - count(a) as 'Null', count(a) as 'Not Null' from us;
Usually I use this trick:
select sum(case when a is null then 0 else 1 end) as count_notnull,
       sum(case when a is null then 1 else 0 end) as count_null
from tab
Just to provide yet another alternative, Postgres 9.4+ allows applying a FILTER to aggregates:
SELECT
COUNT(*) FILTER (WHERE a IS NULL) count_nulls,
COUNT(*) FILTER (WHERE a IS NOT NULL) count_not_nulls
FROM us;
SQLFiddle: http://sqlfiddle.com/#!17/80a24/5
This is a little tricky. Assume the table has just one column; then COUNT(1) and COUNT(column) will give different values when that column contains NULLs.
set nocount on
declare @table1 table (empid int)
insert @table1 values (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(NULL),(11),(12),(NULL),(13),(14);
select * from @table1
select COUNT(1) as "COUNT(1)" from @table1
select COUNT(empid) "Count(empid)" from @table1
Query Results
As you can see in the image, the first result shows the table has 16 rows, out of which two are NULL. When we use COUNT(1) (or COUNT(*)) the query engine counts the number of rows, so we get 16. But in the case of COUNT(empid) it counts only the non-NULL values in the column empid, so we get 14.
So whenever we use COUNT(column), make sure we take care of NULL values, as shown below.
select COUNT(isnull(empid,1)) from @table1
will count both NULL and Non-NULL values.
Note: The same thing applies even when the table is made up of more than one column. COUNT(1) gives the total number of rows irrespective of NULL/non-NULL values; only when column values are counted using COUNT(column) do we need to take care of NULL values.
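A small sketch of that note with a hypothetical two-column table variable (the names here are illustrative, not taken from the answer above):
set nocount on
declare @t table (empid int, deptid int)
insert @t values (1, 10), (2, NULL), (NULL, 20);
select COUNT(1)      as all_rows,       -- 3: counts rows regardless of NULLs
       COUNT(empid)  as empid_values,   -- 2: the NULL empid is skipped
       COUNT(deptid) as deptid_values   -- 2: the NULL deptid is skipped
from @t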
I had a similar issue: to count all distinct values, counting null values as 1, too. A simple count doesn't work in this case, as it does not take null values into account.
Here's a snippet that works in SQL Server and does not involve selecting new values.
Basically, once the DISTINCT has been performed, also return the row number in a new column (n) using the row_number() function, then perform a count on that column:
SELECT COUNT(n)
FROM (
SELECT *, row_number() OVER (ORDER BY [MyColumn] ASC) n
FROM (
SELECT DISTINCT [MyColumn]
FROM [MyTable]
) items
) distinctItems
Try this..
SELECT CASE
WHEN a IS NULL THEN 'Null'
ELSE 'Not Null'
END a,
Count(1)
FROM us
GROUP BY CASE
WHEN a IS NULL THEN 'Null'
ELSE 'Not Null'
END
Here are two solutions:
Select count(columnname) as countofNotNulls, count(isnull(columnname,1))-count(columnname) AS Countofnulls from tablename
OR
Select count(columnname) as countofNotNulls, count(*)-count(columnname) AS Countofnulls from tablename
Try this (MySQL, where ISNULL(a) returns 1 or 0):
SELECT
SUM(ISNULL(a)) AS all_null,
SUM(!ISNULL(a)) AS all_not_null
FROM us;
Simple!
If you're using MS Sql Server...
SELECT COUNT(0) AS 'Null_ColumnA_Records',
(
SELECT COUNT(0)
FROM your_table
WHERE ColumnA IS NOT NULL
) AS 'NOT_Null_ColumnA_Records'
FROM your_table
WHERE ColumnA IS NULL;
I don't recommend doing this... but here you have it (in the same table as the result). Another option is the ISNULL built-in function.
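A sketch of that ISNULL idea, reusing the column and table names from the query above (untested; the replacement value just has to be type-compatible with ColumnA):
SELECT COUNT(ISNULL(ColumnA, 0)) - COUNT(ColumnA) AS 'Null_ColumnA_Records',  -- ISNULL makes every row countable, so the difference is the NULL count
       COUNT(ColumnA) AS 'NOT_Null_ColumnA_Records'                           -- COUNT(column) skips NULLs on its own
FROM your_table;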
If your database supports it (e.g. Snowflake or Presto/Trino; not SQL Server or Oracle), a simple way of writing this query is the COUNT_IF function.
SELECT
COUNT_IF(a IS NULL) AS nulls,
COUNT_IF(a IS NOT NULL) AS not_nulls
FROM
us
SELECT SUM(NULLs) AS 'NULLS', SUM(NOTNULLs) AS 'NOTNULLs' FROM
(select count(*) AS 'NULLs', 0 as 'NOTNULLs' FROM us WHERE a is null
UNION select 0 as 'NULLs', count(*) AS 'NOTNULLs' FROM us WHERE a is not null) AS x
It's fugly, but it will return a single record with 2 cols indicating the count of nulls vs non nulls.
This works in T-SQL. If you're just counting the number of something and you want to include the nulls, use COALESCE instead of case.
IF OBJECT_ID('tempdb..#us') IS NOT NULL
DROP TABLE #us
CREATE TABLE #us
(
a INT NULL
);
INSERT INTO #us VALUES (1),(2),(3),(4),(NULL),(NULL),(NULL),(8),(9)
SELECT * FROM #us
SELECT CASE WHEN a IS NULL THEN 'NULL' ELSE 'NON-NULL' END AS 'NULL?',
COUNT(CASE WHEN a IS NULL THEN 'NULL' ELSE 'NON-NULL' END) AS 'Count'
FROM #us
GROUP BY CASE WHEN a IS NULL THEN 'NULL' ELSE 'NON-NULL' END
SELECT COALESCE(CAST(a AS NVARCHAR),'NULL') AS a,
COUNT(COALESCE(CAST(a AS NVARCHAR),'NULL')) AS 'Count'
FROM #us
GROUP BY COALESCE(CAST(a AS NVARCHAR),'NULL')
Building off of Alberto, I added the rollup.
SELECT [Narrative] = CASE WHEN [Narrative] IS NULL THEN 'count_total' ELSE [Narrative] END,
       [Count] = SUM([Count])
FROM (SELECT COUNT(*) AS [Count], 'count_nulls' AS [Narrative]
      FROM [CrmDW].[CRM].[User]
      WHERE [EmployeeID] IS NULL
      UNION
      SELECT COUNT(*), 'count_not_nulls'
      FROM [CrmDW].[CRM].[User]
      WHERE [EmployeeID] IS NOT NULL) S
GROUP BY [Narrative] WITH CUBE;
SELECT
ALL_VALUES
,COUNT(ALL_VALUES)
FROM(
SELECT
NVL2(A,'NOT NULL','NULL') AS ALL_VALUES
,NVL(A,0)
FROM US
)
GROUP BY ALL_VALUES
select count(isnull(NullableColumn,-1))
If it's MySQL, you can try something like this:
select
(select count(*) from TABLENAME where a is null) as total_null,
(select count(*) from TABLENAME where a is not null) as total_not_null;
Just in case you wanted it in a single record:
select
(select count(*) from tbl where colName is null) Nulls,
(select count(*) from tbl where colName is not null) NonNulls
;-)
for counting not null values
select count(*) from us where a is not null;
for counting null values
select count(*) from us where a is null;
I created the table in Postgres 10. select count(*) from us returns the total number of rows, but note that select count(a is null) from us also returns the total number of rows rather than the number of NULLs, because the boolean expression a is null is itself never NULL, so every row is counted.
In my case I wanted the "null distribution" amongst multiple columns:
SELECT
(CASE WHEN a IS NULL THEN 'NULL' ELSE 'NOT-NULL' END) AS a_null,
(CASE WHEN b IS NULL THEN 'NULL' ELSE 'NOT-NULL' END) AS b_null,
(CASE WHEN c IS NULL THEN 'NULL' ELSE 'NOT-NULL' END) AS c_null,
...
count(*)
FROM us
GROUP BY 1, 2, 3,...
ORDER BY 1, 2, 3,...
As the '...' indicates, this is easily extended to more columns, as many as needed.
Number of elements where a is null:
select count(*) from us where a is null;
Number of elements where a is not null:
select count(a) from us where a is not null;