Grouping by different rows - SQL

I have a query that returns rows which look like this:
2 - Eggs
3 - Bacon
4 - Bacon Smoked
I would like to group by '%Bacon%' so that my count is 2.
How can I do this in SQL?
I should see results like this:
Eggs - 1
Bacon - 2

How about the following (Demo):
SELECT 'Eggs' AS Category, COUNT(*) AS MyCount
FROM MyTable
WHERE MyField LIKE '%Eggs%'
UNION ALL
SELECT 'Bacon' AS Category, COUNT(*) AS MyCount
FROM MyTable
WHERE MyField LIKE '%Bacon%'

Not tested, but I think it should work:
SELECT COUNT(*) AS QTY, RS.FOOD_TYPE
FROM
  (SELECT
     CASE PATINDEX('%[ /-]%', LTRIM(FOOD_TYPE))
       WHEN 0 THEN LTRIM(FOOD_TYPE)
       ELSE SUBSTRING(LTRIM(FOOD_TYPE), 1, PATINDEX('%[ /-]%', LTRIM(FOOD_TYPE)) - 1)
     END AS FOOD_TYPE
   FROM YOUR_TABLE) RS
GROUP BY RS.FOOD_TYPE
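The PATINDEX pattern '%[ /-]%' finds the first space, slash, or hyphen, and the SUBSTRING keeps only the text before it, so with the sample data 'Bacon Smoked' should collapse to 'Bacon' and group together with the plain 'Bacon' rows.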

Another solution:
SELECT
Eggs = SUM(CASE WHEN FoodColumn LIKE '%Eggs%' THEN 1 ELSE 0 END),
Bacon = SUM(CASE WHEN FoodColumn LIKE '%Bacon%' THEN 1 ELSE 0 END)
FROM Test
You can see a demo here.
If you need to split the result into two separate rows:
SELECT *
FROM
(
SELECT
Eggs = SUM(CASE WHEN FoodColumn LIKE '%Eggs%' THEN 1 ELSE 0 END),
Bacon = SUM(CASE WHEN FoodColumn LIKE '%Bacon%' THEN 1 ELSE 0 END)
FROM Test
) AS Test
UNPIVOT
(
Quantity FOR Foods IN (Eggs, Bacon)
) AS Result
You can see a demo here.

This is a very specific case. Can you provide more data? Does this help in any way?
with list (item) as (
    select item
    from (values
        ('Eggs'),
        ('Bacon'),
        ('Bacon Smoked')
    ) list (item)
)
select
    LEFT(item,
         case when CHARINDEX(' ', item, 1) = 0
              then LEN(item)
              else CHARINDEX(' ', item, 1)
         end) as filtered,
    COUNT(*) as cnt
from list
group by
    LEFT(item,
         case when CHARINDEX(' ', item, 1) = 0
              then LEN(item)
              else CHARINDEX(' ', item, 1)
         end)

create table MyTable
(id int, FieldName varchar(50) )
insert into MyTable values (1, 'Eggs')
insert into MyTable values (2, 'Bacon')
insert into MyTable values (3, 'Bacon Smoked')
select count(FieldName), FieldName from (
select
case
when charindex('eggs', FieldName) > 0 then 'eggs'
when charindex('bacon', FieldName) > 0 then 'bacon'
end as FieldName
from MyTable) as myMyTablealias
group by FieldName
Check it out.
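With the three sample rows above, this should return bacon = 2 and eggs = 1.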


How to check if a table contains different values?

I have a table:
Id Value
1 79868
2 79868
3 79868
4 97889
5 97889
Now, I want to write a select with a bool variable that checks whether the table contains different values in the Value column. Something like this:
select
v= (select case when exists(...)
then 1
else 0
end)
The table contains the values 79868 and 97889, so v should return 1; otherwise it should return 0.
How do I write a select inside a select case?
You can compare the min and max values:
select (case when (select min(value) from t) = (select max(value) from t)
then 1 else 0
end) as all_same
With an index on (value), this should be quite fast.
The above solution assumes that there are no null values or that NULL values should be ignored.
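If NULL should instead count as a value that differs from the non-NULL ones, here is a minimal sketch of one way to extend this, assuming the same table t and column value:
select (case when (select min(value) from t) = (select max(value) from t)
              and not exists (select 1 from t where value is null)
        then 1 else 0
        end) as all_same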
You might try this:
SELECT CASE COUNT(*)
WHEN 1 THEN 1
ELSE 0
END AS all_equal
FROM (SELECT DISTINCT Value FROM my_table) AS v;
If I understand your question correctly, you want to check whether the value column contains more than one distinct value. You can achieve this using:
select (case when count(value) > 1 then 1 else 0 end) as out
from (select value from table group by value) temp
Maybe this is better:
SELECT CASE COUNT(DISTINCT value) WHEN 1 THEN 1
ELSE 0
END AS all_equal
FROM my_table;
So you just need one case expression with two Boolean variables:
declare @bit1 bit = 1, @bit0 bit = 0
select
(case when min(value) = max(value) then @bit1 else @bit0 end) as v
from my_table t
where value is not null
This is the same as the other answers, but it has some test data.
declare @T table(pk int identity primary key, val int not null);
insert into @T (val) values (79868), (79868), (79868);
select case when count(distinct val) = 1 then 0 else 1 end as dd
from @T t;
select case when min(val) = max(val) then 0 else 1 end as dd
from @T t;
insert into @T (val) values (97889), (97889);
select case when count(distinct val) = 1 then 0 else 1 end as dd
from @T t;
select case when min(val) = max(val) then 0 else 1 end as dd
from @T t;
I like the min/max answer from Gordon best.

Query to exclude negating value-pairs

Create Table #Temp(Number Varchar(20), Category Varchar(20))
Insert Into #Temp
Select '123', '-A'
Union all
Select '123', 'A'
Union all
Select '123', 'A'
Union all
Select '123', 'B'
Union all
Select '123', '-B'
Union all
Select '123', 'C'
Union all
Select '123', '-C'
Union all
Select '123', '-C'
Select * From #Temp
result set
---------------------------
Number Category
123 -A
123 A
123 A
123 B
123 -B
123 C
123 -C
123 -C
---------------------------
From the above set of data I need a query showing only one A when there are two A and one -A.
All I need is an output that cancels the -A and A pairs wherever necessary; from the above example,
the query should return only the rows below:
result set
---------------------------
Number Category
123 A
123 -C
---------------------------
This should do what you want:
select t.*
from (select t.*,
             count(*) over (partition by replace(category, '-', ''), seqnum) as cnt_sc
      from (select t.*,
                   row_number() over (partition by category order by category) as seqnum
            from #Temp t
           ) t
     ) t
where cnt_sc = 1;
For a given category this enumerates the rows. It then counts the number for each enumeration, taking the "-" into account. It returns the rows that have only one enumeration -- they have no matches.
Note: This assumes that category has no hyphens except at the beginning.
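To see how the enumeration and the pair counts line up for the sample data, you can run the same query without the final filter (assuming the #Temp table created in the question):
select t.*,
       count(*) over (partition by replace(category, '-', ''), seqnum) as cnt_sc
from (select t.*,
             row_number() over (partition by category order by category) as seqnum
      from #Temp t
     ) t;
The cancelled pairs come back with cnt_sc = 2; only the surviving A and -C rows have cnt_sc = 1.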
EDIT:
If you know that there will be at most one such row, you can do:
select number,
       (case when count(*) > 2 * sum(case when category like '-%' then 1 else 0 end)
             then replace(category, '-', '')
             else '-' + replace(category, '-', '')
        end) as category
from #Temp
group by number, replace(category, '-', '')
having count(*) <> 2 * sum(case when category like '-%' then 1 else 0 end)
Not the prettiest solution but perhaps someone else has a more elegant approach:
WITH cte as(
SELECT
SUM(CASE WHEN LEFT(Category, 1) = '-'
THEN -1
ELSE 1
END) as summed
, Right(Category,1) AS nuCat
FROM #Temp
GROUP BY Number, RIGHT(Category, 1)
)
SELECT CASE WHEN SUM(summed) > 0
THEN nuCat
ELSE '-' + nuCat
END AS DerivedCategory
FROM cte
GROUP BY nuCat
HAVING SUM(summed) <> 0
The CTE turns the leading '-' into an integer sign (+1 or -1) and sums it per category letter. Then, when selecting from the CTE, the '-' is concatenated back onto the letter when the summed sign is negative.
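For the sample data, the CTE on its own should come out to roughly:
summed nuCat
------ -----
1      A
0      B
-1     C
The HAVING clause then drops the fully cancelled B group, and the sign of the sum decides whether the plain or the '-' prefixed category is returned.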
The result:
DerivedCategory
---------------
A
-C

SQL Server - count how many names have 'A' and how many have 'E'

I have a problem with a SQL query.
I have names in column Name in Table_Name, for example:
'Mila', 'Adrianna', 'Emma', 'Edward', 'Adam', 'Piter'
I would like to count how many names contain the letter 'A' and how many contain the letter 'E'.
The output should be:
letter_A (5) | letter_E (3)
I tried to do this:
SELECT Name,
letter_A = CHARINDEX('A', Name),
letter_E = CHARINDEX('E', Name)
FROM Table_Name
GROUP BY Name
HAVING ( CHARINDEX('A', Nazwisko) != 0
OR ( CHARINDEX('E', Nazwisko) ) != 0 )
My query only shows if 'A' or 'E' is in Name :/
Can anyone help? :)
You can use conditional aggregation:
select sum(case when Nazwisko like '%A%' then 1 else 0 end) as A_cnt,
sum(case when Nazwisko like '%E%' then 1 else 0 end) as E_cnt
from table_name
where Nazwisko like '%A%' or Nazwisko like '%E%';
You just need to aggregate if you only need the counts.
select
sum(case when charindex('a',name) <> 0 then 1 else 0 end) as a_count
,sum(case when charindex('e',name) <> 0 then 1 else 0 end) as e_count
from table_name
;WITH CTE
AS (SELECT NAME
FROM (VALUES ('MILA'),
('ADRIANNA'),
('EMMA'),
('EDWARD'),
('ADAM'),
('PITER'))V(NAME)),
CTE_NAME
AS (SELECT COUNT(NAME_A) NAME_A,
COUNT(NAME_E) NAME_E
FROM (SELECT CASE
WHEN NAME LIKE '%A%' THEN NAME
END NAME_A,
CASE
WHEN NAME LIKE '%E%' THEN NAME
END NAME_E
FROM CTE
GROUP BY NAME)A)
SELECT *
FROM CTE_NAME

SQL Server case when or enum

I have a table something like:
stuff type price
first_stuff 1 43
second_stuff 2 46
third_stuff 3 24
fourth_stuff 2 12
fifth_stuff NULL 90
And every type of stuff has an assigned description, which is not stored in the DB:
1 = Bad
2 = Good
3 = Excellent
NULL = Not_Assigned
All I want is to return a table which counts each type separately, something like:
Description Count
Bad 1
Good 2
Excellent 1
Not_Assigned 1
DECLARE @t TABLE ([type] INT)
INSERT INTO @t ([type])
VALUES (1),(2),(3),(2),(NULL)
SELECT
[Description] =
CASE t.[type]
WHEN 1 THEN 'Bad'
WHEN 2 THEN 'Good'
WHEN 3 THEN 'Excellent'
ELSE 'Not_Assigned'
END, t.[Count]
FROM (
SELECT [type], [Count] = COUNT(*)
FROM @t
GROUP BY [type]
) t
ORDER BY ISNULL(t.[type], 999)
output -
Description Count
------------ -----------
Bad 1
Good 2
Excellent 1
Not_Assigned 1
;WITH CTE_TYPE
AS (SELECT DESCRIPTION,
VALUE
FROM (VALUES ('BAD',
1),
('GOOD',
2),
('EXCELLENT',
3))V( DESCRIPTION, VALUE )),
CTE_COUNT
AS (SELECT C.DESCRIPTION,
Count(T.TYPE) TYPE_COUNT
FROM YOUR_TABLE T
JOIN CTE_TYPE C
ON T.TYPE = C.VALUE
GROUP BY TYPE,
DESCRIPTION
UNION ALL
SELECT 'NOT_ASSIGNED' AS DESCRIPTION,
Count(*) TYPE_COUNT
FROM YOUR_TABLE
WHERE TYPE IS NULL)
SELECT *
FROM CTE_COUNT
Hope this helps.
SELECT ISNULL(D.descr, 'Not_Assigned'),
T2.qty
FROM
(SELECT T.type,
COUNT(*) as qty
FROM YourTable AS T
GROUP BY type) AS T2
LEFT JOIN (SELECT 1 as type, 'Bad' AS descr
UNION ALL
SELECT 2, 'Good'
UNION ALL
SELECT 3, 'Excellent') AS D ON D.type = T2.type
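The LEFT JOIN keeps the group whose type is NULL (it matches nothing in the inline description list), and ISNULL then relabels the missing description as 'Not_Assigned'.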
If you are using SQL Server 2012+, use this:
SELECT
[Description] = coalesce(choose(t.[type], 'Bad', 'Good', 'Excellent'), 'Not_Assigned'),
t.[Count]
FROM (
SELECT [type], [Count] = COUNT(*)
FROM yourtable
GROUP BY [type]
) t
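The COALESCE is needed because CHOOSE returns NULL when the index is NULL or outside the list, which is how the unassigned type ends up as 'Not_Assigned'.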

Looping in select query

I want to do something like this:
select id,
count(*) as total,
FOR temp IN SELECT DISTINCT somerow FROM mytable ORDER BY somerow LOOP
sum(case when somerow = temp then 1 else 0 end) temp,
END LOOP;
from mytable
group by id
order by id
I created a working select:
select id,
count(*) as total,
sum(case when somerow = 'a' then 1 else 0 end) somerow_a,
sum(case when somerow = 'b' then 1 else 0 end) somerow_b,
sum(case when somerow = 'c' then 1 else 0 end) somerow_c,
sum(case when somerow = 'd' then 1 else 0 end) somerow_d,
sum(case when somerow = 'e' then 1 else 0 end) somerow_e,
sum(case when somerow = 'f' then 1 else 0 end) somerow_f,
sum(case when somerow = 'g' then 1 else 0 end) somerow_g,
sum(case when somerow = 'h' then 1 else 0 end) somerow_h,
sum(case when somerow = 'i' then 1 else 0 end) somerow_i,
sum(case when somerow = 'j' then 1 else 0 end) somerow_j,
sum(case when somerow = 'k' then 1 else 0 end) somerow_k
from mytable
group by id
order by id
This works, but it is 'static' - if a new value is added to 'somerow', I will have to change the SQL manually to pick up all the values from the somerow column, and that is why I'm wondering whether it is possible to do something with a for loop.
So what I want to get is this:
id somerow_a somerow_b ....
0 3 2 ....
1 2 10 ....
2 19 3 ....
. ... ...
. ... ...
. ... ...
So what I'd like to do is count all the rows which have a specific letter in them and group them by id (this id isn't a primary key, but it repeats - there are about 80 different possible values for id).
http://sqlfiddle.com/#!15/18feb/2
Are arrays good for you? (SQL Fiddle)
select
id,
sum(totalcol) as total,
array_agg(somecol) as somecol,
array_agg(totalcol) as totalcol
from (
select id, somecol, count(*) as totalcol
from mytable
group by id, somecol
) s
group by id
;
id | total | somecol | totalcol
----+-------+---------+----------
1 | 6 | {b,a,c} | {2,1,3}
2 | 5 | {d,f} | {2,3}
In 9.2 it is possible to have a set of JSON objects (Fiddle)
select row_to_json(s)
from (
select
id,
sum(totalcol) as total,
array_agg(somecol) as somecol,
array_agg(totalcol) as totalcol
from (
select id, somecol, count(*) as totalcol
from mytable
group by id, somecol
) s
group by id
) s
;
row_to_json
---------------------------------------------------------------
{"id":1,"total":6,"somecol":["b","a","c"],"totalcol":[2,1,3]}
{"id":2,"total":5,"somecol":["d","f"],"totalcol":[2,3]}
In 9.3, with the addition of lateral, a single object (Fiddle)
select to_json(format('{%s}', (string_agg(j, ','))))
from (
select format('%s:%s', to_json(id), to_json(c)) as j
from
(
select
id,
sum(totalcol) as total_sum,
array_agg(somecol) as somecol_array,
array_agg(totalcol) as totalcol_array
from (
select id, somecol, count(*) as totalcol
from mytable
group by id, somecol
) s
group by id
) s
cross join lateral
(
select
total_sum as total,
somecol_array as somecol,
totalcol_array as totalcol
) c
) s
;
to_json
---------------------------------------------------------------------------------------------------------------------------------------
"{1:{\"total\":6,\"somecol\":[\"b\",\"a\",\"c\"],\"totalcol\":[2,1,3]},2:{\"total\":5,\"somecol\":[\"d\",\"f\"],\"totalcol\":[2,3]}}"
In 9.2 it is also possible to have a single object in a more convoluted way using subqueries instead of lateral.
SQL is very rigid about the return type. It demands to know what to return beforehand.
For a completely dynamic number of resulting values, you can only use arrays like #Clodoaldo posted. Effectively a static return type, you do not get individual columns for each value.
If you know the number of columns at call time ("semi-dynamic"), you can create a function taking (and returning) polymorphic parameters. Closely related answer with lots of details:
Dynamic alternative to pivot with CASE and GROUP BY
(You also find a related answer with arrays from #Clodoaldo there.)
Your remaining option is to use two round-trips to the server. The first to determine the actual query with the actual return type. The second to execute the query based on the first call.
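As a rough sketch of the first round-trip (plain PostgreSQL, reusing the mytable / somerow names from the question), you can let the database generate the static query text, then execute that generated statement as the second round-trip:
SELECT 'select id, count(*) as total, '
       || string_agg(format('count(somerow = %L or null) as %I',
                            somerow, 'somerow_' || somerow), ', ' ORDER BY somerow)
       || ' from mytable group by id order by id' AS generated_sql
FROM (SELECT DISTINCT somerow FROM mytable) s;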
Else, you have to go with a static query. While doing that, I see two nicer options for what you have right now:
1. Simpler expression
select id
, count(*) AS total
, count(somecol = 'a' OR NULL) AS somerow_a
, count(somecol = 'b' OR NULL) AS somerow_b
, ...
from mytable
group by id
order by id;
How does it work?
Compute percents from SUM() in the same SELECT sql query
SQL Fiddle.
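In short: count() only counts non-NULL arguments, the comparison is TRUE for matching rows and FALSE otherwise, and FALSE OR NULL evaluates to NULL, so only the matching rows are counted.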
2. crosstab()
crosstab() is more complex at first, but written in C, optimized for the task and shorter for long lists. You need the additional module tablefunc installed. Read the basics here if you are not familiar:
PostgreSQL Crosstab Query
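On Postgres 9.1 or later the module is installed per database with:
CREATE EXTENSION tablefunc;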
SELECT * FROM crosstab(
$$
SELECT id
, count(*) OVER (PARTITION BY id)::int AS total
, somecol
, count(*)::int AS ct -- casting to int, don't think you need bigint?
FROM mytable
GROUP BY 1,3
ORDER BY 1,3
$$
,
$$SELECT unnest('{a,b,c,d}'::text[])$$
) AS f (id int, total int, a int, b int, c int, d int);