SQL 'GROUP BY' to filter an array of 'text' data type

I am new to SQL and I am trying to understand the GROUP BY statement.
I have inserted the following data in SQL:
CREATE TABLE customers (id integer, type text);
INSERT INTO customers VALUES (1,'start');
INSERT INTO customers VALUES (2,'start');
INSERT INTO customers VALUES (2,'complete');
INSERT INTO customers VALUES (3,'complete');
INSERT INTO customers VALUES (3,'start');
INSERT INTO customers VALUES (4,'start');
I want to select those IDs that do not have a type 'complete'. For this example I should get IDs 1 and 4.
I have tried multiple GROUP BY - HAVING combinations. My best approach is:
SELECT id from customers group by type having type!='complete';
but the resulting IDs are 4, 3, 2.
Could anyone give me a hint about what I am doing wrong?

You are close. The HAVING clause needs an aggregate function, and you need to group by id:
select id
from customers t
group by id
having sum(case when type = 'complete' then 1 else 0 end) = 0;
Normally, if you have something called an id, you would also have a table with that as its primary key. If so, you can also do:
select it.id
from idtable it
where not exists (select 1
                  from customers t
                  where t.type = 'complete' and it.id = t.id
                 );
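If there is no separate id table, a minimal sketch of the same NOT EXISTS idea, driven entirely from the customers table defined above, would be:
select distinct c.id
from customers c
where not exists (select 1
                  from customers t
                  where t.type = 'complete' and t.id = c.id);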


PostgreSQL unnest and pivot int array column

I have the below table:
create table test(id serial, key int, type text, words text[], numbers int[]);
insert into test(key,type,words) select 1,'Name',array['Table'];
insert into test(key,type,numbers) select 1,'product_id',array[2];
insert into test(key,type,numbers) select 1,'price',array[40];
insert into test(key,type,numbers) select 1,'Region',array[23,59];
insert into test(key,type,words) select 2,'Name',array['Table1'];
insert into test(key,type,numbers) select 2,'product_id',array[1];
insert into test(key,type,numbers) select 2,'price',array[34];
insert into test(key,type,numbers) select 2,'Region',array[23,59,61];
insert into test(key,type,words) select 3,'Name',array['Chair'];
insert into test(key,type,numbers) select 3,'product_id',array[5];
I was using the below query to pivot the table for users:
select key,
max(array_to_string(words,',')) filter(where type='Name') as "Name",
cast(max(array_to_string(numbers,',')) filter(where type='product_id') as int) as "product_id",
cast(max(array_to_string(numbers,',')) filter(where type='price') as int) as "price" ,
max(array_to_string(numbers,',')) filter(where type='Region') as "Region"
from test group by key
But I couldn't unnest the Region column during the pivot in order to use the Region column to join with another table.
My expected output is below.
Since we are using unnest("Region") to do the pivot, there must be a row with region data for each product.
Or the below code will do the trick by creating an array with a single null:
unnest(CASE WHEN array_length("Region", 1) >= 1
THEN "Region"
ELSE '{null}'::int[] END)
Schema:
create table test(id serial, key int,type text,words text[],numbers int[] );
insert into test(key,type,words) select 1,'Name',array['Table'];
insert into test(key,type,numbers) select 1,'product_id',array[2];
insert into test(key,type,numbers) select 1,'price',array[40];
insert into test(key,type,numbers) select 1,'Region',array[23,59];
insert into test(key,type,words) select 2,'Name',array['Table1'];
insert into test(key,type,numbers) select 2,'product_id',array[1];
insert into test(key,type,numbers) select 2,'price',array[34];
insert into test(key,type,numbers) select 2,'Region',array[23,59,61];
insert into test(key,type,words) select 3,'Name',array['Chair'];
insert into test(key,type,numbers) select 3,'product_id',array[5];
select key,"Name",product_id,price,unnest(CASE WHEN array_length("Region", 1) >= 1
THEN "Region"
ELSE '{null}'::int[] END) from
(
select key,
max(array_to_string(words,',')) filter(where type='Name') as "Name",
cast(max(array_to_string(numbers,',')) filter(where type='product_id') as int) as "product_id",
cast(max(array_to_string(numbers,',')) filter(where type='price') as int) as "price" ,
max(numbers) filter(where type='Region') as "Region"
from test group by key
)t order by key
key  Name    product_id  price  unnest
1    Table   2           40     23
1    Table   2           40     59
2    Table1  1           34     23
2    Table1  1           34     59
2    Table1  1           34     61
3    Chair   5           null   null
db<>fiddle here
Very strange database design... I'm assuming you inherited it?
If none of the other array values will ever have a cardinality > 1, then you can simply unnest:
select
key,
(max (words) filter (where type = 'Name'))[1] as name,
(max (numbers) filter (where type = 'product_id'))[1] as product_id,
(max (numbers) filter (where type = 'price'))[1] as price,
unnest (max (numbers) filter (where type = 'Region')) as region
from test
group by key
If they can have multiple values, that can also be handled.
-- EDIT 3/15/2021 --
Short version: an unnest against a null won't produce a row, so if you coalesce the null value into an array of a single null element, that should take care of this part:
select
key,
(max (words) filter (where type = 'Name'))[1] as name,
(max (numbers) filter (where type = 'product_id'))[1] as product_id,
(max (numbers) filter (where type = 'price'))[1] as price,
unnest (coalesce (max (numbers) filter (where type = 'Region'), array[null]::integer[])) as region
from test
group by key
order by key
Now for the part you didn't ask... I and at least one other have been gently nudging you that your database design is going to cause multiple problems at every turn. The fact that it's in production doesn't mean you shouldn't fix it as soon as you can.
This design is what's known as EAV (Entity-Attribute-Value). It has its use cases, but like most good things it can also be applied when it shouldn't be. The use case that comes to mind is if you want users to be able to dynamically add attributes to certain objects. Even then, there might be better/easier ways.
And as one example, if you have one million objects, five attributes means you have to store that as five million rows, and the majority of that space will be occupied with repeating the key and attribute names.
Just food for thought. We can continue to triage this with every new scenario you find, but it would be better to redo the design.
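For contrast, here is a minimal sketch of what a normalized layout for the same data could look like; the table and column names are hypothetical, and it assumes a product simply has a list of regions:
create table product (
    product_id int primary key,
    name       text,
    price      int
);
create table product_region (
    product_id int references product(product_id),
    region     int
);
-- the pivot above then collapses to a plain join
select p.product_id, p.name, p.price, pr.region
from product p
left join product_region pr on pr.product_id = p.product_id
order by p.product_id;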

How to query hugeblob data

I wanted to query the hugeblob attribute in a table. I have tried the below, but it doesn't give any data when selecting.
select DBMS_LOB.substr(mydata, 1000,1) from mytable;
Is there any other way to do this?
DBMS_LOB.substr() is the right function to use. Ensure that there is data in the column.
Example usage:
-- create table
CREATE TABLE myTable (
id INTEGER PRIMARY KEY,
blob_column BLOB
);
-- insert couple of rows
insert into myTable values(1,utl_raw.cast_to_raw('a long data item here'));
insert into myTable values(2,null);
-- select rows
select id, blob_column from myTable;
ID BLOB_COLUMN
1 (BLOB)
2 null
-- select rows
select id, DBMS_LOB.substr(blob_column, 1000,1) from myTable;
ID DBMS_LOB.SUBSTR(BLOB_COLUMN,1000,1)
1 61206C6F6E672064617461206974656D2068657265
2 null
-- select rows
select id, UTL_RAW.CAST_TO_VARCHAR2(DBMS_LOB.substr(blob_column,1000,1)) from myTable;
ID UTL_RAW.CAST_TO_VARCHAR2(DBMS_LOB.SUBSTR(BLOB_COLUMN,1000,1))
1 a long data item here
2 null
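As a quick sanity check that the column actually holds data (the most common reason the SUBSTR call comes back empty), DBMS_LOB.getlength() can be run against the same example table; a length of 0 or null means there is nothing to read:
-- select rows with their LOB length in bytes
select id, DBMS_LOB.getlength(blob_column) as blob_length from myTable;
ID BLOB_LENGTH
1 21
2 null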

How do I use the COUNT function for different values of the same column in MS-SQL?

For DB StudentInfo and Table Student as follows:
CREATE TABLE Student
(
ID INT PRIMARY KEY IDENTITY(1,1),
Name nvarchar(255)
)
and inserting values:
Insert Into Student Values ('Ashok')
executing it 3 times, and
Insert Into Student Values ('Achyut')
executing it 2 times, so a total of 5 rows are inserted into the table.
I want to display a result counting the rows named 'Ashok' and the rows named 'Achyut'.
Generally, to count a single value in a column, I use:
SELECT Count(Name) AS NoOfStudentHavingNameAshok
FROM Student
WHERE Name = 'Ashok'
but how do I display both NoOfStudentHavingNameAshok and NoOfStudentHavingNameAchyut? What query should I run?
You should include name in the select and group by name.
SELECT name, Count(*)
From Student
group by name
You can put conditions inside your COUNT() function:
select count(case when Name = 'Ashok' then 'X' end) as NoOfStudentHavingNameAshok,
count(case when Name = 'Achyut' then 'X' end) as NoOfStudentHavingNameAchyut
from Student
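With the five rows described above (three 'Ashok', two 'Achyut'), this should return a single row along these lines:
NoOfStudentHavingNameAshok  NoOfStudentHavingNameAchyut
3                           2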

SQL aggregate function to return single value if there is only one, otherwise null

I'm looking for the best way to achieve an aggregate function that does this:-
If the group contains only a single repeated value, return that value
If the group contains any nulls, then return null
If the group contains more than one value, return null
Here's some sample data:
CREATE TABLE EXAMPLE
( ID NUMBER(3),
VAL VARCHAR2(3));
INSERT INTO EXAMPLE VALUES (1,'A');
INSERT INTO EXAMPLE VALUES (2,'A');
INSERT INTO EXAMPLE VALUES (2,'B');
INSERT INTO EXAMPLE VALUES (3,null);
INSERT INTO EXAMPLE VALUES (3,'A');
INSERT INTO EXAMPLE VALUES (4,'A');
INSERT INTO EXAMPLE VALUES (4,'A');
SQLFiddle Link
The SQL should be something like:-
SELECT ID, ????( VAL ) ONLY_VAL
FROM EXAMPLE
GROUP BY ID
ORDER BY ID
The result I am after should look like this:-
ID ONLY_VAL
1 A
2
3
4 A
In the real thing, I want to do this on multiple VAL columns (grouped by the same ID). There would be several hundred records per ID.
I thought this was an interesting problem. The only solution I have is a mess of NVL, MIN and MAX, and it seems like there should be a neater way.
Will this work for your original data?
SELECT ID,
CASE WHEN COUNT(DISTINCT VAL) = 1 AND COUNT(ID) = COUNT(VAL)
THEN MAX(VAL)
ELSE NULL
END ONLY_VAL
FROM EXAMPLE
GROUP BY ID
ORDER BY ID
SQLFiddle Demo
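For the multiple-column case mentioned in the question, the same expression can simply be repeated per column; here is a sketch assuming the table also had a hypothetical VAL2 column:
SELECT ID,
       CASE WHEN COUNT(DISTINCT VAL) = 1 AND COUNT(ID) = COUNT(VAL)
            THEN MAX(VAL)
            ELSE NULL
       END ONLY_VAL,
       CASE WHEN COUNT(DISTINCT VAL2) = 1 AND COUNT(ID) = COUNT(VAL2)
            THEN MAX(VAL2)
            ELSE NULL
       END ONLY_VAL2
FROM EXAMPLE
GROUP BY ID
ORDER BY ID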

Oracle SQL: Returning a Record even when a specific value doesn't exist

I have a query where I'm trying to pull some values from a table where a specific ID is queried for. If that value doesn't exist, I would still like the query to return a record that only has that ID value I was looking for. Here's what I've tried so far.
Select attr.attrval, attr.uidservicepoint, sp.servicepointid
From bilik.lssrvcptmarketattr attr
Join bilik.lsmarketattrtype type
  on attr.uidmarketattrtype = type.uidmarketattrtype
 and type.attrtype IN ('CAPACITY_REQUIREMENT_KW')
 and TO_CHAR(attr.starttime, 'mm/dd/yyyy') in ('05/01/2011')
Right Outer Join bilik.lsservicepoint sp
  on attr.uidservicepoint = sp.uidservicepoint
Where sp.servicepointid in ('RGE_R01000051574382')
Order By sp.servicepointid ASC
In this example, I'm trying to look for RGE_R01000051574382. If that doesn't exist in table SP.servicepointid, I want it to still return the 'RGE_R01000051574382' in a record with nulls for the other values I'm pulling. Normally, when I'm running this, I will be pulling about 1000 specific values at a time.
If anyone has any insight that they can give on this, it would be greatly appreciated. Thanks so much!
If I understand correctly, you just need to move the WHERE clause into the JOIN clause.
select attr.attrval,
attr.uidservicepoint,
sp.servicepointid
from bilik.lssrvcptmarketattr attr
join bilik.lsmarketattrtype type on attr.uidmarketattrtype = type.uidmarketattrtype
and type.attrtype in ('CAPACITY_REQUIREMENT_KW')
and TO_CHAR(attr.starttime, 'mm/dd/yyyy') in ('05/01/2011')
right outer join bilik.lsservicepoint sp on attr.uidservicepoint = sp.uidservicepoint
and sp.servicepointid in ('RGE_R01000051574382')
order by sp.servicepointid
I think you're saying you want to have a record returned, with the servicepointid column populated, but all others null?
In that case, use a union.
select ...your query without order by...
and sp.servicepointid = 'RGE_R01000051574382'
union
select null, null, 'RGE_R01000051574382'
from dual
where not exists (select 'x' from (...your query without order by...))
Here's a complete example:
create table test (id number, val varchar2(10));
insert into test (id, val) values (1, 'hi');
select id,
val
from test
where id = 1
union
select 1,
null
from dual
where not exists (select 'x'
from test
where id = 1)
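For the situation mentioned in the question, where roughly 1000 specific IDs are pulled at a time, one common alternative is to drive the query from the list of wanted IDs and left join everything else onto it, so every wanted ID comes back even when no matching data exists. This is only a sketch against the tables above; the wanted CTE and the placeholder comment stand in for the real list of IDs:
with wanted as (
  select 'RGE_R01000051574382' as servicepointid from dual
  -- union all select '<another id>' from dual, one row per wanted ID
)
select w.servicepointid, attr.attrval, attr.uidservicepoint
from wanted w
left join bilik.lsservicepoint sp
       on sp.servicepointid = w.servicepointid
left join (select a.attrval, a.uidservicepoint
             from bilik.lssrvcptmarketattr a
             join bilik.lsmarketattrtype t
               on a.uidmarketattrtype = t.uidmarketattrtype
            where t.attrtype = 'CAPACITY_REQUIREMENT_KW'
              and TO_CHAR(a.starttime, 'mm/dd/yyyy') = '05/01/2011') attr
       on attr.uidservicepoint = sp.uidservicepoint
order by w.servicepointid;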