How to replace all non-zero values from a column in a select? - sql

I need to replace the non-zero values in a column within a select statement.
SELECT Status, Name, Car from Events;
I can do it like this:
SELECT Replace(Status, '1', 'Ready'), Name, Car FROM Events;
Or using Case/Update.
But I have numbers from -5 to 10, and writing a Replace or similar for each value is not a good idea.
How can I do this comparison and replacement in the select, without updating the database?
Table looks like this:
Status  Name   Car
0       John   Porsche
1       Bill   Dodge
5       Megan  Ford

The standard method is to use case:
select t.*,
(case when status = 1 then 'Ready'
else 'Something else'
end) as status_string
from t;
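With values running from -5 to 10, the case expression just grows by one when branch per status; the labels below are placeholders I made up, not statuses from the original post:
select t.*,
       (case when status = -5 then 'Cancelled'
             when status = 0 then 'Not ready'
             when status = 1 then 'Ready'
             -- ... one branch per remaining status
             else 'Something else'
        end) as status_string
from t;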
I would instead recommend, though, that you have a status reference table:
create table statuses (
status int primary key,
name varchar(255)
);
insert into statuses (status, name)
values (0, 'UNKNOWN'),
(1, 'READY'),
. . . -- for the rest of the statuses
Then use JOIN:
select t.*, s.name
from t join
statuses s
on t.status = s.status;
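If some rows may hold a status that is not in the reference table yet, a left join (my variation, not part of the original answer) keeps those rows with a null name:
select t.*, s.name
from t left join
     statuses s
     on t.status = s.status;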

SELECT IF(status = 1, 'Approved', 'Pending') FROM TABLENAME;
Note that IF() is MySQL-specific; the CASE expression above is the portable form.

Related

Split one column into 2 columns based on the last character of the column

I have a column region_no in a table. The column can hold 2 kinds of values - one set of values that end with the letter 'R' and the other that end with letter 'B'. This column must be split into 2 columns based on the last letter.
The create table and insert of sample data is :
CREATE TABLE test_17Jan
(
Region_No varchar(8),
Customer_Name varchar(20),
City varchar(20),
zip_code varchar(10)
)
INSERT INTO test_17Jan VALUES ('101R', 'John Doe', 'Detroit', '48127')
INSERT INTO test_17Jan VALUES ('202B', 'John Doe', 'Detroit', '48127')
INSERT INTO test_17Jan VALUES ('201B', 'Tim Smith', 'Waunakee', '53597')
The desired output is:
Customer_Name  City      zip_code  Inside_Sales_Region  B2B_Region
John Doe       Detroit   48127     101R                 202B
Tim Smith      Waunakee  53597     NULL                 201B
I thought of the PIVOT function, but that needs an aggregate. Is there a way to get the output in the above format? Any help will be appreciated. The code will run on SQL Server 2019 (v15).
You can use conditional aggregation to get your desired results:
select Customer_Name, City, zip_code,
Max(Inside_Sales_Region) Inside_Sales_Region,
Max(B2B_Region) B2B_Region
from (
select Customer_Name, City, zip_code,
case when Right(Region_No,1) = 'R' then Region_No end Inside_Sales_Region,
case when Right(Region_No,1) = 'B' then Region_No end B2B_Region
from test_17Jan
)t
group by Customer_Name, City, zip_code;
A better option might be a LEFT self-join, but this should also work, returning one row per source row:
SELECT
    CUSTOMER_NAME, CITY, ZIP_CODE,
    CASE
        WHEN Region_No LIKE '%R' THEN Region_No
        ELSE NULL
    END AS Inside_Sales_Region,
    CASE
        WHEN Region_No LIKE '%B' THEN Region_No
        ELSE NULL
    END AS B2B_Region
FROM test_17Jan;
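For reference, the left self-join hinted at above might look like this sketch; it assumes Customer_Name, City, and zip_code identify a customer and that every customer has a '%B' row, as in the sample data (a full join would cover the general case):
SELECT b.Customer_Name, b.City, b.zip_code,
       r.Region_No AS Inside_Sales_Region,
       b.Region_No AS B2B_Region
FROM test_17Jan b
LEFT JOIN test_17Jan r
       ON r.Customer_Name = b.Customer_Name
      AND r.City = b.City
      AND r.zip_code = b.zip_code
      AND r.Region_No LIKE '%R'
WHERE b.Region_No LIKE '%B';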

SQL for selecting values in a single column by 'AND' condition

I have table data like below:
PersonId  Eat
111       Carrot
111       Apple
111       Orange
222       Carrot
222       Apple
333       Carrot
444       Orange
555       Apple
I need an SQL query which returns the total number of PersonIds who eat both Carrot and Apple.
In the above example the result is 2 (PersonIds 111 and 222).
An MS SQL query like select count(distinct PersonId) from Person where Eat = 'Carrot' and Eat = 'Apple' returns 0, because no single row can satisfy both conditions.
You can actually get the count without using a subquery to determine the persons who eat both: by inclusion-exclusion, (carrot eaters) + (apple eaters) - (eaters of either) equals the number who eat both. Assuming that the rows are unique:
select ( count(distinct case when eat = 'carrot' then personid end) +
count(distinct case when eat = 'apple' then personid end) -
count(distinct personid)
) as num_both
from t
where eat in ('carrot', 'apple')
SELECT PersonID FROM Person WHERE Eat = 'Carrot'
INTERSECT
SELECT PersonID FROM Person WHERE Eat = 'Apple'
You can use conditional aggregation of a sort:
select
personid
from <yourtable>
group by
personid
having
count (case when eat = 'carrot' then 1 else null end) >= 1
and count (case when eat = 'apple' then 1 else null end) >= 1
In this example, I use STRING_AGG so that the check reduces to a single comparison against the string 'Apple;Carrot':
create table #EatTemp
(
PersonId int,
Eat Varchar(50)
)
INSERT INTO #EatTemp VALUES
(111, 'Carrot')
,(111, 'Apple')
,(111, 'Orange')
,(222, 'Carrot')
,(222, 'Apple')
,(333, 'Carrot')
,(444, 'Orange')
,(555, 'Apple')
SELECT Count(PersonId) WhoEatCarrotAndApple FROM
(
SELECT PersonId,
STRING_AGG(Eat, ';')
WITHIN GROUP (ORDER BY Eat) Eat
FROM #EatTemp
WHERE Eat IN ('Apple', 'Carrot')
GROUP BY PersonId
) EatAgg
WHERE Eat = 'Apple;Carrot'
You can use EXISTS statements to achieve your goal. Below is a full set of code you can use to test the results. In this case, this returns a count of 2 since PersonId 111 and 222 match the criteria you specified in your post.
CREATE TABLE Person
( PersonId INT
, Eat VARCHAR(10));
INSERT INTO Person
VALUES
(111, 'Carrot'), (111, 'Apple'), (111, 'Orange'),
(222, 'Carrot'), (222, 'Apple'), (333, 'Carrot'),
(444, 'Orange'), (555, 'Apple');
SELECT COUNT(DISTINCT PersonId)
FROM Person AS p
WHERE EXISTS
(SELECT 1
FROM Person e1
WHERE e1.Eat = 'Apple'
AND p.PersonId = e1.PersonId)
AND EXISTS
(SELECT 1
FROM Person e1
WHERE e1.Eat = 'Carrot'
AND p.PersonId = e1.PersonId);
EXISTS statements have a few advantages:
No chance of changing the granularity of your data since you aren't joining in your FROM clause.
Easy to add additional conditions as needed. Just add more EXISTS statements in your WHERE clause.
The condition is cleanly encapsulated in the EXISTS, so code intent is clear.
If you ever need complex conditions like existence of a value in another table based on specific filter conditions, then you can easily add this without introducing table joins in your main query.
Some alternative solutions such as PersonId NOT IN (subquery) can introduce unexpected behavior in certain conditions, particularly when the subquery returns a NULL value (see the demonstration below).
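That NULL pitfall is easy to demonstrate with a toy query (not from the original post); this returns zero rows even though no PersonId is 0, because PersonId NOT IN (0, NULL) evaluates to UNKNOWN rather than TRUE:
SELECT PersonId
FROM Person
WHERE PersonId NOT IN (0, NULL); -- never matches once the list contains a NULL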
select
count(PersonID)
from Person
where eat = 'Carrot'
and PersonID in (select PersonID
from Person
where eat = 'Apple');
This counts the rows for persons who eat carrots, keeping only those persons who also appear among the apple eaters.
SELECT COUNT (A.personID) FROM
(SELECT distinct PersonID FROM Person WHERE Eat = 'Carrot'
INTERSECT
SELECT distinct PersonID FROM Person WHERE Eat = 'Apple') as A

Find rows which have never satisfied a condition

Say I have a table of customers with three possible statuses: loan default, open loan, paid in full.
How can I find the customers who never defaulted?
Example: John and Alex had multiple loans with different statuses.
id | customer | status
----------------------
1 john default
1 john open
1 john paid
2 alex open
2 alex paid
John defaulted once and Alex never defaulted. A simple where status <> "default" attempt doesn't work because it incorrectly includes John's non-defaulted cases. The result should give me:
id | customer
-------------
2 alex
How can I find the customers who never defaulted?
You can use aggregation and having:
select id, customer
from t
group by id, customer
having sum(case when status = 'default' then 1 else 0 end) = 0;
The having clause counts the number of defaults for each customer and returns those customers with no defaults.
If you have a separate table of customers, I would recommend not exists:
select c.*
from customers c
where not exists (select 1
from t
where t.id = c.id and t.status = 'default'
);
Something like
select distinct `customer` from `customers`
where `customer` not in (
select `customer` from `customers` where `status` = 'default'
);
The ALL() operator with a correlated sub-query works here:
WITH cte AS (
SELECT * FROM (VALUES
(1, 'john', 'default'),
(1, 'john', 'open'),
(1, 'john', 'paid'),
(2, 'alex', 'open'),
(2, 'alex', 'paid')
) AS x(id, customer, status)
)
SELECT *
FROM cte AS a
WHERE 'default' <> ALL (
SELECT status
FROM cte AS b
WHERE a.id = b.id
);
If you want just user and/or id, do select distinct «your desired columns» instead of select *.

postgresql unnest and pivot int array column

I have below table
create table test(id serial, key int,type text,words text[],numbers int[] );
insert into test(key,type,words) select 1,'Name',array['Table'];
insert into test(key,type,numbers) select 1,'product_id',array[2];
insert into test(key,type,numbers) select 1,'price',array[40];
insert into test(key,type,numbers) select 1,'Region',array[23,59];
insert into test(key,type,words) select 2,'Name',array['Table1'];
insert into test(key,type,numbers) select 2,'product_id',array[1];
insert into test(key,type,numbers) select 2,'price',array[34];
insert into test(key,type,numbers) select 2,'Region',array[23,59,61];
insert into test(key,type,words) select 3,'Name',array['Chair'];
insert into test(key,type,numbers) select 3,'product_id',array[5];
I was using below query to pivot table for users.
select key,
max(array_to_string(words,',')) filter(where type='Name') as "Name",
cast(max(array_to_string(numbers,',')) filter(where type='product_id') as int) as "product_id",
cast(max(array_to_string(numbers,',')) filter(where type='price') as int) as "price" ,
max(array_to_string(numbers,',')) filter(where type='Region') as "Region"
from test group by key
But I couldn't unnest the Region column during the pivot in order to use it to join with another table.
My expected output is below
Since we are using unnest("Region") to do the pivot, there must be a row with the region data for each product.
Alternatively, the code below will do the trick by substituting an array containing a single null:
unnest(CASE WHEN array_length("Region", 1) >= 1
THEN "Region"
ELSE '{null}'::int[] END)
Schema:
create table test(id serial, key int,type text,words text[],numbers int[] );
insert into test(key,type,words) select 1,'Name',array['Table'];
insert into test(key,type,numbers) select 1,'product_id',array[2];
insert into test(key,type,numbers) select 1,'price',array[40];
insert into test(key,type,numbers) select 1,'Region',array[23,59];
insert into test(key,type,words) select 2,'Name',array['Table1'];
insert into test(key,type,numbers) select 2,'product_id',array[1];
insert into test(key,type,numbers) select 2,'price',array[34];
insert into test(key,type,numbers) select 2,'Region',array[23,59,61];
insert into test(key,type,words) select 3,'Name',array['Chair'];
insert into test(key,type,numbers) select 3,'product_id',array[5];
select key,"Name",product_id,price,unnest(CASE WHEN array_length("Region", 1) >= 1
THEN "Region"
ELSE '{null}'::int[] END) from
(
select key,
max(array_to_string(words,',')) filter(where type='Name') as "Name",
cast(max(array_to_string(numbers,',')) filter(where type='product_id') as int) as "product_id",
cast(max(array_to_string(numbers,',')) filter(where type='price') as int) as "price" ,
max(numbers) filter(where type='Region') as "Region"
from test group by key
)t order by key
key  Name    product_id  price  unnest
1    Table   2           40     23
1    Table   2           40     59
2    Table1  1           34     23
2    Table1  1           34     59
2    Table1  1           34     61
3    Chair   5           null   null
Very strange database design... I'm assuming you inherited it?
If none of the other array values will ever have a cardinality > 1, you can simply unnest:
select
key,
(max (words) filter (where type = 'Name'))[1] as name,
(max (numbers) filter (where type = 'product_id'))[1] as product_id,
(max (numbers) filter (where type = 'price'))[1] as price,
unnest (max (numbers) filter (where type = 'Region')) as region
from test
group by key
If they can have multiple values, that can also be handled.
-- EDIT 3/15/2021 --
Short version: an unnest against a null won't produce a row, so if you coalesce the null value into an array of a single null element, that should take care of this part:
select
key,
(max (words) filter (where type = 'Name'))[1] as name,
(max (numbers) filter (where type = 'product_id'))[1] as product_id,
(max (numbers) filter (where type = 'price'))[1] as price,
unnest (coalesce (max (numbers) filter (where type = 'Region'), array[null]::integer[])) as region
from test
group by key
order by key
Now for the part you didn't ask... I and at least one other have been gently nudging you that your database design is going to cause multiple problems at every turn. The fact that it's in production doesn't mean you shouldn't fix it as soon as you can.
This design is what's known as EAV (Entity-Attribute-Value). It has its use cases, but like most good things it can also be applied when it shouldn't. The use case that comes to mind is if you want users to be able to dynamically add attributes to certain objects. Even then, there might be better/easier ways.
And as one example, if you have one million objects, five attributes means you have to store that as five million rows, and the majority of that space will be occupied with repeating the key and attribute names.
Just food for thought. We can continue to triage this with every new scenario you find, but it would be better to redo the design.
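As a rough illustration of what a redesign could look like (table and column names here are my assumptions, not from your schema), the same data fits in two plain tables:
create table products (
    key        int primary key,
    name       text,
    product_id int,
    price      int
);
create table product_regions (
    key    int references products (key),
    region int
);
The region values then live one per row in product_regions and can be joined directly, with no unnest needed.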

MySQL inserting multiple records with a select

I have a "dictionary table" called car_status that has an 'id' column and a 'status' column.
car_status
id status
1 broken
2 fixed
3 working
4 fast
5 slow
I have a table called cars with the columns: id, type, status_id
cars
id type status_id
1 jeep 1
2 ford 3
3 acura 4
I am hoping to insert multiple records into cars and give them all the status associated with "working". What is the best/easiest way? I know I can query and find the status_id that is "working", and then do an insert query, but is there any way to do it with one insert query using a join or select?
I tried stuff like:
INSERT INTO cars (type, status_id)
VALUES
('GM',status_id),
('Toyota',status_id),
('Honda',status_id)
SELECT id as status_id
FROM car_status
WHERE status = "working"
Thanks!
DECLARE temp_status_id INT;
SELECT id
INTO temp_status_id
FROM car_status
WHERE status = 'working';
INSERT INTO cars (type, status_id)
VALUES ('GM', temp_status_id),
       ('Toyota', temp_status_id),
       ('Honda', temp_status_id);
(In MySQL, DECLARE and SELECT ... INTO a local variable only work inside a stored program; at the top level you would use a @session variable instead.)
This is MS SQL, but I think you can do something like this:
DECLARE @status_id int
SELECT @status_id = id FROM car_status WHERE status = 'working'
INSERT INTO cars (type, status_id)
SELECT 'GM', @status_id
UNION
SELECT 'Toyota', @status_id
UNION...
Is there some reason you want to do it in a single statement? Personally, I'd just write an insert statement for each row. I guess you could also create a temp table with all the new types and then join that with the status id.
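For the single-statement version the question asks about, a cross join against the looked-up status also works; this is a sketch assuming the car_status table shown above:
INSERT INTO cars (type, status_id)
SELECT t.type, cs.id
FROM (SELECT 'GM' AS type
      UNION ALL SELECT 'Toyota'
      UNION ALL SELECT 'Honda') AS t
CROSS JOIN car_status AS cs
WHERE cs.status = 'working';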