SQL SELECT column value based on value - sql

I am using Postgresql. I would like to write a SELECT statement with a column value based on the value in the database.
For example.
| id | indicator |
| 1 | 0 |
| 2 | 1 |
indicator can be 0 or 1 only where 0 = manual and 1 = auto.
Expected output from a SELECT *
1 manual
2 auto

You can use a case expression:
select id, case indicator
when 0 then 'manual'
when 1 then 'auto'
end as indicator
from the_table;
If you need that frequently you could create a view for that.
In the long run, it might be better to create a proper lookup table and join to that:
create table indicator
(
id integer primary key,
name text not null
);
insert into indicator (id, name)
values (0, 'manual'), (1, 'auto');
alter table the_table
add constraint fk_indicator
foreign key (indicator) references indicator (id);
Then join to it:
select t.id, i.name as indicator
from the_table t
join indicator i on i.id = t.indicator;
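As a quick sanity check, here is the case-expression approach run against an in-memory SQLite database (the CASE syntax is the same in PostgreSQL); the table and column names follow the question:

```python
import sqlite3

# In-memory database standing in for the PostgreSQL table in the question.
conn = sqlite3.connect(":memory:")
conn.execute("create table the_table (id integer, indicator integer)")
conn.executemany("insert into the_table values (?, ?)", [(1, 0), (2, 1)])

# CASE maps the numeric flag to its label, exactly as in the answer above.
rows = conn.execute("""
    select id,
           case indicator when 0 then 'manual' when 1 then 'auto' end as indicator
    from the_table
    order by id
""").fetchall()
print(rows)  # [(1, 'manual'), (2, 'auto')]
```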

Related

SQLite: How to update rows with a sequence of numbers?

In SQLite I would like to renumber the values in a specific column with a sequence of numbers.
For example the relevance-column in these rows:
relevance | value
-------------------
3 | value1
5 | valueb
8 | valuex
9 | valueaa
must be updated starting from 1 with increment 1:
relevance | value
-------------------
1 | value1
2 | valueb
3 | valuex
4 | valueaa
What I'm looking for, is something like this:
-- first set all to startvalue
UPDATE MyTable SET relevance = 0;
-- then renumber:
UPDATE MyTable SET relevance = (some function to increase by 1 to the previous row);
I tried this, but it's not increasing; it seems Max is not evaluated for each row:
UPDATE MyTable SET relevance = (SELECT Max(relevance ))+1;
First create a temporary table holding the relevance column from your table alongside a new sequence generated with the ROW_NUMBER() window function, then update from this temporary table:
drop table if exists temp.tmp;
create temporary table tmp as
select relevance, row_number() over (order by relevance) rn
from MyTable;
update MyTable
set relevance = (
select rn from temp.tmp
where temp.tmp.relevance = MyTable.relevance
);
drop table temp.tmp;
See the demo.
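The temporary-table renumbering can be run end-to-end through Python's sqlite3 module (window functions require SQLite 3.25 or newer; note the approach relies on the old relevance values being unique, since they are the correlation key):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table MyTable (relevance integer, value text)")
conn.executemany("insert into MyTable values (?, ?)",
                 [(3, "value1"), (5, "valueb"), (8, "valuex"), (9, "valueaa")])

# Build the old-value -> new-sequence mapping in a temporary table,
# then update from it (same plan as the answer above).
conn.executescript("""
    drop table if exists temp.tmp;
    create temporary table tmp as
        select relevance, row_number() over (order by relevance) rn
        from MyTable;
    update MyTable
    set relevance = (select rn from temp.tmp
                     where temp.tmp.relevance = MyTable.relevance);
    drop table temp.tmp;
""")
result = conn.execute("select relevance, value from MyTable order by relevance").fetchall()
print(result)  # [(1, 'value1'), (2, 'valueb'), (3, 'valuex'), (4, 'valueaa')]
```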

Make sure no two rows contain identical values in Postgresql

I have a table and I want to make sure that no two rows can be alike.
So, for example, this table would be valid:
user_id | printer
---------+-------------
1 | LaserWriter
4 | LaserWriter
1 | ThinkJet
2 | DeskJet
But this table would not be:
user_id | printer
---------+-------------
1 | LaserWriter
4 | LaserWriter
1 | ThinkJet <--error (duplicate row)
2 | DeskJet
1 | ThinkJet <--error (duplicate row)
This is because the last table has two instances of 1 | ThinkJet.
So, user_id can be repeated (i.e. 1) and printer can be repeated (i.e. LaserWriter) but once a record like 1 | ThinkJet is entered once that combination cannot be entered again.
How can I prevent such occurrences in a Postgresql 11.5 table?
I would try experimenting with SQL code, but alas I am still new to the subject.
Please note this is for INSERTing data into the table, not SELECTing it. Like a constraint iirc.
Thanks
Here's your script
ALTER TABLE tableA ADD CONSTRAINT some_constraint PRIMARY KEY(user_id,printer);
INSERT INTO tableA(user_id, printer)
VALUES
(
1,
'LaserWriter'
)
ON CONFLICT (user_id, printer)
DO NOTHING;
You can use DISTINCT to remove duplicates from query results. Note that DISTINCT applies to the whole select list, not a single column:
SELECT DISTINCT user_id, printer FROM my_table;
Keep in mind this only de-duplicates the output of a SELECT; it does not prevent duplicate rows from being inserted. For that you need a unique constraint.
That's all. Hope it helps!
You need a series of steps (assuming there is no already assigned unique key).
Add a temporary column to make each row unique.
Assign a value to the new column.
Remove the already existing duplicates.
Create a Unique or Primary Key on the composite columns.
Remove the temporary column.
alter table your_table add temp_unique integer unique;
do $$
declare
row_num integer := 1;
c_assign cursor for
select temp_unique
from your_table
for update;
begin
for rec in c_assign
loop
update your_table
set temp_unique = row_num
where current of c_assign;
row_num = row_num + 1;
end loop;
end;
$$;
delete from your_table ytd
where exists ( select 1
from your_table ytk
where ytd.user_id = ytk.user_id
and ytd.printer = ytk.printer
and ytd.temp_unique > ytk.temp_unique
) ;
alter table your_table add constraint id_prt_uk unique (user_id, printer);
alter table your_table drop temp_unique;
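The same keep-the-first, delete-the-rest idea can be sketched in SQLite, where the implicit rowid plays the role of the temporary unique column (in PostgreSQL you would use the added temp_unique column or ctid instead):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table your_table (user_id int, printer text)")
conn.executemany("insert into your_table values (?, ?)",
                 [(1, "LaserWriter"), (4, "LaserWriter"), (1, "ThinkJet"),
                  (2, "DeskJet"), (1, "ThinkJet")])        # last row duplicates row 3

# Delete every row for which an earlier row (smaller rowid) has the same pair.
conn.execute("""
    delete from your_table
    where exists (select 1 from your_table ytk
                  where ytk.user_id = your_table.user_id
                    and ytk.printer = your_table.printer
                    and ytk.rowid < your_table.rowid)
""")
remaining = conn.execute("select count(*) from your_table").fetchone()[0]
print(remaining)  # 4
```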
I found the answer. When creating the table I needed to specify the two columns as UNIQUE. Observe:
CREATE TABLE foo (user_id INT, printer VARCHAR(20), UNIQUE (user_id, printer));
Now, here are my results:
=# INSERT INTO foo VALUES (1, 'LaserWriter');
INSERT 0 1
=# INSERT INTO foo VALUES (4, 'LaserWriter');
INSERT 0 1
=# INSERT INTO foo VALUES (1, 'ThinkJet');
INSERT 0 1
=# INSERT INTO foo VALUES (2, 'DeskJet');
INSERT 0 1
=# INSERT INTO foo VALUES (1, 'ThinkJet');
ERROR: duplicate key value violates unique constraint "foo_user_id_printer_key"
DETAIL: Key (user_id, printer)=(1, ThinkJet) already exists.
=# SELECT * FROM foo;
user_id | printer
---------+-------------
1 | LaserWriter
4 | LaserWriter
1 | ThinkJet
2 | DeskJet
(4 rows)
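The composite UNIQUE constraint, combined with the ON CONFLICT DO NOTHING idea from the other answer, can be demonstrated with SQLite (the syntax shown is the same in PostgreSQL; SQLite needs version 3.24+ for the upsert clause):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Composite UNIQUE constraint across both columns, as in the accepted answer.
conn.execute("create table foo (user_id int, printer text, unique (user_id, printer))")

rows = [(1, "LaserWriter"), (4, "LaserWriter"), (1, "ThinkJet"),
        (2, "DeskJet"), (1, "ThinkJet")]          # last row is a duplicate
for r in rows:
    # ON CONFLICT DO NOTHING silently skips the duplicate instead of raising.
    conn.execute("insert into foo values (?, ?) on conflict (user_id, printer) do nothing", r)

n = conn.execute("select count(*) from foo").fetchone()[0]
print(n)  # 4
```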

Create primary key with two columns

I have two tables, bank_data and sec_data. Table bank_data has the columns id, date, asset, and liability. The date column is divided into quarters.
id | date | asset | liability
--------+----------+--------------------
1 | 6/30/2001| 333860 | 308524
1 | 3/31/2001| 336896 | 311865
1 | 9/30/2001| 349343 | 308524
1 |12/31/2002| 353863 | 322659
2 | 6/30/2001| 451297 | 425156
2 | 3/31/2001| 411421 | 391846
2 | 9/30/2001| 430178 | 41356
2 |12/31/2002| 481687 | 46589
3 | 6/30/2001| 106506 | 104532
3 | 3/31/2001| 104196 | 102983
3 | 9/30/2001| 106383 | 104865
3 |12/31/2002| 107654 | 105867
Table sec_data has columns of id, date, and security. I combined the two tables into a new table named new_table in R using this code:
dbGetQuery(con, "CREATE TABLE new_table
AS (SELECT sec_data.id,
bank_data.date,
bank_data.asset,
bank_data.liability,
sec_data.security
FROM bank_data,bank_sec
WHERE (bank_data.id = sec_data.id) AND
(bank_data.date = sec_data.date)")
I would like to set two primary keys (id and date) in this R code without using pgAdmin. I want to use something like Constraint bankkey Primary Key (id, date) but the AS and SELECT functions are throwing me off.
First, your query is wrong: you say table sec_data but you reference bank_sec. I have rephrased your query:
CREATE TABLE new_table AS
SELECT
sec_data.id,
bank_data.date,
bank_data.asset,
bank_data.liability,
sec_data.security
FROM bank_data
INNER JOIN sec_data on bank_data.id = sec_data.id
and bank_data.date = sec_data.date
Avoid the implicit join and use an explicit JOIN instead. And, as stated by @a_horse_with_no_name, you can't define more than one primary key on a table, so what you need is a composite primary key.
Definition:
a combination of two or more columns in a table that can be used to
uniquely identify each row in the table
So you need to use ALTER TABLE, because your CREATE statement is based on another table:
ALTER TABLE new_table
ADD PRIMARY KEY (id, date);
You may run these as two separate statements (CREATE TABLE and INSERT INTO):
CREATE TABLE new_table (
id int, date date, asset int, liability int, security int,
CONSTRAINT bankkey PRIMARY KEY (id, date)
) ;
INSERT INTO new_table (id,date,asset,liability,security)
SELECT s.id,
b.date,
b.asset,
b.liability,
s.security
FROM bank_data b JOIN sec_data s
ON b.id = s.id AND b.date = s.date;
Demo
To create the primary key you desire, run the following SQL statement after your CREATE TABLE ... AS statement:
ALTER TABLE new_table
ADD CONSTRAINT bankkey PRIMARY KEY (id, date);
That has the advantage that the primary key index won't slow down the data insertion.
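The create-then-insert variant can be checked with SQLite (a few made-up rows stand in for the real bank_data and sec_data contents; the composite-key and INSERT ... SELECT syntax carries over to PostgreSQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table bank_data (id int, date text, asset int, liability int);
    create table sec_data  (id int, date text, security int);
    insert into bank_data values (1, '2001-06-30', 333860, 308524),
                                 (2, '2001-06-30', 451297, 425156);
    insert into sec_data  values (1, '2001-06-30', 10),
                                 (2, '2001-06-30', 20);

    -- Composite primary key declared up front, then populate with INSERT ... SELECT.
    create table new_table (
        id int, date text, asset int, liability int, security int,
        constraint bankkey primary key (id, date)
    );
    insert into new_table (id, date, asset, liability, security)
    select s.id, b.date, b.asset, b.liability, s.security
    from bank_data b join sec_data s on b.id = s.id and b.date = s.date;
""")
n = conn.execute("select count(*) from new_table").fetchone()[0]
print(n)  # 2
```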

Prevent B & A entry in SQL when A & B already exists

I have a table that contains two columns: AccountID and PartnerAccountID. I need to prevent duplicates across both columns. Meaning, if an entry exists:
| AccountID | PartnerAccountID |
| 1 | 2 |
I need to make sure that the following can't also exist:
| AccountID | PartnerAccountID |
| 2 | 1 |
Any way to do that in a constraint?
It would be nice if you could create a unique index on an expression:
create unique index unq_t_AccountID_PartnerAccountID
on t((case when AccountID < PartnerAccountID then AccountId else PartnerAccountID end),
(case when AccountID < PartnerAccountID then PartnerAccountID else AccountId end)
);
But you can do almost the same thing by creating the columns as computed columns and then creating the index:
alter table t add minid as (case when AccountID < PartnerAccountID then AccountId else PartnerAccountID end);
alter table t add maxid as (case when AccountID < PartnerAccountID then PartnerAccountID else AccountId end);
create unique index unq_t_minid_maxid on t(minid, maxid);
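Interestingly, SQLite does allow unique indexes on expressions, so the (min, max) normalisation can be tried there directly; min() and max() below are SQLite's two-argument scalar functions, not the aggregates (in SQL Server you would use the computed-column workaround above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table t (AccountID int, PartnerAccountID int)")
# Unique index on the ordered pair: (1,2) and (2,1) normalise to the same key.
conn.execute("""
    create unique index unq_t_pair on t(
        min(AccountID, PartnerAccountID),
        max(AccountID, PartnerAccountID))
""")
conn.execute("insert into t values (1, 2)")
try:
    conn.execute("insert into t values (2, 1)")   # same pair, reversed
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
print(duplicate_rejected)  # True
```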
Another approach could be to create an INSTEAD OF trigger which simply ignores such duplicates across the two columns; the advantage is that transactions do not need to be aborted when the information is already stored (just the other way round). Here is an attempt at how such a trigger could look:
CREATE TRIGGER tr_t ON t
INSTEAD OF INSERT
AS
BEGIN
INSERT t
SELECT AccountID, PartnerAccountID
FROM inserted
where not exists (select * from t where t.AccountID = inserted.PartnerAccountID and t.PartnerAccountID = inserted.AccountID);
END

Oracle SQL - return record only if colB is the same for all of colA

I have a table like the following ( there is of course other data in the table):
Col A Col B
1 Red
1 Red
2 Blue
2 Green
3 Black
I am trying to return a value for Col A only when ALL the Col B values match, otherwise return null.
This will be used as part of another SQL statement that passes the Col A value, i.e.
Select * from Table where Col A = 1
I need to return the value in Col B. The correct result for the table above would be Red, Black.
Any ideas?
how about this?
SQL Fiddle
Oracle 11g R2 Schema Setup:
create table t( id number, color varchar2(20));
insert into t values(1,'RED');
insert into t values(1,'RED');
insert into t values(2,'BLUE');
insert into t values(2,'GREEN');
insert into t values(3,'BLACK');
Query 1:
select color from t where id in (
select id
from t
group by id having min(color) = max(color) )
group by color
Results:
| COLOR |
|-------|
| RED |
| BLACK |
If you just want the values in A (rather than each row), then use group by:
select a
from table t
group by a
having min(b) = max(b);
Note: this ignores NULL values. If you want to treat them as an additional value, then add another condition:
select a
from table t
group by a
having min(b) = max(b) and count(*) = count(b);
It is also tempting to use count(distinct). In general, though, count(distinct) requires more processing effort than a min() and a max().
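The min(b) = max(b) grouping trick runs unchanged in SQLite, so it can be verified with the sample data from the question (column names a and b as in the answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table t (a int, b text)")
conn.executemany("insert into t values (?, ?)",
                 [(1, "Red"), (1, "Red"), (2, "Blue"), (2, "Green"), (3, "Black")])

# A group is "all the same" exactly when its smallest and largest b agree;
# count(*) = count(b) additionally rules out NULLs hiding in the group.
ids = conn.execute("""
    select a from t
    group by a
    having min(b) = max(b) and count(*) = count(b)
    order by a
""").fetchall()
print(ids)  # [(1,), (3,)]
```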
You can use a case expression:
select cola,
case when max(colb) = min(colb) and count(*) = count(colb) then max(colb)
end as colb
from tablename
group by cola
SQL Fiddle
Oracle 11g R2 Schema Setup:
create table t( id number, color varchar2(20));
insert into t values(1,'RED');
insert into t values(1,'RED');
insert into t values(2,'BLUE');
insert into t values(2,'GREEN');
insert into t values(3,'BLACK');
Query 1:
select id
from t
group by id having min(color) = max(color)
Results:
| ID |
|----|
| 1 |
| 3 |
Hope this is what you were looking for. :)