I need to fetch data from multiple tables, but with some specific scenarios. Here are test scripts to re-create the problem:
create table sub_test (sub_id number);
create table sub_svc_test (sub_id number, sub_svc_id number);
create table sub_svc_parm_test (sub_svc_id number, parm_id number, val varchar2(20) );
insert into sub_test values (100);
insert into sub_test values (101);
insert into sub_test values (102);
insert into sub_svc_test values (100,1001);
insert into sub_svc_test values (100,1002);
insert into sub_svc_test values (101,1005);
insert into sub_svc_test values (101,1006);
insert into sub_svc_test values (101,1007);
insert into sub_svc_test values (102,1009);
insert into sub_svc_test values (102,1010);
insert into sub_svc_parm_test values (1001, 51, 'test_id');
insert into sub_svc_parm_test values (1001, 53, 'no');
insert into sub_svc_parm_test values (1002, 54, 'max');
insert into sub_svc_parm_test values (1005, 51, 'test_id');
insert into sub_svc_parm_test values (1007, 51, 'test_id');
insert into sub_svc_parm_test values (1007, 54, 'min');
I need to fetch the VAL column from the sub_svc_parm_test table for a particular parm_id, like this:
select * from sub_svc_test ss, sub_svc_parm_test ssp
where ss.sub_svc_id = ssp.sub_svc_id and parm_id = 51;
This query gives me the VAL for parm_id 51. Now I need to create a view which will show me the VAL for parm_id 51 and 54, but in different columns, like:
select ssp.val, ssp1.val
from sub_svc_test ss, sub_svc_parm_test ssp, sub_svc_test ss1,
sub_svc_parm_test ssp1
where ss.sub_svc_id = ssp.sub_svc_id
and ssp.parm_id = 51
and ssp1.parm_id = 54
and ss1.sub_svc_id = ssp1.sub_svc_id;
This query gives me output, but it also performs a cross join, because I have not joined sub_svc_test ss and sub_svc_test ss1, so it returns 6 rows (2*3).
The requirement is that it should return as many rows as the column with the most values (in our case the first column, 3 rows), and the remaining rows that have no data can contain any string or simply NULL, like:
VAL           VAL_1
------------- -------------
test_id       max
test_id       min
test_id       null
I am using:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
Please ask for any clarification.
Thanks
So you actually want to perform two separate selects and display them next to each other.
Maybe something like this could work:
select val, val1 from (
(select rownum as r, val as val from sub_svc_parm_test ssp1 where parm_id = 51)
full outer join
(select rownum as r1, val as val1 from sub_svc_parm_test ssp2 where parm_id = 54)
on r = r1
)
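Note that ROWNUM in those inline views is assigned in whatever order the rows happen to be returned, so the pairing of the two columns is not guaranteed to be stable. A minimal sketch that makes the ordering explicit with ROW_NUMBER() (ordering by sub_svc_id is only an assumption; use whatever order you actually need):
-- pair the parm_id 51 values with the parm_id 54 values row by row
select v51.val as val, v54.val as val_1
from (select val, row_number() over (order by sub_svc_id) as rn
      from sub_svc_parm_test
      where parm_id = 51) v51
full outer join
     (select val, row_number() over (order by sub_svc_id) as rn
      from sub_svc_parm_test
      where parm_id = 54) v54
on v51.rn = v54.rn;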
I have 2 tables:
CREATE TABLE remdesivir_inventory
(
hospital_id int,
stock int,
state varchar(2)
);
CREATE TABLE remdesivir_requests
(
patient_id int,
prescribed_qty int,
state varchar(2)
);
I want to write a SQL statement that inserts rows into the remdesivir_assignments table, subject to these rules:
Every patient whose request can be fulfilled (until the stock runs out) will have a representative row in
the remdesivir_assignments table.
Each patient can be assigned to only 1 hospital (i.e. requests cannot be split)
The 'state' of the patient and the hospital must match
CREATE TABLE remdesivir_assignments
(
patient_id int,
hospital_id int
);
Example:
INSERT INTO remdesivir_inventory VALUES (1, 200, 'CA');
INSERT INTO remdesivir_inventory VALUES (2, 100, 'FL');
INSERT INTO remdesivir_inventory VALUES (3, 500, 'TX');
INSERT INTO remdesivir_requests VALUES (10, 100, 'CA');
INSERT INTO remdesivir_requests VALUES (20, 200, 'FL');
INSERT INTO remdesivir_requests VALUES (30, 300, 'TX');
INSERT INTO remdesivir_requests VALUES (40, 100, 'AL');
INSERT INTO remdesivir_requests VALUES (50, 200, 'CA');
In this scenario, the following rows will be inserted into the remdesivir_assignments table:
(10, 1)
(30, 3)
You can use a cumulative sum and join:
select rr.*, ri.hospital_id
from (select rr.*,
sum(prescribed_qty) over (partition by state order by patient_id) as running_pq
from remdesivir_requests rr
) rr join
remdesivir_inventory ri
on ri.state = rr.state and
rr.running_pq <= ri.stock
Here is a db<>fiddle.
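Since the final goal is to fill remdesivir_assignments, a hedged sketch of wrapping that query in the INSERT (column names taken from the table definitions above):
insert into remdesivir_assignments (patient_id, hospital_id)
select rr.patient_id, ri.hospital_id
from (select rr.*,
             -- running total of requested quantity per state
             sum(prescribed_qty) over (partition by state order by patient_id) as running_pq
      from remdesivir_requests rr
     ) rr
join remdesivir_inventory ri
  on ri.state = rr.state
 and rr.running_pq <= ri.stock;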
Please help with a SQL query. I've got a table:
CREATE TABLE PCDEVUSER.tabletest
(
id INT PRIMARY KEY NOT NULL,
name VARCHAR2(64),
pattern INT DEFAULT 1 NOT NULL,
tempval INT
);
Let's pretend it was filled with values:
INSERT INTO TABLETEST (ID, NAME, PATTERN, TEMPVAL) VALUES (1, 'A', 1, 10);
INSERT INTO TABLETEST (ID, NAME, PATTERN, TEMPVAL) VALUES (2, 'A', 1, 20);
INSERT INTO TABLETEST (ID, NAME, PATTERN, TEMPVAL) VALUES (3, 'A', 2, 10);
INSERT INTO TABLETEST (ID, NAME, PATTERN, TEMPVAL) VALUES (5, 'A', 2, 20);
INSERT INTO TABLETEST (ID, NAME, PATTERN, TEMPVAL) VALUES (4, 'A', 2, 30);
And I need to update all records that do NOT have the max TEMPVAL within their PATTERN group. So as a result I have to update the records with IDs (1, 3, 5). The records with IDs (2, 4) have the max values in their PATTERN group.
HELP PLZ
This select statement will help you get the IDs you need:
SELECT
*
FROM
(SELECT
id
,name
,pattern
,tempval
,MAX(tempval) OVER (PARTITION BY pattern) max_tempval
FROM
tabletest
)
WHERE 1=1
AND tempval != max_tempval
;
You should be able to build an update statement around that easily enough.
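For example, a hedged sketch of such an update (setting TEMPVAL to 0 is purely an assumption, since the question does not say what the new value should be):
update tabletest
set tempval = 0  -- illustrative value only
where id in (select id
             from (select id,
                          tempval,
                          max(tempval) over (partition by pattern) as max_tempval
                   from tabletest)
             where tempval != max_tempval);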
Something like this:
update tabletest t
set ????
where t.tempval < (select max(tempval) from tabletest tt where tt.pattern = t.pattern);
It is unclear what values you want to set. The ???? is for the code that sets the values.
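Purely as an illustration (assuming, say, that the non-max rows should have TEMPVAL cleared), the ???? could be filled in like this:
update tabletest t
set t.tempval = null  -- assumption: replace with whatever the business rule requires
where t.tempval < (select max(tempval) from tabletest tt where tt.pattern = t.pattern);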
I have a requirement with some business rules to implement in SQL (within a PL/SQL block): I need to evaluate these rules and, according to the result, perform the corresponding update, delete, or insert on a target table.
My database model contains a "staging" and a "real" table. The real table stores records inserted in the past and the staging one contains "fresh" data coming from somewhere that needs to be merged into the real one.
Basically these are my business rules:
Delta between staging MINUS real --> insert those rows into the real table.
Delta between real MINUS staging --> delete those rows from the real table.
Rows whose PK is the same but any other field differs: update.
(Those MINUS operations compare ALL the fields for equality and distinguish the 3rd case.)
I haven't figured out a way to accomplish these tasks with a MERGE statement without the rules overlapping. Any suggestion for the merge structure? Is it possible to do it all within the same MERGE?
Thank you!
If I understand your task correctly, the following code should do the job:
--drop table real;
--drop table stag;
create table real (
id NUMBER,
col1 NUMBER,
col2 VARCHAR(10)
);
create table stag (
id NUMBER,
col1 NUMBER,
col2 VARCHAR(10)
);
insert into real values (1, 1, 'a');
insert into real values (2, 2, 'b');
insert into real values (3, 3, 'c');
insert into real values (4, 4, 'd');
insert into real values (5, 5, 'e');
insert into real values (6, 6, 'f');
insert into real values (7, 6, 'g'); -- PK the same but at least one column different
insert into real values (8, 7, 'h'); -- PK the same but at least one column different
insert into real values (9, 9, 'i');
insert into real values (10, 10, 'j'); -- in real but not in stag
insert into stag values (1, 1, 'a');
insert into stag values (2, 2, 'b');
insert into stag values (3, 3, 'c');
insert into stag values (4, 4, 'd');
insert into stag values (5, 5, 'e');
insert into stag values (6, 6, 'f');
insert into stag values (7, 7, 'g'); -- PK the same but at least one column different
insert into stag values (8, 8, 'g'); -- PK the same but at least one column different
insert into stag values (9, 9, 'i');
insert into stag values (11, 11, 'k'); -- in stag but not in real
merge into real
using (WITH w_to_change AS (
         -- rows in stag with no identical row in real: new or changed, action 'I'
         select *
         from (select stag.*, 'I' as action from stag
               minus
               select real.*, 'I' as action from real
              )
         union
         -- rows in real with no identical row in stag: changed or deleted, action 'D'
              (select real.*, 'D' as action from real
               minus
               select stag.*, 'D' as action from stag
              )
       )
     , w_group AS (
         -- when an id appears with both actions (a changed row), 'I' wins because 'I' > 'D'
         select id, max(action) as max_action
         from w_to_change
         group by id
       )
       select w_to_change.*
       from w_to_change
       join w_group
         on w_to_change.id = w_group.id
        and w_to_change.action = w_group.max_action
      ) tmp
on (real.id = tmp.id)
when matched then
  -- changed rows get updated; rows that only exist in real are then removed
  update set real.col1 = tmp.col1, real.col2 = tmp.col2
  delete where tmp.action = 'D'
when not matched then
  -- rows that only exist in stag are inserted
  insert (id, col1, col2) values (tmp.id, tmp.col1, tmp.col2);
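Once the MERGE has run (and been committed), a quick sanity check is a symmetric MINUS; both of these queries should return no rows:
select * from stag
minus
select * from real;

select * from real
minus
select * from stag;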
I have 3 tables with the following schema:
create table main (
main_id int PRIMARY KEY,
secondary_id int NOT NULL
);
create table secondary (
secondary_id int NOT NULL,
tags varchar(100)
);
create table bad_words (
words varchar(100) NOT NULL
);
insert into main values (1, 1001);
insert into main values (2, 1002);
insert into main values (3, 1003);
insert into main values (4, 1004);
insert into secondary values (1001, 'good word');
insert into secondary values (1002, 'bad word');
insert into secondary values (1002, 'good word');
insert into secondary values (1002, 'other word');
insert into secondary values (1003, 'ugly');
insert into secondary values (1003, 'bad word');
insert into secondary values (1004, 'pleasant');
insert into secondary values (1004, 'nice');
insert into bad_words values ('bad word');
insert into bad_words values ('ugly');
insert into bad_words values ('worst');
Expected output
----------------
1, 1001, good word, 0   (the last column is a boolean flag indicating whether the tags contain any one of the words from the bad_words table)
2, 1002, bad word,good word,other word, 1
3, 1003, ugly,bad word, 1
4, 1004, pleasant,nice, 0
I am trying to use CASE to select 1 or 0 for the last column and a join to combine the main and secondary tables, but I am getting confused and stuck. Can someone please help me with a query? These tables are stored in Redshift and I want a query compatible with Redshift.
You can use the above schema to try your query in SQL Fiddle.
EDIT: I have updated the schema and expected output by removing the PRIMARY KEY on the secondary table so that it is easier to join with the bad_words table.
You can use EXISTS and a regex comparison with \m and \M (markers for beginning and end of a word, respectively):
with
main(main_id, secondary_id) as (values (1, 1000), (2, 1001), (3, 1002), (4, 1003)),
secondary(secondary_id, tags) as (values (1000, 'very good words'), (1001, 'good and bad words'), (1002, 'ugly'),(1003, 'pleasant')),
bad_words(words) as (values ('bad'), ('ugly'), ('worst'))
select *, exists (select 1 from bad_words where s.tags ~* ('\m'||words||'\M'))::int as flag
from main m
join secondary s using (secondary_id)
select main_id, a.secondary_id, tags, case when c.words is not null then 1 else 0 end
from main a
join secondary b on b.secondary_id = a.secondary_id
left outer join bad_words c on c.words like b.tags
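If you also need the tags collapsed into a single row per main_id, as in the expected output, here is a hedged extension of that join (the comma delimiter and the has_bad_word alias are illustrative choices):
select a.main_id,
       a.secondary_id,
       listagg(b.tags, ',') within group (order by b.tags) as tags,
       max(case when c.words is not null then 1 else 0 end) as has_bad_word
from main a
join secondary b on b.secondary_id = a.secondary_id
left outer join bad_words c on b.tags like '%' || c.words || '%'
group by a.main_id, a.secondary_id;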
SELECT m.main_id, m.secondary_id, t.tags, t.is_bad_word
FROM srini.main m
JOIN (
SELECT st.secondary_id, st.tags, exists (select 1 from srini.bad_words b where st.tags like '%'+b.words+'%') is_bad_word
FROM
( SELECT secondary_id, LISTAGG(tags, ',') as tags
FROM srini.secondary
GROUP BY secondary_id ) st
) t on t.secondary_id = m.secondary_id;
This worked for me in Redshift and produced the following output with the above-mentioned schema:
1 1001 good word false
3 1003 ugly,bad word true
2 1002 good word,other word,bad word true
4 1004 pleasant,nice false
I have a schema:
http://sqlfiddle.com/#!4/e9917/1
CREATE TABLE test_table (
id NUMBER,
period NUMBER,
amount NUMBER
);
INSERT INTO test_table VALUES (1000, 1, 100);
INSERT INTO test_table VALUES (1000, 1, 500);
INSERT INTO test_table VALUES (1001, 1, 200);
INSERT INTO test_table VALUES (1001, 2, 300);
INSERT INTO test_table VALUES (1002, 1, 900);
INSERT INTO test_table VALUES (1002, 1, 250);
I want to update the amount field by adding up the amounts of the records which have the same (id, period) pair, so that after the operation it looks like this:
ID    PERIOD  AMOUNT
----  ------  ------
1000  1       600
1001  1       200
1001  2       300
1002  1       1150
I couldn't figure out how :(
EDIT:
In the actual case this table is populated by insert operations from 2 other tables:
CREATE TABLE some_table1(
id NUMBER,
period NUMBER,
amount NUMBER
);
INSERT INTO some_table1 VALUES (1000, 1, 100);
INSERT INTO some_table1 VALUES (1000, 1, 500);
INSERT INTO some_table1 VALUES (1001, 1, 200);
INSERT INTO some_table1 VALUES (1001, 2, 300);
INSERT INTO some_table1 VALUES (1002, 1, 900);
INSERT INTO some_table1 VALUES (1002, 1, 250);
CREATE TABLE some_table2(
id NUMBER,
period NUMBER,
amount NUMBER
);
INSERT INTO some_table2 VALUES (1000, 1, 30);
INSERT INTO some_table2 VALUES (1000, 1, 20);
INSERT INTO some_table2 VALUES (1001, 1, 15);
INSERT INTO some_table2 VALUES (1001, 2, 20);
INSERT INTO some_table2 VALUES (1002, 1, 50);
INSERT INTO some_table2 VALUES (1002, 1, 60);
Duplicates occur when the two insertions are done:
INSERT INTO TEST_TABLE (id, period, amount) SELECT id, period, amount FROM some_table1;
INSERT INTO TEST_TABLE (id, period, amount) SELECT id, period, amount FROM some_table2;
new sqlfiddle link: http://sqlfiddle.com/#!4/cd45b/1
Maybe it can be solved during the insertion from the two tables.
A script like this would do what you want:
CREATE TABLE test_table_summary (
id NUMBER,
period NUMBER,
amount NUMBER
);
INSERT INTO test_table_summary (id, period, amount)
SELECT id, period, SUM(amount) AS total_amount FROM test_table
GROUP BY id, period;
DELETE FROM test_table;
INSERT INTO test_table (id, period, amount)
SELECT id, period, total_amount FROM test_table_summary;
DROP TABLE test_table_summary;
But you should actually decide whether test_table is to hold a primary key and the total amounts, or all the detail data. It's not a good solution to use one table for both.
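Given your edit, another option is to avoid creating the duplicates in the first place and aggregate while loading; a sketch, assuming test_table is empty before the load:
-- load both source tables in one pass, summing per (id, period)
INSERT INTO test_table (id, period, amount)
SELECT id, period, SUM(amount)
FROM (SELECT id, period, amount FROM some_table1
      UNION ALL
      SELECT id, period, amount FROM some_table2)
GROUP BY id, period;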
Based on what you have added, I'd say you can use the Oracle MERGE INTO statement:
MERGE INTO test_table t
USING (SELECT id, period, SUM(amount) AS amount  -- aggregate first: the source itself has duplicate (id, period) rows
       FROM some_table1
       GROUP BY id, period) s
ON (t.id=s.id AND t.period=s.period)
WHEN MATCHED THEN UPDATE SET t.amount=t.amount+s.amount
WHEN NOT MATCHED THEN INSERT (t.id, t.period, t.amount)
VALUES (s.id, s.period, s.amount);
Beware though: this will work only if test_table has no duplicate (id, period) rows to begin with. So if your table is already messed up, you still have to reinitialize it properly a first time (and maybe add a unique (id, period) key to avoid problems in the future).
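For completeness, a hedged sketch of that clean-up: deduplicate once (for example with the summary-table script above), add the unique key, and load the second source table with the same MERGE pattern. The constraint name is just illustrative.
-- One-time protection against future duplicates (constraint name is illustrative):
ALTER TABLE test_table ADD CONSTRAINT test_table_id_period_uk UNIQUE (id, period);

-- The second source table is loaded with the same MERGE pattern:
MERGE INTO test_table t
USING (SELECT id, period, SUM(amount) AS amount
       FROM some_table2
       GROUP BY id, period) s
ON (t.id = s.id AND t.period = s.period)
WHEN MATCHED THEN UPDATE SET t.amount = t.amount + s.amount
WHEN NOT MATCHED THEN INSERT (t.id, t.period, t.amount)
VALUES (s.id, s.period, s.amount);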