INSERTING VALUES INTO A NESTED TABLE - sql

I am trying to develop a university database using nested tables. I have successfully created all the other nested tables and inserted data into them, but while inserting data into the marks table I am getting an inconsistent-datatype error.
Code:
CREATE OR REPLACE TYPE MODULE_MARKS;

CREATE OR REPLACE TYPE MM_NT_TYPE AS TABLE OF REF MODULE_MARKS;

CREATE OR REPLACE TYPE MODULE_MARKS AS OBJECT
(
  MODULE         REF MODULE_T,
  MARKS_OBTAINED NUMBER,        -- attribute datatypes assumed; the original omitted them
  TOTAL_MARKS    NUMBER,
  STATUS         VARCHAR2(10)
);

CREATE TABLE MARK_TAB
(
  student       REF STUDENT_T,
  modules_marks MM_NT_TYPE
)
NESTED TABLE modules_marks STORE AS modules_marks_nt;  -- a STORE AS clause is required for a nested table column
I am able to insert the reference to the student correctly, but I also want to insert data into modules_marks.
I tried:
INSERT INTO MARK_TAB VALUES (
  (SELECT REF(s) FROM STUDENT_TAB s WHERE s.S_ID = 1),
  MM_NT_TYPE(
    MODULE_MARKS_T(
      (SELECT REF(m) FROM MODULE_TAB m WHERE m.MODULE_ID = 1),
      90, 100, 'PASS'
    )
  )
);
This shows the error ORA-00932: inconsistent datatypes: expected REF MODULE_MARKS_T got MODULE_MARKS_T.

This structure looks familiar; I have built something similar in one of my projects.
I think the confusion is about inserting a record into a column whose type is a table of REFs.
I have a COURSES_TABLE_TYPE, which is a table of REF COURSES_T, and a COURSE table, which is a table of COURSES_T.
I suggest you do the following:
INSERT INTO DEPARTMENT VALUES (
1,
COURSES_TABLE_TYPE(( -- REFs of single records delimited by comma
SELECT
REF(C)
FROM
COURSE C
WHERE
COURSE_ID = 1
),(
SELECT
REF(C)
FROM
COURSE C
WHERE
COURSE_ID = 2
))
);

MM_NT_TYPE is a collection of REF MODULE_MARKS, whereas you are passing MODULE_MARKS objects, not references. Instead, you need a table containing MODULE_MARKS objects that you can reference:
CREATE TABLE module_marks_tab OF module_marks;
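You would also need rows in that object table before you can take REFs to them. A minimal sketch, assuming MODULE_TAB is an object table of MODULE_T as in your question (the marks shown are illustrative):

```sql
-- Build a MODULE_MARKS object around a REF to the module and store it
-- in the object table, so it can be referenced later.
INSERT INTO module_marks_tab
SELECT MODULE_MARKS( REF(m), 90, 100, 'PASS' )
FROM   module_tab m
WHERE  m.module_id = 1;
```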
Then you can reference those objects. For example:
INSERT INTO mark_tab VALUES (
( SELECT REF(s) FROM students s WHERE id = 2 ),
MM_NT_TYPE(
( SELECT REF( m ) FROM module_marks_tab m WHERE m.module.id = 1 AND marks_obtained = 3 ),
( SELECT REF( m ) FROM module_marks_tab m WHERE m.module.id = 3 AND marks_obtained = 8 )
)
);
db<>fiddle


How to pass value as parameter to join tables in Oracle SQL

We have a situation where each project has its own separate table, and each table has unique column names.
For example, project AAA has a table A1_TABLE whose columns are named A1_APP, A1_DOCUMENT, A1_PAGES, and so on.
Similarly, project BBB has a table B1_TABLE with columns named B1_APP, B1_DOCUMENT, B1_PAGES.
I am trying to join the tables by passing the column-name prefix as a parameter, since it would be difficult to change the column names for each project.
Because the column names differ, I am not able to join the tables.
Kindly advise.
Note: the tables were created by a vendor; I am just trying to extract data for all studies, so it would be difficult for me to rename the columns one by one.
SQL script (my non-working attempt):
DECLARE
  V_IMG_DOC_ID INT := '12345';
  V_SHORT_DESC NVARCHAR2(100) := 'B18';
  v_sql VARCHAR2(5000);
BEGIN
  SELECT C.PROJECT "PROJECT", D.SUBJECT "SUBJECT_NO", D.SITE_NUMBER, E.IMAGE_ID, E.IMG_DOC_ID,
         F.DTYPE_DESC "DOCUMENT_TYPE", E.IMG_FILENAME, E.IMG_NAT_FILE_ORG "FILE_LOCATION"
  FROM APPLICATION A
  INNER JOIN B18_DOCUMENT B ON A.APP_ID = B.||V_SHORT_DESC||_APP_ID
  INNER JOIN PROJECT_IMAGE E ON E.IMG_DOC_ID = B.||V_SHORT_DESC||D_DOC_ID
  INNER JOIN SUBJECT D ON D.SJ_ID = B.||V_SHORT_DESC||D_SJ_ID
  INNER JOIN PROTOCOL C ON C.APP_ID = A.APP_ID
  INNER JOIN DOCUMENTTYPE F ON F.DT_APP_ID = A.APP_ID AND F.DT_ID = B.||V_SHORT_DESC||D_DT_ID
  WHERE E.IMG_DOC_ID = 5877630
  ORDER BY E.IMG_DOC_ID DESC;
END;
Refactor your code so that the projects are all in the same table and add a project_name column.
CREATE TABLE project_documents (
  project_name VARCHAR2(10),
  app_id       NUMBER,
  document     CLOB,
  pages        VARCHAR2(50)
  -- , ... etc.
);
If you want to restrict users to only seeing their own projects then you can use a virtual private database.
Then you do not need to use dynamic SQL to build queries with lots of different table names and can just use the one table for all projects and add a filter for the specific project using the added project_name column.
If you cannot do that then you are going to have to either:
use dynamic SQL to build the queries and dynamically set the table and column names each time you run the query; or
create a view of all the projects:
CREATE VIEW all_projects (project_name, app_id, document, pages /*, ... */) AS
SELECT 'A1', a1_app_id, a1_document, a1_pages /*, ... */ FROM a1_table UNION ALL
SELECT 'A2', a2_app_id, a2_document, a2_pages /*, ... */ FROM a2_table UNION ALL
SELECT 'B1', b1_app_id, b1_document, b1_pages /*, ... */ FROM b1_table UNION ALL
SELECT 'B18', b18_app_id, b18_document, b18_pages /*, ... */ FROM b18_table
and then you can query the view using the normalised column names rather than the project-specific names.
(Note: You will have to update the view when new projects are added.)
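For the dynamic SQL route, here is a minimal sketch; the prefix and column names are illustrative and would need adapting to the vendor's actual naming scheme:

```sql
-- Build the project-specific table and column names from a prefix
-- parameter, then open a cursor over the generated statement.
DECLARE
  v_prefix VARCHAR2(10) := 'B18';   -- illustrative project prefix
  v_sql    VARCHAR2(5000);
  v_cur    SYS_REFCURSOR;
BEGIN
  v_sql := 'SELECT b.' || v_prefix || '_APP_ID, b.' || v_prefix || '_DOCUMENT'
        || ' FROM '    || v_prefix || '_DOCUMENT b'
        || ' WHERE b.' || v_prefix || '_APP_ID = :app_id';
  OPEN v_cur FOR v_sql USING 1;
  -- fetch rows from v_cur here as needed ...
  CLOSE v_cur;
END;
/
```

Note that the object names are concatenated into the statement text, while the value being compared is passed as a bind variable; table and column names can never be bind variables.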
That looks like a terribly wrong data model. If you want to pass table/column names and use them in your queries, you'll have to use dynamic SQL which is difficult to maintain and debug.
By the way, do you really plan to duplicate, triplicate, ... all your tables to fit all new projects? That's insane!
Should be something like this (table_1 has a foreign key constraint, pointing to project table).
SQL> create table project
2 (id_project number constraint pk_proj primary key,
3 name varchar2(20) not null
4 );
Table created.
SQL> create table table_1
2 (id number constraint pk_t1 primary key,
3 id_project number constraint fk_t1_proj references project (id_project),
4 app varchar2(20),
5 document varchar2(20),
6 pages number
7 );
Table created.
Sample rows:
SQL> insert into project (id_project, name) values (1, 'Project AAA');
1 row created.
SQL> insert into project (id_project, name) values (2, 'Project BBB');
1 row created.
SQL> insert into table_1 (id, id_project, app) values (1, 1, 'App. 1');
1 row created.
SQL> insert into table_1 (id, id_project, app) values (2, 1, 'App. 2');
1 row created.
SQL> insert into table_1 (id, id_project, app) values (3, 2, 'App. 3');
1 row created.
Sample queries:
SQL> select * from project;
ID_PROJECT NAME
---------- --------------------
1 Project AAA
2 Project BBB
SQL> select a.id, p.name project_name, a.app
2 from table_1 a join project p on p.id_project = a.id_project
3 order by p.name, a.id;
ID PROJECT_NAME APP
---------- -------------------- --------------------
1 Project AAA App. 1
2 Project AAA App. 2
3 Project BBB App. 3
SQL>
I created a view as suggested, and it worked as I expected.

Distinct value in array

Hello, I have a problem. I have this table:
CREATE TABLE doctor(
idDoc SERIAL PRIMARY KEY,
meds TEXT[]
);
I want to store only unique values when my user inserts the same meds.
Table before the user insert:

id | meds
---+------------------------
1  | {Nalfon,Ocufen,Actron}

After the user inserts Actron,Soma,Robaxin, I want the row to keep the old values plus the new ones (Actron is a duplicate). If I UPDATE the table with the new values:

UPDATE doctor SET meds='{Actron,Soma,Robaxin}' WHERE id=1;

I get:

id | meds
---+------------------------
1  | {Actron,Soma,Robaxin}

But I want:

id | meds
---+--------------------------------------
1  | {Nalfon,Ocufen,Actron,Soma,Robaxin}

I don't know how to check whether a new value already exists in the table, and how to insert/update so that only unique values are kept.
Given your data structure, you can unnest the existing meds, add in the new ones, and re-aggregate as a unique array:
UPDATE doctor d
SET meds = (select array_agg(distinct med)
from (select med
from unnest('{Actron,Soma,Robaxin}'::text[]) u(med)
union all
select med
from unnest(d.meds) u(med)
) m
)
WHERE idDoc = 1;
Here is a db<>fiddle.
That said, you might consider a junction table with one row per doctor and medicine. That is the more traditional SQL representation for a many-to-many relationship.
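A sketch of that junction-table design (table and column names here are illustrative): with a primary key over both columns, duplicates become impossible by construction, and re-inserting an existing med is a no-op.

```sql
-- One row per (doctor, med) pair; the composite primary key
-- guarantees uniqueness without any application-side checking.
CREATE TABLE doctor_meds (
    idDoc INT  REFERENCES doctor(idDoc),
    med   TEXT,
    PRIMARY KEY (idDoc, med)
);

-- Inserting a med the doctor already has simply does nothing:
INSERT INTO doctor_meds (idDoc, med)
VALUES (1, 'Actron'), (1, 'Soma'), (1, 'Robaxin')
ON CONFLICT DO NOTHING;
```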

DB2 Using with statement to keep last id

In my project I need to create a script that inserts data with an auto-generated value for the primary key, and then reuses this number as a foreign key in other tables.
I'm trying to use the WITH statement to keep that value.
For instance, I'm trying to do this:
WITH tmp as (SELECT ID FROM (INSERT INTO A ... VALUES ...))
INSERT INTO B ... VALUES tmp.ID ...
But I can't make it work.
Is it even possible to do this, or am I completely wrong?
Thank you
Yes, it is possible, if your DB2-server version supports the syntax.
For example:
create table xemp(id bigint generated always as identity, other_stuff varchar(20));
create table othertab(xemp_id bigint);
SELECT id FROM FINAL TABLE
(INSERT INTO xemp(other_stuff)
values ('a'), ('b'), ('c'), ('d')
) ;
The above snippet of code gives the result below:
ID
--------------------
1
2
3
4
4 record(s) selected.
If you want to re-use the ID to populate another table:
with tmp1(id) as (
  SELECT id FROM new TABLE (
    INSERT INTO xemp(other_stuff) values ('a1'), ('b1'), ('c1'), ('d1')
  ) tmp3
),
tmp2 as (
  select * from new table (
    insert into othertab(xemp_id) select id from tmp1
  ) tmp4
)
select * from othertab;
As per my understanding, you will have to create an auto-increment field using a sequence object (an object that generates a number sequence).
You can use CREATE SEQUENCE to achieve the auto-increment value:
CREATE SEQUENCE seq_person
  MINVALUE 1
  START WITH 1
  INCREMENT BY 1
  CACHE 10;
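The sequence is then drawn from at insert time; a sketch, with illustrative table names, using DB2's NEXT VALUE FOR / PREVIOUS VALUE FOR expressions:

```sql
-- Draw the next number from the sequence for the parent row ...
INSERT INTO person (id, name)
VALUES (NEXT VALUE FOR seq_person, 'Alice');

-- ... and re-read the same value in this session for the child row.
INSERT INTO person_address (person_id, city)
VALUES (PREVIOUS VALUE FOR seq_person, 'Oslo');
```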

In PostgreSQL 9.6, what's the simplest way to expand a JSONB column filled with simple JSON dicts?

Say I have a table json_table with a JSONB column, json_field. Each element in this column is a single uncomplicated dict, e.g.,
{"first_field": 2, "second_field": 42}
Is there a way to create a new table where the dicts are turned into columns?
My current approach is as follows:
CREATE TABLE normal_table ... first_field, second_field ... etc;
INSERT INTO normal_table (
id,
first_field,
second_field,
...
)
SELECT
id,
json_field->>'first_field',
json_field->>'second_field',
...
FROM json_table;
Is there a way to do something like the following?
SELECT
id,
expand_json_dict(json_field)
FROM json_table;
Or a similar succinct way of doing it? The JSONB column has a lot of fields I want to expand, and the queries become unwieldy. I've actually made a Python function which generates create/insert scripts. Still, I'd love for there to be a nice PostgreSQL way to do it.
Any thoughts?
Edit
The following is the working solution based on feedback here. Thanks guys.
drop table if exists json_table;
create table json_table (
id int,
json_field jsonb
);
insert into json_table VALUES
(2, ('{"second_field": 43, "first_field": 3}'::jsonb)),
(1, ('{"first_field": 2 , "second_field": 42}'::jsonb));
drop table if exists normal_table;
create table normal_table (
id int,
first_field int,
second_field int
);
insert into normal_table
select (
jsonb_populate_record(
null::normal_table,
jsonb_set(json_field, '{id}', id::text::jsonb)
)
).*
from json_table;
select * from normal_table;
Use the normal_table type as the base type to the jsonb_populate_record function:
create table normal_table (
id int,
first_field int,
second_field int
);
with json_table (json_field) as ( values
('{"first_field": 2 , "second_field": 42}'::jsonb)
)
select (jsonb_populate_record(null::normal_table, json_field)).*
from json_table
;
id | first_field | second_field
----+-------------+--------------
| 2 | 42
If it is necessary to generate the id to be inserted use jsonb_set:
with json_table (json_field) as ( values
('{"first_field": 2 , "second_field": 42}'::jsonb),
('{"first_field": 5 , "second_field": 1}')
)
select (
jsonb_populate_record(
null::normal_table,
jsonb_set(json_field, '{id}', (row_number() over())::text::jsonb)
)
).*
from json_table
;
id | first_field | second_field
----+-------------+--------------
1 | 2 | 42
2 | 5 | 1
You can create a type (record) that reflects your keys and then use json_populate_record:
create type my_type as (first_field varchar, second_field varchar);
SELECT id, (json_populate_record(null::my_type, json_field)).*
FROM json_table;
If there are keys in the JSON document that are not present in the type, they are simply ignored. If there are fields in the type definition that don't have a match in the JSON document they will be null.

SQL server query for pulling data from child table

I have a table called Identifier which has identifierType, identifierValue, and a foreign key to the patient table.
One patient can have multiple identifiers, so for a given patient there will be multiple rows in the Identifier table.
I want to pull the patient foreign-key value from this table where it meets given criteria; for example, I want to find the patientId where identifierType = 'PatientFirst' and identifierValue = 'sally'.
What would the SQL statement be to pull this result in SQL Server?
References: http://sqlfiddle.com/#!3/33fc6/2/0
Seems a bit easy for this website, no? :)
SELECT fk_patientID
FROM identifier
WHERE IdentifierType = 'PatientFirst'
AND IdentifierValue = 'sally'
Pivot table may be useful to you as well if you'd like to flatten certain properties into a single row per patient ID:
;with PatientFullName( fk_patientId, PatientLast, PatientFirst )
as
(
select
fk_patientId
, pt.PatientLast
, pt.PatientFirst
from
Identifier
pivot
(
MAX(identifierValue)
for identifierType in ( [PatientLast], [PatientFirst] )
) as pt
)
select
*
from
PatientFullName pfn
where
pfn.PatientLast = 'Doe'
and pfn.PatientFirst = 'Sally'