Get value based on column name stored in another table - sql

I have the following table structure:
CREATE TABLE "DocumentSubject"
(
    "Id" SERIAL NOT NULL,
    "Description" text,
    "List1Left" text,
    "List1Right" text
);
CREATE TABLE "DocumentRegistryAttributes"
(
    "Id" SERIAL NOT NULL,
    "DocumentSubjectId" integer,
    "Code" text,
    "WorkDetails" text,
    "Name" text
);
In DocumentSubject there is a column named List1Left which contains the name of a column from the DocumentRegistryAttributes table, e.g. Code.
Can I retrieve the value of the Code column from DocumentRegistryAttributes based on the column name stored as a string in the DocumentSubject table?
I need something like this:
"DocumentRegistryAttributes"["DocumentSubject"."List1Left"] <-- first get value of column "List1Left" from table "DocumentSubject", and then retrieve this column value from "DocumentRegistryAttributes" table.
Here is fiddle: http://www.sqlfiddle.com/#!17/6cbc3/1
The real problem is that I cannot use any static conditions in the WHERE clause. Each document in the DocumentRegistryAttributes table can be assigned to a different subject in the DocumentSubject table, and each subject can have a different configuration.
Is it possible?

You can use to_jsonb() to build a JSON object for the row from "DocumentRegistryAttributes", with the column names as keys, and then select the text from the JSON where the key is the text stored in "DocumentSubject"."List1Left".
SELECT *,
to_jsonb(dra)->>ds."List1Left"
FROM "DocumentSubject" ds
LEFT JOIN "DocumentRegistryAttributes" dra
ON dra."DocumentSubjectId" = ds."Id";
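For a quick way to experiment with the row-to-JSON trick outside PostgreSQL, here is a runnable sketch of the same idea in SQLite via Python's sqlite3. SQLite has no to_jsonb(), so the row JSON is built explicitly with json_object(); the sample rows are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE DocumentSubject(Id INTEGER PRIMARY KEY, List1Left TEXT);
    CREATE TABLE DocumentRegistryAttributes(
        Id INTEGER PRIMARY KEY, DocumentSubjectId INTEGER,
        Code TEXT, WorkDetails TEXT, Name TEXT);
    -- Subject 1 is configured to read "Code", subject 2 to read "Name"
    INSERT INTO DocumentSubject VALUES (1, 'Code'), (2, 'Name');
    INSERT INTO DocumentRegistryAttributes VALUES
        (1, 1, 'C-001', 'work 1', 'name 1'),
        (2, 2, 'C-002', 'work 2', 'name 2');
""")

# Build a JSON object per attributes row, then extract the key whose
# name is stored in DocumentSubject.List1Left.
rows = conn.execute("""
    SELECT ds.Id,
           json_extract(
               json_object('Code', dra.Code,
                           'WorkDetails', dra.WorkDetails,
                           'Name', dra.Name),
               '$.' || ds.List1Left) AS dynamic_value
    FROM DocumentSubject ds
    LEFT JOIN DocumentRegistryAttributes dra
           ON dra.DocumentSubjectId = ds.Id
    ORDER BY ds.Id
""").fetchall()
print(rows)  # [(1, 'C-001'), (2, 'name 2')]
```

Each subject pulls a different column from the joined row, with no static column name in the query.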

Related

PostgreSQL JSON array value from another column

I have an employee table in PostgreSQL:
CREATE TABLE Employee(
    EmployeeID serial PRIMARY KEY,
    Employeename VARCHAR(100));
ALTER TABLE Employee ADD COLUMN parents JSON;
Now I want to update the JSON column with a JSON array built from values of the existing columns, like below.
update employee set parents = json_array_elements('[{"name":Employeename, "id":EmployeeID }]')
Any way I can achieve this?
Try using:
the JSON_BUILD_OBJECT function, to generate your JSON object
the JSON_BUILD_ARRAY function, to enclose that object in an array
UPDATE employee
SET parents = JSON_BUILD_ARRAY(
JSON_BUILD_OBJECT('name', Employeename,
'id' , EmployeeID ));
Check the demo here.
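The json_build_array/json_build_object pattern can be exercised with SQLite's equivalent json_array()/json_object() functions via Python's sqlite3 (sample names are made up; the UPDATE mirrors the answer above):

```python
import sqlite3
import json

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Employee(
        EmployeeID INTEGER PRIMARY KEY,
        Employeename TEXT,
        parents TEXT);  -- SQLite stores JSON as text
    INSERT INTO Employee(Employeename) VALUES ('Alice'), ('Bob');
""")

# Build a one-element JSON array of {"name": ..., "id": ...} per row.
conn.execute("""
    UPDATE Employee
    SET parents = json_array(
            json_object('name', Employeename, 'id', EmployeeID))
""")

parents = conn.execute(
    "SELECT parents FROM Employee ORDER BY EmployeeID").fetchall()
print(parents[0][0])  # a JSON array string like [{"name":"Alice","id":1}]
```

Because the JSON is built per row from that row's own columns, no row-expanding function like json_array_elements() is needed.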

Change Schema while creating table

I have an issue later in my process when I want to append tables with different datatypes.
I am creating a new table from an existing table. One column is the calendar week (KW), which was originally a STRING. In order to append my tables later on, I need the same datatype for the column.
Is there a way to change the datatype of a column while creating the new table?
CREATE TABLE IF NOT EXISTS
  MyNewTable
AS (
  SELECT
    Column_1 AS Column_1_alias,
    KW_ AS KW
  FROM
    SourceTable);
What this query does is grab only the rows where the KW column contains a number, remove any non-numeric characters from the STRING, and finally CAST the result to the desired type, so the column ends up as an INT64.
CREATE TABLE IF NOT EXISTS
  dataset.MyNewTable
AS (
  SELECT
    Column1 AS Column1_alias,
    CAST(REGEXP_REPLACE(KW, '[^0-9]', '') AS INT64) AS KW_Alias
  FROM
    `project.dataset.source`
  WHERE REGEXP_CONTAINS(KW, '[0-9]')
);
Another possible solution is to use the REPLACE function instead of REGEXP_REPLACE, if the non-numeric characters to strip are known in advance.
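The strip-then-cast logic is easy to sanity-check outside BigQuery; here is a Python mirror of the expression (the function name is my own):

```python
import re

def kw_to_int(kw: str):
    """Mirror of CAST(REGEXP_REPLACE(KW, '[^0-9]', '') AS INT64),
    with the WHERE REGEXP_CONTAINS(KW, '[0-9]') filter folded in."""
    digits = re.sub(r"[^0-9]", "", kw)
    return int(digits) if digits else None  # None = row filtered out

print(kw_to_int("KW12"))    # 12
print(kw_to_int("Week 5"))  # 5
print(kw_to_int("n/a"))     # None
```

The WHERE filter matters: without it, a value with no digits at all would make the CAST fail.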

Convert List Of XML Tags in varchar column to comma separated list

I have a table containing a list of XML tags/values that I need to join to another table to retrieve their actual values and display the result as a comma-separated list.
Example varchar data:
<choice id="100"/><choice id="101"/><choice id="102"/>
However, these values actually translate to other values: red, white, blue respectively. I need to convert that list to the following list:
red,white,blue
As a recap: the "source" table column is varchar and contains a list of XML attribute values, and those values translate to other values by joining to another table. The other table has a primary key of id (int) with rows for 100, 101, and 102, whose values are red, white, and blue respectively. I hope this makes enough sense.
Here is the ddl to set up the scenario:
create table datatable(
id int,
data nvarchar(449),
primary key (id)
);
insert into datatable(id, data)
values(1,'<choice id="100"/><choice id="101"/><choice id="102"/>')
,(2,'<choice id="100"/>')
,(3,'<choice id="101"/>')
,(4,'<choice id="102"/>');
create table choicetable(
id int,
choicevalue nvarchar(449),
primary key (id)
);
insert into choicetable(id, choicevalue)
values(100,'red')
,(101,'white')
,(102,'blue');
This would be the first time I've tried parsing XML in this manner so I'm a little stumped where to start. Also, I do not have control over the database I am retrieving the data from (3rd party software).
Without proper sample data it's hard to give an exact query, but you would do something like this:
Use CROSS APPLY to convert the varchar to xml
Use .nodes to shred the XML into separate rows.
Join using .value to get the id attribute
Group up, and concatenate using STRING_AGG. You may not need GROUP BY depending on your situation.
SELECT
    dt.id,
    STRING_AGG(ct.choicevalue, ',')
FROM datatable dt
CROSS APPLY (SELECT CAST(dt.data AS xml) ) v(XmlData)
CROSS APPLY v.XmlData.nodes('/choice') x1(choice)
JOIN choicetable ct ON ct.id = x1.choice.value('@id','int')
GROUP BY
    dt.id;
I would advise you to store XML data in an xml typed column if at all possible.
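Outside the database, the same shred/join/aggregate sequence can be checked with a few lines of Python (the column value has no single root element, so it is wrapped before parsing; the lookup dict stands in for choicetable):

```python
import xml.etree.ElementTree as ET

# Stand-in for choicetable: id -> choicevalue
choices = {100: "red", 101: "white", 102: "blue"}

def translate(data: str) -> str:
    # The column holds a sequence of <choice/> elements with no root,
    # so wrap it to make well-formed XML before parsing.
    root = ET.fromstring(f"<r>{data}</r>")
    # Shred into elements, join on the id attribute, aggregate with commas.
    return ",".join(choices[int(c.get("id"))] for c in root.findall("choice"))

print(translate('<choice id="100"/><choice id="101"/><choice id="102"/>'))
# red,white,blue
```

The wrap step corresponds to the CAST to xml, findall to .nodes(), the dict lookup to the JOIN via .value(), and the join(",") to STRING_AGG.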

SSIS: Cannot Insert NULL

The source has columns like ID, Name, Phone, and Address; the destination has the same columns plus one extra column, Operator, where I want to put static text during the lookup transformation. The destination has a composite primary key of ID+Operator.
I tried the lookup transformation and it works well, but I want to add the static text during the transformation.
...
...
Check whether the Operator column in the destination table accepts NULL values.

Is it OK to have separate column in Audit table to store column name to reflect what changes were made

Is it good practice to store column names representing what changes were made to the data in the parent table, i.e. the changes that triggered the audit?
Ex :-
create table employee
(
emp_id character varying(10),
fname character varying(30),
lname character varying(30),
tel_no character varying(15)
);
create table aud_employee
(
emp_id character varying(10),
fname character varying(30),
lname character varying(30),
tel_no character varying(15),
aud_col_changed character varying(100)
);
--
insert into employee values('215','Mark','Cooper','222-458-254');
This will also result in a record being inserted into the audit table through the trigger, with a null value in the aud_col_changed column.
Now when I update the same record :-
update employee set tel_no='255-458-254' where emp_id='215';
So an audit record would also be created for this update, and the audit table should now contain another record with the value 'tel_no' in the aud_col_changed column.
If there are multiple columns changed at a time, it would be separated by comma in same field.
If this is the right approach, could you please describe the ways of achieving it?
Please note that the table on which I am trying to implement this approach has around 18 columns, of which 6-7 are JSON.
Your method is likely to be fine -- you should specify what you want to do with the audit table.
Personally, I would rather have a table where the audit table was one of the following:
One row per column changed, with the old value and the new value.
One row per row changed, with all the columns appearing twice, once for the old value and once for the new value.
In other words, I usually want to see both the old and new values together.
The first method is tricky when dealing with columns that have different types. The second is tricky when you want to modify the structure of the table.
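For reference, the question's approach (one audit row per change, with a comma-separated list of changed columns) can be sketched end to end. This uses SQLite via Python so it runs anywhere; in PostgreSQL the same OLD/NEW comparisons would live in a PL/pgSQL trigger function:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee(emp_id TEXT, fname TEXT, lname TEXT, tel_no TEXT);
    CREATE TABLE aud_employee(emp_id TEXT, fname TEXT, lname TEXT,
                              tel_no TEXT, aud_col_changed TEXT);

    -- Compare OLD/NEW per column (IS NOT is NULL-safe) and build the
    -- comma-separated list of changed column names.
    CREATE TRIGGER employee_aud AFTER UPDATE ON employee
    BEGIN
        INSERT INTO aud_employee
        SELECT NEW.emp_id, NEW.fname, NEW.lname, NEW.tel_no,
               TRIM(
                   CASE WHEN OLD.fname  IS NOT NEW.fname  THEN 'fname,'  ELSE '' END ||
                   CASE WHEN OLD.lname  IS NOT NEW.lname  THEN 'lname,'  ELSE '' END ||
                   CASE WHEN OLD.tel_no IS NOT NEW.tel_no THEN 'tel_no,' ELSE '' END,
                   ',');
    END;
""")

conn.execute("INSERT INTO employee VALUES ('215','Mark','Cooper','222-458-254')")
conn.execute("UPDATE employee SET tel_no='255-458-254' WHERE emp_id='215'")
changed = conn.execute("SELECT aud_col_changed FROM aud_employee").fetchone()[0]
print(changed)  # tel_no
```

An analogous AFTER INSERT trigger (writing NULL into aud_col_changed) would cover the insert case from the question. The per-column CASE list is exactly the maintenance burden the answer warns about: every new column in the parent table must be added here too.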
I did some more research and found that if we want to store the column names, the audit data needs to be written through a trigger function. In the function we check each value passed for NOT NULL; if a value is not null, we hard-code the corresponding column name and assign it to a variable. Each further NOT NULL value found has its hard-coded column name appended to that main variable, until all the values passed to the function have been checked.
This will definitely degrade database performance, and running it after every update is obviously not preferable.
Hence, I prefer not to use the aud_col_changed column.