Qlik Sense: How to load a new table from an existing one using a for each loop when the source table is filled vertically?

I have a table with the following structure:
The desired loaded table in Qlik Sense is as follows:
With this structure, I will be able to add a table showing each doctor and how many medications they prescribed, and even break it down into a detailed pivot table showing which meds they are:
I tried to loop over the initial table [Medications] (which comes from a REST API, so we cannot change it into the desired form before loading):
FOR Each ev in FieldValueList('Event')

[MedAndDoctors]:
LOAD
    '$(ev)' as event_id,
    if(Field = 'Medication1' OR Field = 'Medication2' OR Field = 'Medication3' OR ..., [Field]) AS med_name,
    if(Field = 'Doctor', [Field]) AS doctor_name,
    if(Field = 'Medication1 Quantity' OR Field = 'Medication2 Quantity' OR ..., [Field]) AS Quantity
RESIDENT [Medications]
WHERE Event = '$(ev)';

Next ev;
Note that the Field column contains a lot of other info. The filled survey is saved in this vertical structure instead of the regular horizontal structure where all values of each event are on the same row.
The result was exactly the same as the [Medications] table but with only the specified field values, so I couldn't display the desired output table.

After executing the script below we will get two tables - Doctors and Medications
Doctors table will contain the following data:
And Medications:
Once we have the data in this format, it's very easy to create the result table:
Raw:
Load * Inline [
    Event, Field, Value
    Ev1, Medication1, TRUE
    Ev1, Medication2, TRUE
    Ev1, Doctor, XYZ
    Ev1, Medication1 Quantity, 13
    Ev1, Medication2 Quantity, 3
    Ev2, Medication1, TRUE
    Ev2, Doctor, ABC
    Ev2, Medication1 Quantity, 5
];

// List of doctors by event
Doctors:
Load Distinct
    Event,
    Value as Doctor
Resident Raw
Where Field = 'Doctor';

// List of all medication names by event
Medications:
Load Distinct
    Event,
    Field as Medication
Resident Raw
Where SubStringCount(Field, 'Medication') > 0
    and SubStringCount(Field, 'Quantity') = 0
    and Value = 'TRUE';

// List of medication quantities by event
Join (Medications)
Load Distinct
    Event,
    Trim(Replace(Field, 'Quantity', '')) as Medication,
    Value as MedicationQuantity
Resident Raw
Where SubStringCount(Field, 'Medication') > 0
    and SubStringCount(Field, 'Quantity') > 0;

Drop Table Raw;
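With Doctors and Medications associated through the Event field, the counts the question asks for can be built directly in a chart. As a rough illustration (only the fields loaded above are assumed), a straight or pivot table could use Doctor as the dimension, optionally Medication as a second dimension for the detailed breakdown, and measures such as:

Count(distinct Medication)    // how many different meds the doctor prescribed
Sum(MedicationQuantity)       // total quantity across those meds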


How to join 2 tables that have the values represented differently in each table?

I currently have 2 tables estimate_details and delivery_service.
estimate_details has a column called event that has events such as: checkout, buildOrder
delivery_service has a column called source that has events such as: makeBasket, buildPurchase
checkout in estimate_details is equivalent to makeBasket in delivery_service, and buildOrder is equivalent to buildPurchase.
estimate_details:

id | event      | ...
1  | checkout   | ...
2  | buildOrder | ...

delivery_service:

id | source        | date         | ...
1  | makeBasket    | '2022-10-01' | ...
2  | buildPurchase | '2022-10-02' | ...
1  | makeBasket    | '2022-10-20' | ...
I would like to be able to join the tables on the event and source columns where checkout = makeBasket and buildOrder = buildPurchase.
Also, if there are multiple records for a specific id and source in delivery_service, choose the latest one.
How would I be able to do this? I cannot UPDATE either table to have the same values as the other table.
I still want all the data from estimate_details, but would like the latest records from the delivery_service.
The Expected output in this situation would be:
id | event      | Date         | ...
1  | checkout   | '2022-10-20' | ...
2  | buildOrder | '2022-10-02' | ...
The best approach here is to use a CTE, which is like a subquery but more readable.
First, in the CTE, use the delivery_service table to get the max date for each id and source. At the same time, translate the source text with a CASE expression so it matches the event values in estimate_details:
WITH delivery_service_cte AS (
    SELECT
        id
        , CASE
              WHEN source = 'makeBasket'    THEN 'checkout'
              WHEN source = 'buildPurchase' THEN 'buildOrder'
          END AS source
        , MAX(date) AS date
    FROM
        delivery_service
    GROUP BY
        1, 2
)
SELECT
    ed.*  -- select whichever columns you want from here
    , ds.id
    , ds.source
    , ds.date
FROM
    estimate_details ed
LEFT JOIN
    -- or JOIN (you didn't give enough info on what you are trying to achieve
    -- in the output)
    delivery_service_cte ds
    ON ds.id = ed.id
    AND ds.source = ed.event
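
If you also need other columns from the latest delivery_service row (not just its date), the GROUP BY approach forces you to aggregate each of them separately. A window-function variant is a common alternative; this is only a sketch, assuming your database supports ROW_NUMBER():

WITH delivery_service_cte AS (
    SELECT
        id
        , CASE
              WHEN source = 'makeBasket'    THEN 'checkout'
              WHEN source = 'buildPurchase' THEN 'buildOrder'
          END AS source
        , date
        -- rank the rows per id/source, newest first
        , ROW_NUMBER() OVER (PARTITION BY id, source ORDER BY date DESC) AS rn
    FROM
        delivery_service
)
SELECT
    ed.*
    , ds.date
FROM
    estimate_details ed
LEFT JOIN
    delivery_service_cte ds
    ON ds.id = ed.id
    AND ds.source = ed.event
    AND ds.rn = 1  -- keep only the latest row per id/source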

SD Invoice with amount 0 EUR not to be transmitted to FI

I am trying to fix a certain already developed function with the goal that the SD Invoice with amount 0 EUR should not be transmitted to FI. As I understood, the below code is used to select the data from FI and SD:
* select order-related invoices
SELECT * FROM vbfa AS v INTO TABLE gt_vbfa_inv
  FOR ALL ENTRIES IN gt_vbak
  WHERE vbelv = gt_vbak-vbeln
    AND vbtyp_n IN ('M', 'O', 'P', '5', '6')
    AND stufe = '00'
    AND NOT EXISTS ( SELECT * FROM vbfa
                       WHERE vbelv = v~vbeln
                         AND posnv = v~posnn
                         AND vbtyp_n IN ('N', 'S')
                         AND stufe = '00' ).

IF sy-subrc = 0.
* select invoice head status
  SELECT DISTINCT * FROM vbuk APPENDING TABLE gt_vbuk_inv
    FOR ALL ENTRIES IN gt_vbfa_inv
    WHERE vbeln = gt_vbfa_inv-vbeln.            "#EC CI_SUBRC
ENDIF.

SORT gt_vbuk_inv BY vbeln.
DELETE ADJACENT DUPLICATES FROM gt_vbuk_inv COMPARING vbeln.

IF me->gv_items = abap_true AND gt_vbuk_inv IS NOT INITIAL.
  SELECT * FROM vbrp INTO TABLE gt_vbrp
    FOR ALL ENTRIES IN gt_vbuk_inv
    WHERE vbeln = gt_vbuk_inv-vbeln.            "#EC CI_SUBRC
ENDIF.
As far as I can understand from the code above, the table VBFA is used to get the data for FI, while the table VBRP is used to get the data for SD. What I want to achieve is that when an invoice has no FI document, the invoice number will be empty.
If the tables BKPF (for FI) and VBRK (for SD) were used, then I could have tried the relation:
vbrk-xblnr = bkpf-xblnr.
However, those tables are not used in the function. May I ask how I can fix the code so that when an invoice has no FI document (i.e. the invoices with a value of 0 EUR that should not generate an FI document), the invoice number stays empty?
Thank you all in advance!
Since the goal is
the SD Invoice with amount 0 EUR should not be transmitted to FI
I suppose your code is in some user-exit or standard program modification when releasing the SD invoice to Accounting. If so, the BKPF is not created yet and there's no reason to select it.
The select from VBFA is not extracting data from FI. Starting from the sales order, it extracts the following SD documents (first document-flow level only):
M Invoice
O Credit Memo
P Debit Memo
5 Intercompany Invoice
6 Intercompany Credit Memo
And excluding those invoices that have a subsequent cancellation
N Invoice Cancellation
S Credit Memo Cancellation
Those documents can be found in VBRK table (SD invoice header) with the following select
SELECT DISTINCT * FROM vbrk APPENDING TABLE gt_vbrk
  FOR ALL ENTRIES IN gt_vbfa_inv
  WHERE vbeln = gt_vbfa_inv-vbeln.
Btw: I don't know the reason for the VBUK select since you're not using any document status information
If you're asking for the SD invoices with zero amount in order not to release them to Accounting (since otherwise they would produce an error in FI), you don't have to select BKPF; just check VBRK-NETWR = 0 for every entry in the gt_vbrk table.
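For example (just a sketch, assuming gt_vbrk is the internal table filled by the VBRK select above), the zero-value invoices could be filtered out before the release logic runs:

* Drop SD invoices with a net value of 0 so they are not released to FI
* (NETWR = net value of the billing document in document currency).
DELETE gt_vbrk WHERE netwr = 0.

If other per-invoice logic still has to run, the same check (netwr = 0) can instead be done on each work area inside a LOOP AT gt_vbrk.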

Error deleting json object from array in Postgres

I have a Postgres table timeline with two columns:
user_id (varchar)
items (json)
This is the structure of items json field:
[
  {
    "itemId": "12345",
    "text": "blah blah"
  },
  // more items with itemId and text
]
I need to delete all the items where itemId equals a given value. e.g. 12345
I have this working SQL:
UPDATE timeline
SET items = items::jsonb - cast((
    SELECT position - 1
    FROM timeline, jsonb_array_elements(items::jsonb) WITH ORDINALITY arr(item_object, position)
    WHERE item_object->>'itemId' = '12345') as int)
It works fine. It only fails when no items are returned by the subquery i.e. when there are no items whose itemId equals '12345'. In those cases, I get this error:
null value in column "items" violates not-null constraint
How could I solve this?
Try this:
update timeline
set items = coalesce(
    (select json_agg(j)
       from json_array_elements(items) j
      where j->>'itemId' not in ('12345')),
    '[]'::json);  -- fall back to an empty array when every element is removed
The problem is that when null is passed to the - operator, it results in null for the expression. That not only violates your not null constraint, but it is probably also not what you are expecting.
This is a hack way of getting past it:
UPDATE timeline
SET items = items::jsonb - coalesce(
    cast((
        SELECT position - 1
        FROM timeline, jsonb_array_elements(items::jsonb) WITH ORDINALITY arr(item_object, position)
        WHERE item_object->>'itemId' = '12345') as int),
    99999999)
A more correct way to do it would be to collect all of the indexes you want to delete, with something like the below. If there is the possibility of more than one itemId: 12345 within a single user_id row, then this will either fail or mess up your items (I'd have to test to see which), but at least it only updates rows that have a 12345 record.
WITH deletes AS (
    SELECT t.user_id, (e.rn - 1)::int AS position
    FROM timeline t
    CROSS JOIN LATERAL jsonb_array_elements(t.items::jsonb)
         WITH ORDINALITY AS e(jobj, rn)
    WHERE e.jobj->>'itemId' = '12345'
)
UPDATE timeline
SET items = items::jsonb - d.position
FROM deletes d
WHERE d.user_id = timeline.user_id;
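
Combining the two ideas, you can also rebuild the array as in the first answer but only touch the rows that actually contain the item, which avoids writing NULL to unaffected rows. This is just a sketch against the same timeline table:

UPDATE timeline
SET items = coalesce(
        (SELECT json_agg(j)
           FROM json_array_elements(items) AS j
          WHERE j->>'itemId' <> '12345'),
        '[]'::json)                                    -- empty array if every element matched
WHERE items::jsonb @> '[{"itemId": "12345"}]'::jsonb;  -- only rows that contain the item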

How to use a loop with an Oracle query?

Here is a query in a procedure where I pull the data and write it to the table:
SELECT
    SUBSTR(sl.TASK_JOB_ID, 1, 12) AS schedule_id,
    null AS schedule_seq,
    null AS schedule_subseq
BULK COLLECT INTO
    v_schedule_detail_table
FROM
    sch_line sl,
    SCH_INPUT_MATERIAL sim,
    SCH_INPUT_PIECE sip,
    SCH_PIECES_RELATION spr,
    SCH_CUT sc
WHERE
    sl.TASK_JOB_ID = v_schedule_table(i).schedule_id
    AND sl.SCH_LINE_NUM_ID = sim.SCH_LINE_NUM_ID
    AND sim.INPUT_MAT_NUM_ID = sip.INPUT_MAT_NUM_ID
    AND sip.INPUT_PIECE_NUM_ID = spr.INPUT_PIECE_NUM_ID
    AND spr.SCHC_CUT_NUM_ID = sc.SCHC_CUT_NUM_ID;
When this code gets the data from the sch_cut table, I want to fill schedule_seq and schedule_subseq as in the example below.
For an example schedule_id value, the following output of the sch_cut table appears.
According to this table, ignoring the rows where the piece grouping is 0:
If to_cut_trans_flag = Y, then schedule_seq is incremented.
If to_cut_long_flag = Y, then schedule_subseq is incremented.
The algorithm will be as follows.
Finally, the output I want to see for this example should be filled like this.
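Since the example data is only shown as screenshots, here is just a rough PL/SQL sketch of that counter logic. The piece_grouping column name and the ORDER BY are assumptions; to_cut_trans_flag, to_cut_long_flag and SCHC_CUT_NUM_ID come from the question:

DECLARE
  v_seq    PLS_INTEGER := 0;
  v_subseq PLS_INTEGER := 0;
BEGIN
  FOR r IN (SELECT to_cut_trans_flag, to_cut_long_flag
              FROM sch_cut
             WHERE piece_grouping <> 0          -- skip the rows with piece grouping 0
             ORDER BY schc_cut_num_id)
  LOOP
    IF r.to_cut_trans_flag = 'Y' THEN
      v_seq := v_seq + 1;                       -- drives schedule_seq
    END IF;
    IF r.to_cut_long_flag = 'Y' THEN
      v_subseq := v_subseq + 1;                 -- drives schedule_subseq
    END IF;
    -- here, write v_seq / v_subseq into the matching row of v_schedule_detail_table
  END LOOP;
END;
/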

SQL - 1 Column Twice in the SELECT Statement for different values

I have created a database of "trips" with the following tables:
TripParent: Id, DrivingCompany, Client
TripDetails: Id, Destination, PlannedArrivalDate
StatusLog: Id, CatStatusId (comes from another table that just holds the status names), DateTimeModified
Let me explain the tables. First of all, I hid other fields to keep it simple. The "Parent" table has MANY TripDetails, so it is just a summary of the many "Details" it has. The TripDetails table has 1 row per destination; let's say the trip goes from A to C, then we have a row for each "stop" (A, B, C).
And then we have the StatusLog table, which has MANY rows for each TripDetails row.
The problem is, I need a stored procedure that returns the DrivingCompany, Client, PlannedArrivalDate, RealArrivalDate and RealDepartureDate.
The "Real Dates" come from the StatusLog table. Status 1 means the truck has arrived at the destination (A/B/C) and status 2 means it has left said location.
So far I got the following
SELECT
TP.DrivingCompany, TP.Client, TD.PlannedArrivalDate,
'Real Arrival Date' = CASE SL.CatStatusId
WHEN 1 THEN SL.DateTimeModified
ELSE NULL
END,
'Real Departure Date' = CASE SL.CatStatusId
WHEN 2 THEN SL.DateTimeModified
ELSE NULL
END
FROM
TripParent TP
JOIN
TripDetails TD ON TD.TripParentId = TP.Id
JOIN
StatusLog SL ON SL.TripDetailsId = TD.Id
GROUP BY
TP.Id
ORDER BY
TD.Id
Is using CASE the correct way to show the same column twice in the SELECT statement? I think I'm on the right track, but I can't group by TP.Id, and I also need to show ALL the rows: with this query, the TripDetails that don't have a StatusLog row (because the truck hasn't arrived yet) are not shown.
Any help is appreciated
Try this:
SELECT TP.Id,
TP.DrivingCompany,
TP.Client,
TD.PlannedArrivalDate,
'Real Arrival Date' = CASE SL.CatStatusId
WHEN 1 THEN SL.DateTimeModified
ELSE NULL
END,
'Real Departure Date' = CASE SL.CatStatusId
WHEN 2 THEN SL.DateTimeModified
ELSE NULL
END
FROM TripParent TP
JOIN TripDetails TD ON TD.TripParentId = TP.Id
LEFT JOIN StatusLog SL ON SL.TripDetailsId = TD.Id
GROUP BY TP.Id,
TP.DrivingCompany,
TP.Client,
TD.PlannedArrivalDate
ORDER BY TD.Id
The way you use CASE is perfectly fine, yes. Using a LEFT JOIN for the log entries ensures that you also get rows without a corresponding log entry.
When grouping, each column you display must either be contained in the GROUP BY clause or be wrapped in an aggregate function in the SELECT. So think about why you are grouping: do you want a sum, a count, ...? Change the two CASE columns accordingly.
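A concrete way to do that here (a sketch based on the query above) is to group per trip detail and wrap the two CASE expressions in MAX(), so the many status-log rows per detail collapse into one arrival and one departure date:

SELECT TP.DrivingCompany,
       TP.Client,
       TD.PlannedArrivalDate,
       'Real Arrival Date'   = MAX(CASE WHEN SL.CatStatusId = 1 THEN SL.DateTimeModified END),
       'Real Departure Date' = MAX(CASE WHEN SL.CatStatusId = 2 THEN SL.DateTimeModified END)
FROM TripParent TP
JOIN TripDetails TD ON TD.TripParentId = TP.Id
LEFT JOIN StatusLog SL ON SL.TripDetailsId = TD.Id
GROUP BY TD.Id,            -- one result row per trip detail
         TP.DrivingCompany,
         TP.Client,
         TD.PlannedArrivalDate
ORDER BY TD.Id;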