SELECT into deep structure - abap

I am relatively new to ABAP, so I still need to get used to internal tables and the like, and I am currently struggling a bit with how to use SQL in ABAP to fill a nested structure.
For example:
TYPES: BEGIN OF <<mystructure>>,
         someID TYPE sometype,
         relatedItemsInDataModel TYPE STANDARD TABLE OF sometabletype WITH DEFAULT KEY,
       END OF <<mystructure>>.
DATA wa TYPE <<mystructure>>.
<<SELECT INTO STATEMENT>>
DATA(lv_json) = /ui2/cl_json=>serialize( data = wa compress = abap_true ... ).
So basically, I've got a table (A) in the dictionary which has a one-to-many relationship to another table (B) and I want to select all items in A and for every item in A I want to select all related items in B for that record.
The reason I want to do this is that I later want to convert that data to JSON looking like:
[
{
"someID": "someValue",
"relatedItemsInDataModel": [{...}, {...}]
},
{
"someID": "someValue2",
"relatedItemsInDataModel": [{...}, {...}, {...}, ...]
},
...
]
So, am I approaching this the right way in the first place and how can I achieve what I just described?

SELECTs only retrieve flat tables. You therefore need to retrieve nested data in multiple steps and assemble it in the ABAP code.
Your example could look like this:
DATA the_ids_i_want TYPE RANGE OF sometype.
DATA result TYPE TABLE OF <<mystructure>>.

SELECT <field-list>
  FROM table_a
  WHERE some_id IN @the_ids_i_want
  INTO TABLE @DATA(selection_a).

SELECT <field-list>
  FROM table_b
  WHERE parent_id IN @the_ids_i_want
  INTO TABLE @DATA(selection_b).

LOOP AT selection_a INTO DATA(row_a).
  DATA(result_row) =
    VALUE <<mystructure>>(
      someID = row_a-some_id ).
  LOOP AT selection_b INTO DATA(row_b)
      WHERE parent_id = row_a-some_id.
    INSERT VALUE #(
        field_a = row_b-field_a
        ... )
      INTO TABLE result_row-relatedItemsInDataModel.
  ENDLOOP.
  INSERT result_row INTO TABLE result.
ENDLOOP.
There are a lot of aspects that can be optimized, depending on your data. For example, the WHERE conditions may vary depending on the type of foreign-key relations, you may want to add sorted keys to the internal tables to speed up the LOOP ... WHERE lookup, and there may be shorter variants for the assembly, such as FOR expressions instead of LOOP AT. But basically, this is what you need.

Related

JSONB to Array Data Type

I currently have a table sessions with a column actions(JSONB).
I am able to properly store an array from my frontend to my database using array_to_jsonb(array value).
The structure of my array (array value) looks like this:
var arrayStructure = [
{name: 'email client', check: false},
{name: 'send contract', check: false}
]
Unlike a lot of the questions I've seen on SO about modifying JSONB or accessing certain keys, I am only interested in converting the entire JSONB data back to an array for the frontend.
My temporary fix is to map through the array and use JSON.parse() in Javascript to restructure each object in the array. I cannot use JSON.parse() on the entire array, as it throws an error.
I'm looking for a query that brings the JSONB data back to the array format it was stored from. I've seen jsonb_set, LATERAL, and jsonb_array_elements_text, but not used in a way that brought back a proper array.
Starts As: JSONB in actions column in table named sessions
Should Result In: A query that brings back all rows, but with the actions column (JSONB) converted back to an array for the frontend:
select session_id, session_name, someFunction or lateral here(actions) from sessions
Screenshot of Sessions Table
I've tried queries like this:
SELECT
session_id,
actions::jsonb -> 'name' as name
FROM sessions;
And receive back null for name. I've tried ->> to access a deeper level, but that didn't work either.
This is half of the correct query result:
select session_id, jsonb_array_elements_text(actions)
from sessions
group by session_id;
Which results in this (only pay attention to results for session_id of 264):
query result
Now I have objects in their own rows as:
{"name": "some task", "check": "false}
When what I want for the actions column is:
[ {name: "some task", check: false}, {name: "other task", check: true} ]
So I need to further parse the JSON and group by session_id. I'm just struggling to build a sub-query that does that.
Steps to Create Set Up:
create table fakeSessions (
session_id serial primary key,
name varchar(20),
list jsonb
);
insert into fakeSessions(name, list)
VALUES(
'running',
'["{\"name\":\"inquired\",\"check\":false}", "{\"name\":\"sent online guide\",\"check\":false}", "{\"name\":\"booked!\",\"check\":false}"]'
);
insert into fakeSessions(name, list)
VALUES(
'snowboarding',
'["{\"name\":\"rental\",\"check\":false}", "{\"name\":\"booked ski passes\",\"check\":false}", "{\"name\":\"survey\",\"check\":false}"]'
);
The closest query I've created:
with exports as (
select jsonb_array_elements_text(actions)::jsonb as doc from sessions
)
select array_agg(doc) from
exports, sessions
group by session_id;
Get the text values, and then apply an aggregate function to those returned rows. Just can't get the select array_agg(doc) to work as expected. Most likely because I need a different function in that place.
Does this help?
demo:db<>fiddle
SELECT
jsonb_agg(elem)
FROM
sessions, jsonb_array_elements(actions) as elem
jsonb_array_elements() expands the jsonb array into one row per jsonb element.
jsonb_agg() aggregates these jsonb elements into one big array.
I was able to reach the answer by building off of your query! Thank you so much. The query that got the expected outcome was this:
with exports as (
select session_id, jsonb_array_elements_text(actions)::jsonb as doc from sessions
)
select session_id, jsonb_agg(doc) from exports
group by session_id;
I wasn't using the jsonb_agg() function once I got the elements. The difference for getting the exact format was just using jsonb_array_elements_text(...)::jsonb.
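For reference, here is that final query run against the fakeSessions setup from above (where the jsonb column is named list rather than actions); a minimal sketch:
-- Each list element is a JSON string, so jsonb_array_elements_text()
-- unwraps it to text and ::jsonb parses it back into an object.
with exports as (
select session_id, jsonb_array_elements_text(list)::jsonb as doc from fakeSessions
)
select session_id, jsonb_agg(doc) as actions from exports
group by session_id;
-- One row per session_id, e.g.:
-- 1 | [{"name": "inquired", "check": false}, {"name": "sent online guide", "check": false}, {"name": "booked!", "check": false}]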

check if a jsonb field contains an array

I have a jsonb field in a PostgreSQL table which was supposed to contain dictionary-like data (i.e. {}), but a few of its entries got an array due to source data issues.
I want to weed out those entries. One way is to perform the following query:
select json_field from data_table where cast(json_field as text) like '[%]'
But this requires converting each jsonb field into text. With data_table having on the order of 200 million entries, this looks like a bit of an overkill.
I investigated pg_typeof, but it returns jsonb, which doesn't help differentiate between a dictionary and an array.
Is there a more efficient way to achieve the above?
How about using the jsonb_typeof function? (json_typeof is the variant for plain json; since the field is jsonb, jsonb_typeof is the matching one.)
select json_field from data_table where jsonb_typeof(json_field) = 'array'
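Since the stated goal is to weed those entries out, the same predicate can drive a DELETE as well; a sketch, assuming the data_table and json_field names from the question:
-- Inspect the offending rows first ...
select json_field from data_table where jsonb_typeof(json_field) = 'array';
-- ... then remove them, keeping only the dictionary-like ('object') entries.
delete from data_table where jsonb_typeof(json_field) = 'array';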

Can get an average of values in a json array using postgres?

One of the great things about postgres is that it allows indexing into a json object.
I have a column of data formatted a little bit like this:
{"Items":
[
{"RetailPrice":6.1,"EffectivePrice":0,"Multiplier":1,"ItemId":"53636"},
{"RetailPrice":0.47,"EffectivePrice":0,"Multiplier":1,"ItemId":"53404"}
]
}
What I'd like to do is find the average RetailPrice of each row with these data.
Something like
select avg(json_extract_path_text(item_json, 'RetailPrice'))
but really I need to do this for each item in the Items json array. So for this example, the value in the queried cell would be 3.285.
How can I do this?
Could work like this:
WITH cte(tbl_id, json_items) AS (
SELECT 1
, '{"Items": [
{"RetailPrice":6.1,"EffectivePrice":0,"Multiplier":1,"ItemId":"53636"}
,{"RetailPrice":0.47,"EffectivePrice":0,"Multiplier":1,"ItemId":"53404"}]}'::json
)
SELECT tbl_id, round(avg((elem->>'RetailPrice')::numeric), 3) AS avg_retail_price
FROM cte c
, json_array_elements(c.json_items->'Items') elem
GROUP BY 1;
The CTE just substitutes for a table like:
CREATE TABLE tbl (
tbl_id serial PRIMARY KEY
, json_items json
);
json_array_elements() (Postgres 9.3+) to unnest the json array is instrumental.
I am using an implicit JOIN LATERAL here. Much like in this related example:
Query for element of array in JSON column
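Spelled out with an explicit LATERAL join, the same query against the tbl defined above would look like this (a sketch):
SELECT t.tbl_id, round(avg((elem->>'RetailPrice')::numeric), 3) AS avg_retail_price
FROM   tbl t
CROSS  JOIN LATERAL json_array_elements(t.json_items->'Items') elem
GROUP  BY 1;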
For an index to support this kind of query consider this related answer:
Index for finding an element in a JSON array
For details on how to best store EAV data:
Is there a name for this database structure?

PostgreSQL - best way to return an array of key-value pairs

I'm trying to select a number of fields, one of which needs to be an array with each element of the array containing two values. Each array item needs to contain a name (character varying) and an ID (numeric). I know how to return an array of single values (using the ARRAY keyword) but I'm unsure of how to return an array of an object which in itself contains two values.
The query is something like
SELECT
t.field1,
t.field2,
ARRAY(--with each element containing two values i.e. {'TheName', 1 })
FROM MyTable t
I read that one way to do this is by selecting the values into a type and then creating an array of that type. Problem is, the rest of the function is already returning a type (which means I would then have nested types - is that OK? If so, how would you read this data back in application code - i.e. with a .Net data provider like NPGSQL?)
Any help is much appreciated.
ARRAYs can only hold elements of the same type
Your example displays a text and an integer value (no single quotes around 1). It is generally impossible to mix types in an array. To get those values into an array you have to create a composite type and then form an ARRAY of that composite type like you already mentioned yourself.
Alternatively you can use the data types json in Postgres 9.2+, jsonb in Postgres 9.4+ or hstore for key-value pairs.
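For instance, the jsonb route builds one object per pair and aggregates them into a jsonb array; a minimal sketch with made-up values (jsonb_build_object() and jsonb_agg() require Postgres 9.5+):
SELECT jsonb_agg(jsonb_build_object('name', v.nm, 'id', v.id)) AS pairs
FROM  (VALUES ('TheName', 1), ('OtherName', 2)) AS v(nm, id);
-- [{"id": 1, "name": "TheName"}, {"id": 2, "name": "OtherName"}]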
Of course, you can cast the integer to text, and work with a two-dimensional text array. Consider the two syntax variants for array input in the demo below and consult the manual on array input.
There is a limitation to overcome. If you try to aggregate an ARRAY (built from key and value) into a two-dimensional array, the aggregate function array_agg() or the ARRAY constructor error out:
ERROR: could not find array type for data type text[]
There are ways around it, though.
Aggregate key-value pairs into a 2-dimensional array
PostgreSQL 9.1 with standard_conforming_strings = on:
CREATE TEMP TABLE tbl(
id int
,txt text
,txtarr text[]
);
The column txtarr is just there to demonstrate syntax variants in the INSERT command. The third row is spiked with meta-characters:
INSERT INTO tbl VALUES
(1, 'foo', '{{1,foo1},{2,bar1},{3,baz1}}')
,(2, 'bar', ARRAY[['1','foo2'],['2','bar2'],['3','baz2']])
,(3, '}b",a{r''', '{{1,foo3},{2,bar3},{3,baz3}}'); -- txt has meta-characters
SELECT * FROM tbl;
Simple case: aggregate two integers (I use the same one twice) into a two-dimensional int array.
Update: better with a custom aggregate function.
With the polymorphic type anyarray it works for all base types:
CREATE AGGREGATE array_agg_mult (anyarray) (
SFUNC = array_cat
,STYPE = anyarray
,INITCOND = '{}'
);
Call:
SELECT array_agg_mult(ARRAY[ARRAY[id,id]]) AS x -- for int
,array_agg_mult(ARRAY[ARRAY[id::text,txt]]) AS y -- or text
FROM tbl;
Note the additional ARRAY[] layer to make it a multidimensional array.
Update for Postgres 9.5+
Postgres now ships a variant of array_agg() accepting array input and you can replace my custom function from above with this:
The manual:
array_agg(expression)
...
input arrays concatenated into array of one
higher dimension (inputs must all have same dimensionality, and cannot
be empty or NULL)
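The call from above can then drop the custom aggregate; a sketch against tbl (Postgres 9.5+):
SELECT array_agg(ARRAY[id, id]) AS x         -- for int
      ,array_agg(ARRAY[id::text, txt]) AS y  -- or text
FROM   tbl;
Note: no extra ARRAY[] layer this time, since the built-in variant already adds one dimension per the quoted manual entry.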
I suspect that without having more knowledge of your application I'm not going to be able to get you all the way to the result you need. But we can get pretty far. For starters, there is the ROW function:
# SELECT 'foo', ROW(3, 'Bob');
?column? | row
----------+---------
foo | (3,Bob)
(1 row)
So that right there lets you bundle a whole row into a cell. You could also make things more explicit by making a type for it:
# CREATE TYPE person AS (id INTEGER, name VARCHAR);
CREATE TYPE
# SELECT now(), row(3, 'Bob')::person;
now | row
-------------------------------+---------
2012-02-03 10:46:13.279512-07 | (3,Bob)
(1 row)
Incidentally, whenever you make a table, PostgreSQL makes a type of the same name, so if you already have a table like this you also have a type. For example:
# DROP TYPE person;
DROP TYPE
# CREATE TABLE people (id SERIAL, name VARCHAR);
NOTICE: CREATE TABLE will create implicit sequence "people_id_seq" for serial column "people.id"
CREATE TABLE
# SELECT 'foo', row(3, 'Bob')::people;
?column? | row
----------+---------
foo | (3,Bob)
(1 row)
See in the third query there I used people just like a type.
Now this is not likely to be as much help as you'd think, for two reasons:
1. I can't find any convenient syntax for pulling data out of the nested row.
I may be missing something, but I just don't see many people using this syntax. The only example I see in the documentation is a function taking a row value as an argument and doing something with it. I don't see an example of pulling the row out of the cell and querying against parts of it. It seems like you can package the data up this way, but it's hard to deconstruct after that. You'll wind up having to make a lot of stored procedures.
2. Your language's PostgreSQL driver may not be able to handle row-valued data nested in a row.
I can't speak for NPGSQL, but since this is a very PostgreSQL-specific feature you're not going to find support for it in libraries that support other databases. For example, Hibernate isn't going to be able to handle fetching an object stored as a cell value in a row. I'm not even sure JDBC would be able to give Hibernate the information usefully, so the problem could go quite deep.
So, what you're doing here is feasible provided you can live without a lot of the niceties. I would recommend against pursuing it though, because it's going to be an uphill battle the whole way, unless I'm really misinformed.
A simple way without hstore
SELECT
jsonb_agg(to_jsonb (t))
FROM (
SELECT
unnest(ARRAY ['foo', 'bar', 'baz']) AS table_name
) t
>>> [{"table_name": "foo"}, {"table_name": "bar"}, {"table_name": "baz"}]

passing an array into oracle sql and using the array

I am running into the following problem: I am passing an array of strings into Oracle SQL, and I would like to retrieve all the data where its id is in the list.
Here's what I've tried:
OPEN O_default_values FOR
SELECT ID AS "Header",
VALUE AS "DisplayValue",
VALUE_DESC AS "DisplayText"
FROM TBL_VALUES
WHERE ID IN I_id;
I_id is an array described as follows - TYPE gl_id IS TABLE OF VARCHAR2(15) INDEX BY PLS_INTEGER;
I've been getting the "expression is of wrong type" error.
The I_id array can sometimes be as large as 600 records.
My question is, is there a way to do what I just described, or do I need to create some sort of cursor and loop through the array?
What has been tried: creating the SQL string dynamically and concatenating the values to the end of the SQL string, then executing it. This works for small amounts of data, but the size of the string is static, which caused some other errors (like index out of range).
Have a look at this link: http://asktom.oracle.com/pls/asktom/f?p=100:11:620533477655526::::P11_QUESTION_ID:139812348065
Effectively what you want is a variable in-list with bind variables.
Do note this:
"the" is deprecated; no need for it today. TABLE is its replacement:
select * from TABLE( function );
Since you already have the type, all you need to do is something similar to the below:
OPEN O_default_values FOR
SELECT ID AS "Header",
VALUE AS "DisplayValue",
VALUE_DESC AS "DisplayText"
FROM TBL_VALUES
WHERE ID IN (SELECT column_value FROM TABLE(I_id));
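One caveat worth adding (and verifying against your Oracle version): before Oracle 12c, TABLE() only works with collection types known to SQL, so an INDEX BY PLS_INTEGER associative array like gl_id cannot be queried this way. A sketch of the schema-level nested table type you would declare instead (gl_id_tab is a hypothetical name):
-- Schema-level nested table type, visible to the SQL engine:
CREATE OR REPLACE TYPE gl_id_tab AS TABLE OF VARCHAR2(15);
With I_id declared of that type, the SELECT column_value FROM TABLE(I_id) sub-query above compiles as intended.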