I'm trying to implement selecting data based on an array of inputs, or returning all data if the array is null or empty.
SELECT * FROM table_name
WHERE
('{}' = $1 OR col = ANY($1))
This returns the error pq: op ANY/ALL (array) requires array on right side.
If I run
SELECT * FROM table_name
WHERE
(col = ANY($1))
This works just fine and I get the contents I expected.
I can also use array_length, but it requires me to cast $1 to a concrete array type. If I do (array_length($1::string[],1) < 1 OR col = ANY($1)), the array_length check always comes out false (array_length returns NULL for an empty array, so the comparison never succeeds) and evaluation falls through to col = ANY($1).
How can I return either JUST the values from $1 OR all if $1 is '{}' or NULL?
Got it:
($1::string[] IS NULL OR event_id = ANY($1))
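For completeness, here is a sketch that also treats an explicitly empty array as "return everything". It assumes a plain PostgreSQL text[] parameter (the original uses string[]) and cardinality(), available since 9.4:
-- Sketch only: treat both NULL and an empty array as "no filter"
SELECT *
FROM table_name
WHERE $1::text[] IS NULL
   OR cardinality($1::text[]) = 0
   OR col = ANY($1::text[]);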
If I have a tabkey value, e.g. DATA(lv_tabkey) = '1000041508773180000013000'., which is the concatenated value of all key fields of a table entry, and I know the name of the corresponding table:
How can I get the table entry for it without splitting the tabkey manually, i.e. without having to hard-code the order and length of each key field?
Full example:
" The first 3 chars always belong to the 'mandt' field
" which can't be filtered in the SELECT, therefore
" I ignore it and start with key2
DATA(lv_tabkey) = '1000041508773180000013000'.
"ToDo - how to make this generic? - START
DATA(lv_key2) = lv_tabkey+3(12).
DATA(lv_key3) = lv_tabkey+15(3).
DATA(lv_key4) = lv_tabkey+18(4).
DATA(lv_key5) = lv_tabkey+22(3).
DATA(lv_where) = 'key2 = ' && lv_key2 &&
' AND key3 = ' && lv_key3 &&
' AND key4 = ' && lv_key4 &&
' AND key5 = ' && lv_key5.
"ToDo - how to make this generic? - END
SELECT *
FROM table_x
INTO TABLE DATA(lt_results)
WHERE (lv_where).
I think I have to somehow iterate over the table fields, find out the key fields and their lengths - but I don't know how to do this.
The statement you are seeking is:
ASSIGN tabkey TO <structure> CASTING TYPE HANDLE r_type_struct.
Knowing the type handle of the (table key) structure, you can fill it with values in a generic way and query the table using that structure. Here is how:
DATA: handle TYPE REF TO data,
lref_struct TYPE REF TO cl_abap_structdescr.
FIELD-SYMBOLS: <key_fld> TYPE abap_componentdescr.
SELECT * UP TO 5000 ROWS
FROM cdpos
INTO TABLE @DATA(t_cdpos)
WHERE tabname NOT LIKE '/%'.
LOOP AT t_cdpos ASSIGNING FIELD-SYMBOL(<fs_cdpos>).
lref_struct ?= cl_abap_structdescr=>describe_by_name( <fs_cdpos>-tabname ).
* get key fields
DATA(key_fields) = VALUE ddfields( FOR line IN lref_struct->get_ddic_field_list( ) WHERE ( keyflag NE space ) ( line ) ).
* filling key field components
DATA(key_table) = VALUE abap_component_tab( FOR ls_key IN key_fields
( name = ls_key-fieldname
type = CAST #( cl_abap_datadescr=>describe_by_name( ls_key-domname ) )
)
).
* create key fields type handle
TRY.
DATA(r_type_struct) = cl_abap_structdescr=>create( key_table ).
CATCH cx_sy_struct_creation .
ENDTRY.
* create key type
CHECK r_type_struct IS NOT INITIAL.
CREATE DATA handle TYPE HANDLE r_type_struct.
ASSIGN handle->* TO FIELD-SYMBOL(<structure>).
* assigning final key structure
ASSIGN <fs_cdpos>-tabkey TO <structure> CASTING TYPE HANDLE r_type_struct.
* filling values
LOOP AT key_table ASSIGNING <key_fld>.
ASSIGN COMPONENT <key_fld>-name OF STRUCTURE <structure> TO FIELD-SYMBOL(<val>).
CHECK sy-subrc = 0.
<key_fld>-suffix = <val>.
ENDLOOP.
DATA(where_cond) = REDUCE string( INIT where = ` `
                                  FOR <field> IN key_table WHERE ( name <> 'MANDT' )
                                  NEXT where = where && <field>-name && ` = '` && <field>-suffix && `' AND ` ).
where_cond = substring( val = where_cond off = 0 len = strlen( where_cond ) - 4 ).
IF <fs_cdpos>-tabname = 'BNKA'.
SELECT *
INTO TABLE @DATA(lt_bnka)
FROM bnka
WHERE (where_cond).
ENDIF.
ENDLOOP.
Here I built the sample on table CDPOS, which contains table names and, additionally, the concatenated key values in the field tabkey - in other words, exactly what you are trying to use.
In a loop it determines each table's type, builds the key and makes the SQL query in a generic way. I used table BNKA here for simplicity, but the SELECT can be generalized as well via a field symbol. I also used a trick: the values are filled into the same table that holds the structure components, in the SUFFIX field.
P.S. Before passing the where condition into the query, do proper data type validation to avoid errors such as SAPSQL_DATA_LOSS, because the new syntax performs a strict check.
Your use case reminds me of how I deal with Change Document keys (CDHDR/CDPOS).
Hope it helps!
DATA:
lv_tabkey TYPE char50,
ls_table TYPE table_x.
FIELD-SYMBOLS:
<ls_src_x> TYPE x,
<ls_tgt_x> TYPE x.
"Add Client info the Table key if your table is Client dependent.
CONCATENATE sy-mandt lv_tabkey INTO lv_tabkey.
ASSIGN lv_tabkey TO <ls_src_x> CASTING.
ASSIGN ls_table TO <ls_tgt_x> CASTING.
<ls_tgt_x> = <ls_src_x>.
"Now ls_table has the key info filled including MANDT if you have the MANDT in table key.
SELECT *
FROM table_x
INTO TABLE DATA(lt_results)
WHERE key2 = ls_table-key2 AND key3 = ls_table-key3
AND key4 = ls_table-key4 AND key5 = ls_table-key5.
I am trying to update the jsonb column media, which holds two keys:
default is of type jsonb and image_set is an array of jsonb objects.
Is there a solution to update both keys in a single UPDATE statement?
test_media table
id | media | name
----+-------------------------------------------------------------------------------------------------------------------------------------------------------+-------
2 | {"default": {"w1": "fff", "w2": "aaa", "w3": "ddd"}, "image_set": [{"w1": "fff", "w2": "aaa", "w3": "ddd"}, {"w1": "bbb", "w2": "rrr", "w3": "vvv"}]} | pooja
Updating image_set:
Update test_media
set media = media #- ('{image_set,'||(select pos-1 from test_media, jsonb_array_elements(media->'image_set') with ordinality arr(value, pos) where name='pooja' and value->>'w1'='fff')
|| '}')::text[]
|| jsonb_set(media, '{default}', '{"w1": "bbb", "w2": "rrr", "w3": "vvv"}' )
where name='pooja';
Here, based on a delete, I want to update default and image_set together, depending on different conditions. The default jsonb value comes from the image_set array. I tried using a CASE statement but it is not working. The different delete conditions are:
When the jsonb value I want to delete is present in default as well as in image_set, it should be deleted from image_set and default should be updated with another value from image_set.
If not, default is not updated; just the image_set value is deleted.
If image_set contains only one element, media is updated to the empty json '{}'.
I tried two things, updating default and image_set separately.
Update test_media
set media = ( CASE
WHEN jsonb_array_length(media->'image_set')::int > 1
THEN (Select media #- ('{image_set,'||(select pos-1 from test_media , jsonb_array_elements(media->'image_set') with ordinality arr(value, pos) where name='pooja' and value->>'w1'='fff') || '}')::text[])
ELSE media = '{}'
END IF
)
where name='pooja';
Here, I got the error: CASE types boolean and jsonb cannot be matched
Secondly,
update test_media
set media = jsonb_set(media, '{default}', (select from (select CASE WHEN media->'default'->>'w1'='fff' AND jsonb_array_length(media->'image_set')::int >0 THEN (select media->'image_set'->0 from test_media where name='pooja' ) WHEN media->'default'->>'w1'='fff' AND jsonb_array_length(media->'image_set')::int = 0 THEN (select media - 'default' from test_media where name = 'pooja') END) As Sub), True)
where name='pooja';
I would be thankful for any help with a CASE expression in a single UPDATE statement. Thanks.
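For reference, a minimal sketch of the CASE shape that avoids the reported type mismatch: both branches must yield jsonb, so the ELSE returns the literal '{}'::jsonb instead of the boolean comparison media = '{}'. The image_set path here is only a placeholder; the real element index still has to be computed as in the query above.
UPDATE test_media
SET media = CASE
              WHEN jsonb_array_length(media->'image_set') > 1
              THEN media #- '{image_set,0}'   -- placeholder path for the element to delete
              ELSE '{}'::jsonb
            END
WHERE name = 'pooja';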
Test data
DROP TABLE t;
CREATE TABLE t(_id serial PRIMARY KEY, data jsonb);
INSERT INTO t(data) VALUES
('{"a":1,"b":2, "c":3}')
, ('{"a":11,"b":12, "c":13}')
, ('{"a":21,"b":22, "c":23}')
Problem statement: I want to receive an arbitrary JSONB parameter which acts as a filter on column t.data, such as
{ "b":{ "from":0, "to":20 }, "c":13 }
and use this to select matching rows from my test table t.
In this example, I want rows where b is between 0 and 20 and c = 13.
No error is required if the filter specifies a "column" (or "tag") which does not exist in t.data - it just fails to find a match.
I've used numeric values for simplicity but would like an approach which generalises to text as well.
What I have tried so far. I looked at the containment approach, which works for equality conditions, but am stumped on a generic way of handling range conditions:
select * from t
where t.data @> '{"c":13}'::jsonb;
Background: This problem arose when building a generic table-preview page on a website (for Admin users).
The page displays a filter based on various columns in whichever table is selected for preview.
The filter is then passed to a function in Postgres DB which applies this dynamic filter condition to the table.
It returns a jsonb array of the rows matching the filter specified by the user.
This jsonb array is then used to populate the Preview resultset.
The columns which make up the filter may change.
My Postgres version is 9.6 - thanks.
If you want to parse { "b":{ "from":0, "to":20 }, "c":13 } you need a parser. That is out of the scope of the json functions, but you can write a "generic" query using AND and OR to filter by such json, e.g.:
https://www.db-fiddle.com/f/jAPBQggG3p7CxqbKLMbPKw/0
with filt(f) as (values('{ "b":{ "from":0, "to":20 }, "c":13 }'::json))
select *
from t
join filt on
(f->'b'->>'from')::int < (data->>'b')::int
and
(f->'b'->>'to')::int > (data->>'b')::int
and
(data->>'c')::int = (f->>'c')::int
;
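A variant of the same idea (a sketch, not a full parser): casting the filter to jsonb and using the ? operator lets each condition pass automatically when the filter does not mention that key, which also covers the "missing column" requirement from the question:
with filt(f) as (values('{ "b":{ "from":0, "to":20 }, "c":13 }'::jsonb))
select t.*
from t
join filt on
      ( not (f ? 'b') or (data->>'b')::int between (f->'b'->>'from')::int
                                               and (f->'b'->>'to')::int )
  and ( not (f ? 'c') or (data->>'c')::int = (f->>'c')::int );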
Thanks for the comments/suggestions.
I will definitely look at GraphQL when I have more time - I'm working under a tight deadline at the moment.
It seems the consensus is that a fully generic solution is not achievable without a parser.
However, I got a workable first draft - it's far from ideal but we can work with it. Any comments/improvements are welcome ...
Test data (expanded to include dates & text fields)
DROP TABLE t;
CREATE TABLE t(_id serial PRIMARY KEY, data jsonb);
INSERT INTO t(data) VALUES
('{"a":1,"b":2, "c":3, "d":"2018-03-10", "e":"2018-03-10", "f":"Blah blah" }')
, ('{"a":11,"b":12, "c":13, "d":"2018-03-14", "e":"2018-03-14", "f":"Howzat!"}')
, ('{"a":21,"b":22, "c":23, "d":"2018-03-14", "e":"2018-03-14", "f":"Blah blah"}')
First draft of code to apply a jsonb filter dynamically, but with restrictions on what syntax is supported.
Also, it just fails silently if the syntax supplied does not match what it expects.
Timestamp handling is a bit kludgey, too.
-- Handle timestamp & text types as well as int
-- See is_timestamp(text) function at bottom
with cte as (
select t.data, f.filt, fk.key
from t
, ( values ('{ "a":11, "b":{ "from":0, "to":20 }, "c":13, "d":"2018-03-14", "e":{ "from":"2018-03-11", "to": "2018-03-14" }, "f":"Howzat!" }'::jsonb ) ) as f(filt) -- equiv to cross join
, lateral (select * from jsonb_each(f.filt)) as fk
)
select data, filt --, key, jsonb_typeof(filt->key), jsonb_typeof(filt->key->'from'), is_timestamp((filt->key)::text), is_timestamp((filt->key->'from')::text)
from cte
where
case when (filt->key->>'from') is null then
case jsonb_typeof(filt->key)
when 'number' then (data->>key)::numeric = (filt->>key)::numeric
when 'string' then
case is_timestamp( (filt->key)::text )
when true then (data->>key)::timestamp = (filt->>key)::timestamp
else (data->>key)::text = (filt->>key)::text
end
when 'boolean' then (data->>key)::boolean = (filt->>key)::boolean
else false
end
else
case jsonb_typeof(filt->key->'from')
when 'number' then (data->>key)::numeric between (filt->key->>'from')::numeric and (filt->key->>'to')::numeric
when 'string' then
case is_timestamp( (filt->key->'from')::text )
when true then (data->>key)::timestamp between (filt->key->>'from')::timestamp and (filt->key->>'to')::timestamp
else (data->>key)::text between (filt->key->>'from')::text and (filt->key->>'to')::text
end
when 'boolean' then false
else false
end
end
group by data, filt
having count(*) = ( select count(distinct key) from cte ) -- must match on all filter elements
;
create or replace function is_timestamp(s text) returns boolean as $$
begin
perform s::timestamp;
return true;
exception when others then
return false;
end;
$$ strict language plpgsql immutable;
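A quick sanity check of the helper (assuming the function above has been created):
-- true for a parseable timestamp, false otherwise
SELECT is_timestamp('2018-03-14') AS valid_ts,
       is_timestamp('Howzat!')    AS invalid_ts;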
I have the name of a table, DATA lv_tablename TYPE tabname VALUE 'xxxxx', and a generic FIELD-SYMBOLS: <lt_table> TYPE ANY TABLE. that contains entries selected from that table.
I've defined my line structure FIELD-SYMBOLS: <ls_line> TYPE ANY. which I'd use for reading from the table.
Is there a way to create a READ statement on <lt_table> fully specifying the key fields?
I am aware of the statement/addition READ TABLE xxxx WITH KEY (lv_field_name) = 'asdf'., but as far as I know this wouldn't work for a dynamic number of key fields, and I don't want to write a large number of READ TABLE statements, each with a different number of key field specifications.
Can this be done?
Actually I found this to work:
DATA lt_bseg TYPE TABLE OF bseg.
DATA ls_bseg TYPE bseg.
DATA lv_string1 TYPE string.
DATA lv_string2 TYPE string.
lv_string1 = ` `.
lv_string2 = lv_string1.
SELECT whatever FROM wherever INTO TABLE lt_bseg.
READ TABLE lt_bseg INTO ls_bseg
WITH KEY ('MANDT') = 800
(' ') = ''
('BUKRS') = '0005'
('BELNR') = '0100000000'
('GJAHR') = 2005
('BUZEI') = '002'
('') = ''
(' ') = ''
(' ') = ' '
(lv_string1) = '1'
(lv_string2) = ''.
By using this syntax one can specify just as many key fields as required. If some field names are empty, they simply get ignored, even if values are specified for them.
Note that with this exact syntax (static definitions), two fields with exactly the same name (even blank names) are not allowed.
As shown with the variables lv_string1 and lv_string2, at run-time this is no problem.
And lastly, one can specify the fields in any order (I don't know what performance benefits or penalties this syntax may have).
There seems to be a possibility (like a dynamic select statement with binding and lt_dynwhere).
Please refer to this post; someone there asked for the same requirement:
http://scn.sap.com/thread/1789520
3 ways:
READ TABLE itab WITH [TABLE] KEY (comp1) = value1 (comp2) = value2 ...
You can define a dynamic number of key fields by statically indicating the maximum number of key fields in the code, and passing empty key field names at runtime if fewer key fields are to be used.
LOOP AT itab WHERE (where) (see Addition 4 "WHERE (cond_syntax)")
Available since ABAP 7.02.
SELECT ... FROM #itab WHERE (where) ...
Available since ABAP 7.52. It may be slow if the condition is complex and cannot be handled by the ABAP kernel, i.e. it needs to be executed by the database. In that case, only a few databases are supported (I think only HANA is supported currently).
Examples (ASSERT statements are used here to prove that the conditions are true, otherwise the program would fail):
TYPES: BEGIN OF ty_table_line,
key_name_1 TYPE i,
key_name_2 TYPE i,
attr TYPE c LENGTH 1,
END OF ty_table_line,
ty_internal_table TYPE SORTED TABLE OF ty_table_line WITH UNIQUE KEY key_name_1 key_name_2.
DATA(itab) = VALUE ty_internal_table( ( key_name_1 = 1 key_name_2 = 1 attr = 'A' )
( key_name_1 = 1 key_name_2 = 2 attr = 'B' ) ).
"------------------ READ TABLE
DATA(key_name_1) = 'KEY_NAME_1'.
DATA(key_name_2) = 'KEY_NAME_2'.
READ TABLE itab WITH TABLE KEY
(key_name_1) = 1
(key_name_2) = 2
ASSIGNING FIELD-SYMBOL(<line>).
ASSERT <line> = VALUE ty_table_line( key_name_1 = 1 key_name_2 = 2 attr = 'B' ).
key_name_2 = ''. " ignore this key field
READ TABLE itab WITH TABLE KEY
(key_name_1) = 1
(key_name_2) = 2 "<=== will be ignored
ASSIGNING FIELD-SYMBOL(<line_2>).
ASSERT <line_2> = VALUE ty_table_line( key_name_1 = 1 key_name_2 = 1 attr = 'A' ).
"------------------ LOOP AT
DATA(where) = 'key_name_1 = 1 and key_name_2 = 1'.
LOOP AT itab ASSIGNING FIELD-SYMBOL(<line_3>)
WHERE (where).
EXIT.
ENDLOOP.
ASSERT <line_3> = VALUE ty_table_line( key_name_1 = 1 key_name_2 = 1 attr = 'A' ).
"---------------- SELECT ... FROM #itab
SELECT SINGLE * FROM @itab WHERE (where) INTO @DATA(line_3).
ASSERT line_3 = VALUE ty_table_line( key_name_1 = 1 key_name_2 = 1 attr = 'A' ).
Is there a way to create a small constant relation (table) in Pig?
I need to create a relation with only one tuple that contains constant values.
Something along the lines of:
A = LOAD using ConstantLoader('{(1,2,3)}');
thanks, Ido
I'm not sure why you would need that, but here's an ugly solution:
A = LOAD 'some/sample/file' ;
B = FOREACH A GENERATE '' ;
C = LIMIT B 1 ;
Now, you can use 'C' as the 'empty relation' that has one empty tuple.
DEFINE GenerateRelationFromString(string) RETURNS relation {
temp = LOAD 'somefile';
tempLimit1 = LIMIT temp 1;
$relation = FOREACH tempLimit1 GENERATE FLATTEN(TOKENIZE('$string', ','));
};
usage:
fourRows = GenerateRelationFromString('1,2,3,4');
myConstantRelation = FOREACH fourRows GENERATE (
CASE $0
WHEN '1' THEN (1, 'Ivan')
WHEN '2' THEN (2, 'Boris')
WHEN '3' THEN (3, 'Vladimir')
WHEN '4' THEN (4, 'Olga')
END
) as myTuple;
This for sure is hacky, and the right way, in my mind, would be to implement a StringLoader() that would work like this:
fourRows = LOAD '1,2,3,4' USING StringLoader(',');
The argument typically used for the file location would just be used as literal string input.
Fast answer: no.
I asked about it on the pig-dev mailing list.