Is it possible, in PostgreSQL, to access a CTE defined in a view?
By that I mean - if you have the following:
create view my_view as
with
blah as (select 1 as x, 2 as y, 3 as z)
select
x*x as x_squared,
y*y as y_squared,
z*z as z_squared
from
blah
is there any way, from the outside, of getting at blah? E.g. I'm looking for something like:
select * from my_view.blah
Basically we have large views that use a number of complicated CTEs, and it's sometimes quite difficult to troubleshoot them without splitting them all out into separate smaller views. [Yes, I would prefer to just keep them like that, but I don't have that option.]
I know I could do it by making a stored proc that pulls out the view definition, extracts the with clauses, parses up to the blah definition, changes that to the main select, gets rid of the rest, and then runs the query - but that all seems like a lot of work. I'm hoping there's a built-in way?
You can create the CTE as a view by itself. For example:
create table a (b int);
insert into a (b) values (1), (50), (200), (350), (1000);
create view blah as
select * from a where b > 100;
And then base your original view on this new intermediate one to avoid repeating code:
create view my_view as
select * from blah where b < 500;
See running example at DB Fiddle.
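The troubleshooting payoff is that the intermediate result is now directly queryable. For instance, with the sample data above:

select * from blah;
--   b
-- ----
--  200
--  350
-- 1000

select * from my_view;
--   b
-- ----
--  200
--  350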
OK - so I have a sort of solution. It's not a function - it's a procedure that turns a CTE into a materialized view.
I first wanted to formulate it as function, so you could say:
select * from cte_from( 'my_real_view', 'the_cte' )
but it appears that a function needs its result columns defined in advance, which is obviously impossible in this case. If anyone can suggest a hack to get closer to the above, I'd appreciate it. But anyway - bottom line, this works:
create procedure from_cte(view_schema text, view_name text, cte_name text, materialized_view_name text) as
$func$
declare
    _code   text;  -- the body of the target CTE
    _others text;  -- everything preceding the CTE in the view definition
    _script text;  -- the statement the materialized view is built from
begin
    execute format('drop materialized view if exists %s', materialized_view_name);

    with recursive
    -- grab the view definition and the offset just past "<cte_name> as ("
    string_provider as (
        select view_definition as the_string,
               position(concat(cte_name, ' as (') in lower(view_definition))
                   + length(cte_name) + 5 as start_location
        from information_schema.views
        where table_name = view_name
          and table_schema = view_schema
    ),
    -- walk the string character by character, tracking bracket depth,
    -- until the CTE's opening parenthesis is closed again
    string_walk as (
        select start_location as x,
               1 as brackets
        from string_provider
        union
        select x + 1,
               new_brackets
        from string_provider,
             string_walk,
             lateral (select case
                          when substring(the_string from x + 1 for 1) = '(' then brackets + 1
                          when substring(the_string from x + 1 for 1) = ')' then brackets - 1
                          else brackets
                      end as new_brackets
             ) calculated
        where new_brackets != 0
          and x < length(the_string)
    )
    select substring(the_string from start_location for 1 + max(x) - start_location),
           trim(substring(the_string from 0 for start_location - length(cte_name) - 5))
    into _code, _others
    from string_walk,
         string_provider
    group by the_string, start_location;

    -- if the CTE is the first one after WITH, _others is just "with";
    -- otherwise prepend the preceding CTEs (minus the trailing comma)
    -- so the extracted body can still reference them
    if length(_others) < 5 then
        _script := _code;
    else
        _script := concat(substring(_others from 0 for length(_others)), ' ', _code);
    end if;

    execute format('create materialized view %s as ( %s )', materialized_view_name, _script);
end
$func$ language plpgsql;
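For illustration, a hypothetical session against the my_view example from the question (assuming the view lives in the public schema; blah_mv is just a name picked here):

call from_cte('public', 'my_view', 'blah', 'blah_mv');

select * from blah_mv;
-- x | y | z
-- 1 | 2 | 3

drop materialized view blah_mv;  -- clean up once you're done troubleshooting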
Related
I'm working on a trigger.
declare
v_time number(11, 0);
begin
for i in 0..1
loop
select JSON_VALUE(body, '$.sections[0].capsules[0].timer.seconds') into v_time from bodycontent where contentid=1081438;
dbms_output.put_line(v_time);
end loop;
end;
However, the array index references are not dynamic; I can't write something
like JSON_VALUE(body, '$.sections[i].capsules[i].timer.seconds').
Is there any way I can do this?
You can use JSON_TABLE:
declare
v_time number(11, 0);
begin
for i in 0..1 loop
SELECT time
INTO v_time
FROM bodycontent b
CROSS APPLY
JSON_TABLE(
b.body,
'$.sections[*]'
COLUMNS (
section_index FOR ORDINALITY,
NESTED PATH '$.capsules[*]'
COLUMNS (
capsule_index FOR ORDINALITY,
time NUMBER(11,0) PATH '$.timer.seconds'
)
)
) j
WHERE j.section_index = i + 1
AND j.capsule_index = i + 1
AND b.contentid=1081438;
dbms_output.put_line(v_time);
end loop;
end;
/
Which, for the test data:
CREATE TABLE bodycontent ( body CLOB CHECK ( body IS JSON ), contentid NUMBER );
INSERT INTO bodycontent ( body, contentid ) VALUES (
'{"sections":[
{"capsules":[{"timer":{"seconds":0}},{"timer":{"seconds":1}},{"timer":{"seconds":2}}]},
{"capsules":[{"timer":{"seconds":3}},{"timer":{"seconds":4}},{"timer":{"seconds":5}}]},
{"capsules":[{"timer":{"seconds":6}},{"timer":{"seconds":7}},{"timer":{"seconds":8}}]}]}',
1081438
);
Outputs:
0
4
Or, you can just use a query:
SELECT section_index, capsule_index, time
FROM bodycontent b
CROSS APPLY
JSON_TABLE(
b.body,
'$.sections[*]'
COLUMNS (
section_index FOR ORDINALITY,
NESTED PATH '$.capsules[*]'
COLUMNS (
capsule_index FOR ORDINALITY,
time NUMBER(11,0) PATH '$.timer.seconds'
)
)
) j
WHERE ( j.section_index, j.capsule_index) IN ( (1,1), (2,2) )
AND b.contentid=1081438;
Which outputs:
SECTION_INDEX | CAPSULE_INDEX | TIME
------------: | ------------: | ---:
1 | 1 | 0
2 | 2 | 4
(Note: the indexes from FOR ORDINALITY are 1 higher than the array indexes in the JSON path.)
db<>fiddle here
You would need to concatenate the variable into the JSON path:
JSON_VALUE(body, '$.sections[' || to_char(i) || '].capsules[0].timer.seconds')
I don't really see how your question relates to a trigger.
So, the problem is that you want to loop over an index (to go diagonally through the array of arrays, picking up just two elements) - but the JSON_* functions don't work with variables as array indices - they require hard-coded indexes.
PL/SQL has an answer for that - native dynamic SQL, as demonstrated below.
However, note that this approach makes repeated calls to JSON_VALUE() over the same document. Depending on the actual requirement (I assume the one in your question is just for illustration) and the size of the document, it may be more efficient to make a single call to JSON_TABLE(), as illustrated in MT0's answer. If you really are extracting just two scalar values from a very large document, two calls to JSON_VALUE() may be justified. But if you are extracting many scalar values from a document that is not too complicated, a single call to JSON_TABLE() may be better, even if you don't end up using all the data it produces.
Anyway - as an illustration of native dynamic SQL, here's the alternative solution (using MT0's table):
declare
v_time number(11, 0);
begin
for i in 0..1 loop
execute immediate q'#
select json_value(body, '$.sections[#' || i ||
q'#].capsules[#' || i || q'#].timer.seconds')
from bodycontent
where contentid = 1081438#'
into v_time;
dbms_output.put_line(v_time);
end loop;
end;
/
0
4
PL/SQL procedure successfully completed.
See the example below. Run this query in your console and work through the implementation:
SELECT JSON_VALUE('{
"increment_id": "2500000043",
"item_id": "845768",
"options": [
{
"firstname": "Kevin"
},
{
"firstname": "Okay"
},
{
"lastname": "Test"
}
]
}', '$.options[0].firstname') AS value
FROM DUAL;
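For instance, with the same document (trimmed here for brevity), a different index and key pick out another element; '$.options[2].lastname' reaches the third array entry:

SELECT JSON_VALUE('{"options":[{"firstname":"Kevin"},{"firstname":"Okay"},{"lastname":"Test"}]}',
                  '$.options[2].lastname') AS value
FROM DUAL;
-- VALUE
-- -----
-- Test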
I created this query:
SELECT
(SELECT value from UNNEST(project.labels) where key = "project") as project,
ROUND(SUM(cost), 2) as cost
FROM cloud.dataset.billing_export
group by project
And it returns me something like:
Row | project | cost
1 | PJ1 | 23
2 | PJ2 | 50
Is there a way to create a VIEW for each value (each project)?
I'm trying with a UDF, and each view must have a name based on the project (e.g. view_PJ1). I got something like this (but with a lot of errors):
LOOP
SET vars = (SELECT (SELECT value FROM UNNEST(project.labels) WHERE key = "project") AS project
FROM cloud.dataset.billing_export
GROUP BY project);
IF vars=null THEN
LEAVE;
END IF;
CREATE OR REPLACE VIEW `cloud`.`dataset`.AR
AS
SELECT DISTINCT
(SELECT value from UNNEST(project.labels) where key = "project") as project,
ROUND(SUM(cost), 2) as cost
FROM cloud.dataset.billing_export
WHERE project=vars
GROUP BY project;
END LOOP;
Thanks in advance
This is a script that runs inside BigQuery and generates 4 views, giving each view a name that comes out of a SQL query:
DECLARE x INT64 DEFAULT 0;
DECLARE rs ARRAY<STRING>;
SET rs = (
WITH data AS (SELECT i FROM `fh-bigquery.public_dump.numbers_255` WHERE i < 4)
SELECT ARRAY_AGG(
'CREATE OR REPLACE VIEW `temp.number' || i
||'` AS SELECT i FROM `fh-bigquery.public_dump.numbers_255` WHERE i=' || i
)
FROM data
);
LOOP
EXECUTE IMMEDIATE(SELECT rs[OFFSET(x)]);
SET x = x + 1;
IF x >= ARRAY_LENGTH(rs) THEN
LEAVE;
END IF;
END LOOP;
The secret is to use EXECUTE IMMEDIATE with a generated string that creates each view.
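Applied to your case, the generated strings would create one view per project label. This is only a sketch: it assumes the cloud.dataset.billing_export table and label-extraction subquery from your question, and the view_<project> naming you described; you may need to sanitize the label before using it in a view name.

DECLARE x INT64 DEFAULT 0;
DECLARE rs ARRAY<STRING>;
SET rs = (
  -- build one CREATE VIEW statement per distinct project label
  SELECT ARRAY_AGG(
    'CREATE OR REPLACE VIEW `cloud.dataset.view_' || project || '` AS '
    || 'SELECT (SELECT value FROM UNNEST(project.labels) WHERE key = "project") AS project, '
    || 'ROUND(SUM(cost), 2) AS cost '
    || 'FROM `cloud.dataset.billing_export` '
    || 'WHERE (SELECT value FROM UNNEST(project.labels) WHERE key = "project") = "' || project || '" '
    || 'GROUP BY project'
  )
  FROM (
    SELECT DISTINCT (SELECT value FROM UNNEST(project.labels) WHERE key = "project") AS project
    FROM `cloud.dataset.billing_export`
  )
  WHERE project IS NOT NULL
);
LOOP
  EXECUTE IMMEDIATE(SELECT rs[OFFSET(x)]);
  SET x = x + 1;
  IF x >= ARRAY_LENGTH(rs) THEN
    LEAVE;
  END IF;
END LOOP;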
I'm trying to run the following two queries sequentially. The first one runs perfectly; the second one throws
ORA-30926: unable to get a stable set of rows in the source tables
I searched the net for a solution but I can't replicate it for my queries. Can anyone help me please?
Query 1:
merge into sdc_compare_person dcip
using (
select anumber, position, character
from sdc_diakrietposities_cip
where kind = 'Surname'
) x
on (dcip.sourcekey = x.anumber)
when matched then update set
dcip.GESVOR = substr(dcip.GESVOR, 1, x.position - 1) ||
x.character ||
substr(dcip.GESVOR, x.position + 1, length(dcip.GESVOR)-x.position)
;
188 rows merged.
Query 2:
merge into sdc_compare_person dcip
using (
select anumber, position, character
from sdc_diakrietposities_cip
where kind = 'Lastname'
) x
on (dcip.sourcekey = x.anumber)
when matched then update set
dcip.GESNAM_D = substr(dcip.GESNAM_D, 1, x.position - 1) ||
x.character ||
substr(dcip.GESNAM_D, x.position + 1, length(dcip.GESNAM_D) - x.position)
;
SQL Error: ORA-30926: Unable to get a stable set of rows in the source tables
You can always use an ordinary UPDATE; it's not as elegant as MERGE, but it should work:
UPDATE sdc_compare_person dcip
SET dcip.GESNAM_D = (
SELECT substr(dcip.GESNAM_D, 1, x.position - 1) ||
x.character ||
substr(dcip.GESNAM_D, x.position + 1, length(dcip.GESNAM_D) -
x.position)
FROM sdc_diakrietposities_cip x
where kind = 'Lastname'
AND dcip.sourcekey = x.anumber
)
WHERE dcip.sourcekey IN (
select anumber
from sdc_diakrietposities_cip
where kind = 'Lastname'
);
From the comments on the question it becomes clear that the author wants to update the same record many times.
Of course, this cannot get past ORA-30926 when attempted with a MERGE construct.
It's hard or impossible to do such a thing in pure Oracle SQL, but it's easily done with a PL/SQL function.
For example:
create or replace function replace_chars(p_str varchar2, p_id number, p_kind varchar2) return varchar2 as
    l_str varchar2(32767) := p_str;
begin
    -- apply every replacement recorded for this key/kind, in position order
    for u in (select u.position, u.character from sdc_diakrietposities_cip u
              where u.anumber = p_id and u.kind = p_kind order by u.position) loop
        -- skip positions that fall outside the string
        if (u.position >= 1 and u.position <= length(l_str)) then
            l_str := substr(l_str, 1, u.position - 1) || u.character || substr(l_str, u.position + 1);
        end if;
    end loop;
    return l_str;
end;
Use it like this:
update sdc_compare_person t
set t.GESNAM_D= replace_chars(t.GESNAM_D, t.sourcekey, 'Lastname');
I'd suggest backing up your table before running this.
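For example, a simple copy (the backup table name here is just an illustration):

create table sdc_compare_person_backup as
    select * from sdc_compare_person;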
Consider the following situation: I have a PL/pgSQL function which checks if a given auditor meets the prerequisites for the QS Auditor function. The thresholds for these prerequisites are defined in a separate table, quasar_settings, which contains only one row. Every time the function is called, a SELECT is executed to retrieve these prerequisites. This is quite inefficient, because the SELECT runs for every row. Is there a more efficient solution (global variable, caching, etc.)?
Using PostgreSQL 9.3
PL/pgSQL function
CREATE OR REPLACE FUNCTION qs_auditor_training_auditing(auditor quasar_auditor) RETURNS boolean AS $$
DECLARE
settings quasar_settings%ROWTYPE;
BEGIN
SELECT s INTO settings
FROM quasar_settings s LIMIT 1;
RETURN auditor.nb1023_procedures_hours >= settings.qs_auditor_nb1023_procedures AND
-- MD Training
auditor.mdd_hours + auditor.ivd_hours >= settings.qs_auditor_md_training AND
-- ISO 9001 Training
(
auditor.is_aproved_for_iso13485 OR
(auditor.is_aproved_for_iso9001 AND auditor.iso13485_hours >= settings.qs_auditor_iso13485_training) OR
(auditor.iso13485_hours + auditor.iso9001_hours >= settings.qs_auditor_class_room_training)
);
END;
$$ LANGUAGE plpgsql;
Example of usage:
SELECT auditor.id, qs_auditor_training_auditing(auditor) FROM quasar_auditor auditor;
Do a cross join to the settings table instead of calling the function for every row:
select
a.id,
a.nb1023_procedures_hours >= s.qs_auditor_nb1023_procedures and
-- md training
a.mdd_hours + a.ivd_hours >= s.qs_auditor_md_training and
-- iso 9001 training
(
a.is_aproved_for_iso13485 or
(
a.is_aproved_for_iso9001 and
a.iso13485_hours >= s.qs_auditor_iso13485_training
) or
(a.iso13485_hours + a.iso9001_hours >= s.qs_auditor_class_room_training)
)
from
quasar_auditor a
cross join
quasar_settings s
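If you want to keep a single call site rather than repeating the expression everywhere, the same query can be wrapped in a view (the view and column names here are just illustrative):

create or replace view quasar_auditor_training_check as
select
    a.id,
    a.nb1023_procedures_hours >= s.qs_auditor_nb1023_procedures and
    a.mdd_hours + a.ivd_hours >= s.qs_auditor_md_training and
    (
        a.is_aproved_for_iso13485 or
        (a.is_aproved_for_iso9001 and a.iso13485_hours >= s.qs_auditor_iso13485_training) or
        (a.iso13485_hours + a.iso9001_hours >= s.qs_auditor_class_room_training)
    ) as training_ok
from quasar_auditor a
cross join quasar_settings s;

-- then callers simply do:
select * from quasar_auditor_training_check;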
I need to find the next single event that appears after the last occurrence of the following pattern of events: "5065|5373|5373". My problem is that the pattern can appear in the string 1 to n times. Here's an example of some of the data I have to search through.
The events I would be looking for (shown in bold in the original post) are 5509, 5509 and 5295, respectively.
5065|5373|5373|5065|5373|5373|5065|5373|5373|5509|5329|5321
5065|5373|5373|5065|5373|5373|5509|5270|5373|5373|5373|5080|5081|5013|5040|5295|5321
5065|5373|5373|5295|5323|5321
Any help would be greatly appreciated!
If you can't create a new stored procedure or UDF, here's a recursive query that'll do it for you:
WITH Recurs(id, index, token, source) as (
SELECT id, LOCATE('5065|5373|5373|', M.PATH_2), '', M.PATH_2
FROM M
UNION ALL
SELECT id, LOCATE('5065|5373|5373|', source, index + 15),
SUBSTR(source, index + 15, 4), source
FROM Recurs
WHERE index > 0)
SELECT *
FROM Recurs
WHERE index = 0
Which yields the expected output:
ID INDEX TOKEN SOURCE
3 0 5295 5065|5373|5373|5295|5323|5321
2 0 5509 5065|5373|5373|5065|5373|5373|5509|5270|5373|5373|5373|5080|5081|5013|5040|5295|5321
1 0 5509 5065|5373|5373|5065|5373|5373|5065|5373|5373|5509|5329|5321
One fairly straightforward way to do this is with a recursive common table expression (CTE):
CREATE FUNCTION localutil.locatelastmatch(
searchparm VARCHAR(4000), inputparm VARCHAR(4000)
)
RETURNS SMALLINT
LANGUAGE SQL
RETURN
WITH rcurs(counter, output ) AS (
VALUES (0,0)
UNION ALL
SELECT counter+1, LOCATE(searchparm,inputparm,counter+1)
FROM rcurs
WHERE counter < LENGTH(inputparm) AND counter < 32767
)
SELECT MAX(output) FROM rcurs
;
It may not be the cheapest way to find the last matching occurrence, but it's at least a contender for it. By burying the complexity into a scalar user-defined function (UDF), you won't have to introduce the SQL recursion into every query that needs to search for the last instance of a pattern.
Here's how it works against your sample strings:
WITH originput(val) as (VALUES
('5065|5373|5373|5065|5373|5373|5065|5373|5373|5509|5329|5321'),
('5065|5373|5373|5065|5373|5373|5509|5270|5373|5373|5373|5080|5081|5013|5040|5295|5321'),
('5065|5373|5373|5295|5323|5321')
)
SELECT LENGTH(val) AS inputlength,
localutil.locatelastmatch( '5065|5373|5373|', val ) AS finaloffset,
SUBSTR(val, localutil.locatelastmatch( '5065|5373|5373|', val )
+ LENGTH( '5065|5373|5373|' ), 4) AS nextitem
FROM originput
;
INPUTLENGTH FINALOFFSET NEXTITEM
----------- ----------- --------
59 31 5509
84 16 5509
29 1 5295