SQL on-demand cache table (possibly using SQL MERGE)

I am working on implementing an on-demand SQL cache table for an application. I have
CacheTable with columns Type, Number, Value,
and a function GetValue( Type, Number ).
I want a function that does the following:
If CacheTable contains (Type, Number), return the cached Value;
else call GetValue( Type, Number ), put that value into CacheTable, and return it.
Does anyone know the most elegant way to do this?
I was thinking of using a SQL MERGE.

Not sure how elegant one can get, but we might do it just the way you describe. Query the database:
select Value from Tab1 where Type=@type and Number=@num
and if no rows are returned, compute the value, then store it in the database for next time.
However, if the "compute the value" step itself requires the database, and we can compute it in the database, then we can do the whole cycle in one database round trip -- more 'elegant' perhaps, but at least faster than three round trips (lookup, compute, store).
declare @val int
select @val=Value from Tab1 where Type=@type and Number=@num
if @@ROWCOUNT=0 BEGIN
exec compute_val @type,@num,@val OUTPUT
insert into Tab1 values (@type,@num,@val)
END
SELECT @val AS [Value] -- return the value either way
The only use for SQL MERGE here is if you think there may be concurrent users and the number could be inserted between the above select and insert, causing an error on the insert. I'd just catch the error and skip the insert (since we can assume the value won't be different, by definition).
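A minimal sketch of that catch-and-skip approach, assuming a unique constraint on (Type, Number) and SQL Server 2012 or later (for THROW); error 2627 is the unique key violation:
declare @val int
select @val=Value from Tab1 where Type=@type and Number=@num
if @@ROWCOUNT=0 BEGIN
exec compute_val @type,@num,@val OUTPUT
BEGIN TRY
insert into Tab1 values (@type,@num,@val)
END TRY
BEGIN CATCH
-- a concurrent session already cached this (Type, Number);
-- the value is the same by definition, so skip the insert
IF ERROR_NUMBER() <> 2627 THROW;
END CATCH
END
SELECT @val AS [Value]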

Related

SELECT the last ID after INSERT in Oracle

I'm aware of using RETURNING in an INSERT statement but need a way to either have it returned as a result of the INSERT statement (or enclosing transaction) or be able to retrieve that value using SELECT. Is there a way to do this? I cannot use DBMS_OUTPUT. Thanks!
RETURNING clause is the easiest way to guarantee getting the value of the ID generated by an INSERT statement. Querying for max(id) is unreliable in a multi-user environment, especially if you're using RAC.
If the ID is populated from a sequence you can get the current value of the sequence by running this in the same session as the INSERT:
select your_sequence.currval from dual;
This means you need to know the name of the sequence, which isn't always obvious, even more so if you're using 12c Identity columns.
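For 12c identity columns, the backing sequence name can be looked up in the data dictionary; a sketch, assuming 12c or later (the table name is illustrative):
select sequence_name
from user_tab_identity_cols
where table_name = 'YOUR_TABLE';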
Basically, if you want the ID use RETURNING.
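For completeness, a minimal sketch of the RETURNING approach (the table, column, and bind variable names are illustrative):
INSERT INTO orders (customer_id, order_date)
VALUES (42, SYSDATE)
RETURNING id INTO :new_id;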

Moving everything from one table to another in SQL Server while converting one of the columns

I am trying to move everything from one table to another table. I know how to do that when all of the columns have the same data type, but one column is different: on the new table it's varchar(24) and on the old table it's a bigint.
This is what I have so far. I'm using the first select statement to set the id, and the rest to add it to the new table. However, this always returns the last id, and I need it to be synced up with the second select statement.
Any suggestions on how to do this would be awesome. I tried to Google it for about an hour but couldn't come up with anything. I am using SQL Server 2012.
use DatabaseA
GO
DECLARE @id bigint;
SELECT @id = columnName FROM differentDB.tableB
INSERT INTO tableA
(buyer_id, restOfTheColumns)
Select CAST(@id AS varchar(24)), restOfTheColumns
FROM differentDB.tableB
GO
Variables don't work like that. T-SQL variables hold data values, not object names. Furthermore, that's a scalar variable. It only holds one value.
That said, you don't need the variable at all. You can just do this:
INSERT INTO tableA (buyer_id, restOfTheColumns)
SELECT CAST(columnName AS varchar(24)), restOfTheColumns
FROM differentDB.dbo.tableB -- cross-database references need the three-part name

How to pass a set of rows from one function into another?

Overview
I'm using PostgreSQL 9.1.14, and I'm trying to pass the results of a function into another function. The general idea (specifics, with a minimal example, follow) is that we can write:
select * from (select * from foo ...)
and we can abstract the sub-select away in a function and select from it:
create function foos()
returns setof foo
language sql as $$
select * from foo ...
$$;
select * from foos()
Is there some way to abstract one level further, so as to be able to do something like this (I know functions cannot actually have arguments with setof types):
create function more_foos( some_foos setof foo )
language sql as $$
select * from some_foos ... -- or unnest(some_foos), or ???
$$;
select * from more_foos(foos())
Minimal Example and Attempted Workarounds
I'm using PostgreSQL 9.1.14. Here's a minimal example:
-- 1. create a table x with three rows
drop table if exists x cascade;
create table if not exists x (id int, name text);
insert into x values (1,'a'), (2,'b'), (3,'c');
-- 2. xs() is a function with type `setof x`
create or replace function xs()
returns setof x
language sql as $$
select * from x
$$;
-- 3. xxs() should return the contents of x, too
-- Ideally the argument would be a `setof x`,
-- but that's not allowed (see below).
create or replace function xxs(x[])
returns setof x
language sql as $$
select x.* from x
join unnest($1) y
on x.id = y.id
$$;
When I load up this code, I get the expected output for the table definitions, and I can call and select from xs() as I'd expect. But when I try to pass the result of xs() to xxs(), I get an error that "function xxs(x) does not exist":
db=> \i test.sql
DROP TABLE
CREATE TABLE
INSERT 0 3
CREATE FUNCTION
CREATE FUNCTION
db=> select * from xs();
1 | a
2 | b
3 | c
db=> select * from xxs(xs());
ERROR: function xxs(x) does not exist
LINE 1: select * from xxs(xs());
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
I'm a bit confused about "function xxs(x) does not exist"; since the return type of xs() is setof x, I'd expected its result to be treated as setof x (or maybe x[]), not x. Following the complaints about the type, I can get to either of the following definitions, but while with either one I can select xxs(xs());, I can't select * from xxs(xs());.
create or replace function xxs( x )
returns setof x
language sql as $$
select x.* from x
join unnest(array[$1]) y -- unnest(array[...]) seems pretty bad
on x.id = y.id
$$;
create or replace function xxs( x )
returns setof x
language sql as $$
select * from x
where x.id in ($1.id)
$$;
db=> select xxs(xs());
(1,a)
(2,b)
(3,c)
db=> select * from xxs(xs());
ERROR: set-valued function called in context that cannot accept a set
Summary
What's the right way to pass the results of a set-returning function into another function?
(I have noted that create function … xxs( setof x ) … results in the error: ERROR: functions cannot accept set arguments, so the answer won't literally be passing a set of rows from one function to another.)
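Under the definitions above, one workaround (a sketch, not necessarily the canonical approach) is to aggregate the set into an array with an ARRAY(...) subquery, which matches the xxs(x[]) signature from step 3:
select * from xxs( array(select x from xs() x) );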
Table functions
I perform very high speed, complex database migrations for a living, using SQL as both the client and server language (no other language is used), all running server side, where the code rarely surfaces from the database engine. Table functions play a HUGE role in my work. I don't use "cursors" since they are too slow to meet my performance requirements, and everything I do is result set oriented. Table functions have been an immense help to me in completely eliminating use of cursors, achieving very high speed, and have contributed dramatically towards reducing code volume and improving simplicity.
In short, you use a query that references two (or more) table functions to pass the data from one table function to the next. The result set of the select query that calls the table functions serves as the conduit to pass the data from one table function to the next. On the DB2 platform/version I work on (and, based on a quick look at the 9.1 Postgres manual, the same appears to be true there), you can only pass a single row of column values as input to any of the table function calls, as you've discovered. However, because the table function call happens in the middle of a query's result set processing, you achieve the same effect of passing a whole result set to each table function call, albeit, in the database engine plumbing, the data is passed only one row at a time to each table function.
Table functions accept one row of input columns, and return a single result set back into the calling query (i.e. the select) that called the function. The result set columns passed back from a table function become part of the calling query's result set, and are therefore available as input to the next table function, referenced later in the same query, typically as a subsequent join. The first table function's result columns are fed as input (one row at a time) to the second table function, which returns its result set columns into the calling query's result set. Both the first and second table function result set columns are now part of the calling query's result set, and are now available as input (one row at a time) to a third table function. Each table function call widens the calling query's result set via the columns it returns. This can go on and on until you start hitting limits on the width of a result set, which likely varies from one database engine to the next.
Consider this example (which may not match Postgres' syntax requirements or capabilities, as I work on DB2). It is one of many design patterns in which I use table functions, and one of the simpler ones; I think it is very illustrative, and I anticipate it would have broad appeal if table functions were in heavy mainstream use (to my knowledge they are not, but I think they deserve more attention than they are getting).
In this example, the table functions in use are: VALIDATE_TODAYS_ORDER_BATCH, POST_TODAYS_ORDER_BATCH, and DATA_WAREHOUSE_TODAYS_ORDER_BATCH. On the DB2 version I work on, you wrap the table function inside "TABLE( place table function call and parameters here )", but based on a quick look at a Postgres manual it appears you omit the "TABLE( )" wrapper.
create table TODAYS_ORDER_PROCESSING_EXCEPTIONS as (
select TODAYS_ORDER_BATCH.*
,VALIDATION_RESULT.ROW_VALID
,POST_RESULT.ROW_POSTED
,WAREHOUSE_RESULT.ROW_WAREHOUSED
from TODAYS_ORDER_BATCH
cross join VALIDATE_TODAYS_ORDER_BATCH ( ORDER_NUMBER, [either pass the remainder of the order columns or fetch them in the function] )
as VALIDATION_RESULT ( ROW_VALID ) --example: 1/0 true/false Boolean returned
left join POST_TODAYS_ORDER_BATCH ( ORDER_NUMBER, [either pass the remainder of the order columns or fetch them in the function] )
as POST_RESULT ( ROW_POSTED ) --example: 1/0 true/false Boolean returned
on ROW_VALID = '1'
left join DATA_WAREHOUSE_TODAYS_ORDER_BATCH ( ORDER_NUMBER, [either pass the remainder of the order columns or fetch them in the function] )
as WAREHOUSE_RESULT ( ROW_WAREHOUSED ) --example: 1/0 true/false Boolean returned
on ROW_POSTED = '1'
where coalesce( ROW_VALID, '0' ) = '0' --Capture only exceptions and unprocessed work.
or coalesce( ROW_POSTED, '0' ) = '0' --Or, you can flip the logic to capture only successful rows.
or coalesce( ROW_WAREHOUSED, '0' ) = '0'
) with data
If table TODAYS_ORDER_BATCH contains 1,000,000 rows, then VALIDATE_TODAYS_ORDER_BATCH will be called 1,000,000 times, once for each row.
If 900,000 rows pass validation inside VALIDATE_TODAYS_ORDER_BATCH, then POST_TODAYS_ORDER_BATCH will be called 900,000 times.
If only 850,000 rows successfully post, then VALIDATE_TODAYS_ORDER_BATCH needs some loopholes closed LOL, and DATA_WAREHOUSE_TODAYS_ORDER_BATCH will be called 850,000 times.
If 850,000 rows successfully made it into the Data Warehouse (i.e. no additional exceptions were generated), then table TODAYS_ORDER_PROCESSING_EXCEPTIONS will be populated with 1,000,000 - 850,000 = 150,000 exception rows.
The table function calls in this example are only returning a single column, but they could be returning many columns. For example, the table function validating an order row could return the reason why an order failed validation.
In this design, virtually all the chatter between a HLL and the database is eliminated, since the HLL requestor is asking the database to process the whole batch in ONE request. This results in a reduction of millions of SQL requests to the database, in a HUGE removal of millions of HLL procedure or method calls, and as a result provides a HUGE runtime improvement. In contrast, legacy code which often processes a single row at a time, would typically send 1,000,000 fetch SQL requests, 1 for each row in TODAYS_ORDER_BATCH, plus at least 1,000,000 HLL and/or SQL requests for validation purposes, plus at least 1,000,000 HLL and/or SQL requests for posting purposes, plus 1,000,000 HLL and/or SQL requests for sending the order to the data warehouse. Granted, using this table function design, inside the table functions SQL requests are being sent to the database, but when the database makes requests to itself (i.e from inside a table function), the SQL requests are serviced much faster (especially in comparison to a legacy scenario where the HLL requestor is doing single row processing from a remote system, with the worst case over a WAN - OMG please don't do that).
You can easily run into performance problems if you use a table function to "fetch a result set" and then join that result set to other tables. In that case, the SQL optimizer can't predict what set of rows will be returned from the table function, and therefore it can't optimize the join to subsequent tables. For that reason, I rarely use them for fetching a result set, unless I know that result set will be a very small number of rows, hence not causing a performance problem, or I don't need to join to subsequent tables.
In my opinion, one reason why table functions are underutilized is that they are often perceived as only a tool to fetch a result set, which often performs poorly, so they get written off as a "poor" tool to use.
Table functions are immensely useful for pushing more functionality over to the server, for eliminating most of the chatter between the database server and programs on remote systems, and even for eliminating chatter between the database server and external programs on the same server. Even chatter between programs on the same server carries more overhead than many people realize, and much of it is unnecessary. The heart of the power of table functions lies in using them to perform actions inside result set processing.
There are more advanced design patterns for using table functions that build on the above pattern, where you can maximize result set processing even further, but this post is a lot for most to absorb already.

SQL / DB2 - retrieve value and update/increment simultaneously

I'm connecting to a DB2 database and executing SQL statements.
One example of what is being done is:
select field from library/file
[program code line finishes executing]
[increment value by one]
update library/file set field = 'incremented value'
I need to update the value immediately while returning it, rather than having to wait for the script to complete and then run a separate UPDATE statement.
The concept of what I would like to do is this:
select field from library/file; update library/file set field = (Current Value + 1); go;
Please note... this is not the common SQL database most would be familiar with; it is a DB2 database on an IBM i.
Thanks!
Consider using a DB2 SEQUENCE to manage the next available number, if this file is simply intended to have a single row storing your counter. That is what a SEQUENCE is designed to do.
To set it up, use a CREATE SEQUENCE statement.
To increment and retrieve the value, use a SEQUENCE reference expression of the form NEXT VALUE FOR sequence-name. To find out what the most recent value was, use PREVIOUS VALUE FOR sequence-name. These expressions can be used like any regular column expression, such as in a SELECT or INSERT statement.
Suppose, for example you want to do this for invoice numbers (and maybe your accounting department doesn't want their first invoice number to be 000001, so we will initialize it higher).
CREATE SEQUENCE InvoiceSeq
as decimal (7,0)
start with 27000; -- for example
You could get a number for a new invoice like this:
SELECT NEXT VALUE FOR InvoiceSeq
INTO :myvar
FROM SYSIBM/SYSDUMMY1;
But what is this SYSIBM/SYSDUMMY1 table? We're not really getting anything from the table, so why are we pretending to? The SELECT needs a FROM-table clause, but since we don't need one, let's use a VALUES INTO statement instead.
VALUES NEXT VALUE FOR InvoiceSeq
INTO :myvar;
So that has incremented the counter, and put the value into our variable. You could use that value to INSERT into our InvoiceHeaders and InvoiceDetails tables.
Or, you could increment the counter as you write an InvoiceHeader, then use it again when writing the InvoiceDetails.
INSERT INTO InvoiceHeaders
(InvoiceNbr, Customer, InvoiceDate)
VALUES (NEXT VALUE FOR InvoiceSeq, :custnbr, :invdate);
Then, for each invoice detail:
INSERT INTO InvoiceDetails
(InvoiceNbr, InvoiceLine, Reason, Fee)
VALUES (PREVIOUS VALUE FOR InvoiceSeq, :line, :itemtxt, :amt);
The PREVIOUS VALUE is local to the particular job, so there should be no risk of another job getting the same number.
update library/file set field = field + 1;
select field from library/file;
[program code uses the incremented value]
This handles the problem of another app updating the number between the time you fetch it and the time you update it. Update it and then use it. If two apps try to update simultaneously, one will wait.
A SEQUENCE object is designed exactly for this purpose, but if you are forced to keep this 'next ID' file updated, this is how I'd do it. Follow the link in the comment by @Clockwork-Muse for info on the SEQUENCE object, or try this example from V5R4.
His request is like this:
UPDATE sometable
SET somecounter = somecounter + 10,
:returnvar = somecounter + 10;
Updates and retrieves at the same time.
This is possible in MS SQL Server (in fact I use it a lot there),
but DB2 doesn't seem to have this feature.
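For reference, a sketch of the T-SQL idiom being described (SQL Server lets an UPDATE assign a column and a variable in the same statement; the names match the example above):
DECLARE @returnvar int;
UPDATE sometable
SET @returnvar = somecounter = somecounter + 10;
-- @returnvar now holds the incremented counter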

How to reuse a large query without repeating it?

If I have two queries, which I will call horrible_query_1 and ugly_query_2, and I want to perform the following two minus operations on them:
(horrible_query_1) minus (ugly_query_2)
(ugly_query_2) minus (horrible_query_1)
Or maybe I have a terribly_large_and_useful_query, and I want to use the result set it produces as part of several future queries.
How can I avoid copying and pasting the same queries in multiple places? How can I "not repeat myself" and follow DRY principles? Is this possible in SQL?
I'm using Oracle SQL. Portable SQL solutions are preferable, but if I have to use an Oracle specific feature (including PL/SQL) that's OK.
create view horrible_query_1_VIEW as
select .. ...
from .. .. ..
create view ugly_query_2_VIEW as
select .. ...
from .. .. ..
Then
(horrible_query_1_VIEW) minus (ugly_query_2_VIEW)
(ugly_query_2_VIEW) minus (horrible_query_1_VIEW)
Or, maybe, with a WITH clause:
with horrible_query_1 as (
select .. .. ..
from .. .. ..
) ,
ugly_query_2 as (
select .. .. ..
.. .. ..
)
(select * from horrible_query_1 minus select * from ugly_query_2 ) union all
(select * from ugly_query_2 minus select * from horrible_query_1)
If you want to reuse the SQL text of the queries, then defining views is the best way, as described earlier.
If you want to reuse the result of the queries, then you should consider global temporary tables. These store data for the duration of the session or the transaction (whichever you choose). They are really useful when you need to reuse calculated data many times over, especially if your queries are indeed "ugly" and "horrible" (meaning long running). See Temporary tables for more information.
If you need to keep the data longer than a session, you can consider materialized views.
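A minimal sketch of the global temporary table approach (Oracle syntax; the table, columns, and source query are illustrative):
-- the definition is created once and persists; the data is per-session
create global temporary table query_result (
id number,
name varchar2(100)
) on commit preserve rows;
-- run the horrible query once per session...
insert into query_result
select q.id, q.name from some_horrible_source q;
-- ...then reuse the stored result as often as needed
select * from query_result;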
Since you're using Oracle, I'd create pipelined table functions.
The function takes parameters and returns a collection of objects (which you have to create),
and then you SELECT * (or even specific columns) from it using the TABLE() operator, and you can use it with a WHERE clause or with JOINs. If you want a unit of reuse (a function), you're not restricted to just returning values (i.e. a scalar function): you can write a function that returns rows or result sets.
something like this:
FUNCTION RETURN_MY_ROWS(Param1 IN type...ParamX IN Type)
RETURN PARENT_OBJECT PIPELINED
IS
local_curs SYS_REFCURSOR; -- or a ref cursor alias declared in your package, if this function is in a package
out_rec ROW_RECORD_OF_CUSTOM_OBJECT := ROW_RECORD_OF_CUSTOM_OBJECT(NULL, NULL, NULL); -- one NULL for each field in the record sub-object
BEGIN
OPEN local_curs FOR
--the SELECT query that you're trying to encapsulate goes here
-- and it can be very detailed/complex and even have WITH () etc..
SELECT * FROM baseTable WHERE col1 = x;
-- now that you have captured the SELECT into a Cursor
-- here you put a LOOP to take what's in the cursor and put it in the
-- child object (that holds the individual records)
LOOP
FETCH local_curs -- fetch the next row from the ref cursor
INTO out_rec.COL1,
out_rec.COL2,
out_rec.COL3;
EXIT WHEN local_curs%NOTFOUND;
PIPE ROW(out_rec); --piping out the Object
END LOOP;
CLOSE local_curs; -- always do this
RETURN; -- we're now done
END RETURN_MY_ROWS;
after you've done that, you can use it like so
SELECT * FROM TABLE(RETURN_MY_ROWS(val1, val2));
you can INSERT SELECT or even CREATE TABLE out of it, and you can have it in joins.
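For example (illustrative, reusing the call from above):
CREATE TABLE my_copy AS
SELECT * FROM TABLE(RETURN_MY_ROWS(val1, val2));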
two more things to mention:
--ROW_RECORD_OF_CUSTOM_OBJECT is something along these lines
CREATE or REPLACE TYPE ROW_RECORD_OF_CUSTOM_OBJECT AS OBJECT
(
col1 type,
col2 type,
...
colx type
);
and PARENT_OBJECT is a table of the other object (with the field definitions) we just made
create or replace TYPE PARENT_OBJECT IS TABLE OF ROW_RECORD_OF_CUSTOM_OBJECT;
so this function needs two OBJECTs to support it, but one is a record, the other is a table of that record (you have to create the record first).
In a nutshell: the function is easy to write; you need a child object (with fields) and a parent object of type TABLE OF the child object to house it; you open the original base-table fetching SQL into a SYS_REFCURSOR (which you may need to alias if you're in a package) and read from that cursor in a loop into the individual records.
The function returns a type of PARENT_OBJECT but inside it packs the records sub-object with values from the cursor.
I hope this works for you (there may be permission issues with your DBA if you want to create OBJECTs and table functions).
If you operate on scalar values, you could write ordinary functions.
The link below explains how to do it. It basically works like writing a function in any other language: you can define parameters and return values,
which gives you the cool possibility of writing code just once. Here is how you do it:
http://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_5009.htm
Have you tried using the RESULT_CACHE hint in your queries? Also, you could
ALTER SESSION SET RESULT_CACHE_MODE=FORCE
and see if it helps.
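For illustration, a sketch of the hint; it goes in the outermost query block, and later identical statements can then be served from the server result cache (the view name is from the first answer above):
select /*+ RESULT_CACHE */ *
from horrible_query_1_VIEW;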