Weird result of using CTE - sql

Could somebody explain to me why this script returns 'some_word' instead of raising an error about a non-existent schema when it tries to retrieve data from schema_that_doesnt_exist.tab?
with tab as
(
select 'some_word' str
from dual
)
select *
from schema_that_doesnt_exist.tab;
A link to the Oracle documentation about this behaviour would help me too.

I guess it is connected with how schema-qualified names are resolved against CTE names; compare the demos:
MariaDB Demo
Oracle Demo
SQLite Demo -- no such table: schema_that_doesnt_exists.tab
PostgreSQL Demo -- relation "schema_that_doesnt_exists.tab" does not exist
SQLServer Demo -- Invalid object name 'schema_that_doesnt_exists.tab'.
The same behaviour is illustrated in an image from: https://modern-sql.com/blog/2018-04/mysql-8.0
Anyway, it can be useful when you need to mock some data for database unit tests (read-only queries).
For example:
SELECT *
FROM schema.table_name -- here goes real data (lots of records)
WHERE col = 'sth';
If I want to prepare an input dataset for a test, I would otherwise have to work with the actual data.
Using WITH, I can rewrite it as:
WITH table_name AS (
SELECT 'sth' AS col, ... FROM dual UNION ALL
SELECT 'sth2' AS col, ... FROM dual...
)
SELECT *
FROM schema.table_name -- cte is closer and data is taken from it
WHERE col = 'sth';
More: Unit Tests on Transient Data

Related

query json data from oracle 12.1 having fields value with "."

I have a table that stores JSON data, and I'm using the JSON_EXISTS function in the query. Below is sample data from the column for one of the rows.
{"fields":["query.metrics.metric1.field1",
"query.metrics.metric1.field2",
"query.metrics.metric1.field3",
"query.metrics.metric2.field1",
"query.metrics.metric2.field2"]}
I want all rows that have a particular field, so I'm trying the query below.
SELECT COUNT(*)
FROM my_table
WHERE JSON_EXISTS(fields, '$.fields[*]."query.metrics.metric1.field1"');
It does not give me any results back. Not sure what I'm missing here. Please help.
Thanks
You can use the # operator, which refers to an occurrence within the fields array, such as:
SELECT *
FROM my_table
WHERE JSON_EXISTS(fields, '$.fields?(#=="query.metrics.metric1.field1")')
Demo
Edit: The above works for 12cR2+. Since it doesn't work for your version (12cR1), try JSON_TABLE() instead, such as:
SELECT fields
FROM my_table,
JSON_TABLE(fields, '$.fields[*]' COLUMNS ( js VARCHAR2(90) PATH '$' ))
WHERE js = 'query.metrics.metric1.field1'
Demo
I have no idea how to "pattern match" on the array element, but just parsing the whole thing and filtering does the job.
with t(x, json) as (
  select 1, q'|{"fields":["a", "b"]}|' from dual union all
  select 2, q'|{"fields":["query.metrics.metric1.field1","query.metrics.metric1.field2","query.metrics.metric1.field3","query.metrics.metric2.field1","query.metrics.metric2.field2"]}|' from dual
)
select t.*
from t
where exists (
  select null
  from json_table(
         t.json,
         '$.fields[*]'
         columns (
           array_element varchar2(100) path '$'
         )
       )
  where array_element = 'query.metrics.metric1.field1'
);
In your code, you are accessing the field "query.metrics.metric1.field1" of an object in the fields array, and there is no such object (the elements are strings)...
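For what it's worth, the original path expression would only match if the array elements were objects keyed by that name instead of plain strings. A minimal sketch with made-up data illustrating the difference:
-- hypothetical data: here the array element is an object, so the
-- member-access path from the question does find a match
with t(fields) as (
  select '{"fields":[{"query.metrics.metric1.field1": 1}]}' from dual
)
select count(*)
from t
where json_exists(fields, '$.fields[*]."query.metrics.metric1.field1"');
-- returns 1, whereas the same path over an array of strings returns 0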

Abbreviate a list in PostgreSQL

How can I abbreviate a list so that
WHERE id IN ('8893171511',
'8891227609',
'8884577292',
'886790275X',
.
.
.)
becomes
WHERE id IN (name of a group/list)
The list really would have to appear somewhere. From the point of view of your code being maintainable and reusable, you could represent the list in a CTE:
WITH id_list AS (
SELECT '8893171511' AS id UNION ALL
SELECT '8891227609' UNION ALL
SELECT '8884577292' UNION ALL
SELECT '886790275X'
)
SELECT *
FROM yourTable
WHERE id IN (SELECT id FROM id_list);
If you have a persistent need to do this, then maybe the CTE should become a bona fide table somewhere in your database.
Edit: Using the Horse's suggestion, we can tidy up the CTE to the following:
WITH id_list (id) AS (
VALUES
('8893171511'),
('8891227609'),
('8884577292'),
('886790275X')
)
If the list is large, I would create a temporary table and store the list there.
That way you can ANALYZE the temporary table and get accurate estimates.
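A minimal sketch of that approach, reusing the ids from the question and the hypothetical table name yourTable:
CREATE TEMP TABLE id_list (id text PRIMARY KEY);

INSERT INTO id_list (id) VALUES
('8893171511'),
('8891227609'),
('8884577292'),
('886790275X');

ANALYZE id_list;  -- gives the planner accurate row estimates for the list

SELECT t.*
FROM yourTable t
JOIN id_list USING (id);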
The temp table and CTE answers already suggested will do.
I just wanted to bring up another approach that works if you use pgAdmin for querying (not sure about Workbench) and represent your data in a "stringy" way.
set setting.my_ids = '8893171511,8891227609';
select current_setting('setting.my_ids');
drop table if exists t;
create table t ( x text);
insert into t select 'some value';
insert into t select '8891227609';
select *
from t
where x = any( string_to_array(current_setting('setting.my_ids'), ',')::text[]);

How to do a Select in another Select with Postgresql

I need to use this query inside another SELECT in PostgreSQL:
SELECT COUNT(tn.autoship_box_transaction_id)
FROM memberships.autoship_box_transaction tn
WHERE tn.autoship_box_id = b.autoship_box_id
Do I have to use the WITH clause?
As long as the query produces a single data element, you can use it in place of an attribute:
SELECT (
SELECT COUNT(tn.autoship_box_transaction_id)
FROM memberships.autoship_box_transaction tn
WHERE tn.autoship_box_id = b.autoship_box_id
) AS cnt
, other_column
FROM wherever
;
Have a look at this SQL fiddle demonstrating the use case.
This method often comes with a performance penalty if the db engine actually iterates over the result set and performs the query on each record encountered.
The db engine's optimizer may be smart enough to avoid the extra cost (and it should in the fiddle's toy example), but you have to look at the explain plan to be sure.
Note that it's mostly an issue with 'correlated subqueries', i.e. queries embedded as shown that depend on the embedding query. Your example appears to be of this kind, as you use a table alias b which isn't defined anywhere.
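If in doubt, EXPLAIN shows whether the subquery turns into a SubPlan that is executed once per outer row. A sketch, using the hypothetical b_table / other_column names from the rewrite below:
EXPLAIN (ANALYZE, BUFFERS)
SELECT (SELECT COUNT(tn.autoship_box_transaction_id)
        FROM memberships.autoship_box_transaction tn
        WHERE tn.autoship_box_id = b.autoship_box_id) AS cnt
     , b.other_column
FROM b_table b;
-- look for a "SubPlan" node and its loop count in the output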
There might be the option of moving the subselect to the from clause (beware: This statement is for explanatory purposes only; you must adapt it to your use case, I am just wild guessing here):
SELECT stats.cnt
, b.other_column
FROM b_table b
JOIN (
SELECT COUNT(tn.autoship_box_transaction_id) cnt
, tn.autoship_box_id
FROM memberships.autoship_box_transaction tn
GROUP BY tn.autoship_box_id
) stats
ON (stats.autoship_box_id = b.autoship_box_id)
;
There are two options. You can either use the with clause, like so:
WITH some_count AS (
SELECT COUNT(tn.autoship_box_transaction_id)
FROM memberships.autoship_box_transaction tn
WHERE tn.autoship_box_id = b.autoship_box_id
)
SELECT * FROM some_count;
Or the second option is to use a sub-query, like so:
SELECT *
FROM (
  SELECT COUNT(tn.autoship_box_transaction_id)
  FROM memberships.autoship_box_transaction tn
  WHERE tn.autoship_box_id = b.autoship_box_id
) AS some_count;

sql temporary tables in rstudio notebook's sql chunks?

I am trying to use temp tables in an SQL code chunk in RStudio.
An example: when I select from one table and return it into an R object, things seem to be working:
```{sql , output.var="x", connection='db' }
SELECT count(*) n
FROM origindb
```
When I try anything with temp tables, it seems like the commands run but an empty R data.frame is returned:
```{sql , output.var="x", connection='db' }
SELECT count(*) n
INTO #whatever
FROM origindb
SELECT *
FROM #whatever
```
My impression is that the RStudio notebook SQL chunks are just set up to run one single query. So my temporary solution is to create the tables in a stored procedure in the database; then I can get the results I want with something simple. I would prefer to have a bit more flexibility in the SQL code chunks.
my db connection looks like this:
```{r,echo=F}
db <- DBI::dbConnect(odbc::odbc(),
driver = "SQL Server",
server = 'sql',
database = 'databasename')
```
As in this related question, it will work if you put
set nocount on
at the top of your chunk. R seems to get confused when it's handed back the rowcount for the temp table.
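For example, the chunk from the question with that line added at the top:
```{sql , output.var="x", connection='db' }
set nocount on

SELECT count(*) n
INTO #whatever
FROM origindb

SELECT *
FROM #whatever
```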
I accomplished my goal using CTEs. As long as you define your CTEs in the order that they will be used, it works. It is just like using temp tables, with one big exception: the CTEs are gone after the query finishes, whereas temp tables exist until your SPID is killed (typically via a disconnect).
WITH CTE_WHATEVER AS (
SELECT COUNT(*) n
FROM origindb
)
SELECT *
FROM CTE_WHATEVER
You can also do this in place of multiple temp tables:
WITH CTE1 AS (
SELECT
STATE
,COUNTY
,COUNT(*) n
FROM origindb
GROUP BY
STATE
,COUNTY
),
CTE2 AS (
SELECT
STATE
,AVG(n) AS COUNTY_AVG
FROM CTE1
GROUP BY
STATE
)
SELECT *
FROM CTE2
WHERE COUNTY_AVG > 1000000
Sorry for the formatting. I couldn't figure out how to get the carriage returns to work in the code block.
I hope this helps.
You could manage a transaction within the SQL chunk by defining BEGIN and COMMIT clauses. For example:
BEGIN ;
CREATE TABLE foo (id varchar) ;
COMMENT ON TABLE foo IS 'Foo';
COMMIT ;

Oracle- create a temporary resultset for use in a query

How do I create a temporary result set for use in a SQL query, without creating a table and inserting the data?
Example: I have a list of, say, 10 codes. I want to put this list into a query, and then query the database to see which codes in this temporary list do not exist in a table.
If it was already in a table, I could do something like:
SELECT
ITEM_CODE
FROM
TEMP_ITEMS
MINUS
SELECT
ITEM_CODE
FROM
M_ITEMS
Is there a way, using pure SQL and no PL/SQL, to create a temporary rowset before querying?
Please don't answer with something like:
SELECT 1 FROM DUAL
UNION ALL
SELECT 2 FROM DUAL
I am sort of thinking of something where I can provide my codes in an IN statement, and it turns that into rows for use in a later query.
Edit: so everyone knows my objective here: I sometimes get a list of product codes and need to find which ones are not set up in our system. I want a quick way to throw the list into a SQL statement so I can see which ones are not in the system (rather than importing the data, etc.). I usually put the codes into Excel, then use a formula such as:
="'"&A1&"',"
So that I can create my comma separated list.
If you are using Oracle 11g, you can do this:
with t as
(
select (column_value).getnumberval() Codes from xmltable('1,2,3,4,5')
)
SELECT * FROM t
WHERE NOT EXISTS (SELECT 1 FROM M_ITEMS M WHERE codes = M.ITEM_CODE);
or
with t as
(
select (column_value).getstringval() Codes from xmltable('"A","B","C"')
)
SELECT * FROM t
WHERE NOT EXISTS (SELECT 1 FROM M_ITEMS M WHERE codes = M.ITEM_CODE);
I would go with:
with t as (
select 1 as val from dual union all
select 2 as val from dual
)
select . . .
And then use "t" or whatever you call it, in the subsequent query block.
I'm not sure what the objection is to using the select method . . . just pop the values you want in a column in Excel and produce the code for each value by copying down the formula. Then paste the results back into your query interface.
If you want to use a temporary table, you can use the values clause. Alternatively, you can use string functions if you only want IN functionality. Put the values in a comma separated list and check to see if it matches a particular value:
where ','||<list>||',' like '%,'||col||',%'
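For instance, with a made-up list 'A1,B2,C3', the predicate is true whenever col equals one of the listed codes:
-- hypothetical code values, purely for illustration
where ',' || 'A1,B2,C3' || ',' like '%,' || col || ',%'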
This one is interesting because it's not a union and fits in a single select. You have to enter the string with delimiters ('a/b/c/def') two times, though:
SELECT regexp_substr('a/b/c/def', '[^/]+', 1, ROWNUM) var,
regexp_substr('2/432/sd/fsd', '[^/]+', 1, ROWNUM) var2
FROM dual
CONNECT BY LEVEL <= length(regexp_replace('a/b/c/def', '[^/]', '')) + 1;
var   var2
===   ====
a     2
b     432
c     sd
def   fsd
Note: Credits go to : https://stackoverflow.com/a/1381495/463056
So using the WITH clause, it would give something like:
with tempo as (
SELECT regexp_substr('a/b/c/def', '[^/]+', 1, ROWNUM) var,
regexp_substr('2/432/sd/fsd', '[^/]+', 1, ROWNUM) var2
FROM dual
CONNECT BY LEVEL <= length(regexp_replace('a/b/c/def', '[^/]', '')) + 1
)
select ...
Or you can use it in a FROM clause:
select ...
from (
SELECT regexp_substr('a/b/c/def', '[^/]+', 1, ROWNUM) var,
regexp_substr('2/432/sd/fsd', '[^/]+', 1, ROWNUM) var2
FROM dual
CONNECT BY LEVEL <= length(regexp_replace('a/b/c/def', '[^/]', '')) + 1
) tempo
There are two approaches I would lean towards:
1. Global Temporary Table
Although you say you don't want to create a table, it depends on why you don't want a table. If you choose to create a Global Temporary table, the rows are only visible to the session that inserted them, so it's like having a private in-memory table but gives you all the benefits of a real table - i.e. being able to query and join to it.
2. Pipelined function
You can create a function that returns the results in a form that can be queried using the TABLE() operator. More info here: http://www.oracle-base.com/articles/misc/pipelined-table-functions.php
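A minimal sketch of the pipelined approach, assuming a hypothetical collection type code_tab and function code_list that split a comma-separated string of codes:
-- hypothetical names; adapt the element size and delimiter as needed
CREATE TYPE code_tab AS TABLE OF VARCHAR2(30);
/
CREATE OR REPLACE FUNCTION code_list(p_csv IN VARCHAR2)
  RETURN code_tab PIPELINED
IS
BEGIN
  FOR i IN 1 .. REGEXP_COUNT(p_csv, '[^,]+') LOOP
    PIPE ROW (REGEXP_SUBSTR(p_csv, '[^,]+', 1, i));
  END LOOP;
  RETURN;
END;
/
-- codes in the list that are not set up in M_ITEMS
SELECT column_value AS item_code
FROM TABLE(code_list('A1,B2,C3'))
MINUS
SELECT item_code
FROM m_items;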
It's a bit hokey-looking, but you can parse a string into separate rows using regular expressions, assuming you are using 10g or later. For example:
SELECT REGEXP_SUBSTR('a,b,c,def,g', '[^ |,]+', 1, LEVEL) parsed_str
FROM dual
CONNECT BY LEVEL <= REGEXP_COUNT('a,b,c,def,g', '[^ |,]+');
PARSED_STR
--------------------------------------------
a
b
c
def
g
Personally, I would find a pipelined table function or a PL/SQL block that generates a collection easier to understand, but if you have to do it in SQL you can.
Based on your edit, if you are getting a list of product codes that is already in some sort of file, it would seem to make more sense to use an external table to expose the file as a table, or to use SQL*Loader to load the data into a table (temporary or permanent) that you can query. Barring either of those options, if you really want to manipulate the list in Excel first, it would make more sense to generate an IN list in Excel and just copy and paste that into your query. Generating a comma-separated list of codes in Excel only to parse that list into its constituent elements in SQL seems like too many steps.