We are migrating the SQL Server SSRS reports to Snowflake; I have a question about parameters in SQL version -
select *
from tbname
where xtype in (#xtype)
How do I write this in the Snowflake version?
In Snowflake you set a variable using the SET command and then reference it in SQL statements by prefixing it with $, e.g.:
set (min, max)=(40, 70);
select $min;
select avg(salary) from emp where age between $min and $max;
This is documented here: https://docs.snowflake.com/en/sql-reference/session-variables.html
It is possible to pass multiple values to Snowflake from the report using the ODBC driver:
? is used as a parameter placeholder
multiple values are provided as a single concatenated string, which the main query splits back into rows with SPLIT_TO_TABLE
SELECT *
FROM TBNAME
WHERE XTYPE IN (
SELECT t.value
FROM TABLE(SPLIT_TO_TABLE(?, '^')) AS t
)
A direct translation of the SQL Server version will not work and errors out:
SELECT *
FROM TBNAME
WHERE XTYPE IN (?)
Cannot add multi value query parameter '?' for dataset 'TBNAME' because it is not supported by the data extension.
Snowflake setup:
CREATE OR REPLACE TABLE TEST.TBNAME AS
SELECT 1 AS XTYPE, 'A' AS col, 'GB' AS country UNION
SELECT 2 AS XTYPE, 'B' AS col, 'BE' AS country UNION
SELECT 3 AS XTYPE, 'C' AS col, 'CH' AS country;
Report side:
Parameter: (screenshot)
Sample input values: (screenshot)
Dataset definition: (screenshot)
Dataset parameter (expression: =JOIN(Parameters!Param_XType.Value, "^")): (screenshot)
Report test: (screenshot)
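The round trip (the SSRS expression joining the selected parameter values with "^", and SPLIT_TO_TABLE splitting them back apart on the Snowflake side) can be sketched in Python; the selected values here are hypothetical:

```python
# SSRS side: =JOIN(Parameters!Param_XType.Value, "^") concatenates the
# multi-value parameter into one string for the ? placeholder.
selected = ["1", "3"]               # hypothetical user selection
param = "^".join(selected)          # -> "1^3"

# Snowflake side: SPLIT_TO_TABLE(?, '^') turns that string back into rows,
# which the IN subquery then matches against XTYPE.
tbname = [(1, "A", "GB"), (2, "B", "BE"), (3, "C", "CH")]  # TEST.TBNAME rows
wanted = set(param.split("^"))
result = [row for row in tbname if str(row[0]) in wanted]
print(result)  # rows with XTYPE 1 and 3
```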
I need to compose a dynamic SQL statement, which includes certain functions on column names.
E.g.: SELECT json_col::text, SUBSTRING ( string_col ,1 , 2 ) FROM TABLE
Since the column names are enclosed in "", I keep getting an "undefined column" error.
What is the best way to compose a dynamic SQL with functions on column names?
You could add the SQL functions to your SQL statement:
from psycopg2 import sql

cols = ["column1", "column2", "column3"]
table = "myTable"
# sql.Identifier safely double-quotes each name; the casts and functions
# stay in the query template around the placeholders
query = sql.SQL(
    "select {0}::text, upper({1}), lower({2}), {3}::jsonb from {4} where id = %s").format(
    sql.Identifier("id"),
    sql.Identifier(cols[0]),
    sql.Identifier(cols[1]),
    sql.Identifier(cols[2]),
    sql.Identifier(table)
)
# cur is an open psycopg2 cursor; mogrify shows the query after parameter binding
print(cur.mogrify(query, (240,)).decode('utf-8'))
Out:
select "id"::text, upper("column1"), lower("column2"), "column3"::jsonb from "myTable" where id = 240
I have the following query in SQL Server :
Select -1 AS DeptSK, 0 AS DeptID, "Undefined" AS DeptName
It represents a dummy record in my Department dimension.
I try to do the same in Oracle but I get this error :
FROM Keyword not found where expected
In Oracle, a SELECT statement must have a FROM clause. However, some queries, like your example, don't need any table. For these you can use the DUAL table, a special table that belongs to the schema of the user SYS but is accessible to all users.
The DUAL table has one column named DUMMY, whose data type is VARCHAR2(1), and it contains a single row with the value 'X'. Note also that Oracle treats double quotes as identifier quoting, so the string literal must use single quotes:
Select -1 AS DeptSK, 0 AS DeptID, 'Undefined' AS DeptName
FROM dual
In SQL Developer, we can use parameters in order to test our query with different values - for example:
I have a table called Fruits (code, name). I want to retrieve the codes of apples.
SELECT *
FROM fruits
WHERE name IN (:PM_NAME)
It works correctly when I fill in one value (in this case :PM_NAME equals apple).
But when I want to fill in many values it doesn't work! I've tried these forms and separators, but still no luck:
apple;orange
'apple';'orange'
('apple','orange')
['apple','orange']
"apple","orange"
In a nutshell, what is the correct format for filling in multiple values for a SQL parameter in SQL Developer?
I can't take credit for this 'trick' but the solution lies in a regular expression.
I want to search on multiple types of objects, fed to a :bind, and used in a WHERE clause.
SELECT owner,
object_name,
object_type
FROM all_objects
WHERE object_name LIKE :SEARCH
AND owner NOT IN (
'SYS',
'MDSYS',
'DBSNMP',
'SYSTEM',
'DVSYS',
'APEX_050100',
'PUBLIC',
'ORDS_METADATA',
'APEX_LISTENER'
)
AND object_type IN (
SELECT regexp_substr(:bind_ename_comma_sep_list,'[^,]+',1,level)
FROM dual CONNECT BY
regexp_substr(:bind_ename_comma_sep_list,'[^,]+',1,level) IS NOT NULL
)
ORDER BY owner,
object_name,
object_type;
I first learned of this 'trick' or technique from here.
So your query would look like this
SELECT *
FROM fruits
WHERE name IN (
SELECT regexp_substr(:PM_NAME,'[^,]+',1,level)
FROM dual CONNECT BY
regexp_substr(:PM_NAME,'[^,]+',1,level) IS NOT NULL
)
When you're prompted for values by SQL Developer, don't quote the strings, just comma separate them. Also, no spaces.
So in the input box, enter
apple,orange
And if you want ; instead of , as the separator, update the regex call accordingly.
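Outside the database, the behaviour of the '[^,]+' pattern, and why the "no spaces" rule matters, can be sketched in Python with the standard re module:

```python
import re

# Same pattern the CONNECT BY query uses: each run of non-comma
# characters becomes one list element.
elements = re.findall(r"[^,]+", "apple,orange")
print(elements)  # ['apple', 'orange']

# With a space after the comma, the space stays inside the element,
# so an IN comparison against 'orange' would fail.
with_space = re.findall(r"[^,]+", "apple, orange")
print(with_space)  # ['apple', ' orange']
```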
Could somebody explain to me why this script returns 'some_word' instead of raising an error about the nonexistent schema when trying to retrieve data from schema_that_doesnt_exist.tab?
with tab as
(
select 'some_word' str
from dual
)
select *
from schema_that_doesnt_exist.tab;
A link to the Oracle documentation about this behaviour would help me too.
I guess it is connected with how qualified names are resolved: the schema qualifier is effectively bypassed when the name matches a CTE. Compare:
MariaDB Demo
Oracle Demo
SQLite Demo -- no such table: schema_that_doesnt_exists.tab
PostgreSQL Demo -- relation "schema_that_doesnt_exists.tab" does not exist
SQLServer Demo -- Invalid object name 'schema_that_doesnt_exists.tab'.
The same behaviour is illustrated (with an image) at https://modern-sql.com/blog/2018-04/mysql-8.0
Anyway, it could be useful when you need to mock some data for database unit tests (read-only queries).
For example:
SELECT *
FROM schema.table_name -- here goes real data (lots of records)
WHERE col = 'sth';
To prepare an input dataset for a test, I would otherwise have to work with the actual data. Using WITH, I can rewrite it as:
WITH table_name AS (
SELECT 'sth' AS col, ... FROM dual UNION ALL
SELECT 'sth2' AS col, ... FROM dual...
)
SELECT *
FROM schema.table_name -- Oracle resolves this to the CTE, so the mocked data is used
WHERE col = 'sth';
More: Unit Tests on Transient Data
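Oracle's willingness to let a CTE shadow even a schema-qualified name can't be reproduced in every engine, but the basic shadowing that makes this mocking trick work can be sketched with SQLite's in-memory database (unqualified name; table and values are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table_name (col TEXT)")
con.execute("INSERT INTO table_name VALUES ('real data')")

# A CTE with the same name takes precedence in the FROM clause,
# so the query reads the mocked row instead of the real one.
rows = con.execute(
    "WITH table_name AS (SELECT 'sth' AS col) "
    "SELECT * FROM table_name WHERE col = 'sth'"
).fetchall()
print(rows)  # the mocked row, not 'real data'
```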
Oracle 12cR1 - I have a complex business process I am putting into a query.
In general, the process will be
with t1 as (select CATEGORY, PRODUCT from ... )
select <some manipulation> from t1;
t1 (the output of the first line) will look like this:
CATEGORY   PRODUCT
Database   Oracle, MS SQL Server, DB2
Language   C, Java, Python
I need the second line of the SQL query (the manipulation) to keep the CATEGORY column and to split the PRODUCT column on the comma. The output needs to look like this:
CATEGORY   PRODUCT
Database   Oracle
Database   MS SQL Server
Database   DB2
Language   C
Language   Java
Language   Python
I have looked at a couple of different CSV-splitting options. I cannot use the DBMS_UTILITY.comma_to_table function, as it has restrictions around special characters and elements starting with numbers. I found a nice TABLE function that converts a string to separate rows, called f_convert. This function is on StackOverflow, about a third of the way down the page here.
Since this is a table function, it is called like so, and gives me 3 rows, as expected:
SELECT * FROM TABLE(f_convert('Oracle, MS SQL Server, DB2'));
How do I treat this TABLE function as if it were a "column function"? Although this is totally improper SQL, I am looking for something like
with t1 as (select CATEGORY, PRODUCT from ... )
select CATEGORY from T1, TABLE(f_convert(PRODUCT) as PRODUCT from t1;
Any help appreciated...
Use CONNECT BY to "loop" through the elements of the list, where a comma-space is the delimiter. regexp_substr gets the list elements (the regex allows for NULL list elements) and the PRIOR clauses keep the categories straight.
with t1(category, product) as (
select 'Database', 'Oracle, MS SQL Server, DB2' from dual union all
select 'Language', 'C, Java, Python' from dual
)
select category,
regexp_substr(product, '(.*?)(, |$)', 1, level, NULL, 1) product
from t1
connect by level <= regexp_count(product, ', ')+1
and prior category = category
and prior sys_guid() is not null;
CATEGORY PRODUCT
-------- --------------------------
Database Oracle
Database MS SQL Server
Database DB2
Language C
Language Java
Language Python
6 rows selected.
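As a cross-check, the same expand-on-comma logic can be sketched in plain Python:

```python
t1 = [("Database", "Oracle, MS SQL Server, DB2"),
      ("Language", "C, Java, Python")]

# Split each PRODUCT on ", " and pair every piece with its CATEGORY,
# mirroring the CONNECT BY row expansion above.
expanded = [(category, product)
            for category, products in t1
            for product in products.split(", ")]
print(expanded)  # 6 (category, product) rows
```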