From a string array, create a table - sql

Basically, given a list of strings,
I want to create a table with a select statement.
For example,
"A", "B", "C",
I want to create a table as a sub-select like:
sub-select
+---------+
| "A" |
+---------+
| "B" |
+---------+
| "C" |
+---------+
How do I do this in Redshift and Postgres?
Thanks!
Update:
select 'a' as A;
is sort of what I want; it returns:
a
+---------+
| "a" |
+---------+
How do I get multiple rows for this column a from the query select 'a' as A;?

One way to convert a column value into multiple rows is to use the split_part function and UNION.
Here is an example.
Source Table:
=> CREATE TABLE t_csv (value varchar(64));
=> INSERT INTO t_csv VALUES ('"A","B","C"');
=> INSERT INTO t_csv VALUES ('"D","E"');
=> INSERT INTO t_csv VALUES ('"F","G","H","I"');
=> SELECT * FROM t_csv;
value
-----------------
"D","E"
"A","B","C"
"F","G","H","I"
(3 rows)
Here is the query to get multiple rows.
=> WITH a AS (SELECT value FROM t_csv)
SELECT * FROM
(
  SELECT split_part(a.value, ',', 1) AS value FROM a
  UNION
  SELECT split_part(a.value, ',', 2) AS value FROM a
  UNION
  SELECT split_part(a.value, ',', 3) AS value FROM a
  UNION
  SELECT split_part(a.value, ',', 4) AS value FROM a
) t
WHERE value != '';
value
-------
"A"
"B"
"C"
"D"
"E"
"F"
"G"
"H"
"I"
(9 rows)
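If you're on Postgres, you can replace the hand-written UNIONs with generate_series (a sketch, assuming at most 4 comma-separated items per row, as above; generate_series is restricted on Redshift, so the UNION form above is the safer cross-platform choice):
-- Sketch: one split_part call per (row, position) combination.
-- DISTINCT matches the dedup behavior of UNION in the query above.
SELECT DISTINCT split_part(value, ',', n) AS value
FROM t_csv
CROSS JOIN generate_series(1, 4) AS n
WHERE split_part(value, ',', n) != '';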

I haven't had a chance to test it in a DB, but something like this (note that table is a reserved word, so the target needs another name):
SELECT * INTO my_table FROM (
  SELECT CAST('A' AS VARCHAR(100)) AS col
  UNION ALL
  SELECT 'B' AS col
  UNION ALL
  SELECT 'C' AS col
) a
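In Postgres you can also skip the UNIONs entirely with a VALUES list (a sketch; I'm not sure Redshift accepts a VALUES list in FROM, so test there first):
-- Sketch: a VALUES list used as an inline table.
SELECT col
FROM (VALUES ('A'), ('B'), ('C')) AS t(col);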

You can use string_to_array and unnest
select *
from unnest(string_to_array('"A","B","C"', ','))
(But I don't know if that is available in Redshift)
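For reference, both functions are Postgres built-ins; a quick usage sketch that also strips the literal double quotes from each item:
-- Sketch: split the string, unnest it, and trim the surrounding quotes.
SELECT trim(both '"' from val) AS value
FROM unnest(string_to_array('"A","B","C"', ',')) AS t(val);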

Related

How to use LIKE with ANY in BigQuery?

I would like to use the LIKE ANY operator to exclude rows based on an array of substrings, but BigQuery does not recognize it.
declare unlaunched_clistings array<string>;
set unlaunched_clistings = {unlaunched_clistings} ;
select * from {p}.simeon_logs.process_errors e
where not e.message like any(unlaunched_clistings)
Error : LIKE ANY is not supported at [8:32]
Is there any workaround for this?
LIKE ANY is not supported; however, you can use one of the following two approaches:
Use LIKE with ORs between the patterns:
WITH table AS (
  SELECT 'abc' AS col UNION ALL
  SELECT 'xzy' AS col
)
SELECT col
FROM table
WHERE (col LIKE '%abc%'
  OR col LIKE '%cde%'
  OR col LIKE '%something%')
Use a regex:
WITH table AS (
  SELECT 'abc' AS col
  UNION ALL
  SELECT 'xzy' AS col
)
SELECT col
FROM table
WHERE REGEXP_CONTAINS(col, 'abc|cde|something')
Both of the above will return the abc row.
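If the patterns are in an array variable, as in the question, another workaround (a sketch, assuming the array elements are LIKE patterns such as '%foo%', and with hypothetical project/table names standing in for the question's placeholders) is to pair EXISTS with UNNEST:
-- Sketch: exclude rows whose message matches any pattern in the array.
DECLARE unlaunched_clistings ARRAY<STRING> DEFAULT ['%foo%', '%bar%'];
SELECT *
FROM my_project.simeon_logs.process_errors AS e
WHERE NOT EXISTS (
  SELECT 1
  FROM UNNEST(unlaunched_clistings) AS pattern
  WHERE e.message LIKE pattern
);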

Equivalent function in HANA DB for json_object

I would like to return the query results in JSON format in HANA DB.
There is a json_object function in Oracle to achieve this requirement, but I am not seeing any such function in HANA.
Does anyone know if this kind of function exists in HANA?
For example:
Table Author contains non-json data as follows:
---------------------------
| firstName | lastName    |
---------------------------
| Paulo     | Coelho      |
| George    | Orwell      |
---------------------------
Write a select statement to return the result as JSON.
In Oracle it can be returned using query:
SELECT json_object(
KEY 'firstName' VALUE author.first_name,
KEY 'lastName' VALUE author.last_name
)
FROM author
Output looks like this:
----------------------------------------------
| json_array                                 |
----------------------------------------------
| {"firstName":"Paulo","lastName":"Coelho"}  |
| {"firstName":"George","lastName":"Orwell"} |
----------------------------------------------
Does anyone know a query or function in HANA to achieve the same result?
You can use the already-mentioned function in SAP HANA too:
JSON_QUERY (
<JSON_API_common_syntax>
[ <JSON_output_clause> ]
[ <JSON_query_wrapper_behavior> ]
[ <JSON_query_empty_behavior> ON EMPTY ]
[ <JSON_query_error_behavior> ON ERROR ]
)
For 2.0 SP04 and above there's a FOR JSON addition to the SELECT statement. As the documentation says, it is only permitted in subqueries, so you either need to select individual columns in a subselect (if you need a result set of JSON objects) or generate a JSON array as a single scalar result. Column names are inherited from subquery aliases.
Case 1:
with a as (
  select 'AAA' as field1, 'Value 1' as val from dummy union all
  select 'BBB' as field1, 'Value 2' as val from dummy
)
select
  /*Use correlated subquery with single row*/
  json_value((select a.field1, a.val from dummy for json), '$[0]') as res
from a
Or, with more typing but less dependence on the table structure:
with a as (
  select 'AAA' as field1, 'Value 1' as val from dummy union all
  select 'BBB' as field1, 'Value 2' as val from dummy
),
json_source as (
  /*Intermediate query to use as correlation source in JSON_TABLE*/
  select (select * from a for json) as tmp_json
  from dummy
)
select json_parsed.*
from json_source,
     json_table(
       json_source.tmp_json
       /*Access individual items*/
       , '$[*]'
       columns (
         res nvarchar(1000) format json path '$'
       )
     ) as json_parsed
Both return:
RES
{"FIELD1":"AAA","VAL":"Value 1"}
{"FIELD1":"BBB","VAL":"Value 2"}
Or as a scalar query returning a JSON array (Case 2):
with a as (
  select 'AAA' as field1, 'Value 1' as val from dummy union all
  select 'BBB' as field1, 'Value 2' as val from dummy
)
select *
from (select * from a for json)
JSONRESULT
[{"FIELD1":"AAA","VAL":"Value 1"},{"FIELD1":"BBB","VAL":"Value 2"}]

Oracle PL/SQL: store 2 values from the same column in 2 different variables

My table tbl has values something like this:
+-------+
|name |
+-------+
|n1 |
|n2 |
+-------+
What I want is a single query that stores the values n1 and n2 into two different variables at the same time.
declare
  val1 varchar2(2);
  val2 varchar2(2);
begin
  select name
  into --val1, val2
  from tbl
  where ...
end;
val1's value must be n1 and val2's must be n2.
Use a simple aggregation, as in:
select max(name), min(name)
into val1, val2
from tbl;
which also works for non-numeric variables.
Or alternatively use scalar subqueries in a single query, as in:
select ( select name from tbl where name = 'n1' ),
       ( select name from tbl where name = 'n2' )
into val1, val2
from dual;
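A sketch of one more single-scan option, assuming the two names are known literals: conditional aggregation picks both values out of the same pass over tbl.
-- Sketch: one scan, one row out; max() ignores the NULLs from the CASE.
select max(case when name = 'n1' then name end),
       max(case when name = 'n2' then name end)
into val1, val2
from tbl;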
To make it readable you can use two separate SELECT INTO statements:
select name
into val1
from tbl
where name = 'n1';
select name
into val2
from tbl
where name = 'n2';

SQL query to get column names if it has specific value

I have a situation here: I have a table with a flag (like 'Y' or 'N') stored under each column name. I have to select the column names of a row if they hold a specific value.
My Table:
Name|sub-1|sub-2|sub-3|sub-4|sub-5|sub-6|
-----------------------------------------
Tom | Y | | Y | Y | | Y |
Jim | Y | Y | | | Y | Y |
Ram | | Y | | Y | Y | |
So I need to find which subs have the 'Y' flag for a particular Name.
For Example:
If I select Tom, I need to get the list of 'Y' column names in the query output.
Subs
____
sub-1
sub-3
sub-4
sub-6
Your help is much appreciated.
The problem is that your database model is not normalized. If it were properly normalized, the query would be easy. So the workaround is to normalize the model "on-the-fly" to be able to write the query:
select col_name
from (
select name, sub_1 as val, 'sub_1' as col_name
from the_table
union all
select name, sub_2, 'sub_2'
from the_table
union all
select name, sub_3, 'sub_3'
from the_table
union all
select name, sub_4, 'sub_4'
from the_table
union all
select name, sub_5, 'sub_5'
from the_table
union all
select name, sub_6, 'sub_6'
from the_table
) t
where name = 'Tom'
and val = 'Y'
The above is standard SQL and should work on any (relational) DBMS.
The code below works for me (the hyphenated column names must be quoted, or they are parsed as subtractions):
select t.subs
from (
  select name, u.subs, u.val
  from TableName s
  unpivot
  (
    val
    for subs in ("sub-1", "sub-2", "sub-3", "sub-4", "sub-5", "sub-6")
  ) u
  where u.val = 'Y'
) t
where t.name = 'Tom'
Somehow I am near to the solution. I can get it for all rows (I just used 2 columns):
select col from (
  select col,
         case s.col
           when 'sub-1' then "sub-1"
           when 'sub-2' then "sub-2"
         end as val
  from mytable
  cross join (
    select 'sub-1' as col union all
    select 'sub-2'
  ) s
) s
where val = 'Y'
It gives the columns for all rows. I need the same data for a single row: if I select "Tom", I need the column names with the 'Y' value.
I'm answering this under a few assumptions here. The first is that you KNOW the names of the columns of the table in question. Second, that this is SQL Server. Oracle and MySQL have ways of performing this, but I don't know the syntax for them.
Anyway, what I'd do is perform an UNPIVOT on the data.
There are a lot of parens there, so to explain: the actual UNPIVOT statement (aliased as unpvt) takes the data and twists the columns into rows, and the SELECT associated with it provides the data that is being returned. Here I used the Name, and placed the column names under the Subs column and the corresponding value into the Val column. To be precise, I'm talking about this aspect of the code:
SELECT [Name], [Subs], [Val]
FROM
  (SELECT [Name], [Sub-1], [Sub-2], [Sub-3], [Sub-4], [Sub-5], [Sub-6]
   FROM pvt) p
UNPIVOT
  ([Val] FOR [Subs] IN
    ([Sub-1], [Sub-2], [Sub-3], [Sub-4], [Sub-5], [Sub-6])
  ) AS unpvt
My next step was to make that a 'sub-select' where I could find the specific name and val that was being hunted for. That would leave you with a SQL Statement that looks something along these lines
SELECT [Name], [Subs], [Val]
FROM (
  SELECT [Name], [Subs], [Val]
  FROM
    (SELECT [Name], [Sub-1], [Sub-2], [Sub-3], [Sub-4], [Sub-5], [Sub-6]
     FROM pvt) p
  UNPIVOT
    ([Val] FOR [Subs] IN
      ([Sub-1], [Sub-2], [Sub-3], [Sub-4], [Sub-5], [Sub-6])
    ) AS unpvt
) AS pp
WHERE 1 = 1
  AND pp.[Val] = 'Y'
  AND pp.[Name] = 'Tom'
select col from (
  select col,
         case s.col
           when 'sub-1' then "sub-1"
           when 'sub-2' then "sub-2"
           when 'sub-3' then "sub-3"
           when 'sub-4' then "sub-4"
           when 'sub-5' then "sub-5"
           when 'sub-6' then "sub-6"
         end as val
  from mytable
  join
  (
    select 'sub-1' as col union all
    select 'sub-2' union all
    select 'sub-3' union all
    select 'sub-4' union all
    select 'sub-5' union all
    select 'sub-6'
  ) s on name = 'Tom'
) s
where val = 'Y'
I included the join condition as
on name = 'Tom'

Taking the "transpose" of a table using SQL

I don't know if there is a name for this operation but it's similar to the transpose in linear algebra.
Is there a way to turn a 1-by-n table T1 such as
c_1|c_2|c_3|...|c_n
-------------------
1  |2  |3  |...|n
into an n-by-2 table like the following:
key|val
-------
c_1|1
c_2|2
c_3|3
.  |.
.  |.
c_n|n
I am assuming that each column c_i in T1 can be uniquely identified.
Basically, you need to UNPIVOT this data; you can do that using UNION ALL:
select 'c_1' col, c_1 value
from yourtable
union all
select 'c_2' col, c_2 value
from yourtable
union all
select 'c_3' col, c_3 value
from yourtable
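If you happen to be on Postgres, a sketch that avoids enumerating the columns by hand (assuming the row can be converted to jsonb; values come back as text):
-- Sketch: each column of yourtable becomes a (key, val) row.
select kv.key, kv.value as val
from yourtable t,
     jsonb_each_text(to_jsonb(t)) as kv;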
@swasheck then I'd guess they'd have to read the column names into a list:
mylistobject = cursor.execute("SELECT sql FROM sqlite_master WHERE tbl_name = 'table_name' AND type = 'table'").fetchone()
Create the new table with the column name as primary key, then the value, and then iterate over the list. Something a lot less messy than this in Python:
# For each column, read its value and write it out as a (key, value) row.
for columnName in columnNames:
    row = cursor.execute('SELECT "' + columnName + '" FROM tableToBeTransposed;').fetchone()
    cursor.execute('INSERT INTO newTable (key, value) VALUES (?, ?)', (columnName, row[0]))