Select from multiple rows as one row with defaults - sql

Here is what my table looks like:
Table items
    idx     bigint unique
    merkle  char(64)
    tag     text
    digest  char(64)
Since idx is unique, I will use the subscript operator [] to denote the field in the row with the specified idx, so for example by merkle[i] I will mean the merkle field in the row that has i as its idx value.
What I would like is a query that, for a given i, selects tag[i], digest[i], merkle[2 * i], merkle[2 * i + 1], with default values for merkle[2 * i] and merkle[2 * i + 1] if no rows exist with those idx values.
So for example, say that I have
idx merkle tag digest
1 merk1 tag1 dig1
I would like my query to return tag1, dig1, "default", "default". If I have
idx merkle tag digest
1 merk1 tag1 dig1
2 merk2 tag2 dig2
I would like to get tag1, dig1, merk2, "default", if I have
idx merkle tag digest
1 merk1 tag1 dig1
2 merk2 tag2 dig2
3 merk3 tag3 dig3
I would like to get tag1, dig1, merk2, merk3, and so on.
How can I do such a thing? Is it possible to do it in just one round trip to the database? (Of course I could do it with three separate queries, but that seems inefficient.)

You can do it using LEFT JOIN and COALESCE:
SELECT t1.idx, t1.tag, t1.digest,
       COALESCE(t2.merkle, 'default'),
       COALESCE(t3.merkle, 'default')
FROM mytable AS t1
LEFT JOIN mytable AS t2 ON t2.idx = 2 * t1.idx
LEFT JOIN mytable AS t3 ON t3.idx = 2 * t1.idx + 1
This will match every row with idx = i against the rows with idx = 2 * i and idx = 2 * i + 1. If there is no match for one of these indices (or both), 'default' is selected instead. Add WHERE t1.idx = i if you only want the result for a single i.
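A quick way to sanity-check this approach is to run it against SQLite from Python. The items table follows the question; the WHERE t1.idx = ? filter is an addition here to restrict the result to one i:

```python
import sqlite3

# Minimal in-memory reproduction of the question's table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (idx INTEGER UNIQUE, merkle TEXT, tag TEXT, digest TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?, ?, ?)",
                 [(1, "merk1", "tag1", "dig1"), (2, "merk2", "tag2", "dig2")])

# tag[i], digest[i], merkle[2i], merkle[2i+1]; COALESCE fills in 'default'
# whenever the LEFT JOIN finds no row for 2i or 2i+1.
sql = """
SELECT t1.tag, t1.digest,
       COALESCE(t2.merkle, 'default'),
       COALESCE(t3.merkle, 'default')
FROM items AS t1
LEFT JOIN items AS t2 ON t2.idx = 2 * t1.idx
LEFT JOIN items AS t3 ON t3.idx = 2 * t1.idx + 1
WHERE t1.idx = ?
"""
print(conn.execute(sql, (1,)).fetchone())
# ('tag1', 'dig1', 'merk2', 'default')
```

With only rows 1 and 2 present, merkle[2] resolves to merk2 and the missing merkle[3] falls back to 'default', matching the second example in the question.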

Related

How do you insert into an SQLite JSON array without duplicates? (like a set)

Given this table:
CREATE TABLE "carts" (
    "id" INTEGER NOT NULL,
    "products" TEXT NOT NULL,
    PRIMARY KEY("id" AUTOINCREMENT)
)
Where the products column contains text values representing JSON arrays of numbers like [12,13,14], how can I insert a single item without duplicates?
Examples
Add 17 to [12,13,14] to give [12,13,14,17].
Add 13 to [12,13,14] to give [12,13,14] (no change therefore duplicate avoided).
SQLite does not have a special JSON data type (although you can use it in the definition of a column).
All JSON-like values are strings (data type TEXT), so you can use string functions and the LIKE operator to check their values for a pattern:
UPDATE carts
SET products = json_insert(products, '$[#]', ?)
WHERE id = 1
  AND REPLACE(REPLACE(products, '[', ','), ']', ',') NOT LIKE '%,' || ? || ',%';
Replace ? with the value that you want to insert.
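The statement can be exercised from Python's sqlite3 module; this sketch assumes an SQLite build with the JSON functions and version 3.31+ for the '$[#]' append path:

```python
import sqlite3

# In-memory version of the carts table from the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE carts (id INTEGER PRIMARY KEY, products TEXT NOT NULL)")
conn.execute("INSERT INTO carts (id, products) VALUES (1, '[12,13,14]')")

# REPLACE turns '[12,13,14]' into ',12,13,14,', so every element is
# comma-delimited and the NOT LIKE check catches first/last elements too.
sql = """
UPDATE carts
SET products = json_insert(products, '$[#]', ?)
WHERE id = 1
  AND REPLACE(REPLACE(products, '[', ','), ']', ',') NOT LIKE '%,' || ? || ',%'
"""
conn.execute(sql, (17, 17))  # 17 is not present yet, so it gets appended
conn.execute(sql, (13, 13))  # 13 is already present, so nothing changes
print(conn.execute("SELECT products FROM carts WHERE id = 1").fetchone()[0])
# [12,13,14,17]
```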
Alternatively, as a single statement using a CTE and UPDATE ... FROM (note the LIKE patterns must account for the surrounding brackets, otherwise the first and last elements are not matched):
WITH
new_product(id, product_id) AS (VALUES (1, 13)),
new_records AS (
    SELECT carts.id, json_insert(carts.products, '$[#]', np.product_id) AS products
    FROM carts, new_product AS np
    WHERE carts.id = np.id
      AND NOT (carts.products LIKE '[' || np.product_id || ',%'
            OR carts.products LIKE '%,' || np.product_id || ']'
            OR carts.products LIKE '%,' || np.product_id || ',%'
            OR carts.products = '[' || np.product_id || ']')
)
UPDATE carts SET products = new_records.products
FROM new_records
WHERE carts.id = new_records.id;
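A sketch of this CTE round trip via Python's sqlite3 (UPDATE ... FROM needs SQLite 3.33 or later; the LIKE patterns here include the surrounding brackets so first, last, middle, and sole elements are all caught):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE carts (id INTEGER PRIMARY KEY, products TEXT NOT NULL)")
conn.execute("INSERT INTO carts (id, products) VALUES (1, '[12,13,14]')")

sql = """
WITH
new_product(id, product_id) AS (VALUES (?, ?)),
new_records AS (
    SELECT carts.id, json_insert(carts.products, '$[#]', np.product_id) AS products
    FROM carts, new_product AS np
    WHERE carts.id = np.id
      -- bracket-aware duplicate checks: first, last, middle, or only element
      AND NOT (carts.products LIKE '[' || np.product_id || ',%'
            OR carts.products LIKE '%,' || np.product_id || ']'
            OR carts.products LIKE '%,' || np.product_id || ',%'
            OR carts.products = '[' || np.product_id || ']')
)
UPDATE carts SET products = new_records.products
FROM new_records
WHERE carts.id = new_records.id
"""
conn.execute(sql, (1, 13))  # duplicate: new_records is empty, no change
conn.execute(sql, (1, 17))  # new value: appended
print(conn.execute("SELECT products FROM carts").fetchone()[0])
# [12,13,14,17]
```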
I have found this to be the easiest way:
Insert the item into the array using the standard function json_insert.
> SELECT json_insert('[12,13,14]','$[#]',13) AS tempArray
tempArray
[12,13,14,13]
Use the table-valued function json_each to break apart the array into a temporary table.
> SELECT * FROM (SELECT json_insert('[12,13,14]','$[#]',13) AS tempArray), json_each(tempArray)
tempArray key value type atom id parent fullkey path
[12,13,14,13] 0 12 integer 12 1 $[0] $
[12,13,14,13] 1 13 integer 13 2 $[1] $
[12,13,14,13] 2 14 integer 14 3 $[2] $
[12,13,14,13] 3 13 integer 13 4 $[3] $
Take only the value column (as the others are not needed).
> SELECT value FROM (SELECT json_insert('[12,13,14]','$[#]',13) AS tempArray), json_each(tempArray)
value
12
13
14
13
Use DISTINCT to remove duplicates.
> SELECT DISTINCT value FROM (SELECT json_insert('[12,13,14]','$[#]',13) AS tempArray), json_each(tempArray)
value
12
13
14
Use the aggregation function json_group_array to combine the results in a JSON array text value.
> SELECT json_group_array(DISTINCT value) FROM (SELECT json_insert('[12,13,14]','$[#]',13) AS tempArray), json_each(tempArray)
json_group_array(DISTINCT value)
[12,13,14]
Stick this statement into an UPDATE statement, replacing the example array with a reference to the desired field:
UPDATE carts
SET products = (SELECT json_group_array(DISTINCT value) FROM (SELECT json_insert(carts.products,'$[#]',13) AS tempArray), json_each(tempArray))
WHERE id = 1
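The whole insert-then-dedupe approach can be checked end to end from Python's sqlite3; here the json_insert result is fed straight into json_each, a condensed but equivalent form of the statement above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE carts (id INTEGER PRIMARY KEY, products TEXT NOT NULL)")
conn.execute("INSERT INTO carts (id, products) VALUES (1, '[12,13,14]')")

# Append the candidate value, explode the array with json_each,
# then rebuild it with json_group_array(DISTINCT ...) to drop duplicates.
conn.execute("""
    UPDATE carts
    SET products = (SELECT json_group_array(DISTINCT value)
                    FROM json_each(json_insert(carts.products, '$[#]', ?)))
    WHERE id = 1
""", (13,))
print(conn.execute("SELECT products FROM carts WHERE id = 1").fetchone()[0])
# [12,13,14]
```

Inserting the duplicate 13 leaves the array unchanged, as in the question's second example.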

Case statement with four columns, i.e. attributes

I have a table with values "1", "0" or "". The table has four columns: p, q, r and s.
I need help creating a case statement that returns the names of the attributes that are equal to 1.
For ID 5 the case statement should return "p s".
For ID 14 it should return "s".
For ID 33 it should return "p r s". And so on.
Do I need to come up with a case statement that has every possible combination, or is there a simpler way? Below is what I have come up with thus far.
case
    when p = 1 and q = 1 then "p q"
    when p = 1 and r = 1 then "p r"
    when p = 1 and s = 1 then "p s"
    when p = 1 then p
    when q = 1 then q
    when r = 1 then r
    when s = 1 then s
    else ''
end
One solution could be this, which uses a case expression for each attribute to return the correct value, wrapped in a trim to remove the trailing space.
with tbl(id, p, q, r, s) as (
select 5,1,0,0,1 from dual union all
select 14,0,0,0,1 from dual
)
select id,
trim(regexp_replace(case p when 1 then 'p' end ||
case q when 1 then 'q' end ||
case r when 1 then 'r' end ||
case s when 1 then 's' end, '(.)', '\1 '))
from tbl;
The real solution would be to fix the database design. Packing several independent attributes into one row like this technically violates fourth normal form (4NF). The fact that an ID "has" or "is part of" attribute p, q, etc. should be split out. This design should be 3 tables: the main table with the ID, a lookup table containing info about the attributes a main ID could have (p, q, r or s), and an associative table that joins the two where appropriate (assuming an ID row can have more than one attribute and an attribute can belong to more than one ID). That is how to model a many-to-many relationship.
main_tbl        main_attr           attribute_lookup
ID  col1  col2  main_id  attr_id    attr_id  attr_desc
5               5        1          1        p
14              5        4          2        q
                14       4          3        r
                                    4        s
Then it would be simple to query this model to build your list, easy to maintain if an attribute description changes (only 1 place to change it), etc.
Select from it like this:
select m.ID, m.col1, listagg(al.attr_desc, ' ') within group (order by al.attr_desc) as attr_desc
from main_tbl m
join main_attr ma
on m.ID = ma.main_id
join attribute_lookup al
on ma.attr_id = al.attr_id
group by m.id, m.col1;
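This normalized model is Oracle-flavored (listagg), but the idea can be sketched in SQLite with group_concat standing in for listagg; note that without an ORDER BY, group_concat gives no firm ordering guarantee within a group:

```python
import sqlite3

# The three-table model from the answer (main_tbl reduced to its ID column).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE main_tbl (ID INTEGER PRIMARY KEY);
    CREATE TABLE attribute_lookup (attr_id INTEGER PRIMARY KEY, attr_desc TEXT);
    CREATE TABLE main_attr (main_id INTEGER, attr_id INTEGER);
    INSERT INTO main_tbl VALUES (5), (14);
    INSERT INTO attribute_lookup VALUES (1, 'p'), (2, 'q'), (3, 'r'), (4, 's');
    INSERT INTO main_attr VALUES (5, 1), (5, 4), (14, 4);
""")
rows = conn.execute("""
    SELECT m.ID, group_concat(al.attr_desc, ' ') AS attr_desc
    FROM main_tbl m
    JOIN main_attr ma ON m.ID = ma.main_id
    JOIN attribute_lookup al ON ma.attr_id = al.attr_id
    GROUP BY m.ID
""").fetchall()
print(rows)  # e.g. [(5, 'p s'), (14, 's')]
```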
You can use concatenation with decode() functions:
select id, decode(p,1,'p','')||decode(q,1,'q','')
||decode(r,1,'r','')||decode(s,1,'s','') as "String"
from t;
If you need spaces between the letters, consider using:
with t(id,p,q,r,s) as
(
select 5,1,0,0,1 from dual union all
select 14,0,0,0,1 from dual union all
select 31,null,0,null,1 from dual union all
select 33,1,0,1,1 from dual
), t2 as
(
select id, decode(p,1,'p','')||decode(q,1,'q','')
||decode(r,1,'r','')||decode(s,1,'s','') as str
from t
), t3 as
(
select id, substr(str,level,1) as str, level as lvl
from t2
connect by level <= length(str)
and prior id = id
and prior sys_guid() is not null
)
select id, listagg(str,' ') within group (order by lvl) as "String"
from t3
group by id;
In my opinion, it's bad practice to use columns for relationships.
You should have two tables: one called art and another called mapping. art looks like this:
ID - ART
1 - p
2 - q
3 - r
4 - s
...
and mapping maps your base IDs to your art IDs and looks like this:
MYID - ARTID
5 - 1
5 - 4
Afterwards, you can make use of Oracle's PIVOT operator, which is more dynamic.

GBQ SQL: Return blank spaces if a record is not found in the table

I have a query as below. I would like SQL to return blank spaces if a key is not found in the table.
Select * from table_A where key in (1, 2, 3, 4)
Output:
1 x y
2 a b
'' '' ''
4 ds c
Assuming table_A has 3 columns and the record with key 3 is not in the table.
Instead of empty strings you should work with NULL values to be type-safe.
NULL indicates that there is no value present in contrast to empty string or zeros which are still values of a certain type.
If you wanted to use empty strings you'd have to cast the key to a string on the fly - not very convenient.
The trick to get your result is to create an ideal key table containing all keys - I'm using GENERATE_ARRAY here, from 1 to MAX(key). Then LEFT JOIN your table to it and voila:
WITH test AS (SELECT * FROM UNNEST([
STRUCT(1 AS key, 'x' AS col1, 'y' AS col2),
STRUCT(2 AS key, 'a' AS col1, 'b' AS col2),
STRUCT(4 AS key, 'x' AS col1, 'y' AS col2)
])
)
SELECT
  test.*
FROM UNNEST(GENERATE_ARRAY(1, (SELECT MAX(key) FROM test))) AS key
LEFT JOIN test USING(key)
gives you
key   col1  col2
1     x     y
2     a     b
null  null  null
4     x     y
If you wanted all keys filled in as well, just use SELECT * instead of test.*.
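The same scaffold-and-left-join idea can be sketched outside BigQuery; here a recursive CTE in SQLite plays the role of GENERATE_ARRAY (table name and data follow the question), and the missing key 3 comes back as an all-NULL row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table_A (key INTEGER, col1 TEXT, col2 TEXT);
    INSERT INTO table_A VALUES (1, 'x', 'y'), (2, 'a', 'b'), (4, 'ds', 'c');
""")
rows = conn.execute("""
    WITH RECURSIVE keys(key) AS (
        SELECT 1
        UNION ALL
        SELECT key + 1 FROM keys WHERE key < (SELECT MAX(key) FROM table_A)
    )
    SELECT t.key, t.col1, t.col2
    FROM keys k LEFT JOIN table_A t ON t.key = k.key
    ORDER BY k.key
""").fetchall()
print(rows)
# [(1, 'x', 'y'), (2, 'a', 'b'), (None, None, None), (4, 'ds', 'c')]
```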

Is it possible to use Informix NVL with two subqueries?

I want to get a parameter. The priority for getting that parameter is that I have to look for it in Table1, but if it doesn't exist there, I have to look for it in Table2. If not, so that parameter is null (this situation should not happen, but, well, there is always an edge case).
I wanted to try something like this:
SELECT NVL(
SELECT paramValue from Table1
where paramName = "paramName" and Id = "id",
SELECT paramValue from Table2
where paramName = "paramName" and Id = "id")
But it gives me a syntax error.
Is there any way of doing something like this?
Enclose the sub-queries in their own set of parentheses, like this:
SELECT NVL((SELECT Atomic_Number FROM Elements WHERE Name = 'Tungsten'),
(SELECT Atomic_Number FROM Elements WHERE Name = 'Helium'))
FROM sysmaster:informix.sysdual;
74
SELECT NVL((SELECT Atomic_Number FROM Elements WHERE Name = 'Wunderkind'),
(SELECT Atomic_Number FROM Elements WHERE Name = 'Helium'))
FROM sysmaster:informix.sysdual;
2
SELECT NVL((SELECT Atomic_Number FROM Elements WHERE Name = 'Wunderkind'),
(SELECT Atomic_Number FROM Elements WHERE Name = 'Helios'))
FROM sysmaster:informix.sysdual;
 
The last query generated a NULL (empty line) as output, which is mimicked by a non-breaking space on the last line.
Granted, I'm not selecting from two tables; that's immaterial to the syntax, and the sub-queries would work on two separate tables as well as on one table.
Tested with Informix 12.10.FC6 and CSDK 4.10.FC6 on Mac OS X 10.11.5.
There's another way:
SELECT * FROM (
SELECT paramValue from Table1
where paramName = "paramName" and Id = "id"
union all
SELECT paramValue from Table2
where paramName = "paramName" and Id = "id"
) x
LIMIT 1
Which is IMHO easier to read, though be aware that without an ORDER BY, which of the two rows LIMIT 1 keeps is not formally guaranteed.
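The first pattern translates directly to other engines. As a sanity check in SQLite, which has no NVL, COALESCE over the same parenthesized scalar subqueries behaves identically (Elements data as in the answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Elements (Name TEXT, Atomic_Number INTEGER);
    INSERT INTO Elements VALUES ('Tungsten', 74), ('Helium', 2);
""")
# Each subquery must sit in its own parentheses, exactly as in the Informix answer.
q = """
    SELECT COALESCE((SELECT Atomic_Number FROM Elements WHERE Name = ?),
                    (SELECT Atomic_Number FROM Elements WHERE Name = ?))
"""
print(conn.execute(q, ('Tungsten', 'Helium')).fetchone()[0])    # 74
print(conn.execute(q, ('Wunderkind', 'Helium')).fetchone()[0])  # 2
print(conn.execute(q, ('Wunderkind', 'Helios')).fetchone()[0])  # None
```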

Taking the "transpose" of a table using SQL

I don't know if there is a name for this operation but it's similar to the transpose in linear algebra.
Is there a way to turn a 1 by n table T1 such as
c_1|c_2|c_3|...|c_n
-------------------
1  |2  |3  |...|n
into an n by 2 table like the following?
key|val
-------
c_1|1
c_2|2
c_3|3
.  |.
.  |.
c_n|n
I am assuming that each column c_i in T1 can be uniquely identified.
Basically, you need to UNPIVOT this data, and you can perform this using UNION ALL:
select 'c_1' col, c_1 value
from yourtable
union all
select 'c_2' col, c_2 value
from yourtable
union all
select 'c_3' col, c_3 value
from yourtable
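A minimal check of the UNION ALL unpivot, run in SQLite against a hypothetical one-row, three-column table named yourtable:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE yourtable (c_1 INTEGER, c_2 INTEGER, c_3 INTEGER);
    INSERT INTO yourtable VALUES (1, 2, 3);
""")
# Each branch emits one (column name, column value) pair per source row.
rows = conn.execute("""
    SELECT 'c_1' AS col, c_1 AS value FROM yourtable
    UNION ALL
    SELECT 'c_2', c_2 FROM yourtable
    UNION ALL
    SELECT 'c_3', c_3 FROM yourtable
""").fetchall()
print(sorted(rows))  # [('c_1', 1), ('c_2', 2), ('c_3', 3)]
```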
@swasheck then I'd guess they'd have to read the column names into a list:
mylistobject = SELECT sql FROM sqlite_master WHERE tbl_name = 'table_name' AND type = 'table'
Create the new table with the column name as the key, then the value, and then iterate over the list; something a lot less messy than this in Python:
for column_name in column_names:
    value = cursor.execute('SELECT ' + column_name + ' FROM tableToBeTransposed;').fetchone()[0]
    cursor.execute('INSERT INTO newTable(key, value) VALUES (?, ?)', (column_name, value))