How to store an array of date ranges in Postgres? - sql

I am trying to build a schedule. I generate an array of objects on the client containing date ranges:
[
{start: "2020-07-06 0:0", end: "2020-07-10 23:59"},
{start: "2020-07-13 0:0", end: "2020-07-17 23:59"}
]
I have a column of type daterange[]. What is the proper way to format this data to insert it into my table?
This is what I have so far:
INSERT INTO schedules(owner, name, dates) VALUES (
1,
'work',
'{
{[2020-07-06 0:0,2020-07-10 23:59]},
{[2020-07-13 0:0,2020-07-17 23:59]}
}'
)

I think you want:
insert into schedules(owner, name, dates) values (
1,
'work',
array[
'[2020-07-06, 2020-07-11)'::daterange,
'[2020-07-13, 2020-07-18)'::daterange
]
);
Rationale:
- you are using dateranges, so you cannot store time portions (for that, you would need tsrange instead; see the sketch after the demo below)
- as your code stands, you want to include the whole last day, so the equivalent daterange has an inclusive lower bound and an exclusive upper bound one day after the last day (hence [ at the left side, and ) at the right side)
- explicit casting is needed so Postgres can recognize that the array elements have the proper datatype (otherwise, they look like text)
- then, you can surround the list of ranges with the array[] constructor
Demo on DB Fiddle:
owner | name | dates
----: | :--- | :----------------------------------------------------
1 | work | {"[2020-07-06,2020-07-11)","[2020-07-13,2020-07-18)"}
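If the time portions do matter, the same pattern works with tsrange; a minimal sketch, assuming the dates column is redefined as tsrange[] instead of daterange[]:
insert into schedules(owner, name, dates) values (
1,
'work',
array[
'[2020-07-06 00:00,2020-07-10 23:59]'::tsrange,
'[2020-07-13 00:00,2020-07-17 23:59]'::tsrange
]
);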

Related

SQL get the value of a nested key in a jsonb field

Let's suppose I have a table my_table with a field named data, of type jsonb, which thus contains a json data structure.
Let's suppose that if I run
select id, data from my_table where id=10;
I get
id | data
------------------------------------------------------------------------------------------
10 | {
|"key_1": "value_1" ,
|"key_2": ["value_list_element_1", "value_list_element_2", "value_list_element_3" ],
|"key_3": {
| "key_3_1": "value_3_1",
| "key_3_2": {"key_3_2_1": "value_3_2_1", "key_3_2_2": "value_3_2_2"},
| "key_3_3": "value_3_3"
| }
| }
so in pretty formatting, the content of column data is
{
"key_1": "value_1",
"key_2": [
"value_list_element_1",
"value_list_element_2",
"value_list_element_3"
],
"key_3": {
"key_3_1": "value_3_1",
"key_3_2": {
"key_3_2_1": "value_3_2_1",
"key_3_2_2": "value_3_2_2"
},
"key_3_3": "value_3_3"
}
}
I know that if I want to get the value of a top-level key of the JSON directly in a column, I can do it with the ->> operator.
For example, if I want to get the value of key_2, what I do is
select id, data->>'key_2' alias_for_key_2 from my_table where id=10;
which returns
id | alias_for_key_2
------------------------------------------------------------------------------------------
10 |["value_list_element_1", "value_list_element_2", "value_list_element_3" ]
Now let's suppose I want to get the value of key_3_2_1, that is value_3_2_1.
How can I do it?
I have tried with
select id, data->>'key_3'->>'key_3_2'->>'key_3_2_1' alias_for_key_3_2_1 from my_table where id=10;
but I get
select id, data->>'key_3'->>'key_3_2'->>'key_3_2_1' alias_for_key_3_2_1 from my_table where id=10;
^
HINT: No operators found with name and argument types provided. Types may need to be converted explicitly.
what am I doing wrong?
The problem in the query
select id, data->>'key_3'->>'key_3_2'->>'key_3_2_1' alias_for_key_3_2_1 --this is wrong!
from my_table
where id=10;
was that by using the ->> operator I was turning the json into a string, so with the next ->> operator I was trying to read the json key key_3_2 out of a string, which makes no sense.
Thus one has to use the -> operator, which does not convert the json into a string, until one gets to the "final" key.
so the query I was looking for was
select id, data->'key_3'->'key_3_2'->>'key_3_2_1' alias_for_key_3_2_1 --final ->> : this gets the value of 'key_3_2_1' as string
from my_table
where id=10;
or alternatively
select id, data->'key_3'->'key_3_2'->'key_3_2_1' alias_for_key_3_2_1 --final -> : this gets the value of 'key_3_2_1' as json / jsonb
from my_table
where id=10;
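As a side note, Postgres also has the #> and #>> path operators, which take the whole key path as a text array; a sketch of an equivalent query:
select id, data #>> '{key_3,key_3_2,key_3_2_1}' alias_for_key_3_2_1 -- #>> returns the value as text
from my_table
where id=10;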
More info on JSON Functions and Operators can be found here

Change null to empty array in databricks SQL?

I have a value in a JSON column that is sometimes all null in an Azure Databricks table. The full process to get to JSON_TABLE is: read parquet, infer schema of JSON column, convert the column from JSON string to deeply nested structure, explode any arrays within. I am working in SQL with python-defined UDFs (json_exists() checks the schema to see if the key is possible to use, json_get() gets a key from the column or returns a default) and want to do the following:
SELECT
ID, EXPLODE(json_get(JSON_COL, 'ARRAY', NULL)) AS SINGLE_ARRAY_VALUE
FROM
JSON_TABLE
WHERE
JSON_COL IS NOT NULL AND
json_exists(JSON_COL, 'ARRAY')==1
When the data has at least one instance of JSON_COL containing ARRAY, the schema is such that this has no problems. If, however, the data has all null values in JSON_COL.ARRAY, an error occurs because the column has been inferred as a string type (error received: input to function explode should be array or map type, not string). Unfortunately, while the json_exists() function returns the expected values, the error still occurs even when the returned dataset would be empty.
Can I get around this error via casting or replacement of nulls? If not, what is an alternative that still allows inferring the schema of the JSON?
Note: This is a simplified example. I am writing code to generate SQL code for hundreds of similar data structures, so while I am open to workarounds, a direct solution would be ideal. Please ask if anything is unclear.
Example table that causes error:
| ID | JSON_COL |
| 1 | {"_corrupt_record": null, "otherInfo": [{"test": 1, "from": 3}]} |
| 2 | {"_corrupt_record": null, "otherInfo": [{"test": 5, "from": 2}]} |
Example table that does not cause error:
| ID | JSON_COL |
| 1 | {"_corrupt_record": null, "array": [{"test": 1, "from": 3}]} |
| 2 | {"_corrupt_record": null, "otherInfo": [{"test": 5, "from": 2}]} |
This question seems like it might hold the answer, but I was not able to get anything working from it.
You can filter the table in a subquery before calling json_get and explode, so that you only explode when json_get returns a non-null value:
SELECT
ID, EXPLODE(json_get(JSON_COL, 'ARRAY', NULL)) AS SINGLE_ARRAY_VALUE
FROM (
SELECT *
FROM JSON_TABLE
WHERE
JSON_COL IS NOT NULL AND
json_exists(JSON_COL, 'ARRAY')==1
)

how to update value of key in json type field on PostgreSQL

I am running PostgreSQL version 9.6. I am storing data in a json type field.
insert into testable (name, jsonvalues)
values("Jacky", "{'has_attachment': True, 'flag':'True'")
I want to update "flag" to "False".
What query do I have to use?
Here is one way to do it:
update testable
set jsonvalues = jsonb_set(jsonvalues::jsonb, '{flag}', '"False"')::json
where name = 'Jacky'
Demo on DB Fiddle:
name | jsonvalues
:---- | :------------------------------------------
Jacky | {"flag": "False", "has_attachment": "True"}
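As an aside, if the intent were a JSON boolean rather than the string "False" (an assumption about intent, not something the question states), the new value could be given as a jsonb literal instead:
update testable
set jsonvalues = jsonb_set(jsonvalues::jsonb, '{flag}', 'false'::jsonb)::json -- 'false'::jsonb is the JSON boolean, not a string
where name = 'Jacky'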
Note that your original insert query is not valid Postgres syntax. You need single quotes around the values, and double quotes within the JSON, so that should be:
insert into testable (name, jsonvalues) values(
'Jacky',
'{"has_attachment": "True", "flag":"True" }'
)

How to get JSON value from varchar field

*outdated Oracle version
I have a table for receipt data.
I want to get some data from the field EXT_ATTR, such as PAYMENT_RECEIPT_NO.
The field "EXT_ATTR" is a varchar(4000) that stores a JSON value.
SerialId | EXT_ATTR
1 |
{
"PAYMENT_RECEIPT_NO": "PS00000000000000001",
"IS_CORPOR": "1",
"POSTCODE1": "51000",
"POSTCODE2": "51000",
"BILLADDR1PART1": "BILLADDR1PART1_DATA",
"BILLADDR1PART2": "BILLADDR1PART2_DATA",
"NEED_PRINT_WHT": "1",
"WHT_AMT": "0",
"TRXAMT": "2340600",
"LOCATIONID": "02140",
"PAYMENT_METHOD_NAME": "Cash",
"WITH_TAX": "1"
}
2 |
{
"PAYMENT_RECEIPT_NO": "PS00000000000000055",
"IS_CORPOR": "1",
"POSTCODE1": "51000",
"POSTCODE2": "51000",
"BILLADDR1PART1": "BILLADDR1PART1_DATA",
"BILLADDR1PART2": "BILLADDR1PART2_DATA",
"NEED_PRINT_WHT": "1",
"WHT_AMT": "0",
"TRXAMT": "2340600",
"LOCATIONID": "02140",
"PAYMENT_METHOD_NAME": "Cash",
"WITH_TAX": "1"
}
How can I extract the varchar field to get only the value?
SerialId | PAYMENT_RECEIPT_NO
1 | PS00000000000000001
2 | PS00000000000000055
Thank you very much.
To work with JSON documents you can use PL/JSON.
If you want to parse it without JSON tools, you can use the SUBSTR and INSTR functions in Oracle.
Depending on what your string looks like, you have to adjust the string positions.
create table tab (json varchar2(1000));
insert into tab values('{"PAYMENT_RECEIPT_NO": "PS00000000000000001","IS_CORPOR": "1","POSTCODE1": "51000","POSTCODE2": "51000","BILLADDR1PART1": "BILLADDR1PART1_DATA","BILLADDR1PART2": "BILLADDR1PART2_DATA","NEED_PRINT_WHT": "1","WHT_AMT": "0","TRXAMT": "2340600","LOCATIONID": "02140","PAYMENT_METHOD_NAME": "Cash","WITH_TAX": "1"}');
insert into tab values('{"PAYMENT_RECEIPT_NO": "PS00000000000000055","IS_CORPOR": "1","POSTCODE1": "51000","POSTCODE2": "51000","BILLADDR1PART1": "BILLADDR1PART1_DATA","BILLADDR1PART2": "BILLADDR1PART2_DATA","NEED_PRINT_WHT": "1","WHT_AMT": "0","TRXAMT": "2340600","LOCATIONID": "02140","PAYMENT_METHOD_NAME": "Cash","WITH_TAX": "1"}');
select substr(json,instr(json,': ',1,1)+3,instr(json,',',1,1)-instr(json,': ',1,1)-4)
from tab;
| SUBSTR(JSON,INSTR(JSON,':',1,1)+3,INSTR(JSON,',',1,1)-INSTR(JSON,':',1,1)-4) |
| :--------------------------------------------------------------------------- |
| PS00000000000000001 |
| PS00000000000000055 |
db<>fiddle here
Native JSON functions are available as of Oracle Database 12c. For earlier releases, the APEX_JSON package (APEX release 5.0+) should be installed. Once the installation is complete, the following code can be used, treating the JSON as an XML data type through the APEX_JSON.TO_XMLTYPE() function, in order to extract the desired values:
WITH t AS
(
SELECT SerialId, APEX_JSON.TO_XMLTYPE(EXT_ATTR) AS xml_data
FROM tab
)
SELECT SerialId, Payment_Receipt_No
FROM t
CROSS JOIN
XMLTABLE('/json'
PASSING xml_data
COLUMNS
Payment_Receipt_No VARCHAR2(100) PATH 'PAYMENT_RECEIPT_NO'
)
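For completeness, on Oracle Database 12c and later (the versions for which the answer above says native JSON functions exist), the value can be read directly with JSON_VALUE; a minimal sketch, using receipt_table as a hypothetical name for the question's SerialId / EXT_ATTR table:
-- receipt_table is a hypothetical name for the question's table (SerialId, EXT_ATTR)
SELECT SerialId,
       JSON_VALUE(EXT_ATTR, '$.PAYMENT_RECEIPT_NO') AS PAYMENT_RECEIPT_NO
FROM receipt_table;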

PostgreSQL: Sub-select inside insert

I have a table called map_tags:
map_id | map_license | map_desc
And another table (widgets) whose records contain a foreign key reference (1 to 1) to a map_tags record:
widget_id | map_id | widget_name
Given the constraint that all map_licenses are unique (though they are not set up as keys on map_tags), if I have a map_license and a widget_name, I'd like to perform an insert on widgets all inside the same SQL statement:
INSERT INTO
widgets w
(
map_id,
widget_name
)
VALUES (
(
SELECT
mt.map_id
FROM
map_tags mt
WHERE
-- This should work and return a single record because map_license is unique
mt.map_license = '12345'
),
'Bupo'
)
I believe I'm on the right track but know right off the bat that this is incorrect SQL for Postgres. Does anybody know the proper way to achieve such a single query?
Use the INSERT INTO ... SELECT variant, putting any constants right into the SELECT list.
The PostgreSQL INSERT syntax is:
INSERT INTO table [ ( column [, ...] ) ]
{ DEFAULT VALUES | VALUES ( { expression | DEFAULT } [, ...] ) [, ...] | query }
[ RETURNING * | output_expression [ [ AS ] output_name ] [, ...] ]
Take note of the query option at the end of the second line above.
Here is an example for you.
INSERT INTO
widgets
(
map_id,
widget_name
)
SELECT
mt.map_id,
'Bupo'
FROM
map_tags mt
WHERE
mt.map_license = '12345'
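Since the syntax summary above also shows a RETURNING clause, a variant of the same statement (a sketch, assuming widget_id is the generated key you want back) could return the new row's id in the same statement:
INSERT INTO widgets (map_id, widget_name)
SELECT mt.map_id, 'Bupo'
FROM map_tags mt
WHERE mt.map_license = '12345'
RETURNING widget_id; -- hands back the id of the newly inserted widget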
INSERT INTO widgets
(
map_id,
widget_name
)
SELECT
mt.map_id, 'Bupo'
FROM
map_tags mt
WHERE
mt.map_license = '12345'
Quick Answer:
You don't have "a single record"; you have a "set with 1 record".
If this were javascript: you have an "array with 1 value", not "1 value".
In your example, one record may be returned in the sub-query,
but you are still trying to unpack an "array" of records into separate
actual parameters, in a place that takes only 1 parameter.
It took me a few hours to wrap my head around the "why not",
as I was trying to do something very similar.
Here are my notes:
tb_table01: (no records)
+---+---+---+
| a | b | c | << column names
+---+---+---+
tb_table02:
+---+---+---+
| a | b | c | << column names
+---+---+---+
|'d'|'d'|'d'| << record #1
+---+---+---+
|'e'|'e'|'e'| << record #2
+---+---+---+
|'f'|'f'|'f'| << record #3
+---+---+---+
--This statement will fail:
INSERT into tb_table01
( a, b, c )
VALUES
( 'record_1.a', 'record_1.b', 'record_1.c' ),
( 'record_2.a', 'record_2.b', 'record_2.c' ),
-- This sub query has multiple
-- rows returned. And they are NOT
-- automatically unpacked like in
-- javascript where you can send an
-- array to a variadic function.
(
SELECT a,b,c from tb_table02
)
;
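For contrast, the query form of INSERT from the answers above does handle a set of records; a sketch that copies all three rows of tb_table02 into tb_table01:
-- This statement works: INSERT ... SELECT takes the whole
-- result set, however many rows it contains.
INSERT into tb_table01
( a, b, c )
SELECT a, b, c from tb_table02;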
Basically, don't think of "VALUES" as a variadic
function that can unpack an array of records. There is
no argument unpacking here like you would have in a javascript
function, such as:
function takeValues( ...values ){
values.forEach((v)=>{ console.log( v ) });
};
var records = [ [1,2,3],[4,5,6],[7,8,9] ];
takeValues( ...records );
//:RESULT:
//: console.log #1 : [1,2,3]
//: console.log #2 : [4,5,6]
//: console.log #3 : [7,8,9]
Back to your SQL question:
The fact that this functionality does not exist does not change
just because your sub-selection contains only one result. It is
a "set with one record", not "a single record".