Formatted UUID from HSQLDB - hsqldb

Using HSQLDB, when I try
VALUES (UUID())
I get an unformatted UUID, e.g. qqqq345623457612. Is there a statement with which I can get a formatted UUID?

When the UUID function is called with a binary UUID argument, it converts it to a formatted string. When used with VALUES, a cast is also required (a formatted UUID is 36 characters long):
VALUES CAST(UUID(UUID()) AS CHAR(36))
CALL UUID(UUID())
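For the opposite direction (HSQLDB 2.x assumed; the UUID literal below is just a hypothetical value), calling UUID with a character argument parses the hyphenated string back into the 16-byte binary form:
VALUES UUID('4fe5e70e-98ec-4b96-a75f-5c3a1c2b6f0d') -- returns the binary UUID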

Related

The use of the CAST function in the following query

The goal is to replace every observation holding the value 999.9 with NULL,
but I don't understand the purpose of CAST here. I know it changes the data type, but WDSP is already declared as STRING in the description.
IF(
    wdsp = "999.9",
    NULL,
    CAST(wdsp AS FLOAT64)
) AS wind_speed,
I haven't gotten anywhere yet. Should the CAST come before the IF, or inside it?
The CAST function is being used to convert the wdsp column to a number (specifically FLOAT64) instead of returning it as a STRING (which, as you mention, is the declared type). This is likely so that whatever issues the query gets the aliased column wind_speed as a numerical type that can be processed as a number rather than a string.
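Put together as a complete statement, the expression might look like the following sketch (the project, dataset, and table names are placeholders, not from the question; BigQuery Standard SQL assumed). Note that the CAST sits inside the IF, so the conversion is only attempted on values that are not the 999.9 sentinel:
SELECT
  -- 999.9 marks a missing observation in the source data: map it to NULL,
  -- otherwise convert the STRING column to a number
  IF(wdsp = "999.9", NULL, CAST(wdsp AS FLOAT64)) AS wind_speed
FROM `my_project.my_dataset.weather_observations`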

Snowflake Failed to cast variant value "" to TIMESTAMP_NTZ

Creating external tables in Snowflake works very well, but when you use dbt to do it, it generates validation errors for fields that come in null for timestamp_ntz.
CREATE OR REPLACE EXTERNAL TABLE EX_USERS (
    deleted_at timestamp_ntz as (NULLIF(value:deleted_at, '')::timestamp_ntz),
    ...
)
The dbt model configuration:
- name: deleted_at
  data_type: timestamp_ntz
  description: "deleted_at"
Failed to cast variant value "" to TIMESTAMP_NTZ
Instead of using ::timestamp, TRY_TO_TIMESTAMP_NTZ could be used:
deleted_at timestamp_ntz AS (TRY_TO_TIMESTAMP_NTZ(some_col))
When you get a Failed to cast error back from Snowflake, the value returned in the error message is (unfortunately) trimmed of spaces. This means that it might not be '' in your deleted_at; it could be ' ' (one space), '  ' (two spaces), and so on.
In other words, your NULLIF might be comparing one (or more) spaces against the empty string. In Snowflake, this comparison will return false. For example
SELECT NULLIF(value:deleted_at, '')::timestamp_ntz
FROM (SELECT {'deleted_at': ' '} as value)
will return
SQL Error [100071] [22000]: Failed to cast variant value " " to TIMESTAMP_NTZ
Now, you could use TRY_TO_TIMESTAMP_NTZ, but that would mean silently ignoring any "real" errors (such as attempting to convert, say, 2023-02-29 to a timestamp). Therefore, it might be preferable to code this:
NULLIF(TRIM(value:deleted_at), '')::timestamp_ntz
or you could use the TRIM_SPACE option on the CREATE EXTERNAL TABLE if you are happy for all input columns to be trimmed.
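Folding the TRIM fix back into the external table definition from the question, a minimal sketch (only the affected column shown; the rest of the definition is elided):
CREATE OR REPLACE EXTERNAL TABLE EX_USERS (
    -- trim first, so a value of one or more spaces also becomes NULL
    deleted_at timestamp_ntz AS (NULLIF(TRIM(value:deleted_at), '')::timestamp_ntz)
)
...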

How to cast postgres JSON column to int without key being present in JSON (simple JSON values)?

I am working on data in PostgreSQL, in the following table mytable with the fields id (type int) and val (type json):

id | val
---+--------
 1 | "null"
 2 | "0"
 3 | "2"

The values in the json column val are simple JSON values, i.e. just strings with surrounding quotes, and have no key.
I have looked at the SO post How to convert postgres json to integer and attempted something like the solution presented there:
SELECT (mytable.val->>'key')::int FROM mytable;
but in my case, I do not have a key to address the field and leaving it empty does not work:
SELECT (mytable.val->>'')::int as val_int FROM mytable;
This returns NULL for all rows.
The best I have come up with is the following (casting to varchar first, trimming the quotes, mapping the string "null" to NULL, and then casting to int):
SELECT id, nullif(trim('"' from mytable.val::varchar), 'null')::int as val_int FROM mytable;
which works, but surely cannot be the best way to do it, right?
Here is a db<>fiddle with the example table and the statements above.
Found the way to do it:
You can access the content via the keypath (see e.g. this PostgreSQL JSON cheatsheet):
Using the # operator, you can access the json fields through a keypath. Specifying an empty keypath, '{}', allows you to get your content without a key.
Using double angle brackets (>>) in the accessor will return the content without the quotes, so there is no need for the trim() function.
Overall, the statement
select id
, nullif(val#>>'{}', 'null')::int as val_int
from mytable
;
will return the contents of the former json column as int, respectively NULL (in PostgreSQL >= 9.4):
id | val_int
---+---------
 1 | NULL
 2 | 0
 3 | 2
See updated db<>fiddle here.
--
Note: As pointed out by @Mike in his comment above, if the column format is jsonb, you can also use val->>0 to dereference scalars. However, if the format is json, the ->> operator will yield null as its result. See this db<>fiddle.
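For completeness, a sketch of that jsonb variant against the example table (val is cast to jsonb here, since the column itself is json):
select id
     , nullif(val::jsonb ->> 0, 'null')::int as val_int
from mytable;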

Extract XML data from BLOB column in HANA

I want to apply the HANA function XMLEXTRACT to a BLOB column of a table containing a UTF-8 encoded XML document.
In the concrete example, I have a database table zprc_prot_cont with a column named content of datatype BLOB, and I want to extract the text content of the first <AKTNR> element in the XML document contained in that column. Since, according to its documentation, the function XMLEXTRACT only applies to arguments of datatype CLOB, NCLOB, VARCHAR, or NVARCHAR, but not to type BLOB, some conversion is necessary. But which is the right one?
I tried conversion functions like cast() or to_clob(), but with no success:
select xmlextract( to_clob( content ), '//AKTNR/text()' ) as aktnr
from zprc_prot_cont
The response is:
SQL-ERROR 266: inconsistent datatype: BLOB is invalid for function to_clob: line 1 col ...
Found the solution myself. The required function to make the BLOB column work as argument for XMLEXTRACT is the composition of to_varbinary with bintostr:
select
    xmlextract( bintostr( to_varbinary( content ) ),
                '(//AKTNR)[1]/text()' )
        as aktnr
from zprc_prot_cont
where ...
A caveat: if the XPath expression yields no result, the function xmlextract aborts with an error, in conformance with the documentation (I would have expected a null value as the result).

Oracle DECODE not working

I have an Oracle DECODE that checks whether a value is NULL before adjusting the decimal precision. The problem is that when the value in the PRICE_PRECISION column isn't null, the DECODE still goes to the d.price value, but it should go to the default value. Here is the line of code for the DECODE:
DECODE(d.PRICE_PRECISION,
       NULL, d.price,
       TO_CHAR(DECODE(d.price, NULL, '', d.price),
               CONCAT('9999990', RPAD('D', d.PRICE_PRECISION + 1, '9')))) price
I know for a fact there is non-NULL data in the PRICE_PRECISION column, because I can see it in the results of the select statement. Is there something wrong with my DECODE? Any ideas why it isn't going to the default branch?
It seems an implicit conversion took place. Consider this:
DECODE(d.PRICE_PRECISION, NULL, to_char(d.price), TO_CHAR....
From Oracle docs:
Oracle automatically converts expr and each search value to the datatype of the first search value before comparing. Oracle automatically converts the return value to the same datatype as the first result.
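A sketch of the corrected expression (column names taken from the question; d is the table alias): with TO_CHAR around the first result, the return datatype is character, so the formatted string from the second branch is no longer implicitly converted back to a number:
DECODE(d.PRICE_PRECISION,
       NULL, TO_CHAR(d.price),
       TO_CHAR(DECODE(d.price, NULL, '', d.price),
               CONCAT('9999990', RPAD('D', d.PRICE_PRECISION + 1, '9')))) price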
For NULL values, use the NVL function:
select nvl(name, 'not registered') from table;
When name is NULL, this returns 'not registered'.
You can use this together with the DECODE function:
decode(nvl(PRICE, 'Not valid'), 'Not valid', 0, PRICE)
In calculations, this avoids problems.
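Note that the sentinel should match the column's datatype; for a numeric PRICE, a sketch could be (the table name is a placeholder, and -1 is assumed never to occur as a real price):
select decode(nvl(price, -1), -1, 0, price) as safe_price from orders;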