How to access a JSON array element in PL/SQL (Oracle 12c)

I am trying to fetch one column from an Oracle 12c table. The column, dynamic_fields, is of CLOB type and apparently holds JSON.
The data looks like this in the column:
{
"App": 20187.7",
"CTList":
"[
{\"lineOfBusiness\":\"0005",
\"coverageId\":659376737,
\"premiumPercentage\":0,
\"lobInCt\":\"4CI5\"},
{\"lineOfBusiness\":\"0005\",
\"coverageId\":659376738,
\"premiumPercentage\":0,
\"lobInCt\":\"4CE5\"},
{\"lineOfBusiness\":\"0005\",
\"coverageId\":659376739,
\"premiumPercentage\":1,
\"lobInCt\":\"4CD5\"}]"
}
I want to use the json_value function to fetch the field lineOfBusiness of the first element:
json_value(dynamic_fields,'$.CTList[0].lineOfBusiness')
It returns null.
Is there anything wrong with what I did? I do not want to use json_table to fetch the array value, since the expression needs to be embedded in another query.

You need to fix the format of your dynamic_fields column. First, create your table with a check constraint to make sure the column holds valid JSON (adding such a check constraint is supported as of version 12c):
create table tab
(
dynamic_fields clob constraint chk_dyn_fld
check (dynamic_fields is json)
);
If you try to insert your current value into the dynamic_fields column, Oracle raises error ORA-02290 (check constraint (<yourCurSchema>.CHK_DYN_FLD) violated). Fix the format by adding a double-quote just before "App"'s value ("20187.7"), removing the double-quotes that wrap the square brackets, and stripping the backslashes by replacing them with an empty string ('') via the replace() function during insertion:
insert into tab
values(replace('{
"App": "20187.7",
"CTList":
[
{\"lineOfBusiness\":\"0005",
\"coverageId\":659376737,
\"premiumPercentage\":0,
\"lobInCt\":\"4CI5\"},
{\"lineOfBusiness\":\"0005\",
\"coverageId\":659376738,
\"premiumPercentage\":0,
\"lobInCt\":\"4CE5\"},
{\"lineOfBusiness\":\"0005\",
\"coverageId\":659376739,
\"premiumPercentage\":1,
\"lobInCt\":\"4CD5\"}]
}','\',''));
which doesn't raise any exception. This time you're able to get the desired value (0005) with your original query:
select json_value(dynamic_fields,'$.CTList[0].lineOfBusiness')
from tab;
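Two sanity checks may help here as well, sketched against the tab table above: json_value returns NULL on error by default, so an error clause makes the failure visible instead of silent, and the IS JSON condition reveals whether a stored document is well-formed at all.
select json_value(dynamic_fields, '$.CTList[0].lineOfBusiness' ERROR ON ERROR)
from tab;
select count(*) from tab where dynamic_fields is not json;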

Related

How to select a value from an Oracle table having a reserved keyword as a field name

We have an Oracle table with a reserved keyword (in) as a field name. Now I am querying the table but am unable to extract that specific field's data.
select a.filename,a.in from table a
The following error appears: "invalid field name".
Try using double quotes.
select a."IN" from table a
You can use default (Oracle reserved) keywords as column names, but it is not advisable.
If you do want to use Oracle reserved keywords, you must enclose them in double-quotes.
Note that Oracle is case-insensitive with respect to its object names unless they are wrapped in double-quotes. If you enclose an object name in double-quotes, you must then refer to it everywhere in the database in a case-sensitive manner.
So if your table definition is:
CREATE TABLE YOUR_TABLE ("IN" NUMBER);
Then you need to use "IN" wherever you want to refer the column but if your table definition is:
CREATE TABLE YOUR_TABLE ("in" NUMBER);
Then you need to use "in" wherever you want to refer the column. -- case sensitive names.
I hope it will clear all your doubts.
Cheers!!

Postgres Jsonb datatype

I am using PostgreSQL to create a table based on JSON input given to my Java code, and I need validations on the JSON keys passed to the database, just like in Oracle. The problem is that the whole document sits in a single jsonb column, let's say data. Consider that I get JSON in the below format -
{
"CountActual": 1234,
"CountActualCharacters": "thisreallyworks!"
"Date": 09-11-2001
}
The intended datatypes for the above JSON fields are number(10), varchar(50), and date.
To put validations in place, I'm using check constraints.
Query 1 -
ALTER TABLE public."Detail"
ADD CONSTRAINT "CountActual"
CHECK ((data ->> 'CountActual')::bigint >=0 AND length(data ->> 'CountActual') <= 10);
--Working fine.
But for Query 2-
ALTER TABLE public."Detail"
ADD CONSTRAINT "CountActualCharacters"
CHECK ((data ->> 'CountActualCharacters')::varchar >=0 AND length(data ->> 'CountActualCharacters') <= 50);
I'm getting below error -
[ERROR: operator does not exist: character varying >= integer
HINT: No operator matches the given name and argument type(s).
You might need to add explicit type casts.]
I also tried another way -
ALTER TABLE public."Detail"
ADD CONSTRAINT CountActualCharacters CHECK (length(data ->> 'CountActualCharacters'::VARCHAR)<=50)
The above constraint is created successfully, but I don't think this is the right way, as the validation does not kick in when inserting the data -
Insert into public."Detail" values ('{"
CountActual":1234,
"CountActualCharacters":789
"Date": 11-11-2009
}');
And the insert succeeds even when passing 789 for CountActualCharacters instead of a varchar like "the78isgood!".
So can anyone please suggest the proper PostgreSQL constraint for varchar, like the one I wrote for number in Query 1?
And if possible, one for the Date type as well, with DD-MM-YYYY format.
I just started with PostgreSQL; forgive me if I sound silly, but I'm really stuck here.
You can use jsonb_typeof(data -> 'CountActualCharacters') = 'string'
Note the single arrow, as ->> will try to convert anything to string.
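Applied to your case, a sketch of the constraint could look like this (assuming the jsonb column is named data, as in the question):
ALTER TABLE public."Detail"
ADD CONSTRAINT "CountActualCharacters"
CHECK (jsonb_typeof(data -> 'CountActualCharacters') = 'string'
AND length(data ->> 'CountActualCharacters') <= 50);
With this in place, an insert passing 789 for CountActualCharacters is rejected, while a string such as "the78isgood!" is accepted.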
You can read more about JSON functions in PostgreSQL here:
https://www.postgresql.org/docs/current/static/functions-json.html

Insert an array of UUIDs using Objection.js

I am attempting to insert a new row into a table with a column defined as an array of UUIDs:
alter table medias add column "order" uuid[];
I am using Objection.js ORM and attempting to execute the following query:
const { lit } = require('objection'); // objection's literal-value helper
const order = [
'BFAD6B0D-D3E6-4EB3-B3AB-108244A5DD7F'
]
Medias
.query()
.insert({
order: lit(order.map(id => lit(id).castType('uuid'))).castArray()
})
But the query is malformed and therefore does not execute:
INSERT INTO xxx ("order")
VALUES (ARRAY [
{"_value":"BFAD6B0D-D3E6-4EB3-B3AB-108244A5DD7F","_cast":"uuid","_toJson":false,"_toArray":false}
])
As can be seen, the query contains the JSON-stringified representation of the LiteralBuilder object and not something that the SQL syntax understands as a typecast.
If I skip casting the individual UUID strings and just cast the whole column into an array, then Postgres rejects the query because the column is of type uuid[] but I am attempting to insert the column as text[].
How can I format this query using Objection.js ORM?
My goal is to keep the column definition untouched and be able to insert a Postgres' array of UUIDs using Objection.js, either through its API or via raw query. If this is not currently possible with Objection, I am willing, as a last resort, to re-define the column as text[], but I would like to make sure I really have no other option.
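For reference, here is a minimal sketch of the SQL that Postgres does accept for such an insert, i.e. the statement any raw-query fallback would need to produce (the medias table and "order" column are taken from the question):
INSERT INTO medias ("order")
VALUES (ARRAY['BFAD6B0D-D3E6-4EB3-B3AB-108244A5DD7F']::uuid[]);
Casting the whole array literal to uuid[] is enough; the individual elements then need no casts of their own.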

Is there a way to define replacement of one string with another in external table creation in Greenplum?

I need to create an external table for an HDFS location. The data has the string null instead of an empty space for a few fields; where the field length is less than 4, selecting the data throws an error. Is there a way to define replacement of all such nulls with an empty space while creating the table itself?
I am trying this in Greenplum; I just tagged Hive to see what can be done for such cases there.
You could use the serialization property to map the NULL string to an empty string:
CREATE TABLE IF NOT EXISTS abc (
  col1 STRING,  -- placeholder columns; the original omitted the column list
  col2 STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
STORED AS TEXTFILE
TBLPROPERTIES ("serialization.null.format"="");
In this case, when you query it from Hive you would get an empty value for that field, while HDFS would have "\N".
Or
If you want to write an empty string instead of '\N', you can use the COALESCE function:
INSERT OVERWRITE TABLE tabname SELECT NULL, COALESCE(NULL, "") FROM data_table;
The answer to the problem is the NULL AS 'null' clause in the Greenplum CREATE TABLE syntax. As I mentioned, I wanted to get a few inputs from people who have faced such issues in Hive, so I tagged Hive as well. But Greenplum's external table syntax supports a NULL AS phrase in which you can specify the form of NULL that you want to keep; see the sketch below.
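A minimal sketch of that phrase in a Greenplum external table definition (the protocol, host, path, and columns here are placeholders):
CREATE EXTERNAL TABLE ext_abc (
  col1 text,  -- hypothetical columns
  col2 text
)
LOCATION ('gphdfs://namenode:8020/path/to/data/')
FORMAT 'TEXT' (DELIMITER '|' NULL AS 'null');
With NULL AS 'null', every literal null string in the file is read back as SQL NULL instead of overflowing a short field.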

H2 DB CSVREAD command converting value to date before placing into VARCHAR

I am attempting to load a tab-delimited text file which contains a column of values that happen to look exactly like dates, but aren't. It appears that the CSVREAD command scans the row, converts the text value in the column to a java.sql.Date, and then, seeing that the target column is a VARCHAR, executes toString() to obtain the value... which is exactly NOT what I need. I actually need the raw unconverted text with no date processing whatsoever.
So, is there some way to turn off "helpful date-like column conversion" in the CSVREAD command?
Here's the simplest case I can make to demonstrate the undesired behavior:
CREATE TABLE x
(
name VARCHAR NOT NULL,
value VARCHAR
) AS
SELECT * FROM CSVREAD('C:\myfile.tab', null, 'UTF-8', chr(9))
;
The file contains three rows, a header and two records of values:
name\tvalue\n
x\t110313\n
y\t102911\n
Any assistance on how I can bypass the overhelpful part of CSVREAD would be greatly appreciated. Thank you.
(It seems you found this out yourself, but anyway):
For CSVREAD, all columns are strings. Neither the CSVREAD function nor the database tries to convert values to a date or to detect the data type in any other way. The database only does what you ask for, which in your case is to read the data as strings.
If you do want to convert a column to a date, you need to do that explicitly, for example:
CREATE TABLE x(name VARCHAR NOT NULL, value TIMESTAMP) AS
SELECT *
FROM CSVREAD('C:\myfile.tab', null, 'UTF-8', chr(9));
If non-default parsing is needed, you could use:
CREATE TABLE x(name VARCHAR NOT NULL, value TIMESTAMP) AS
SELECT "name", parsedatetime("value", "M/d/y") as v
FROM CSVREAD('C:\myfile.tab', null, 'UTF-8', chr(9));
For people who don't have headers in their CSV files, the example could look like this:
CREATE TABLE x(name VARCHAR NOT NULL, value TIMESTAMP) AS
SELECT "0", parsedatetime("1", 'd-M-yyyy') as v
FROM CSVREAD('C:\myfile.tab', '0|1', 'UTF-8', '|');
Beware of the single quotes around the date format. When I tried the example from Thomas, H2 gave me an error:
Column "d-M-yyyy" not found; SQL statement:
My CSV file:
firstdate|13-11-2013\n
seconddate|14-11-2013
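For completeness, applying that single-quote fix to the earlier example from Thomas would give:
CREATE TABLE x(name VARCHAR NOT NULL, value TIMESTAMP) AS
SELECT "name", parsedatetime("value", 'M/d/y') AS v
FROM CSVREAD('C:\myfile.tab', null, 'UTF-8', chr(9));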