I am creating two tables to store JSON values (one with a BLOB column, one with an NCLOB column); my aim is to search for required values in the JSON table.
Code Snippet 1: Creating a table with one NCLOB column.
create table departments_json_nclob (
department_id integer not null primary key,
department_data NCLOB not null
);
A simple insert with multibyte characters (the value məharaːʂʈrə):
insert into departments_json_nclob
values ( 200,'{"department_list":[{"Deptname":"DEPT-A", "value" : "məharaːʂʈrə"}]}');
Code Snippet 2: Now I have created another table with a BLOB column:
create table departments_json (
department_id integer not null primary key,
department_data blob not null
);
Added a constraint to allow only JSON:
alter table departments_json
add constraint dept_data_json
check ( department_data is JSON FORMAT JSON STRICT );
Insert a normal JSON document:
insert into departments_json
values ( 100, utl_raw.cast_to_raw ('{"department_list":[{"Deptname":"DEPT-A", "value" : "məharaːʂʈrə"}]}'));
The insertion is verified with the query below:
SELECT json_value(department_data format json, '$.department_list.value' )
FROM departments_json JS
WHERE DEPARTMENT_ID=100;
output is: məharaːʂʈrə
Now I will do one more insert into the same table, departments_json, but this time the value comes from the NCLOB table departments_json_nclob:
declare
i nclob;
begin
select department_data into i from departments_json_nclob where department_id =200;
--inserting same way as I inserted in departments_json for department_id 100 but value comes from NCLOB
insert into departments_json
values ( 101, utl_raw.cast_to_raw (i));
commit;
end;
Again, the insertion is verified with the query below:
SELECT json_value(department_data format json, '$.department_list.value' )
FROM departments_json JS
WHERE DEPARTMENT_ID=101;
output is: məharaːʂʈrə
Now my question is:
When I search for the multibyte characters, only the row that was inserted directly into the BLOB table is returned, i.e. DEPARTMENT_ID=100. Why not 101?
These are the search queries:
SELECT *
FROM departments_json
WHERE JSON_value(department_data format json, '$.department_list.value') = ('məharaːʂʈrə');
SELECT *
FROM departments_json
WHERE JSON_TEXTCONTAINS(department_data, '$.department_list.value', 'məharaːʂʈrə')
The query below shows which characters are multibyte:
select c, length(c), lengthb(c)
from ( select substr(s, level, 1) c
from ( select 'məharaːʂʈrə' s
from dual)
connect by level <= length(s));
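One way to compare what was actually stored, byte for byte (a diagnostic sketch; dbms_lob.substr on a BLOB returns RAW, which displays as hex):
-- compare the first bytes of both rows; differing hex means the
-- value was changed on the way in
select department_id,
       dbms_lob.substr(department_data, 80, 1) as first_bytes
from departments_json
where department_id in (100, 101);
If the hex differs, the value for 101 was altered before it reached the BLOB, presumably by the implicit NCLOB-to-VARCHAR2 conversion that utl_raw.cast_to_raw performs through the database character set.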
Related
The product type table contains product types. Some ids may be missing:
create table artliik (liiginrlki char(3) primary key);
insert into artliik values('1');
insert into artliik values('3');
insert into artliik values('4');
...
insert into artliik values('999');
The property table contains a comma-separated list of types.
create table strings ( id char(100) primary key, kirjeldLku char(200) );
insert into strings values ('item1', '1,4-5' );
insert into strings values ('item2', '1,2,3,6-9,23-44,45' );
A type can be specified as a single integer, e.g. 1,2,3, or as a range like 6-9 or 23-44.
A list can contain both.
How do I get all properties for a given type?
Query
select id
from artliik
join strings on ','||trim(strings.kirjeldLku)||',' like '%,'||trim(artliik.liiginrlki)||',%'
returns data for single-integer lists only.
How do I change the join so that type ranges in the list, like 6-9, are also matched?
E.g. if the list contains 6-9, types 6, 7, 8 and 9 should be included in the report.
Postgres 13 is used.
I would suggest a helper function similar to unnest that honors ranges.
The helper function:
create or replace function unnest_ranges(s text)
returns setof text language sql immutable as
$$
with t(x) as (select unnest(string_to_array(s, ',')))
select generate_series
(
split_part(x, '-', 1)::int,
case when x ~ '-' then split_part(x, '-', 2)::int else x::int end,
1
)::text
from t;
$$;
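A quick sanity check of the helper:
-- expands single values and ranges alike
select * from unnest_ranges('1,2,6-9');
-- returns: 1, 2, 6, 7, 8, 9 (as text)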
Then you can 'normalize' table strings and join:
select *
from artliik a
join (select id, unnest_ranges(kirjeldLku) from strings) as t(id, v)
on a.liiginrlki = v;
The use of a function definition is of course optional. I prefer it because the function is generic and reusable.
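For a one-off query the same logic can be inlined with LATERAL instead of a function; a sketch of the equivalent query, using trim to strip the char(3) padding:
select distinct s.id
from strings s
cross join lateral unnest(string_to_array(s.kirjeldLku, ',')) as p(x)
cross join lateral generate_series(
    split_part(p.x, '-', 1)::int,
    case when p.x ~ '-' then split_part(p.x, '-', 2)::int else p.x::int end
) as g(v)
join artliik a on trim(a.liiginrlki) = g.v::text;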
The dbfiddle.uk demo will only work on PG 14, since only PG 14 has the multirange data type, but the customizable ICU collation also works in PG 13.
Collation doc: https://www.postgresql.org/docs/current/collation.html
Idea: create a multirange over a text data type that sorts runs of digits by their numerical value, so that 'A-21' < 'A-123'.
CREATE COLLATION testcoll_numeric (
provider = icu,
locale = '#colNumeric=yes'
);
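A quick check that the collation really compares digit runs numerically (a sketch):
SELECT 'A-21' < 'A-123' COLLATE testcoll_numeric AS numeric_order;
-- returns true: 21 < 123, even though 'A-21' > 'A-123' byte-wise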
CREATE TYPE textrange AS RANGE (
subtype = text,
multirange_type_name = multirange_of_text,
COLLATION = testcoll_numeric
);
So
SELECT
multirange_of_text(textrange('1'::text, '11'::text)) @> '9'::text AS contain_9;
should return true.
The artliik table structure remains the same, but the strings table needs to change a bit.
CREATE temp TABLE strings (
id text PRIMARY KEY,
kirjeldLku multirange_of_text
);
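A sample row for item2 from the original data, written as a text multirange literal (a sketch; under the numeric collation '1,2,3' collapses into the single range ["1","3"], and a lone value becomes a single-point range):
INSERT INTO strings
VALUES ('item2', '{["1","3"],["6","9"],["23","44"],["45","45"]}');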
then query it:
SELECT DISTINCT
strings.id
FROM
artliik,
strings
WHERE
strings.kirjeldLku @> liiginrlki::text
ORDER BY
1;
I have created a custom Postgres type with:
CREATE TYPE new_type AS (new_date timestamp, some_int bigint);
I have a table that stores arrays of new_type, like:
CREATE TABLE new_table (
table_id uuid primary key,
new_type_list new_type[] not null
)
and I insert data into this table with something like this:
INSERT INTO new_table VALUES (
'*inApplicationGeneratedRandomUUID*',
ARRAY[[NOW()::timestamp, '146252'::bigint],
[NOW()::timestamp, '526685'::bigint]]::new_type[]
)
and i get this error
ERROR: cannot cast type timestamp without time zone to new_type
What am I missing?
I've also tried the array syntax that uses {}, but with no better result.
The easiest way would probably be:
INSERT INTO new_table VALUES (
'9fd92c53-d0d8-4aba-8925-1bd648d565f2'::uuid,
ARRAY[ row(now(), 146252)::new_type,
row(now(), 526685)::new_type
] );
Note that you have to cast the row type to ::new_type.
As an alternative, you could also write:
INSERT INTO new_table VALUES (
'9fd92c53-d0d8-4aba-7925-1ad648d565f2'::uuid,
ARRAY['("now", 146252)'::new_type,
'("now", 526685)'::new_type
] );
Check the PostgreSQL documentation about Composite Value Input.
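To read the array back out, one composite per row (a sketch):
SELECT table_id, u.new_date, u.some_int
FROM new_table
CROSS JOIN LATERAL unnest(new_type_list) AS u(new_date, some_int);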
I have a table designed like:
create table tbl (
id number(5),
data blob
);
It turns out that the data column holds only very small values, which could fit in raw(200), so the new table would be:
create table tbl (
id number(5),
data raw(200)
);
How can I migrate this table to the new design without losing the data in it?
This is a somewhat lengthy method, but it works if you are sure that your data column values never exceed 200 bytes in length.
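You can verify that assumption first; dbms_lob.getlength returns the length in bytes for a BLOB:
select max(dbms_lob.getlength(data)) from tbl;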
Create a table to hold the contents of tbl temporarily
create table tbl_temp as select * from tbl;
Rem -- Ensure that tbl_temp contains all the contents
select * from tbl_temp;
Rem -- Double verify by subtracting the contents
select * from tbl minus select * from tbl_temp;
Delete the contents in tbl
delete from tbl;
commit;
Drop column data
alter table tbl drop column data;
Create a column data with raw(200) type
alter table tbl add data raw(200);
Select & insert from the temporary table created
insert into tbl select id, dbms_lob.substr(data,200,1) from tbl_temp;
commit;
We are using the substr function of the dbms_lob package, which returns RAW when applied to a BLOB, so the result can be inserted directly.
I have a fairly simple insert from a CSV file into a temp table, then into a table with an encrypted column.
CREATE TABLE table1
(number varchar(32) NOT NULL
, user_varchar1 varchar(65) NOT NULL
, account varchar(32) NOT NULL)
CREATE TABLE #temp1
(number varchar(32) NOT NULL
, user_varchar1 varchar(65) NOT NULL
, account varchar(32) NOT NULL)
OPEN SYMMETRIC KEY SKey
DECRYPTION BY CERTIFICATE CERTCERT
--Flat File Insert
BULK INSERT #temp1
FROM '\\Server\Data\filename.csv'
WITH (FIELDTERMINATOR = ','
, FIRSTROW =2
, ROWTERMINATOR = '\n'
);
INSERT INTO table1
(number, user_varchar1, account_encrypted)
SELECT user_varchar1, number
, ENCRYPTBYKEY(KEY_GUID('SKey'),(CONVERT(varbinary(MAX), account)))
FROM #temp1
--SELECT * FROM #esa_import_ach
DROP TABLE #temp1
SELECT * FROM table1
CLOSE MASTER KEY
CLOSE SYMMETRIC KEY SKey;
The error I receive is
Msg 8152, Level 16, State 11, Line 40
String or binary data would be truncated.
Now if I allow NULLs in table1, it fills with NULLs, obviously. If I omit the account_encrypted column altogether, the script works.
If I use
INSERT INTO table1 (number, user_varchar1, account)
VALUES ('175395', '87450018RS', ENCRYPTBYKEY(KEY_GUID('SKey'), CONVERT(varbinary(MAX), 'GRDI27562')))
there's no problem.
So, is there something wrong with the way I'm executing the BULK INSERT? Is it my declaration of the data types, or is it the source file itself?
The source file looks like this (just one row):
emp_id, number, account
175395, 87450018RS,GRDI27562**CRLF**
Thanks and I'm hoping this makes sense.
The problem is that your account column is defined as varchar(32).
ENCRYPTBYKEY returns varbinary with a maximum size of 8,000 bytes, which simply won't fit in that column. Either expand the column, or cast the result down to a size that fits.
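Expanding it could look like this (a sketch; varbinary suits ciphertext better than varchar, and the column name follows the CREATE TABLE in the question):
ALTER TABLE table1 ALTER COLUMN account varbinary(8000) NOT NULL;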
I created a table and a sequence in order to replace identity in the table. I use SQL Server 2012 Express, but I get this error when I try to insert data into the table:
Msg 11719, Level 15, State 1, Line 2
NEXT VALUE FOR function is not allowed in check constraints, default objects, computed columns,
views, user-defined functions, user-defined aggregates, user-defined
table types, sub-queries, common table expressions, or derived
tables.
T-SQL code:
insert into Job_Update_Log(log_id, update_reason, jobid)
values((select next value for Job_Log_Update_SEQ),'grammer fixing',39);
This is my table:
create table Job_Update_Log
(
log_id int primary key ,
update_reason nvarchar(100) ,
update_date date default getdate(),
jobid bigint not null,
foreign key(jobid) references jobslist(jobid)
);
and this is my sequence:
CREATE SEQUENCE [dbo].[Job_Log_Update_SEQ]
AS [int]
START WITH 1
INCREMENT BY 1
NO CACHE
GO
Just get rid of the subselect in the VALUES section, like this:
insert into Job_Update_Log(log_id,update_reason,jobid)
values (next value for Job_Log_Update_SEQ,'grammer fixing',39);
Reference: http://msdn.microsoft.com/en-us/library/hh272694%28v=vs.103%29.aspx
Your insert syntax appears to be wrong: you are attempting to use a SELECT statement inside the VALUES section of your query. If you want to use SELECT, then write:
insert into Job_Update_Log(log_id,update_reason,jobid)
select next value for Job_Log_Update_SEQ,'grammer fixing',39;
See SQL Fiddle with Demo
I changed the syntax from INSERT INTO ... VALUES to INSERT INTO ... SELECT because you are selecting the next value of the sequence.
However, if you want to use the INSERT INTO.. VALUES, you will have to remove the SELECT from the query:
insert into Job_Update_Log(log_id,update_reason,jobid)
values(next value for Job_Log_Update_SEQ,'grammer fixing',39);
See SQL Fiddle with Demo
Both of these will INSERT the record into the table.
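As a side note, if the goal is to mimic IDENTITY, the sequence can also be attached as a column default. The "default objects" the error message mentions are the old CREATE DEFAULT objects, not column default constraints, so this is allowed (a sketch; the constraint name is arbitrary):
ALTER TABLE Job_Update_Log
    ADD CONSTRAINT DF_JobUpdateLog_log_id
    DEFAULT (NEXT VALUE FOR Job_Log_Update_SEQ) FOR log_id;

INSERT INTO Job_Update_Log (update_reason, jobid)
VALUES ('grammer fixing', 39);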
Try this one:
-- With a table
create sequence idsequence
start with 1 increment by 3
create table Products_ext
(
id int,
Name varchar(50)
);
INSERT dbo.Products_ext (Id, Name)
VALUES (NEXT VALUE FOR dbo.idsequence, 'ProductItem');
select * from Products_ext;
/* If you run the above statement twice, you will get the following:
1 ProductItem
4 ProductItem
*/
drop table Products_ext;
drop sequence idsequence;