postgres: ensure json column root is an object - sql

I'm wondering how to ensure that the data inserted into a json or jsonb column is an object, not an array (or an array of objects).
Example:
-- ok
insert into users (settings) values ('{ "theme": "cobalt" }')
-- ok
insert into users (settings) values ('{}')
-- error!
insert into users (settings) values ('[]')
-- error!
insert into users (settings) values ('[{}]')
Thanks!

You could do something like:
t=# create table so16(j jsonb check (left(ltrim(j::text), 1) <> '['));
CREATE TABLE
t=# insert into so16 values('{"b":[1,2,3]}');
INSERT 0 1
t=# insert into so16 values('[1,2,3]');
ERROR: new row for relation "so16" violates check constraint "so16_j_check"
DETAIL: Failing row contains ([1, 2, 3]).
t=# insert into so16 values(' [1,2,3]');
ERROR: new row for relation "so16" violates check constraint "so16_j_check"
DETAIL: Failing row contains ([1, 2, 3]).
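If the column is jsonb, another option is to check the root type directly with jsonb_typeof. A minimal sketch, assuming a users table with a jsonb settings column as in the question:

create table users (
  settings jsonb
    check (jsonb_typeof(settings) = 'object')  -- root must be a JSON object
);
-- ok
insert into users (settings) values ('{ "theme": "cobalt" }');
insert into users (settings) values ('{}');
-- error: violates the check constraint
insert into users (settings) values ('[]');
insert into users (settings) values ('[{}]');

For a plain json column the equivalent function is json_typeof. Note that a NULL settings value still passes, because a check constraint that evaluates to NULL is treated as satisfied.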

Related

Not able to insert a row in a table which has auto incremented primary key

I have a table reportFilters which has the following column names:
The reportFilterId column is auto-incremented. I want to insert a row into the table with the script below:
IF OBJECT_ID(N'ReportFilters', N'U') IS NOT NULL
BEGIN
IF NOT EXISTS (SELECT * FROM [ReportFilters]
WHERE ReportId IN (SELECT ReportId FROM [Reports] WHERE ReportType = 'Operational Insights Command Staff Dashboard') )
BEGIN
INSERT INTO [ReportFilters] Values(1, 'SelectView', 'Select Views', 13, 'Views','Views', 'SelectView', 'a', 'b', 'c' );
END
END
GO
But I am getting the following error:
Column name or number of supplied values does not match table definition.
Can I please get help on this? Thanks in advance.
I think the problem is that the inserted values cannot be matched to the columns: without an explicit column list, the values are matched against the table's column order, which starts with ReportFilterId rather than ReportId. So there are 11 columns in your table, but your statement only supplies 10 values.
I would explicitly specify the inserted columns (starting from ReportId and skipping your PK column ReportFilterId):
INSERT INTO [ReportFilters] (ReportId,ReportFilterName,ReportFilterTitle....)
Values (1, 'SelectView', 'Select Views', 13, 'Views','Views', 'SelectView', 'a', 'b', 'c' );
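The rest of the column list is elided above with "...."; to double-check the exact columns and their order before writing the list out, a quick query against INFORMATION_SCHEMA works (a sketch, assuming SQL Server since the script uses OBJECT_ID and GO):

-- list the columns of ReportFilters in table order, so the explicit
-- column list and the number of supplied values can be matched against it
SELECT COLUMN_NAME, ORDINAL_POSITION
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'ReportFilters'
ORDER BY ORDINAL_POSITION;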

SQLDelight FTS5 insert trouble

I created a table in DBBrowser:
CREATE VIRTUAL TABLE IF NOT EXISTS Students USING FTS5
(
GroupId UNINDEXED,
StudentName
);
and inserted values into it. After that I added the DB with this table to my project.
This is the declaration of the table in the SQLDelight .sq file:
CREATE VIRTUAL TABLE IF NOT EXISTS Students USING FTS5
(
GroupId INTEGER AS Int,
StudentName TEXT,
rank REAL
);
I need to declare rank explicitly because I want to apply HAVING MIN(rank) to it when SELECTing from the table (otherwise it does not compile), but when I try to insert values into the table like this:
insert:
INSERT INTO Students VALUES (?,?);
I receive an error:
Unexpected number of values being inserted. found: 2 expected: 3
If I do it like this:
insert:
INSERT INTO Students VALUES (?,?,?);
I receive an exception:
SQLiteException - table Students has 2 columns but 3 values were supplied (code 1): , while compiling: INSERT INTO Students VALUES (?,?,?)
How can I perform the insert? Or can I apply HAVING MIN(rank) without the explicit declaration?
does
insert:
INSERT INTO Students(GroupId, StudentName) VALUES (?,?);
work?
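For context: in SQLite FTS5, rank is a hidden auxiliary column supplied by the FTS5 module itself, so it is not one of the table's insertable columns; that is why naming only GroupId and StudentName in the INSERT, as above, should resolve the mismatch. rank can still be referenced when reading, for example (an illustration, not the query from the question):

SELECT GroupId, StudentName, rank
FROM Students
WHERE Students MATCH ?
ORDER BY rank;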

How to write a WHERE clause for NULL value in ARRAY type column?

I created a table which contains a column of string ARRAY type as:
CREATE TABLE test
(
id integer NOT NULL,
list text[] COLLATE pg_catalog."default",
CONSTRAINT test_pkey PRIMARY KEY (id)
)
I then added rows which contain various values for that array, including an empty array and missing data (null):
insert into test (id, list) values (1, array['one', 'two', 'three']);
insert into test (id, list) values (2, array['four']);
insert into test (id, list) values (3, array['']);
insert into test (id, list) values (4, array[]::text[]); -- empty array
insert into test (id, list) values (5, null); -- missing value
pgAdmin shows the table with all five rows.
I am trying to get the row which contains a null value ([null]) in the list column, but:
select * from test where list = null;
...returns no rows and:
select * from test where list = '{}';
...returns row with id = 4.
How do I write a WHERE clause which targets a NULL value for a column of ARRAY type?
demo:db<>fiddle
... WHERE list IS NULL
select * from test where list IS null;
Like this:
select * from test where list IS NULL;
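The reason list = null matches nothing is that a comparison with NULL yields NULL (unknown) rather than true, and a WHERE clause only keeps rows where the condition is true. A quick illustration against the test table above:

select null = null;                     -- NULL, not true
select * from test where list = null;   -- 0 rows
select * from test where list is null;  -- row 5 (missing value)
select * from test where list = '{}';   -- row 4 (empty array)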

Insert Json array into Postgres as separate column

I have created a table in PostgreSQL using this:
CREATE TABLE TEST (MULTIPROCESS VARCHAR(20), HTTP_REFERER VARCHAR(50));
I am trying to insert a JSON array into the table, like below:
INSERT INTO TEST
SELECT MULTIPROCESS, HTTP_REFERER
FROM json_populate_record(
NULL::TEST_POS,
'[{"multiprocess":true,"http_referer": "http://localhost:9000/"}, {"multiprocess": false,"http_referer": "http://localhost:9002/"}]'
);
It throws an error:
[Error Code: 0, SQL State: 22023] ERROR: cannot call json_populate_record on an array
How can I insert the JSON array data into the table as below?
MULTIPROCESS HTTP_REFERER
true http://localhost:9000/
false http://localhost:9002/
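json_populate_record expects a single object; its set-returning sibling json_populate_recordset accepts a JSON array of objects and returns one row per element, which is what is needed here. A sketch against the TEST table created above, using the table's own row type rather than TEST_POS:

INSERT INTO test (multiprocess, http_referer)
SELECT multiprocess, http_referer
FROM json_populate_recordset(
  NULL::test,
  '[{"multiprocess": true,  "http_referer": "http://localhost:9000/"},
    {"multiprocess": false, "http_referer": "http://localhost:9002/"}]'
);

The JSON booleans end up as the strings 'true' and 'false' in the MULTIPROCESS varchar column, matching the desired output.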

How to get the value of an auto increment column in postgres from a .sql script file?

In postgres I have two tables like so
CREATE TABLE foo (
pkey SERIAL PRIMARY KEY,
name TEXT
);
CREATE TABLE bar (
pkey SERIAL PRIMARY KEY,
foo_fk INTEGER REFERENCES foo(pkey) NOT NULL,
other TEXT
);
What I want to do is write a .sql script file that does the following:
INSERT INTO foo(name) VALUES ('A') RETURNING pkey AS abc;
INSERT INTO bar(foo_fk,other) VALUES
(abc, 'other1'),
(abc, 'other2'),
(abc, 'other3');
which produces the error below in pgAdmin
Query result with 1 row discarded.
ERROR: column "abc" does not exist
LINE 3: (abc, 'other1'),
********** Error **********
ERROR: column "abc" does not exist
SQL state: 42703
Character: 122
Outside of a stored procedure, how do I define a variable that I can use between statements? Is there some other syntax for inserting into bar with the pkey returned from the insert into foo?
You can combine the queries into one. Something like:
with foo_ins as (INSERT INTO foo(name)
VALUES ('A')
RETURNING pkey AS foo_id)
INSERT INTO bar(foo_fk,other)
SELECT foo_id, 'other1' FROM foo_ins
UNION ALL
SELECT foo_id, 'other2' FROM foo_ins
UNION ALL
SELECT foo_id, 'other3' FROM foo_ins;
Another option is to use an anonymous PL/pgSQL block like:
DO $$
DECLARE foo_id INTEGER;
BEGIN
INSERT INTO foo(name)
VALUES ('A')
RETURNING pkey INTO foo_id;
INSERT INTO bar(foo_fk,other)
VALUES (foo_id, 'other1'),
(foo_id, 'other2'),
(foo_id, 'other3');
END$$;
You can use lastval() to:
Return the value most recently returned by nextval in the current session.
This way you do not need to know the name of the sequence used.
INSERT INTO foo(name) VALUES ('A');
INSERT INTO bar(foo_fk,other) VALUES
(lastval(), 'other1')
, (lastval(), 'other2')
, (lastval(), 'other3')
;
This is safe because you control what you called last in your own session.
If you use a writable CTE as proposed by @Ihor, you can still use a short VALUES expression in the second INSERT. Combine it with a CROSS JOIN (or append the CTE name after a comma (, ins), which is the same thing):
WITH ins AS (
INSERT INTO foo(name)
VALUES ('A')
RETURNING pkey
)
INSERT INTO bar(foo_fk, other)
SELECT ins.pkey, o.other
FROM (
VALUES
('other1'::text)
, ('other2')
, ('other3')
) o(other)
CROSS JOIN ins;
Another option is to use currval
INSERT INTO foo
(name)
VALUES
('A') ;
INSERT INTO bar
(foo_fk,other)
VALUES
(currval('foo_pkey_seq'), 'other1'),
(currval('foo_pkey_seq'), 'other2'),
(currval('foo_pkey_seq'), 'other3');
The sequence automatically created for a serial column is named <table>_<column>_seq by default (the name can differ if it is truncated or if the table or column is later renamed).
Edit:
A more "robust" alternative is to use pg_get_serial_sequence as Igor pointed out.
INSERT INTO bar
(foo_fk,other)
VALUES
(currval(pg_get_serial_sequence('public.foo', 'pkey')), 'other1'),
(currval(pg_get_serial_sequence('public.foo', 'pkey')), 'other2'),
(currval(pg_get_serial_sequence('public.foo', 'pkey')), 'other3');
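One more option worth noting, since the question is specifically about a plain .sql script: if the script is executed with psql, the client-side \gset command can store the returned key in a psql variable between statements (a sketch, assuming psql is used to run the file; this does not work in pgAdmin):

INSERT INTO foo(name) VALUES ('A') RETURNING pkey AS foo_id \gset
INSERT INTO bar(foo_fk, other) VALUES
  (:foo_id, 'other1'),
  (:foo_id, 'other2'),
  (:foo_id, 'other3');

Here \gset takes the place of the terminating semicolon and stores each returned column into a psql variable of the same name.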