I have a table with a column called data that contains some JSON. If the data column for any given row in the table is not null, it will contain a JSON-encoded object with a key called companyDescription. The value associated with companyDescription is an arbitrary JavaScript object.
If I query my table like this
select data->>'companyDescription' from companies where data is not null;
I get rows like this
{"ops":[{"insert":"\n"}]}
I am trying to update all rows in the table so that the companyDescription values will be wrapped in another JSON-encoded JavaScript object in the following manner:
{"type":"quill","content":{"ops":[{"insert":"\n"}]}}
Here's what I have tried, but I think it won't work because the ->> operator is for selecting some JSON field as text, and indeed it fails with a syntax error.
update companies
set data->>'companyDescription' = CONCAT(
'{"type":"quill","content":',
(select data->>'companyDescription' from companies),
'}'
);
What is the correct way to do this?
You can use the function jsonb_set. Currently, XML and JSON values are immutable: you cannot update parts of these values in place, you can only replace the whole value with a new, modified one.
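For reference, jsonb_set is available since PostgreSQL 9.5 and has the signature:
jsonb_set(target jsonb, path text[], new_value jsonb [, create_missing boolean])
Note that the path argument is a text array.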
postgres=# select * from test;
┌──────────────────────────────────────────────────────────────────────┐
│ v │
╞══════════════════════════════════════════════════════════════════════╡
│ {"companyId": 10, "companyDescription": {"ops": [{"insert": "\n"}]}} │
└──────────────────────────────────────────────────────────────────────┘
(1 row)
postgres=# select jsonb_build_object('type', 'quill', 'content', v->'companyDescription') from test;
┌───────────────────────────────────────────────────────────┐
│ jsonb_build_object │
╞═══════════════════════════════════════════════════════════╡
│ {"type": "quill", "content": {"ops": [{"insert": "\n"}]}} │
└───────────────────────────────────────────────────────────┘
(1 row)
postgres=# select jsonb_set(v, ARRAY['companyDescription'], jsonb_build_object('type', 'quill', 'content', v->'companyDescription')) from test;
┌────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ jsonb_set │
╞════════════════════════════════════════════════════════════════════════════════════════════════════╡
│ {"companyId": 10, "companyDescription": {"type": "quill", "content": {"ops": [{"insert": "\n"}]}}} │
└────────────────────────────────────────────────────────────────────────────────────────────────────┘
(1 row)
So your final statement can look like this:
update companies
set data = jsonb_set(data::jsonb,
ARRAY['companyDescription'],
jsonb_build_object('type', 'quill',
'content', data->'companyDescription'))
where data is not null;
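A minimal end-to-end sketch, assuming a hypothetical companies table with a jsonb column (with a json or text column, cast as in the statement above):
create table companies (id int, data jsonb);
insert into companies values (1, '{"companyDescription": {"ops": [{"insert": "\n"}]}}');

update companies
set data = jsonb_set(data,
                     ARRAY['companyDescription'],
                     jsonb_build_object('type', 'quill',
                                        'content', data->'companyDescription'))
where data is not null;

select data->>'companyDescription' from companies;
-- {"type": "quill", "content": {"ops": [{"insert": "\n"}]}}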
Let's suppose I have a table my_table with a field named data, of type jsonb, which thus contains a json data structure.
Let's suppose that if I run
select id, data from my_table where id=10;
I get
id | data
------------------------------------------------------------------------------------------
10 | {
|"key_1": "value_1" ,
|"key_2": ["value_list_element_1", "value_list_element_2", "value_list_element_3" ],
|"key_3": {
| "key_3_1": "value_3_1",
| "key_3_2": {"key_3_2_1": "value_3_2_1", "key_3_2_2": "value_3_2_2"},
| "key_3_3": "value_3_3"
| }
| }
So, pretty-printed, the content of column data is:
{
"key_1": "value_1",
"key_2": [
"value_list_element_1",
"value_list_element_2",
"value_list_element_3"
],
"key_3": {
"key_3_1": "value_3_1",
"key_3_2": {
"key_3_2_1": "value_3_2_1",
"key_3_2_2": "value_3_2_2"
},
"key_3_3": "value_3_3"
}
}
I know that if I want to get the value of a "level 1" key of the JSON directly in a column, I can do it with the ->> operator.
For example, if I want to get the value of key_2, what I do is
select id, data->>'key_2' alias_for_key_2 from my_table where id=10;
which returns
id | alias_for_key_2
------------------------------------------------------------------------------------------
10 |["value_list_element_1", "value_list_element_2", "value_list_element_3" ]
Now let's suppose I want to get the value of key_3_2_1, that is value_3_2_1.
How can I do it?
I have tried
select id, data->>'key_3'->>'key_3_2'->>'key_3_2_1' alias_for_key_3_2_1 from my_table where id=10;
but I get
ERROR:  operator does not exist: text ->> unknown
LINE 1: select id, data->>'key_3'->>'key_3_2'->>'key_3_2_1' alias_fo...
                                 ^
HINT:  No operator matches the given name and argument types. You might need to add explicit type casts.
what am I doing wrong?
The problem in the query
select id, data->>'key_3'->>'key_3_2'->>'key_3_2_1' alias_for_key_3_2_1 --this is wrong!
from my_table
where id=10;
was that the ->> operator converts the JSON value it extracts into a string, so the next ->> was trying to read the key key_3_2 out of a string, which makes no sense.
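One way to see this is to check the result types with pg_typeof:
select pg_typeof(data->'key_3') as single_arrow,   -- jsonb
       pg_typeof(data->>'key_3') as double_arrow   -- text
from my_table where id=10;
There is no ->> operator defined on text, hence the error.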
Thus one has to use the -> operator, which keeps the result as JSON, until one reaches the "final" key.
So the query I was looking for was:
select id, data->'key_3'->'key_3_2'->>'key_3_2_1' alias_for_key_3_2_1 --final ->> : this gets the value of 'key_3_2_1' as string
from my_table
where id=10;
or, alternatively,
select id, data->'key_3'->'key_3_2'->'key_3_2_1' alias_for_key_3_2_1 --final -> : this gets the value of 'key_3_2_1' as json / jsonb
from my_table
where id=10;
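As a side note, the same value can be extracted in a single step with the path operators #> and #>>, which take the whole path as a text array:
select id, data#>>'{key_3,key_3_2,key_3_2_1}' alias_for_key_3_2_1
from my_table
where id=10;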
More info on JSON Functions and Operators can be found here.
When I run:
SELECT * FROM hive('thrift://xxx:9083', 'ods_qn', 'ods_crm_prod_on_off_line_msg_es_df', 'bizid Nullable(String), corpid Nullable(Int32),time Nullable(Int64),reasontype Nullable(Int32),weworkid Nullable(Int64), type Nullable(Int8),pt String', 'pt');
I get:
Received exception from server (version 22.3.2):
Code: 210. DB::Exception: Received from localhost:9000. DB::Exception: Unable to connect to HDFS: InvalidParameter: Cannot create namenode proxy, does not contain host or port. (NETWORK_ERROR)
PS: my HDFS uses HA mode. This is the relevant HDFS entry in my ClickHouse config.xml:
<libhdfs3_conf>/etc/clickhouse-server/hdfs-client.xml</libhdfs3_conf>
How can I fix this? Thank you.
PS: when I use:
CREATE TABLE hdfs_engine_table (name String, value UInt32) ENGINE=HDFS('hdfs://nn1:8020/testck/other_test', 'TSV')
INSERT INTO hdfs_engine_table VALUES ('one', 1), ('two', 2), ('three', 3)
select * from hdfs_engine_table;
SELECT *
FROM hdfs_engine_table
Query id: f736cbf4-09e5-4a0f-91b4-4d869b78e6e7
┌─name──┬─value─┐
│ one │ 1 │
│ two │ 2 │
│ three │ 3 │
└───────┴───────┘
it works ok!
But when I use the Hive URL, I get the error above.
I have the following query:
SELECT DISTINCT col_name, toTypeName(col_name)
FROM remote('host_name', 'db.table', 'user', 'password')
The result is 6 records (none of them NULL). Example:
some_prefix-1, Nullable(String)
...
some_prefix-6, Nullable(String)
Now I try splitByChar, but I'm getting:
Code: 43, e.displayText() = DB::Exception: Nested type Array(String)
cannot be inside Nullable type (version 20.1.2.4 (official build))
I tried adding a NOT NULL condition and converting the type, but the problem remains. Like this:
SELECT DISTINCT toString(col_name) AS col_name_str,
splitByChar('-', col_name_str)
FROM remote('host_name', 'db.table', 'user', 'password')
WHERE col_name IS NOT NULL
Is this expected behavior? How to fix this?
This is due to the lack of Nullable support in splitByChar (https://github.com/ClickHouse/ClickHouse/issues/6517).
You are using the wrong cast: toString keeps the Nullable wrapper. Use cast(col_name, 'String') instead:
SELECT DISTINCT
cast(col_name, 'String') AS col_name_str,
splitByChar('-', col_name_str)
FROM
(
SELECT cast('aaaaa-vvvv', 'Nullable(String)') AS col_name
)
WHERE isNotNull(col_name)
┌─col_name_str─┬─splitByChar('-', cast(col_name, 'String'))─┐
│ aaaaa-vvvv │ ['aaaaa','vvvv'] │
└──────────────┴────────────────────────────────────────────┘
or use assumeNotNull:
SELECT DISTINCT
assumeNotNull(col_name) AS col_name_str,
splitByChar('-', col_name_str)
FROM
(
SELECT cast('aaaaa-vvvv', 'Nullable(String)') AS col_name
)
WHERE isNotNull(col_name)
┌─col_name_str─┬─splitByChar('-', assumeNotNull(col_name))─┐
│ aaaaa-vvvv │ ['aaaaa','vvvv'] │
└──────────────┴───────────────────────────────────────────┘
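Applied to the query from the question, this might look like the following (connection details as in the question):
SELECT DISTINCT splitByChar('-', assumeNotNull(col_name)) AS parts
FROM remote('host_name', 'db.table', 'user', 'password')
WHERE col_name IS NOT NULL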
I'm writing database queries with pg-promise. My tables look like this:
Table "public.setting"
│ user_id │ integer │ not null
│ visualisation_id │ integer │ not null
│ name │ character varying │ not null
Table "public.visualisation"
│ visualisation_id │ integer │ not null
│ template_id │ integer │ not null
I want to insert some values into setting - three are hard-coded, and one I need to look up from visualisation.
The following statement does what I need, but must be vulnerable to SQL injection:
var q = "INSERT INTO setting (user_id, visualisation_id, template_id) (" +
"SELECT $1, $2, template_id, $3 FROM visualisation WHERE id = $2)";
conn.query(q, [2, 54, 'foo']).then(data => {
console.log(data);
});
I'm aware I should be using SQL names, but if I try using them as follows I get TypeError: Invalid sql name: 2:
var q = "INSERT INTO setting (user_id, visualisation_id, template_id) (" +
"SELECT $1~, $2~, template_id, $3~ FROM visualisation WHERE id = $2)";
which I guess is not surprising since it's putting the 2 in double quotes, so SQL thinks it's a column name.
If I try rewriting the query to use VALUES I also get a syntax error:
var q = "INSERT INTO setting (user_id, visualisation_id, template_id) VALUES (" +
"$1, $2, SELECT template_id FROM visualisation WHERE id = $2, $3)";
What's the best way to insert a mix of hard-coded and variable values, while avoiding SQL injection risks?
Your query is fine. I think you know about value placeholders (the $X parameters) and SQL Names, but you are a bit confused.
In your query you only assign values to placeholders. The database driver will handle them for you, providing proper escaping and variable substitution.
The documentation says:
When a parameter's data type is not specified or is declared as
unknown, the type is inferred from the context in which the parameter
is used (if possible).
I can't find a source that states what the default type is, but I think the INSERT statement provides enough context to infer the real types.
On the other hand, you have to use SQL Names when you build your query dynamically, for example when column or table names are variable. Those must be injected through $1~ or $1:name style parameters, which keeps you safe from injection attacks.
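For illustration, with the values [2, 54, 'foo'] from the question bound to the placeholders, the driver effectively executes the equivalent of (a hypothetical rendering, since pg-promise handles the actual formatting and escaping):
INSERT INTO setting (user_id, visualisation_id, template_id, name)
SELECT 2, 54, template_id, 'foo'
FROM visualisation
WHERE id = 54;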
I have PostgreSQL with a jsonb field that always contains an array.
I need to append new values to that array, or update existing values by index.
It looks like the jsonb_set function meets my requirements; to append a new element I just need to use the current array length as the index and set the element there.
But I'm having trouble doing this. Let's go step by step.
We have table campaigns with jsonb field team_members.
select id, jsonb_set(team_members, '{0}', '{"name" : "123"}') from campaigns;
id | jsonb_set
-----+-------------------
102 | [{"name": "123"}]
Okay, great: if I set the path '{0}' statically, everything works.
Let's do that dynamically.
SQL for getting the array length (it is our index for appending):
select '{' || jsonb_array_length(team_members) || '}'::text from campaigns;
?column?
----------
{0}
Putting it all together:
select jsonb_set(team_members, '{' || jsonb_array_length(team_members) || '}', '{"name" : "123"}') from campaigns;
ERROR: function jsonb_set(jsonb, text, unknown) does not exist
LINE 1: select jsonb_set(team_members, '{' ||
jsonb_array_length(tea...
^ HINT: No function matches the given name and argument types. You might
need to add explicit type casts.
My question is: how can I get rid of this error? What am I doing wrong?
Thanks in advance.
Something like this?
t=# with jpath as (select concat('{',0,'}')::text[] path) select jsonb_set('[]'::jsonb,path,'{"name": "123"}'::jsonb) from jpath;
jsonb_set
-------------------
[{"name": "123"}]
(1 row)
The error occurs because jsonb_set expects the path as text[], while your concatenation produces plain text (the ::text cast binds only to '}'). Cast the whole concatenated string to text[]; in your case it should look like:
select
jsonb_set(
team_members
, concat('{',jsonb_array_length(team_members),'}')::text[]
, '{"name" : "123"}'
)
from campaigns;
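As a side note, if the goal is only to append (not to update by index), PostgreSQL 9.5+ can concatenate an element onto a jsonb array directly with the || operator, with no index arithmetic:
select team_members || '{"name": "123"}'::jsonb from campaigns;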