I'm trying to update a column in a Postgres table but I don't know how to do this...
I have a table with 3 columns: nickname, color, and a json column stored as a string, which holds an object like
{"value1":"answer1", "value2":"answer2" }
In the json column I want to add the nickname and color values...
Like this:
{"value1":"answer1", "value2":"answer2", "nickname":"name1", "color":"red" }
How can I do this update?
If you really want to do this from Postgres, then you may try using REGEXP_REPLACE:
WITH cte AS (
SELECT '{"value1":"answer1", "value2":"answer2"}'::json AS col
)
UPDATE cte
SET col = REGEXP_REPLACE(col::text,
'\}$',
', "nickname":"name1", "color":"red"}')::json;
{"value1":"answer1", "value2":"answer2", "nickname":"name1", "color":"red"}
This approach uses direct string manipulation to find the end of the JSON string and then insert the new key/value pairs.
If you were using Postgres 9.5 or later, you might be able to use the || merge operator to add the new JSON content. Postgres' support for JSON manipulation has gotten better in recent versions, but version 9.4 does not offer much.
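For what it's worth, a minimal sketch of that 9.5+ merge, assuming a placeholder table mytable whose json_col column is (or is first cast to) jsonb:
-- Postgres 9.5+ only: || merges the right-hand object's keys into the left-hand jsonb
UPDATE mytable
SET json_col = json_col || '{"nickname":"name1", "color":"red"}'::jsonb;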
Also, if you could handle this from your Hibernate/JPA layer, it might make more sense.
In case you want to use the existing columns of the table, you could use json_build_object.
UPDATE t
SET json_col = ( replace(json_col::text, '}', ',')
                 || replace(json_build_object('nickname', nickname,
                                              'color', color)::text, '{', '') )::json;
The replace() calls help in shaping the json string. Note that I have assumed your string is a simple json object (without nested arrays/objects).
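On Postgres 9.5 or later, a hedged alternative is to skip the string surgery and merge an object built from the existing columns; same table and column names as above, with casts to move between json and jsonb:
-- Postgres 9.5+ sketch: build the new keys from the existing columns and merge them in
UPDATE t
SET json_col = ( json_col::jsonb
                 || jsonb_build_object('nickname', nickname, 'color', color) )::json;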
How can I add a new key/value pair to an already existing JSON column in BigQuery using SQL (BigQuery flavor)?
BigQuery provides Data Manipulation Language (DML) statements such as the SQL UPDATE statement. See:
https://cloud.google.com/bigquery/docs/reference/standard-sql/dml-syntax#update_statement
What you will have to do is retrieve the original value of your structured column and then perform a SQL UPDATE statement to set the column to the complete new value that you want.
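As a rough sketch of that, assuming a STRING column named json_col and an id column to identify the row (all names are placeholders):
-- Hypothetical BigQuery UPDATE: write back the complete new JSON string
UPDATE `project.dataset.your_table`
SET json_col = '{"value1":"answer1", "value2":"answer2", "nickname":"name1", "color":"red"}'
WHERE id = 1;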
Take care to realize that BigQuery is an OLAP database and is optimized for queries rather than updates or deletes. Make sure you read the information on using DML statements in BigQuery found here:
https://cloud.google.com/bigquery/docs/reference/standard-sql/data-manipulation-language
I feel like this question is less about how to update the table and more about how to adjust existing json with an extra/new key:value pair (and then either update the table or simply select the result out).
So, I assume you have a table like the one sketched below
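A hypothetical version of that table as a CTE, just to make the example self-contained (column names id and json_col are assumed):
with your_table as (
  select 1 id, '{"key1":"value1","key2":"value2"}' json_col union all
  select 2 id, '{"key11":"value11","key12":"value12"}' json_col
)
select * from your_table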
and you might have another table with those new key:value pairs to use.
In case you don't really have a second table - you can just use a CTE like below:
with new_key_val as (
select 1 id, '{"key3":"value3"}' add_json union all
select 2 id, '{"key14":"value14"}'
)
So, having the above - you can use the approach below:
select *,
( select '{' || string_agg(trim(kv)) || ',' || trim(add_json, '{}') || '}'
from unnest(split(trim(json_col, '{}'), ',')) kv
) adjusted_json
from your_table
left join new_key_val
using(id)
with the adjusted_json column holding the merged objects as output.
BigQuery supports JSON as a native data type but only offers a limited set of JSON functions. Unless your json data has a pre-defined, simple schema with known keys, you probably want to go the string-manipulation way.
How can I filter inside a json field in SQL Server?
I have a column called details:
{"test","source":"web"}
I want to filter by source.
What I did:
select * from TABLE_NAME
CROSS APPLY OPENJSON(details,'$.source')
where value ='web'
As per Zohar's comment, make your json valid, then something like:
--{"mode":"test","source":"web"}
select * from TABLE_NAME
CROSS APPLY
OPENJSON(details)
WITH (
m varchar(256) '$.mode',
s varchar(256) '$.source'
) j
where
j.s = 'web'
But it might be simpler to just use JSON_VALUE:
select * from TABLE_NAME
WHERE json_value(details, '$.source') = 'web'
Use CROSS APPLY OPENJSON if you want to turn each row's json into a pseudotable shaped like the table spec in the WITH clause. SQL Server behaves as if all the matching "rows" in each row's json are compounded into the pseudotable and auto-joined to the source data table based on where each bunch of json pseudorows came from.
Use JSON_VALUE if you only really want one value out of the json and can uniquely identify a single "row" in the json from which to get the value: either the json only has one "row" / is not a collection, or you want a "row" out of a json collection that can be referenced according to a formula.
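A small side-by-side sketch of the two shapes, using the question's table name and an assumed id column:
-- JSON_VALUE: one scalar per row, addressed directly by a path
select id, json_value(details, '$.source') as source
from TABLE_NAME;

-- OPENJSON: each row's json expands into key/value pseudorows joined back to that row
select t.id, j.[key], j.value
from TABLE_NAME t
cross apply openjson(t.details) j;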
In my table mytable I have a json field called data and I inserted json with a lot of keys & values.
I know that it's possible to select individual fields like so:
SELECT data->'mykey' as mykey from mytable
But how can I get an overview of all of the json keys at a certain depth? I would have expected something like
SELECT data->* from mytable
but that doesn't work. Is there something similar?
You can use the json_object_keys() function to get all the top-level keys of a json value:
SELECT keys.*
FROM mytable, json_object_keys(mytable.data) AS keys (mykey);
If you want to search at a deeper level, then first extract that deeper level from the json value using the #> operator:
SELECT keys.*
FROM mytable, json_object_keys(mytable.data #> '{level1, level2}') AS keys (mykey);
Note that the function returns a set of text, so you should invoke the function as a row source.
If you are using the jsonb data type, then use the jsonb_object_keys() function.
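For example, a sketch of the jsonb variant of the first query (only the function name changes):
SELECT keys.*
FROM mytable, jsonb_object_keys(mytable.data) AS keys (mykey);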
I have an update query that will manually set the field value to a unique string; the table already has a lot of data, and the id is a unique primary key.
So I need the names to look like
mayname-id-1,
mayname-id-2,
mayname-id-3, etc
I tried to update with string_agg, but that doesn't work in update queries
UPDATE mytable
SET name = string_agg('mayname-id-', id);
How do I construct a string dynamically in an update query?
How about the following:
UPDATE mytable
SET name = 'mayname-id-' || CAST(id AS text)
Typically, you should not add such a completely redundant column at all. It's cleaner and cheaper to generate it as a functionally dependent value on the fly. You can use a view or a "generated column" for that. Details:
Store common query as column?
You can even have a unique index on such a functional value if needed.
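A hedged sketch of both options in Postgres (the generated-column form needs version 12 or later, and assumes the plain name column is dropped or renamed first):
-- Option 1: don't store the value at all; enforce uniqueness on the expression
CREATE UNIQUE INDEX mytable_name_uq ON mytable (('mayname-id-' || id::text));

-- Option 2 (Postgres 12+): a stored generated column that stays in sync automatically
ALTER TABLE mytable
  ADD COLUMN name text GENERATED ALWAYS AS ('mayname-id-' || id::text) STORED;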
Use string concatenation
UPDATE mytable SET name = 'mayname-id-' || (id::text);
I would like to create a SQL query (or plpgsql) that will md5() all given rows regardless of type. However, as shown below, if one value is null then the whole hash is null:
UPDATE thetable
SET hash = md5(accountid || accounttype || createdby || editedby);
I am later using the hash to compare uniqueness so null hash does not work for this use case.
The problem is the way Postgres handles concatenating nulls. For example:
thedatabase=# SELECT accountid || accounttype || createdby || editedby
FROM thetable LIMIT 5;
1Type113225
<NULL>
2Type11751222
3Type10651010
4Type10651
I could use coalesce or CASE statements if I knew the type; however, I have many tables and I will not know the type of every column ahead of time.
There is a much more elegant solution for this.
In Postgres, using the table name in a SELECT is permitted and it has type ROW. If you cast this to type TEXT, it gives all the columns concatenated together into a single string (the row's text representation).
Having this, you can get the md5 of all columns as follows:
SELECT md5(mytable::TEXT)
FROM mytable
If you want to use only some columns, use the ROW constructor and cast it to TEXT:
SELECT md5(ROW(col1, col2, col3)::TEXT)
FROM mytable
Another nice property about this solution is that md5 will be different for NULL vs. empty string.
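A quick way to see that property:
-- NULL and '' produce different row text, hence different hashes
SELECT md5(ROW('a', NULL)::text) AS hash_with_null,
       md5(ROW('a', '')::text)   AS hash_with_empty;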
You can also use something similar to mvp's solution. Instead of using the ROW() function, which is not supported by Amazon Redshift...
Invalid operation: ROW expression, implicit or explicit, is not supported in target list;
My proposition is to use the NVL2 and CAST functions to cast different types of columns to CHAR, since this type is compatible with all Redshift data types according to the documentation. Below is an example of how to achieve a null-proof MD5 in Redshift.
SELECT md5(NVL2(col1, col1::char, '') ||
           NVL2(col2, col2::char, '') ||
           NVL2(col3, col3::char, ''))
FROM mytable
This might work without casting the second NVL2 function argument to char, but it would definitely fail if you tried to get the md5 of a date column with a null value.
I hope this would be helpful for someone.
Have you tried using CONCAT()? I just tried in my PG 9.1 install:
SELECT CONCAT('aaaa',1111,'bbbb'); => aaaa1111bbbb
SELECT CONCAT('aaaa',null,'bbbb'); => aaaabbbb
Therefore, you can try:
SELECT MD5(CONCAT(column1, column2, column3, column_n)) => md5_hash string here
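Applied to the original statement from the question, that could look like this (same table and column names as there):
-- CONCAT treats NULL as an empty string, so the hash is never NULL
UPDATE thetable
SET hash = md5(CONCAT(accountid, accounttype, createdby, editedby));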
select MD5(cast(p as text)) from fiscal_cfop as p