I have jsonb field(data) in Postgresql with a structure like:
{ "id" => { "some_key" => [1, 2, 3] } }
I need to migrate the value to a different field.
t.jsonb "data"
t.integer "portals", default: [], array: true
When I try to do it like this:
UPDATE table_name
SET portals = ARRAY[data -> '1' ->> 'portals']
WHERE id = 287766
It raises an error:
Caused by PG::DatatypeMismatch: ERROR: column "portals" is of type integer[] but expression is of type text[]
Here is one way to do it. But if you search the site, as you should have done, you will find more.
Schema
create table t (
data jsonb
);
insert into t values ('{"1" : { "k1" : [1,2,3,5]} }');
insert into t values ('{"2" : { "k2" : [4,5,6,7]} }');
create table i (
id int,
v int[]
);
Some tests
select data -> '1' -> 'k1'
from t
where data ? '1'
;
insert into i values(1,ARRAY[1,2,3]);
update i
set v = (select replace(replace(data -> '1' ->> 'k1', '[', '{'), ']', '}')::int[] from t where data ? '1')
where id = 1;
select * from i;
The above gets the array as text, just as you did. After that, a couple of text replacements turn it into an integer array literal that can be cast to int[].
DB Fiddle
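If you would rather avoid the string replacements, a jsonb-native variant is possible. This is a minimal sketch, assuming Postgres 9.4+ for jsonb_array_elements_text: expand the array, cast each element, and aggregate it back.
-- expand the jsonb array to rows of text, cast each to int,
-- and aggregate into an int[] (sketch against the tables above)
update i
set v = (
    select array_agg(e::int)
    from t, jsonb_array_elements_text(data -> '1' -> 'k1') as e
    where data ? '1'
)
where id = 1;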
I understand that in order to prevent an injection attack, PDO::prepare first sends the query to the server and the parameters follow later. Now, I feel this introduces another problem: it implies there cannot be a rollback after PDO::execute, or am I missing something?
I have two tables, table1 and table2, in an application. The two tables are not supposed to contain the same row. When I use INSERT INTO table1 SELECT FROM table2, I want to DELETE FROM table2 if the INSERT query succeeds, and if either of the queries fails, I want to roll back. So I have the following code:
$dbConn->beginTransaction();
$stmt1 = $dbConn->prepare( "INSERT INTO table1 ( field1, field2, field3 )
SELECT field1, field2, field3 FROM table2 WHERE field4 = :field4" );
$stmt1->execute( array( ':field4' => $field4 ) );
$stmt2 = $dbConn->prepare( "DELETE FROM table2 WHERE field4 = :field4" );
$stmt2->execute( array( ':field4' => $field4 ) );
if ( $stmt1->rowCount() > 0 && $stmt2->rowCount() > 0 )
{
$dbConn->commit();
return true;
}
else
{
$dbConn->rollBack();
return false;
}
Without prepared statements this is very easy, but with them it seems difficult. Has anyone done something like this before?
This isn't difficult, and from what I can tell it should work fine. However, I would restructure the code to use exceptions (this assumes PDO's error mode is set to PDO::ERRMODE_EXCEPTION, so a failed query throws instead of silently returning false):
try {
$dbConn->beginTransaction();
$stmt1 = $dbConn->prepare( "INSERT INTO table1 ( field1, field2, field3 )
SELECT field1, field2, field3 FROM table2 WHERE field4 = :field4" );
$stmt1->execute( array( ':field4' => $field4 ) );
$stmt2 = $dbConn->prepare( "DELETE FROM table2 WHERE field4 = :field4" );
$stmt2->execute( array( ':field4' => $field4 ) );
if ( $stmt1->rowCount() > 0 && $stmt2->rowCount() > 0 )
{
$dbConn->commit();
return true;
}
else
{
throw new LogicException('Unequal row counts.');
}
} catch (Exception $e) {
$dbConn->rollBack();
if ($e instanceof LogicException) {
// just return
return false;
} else {
// otherwise rethrow, because something we didn't
// expect to go wrong did
throw $e;
}
}
This way we also roll back if something else throws an exception, whether from PDO or from some other step in our data preparation.
I'm looking to update multiple rows in PostgreSQL in one statement. Is there a way to do something like the following?
UPDATE table
SET
column_a = 1 where column_b = '123',
column_a = 2 where column_b = '345'
You can also use the update ... from syntax with a mapping table. If you want to update more than one column, it's much more generalizable:
update test as t set
column_a = c.column_a
from (values
('123', 1),
('345', 2)
) as c(column_b, column_a)
where c.column_b = t.column_b;
You can add as many columns as you like:
update test as t set
column_a = c.column_a,
column_c = c.column_c
from (values
('123', 1, '---'),
('345', 2, '+++')
) as c(column_b, column_a, column_c)
where c.column_b = t.column_b;
sql fiddle demo
Based on @Roman's solution, you can set multiple values:
update users as u set -- postgres FTW
email = u2.email,
first_name = u2.first_name,
last_name = u2.last_name
from (values
(1, 'hollis@weimann.biz', 'Hollis', 'Connell'),
(2, 'robert@duncan.info', 'Robert', 'Duncan')
) as u2(id, email, first_name, last_name)
where u2.id = u.id;
Yes, you can:
UPDATE foobar SET column_a = CASE
WHEN column_b = '123' THEN 1
WHEN column_b = '345' THEN 2
END
WHERE column_b IN ('123','345')
And working proof: http://sqlfiddle.com/#!2/97c7ea/1
For updating multiple rows in a single query, you can try the following:
UPDATE table_name
SET
column_1 = CASE WHEN any_column = value and any_column = value THEN column_1_value end,
column_2 = CASE WHEN any_column = value and any_column = value THEN column_2_value end,
column_3 = CASE WHEN any_column = value and any_column = value THEN column_3_value end,
.
.
.
column_n = CASE WHEN any_column = value and any_column = value THEN column_n_value end
If you don't need the additional condition, remove the AND part of the query above. Note that any row not matched by a CASE branch gets its column set to NULL, so add a WHERE clause (or ELSE branches) to limit the update to the intended rows, as in the sketch below.
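A hypothetical concrete example of that caveat (table and values invented for illustration): the WHERE clause restricts the update so that rows matched by no CASE branch are not overwritten with NULL.
-- without the WHERE clause (or an ELSE), every other row's status
-- would be set to NULL
UPDATE accounts
SET status = CASE
        WHEN id = 1 THEN 'active'
        WHEN id = 2 THEN 'suspended'
    END
WHERE id IN (1, 2);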
Let's say you have an array of IDs and an equivalent array of statuses. Here is an example of how to do this with static SQL (a query whose text does not change for different values), using arrays:
drop table if exists results_dummy;
create table results_dummy (id int, status text, created_at timestamp default now(), updated_at timestamp default now());
-- populate table with dummy rows
insert into results_dummy
(id, status)
select unnest(array[1,2,3,4,5]::int[]) as id, unnest(array['a','b','c','d','e']::text[]) as status;
select * from results_dummy;
-- THE update of multiple rows with/by different values
update results_dummy as rd
set status=new.status, updated_at=now()
from (select unnest(array[1,2,5]::int[]) as id,unnest(array['a`','b`','e`']::text[]) as status) as new
where rd.id=new.id;
select * from results_dummy;
-- in application code, using IDs as the first bind variable and statuses as the second:
update results_dummy as rd
set status=new.status, updated_at=now()
from (select unnest(:1::int[]) as id,unnest(:2::text[]) as status) as new
where rd.id=new.id;
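The same update can also be written with the multi-argument form of unnest in the FROM clause (available since Postgres 9.4), which pairs the two arrays element by element and makes the row alignment explicit. A sketch against the same dummy table:
-- multi-argument unnest zips the arrays positionally
update results_dummy as rd
set status = new.status, updated_at = now()
from unnest(array[1,2,5]::int[], array['a`','b`','e`']::text[]) as new(id, status)
where rd.id = new.id;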
I came across a similar scenario, and the CASE expression was useful to me.
UPDATE reports SET is_default =
case
when report_id = 123 then true
when report_id != 123 then false
end
WHERE account_id = 321;
reports is the table here, and account_id is the same for the report_ids mentioned above. The query sets the one record matching the condition to true and all non-matching ones to false.
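Since the CASE here only maps a comparison to true/false, the same update can be written more compactly as a boolean expression:
UPDATE reports
SET is_default = (report_id = 123)
WHERE account_id = 321;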
The answer provided by @zero323 works great on Postgres 12. In case someone has multiple values for column_b (referred to in the OP's question):
UPDATE conupdate SET orientation_status = CASE
when id in (66934, 39) then 66
when id in (66938, 49) then 77
END
WHERE id IN (66934, 39, 66938, 49)
In the above query, id is analogous to column_b; orientation_status is analogous to column_a of the question.
In addition to the other answers, comments, and documentation: the datatype cast can be placed at the point of use. This allows easier copy-pasting:
update test as t set
column_a = c.column_a::numeric
from (values
('123', 1),
('345', 2)
) as c(column_b, column_a)
where t.column_b = c.column_b::text;
@Roman, thank you for the solution. For anyone using Node, I made this utility method to pump out a query string that updates n columns across n records.
Sadly it only handles n records with the same columns, so the recordRows parameter is pretty strict.
const payload = {
rows: [
{
id: 1,
ext_id: 3
},
{
id: 2,
ext_id: 3
},
{
id: 3,
ext_id: 3
} ,
{
id: 4,
ext_id: 3
}
]
};
var result = updateMultiple('t', payload);
console.log(result);
/*
qstring returned is:
UPDATE t AS t SET id = c.id, ext_id = c.ext_id FROM (VALUES (1,3),(2,3),(3,3),(4,3)) AS c(id,ext_id) WHERE c.id = t.id
*/
// NOTE: values are interpolated into the SQL string unescaped, so this is
// only safe for trusted, numeric data; for anything user-supplied, prefer a
// parameterized query instead of string building.
function updateMultiple(table, recordRows){
    var valueSets = [];
    var cSet = new Set();
    var columns = [];
    // collect the column names (from their first occurrence) and build one
    // parenthesized value group per row
    for (const [key, value] of Object.entries(recordRows.rows)) {
        var groupArray = [];
        for (const [key2, value2] of Object.entries(recordRows.rows[key])){
            if(!cSet.has(key2)){
                cSet.add(`${key2}`);
                columns.push(key2);
            }
            groupArray.push(`${value2}`);
        }
        valueSets.push(`(${groupArray.toString()})`);
    }
    var valueSetsString = valueSets.join();
    // build the "col = c.col" assignment list
    var setMappings = '';
    for(var i = 0; i < columns.length; i++){
        var fieldSet = columns[i];
        setMappings += `${fieldSet} = c.${fieldSet}`;
        if(i < columns.length - 1){
            setMappings += ', ';
        }
    }
    // the generated query assumes every table has an "id" column to join on
    var qstring = `UPDATE ${table} AS t SET ${setMappings} FROM (VALUES ${valueSetsString}) AS c(${columns}) WHERE c.id = t.id`;
    return qstring;
}
I don't think the accepted answer is entirely correct. It is order-dependent. Here is an example that will not work correctly with the approach from that answer.
create table xxx (
id varchar(64),
is_enabled boolean
);
insert into xxx (id, is_enabled) values ('1',true);
insert into xxx (id, is_enabled) values ('2',true);
insert into xxx (id, is_enabled) values ('3',true);
UPDATE public.xxx AS pns
SET is_enabled = u.is_enabled
FROM (
    VALUES
        ('3', false),
        ('1', true),
        ('2', false)
) AS u(id, is_enabled)
WHERE u.id = pns.id;
select * from xxx;
So the question still stands: is there a way to do it in an order-independent way?
After trying a few things, the following seems to be order-independent:
UPDATE public.xxx AS pns
SET is_enabled = u.is_enabled
FROM (
SELECT '3' as id, false as is_enabled UNION
SELECT '1' as id, true as is_enabled UNION
SELECT '2' as id, false as is_enabled
) as u
WHERE u.id = pns.id;
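One detail of the UNION variant worth noting: UNION deduplicates rows, so identical (id, is_enabled) pairs would silently collapse, while UNION ALL keeps every row as written. A sketch of the same statement with UNION ALL:
UPDATE public.xxx AS pns
SET is_enabled = u.is_enabled
FROM (
    SELECT '3' AS id, false AS is_enabled UNION ALL
    SELECT '1' AS id, true  AS is_enabled UNION ALL
    SELECT '2' AS id, false AS is_enabled
) AS u
WHERE u.id = pns.id;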
I have to INSERT INTO two tables at once; let's say one table is client_enquiry and the other is client_materials.
Up to here it's okay: the INSERT command works on both tables. But what if something bad happens while I'm inserting into the second table (client_materials)? How can I roll back if the INSERT command fails on client_materials?
Basically I have this:
$sql_table1 = "INSERT INTO client_enquiry (reference, date) VALUES ('REF', '2013-05-12')";
$q = $conn->prepare($sql_table1);
$q ->execute();
$Last_ID = $conn->lastInsertId('id_enquiry');
$sql_table2 = "INSERT INTO client_materials (id_client_enquiry,description, date)
VALUES (".$Last_ID."'Description', '2013-05-12')";
$q = $conn->prepare($sql_table2);
$q -> execute();
Do the very rollback you mentioned.
// make sure PDO throws exceptions, otherwise the catch block below never runs
$conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$conn->beginTransaction();
try
{
$sql = "INSERT INTO client_enquiry (reference, date) VALUES (?,?)";
$q = $conn->prepare($sql);
$q ->execute(array('REF', '2013-05-12'));
$Last_ID = $conn->lastInsertId();
$sql_table2 = "INSERT INTO client_materials (id_client_enquiry,description, date)
VALUES (?,?,?)";
$q = $conn->prepare($sql_table2);
$q -> execute(array($Last_ID, 'Description', '2013-05-12'));
$conn->commit();
}
catch (PDOException $e)
{
$conn->rollback();
throw $e;
}
You just need to be sure that the storage engine supports transactions and that PDO is set to exception-throwing mode.
I want to add a new record to table1 in SQLite:
use SQL::Abstract;
my %data = (
id => \'max(id)', # this doesn't work, so which variant is right?
record => 'Something'
);
my $sql = SQL::Abstract->new;
my ($stmt, @bind) = $sql->insert('table1', \%data);
...
my $sth = $dbh->prepare($stmt);
If I used DBIx::Class in a Catalyst app, I would write it like so:
id => $c->model('Model')->get_column('id')->max()
and it would work fine.
So how can I reach the same aim using just SQL::Abstract, which DBIx::Class uses as well?
Could someone fix it? Thanks.
Here is a piece of code. As you can see, first you need to get max(id)+1 and then run the insert command. I have to warn you that this is not safe: in a multi-user, multi-process, or multi-threaded environment, a second process can execute the same code concurrently and you get a race condition.
But I assume you are just learning the SQL::Abstract API, so that problem doesn't matter here.
use DBI;
use SQL::Abstract;
#create table TEST(ID integer, NAME varchar);
my $dbh = DBI->connect('dbi:SQLite:dbname=test.db', '', '', {AutoCommit=>1});
my $sql = SQL::Abstract->new;
my($stmt, @bind) = $sql->select("TEST", [ 'max(ID)+1 as ID' ] );
my $sth = $dbh->prepare($stmt);
$sth->execute(@bind);
my ($id) = $sth->fetchrow_array // 1;
print "Select ID: $id", "\n";
$sth->finish;
($stmt, @bind) = $sql->insert("TEST", { ID=>$id, NAME=>"test-name"} );
$sth = $dbh->prepare($stmt);
$sth->execute(@bind);
$dbh->disconnect;
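As a footnote to the race condition mentioned above: in plain SQL the read and the write can be collapsed into a single statement, which SQLite executes atomically. A sketch against the same TEST table:
-- compute max(ID)+1 inside the INSERT itself, so there is no
-- window between reading the max and writing the new row
INSERT INTO TEST (ID, NAME)
SELECT COALESCE(MAX(ID), 0) + 1, 'test-name'
FROM TEST;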