How can I copy data from one column to another in the same table? - sql

Is it possible to copy data from column A to column B for all records in a table in SQL?

How about this:
UPDATE table SET columnB = columnA;
This will update every row.

UPDATE table_name
SET destination_column_name = orig_column_name
WHERE condition_if_necessary;

This will update all the rows in that column if safe mode is not enabled.
UPDATE table SET columnB = columnA;
If safe mode is enabled, you will need to use a WHERE clause. I use the primary key being greater than 0, which effectively updates every row:
UPDATE table SET columnB = columnA WHERE table.column > 0;

If you want to copy a column to another column with a different data type in PostgreSQL, you must cast/convert to the target data type first, otherwise it will return an error like:
ERROR: column "test_date" is of type timestamp without time zone but expression is of type character varying
LINE 1: update table_name set test_date = date_string_col
HINT: You will need to rewrite or cast the expression.
An example of converting varchar to timestamp:
update table_name set timestamp_col = date_string_col::TIMESTAMP;
An example of converting varchar to int:
update table_name set int_column = string_col::INTEGER;
However, almost any column type (except binary/file-like types) can be copied to a string column (character varying) without casting.
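For example, a minimal sketch with hypothetical column names, relying on PostgreSQL's automatic assignment cast to character varying:
update table_name set string_col = int_column;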

Related

Can't alter column type to a shorter VARCHAR in Redshift. Getting "target column size should be greater or equal to current maximum column size"

I'm trying to run
alter table schema_name.table_name
ALTER COLUMN column_name TYPE varchar(256)
in Amazon Redshift, but I'm getting this error:
SQL Error [500310] [0A000]: Amazon Invalid operation: cannot alter column "column_name" of relation "table_name", target column size 256 should be greater or equal to current maximum column size 879;
I've already tried
update schema_name.table_name
set column_name = CAST(column_name AS VARCHAR(256))
and
update schema_name.table_name
set column_name = SUBSTRING(column_name, 1, 256)
in order to reduce this maximum column size of 879, but I still get the same error. I know I can work around it by creating a new VARCHAR(256) column with the same data, but is there another way?
I believe the answer is 'No, you cannot reduce a varchar column length'. You need to copy the data somewhere else (new table, new column) to reduce the size. Redshift doesn't know if the old data can fit in the new size w/o trying to perform such a copy itself.
You can create a new column of the size you want, copy the data from the old column into it (truncating strings if necessary), rename the old column to something unique, rename the new column to the old column's name, and then drop the uniquely named column. You end up with the same column names as before, but with the reduced maximum length on the column in question. However, the default column order will be different.
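A sketch of that rename workflow, reusing the hypothetical names from the question (the temporary names column_name_tmp and column_name_old are chosen here for illustration); note that Redshift VARCHAR lengths are counted in bytes, so multibyte data may need a smaller cutoff than 256:
ALTER TABLE schema_name.table_name ADD COLUMN column_name_tmp VARCHAR(256);
UPDATE schema_name.table_name SET column_name_tmp = SUBSTRING(column_name, 1, 256);
ALTER TABLE schema_name.table_name RENAME COLUMN column_name TO column_name_old;
ALTER TABLE schema_name.table_name RENAME COLUMN column_name_tmp TO column_name;
ALTER TABLE schema_name.table_name DROP COLUMN column_name_old;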
CASCADE worked in my case.
drop table dbname.tablename CASCADE;

How to append a custom object to JSONB column type in postgres

I need to update a jsonb column which contains data {"X":true}.
I need to append a complex object of the form {"obj":{"a":1,"b":2}} so that the final value of that column becomes {"X":true,"obj":{"a":1,"b":2}}. What would the query to update this row be?
Postgres version 12.
Update: the following query
update tableName set columnName = (select '{"obj":{"a":1,"b":2}}'::jsonb || columnName::jsonb) where ...
returns successfully when there is a value present, but when the column is NULL it remains NULL after running the update. I need to be able to add {"obj":{"a":1,"b":2}} even when the column is NULL.
You can use the concatenation operator:
'{"X":true}'::jsonb || '{"obj":{"a":1,"b":2}}'::jsonb
If you want to update an existing column, use coalesce() to deal with NULL values:
update the_table
set the_column = coalesce(the_column, '{}')||'{"obj":{"a":1,"b":2}}'
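For example, a quick check of the NULL case, using a literal in place of the column:
select coalesce(null::jsonb, '{}') || '{"obj":{"a":1,"b":2}}';
which returns {"obj": {"a": 1, "b": 2}}, showing that a missing value is treated as an empty object before the concatenation.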

Update column B based on Column A for all rows in table

I need to insert a hash value into column B based on the value of column A, and I need to do this for every row in the table.
I always get this error, no matter what I try:
ERROR: more than one row returned by a subquery used as an expression
I have been trying different versions of the following:
UPDATE table
SET column b = md5((SELECT column a FROM table))
WHERE column a IS NOT NULL;
Any suggestions on how to perform this operation?
No need for a subquery here. As I understand, you want to store the checksum of column_a in column_b. As one would expect, Postgres' md5() function expects a single, scalar argument of string datatype, so:
UPDATE table
SET column_b = md5(column_a)
WHERE column_a IS NOT NULL;
Note that it would probably be simpler to use a generated column (available since Postgres 12) to store this derived information.
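A minimal sketch of that alternative, assuming a hypothetical table my_table whose column_a is a text column (the GENERATED ... STORED syntax requires Postgres 12 or later):
ALTER TABLE my_table
ADD COLUMN column_b text GENERATED ALWAYS AS (md5(column_a)) STORED;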

SQL update: set value in table A if it equals a value in table B

this query is working fine.
UPDATE data
SET unit_id='a3110a89'
WHERE unit_id='7d18289f';
Now I need to run this query over 30 times, so I made a CSV file and imported it into the DB with this command:
COPY mytable FROM 'D:/test.csv' WITH CSV HEADER DELIMITER AS ','
Now I have a table called mytable with 2 columns, old and new.
I want to search column unit_id in the table "data": wherever its value equals the value in mytable.old, replace it with the value of mytable.new from the same row.
I tried to run this query but I get an error:
UPDATE data
SET unit_id=(SELECT mytable."old" FROM public.mytable)
WHERE unit_id=(SELECT mytable."new" FROM public.mytable)
error:
more than one row returned by a subquery used as an expression
I think I'm just going about it the wrong way...
Thanks for the help!
By the way, I'm using PostgreSQL.
Your subqueries need to be correlated to the outer update:
UPDATE data
SET unit_id = (SELECT mytable."new" FROM public.mytable WHERE data.unit_id = mytable."old")
WHERE unit_id in (SELECT mytable."old" FROM public.mytable);
That is, set the unit_id to the "new" value, when you find the "old" value in the table.
Can you try it like this?
UPDATE data A
SET unit_id = B."old"
FROM (SELECT mytable."old", mytable."new" FROM public.mytable) B
WHERE A.unit_id = B."new"
UPDATE data A
SET unit_id = B."old"
FROM public.mytable B
WHERE A.unit_id = B."new"
;
BTW: it looks like you also have old and new swapped in your question. Do you really want A's value to be set to B's old field?
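If the intent really is to replace each old value with its new counterpart, as the first answer assumes, a sketch of the flipped version simply swaps the two columns:
UPDATE data A
SET unit_id = B."new"
FROM public.mytable B
WHERE A.unit_id = B."old";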

MS Access Alter Statement: change column data type to DATETIME

I have a column named recordTime in my Access DB table table1.
This column is currently of TEXT type, and most of its values are in the format yyyy-mm-dd hh:nn:ss, but there are also some bad records like this: yyyy-mm- ::.
Now I would like to change the data type of this column from TEXT to DATETIME. I tried the following query, but nothing happens:
ALTER TABLE table1
ALTER COLUMN recordTime DATETIME;
Am I doing it wrong?
Try running these:
ALTER TABLE table1 ADD NewDate DATE
Then run
UPDATE table1
SET NewDate = RecordTime
WHERE RIGHT(RecordTime,4) <> '- ::'
You can then delete the RecordTime column and rename NewDate.
I prefer adding a new column just in case there are any issues with the UPDATE and you can compare the 'cleaned' column and the initial data before proceeding.
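A hedged sketch of that cleanup step, assuming you have compared NewDate against the original data first; Access SQL can drop the old column, while renaming NewDate back to recordTime is easiest to do in the table's Design view:
ALTER TABLE table1 DROP COLUMN RecordTime;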