CONCAT in Oracle - sql

I've been researching the CONCAT function extensively but hit a wall creating a temp table. I have two columns: ID (ex. 4323) and Source (ex. PHI). I want to add a column that includes a prefix of "API-" to the ID column (ex. API-4323). Anyone have any insight?

All rows?
UPDATE TheTable
SET id = 'API-'||id;
Or,
UPDATE TheTable
SET id = CONCAT('API-',id);
EDIT:
Your problem statement led me to believe you wanted to create a new column ("I want to add a column..."). Sorry for the confusion. I have changed the answer to update the ID column.
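If a separate (temp) table with the extra column is what's actually needed, as the question describes, a minimal sketch in the same spirit (TheTable is carried over from the answer above; the new table and column names are just illustrative):
-- Build a new table that keeps ID and Source and adds the prefixed value.
CREATE TABLE the_table_tmp AS
SELECT id,
       source,
       'API-' || id AS prefixed_id   -- e.g. 4323 becomes API-4323
FROM TheTable;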

Related

How to populate dummy data into an existing table?

I have a large table with a given number of rows in which I'd like to replace personal information with dummy data. I've written functions for this, but I'm struggling with how to apply them.
I'd like to do something like:
ALTER TABLE SomeTable DROP COLUMN SomeName
ALTER TABLE SomeTable ADD COLUMN SomeName NVARCHAR(30) DEFAULT (SELECT * FROM dbo.FakeName)
Help would be appreciated.
Instead of dropping and adding a column, just do an UPDATE.
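For instance, a minimal sketch of that suggestion (assuming dbo.FakeName exposes a Name column; adjust to your schema):
-- Overwrite the existing column in place instead of dropping and re-adding it.
-- Caveat: SQL Server may evaluate this subquery once for the whole statement,
-- so every row can end up with the same dummy value.
UPDATE SomeTable
SET SomeName = (SELECT TOP 1 Name FROM dbo.FakeName ORDER BY NEWID());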
If you just want to replace the actual data with dummy data, why not use an UPDATE statement like the one below? We do something very similar in our day-to-day work, for example to sanitize users' real email addresses when restoring data to a local or test machine, while setting another column to a fixed placeholder such as 'XXX'.
UPDATE SomeTable
SET Email_address = SUBSTRING(Email_address, 0, CHARINDEX('@', Email_address)) + '@mytest.com',
    SomeName2 = 'XXX';

How to update an already existing value in oracle sql column

I tried to find a possible solution to my question, but without success. Let's say that I have a TEST table with many records, and one of the columns is called CA_GROUP. That column contains the following values:
{"TEST1":"1","TEST2":"2"}
I want to add this part ',"TEST3":"3"' to the already existing values in that column. So the result should be as:
{"TEST1":"1","TEST2":"2","TEST3":"3"}
The only thing that I know is this:
update test t
set t.ca_group = replace(t.ca_group, '{"TEST1":"1","TEST2":"2"}'
, '{"TEST1":"1","TEST2":"2","TEST3":"3"}')
where id = xxxxxx
and other conditions.
update test t
set t.ca_group = replace(t.ca_group, '{"CODE1":"1","CODE2":"2"}'
, '{"CODE1":"1","CODE2":"2","TEST3":"3"}')
where id = xxxxxx
and other conditions.
But this is not efficient for me, because I have a lot of records and I would need to add the same value to each row one by one. Is there a smarter way of doing this?
How about appending 'TEST3' to each existing value?
update test t
set t.ca_group = substr(t.ca_group, 1, length(t.ca_group) - 1) || ',"TEST3":"3"}'
where id = xxxxxx
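If some rows might already contain the new key, or might hold something other than a well-formed {...} value, the same idea can be guarded with a couple of extra predicates (standard Oracle string functions, nothing version-specific):
update test t
set t.ca_group = substr(t.ca_group, 1, length(t.ca_group) - 1) || ',"TEST3":"3"}'
where t.ca_group like '{%}'               -- only touch well-formed values
  and t.ca_group not like '%"TEST3"%';    -- skip rows that were already patched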

Update redshift column value with modified data from other column

I have a Redshift table which is used for tracking, and as a result it's pretty huge. I need to update one column after applying some text operations and extracting a value from another column.
The query that I have managed to write works only for one row.
UPDATE schema.table_name
SET data_id = (
    SELECT split_part(regexp_substr(data_column, 'pattern=[^&]*'), '=', 2)::BIGINT
    FROM schema.table_name
    WHERE id = 1620
)
WHERE id = 1620;
How do I get it to work for every row in the table?
UPDATE
schema.table_name
SET
data_id = SPLIT_PART(REGEXP_SUBSTR(data_column, 'pattern=[^&]*'),'=',2)::BIGINT;
Just don't put WHERE id = 1620 at the end of the update query.
Updates are not efficient in Redshift. If you have a huge table and you intend to update every single row, you should instead copy the data (with the updated column) to a new table and then flip them.
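A sketch of that copy-and-swap approach (the table name and the extraction expression are taken from the question; in practice you would list every column of the real table):
-- Build the new table with the derived column already populated.
-- Note: dist/sort keys and grants are not carried over by CTAS, so re-declare them as needed.
CREATE TABLE schema.table_name_new AS
SELECT id,
       data_column,
       SPLIT_PART(REGEXP_SUBSTR(data_column, 'pattern=[^&]*'), '=', 2)::BIGINT AS data_id
FROM schema.table_name;

-- Swap the tables once the copy is verified.
ALTER TABLE schema.table_name RENAME TO table_name_old;
ALTER TABLE schema.table_name_new RENAME TO table_name;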

SQL Server - Select list and update a field in each row

I hate to ask something that I know has been answered, but after 45 minutes of searching I yield to everyone here.
I have a table where I need to select a list of items based on a value, then take one field, add a year to it, and update another column with that value.
Example:
select * from table where description='value_i_need'
UPDATE table SET endDate=DATEADD(year, 1, beginDate) -- For each item in the select
EDIT: The select statement will return a ton of results. I need to update each row uniquely.
I don't know how to combine the select or loop through the results.
My SQL knowledge is limited, and I wish I could have figured this out without having to ask. I appreciate any help.
Edit 2:
I have one table that I need to go through and update the end date on each item that matches a description. So I have 1000 "Computers", for example, and I need to update a field in each row with that description based on another field in the same row. All coming from one table.
Try it like this:
UPDATE T1
SET T1.endDate = DATEADD(year, 1, T1.beginDate)
FROM [Table] T1
WHERE T1.description = 'value_i_need'
Unless I missed something, you should be able to simply add your criteria to your update statement as follows:
UPDATE [table]
SET endDate=DATEADD(year, 1, beginDate)
WHERE description='value_i_need'
UPDATE T
SET T.END_DATE = DATEADD(YEAR, 1, T.BEGINDATE)
FROM [TABLE] T
WHERE T.DESCRIPTION = 'value_i_need'
Do you mean:
UPDATE [TABLE] SET ENDDATE=DATEADD(YEAR,1,BEGINDATE) WHERE DESCRIPTION='VALUE_I_NEED'

postgresql: Fast way to update the latest inserted row

What is the best way to modify the latest added row without using a temporary table?
E.g. the table structure is
id | text | date
My current approach would be an INSERT with the PostgreSQL-specific clause RETURNING id, so that I can update the table afterwards with:
update myTable set date='2013-11-11' where id = lastRow
However, I have the feeling that PostgreSQL is not simply using the last row but is iterating through millions of entries until id = lastRow is found. How can I directly access the last added row?
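For reference, the RETURNING approach mentioned in the question looks roughly like this (column names are taken from the example structure; the values are made up):
-- Insert and get the generated id back in the same statement (PostgreSQL-specific).
INSERT INTO myTable (text, date)
VALUES ('some text', '2013-11-10')
RETURNING id;

-- Then update using the id that came back (say it was 42).
update myTable set date='2013-11-11' where id = 42;
With an index on id (e.g. a primary key), that lookup is an index scan rather than a walk through millions of rows.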
update myTable date='2013-11-11' where id IN(
SELECT max(id) FROM myTable
)
Just to add to mvb13's answer (since I don't have enough points to comment directly yet): there is one word missing, set. Hopefully this will save someone some time working out the correct syntax.
update myTable set date='2013-11-11' where id IN(
SELECT max(id) FROM myTable
);