update a column using values from a different column of the same table - sql

Given the DB table:
CREATE TABLE stuff (
id text not null,
other text
);
Given that it has lots of id values but other set to NULL everywhere, is there an elegant way to update the table so that every row's other column is set to OTHER-{id} (where {id} is the value of that row's id column)?
(It must work in PostgreSQL.)

Only a simple update statement is needed with some string concatenation (||):
update stuff
set other = 'OTHER-' || id

You'll want to use the following:
UPDATE stuff
SET other = 'OTHER-' || id;
UPDATE is the keyword used to identify which table you'd like to update.
SET is the keyword used to identify which column you'd like to update and what value to assign to it:
'OTHER-' || id
'OTHER-' being a string
|| a shorthand way to concatenate
id the value you want.
Another way of writing this would be
other = concat('OTHER-',id);
I, along with many others, find the || method much cleaner, but it's worth knowing about the dedicated concat function as well.
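As a quick sanity check, the same statement can be run against an in-memory SQLite database, which shares the standard || concatenation operator with PostgreSQL (the Python driver and sample rows here are purely illustrative):

```python
import sqlite3

# Build the table from the question and leave `other` as NULL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stuff (id TEXT NOT NULL, other TEXT)")
conn.executemany("INSERT INTO stuff (id) VALUES (?)", [("a",), ("b",)])

# The one-line fix: build each row's value from its own id column.
conn.execute("UPDATE stuff SET other = 'OTHER-' || id")

print(conn.execute("SELECT id, other FROM stuff ORDER BY id").fetchall())
# → [('a', 'OTHER-a'), ('b', 'OTHER-b')]
```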

Related

SQL removing part of delimited field based on joining match?

Maybe there is a better method before I get to this step, but is there an easy way to match on one field and, if it matches, remove the matching part from the string in a second field?
TABLE example
ID | ID LIST
-----|---------
ID07 |ID05;ID06;ID07;ID08
This is just a one record example so ID and ID LIST will vary.
I'm looking to join and update/replace the match with nothing, or perhaps add a value to remove later.
Result I'm looking for
ID | ID LIST
-----|---------
ID07 |ID05;ID06;ID08
Is there any easy way to do this or should I go about this another way? I know some people would use a WHERE IN, but ID is going to vary. Maybe WHERE IN that field name. I'm a little confused conceptualizing this.
I'm using SQL Server Management Studio.
You can use the replace function: wherever ID followed by a semicolon appears in ID_LIST, it is replaced with an empty string (note that this simple version misses the case where ID is the last entry in the list):
select replace(ID_LIST, ID +';', '')
from your_table;
UPDATE your_table
SET ID_LIST = CASE WHEN ID_LIST = ID THEN ''  -- the only entry
WHEN ID_LIST LIKE ID + ';%'                   -- first entry
THEN SUBSTRING(ID_LIST, LEN(ID)+2, LEN(ID_LIST)-LEN(ID)-1)
WHEN ID_LIST LIKE '%;' + ID                   -- last entry
THEN LEFT(ID_LIST, LEN(ID_LIST)-LEN(ID)-1)
ELSE REPLACE(ID_LIST, ';'+ID+';', ';')        -- middle entry
END
WHERE ';'+ID_LIST+';' LIKE '%;'+ID+';%'
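The semicolon-padding trick in that WHERE clause can also do the removal itself: wrapping both the list and the ID in ';' makes first, middle, and last entries all look the same, so one REPLACE plus a TRIM covers every branch of the CASE expression. A sketch in SQLite syntax (|| instead of T-SQL's +, sample data from the question):

```python
import sqlite3

# Recreate the one-record example from the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE example (ID TEXT, ID_LIST TEXT)")
conn.execute("INSERT INTO example VALUES ('ID07', 'ID05;ID06;ID07;ID08')")

# Pad with ';' so every entry is ';'-delimited on both sides, replace
# ';ID;' with ';', then trim the padding back off.
conn.execute("""
    UPDATE example
    SET ID_LIST = trim(replace(';' || ID_LIST || ';',
                               ';' || ID || ';', ';'), ';')
    WHERE ';' || ID_LIST || ';' LIKE '%;' || ID || ';%'
""")

print(conn.execute("SELECT ID_LIST FROM example").fetchone()[0])
# → ID05;ID06;ID08
```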

How to skip condition check in where clause if parameter value is null or blank?

I need a little help. I am working on a stored procedure call which runs a select. (I can't put in the actual select due to business constraints, but I will give you a dummy scenario.)
Here is a stored procedure:
Procedure checkData (name IN VARCHAR2, vehicle IN VARCHAR2, retval OUT VARCHAR2, retdata OUT returnData)
The select inside it gets values from a text box field which is used for search. The select is like this,
select *
from myTable tab
WHERE tab.vehicle = vehicle
AND tab.name = name
The issue is that when the user specifies only one of the two values mentioned above, the search fails because both details are checked with an AND condition.
For example, if the user provides only the vehicle value, let's say 'BMW', then the result should contain all the entries for that vehicle. But since the user has not provided a name value, it is taken as '' or NULL, and the query doesn't return anything because no data matches the name = '' condition.
So I tried below,
select *
from myTable tab
WHERE tab.vehicle = vehicle
AND IF name IS NOT NULL OR name != ''
tab.name = name
But it says,
ORA-00920: invalid relational operator.
So what can I do to skip the second check if the value for name is either null or ''. In that specific case, I simply want to skip the second check and return result based on vehicle values only.
I can't use OR condition because I will need to get exact matches too when both the parameters are available. I am using Oracle DB.
Any help is appreciated. Thanks a lot.
Assuming that the columns vehicle and name in your table are never NULL, you can try something like this. (Note that inside the procedure's SQL, an unqualified identifier resolves to the column rather than the parameter, so give the parameters distinct names such as p_vehicle and p_name.)
select *
from myTable tab
WHERE nvl(p_vehicle, tab.vehicle) = tab.vehicle
AND nvl(p_name, tab.name) = tab.name
Try this. (IFNULL and IF are MySQL functions; in Oracle, the same "skip the check when the parameter is missing" logic can be written with a plain OR, again using a p_ prefix on the parameters to avoid clashing with the column names.)
SELECT * FROM myTable tab
WHERE tab.vehicle = p_vehicle
AND (p_name IS NULL OR tab.name = p_name)
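A minimal sketch of the optional-filter idiom, run on SQLite via Python (COALESCE plays the role of Oracle's NVL; the table, data, and the check_data helper are invented for illustration, and as above the columns are assumed never NULL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTable (vehicle TEXT, name TEXT)")
conn.executemany("INSERT INTO myTable VALUES (?, ?)",
                 [("BMW", "alice"), ("BMW", "bob"), ("Audi", "alice")])

def check_data(p_vehicle, p_name):
    # When a parameter is None, COALESCE falls back to the row's own
    # column, so the comparison is always true and the filter drops out.
    return conn.execute(
        """SELECT vehicle, name FROM myTable
           WHERE vehicle = COALESCE(?, vehicle)
             AND name    = COALESCE(?, name)
           ORDER BY name""",
        (p_vehicle, p_name)).fetchall()

print(check_data("BMW", None))   # → [('BMW', 'alice'), ('BMW', 'bob')]
print(check_data("BMW", "bob"))  # → [('BMW', 'bob')]
```

To also treat blank strings as "not provided" (Oracle already folds '' into NULL, SQLite does not), wrap the parameter in NULLIF(?, '').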

How to write pgsql update query with string aggregate?

I have an update query that needs to set the field to a unique string; the table already has a lot of data, and id is the unique primary key.
So the names should look like
mayname-id-1,
mayname-id-2,
mayname-id-3, etc
I tried to update with string_agg, but that doesn't work in update queries
UPDATE mytable
SET name = string_agg('mayname-id-', id);
How to construct string dynamically in an update query?
How about the following:
UPDATE mytable
SET name = 'mayname-id-' || CAST(id AS text)
Typically, you should not add such a completely redundant column at all. It's cleaner and cheaper to generate it as a functionally dependent value on the fly. You can use a view or a "generated column" for that. Details:
Store common query as column?
You can even have a unique index on such a functional value if needed.
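A minimal sketch of that alternative using SQLite's generated-column syntax (SQLite 3.31+; PostgreSQL 12+ spells it GENERATED ALWAYS AS (...) STORED); the name column is never written by hand but derived from id on the fly:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE mytable (
        id   INTEGER PRIMARY KEY,
        -- functionally dependent on id; computed, never stored by hand
        name TEXT GENERATED ALWAYS AS ('mayname-id-' || id) VIRTUAL
    )
""")
conn.executemany("INSERT INTO mytable (id) VALUES (?)", [(1,), (2,)])

print(conn.execute("SELECT name FROM mytable ORDER BY id").fetchall())
# → [('mayname-id-1',), ('mayname-id-2',)]
```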
Use string concatenation
UPDATE mytable SET name = 'mayname-id-' || (id::text);

How to get unique values from each column based on a condition?

I have been trying to find an optimal solution to select unique values from each column. My problem is that I don't know the column names in advance, since different tables have different numbers of columns. So first I have to find the column names, which I can do with the query below:
select column_name from information_schema.columns
where table_name='m0301010000_ds' and column_name like 'c%'
Sample output for column names:
c1, c2a, c2b, c2c, c2d, c2e, c2f, c2g, c2h, c2i, c2j, c2k, ...
Then I would use returned column names to get unique/distinct value in each column and not just distinct row.
I know a simple but lousy way is to write select distinct column_name from table where column_name = 'something' for every single column (around 20-50 times), and it's very time-consuming too. Since I can't use more than one DISTINCT per column name, I am stuck with this old-school solution.
I am sure there would be a faster and elegant way to achieve this, and I just couldn't figure how. I will really appreciate any help on this.
You can't just return rows, since distinct values don't go together any more.
You could return arrays, which can be had simpler than you may have expected:
SELECT array_agg(DISTINCT c1) AS c1_arr
,array_agg(DISTINCT c2a) AS c2a_arr
,array_agg(DISTINCT c2b) AS c2b_arr
, ...
FROM m0301010000_ds;
This returns distinct values per column. One array (possibly big) for each column. All connections between values in columns (what used to be in the same row) are lost in the output.
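A sketch of the same shape outside Postgres: SQLite has no array_agg, so group_concat(DISTINCT ...) stands in for it here, but the result is analogous, one row with one collapsed list of distinct values per column, the original row-wise associations gone (table and data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE m0301010000_ds (c1 TEXT, c2a TEXT)")
conn.executemany("INSERT INTO m0301010000_ds VALUES (?, ?)",
                 [("x", "1"), ("x", "2"), ("y", "1")])

# One aggregate of DISTINCT values per column, all in a single scan.
row = conn.execute("""
    SELECT group_concat(DISTINCT c1)  AS c1_arr,
           group_concat(DISTINCT c2a) AS c2a_arr
    FROM m0301010000_ds
""").fetchone()

print(row)  # one comma-separated list of distinct values per column
```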
Build SQL automatically
CREATE OR REPLACE FUNCTION f_build_sql_for_dist_vals(_tbl regclass)
RETURNS text AS
$func$
SELECT 'SELECT ' || string_agg(format('array_agg(DISTINCT %1$I) AS %1$I_arr'
, attname)
, E'\n ,' ORDER BY attnum)
|| E'\nFROM ' || _tbl
FROM pg_attribute
WHERE attrelid = _tbl -- valid, visible table name
AND attnum >= 1 -- exclude tableoid & friends
AND NOT attisdropped -- exclude dropped columns
$func$ LANGUAGE sql;
Call:
SELECT f_build_sql_for_dist_vals('public.m0301010000_ds');
Returns an SQL string as displayed above.
I use the system catalog pg_attribute instead of the information schema. And the object identifier type regclass for the table name. More explanation in this related answer:
PLpgSQL function to find columns with only NULL values in a given table
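The build-the-query-from-the-catalog idea also ports to other engines. In this sketch, SQLite's PRAGMA table_info plays the role of pg_attribute and group_concat(DISTINCT ...) stands in for array_agg(DISTINCT ...); the table, data, and helper name are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (c1 TEXT, c2a TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [("x", "1"), ("x", "2")])

def build_sql_for_dist_vals(table):
    # PRAGMA table_info rows are (cid, name, type, notnull, dflt, pk);
    # column 1 is the column name.
    cols = [r[1] for r in conn.execute(f"PRAGMA table_info({table})")]
    parts = ", ".join(
        f"group_concat(DISTINCT {c}) AS {c}_arr" for c in cols)
    return f"SELECT {parts} FROM {table}"

sql = build_sql_for_dist_vals("t")
print(sql)
# → SELECT group_concat(DISTINCT c1) AS c1_arr, group_concat(DISTINCT c2a) AS c2a_arr FROM t
print(conn.execute(sql).fetchone())
```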
If you need this in "real time", you won't be able to achieve it with a query that has to do a full table scan.
I would advise you to create a separate table containing the distinct values for each column (initialized with the SQL from @Erwin Brandstetter ;) and maintain it with a trigger on the original table.
Your new table will have one column per field. The number of rows will equal the maximum number of distinct values in any one field.
On insert: for each maintained field, check whether the value is already there; if not, add it.
On update: for each maintained field whose old value differs from the new value, check whether the new value is already there; if not, add it. For the old value, check whether any other row still has it; if not, remove it from the list (set the field to null).
On delete: for each maintained field, check whether any other row still has the value; if not, remove it from the list (set the value to null).
This way the load is mostly moved to the trigger, and queries on the value-list table will be super fast.
P.S.: Make sure to run all the trigger SQL through EXPLAIN to confirm it uses the best possible indexes and execution plans. For update/delete, just check whether the old value still exists (LIMIT 1).

PostgreSQL - Rule to create a copy of the primaryID table

In my schema I want to have a PrimaryID and a SearchID. Every SearchID is the PrimaryID with some text prepended. I need it to look like this:
PrimaryID = 1
SearchID = Search1
Since the PrimaryID is set to auto-increment, I was hoping I could use a PostgreSQL rule to do the following (pseudo code):
IF PRIMARYID CHANGES
{
SEARCHID = SEARCH(PRIMARYID)
}
This would hopefully occur exactly after the PrimaryID is set and happen automatically. So, is this the best way of achieving this, and can anyone provide an example of how it is done?
Thank you
Postgres 12 introduced genuine generated columns. See:
Computed / calculated / virtual / derived columns in PostgreSQL
For older (or any) versions, you could emulate a "virtual generated column" with a special function. Say your table is named tbl and the serial primary key is named tbl_id:
CREATE FUNCTION search_id(t tbl)
RETURNS text STABLE LANGUAGE SQL AS
$$
SELECT 'Search' || $1.tbl_id;
$$;
Then you can:
SELECT t.tbl_id, t.search_id FROM tbl t;
Table-qualification in t.search_id is needed in this case. Since search_id is not found as a column of table tbl, Postgres next looks for a function taking tbl as its argument.
Effectively, t.search_id is just a syntax variant of search_id(t), but makes usage rather intuitive.
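The t.search_id attribute notation is PostgreSQL-specific, but the underlying idea, deriving SearchID from the primary key on the fly instead of storing it, ports anywhere as a plain view. A minimal SQLite sketch (table and view names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (tbl_id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO tbl (tbl_id) VALUES (?)", [(1,), (2,)])

# The SearchID is never stored; the view computes it from tbl_id.
conn.execute("""
    CREATE VIEW tbl_with_search AS
    SELECT tbl_id, 'Search' || tbl_id AS search_id FROM tbl
""")

print(conn.execute(
    "SELECT search_id FROM tbl_with_search ORDER BY tbl_id").fetchall())
# → [('Search1',), ('Search2',)]
```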