SQL - UPDATE command fails when using LIKE

I'm performing a bulk update on a column within a table. I need to change the current date from NULL to a past date, which I know works fine when performed against a single account. But when using a wildcard, this seems to fail.
Any ideas what my issue is? Can I not use LIKE in a subquery?
UPDATE message
SET message.archived_at = (SELECT TO_CHAR(systimestamp-31, 'DD-MON-YY HH.MI.SS')
                           FROM dual)
WHERE EXISTS = (SELECT entity_id FROM user_info
                WHERE UPPER(user_info.directory_auth_id) like 'USER%')
I have 10,000 records that I need to update..
I've changed it to:
UPDATE message
SET message.archived_at = (SELECT TO_CHAR(systimestamp-31, 'DD-MON-YY HH.MI.SS')
FROM dual)
WHERE EXISTS (SELECT entity_id FROM user_info
WHERE UPPER(directory_auth_id) like 'JLOADUSER1001%')
The SELECT query in the WHERE EXISTS section, when run by itself, returns 10 user IDs. But when the whole query is run, it updates 1.8 million rows; the expected result is ~1500 rows.

LIKE clauses are allowed in Oracle subqueries and UPDATE statements. The line that seems erroneous is:
WHERE EXISTS = (SELECT entity_id FROM user_info
Use:
WHERE EXISTS (SELECT entity_id FROM user_info
instead.
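Note that fixing the syntax also explains the follow-up result: the EXISTS subquery is uncorrelated, so once it returns any row at all, every row in message satisfies the WHERE clause, hence the 1.8 million updates. To touch only the intended ~1500 rows, correlate the subquery to the outer table. A sketch, assuming message has an entity_id column matching user_info.entity_id (adjust the join column to your schema):
UPDATE message
SET message.archived_at = TO_CHAR(systimestamp-31, 'DD-MON-YY HH.MI.SS')
WHERE EXISTS (SELECT 1
              FROM user_info
              WHERE user_info.entity_id = message.entity_id   -- assumed join column
                AND UPPER(user_info.directory_auth_id) LIKE 'JLOADUSER1001%')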

Related

GBQ - Execute INSERT query ONLY if SELECT query returns results

This question has been asked before in many forms. But none of the solutions proposed worked for my case.
I am using GBQ.
I have this table:
Hour Orders
2022-01-12T00:00:00 12
2022-01-12T01:00:00 8
2022-01-12T02:00:00 9
I want to create a query to insert data into this table automatically per hour, under these conditions:
If the "most recent hour" that I want to insert already exists, I do not want to insert it twice.
I tried the following SQL query:
IF EXISTS (SELECT 1 FROM `Table` WHERE Hour = var_most_recent_hour)
UPDATE `Table` SET Orders = var_most_recent_orders WHERE Hour = var_most_recent_hour
ELSE
INSERT INTO `Table` (Hour, Orders) VALUES (var_most_recent_hour, var_most_recent_orders)
This syntax is returning an error in GBQ, although the SQL syntax is usually accepted.
Is there a way to do this?
My priority is to insert without duplicates.
I don't care about the UPDATE part in my query.
Ideally I want something like (I know this syntax does not exist):
IF NOT EXISTS (SELECT 1 FROM `Table` WHERE Hour = var_most_recent_hour)
INSERT INTO `Table` (Hour, Orders) VALUES (var_most_recent_hour, var_most_recent_orders)
Thank you
Try the sample code below:
DECLARE most_rcnt_hour DATETIME;
DECLARE most_rcnt_orders INT64;
-- ... set both variables from your source query ...
-- Insert only when no row for that hour exists yet (anti-join)
INSERT INTO dataset.targettable (Hour, Orders)
SELECT S.h, S.o
FROM (SELECT most_rcnt_hour AS h, most_rcnt_orders AS o) AS S
LEFT JOIN dataset.targettable AS T ON T.Hour = S.h
WHERE T.Hour IS NULL;
Note that IF in BigQuery works differently: scripting blocks need THEN and END IF (IF condition THEN statements; END IF;), which is why the T-SQL-style attempt in the question errors out.
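Since the stated priority is inserting without duplicates, BigQuery's MERGE also covers both branches (the insert and the optional update) in one statement; a sketch, assuming the `Table` name and variables from the question:
MERGE `Table` T
USING (SELECT var_most_recent_hour AS Hour, var_most_recent_orders AS Orders) S
ON T.Hour = S.Hour
WHEN MATCHED THEN
  UPDATE SET Orders = S.Orders
WHEN NOT MATCHED THEN
  INSERT (Hour, Orders) VALUES (S.Hour, S.Orders);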

Proper way to make hundreds of database updates?

I have a function that each minute queries Google BigQuery, and I want to use BigQuery results to update rows in another relational database.
I was trying to do something like this:
BEGIN TRANSACTION
UPDATE MyTable SET MyField = BigQueryResult_Row1_MyField WHERE a_id = BigQueryResult_Row1_a_id
UPDATE MyTable SET MyField = BigQueryResult_Row2_MyField WHERE a_id = BigQueryResult_Row2_a_id
UPDATE MyTable SET MyField = BigQueryResult_Row3_MyField WHERE a_id = BigQueryResult_Row3_a_id
.
.
.
UPDATE MyTable SET MyField = BigQueryResult_RowN_MyField WHERE a_id = BigQueryResult_RowN_a_id
COMMIT TRANSACTION
That is one UPDATE statement per BigQuery row. I can't do a single UPDATE combined with a SELECT, because the data comes from BigQuery, not from another table in the same database.
Trying to execute this transaction, I get a timeout error, so I want to ask: is this a proper way to do hundreds of updates at a time? There could even be thousands of updates at a time in some cases. How can I do that in a better way?
Renat's answer is fine. But a more colloquial way of writing uses values():
UPDATE t
SET MyField = v.MyField
FROM MyTable t JOIN
(VALUES ('BigQueryResult_Row1_MyField', 'BigQueryResult_Row1_a_id'),
('BigQueryResult_Row2_MyField', 'BigQueryResult_Row2_a_id'),
('BigQueryResult_Row3_MyField', 'BigQueryResult_Row3_a_id'),
('BigQueryResult_RowN_MyField', 'BigQueryResult_RowN_a_id')
) v(MyField, a_id)
ON v.a_id = t.a_id;
In general, using UNION when you intend UNION ALL is a bad idea -- because UNION incurs overhead for removing duplicates. In this case, a table constructor is a simpler solution anyway.
Does a query like this work without timeout (using a temp table, as suggested by @JohnHC)?
BEGIN TRANSACTION
;WITH UpdateTempTable AS (
SELECT 'BigQueryResult_Row1_MyField' As MyField, 'BigQueryResult_Row1_a_id' AS a_id
UNION SELECT 'BigQueryResult_Row2_MyField', 'BigQueryResult_Row2_a_id'
UNION SELECT 'BigQueryResult_Row3_MyField', 'BigQueryResult_Row3_a_id'
UNION SELECT 'BigQueryResult_RowN_MyField', 'BigQueryResult_RowN_a_id'
) UPDATE MyTable
SET MyTable.MyField = UpdateTempTable.MyField
FROM MyTable
JOIN UpdateTempTable ON UpdateTempTable.a_id = MyTable.a_id;
COMMIT TRANSACTION
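For reference, @JohnHC's temp-table suggestion might look like the following in SQL Server; the temp table name and column types here are assumptions for illustration:
-- Stage the BigQuery rows once, then update with a single set-based join
CREATE TABLE #BQResults (a_id varchar(64) PRIMARY KEY, MyField varchar(64));
INSERT INTO #BQResults (a_id, MyField) VALUES
  ('BigQueryResult_Row1_a_id', 'BigQueryResult_Row1_MyField'),
  ('BigQueryResult_Row2_a_id', 'BigQueryResult_Row2_MyField');
UPDATE t
SET t.MyField = s.MyField
FROM MyTable t
JOIN #BQResults s ON s.a_id = t.a_id;
DROP TABLE #BQResults;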

Finding max value of multiple columns from multiple tables to update Sequence

I had a problem where the DBAs needed to recreate my sequence (it had to be created with "NO CACHE"). Unfortunately, the sequence was dropped before the current value was grabbed! The problem is, from what I can tell, there are almost 25 tables that use this sequence. My plan was to find the max value of each of the primary key "ID" fields, then run a loop to bring the sequence back up to that value.
What I'm hoping to do now, is clean up my "ugly" process for a more streamlined process that I can put in my documentation (in the event this occurs again!).
My original solution was do something like the following:
SELECT 'TABLE_1','TABLE_1_ID', MAX(TABLE_1_ID) from TABLE_1
UNION ALL
SELECT 'TABLE_2','TABLE_2_ID', MAX(TABLE_2_ID) from TABLE_2
UNION ALL
SELECT 'TABLE_3','TABLE_3_ID', MAX(TABLE_3_ID) from TABLE_3
UNION ALL
...... (continue select statements for other 20+ tables)
SELECT 'TABLE_25','TABLE_25_ID', MAX(TABLE_25_ID) from TABLE_25
ORDER BY 3 DESC;
This works, putting the table with the highest "MAX" at the top; but to clean it up I'd like to:
1. Simplify the query (and eliminate the UNION ALL) if possible
2. Run a query that returns just a single row
This would be 'gravy', but I have a loop that will run through the next val of the sequence; that loop starts off with:
declare
  COL_MaxVal  pls_integer;
  SEQ_Currval pls_integer default -1;
begin
  select max(TABLE_X_ID) into COL_MaxVal
    from TABLE_X;
  while SEQ_Currval < COL_MaxVal
  loop
    select My_Sequence_SEQ.nextval into SEQ_Currval
      from dual;
  end loop;
end;
If possible, I'd really like to just run the loop script which would discover which table/column has the highest max value, then use that table in the loop to increment the sequence to that max value.
Appreciate any help on this.
Here is a solution returning one row:
WITH all_data as
(
  SELECT 'TABLE_1' as table_name, 'TABLE_1_ID' as column_name, MAX(TABLE_1_ID) as id from TABLE_1
  UNION ALL
  SELECT 'TABLE_2', 'TABLE_2_ID', MAX(TABLE_2_ID) from TABLE_2
  UNION ALL
  SELECT 'TABLE_3', 'TABLE_3_ID', MAX(TABLE_3_ID) from TABLE_3
  UNION ALL
  ...... (continue select statements for the other 20+ tables)
  SELECT 'TABLE_25', 'TABLE_25_ID', MAX(TABLE_25_ID) from TABLE_25
),
max_id as
(
  SELECT max(id) as id FROM all_data
)
SELECT ad.*
FROM all_data ad
JOIN max_id mi ON (ad.id = mi.id)
I cannot see any simpler solution for this...
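To get the 'gravy' part, the same inline view can feed COL_MaxVal directly, so whichever table holds the highest ID drives the sequence bump; a sketch, with the table list elided as above:
declare
  COL_MaxVal  pls_integer;
  SEQ_Currval pls_integer default -1;
begin
  -- highest ID across every table that uses the sequence
  select max(id) into COL_MaxVal
    from ( SELECT MAX(TABLE_1_ID) as id from TABLE_1
           UNION ALL
           SELECT MAX(TABLE_2_ID) from TABLE_2
           UNION ALL
           ...... (continue for the other tables)
           SELECT MAX(TABLE_25_ID) from TABLE_25 );
  while SEQ_Currval < COL_MaxVal
  loop
    select My_Sequence_SEQ.nextval into SEQ_Currval from dual;
  end loop;
end;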
If it's not too late, the DBA might try a flashback query against the dictionary. E.g.
SELECT * FROM dba_sequences AS OF TIMESTAMP systimestamp - 1/24;
Your safe value should be last_number+cache size. See details in:
LAST_NUMBER on oracle sequence
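Hypothetically, with the sequence name from the loop above, the safe restart value could be read in one go:
SELECT last_number + cache_size AS safe_value
FROM dba_sequences AS OF TIMESTAMP systimestamp - 1/24
WHERE sequence_name = 'MY_SEQUENCE_SEQ';  -- assumed sequence name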

SQL - Delete selected row/s from database

I'm quite new to SQL and I'm having issues with deleting a selected row/s from a table.
I've written a query that selects the desired rows from the table, but when I try to execute DELETE FROM table_name WHERE EXISTS, it deletes all the rows in the table.
Here is my complete query:
DELETE FROM USR_PREF WHERE EXISTS (
SELECT *
FROM USR_PREF
WHERE USR_PREF.USR_ID = 1
AND ((USR_PREF.SRV NOT IN (SELECT SEC_ENTITY_FOR_USR_ACTION_VIEW.ENTITYT_ID
FROM SEC_ENTITY_FOR_USR_ACTION_VIEW
WHERE SEC_ENTITY_FOR_USR_ACTION_VIEW.USR_ID = 1
AND SEC_ENTITY_FOR_USR_ACTION_VIEW.ENTITYTYP_CODE = 2
AND USR_PREF.DEVICE IS NULL)
OR (USR_PREF.DEVICE NOT IN (SELECT SEC_ENTITY_FOR_USR_ACTION_VIEW.ENTITYT_ID
FROM SEC_ENTITY_FOR_USR_ACTION_VIEW
WHERE SEC_ENTITY_FOR_USR_ACTION_VIEW.USR_ID = 1
AND SEC_ENTITY_FOR_USR_ACTION_VIEW.ENTITYTYP_CODE = 3)))))
The select query returns the desired rows, but the DELETE command just deletes that entire table.
Please assist.
Your where clause WHERE EXISTS (SOME QUERY) is the problem here. You are basically saying "Delete everything if this subquery returns even one result".
You need to be more explicit. Perhaps something like:
DELETE FROM USR_PREF
WHERE USR_FIELD IN (
SELECT USR_FIELD
FROM USR_PREF
WHERE USR_PREF.USR_ID = 1
AND ((USR_PREF.SRV NOT IN ...
and so on... With this, only records that match records returned in your subquery will be deleted.
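If USR_PREF has no convenient single-column key to drive the IN list, Oracle's ROWID pseudocolumn works the same way; a sketch of the pattern, with the conditions abbreviated as above:
DELETE FROM USR_PREF
WHERE ROWID IN (
  SELECT p.ROWID
  FROM USR_PREF p
  WHERE p.USR_ID = 1
  AND ((p.SRV NOT IN (...)) OR (p.DEVICE NOT IN (...)))
)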

Writing a single UPDATE statement that prevents duplicates

I've been trying for a few hours (probably more than I needed to) to figure out the best way to write an UPDATE SQL query that will disallow duplicates in the column I am updating.
Meaning, if TableA.ColA already has a record with the name 'TEST1', then when I'm changing another record, I simply can't set ColA to 'TEST1'.
It's pretty easy to simply separate this into a SELECT plus server-layer code that handles the conditional logic:
SELECT ID, NAME FROM TABLEA WHERE NAME = 'TEST1'
IF TableA.recordcount = 0 THEN
UPDATE TABLEA SET NAME = 'TEST1' WHERE ID = 1234
END IF
But I'm more interested to see if these two queries can be combined into a single query.
I am using Oracle to figure things out, but I'd love to see a SQL Server query as well. I figured a MERGE statement could work, but for obvious reasons you can't have a clause like:
..etc.. WHEN NOT MATCHED UPDATE SET ..etc.. WHERE ID = 1234
AND you can't update a column if it's mentioned in the join (an Oracle limitation that SQL Server doesn't share).
ALSO, I know you can put a constraint on a column that prevents duplicate values, but I'd be interested to see if there is a query that can do this without using a constraint.
Here is an example attempt on my end, just to see what I could come up with (explanations of why it failed are not necessary). It fails with ORA-01732: data manipulation operation not legal on this view:
UPDATE (
SELECT d.NAME, a.NAME FROM (
SELECT 'test1' AS NAME, '2722' AS ID
FROM DUAL
) d
LEFT JOIN TABLEA a
ON UPPER(a.name) = UPPER(d.name)
)
SET a.name = 'test2'
WHERE a.name is null and a.id = d.id
I have tried MERGE, but just gave up, thinking it's not possible. I've also considered NOT EXISTS (but I'd have to be careful, since I might accidentally update every other record that doesn't match the criteria).
It should be straightforward:
update personnel
set personnel_number = 'xyz'
where person_id = 1001
and not exists (select * from personnel where personnel_number = 'xyz');
If I understand correctly, you want to conditionally update a field, only when the new value is not already in use. The following query does this. It should work in both SQL Server and Oracle:
update table1
set name = 'Test1'
where (select count(*) from table1 where name = 'Test1') = 0 and
id = 1234
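As a quick sanity check of either version, with a hypothetical two-row table, the guarded UPDATE becomes a no-op when the name is taken:
-- assume id=1001 has name 'xyz' and id=1234 has name 'abc'
update table1 set name = 'xyz'
where (select count(*) from table1 where name = 'xyz') = 0
and id = 1234;   -- 0 rows updated: 'xyz' already exists
update table1 set name = 'def'
where (select count(*) from table1 where name = 'def') = 0
and id = 1234;   -- 1 row updated: 'def' is free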