Excessively long execution time for an UPDATE query - SQL

I'm trying to update a table with a query that executes in ~5 seconds on PostgreSQL and Oracle but takes far too long on Firebird 2.5.
UPDATE GoodsCatUnit SET isDisplay=1
WHERE Id In (SELECT Min(gcu.Id) FROM GoodsCatUnit gcu GROUP BY gcu.GoodsCat_Id);
The GoodsCatUnit table has ~34k rows, and updating just the first 200 takes 15 seconds.

Try writing this using a correlated subquery and defining an index.
The query is:
UPDATE GoodsCatUnit gcu
SET isDisplay = 1
WHERE gcu.id = (SELECT MIN(gcu2.id)
                FROM GoodsCatUnit gcu2
                WHERE gcu2.GoodsCat_Id = gcu.GoodsCat_Id
               ) AND
      gcu.isDisplay <> 1;
The index is on GoodsCatUnit(GoodsCat_Id, id).
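For reference, such a composite index can be created in Firebird along these lines (the index name here is illustrative, not from the original answer):
-- Composite index on the grouping column plus id, so the correlated
-- MIN(gcu2.id) lookup can be satisfied from the index alone.
CREATE INDEX ix_goodscatunit_cat_id ON GoodsCatUnit (GoodsCat_Id, Id);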

TPC-DS Query 6: Why do we need 'where j.i_category = i.i_category' condition?

I'm going through the TPC-DS benchmark on Amazon Athena.
Everything was fine up to query 5.
I ran into a problem with query 6 (shown below).
select a.ca_state state, count(*) cnt
from customer_address a
,customer c
,store_sales s
,date_dim d
,item i
where a.ca_address_sk = c.c_current_addr_sk
and c.c_customer_sk = s.ss_customer_sk
and s.ss_sold_date_sk = d.d_date_sk
and s.ss_item_sk = i.i_item_sk
and d.d_month_seq =
(select distinct (d_month_seq)
from date_dim
where d_year = 2002
and d_moy = 3 )
and i.i_current_price > 1.2 *
(select avg(j.i_current_price)
from item j
where j.i_category = i.i_category)
group by a.ca_state
having count(*) >= 10
order by cnt, a.ca_state
limit 100;
It ran for more than 30 minutes and failed with a timeout.
I tried to find which part caused the problem, so I went through the WHERE conditions and found where j.i_category = i.i_category in the last part of the WHERE clause.
I didn't understand why this condition is needed, so I removed it and the query ran OK.
Can you tell me why this part is needed?
The j.i_category = i.i_category predicate is the subquery's correlation condition.
If you remove it from the subquery
select avg(j.i_current_price)
from item j
where j.i_category = i.i_category
the subquery becomes uncorrelated: it turns into a global aggregation over the item table, which is cheap to calculate and only needs to be computed once.
With the correlation in place, the average is taken per category, so each item row is compared against the average price of its own category rather than the average over the whole table.
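If the per-category comparison is required but the correlated form is too slow for the engine, one common rewrite (a sketch of a decorrelation technique, not part of the original answer or the official TPC-DS text) is to precompute the per-category averages once in a derived table and join against it:
-- Sketch: per-category averages are computed once and joined,
-- instead of re-running the aggregate for every outer row.
select a.ca_state state, count(*) cnt
from customer_address a
join customer c on a.ca_address_sk = c.c_current_addr_sk
join store_sales s on c.c_customer_sk = s.ss_customer_sk
join date_dim d on s.ss_sold_date_sk = d.d_date_sk
join item i on s.ss_item_sk = i.i_item_sk
join (select i_category, avg(i_current_price) as avg_price
      from item
      group by i_category) cat_avg
  on cat_avg.i_category = i.i_category
where d.d_month_seq = (select distinct d_month_seq
                       from date_dim
                       where d_year = 2002
                         and d_moy = 3)
  and i.i_current_price > 1.2 * cat_avg.avg_price
group by a.ca_state
having count(*) >= 10
order by cnt, a.ca_state
limit 100;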
If you want a fast, performant query engine on AWS, I can recommend Starburst Presto (disclaimer: I am from Starburst). See https://www.concurrencylabs.com/blog/starburst-presto-vs-aws-redshift/ for a related comparison (note: this is not a comparison with Athena).
If it doesn't have to be that fast, you can use PrestoSQL on EMR (note that the "PrestoSQL" and "Presto" components on EMR are not the same thing).

Oracle SQL Update statement with value generated in subquery

I am trying to write an UPDATE statement that sets a column to a value calculated in a subquery, and I'm having limited success.
The statement I've tried so far is:
update intuit.men_doc doc1
set doc1.doc_udf5 = (select substr(doc.doc_dtyc, instr(doc.doc_dtyc, 'GAPP-', 2) + 5) || '_' ||
                            row_number() over (partition by doc.doc_dtyc order by doc.doc_cred) docDeleteId
                     from intuit.men_doc doc
                     where doc.doc_dtyc != 'DM-GAPP-SFUL'
                       and doc.doc_dtyc like 'DM-GAPP%'
                       and doc.doc_cred >= '01/Oct/2017' and doc.doc_cred < '01/Oct/2018'
                       and doc1.doc_code = doc.doc_code
                    )
which gives me the following error message:
ERROR: Error 1427 was encountered whilst running the SQL command. (-3)
Error -3 running SQL : ORA-01427: single-row subquery returns more than one row
I don't have much experience with UPDATE statements, so any advice on how I can rewrite this so that I can update a few thousand records at once would be appreciated.
EDIT: Adding example data
Example data:
MEN_DOC
DOC_CODE   DOC_DTYC   DOC_UDF5   DOC_CRED
123456A    CV                    08/Nov/2017
456789B    CV                    11/Jan/2018
789123C    CV                    15/Feb/2018
123987B    TRAN                  01/Dec/2017
How I want the data to look once the script is run
MEN_DOC
DOC_CODE   DOC_DTYC   DOC_UDF5   DOC_CRED
123456A    CV         CV_1       08/Nov/2017
456789B    CV         CV_2       11/Jan/2018
789123C    CV         CV_3       15/Feb/2018
123987B    TRAN       TRAN_1     01/Dec/2017
Thanks
You are using row_number(), which suggests that you expect the subquery to return more than one row; the inequality filter on doc_dtyc supports this interpretation.
Just change the row_number() to count(*), so you have an aggregation that always returns exactly one row and still produces the sequential count you want:
update intuit.men_doc doc1
set doc1.doc_udf5 = (select max(substr(doc.doc_dtyc, instr(doc.doc_dtyc, 'GAPP-', 2) + 5)) || '_' || count(*) docDeleteId
                     from intuit.men_doc doc
                     where doc.doc_dtyc <> 'DM-GAPP-SFUL' and
                           doc.doc_dtyc like 'DM-GAPP%' and
                           doc.doc_cred >= date '2017-10-01' and
                           doc.doc_cred < date '2018-10-01' and
                           doc1.doc_code = doc.doc_code
                    );
You can use your SELECT as the source table in a MERGE, like this:
merge into men_doc tgt
using (select doc_code,
              doc_dtyc || '_' || row_number() over (partition by doc_dtyc order by doc_cred) as calc
       from men_doc) src
on (tgt.doc_code = src.doc_code)
when matched then update set tgt.doc_udf5 = src.calc;
dbfiddle
I assumed that doc_code is unique.
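If you want to verify that assumption before running the MERGE, a quick check along these lines (a sketch using the table and column names from the question) will list any duplicates:
-- Any rows returned here mean doc_code is not unique, and the MERGE
-- source would need a tighter ON condition.
select doc_code, count(*) as cnt
from men_doc
group by doc_code
having count(*) > 1;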

Increasing the speed of an SQL UPDATE in PostgreSQL

I'm running the two queries below on my database, and I'm trying to figure out how to make them faster.
The first query takes 208796.8 ms, the second one 611654.9 ms. I'm not sure there is a way to speed them up. I need these updates to happen in the same transaction, so I'm also not sure whether updating in batches of n records would be faster (a sketch of what that might look like follows the queries). I'll take any ideas!
UPDATE ticket_memberships AS my_table
SET ticket_id = foreign_table.id
FROM tickets AS foreign_table
WHERE my_table.agency_id = 2
  AND foreign_table.agency_id = 2
  AND my_table.ticket_id IS NOT NULL
  AND my_table.ticket_id = foreign_table.old_id;

UPDATE ticket_memberships AS my_table
SET person_contact_id = foreign_table.id
FROM person_contacts AS foreign_table
WHERE my_table.agency_id = 2
  AND foreign_table.agency_id = 2
  AND my_table.person_contact_id IS NOT NULL
  AND my_table.person_contact_id = foreign_table.old_id;
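For context, the batching mentioned above might look roughly like the sketch below, run repeatedly over successive id ranges inside the transaction; this assumes ticket_memberships has a numeric primary key named id, which is not stated in the question:
-- One batch: restrict the join-update to a slice of the target table,
-- then repeat with the next id range until the table is covered.
UPDATE ticket_memberships AS my_table
SET ticket_id = foreign_table.id
FROM tickets AS foreign_table
WHERE my_table.id BETWEEN 1 AND 10000
  AND my_table.agency_id = 2
  AND foreign_table.agency_id = 2
  AND my_table.ticket_id IS NOT NULL
  AND my_table.ticket_id = foreign_table.old_id;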

Oracle SQL optimization

I have this query:
SELECT sd.sdt_service_type,
       sd.sdt_status,
       count(*) col_count
FROM mci_service_data sd
WHERE sd.sdt_version = 1
  AND sd.sdt_type = 'MMSP'
  AND sd.sdt_status in (?)
  AND (sd.STD_OPERATION_FLAG is null OR sd.STD_OPERATION_FLAG not like 'mark%')
  AND sd.sdt_office_id in (SELECT op.fld_ofs_id
                           FROM mci_ofs_per op
                           WHERE op.fld_per_id = ?)
group by sd.sdt_service_type, sd.sdt_status
There are already indexes on
mci_service_data(sdt_type, sdt_version, sdt_status, sdt_office_id)
and mci_ofs_per(fld_per_id, fld_ofs_id), but this query still takes more than 10 seconds!
So, how can this query be optimized to run faster?
For this query, I would recommend the following indexes:
mci_service_data(sdt_type, sdt_version, sdt_status, sdt_office_id)
mci_ofs_per(fld_per_id, fld_ofs_id)
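Expressed as DDL, those recommendations correspond to statements roughly like the ones below (the index names are illustrative, and the question indicates equivalent indexes may already exist):
-- Index covering the filter columns on the fact table.
CREATE INDEX idx_sdt_type_ver_stat_office ON mci_service_data (sdt_type, sdt_version, sdt_status, sdt_office_id);
-- Index supporting the IN subquery lookup by person id.
CREATE INDEX idx_ofs_per_id ON mci_ofs_per (fld_per_id, fld_ofs_id);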

Update SQL query by comparing two tables

How do I update a column in SQL by comparing two tables? This might be a duplicate question, but I still cannot solve my problem. Any help would be appreciated.
What I've tried so far (it produces an error):
UPDATE b SET b.STAMP = b.STAMP + 10 FROM TB_FWORKERSCH b, TB_FWORKERCN a
WHERE a.ISSDATE >= '20150401' AND a.UKEY = b.UKEY2 AND b.STAMP = 0 AND b.IG_SUMINS != 0
DB2 Database
DB2 doesn't allow a JOIN or a FROM clause in an UPDATE statement (this is also not part of the SQL standard).
You can achieve what you want with a correlated subquery:
UPDATE tb_fworkersch b
SET stamp = stamp + 10
WHERE EXISTS (SELECT 1
              FROM tb_fworkercn a
              WHERE a.issdate >= '20150401'
                AND a.ukey = b.ukey2)
  AND b.stamp = 0
  AND b.ig_sumins <> 0
Try this:
MERGE INTO TB_FWORKERSCH b
USING TB_FWORKERCN a
   ON (a.UKEY = b.UKEY2
       AND a.ISSDATE >= '20150401' AND b.STAMP = 0 AND b.IG_SUMINS <> 0)
WHEN MATCHED THEN
  UPDATE SET b.STAMP = b.STAMP + 10;