Magento: Lock wait timeout exceeded on sales_flat_order_grid - SQL

I have a Magento site with a few extensions. The main one is a gift card extension for unique codes. We are running a promotion right now with 800K codes, so it is creating huge traffic. The problem is that it is now creating ghost orders: after taking payment, at the last moment when the order must be registered from a reserved order to a confirmed one, it shows a table lock error.
The exact error is:
SQLSTATE[HY000]: General error: 1205 Lock wait timeout exceeded; try restarting transaction, query was: INSERT INTO sales_flat_order_grid (entity_id, status, store_id, customer_id, base_grand_total, base_total_paid, grand_total, total_paid, increment_id, base_currency_code, order_currency_code, store_name, created_at, updated_at, billing_name, shipping_name) SELECT main_table.entity_id, main_table.status, main_table.store_id, main_table.customer_id, main_table.base_grand_total, main_table.base_total_paid, main_table.grand_total, main_table.total_paid, main_table.increment_id, main_table.base_currency_code, main_table.order_currency_code, main_table.store_name, main_table.created_at, main_table.updated_at, CONCAT(IFNULL(table_billing_name.firstname, ''), ' ', IFNULL(table_billing_name.middlename, ''), ' ', IFNULL(table_billing_name.lastname, '')) AS billing_name, CONCAT(IFNULL(table_shipping_name.firstname, ''), ' ', IFNULL(table_shipping_name.middlename, ''), ' ', IFNULL(table_shipping_name.lastname, '')) AS shipping_name FROM sales_flat_order AS main_table LEFT JOIN sales_flat_order_address AS table_billing_name ON main_table.billing_address_id=table_billing_name.entity_id LEFT JOIN sales_flat_order_address AS table_shipping_name ON main_table.shipping_address_id=table_shipping_name.entity_id WHERE (main_table.entity_id IN('140650')) ON DUPLICATE KEY UPDATE entity_id = VALUES(entity_id), status = VALUES(status), store_id = VALUES(store_id), customer_id = VALUES(customer_id), base_grand_total = VALUES(base_grand_total), base_total_paid = VALUES(base_total_paid), grand_total = VALUES(grand_total), total_paid = VALUES(total_paid), increment_id = VALUES(increment_id), base_currency_code = VALUES(base_currency_code), order_currency_code = VALUES(order_currency_code), store_name = VALUES(store_name), created_at = VALUES(created_at), updated_at = VALUES(updated_at), billing_name = VALUES(billing_name), shipping_name = VALUES(shipping_name)
There seems to be no reference except entity_id 140650 for sales_flat_order_grid.
If anyone has any idea, please let me know a possible solution.

This issue may be related to a bug in earlier versions of Magento. It used to be that the INSERT into sales_flat_order_grid occurred within a transaction, and because of the query Magento uses to populate it, the query planner could not figure out which rows to lock, so it locked the entire sales_flat_order_grid table. And because that happens within a transaction, the lock is retained until COMMIT.
If this is what is causing your problem, you will need to move the order grid calculation to commit_after.
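While the timeout is occurring, a diagnostic query like the one below can show which transaction holds the blocking lock. This is only a sketch and assumes MySQL 5.5-5.7, where the information_schema lock views are available (they were removed in 8.0):
SELECT r.trx_mysql_thread_id AS waiting_thread,
       r.trx_query           AS waiting_query,
       b.trx_mysql_thread_id AS blocking_thread,
       b.trx_query           AS blocking_query
FROM   information_schema.innodb_lock_waits w
JOIN   information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id
JOIN   information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id;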

OK, I am answering my own question in case someone else finds it helpful.
Solution: run an INSERT for the entity_ids missing from the grid. Take the query above and replace 140650 with main_table.entity_id NOT IN (SELECT sales_flat_order_grid.entity_id FROM sales_flat_order_grid), and that will solve it. This works because both tables share the same entity_id values.
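For reference, a quick way to list the orders that are missing from the grid before running the backfill (assuming the default Magento 1 table names):
SELECT o.entity_id
FROM   sales_flat_order AS o
LEFT JOIN sales_flat_order_grid AS g ON g.entity_id = o.entity_id
WHERE  g.entity_id IS NULL;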
I also came to know: enable the slow query log so you can see which queries are taking long (they are the ones creating the deadlocks, as the DB server's resources get tied up there). As soon as I found such a query and resolved the third-party extension's slow query problem, there were no more table lock errors.
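For completeness, the slow query log can be switched on at runtime roughly like this; a sketch only, as the variables need the SUPER privilege and the file path is just an example:
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 2;   -- log statements slower than 2 seconds
SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';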


Build a query that returns only sessions that have only errors?

I have a table with session event names. Each session can have 3 different types of events.
There are sessions that have only error-type events, and I need to identify them by getting a list of those sessions.
I tried the following code:
SELECT
test.SessionId, SS.RequestId
FROM
(SELECT DISTINCT
SSE.SessionId,
SSE.type,
COUNT(SSE.SessionId) OVER (ORDER BY SSE.SessionId, SSE.type) AS total_XSESIONID_TYPE,
COUNT(SSE.SessionId) OVER (ORDER BY SSE.SessionId) AS total_XSESIONID
FROM
[CMstg].SessionEvents SSE
-- WHERE SSE.SessionId IN ('fa3ed523-60f9-4af0-a85f-1dec9e9d2cdb' )
) AS test
WHERE
test.total_XSESIONID_TYPE = test.total_XSESIONID
AND test.type = 'Errors'
-- AND test.SessionId IN ('fa3ed523-60f9-4af0-a85f-1dec9e9d2cdb' )
Each session can have more than one type, and I need to count only the sessions that have only the type 'Errors'. I don't want to include sessions that have additional types of events in the count.
While I'm running the first query I get a count of 3 error events per session, but when running the whole procedure the number is multiplied to 90?
Sample table:
sessionID                            | type
-------------------------------------|---------
fa3ed523-60f9-4af0-a85f-1dec9e9d2cdb | Errors
fa3ed523-60f9-4af0-a85f-1dec9e9d2cdb | Errors
fa3ed523-60f9-4af0-a85f-1dec9e9d2cdb | Errors
00c896a0-dccc-41bf-8dff-a5cd6856bb76 | NonError
00c896a0-dccc-41bf-8dff-a5cd6856bb76 | Errors
00c896a0-dccc-41bf-8dff-a5cd6856bb76 | Errors
00c896a0-dccc-41bf-8dff-a5cd6856bb76 | Errors
In this case I should get
sessionid = fa3ed523-60f9-4af0-a85f-1dec9e9d2cdb
Please advise - hope this is clearer now, thanks!
It's been a long time but I think something like this should get you the desired results:
SELECT SessionId
FROM <TableName> -- replace with actual table name
GROUP BY SessionId
HAVING COUNT(*) = COUNT(CASE WHEN type = 'Errors' THEN 1 END)
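An equivalent way to express the same check, shown as a sketch with the table and column names taken from the question (keep only sessions that have zero non-error rows):
SELECT SessionId
FROM   [CMstg].SessionEvents
GROUP BY SessionId
HAVING SUM(CASE WHEN type <> 'Errors' THEN 1 ELSE 0 END) = 0;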
And a pro tip: When asking sql-server questions, it's best to follow these guidelines
SELECT *
FROM [CMstg].SessionEvents
WHERE type != 'Errors'
Is that what you wanted to do?

Using a SQL query - How to identify the attributes that an OIM request has updated + OIM 11g R2 PS3

We extend contractor term dates in OIM to 80 days, but sometimes they get extended by admins/managers to more than 80 days. When that happens, OIM creates a request id. Now, we would like to know all the users whose term date is more than 80 days from the day they got extended (the request creation date).
Is there a way to get, in a SQL query, the details of the users and the request creation date for the change that happened on the termination date attribute, so that we can create a BI report?
Since I have a request id that was created yesterday, I am using it for developing the query. I tried the query below by joining the usr, request and request_beneficiary tables, but it doesn't return anything. Are there any other tables I need to use to accomplish this use case?
-- Even try with specific requestid req3.request_id=123456
-- Tried with the request id's beneficiary key too.
SELECT
req3.request_key rk,
usr2.usr_login buid,
usr2.usr_status,
req3.request_creation_date,
req3.request_model_name,
to_char(usr2.usr_udf_terminationdate, 'MM-DD-YYYY') AS Terminationdate
FROM
request req3,
request_beneficiary reqb1,
usr usr2
WHERE
req3.request_key = reqb1.request_key
AND beneficiary_key = usr2.usr_key
and usr2.usr_status = 'Active'
AND usr2.usr_emp_type IN ( 'Contractor');
If anyone has done this type of use case, can you please provide your inputs?
Appreciate your inputs and suggestions.
Thanks in advance.
I'm sure you've already figured this out, but here is some SQL that should get you to the data you need.
SELECT r.request_key rk,
R.Request_Creation_Date,
Red.Entity_Field_Name,
Red.Entity_Field_Value,
usr_status,
usr_end_date,
usr_udf_terminationdate
FROM request r
INNER JOIN Request_Entities re
ON R.Request_Key = re.request_key
INNER JOIN Request_Entity_data red
ON re.request_entity_key = red.request_entity_key
INNER JOIN usr
ON Re.Entity_Key = usr.usr_key
WHERE request_model_name = 'Modify User Profile';
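Building on that query, here is a hedged sketch of how the 80-day check could be added. The LIKE filter on Entity_Field_Name is an assumption (the stored label for the termination-date attribute depends on your form metadata), and the date arithmetic assumes both columns are DATE values:
SELECT r.request_key,
       r.request_creation_date,
       usr.usr_login,
       usr.usr_status,
       TO_CHAR(usr.usr_udf_terminationdate, 'MM-DD-YYYY') AS terminationdate
FROM   request r
       INNER JOIN request_entities re ON re.request_key = r.request_key
       INNER JOIN request_entity_data red ON red.request_entity_key = re.request_entity_key
       INNER JOIN usr ON usr.usr_key = re.entity_key
WHERE  r.request_model_name = 'Modify User Profile'
       AND LOWER(red.entity_field_name) LIKE '%termination%'          -- assumption: the field label contains "termination"
       AND usr.usr_udf_terminationdate > r.request_creation_date + 80;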

Google BigQuery Internal Error

Edit: Tidied up the query a bit. Checked running on one day (versus the 27 I need) and the query runs. With 27 days of data it's trying to process 5.67TB. Could this be the issue?
Latest ID of error run:
Job ID: ee-corporate:bquijob_3f47d425_1530e03af64
I keep getting this error message when trying to run a query in BigQuery, both through the UI and bigrquery.
Query Failed
Error: An internal error occurred and the request could not be completed.
Job ID: ee-corporate:bquijob_6b9bac2e_1530dba312e
Code below:
SELECT
CASE WHEN d.category_grouped IS NULL THEN 'N/A' ELSE d.category_grouped END AS category_grouped_cleaned,
COUNT(UNIQUE(msisdn_token)) AS users,
(SUM(up_link_data_bytes) + SUM(down_link_data_bytes))/1000000 AS tot_data_mb
FROM (
SELECT
request_domain, up_link_data_bytes, down_link_data_bytes, msisdn_token, timestamp
FROM (TABLE_DATE_RANGE([helpful-skyline-97216:WEBLOG_Staging.WEBLOG_], TIMESTAMP('20160101'), TIMESTAMP('20160127')))
WHERE SUBSTR(http_status_code,1,1) IN ('1',
'2',
'3')) a
LEFT JOIN EACH web_usage_201601.domain_to_cat_lookup_27JAN_with_groups d
ON
a.request_domain = d.request_domain
WHERE
DATE(timestamp) >= '2016-01-01'
AND DATE(timestamp) <= '2016-01-27'
GROUP EACH BY
1
Is there something I'm doing wrong?
The problem seems to be coming from UNIQUE() - it returns a repeated field with too many elements in it. The error message could be improved, but the workaround for you would be to use an explicit GROUP BY and then run COUNT on top of it.
If you are okay with an approximation, you can also use
COUNT(DISTINCT msisdn_token) AS users
or a higher approximation threshold than the default of 1000:
COUNT(DISTINCT msisdn_token, 5000) AS users
GROUP BY is the most general approach, but these can be faster if they do what you need.
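For illustration, here is a sketch of the explicit-GROUP-BY workaround in legacy SQL, shown for the user count only; the byte totals from the original query would have to be summed separately (or before the de-duplication step), since de-duplicating by msisdn_token discards them:
SELECT
  category_grouped_cleaned,
  COUNT(msisdn_token) AS users
FROM (
  SELECT
    CASE WHEN d.category_grouped IS NULL THEN 'N/A' ELSE d.category_grouped END AS category_grouped_cleaned,
    a.msisdn_token AS msisdn_token
  FROM (
    SELECT request_domain, msisdn_token, timestamp
    FROM (TABLE_DATE_RANGE([helpful-skyline-97216:WEBLOG_Staging.WEBLOG_],
                           TIMESTAMP('20160101'), TIMESTAMP('20160127')))
    WHERE SUBSTR(http_status_code,1,1) IN ('1','2','3')
  ) a
  LEFT JOIN EACH web_usage_201601.domain_to_cat_lookup_27JAN_with_groups d
    ON a.request_domain = d.request_domain
  WHERE DATE(a.timestamp) >= '2016-01-01'
    AND DATE(a.timestamp) <= '2016-01-27'
  GROUP EACH BY 1, 2
)
GROUP EACH BY 1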

Operation must be an updatable query - Access

I'm writing a database and I simply want to update tblSchedule with the ItemNo from tblStock, but I get an error when trying to run this:
Operation must be an updatable query
I can't seem to figure out why it's not working.
UPDATE [tblSchedule]
SET [tblSchedule].ItemNo =
(SELECT DISTINCT Item
FROM [tblStock], [tblSchedule]
WHERE [tblStock].Bookcode=[tblSchedule].[PartCode]
)
Any help would really be appreciated
You are missing a closing bracket in your SQL.
UPDATE [tblSchedule] Set
[tblSchedule].ItemNo = (
SELECT DISTINCT Item
FROM [tblStock], [tblSchedule -- Missing closing bracket
WHERE ((([tblStock].Bookcode)=[tblSchedule].[PartCode]))
)
Try closing the bracket on tblSchedule.
I do not have an Access database to test this on for you, though.
My guess is your inner SELECT is returning 2 records instead of one.
You can do this to validate.
SELECT Items.ItemNo, count(*) total
FROM
(
SELECT DISTINCT Sc.ItemNo, St.Item
FROM
[tblSchedule] Sc INNER JOIN
[tblStock] St ON Sc.PartCode = St.Bookcode
) as Items
GROUP BY Items.ItemNo
HAVING count(*) > 1;
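Another option worth trying is to rewrite the statement as a join-based UPDATE, which Access usually treats as updatable. This is only a sketch and assumes each PartCode matches at most one Bookcode/Item in tblStock:
UPDATE tblSchedule INNER JOIN tblStock
ON tblSchedule.PartCode = tblStock.Bookcode
SET tblSchedule.ItemNo = tblStock.Item;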
Due to the simplicity of what I wanted, I've gone down the DLookup route, which works successfully.
UPDATE [tblSchedule], [tblStock] SET [tblSchedule].ItemNo = DLookUp("Item","[tblStock]","[tblStock].Bookcode='" & [tblSchedule].[PartCode] & "'")
WHERE (([tblStock].[Bookcode]=[tblSchedule].[PartCode]));
It's probably not the best method, but due to the small number of records it updates (252) it works perfectly without any noticeable time delay.
Thanks Again!
Chris

ORA-00937: not a single-group group function PL/SQL issue

Firstly, I know ORA-00937 is a common issue, with an obvious answer, but I have yet to find any results that could point to a possible solution.
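For context, the usual trigger for this error is an aggregate mixed with a non-aggregated column and no matching GROUP BY; a one-line illustration, using one of the tables from the query below purely for reference:
SELECT episode_id, MAX (encounter_date)
FROM temp_art_visit;   -- raises ORA-00937: not a single-group group function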
Quick spec:
National TB/HIV report, based on patient medical records/encounters/visits and drugs provided. This is only a tiny portion of the report, which loops over all patient drugs and calculates most of its figures from date calculations; we do not store historic/aggregated data, everything is aggregated when requested. I mention this because I expect a few suggestions to move away from GTTs and rather use MVIEWs - I hear you, but no, not a solution.
Here is my problem. This is one of the queries populating a GTT, within a function, which stores aggregated results. I have structured my data collection in such a way as to reduce server load, as the medical table exceeds 12 million records (each patient has 3 by default).
Here is the GTT
CREATE GLOBAL TEMPORARY TABLE EKAPAII.TEMP_ART_VISIT_MEDS
(
EPISODE_ID NUMBER,
LAST_MEDS_DATE DATE
)
ON COMMIT DELETE ROWS
RESULT_CACHE (MODE DEFAULT)
NOCACHE;
CREATE UNIQUE INDEX EKAPAII.TEMP_ART_VISIT_MEDS_PK ON EKAPAII.TEMP_ART_VISIT_MEDS
(EPISODE_ID);
ALTER TABLE EKAPAII.TEMP_ART_VISIT_MEDS ADD (
CONSTRAINT TEMP_ART_VISIT_MEDS_PK
PRIMARY KEY
(EPISODE_ID)
USING INDEX EKAPAII.TEMP_ART_VISIT_MEDS_PK
ENABLE VALIDATE);
And my simple insert query
INSERT INTO temp_art_visit_meds (EPISODE_ID, LAST_MEDS_DATE)
SELECT episode_id, encounter_date + number_of_days
FROM ( SELECT enc_meds.episode_id,
MAX (enc_meds.encounter_date) encounter_date,
MAX (
CASE
WHEN (NVL (meds.number_of_days, 0) > 150)
THEN
90
ELSE
NVL (meds.number_of_days, 0)
END)
number_of_days
FROM temp_art_visit enc_meds,
vd_medication meds,
dl_drugs_episode_class dlc,
( SELECT latest_meds_visit.episode_id,
MAX (latest_meds_visit.encounter_date)
encounter_date
FROM temp_art_visit latest_meds_visit,
vd_medication latest_meds,
dl_drugs_episode_class dc
WHERE latest_meds_visit.encounter_id =
latest_meds.encounter_id
AND latest_meds.drug_id = dc.drug_id
AND dc.sd_drug_application_id = 8401
GROUP BY latest_meds_visit.episode_id) latest_meds
WHERE enc_meds.encounter_id = meds.encounter_id
AND enc_meds.episode_id = latest_meds.episode_id
AND enc_meds.encounter_date =
latest_meds.encounter_date
AND meds.drug_id = dlc.drug_id
AND dlc.sd_drug_application_id = 8401
AND meds.active_flag = 'Y'
GROUP BY enc_meds.episode_id);
Now to my error, ORA-00937: not a single-group group function. If I run this query in a normal editor window it works, but I get ORA-00937 when executing the SELECT query in the package body itself; calling the function does not return any error, even though I have an exception block to handle any errors.
Any help will do. I do understand that this error could occur only at runtime and not at compile time? Or is it the fact that I am running the query from the PL/SQL block?
Toad for Oracle version 12.5 - in all its glory. (sarcasm)
Again, pardon me if this has already been asked/answered.
EDIT - SOLUTION
So, after a few hours of troubleshooting, I was able to understand why this error was being generated. Firstly, the fixed query:
INSERT INTO temp_art_visit_meds (EPISODE_ID, LAST_MEDS_DATE)
SELECT enc_meds.episode_id ,
TRUNC( MAX (enc_meds.encounter_date)) + MAX (CASE WHEN (NVL (meds.number_of_days, 0) > 150) THEN 90 ELSE NVL (meds.number_of_days, 0) END) last_meds_date
FROM temp_art_visit enc_meds,
vd_medication meds,
dl_drugs_episode_class dlc,
( SELECT latest_meds_visit.episode_id,
MAX (latest_meds_visit.encounter_date) encounter_date
FROM temp_art_visit latest_meds_visit,
vd_medication latest_meds,dl_drugs_episode_class dc
WHERE latest_meds_visit.encounter_id = latest_meds.encounter_id
AND latest_meds.drug_id = dc.drug_id
AND dc.sd_drug_application_id = 8401
GROUP BY latest_meds_visit.episode_id) latest_meds
WHERE enc_meds.encounter_id = meds.encounter_id
AND enc_meds.episode_id = latest_meds.episode_id
AND enc_meds.encounter_date = latest_meds.encounter_date
AND meds.drug_id = dlc.drug_id
AND dlc.sd_drug_application_id = 8401
AND meds.active_flag = 'Y'
GROUP BY enc_meds.episode_id, meds.number_of_days, enc_meds.encounter_date;
It would appear that the problem was due to the number of sub-queries; I attempted to use different optimizer hints to no avail. If you look closely at the first query, you will notice I am basically aggregating results from aggregated results, so the columns encounter_date, episode_id and number_of_days are no longer 'available'. So even if I added the appropriate GROUP BY clause to my last (outer) subquery, Oracle would not be able to group on those column names/identifiers.
I am not sure why this would fail only in a PACKAGE BODY, and why it did not return any SQLERRM or SQLCODE when executed.
Happy days.