I use DataTables with the SSP class, and I have a query that returns about 2000 rows.
But when I try to run the query I get a 500 or 504 error (with a 504, the table doesn't load).
I use OVH CloudDB with MariaDB 10.2, and the server runs PHP 7.2.
My joinQuery looks like:
FROM data_platforms p
LEFT JOIN game_platforms gp ON gp.platform_id = p.platform_id
LEFT JOIN games g ON g.game_id = gp.game_id
LEFT JOIN tastings t ON t.tasting_game_id = g.game_id
LEFT JOIN notes n ON n.note_game_id = g.game_id
LEFT JOIN ratings r ON r.rating_game_id = g.game_id
LEFT JOIN images i ON i.image_type_id = g.game_id AND (i.image_type = 2 || i.image_type = 1)
LEFT JOIN game_generes gg ON gg.game_id = g.game_id
LEFT JOIN generes gen ON gen.id = gg.genere_id
And the extraWhere:
p.platform_id = '.$platformID.' AND g.game_status = 1
And the groupBy:
gp.game_id
Is there any way to optimize this query?
Am I doomed at this point, or should I use a different SSP class?
Don't use LEFT JOIN unless you really expect the matching row in the 'right' table to be missing.
Don't "over-normalize". For example, I would expect genre to simply be a column, not a many-to-many mapping table plus a genre table.
(i.image_type = 2 || i.image_type = 1) --> i.image_type IN (1,2) might optimize better.
See this for a likely improvement in many-to-many table indexes: http://mysql.rjweb.org/doc.php/index_cookbook_mysql#many_to_many_mapping_table
Please provide SHOW CREATE TABLE so we can check other indexes, such as on platform_id.
Let's see the entire query, plus EXPLAIN SELECT .... Also, GROUP BY gp.game_id may be invalid (unless every other selected column is functionally dependent on it).
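A sketch of the two suggestions above applied to the question's schema (the index names are illustrative, and the ALTER TABLE assumes the mapping table does not already have a suitable primary key):

LEFT JOIN images i ON i.image_type_id = g.game_id AND i.image_type IN (1, 2)

-- Index layout for the many-to-many mapping table, per the cookbook link above;
-- adjust the names to the actual schema, and skip the PRIMARY KEY line if one already exists.
ALTER TABLE game_platforms
    ADD PRIMARY KEY (platform_id, game_id),
    ADD INDEX idx_game_platform (game_id, platform_id);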
I have the following two queries that are almost identical, except that the second query contains an extra table join and WHERE condition and has fewer FAXDEPTs (the commented-out portion is included in Query 2; I shared just the one query to make it easier to read).
I want to join the two queries by FAXDEPT and have the results look something like this:
FAXDEPT | TOTAL DOCUMENTS (Query1) | TOTAL PAGES (Query1) | TOTAL DOCUMENTS (Query2) | TOTAL PAGES (Query2)
Query 1 contains more FAXDEPTs than Query 2. Is there a way I can display "0"s for Query 2's TOTAL PAGES and TOTAL DOCUMENTS?
I assumed I would need some sort of FULL OUTER JOIN but can't seem to get it to work. I'm not sure whether my use of aliases is part of the issue. I appreciate all the help in advance!
select sq.faxdept 'Fax Department', count(sq.docid)'Total Documents', sum(sq.pages)'Total Pages'
from
(
select idp.id as docid, (MAX(idp.pagenum)+1) as pages, ki178.keyvaluechar as faxdept
from DOCDATA dd
left join FAXDEPT ki178 on dd.id = ki178.id
left join IMPORTSOURCE ki228 on dd.id = ki228.id
left join BATCHINFO kgd105 on dd.id = kgd105.id
left join PAGEDATA idp on dd.id = idp.id
--left join QUEUE ilc on dd.id = ilc.id
where
dd.datestored > '10/07/2021' and dd.status = 0
--and ilc.num = 252
and (kgd105.kg128 like 'DISC DIP%SJIN%' or ki228.keyvaluechar like 'SJIN' )
group by idp.id, ki178.keyvaluechar
)
as sq
group by sq.faxdept
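Since Query 1 contains every FAXDEPT, a plain LEFT JOIN between the two aggregated results would be enough, with COALESCE supplying the zeros. A minimal sketch of that shape (Query 2's body is not shown above, so it is left as a placeholder here):

select q1.faxdept                 'Fax Department',
       q1.totaldocs               'Total Documents (Query1)',
       q1.totalpages              'Total Pages (Query1)',
       coalesce(q2.totaldocs, 0)  'Total Documents (Query2)',
       coalesce(q2.totalpages, 0) 'Total Pages (Query2)'
from
(
    -- Query 1 from above, aggregated per department
    select sq.faxdept, count(sq.docid) as totaldocs, sum(sq.pages) as totalpages
    from ( /* inner query from Query 1 above */ ) as sq
    group by sq.faxdept
) as q1
left join
(
    -- Query 2, aggregated the same way (placeholder; its text was not shared)
    select sq.faxdept, count(sq.docid) as totaldocs, sum(sq.pages) as totalpages
    from ( /* inner query from Query 2 */ ) as sq
    group by sq.faxdept
) as q2
    on q2.faxdept = q1.faxdept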
This runs in constant time:
SELECT row_number() OVER (order by PackagingUniqueId) as RowNum, Barcode, pu.PackagingUniqueId,
rd.Name, pu.ComponentBarcode, rrl.ponum, rrl.mfgpart, rrl.new_lot_code, rrl.pno
FROM Trace.dbo.TraceData td
INNER JOIN Trace.dbo.TraceJob tj ON td.Id = tj.TraceDataId
INNER JOIN Trace.dbo.Job j ON tj.JobId = j.Id
INNER JOIN Trace.dbo.[Order] o ON j.OrderId = o.id
INNER JOIN Trace.dbo.PCBBarcode p ON td.PCBBarcodeId = p.Id
INNER JOIN Trace.dbo.TracePlacement tp ON td.Id = tp.TraceDataId
INNER JOIN Trace.dbo.Placement p2 ON p2.PlacementGroupId = tp.PlacementGroupId
INNER JOIN Trace.dbo.Charge c ON p2.ChargeId = c.Id
INNER JOIN Trace.dbo.PackagingUnit pu ON c.PackagingUnitId = pu.Id
INNER JOIN Trace.dbo.RefDesignator rd ON p2.RefDesignatorId = rd.Id
INNER JOIN SpotlightSQL.spot_light_dbo.peel_off_ids po ON po.peel_off_id = pu.PackagingUniqueId
INNER JOIN SpotlightSQL.spot_light_dbo.recv_receipts_log rrl ON rrl.label_id = po.label_id
WHERE p.Barcode = '20092619153'
However, this one takes about 7 seconds:
SELECT * FROM Component WHERE Barcode = '20092619153'
Component is a SQL view consisting of the first, longer query without the WHERE clause.
Why does this happen? Does the view retrieve all records and then apply the WHERE clause? Is there a way to speed up the second query (without adding indexes)?
Why does this happen? Does the view retrieve all records and then apply the WHERE clause?
Yes, in this particular case, SQL Server will first execute the original underlying query, and then apply a WHERE filter on top of that intermediate result.
Is there a way to speed up the second query (without adding indexes)?
A SQL view generally performs as well as the underlying query. So, if Barcode is a good way to filter off many records, then adding an index to Barcode is the way to go. Other than this, there is not much you can do to speed up the view.
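For example, something along these lines (the index name is illustrative; PCBBarcode.Barcode is the column the view is filtered on in the query above):

-- Illustrative only: index the filtered column so the barcode lookup becomes a seek.
CREATE NONCLUSTERED INDEX IX_PCBBarcode_Barcode
    ON Trace.dbo.PCBBarcode (Barcode);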
One option would be to create a materialized view, which is basically just a table whose data is generated by your view's query. Selecting all records from a materialized view, with no additional restrictions, should have a speed limited only by the time of data transfer.
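In SQL Server the closest built-in equivalent is an indexed view. A minimal sketch of the pattern follows; the names are illustrative, and the actual Component view would need restructuring first, since indexed views require SCHEMABINDING, two-part names, and no cross-database joins:

-- Sketch only: a simplified, schema-bound view over two of the tables above.
CREATE VIEW dbo.ComponentCore
WITH SCHEMABINDING
AS
SELECT td.Id AS TraceDataId, p.Barcode
FROM dbo.TraceData td
INNER JOIN dbo.PCBBarcode p ON td.PCBBarcodeId = p.Id;
GO

-- The unique clustered index is what materializes the view's rows
-- (assumes td.Id is unique in the view's output).
CREATE UNIQUE CLUSTERED INDEX IX_ComponentCore
    ON dbo.ComponentCore (TraceDataId);
GO

-- A nonclustered index then makes the barcode lookup a seek.
CREATE NONCLUSTERED INDEX IX_ComponentCore_Barcode
    ON dbo.ComponentCore (Barcode);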
How can I improve the performance of this query? The INNER JOIN to the second table, CustomerAccountBrand,
is taking a long time (about 4:30 minutes). I have added a non-clustered index, but it is not being used. Should I split this into two separate INNER JOIN queries and then combine the results? Any help getting this data would be appreciated.
SELECT DISTINCT
RA.AccountNumber,
RA.ShipTo,
RA.SystemCode,
CAB.BrandCode
FROM dbo.CustomerAccountRelatedAccounts RA -- Views
INNER JOIN dbo.CustomerAccount CA
ON RA.RelatedAccountNumber = CA.AccountNumber
AND RA.RelatedShipTo = CA.ShipTo
AND RA.RelatedSystemCode = CA.SystemCode
INNER JOIN dbo.CustomerAccountBrand CAB ---- Taking long time 4:30 mins
ON CA.AccountNumber = CAB.AccountNumber
AND CA.ShipTo = CAB.ShipTo
AND CA.SystemCode = CAB.SystemCode
ALTER VIEW [dbo].[CustomerAccountRelatedAccounts]
AS
SELECT
ca.AccountNumber, ca.ShipTo, ca.SystemCode, cafg.AccountNumber AS RelatedAccountNumber, cafg.ShipTo AS RelatedShipTo,
cafg.SystemCode AS RelatedSystemCode
FROM dbo.CustomerAccount AS ca
LEFT OUTER JOIN dbo.CustomerAccount AS cafg
ON ca.FinancialGroup = cafg.FinancialGroup
AND ca.NationalAccount = cafg.NationalAccount
AND cafg.IsActive = 1
WHERE CA.IsActive = 1
In my experience, the SQL Server query optimizer often fails to pick the right join algorithm when queries become more complex (e.g. joining with your view means there's no index readily available to join on). If that's what's happening here, the easy fix is to add a join hint to turn it into a hash join:
SELECT DISTINCT
RA.AccountNumber,
RA.ShipTo,
RA.SystemCode,
CAB.BrandCode
FROM dbo.CustomerAccountRelatedAccounts RA -- Views
INNER JOIN dbo.CustomerAccount CA
ON RA.RelatedAccountNumber = CA.AccountNumber
AND RA.RelatedShipTo = CA.ShipTo
AND RA.RelatedSystemCode = CA.SystemCode
INNER HASH JOIN dbo.CustomerAccountBrand CAB ---- Note the "HASH" keyword
ON CA.AccountNumber = CAB.AccountNumber
AND CA.ShipTo = CAB.ShipTo
AND CA.SystemCode = CAB.SystemCode
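An alternative, if you'd rather not embed the hint in the join syntax (which also fixes the join order as written), is the query-level hint OPTION (HASH JOIN); note that it forces every join in the statement to use a hash join:

SELECT DISTINCT
    RA.AccountNumber,
    RA.ShipTo,
    RA.SystemCode,
    CAB.BrandCode
FROM dbo.CustomerAccountRelatedAccounts RA
INNER JOIN dbo.CustomerAccount CA
    ON RA.RelatedAccountNumber = CA.AccountNumber
    AND RA.RelatedShipTo = CA.ShipTo
    AND RA.RelatedSystemCode = CA.SystemCode
INNER JOIN dbo.CustomerAccountBrand CAB
    ON CA.AccountNumber = CAB.AccountNumber
    AND CA.ShipTo = CAB.ShipTo
    AND CA.SystemCode = CAB.SystemCode
OPTION (HASH JOIN)   -- query-level hint: every join in this statement uses hash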
I need to improve this query inside a stored procedure; with lots of data its performance breaks the application. Is there any way to make it faster?
I need to collect certain columns from several tables to build a dashboard in my web app; other columns I collect in another query and join in my controller through classes.
EDIT: this query is part of a dynamic transaction, so the database can change.
SELECT
a.fact_num as cotizacion, a.comentario, m.co_cli, k.cli_des,
m.co_ven, l.ven_des, m.fec_emis, m.fec_venc, m.campo8,
a.reng_num, a.co_art, g.art_des, a.co_alma, b.fact_num as pedido,
c.fact_num as factura, d.fact_num as despacho, e.cob_num as cobro,
f.fec_venc as fecha_venc, f.fec_emis as fecha_pedido,
h.odp_num as ord_produccion, h.co_ced as cedula, i.req_num as requisicion,
j.ent_num as cierre
FROM
reng_cac a
LEFT JOIN
cotiz_c m ON a.fact_num = m.fact_num
LEFT JOIN
reng_ped b ON a.co_art = b.co_art AND a.fact_num = b.num_doc AND b.tipo_doc = 'T'
LEFT JOIN
pedidos f ON b.fact_num = f.fact_num
LEFT JOIN
reng_fac c ON b.fact_num = c.num_doc AND a.co_art = c.co_art AND c.tipo_doc = 'P'
LEFT JOIN
reng_ndd d ON c.fact_num = d.num_doc AND a.co_art = d.co_art AND d.tipo_doc = 'F'
LEFT JOIN
reng_cob e ON c.fact_num = e.doc_num AND e.tp_doc_cob = 'FACT'
LEFT JOIN
art g ON a.co_art = g.co_art
LEFT JOIN
spodp h ON b.fact_num = h.doc_ori AND b.co_art = h.co_art
LEFT JOIN
spreqalm i ON h.odp_num = i.odp_num
LEFT JOIN
spcierre j ON h.odp_num = j.odp_num
LEFT JOIN
clientes k ON m.co_cli = k.co_cli
LEFT JOIN
vendedor l ON m.co_ven = l.co_ven
WHERE
a.fact_num BETWEEN '0' AND '999999999'
AND m.fec_emis BETWEEN '01/01/2012' AND '30/06/2012'
AND m.co_cli BETWEEN '' AND 'þþþþþþþþþþþþþþþþþþþþþþþþþþþþþþ'
ORDER BY
a.fact_num, a.reng_num ASC
As Nick commented, you should always look at your query's execution plan. There are some points to consider. For instance, having a lot of LEFT JOINs will reduce performance, so see whether you can use INNER JOINs where possible. Check that you have proper indexes. If you attach your execution plan to your question, you will get more helpful answers.
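One concrete case from the query above: the WHERE clause filters on m.fec_emis and m.co_cli, which discards any row where the LEFT JOIN to cotiz_c found no match, so that join is effectively an inner join and can be declared as one:

FROM reng_cac a
INNER JOIN cotiz_c m          -- was LEFT JOIN; the WHERE clause already requires a match on m
    ON a.fact_num = m.fact_num
-- ... the remaining joins stay unchanged ...
WHERE a.fact_num BETWEEN '0' AND '999999999'
  AND m.fec_emis BETWEEN '01/01/2012' AND '30/06/2012'
  AND m.co_cli BETWEEN '' AND 'þþþþþþþþþþþþþþþþþþþþþþþþþþþþþþ'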
I am trying to execute the following SQL query, but it takes 22 seconds to run. The number of returned rows is 554,192. I need to make this faster, and I have already put indexes on all the tables involved.
SELECT mc.name AS MediaName,
lcc.name AS Country,
i.overridedate AS Date,
oi.rating,
bl1.firstname + ' ' + bl1.surname AS Byline,
b.id BatchNo,
i.numinbatch ItemNumberInBatch,
bah.changedatutc AS BatchDate,
pri.code AS IssueNo,
pri.name AS Issue,
lm.neptunemessageid AS MessageNo,
lmt.name AS MessageType,
bl2.firstname + ' ' + bl2.surname AS SourceFullName,
lst.name AS SourceTypeDesc
FROM profiles P
INNER JOIN profileresults PR
ON P.id = PR.profileid
INNER JOIN items i
ON PR.itemid = I.id
INNER JOIN batches b
ON b.id = i.batchid
INNER JOIN itemorganisations oi
ON i.id = oi.itemid
INNER JOIN lookup_mediachannels mc
ON i.mediachannelid = mc.id
LEFT OUTER JOIN lookup_cities lc
ON lc.id = mc.cityid
LEFT OUTER JOIN lookup_countries lcc
ON lcc.id = mc.countryid
LEFT OUTER JOIN itembylines ib
ON ib.itemid = i.id
LEFT OUTER JOIN bylines bl1
ON bl1.id = ib.bylineid
LEFT OUTER JOIN batchactionhistory bah
ON b.id = bah.batchid
INNER JOIN itemorganisationissues ioi
ON ioi.itemorganisationid = oi.id
INNER JOIN projectissues pri
ON pri.id = ioi.issueid
LEFT OUTER JOIN itemorganisationmessages iom
ON iom.itemorganisationid = oi.id
LEFT OUTER JOIN lookup_messages lm
ON iom.messageid = lm.id
LEFT OUTER JOIN lookup_messagetypes lmt
ON lmt.id = lm.messagetypeid
LEFT OUTER JOIN itemorganisationsources ios
ON ios.itemorganisationid = oi.id
LEFT OUTER JOIN bylines bl2
ON bl2.id = ios.bylineid
LEFT OUTER JOIN lookup_sourcetypes lst
ON lst.id = ios.sourcetypeid
WHERE p.id = #profileID
AND b.statusid IN ( 6, 7 )
AND bah.batchactionid = 6
AND i.statusid = 2
AND i.isrelevant = 1
When looking at the execution plan, I can see a step that accounts for 42% of the cost. Is there any way I could bring that down, or otherwise improve the performance of the whole query?
Remove the profiles table as it is not needed and change the WHERE clause to
WHERE PR.profileid = #profileID
You have a left outer join on the batchactionhistory table, but you also have a condition in your WHERE clause which turns it back into an inner join. Change your code to this:
LEFT OUTER JOIN batchactionhistory bah
ON b.id = bah.batchid
AND bah.batchactionid = 6
You don't need the batches table: it is only used to join other tables, which could be joined directly, and to show an id in your SELECT which is also available in other tables. Make the following changes:
i.batchid AS BatchNo,
LEFT OUTER JOIN batchactionhistory bah
    ON i.batchid = bah.batchid
Are any of the fields used in the joins or the WHERE clause from tables that contain large amounts of data but are not indexed? If so, try adding an index one at a time, starting with the largest table.
Do you need every field in the result? If you could lose one or two, you might be able to reduce the number of tables further.
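For example, if items is the largest table, an index along these lines might help (the column choice is illustrative only; verify it against the execution plan):

-- Illustrative only: cover the filter columns from the WHERE clause and include
-- the columns the later joins and SELECT need, so the plan can seek rather than scan.
CREATE NONCLUSTERED INDEX IX_items_status_relevant
    ON items (statusid, isrelevant, batchid)
    INCLUDE (mediachannelid, overridedate, numinbatch);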
First, if this is not a stored procedure, make it one. That's a lot of text for SQL Server to compile.
Next, my experience is that "worst practices" are occasionally a good idea. Specifically, I have been able to improve performance by splitting large queries into two or three small ones and assembling the results.
If this query is associated with a .NET, ColdFusion, Java, etc. application, you might be able to do the split/reassemble in your application code. If not, a temporary table might come in handy.
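A rough sketch of that split-and-reassemble idea using a temporary table (the column list is trimmed, and @profileID stands in for the parameter written as #profileID in the question):

-- Stage the heavily filtered core rows first...
SELECT i.id AS itemid, i.batchid, i.mediachannelid, i.overridedate, i.numinbatch
INTO #relevantitems
FROM profileresults PR
INNER JOIN items i   ON PR.itemid = i.id
INNER JOIN batches b ON b.id = i.batchid
WHERE PR.profileid = @profileID
  AND b.statusid IN (6, 7)
  AND i.statusid = 2
  AND i.isrelevant = 1;

-- ...then run the remaining lookups and outer joins against the much smaller
-- intermediate set instead of the full items/batches tables, e.g.:
SELECT ri.*, mc.name AS MediaName
FROM #relevantitems ri
INNER JOIN lookup_mediachannels mc ON ri.mediachannelid = mc.id;
-- (add the other joins from the original query here)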