I wrote this SQL query:
SELECT *
FROM
dbo.RDB_LOG_ITEM,
(
SELECT '000' + CAST(operatore as varchar) + cast(scontrino as varchar) search
FROM
(
SELECT
N0_XACT_NO scontrino,
N0_OPERATOR_NO operatore
FROM
dbo.RDB_SCALE_ITEM
WHERE
BL_RECORD_EXPLODED = 0 AND
N0_COUNTER_NO = 1 AND
DT_TIME_STAMP LIKE '20160526%'
) db
) db2
WHERE
DT_TIME_STAMP > '2016-05-26T00:00:00.000' AND
SZ_SCALE_LABEL LIKE db2.search + '%'
But this query takes 3+ seconds to execute. The result of this query is a single row, and the db2 select returns only 7 rows.
I think that when I use FROM data1, db2, SQL does a cross join (data1 is a big table with something like 300k+ rows), which slows the process.
If I write the query with the second select's result hard-coded, I get the result in 0.01 sec, like this:
select * from data1 where DT_TIME_STAMP > '2016-05-26T00:00:00.000' and SZ_SCALE_LABEL like '0001013530%'
How can I use db2 without joining it against the whole other table?
Edit:
The subquery:
(
SELECT '000' + CAST(operatore as varchar) + cast(scontrino as varchar) search
FROM
(
SELECT
N0_XACT_NO scontrino,
N0_OPERATOR_NO operatore
FROM
dbo.RDB_SCALE_ITEM
WHERE
BL_RECORD_EXPLODED = 0 AND
N0_COUNTER_NO = 1 AND
DT_TIME_STAMP LIKE '20160526%'
) db
) db2
gives me X rows like
0001013530
0001013531
0001013532
0001013533
0001013534
What I need is a query like this:
select *
from dbo.RDB_LOG_ITEM
where DT_TIME_STAMP > '2016-05-26T00:00:00.000'
and (SZ_SCALE_LABEL like '0001013530%'
or SZ_SCALE_LABEL like '0001013531%'
or SZ_SCALE_LABEL like '0001013532%'
or SZ_SCALE_LABEL like '0001013533%'
or SZ_SCALE_LABEL like '0001013534%')
I think it's something close to the subquery IN (http://www.dofactory.com/sql/subquery), but with LIKE.
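Something like this, maybe - a rough, untested sketch of the EXISTS-with-LIKE pattern I mean (assuming SQL Server; the VARCHAR sizes are guesses):
SELECT li.*
FROM dbo.RDB_LOG_ITEM li
WHERE li.DT_TIME_STAMP > '2016-05-26T00:00:00.000'
AND EXISTS (
    SELECT 1
    FROM dbo.RDB_SCALE_ITEM si
    WHERE si.BL_RECORD_EXPLODED = 0
    AND si.N0_COUNTER_NO = 1
    AND si.DT_TIME_STAMP LIKE '20160526%'
    -- same computed prefix the db2 subquery builds
    AND li.SZ_SCALE_LABEL LIKE '000' + CAST(si.N0_OPERATOR_NO AS VARCHAR(10)) + CAST(si.N0_XACT_NO AS VARCHAR(10)) + '%'
)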
PS: sorry for the incomplete post, but I was at work and they were kicking me out at closing time :-)
First, you should be using explicit JOINs. Your FROM clause should not have more than one comma-separated table in it (in this case, considering the subquery as a "table").
Second, there is no need for a subquery here at all. You need to get out of the habit of leaning on subqueries and think in a set-based way when writing SQL.
Finally, use your aliases to prefix all of your columns in a query. It's impossible to tell where some of these columns are coming from without prefixing them.
I believe that this will get you the same results and will ideally be done in a performant way. If it's not, then you will need to post the full table structures (including indexes) as well as the query plans for anyone to be able to help you with performance issues.
SELECT
LI.dt_time_stamp,
LI.sz_scale_label,
<list other columns here, because we **never** use SELECT *>
FROM
dbo.RDB_LOG_ITEM LI
INNER JOIN dbo.RDB_SCALE_ITEM SI ON
SI.bl_record_exploded = 0 AND
SI.n0_counter_no = 1 AND
SI.dt_time_stamp LIKE '20160526%' AND -- You should be using date datatypes for your date columns
LI.sz_scale_label LIKE '000' + CAST(SI.n0_operator_no AS VARCHAR(20)) + CAST(SI.n0_xact_no AS VARCHAR(20)) + '%' -- I guessed at appropriate VARCHAR sizes, which you should know
WHERE
LI.dt_time_stamp > '2016-05-26T00:00:00.000'
How much data do your tables have? If they hold huge amounts of data, your execution time will be worse without a join. I had a similar experience: using a join reduced my execution time significantly.
Related
The query structure: a helper select in the WITH clause selects the most recent entry using TOP 1 transaction_date, then does many joins. It takes too much time to run - what am I doing wrong?
CREATE VIEW [IRWSMCMaterialization].[FactInventoryItemOnHandDailyView] AS
WITH TempTBLFactIvnItmDaily AS (
SELECT TOP 20
ITEM_NUMBER AS [InventoryItemNumber]
,CAST(FORMAT(TRANSACTION_DATE, 'yyyyMMdd') AS INT) AS [DateKey]
,BRANCH_PLANT_FHK AS [BranchPlantKey]
,BRANCH_PLANT_CODE AS [BranchPlantCode]
,CAST(QUANTITY_ON_HAND AS BIGINT) AS [QuantityOnHand]
,TRANSACTION_DATE AS [Date]
,WAREHOUSE_LOCATION_FHK AS [WarehouseLocationKey]
,WAREHOUSE_LOCATION_CODE AS [WarehouseLocationCode]
,WAREHOUSE_LOT_NUMBER_CODE AS [WarehouseLotNumber]
,WAREHOUSE_LOT_NUMBER_FHK AS [WarehouseLotNumberKey]
,UNIT_OF_MEASURE AS [UnitOfMeasureName]
,UNIT_OF_MEASURE_PHK AS [UnitOfMeasureKey]
FROM dbo.RS_INV_ITEM_ON_HAND
-- below is where clause, choose only most recent entry
WHERE TRANSACTION_DATE = (SELECT TOP 1 TRANSACTION_DATE FROM dbo.RS_INV_ITEM_ON_HAND ORDER BY TRANSACTION_DATE DESC)
)
SELECT [InventoryItemNumber],
[DateKey],
[Date],
[BranchPlantCode] AS [BP],
[WarehouseLocationCode] AS [Location],
[QuantityOnHand],
[UnitOfMeasureName] AS [UoM],
CASE [WarehouseLotNumber]
WHEN 'Not Assigned' THEN NULL
ELSE [WarehouseLotNumber]
END
AS [Lot]
FROM TempTBLFactIvnItmDaily iioh
JOIN DWH.DimBranchPlant bp ON iioh.BranchPlantKey = bp.BRANCH_PLANT_PHK
JOIN DWH.DimWarehouseLocation wloc ON iioh.WarehouseLocationKey = wloc.WAREHOUSE_LOCATION_PHK
JOIN DWH.DimWarehouseLotNumber wlot ON iioh.WarehouseLotNumberKey = wlot.WarehouseLotNumber_PHK
JOIN DWH.DimUnitOfMeasure uom ON CAST(iioh.UnitOfMeasureKey AS VARCHAR(100)) = uom.UNIT_OF_MEASURE_PHK
where bp.BRANCH_PLANT_CODE = '96100'
AND iioh.QuantityOnHand > 0
AND (wloc.WAREHOUSE_LOCATION_CODE like '6000W01%' OR wloc.WAREHOUSE_LOCATION_CODE like 'BL%')
GO
There are a lot of things here that do not seem right. First of all, your base query can be a lot simpler. Something like this:
SELECT iioh.ITEM_NUMBER AS [InventoryItemNumber],
CAST(FORMAT(iioh.TRANSACTION_DATE, 'yyyyMMdd') AS INT) AS [DateKey],
iioh.TRANSACTION_DATE AS [Date],
iioh.BRANCH_PLANT_CODE AS [BP],
iioh.WAREHOUSE_LOCATION_CODE AS [Location],
CAST(iioh.QUANTITY_ON_HAND AS BIGINT) AS [QuantityOnHand],
iioh.UNIT_OF_MEASURE AS [UoM],
NULLIF(iioh.WAREHOUSE_LOT_NUMBER_CODE, 'Not Assigned') AS [Lot]
FROM dbo.RS_INV_ITEM_ON_HAND iioh
JOIN DWH.DimBranchPlant bp
ON iioh.BRANCH_PLANT_FHK = bp.BRANCH_PLANT_PHK
JOIN DWH.DimWarehouseLocation wloc
ON iioh.WAREHOUSE_LOCATION_FHK = wloc.WAREHOUSE_LOCATION_PHK
JOIN DWH.DimUnitOfMeasure uom
ON CAST(iioh.UNIT_OF_MEASURE_PHK AS VARCHAR(100)) = uom.UNIT_OF_MEASURE_PHK
where bp.BRANCH_PLANT_CODE = '96100'
AND iioh.QUANTITY_ON_HAND > 0
AND (wloc.WAREHOUSE_LOCATION_CODE like '6000W01%' OR wloc.WAREHOUSE_LOCATION_CODE like 'BL%')
AND iioh.TRANSACTION_DATE = @TRANSACTION_DATE
For example, you are joining DWH.DimWarehouseLotNumber, but you are not extracting any of its columns - do you really need it? Also, you are selecting other columns which are not returned by the view - why query them?
From there, you are filtering first by date and only then by the other fields, so your first TOP 20 records may be filtered out by the subsequent conditions - is this the behavior you want?
Also, do you really want this cast?
ON CAST(iioh.UNIT_OF_MEASURE_PHK AS VARCHAR(100)) = uom.UNIT_OF_MEASURE_PHK
Performance-wise, it's better to use CONVERT rather than FORMAT. Also, why not save/materialize TRANSACTION_DATE as an INT (for example, using a persisted computed column, or computing it at write time) instead of calculating this value on each read?
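For instance, something like this - a sketch only; the column name DateKey is mine, and it assumes TRANSACTION_DATE is a datetime:
ALTER TABLE dbo.RS_INV_ITEM_ON_HAND
ADD DateKey AS CAST(CONVERT(CHAR(8), TRANSACTION_DATE, 112) AS INT) PERSISTED; -- style 112 = yyyyMMdd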
Filtering by location code using a LIKE clause can hurt performance, too. Why not add a new column, WareHouseLocationCodeType, and set the same value for all locations satisfying this condition:
(wloc.WAREHOUSE_LOCATION_CODE like '6000W01%' OR wloc.WAREHOUSE_LOCATION_CODE like 'BL%')
Then you can filter by this column in the view, since this condition is very important for you. You can also create a filtered index on this column to increase performance further.
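Something like this - the column type, the backfill, and the index name are my assumptions:
ALTER TABLE DWH.DimWarehouseLocation
ADD WareHouseLocationCodeType TINYINT NULL;
GO
-- flag the locations the view cares about
UPDATE DWH.DimWarehouseLocation
SET WareHouseLocationCodeType = 1
WHERE WAREHOUSE_LOCATION_CODE LIKE '6000W01%'
OR WAREHOUSE_LOCATION_CODE LIKE 'BL%';
GO
-- filtered index covering only the flagged rows
CREATE NONCLUSTERED INDEX IX_DimWarehouseLocation_CodeType
ON DWH.DimWarehouseLocation (WareHouseLocationCodeType)
INCLUDE (WAREHOUSE_LOCATION_PHK)
WHERE WareHouseLocationCodeType = 1;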
Also, you may want to create an inline table-valued function instead of a view and pass the date as a parameter:
CREATE OR ALTER FUNCTION [IRWSMCMaterialization].[FactInventoryItemOnHandDailyView]
(
@TRANSACTION_DATE datetime
)
RETURNS TABLE
AS
RETURN
(
SELECT iioh.ITEM_NUMBER AS [InventoryItemNumber],
CAST(FORMAT(iioh.TRANSACTION_DATE, 'yyyyMMdd') AS INT) AS [DateKey],
iioh.TRANSACTION_DATE AS [Date],
iioh.BRANCH_PLANT_CODE AS [BP],
iioh.WAREHOUSE_LOCATION_CODE AS [Location],
CAST(iioh.QUANTITY_ON_HAND AS BIGINT) AS [QuantityOnHand],
iioh.UNIT_OF_MEASURE AS [UoM],
NULLIF(iioh.WAREHOUSE_LOT_NUMBER_CODE, 'Not Assigned') AS [Lot]
,iioh.TRANSACTION_DATE
FROM dbo.RS_INV_ITEM_ON_HAND iioh
JOIN DWH.DimBranchPlant bp
ON iioh.BRANCH_PLANT_FHK = bp.BRANCH_PLANT_PHK
JOIN DWH.DimWarehouseLocation wloc
ON iioh.WAREHOUSE_LOCATION_FHK = wloc.WAREHOUSE_LOCATION_PHK
JOIN DWH.DimUnitOfMeasure uom
ON CAST(iioh.UNIT_OF_MEASURE_PHK AS VARCHAR(100)) = uom.UNIT_OF_MEASURE_PHK
where bp.BRANCH_PLANT_CODE = '96100'
AND iioh.QUANTITY_ON_HAND > 0
AND (wloc.WAREHOUSE_LOCATION_CODE like '6000W01%' OR wloc.WAREHOUSE_LOCATION_CODE like 'BL%')
AND iioh.TRANSACTION_DATE = @TRANSACTION_DATE
)
Then call it like this:
SELECT TOP 20 *
FROM [IRWSMCMaterialization].[FactInventoryItemOnHandDailyView] ('2020-12-04')
ORDER BY TRANSACTION_DATE DESC
Query optimization is a science in itself these days. If you want to find the bottlenecks in your query, you can follow these steps:
As the first step, enable statistics with these commands:
SET STATISTICS TIME ON;
SET STATISTICS IO ON;
Execute these commands in a query window, then execute your query in the same window. When the query finishes, switch to the Messages tab and you will see a lot of useful information: execution TIME, parse and compile time, and, maybe most interesting, the I/O reads.
As the second step, try to understand which table has a lot of reads. For example, if you expect 10 rows from the query but some table shows 10k or 100k logical reads, something is wrong - it means that to produce those 10 rows, execution reads 10k pages from that table. You are probably missing an index on it; try to find out which index you need.
If you have static values in the WHERE clause, like the following, then think about a filtered index:
bp.BRANCH_PLANT_CODE = '96100' AND iioh.QuantityOnHand > 0
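For example, something like this - a sketch; the index name and key column are my guesses:
CREATE NONCLUSTERED INDEX IX_DimBranchPlant_Code96100
ON DWH.DimBranchPlant (BRANCH_PLANT_PHK)
WHERE BRANCH_PLANT_CODE = '96100';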
Not always, but in some cases a conversion can break your indexes. If you cast an indexed column, or apply some other function to it in the WHERE clause like the following, the query optimizer will not use the index during query execution, even though the column is indexed:
CAST(iioh.UnitOfMeasureKey AS VARCHAR(100))
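If the cast really is needed, one workaround is to materialize it once as a computed column and index that, so the function no longer runs per row at join time. A sketch: the column and index names are hypothetical, and this assumes the source column's type makes the cast deterministic (it isn't for datetime types without a style):
ALTER TABLE dbo.RS_INV_ITEM_ON_HAND
ADD UnitOfMeasureKeyChar AS CAST(UNIT_OF_MEASURE_PHK AS VARCHAR(100));
GO
CREATE NONCLUSTERED INDEX IX_RS_INV_ITEM_ON_HAND_UomChar
ON dbo.RS_INV_ITEM_ON_HAND (UnitOfMeasureKeyChar);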
The last one: if you have an OR logical operator in your query, try executing each part of the OR separately and compare the performance. This operator can really kill your query, and this is one example:
AND (wloc.WAREHOUSE_LOCATION_CODE like '6000W01%' OR wloc.WAREHOUSE_LOCATION_CODE like 'BL%')
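If each branch is selective on its own, rewriting the OR as a UNION ALL sometimes lets each branch use its own index seek - a sketch, valid here only because the two LIKE patterns cannot match the same row (one starts with '6', the other with 'B'):
SELECT wloc.WAREHOUSE_LOCATION_PHK, wloc.WAREHOUSE_LOCATION_CODE
FROM DWH.DimWarehouseLocation wloc
WHERE wloc.WAREHOUSE_LOCATION_CODE LIKE '6000W01%'
UNION ALL
SELECT wloc.WAREHOUSE_LOCATION_PHK, wloc.WAREHOUSE_LOCATION_CODE
FROM DWH.DimWarehouseLocation wloc
WHERE wloc.WAREHOUSE_LOCATION_CODE LIKE 'BL%';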
Once you determine that you don't have any issues here, you can dig further.
I have an SQLite database, and I need to perform some arithmetic in my SQL to get the final amount. I am not certain how to achieve this; from what I've been reading, I'm fairly sure it's a subquery I need, grouped by lineID (the unique id in my db).
The fields I want to include in the calculation below are
el.material_cost1 * el.material_qty1
el.material_cost2 * el.material_qty2
el.material_cost3 * el.material_qty3
The query I currently have is below. It returns the value of my costs, apart from the fields above. As I store the material costs individually, I can't work out how to write a subquery within my query to get the desired result.
SELECT sum(el.enquiry_cost1) + sum(el.enquiry_cost2) + sum(el.enquiry_cost3)
FROM estimate e
LEFT JOIN estimate_line el
ON e.estimateID=el.estimateID
WHERE e.projectID=7 AND el.optional='false'
Use of a LEFT JOIN instead of an [INNER] JOIN is pointless, as your WHERE condition filters out any rows that could have differed.
I think you're making this harder than it needs to be. In particular, nothing in your description makes me think you need a subquery. Instead, it looks like this query would be close to what you're after:
SELECT
sum(el.enquiry_cost1)
+ sum(el.enquiry_cost2)
+ sum(el.enquiry_cost3)
+ sum(el.material_cost1 * el.material_qty1)
+ sum(el.material_cost2 * el.material_qty2)
+ sum(el.material_cost3 * el.material_qty3)
AS total_costs
FROM
estimate e
JOIN estimate_line el
ON e.estimateID = el.estimateID
WHERE e.projectID = 7 AND el.optional = 'false'
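One caveat worth noting: in SQLite, if every value feeding one of those SUMs is NULL, that SUM returns NULL and drags the whole total to NULL. If that can happen with your data, a COALESCE guard helps (same query as above, just defensively wrapped):
SELECT
COALESCE(sum(el.enquiry_cost1), 0)
+ COALESCE(sum(el.enquiry_cost2), 0)
+ COALESCE(sum(el.enquiry_cost3), 0)
+ COALESCE(sum(el.material_cost1 * el.material_qty1), 0)
+ COALESCE(sum(el.material_cost2 * el.material_qty2), 0)
+ COALESCE(sum(el.material_cost3 * el.material_qty3), 0)
AS total_costs
FROM estimate e
JOIN estimate_line el ON e.estimateID = el.estimateID
WHERE e.projectID = 7 AND el.optional = 'false';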
I am having an issue with the following query returning results a bit too slowly, and I suspect I am missing something basic. My initial guess is that the CASE expression is taking too long to process against the underlying data, but it could be something in the derived tables as well.
The question is: how can I speed this up? Are there any glaring errors in the way I am pulling the data? Am I running into a sorting or looping issue somewhere? The query runs for about 40 seconds, which seems quite long. C# is my primary expertise; SQL is a work in progress.
Note that I am not asking "write my code" or "fix my code", just for a pointer in the right direction; I can't seem to figure out where the slowdown occurs. Each derived table runs very quickly (less than a second) by itself, the joins seem correct, and the result set is exactly what I need. It's just too slow, and I'm sure there are better SQL scripters out there ;) Any tips would be greatly appreciated!
SELECT
hdr.taker
, hdr.order_no
, hdr.po_no as display_po
, cust.customer_name
, hdr.customer_id
, 'INCORRECT-LARGE ORDER' + CASE
WHEN (ext_price_calc >= 600.01 and ext_price_calc <= 800) and fee_price.unit_price <> round(ext_price_calc * -.01,2)
THEN '-1%: $' + cast(cast(ext_price_calc * -.01 as decimal(18,2)) as varchar(255))
WHEN ext_price_calc >= 800.01 and ext_price_calc <= 1000 and fee_price.unit_price <> round(ext_price_calc * -.02,2)
THEN '-2%: $' + cast(cast(ext_price_calc * -.02 as decimal(18,2)) as varchar(255))
WHEN ext_price_calc > 1000 and fee_price.unit_price <> round(ext_price_calc * -.03,2)
THEN '-3%: $' + cast(cast(ext_price_calc * -.03 as decimal(18,2)) as varchar(255))
ELSE
'OK'
END AS Status
FROM
(myDb_view_oe_hdr hdr
LEFT OUTER JOIN myDb_view_customer cust
ON hdr.customer_id = cust.customer_id)
LEFT OUTER JOIN wpd_view_sales_territory_by_customer territory
ON cust.customer_id = territory.customer_id
LEFT OUTER JOIN
(select
order_no,
SUM(ext_price_calc) as ext_price_calc
from
(select
hdr.order_no,
line.item_id,
(line.qty_ordered - isnull(qty_canceled,0)) * unit_price as ext_price_calc
from myDb_view_oe_hdr hdr
left outer join myDb_view_oe_line line
on hdr.order_no = line.order_no
where
line.delete_flag = 'N'
AND line.cancel_flag = 'N'
AND hdr.projected_order = 'N'
AND hdr.delete_flag = 'N'
AND hdr.cancel_flag = 'N'
AND line.item_id not in ('LARGE-ORDER-1%','LARGE-ORDER-2%', 'LARGE-ORDER-3%', 'FUEL','NET-FUEL', 'CONVENIENCE-FEE')) as line
group by order_no) as order_total
on hdr.order_no = order_total.order_no
LEFT OUTER JOIN
(select
order_no,
count(order_no) as convenience_count
from oe_line with (nolock)
left outer join inv_mast inv with (nolock)
on oe_line.inv_mast_uid = inv.inv_mast_uid
where inv.item_id in ('LARGE-ORDER-1%','LARGE-ORDER-2%', 'LARGE-ORDER-3%')
and oe_line.delete_flag <> 'Y'
group by order_no) as fee_count
on hdr.order_no = fee_count.order_no
INNER JOIN
(select
order_no,
unit_price
from oe_line line with (nolock)
where line.inv_mast_uid in (select inv_mast_uid from inv_mast with (nolock) where item_id in ('LARGE-ORDER-1%','LARGE-ORDER-2%', 'LARGE-ORDER-3%'))) as fee_price
ON fee_count.order_no = fee_price.order_no
WHERE
hdr.projected_order = 'N'
AND hdr.cancel_flag = 'N'
AND hdr.delete_flag = 'N'
AND hdr.completed = 'N'
AND territory.territory_id = 'CUSTOMERTERRITORY'
AND ext_price_calc > 600.00
AND hdr.carrier_id <> '100004'
AND fee_count.convenience_count is not null
AND CASE
WHEN (ext_price_calc >= 600.01 and ext_price_calc <= 800) and fee_price.unit_price <> round(ext_price_calc * -.01,2)
THEN '-1%: $' + cast(cast(ext_price_calc * -.01 as decimal(18,2)) as varchar(255))
WHEN ext_price_calc >= 800.01 and ext_price_calc <= 1000 and fee_price.unit_price <> round(ext_price_calc * -.02,2)
THEN '-2%: $' + cast(cast(ext_price_calc * -.02 as decimal(18,2)) as varchar(255))
WHEN ext_price_calc > 1000 and fee_price.unit_price <> round(ext_price_calc * -.03,2)
THEN '-3%: $' + cast(cast(ext_price_calc * -.03 as decimal(18,2)) as varchar(255))
ELSE
'OK' END <> 'OK'
Just as a clue to the right direction for optimization:
When you do an OUTER JOIN to a query with calculated columns, you are guaranteeing not only a full table scan, but that those calculations must be performed against every row in the joined table. It appears that you can actually do your join to oe_line without the column calculations (i.e. by filtering ext_price_calc to a specific range).
You don't need most of the subqueries that are in your query - the master query can be recrafted to use regular table join syntax. Joins to subqueries containing subqueries present a challenge that the SQL optimizer may not be able to meet. By using regular joins, the optimizer has a much better chance of identifying more efficient query strategies.
You didn't tag which SQL engine you're using. Every database has proprietary extensions that may allow for speedier or more efficient queries. It would be easier to provide useful feedback if you indicated whether you were using MySQL, SQL Server, Oracle, etc.
Regardless of the database you're using, reviewing the query plan is always a good place to start. This will tell you where most of the I/O and time in your query is being spent.
Just on general principle, make sure your statistics are up-to-date.
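On SQL Server, for example, that can be as simple as the following (table names taken from the query above; FULLSCAN is optional but gives the most accurate statistics):
UPDATE STATISTICS oe_line WITH FULLSCAN;
UPDATE STATISTICS inv_mast WITH FULLSCAN;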
It may not be solvable by any of us without the real data to test with.
If that's the case and nobody else posts the answer, I can still help. Here is how to troubleshoot it:
(1) Take joins and pieces out one by one.
(2) This will cause errors. Remove or fake the references to get rid of them.
(3) See how that performs.
(4) Put items back before you try taking something else out.
(5) Keep track...
(6) Also be aware of where a removal might drastically reduce the result set.
You might find you're missing an index or some other smoking gun.
I was having the same problem, and I was able to solve it by indexing one of the tables and setting a primary key.
I strongly suspect that the problem lies in the number of joins you're doing. A lot of databases do joins basically by systematically checking all possible combinations of rows from the various tables - so if you're joining tables A and B on column C, and A looks like:
Name:C
Fred:1
Alice:2
Betty:3
While B looks like:
C:Pet
1:Alligator
2:Lion
3:T-Rex
When you do the join, it checks all 9 possibilities:
Fred:1:1:Alligator
Fred:1:2:Lion
Fred:1:3:T-Rex
Alice:2:1:Alligator
Alice:2:2:Lion
Alice:2:3:T-Rex
Betty:3:1:Alligator
Betty:3:2:Lion
Betty:3:3:T-Rex
And goes through and deletes the non-matching ones:
Fred:1:1:Alligator
Alice:2:2:Lion
Betty:3:3:T-Rex
... which means that with three entries in each table, it creates nine temporary records, sorts through them all, and deletes six of them - all before it actually sorts through the results for what you're after (so if you are looking for Betty's pet, you only want one row in that final result).
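In SQL terms, the toy example above is just the following (A and B standing in for real tables); conceptually, the engine builds the 9-row cross product and keeps only the 3 matching rows:
SELECT A.Name, B.Pet
FROM A
JOIN B ON A.C = B.C;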
... and you're doing how many joins and sub-queries?
What approach should I follow to construct my SQL query if I need to select some data except some other data?
For example, I want to select all the data from the table EXCEPT this result set:
SELECT *
FROM table1
WHERE table1.MarketTYpe = 'EmergingMarkets'
AND IsBigOne = 1
AND MarketVolume = 'MIDDLE'
AND SomeClass = 'ThirdClass'
Should I use
NOT IN (the above result set)
or should I take the INVERSE of the conditions (!= instead of =, etc.)? Or something else? Can you advise?
Use the EXCEPT construct?
SELECT *
FROM table1
EXCEPT
SELECT *
FROM table1
WHERE table1.MarketTYpe = 'EmergingMarkets'
AND IsBigOne = 1
AND MarketVolume = 'MIDDLE'
AND SomeClass = 'ThirdClass'
Note that EXCEPT and NOT EXISTS give the same query plan using "left anti semi joins".
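For reference, the NOT EXISTS form would look roughly like this - a sketch, where id is a hypothetical primary key since the question doesn't show one; unlike NOT IN, it is NULL-safe:
SELECT *
FROM table1 t
WHERE NOT EXISTS
(SELECT 1
FROM table1 x
WHERE x.id = t.id
AND x.MarketTYpe = 'EmergingMarkets'
AND x.IsBigOne = 1
AND x.MarketVolume = 'MIDDLE'
AND x.SomeClass = 'ThirdClass')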
NOT IN (with the above as a subquery) may not give correct results if there are NULL values in the subquery, hence I wouldn't use it.
I would avoid negation in the WHERE clause because it isn't readable straight away, as the comments on Michael's answer show...
For more on "all rows except some rows", see these:
Combining datasets with EXCEPT versus checking on IS NULL in a LEFT JOIN
To take out those dept who has no employees assigned to it
SQL NOT IN possibly performance issues
(DBA.SE) https://dba.stackexchange.com/questions/4009/the-use-of-not-logic-in-relation-to-indexes/4010#4010
What database engine?
Minus operator in ORACLE
Except operator in SQL Server
Simplest and probably fastest here is to simply invert the conditions:
SELECT *
FROM table1
WHERE table1.MarketTYpe <> 'EmergingMarkets'
OR IsBigOne <> 1
OR MarketVolume <> 'MIDDLE'
OR SomeClass <> 'ThirdClass'
This is likely to use far fewer resources than a NOT IN(); you may wish to benchmark the two to be certain. One caveat: if any of these columns are nullable, rows where a column is NULL won't match the inverted <> conditions, so the results can differ from the EXCEPT approach.
Use NOT IN because that makes it clear that you want the set in the main select statement excluding the subset in the NOT IN select statement.
I like gbn's answer, but another way of doing it is:
SELECT *
FROM table1
WHERE NOT (table1.MarketTYpe = 'EmergingMarkets'
AND IsBigOne = 1
AND MarketVolume = 'MIDDLE'
AND SomeClass = 'ThirdClass')
I need to execute some SQL that looks like this:
select approve_firm_id,approve_dt,approve_result
from main_approve
group by approve_firm_id
having MAX(approve_dt) and approve_result=0;
It runs (MySQL 5.1), but if I try it from the Django model like this:
Approve.objects.annotate(max_dt=Max('approve_dt')).
filter(max_dt__gt=0).filter(approve_result=0).query
The query generated is this:
SELECT `main_approve`.`id`, `main_approve`.`approve_result`,
`main_approve`.`approve_dt`, `main_approve`.`approve_user_id`,
`main_approve`.`approve_firm_id`, `main_approve`.`exported_at`,
MAX(`main_approve`.`approve_dt`) AS `max_dt` FROM `main_approve`
WHERE (`main_approve`.`approve_result` = 0 )
GROUP BY `main_approve`.`id`
HAVING MAX(`main_approve`.`approve_dt`) > 0
ORDER BY NULL
I need the WHERE clause to be AFTER the GROUP BY clause.
Does the SQL even work? The having MAX(approve_dt) part certainly looks suspicious. Perhaps you mean this:
SELECT DISTINCT
main_approve.approve_firm_id,
main_approve.approve_dt,
main_approve.approve_result
FROM
main_approve
JOIN (
SELECT
approve_firm_id,
MAX(approve_dt) AS max_dt
FROM
main_approve
GROUP BY
approve_firm_id
) AS t
ON
main_approve.approve_firm_id = t.approve_firm_id
AND main_approve.approve_dt = t.max_dt
WHERE
main_approve.approve_result = 0;
It will be easier to construct the ORM expression once you know exactly what the SQL is going to be.