Filtering SQL query by unique id and earliest dates that are in the future - sql

I have this query that returns the correct data, but I would like to filter it.
SELECT TOP (100) PERCENT dbo.Reg_Master.id, dbo.Cart_Programs.cartid, dbo.Reg_Master.F_ID, dbo.BlockPeriod.profileid, dbo.Reg_Master.FirstName,
dbo.Reg_Master.LastName, dbo.BlockPeriod.startdate, dbo.Cart_Programs.blockid
FROM dbo.Cart_Programs LEFT OUTER JOIN
dbo.Reg_Master ON dbo.Cart_Programs.cartid = dbo.Reg_Master.cartid LEFT OUTER JOIN
dbo.BlockPeriod ON dbo.Cart_Programs.blockid = dbo.BlockPeriod.id
WHERE (dbo.BlockPeriod.profileid = xxx) AND (dbo.Reg_Master.F_ID = xxxx)
ORDER BY dbo.BlockPeriod.startdate
For each dbo.Reg_Master.id, I would like to return only the earliest dbo.BlockPeriod.startdate that is today or later (in other words, ignoring dates that have already passed). I cannot seem to get it formatted correctly.

First of all, TOP (100) PERCENT does nothing; the optimizer will simply ignore it.
Also, your LEFT JOINs serve no purpose, because your WHERE conditions on the right-hand tables turn them into inner joins anyway, so I have edited the SQL to use an inner join plus CROSS APPLY rather than an outer join plus OUTER APPLY.
If I understand you correctly, for each Reg_Master record you want at most one record from BlockPeriod: the one whose startdate is the earliest that falls on or after today.
If so, then what you are looking for is an APPLY table operator combined with TOP (1) as shown below:
UPDATED:
SELECT Reg_Master.id,
       Cart_Programs.cartid,
       Reg_Master.F_ID,
       T.profileid,
       Reg_Master.FirstName,
       Reg_Master.LastName,
       T.startdate,
       Cart_Programs.blockid
FROM Cart_Programs
JOIN Reg_Master ON Cart_Programs.cartid = Reg_Master.cartid
CROSS APPLY (
    SELECT TOP (1) *
    FROM BlockPeriod
    WHERE BlockPeriod.id = Cart_Programs.blockid
      AND BlockPeriod.profileid = xxx
      AND Reg_Master.F_ID = xxxx
      AND BlockPeriod.startdate >= GETDATE()
    ORDER BY BlockPeriod.startdate ASC
) AS T
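If rows without any upcoming BlockPeriod should still be returned (with NULLs in the T columns), the same pattern can be sketched with OUTER APPLY; note the F_ID filter then has to move to an outer WHERE so it still removes rows. The SELECT list stays the same as above:
FROM Cart_Programs
JOIN Reg_Master ON Cart_Programs.cartid = Reg_Master.cartid
OUTER APPLY (
    SELECT TOP (1) *
    FROM BlockPeriod
    WHERE BlockPeriod.id = Cart_Programs.blockid
      AND BlockPeriod.profileid = xxx
      AND BlockPeriod.startdate >= GETDATE()
    ORDER BY BlockPeriod.startdate ASC
) AS T
WHERE Reg_Master.F_ID = xxxx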

Related

How to use RIGHT OUTER JOIN with GROUP BY in SQL Server 2017?

I want to use a RIGHT OUTER JOIN in SQL Server with a date filter, but it does not work: the query should still return the Line rows that have no matching Round rows (i.e. Round columns NULL and a count of 0), but those rows are not returned. How can I fix this?
SQL Code
SELECT Lines.Target, COUNT(Round.ID) AS cnt
FROM Round RIGHT OUTER JOIN Line as Lines
on Round.Line = Lines.ID
WHERE Lines.Company = 20 AND
CAST(Round.System_Date AS DATE) BETWEEN
CAST('2019-03-01' AS DATE) AND CAST('2019-03-01' AS DATE)
GROUP BY Lines.Target
Without the date filter the code works.
It must return:
Target  cnt
------  ---
7       0
9       0
15      0
Switch to a LEFT JOIN, and move the condition on the outer (Round) table from the WHERE clause into the ON clause to get a true outer join result:
SELECT Lines.Target, COUNT(Round.ID) AS cnt
FROM Line as Lines
LEFT OUTER JOIN Round
on Round.Line = Lines.ID
AND CAST(Round.System_Date AS DATE) BETWEEN
CAST('2019-03-01' AS DATE) AND CAST('2019-03-01' AS DATE)
WHERE Lines.Company = 20
GROUP BY Lines.Target
There's no BETWEEN required, since you are searching for a single date; also make sure a record exists for that specific date.
Plus, if the Line table is meant to be the master (preserved) table, why use a RIGHT OUTER JOIN at all? Use a LEFT OUTER JOIN from Line instead.
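Putting those two remarks together, a sketch of the corrected query for a single day without BETWEEN (the CAST is kept in case System_Date carries a time component):
SELECT Lines.Target, COUNT(Round.ID) AS cnt
FROM Line AS Lines
LEFT OUTER JOIN Round
    ON Round.Line = Lines.ID
    AND CAST(Round.System_Date AS DATE) = CAST('2019-03-01' AS DATE)
WHERE Lines.Company = 20
GROUP BY Lines.Target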

SQL Inner Join and nearest row to date

I don't get it. I changed some of the code. The WPLEVENT table contains many events per person. The Persab table contains the persons with their history. From the Persab table I now need just the row whose persab.gltab date is nearest to the WPLEVENT.vdat date. So: all rows from WPLEVENT, but only the single matching row from the Persab table.
SELECT
persab.name,
persab.vorname,
vdat,
eventstart,
persab.rc1,
persab.rc2
FROM wplevent
INNER JOIN
persab ON WPLEVENT.PersID = persab.PRIMKEY
INNER JOIN
(SELECT TOP 1 persab.rc1
FROM PERSAB
WHERE persab.gltab <= getdate() --/ Should be wplevent.vdat instead of getdate()
) NewTable ON wplevent.persid = persab.primkey
WHERE
persid ='100458'
ORDER BY vdat DESC
You need to use the MAX() function with the proper syntax, supplying an expression such as MAX(persab.rc1). You also need to GROUP BY the second column rc2 in the subquery (although it looks like you do not need that column at all). Finally, you are missing the ON clause for the final INNER JOIN. I can update the answer to fix the query if you provide that information.
SELECT
Z1PERS.NAME
, Z1PERS.VORNAME
, WPLEVENT.VDat
, WPLEVENT.EventStart
, WPLEVENT.EventStop
, WPLEVENT.PEPGROUP
, Z1SGRP.TXXT
, PERSAB.GLTAB
, Z1PERS.PRIMKEY AS Expr1
, PERSAB.PRIMKEY
FROM
Z1PERS
INNER JOIN
WPLEVENT ON Z1PERS.PRIMKEY = WPLEVENT.PersID
INNER JOIN
Z1SGRP ON WPLEVENT.PEPGROUP = Z1SGRP.GRUPPE
INNER JOIN
(
    SELECT MAX(Persab.rc1) --Fixed MAX expression
         , persab.rc2
    FROM persab
    WHERE --WHERE has to come before GROUP BY
          --(these correlated references to WPLEVENT also only work if the derived table is changed into a CROSS/OUTER APPLY)
          WPLEVENT.PersID = PERSAB.PRIMKEY
      AND WPLEVENT.VDat <= PERSAB.GLTAB
    GROUP BY persab.rc2 --Need to group on rc2 if you want that column in the query, otherwise remove this AND the rc2 column from the select list
) --Missing ON clause for the INNER JOIN here
WHERE z1pers.vorname = 'henning'
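For what it's worth, the "nearest gltab to vdat" requirement can also be expressed with OUTER APPLY and TOP (1). The following is only a sketch using the column names from the question, and it assumes "nearest" means the latest gltab that is on or before vdat:
SELECT nearest.name,
       nearest.vorname,
       w.vdat,
       w.eventstart,
       nearest.rc1,
       nearest.rc2
FROM wplevent AS w
OUTER APPLY (
    SELECT TOP (1) p.name, p.vorname, p.rc1, p.rc2, p.gltab
    FROM persab AS p
    WHERE p.PRIMKEY = w.PersID
      AND p.gltab <= w.vdat        -- only history rows on or before the event date
    ORDER BY p.gltab DESC          -- the closest (latest) one wins
) AS nearest
WHERE w.PersID = '100458'
ORDER BY w.vdat DESC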

SQL query: Iterate over values in table and use them in subquery

I have a simple SQL table containing some values, for example:
id | value (table 'values')
----------
0 | 4
1 | 7
2 | 9
I want to iterate over these values, and use them in a query like so:
SELECT value[0], x1
FROM (some subquery where value[0] is used)
UNION
SELECT value[1], x2
FROM (some subquery where value[1] is used)
...
etc
In order to get a result set like this:
4 | x1
7 | x2
9 | x3
It has to be in SQL as it will actually represent a database view. Of course the real query is a lot more complicated, but I tried to simplify the question while keeping the essence as much as possible.
I think I have to select from values and join the subquery, but as the value should be used in the subquery I'm lost on how to accomplish this.
Edit: I oversimplified my question; in reality I want to have 2 rows from the subquery and not only one.
Edit 2: As suggested I'm posting the real query. I simplified it a bit to make it clearer, but it's a working query and the problem is there. Note that I have hardcoded the value '2' in this query two times. I want to replace that with values from a different table, in the example table above I would want a result set of the combined results of this query with 4, 7 and 9 as values instead of the currently hardcoded 2.
SELECT x.fantasycoach_id, SUM(round_points)
FROM (
SELECT DISTINCT fc.id AS fantasycoach_id,
ffv.formation_id AS formation_id,
fpc.round_sequence AS round_sequence,
round_points,
fpc.fantasyplayer_id
FROM fantasyworld_FantasyCoach AS fc
LEFT JOIN fantasyworld_fantasyformation AS ff ON ff.id = (
SELECT MAX(fantasyworld_fantasyformationvalidity.formation_id)
FROM fantasyworld_fantasyformationvalidity
LEFT JOIN realworld_round AS _rr ON _rr.id = round_id
LEFT JOIN fantasyworld_fantasyformation AS _ff ON _ff.id = formation_id
WHERE is_valid = TRUE
AND _ff.coach_id = fc.id
AND _rr.sequence <= 2 /* HARDCODED USE OF VALUE */
)
LEFT JOIN fantasyworld_FantasyFormationPlayer AS ffp
ON ffp.formation_id = ff.id
LEFT JOIN dbcache_fantasyplayercache AS fpc
ON ffp.player_id = fpc.fantasyplayer_id
AND fpc.round_sequence = 2 /* HARDCODED USE OF VALUE */
LEFT JOIN fantasyworld_fantasyformationvalidity AS ffv
ON ffv.formation_id = ff.id
) x
GROUP BY fantasycoach_id
Edit 3: I'm using PostgreSQL.
SQL works with tables as a whole, which basically involves set operations. There is no explicit iteration, and generally no need for any. In particular, the most straightforward implementation of what you described would be this:
SELECT value, (some subquery where value is used) AS x
FROM values
Do note, however, that a correlated subquery such as that is very hard on query performance. Depending on the details of what you're trying to do, it may well be possible to structure it around a simple join, an uncorrelated subquery, or a similar, better-performing alternative.
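As a rough sketch of that better-performing shape, assuming the per-value computation can be written as an uncorrelated derived table keyed by value (placeholder names, in the spirit of the pseudo-query in the question; the table name is quoted because VALUES is a reserved word in PostgreSQL):
SELECT v.value, s.x
FROM "values" AS v
JOIN (
    -- uncorrelated subquery producing (value, x) pairs
    SELECT value, x
    FROM some_subquery
) AS s ON s.value = v.value;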
Update:
In view of the update to the question, which indicates that the subquery is expected to yield multiple rows for each value in table values (contrary to the example results), a better approach would be to rewrite the subquery as the main query. If it does not already do so (and maybe even if it does), it would then join table values as another base table.
Update 2:
Given the real query now presented, this is how the values from table values could be incorporated into it:
SELECT x.fantasycoach_id, SUM(round_points) FROM
(
SELECT DISTINCT
fc.id AS fantasycoach_id,
ffv.formation_id AS formation_id,
fpc.round_sequence AS round_sequence,
round_points,
fpc.fantasyplayer_id
FROM fantasyworld_FantasyCoach AS fc
-- one row for each combination of coach and value:
CROSS JOIN values
LEFT JOIN fantasyworld_fantasyformation AS ff
ON ff.id = (
SELECT MAX(fantasyworld_fantasyformationvalidity.formation_id)
FROM fantasyworld_fantasyformationvalidity
LEFT JOIN realworld_round AS _rr
ON _rr.id = round_id
LEFT JOIN fantasyworld_fantasyformation AS _ff
ON _ff.id = formation_id
WHERE is_valid = TRUE
AND _ff.coach_id = fc.id
-- use the value obtained from values:
AND _rr.sequence <= values.value
)
LEFT JOIN fantasyworld_FantasyFormationPlayer AS ffp
ON ffp.formation_id = ff.id
LEFT JOIN dbcache_fantasyplayercache AS fpc
ON ffp.player_id = fpc.fantasyplayer_id
-- use the value obtained from values again:
AND fpc.round_sequence = values.value
LEFT JOIN fantasyworld_fantasyformationvalidity AS ffv
ON ffv.formation_id = ff.id
) x
GROUP BY fantasycoach_id
Note in particular the CROSS JOIN which forms the cross product of two tables; this is the same thing as an INNER JOIN without any join predicate, and it can be written that way if desired.
The overall query could be at least a bit simplified, but I do not do so because it is a working example rather than an actual production query, so it is unclear what other changes would translate to the actual application.
In the example I create two tables. See how the outer table has an alias that is used in the inner select?
SQL Fiddle Demo
SELECT T.[value], (SELECT [property] FROM Table2 P WHERE P.[value] = T.[value])
FROM Table1 T
This is a better approach for performance:
SELECT T.[value], P.[property]
FROM Table1 T
INNER JOIN Table2 p
on P.[value] = T.[value];
Table2 can be a query (derived table) instead of a real table.
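For instance, a sketch where Table2 is replaced by a derived table (the squared value is just an illustrative computation):
SELECT T.[value], P.[property]
FROM Table1 T
INNER JOIN (
    SELECT [value], [value] * [value] AS [property]
    FROM Table1
) AS P
    ON P.[value] = T.[value];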
Third option: use a CTE to calculate the values and then join back to the main table. This way the subquery logic is kept separate from your final query.
WITH cte AS (
SELECT
T.[value],
T.[value] * T.[value] as property
FROM Table1 T
)
SELECT T.[value], C.[property]
FROM Table1 T
INNER JOIN cte C
on T.[value] = C.[value];
It might be helpful to extract the computation into a function that is called in the SELECT clause and executed once per row of the result set.
See the documentation for CREATE FUNCTION in SQL Server; it is probably similar in whatever database system you're using, and if not you can easily search for it.
Here's an example of creating a function and using it in a query:
CREATE FUNCTION DoComputation(@parameter1 int)
RETURNS int
AS
BEGIN
    -- Do some calculations here and return the function result.
    -- This example returns the value of @parameter1 squared.
    -- You can add additional parameters to the function definition if needed.
    DECLARE @Result int
    SET @Result = @parameter1 * @parameter1
    RETURN @Result
END
Here is an example of using the function above in a query:
SELECT v.value, DoComputation(v.value) as ComputedValue
FROM [Values] v
ORDER BY value
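Since the question mentions PostgreSQL (Edit 3), here is a rough sketch of the same idea there; it assumes the example table was created with the lowercase name "values" (quoted because VALUES is a reserved word):
CREATE FUNCTION DoComputation(parameter1 int)
RETURNS int AS $$
    SELECT parameter1 * parameter1;  -- same squared-value placeholder as above
$$ LANGUAGE sql IMMUTABLE;

SELECT v.value, DoComputation(v.value) AS ComputedValue
FROM "values" v
ORDER BY v.value;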

Performance tuning of row-based subqueries: LEFT OUTER JOIN and OUTER APPLY, alternatives?

The performance of a certain query (on a Dynamics CRM 2011 database) was abysmal. Since it is a normalized datamodel, but a flattened view on this data (an SSRS report) is required, I did a lot (12) of LEFT OUTER JOINs with a SELECT TOP (1) subquery, e.g.:
LEFT JOIN Filterednew_rates FRates ON FRates.new_ratesid =
(SELECT TOP (1)
FRR.new_ratesid
FROM Filterednew_rates FRR
WHERE
FRR.new_contractid = FContract.contractid
AND FRR.statuscode <> 803270000 -- NOT Obsolete
ORDER BY FRR.new_startdate DESC
)
This worked for a small number of result rows (like 10 seconds for 3 rows), but I've had it run for 45 minutes on about 100 expected result rows (the amount of source data is the same, just different WHERE clause). So I started looking for ways to "force" SQL Server to run the subqueries per row (since logically to me, that would scale linearly).
Then I read The power of T-SQL's APPLY operator and managed to change the above to
OUTER APPLY (
SELECT TOP (1)
FRR.*
FROM Filterednew_rates FRR
WHERE
FRR.new_contractid = FContract.contractid
AND FRR.statuscode <> 803270000 -- NOT Obsolete
ORDER BY FRR.new_startdate DESC
) AS FRates
Which made the execution time scale about linearly with the number of result records (about 3:30 minutes for 100 rows, still about 6 seconds for 3 rows). Somehow this made SQL Server decide to change the query execution plan for the better!
Is there any other way in SQL to "flatten" a normalized datamodel without resorting to Integration/Analysis Services?
EDIT:
Thanks for the input @Aaron and @BAReese. I'll try to apply PIVOT/UNPIVOT and the windowing functions and report back on query performance differences.
And by popular request, a larger part of the query. I've tried to "anonymize" the query a bit, so the actual query properties are more descriptive.
OUTER APPLY (
SELECT TOP (1)
FCO.*
FROM Filterednew_contractoption FCO
WHERE
FCO.new_contractid = FContract.contractid
AND FCO.new_included = 1 -- Is Included
AND FCO.new_optionidname = 'SomeOption1'
) AS FOptionSomeOption1
OUTER APPLY (
SELECT TOP (1)
FCO.*
FROM Filterednew_contractoption FCO
WHERE
FCO.new_contractid = FContract.contractid
AND FCO.new_included = 1 -- Is Included
AND FCO.new_optionidname = 'SomeOption2'
) AS FOptionSomeOption2
OUTER APPLY (
SELECT TOP (1)
FCD.*
FROM FilteredContractDetail FCD
JOIN FilteredProduct FProd ON FCD.productid = FProd.productid
WHERE
FContract.contractid = FCD.contractid
AND FCD.new_included = 1 -- Is Included
AND FProd.productnumber IN ('COLDEL1', 'COLDEL2', 'COLDEL3', 'COLDEL4')
) AS FColDelContractDetail
LEFT JOIN FilteredProduct FColDelProduct ON FColDelContractDetail.productid = FColDelProduct.productid
OUTER APPLY (
SELECT TOP (1)
FCO.*
FROM Filterednew_contractoption FCO
JOIN Filterednew_contractdetail_new_contractoptions FCD_CO ON FCO.new_contractoptionid = FCD_CO.new_contractoptionid
WHERE
FCD_CO.contractdetailid = FColDelContractDetail.contractdetailid
AND FCO.new_included = 1 -- Is Included
AND FCO.new_optionidname LIKE 'Input1'
) AS FColDelInput1Option
OUTER APPLY (
SELECT TOP (1)
FCO.*
FROM Filterednew_contractoption FCO
JOIN Filterednew_contractdetail_new_contractoptions FCD_CO ON FCO.new_contractoptionid = FCD_CO.new_contractoptionid
WHERE
FCD_CO.contractdetailid = FColDelContractDetail.contractdetailid
AND FCO.new_included = 1 -- Is Included
AND FCO.new_optionidname LIKE 'Input2'
) AS FColDelInput2Option
OUTER APPLY (
SELECT TOP (1)
FCO.*
FROM Filterednew_contractoption FCO
JOIN Filterednew_contractdetail_new_contractoptions FCD_CO ON FCO.new_contractoptionid = FCD_CO.new_contractoptionid
WHERE
FCD_CO.contractdetailid = FColDelContractDetail.contractdetailid
AND FCO.new_included = 1 -- Is Included
AND FCO.new_optionidname LIKE 'Input3'
) AS FColDelInput3Option
OUTER APPLY (
SELECT TOP (1)
FCP.*
FROM Filterednew_price FCP
WHERE FCP.new_contractid = FContract.contractid
AND FCP.statuscode <> 803270000 -- NOT Obsolete
ORDER BY FCP.new_validfrom DESC
) AS FPrice
OUTER APPLY (
SELECT TOP (1)
FCFR.*
FROM Filterednew_contractforecastresult FCFR
WHERE FCFR.new_contractid = FContract.contractid
ORDER BY FCFR.createdon DESC
) AS FForecastResult
Since you're using SQL Server, this would be an excellent opportunity to use windowing functions to improve efficiency.
Something like this might help it run quicker:
LEFT JOIN
(
    SELECT FRR.*,  -- keep the rate columns so the outer query can still select them
           ROW_NUMBER() OVER (PARTITION BY FRR.new_contractid
                              ORDER BY FRR.new_startdate DESC) AS Last_ID
    FROM Filterednew_rates AS FRR
    WHERE FRR.statuscode <> 803270000 -- NOT Obsolete
) AS FRates
    ON FRates.new_contractid = FContract.contractid
   AND FRates.Last_ID = 1
What this should do is let the derived table produce a list of all contract ids while assigning a priority order. In theory it will be easier on the server, and you won't hit the table more times than necessary. Another thing you can do is add SET STATISTICS IO ON and SET STATISTICS TIME ON at the top of your query (assuming you're testing this in SQL Server Management Studio). In SSMS you'll get output on the [Messages] tab showing the logical/physical read counts for each table, as well as the time spent on the query.
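A minimal sketch of how those SET statements wrap a test run in SSMS:
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- ... run the query being tuned here ...

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;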

Limit join to one row

I have the following query:
SELECT sum((select count(*) as itemCount) * "SalesOrderItems"."price") as amount, 'rma' as
"creditType", "Clients"."company" as "client", "Clients".id as "ClientId", "Rmas".*
FROM "Rmas" JOIN "EsnsRmas" on("EsnsRmas"."RmaId" = "Rmas"."id")
JOIN "Esns" on ("Esns".id = "EsnsRmas"."EsnId")
JOIN "EsnsSalesOrderItems" on("EsnsSalesOrderItems"."EsnId" = "Esns"."id" )
JOIN "SalesOrderItems" on("SalesOrderItems"."id" = "EsnsSalesOrderItems"."SalesOrderItemId")
JOIN "Clients" on("Clients"."id" = "Rmas"."ClientId" )
WHERE "Rmas"."credited"=false AND "Rmas"."verifyStatus" IS NOT null
GROUP BY "Clients".id, "Rmas".id;
The problem is that the table "EsnsSalesOrderItems" can have the same EsnId in different entries. I want to restrict the query to only pull the last entry in "EsnsSalesOrderItems" that has the same "EsnId".
By "last" entry I mean the following:
The one that appears last in the table "EsnsSalesOrderItems". So for example if "EsnsSalesOrderItems" has two entries with "EsnId" = 6 and "createdAt" = '2012-06-19' and '2012-07-19' respectively it should only give me the entry from '2012-07-19'.
SELECT (count(*) * sum(s."price")) AS amount
, 'rma' AS "creditType"
, c."company" AS "client"
, c.id AS "ClientId"
, r.*
FROM "Rmas" r
JOIN "EsnsRmas" er ON er."RmaId" = r."id"
JOIN "Esns" e ON e.id = er."EsnId"
JOIN (
SELECT DISTINCT ON ("EsnId") *
FROM "EsnsSalesOrderItems"
ORDER BY "EsnId", "createdAt" DESC
) es ON es."EsnId" = e."id"
JOIN "SalesOrderItems" s ON s."id" = es."SalesOrderItemId"
JOIN "Clients" c ON c."id" = r."ClientId"
WHERE r."credited" = FALSE
AND r."verifyStatus" IS NOT NULL
GROUP BY c.id, r.id;
Your query in the question has an illegal aggregate over another aggregate:
sum((select count(*) as itemCount) * "SalesOrderItems"."price") as amount
Simplified and converted to legal syntax:
(count(*) * sum(s."price")) AS amount
But do you really want to multiply by the count per group?
I retrieve the single row per group in "EsnsSalesOrderItems" with DISTINCT ON. Detailed explanation:
Select first row in each GROUP BY group?
I also added table aliases and formatting to make the query easier to parse for human eyes. If you could avoid camel case you could get rid of all the double quotes clouding the view.
Something like:
join (
    select "EsnId", "SalesOrderItemId", -- SalesOrderItemId kept so the outer query can still join SalesOrderItems
           row_number() over (partition by "EsnId" order by "createdAt" desc) as rn
    from "EsnsSalesOrderItems"
) t ON t."EsnId" = "Esns"."id" and rn = 1
This will select the latest row per "EsnId" from "EsnsSalesOrderItems", based on the "createdAt" column. As you didn't post the full structure of your tables, I used the "createdAt" column from your example; you can use any column that allows you to define an order on the rows that suits you.
But remember that the concept of a "last row" is only valid if you specify an order for the rows. A table as such is not ordered, and neither is the result of a query unless you specify an ORDER BY.
Necromancing because the answers are outdated.
Take advantage of the LATERAL keyword introduced in PG 9.3
left | right | inner JOIN LATERAL
I'll explain with an example:
Assuming you have a table "Contacts".
Now contacts have organisational units.
They can have one OU at a point in time, but N OUs at N points in time.
Now, if you have to query contacts and OU in a time period (not a reporting date, but a date range), you could N-fold increase the record count if you just did a left join.
So, to display the OU, you need to just join the first OU for each contact (where what shall be first is an arbitrary criterion - when taking the last value, for example, that is just another way of saying the first value when sorted by descending date order).
In SQL Server you would use CROSS APPLY (or rather OUTER APPLY, since we need a left join), which evaluates a table-valued expression for each row it has to join.
SELECT * FROM T_Contacts
--LEFT JOIN T_MAP_Contacts_Ref_OrganisationalUnit ON MAP_CTCOU_CT_UID = T_Contacts.CT_UID AND MAP_CTCOU_SoftDeleteStatus = 1
--WHERE T_MAP_Contacts_Ref_OrganisationalUnit.MAP_CTCOU_UID IS NULL -- 989
-- CROSS APPLY -- = INNER JOIN
OUTER APPLY -- = LEFT JOIN
(
SELECT TOP 1
--MAP_CTCOU_UID
MAP_CTCOU_CT_UID
,MAP_CTCOU_COU_UID
,MAP_CTCOU_DateFrom
,MAP_CTCOU_DateTo
FROM T_MAP_Contacts_Ref_OrganisationalUnit
WHERE MAP_CTCOU_SoftDeleteStatus = 1
AND MAP_CTCOU_CT_UID = T_Contacts.CT_UID
/*
AND
(
(@in_DateFrom <= T_MAP_Contacts_Ref_OrganisationalUnit.MAP_KTKOE_DateTo)
AND
(@in_DateTo >= T_MAP_Contacts_Ref_OrganisationalUnit.MAP_KTKOE_DateFrom)
)
*/
ORDER BY MAP_CTCOU_DateFrom
) AS FirstOE
In PostgreSQL, starting from version 9.3, you can do that, too - just use the LATERAL keyword to achieve the same:
SELECT * FROM T_Contacts
--LEFT JOIN T_MAP_Contacts_Ref_OrganisationalUnit ON MAP_CTCOU_CT_UID = T_Contacts.CT_UID AND MAP_CTCOU_SoftDeleteStatus = 1
--WHERE T_MAP_Contacts_Ref_OrganisationalUnit.MAP_CTCOU_UID IS NULL -- 989
LEFT JOIN LATERAL
(
SELECT
--MAP_CTCOU_UID
MAP_CTCOU_CT_UID
,MAP_CTCOU_COU_UID
,MAP_CTCOU_DateFrom
,MAP_CTCOU_DateTo
FROM T_MAP_Contacts_Ref_OrganisationalUnit
WHERE MAP_CTCOU_SoftDeleteStatus = 1
AND MAP_CTCOU_CT_UID = T_Contacts.CT_UID
/*
AND
(
(__in_DateFrom <= T_MAP_Contacts_Ref_OrganisationalUnit.MAP_KTKOE_DateTo)
AND
(__in_DateTo >= T_MAP_Contacts_Ref_OrganisationalUnit.MAP_KTKOE_DateFrom)
)
*/
ORDER BY MAP_CTCOU_DateFrom
LIMIT 1
) AS FirstOE
Try using a subquery in your ON clause. An abstract example:
SELECT
*
FROM table1
JOIN table2 ON table2.id = (
SELECT id FROM table2 WHERE table2.table1_id = table1.id LIMIT 1
)
WHERE
...
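If "last row" here again means the latest row by some date column, the abstract example also needs an ORDER BY before the LIMIT. A sketch of just the join, with a hypothetical created_at column standing in for whatever ordering column you have:
JOIN table2 ON table2.id = (
    SELECT id
    FROM table2
    WHERE table2.table1_id = table1.id
    ORDER BY table2.created_at DESC  -- pick the latest matching row
    LIMIT 1
)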