SQL query combining MAX() and SUM() aggregates

I have tried looking into different topics over here and in other forums but I can't seem to find a solution to my problem.
What I'm trying to achieve is: "Display the net sales (in dollars) of the Product Line with the highest revenue for that Customer. Use a heading of: Best Sales. Format as $999,999.99."
Here's what I've tried so far:
SELECT cc.CustID, cc.CompanyName, cc.ContactName, pl.pl_id,
to_char(sum(od.unitprice*od.quantity*(1-discount)), '$9,999,999.99') as rev
FROM corp.customers cc JOIN corp.orders co ON (cc.CustID=co.CustID)
LEFT OUTER JOIN corp.order_details od ON (co.orderID=od.orderID)
LEFT OUTER JOIN corp.products cp ON (od.ProductID=cp.ProductID)
LEFT OUTER JOIN corp.product_lines pl ON (cp.pl_id=pl.pl_id)
GROUP BY cc.CustID, cc.CompanyName, cc.ContactName, pl.pl_id
HAVING sum(od.unitprice*od.quantity*(1-discount))=
(
SELECT max(sum(od.unitprice*od.quantity*(1-discount)))
FROM corp.customers cc JOIN corp.orders co ON (cc.CustID=co.CustID)
JOIN corp.order_details od ON (co.orderID=od.orderID)
JOIN corp.products cp ON (od.ProductID=cp.ProductID)
JOIN corp.product_lines pl ON (cp.pl_id=pl.pl_id)
GROUP BY cc.CustID, cc.CompanyName, cc.ContactName, pl.pl_id);
This gives me only one row, showing the highest revenue across all customers, but I would like it to display the highest-revenue product line for each customer.
The result is shown below.
CustID | Company Name | Contact Name | PL_ID | Revenue
QUICK | QUICK-Stop | Horst Kloss | 1 | $37,161.63
I would like it to show something like:
CustID | Company Name | Contact Name | PL_ID | Revenue
QUICK | QUICK-Stop | Horst Kloss | 1 | $37,161.63
QS | QUICK-Start | Clark Stone | 2 | $50,000.00
QUI | QUICK | Mary Haynes | 1 | $60,000.00
QShelf | QUICK-Shelf | Doreen Lucas | 4 | $35,161.63
Any help is appreciated. Thank you!

This query uses your original query, a rank() function partitioned by customer and ordered by your rev column, and a selection to keep only the highest rev per customer. It will give multiple rows for a customer if there are ties on the rev value; change rank() to row_number() if you only want one.
You could also use a CTE instead of the nested queries; it won't make any difference to the result.
select CustID, CompanyName, ContactName, pl_id, rev from (
select CustID, CompanyName, ContactName, pl_id, to_char(rev, '$9,999,999.99') as rev,
rank() over(partition by CustID order by rev desc) r
from (
SELECT cc.CustID, cc.CompanyName, cc.ContactName, pl.pl_id,
sum(od.unitprice*od.quantity*(1-discount)) as rev
FROM corp.customers cc JOIN corp.orders co ON (cc.CustID=co.CustID)
LEFT OUTER JOIN corp.order_details od ON (co.orderID=od.orderID)
LEFT OUTER JOIN corp.products cp ON (od.ProductID=cp.ProductID)
LEFT OUTER JOIN corp.product_lines pl ON (cp.pl_id=pl.pl_id)
GROUP BY cc.CustID, cc.CompanyName, cc.ContactName, pl.pl_id
) q
) q2 where r=1
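For reference, here's roughly what the CTE version mentioned above could look like (untested sketch, same logic with named subqueries and the same table/column names as your query):
WITH totals AS (
  -- revenue per customer and product line
  SELECT cc.CustID, cc.CompanyName, cc.ContactName, pl.pl_id,
         SUM(od.unitprice * od.quantity * (1 - discount)) AS rev
  FROM corp.customers cc
  JOIN corp.orders co ON (cc.CustID = co.CustID)
  LEFT OUTER JOIN corp.order_details od ON (co.orderID = od.orderID)
  LEFT OUTER JOIN corp.products cp ON (od.ProductID = cp.ProductID)
  LEFT OUTER JOIN corp.product_lines pl ON (cp.pl_id = pl.pl_id)
  GROUP BY cc.CustID, cc.CompanyName, cc.ContactName, pl.pl_id
),
ranked AS (
  -- rank each customer's product lines by revenue
  SELECT t.*, RANK() OVER (PARTITION BY CustID ORDER BY rev DESC) AS r
  FROM totals t
)
SELECT CustID, CompanyName, ContactName, pl_id,
       TO_CHAR(rev, '$9,999,999.99') AS rev
FROM ranked
WHERE r = 1;
Same result as the nested version; the CTEs just make the two steps easier to read.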

Since you didn't provide us with sample input data for your tables, I've knocked up a simple example that you can hopefully use to amend your query:
WITH sample_data AS (SELECT 1 ID, 1 id2, 10 val FROM dual UNION ALL
SELECT 1 ID, 1 id2, 20 val FROM dual UNION ALL
SELECT 1 ID, 2 id2, 30 val FROM dual UNION ALL
SELECT 1 ID, 2 id2, 40 val FROM dual UNION ALL
SELECT 2 ID, 1 id2, 50 val FROM dual UNION ALL
SELECT 2 ID, 2 id2, 60 val FROM dual UNION ALL
SELECT 2 ID, 3 id2, 60 val FROM dual)
SELECT ID,
id2,
max_sum_val
FROM (SELECT ID,
id2,
SUM(val) sum_val,
MAX(SUM(val)) OVER (PARTITION BY ID) max_sum_val
FROM sample_data
GROUP BY ID, id2)
WHERE sum_val = max_sum_val;
ID ID2 MAX_SUM_VAL
---------- ---------- -----------
1 2 70
2 2 60
2 3 60
This will display all id2 values that share the highest sum(val). If you don't want to display all tied rows, you can use the row_number() analytic function instead:
WITH sample_data AS (SELECT 1 ID, 1 id2, 10 val FROM dual UNION ALL
SELECT 1 ID, 1 id2, 20 val FROM dual UNION ALL
SELECT 1 ID, 2 id2, 30 val FROM dual UNION ALL
SELECT 1 ID, 2 id2, 40 val FROM dual UNION ALL
SELECT 2 ID, 1 id2, 50 val FROM dual UNION ALL
SELECT 2 ID, 2 id2, 60 val FROM dual UNION ALL
SELECT 2 ID, 3 id2, 60 val FROM dual)
SELECT ID,
id2,
max_sum_val
FROM (SELECT ID,
id2,
SUM(val) sum_val,
row_number() OVER (PARTITION BY ID ORDER BY SUM(val) DESC, id2) rn
FROM sample_data
GROUP BY ID, id2)
WHERE rn = 1;
ID ID2 MAX_SUM_VAL
---------- ---------- -----------
1 2 70
2 2 60
ETA:
That means your query would end up something like:
SELECT custid,
companyname,
contactname,
pl_id,
to_char(rev, '$9,999,999.99') rev
FROM (SELECT cc.custid,
cc.companyname,
cc.contactname,
pl.pl_id,
SUM(od.unitprice * od.quantity * (1 - discount)) AS rev,
MAX(SUM(od.unitprice * od.quantity * (1 - discount))) OVER (PARTITION BY cc.custid) max_rev
FROM corp.customers cc
INNER JOIN corp.orders co ON (cc.custid = co.custid)
LEFT OUTER JOIN corp.order_details od ON (co.orderid = od.orderid)
LEFT OUTER JOIN corp.products cp ON (od.productid = cp.productid)
LEFT OUTER JOIN corp.product_lines PL ON (cp.pl_id = pl.pl_id)
GROUP BY cc.custid,
cc.companyname,
cc.contactname,
pl.pl_id)
WHERE rev = max_rev;
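One small extra from the assignment wording: it asks for a heading of Best Sales, so you would presumably alias the formatted column with a quoted identifier, e.g. to_char(rev, '$9,999,999.99') AS "Best Sales".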

First value in DATE minus 30 days SQL

I have a bunch of data from which I'm showing the ID, the max date, and its corresponding values (user id, type, ...). I then need to take the MAX date for each ID, subtract 30 days, and show the first date within that period along with its corresponding values.
Example:
ID Date Name
1 01.05.2018 AAA
1 21.04.2018 CCC
1 05.04.2018 BBB
1 28.03.2018 AAA
expected:
ID max_date max_name previous_date previous_name
1 01.05.2018 AAA 05.04.2018 BBB
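(That is, the max date for ID 1 is 01.05.2018; 30 days earlier is 01.04.2018, and the earliest row within that window is 05.04.2018/BBB, while 28.03.2018 falls outside it.)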
I have a working solution using subselects, but because the WHERE part is quite large, the refresh takes ages.
The subselect looks like this:
(SELECT MIN(N.name)
FROM t1 N
WHERE N.ID = T.ID
AND (N.date < MAX(T.date) AND N.date >= (MAX(T.date)-30))
AND (...)) AS PreviousName
How'd you write the select?
I'm using T-SQL.
Thanks
I can do this with 2 CTEs to build up the dates and names.
SQL Fiddle
MS SQL Server 2017 Schema Setup:
CREATE TABLE t1 (ID int, theDate date, theName varchar(10)) ;
INSERT INTO t1 (ID, theDate, theName)
VALUES
( 1,'2018-05-01','AAA' )
, ( 1,'2018-04-21','CCC' )
, ( 1,'2018-04-05','BBB' )
, ( 1,'2018-03-27','AAA' )
, ( 2,'2018-05-02','AAA' )
, ( 2,'2018-05-21','CCC' )
, ( 2,'2018-03-03','BBB' )
, ( 2,'2018-01-20','AAA' )
;
Main Query:
;WITH cte1 AS (
SELECT t1.ID, t1.theDate, t1.theName
, DATEADD(day,-30,t1.theDate) AS dMinus30
, ROW_NUMBER() OVER (PARTITION BY t1.ID ORDER BY t1.theDate DESC) AS rn
FROM t1
)
, cte2 AS (
SELECT c2.ID, c2.theDate, c2.theName
, ROW_NUMBER() OVER (PARTITION BY c2.ID ORDER BY c2.theDate) AS rn
, COUNT(*) OVER (PARTITION BY c2.ID) AS theCount
FROM cte1
INNER JOIN cte1 c2 ON cte1.ID = c2.ID
AND c2.theDate >= cte1.dMinus30
WHERE cte1.rn = 1
GROUP BY c2.ID, c2.theDate, c2.theName
)
SELECT cte1.ID, cte1.theDate AS max_date, cte1.theName AS max_name
, cte2.theDate AS previous_date, cte2.theName AS previous_name
, cte2.theCount
FROM cte1
INNER JOIN cte2 ON cte1.ID = cte2.ID
AND cte2.rn=1
WHERE cte1.rn = 1
Results:
| ID | max_date | max_name | previous_date | previous_name |
|----|------------|----------|---------------|---------------|
| 1 | 2018-05-01 | AAA | 2018-04-05 | BBB |
| 2 | 2018-05-21 | CCC | 2018-05-02 | AAA |
cte1 builds the list of max_date and max_name per ID, using a ROW_NUMBER() window function ordered by date descending to pick the most recent row. cte2 joins back to that list to get all dates within 30 days of cte1's max date, then does essentially the same thing to find the earliest of those. The outer query joins the two results together to get the columns needed, selecting only the most recent row from cte1 and the earliest row from cte2.
I'm not sure how well it will scale with your data, but the CTEs should optimize pretty well.
EDIT: For the additional requirement, I just added in another COUNT() window function to cte2.
I would do:
select id,
max(case when seqnum = 1 then date end) as max_date,
max(case when seqnum = 1 then name end) as max_name,
max(case when seqnum = 2 then date end) as prev_date,
max(case when seqnum = 2 then name end) as prev_name
from (select e.*, row_number() over (partition by id order by date desc) as seqnum
from example e
) e
group by id;

Select min of one column, max of another column and fields that go with max

I am trying to aggregate a dataset I'll call cust_info. It looks like this:
ID Sec_ID Group_id Cust_ID Gender EFF_DATE END_DATE
--------------------------------------------------------------------
11 H12 222 12 F 1/1/2014 12/31/2014
11 H11 222 31 F 1/1/2015 12/31/2015
11 H11 222 12 F 1/1/2016 4/30/2016
11 H11 222 44 F 5/1/2016 4/30/2017
11 H11 333 11 F 5/1/2017 12/31/9999
22 H23 222 22 M 12/1/2015 11/30/2016
22 H21 222 11 M 1/1/2017 6/30/2017
22 H21 222 33 M 7/1/2017 11/30/2017
I want to get the minimum EFF_DATE and the maximum END_DATE for each ID, sec_id. I also want the group_id and cust_id from the record with the maximum END_DATE.
So I end up with:
11 H11 333 11 F 1/1/2014 12/31/9999
22 H21 222 33 M 12/1/2015 11/30/2017
Currently my code pulls min(eff_date) and Max(end_date) with a group by ID, Sec_id, Grp_id, Gender. But if there are more than two records for a group this doesn't work. Also, this is an inner query that joins to another file.
Here's the code I'm using now:
select a.id, b.sec_id, b.group_id, b.cust_id, b.gender,
min(b.min_eff_date) as min_eff_date,
max(b.max_end_date) as max_end_date
from first_dataset a
left join (
select b.id, b.sec_id, b.group_id, b.gender, b.cust_id,
min(b.eff_date) as min_eff_date,
max(b.end_date) as max_end_date
from cust_info b
group by b.id, b.sec_id, b.group_id, b.cust_id, b.gender
) b on a.id=b.id and
a.sec_id = b.sec_id
And then I run another query on the results of the above with a min(min_eff_date) and a max(max_end_date). But I still get duplicates.
I want to see if I can do this in one query. I've tried a bunch of combinations of ROW_NUMBER. I've also tried using the KEEP(DENSE_RANK LAST ORDER BY MAX_END_DATE).
Can I do this in one query?
The data and code I've provided are all test examples, the real data involves ~ 3 million rows.
This does what your description says:
WITH cte AS (
SELECT row_number() OVER (PARTITION BY id, sec_id ORDER BY end_date DESC) AS rn
, ID, Sec_ID, Group_id, Cust_ID, Gender
, min(eff_date) OVER (PARTITION BY id, sec_id) AS EFF_DATE -- exception
, END_DATE
FROM cust_info
)
SELECT ID, Sec_ID, Group_id, Cust_ID, Gender, EFF_DATE, END_DATE
FROM cte
WHERE rn = 1;
Key element is the analytic function ROW_NUMBER() in the CTE.
Neither your displayed result nor your query currently fits the description.
SQL Fiddle.
Related:
Select first row in each GROUP BY group?
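For comparison, since the question mentions KEEP (DENSE_RANK LAST ...), here's a rough sketch of how that could look as plain aggregation with no subquery at all (untested, assuming Oracle and the columns shown above):
SELECT id,
       sec_id,
       MAX(group_id) KEEP (DENSE_RANK LAST ORDER BY end_date) AS group_id,
       MAX(cust_id)  KEEP (DENSE_RANK LAST ORDER BY end_date) AS cust_id,
       MAX(gender)   KEEP (DENSE_RANK LAST ORDER BY end_date) AS gender,  -- taken from the row with the latest end_date
       MIN(eff_date) AS min_eff_date,
       MAX(end_date) AS max_end_date
FROM cust_info
GROUP BY id, sec_id;
Ties on end_date are resolved by the outer MAX(), so this always returns exactly one row per (id, sec_id).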
I think the following query will do the job:
SELECT DISTINCT a.id,
b.sec_id,
FIRST_VALUE(b.group_id) OVER (PARTITION BY a.id, b.sec_id ORDER BY b.end_date DESC) group_id,
FIRST_VALUE(b.cust_id) OVER (PARTITION BY a.id, b.sec_id ORDER BY b.end_date DESC) cust_id,
b.gender,
min(b.eff_date) OVER (PARTITION BY a.id, b.sec_id) as min_eff_date,
max(b.end_date) OVER (PARTITION BY a.id, b.sec_id) as max_end_date
FROM first_dataset a,
cust_info b
WHERE a.id = b.id (+)
AND a.sec_id = b.sec_id (+)

SQL - Finding Customer's largest Location by Order $

I have a table with customer IDs, location IDs, and their order values. I need to select, for each customer, the location ID with the largest total spend.
Customer | Location | Order $
1 | 1A | 100
1 | 1A | 20
1 | 1B | 100
2 | 2A | 50
2 | 2B | 20
2 | 2B | 50
So I would get
Customer | Location | Order $
1 | 1A | 120
2 | 2B | 70
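(Customer 1's total at location 1A is 100 + 20 = 120, which beats 1B's 100; customer 2's total at 2B is 20 + 50 = 70, which beats 2A's 50.)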
I tried something like this:
SELECT
a.CUST
,a.LOC
,c.BOOKINGS
FROM (SELECT DISTINCT TOP 1 b.CUST, b.LOC, sum(b.ORDER_VAL) as BOOKINGS
FROM ORDER_TABLE b
GROUP BY b.CUST, b.LOC
ORDER BY BOOKINGS DESC) as c
INNER JOIN ORDER_TABLE a
ON a.CUST = c.CUST
But that just returns the top order.
Just use user variables to emulate ROW_NUMBER():
DEMO
SELECT *
FROM ( SELECT `Customer`, `Location`, SUM(`Order`) as `Order`,
#rn := IF(#customer = `Customer`,
#rn + 1,
IF(#customer := `Customer`, 1, 1)
) as rn
FROM Table1
CROSS JOIN (SELECT #rn := 0, #customer := '') as par
GROUP BY `Customer`, `Location`
ORDER BY `Customer`, SUM(`Order`) DESC
) t
WHERE t.rn = 1
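If you happen to be on MySQL 8.0 or later, window functions are available and the variable trick isn't needed; here's a sketch of the same idea, assuming the Table1/`Order` names used above:
SELECT Customer, Location, OrderTotal
FROM (
    SELECT `Customer`, `Location`, SUM(`Order`) AS OrderTotal,
           -- number each customer's locations from highest total to lowest
           ROW_NUMBER() OVER (PARTITION BY `Customer` ORDER BY SUM(`Order`) DESC) AS rn
    FROM Table1
    GROUP BY `Customer`, `Location`
) t
WHERE t.rn = 1;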
First you have to sum the values for each location:
select Customer, Location, Sum(Order) as tot_order
from order_table
group by Customer, Location
Then you can get the maximum order with MAX, and the top location by combining group_concat (which returns all locations ordered by total descending) with substring_index (to keep only the first one):
select
Customer,
substring_index(
group_concat(Location order by tot_order desc),
',', 1
) as location,
Max(tot_order) as max_order
from (
select Customer, Location, Sum(Order) as tot_order
from order_table
group by Customer, Location
) s
group by Customer
(if there's a tie, two locations with the same top order, this query will return just one)
This looks like an "order by an aggregate" problem. Here is my stab at it:
SELECT
c.customer,
c.location,
SUM(`order`) as `order_total`,
(
SELECT
SUM(`order`) as `order_total`
FROM customer cm
WHERE cm.customer = c.customer
GROUP BY location
ORDER BY `order_total` DESC LIMIT 1
) as max_order_amount
FROM customer c
GROUP BY location
HAVING max_order_amount = order_total
Here is the SQL fiddle. http://sqlfiddle.com/#!9/2ac0d1/1
This is how I'd handle it (maybe not the best method?) - I wrote it using a CTE first, only to see that MySQL doesn't support CTEs, then switched to writing the same subquery twice:
SELECT B.Customer, C.Location, B.MaxOrderTotal
FROM
(
SELECT A.Customer, MAX(A.OrderTotal) AS MaxOrderTotal
FROM
(
SELECT Customer, Location, SUM(`Order`) AS OrderTotal
FROM Table1
GROUP BY Customer, Location
) AS A
GROUP BY A.Customer
) AS B INNER JOIN
(
SELECT Customer, Location, SUM(`Order`) AS OrderTotal
FROM Table1
GROUP BY Customer, Location
) AS C ON B.Customer = C.Customer AND B.MaxOrderTotal = C.OrderTotal;
Edit: used the table structure provided
This solution will provide multiple rows in the event of a tie.
SQL fiddle for this solution
How about:
select a.*
from (
select customer, location, SUM(val) as s
from orders
group by customer, location
) as a
left join
(
select customer, MAX(b.tot) as t
from (
select customer, location, SUM(val) as tot
from orders
group by customer, location
) as b
group by customer
) as c
on a.customer = c.customer where a.s = c.t;
with
Q_1 as
(
select customer,location, sum(order_$) as order_sum
from cust_order
group by customer,location
order by customer, order_sum desc
),
Q_2 as
(
select customer,max(order_sum) as order_max
from Q_1
group by customer
),
Q_3 as
(
select Q_1.customer,Q_1.location,Q_1.order_sum
from Q_1 inner join Q_2 on Q_1.customer = Q_2.customer and Q_1.order_sum = Q_2.order_max
)
select * from Q_3
Q_1 computes the per-customer, per-location totals; Q_2 takes the maximum total per customer from Q_1; and Q_3 selects the customer, location, and total from Q_1 that match Q_2.

SQL - Jaccard similarity

My table looks as follows:
author | group
daniel | group1,group2,group3,group4,group5,group8,group10
adam | group2,group5,group11,group12
harry | group1,group10,group15,group13,group15,group18
...
...
I want my output to look like:
author1 | author2 | intersection | union
daniel | adam | 2 | 9
daniel | harry| 2 | 11
adam | harry| 0 | 10
Thank you!
Try the query below (BigQuery legacy SQL):
SELECT
a.author AS author1,
b.author AS author2,
SUM(a.item=b.item) AS intersection,
EXACT_COUNT_DISTINCT(a.item) + EXACT_COUNT_DISTINCT(b.item) - intersection AS [union]
FROM FLATTEN((
SELECT author, SPLIT([group]) AS item FROM YourTable
), item) AS a
CROSS JOIN FLATTEN((
SELECT author, SPLIT([group]) AS item FROM YourTable
), item) AS b
WHERE a.author < b.author
GROUP BY 1,2
Added solution for BigQuery Standard SQL
WITH YourTable AS (
SELECT 'daniel' AS author, 'group1,group2,group3,group4,group5,group8,group10' AS grp UNION ALL
SELECT 'adam' AS author, 'group2,group5,group11,group12' AS grp UNION ALL
SELECT 'harry' AS author, 'group1,group10,group13,group15,group18' AS grp
),
tempTable AS (
SELECT author, SPLIT(grp) AS grp
FROM YourTable
)
SELECT
a.author AS author1,
b.author AS author2,
(SELECT COUNT(1) FROM a.grp) AS count1,
(SELECT COUNT(1) FROM b.grp) AS count2,
(SELECT COUNT(1) FROM UNNEST(a.grp) AS agrp JOIN UNNEST(b.grp) AS bgrp ON agrp = bgrp) AS intersection_count,
(SELECT COUNT(1) FROM (SELECT * FROM UNNEST(a.grp) UNION DISTINCT SELECT * FROM UNNEST(b.grp))) AS union_count
FROM tempTable a
JOIN tempTable b
ON a.author < b.author
What I like about this one: the code is much simpler and friendlier, and no CROSS JOIN or extra GROUP BY is needed.
If you try it, make sure to uncheck the Use Legacy SQL checkbox under Show Options.
I propose this option that scales better:
WITH YourTable AS (
SELECT 'daniel' AS author, 'group1,group2,group3,group4,group5,group8,group10' AS grp UNION ALL
SELECT 'adam' AS author, 'group2,group5,group11,group12' AS grp UNION ALL
SELECT 'harry' AS author, 'group1,group10,group13,group15,group18' AS grp
),
tempTable AS (
SELECT author, grp
FROM YourTable, UNNEST(SPLIT(grp)) as grp
),
intersection AS (
SELECT a.author AS author1, b.author AS author2, COUNT(1) as intersection
FROM tempTable a
JOIN tempTable b
USING (grp)
WHERE a.author > b.author
GROUP BY a.author, b.author
),
count_distinct_groups AS (
SELECT author, COUNT(DISTINCT grp) as count_distinct_groups
FROM tempTable
GROUP BY author
),
join_it AS (
SELECT
intersection.*, cg1.count_distinct_groups AS count_distinct_groups1, cg2.count_distinct_groups AS count_distinct_groups2
FROM
intersection
JOIN
count_distinct_groups cg1
ON
intersection.author1 = cg1.author
JOIN
count_distinct_groups cg2
ON
intersection.author2 = cg2.author
)
SELECT
*,
count_distinct_groups1 + count_distinct_groups2 - intersection AS unionn,
intersection / (count_distinct_groups1 + count_distinct_groups2 - intersection) AS jaccard
FROM
join_it
A full cross join on big data (tens of thousands x millions of rows) fails from too much shuffling, and the second proposal takes hours to execute, while this one takes minutes.
The consequence of this approach, though, is that pairs with no intersection will not appear at all, so it is up to the process that consumes the result to handle them (e.g., with IFNULL).
Last detail: the union for daniel and harry is 10 rather than 11, since group15 is repeated in the initial example.
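As a quick sanity check on the jaccard column: for daniel and adam the intersection is {group2, group5}, i.e. 2, and the union is 7 + 4 - 2 = 9, so the jaccard similarity is 2/9 ≈ 0.22.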
Inspired by Mikhail Berlyant's second answer, here is essentially the same method reformatted for Presto (as another example for a different flavor of SQL). Again all credit to Mikhail for this one.
WITH
YourTable AS (
SELECT
'daniel' AS author,
'group1,group2,group3,group4,group5,group8,group10' AS grp
UNION ALL
SELECT
'adam' AS author,
'group2,group5,group11,group12' AS grp
UNION ALL
SELECT
'harry' AS author,
'group1,group10,group13,group15,group18' AS grp
),
tempTable AS (
SELECT
author,
SPLIT(grp, ',') AS grp
FROM
YourTable
)
SELECT
a.author AS author1,
b.author AS author2,
CARDINALITY(a.grp) AS count1,
CARDINALITY(b.grp) AS count2,
CARDINALITY(ARRAY_INTERSECT(a.grp, b.grp)) AS intersection_count,
CARDINALITY(ARRAY_UNION(a.grp, b.grp)) AS union_count
FROM tempTable a
JOIN tempTable b ON a.author < b.author
;
Note that this will give slightly different counts for harry, and for the union_count, because it only counts unique entries; e.g., harry has two group15 values, but only one is counted:
author1 | author2 | count1 | count2 | intersection_count | union_count
---------+---------+--------+--------+--------------------+-------------
daniel | harry | 7 | 5 | 2 | 10
adam | harry | 4 | 5 | 0 | 9
adam | daniel | 4 | 7 | 2 | 9

left join without duplicate values using MIN()

I have a table_1:
id custno
1 1
2 2
3 3
and a table_2:
id custno qty descr
1 1 10 a
2 1 7 b
3 2 4 c
4 3 7 d
5 1 5 e
6 1 5 f
When I run this query to show the minimum order quantities from every customer:
SELECT DISTINCT table_1.custno,table_2.qty,table_2.descr
FROM table_1
LEFT OUTER JOIN table_2
ON table_1.custno = table_2.custno AND qty = (SELECT MIN(qty) FROM table_2
WHERE table_2.custno = table_1.custno )
Then I get this result:
custno qty descr
1 5 e
1 5 f
2 4 c
3 7 d
Customer 1 appears twice, each time with the same minimum qty (and a different description), but I only want to see customer 1 appear once. I don't care whether that is the record with 'e' as the description or 'f' as the description.
First of all... I'm not sure why you need to include table_1 in the queries to begin with:
select custno, min(qty) as min_qty
from table_2
group by custno;
But just in case there is other information that you need that wasn't included in the question:
select table_1.custno, ifnull(min(qty),0) as min_qty
from table_1
left outer join table_2
on table_1.custno = table_2.custno
group by table_1.custno;
"Generic" SQL way:
SELECT table_1.custno,table_2.qty,table_2.descr
FROM table_1, table_2
WHERE table_2.id = (SELECT TOP 1 id
FROM table_2
WHERE custno = table_1.custno
ORDER BY qty )
SQL 2008 way (probably faster):
SELECT custno, qty, descr
FROM
(SELECT
custno,
qty,
descr,
ROW_NUMBER() OVER (PARTITION BY custno ORDER BY qty) RowNum
FROM table_2
) A
WHERE RowNum = 1
If you use SQL Server, you could use ROW_NUMBER and a CTE:
WITH CTE AS
(
SELECT table_1.custno,table_2.qty,table_2.descr,
RN = ROW_NUMBER() OVER ( PARTITION BY table_1.custno
Order By table_2.qty ASC)
FROM table_1
LEFT OUTER JOIN table_2
ON table_1.custno = table_2.custno
)
SELECT custno, qty,descr
FROM CTE
WHERE RN = 1
Demo link