I am racking my brain over joining three tables. I have recreated a simple test case that shows the same problem, so it looks like I am making a fundamental mistake in my join query:
I have three tables:
case:
id (PK)| date_closed
155 | '2018-04-17 10:08'
156 | '2018-03-17 10:08'
pizza | '2018-02-17 10:08'
registration:
id (FK) | source | quantity
155 | market | 300
155 | sawdust| 200
bagged:
id | case_id (FK) | kg_bagged
X | 155 | 123
Y | 155 | 90
I want to join these tables to compare the totals per case in the quantity and kg_bagged columns. The case table has a 1:many relationship to each of the other two. So I wrote a join query like this:
SELECT case.id,
date_closed,
SUM(quantity),
SUM(kg_bagged),
SUM(kg_bagged)/SUM(quantity) AS reduction_factor
FROM case
JOIN bagged ON case.id = bagged.case_id
JOIN registration ON case.id = registration.id
I would have thought this was a correct query, but Postgres tells me I have to add case.id and date_closed to the GROUP BY clause. So I add this:
GROUP BY case.id, date_closed;
This code runs without errors, but it shows 1000 for the quantity of case 155 instead of the expected 500 (300 + 200). This behaviour only appears when there is more than one record; when joining only one table to the case table it works fine. Can anyone spot the mistake in the JOIN query?
I also tried using a subquery to join two of the tables first and then joining the remaining table, but it gave similar results.
When you join two rows from one table against two rows from another table, every row matches every other row, so you get a multiplied result. In your example that is 2 * 2 = 4 rows.
To make this easier to see, in your case when you execute the query
SELECT case.id, date_closed, source, quantity, kg_bagged
FROM case
JOIN registration ON registration.id = case.id
JOIN bagged ON bagged.case_id = case.id
You will get the data like this:
| id | date_closed | source | quantity | kg_bagged |
| :-: | :----------------: | :----: | :------: | :-------: |
| 155 | '2018-04-17 10:08' | market | 300 | 123 |
| 155 | '2018-04-17 10:08' | sawdust| 200 | 123 |
| 155 | '2018-04-17 10:08' | market | 300 | 90 |
| 155 | '2018-04-17 10:08' | sawdust| 200 | 90 |
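Summing over those four rows then counts each value twice: SUM(quantity) = 300 + 200 + 300 + 200 = 1000, which is exactly the doubled value you saw, and SUM(kg_bagged) = 123 + 90 + 123 + 90 = 426 instead of the expected 213.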
In cases like this, my approach is to write subqueries that compute the sums first, and then join those aggregated results together.
Such as:
WITH r AS (SELECT id, SUM(quantity) AS quantity FROM registration GROUP BY id),
     b AS (SELECT case_id, SUM(kg_bagged) AS kg_bagged FROM bagged GROUP BY case_id)
SELECT case.id,
date_closed,
quantity,
kg_bagged,
kg_bagged/quantity AS reduction_factor
FROM case
JOIN b ON case.id = b.case_id
JOIN r ON case.id = r.id
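With the sample data this returns a single row for case 155: quantity 500 (300 + 200) and kg_bagged 213 (123 + 90). One side note: if both columns are integer types, kg_bagged/quantity is integer division in Postgres and yields 0 here; casting one operand, for example
kg_bagged::numeric / quantity AS reduction_factor
gives the fractional value (213 / 500 = 0.426).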
Hopefully, this answer will help you.
Related
I'm trying to create a basic report from these 2 tables:
Table Products
|--------|----------------|----------|
| PRO_Id | PRO_CategoryId | PRO_Name |
|--------|----------------|----------|
| 1 | 98 | Banana |
| 2 | 98 | Apple |
|--------|----------------|----------|
Table Categories
|--------|----------|
| CAT_Id | CAT_Name |
|--------|----------|
| 98 | Fruits |
| 99 | Other |
|--------|----------|
What I needed is this output:
|------------|
| Categories |
|------------|
| Fruits (2) |
|------------|
I would like a report listing all the categories from Categories, but only when a product from Products is linked to them (which is the case for Fruits but not for Other).
This is where I am actually:
SELECT CAT_Name, COUNT(PRO_Name IN sum)
FROM Categories
JOIN Products
ON Products.PRO_CategoryId = Categories.CAT_Id as sum
ORDER BY CAT_Name ASC
Can anyone help me with this, please?
Thanks.
You are pretty close. You need to get rid of the garbage in the query and use a group by:
SELECT c.cat_name, COUNT(*)
FROM Categories c JOIN
Products p
ON p.PRO_CategoryId = c.CAT_Id
GROUP BY c.CAT_Name ;
Notes:
SELECT * is not appropriate for an aggregation query; select exactly the columns you want.
This puts the count in a separate column which seems to be your intention, despite the sample results.
COUNT(pro_name in sum) doesn't make sense.
as sum doesn't make sense.
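If you do want the single 'Fruits (2)' column shown in the sample output, here is a minimal sketch using string concatenation (the concatenation/cast syntax may need adjusting for your database, e.g. CONCAT() on MySQL or SQL Server):
SELECT c.CAT_Name || ' (' || CAST(COUNT(*) AS varchar(10)) || ')' AS Categories
FROM Categories c JOIN
     Products p
     ON p.PRO_CategoryId = c.CAT_Id
GROUP BY c.CAT_Name
ORDER BY c.CAT_Name ASC;
The inner join already drops categories with no matching products, so 'Other' does not appear.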
There's a general type of query I'm trying to perform, and I'm not sure how to express it in words so that I can find a discussion of best practices and examples for executing it.
Here's an example use case.
I have a customers table that has info about customers and an orders table. I want to fetch a subset of records from orders based on customer characteristics, limited by the "earliest" and "latest" dates contained as data in the customers table. It's essential to the solution that I limit my query results to within this date range, which varies by customer.
CUSTOMERS
+------------+------------+----------+---------------------+-------------------+
| CustomerID | Location | Industry | EarliestActiveOrder | LatestActiveOrder |
+------------+------------+----------+---------------------+-------------------+
| 001 | New York | Finance | 2017-11-03 | 2019-07-30 |
| 002 | California | Tech | 2018-06-18 | 2019-09-22 |
| 003 | New York | Finance | 2015-09-30 | 2019-02-26 |
| 004 | California | Finance | 2019-02-02 | 2019-08-15 |
| 005 | New York | Finance | 2017-10-19 | 2018-12-20 |
+------------+------------+----------+---------------------+-------------------+
ORDERS
+----------+------------+------------+---------+
| OrderID | CustomerID | StartDate | Details |
+----------+------------+------------+---------+
| 5430 | 003 | 2015-06-30 | ... |
| 5431 | 003 | 2016-03-31 | ... |
| 5432 | 003 | 2018-09-30 | ... |
| 5434 | 001 | 2018-11-05 | ... |
| 5435 | 001 | 2019-10-11 | ... |
A sample use case expressed in words would be: "Give me all Active Orders from Finance customers in New York".
Desired result is to return the full records from orders table for OrderID's 5431,5432,5434.
What is a generally good approach for structuring this kind of query, given an orders table with ~10^6 records?
You are looking for a join:
select o.*
from orders o
inner join customers c
on c.Customer_id = o.Customer_id
and o.StartDate between c.EarliestActiveOrder and c.LatestActiveOrder
and c.Industry = 'Finance'
and c.Location = 'New York'
For performance in this query, consider the following indexes:
orders(customer_id, StartDate)
customers(Customer_id, Industry, Location, EarliestActiveOrder, LatestActiveOrder)
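As DDL, those suggestions would look roughly like this (the index names are just illustrative, and the column names follow the sample tables):
CREATE INDEX idx_orders_customer_date
    ON orders (CustomerID, StartDate);
CREATE INDEX idx_customers_filter
    ON customers (CustomerID, Industry, Location, EarliestActiveOrder, LatestActiveOrder);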
Assuming that the result set is a small subset of the orders (say less than 1% of the orders; the 1% is just for illustration), I would phrase the query like this:
select o.*
from customers c join
orders o
on o.Customer_id = c.Customer_id and
o.StartDate between c.EarliestActiveOrder and c.LatestActiveOrder
where c.Location = 'New York' and c.industry = 'Finance';
The indexing strategy is tricky. For smallish result sets, you probably want to restrict the customers first and then find the matching orders. This approach suggests indexes on:
customers(location, industry, customer_id, EarliestActiveOrder, LatestActiveOrder)
orders(customer_id, startdate)
If you had other columns for filtering, you would need separate indexes for them. For instance, for industry-only filtering:
customers(industry, customer_id, EarliestActiveOrder, LatestActiveOrder)
This can get cumbersome.
If, on the other hand, your result set is likely to be a significant number of orders, then scanning the orders table might be more efficient. You can try to rely on the optimizer. Or just push it in the right direction by phrasing the query as:
select o.*
from orders o
where exists (select 1
from customers c
where o.Customer_id = c.Customer_id and
o.StartDate between c.EarliestActiveOrder and c.LatestActiveOrder and
c.Location = 'New York' and c.industry = 'Finance'
);
In this case, you want an index on customers(customer_id) -- but that is probably already the primary key so you are fine. This has the advantage that you don't need to worry about the exact filtering criteria. The downside is a full table scan on orders (but not additional work for a join, group by, or order by).
I have these two tables:
I need to join the payment table with the discounts table. The expected output does not seem possible to me since the discounts table doesn't have a payment date; all I can get from the payment table is the net_amount.
payment table:
id | net_amount | payment_dt | person_id
1001 | 2765.36 | 2016-05-28 | 372
1002 | 2474.76 | 2016-05-29 | 372
1003 | 22694.25 | 2016-05-29 | 384
1004 | 1911.92 | 2016-05-29 | 384
discounts table:
id | person_id | gross_amount | sc_discount | other_discount_amount | other_discount_type
1 | 372 | 3566.7 | 713.34 | 88.00 | MISC
2 | 372 | 3202.2 | 640.44 | 87.00 | PAT
3 | 384 | 3566.7 | 713.34 | 285.34 | MISC
4 | 384 | 27953.10 | 5590.62 | 2236.25 | PAT
5 | 384 | 2655.45 | 531.09 | 212.44 | MISC
*1 - payment_dt is 2016-05-28
expected output: (where payment_dt=2016-05-29)
total_gross_amount | total_sc_discount | total_misc_discount | total_pat_discount | total_net_amount
37,377.45 | 7475.49 | 497.78 | 2,323.25 | 27,080.93
As I see it, the common column in both tables is person_id, so you can try to join on it. For more information you can also read up on natural joins:
"A NATURAL JOIN is a JOIN operation that creates an implicit join clause for you based on the common columns in the two tables being joined. Common columns are columns that have the same name in both tables. A NATURAL JOIN can be an INNER join, a LEFT OUTER join, or a RIGHT OUTER join. The default is INNER join. "
Assuming that:
net_amount = gross_amount - sc_discount - other_discount_amount
you do not need to go to the payment table for total_net_amount in the expected output. You can write it like this:
SELECT SUM(gross_amount) AS total_gross_amount,
       SUM(sc_discount) AS total_sc_discount,
       SUM(gross_amount - sc_discount - other_discount_amount) AS total_net_amount,
       SUM(CASE other_discount_type WHEN 'MISC' THEN other_discount_amount WHEN 'PAT' THEN 0 END) AS total_misc_discount,
       SUM(CASE other_discount_type WHEN 'MISC' THEN 0 WHEN 'PAT' THEN other_discount_amount END) AS total_pat_discount
FROM discounts
If the above assumption does not hold: since the aggregation is complete, only one row comes out of the query. You can get all columns except total_net_amount as in the query above, get SUM(net_amount) from the payment table in a second aggregate query, and join the two single-row results on TRUE (a 1-row to 1-row join).
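A sketch of that approach, using two single-row aggregates combined with a CROSS JOIN (equivalent to joining on TRUE) and filtering payments on the payment_dt from the expected output; whether the discount rows need a matching date filter is unclear from the schema, so none is applied here:
WITH d AS (
    SELECT SUM(gross_amount) AS total_gross_amount,
           SUM(sc_discount)  AS total_sc_discount,
           SUM(CASE other_discount_type WHEN 'MISC' THEN other_discount_amount ELSE 0 END) AS total_misc_discount,
           SUM(CASE other_discount_type WHEN 'PAT'  THEN other_discount_amount ELSE 0 END) AS total_pat_discount
    FROM discounts
),
p AS (
    SELECT SUM(net_amount) AS total_net_amount
    FROM payment
    WHERE payment_dt = '2016-05-29'
)
SELECT d.total_gross_amount, d.total_sc_discount,
       d.total_misc_discount, d.total_pat_discount,
       p.total_net_amount
FROM d CROSS JOIN p;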
I have an item table from which I want to get the sum of the item quantity.
Query:
Select item_id, Sum(qty) from item_tbl group by item_id
Result:
==================
| ID | Quantity |
===================
| 1 | 10 |
| 2 | 20 |
| 3 | 5 |
| 4 | 20 |
The second table is an invoice table from which I get the item quantity that has been sold. I am joining these two tables like this:
Query:
Select item_tbl.item_id, Sum(item_tbl.qty) as [item_qty],
-isnull(Sum(invoice.qty),0) as [invoice_qty]
from item_tbl
left join invoice on item_tbl.item_id = invoice.item_id group by item_tbl.item_id
Result:
=================================
| ID | item_qty | invoice_qty |
=================================
| 1 | 10 | -5 |
| 2 | 20 | -20 |
| 3 | 10 | -25 | <------ item_qty raised from 5 to 10 ??
| 4 | 20 | -20 |
I don't know if I am joining these tables the right way. I want to get everything from the item table and whatever is available from the invoice table in order to maintain the inventory, so I used a left join. Help please.
Modification
When I added GROUP BY item_id, qty I got this:
=================================
| ID | item_qty | invoice_qty |
=================================
| 1 | 10 | -5 |
| 2 | 20 | -20 |
| 3 | 5 | -5 |
| 3 | 5 | -20 |
| 4 | 20 | -20 |
As it's a view, the ID is repeated. What should I do to avoid this?
To clear things up, here is my answer from the comments, explained:
When using a left join (A left join B), a result record is created for every B record that matches an A record, and in addition a record is created for every A record that has no matching B record, with NULL values filling in the fields from B.
I would advise reading up on Using Joins in SQL when approaching such problems.
Below are 2 possible solutions, using different assumptions.
Solution A
Without any assumptions regarding primary key:
We still have to sum up the item quantity column to determine the total quantity, so two sums need to be performed; I would advise using a subquery for readability and simplicity.
select item_tbl.item_id, Sum(item_tbl.qty) as [item_qty], -isnull(Sum(invoice_grouped.qty),0) as [invoice_qty]
from item_tbl left join
(select invoice.item_id as item_id, Sum(invoice.qty) as qty from invoice group by item_id) invoice_grouped
on (invoice_grouped.item_id = item_tbl.item_id)
group by item_tbl.item_id
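With the sample data this gives ID 3 its original item_qty of 5 together with a combined invoice_qty of -25, since each item row now joins at most one pre-aggregated invoice row instead of multiplying against every individual invoice line.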
Solution B
Assuming item_id is primary key for item_tbl:
Now we know we can rely on the fact that there is only one quantity for each item_id, so we can do without the sub query by selecting any (max) of the item quantities in the join result, resulting in a quicker execution plan.
select item_tbl.item_id, Max(item_tbl.qty) as [item_qty], -isnull(Sum(invoice.qty),0) as [invoice_qty]
from item_tbl left join invoice on (invoice.item_id = item_tbl.item_id)
group by item_tbl.item_id
If your database design follows common rules, item_tbl.item_id must be unique.
So just change your query:
Select item_tbl.item_id, item_tbl.qty as [item_qty],
-isnull(Sum(invoice.qty),0) as [invoice_qty]
from item_tbl
left join invoice on item_tbl.item_id = invoice.item_id group by item_tbl.item_id, item_tbl.qty
I have two tables, a master table and a general information table. I need to update my master table from the general table. How can I update the master table when the general info table can have slightly different values for the descriptions?
Master
+------+---------+
| Code | Desc |
+------+---------+
| 156 | Milk |
| 122 | Eggs |
| 123 | Diapers |
+------+---------+
Info
+------+---------------+--------+
| Code | Desc | Price |
+------+---------------+--------+
| 156 | Milk | $3.00 |
| 122 | Eggs | $2.00 |
| 123 | Diapers | $15.00 |
| 124 | Shopright Cola| $2.00 |
| 124 | SR Cola | $2.00 |
+------+---------------+--------+
As you can see item 124 has 2 descriptions. It does not matter which description.
My attempt returns 124 twice, once with each description. I understand my code is looking for the unique combination of Code and description in the master, which is why it returns both 124 rows, but I'm unsure how to fix it.
INSERT INTO MASTER
(
SELECT UNIQUE(Code), Desc FROM INFO A
WHERE NOT EXISTS
(SELECT Code FROM MASTER B
WHERE A.Code = B.Code )
);
I have also tried:
INSERT INTO MASTER
(
SELECT UNIQUE(PROC_CDE), Desc FROM FIR_CLAIM_DETAIL A
WHERE Code NOT IN
(SELECT Code FROM FIR_CODE_PROC_CDE_MSTR B
WHERE A.Code = B.Code )
);
UNIQUE (i.e. DISTINCT) filters duplicated entries in the selected result set across all columns, not just one key column.
When you want to extract the other attributes of a key you filtered on, you have to instruct the database to first group by that key. To choose one of the attributes of a grouped key, you can use an aggregate function such as MAX() or MIN().
INSERT INTO MASTER
(
SELECT PROC_CDE, MAX(Desc) FROM FIR_CLAIM_DETAIL A
WHERE Code NOT IN
(SELECT Code FROM FIR_CODE_PROC_CDE_MSTR B
WHERE A.Code = B.Code )
GROUP BY PROC_CDE
);
There are also analytic functions that can be used for even more complex requirements.
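For example, here is a sketch using ROW_NUMBER() to pick one arbitrary description per code, keeping the table and column names from the attempted query:
INSERT INTO MASTER
(
    SELECT PROC_CDE, Desc
    FROM (
        SELECT PROC_CDE, Desc,
               ROW_NUMBER() OVER (PARTITION BY PROC_CDE ORDER BY Desc) AS rn
        FROM FIR_CLAIM_DETAIL A
        WHERE Code NOT IN
            (SELECT Code FROM FIR_CODE_PROC_CDE_MSTR B
             WHERE A.Code = B.Code)
    ) t
    WHERE rn = 1
);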