I have two tables:
SHOPPING
date       | id_customer | id_shop | id_fruit
-----------|-------------|---------|---------
28.03.2018 | 7423        | 123     | 1
13.02.2019 | 8408        | 354     | 1
28.03.2019 | 7767        | 123     | 9
13.02.2020 | 8543        | 472     | 7
28.03.2020 | 8640        | 346     | 9
13.02.2021 | 7375        | 323     | 9
28.03.2021 | 7474        | 323     | 8
13.02.2022 | 7476        | 499     | 1
28.03.2022 | 7299        | 123     | 4
13.02.2023 | 8879        | 281     | 2
28.03.2023 | 8353        | 452     | 1
13.02.2024 | 8608        | 499     | 6
28.03.2024 | 8867        | 318     | 1
13.02.2025 | 7997        | 499     | 6
28.03.2025 | 7715        | 499     | 4
13.02.2026 | 7673        | 441     | 7
FRUITS
id_fruit | name
---------|-----------
1        | apple
2        | pear
3        | grape
4        | banana
5        | plum
6        | melon
7        | watermelon
8        | orange
9        | pineapple
I would like to find the fruits that have never been bought in a specific id_shop.
I tried this:
SELECT
    s.id_shop,
    s.id_fruit,
    f.name
FROM shopping s
LEFT JOIN fruit f ON f.id_fruit = s.id_fruit
WHERE NOT EXISTS (
    SELECT *
    FROM fruit f1
    WHERE f1.id_fruit = s.id_fruit
)
but it does not work...
Yes, you need an OUTER JOIN, but it should be a RIGHT JOIN against the fruit table, keeping only the rows where the shopping side comes back NULL after the join. Based on your current query:
SELECT f.*
FROM shopping s
RIGHT JOIN fruit f
ON f.id_fruit = s.id_fruit
WHERE s.id_fruit IS NULL
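The query above finds fruits that were never bought in any shop. Since you asked about a specific id_shop, the shop filter has to go into the ON clause rather than the WHERE clause, so that fruits with no purchases in that shop still survive the outer join. A sketch, using 123 as an example shop id:

SELECT f.*
FROM shopping s
RIGHT JOIN fruit f
        ON f.id_fruit = s.id_fruit
       AND s.id_shop  = 123   -- example shop id; keep this in the ON clause, not the WHERE clause
WHERE s.id_fruit IS NULL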
Please advise. I have a table:
id|score_max|score_min| segment
--|---------|---------|--------
1 |264 | |girl
2 |263 | 250 |girl+
3 |249 | 240 |girl
4 | | 239 |girl
I need to obtain the segment value depending on the value of a score, but score_max or score_min can be NULL.
For example, 260 is a value that comes from another table:
select segment
from mytable
where score_max>260 and score_min<260
Output:
2 |263 | 250 |girl+
but if the value is 200, the SQL does not return the correct row.
How do I write the query correctly?
For this sample data, which makes more sense:
id|score_max|score_min| segment
--|---------|---------|--------
1 | | 264 |girl
2 |263 | 250 |girl+
3 |249 | 240 |girl
4 |239 | |girl
you can get the result that you want like this:
select *
from tablename
where
(? >= score_min or score_min is null)
and
(? <= score_max or score_max is null)
Replace ? with the value that you search for.
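For example, with the reshaped sample data above and a search value of 200 substituted for the placeholders, the open-ended bottom row is the one that matches (a quick illustration, assuming the table is called mytable as in the question):

select *
from mytable
where (200 >= score_min or score_min is null)
  and (200 <= score_max or score_max is null)
-- returns:  4 | 239 |     | girl

With 260 instead, the same query returns the girl+ row (id 2).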
For school I need to build a function for an auction website. For this I need to join a couple of tables in a VIEW. This worked just fine until I needed to add a filter for price range. It seems easy enough, but the query result needs to allow a NULL when no bid has been placed.
The Statement for the View:
SELECT I.itemID, I.title, I.startPrice, B.highestBid, Cfi.category, I.endDate
FROM dbo.Items AS I
INNER JOIN dbo.category_for_item AS Cfi ON I.itemID = Cfi.itemID
LEFT OUTER JOIN dbo.Bid AS B ON I.itemID = B.itemID
This would get the following Table:
#  itemID  title  startPrice  highestBid  category  endDate
1 1234 Alfa 25 26 PC 2018-09-22
2 1234 Alfa 25 NULL PC 2018-09-22
3 5678 Bravo 9 20 Console 2018-07-03
4 5678 Bravo 9 15 Console 2018-07-03
5 5678 Bravo 9 NULL Console 2018-07-03
6 9876 Charlie 84 100 Stamps 2018-06-14
7 9876 Charlie 84 90 Stamps 2018-06-14
8 9876 Charlie 84 85 Stamps 2018-06-14
9 9876 Charlie 84 NULL Stamps 2018-06-14
10 1470 Delta 98 100 Fashion 2018-06-15
11 1470 Delta 98 99 Fashion 2018-06-15
12 1470 Delta 98 NULL Fashion 2018-06-15
13 9631 Echo 56 65 Cars 2018-06-25
14 9631 Echo 56 NULL Cars 2018-06-25
15 7856 Foxtrot 98 NULL Dolls 2018-12-26
After looking around for answers, I found a query that joins the VIEW to itself and shows only the highest bid instead of all bids:
SELECT VW.itemID, VW.title, VW.startPrice, VW.highestBid, VW.category, VW.endDate
FROM VW_SEARCH AS VW
INNER JOIN (SELECT itemID, MAX(highestBid) AS MaxBid
FROM VW_SEARCH
GROUP BY itemID) VJ
ON VW.itemID = VJ.itemID AND VW.highestBid = VJ.MaxBid
This gave the following results:
#  itemID  title  startPrice  highestBid  category  endDate
1 1234 Alfa 25 26 PC 2018-09-22
2 5678 Bravo 9 20 Console 2018-07-03
3 9876 Charlie 84 85 Stamps 2018-06-14
4 1470 Delta 98 100 Fashion 2018-06-15
5 9631 Echo 56 65 Cars 2018-06-25
As I expected, the result only showed the items with at least one bid on them. I tried adding one extra condition to the subquery and joining RIGHT OUTER to make sure I would not get duplicates of an itemID.
SELECT VW.itemID, VW.title, VW.startPrice, VW.highestBid, VW.category, VW.endDate
FROM VW_SEARCH AS VW
RIGHT OUTER JOIN (SELECT itemID, MAX(highestBid) AS MaxBid
FROM VW_SEARCH
WHERE highestBid > 0 OR highestBid IS NULL
GROUP BY itemID) VJ
ON VW.itemID = VJ.itemID AND VW.highestBid = VJ.MaxBid
This gave the following results (I did not include results 5 through 1199 because they are all the same as result 4; this happens in the actual table, not in the example table above):
#  itemID  title  startPrice  highestBid  category  endDate
1 1234 Alfa 25 26 PC 2018-09-22
2 5678 Bravo 9 20 Console 2018-07-03
3 9876 Charlie 84 85 Stamps 2018-06-14
4 NULL NULL NULL NULL NULL NULL
1200 1470 Delta 98 100 Fashion 2018-06-15
1201 9631 Echo 56 65 Cars 2018-06-25
While this technically allows a NULL in the columns, I need to get a result along the lines of:
#  itemID  title  startPrice  highestBid  category  endDate
1 1234 Alfa 25 26 PC 2018-09-22
2 5678 Bravo 9 20 Console 2018-07-03
3 9876 Charlie 84 85 Stamps 2018-06-14
4 1470 Delta 98 100 Fashion 2018-06-15
5 9631 Echo 56 65 Cars 2018-06-25
6 7856 Foxtrot 98 NULL Dolls 2018-12-26
How do I get the desired result, or is it just impossible?
Also if the query could be written better, please say so.
Thanks in advance.
Solve the problem using a left join:
SELECT VW.itemID, VW.title, VW.startPrice, VW.highestBid, VW.category, VW.endDate
FROM VW_SEARCH VW LEFT JOIN
(SELECT itemID, MAX(highestBid) AS MaxBid
FROM VW_SEARCH
GROUP BY itemID
) VJ
ON VW.itemID = VJ.itemID AND VW.highestBid = VJ.MaxBid;
Or, use the ANSI-standard ROW_NUMBER() function:
select vw.*
from (select vw.*,
row_number() over (partition by itemID
order by highestBid desc nulls last
) as seqnum
from vw_search vw
) vw
where seqnum = 1;
This guarantees one row per item.
Note: Not all databases support NULLS LAST. This may not even be necessary, but you can also implement it using a case expression.
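The dbo. prefixes in the question suggest SQL Server, where NULLS LAST is not available but ORDER BY highestBid DESC already sorts NULLs last, so it may indeed be unnecessary there. If you want the ordering to be explicit and portable, a sketch of the CASE-expression version:

select vw.*
from (select vw.*,
             row_number() over (partition by itemID
                                order by case when highestBid is null then 1 else 0 end,
                                         highestBid desc
                               ) as seqnum
      from vw_search vw
     ) vw
where seqnum = 1;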
Can you give the definition of the view at least? Maybe the table definition too.
I would go with just a correlated subquery, since there is an identity column:
select vw.*
from vw_search vw
where vw.id = (select vw1.id
               from vw_search vw1
               where vw1.itemID = vw.itemID and vw1.highestBid is not null
               order by vw1.highestBid desc
               limit 1
              );
However, some DBMSs, such as SQL Server, do not support the LIMIT clause; if so, you can use the TOP clause instead.
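For completeness, a sketch of the same idea written with TOP for SQL Server (assuming the view exposes an id column as above):

select vw.*
from vw_search vw
where vw.id = (select top 1 vw1.id
               from vw_search vw1
               where vw1.itemID = vw.itemID and vw1.highestBid is not null
               order by vw1.highestBid desc);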
Is it possible, in Oracle, to group data on the output of a user-defined function? I get errors when I try to, and it is best illustrated by the example below:
I am trying to interrogate results in table structure similar to below:
id | data
1000 | {abc=123, def=234, ghi=111, jkl=456, mno=567, pqr=678, stu=789, vwx=890, yza=901}
1000 | {abc=123, def=234, ghi=222, jkl=456, mno=567, pqr=678, stu=789, vwx=890, yza=901}
1000 | {abc=123, def=434, ghi=333, jkl=456, mno=567, pqr=678, stu=789, vwx=890, yza=901}
1000 | {abc=123, def=434, ghi=444, jkl=456, mno=567, pqr=678, stu=789, vwx=890, yza=901}
1000 | {abc=123, def=634, ghi=555, jkl=456, mno=567, pqr=678, stu=789, vwx=890, yza=901}
1000 | {abc=923, def=634, ghi=666, jkl=456, mno=567, pqr=678, stu=789, vwx=890, yza=901}
1000 | {abc=923, def=434, ghi=777, jkl=456, mno=567, pqr=678, stu=789, vwx=890, yza=901}
1000 | {abc=923, def=434, ghi=888, jkl=456, mno=567, pqr=678, stu=789, vwx=890, yza=901}
1000 | {abc=923, def=234, ghi=999, jkl=456, mno=567, pqr=678, stu=789, vwx=890, yza=901}
1000 | {abc=923, def=234, ghi=000, jkl=456, mno=567, pqr=678, stu=789, vwx=890, yza=901}
There are other columns, just not shown. The id column can have different values, but in this example it does not. In the data column, only the fields abc, def, and ghi differ; all the others are the same. Again, this is only illustrative for this data example.
I have written a function to extract the value assigned to fields in the data column, and it is used in the following query:
select id
,extract_data(data,abc) as abc
,extract_data(data,def) as def
from table
giving results:
id | abc | def
1000 | 123 | 234
1000 | 123 | 234
1000 | 123 | 434
1000 | 123 | 434
1000 | 123 | 634
1000 | 923 | 634
1000 | 923 | 434
1000 | 923 | 434
1000 | 923 | 234
1000 | 923 | 234
For reporting purposes, I would like to be able to display the number of each type of record. There are 6 types in the example above, and ideally the output would be:
id | abc | def | count
1000 | 123 | 234 | 2
1000 | 123 | 434 | 2
1000 | 123 | 634 | 1
1000 | 923 | 634 | 1
1000 | 923 | 434 | 2
1000 | 923 | 234 | 2
I expected to achieve this by writing SQL like so (and I'm convinced I have done so in the past):
select id
,extract_data(data,abc) as abc
,extract_data(data,def) as def
,count(1)
from table
group by id
,abc
,def
This, however, will not work. Oracle gives me the following error:
ORA-00904: "ABC": invalid identifier
00904. 00000 - "%s: invalid identifier"
From my initial research on "the google", I have seen that I should perhaps be grouping on the column I am passing into my user-defined function. This is because SQL requires all columns that are not part of an aggregate function to appear in the GROUP BY clause.
This will work for some records; however, in my data example the field ghi in the data column is different for every record, which makes the data column unique and ruins the GROUP BY, as a count of 1 is given for each record.
I've used Sybase and DB2 in the past, and (setting myself up for a fall here...) I'm pretty sure that in both I was able to group by the output of a user-defined function.
I thought there might be an issue with the naming of the columns and how they can be referenced by the GROUP BY, but referencing by column number hasn't worked either.
I've tried various combinations of what I have, and can't get it to work, so I'd appreciate any insight you guys out there could give.
If you need any more information I'll edit as required or clarify in the comments.
Thanks,
GC.
You should be able to group by the function calls themselves, not by the aliases:
select id
,extract_data(data,abc) as abc
,extract_data(data,def) as def
,count(*)
from table
group by id
,extract_data(data,abc)
,extract_data(data,def)
Note that this does not generally mean the function is executed multiple times for each row. You can see that yourself with a simple function that increments a package counter every time it is called:
SQL> ed
Wrote file afiedt.buf
1 create or replace package pkg_counter
2 as
3 g_cnt integer := 0;
4* end;
SQL> /
Package created.
SQL> create or replace function f1( p_arg in number )
2 return number
3 is
4 begin
5 pkg_counter.g_cnt := pkg_counter.g_cnt + 1;
6 return mod( p_arg, 2 );
7 end;
8 /
Function created.
There are 16 rows in the EMP table
SQL> select count(*) from emp;
COUNT(*)
----------
16
so when we execute a query that involves grouping by the function call, we hope to see the function executed only 16 times. And that is, in fact, what we see.
SQL> select deptno,
2 f1( empno ),
3 count(*)
4 from emp
5 group by deptno,
6 f1( empno );
DEPTNO F1(EMPNO) COUNT(*)
---------- ---------- ----------
1 1
30 0 4
20 1 1
10 0 2
30 1 2
20 0 4
10 1 1
0 1
8 rows selected.
SQL> begin
2 dbms_output.put_line( pkg_counter.g_cnt );
3 end;
4 /
16
PL/SQL procedure successfully completed.
Try this:
select id, abc, def, count(1)
from (
    select id,
           extract_data(data,abc) as abc,
           extract_data(data,def) as def
    from table
)
group by id, abc, def
Have you tried:
SELECT
id,
extract_data(data, abc) as abc,
extract_data(data, def) as def,
COUNT(1)
FROM
table
GROUP BY
id,
extract_data(data, abc),
extract_data(data, def)
I have a table "AuctionResults" like below
Auction Action Shares ProfitperShare
-------------------------------------------
Round1 BUY 6 200
Round2 BUY 5 100
Round2 SELL -2 50
Round3 SELL -5 80
Now I need to aggregate the results for every auction round with BUYs, after netting out SELLs in subsequent rounds on a "First Come First Net" basis.
So in Round1 I bought 6 shares, then sold 2 of them in Round2 and the remaining 4 in Round3, for a total net profit of 6*200 - 2*50 - 4*80 = 780,
and in Round2 I bought 5 shares and sold 1 in Round3 (because the earlier 4 belonged to Round1), for a net profit of 5*100 - 1*80 = 420.
...so the resulting output should look like:
Auction NetProfit
------------------
Round1 780
Round2 420
Can we do this using just Oracle SQL (10g), and not PL/SQL?
Thanks in advance.
I know this is an old question and won't be of use to the original poster, but I wanted to take a stab at it because it was an interesting problem. I didn't test it out enough, so I would expect it still needs to be corrected and tuned, but I believe the approach is legitimate. I would not recommend using a query like this in production because it would be difficult to maintain or understand (and I don't believe it is really scalable); you would be much better off creating some alternate data structures. Having said that, this is what I ran in PostgreSQL 9.1:
WITH x AS (
SELECT round, action
,ABS(shares) AS shares
,profitpershare
,COALESCE( SUM(shares) OVER(ORDER BY round, action
ROWS BETWEEN UNBOUNDED PRECEDING
AND 1 PRECEDING)
, 0) AS previous_net_shares
,COALESCE( ABS( SUM(CASE WHEN action = 'SELL' THEN shares ELSE 0 END)
OVER(ORDER BY round, action
ROWS BETWEEN UNBOUNDED PRECEDING
AND 1 PRECEDING) ), 0 ) AS previous_sells
FROM AuctionResults
ORDER BY 1,2
)
SELECT round, shares * profitpershare - deduction AS net
FROM (
SELECT buy.round, buy.shares, buy.profitpershare
,SUM( LEAST( LEAST( sell.shares, GREATEST(buy.shares - (sell.previous_sells - buy.previous_sells), 0)
,GREATEST(sell.shares + (sell.previous_sells - buy.previous_sells) - buy.previous_net_shares, 0)
)
) * sell.profitpershare ) AS deduction
FROM x buy
,x sell
WHERE sell.round > buy.round
AND buy.action = 'BUY'
AND sell.action = 'SELL'
GROUP BY buy.round, buy.shares, buy.profitpershare
) AS y
And the result:
round | net
-------+-----
1 | 780
2 | 420
(2 rows)
To break it down into pieces, I started with this data set:
CREATE TABLE AuctionResults( round int, action varchar(4), shares int, profitpershare int);
INSERT INTO AuctionResults VALUES(1, 'BUY', 6, 200);
INSERT INTO AuctionResults VALUES(2, 'BUY', 5, 100);
INSERT INTO AuctionResults VALUES(2, 'SELL',-2, 50);
INSERT INTO AuctionResults VALUES(3, 'SELL',-5, 80);
INSERT INTO AuctionResults VALUES(4, 'SELL', -4, 150);
select * from auctionresults;
round | action | shares | profitpershare
-------+--------+--------+----------------
1 | BUY | 6 | 200
2 | BUY | 5 | 100
2 | SELL | -2 | 50
3 | SELL | -5 | 80
4 | SELL | -4 | 150
(5 rows)
The query in the "WITH" clause adds some running totals to the table.
"previous_net_shares" indicates how many shares are available to sell before the current record. This also tells me how many 'SELL' shares I need to skip before I can start allocating it to this 'BUY'.
"previous_sells" is a running count of the number of "SELL" shares encountered, so the difference between two "previous_sells" indicates the number of 'SELL' shares used in that time.
round | action | shares | profitpershare | previous_net_shares | previous_sells
-------+--------+--------+----------------+---------------------+----------------
1 | BUY | 6 | 200 | 0 | 0
2 | BUY | 5 | 100 | 6 | 0
2 | SELL | 2 | 50 | 11 | 0
3 | SELL | 5 | 80 | 9 | 2
4 | SELL | 4 | 150 | 4 | 7
(5 rows)
With this table, we can do a self-join where each "BUY" record is associated with each future "SELL" record. The result would look like this:
SELECT buy.round, buy.shares, buy.profitpershare
,sell.round AS sellRound, sell.shares AS sellShares, sell.profitpershare AS sellProfitpershare
FROM x buy
,x sell
WHERE sell.round > buy.round
AND buy.action = 'BUY'
AND sell.action = 'SELL'
round | shares | profitpershare | sellround | sellshares | sellprofitpershare
-------+--------+----------------+-----------+------------+--------------------
1 | 6 | 200 | 2 | 2 | 50
1 | 6 | 200 | 3 | 5 | 80
1 | 6 | 200 | 4 | 4 | 150
2 | 5 | 100 | 3 | 5 | 80
2 | 5 | 100 | 4 | 4 | 150
(5 rows)
And then comes the crazy part that tries to calculate the number of shares available to sell in the order versus the number of shares not yet sold for a given buy. Here are some notes to help follow it. The GREATEST calls with 0 are just saying we can't allocate any shares if we would go negative.
-- allocated sells
sell.previous_sells - buy.previous_sells
-- shares yet to sell for this buy, if < 0 then 0
GREATEST(buy.shares - (sell.previous_sells - buy.previous_sells), 0)
-- number of sell shares that need to be skipped
buy.previous_net_shares
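To make the LEAST/GREATEST expression easier to follow, here is the deduction for the Round1 buy traced by hand against the running totals above:

Round1 BUY: shares = 6, previous_net_shares = 0, previous_sells = 0
  vs Round2 SELL (shares 2, previous_sells 0): LEAST( LEAST(2, GREATEST(6-0, 0)), GREATEST(2+0-0, 0) ) = 2  ->  2 * 50 = 100
  vs Round3 SELL (shares 5, previous_sells 2): LEAST( LEAST(5, GREATEST(6-2, 0)), GREATEST(5+2-0, 0) ) = 4  ->  4 * 80 = 320
  vs Round4 SELL (shares 4, previous_sells 7): LEAST( LEAST(4, GREATEST(6-7, 0)), GREATEST(4+7-0, 0) ) = 0  ->  0
  deduction = 420, so net = 6 * 200 - 420 = 780

The Round2 buy works out the same way to a deduction of 80 (1 share against the Round3 sell) and a net of 5 * 100 - 80 = 420.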
Thanks to David for his assistance