My table CUSTOMER_TABLE has a nested table of references to ACCOUNT_TABLE. Each account in ACCOUNT_TABLE has a reference to a branch, branch_ref.
CREATE TYPE account AS object(
accid integer,
acctype varchar2(15),
balance number,
rate number,
overdraft_limit integer,
branch_ref ref branch,
opendate date
) final;
CREATE TYPE customer as object(
custid integer,
infos ref type_person,
accounts accounts_list
);
create type branch under elementary_infos(
bid integer
) final;
All tables are object tables created from these object types.
I want to select the account with the highest balance per branch. I managed to do that with this query:
select MAX(value(a).balance), value(a).branch_ref.bid
from customer_table c, table(c.accounts) a
group by value(a).branch_ref.bid
order by value(a).branch_ref.bid;
Which returns:
MAX(VALUE(A).BALANCE) VALUE(A).BRANCH_REF.BID
--------------------------------------- ---------------------------------------
176318.88 0
192678.14 1
190488.19 2
196433.93 3
182909.84 4
However, how can I also select other attributes from the max accounts displayed? I would like to display the name of the owner plus the customer's id. The id is directly an attribute of customer, but the name is stored behind a reference to person_table, so I also have to select c.custid & deref(c.infos).names.surname.
How can I select these other attributes along with my MAX() query?
Thank you
I generally use analytic functions to achieve that kind of functionality. With analytic functions, you can add aggregate columns to your query without losing the original rows. In your particular case it would be something like:
select
-- select interesting fields
accid,
acctype,
balance,
rate,
overdraft_limit,
branch_ref,
opendate,
max_balance
from (
select
-- propagate original fields to outer query
value(a).accid accid,
value(a).acctype acctype,
value(a).balance balance,
value(a).rate rate,
value(a).overdraft_limit overdraft_limit,
value(a).branch_ref branch_ref,
value(a).opendate opendate,
-- add max(balance) of our branch_ref to the row
max(value(a).balance) over (partition by value(a).branch_ref.bid) max_balance
from customer_table c, table(c.accounts) a
) data
where
-- we are only interested in rows with balance equal to the max
-- (NOTE: there might be more than one, you should fine tune the filtering!)
data.balance = data.max_balance
-- order by branch
order by data.branch_ref.bid;
I don't have any Oracle instance available right now to test this, but that is the idea. Unless there is some incompatibility between analytic functions and collection columns, you should be able to get this working with little effort.
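To also pull in the customer's id and the owner's surname that the question mentions, you would simply carry those expressions through the inner query, since the analytic version keeps every row. A rough, untested sketch, assuming the person type exposes names.surname as in the question:
select custid, surname, accid, balance, max_balance
from (
  select
    c.custid custid,
    -- owner's surname via the person reference (attribute names taken from the question)
    deref(c.infos).names.surname surname,
    value(a).accid accid,
    value(a).balance balance,
    value(a).branch_ref.bid bid,
    max(value(a).balance) over (partition by value(a).branch_ref.bid) max_balance
  from customer_table c, table(c.accounts) a
) data
where data.balance = data.max_balance
order by data.bid;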
Please go easy on me as I'm learning SQL from scratch pretty recently haha.
Here's what I have going on right now (the function works, but the update does not, since according to the error 'u.name' can't reference the table in this spot; I added what I thought would work, and hopefully you can see where I was going). Skip to the end for the question.
CREATE OR REPLACE FUNCTION getLastSetDate(selected_sub_bucket character varying)
RETURNS int AS $$
SELECT max(coalesce(
(SELECT transaction_date
FROM (
SELECT *, row_number() over(ORDER BY transaction_date DESC
) as position
FROM bucket_transactions
) RESULT
WHERE type = 'SET' and sub_bucket = selected_sub_bucket
ORDER BY transaction_date DESC limit 1),
0))
$$ LANGUAGE sql;
UPDATE buckets u
SET amount = a.total
FROM (WITH selectedRows as (select * from bucket_transactions where transaction_date > getLastSetDate(u.name))
SELECT sub_bucket, SUM(amount) as total
FROM selectedRows
WHERE transaction_date > getLastSetDate(u.name)
GROUP BY sub_bucket
) a
WHERE u.name = a.sub_bucket and u.bucket_type = 'sub';
I have two tables, one being a 'bucket_transactions' table, and the other being a 'buckets' table. The buckets table contains the name, and the sum of all transactions' amounts under that bucket past a certain date. To find this date, I'm searching for the most recent transaction under that bucket or sub_bucket with the transaction type being 'SET' (setting the value of the bucket).
TABLE: buckets
name              bucket_type   amount
Living Expenses   primary       -143.00
Food              sub           -89.00
Gas               sub           -54.00
TABLE: bucket_transactions
bucket            sub_bucket   type      transaction_date   amount
Living Expenses   Food         SET       1662593637         0.00
Living Expenses   Food         EXPENSE   1662593954         -89.00
Living Expenses   Gas          EXPENSE   1662537592         -54.00
My question is, how would I change my code so that each bucket starts at its 'SET' transaction and gets the sum of the amounts past that transaction date? Each bucket/sub_bucket would have a different SET date, if any. I'm looking to pass the current row's name into the function, even though the script is updating the amount column.
Appreciate your time!
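For what it's worth, one way to feed the current row's name into the function is to move the sum into a correlated subquery in SET, which (unlike a subquery in FROM) may reference the row being updated. A rough, untested sketch, assuming the function and column names shown above:
UPDATE buckets u
SET amount = (
    -- sum only the transactions after this bucket's last SET date
    SELECT COALESCE(SUM(t.amount), 0)
    FROM bucket_transactions t
    WHERE t.sub_bucket = u.name
      AND t.transaction_date > getLastSetDate(u.name)
)
WHERE u.bucket_type = 'sub';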
I'm trying to find the rows with the max credit in my table:
CREATE TABLE Course(
CourseNr INTEGER,
CourseTitel VARCHAR(60),
CourseTyp VARCHAR(10),
Courselenght DECIMAL,
Credit DECIMAL,
PRIMARY KEY (CourseNr)
);
and there is more than one course with the max value. I don't want to use any default functions for that; any ideas?
Presumably, you want the rows with the maximum credit. A common method is to find any rows that have no larger credit:
select c.*
from course c
where c.credit >= all (select c2.credit from course c2);
Get the rows with Credit for which there don't exist any rows with greater Credit:
SELECT
c.*
FROM Course c
WHERE
NOT EXISTS (
SELECT 1 FROM Course WHERE Credit > c.Credit
)
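For example, with hypothetical sample rows where two courses share the top credit, either query returns both of those rows:
INSERT INTO Course VALUES (1, 'Databases', 'Lecture', 4.0, 6.0);
INSERT INTO Course VALUES (2, 'Networks', 'Lecture', 4.0, 6.0);
INSERT INTO Course VALUES (3, 'Ethics', 'Seminar', 2.0, 3.0);
-- both queries above return CourseNr 1 and 2, since no row has a greater Credit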
Basically I'm trying to create a database schema based around multiple unrelated tables that will not need to reference each other AFAIK.
Each table will be a different "category" that will have the same columns in each table - name, date, two int values and then a small string value.
My issue is that each one will need to be "updated" daily, but I want to keep a record of the items for every single day.
What's the best way to go about doing this? Would it be to make the composite key the combination of the date and the name? Or use something called a "trigger"?
Sorry I'm somewhat new to database design, I can be more specific if I need to be.
Yes, you have to create a trigger for each category table.
I'm assuming name is the PK for each table? If that isn't the case, you will need to create a PK.
Let's say you have
table categoryA
name, date, int1, int2, string
table categoryB
name, date, int1, int2, string
You will create another table to store the change log.
table category_history
category_table, name, date, int1, int2, string, changeDate
You create two triggers, one for each category table, where you save which table generated the update and when it was made.
create trigger before update for categoryA
INSERT INTO category_history VALUES
('categoryA', OLD.name, OLD.date, OLD.int1, OLD.int2, OLD.string, NOW());
This is pseudocode; you need to write the trigger using your RDBMS's syntax, and check how to get the current system date (now()).
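For illustration, in MySQL syntax such a trigger might look roughly like this (a sketch only; the trigger name is made up and the syntax will differ on other systems):
CREATE TRIGGER categoryA_log_update
BEFORE UPDATE ON categoryA
FOR EACH ROW
  -- copy the old row into the history table before it is overwritten
  INSERT INTO category_history
    (category_table, name, `date`, int1, int2, string, changeDate)
  VALUES
    ('categoryA', OLD.name, OLD.`date`, OLD.int1, OLD.int2, OLD.string, NOW());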
As has already been pointed out, it is poor design to have different identical tables for each category. Better would be a Categories table with one entry for each category and then a Dailies table with the daily information.
create table Categories(
ID smallint not null auto_generated,
Name varchar( 20 ) not null,
..., -- other information about each category
constraint UQ_Category_Name unique( Name ),
constraint PK_Categories primary key( ID )
);
create table Dailies(
CatID smallint not null,
UpdDate date not null,
..., -- Daily values
constraint PK_Dailies primary key( CatID, UpdDate ),
constraint FK_Dailies_Category foreign key( CatID )
references Categories( ID )
);
This way, adding a new category involves inserting a row into the Categories table rather than creating an entirely new table.
If the database has a Date type distinct from a DateTime -- no time data -- then fine. Otherwise, the time part must be removed such as by Oracle's trunc function. This allows only one entry for each category per day.
Retrieving all the values for all the posted dates is easy:
select C.Name as Category, d.UpdDate, d.<daily values>
from Categories C
join Dailies D
on D.CatID = C.ID;
This can be made into a view, DailyHistory. To see the complete history for Category Cat1:
select *
from DailyHistory
where Name = 'Cat1';
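For instance, the DailyHistory view could be defined roughly like this (Amount stands in here for whatever daily value columns you actually have):
create view DailyHistory as
select C.Name, D.UpdDate, D.Amount
from Categories C
join Dailies D
  on D.CatID = C.ID;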
To see all the category information as it was updated on a specific date:
select *
from DailyHistory
where UpdDate = date '2014-05-06';
Most queries will probably be interested in the current values -- that is, the last update made (assuming some categories are not updated every day). This is a little more complicated but still very fast if you are worried about performance.
select C.Name as Category, d.UpdDate as "Date", d.<daily values>
from Categories C
join Dailies D
on D.CatID = C.ID
and D.UpdDate =(
select Max( UpdDate )
from Dailies
where CatID = D.CatID );
Of course, if every category is updated every day, the query is simplified:
select C.Name as Category, d.UpdDate as "Date", d.<daily values>
from Categories C
join Dailies D
on D.CatID = C.ID
and D.UpdDate = <today's date>;
This can also be made into a view. To see today's (or the latest) updates for Category Cat1:
select *
from DailyCurrent
where Name = 'Cat1';
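The DailyCurrent view could be sketched the same way (again with Amount standing in for your daily value columns):
create view DailyCurrent as
select C.Name, D.UpdDate, D.Amount
from Categories C
join Dailies D
  on D.CatID = C.ID
 and D.UpdDate = (
     select Max( UpdDate )
     from Dailies
     where CatID = D.CatID );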
Suppose now that updates are not necessarily made every day. The history view would show all the updates that were actually made. So the query shown for all categories as they were on a particular day would actually show only those categories that were actually updated on that day. What if you wanted to show the data that was "current" as of a particular date, even if the actual update was several days before?
That can be provided with a small change to the "current" query (just the last line added):
select C.Name as Category, d.UpdDate as "Date", d.<daily values>
from Categories C
join Dailies D
on D.CatID = C.ID
and D.UpdDate =(
select Max( UpdDate )
from Dailies
where CatID = D.CatID
and UpdDate <= date '2014-05-06' );
Now this shows all categories with the data updated on that date if it exists, or otherwise the latest update made before that date.
As you can see, this is a very flexible design which allows accessing the data just about any way desired.
I have a database table that I need to process with either a view or a stored procedure or something else that gives me a result based on the live data.
The table holds records of people with data associated with each one. The thing is that people can be in the table more than once. Each record shows a time when one or more pieces of information was recorded for an individual.
The identifier field for the people is cardholder_index. I need to take a DISTINCT list of that field. There is also a date field called bio_complete_date. What I need to do is, for all the other fields in the table, take the most recent non-null (or possibly non-zero) value.
For instance, there is a bmi field. For each distinct cardholder index, I need to take the most recent (by the bio_complete_date field) non-null bmi for that cardholder_index. But there's also a body_fat field, and I need to take the most recent non-null value in that field, which might not necessarily be the same row as the most recent non-null bmi value.
For the record, the table itself does have its own unique identifier column, bio_id, if that helps.
I don't need to show when the most recent piece of information was taken. I just need to show the data itself.
I figure I need to do a distinct on the cardholder_index, and then join to it the result sets of queries for each other field. It's writing the subqueries that is giving me problems.
From your description I guess your table looks something like this:
create table people (
bio_id int identity(1,1),
cardholder_index int,
bio_complete_date date,
bmi int,
body_fat int
)
If so, one way (of many) to do the query would be to use correlated queries to pull the latest non-null value for the cardholder_index, either using subqueries like this:
select
cardholder_index,
(
select top 1 bmi
from people
where cardholder_index = p.cardholder_index and bmi is not null
order by bio_complete_date desc
) as latest_bmi,
(
select top 1 body_fat
from people
where cardholder_index = p.cardholder_index and body_fat is not null
order by bio_complete_date desc
) as latest_body_fat
from people p
group by cardholder_index
or to use the apply operator like this:
select cardholder_index, latest_bmi.bmi, latest_body_fat.body_fat
from people p
outer apply (
select top 1 bmi
from people
where cardholder_index = p.cardholder_index and bmi is not null
order by bio_complete_date desc
) as latest_bmi
outer apply (
select top 1 body_fat
from people
where cardholder_index = p.cardholder_index and body_fat is not null
order by bio_complete_date desc
) as latest_body_fat
group by cardholder_index, latest_bmi.bmi, latest_body_fat.body_fat
Hi all, I am using SQL Server.
I have one table that has a whole list of details on cars and events that have happened with those cars.
What I need is to be able to pick out the last entry for each vehicle based on their (Reg_No) registration number.
I have the following to work with
Table name = UnitHistory
Columns = indx (this is just the primary key, with increment), Transdate (this is my date/time column), and Reg_No (unique to each vehicle).
There are about 45 vehicles with registration numbers if that helps?
I have looked at different examples but they all seem to have another table to work with.
Please help me. Thanks in advance for the help
WITH cte
AS
(
SELECT *,
ROW_NUMBER() OVER
(
PARTITION BY Reg_No
ORDER BY Transdate DESC
) AS RowNumber
FROM unithistory
)
SELECT *
FROM cte
WHERE RowNumber = 1
If you only need the index and the Transdate, and they are both incremental (I am assuming that a later date corresponds to a higher index number), then the simplest query would be:
SELECT Reg_No, MAX(indx), MAX(Transdate)
FROM UnitHistory
GROUP BY Reg_No
If you want all data for a known Reg_No, you can use Dd2's answer.
If you want a list of all Reg_No's with their data, you will need a subquery.
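That subquery version could look roughly like this (untested, using only the columns named in the question):
SELECT u.*
FROM UnitHistory u
JOIN (
    -- latest Transdate per vehicle
    SELECT Reg_No, MAX(Transdate) AS MaxDate
    FROM UnitHistory
    GROUP BY Reg_No
) latest
  ON latest.Reg_No = u.Reg_No
 AND latest.MaxDate = u.Transdate;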