Derby DB last X rows average - SQL

I have the following table structure.
ITEM
-----------
ID | TITLE
 1 | A
 2 | B
 3 | C
 4 | D
 5 | E
 6 | F

TOTAL
-----------------
ID | ITEMID | VALUE
 1 |   2    |   6
 2 |   1    |   4
 3 |   3    |   3
 4 |   3    |   8
 5 |   1    |   2
 6 |   5    |   4
 7 |   4    |   5
 8 |   2    |   8
 9 |   2    |   7
10 |   1    |   3
11 |   2    |   2
12 |   3    |   6
I am using Apache Derby DB and need to perform the average calculation in SQL: for each item ID, show the average of the last 3 TOTAL records for that item.
That is, for ITEM.ID 1, I go to the TOTAL table, select the last 3 rows associated with ITEMID 1, and take their average. In Derby I can do this for a given item ID, but not for all items without hard-coding a specific ID. Here is what I've done:
SELECT ITEM.ID, AVG(VALUE) FROM ITEM, TOTAL WHERE TOTAL.ITEMID = ITEM.ID GROUP BY ITEM.ID
This SQL lists the average for every item, but it calculates over all rows of the TOTAL table. I need the last 3 records only, so I changed the SQL to this:
SELECT AVG(VALUE)
FROM (SELECT ROW_NUMBER() OVER() AS ROWNUM, TOTAL.*
      FROM TOTAL
      WHERE ITEMID = 1) AS TR
WHERE ROWNUM > (SELECT COUNT(ID) FROM TOTAL WHERE ITEMID = 1) - 3
This works if I supply an item ID (1, 2, etc.), but I cannot do it for all items at once without supplying an ID.
I tried the same thing in Oracle using PARTITION BY and it worked, but Derby does not support partitioning. There is a WINDOW clause, but I could not make use of it.
Oracle one
SELECT ITEMID, AVG(VALUE)
FROM (SELECT ITEMID, VALUE,
             COUNT(*) OVER (PARTITION BY ITEMID) QTY,
             ROW_NUMBER() OVER (PARTITION BY ITEMID ORDER BY ID) IDX
      FROM TOTAL
      ORDER BY ITEMID, ID)
WHERE IDX > QTY - 3
GROUP BY ITEMID
ORDER BY ITEMID
I need to use derby DB for its portability.
The desired output is this:
RESULT
-----------------
ITEMID | AVERAGE
   1   | (9/3)
   2   | (17/3)
   3   | (17/3)
   4   | (5/1)
   5   | (4/1)
   6   | NULL

As you have noticed, Derby's support for the SQL:2003 "OLAP Operations" is incomplete.
There was some initial work (see https://wiki.apache.org/db-derby/OLAPOperations), but that work was only partially completed.
I don't believe anyone is currently working on adding more functionality to Derby in this area.
So yes, Derby has a ROW_NUMBER function, but no, Derby does not (currently) have PARTITION BY.
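Since PARTITION BY isn't available, one portable workaround is to emulate the per-item "last 3" filter with a correlated subquery: keep only the TOTAL rows that have fewer than 3 later rows (higher ID) for the same item, then average them per item. The following is only a minimal sketch, assuming TOTAL.ID reflects insertion order; I have not run it against Derby, and the CAST is there because AVG over an INTEGER column would otherwise return an integer.
SELECT i.ID,
       -- average the rows for this item that have fewer than 3 later rows, i.e. its last 3 rows
       (SELECT AVG(CAST(t.VALUE AS DOUBLE))
          FROM TOTAL t
         WHERE t.ITEMID = i.ID
           AND (SELECT COUNT(*)
                  FROM TOTAL t2
                 WHERE t2.ITEMID = t.ITEMID
                   AND t2.ID > t.ID) < 3) AS AVERAGE
FROM ITEM i
ORDER BY i.ID
Items with no TOTAL rows (such as item 6) get a NULL average, matching the desired output. The nested COUNT makes this roughly quadratic per item, so an index on TOTAL(ITEMID, ID) would help on larger tables.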

Related

Select column's occurrence order without group by

I currently have two tables, users and coupons
users
id | first_name
 1 | Roberta
 2 | Oliver
 3 | Shayna
 4 | Fechin

coupons
id | discount | user_id
 1 |   20%    |    1
 2 |   40%    |    2
 3 |   15%    |    3
 4 |   30%    |    1
 5 |   10%    |    1
 6 |   70%    |    4
What I want to do is select rows from the coupons table until I've covered X distinct users.
So if I chose X = 2, the resulting table would be:
id | discount | user_id
 1 |   20%    |    1
 2 |   40%    |    2
 4 |   30%    |    1
 5 |   10%    |    1
I've tried using both dense_rank and row_number, but they return the count of occurrences of each user_id, not its order.
SELECT id,
discount,
user_id,
dense_rank() OVER (PARTITION BY user_id)
FROM coupons
I'm guessing I need to do it in multiple subqueries (which is fine), where the first subquery would return something like:
id | discount | user_id | order_of_occurence
 1 |   20%    |    1    |         1
 2 |   40%    |    2    |         2
 3 |   15%    |    3    |         3
 4 |   30%    |    1    |         1
 5 |   10%    |    1    |         1
 6 |   70%    |    4    |         4
which I can then use to filter by what I need.
PS: I'm using postgresql.
You've stated that you want to parameterize the query so that you can retrieve X users. I'm reading that as all coupons for the first X distinct user_ids in coupon id column order.
It appears your attempt was close: dense_rank() is the right idea. Since you want to look over the entire table, you can't use PARTITION BY here, and a sorting column is required to determine the ranking. Ordering by user_id gives each distinct user_id one rank, so dr <= X keeps all coupons for X users.
with data as (
    select *,
           dense_rank() over (order by user_id) as dr
    from coupons
)
select * from data where dr <= <X>;
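If "first X users" should instead follow the order in which user_ids first appear in coupons (by coupon id) rather than ascending user_id, one variant (a sketch, not from the original answer) is to rank users by their first coupon id; the dr column then matches the order_of_occurence column sketched in the question:
with firsts as (
    select *,
           -- smallest coupon id per user = that user's first appearance
           min(id) over (partition by user_id) as first_coupon_id
    from coupons
),
data as (
    select *,
           -- one rank per distinct user, in order of first appearance
           dense_rank() over (order by first_coupon_id) as dr
    from firsts
)
select id, discount, user_id
from data
where dr <= 2;   -- X = 2
On the sample data this returns coupons 1, 2, 4 and 5, the same result as above, but the two orderings differ whenever a higher user_id appears earlier in the coupons table.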

Update new foreign key column of existing table with ids from another table in SQL Server

I have an existing table to which I have added a new column which is supposed to hold the Id of a record in another (new) table.
Simplified structure is sort of like this:
Customer table
[CustomerId] [GroupId] [LicenceId] <-- new column
Licence table <-- new table
[LicenceId] [GroupId]
The Licence table has a certain number of licences per group that can be assigned to customers in that same group. There are multiple groups, and each group has a variable number of customers and licences.
So say there are 100 licences available for group 1 and there are 50 customers in group 1, so each can get a licence. There are never more customers than there are licences.
Sample
Customer
[CustomerId] [GroupId] [LicenceId]
1 1 NULL
2 1 NULL
3 1 NULL
4 1 NULL
5 2 NULL
6 2 NULL
7 2 NULL
8 3 NULL
9 3 NULL
Licence
[LicenceId] [GroupId]
1 1
2 1
3 1
4 1
5 1
6 1
7 2
8 2
9 2
10 2
11 2
12 3
13 3
14 3
15 3
16 3
17 3
Desired outcome
Customer
[CustomerId] [GroupId] [LicenceId]
1 1 1
2 1 2
3 1 3
4 1 4
5 2 7
6 2 8
7 2 9
8 3 12
9 3 13
So now I have to do this one-time update to give every customer a licence, and I have no idea how to go about it.
I'm not allowed to use a cursor. I can't seem to do a MERGE UPDATE, because joining the Customer to the Licence table by GroupId will result in multiple hits.
How do I assign each customer the next available LicenceId within their group in one query?
Is this even possible?
You can use window functions:
with c as (
    select c.*,
           row_number() over (partition by GroupId order by newid()) as seqnum
    from Customer c
),
l as (
    select l.*,
           row_number() over (partition by GroupId order by newid()) as seqnum
    from Licence l
)
update c
set c.LicenceId = l.LicenceId
from c join
     l
     on c.seqnum = l.seqnum and c.GroupId = l.GroupId;
This assigns the licenses randomly. That is really just for fun. The most efficient method is to use:
row_number() over (partition by groupid order by (select null)) as seqnum
SQL Server often avoids an additional sort operation in this case.
But you might want to order them by something else -- for instance by the ordering of the customer ids, or by some date column, or something else.
Gordon has put it very well in his answer.
Let me break it down into simpler steps for you.
Step 1. Use the ROW_NUMBER() function to assign a SeqNum to the Customers. Use PARTITION BY GroupId so that the number starts from 1 in every group. I would ORDER BY CustomerId.
Step 2. Use the ROW_NUMBER() function to assign a SeqNum to the Licences. Use PARTITION BY GroupId so that the number starts from 1 in every group. ORDER BY LicenceId, because your ask is to "assign each customer the next available LicenceId within their group".
Now use these two queries to update LicenceId in the Customer table.
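A minimal sketch of those two steps combined into a single UPDATE, essentially Gordon's query with deterministic ORDER BY columns, using the table and column names from the question (untested against your actual schema):
with c as (
    -- Step 1: number customers 1..n within each group
    select Customer.*,
           row_number() over (partition by GroupId order by CustomerId) as SeqNum
    from Customer
),
l as (
    -- Step 2: number licences 1..n within each group
    select Licence.*,
           row_number() over (partition by GroupId order by LicenceId) as SeqNum
    from Licence
)
update c
set c.LicenceId = l.LicenceId
from c
join l
  on l.GroupId = c.GroupId
 and l.SeqNum = c.SeqNum;
On the sample data this gives group 1's customers licences 1-4, group 2's customers licences 7-9, and group 3's customers licences 12-13, which matches the desired outcome.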

How to sum stacked in line in SQL

I have a table like this:
code | Quantity
  1  |    5
  1  |    6
  2  |    2
  2  |    1-
  3  |    4
 ...
How can I make it look like this:
code | Quantity | remain
  1  |    5     |    5
  1  |    6     |   11
  2  |    2     |    2
  2  |    1-    |    1
  3  |    4     |    4
 ...
Your desired result presumes an ordering of the rows. I will assume you have a column that provides it.
Assuming the values are numbers (is 1- meant to be -1?), you can simply use a cumulative sum:
select t.*,
       sum(quantity) over (partition by code order by ?) as remaining
from t;
The ? is for the column that specifies the ordering.
You can do a window sum, but you need a column to unambiguously order the records within groups sharing the same code. I assumed that this column is called id.
select t.*,
       sum(quantity) over (partition by code order by id) as remain
from mytable t

Calculate "position in run" in SQL

I have a table of consecutive ids (integers, 1 ... n), and values (integers), like this:
Input Table:
id value
-- -----
1 1
2 1
3 2
4 3
5 1
6 1
7 1
Going down the table, i.e. in order of increasing id, I want to count how many times in a row the same value has been seen consecutively, i.e. the position in a run:
Output Table:
id value position in run
-- ----- ---------------
1 1 1
2 1 2
3 2 1
4 3 1
5 1 1
6 1 2
7 1 3
Any ideas? I've searched for a combination of windowing functions including lead and lag, but can't come up with it. Note that the same value can appear in the value column as part of different runs, so partitioning by value may not help solve this. I'm on Hive 1.2.
One way is to use a difference-of-row-numbers approach to classify consecutive equal values into one group, then a row_number function to get the desired position within each group.
Query to assign groups (running this will help you understand how the groups are assigned):
select t.*,
       row_number() over (order by id) - row_number() over (partition by value order by id) as rnum_diff
from tbl t
Final query, using row_number to get positions in each group assigned by the above query:
select id, value,
       row_number() over (partition by value, rnum_diff order by id) as pos_in_grp
from (select t.*,
             row_number() over (order by id) - row_number() over (partition by value order by id) as rnum_diff
      from tbl t
     ) t
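To see why this works, here are the two row numbers and their difference worked out by hand for the sample input above (the middle two columns are shown only for illustration):
id value rn_overall rn_per_value rnum_diff
-- ----- ---------- ------------ ---------
1  1     1          1            0
2  1     2          2            0
3  2     3          1            2
4  3     4          1            3
5  1     5          3            2
6  1     6          4            2
7  1     7          5            2
Note that the value-2 row (id 3) and the second run of value 1 (ids 5-7) share rnum_diff = 2, which is why the final query partitions by both value and rnum_diff: each run is identified by the pair, not by rnum_diff alone.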

Update rows based on rownumber in SQL Server 2012

I've been given some data in a spreadsheet which will soon be going into an automated import, so I cannot do any manual entry on the spreadsheet. The data basically has the following columns: TrayId, TrayName, ItemDescription and RowNumber. I didn't build these tables myself (or I would have built them differently), but I have to stick to the format which is already set.
The data that is being imported will look as follows:
Trayid | Trayname | ItemDescription | RowNumber
  1      Tray 1     Product 1         1
                    Product 2         2
                    Product 3         3
                    Product 4         4
  2      Tray 2     Product 1         1
                    Product 2         2
                    Product 3         3
                    Product 4         4
                    Product 5         5
What I need to do is update the TrayId and TrayName for each of the rows following the first row of each tray, so for example it will look like this:
Trayid | Trayname | ItemDescription | RowNumber
  1      Tray 1     Product 1         1
  1      Tray 1     Product 2         2
  1      Tray 1     Product 3         3
  1      Tray 1     Product 4         4
  2      Tray 2     Product 1         1
  2      Tray 2     Product 2         2
  2      Tray 2     Product 3         3
  2      Tray 2     Product 4         4
  2      Tray 2     Product 5         5
I'm guessing I need to use a cursor or something, but I'm not sure. I think it can be done by going down the row numbers and stopping when it sees RowNumber 1 again, then carrying on with the next TrayId and TrayName.
Sorry if what I need doesn't make sense; it was awkward to explain.
SQL tables have no inherent ordering, so you cannot depend on that. But there is something that you can do (a sketch follows below):
1. Define an identity column in the source table.
2. Create a view on the source table that excludes the identity.
3. Bulk insert into the view.
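A minimal sketch of those three steps; the table name TrayImport, the view name TrayImportLoad, the file path and the CSV options are purely illustrative and not from the question:
-- 1. Staging table with an identity column that records load order
create table TrayImport (
    id              int identity(1,1) primary key,
    TrayId          int          null,
    TrayName        varchar(50)  null,
    ItemDescription varchar(100) null,
    RowNumber       int          null
);
go

-- 2. View that excludes the identity column, so the file's columns map 1:1
create view TrayImportLoad as
    select TrayId, TrayName, ItemDescription, RowNumber
    from TrayImport;
go

-- 3. Bulk insert through the view; identity values are assigned as rows are loaded
bulk insert TrayImportLoad
from 'C:\import\trays.csv'        -- illustrative path
with (fieldterminator = ',', rowterminator = '\n', firstrow = 2);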
This will assign a sequential number to rows in the same order as the original data. Let's call this id. Then you can do your update by doing:
with toupdate as (
    select t.*,
           max(TrayId) over (partition by grp) as new_TrayId,
           max(TrayName) over (partition by grp) as new_TrayName
    from (select t.*,
                 count(TrayId) over (order by id) as grp
          from t
         ) t
)
update toupdate
set TrayId = new_TrayId,
    TrayName = new_TrayName
where TrayId is null;
The idea is to define groups of rows corresponding to each tray: count the number of non-NULL TrayId values up to and including any given row, so everything in a group has the same grp value. Window functions then spread the actual values through all rows in the group (using max()), and those values are used for the update.
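Worked out by hand on the nine sample rows (values derived from the sample data above), the cumulative count and the spread values look like this:
id TrayId TrayName ItemDescription grp new_TrayId new_TrayName
-- ------ -------- --------------- --- ---------- ------------
1  1      Tray 1   Product 1       1   1          Tray 1
2  NULL   NULL     Product 2       1   1          Tray 1
3  NULL   NULL     Product 3       1   1          Tray 1
4  NULL   NULL     Product 4       1   1          Tray 1
5  2      Tray 2   Product 1       2   2          Tray 2
6  NULL   NULL     Product 2       2   2          Tray 2
7  NULL   NULL     Product 3       2   2          Tray 2
8  NULL   NULL     Product 4       2   2          Tray 2
9  NULL   NULL     Product 5       2   2          Tray 2
The UPDATE then fills each NULL row with its group's TrayId and TrayName, which is exactly the desired outcome.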