I'm trying to generate a number sequence based on two columns, Sno and UnitCost. The numbers should run sequentially down the rows: they should stay the same while both columns stay the same, and increment whenever either column changes.
I tried ROW_NUMBER(), RANK(), and DENSE_RANK(), but haven't been able to hit the right logic.
Here's the required column and existing columns:
Sno UnitCost RequiredColumn
ch01 10 01
ch01 10 01
ch02 20 02
ch02 20 02
ch02 30 03
ch02 30 03
ch03 10 04
Any tips? Thanks.
Using DENSE_RANK:
SELECT Sno, UnitCost, DENSE_RANK() OVER (ORDER BY Sno, UnitCost) RequiredColumn
FROM yourTable;
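A quick way to sanity-check this answer against the sample data (a sketch using SQLite through Python's sqlite3, which supports window functions from SQLite 3.25 on; table and column names follow the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yourTable (Sno TEXT, UnitCost INTEGER)")
conn.executemany("INSERT INTO yourTable VALUES (?, ?)", [
    ("ch01", 10), ("ch01", 10), ("ch02", 20), ("ch02", 20),
    ("ch02", 30), ("ch02", 30), ("ch03", 10),
])

# DENSE_RANK assigns the same number to ties on (Sno, UnitCost)
# and increments without gaps when either column changes.
result = conn.execute(
    "SELECT Sno, UnitCost,"
    "       DENSE_RANK() OVER (ORDER BY Sno, UnitCost) AS RequiredColumn "
    "FROM yourTable ORDER BY Sno, UnitCost"
).fetchall()
```

This reproduces the RequiredColumn values 1, 1, 2, 2, 3, 3, 4 from the question (without the zero-padding, which would be a display concern).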
I have a table of activity records, containing the registration date and other information about the activity performed. I would like to write a query that returns the table with one extra column containing the maximum record date.
I don't think it's too complicated, but my knowledge of the subject is limited. Would a join between tables be the solution? How can I do it?
My original table:
ID    Value    Date
01    34       2022-02-15
01    42       2022-02-08
02    12       2022-02-08
02    30       2022-02-01
I need to get:
ID    Value    Date          Date_max
01    34       2022-02-15    2022-02-15
01    42       2022-02-08    2022-02-15
02    12       2022-02-08    2022-02-15
02    30       2022-02-01    2022-02-15
I just need a column with the global maximum value. It will be the same value for all rows.
You can use a window function:
select id, value, date, max(date) over () as date_max
from the_table
order by id, date desc;
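The empty OVER () makes the whole result set a single window, so every row sees the global maximum. A runnable sketch of this answer (SQLite via Python's sqlite3; ISO date strings assumed so MAX compares correctly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE the_table (id TEXT, value INTEGER, date TEXT)")
conn.executemany("INSERT INTO the_table VALUES (?, ?, ?)", [
    ("01", 34, "2022-02-15"), ("01", 42, "2022-02-08"),
    ("02", 12, "2022-02-08"), ("02", 30, "2022-02-01"),
])

# MAX(date) OVER () computes the maximum over all rows,
# repeated on every output row.
rows = conn.execute(
    "SELECT id, value, date, MAX(date) OVER () AS date_max "
    "FROM the_table ORDER BY id, date DESC"
).fetchall()
```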
Lots of ways to do this:
1. Analytic (window) function, if it's available to you.
2. Inline select subquery.
3. Cross join.
The first two have been covered in other answers, so here's the third option. The analytic function would be the most "modern" way to handle this that I can think of.
SELECT A.*, B.Date_Max
FROM Table A
CROSS JOIN (SELECT max(Date) Date_Max FROM Table Sub) B
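A sketch of the cross-join variant against the question's data (SQLite via Python's sqlite3; the generic names t/date_max here stand in for the question's table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id TEXT, value INTEGER, date TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    ("01", 34, "2022-02-15"), ("01", 42, "2022-02-08"),
    ("02", 12, "2022-02-08"), ("02", 30, "2022-02-01"),
])

# The subquery produces exactly one row, so the cross join simply
# attaches the global max date to every row of A.
rows = conn.execute(
    "SELECT A.*, B.date_max "
    "FROM t A CROSS JOIN (SELECT MAX(date) AS date_max FROM t) B "
    "ORDER BY A.id, A.date DESC"
).fetchall()
```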
Another approach: get the max date with a scalar subquery:
select id, value, date, (select max(date) from original_table) Date_max
from original_table o
order by id
PK Date ID
=== =========== ===
1 07/04/2017 22
2 07/05/2017 22
3 07/07/2017 03
4 07/08/2017 04
5 07/09/2017 22
6 07/09/2017 22
7 07/10/2017 05
8 07/11/2017 03
9 07/11/2017 03
10 07/11/2017 03
I want to count how many distinct days each ID occurred on in a given week/month, something like this:
ID Count
22 3  --> 07/09/2017 appears twice but is counted only once
03 2  --> same as above: counted once per date, regardless of repeats
04 1
05 1
I'm trying to implement this in a Perl script that prints the output to a CSV file, but I have no idea what query to execute.
Seems like a simple case of count distinct and group by:
SELECT Id, COUNT(DISTINCT [Date]) As [Count]
FROM TableName
WHERE [Date] >= #StartDate
AND [Date] <= #EndDate
GROUP BY Id
ORDER BY [Count] DESC
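The approach above can be checked against the question's data (a sketch in SQLite via Python's sqlite3; the dates are converted to ISO format so the range comparison works, and bound parameters stand in for #StartDate/#EndDate):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TableName (PK INTEGER, Date TEXT, ID TEXT)")
conn.executemany("INSERT INTO TableName VALUES (?, ?, ?)", [
    (1, "2017-07-04", "22"), (2, "2017-07-05", "22"),
    (3, "2017-07-07", "03"), (4, "2017-07-08", "04"),
    (5, "2017-07-09", "22"), (6, "2017-07-09", "22"),
    (7, "2017-07-10", "05"), (8, "2017-07-11", "03"),
    (9, "2017-07-11", "03"), (10, "2017-07-11", "03"),
])

# COUNT(DISTINCT Date) collapses repeat occurrences of an ID
# on the same day down to 1.
rows = conn.execute(
    "SELECT ID, COUNT(DISTINCT Date) AS cnt "
    "FROM TableName WHERE Date BETWEEN ? AND ? "
    "GROUP BY ID ORDER BY cnt DESC, ID",
    ("2017-07-01", "2017-07-31"),
).fetchall()
```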
You can use COUNT with DISTINCT e.g.:
SELECT ID, COUNT(DISTINCT Date)
FROM table
GROUP BY ID;
You can read more about how to get the month from a date (the same approach works for the year). Your query would be:
select DATEPART(mm, [Date]) AS [month], COUNT(ID) AS [count]
from [table]
group by DATEPART(mm, [Date])
Hope that helps.
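SQLite has no DATEPART, but strftime('%m', …) plays the same role; a minimal sketch of per-month grouping (hypothetical sample data, ISO dates assumed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (ID TEXT, Date TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [
    ("22", "2017-07-04"), ("22", "2017-07-05"), ("03", "2017-08-01"),
])

# strftime('%m', ...) extracts the two-digit month, the SQLite
# equivalent of DATEPART(mm, ...); '%Y' would give the year.
rows = conn.execute(
    "SELECT strftime('%m', Date) AS month, COUNT(ID) AS cnt "
    "FROM t GROUP BY strftime('%m', Date) ORDER BY month"
).fetchall()
```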
I have a historical table XY with these contents:
ID Person_ID Balance_on_account ts
---- ----------- -------------------- ----------
01 05 +10 10.10.14
02 05 -10 20.10.14
03 05 -50 30.10.14
04 05 +50 30.10.14
05 05 -10 30.10.14
06 06 11 11.10.14
07 06 -40 15.10.14
08 06 +5 16.10.14
09 06 -10 30.10.14
and I need to create an SQL query that gives me the Person_IDs and timestamps where
a) the Balance_on_account is negative (that's the easy one),
b) and the record with the negative Balance_on_account is followed by a positive number.
Like for Person_ID = 05 I would have the row with ID = 05, and for Person_ID = 06 the row with ID = 09.
I've never used it myself, but you could try the analytic LEAD function:
SELECT *
FROM (
    SELECT ID, Person_ID, Balance_on_account, ts,
           LEAD (Balance_on_account, 1)
             OVER (PARTITION BY Person_ID ORDER BY ID) next_balance
    FROM XY)
WHERE Balance_on_account < 0 AND next_balance >= 0
ORDER BY ID
LEAD lets you access the following rows in a query without joining the table with itself.
PARTITION BY groups rows by Person_ID so it doesn't mix different person's balances and ORDER BY defines the order within each group.
The filtering cannot be done in the inner query because it'd filter out the rows with positive balance.
next_balance will be null for the last row.
Source: Oracle's documentation on analytic functions and LEAD.
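The LEAD approach can be tried out on the question's data (a sketch in SQLite via Python's sqlite3, which has LEAD from 3.25 on; the DD.MM.YY timestamps are converted to ISO dates here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE XY (ID INTEGER, Person_ID TEXT,"
             "                 Balance_on_account INTEGER, ts TEXT)")
conn.executemany("INSERT INTO XY VALUES (?, ?, ?, ?)", [
    (1, "05", 10, "2014-10-10"), (2, "05", -10, "2014-10-20"),
    (3, "05", -50, "2014-10-30"), (4, "05", 50, "2014-10-30"),
    (5, "05", -10, "2014-10-30"), (6, "06", 11, "2014-10-11"),
    (7, "06", -40, "2014-10-15"), (8, "06", 5, "2014-10-16"),
    (9, "06", -10, "2014-10-30"),
])

# LEAD pulls the next row's balance within each person's partition;
# the outer filter keeps negative balances followed by a non-negative one.
rows = conn.execute(
    "SELECT * FROM ("
    "  SELECT ID, Person_ID, Balance_on_account, ts,"
    "         LEAD(Balance_on_account, 1) OVER"
    "           (PARTITION BY Person_ID ORDER BY ID) AS next_balance"
    "  FROM XY)"
    " WHERE Balance_on_account < 0 AND next_balance >= 0"
    " ORDER BY ID"
).fetchall()
```

With this data the query returns the rows with ID 3 (Person 05: -50 followed by +50) and ID 7 (Person 06: -40 followed by +5).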
The following query should give you the expected results provided the database platform you are using supports Common Table Expressions and Window Functions e.g. SQL Server 2008 and up.
WITH TsOrder AS
(
SELECT
Id
, Person_Id
, Balance_on_account
, ts
, ROW_NUMBER() OVER(PARTITION BY Person_Id
ORDER BY ts, Id) AS ts_Order
FROM
[TableName]
)
SELECT
*
FROM
TsOrder
LEFT JOIN TsOrder AS NextTs
ON TsOrder.Person_id = NextTs.Person_Id
AND TsOrder.ts_order = NextTs.ts_order - 1
WHERE
TsOrder.Balance_on_account < 0
AND NextTs.Balance_on_account > 0
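The self-join-on-row-number version can be exercised the same way (a sketch in SQLite via Python's sqlite3; ISO dates assumed so ORDER BY ts sorts chronologically):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE XY (ID INTEGER, Person_ID TEXT,"
             "                 Balance_on_account INTEGER, ts TEXT)")
conn.executemany("INSERT INTO XY VALUES (?, ?, ?, ?)", [
    (1, "05", 10, "2014-10-10"), (2, "05", -10, "2014-10-20"),
    (3, "05", -50, "2014-10-30"), (4, "05", 50, "2014-10-30"),
    (5, "05", -10, "2014-10-30"), (6, "06", 11, "2014-10-11"),
    (7, "06", -40, "2014-10-15"), (8, "06", 5, "2014-10-16"),
    (9, "06", -10, "2014-10-30"),
])

# Each row is joined to its successor via ts_order = ts_order - 1;
# the WHERE clause on NextTs turns the LEFT JOIN into an inner join.
rows = conn.execute("""
    WITH TsOrder AS (
        SELECT ID, Person_ID, Balance_on_account, ts,
               ROW_NUMBER() OVER (PARTITION BY Person_ID
                                  ORDER BY ts, ID) AS ts_order
        FROM XY)
    SELECT TsOrder.ID
    FROM TsOrder
    LEFT JOIN TsOrder AS NextTs
      ON TsOrder.Person_ID = NextTs.Person_ID
     AND TsOrder.ts_order = NextTs.ts_order - 1
    WHERE TsOrder.Balance_on_account < 0
      AND NextTs.Balance_on_account > 0
    ORDER BY TsOrder.ID
""").fetchall()
```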
So I have a table like:
UNIQUE_ID MONTH
abc 01
93j 01
acc 01
7as 01
oks 02
ais 02
asi 03
asd 04
etc
I query:
select count(unique_id) as amount, month
from table
group by month
now everything looks great:
AMOUNT MONTH
4 01
2 02
1 03
etc
is there a way to get oracle to split the amounts by weeks?
the way that the result look something like:
AMOUNT WEEK
1 01
1 02
1 03
1 04
etc
Assuming you know the year (let's say we go with 2014), you first need to generate all the weeks in a year:
(select rownum as week_no
 from all_objects
 where rownum < 53) weeks
Then state which months contain those weeks (for 2014):
select week_no, to_char(to_date('01-JAN-2014','DD-MON-YYYY')+7*(week_no-1),'MM') month_no
from
(select rownum as week_no
from all_objects
where rownum<53) weeks
Then join in your data
select week_no,month_no, test.unique_id from (
select week_no, to_char(to_date('01-JAN-2014','DD-MON-YYYY')+7*(week_no-1),'MM') month_no
from
(select rownum as week_no
from all_objects
where rownum<53) weeks) wm
join test on wm.month_no = test.tmonth
This gives you your data for each week, as described above. You can then redo your query and count by week instead of month.
If you have the select statement below where the PK is the primary key:
select distinct dbo.DateAsInt( dateEntered) * 100 as PK,
recordDescription as Data
from MyTable
and the output is something like this (the first numbers are spaced for clarity):
PK Data
2010 01 01 00 New Years Day
2010 01 01 00 Make Resolutions
2010 01 01 00 Return Gifts
2010 02 14 00 Valentines day
2010 02 14 00 Buy flowers
and you want to output something like this:
PK Data
2010 01 01 01 New Years Day
2010 01 01 02 Make Resolutions
2010 01 01 03 Return Gifts
2010 02 14 01 Valentines day
2010 02 14 02 Buy flowers
Is it possible to make the "00" in the PK have an "identity" number effect within a single select? Otherwise, how could you increment the number by 1 for each found activity for that date?
I am already thinking as I type to try something like Sum(case when ?? then 1 end) with a group by.
Edit: (Answer provided by JohnFX below)
This was the final answer:
select PK * 100 + row_number() over
         (partition by PK order by Data) as PK,
       Data
from (select distinct -- removes duplicate rows before numbering
        dbo.DateAsInt(dateEntered) as PK,
        recordDescription as Data
      from MyTable) A
order by PK
I think you are looking for ROW_NUMBER and the associated PARTITION clause.
SELECT DISTINCT dbo.DateAsInt(DateEntered) * 100 as PK,
ROW_NUMBER() OVER (PARTITION BY DateEntered ORDER BY recordDescription) as rowID,
recordDescription as Data
FROM MyTable
If DateEntered has duplicate values you probably also want to check out DENSE_RANK() and RANK() depending on your specific needs for what to do with the incrementor in those cases.
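The ROW_NUMBER/PARTITION idea can be sketched outside SQL Server too (SQLite via Python's sqlite3 here; since dbo.DateAsInt is a user-defined function from the question, CAST(strftime('%Y%m%d', …) AS INTEGER) stands in for it):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (dateEntered TEXT, recordDescription TEXT)")
conn.executemany("INSERT INTO MyTable VALUES (?, ?)", [
    ("2010-01-01", "New Years Day"), ("2010-01-01", "Make Resolutions"),
    ("2010-01-01", "Return Gifts"), ("2010-02-14", "Valentines day"),
    ("2010-02-14", "Buy flowers"),
])

# ROW_NUMBER restarts at 1 for each dateEntered partition, filling
# the "00" slot left by multiplying the date integer by 100.
rows = conn.execute(
    "SELECT CAST(strftime('%Y%m%d', dateEntered) AS INTEGER) * 100"
    "         + ROW_NUMBER() OVER"
    "             (PARTITION BY dateEntered"
    "              ORDER BY recordDescription) AS PK,"
    "       recordDescription AS Data "
    "FROM MyTable ORDER BY PK"
).fetchall()
```

Note that ordering the partition by recordDescription numbers each day's entries alphabetically, which differs from the question's sample ordering but keeps the numbering deterministic.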
Datetime columns should never be used as a primary key. Plus, if you had an identity column, you couldn't have:
PK Data
2010 01 01 01 New Years Day
2010 01 01 02 Make Resolutions
2010 01 01 03 Return Gifts
2010 02 14 01 Valentines day
2010 02 14 02 Buy flowers
As 2010 01 01 01 would have the same identity as 2010 02 14 01.
The best approach would be to have a separate identity column, keep the holiday date in another column, and concatenate the nvarchar-converted values of those two fields.