How to order each entry by date - SQL

I have a list of customer_ids, the date on which some information was changed, and the corresponding changes. I would like to number each change, in date order, for each customer. For example, I have something that looks like the following:
Cust_id   Date         information
-------   ----------   ----------------
12345     2015-04-03   blue hat
12345     2015-04-05   red scarf
54321     2015-04-12   yellow submarine
and I would like an output which looks something like this:
cust_id   change_number   Date         information
-------   -------------   ----------   ----------------
12345     1               2015-04-03   blue hat
12345     2               2015-04-05   red scarf
54321     1               2015-04-12   yellow submarine
This will be quite a big table, so it will need to be somewhat efficient.
There will be at most 1 entry per customer per day.
Any help you can give is appreciated.

If you want to order by a change number like that, you need an inner select, like this:
SELECT *
FROM (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY Cust_id ORDER BY [Date]) AS Change_Number
    FROM yourTable
) t
ORDER BY Cust_id, Change_Number;
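Since the question notes the table will be large, one additional hedged suggestion (not part of the original answer): an index whose key matches the window's partition and ordering columns can let the engine compute ROW_NUMBER() without an extra sort. A sketch, assuming SQL Server syntax and a hypothetical index name:
-- Covers PARTITION BY Cust_id ORDER BY [Date]
CREATE INDEX IX_yourTable_Cust_Date ON yourTable (Cust_id, [Date]);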

As Indian said, try this:
select cust_id,
Row_number() over(partition by cust_id order by date) change_number,
Date,
information
from tablename;


Related

How to select rows where values changed for an ID

I have a table that looks like the following
id    effective_date   number_of_int_customers
---   --------------   -----------------------
123   10/01/19         0
123   02/01/20         3
456   10/01/19         6
456   02/01/20         6
789   10/01/19         5
789   02/01/20         4
999   10/01/19         0
999   02/01/20         1
I want to write a query that looks at each ID to see if the salespeople have newly started working internationally between October 1st and February 1st.
The result I am looking for is the following:
id    effective_date   number_of_int_customers
---   --------------   -----------------------
123   02/01/20         3
999   02/01/20         1
The result would return only the salespeople who originally had 0 international customers and now have at least 1.
I have seen similar posts here that use nested queries to pull records where the values on the first and last dates differ. But I only want to pull records where the original value was 0. Is there a way to do this in one query in SQL?
In your case, a simple aggregation would do -- assuming that 0 is the earliest value:
select id, max(number_of_int_customers)
from t
where effective_date in ('2019-10-01', '2020-02-01')
group by id
having min(number_of_int_customers) = 0;
Obviously, this is not correct if the values can decrease to zero. But this having clause fixes that problem:
having min(case when number_of_int_customers = 0 then effective_date end) = min(effective_date)
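Spelled out in full, the fixed query would be (a sketch using the same table t and date literals as above):
select id, max(number_of_int_customers)
from t
where effective_date in ('2019-10-01', '2020-02-01')
group by id
-- keep only ids whose earliest of the two dates had 0 international customers
having min(case when number_of_int_customers = 0 then effective_date end) = min(effective_date);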
An alternative is to use window functions, such as first_value():
select distinct id, last_noic
from (select t.*,
             first_value(number_of_int_customers) over (partition by id order by effective_date) as first_noic,
             first_value(number_of_int_customers) over (partition by id order by effective_date desc) as last_noic
      from t
      where effective_date in ('2019-10-01', '2020-02-01')
     ) t
where first_noic = 0;
Hmmm, on second thought, I like lag() better:
select id, number_of_int_customers
from (select t.*,
             lag(number_of_int_customers) over (partition by id order by effective_date) as prev_noic
      from t
      where effective_date in ('2019-10-01', '2020-02-01')
     ) t
where prev_noic = 0;

Running Count of Unique Identifier Occurrences in SQL

So I'm trying to get a running count of uses over time by a unique identifier, e.g.:
Date       UniqueID   Running Count
--------   --------   -------------
1/1/2019   234567     1
1/1/2019   123456     1
1/2/2019   234567     2
1/3/2019   234567     3
1/3/2019   123456     2
Basically I want to be able to see that on 1/3/2019 that was the 3rd time that UniqueID 234567 showed up in the data.
I tried:
SELECT Date, UniqueID,
count(UniqueID) OVER (ORDER BY Date, UniqueID rows unbounded preceding) AS RunningTotal
but this just does an overall running total, so it doesn't reset with a new UniqueID.
Is there anything I could do to make it reset for each UniqueID?
You want either ROW_NUMBER() or DENSE_RANK():
SELECT Date, UniqueID,
ROW_NUMBER() OVER (PARTITION BY UniqueID ORDER BY Date) AS RunningTotal
You would use DENSE_RANK() if you could have duplicates on one day that you wanted to count only once.
By the way, you could also express this using COUNT(*):
SELECT Date, UniqueID,
COUNT(*) OVER (PARTITION BY UniqueID ORDER BY Date) AS RunningTotal
There are some subtle differences in the handling of duplicate values. Normally, COUNT() is not used for this purpose because the ranking functions are so pervasive (and useful).
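To make those subtle differences concrete, here is a sketch (the duplicate row and the table name yourTable are invented for illustration; the column names follow the question):
-- Suppose UniqueID 234567 had two rows on 1/2/2019.
-- ROW_NUMBER() would give them 2 and 3 (every row counted, the tie broken arbitrarily).
-- DENSE_RANK() would give them both 2 (the duplicate date counted once).
-- COUNT(*) with its default RANGE frame would give them both 3 (all rows up to and
-- including peers of the current date are counted).
SELECT Date, UniqueID,
       ROW_NUMBER() OVER (PARTITION BY UniqueID ORDER BY Date) AS rn,
       DENSE_RANK() OVER (PARTITION BY UniqueID ORDER BY Date) AS dr,
       COUNT(*)     OVER (PARTITION BY UniqueID ORDER BY Date) AS cnt
FROM yourTable;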

Combining COUNT and RANK - PostgreSQL

What I need to select is the total number of trips made by every id_customer from table user, together with the id, dispatch_seconds, and distance of their first order. id_customer, customer_id, and order_id are strings.
It should look like this:
+------+--------+------------+--------------------------+------------------+
| id | count | #1order id | #1order dispatch seconds | #1order distance |
+------+--------+------------+--------------------------+------------------+
| 1ar5 | 3 | 4r56 | 1 | 500 |
| 2et7 | 2 | dc1f | 5 | 100 |
+------+--------+------------+--------------------------+------------------+
Cheers!
The original post was edited during the discussion, as S-man helped me find the exact solution. Solution by S-man: https://dbfiddle.uk/?rdbms=postgres_10&fiddle=e16aa6008990107e55a26d05b10b02b5
db<>fiddle
SELECT
    customer_id,
    count,            -- the count(*) window column from the subquery (its default column name is "count")
    order_id,
    order_timestamp,
    dispatch_seconds,
    distance
FROM (
    SELECT
        *,
        count(*) over (partition by customer_id),                                       -- A
        first_value(order_id) over (partition by customer_id order by order_timestamp)  -- B
    FROM orders
) s
WHERE order_id = first_value  -- C
https://www.postgresql.org/docs/current/static/tutorial-window.html
A: a window function that gets the total record count per user.
B: a window function that orders all records per user by timestamp and returns the first order_id for each user. Using first_value instead of min has one benefit: your order IDs may not actually increase with the timestamp (maybe two orders come in simultaneously, or your order IDs are not sequentially increasing but some sort of hash).
Both A and B add new columns.
C: now get all rows where the "first_value" (i.e. the first order_id by timestamp) equals the order_id of the current row. This yields the rows of each user's first order.
Result:
customer_id count order_id order_timestamp dispatch_seconds distance
----------- ----- -------- ------------------- ---------------- --------
1ar5 3 4r56 2018-08-16 17:24:00 1 500
2et7 2 dc1f 2018-08-15 01:24:00 5 100
Note that in this test data the order "dc1f" of user "2et7" has an earlier timestamp but appears later in the table. It is not the user's first row in the table, yet it is the one with the earliest order. This demonstrates the first_value vs. min point described above.
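If it helps to see that contrast directly, here is a small sketch against the same orders table (the column aliases are mine):
-- min() picks the smallest order_id per customer regardless of time,
-- while first_value() picks the order_id of the earliest order_timestamp;
-- the two differ whenever IDs are hashes or otherwise not time-ordered.
SELECT DISTINCT
       customer_id,
       min(order_id)         OVER (PARTITION BY customer_id) AS min_order_id,
       first_value(order_id) OVER (PARTITION BY customer_id ORDER BY order_timestamp) AS first_order_id
FROM orders;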
You are on the right track. Just use conditional aggregation:
SELECT o.customer_id, COUNT(*),
       MAX(CASE WHEN seqnum = 1 THEN o.order_id END) as first_order_id
FROM (SELECT o.*,
             ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_timestamp ASC) as seqnum
      FROM orders o
     ) o
GROUP BY o.customer_id;
Your JOIN is not necessary for this query.
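Since the question also asks for the first order's dispatch_seconds and distance, the same conditional-aggregation pattern extends to those columns. A sketch, assuming they are columns of the orders table as in the fiddle above:
SELECT o.customer_id,
       COUNT(*) AS order_count,
       MAX(CASE WHEN seqnum = 1 THEN o.order_id END)         AS first_order_id,
       MAX(CASE WHEN seqnum = 1 THEN o.dispatch_seconds END) AS first_dispatch_seconds,
       MAX(CASE WHEN seqnum = 1 THEN o.distance END)         AS first_distance
FROM (SELECT o.*,
             ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_timestamp) AS seqnum
      FROM orders o
     ) o
GROUP BY o.customer_id;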
You can use window functions:
select distinct customer_id,
       count(*) over (partition by customer_id) as no_of_order,
       first_value(order_id) over (partition by customer_id order by order_timestamp) as first_order_id
from orders o;
I think there are many mistakes in your original query: your rank isn't partitioned, the order by clause seems incorrect, you filter out all but one "random" order and then apply the count, and the list goes on.
Something like this seems closer to what you want:
SELECT
    customer_id,
    order_count,
    order_id
FROM (
    SELECT
        a.customer_id,
        a.order_count,
        a.order_id,
        RANK() OVER (PARTITION BY a.order_id, a.customer_id ORDER BY a.order_count DESC) AS rank_id
    FROM (
        SELECT
            customer_id,
            order_id,
            COUNT(*) AS order_count
        FROM orders
        GROUP BY
            customer_id,
            order_id
    ) a
) b
WHERE b.rank_id = 1;

Top 2 Months of Sales by Customer - Oracle

I am trying to develop a query to pull out the top 2 months of sales by customer id. Here is a sample table:
Customer_ID   Sales Amount   Period
-----------   ------------   ------
144567        40             2
234567        50             5
234567        40             7
144567        80             10
144567        48             2
234567        23             7
The desired output would be:
Customer_ID   Sales Sum   Period
-----------   ---------   ------
144567        80          10
144567        48          2
234567        50          5
234567        40          7
I've tried
select sum(net_sales_usd_spot), valid_period, customer_id
from sales_trans_price_output
where valid_period in (select valid_period, sum(net_sales_usd_spot)
from sales_trans_price_output
where rank<=2)
group by valid_period, customer_id
The error is:
ORA-00913: too many values
I see why, but I'm not sure how to rework it.
Try:
SELECT *
FROM (
SELECT t.*,
row_number() over (partition by customer_id order by sales_amount desc ) rn
FROM sales_trans_price t
)
WHERE rn <= 2
ORDER BY 1,2 desc
Demo: http://sqlfiddle.com/#!4/882888/3
What if you change your where clause to:
where valid_period in
(
select p.valid_period from sales_trans_price_output p
join (select valid_period, sum(net_sales_usd_spot)
from sales_trans_price_output
where rank<=2) s on s.valid_period = p.valid_period
)
It might be ugly and need refactoring, but I think this is the logic you're after.
The error is because of this.
where valid_period in (select valid_period, sum(net_sales_usd_spot)
from sales_trans_price_output
where rank<=2)
The subquery in an IN clause can only return one column.
You are on the right track using rank, but you might not be using it correctly. Google oracle rank to find the correct syntax.
Back to what you are looking to achieve, a derived table is the approach I would use. That's simply a subquery with an alias. Or, if you use the keyword WITH, it's called a CTE - a Common Table Expression.
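A sketch of that derived-table approach (assuming the question's sales_trans_price_output table, and that "top 2 months" means the two periods with the highest summed sales per customer):
WITH period_totals AS (
    SELECT customer_id,
           valid_period,
           SUM(net_sales_usd_spot) AS sales_sum
    FROM sales_trans_price_output
    GROUP BY customer_id, valid_period
)
SELECT customer_id, sales_sum, valid_period
FROM (
    SELECT t.*,
           RANK() OVER (PARTITION BY customer_id ORDER BY sales_sum DESC) AS rnk
    FROM period_totals t
)
WHERE rnk <= 2
ORDER BY customer_id, sales_sum DESC;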
Try this:
SELECT * FROM (
SELECT T.*,
RANK () OVER (PARTITION BY CUSTOMER_ID
ORDER BY VALID_PERIOD DESC) FN_RANK
FROM SALES_TRANS_PRICE_OUTPUT T
) A
WHERE A.FN_RANK <= 2
ORDER BY CUSTOMER_ID ASC, VALID_PERIOD DESC, FN_RANK DESC

Create table with distinct values based on date

I have a table which fills up with lots of transactions monthly, like below.
Name           ID      Date         OtherColumn
------------   -----   ----------   -----------
John Smith     11111   2012-11-29   Somevalue
John Smith     11111   2012-11-30   Somevalue
Adam Gray      22222   2012-12-11   Somevalue
Tim Blue       33333   2012-12-15   Somevalue
John NewName   11111   2013-01-01   Somevalue
Adam Gray      22222   2013-01-02   Somevalue
From this table I want to create a dimension table with the unique names and IDs. The problem is that a person can change his/her name, like "John" in the example above. The IDs are otherwise always unique. In those cases I want to only use the newest name (the one with the latest date).
So that I end up with a table like this:
Name           ID
------------   -----
John NewName   11111
Adam Gray      22222
Tim Blue       33333
How do I go about achieving this?
Can I do it in a single query?
Use a CTE for this. It simplifies ranking and window functions.
;WITH CTE as
(SELECT
RN = ROW_NUMBER() OVER (PARTITION BY ID ORDER BY [Date] DESC),
ID,
Name
FROM
YourTable)
SELECT
Name,
ID
FROM
CTE
WHERE
RN = 1
I think creating a table is a bad idea, but this is how you get the most recent name.
select name
from yourtable yt
join (select id, max(date) as maxdate
      from yourtable
      group by id
     ) temp
  on temp.id = yt.id and yt.date = temp.maxdate
JNK's CTE solution is equivalent to the following.
SELECT
Name,
ID
FROM (
SELECT
RN = ROW_NUMBER() OVER (PARTITION BY ID ORDER BY [Date] DESC),
Name,
ID
FROM theTable
) t
WHERE RN = 1
I'm trying to think of a way to get rid of the partition function without introducing possible duplicates.
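For what it's worth, one window-free alternative (a sketch of a standard anti-join pattern, not taken from the original answers) is a correlated NOT EXISTS that keeps only each ID's newest row. Note that it can still return duplicates if an ID has two rows sharing its latest date:
SELECT yt.Name, yt.ID
FROM YourTable yt
WHERE NOT EXISTS (
    SELECT 1
    FROM YourTable newer
    WHERE newer.ID = yt.ID
      AND newer.[Date] > yt.[Date]   -- a strictly later change exists, so this row is not the newest
);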