I have a table with three columns: ordernum, username, and amount. I want to select its rows and show an additional column, expected.
The rule for calculating the expected column is as follows:
Rows that share the same OrderNum value should be ranked again based on the amount column, in descending order. I'm not sure how to describe it more precisely, but the expected result is shown below.
I tried RANK() and ROW_NUMBER(), but I have not been able to apply the above logic properly.
This is my table declaration:
CREATE TABLE data
(
    ordernum INT,
    username NVARCHAR(30),
    amount   MONEY
);
This is my table content:
+----------+----------+------------+
| ORDERNUM | USERNAME | AMOUNT     |
+----------+----------+------------+
|        1 | test01   | 18382.5079 |
|        1 | test02   | 10476.0000 |
|        1 | test03   |  8128.0000 |
|        1 | test04   |  6680.0000 |
|        1 | test05   |  5388.9673 |
|        1 | test06   |  5356.0000 |
|       12 | test07   |  2806.0000 |
|       12 | test08   |  2806.0000 |
|       12 | test09   |  2806.0000 |
|       14 | test10   |  2530.0000 |
|       15 | test11   |  2330.0000 |
|       16 | test12   |  2183.0000 |
|       16 | test13   |  2182.0000 |
|       17 | test14   |  2000.0000 |
|       18 | test15   |  1621.0000 |
+----------+----------+------------+
And this is my expected result:
+----------+----------+------------+----------+
| ORDERNUM | USERNAME | AMOUNT     | EXPECTED |
+----------+----------+------------+----------+
|        1 | test01   | 18382.5079 |        1 |
|        1 | test02   | 10476.0000 |        2 |
|        1 | test03   |  8128.0000 |        3 |
|        1 | test04   |  6680.0000 |        4 |
|        1 | test05   |  5388.9673 |        5 |
|        1 | test06   |  5356.0000 |        6 |
|       12 | test07   |  2806.0000 |       12 |
|       12 | test08   |  2806.0000 |       12 |
|       12 | test09   |  2806.0000 |       12 |
|       14 | test10   |  2530.0000 |       15 |
|       15 | test11   |  2330.0000 |       16 |
|       16 | test12   |  2183.0000 |       17 |
|       16 | test13   |  2182.0000 |       18 |
|       17 | test14   |  2000.0000 |       19 |
|       18 | test15   |  1621.0000 |       20 |
+----------+----------+------------+----------+
Here is a fiddle for the problem: https://dbfiddle.uk/?rdbms=sqlserver_2019&fiddle=4014cb469a9ec8f57ded5a5e0e60adaf
This should work.
select
    OrderNum,
    Username,
    Amount,
    RANK() over (order by OrderNum) as Expected
from data
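If rows that share an OrderNum should also be separated by Amount (descending), as the question's rule suggests, one variant worth trying is adding Amount to the window's ORDER BY. This is only a sketch of that idea; note that tied amounts (as in OrderNum 12) will still share a rank:
select
    OrderNum,
    Username,
    Amount,
    RANK() over (order by OrderNum, Amount desc) as Expected
from data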
I am building out a customer retention report. We identify customers by their email. Here is some sample data from our table:
+----------------------------+------------------+-------------------+---------------------+------------+-------------+--------------+---------------+------------------+---------------+----------------+--------------+------------------+
| Email                      | BrandNewCustomer | RecurringCustomer | ReactivatedCustomer | OrderCount | TotalOrders | Date_Created | Customer_Name | Customer_Address | Customer_City | Customer_State | Customer_Zip | Customer_Country |
+----------------------------+------------------+-------------------+---------------------+------------+-------------+--------------+---------------+------------------+---------------+----------------+--------------+------------------+
| zyw#marketplace.amazon.com | 1                | 0                 | 0                   | 1          | 1           | 41:50.0      | Sha           | 990              | BRO           | NY             | 112          | US               |
| zyu#gmail.com              | 1                | 0                 | 0                   | 1          | 1           | 57:25.0      | Zyu           | 181              | Mia           | FL             | 330          | US               |
| ZyR#aol.com                | 1                | 0                 | 0                   | 1          | 1           | 10:19.0      | Day           | 581              | Myr           | SC             | 295          | US               |
| zyr#gmail.com              | 1                | 0                 | 0                   | 1          | 1           | 25:19.0      | Nic           | 173              | Was           | DC             | 200          | US               |
| zy#gmail.com               | 1                | 0                 | 0                   | 1          | 1           | 19:18.0      | Kim           | 675              | MIA           | FL             | 331          | US               |
| zyou#gmail.com             | 1                | 0                 | 0                   | 1          | 1           | 40:29.0      | zoe           | 160              | Mob           | AL             | 366          | US               |
| zyon#yahoo.com             | 1                | 0                 | 0                   | 1          | 1           | 17:21.0      | Zyo           | 84               | Sta           | CT             | 690          | US               |
| zyo#gmail.com              | 1                | 0                 | 0                   | 2          | 2           | 02:03.0      | Zyo           | 432              | Ell           | GA             | 302          | US               |
| zyo#gmail.com              | 1                | 0                 | 0                   | 1          | 2           | 12:54.0      | Zyo           | 432              | Ell           | GA             | 302          | US               |
| zyn#icloud.com             | 1                | 0                 | 0                   | 1          | 1           | 54:56.0      | Zyn           | 916              | Nor           | CA             | 913          | US               |
| zyl#gmail.com              | 0                | 1                 | 0                   | 3          | 3           | 31:27.0      | Ser           | 123              | Mia           | FL             | 331          | US               |
| zyk#marketplace.amazon.com | 1                | 0                 | 0                   | 1          | 1           | 44:00.0      | Myr           | 101              | MIA           | FL             | 331          | US               |
+----------------------------+------------------+-------------------+---------------------+------------+-------------+--------------+---------------+------------------+---------------+----------------+--------------+------------------+
We define a customer by email, so all orders with the same email are treated as belonging to one customer, and we run calculations on top of that.
Now I am trying to identify customers whose emails have changed, so I want to line up customers by their address instead.
For each row (still separated by email), I want another column called something like Orders_With_Same_Address_Different_Email. How would I do that?
I have tried something with DENSE_RANK, but it doesn't seem to work:
SELECT DISTINCT
Email
,BrandNewCustomer
,RecurringCustomer
,ReactivatedCustomer
,OrderCount
,TotalOrders
,Date_Created
,Customer_Name
,Customer_Address
,Customer_City
,Customer_State
,Customer_Zip
,Customer_Country
,(DENSE_RANK() over (partition by Email order by (case when email <> email then Customer_Address end) asc)
+DENSE_RANK() over ( partition by Email order by (case when email <> email then Customer_Address end) desc)
- 1) as Orders_With_Same_Name_Different_Email
--*
FROM Customers
Try counting the email partitioned by address, not by email:
select Email,
       -- ...
       Orders_With_Same_Name_Different_Email = iif(
           count(email) over (partition by Customer_Address) > 1,
           1, 0)
from Customers;
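Note that count(email) over (partition by Customer_Address) counts rows per address, not distinct emails, so several orders from one customer would also raise the flag. If you need distinct emails per address, SQL Server does not allow COUNT(DISTINCT ...) OVER, but the usual workaround with two DENSE_RANKs does the job. A sketch, assuming Email is never NULL:
select Email,
       Customer_Address,
       -- distinct count = ranks counted from both ends, minus the shared 1
       dense_rank() over (partition by Customer_Address order by Email)
       + dense_rank() over (partition by Customer_Address order by Email desc)
       - 1 as Distinct_Emails_At_Address
from Customers;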
But this is a lesson in why you wouldn't use an email as an identifier for a client. An address is a bad idea as well. Use something that won't change; that usually means making an internal identifier, such as an auto-incrementing column:
alter table #customers
    add customerId int identity(1,1) not null primary key
Now customerId = 1 will always refer to that particular customer.
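For example, once the surrogate key exists, later queries can key on it instead of the email. A sketch, where the Orders table and its customerId column are hypothetical:
-- Hypothetical Orders table that stores customerId instead of Email
select o.*
from Orders as o
join #customers as c on c.customerId = o.customerId
where c.customerId = 1; -- still the same customer even after an email change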
You can group by customer_address and check the count. This assumes that each customer has one address.
select *
from Customers
where Customer_Address in (
    select Customer_Address
    from Customers
    group by Customer_Address
    having count(distinct Email) > 1
);
If I understand what you want to do, this is how I would solve it:
Note: you don't need the HAVING clause in the CTE, but depending on your data it could make the query faster (that is, if you have a large dataset).
WITH email2addr AS
(
    select email, count(distinct customer_address) as addr_cnt
    from customers
    group by email
    having count(distinct customer_address) > 1
)
SELECT
     customers.Email
    ,BrandNewCustomer
    ,RecurringCustomer
    ,ReactivatedCustomer
    ,OrderCount
    ,TotalOrders
    ,Date_Created
    ,Customer_Name
    ,Customer_Address
    ,Customer_City
    ,Customer_State
    ,Customer_Zip
    ,Customer_Country
    ,CASE WHEN coalesce(email2addr.addr_cnt, 1) > 1 THEN 'Y' ELSE 'N' END as has_more_than_1_address
FROM customers
LEFT JOIN email2addr ON customers.email = email2addr.email
I have a table, e.g. "employee", with just one column, "id". Say it has records from 1 through 1000.
Employee
------------
ID
------------
1
2
3
..
..
999
1000
Now I would like to write a query that gives the following result: sort the IDs in ascending order and concatenate the first 5 into one record, the second 5 into a second record, and so on. Any ideas how I can do this?
Here is the output I am looking to have.
1,2,3,4,5
6,7,8,9,10
11,12,13,14,15
...........
...........
996,997,998,999,1000
Use the ROW_NUMBER and LISTAGG functions, like this:
SELECT listagg( id, ',' ) within group( order by group_no, id )
FROM (
    select id,
           trunc((row_number() over (order by id) - 1) / 5) as group_no
    from employee
)
GROUP BY group_no
Working demo: http://sqlfiddle.com/#!4/ef526/10
| LISTAGG(ID,',')WITHINGROUP(ORDERBYGROUP_NO,ID) |
|------------------------------------------------|
| 1,2,3,4,5                                      |
| 6,7,8,9,10                                     |
| 11,12,13,14,15                                 |
| 16,17,18,19,20                                 |
| ...                                            |
| 196,197,198,199,200                            |
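If you are on SQL Server 2017+ rather than Oracle, the same approach works with STRING_AGG in place of LISTAGG; a sketch under that assumption:
SELECT STRING_AGG(CAST(id AS varchar(10)), ',') WITHIN GROUP (ORDER BY id) AS ids
FROM (
    SELECT id,
           (ROW_NUMBER() OVER (ORDER BY id) - 1) / 5 AS group_no -- integer division buckets the ids in fives
    FROM employee
) AS t
GROUP BY group_no
ORDER BY group_no;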