Increase row number whenever 'Product' is encountered - T-SQL - single query (more than one statement not allowed)

There is a table like the one below:
ID Vals
1 Product
2 Milk
3 Butter
4 Cheese
5 Yogurt
6 Product
7 Muesli
8 Porridge
9 Product
10 Banana
The output needed is like below:
RWNUM ID Vals
1 1 Product
1 2 Milk
1 3 Butter
1 4 Cheese
1 5 Yogurt
2 6 Product
2 7 Muesli
2 8 Porridge
3 9 Product
3 10 Banana
Every time 'Product' is encountered, the RWNUM column value should increase by one.
This needs to be implemented in a single T-SQL query.
Any idea is welcome.

It looks like you want a cumulative sum of "Product":
select t.*,
       sum(case when t.vals = 'Product' then 1 else 0 end) over (order by t.id) as rwnum
from t;
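A minimal, self-contained sketch to verify the result (the temp table #t is an assumption; its contents match the sample data in the question):

-- assumed sample data from the question
create table #t (id int, vals varchar(20));
insert into #t (id, vals) values
    (1, 'Product'), (2, 'Milk'), (3, 'Butter'), (4, 'Cheese'), (5, 'Yogurt'),
    (6, 'Product'), (7, 'Muesli'), (8, 'Porridge'), (9, 'Product'), (10, 'Banana');

-- each 'Product' row adds 1 to the running sum and every other row adds 0,
-- so rwnum increments exactly when a new 'Product' group starts
select t.id, t.vals,
       sum(case when t.vals = 'Product' then 1 else 0 end) over (order by t.id) as rwnum
from #t as t
order by t.id;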

Related

Increase rank based on a particular value in a column

I would appreciate some help with the issue below. I have the following table:
id  items
1   Product
2   Tea
3   Coffee
4   Sugar
5   Product
6   Rice
7   Wheat
8   Product
9   Beans
10  Oil
I want output like below. Basically I want to increase the rank whenever the item is 'Product'. May I know how I can do that? For data privacy and compliance purposes I have modified the data and column names.
id  items    ranks
1   Product  1
2   Tea      1
3   Coffee   1
4   Sugar    1
5   Product  2
6   Rice     2
7   Wheat    2
8   Product  3
9   Beans    3
10  Oil      3
I have tried the LAG and LEAD functions but am unable to get the expected output.
Here is a solution using a derived value of 1 or 0 to denote data boundaries, SUMmed up with the ROWS UNBOUNDED PRECEDING option, which is key here (the table name below is a placeholder, since the question does not give one):
SELECT
    id,
    items,
    SUM(CASE WHEN items = 'Product' THEN 1 ELSE 0 END) OVER (ORDER BY id ROWS UNBOUNDED PRECEDING) AS ranks
FROM
    your_table;  -- placeholder table name
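A side note on the frame clause: with ORDER BY and no explicit frame, the window default is RANGE UNBOUNDED PRECEDING, which produces the same result here because id is unique; spelling out ROWS UNBOUNDED PRECEDING makes the running-total intent explicit and, in SQL Server, typically avoids the slower RANGE implementation.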

Find the sum of previous count occurrences per unique ID in pandas

I have a history of customer IDs and purchase IDs where no customer has ever bought the same product twice. For each purchase ID (which is unique), how can I find out the number of times the customer has made a previous purchase?
I have tried using groupby() and sort_values():
import pandas as pd

df = pd.DataFrame({'id_cust': [1, 2, 1, 3, 2, 4, 1],
                   'id_purchase': ['20A', '143C', '99B', '78R', '309D', '90J', '78J']})
df.sort_values(by='id_cust')
df.groupby('id_cust')['id_purchase'].cumcount()
This is what I expect:
id_cust  id_purchase  value
1        20A          1
2        143C         1
1        99B          2
3        78R          1
2        309D         2
4        90J          1
1        78J          3
You can just use cumcount() on the id_cust column, since id_purchase is unique:
df['value']=df.groupby('id_cust')['id_cust'].cumcount()+1
print(df)
id_cust id_purchase value
0 1 20A 1
1 2 143C 1
2 1 99B 2
3 3 78R 1
4 2 309D 2
5 4 90J 1
6 1 78J 3
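(Side note on the attempt in the question: df.sort_values(by='id_cust') and the groupby(...).cumcount() call each return a new object that was discarded, so df itself never changed; assigning the cumcount result to a column, as above, is the missing step.)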

DB query matching IDs and summing data on columns

Here is the info I have in my tables. What I need is to create a report based on certain dates and sum every stock movement with the same ID.
Table One: Items

ID - NAME
1    White
2    Black
3    Red
4    Blue

Table Two: Stocks

items_id - altas - bajas - created_at
4          5       0       8/10/2016
2          1       5       8/10/2016
3          3       2       8/11/2016
4          1       4       8/11/2016
2          10      2       8/12/2016
So, based on the customer's choice of dates (in this case let's say it selects all the data available in the table), I need to group them by items_id and then SUM all altas and all bajas for that items_id, ending up with the following:
items_id altas bajas
1 0 0
2 11 7
3 3 2
4 6 4
Any help solving this?
Hope this will help:
Stock.select("sum(altas) as altas, sum(bajas) as bajas").group("items_id")
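If you need the equivalent raw SQL, here is a sketch; the table names items and stocks, the date range, and the LEFT JOIN are assumptions based on the headings above and on the expected output, which includes item 1 even though it has no stock rows:

SELECT i.id AS items_id,
       COALESCE(SUM(s.altas), 0) AS altas,
       COALESCE(SUM(s.bajas), 0) AS bajas
FROM items AS i
LEFT JOIN stocks AS s
       ON s.items_id = i.id
      AND s.created_at BETWEEN '2016-08-10' AND '2016-08-12'  -- customer-chosen dates (placeholder)
GROUP BY i.id
ORDER BY i.id;

The LEFT JOIN (with the date filter in the ON clause rather than in a WHERE clause) is what keeps the 1 / 0 / 0 row for the item with no movements; an INNER JOIN would drop it.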

Using temporary extended table to make a sum

From a given table I want to be able to sum values having the same number (should be easy, right?)
Problem: a given value can be assigned to anywhere from 2 to n consecutive numbers.
For some reason this information is stored in a single row describing the value, the starting number, and the ending number, as below:
TABLE A
id | starting_number | ending_number | value
----+-----------------+---------------+-------
  1 |               2 |             5 |     8
  2 |               0 |             3 |     5
  3 |               4 |             6 |     6
  4 |               7 |             8 |    10
For instance, the first row means:
value '8' is assigned to numbers: 2, 3 and 4 (5 is excluded)
So, I would like the following intermediary result table:
TABLE B
id | number | value
----+--------+-------
  1 |      2 |     8
  1 |      3 |     8
  1 |      4 |     8
  2 |      0 |     5
  2 |      1 |     5
  2 |      2 |     5
  3 |      4 |     6
  3 |      5 |     6
  4 |      7 |    10
So I can sum 'value' for elements having an identical 'number':
SELECT number, sum(value)
FROM B
GROUP BY number
TABLE C
number | sum(value)
--------+------------
      2 |         13
      3 |          8
      4 |         14
      0 |          5
      1 |          5
      5 |          6
      7 |         10
I don't know how to do this and didn't find any answer on the web (maybe I was not searching with the appropriate keywords...).
Any idea?
You can do what you want with generate_series(). So, TableB is basically:
select id, generate_series(starting_number, ending_number - 1, 1) as n, value
from tableA;
Your aggregation is then:
select n, sum(value)
from (select id, generate_series(starting_number, ending_number - 1, 1) as n, value
      from tableA
     ) a
group by n;
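Note that generate_series() is PostgreSQL (SQL Server only gained a built-in GENERATE_SERIES in 2022). If you need the same expansion in T-SQL, a recursive CTE is one common substitute; a sketch, assuming the same tableA:

with B as (
    select id, starting_number as number, value
    from tableA
    union all
    select id, number + 1, value
    from B
    where number + 1 < ending_number  -- ending_number is exclusive
)
select number, sum(value) as sum_value
from B
group by number
option (maxrecursion 0);  -- needed if any range spans more than 100 numbers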

How to find count from two joined tables

We have to find the count for each risk category per impact level, as shown in the result at the end.
Risk Table
RiskID RiskName
----------------------
1 Risk1
2 Risk2
3 Risk3
4 Risk4
5 Risk5
6 Risk6
7 Risk7
8 Risk8
9 Risk9
10 Risk10
11 Risk11
Category Table
Cat_ID Cat_Name
--------------------------
1 Design
2 Operation
3 Technical
Risk_Category table
Risk_ID Category_ID
------------------------
1 1
1 2
2 1
3 1
3 3
4 1
5 2
6 1
7 3
8 1
9 3
10 3
Risk_Impact_Assessment table
Risk_ID Impact_Level Impact_Score
---------------------------------------------
1 High 20
2 Medium 15
3 High 20
4 Low 10
5 High 20
6 High 20
7 High 20
8 Low 10
9 Medium 15
10 Low 15
11 Medium 15
The result should be like this:
Cat_Name    Impact_Level_High   Impact_Level_Medium   Impact_Level_Low
-----------------------------------------------------------------------
Design      1                   1                     2
Operation   2
Technical   2                   2                     1
You probably want to use the group by clause, along with case, e.g.:
select
    Cat_Name,
    sum(case when Impact_Level = 'High' then 1 else 0 end) as [Impact_Level_High],
    sum(case when Impact_Level = 'Medium' then 1 else 0 end) as [Impact_Level_Medium],
    sum(case when Impact_Level = 'Low' then 1 else 0 end) as [Impact_Level_Low]
from [Risk_Impact_Assessment]
...
group by Cat_Name;
(I left out all the joins; I assume you can write those without a problem.)
You can use this trick to accomplish a lot of cool things, including parametric sorting and (just like here) complicated aggregate functions with little work.
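For completeness, a sketch with the joins filled in; the join path Category -> Risk_Category -> Risk_Impact_Assessment via Cat_ID and Risk_ID is an assumption based on the columns shown above:

select
    c.Cat_Name,
    sum(case when ria.Impact_Level = 'High' then 1 else 0 end) as [Impact_Level_High],
    sum(case when ria.Impact_Level = 'Medium' then 1 else 0 end) as [Impact_Level_Medium],
    sum(case when ria.Impact_Level = 'Low' then 1 else 0 end) as [Impact_Level_Low]
from [Category] as c
join [Risk_Category] as rc on rc.Category_ID = c.Cat_ID
join [Risk_Impact_Assessment] as ria on ria.Risk_ID = rc.Risk_ID
group by c.Cat_Name;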