How to perform the operation in PostgreSQL? - sql

I have the following table called items_per_order. I want to perform the following operation:
(1 * 500) + (2 * 1000) + (3 * 800) + (4 * 1000).

You can perform operations using standard maths operators:
SELECT *, item_count * order_occurrences AS total FROM your_table;
item_count | order_occurrences | total
1          | 500               | 500
2          | 1000              | 2000
3          | 800               | 2400
4          | 1000              | 4000
And you can calculate the total using the SUM function:
SELECT SUM(item_count * order_occurrences) AS total FROM your_table;
total
8900
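For a self-contained check, here is a minimal sketch; it uses the items_per_order name from the question and assumes the item_count / order_occurrences column names from the query above:
CREATE TABLE items_per_order (item_count int, order_occurrences int);
INSERT INTO items_per_order VALUES (1, 500), (2, 1000), (3, 800), (4, 1000);
SELECT SUM(item_count * order_occurrences) AS total FROM items_per_order;
-- (1*500) + (2*1000) + (3*800) + (4*1000) = 500 + 2000 + 2400 + 4000 = 8900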


How to update USING sum and subquery

I have a SQL Server table similar to this:
InkItemNo | CapacityUnit | NewInk | OldInk | ReturnInk | ProdQty | Description | UsedInk
204       | Machine1     | 5      | 2      | 0         | 4000    | Next        | ?
223       | machine2     | 4      | 3      | 1         | 8000    | NULL        | ?
204       | Machine2     | 0      | 0      | 0         | 5000    | Next        | ?
224       | Machine2     | 4      | 0      | 2         | 3000    | Next        | ?
I'm trying to write a query with this formula:
(NewInk + OldInk - ReturnInk) * ProdQty / SUM(ProdQty)
For example, to get row 1's UsedInk:
(5 + 2 - 2) * 4000 / 12000 = 1.67
Row 2's UsedInk:
(4 + 3 - 1) = 6
Row 3's UsedInk:
(5 + 2 - 2) * 5000 / 12000 = 2.08
Row 4's UsedInk:
(5 + 2 - 2) * 3000 / 12000 = 1.25
This formula applies when:
CapacityUnit and InkItemNo are the same
Description is not NULL
To get the result of used ink, I used this query
update InkEstimationSave =
(NewInk + OldInk - ReturnInk) * ProdQty / Sum(ProdQty]
but it does not work.
Based on your logic, the query you are looking for is the following (fiddle link):
;WITH cte AS
(
    SELECT *, SUM(ProdQty) OVER (PARTITION BY InkItemNo, CapacityUnit) AS denom
    FROM yourtable
)
UPDATE cte
SET UsedInk = (NewInk + OldInk - ReturnInk) * ProdQty * 1.00 / denom  -- multiply by 1.00 before dividing to avoid integer truncation
WHERE Description IS NOT NULL;

SELECT * FROM yourtable;
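If you would rather not update through the CTE, an equivalent UPDATE ... FROM against a derived per-group total should also work; this is only a sketch, assuming the same yourtable name and that the denominator is grouped by InkItemNo and CapacityUnit:
UPDATE t
SET UsedInk = (t.NewInk + t.OldInk - t.ReturnInk) * t.ProdQty * 1.00 / d.denom
FROM yourtable t
JOIN (
    SELECT InkItemNo, CapacityUnit, SUM(ProdQty) AS denom
    FROM yourtable
    GROUP BY InkItemNo, CapacityUnit
) d ON d.InkItemNo = t.InkItemNo AND d.CapacityUnit = t.CapacityUnit
WHERE t.Description IS NOT NULL;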

Divide a value by its group max value in SQL

I have some data which looks like :
Class | Time
1     | 12
1     | 14
2     | 3
1     | 56
3     | 4
5     | 32
...   | ...
How do I write a SQL query to find the average of a class score?
Class score: (100 * Time) / max(Time of that class) [a kind of percentage, but instead of the total time of that class we use the max time of that class.]
The expected result is:
for class 1: avg( (12 * 100 / max(12, 14, 56)), (14 * 100 / max(12, 14, 56)), (56 * 100 / max(12, 14, 56)) )
and the same for all other classes.
Thanks in advance
Table and data
create table classes(Class int,Time int);
insert into classes values(1,12),(1,14),(2,3),(1,56),(3,4),(5,32);
select Class,(100*Time)/max(Time) over(PARTITION BY Class) from classes;
results ....
class | ?column?
-------+----------
1 | 21
1 | 25
1 | 100
2 | 100
3 | 100
5 | 100
If you look closely at your formula, you will notice that it can be simplified a lot,
so this simple query will give the same result:
SELECT class, SUM(time*100)/(MAX(time)*COUNT(*))
FROM table_name
GROUP BY class;
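To see why the simplification works: within one class, MAX(time) is a constant, so AVG(100 * time / MAX(time)) = (1 / COUNT(*)) * SUM(100 * time) / MAX(time) = SUM(time * 100) / (MAX(time) * COUNT(*)), which is exactly the expression in the query above.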
A subquery like SELECT MAX(time) FROM test1 AS b WHERE a.class = b.class helps you find the max value of each class group.
In this example I assume your table is called test1 and contains the time and class columns. test1 is used twice in this query, with the aliases a and b. The special_avg column is your calculated value:
SELECT a.*,
(100 * time / (SELECT MAX(time) FROM test1 AS b WHERE a.class = b.class)) AS special_avg
FROM test1 AS a
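If you prefer to keep the formula literal rather than simplified, you can also wrap the window-function query and average it per class; a sketch against the classes table created above (the 100.0 forces decimal rather than integer division):
SELECT Class, ROUND(AVG(score), 2) AS avg_score
FROM (
    SELECT Class, 100.0 * Time / MAX(Time) OVER (PARTITION BY Class) AS score
    FROM classes
) t
GROUP BY Class
ORDER BY Class;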

Percentage from TOTAL in SQL [duplicate]

Is there any way to calculate the first 80% of the percentages? This is my query:
select
testoo.ttamount,
egct.Category_name,
SUM(pola.LIST_PRICE * nvl(pola.QUANTITY,1)) * NVL(poh.RATE,1)
Line_amount,
ROUND ( SUM((pola.LIST_PRICE * nvl(pola.QUANTITY,1)) * NVL(poh.RATE,1)*100) / (testoo.ttamount) , 2 ) PERCENTAGE,
poh.CURRENCY_CODE
FROM
(SELECT
SUM(test.line_amount) TTAmount
FROM
( select
egct.Category_name,
SUM(pola.LIST_PRICE * nvl(pola.QUANTITY,1)) * NVL(poh.RATE,1)
Line_amount,
poh.CURRENCY_CODE
from EGP_CATEGORIES_TL egct,
PO_LINES_ALL pola,
PO_HEADERS_ALL poh
where
egct.category_ID=pola.category_ID
AND pola.po_header_id = poh.po_header_id
AND LANGUAGE='US'
AND TYPE_LOOKUP_CODE='STANDARD'
AND poh.APPROVED_FLAG='Y'
group by
egct.Category_name,
poh.CURRENCY_CODE,
poh.RATE ) Test ) Testoo,
EGP_CATEGORIES_TL egct,
PO_LINES_ALL pola,
PO_HEADERS_ALL poh
where
egct.category_ID=pola.category_ID
AND pola.po_header_id = poh.po_header_id
AND LANGUAGE='US'
AND TYPE_LOOKUP_CODE='STANDARD'
AND poh.APPROVED_FLAG='Y'
group by
egct.Category_name,
poh.RATE,
testoo.ttamount,
poh.CURRENCY_CODE
order by
Line_amount desc
For example, the output is:
Category | Percentage
1        | 32%
2        | 20%
3        | 20%
4        | 10%
5        | 18%
I want to get the highest percentages, those whose combined percentage is about 80%, so the output will be:
Category | Percentage
1        | 32%
2        | 20%
3        | 20%
4        | 10%
5        | 18%
thanks.
You don't even need to calculate each percentage:
with t(x) as (
select * from table(sys.odcinumberlist(1,1,2,2,3,3,4,4,5,5))
)
select *
from (
select
x,
ratio_to_report(x)over() rtr,
percent_rank()over(order by x) pr
from t
)
where pr<=0.8;
Results:
X RTR PR
---------- ---------- ----------
1 .033333333 0
1 .033333333 0
2 .066666667 .222222222
2 .066666667 .222222222
3 .1 .444444444
3 .1 .444444444
4 .133333333 .666666667
4 .133333333 .666666667
8 rows selected.
Another variant for cumulative percentage filter:
select *
from (
select v.*, 100*sum(rtr)over(order by r) cumulative_percentage
from (
select
rownum r,
column_value val,
ratio_to_report(column_value) over() rtr
from table(sys.odcinumberlist(10,40,30,20))
) v
)
where cumulative_percentage<=80;
R VAL RTR CUMULATIVE_PERCENTAGE
---------- ---------- ---------- ---------------------
1 10 .1 10
2 40 .4 50
3 30 .3 80
3 rows selected.
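Applying the same cumulative idea to the category figures in the question might look like the sketch below; category_totals is a hypothetical CTE standing in for the asker's grouped query (one row per category with its Line_amount):
with category_totals as (
  -- stand-in for the existing grouped query: one row per category
  select Category_name, Line_amount from your_grouped_query
)
select Category_name, round(100 * rtr, 2) as percentage
from (
  select v.*,
         100 * sum(rtr) over (order by Line_amount desc) as cumulative_percentage
  from (
    select Category_name,
           Line_amount,
           ratio_to_report(Line_amount) over () as rtr
    from category_totals
  ) v
)
where cumulative_percentage <= 80
order by percentage desc;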

BigQuery - count(*) of Range of IDs

I'm trying to run a select query on Google BigQuery to group 100000 rows into ranges and count(*) the rows in each range.
Sample rows:
ID
1
2
3
...
100
101
...
100000
I want to group these IDs into buckets of 10000 rows.
For example:
Bucket 1 - IDs 1 to 10000
Next - 10001 to 20000
At the same time, I want the number of rows in each bucket.
I tried the sample code from my previous question (which worked in MySQL and Postgres), but in BigQuery the count(*) is not counting per bucket; instead it is counting individual rows.
Query I used:
select concat(min((id-1) / 10000) * 10000+ 1) || '-' || (min((id-1) / 10000) * 10000+ 10000) as id,
count(*) as total_rows
from mytbl
group by (id-1) / 10000
order by (id);
Expected output:
id | total rows
----------------
1-10000 | 10000
10001-20000 | 10000
20001-30000 | 8000 (if 2000 ids are not there in (where id between 20001 and 30000))
... | ..
... | ..
90001-100000| 10000
If you just want the width of each bucket, how about using this?
10000 as bucket_range
The number is constant; the count(*) is the number of rows that actually fall into the range.
You have used the wrong GROUP BY; you should use TRUNC with the division:
SELECT
  CONCAT(CAST(bucket * 10000 + 1 AS STRING), '-',
         CAST(bucket * 10000 + 10000 AS STRING)) AS id,
  COUNT(*) AS total_rows
FROM (
  SELECT id, CAST(TRUNC((id - 1) / 10000) AS INT64) AS bucket
  FROM mytbl
)
GROUP BY bucket
ORDER BY bucket;
Also, you had used CONCAT with some issues (in BigQuery it needs string arguments); I have corrected that in the query above.
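A quick way to sanity-check the bucketing on synthetic data (a sketch: it generates IDs 1 to 100000 in a CTE and uses DIV, BigQuery's integer division, instead of TRUNC):
WITH mytbl AS (
  SELECT id FROM UNNEST(GENERATE_ARRAY(1, 100000)) AS id
),
bucketed AS (
  SELECT id, DIV(id - 1, 10000) AS bucket FROM mytbl
)
SELECT
  CONCAT(CAST(bucket * 10000 + 1 AS STRING), '-',
         CAST(bucket * 10000 + 10000 AS STRING)) AS id_range,
  COUNT(*) AS total_rows
FROM bucketed
GROUP BY bucket
ORDER BY bucket;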

SQL Bigquery fill null values with calculated from certain factor

I want to fill the null values with a new price. The new price is calculated from another available price of the same product, scaled by the factor.
Given this table:
Prod | unit | factor | price
abc  | X    | 1      | 24000
abc  | Y    | 12     | NULL
xyz  | X    | 1      | NULL
xyz  | Y    | 5      | 60000
xyz  | Z    | 20     | NULL
The formula that comes to mind:
null price = available same-product price * its factor / the null row's factor
With the existing table above, example price formulas will be:
'abc Y price' = 24000 * 1 / 12 = 2000 (the available price is abc X)
'xyz X price' = 60000 * 5 / 1 = 300000 (the available price is xyz Y)
'xyz Z price' = 60000 * 5 / 20 = 15000 (the available price is xyz Y)
Is there any way I can do this?
I think this does what you want:
select t.*,
coalesce(price,
max(price * factor) over (partition by prod) / factor
) as calculated_price
from t;
This replaces NULL prices with the maximum price * factor for the product -- then divided by the factor on the given row.
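For reference, here is a minimal sketch that inlines the sample data as a CTE so the query above can be tried directly (the CTE name t and the column names simply mirror the question's table):
WITH t AS (
  SELECT 'abc' AS prod, 'X' AS unit, 1 AS factor, 24000 AS price UNION ALL
  SELECT 'abc', 'Y', 12, NULL UNION ALL
  SELECT 'xyz', 'X', 1, NULL UNION ALL
  SELECT 'xyz', 'Y', 5, 60000 UNION ALL
  SELECT 'xyz', 'Z', 20, NULL
)
SELECT t.*,
       COALESCE(price,
                MAX(price * factor) OVER (PARTITION BY prod) / factor
       ) AS calculated_price
FROM t;
-- expected: abc Y -> 2000, xyz X -> 300000, xyz Z -> 15000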
Below is for BigQuery Standard SQL. If a product has two or more rows with a price, the null is filled using the row with the lowest factor:
#standardSQL
SELECT t.* REPLACE(IFNULL(t.price, t.factor * p.price / p.factor) AS price)
FROM `project.dataset.table` t
LEFT JOIN (
SELECT prod, ARRAY_AGG(STRUCT(price, factor) ORDER BY factor LIMIT 1)[SAFE_OFFSET(0)].*
FROM `project.dataset.table`
WHERE NOT price IS NULL
GROUP BY prod
) p
USING(prod)
Applied to the sample from your question, the result is:
Row prod unit factor price
1 abc X 1 24000.0
2 abc Y 12 288000.0
3 xyz X 1 12000.0
4 xyz Y 5 60000.0
5 xyz Z 20 240000.0
Note: it looks like your formula needs the factors reversed - for example 60000 * 20 / 5. I am not sure, but that looks more logical to me. If I am wrong, you can adjust t.factor * p.price / p.factor and use p.factor * p.price / t.factor instead.
In that case the result will be the following (which matches what you expected, but as I said, I suspect it is wrong - that is up to you, obviously):
Row prod unit factor price
1 abc X 1 24000.0
2 abc Y 12 2000.0
3 xyz X 1 300000.0
4 xyz Y 5 60000.0
5 xyz Z 20 15000.0