I would like to sum the result set from a table when it matches a condition.
Suppose the table contains the data below:
ID PILLER AMOUNT
1 1M 10000
2 2M 15000
3 1M 10000
4 3W 50000
5 1M 10000
Now, from the table rows, I would like to sum the amount for 1M, which appears 3 times, into one row.
Is this what you want?
In case there are multiple pillars associated with an ID:
SELECT ID, PILLER, SUM(AMOUNT)
FROM table
WHERE PILLER IN ('2W', '3W', '1M')
GROUP BY ID, PILLER;
or, for a pillar-wise sum only:
SELECT PILLER, SUM(AMOUNT)
FROM table
WHERE PILLER IN ('2W', '3W', '1M')
GROUP BY PILLER;
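For illustration, a minimal self-contained sketch of the pillar-wise version against the sample data (the table name t is only a placeholder):

CREATE TABLE t (ID INT, PILLER VARCHAR(2), AMOUNT INT);
INSERT INTO t VALUES (1, '1M', 10000), (2, '2M', 15000), (3, '1M', 10000), (4, '3W', 50000), (5, '1M', 10000);

-- Returns 1M -> 30000 and 3W -> 50000; 2M is excluded by the IN list
SELECT PILLER, SUM(AMOUNT)
FROM t
WHERE PILLER IN ('2W', '3W', '1M')
GROUP BY PILLER;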
Use the IN operator in the WHERE clause:
select sum(amount) from table
where piller in ('2W','3W','1M')
I would like to select some elements from the last row of each id.
Here is an example of what I have:
id money
1 200
1 150
1 500
3 50
4 40
4 300
5 110
Here is what I would like:
1 500
3 50
4 300
5 110
So as you can see, for each id I took its last row and the money that corresponds to it.
I tried to do a GROUP BY id with ORDER BY id DESC and LIMIT 1, but LIMIT is not available in PROC SQL in SAS, so it doesn't work.
Thanks in advance.
Unlike SAS datasets, SQL tables represent unordered sets. In your case, it looks like you want the maximum value in the second column, in which case you can use aggregation:
proc sql;
    select id, max(money) as money
    from t
    group by id;
quit;
If you actually mean the last row per id based on the ordering in the SAS dataset, I would suggest using a data step instead.
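A minimal sketch of that data step, assuming the dataset t is already sorted (or grouped) by id; the output name last_per_id is a placeholder:

data last_per_id;
    set t;
    by id;        /* BY-group processing requires t to be sorted or grouped by id */
    if last.id;   /* keep only the final observation within each id group */
run;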
I have a Hive table (my_table) which is in ORC format and has 30 columns. Two of the columns (col_us, col_ds) store numeric values which can be 0, null, or some integer. The table is partitioned by day and hour.
The table has approximately 8 million x 96 records in a day's partition, and I am referring to 15 daily partitions.
Currently I am running separate queries to retrieve the top 500 records with a value greater than 0 using a rank function: one query to retrieve col_us and another for col_ds.
It is possible that col_us has a numeric value while col_ds is 0 or null.
Question:
I want to retrieve the top 500 non-null and non-zero records from each of these columns with one query.
My Query:
FROM (
    SELECT D.COL_US, D.DATESTAMP,
           ROW_NUMBER() OVER (PARTITION BY D.ID, D.SUB_ID
                              ORDER BY CONCAT(D.DATESTAMP, D.HOURSTAMP, D.TIMESTAMP) DESC) AS RNK
    FROM ${wf_table_name} D
    WHERE DATESTAMP >= '${datestamp_15}' AND DATESTAMP < '${datestamp}'
      AND COL_US > 0
) T
INSERT OVERWRITE TABLE ${wf_us_table}
SELECT T.COL_US, T.DATESTAMP, T.RNK WHERE T.RNK < 500;
From your query I can guess that you are trying to get the top 500 rows from your table based on date/time, that is, the latest 500 rows where col_us and col_ds both have a value greater than 0, but not the top 500 from each of these columns.
As per your question, your table may have two types of values, for example:
col_us   col_ds
0        5
NULL     10
10       0
5        NULL
or both columns may have values greater than 0.
So instead of AND COL_US > 0 in the WHERE clause, use AND (COL_US > 0 AND COL_DS > 0).
But with this condition you will not get any of the four rows shown above.
So if you want to get 10 and 5 from col_us along with 5 and 10 from col_ds, then I should say it's not possible using a single query.
Again, as per your question, "I want to retrieve top 500 non null and non 0 records from each of these columns from one query",
I can guess that you want to get the top 500 records from col_us and col_ds depending on the values of col_us/col_ds; in that case you must use these columns within the rank clause instead of date/time.
What you want to retrieve you may be able to get with an UPDATE query, depending on the other available columns, but before that I want to request you to share exactly what you want (top 500 based on col_us/col_ds, or the latest 500) along with your base and target table structures.
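If "top 500" does mean the largest values of the column itself, a hedged sketch of that ranking for col_us, reusing the variables from the original query (the same shape would be repeated for col_ds; this only illustrates the suggestion above, not a confirmed requirement):

SELECT T.COL_US, T.DATESTAMP, T.RNK
FROM (
    SELECT D.COL_US, D.DATESTAMP,
           ROW_NUMBER() OVER (ORDER BY D.COL_US DESC) AS RNK   -- rank by the column value instead of date/time
    FROM ${wf_table_name} D
    WHERE DATESTAMP >= '${datestamp_15}' AND DATESTAMP < '${datestamp}'
      AND COL_US > 0
) T
WHERE T.RNK <= 500;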
I have a query output showing a list of orders. Some orders might occupy more than one record in the query output if those orders consist of sub-orders. Each sub-order occupies a separate line in the output. There is an OrderID column which has the same value for all sub-orders of an order in the output:
OrderID Sub-Order Price
1 1 100
1 2 50
2 1 30
3 1 50
I need to add a column "Discount" to the output and fill it according to the following rules:
If a certain order has one sub-order, the discount is 10% of the Price.
If a certain order has more than one sub-order, the discount is 20% on all of its sub-orders.
My query is a UNION of two SELECTs.
I use MS SQL Server with SQL Server Management Studio.
Use CASE and the COUNT window function:
SELECT OrderID, [Sub-Order], Price,
       CASE WHEN COUNT(*) OVER (PARTITION BY OrderID) > 1
            THEN Price * 0.8
            ELSE Price * 0.9
       END
FROM ( <table or query> )
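Applied to the sample rows from the question, a small self-contained sketch (the VALUES constructor and the alias DiscountedPrice are only for illustration); note the CASE expression returns the price after the discount: order 1 has two sub-orders so both rows get Price * 0.8 (80 and 40), while orders 2 and 3 get Price * 0.9 (27 and 45):

SELECT OrderID, [Sub-Order], Price,
       CASE WHEN COUNT(*) OVER (PARTITION BY OrderID) > 1
            THEN Price * 0.8
            ELSE Price * 0.9
       END AS DiscountedPrice
FROM (VALUES (1, 1, 100),
             (1, 2, 50),
             (2, 1, 30),
             (3, 1, 50)) AS t (OrderID, [Sub-Order], Price);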
Need help with this issue. I have a development where I need to find the duplicate values in SQL, then sum the INVOICE_AMOUNT and divide each individual amount by that total. Example:
FA-0001 $25.00 BILL-0001
FA-0001 $75.00 BILL-0002
I need the sum total of this invoice, SUM(INVOICE_AMOUNT) = $100.00, then divide each individual amount by this result, for example 25.00 / 100.00 = 0.25, etc., and multiply this percentage by DET_SOL_AMOUNT.
I need to apply this query only to the duplicate values.
I tried this query:
UPDATE [T4DET]
SET [DET_SOL] = ([LOC_AMOUNT] / SUM([LOC_AMOUNT])) * [DET_SOL_CALC]
FROM [1WEB]
WHERE [1WEB].[INVOICE] IN (SELECT [T4DET].[ASSIGNMENT]
                           FROM [T4DET]
                           GROUP BY [T4DET].[ASSIGNMENT]
                           HAVING COUNT(*) > 1)
Thanks for your Help.
If I understood what you want to do correctly, it is easy with Excel. You need to write formulas in 2 columns only, for example:
Group Amount Bill No DET_SOL_CALC Sum of Group Result
FA-0001 $25.00 BILL-0001 2 100 0.5
FA-0001 $75.00 BILL-0002 2 100 1.5
FA-0002 $200.00 BILL-0001 5 600 1.666666667
FA-0002 $100.00 BILL-0002 5 600 0.833333333
FA-0002 $300.00 BILL-0003 5 600 2.5
Put your data in columns A, B and C
Column D: DET_SOL_CALC
Column E formula should be: =SUMIF($A$2:$A$6,A2,$B$2:$B$6)
Column F formula should be: =B2/E2*D2
Row 1 is the headers of your data.
Put these formulas in row 2 and drag them down to the last row of your data; your numbers should be calculated correctly.
Please hit the check mark if this is your answer!
The alternate solution is: create a temporary table with SUM and GROUP BY, and add three columns for the calculations.
Example:
DET4TEMP
  ASSIGNMENT NVARCHAR
  DOC_AMOUNT MONEY
INSERT INTO DET4TEMP (ASSIGNMENT, [TOTAL])
SELECT ASSIGNMENT, SUM(DOC_AMOUNT) FROM FBL5N GROUP BY ASSIGNMENT
and after that the queries are:
Obtain the DET_SOL amount from the other table:
UPDATE [4BET] SET DET_SOL_CAL = T2.INCOMING_AMOUNT FROM FBL5N T2 WHERE ASSIGNMENT = T2.INV_CON
Obtain the DOC_AMOUNT total from the temporary table:
UPDATE [4BET] SET DOC_AMNT_TOTAL = T2.[TOTAL] FROM DET4TEMP T2 WHERE ASSIGNMENT = T2.ASSIGNMENT
Obtain the calculation percentage:
UPDATE [4BET] SET PERC_CAL_AMNT = (DOC_AMNT_TOTAL / DOC_AMNT), DET_SOL = (PERC_CAL_AMNT * DET_SOL_CALC)
After that, delete the temp table and finish.
This is my solution. The question is: is it viable?
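For comparison, the per-ASSIGNMENT total from the original UPDATE attempt can also be computed with a window function, avoiding the temporary table. A hedged sketch, assuming LOC_AMOUNT, DET_SOL_CALC, DET_SOL and ASSIGNMENT all live in [T4DET] (adjust names to the real schema):

WITH totals AS (
    SELECT [DET_SOL], [DET_SOL_CALC], [LOC_AMOUNT],
           SUM([LOC_AMOUNT]) OVER (PARTITION BY [ASSIGNMENT]) AS total_amount,   -- per-ASSIGNMENT total
           COUNT(*)          OVER (PARTITION BY [ASSIGNMENT]) AS dup_count       -- rows sharing this ASSIGNMENT
    FROM [T4DET]
)
UPDATE totals
SET [DET_SOL] = ([LOC_AMOUNT] / total_amount) * [DET_SOL_CALC]
WHERE dup_count > 1;   -- only the duplicated ASSIGNMENT values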
I have a query based on basic criteria that will return X number of records on any given day.
I'm trying to check the result of the basic query and then apply a percentage split to it based on the total X, splitting it into 2 buckets. Each bucket will be a percentage of the total query result returned in X.
For example:
Query A returns 3500 records.
If the number of records returned from Query A is <= 3000, then split the 3500 records into a 40% / 60% split (1,400 / 2,100).
If the number of records returned from Query A is >= 3001 and <= 50,000, then split the records into a 10% / 90% split. Etc., etc.
I want the actual records returned, and not just the math acting on the records, which returns one row with a number in it.
I'm not sure how you want to display the different parts of the resulting set of rows, so I've just added an additional column (part) to the result set that contains the value 1 for rows belonging to the first part and 2 for the second.
select z.*
     , case
         when cnt_all <= 3000 and cnt <= 40 then 1
         when (cnt_all between 3001 and 50000) and (cnt <= 10) then 1
         else 2
       end part
from (select t.*
           , 100 * (count(col1) over (order by col1) / count(col1) over ()) cnt
           , count(col1) over () cnt_all
      from split_rowset t
      order by col1
     ) z
Demo #1: number of rows 3000.
Demo #2: number of rows 3500.
For better usability you can create a view using the query above and then query that view, filtering by the part column.
Demo #3: using a view.
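A hedged sketch of that idea (the view name split_rowset_parts is a placeholder; the inner ORDER BY is dropped because ordering inside a view has no effect and some engines reject it):

create view split_rowset_parts as
select z.*
     , case
         when cnt_all <= 3000 and cnt <= 40 then 1
         when (cnt_all between 3001 and 50000) and (cnt <= 10) then 1
         else 2
       end part
from (select t.*
           , 100 * (count(col1) over (order by col1) / count(col1) over ()) cnt
           , count(col1) over () cnt_all
      from split_rowset t
     ) z;

-- first bucket; use part = 2 for the second
select *
from split_rowset_parts
where part = 1;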