Query to multiply certain sets of rows on a single table - sql

I've got a bit of a complicated query that I'm struggling with. You will notice that the schema isn't the easiest thing to work with but it's what I've been given and there isn't time to re-design (common story!).
I have rows like the ones below. Note: the 3-digit values are just random numbers I made up.
id field_id value
1 5 999
1 6 888
1 7 777
1 8 foo <--- foo so we want the 3 values above
1 9 don't care
2 5 123
2 6 456
2 7 789
2 8 bar <--- bar so we DON'T want the 3 values above
2 9 don't care
3 5 623
3 6 971
3 7 481
3 8 foo <--- foo so we want the 3 values above
3 9 don't care
...
...
n 5 987
n 6 654
n 7 321
n 8 foo <--- foo so we want the 3 values above
n 9 don't care
I want this result:
id result
1 999*888*777
3 623*971*481
...
n 987*654*321
Is this clear? So we have a table with n*5 rows. For each set of 5 rows: 3 of them have values we might want to multiply together, 1 of them tells us whether we want to multiply, and 1 of them we don't care about, so we don't want that row in the query result.
Can we do this in Oracle? Preferably in one query. I guess you need to use a multiplication operator (somehow) and a grouping.
Any help would be great. Thank you.

something like this:
select m.id, exp(sum(ln(m.value)))
from mytab m
where m.field_id in (5, 6, 7)
and m.id in (select m2.id
             from mytab m2
             where m2.field_id = 8
             and m2.value = 'foo')
group by m.id;
eg:
SQL> select * from mytab;
ID FIELD_ID VAL
---------- ---------- ---
1 5 999
1 6 888
1 7 777
1 8 foo
1 9 x
2 5 123
2 6 456
2 7 789
2 8 bar
2 9 x
3 5 623
3 6 971
3 7 481
3 8 foo
3 9 x
15 rows selected.
SQL> select m.id, exp(sum(ln(m.value))) result
2 from mytab m
3 where m.field_id in (5, 6, 7)
4 and m.id in (select m2.id
5 from mytab m2
6 where m2.field_id = 8
7 and m2.value = 'foo')
8 group by m.id;
ID RESULT
---------- ----------
1 689286024
3 290972773
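One caveat worth adding: LN() is only defined for positive numbers, so the EXP(SUM(LN(...))) trick raises an error as soon as a value is zero or negative. A minimal sketch that guards against zeros, assuming the values for field_ids 5-7 are non-negative numbers stored as strings:
select m.id,
       case when min(to_number(m.value)) = 0 then 0             -- any zero makes the whole product zero
            else exp(sum(ln(nullif(to_number(m.value), 0))))    -- NULLIF avoids the LN(0) error
       end as result
from mytab m
where m.field_id in (5, 6, 7)
and m.id in (select m2.id
             from mytab m2
             where m2.field_id = 8
             and m2.value = 'foo')
group by m.id;
Also note that EXP(SUM(LN(...))) works in floating point, so very large products may come back slightly off and may need rounding.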

Same logic, just without the hard-coded field IDs. Posting this answer in case it's helpful to others.
SELECT a.id,
       exp(sum(ln(a.val)))
FROM mytab a,
     (SELECT DISTINCT id,
                      field_id
      FROM mytab
      WHERE val = 'foo') b
WHERE a.id = b.id
  AND a.field_id < b.field_id
GROUP BY a.id;

Related

Simply stack column to existing table

Simple question. Say I have a table with two columns, PersonID and CardID, and 10 rows. I have another table that contains one column, Code, and 10 rows. I simply want to stack the codes onto the other table. Note that it does not matter which person gets which code; they should all just get one.
At the moment I find myself adding row numbers to both tables and updating where TableA.ID = TableB.ID, but this seems overly complex for such a simple task.
Any suggestions?
TableA
PersonID CardID
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
10 10
Table B
Code
Random1
Random2
Random3
Random4
Random5
Random6
Random7
Random8
Random9
Random10
Desired Result:
PersonID CardID Code
1 1 Random1
2 2 Random2
3 3 Random3
4 4 Random4
5 5 Random5
6 6 Random6
7 7 Random7
8 8 Random8
9 9 Random9
10 10 Random10
Again, code order does not matter. Each person, one code. That's it.
You don't need to append an extra column to each row; you can use a CTE with ROW_NUMBER() like this:
;WITH CTE_Results
AS (
SELECT ROW_NUMBER() OVER (ORDER BY Code) as ID, Code
from TableB B
)
select A.PersonId, A.CardID, CTE_Results.Code from CTE_Results
INNER JOIN TableA A ON A.PersonId= CTE_Results.ID
Working SQL Fiddle: http://sqlfiddle.com/#!18/f8f86/9
The result will be like:
PersonId CardID Code
1 1 Random1
2 2 Random10
3 3 Random2
4 4 Random3
5 5 Random4
6 6 Random5
7 7 Random6
8 8 Random7
9 9 Random8
10 10 Random9
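If PersonID is not guaranteed to be a gapless 1..10 sequence, a variation (just a sketch, reusing the TableA/TableB names above) is to number both tables with ROW_NUMBER() and join on that row number:
;WITH A AS (
    SELECT PersonID, CardID,
           ROW_NUMBER() OVER (ORDER BY PersonID) AS rn   -- any deterministic order will do
    FROM TableA
),
B AS (
    SELECT Code,
           ROW_NUMBER() OVER (ORDER BY Code) AS rn
    FROM TableB
)
SELECT A.PersonID, A.CardID, B.Code
FROM A
INNER JOIN B ON A.rn = B.rn;
Since any person/code pairing is acceptable, the ORDER BY clauses only need to be deterministic, not meaningful.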

re-indexing duplicate rows

Hi, I have the table below:
ID length
1 1050
1 1000
1 900
1 600
2 545
2 434
3 45
3 7
4 5
I need SQL code to produce the table below:
ID IDK length
1 1 1050
1 2 1000
1 3 900
1 4 600
2 1 545
2 2 434
3 1 45
3 2 7
4 1 5
IDK is a new column that re-numbers the rows within each ID (in the example, IDK 1 goes to the largest length).
Thank you very much
This is a pain in MS Access. Here is one way using a correlated subquery:
select t.*,
(select count(*)
from foo as t2
where t2.id = t.id and t2.length >= t.length
) as idk
from foo as t;
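On engines that do support window functions (SQL Server, Oracle, Postgres, and so on, unlike classic Access SQL), the same numbering is more direct; a sketch, assuming the table is called foo as above:
SELECT id,
       ROW_NUMBER() OVER (PARTITION BY id ORDER BY length DESC) AS idk,
       length
FROM foo;
ORDER BY length DESC matches the example output, where IDK 1 goes to the largest length within each ID.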

Using temporary extended table to make a sum

From a given table I want to be able to sum values having the same number (should be easy, right?)
Problem: A given value can be assigned from 2 to n consecutive numbers.
For some reasons this information is stored in a single row describing the value, the starting number and the ending number as below.
TABLE A
id | starting_number | ending_number | value
----+-----------------+---------------+-------
1 2 5 8
2 0 3 5
3 4 6 6
4 7 8 10
For instance the first row means:
value '8' is assigned to numbers: 2, 3 and 4 (5 is excluded)
So, I would like the following intermediary result table
TABLE B
id | number | value
----+--------+-------
1 2 8
1 3 8
1 4 8
2 0 5
2 1 5
2 2 5
3 4 6
3 5 6
4 7 10
So I can sum 'value' for elements having identical 'number'
SELECT number, sum(value)
FROM B
GROUP BY number
TABLE C
number | sum(value)
--------+------------
2 13
3 8
4 14
0 5
1 5
5 6
7 10
I don't know how to do this and didn't find any answer on the web (maybe I'm not searching with the right keywords...).
Any idea?
You can do what you want with generate_series(). So, TableB is basically:
select id, generate_series(starting_number, ending_number - 1, 1) as n, value
from tableA;
Your aggregation is then:
select n, sum(value)
from (select id, generate_series(starting_number, ending_number - 1, 1) as n, value
from tableA
) a
group by n;
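generate_series() is Postgres-specific; on engines without it, a recursive CTE can build the same expansion. A sketch, assuming standard WITH RECURSIVE support and starting_number < ending_number as in the sample data:
with recursive b as (
    select id, starting_number as n, ending_number, value
    from tableA
    union all
    select id, n + 1, ending_number, value
    from b
    where n + 1 < ending_number    -- ending_number itself stays excluded
)
select n, sum(value)
from b
group by n;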

Grouping query into group and subgroup

I want to group my data using SQL or R so that I can get top or bottom 10 Subarea_codes for each Company and Area_code. In essence: the Subarea_codes within the Area_codes where each Company has its largest or smallest result.
data.csv
Area_code Subarea_code Company Result
10 101 A 15
10 101 P 10
10 101 C 4
10 102 A 10
10 102 P 8
10 102 C 5
11 111 A 15
11 111 P 20
11 111 C 5
11 112 A 10
11 112 P 5
11 112 C 10
result.csv should be like this
Company Area_code Largest_subarea_code Result Smallest_subarea_code Result
A 10 101 15 102 10
P 10 101 10 102 8
C 10 102 5 101 4
A 11 111 15 112 10
P 11 111 20 112 5
C 11 112 10 111 5
Within each Area_code there can be hundreds of Subarea_codes but I only want the top and bottom 10 for each Company.
Also this doesn't have to be resolved in one query, but can be divided into two queries, meaning smallest is presented in results_10_smallest and largest in result_10_largest. But I'm hoping I can accomplish this with one query for each result.
What I've tried:
SELECT Company, Area_code, Subarea_code, MAX(Result)
AS Max_result
FROM data
GROUP BY Subarea_code
ORDER BY Company
;
This gives me all the Companies with the highest results within each Subarea_code. Which would mean: A, A, P, A-C for the data above.
Using sqldf package:
df <- read.table(text="Area_code Subarea_code Company Result
10 101 A 15
10 101 P 10
10 101 C 4
10 102 A 10
10 102 P 8
10 102 C 5
11 111 A 15
11 111 P 20
11 111 C 5
11 112 A 10
11 112 P 5
11 112 C 10", header=TRUE)
library(sqldf)
mymax <- sqldf("select Company,
Area_code,
max(Subarea_code) Largest_subarea_code
from df
group by Company,Area_code")
mymaxres <- sqldf("select d.Company,
d.Area_code,
m.Largest_subarea_code,
d.Result
from df d, mymax m
where d.Company=m.Company and
d.Subarea_code=m.Largest_subarea_code")
mymin <- sqldf("select Company,
Area_code,
min(Subarea_code) Smallest_subarea_code
from df
group by Company,Area_code")
myminres <- sqldf("select d.Company,
d.Area_code,
m.Smallest_subarea_code,
d.Result
from df d, mymin m
where d.Company=m.Company and
d.Subarea_code=m.Smallest_subarea_code")
result <- sqldf("select a.*, b.Smallest_subarea_code,b.Result
from mymaxres a, myminres b
where a.Company=b.Company and
a.Area_code=b.Area_code")
If you're already doing it in R, why not use the much more efficient data.table instead of sqldf with SQL syntax? Assuming data is your data set, simply:
library(data.table)
setDT(data)[, list(Largest_subarea_code = Subarea_code[which.max(Result)],
Resultmax = max(Result),
Smallest_subarea_code = Subarea_code[which.min(Result)],
Resultmin = min(Result)), by = list(Company, Area_code)]
# Company Area_code Largest_subarea_code Resultmax Smallest_subarea_code Resultmin
# 1: A 10 101 15 102 10
# 2: P 10 101 10 102 8
# 3: C 10 102 5 101 4
# 4: A 11 111 15 112 10
# 5: P 11 111 20 112 5
# 6: C 11 112 10 111 5
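If you would rather stay with sqldf, window functions (available in SQLite 3.25+, which recent sqldf/RSQLite builds use) express the per-group ranking directly; a sketch against df that keeps up to the top and bottom 10 rows per Company/Area_code:
select Company, Area_code, Subarea_code, Result
from (select Company, Area_code, Subarea_code, Result,
             row_number() over (partition by Company, Area_code
                                order by Result desc) as rk_top,
             row_number() over (partition by Company, Area_code
                                order by Result asc) as rk_bottom
      from df) ranked
where rk_top <= 10 or rk_bottom <= 10
order by Company, Area_code, rk_top;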
There seems to be a discrepancy between the output shown and the description. The description asks for the top 10 and bottom 10 results for each Area code/Company, but the sample output shows only the top 1 and the bottom 1. For example, for area code 10 and company A, subarea 101 is top with a result of 15 and subarea 102 is 2nd largest with a result of 10, so according to the description there should be two rows for that company/area code combination. (If there were more data there would be up to 10 rows for that combination.)
We give two answers. The first assumes the top 10 and bottom 10 are wanted for each company and area code as in the question's description and the second assumes only the top and bottom for each company and area code as in the question's sample output.
1) Top/Bottom 10
Here we assume that the top 10 and bottom 10 results for each Company/Area code are wanted. If it's just the top and bottom one, see (2) later on (or replace 10 with 1 in the code here). Bottom10 is all rows for which there are 10 or fewer subareas for the same area code and company with equal or smaller results; Top10 is similar.
library(sqldf)
Bottom10 <- sqldf("select a.Company,
a.Area_code,
a.Subarea_code Bottom_Subarea,
a.Result Bottom_Result,
count(*) Bottom_Rank
from df a join df b
on a.Company = b.Company and
a.Area_code = B.Area_code and
b.Result <= a.Result
group by a.Company, a.Area_code, a.Subarea_code
having count(*) <= 10")
Top10 <- sqldf("select a.Company,
a.Area_code,
a.Subarea_code Top_Subarea,
a.Result Top_Result,
count(*) Top_Rank
from df a join df b
on a.Company = b.Company and
a.Area_code = B.Area_code and
b.Result >= a.Result
group by a.Company, a.Area_code, a.Subarea_code
having count(*) <= 10")
The description indicated you wanted the top 10 OR the bottom 10 for each company/area code in which case just use one of the results above. If you want to combine them we show a merge below. We have added a Rank column to indicate the smallest/largest (Rank is 1), second smallest/largest (Rank is 2), etc.
sqldf("select t.Area_code,
t.Company,
t.Top_Rank Rank,
t.Top_Subarea,
t.Top_Result,
b.Bottom_Subarea,
b.Bottom_Result
from Bottom10 b join Top10 t
on t.Area_code = b.Area_code and
t.Company = b.Company and
t.Top_Rank = b.Bottom_Rank
order by t.Area_code, t.Company, t.Top_Rank")
giving:
Area_code Company Rank Top_Subarea Top_Result Bottom_Subarea Bottom_Result
1 10 A 1 101 15 102 10
2 10 A 2 102 10 101 15
3 10 C 1 102 5 101 4
4 10 C 2 101 4 102 5
5 10 P 1 101 10 102 8
6 10 P 2 102 8 101 10
7 11 A 1 111 15 112 10
8 11 A 2 112 10 111 15
9 11 C 1 112 10 111 5
10 11 C 2 111 5 112 10
11 11 P 1 111 20 112 5
12 11 P 2 112 5 111 20
Note that this format makes less sense if there are ties and, in fact, could generate more than 10 rows for a Company/Area code, so you might just want to use the individual Top10 and Bottom10 in that case. You could also consider jittering df$Result if this is a problem:
df$Result <- jitter(df$Result)
# now perform SQL statements
2) Top/Bottom Only
Here we give only the top and bottom results and the corresponding subareas for each company/area code. Note that this uses an extension to SQL supported by SQLite, and the SQL code is substantially simpler:
Bottom1 <- sqldf("select Company,
Area_code,
Subarea_code Bottom_Subarea,
min(Result) Bottom_Result
from df
group by Company, Area_code")
Top1 <- sqldf("select Company,
Area_code,
Subarea_code Top_Subarea,
max(Result) Top_Result
from df
group by Company, Area_code")
sqldf("select a.Company,
a.Area_code,
Top_Subarea,
Top_Result,
Bottom_Subarea
Bottom_Result
from Top1 a join Bottom1 b
on a.Company = b.Company and
a.Area_code = b.Area_code
order by a.Area_code, a.Company")
This gives:
Company Area_code Top_Subarea Top_Result Bottom_Subarea Bottom_Result
1 A 10 101 15 102 10
2 C 10 102 5 101 4
3 P 10 101 10 102 8
4 A 11 111 15 112 10
5 C 11 112 10 111 5
6 P 11 111 20 112 5
Update: correction and added (2).
The answers above are fine for fetching the max result.
This solves the top-10 issue:
data.top <- data[ave(-data$Result, data$Company, data$Area_code, FUN = rank) <= 10, ]
In this script the user declares the company; the script then returns the top 10 max results (the same approach works for min values).
Result <- NULL
A <- read.table("/your-file.txt", header=TRUE, sep="\t", na.strings="NA")
Company <- A$Company == "A"   # can be A, C, P or other values
Subarea <- unique(A$Subarea_code)
for (i in 1:length(Subarea)) {
  Result[i] <- max(A$Result[Company & A$Subarea_code == Subarea[i]])
}
Res1 <- t(rbind(Subarea, Result))
Res2 <- Res1[order(-Res1[, 2]), ]
Res2[1:10, ]

Compare number range or char to get the result in Oracle

I want to get RSC by comparing the values from tab.col1, tab.col2, tab.col3, tab.col4 to Result.INTR.
The tab table has thousands of rows.
If any of col1 to col4 is NULL, then return 1.
Col1 will hold values pertaining to RID = 10
Col2 will hold values pertaining to RID = 20
Col3 will hold values pertaining to RID = 30
Col4 will hold values pertaining to RID = 40
For example:
if tab.col1 is 3 then 4
if tab.col2 is 'R' then 3
if tab.col3 is 1900 then the query should give 4
if 1945 then 3
if 1937 then 3 (the lower bound is exclusive and the upper bound is inclusive)
if tab.col4 is 6 then 5
and so on.....
Result table
RID INTR RSC
----- ----- -----
10 1 0
10 2 1
10 3 4
10 4 2
20 I 4
20 R 3
20 U 1
30 1900 5
30 1900-1937 4
30 1937-1967 3
30 1967 3
40 3-4 2
40 1-3 1
40 4 5
Check the CASE and DECODE functions of Oracle. Google for some examples; you will be able to implement your requirement with them.
For example, check this one: http://www.club-oracle.com/forums/case-and-decode-two-powerfull-constructs-of-sql-t181/
You would do this like:
select (case when tab.col1 = 3 then 4
when tab.col2 = 'R' then 3
when tab.col3 = 1900 then 4
when tab.col3 in (1945, 1937) then 3
when tab.col4 = 6 then 5
. . .
Try something like:
select t1.RSC, t2.RSC, t3.RSC, t4.RSC
from your_table tab join Result t1 on t1.RID=10 and t1.INTR=tab.col1
join Result t2 on t2.RID=20 and t2.INTR=tab.col2
join Result t3 on t3.RID=30 and t3.INTR >= regexp_substr(tab.col3, '^\d*') and t3.INTR <= regexp_substr(tab.col3, '\d*$')
join Result t4 on t4.RID=40 and t4.INTR >= regexp_substr(tab.col4, '^\d*') and t4.INTR <= regexp_substr(tab.col4, '\d*$')
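If it is actually Result.INTR that holds the range strings (such as '1900-1937') while tab.col3 holds a plain number, the bounds would need to be parsed on the Result side instead. A sketch for the RID = 30 lookup only, assuming the table and column names above and the lower-exclusive, upper-inclusive rule stated in the question:
select t.col3, r.RSC
from tab t
join Result r
  on r.RID = 30
 and ( (instr(r.INTR, '-') = 0                                    -- single value: exact match
        and to_number(r.INTR) = t.col3)
    or (instr(r.INTR, '-') > 0                                    -- 'low-high' range
        and t.col3 >  to_number(regexp_substr(r.INTR, '^\d+'))
        and t.col3 <= to_number(regexp_substr(r.INTR, '\d+$'))) );
The other columns (col1, col2, col4) would get analogous joins against their RID values, and the NULL-means-1 rule could be layered on top with a CASE or NVL around the result.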