Qlik: compare average by dimension to the average of a given dimension value - qlikview

Background:
I have a list of scores for a number of questions, and I have split my participants into generation groups.
I am trying to compare the average score for each question with the average for that generation. My current (failing) expression is below.
if(Generation = 'Boomers2'
,(Avg(Score)-Avg({<Generation = {"Boomers2"}>} Score))
,if(Generation = 'Generation X'
,(Avg(Score)-Avg({<Generation = {"Generation X"}>} Score))
,(Avg(Score)-Avg({<Generation = {"Millenials"}>} Score))
)
)
I'm sure I could do this with some ETL; ideally I'm looking to do this using set analysis, but I will accept either kind of answer. For reference, here is my load script.
SurveyRaw:
LOAD
[F1] as RowID,
Timestamp(Timestamp#([A], 'DD/MM/YYYY hh:mm:ss') ) AS [EntryDate],
[B] AS [YearOfBirth],
[C] AS [PerceivedGeneration],
[D] AS [AbilityToAdapt],
[E] AS [TeamWork],
[F] AS [ProblemSolving],
[G] AS [Collaboration],
[H] AS [Entrepreneurial],
[I] AS [Overtime],
[J] AS [Collaboration2],
[M] AS [FutureQuestion]
FROM [lib://workingstyles]
(html, codepage is 1252, embedded labels, table is #1)
where IsNum([B]) and [B]<1998; //and [B]>=1966;
Scores:
CrossTable(Question, Score)
Load RowID, [AbilityToAdapt],[TeamWork],[ProblemSolving],[Collaboration],[Entrepreneurial],[Overtime],[Collaboration2]
Resident SurveyRaw;
Load Question, Avg(Score) as AvgQuestionScore
Resident Scores
Group By Question;
Left Join (SurveyRaw)
Load Sum(Score) as TotalScore
,Sum(Score)/7 as AvgUserScore
,RowID
Resident Scores Group By RowID;
Drop Fields [AbilityToAdapt],[TeamWork],[ProblemSolving],[Collaboration],[Entrepreneurial],[Overtime],[Collaboration2] From [SurveyRaw];
Generations:
Load * Inline
[Year_Start, Year_End, Generation
1946, 1954, Boomers1
1955, 1965, Boomers2
1966, 1976, Generation X
1977, 1994, Millenials
1995, 2012, Z];
IntervalMatch:
IntervalMatch([YearOfBirth])
Load Distinct Year_Start,Year_End
Resident Generations;

If you can assign an INT value to your Generation, you could do something like:
=Avg(Score) - Avg({<Generation = {$(=Generation-1)}>} Score)

Create two master dimensions based on calculated fields: one with set analysis to calculate the average per generation, and one for the total average over all selections. Doing it in the load script will make it faster.
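A minimal expression-only sketch (my addition, assuming Generation and Question are the chart dimensions): the TOTAL <Generation> qualifier drops the Question grouping while keeping the Generation one, so each question's average is compared with the average across all questions for the same generation, with no nested ifs or per-generation set analysis:
// average for this Question/Generation cell minus the average over
// all questions for the same Generation
Avg(Score) - Avg(TOTAL <Generation> Score)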

Related

Filtering row start given a specific result

I want to create a survival analysis data table. However, I'm having a small problem (the output was shown in an image in the original post).
What I need is for my query to start from the min(FechaCarga) where Estado = 'S'. At some point this variable will start to generate N's, and I want to keep those; there will also be more than just one client.
Basically, for each client I want to keep the Ns that come after the first S, but delete all the rows with Ns before it.
This is my actual code:
select Codigo_Cliente,
       Es_Cliente as Estado,
       ROW_NUMBER() over (partition by Codigo_Cliente order by FechaCarga) as Mes,
       FechaCarga
from Clientes_MAC
where FechaCarga in (select MAX(fecha)
                     from (select distinct FechaCarga as fecha
                           from Clientes_MAC) fec
                     group by MONTH(fecha), YEAR(fecha))
  and Codigo_Cliente = 363193
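A hedged sketch of the filtering itself (my addition, not from the original post; column names follow the query above): a windowed MIN over a CASE finds each client's first 'S' date, rows before it are dropped, and all later Ns are kept.
with marked as (
    select Codigo_Cliente,
           Es_Cliente as Estado,
           FechaCarga,
           -- earliest date with Estado = 'S', per client
           min(case when Es_Cliente = 'S' then FechaCarga end)
               over (partition by Codigo_Cliente) as FirstS
    from Clientes_MAC
)
select Codigo_Cliente,
       Estado,
       row_number() over (partition by Codigo_Cliente order by FechaCarga) as Mes,
       FechaCarga
from marked
where FechaCarga >= FirstS;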

SQL Server drill-down into another table (group by)

OK, I'm quite new to SQL and didn't get much training!
I'm using SSMS to create stored procedures and open them in Excel.
The code below works just fine, but I need to add a drill-down to get more info on some lines.
We need to follow what was invoiced and paid on a batch of contracts for our project. Each contract has multiple lines with a description and a couple of other fields (reference, line #, G/L #, etc.). We also have the value of each line, the amount that was invoiced for it, and the amount that was paid.
The main table 'CSCOMVTL' has the basic info, including the base value and the invoiced amount, but not the paid amount.
'JRAPRVTL' is the list of all invoices, with the invoice no., invoice date, invoiced amount, and paid amount that we may need to see.
So for each base line, we need a +/- button to show/hide the details of its invoices.
The invoice amount and paid amount could come from a rollup, but the invoice number and date won't be on the parent line. If they could reuse columns the parent line doesn't need, that would be great, but I can live with two extra columns.
Thanks!
ALTER PROCEDURE [dbo].[marpt_qmd_AccPmt_DetailsST]
    @contrat varchar(30), @projet varchar(30)
AS
BEGIN
    CREATE TABLE #RPT
    (
        Ligne INT,
        Lien INT,
        Act VARCHAR(10),
        Descr VARCHAR(90),
        MntBase DECIMAL(20,2),
        MntFact DECIMAL(20,2),
        Modif VARCHAR(40),
        Descr3 VARCHAR(90),
        Lien2 INT,
        MntPy DECIMAL(20,2) DEFAULT 0
    )

    INSERT INTO #RPT (Ligne, Lien, Act, Descr, MntBase, MntFact)
    SELECT ROW, DETAILCHANGEORDERCOU, ACTIVITY, DESCRIPTION, AMOUNT, INVOICE
    FROM cscomvtl
    WHERE PROJECTNUMBER = @projet
      AND LTRIM(RTRIM(PONUMBER)) = @contrat

    UPDATE r
    SET Modif = m.CHANGEORDERNUMBER, Descr3 = m.DESCRIPTION, Lien2 = m.CHANGEORDERCOUNTER
    FROM #RPT r, cscomac m
    WHERE m.COUNTER = r.Lien

    UPDATE r
    SET MntPy = d.payment
    FROM #RPT r,
         (SELECT POLINE, SUM(payment) AS payment
          FROM jraprvtl
          WHERE PROJECTNO = @projet
            AND LTRIM(RTRIM(PURCHASEORDER)) = @contrat
          GROUP BY POLINE) d
    WHERE r.Ligne = d.POLINE

    SELECT
        Ligne AS 'Ligne',
        Act AS 'Act.',
        Descr AS 'Description 1',
        MntBase AS '$ Base',
        MntFact AS '$ Invoiced',
        Modif AS 'Num. Modif.',
        Descr3 AS 'Description 2',
        MntPy AS '$ Paid'
    FROM #RPT
    ORDER BY Ligne

    DROP TABLE #RPT
END
First off, take the time to learn SQL. It's an invaluable tool in your toolkit!
Okay, enough of the lecture. Looking through your code, you don't really seem to need the temp table #RPT; you just need to understand JOINs. Hopefully this SQL will get you what you are looking for:
SELECT vtl.ROW AS Ligne, vtl.DETAILCHANGEORDERCOU AS Lien, vtl.ACTIVITY AS Act,
       vtl.DESCRIPTION AS Descr, vtl.AMOUNT AS MntBase, vtl.INVOICE AS MntFact,
       mac.CHANGEORDERNUMBER AS Modif, mac.DESCRIPTION AS Descr3, mac.CHANGEORDERCOUNTER AS Lien2,
       SUM(jrap.payment) AS MntPy
FROM cscomvtl AS vtl
LEFT OUTER JOIN cscomac AS mac
       ON vtl.detailchangeordercou = mac.counter
LEFT OUTER JOIN jraprvtl AS jrap
       ON vtl.row = jrap.poline
WHERE projectnumber = @projet AND LTRIM(RTRIM(ponumber)) = @contrat
GROUP BY vtl.row, vtl.detailchangeordercou, vtl.activity, vtl.description, vtl.amount,
         vtl.invoice, mac.changeordernumber, mac.description, mac.changeordercounter
You will likely have to tweak it to fit what you're trying to do in Excel, since you really didn't give much to go on there.
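If the invoice-level rows themselves are needed for the +/- drill-down (rather than just the rolled-up totals), a hedged sketch along these lines could emit one parent row per contract line plus one child row per invoice; Excel's outline/grouping can then collapse the children. The invoice number and date column names in jraprvtl are assumptions, since the post doesn't give them:
SELECT vtl.ROW AS Ligne, 0 AS IsDetail,
       vtl.DESCRIPTION AS Descr, vtl.AMOUNT AS MntBase, vtl.INVOICE AS MntFact,
       NULL AS InvoiceNo, NULL AS InvoiceDate, NULL AS MntPy
FROM cscomvtl AS vtl
WHERE vtl.PROJECTNUMBER = @projet AND LTRIM(RTRIM(vtl.PONUMBER)) = @contrat
UNION ALL
SELECT jrap.POLINE, 1,
       NULL, NULL, NULL,
       jrap.INVOICENO,      -- assumed column name
       jrap.INVOICEDATE,    -- assumed column name
       jrap.PAYMENT
FROM jraprvtl AS jrap
WHERE jrap.PROJECTNO = @projet AND LTRIM(RTRIM(jrap.PURCHASEORDER)) = @contrat
ORDER BY Ligne, IsDetail;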

TSQL - Reduce the number of records with intelligence - patterns (crash impact data)

I have data from measurements taken during crash impact tests.
When the object is not moving, the measurements contain many rows of the same value; when the object is moving and shaking, they can register quite big fluctuations.
Problem: I have hundreds of millions of rows of this data, and to use it in reporting (mostly plotting) I have to find a way to simplify everything and especially to reduce the number of records.
Sometimes I have exactly the same value (ChannelValue) 20 times in a row.
An example of the data is the following:
idMetaData;TimeStamp;SampleNumber;ChannelValue
3;0,5036500;12073;0.4573468975
3;0,5037000;12074;0.4418814526
3;0,5037500;12075;0.4109505628
3;0,5038000;12076;0.4109505628
3;0,5038500;12077;0.4264160077
3;0,5038999;12078;0.4573468975
3;0,5039499;12079;0.4573468975
3;0,5039999;12080;0.4109505628
3;0,5040500;12081;0.3336233382
3;0,5041000;12082;0.2408306686
3;0,5041500;12083;0.1789688889
3;0,5042000;12084;0.1789688889
3;0,5042500;12085;0.2253652237
3;0,5042999;12086;0.3026924483
3;0,5043499;12087;0.3645542280
3;0,5044000;12088;0.3954851178
3;0,5044500;12089;0.3645542280
3;0,5045000;12090;0.3026924483
3;0,5045500;12091;0.2253652237
3;0,5046000;12092;0.1635034440
3;0,5046499;12093;0.1325725541
3;0,5046999;12094;0.1480379991
3;0,5047500;12095;0.1789688889
3;0,5048000;12096;0.1944343338
3;0,5048500;12097;0.2098997788
3;0,5049000;12098;0.1944343338
3;0,5049500;12099;0.1635034440
3;0,5049999;12100;0.1171071092
3;0,5050499;12101;0.0861762194
3;0,5051000;12102;0.0707107744
3;0,5051500;12103;0.0707107744
3;0,5052000;12104;0.0861762194
3;0,5052500;12105;0.1171071092
3;0,5053000;12106;0.1635034440
idMetaData;TimeStamp;SampleNumber;ChannelValue
50;0,8799999;19600;-0.7106432894
50;0,8800499;19601;-0.7484265845
50;0,8801000;19602;-0.7232377211
50;0,8801500;19603;-0.6098878356
50;0,8802000;19604;-0.6098878356
50;0,8802500;19605;-0.6476711307
50;0,8802999;19606;-0.7232377211
50;0,8803499;19607;-0.7988043114
50;0,8803999;19608;-0.8617764701
50;0,8804500;19609;-0.8491820384
50;0,8805000;19610;-0.8617764701
50;0,8805500;19611;-0.7988043114
50;0,8806000;19612;-0.8239931749
50;0,8806499;19613;-0.7988043114
50;0,8806999;19614;-0.7736154480
50;0,8807499;19615;-0.6602655625
50;0,8807999;19616;-0.5972934038
50;0,8808500;19617;-0.6602655625
50;0,8809000;19618;-0.7484265845
50;0,8809500;19619;-0.8365876066
50;0,8809999;19620;-0.7862098797
50;0,8810499;19621;-0.8113987432
50;0,8810999;19622;-0.7988043114
50;0,8811499;19623;-0.6980488576
50;0,8812000;19624;-0.7232377211
50;0,8812500;19625;-0.7484265845
50;0,8813000;19626;-0.7232377211
50;0,8813500;19627;-0.8239931749
50;0,8813999;19628;-0.8491820384
50;0,8814499;19629;-0.8617764701
50;0,8814999;19630;-0.8365876066
50;0,8815500;19631;-0.8365876066
50;0,8816000;19632;-0.7988043114
50;0,8816500;19633;-0.8113987432
50;0,8817000;19634;-0.8113987432
50;0,8817499;19635;-0.7736154480
50;0,8817999;19636;-0.7232377211
50;0,8818499;19637;-0.6728599942
50;0,8819000;19638;-0.7232377211
50;0,8819500;19639;-0.7610210163
50;0,8820000;19640;-0.7106432894
50;0,8820500;19641;-0.6602655625
50;0,8820999;19642;-0.6602655625
50;0,8821499;19643;-0.6854544259
50;0,8821999;19644;-0.7736154480
50;0,8822500;19645;-0.8113987432
50;0,8823000;19646;-0.8869653335
50;0,8823500;19647;-0.8743709018
50;0,8824000;19648;-0.7988043114
50;0,8824499;19649;-0.8491820384
50;0,8824999;19650;-0.8239931749
50;0,8825499;19651;-0.8239931749
50;0,8825999;19652;-0.7232377211
50;0,8826500;19653;-0.6854544259
50;0,8827000;19654;-0.6728599942
50;0,8827500;19655;-0.6854544259
50;0,8827999;19656;-0.7232377211
50;0,8828499;19657;-0.7232377211
50;0,8828999;19658;-0.6980488576
50;0,8829499;19659;-0.6980488576
50;0,8830000;19660;-0.7106432894
50;0,8830500;19661;-0.6854544259
50;0,8831000;19662;-0.7484265845
50;0,8831499;19663;-0.7484265845
50;0,8831999;19664;-0.7736154480
50;0,8832499;19665;-0.7610210163
50;0,8832999;19666;-0.7610210163
50;0,8833500;19667;-0.7988043114
50;0,8834000;19668;-0.8617764701
50;0,8834500;19669;-0.9121541970
50;0,8835000;19670;-0.8869653335
50;0,8835499;19671;-0.8743709018
50;0,8835999;19672;-0.9121541970
50;0,8836499;19673;-0.8491820384
50;0,8837000;19674;-0.7988043114
50;0,8837500;19675;-0.7736154480
50;0,8838000;19676;-0.7106432894
50;0,8838500;19677;-0.6980488576
50;0,8838999;19678;-0.7484265845
50;0,8839499;19679;-0.8491820384
50;0,8839999;19680;-0.8491820384
50;0,8840500;19681;-0.7610210163
50;0,8841000;19682;-0.7106432894
50;0,8841500;19683;-0.7232377211
50;0,8842000;19684;-0.7962098797
50;0,8842499;19685;-0.7358321528
50;0,8842999;19686;-0.7232377211
50;0,8843499;19687;-0.7484265845
50;0,8844000;19688;-0.6728599942
50;0,8844500;19689;-0.6854544259
50;0,8845000;19690;-0.7106432894
50;0,8845500;19691;-0.7232377211
50;0,8845999;19692;-0.7862098797
50;0,8846499;19693;-0.7862098797
idMetaData;TimeStamp;SampleNumber;ChannelValue
15;0,3148000;8296;1.5081626404
15;0,3148500;8297;1.5081626404
15;0,3149000;8298;1.5727382554
15;0,3149500;8299;1.5081626404
15;0,3150000;8300;1.4920187367
15;0,3150500;8301;1.4435870254
15;0,3151000;8302;1.4274431217
15;0,3151500;8303;1.5243065442
15;0,3152000;8304;1.4920187367
15;0,3152500;8305;1.5081626404
15;0,3153000;8306;1.4920187367
15;0,3153500;8307;1.5565943516
15;0,3154000;8308;1.5081626404
15;0,3154500;8309;1.5404504479
15;0,3155000;8310;1.5081626404
15;0,3155500;8311;1.5727382554
15;0,3156000;8312;1.5404504479
15;0,3156500;8313;1.3951553142
15;0,3157000;8314;1.4758748329
15;0,3157500;8315;1.4435870254
15;0,3158000;8316;1.4920187367
15;0,3158500;8317;1.4920187367
15;0,3159000;8318;1.5081626404
15;0,3159500;8319;1.4597309292
15;0,3160000;8320;1.4274431217
15;0,3160500;8321;1.4274431217
15;0,3161000;8322;1.4597309292
15;0,3161500;8323;1.5565943516
15;0,3162000;8324;1.5888821591
15;0,3162500;8325;1.5565943516
15;0,3163000;8326;1.5243065442
15;0,3163500;8327;1.5404504479
15;0,3164000;8328;1.5404504479
15;0,3164500;8329;1.5404504479
15;0,3165000;8330;1.5404504479
I want to reduce the number of records by a factor of 10 or 20.
One solution would be to keep the average of every 20 rows, but there is a problem: when there is a peak, it will 'evaporate' into the average.
What I need is an average of 20 rows (ChannelValue), but when a value is a 'peak' (definition: it differs by more than 10%, positive or negative, from the last value or two), then keep the peak value instead of the average for that group, and continue the averages from there. This is the 'intelligence' I mean in the title.
I could also use some sort of 'distinct' logic, which would also reduce the number of records by a factor of 8 to 10.
I read about the NTILE function, but this is all new to me.
Partition by idMetaData, order by id (there is an id column which I did not include here).
Thanks so much in advance!
Here's one way. In SQL Server 2012 I'd use LEAD() or LAG(), but since you are on 2008 we can use ROW_NUMBER() with a CTE and then limit on the variation.
declare @test table (idMetaData int, TimeStamp varchar(64), SampleNumber bigint, ChannelValue decimal(16,10))
insert into @test
values
(3,'0,5036500',12073,0.4573468975),
(3,'0,5037000',12074,0.4418814526),
(3,'0,5037500',12075,0.4109505628),
(3,'0,5038000',12076,0.4109505628),
(3,'0,5038500',12077,0.4264160077),
(3,'0,5038999',12078,0.4573468975),
(3,'0,5039499',12079,0.4573468975),
(3,'0,5039999',12080,0.4109505628),
(3,'0,5040500',12081,0.3336233382),
(3,'0,5041000',12082,0.2408306686),
(3,'0,5041500',12083,0.1789688889),
(3,'0,5042000',12084,0.1789688889)
--set the minimum variation to keep; consecutive rows whose change is not
--greater than this are removed
declare @variation decimal(16,10) = 0.0000000010
--apply an order with row_number()
;with cte as(
    select
        idMetaData
        ,TimeStamp
        ,SampleNumber
        ,ChannelValue
        ,row_number() over (partition by idMetaData order by SampleNumber) as RN
    from @test),
--self join to add the next row's values as additional columns
cte2 as(
    select
        c.*
        ,c2.TimeStamp as C2TimeStamp
        ,c2.SampleNumber as C2SampleNumber
        ,c2.ChannelValue as C2ChannelValue
    from cte c
    left join cte c2 on c2.rn = c.rn + 1)
--only return rows where the variation is exceeded; change @variation to see this in action
--(as written this checks only positive changes; wrap the difference in abs() to catch both directions)
select
    idMetaData
    ,TimeStamp
    ,SampleNumber
    ,ChannelValue
from cte2
where
    ChannelValue - C2ChannelValue > @variation or C2ChannelValue is null
This doesn't take an "average" (which would have to be a running average), but it allows you to use a variance measurement to say that any consecutive measurements which only vary by n amount are treated as a single measurement. The higher the variance you choose, the more rows are "removed", i.e. treated as equal. It's a way to cluster your points and remove some noise without using something like k-means, which is hard in SQL.
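The variance filter above doesn't implement the asked-for "average of 20 rows unless there is a peak", so here is a hedged sketch of that logic (my addition; the source table name dbo.Measurements is hypothetical, and "deviates more than 10% from the bucket average" is a simplification of the poster's "differs from the last value(s)" rule). It buckets every 20 samples per idMetaData and keeps one row per bucket: the bucket average, or a sample's value when it strays more than 10% from that average.
;with numbered as (
    select idMetaData, SampleNumber, ChannelValue,
           (row_number() over (partition by idMetaData order by SampleNumber) - 1) / 20 as Bucket
    from dbo.Measurements   -- hypothetical source table name
),
stats as (
    select idMetaData, Bucket,
           avg(ChannelValue) as AvgValue,
           min(SampleNumber) as FirstSample
    from numbered
    group by idMetaData, Bucket
)
select s.idMetaData, s.FirstSample,
       -- keep a peak when it strays more than 10% from the bucket average;
       -- if several samples qualify, this keeps the numerically largest one
       coalesce(max(case when abs(n.ChannelValue - s.AvgValue) > abs(s.AvgValue) * 0.10
                         then n.ChannelValue end),
                s.AvgValue) as ReducedValue
from stats s
join numbered n on n.idMetaData = s.idMetaData and n.Bucket = s.Bucket
group by s.idMetaData, s.Bucket, s.FirstSample, s.AvgValue
order by s.idMetaData, s.Bucket;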
Just for fun: I modified a stored procedure of mine which generates dynamic stats for any table/query/measure; it has been tailored here to be stand-alone.
It generates a series of analytical items for groups of 10 (an arbitrary value).
Just a side note: if there is no true mode, ModeR1 and ModeR2 will represent the series range; when ModeR1 = ModeR2, that is the true mode.
dbFiddle
Example
;with cteBase as (Select GroupBy = [idMetaData]
,Item = Row_Number() over (Partition By [idMetaData] Order By SampleNumber) / 10
,RowNr = Row_Number() over (Partition By [idMetaData] Order By SampleNumber)
,Measure = ChannelValue
,TimeStamp
,SampleNumber
From #YourTable
),
cteMean as (Select GroupBy,Item,Mean=Avg(Measure),Rows=Count(*),MinRow=min(RowNr),MaxRow=max(RowNr) From cteBase Group By GroupBy,Item),
cteMedn as (Select GroupBy,Item,MedRow1=ceiling(Rows/2.0),MedRow2=ceiling((Rows+1)/2.0) From cteMean),
cteMode as (Select GroupBy,Item,Mode=Measure,ModeHits=count(*),ModeRowNr=Row_Number() over (Partition By GroupBy,Item Order By Count(*) Desc) From cteBase Group By GroupBy,Item,Measure)
Select idMetaData = A.GroupBy
,Bin = A.Item+1
,TimeStamp1 = min(TimeStamp)
,TimeStamp2 = max(TimeStamp)
,SampleNumber1 = min(SampleNumber)
,SampleNumber2 = max(SampleNumber)
,Records = count(*)
,StartValue = sum(case when RowNr=B.MinRow then Measure end)
,EndValue = sum(case when RowNr=B.MaxRow then Measure end)
,UniqueVals = count(Distinct A.Measure)
,MinVal = min(A.Measure)
,MaxVal = max(A.Measure)
,Mean = max(B.Mean)
,Median = isnull(Avg(IIF(RowNr between MedRow1 and MedRow2,Measure,null)),avg(A.Measure))
,ModeR1 = isnull(max(IIf(ModeHits>1,D.Mode,null)),min(A.Measure))
,ModeR2 = isnull(max(IIf(ModeHits>1,D.Mode,null)),max(A.Measure))
,StdDev = Stdev(A.Measure)
From cteBase A
Join cteMean B on (A.GroupBy=B.GroupBy and A.Item=B.Item)
Join cteMedn C on (A.GroupBy=C.GroupBy and A.Item=C.Item)
Join cteMode D on (A.GroupBy=D.GroupBy and A.Item=D.Item and ModeRowNr=1)
Group By A.GroupBy,A.Item
Order By A.GroupBy,A.Item

How to efficiently query the same set of columns with different sets of values in the same query

I'm using SQL Server 2014 and I have this query, which needs to be rebuilt to be more efficient at what it is trying to accomplish.
As an example, I created this schema and added data to it so we can replicate the problem. You can try it at rextester (http://rextester.com/AIYG36293):
create table Dogs
(
Name nvarchar(20),
Owner_ID int,
Shelter_ID int
);
insert into Dogs values
('alpha', 1, 1),
('beta', 2, 1),
('charlie', 3, 1),
('beta', 1, 2),
('alpha', 2, 2),
('charlie', 3, 2),
('charlie', 1, 3),
('beta', 2, 3),
('alpha', 3, 3);
I want to find out which shelter has exactly this set of owner and dog-name combinations. This is the query I'm using right now (more or less what Entity Framework generated, with some slight changes to make it simpler):
SELECT DISTINCT
Shelter_ID
FROM Dogs AS [Extent1]
WHERE ( EXISTS (SELECT
1 AS [C1]
FROM [Dogs] AS [Extent2]
WHERE [Extent1].[Shelter_ID] = [Extent2].[Shelter_ID] AND [Extent2].[Name] = 'charlie' AND [Extent2].[Owner_ID] = 1
)) AND ( EXISTS (SELECT
1 AS [C1]
FROM [dbo].[Dogs] AS [Extent3]
WHERE [Extent1].[Shelter_ID] = [Extent3].[Shelter_ID] AND [Extent3].[Name] = 'beta' AND [Extent3].[Owner_ID] = 2
)) AND ( EXISTS (SELECT
1 AS [C1]
FROM [dbo].[Dogs] AS [Extent4]
WHERE [Extent1].[Shelter_ID] = [Extent4].[Shelter_ID] AND [Extent4].[Name] = 'alpha' AND [Extent4].[Owner_ID] = 3
))
This query gets me what I need, but I want to know if there is a simpler way to write it, because in my actual use case I have far more than 3 combinations to worry about; it could reach 1000 or more. Just imagine having 1000 subqueries in there. When I try querying with that many, I get this error:
The query processor ran out of internal resources and could not produce a query plan. This is a rare event and only expected for extremely complex queries or queries that reference a very large number of tables or partitions.
NOTE
One solution I tried was using a PIVOT to flatten the data. The query then becomes simpler, since it is just a WHERE clause with a number of AND conditions, but at some point, with a higher number of combinations, I exceed the maximum allowable row size and get this error when creating the temporary table that stores the flattened data:
Cannot create a row of size 10514 which is greater than the allowable maximum row size of 8060.
I appreciate any help or thoughts on this matter.
Thanks!
Count them.
WITH dogSet AS (
    SELECT *
    FROM (
        VALUES ('charlie',1),('beta',2),('alpha',3)
    ) ts (Name, Owner_ID)
)
SELECT Shelter_ID
FROM Dogs AS [Extent1]
JOIN dogSet ts ON ts.Name = [Extent1].Name AND ts.Owner_ID = [Extent1].Owner_ID
GROUP BY Shelter_ID
HAVING COUNT(*) = (SELECT COUNT(*) FROM dogSet)
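A hedged variant for very large combination sets (my addition, same count-based relational-division idea): load the pairs into an indexed temp table instead of an inline VALUES list. Note that COUNT(*) assumes Dogs has no duplicate (Name, Owner_ID, Shelter_ID) rows; use COUNT(DISTINCT ...) otherwise.
-- holds the combinations to match; populate from the application
-- (bulk insert, table-valued parameter, etc.)
CREATE TABLE #dogSet (Name nvarchar(20) NOT NULL, Owner_ID int NOT NULL,
                      PRIMARY KEY (Name, Owner_ID));

SELECT d.Shelter_ID
FROM Dogs AS d
JOIN #dogSet ts ON ts.Name = d.Name AND ts.Owner_ID = d.Owner_ID
GROUP BY d.Shelter_ID
HAVING COUNT(*) = (SELECT COUNT(*) FROM #dogSet);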

How does Ellucian Banner calculate GPA?

I am working with Ellucian Banner, and I am having a hard time calculating the GPA. (Banner is an ERP used at academic institutions.)
The GPA can be found in many of the views that are built into Banner, including AS_STUDENT_DATA. However, these views are very slow and we have a few reports that are run several times where only the GPA is needed. I am trying to extract the GPA, but the values that I am getting don't all match what is in the views.
Please note that I am able to calculate a GPA using one of many sources on the web, however my values don't perfectly match values in Banner views. (In other words, I am not asking how to calculate a general GPA, which is easy and well documented, but asking how this is done in Banner.)
I have this, which gives values that are close but not all correct:
SELECT PIDM,
       round(sum(TOTAL_POINTS) / sum(TOTAL_CREDITS), 2) AS GPA,
       round(TOTAL_POINTS, 2) AS TOTAL_POINTS, TOTAL_CREDITS, LEVL_CODE
FROM (
    SELECT
        SFRSTCR.SFRSTCR_PIDM AS PIDM,
        sum(SHRGRDE.SHRGRDE_QUALITY_POINTS * SFRSTCR.SFRSTCR_CREDIT_HR) AS TOTAL_POINTS,
        sum(SFRSTCR.SFRSTCR_CREDIT_HR) AS TOTAL_CREDITS,
        SHRGRDE_LEVL_CODE AS LEVL_CODE
    FROM SFRSTCR
    INNER JOIN SHRGRDE ON SFRSTCR.SFRSTCR_GRDE_CODE = SHRGRDE.SHRGRDE_CODE AND SHRGRDE.SHRGRDE_GPA_IND = 'Y'
    WHERE SHRGRDE_GPA_IND = 'Y'
      AND SFRSTCR.SFRSTCR_RSTS_CODE IN ('RE', 'RW', 'RR')
    GROUP BY SFRSTCR.SFRSTCR_PIDM, SHRGRDE_LEVL_CODE -- , SFRSTCR.SFRSTCR_CRN, SFRSTCR_TERM_CODE
) GT
WHERE GT.TOTAL_CREDITS > 0 -- Prevent x/0 errors
GROUP BY PIDM, TOTAL_POINTS, TOTAL_CREDITS, LEVL_CODE
Has anyone tackled this problem? Any idea how Banner does it?
You can use Banner's built-in function for this. It's in a package called SHKSELS; the function is called F_SHRLGPA_VALUE, and the owner of SHKSELS is BANINST1. The inputs to the function are pidm, credit level, indicator type, GPA type, type of request, campus type, and term.
Here is a breakdown, followed by an example.
input 1 - pidm -- self-explanatory
input 2 - credit level value -- options are found by using
select * from stvlevl;
input 3 - indicator type -- options are GPA (calculated GPA) or QP (quality points)
input 4 - GPA type -- options are found using
select distinct shrlgpa_gpa_type_ind from shrlgpa;
input 5 - type of request -- options are V (value of input 3), HA (hours attempted), HE (hours earned), HP (hours passed), or H (hours toward GPA)
input 6 - campus type -- options are found by using
select * from stvcamp;
input 7 - term -- self-explanatory
Most inputs can be NULL if you don't want to be that specific.
EXAMPLE:
SELECT SPRIDEN_ID as IS_NUMBER,
SHKSELS.F_SHRLGPA_VALUE(SPRIDEN_PIDM,'01','GPA','I','V',NULL,NULL) as GPA
FROM SPRIDEN
WHERE SPRIDEN_CHANGE_IND IS NULL;
Hope that helps.
As of Banner 8.x, if the real (final) GPA has already been calculated, the final grades come from the Academic History tables (SHRTCKN, SHRTCKG, SHRTCKL), and you can then get the GPA from the SHRTGPA and SHRLGPA tables (calculated at term and level, respectively).
If you need to recalculate the GPA, use shkcgpa.p_term_gpa with pidm and term as parameters; both GPAs are then recalculated.
Here's a guess. Hopefully it's closer.
SELECT
PIDM, LEVL_CODE,
round(sum(TOTAL_POINTS) / sum(TOTAL_CREDITS), 2) AS GPA,
sum(TOTAL_POINTS) AS TOTAL_POINTS, sum(TOTAL_CREDITS) AS TOTAL_CREDITS
FROM (
SELECT
SFRSTCR.SFRSTCR_PIDM AS PIDM, SHRGRDE_LEVL_CODE AS LEVL_CODE,
sum(SHRGRDE.SHRGRDE_QUALITY_POINTS * SFRSTCR.SFRSTCR_CREDIT_HR) AS TOTAL_POINTS,
sum(SFRSTCR.SFRSTCR_CREDIT_HR) AS TOTAL_CREDITS
FROM
SFRSTCR INNER JOIN SHRGRDE ON SFRSTCR.SFRSTCR_GRDE_CODE = SHRGRDE.SHRGRDE_CODE
WHERE
SHRGRDE_GPA_IND = 'Y' AND SFRSTCR.SFRSTCR_RSTS_CODE IN ('RE', 'RW', 'RR')
GROUP BY
SFRSTCR.SFRSTCR_PIDM, SHRGRDE_LEVL_CODE
) GT
WHERE TOTAL_CREDITS > 0 -- Prevent x/0 errors
GROUP BY PIDM, LEVL_CODE