How to calculate global CPI with dynamic instruction counts and determine which computer is faster? - cpu-time

Did I do this problem right? My answer is that P2 (0.667 ms) is faster than P1 (1.04 ms). Which number is the global CPI?
1.6 [20] <§1.6> Consider two different implementations of the same instruction set architecture.
The instructions can be divided into four classes according to their CPI (class A, B, C, and D).
P1: clock rate of 2.5 GHz and CPIs of 1 (10%), 2 (20%), 3 (50%), and 3 (20%).
P2: clock rate of 3 GHz and CPIs of 2 (10%), 2 (20%), 2 (50%), and 2 (20%).
Given a program with a dynamic instruction count of 1.0E6 (1.0 * 10^6) instructions divided into classes as follows: 10% class A, 20% class B, 50% class C, and 20% class D, which implementation is faster?
a. What is the global CPI for each implementation? Which is faster: P1 or P2?
CPU Time = CPU clock cycles / clock rate
CPU clock cycles = sum over the classes of (instruction count of the class * CPI of the class)
That is, one term per class (A, B, C, D): its instruction count multiplied by its CPI.
P1 Clock Cycles = 1.0 * 10^6 dynamic instruction count * 1 CPI * 10% class A
+1.0 * 10^6 dynamic instruction count * 2 CPI * 20% class B
+1.0 * 10^6 dynamic instruction count * 3 CPI * 50% class C
+1.0 * 10^6 dynamic instruction count * 3 CPI * 20% class D
P1 Clock Cycles = (0.1 * 10^6) + (0.4 * 10^6) + (1.5 * 10^6) + (0.6 * 10^6) = 2.6 * 10^6 clock cycles
P2 Clock Cycles = 1.0 * 10^6 dynamic instruction count * 2 CPI * 10% class A
+1.0 * 10^6 dynamic instruction count * 2 CPI * 20% class B
+1.0 * 10^6 dynamic instruction count * 2 CPI * 50% class C
+1.0 * 10^6 dynamic instruction count * 2 CPI * 20% class D
P2 Clock Cycles = (0.2 * 10^6) + (0.4 * 10^6) + (1.0 * 10^6) + (0.4 * 10^6) = 2 * 10^6 clock cycles
P1 CPU Time = (2.6 * 10^6 clock cycles) / 2.5 GHz = 1.04 * (10^6/10^9) s = 1.04 * 10^-3 s = 1.04 ms
P2 CPU Time = (2 * 10^6 clock cycles) / 3 GHz = 0.667 * (10^6/10^9) s = 0.667 * 10^-3 s = 0.667 ms
P2 is faster than P1.

The answer was correct; I had originally found some incorrect solutions online and became concerned about my own answer. This answer clarifies what the global CPI for each computer is and gives more complete units:
P1 CPU Time = (2.6 * 10^6 clock cycles) / 2.5 GHz = 1.04 * (10^6/10^9) s = 1.04 * 10^-3 s = 1.04 ms; global CPI is 2.6 cycles per instruction
P2 CPU Time = (2 * 10^6 clock cycles) / 3 GHz = 0.667 * (10^6/10^9) s = 0.667 * 10^-3 s = 0.667 ms; global CPI is 2 cycles per instruction
P2 is faster than P1.
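The whole calculation fits in a few lines; a minimal sketch in Python of the weighted-average (global) CPI and the CPU-time arithmetic above:

```python
# Global CPI is the instruction-mix-weighted average of the per-class
# CPIs; CPU time = clock cycles / clock rate.
instr_count = 1.0e6                       # dynamic instruction count
mix = [0.10, 0.20, 0.50, 0.20]            # fractions of classes A, B, C, D

def cpu_stats(cpis, clock_hz):
    global_cpi = sum(f * c for f, c in zip(mix, cpis))
    seconds = global_cpi * instr_count / clock_hz
    return global_cpi, seconds

p1 = cpu_stats([1, 2, 3, 3], 2.5e9)       # P1 at 2.5 GHz
p2 = cpu_stats([2, 2, 2, 2], 3.0e9)       # P2 at 3 GHz
print(f"P1: CPI={p1[0]:.2f}, time={p1[1]*1e3:.3f} ms")  # P1: CPI=2.60, time=1.040 ms
print(f"P2: CPI={p2[0]:.2f}, time={p2[1]*1e3:.3f} ms")  # P2: CPI=2.00, time=0.667 ms
```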

Related

Problems with GAMS model

I've been experimenting with GAMS but I still don't have a clue what I'm doing.
Can someone take a look at this short model and try to point me in the right direction?
I get compilation errors at the equations, a few of these:
Dimension different - The symbol is referenced with more/less
indices as declared
Uncontrolled set entered as constant
Sets
i months / 1, 2, 3 /
j months / 1, 2, 3 /;
Parameters
cp(i) production cost in month i
/ 1 1.08
2 1.11
3 1.10
/
rh(i) number of necessary workers in month i
/ 1 3
2 4
3 6
/
cap(i) production capacity in month i
/ 1 25
2 20
3 25
/
q(j) number of motors to deliver in month j
/ 1 10
2 15
3 25
/
Scalar ca cost to store motors for a month /0.15/ ;
variables
mc(i,j) cost of production of motors in month i to be delivered in month j
x(i,j) number of motors produced in month i to be delivered in month j;
free variables
wf workforce
z cost of production
hr human resources;
Equations
cost cost
human_resources human resources
r1 restriction1
r2 restriction2 ;
cost .. z =e= sum((i,j), (cp(i)+(j-i)*ca)*x(i,j)) ;
human_resources .. hr =e= sum(i, sum(j, rh(i)*x(i, j))) ;
*lower than
r1.. sum(j, x(i,j)) =l= cap(i) ;
*greater than
r2.. sum(i, x(i,j)) =g= q(j) ;
Model
motors 'temp' /all/;
Solve motors using mip minimizing mc;
Display mc, x;
This works, but check the solution. I added the positive variable x because otherwise you would get negative production.
The main problem was that you were optimizing a variable (mc) that is declared but never used in any equation. Also, the variable you optimize must be a scalar; it cannot have two dimensions.
Then, constraints r1 and r2 need an index because they must hold for each month, so r1(i) and r2(j). They are actually a "family of constraints".
You cannot subtract the month labels themselves (set elements are labels, not numbers), but you can subtract their positions in the set with ord().
And finally, calculate mc(i,j) as a parameter after you have obtained the solution.
Sets
i months / 1, 2, 3 /
j months / 1, 2, 3 /;
Parameters
cp(i) production cost in month i
/ 1 1.08
2 1.11
3 1.10
/
rh(i) number of necessary workers in month i
/ 1 3
2 4
3 6
/
cap(i) production capacity in month i
/ 1 25
2 20
3 25
/
q(j) number of motors to deliver in month j
/ 1 10
2 15
3 25
/
Scalar ca cost to store motors for a month /0.15/ ;
variables
* mc(i,j) cost of production of motors in month i to be delivered in month j
x(i,j) number of motors produced in month i to be delivered in month j;
positive variable x;
free variables
wf workforce
z cost of production
hr human resources;
Equations
cost cost
human_resources human resources
r1(i) restriction1
r2(j) restriction2 ;
cost .. z =e= sum((i,j), (cp(i)+(ord(j)-ord(i))*ca)*x(i,j)) ;
human_resources .. hr =e= sum(i, sum(j, rh(i)*x(i, j))) ;
*lower than
r1(i).. sum(j, x(i,j)) =l= cap(i) ;
*greater than
r2(j).. sum(i, x(i,j)) =g= q(j) ;
Model
motors 'temp' /all/;
Solve motors using mip minimizing z;
Parameter mc(i,j);
mc(i,j)= (cp(i)+(ord(j)-ord(i))*ca)*x.l(i,j);
Display mc, x.l;

GAMS - Economic Dispatch - QCP and NLP

I have this code:
If I use NLP I get results, but using QCP, as was asked of me, I cannot get results.
Can anyone help me find the reason?
code:
sets g generators / P1*P5 /
properties generator properties / a,b,c,max,min /
cc(properties) cost categories / a,b,c /
table data(g,properties) generator cost characteristics and limits
a b c max min
P1 0.19 58.3 1800 155 35
P2 0.13 39.3 3250 195 60
P3 0.08 11.5 4600 165 95
P4 0.07 42.6 5100 305 170
P5 0.14 8.9 3850 280 130
parameter exp(cc) exponent for cost function / a 2, b 1, c 0 /;
scalar demand total power demand in MW / 730 / ;
variables
p(g) power generation level in MW
cost total generation cost - the objective function ;
positive variables p;
p.up(g) = data(g,"max") ;
p.lo(g) = data(g,"min") ;
equations
Q_Eq1 total cost calculation
Q_Eq2 constraint - total generation must equal demand ;
Q_Eq1 .. cost =e= sum((g,cc), data(g,cc)*power(p(g),exp(cc)));
Q_Eq2 .. sum(g,p(g)) =g= demand ;
model problem /all/ ;
solve problem using QCP minimizing cost ;
It seems the function "power" is treated as nonlinear in general, without analyzing the value of "exp", so it is not allowed in a QCP. You could reformulate Q_Eq1 like this to make it work:
Q_Eq1 .. cost =e= sum((g,cc), data(g,cc)*(1 $(exp(cc)=0) +
p(g) $(exp(cc)=1) +
sqr(p(g))$(exp(cc)=2)));
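A quick way to convince yourself the reformulation is equivalent: the $-conditions just select the constant, linear, and squared terms of the quadratic cost a*p^2 + b*p + c. A small sketch in Python, using P1's coefficients from the table above:

```python
# Check that writing the cost term by term (constant + linear + square)
# gives the same value as the power(p, exp) form it replaces.
a, b, c = 0.19, 58.3, 1800.0   # P1's cost coefficients from the table

def cost_power(p):
    # Original form: one term per exponent, as power(p, exp) would compute
    return a * p**2 + b * p**1 + c * p**0

def cost_reformulated(p):
    # QCP-friendly form: constant, linear, and squared terms spelled out
    return c * 1 + b * p + a * p * p

for p in (35.0, 100.0, 155.0):   # min, mid, max output for P1
    assert abs(cost_power(p) - cost_reformulated(p)) < 1e-6
```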
Best,
Lutz

Does the pricing on high compute queries differ from high byte queries?

For example, if I'm going to run a query that will process 100 MB of data but requires billing tier 12, will that be more expensive than a query that requires billing tier 1 but processes 500 MB?
The cost of query execution is billed bytes x billing tier x $5 per 1 TB,
so in your example
12 x 100 MB will cost 2.4 times more than 1 x 500 MB,
just because of simple math: (12 x 100) / (1 x 500) = 2.4
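The same arithmetic as a small Python sketch (the $5-per-TB rate is from the answer above; the binary MB/TB units are an assumption for illustration and cancel out in the ratio anyway):

```python
# Query cost scales with billed bytes times billing tier,
# at $5 per TB at tier 1.
PRICE_PER_TB = 5.0
TB = 1024**4   # bytes per TB (binary units assumed)

def query_cost(billed_bytes, tier):
    return billed_bytes / TB * tier * PRICE_PER_TB

high_tier = query_cost(100 * 1024**2, 12)   # 100 MB at tier 12
low_tier  = query_cost(500 * 1024**2, 1)    # 500 MB at tier 1
print(high_tier / low_tier)                 # 2.4
```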

Is there a more concise way to calculate the P value for the Anderson-Darling A-squared statistic with VBA?

I have two bits of code in VBA for Excel. One calculates the A-squared statistic for the Anderson-Darling test; this bit of code calculates the P value of that statistic. I am curious whether there is a more concise or more efficient way to calculate this value in VBA:
Function AndDarP(AndDar, Elements)
'Calculates P value for level of significance for the
'Anderson-Darling Test for Normality
'AndDar is the Anderson-Darling Test Statistic
'Elements is the count of elements used in the
'Anderson-Darling test statistic.
'based on calculations at
'http://www.kevinotto.com/RSS/Software/Anderson-Darling%20Normality%20Test%20Calculator.xls
'accessed 21 May 2010
'www.kevinotto.com
'kevin_n_otto#yahoo.com
'Version 6.0
'Permission to freely distribute and modify when properly
'referenced and contact information maintained.
'
'"Keep in mind the test assumes normality, and is looking for sufficient evidence to reject normality.
'That is, a large p-value (often p > alpha = 0.05) would indicate normality.
' * * *
'Test Hypotheses:
'Ho: Data is sampled from a population that is normally distributed
'(no difference between the data and normal data).
'Ha: Data is sampled from a population that is not normally distributed"
Dim M As Double
M = AndDar * (1 + 0.75 / Elements + 2.25 / Elements ^ 2)
Select Case M
Case Is < 0.2
AndDarP = 1 - Exp(-13.436 + 101.14 * M - 223.73 * M ^ 2)
Case Is < 0.34
AndDarP = 1 - Exp(-8.318 + 42.796 * M - 59.938 * M ^ 2)
Case Is < 0.6
AndDarP = Exp(0.9177 - 4.279 * M - 1.38 * M ^ 2)
Case Is < 13
AndDarP = Exp(1.2937 - 5.709 * M + 0.0186 * M ^ 2)
Case Else
AndDarP = 0
End Select
End Function
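Not VBA, but for comparison: the same piecewise approximation is fairly compact in other languages. A sketch in Python with the same coefficients and breakpoints as the function above:

```python
import math

def and_dar_p(a2, n):
    """P value for the Anderson-Darling A-squared statistic (normality
    test), using the same piecewise approximation as the VBA code above.
    a2 is the A-squared statistic, n the number of elements."""
    m = a2 * (1 + 0.75 / n + 2.25 / n**2)
    if m < 0.2:
        return 1 - math.exp(-13.436 + 101.14 * m - 223.73 * m**2)
    if m < 0.34:
        return 1 - math.exp(-8.318 + 42.796 * m - 59.938 * m**2)
    if m < 0.6:
        return math.exp(0.9177 - 4.279 * m - 1.38 * m**2)
    if m < 13:
        return math.exp(1.2937 - 5.709 * m + 0.0186 * m**2)
    return 0.0
```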

Optimisation of Law of Cosines in VB.net

I'm using the law of cosines in a program and it seems to be a slow point of my code. This is the line of code I have:
Ans = Math.Sqrt(A ^ 2 + B ^ 2 - 2 * A * B * Math.Cos(C - D))
Where A to D are Double variables that change on every call. This function seems to take around 2000 ticks to run. I've looked at using the small-angle approximation, where if (C - D) is small enough you can use cos(C - D) = 1 - ((C - D)^2)/2; unfortunately this turned out to be slower overall than the original code. I've also looked for any sort of relationship that could simplify the calculation, but A and C are related in a complex way, B and D are related in the same way, and there's no relationship between A and B or between C and D.
I've thought about using a lookup table for all the values of (C - D), but my accuracy is currently at least 6 significant figures and I would prefer to stay at that level, as that's the accuracy of my input data. In short, that means around a million values in the lookup, and this is only one section of the function. I've thought about having a lookup for all four values (A, B, C and D) but I'm not sure how to implement that.
I've also already multithreaded this application and attempted use of GPGPU (GPGPU ended up being slower due to the time spent loading in and out of GPU memory).
So, my question is how do I speed up this function.
Thanks in advance!
The following runs in less than a third of the time, because ^ on Double operands compiles to a call to Math.Pow, while a * a is a single multiplication:
ans = Math.Sqrt(a * a + b * b - 2 * a * b * Math.Cos(c - d))
Here's the code that proves it:
Dim sw1 As New Stopwatch
Dim sw2 As New Stopwatch
Dim ans, a, b, c, d As Double
a = 5
b = 10
c = 4
d = 2
sw1.Start()
For x As Integer = 1 To 10000
ans = Math.Sqrt(a ^ 2 + b ^ 2 - 2 * a * b * Math.Cos(c - d))
Next
sw1.Stop()
sw2.Start()
For y As Integer = 1 To 10000
ans = Math.Sqrt(a * a + b * b - 2 * a * b * Math.Cos(c - d))
Next
sw2.Stop()
Console.WriteLine(sw1.ElapsedTicks)
Console.WriteLine(sw2.ElapsedTicks)
Console.WriteLine(sw2.ElapsedTicks * 100 / sw1.ElapsedTicks)