Syntax error with AMPL

I get a syntax error when running this script in AMPL. Can someone help me solve it?
param K;
param N;
param PT;
param beta_lower{1..K};
param beta_upper{1..K};
set KSET := 1..K;
set NSET := 1..N;
param Channel {KSET,NSET};
var V;
var C {KSET, NSET} binary;
#==================================
data;
param K:=2;
param N:=64;
param PT:= 1;
param beta_lower:= 1 1.99 2 3.99;
param beta_upper:= 1 2.01 2 4.01;
param Channel : 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 :=
1 1366 1474 1583 1690 1790 1881 1963 2036 2101 2161 2217 2268 2315 2355 2385 2402 2403 2386 2350 2295 2223 2137 2041 1939 1835 1734 1639 1553 1479 1419 1375 1347 1335 1339 1357 1386 1421 1459 1494 1523 1542 1548 1540 1520 1490 1451 1409 1364 1321 1279 1239 1201 1164 1127 1092 1060 1034 1016 1012 1024 1055 1107 1178 1265
2 1297 1281 1250 1201 1135 1055 963 867 772 685 611 555 519 504 510 536 579 636 702 775 851 928 1002 1074 1143 1209 1276 1345 1420 1503 1596 1698 1808 1921 2033 2137 2225 2290 2327 2333 2309 2256 2180 2089 1989 1890 1796 1712 1641 1582 1533 1493 1458 1425 1393 1364 1337 1314 1298 1289 1288 1292 1297 1301;
I wrote this code in a rich-text file (.rtf) and uploaded it to the NEOS server.
The output from the solver is:
amplin, line 7 (offset 54):
syntax error
context: >>> {\ <<< rtf1\ansi\ansicpg1252\cocoartf12
processing commands.
Executing on neos-2.neos-server.org
Error (2) in /opt/ampl/ampl -R amplin

The problem is that you mix AMPL model and data syntax in your code. In particular, you should first declare the parameter beta_lower in the model:
param beta_lower{1..K}; # you may want a different indexing expression here
and then provide data for it in the data section:
data;
param beta_lower:= 1 1.99 2 3.99;
Your updated formulation looks correct, but the error output shows that the file is in RTF format (note the {\rtf1\ansi... markup in the error context). To make it work, convert the file to plain text before uploading.
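For reference, here is a minimal plain-text sketch of how the model and data sections fit together (the indexing of beta_lower over 1..K follows the question; adjust as needed):
# model section
param K;
param beta_lower{1..K};

# data section
data;
param K := 2;
param beta_lower := 1 1.99  2 3.99;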

Related

How to call the same index in each of 2 dimensions in a GAMS variable?

I am running a model in GAMS with a number of two-dimensional variables, indexed by i and j. A few constraints apply only to the "diagonal" entries, so I wrote them as d(i,i), but GAMS raises error 171 when I do this; the first error appears on equation e3. How do I express this in GAMS? Code follows:
Set
i 'Origin' / 1*20 /
j 'Destination' / 1*20 /;
Table A(i,j)
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
1 0 4 2 574 481 440 408 633 573 1066 486 1926 1537 183 334 374 107 509 378 499
2 4 0 2 573 480 438 412 632 572 1064 484 1924 1535 182 332 372 111 508 377 498
3 3 2 0 572 479 437 411 631 571 1063 484 1923 1534 181 332 371 109 507 376 497
4 574 572 572 0 93 249 135 1188 1128 1620 1040 2480 2091 737 888 928 683 498 367 488
5 481 480 479 93 0 156 228 1095 1035 1527 948 2387 1998 645 796 836 591 406 275 396
6 440 438 437 249 156 0 384 1053 993 1485 906 2345 1956 603 754 794 549 364 233 354
7 409 412 411 135 228 384 0 1044 984 1476 897 2336 1947 594 745 785 302 633 502 623
8 633 632 631 1188 1095 1053 1044 0 532 1024 623 1246 792 450 300 403 742 560 992 668
9 573 572 571 1128 1035 993 984 532 0 494 326 879 424 392 385 207 682 1063 932 1053
10 1066 1064 1063 1620 1527 1485 1476 1024 494 0 820 1979 823 885 878 699 1175 1555 1424 1545
11 486 484 484 1040 948 906 897 623 326 820 0 724 270 359 477 223 595 975 844 966
12 1926 1924 1923 2480 2387 2345 2336 1246 878 1979 724 0 454 1943 1099 846 2035 2415 2284 2405
13 1537 1535 1534 2091 1998 1956 1947 792 424 823 270 454 0 1554 645 392 1646 2026 1895 2016
14 183 182 181 738 645 603 594 450 392 884 359 1943 1554 0 151 192 292 672 542 663
15 334 332 332 888 796 754 745 300 385 878 477 1099 645 151 0 257 443 261 692 369
16 373 372 371 928 835 793 784 403 207 699 223 846 392 192 256 0 482 863 732 853
17 107 111 109 683 591 549 302 742 683 1175 595 2035 1646 292 443 483 0 618 487 608
18 509 507 507 498 406 364 633 560 1063 1555 975 2415 2026 672 261 863 618 0 134 180
19 378 377 376 367 275 233 502 992 932 1424 844 2284 1895 542 692 732 487 134 0 122
20 500 498 497 489 396 354 624 668 1053 1545 966 2405 2016 663 369 854 609 180 122 0
;
Positive variable c(i,j) 'cost direct';
Positive variable cstar(i,j) 'cost routing';
Positive variable z 'objective';
Binary variable d(i,j) 'decision direct';
Binary variable dstar(i,j) 'decision routing';
Binary variable p(i,j) 'd(i,i)*dstar(i,j)';
Binary variable q(i,j) 'd(i,j)*p(i,j)';
Binary variable x(i,j) 'd(j,j)*dstar(i,j)';
Binary variable y(i,j) 'd(i,j)*x(i,j)';
Binary variable f(i,j) 'd(i,i)*d(j,j)';
Binary variable g(i,j) 'f(i,j)*dstar(i,j)';
Integer variable h(i) 'd(i,i)*u(i)';
Integer variable k(i,j) 'd(i,i)*u(j)';
Integer variable m(i,j) 'f(i,j)*u(i)';
Integer variable u(i) 'ordering of tours';
Equation
e1
e2(i,j)
e3
e4(j)
e5(i,j)
e6(i,j)
e7(i)
e8(j)
e9(i)
e10(i,j)
e11(i,j)
e12(i,j)
e13(i,j)
e14(i,j)
e15(i,j)
e16(i,j)
e17(i,j)
e18(i,j)
e19(i,j)
e20(i,j)
e21(i,j)
e22(i,j)
e23(i,j)
e24(i,j)
e25(i,j)
e26(i,j)
e27(i,j)
e28(i)
e29(i)
e30(i)
e31(i,j)
e32(i,j)
e33(i,j)
e34(i,j)
e35(i,j)
e36(i,j)
e37(i,j)
e38(i,j)
e39(i)
e40(i)
e41(i)
e42(i)
e43(i,j)
e44(i,j)
e45(i,j)
e46(i,j)
e47(i,j)
e48(i,j)
e49(i,j)
e50(i,j)
e51(i,j);
e1 .. z =e= sum((i,j),c(i,j))+sum((i,j),cstar(i,j));
e2(i,j) .. c(i,j) =e= A(i,j)*d(i,j);
e3 .. sum((i),d(i,i)) =e= 2;
e4(j) .. sum((i),d(i,j)) =e= 1;
e5(i,j) .. d(i,j)-d(i,i) =l= 0;
e6(i,j) .. cstar(i,j) =e= A(i,j)*dstar(i,j);
e7(i) .. dstar(i,i) =e= 0;
e8(j) .. sum((i),dstar(i,j)) =e= 1;
e9(i) .. sum((j),dstar(i,j)) =e= 1;
e10(i,j) .. p(i,j) =l= d(i,i);
e11(i,j) .. p(i,j) =l= dstar(i,j);
e12(i,j) .. p(i,j) =g= d(i,i)+dstar(i,j)-1;
e13(i,j) .. p(i,j) =g= 0;
e14(i,j) .. q(i,j) =l= d(i,j);
e15(i,j) .. q(i,j) =l= p(i,j);
e16(i,j) .. q(i,j) =g= d(i,j)+p(i,j)-1;
e17(i,j) .. q(i,j) =g= 0;
e18(i,j) .. p(i,j)-q(i,j) =e= 0;
e19(i,j) .. x(i,j) =l= d(j,j);
e20(i,j) .. x(i,j) =l= dstar(i,j);
e21(i,j) .. x(i,j) =g= d(j,j)+dstar(i,j)-1;
e22(i,j) .. x(i,j) =g= 0;
e23(i,j) .. y(i,j) =l= d(i,j);
e24(i,j) .. y(i,j) =l= x(i,j);
e25(i,j) .. y(i,j) =g= d(i,j)+x(i,j)-1;
e26(i,j) .. y(i,j) =g= 0;
e27(i,j) .. x(i,j)-y(i,j) =e= 0;
e28(i) .. u(i) =l= 20;
e29(i) .. 1000*(1-d(i,i)) =g= u(i)-1;
e30(i) .. 2-d(i,i) =l= u(i);
e31(i,j) .. f(i,j) =l= d(i,i);
e32(i,j) .. f(i,j) =l= dstar(j,j);
e33(i,j) .. f(i,j) =g= d(i,i)+dstar(j,j)-1;
e34(i,j) .. f(i,j) =g= 0;
e35(i,j) .. g(i,j) =l= f(i,j);
e36(i,j) .. g(i,j) =l= dstar(i,j);
e37(i,j) .. g(i,j) =g= f(i,j)+dstar(i,j)-1;
e38(i,j) .. g(i,j) =g= 0;
e39(i) .. 0 =l= h(i);
e40(i) .. h(i) =l= 1000*d(i,i);
e41(i) .. u(i)-1000*(1-d(i,i)) =l= h(i);
e42(i) .. h(i) =l= u(i);
e43(i,j) .. 0 =l= k(i,j);
e44(i,j) .. k(i,j) =l= 1000*d(i,i);
e45(i,j) .. u(j)-1000*(1-d(i,i)) =l= k(i,j);
e46(i,j) .. k(i,j) =l= u(j);
e47(i,j) .. 0 =l= m(i,j);
e48(i,j) .. m(i,j) =l= 1000*f(i,j);
e49(i,j) .. u(i)-1000*(1-f(i,j)) =l= m(i,j);
e50(i,j) .. m(i,j) =l= u(i);
e51(i,j) .. u(i)-u(j)+1-h(i)+k(i,j)-d(i,i)-k(j,i)+h(j)-d(j,j)-m(i,j)+m(j,i)-f(i,j) =l= 19*(1-d(i,i)-d(j,j)-f(i,j)-dstar(i,j)+p(i,j)+x(i,j)+g(i,j));
Model transport / all /;
solve transport using mip minimizing z;
display d.l, dstar.l;
The problem is that you declared d (and the other symbols) over the domain (i,j) but access it as (i,i). Since you declared i and j as different sets, this is a domain violation. Is that really what you want, or are they actually the same set (i.e., is each origin also a destination)? Looking at table A, that seems to be the case. So instead of declaring j as a new set, you should define it as an alias of i. The start of your model would then look like this:
Set
i 'Origin' / 1*20 /;
Alias(i,j);
The rest stays the same.
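With the alias in place, diagonal references like d(i,i) compile, because i and j now range over the same set. A minimal sketch, stripped down from the question's model:
Set i 'Origin' / 1*20 /;
Alias (i, j);
Binary variable d(i,j) 'decision direct';
Equation e3 'select exactly two diagonal entries';
e3 .. sum(i, d(i,i)) =e= 2;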

Taking the last two rows' minimum value

I have this data frame:
ID Date X 123_Var 456_Var 789_Var
A 16-07-19 3 777 250 810
A 17-07-19 9 637 121 529
A 18-07-19 7 878 786 406
A 19-07-19 4 656 140 204
A 20-07-19 2 295 272 490
A 21-07-19 3 778 600 544
A 22-07-19 6 741 792 907
B 01-07-19 4 509 690 406
B 02-07-19 2 732 915 199
B 03-07-19 2 413 725 414
B 04-07-19 2 170 702 912
B 09-08-19 3 851 616 477
B 10-08-19 9 475 447 555
B 11-08-19 1 412 403 708
B 12-08-19 2 299 537 321
B 13-08-19 4 310 119 125
C 01-12-18 4 912 755 657
C 02-12-18 4 586 771 394
C 04-12-18 9 498 122 193
C 05-12-18 2 500 528 764
C 06-12-18 1 982 383 654
C 07-12-18 1 299 496 488
C 08-12-18 3 336 691 496
C 09-12-18 3 206 433 263
C 10-12-18 2 373 319 111
I want to show the minimum of the current row's value and the previous row's value, for each of the columns 123_Var, 456_Var, and 789_Var.
That should be applied separately for each ID (a groupby).
The first row of each ID shows the current value, since there is no "previous" value to compare against.
Expected result:
ID Date X 123_Var 456_Var 789_Var 123_Min2 456_Min2 789_Min2
A 16-07-19 3 777 250 810 777 250 810
A 17-07-19 9 637 121 529 637 121 529
A 18-07-19 7 878 786 406 637 121 406
A 19-07-19 4 656 140 204 656 140 204
A 20-07-19 2 295 272 490 295 140 204
A 21-07-19 3 778 600 544 295 272 490
A 22-07-19 6 741 792 907 741 600 544
B 01-07-19 4 509 690 406 509 690 406
B 02-07-19 2 732 915 199 509 690 199
B 03-07-19 2 413 725 414 413 725 199
B 04-07-19 2 170 702 912 170 702 414
B 09-08-19 3 851 616 477 170 616 477
B 10-08-19 9 475 447 555 475 447 477
B 11-08-19 1 412 403 708 412 403 555
B 12-08-19 2 299 537 321 299 403 321
B 13-08-19 4 310 119 125 299 119 125
C 01-12-18 4 912 755 657 912 755 657
C 02-12-18 4 586 771 394 586 755 394
C 04-12-18 9 498 122 193 498 122 193
C 05-12-18 2 500 528 764 498 122 193
C 06-12-18 1 982 383 654 500 383 654
C 07-12-18 1 299 496 488 299 383 488
C 08-12-18 3 336 691 496 299 496 488
C 09-12-18 3 206 433 263 206 433 263
C 10-12-18 2 373 319 111 206 319 111
IIUC, we can use groupby.shift to select the previous row's values for each ID, then DataFrame.where to keep the previous value only in the cells where it is lower than the current one, filling the rest with the current value. DataFrame.add_suffix adds _Min2, and we attach the result to df with DataFrame.join:
cols = ['123_Var', '456_Var', '789_Var']
df_vars = df[cols]

df = df.join(df.groupby('ID')[cols]
               .shift()
               .fillna(df_vars)
               .where(lambda x: x.le(df_vars), df_vars)
               .add_suffix('_Min2'))
print(df)
Output
ID Date X 123_Var 456_Var 789_Var 123_Var_Min2 456_Var_Min2 789_Var_Min2
0 A 16-07-19 3 777 250 810 777.0 250.0 810.0
1 A 17-07-19 9 637 121 529 637.0 121.0 529.0
2 A 18-07-19 7 878 786 406 637.0 121.0 406.0
3 A 19-07-19 4 656 140 204 656.0 140.0 204.0
4 A 20-07-19 2 295 272 490 295.0 140.0 204.0
5 A 21-07-19 3 778 600 544 295.0 272.0 490.0
6 A 22-07-19 6 741 792 907 741.0 600.0 544.0
7 B 01-07-19 4 509 690 406 509.0 690.0 406.0
8 B 02-07-19 2 732 915 199 509.0 690.0 199.0
9 B 03-07-19 2 413 725 414 413.0 725.0 199.0
10 B 04-07-19 2 170 702 912 170.0 702.0 414.0
11 B 09-08-19 3 851 616 477 170.0 616.0 477.0
12 B 10-08-19 9 475 447 555 475.0 447.0 477.0
13 B 11-08-19 1 412 403 708 412.0 403.0 555.0
14 B 12-08-19 2 299 537 321 299.0 403.0 321.0
15 B 13-08-19 4 310 119 125 299.0 119.0 125.0
16 C 01-12-18 4 912 755 657 912.0 755.0 657.0
17 C 02-12-18 4 586 771 394 586.0 755.0 394.0
18 C 04-12-18 9 498 122 193 498.0 122.0 193.0
19 C 05-12-18 2 500 528 764 498.0 122.0 193.0
20 C 06-12-18 1 982 383 654 500.0 383.0 654.0
21 C 07-12-18 1 299 496 488 299.0 383.0 488.0
22 C 08-12-18 3 336 691 496 299.0 496.0 488.0
23 C 09-12-18 3 206 433 263 206.0 433.0 263.0
24 C 10-12-18 2 373 319 111 206.0 319.0 111.0
Case 2: if you want the minimum over a rolling window of the n most recent rows, use groupby.rolling:
cols = ['123_Var', '456_Var', '789_Var']
n = 3

df = df.join(df.groupby('ID')[cols]
               .rolling(n, min_periods=1).min()
               .reset_index(drop=True)
               .add_suffix(f'_Min{n}'))
print(df)
ID Date X 123_Var 456_Var 789_Var 123_Var_Min3 456_Var_Min3 789_Var_Min3
0 A 16-07-19 3 777 250 810 777.0 250.0 810.0
1 A 17-07-19 9 637 121 529 637.0 121.0 529.0
2 A 18-07-19 7 878 786 406 637.0 121.0 406.0
3 A 19-07-19 4 656 140 204 637.0 121.0 204.0
4 A 20-07-19 2 295 272 490 295.0 121.0 204.0
5 A 21-07-19 3 778 600 544 295.0 140.0 204.0
6 A 22-07-19 6 741 792 907 295.0 140.0 204.0
7 B 01-07-19 4 509 690 406 509.0 690.0 406.0
8 B 02-07-19 2 732 915 199 509.0 690.0 199.0
9 B 03-07-19 2 413 725 414 413.0 690.0 199.0
10 B 04-07-19 2 170 702 912 170.0 690.0 199.0
11 B 09-08-19 3 851 616 477 170.0 616.0 199.0
12 B 10-08-19 9 475 447 555 170.0 447.0 414.0
13 B 11-08-19 1 412 403 708 170.0 403.0 477.0
14 B 12-08-19 2 299 537 321 299.0 403.0 321.0
15 B 13-08-19 4 310 119 125 299.0 119.0 125.0
16 C 01-12-18 4 912 755 657 912.0 755.0 657.0
17 C 02-12-18 4 586 771 394 586.0 755.0 394.0
18 C 04-12-18 9 498 122 193 498.0 122.0 193.0
19 C 05-12-18 2 500 528 764 498.0 122.0 193.0
20 C 06-12-18 1 982 383 654 498.0 122.0 193.0
21 C 07-12-18 1 299 496 488 299.0 122.0 193.0
22 C 08-12-18 3 336 691 496 299.0 383.0 488.0
23 C 09-12-18 3 206 433 263 206.0 383.0 263.0
24 C 10-12-18 2 373 319 111 206.0 319.0 111.0
A quite elegant solution is to apply rolling(2).min() to each group, but to avoid a first row of NaN in each group, that first row should be "replicated" from the source group.
To do your task, start by defining the following function:
def fnMin2(grp):
    rv = pd.concat([pd.DataFrame([grp.iloc[0, -3:]]),
                    grp[['123_Var', '456_Var', '789_Var']].rolling(2).min().iloc[1:]])\
        .astype('int')
    rv.columns = [it.replace('Var', 'Min2') for it in rv.columns]
    return grp.join(rv)
Then apply it to each group:
df.groupby('ID').apply(fnMin2)
Note that the column names assigned to the new columns in my solution are exactly as you requested, unlike in the solution you accepted.
import numpy as np

# compare each row to the previous row
# (True where the current value is greater than the previous one)
ext = df.iloc[:, 3:].gt(df.iloc[:, 3:].shift(1))

# rename the comparison columns
ext.columns = ['123_min', '456_min', '789_min']

# join the two dataframes column-wise
M = pd.concat([df, ext], axis=1)

# where the comparison is False, keep the current row's value,
# otherwise take the previous row's value
# (note: shift(1) here does not respect the ID grouping, so the first
# row of each subsequent ID can pick up the previous group's value)
M['123_min'] = np.where(M['123_min'] == 0, M['123_Var'], M['123_Var'].shift(1))
M['456_min'] = np.where(M['456_min'] == 0, M['456_Var'], M['456_Var'].shift(1))
M['789_min'] = np.where(M['789_min'] == 0, M['789_Var'], M['789_Var'].shift(1))
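For completeness, the same two-row minimum can be written a bit more directly with numpy.minimum on top of groupby.shift. A minimal sketch, assuming df is the frame from the question:
import numpy as np

cols = ['123_Var', '456_Var', '789_Var']
# previous row within each ID; the first row of each group falls back to itself
prev = df.groupby('ID')[cols].shift().fillna(df[cols])
# element-wise minimum of the current and previous values
df[[c.replace('Var', 'Min2') for c in cols]] = np.minimum(df[cols], prev).to_numpy()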

Is pow(x, 2.0) as fast as x * x in GLSL? [duplicate]

Which is faster in GLSL:
pow(x, 3.0f);
or
x*x*x;
?
Does exponentiation performance depend on hardware vendor or exponent value?
I wrote a small benchmark, because I was interested in the results.
In my personal case, I was most interested in exponent = 5.
Benchmark code (running in Rem's Studio / LWJGL):
package me.anno.utils.bench
import me.anno.gpu.GFX
import me.anno.gpu.GFX.flat01
import me.anno.gpu.RenderState
import me.anno.gpu.RenderState.useFrame
import me.anno.gpu.framebuffer.Frame
import me.anno.gpu.framebuffer.Framebuffer
import me.anno.gpu.hidden.HiddenOpenGLContext
import me.anno.gpu.shader.Renderer
import me.anno.gpu.shader.Shader
import me.anno.utils.types.Floats.f2
import org.lwjgl.opengl.GL11.*
import java.nio.ByteBuffer
import kotlin.math.roundToInt
fun main() {

    fun createShader(code: String) = Shader(
        "", null, "" +
                "attribute vec2 attr0;\n" +
                "void main(){\n" +
                "   gl_Position = vec4(attr0*2.0-1.0, 0.0, 1.0);\n" +
                "   uv = attr0;\n" +
                "}", "varying vec2 uv;\n", "" +
                "void main(){" +
                code +
                "}"
    )

    fun repeat(code: String, times: Int): String {
        return Array(times) { code }.joinToString("\n")
    }

    val size = 512
    val warmup = 50
    val benchmark = 1000

    HiddenOpenGLContext.setSize(size, size)
    HiddenOpenGLContext.createOpenGL()

    val buffer = Framebuffer("", size, size, 1, 1, true, Framebuffer.DepthBufferType.NONE)

    println("Power,Multiplications,GFlops-multiplication,GFlops-floats,GFlops-ints,GFlops-power,Speedup")

    useFrame(buffer, Renderer.colorRenderer) {
        RenderState.blendMode.use(me.anno.gpu.blending.BlendMode.ADD) {
            for (power in 2 until 100) {

                // to reduce the overhead of other stuff
                val repeats = 100
                val init = "float x1 = dot(uv, vec2(1.0)),x2,x4,x8,x16,x32,x64;\n"
                val end = "gl_FragColor = vec4(x1,x1,x1,x1);\n"

                val manualCode = StringBuilder()
                for (bit in 1 until 32) {
                    val p = 1.shl(bit)
                    val h = 1.shl(bit - 1)
                    if (power == p) {
                        manualCode.append("x1=x$h*x$h;")
                        break
                    } else if (power > p) {
                        manualCode.append("x$p=x$h*x$h;")
                    } else break
                }

                if (power.and(power - 1) != 0) {
                    // not a power of two, so the result isn't finished yet
                    manualCode.append("x1=")
                    var first = true
                    for (bit in 0 until 32) {
                        val p = 1.shl(bit)
                        if (power.and(p) != 0) {
                            if (!first) {
                                manualCode.append('*')
                            } else first = false
                            manualCode.append("x$p")
                        }
                    }
                    manualCode.append(";\n")
                }

                val multiplications = manualCode.count { it == '*' }

                // println("$power: $manualCode")

                val shaders = listOf(
                    // manually optimized
                    createShader(init + repeat(manualCode.toString(), repeats) + end),
                    // can be optimized
                    createShader(init + repeat("x1=pow(x1,$power.0);", repeats) + end),
                    // can be optimized, int as power
                    createShader(init + repeat("x1=pow(x1,$power);", repeats) + end),
                    // slightly different, so it can't be optimized
                    createShader(init + repeat("x1=pow(x1,${power}.01);", repeats) + end),
                )

                for (shader in shaders) {
                    shader.use()
                }

                val pixels = ByteBuffer.allocateDirect(4)

                Frame.bind()
                glClearColor(0f, 0f, 0f, 1f)
                glClear(GL_COLOR_BUFFER_BIT or GL_DEPTH_BUFFER_BIT)

                for (i in 0 until warmup) {
                    for (shader in shaders) {
                        shader.use()
                        flat01.draw(shader)
                    }
                }

                val flops = DoubleArray(shaders.size)
                val avg = 10 // for more stability between runs
                for (j in 0 until avg) {
                    for (index in shaders.indices) {
                        val shader = shaders[index]
                        GFX.check()
                        val t0 = System.nanoTime()
                        for (i in 0 until benchmark) {
                            shader.use()
                            flat01.draw(shader)
                        }
                        // synchronize
                        glReadPixels(0, 0, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixels)
                        GFX.check()
                        val t1 = System.nanoTime()
                        // the first run may be an outlier
                        if (j > 0) flops[index] += multiplications * repeats.toDouble() * benchmark.toDouble() * size * size / (t1 - t0)
                        GFX.check()
                    }
                }

                for (i in flops.indices) {
                    flops[i] /= (avg - 1.0)
                }

                println(
                    "" +
                            "$power,$multiplications," +
                            "${flops[0].roundToInt()}," +
                            "${flops[1].roundToInt()}," +
                            "${flops[2].roundToInt()}," +
                            "${flops[3].roundToInt()}," +
                            (flops[0] / flops[3]).f2()
                )
            }
        }
    }
}
The shader is run over 512² pixels, 1000 times per measurement (with 9 measured repetitions), and each invocation evaluates the test expression 100 times.
I ran this code on my Gigabyte RX 580 8GB and collected the following results:
Power #Mult GFlops* GFlopsFp GFlopsInt GFlopsPow Speedup
2 1 1246 1429 1447 324 3.84
3 2 2663 2692 2708 651 4.09
4 2 2682 2679 2698 650 4.12
5 3 2766 972 974 973 2.84
6 3 2785 978 974 976 2.85
7 4 2830 1295 1303 1299 2.18
8 3 2783 2792 2809 960 2.90
9 4 2836 1298 1301 1302 2.18
10 4 2833 1291 1302 1298 2.18
11 5 2858 1623 1629 1623 1.76
12 4 2824 1302 1295 1303 2.17
13 5 2866 1628 1624 1626 1.76
14 5 2869 1614 1623 1611 1.78
15 6 2886 1945 1943 1953 1.48
16 4 2821 1305 1300 1305 2.16
17 5 2868 1615 1625 1619 1.77
18 5 2858 1620 1625 1624 1.76
19 6 2890 1949 1946 1949 1.48
20 5 2871 1618 1627 1625 1.77
21 6 2879 1945 1947 1943 1.48
22 6 2886 1944 1949 1952 1.48
23 7 2901 2271 2269 2268 1.28
24 5 2872 1621 1628 1624 1.77
25 6 2886 1942 1943 1942 1.49
26 6 2880 1949 1949 1953 1.47
27 7 2891 2273 2263 2266 1.28
28 6 2883 1949 1946 1953 1.48
29 7 2910 2279 2281 2279 1.28
30 7 2899 2272 2276 2277 1.27
31 8 2906 2598 2595 2596 1.12
32 5 2872 1621 1625 1622 1.77
33 6 2901 1953 1942 1949 1.49
34 6 2895 1948 1939 1944 1.49
35 7 2895 2274 2266 2268 1.28
36 6 2881 1937 1944 1948 1.48
37 7 2894 2277 2270 2280 1.27
38 7 2902 2275 2264 2273 1.28
39 8 2910 2602 2594 2603 1.12
40 6 2877 1945 1947 1945 1.48
41 7 2892 2276 2277 2282 1.27
42 7 2887 2271 2272 2273 1.27
43 8 2912 2599 2606 2599 1.12
44 7 2910 2278 2284 2276 1.28
45 8 2920 2597 2601 2600 1.12
46 8 2920 2600 2601 2590 1.13
47 9 2925 2921 2926 2927 1.00
48 6 2885 1935 1955 1956 1.47
49 7 2901 2271 2279 2288 1.27
50 7 2904 2281 2276 2278 1.27
51 8 2919 2608 2594 2607 1.12
52 7 2902 2282 2270 2273 1.28
53 8 2903 2598 2602 2598 1.12
54 8 2918 2602 2602 2604 1.12
55 9 2932 2927 2924 2936 1.00
56 7 2907 2284 2282 2281 1.27
57 8 2920 2606 2604 2610 1.12
58 8 2913 2593 2597 2587 1.13
59 9 2925 2923 2924 2920 1.00
60 8 2930 2614 2606 2613 1.12
61 9 2932 2946 2946 2947 1.00
62 9 2926 2935 2937 2947 0.99
63 10 2958 3258 3192 3266 0.91
64 6 2902 1957 1956 1959 1.48
65 7 2903 2274 2267 2273 1.28
66 7 2909 2277 2276 2286 1.27
67 8 2908 2602 2606 2599 1.12
68 7 2894 2272 2279 2276 1.27
69 8 2923 2597 2606 2606 1.12
70 8 2910 2596 2599 2600 1.12
71 9 2926 2921 2927 2924 1.00
72 7 2909 2283 2273 2273 1.28
73 8 2909 2602 2602 2599 1.12
74 8 2914 2602 2602 2603 1.12
75 9 2924 2925 2927 2933 1.00
76 8 2904 2608 2602 2601 1.12
77 9 2911 2919 2917 2909 1.00
78 9 2927 2921 2917 2935 1.00
79 10 2929 3241 3246 3246 0.90
80 7 2903 2273 2276 2275 1.28
81 8 2916 2596 2592 2589 1.13
82 8 2913 2600 2597 2598 1.12
83 9 2925 2931 2926 2913 1.00
84 8 2917 2598 2606 2597 1.12
85 9 2920 2916 2918 2927 1.00
86 9 2942 2922 2944 2936 1.00
87 10 2961 3254 3259 3268 0.91
88 8 2934 2607 2608 2612 1.12
89 9 2918 2939 2931 2916 1.00
90 9 2927 2928 2920 2924 1.00
91 10 2940 3253 3252 3246 0.91
92 9 2924 2933 2926 2928 1.00
93 10 2940 3259 3237 3251 0.90
94 10 2928 3247 3247 3264 0.90
95 11 2933 3599 3593 3594 0.82
96 7 2883 2282 2268 2269 1.27
97 8 2911 2602 2595 2600 1.12
98 8 2896 2588 2591 2587 1.12
99 9 2924 2939 2936 2938 1.00
As you can see, a power() call takes exactly as long as 9 multiplication instructions. Therefore every manual rewrite of a power that needs fewer than 9 multiplications is faster.
Only the cases 2, 3, 4, and 8 are optimized by my driver. The optimization is independent of whether you use the .0 suffix for the exponent.
In the case of exponent = 2, my implementation seems to have lower performance than the driver's; I am not sure why.
The speedup column compares the manual implementation to pow(x, exponent+0.01), which cannot be optimized by the compiler.
Because the multiplication counts and the speedups align so well, I created a graph to show the relationship. That agreement also suggests the benchmark is trustworthy :).
Operating System: Windows 10 Personal
GPU: RX 580 8GB from Gigabyte
Processor: Ryzen 5 2600
Memory: 16 GB DDR4 3200
GPU Driver: 21.6.1 from 17th June 2021
LWJGL: Version 3.2.3 build 13
While this can definitely be hardware/vendor/compiler dependent, advanced mathematical functions like pow() tend to be considerably more expensive than basic operations.
The best approach is of course to try both and benchmark. But if there is a simple replacement for an advanced mathematical function, I don't think you can go very wrong by using it.
If you write pow(x, 3.0), the best you can probably hope for is that the compiler will recognize the special case and expand it. But why take the risk, if the replacement is just as short and easy to read? C/C++ compilers don't always replace pow(x, 2.0) with a simple multiplication, so I wouldn't necessarily count on all GLSL compilers to do that.
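To make the manual rewrite concrete, here is a small GLSL sketch (variable names are illustrative) of exponentiation by squaring for x^5, the case the benchmark's generator emits with three multiplications:
// x^5 with 3 multiplications instead of pow(x, 5.0)
float x2 = x * x;    // x^2
float x4 = x2 * x2;  // x^4
float x5 = x4 * x;   // x^5 = x^4 * x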

To find avg in Pig and sort it in ascending order

I have a schema with 9 fields and want to use only two of them (fields 6 and 7, i.e. $5 and $6). I want to calculate the average of $5 and sort $6 in ascending order. How can I do this? Can someone help me?
Input Data:
N368SW 188 170 175 17 -1 MCO MHT 1142
N360SW 100 115 87 -10 5 MCO MSY 550
N626SW 114 115 90 13 14 MCO MSY 550
N252WN 107 115 84 -10 -2 MCO MSY 550
N355SW 104 115 85 -1 10 MCO MSY 550
N405WN 113 110 96 14 11 MCO ORF 655
N456WN 110 110 92 24 24 MCO ORF 655
N743SW 144 155 124 7 18 MCO PHL 861
N276WN 142 150 129 -2 6 MCO PHL 861
N369SW 153 145 134 30 22 MCO PHL 861
N363SW 151 145 137 5 -1 MCO PHL 861
N346SW 141 150 128 51 60 MCO PHL 861
N785SW 131 145 118 -15 -1 MCO PHL 861
N635SW 144 155 127 -6 5 MCO PHL 861
N242WN 298 300 276 68 70 MCO PHX 1848
N439WN 130 140 111 -4 6 MCO PIT 834
N348SW 140 135 124 7 2 MCO PIT 834
N672SW 136 135 122 9 8 MCO PIT 834
N493WN 151 160 136 -9 0 MCO PVD 1073
N380SW 170 155 155 13 -2 MCO PVD 1073
N705SW 164 160 147 6 2 MCO PVD 1073
N233LV 157 160 143 1 4 MCO PVD 1073
N786SW 156 160 139 6 10 MCO PVD 1073
N280WN 160 160 146 1 1 MCO PVD 1073
N282WN 104 95 81 10 1 MCO RDU 534
N694SW 89 100 77 3 14 MCO RDU 534
N266WN 94 95 82 9 10 MCO RDU 534
N218WN 98 100 77 12 14 MCO RDU 534
N355SW 47 50 35 15 18 MCO RSW 133
N388SW 44 45 30 37 38 MCO RSW 133
N786SW 46 50 31 4 8 MCO RSW 133
N707SA 52 50 33 10 8 MCO RSW 133
N795SW 176 185 153 -9 0 MCO SAT 1040
N402WN 176 185 161 4 13 MCO SAT 1040
N690SW 123 130 107 -1 6 MCO SDF 718
N457WN 135 130 105 20 15 MCO SDF 718
N720WN 144 155 131 13 24 MCO STL 880
N775SW 147 160 135 -6 7 MCO STL 880
N291WN 136 155 122 96 115 MCO STL 880
N247WN 144 155 127 43 54 MCO STL 880
N748SW 179 185 159 -4 2 MDW ABQ 1121
N709SW 176 190 158 21 35 MDW ABQ 1121
N325SW 110 105 97 36 31 MDW ALB 717
N305SW 116 110 90 107 101 MDW ALB 717
N403WN 145 165 128 -6 14 MDW AUS 972
N767SW 136 165 125 59 88 MDW AUS 972
N730SW 118 120 100 28 30 MDW BDL 777
I have written the code like this, but it is not working properly:
a = load '/path/to/file' using PigStorage('\t');
b = foreach a generate (int)$5 as field_a:int,(chararray)$6 as field_b:chararray;
c = group b all;
d = foreach c generate b.field_b,AVG(b.field_a);
e = order d by field_b ASC;
dump e;
I am facing an error at the order by:
grunt> a = load '/user/horton/sample_pig_data.txt' using PigStorage('\t');
grunt> b = foreach a generate (int)$5 as fielda:int,(chararray)$6 as fieldb:chararray;
grunt> describe b;
b: {fielda: int,fieldb: chararray}
grunt> c = group b all;
grunt> describe c;
c: {group: chararray,b: {(fielda: int,fieldb: chararray)}}
grunt> d = foreach c generate b.fieldb,AVG(b.fielda);
grunt> e = order d by fieldb ;
2017-01-05 15:51:29,623 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1025:
<line 6, column 15> Invalid field projection. Projected field [fieldb] does not exist in schema: :bag{:tuple(fieldb:chararray)},:double.
Details at logfile: /root/pig_1483631021021.log
I want output like(not related to input data):
(({(Bharathi),(Komal),(Archana),(Trupthi),(Preethi),(Rajesh),(siddarth),(Rajiv) },
{ (72) , (83) , (87) , (75) , (93) , (90) , (78) , (89) }),83.375)
If you have found the answer, best practice is to post it so that others referring to this question can understand it better.
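For the record, the ORDER fails because the FOREACH ... GENERATE produced unnamed fields (note the schema :bag{:tuple(fieldb:chararray)},:double in the error message). One way to fix it is to alias the generated fields and sort inside a nested FOREACH; a sketch, untested, with the field positions taken from the question:
a = load '/path/to/file' using PigStorage('\t');
b = foreach a generate (int)$5 as fielda, (chararray)$6 as fieldb;
c = group b all;
d = foreach c {
    sorted = order b by fieldb asc;  -- sort the names inside the bag
    generate sorted.fieldb as names, AVG(b.fielda) as avg_fielda;
};
dump d;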

How to query using an array of columns on SQL Server 2008

Can you please help with this? I'm trying to write a query that retrieves a total amount from an array of columns, and I don't know if there is a way to do this. I retrieve the array of columns I need with this query:
USE Facebook_Global
GO
SELECT c.name AS column_name
FROM sys.tables AS t
INNER JOIN sys.columns AS c
ON t.OBJECT_ID = c.OBJECT_ID
WHERE t.name LIKE '%Lifetime Likes by Gender and###$%' and c.name like '%m%'
Which gives me this table
column_name
M#13-17
M#18-24
M#25-34
M#35-44
M#45-54
M#55-64
M#65+
So I need a query that gives me a TotalAmount of those columns listed in that table. Can this be possible?
Just to clarify a little:
I have this table
Date F#13-17 F#18-24 F#25-34 F#35-44 F#45-54 F#55-64 F#65+ M#13-17 M#18-24 M#25-34 M#35-44 M#45-54 M#55-64 M#65+
2015-09-06 00:00:00.000 257 3303 1871 572 235 116 71 128 1420 824 251 62 32 30
2015-09-07 00:00:00.000 257 3302 1876 571 234 116 72 128 1419 827 251 62 32 30
2015-09-08 00:00:00.000 257 3304 1877 572 234 116 73 128 1421 825 253 62 32 30
2015-09-09 00:00:00.000 257 3314 1891 575 236 120 73 128 1438 828 254 62 33 30
2015-09-10 00:00:00.000 259 3329 1912 584 245 131 76 128 1460 847 259 66 37 31
2015-09-11 00:00:00.000 259 3358 1930 605 248 136 79 128 1475 856 261 67 39 31
2015-09-12 00:00:00.000 259 3397 1953 621 255 139 79 128 1486 864 264 68 41 31
2015-09-13 00:00:00.000 259 3426 1984 642 257 144 80 129 1499 883 277 74 42 32
And I need one column with the SUM of all the columns whose names contain F, and another for those containing M, instead of writing something like this:
F#13-17+F#18-24+F#25-34+F#35-44+F#45-54+etc.
Is this possible?
Try something like this:
with derivedTable as
(sql from your question goes here)
select column_name
from derivedTable
union
select cast(count(*) as varchar(10)) + ' records'
from derivedTable
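As for summing the matching columns themselves: on SQL Server 2008 a common approach is to build the addition expression dynamically from sys.columns and execute it. A sketch (untested; [Lifetime Likes by Gender and Age] stands in for the real table name matched by your LIKE pattern):
DECLARE @cols nvarchar(max), @sql nvarchar(max);

-- build "[M#13-17] + [M#18-24] + ..." from the catalog
SELECT @cols = STUFF((
    SELECT ' + ' + QUOTENAME(c.name)
    FROM sys.tables AS t
    INNER JOIN sys.columns AS c ON t.object_id = c.object_id
    WHERE t.name = 'Lifetime Likes by Gender and Age'
      AND c.name LIKE 'M#%'
    FOR XML PATH('')), 1, 3, '');

SET @sql = N'SELECT [Date], ' + @cols + N' AS TotalM
FROM [Lifetime Likes by Gender and Age];';

EXEC sp_executesql @sql;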