last_value partition - sql

SELECT speed, ram, price,
       LAST_VALUE(price) OVER (PARTITION BY speed, ram ORDER BY speed, ram) AS lastp
FROM PC
PC table:

code  model  speed  ram  hd    cd   price
1     1232   500    64   5.0   12x  600.0000
10    1260   500    32   10.0  12x  350.0000
11    1233   900    128  40.0  40x  980.0000
12    1233   800    128  20.0  50x  970.0000
2     1121   750    128  14.0  40x  850.0000
3     1233   500    64   5.0   12x  600.0000
4     1121   600    128  14.0  40x  850.0000
5     1121   600    128  8.0   40x  850.0000
6     1233   750    128  20.0  50x  950.0000
7     1232   500    32   10.0  12x  400.0000
8     1232   450    64   8.0   24x  350.0000
9     1232   450    32   10.0  24x  350.0000
speed  ram  price     lastp
450    32   350.0000  350.0000
450    64   350.0000  350.0000
500    32   350.0000  350.0000
500    32   400.0000  350.0000
Can anyone explain why, for speed 500 and ram 32, lastp is 350 and not 400?

You can make another query based on the main one. I don't know your DBMS, but you can try this in most databases:
;WITH C AS (
    SELECT speed, ram, price,
           ROW_NUMBER() OVER (PARTITION BY speed, ram ORDER BY speed, ram) AS Rn
    FROM tbl
)
SELECT speed, ram, price,
       LAST_VALUE(price) OVER (PARTITION BY speed, ram ORDER BY speed, ram) AS lastp
FROM C
ORDER BY speed, ram, Rn DESC
SQLFiddle for SQL Server
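As for why the original query returns 350: with an ORDER BY in the OVER clause and no explicit frame, the window frame defaults to RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW. Because the ORDER BY here uses the same columns as the PARTITION BY, every row in a partition is a peer of every other row, so which row counts as "last" is arbitrary, and the engine happened to place the 350.0000 row last. A deterministic sketch, assuming "last" should mean the highest price and that your DBMS supports explicit frame clauses:

SELECT speed, ram, price,
       LAST_VALUE(price) OVER (
           PARTITION BY speed, ram
           ORDER BY price
           ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
       ) AS lastp
FROM PC

With the frame widened to the whole partition, lastp is 400.0000 for both (500, 32) rows.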

When does URL to .csv open or download file?

I'm learning some Python pandas and the course uses https://gist.githubusercontent.com/sh7ata/e075ff35b51ebb0d2d577fbe1d19ebc9/raw/b966d02c7c26bcca60703acb1390e938a65a35cb/drinks.csv
Clicking this link opens the actual .csv file contents in my browser and I can read the data into pandas straight away.
However, this doesn't work for https://www.spss-tutorials.com/downloads/browsers.csv. If I click this link, Google Chrome downloads the file rather than show its contents.
Why is this and what can I do about it? I mean, they're both URLs for .csv files, right?
You can use the requests module with a custom HTTP header to download it. (Whether a browser shows a CSV inline or downloads it is decided by the server's response headers, typically Content-Type and Content-Disposition: attachment, not by the URL itself.) For example:
import requests
import pandas as pd
from io import StringIO

url = "https://www.spss-tutorials.com/downloads/browsers.csv"

# Send a browser-like User-Agent; some servers reject the default
# python-requests one.
headers = {
    "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:99.0) Gecko/20100101 Firefox/99.0"
}

req = requests.get(url, headers=headers)

# Parse the downloaded text as CSV.
df = pd.read_csv(StringIO(req.text))
print(df.to_markdown())
Prints:
     screen_resolution  sessions  perc_new_sessions  new_users  bounce_rate  pages_session  avg_session_duration  goal_conversion_rate  goal_completions  goal_value
0    1366x768           2,284     79.60%             1,818      69.40%       1.93           00:02:14              0.00%                 0                 €0.00
1    1920x1080          2,013     72.28%             1,455      71.93%       2.02           00:02:18              0.00%                 0                 €0.00
2    1280x1024          1,217     72.14%             878        74.53%       1.9            00:02:05              0.00%                 0                 €0.00
3    1680x1050          1,052     68.16%             717        74.62%       1.93           00:01:46              0.00%                 0                 €0.00
4    1440x900           921       77.85%             717        74.05%       1.73           00:01:45              0.00%                 0                 €0.00
5    1280x800           865       80.00%             692        71.91%       1.76           00:01:37              0.00%                 0                 €0.00
6    1600x900           737       76.39%             563        72.86%       1.8            00:02:02              0.00%                 0                 €0.00
7    1920x1200          441       64.85%             286        73.92%       1.87           00:01:55              0.00%                 0                 €0.00
8    1024x768           192       88.02%             169        73.96%       2.07           00:01:32              0.00%                 0                 €0.00
9    2560x1440          137       67.15%             92         61.31%       1.86           00:02:02              0.00%                 0                 €0.00
10   1280x720           134       82.84%             111        66.42%       2.16           00:01:15              0.00%                 0                 €0.00
11   1536x864           118       78.81%             93         72.03%       1.78           00:01:45              0.00%                 0                 €0.00
12   320x568            104       84.62%             88         75.00%       1.89           00:01:18              0.00%                 0                 €0.00
13   768x1024           91        83.52%             76         67.03%       2.66           00:02:11              0.00%                 0                 €0.00
14   1360x768           70        77.14%             54         74.29%       1.69           00:01:08              0.00%                 0                 €0.00
15   360x640            70        71.43%             50         77.14%       2.06           00:02:06              0.00%                 0                 €0.00
16   1600x1200          62        80.65%             50         82.26%       1.32           00:01:22              0.00%                 0                 €0.00
17   1344x840           56        44.64%             25         53.57%       3.11           00:04:39              0.00%                 0                 €0.00
18   320x480            51        80.39%             41         72.55%       1.61           00:00:55              0.00%                 0                 €0.00
19   1093x614           41        80.49%             33         78.05%       1.76           00:01:42              0.00%                 0                 €0.00
20   1280x768           38        60.53%             23         68.42%       2.63           00:02:41              0.00%                 0                 €0.00
21   1024x600           35        94.29%             33         85.71%       1.37           00:01:23              0.00%                 0                 €0.00
...and so on
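Alternatively, recent pandas versions can send the custom header themselves through storage_options, so the separate requests round-trip isn't needed. A minimal sketch under that assumption:

import pandas as pd

url = "https://www.spss-tutorials.com/downloads/browsers.csv"

# For http(s) URLs, pandas forwards these key/value pairs
# as HTTP request headers.
df = pd.read_csv(url, storage_options={"User-Agent": "Mozilla/5.0"})
print(df.head())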

Jedis benchmarking on local Redis server

I'm using JMH to test the performance of Jedis on a local Redis server (Jedis version 2.9.0, Redis version 6.2.6, quad-core Intel Core i5). I use 200 threads to send SET commands through a connection pool.
import java.io.IOException;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.runner.RunnerException;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

@State(Scope.Benchmark)
public class CommonClientBenchmark {

    private JedisPool jedisPool;
    private final String host = "127.0.0.1";
    private final int port = 6379;

    @Setup
    public void setup() {
        JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
        jedisPoolConfig.setMaxTotal(200);
        jedisPoolConfig.setMaxIdle(200);
        jedisPool = new JedisPool(jedisPoolConfig, host, port, 30000);
    }

    @TearDown
    public void tearDown() {
        jedisPool.close();
    }

    @Threads(200)
    @Fork(1)
    @Benchmark
    @BenchmarkMode(Mode.Throughput)
    @Warmup(iterations = 1, time = 30, timeUnit = TimeUnit.SECONDS)
    @Measurement(iterations = 2, time = 30, timeUnit = TimeUnit.SECONDS)
    public void jedisSet() {
        try (Jedis jedis = jedisPool.getResource()) {
            jedis.set("jedis", "jedis");
        }
    }

    public static void main(String[] args) throws IOException, RunnerException {
        CommonClientBenchmark commonClientBenchmark = new CommonClientBenchmark();
        commonClientBenchmark.setup();
        org.openjdk.jmh.Main.main(args);
    }
}
With the code above, I obtain about 25000+ QPS. However, when I decrease the maxTotal and maxIdle parameters of the connection pool from 200 to 100, the resulting QPS is much higher: it reaches about 75000. Could anyone explain this phenomenon? Thanks a lot!
EDIT: I've changed the Jedis version to 4.1.1 and run multiple benchmark tests; the results are similar. When the size of the connection pool (both maxTotal and maxIdle) is set to 100, I obtain about 25000~50000 QPS. When I increase the size (both maxTotal and maxIdle) to 200, the QPS rises to 60000~75000.
I've also used iostat 1 to monitor CPU usage while running the tests, and I found that when the pool size is set to 200, %system is often much higher than when it is set to 100.
connection pool size set to 200:
disk0 cpu load average
KB/t tps MB/s us sy id 1m 5m 15m
4.09 929 3.71 7 87 6 39.58 14.44 8.06
4.00 902 3.52 6 89 5 39.58 14.44 8.06
4.50 8 0.04 5 88 6 38.33 14.60 8.15
4.39 145 0.62 6 89 6 38.33 14.60 8.15
28.00 11 0.30 6 88 5 38.33 14.60 8.15
8.00 1 0.01 5 88 6 38.33 14.60 8.15
0.00 0 0.00 5 88 7 38.33 14.60 8.15
4.00 5 0.02 5 88 7 38.94 15.12 8.37
0.00 0 0.00 5 89 6 38.94 15.12 8.37
0.00 0 0.00 5 88 7 38.94 15.12 8.37
0.00 0 0.00 5 89 6 38.94 15.12 8.37
8.68 222 1.88 5 88 7 38.94 15.12 8.37
5.60 10 0.05 5 87 8 45.20 16.81 9.01
29.65 46 1.33 11 82 7 45.20 16.81 9.01
52.57 7 0.36 8 85 7 45.20 16.81 9.01
28.00 2 0.05 5 87 8 45.20 16.81 9.01
223.33 6 1.31 6 87 7 45.20 16.81 9.01
4.19 1344 5.49 8 85 7 44.54 17.15 9.17
4.61 952 4.29 6 89 5 44.54 17.15 9.17
4.00 690 2.69 6 89 5 44.54 17.15 9.17
connection pool size set to 100:
disk0 cpu load average
KB/t tps MB/s us sy id 1m 5m 15m
4.31 13 0.05 16 59 26 6.55 7.86 7.49
750.67 3 2.20 30 53 17 6.58 7.85 7.48
9.14 225 2.01 23 54 23 6.58 7.85 7.48
37.00 8 0.29 23 56 21 6.58 7.85 7.48
32.00 6 0.19 18 55 26 6.58 7.85 7.48
145.20 10 1.41 22 56 22 6.58 7.85 7.48
0.00 0 0.00 22 56 22 6.46 7.80 7.47
4.00 2660 10.39 24 58 18 6.46 7.80 7.47
4.00 1952 7.62 19 56 25 6.46 7.80 7.47
4.00 1 0.00 19 56 24 6.46 7.80 7.47
4.00 5 0.02 18 56 27 6.46 7.80 7.47
0.00 0 0.00 15 57 28 6.10 7.71 7.44
0.00 0 0.00 18 57 25 6.10 7.71 7.44
256.00 10 2.50 18 56 25 6.10 7.71 7.44
6.29 7 0.04 20 57 23 6.10 7.71 7.44
4.00 5 0.02 20 56 24 6.10 7.71 7.44
17.71 7 0.12 20 56 24 6.01 7.66 7.42
23.00 4 0.09 20 58 23 6.01 7.66 7.42
5.00 4 0.02 23 55 22 6.01 7.66 7.42
4.00 1 0.00 20 56 24 6.01 7.66 7.42

Transpose data in awk

I have a benchmarking tool that has an output looking like this:
Algorithm Data Size CPU Time (ns)
----------------------------------------
bubble_sort 1 16.1
bubble_sort 2 19.1
bubble_sort 4 32.8
bubble_sort 8 74.3
bubble_sort 16 257
bubble_sort 32 997
bubble_sort 64 4225
bubble_sort 128 18925
bubble_sort 256 83565
bubble_sort 512 313589
bubble_sort 1024 1161146
insertion_sort 1 16.1
insertion_sort 2 17.7
insertion_sort 4 26.5
insertion_sort 8 43.7
insertion_sort 16 96.1
insertion_sort 32 263
insertion_sort 64 770
insertion_sort 128 2807
insertion_sort 256 10775
insertion_sort 512 38956
insertion_sort 1024 135419
std_sort 1 17.3
std_sort 2 20.7
std_sort 4 24.4
std_sort 8 32.7
std_sort 16 59.6
std_sort 32 173
std_sort 64 345
std_sort 128 762
std_sort 256 1769
std_sort 512 3982
std_sort 1024 18500
And I'm trying to transform this to become more like this:
Data Size bubble_sort insertion_sort std_sort
1 16.1 16.1 17.3
2 19.1 17.7 20.7
4 32.8 26.5 24.4
8 74.3 43.7 32.7
16 257 96.1 59.6
32 997 263 173
64 4225 770 345
128 18925 2807 762
256 83565 10775 1769
512 313589 38956 3982
1024 1161146 135419 18500
Is there a simple way to achieve this using awk? I'm mostly interested in the numbers in the final table, so the header line isn't essential.
==============================
EDIT:
I was actually able to achieve this using the following code (GNU awk, since it relies on arrays of arrays):
{
    map[$1][$2] = $3
}

END {
    for (algo in map) {
        some_algo = algo
        break
    }

    printf "size "
    for (algo in map) {
        printf "%s ", algo
    }
    print ""

    for (size in map[some_algo]) {
        printf "%s ", size
        for (algo in map) {
            printf "%s ", map[algo][size]
        }
        printf "\n"
    }
}
This works. However, it has two minor problems: it looks a little difficult to read, so is there a better, more idiomatic way to do the job? Also, the order of the resulting columns differs from the order in the original data rows. Is there a simple way to fix this order?
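One way to fix the column order is to remember the order in which algorithms and sizes first appear. Here is a sketch in plain (POSIX) awk, assuming the two header lines should be skipped:

awk '
NR < 3 { next }                                         # skip header and dashed line
!($1 in seen_algo) { seen_algo[$1]; algo[++na] = $1 }   # first-seen algorithm order
!($2 in seen_size) { seen_size[$2]; size[++ns] = $2 }   # first-seen size order
{ v[$1, $2] = $3 }                                      # time for (algorithm, size)
END {
    printf "Data_Size"
    for (a = 1; a <= na; a++) printf " %s", algo[a]
    print ""
    for (s = 1; s <= ns; s++) {
        printf "%s", size[s]
        for (a = 1; a <= na; a++) printf " %s", v[algo[a], size[s]]
        print ""
    }
}' file | column -t

Because it records first-seen order explicitly, the output columns follow the input instead of awk's unspecified for-in traversal order.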
Here is an alternative:
$ sed 1,2d file |
pr -w200 -3t |
awk 'NR==1{print "Data_Size", $1,$4,$7} {print $2,$3,$6,$9}' |
column -t
Data_Size bubble_sort insertion_sort std_sort
1 16.1 16.1 17.3
2 19.1 17.7 20.7
4 32.8 26.5 24.4
8 74.3 43.7 32.7
16 257 96.1 59.6
32 997 263 173
64 4225 770 345
128 18925 2807 762
256 83565 10775 1769
512 313589 38956 3982
1024 1161146 135419 18500
Here is a Ruby solution.
Ruby is very much like awk, but with additional functions and data structures. The advantage here is that it deals correctly with missing values, inserting n/a if one of the data values is missing.
$ sed 1,2d file |
  ruby -lane 'BEGIN{
      h = Hash.new { |n, k| n[k] = {} }
      l = "Data_Size"
    }
    h[l][$F[1]] = $F[1]
    h[$F[0]][$F[1]] = $F[2]
    END{
      puts h.keys.join("\t")
      h[l] = h[l].sort { |a, b| a[0].to_i <=> b[0].to_i }.to_h
      h[l].each_key { |k|
        a = []
        h.each_key { |j|
          a.push(h[j][k] || "n/a")
        }
        puts a.join("\t")
      }
    }' | column -t
Taking your example and removing the line bubble_sort 4 32.8 prints:
Data_Size bubble_sort insertion_sort std_sort
1 16.1 16.1 17.3
2 19.1 17.7 20.7
4 n/a 26.5 24.4
8 74.3 43.7 32.7
16 257 96.1 59.6
32 997 263 173
64 4225 770 345
128 18925 2807 762
256 83565 10775 1769
512 313589 38956 3982
1024 1161146 135419 18500

Is pow(x, 2.0) as fast as x * x in GLSL? [duplicate]

Which is faster in GLSL:
pow(x, 3.0f);
or
x*x*x;
?
Does exponentiation performance depend on hardware vendor or exponent value?
I wrote a small benchmark, because I was interested in the results.
In my personal case, I was most interested in exponent = 5.
Benchmark code (running in Rem's Studio / LWJGL):
package me.anno.utils.bench
import me.anno.gpu.GFX
import me.anno.gpu.GFX.flat01
import me.anno.gpu.RenderState
import me.anno.gpu.RenderState.useFrame
import me.anno.gpu.framebuffer.Frame
import me.anno.gpu.framebuffer.Framebuffer
import me.anno.gpu.hidden.HiddenOpenGLContext
import me.anno.gpu.shader.Renderer
import me.anno.gpu.shader.Shader
import me.anno.utils.types.Floats.f2
import org.lwjgl.opengl.GL11.*
import java.nio.ByteBuffer
import kotlin.math.roundToInt
fun main() {

    fun createShader(code: String) = Shader(
        "", null, "" +
                "attribute vec2 attr0;\n" +
                "void main(){\n" +
                " gl_Position = vec4(attr0*2.0-1.0, 0.0, 1.0);\n" +
                " uv = attr0;\n" +
                "}", "varying vec2 uv;\n", "" +
                "void main(){" +
                code +
                "}"
    )

    fun repeat(code: String, times: Int): String {
        return Array(times) { code }.joinToString("\n")
    }

    val size = 512
    val warmup = 50
    val benchmark = 1000

    HiddenOpenGLContext.setSize(size, size)
    HiddenOpenGLContext.createOpenGL()

    val buffer = Framebuffer("", size, size, 1, 1, true, Framebuffer.DepthBufferType.NONE)

    println("Power,Multiplications,GFlops-multiplication,GFlops-floats,GFlops-ints,GFlops-power,Speedup")

    useFrame(buffer, Renderer.colorRenderer) {
        RenderState.blendMode.use(me.anno.gpu.blending.BlendMode.ADD) {
            for (power in 2 until 100) {

                // to reduce the overhead of other stuff
                val repeats = 100
                val init = "float x1 = dot(uv, vec2(1.0)),x2,x4,x8,x16,x32,x64;\n"
                val end = "gl_FragColor = vec4(x1,x1,x1,x1);\n"

                val manualCode = StringBuilder()
                for (bit in 1 until 32) {
                    val p = 1.shl(bit)
                    val h = 1.shl(bit - 1)
                    if (power == p) {
                        manualCode.append("x1=x$h*x$h;")
                        break
                    } else if (power > p) {
                        manualCode.append("x$p=x$h*x$h;")
                    } else break
                }

                if (power.and(power - 1) != 0) {
                    // not a power of two, so the result isn't finished yet
                    manualCode.append("x1=")
                    var first = true
                    for (bit in 0 until 32) {
                        val p = 1.shl(bit)
                        if (power.and(p) != 0) {
                            if (!first) {
                                manualCode.append('*')
                            } else first = false
                            manualCode.append("x$p")
                        }
                    }
                    manualCode.append(";\n")
                }

                val multiplications = manualCode.count { it == '*' }

                // println("$power: $manualCode")

                val shaders = listOf(
                    // manually optimized
                    createShader(init + repeat(manualCode.toString(), repeats) + end),
                    // can be optimized
                    createShader(init + repeat("x1=pow(x1,$power.0);", repeats) + end),
                    // can be optimized, int as power
                    createShader(init + repeat("x1=pow(x1,$power);", repeats) + end),
                    // slightly different, so it can't be optimized
                    createShader(init + repeat("x1=pow(x1,${power}.01);", repeats) + end),
                )

                for (shader in shaders) {
                    shader.use()
                }

                val pixels = ByteBuffer.allocateDirect(4)

                Frame.bind()
                glClearColor(0f, 0f, 0f, 1f)
                glClear(GL_COLOR_BUFFER_BIT or GL_DEPTH_BUFFER_BIT)

                for (i in 0 until warmup) {
                    for (shader in shaders) {
                        shader.use()
                        flat01.draw(shader)
                    }
                }

                val flops = DoubleArray(shaders.size)
                val avg = 10 // for more stability between runs
                for (j in 0 until avg) {
                    for (index in shaders.indices) {
                        val shader = shaders[index]
                        GFX.check()
                        val t0 = System.nanoTime()
                        for (i in 0 until benchmark) {
                            shader.use()
                            flat01.draw(shader)
                        }
                        // synchronize
                        glReadPixels(0, 0, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixels)
                        GFX.check()
                        val t1 = System.nanoTime()
                        // the first one may be an outlier
                        if (j > 0) flops[index] += multiplications * repeats.toDouble() * benchmark.toDouble() * size * size / (t1 - t0)
                        GFX.check()
                    }
                }

                for (i in flops.indices) {
                    flops[i] /= (avg - 1.0)
                }

                println(
                    "" +
                            "$power,$multiplications," +
                            "${flops[0].roundToInt()}," +
                            "${flops[1].roundToInt()}," +
                            "${flops[2].roundToInt()}," +
                            "${flops[3].roundToInt()}," +
                            (flops[0] / flops[3]).f2()
                )
            }
        }
    }
}
The fragment shader is run 9 × 1000 times over 512² pixels, and each invocation evaluates the function 100 times.
I ran this code on my RX 580 (8 GB, from Gigabyte) and collected the following results:
Power  #Mult  GFlops*  GFlopsFp  GFlopsInt  GFlopsPow  Speedup
2      1      1246     1429      1447       324        3.84
3      2      2663     2692      2708       651        4.09
4      2      2682     2679      2698       650        4.12
5      3      2766     972       974        973        2.84
6      3      2785     978       974        976        2.85
7      4      2830     1295      1303       1299       2.18
8      3      2783     2792      2809       960        2.90
9      4      2836     1298      1301       1302       2.18
10     4      2833     1291      1302       1298       2.18
11     5      2858     1623      1629       1623       1.76
12     4      2824     1302      1295       1303       2.17
13     5      2866     1628      1624       1626       1.76
14     5      2869     1614      1623       1611       1.78
15     6      2886     1945      1943       1953       1.48
16     4      2821     1305      1300       1305       2.16
17     5      2868     1615      1625       1619       1.77
18     5      2858     1620      1625       1624       1.76
19     6      2890     1949      1946       1949       1.48
20     5      2871     1618      1627       1625       1.77
21     6      2879     1945      1947       1943       1.48
22     6      2886     1944      1949       1952       1.48
23     7      2901     2271      2269       2268       1.28
24     5      2872     1621      1628       1624       1.77
25     6      2886     1942      1943       1942       1.49
26     6      2880     1949      1949       1953       1.47
27     7      2891     2273      2263       2266       1.28
28     6      2883     1949      1946       1953       1.48
29     7      2910     2279      2281       2279       1.28
30     7      2899     2272      2276       2277       1.27
31     8      2906     2598      2595       2596       1.12
32     5      2872     1621      1625       1622       1.77
33     6      2901     1953      1942       1949       1.49
34     6      2895     1948      1939       1944       1.49
35     7      2895     2274      2266       2268       1.28
36     6      2881     1937      1944       1948       1.48
37     7      2894     2277      2270       2280       1.27
38     7      2902     2275      2264       2273       1.28
39     8      2910     2602      2594       2603       1.12
40     6      2877     1945      1947       1945       1.48
41     7      2892     2276      2277       2282       1.27
42     7      2887     2271      2272       2273       1.27
43     8      2912     2599      2606       2599       1.12
44     7      2910     2278      2284       2276       1.28
45     8      2920     2597      2601       2600       1.12
46     8      2920     2600      2601       2590       1.13
47     9      2925     2921      2926       2927       1.00
48     6      2885     1935      1955       1956       1.47
49     7      2901     2271      2279       2288       1.27
50     7      2904     2281      2276       2278       1.27
51     8      2919     2608      2594       2607       1.12
52     7      2902     2282      2270       2273       1.28
53     8      2903     2598      2602       2598       1.12
54     8      2918     2602      2602       2604       1.12
55     9      2932     2927      2924       2936       1.00
56     7      2907     2284      2282       2281       1.27
57     8      2920     2606      2604       2610       1.12
58     8      2913     2593      2597       2587       1.13
59     9      2925     2923      2924       2920       1.00
60     8      2930     2614      2606       2613       1.12
61     9      2932     2946      2946       2947       1.00
62     9      2926     2935      2937       2947       0.99
63     10     2958     3258      3192       3266       0.91
64     6      2902     1957      1956       1959       1.48
65     7      2903     2274      2267       2273       1.28
66     7      2909     2277      2276       2286       1.27
67     8      2908     2602      2606       2599       1.12
68     7      2894     2272      2279       2276       1.27
69     8      2923     2597      2606       2606       1.12
70     8      2910     2596      2599       2600       1.12
71     9      2926     2921      2927       2924       1.00
72     7      2909     2283      2273       2273       1.28
73     8      2909     2602      2602       2599       1.12
74     8      2914     2602      2602       2603       1.12
75     9      2924     2925      2927       2933       1.00
76     8      2904     2608      2602       2601       1.12
77     9      2911     2919      2917       2909       1.00
78     9      2927     2921      2917       2935       1.00
79     10     2929     3241      3246       3246       0.90
80     7      2903     2273      2276       2275       1.28
81     8      2916     2596      2592       2589       1.13
82     8      2913     2600      2597       2598       1.12
83     9      2925     2931      2926       2913       1.00
84     8      2917     2598      2606       2597       1.12
85     9      2920     2916      2918       2927       1.00
86     9      2942     2922      2944       2936       1.00
87     10     2961     3254      3259       3268       0.91
88     8      2934     2607      2608       2612       1.12
89     9      2918     2939      2931       2916       1.00
90     9      2927     2928      2920       2924       1.00
91     10     2940     3253      3252       3246       0.91
92     9      2924     2933      2926       2928       1.00
93     10     2940     3259      3237       3251       0.90
94     10     2928     3247      3247       3264       0.90
95     11     2933     3599      3593       3594       0.82
96     7      2883     2282      2268       2269       1.27
97     8      2911     2602      2595       2600       1.12
98     8      2896     2588      2591       2587       1.12
99     9      2924     2939      2936       2938       1.00
As you can see, a power() call takes exactly as long as 9 multiplication instructions, so every manual rewrite of a power into fewer than 9 multiplications is faster.
Only the cases 2, 3, 4, and 8 are optimized by my driver. The optimization is independent of whether you use the .0 suffix for the exponent.
In the exponent = 2 case, my implementation seems to have lower performance than the driver's; I am not sure why.
The speedup compares the manual implementation against pow(x, exponent + 0.01), which cannot be optimized by the compiler.
Because the multiplication count and the speedup align so well, I created a graph to show the relationship. This agreement also suggests that the benchmark is trustworthy :).
Operating System: Windows 10 Personal
GPU: RX 580 8GB from Gigabyte
Processor: Ryzen 5 2600
Memory: 16 GB DDR4 3200
GPU Driver: 21.6.1 from 17th June 2021
LWJGL: Version 3.2.3 build 13
While this can definitely be hardware/vendor/compiler dependent, advanced mathematical functions like pow() tend to be considerably more expensive than basic operations.
The best approach is of course to try both and benchmark. But if there is a simple replacement for an advanced mathematical function, I don't think you can go very wrong by using it.
If you write pow(x, 3.0), the best you can probably hope for is that the compiler will recognize the special case and expand it. But why take the risk, if the replacement is just as short and easy to read? C/C++ compilers don't always replace pow(x, 2.0) with a simple multiplication, so I wouldn't necessarily count on all GLSL compilers to do that.
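For example, the exponent = 5 case from the benchmark above needs only three multiplications via repeated squaring; a GLSL sketch:

// x^5 with three multiplications instead of pow(x, 5.0)
float x2 = x * x;    // x^2
float x4 = x2 * x2;  // x^4
float x5 = x4 * x;   // x^5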

Group clause in SQL command

I have 3 tables: Deliveries, IssuedWarehouse, ReturnedStock.
Deliveries: ID, OrderNumber, Material, Width, Gauge, DelKG
IssuedWarehouse: OrderNumber, IssuedKG
ReturnedStock: OrderNumber, IssuedKG
What I'd like to do is group all the orders by Material, Width and Gauge and then sum the amount delivered, issued to the warehouse and issued back to stock.
This is the SQL that is really quite close:
SELECT
    DELIVERIES.Material,
    DELIVERIES.Width,
    DELIVERIES.Gauge,
    Count(DELIVERIES.OrderNo) AS [Orders Placed],
    Sum(DELIVERIES.DeldQtyKilos) AS [KG Delivered],
    Sum(IssuedWarehouse.[Qty Issued]) AS [Film Issued],
    Sum([Film Retns].[Qty Issued]) AS [Film Returned],
    [KG Delivered]-[Film Issued]+[Film Returned] AS [Qty Remaining]
FROM (DELIVERIES
    INNER JOIN IssuedWarehouse
        ON DELIVERIES.OrderNo = IssuedWarehouse.[Order No From])
INNER JOIN [Film Retns]
    ON DELIVERIES.OrderNo = [Film Retns].[Order No From]
GROUP BY Material, Width, Gauge, ActDelDate
HAVING ActDelDate Between [start date] And [end date]
ORDER BY DELIVERIES.Material;
This groups the products almost perfectly. However, if you take a look at the results:
Material    Width  Gauge  Orders Placed  Delivered Qnty Kilos  Film Issued  Film Returned  Qty Remaining
COEX-GLOSS  590    75     1              534                   500          124            158
COEX-MATT   1080   80     1              4226                  4226         52             52
CPP         660    38     8              6720                  2768         1384           5336
CPP         666    47     1              5677                  5716         536            497
CPP         690    65     2              1232                  717          202            717
CPP         760    38     3              3444                  1318         510            2636
CPP         770    38     4              4316                  3318         2592           3590
CPP         786    38     2              672                   442          212            442
CPP         800    47     1              1122                  1122         116            116
CPP         810    47     1              1127                  1134         69             62
CPP         810    47     2              2250                  1285         320            1285
CPP         1460   38     12             6540                  4704         2442           4278
LD          975    75     1              502                   502          182            182
LDPE        450    50     1              252                   252          50             50
LDPE        520    70     1              250                   250          95             95
LDPE        570    65     2              504                   295          86             295
LDPE        570    65     2              508                   278          48             278
LDPE        620    50     1              252                   252          67             67
LDPE        660    50     1              256                   256          62             62
LDPE        670    75     1              248                   248          80             80
LDPE        690    47     1              476                   476          390            390
LDPE        790    38     2              2104                  1122         140            1122
LDPE        790    50     1              286                   286          134            134
LDPE        790    50     1              250                   250          125            125
LDPE        810    30     1              4062                  4062         100            100
LDPE        843    33     1              408                   408          835            835
LDPE        850    80     1              412                   412          34             34
LDPE        855    30     1              740                   740          83             83
LDPE        880    60     1              304                   304          130            130
LDPE        900    70     2              1000                  650          500            850
LDPE        1017   60     1              1056                  1056         174            174
OPP         25     1100   1              381                   381          95             95
OPP         1000   30     2              1358                  1112         300            546
OPP         1000   30     1              1492                  1491         100            101
OPP         1200   20     1              418                   417          461            462
PET         760    12     3              1227                  1876         132            -517
You'll see that there are some materials that have the same width and gauge yet they are not grouped. I think this is because the delivered qty is different on the orders. For example:
Material  Width  Gauge  Orders Placed  Delivered Qnty Kilos  Film Issued  Film Returned  Qty Remaining
LDPE      620    50     1              252                   252          67             67
LDPE      660    50     1              256                   256          62             62
I would like these two rows to be grouped. They have the same material, width and gauge but the delivered qty is different therefore it hasn't grouped it.
Can anyone help me group these strange rows?
Your "problem" is that the deliveries occurred on different dates, and you're grouping by ActDelDate so the data splits, but because you haven't selected the ActDelDate column, this isn't obvious.
The fix is: Remove ActDelDate from the group by list
You should also remove the unnecessary brackets around the first join, and change
HAVING ActDelDate Between [start date] And [end date]
to
WHERE ActDelDate Between [start date] And [end date]
and have it before the GROUP BY
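Putting those changes together, the query would look roughly like this (an untested sketch; the [Qty Remaining] arithmetic is expanded because most engines don't allow referencing SELECT aliases within the same SELECT list):

SELECT
    DELIVERIES.Material,
    DELIVERIES.Width,
    DELIVERIES.Gauge,
    Count(DELIVERIES.OrderNo) AS [Orders Placed],
    Sum(DELIVERIES.DeldQtyKilos) AS [KG Delivered],
    Sum(IssuedWarehouse.[Qty Issued]) AS [Film Issued],
    Sum([Film Retns].[Qty Issued]) AS [Film Returned],
    Sum(DELIVERIES.DeldQtyKilos)
        - Sum(IssuedWarehouse.[Qty Issued])
        + Sum([Film Retns].[Qty Issued]) AS [Qty Remaining]
FROM DELIVERIES
INNER JOIN IssuedWarehouse
    ON DELIVERIES.OrderNo = IssuedWarehouse.[Order No From]
INNER JOIN [Film Retns]
    ON DELIVERIES.OrderNo = [Film Retns].[Order No From]
WHERE ActDelDate Between [start date] And [end date]
GROUP BY DELIVERIES.Material, DELIVERIES.Width, DELIVERIES.Gauge
ORDER BY DELIVERIES.Material;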
You are grouping by the delivery date, which is causing the rows to be split. Either omit the delivery date from the results and the GROUP BY, or take the min/max of the delivery date.