Within my database, my data can look one of two ways:
1 -
hh_match_count: 5,
hh_total_fhc_0: 6,
hh_total_fhc_1: 5,
hh_total_fhc_2: 3,
hh_total_fhc_3: 2,
hh_total_fhc_4: 4
2 -
hh_match_count: 3,
hh_total_fhc_0: 6,
hh_total_fhc_1: 5,
hh_total_fhc_2: 3,
hh_total_fhc_3: null,
hh_total_fhc_4: null
What I want to do is calculate the number of times a value is >= 1 (I'll want to expand this to >= 2, >= 3, etc.) across each of hh_total_fhc_0, hh_total_fhc_1, hh_total_fhc_2, hh_total_fhc_3 and hh_total_fhc_4, and then divide that by hh_match_count. So basically I'm getting the % of occurrences.
What query should I be looking at executing here? I'm slowly getting more involved with SQL statements.
coalesce returns the first non-null value it's passed, which turns your null values into zeroes; you need them counted as zero for the average. The next step is to add least to the mix:
SELECT least(1, coalesce(hh_total_fhc_0, 0)) FROM fixtures
gives you a 0 if the value is zero and a 1 if it's positive. The coalesce matters because null isn't a number: without it, least(1, null) evaluates to 1, and a missing value would wrongly count as a hit. Apply that to each of your columns and then you can calculate the hit percentage exactly as you were thinking.
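Putting it together, a minimal sketch (assuming the fixtures table named above; the 100.0 avoids integer division, and hit_pct is just an illustrative alias):
SELECT 100.0 *
( least(1, coalesce(hh_total_fhc_0, 0))
+ least(1, coalesce(hh_total_fhc_1, 0))
+ least(1, coalesce(hh_total_fhc_2, 0))
+ least(1, coalesce(hh_total_fhc_3, 0))
+ least(1, coalesce(hh_total_fhc_4, 0)) )
/ hh_match_count AS hit_pct
FROM fixtures;
For the >= 2 (or >= 3, and so on) variants, swap each least(...) term for a CASE expression such as CASE WHEN coalesce(hh_total_fhc_0, 0) >= 2 THEN 1 ELSE 0 END.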
I want to write a statement that generates a random pick from a set of four values (2,4,6, and 8). Below is the select statement that I have so far
SELECT
CASE
WHEN RN_GENERATOR.RANDOM_NUMBER BETWEEN 0 AND 2.00 THEN 2
WHEN RN_GENERATOR.RANDOM_NUMBER BETWEEN 2.01 AND 4.00 THEN 4
WHEN RN_GENERATOR.RANDOM_NUMBER BETWEEN 4.01 AND 6.00 THEN 6
WHEN RN_GENERATOR.RANDOM_NUMBER BETWEEN 6.01 AND 8.00 THEN 8
END AS ORDER_FREQUENCY
FROM (SELECT ROUND(RAND()*8,2) AS RANDOM_NUMBER FROM DUMMY) RN_GENERATOR
Is there a more intelligent way of doing this?
Looks to me as if your requirement can be fulfilled with this statement:
select
ROUND(rand()*4, 0, ROUND_CEILING) * 2 as ORDER_FREQUENCY
from dummy;
RAND() * 4 spreads the value range of possible outcomes for the RAND() function from 0..1 to 0..4.
ROUND( ... , 0, ROUND_CEILING) rounds the number up to the next greater-or-equal integer and leaves no decimal places. For this example that means the output of the rounding can only be 1, 2, 3 or 4.
* 2 simply maps those four possible values onto your target range 2, 4, 6, 8. If the multiplication didn't suffice, you could also use the MAP() function for this.
And that's it. Random numbers picked from the set of (2, 4, 6, 8).
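For illustration, the MAP() variant might look something like this (just a sketch: MAP takes an expression followed by search/result pairs, so each rounded value is paired with its target explicitly):
select
map( round(rand()*4, 0, ROUND_CEILING),
1, 2,
2, 4,
3, 6,
4, 8 ) as ORDER_FREQUENCY
from dummy;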
You can randomly sort a data set by using
ORDER BY Rand()
and select the first row as your random value.
Here is an example:
select top 1 rownum
from Numbers_Table(10) as nt
where rownum in (2,4,6,8)
order by rand();
The Numbers_Table function returns a numbers table on a HANA database; the WHERE clause filters it down to just the values you want as possible random picks.
The TOP 1 clause in the SELECT then returns the first row of the randomly ordered set.
I hope it helps
I am new to the Wonderware InSQL Historian.
When I retrieve data for a single day, the value resets to zero each time after incrementing for a few samples:
0
1
2
0
1
2
3
4
5
6
7
8
0
I want to count the number of rows with values greater than 0.
At the moment I am using the COUNT function with Value > 0 in my query,
but I would like to know whether there is an InSQL retrieval option for counting the values instead:
select count(value) FROM *****.Runtime.dbo.History
WHERE TagName = 'TagA'
AND DateTime >= '2016-06-14 06:00:00'
AND Value > 0
Please help me
There is no retrieval mode in Wonderware Historian that can solve your problem in a straightforward manner. Your proposed solution is probably the best one, but be aware that it will report an incorrect count in a disconnection/reconnection scenario.
In case of a disconnection, a NULL value is logged, and at reconnect the same value is logged again (e.g. 0, 1, 2, NULL, 2, 3, 0, 1), resulting in that value being counted twice.
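If those reconnect duplicates matter, one way to compensate is to skip a sample that re-logs the pre-disconnect value immediately after a NULL. A minimal sketch, assuming the Historian's underlying SQL Server supports LAG (2012 or later), and writing Runtime.dbo.History in place of the masked server prefix from the question:
SELECT COUNT(*)
FROM (
    SELECT Value,
           LAG(Value, 1) OVER (ORDER BY DateTime) AS prev1,  -- previous sample
           LAG(Value, 2) OVER (ORDER BY DateTime) AS prev2   -- sample before that
    FROM Runtime.dbo.History
    WHERE TagName = 'TagA'
      AND DateTime >= '2016-06-14 06:00:00'
) t
WHERE Value > 0
  AND NOT (prev1 IS NULL AND prev2 IS NOT NULL AND prev2 = Value);  -- drop the value re-logged after a reconnect
In the example sequence above, the second 2 follows a NULL and matches the value logged just before the disconnect, so it is excluded and counted only once.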
I want to SUM a lot of rows.
Is it quicker (or better practice, etc) to do Option A or Option B?
Option A
SELECT
[Person],
SUM([Value]) AS Total
FROM
Database
WHERE
[Value] > 0
GROUP BY
[Person]
Option B
SELECT
[Person],
SUM([Value]) AS Total
FROM
Database
GROUP BY
[Person]
So if I have, for Person X:
0, 7, 0, 6, 0, 5, 0, 0, 0, 4, 0, 9, 0, 0
Option A does:
a) Remove zeros
b) 7 + 6 + 5 + 4 + 9
Option B does:
a) 0 + 7 + 0 + 6 + 0 + 5 + 0 + 0 + 0 + 4 + 0 + 9 + 0 + 0
Option A has less summing, because it has fewer records to sum, since I've excluded all the rows that have a zero value. But Option B doesn't need a WHERE clause.
Anyone got an idea as to whether either of these are significantly quicker/better than the other? Or is it just something that doesn't matter either way?
Thanks :-)
Well, if you have a filtered index that exactly matches the where clause, and if that index removes a significant amount of data (as in: a good chunk of the data is zeros), then definitely the first... If you don't have such an index, you'll need to test it on your specific data, but I would probably expect the unfiltered scenario to be faster, as it can use a range of tricks to do the sum if it doesn't need to do branching etc.
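For reference, such a filtered index could look like the following sketch (assuming SQL Server and the [Database]/[Person]/[Value] names from the question; the index name is made up):
CREATE NONCLUSTERED INDEX IX_Database_Person_ValuePositive
ON [Database] ([Person])
INCLUDE ([Value])
WHERE [Value] > 0;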
However, the two examples aren't functionally equivalent at the moment (the second includes negative values, the first doesn't).
Assuming that Value is always positive, the 1st query might still return fewer rows if there's a Person with all zeroes: that Person drops out of Option A entirely but shows up in Option B with a Total of 0.
Otherwise you should simply test actual runtime/CPU on a really large amount of rows.
As already pointed out, the two are not functionally equivalent. In addition to the differences already mentioned (negative values, different output row count), Option A also filters out rows where Value is NULL; Option B doesn't, though since SUM ignores NULLs this only matters for a Person whose values are all NULL.
Based on the execution plan for both of these, and using a small dataset similar to the one you provided, Option B is slightly faster with an Estimated Subtree Cost of .0146636 vs .0146655. However, you may get different results depending on the query or the size of the dataset. The only option is to test and see for yourself.
http://www.developer.com/db/how-to-interpret-query-execution-plan-operators.html
Drop Table #Test
Create Table #Test (Person nvarchar(200), Value int)
Insert Into #Test
Select 'Todd', 12 Union
Select 'Todd', 11 Union
Select 'Peter', 20 Union
Select 'Peter', 29 Union
Select 'Griff', 10 Union
Select 'Griff', 0 Union
Select 'Peter', 0
SELECT [Person], SUM([Value]) AS Total
FROM #Test
WHERE [Value] > 0
GROUP BY [Person]
SELECT [Person],SUM([Value]) AS Total
FROM #Test
GROUP BY [Person]
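Note that with this particular sample data both queries return the same totals (Todd 23, Peter 49, Griff 10); the differences discussed above only show up when a Person has negative, NULL, or exclusively zero values.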
Let's say I have a table A with an attribute numbers that looks like this:
A
numbers
1
2
3
4
5
6
7
8
9
10
What will this query return? How is the 5 getting compared?
SELECT numbers
FROM A
WHERE 5 > ALL (SELECT numbers FROM a)
The ALL condition requires that ALL of the results returned by your subquery
(SELECT numbers FROM A)
satisfy the condition (that each is smaller than 5); otherwise the condition is not met and no results are returned.
In your case the subquery SELECT numbers FROM a returns numbers that violate the condition: 6, 7, 8, 9, 10 are greater than 5, and 5 itself is not smaller than 5. So not ALL numbers satisfy it, the condition evaluates to FALSE, and no rows are returned.
Update:
Based on your comments I added details to my answer:
The statement using the ALL condition should be read as:
"If ALL of the numbers returned by (SELECT numbers FROM A) are smaller than 5, then return the numbers selected by your MAIN SELECT."
The statement using the ANY condition should be read as:
"If ANY of the numbers returned by (SELECT numbers FROM A) is smaller than 5, then return the numbers selected by your MAIN SELECT."
You can run the query in this SQLFiddle to see how the results change; just replace ANY with ALL and see the difference.
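For example, against the table above:
SELECT numbers FROM A WHERE 5 > ALL (SELECT numbers FROM A); -- false: 5 is not greater than 5..10, so no rows
SELECT numbers FROM A WHERE 5 > ANY (SELECT numbers FROM A); -- true: 5 > 1 alone is enough, so all ten rows come back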
It will return an empty resultset (no rows).
The WHERE clause is evaluated for each row in the table A [first instance].
The WHERE clause tests whether 5 is greater than EACH row in table A [second instance].
It is not (there are several rows whose value is greater than or equal to 5), so the WHERE clause is always false.
Therefore no rows from table A [first instance] pass the query, therefore no rows are returned.
I have some data I am querying. The table is composed of two columns - a unique ID, and a value. I would like to count the number of times each unique value appears (which can easily be done with a COUNT and GROUP BY), but I then want to be able to count that. So, I would like to see how many items appear twice, three times, etc.
So for the following data (ID, val)...
1, 2
2, 2
3, 1
4, 2
5, 1
6, 7
7, 1
The intermediate step would be (val, count)...
1, 3
2, 3
7, 1
And I would like to have (count_from_above, new_count)...
3, 2 -- since three appears twice in the previous table
1, 1 -- since one appears once in the previous table
Is there any query which can do that? If it helps, I'm working with Postgres. Thanks!
Try something like this (note that the inner query groups by the value column, val, not by the unique id):
select
times,
count(*) as new_count
from ( select
val,
count(*) as times
from your_table
group by val ) a
group by times
(your_table stands in for whatever your table is actually called.)
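Run against the sample data, the inner query produces the intermediate step from the question ((1, 3), (2, 3), (7, 1)), and the outer group by times then yields (3, 2) and (1, 1).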