Plain and simple, does anybody know why this:
Select 30 * 220 / 30
Returns 220, which is the correct result, and this:
Select 30 * (220/30)
Returns 210???
In the second case, I realise that 220/30 is being calculated first, generating a decimal (7.333333...), but still... isn't this lousy precision?
Under integer division, 220/30 = 7 and 99/100 = 0 (note truncation, not rounding).
Use non-integers to avoid this, e.g.
Select 30 * (220/30.0)
Or you can use an explicit cast:
Select 30 * (220/cast(30 as float))
The expression in the parentheses is always evaluated first, but since both operands are integers, the division is done in integer arithmetic; the result is 7, which multiplied by 30 gives you 210.
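A minimal sketch tying the two answers together (assuming an engine such as SQL Server or PostgreSQL, where / between two integers performs integer division):
SELECT 30 * 220 / 30;      -- 220: evaluated left to right, 6600 / 30 = 220
SELECT 30 * (220 / 30);    -- 210: 220 / 30 truncates to 7 first, then 7 * 30 = 210
SELECT 30 * (220 / 30.0);  -- ~220: the decimal literal forces non-integer division
SELECT 220 / 30, 99 / 100; -- 7, 0: truncation, not rounding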
Related
I have a syntax formatting issue with the query below.
I am trying to get the difference between two time columns, subtract 20 from it, and then take the greater of that result or 0, so anything less than 0 becomes 0.
select id, sum(max(0, (date_diff('minute', time_a, time_b)) - 20)) as mins
FROM tbl
What am I doing wrong in the query above that is erroring out?
Thanks!
sum(max()) is highly suspicious. Perhaps you intend:
select id, sum(greatest(0, date_diff('minute', time_a, time_b) - 20)) as mins
from tbl
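Note that because id is not aggregated, most engines will also insist on a GROUP BY; a sketch, assuming one row per id is the intent:
select id,
       sum(greatest(0, date_diff('minute', time_a, time_b) - 20)) as mins
from tbl
group by id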
I have a table with two columns: the maximum number of places (capacity) and the number of places available (availablePlaces).
I want to calculate the availablePlaces as a percentage of the capacity.
availablePlaces  capacity
1                20
5                18
4                15
Desired Result:
availablePlaces  capacity  Percent
1                20        5.0
5                18        27.8
4                15        26.7
Any ideas of a SELECT SQL query that will allow me to do this?
Try this:
SELECT availablePlaces, capacity,
ROUND(availablePlaces * 100.0 / capacity, 1) AS Percent
FROM mytable
You have to multiply by 100.0 instead of 100, so as to avoid integer division. Also, you have to use ROUND to round to the first decimal digit.
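For comparison, a quick sketch of what happens without the decimal literal, using the first row's values (1 and 20):
SELECT 1 / 20 * 100;             -- 0: 1 / 20 truncates to 0 before the multiply
SELECT 1 * 100 / 20;             -- 5: still integer division, so the .0 is lost
SELECT ROUND(1 * 100.0 / 20, 1); -- 5.0: decimal arithmetic, rounded to one place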
The following SQL query will do this for you (note the 100.0, which forces decimal division):
SELECT availablePlaces, capacity,
       (availablePlaces * 100.0 / capacity) as Percent
from table_name;
Why not use a number formatting function such as format_number (or an equivalent one in your database) to format a double as a percentage? This example is generalized. The returned value is a string.
WITH t AS
(
  SELECT count(*) AS num_rows, count(foo) AS num_foo
  FROM mytable
)
SELECT *, format_number(num_foo / num_rows, '#.#%') AS pct_grade_rows
FROM t
This avoids the use of round and multiplying the numerator by 100.
I have one query:
SELECT CAST(((stats.ts_spawn - 1427835600) / 86400) * 86400
            + 1427835600 AS INTEGER) AS anon_1
FROM stats
WHERE stats.ts_spawn > 1427835600
  AND stats.ts_spawn < 1428440399
GROUP BY anon_1
ORDER BY anon_1;
I'm expecting to get the start of each day in the week.
Result in Postgresql:
1427835600
1427922000
1428008400
1428094800
1428181200
1428267600
1428354000
Vertica returns the start of each hour of each day of the week:
1427839200
1427842800
1427846400
1427850000
... and so on, 167 records in total (24 * 7 - 1).
I have no idea how to modify this query.
The division in the second one (Vertica) is obviously producing a float, not an integer. In the Vertica documentation we can read this:
the Vertica 6 release introduced a behavior change when dividing integers using the / operator
If you want the query to behave the same on both systems, either change the configuration option mentioned in that doc or apply the FLOOR() function to the result of the division.
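A sketch of the FLOOR() variant, keeping the original constants (untested, so treat it as a starting point):
SELECT CAST(FLOOR((stats.ts_spawn - 1427835600) / 86400) * 86400
            + 1427835600 AS INTEGER) AS anon_1
FROM stats
WHERE stats.ts_spawn > 1427835600
  AND stats.ts_spawn < 1428440399
GROUP BY anon_1
ORDER BY anon_1;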
I have been trying to create a query for my sqlite3 database that gives me a count of all records in 10-minute intervals between a minimum and maximum time.
I found this answer on the internet, and it seems to work:
select (((`unixtime`)/600000)*600000) as timeslice,
       count(*) as mycount
from mytable
where `unixtime` >= 1413902772599
  and `unixtime` <= 1413972793000
group by timeslice;
The result I get is something like this:
timeslice mycount
------------- ----------
1413930000000 9
1413930600000 1013
1413931200000 265
1413932400000 410
1413933000000 643
This seems like sort of a hackish way to go about doing this query. It also doesn't include datapoints that have a zero count, which is an edge-case that I am going to have to fix outside of the database scope (unless there is an SQL solution for this).
Is there a better way to go about this? Are there edge cases for this if I proceed to continue using this query? Will this catastrophically fail under certain scenarios that I'm not considering?
There is no better way to round to multiples of 600000; SQLite has the round() function, but you would still need to convert to/from a value that can be rounded to some decimal fraction.
If you have SQLite 3.8.3 or later, you can use a recursive common table expression to generate the intervals:
WITH RECURSIVE intervals(t) AS (
  VALUES(1413902400000)
  UNION ALL
  SELECT t + 600000
  FROM intervals
  WHERE t < 1413972000000
)
SELECT intervals.t AS timeslice,
       COUNT(MyTable.unixtime) AS mycount  -- count the joined column, not *, so empty intervals report 0
FROM intervals
LEFT JOIN MyTable ON MyTable.unixtime BETWEEN intervals.t
                                          AND intervals.t + 599999
GROUP BY 1;
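For reference, the CTE's starting value is simply the question's lower bound rounded down to a multiple of 600000, using the same integer-division trick as the GROUP BY expression:
SELECT (1413902772599 / 600000) * 600000;  -- 1413902400000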
I'm trying to get an estimate of how many hours people worked during a set period of time. I want to show this by department and by what area they were working in. Right now I have this:
SELECT M.MemberDepartmentID,T.TaskName,
COUNT(DATEDIFF(HOUR, TT.StartTime, TT.EndTime)) 'Hours',
AVG(DATEDIFF(HOUR, TT.StartTime, TT.EndTime)) Average
FROM Member.TaskTracking TT
LEFT OUTER JOIN Member.Task T
ON TT.TaskID=T.TaskID
JOIN dbo.tblMember M
ON TT.MemberID=M.MemberID
WHERE M.FullTime=1
AND M.EmployeeSalary=1
AND (TT.StartTime >= '2013-10-01'
AND TT.EndTime < '2013-11-01')
GROUP BY M.MemberDepartmentID,T.TaskName
ORDER BY M.MemberDepartmentID,T.TaskName
I don't know how to confirm if it's correct, but some are definitely showing averages of zero even if there were hours worked. And some averages are way higher than the hours worked. For instance, here are some of my results:
MemberDepartmentID   TaskName      Hours   Average
---------------------------------------------------
1                    Packing       25      0
1                    Picking       6       0
1                    PreScanning   38      7
4                    Picking       2       104
Suggestions?
First, it is important to note that DATEDIFF(HOUR) returns an integer, and it does not necessarily give a good reflection of how much time has actually passed. For example, these both yield 1:
SELECT DATEDIFF(HOUR, '03:59', '04:01'); -- 2 minutes (0.033333 hours)
SELECT DATEDIFF(HOUR, '03:01', '04:59'); -- 118 minutes (1.966666 hours)
And these both yield 0:
SELECT DATEDIFF(HOUR, '03:01', '03:59'); -- 58 minutes (0.966666 hours)
SELECT DATEDIFF(HOUR, '03:01', '03:02'); -- 1 minute (0.016666 hours)
Next, if you give SQL Server integers to divide, it's going to perform integer math, meaning it will divide but discard any remainder. This yields 0:
SELECT 3/4;
Even though the real result is 0.75, and if it were rounded it would be 1. (Not that either of those results is particularly meaningful.) Now, extend that to averages.
DECLARE @d1 TABLE(a INT);
INSERT @d1 VALUES(3),(4);
SELECT AVG(a) FROM @d1;
This yields 3, not the 3.5 you would probably expect, for the same reasons as above.
Remembering that some of your tasks may have lasted up to 59 minutes, but would still yield an hour differential of 0, you could have, say, 4 tasks, three that lasted > 1 hour, and one that lasted < 1 hour. So your average calculation would essentially be:
SELECT (1+1+1+0)/4;
Which, as above, still yields 0.
If you want a meaningful average there, you should calculate the time spent more granularly than by hours. For example, you could perform the datediff in minutes:
SELECT DATEDIFF(MINUTE, '03:01', '04:59');
This yields 118. If you want to express that in hours, you could divide by 60.0 (the decimal is important) or multiply by 1.0:
SELECT DATEDIFF(MINUTE, '03:01', '04:59')/60.0;
SELECT 1.0*DATEDIFF(MINUTE, '03:01', '04:59')/60;
These both yield 1.966666. Much more meaningful to average such a result. So perhaps change your expression to:
Average = AVG(1.0*DATEDIFF(MINUTE, TT.StartTime, TT.EndTime)/60)
About the count: I'm not sure what you're attempting to do there, but you may want to make similar adjustments to that calculation, and probably consider using SUM. If you show some sample data and the results you expect, we can help more.
Also, I recommend not escaping keyword aliases using 'single quotes' - some forms of this syntax are deprecated, and it makes your alias look like a string literal. Ideally, try not to use keywords or otherwise invalid identifiers as aliases; but if you must, escape them with [square brackets].
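Putting those pieces together, a sketch of how the aggregate columns might look (assuming a total of hours per group is what the 'Hours' column is meant to be, which the question doesn't confirm):
SELECT M.MemberDepartmentID, T.TaskName,
       SUM(1.0 * DATEDIFF(MINUTE, TT.StartTime, TT.EndTime) / 60) AS [Hours],
       AVG(1.0 * DATEDIFF(MINUTE, TT.StartTime, TT.EndTime) / 60) AS Average
FROM Member.TaskTracking TT
LEFT OUTER JOIN Member.Task T ON TT.TaskID = T.TaskID
JOIN dbo.tblMember M ON TT.MemberID = M.MemberID
WHERE M.FullTime = 1
  AND M.EmployeeSalary = 1
  AND TT.StartTime >= '2013-10-01'
  AND TT.EndTime < '2013-11-01'
GROUP BY M.MemberDepartmentID, T.TaskName
ORDER BY M.MemberDepartmentID, T.TaskName;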