Cutlist -> Frame Ranges for Loop/Trim - AviSynth

How do I convert a cutlist of
start: hours:minutes:seconds:milliseconds
end: hours:minutes:seconds:milliseconds
start1: hours:minutes:seconds:milliseconds
end1: hours:minutes:seconds:milliseconds
...
into frame ranges for
Loop(0,start,end)
or
Trim
?
I tried
((milliseconds/100)+seconds+(minutes * 60)+(hours*3600))*framerate
but this cuts at the wrong position.

A millisecond is a thousandth of a second, not a hundredth. Your code looks fine otherwise.

As was noted, one second has 1000 milliseconds. Another problem you have is integer division, which means milliseconds / 1000 will always yield zero.
So I think this should be the correct expression:
round(((float(milliseconds)/1000)+seconds+(minutes * 60)+(hours*3600))*framerate)
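As a sanity check, here is the same formula as a small Python sketch (the function name and the 25 fps example are mine, not from the thread):
def timecode_to_frame(hours, minutes, seconds, milliseconds, framerate):
    # Divide milliseconds by 1000.0 (not 100) and use float division
    # so the fraction is not truncated to zero.
    total_seconds = hours * 3600 + minutes * 60 + seconds + milliseconds / 1000.0
    return round(total_seconds * framerate)

print(timecode_to_frame(0, 1, 30, 400, 25))  # 90.4 s * 25 fps -> frame 2260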

Related

Number of seconds since epoch: pandas.to_datetime() vs epoch + pandas.to_timedelta()

I need to transform a date, expressed as a number of seconds since 2000-01-01T00:00:00, to a pandas.Timestamp with a resolution of 1 ns.
I have found two options:
Use: pandas.to_datetime(VALUE, unit='s', origin=pandas.Timestamp(2000, 1, 1))
Use: pandas.Timestamp(2000, 1, 1) + pandas.to_timedelta(VALUE, unit='sec')
I was expecting both of them to provide the same result, but the results are slightly different, e.g.:
In [2]: Y2K = pandas.Timestamp(2000, 1, 1)
...:
...: s = 538121125.6849735
...:
...: t1 = pandas.to_datetime(s, unit='s', origin=Y2K)
...: t2 = Y2K + pandas.to_timedelta(s, unit='sec')
...:
...: t1 - t2
Out[2]: Timedelta('0 days 00:00:00.000000090')
Am I doing something wrong? Can this discrepancy be considered a bug?
Which is the more correct way to perform this task? Please note that I need a resolution up to 1 ns.
I wouldn't say that's a bug; that's just an incorrect use of the pandas.to_datetime method. The second method you proposed seems to be the proper one. It is more accurate because it takes into account the fact that a Timestamp is a combination of a date and a time, whereas the first method only takes the date component into account.
A float (double precision) can store only about 15 significant digits (15.9, more precisely). You use 9 of them for the integer part, so you can expect only about 7 digits of precision in the fractional part, and that is exactly what you get.
In any case, do you really expect that much precision from any clock?
As @ChrisQ267 mentioned in the other answer, programs tend to store times in separate components so that more precision is available. Common layouts are either date and time in two fields, or seconds and the fractional part of a second in two fields. So a float is not ideal for high-precision timestamps.
In any case, both methods you are using are imprecise: both miss leap seconds, so the real result is already off by several seconds (not just in the 8th decimal place).
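A minimal Python sketch of that split-components idea (the two-field split is my own illustration; pandas does not do it for you):
import pandas as pd

s = 538121125.6849735
print(f"{s:.10f}")  # a double keeps ~15-16 significant digits, so the
                    # nanosecond-level digits here are already noise

# If the data source can deliver the two parts separately, each one
# stays well within a double's precision:
whole = 538121125   # integer seconds since 2000-01-01 (hypothetical field)
frac = 0.6849735    # fractional seconds (hypothetical field)
t = pd.Timestamp(2000, 1, 1) + pd.Timedelta(seconds=whole) + pd.Timedelta(seconds=frac)
print(t)            # exact to the nanosecond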

Why does round(143.23,-1) return 140?

For the query
SELECT round(143.23, -1)
FROM dual
I thought the output would be 142, but the output I got is 140.
Can anyone help me by explaining this?
The second parameter indicates how many digits of precision after the decimal point you want to preserve. Thus, -1 means one digit before the decimal point. I.e., you're losing the "ones" digit and rounding to the nearest "tens", resulting in 140.
To get a whole number (143 in this case), you can pass 0 as the second parameter, or just omit it entirely, as that's the default.
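The same convention exists in many languages, not just Oracle SQL; for example, in Python (shown purely as an illustration):
print(round(143.23, -1))  # 140.0 -- negative digits round to the tens
print(round(143.23, 0))   # 143.0 -- round to a whole number
print(round(143.23))      # 143   -- digits argument omitted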

SQL issue in calculating formulas

I have a problem when I'm trying to calculate, in a view, a formula whose result is smaller than 1.
e.g. I have the following formula: Arenda*TotalArea/10000 as TotalArenda
If I have Arenda=10 and TotalArea=10, I get TotalArenda=0.00 when it should normally be 0.01.
Thanks
Make Arenda = 10.0 and TotalArea = 10.0 instead of 10 and 10. This will force SQL not to use integer math and you will get your needed accuracy.
In fact, the only way I can get 0.00 as the result is if Arenda is 10 (an integer) while at least one of TotalArea or 10000 contains a decimal point and a trailing 0, and only if I override the order of operations by grouping with parentheses, such as
select 10.0* (10/10000) as blah
If all are integers you get 0. If all contain decimals you get 0.01. If I remove the parentheses, I get 0.01 if ANY of them are non-integer types.
If precision is highly important I would recommend you cast to decimals and not floats:
select CONVERT(decimal(10,2), Arenda) * CONVERT(decimal(10,2), TotalArea) / 10000.0
You are using columns, so changing the type may not be feasible. SQL Server does integer division on integers (other databases behave differently). Try one of these:
cast(Arenda as float)*cast(TotalArea as float)/10000
or:
Arenda*TotalArea/10000.0
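The effect is easy to reproduce; here is a short Python sketch of the same idea (Python's // is its integer-division operator, so this is only an analogy to the SQL behaviour):
arenda, total_area = 10, 10
print(arenda * total_area // 10000)   # 0    -- integer division truncates
print(arenda * total_area / 10000.0)  # 0.01 -- one non-integer operand is enough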

Time to turn 180 degrees

I have a space ship, and am wanting to calculate how long it takes to turn 180 degrees. This is my current code to turn the ship:
.msngFacingDegrees = .msngFacingDegrees + .ROTATION_RATE * TV.TimeElapsed
My current .ROTATION_RATE is 0.15, but it will change.
I have tried:
Math.Ceiling(.ROTATION_RATE * TV.TimeElapsed / 180)
But I always get an answer of 1. Please help.
To explain why you always get 1:
Math.Ceiling simply rounds up to the next integer, so the expression inside it must always be < 1 (and > 0).
Rearranging your formula gives TV.TimeElapsed = 180 * (1/.ROTATION_RATE). With a ROTATION_RATE of 0.15, we know that TV.TimeElapsed needs to reach 1200 before your overall expression returns > 1.
Is it possible that you're always looking at elapsed times below this threshold?
Going further and suggesting what your formula should be is harder; it's not completely clear without more context.
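For what it's worth, a minimal sketch of the rearranged formula in Python (the names are assumptions for the example):
ROTATION_RATE = 0.15  # degrees per unit of elapsed time

def time_to_turn(degrees, rate=ROTATION_RATE):
    # Rearranging degrees = rate * elapsed gives elapsed = degrees / rate.
    return degrees / rate

print(time_to_turn(180))  # 1200.0 time units at a rate of 0.15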

Why does decimal behave differently?

I am doing this small exercise.
declare @No decimal(38,5);
set @No=12345678910111213.14151;
select @No*1000/1000, @No/1000*1000, @No;
Results are:
12345678910111213.141510
12345678910111213.141000
12345678910111213.14151
Why are the results of the first two selects different when mathematically they should be the same?
It is not going to do algebra to simplify *1000/1000 away. It is going to follow the order of operations and actually perform each step.
@No*1000/1000
yields: @No*1000 = 12345678910111213141.51000
then /1000 = 12345678910111213.141510
and
@No/1000*1000
yields: @No/1000 = 12345678910111.213141
then *1000 = 12345678910111213.141000
By dividing first, you lose decimal digits.
Because of rounding: the second select first divides by 1000, which is mathematically 12345678910111.21314151, but the intermediate decimal keeps only six decimal places, so you lose the trailing digits.
Because when you divide first you get:
12345678910111.21314151
but only six decimal digits are kept after the point:
12345678910111.213141
then *1000 gives
12345678910111213.141
Because the intermediate type is the same as the arguments' type, in this case decimal(38,5), dividing first gives a loss of precision that is reflected in the truncated answer. Multiplying by 1000 first doesn't lose any precision, because that doesn't overflow 38 digits.
It's probably because you lose part of the data by doing the division first. Notice that @No has 5 decimal digits of precision, so when you divide it by 1000 you suddenly need 8 digits for the decimal part:
123.12345 / 1000 = 0.12312345
So the value has to be rounded (0.12312), and then this value is multiplied by 1000 -> 123.12 (you lose the 0.00345).
I think that's why the result is what it is...
The first expression does @No*1000, then divides by 1000. The intermediate values are always able to represent all the decimal places. The second expression first divides by 1000, which throws away the last two decimal places, before multiplying back to the original magnitude.
You can get around the problem by using CONVERT or CAST on the first value in your expression to increase the number of decimal places and avoid a loss of precision.
DECLARE @num decimal(38,5)
SET @num = 12345678910111213.14151
SELECT CAST(@num AS decimal(38,8)) / 1000 * 1000
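The same order-of-operations effect can be reproduced with Python's decimal module, quantizing the division result to six decimal places with truncation, which is what the observed output implies happened here (SQL Server's actual precision/scale rules are more involved, so this is only an illustration):
from decimal import Decimal, ROUND_DOWN

no = Decimal("12345678910111213.14151")
six = Decimal("0.000001")  # the intermediate division result keeps 6 decimals

multiply_first = no * 1000 / 1000                            # nothing is lost
divide_first = (no / 1000).quantize(six, ROUND_DOWN) * 1000  # digits truncated

print(multiply_first)  # 12345678910111213.14151
print(divide_first)    # 12345678910111213.141000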