I tried setting the timeouts very small to force failures and see what happens:
ClientBuilder.newBuilder()
    .readTimeout(1, TimeUnit.NANOSECONDS)
    .connectTimeout(1, TimeUnit.NANOSECONDS)
But the code still seems to hang for what feels like the default timeout values.
readTimeout and connectTimeout both accept a TimeUnit parameter, so it seems reasonable that NANOSECONDS would be fine, right?
The Javadoc for both of them reads:
Value 0 represents infinity. Negative values are not allowed.
And these are internally converted to MILLISECONDS via TimeUnit.convert, which states:
Conversions from finer to coarser granularities truncate, so lose precision.
That is what is happening here. TimeUnit.convert even has an example:
For example, converting {@code 999} milliseconds to seconds results in {@code 0}.
Converting 1 nanosecond to milliseconds has the same problem: the result is 0.
And 0 is infinity... that is, the operating system default timeouts.
In hindsight this is obvious, but none of the Javadocs indicate that the specified times are internally converted to MILLISECONDS, or warn that precision can be lost in the conversion.
And I've wasted days wondering why this wasn't working, when I should have remembered from years of network programming that milliseconds are the usual default unit.
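The truncation is easy to demonstrate on its own; here is a minimal sketch (plain TimeUnit arithmetic, not the JAX-RS client internals):

import java.util.concurrent.TimeUnit;

public class TruncationDemo {
    public static void main(String[] args) {
        // 1 ns truncates to 0 ms, and 0 means "infinite" to the client
        System.out.println(TimeUnit.MILLISECONDS.convert(1, TimeUnit.NANOSECONDS));    // 0
        // Anything below a full millisecond is lost the same way
        System.out.println(TimeUnit.MILLISECONDS.convert(999, TimeUnit.MICROSECONDS)); // 0
        // The smallest value that survives the conversion is one millisecond
        System.out.println(TimeUnit.MILLISECONDS.convert(1, TimeUnit.MILLISECONDS));   // 1
    }
}

So the smallest timeout that actually forces a quick failure is one millisecond, e.g. readTimeout(1, TimeUnit.MILLISECONDS).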
This script shows that the timestamp Redis returns seems to jump forward a lot and then go backwards from time to time. This happens regularly throughout the run, on every run. What's going on? Is this documented anywhere? I stumbled on this because it messes up my sliding-window rate limiter.
res_0, res_1 = 0, 0
for _ in range(200):
    script = f"""
    local time = redis.call("TIME")
    local current_time = time[1] .. "." .. time[2]
    return current_time
    """
    res_0 = res_1
    res_1 = float(await redis.eval(script, numkeys=0))
    print(res_1 - res_0)
    time.sleep(0.01)
1667745169.747809
0.011765003204345703
0.01197195053100586
0.011564016342163086
0.011634111404418945
0.012428998947143555
0.011847972869873047
0.011600971221923828
0.011788129806518555
0.012033939361572266
0.012130022048950195
0.01160883903503418
0.011954069137573242
0.012022972106933594
0.011958122253417969
0.011713981628417969
0.011844873428344727
0.012138128280639648
0.011618852615356445
0.011570215225219727
0.011890888214111328
0.011478900909423828
0.7928261756896973
-0.5926899909973145
0.11812996864318848
0.11584997177124023
0.12353992462158203
0.1199800968170166
0.11719989776611328
0.12331008911132812
-0.8117339611053467
0.011723995208740234
0.01131582260131836
The most likely reason for this behavior is floating-point arithmetic on the calling-code side: parsing the value as a float inevitably rounds it (to what extent depends on the platform you are coding against, which you didn't mention), so the precision of the original result is lost. I would therefore suggest reviewing your logic so that you process the two components returned by TIME independently, as a pair of integers/longs, instead.
In addition to that, apart from the obvious possibility of an issue with the Redis server's clock, there is also a chance you are contacting a different Redis host on each iteration - this can happen if you are using a multi-node Redis topology (replication or cluster).
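A minimal sketch of what the integer-based handling could look like - shown here in Java with made-up sample values; the arithmetic is the same in any client language:

public class RedisTimeDelta {
    // TIME returns two strings: unix seconds and microseconds within that second.
    static long toMicros(String seconds, String micros) {
        return Long.parseLong(seconds) * 1_000_000L + Long.parseLong(micros);
    }

    public static void main(String[] args) {
        // Hypothetical replies from two consecutive TIME calls
        long t0 = toMicros("1667745169", "747809");
        long t1 = toMicros("1667745169", "759574");
        System.out.println((t1 - t0) + " microseconds"); // exact, no float rounding involved
    }
}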
(2332 / 2332) reduced
(2332 / 2) reduced
(2332 / 322) reduced (1166/161)
(2332 / 3) reduced (2332/3)
(2332 / 2432423) reduced (2332/2432423)
Look at the expressions above. The first and second, when printed, do not work: the MessageNotUnderstood window pops up. The 3rd, 4th, and 5th are okay - the results come out right.
Why does the reduced method not work?
Is it because the reduced method fails to handle final results which are integers, as Uko guesses?
Fractions are reduced automatically in the / method. There is no need to send the reduced message.
E.g. if you print the result of
2 / 4
you get the reduced (1/2) automatically.
If you print the result of
2332 / 2332
it is reduced to 1, which is not a Fraction but an Integer, and Integers do not understand the reduced message. That's why you get an error.
The only case when a Fraction is not automatically reduced is when you create it manually, as in
Fraction numerator: 2 denominator: 4
which will answer the non-reduced (2/4). But in normal arithmetic expressions you never need to send reduced.
The error occurs because, by default, the Integer class does not understand the message reduced in Squeak. This is despite members of Squeak's Integer class being fractions:
5 isFraction "returns True"
The wonderful thing about Smalltalk is that if something does not work the way you want, you can change it. So if an Integer does not respond to the message reduced and you want it to, then you can add a reduced method to Integer with the expected behavior:
reduced
    "treat an integer like a fraction"
    ^ self
Adding methods to classes is the way Smalltalk makes it easy to write expressive programs. For example, Fractions in GNU Smalltalk understand the message reduce but not the message reduced available in Squeak. Rather than trying to remember a meaningless difference, the programmer can simply make reduced available to fractions in GNU Smalltalk:
Fraction extend [
    "I am a synonym for reduce"
    reduced [
        ^ self reduce
    ]
]
Likewise one can extend Fraction in Squeak to have a reduce method:
reduce
    "I am a synonym for reduced"
    ^ self reduced
The designers of Smalltalk made a language that lets programmers express themselves in the way that they think about the problem.
I am not sure if this is a bug, but I've been playing with big and I can't understand why this code works this way:
https://carc.in/#/r/2w96
Code
require "big"
x = BigInt.new(1<<30) * (1<<30) * (1<<30)
puts "BigInt: #{x}"
x = BigFloat.new(1<<30) * (1<<30) * (1<<30)
puts "BigFloat: #{x}"
puts "BigInt from BigFloat: #{x.to_big_i}"
Output
BigInt: 1237940039285380274899124224
BigFloat: 1237940039285380274900000000
BigInt from BigFloat: 1237940039285380274899124224
At first I thought that BigFloat requires changing BigFloat.default_precision to work with bigger numbers. But from this code it looks like that only matters when producing the #to_s output.
Here is the same code with the BigFloat precision set to 1024 (https://carc.in/#/r/2w98):
Output
BigInt: 1237940039285380274899124224
BigFloat: 1237940039285380274899124224
BigInt from BigFloat: 1237940039285380274899124224
BigFloat.to_s uses LibGMP.mpf_get_str(nil, out expptr, 10, 0, self), where the GMP documentation says:
mpf_get_str (char *str, mp_exp_t *expptr, int base, size_t n_digits, const mpf_t op)
Convert op to a string of digits in base base. The base argument may vary from 2 to 62 or from -2 to -36. Up to n_digits digits will be generated. Trailing zeros are not returned. No more digits than can be accurately represented by op are ever generated. If n_digits is 0 then that accurate maximum number of digits are generated.
Thanks.
In GMP (this applies to all languages, not just Crystal), integers (C mpz_t, Crystal BigInt) and floats (C mpf_t, Crystal BigFloat) have separate default precisions.
Note that using an explicit precision is better than setting a default one, because the default precision might not be reentrant (it depends on a configure-time switch). Moreover, if someone reads only part of your code, they may miss the part that sets the default precision and assume the wrong one. Although I do not know the Crystal binding well, I assume such functionality is exposed somewhere.
The zero passed as n_digits to mpf_get_str means to generate as many digits as the precision can accurately represent; the number of significant decimal digits is roughly precision / log2(10). Floating-point numbers have finite precision, so in this case it was not the mpf_get_str call that made the last digits zero - it was the internal representation that never held that data. It looks like your (default) precision is too small to store all the necessary digits.
To summarize, there are two solutions:
Set a global default precision. Although this approach will work, it requires either changing the default precision frequently or using one precision for the whole program. Either way, the default-precision approach is a form of procrastination that is going to have its vengeance later.
Set the precision on a per-variable basis. This is the better solution. Although it requires more code (1-2 extra lines per variable initialization), it pays off later. For example, in a space-object tracking system, the physics calculations have to be super-precise, but other subsystems could use lower-precision numbers to save time and memory.
I am still unsure what made the conversion BigFloat --> BigInt yield the missing digits.
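The rounding itself is easy to reproduce outside GMP. Here is a minimal sketch, using Java's BigDecimal purely as an analogy (not GMP's mpf machinery): rounding 2^90 to 20 significant decimal digits - roughly the precision the default BigFloat appears to have, judging by the output above - yields exactly the trailing-zero value from the question.

import java.math.BigDecimal;
import java.math.BigInteger;
import java.math.MathContext;

public class PrecisionDemo {
    public static void main(String[] args) {
        BigInteger exact = BigInteger.ONE.shiftLeft(90);                  // 2^90, 28 decimal digits
        BigDecimal rounded = new BigDecimal(exact, new MathContext(20));  // keep ~20 significant digits
        System.out.println(exact);                   // 1237940039285380274899124224
        System.out.println(rounded.toPlainString()); // 1237940039285380274900000000
    }
}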
This is a case where I can find the definition, but I don't quite grasp it. From the official documentation:
An Instant is a particular moment in time measured in atomic seconds, with fractions. It is not tied to or aware of any epoch.
I don't understand how you can specify a particular moment in time without having an epoch - doesn't it need a reference point? On two different Linux machines it seemed that both Instants referred to seconds since the POSIX Epoch. My guess is that Instants do have an effective start time, but that the start time is implementation/device dependent.
# machine1
say(now * (1/3600) * (1/24) * (1/365.25)); # years from zero point
46.0748226200715
# machine2
say(now * (1/3600) * (1/24) * (1/365.25)); # years from zero point
46.0748712024946
Anyway, so my question is, can Instants be relied upon to be consistent between different processes or are they for "Internal" use only?
say (now).WHAT; # «(Instant)»
say (now * 1).WHAT # «(Num)»
Any numerical operator will coerce its operands into Nums. If you want a proper string representation, use .perl.
say (now).perl # «Instant.from-posix((<1211194481492/833>, 0))»
No matter what platform you are on, Instant.from-posix will always be relative to the Unix epoch.
see: https://github.com/rakudo/rakudo/blob/nom/src/core/Instant.pm#L15
All Instant objects currently on a particular machine are comparable; Instants from different machines may not be.
For practical purposes, on POSIX machines it is currently based on the number of seconds since January 1st, 1970 according to International Atomic Time (TAI), which is currently 36 seconds ahead of Coordinated Universal Time (UTC).
(This should not be relied upon even if you know your code will only ever be run on a POSIX machine)
On another system it may make more sense for it to be based on the amount of time since the machine was turned on.
So after a reboot, any Instants from before the reboot will not be comparable to any after it.
If you want to compare Instants from different machines, or store them for later use, convert them to a standardized value first.
There are several built-in converters you can use:
# runtime constant-like term
my \init = INIT now;
say init.to-posix.perl;
# (1454172565.36938, Bool::False)
say init.DateTime.Str; # now.DateTime =~= DateTime.now
# 2016-01-30T16:49:25.369380Z
say init.Date.Str; # now.Date =~= Date.today
# 2016-01-30
say init.DateTime.yyyy-mm-dd eq init.Date.Str;
# True
I would recommend just using DateTime objects if you need more than what is shown above, as DateTime has various useful methods.
my $now = DateTime.now;
say $now.Str;
# 2016-01-30T11:29:14.928520-06:00
say $now.truncated-to('day').utc.Str;
# 2016-01-30T06:00:00Z
# ^
say $now.utc.truncated-to('day').Str;
# 2016-01-30T00:00:00Z
# ^
Date.today and DateTime.now take your local timezone information into consideration, whereas now.Date and now.DateTime can't.
If you really only want to deal with POSIX times you can use time which is roughly the same as now.to-posix[0].Int.
I noticed that the parameter of taskDelay is of type int, which means the value could be negative. I'm just wondering how the function will react when passed a negative number.
Most functions would validate the input, and just return early/return 0/set the parameter in question to a default value.
I presume there's no critical need to do this in production, and you probably have some code lying around that you could test with.... why not give it a go?
The documentation doesn't address it, and the error codes it does define don't cover this case. The most correct answer, therefore, is that the results are undefined.
See the VxWorks / Tornado II FAQ for this gem, however:
taskDelay(-1) shows another bug in the vxWorks timer/tick code. It has the (side) effect of setting vxTicks to zero. This corrupts the localtime (and probably other things). In fact taskDelay(x) will have the same effect if vxTicks + x >= 0x100000000. If the system clock rate is 100Hz this happens after about 500 days (because vxTicks wraps). At faster clock rates it will happen sooner. Anyone trying for several years uptime?
Oh, there is an undocumented upper limit on the clock rate. At rates above 4294 select() will fail to convert its 'usec' time into the correct number of ticks. (From: David Laight, dsl#tadpole.co.uk)
Assuming this bug is old, I would hope that it would either return an error or do the same thing as taskDelay(0), which puts your task at the end of the ready queue.
For taskDelay(10), the task delay tick will (virtually) count down 10, 9, ..., 1, 0.
For taskDelay(-10), it will count down -10, -11, ..., -2147483648, then wrap to 2147483647 and continue on down ..., 1, 0.
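If that countdown is accurate, a negative argument amounts to an enormous delay. A rough back-of-the-envelope sketch (assuming a 32-bit tick counter and a 100 Hz system clock; this is not documented VxWorks behavior):

public class NegativeDelayEstimate {
    public static void main(String[] args) {
        int requested = -10;
        // Counting down from -10 past INT_MIN, wrapping to INT_MAX and continuing to 0
        // covers as many ticks as the unsigned 32-bit interpretation of -10.
        long effectiveTicks = Integer.toUnsignedLong(requested);          // 4294967286
        System.out.println(effectiveTicks + " ticks");
        // At a 100 Hz tick rate that is on the order of 1.36 years.
        System.out.println(effectiveTicks / 100.0 / 86400 / 365.25 + " years");
    }
}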