I'm having trouble understanding the difference between the Kotlin ticker channel modes TickerMode.FIXED_DELAY and TickerMode.FIXED_PERIOD. I've played with both, but I'm unable to draw any conclusions from their behavior. I've also read the example in the docs. I would be grateful for a clearer explanation, with an illustration of each.
As you can see in the coroutines sources, the difference is that FIXED_PERIOD is more sophisticated: it takes into account the possibility that the receiver cannot keep up and adjusts the delay before the next invocation of send. This can be tricky to demonstrate, though, because you need to measure the time the receiver spends waiting for the next tick.
P.S. Note that this functionality is marked as obsolete, i.e. "the design of the corresponding declarations has serious known flaws and they will be redesigned in the future." In this case the reason is that it isn't integrated with structured concurrency.
import kotlinx.coroutines.*
import kotlinx.coroutines.channels.ReceiveChannel
import kotlinx.coroutines.channels.TickerMode
import kotlinx.coroutines.channels.ticker
import kotlin.system.measureTimeMillis

fun main() = runBlocking {
    println("\nFIXED_PERIOD")
    val tickerPeriodMode = ticker(100, 0, mode = TickerMode.FIXED_PERIOD)
    consumer(tickerPeriodMode)

    println("\nFIXED_DELAY")
    val tickerDelayMode = ticker(100, 0, mode = TickerMode.FIXED_DELAY)
    consumer(tickerDelayMode)
}

private suspend fun CoroutineScope.consumer(ticker: ReceiveChannel<Unit>) {
    val job = launch {
        var i = 0
        while (isActive) {
            val waitTime = measureTimeMillis {
                ticker.receive()
            }
            print("[%4d ms]".format(waitTime))
            if (i++ == 1) {
                delay(150)
                println(" adding extra 150ms delay")
            } else
                println(" going ahead")
        }
    }
    delay(1_000L)
    job.cancel()
    ticker.cancel() // indicate that no more elements are needed
}
Output
FIXED_PERIOD
[ 1 ms] going ahead
[ 91 ms] adding extra 150ms delay
[ 0 ms] going ahead
[ 46 ms] going ahead
[ 100 ms] going ahead
[ 102 ms] going ahead
[ 98 ms] going ahead
[ 100 ms] going ahead
[ 99 ms] going ahead
[ 100 ms] going ahead
[ 100 ms] going ahead
FIXED_DELAY
[ 0 ms] going ahead
[ 105 ms] adding extra 150ms delay
[ 0 ms] going ahead
[ 101 ms] going ahead
[ 100 ms] going ahead
[ 103 ms] going ahead
[ 103 ms] going ahead
[ 101 ms] going ahead
[ 101 ms] going ahead
[ 105 ms] going ahead
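As a side note on the obsolescence warning above: because ticker creates a channel that isn't tied to the scope that consumes it, a common workaround is to build the tick source yourself from a plain loop with delay, e.g. as a Flow. Below is only a minimal sketch (the name tickerFlow is mine, not part of kotlinx.coroutines); it behaves like FIXED_DELAY, since emit does not return until the collector has processed the previous tick, and it is cancelled together with the collecting coroutine.

import kotlinx.coroutines.delay
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.flow

// Hypothetical helper, not part of kotlinx.coroutines: a FIXED_DELAY-style ticker as a Flow.
fun tickerFlow(periodMillis: Long, initialDelayMillis: Long = 0L): Flow<Unit> = flow {
    delay(initialDelayMillis)
    while (true) {
        emit(Unit)          // resumes only after the collector has handled this tick
        delay(periodMillis) // then wait the full period, as FIXED_DELAY does
    }
}

// Usage: tickerFlow(100).collect { /* handle the tick */ } inside a coroutine scope.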
Java 17.0.3 with ZGC, started with the following flags:
-Xms8g -Xmx8g -XX:SoftMaxHeapSize=7500m -XX:ConcGCThreads=4 -XX:-UseDynamicNumberOfGCThreads -XX:+UseZGC
[516623.420s][info][gc,task ] GC(50947) Using 4 workers
[516626.249s][info][gc,phases] GC(50947) Pause Mark Start 2829.801ms
[516626.418s][info][gc,phases] GC(50947) Concurrent Mark 168.389ms
[516626.418s][info][gc,phases] GC(50947) Pause Mark End 0.040ms
[516626.418s][info][gc,phases] GC(50947) Concurrent Mark Free 0.001ms
[516626.457s][info][gc,phases] GC(50947) Concurrent Process Non-Strong References 38.371ms
[516626.457s][info][gc,phases] GC(50947) Concurrent Reset Relocation Set 0.057ms
[516626.460s][info][gc,phases] GC(50947) Concurrent Select Relocation Set 2.591ms
[516626.460s][info][gc,phases] GC(50947) Pause Relocate Start 0.021ms
[516626.462s][info][gc,phases] GC(50947) Concurrent Relocate 2.486ms
[516626.462s][info][gc,ref ] GC(50947) Soft: 2038 encountered, 284 discovered, 0 enqueued
[516626.462s][info][gc,ref ] GC(50947) Weak: 26751 encountered, 2023 discovered, 0 enqueued
[516626.462s][info][gc,ref ] GC(50947) Final: 150 encountered, 2 discovered, 0 enqueued
[516626.462s][info][gc,ref ] GC(50947) Phantom: 454 encountered, 396 discovered, 0 enqueued
[516626.462s][info][gc,reloc ] GC(50947) Small Pages: 1135 / 2270M, Empty: 1350M, Relocated: 3M, In-Place: 0
[516626.462s][info][gc,reloc ] GC(50947) Medium Pages: 1 / 32M, Empty: 0M, Relocated: 0M, In-Place: 0
[516626.462s][info][gc,reloc ] GC(50947) Large Pages: 0 / 0M, Empty: 0M, Relocated: 0M, In-Place: 0
[516626.462s][info][gc,reloc ] GC(50947) Forwarding Usage: 0M
[516626.462s][info][gc ] GC(50947) Garbage Collection (Proactive) 2302M(28%)->546M(7%)
What scenario can cause this phenomenon, i.e. a Pause Mark Start of almost 3 seconds?
Can somebody answer with a short example:
How do I correctly lock a part of the code with this condition: if that part is already locked by some thread, other threads should not wait for it but simply skip that part and keep going?
OK, here is the working example (credit goes to @KenThomases ...)
import Dispatch
import Foundation // assumption: imported so that usleep resolves on all platforms

let semaphore = DispatchSemaphore(value: 1)
let printQueue = DispatchQueue(label: "print queue")
let group = DispatchGroup()

func longRunningTask(i: Int) {
    printQueue.async(group: group) {
        print(i, "GREEN semaphore")
    }
    usleep(1000) // approx. 1 millisecond
    printQueue.async(group: group) {
        print(i, "job done")
    }
}

func shortRunningTask(i: Int) {
    group.enter()
    guard semaphore.wait(timeout: .now() + 0.001) == .success else { // wait approx. 1 millisecond from now
        printQueue.async(group: group) {
            print(i, "RED semaphore, job not done")
        }
        group.leave()
        return
    }
    longRunningTask(i: i)
    semaphore.signal()
    group.leave()
}

printQueue.async(group: group) {
    print("running")
}

DispatchQueue.concurrentPerform(iterations: 10, execute: shortRunningTask)

group.wait()
print("all done")
and its printout
running
0 GREEN semaphore
2 RED semaphore, job not done
1 RED semaphore, job not done
3 RED semaphore, job not done
0 job done
4 GREEN semaphore
5 RED semaphore, job not done
6 RED semaphore, job not done
7 RED semaphore, job not done
4 job done
8 GREEN semaphore
9 RED semaphore, job not done
8 job done
all done
Program ended with exit code: 0
How can I generate a steady CPU load in C#, lower than 100%, for a certain time? I would also like to be able to change the load amount after a certain period of time. How do you recommend generating usage spikes for a very short time?
First off, you have to understand that CPU usage is always an average over a certain time. At any given instant, the CPU is either working or it is not; it is never 40% working.
We can, however, simulate a 40% load over, say, a second by having the CPU work for 0.4 seconds and sleep for 0.6 seconds. That gives an average utilization of 40% over that second.
Cutting it down into chunks smaller than one second, say 100 milliseconds, should give even more stable utilization.
The following method will take an argument that is desired utilization and then utilize a single CPU/core to that degree:
public static void ConsumeCPU(int percentage)
{
    if (percentage < 0 || percentage > 100)
        throw new ArgumentException("percentage");

    Stopwatch watch = new Stopwatch();
    watch.Start();
    while (true)
    {
        // Make the loop go on for "percentage" milliseconds then sleep the
        // remaining percentage milliseconds. So 40% utilization means work 40ms and sleep 60ms
        if (watch.ElapsedMilliseconds > percentage)
        {
            Thread.Sleep(100 - percentage);
            watch.Reset();
            watch.Start();
        }
    }
}
I'm using a Stopwatch here because it is more accurate than the TickCount property, but you could likewise use that and use subtraction to check whether you've run long enough.
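For reference, a minimal sketch of that TickCount variant (my own illustration, not code from the original answer; the class name is arbitrary):

using System;
using System.Threading;

public static class TickCountLoad
{
    // Same idea as ConsumeCPU above, but timed with Environment.TickCount and subtraction.
    public static void ConsumeCPU(int percentage)
    {
        while (true)
        {
            int start = Environment.TickCount;
            // Busy-work until "percentage" milliseconds have elapsed...
            while (Environment.TickCount - start < percentage) { }
            // ...then sleep for the remainder of each 100 ms slice.
            Thread.Sleep(100 - percentage);
        }
    }
}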
Two things to keep in mind:
On multi-core systems, you will have to spawn one thread for each core. Otherwise, you'll see only one CPU/core being exercised, giving roughly "percentage/number-of-cores" utilization.
Thread.Sleep is not very accurate. It will never guarantee times exactly to the millisecond, so you will see some variation in your results.
To answer your second question about changing the utilization after a certain time: I suggest you run this method on one or more threads (depending on the number of cores), and when you want to change the utilization, just stop those threads and spawn new ones with the new percentage values. That way, you don't have to implement thread communication to change the percentage of a running thread.
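As a sketch of that stop-and-respawn approach (my own illustration, not from the answer), here is one way to do it with a CancellationToken so the worker threads stop cleanly instead of being aborted:

using System;
using System.Diagnostics;
using System.Threading;

public static class LoadGenerator
{
    // Starts one worker per core at the given utilization; cancel the token to stop them.
    public static Thread[] StartLoad(int percentage, CancellationToken token)
    {
        var threads = new Thread[Environment.ProcessorCount];
        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(() =>
            {
                var watch = Stopwatch.StartNew();
                while (!token.IsCancellationRequested)
                {
                    if (watch.ElapsedMilliseconds > percentage)
                    {
                        Thread.Sleep(100 - percentage);
                        watch.Restart();
                    }
                }
            });
            threads[i].Start();
        }
        return threads;
    }
}

// Usage: run at 30% for 5 seconds, then switch to 70%.
// var cts = new CancellationTokenSource();
// var workers = LoadGenerator.StartLoad(30, cts.Token);
// Thread.Sleep(5000);
// cts.Cancel();
// foreach (var t in workers) t.Join();
// cts = new CancellationTokenSource();
// workers = LoadGenerator.StartLoad(70, cts.Token);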
Just to add to Isak's response, here is a simple implementation for multicore:
public static void CPUKill(object cpuUsage)
{
    // Parallel.For with a single iteration just runs the body once; the per-core
    // parallelism comes from Main spawning one thread per processor below.
    Parallel.For(0, 1, new Action<int>((int i) =>
    {
        Stopwatch watch = new Stopwatch();
        watch.Start();
        while (true)
        {
            if (watch.ElapsedMilliseconds > (int)cpuUsage)
            {
                Thread.Sleep(100 - (int)cpuUsage);
                watch.Reset();
                watch.Start();
            }
        }
    }));
}

static void Main(string[] args)
{
    int cpuUsage = 50;
    int time = 10000;
    List<Thread> threads = new List<Thread>();

    for (int i = 0; i < Environment.ProcessorCount; i++)
    {
        Thread t = new Thread(new ParameterizedThreadStart(CPUKill));
        t.Start(cpuUsage);
        threads.Add(t);
    }

    Thread.Sleep(time);
    foreach (var t in threads)
    {
        t.Abort(); // note: Thread.Abort is not supported on .NET Core / .NET 5+
    }
}
For uniform stressing: Isak Savo's answer with a slight tweak. The problem is interesting: in reality there are workloads that far exceed this one in terms of wattage used, thermal output, lane saturation, etc., so a busy loop as the workload is a poor and rather unrealistic approximation.
int percentage = 80;
for (int i = 0; i < Environment.ProcessorCount; i++)
{
    (new Thread(() =>
    {
        Stopwatch watch = new Stopwatch();
        watch.Start();
        while (true)
        {
            // Make the loop go on for "percentage" milliseconds then sleep the
            // remaining percentage milliseconds. So 40% utilization means work 40ms and sleep 60ms
            if (watch.ElapsedMilliseconds > percentage)
            {
                Thread.Sleep(100 - percentage);
                watch.Reset();
                watch.Start();
            }
        }
    })).Start();
}
Each time you have to set the cpuUsageIncreaseby variable. For example:
1. CPU % increases by cpuUsageIncreaseby % for one minute.
2. Goes down to 0% for 20 seconds.
3. Go to step 1.
private void test()
{
    int cpuUsageIncreaseby = 10;
    while (true)
    {
        for (int i = 0; i < 4; i++)
        {
            //Console.WriteLine("am running ");
            //DateTime start = DateTime.Now;
            int cpuUsage = cpuUsageIncreaseby;
            int time = 60000; // duration for which the CPU load must stay increased
            List<Thread> threads = new List<Thread>();
            for (int j = 0; j < Environment.ProcessorCount; j++)
            {
                Thread t = new Thread(new ParameterizedThreadStart(CPUKill));
                t.Start(cpuUsage);
                threads.Add(t);
            }
            Thread.Sleep(time);
            foreach (var t in threads)
            {
                t.Abort();
            }
            //DateTime end = DateTime.Now;
            //TimeSpan span = end.Subtract(start);
            //Console.WriteLine("Time Difference (seconds): " + span.Seconds);
            //Console.WriteLine("10 sec wait... for another.");
            cpuUsageIncreaseby = cpuUsageIncreaseby + 10;
            System.Threading.Thread.Sleep(20000);
        }
    }
}
I attempted to write a C-style for-loop in REBOL:
for [i: 0] [i < 10] [i: i + 1] [
    print i
]
This syntax doesn't appear to be correct, though:
*** ERROR
** Script error: for does not allow block! for its 'word argument
** Where: try do either either either -apply-
** Near: try load/all join %/users/try-REBOL/data/ system/script/args...
Does REBOL have any built-in function that is similar to a C-style for loop, or will I need to implement this function myself?
The equivalent construct in a C-like language would look like this, but I'm not sure if it's possible to implement the same pattern in REBOL:
for (i = 0; i < 10; i++) {
    print(i);
}
Because of the rebol3 tag, I'll assume this question pertains to Rebol 3.
Proposed "CFOR" for Rebol 3
For Rebol 3, there is a proposal (which got quite a bit of support) for a "general loop" very much along the lines of a C-style for and therefore currently going under the name of cfor as well: see CureCode issue #884 for all the gory details.
This includes a much refined version of Ladislav's original implementation, the current (as of 2014-05-17) version I'll reproduce here (without the extensive inline comments discussing implementation aspects) for the sake of easy reference:
cfor: func [ ; Not this name
    "General loop based on an initial state, test, and per-loop change."
    init [block! object!] "Words & initial values as object spec (local)"
    test [block!] "Continue if condition is true"
    bump [block!] "Move to the next step in the loop"
    body [block!] "Block to evaluate each time"
    /local ret
] [
    if block? init [init: make object! init]
    test: bind/copy test init
    body: bind/copy body init
    bump: bind/copy bump init
    while test [set/any 'ret do body do bump get/any 'ret]
]
General problems with user-level control structure implementations in Rebol 3
One important general remark for all user-level implementations of control constructs in Rebol 3: there is no analogue of Rebol 2's [throw] attribute in R3 yet (see CureCode issue #539), so such user-written ("mezzanine", in Rebol lingo) control or loop functions have problems in general.
In particular, this CFOR would incorrectly capture return and exit. To illustrate, consider the following function:
foo: function [] [
    print "before"
    cfor [i: 1] [i < 10] [++ i] [
        print i
        if i > 2 [return true]
    ]
    print "after"
    return false
]
You'd (rightly) expect the return to actually return from foo. However, if you try the above, you'll find this expectation disappointed:
>> foo
before
1
2
3
after
== false
This remark of course applies to all the user-level implementations given as answers in this thread, until bug #539 is fixed.
There is an optimized cfor by Ladislav Mecir:
cfor: func [
    {General loop}
    [throw]
    init [block!]
    test [block!]
    inc [block!]
    body [block!]
] [
    use set-words init reduce [
        :do init
        :while test head insert tail copy body inc
    ]
]
The other control structure that most people would use in this particular case is repeat
repeat i 10 [print i]
which results in:
>> repeat i 10 [print i]
1
2
3
4
5
6
7
8
9
10
I generally do not use loop very often, but it can be used to similar effect:
>> i: 1
>> loop 10 [print ++ i]
1
2
3
4
5
6
7
8
9
10
Those are some useful control structures. I'm not sure if you were looking for cfor, but you got that answer from others.
I have implemented a function that works in the same way as a C for loop.
cfor: func [init condition update action] [
    do init
    while condition [
        do action
        do update
    ]
]
Here's an example usage of this function:
cfor [i: 0] [i < 10] [i: i + 1] [
    print i
]
For a simple initial value, upper limit, and step, the following works:
for i 0 10 2 [print i]
This is very close to a C for loop.
I'm studying VRML as a beginner, and I have a problem with TimeSensor that I need help with. This is my source code:
DEF time TimeSensor
{
  loop TRUE
  cycleInterval 2
}
DEF C11 Transform
{
  translation -3 0 0
  children
  [
    Shape
    {
      geometry Sphere
      {
        radius 0.5
      }
      appearance Appearance
      {
        material Material
        {
          diffuseColor 0 0 0
          specularColor .29 .3 .29
          shininess .08
          ambientIntensity 0
          transparency 0.0
        }
      }
    }
    DEF moveC11 PositionInterpolator
    {
      key [0 1]
      keyValue [-3 0 0, 3 3 0]
    }
  ]
}
ROUTE time.fraction_changed TO moveC11.set_fraction
ROUTE moveC11.value_changed TO C11.translation
When I view it in a browser, the sphere moves from coordinate -3 0 0 to 3 3 0 and repeats. I want it to move only once and stop at coordinate 3 3 0. How can I do that?
Thank you for helping me!
The VRML concept of TimeSensor works the other way around: rather than stopping an infinite loop, a reverse logic works:
Modify DEF time TimeSensor { loop FALSE } to avoid an uncontrollable infinite loop.
Send a set_startTime event with the current time to the TimeSensor.
The problem with this approach might be how to compute the absolute current time in seconds since 1970-01-01 00:00:00.
Fortunately, all sensors in VRML generate events that output a time value when they become active.
So basically all you have to do is ROUTE the event generated by the sensor when it becomes active to the TimeSensor's set_startTime eventIn.
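For illustration only, here is a minimal sketch of that wiring; it assumes a TouchSensor (named touch here, my own addition) placed among the children of C11, so the one-shot animation starts when the sphere is clicked:

DEF time TimeSensor
{
  loop FALSE        # run a single 2-second cycle per start event
  cycleInterval 2
}

# ... C11 Transform with the Shape and DEF moveC11 PositionInterpolator as in the question,
# with this TouchSensor added to its children:
DEF touch TouchSensor { }

# touchTime carries the absolute time of the click and starts the one-shot cycle;
# the interpolator then leaves the sphere at its last keyValue, 3 3 0.
ROUTE touch.touchTime TO time.set_startTime
ROUTE time.fraction_changed TO moveC11.set_fraction
ROUTE moveC11.value_changed TO C11.translation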