Is it possible to implement direct jumps (i.e., GOTO) in Rust?

I'm exploring optimizations for the HVM, a parallel functional runtime written in Rust. It works by spawning several threads and keeping each one busy in a main work loop where tasks are popped and executed. It looks more or less like this:
// Modes
enum Mode {
    Visiting, // visits a node
    Reducing, // applies a rewrite rule
    Fetching, // pops a local task
    Stealing, // steals a global task
}

// Main loop
'main: loop {
    match mode {
        Mode::Visiting => { ... do stuff ... }
        Mode::Reducing => { ... do stuff ... }
        Mode::Fetching => { ... do stuff ... }
        Mode::Stealing => { ... do stuff ... }
    }
}

// Change mode with:
mode = Mode::Visiting;
continue 'main;
I expected Rust's compiler to optimize that into direct jumps, but, to my surprise, it doesn't. In an attempt to improve the situation, I replaced the Mode enum with two booleans and adjusted my loop as follows:
// Modes
let mut a : bool;
let mut b : bool;

// Main loop
'main: loop {
    if a {
        if b {
            ... do stuff ...
        } else {
            ... do stuff ...
        }
    } else {
        if b {
            ... do stuff ...
        } else {
            ... do stuff ...
        }
    }
}

// Change mode with:
a = true;
b = true;
continue 'main;
To my surprise, this small change resulted in a ~15% improvement in the overall performance of HVM's runtime! This is still not ideal, though. If I were in C, I'd just have labels and jump via GOTO. My question is: can I change my Rust code in a way that will let the compiler optimize it to the expected jumping code? I.e., something like this?
// Main loop:
loop {
    'visiting { ... do stuff ... }
    'reducing { ... do stuff ... }
    'fetching { ... do stuff ... }
    'stealing { ... do stuff ... }
}

// Change mode with:
goto 'visiting
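For reference, here is a minimal, self-contained sketch of the loop/match state machine described above. The arm bodies and the extra Done state are hypothetical placeholders added so the example terminates; on many targets the optimizer can lower such a match to a jump table, though that is not guaranteed:

```rust
// State machine driven by a single `loop { match … }`.
// Each arm does its (placeholder) work, sets the next mode,
// and falls through to the next iteration of the loop.
enum Mode {
    Visiting,
    Reducing,
    Fetching,
    Stealing,
    Done, // added so this sketch can terminate
}

fn run() -> u32 {
    let mut mode = Mode::Visiting;
    let mut steps = 0u32;
    loop {
        match mode {
            Mode::Visiting => { steps += 1; mode = Mode::Reducing; }
            Mode::Reducing => { steps += 1; mode = Mode::Fetching; }
            Mode::Fetching => { steps += 1; mode = Mode::Stealing; }
            Mode::Stealing => { steps += 1; mode = Mode::Done; }
            Mode::Done => break,
        }
    }
    steps
}

fn main() {
    println!("steps = {}", run());
}
```

Whether this compiles to threaded jumps depends on the backend; inspecting the emitted assembly (e.g. with `cargo asm` or the compiler's `--emit asm`) is the only way to be sure for a given target.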

listop operator causing infinite recursion, any way to fix?

I'm looking to possibly help update the File::HomeDir module, which was never finished. While inspecting it, I noticed that stubbed-out methods were causing infinite loops:
In the File::HomeDir class:
unit class File::HomeDir;

use File::HomeDir::Win32;
use File::HomeDir::MacOSX;
use File::HomeDir::Unix;

my File::HomeDir $singleton;

method new
{
    return $singleton if $singleton.defined;

    if $*DISTRO.is-win {
        $singleton = self.bless does File::HomeDir::Win32;
    } elsif $*DISTRO.name.starts-with('macos') {
        $singleton = self.bless does File::HomeDir::MacOSX;
    } else {
        $singleton = self.bless does File::HomeDir::Unix;
    }
    return $singleton;
}

method my-home {
    return File::HomeDir.new.my-home;
}

method my-desktop {
    return File::HomeDir.new.my-desktop;
}
<snip>
In the File::HomeDir::MacOSX module:
use v6;

unit role File::HomeDir::MacOSX;

method my-home {
    # Try HOME on every platform first, because even on Windows, some
    # unix-style utilities rely on the ability to overload HOME.
    return %*ENV<HOME> if %*ENV<HOME>.defined;
    return;
}

method my-desktop {
    !!!
}
<snip>
With this code, calling say File::HomeDir.my-desktop; results in an infinite loop.
This module was first written about 5 1/2 years ago, and I'm assuming it worked at the time. But it appears now that if a role method contains a listop operator, the parent class's method gets called, which calls the role method, which calls the parent class again, and so on.
I'd do it like this, staying close to the original design:
role File::HomeDir::Win32 {
    method my-home()    { dd }
    method my-desktop() { dd }
}

role File::HomeDir::MacOSX {
    method my-home()    { dd }
    method my-desktop() { dd }
}

role File::HomeDir::Unix {
    method my-home()    { dd }
    method my-desktop() { dd }
}

class File::HomeDir {
    my $singleton;

    # Return the singleton, making one if there isn't one already
    sub singleton() {
        without $singleton {
            $_ = File::HomeDir but $*DISTRO.is-win
              ?? File::HomeDir::Win32
              !! $*DISTRO.name.starts-with('macos')
                ?? File::HomeDir::MacOSX
                !! File::HomeDir::Unix;
        }
        $singleton
    }

    method my-home()    { singleton.my-home }
    method my-desktop() { singleton.my-desktop }
}

File::HomeDir.my-home;
File::HomeDir.my-desktop;

Failure failing in CATCH

I'm probably overlooking something simple, but I do not expect the below code to fail. It is behaving as if I wrote die instead of fail in the catch block.
The Failure does not get properly handled and the code dies.
sub foo()
{
    try {
        say 1 / 0;
        CATCH { default { fail "FAIL" } }
    }
    return True;
}

with foo() {
    say "done";
}
else {
    say "handled {.exception.message}"
}
Output:
FAIL
in block at d:\tmp\x.pl line 5
in any at d:\tmp\x.pl line 5
in sub foo at d:\tmp\x.pl line 4
in block <unit> at d:\tmp\x.pl line 11
To bring home to later readers the full force of what Yoda said in their comment, the simplest solution is to unlearn the notion that you have to try in order to CATCH. You don't:
sub foo()
{
    say 1 / 0;
    CATCH { default { fail "FAIL" } }
    return True;
}

with foo() {
    say "done";
}
else {
    say "handled {.exception.message}"
}
correctly displays:
handled FAIL
According to the Failure documentation this seems to be the defined behavior.
Sink (void) context causes a Failure to throw, i.e. turn into a normal exception. The use fatal pragma causes this to happen in all contexts within the pragma's scope. Inside try blocks, use fatal is automatically set, and you can disable it with no fatal.
You can try to use the no fatal pragma:
sub foo() {
    try {
        no fatal;
        say 1 / 0;
        CATCH { default { fail "FAIL" } }
    }
}

unless foo() {
    say "handled"
}

Run multiple tasks sequentially after emitter response

I am trying to create a communication controller for a hardware device that always responds with some delay. If I only requested one value, I could create a Single<ByteArray> and do the final conversion in .subscribe { ... }.
But when I request more than one value I need to make sure that the second request happens after the first request has been fully closed.
Is that something that I can do with RxJava, e.g. defer? Or should I create a queue on my own and handle the sequence of events manually with my queue?
We're using RxJava anyway (and I'm obviously new to it) and of course it would be nice to use it for this purpose as well. But is that a good use-case?
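As an aside, the manual-queue alternative mentioned in the question can be sketched without RxJava at all. The following is a hypothetical plain-Java version in which a single-threaded executor serializes the write/response cycles; the class name and the echoing transact method are illustrative stand-ins for the real hardware I/O:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// A single-threaded executor acts as the request queue: each submitted
// write/response cycle completes before the next one starts, so callers
// from anywhere in the app cannot interleave their transactions.
public class SequentialController {
    private final ExecutorService queue = Executors.newSingleThreadExecutor();

    // Stand-in for the blocking write-then-read cycle against the hardware.
    private byte[] transact(byte[] request) {
        return request; // echo back, for demonstration only
    }

    public Future<byte[]> submit(byte[] request) {
        return queue.submit(() -> transact(request));
    }

    public void shutdown() {
        queue.shutdown();
    }

    public static void main(String[] args) throws Exception {
        SequentialController c = new SequentialController();
        Future<byte[]> first = c.submit(new byte[]{1});
        Future<byte[]> second = c.submit(new byte[]{2});
        System.out.println(first.get()[0] + "," + second.get()[0]);
        c.shutdown();
    }
}
```

Whether this or an Rx-based pipeline is preferable depends on how the rest of the codebase consumes the responses; the executor version is simpler but gives up Rx's composition operators.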
Edit:
Code that I could use, but that wouldn't be generic enough:
hardware.write(byteArray)
    .subscribe(
        {
            hardware.receiveResult().take(1)
                .doFinally { /* dispose code */ }
                .subscribe(
                    { /* onSuccess */ },
                    { /* onError */ }
                )
                .let { disposable = it }
        },
        { /* onError */ }
    )
All the code for the next request in the queue could be put in the inner onSuccess, and the next one in that onSuccess, and so on. That would execute sequentially, but it wouldn't be generic enough: any other class that makes a request would end up spoiling my sequence.
I am searching for a solution that builds up the queue automatically in the hardware communication controller class.
A long time has passed, the project has developed, and we found a solution a while ago. Now I want to share it here:
fun writeSequential(data1: ByteArray, data2: ByteArray) {
    disposable = hardwareWrite(data1)
        .zipWith(hardwareWrite(data2))
        .subscribe(
            {
                /* handle results:
                   it.first will be the first response,
                   it.second the second. */
            },
            { /* handle error */ }
        )
    compositeDisposable.add(disposable)
}

fun hardwareWrite(data: ByteArray): Single<ByteArray> {
    return Single.create { emitter ->
        hardware.write(data)
            .subscribe(
                { hardwareRead(emitter) },
                { emitter.onError(it) }
            )
    }
}

fun hardwareRead(emitter: SingleEmitter<ByteArray>): Disposable {
    return hardware.receiveResult()
        .take(1)
        .timeout( /* your timeout */ )
        .single( /* default value */ )
        .doFinally { /* cleanup queue */ }
        .subscribe(
            { emitter.onSuccess(it) },
            { emitter.onError(it) }
        )
}
The solution is not perfect, and now I see that the middle part doesn't do anything with the disposable of the inner subscription.
Also, in our example it's a bit more complicated, as hardwareWrite doesn't fire immediately but gets queued. This way we ensure that the hardware is accessed sequentially and the results don't get mixed up.
Still, I hope this might help someone who is looking for a solution and is maybe new to Kotlin and/or RxJava (like I was at the beginning of the project).

How to handle shared resources with pthread mutex

I have a question regarding mutex and pthreads.
Suppose there is a shared flag, let's call it F1, and there are multiple threads. Only one thread (T1) can raise/clear the flag, and all other threads (T2..Tn) only read or poll its status.
Is it enough if T1 uses mutex_lock/mutex_unlock when F1 is set to a new value? Or should all other threads also use mutex_lock/mutex_unlock even though they only read the status of F1?
Example 1:
T1()
{
    while(Running)
    {
        pthread_mutex_lock(&lock);
        F1 = true;
        pthread_mutex_unlock(&lock);
    }
}

T2()
{
    while(Running)
    {
        if(F1) {
            /* Do something */
        }
    }
}
Example 2:
T1()
{
    while(Running)
    {
        pthread_mutex_lock(&lock);
        F1 = true;
        pthread_mutex_unlock(&lock);
    }
}

T2()
{
    while(Running)
    {
        pthread_mutex_lock(&lock);
        if(F1) {
            /* Do something */
        }
        pthread_mutex_unlock(&lock);
    }
}
You can use the single-writer-multiple-readers idiom.
Reading:
    pthread_rwlock_rdlock(&rwlock);
Writing:
    pthread_rwlock_wrlock(&rwlock);
If your use case is as simple as the example you posted, you might consider a lock-free version involving atomic flags.
Under the pthreads model, the readers do need to perform a synchronisation operation as well. This can be a pthread_mutex_lock() / pthread_mutex_unlock() pair in both the readers and writer as you've described, or alternatively metalfox's suggestion of a reader-writer lock.

How to check if no condition was met?

<Check Object "If there is not an object at (x,y)">
{
    <Create Instance "create instance of object at (x,y)">
}
...
Using Game Maker events, I created a repeated process like the one above, checking one space and then the next, and filling all the empty ones. The code works fine, but I want to add a message at the end ONLY IF NONE OF THE SPACES ARE EMPTY. I tried using an ELSE at the end, but that only applies to the very last if.
Sorry for the bad wording; I can elaborate if needed.
What you want is an if / else-if / else structure. You can get it by nesting conditions:
if (...) {
    ...
} else {
    if (...) {
        ...
    } else {
        if (...) {
            ...
        } else {
            ...
        }
    }
}
Though your code would be easier to read were it to use GML rather than the visual language, as in GML you can write this directly:
if (...) {
    ...
} else if (...) {
    ...
} else if (...) {
    ...
} else {
    ...
}