Why isn't the compiler able to make this Rust enum smaller? - optimization

Consider this silly enum:
enum Number {
    Rational {
        numerator: i32,
        denominator: std::num::NonZeroU32,
    },
    FixedPoint {
        whole: i16,
        fractional: u16,
    },
}
The data in the Rational variant takes up 8 bytes, and the data in the FixedPoint variant takes up 4 bytes. The Rational variant has a field which must be nonzero, so I would hope that the enum layout rules would use that as a discriminator, with zero indicating the presence of the FixedPoint variant.
However, this:
fn main() {
    println!("Number = {}", std::mem::size_of::<Number>());
}
Prints:
Number = 12
So, the enum gets space for an explicit discriminator, rather than exploiting the presence of the nonzero field.
Why isn't the compiler able to make this enum smaller?

Although simple cases like Option<&T> can be handled without reserving space for the tag, the layout calculator in rustc is still not clever enough to optimize the size of enums with multiple non-empty variants.
This is issue #46213 on GitHub.
The case you ask about is pretty clear-cut, but there are similar cases where an enum looks like it should be optimized, but in fact can't be because the optimization would preclude taking internal references; for example, see Why does Rust use two bytes to represent this enum when only one is necessary?
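To see the contrast concretely, here's a quick check (a sketch; sizes assume a typical 64-bit target):

use std::num::NonZeroU32;

enum Number {
    Rational { numerator: i32, denominator: NonZeroU32 },
    FixedPoint { whole: i16, fractional: u16 },
}

fn main() {
    // Option<&T> reuses the null pointer as its niche, so no tag is added.
    assert_eq!(std::mem::size_of::<Option<&u8>>(), std::mem::size_of::<&u8>());
    // Option<NonZeroU32> likewise reuses the zero value for None.
    assert_eq!(std::mem::size_of::<Option<NonZeroU32>>(), 4);
    // But the enum with two data-carrying variants still gets an explicit tag.
    println!("Number = {}", std::mem::size_of::<Number>()); // prints 12
}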

Related

Kotlin - Unique Characters in String

My function should return a boolean indicating whether the input String contains all unique characters.
e.g.
"abc" returns true, "abca" returns false
fun uniqueCharacters(s: String): Boolean = s.groupBy { it }
    .values
    .stream()
    .allMatch { it.size == 1 }
Is there a more efficient way of solving this problem? If I were solving this in a non-functional way, I would store all the characters in a Map with the value being the count of that character so far; if any count exceeded one, I would break and return false.
Not sure how best to translate this into a functional Kotlin piece of code.
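A sketch of the imperative, map-based approach described above, bailing out at the first repeated character:

fun uniqueCharactersImperative(s: String): Boolean {
    // Count occurrences as we go; stop as soon as any character repeats.
    val counts = HashMap<Char, Int>()
    for (c in s) {
        val newCount = (counts[c] ?: 0) + 1
        if (newCount > 1) return false
        counts[c] = newCount
    }
    return true
}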
You can use the all function with Set::add as its predicate:
fun main() {
    println("abc".allUnique())  // true
    println("abca".allUnique()) // false
}

fun String.allUnique(): Boolean = all(hashSetOf<Char>()::add)
It's lazy: the function returns as soon as it finds the first duplicate, because Set::add returns false when the element is already present.
Perhaps the simplest way is to create a Set of the characters, and check its size:
fun String.isUniqueCharacters() = toSet().size == length
(Since this function depends only on the contents of the string, it seems logical to make it an extension function; that makes it easier to call, too.)
As for performance, it's effectively creating a hash table of the characters, and then checking number of entries, which is the number of unique characters.  So that's not trivial.  But I can't think of a way which is significantly better.
Other approaches might include:
Copying the characters to an array, sorting it in-place, and then scanning it comparing adjacent elements.  That would save some memory allocation, but needs more processing.
As above, but using a hand-coded sort algorithm that spots duplicates and returns early. That would reduce the processing in cases where there are duplicates, but at the cost of much more coding.  (And the hand-coded sort would probably be slower than a library sort when there aren't duplicates.)
Creating an array of 65536 booleans (one for every possible Char value*), all initialised to false, and then scanning through each character in the string, checking the corresponding array value (returning false if it was already set, else setting it).  That would probably be the fastest approach, but takes a lot of memory.  (And the cost of initialising the array could be significant.)  A sketch of this approach appears at the end of this answer.
As always, it comes down to trading off memory, processing, and coding effort.
(* There are of course many more characters than this in Unicode, but Kotlin uses UTF-16 internally, so 65536 is all we need.)
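For completeness, here's a rough sketch of that boolean-array approach (assuming we only care about individual UTF-16 code units, since that's what a Kotlin Char is):

fun String.isUniqueCharactersFast(): Boolean {
    val seen = BooleanArray(65536)       // one flag per possible Char value
    for (c in this) {
        if (seen[c.code]) return false   // duplicate found, stop early
        seen[c.code] = true
    }
    return true
}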
Yet another approach
fun String.uniqueCharacters(): Boolean = this.toCharArray().distinct().size == this.length

Kotlin: Why is Sequence more performant in this example?

Currently, I am looking into Kotlin and have a question about Sequences vs. Collections.
I read a blog post about this topic, and there you can find these code snippets:
List implementation:
val list = generateSequence(1) { it + 1 }
    .take(50_000_000)
    .toList()

measure {
    list
        .filter { it % 3 == 0 }
        .average()
}
// 8644 ms
Sequence implementation:
val sequence = generateSequence(1) { it + 1 }
    .take(50_000_000)

measure {
    sequence
        .filter { it % 3 == 0 }
        .average()
}
// 822 ms
The point here is that the Sequence implementation is about 10x faster.
However, I do not really understand WHY that is. I know that with a Sequence you get "lazy evaluation", but I cannot find any reason why that helps reduce the processing in this example.
However, in the following example I do know why a Sequence is faster:
val result = sequenceOf("a", "b", "c")
    .map {
        println("map: $it")
        it.toUpperCase()
    }
    .any {
        println("any: $it")
        it.startsWith("B")
    }
Because with a Sequence you process the data "vertically": as soon as an element maps to a value starting with "B", you don't have to map the rest of the elements. It makes sense here.
So, why is it also faster in the first example?
Let's look at what those two implementations are actually doing:
The List implementation first creates a List in memory with 50 million elements.  This will take a bare minimum of 200MB, since an integer takes 4 bytes.
(In fact, it's probably far more than that.  As Alexey Romanov pointed out, since it's a generic List implementation and not an IntList, it won't be storing the integers directly, but will be ‘boxing’ them — storing references to Int objects.  On the JVM, each reference could be 8 or 16 bytes, and each Int could take 16, giving 1–2GB.  Also, depending how the List gets created, it might start with a small array and keep creating larger and larger ones as the list grows, copying all the values across each time, using more memory still.)
Then it has to read all the values back from the list, filter them, and create another list in memory.
Finally, it has to read all those values back in again, to calculate the average.
The Sequence implementation, on the other hand, doesn't have to store anything!  It simply generates the values in order, and as it does each one it checks whether it's divisible by 3 and if so includes it in the average.
(That's pretty much how you'd do it if you were implementing it ‘by hand’.)
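Roughly, the Sequence version boils down to something like this hand-written loop (a sketch: no intermediate lists, just a running sum and count):

fun averageOfMultiplesOf3(limit: Int): Double {
    var sum = 0L
    var count = 0L
    for (i in 1..limit) {
        if (i % 3 == 0) {   // the filter step
            sum += i        // feed the value straight into the average
            count++
        }
    }
    return if (count == 0L) Double.NaN else sum.toDouble() / count
}

fun main() {
    println(averageOfMultiplesOf3(50_000_000))
}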
You can see that in addition to the divisibility checking and average calculation, the List implementation is doing a massive amount of memory access, which will take a lot of time.  That's the main reason it's far slower than the Sequence version, which doesn't!
Seeing this, you might ask why we don't use Sequences everywhere…  But this is a fairly extreme example.  Setting up and then iterating the Sequence has some overhead of its own, and for smallish lists that can outweigh the memory overhead.  So Sequences only have a clear advantage in cases when the lists are very large, are processed strictly in order, there are several intermediate steps, and/or many items are filtered out along the way (especially if the Sequence is infinite!).
In my experience, those conditions don't occur very often.  But this question shows how important it is to recognise them when they do!
Leveraging lazy evaluation allows avoiding the creation of intermediate objects that are irrelevant from the point of view of the end goal.
Also, the benchmarking method used in the mentioned article is not super accurate. Try to repeat the experiment with JMH.
Initial code produces a list containing 50_000_000 objects:
val list = generateSequence(1) { it + 1 }
    .take(50_000_000)
    .toList()
then iterates through it and creates another list containing a subset of its elements:
.filter { it % 3 == 0 }
... and then proceeds with calculating the average:
.average()
Using sequences allows you to avoid all those intermediate steps. The code below doesn't produce 50_000_000 elements; it's just a representation of the 1..50_000_000 sequence:
val sequence = generateSequence(1) { it + 1 }
    .take(50_000_000)
Adding a filter to it doesn't trigger the calculation either; it just derives a new sequence from the existing one (3, 6, 9, ...):
.filter { it % 3 == 0 }
and eventually, a terminal operation is called that triggers the evaluation of the sequence and the actual calculation:
.average()
Some relevant reading:
Kotlin: Beware of Java Stream API Habits
Kotlin Collections API Performance Antipatterns

Does Perl 6 have an infinite Int?

I had a task where I wanted to find the closest string to a target (so, edit distance) without generating them all at the same time. I figured I'd use the high water mark technique (low, I guess) while initializing the closest edit distance to Inf so that any edit distance is closer:
use Text::Levenshtein;

my @strings = < Amelia Fred Barney Gilligan >;

for @strings {
    put "$_ is closest so far: { longest( 'Camelia', $_ ) }";
}

sub longest ( Str:D $target, Str:D $string ) {
    state Int $closest-so-far = Inf;
    state Str:D $closest-string = '';

    if distance( $target, $string ) < $closest-so-far {
        $closest-so-far = $string.chars;
        $closest-string = $string;
        return True;
    }

    return False;
}
However, Inf is a Num so I can't do that:
Type check failed in assignment to $closest-so-far; expected Int but got Num (Inf)
I could make the constraint a Num and coerce to that:
state Num $closest-so-far = Inf;
...
$closest-so-far = $string.chars.Num;
However, this seems quite unnatural. And, since Num and Int aren't related, I can't have a constraint like Int(Num). I only really care about this for the first value. It's easy to set that to something sufficiently high (such as the length of the longest string), but I wanted something more pure.
Is there something I'm missing? I would have thought that any numbery thing could have a special value that was greater (or less than) all the other values. Polymorphism and all that.
{new intro that's hopefully better than the unhelpful/misleading original one}
@CarlMäsak, in a comment he wrote below this answer after my first version of it:
Last time I talked to Larry about this {in 2014}, his rationale seemed to be that ... Inf should work for all of Int, Num and Str
(The first version of my answer began with a "recollection" that I've concluded was at least unhelpful and plausibly an entirely false memory.)
In my research in response to Carl's comment, I did find one related gem in #perl6-dev in 2016 when Larry wrote:
then our policy could be, if you want an Int that supports ±Inf and NaN, use Rat instead
in other words, don't make Rat consistent with Int, make it consistent with Num
Larry wrote this post 6.c. I don't recall seeing anything like it discussed for 6.d.
{and now back to the rest of my first answer}
Num in P6 implements the IEEE 754 floating point number type. Per the IEEE spec this type must support several concrete values that are reserved to stand in for abstract concepts, including the concept of positive infinity. P6 binds the corresponding concrete value to the term Inf.
Given that this concrete value denoting infinity already existed, it became a language wide general purpose concrete value denoting infinity for cases that don't involve floating point numbers such as conveying infinity in string and list functions.
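For instance:

say Inf.WHAT;        # (Num) -- the IEEE 754 floating point infinity
say 5 < Inf;         # True
say (1 .. Inf)[^5];  # (1 2 3 4 5) -- Inf as a general "no upper bound" marker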
The solution to your problem that I propose below is to use a where clause via a subset.
A where clause allows one to specify run-time assignment/binding "typechecks". I quote "typecheck" because it's the most powerful form of check possible -- it's computationally universal and literally checks the actual run-time value (rather than a statically typed view of what that value can be). This makes such checks slower, and run-time rather than compile-time, but it also makes them far more powerful (not to mention far easier to express) than even dependent types, a relatively cutting-edge feature that fans of advanced statically type-checked languages tend to claim as only available in their own world1, and which are intended to "prevent bugs by allowing extremely expressive types" (but good luck figuring out how to express them... ;)).
A subset declaration can include a where clause. This allows you to name the check and use it as a named type constraint.
So, you can use these two features to get what you want:
subset Int-or-Inf where Int:D | Inf;
Now just use that subset as a type:
my Int-or-Inf $foo; # ($foo contains `Int-or-Inf` type object)
$foo = 99999999999; # works
$foo = Inf; # works
$foo = Int-or-Inf; # works
$foo = Int; # typecheck failure
$foo = 'a'; # typecheck failure
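Applied back to the original program, a sketch might look something like this (using .chars in place of distance() so the snippet stands alone):

subset Int-or-Inf where Int:D | Inf;

sub closest-check ( Str:D $string ) {
    state Int-or-Inf $closest-so-far = Inf;   # starts at Inf, later holds an Int
    if $string.chars < $closest-so-far {
        $closest-so-far = $string.chars;
        return True;
    }
    return False;
}

say closest-check('Gilligan');   # True
say closest-check('Fred');       # True
say closest-check('Barney');     # False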
1. See Does Perl 6 support dependent types? and it seems the rough consensus is no.

How do I access an integer array within a struct/class from in-line assembly (blackfin dialect) using gcc?

Not very familiar with in-line assembly to begin with, and much less with that of the blackfin processor. I am in the process of migrating a legacy C application over to C++, and ran into a problem this morning regarding the following routine:
//
void clear_buffer ( short * buffer, int len ) {
    __asm__ (
        "/* clear_buffer */\n\t"
        "LSETUP (1f, 1f) LC0=%1;\n"
        "1:\n\t"
        "W [%0++] = %2;"
        :: "a" ( buffer ), "a" ( len ), "d" ( 0 )
        : "memory", "LC0", "LT0", "LB0"
    );
}
I have a class that contains an array of shorts that is used for audio processing:
class AudProc
{
    enum { buffer_size = 512 };
    short M_samples[ buffer_size * 2 ];
    // remaining part of class omitted for brevity
};
Within the AudProc class I have a method that calls clear_buffer, passing it the samples array:
clear_buffer ( M_samples, sizeof ( M_samples ) / 2 );
This generates a "Bus Error" and aborts the application.
I have tried making the array public, and that produces the same result. I have also tried making it static; that allows the call to go through without error, but no longer allows for multiple instances of my class as each needs its own buffer to work with. Now, my first thought is, it has something to do with where the buffer is in memory, or from where it is being accessed. Does something need to be changed in the in-line assembly to make this work, or in the way it is being called?
I thought this was similar to what I was trying to accomplish, but it uses a different dialect of asm, and I can't figure out whether it is the same problem I am experiencing or not:
GCC extended asm, struct element offset encoding
Anyone know why this is occurring and how to correct it?
Does anyone know where there is helpful documentation regarding the blackfin asm instruction set? I've tried looking on the ADSP site, but to no avail.
I would suspect that you could define your clear_buffer as
inline void clear_buffer (short * buffer, int len) {
    memset (buffer, 0, sizeof(short)*len);   /* memset is declared in <string.h> */
}
and GCC is probably able to optimize that cleverly (when invoked with -O2 or -O3), because GCC knows about memset.
To understand assembly code, I suggest running gcc -S -O -fverbose-asm on some small C file, then to look inside the produced .s file.
I'll hazard a guess, because I don't know Blackfin assembler:
That LC0 sounds like a "loop counter", and LSETUP looks like a macro/instruction which, well, sets up a loop between two labels with a certain loop counter.
The "%0" operand is apparently the address to write to, and we can safely guess it's incremented in the loop; in other words, it's both an input and an output operand and should be described as such.
Thus, I suggest describing it as an input-output operand, using the "+" constraint modifier, as follows:
void clear_buffer ( short * buffer, int len ) {
    __asm__ (
        "/* clear_buffer */\n\t"
        "LSETUP (1f, 1f) LC0=%1;\n"
        "1:\n\t"
        "W [%0++] = %2;"
        : "+a" ( buffer )
        : "a" ( len ), "d" ( 0 )
        : "memory", "LC0", "LT0", "LB0"
    );
}
This is, of course, just a hypothesis, but you could disassemble the code and check if by any chance GCC allocated the same register for "%0" and "%2".
PS. Actually, only "+a" should be enough, early-clobber is irrelevant.
For anyone else who runs into a similar circumstance, the problem here was not with the in-line assembly, nor with the way it was being called: it was with the classes / structs in the program. The class that I believed to be the offender was not the problem - there was another class that held an instance of it, and due to other members of that outer class, the inner one was not aligned on a word boundary. This was causing the "Bus Error" that I was experiencing. I had not come across this before because the classes were not declared with __attribute__((packed)) in other code, but they are in my implementation.
Giving Type Attributes - Using the GNU Compiler Collection (GCC) a read was what actually sparked the answer for me. Two particular attributes that affect memory alignment (and, thus, in-line assembly such as I am using) are packed and aligned.
As taken from the aforementioned link:
aligned (alignment)
This attribute specifies a minimum alignment (in bytes) for variables of the specified type. For example, the declarations:
struct S { short f[3]; } __attribute__ ((aligned (8)));
typedef int more_aligned_int __attribute__ ((aligned (8)));
force the compiler to ensure (as far as it can) that each variable whose type is struct S or more_aligned_int is allocated and aligned at least on a 8-byte boundary. On a SPARC, having all variables of type struct S aligned to 8-byte boundaries allows the compiler to use the ldd and std (doubleword load and store) instructions when copying one variable of type struct S to another, thus improving run-time efficiency.
Note that the alignment of any given struct or union type is required by the ISO C standard to be at least a perfect multiple of the lowest common multiple of the alignments of all of the members of the struct or union in question. This means that you can effectively adjust the alignment of a struct or union type by attaching an aligned attribute to any one of the members of such a type, but the notation illustrated in the example above is a more obvious, intuitive, and readable way to request the compiler to adjust the alignment of an entire struct or union type.
As in the preceding example, you can explicitly specify the alignment (in bytes) that you wish the compiler to use for a given struct or union type. Alternatively, you can leave out the alignment factor and just ask the compiler to align a type to the maximum useful alignment for the target machine you are compiling for. For example, you could write:
struct S { short f[3]; } __attribute__ ((aligned));
Whenever you leave out the alignment factor in an aligned attribute specification, the compiler automatically sets the alignment for the type to the largest alignment that is ever used for any data type on the target machine you are compiling for. Doing this can often make copy operations more efficient, because the compiler can use whatever instructions copy the biggest chunks of memory when performing copies to or from the variables that have types that you have aligned this way.
In the example above, if the size of each short is 2 bytes, then the size of the entire struct S type is 6 bytes. The smallest power of two that is greater than or equal to that is 8, so the compiler sets the alignment for the entire struct S type to 8 bytes.
Note that although you can ask the compiler to select a time-efficient alignment for a given type and then declare only individual stand-alone objects of that type, the compiler's ability to select a time-efficient alignment is primarily useful only when you plan to create arrays of variables having the relevant (efficiently aligned) type. If you declare or use arrays of variables of an efficiently-aligned type, then it is likely that your program also does pointer arithmetic (or subscripting, which amounts to the same thing) on pointers to the relevant type, and the code that the compiler generates for these pointer arithmetic operations is often more efficient for efficiently-aligned types than for other types.
The aligned attribute can only increase the alignment; but you can decrease it by specifying packed as well. See below.
Note that the effectiveness of aligned attributes may be limited by inherent limitations in your linker. On many systems, the linker is only able to arrange for variables to be aligned up to a certain maximum alignment. (For some linkers, the maximum supported alignment may be very very small.) If your linker is only able to align variables up to a maximum of 8-byte alignment, then specifying aligned(16) in an __attribute__ still only provides you with 8-byte alignment. See your linker documentation for further information.
packed
This attribute, attached to struct or union type definition, specifies that each member (other than zero-width bit-fields) of the structure or union is placed to minimize the memory required. When attached to an enum definition, it indicates that the smallest integral type should be used.
Specifying this attribute for struct and union types is equivalent to specifying the packed attribute on each of the structure or union members. Specifying the -fshort-enums flag on the command line is equivalent to specifying the packed attribute on all enum definitions.
In the following example struct my_packed_struct's members are packed closely together, but the internal layout of its s member is not packed—to do that, struct my_unpacked_struct needs to be packed too.
struct my_unpacked_struct
{
    char c;
    int i;
};

struct __attribute__ ((__packed__)) my_packed_struct
{
    char c;
    int i;
    struct my_unpacked_struct s;
};
You may only specify this attribute on the definition of an enum, struct or union, not on a typedef that does not also define the enumerated type, structure or union.
The problem which I was experiencing was specifically due to the use of packed. I attempted to simply add the aligned attribute to the structs and classes, but the error persisted. Only removing the packed attribute resolved the problem. For now, I am leaving the aligned attribute on them and testing to see if I find any improvements in the efficiency of the code as mentioned above, simply due to their being aligned on word boundaries. The application makes use of arrays of these structures, so perhaps there will be better performance, but only profiling the code will say for certain.
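A minimal illustration of the effect (with hypothetical types, not the actual application code): packing an outer struct can leave an inner buffer of shorts at an odd offset, which is exactly the kind of misalignment that produces a bus error on targets with strict alignment requirements.

#include <stdio.h>
#include <stddef.h>

/* A member that wants 2-byte alignment, like a buffer of shorts. */
struct inner {
    short samples[4];
};

/* Packing the outer struct removes padding, so 'audio' can land at an odd offset. */
struct __attribute__((packed)) outer {
    char tag;
    struct inner audio;
};

int main(void) {
    printf("offsetof(outer, audio) = %zu\n", offsetof(struct outer, audio)); /* 1: misaligned */
    printf("_Alignof(struct inner) = %zu\n", _Alignof(struct inner));        /* typically 2 */
    return 0;
}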

How to map a float to an enum without a long stack of if statements

Given a float, how to map it to an enum like
{FAIL, POOR, OK, GOOD, EXCELLENT, PERFECT}
if the divisions aren't even.
0.0-0.4 is FAIL
0.4-0.6 is POOR
...
0.8-0.999.. is EXCELLENT
1.0 is PERFECT
The float is a rating value calculated from all the played levels in a game. It ranges from 0..1, both inclusive. There are normally no more than about 10 divisions needed, but the spacings are subject to tuning during development.
I'm currently using a stack of if..else statements. Is that the right way to do it? It seems a bit brittle.
Use an array of structs - either statically allocated or dynamic - and then a simple routine to search it: if it's small, just iterate; if it's large, you can do a binary search.
As you know the minimum (0.0) and maximum (1.0), you only need to store the upper bound of each range and its enum value. E.g.:
typedef enum {FAIL, POOR, OK, GOOD, EXCELLENT, PERFECT} Rating;

typedef struct
{
    float upperBound;
    Rating score;
} RatingDivision;

static RatingDivision Divisions[] =
{
    { 0.4, FAIL },
    { 0.6, POOR },
    ...
    { 0.999, EXCELLENT },
    { 1.0, PERFECT }
};
Now sizeof(Divisions)/sizeof(RatingDivision) will tell you the number of entries (needed for a binary search). Or just iterate until the value you're looking for is <= Divisions[i].upperBound and return Divisions[i].score; if upperBound reaches 1.0 with no match, handle the error.
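A sketch of that linear lookup, with made-up values filled in for the elided middle entries (the real thresholds are whatever tuning settles on):

#include <stdio.h>

typedef enum { FAIL, POOR, OK, GOOD, EXCELLENT, PERFECT } Rating;

typedef struct {
    float  upperBound;
    Rating score;
} RatingDivision;

/* Example thresholds only; the middle values are placeholders. */
static const RatingDivision Divisions[] = {
    { 0.4f,   FAIL      },
    { 0.6f,   POOR      },
    { 0.7f,   OK        },
    { 0.8f,   GOOD      },
    { 0.999f, EXCELLENT },
    { 1.0f,   PERFECT   },
};

static Rating rate(float value)
{
    int count = (int)(sizeof(Divisions) / sizeof(Divisions[0]));
    for (int i = 0; i < count; ++i) {
        if (value <= Divisions[i].upperBound)
            return Divisions[i].score;
    }
    return PERFECT; /* value above every threshold; or handle as an error */
}

int main(void)
{
    printf("%d %d %d\n", rate(0.25f), rate(0.85f), rate(1.0f)); /* 0 4 5 */
    return 0;
}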
You could use parallel arrays: use an array of threshold float values, and an array of enum values of the same size; then you could use a single short loop, checking each value in the float array and returning the enum value once the threshold is crossed.
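That could look something like this (a sketch, reusing the Rating enum and the example thresholds from the snippet above):

static const float  thresholds[] = { 0.4f, 0.6f, 0.7f, 0.8f, 0.999f, 1.0f };
static const Rating ratings[]    = { FAIL, POOR, OK, GOOD, EXCELLENT, PERFECT };

static Rating rateParallel(float value)
{
    int n = (int)(sizeof(thresholds) / sizeof(thresholds[0]));
    for (int i = 0; i < n; ++i) {
        if (value <= thresholds[i])
            return ratings[i];   /* first bucket whose upper bound covers the value */
    }
    return PERFECT;              /* value above every threshold */
}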