Is bool guaranteed to be 1 byte?

The Rust documentation is vague on bool's size.
Is it guaranteed to be 1 byte, or is it unspecified like in C++?
fn main() {
    use std::mem;
    println!("{}", mem::size_of::<bool>()); // always 1?
}

Rust emits i1 to LLVM for bool and relies on the representation LLVM chooses. LLVM uses i8 (one byte) to store i1 in memory on all platforms Rust currently supports. On the other hand, there is no certainty about the future, since the Rust developers have so far declined to commit to a particular bool representation.
So, it's guaranteed by the current implementation but not by any specification.
You can find more details in this RFC discussion and the linked PR and issue.
Please see E_net4's answer for more information about changes introduced in Rust since this answer was published.

While historically there was a wish to avoid committing to a more specific representation, it was eventually decided in January 2018 that bool should provide the following guarantees:
The definition of bool is equivalent to the C99 definition of _Bool
In turn, for all currently supported platforms, the size of bool is exactly 1.
The documentation has been updated accordingly. In the Rust reference, bool is defined thus:
The bool type is a datatype which can be either true or false. The boolean type uses one byte of memory. [...]
It has also been documented since 1.25.0 that the output of std::mem::size_of::<bool>() is 1.
As such, one can indeed rely on bool being 1 byte (and if this is ever to change, it will be a pretty loud change).
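If you want to state that reliance explicitly in code, here is a minimal sketch (assuming a toolchain new enough to evaluate assert! in const context, 1.57+):

use std::mem;

// Compilation fails if bool ever stops being exactly one byte.
const _: () = assert!(mem::size_of::<bool>() == 1);

fn main() {
    // bool-to-integer casts are guaranteed to produce 0 or 1.
    let b: u8 = true as u8;
    assert_eq!(b, 1);
}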
See also:
In C how much space does a bool (boolean) take up? Is it 1 bit, 1 byte or something else?
Why is a boolean 1 byte and not 1 bit of size? (C++)


In a dependently typed programming language is Type-in-Type practical for programming?

In a language with dependent types you can have Type-in-Type, which simplifies the language and gives it a lot of power. This makes the language logically inconsistent, but that might not be a problem if you are only interested in programming and not in theorem proving.
In the Cayenne paper (a dependently typed language for programming), it is said of Type-in-Type that "the unstratified type system would make it impossible during type checking to determine if an expression corresponds to a type or a real value and it would be impossible to remove the types at runtime" (section 2.4).
I have two questions about this:
In some dependently typed languages (like Agda) you can explicitly say which variables should be erased. In that case does Type-in-Type still cause problems?
We could extend the hierarchy one extra step with Kind where Type : Kind and Kind : Kind. This is still inconsistent but it seems that now you can know if a term is a type or a value. Is this correct?
the unstratified type system would make it impossible during type checking to determine if an expression corresponds to a type or a real value and it would be impossible to remove the types at runtime
This is not correct. Type-in-type prevents erasure of proofs, but it does not prevent erasure of types, assuming that we have parametric polymorphism with no typecase operation. Recent GHC Haskell is an example of a system which supports type-in-type, type erasure, and type-level computation at the same time, but which does not support proof erasure. In dependently typed settings, we always know if a term is a type or not; we just check whether its type is Type.
Type erasure is just erasure of all things with type Type.
Proof erasure is more complicated. Let's assume that we have a Prop universe like in Coq, which is intended to be a universe of computationally irrelevant types. Here, we can use some p : Bool = Int proof to coerce Bool-s to Int. If the language is consistent, there is no closed proof of Bool = Int so closed program execution never encounters such coercion. Thus, closed program execution is safe even if we erase all coercions.
If the language is inconsistent, and the only way of proving contradiction is by an infinite loop, there is a diverging closed proof of Bool = Int. Now, closed program execution can actually hit a proof of falsehood; but we can still have type safety, by requiring that coercion must evaluate the proof argument. Then, the program loops whenever we coerce by falsehood, so execution never reaches the unsound parts of the program.
Probably the key point here is that A = B : Prop supports coercion, which eliminates into a computationally relevant universe, but a parametric Type universe has no elimination principle at all and cannot influence computation.
Erasure can be generalized in several ways. For example, we may have any inductive data type with a single constructor (and no stored data which is not available from elsewhere, e.g. type indices), and try to erase every matching on that constructor. This is again sound if the language is total, and not otherwise. If we don't have a Prop universe, we can still do erasure like this. IIRC Idris does this a lot.
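To make the "coercion by a diverging proof just loops" point concrete, here is a small GHC Haskell sketch (GHC is mentioned above as a system with type-in-type and type erasure; the names are illustrative):

{-# LANGUAGE TypeOperators #-}
import Data.Type.Equality

-- A "proof" of an absurd equality that can only diverge;
-- it can never reduce to Refl.
bogus :: Int :~: String
bogus = bogus

-- castWith pattern-matches on Refl, so it must force the proof:
-- evaluating oops can therefore never produce a misinterpreted value.
oops :: String
oops = castWith bogus (42 :: Int)

main :: IO ()
main = putStrLn oops -- diverges (or is caught by GHC as <<loop>>), never segfaults

The evidence is not erased, and that is exactly what keeps the program type-safe despite the bogus equality.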
I just want to add a note that I believe is related to the question. Formality, a minimal proof language based on self-types, is non-terminating. I was involved in a Reddit discussion about whether Formality can segfault. One way that could happen is if you could prove Nat == String, then cast 42 :: Nat to 42 :: String and then print it as if it were a string, for example. But that's not the case. While you can prove Nat == String in Formality:
nat_is_string: Nat == String
nat_is_string
And you can use it to cast a Nat to a String:
nat_str: String
42 :: rewrite x in x with nat_is_string
If you attempt to print nat_str, your program will not segfault; it will just hang. That's because you can't erase the equality evidence in Formality. To understand why, let's look at the definition of Equal.rewrite (which is used to cast 42 to String):
Equal.rewrite<A: Type, a: A, b: A>(e: Equal(A,a,b))<P: A -> Type>(x: P(a)): P(b)
case e {
refl: x
} : P(e.b)
Once we erase the types, the normal form of rewrite becomes λe. λx. e(x). The e there is the equality evidence. In the example above, the normal form of nat_str is not 42, but nat_is_string(42). Since nat_is_string is an equality proof, it has two options: either it halts and becomes the identity, in which case it just returns 42, or it hangs forever. In this case, it doesn't halt, so nat_is_string(42) will never return 42. As such, it can't be printed, and any attempt to use it will cause your entire program to hang, but not segfault.
So, in short, the insight is that self types allow us to encode Equal and rewrite/subst, and to erase all the type information, but not the equality evidence itself.

Using flatbuffers struct as a key

I am considering using flatbuffers' serialized struct as a key in a key-value store. Here is an example of the structs that I want to use as a key in rocksdb.
struct Foo {
    foo_id: int64;
    foo_type: int32;
}
I read the documentation and figured that the layout of a struct is deterministic. Does that mean it is suitable to be used as a key? If yes, how do I serialize a struct and deserialize it back? It seems like Table has an API for serialization/deserialization but struct does not (?).
I tried serializing the struct as follows:
#include <array>
#include <cstring>

constexpr int key_size = sizeof(Foo);
using FooKey = std::array<char, key_size>;

FooKey get_foo_key(const Foo& foo_object) {
    FooKey key;
    std::memcpy(&key, &foo_object, key_size);
    return key;
}

const Foo* get_foo(const FooKey& key) {
    return reinterpret_cast<const Foo*>(&key);
}
I did some sanity checks and the above seems to work in my Ubuntu 18 docker image and is blazing fast. So my questions are as follows:
Is this a safe thing to do on a machine if it passes FLATBUFFERS_LITTLEENDIAN and uint8/char equivalence checks? Or are there any other checks needed?
Are there any other caveats that I should be aware of when doing it as demonstrated above?
Thanks in advance!
You don't actually need to go via std::array; the Foo struct is already a block of memory that is safe to copy or cast as you wish. It needs no serialization functions.
Like you said, that memory contains little endian data, so FLATBUFFERS_LITTLEENDIAN must pass. Actually even on a big endian machine you may copy these structures all you want, as long as you use the accessors to read the fields (which do a byteswap on access on big endian). The only thing that won't work on big endian is casting the struct to, say, an int64_t * to read the first field without using the accessor methods.
The other caveat to certain casting operations is strict aliasing: if you have that turned on, certain casts may be undefined behavior.
Also note that in this example Foo will be 16 bytes in size on all platforms, because of alignment.
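For completeness, a minimal sketch of what that looks like without the std::array detour (this assumes RocksDB's Slice type and the flatbuffers-generated Foo; the function names are illustrative):

#include <rocksdb/slice.h>
#include "foo_generated.h" // hypothetical flatbuffers-generated header defining Foo

// Wrap the struct's bytes directly; Foo is a fixed-size, little-endian block.
// Note that Slice does not own the memory, so foo_object must outlive the key.
rocksdb::Slice foo_key(const Foo& foo_object) {
    return rocksdb::Slice(reinterpret_cast<const char*>(&foo_object), sizeof(Foo));
}

// Reading it back is the reverse cast (subject to the strict-aliasing caveat above).
const Foo* foo_from_key(const rocksdb::Slice& key) {
    return reinterpret_cast<const Foo*>(key.data());
}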

Why Microsoft CRT is so permissive regarding a BSTR double free

This is a simplified question for the one I asked here. I'm using VS2010 (CRT v100) and it doesn't complain, in any way ever, when I double-free a BSTR.
BSTR s1 = SysAllocString(L"test");
SysFreeString(s1);
SysFreeString(s1);
Ok, the question is highly hypothetical (actually, the answer is :).
SysFreeString takes a BSTR, which is a pointer, which is really just a number with a specific meaning attached. This means that you can provide any value as an argument to the function, not just a valid BSTR or a BSTR which was valid moments ago. In order for SysFreeString to recognize invalid values, it would need to know all the valid BSTRs and check against each of them. You can imagine the price of that.
Besides, it is consistent with other C, C++, COM or Windows APIs: free, delete, CloseHandle, IUnknown::Release... all of them expect YOU to know whether the argument is eligible for releasing.
In a nutshell your question is: "I am calling SysFreeString with an invalid argument. Why does the compiler allow me to do this?"
The Visual C++ compiler allows the call and does not issue a warning because the call itself is valid: the argument type matches, the API function is fine, and this can be converted to binary code that executes. The compiler has no knowledge of whether your argument is valid or not; you are responsible for tracking this yourself.
The API function, on the other hand, expects that you pass a valid argument. It might or might not check its validity. The documentation says about the argument: "The previously allocated string". So the value is okay for the first call, but afterward the pointer is no longer a valid argument, and the behavior of the second call is basically undefined.
This has nothing to do with the CRT; it is a WinAPI function. Which is C-based, a language that has always given programmers enough rope to hang themselves by invoking UB with the slightest mistake. Fast and easy-to-port has forever been at odds with safe and secure.
SysFreeString() doesn't win any prizes; clearly it should have had a BOOL return type. But it can't: the IMalloc::Free() interface function was fumbled a long time ago. Nothing you can't fix yourself:
BOOL SafeSysFreeString(BSTR* str) {
    if (str == NULL) {
        SetLastError(ERROR_INVALID_PARAMETER);
        return FALSE;
    }
    SysFreeString(*str);
    *str = NULL;
    return TRUE;
}
Don't hesitate to yell louder, RaiseException() gives a pretty good bang that is hard to ignore. But writing COM code in C is cruel and unusual punishment, outlawed by the Geneva Convention on Programmers Rights. Use the _bstr_t or CComBSTR C++ wrapper types instead.
But do watch out when you slice the BSTR out of them, they can't help when you don't or can't use them consistently. Which is how you got into trouble with that VARIANT. Always pay extra attention when you have to leave the safety of the wrapper, there are C sharks out there.
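For illustration, a minimal sketch of the wrapper approach (this assumes <comutil.h> for _bstr_t; SomeComCall is a hypothetical COM call taking a BSTR):

#include <windows.h>
#include <comutil.h> // _bstr_t; link with comsuppw.lib

void SomeComCall(BSTR s); // hypothetical COM call that borrows the BSTR

void demo() {
    _bstr_t s(L"test");       // SysAllocString happens inside the wrapper
    SomeComCall(s.GetBSTR()); // the wrapper still owns the BSTR
}                             // destructor calls SysFreeString exactly once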
See this quote from MSDN:
Automation may cache the space allocated for BSTRs. This speeds up the SysAllocString/SysFreeString sequence.
(...) if the application allocates a BSTR and frees it, the free block of memory is put into the BSTR cache by Automation (...)
This may explain why calling SysFreeString(...) twice with the same pointer does not produce a crash, since the memory is still available (kind of).

Overcoming the race condition in lock-free reference-counted dereferences

Imagine a structure like this:
struct my_struct {
    uint32_t refs;
    ...
};
for which a pointer is acquired through a lookup table:
struct my_struct** table;

my_struct* my_struct_lookup(const char* name)
{
    my_struct* s = table[hash(name)];
    /* EDIT: Race condition here. */
    atomic_inc(&s->refs);
    return s;
}
A race exists between the dereference and the atomic increment in a multi-threaded model. Given that this is very performance-critical code, I was wondering how this race between the dereference and the atomic increment is typically resolved or worked around.
EDIT: When acquiring a pointer to a my_struct structure via the lookup table, it is necessary to first dereference the structure in order to increment its reference count. This creates a problem in multi-threaded code: another thread could drop the reference count and deallocate the object while this thread is about to dereference a pointer to memory that no longer exists. Combined with preemption and some bad luck, this could be a recipe for disaster.
As someone said above, you can keep a linked list of memory to free at some later time, so your pointers are never invalid. This is a handy method in some cases.
Or... you can make a 64-bit struct with your 32-bit pointer and have 32 bits for a ref count and other flags. You can use 64-bit atomic ops on the struct if you wrap it in a union:
union my_struct_ref {
    struct {
        unsigned int cRef : 16,
                     fDeleted : 1; // etc.
        struct my_struct *s;
    } Data;
    uint64_t n64; // uint64_t from <stdint.h>
};
You can work with the Data part of the struct in a human-readable way, and you can use CAS on the 64-bit n64 part.
my_struct* my_struct_lookup(const char* name)
{
    union my_struct_ref Old, New;
    int iHash = hash(name);
    // concurrency loop; table is now an array of union my_struct_ref
    while (1) {
        Old.n64 = table[iHash].n64;
        if (Old.Data.fDeleted)
            return NULL;
        New.n64 = Old.n64;
        New.Data.cRef++;
        if (CAS(&table[iHash].n64, Old.n64, New.n64)) // CAS = atomic compare-and-swap
            return New.Data.s; // success
        // We get here if some other thread changed the count or deleted our pointer
        // in between when we took a copy of it in Old. Just loop to try again.
    }
}
If you are using 64-bit pointers you will need to do a 128-bit CAS.
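For reference, a minimal sketch of what the CAS used above could be with the GCC/Clang __atomic builtins (this assumes the 32-bit-pointer layout, so a 64-bit compare-and-swap is enough):

#include <stdint.h>
#include <stdbool.h>

/* Atomically: if *p == expected, store desired and return true; otherwise return false. */
static bool CAS(volatile uint64_t *p, uint64_t expected, uint64_t desired)
{
    return __atomic_compare_exchange_n(p, &expected, desired, false,
                                       __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE);
}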
One solution is to use a freelist, rather than malloc() and free(). This has obvious drawbacks.
Another is to implement lock-free garbage collection (also known as Safe Memory Reclamation).
There are MANY patents in this field, but it appears that epoch-based LFGC is unencumbered.
The upshot of using this method is that elements are only deallocated when no threads are pointing at them.
The former solution is very easy to implement. You need a lock-free freelist, of course, or your overall system is no longer lock-free.
The latter is really not complex, but requires learning the algorithm in question, which takes some time and research.
Besides the race you identified, you have a general problem of memory consistency.
Even if you could make the table modifications atomic in a lock-free fashion, the block of memory my_struct* points to could still be "stale" when seen from a different thread compared to the thread that last modified it. This does not apply to my_struct.refs (provided you always access it using atomics), but does apply to all other fields. This is the consequence of write buffers and caches that are "private" to each CPU core.
The only way to guarantee you are seeing the correct memory content is to use a memory barrier. Yet, a typical lock is also a memory barrier, so why not just use the lock in the first place?
Lock-free programming is much trickier than it may initially seem; OTOH locks can be very fast, especially when contention is rare. Have you actually benchmarked a lock-based implementation and confirmed that locking is indeed your bottleneck?

const vs enum in D

Check out this quote from here, towards the bottom of the page. (I believe the quoted comment about consts applies to invariants as well.)
Enumerations differ from consts in that they do not consume any space in the final outputted object/library/executable, whereas consts do.
So apparently value1 will bloat the executable, while value2 is treated as a literal and doesn't appear in the object file.
const int value1 = 0xBAD;
enum int value2 = 42;
Back in C++ I always assumed this was for legacy reasons, and old compilers that couldn't optimize away constants. But if this is still true in D, there must be a deeper reason behind this. Anyone know why?
Just like in C++, an enum in D seems to be a "conserved integer literal" (edit: amazing, D2 even supports floats and strings). Its enumerators have no location. They are just immaterial as values without identity.
Declaring a manifest constant with enum is new in D2. It does not define a variable with storage, and it is not an lvalue (so you also cannot take its address). A declaration like
enum int a = 10; // new in D2
is like
enum : int { a = 10 }
if I can trust my poor D knowledge. So a here is not an lvalue (it has no location and you can't take its address). A const, however, has an address. If you have a global (not sure whether this is the right D terminology) const variable, the compiler usually can't optimize it away, because it doesn't know which modules can access that variable or take its address. So it has to allocate storage for it.
I think if you have a local const, the compiler can still optimize it away just as in C++, because the compiler knows by looking at its scope whether or not anyone is interested in its address or whether everyone just takes its value.
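A minimal D sketch of that difference (identifiers are illustrative):

const int value1 = 0xBAD; // module-level const: has storage, is an lvalue
enum int value2 = 42;     // manifest constant: no storage at all

void main()
{
    auto p = &value1;     // fine: value1 has an address
    // auto q = &value2;  // error: value2 is not an lvalue, it has no address
}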
Your actual question (why enum/const behaves the same in D as in C++) seems to be unanswered. Sadly, there exists no good reason for this choice whatsoever. I believe that this was just an unintentional side effect in C++ that became a de facto pattern. In D the same pattern was needed, and Walter Bright decided that it should be done as in C++ so that those coming from that language would recognize what to do... In fact, before this rather (IMHO) silly decision, the keyword manifest was used instead of enum for this use case.
I think a good compiler/linker should still remove the constant. It's just that with the enum, it's actually guaranteed in the spec. The difference is primarily a matter of semantics. (Also keep in mind that 2.0 isn't complete yet)
The real purpose of enum being expanded syntactically to support single manifest constants, from what I understand, is that Don Clugston, a D template guru, was doing some crazy stuff with templates. He kept running into long build times, ridiculous compiler memory usage, etc. because the compiler kept creating internal data structures for const variables. One key thing about const/immutable variables compared to enums is that const/immutable variables are lvalues and can have their address taken. This means there is some extra overhead for the compiler. This usually doesn't matter, but when you're executing really complicated compile-time metaprograms, even if const variables are optimized away, it is still significant overhead at compile time.
It sounds like the enum value will be used "inline" in expressions, whereas the const will actually take storage and any expression referencing it will load the value from memory.
This sounds similar to the difference between const and readonly in C#. The former is a compile-time constant and the latter is a run-time constant. This definitely affected versioning of assemblies (since assemblies referencing a const would receive a copy of its value at compile time and would not pick up a change to the value if the referenced assembly was rebuilt with a different value).