Using flatbuffers struct as a key - flatbuffers

I am considering using flatbuffers' serialized struct as a key in a key-value store. Here is an example of the structs that I want to use as a key in rocksdb.
struct Foo {
  foo_id: int64;
  foo_type: int32;
}
I read the documentation and figured that the layout of a struct is deterministic. Does that mean it is suitable to be used as a key? If yes, how do I serialize the struct and deserialize it back? It seems like Table has an API for serialization/deserialization, but struct does not (?).
I tried serializing the struct as follows:
#include <array>
#include <cstring>

constexpr int key_size = sizeof(Foo);
using FooKey = std::array<char, key_size>;

// Copy the raw bytes of the struct into a fixed-size key buffer.
FooKey get_foo_key(const Foo& foo_object) {
    FooKey key;
    std::memcpy(key.data(), &foo_object, key_size);
    return key;
}

// Reinterpret the key buffer as a Foo.
const Foo* get_foo(const FooKey& key) {
    return reinterpret_cast<const Foo*>(key.data());
}
I did some sanity checks and the above seems to work in my Ubuntu 18 Docker image and is blazing fast. So my questions are as follows:
Is this a safe thing to do on a machine if it passes FLATBUFFERS_LITTLEENDIAN and uint8/char equivalence checks? Or are there any other checks needed?
Are there any other caveats that I should be aware of when doing it as demonstrated above?
Thanks in advance!

You don't actually need to go via std::array; the Foo struct is already a block of memory that is safe to copy or cast as you wish. It needs no serialization functions.
Like you said, that memory contains little-endian data, so FLATBUFFERS_LITTLEENDIAN must pass. Actually, even on a big-endian machine you may copy these structs all you want, as long as you use the accessors to read the fields (they byte-swap on access on big-endian platforms). The only thing that won't work on big endian is casting the struct to, say, an int64_t* to read the first field without going through the accessor methods.
The other caveat to certain casting operations is strict aliasing: with strict aliasing turned on, certain casts are undefined behavior.
Also note that in this example Foo will be 16 bytes in size on all platforms, because of alignment.
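Those checks can be expressed at compile time. A minimal sketch, assuming the flatc-generated header is named foo_generated.h (that name is illustrative):

#include "foo_generated.h"  // hypothetical name of the generated header

// Guard the byte-copy trick with the checks discussed above.
static_assert(FLATBUFFERS_LITTLEENDIAN, "raw struct bytes are little-endian");
static_assert(sizeof(Foo) == 16, "int64 + int32 + 4 bytes of alignment padding");
static_assert(sizeof(char) == sizeof(uint8_t), "char must be one byte");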

Related

Thrift go support of map with struct value as key

I experimented golang generation with Thrift 0.9.1, for example,
thrift definition,
struct AppIdLeveledHashKeyTimeKeyHour {
  1: required i32 appId
  2: required LeveledHashKey leveledHashKey
  3: required TimeKeyHour timeKeyHour
}
typedef map<AppIdLeveledHashKeyTimeKeyHour, ...sth else...> EventSliceShardIdValue
in the generated code, EventSliceShardIdValue would be,
type EventSliceShardIdValue map[*AppIdLeveledHashKeyTimeKeyHour]EventSliceAppIdLeveledHashKeyTimeKeyHourValue
Notice that the key part is a pointer, i.e. a memory address. In Go, a pointer as a map key (instead of a value, or a hash of the object) is useless in most cases. To use a combination of some fields as the map key, the definition should use a value type, like
map[AppIdLeveledHashKeyTimeKeyHour]EventSliceAppIdLeveledHashKeyTimeKeyHourValue
Is this a problem with Thrift's Go support (or did I misuse something)? Is there any workaround in Thrift?
Structs (without pointers) can only be used as map keys under certain limited circumstances (they must be comparable per http://golang.org/ref/spec#Comparison_operators); it's possible that AppIdLeveledHashKeyTimeKeyHour doesn't fit this definition, so it's not actually possible to build a map without using a pointer for the key.

Overcoming the race condition in lock-free reference-counted dereferences

Imagine a structure like this:
struct my_struct {
    uint32_t refs;
    /* ... */
};
for which a pointer is acquired through a lookup table:
struct my_struct** table;

my_struct* my_struct_lookup(const char* name)
{
    my_struct* s = table[hash(name)];
    /* EDIT: Race condition here. */
    atomic_inc(&s->refs);
    return s;
}
A race exists between the dereference and the atomic increment in a multi-threaded model. Given that this is very performance-critical code, I was wondering how this race between the dereference and the atomic increment is typically resolved or worked around?
EDIT: When acquiring a pointer to a my_struct structure via the lookup table, it is necessary to first dereference that pointer in order to increment the reference count. This creates a problem in multi-threaded code, where other threads could be altering the reference count and potentially deallocating the object itself while another thread dereferences a pointer to now-nonexistent memory. Combined with preemption and some bad luck, this could be a recipe for disaster.
As someone said above, you can keep a linked list of memory to free at some later time, so your pointers are never invalid. This is a handy method in some cases.
Or... you can make a 64-bit struct with your 32-bit pointer and 32 bits for a ref count and other flags. You can use 64-bit atomic ops on the whole thing if you wrap it in a union:
union my_struct_ref {
    struct {
        unsigned int cRef : 16,    // reference count
                     fDeleted : 1; // etc.
        struct my_struct* s;       // the 32-bit pointer itself
    } Data;
    uint64_t n64;                  // the same bits, viewed as one 64-bit word
};
You can work with the Data part of the union in a human-readable way, and you can use CAS on the n64 part.
my_struct* my_struct_lookup(const char* name)
{
    union my_struct_ref Old, New;
    int iHash = hash(name);
    // concurrency loop
    while (1) {
        Old.n64 = table[iHash].n64;
        if (Old.Data.fDeleted)
            return NULL;
        New.n64 = Old.n64;
        New.Data.cRef++;
        if (CAS(&table[iHash].n64, Old.n64, New.n64)) // CAS = atomic compare-and-swap
            return New.Data.s; // success
        // We get here if some other thread changed the count or deleted our pointer
        // in between when we copied it into Old. Just loop to try again.
    }
}
If you are using 64-bit pointers, you will need to do a 128-bit CAS.
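A sketch of the same pattern in portable C++11, assuming 32-bit pointers so a packed entry fits in one 64-bit atomic; the table name, size, and field widths here are illustrative, not from the original post:

#include <atomic>
#include <cstddef>
#include <cstdint>

struct my_struct;

// One packed table entry: 32-bit pointer + 16-bit ref count + deleted flag.
struct entry_bits {
    uint32_t ptr;      // the pointer value, squeezed into 32 bits
    uint16_t refs;     // reference count
    uint16_t deleted;  // nonzero once the slot is logically removed
};

static std::atomic<uint64_t> table64[1024];  // hypothetical fixed-size table

static uint64_t pack(entry_bits e) {
    return uint64_t(e.ptr) | (uint64_t(e.refs) << 32) | (uint64_t(e.deleted) << 48);
}
static entry_bits unpack(uint64_t v) {
    return { uint32_t(v), uint16_t(v >> 32), uint16_t(v >> 48) };
}

my_struct* lookup_and_ref(std::size_t slot) {
    uint64_t old_word = table64[slot].load(std::memory_order_acquire);
    for (;;) {
        entry_bits e = unpack(old_word);
        if (e.deleted || e.ptr == 0)
            return nullptr;
        entry_bits bumped = e;
        ++bumped.refs;  // take the reference before the object can be freed
        // On failure, compare_exchange_weak reloads old_word; just loop again.
        if (table64[slot].compare_exchange_weak(old_word, pack(bumped),
                                                std::memory_order_acq_rel))
            return reinterpret_cast<my_struct*>(uintptr_t(e.ptr));
    }
}

With 64-bit pointers the packed word no longer fits, which is why the answer above mentions 128-bit CAS; in C++ that would be std::atomic over a 16-byte struct, which is lock-free only on platforms with a double-width CAS instruction.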
One solution is to use a freelist, rather than malloc() and free(). This has obvious drawbacks.
Another is to implement lock-free garbage collection (also known as Safe Memory Reclamation).
There are MANY patents in this field, but it appears that epoch-based LFGC is unencumbered.
The upshot of using this method is that elements are only deallocated when no threads are pointing at them.
The former solution is very easy to implement. You need a lock-free freelist, of course, or your overall system is no longer lock-free.
The latter is really not complex, but requires learning the algorithm in question, which takes some time and research.
Besides the race you identified, you have a general problem of memory consistency.
Even if you could make the table modifications atomic in a lock-free fashion, the block of memory my_struct* points to could still be "stale" when seen from a different thread compared to the thread that last modified it. This does not apply to my_struct.refs (provided you always access it using atomics), but does apply to all other fields. This is the consequence of write buffers and caches that are "private" to each CPU core.
The only way to guarantee you are seeing the correct memory content is to use a memory barrier. Yet, a typical lock is also a memory barrier, so why not just use the lock in the first place?
Lock-free programming is much trickier than it may initially seem; OTOH, locks can be very fast, especially when contention is rare. Have you actually benchmarked a lock-based implementation and confirmed that locking is indeed your bottleneck?
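For comparison, a minimal sketch of the lock-based version this answer suggests benchmarking first (the single global mutex and the hash declaration are illustrative; production code might stripe a mutex per bucket):

#include <cstdint>
#include <mutex>

struct my_struct { uint32_t refs; /* ... */ };
extern my_struct** table;               // the lookup table from the question
extern unsigned hash(const char* name); // hypothetical hash function

static std::mutex table_mutex;

my_struct* my_struct_lookup_locked(const char* name) {
    std::lock_guard<std::mutex> guard(table_mutex);
    my_struct* s = table[hash(name)];
    if (s)
        ++s->refs;  // safe: nobody can free s while we hold the lock
    return s;       // the lock also acts as the memory barrier discussed above
}

Deallocation must take the same lock; that is exactly what makes the increment safe.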

Most appropriate data structure for dynamic languages field access

I'm implementing a dynamic language that will compile to C#, and it's implementing its own reflection API (.NET's is too slow, and the DLR is limited only to more recent and resourceful implementations).
For this, I've implemented a simple .GetField(string f) and .SetField(string f, object val) interface. Until recently, the implementation just switched over all possible field string values and performed the corresponding action.
Also, this dynamic language has the possibility to define anonymous objects. For those anonymous objects, at first, I had implemented a simple hash algorithm.
By now, I am looking for ways to optimize the dynamic parts of the language, and I have come across the fact that a hash algorithm for anonymous objects would be overkill. This is because the objects are usually small: I'd say they contain 2 or 3 fields, normally, and very rarely more than 15. It would take more time to actually hash the string and perform the lookup than to simply test them all for equality. (This is not tested, just theoretical.)
The first thing I did was to -- at compile time -- create a red-black tree for each anonymous object declaration and have it laid out in an array, so that the object can search it in a very optimized way.
I am still divided, though, on whether that's the best way to do this. I could go for a perfect hashing function. Even more radically, I'm thinking about dropping the need for strings and actually working with a struct of 2 longs.
Those two longs would be encoded to support 10 chars (A-Za-z0-9_) each, which is usually a good prediction of field-name length. For fields longer than this, a special (slower) function receiving a string would also be provided.
The result is that strings would be inlined (not references), and their comparisons would be as cheap as a long comparison.
Anyway, it's a little hard to find good information about this kind of optimization, since it is normally considered at the VM level, not in a static-language compilation implementation.
Does anyone have any thoughts or tips about the best data structure to handle dynamic calls?
Edit:
For now, I'm really going with the string-as-longs representation and a binary-tree lookup over a linear array.
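A minimal sketch of that two-longs encoding, assuming identifiers are pre-validated; the 6-bit code assignment and names are illustrative (63 identifier characters plus a zero terminator fit in 6 bits, so 10 characters fit in 60 of each long's 64 bits):

#include <cstdint>
#include <string>

// Map 'A'-'Z', 'a'-'z', '0'-'9', '_' to codes 1..63; 0 marks the end.
static uint64_t code6(char c) {
    if (c >= 'A' && c <= 'Z') return 1 + (c - 'A');
    if (c >= 'a' && c <= 'z') return 27 + (c - 'a');
    if (c >= '0' && c <= '9') return 53 + (c - '0');
    return 63; // '_' (input assumed validated)
}

struct FieldName {
    uint64_t lo, hi;  // up to 20 chars total, 10 per long
    bool operator==(const FieldName& o) const { return lo == o.lo && hi == o.hi; }
};

// Encode up to 20 identifier characters; longer names take the slow string path.
static FieldName encode(const std::string& s) {
    FieldName f{0, 0};
    for (size_t i = 0; i < s.size() && i < 20; ++i) {
        uint64_t c = code6(s[i]);
        if (i < 10) f.lo |= c << (6 * i);
        else        f.hi |= c << (6 * (i - 10));
    }
    return f;
}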
I don't know if this is helpful, but I'll chuck it out just in case:
If this is compiling to C#, do you know the complete list of fields at compile time? As an idea, if your code reads
// dynamic
myObject.foo = "some value";
myObject.bar = 32;
then during the parse, your symbol table can assign an int to each field name:
// parsing code
symbols[0] == "foo"
symbols[1] == "bar"
then generate code using arrays or lists;
// generated C#
runtimeObject[0] = "some value"; // assign myObject.foo
runtimeObject[1] = 32;           // assign myObject.bar
and build up the reflection info as a separate pair of dictionaries;
runtimeObject.FieldNames[0] == "foo"; // Dictionary<int, string>
runtimeObject.FieldIds["foo"] == 0;   // Dictionary<string, int>
As I say, thrown out in the hope it'll be useful. No idea if it will!
Since you are likely to be using the same field and method names repeatedly, something like string interning would work well to quickly generate keys for your hash tables. It would also make string equality comparisons constant-time.
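A minimal sketch of that interning idea (the table and function names are made up for illustration): after interning, name equality is just an int comparison, and the id can index flat arrays.

#include <string>
#include <unordered_map>

// Return a stable small integer id for each distinct name.
static int intern(const std::string& name) {
    static std::unordered_map<std::string, int> ids;
    auto it = ids.find(name);
    if (it != ids.end()) return it->second;
    int id = static_cast<int>(ids.size());
    ids.emplace(name, id);
    return id;
}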
For such a small data set (an expected upper bound of 15) I think almost any hashing will be more expensive than a tree or even a list lookup, but that really depends on your hashing algorithm.
If you want to use a dictionary/hash then you'll need to make sure the objects you use for keys return a hash code quickly (perhaps a single constant hash code that's built once). If you can prevent collisions within an object (which sounds pretty doable), then you'll gain the speed and scalability (well, for any realistic object/class size) of a hash table.
Something that comes to mind is Ruby's symbols and message passing. I believe Ruby's symbols act as constants backed by just a memory reference, so comparison is constant-time, they are very light, and you can use symbols like variables (I'm a little hazy on this and don't have a Ruby interpreter on this machine). Ruby's method "calling" really turns into message passing: something like obj.func(arg) turns into obj.send(:func, arg) (:func is the symbol). I would imagine the symbol makes looking up the message handler (as I'll call it) inside the object pretty efficient, since its hash code most likely doesn't need to be calculated the way it does for most objects.
Perhaps something similar could be done in .NET.

std::unique_ptr and pointer-to-pointer

I've been using std::unique_ptr to store some COM resources, with a custom deleter function. However, many of the COM functions want a pointer-to-pointer. Right now, I'm using my compiler's implementation detail _Myptr. Is accessing this data member directly going to break unique_ptr, or should I store a gajillion temporary pointers to construct unique_ptr rvalues from?
COM objects are reference-counted by their nature, so you shouldn't use anything except reference-counting smart pointers like ATL::CComPtr or _com_ptr_t, even if it seems inappropriate for your use case (I fully understand your concerns; I just think you assign too much weight to them). Both classes are designed to be used in all valid scenarios that arise when COM objects are used, including obtaining the pointer-to-pointer. Yes, that's a bit too much functionality, but if you don't expect any specific negative consequences you can't tolerate, you should just use those classes - they are designed exactly for this purpose.
I've had to tackle the same problem not too long ago, and I came up with two different solutions:
The first was a simple wrapper that encapsulated a 'writeable' pointer and could be std::moved into my smart pointer. This is just a little more convenient than using the temporary pointers you mention, since you cannot define the type directly at the call site.
Therefore, I didn't stick with that. What I did instead was a retrieve helper function that invokes the COM function and returns my smart pointer (doing all the temporary-pointer stuff internally). This trivially works with free functions that have only a single T** parameter. If you want to use it on something more complex, you can pass in the call via std::bind and leave only the pointer-to-be-returned free.
I know that this is not directly what you're asking, but I think it's a neat solution to the problem you're having.
As a side note, I'd prefer boost's intrusive_ptr instead of std::unique_ptr, but that's a matter of taste, as always.
Edit: Here's some sample code, transcribed from my version that uses boost::intrusive_ptr (so it might not work out of the box with unique_ptr):
template <class T, class PtrType, class PtrDel>
HRESULT retrieve(T func, std::unique_ptr<PtrType, PtrDel>& ptr)
{
    PtrType* raw_ptr = nullptr;   // temporary the COM call writes into
    HRESULT result = func(&raw_ptr);
    ptr.reset(raw_ptr);           // hand ownership to the smart pointer
    return result;
}
For example, it can be used like this:
std::unique_ptr<IFileDialog, ComDeleter> FileDialog;
/*...*/
using std::bind;
using namespace std::placeholders;
std::unique_ptr<IShellItem, ComDeleter> ShellItem;
HRESULT status = retrieve(bind(&IFileDialog::GetResult, FileDialog.get(), _1), ShellItem); // bind the raw pointer: unique_ptr is not copyable
For bonus points, you can even let retrieve return the unique_ptr instead of taking it by reference, and throw an exception if you get a bad HRESULT. Deriving the pointer type from the bind-generated functor is implementation-dependent, so it is simpler to pass it explicitly.
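A sketch of that returning variant, assuming the usual Windows headers for HRESULT and FAILED; the deleter type and the exception choice are illustrative, not from the answer:

#include <memory>
#include <stdexcept>

// Returning variant: PtrType and the deleter are passed explicitly.
template <class PtrType, class PtrDel, class F>
std::unique_ptr<PtrType, PtrDel> retrieve(F func)
{
    PtrType* raw_ptr = nullptr;
    HRESULT result = func(&raw_ptr);
    if (FAILED(result))
        throw std::runtime_error("COM call failed");  // or a richer error type
    return std::unique_ptr<PtrType, PtrDel>(raw_ptr);
}

// Usage (hypothetical):
// auto item = retrieve<IShellItem, ComDeleter>(
//     std::bind(&IFileDialog::GetResult, FileDialog.get(), _1));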
C++0x smart pointers have a portable way to get at the raw pointer, .get(), or to release ownership of it entirely, .release(). You could also always use &(*ptr), but that is less idiomatic.
If you want to use smart pointers to manage the lifetime of an object, but still need raw pointers to use a library which doesn't support smart pointers (including standard c library) you can use those functions to most conveniently get at the raw pointers.
Remember, you still need to keep the smart pointer around for the duration you want the object to live (so be aware of its lifetime).
Something like this (note that .get() returns a temporary, so you cannot take its address; use a named raw pointer for out-parameters):
IFoo* raw = nullptr;                  // IFoo is a stand-in for your COM interface
call_com_function( &raw );            // will work fine
my_uniq_ptr.reset( raw );             // hand ownership to the smart pointer

return my_localscope_uniq_ptr.get();  // will not work - the object dies at scope exit
return my_member_uniq_ptr.get();      // might, if *this will be around for the duration, etc.
Note: this is just a general answer to your question. How to best use COM is a separate issue and sharptooth may very well be correct.
Use a helper function like this.
template< class T >
T*& getPointerRef ( std::unique_ptr<T> & ptr )
{
    // Relies on MSVC implementation details (_Mybase/_Myptr); not portable.
    struct Twin : public std::unique_ptr<T>::_Mybase {};
    Twin * twin = (Twin*)( &ptr );
    return twin->_Myptr;
}
A quick check of the implementation:
int wmain ( int argc, wchar_t* argv[] )
{
    std::unique_ptr<char> charPtr ( new char[25] );
    delete[] getPointerRef(charPtr);  // manually free through the extracted reference
    getPointerRef(charPtr) = 0;       // null it so the destructor won't double-free
    return charPtr.get() != 0;
}
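For completeness, a portable alternative that avoids poking at _Myptr: a small RAII adapter in the spirit of C++23's std::out_ptr. This is a sketch with made-up names, and it assumes C++17's guaranteed copy elision so the factory returns exactly one adapter object:

#include <memory>

// Temporarily exposes a T** for a COM-style out-parameter, then moves the
// result into the owning unique_ptr when it goes out of scope.
template <class T, class D>
class out_ptr_adapter {
public:
    explicit out_ptr_adapter(std::unique_ptr<T, D>& owner) : owner_(owner) {}
    ~out_ptr_adapter() { owner_.reset(raw_); }
    operator T**() { return &raw_; }
private:
    std::unique_ptr<T, D>& owner_;
    T* raw_ = nullptr;
};

template <class T, class D>
out_ptr_adapter<T, D> out_ptr(std::unique_ptr<T, D>& owner) {
    return out_ptr_adapter<T, D>(owner);
}

// Usage: some_com_function(out_ptr(my_unique_ptr));
// The adapter lives until the end of the full expression, so the unique_ptr
// is updated right after the COM call returns.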

const vs enum in D

Check out this quote from here, towards the bottom of the page. (I believe the quoted comment about consts applies to invariants as well.)
Enumerations differ from consts in that they do not consume any space
in the final outputted object/library/executable, whereas consts do.
So apparently value1 will bloat the executable, while value2 is treated as a literal and doesn't appear in the object file.
const int value1 = 0xBAD;
enum int value2 = 42;
Back in C++ I always assumed this was for legacy reasons, and old compilers that couldn't optimize away constants. But if this is still true in D, there must be a deeper reason behind this. Anyone know why?
Just like in C++, an enum in D seems to be a "conserved integer literal" (edit: amazing, D2 even supports floats and strings). Its enumerators have no location. They are just immaterial as values without identity.
Placing enum like this is new in D2. It does not define a new variable; it is not an lvalue (so you also cannot take its address). An
enum int a = 10; // new in D2
is like
enum : int { a = 10 }
if I can trust my poor D knowledge. So, a here is not an lvalue (it has no location, and you can't take its address). A const, however, has an address. If you have a global (not sure whether this is the right D terminology) const variable, the compiler usually can't optimize it away, because it doesn't know which modules can access that variable or might take its address. So it has to allocate storage for it.
I think if you have a local const, the compiler can still optimize it away just as in C++, because the compiler knows by looking at its scope whether or not anyone is interested in its address or whether everyone just takes its value.
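A C++ analogue of that distinction, for readers coming from there (a sketch; D2's manifest enum behaves like the enumerator here):

#include <cstdio>

const int value1 = 0xBAD;   // has an address; storage may be emitted,
                            // especially once the address is taken
enum { value2 = 42 };       // a pure value; no identity, no address

int main() {
    const int* p = &value1;     // fine: const objects are lvalues
    // const int* q = &value2;  // error: cannot take the address of an enumerator
    std::printf("%d %d\n", *p, value2);
    return 0;
}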
Your actual question (why enum/const is the same in D as in C++) seems to be unanswered. Sadly, there exists no good reason for this choice whatsoever. I believe this was just an unintentional side effect in C++ that became a de facto pattern. In D the same pattern was needed, and Walter Bright decided it should be done as in C++, so that those coming from there would recognize what to do... In fact, before this rather (IMHO) silly decision, the keyword manifest was used instead of enum for this use case.
I think a good compiler/linker should still remove the constant. It's just that with the enum it's actually guaranteed by the spec. The difference is primarily a matter of semantics. (Also keep in mind that D 2.0 isn't complete yet.)
The real purpose of enum being expanded syntactically to support single manifest constants, from what I understand, is that Don Clugston, a D template guru, was doing some crazy stuff with templates. He kept running into long build times, ridiculous compiler memory usage, etc., because the compiler kept creating internal data structures for const variables. One key thing about const/immutable variables compared to enums is that const/immutable variables are lvalues and can have their address taken, which means some extra overhead for the compiler. This usually doesn't matter, but when you're executing really complicated compile-time metaprograms, it is still significant overhead at compile time, even if the const variables are eventually optimized away.
It sounds like the enum value will be used "inline" in expressions, whereas the const will actually take storage, and any expression referencing it will load the value from that storage.
This sounds similar to the difference between const and readonly in C#. The former is a compile-time constant and the latter is a run-time constant. This definitely affects versioning of assemblies: an assembly referencing a const receives a copy of the value at compile time, and so does not see the new value if the referenced assembly is rebuilt with a different value.