Efficiently clearing flatbuffer builders for layers of tables - flatbuffers

Can I reuse 'sub' builder instances generated from a flatbuffers::FlatBufferBuilder after calling builder.Clear()? What is the effect of builder.Clear() on sub-builders?
Having generated a flatbuffers schema such as the following:
table FB_mytable1 {
  myshort: ushort = 0;
}
table FB_table2 {
  nestedTable1: FB_mytable1;
  nestedTable2: FB_mytable1;
}
root_type FB_table2;
If I reset the builder using builder.Clear() (my instantiation of flatbuffers::FlatBufferBuilder), will this allow me to generate new serial data without calling a reset function on, or renewing, any of the individual table serialisers, such as my instantiation FB_mytable1Builder myFB_mytable1Builder(builder)? Or do I need to ensure that the individual builder objects have a scope that means they are not reused?

Clear() resets a FlatBufferBuilder as if it had just been constructed; table builder instances should not be reused across multiple buffers, or even across multiple tables.
See, a FlatBufferBuilder is a somewhat heavyweight structure (since it owns a buffer), so it makes sense to reuse it when you can. The table builders, however, are super lightweight, so each should just be a local variable used to construct a single table; you can't reuse them.
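A minimal sketch of the intended pattern, shown with the C# FlatBuffers runtime for brevity (the question's snippets are C++, but the builder lifecycle is the same; the generated Start/Add/End method names below are assumptions derived from the schema above):
var builder = new FlatBufferBuilder(1024); // heavyweight: reuse it across messages
for (int i = 0; i < 3; i++)
{
    builder.Clear(); // back to the just-constructed state; no per-table state survives
    // All table-level construction state is local to this iteration.
    FB_mytable1.StartFB_mytable1(builder);
    FB_mytable1.AddMyshort(builder, (ushort)i);
    var nested = FB_mytable1.EndFB_mytable1(builder);
    FB_table2.StartFB_table2(builder);
    FB_table2.AddNestedTable1(builder, nested);
    var root = FB_table2.EndFB_table2(builder);
    builder.Finish(root.Value);
    byte[] message = builder.SizedByteArray(); // copy the bytes out before the next Clear()
}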

Related

Class that represents parts of another object?

I have a class called "EntryData" with a couple of fields in it: "name", "entrydate", "enteredby", and "key", a foreign key. The key points to the "DataEntered" class with "key", "startdate", "enddate", and "values", an array of doubles. This matched the layout of a db we were using, which stored data in two tables.
Now we are adapting this to a db that stores all the same data, but in a single table. We would like the API to remain the same.
So, I hope this makes sense:
Can I make a new class called "DataEntered" that has no instances of its own and consists solely of pointers to particular fields in "EntryData"? That is, there would be no objects of this class; it would simply be a sort of wrapper that always referred to the underlying EntryData it was called on, like this…
myEntryDataInstance.DataEntered.startdate
In this case there is no instance of DataEntered.
The easy way to solve my problem would be if I could put periods in my method names, but that doesn't seem to work. :-)
What you're asking for is called inheritance, but VB does not allow you to inherit from multiple classes. You would have to inherit from one class and then hold a reference to the second class, saved as a variable in your class.
The problem is: what do you plan to do with this? Are you using the Entity Framework or your own home-brewed data manager? If you are using something generic, it won't know how to handle your new class that spans multiple tables.
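That said, the wrapper the question describes can be built without inheritance: expose a property on EntryData that returns a lightweight view whose members simply delegate back to it. A minimal sketch in C# (the question is VB, but the pattern translates directly; all type and member names are assumptions):
using System;

public class EntryData
{
    public string Name { get; set; }
    public DateTime EntryDate { get; set; }
    public string EnteredBy { get; set; }
    public DateTime StartDate { get; set; }
    public DateTime EndDate { get; set; }
    public double[] Values { get; set; }

    // No DataEntered object is stored anywhere; the view just points back here.
    public DataEnteredView DataEntered { get { return new DataEnteredView(this); } }
}

// A class with no data of its own: every member delegates to the owning EntryData.
public class DataEnteredView
{
    private readonly EntryData _owner;
    public DataEnteredView(EntryData owner) { _owner = owner; }
    public DateTime StartDate { get { return _owner.StartDate; } }
    public DateTime EndDate { get { return _owner.EndDate; } }
    public double[] Values { get { return _owner.Values; } }
}
Usage then reads myEntryDataInstance.DataEntered.StartDate, with all the data living in the single EntryData row.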

How to add custom (user defined) properties to entities with EclipseLink?

I'd like to add user-defined custom fields to an existing entity in EclipseLink. For performance reasons, I want them to be stored directly in the entity's table, and I also want them to be "first class citizens", i.e. usable in queries.
From an implementation standpoint, the entity should have two methods to set and get custom fields:
public Object getCustomProperty(String key) { ... }
public void setCustomProperty(String key, Object value) { ... }
When setting a custom property foo, EclipseLink should store the value in the entity's table in a field named custom_foo.
From an end user standpoint, I would like to provide a GUI where the user can define and manage custom fields, which are then dynamically added to or removed from the database.
Is this possible in EclipseLink?
Regards,
Jochen
Check out EclipseLink's Extensibility feature
http://wiki.eclipse.org/EclipseLink/UserGuide/JPA/Advanced_JPA_Development/Extensible_Entities
This, together with its support for adding columns:
http://wiki.eclipse.org/EclipseLink/DesignDocs/368365
seems close to what you are asking for - except for one conflict: the extensibility feature doesn't store the extended attributes in the main table, whereas you state the values should be stored in the entity's own table in a custom_foo column.
You will need to create the GUI that creates the mappings. EclipseLink ships with a metadata-source implementation that reads from an orm.xml file, so you may have to write your own implementation for EclipseLink to use if your GUI cannot write to an orm.xml file.

How to effectively refresh many to many relationship

Let's say I have entity A, which has a many-to-many relationship with other entities of type A. So on entity A, I have a collection of A. And let's say I have to "update" these relationships according to some external service - from time to time I receive a notification that the relations for a certain entity have changed, together with an array of IDs of the currently related entities - some relations may be new, some existing, and some of the existing ones are no longer there. How can I efficiently update my database with EF?
Some ideas:
eager-load the entity with its related entities, do a foreach over the collection of IDs from the external service, and remove/add as needed. But this is not very efficient - it may need to load hundreds of related entities
clear the current relations and insert new ones. But how? Maybe perform the delete via a stored procedure, and then insert "fake" objects:
a.Related.Add(new A { Id = idFromArray });
But can this be done in a transaction? (A call to a stored procedure and then inserts done by SaveChanges.)
Or is there a third way?
Thanks.
Well, "from time to time" does not sound like a situation to think much about performance improvement (unless you mean "from millisecond to millisecond") :)
Anyway, the first approach is the correct idea to do this update without a stored procedure. And yes, you must load all old related entities because updating a many-to-many relationship goes only though EFs change detection. There is no exposed foreign key you could leverage to update the relations without having loaded the navigation properties.
An example of how this might look in detail is here (a fresh question from yesterday):
Selecting & Updating Many-To-Many in Entity Framework 4
(Only the last code snippet before the "Edit" section, plus the Edit section itself, is relevant to your question.)
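In outline, the first approach could look like the following - a hedged sketch, assuming a DbContext-style MyContext with a set As, an int key Id, and the service's id array in idsFromService (all of these names are assumptions):
using (var context = new MyContext())
{
    // One round-trip: load the entity together with its current relations.
    var entity = context.As.Include("Related").Single(x => x.Id == entityId);

    var current = entity.Related.ToList();
    var target = new HashSet<int>(idsFromService);

    // Drop relations the external service no longer reports.
    foreach (var old in current.Where(r => !target.Contains(r.Id)))
        entity.Related.Remove(old);

    // Attach stubs for new relations so EF doesn't have to query each one.
    var existingIds = new HashSet<int>(current.Select(r => r.Id));
    foreach (var id in target.Where(i => !existingIds.Contains(i)))
    {
        var stub = new A { Id = id };
        context.As.Attach(stub);   // tracked as Unchanged; only the link row gets inserted
        entity.Related.Add(stub);
    }

    context.SaveChanges();
}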
For your second solution you can wrap the whole operation into a manually created transaction:
using (var scope = new TransactionScope())
{
    using (var context = new MyContext())
    {
        // ... call the stored procedure to delete relationships in the link table
        // ... insert fake objects for the new relationships
        context.SaveChanges();
    }
    scope.Complete();
}
OK, solution found. The pure EF solution is, of course, the first one proposed in the original question.
But if performance matters, there IS a third way, the best one, although it is SQL Server specific (AFAIK): a single stored procedure with a table-valued parameter. All the new related IDs go in, and the stored procedure performs the deletes and inserts in a transaction.
Look for examples and a performance comparison here (a great article; I based my solution on it):
http://www.sommarskog.se/arrays-in-sql-2008.html
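For reference, the calling side of the table-valued-parameter approach might look roughly like this - a hedged sketch in ADO.NET, where the procedure dbo.UpdateRelatedIds and the table type dbo.IdList are assumptions:
var ids = new DataTable();
ids.Columns.Add("Id", typeof(int));
foreach (var id in idsFromService)
    ids.Rows.Add(id);

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.UpdateRelatedIds", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@entityId", entityId);
    var tvp = cmd.Parameters.AddWithValue("@relatedIds", ids);
    tvp.SqlDbType = SqlDbType.Structured; // marks it as a table-valued parameter
    tvp.TypeName = "dbo.IdList";          // the user-defined table type on the server
    conn.Open();
    cmd.ExecuteNonQuery(); // the procedure deletes the old links and inserts the new set in one transaction
}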

NHibernate HiLo ID Generator. Generating an ID before saving

I'm trying to use the 'adonet.batch_size' property in NHibernate. I'm creating entities across multiple sessions at a high rate (hence the batch inserting). So what I'm doing is creating a buffer where I keep these entities and then flush them out all at once periodically.
However, I need the IDs as soon as I create the entities. So I want to create an entity (in any session) and then have its ID generated (I'm using the HiLo generator). Then at a later time (and in another session) I want to flush that buffer and ensure that those IDs do not change.
Is there anyway to do this?
Thanks
Guido
I find it odd that you need many sessions to do a single job. Normally a single session is enough to do all the work.
That said, the HiLo generator sets the id property on the entity when you call nhSession.Save(object), without necessarily requiring a round-trip to the database, and nhSession.Flush() will flush the inserts to the database.
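So something along these lines should already give you stable ids up front - a minimal sketch, assuming a hilo-mapped entity and your buffered entities in bufferedEntities:
using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    foreach (var entity in bufferedEntities)
    {
        session.Save(entity); // hilo assigns the id here, usually without touching the database
        // entity.Id is already usable at this point
    }
    tx.Commit(); // flush: the batched INSERTs hit the database now
}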
UPDATE ===========================================================================
This is a method I used in a specific case to make pure-SQL inserts while maintaining NHibernate compatibility.
// This will get the value and update the hi-lo value repository in the datastore.
public static void GenerateIdentifier(object target)
{
    var targetType = target.GetType();
    var classMapping = NHibernateSessionManager.Instance.Configuration.GetClassMapping(targetType);
    var impl = NHibernateSessionManager.Instance.GetSession().GetSessionImplementation();
    var newId = classMapping.Identifier
        .CreateIdentifierGenerator(impl.Factory.Dialect, classMapping.Table.Catalog,
                                   classMapping.Table.Schema, classMapping.RootClazz)
        .Generate(impl, target);
    classMapping.IdentifierProperty.GetSetter(targetType).Set(target, newId);
}
So this method takes your newly constructed entity, like this:
var myEnt = new MyEnt();   // has the default identifier
GenerateIdentifier(myEnt); // now has an identifier injected based on NHibernate's mapping
Note that this call does not place the entity in any kind of NHibernate-managed space, so you still have to keep track of your objects and save each one. Also note that I used this with pure-SQL inserts; unless you specify generator="assigned" in your entity mapping (which will then require some custom hi-lo generator), NHibernate may require a different mechanism to persist it.
All in all, what you want is to generate an id for an object that will be persisted at some time in the future. This brings up problems such as handling non-existent entries caused by rollbacks and failed commits. Additionally, IMO NHibernate is not the tool for this particular job: you don't need NHibernate to do your bulk insert unless there is some complex entity logic that is too costly (in dev time) to implement on your own.
Also note that you are implying that you need transient detached entities - which, however, cannot be used unless you call nhSes.Save(obj) on the first session and flush its contents, so that when the second session calls Load on the transient object there is an existing row in the database; that contradicts what you want to achieve.
IMO, don't be afraid of storming the database; just optimise the procedure top to bottom so it can handle the volume. Using NHibernate just to do an insert seems counter-productive when you can achieve the same result with four times the performance using plain ADO.NET or even an ISQLQuery-wrapped query (and use the method I provided above).

How can one delete an entity in nhibernate having only its id and type?

I am wondering how can one delete an entity having just its ID and type (as in mapping) using NHibernate 2.1?
If you are using lazy loading, Load only creates a proxy.
session.Delete(session.Load(type, id));
With NH 2.1 you can use HQL. I'm not sure exactly how it looks, but something like this - note that it is subject to SQL injection; if possible, use a parameterized query with SetParameter() instead:
session.Delete(string.Format("from {0} where id = {1}", type, id));
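A hedged sketch of the parameterized form (the entity name still has to be concatenated, since HQL cannot parameterize it, but the id no longer is; this relies on executable HQL, which I believe NH 2.1 introduced):
session.CreateQuery("delete from " + type + " where id = :id")
       .SetParameter("id", id)
       .ExecuteUpdate();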
Edit:
For Load, you don't need to know the name of the id column.
If you need to know it, you can get it from the NH metadata:
sessionFactory.GetClassMetadata(type).IdentifierPropertyName
Another edit.
session.Delete() instantiates the entity
When using session.Delete(), NH loads the entity anyway. At the beginning I didn't like that; then I realized the advantages. If the entity is part of a complex structure using inheritance, collections, or "any"-references, it is actually more efficient.
For instance, if classes A and B both inherit from Base, NH doesn't try to delete data in table B when the actual entity is of type A. This wouldn't be possible without loading the actual object. This is particularly important when there are many inherited types, each of which also consists of many additional tables.
The same situation arises when you have a collection of Bases that happen to all be instances of A. When loading the collection into memory, NH knows that it doesn't need to remove any B-stuff.
If entity A has a collection of Bs which contains Cs (and so on), it doesn't try to delete any Cs when the collection of Bs is empty. This is only possible when reading the collection. This is particularly important when C is complex in its own right, aggregating even more tables, and so on.
The more complex and dynamic the structure is, the more efficient it is to load the actual data instead of "blindly" deleting it.
HQL Deletes have pitfalls
HQL deletes do not load data into memory. But HQL deletes aren't that smart: they basically translate the entity name to the corresponding table name and remove the rows from the database. Additionally, they delete some aggregated collection data.
In simple structures this may work well and efficiently. In complex structures, not everything gets deleted, leading to constraint violations or "database memory leaks".
Conclusion
I also tried to optimize deletion with NH. I gave up in most cases, because NH is still smarter; it "just works" and is usually fast enough. One of the most complex deletion algorithms I wrote analyzed NH mapping definitions and built delete statements from them. And - no surprise - it was not possible without reading data from the database before deleting. (I just reduced it to loading only the primary keys.)