Dapper.Rainbow vs. Dapper.Contrib

Can someone please explain the difference between Dapper.Rainbow vs. Dapper.Contrib?
That is, when should you use SqlMapperExtensions.cs from Dapper.Contrib, and when should you use Dapper.Rainbow?

I’ve been using Dapper for a while now and have wondered what the Contrib and Rainbow projects were all about myself. After a bit of code review, here are my thoughts on their uses:
Dapper.Contrib
Contrib provides a set of extension methods on the IDbConnection interface for basic CRUD operations:
Get
Insert
Update
Delete
The key feature of Contrib is that it provides change tracking for your entities, to identify whether changes have been made.
For example, using the Get method with an interface as the type constraint will return a dynamically generated proxy class with an internal dictionary to track what properties have changed.
You can then use the Update method which will generate the SQL needed to only update those properties that have changed.
Major Caveat: to get the tracking goodness of Contrib, you must use an Interface as your type constraint to allow the proxy class to be generated.
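For illustration, here is a minimal sketch of that interface-based tracking; the ICar interface, Cars table, and connection string are hypothetical:

    using System.Data.SqlClient;
    using Dapper.Contrib.Extensions;

    // Tracking requires an interface as the type constraint.
    public interface ICar
    {
        int Id { get; set; }
        string Make { get; set; }
        string Model { get; set; }
    }

    public class CarRepository
    {
        public void RenameCar(string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                // Get<ICar> returns a generated proxy that records property changes.
                var car = connection.Get<ICar>(1);
                car.Model = "Falcon";

                // Update only includes the dirty properties (here, just Model) in the SQL.
                connection.Update(car);
            }
        }
    }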
Dapper.Rainbow
Rainbow is an Abstract class that you can use as a base class for your Dapper classes to provide basic CRUD operations:
Get
Insert
Update
Delete
As well as some commonly used methods such as First (gets the first record in a table) and All (gets all records in a table).
For all intents and purposes, Rainbow is basically a wrapper for your most commonly used database interactions and will build up the boring SQL based on property names and type constraints.
For example, with a Get operation, Rainbow will build up a vanilla SQL query and return all columns and then map those values back to the type used as the constraint.
Similarly, the insert/update methods will dynamically build up the SQL needed for an insert/update based on the type constraint's property names.
Major Caveat: Rainbow expects all your tables to have an identity column named “Id”.
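A minimal sketch of how Rainbow is typically wired up; the MyDatabase and Car classes are hypothetical, and the Cars table is assumed to have an identity column named Id:

    using System.Data.SqlClient;
    using Dapper;

    public class Car
    {
        public int Id { get; set; }
        public string Make { get; set; }
        public string Model { get; set; }
    }

    // Database<T> is Rainbow's abstract base class; each Table<T> property
    // maps to the table whose name matches the property name.
    public class MyDatabase : Database<MyDatabase>
    {
        public Table<Car> Cars { get; set; }
    }

    public class RainbowExample
    {
        public void Crud(string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                var db = MyDatabase.Init(connection, commandTimeout: 2);

                int id = db.Cars.Insert(new { Make = "Ford", Model = "Falcon" }).Value;
                var car = db.Cars.Get(id);                    // SELECT * FROM Cars WHERE Id = @id
                db.Cars.Update(id, new { Model = "Fiesta" }); // UPDATE built from property names
                db.Cars.Delete(id);
            }
        }
    }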
Differences?
The major difference between Contrib and Rainbow (IMO) is that one tracks changes to your entities and the other doesn't:
Use Contrib when you want to be able to track changes in your entities.
Use Rainbow when you want to use something more along the lines of a standard ADO.NET approach.
On a side note: I wish I had looked into Rainbow earlier as I have built up a very similar base class that I use with Dapper.
From the article @anthonyv quoted: That annoying INSERT problem, getting data into the DB
There are now 2 other APIs you can choose from as well (besides Rainbow) (for CRUD)
Dapper.Contrib and Dapper Extensions.
I do not think that one-size-fits-all. Depending on your problem and
preferences there may be an API that works best for you. I tried to
present some of the options. There is no blessed “best way” to solve
every problem in the world.
I suspect what Sam was trying to convey in the above quote and the related blog post was: Your scenario may require a lot of custom mapping (use vanilla Dapper), or it may need to track entity changes (use Contrib), or you may have common usage scenarios (use Rainbow) or you may want to use a combination of them all. Or not even use Dapper. YMMV.

This post by Adam Anderson describes the differences between several CRUD Dapper extension libraries:
Dapper.Contrib (automatic change tracking, but only whether the entity is dirty or not; attributes for custom mapping; no composite key support; no manual key support)
Dapper.Rainbow (manual change tracking using Snapshotter, sketched below this list; attributes for custom mapping; no composite key support; no manual key support)
Dapper Extensions (no change tracking; fluent config for custom mapping; supports composite keys; supports manual key specification); also includes a predicate system for simple queries (NOTE: deprecated, as it supports neither recent Dapper versions nor .NET Core)
Dapper SimpleCRUD (no change tracking; attributes for custom mapping; no composite key support; supports manual key specification); also includes filtering/paging helpers, async support, and automatic POCO class generation (through T4)
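To illustrate the Snapshotter line above, here is a hedged sketch reusing the hypothetical MyDatabase/Car classes from the Rainbow sketch earlier (Snapshotter ships alongside Dapper.Rainbow):

    using System.Data.SqlClient;
    using Dapper;

    public class SnapshotExample
    {
        // Reuses the hypothetical MyDatabase/Car classes from the Rainbow sketch above.
        public void UpdateOnlyChangedColumns(SqlConnection connection)
        {
            var db = MyDatabase.Init(connection, commandTimeout: 2);
            var car = db.Cars.Get(1);

            // Take a snapshot before editing the entity.
            var snapshot = Snapshotter.Start(car);
            car.Model = "Fiesta";

            // Diff() returns DynamicParameters holding only the changed properties,
            // so the generated UPDATE touches just those columns.
            db.Cars.Update(1, snapshot.Diff());
        }
    }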

Sam describes in detail what the difference is in his post - http://samsaffron.com/archive/2012/01/16/that-annoying-insert-problem-getting-data-into-the-db-using-dapper.
Basically, it's the usual "no one size fits all" answer, and it's up to you to decide which approach to go with based on your needs:
There are now 2 other APIs you can choose from as well (besides Rainbow) (for CRUD)
Dapper.Contrib and Dapper Extensions.
I do not think that one-size-fits-all. Depending on your problem and
preferences there may be an API that works best for you. I tried to
present some of the options. There is no blessed “best way” to solve
every problem in the world.

Related

FlatBuffer schema design for frameworks

I'm looking for advice on structuring FlatBuffer schemas for a framework which allows users to extend the data types defined by the framework, but also allows the framework developers to add new fields when new versions of the framework are published.
My original thinking was that when you create a project using this framework, it would generate several FlatBuffer schema files which you could then edit for your specific project. You could then compile the schemas and start developing code using the framework APIs.
However, this becomes a problem when the framework developers decide to add fields to the base types. As you probably know, FlatBuffers requires that any additional fields be appended to the end (or at least have a higher ID than other fields). So there is a conflict between the additions made by the framework developer and the framework user.
One possible solution would be to have a set of 'non-user-extensible' types that are owned by the framework creator, and which should not be modified by users of the framework; and these types would then be embedded within the data types defined by the framework user. However, given the restrictions on fields changing size, I am not sure if this would even work.
I'm also willing to hear alternatives to using flatbuffers if it turns out that there is no good solution otherwise.
To have open-ended extension like that, you should really have the framework authors and users work in two separate tables, where one can own the other. There is no good way to extend a single table if all contributors aren't sharing the schema in source control.
If these extensions must be in a single object for whatever reason, then Protocol Buffers is more flexible than FlatBuffers, since it doesn't require adjacent field ids. You can simply say that all field ids >=1000 are for framework users, for example.
In retrospect (answering my own question two years later), it seems that FlatBuffers was not the right choice for my use case. These days I'm using a combination of msgpack (in cases where I care about byte-size) and JSON (in cases where I don't) and I'm pretty happy with each.

Entity Framework - Schema Upgrade, Multiple DBMS, and Code First

I'm looking into using Microsoft's Entity Framework in an upcoming project which is a point release of an existing product. Our current product supports two DBMSes (Oracle and SQL Server); the schema of each is maintained in separate .sql script files.
The Entity Framework (4.1) looks appealing because it allows various scenarios to be implemented automatically via code generation, reflection, etc. However, as far as I can tell, some of these benefits appear to be mutually exclusive of one another.
For example, to support multiple DBMSes, I am inferring that I would need to use a model-first or code-first design, in which case EF would generate the schema for each according to the model (I have seen little to no posts or documentation on this, so I may be wrong). This means that our existing schema would need to be either abandoned (model-first) or mapped (code-first). Additionally, updating the schema would require manual scripts, as EF does not appear to support schema upgrades (without wiping out data).
Are model-first and code-first the only viable means of supporting multiple DBMSes in EF? I realize that technically it would be impossible to guarantee that two arbitrary schemas are the same, so I am thinking this is true.
Are there any potential pitfalls of code-first and mapping to multiple DBMS systems? For example, Oracle does not have auto-increment columns; you have to use sequences. How is this mapped in the DbContext? Do I need to create separate maps for each DBMS?
Does EF support any mechanism to upgrade an existing DBMS schema to one that is representative of the EF model (schema recreation =/= upgrade), or am I limited to doing this manually?
I did come up with one possible way to use database-first and support multiple DBMSes, but it is a maintenance nightmare. The idea was to add another layer of abstraction over the two generated data models and create converter classes for each of the EF-generated models. This seems like the best way of doing it so that each DBMS could potentially have its own model, yet my code would handle the mapping. But in doing this, what am I really gaining from EF? Maybe query generation, but is that worth it?
Actually, both model-first and database-first have the same constraints. Both approaches use an EDMX file which contains an SSDL part (a description of the store = the database layer) related directly to a single database provider, so if you want to have two different database providers you must have two different SSDL parts and keep them in sync. You can use a single CSDL (a description of the conceptual layer = your model classes) and a single MSL or two MSLs (a description of the mapping between SSDL and CSDL; a single file is possible only if tables and columns have exactly the same names in both SSDLs). As far as I know, an EDMX file can consist of only single SSDL, CSDL and MSL parts, so I expect that the designer has no support for this scenario and you will have to modify the second SSDL manually or use two EDMXs = model each change twice.
The code-first approach can make this much simpler, but the question is how good the Oracle provider is when using code-first and database generation. The provider is responsible for correctly interpreting needed features like sequences in the case of auto-increment columns.
EF itself currently has no support for upgrading an existing DB. When using EDMX, the process of database generation is controlled either by a T4 template or a Workflow, so it can be customized, and there is already a separate feature called Entity Designer Database Generation Power Pack which allows incremental building of the database with the model-first approach. The problem is that this feature uses the VS Database tools, and I think those tools work only with SQL Server. I have never liked these automated tools, so I still think that a database upgrade should be controlled manually with the help of some tool to get a difference script between the current and the last deployed database versions. You only need a diff script when deploying the new version to a production environment; in testing and development environments you can always recreate the whole database.
There should be no abstraction needed when working with two EDMX models. The models must produce the same conceptual layer. In that case you need only a single set of POCO classes which are mapped by conventions (same class name as the entity, same properties with the same types and accessibility), so they will work with both models.
Edit:
Based on @Tridus's answer, I'm just adding that you can create the databases first and use the fluent API from EF 4.1 to map them. Your databases must have exactly the same schema (table names, column names, etc.), and they can't use any provider-specific features (I hope sequences will not be a problem, because they are just the way Oracle handles auto-increment columns).
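As a minimal sketch of that approach (the Order entity, table, and column names here are hypothetical), an EF 4.1 code-first mapping onto an existing table could look like this:

    using System.Data.Entity;

    public class Order
    {
        public int Id { get; set; }
        public string Number { get; set; }
    }

    public class MyContext : DbContext
    {
        public DbSet<Order> Orders { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // Map onto the existing table and columns; for a single mapping to
            // cover both providers, the names must match in SQL Server and Oracle.
            modelBuilder.Entity<Order>().ToTable("ORDERS");
            modelBuilder.Entity<Order>().HasKey(o => o.Id);
            modelBuilder.Entity<Order>().Property(o => o.Number).HasColumnName("ORDER_NUMBER");
        }
    }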
This is actually fairly doable with a database-first design, but there are some caveats you won't be able to get around easily due to how the databases handle things differently.
Sequences are one (in that they're just ignored by EF entirely). You can fake that in Oracle by putting a trigger on the table that populates the column on insert, but I also found that if you have to update the model later, EF "forgets" that the column is an identity column and tries to stick a 0 in it again. I also found it unreliable in Oracle to try to get the new ID when using a trigger. We just wound up selecting from the sequence and setting the ID on the object before doing the insert, because that's how you usually do it in Oracle. You could also use a stored procedure that handles it.
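A hedged sketch of that select-from-the-sequence workaround, reusing the hypothetical MyContext/Order from the sketch above (the ORDERS_SEQ name is made up, and the key must be configured as not database-generated):

    using System.Linq;

    public class OracleInsertExample
    {
        public void InsertWithManualSequence(MyContext context)
        {
            var order = new Order { Number = "INV-001" };

            // Fetch the next value from the Oracle sequence ourselves...
            order.Id = (int)context.Database
                .SqlQuery<decimal>("SELECT ORDERS_SEQ.NEXTVAL FROM DUAL")
                .Single();

            // ...then insert with the ID already set, sidestepping identity handling.
            context.Orders.Add(order);
            context.SaveChanges();
        }
    }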
Numbers aren't handled the same way. SQL Server uses number formats that map to Int32, Int64, etc. Oracle's number format is totally different, and a full-range Int32 in SQL Server is a Number(10,0) in Oracle... which is actually an Int64 in EF because it's bigger than an Int32. I also found that Oracle's EF provider likes to use Decimal a lot even when it doesn't have to, but that's probably just a beta issue.
Stored Procedures in Oracle require some values to be put in app.config/web.config in order to work in EF. I'm not sure if that's going to just be clutter in SQL Server or if it'll cause problems.
Finally, EF Code First is pretty immature, and according to the docs it doesn't support changing the database structure in this version. I'm not sure if Oracle's provider supports it either (it might; I haven't tried it).
Most of this is stuff you can get around, but you're going to need to do some work to hide the differences from the rest of your code and it'll probably take a wrapper layer to do it.
edit - Regarding your #4: EF 4.1 can generate partial POCO classes. Instead of writing a wrapper around each of the generated models to hide any differences, you can create another partial class code file that won't be regenerated when you update the model, and then add properties/methods that hide the differences. Your app code would just have to be aware to use those instead, and they'd handle the issue (like the number issue I mentioned: you could completely hide it with another property that does the necessary casting for Oracle).
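A sketch of that partial-class trick; the OrderLine class and QuantityRaw property are hypothetical, standing in for a generated decimal property that Oracle's Number type forced on the model:

    // The generated half of OrderLine (regenerated on model update) lives in
    // another file and declares: public decimal QuantityRaw { get; set; }
    // This hand-written half survives regeneration.
    public partial class OrderLine
    {
        // Expose the Int32 view the app uses, hiding the Oracle decimal mapping.
        public int Quantity
        {
            get { return (int)QuantityRaw; }
            set { QuantityRaw = value; }
        }
    }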

Same business entity for identical tables?

I've got a legacy database which has about 10 identical tables (only the names differ).
Is it possible to use the same business entity for all tables without having to create several classes/mapping files?
You can use the entity-name feature if you are using NHibernate v2.1 or higher. It is poorly documented but I am actively using the feature. It has gotten hard to find the documentation on it but look here:
Section 5.3 in
http://docs.jboss.org/hibernate/core/3.2/reference/en/html/mapping.html#mapping-entityname
A couple of things to be aware of: you must now use the entity-name instead of the class name to refer to the objects, and in general moving from class names to entity names is not an entirely transparent change.
Session actions now require two parameters, for example:
_session.Save("MyEntity", myobject)
The entity-name controls what table the data goes into.
Some HQL queries do not work right anymore; sometimes you must use Criteria instead.
If you need a set of sample code I may be able to post some, but I'm far too busy at the moment. I suggest you look at the limited info you can find and set it up for a very simple object and multiple tables to learn how it all works. It does work.
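A small sketch of what that looks like in practice; the Thing class, entity names, and mapping are hypothetical:

    using NHibernate;

    public class Thing
    {
        public virtual int Id { get; set; }
        public virtual string Name { get; set; }
    }

    public class EntityNameExample
    {
        // Assumes the hbm.xml maps the same class twice with different entity-names:
        //   <class entity-name="ThingA" name="Thing" table="TableA">...</class>
        //   <class entity-name="ThingB" name="Thing" table="TableB">...</class>
        public void SaveToBothTables(ISession session)
        {
            // The entity-name argument, not the CLR type, selects the target table.
            session.Save("ThingA", new Thing { Name = "row for TableA" });
            session.Save("ThingB", new Thing { Name = "row for TableB" });

            // Reads must use the entity-name as well.
            var fromB = (Thing)session.Get("ThingB", 1);
        }
    }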
You can create a base class with all the properties, but you still need to map them all.
For that, you can either use copy&paste, XML entities (see the example at http://nhibernate.info/doc/nh/en/index.html#inheritance-tableperconcreate-polymorphism), or a code-based mapping method (Fluent NHibernate or ConfORM); these usually make reuse easier.
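For example, with Fluent NHibernate you could factor the shared mapping into a generic base ClassMap and vary only the table name; a sketch with hypothetical names:

    using FluentNHibernate.Mapping;

    public abstract class LegacyEntity
    {
        public virtual int Id { get; set; }
        public virtual string Name { get; set; }
    }

    public class Customer1 : LegacyEntity { }
    public class Customer2 : LegacyEntity { }

    // The mapping is written once and reused for each identical table.
    public abstract class LegacyEntityMap<T> : ClassMap<T> where T : LegacyEntity
    {
        protected LegacyEntityMap(string table)
        {
            Table(table);
            Id(x => x.Id);
            Map(x => x.Name);
        }
    }

    public class Customer1Map : LegacyEntityMap<Customer1>
    {
        public Customer1Map() : base("Customers1") { }
    }

    public class Customer2Map : LegacyEntityMap<Customer2>
    {
        public Customer2Map() : base("Customers2") { }
    }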

Can Identity Generators be used for other columns with NHibernate

I'm creating an invoicing feature and I want to use a counter for the invoice number, but instead of writing an implementation I was wondering if it is possible to use an existing identity generator in NHibernate for this property?
Although there are many identity generators for NHibernate, Ayende (i.e. Mr NHibernate himself) advises against it:
Before I start, I wanted to explain that NHibernate fully support the identity generator, and you can work with it easily and without pain.
There are, however, implications of using the identity generator in your system. Tuna does a great job in detailing them. The most common issue that you’ll run into is that identity breaks the notion of unit of work. When we use an identity, we have to insert the value to the database as soon as we get it, instead of deferring to a later time. It also render batching useless.
And, just to put some additional icing on the cake. On SQL 2005 and SQL 2008, identity is broken.
I know that “select ain’t broken” most of the time, but this time, it appears it does :-)
We strongly recommend using some other generator strategy, such as guid.comb (similar to new sequential id) or HiLo (which also generates human readable values).
Technically, yes, you can. They're all public classes living in the NHibernate.Id namespace, so you can instantiate and use any of them whenever you want.
In practice though, it depends on which one you want to use. Some of them are fairly simple and don't require any configuration or dependencies, like CounterGenerator, GuidCombGenerator or UUIDStringGenerator. Others need the session, like NativeGuidGenerator. Others need to be configured before they can be used, like SequenceHiLoGenerator.
I don't think NHibernate supports using generators other than in ids and idbags, so wiring one up for another column is entirely up to you.
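As a sketch of that direct-use option with one of the dependency-free generators (note that guid.comb is not the human-readable counter an invoice number usually wants; the class below is illustrative):

    using System;
    using NHibernate.Id;

    public class InvoiceIdSketch
    {
        public Guid NextId()
        {
            // GuidCombGenerator needs no configuration or session;
            // this implementation ignores both Generate arguments.
            var generator = new GuidCombGenerator();
            return (Guid)generator.Generate(null, null);
        }
    }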

Where is the api reference for nhibernate?

I may be going mental, but I cannot find any API reference material for NHibernate. I've found plenty of manuals, tutorials, ebooks, etc., but no API reference. I saw the chm file on the NHibernate SourceForge page, but it doesn't seem to work on any of my PCs (different OSes).
Can someone please point me in the right direction?
I just found this one:
http://web.archive.org/web/20141001063046/http://elliottjorgensen.com/nhibernate-api-ref/index.html
It doesn't seem to be official, but at least it looks like an API reference... unlike the official reference, which mostly describes concepts and mappings without any information about classes and members.
If you're on Windows, get ILSpy and point it at NHibernate.dll. It's not quite the same as real API documentation, but it's not half bad.
There is no class reference publicly available on the Internet as far as I know. You may build it from the source: clone it, build the NHibernate.sln solution, then go into the doc folder, ensure you have the prerequisites indicated in the reference\readme.txt file, and run nant doc. This will generate the class reference in the build folder.
Otherwise, the most commonly used APIs are not wide, and most of them are XML-documented, with IntelliSense working in Visual Studio. The reference documentation has the advantage of giving more context, probably helping avoid pitfalls like believing ISession.Update is to be used for updating entities (this is wrong; you do not need it unless you use detached entities, or entities coming from another session).
Official documentation reference is on https://nhibernate.info.
Sub-links:
Global documentation list
Reference (what I mostly use, especially the following sub-parts).
Configuration
Mapping - basic / entities. (Add the mapping xsd definition file to any of your solution folders to let VS know about it and give you IntelliSense in your hbm mappings.)
Mapping - collections
Querying - general. Do not miss the named queries feature in The IQuery interface.
Querying APIs:
HQL. I mostly use HQL with named queries, in mappings, for queries that are not dynamically built; see the sketch after this list. They get parsed and validated when building the session factory, which normally occurs at application startup, so it is almost as good as compile-time validation. Check the log4net logs to get detailed reasons for named query parsing failures.
Criteria API. I view it as the historical way of dynamically building queries in code, to be preferred over constructing HQL strings.
QueryOver API. Based on the Criteria API, with lambda expression support for compile-time validation of queried entity names. Should be preferred over the Criteria API in my opinion.
Linq API. Great for dynamically built queries. Bear in mind that its implementation translates your queries to HQL; with complex queries, it may generate unsupported HQL constructs. Knowing HQL's capabilities allows a better understanding of how to write a supported Linq query for complex cases. (For example, for a complex order by, better to use an explicit Linq sub-query in the OrderBy rather than a collection mapped on your queried entity.)
Native SQL. Well, quite self-explanatory. To be used, for example, when you need some SQL feature not available through the other querying APIs (SQL Server full-text, select for xml, ...) and you do not wish to extend those other APIs. You may also call stored procedures. When using native SQL, I favor SQL named queries.
Modifying data, from Updating objects to Flush, and Exception handling.
Performances.
Batch fetching. About this, you may read my post here for a detailed explanation of why lazy loading can be very efficient with NHibernate, thanks to batch fetching. This single feature will always cause me to prefer NHibernate over Entity Framework, until EF stops lacking it.
Second level cache. Another great NHibernate feature lacking native support in EF. Beware: you must use transactions to leverage this. It allows NHibernate to automatically evict cached entries for you as you change data through your application process. Without transactions, NHibernate will disable the second-level cache as soon as you start changing data, to avoid letting the cache yield stale data.
Interceptors. This is one way among many of customizing NHibernate's inner workings. NHibernate is very strong at allowing you to extend it. You may also add your own HQL extensions as here, or your own linq2NH extension as here (all are answers from me). And there are other ways; see this list for linq2NH extensibility solutions.
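For instance, the named-query usage mentioned in the HQL item above looks roughly like this (the query name and User entity are hypothetical):

    using System.Collections.Generic;
    using NHibernate;

    public class NamedQueryExample
    {
        // Assumes a named query declared in an hbm.xml mapping file:
        //   <query name="ActiveUsers">from User u where u.IsActive = :active</query>
        // It is parsed and validated when the session factory is built.
        public IList<User> GetActiveUsers(ISessionFactory factory)
        {
            using (var session = factory.OpenSession())
            {
                return session.GetNamedQuery("ActiveUsers")
                    .SetBoolean("active", true)
                    .List<User>();
            }
        }
    }

    public class User
    {
        public virtual int Id { get; set; }
        public virtual bool IsActive { get; set; }
    }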
Moreover, a class reference would very likely be close to the Hibernate one: there are so many internal APIs supporting its implementation that it is not very usable as documentation.
Why are such APIs not hidden (internal, private, ...)? Not hiding them is required to allow the great extensibility capabilities of NHibernate. Those capabilities are a must-have in my opinion. In contrast, it is very hard to fix some other .NET projects' shortcomings due to the lack of extensibility they suffer from. (MVC FileResult and the TweakDispositionAsInline I had to use instead of just being able to override some method, or trying to extend linq-to-entities; see this.)
There is a good book that covers a lot, and there is the HTML documentation on the site (which also comes as a book).
(The book would be Manning's NHibernate in Action; a little outdated, but a good start.)
Here is the link to the online reference