Making changes to Nsb Message and SagaData classes - nservicebus

Both our Message and our SagaData classes contain properties that are defined in our solution's central Model project. We're now in the process of refactoring our solution so that there will be a specific project where the properties of our NServiceBus classes are defined. We're doing this to hopefully decouple the Nsb layer from the rest of the application, and to avoid unnecessary pollution of our Nsb classes as the solution's Model project changes.
The Nsb specific Model (Nsb.Model) project will closely mirror the central Model project, and AutoMapper will take care of mapping our objects from Nsb.Model <-> Model.
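For illustration, a minimal sketch of the intended round-trip mapping, assuming hypothetical CustomerInfo classes that exist in both projects (the names and the modelCustomer variable are made up; the mirrored properties map by convention):

using AutoMapper;

var config = new MapperConfiguration(cfg =>
{
    // Mirrored types map by convention; ReverseMap covers Nsb.Model <-> Model in both directions.
    cfg.CreateMap<Model.CustomerInfo, Nsb.Model.CustomerInfo>().ReverseMap();
});
var mapper = config.CreateMapper();

var nsbCustomer = mapper.Map<Nsb.Model.CustomerInfo>(modelCustomer);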
I think we don't need to be too worried about refactoring our Message classes, as it should be safe enough to simply deploy this change when there are no in-flight messages (we'll have plenty of opportunities to do this).
I'm more worried about our Saga and SagaData classes. There are always going to be some Sagas running (mostly dormant, waiting for Timeouts), and I'm worried that issues could come up with already running Sagas when we make changes to the SagaData class. The changes to the SagaData classes basically amount to referencing a new assembly (Nsb.Model) which has all the same classes as the old assembly (Model). One of the classes has been renamed in the new assembly; other than that they're all identical to the old ones.
We're using NHibernate as our persistence. I've tried single Sagas on our testing environments, deploying the changes while the Saga waits for a Timeout, and it looks like it basically has no issues with the updated assembly nor with the renamed class of one of its properties. However, I'm reluctant to deploy this to production without fully understanding what effects this could have and whether our application will stay healthy once this gets deployed.

NServiceBus uses NHibernate to create the schemas that represent the SagaData class. You can either rely on NHibernate trying to modify the current schema, or write migration scripts yourself.
For example, adding a property will result in an additional column that will be created by NHibernate. That column will have no value for all the saga data that is already present. Removing a property will remove the column and the data will be lost.
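To make that concrete, here is a minimal sketch assuming a hypothetical OrderSagaData class persisted via the NHibernate persistence; adding the last property would result in a new nullable column, which is NULL for saga data already in the table:

using System;
using NServiceBus;

public class OrderSagaData : ContainSagaData
{
    public virtual Guid OrderId { get; set; }                   // existing column
    public virtual string CustomerEmail { get; set; }           // existing column
    public virtual DateTime? LastReminderSentUtc { get; set; }  // newly added property -> new column
}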
Modifying a complex object in a collection will cause difficulties. The best way to know whether this works for your project is to actually perform the upgrade and verify it during development and in a test environment.
I suspect you've been running for a while already; otherwise, our SQL persister (which doesn't use an OR/M) stores JSON-serialized objects inside a single column and relies on the flexibility of the serializer to migrate from version to version. Our customers have had much better results with that than with NHibernate.
But as I said, an option is to look at the before and after states and write the migration scripts yourself. With complex changes, this might be the better alternative.


How to do Object–Relational mapping in ABAP?

I'm currently trying to port an application using Hibernate to ABAP.
So the short version is: I have (at least) two tables, let's say Entity(entity_id, ...) and SubEntity(sub_entity_id, entity_id).
In ABAP OO I'm representing these entities as classes, like zcl_app_entity. Now I'm wondering how I could use ABAP to persist these entities and relationships.
I've use-cases like:
Lock entity, and then add subentity
Get all subEntities and send them via http as json
In Java with JPA I'd do something like
Entity entity = userRepository.findById(entityId);
entity.lock(); // granted, this mechanism would be on DB level here, while ABAP needs ABAP locks
entity.getSubEntities().add(address);
There's a session with UnitOfWork automatically with the repository call. But as far as I'm aware ABAP doesn't offer a Repository pattern which automatically transforms classes to managed entities.
I could of course add INSERT etc directly into the add calls, or create a load / persist method on every class. But then I lose the testability.
I could create repositories myself, passing in the objects. But then my objects are repository-aware themselves (addAddress would call the repository).
Another way would be to call the repository in the service class and then pass that object to the add method after it's persisted. Quite error-prone.
Also, lazy loading of e.g. xstrings (around 50 MB) would be great; that won't work when the object has no access to the repository / SQL interface to load on demand, though.
I'd be super surprised if there isn't something like this (JPA/Hibernate), since these are common patterns.
IEntityRepository, ISubEntityRepository, IMyService (calls repo interfaces)
All calls make objects managed, maybe with OneToMany etc. relationships, rollbacks, lazy loading.
Weirdly, the most ABAP-like way I found is to have some logic in classes (e.g. entity->lock( ), entity->add_subentity( xyz )) but then just use an SQL persistence interface to get all data and return some structures. There wouldn't really be OO relationships; at most a class would be used as a short-lived driver of a struct. But when I'd say "update all sub_entities" it would be more like data_provider->get_sub_entity( entity_id ), which returns an internal table, and then the requestor has to persist it again if required: data_provider->update_sub_entity_status( entity_id, 'R' ).
So how do I use Object–Relational mapping in ABAP, e.g. when I want to update the status of all sub_entities of entity X to 'DONE' while keeping it testable?
You can take a look at the basic ABAP persistence service.
In SE24, when you create a class, you can mark it as "persistent".
E.g. SFLIGHT.
https://blogs.sap.com/2012/04/18/abap-persistent-object-services-demystified/#:~:text=The%20Persistence%20Object%20Services%20can,again%20when%20you%20need%20them.
There is a generated example on every system: CL_SPFLI_PERSISTENT.
If you have used ORMs in other languages like C#, this will be a very disappointing experience.
This toolset began around 20 years ago, but offers a questionable return on investment if you ask me.
That is quite apart from the fact that the approach doesn't conform to traditional OO principles.
When this first came out, SAP already had 3GL code updating traditional tables. Most developers, even SAP-internal ones, had no clear idea how to implement an ABAP OO model.
90% of the code SAP delivered didn't use this type of model. So there were no good examples to base your own work on.
Unfortunately it never took off and was never extended to have the functionality expected in a true ORM persistence tool.
I don't recall seeing anything inside the toolset that manages things like cascade delete, nor a proper class relationship model.
Please correct me if I missed that.
If you ask me, SAP generated a class with GET and SET methods and a PERSIST method, and that was where it stopped.
Actually using classes rather than DDIC structures as the model, and implementing things like cascade delete, remain outstanding.
If you google SAP and ORM you will see a JavaScript tool using Hibernate and HANA as the DB, not an ABAP-based tool.
Or you will see ORM meaning Operational Risk Management.
That pretty much says it all. The ABAP layer has a toy class generator but no true ORM tool like Entity Framework on .NET.

What is the path forward for changetracking complex entity relationships if Self Tracked Entities are not recommended anymore?

I have been using EF since it first came out. I used to hand-build POCOs in 3.5 and was glad to see Self-Tracking Entities (STEs) in EF 4.0.
I have used STEs in a couple of very large projects (500+ entities, some with multiple models). In these projects I use a generic Repository and a generic Unit of Work to persist the entities, i.e. two small generic classes, no mapping. By electing a core entity as the "aggregate root", other entities are added and updated on the client side, and the core entity graph containing these changes is sent to the WCF service and used in the Logic Layer, which creates the Repository<[core entity]> and uses UnitOfWork<[core entity]>.Save(Repository<[core entity]>) to persist the STEs and their children to the database.
Now Microsoft is recommending that we not use STEs. See this article
So my question is, What is(are) the patterns that are now recommended by Microsoft for applications that are persisting client changes to WCF Services that use EF?
I created an EF5 model and examined the generated code. There are no attributes for a WCF service, i.e. DataContract, DataMember, etc.
EF4 had an "ADO.NET DbContext Generator with WCF Support" template, but there isn't an EF5 equivalent.
One site suggested I should use a partial class file and decorate the same properties in that file with these attributes. But unless .NET 4.5 has introduced partial properties, I cannot see how that can be done.
Another blog suggested using DTOs and AutoMapper, which means more mapping, which is error-prone, especially when entity fields change type.
So now that the DbContext-generated classes are not service-enabled, does this mean that we need to write another set of classes (POCOs) that:
needs to be mapped FROM the DBContext generated code classes after querying the database.
holds the data state for the WCF Service client(s)
is updatable by that client(s)
is mapped by the client(s)
has the ability to hold changed state so this can be sent back to the WCF Service
needs to be mapped TO the DBContext generated code classes for persistence
It seems we just took a great leap backwards to EF3.
If you code both the client and the service, and they run on your hardware, you don't need to be concerned about data structures at the client, as they belong to you.
If you also need to expose some of your service methods to non-.NET clients, you should do the 5 points above for those services anyway and use DTOs and AutoMapper on those occasions. These should be in a different WCF service but implemented against the same Logic Layer, after mapping.
But how many of these kinds of non-.NET client services are created in the day-to-day building of web applications in most software teams?
This latest recommendation is confusing, as it has not been explained WHY STEs are ALWAYS ill-conceived, nor what the recommended patterns now are for persisting client changes to WCF services that use EF.
Can anybody inform me where I can find a good resource that solves this architectural design issue?
P.S.
Please don't recommend WCF Data Services or WCF RIA, as we need a lot of control over how our data is retrieved and saved by clients.
Please don't recommend Code First: we use Database First because we want and need to control the structure of that database, not have it generated for us.
OK, so I thought the same thing when I first read this article; it seems a bit weird to deprecate a whole branch of EF like this, and the intention wasn't terribly well communicated (IMO). I think a couple of things are important here:
STEs as referred to in this article are the ObjectContext-based self-tracking entities (which act a little like autonomous contexts)
ObjectContext is generally being moved away from in favor of the cleaner DbContext structure (this is for both DB first and Code First)
STEs != DB first generation, you can still use an EDMX model in EF and this isn't likely to change.
When I originally saw this article I mistook STEs for POCO proxy entities, which are still available and which AFAIK there are no plans to deprecate. (These achieve a similar technical solution to the problem of change detection but with a nicer interface.) Check out this article for the differences: EF4: Difference between POCO, Self Tracking Entities, POCO Proxies
So what does this all mean?
Basically, STEs in the sense of the old change-tracker implementation are being deprecated in favor of the newer forms of change tracking (snapshot or POCO proxies). This means that if snapshot tracking doesn't suit you, you should look into POCO proxies, which are similar to the old STEs.
You can still use all previous techniques for context generation (DB First, Model First, Code First, and DB-> Code)
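For reference, a minimal sketch of what a proxy-friendly entity looks like (Customer and Order are hypothetical): the class is public and unsealed, and every mapped property is virtual so EF can wrap it in a change-tracking proxy.

using System.Collections.Generic;

public class Customer
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }

    // Virtual navigation property, so the proxy can also handle lazy loading and fixup.
    public virtual ICollection<Order> Orders { get; set; }
}

public class Order
{
    public virtual int Id { get; set; }
    public virtual Customer Customer { get; set; }
}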

Does adding PetaPoco attributes to POCO's have any negative side effects?

Our current application uses a smart object style for working with the database. We are looking at the feasibility of moving to PetaPoco instead. Looking over the features I notice you can add attributes to make it easier to CRUD objects. Does adding these attributes have any negative side effects that I should be aware of?
Has anyone found a reason NOT to use these decorators?
Directly to the use of the POCO object instance itself? None.
At least not that I would be aware of. Jon Skeet should be able to provide more info because he knows compiler inner workings through and through, so he knows exactly what happens with this metadata after it's been compiled.
Other implications indirectly related to these
There are of course implications when accessing these declarative attributes, because they're read using reflection which is normally a slow process.
But there's nothing to worry about here, because PetaPoco is a smart library: it reads these only once, then compiles and caches the results, so you only get penalized once and get blazing performance afterwards, because it uses compiled code.
Non-performance related implications
By putting attributes (any attributes) on your classes/properties/methods you bind your code to the particular engine that will use the class, because the attributes are directives for that engine to understand your code.
In the case of PetaPoco attributes this means that your class can be used with PetaPoco but not with some other DAL (e.g. EF) unless you add that DAL's attributes as well (EF Code First uses the very same approach with attributes).
The second implication relates to the back-end database. If you rename a table, a column, or any other part that is provided in a PetaPoco attribute as a constant magic string, you will have to change this string as well. This just means that you have to be thorough when making database changes...
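As an illustration, a minimal sketch of such a decorated POCO (the class, table, and column names are made up); the table and primary-key names are exactly the kind of magic strings that must be kept in sync with the database:

using PetaPoco;

[TableName("Customers")]      // must match the physical table name
[PrimaryKey("CustomerId")]    // must match the primary-key column
public class Customer
{
    public int CustomerId { get; set; }
    public string Name { get; set; }

    [Ignore]                  // computed; never read from or written to the database
    public string DisplayName { get { return (Name ?? "").Trim(); } }
}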
One downside is that it breaks the separation between the "domain" layer and the "data" layer, since it introduces the PetaPoco file (which contains data logic) to domain classes that should really not have any knowledge or dependency on the data layer.
If you're doing a single-project MVC app or something, then it's okay to just use the Models directory for both, but for non-trivial, separated apps you'll have to have two PetaPoco files, or play around with abstracting portions of the file so you can annotate your models without making them "know too much" about the underlying data, or else specify the table and/or primary-key name all over the place.

Implementing repositories using NHibernate and Spring.Net

I'm trying to get to grips with NHibernate, Fluent NHibernate and Spring.
Following domain-driven design principles, I'm writing a standard tiered web application composed of:
a presentation tier (ASP.Net)
a business tier, comprising:
an application tier (basically a set of methods made visible to the UI tier)
repository interfaces and domain components (used by the application tier)
a persistence tier (basically the implementation of the repository interfaces defined in the business tier)
I would like help determining a way of instantiating an NHibernate ISession in such a way that it can be shared by multiple repositories over the lifetime of a single request to the business tier. Specifically, I would like to:
allow the ISession instance and any transaction to be controlled outside the repository implementation (perhaps by some aspect of the IoC framework, an interceptor?)
allow the ISession instance to be available to the repositories in a test-friendly manner (perhaps via injection or through some shared 'context' abstraction)
avoid any unnecessary transactions being created (i.e. when only read-only operations have been executed)
allow me to write tests that use SQLite
allow me to use Fluent NHibernate
allow the repository implementation to remain ignorant of the host environment. I don't yet know if the businese tier will run in-process with the presentation tier or will be hosted separately under WCF (in IIS), so I don't want to bind my code too closely to a HTTP context (for example).
My first attempt to solve this problem was to use the Registry pattern, storing the ISession instance in a ThreadStatic property. However, subsequent reading has suggested that isn't the best solution (as ASP.Net can switch the thread within the page lifecycle, I believe).
Any thoughts, part solutions, pattern names, pointers to up-to-date samples (NHibernate 2) will be most gratefully received.
I have not used Spring.NET so I can't comment on that. However, the rest sounds remarkably (or perhaps not so remarkably; we're hardly the first to implement these things ;) similar to my own experience. I too had trouble finding a One True Best Practice so I just read as much as I could and came up with my own interpretation.
In my situation I wanted transaction/session management to be external to the repository, as well as to keep repository concerns from bubbling up out of them (i.e. the code using the repository should not need to know that it uses NHibernate internally and shouldn't need to know anything about NHibernate session management). In my case it was decided that transactions would be created by default lest developers forget them, so I had to have a read-only escape mechanism. I went with the Unit of Work pattern with the NHibernate ISession instance stored inside. Calling code (I also created a DSL interface for the UoW) might look something like:
using (var uow = UoW.Start().ReadOnly().WithHttpContext()
    .InNewScope().WithScopeContext(ScopeContextProvider.For<CRMModel>()))
{
    // Repository access
}
In practice, that could be as short as UoW.Start(), depending on how much context is already available. The HttpContext part refers to the storage location for the UoW, which is, unsurprisingly, the HttpContext in this case. As you mentioned, for an ASP.NET application HttpContext is the safest place to store things. ScopeContextProvider basically makes sure the right data context is provided for the UoW (the ISession instance for the appropriate database/server, other settings). The "ScopeContext" concept also makes it easy to insert a "test" scope context.
Going this route makes the repositories explicitly dependent on the UoW interface. Actually, you might be able to abstract it some but I'm not sure I see the benefit. What I mean is, each repository method retrieves the current UoW instance and then pulls out the ISession object (or simply a SqlConnection for those methods that don't use NHibernate) to run the NHibernate query/operation. This works for me though because it also seems like the ideal time to make sure that the current UoW is not read-only for methods that might need to run CRUD.
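To make that concrete, here is a rough sketch of what such a repository method might look like; UoW.Current, AssertNotReadOnly, and the Customer entity are hypothetical names from my own wrapper, not NHibernate APIs:

using System;
using NHibernate;

public class CustomerRepository
{
    public Customer GetById(Guid id)
    {
        // The repository never opens or closes sessions itself; it asks the
        // ambient unit of work for the session started by the calling code.
        ISession session = UoW.Current.Session;
        return session.Get<Customer>(id);
    }

    public void Save(Customer customer)
    {
        var uow = UoW.Current;
        uow.AssertNotReadOnly();               // guard: CRUD requires a writable UoW
        uow.Session.SaveOrUpdate(customer);
    }
}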
Overall, I think this is one approach that solves all your points:
Allows session management to be external to the repository
ISession context can be mocked or pointed at a context provider for a test environment
Avoids unnecessary transactions (well, you'd have to invert what I did and have a .Transactional() call or something)
I can't see why you couldn't test with SQLite since that's more of an NHibernate concern
I use Fluent NHibernate myself
Allows the repository to be ignorant of the host environment (that is, the repository caller controls the UoW storage context)
As for the UoW implementation, I'm partially kicking myself for not looking around more before I started. There's a project called machine.uow which I understand is fairly popular and works well with NHibernate. I haven't played with it much so I can't say if it solves all my requirements as neatly as the one I wrote myself, but it might have saved development time as well.
Perhaps we'll get some comments as to where I went wrong or how to improve things, but I hope this is at least helpful in some way.
For reference, the software stack I'm using is:
ASP.NET MVC
Fluent NHibernate on top of NHibernate
Ninject for dependency injection
What you are describing is supported by the Spring.NET framework almost out of the box. Only for Fluent NHibernate do you need to add a custom SessionFactory to Spring.NET (not a lot of code; see: Using Fluent NHibernate in Spring.NET).
Every repository can use the same ISession: just inject the SessionFactory into your repositories and use Spring.NET's transaction services.
Just try it out; they have pretty thorough documentation, IMHO.
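A minimal sketch of that shape, assuming a hypothetical CustomerRepository and Customer entity; the session factory is injected by the container and the surrounding transaction is managed outside the repository by Spring.NET's transaction services:

using System;
using NHibernate;

public class CustomerRepository
{
    private readonly ISessionFactory _sessionFactory;

    // The (Fluent-)configured session factory is injected by the container.
    public CustomerRepository(ISessionFactory sessionFactory)
    {
        _sessionFactory = sessionFactory;
    }

    public Customer GetById(Guid id)
    {
        // GetCurrentSession returns the session bound to the current
        // transaction/request scope, which is controlled outside the repository.
        return _sessionFactory.GetCurrentSession().Get<Customer>(id);
    }
}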

NHibernate, DTOs and NonUniqueObjectException

We're using the DTO pattern to marshal our domain objects from the service layer into our repository, and then down to the database via NHibernate.
I've run into an issue whereby I pull a DTO out of the repository (e.g. CustomerDTO) and then convert it into the domain object (Customer) in my service layer. I then try to save a new object back (e.g. SalesOrder) which contains the same Customer object. This is in turn converted to a SalesOrderDTO (and CustomerDTO) for pushing into the repository.
NHibernate does not like this - it complains that the CustomerDTO is a duplicate record. I'm assuming this is because it pulled out the first CustomerDTO in the same session, and because the returned object has been converted back and forth it cannot recognise it as the same object.
Am I stuck here or is there a way around this?
Thanks
James
You can re-attach an object to a session in NHibernate by using Lock - e.g.
_session.Lock(myDetachedObject, NHibernate.LockMode.None);
which may or may not help, depending on exactly what is happening here. On a side note, using DTOs with NHibernate is not the most common practice; the fact that NHibernate (mostly) supports persistence ignorance means that DTOs typically aren't as widely used as with some other ORM frameworks.
It's really about how the NHibernate session works. If, within a session, you pull an instance of your CustomerDTO and then, a while later, fetch the same CustomerDTO again (say, by primary key), you will actually get a reference to the very same object as in your first retrieval.
So you either merge the objects by calling session.Merge, or you ask your session for the object by calling session.Get(primaryKey), make your updates, and flush the session.
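For example, a minimal sketch of the Merge route (detachedCustomer stands in for the object you converted back from the DTO; session is an open ISession):

using (var tx = session.BeginTransaction())
{
    // Merge copies the detached state onto the instance the session already
    // tracks (loading it if necessary) and returns that attached instance.
    var attached = session.Merge(detachedCustomer);

    // ...any further changes go through 'attached'...
    tx.Commit();   // committing flushes the session
}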
However, as suggested by Steve, this is not usually what you do - you really want to get your domain object from the datastore and use DTOs (if needed) for transferring the data to the UI, a web service, whatever...
As others have noted, implementing Equals and GetHashCode is a step in the right direction. Also look into NHibernate's support for the "attach" OR/M idiom.
You also have the nosetter.camelcase option at your disposal: http://davybrion.com/blog/2009/03/entities-required-properties-and-properties-that-shouldnt-be-modified/
Furthermore, I'd like to encourage you not to be dissuaded by the lack of information out there online. It doesn't mean you're crazy, or doing things wrong. It just means you're working in an edge case. Unfortunately the biggest consumers of libraries like NHibernate are smallish in-house and/or web apps, where there exists the freedom to lean all your persistence needs against a single database. In reality, there are many exceptions to this rule.
For example, I'm currently working on a commercial desktop app where one of my domain objects has its data spread between a SQL CE database and image files on disk. Unfortunately NHibernate can only help me with the SQL CE persistence. I'm forced to use a sort of "Double Mapping" (see Martin Fowler's "Patterns of Enterprise Application Architecture") to map my domain model through a repository layer that knows which data goes to NHibernate and which to disk.
It happens. It's a real need. Sometimes an apparent lack in a tool indicates you're taking a bad approach. But sometimes the truth is that you just truly are in an edge case, and need to build out some of these patterns for yourself to get it done.
"I'm assuming this is because it pulled out the first CustomerDTO in the same session, and because the returned object has been converted back and forth it cannot recognise it as the same object."
You are right, NHibernate can't. Consider implementing Equals and GetHashCode to fix this. I think a re-attach may only work if you haven't already loaded the object within the session.
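For completeness, a minimal sketch of identity-based equality as suggested above (Customer and its Guid identifier are hypothetical; adapt to your own identifier type):

using System;

public class Customer
{
    public virtual Guid Id { get; protected set; }

    public override bool Equals(object obj)
    {
        var other = obj as Customer;
        if (other == null) return false;
        // Two transient (unsaved) instances are only equal by reference.
        if (Id == Guid.Empty || other.Id == Guid.Empty) return ReferenceEquals(this, other);
        return Id == other.Id;
    }

    public override int GetHashCode()
    {
        // Assumes Id is assigned before the object lands in hashed collections.
        return Id.GetHashCode();
    }
}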