I have a case where one of the columns in the database is generated by a trigger, because of a specific way we generate this value which I can't change. If I set the property to generated=insert in my NHibernate mapping, it works like a charm: NHibernate inserts the row without the generated property, and afterwards does a select to pull the value from the database.
But I also have cases where I want to be able to set the property explicitly (the trigger is built to only set the column if it isn't already set), and I can't get NHibernate to allow me to do this. When the property is mapped as generated=insert, NHibernate will always ignore the value I set on my object. So I really want some way to tell NHibernate that when the property is "untouched"/null it should act as if the property is generated, but if it is set, it shouldn't.
Is it possible to configure NHibernate this way somehow?
I don't think you can achieve this through configuration. However, you can simply call ISession.Refresh(myObject) after an insert to force it to go back to the database and refresh the object.
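For example (a rough sketch, assuming the property is mapped as a plain insertable property rather than generated; Invoice, Code and sessionFactory are made-up names):

// Sketch only: the property is mapped as a normal (non-generated) property,
// and after the insert we ask NHibernate to re-read the row so the
// trigger-assigned value ends up on the object.
public Invoice SaveAndReload(ISessionFactory sessionFactory)
{
    using (var session = sessionFactory.OpenSession())
    using (var tx = session.BeginTransaction())
    {
        var invoice = new Invoice();   // Code left null, the trigger fills it in
        session.Save(invoice);
        session.Flush();               // run the INSERT (and the trigger) now
        session.Refresh(invoice);      // SELECT the row again to pick up Code
        tx.Commit();
        return invoice;
    }
}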
Documentation for the generated property states (emphasis mine):
Properties marked as generated must additionally be non-insertable and non-updateable. Only Section 5.1.7, “version (optional)”, Section 5.1.8, “timestamp (optional)”, and Section 5.1.9, “property” can be marked as generated.
Is it a property that can be set as nullable in your domain model so on the initial insert nothing goes into it and your trigger still thinks it's untouched?
My domain model allows null to be inserted for the value, and my trigger is made to only set the column if null is inserted. What I'm trying to achieve is to decide at runtime whether or not NHibernate should handle it as a generated property.
But from what I can understand, NHibernate does not have this sort of flexibility; it somewhat goes against its configuration model, where a session factory is built once and used many times.
The alternative solution could be to build two session factories, one for each of my scenarios.
The first (where property is generated) is normal usage.
The second (where property is non-generated) is during upload scenarios where I need to maintain the property value in the code.
I'm using FluentNHibernate for the mappings, and since it reflects over my mapping classes, I can set a state while creating the session factories; when my mapping is being read, I can use an if/else statement based on which session factory is currently being built. This should let me achieve both behaviours without duplicating the configuration, albeit with two session factories in play instead of one.
I haven't tried it yet (it's only theory), but it should solve my problem and hopefully help others trying to achieve something similar.
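Roughly what I have in mind (untested; Invoice, Code and MappingMode are made-up names):

// Untested sketch of the two-factory idea.
public static class MappingMode
{
    // set before each session factory is built
    public static bool TreatCodeAsGenerated = true;
}

public class InvoiceMap : ClassMap<Invoice>
{
    public InvoiceMap()
    {
        Id(x => x.Id);

        if (MappingMode.TreatCodeAsGenerated)
            Map(x => x.Code).Generated.Insert(); // normal usage: trigger assigns the value
        else
            Map(x => x.Code);                    // upload scenario: value is set in code
    }
}

One factory would be built with the flag left at true, the other with it set to false, and the appropriate factory used per scenario.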
This is about an API handling validation while saving an object: the front-end client sends a request to a specific API endpoint, and on the back-end the API creates a new object if the right conditions are met.
Right now the regular method we use is that the models have a ruleset for each field, and the validation is invoked when the save function is invoked; technically, the validation runs right before the object is saved into the database.
Then during today's code review I came across a solution which I wasn't sure was good practice or not. The idea is that the front-end must send a specific parameter to the API every time. This is because other APIs use our API as well, and we need to know whether the request was sent as an API request or a browser request. If this parameter is present, we want to execute an extra validation function on a specific field.
(1) If I had to implement it, I would check the incoming parameter at the service handler or controller level; if it is present, I would invoke the validation right away and throw an error if it fails.
(2) The implementation I saw, however, adds an extra variable to the model, sets that variable when the parameter comes in, and then validates only when the save function is invoked on the object (which first validates the ruleset defined on the object's fields, then saves the object into the database).
My problem with (2) is that the object has now grown by an extra variable that is only related to a specific event, so I would say it's better to implement (1). But (2) also has an advantage: when you create the object on a different endpoint by parsing the parameters, the validation will work there as well, even if the developer forgets to update the code there.
Now this may seem like a silly question, because why would I care about just one extra variable? But this is the bedrock of something good or bad. If I say this is OK, then from now on the models will start growing with extra variables that are only related to specific events, which I think should be handled at the controller/service-handler level. On the other hand, the code is more reliable if it's not the developer who has to remember all the 6712537 functionalities and keep them in mind when making changes somewhere. Let's say all the devs get a heart attack tomorrow from the excitement of an amazing discovery, and a new developer has to work on the project without knowing about these small details; when he has to change something related to this functionality, the new feature should still be supported by this old one as well.
So my question is: is there any good practice for this, and what do you think would be the best approach?
So I spent some time thinking about the solution, and I think the best approach is to have an array of acceptable trigger variables in the model class. When the parameters are passed to the model at the controller level, the loader function can be modified so that it takes the trigger variables from the parameters and saves them in the model's associative-array variable that stores the trigger variables.
By default this array is empty, and it doesn't matter how many new variables need to be created; it will only contain the necessary ones when those are used.
Then of course the loader function needs to be modified so that it can filter out the non-trigger variables, just as it does for the regular fields, and there can even be a validation ruleset on the trigger variables if necessary.
So this solves both the problem of the object growing with unnecessary variables and the centralized-validation concern, because the validation can now always be done in the model instead of the controller.
And since the loader function is modified to store the trigger variables in the model's trigger-variables array, the developer never has to remember that this functionality was created. That is good, because in the future when he creates a new related function or endpoint that should handle object creation, he won't forget to validate it against the old functionality; the loader function he modified in the past will handle it for him.
It needs to be noted, though, that since the loader function doesn't differentiate between the parameters, and decides where to load them only by checking the parameter names against the filter functions, these parameter names should be distinct from each other; otherwise buggy behaviour can be created accidentally. For example, if you forget that a model attribute with the same name was already used, you can accidentally trigger an event that was programmed to fire when the trigger variable with that name is present. This can be solved by prefixing the trigger variables, for example.
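To make it concrete, a rough sketch of the idea (the framework isn't named here, so this is C# with made-up names and a deliberately simplified loader):

using System.Collections.Generic;

// Rough sketch with made-up names; the real loader/ruleset would come from the framework.
public class Post
{
    // whitelist of acceptable trigger variables, analogous to the field ruleset
    private static readonly HashSet<string> AcceptedTriggers =
        new HashSet<string> { "trigger_api_request" };

    // empty by default; only filled when a trigger parameter actually arrives
    private readonly Dictionary<string, string> triggerVariables =
        new Dictionary<string, string>();

    public string Title { get; set; }

    // the loader splits regular fields from trigger variables in one place,
    // so every endpoint that loads parameters gets the behaviour for free
    public void Load(IDictionary<string, string> parameters)
    {
        foreach (var pair in parameters)
        {
            if (AcceptedTriggers.Contains(pair.Key))
                triggerVariables[pair.Key] = pair.Value;
            else if (pair.Key == "title")
                Title = pair.Value;
        }
    }

    public void Save()
    {
        // ... run the regular field ruleset here ...
        if (triggerVariables.ContainsKey("trigger_api_request"))
            ValidateApiSpecificField();   // extra validation tied to the trigger
        // ... then persist the object ...
    }

    private void ValidateApiSpecificField() { /* extra validation */ }
}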
I'd like to refactor some of my sagas and messages and move them to a new namespace.
I can't clear out the existing worker queues and need to have the old saga/messages still work until they are all gone.
I won't be changing any behaviour of the sagas/messages, just the namespace. Is there an easy way to bulk update these so that the old sagas/messages can continue to process correctly?
What things do I need to worry about here? Is it possible to do this?
I'm not sure if there's any way you can blanket update all the in-flight saga instances. I imagine you might be able to with some Raven-fu (or SQL if you're using that).
The problem is that NServiceBus uses the fully qualified name of the message type to identify it for routing purposes, so it's a complex problem and something you'd want to get right first time.
In effect, what you're talking about doing is introducing a whole load of new messages into your architecture. It may be safer to introduce the change in parallel, allow all in-flight saga instances to complete, and then decommission the obsolete - and now unused - bits.
NSB documentation has this to say about handling breaking changes, though nothing specific to in-flight sagas...
When there are significant changes in a message type, such as adding or removing property, changing the property type, etc. the upgrade process should consist of the following steps:
Update contract to the new version.
Update senders to use the new contract version. Ensure changes are visible for receivers, such as: Decorate the existing property with Obsolete attribute with a warning when removing or renaming properties.
Update receivers to handle the new contract version. Make sure the new properties are handled correctly, e.g. instead of relying on .NET to set the default value for int Age = 1, it's better to use nullable types and represent missing values as null.
When all senders and receivers are updated and in-flight messages in the old format have been handled, obsolete the properties and throw an error, or simply remove them.
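Applied to a message contract, those steps might look roughly like this (the type and property names are invented for illustration):

using System;
using NServiceBus;

// Invented example contract following the quoted steps.
public class OrderPlaced : IEvent
{
    // flag the old property so senders/receivers see the change coming
    [Obsolete("Use CustomerReference instead; remove once in-flight messages have drained.")]
    public string CustomerName { get; set; }

    public string CustomerReference { get; set; }

    // nullable so an old-format message without this field shows up
    // as null instead of silently defaulting to 0
    public int? Age { get; set; }
}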
I have a very simple asp.net mvc web app which uses castle active record, on top of MySql.
The users of the app now want to define the primary key for one of the entities manually (it was the default autonumber). No problem, I thought: I will simply change the primary key attribute from [PrimaryKey] to [PrimaryKey(PrimaryKeyType.Assigned)] and modify the database schema (okay, I know this could be considered a flawed approach, but that is not the point of this question).
After trying this, new entities would never get persisted to the database when calling their .Create() method, even though I create a SessionScope per request in OnBeginRequest and OnEndRequest using code identical to here. The same code worked fine before I altered the [PrimaryKey] attribute.
If I call .CreateAndFlush() instead of .Create(), the entities are persisted to the DB. I thought the changes made by .Create() would be persisted when the SessionScope ends. Why aren't they... am I misunderstanding how this should work?
For a generated key, ActiveRecord knows that 0/NULL/{0000-0000-0000-0000}/etc. are the empty values and that the object is transient and needs to be created. It does not know this when the PrimaryKey is Assigned. But you can tell ActiveRecord what the empty value is; just use the following named parameter on your PrimaryKeyAttribute.
[PrimaryKey(PrimaryKeyType.Assigned, UnsavedValue="")]
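For example, a hypothetical entity with an assigned string key could look like this:

using Castle.ActiveRecord;

// Hypothetical entity; an empty string marks the object as unsaved/transient.
[ActiveRecord]
public class Customer : ActiveRecordBase<Customer>
{
    [PrimaryKey(PrimaryKeyType.Assigned, UnsavedValue = "")]
    public string Code { get; set; }

    [Property]
    public string Name { get; set; }
}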
Greetings
Juy Juka
Turned out to be PEBKAC. I had forgotten that I'd written some code which checks that the new primary key does not exist. That code of course needed its own SessionScope. As soon as I gave it one, everything worked, i.e. calling .Create() would persist the object to the DB.
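In case it helps someone else, the shape of the fix was roughly this (entity and property names are just for illustration):

// The duplicate-key check gets its own SessionScope so it doesn't
// interfere with the per-request scope that flushes at the end of the request.
string newCode = "ABC123";   // the user-supplied key

bool exists;
using (new SessionScope())
{
    exists = Customer.Exists(newCode);
}

if (!exists)
{
    var customer = new Customer { Code = newCode, Name = "..." };
    customer.Create();   // persisted when the per-request SessionScope ends
}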
Two lessons learned from this:
Next time post my code on SO.
Don't make late night code changes that I may otherwise forget about...
I come from an ASP.NET MVC background and am currently going through the following Rails tutorial: http://guides.rubyonrails.org/getting_started.html
I have created a "Post" model which contains some instance variables, but they do not seem to have been defined in the model. They must come from somewhere else. Where are they defined?
Googled "activerecord model" and this was in the top result:
Active Record objects don’t specify their attributes directly, but rather infer them from the table definition with which they’re linked. Adding, removing, and changing attributes and their type is done directly in the database. Any change is instantly reflected in the Active Record objects. The mapping that binds a given Active Record class to a certain database table will happen automatically in most common cases, but can be overwritten for the uncommon ones.
You can have virtual variables that don't correlate to fields in a table/model. A common example is the 'password' and 'password_confirmation' variables used in authentication. You have them exist temporarily until you encrypt it and save it to another field like 'encrypted_password'.
You can declare them but that's not required. You don't have to define or declare them anywhere... just start using them. Of course, they're not persistent though, so won't be saved.
I've got a store that is synchronized externally and a store that is unique to the application instance, so in order to cleanly differentiate the two I want to have some join entities between them and then resolve through to the entities in between using Fetched Properties, as "discussed" in the Core Data Programming Guide:
developer.apple.com/documentation/Cocoa/Conceptual/CoreData/Articles/cdRelationships.html#//apple_ref/doc/uid/TP40001857-SW5
I think I just don't really "get" how Fetched Properties are supposed to be used - and I've spent a fair number of hours looking for examples with no real luck.
The way I think of it is,
I have the following Entities each in a different store
Foo with attribute relatedBarName in store A
Bar with attribute barName in store B
I need to create a fetched property on Foo named findRelatedBar that relates Foo to Bar loosely through barName = relatedBarName.
However, since Foo and Bar are in different stores, I don't understand how to declare any relationship of any sort from Foo to Bar, whether through the fetched property or not.
The predicate builder in Xcode seems to want a destination entity. If they are in different schemas, how can you declare the destination? If you don't declare a destination, how do you indicate at runtime that findRelatedBar on Foo is describing Bar?
Otherwise, do they need to be in the same schema but just stored in different stores?
In crafting this question, I thought of these questions and answered them myself by more focused examination of the documentation. I assume if I found it confusing, others might as well, so I'll inline them with this post to make it easier to find related answers to fetched properties / core data stores.
Q) If a store coordinator has more than one store associated with it of the same schema, how do insertions know which store to insert to?
A) You use the assignObject:toPersistentStore: method on the managed object context.
Q) What does FETCH_SOURCE refer to in specific?
A) It's simply the managed object which has the fetched property associated with it. Sort of like "self"
Q) What does FETCHED_PROPERTY refer to in specific?
A) It is a reference to the fetched property description instance you are using to query with - you can use this to insert per query variable substitution. By setting a property (as in the Core Data Programming example) on the userInfo of the property description instance you're using, you can inject that value into the expression.
Thanks!!!!
The answer is:
Yes, you need to do a cross-store fetched property with shared schemas. If you do this, you need to make sure you attribute the inserts with the assignObject:toPersistentStore: method as described in the question. However, due to the limitations of the SQLite persistent store, natural things like IN $FETCH_SOURCE.attribute do not work.
Q) If a store coordinator has more than one store associated with it of the same schema, how do insertions know which store to insert to?
This is what configurations are for. You create a configuration for each store and then assign entities to that configuration. You then create the store with the proper configuration. When you save the context, each entity will automatically go to the correct store.