We are creating a system that allows users to create and modify bills for their clients. The modifications need to be maintained as part of the bill for auditing purposes. It is to some extent a point-in-time architecture, but we aren't tracking by time, just by revision. This is an ASP.NET MVC 5, Web API 2, Entity Framework 6, SQL Server app using Breeze on both the client and the server.
I'm trying to figure out how to get Breeze and our data model working together correctly. When we modify an entity, we essentially keep the old row, make a copy of it with the modifications, and update some entity state fields with the date/time, revision number, and so on. We can always get the most recent version of an entity from its entity ID and an EditState field, where "1" marks the most current revision.
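For illustration, fetching the current revision looks roughly like this (Bills, EntityId, and EditState stand in for our actual names):

var current = context.Bills
    .Single(b => b.EntityId == entityId && b.EditState == 1);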
I made a small sample app to work on getting Breeze working as part of the solution, enabling some nice SPA architecture and inline editing on the client, and it all works... except that since our Entity Framework code automatically creates a new entity containing the modifications, the SaveChanges response contains the original entity but not the new "updated" one. Reloading the data on the client works, but that would of course be a dumb approach outside of hacking around for demo purposes.
So I made a new ContextProvider that inherits from EFContextProvider, overrode the AfterSaveEntities method, and then things got a bit more complicated. Not all of the entities have this "point in time" / revision functionality, but most of them do. For those that do I can, as I said above, get the latest version of the entity using its EntityId and EditState, but I'm not seeing a straightforward way to get at the new entity (I'm pretty new to EF and very new to Breeze), so I'm hoping to find some pointers here.
Would this solution lie in Breeze or in our DataContext? I could just do some reflection, get the type, query the updated entity, and shove that into the saveMap. It seems like that might break down at some point (not sure how or when, but it seems sketchy). Is our architecture bad? Should we have gone the route of creating audit/log tables to store the modified values, instead of keeping the data model somewhat smaller by keeping all revisions of an entity in its original table with the revision information and making the queries slightly more complicated? Am I just missing something in EF?
... and to head off the obvious response: I know we should have used a document database, but that wasn't an option on this project. We are stuck in relational land.
I haven't tried this, but another approach would be to simply change the EntityState of the incoming entity in the BeforeSaveEntities method from Modified to Added. You will probably also need to update some version field in this 'new' entity so that it doesn't have a primary key conflict with the original.
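A rough sketch of that idea (assuming your Breeze version allows setting EntityInfo.EntityState; Bill and its revision fields are placeholders):

// inside your EFContextProvider subclass
protected override Dictionary<Type, List<EntityInfo>> BeforeSaveEntities(
    Dictionary<Type, List<EntityInfo>> saveMap) {
  List<EntityInfo> bills;
  if (saveMap.TryGetValue(typeof(Bill), out bills)) {
    foreach (var info in bills.Where(i => i.EntityState == EntityState.Modified)) {
      info.EntityState = EntityState.Added;  // persist the edit as a brand new row
      var bill = (Bill)info.Entity;
      bill.Revision++;                       // avoid a key conflict with the original
      bill.EditDate = DateTime.UtcNow;
    }
  }
  return saveMap;
}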
But... having built apps like this in the past, I really recommend another approach. Store the 'historical' entities of each type in a separate table. It can be exactly the same shape as the 'current' table. When you save, you first copy the 'current' entity into the 'historical' table (again with some version numbering or date scheme for the primary key) and then just update your 'current' entity normally.
This might not give you the answer you expected, but here is an idea:
When saving an object, intercept the save on the server. You get an instance of the object being modified; read the object with the same ID from the database, put a copy of that old object into a legacy table in your database, and continue with saving into the main table. That way only the latest revision stays in the main table, while the legacy table contains all previous versions.
So, all you would need to do is have two tables containing the same shape of object (note that EF6 won't let two DbSet properties on one context share the same CLR type, so the legacy set needs its own, identically shaped, class):

public DbSet<MyClass> OriginalMyClasses { get; set; }
public DbSet<LegacyMyClass> LegacyMyClasses { get; set; }
Override the SaveChanges function and intercept entries whose state is Modified: read the entry's type, get the original and legacy tables, read the object with the same ID from the Original table, save a copy of it to the Legacy table, and finally return base.SaveChanges() (letting it save as it is supposed to by default).
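A minimal sketch of that override, assuming LegacyMyClass mirrors MyClass and the field names here are illustrative:

// inside the DbContext subclass that declares the two sets above
public override int SaveChanges() {
  var modified = ChangeTracker.Entries<MyClass>()
      .Where(e => e.State == EntityState.Modified)
      .ToList();
  foreach (var entry in modified) {
    // OriginalValues holds the row as it was read from the database,
    // i.e. the version we want to preserve in the legacy table.
    LegacyMyClasses.Add(new LegacyMyClass {
      MyClassId = entry.Entity.Id,
      Name = (string)entry.OriginalValues["Name"]
    });
  }
  return base.SaveChanges(); // then let the update happen normally
}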
Can I update a table in Keystone when I add data to another table?
For example: I have a table named Property where I add details of the property. As soon as I enter the data into this Property table, another table, named NewTable, should automatically get populated with the contents.
Is there a way to achieve this?
There are two ways I can see to approach this:
The afterOperation hook, which lets you configure an async function that runs after the main operation has finished
A database trigger that runs on UPDATE and INSERT
afterOperation Hook
See the docs here. There's also a hooks guide with some context on how the hooks system works.
In your case, you'll be adding a function to your Property list config.
The operation argument will tell you what type of operation just occurred ('create', 'update', or 'delete') which may be handy if you also want to reflect changes to Property items or clean up records in NewTable when a Property item is deleted.
Depending on the type of operation, the data you're interested in will be available in either the originalItem, item or resolvedData arguments:
For create operations, resolvedData will contain the values supplied, but you'll probably want to reference item; it'll also contain generated and defaulted values that were applied, such as the new item's id. In this case originalItem will be null.
For update operations, resolvedData will be just the data that changed, which should have everything you need to keep the copy in sync. If you want a more complete picture, originalItem and item will be the entire item before and after the update is applied.
For delete operations originalItem will be the last version of the item before it was removed from the DB. resolvedData and item will both be null.
The context argument is a reference to the Keystone context object which includes all the APIs you'll need to write to your NewTable list. You probably want the Query API, eg. context.query.NewTable.createOne(), context.query.NewTable.updateOne(), etc.
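Pulling that together, a rough sketch (Keystone 6 syntax; the NewTable fields here are hypothetical):

import { list } from '@keystone-6/core';
import { allowAll } from '@keystone-6/core/access';
import { text } from '@keystone-6/core/fields';

export const lists = {
  Property: list({
    access: allowAll,
    fields: { address: text() },
    hooks: {
      afterOperation: async ({ operation, item, originalItem, context }) => {
        if (operation === 'create') {
          // mirror the new Property item into NewTable
          await context.query.NewTable.createOne({
            data: { propertyId: item.id.toString(), address: item.address },
          });
        }
        // 'update' and 'delete' can be handled the same way with
        // context.query.NewTable.updateOne() / deleteOne()
      },
    },
  }),
  // NewTable: list({ ... }) omitted for brevity
};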
The benefits to using a Keystone hook are:
The logic is handled within the Keystone app code which may make it easier to maintain if your devs are mostly focused on JavaScript and TypeScript (and maybe not so comfortable with database functionality).
It's database-independent. That is, the code will be the same regardless of which database platform your project uses.
Database Triggers
Alternatively, I'm pretty sure it's possible to solve this problem at the database level using UPDATE and INSERT triggers.
This solution is, in a sense, "outside" of Keystone and is database specific. The exact syntax you'll need depends on the DB platform (and version) your project is built on:
PostgreSQL
MySQL
SQLite
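For example, a PostgreSQL version might look roughly like this (table and column names are illustrative; check the quoting and casing Prisma actually generated for your schema):

CREATE OR REPLACE FUNCTION copy_property_to_newtable()
RETURNS trigger AS $$
BEGIN
  -- assumes NewTable has a unique constraint on "propertyId"
  INSERT INTO "NewTable" ("propertyId", "address")
  VALUES (NEW."id", NEW."address")
  ON CONFLICT ("propertyId") DO UPDATE SET "address" = EXCLUDED."address";
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- EXECUTE FUNCTION needs PostgreSQL 11+; use EXECUTE PROCEDURE on older versions
CREATE TRIGGER property_copy
AFTER INSERT OR UPDATE ON "Property"
FOR EACH ROW EXECUTE FUNCTION copy_property_to_newtable();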
You'll need to manually add a migration that creates the relevant database structure and add it to your Keystone migrations dir. Once created, Prisma (the DB tooling Keystone uses internally) will ignore the trigger when it's performing its schema comparisons, allowing you to continue using the automatic migrations functionality.
Note that, due to how Prisma works, the table with the copy of the data (NewTable in your example) will need to either be:
Defined as another Keystone list so Prisma can create and maintain the table, or...
Manually created in a different database schema, so Prisma ignores it. (I believe this isn't possible on SQLite, as it lacks the concept of multiple schemas within a single DB.)
If you try to manually create and manage a table within the default database schema, Prisma will get confused (producing a Drift detected: Your database schema is not in sync with your migration history error) and prompt you to reset your DB.
I've been trying to get my head around NoSQL, and I do see the benefits to embedding data in documents.
What I can't understand, and hope someone can clear up, is how to store data if it must be relational.
For example.
I have many users. They are all buying a product. So every time they buy a product, we add it under the user's document in Mongo, so it's embedded and it's all great.
The problem I have is when something in reference to that product changes.
Let's say user A buys a car called "Porsche". Then we add a reference to that under the user's profile. However, in a strange turn of events, Porsche gets purchased by Ferrari.
What do you do now, update each and every record and change the name from Porsche to Ferrari?
Typically in SQL, we would create three tables: one for users, one for cars (description, model, etc.), and one for mapping users to purchases.
Do you do the same thing in Mongo? It seems like if you go down this route, you are trying to make Mongo do things the SQL way, which is not what it's intended for.
I can understand how certain data is great for embedding (addresses, contact details, comments, etc.), but what happens when you need to reference data that can and does change on a regular basis?
I hope this question is clear.
DBRefs/manual references were made specifically to solve this issue. Instead of manually adding the data to each document and then needing to update it when something changes, you can store a reference to another collection. Here is the MongoDB documentation for details.
References in Mongo
Then all you would need to do is update the reference collection and the change would be reflected in all downstream locations.
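A manual-reference sketch in the mongo shell (collection and field names are just for illustration; userId is a placeholder):

// one canonical car document, referenced from each buyer
db.cars.insertOne({ _id: "porsche-911", make: "Porsche", model: "911" });
db.users.updateOne(
  { _id: userId },
  { $push: { purchases: { carId: "porsche-911", boughtAt: new Date() } } }
);

// a rename now touches exactly one document
db.cars.updateOne({ _id: "porsche-911" }, { $set: { make: "Ferrari" } });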
When I used the Mongoose library for Node.js, it actually creates three collections, similar to how you might do it in SQL. You can use ObjectIds as foreign keys and enrich them either on the client side or on the backend. There's still no joining, but you could do an 'in' query for the IDs and then enrich the objects that way; Mongoose can do this automatically by 'populating'.
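Roughly, in Mongoose terms (model and field names are illustrative):

const mongoose = require('mongoose');

const Car = mongoose.model('Car', new mongoose.Schema({
  make: String,
  model: String,
}));

const Purchase = mongoose.model('Purchase', new mongoose.Schema({
  user: { type: mongoose.Schema.Types.ObjectId, ref: 'User' },
  car:  { type: mongoose.Schema.Types.ObjectId, ref: 'Car' },
}));

// inside an async function: fetch a user's purchases with the
// referenced car documents filled in
const purchases = await Purchase.find({ user: userId }).populate('car');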
So this is a question about Serialization and Versioning. I have a program that is a Music database that stores sheet music with Name, Composer, ...
I serialize each song to a hidden folder so that the user can reload the database at next launch.
Now, when I have to change something in the Song class, all is fine if it is a compatible change. I had the idea that if I were to make an incompatible change, could I create a second class with the same name 'Song' but a different serialVersionUID? Then, when reading the songs, if the saved version doesn't match the latest version, it would go to a method that reads the Song with the old UID and then goes through a series of steps to convert it to the new version. Is any of this possible?
I do know that you can have multiple methods with the same name but different parameters. Would this work with classes and serialVersionUIDs or some other variable?
Thanks!
No, it would not. Classes do not support a concept like "overloading," so a class with the same name is considered the same class, even if it has different properties.
The "best" way for you would be a migration to a relational database in combination with EntityFramework6 (there is a SQLite adapter out there, so you don't need SQLServer).
With EF you can use migrations which enables you to change your model and migrate the data automatically. If done correctly you can change the model and no data loss occurs.
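With EF6 the workflow in the Package Manager Console looks roughly like this (the migration name is just an example):

Enable-Migrations
Add-Migration AddArrangerToSong
Update-Database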
I am working on VB.NET project using Entity Framework 4.
When I create a new entity and add it to context.EntityCollection, without calling context.SaveChanges I cannot find the newly added entity in that collection.
I need to check for duplicate records before saving to the database, and it appears that the only working solution is to store entities in some dictionary outside of the whole EF-generated stuff for checking duplicates.
Is there any better solution?
Checking the database directly before saving changes is a possible solution.
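For example (MyEntities and Name are placeholders; the second check uses the ObjectContext state manager to catch entities added in this context but not yet saved):

// duplicate already saved in the database?
bool inDb = context.MyEntities.Any(e => e.Name == candidate.Name);

// duplicate added to the context but not yet saved?
bool pending = context.ObjectStateManager
    .GetObjectStateEntries(EntityState.Added)
    .Select(entry => entry.Entity)
    .OfType<MyEntity>()
    .Any(e => e.Name == candidate.Name);

if (!inDb && !pending) {
    context.MyEntities.AddObject(candidate);
    context.SaveChanges();
}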
I made a bad decision as I was designing a MongoDB database to embed a model rather than reference it in an associated model. Now I need to make the embedded model a referenced model, but there is already a healthy amount of data in the database (or document?).
I'm using Mongoid, so I reasoned I could just change embedded_in to referenced_in. Before I start, I figured I'd ask people who know better than I do: how can I transition the embedded data already in the database to the documents for the associated model?
class Building
  embeds_many :landlords
  ...
end

class Landlord
  embedded_in :building
  ...
end
Short answer - Incrementally.
1. Create a copy of Landlord; name it Landlord2.
2. Make it referenced in Building.
3. Copy all data from Landlord to Landlord2 (a sketch follows below).
4. Delete Landlord.
5. Rename Landlord2 to Landlord.
Users should not be able to CRUD Landlord during steps 3-5 (ideally). You can still get away with locking CRUD only during steps 4-5; just make sure you apply any updates that happened during the copying before removing Landlord.
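Step 3 might look roughly like this as a one-off script (Mongoid syntax; assumes Landlord2 has a building_id field for the new reference):

Building.all.each do |building|
  building.landlords.each do |landlord|
    # copy every field except the embedded _id, and point it at the parent
    Landlord2.create!(
      landlord.attributes.except('_id').merge('building_id' => building.id)
    )
  end
end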
Just changing the model like you have above will not work; the old data will still be in a different structure in the db.
Very similar to the previous answer: one of the things I have done for this migration before is to do it dynamically, while the system is running and being used by the users.
I had the data layer separated from the logic, so it let me add some preprocessors and inject code to do the following.
Let's say we start with the old data model, then release new code that does the following:
On every access to the document, check whether the embedded property exists. If it does, create a new entry associated as a reference, save it to the database, and delete the embedded property from the document. After this ran for a couple of days, a lot of my data had been migrated, and then I just had to run a similar script for everything that had not been touched. That made the job of migrating the data much easier and simpler, and I did not have to run long-running scripts or take the system offline to perform the conversion.
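In Mongoid terms the lazy conversion could look something like this (names and hook placement are hypothetical; adapt it to wherever your data layer intercepts reads):

class Building
  include Mongoid::Document
  has_many :landlords            # the new referenced association
  after_find :migrate_embedded_landlords

  def migrate_embedded_landlords
    raw = attributes['landlords'] # leftover embedded docs, if any
    return if raw.blank?
    raw.each do |doc|
      Landlord.create!(doc.except('_id').merge('building_id' => id))
    end
    unset(:landlords)             # atomically $unset the embedded copy
  end
end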
You may not have that requirement, so pick accordingly.