I have a question about correctly handling the recreation of a database.
In my dev environment I often recreate the database by using
_schemaExport.Drop(true, true);
_schemaExport.Create(createResult, true);
(I have to note that I use the hilo generator.) Right after recreating the database, saving a new entity sometimes fails with a "Cannot insert duplicate key..." exception.
My question:
Would I have to reinitialize the session factory (and maybe even the session) to come back in sync with the new hilo-using database? Or should it work just as is?
Any hint is appreciated!
Regards,
warappa
I'd say you definitely have to create a new session after recreating the database. Another option is to clear the existing one before recreating the DB.
The ID generator will start from scratch after you've recreated the DB, so a newly generated ID can be the same as the ID of an object in the previously existing session. That's why you're getting duplicate key errors.
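A minimal sketch of that reinitialization, assuming the NHibernate Configuration used to build the factory is still available as cfg (the _sessionFactory field is illustrative):

_schemaExport.Drop(true, true);
_schemaExport.Create(createResult, true);

// Dispose the old factory so no stale hilo state survives,
// then build a fresh one against the recreated schema.
_sessionFactory.Dispose();
_sessionFactory = cfg.BuildSessionFactory();

// Sessions opened from the new factory are now in sync.
using (var session = _sessionFactory.OpenSession())
{
    // ...save new entities here
}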
Can I update a table in Keystone when I add data to another table?
For example: I have a table named Property where I add details of the property. As soon as I enter the data into this Property table, another table, named NewTable, should automatically get populated with the contents.
Is there a way to achieve this?
There are two ways I can see to approach this:
The afterOperation hook, which lets you configure an async function that runs after the main operation has finished
A database trigger that runs on UPDATE and INSERT
afterOperation Hook
See the afterOperation docs. There's also a hooks guide with some context on how the hooks system works.
In your case, you'll be adding a function to your Property list config.
The operation argument will tell you what type of operation just occurred ('create', 'update', or 'delete') which may be handy if you also want to reflect changes to Property items or clean up records in NewTable when a Property item is deleted.
Depending on the type of operation, the data you're interested in will be available in either the originalItem, item or resolvedData arguments:
For create operations, resolvedData will contain the values supplied, but you'll probably want to reference item, since it will also contain the generated and defaulted values that were applied, such as the new item's id. In this case originalItem will be null.
For update operations, resolvedData will contain just the data that changed, which should have everything you need to keep the copy in sync. If you want a more complete picture, originalItem and item will be the entire item before and after the update is applied.
For delete operations originalItem will be the last version of the item before it was removed from the DB. resolvedData and item will both be null.
The context argument is a reference to the Keystone context object which includes all the APIs you'll need to write to your NewTable list. You probably want the Query API, eg. context.query.NewTable.createOne(), context.query.NewTable.updateOne(), etc.
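Putting that together, a rough sketch of the hook (assuming Keystone 6; the address field and the exact fields you copy are illustrative):

import { list } from '@keystone-6/core';
import { allowAll } from '@keystone-6/core/access';
import { text } from '@keystone-6/core/fields';

export const Property = list({
  access: allowAll,
  fields: {
    address: text(),
    // ...the rest of your Property fields
  },
  hooks: {
    afterOperation: async ({ operation, item, context }) => {
      // On create, mirror the new Property item into NewTable
      if (operation === 'create') {
        await context.query.NewTable.createOne({
          data: { address: item.address },
        });
      }
      // 'update' and 'delete' can be handled with similar branches
    },
  },
});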
The benefits to using a Keystone hook are:
The logic is handled within the Keystone app code which may make it easier to maintain if your devs are mostly focused on JavaScript and TypeScript (and maybe not so comfortable with database functionality).
It's database-independent. That is, the code will be the same regardless of which database platform your project uses.
Database Triggers
Alternatively, I'm pretty sure it's possible to solve this problem at the database level using UPDATE and INSERT triggers.
This solution is, in a sense, "outside" of Keystone and is database specific. The exact syntax you'll need depends on the DB platform (and version) your project is built on:
PostgreSQL
MySQL
SQLite
You'll need to manually add a migration that creates the relevant database structure and add it to your Keystone migrations dir. Once created, Prisma (the DB tooling Keystone uses internally) will ignore the trigger when it's performing its schema comparisons, allowing you to continue using the automatic migrations functionality.
Note that, due to how Prisma works, the table with the copy of the data (NewTable in your example) will need to be either:
Defined as another Keystone list, so Prisma can create and maintain the table, or
Manually created in a different database schema, so Prisma ignores it. (I believe this isn't possible on SQLite, as it lacks the concept of multiple schemas within a single DB.)
If you try to manually create and manage a table within the default database schema, Prisma will get confused (producing a Drift detected: Your database schema is not in sync with your migration history error) and prompt you to reset your DB.
I have added DB caching to my Yii2 application, and it works perfectly as expected. The problem I'm facing is that every time I add a new column to the DB, the application complains about the missing column because of the caching mechanism. So each time I have to either add Yii::$app->cache->flush(); to my migration file manually or clear the cache on the server, which is annoying and easy to miss. Is there a way to automate this, e.g. have this line added at the end of every new migration automatically?
I'm trying to understand the different types of migration paths we can choose when developing an ASP.NET Core 1.0 application with EF Core. When I created my first Core application, I noticed it generated an ApplicationDbContextModelSnapshot class that uses a ModelBuilder to build the model.
Then I read that if I need to add a table to the database, I need to create the new model and run the command line to generate the migration file and update the database. Ok, I get it up to this point.
But when I do that, I notice that the ApplicationDbContextModelSnapshot class gets updated too.
1) Does that mean I cannot modify this ApplicationDbContextModelSnapshot class since it looks like it gets regenerated each time?
2) Should I use Data Annotations to build my model, or should I use the Fluent API, which tells me to build my model in the ApplicationDbContext class? Huh? Another file that builds the model?
I'm seeing three different ways of working with the database here, the snapshot class, data annotations, and fluent API. I'm confused because today, I made a mistake in my last migration file so I deleted the file, dropped the database and reran the database update.
But by doing that I got errors similar to:
The index 'IX_Transaction_GiftCardId' is dependent on column 'GiftCardId'.
ALTER TABLE ALTER COLUMN GiftCardId failed because one or more objects access this column.
So naturally I was wondering if I had to modify the ApplicationDbContextModelSnapshot class.
What is the path I should be taking when it comes to migrations or database updates because these three paths are confusing me.
I have run into this issue before when I create migrations, make model changes, create new migrations, and try to update the database. The root cause is usually a key change where the dependent relationships and indexes are not dropped first and added back afterwards (or no longer exist).
You have two options
Easy Method
The easiest way is also the most destructive, and it is only possible in a dev environment.
Delete all migrations, drop the database, create new migrations and run 'update-database'.
Hard/Safest Method
This is the most time-consuming method. I recommend doing this in a local integration branch first, then pushing it to a remote integration branch, and then to production.
Open the migration file, e.g. 20160914173357_MyNewMigration.cs.
Drop all dependent indexes in order.
Drop/add/edit the table schemas.
Add all the indexes back (see the sketch after these steps).
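For the index error above, the edited Up method might look something like this (a sketch only; the table, column, and index names come from your error message, and the column type is an assumption):

protected override void Up(MigrationBuilder migrationBuilder)
{
    // 1. Drop the index that depends on the column
    migrationBuilder.DropIndex(
        name: "IX_Transaction_GiftCardId",
        table: "Transaction");

    // 2. Alter the column now that nothing references it
    migrationBuilder.AlterColumn<int>(
        name: "GiftCardId",
        table: "Transaction",
        nullable: false);

    // 3. Recreate the index
    migrationBuilder.CreateIndex(
        name: "IX_Transaction_GiftCardId",
        table: "Transaction",
        column: "GiftCardId");
}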
For either method, just be sure to test and test again.
Do not modify ApplicationDbContextModelSnapshot. It is a design-time artifact, and should only be modified in the case of a merge conflict.
To update the model, always use data annotations or the fluent API.
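For example, a minimal fluent API sketch inside the context class (the GiftCard entity and its Code property are illustrative):

public class ApplicationDbContext : DbContext
{
    public DbSet<GiftCard> GiftCards { get; set; }

    protected override void OnModelCreating(ModelBuilder builder)
    {
        base.OnModelCreating(builder);

        // Equivalent to [Required] and [MaxLength(32)] data annotations
        builder.Entity<GiftCard>()
            .Property(g => g.Code)
            .HasMaxLength(32)
            .IsRequired();
    }
}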
For more information on the EF Migrations workflow, see Code First Migrations. It's for EF6, but most of the information is still relevant.
I am working on a VB.NET project using Entity Framework 4.
When I create a new entity and add it to context.EntityCollection, I cannot find the newly added entity in that collection without calling context.SaveChanges.
I need to check for duplicate records before saving to the database, and it appears that the only working solution is to store entities in some dictionary outside of the EF-generated classes to check for duplicates.
Is there any better solution?
Checking the database directly before saving changes is a possible solution.
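You can also combine a database check with the context's own change tracker to catch entities that were added but not yet saved. A hedged sketch (in C#, though the idea maps directly to VB.NET; Customer, Name, and MyEntities are illustrative names):

// using System.Data;  (EntityState in EF4)
// using System.Linq;
bool CustomerExists(MyEntities context, string name)
{
    // 1. Rows already persisted in the database
    bool inDatabase = context.Customers.Any(c => c.Name == name);

    // 2. Entities added to the context but not yet saved
    bool pendingAdd = context.ObjectStateManager
        .GetObjectStateEntries(EntityState.Added)
        .Select(e => e.Entity)
        .OfType<Customer>()
        .Any(c => c.Name == name);

    return inDatabase || pendingAdd;
}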
I am working on a project which uses NHibernate.
I don't keep the session open. When I need to get or save an object, I open a session, do what I need, and then close it. So I'm working with objects detached from a session all the time.
For example, when I need to get an object from the database, I open a session, call session.Get(), and close the session. Then I update some properties of the detached object. When I need to save the changes to the database, I call a method that opens a session, calls session.Update(myObject), and closes the session.
But when I do so, NHibernate generates SQL that updates all the fields I have mapped, even though they have not changed. My guess is that when an object is detached from the session, NHibernate can't track the changes that have been made.
What approach do you use when you want to update only the properties that have changed on an object detached from the session? How do you track changes for detached objects?
Thanks
The question is: why do you want to do this? Updating only the columns that have changed is usually not much of an optimization.
Have you got triggers in the database?
If so, you can do the following:
use select-before-update="true" and dynamic-update="true" in the mapping. This makes NHibernate perform a SELECT before the UPDATE, issue the UPDATE only if something actually changed, and include only the changed columns. I'm not sure if it selects for every update, or only when the object is not in the session.
use Merge instead of Update. This does effectively the same thing: it selects the entity from the database, but only if it is not already in the session. Also use dynamic-update="true". There is a trade-off: Merge returns the attached instance if there is already one in the session, so you should always throw away the instance you passed in and work with the instance you get from Merge (see the sketch below).
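A minimal sketch of the Merge-based flow under your session-per-operation pattern (Customer is an illustrative entity with dynamic-update="true" in its mapping):

using (var session = _sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    // Merge loads the current state if needed and returns the attached copy
    var attached = (Customer)session.Merge(detachedCustomer);
    tx.Commit();
    // From here on, work with attached, not detachedCustomer
}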
Actually, I wouldn't care about the updated columns. It's most probably faster to update them all blindly than to perform a preceding query.
Use dynamic-update="true" in the mapping.
Additionally, to update a detached object, use this:
Session.SaveOrUpdateCopy(myObject)