I am wondering which one I should use in this situation. I have a dropdown list that sends a value back to the server. The server currently uses Load to build the object, then grabs a value out of it and tries to convert it to an enum.
After doing some reading it seems that I should just use Get, as I need to access something from the object other than the PK.
In general, use Get if you need access to properties other than the Id itself; this makes the intention of your code much clearer and is likely more efficient in the long run. Load is great if you need to set up FK relationships when creating or updating entities without making unnecessary round-trips to the database.
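For illustration, a minimal sketch of the difference, assuming a hypothetical Order/Customer model and an open ISession (the entity and property names are made up):

// Get hits the database (or first-level cache) and returns null if the row is missing,
// so it is the right call when you need the entity's actual properties.
var customer = session.Get<Customer>(customerId);
if (customer != null)
{
    var name = customer.Name; // already populated
}

// Load just hands back a proxy without querying, which is ideal for wiring up an FK.
var order = new Order
{
    Customer = session.Load<Customer>(customerId), // no SELECT issued here
    PlacedOn = DateTime.UtcNow
};
session.Save(order); // the INSERT only needs the customer's id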
For further reading, check out Ayende's article that describes this in greater detail.
Get and Load are different if lazy loading is enabled.
If you use the method Load, NHibernate does not retrieve the entity from the database, but rather creates a proxy object and the only populated property is the ID.
If you access another property, NHibernate will load the entity from the DB.
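A small sketch of that proxy behaviour (the Customer entity here is just an example):

var customer = session.Load<Customer>(42); // no query yet: only a proxy holding Id = 42
var id = customer.Id;                      // still no query, the identifier is already known
var name = customer.Name;                  // first access to another property issues the SELECT,
                                           // and throws ObjectNotFoundException if row 42 is missing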
So in your case the best use should be Get.
I am working on VB.NET project using Entity Framework 4.
When I create a new entity and add it to context.EntityCollection, without calling context.SaveChanges I cannot find the newly added entity in that collection.
I need to check for duplicate records before saving to the database, and it appears that the only working solution is to store entities in some dictionary outside of the whole EF-generated stuff in order to check for duplicates.
Is there any better solution?
Checking the database directly before saving changes is a possible solution.
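For example, a C# sketch (the same calls exist from VB.NET), assuming an EF4 ObjectContext with a Customers set and a unique Email column; both names are assumptions:

bool existsInDb = context.Customers.Any(c => c.Email == newCustomer.Email);

// Also look at entities added to the context but not yet saved,
// since they are not returned by a query against the store.
bool existsInContext = context.ObjectStateManager
    .GetObjectStateEntries(EntityState.Added)
    .Select(e => e.Entity)
    .OfType<Customer>()
    .Any(c => c.Email == newCustomer.Email);

if (!existsInDb && !existsInContext)
{
    context.Customers.AddObject(newCustomer);
    context.SaveChanges();
}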
When I Add-Migration, I get the appropriate DbMigration class with the Up / Down methods, where I am able to make schema changes and (with the use of the Sql() method) can make data/content changes as well.
I'd like to be able to make content changes per migration using the database context. I understand that I could use the Seed method in a Configuration class, but my understanding is that I can only wire up one Configuration with my initializer.
I'd prefer to have UpCompleted()/DownCompleted() methods that would provide access to the db context after the migration completed. This would enable writing incremental data/content change "scripts" in a manner that would be less prone to errors than using the Sql() method.
Am I missing something? Is this possible?
Thanks!
That doesn't really work because the context only has your most recent model - which can only be used to access the database once the most recent migration has run (which is effectively what Seed achieves).
For an example of how this idea breaks, if you moved a property from one class to another then seed logic from older migrations would no longer compile. But you couldn't change it to use the new property because the corresponding column wouldn't exist in the database yet.
If you want to write this kind of seed/data-manipulation logic, you need to put it at the end of the Up/Down methods and use the Sql method to perform it using raw SQL.
~Rowan
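For illustration, an Up method that mixes a schema change with a Sql() data fix might look roughly like this (the migration, table and column names are made up):

public partial class AddCustomerStatus : DbMigration
{
    public override void Up()
    {
        // Schema change first...
        AddColumn("dbo.Customers", "Status", c => c.Int(nullable: false, defaultValue: 0));

        // ...then the data change, expressed as raw SQL rather than through the context.
        Sql("UPDATE dbo.Customers SET Status = 1 WHERE IsActive = 1");
    }

    public override void Down()
    {
        DropColumn("dbo.Customers", "Status");
    }
}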
We are creating a system that allows users to create and modify bills for their clients. The modifications need to be maintained as part of the bill for auditing purposes. It is to some extent a point in time architecture but we aren't tracking by time just by revision. This is a ASP.NET MVC 5, WebAPI2, EntityFramework 6, SQL Server app using Breeze on the client and the server.
I'm trying to figure out how to get Breeze and our data model to work together correctly. When we modify an entity we essentially keep the old row, make a copy of it with the modifications, and update some entity state fields with date/time/revision number and so on. We can always get the most recent version of the entity based off of an entity ID and an EditState field where "1" is the most current.
I made a small sample app to work on getting Breeze working as part of the solution and to enable some nice SPA architecture and inline editing on the client and it all works... except that since our entity framework code automatically creates a new entity that contains the modifications, the SaveChanges response contains the original entity but not the new "updated" entity. Reloading the data on the client works but it would of course be dumb to do that outside of just hacking around for demo purposes.
So I made a new ContextProvider and inherited from EFContextProvider, overrode the AfterSaveEntities method, and then things got a bit more complicated. Not all the entities have this "point in time" / revision functionality, but most of them do. If they do, I can, as I said above, get the latest version of that entity using its EntityId and EditState, but I'm not seeing a straightforward way to get the new entity (pretty new to EF and very new to Breeze), so I'm hoping to find some pointers here.
Would this solution lie in Breeze or our DataContext? I could just do some reflection, get the type, query the updated entity and shove that into the saveMap. It seems like that might break down at some point (not sure how or when but seems sketchy). Is our architecture bad? Should we have gone the route of creating audit/log tables to store the modified values instead of keeping the datamodel somewhat smaller by keeping all of the revisions of the entities in their original tables but with the revision information and making the queries slightly more complicated? Am I just missing something in EF?
... and to head off the obvious response, I know we should have used a document database, but that wasn't an option on this project. We are stuck in relational land.
I haven't tried this but another approach would be to simply change the EntityState of the incoming entity in the BeforeSaveEntities method from Modified to Added. You will probably need to also update some version field in this 'new' entity so that it doesn't have a primary key conflict with the original.
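A rough sketch of that idea, assuming a Bill entity with Revision and EditState fields (all made up) and a Breeze version where EntityInfo.EntityState can be reassigned:

protected override Dictionary<Type, List<EntityInfo>> BeforeSaveEntities(
    Dictionary<Type, List<EntityInfo>> saveMap)
{
    List<EntityInfo> billInfos;
    if (saveMap.TryGetValue(typeof(Bill), out billInfos))
    {
        foreach (var info in billInfos.Where(i => i.EntityState == EntityState.Modified))
        {
            var bill = (Bill)info.Entity;
            bill.Revision += 1;                   // avoid a key clash with the existing row
            bill.EditState = 1;                   // mark this copy as the current one
            info.EntityState = EntityState.Added; // insert a new row instead of updating
        }
    }
    return saveMap;
}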
But... having built apps like this in the past, I really recommend another approach. Store your 'historical' entities of each type in a separate table. It can be exactly the same shape as the 'current' table. When you save, you first copy the 'current' entity into the 'historical' table (again with some version numbering or date scheme for the primary key) and then just update your 'current' entity normally.
This might not give you the answer you expected, but here is an idea:
When saving an object, intercept the save on the server: you get an instance of the object being modified, read the object from the database that has the same ID, put a copy of that old object into a legacy table in your database, and continue saving into the main table. That way only the latest revision stays in the main table, while the legacy table contains all previous versions.
So, all you would need is two tables containing the same objects:
public DbSet<MyClass> OriginalMyClasses { get; set; }
public DbSet<MyClass> LegacyMyClasses { get; set; }
Then override the SaveChanges method and intercept entries whose state is Modified: for each such entry E, read E's type, get the original and legacy tables, read the object O from the Original table with the same ID as E, save O to the Legacy table, and finally return base.SaveChanges(); (let it save as it is supposed to by default).
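A rough sketch of that override. One caveat: EF Code First will not map two DbSets of the same CLR type in one context, so in practice the legacy set usually maps an identically shaped twin class; everything below (class and property names included) is an assumption.

using System.Data.Entity;
using System.Linq;

public class MyClass
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class LegacyMyClass
{
    public int Id { get; set; }          // its own key in the legacy table
    public int OriginalId { get; set; }  // points back to the current row
    public string Name { get; set; }
}

public class BillingContext : DbContext
{
    public DbSet<MyClass> OriginalMyClasses { get; set; }
    public DbSet<LegacyMyClass> LegacyMyClasses { get; set; }

    public override int SaveChanges()
    {
        foreach (var entry in ChangeTracker.Entries<MyClass>()
                                           .Where(e => e.State == EntityState.Modified)
                                           .ToList())
        {
            // The values still stored in the database are the ones to archive.
            var stored = entry.GetDatabaseValues();
            if (stored == null) continue;

            LegacyMyClasses.Add(new LegacyMyClass
            {
                OriginalId = (int)stored["Id"],
                Name = (string)stored["Name"]
            });
        }

        return base.SaveChanges(); // the modified 'current' row is saved as usual
    }
}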
I have an app, using Core Data with a SQLite store.
At some point I'd like to remove all objects for a few entities. There may be close to a thousand objects.
From what I can tell via Google and the official docs, the only way to delete objects is to call [managedObjectContext deleteObject:(Entity *)] for every object. But this means that I must first fetch all objects.
The data store is just SQLite; is there no way to simply pass a TRUNCATE TABLE ZENTITY; to it?
If you relate your objects to a parent entity, simply delete the parent. If your parent's delete rule is set to 'cascade', all of those (1k) children will be removed as well.
John
The issue is that Core Data isn't just a SQLite wrapper. It's an object graph management solution, and it stores cached versions of your objects in memory in addition to other stuff. As far as I know, to remove all instances of a given managed object you'll have to fetch them and then delete each one. This isn't to say that this functionality shouldn't exist, because it probably should.
Have you actually tried it and found it to be a performance issue? If so, then you could provide some information showing that, and more importantly file a bug report on it. If not, then why are you bothering to ask?
It's not as simple as doing a TRUNCATE TABLE ZENTITY; since Core Data must also apply the delete rule for each object (among other actions), which means doing a fetch. So they may as well let you make the fetch and then pass the results into the context one-by-one.
OK, first let me state that I have never used this control and this is also my first attempt at using a web service.
My dilemma is as follows. I need to query a database to get back a certain column and use that for my autocomplete. Obviously I don't want the query to run every time a user types another word in the textbox, so my best guess is to run the query once and then use that dataset, array, list or whatever to filter for the autocomplete extender...
I am kinda lost. Any suggestions?
Why not keep track of the query executed by the user in a session variable, then use that to filter any further results?
The trick to preventing the database from overloading, I think, is really just to limit how frequently the auto updater is allowed to update; something like once per 2 seconds seems reasonable to me.
What I would do is this: Store the current list returned by the query for word A server side and tie that to a session variable. This should be basically the entire list I would think. Then, for each new word typed, so long as the original word A exists, you can filter the session info and spit the filtered results out without having to query again. So basically, only query again when word A changes.
I'm using "session" in a PHP sense, you may be using a different language with different terminology, but the concept should be the same.
This question depends upon how transactional your data store is. Obviously, if you are looking up US states (a data collection that would not realistically change through the life of the application), then I would cache either a System.Collections.Generic List<> or, if you wanted, a DataTable.
You could easily set up a cache of the data you wish to query that is dependent upon an XML file or database, so that your extender always queries the data object cast from the cache, and the cache object is only updated when the data source changes.
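As a sketch, using the ASP.NET cache with a file dependency (the file path and the LoadStates() helper are assumptions); swap in a SqlCacheDependency if the source is a database table:

public static List<string> GetStates()
{
    var cache = System.Web.HttpRuntime.Cache;

    var states = cache["States"] as List<string>;
    if (states == null)
    {
        states = LoadStates(); // hypothetical: read from the XML file or the database
        cache.Insert(
            "States",
            states,
            new System.Web.Caching.CacheDependency(
                System.Web.HttpContext.Current.Server.MapPath("~/App_Data/states.xml")));
    }
    return states;
}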
RAM is cheap and SQL is harder to scale than IIS, so cache everything in memory:
- your entire data source, if it is not too large to load in reasonable time,
- precalculated data,
- autocomplete web service responses.
Depending on your desired autocomplete behavior and performance, you may want to precalculate data and create redundant structures optimized for reading. Make use of structures like SortedList (when you need something like 'select top x ... where z like @query + '%''), Hashtable, and so on.
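For instance, a read-optimized structure for that kind of prefix query could be a presorted list searched with BinarySearch (same idea as SortedList); LoadAllNames() is a made-up data source:

private static readonly List<string> Names = LoadAllNames()
    .OrderBy(n => n, StringComparer.OrdinalIgnoreCase)
    .ToList();

// In-memory equivalent of "select top @max ... where name like @query + '%'".
public static IEnumerable<string> TopMatches(string prefix, int max)
{
    int index = Names.BinarySearch(prefix, StringComparer.OrdinalIgnoreCase);
    if (index < 0) index = ~index; // first element >= prefix

    int returned = 0;
    for (int i = index; i < Names.Count && returned < max; i++)
    {
        if (!Names[i].StartsWith(prefix, StringComparison.OrdinalIgnoreCase)) break;
        yield return Names[i];
        returned++;
    }
}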
While caching everything is certainly a good idea, your question about which data structure to use is an issue that wasn't fully answered here.
The best data structure for an autocomplete extender is a Trie.
You can find a good .NET article and code here.
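To give a feel for the structure, here is a minimal trie sketch (purely illustrative, not the code from the linked article):

public class TrieNode
{
    public Dictionary<char, TrieNode> Children = new Dictionary<char, TrieNode>();
    public bool IsWord;
}

public class Trie
{
    private readonly TrieNode _root = new TrieNode();

    public void Add(string word)
    {
        var node = _root;
        foreach (char c in word)
        {
            TrieNode next;
            if (!node.Children.TryGetValue(c, out next))
            {
                next = new TrieNode();
                node.Children[c] = next;
            }
            node = next;
        }
        node.IsWord = true;
    }

    // Every stored word that starts with the given prefix, e.g. for an autocomplete list.
    public IEnumerable<string> StartsWith(string prefix)
    {
        var node = _root;
        foreach (char c in prefix)
            if (!node.Children.TryGetValue(c, out node))
                yield break; // no stored word has this prefix

        foreach (var word in Collect(node, prefix))
            yield return word;
    }

    private static IEnumerable<string> Collect(TrieNode node, string current)
    {
        if (node.IsWord) yield return current;
        foreach (var pair in node.Children)
            foreach (var word in Collect(pair.Value, current + pair.Key))
                yield return word;
    }
}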