Core Data delete rule -- to-many relationship, delete when empty - objective-c

I'm a little fuzzy on how the delete rules for relationships in Core Data work, at least beyond the simple cases described in the documentation.
Most of those cases, and most of the answers I've seen for questions here, use a model where the object on the left side of a one-to-many relationship "owns" the objects on the right side: e.g. a Person has PhoneNumbers, and if you delete the person you delete all their associated numbers. In that kind of case, the solution is clear: Core Data will handle everything for you if you set the relationships like so:
Person --(cascade)-->> PhoneNumber
PhoneNumber --(nullify)--> Person
What I'm interested in is the opposite: A to-many relationship where the "ownership" is reversed. For example, I might extend the CoreDataBooks sample code to add an Author entity for collecting all info about a unique author in one place. A Book has one author, but an author has many books... but we don't care about authors for whom we don't list books. Thus, deleting an Author whose books relationship is non-empty should not be allowed, and deleting the last Book referencing a particular Author should delete that Author.
I can imagine a couple of ways to do this manually... what I'm not sure of is:
does Core Data have a way to do at least some of this automagically, as with relationship delete rules?
is there a "canonical", preferred way to handle this kind of situation?

You could override prepareForDeletion in your Book class and check whether the author has any other books. If not, you can delete the author.
- (void)prepareForDeletion {
    Author *author = self.author;
    if (author.books.count == 1) { // only the book itself
        [self.managedObjectContext deleteObject:author];
    }
}
Edit: To prevent deletion of an author who still has books, you could override validateForDelete, or better yet, don't call deleteObject on an author with books in the first place.
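For reference, a minimal sketch of that validateForDelete override (the error domain and message here are placeholders, not anything Core Data defines):

```objc
// Sketch: refuse to delete an Author who still has books.
// (Setting the delete rule on Author.books to Deny achieves the same check.)
- (BOOL)validateForDelete:(NSError **)error {
    if (self.books.count > 0) {
        if (error) {
            *error = [NSError errorWithDomain:@"AppErrorDomain" // placeholder domain
                                         code:NSManagedObjectValidationError
                                     userInfo:@{ NSLocalizedDescriptionKey :
                                                 @"Cannot delete an author who still has books." }];
        }
        return NO;
    }
    return [super validateForDelete:error];
}
```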

Rickstr,
Check below for the relationships to get your two criteria done.
Author -- (Deny) -->> Books
deleting an Author whose books relationship is non-empty should not be allowed
DENY: If there is at least one object at the relationship destination, then the source object cannot be deleted.
Book --(cascade)--> Author
deleting the last Book referencing a particular Author should delete that Author
Per the first rule, you cannot delete an Author whose books relationship is non-empty. Once the last Book referencing the Author is deleted, the cascade rule deletes the Author.
Theoretically this should work. Let me know whether it does.

Similarly to Tim's solution, you can override the willSave method in your Author NSManagedObject subclass. Note that if you do use Tim's solution, I highly recommend filtering the books set for books that haven't been deleted; this way if you delete all of the Author's books at the same time, the Author will still be deleted.
- (void)willSave {
    if (!self.isDeleted) {
        NSPredicate *notDeletedPredicate = [NSPredicate predicateWithBlock:^BOOL(id evaluatedObject, NSDictionary<NSString *, id> *bindings) {
            return ![(NSManagedObject *)evaluatedObject isDeleted];
        }];
        NSSet *filteredBooks = [self.books filteredSetUsingPredicate:notDeletedPredicate];
        if (filteredBooks.count == 0) {
            [self.managedObjectContext deleteObject:self];
        }
    }
    [super willSave];
}

The following worked for me:
Set the deletion rule on the 'books' relationship of your Author entity to 'Deny', meaning that as long as a book is linked to the author, the author cannot be deleted.
Subclass your Book entity and override the prepareForDeletion() function as follows:
public override func prepareForDeletion() {
    super.prepareForDeletion()
    do {
        try author.validateForDelete()
        managedObjectContext?.delete(author)
    } catch {}
}
validateForDelete() will throw an error unless the books relationship is empty.
You can optionally handle the error.

Is there a faster way to add my Many-to-Many DBIC relationships?

I have a many-to-many relationship between what are called
Objects and ObjectGroups.
An ObjectGroup is just a 'group' of Objects, and an Object can belong to many ObjectGroups.
For example
ObjectGroupA
-------------
ObjectA
ObjectB
ObjectC
ObjectGroupB
-------------
ObjectA
ObjectB
ObjectC
ObjectD
These ObjectGroups belong to a device.
I have put the Objects and ObjectGroups into the database via multicreate, but now I need to build up the many-to-many relationships (I wasn't able to work out how to do this via multicreate; I did read the documentation), like below:
my $device = $device_rs->create(
    {
        devicename    => $deviceName,
        objects       => \@objects,
        object_groups => \@objectgroups,
    }
);
Then, for the relationships, I build up a hash of arrays where the ObjectGroup (parent) name is the key and the array contains the names of the objects. For each object group I move through the list of child objects and add them via add_to_$rel:
foreach my $parent (keys %childobjects) {
    my $parent_objectgroup = $device->object_groups->find({ objectgroupname => $parent });
    foreach my $child (@{ $childobjects{$parent} }) {
        my $child_object = $device->objects->find({ objectname => $child });
        if ($child_object) {
            # add the child object from the array to the object group
            $parent_objectgroup->add_to_childobjects($child_object);
        }
    }
}
This works, but is VERY slow as I have a lot of relationships to build up. Can anybody suggest a faster way to do this with DBIC?
Okay, so unless I am totally lost on this, one thing springs to mind.
You seem to be populating bulk data into these tables as some kind of initial load or conversion. And from your last routine it would seem that you already know which devices have which objects and which groups, given the matching you are doing.
With that in mind, the logical thing to do is just populate each table with the relational keys you already know about. Loading up each table directly is going to be a lot faster than looping through and working out relations that you already know in your data.
The operations you are trying to use to bulk load are intended for more atomic purposes, where you add new relations one at a time in a many-to-many context. These work well programmatically, as you add an additional device or object group, etc.
But as you should already know the associations of this bulk data, just load it into the tables.
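As a sketch of that idea (the ObjectGroupObject result class, its column names, and the prefetched name-to-id hashes below are assumptions; adjust to your actual schema), a single populate() call on the link table avoids the per-row find/add_to_* round trips:

```perl
# Sketch only: 'ObjectGroupObject' and the column names are assumed,
# as are the %objectgroup_id_for / %object_id_for name => id maps,
# which can each be prefetched with a single query.
my @links;
foreach my $parent (keys %childobjects) {
    my $og_id = $objectgroup_id_for{$parent};
    foreach my $child (@{ $childobjects{$parent} }) {
        push @links, {
            objectgroup_id => $og_id,
            object_id      => $object_id_for{$child},
        };
    }
}

# In void context, populate() does a fast bulk insert and skips
# inflating a row object for every link.
$schema->resultset('ObjectGroupObject')->populate(\@links);
```

The speedup comes from replacing two find() queries plus one insert per link with one bulk insert for the whole batch.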

OOP: where to put orphan methods

Apologies if this has been answered elsewhere, my search didn't yield quite the answer I was looking for.
Hypothetically speaking, let us say I am building an application for a bookshop.
I have a class that handles all my database transactions. I also have a 'Book' class which extends the Database class, calling the Database constructor from its own constructor, removing the need to instantiate the Database class first:
class Book extends Database {
    public function __construct($book_id) {
        parent::__construct();
        $this->databaseGet("SELECT * FROM ..."); // method in Database class
        // etc...
    }
}
I can pass a reference id to the 'Book' class constructor and create an object containing information pulled from the database about that book along with several methods relevant to a given book.
But I also want to list all the books in the database. My question is, where do I put this method and other methods that simply don't have a context such as 'Book'?
I could create a single "GetStuff" or 'Bookshop' class that extends the Database class and contains all these single-use methods. But that requires it to be loaded all the time, as these orphan methods would be used all over the program.
I could create lots of classes that each house a single method, but that would require instantiating a class just to call its one method, which seems like overkill.
They aren't general utilities, they have a place in the business model. Just where should I put these orphan methods?
If I understand it, you're asking where code should go that relates to a specific type but doesn't implement a behaviour of the type itself. There is no single answer. Depending on the overall design of the system, it could be part of the type - Smalltalk classes have 'class fields' and 'instance fields', and there is nothing wrong with that - or it could end up wherever it makes sense. If it relates to something external to the type itself - that is, it's not merely a matter of not being the behaviour of an instance, but a matter of being an interaction with something extraneous - it may make sense to put it outside. For instance, you may have Book, BookDatabase, BookForm, BookWebService, etc. There's no harm in some of those classes having few members; you never know when you'll want to add some more.
Book is a book; Books is a collection of books.
Database is one thing you could use to persist a lot of books so you don't have to type them all in again.
It could be an XML file, an Excel spreadsheet, even a web service.
So write Book and Books, then write something like BookDatabase, which extends Database with methods like
Books GetBooks();
and
void SaveBook(Book argBook);
The real trick is to make Book and Books work no matter what / how they are stored.
There's a lot more to learn around that, but the first thing to do is to start again and not make your data objects dependent on a particular "database".
It seems your design is seriously flawed. You have to separate three concerns:
Your Domain Model (DM): in this case, Book belongs to it.
Data Access Layer (DAL): handles database storage. The Domain Model does not know about this layer at all.
Service Layer (SL): handles use cases. A use case may involve multiple objects from the Domain, as well as calls to the DAL to save or retrieve data. Methods in the service layer perform a unit of work.
A simplified example:
// Model object
class Book {
    title;
    author;
    isbn;
    constructor(title, author, isbn) { /* ... */ }
    // other methods ...
}

// DAL class
class BookDataMapper {
    // constructors ...
    save(Book book) {}
    Book getById(id) {
        // build the query: select * from book where book_id = :id
        // execute the query and parse the result
        Book book = new Book(parsed result);
        return book;
    }
    Book getByTitle(title) {}
    // ...
    getAll() {} // returns all books
}
// Service class
class BookService {
    BookDataMapper bookMapper;
    buyBook(title) {
        // check the user account
        // check if the book is available
        Book book = bookMapper.getByTitle(title);
        // if the book is available:
        //     charge the customer
        //     send the book to shipping, etc.
    }
}

How to add new values to nested Core Data objects?

My Core Data object model has three nested objects as shown below:
Item
Beverage
Brand
When I first create an instance of Item
Item *item = [NSEntityDescription insertNewObjectForEntityForName:@"Item" inManagedObjectContext:self.objectContext];
the item.beverage property is nil. Next I want to store a value in the item.beverage.brand.title property.
Do I have to create an instance of Beverage and assign it to item.beverage, then create an instance of Brand and assign it to item.beverage.brand.
item.beverage = [NSEntityDescription insertNewObjectForEntityForName:@"Beverage" inManagedObjectContext:self.objectContext];
item.beverage.brand = [NSEntityDescription insertNewObjectForEntityForName:@"Brand" inManagedObjectContext:self.objectContext];
before I can finally assign the value to the title property?
item.beverage.brand.title = @"Sample Title";
Is there a shorter/less verbose way to do this?
You need to create each object in the object graph; I'm not aware of any Core Data-provided shortcuts. You could of course write your own methods, categories, or macros to reduce the verbosity if you find yourself writing a lot of boilerplate code.
If the purpose of your separate entities is primarily encapsulation of behavior or normalization of data, it sometimes occurs that you never wish to insert a parent without automatically inserting a child. In your case, this could be an Item that always, without fail, has a Beverage.
In this case, you could set the relationship between Item and Beverage to be non-optional and to-one, with a cascading delete rule. Together, these reflect the fact that the one is meaningless without the other. Unfortunately, in that case, insertion of the Item still does not insert the Beverage. To do that, override awakeFromInsert to perform this insertion and then it will be automatically present whenever an instance of Item is created.
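A minimal sketch of that awakeFromInsert override on Item (assuming the entity names from the model above):

```objc
// Sketch: every freshly inserted Item gets a Beverage automatically,
// so item.beverage is never nil after creation.
- (void)awakeFromInsert {
    [super awakeFromInsert];
    self.beverage = [NSEntityDescription insertNewObjectForEntityForName:@"Beverage"
                                                  inManagedObjectContext:self.managedObjectContext];
}
```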
If, on the other hand, it is possible that an Item may not have a Beverage, but when it does, that Beverage always has a name, then as the other poster commented, adding custom logic - say an Item setBrandName: method that inserts a Beverage if one is not already present - is also a viable solution.
Your business logic drives this kind of decision.
Also, fwiw, 'item', while not a reserved keyword, may be a good word to avoid as a class/variable name because it's an abstract/programming-related concept. That's entirely separate from the rest of my answer. ;-)

WCF: how to modify records with different types of operations (update, add, delete...)

Well, in this post I came to the conclusion that it is better to work with services in per-call mode, because it's more efficient. This gives me data contexts with a short life: the life of the method that I call.
In this example, I see how to modify the data of a list of elements; you only need to set the state of the entity to Modified.
However, how can I combine different operations, for example updates and additions, in a single call?
One case can be this: I have books and authors, and the client application has a view with two datagrids, one for authors and another for books. The user can add authors and modify their information in the first datagrid and do the same with the books in the second. The user can also assign books to their authors.
I have POCO classes, so I have an Author class with a property that is a list of books. I can add books to this list, and then when I call the service method updateAuthors, I only need to pass the Author object as the parameter; EF knows what to do with the books. It is not necessary to pass the books too.
But what happens when the list of books contains both new books and existing books whose information has been modified?
The post I mentioned at the beginning explains what to do when all the entities are modified, but if I want to add new records, I need to set their state to Added. So if the entities are mixed, how can I do it? Is there a pattern or some way to do this? Do I have to set the state of every book? I can tell the state of each book, because I use an autonumeric ID: if the ID is 0 it is a new record; otherwise it is a modification.
Thanks.
Daimroc.
EDIT: Perhaps I wasn't very clear in my question. What I want to know is how I can save the changes to many entities at once. For example, I have Authors, Books and Customers. I add, modify and delete information in many of them. In my WCF client I have one method to save changes to Authors, another to save changes to Books and another to save changes to Customers.
Since my service is per-call, I need to make three calls, one for each type of entity, and these calls are independent. However, if I use Entity Framework directly, I can make many changes to many types of entities and only need to call SaveChanges once; it does all the work. How can I do the same with WCF and POCO entities?
I am reading about self-tracking entities, but I have the same problem: I can use ApplyChanges(entity), but if I am not wrong, it applies changes only to one entity. Do I need to call it N times if I changed many entities?
Thanks.
Not sure if this will answer your question, but here is my suggestion:
Manage the state on your POCO entities by using flags (IsNew, IsDirty, IsDeleted);
When you pass the POCO entities to the object context, use the ObjectStateManager to change the attached entity's state;
Recursively loop through the children entities and apply the same approach.
EDIT:
The following code is the AuthorStateManager class:
public partial class AuthorStateManager : IStateManager<Author, Context>
{
    private IStateManager<Book, Context> _BookStateManager = new BookStateManager();

    public void ChangeState(Author m, Context ctx)
    {
        if (m == null) return;
        ctx.Authors.Attach(m);
        if (m.IsDeleted)
        {
            ctx.ObjectStateManager.ChangeObjectState(m, System.Data.EntityState.Deleted);
        }
        else if (m.IsNew)
        {
            ctx.ObjectStateManager.ChangeObjectState(m, System.Data.EntityState.Added);
        }
        else if (m.IsDirty)
        {
            ctx.ObjectStateManager.ChangeObjectState(m, System.Data.EntityState.Modified);
        }
        SetRelationsState(m, ctx);
    }

    private void SetRelationsState(Author m, Context ctx)
    {
        foreach (Book varBook in m.Books)
        {
            _BookStateManager.ChangeState(varBook, ctx);
        }
    }
}
where Authors is the ObjectSet, m is a POCO entity of type Author, ctx is the object context, and SetRelationsState is the method that loops through all the children's state managers to update their state.
After changing the state, in my repository object I call ctx.SaveChanges(). This is the Update method in AuthorRepository class:
public Author Update(Author m, bool commit)
{
    _AuthorStateManager.ChangeState(m, _ctx);
    if (commit)
    {
        _ctx.SaveChanges();
    }
    return m;
}
_BookStateManager is a private member of type BookStateManager, which modifies the Book state in its own ChangeState() method.
I suggest you make the State Manager classes implement an interface called IStateManager, which has the ChangeState() method.
It seems a bit convoluted, but it gets easier if you generate code for these classes.
If you want to perform multiple actions in a single service call, then the action to take needs to move from being a method call to being an object. For example, you might have an InsertCustomerAction class which has a Customer instance tied to it. All of these actions would share a base class (Action), and your WCF method would take in a collection of Action instances.
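As a sketch of that idea (all class and member names here are hypothetical, not part of any WCF or EF API), the batch could look like this:

```csharp
// Hypothetical sketch: each pending change becomes an action object,
// and one service call submits the whole batch as a single unit of work.
public abstract class EntityAction
{
    public abstract void Apply(Context ctx);
}

public class InsertCustomerAction : EntityAction
{
    public Customer Customer { get; set; }

    public override void Apply(Context ctx)
    {
        ctx.Customers.AddObject(Customer); // mark as Added
    }
}

// Service side: apply every action, then commit once.
public void SubmitActions(IEnumerable<EntityAction> actions)
{
    using (var ctx = new Context())
    {
        foreach (var action in actions)
            action.Apply(ctx);
        ctx.SaveChanges(); // one SaveChanges for the whole batch
    }
}
```

For this to cross the wire, each concrete action type would need to be a known type on the service contract (e.g. via [KnownType]).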

Accept Interface into Collection (Covariance) troubles with nHibernate

I am using Fluent NHibernate for my persistence layer in an ASP.NET MVC application, and I have come across a bit of a quandary.
I have a situation where I need to use an abstraction to store objects into a collection, in this situation, an interface is the most logical choice if you are looking at a pure C# perspective.
Basically, an object (Item) can have Requirements. A requirement can be many things. In a native C# situation, I would merely accomplish this with the following code.
interface IRequirement
{
    // methods and properties necessary for evaluation
}

class Item
{
    virtual int Id { get; set; }
    virtual IList<IRequirement> Requirements { get; set; }
}
A crude example. This works fine in native C#; however, because the objects have to be stored in a database, it becomes a bit more complicated than that. Each object that implements IRequirement could be a completely different kind of object. Since NHibernate (or any other ORM that I have tried) cannot really understand how to persist an interface, I cannot think, for the life of me, how to approach this scenario. I mean, I understand the problem.
The following makes no sense to the database/ORM, and I understand completely why:
class SomeKindOfObject
{
    virtual int Id { get; set; }
    // ... some other methods relative to this base type
}

class OneRequirement : SomeKindOfObject, IRequirement
{
    virtual string Name { get; set; }
    // some more methods and properties
}

class AnotherKindOfObject
{
    virtual int Id { get; set; }
    // ... more methods and properties, different from SomeKindOfObject
}

class AnotherRequirement : AnotherKindOfObject, IRequirement
{
    // yet more methods and properties relative to AnotherKindOfObject's inheritance hierarchy
}

class OneRequirementMap : ClassMap<OneRequirement>
{
    // etc.
    Table("OneRequirement");
}

class AnotherRequirementMap : ClassMap<AnotherRequirement>
{
    //
    Table("OtherRequirements");
}

class ItemMap : ClassMap<Item>
{
    // ... Now we have a problem.
    Map( x => x.Requirements ) // does not compute...
    // additional mapping
}
So, does anyone have any ideas? I cannot seem to use generics either, so making a basic Requirement<T> type seems out. I mean, the code works and runs, but the ORM cannot grasp it. I realize what I am asking here is probably impossible, but all I can do is ask.
I would also like to add that I do not have much experience with NHibernate, only Fluent NHibernate, but I have been made aware that both communities are very good, so I am tagging this as both. My mapping at present is 100% 'fluent'.
Edit
I actually discovered Programming to interfaces while mapping with Fluent NHibernate that touches on this a bit, but I'm still not sure it is applicable to my scenario. Any help is appreciated.
UPDATE (02/02/2011)
I'm adding this update in response to some of the answers posted, as my results are ... a little awkward.
Taking the advice, and doing more research, I've designed a basic interface.
interface IRequirement
{
    // ... same as it always was
}
and now I establish my class mapping:
class IRequirementMap : ClassMap<IRequirement>
{
    public IRequirementMap()
    {
        Id(x => x.Id);
        UseUnionSubclassForInheritanceMapping();
        Table("Requirements");
    }
}
And then I map something that implements it. This is where it gets very freaky.
class ObjectThatImplementsRequirementMap : ClassMap<ObjectThatImplementsRequirement>
{
    ObjectThatImplementsRequirementMap()
    {
        Id(x => x.Id); // Yes, I am base-class mapping it.
        // other properties
        Table("ObjectImplementingRequirement");
    }
}

class AnotherObjectThatHasRequirementMap : ClassMap<AnotherObjectThatHasRequirement>
{
    AnotherObjectThatHasRequirementMap()
    {
        Id(x => x.Id); // Yes, I am base-class mapping it.
        // other properties
        Table("AnotheObjectImplementingRequirement");
    }
}
This is not what people suggested, but it was my first approach, and I took it because I got some very freaky results. Results that really make no sense to me.
It Actually Works... Sort Of
Running the following code yields unanticipated results.
// setup ISession
// setup Transaction
var requirements = new List<IRequirement>
{
    new ObjectThatImplementsRequirement
    {
        // properties, etc.
    },
    new AnotherObjectThatHasRequirement
    {
        // other properties.
    }
};
// add to session.
// commit transaction.
// close writing block.
// setup new session
// setup new transaction
var requireables = session.Query<IRequirement>();
foreach (var requireable in requireables)
    Console.WriteLine(requireable.Id);
Now things get freaky. I get the results...
1
1
This makes no sense to me. It shouldn't work. I can even query the individual properties of each object, and each has retained its type. Even if I run the insertion, close the application, then run the retrieval (so as to rule out caching), they still have the right types. But the following does not work.
class SomethingThatHasRequireables
{
    // ...
    public virtual IList<IRequirement> Requirements { get; set; }
}
Trying to add to that collection fails (as I expect it to). Here is why I am confused:
If I can add to the generic IList<IRequirement> in my session, why not in an object?
How does NHibernate tell the difference between two entities with the same Id if they are both mapped as the same kind of object in one scenario and not in the other?
Can someone explain to me what in the world is going on here?
The suggested approach is to use SubclassMap<T>; however, the problem with that is the number of identities and the size of the table. I am concerned about scalability and performance if multiple object types (up to about 8) draw their identities from one table. Can someone give me some insight on this one specifically?
Take a look at the chapter Inheritance mapping in the reference documentation. In the chapter Limitations you can see what's possible with which mapping strategy.
You've chosen one of the "table per concrete class" strategies, as far as I can see. You may need <one-to-many> with inverse=true or <many-to-any> to map it.
If you want to avoid this, you need to map IRequirement as a base class into a table; then it is possible to have foreign keys to that table. Doing so turns it into a "table per class-hierarchy" or "table per subclass" mapping. This is of course not possible if another base class is already mapped, e.g. SomeKindOfObject.
Edit: some more information about <one-to-many> with inverse=true and <many-to-any>.
When you use <one-to-many>, the foreign key is actually in the requirement tables pointing back to the Item. This works well so far, NH unions all the requirement tables to find all the items in the list. Inverse is required because it forces you to have a reference from the requirement to the Item, which is used by NH to build the foreign key.
<many-to-any> is even more flexible. It stores the list in an additional link table. This table has three columns:
the foreign key to the Item,
the name of the actual requirement type (.NET type or entity name)
and the primary key of the requirement (which can't be a foreign key, because it could point to different tables).
When NH reads this table, it knows from the type information (and the corresponding requirement mapping) in which other tables the requirements are. This is how any-types work.
That it is actually a many-to-many relation shouldn't bother you; it only means that the relation is stored in an additional table which is technically able to link a requirement to more than one item.
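For illustration, a <many-to-any> collection mapping in hbm.xml might look roughly like this (the table and column names below are made up, and the exact element layout should be checked against the NHibernate reference documentation):

```xml
<!-- Sketch: a link table holding the item key, the requirement's type
     discriminator, and the requirement's id. -->
<bag name="Requirements" table="ItemRequirements">
  <key column="ItemId"/>
  <many-to-any id-type="Int32" meta-type="String">
    <meta-value value="OneRequirement" class="OneRequirement"/>
    <meta-value value="AnotherRequirement" class="AnotherRequirement"/>
    <column name="RequirementType"/>
    <column name="RequirementId"/>
  </many-to-any>
</bag>
```

The meta-value elements map the stored type string to the concrete class, which is how NH knows which requirement table each row points into.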
Edit 2: freaky results:
You mapped 3 tables: IRequirement, ObjectThatImplementsRequirement, and AnotherObjectThatHasRequirement. They are all completely independent. You are still on "table per concrete class with implicit polymorphism"; you just added another table containing IRequirements, which may also cause some ambiguity when NH tries to find the correct table.
Of course you get 1, 1 as the result: they are independent tables and therefore have independent ids, which both start at 1.
The part that works: NHibernate is able to find all the objects implementing an interface in the whole database when you query for it. Try session.CreateQuery("from object") and you get the whole database.
The part that doesn't work: on the other side, you can't get an object just by id and interface or object. So session.Get<object>(1) doesn't work, because there are many objects with id 1. The same problem applies to the list. And there are more problems there, for instance the fact that with implicit polymorphism, there is no foreign key specified that points from every type implementing IRequirement to the Item.
The any types: This is where the any type mapping comes in. Any types are stored with additional type information in the database and that's done by the <many-to-any> mapping which stores the foreign key and type information in an additional table. With this additional type information NH is able to find the table where the record is stored in.
The freaky results: consider that NH needs to resolve both directions, from the object to a single table and from the record to a single class. So be careful when mapping both the interface and the concrete classes independently; it could happen that NH uses one table or the other depending on which way you access the data. This may have been the cause of your freaky results.
The other solution: Using any of the other inheritance mapping strategies, you define a single table where NH can start reading and finding the type.
The id scope: if you are using Int32 as the id, you can create one record each second for 68 years before you run out of ids. If this is not enough, just switch to long; you'll get... probably more than the database is able to store in the next few thousand years.