Customize Core Data model at runtime? - objective-c

I want a model which could be customized by the user.
Is it possible with core data or are there better solutions?
Thanks, matchi
P.S.: It is an application for Mac OS!

This is explained under "Creating the Managed Object Model" of Apple's Core Data Utility Tutorial. In general, once you have a reference to a managed object model, you can use the NSEntityDescription and NSAttributeDescription classes to customize the entities and their attributes in the managed object model.
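As a minimal sketch of what that looks like in code (the entity and attribute names here are invented for the example, and manual reference counting is assumed):

```objc
#import <CoreData/CoreData.h>

// Build an empty model and describe one entity with a single attribute.
NSManagedObjectModel *model = [[[NSManagedObjectModel alloc] init] autorelease];

NSEntityDescription *entity = [[[NSEntityDescription alloc] init] autorelease];
[entity setName:@"Note"];

NSAttributeDescription *title = [[[NSAttributeDescription alloc] init] autorelease];
[title setName:@"title"];
[title setAttributeType:NSStringAttributeType];
[title setOptional:YES];

// Attach the attribute to the entity, and the entity to the model.
[entity setProperties:[NSArray arrayWithObject:title]];
[model setEntities:[NSArray arrayWithObject:entity]];
```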
Note, however, that in most cases once you modify a managed object model it will no longer be compatible with existing persistent data stores, meaning that you will then have to migrate data from your old persistent store to your new one. This is definitely not an endeavor to be taken lightly.
Of course, as mentioned in the comments, Core Data can also migrate data automatically, a process known as lightweight migration. In general, though, to do so,
Core Data needs to be able to find the source and destination managed object models itself at runtime. (Core Data searches the bundles returned by NSBundle’s allBundles and allFrameworks methods.) It must then analyze the schema changes to persistent entities and properties and generate an inferred mapping model. For Core Data to be able to do this, the changes must fit an obvious migration pattern, for example:
Simple addition of a new attribute
A non-optional attribute becoming optional
An optional attribute becoming non-optional, and defining a default value
Does this fit your use case, or do you want to allow your users to change the managed object model in ways that would make lightweight migration impossible?
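If it does, opting in is just a matter of passing two options when adding the persistent store. A minimal sketch, assuming coordinator and storeURL already exist:

```objc
// Ask Core Data to migrate automatically and to infer the mapping model.
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption,
    [NSNumber numberWithBool:YES], NSInferMappingModelAutomaticallyOption,
    nil];

NSError *error = nil;
NSPersistentStore *store =
    [coordinator addPersistentStoreWithType:NSSQLiteStoreType
                              configuration:nil
                                        URL:storeURL
                                    options:options
                                      error:&error];
if (store == nil) {
    NSLog(@"Lightweight migration failed: %@", error);
}
```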
In any case, I highly recommend that you read through the following documents before you try to allow your users to modify Core Data models.
Core Data Programming Guide
Core Data Model Versioning and Data Migration Programming Guide
Core Data Utility Tutorial
NSPersistentDocument Core Data Tutorial

See the NSManagedObjectModel reference page...
Managed object models are editable until they are used by an object graph manager... However, once a model is being used, it must not be changed...
I'd say this is definitely an advanced Core Data topic (and Core Data itself is already a pretty advanced topic), not to be undertaken lightly. I'm not sure that any data already stored in a data store would be useful (or even usable) if you let the user modify the model.

Related

Use a 3rd party object as a Core Data Entity

I want to pull down and cache notes, notebooks and tags from the Evernote service using their iOS SDK. Their SDK comes with a Store that returns an array of model objects matching filter criteria I set.
I want to take those models and use them as Entities in Core Data. I understand that I can't, because they inherit from NSObject. So my question to all of you is: what best practices can I employ when I model my entities on the Evernote model objects? It is a real pain because every time they change something, I have to reflect the same changes in my entities. Is there a workaround, or am I stuck building a bridge (so to speak)?
Thanks,
Johnathon
Following my comment
I don't understand your question here. Just kick off a data import each time models are returned from Evernote. Each model should be designed through a Core Data entity.
and your reply to it:
Sorry, I'm not sure what you mean by importing. Bring down the objects from Evernote, then manually assign their object properties to my entities? That will be a pain, but it is an option. There's a lot of properties to copy.
By importing I mean that you should insert a managed object for each model contained in the results received from Evernote.
This means that if Evernote returns a model that contains three properties, you should create an Entity that looks the same (or similar, since it strictly depends on what your UI will be).
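As a rough sketch of such an import (I don't know the Evernote SDK's classes, so the entity and property names below are placeholders):

```objc
// Insert one managed object per model returned by Evernote.
// "Note", "title" and "guid" are placeholder names.
for (id evernoteNote in results) {
    NSManagedObject *note =
        [NSEntityDescription insertNewObjectForEntityForName:@"Note"
                                      inManagedObjectContext:context];
    [note setValue:[evernoteNote valueForKey:@"title"] forKey:@"title"];
    [note setValue:[evernoteNote valueForKey:@"guid"] forKey:@"guid"];
}

NSError *error = nil;
if (![context save:&error]) {
    NSLog(@"Import failed: %@", error);
}
```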
Here I suppose that your Core Data store is a cache, so you will need to apply some synchronization logic: items should be inserted, updated, or removed based on user actions. Synchronization is not easy to achieve, but I can suggest the following tutorials.
How To Synchronize Core Data with a Web Service – Part 1
How To Synchronize Core Data with a Web Service – Part 2
You could also take advantage of RestKit in this case, since it offers integration with Core Data. In particular, it allows you to map NSObjects, for example ones returned from a JSON call, to a Core Data entity in an easy way. An example can be found in the NSScreencast GitHub repository. Note that I don't know how the Evernote SDK works, so this approach may not be useful.
But if you are new to RestKit and Core Data, I really suggest sticking with plain Core Data. It's already difficult enough on its own.
If you need something else let me know.
Update 1
I am going to be doing a synchronization for sure, so I assume I have to map the Evernote objects completely to Managed Objects. Since the Evernote objects can contain data blobs representing video, pictures, files, etc., I will need to look at how to store that data in Core Data as well.
In Core Data, it is good practice (not a must, but strongly advised) to store files (e.g. images) in the file system. Within an entity you should maintain only the meta-information (i.e. the path) for an image, and use it to retrieve the image later. This is not necessary for small data, but I think your binaries will be big in size.
Starting from iOS 5 there is a new flag called External Storage that does this for you based on a heuristic algorithm.
If you specify that the value of a managed object attribute may be stored as an external record, Core Data heuristically decides on a per-value basis whether it should save the data directly in the database or store a URI to a separate file that it manages for you.
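If you do it by hand instead, the idea reduces to writing the blob to disk and persisting only its path. A sketch (the directory, file, and attribute names are illustrative):

```objc
// Write the binary data to the file system...
NSString *directory = [NSTemporaryDirectory() stringByAppendingPathComponent:@"blobs"];
[[NSFileManager defaultManager] createDirectoryAtPath:directory
                          withIntermediateDirectories:YES
                                            attributes:nil
                                                 error:NULL];
NSString *path = [directory stringByAppendingPathComponent:@"note-123.jpg"];
[imageData writeToFile:path atomically:YES];

// ...and keep only the meta-information (the path) in the entity.
[entity setValue:path forKey:@"imagePath"];
```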
For searching binary files, I really suggest adding an attribute called, for example, tag. This will allow you to search for images, videos, etc. Obviously, when you save, you need to associate that tag with the corresponding binary data. This is just an idea.
P.S. If you need further support, I really suggest opening a new question on SO, so that each question stays self-contained.
You probably want to save the object as NSData. Since I don't know which object you are looking to use, I can't really tell whether it's suitable for this. To see if it is, you would have to check whether the class adopts the NSCoding protocol.
More info on archiving could be found in Apple's documentation:
https://developer.apple.com/library/mac/documentation/Cocoa/Conceptual/Archiving/Archiving.html
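If it does, the archiving itself is a one-liner in each direction. A minimal sketch (myObject, managedObject and the attribute name are placeholders):

```objc
// Archive an NSCoding-conforming object into NSData so it can be stored
// in a binary attribute of a managed object.
NSData *blob = [NSKeyedArchiver archivedDataWithRootObject:myObject];
[managedObject setValue:blob forKey:@"payload"];

// Later, re-inflate the object from the stored data.
id restored = [NSKeyedUnarchiver unarchiveObjectWithData:blob];
```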

Automatically generating POCO classes

Today I was checking out a few technologies: T4 templating, AutoMapper
some mini ORMs: PetaPoco, SqlFu, OrmLite
I understand the gist of what these technologies provide. I'm currently working on a 3-tier system, and I would have loved to replace the DAL (a data access layer located on its own data server) and have it integrated with a mini ORM as shown. However, I will be making no such plans for now. We currently use .NET Remoting (which predates WCF).
So instead of replacing whatever is on the DataServer, I'd like to extend one of these new technologies on the application server.
I've done research on how Entity Framework can automatically generate POCO classes based on the context (something done manually after building the EF model), and I was wondering if I can do the same without using EF.
So here are the facts about what's currently happening:
Send a SQL statement (or stored proc) to the DAL to execute
Retrieve a DataSet or DataTable back to the application through a TCP channel
My question is: is it possible to automatically generate a dynamic POCO class, using the keywords "var" and "dynamic", based on the values sent back in the DataSet, and do dynamic mapping onto it at runtime? Would any of the technologies mentioned above help? Or do I have to manually create the POCO class first and then map onto it?
It seems a bit redundant to manually create a POCO class and map it to a backend SQL table when the application could be aware of what the POCO class is supposed to contain. For example, if I update a table on the backend, I'd then have to update the POCO class associated with it as well. I'd love for this to be automatic.
If you know the data sets at compile time, then T4 might be an option. You can write a T4 script that downloads the database schema and constructs strongly-typed entity classes and database read/write methods.
As for late-bound (runtime) classes, one option is to use the runtime typing provided by CustomTypeDescriptor. You can pass arrays of objects back and forth from the server, and use reflection or other techniques to infer the type.
I think it should be clear that the first option is preferable if you know the types at compile time (which it sounds like you do in your case). Runtime and dynamic typing should only be a last resort, as they circumvent a lot of valuable compile-time type checks.
Really, I would recommend using one of the micro ORMs like Dapper, etc, if you don't want to use the full Entity Framework. That is, unless you really want to re-invent the wheel.

Accessing Stored Core Data Entities from Different Classes

I am quite new to Core Data, and I'm trying to implement it in my relatively simple OS X application. My application takes some file URLs provided by the user, gets some more information about the files (their creation date, for example), and then stores the URLs for later use.
I want to have those file URLs, and related data, stored in a 'central' location so I can access, modify, and change the order of them (order is really important) from any of the classes in my application (correct me if I'm wrong, but I think Core Data is ideal for this).
I have my Core Data Model set up in Xcode (it only has one Entity with a couple of Attributes), I've created an NSManagedObject subclass to match the Entity in the Model, and I'm using Bindings to tie the data to a TableView. However, like I said, I need to be able to get at this data from any class in my application. I have been reading Apple's documentation and a book with a section on Core Data; however, I am struggling to get my head around it and have yet to come across a section that addresses the needs I mentioned above.
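To illustrate, the kind of access I'm after from an arbitrary class is roughly this (just a sketch: "AppDelegate" and its managedObjectContext accessor follow the standard Xcode Core Data template, and "FileEntry"/"orderIndex" are names of my own):

```objc
// Reach the shared context from any class via the application delegate
// (the managedObjectContext accessor comes from the Xcode Core Data template).
AppDelegate *delegate = (AppDelegate *)[NSApp delegate];
NSManagedObjectContext *context = [delegate managedObjectContext];

// Fetch the stored file entries in a user-defined order.
NSFetchRequest *request = [[[NSFetchRequest alloc] init] autorelease];
[request setEntity:[NSEntityDescription entityForName:@"FileEntry"
                               inManagedObjectContext:context]];
[request setSortDescriptors:[NSArray arrayWithObject:
    [NSSortDescriptor sortDescriptorWithKey:@"orderIndex" ascending:YES]]];

NSError *error = nil;
NSArray *files = [context executeFetchRequest:request error:&error];
```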
Any help with this (even just a link to a useful article) would be very much appreciated.
Thanks in advance.

Use RavenDB as the database for an Orchard CMS module

I'm just getting underway with Orchard CMS. How difficult would it be to create an Orchard module that uses RavenDB as its database? Is a hard dependency on SQL and NHibernate baked deeply into Orchard?
All of Orchard's core features are based on NHibernate, so it would be difficult to move the entire Orchard database to another DBMS not supported by NHibernate. However, Orchard is very extensible, and it is quite easy to access all kinds of custom data sources from your own modules. For example, I am currently working on a project where we store our data in a graph database (Neo4j) and access it in Orchard through a WCF service.
It depends on what kind of data you need to access, but you will probably need to create a custom content part which dynamically loads data instead of using the underlying SQL database through NHibernate. You can do this by inheriting from the non-generic ContentPart class (the generic one uses a record stored using NHibernate) and using a ContentHandler to populate the data from your custom data source.
There is an experimental RavenDB-based data layer implementation in the 'ravendb' Mercurial branch.
It was built a couple of months ago and I'm not sure about its compatibility with the current release, but you can give it a try. There have been no big changes to the data layer since then, so I assume it should work, or need just a couple of tweaks.

Object serialization practical uses?

How many of the software projects you have worked on used object serialization? I personally never came across a scenario where object serialization was used. One use case I can think of is a server application storing objects to disk to save memory. Are there other types of software where object serialization is essential or preferred over a database?
I've used object serialization in a lot of my projects. Sometimes we use it to store computer-specific settings locally. I have also used XML serialization to simplify the interaction with and generation of XML documents. It is also very beneficial in communication protocols: serialize on one end and re-inflate on the other.
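To make the settings case concrete in the language used elsewhere on this page (Objective-C/Cocoa, where serialization goes through NSCoding), here is a minimal sketch; the class and key names are invented:

```objc
// A settings object that adopts NSCoding so it can be serialized to disk
// and re-inflated later.
@interface Settings : NSObject <NSCoding>
{
    NSString *serverName;
}
@property (copy) NSString *serverName;
@end

@implementation Settings
@synthesize serverName;

- (void)encodeWithCoder:(NSCoder *)coder {
    [coder encodeObject:serverName forKey:@"serverName"];
}

- (id)initWithCoder:(NSCoder *)coder {
    if ((self = [super init])) {
        serverName = [[coder decodeObjectForKey:@"serverName"] copy];
    }
    return self;
}
@end

// Usage, from inside some method: serialize on one end...
//   [NSKeyedArchiver archiveRootObject:settings toFile:path];
// ...and re-inflate on the other.
//   Settings *restored = [NSKeyedUnarchiver unarchiveObjectWithFile:path];
```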
Well, converting objects to XML or JSON is a form of serialization that is quite common on the web. I've also worked on a project where objects were created and serialized to a binary file in one application and then imported into another custom application (though that's fragile since it uses C# and serialization has broken in the past between versions of the .NET framework). Also, application settings that have a complex structure may be useful to serialize. I also think remoting APIs use serialization to communicate. Basically, serialization in general is simply a way to store the states of your objects, and this has many different uses.
Here are a few uses I can think of:
Sending an object across the network; the most common example is serializing objects across a cluster
Serializing an object as a (sort of) cache, i.e. saving its state in a file and reading it back later
Serializing passive/huge data to a file to minimize memory consumption, reading it back whenever required
I'm using serialization to pass objects across a TCP socket. You put XmlSerializers on either side, and they parse your data into readily available objects. If you do a little groundwork, you can get it so that you're basically passing objects back and forth, and it makes socket communication extremely easy, reducing it to nothing more than socket.Send(myObject).
Interprocess communication is a biggie.
You can combine a DB and serialization, for example when you have to store an object with a lot of attributes (often dynamic, i.e. one object's attribute set will be different from another's) in a relational DB and you don't want to create a new column for each attribute.
We started out with a system that serialized all of the thousands of in-memory objects to disk every 15 minutes or so. When that started taking too long, we switched over to a mixed mode of saving the objects into a relational DB and a pickle file (this was a Python system, btw). Eventually the majority of the data was stored in the relational database. Interestingly, the system was written in such a way that all of the application code couldn't care less what was going on down there. It was all done using XP and thousands of automated tests.
Document based applications such as word processors and vector graphics editors will often serialize the document model to disk when the user invokes the Save command. Serialization is often preferred over complex databases in these apps.
Using serialization saves you time each time you want to implement import/export functionality.
Every time you need to export your system's data, create backups, or store some kind of settings, you can use serialization instead, and just save the state of the objects that represent the actual config, data, or whatever else.
Only when you need a specific format for the exported/imported data does it make sense to build a custom parser and exporter/importer.
Serialization is also resilient to change: whenever you change the format of the object involved in the exchange functionality, it remains automatically exportable, and you don't have to change the logic behind your export/import code.
We used it for backup & update functionality. It was basically serialized Hibernate objects being backed up; the DB schema was then altered through the update, and we delivered a helper class that "converted" the old objects to the new DB schema. This way we had a pretty solid update mechanism that wouldn't break easily and performed an automatic backup at the same time.
I've used XML serialization heavily on one project. The technique was used to persist to the database data structures that had no common structure, so the data couldn't be stored directly in regular columns. I also used serialization to store application settings that could be changed at runtime.