I am a Mac app developer and have some questions about how to use Core Data correctly.
I have multiple table views that all work with the same data (which I want to save persistently with Core Data). I know the central piece of Core Data I have to work with, NSManagedObjectContext, and I know how to create NSManagedObjects and save/edit/delete them in the persistent store.
But I'm actually wondering how to organize all of that together with multiple view controllers, views, tables, and so on, efficiently and without merge conflicts. I've read about some approaches: one was passing the context down from the app delegate through every layer of your code. Somebody else suggested creating a singleton class, let's call it DatabaseManager, which holds the NSManagedObjectContext instance, and you go and ask it for the context from there. (I especially like this approach, don't know why.) Or you just ask the delegate every time with [[NSApplication sharedApplication] delegate]; does this have any disadvantages?
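To make the singleton approach concrete, here is a minimal sketch of what I have in mind; DatabaseManager, the store file name, and the bare-bones stack setup are all placeholders:

#import <CoreData/CoreData.h>

// DatabaseManager.h
@interface DatabaseManager : NSObject
@property (strong, readonly) NSManagedObjectContext *managedObjectContext;
+ (DatabaseManager *)sharedManager;
@end

// DatabaseManager.m
@implementation DatabaseManager

+ (DatabaseManager *)sharedManager {
    static DatabaseManager *shared = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{ shared = [[DatabaseManager alloc] init]; });
    return shared;
}

- (instancetype)init {
    self = [super init];
    if (self) {
        // Merge all models in the main bundle and build the simplest possible stack.
        NSManagedObjectModel *model = [NSManagedObjectModel mergedModelFromBundles:nil];
        NSPersistentStoreCoordinator *psc =
            [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:model];
        NSURL *supportDir = [[[NSFileManager defaultManager]
            URLsForDirectory:NSApplicationSupportDirectory
                   inDomains:NSUserDomainMask] firstObject];
        [psc addPersistentStoreWithType:NSSQLiteStoreType
                          configuration:nil
                                    URL:[supportDir URLByAppendingPathComponent:@"MyApp.sqlite"]
                                options:nil
                                  error:NULL];
        _managedObjectContext = [[NSManagedObjectContext alloc]
            initWithConcurrencyType:NSMainQueueConcurrencyType];
        _managedObjectContext.persistentStoreCoordinator = psc;
    }
    return self;
}

@end

Every controller could then do NSManagedObjectContext *moc = [DatabaseManager sharedManager].managedObjectContext; without ever touching the app delegate.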
Okay, a lot of questions here, but that was only about the how. Now I want to know your recommendations on where I should actually do all the interaction with the managed object context (to stay in compliance with MVC and not mess up my code or make it more complicated than it has to be).
Should I save stuff to Core Data in my NSTableViewDelegate/-DataSource classes directly, or just fire a notification for someone else to handle?
Should I implement some instance methods on my model objects (like -(void)saveInstanceToDatabase) to encapsulate the Core Data interaction?
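To make that second question concrete, here is what I mean, with a hypothetical Person entity; as I understand it, -save: would commit all pending changes in the context, not just this one instance:

@interface Person : NSManagedObject
@property (nonatomic, strong) NSString *name;
- (BOOL)saveInstanceToDatabase:(NSError **)error;
@end

@implementation Person
@dynamic name;

- (BOOL)saveInstanceToDatabase:(NSError **)error {
    // Every NSManagedObject knows the context it lives in,
    // so no context has to be passed around.
    return [self.managedObjectContext save:error];
}
@end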
Okay, thanks in advance to all the brave souls who read this far and give me some kind of response :D I always appreciate code examples!
After years of working with Core Data... I've come to the conclusion that it's not very good. There are some serious flaws in its design that can only be solved properly by abstracting the whole thing away.
I recommend implementing your own model layer for your managed objects, one which uses Core Data underneath but does not ever expose it.
Your views, controllers, application delegate, and all of that should not ever touch Core Data at all. They should talk to a set of classes you create yourself, custom-tailored for your particular application.
That object layer can use Core Data underneath, or it might use something else like FMDB or NSCoding, or even just plain old NSData objects (the fastest option if you need extremely high performance with large amounts of data, especially when combined with features like NSDataReadingMappedIfSafe).
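As a sketch of what "does not ever expose it" can mean in practice, the layer's public header might contain nothing but plain objects; every name below is invented for illustration:

// NoteStore.h -- the only header the rest of the app ever imports.
// No Core Data types appear anywhere in this interface.
@interface Note : NSObject
@property (nonatomic, copy) NSString *title;
@property (nonatomic, copy) NSString *body;
@end

@interface NoteStore : NSObject
- (NSArray<Note *> *)allNotes;   // plain value objects out
- (void)addNote:(Note *)note;    // plain value objects in
- (BOOL)save:(NSError **)error;
@end

NoteStore.m is then free to fetch managed objects and copy their attributes into Note values, or to use FMDB or NSCoding instead; because no caller ever sees the storage types, swapping the backend later touches exactly one class.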
Start with Core Data, and look at the other options if you run into problems. Having your own layer on top means you can easily abandon it in the future, and many developers have chosen to move away from Core Data shortly after their app ships to the public, often due to unsolvable bugs or performance issues.
Howdy,
This is my first post, so if this has been answered somewhere please forgive me (I did search).
Problem:
I have a Cocoa app that needs to share a single Core Data database among multiple user accounts on the system.
Idea:
I would create a daemon to handle requests from the users (to cross user privilege boundaries) to save/retrieve the data from Core Data: create a shared managed object context that is used in the application and pass that MOC to the daemon through NSXPCConnection. The daemon would have a fully realized Core Data stack. I could then set the MOC that was created in the app to be a child of the MOC created by the daemon. Hit save and I'm done?
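To illustrate the plumbing I have in mind (the service name and protocol are made up, and whether Core Data objects can make the trip at all is exactly my question below):

// Shared between the app and the daemon (hypothetical protocol).
@protocol DataStoreProtocol
// XPC arguments must be securely codable, so the payload here is
// archived data rather than live managed objects.
- (void)saveChanges:(NSData *)archivedChanges
              reply:(void (^)(BOOL saved, NSError *error))reply;
@end

// App side: connect to the privileged daemon.
NSXPCConnection *connection =
    [[NSXPCConnection alloc] initWithMachServiceName:@"com.example.storedaemon"
                                             options:NSXPCConnectionPrivileged];
connection.remoteObjectInterface =
    [NSXPCInterface interfaceWithProtocol:@protocol(DataStoreProtocol)];
[connection resume];
id<DataStoreProtocol> daemon = [connection remoteObjectProxy];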
Question:
Would this even work? Is this just a dumb idea? What are the other solutions? NSManagedObjectContext conforms to NSCoding, but in order to use it with XPC, would I have to subclass it and make it conform to NSSecureCoding? Would I also need to make my NSManagedObject subclasses conform to NSSecureCoding to use them with NSXPCConnection? I suppose I could ditch the context altogether and just send the managed objects.
I'm assuming NSXPCConnection copies objects instead of passing pointers? Is this correct?
Also, I'd probably have to keep performance in mind, as the objects are encoded/decoded as fully realized objects rather than faults. Is this correct?
Thank you in advance for your help.
Maybe it works. ;-)
But there are some special problems you have to deal with. To make a long story short: I think it would be better to use an incremental store for this. But its documentation is quite poor.
We implemented something like this for syncing, and we have to implement it for networking (which is what you want to do). Here are the problems:
A.
Moving the context around won't help. The context contains a subset of the store's objects, and which objects are in the context is effectively random from the app programmer's point of view. (Inserted and changed objects will live there, but unchanged, non-inserted objects may be there and may go away.)
B.
You can of course move the store around. Since it is on the hard disk, this is easier if you have access to the location where it is stored; you can use an XPC service for that.
Doing so has the problem that you do not know what one user changed; you only get the whole store. An incremental store, in contrast, learns the specific changes through the save request.
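To give a taste of what that means: an NSIncrementalStore subclass is handed the exact delta in executeRequest:withContext:error:. A bare skeleton follows; a real subclass must also implement loadMetadata:error:, newValuesForObjectWithID:withContext:error:, and obtainPermanentIDsForObjects:error:.

@interface MyRemoteStore : NSIncrementalStore
@end

@implementation MyRemoteStore

- (id)executeRequest:(NSPersistentStoreRequest *)request
         withContext:(NSManagedObjectContext *)context
               error:(NSError **)error {
    if (request.requestType == NSSaveRequestType) {
        NSSaveChangesRequest *save = (NSSaveChangesRequest *)request;
        // Unlike a whole-file store, the specific changes are visible here:
        NSSet *inserted = save.insertedObjects;
        NSSet *updated  = save.updatedObjects;
        NSSet *deleted  = save.deletedObjects;
        // ...forward exactly this delta to the daemon / the other users...
        return @[]; // a successful save request returns an empty array
    }
    // ...answer NSFetchRequestType by asking the backing store...
    return nil;
}

@end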
C.
"Different users" means that you have conflicts. This is "unnatural" to Core Data. Is is a graph modeller and being that, it is not "connection based". It opens a document, owns it, changes it, stores it, and closes it. As every document it is typically not owned by two or more apps (including two running instances of one app) at one time. There is no real good mechanism for working on a store simultaneously. So you have to handle that yourself. It is probably easier to handle that on the level of the incremental store than on a level build on top of it.
What you want to do is not that easy unless you can make assumptions that hold in your special case (for example, a locking mechanism at a higher level).
My $0.05.
I'm looking at designing and building out a system that would allow A/B testing of different flows in an iOS app (e.g. registration flow, log-in flow, purchasing flow).
A system that comes to mind initially looks like this (a sketch of the loading step follows the list):
1. The app pings the server; the server responds with a list of resources (which could include links to xib files).
2. If the user does not have those xibs on disk, download them and save them to disk.
3. When the view controller is presented, load from the downloaded xib if available; otherwise default to the one the app shipped with.
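Step 3 can be as small as pointing an NSBundle at the download directory. A sketch, assuming the server ships compiled .nib files (raw .xib files cannot be loaded on the device) and a hypothetical RegistrationViewController:

// Prefer a downloaded nib over the bundled one, falling back gracefully.
- (UIViewController *)registrationFlowController {
    NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                          NSUserDomainMask, YES) firstObject];
    NSString *nibPath = [docs stringByAppendingPathComponent:@"RegistrationView.nib"];

    if ([[NSFileManager defaultManager] fileExistsAtPath:nibPath]) {
        // An NSBundle can point at any directory that contains the nib.
        NSBundle *downloaded = [NSBundle bundleWithPath:docs];
        return [[RegistrationViewController alloc]
            initWithNibName:@"RegistrationView" bundle:downloaded];
    }
    // Fall back to the nib the app shipped with (bundle:nil means the main bundle).
    return [[RegistrationViewController alloc]
        initWithNibName:@"RegistrationView" bundle:nil];
}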
Does anyone have any thoughts on this idea or any insights on this system?
NOTE: I am not trying to implement a system where I can add new features. Right now, I'm focusing on changing flows, like the text and views a user will see. I'm not looking for a discussion of whether this violates the App Store rules, but if you would like to have one, go for it!
This is possible, but I don't know if I would download XIBs to the device. Seems a little risky to me.
Apple did a talk at WWDC 2010 that addresses this exact issue. They recommend building the interface using (more or less) plists or JSON to describe the UI elements and their functions, and building up the views dynamically. It's well worth watching, as it brings up a lot of smaller issues that aren't immediately obvious (though it requires a developer account to access).
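The shape of that approach is roughly: each element is described by a dictionary, and one method maps descriptions to views. A toy version, with invented keys:

// desc comes from a server-supplied plist/JSON entry, e.g.
// @{ @"type": @"label", @"text": @"Welcome!", @"frame": @"{{20,40},{280,30}}" }
- (UIView *)viewFromDescription:(NSDictionary *)desc {
    CGRect frame = CGRectFromString(desc[@"frame"]);
    NSString *type = desc[@"type"];
    if ([type isEqualToString:@"label"]) {
        UILabel *label = [[UILabel alloc] initWithFrame:frame];
        label.text = desc[@"text"];
        return label;
    }
    if ([type isEqualToString:@"button"]) {
        UIButton *button = [UIButton buttonWithType:UIButtonTypeRoundedRect];
        button.frame = frame;
        [button setTitle:desc[@"text"] forState:UIControlStateNormal];
        return button;
    }
    return nil; // unknown type: fall back to the built-in interface
}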
This would be an interesting system to use. I wonder if one could write a shell script to replace the old binary of an app with a new one. I know it would probably be more complicated than just that, but it would be cool to do. I would definitely use this for in-house apps or personal tools. It's too bad Apple wouldn't allow it, unless someone could secretly slip it past them :-)
I say go for it. You seem to have a pretty good idea of what you want to do and how you want to do it. Changing the UI based on responses from a server isn't uncommon, but I guess downloading xib files from the server is. I don't see why it wouldn't work though and I don't think it would be rejected by Apple, but you never know.
I'm interested in picking up some tips and tricks while learning about the SDK. What I'm looking for is something you wish you had known when getting started that would have benefited you by now.
Don't use a DOM parser; use a SAX parser (memory and speed issues; a minimal sketch follows this list).
If you use custom table cells, don't add too many subviews (slow scrolling).
If you add views to table cells, like labels, you may want to make their backgrounds opaque.
The generated table view code defeats the MVC paradigm. Think about your data model and implement a UITableViewDataSource. Really.
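On the parser tip: NSXMLParser is the SDK's event-driven (SAX-style) parser; the document streams past your delegate and is never held in memory as a tree. A minimal sketch, with xmlData assumed to hold the document:

// Kick off the parse; self adopts NSXMLParserDelegate.
NSXMLParser *parser = [[NSXMLParser alloc] initWithData:xmlData];
parser.delegate = self;
[parser parse];

// Callbacks fire as elements stream past:
- (void)parser:(NSXMLParser *)parser
didStartElement:(NSString *)elementName
  namespaceURI:(NSString *)namespaceURI
 qualifiedName:(NSString *)qName
    attributes:(NSDictionary *)attributeDict {
    // e.g. start collecting a record when an <item> element opens
}

- (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string {
    // append to the buffer for the element currently being read
}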
One of the things I wish I had known at the very beginning was how to download data in a non-blocking way, specifically using NSURLConnection. The first versions of my apps suffered somewhat because I was using things like dataWithContentsOfURL:, which isn't a great idea on the iPhone, since you're never really sure what the network environment will be like for your users. To make it worse, I was testing over a fiber connection at home with an iPod touch, while a large number of my users were using EDGE on their iPhones.
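For the record, the non-blocking pattern looks roughly like this; it assumes self keeps an NSMutableData property named receivedData and acts as the connection's delegate:

// Kick off a non-blocking download; callbacks arrive on the run loop.
NSURLRequest *request = [NSURLRequest requestWithURL:
    [NSURL URLWithString:@"http://example.com/data.json"]];
[[NSURLConnection alloc] initWithRequest:request delegate:self];

- (void)connection:(NSURLConnection *)connection
didReceiveResponse:(NSURLResponse *)response {
    self.receivedData = [NSMutableData data]; // reset for this response
}

- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
    [self.receivedData appendData:data]; // data arrives in chunks
}

- (void)connectionDidFinishLoading:(NSURLConnection *)connection {
    // use self.receivedData; the UI was never blocked
}

- (void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error {
    // handle failure; vital on flaky EDGE connections
}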
If you want to use SQLite, go with either Core Data (available in 3.0) or FMDatabase (Flying Meat). For my first two apps, I wrote a custom wrapper and bound directly to SQLite. I am currently using FMDatabase in a new application and have found the experience much nicer.
In the case of a lot of developers, including Google, I'm sure they wish they knew their app would be rejected once complete.
Core Data bindings are not supported on the phone.
Use the Clang Static Analyzer
http://clang-analyzer.llvm.org/
It's great for finding reference counting issues -- I have never seen a false positive.
Regarding the table view speed, check out Loren Brichter's blog post: http://blog.atebits.com/2008/12/fast-scrolling-in-tweetie-with-uitableview/
I am converting an open source Java library to C#, which has a number of methods and classes tagged as deprecated. This project is an opportunity to start with a clean slate, so I plan to remove them entirely. However, being new to working on larger projects, I am nervous that the situation will arise again. Since much of agile development revolves around making something work now and refactoring later if needed, it seems like deprecation of APIs must be a common problem. Are there preventative measures I can take to avoid/minimize API deprecation, even if I am not entirely sure of the future direction of a project?
I'm not sure there is much you can do. Requirements change, and if you absolutely have to make sure that clients of the API are not broken by newer API versions, you'll have to rely on simply deprecating code until you think that no one is using the deprecated code anymore.
Placing [Obsolete] attributes on code causes the compiler to create warnings if there are any references to the obsolete methods. This way clients of the API, if they are diligent about fixing their compiler warnings, can gradually move to the new methods without having everything break with the new version.
It's useful to use the ObsoleteAttribute overload that takes a string:
[Obsolete("Foo is deprecated. Use Bar instead for munging widgets.")]
<frivolous>
Perhaps you could create a TimeBombAttribute:
[TimeBomb(new DateTime(2010,1,1), "Foo will blow up! Better use Bar, or else.")]
In your code, reflect for methods with the TimeBomb attribute and throw a KaboomException if they are called after the specified date. That'll make sure that after 1 January 2010 no one is using the obsolete methods, and you can clean up your API nicely. :)
</frivolous>
As Matt says, the Obsolete attribute is your friend... but whenever you apply it, provide details of how to change calling code. That way you've got a lot better chance of people actually changing. You might also want to consider specifying which version you anticipate removing the method in (probably the next major release).
Of course, you should be diligent in making sure you don't call the obsolete code - particularly in sample code.
Since much of agile development revolves around making something work now and refactoring later if needed
That's not agile. It's cowboy coding disguised under the label of agile.
The ideal is that whatever you complete is complete, according to whatever Definition of Done you have. Usually the DoD states something along the lines of "feature implemented, tested, and related code refactored". Of course, if you are working on a throwaway prototype, you can have a more relaxed DoD.
API modifications are a difficult beast. If you are only modifying project-internal APIs, the best way to go is to refactor early. If you need to change an internal API, just go ahead and change all of its clients at the same time. This way the refactoring debt does not grow very large and you don't have to use deprecation.
For published APIs you probably have some source and binary compatibility guarantees you have to maintain, at least until the next major release or so. Marking the old APIs deprecated works while maintaining compatibility. As with internal APIs, you should fix your internal code as soon as possible to not use the deprecated APIs.
Matt's answer is solid advice. I just wanted to mention that initially you probably want to use something along the lines of:
[Obsolete("Please use ... instead ", false)]
Once you have the code ported, change the false to true and the compiler will then treat all calls to the method as errors.
Watch Josh Bloch's "How to Design a Good API and Why It Matters"
Most important w/r/t deprecation is knowing that "when in doubt, leave it out." Watch the video for clarification, but it has to do with having to support what you provide forever. If you are realistically expecting that API to be reused, you're effectively setting your decisions in stone.
I think API design is a much trickier thing to do in an Agile fashion because you're expecting it to be reused probably in many different ways. You have to worry about breaking others that are dependent on you, and so while it can be done, it's tough to have the right design emerge without getting a quick turnaround from other teams. Of course deprecation is going to help here, but I think YAGNI is a lot better design heuristic when it comes to APIs.
I think deprecation of code is an inevitable byproduct of Agile processes like continuous refactoring and incremental development. So if you end up with deprecated code as you work on your project, that's not necessarily a bad thing--just a fact of life. Of course, you will probably find that, rather than deprecating code, you end up keeping a lot of code but refactoring it into different methods, classes, and so on.
So, bottom line: I wouldn't worry about deprecating code during Agile development. If it served its purpose for a while, you're doing the right thing.
The rule of thumb for API design is to focus on what it does, rather than how it does it. Once you know the end goal, figure out the absolute minimum input you need and use that. Avoid passing your own objects as parameters, pass only data.
Separate configuration from execution. For example, suppose you have an image encoder/decoder.
Instead of making a call like:
Encoder.Encode(bytes, width, height, compression_type, compression_ratio, palette /* etc. */);
Make it
Encoder.setCompressionType(compression_type);
Encoder.setCompressionRatio(compression_ratio);
// etc.
Encoder.Encode(bytes, width, height);
That way adding or removing settings is much less likely to break existing implementations.
For deprecation, there are basically three types of APIs: internal, external, and public.
Internal is when it's only your team working on the code. Deprecating these APIs isn't a big deal: your team is the only one using them, so they aren't around long, there's pressure to change them, people aren't afraid to change them, and people know how to change them.
External is when it's the same code base but different teams are using it. This might be some common libraries in a large company, or a popular open source library. The point is, people can choose the version of the code they compile against. The ease of deprecating an API depends on the size of the organization and how well it communicates. IMO, it's the deprecator's job to update old code, rather than to mark it deprecated and let warnings fly throughout the code base. Why the deprecator instead of the deprecatee? Because the deprecator is in the know; they know what changed and why.
Those two cases are pretty easy. So long as there is backwards compatibility, you can generally do whatever you'd like: update the clients yourself, or convince the maintainers to do it.
Then there are public APIs. These are basically external APIs that the clients don't have much control over, such as a web API. These are incredibly hard to update or deprecate. Most clients won't notice it's broken, won't have someone to fix it, won't get notifications that it's changing, and will only fix it once it breaks (after they've yelled at you for breaking it, of course).
I've had to do the above a few times, and it is such a chore. I think the best you can do is purposefully break it early, wait a bit, and then restore it. You send out the usual warnings and deprecations first, of course, but - trust me - nothing will happen until something breaks.
An idea I've yet to try is to let people register simple apps that run small tests. When you want to do an API update, you run the external tests and contact the affected people.
Another approach that has become popular is to have clients depend on (web) services. There are constructs out there that allow you to version your services and allow clients to perform lookups. This adds a lot more moving parts and complexity to the equation, but it can be helpful if you are turning over a lot of versions and having to support multiple versions in production.
This article does a good job of explaining the problem and an approach.