Apple provides documentation for managing user preferences through both Core Foundation (CFPreferences) and Foundation (NSUserDefaults), but it offers little help in choosing one over the other, beyond stating that they're indeed different ("not toll-free bridged").
So I'm curious: is there anything I should consider when selecting a preferences mechanism for my application, or should I just toss a coin?
Thanks!
You are supposed to use NSUserDefaults; it is absolutely the default choice. I suppose you could use CFPreferences, but unless you have a good reason to make that choice, I'd steer clear of the CF APIs.
NSUserDefaults reads and writes the per-user scope, while CFPreferences supports writing to both the user scope and the machine scope. You can use CFPreferences for a machine-wide setting that applies to all users running the application on the same machine.
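For illustration, here is a minimal sketch of the two APIs side by side; the bundle identifier and key names are invented, and note that writing to the any-user scope typically requires admin privileges:

```objc
#import <Foundation/Foundation.h>
#import <CoreFoundation/CoreFoundation.h>

int main(void) {
    @autoreleasepool {
        // NSUserDefaults: per-user scope only.
        NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
        [defaults setBool:YES forKey:@"ShowWelcomeScreen"];
        [defaults synchronize];

        // CFPreferences: a machine-wide setting shared by all users on
        // this host (needs appropriate privileges to actually write).
        CFPreferencesSetValue(CFSTR("LicenseAccepted"), kCFBooleanTrue,
                              CFSTR("com.example.MyApp"),
                              kCFPreferencesAnyUser,
                              kCFPreferencesCurrentHost);
        CFPreferencesSynchronize(CFSTR("com.example.MyApp"),
                                 kCFPreferencesAnyUser,
                                 kCFPreferencesCurrentHost);
    }
    return 0;
}
```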
Howdy,
This is my first post, so if this has been answered somewhere please forgive me (I did search).
Problem:
I have a Cocoa app that needs to share a single Core Data database among multiple user accounts on the system.
Idea:
I would create a daemon to handle requests from the users (to cross user privilege boundaries) to save/retrieve the data from Core Data: create a shared managed object context (MOC) that is used in the application and pass that MOC to the daemon through NSXPCConnection. The daemon would have a fully realized Core Data stack. I could then set the MOC created in the app to be a child of the MOC created by the daemon. Hit save and I'm done?
Question:
Would this even work? Is this just a dumb idea? What are the other solutions? NSManagedObjectContext conforms to the NSCoding protocol, but in order to use it with XPC would I have to subclass it and make it conform to NSSecureCoding? Would I also need to make sure my NSManagedObject subclasses conform to NSSecureCoding to use them with NSXPCConnection? I suppose I could ditch the context altogether and just send the managed objects.
I'm assuming NSXPCConnection copies objects instead of passing pointers? Is this correct?
Also, I'd probably have to keep performance in mind, as the objects are encoded/decoded as fully realized objects and not faulted. Is this correct?
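To make this concrete, here is roughly the XPC shape I have in mind; all the names below are invented, and sending plain property-list values instead of managed objects is just one option I'm considering:

```objc
#import <Foundation/Foundation.h>

// The protocol vended by the daemon. NSXPCConnection archives arguments
// and copies them across the wire, so anything passed here must conform
// to NSSecureCoding (plist types like NSDictionary already do).
@protocol DataStoreXPCProtocol <NSObject>
- (void)saveChanges:(NSDictionary *)changesByObjectURI
          withReply:(void (^)(BOOL saved, NSError *error))reply;
@end

static NSXPCConnection *MakeDaemonConnection(void) {
    NSXPCConnection *conn = [[NSXPCConnection alloc]
        initWithMachServiceName:@"com.example.datastored"
                        options:NSXPCConnectionPrivileged];
    conn.remoteObjectInterface =
        [NSXPCInterface interfaceWithProtocol:@protocol(DataStoreXPCProtocol)];
    [conn resume];
    return conn;
}
```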
Thank you in advance for your help.
Maybe it works. ;-)
But there are some special problems you have to deal with. To make a long story short: I think it would be better to use an incremental store for this (see the sketch at the end of this answer), though the documentation for NSIncrementalStore is close to poor.
We implemented something like this for syncing, and we have to implement it for networking, which is what you want to do. Here are the problems:
A.
Moving the context around won't help. The context contains only a subset of the store's objects, and which objects are in it at any moment is effectively random from the app programmer's point of view. (Inserted and changed objects will live there, but unchanged, non-inserted objects may or may not be registered, and can go away at any time.)
B.
You can of course move the store around instead. Since it lives on disk, this is easier if you have access to the location where it is stored; you can use an XPC service for that.
Doing so has the problem that you do not know what a given user changed; you only get the whole store. An incremental store, in contrast, sees the specific changes through the save request.
C.
"Different users" means that you have conflicts. This is "unnatural" to Core Data: it is a graph modeller, and as such it is not "connection based". It opens a document, owns it, changes it, stores it, and closes it. As with any document, the store is typically not owned by two or more apps (including two running instances of one app) at the same time. There is no really good mechanism for working on a store simultaneously, so you have to handle that yourself. It is probably easier to handle it at the level of the incremental store than at a level built on top of it.
What you want to do is not easy unless you can make simplifying assumptions for your special case (for example, a locking mechanism at a higher level).
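To show what I mean by the incremental store seeing the specific changes, here is a bare-bones sketch of an NSIncrementalStore subclass; the class name is invented, and real row caching, permanent-ID generation, and error handling are omitted:

```objc
#import <CoreData/CoreData.h>

@interface SharedIncrementalStore : NSIncrementalStore
@end

@implementation SharedIncrementalStore

- (BOOL)loadMetadata:(NSError **)error {
    // A real store would persist its UUID; this is just a skeleton.
    self.metadata = @{ NSStoreTypeKey : @"SharedIncrementalStore",
                       NSStoreUUIDKey : [[NSUUID UUID] UUIDString] };
    return YES;
}

- (id)executeRequest:(NSPersistentStoreRequest *)request
         withContext:(NSManagedObjectContext *)context
               error:(NSError **)error {
    if (request.requestType == NSSaveRequestType) {
        // Unlike shipping the whole store file around, here you see
        // exactly what changed and can forward just that (e.g. over XPC).
        NSSaveChangesRequest *save = (NSSaveChangesRequest *)request;
        NSLog(@"inserted: %@", save.insertedObjects);
        NSLog(@"updated:  %@", save.updatedObjects);
        NSLog(@"deleted:  %@", save.deletedObjects);
        return @[];   // an empty array signals a successful save
    }
    // Fetch requests would be answered from the backing service here.
    return @[];
}

@end
```

Before use, the subclass also has to be registered with [NSPersistentStoreCoordinator registerStoreClass:forStoreType:] and added to the coordinator under that same store type.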
My $0.05.
Can I somehow work with the same Core Data store on two computers? This could, I presume, lead to conflicts during saving. What is the best way to deal with this?
Also, let's say I want to avoid the pain of having to worry about this. How would I make sure that only one computer can work on a particular Core Data store at the same time?
Incidentally, you can work on multiple devices on the same store with one of Apple's own core technologies. It's called iCloud.
Sure, technically speaking there are several copies of the store on the devices, as well as the transaction logs in iCloud, but the effect would be the same.
Fortunately, iCloud syncing includes some clever mechanisms to merge multiple versions where possible (and where that fails, you have to decide which version to prefer).
Only caveat: in my experience, iCloud with Core Data has been far from reliable when implemented from the published documentation alone.
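That said, the basic merge hookup is straightforward; roughly like this, where `context` and `coordinator` are assumed to be your managed object context and persistent store coordinator, and the merge policy shown is just one possible choice:

```objc
// Fold iCloud-imported changes into the local context, letting
// in-memory changes win on conflict.
[[NSNotificationCenter defaultCenter]
    addObserverForName:NSPersistentStoreDidImportUbiquitousContentChangesNotification
                object:coordinator
                 queue:[NSOperationQueue mainQueue]
            usingBlock:^(NSNotification *note) {
                context.mergePolicy = NSMergeByPropertyObjectTrumpMergePolicy;
                [context mergeChangesFromContextDidSaveNotification:note];
            }];
```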
From my own experience with Core Data, I do not believe the framework was designed to be used in a multi-user (or distributed) environment. I found this interesting post on CocoaBuilder which might help you shape your thoughts on the matter. It's dated July 2012, so it's pretty recent, and it also discusses some other interesting technologies that are available.
I have been having a debate with a friend about a library I have (it's Python, but I didn't include that as a tag since the question is applicable to any language) that has a few dependencies. The debate is whether to provide a default environment in the initialization or to force the user of the code to explicitly set one.
My opinion is to force the user, as it's explicit; it avoids confusion and makes it clear what they are pointing to.
My friend thinks it is safer and more convenient to default to an environment and let the user override it if he wants to.
Thoughts? Are there any good references or examples/patterns in popular libraries that support either of our arguments? Also, any popular blogs or articles that discuss this API design point?
I don't have any references, but here are my thoughts as a potential user of said library.
I think it's good to have a default configuration available to allow developers to quickly evaluate the library. I don't want to have to go through a bunch of configuration just to see if the library will do what I need. Once I'm happy that the library will do what I need it to do, then I'm happy to configure it the way I want.
A good example is Microsoft's ASP.NET MVC framework. When you create a new MVC project it hooks in a default authentication and membership provider, which allows the developer to very quickly get a functioning application up and running. It is also easy to configure different providers if the default ones don't meet the requirements of the application in question.
As a slightly different example, Atlassian Confluence is wiki software that supports many different back-end databases. Atlassian could have chosen to ship with no default DB configuration, but instead Confluence ships with a default, simple, file-based database to allow users to evaluate the software. For production installations you can then hook up Oracle, SQL Server, MySQL, or whatever else you like.
There may be instances where a default configuration for a library doesn't really make sense, but I think that would be a special case rather than the general rule.
It depends. If you can provide sensible defaults, you might want to do that: it will make life easier for the occasional user of the library, who can then set only the relevant settings rather than the whole environment (which may include settings whose implications they don't fully understand yet). You are correct that this can lead to frustration and confusion, as the defaulted settings might cause behavior the (inexperienced) user doesn't expect. You have to weigh the convenience of defaults against the price of not-understood defaults for each setting that could be defaulted, and the choice for one setting might affect the choice for other, related settings as well.
On the other hand, if there is no sensible default (e.g. DB credentials, remote address), you should require the user to provide those settings.
The key in both cases is to provide enough information in the library's documentation and in its error messages (whether for missing settings or conflicting ones) that the user can figure out what those settings actually mean and control without having to read the library's source code. This part is hard, because 1) it is usually tedious from the library developer's point of view (so it is often skimped on), and 2) the documentation has to be written from the mindset of a newcomer to the library, which is often different from the library developer's mindset: the latter knows the implicit connections and implications, while the former has to be told about them in an understandable way.
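Here is one way that split can look; this sketch happens to use Objective-C, and the class name, parameters, and default values are all invented:

```objc
#import <Foundation/Foundation.h>

@interface ExampleClient : NSObject
@property (nonatomic, copy, readonly) NSString *host; // required: no sensible default
@property (nonatomic) NSTimeInterval timeout;         // defaulted, overridable
@property (nonatomic) NSUInteger retryCount;          // defaulted, overridable
- (instancetype)initWithHost:(NSString *)host;
@end

@implementation ExampleClient

- (instancetype)initWithHost:(NSString *)host {
    if ((self = [super init])) {
        if (host.length == 0) {
            // Fail loudly on a missing essential instead of guessing.
            [NSException raise:NSInvalidArgumentException
                        format:@"ExampleClient requires a host"];
        }
        _host = [host copy];
        _timeout = 30.0;  // documented defaults that callers
        _retryCount = 3;  // may override after initialization
    }
    return self;
}

@end
```

A caller who only cares about the essentials writes `[[ExampleClient alloc] initWithHost:@"api.example.com"]` and inherits the defaults; everyone else overrides the properties afterwards.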
Although not exactly identical in terms of problem domain, this strikes me as the Convention over Configuration argument.
There has been quite a lot of momentum behind CoC in recent years, and to my mind it makes a whole lot of sense. As long as flexibility is not lost, you have everything to gain. Lower-friction development is what we are all after, and if I've got to configure every aspect of your API in order to get it working, I'm less inclined to use it over another API of equal functionality.
I happen to like Hanselman's podcasts, so if you want a little light listening, check out this podcast.
I think your question needs some clarification. For starters, I don't think a library should have any runtime configuration. In terms of dependencies, library dependencies should be handled in a manner appropriate to the environment they are being written for. In Python, those dependencies should be declared in the setup.py file (under install_requires), and ultimately that file should meet the requirements of whatever service you plan on making the library available on (e.g. PyPI for Python).
For applications, it is completely okay to require runtime configuration, but you should try to have sensible defaults. If your application depends on libraries, that dependency should be handled in the same way a library dependency would be, even though that information may be redundant in the context of an installer (if one is needed). For the most part, first-run scripts and their ilk should be a part of the installer/rpm.
For web frameworks, it is typical that your app carries its configuration with it, and likely that it is installed differently than traditional applications. Here, about the only thing you can do is try to follow the conventions of whatever framework you are writing in.
Say I'm writing a publicly available framework for the Vimeo API. This framework needs to get information from the Internet. Because that can take some time, I need to use threading to prevent the UI from hanging. Foundation uses delegates for this, like NSURLConnectionDelegate. Game Kit, however, uses blocks as callback functions.
What is the recommended way of doing this? I know blocks aren't supported in standard GCC versions, but they require much less code from whoever uses my framework.
Delegates, on the other hand, are real methods, and when formal protocols are used I can be sure the methods are actually implemented.
Thanks.
I really like blocks but I would be tempted to use a delegate protocol in this case. Network connections can fail in a large number of ways and their delegates tend to keep a fair amount of stateful information about them. I find that that maps well to a delegate protocol with a number of optional methods.
If you're providing a very simplified API for accessing network data, then a success/failure pair of blocks might be sufficient. Personally, I find that I have to deal with a lot of different cases which use many delegate methods on a stateful delegate object. For example: should I retry failed connections immediately or later, does the relative priority of failed connections change, can I make use of a partial response, should I switch a connection to wifi when it becomes available, do I offer the user a chance to authenticate if prompted, do I display incremental progress in a connection? You could handle all of those with blocks, but I find that I would rather have a delegate class managing the connection.
Without knowing more about what data you intend your interface to fetch, I can't be much more specific, but I would be tempted to allow users of the API to manage their own connection state if possible.
It all depends on who your target audience is. If you want people writing apps for OS X 10.5 or iOS 3.x, then you need to use delegates. Otherwise, go ahead and use blocks.
It's quite a subjective question since both are valid options, but Apple seems to be shifting further towards using blocks for "throw-away" methods.
The main question would be your target audience.
Blocks are limited to Snow Leopard and later (and iOS 4, if I remember correctly).
If you want your framework to be usable by previous operating systems, you can't use blocks.
If you're happy with the OS limitations, then go with blocks and NSOperationQueue; they're really good and simple to use.
Better yet, you could offer both options, as sketched below.
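Here is roughly what offering both could look like at the interface level; the class, protocol, and method names are all invented, and the implementation is left out:

```objc
#import <Foundation/Foundation.h>

@class VMVideoFetcher;

// Delegate-based surface: works on pre-blocks toolchains and suits
// stateful, multi-callback connections.
@protocol VMVideoFetcherDelegate <NSObject>
- (void)fetcher:(VMVideoFetcher *)fetcher didFetchVideoInfo:(NSDictionary *)info;
- (void)fetcher:(VMVideoFetcher *)fetcher didFailWithError:(NSError *)error;
@end

@interface VMVideoFetcher : NSObject

@property (nonatomic, assign) id<VMVideoFetcherDelegate> delegate; // non-retained, per convention

// Delegate-based entry point.
- (void)fetchInfoForVideoID:(NSString *)videoID;

#if NS_BLOCKS_AVAILABLE
// Block-based convenience wrapper around the same machinery, compiled
// in only when the toolchain supports blocks.
- (void)fetchInfoForVideoID:(NSString *)videoID
                 completion:(void (^)(NSDictionary *info, NSError *error))completion;
#endif

@end
```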
I would recommend using blocks, and if you do it right, you can support 10.5 at the same time.
Check out the open-source PLBlocks runtime; it allows you to seamlessly use blocks on both 10.5 and 10.6.
I am writing my first Objective-C daemon-type process that works in the background. Everything it does needs to be logged properly.
I am fairly new to the Apple stack, so I am not sure: what is the most common and/or best way to log activity? Does everyone simply log to a text file in their own special format, or use some sort of system call?
You should look at the Apple System Logger. ASL writes to the system log database (making it easy to query the log from Console.app or from within your own app) and additionally to one or more flat files (if you choose). Peter Hosey's introduction to the ASL is the best I'm aware of. ASL is a C-level API, but it's relatively easy to wrap in Objective-C if you'd like. I would recommend also taking a look at Google's Toolbox for Mac. Among many other goodies, it contains a GTMLogger facility that includes ASL support. I've ditched my home-grown ASL wrapper in favor of the GTMLogger.
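For a sense of the C-level API, a minimal ASL sketch looks like this; the identity and facility strings are invented, and a real daemon would keep the client handle open rather than opening and closing it per message:

```objc
#include <asl.h>
#include <unistd.h>

int main(void) {
    // Open a client handle; ASL_OPT_STDERR echoes messages to stderr too.
    aslclient client = asl_open("MyDaemon", "com.example.mydaemon",
                                ASL_OPT_STDERR);

    // Messages land in the system log database, queryable from Console.app.
    asl_log(client, NULL, ASL_LEVEL_NOTICE,
            "daemon started, pid %d", (int)getpid());

    asl_close(client);
    return 0;
}
```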
Another alternative you might want to try is https://github.com/CocoaLumberjack. Lumberjack is quite flexible and will allow you to log to various destinations, configure log levels, and so on. It's very log4j/log4net-like, if you are familiar with those.
It also reports that it is faster than ASL... I don't know how it compares to GTMLogger with respect to functionality or speed, but its documentation seems to be a bit more approachable.
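Setup is only a few lines; this sketch follows the class and macro names from the library's documentation at the time (DDLog, DDASLLogger, DDFileLogger), so details may differ between versions:

```objc
#import "DDLog.h"
#import "DDASLLogger.h"
#import "DDFileLogger.h"

// Lumberjack's log macros compile against this per-file level.
static const int ddLogLevel = LOG_LEVEL_INFO;

static void SetupLogging(void) {
    // Route messages to the Apple System Log...
    [DDLog addLogger:[DDASLLogger sharedInstance]];

    // ...and to a rolling flat file as well.
    DDFileLogger *fileLogger = [[DDFileLogger alloc] init];
    fileLogger.rollingFrequency = 60 * 60 * 24; // roll daily
    [DDLog addLogger:fileLogger];

    DDLogInfo(@"logging initialized");
}
```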