In one of my projects I have a quite complex data model.
I need a way to ensure that no retain cycles are created by me or by other colleagues, and I want to use an automated approach.
Is there a way to ensure that all the "dealloc" methods are called?
You can try the Static Analyzer (from the menu: Product - Analyze, or shortcut Shift+Cmd+B). Or you can create unit tests https://developer.apple.com/library/ios/documentation/ToolsLanguages/Conceptual/Xcode_Overview/UnitTestYourApp/UnitTestYourApp.html and check objects' retainCount.
The Leaks instrument may help too: http://www.raywenderlich.com/2696/instruments-tutorial-for-ios-how-to-debug-memory-leaks , https://developer.apple.com/library/mac/documentation/developertools/conceptual/instrumentsuserguide/MemoryManagementforYouriOSApp/MemoryManagementforYouriOSApp.html
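If you're on ARC, retainCount is unavailable, so one hedged alternative in a unit test is to assert that a weak reference to the object is zeroed once the last strong reference goes away. A minimal sketch, assuming ARC and XCTest; MyModelObject is a placeholder for one of your data model classes:

```objc
#import <XCTest/XCTest.h>
#import "MyModelObject.h"   // placeholder: one of your data model classes

@interface DeallocTests : XCTestCase
@end

@implementation DeallocTests

- (void)testModelObjectIsDeallocated
{
    __weak id weakRef = nil;
    @autoreleasepool {
        MyModelObject *object = [[MyModelObject alloc] init];
        weakRef = object;
        // ... exercise the object here (wire up its relationships) ...
    }
    // If dealloc ran, the zeroing weak reference has been set to nil.
    XCTAssertNil(weakRef, @"MyModelObject was not deallocated - possible retain cycle");
}

@end
```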
There is no way you can test such things automatically. Things you can do:
Have good coding standards and program architecture
Good architecture will prevent many retain cycles.
Be careful when using self in blocks (know when to capture self through a __weak reference; see the sketch below).
Run Instruments and inspect leaks while your application is running
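For example, the weak-self pattern from the list above might look roughly like this; completionHandler and doSomething are assumed names for illustration:

```objc
__weak typeof(self) weakSelf = self;
self.completionHandler = ^{
    __strong typeof(weakSelf) strongSelf = weakSelf;  // promote back to strong for the block's duration
    if (strongSelf == nil) {
        return;                                       // the owner was already deallocated
    }
    [strongSelf doSomething];                         // the block never captures self strongly, so no cycle
};
```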
If you wish to do this you need to design & program it yourself.
For example you could:
Define "connection" as a strong reference to an instance of a class in your data model.
Define a protocol which provides either a "connection count" method plus one to return a connection by index, or a connections iterator.
Have each class in your data model implement this protocol.
Now given a reference to your data model these protocol methods provide you with the "graph" (the objects are the nodes, the connections the arcs). Implement a cycle checking algorithm.
Now run the test at appropriate places during development & testing to check for accidental introduction of cycles.
You may be able to implement this without a protocol by using the facilities of the runtime: you can certainly, given an arbitrary instance, discover its ivars and whether an ivar is an object reference. You might get stuck, though, determining whether the ivar is strong or weak. While much more general, this may be harder to implement, but once done...
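As a rough illustration of the protocol idea, here is a minimal sketch. The protocol name, its methods, and the depth-first cycle check are all illustrative, and the visited set assumes default identity-based isEqual:.

```objc
#import <Foundation/Foundation.h>

@protocol MONConnectionGraphNode <NSObject>
- (NSUInteger)connectionCount;                       // number of strong references this node holds
- (id<MONConnectionGraphNode>)connectionAtIndex:(NSUInteger)index;
@end

// Depth-first search for a cycle, assuming every model class adopts the protocol.
static BOOL MONNodeHasCycle(id<MONConnectionGraphNode> node, NSMutableSet *visiting)
{
    if ([visiting containsObject:node]) {
        return YES;                                  // we came back to a node on the current path
    }
    [visiting addObject:node];
    for (NSUInteger i = 0; i < [node connectionCount]; i++) {
        if (MONNodeHasCycle([node connectionAtIndex:i], visiting)) {
            return YES;
        }
    }
    [visiting removeObject:node];
    return NO;
}
```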
HTH
Objective-C’s objects are pretty flexible when compared to similar languages like C++ and can be extended at runtime via Categories or through runtime functions.
Any idea what this sentence means? I am relatively new to Objective-C
While technically true, it may be confusing to the reader to call category extension "at runtime." As Justin Meiners explains, categories allow you to add additional methods to an existing class without requiring access to the existing class's source code. The use of categories is fairly common in Objective-C, though there are some dangers. If two different categories add the same method to the same class, then the behavior is undefined. Since you cannot know whether some other part of the system (perhaps even a system library) adds a category method, you typically must add a prefix to prevent collisions (for example rather than swappedString, a better name would likely be something like rnc_swappedString if this were part of RNCryptor for instance.)
As I said, it is technically true that categories are added at runtime, but from the programmer's point of view, categories are written as though just part of the class, so most people think of them as being a compile-time choice. It is very rare to decide at runtime whether to add a category method or not.
As a beginner, you should be aware of categories, but slow to create new ones. Creating categories is a somewhat intermediate-level skill. It's not something to avoid, but not something you'll use every day. It's very easy to overuse them. See Justin's link for more information.
On the other hand, "runtime functions" really do add new functionality to existing classes or even specific objects at runtime, and are completely under the control of code. You can, at runtime, modify a class such that it responds to a method it didn't previously respond to. You can even generate entirely new classes at runtime that did not exist when the program was compiled, and you can change the class of existing objects. (This is exactly how Key-Value Observation is implemented.)
Modifying classes and objects using the runtime is an advanced skill. You should not even consider using these techniques in production code until you have significant experience. And when you have that experience, it will tell you that you very seldom want to do this anyway. You will know the runtime functions because they are C-based, with names like method_exchangeImplementations. You won't mistake them for normal ObjC (and you generally have to import objc/runtime.h to get to them.)
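For a taste of what those C functions look like, here is a minimal swizzling sketch. MYWidget and my_description are assumed names, and this is not something to ship without a very good reason:

```objc
#import <Foundation/Foundation.h>
#import <objc/runtime.h>

void MYSwizzleWidgetDescription(void)
{
    Class cls = NSClassFromString(@"MYWidget");      // assumed class name
    Method original = class_getInstanceMethod(cls, @selector(description));
    Method replacement = class_getInstanceMethod(cls, @selector(my_description));
    if (original && replacement) {
        method_exchangeImplementations(original, replacement);  // the two selectors now swap bodies
    }
}
```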
There is a middle ground that bleeds into runtime manipulation called message forwarding and dynamic message resolution. This is often used for proxy objects, and is implemented with -forwardingTargetForSelector:, +resolveInstanceMethod:, and some similar methods. These are tools that allow classes to modify themselves at runtime, and they are much less dangerous than modifying other classes (i.e. "swizzling").
It's also important to consider how all of this translates to Swift. In general, Swift has discouraged and restricted the use of runtime class manipulation, but it embraces (and improves) category-like extensions. By the time you're experienced enough to dig into the runtime, you will likely find it an even more obscure skill than it is today. But you will use extensions (Swift's version of categories) in every program.
A category allows you to add functionality to an existing class that you do not have access to source code for (System frameworks, 3rd party APIs etc). This functionality is possible by adding methods to a class at runtime.
For example lets say I wanted to add a method to NSString that swapped uppercase and lowercase letters called -swappedString. In static languages (such as C++), extending classes like this is more difficult. I would have to create a subclass of NSString (or a helper function). While my own code could take advantage of my subclass, any instance created in a library would not use my subclass and would not have my method.
Using categories I can extend any class, such as adding a -swappedString method to NSString, and use it transparently on any instance of the class: [anyString swappedString];.
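A sketch of what such a category might look like; the my_ prefix and the naive character-by-character implementation are illustrative only:

```objc
#import <Foundation/Foundation.h>

@interface NSString (MYSwapping)
- (NSString *)my_swappedString;   // prefixed to avoid collisions with other categories
@end

@implementation NSString (MYSwapping)

- (NSString *)my_swappedString
{
    NSMutableString *result = [NSMutableString stringWithCapacity:self.length];
    for (NSUInteger i = 0; i < self.length; i++) {
        unichar c = [self characterAtIndex:i];
        NSString *s = [NSString stringWithCharacters:&c length:1];
        if ([[NSCharacterSet uppercaseLetterCharacterSet] characterIsMember:c]) {
            [result appendString:[s lowercaseString]];
        } else {
            [result appendString:[s uppercaseString]];
        }
    }
    return result;
}

@end

// Usage: works on any NSString instance, including ones created inside frameworks.
// NSString *swapped = [@"Hello World" my_swappedString];   // "hELLO wORLD"
```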
You can learn more details from Apple's Docs
I'm currently taking a client-client approach to a simulation in Objective-C with two computers (mac1 and mac2).
I have a class Client, and each computer has an instance of Client on it (client1, client2). I expect that both clients will be synchronized: they will both be equal apart from memory locations.
When a user presses a key on mac1, I want both client1 and client2 to receive a given method from class Client (so that they stay synchronized, i.e. they are the same apart from their memory location on each mac).
For this approach, my current idea is to make two methods:
- (void) sendSelector:(Client*)toClient,...;
- (void) receiveSelector:(Client*)fromClient,...;
sendSelector: uses NSStringFromSelector() to transform the method to a NSString, and send it over the network (let's not worry about sending strings over net now).
On the other hand, receiveSelector: uses NSSelectorFromString() to transform a NSString back to a selector.
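In code, the round trip described above would look roughly like this; 'client' and 'moveForward' are assumed names, and the network transport is omitted:

```objc
SEL action = @selector(moveForward);
NSString *wireString = NSStringFromSelector(action);   // @"moveForward", ready to send over the network

// ... on the receiving machine ...
SEL received = NSSelectorFromString(wireString);
if ([client respondsToSelector:received]) {
    [client performSelector:received];                  // note: ARC warns about unknown selectors here
}
```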
My first question/issue is: to what extent is this approach "standard" for networking with Objective-C?
My second question:
And the method's arguments? Is there any way of "packing" a given class instance and sending it over the network? I understand the pointer problem when packing, but every instance in my program has a unique identity, so that should be no problem since both clients will know how to retrieve the object from its identity.
Thanks for your help
Let me address your second question first:
And the method's arguments? Is there any way of "packing" a given
class instance and send it over the network?
Many Cocoa classes implement/adopt the NSCoding protocol. This means they support some default implementation for serializing to a byte stream, which you could then send over the network. You would be well advised to use the NSCoding approach unless it's fundamentally not suited to your needs for some reason. (i.e. use the highest level of abstraction that gets the job done)
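As a minimal sketch of the NSCoding route, assuming a Client class with a single identity property (names are illustrative):

```objc
#import <Foundation/Foundation.h>

@interface Client : NSObject <NSCoding>
@property (nonatomic, copy) NSString *identity;
@end

@implementation Client

- (void)encodeWithCoder:(NSCoder *)coder
{
    [coder encodeObject:self.identity forKey:@"identity"];
}

- (instancetype)initWithCoder:(NSCoder *)coder
{
    self = [super init];
    if (self) {
        _identity = [[coder decodeObjectForKey:@"identity"] copy];
    }
    return self;
}

@end

// Turning the object into bytes you can put on the wire, and back again:
// NSData *payload = [NSKeyedArchiver archivedDataWithRootObject:client];
// Client *restored = [NSKeyedUnarchiver unarchiveObjectWithData:payload];
```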
Now for the more philosophical side of your first question; I'll rephrase your question as "is it a good approach to use serialized method invocations as a means of communication between two clients over a network?"
First, you should know that Objective-C has a not-often-used-any-more, but reasonably complete, implementation for handling remote invocations between machines with a high level of abstraction. It was called Distributed Objects. Apple appears to be shoving it under the rug to some degree (with good reason -- keep reading), but I was able to find an old cached copy of the Distributed Objects Programming Topics guide. You may find it informative. AFAIK, all the underpinnings of Distributed Objects still ship in the Objective-C runtime/frameworks, so if you wanted to use it, if only to prototype, you probably could.
I can't speculate as to the exact reasons that you can't seem to find this document on developer.apple.com these days, but I think it's fair to say that, in general, you don't want to be using a remote invocation approach like this in production, or over insecure network channels (for instance: over the Internet.) It's a huge potential attack vector. Just think of it: If I can modify, or spoof, your network messages, I can induce your client application to call arbitrary selectors with arbitrary arguments. It's not hard to see how this could go very wrong.
At a high level, let me recommend coming up with some sort of protocol for your application, with some arbitrary wire format (another person mentioned JSON -- It's got a lot of support these days -- but using NSCoding will probably bootstrap you the quickest), and when your client receives such a message, it should read the message as data and make a decision about what action to take, without actually deriving at runtime what is, in effect, code from the message itself.
From a "getting things done" perspective, I like to share a maxim I learned a while ago: "Make it work; Make it work right; Make it work fast. In that order."
For prototyping, maybe you don't care about security. Maybe when you're just trying to "make it work" you use Distributed Objects, or maybe you roll your own remote invocation protocol, as it appears you've been thinking of doing. Just remember: you really need to "make it work right" before releasing it into the wild, or those decisions you made for prototyping expedience could cost you dearly. The best approach here will be to create a class or group of classes that abstracts away the network protocol and wire format from the rest of your code, so you can swap out networking implementations later without having to touch all your code.
One more suggestion: I read in your initial question a desire to 'keep an object (or perhaps an object graph) in sync across multiple clients.' This is a complex topic, but you may wish to employ a "Command Pattern" (see the Gang of Four book, or any number of other treatments in the wild.) Taking such an approach may also inherently bring structure to your networking protocol. In other words, once you've broken down all your model mutation operations into "commands" maybe your protocol is as simple as serializing those commands using NSCoding and shipping them over the wire to the other client and executing them again there.
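A small sketch of that idea: each model mutation becomes a command object that adopts NSCoding, so it can be archived, shipped, and replayed on the other machine. MoveClientCommand, clientWithIdentity: and moveByDistance: are placeholders, not part of your actual model:

```objc
#import <Foundation/Foundation.h>

@interface MoveClientCommand : NSObject <NSCoding>
@property (nonatomic, copy) NSString *clientIdentity;   // which object to mutate
@property (nonatomic, assign) NSInteger distance;       // the mutation's parameters
- (void)applyToModel:(id)model;                         // model type left as id in this sketch
@end

@implementation MoveClientCommand

- (void)encodeWithCoder:(NSCoder *)coder
{
    [coder encodeObject:self.clientIdentity forKey:@"clientIdentity"];
    [coder encodeInteger:self.distance forKey:@"distance"];
}

- (instancetype)initWithCoder:(NSCoder *)coder
{
    self = [super init];
    if (self) {
        _clientIdentity = [[coder decodeObjectForKey:@"clientIdentity"] copy];
        _distance = [coder decodeIntegerForKey:@"distance"];
    }
    return self;
}

- (void)applyToModel:(id)model
{
    // Look the target up by its identity and replay the same mutation on each machine.
    // clientWithIdentity: and moveByDistance: are assumed model methods.
    [[model clientWithIdentity:self.clientIdentity] moveByDistance:self.distance];
}

@end
```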
Hopefully this helps, or at least gives you some starting points and things to consider.
These days it would seem that the most standard way is to package everything up in JSON.
In Objective-C, when you are implementing a method that is going to perform repetitive operations, for example, you need to choose between the several options that the language offers you:
@interface FancyMutableCollection : NSObject { }
-(void)sortUsingSelector:(SEL)comparator;
// or ...
-(void)sortUsingComparator:(NSComparator)cmptr;
@end
I was wondering which one is better?
Objective-C provides many options: selectors, blocks, pointers to functions, instances of a class that conforms to a protocol, etc.
Sometimes the choice is clear, because only one method suits your needs, but what about the rest? I don't expect this to be just a matter of fashion.
Are there any rules to know when to use selectors and when to use blocks?
The main difference I can think of is that blocks act like closures, so they capture the variables in the scope around them. This is good for when you already have the variables there and don't want to create an instance variable just to hold that variable temporarily so that the action selector can access it when it is run.
With relation to collections, blocks have the added ability to be run concurrently if there are multiple cores in the system. Currently the iPhone doesn't have them, but the iPad 2 does, and it is probable that future iPhone models will have multiple cores. Using blocks, in this case, would allow your app to scale automatically in the future.
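For instance, block-based enumeration can be asked to run concurrently, which a bare selector cannot express; a small sketch with made-up data:

```objc
NSArray *items = @[@"a", @"b", @"c"];
[items enumerateObjectsWithOptions:NSEnumerationConcurrent
                        usingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
    // the block may be invoked on multiple threads at once
    NSLog(@"processing %@ at index %lu", obj, (unsigned long)idx);
}];
```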
In some cases, blocks are just easier to read as well because the callback code is right next to the code that's calling it back. This is not always the case of course, but sometimes it simply makes the code easier to read.
Sorry to refer you to the documentation, but for a more comprehensive overview of the pros/cons of blocks, take a look at this page.
As Apple puts it:
Blocks represent typically small, self-contained pieces of code. As such, they’re particularly useful as a means of encapsulating units of work that may be executed concurrently, or over items in a collection, or as a callback when another operation has finished.
Blocks are a useful alternative to traditional callback functions for two main reasons:
They allow you to write code at the point of invocation that is executed later in the context of the method implementation.
Blocks are thus often parameters of framework methods.
They allow access to local variables.
Rather than using callbacks requiring a data structure that embodies all the contextual information you need to perform an operation, you simply access local variables directly.
On this page
The one that's better is whichever one works better in the situation at hand. If your objects all implement a comparison selector that supports the ordering you want, use that. If not, a block will probably be easier.
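A quick illustration of that trade-off (both calls are standard NSArray API; the data is made up):

```objc
NSArray *names = @[@"banana", @"Apple", @"cherry"];

// Selector: NSString already implements caseInsensitiveCompare:, so this is enough.
NSArray *bySelector = [names sortedArrayUsingSelector:@selector(caseInsensitiveCompare:)];

// Block: write an ad-hoc ordering at the call site, capturing locals if needed.
NSArray *byBlock = [names sortedArrayUsingComparator:^NSComparisonResult(NSString *a, NSString *b) {
    return [a caseInsensitiveCompare:b];
}];
```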
My app contains several singletons (following from this tutorial). I've noticed, however, that when the app crashes because of a singleton, it becomes nearly impossible to figure out where it came from. The app breakpoints at the main function giving an EXEC_BAD_ACCESS even though the problem lies in one of the singleton objects. Is there a guide to how I would debug my singleton objects if they were problematic?
if you don't want to change your design (as recommended in my other post), then consider the usual debugging facilities: assertions, unit tests, zombie tests, memory tests (GuardMalloc, scribbling), etc. this should identify the vast majority of issues one would encounter.
of course, you will have some restrictions regarding what you can and cannot do - notably regarding what cannot be tested independently using unit tests.
as well, reproducibility may be more difficult in some contexts when/if you are dealing with a complex global state because you have created several enforced singletons. when the global state is quite large and complex, testing these types independently may not be fruitful in all cases, since the bug may appear only in a complex global state found in your app (when 4 singletons interact in a specific manner). if you have isolated the issue to interactions of multiple singleton instances (e.g. MONAudioFileCache and MONVideoCache), placing these objects in a container class will allow you to introduce coupling, which will help diagnose this. although increasing coupling is normally considered a bad thing, this doesn't really increase coupling (it already exists as a component of the global state) but simply concentrates existing global state dependencies -- you're really not increasing it as much as you are concentrating it when the state of these singletons affects other components of the mutable global state.
if you still insist on using singletons, these may help:
either make them thread safe or add some assertions to verify mutations happen only on the main thread (for example). too many people assume an object with atomic properties implies the object is thread safe. that is false.
encapsulate your data better, particularly that which mutates. for example: rather than passing out an array your class holds for the client to mutate, have the singleton class add the object to the array it holds. if you truly must expose the array to the client, then return a copy (see the sketch after this list). this is just basic OOD, but many objc devs expose the majority of their ivars, disregarding the importance of encapsulation.
if it's not thread safe and the class is used in a multithreaded context, make the class (not the client) implement proper thread safety.
design singletons' error checking to be particularly robust. if the programmer passes an invalid argument or misuses the interface - just assert (with a nice message about the problem/resolution).
do write unit tests.
detach state (e.g. if you can remove an ivar easily, do it)
reduce complexity of state.
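a minimal sketch of the encapsulation point above, with assumed names; the cache owns and mutates its own array and only ever hands out copies:

```objc
#import <Foundation/Foundation.h>

@interface MONDownloadCache : NSObject
- (void)addDownload:(id)download;    // the cache mutates its own storage
- (NSArray *)downloads;              // clients only ever see a snapshot
@end

@implementation MONDownloadCache {
    NSMutableArray *_downloads;      // never handed out directly
}

- (instancetype)init
{
    self = [super init];
    if (self) {
        _downloads = [NSMutableArray array];
    }
    return self;
}

- (void)addDownload:(id)download
{
    NSAssert([NSThread isMainThread], @"mutate the cache on the main thread only");
    [_downloads addObject:download];
}

- (NSArray *)downloads
{
    return [_downloads copy];        // a copy, so callers cannot mutate internal state
}

@end
```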
if something is still impossible to debug after writing/testing with thorough assertions, unit tests, zombie tests, memory tests (GuardMalloc, scribbling), etc., you are writing programs which are too complex (e.g. divide the complexity among multiple classes), or the requirements do not match the actual usage. if you're at that point, you should definitely refer to my other post. the more complex the global variable state, the more time it will take to debug, and the less you can reuse and test your programs when things do go wrong.
good luck
I scanned the article, and while it had some good ideas it also had some bad advice, and it should not be taken as gospel.
And, as others have suggested, if you have a lot of singleton objects it may mean that you're simply keeping too much state global/persistent. Normally only one or two of your own should be needed (in addition to those that other "packages" of one sort or another may implement).
As to debugging singletons, I don't understand why you say it's hard -- no worse than anything else, for the most part. If you're getting EXEC_BAD_ACCESS it's because you've got some sort of addressing bug, and that's nothing specific to singleton schemes (unless you're using a very bad one).
Macros make debugging difficult because the lines of code they incorporate can't have breakpoints put in them. Deep six macros, if nothing else. In particular, the SYNTHESIZE_SINGLETON_FOR_CLASS macro from the article is interfering with debugging. Replace the call to this macro function with the code it generates for your singleton class.
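As a hedged sketch, a hand-written accessor along these lines (not the macro's exact expansion; the class name is illustrative) keeps breakpoints and stack traces pointing at real source lines:

```objc
#import <Foundation/Foundation.h>

@interface MYSettingsManager : NSObject
+ (instancetype)sharedManager;
@end

@implementation MYSettingsManager

+ (instancetype)sharedManager
{
    static MYSettingsManager *sharedInstance = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        sharedInstance = [[self alloc] init];   // created once, on first use; safe across threads
    });
    return sharedInstance;
}

@end
```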
ugh - don't enforce singletons. just create normal classes. if your app needs just one instance, add it to something which is created once, such as your app delegate.
most cocoa singleton implementations i've seen should not have been singletons.
then you will be able to debug, test, create, mutate and destroy these objects as usual.
the good part is of course that the majority of your global variable pains will disappear when you implement these classes as normal objects.
This is a question with many answers - I am interested in knowing what others consider to be "best practice".
Consider the following situation: you have an object-oriented program that contains one or more data structures that are needed by many different classes. How do you make these data structures accessible?
You can explicitly pass references around, for example, in the constructors. This is the "proper" solution, but it means duplicating parameters and instance variables all over the program. This makes changes or additions to the global data difficult.
You can put all of the data structures inside of a single object, and pass around references to this object. This can either be an object created just for this purpose, or it could be the "main" object of your program. This simplifies the problems of (1), but the data structures may or may not have anything to do with one another, and collecting them together in a single object is pretty arbitrary.
You can make the data structures "static". This lets you reference them directly from other classes, without having to pass around references. This entirely avoids the disadvantages of (1), but is clearly not OO. This also means that there can only ever be a single instance of the program.
When there are a lot of data structures, all required by a lot of classes, I tend to use (2). This is a compromise between OO-purity and practicality. What do other folks do? (For what it's worth, I mostly come from the Java world, but this discussion is applicable to any OO language.)
Global data isn't as bad as many OO purists claim!
After all, when implementing OO classes you're usually using an API to your OS. What the heck is this if it isn't a huge pile of global data and services!
If you use some global stuff in your program, you're merely extending this huge environment your class implementation can already see of the OS with a bit of data that is domain specific to your app.
Passing pointers/references everywhere is often taught in OO courses and books; academically, it sounds nice. Pragmatically, it is often the thing to do, but it is misguided to follow this rule blindly and absolutely. For a decent-sized program, you can end up with a pile of references being passed all over the place, and it can result in completely unnecessary drudgery.
Globally accessible services/data providers (abstracted away behind a nice interface obviously) are pretty much a must in a decent sized app.
I must really really discourage you from using option 3 - making the data static. I've worked on several projects where the early developers made some core data static, only to later realise they did need to run two copies of the program - and incurred a huge amount of work making the data non-static and carefully putting in references into everything.
So in my experience, if you do 3), you will eventually end up doing 1) at twice the cost.
Go for 1, and be fine-grained about what data structures you reference from each object. Don't use "context objects", just pass in precisely the data needed. Yes, it makes the code more complicated, but on the plus side, it makes it clearer - the fact that a FwurzleDigestionListener is holding a reference to both a Fwurzle and a DigestionTract immediately gives the reader an idea about its purpose.
And by definition, if the data format changes, so will the classes that operate on it, so you have to change them anyway.
You might want to think about altering the requirement that lots of objects need to know about the same data structures. One reason there does not seem to be a clean OO way of sharing data is that sharing data is not very object-oriented.
You will need to look at the specifics of your application, but the general idea is to have one object responsible for the shared data which provides services to the other objects based on the data encapsulated in it. However, these services should not involve giving other objects the data structures - merely giving other objects the pieces of information they need to meet their responsibilities, and performing mutations on the data structures internally.
I tend to use 3) and be very careful about the synchronisation and locking across threads. I agree it is less OO, but then you confess to having global data, which is very un-OO in the first place.
Don't get too hung up on whether you are sticking purely to one programming methodology or another, find a solution which fits your problem. I think there are perfectly valid contexts for singletons (Logging for instance).
I use a combination of having one global object and passing interfaces in via constructors.
From the one main global object (usually named after what your program is called or does) you can start up other globals (maybe ones that have their own threads). This lets you control the setting up of program objects in the main object's constructor and tearing them down again in the right order, in this main object's destructor, when the application stops. Using static classes directly makes it tricky to initialize/uninitialize any resources these classes use in a controlled manner. This main global object also has properties for getting at the interfaces of different sub-systems of your application that various objects may want to get hold of to do their work.
I also pass references to relevant data-structures into constructors of some objects where I feel it is useful to isolate those objects from the rest of the world within the program when they only need to be concerned with a small part of it.
Whether an object grabs the global object and navigates its properties to get the interfaces it wants, or gets passed the interfaces it uses via its constructor, is a matter of taste and intuition. Any object you're implementing that you think might be reused in some other project should definitely be passed the data structures it should use via its constructor. Objects that grab the global object should be more to do with the infrastructure of your application.
Objects that receive interfaces they use via the constructor are probably easier to unit-test because you can feed them a mock interface, and tickle their methods to make sure they return the right arguments or interact with mock interfaces correctly. To test objects that access the main global object, you have to mock up the main global object so that when they request interfaces (I often call these services) from it they get appropriate mock objects and can be tested against them.
I prefer using the singleton pattern as described in the GoF book for these situations. A singleton is not the same as either of the three options described in the question. The constructor is private (or protected) so that it cannot be used just anywhere. You use a get() function (or whatever you prefer to call it) to obtain an instance. However, the architecture of the singleton class guarantees that each call to get() returns the same instance.
We should take care not to confuse Object Oriented Design with Object Oriented Implementation. All too often, the term OO Design is used to judge an implementation, just as, imho, it is here.
Design
If in your design you see a lot of objects having a reference to exactly the same object, that means a lot of arrows. The designer should feel an itch here. He should verify whether this object is just commonly used, or if it is really a utility (e.g. a COM factory, a registry of some kind, ...).
From the project's requirements, he can see if it really needs to be a singleton (e.g. 'The Internet'), or if the object is shared because it's too general or too expensive or whatsoever.
Implementation
When you are asked to implement an OO Design in an OO language, you face a lot of decisions, like the one you mentioned: how should I implement all the arrows to the oft used object in the design?
That's the point where questions are addressed about 'static member', 'global variable', 'god class' and 'a-lot-of-function-arguments'.
The Design phase should have clarified if the object needs to be a singleton or not. The implementation phase will decide on how this singleness will be represented in the program.
Option 3), while not purist OO, tends to be the most reasonable solution. But I would not make your class a singleton; instead, use some other object as a static 'dictionary' to manage those shared resources.
I don't like any of your proposed solutions:
You are passing around a bunch of "context" objects - the things that use them don't specify what fields or pieces of data they are really interested in
See here for a description of the God Object pattern. This is the worst of all worlds
Simply do not ever use Singleton objects for anything. You seem to have identified a few of the potential problems yourself