I'm not sure how to initialize a class property that's a C struct or other primitive (non-object) type. This is what I'd do if I were dealing with a class, but I don't know what I can check to see whether this is the first time the struct has been accessed, since structs are undefined at creation:
@interface GraphView : UIView
@property (nonatomic) CGPoint origin;
@end

@implementation GraphView
@synthesize origin = _origin;

- (CGPoint)origin
{
    if (WHAT?) {
        _origin = CGPointMake(self.bounds.origin.x + self.bounds.size.width/2,
                              self.bounds.origin.y + self.bounds.size.height/2);
    }
    return _origin;
}
@end
I realize the primary benefit of lazy initialization is for memory allocation, but if I'm doing this for all the properties that are classes, it seems clearest to use the same style for setting starting values on all my properties.
I could use some other instance variable or property to track whether self.origin has been accessed, but that seems... not smooth. Or I could take care never to access self.origin before I've set it, which is more or less implied anyway by the fact that structs are undefined at creation.
Is there a "right" way?
All members of an Objective-C class are initialized to zero on creation (even a struct). I don't follow your point about memory allocation: the space for the struct is reserved inside the object (you are not storing a pointer to it), so it takes the same amount of space whether you have assigned it a value or not.
Generally you have to have an in-band invalid state like nil for pointers. In your case, you could test !CGPointEqualToPoint(_origin, CGPointZero) or !CGPointEqualToPoint(_origin, CGPointMake(-1,-1)) if you know those can never be valid. (_origin will default to CGPointZero or you'll have to set it to (-1, -1) in an init if (0, 0) is valid.)
If all possible in-band values are valid, you're stuck with an out-of-band flag as you mentioned.
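A minimal sketch of that in-band approach (my illustration, assuming CGPointZero can never be a valid origin for your view): the getter lazily fills in the value while the ivar still holds its zero default.

- (CGPoint)origin
{
    // CGPointZero doubles as the "not yet set" sentinel here
    if (CGPointEqualToPoint(_origin, CGPointZero)) {
        _origin = CGPointMake(CGRectGetMidX(self.bounds),
                              CGRectGetMidY(self.bounds));
    }
    return _origin;
}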
The lazy evaluation of the CGPoint could be done in (at least) two different ways:
The clean way is to store a BOOL that tracks whether your CGPoint has been initialized.
@interface GraphView : UIView {
    BOOL _originInitialized;
}
@property (nonatomic) CGPoint origin;
@end

@implementation GraphView
@synthesize origin = _origin;

// extend the init methods
- (id)init {
    ...
    _originInitialized = NO;
    ...
}

- (CGPoint)origin {
    if (!_originInitialized) {
        _origin = CGPointMake(self.bounds.origin.x + self.bounds.size.width/2,
                              self.bounds.origin.y + self.bounds.size.height/2);
        _originInitialized = YES;
    }
    return _origin;
}
@end
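One caveat worth adding (my sketch, not part of the original answer): if callers may assign origin through the setter before the getter ever runs, the custom setter should set the flag too, otherwise the lazy getter will later overwrite the explicitly assigned value.

// hypothetical companion setter for the flag-based approach
- (void)setOrigin:(CGPoint)origin {
    _origin = origin;
    _originInitialized = YES;
}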
If you don't want to spend a BOOL of memory, you could take a dirtier approach and test _origin against a sentinel initial value:
@interface GraphView : UIView
@property (nonatomic) CGPoint origin;
@end

@implementation GraphView
@synthesize origin = _origin;

// extend the init methods
- (id)init {
    ...
    _origin = CGPointMake(-23.0, -42.0);
    ...
}

- (CGPoint)origin {
    if (_origin.x == -23.0 && _origin.y == -42.0) {
        _origin = CGPointMake(self.bounds.origin.x + self.bounds.size.width/2,
                              self.bounds.origin.y + self.bounds.size.height/2);
    }
    return _origin;
}
@end
But I'd recommend against using the second one.
I'm trying to achieve/force this safe and clean Swift syntax:
struct PopButton: View
{
    var Label = PopEngineLabel(label: "Hello")
    var body: some View
    {
        Text(Label.label)
    }
}
Objective-C:
@interface PopEngineLabel : NSObject
@property (strong, nonatomic) NSString* label;
- (id)initWithLabel:(NSString*)label;
@end

@implementation PopEngineLabel
- (id)initWithLabel:(NSString*)label
{
    if ( label )
        self.label = label;
    else
        self.label = @"<null>";
    return self;
}
@end
But in the SwiftUI code, I get the error
error: value of optional type 'PopEngineLabel?' must be unwrapped to refer to member 'label' of wrapped base type 'PopEngineLabel'
Text(Label.label)
I can remove the errors with
Text(Label?.label ?? "default")
Text(Label!.label)
I'm assuming this is because all Objective-C class instances are implicitly optional.... BUT, the following code makes it non-optional, yet crashes at runtime, since it never goes through my Objective-C initialiser (and .label is nil):
struct PopButton: View
{
    var Label = PopEngineLabel()
    var body: some View
    {
        Text(Label.label)
    }
}
Can I force the user to use a constructor/initialiser AND be a non-optional in swift? (without making an intermediate swift class)
You can use nullability specifiers in Objective-C to tell Swift whether a specific property can be nil or not (and hence whether Swift should treat it as an Optional or not). In Objective-C, any object reference can be nil, since it is just a pointer, so by default Swift imports un-annotated Obj-C pointer properties as implicitly unwrapped optionals.
However, this won't make the inherited initialiser from NSObject initialise label correctly (or make that initialiser unusable). So you need to assign a default value to label in the default init inherited from NSObject.
@interface PopEngineLabel : NSObject
@property (strong, nonatomic, nonnull) NSString* label;
- (instancetype)initWithLabel:(nonnull NSString*)label;
- (nonnull instancetype)init NS_UNAVAILABLE; // disable default initialiser in Swift
@end

@implementation PopEngineLabel

- (instancetype)init {
    self = [super init];
    if (self) {
        self.label = @"<null>";
    }
    return self;
}

- (instancetype)initWithLabel:(nonnull NSString*)label {
    self = [super init];
    if (self) {
        if (label) {
            self.label = label;
        } else {
            self.label = @"<null>";
        }
    }
    return self;
}
@end
Couple of other things to bear in mind in Objective-C:
Your init shouldn't return id, but instancetype
You should delegate to super init
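As an aside (my sketch, not part of the answer above): a common convention is to wrap the whole header in the NS_ASSUME_NONNULL macros, so every pointer defaults to nonnull and Swift imports label as a non-optional String unless something is explicitly marked nullable.

// Sketch: an audited-for-nullability header
NS_ASSUME_NONNULL_BEGIN

@interface PopEngineLabel : NSObject
@property (strong, nonatomic) NSString *label;
- (instancetype)initWithLabel:(NSString *)label;
- (instancetype)init NS_UNAVAILABLE;
@end

NS_ASSUME_NONNULL_END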
I have a class in a game that is used often, and I thought it would be nice to tidy it up by grouping related instance variables together with a typedef struct. I'm not yet sure whether this will actually help.
Originally in my class header interface I had something like this:
@interface ThingClass : CCLayer {
    @public
    bool _invulnerableToggled;
    int _invulnerableCount;
    int _invulnerableMax;
}
@property(nonatomic, assign) bool invulnerableToggled;
@property(nonatomic, assign) int invulnerableCount;
@property(nonatomic, assign) int invulnerableMax;
and in my .m
@synthesize
invulnerableToggled = _invulnerableToggled,
invulnerableCount = _invulnerableCount,
invulnerableMax = _invulnerableMax;
A subclass of this class would set these variables to their default values in init. Another class could access an instance of this subclass and set the values accordingly with regular dot notation, like tempThing.invulnerableToggled = YES;
Now that I'm using a typedef struct, it looks as though my values cannot be adjusted, and I've tried various things to overcome it. Although it may be because I'm not setting this up correctly to begin with, so just in case I'll show you what I'm doing now.
Currently my class header:
typedef struct {
    bool toggled;
    int count;
    int max;
} Invulnerable;

@interface ThingClass : CCLayer {
    @public
    Invulnerable _invulnerable;
}
@property(nonatomic, assign) Invulnerable invulnerable;
and in my .m
@synthesize
invulnerable = _invulnerable;
I set these values in a subclass init like so:
_invulnerable.toggled = NO;
_invulnerable.count = 0;
_invulnerable.max = 50;
When I try to update this from another class, I expect it to add 1 to the current value each time, but it always remains 1. This if statement runs up to 60 times a second, yet the count never changes:
Invulnerable invulnerable = tempBaddy.invulnerable;
// check baddy invulnerability and increment if needed
if (invulnerable.toggled == YES) {
    int increase = invulnerable.count + 1;
    invulnerable.count = increase;
    NSLog(@"invulnerable.count = %i", invulnerable.count);
}
The count never changes because the property getter returns the struct by value: tempBaddy.invulnerable hands you a copy, and incrementing the copy does not touch the ivar. This is not a common way in Objective-C, but you can pass the struct by reference instead, i.e. return a pointer to it:
@interface ThingClass : CCLayer {
    @protected
    Invulnerable _invulnerable;
}
@property(nonatomic, readonly) Invulnerable* invulnerable;
@end
The *.m file:
@implementation ThingClass

- (Invulnerable*)invulnerable {
    return &_invulnerable;
}

@end
Updating the data:
Invulnerable* invulnerable = tempBaddy.invulnerable;
// check baddy invulnerability and increment if needed
if (invulnerable->toggled == YES) {
    invulnerable->count++;
    NSLog(@"invulnerable.count == %i", tempBaddy.invulnerable->count);
}
I guess you are trying to perform some action on an instance of ThingClass (or a subclass) that affects the value of _invulnerable. In that case, the more common approach is to give ThingClass a method that performs all the required updates:
@implementation ThingClass

- (void)headshot {
    if (_invulnerable.toggled) {
        _invulnerable.count++;
    } else {
        [self die];
    }
}

@end
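If you would rather keep the plain struct property from the question, a third option (my sketch, assuming the question's original readwrite Invulnerable property) is to modify the local copy and then assign the whole struct back through the setter; the write-back is exactly what the original loop was missing.

Invulnerable invulnerable = tempBaddy.invulnerable;   // read out a copy
if (invulnerable.toggled) {
    invulnerable.count++;                             // modify the copy
    tempBaddy.invulnerable = invulnerable;            // write the copy back
}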
I'm trying to teach myself Objective-C and as an exercise, I'm trying to write an app with one button and one label. When I click on the button, I want to trigger a calculation then see the results in the label. The following code compiles and runs with no errors or warnings but as far as I can tell, the [object method] 'call' doesn't do anything. I've spent hours on this and just don't see what's wrong. Can anyone explain the problem? Thanks.
*** testMethodViewController.h ****
#import <UIKit/UIKit.h>
#import "testBrain.h"

@interface testMethodViewController : UIViewController
{
    IBOutlet UILabel *display;
    testBrain *model;
}
- (IBAction)cellPressed:(UIButton *)sender;
@end
*** testMethodViewController.m ****
#import "testMethodViewController.h"

@implementation testMethodViewController

- (testBrain *)model
{
    if (!model) { model = [[testBrain alloc] init]; }
    return model;
}

- (IBAction)cellPressed:(UIButton *)sender
{
    int x = [model check:3];               // This method call doesn't work, but gets no errors.
    NSLog(@"Results from model: %i", x);   // Says x = 0, but I expect 6
    NSString *xAsString = [NSString stringWithFormat:@"testBrain: %i", x];
    display.text = xAsString;              // Label is updated and displays: testBrain: 0
}                                          // I expect: testBrain: 6
@end
*** testBrain.h ****
#import <Foundation/Foundation.h>

@interface testBrain : NSObject {}
- (int)check:(int)anInteger;
@end
*** testBrain.m ****
#import "testBrain.h"

@implementation testBrain

- (int)check:(int)anInteger  // 3 passed as the parameter.
{
    int r = anInteger + anInteger;
    NSLog(@"inside check %i", r); // Debugging line: doesn't print.
    return r;
}
@end
When this code runs:
int x = [model check:3];
model is nil. In Objective-C, messages sent to nil silently do nothing, and return 0. So, as you see, x is 0 and -check: is never called.
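A quick illustration of that rule (my own snippet, not from the question's code):

testBrain *brain = nil;
int x = [brain check:3]; // no exception, -check: never runs, and x is simply 0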
Apparently you were expecting this method to be called automatically:
- (testBrain *)model
{
    if (!model) { model = [[testBrain alloc] init]; }
    return model;
}
However, that method will be called only if you do it yourself, by saying [self model] or self.model. So, this line would fix it:
int x = [[self model] check:3];
Try it and see.
Going a little further: It would be clearer to remove the model method entirely, and create the instance variable model when the UIViewController is created. That way, we can guarantee that model is valid anytime any code in the testMethodViewController class runs.
You would do that by overriding UIViewController's designated initializer:
- (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
{
    self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
    if (self) {
        // Now you can initialize your instance variables
        model = [[testBrain alloc] init];
    }
    return self;
}
With your model method you are halfway towards lazy instantiation; however, to properly achieve this you must always access the lazily instantiated object through its accessor method. You aren't doing this in your button action, so your messages are going to nil, which is silently ignored.
This is one of the reasons you often see instance variables in Objective-C declared with a leading or trailing underscore. If you then typed model anywhere else in the class, it would be a compiler error, forcing you to use the accessor. Typically this is implemented with properties and the synthesize statement:
In your interface:
@property (nonatomic, strong) TestBrain* model;
In your implementation:
@synthesize model = model_;
Your model method would be:
- (TestBrain*)model
{
    if (!model_)
        model_ = [[TestBrain alloc] init];
    return model_;
}
You would then use self.model instead of model throughout the rest of the class.
If you are just starting out, the Stanford iOS course on iTunes U is an excellent resource, a lot of this sort of material is covered.
int x = [model check:3];
This line should be:
int x = [self.model check:3];
You are almost there. You need to use @property and @synthesize in order to complete this. The @property declaration tells the compiler that the variable is a property, and the @synthesize directive tells it to generate the setter and getter for that property. Properties let you use dot syntax, i.e. self.model, which automatically calls the getter or setter, depending on the context.
In your testMethodViewController.h file change it to look like this:
@interface testMethodViewController : UIViewController
{
    IBOutlet UILabel *display;
    testBrain *model;
}
@property (nonatomic, retain) testBrain *model;
- (IBAction)cellPressed:(UIButton *)sender;
@end
Then in the .m implementation you need to use @synthesize after the @implementation, like this:
@implementation testMethodViewController
@synthesize model; // tells the compiler to synthesize the setter and getter for you

- (testBrain *)model
{
    if (!model) { model = [[testBrain alloc] init]; }
    return model;
}
then in your cellPressed: method, you need to use self.model in order for the getter to be called:
- (IBAction)cellPressed:(UIButton *)sender
{
    int x = [self.model check:3]; // now goes through the getter, so model is created lazily
    NSLog(@"Results from model: %i", x);
    NSString *xAsString = [NSString stringWithFormat:@"testBrain: %i", x];
    display.text = xAsString;
}
Hope this helps.
I don't see the following anywhere in the testMethodViewController.h file:
IBOutlet UIButton *button;
Also check that you have properly connected all of your IBOutlet, IBAction, delegate, and data source connections.
I have a class that implements the following method (a getter):
// the interface
@interface MyClass : NSObject {
    NSNumber *myFloatValue;
}
- (double)myFloatValue;
- (void)setMyFloatValue:(float)floatInput;
@end

// the implementation
@implementation
- (MyClass *)init{
    if (self = [super init]){
        myFloatValue = [[NSNumber alloc] initWithFloat:3.14];
    }
    return self;
}

// I understand that NSNumbers are non-mutable objects and can't be
// used like variables.
// Hence I decided to make the getter's implementation like this
- (double)myFloatValue{
    return [myFloatValue floatValue];
}

- (void)setMyFloatValue:(float)floatInput{
    if ([self myFloatValue] != floatInput){
        [myFloatValue release];
        myFloatValue = [[NSNumber alloc] initWithFloat:floatInput;
    }
@end
When I mouse over the myFloatValue object during debugging, it does not contain a value. Instead it says: "out of scope".
I would like to be able to make this work without using @property, without switching to something other than NSNumber, and without other major changes, since I just want to understand the concepts first. Most importantly, I would like to know what mistake I've apparently made.
I can see a couple of mistakes:
The line @implementation should read @implementation MyClass.
The method setMyFloatValue: is missing a closing ] and a closing }; it should read:
- (void)setMyFloatValue:(float)floatInput{
    if ([self myFloatValue] != floatInput){
        [myFloatValue release];
        myFloatValue = [[NSNumber alloc] initWithFloat:floatInput];
    }
}
I've just tested it in Xcode and it works for me with these changes.
Why not just declare a property in the interface and synthesize the accessors in the implementation?
@interface MyClass : NSObject {
    float myFloat;
}
@property (assign) float myFloat;
@end

@implementation MyClass
@synthesize myFloat;
@end
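For reference (my sketch of what the compiler does, not code from this answer), @synthesize for an assign float property generates accessors roughly equivalent to the following, ignoring atomicity:

// approximately what @synthesize myFloat; produces
- (float)myFloat {
    return myFloat;
}

- (void)setMyFloat:(float)value {
    myFloat = value;
}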
thing.h
@interface Thing : NSObject
{
    float stuff[30];
}
@property float stuff;
@end
thing.m
@implementation Thing
@synthesize stuff;
@end
I get error: type of property 'stuff' does not match type of ivar 'stuff'
I don't want to use an NSArray because I'd have to make the floats into NSNumbers (right?) and that's a pain to do math with.
Update: I've noticed that similar questions got guesses and trial answers. While I appreciate the attempts by non-Objective-C folks, I'm hoping for a definitive answer as to whether it's possible or not.
OK, I have compiled the following code and it works as expected.
FloatHolder.h
@interface FloatHolder : NSObject {
    int _count;
    float* _values;
}

- (id)initWithCount:(int)count;

// possibly look into this for making access shorter
// http://vgable.com/blog/2009/05/15/concise-nsdictionary-and-nsarray-lookup/
- (float)getValueAtIndex:(int)index;
- (void)setValue:(float)value atIndex:(int)index;

@property(readonly) int count;
@property(readonly) float* values; // allows direct unsafe access to the values

@end
FloatHolder.m
#import "FloatHolder.h"
@implementation FloatHolder

@synthesize count = _count;
@synthesize values = _values;

- (id)initWithCount:(int)count {
    self = [super init];
    if (self != nil) {
        _count = count;
        _values = malloc(sizeof(float) * count);
    }
    return self;
}

- (void)dealloc
{
    free(_values);
    [super dealloc];
}

- (float)getValueAtIndex:(int)index {
    if (index < 0 || index >= _count) {
        @throw [NSException exceptionWithName:@"Exception" reason:@"Index out of bounds" userInfo:nil];
    }
    return _values[index];
}

- (void)setValue:(float)value atIndex:(int)index {
    if (index < 0 || index >= _count) {
        @throw [NSException exceptionWithName:@"Exception" reason:@"Index out of bounds" userInfo:nil];
    }
    _values[index] = value;
}

@end
then in your other application code you can do something like the following:
** FloatTestCode.h **
#import <Cocoa/Cocoa.h>
#import "FloatHolder.h"
@interface FloatTestCode : NSObject {
    FloatHolder* holder;
}

- (void)doIt:(id)sender;

@end
** FloatTestCode.m **
#import "FloatTestCode.h"
@implementation FloatTestCode

- (id)init
{
    self = [super init];
    if (self != nil) {
        // alloc/init already returns an owned reference; an extra -retain here would over-retain
        holder = [[FloatHolder alloc] initWithCount:10];
    }
    return self;
}

- (void)dealloc
{
    [holder release];
    [super dealloc];
}

- (void)doIt:(id)sender {
    holder.values[1] = 10;
}

@end
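For completeness (my own usage sketch), the bounds-checked accessors would be used like this instead of poking the raw pointer:

[holder setValue:10.0f atIndex:1];     // range-checked write
float f = [holder getValueAtIndex:1];  // range-checked read
NSLog(@"%f", f);                       // prints 10.000000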
The type of the property must match the type of the instance variable it will be stored in, so you could do something like
@interface Thing : NSObject
{
    float stuff[30];
}
@property float[30] stuff;
@end
and it should work. I wouldn't recommend it though.
I'm guessing you're looking for something like indexed properties from Delphi. The closest you'll get is something like the following.
@interface Thing : NSObject
{
    float stuff[30];
}
- (void)setStuff:(float)value atIndex:(int)index;
- (float)getStuffAtIndex:(int)index;
@end
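A matching implementation might look like this (my sketch; the answer above only shows the interface):

@implementation Thing

- (void)setStuff:(float)value atIndex:(int)index {
    if (index >= 0 && index < 30) {
        stuff[index] = value;
    }
}

- (float)getStuffAtIndex:(int)index {
    return (index >= 0 && index < 30) ? stuff[index] : 0.0f;
}

@end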
You can't do it the way you want to do it. You can jump through some hoops and get something similar, e.g. using Daniel's solution, but it's not quite the same thing. The reason you can't do it is that arrays are not lvalues in C. An lvalue is something that can appear on the left-hand side of an assignment. The following code is invalid C:
float stuff1[30], stuff2[30];
stuff1 = stuff2; // ERROR: arrays are not lvalues
As a consequence, you can't declare properties whose types are not lvalues.
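One workaround that does compile (my sketch, not part of this answer): wrap the fixed-size array in a struct. Struct assignment is legal C, so a struct-typed property works the same way CGRect properties do, copying the whole buffer by value.

// hypothetical wrapper type so the 30 floats become one assignable value
typedef struct {
    float values[30];
} StuffBuffer;

@interface Thing : NSObject
@property (nonatomic, assign) StuffBuffer stuff;
@end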
Daniel's FloatHolder answer has a major bug (edit: he's now fixed it). It only allocates memory for one float and not for the whole array.
The line:
_values = malloc(sizeof(float));
Should be:
_values = malloc(sizeof(float) * count);
Otherwise it seems to be a good answer. Sorry, I couldn't work out how to reply directly. (Edit: I didn't have the necessary privilege on Stack Overflow then.)
Even if you could get that to compile, it wouldn't behave well. stuff would return a float*, and the client would have no idea how long the array was; setStuff: would just change the pointer, and you'd either be pointing to stack-allocated data that would vanish out from under you or to heap-allocated data that would leak because nothing would know to free it.
I'm not well-versed in Objective-C 2.0, but I'm guessing that the issue might be caused by the fact that a C array is essentially just a pointer to the first element of the array, meaning that the type of float stuff[30] is actually float *, not merely a float.