NSTimeInterval is a double, thus it cannot take a nil, and 0 represents something that should happen immediately. Is there a constant that means "never"... or an astronomically huge value, or should I use -1?
As suggested by s.bandara, use a very large number to treat a time interval as "infinite" or "never".
DBL_MAX is the largest value a double can hold. This macro is declared in float.h:
#define DBL_MAX 1.7976931348623157e+308
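For example, a minimal sketch of treating DBL_MAX as "never" (the variable name timeout is just illustrative):

#include <float.h>

NSTimeInterval timeout = DBL_MAX; // about 1.8e308 seconds, i.e. effectively never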
In Swift, use TimeInterval.infinity. For example, in SwiftUI to conditionally enable a timeline view to update every second or never, use:
TimelineView(.periodic(from: start, by: isRunning ? 1 : .infinity))
a := 12 / 24.
a is a variable assigned the fraction 12 / 24, which prints as 1 / 2. Now, opening the inspect pop-up window, I alter the values of the numerator and denominator and expect the answer to be the same as before, 1 / 2. What actually happens is that the output stays 12 / 24, which is kind of weird to me.
I have recorded a video to help understand this issue.
https://www.youtube.com/watch?v=LNj24f2wP0M
Why does reduction of the fraction not happen after the numerator and denominator values are modified in the inspect window?
The behavior you describe is correct and is the intended one.
As a developer you can modify objects in two ways:
Sending messages to them
Modifying their instance variables from an inspector
Method 1 is preferred because it conforms to the object-oriented paradigm. So why do we have method 2? Because when you open an inspector you, in a sense, impersonate the object: you become the object under inspection and are therefore entitled to modify yourself.
Of course, if you modify your internal state it is up to you to preserve your invariants. In the case of fractions, there are two invariants:
denominator > 0
(numerator gcd: denominator) = 1
In sum, the inspector will assume you know what you are doing and will let you modify all instance variables the way you want. When sending messages, however, the object should behave in such a way that its invariants are preserved.
Of course, there are private methods that should be handled with care (i.e., be only sent by public methods), but the general idea is that direct manipulation of objects is a good thing and presents no obstacle or safeguard.
My answer is based on the Pharo dialect; I believe Squeak is not much different.
Because you divided Integers the first time.
Here is a snippet from Integer>>/:
(Fraction numerator: self denominator: aNumber) reduced
Pay attention to the call to the reduced method.
If you send reduced after changing the numerator/denominator in the Inspector tool, the fraction object will be reduced as well.
I want to invert the value of a BOOL every time I detect a tap. The default value of the BOOL is NO and the first time I tap it inverts it to YES. On subsequent taps the value stays as YES.
@property (nonatomic, assign) BOOL isDayOrNight; // property in the timeDayChart object
self.timeDayChart.isDayOrNight = ~self.timeDayChart.isDayOrNight; //This is done in a VC.
I had to change it to this:
self.timeDayChart.isDayOrNight = !self.timeDayChart.isDayOrNight;
to achieve my desired results. I would like to know why ~ did not work as expected.
BOOL is defined as a signed char in objc.h:
typedef signed char BOOL;
and YES and NO are defined like so:
#define YES (BOOL)1
#define NO (BOOL)0
So ~YES is -2, which is not the same as NO.
In (Objective-)C(++), constructs that require a Boolean value, such as if or the operands of &&, actually take an integer value and interpret 0 as false and non-zero as true. The logical, relational and equality operators likewise all return integers, using 0 for false and 1 for true.
Objective-C's BOOL is a synonym for signed char, which is an integer type, while NO and YES are defined as 0 and 1 respectively.
As you correctly state, ~ is the bit inversion operator. If you invert any integer containing both 0s and 1s, the result will also contain both. Any value containing a 1 bit is treated as true, and inverting any such value other than all 1s produces a value with at least one 1 bit, which is also interpreted as true.
If you start with all 0s, repeated inversion goes all 1s, all 0s, all 1s, and so on, which is true, false, true, etc. (but not YES, NO, YES, etc.). So if you are starting with 0 and the value sticks at "true", either you are not always using inversion or you are testing explicitly for YES rather than for truth.
However what you should be using, as you figured out, is ! - logical negation - which maps 0 to 1 and non-0 to 0 and so handles "true" values other than 1 correctly.
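A small sketch of the difference, assuming the signed char definition quoted above:

BOOL flag = NO;   // 0
flag = ~flag;     // ~0 gives -1 (all bits set): non-zero, so it behaves as true
flag = ~flag;     // ~(-1) gives 0: back to false

flag = YES;       // 1
flag = ~flag;     // ~1 gives -2: still non-zero, so it still behaves as true
flag = !flag;     // !(-2) gives 0: logical negation always yields 0 or 1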
HTH
Find a book about the C language and check what it says about the ~ operator and the ! operator. ~ inverts all bits in an integer, and BOOL is defined as an integer type. So NO (all bits zero) will be changed to all bits set, which is not the same as YES, and YES (all bits zero except the last bit, i.e. 1) will be changed to all bits set except the last bit, i.e. -2, which is neither YES nor NO.
You are better off using this idiom to toggle a BOOL value:
self.timeDayChart.isDay = self.timeDayChart.isDay ? NO : YES;
(I deliberately changed the naming of your property)
I have a method that needs to do a different thing when given an unset float than when given a float with the value of 0. Basically, I need to check whether or not a variable has been set, while still counting it as set if it has a value of 0.
So, what placeholder should I use as an unset value (nil, NULL, NO, etc) and how can test to see if a variable is unset without returning true for a value of 0?
You can initialize your floats to NaN (e.g., with the NAN macro from math.h or by calling nanf("")) and then test with isnan() whether they have been changed to hold a number. (Note that testing myValue == NAN will not work, since NaN never compares equal to anything, including itself.)
This is both rather simple (you will probably include math.h in any case) and conceptually sensible: Any value that is not set to a number is "not a number"...
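A minimal sketch of this approach (myValue is just an illustrative name):

#include <math.h>

float myValue = NAN;        // start out "unset"

// ... later, the value may or may not be assigned, 0.0 included ...

if (isnan(myValue)) {
    // never set
} else {
    // set, even if it is exactly 0.0
}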
Using a constant value to indicate the unset state often leads to errors when the variable legitimately obtains the value of that constant.
Consider using NSNumber to store your float. That way it can not only be nil, it will default to that state.
This assumes that you only need a small number of floats. If you need millions of them, NSNumber may be too slow and memory-intensive.
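A minimal sketch of the NSNumber approach, assuming a hypothetical width property:

@property (nonatomic, strong) NSNumber *width;   // defaults to nil, i.e. "unset"

// at the point of use:
if (self.width == nil) {
    // never set
} else {
    float w = self.width.floatValue;   // 0.0 is a perfectly valid, set value
}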
Instead of overloading these float properties (let's call them X and Y), create a separate isValid flag for each property. Initialize the flags to indicate that the floats haven't been set, and provide your own setters to manage the flags appropriately. So your code might look something like:
if (self.isXValid == YES) {
    self.Y = ... // assigning to Y sets isYValid to YES
}
else if (self.isYValid == YES) {
    self.X = ... // assigning to X sets isXValid to YES
}
You could actually go a step further and have the setter for X also assign Y and vice versa. Or, if X and Y are so closely linked that you can calculate one based on the value of the other, you really only need one variable for both properties.
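As a concrete illustration of "provide your own setters", here is a minimal sketch assuming hypothetical ivars _x and _isXValid backing the properties:

- (void)setX:(float)x {
    _x = x;
    _isXValid = YES;   // record that X now holds a deliberately assigned value
}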
My goal is to prevent index out of bounds conditions for the lower bound when using a variable as a subscript to an array. In other words, I'd like to limit the integer variable values to >= 0. Sort of similar to an absolute value, except instead of making a negative number positive, it would make a negative number zero.
Is there any better method of doing this than using a macro such as:
#define gte0(value) (((value) < 0) ? 0 : (value))
and then wrapping my variables representing an index with this macro when I access an array element? Is there a standard practice bounds checking other than doing it manually in every place in your code before you access an array element with a variable representing the index?
I'm looking for any solutions in C or Objective-C.
Thanks!
Require unsigned int or NSUInteger primitives for indices. You'll then be guaranteed a value greater than or equal to zero, up to UINT_MAX or whatever limits.h defines, and you just need to check the upper bound.
There is no way of doing this automatically in C.
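If you do clamp manually, one sketch of an alternative to the macro is a static inline function (the name clampedIndex is just illustrative); unlike a macro it evaluates its argument only once and is type checked:

#import <Foundation/Foundation.h>

static inline NSInteger clampedIndex(NSInteger index) {
    return index < 0 ? 0 : index;
}

// usage, with array and i assumed to exist in your code:
// element = array[clampedIndex(i)];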
In Objective-C I have a timer fire every 0.1 seconds and increment a double value (seconds) by 0.1.
So it should basically keep time, counting up by 1/10 of a second. When it fires, it checks some if-else statements to see whether the time (seconds) is equal to 3, 9, 33, etc., but these are never triggered. I suppose it is because of the way doubles are represented in bits; that is, the decimal value is an approximation and never exactly the whole number I am comparing against.
How can I fix this so my statements are triggered?
-(void)timeSeconds:(NSTimer*)theTimer {
    seconds = seconds + 0.1;
    NSLog(@"%f", seconds);
    if (seconds == 3.0) {
        [player pause];
        [secondsTimer invalidate];
    }
    else if (seconds == 9) {
        [player pause];
        [secondsTimer invalidate];
    }
}
Floating point types cannot represent some numbers exactly, so when such values are added repeatedly, the error compounds and the stored value drifts further and further from the intended one.
Use an integral type but represent the time difference using greater resolution, for example, use an NSUInteger to represent milliseconds instead of seconds, and increment by 100 instead of 0.1. Instead of comparing seconds == 3.0, you would use milliseconds == 3000, etc.
Keep in mind that timers are not fired very precisely:
A timer is not a real-time mechanism; it fires only when one of the run loop modes to which the timer has been added is running and able to check if the timer’s firing time has passed.
You may find that when milliseconds==9000, more than 9 seconds has actually passed (but probably not much more). There are other tools available if more precise timing is required.
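A minimal sketch of that change, assuming an NSUInteger ivar named milliseconds replaces the double seconds:

- (void)timeSeconds:(NSTimer *)theTimer {
    milliseconds += 100;                     // the timer fires every 0.1 s
    NSLog(@"%f", milliseconds / 1000.0);
    if (milliseconds == 3000 || milliseconds == 9000) {
        [player pause];
        [secondsTimer invalidate];
    }
}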
0.1 cannot be represented exactly in binary floating point, so you get a small error that accumulates over time. If you want an exact value, use an int or long variable, called tenthsOfSeconds for example, that gets incremented by 1 each time the timer fires.
Floating point math is imprecise, and using floating point to count is not a great idea; but if you must, check that the difference between the target count and the variable is very small rather than testing for exact equality.
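A minimal sketch of such a tolerance check, using half of the 0.1 s step as the threshold so each target time matches exactly once:

// fabs() comes from math.h
if (fabs(seconds - 3.0) < 0.05) {
    [player pause];
    [secondsTimer invalidate];
}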