How to calculate the time taken by one process in Objective-C?

Hey there, I am fresh to iPhone development and Objective-C. Sorry if this question is too stupid....
Here is my problem: I want to calculate the time taken by one of my functions, like one that uses UIGetScreenImage. Here is the code:
- (void)screenCapture {
    CGImageRef screen = UIGetScreenImage();
    UIImage *image = [UIImage imageWithCGImage:screen];
    CGImageRelease(screen);
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
}
What should I do to calculate the time taken by this process? Sample code would be appreciated.
Thanks for your kind assistance. Look forward to your replies and ideas. :D

You can get the current date when the method starts and again when it finishes, and check the time that passed between those two moments:
- (void)screenCapture {
    NSDate *startDate = [NSDate date];
    ...
    NSDate *finishDate = [NSDate date];
    NSLog(@"%f", [finishDate timeIntervalSinceDate:startDate]);
}
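Applied to the method from the question, that looks like this (a minimal sketch; UIGetScreenImage is a private API, so treat it purely as illustration):
- (void)screenCapture {
    NSDate *startDate = [NSDate date];

    CGImageRef screen = UIGetScreenImage();
    UIImage *image = [UIImage imageWithCGImage:screen];
    CGImageRelease(screen);
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);

    // timeIntervalSinceDate: returns the elapsed seconds between the two dates.
    NSLog(@"screenCapture took %f seconds", [[NSDate date] timeIntervalSinceDate:startDate]);
}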
Edit: I believe my approach described above is (to put it mildly) not the best solution for measuring process time. Now I use the approach described in the "Big Nerd Ranch" blog here, which uses the mach_absolute_time function. I copied the code from the post to illustrate that method - with this snippet you can measure the run time of an arbitrary block:
#import <mach/mach_time.h> // for mach_absolute_time() and friends

CGFloat BNRTimeBlock (void (^block)(void)) {
    mach_timebase_info_data_t info;
    if (mach_timebase_info(&info) != KERN_SUCCESS) return -1.0;

    uint64_t start = mach_absolute_time();
    block();
    uint64_t end = mach_absolute_time();
    uint64_t elapsed = end - start;

    uint64_t nanos = elapsed * info.numer / info.denom;
    return (CGFloat)nanos / NSEC_PER_SEC;
} // BNRTimeBlock
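For example, to time the screen capture from the question (assuming a screenCapture method like the one above exists on self):
CGFloat seconds = BNRTimeBlock(^{
    [self screenCapture];
});
NSLog(@"screenCapture took %f seconds", seconds);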

I think an easier (and more straightforward) solution can be found here:
NSDate *methodStart = [NSDate date];
/* ... Do whatever you need to do ... */
NSDate *methodFinish = [NSDate date];
NSTimeInterval executionTime = [methodFinish timeIntervalSinceDate:methodStart];
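and then log it, for example:
NSLog(@"executionTime = %f", executionTime);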

Related

Compare two dates in Objective-C

I am sure this question came up before, but I am pulling my hair out. I have two dates - one from an object on Parse.com and the other one local. I try to determine whether the remote object has been updated so that I can trigger actions locally.
When looking at the NSDate of both objects they seem identical, but a comparison reveals that the remote object is newer - when checking the time interval (since 1970) it becomes obvious that there is a difference, but why? When I first created the local object all I did was
localObject.updatedAt = remoteObject.updatedAt; // both NSDate
But when looking closer I get this:
Local Time Interval: 1411175940.000000
Local Time: 2014-09-20 01:19:00 +0000
Remote Time Interval: 1411175940.168000
Remote Time: 2014-09-20 01:19:00 +0000
Does anyone have an idea why that is and whether I can ignore this detail? Does iOS round up or something?
Adding more code:
@property (strong, nonatomic) NSDate *date;
...
PFQuery *query = [PFObject query];
[query whereKey:@"Product" equalTo:@"123456"];
[query findObjectsInBackgroundWithBlock:^(NSArray *objects, NSError *error) {
    if (!error)
    {
        self.date = objects[0].updatedAt;
        NSTimeInterval localTime = [self.date timeIntervalSince1970];
        NSTimeInterval remoteTime = [objects[0].updatedAt timeIntervalSince1970];
        NSLog(@"Local Time Interval: %f", localTime);
        NSLog(@"Local Time: %@", self.date);
        NSLog(@"Remote Time Interval: %f", remoteTime);
        NSLog(@"Remote Time: %@", objects[0].updatedAt);
    }
    else
    {
        NSLog(@"Error with query");
    }
}];
That results in the console output above - and I don't understand why these dates are different.
I cannot explain why there is a difference, but the important thing to understand is that there can be a difference and that when comparing dates you have to use a tolerance value.
The Apple Date and Time Programming Guide has an example of how to compare two dates within a given tolerance:
To compare dates, you can use the isEqualToDate:, compare:, laterDate:, and earlierDate: methods. These methods perform exact comparisons, which means they detect sub-second differences between dates. You may want to compare dates with a less fine granularity. For example, you may want to consider two dates equal if they are within a minute of each other. If this is the case, use timeIntervalSinceDate: to compare the two dates. The following code fragment shows how to use timeIntervalSinceDate: to see if two dates are within one minute (60 seconds) of each other.
if (fabs([date2 timeIntervalSinceDate:date1]) < 60) ...
It's up to you to decide on the tolerance value, but something like 0.5 seconds seems reasonable:
+ (BOOL)date:(NSDate *)date1 equalsDate:(NSDate *)date2
{
    return fabs([date2 timeIntervalSinceDate:date1]) < 0.5;
}
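With that helper, the check from the question becomes (a sketch; the DateUtils class name is made up here):
if ([DateUtils date:localObject.updatedAt equalsDate:remoteObject.updatedAt]) {
    // Within tolerance: treat the local copy as up to date, no sync needed.
}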
Parse stores dates in ISO 8601 format. This makes things very complex, as Apple does not handle the format well. While the idea of the standard is awesome, until everyone plays by the same rules, anarchy rules.
I convert everything inbound from Parse into a usable format before attempting anything on their date/time values.
Drop this into a library somewhere, and save yourself tons of headaches. This took weeks of searching and scratching to overcome.
+ (NSDate *)convertParseDate:(NSDate *)sourceDate {
    NSDateFormatter *dateFormatter = [NSDateFormatter new];
    // Note: despite the NSDate parameter type, the incoming value is treated
    // as an ISO 8601 string, hence the cast before parsing.
    NSString *input = (NSString *)sourceDate;
    dateFormatter.dateFormat = @"yyyy-MM-dd'T'HH:mm:ss.SSS'Z'";
    // Always use this locale when parsing fixed-format date strings.
    NSLocale *posix = [[NSLocale alloc] initWithLocaleIdentifier:@"en_US_POSIX"];
    dateFormatter.locale = posix;
    NSDate *convertedDate = [dateFormatter dateFromString:input];
    assert(convertedDate != nil);
    return convertedDate;
}
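A possible call site (assuming the method lives on a hypothetical Utils class):
NSDate *updatedAt = [Utils convertParseDate:remoteObject.updatedAt];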

Performance of Swift vs Objective-C

I am just comparing the performance of Swift and Objective-C. For that, I am using NSDate to measure the time taken, but I am getting a big difference between Swift and Objective-C. I just ran an empty for loop 100,000 times. Here is my code,
In Objective-C,
NSDate *start = [NSDate date];
for (int i = 0; i <= 100000; i++) {
}
NSDate *end = [NSDate date];
double timeTaken = [end timeIntervalSinceDate:start] * 1000;
timeTaken is 0.24 milliseconds
In Swift,
var start = NSDate()
for i in 0...100000
{
}
var end = NSDate()
var timeTaken = end.timeIntervalSinceDate(start) * 1000
timeTaken is 74 milliseconds in Swift, which is a big difference when compared to Objective-C.
Am I doing anything wrong here in the measurement?
What you are doing is pure nonsense. It doesn't matter what the performance of this loop is, because it doesn't happen in real code. The essential difference is that in Swift, the loop will do overflow checks at each step, which are a required side effect due to the language definition. In Objective-C, that's not the case.
At the very least you need to use code that actually does something meaningful.
I've done some real tests, and the results were: 1. The speed of Swift and plain C for low-level operations is comparable. 2. When an overflow happens, Swift will kill the program so you can notice the overflow, while C and Objective-C will silently give you nonsense results. Try this:
var s: Double = 0.0
for i in 1...10_000_000_000 { s += Double(i * i) }
Swift will crash. (Anyone who thinks that's bad hasn't got a clue about programming.) The same thing in Objective-C will produce a nonsense result. Replace the loop with
for i in 1...10_000_000_000 { s += Double(i) * Double(i) }
and both run at comparable speed.
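For comparison, the equivalent plain C/Objective-C loop (a sketch) keeps running with garbage values once i * i exceeds the integer range, which is exactly the silent-nonsense behaviour described above:
double s = 0.0;
for (long long i = 1; i <= 10000000000LL; i++) {
    s += (double)(i * i); // silently overflows once i*i passes LLONG_MAX
}
NSLog(@"%f", s); // nonsense result, but no crash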
I did some tests with a sort function which used a native Swift array and Swift strings, compared to an Objective-C mutable array and NSString.
The function ordered 1000 strings, performed ~1 million string comparisons, and shuffled the array.
Results were the following:
Objective C (using primitive integers and booleans): 0.32 seconds
Objective C (using NSNumber for integers and booleans): 0.45 seconds
Swift in debug mode: 13 seconds :)
Swift with Optimization level (Fastest [-O]): 1.32 seconds
Swift with Optimization level (Fastest, Unchecked [-Ofast]): 0.67 seconds
It seems that in fastest mode Swift came pretty close to Objective-C, but it is still not faster, as Apple claim(ed).
This is the sort code I used:
Swift:
var strFile = String.stringWithContentsOfFile("1000strings.txt", encoding: NSUTF8StringEncoding, error: nil)
var strings:String[] = strFile!.componentsSeparatedByString("\n")
var startDate = NSDate()
var shouldLoopAgain = true
var numberOfLoops = 0
while shouldLoopAgain {
    shouldLoopAgain = false
    ++numberOfLoops
    for var i = 0; i < strings.count-1; ++i {
        if (strings[i] > strings[i+1]) {
            let temp = strings[i]
            strings[i] = strings[i+1]
            strings[i+1] = temp
            if !shouldLoopAgain {
                shouldLoopAgain = true
            }
        }
    }
}
var time = NSDate().timeIntervalSinceDate(startDate)
Objective-C with primitive booleans and integers:
NSMutableArray *strings = [NSMutableArray arrayWithArray:[strFile componentsSeparatedByString:@"\n"]];
NSDate *startDate = [NSDate date];
BOOL shouldLoopAgain = YES;
int numberOfLoops = 0;
while (shouldLoopAgain) {
    shouldLoopAgain = NO;
    ++numberOfLoops;
    for (int i = 0; i < strings.count-1; ++i) {
        if ([strings[i] compare:strings[i+1]] == NSOrderedDescending) {
            NSString *temp = strings[i];
            strings[i] = strings[i+1];
            strings[i+1] = temp;
            if (!shouldLoopAgain) {
                shouldLoopAgain = YES;
            }
        }
    }
}
NSTimeInterval time = [[NSDate date] timeIntervalSinceDate:startDate];
Objective-C with NSNumber for booleans and integers:
NSDate *startDate = [NSDate date];
NSNumber *shouldLoopAgain = @YES;
NSNumber *numberOfLoops = @(0);
while ([shouldLoopAgain boolValue]) {
    shouldLoopAgain = @NO;
    numberOfLoops = @([numberOfLoops intValue]+1);
    for (NSNumber *i = @(0); [i intValue] < strings.count-1; i = @([i intValue]+1)) {
        if ([strings[[i intValue]] compare:strings[[i intValue]+1]] == NSOrderedDescending) {
            NSString *temp = strings[[i intValue]];
            strings[[i intValue]] = strings[[i intValue]+1];
            strings[[i intValue]+1] = temp;
            if (![shouldLoopAgain boolValue]) {
                shouldLoopAgain = @YES;
            }
        }
    }
}
NSTimeInterval time = [[NSDate date] timeIntervalSinceDate:startDate];
By default, the compiler optimisation level is set to None [-Onone] in debug mode.
Changing this to Fastest [-O] (as in a release build) gives dramatically better results, in line with the numbers quoted above.
Try compiling with optimizations enabled. If that doesn't change the situation, file a bug with Apple. Swift is still in beta, so there may be rough spots.
I don't think that as of today you can run these tests and determine with any certainty whether Swift 1.0 is faster or slower than Objective-C. The entire Swift language is still under development; syntax and the way aspects of the language are implemented are changing. This is clear from the changes in the language between Xcode 6 Betas 2 and 3, and if you look on the Apple Dev Forums you can see the Apple guys who are working on the language clearly saying that stuff isn't complete or optimised, and that it won't be until the Swift 1.0 release, which should happen alongside the public release of Xcode 6.
So, I'm not saying there isn't value in doing these tests right now, but until Swift 1.0 is finalised we can't say anything conclusive about final performance.
Take a look at https://softwareengineering.stackexchange.com/questions/242816/how-can-swift-be-so-much-faster-than-objective-c-in-these-comparisons and the http://www.splasmata.com/?p=2798 tutorial; you may find an answer there. But the main point is that the Swift language is still in beta, and Apple does not confidently announce that Swift is faster than Objective-C in all cases - its claims are based on average performance. My view is that even if Objective-C is faster in some cases, that doesn't mean Swift's overall performance is slower. Let's just give Apple more time.

How to display the time whenever value of the current minute changes (with Cocoa)?

Ok this one really has me stumped. The challenge seems simple but has a few restrictions that are really throwing me off.
Gist of the program:
(1) When the minute changes the application should output the current time with the format HH:mm:ss.
(2) When the second changes the application should output a character for every second and a different character for every tenth second (i.e. ".........|.........|"). Here it is important to note that I cannot just count seconds; rather, the program must know the value of the second of the current time and output the corresponding character only if it is divisible by 10.
Restrictions:
(1) The program can have 2 classes -- one for the minute display and one for the second display.
(2) Each class can contain only one void method that takes a timer as a parameter.
(3) The main function will create an object of each class and 2 NSTimers. The first timer must have its interval set at 0.5 and must target the minutes object. The second timer must have its interval set at 0.1 and must target the seconds object.
I would love to just set the first timer to fire every minute and the second timer to fire every second, but I am not allowed to tamper with that. This is as far as I got before I was stumped:
main.m:
MinuteDisplay *minutes = [[MinuteDisplay alloc] init];
__unused NSTimer *minutesTimer =
    [NSTimer scheduledTimerWithTimeInterval:0.5
                                     target:minutes
                                   selector:@selector(minuteChange:)
                                   userInfo:nil
                                    repeats:YES];
[[NSRunLoop currentRunLoop] run];
MinuteDisplay.m:
- (void)minuteChange:(NSTimer *)timer
{
    NSDateFormatter *dateFormatter = [[NSDateFormatter alloc] init];
    [dateFormatter setDateFormat:@"HH:mm:ss"];
    NSDateFormatter *testFormatter = [[NSDateFormatter alloc] init];
    [testFormatter setDateFormat:@"ss"];
    NSDate *now = [NSDate date];
    NSString *nowString = [dateFormatter stringFromDate:now];
    NSString *testString = [testFormatter stringFromDate:now];
    if ([testString isEqual:@"00"]) {
        NSLog(@"%@", nowString);
    }
}
Example output:
01:09:00
//.5 second interval
01:09:00
I haven't even bothered with the second half of this challenge yet because I got stumped here. This technically works, but I know I'm doing it wrong, since the output isn't dependent upon the program knowing the value of the minute has changed. Also, because the timer is set to fire every .5 seconds, I get two lines of output every minute instead of just one (which does not satisfy the specs).
Can anyone suggest where to go from here? This one really has my mind boggled.
If you cannot modify the timer interval, the alternative is to use an instance variable in your MinuteDisplay class, for example displayString, and modify the minuteChange: method as follows:
- (void)minuteChange:(NSTimer *)timer
{
    NSDateFormatter *dateFormatter = [[NSDateFormatter alloc] init];
    [dateFormatter setDateFormat:@"HH:mm:ss"];
    NSDateFormatter *testFormatter = [[NSDateFormatter alloc] init];
    [testFormatter setDateFormat:@"ss"];
    NSDate *now = [NSDate date];
    NSString *nowString = [dateFormatter stringFromDate:now];
    NSString *testString = [testFormatter stringFromDate:now];
    if ([testString isEqual:@"00"]) {
        if (![nowString isEqualToString:displayString]) {
            displayString = nowString;
            NSLog(@"*************** %@", displayString);
        }
    }
}
This will make sure that it is logged only once a minute.
Working minuteChange method:
- (void)minuteChange:(NSTimer *)timer
{
    // Create the formatters once, before they are first used.
    if (!self.output) {
        self.output = [[NSDateFormatter alloc] init];
        [self.output setDateFormat:@"HH:mm:ss"];
        self.seconds = [[NSDateFormatter alloc] init];
        [self.seconds setDateFormat:@"ss"];
    }
    NSDate *now = [NSDate date];
    if (!self.lastTime) {
        self.lastTime = now;
    }
    // Compare via -timeIntervalSinceDate:; the > operator would compare
    // object pointers, not the dates themselves.
    if ([now timeIntervalSinceDate:self.lastTime] > 0) {
        if ([[self.seconds stringFromDate:now] isEqualToString:@"00"]) {
            printf("\n%s", [[self.output stringFromDate:now] UTF8String]);
            self.lastTime = now;
        }
    }
}
Working secondChange method:
- (void)secondChange:(NSTimer *)timer
{
    long secondsSince1970 = time(NULL); // from <time.h>
    if (!self.lastSecond) {
        self.lastSecond = secondsSince1970;
    }
    if (secondsSince1970 > self.lastSecond) {
        if (secondsSince1970 % 10 == 0) {
            printf("|");
        } else {
            printf(".");
        }
        self.lastSecond = secondsSince1970;
    }
}
Changes to minuteChange:
The most important change was the addition of the lastTime ivar to the class. Now it's simple...if lastTime is null, assign it the value of the current date. If current time is greater than last time, test if it's a new minute and then output the time if it is. ONLY update lastTime if a new minute was output (this ensures that the time is only displayed once per minute).
secondChange:
Took a different approach with this one and now I'm wishing I had gone with straight C the first time around. This one is so simple! Check if the current second is greater than the last second. If so, check if it's divisible by 10. If so, print "|". If not, print ".". Super easy and actually more accurate than setting the timer to fire every second. Sheesh.
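For completeness, here is how main might wire up both timers under the stated restrictions (a sketch; the SecondDisplay class name and its secondChange: selector are assumed to mirror the MinuteDisplay conventions above):
MinuteDisplay *minutes = [[MinuteDisplay alloc] init];
SecondDisplay *seconds = [[SecondDisplay alloc] init];
__unused NSTimer *minutesTimer =
    [NSTimer scheduledTimerWithTimeInterval:0.5
                                     target:minutes
                                   selector:@selector(minuteChange:)
                                   userInfo:nil
                                    repeats:YES];
__unused NSTimer *secondsTimer =
    [NSTimer scheduledTimerWithTimeInterval:0.1
                                     target:seconds
                                   selector:@selector(secondChange:)
                                   userInfo:nil
                                    repeats:YES];
// Keep the run loop alive so the timers can fire.
[[NSRunLoop currentRunLoop] run];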

Why is Objective-C sqrtf() function faster than a function that just returns a value?

I have these functions in Objective-C:
- (void)emptyFunction
{
    NSTimeInterval startTime = [[NSDate date] timeIntervalSinceReferenceDate];
    float b;
    for (int i = 0; i < 1000000; i++) {
        b = [self returnNr:i];
    }
    NSTimeInterval endTime = [[NSDate date] timeIntervalSinceReferenceDate];
    double elapsedTime = endTime - startTime;
    NSLog(@"1. %f", elapsedTime);
}
- (float)returnNr:(float)number
{
    return number;
}
and
- (void)sqrtFunction
{
    NSTimeInterval startTime = [[NSDate date] timeIntervalSinceReferenceDate];
    float b;
    for (int i = 0; i < 1000000; i++) {
        b = sqrtf(i);
    }
    NSTimeInterval endTime = [[NSDate date] timeIntervalSinceReferenceDate];
    double elapsedTime = endTime - startTime;
    NSLog(@"2. %f", elapsedTime);
}
When I call them, in any order, it prints in console the following:
2014-01-13 12:23:00.458 RapidTest[443:70b] 1. 0.011970
2014-01-13 12:23:00.446 RapidTest[443:70b] 2. 0.006308
How is this happening? How can the sqrtf() function be twice as fast as a function that just returns a value? I know sqrtf() works on bits with assembly language and such, but faster than just a return? How is it possible?
Calling [self returnNr:i] is not the same as simply calling a C function. Instead you're sending a message to self, which gets translated to the equivalent in C:
objc_msgSend(self, @selector(returnNr:), i);
This will eventually call your returnNr: implementation, but there's some overhead involved. For more details on what's going on in objc_msgSend see objc_msgSend tour or Let's build objc_msgSend
[edit]
Also see An Illustrated History of objc_msgSend, which shows how the implementation changed over time. Executing the code from your Q will have resulted in different results on different platform versions because of improvements / trade-offs made during the evolution of objc_msgSend.
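One way (not from the original answer) to see how much of the gap is dispatch overhead is to resolve the method's IMP once and call it as a plain C function pointer, bypassing objc_msgSend on every iteration:
// Resolve the implementation pointer once, outside the loop.
typedef float (*ReturnNrIMP)(id, SEL, float);
SEL sel = @selector(returnNr:);
ReturnNrIMP returnNrFn = (ReturnNrIMP)[self methodForSelector:sel];

float b;
for (int i = 0; i < 1000000; i++) {
    b = returnNrFn(self, sel, i); // direct call, no dynamic dispatch
}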
Your benchmark is flawed.
First, in both cases a compiler may optimize your loop as follows:
for (int i = 0; i < 1000000-1; i++) {
    [self returnNr:i];
}
float b = [self returnNr:1000000-1];
respectively:
for (int i = 0; i < 1000000-1; i++) {
    sqrtf(i);
}
float b = sqrtf(1000000-1);
Then, IFF the compiler can deduce that the statement sqrtf(i) has no side effect (other than returning a value), it may further optimize as follows:
float b = sqrtf(1000000-1);
It's very likely that clang will apply this optimization, though it's implementation dependent.
See also: Do C and C++ optimizers typically know which functions have no side effects?
In the case of Objective-C method invocations, a compiler has far fewer optimization opportunities, and it will very likely always assume that a method call might have a side effect and must always be executed. Thus, the second optimization will likely not be applied in an optimized build.
Additionally, your approach to measuring the elapsed time is not nearly accurate enough. You should use the Mach timer to get precise absolute times, and you should execute a number of "runs" and take the minimum of the runs.
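A minimal sketch of that idea, reusing the mach_absolute_time pattern from earlier on this page (the run count of 10 is an arbitrary choice):
#import <mach/mach_time.h>

static double MinRunTime(int runs, void (^block)(void)) {
    mach_timebase_info_data_t info;
    if (mach_timebase_info(&info) != KERN_SUCCESS) return -1.0;

    uint64_t best = UINT64_MAX;
    for (int r = 0; r < runs; r++) {
        uint64_t start = mach_absolute_time();
        block();
        uint64_t elapsed = mach_absolute_time() - start;
        if (elapsed < best) best = elapsed;
    }
    // Convert Mach ticks to seconds.
    return (double)(best * info.numer / info.denom) / NSEC_PER_SEC;
}
For example: double t = MinRunTime(10, ^{ [self sqrtFunction]; });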
Your Obj-C method
- (float)returnNr:(float)number
{
    return number;
}
first gets compiled to a C function and is then invoked through the Objective-C runtime's message dispatch, whereas sqrtf() is called directly as a plain C function.
Hence plain C function calls will be faster than Objective-C method calls.

Frames per second timecode display with NSTimer

I am working on an iPhone/iPad app that needs to display a running timecode clock. I have gotten it to display the correct hours, minutes, and seconds with no problem using this code:
- (void)viewDidLoad {
    // Start the timer here so it begins when the view loads.
    runTimer = [NSTimer scheduledTimerWithTimeInterval:0.01
                                                target:self
                                              selector:@selector(updateDisplay)
                                              userInfo:nil
                                               repeats:YES];
}
- (void)updateDisplay {
    NSDateFormatter *formatter = [[NSDateFormatter alloc] init];
    NSDate *date = [NSDate date];
    // Display each hour, minute, second and frame.
    [formatter setDateFormat:@"hh"];
    [timecodeHourLabel setText:[formatter stringFromDate:date]];
    [formatter setDateFormat:@"mm"];
    [timecodeMinuteLabel setText:[formatter stringFromDate:date]];
    [formatter setDateFormat:@"ss"];
    [timecodeSecondLabel setText:[formatter stringFromDate:date]];
}
The issue is when I need to display frames per second. I know that calculating 1/24 * 1000 gives me how many milliseconds are in one frame. I just don't know how to make the NSDate and NSTimer functions work with this code and allow it to update a UILabel as quickly as needed for running timecode.
Any suggestions?
If your timer is running with a period of 0.01 sec, then its frequency is 100 frames/sec (well, it's better to say it makes 100 function calls per second). But if you need to display the precise time period (because sometimes the timer may be delayed), then you need to store the previous call date and then use:
NSDate *new_date = [NSDate date];
double freq = 1.0 / [new_date timeIntervalSinceDate:old_date];
[old_date release];            // manual retain/release (pre-ARC code)
old_date = [new_date retain];
Here's a Processing/Java equivalent that's fairly straightforward to repurpose.
String timecodeString(int fps) {
    float ms = millis();
    return String.format("%02d:%02d:%02d+%02d",
                         floor(ms/1000/60/60),    // H
                         floor((ms/1000/60)%60),  // M (edit: added %60)
                         floor(ms/1000%60),       // S
                         floor(ms/1000*fps%fps)); // F
}