I am just comparing the performance of Swift and Objective-C. For that, I am using NSDate to measure the time taken, but I am getting a big difference between Swift and Objective-C. I just ran an empty for loop 100,000 times. Here is my code.
In Objective-C:
NSDate * start = [NSDate date];
for (int i=0; i<=100000; i++) {
}
NSDate * end = [NSDate date];
double timeTaken = [end timeIntervalSinceDate:start] * 1000;
timeTaken is 0.24 milliseconds
In Swift:
var start = NSDate()
for i in 0...100000
{
}
var end = NSDate()
var timeTaken = end.timeIntervalSinceDate(start) * 1000
timeTaken is 74 milliseconds in Swift, which is a big difference when compared to Objective-C.
Am I doing anything wrong here in the measurement?
What you are doing is pure nonsense. It doesn't matter what the performance of this loop is, because it doesn't happen in real code. The essential difference is that in Swift, the loop will do overflow checks at each step, which are a required side effect of the language definition. In Objective-C, that's not the case.
At the very least you need to test code that actually does something meaningful.
I've done some real tests, and the results were: 1. The speed of Swift and plain C for low-level operations is comparable. 2. When an overflow happens, Swift will kill the program so you notice the overflow, while C and Objective-C will silently give you nonsense results. Try this:
var i: Int = 0
var s: Double = 0.0
for i in 1 ... 10_000_000_000 { s += Double(i * i) }
Swift will crash. (Anyone who thinks that's bad hasn't got a clue about programming). The same thing in Objective-C will produce a nonsense result. Replace the loop with
for i in 1 ... 10_000_000_000 { s += Double(i) * Double(i) }
and both run at comparable speed.
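For reference, here is what the two behaviours look like from the C side. This is a minimal sketch of my own, not from the answer; the checked-arithmetic builtin is a clang/gcc extension, not something either language requires.
#include <stdlib.h>
// Plain C: signed overflow silently wraps (formally it is undefined
// behaviour); this is the "nonsense result" described above.
long long i = 10000000000LL;             // 10^10
double bad = (double)(i * i);            // i*i overflows long long
double good = (double)i * (double)i;     // done in double: 1e20
// Emulating Swift's checked arithmetic with a clang builtin:
long long product;
if (__builtin_mul_overflow(i, i, &product)) {
    abort(); // trap on overflow, as Swift does
}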
I did some tests with a sort function that used a native Swift array and Swift strings, compared to an Objective-C mutable array and NSString.
The function ordered 1,000 strings; it performed roughly a million string comparisons and shuffled the array's elements around as it went.
Results were the following:
Objective-C (using primitive integers and booleans): 0.32 seconds
Objective-C (using NSNumber for integers and booleans): 0.45 seconds
Swift in debug mode: 13 seconds :)
Swift with Optimization level (Fastest [-O]): 1.32 seconds
Swift with Optimization level (Fastest, Unchecked [-Ofast]): 0.67 seconds
It seems that in fastest mode Swift came pretty close to Objective-C, but it is still not faster, as Apple claimed.
This is the sort code I used:
Swift:
var strFile = String.stringWithContentsOfFile("1000strings.txt", encoding: NSUTF8StringEncoding, error: nil)
var strings:String[] = strFile!.componentsSeparatedByString("\n")
var startDate = NSDate()
var shouldLoopAgain = true
var numberOfLoops = 0
while shouldLoopAgain {
shouldLoopAgain = false
++numberOfLoops
for var i = 0; i < strings.count-1; ++i {
if (strings[i] > strings[i+1]) {
let temp = strings[i]
strings[i] = strings[i+1]
strings[i+1] = temp;
if !shouldLoopAgain {
shouldLoopAgain = true
}
}
}
}
var time = NSDate().timeIntervalSinceDate(startDate)
Objective-C with primitive booleans and integers:
NSString *strFile = [NSString stringWithContentsOfFile:@"1000strings.txt" encoding:NSUTF8StringEncoding error:nil];
NSMutableArray *strings = [NSMutableArray arrayWithArray:[strFile componentsSeparatedByString:@"\n"]];
NSDate *startDate = [NSDate date];
BOOL shouldLoopAgain = YES;
int numberOfLoops = 0;
while (shouldLoopAgain) {
shouldLoopAgain = NO;
++numberOfLoops;
for (int i = 0; i < strings.count-1; ++i) {
if ([strings[i] compare:strings[i+1]] == NSOrderedDescending) {
NSString *temp = strings[i];
strings[i] = strings[i+1];
strings[i+1] = temp;
if (!shouldLoopAgain) {
shouldLoopAgain = YES;
}
}
}
}
NSTimeInterval time = [[NSDate date] timeIntervalSinceDate:startDate];
Objective-C with NSNumber for booleans and integers:
NSDate *startDate = [NSDate date];
NSNumber *shouldLoopAgain = @YES;
NSNumber *numberOfLoops = @(0);
while ([shouldLoopAgain boolValue]) {
shouldLoopAgain = @NO;
numberOfLoops = @([numberOfLoops intValue]+1);
for (NSNumber *i = @(0); [i intValue] < strings.count-1; i = @([i intValue]+1)) {
if ([strings[[i intValue]] compare:strings[[i intValue]+1]] == NSOrderedDescending) {
NSString *temp = strings[[i intValue]];
strings[[i intValue]] = strings[[i intValue]+1];
strings[[i intValue]+1] = temp;
if (![shouldLoopAgain boolValue]) {
shouldLoopAgain = @YES;
}
}
}
}
NSTimeInterval time = [[NSDate date] timeIntervalSinceDate:startDate];
By default, the compiler optimisation level is set to None [-Onone] in debug mode. Changing this to Fastest [-O] (as in a release build) is what produces the optimised timings listed above.
Try compiling with optimizations enabled. If that doesn't change the situation, file a bug with Apple. Swift is still in beta, so there may be rough spots.
I don't think that as of today you can run these tests and determine with any certainty whether Swift 1.0 is faster or slower than Objective-C. The entire Swift language is still under development; the syntax and the way aspects of the language are implemented are changing. This is clear from the changes in the language between Xcode 6 Betas 2 and 3, and if you look on the Apple Dev Forums you can see the Apple engineers who are working on the language clearly saying that stuff isn't complete or optimised, and that it won't be until the Swift 1.0 release, which should happen alongside the public release of Xcode 6.
So, I'm not saying there isn't value in doing these tests right now, but until Swift 1.0 is finalised we can't say anything conclusive about final performance.
Take a look at https://softwareengineering.stackexchange.com/questions/242816/how-can-swift-be-so-much-faster-than-objective-c-in-these-comparisons and the http://www.splasmata.com/?p=2798 tutorial; maybe you can find your answer there. But the main point is that the Swift language is still in beta, and Apple has not confidently announced that Swift is faster than Objective-C in all cases; the claim is based on average performance. My view is that if Objective-C is faster than Swift in some cases, it doesn't mean that Swift's overall performance is slower. We just need to give Apple more time.
Related
I have these functions in Objective-C:
-(void)emptyFunction
{
NSTimeInterval startTime = [[NSDate date] timeIntervalSinceReferenceDate];
float b;
for (int i=0; i<1000000; i++) {
b = [self returnNr:i];
}
NSTimeInterval endTime = [[NSDate date] timeIntervalSinceReferenceDate];
double elapsedTime = endTime - startTime;
NSLog(#"1. %f", elapsedTime);
}
-(float)returnNr:(float)number
{
return number;
}
and
-(void)sqrtFunction
{
NSTimeInterval startTime = [[NSDate date] timeIntervalSinceReferenceDate];
float b;
for (int i=0; i<1000000; i++) {
b = sqrtf(i);
}
NSTimeInterval endTime = [[NSDate date] timeIntervalSinceReferenceDate];
double elapsedTime = endTime - startTime;
NSLog(#"2. %f", elapsedTime);
}
When I call them, in any order, it prints in console the following:
2014-01-13 12:23:00.458 RapidTest[443:70b] 1. 0.011970
2014-01-13 12:23:00.446 RapidTest[443:70b] 2. 0.006308
How is this happening? How can the sqrtf() function be twice as fast as a function that just returns a value? I know sqrtf() works on bits with assembly language and such, but faster than a plain return? How is that possible?
Calling [self returnNr:i] is not the same as simply calling a C function. Instead you're sending a message to self, which gets translated to the equivalent in C:
objc_msgSend(self, @selector(returnNr:), i);
This will eventually call your returnNr: implementation, but there's some overhead involved. For more details on what's going on in objc_msgSend see objc_msgSend tour or Let's build objc_msgSend
[edit]
Also see An Illustrated History of objc_msgSend, which shows how the implementation changed over time. Executing the code from your Q will have resulted in different results on different platform versions because of improvements / trade-offs made during the evolution of objc_msgSend.
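To make the dispatch overhead concrete, here is a minimal sketch of my own (not from the linked articles) that looks the method implementation up once and then calls it as a plain C function pointer, bypassing objc_msgSend inside the loop:
// Fetch the IMP once, then call it directly; one dynamic lookup
// happens instead of one million message sends.
SEL sel = @selector(returnNr:);
typedef float (*ReturnNrFn)(id, SEL, float);
ReturnNrFn returnNr = (ReturnNrFn)[self methodForSelector:sel];
float b = 0.0f;
for (int i = 0; i < 1000000; i++) {
    b = returnNr(self, sel, i); // plain C call, no message send
}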
Your benchmark is flawed.
First, in both cases a compiler may optimize your loop as follows:
int i;
for (i = 0; i < 1000000-1; i++) {
[self returnNr:i];
}
float b = [self returnNr:i];
respectively:
int i;
for (i = 0; i < 1000000-1; i++) {
sqrtf(i);
}
float b = sqrtf(i);
Then, IFF the compiler can deduce that the statement sqrtf(i) has no side effect (other than returning a value), it may further optimize as follows:
float b = sqrtf(1000000-1);
It's very likely that clang will apply this optimization, though it's implementation dependent.
See also: Do C and C++ optimizers typically know which functions have no side effects?
In the case of Objective-C method invocations the compiler has far fewer optimization opportunities: it will very likely always assume that a method call might have side effects and therefore must always be executed. Thus, the second optimization will likely not be applied even in an optimized build.
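As an aside (my addition, not part of the answer): in plain C you can make the no-side-effects guarantee explicit, so the optimizer is allowed to fold repeated calls:
// clang/gcc extension: 'const' promises the result depends only on
// its arguments and there are no side effects, so the compiler may
// hoist the call out of a loop or fold it away entirely.
float mySqrt(float x) __attribute__((const));
// With that declaration visible, a loop like
//     for (int i = 0; i < N; i++) { b = mySqrt(i); }
// may legally be reduced to a single call with i == N-1.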
Additionally, your approach to measuring the elapsed time is not nearly accurate enough. You should use the mach timer to get a precise absolute time, and you need to execute a number of "runs" and take the minimum of the runs.
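A minimal sketch of that measurement discipline (my own illustration, assuming mach_absolute_time as the clock source):
#include <mach/mach_time.h>
#include <stdint.h>
// Repeat the workload and keep the fastest run; the minimum filters
// out scheduler noise and cache-cold first runs.
mach_timebase_info_data_t info;
mach_timebase_info(&info);
uint64_t best = UINT64_MAX;
for (int run = 0; run < 10; run++) {
    uint64_t t0 = mach_absolute_time();
    // ... workload under test ...
    uint64_t dt = mach_absolute_time() - t0;
    if (dt < best) best = dt;
}
double seconds = (double)(best * info.numer / info.denom) / 1e9;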
Your Obj-C method
-(float)returnNr:(float)number
{
return number;
}
is compiled to a C function, but calling it goes through Objective-C's dynamic message dispatch (objc_msgSend), whereas sqrtf() is a plain C function that is called directly.
Hence direct C function calls will be faster than Objective-C method invocations.
I'm trying to create a binary-to-decimal calculator and I am having trouble doing any sort of conversion that will actually work. First off, I'd like to introduce myself as a complete novice to Objective-C and to programming in general. As a result many concepts will appear difficult to me, so I am mostly looking for the easiest way to understand and not the most efficient way of doing this.
I have at the moment a calculator that will accept input and display this in a label. This part is working fine and I have no issues with it. The variable that the input is stored on is _display = [[NSMutableString stringWithCapacity:20] retain];
This is working perfectly and I am able to modify the data accordingly. What I would like to do is display an NSString of the conversion in another label. At the moment I have tried a few solutions and have not had any decent results; this is the latest attempt:
- (NSMutableString *)displayValue2:(long long)element
{
_str = [[NSMutableString alloc] initWithString:@""];
if(element > 0){
for(NSInteger numberCopy = element; numberCopy > 0; numberCopy >>= 1)
{
[_str insertString:((numberCopy & 1) ? @"1" : @"0") atIndex:0];
}
}
else if(element == 0)
{
[_str insertString:#"0" atIndex:0];
}
else
{
element = element * (-1);
_str = [self displayValue2:element];
[_str insertString:#"0" atIndex:0];
NSLog(#"Prima for: %#",_str);
for(int i=0; i<[_str length];i++)
_str = _display;
NSLog(#"Dopo for: %#",_str);
}
return _str;
}
Within my View Controller I have a convert button set up; when this is pressed I want to set the second display field to the decimal equivalent. The plumbing works: if I set displayValue2 to return a string of my choosing, it is displayed. All I need is help getting this conversion to work. At the moment this bit of code has led to "incomplete implementation" being displayed at the top of my class. Please help, and cheers to those who take time out to help.
So basically all you are really looking for is a way to convert binary numbers into decimal numbers, correct? Another way to think of this problem is changing a number's base from base 2 to base 10. I have used functions like this before in my projects:
+ (NSNumber *)convertBinaryStringToDecimalNumber:(NSString *)binaryString {
NSUInteger totalValue = 0;
for (int i = 0; i < binaryString.length; i++) {
totalValue += (int)([binaryString characterAtIndex:(binaryString.length - 1 - i)] - 48) * pow(2, i); // '0' is ASCII 48, so this maps the digit character to 0 or 1
}
return @(totalValue);
}
Obviously this is accessing the binary as a string representation. This works well since you can easily access each digit of a string, which is harder to do with a plain number. You could also easily change the return type from an NSNumber to a string. This also works for your element == 0 scenario.
// original number wrapped as a string
NSString *stringValue = [NSString stringWithFormat:@"%d", 11001];
// convert the value and get an NSNumber back
NSNumber *result = [self.class convertBinaryStringToDecimalNumber:stringValue];
// prints 25
NSLog(#"%#", result);
If I misunderstood something please clarify, if you do not understand the code let me know. Also, this may not be the most efficient but it is simple and clean.
I also strongly agree with Hot Licks' comment. If you are truly interested in learning well and want to become a well-rounded programmer, there are a few basics you should learn first (I learned with Java and am glad that I did).
So, I'm trying to do a simple calculation over previously recorded audio (from an AVAsset) in order to create a waveform visual. I currently do this by averaging a set of samples, the size of which is determined by dividing the audio file size by the resolution I want for the waveform.
This all works fine, except for one problem... it's too slow. Running on a 3GS, processing an audio file takes about 3% of the time it takes to play it, which is way too slow (for example, a 1-hour audio file takes about 2.5 minutes to process). I've tried to optimize the method as much as possible but it's not working. I'll post the code I use to process the file. Maybe someone will be able to help with that, but what I'm really looking for is a way to process the file without having to go over every single byte. So, say given a resolution of 2,000, I'd want to access the file and take a sample at each of the 2,000 points. I think this would be a lot quicker, especially if the file is larger. But the only way I know to get the raw data is to access the audio file in a linear manner. Any ideas? Here's the code I use to process the file (note, all class vars begin with '_'):
So I've completely changed this question. I belatedly realized that AVAssetReader has a timeRange property that's used for "seeking", which is exactly what I was looking for (see the original question above). Furthermore, the question has been asked and answered (I just didn't find it before) and I don't want to duplicate questions. However, I'm still having a problem. My app freezes for a while and then eventually crashes whenever I try to copyNextSampleBuffer. I'm not sure what's going on. I don't seem to be in any kind of recursion loop; it just never returns from the function call. Checking the logs gives me this error:
Exception Type: 00000020
Exception Codes: 0x8badf00d
Highlighted Thread: 0
Application Specific Information:
App[10570] has active assertions beyond permitted time:
{(
<SBProcessAssertion: 0xddd9300> identifier: Suspending process: App[10570] permittedBackgroundDuration: 10.000000 reason: suspend owner pid:52 preventSuspend preventThrottleDownCPU preventThrottleDownUI
)}
I ran a time profiler on the app and, yep, it just sits there with a minimal amount of processing. Can't quite figure out what's going on. It's important to note that this doesn't occur if I don't set the timeRange property of AVAssetReader. I've checked, and the values for timeRange are valid, but setting it is causing the problem for some reason. Here's my processing code:
- (void) processSampleData{
if (!_asset || CMTimeGetSeconds(_asset.duration) <= 0) return;
NSError *error = nil;
AVAssetTrack *songTrack = _asset.tracks.firstObject;
if (!songTrack) return;
NSDictionary *outputSettingsDict = [[NSDictionary alloc] initWithObjectsAndKeys:
[NSNumber numberWithInt:kAudioFormatLinearPCM],AVFormatIDKey,
[NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
[NSNumber numberWithBool:NO],AVLinearPCMIsBigEndianKey,
[NSNumber numberWithBool:NO],AVLinearPCMIsFloatKey,
[NSNumber numberWithBool:NO],AVLinearPCMIsNonInterleaved,
nil];
UInt32 sampleRate = 44100.0;
_channelCount = 1;
NSArray *formatDesc = songTrack.formatDescriptions;
for(unsigned int i = 0; i < [formatDesc count]; ++i) {
CMAudioFormatDescriptionRef item = (__bridge_retained CMAudioFormatDescriptionRef)[formatDesc objectAtIndex:i];
const AudioStreamBasicDescription* fmtDesc = CMAudioFormatDescriptionGetStreamBasicDescription (item);
if(fmtDesc ) {
sampleRate = fmtDesc->mSampleRate;
_channelCount = fmtDesc->mChannelsPerFrame;
}
CFRelease(item);
}
UInt32 bytesPerSample = 2 * _channelCount; //Bytes are hard coded by AVLinearPCMBitDepthKey
_normalizedMax = 0;
_sampledData = [[NSMutableData alloc] init];
SInt16 *channels[_channelCount];
char *sampleRef;
SInt16 *samples;
NSInteger sampleTally = 0;
SInt16 cTotal;
_sampleCount = DefaultSampleSize * [UIScreen mainScreen].scale;
NSTimeInterval intervalBetweenSamples = _asset.duration.value / _sampleCount;
NSTimeInterval sampleSize = fmax(100, intervalBetweenSamples / _sampleCount);
double assetTimeScale = _asset.duration.timescale;
CMTimeRange timeRange = CMTimeRangeMake(CMTimeMake(0, assetTimeScale), CMTimeMake(sampleSize, assetTimeScale));
SInt16 totals[_channelCount];
@autoreleasepool {
for (int i = 0; i < _sampleCount; i++) {
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:_asset error:&error];
AVAssetReaderTrackOutput *trackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:songTrack outputSettings:outputSettingsDict];
[reader addOutput:trackOutput];
reader.timeRange = timeRange;
[reader startReading];
while (reader.status == AVAssetReaderStatusReading) {
CMSampleBufferRef sampleBufferRef = [trackOutput copyNextSampleBuffer];
if (sampleBufferRef){
CMBlockBufferRef blockBufferRef = CMSampleBufferGetDataBuffer(sampleBufferRef);
size_t length = CMBlockBufferGetDataLength(blockBufferRef);
int sampleCount = length / bytesPerSample;
for (int i = 0; i < sampleCount ; i += _channelCount) {
CMBlockBufferAccessDataBytes(blockBufferRef, i * bytesPerSample, _channelCount, channels, &sampleRef);
samples = (SInt16 *)sampleRef;
for (int channel = 0; channel < _channelCount; channel++)
totals[channel] += samples[channel];
sampleTally++;
}
CMSampleBufferInvalidate(sampleBufferRef);
CFRelease(sampleBufferRef);
}
}
for (int i = 0; i < _channelCount; i++){
cTotal = abs(totals[i] / sampleTally);
if (cTotal > _normalizedMax) _normalizedMax = cTotal;
[_sampledData appendBytes:&cTotal length:sizeof(cTotal)];
totals[i] = 0;
}
sampleTally = 0;
timeRange.start = CMTimeMake((intervalBetweenSamples * (i + 1)) - sampleSize, assetTimeScale); //Take the sample just before the interval
}
}
_assetNeedsProcessing = NO;
}
I finally figured out why. Apparently there is some sort of 'minimum' duration you can specify for the timeRange of an AVAssetReader. I'm not sure what exactly that minimum is, somewhere above 1,000 but less than 5,000. It's possible that the minimum changes with the duration of the asset...honestly I'm not sure. Instead, I kept the duration (which is infinity) the same and simply changed the start time. Instead of processing the whole sample, I copy only one buffer block, process that and then seek to the next time. I'm still having trouble with the code, but I'll post that as another question if I can't figure it out.
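A rough sketch of that fix as I understand it (my own reconstruction, reusing the variable names from the code above; treat it as illustrative, not the final code):
// Per sample point: seek by moving only the start of the reader's
// timeRange, keeping the (infinite) duration, and read one buffer.
CMTime start = CMTimeMake(intervalBetweenSamples * i, assetTimeScale);
reader.timeRange = CMTimeRangeMake(start, kCMTimePositiveInfinity);
[reader startReading];
CMSampleBufferRef buf = [trackOutput copyNextSampleBuffer];
if (buf) {
    // ... average the samples in this single buffer ...
    CMSampleBufferInvalidate(buf);
    CFRelease(buf);
}
[reader cancelReading]; // done with this seek position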
Is there a way in Objective-C on iOS to spell out an integer number as text?
For example, if I have
NSInteger someNumber = 11242043;
I would like to know some function so that would return a string similar to "eleven million two hundred forty two thousand forty three."
Apple has a lot of handy formatting functionality built in for many data types. Called "formatters", they can convert objects to and from string representations.
For your case you will be using NSNumberFormatter, but if you have an integer you need to convert it to an NSNumber first. See the example below.
NSInteger anInt = 11242043;
NSString *wordNumber;
//convert to words
NSNumber *numberValue = [NSNumber numberWithInteger:anInt]; //needs to be an NSNumber!
NSNumberFormatter *numberFormatter = [[NSNumberFormatter alloc] init];
[numberFormatter setNumberStyle:NSNumberFormatterSpellOutStyle];
wordNumber = [numberFormatter stringFromNumber:numberValue];
NSLog(@"Answer: %@", wordNumber);
// Answer: eleven million two hundred forty-two thousand forty-three
If you'd like to learn more about formatters:
https://developer.apple.com/library/content/documentation/General/Conceptual/Devpedia-CocoaApp/Formatter.html
The power of extensions, for Swift 5:
import Foundation
public extension Int {
var asWord: String? {
let numberValue = NSNumber(value: self)
let formatter = NumberFormatter()
formatter.numberStyle = .spellOut
return formatter.string(from: numberValue)
}
}
var value = 2
if let valueAsWord = value.asWord {
//do something with your worded number here
print("value worded = \(valueAsWord)")
} else {
print("could not word value :(")
}
Note: edited to protect against formatter.string(from:) returning nil, which is highly unlikely, but still possible.
Output:
value worded = two
From the docs:
NSNumberFormatterSpellOutStyle
Specifies a spell-out format; for example, “23” becomes “twenty-three”.
Available in iOS 2.0 and later.
Declared in NSNumberFormatter.h.
As your question isn't very specific, I won't post full-fledged source code either.
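Still, a couple of lines are enough to see the style in action, and to see that the output follows the formatter's locale (my example; the locale switching is not in the docs excerpt above):
NSNumberFormatter *f = [[NSNumberFormatter alloc] init];
f.numberStyle = NSNumberFormatterSpellOutStyle;
f.locale = [[NSLocale alloc] initWithLocaleIdentifier:@"en_US"];
NSLog(@"%@", [f stringFromNumber:@23]); // twenty-three
f.locale = [[NSLocale alloc] initWithLocaleIdentifier:@"fr_FR"];
NSLog(@"%@", [f stringFromNumber:@23]); // vingt-trois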
With Swift 5 / iOS 12.2, NumberFormatter has a numberStyle property that can be set with value NumberFormatter.Style.spellOut. spellOut has the following declaration:
case spellOut = 5
A style format in which numbers are spelled out in the language defined by the number formatter locale.
For example, in the en_US locale, the number 1234.5678 is represented as one thousand two hundred thirty-four point five six seven eight; in the fr_FR locale, the number 1234.5678 is represented as mille deux cent trente-quatre virgule cinq six sept huit.
This style is supported for most user locales. If this style doesn't support the number formatter locale, the en_US locale is used as a fallback.
The Playground code below shows how to convert an integer to a spell-out text using NumberFormatter spellOut style:
import Foundation
let integer = 2018
let formatter = NumberFormatter()
formatter.numberStyle = NumberFormatter.Style.spellOut
let spellOutText = formatter.string(for: integer)!
print(spellOutText) // prints: two thousand eighteen
We can do this in Swift like this:
let formatter = NSNumberFormatter()
formatter.numberStyle = NSNumberFormatterStyle.SpellOutStyle
println("\(formatter.stringFromNumber(1234.5678))")
You can use the function below to convert an integer to words using Swift's native number style.
func toWords<N>(number: N) -> String? {
let formatter = NumberFormatter()
formatter.numberStyle = .spellOut
switch number {
case is Int, is UInt, is Float, is Double:
return formatter.string(from: number as! NSNumber)
case is String:
if let number = Double(number as! String) {
return formatter.string(from: NSNumber(floatLiteral: number))
}
default:
break
}
return nil
}
print(toWords(number: 12312))
print(toWords(number: "12312"))
For my own reference, this is @moca's answer, but ready for use:
- (NSString *) spellInt:(int)number {
NSNumber *numberAsNumber = [NSNumber numberWithInt:number];
NSNumberFormatter *formatter = [NSNumberFormatter new];
[formatter setNumberStyle:NSNumberFormatterSpellOutStyle];
return [formatter stringFromNumber:numberAsNumber];
}
Note: This is using ARC.
Hey there, I am fresh to iPhone development and Objective-C. Sorry if this question is too stupid...
Here is my problem: I want to calculate the time taken by one of my functions, like UIGetScreenImage. Here is the code:
-(void)screenCapture{
CGImageRef screen = UIGetScreenImage();
UIImage* image = [UIImage imageWithCGImage:screen];
CGImageRelease(screen);
UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
}
What should I do to calculate the time taken by this process? Sample code would be appreciated.
Thanks for your kind assistance. Look forward to your replies and ideas. :D
You can get the current date when the method starts and when it finishes, and check the time passed between those two moments:
-(void)screenCapture{
NSDate* startDate = [NSDate date];
...
NSDate* finishDate = [NSDate date];
NSLog(#"%f", [finishDate timeIntervalSinceDate: startDate]);
}
Edit: I believe my approach described above is (to put it mildly) not the best solution for measuring process time. Now I use the approach described in the "Big Nerd Ranch" blog here, which uses the mach_absolute_time function. I copied the code from the post to illustrate the method; with this snippet you can measure the run time of an arbitrary block:
#import <mach/mach_time.h> // for mach_absolute_time() and friends
CGFloat BNRTimeBlock (void (^block)(void)) {
mach_timebase_info_data_t info;
if (mach_timebase_info(&info) != KERN_SUCCESS) return -1.0;
uint64_t start = mach_absolute_time ();
block ();
uint64_t end = mach_absolute_time ();
uint64_t elapsed = end - start;
uint64_t nanos = elapsed * info.numer / info.denom;
return (CGFloat)nanos / NSEC_PER_SEC;
} // BNRTimeBlock
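Usage is then a one-liner (my own example call):
CGFloat seconds = BNRTimeBlock(^{
    // ... the code to measure, e.g. the screen capture above ...
});
NSLog(@"took %f seconds", seconds);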
I think an easier (and more straightforward) solution can be found here:
NSDate *methodStart = [NSDate date];
/* ... Do whatever you need to do ... */
NSDate *methodFinish = [NSDate date];
NSTimeInterval executionTime = [methodFinish timeIntervalSinceDate:methodStart];
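If you want monotonic timing without dealing with mach_timebase_info yourself, CACurrentMediaTime() from QuartzCore wraps mach_absolute_time and returns seconds directly; a minimal variant of the same measurement (my addition):
#import <QuartzCore/QuartzCore.h>
CFTimeInterval methodStart = CACurrentMediaTime();
/* ... Do whatever you need to do ... */
CFTimeInterval executionTime = CACurrentMediaTime() - methodStart;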