Print CMTime in Objective-C

I have some experience with Swift, but zero knowledge of Objective-C.
I need to print a timestamp in the console; here it is:
CMTime timeStamp = CMTimeMake(frame.timeStampNs / rtc::kNumNanosecsPerMillisec, 1000);
I added something like this:
NSLog(@timeStamp);
But it failed. In Swift I would print it like this:
print("Timestamp: \(timeStamp)")
Could you please tell me how to do it in Objective-C?
Thanks

Use the %f format specifier for 64-bit floating-point numbers (double); CMTimeGetSeconds() returns a Float64, which is a double:
NSLog(@"Timestamp: %f", CMTimeGetSeconds(timeStamp));
https://developer.apple.com/library/archive/documentation/Cocoa/Conceptual/Strings/Articles/formatSpecifiers.html
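If you want the raw value/timescale rather than the seconds, CoreMedia can also print the full CMTime structure. A small sketch of my own (timeStamp is the variable from the question):
CMTimeShow(timeStamp); // dumps {value/timescale = seconds} to the console
NSString *desc = CFBridgingRelease(CMTimeCopyDescription(NULL, timeStamp));
NSLog(@"Timestamp: %@", desc); // same information as an NSString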

Related

How to do a reinterpret_cast with Obj-C?

Edit: I am looking for reinterpret_cast in Objective-C, so the following is meaningless for my intended question.
Are there static casts in Objective-C?
e.g. in this C++ example static_cast is used for a good reason:
float rnd = static_cast<float>(rand()) / static_cast<float>(RAND_MAX);
I can think of using a union or pointers to directly access the integer value as a float, but that would make the code much more complex.
How would I do the same thing as in the C++ example conveniently in Objective-C?
Objective-C is a superset of C, so do this the C way with simple casts:
float rnd = (float)rand() / (float)RAND_MAX;
After the clarification, you are after reinterpret_cast:
You are correct that you can do it with a union. If you want to do it inline, you can use a cast, address-of, and indirection...
The C-style cast (type)expr works similarly in both (Objective-)C and C++: if the equivalent of a static_cast is appropriate, it does that, e.g. between int, float, and other value types; otherwise it does the equivalent of a reinterpret_cast, e.g. between pointer types.
So you just need to convert to a pointer, cast, and indirect. E.g.:
int z = 0xDEADBEEF;
float y = *(float *)&z;
NSLog(@"%d -> %f", z, y);
Yes, it's a bit ugly, but then so is what you are doing ;-) To make it a bit nicer, define it as a macro:
#define REINTERPRET(type, expr) (*(type *)&(expr))
which you can use as:
int z = 0xDEADBEEF;
float y = REINTERPRET(float, z);
NSLog(@"%d -> %f", z, y);
As with reinterpret_cast, you should use this sparingly and with care!
HTH
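If strict-aliasing warnings are a concern, the same bit reinterpretation can be done with memcpy; a small sketch of my own (compilers typically optimize the copy away):
#include <string.h>

int z = 0xDEADBEEF;
float y;
memcpy(&y, &z, sizeof y); // copy the bit pattern without an aliasing-violating cast
NSLog(@"%d -> %f", z, y);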

In Objective-C, fabsf() has the wrong result type

If in Xcode's debug console I type
(lldb) p (float)fabsf(-5.0f)
(float) $22 = 0
(lldb) p (double)fabsf(-5.0f)
(double) $23 = 5
where the first result, cast to float (without the cast, the p command can't recognize fabsf's return type), is wrong no matter what the parameter is.
But in the library headers the return type of fabsf is clearly float. Can somebody explain this to me?
iOS 9.2: math.h
...
extern float fabsf(float);
extern double fabs(double);
extern long double fabsl(long double);
...
Is there some lldb mechanism or issue that I am not aware of, or is it really a language implementation issue (I guess not...)?
It was almost certainly a bug in the lldb console in Xcode 7, and a ticket was filed for it.
The issue is no longer present in Xcode 8.0.
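For comparison, a quick sketch of my own showing that compiled code behaves as the header declares, so the wrong value only shows up in the lldb console:
#include <math.h>

float f = fabsf(-5.0f);
NSLog(@"fabsf(-5.0f) = %f", f); // prints 5.000000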

float imprecision in printf / appendFormat

I have a float value that I would like to show in a format string: if it corresponds to an integer, show the integer; if not, show it with one decimal.
Like this:
3.1
3
2.9
2.8
For now I'm stuck. Conceptually, I'd do something like this:
float myFloat = 3.1;
float mySecondFloat = 3;
[NSString stringWithFormat:@"%g %g", myFloat, mySecondFloat];
My question is:
1/ The "%g" format specifier works in most cases, but sometimes the result is shown as "0.600001" when in reality it should be 0.6, because all I do is 0.7 - 0.1.
Is there some kind of cast of a float to one decimal, or maybe a bitwise operation to get rid of the final imprecision, or another way to make it work in 100% of cases?
Thanks for your answers.
If you need absolute precision when working with decimal numbers, you may consider using the NSDecimalNumber class.
Number and Value Programming Topics: Using Decimal Numbers
Otherwise, the %.1g format specifier will be OK.
You have to use this code:
float myFloat = 3.1;
float mySecondFloat = 3;
[NSString stringWithFormat:@"%.1f %.1f", myFloat, mySecondFloat];
EDIT:
If I really want to print the integer value of a float, I would do it this way (ugly code):
int intValue = myFloatNumber / 1;
NSString *string;
if (myFloatNumber == intValue)
{
    string = [NSString stringWithFormat:@"%.0f", myFloatNumber];
}
else
{
    string = [NSString stringWithFormat:@"%.1f", myFloatNumber];
}
By assigning the result of the division by 1 to an int, you automatically truncate your float to an integer.
Once you have the NSString *string, you can concatenate it to your string.
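An alternative sketch for the "is this float a whole number?" test, using truncf from <math.h> instead of the division-by-1 trick (same myFloatNumber variable as above):
#include <math.h>

NSString *string;
if (truncf(myFloatNumber) == myFloatNumber) {
    string = [NSString stringWithFormat:@"%.0f", myFloatNumber]; // whole number: no decimals
} else {
    string = [NSString stringWithFormat:@"%.1f", myFloatNumber]; // otherwise: one decimal
}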
Seeing the other answers, it seems that there is no standard C format specifier to achieve this.
I went with Nicolas's answer (NSDecimalNumber), hence the accepted flag. It worked fine indeed, but in my (very simple) case it might be overkill. On second thought, if I had to do it again, I think I would use only an NSNumberFormatter on an NSNumber (created with the numberWithFloat: method); see the sketch below.
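A minimal sketch of that NSNumberFormatter idea; the exact digit settings are my assumption about what the poster had in mind:
NSNumberFormatter *formatter = [[NSNumberFormatter alloc] init];
formatter.numberStyle = NSNumberFormatterDecimalStyle;
formatter.minimumFractionDigits = 0; // show "3" rather than "3.0"
formatter.maximumFractionDigits = 1; // at most one decimal place
NSString *result = [formatter stringFromNumber:[NSNumber numberWithFloat:myFloat]];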
If it can help someone..

Objective-C: Parsing a string to float, avoiding extra decimals

I'm converting some information I'm receiving as strings to floats to get the sum of all of them.
The problem is when I convert to float, for example:
NSString *myString = @"13502.63";
float f = [myString floatValue];
NSLog(@"Converted %f", f);
The result of "f" is 13502.629883
This is OK for some values, but when I have to add a large number of these values, the extra decimals make the result incorrect.
Could anybody help me, please?
Thanks
If you want accuracy, you should not use float. Use NSDecimalNumber:
NSString *myString = @"13502.63";
NSDecimalNumber *number = [NSDecimalNumber decimalNumberWithString:myString];
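For the summing part of the question, a small sketch of my own (the amounts array is made up for illustration):
NSArray *amounts = @[@"13502.63", @"0.07", @"100.30"];
NSDecimalNumber *sum = [NSDecimalNumber zero];
for (NSString *amount in amounts) {
    sum = [sum decimalNumberByAdding:[NSDecimalNumber decimalNumberWithString:amount]];
}
NSLog(@"Sum = %@", sum); // exact decimal result, no float rounding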
Unfortunately, all floating-point types in any language have this problem, as they have to store the value in an underlying binary format.
Have you considered using NSDecimalNumber?
These will be much slower than a float, but if that is not a problem, then they are much more accurate for such calculations.
If you need speed for some reason, would a double or long double be accurate enough?
Most decimal numbers have no exact float representation; that is the reason why "13502.63" is converted to 13502.629883: it is the closest float to the original number.
So, I don't think there is an easy solution with float. You should try NSDecimalNumber. I don't know about the performance, but it should give you an exact representation.

Why is the float getting rounded off in this Objective-C / Cocoa code?

I am simply doing:
float x = 151.185436;
printf("x=%f",x);
and the result is
x=151.185440
What's wrong here? I want to retain and print my original value of 151.185436.
Thanks
Amarsh
Floats just aren't very accurate. Try double. And read this: http://docs.sun.com/source/806-3568/ncg_goldberg.html
A float can only hold 32 bits (4 bytes) of information about your number; it can't store as many decimal places as you need it to. 151.18544 is as close to your value as the float can represent without running out of bits.
For the precision you want, you need to use a double instead of a float.
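A quick sketch of my own showing the difference, using the value from the question:
float  f = 151.185436f;
double d = 151.185436;
printf("float : %f\n", f); // 151.185440 (nearest representable float)
printf("double: %f\n", d); // 151.185436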
Floats are inaccurate; use doubles. Also, as you're using Objective-C and not straight C, it might be better to use Objective-C classes for this:
NSNumber *myNumber = [NSNumber numberWithDouble:151.185436];
NSLog(@"myNumber = %@", myNumber);