digits after decimal point - objective-c

How do I get the digits after the decimal point from a float number in Objective-C?

OK, this is C-style, but I imagine the process would be the same.
float decimals = number - (int)number;
while (decimals > 0.0f)
{
    reportNextNumber((int)(decimals * 10));
    number *= 10;  // shift the next digit into place
    decimals = number - (int)number;
}

This code works:
CGFloat x = 2.43;
// CGFloat x = 3.145;
// CGFloat x = 2.0003;
// CGFloat x = 1.0;
// CGFloat x = 3.1415926535;
NSLog(@"%f -> %@", x, [self numberOfFractionDigits:x]);
- (NSString *)numberOfFractionDigits:(CGFloat)number {
    CGFloat fractionalPart = number - (NSInteger)number;
    NSMutableString *r = [NSMutableString stringWithString:@""];
    while (fractionalPart) {
        [r appendFormat:@"%@", @((NSUInteger)(fractionalPart * 10 + .5))];
        number *= 10;
        fractionalPart = number - (NSInteger)number;
    }
    return r;
}
output:
2.430000 -> 43
3.145000 -> 145
2.000300 -> 0003
1.000000 ->
3.141593 -> 1415926535 // for x = 3.1415926535;
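One caveat: CGFloat is binary floating point, so a value like 2.43 is stored only approximately, and the loop above relies on the + .5 rounding at each step to recover the intended digits. If you'd rather avoid the repeated multiplication, here is a string-based sketch (hypothetical helper name, assuming %.10f precision is enough for your inputs):
// Hypothetical alternative: print the number with fixed precision,
// then keep everything after the decimal point, minus trailing zeros.
NSString *fractionDigits(CGFloat number) {
    NSString *s = [NSString stringWithFormat:@"%.10f", number];
    NSRange dot = [s rangeOfString:@"."];
    NSString *fraction = [s substringFromIndex:NSMaxRange(dot)];
    NSUInteger end = fraction.length;
    while (end > 0 && [fraction characterAtIndex:end - 1] == '0') {
        end--;  // trim trailing zeros: 2.43 -> "43", 1.0 -> ""
    }
    return [fraction substringToIndex:end];
}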

Related

myArray count isn't functioning as expected

The arrayTwelveLength variable isn't working as expected. When I placed a breakpoint on the amount = 1; line, I hovered over arrayTwelve and found that it was empty, with 0 elements. Immediately after, I hovered over arrayTwelveLength, expecting to see 0, but instead arrayTwelveLength had a value of 1876662112. I don't know how it got that value, and I need to solve that problem. What am I doing wrong?
NSMutableArray *redValues = [NSMutableArray array];
NSMutableArray *arrayTwelve = [NSMutableArray array];
__block int counter = 0;
__block NSInteger u;
NSUInteger redValuesLength = [redValues count];
__block int arrayTwelveLength = 0;
__block float diffForAverage, fps, averageTime, bloodSpeed;
float average;
__block int amount = 1;
__block float totalTwelve, totalThirteen;
__block NSUInteger totalNumberOfFramesInSmallArrays = 0;
__block NSUInteger totalNumberOfFramesNotInSmallArrays;
for (u = (counter + 24); u < (redValuesLength - 24); u++)
{
diffForAverage = average - [redValues[u + 1] floatValue];
float test = [redValues[u] floatValue];
arrayTwelveLength = [arrayTwelve count];
if (diffForAverage > -1 && diffForAverage < 1)
{
totalTwelve += [redValues[u + 1] floatValue];
amount++;
[arrayTwelve addObject:@(test)];
counter++;
}
else
{
if (arrayTwelveLength >= 8)
{
counter++;
break;
}
else
{
[arrayTwelve removeAllObjects];
totalTwelve = [redValues[u + 1] floatValue];
counter++;
amount = 1;
}
}
}
amount = 1; // I added a breakpoint here
totalThirteen = [redValues[u + 1] floatValue];
average = totalThirteen / amount;
if (counter == redValuesLength)
{
totalNumberOfFramesNotInSmallArrays = redValuesLength - totalNumberOfFramesInSmallArrays - 25 - (redValuesLength - counter);
fps = redValuesLength / 30;
averageTime = totalNumberOfFramesNotInSmallArrays / fps;
bloodSpeed = 3 / averageTime;
[_BloodSpeedValue setText:[NSString stringWithFormat:@"%f", bloodSpeed]];
}
if (arrayTwelveLength == NULL)
{
arrayTwelveLength = 0;
}
totalNumberOfFramesInSmallArrays += arrayTwelveLength;
You have problems with unsigned/signed types. With your data set, the first for loop should never even be entered, because your loop index u (== 24) is not less than redValuesLength - 24 when redValuesLength == 0; but because redValuesLength is an unsigned type, the subtraction wraps around and you get:
(unsigned long)0 - (unsigned long)24 = -24 modulo (ULONG_MAX + 1) = 18446744073709551592
Also, you are not initialising average before usage.
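As a sketch of the fix (keeping the question's variable names, so treat the surrounding code as assumed), do the comparison in signed arithmetic and initialise average before the loop:
float average = 0.0f;  // initialise before first use
NSInteger redValuesLength = (NSInteger)[redValues count];
// Signed arithmetic: with an empty array, redValuesLength - 24 is -24,
// so the loop body is never entered instead of wrapping to a huge count.
for (NSInteger u = counter + 24; u < redValuesLength - 24; u++) {
    // ... loop body as above ...
}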

Find Max Difference in Array - Need Algorithm Solution Optimization [duplicate]

This question already has answers here:
optimal way to find sum(S) of all contiguous sub-array's max difference
(2 answers)
Closed 6 years ago.
I practised solving an algorithm on HackerRank: Max Difference.
Here's the problem given:
You are given an array with n elements: d[ 0 ], d[ 1 ], ..., d[n-1]. Calculate the sum(S) of all contiguous sub-array's max difference.
Formally:
S = sum{ max(d[l..r]) - min(d[l..r]) }, ∀ 0 <= l <= r < n
Input format:
n
d[0] d[1] ... d[n-1]
Output format:
S
Sample Input:
4
1 3 2 4
Sample Output:
12
Explanation:
l = 0; r = 0;
array: [1]
sum = max([1]) - min([1]) = 0
l = 0; r = 1;
array: [1,3]
sum = max([1,3]) - min([1,3]) = 3 - 1 = 2
l = 0; r = 2;
array: [1,3,2]
sum = max([1,3,2]) - min([1,3,2]) = 3 - 1 = 2
l = 0; r = 3;
array: [1,3,2,4]
sum = max([1,3,2,4]) - min([1,3,2,4]) = 4 - 1 = 3
l = 1; r = 1 will result in zero
l = 1; r = 2;
array: [3,2]
sum = max([3,2]) - min([3,2]) = 3 - 2 = 1;
l = 1; r = 3;
array: [3,2,4]
sum = max ([3,2,4]) - min([3,2,4]) = 4 - 2 = 2;
l = 2; r = 2; will result in zero
l = 2; r = 3;
array: [2,4]
sum = max([2,4]) - min([2,4]) = 4 - 2 = 2;
l = 3; r = 3 will result in zero;
Total sum = 12
Here's my solution:
-(NSNumber*)sum:(NSArray*) arr {
int diff = 0;
int curr_sum = diff;
int max_sum = curr_sum;
for(int i=0; i<arr.count; i++)
{
for(int j=i; j<=arr.count; j++) {
// Calculate current diff
if (!(j-i > 1)) {
continue;
}
NSArray *array = [arr subarrayWithRange:NSMakeRange(i, j-i)];
if (!array.count || array.count == 1) {
continue;
}
int xmax = -32000;
int xmin = 32000;
for (NSNumber *num in array) {
int x = num.intValue;
if (x < xmin) xmin = x;
if (x > xmax) xmax = x;
}
diff = xmax-xmin;
// Calculate current sum
if (curr_sum > 0)
curr_sum += diff;
else
curr_sum = diff;
// Update max sum, if needed
if (curr_sum > max_sum)
max_sum = curr_sum;
}
}
return @(max_sum);
}
There were 10 test cases in total.
The above solution passed the first 5 test cases, but the other 5 failed due to timeouts (>= 2s):
"Status: Terminated due to timeout".
Please help me with how this code can be further optimised.
Thanks!
There was already an answer in Python. Here's my Objective-C version:
@interface Stack : NSObject {
NSMutableArray* m_array;
int count;
}
- (void)push:(id)anObject;
- (id)pop;
- (id)prev_prev;
- (void)clear;
@property (nonatomic, readonly) NSMutableArray* m_array;
@property (nonatomic, readonly) int count;
@end
@implementation Stack
@synthesize m_array, count;
- (id)init
{
if( self=[super init] )
{
m_array = [[NSMutableArray alloc] init];
count = 0;
}
return self;
}
- (void)push:(id)anObject
{
[m_array addObject:anObject];
count = m_array.count;
}
- (id)pop
{
id obj = nil;
if(m_array.count > 0)
{
obj = [m_array lastObject];
[m_array removeLastObject];
count = m_array.count;
}
return obj;
}
- (id)prev_prev
{
id obj = nil;
if(m_array.count > 0)
{
obj = [m_array lastObject];
}
return obj;
}
- (void)clear
{
[m_array removeAllObjects];
count = 0;
}
@end
@interface SolutionClass:NSObject
/* method declaration */
-(NSNumber*)findDiff:(NSArray*) arr;
@end
@implementation SolutionClass
-(NSNumber*)findDiff:(NSArray*) arr {
NSNumber *maxims = [self sum:arr negative:NO];
NSNumber *minims = [self sum:arr negative:YES];
NSNumber *diff = @(maxims.longLongValue+minims.longLongValue);
NSLog(@"diff: %@", diff);
return diff;
}
-(NSNumber*)sum:(NSArray*) arr negative:(BOOL)negate {
Stack *stack = [Stack new];
[stack push:@{@(-1): [NSNull null]}];
long long sum = 0;
for(int i=0; i<arr.count; i++) {
NSNumber *num = arr[i];
if (negate) {
num = #(-num.longLongValue);
}
NSDictionary *prev = stack.m_array.lastObject;
NSNumber *prev_i = (NSNumber*)prev.allKeys[0];
NSNumber *prev_x = (NSNumber*)prev.allValues[0];
if ([self isNumber:prev_x]) {
while (num.longLongValue > prev_x.longLongValue) {
prev_i = (NSNumber*)prev.allKeys[0];
prev_x = (NSNumber*)prev.allValues[0];
prev = [stack pop];
NSDictionary *prev_prev_Dict = [stack prev_prev];
NSNumber *prev_prev_i = (NSNumber*)prev_prev_Dict.allKeys[0];
sum += prev_x.longLongValue * (i-prev_i.longLongValue) * (prev_i.longLongValue - prev_prev_i.longLongValue);
prev = stack.m_array.lastObject;
prev_x = (NSNumber*)prev.allValues[0];
if (![self isNumber:prev_x]) {
break;
}
}
}
[stack push:@{@(i): num}];
}
NSLog(#"Middle: sum: %lld", sum);
while (stack.count > 1) {
NSDictionary *prev = [stack pop];
NSDictionary *prev_prev_Dict = [stack prev_prev];
NSNumber *prev_i = (NSNumber*)prev.allKeys[0];
NSNumber *prev_x = (NSNumber*)prev.allValues[0];
NSNumber *prev_prev_i = (NSNumber*)prev_prev_Dict.allKeys[0];
sum += prev_x.longLongValue * (arr.count-prev_i.longLongValue) * (prev_i.longLongValue - prev_prev_i.longLongValue);
prev = stack.m_array.lastObject;
prev_x = (NSNumber*)prev.allValues[0];
if (![self isNumber:prev_x]) {
break;
}
}
NSLog(#"End: sum: %lld", sum);
return #(sum);
}
-(BOOL)isNumber:(id)obj {
return [obj isKindOfClass:[NSNumber class]];
}
@end
The above solution works well for 7 test cases, but fails the other 3 with "Status: Wrong Answer". Hoping to find a fix for that too.
EDIT:
I have updated the code above to the WORKING version that passes all the test cases; the wrong data types were used before.
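For readers wondering why the stack-based version is fast: S equals (sum of max over all subarrays) minus (sum of min over all subarrays), and each of those sums can be computed in O(n) with a monotonic stack by counting, for every element, how many subarrays it is the maximum (or minimum) of. A minimal C-style sketch of the sum-of-maximums half, assuming the values fit in long long (the minimums follow by negating the input and the result):
#import <Foundation/Foundation.h>
// Sum of max(d[l..r]) over all contiguous subarrays, in O(n).
// The stack holds indices with strictly decreasing values; when d[i]
// pops d[j], d[j] is the maximum of every subarray starting in
// (stack[top], j] and ending in [j, i), so it contributes
// d[j] * (j - left) * (i - j) to the sum.
static long long sumOfSubarrayMaximums(const long long *d, NSInteger n) {
    NSInteger *stack = malloc(sizeof(NSInteger) * (n + 2));
    NSInteger top = 0;
    stack[0] = -1;  // sentinel
    long long sum = 0;
    for (NSInteger i = 0; i <= n; i++) {
        // i == n acts as a virtual +infinity that flushes the stack.
        while (top > 0 && (i == n || d[i] >= d[stack[top]])) {
            NSInteger j = stack[top--];
            sum += d[j] * (i - j) * (j - stack[top]);
        }
        stack[++top] = i;
    }
    free(stack);
    return sum;
}
On the sample input 1 3 2 4 this gives 31 for the maximums and 19 for the minimums, so S = 31 - 19 = 12, matching the expected output.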

MKPolygon area calculation

I'm trying to make an area calculation category for MKPolygon.
I found some JS code https://github.com/mapbox/geojson-area/blob/master/index.js#L1 with a link to the algorithm: http://trs-new.jpl.nasa.gov/dspace/handle/2014/40409.
Here is my code, which gives a wrong result (thousands of times more than the actual value):
#define kEarthRadius 6378137
@implementation MKPolygon (AreaCalculation)
- (double) area {
double area = 0;
NSArray *coords = [self coordinates];
if (coords.count > 2) {
CLLocationCoordinate2D p1, p2;
for (int i = 0; i < coords.count - 1; i++) {
p1 = [coords[i] MKCoordinateValue];
p2 = [coords[i + 1] MKCoordinateValue];
area += degreesToRadians(p2.longitude - p1.longitude) * (2 + sinf(degreesToRadians(p1.latitude)) + sinf(degreesToRadians(p2.latitude)));
}
area = area * kEarthRadius * kEarthRadius / 2;
}
return area;
}
- (NSArray *)coordinates {
NSMutableArray *points = [NSMutableArray arrayWithCapacity:self.pointCount];
for (int i = 0; i < self.pointCount; i++) {
MKMapPoint *point = &self.points[i];
[points addObject:[NSValue valueWithMKCoordinate:MKCoordinateForMapPoint(* point)]];
}
return points.copy;
}
double degreesToRadians(double degrees) {
return degrees * M_PI / 180;
}
@end
What did I miss?
The whole algorithm, implemented in Swift 3.0:
import MapKit
let kEarthRadius = 6378137.0
// CLLocationCoordinate2D uses degrees but we need radians
func radians(degrees: Double) -> Double {
return degrees * M_PI / 180;
}
func regionArea(locations: [CLLocationCoordinate2D]) -> Double {
guard locations.count > 2 else { return 0 }
var area = 0.0
for i in 0..<locations.count {
let p1 = locations[i > 0 ? i - 1 : locations.count - 1]
let p2 = locations[i]
area += radians(degrees: p2.longitude - p1.longitude) * (2 + sin(radians(degrees: p1.latitude)) + sin(radians(degrees: p2.latitude)) )
}
area = -(area * kEarthRadius * kEarthRadius / 2);
return max(area, -area) // In order not to worry about is polygon clockwise or counterclockwise defined.
}
The final step for i = N-1 and i+1 = 0 (wrap around) is missing in your loop.
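In Objective-C terms, here is a sketch of the corrected loop from the question (note the modulo wrap-around, and sin rather than sinf since the values are doubles):
// Iterate over all edges, including the closing edge from the
// last coordinate back to the first (wrap-around).
for (int i = 0; i < coords.count; i++) {
    CLLocationCoordinate2D p1 = [coords[i] MKCoordinateValue];
    CLLocationCoordinate2D p2 = [coords[(i + 1) % coords.count] MKCoordinateValue];
    area += degreesToRadians(p2.longitude - p1.longitude) *
            (2 + sin(degreesToRadians(p1.latitude)) + sin(degreesToRadians(p2.latitude)));
}
area = fabs(area * kEarthRadius * kEarthRadius / 2);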
This may help someone...
You need to pass the shape's edge points into the method below, and it returns the area of the polygon:
static double areaOfCurveWithPoints(const NSArray *shapeEdgePoints) {
CGPoint initialPoint = [shapeEdgePoints.firstObject CGPointValue];
CGMutablePathRef cgPath = CGPathCreateMutable();
CGPathMoveToPoint(cgPath, &CGAffineTransformIdentity, initialPoint.x, initialPoint.y);
for (int i = 1; i < shapeEdgePoints.count; i++) {
CGPoint point = [[shapeEdgePoints objectAtIndex:i] CGPointValue];
CGPathAddLineToPoint(cgPath, &CGAffineTransformIdentity, point.x, point.y);
}
CGPathCloseSubpath(cgPath);
CGRect frame = integralFrameForPath(cgPath);
size_t bytesPerRow = bytesPerRowForWidth(frame.size.width);
CGContextRef gc = createBitmapContextWithFrame(frame, bytesPerRow);
CGContextSetFillColorWithColor(gc, [UIColor whiteColor].CGColor);
CGContextAddPath(gc, cgPath);
CGContextFillPath(gc);
double area = areaFilledInBitmapContext(gc);
CGPathRelease(cgPath);
CGContextRelease(gc);
return area;
}
static CGRect integralFrameForPath(CGPathRef path) {
CGRect frame = CGPathGetBoundingBox(path);
return CGRectIntegral(frame);
}
static size_t bytesPerRowForWidth(CGFloat width) {
static const size_t kFactor = 64;
// Round up to a multiple of kFactor, which must be a power of 2.
return ((size_t)width + (kFactor - 1)) & ~(kFactor - 1);
}
static CGContextRef createBitmapContextWithFrame(CGRect frame, size_t bytesPerRow) {
CGColorSpaceRef grayscale = CGColorSpaceCreateDeviceGray();
CGContextRef gc = CGBitmapContextCreate(NULL, frame.size.width, frame.size.height, 8, bytesPerRow, grayscale, kCGImageAlphaNone);
CGColorSpaceRelease(grayscale);
CGContextTranslateCTM(gc, -frame.origin.x, -frame.origin.y);
return gc;
}
static double areaFilledInBitmapContext(CGContextRef gc) {
size_t width = CGBitmapContextGetWidth(gc);
size_t height = CGBitmapContextGetHeight(gc);
size_t stride = CGBitmapContextGetBytesPerRow(gc);
// Get a pointer to the data
unsigned char *bitmapData = (unsigned char *)CGBitmapContextGetData(gc);
uint64_t coverage = 0;
for (size_t y = 0; y < height; ++y) {
for (size_t x = 0; x < width; ++x) {
coverage += bitmapData[y * stride + x];
}
}
// NSLog(@"coverage =%llu UINT8_MAX =%d",coverage,UINT8_MAX);
return (double)coverage / UINT8_MAX;
}

How can I get a hex string from UIColor or from rgb

Now I can convert a hex string to rgb color like this:
// Input is without the # ie : white = FFFFFF
+ (UIColor *)colorWithHexString:(NSString *)hexString
{
unsigned int hex;
[[NSScanner scannerWithString:hexString] scanHexInt:&hex];
int r = (hex >> 16) & 0xFF;
int g = (hex >> 8) & 0xFF;
int b = (hex) & 0xFF;
return [UIColor colorWithRed:r / 255.0f
green:g / 255.0f
blue:b / 255.0f
alpha:1.0f];
}
but how can I convert rgb to a hex string?
Use this method:
- (NSString *)hexStringForColor:(UIColor *)color {
const CGFloat *components = CGColorGetComponents(color.CGColor);
CGFloat r = components[0];
CGFloat g = components[1];
CGFloat b = components[2];
NSString *hexString = [NSString stringWithFormat:@"%02X%02X%02X", (int)(r * 255), (int)(g * 255), (int)(b * 255)];
return hexString;
}
Use this extension of UIColor to get a hex string from it.
extension UIColor {
func toHexString() -> String {
var r:CGFloat = 0
var g:CGFloat = 0
var b:CGFloat = 0
var a:CGFloat = 0
getRed(&r, green: &g, blue: &b, alpha: &a)
let rgb:Int = (Int)(r*255)<<16 | (Int)(g*255)<<8 | (Int)(b*255)<<0
return String(format:"#%06x", rgb)
}
}
Anoop's answer is not correct: if you try it with [UIColor blackColor], it will return green's hex string.
The reason is that the system is clever enough to save memory, so grayscale colors are stored with only two components:
For black, component[0] = 0 (r=g=b=0) and component[1] = 1 (a=1).
For white, component[0] = 1 (r=g=b=1) and component[1] = 1 (a=1).
Use the category below to convert a UIColor to hex.
UIColor+Utility.h
@interface UIColor (Utility)
/**
Return representation in hex
*/
-(NSString*)representInHex;
@end
UIColor+Utility.m
@implementation UIColor (Utility)
-(NSString*)representInHex
{
const CGFloat *components = CGColorGetComponents(self.CGColor);
size_t count = CGColorGetNumberOfComponents(self.CGColor);
if(count == 2){
return [NSString stringWithFormat:@"#%02lX%02lX%02lX",
lroundf(components[0] * 255.0),
lroundf(components[0] * 255.0),
lroundf(components[0] * 255.0)];
}else{
return [NSString stringWithFormat:@"#%02lX%02lX%02lX",
lroundf(components[0] * 255.0),
lroundf(components[1] * 255.0),
lroundf(components[2] * 255.0)];
}
}
@end
This is the code that I used in Swift. Please be aware that this seems to work fine when you send it a UIColor created with RGBA values, but it returns some strange results when you send pre-defined colours like UIColor.darkGrayColor().
func hexFromUIColor(color: UIColor) -> String
{
let hexString = String(format: "%02X%02X%02X",
Int(CGColorGetComponents(color.CGColor)[0] * 255.0),
Int(CGColorGetComponents(color.CGColor)[1] * 255.0),
Int(CGColorGetComponents(color.CGColor)[2] * 255.0))
return hexString
}
Objective-C UIColor category. This method can avoid situations like #FFFF00000000:
- (NSString *)hexString {
CGFloat r, g, b, a;
[self getRed:&r green:&g blue:&b alpha:&a];
int rgb = (int)(r * 255.0f)<<16 | (int)(g * 255.0f)<<8 | (int)(b * 255.0f)<<0;
if (a < 1) {
int argb = (int)(a * 255.0f)<<24 | rgb;
return [NSString stringWithFormat:@"#%08x", argb]; // eight digits when alpha is present
}
return [NSString stringWithFormat:@"#%06x", rgb];
}
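A quick usage sketch for the category method above:
UIColor *semiRed = [UIColor colorWithRed:1 green:0 blue:0 alpha:0.5];
NSLog(@"%@", [semiRed hexString]); // -> #7fff0000 (alpha folded into the top byte)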

How do I convert a UIColor to a hexadecimal string?

I have a project where I need to store the RGBA values of a UIColor in a database as an 8-character hexadecimal string. For example, [UIColor blueColor] would be @"0000FFFF".
I know I can get the component values like so:
CGFloat r,g,b,a;
[color getRed:&r green:&g blue: &b alpha: &a];
but I don't know how to go from those values to the hex string. I've seen a lot of posts on how to go the other way, but nothing functional for this conversion.
Get your floats converted to int values first, then format with stringWithFormat:
int r,g,b,a;
r = (int)(255.0 * rFloat);
g = (int)(255.0 * gFloat);
b = (int)(255.0 * bFloat);
a = (int)(255.0 * aFloat);
[NSString stringWithFormat:@"%02x%02x%02x%02x", r, g, b, a];
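Putting the pieces together, here is a minimal sketch of a complete helper (hypothetical name; it assumes the colour is in an RGB-compatible colour space, so getRed:green:blue:alpha: succeeds):
- (NSString *)rgbaHexStringFromColor:(UIColor *)color {
    CGFloat r, g, b, a;
    if (![color getRed:&r green:&g blue:&b alpha:&a]) {
        return nil; // e.g. a pattern-based colour
    }
    // [UIColor blueColor] -> @"0000FFFF"
    return [NSString stringWithFormat:@"%02X%02X%02X%02X",
            (int)(r * 255.0f), (int)(g * 255.0f),
            (int)(b * 255.0f), (int)(a * 255.0f)];
}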
Here it goes. Returns an NSString (e.g. ffa567) with the hexadecimal value of the color.
- (NSString *)hexStringFromColor:(UIColor *)color
{
const CGFloat *components = CGColorGetComponents(color.CGColor);
CGFloat r = components[0];
CGFloat g = components[1];
CGFloat b = components[2];
return [NSString stringWithFormat:@"%02lX%02lX%02lX",
lroundf(r * 255),
lroundf(g * 255),
lroundf(b * 255)];
}
Swift 4 answer, as a UIColor extension:
extension UIColor {
var hexString: String {
let colorRef = cgColor.components
let r = colorRef?[0] ?? 0
let g = colorRef?[1] ?? 0
let b = ((colorRef?.count ?? 0) > 2 ? colorRef?[2] : g) ?? 0
let a = cgColor.alpha
var color = String(
format: "#%02lX%02lX%02lX",
lroundf(Float(r * 255)),
lroundf(Float(g * 255)),
lroundf(Float(b * 255))
)
if a < 1 {
color += String(format: "%02lX", lroundf(Float(a * 255)))
}
return color
}
}