Conversion from float to int looks weird - Objective-C

I am having difficulty understanding why the following code is giving me the numbers below. Can anyone explain this conversion from float to int? (pCLocation is a CGPoint)
counter = 0;
pathCells[counter][0].x = pCLocation.x;
pathCells[counter][0].y = pCLocation.y;
cellCount[counter]++;
NSLog(@"%@", [NSString stringWithFormat:@"pCLocation at: %f,%f", pCLocation.x, pCLocation.y]);
NSLog(@"%@", [NSString stringWithFormat:@"path cell 0: %i,%i",
    pathCells[counter][cellCount[counter-1]].x, pathCells[counter][cellCount[counter]].y]);
2012-03-09 01:17:37.165 50LevelsBeta1[1704:207] pCLocation at: 47.000000,16.000000
2012-03-09 01:17:37.172 50LevelsBeta1[1704:207] path cell 0: 0,1078427648

Assuming your code is otherwise correct:
I think it would help you to understand how NSLog and other printf-style functions work. When you call NSLog(@"%c %f", a_char, a_float), your code pushes the format string and the values onto the stack, then jumps to the start of that function's code. Since NSLog accepts a variable number of arguments, it doesn't know in advance how much to pop off the stack. It knows there is at least a format string, so it pops that off and begins to scan it. When it finds the format specifier %c, it knows to pop one value off the stack (a char, which is actually passed as an int because of the usual varargs promotions) and print it as a character. Then it finds %f, so it pops a double - a float argument is promoted to double in a varargs call - and prints that as a floating point value. Then it reaches the end of the format string, so it's done.
Now here's the kicker: if you lie to NSLog and tell it you are providing an int but actually provide a float, it has no way to know you are lying. It simply assumes you are telling the truth and prints whatever bits it finds in memory however you asked it to be printed.
That's why you are seeing weird values: you are printing a floating point value as though it were an int. If you really want an int value, you should either (both fixes are sketched in plain C below):
Apply a cast: NSLog(@"cell.x: %i", (int)cell.x);
Leave it a float but use the format string to hide the decimals: NSLog(@"cell.x: %.0f", cell.x);
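For what it's worth, here is a minimal plain-C sketch of both options; printf follows the same varargs rules as NSLog, and the value 47 simply stands in for pCLocation.x:

#include <stdio.h>

int main(void) {
    float x = 47.0f;

    /* Lying about the type is undefined behaviour: the float argument is
       promoted to double, and "%i" would reinterpret those bits as an int. */
    /* printf("%i\n", x); */

    /* Option 1: cast to the type the specifier promises. */
    printf("%i\n", (int)x);   /* prints 47 */

    /* Option 2: keep it floating point and hide the decimals. */
    printf("%.0f\n", x);      /* prints 47 */

    return 0;
}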
(Alternate theory, still potentially useful.)
You might be printing out the contents of uninitialized memory.
In the code you've given, counter = 0 and is never changed. So you assign values to:
pathCells[0][0].x = pCLocation.x;
pathCells[0][0].y = pCLocation.y;
cellCount[0]++;
Then you print:
pathCells[0][cellCount[-1]].x
pathCells[0][cellCount[0]].y
I'm pretty sure that cellCount[-1] isn't what you want. C allows this because even though you think of it as working with an array of a specific size, foo[bar] really just means grab the value at memory address foo plus offset bar. So an index of -1 just means take one step back. That's why you don't get a warning or error, just junk data.
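A tiny C sketch of that point (the array here is made up; it only illustrates why the compiler stays quiet):

#include <stdio.h>

int main(void) {
    int cellCount[4] = {1, 0, 0, 0};

    printf("%d\n", cellCount[0]);      /* 1, as expected */

    /* cellCount[-1] compiles without complaint: it simply reads the int
       sitting just before the array in memory. Whatever happens to be
       there is what you get - undefined behaviour, not a diagnosable error. */
    /* printf("%d\n", cellCount[-1]); */

    return 0;
}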
You should clarify what pathCells, cellCount, and counter are and how they relate to each other. I think you have a bug in how you are combining these things.

Related

Trying to translate Objective-C into AppleScriptObjC for an Instagram post finder

So I have this Objective-C code that does something I had been trying to wrap my head around with plain AppleScript, and had also tried (and failed) to do with some Python. I'd post the AppleScript I have already tried, but it is essentially worthless. So I am turning to the AppleScript/ASObjC gurus here to help with a solution. The code reverse-engineers an Instagram media ID into a post ID (so if you have a photo that you know is from IG, you can find the post ID for that photo).
- (NSString *)getInstagramPostId:(NSString *)mediaId {
    NSString *postId = @"";
    @try {
        NSArray *myArray = [mediaId componentsSeparatedByString:@"_"];
        NSString *longValue = [NSString stringWithFormat:@"%@", myArray[0]];
        long itemId = [longValue longLongValue];
        NSString *alphabet = @"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";
        while (itemId > 0) {
            long remainder = (itemId % 64);
            itemId = (itemId - remainder) / 64;
            unsigned char charToUse = [alphabet characterAtIndex:(int)remainder];
            postId = [NSString stringWithFormat:@"%c%@", charToUse, postId];
        }
    } @catch (NSException *exception) {
        NSLog(@"%@", exception);
    }
    return postId;
}
The code above comes from an answer on another SO question, which can be found here:
Link
I realize it is probably asking a lot, but I suck at math so I don't really "get" this code, which is probably why I can't translate it to some form of AppleScript myself! Hopefully I will learn something in this process.
Here is an example of the media ID the code is looking for:
45381714_262040461144618_1442077673155810739_n.jpg
And here is the post ID that the code above is supposed to translate into
BqvS62JHYH3
A lot of the research that went into these "calculators" is from this post from 5 years ago. It looks like the 18-digit to 10-digit ratio that they point out in that post is now an 11 to 19 ratio. I tried to test the code in Xcode but got a build error when I attempted to run it. Given that I am an Xcode n00b, that is not surprising.
Thanks for your help with this!
Here's an (almost) "word-for-word" translation of your Objective-C code into ASObjC:
use framework "Foundation"
use scripting additions

on InstagramPostIdFromMediaId:mediaId
    local mediaId
    set postId to ""
    set mediaId to my (NSString's stringWithString:mediaId)
    set myArray to mediaId's componentsSeparatedByString:"_"
    set longValue to my NSString's stringWithFormat_("%@", myArray's firstObject())
    set itemId to longValue's longLongValue()
    set alphabet to my (NSString's stringWithString:(("ABCDEFGHIJKLMNOPQRSTUVWXYZ" & ¬
        "abcdefghijklmnopqrstuvwxyz0123456789-_")))
    repeat while (itemId > 0)
        set remainder to itemId mod 64
        set itemId to itemId div 64
        set unichar to (alphabet's characterAtIndex:remainder) as small integer
        set postId to character id unichar & postId
    end repeat
    return postId
end InstagramPostIdFromMediaId:
By "almost", I mean that every Objective-C method utilised in the original script has been utilised by an equivalent call to the same Objective-C method by way of the ASObjC bridge, with two exceptions. I also made a trivial edit of a mathematical nature to one of the lines. Therefore, in total, I made three operational changes, two of these technically being functional changes but which end up to yielding identical results:
to replace (itemId - remainder) / 64 with itemId div 64
The AppleScript div command performs integer division, which is where the number given by regular division is truncated to remove everything after the decimal point. This is mathematically identical to what is being done when the remainder is subtracted from itemId before performing a regular division (there is a short C sketch of the whole loop after this list).
to avoid the instance where stringWithFormat: is used to translate a unicode character index to a string representation
NSString objects store strings as a series of UTF-16 code units, and characterAtIndex: will retrieve a particular code unit from a string, e.g. 0x0041, which refers to the character "A". stringWithFormat: uses the %c format specifier to translate an 8-bit unsigned integer (i.e. one in the range 0x00 to 0xFF) into its character value. AppleScript bungles this up, although I'm uncertain exactly where it goes wrong. Unwrapping the value returned by characterAtIndex: yields an opaque raw AppleScript data object that, for example, looks like «data ushr4100». This can happily be coerced into a small integer type, correctly returning the number 65 in denary. Therefore, whatever goes wrong is likely something stringWithFormat: is doing, so I used AppleScript's character id ... function to perform the same operation that stringWithFormat: was intended to do.
myArray[0] was replaced with myArray's firstObject()
Both of these are used in Objective-C to retrieve the first element of an array. myArray[0] is the familiar subscripting syntax that can happily be used in native Objective-C programming, but is not available to AppleScript. firstObject is an Objective-C method that does the same job and, being a method rather than C-style syntax, can be called through the bridge; it also returns nil rather than raising an exception if the array is empty. As far as we're concerned in the AppleScript context, the result is identical.
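For reference, here is the core loop as a minimal plain-C sketch, purely to show that itemId div 64 and (itemId - remainder) / 64 are the same operation; the 19-digit value is one of the "_"-separated components of the example media ID:

#include <stdio.h>
#include <string.h>

int main(void) {
    const char *alphabet =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";
    long long itemId = 1442077673155810739LL;   /* a "_"-separated component of the media ID */
    char postId[16] = "";

    while (itemId > 0) {
        long long remainder = itemId % 64;

        /* Integer division already discards the remainder, so
           itemId / 64 equals (itemId - remainder) / 64. */
        itemId = itemId / 64;

        /* Prepend the character for this base-64 digit. */
        memmove(postId + 1, postId, strlen(postId) + 1);
        postId[0] = alphabet[remainder];
    }

    printf("%s\n", postId);
    return 0;
}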
With all that being said, supplying a mediaId of "45381714_262040461144618_1442077673155810739_n.jpg" to our new ASObjC handler gives this result:
"CtHhS"
rather than what you stated as the expected result, namely "BqvS62JHYH3". However, it's easy to see why. Both scripts split the mediaId into components ("text items") at every occurrence of an underscore. Then only the first of these goes on to be used by either script to determine the postId. With the given mediaId above, the first text item is "45381714", which is far too short to be valid for our needs, hence the short length of the erroneous result above. The second text item is only 15 digits (characters) long, so it, too, is not viable. The third text item is 19 characters long, which is of the correct length.
Therefore, I replaced firstObject() in the script with item 3. As you can guess, instead of retrieving the first item from the array of text items (components) stored in myArray, it retrieves the third, namely "1442077673155810739". This produces the following result:
"BQDSgDW-VYA"
Similar, but not identical to what you were expecting.
For now, I'll leave this with you. At this point, I would usually have compared this with your own previous attempts, but you said they were "worthless", so I'm assuming that this at least provides you with a piece of translated code that works insofar as it performs the same operations as its Objective-C counterpart. If you tell us the nature of the actual hurdles you were facing, that may let me or someone else help further.
But since I can say with confidence that these two scripts are doing the same thing, if the original is producing a different output from identical input, that tells us the data must be mutating at some point during its processing. Given that we are dealing with a number on the order of 10¹⁹, I think it's very likely that the error is a result of floating-point precision. AppleScript stores any integer with absolute value up to and including 536870911 as type class integer, and anything exceeding this as type class real (floating point), so large values like these are subject to floating-point rounding errors.
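A quick plain-C illustration of that precision limit, using the same 19-digit component:

#include <stdio.h>

int main(void) {
    long long exact = 1442077673155810739LL;

    /* A double has a 53-bit mantissa (roughly 16 decimal digits), so a
       19-digit odd integer cannot be stored exactly; the low digits change. */
    double asReal = (double)exact;

    printf("exact:   %lld\n", exact);
    printf("rounded: %.0f\n", asReal);

    return 0;
}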

What does 'Implicit conversion loses integer precision: 'time_t'' mean in Objective-C and how do I fix it?

I'm doing an exercise from a textbook and the book is outdated, so I'm sort of figuring out how it fits into the new system as I go along. I've got the exact text, and it's returning
'Implicit conversion loses integer precision: 'time_t' (aka 'long') to 'unsigned int''.
The book is "Cocoa Programming for Mac OS X" by Aaron Hillegass, third edition and the code is:
#import "Foo.h"
#implementation Foo
-(IBAction)generate:(id)sender
{
// Generate a number between 1 and 100 inclusive
int generated;
generated = (random() % 100) + 1;
NSLog(#"generated = %d", generated);
// Ask the text field to change what it is displaying
[textField setIntValue:generated];
}
- (IBAction)seed:(id)sender
{
// Seed the randm number generator with time
srandom(time(NULL));
[textField setStringValue:#"Generator Seeded"];
}
#end
It's on the srandom(time(NULL)); line.
If I replace time with time_t, it comes up with another error message:
Unexpected type name 'time_t': unexpected expression.
I don't have a clue what either of them mean. A question I read with the same error was apparently something to do with 64- and 32- bit integers but, heh, I don't know what that means either. Or how to fix it.
Well you really need to do some more reading so you understand what these things mean, but here are a few pointers.
When you (as in a human) count, you normally use decimal numbers. In decimal you have 10 digits, 0 through 9. If you think of a counter, like on an electric meter or a car odometer, it has a fixed number of digits. So you might have a counter which can read from 000000 to 999999; this is a six-digit counter.
A computer represents numbers in binary, which has two digits, 0 and 1. A Binary digIT is called a BIT. So thinking about the counter example above, a 32-bit number has 32 binary digits, a 64-bit one 64 binary digits.
Now if you have a 64-bit number and chop off the top 32-bits you may change its value - if the value was just 1 then it will still be 1, but if it takes more than 32 bits then the result will be a different number - just as truncating the decimal 9001 to 01 changes the value.
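A minimal C sketch of exactly that truncation (the value is arbitrary, just big enough to need more than 32 bits):

#include <stdio.h>

int main(void) {
    long long big = 4294967296LL + 42;        /* 2^32 + 42: needs more than 32 bits */
    unsigned int truncated = (unsigned int)big;

    /* The top 32 bits are discarded, so the value changes -
       just like truncating decimal 9001 to 01. */
    printf("big:       %lld\n", big);         /* 4294967338 */
    printf("truncated: %u\n",   truncated);   /* 42 */

    return 0;
}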
Your error:
Implicit conversion loses integer precision: 'time_t' (aka 'long') to 'unsigned int'
Is saying you are doing just this: truncating a large number - long is a 64-bit signed integer type on your computer (not on every computer) - to a smaller one - unsigned int, a 32-bit unsigned (no negative values) integer type on your computer.
In your case the loss of precision doesn't really matter as you are using the number in the statement:
srandom(time(NULL));
This line is setting the "seed" - a random number used to make sure each run of your program gets different random numbers. It is using the time as the seed, and truncating it won't make any difference - it will still be a random value. You can silence the warning by making the conversion explicit with a cast:
srandom((unsigned int)time(NULL));
But remember: if the value of an expression is important, such casts can produce mathematically incorrect results unless the value is known to be in range of the target type.
Now go read some more!
HTH
It's just a warning. You are assigning a 'long' to an 'unsigned int'.
The solution is simple. Just click the yellow warning icon in the left gutter of the particular line where you are assigning that value; it will show a suggested fix. Double-click the fix and Xcode will apply it automatically.
It will insert a cast so the types in the expression match. But next time, try to keep in mind that the types you are assigning should be the same. Hope this helps.

RenderScript Variable types and Element types, simple example

I clearly see the need to deepen my knowledge of RenderScript memory allocation and data types. I'm still confused by the sheer number of data types and by finding the correct corresponding types on either side - allocations and elements - or by when to refer the forEach to the input, to the output, or to both, etc. Therefore I will read and re-read the documentation, which is really not bad - but it needs some time to build the necessary "intuition" for how to use it correctly. But for now, please help me with this basic one (and I will return later with hopefully less stupid questions...). I need a very simple kernel that takes an ARGB color Bitmap and returns an integer array of gray values. My attempt was the following:
#pragma version(1)
#pragma rs java_package_name(com.example.xxxx)
#pragma rs_fp_relaxed

uint __attribute__((kernel)) grauInt(uchar4 in) {
    uint gr = (uint)(0.2125*in.r + 0.7154*in.g + 0.0721*in.b);
    return gr;
}
and Java side:
int[] data1 = new int[width*height];
ScriptC_gray graysc;
graysc = new ScriptC_gray(rs);

Type.Builder TypeOut = new Type.Builder(rs, Element.U8(rs));
TypeOut.setX(width).setY(height);
Allocation outAlloc = Allocation.createTyped(rs, TypeOut.create());

Allocation inAlloc = Allocation.createFromBitmap(rs, bmpfoto1,
        Allocation.MipmapControl.MIPMAP_NONE, Allocation.USAGE_SCRIPT);

graysc.forEach_grauInt(inAlloc, outAlloc);
outAlloc.copyTo(data1);
This crashed with the message cannot locate symbol "convert_uint". What's wrong with this conversion? Is the code otherwise correct?
UPDATE: isn't that ridiculous? I can't get this "easy one" to run, even after 2 hours of trying. I still struggle with the different Element and variable types. Let's recap: input is a Bitmap, output is an int[] array. So why doesn't it work when I use U8 in the Java-side out-allocation, createFromBitmap in the Java-side in-allocation, uchar4 as the kernel input and uint as the kernel output (RSRuntimeException: Type mismatch with U32)?
There is no convert_uint() function. How about simple casting? Other than that, the code looks alright (assuming width and height have correct values).
UPDATE: I have just noticed that you allocate Element.I32 (i.e. signed integer type), but return uint from the kernel. These should match. And in any case, unless you need more than 8-bit precision, you should be able to fit your result in U8.
UPDATE: If you are changing the output type, make sure you change it in all places, e.g. if the kernel returns a uint, the allocation should use U32. If the kernel returns a char, the allocation should use I8. And so on...
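As an illustrative sketch (not your exact project) of what changing it "in all places" can look like: a kernel that returns uchar, paired on the Java side with a Type built from Element.U8(rs) and read back with copyTo(byte[]) instead of an int[]:

#pragma version(1)
#pragma rs java_package_name(com.example.xxxx)
#pragma rs_fp_relaxed

// Returns uchar, so the output Allocation should be created with
// Element.U8(rs) and the result copied into a byte[].
uchar __attribute__((kernel)) grayU8(uchar4 in) {
    float gray = 0.2125f * in.r + 0.7154f * in.g + 0.0721f * in.b;
    return (uchar)gray;
}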
You can't use a Uint[] directly because the input Bitmap is actually 2-dimensional. Can you create the output Allocation with a proper width/height and try that? You should still be able to extract the values into a Java array when you are finished.

Objective-C: Float addition error when returning to view

I've got an NSDictionary made up of titles and floats (although they're stored as strings, for what it's worth), along the lines of "Paid for Dinner":"15.00". Right now, those entries are (15, 25.25, 25.75, 15), which should add up to 81. (And I've checked in the debugger, those are the correct values being stored, so it isn't a data source problem.)
I want to get the sum of all the entries programmatically, so I've got a fairly simple bit of code to do that:
float currentTotal;
for (id key in thisSet) {
    currentTotal = currentTotal + [[thisSet objectForKey:key] floatValue];
}
By the end of the function, currentTotal is correctly set at 81.
Thing is, when I leave that ViewController and then return to it (by going from the MasterView, where I was, to the DetailView and then back, if it matters), the same function with the same values will return 81.006.
The values didn't change (I checked the debugger again, it's still precisely (15, 25.25, 25.75, 15)) and the code didn't change, so why would simply moving from to another View and back change the result?
NOTE: I know about floating point addition errors and such from other answers like this and this. I'm not looking for why floating point operations can be imprecise, I'm wondering why a change in View would affect the results.
The local variable currentTotal must be assigned a value (zero in your case) before it is used; otherwise its contents are undefined and may be different on each invocation of the function. (The Static Analyzer should warn you about this.)
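A minimal sketch of the fix in plain C (the four values stand in for the dictionary entries):

#include <stdio.h>

int main(void) {
    float values[] = { 15.0f, 25.25f, 25.75f, 15.0f };

    /* Initialize the accumulator. An uninitialized local starts out holding
       whatever happens to be on the stack, which is why the sum looked fine
       on one visit to the view and wrong on the next. */
    float currentTotal = 0.0f;

    for (int i = 0; i < 4; i++) {
        currentTotal += values[i];
    }

    printf("%.3f\n", currentTotal);   /* 81.000 */
    return 0;
}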

sprintf fails spontaneously depending on what printf and NSLog calls there are

Hello, I have a bizarre problem with sprintf. Here's my code:
void draw_number(int number, int height, int xpos, int ypos) {
    char string_buffer[5]; // 5000 is the maximum score, hence 4 characters plus null character equals 5
    printf("Number - %i\n", number);
    sprintf(string_buffer, "%i", number); // Get string
    printf("String - %s\n", string_buffer);
    int y_down = ypos + height;
    for (int x = 0; x < 5; x++) {
        char character = string_buffer[x];
        if (character == NULL) { // Blank characters occur at the end of the number from sprintf. Testing with NULL works
            break;
        }
        int x_left = xpos + height*x;
        int x_right = x_left + height;
        GLfloat vertices[] = {x_left, ypos, x_right, ypos, x_left, y_down, x_right, y_down};
        rectangle2d(vertices, number_textures[atoi(strcat(&character, "\0"))], full_texture_texcoords);
    }
}
With the printf calls there, the numbers are printed successfully and the numbers are drawn as expected. When I take them away, I can't view the output and compare it, of course, but the numbers aren't rendering correctly. I assume sprintf breaks somehow.
This also happens with NSLog. Adding NSLog calls anywhere in the program can either break or fix the function.
What on earth is going on?
This is using Objective-C with the iOS 4 SDK.
Thank you for any answer.
Well, this bit of code is definitely odd
char character = string_buffer[x];
...
... strcat(&character,"\0") ...
Originally I was thinking that, depending on whether there happens to be a NUL terminator on the stack, this would clobber some piece of memory and could be causing your problems. However, since you're appending the empty string, I don't think it will have any effect.
Perhaps the contents of the stack actually contain numbers that atoi is interpreting? Either way, I suggest you fix that and see if it solves your issue.
As to how to fix it, Georg Fritzsche beat me to it.
With strcat(&character,"\0") you are trying to use a single character as a character array. This will probably result in atoi() returning completely different values from what you're expecting (as you have no null-termination) or simply crash.
To fix the original approach, you could use a proper zero-terminated string:
char number[] = { string_buffer[x], '\0' };
// ...
... number_textures[atoi(number)] ...
But even easier would be to simply use the following:
... number_textures[character - '0'] ...
Don't use NULL to compare against a character; use '\0', since it's a character you're looking for. Also, your code comment sounds surprised - of course a '\0' will occur at the end of the string; that is how C terminates strings.
If your number is ever larger than 9999, you will have a buffer overflow, which can cause unpredictable effects.
When you have that kind of problem, instantly think stack or heap corruption. You should dynamically allocate your buffer with enough size - having it as a fixed size is BEGGING for this kind of trouble. You don't check that the number is within the maximum, so if you ever had another bug that pushed it above the maximum, you'd get this problem here.
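For completeness, a small plain-C sketch of that part of the function with both points applied: the write is bounded with snprintf and the loop stops on '\0'. The function name draw_number_digits and the printf stand in for the original texture-rendering code:

#include <stdio.h>

void draw_number_digits(int number) {
    char string_buffer[5];   /* 4 digits + terminating '\0' */

    /* snprintf never writes past the buffer: a number larger than 9999
       is truncated instead of corrupting the stack. */
    snprintf(string_buffer, sizeof string_buffer, "%i", number);

    for (int x = 0; x < 5; x++) {
        char character = string_buffer[x];
        if (character == '\0') {          /* end of the string */
            break;
        }
        int digit = character - '0';      /* would index number_textures[digit] */
        printf("digit %d: %d\n", x, digit);
    }
}

int main(void) {
    draw_number_digits(50000);   /* 5 digits: safely truncated to "5000" */
    return 0;
}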