I have been programming in Objective-C for only a few weeks. My experience with languages such as BASIC, Visual Basic, C++, and PHP is much more extensive, starting back in 1987 and continuing to today, although for the last 5 years I have coded exclusively in PHP.
Today, I find myself confused by what I perceive to be bit conversion errors within the Objective-C language. I first noticed this the other day when trying to divide an integer (84) converted to a float by a float (10.0). This produced 8.399999, instead of the 8.400 I was hoping for. I coded a way around the issue and moved on.
Today, I am extracting an (int) 0 from an NSMutableDictionary. I store it first in an NSInteger and then in an int variable. The value should be 0 in both cases, but in both cases I get the integer value 151229568. (See screenshot)
I remember from my early programming years that we had to worry about the size of the container, because pointing to a block of memory with a 32-bit pointer to access a 4-bit value resulted in capturing data associated with other values, and thus in what appeared to be the wrong number. With implicit memory management and type conversions becoming the norm, I have not had to worry about this kind of issue for years, and now that I am confronted with it again, I need advice and clarification from programmers who are more familiar with this topic in today's programming environments.
Questions:
Is this a case of incorrect pointer sizing or something else?
What is happening on the back-end to produce this conversion from 0 to another number?
What can I do to get better precision and accuracy from my Objective-C calculations and variable assignments?
Code:
NSInteger hsibs = [keyData objectForKey:@"half_sibs"];
int hsibsi = [keyData objectForKey:@"half_sibs"];
//breakpoint and screen capture of variables in stack
I don't know Objective-C all that well, but it looks like the method you use to obtain your data returns a value of type id (see this reference), not an int.
It looks like you either need to cast it or extract the integer value like this:
NSInteger hsibs = [[keyData objectForKey:@"half_sibs"] integerValue];
int hsibsi = [[keyData objectForKey:@"half_sibs"] intValue];
and then see if you get the expected results.
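If it helps to see the whole round trip, here is a minimal sketch (the dictionary contents and variable names are mine, modeled on the posted code) showing why the raw assignment produces a pointer value and how unboxing fixes it:

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        NSMutableDictionary *keyData = [NSMutableDictionary dictionary];
        keyData[@"half_sibs"] = @0;   // numbers in a dictionary are boxed as NSNumber objects

        // Wrong: this stores the NSNumber pointer itself in the integer variable,
        // which is where a value like 151229568 comes from.
        NSInteger wrong = (NSInteger)[keyData objectForKey:@"half_sibs"];

        // Right: unbox the NSNumber first.
        NSInteger hsibs = [[keyData objectForKey:@"half_sibs"] integerValue];
        int hsibsi = [[keyData objectForKey:@"half_sibs"] intValue];
        NSLog(@"wrong = %ld, hsibs = %ld, hsibsi = %d", (long)wrong, (long)hsibs, hsibsi);
    }
    return 0;
}

The large number you saw is simply the address of the boxed NSNumber object reinterpreted as an integer, not a precision or pointer-sizing problem.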
So I have this Objective-C code that does something I had been trying to wrap my head around with plain AppleScript, and had also tried and failed to do with some Python. I'd post the AppleScript I have already tried, but it is essentially worthless, so I am turning to the AppleScript/ASObjC gurus here for help with a solution. The code reverse-engineers an Instagram media ID into a post ID (so if you have a photo that you know is from IG, you can find the post ID for that photo).
-(NSString *) getInstagramPostId:(NSString *)mediaId {
    NSString *postId = @"";
    @try {
        // The numeric component before the first underscore is the item ID.
        NSArray *myArray = [mediaId componentsSeparatedByString:@"_"];
        NSString *longValue = [NSString stringWithFormat:@"%@", myArray[0]];
        long itemId = [longValue longLongValue];
        NSString *alphabet = @"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";
        // Repeatedly take the item ID modulo 64 and map each remainder onto the
        // alphabet, building the post ID from right to left.
        while (itemId > 0) {
            long remainder = (itemId % 64);
            itemId = (itemId - remainder) / 64;
            unsigned char charToUse = [alphabet characterAtIndex:(int)remainder];
            postId = [NSString stringWithFormat:@"%c%@", charToUse, postId];
        }
    } @catch (NSException *exception) {
        NSLog(@"%@", exception);
    }
    return postId;
}
The code above comes from an answer on another SO question, which can be found here:
Link
I realize it is probably asking a lot, but I suck at math, so I don't really "get" this code, which is probably why I can't translate it to some form of AppleScript myself! Hopefully I will learn something in this process.
Here is an example of the media ID the code is looking for:
45381714_262040461144618_1442077673155810739_n.jpg
And here is the post ID that the code above is supposed to translate into
BqvS62JHYH3
A lot of the research that went into these "calculators" comes from this post from 5 years ago. It looks like the 18-digit-to-10-digit ratio that they point out in the post is now an 11-to-19 ratio. I tried to test the code in Xcode but got a build error when I attempted to run it. Given that I am an Xcode n00b, that is not surprising.
Thanks for your help with this!
Here's an (almost) "word-for-word" translation of your Objective-C code into ASObjC:
use framework "Foundation"
use scripting additions

on InstagramPostIdFromMediaId:mediaId
    local mediaId
    set postId to ""
    set mediaId to my (NSString's stringWithString:mediaId)
    set myArray to mediaId's componentsSeparatedByString:"_"
    set longValue to my NSString's stringWithFormat_("%@", myArray's firstObject())
    set itemId to longValue's longLongValue()
    set alphabet to my (NSString's stringWithString:(("ABCDEFGHIJKLMNOPQRSTUVWXYZ" & ¬
        "abcdefghijklmnopqrstuvwxyz0123456789-_")))
    repeat while (itemId > 0)
        set remainder to itemId mod 64
        set itemId to itemId div 64
        set unichar to (alphabet's characterAtIndex:remainder) as small integer
        set postId to character id unichar & postId
    end repeat
    return postId
end InstagramPostIdFromMediaId:
By "almost", I mean that every Objective-C method utilised in the original script has been utilised by an equivalent call to the same Objective-C method by way of the ASObjC bridge, with two exceptions. I also made a trivial edit of a mathematical nature to one of the lines. Therefore, in total, I made three operational changes, two of these technically being functional changes but which end up to yielding identical results:
to replace (itemId - remainder) / 64 with itemId div 64
The AppleScript div command performs integer division, in which the result of regular division is truncated to remove everything after the decimal point. This is mathematically identical to subtracting the remainder from itemId before performing a regular division (see the short sketch after this list).
to avoid the instance where stringWithFormat: is used to translate a unicode character index to a string representation
NSString objects store strings as a series of UTF-16 code units, and characterAtIndex: will retrieve a particular code unit from a string, e.g. 0x0041, which refers to the character "A". stringWithFormat: uses the %c format specifier to translate an 8-bit unsigned integer (i.e. one in the range 0x0000 to 0x00FF) into its character value. AppleScript bungles this up, although I'm uncertain exactly how or why it presents a problem. Unwrapping the value returned by characterAtIndex: yields an opaque raw AppleScript data object that, for example, looks like «data ushr4100». This can happily be coerced into a small integer type, correctly returning the number 65 in denary. Therefore, whatever goes wrong is likely something stringWithFormat: is doing, so I used AppleScript's character id ... function to perform the same operation that stringWithFormat: was intended to do.
myArray[0] was replaced with myArray's firstObject()
Both of these are used in Objective-C to retrieve the first element of an array. myArray[0] is the familiar C subscript syntax that can happily be used in native Objective-C programming, but it is not available to AppleScript. firstObject is an Objective-C method wrapping the underlying access and making it usable in any Objective-C context, while likely also performing some additional checks that make it safe to use without too much thought. As far as we're concerned in the AppleScript context, the result is identical.
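As a sanity check on the first two substitutions, here is a small Objective-C sketch (the test value and variable names are mine) showing that truncating integer division matches the subtract-then-divide form, and that characterAtIndex: simply hands back a numeric UTF-16 code unit:

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        long long itemId = 1442077673155810739LL;
        long long remainder = itemId % 64;

        // Integer division in C truncates, so both forms yield the same quotient
        // for non-negative values -- exactly what AppleScript's div does.
        NSLog(@"%lld %lld", (itemId - remainder) / 64, itemId / 64);

        // characterAtIndex: returns a UTF-16 code unit; for this ASCII-only alphabet
        // it is always below 128, so %c renders it as the expected character.
        NSString *alphabet = @"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";
        unichar code = [alphabet characterAtIndex:(NSUInteger)remainder];
        NSLog(@"code unit %u -> %c", (unsigned)code, (char)code);
    }
    return 0;
}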
With all that being said, supplying a mediaId of "45381714_262040461144618_1442077673155810739_n.jpg" to our new ASObjC handler gives this result:
"CtHhS"
rather than what you stated as the expected result, namely "BqvS62JHYH3". However, it's easy to see why. Both scripts are splitting the mediaId into components ("text items") at every occurrence of an underscore. Then only the first of these goes on to be used by either script to determine the postId. With the given mediaId above, the first text item is "45381714", which is far too short to be valid for our needs, hence the short length of the erroneous result above. The second text item is only 15 digits (characters) long so, too, is not viable. The third text item is 19 characters long, which is of the correct length.
Therefore, I replaced firstObject() in the script with item 3. As you can guess, instead of retrieving the first item from the array of text items (components) stored in myArray, it retrieves the third, namely "1442077673155810739". This produces the following result:
"BQDSgDW-VYA"
Similar, but not identical to what you were expecting.
For now, I'll leave this with you. At this point, I would usually have compared this with your own previous attempts, but you said they were "worthless", so I'm assuming that this at least provides you with a piece of translated code that works in so far as it performs the same operations as its Objective-C counterpart. If you tell us the nature of the actual hurdles you were facing, that potentially lets me or someone else help further.
But since I can say with confidence that these two scripts are doing the same thing, then if the original is producing a different output for identical input, that tells us the data must be mutating at some point during its processing. Given that we are dealing with a number on the order of 10¹⁹, I think it's very likely that the error is a result of floating-point precision. AppleScript stores any integer with absolute value up to and including 536870911 as the class integer, and anything exceeding this as the class real (floating point), so large values will be subject to floating-point errors.
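If you want to verify that precision claim from the Objective-C side, a couple of lines will do (my own test, reusing the third component of the sample media ID): a C double has a 53-bit mantissa, so a 19-digit ID cannot survive a round trip through floating point.

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        long long exact = 1442077673155810739LL;         // fits exactly in a 64-bit integer
        double approx = (double)exact;                    // roughly what AppleScript stores for large numbers
        NSLog(@"as long long: %lld", exact);
        NSLog(@"via double:   %lld", (long long)approx);  // a nearby, but different, integer
    }
    return 0;
}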
The general standard appears to be to use NS_ENUM with NSInteger as the base type. Why is this the case? Assuming fewer than 256 cases (which covers almost any enumeration), is there any reason to use that instead of uint8_t, which would use less memory? Either imports into Swift fine.
This is different from NS_OPTIONS, where a larger type makes sense, since you shouldn't be doing any bit math with enumerations, and you can use every number representable by the base type as a value.
The answer to the question in the title:
Is there any reason to use NSInteger instead of uint8_t with NS_ENUM?
is probably not.
When declaring an enum in C, if no underlying type is specified, the compiler is free to choose any suitable type from char and the signed and unsigned integer types which can at least represent all the values required. The current Xcode/Clang compiler picks a 4-byte integer. One could reasonably assume the compiler writers made an informed choice - some balance of performance and storage.
Smaller types, such as uint8_t, will usually be aligned on smaller boundaries in memory (or on disc) - but that is only of benefit if the adjacent field matches the alignment, e.g. if a 2-byte field follows a 1-byte field then, unless otherwise specified (e.g. with #pragma pack), there will probably be an intervening unused padding byte.
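As a rough sketch of how that padding plays out (the struct and field names here are invented for illustration):

#include <stdio.h>
#include <stdint.h>

// A 1-byte field followed by a 2-byte field gets a padding byte inserted so the
// 2-byte member stays on a 2-byte boundary.
struct SmallFirst { uint8_t mode; uint16_t size; };   // 1 + (1 pad) + 2 = 4 bytes
struct WideFirst  { uint32_t mode; uint16_t size; };  // 4 + 2 + (2 pad) = 8 bytes

int main(void) {
    printf("%zu %zu\n", sizeof(struct SmallFirst), sizeof(struct WideFirst));
    // Typically prints "4 8": the smaller type still saves space overall,
    // but alignment padding eats into the saving.
    return 0;
}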
Whether any performance or storage differences are significant will be heavily dependent on the application. Follow the usual rule of thumb - don't optimise until an issue is found.
However, if you find semantic benefit in limiting the size then certainly do so - there is no general reason you shouldn't. The choice is similar to picking signed vs. unsigned integers: some programmers avoid unsigned types even for values that will be ≥ 0 unless the extra range is absolutely required, while others appreciate the semantic benefit.
Summary: there is no right answer; it's largely a subjective issue.
HTH
First of all: the memory footprint is close to completely meaningless. You are talking about 1 byte vs. 4/8 bytes (assuming memory alignment does not force the usage of 4/8 bytes regardless of what you chose). How many NS_ENUM (C) values do you expect to have live in your running app?
I guess that the reason is pretty simple: NSInteger is the "catch-all" integer type in Cocoa. That makes assignments easier; in particular, you do not have to care about assigning a bigger integer type to a smaller one, which without casting would lead to warnings.
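A small, hypothetical illustration of that convenience (the enum names are mine, not from any framework):

#import <Foundation/Foundation.h>

typedef NS_ENUM(NSInteger, DemoMode)     { DemoModeAdd, DemoModeEdit };
typedef NS_ENUM(uint8_t,   TinyDemoMode) { TinyDemoModeAdd, TinyDemoModeEdit };

int main(void) {
    NSInteger stored = DemoModeEdit;          // e.g. a value coming back from an NSNumber or a control's tag
    DemoMode mode = stored;                   // same width: no narrowing involved
    TinyDemoMode tiny = (TinyDemoMode)stored; // narrowing: the cast keeps -Wconversion quiet
    NSLog(@"%ld %ld %u", (long)mode, (long)stored, (unsigned)tiny);
    return 0;
}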
Having more than one integer type in a desktop app with a 32/64-bit model is something of an anachronism. Neither a Mac nor a MacBook nor an iPhone is an embedded microcontroller…
You can use any integer data type, including uint8_t, with NS_ENUM, as follows:
typedef NS_ENUM(uint8_t, eEnumAddEditViewMode) {
    eWBEnumAddMode,
    eWBEnumEditMode
};
Following the old C-style convention, NSInteger is the default because NSInteger is the "catch-all" integer type in Objective-C, and developers can easily box and unbox it with their own variables. It is simply a developer-friendly best practice.
I'm doing an exercise from a textbook and the book is outdated, so I'm sort of figuring out how it fits into the new system as I go along. I've got the exact text, and it's returning
'Implicit conversion loses integer precision: 'time_t' (aka 'long') to 'unsigned int''.
The book is "Cocoa Programming for Mac OS X" by Aaron Hillegass, third edition and the code is:
#import "Foo.h"
#implementation Foo
-(IBAction)generate:(id)sender
{
// Generate a number between 1 and 100 inclusive
int generated;
generated = (random() % 100) + 1;
NSLog(#"generated = %d", generated);
// Ask the text field to change what it is displaying
[textField setIntValue:generated];
}
- (IBAction)seed:(id)sender
{
// Seed the randm number generator with time
srandom(time(NULL));
[textField setStringValue:#"Generator Seeded"];
}
#end
It's on the srandom(time(NULL)); line.
If I replace time with time_t, it comes up with another error message:
Unexpected type name 'time_t': unexpected expression.
I don't have a clue what either of them mean. A question I read with the same error was apparently something to do with 64- and 32- bit integers but, heh, I don't know what that means either. Or how to fix it.
I don't have a clue what either of them mean. A question I read with the same error was apparently something to do with 64- and 32- bit integers but, heh, I don't know what that means either. Or how to fix it.
Well you really need to do some more reading so you understand what these things mean, but here are a few pointers.
When you (as in a human) count you normally use decimal numbers. In decimal you have 10 digits, 0 through 9. If you think of a counter, like on an electric meter or a car odometer, it has a fixed number of digits. So you might have a counter which can read from 000000 to 999999, this is a six-digit counter.
A computer represents numbers in binary, which has two digits, 0 and 1. A Binary digIT is called a BIT. So, thinking about the counter example above, a 32-bit number has 32 binary digits and a 64-bit one has 64.
Now if you have a 64-bit number and chop off the top 32 bits you may change its value - if the value was just 1 then it will still be 1, but if it takes more than 32 bits to represent then the result will be a different number - just as truncating the decimal 9001 to 01 changes the value.
Your error:
Implicit conversion loses integer precision: 'time_t' (aka 'long') to 'unsigned int'
is saying you are doing just this: truncating a large number - long is a 64-bit signed integer type on your computer (though not on every computer) - to a smaller one - unsigned int is a 32-bit unsigned (no negative values) integer type on your computer.
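To make the truncation concrete, here is a tiny sketch (the value is invented so that its top 32 bits are non-zero; current time_t values still happen to fit in 32 bits, so they would not yet show a difference):

#include <stdio.h>

int main(void) {
    // A made-up 64-bit value whose upper 32 bits are non-zero.
    long big = 0x1234567890ABCDEF;
    unsigned int chopped = (unsigned int)big;   // keeps only the low 32 bits
    printf("original : %ld (0x%lx)\n", big, big);
    printf("truncated: %u (0x%x)\n", chopped, chopped);
    // On a typical 64-bit system this prints 1311768467294899695 and then 2427178479 (0x90ABCDEF).
    return 0;
}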
In your case the loss of precision doesn't really matter as you are using the number in the statement:
srandom(time(NULL));
This line is setting the "seed" - a random number used to make sure each run of your program gets different random numbers. It is using the time as the seed, and truncating it won't make any difference - it will still be a random value. You can silence the warning by making the conversion explicit with a cast:
srandom((unsigned int)time(NULL));
But remember: if the value of an expression is important, such casts can produce mathematically incorrect results unless the value is known to be within the range of the target type.
Now go read some more!
HTH
It's just a warning. You are assigning a 'long' to an 'unsigned int'.
The solution is simple. Just click the yellow warning icon in the gutter to the left of the particular line where you are assigning that value. It will show a suggested fix; double-click the fix and Xcode will apply it automatically.
It will add a typecast so the types match. But next time, try to keep in mind that the types you are assigning should be the same. Hope this helps.
This is a follow-up to my previous question:
What are the digits in an ObjC method type encoding string?
Say there is an encoding:
v24@0:4:8@12B16@20
How are those numbers calculated? B is a char so it should occupy just 1 byte (not 4 bytes). Does it have something to do with "alignment"? What is the size of void?
Is it correct to calculate the numbers as follows? Ask sizeof on every item and round up the result to multiple of 4? And the first number becomes the sum of all the other ones?
The numbers were used in the m68K days to denote stack layout. That is, you could literally decode the method signature and, for just about all types, know exactly which bytes at what offset within the stack frame you could diddle to get/set arguments.
This worked because the m68K's ABI was [IIRC - it has been a long, long time] entirely stack-based in its argument/return passing. Nothing was shoved into registers across call boundaries.
However, as Objective-C was ported to other platforms, always-on-the-stack was no longer the calling convention. Arguments and return values are often passed in registers.
Thus, those offsets are now useless. As well, the type encoding used by the compiler is no longer complete (because it never was terribly useful) and there are types that won't be encoded. Not to mention that encoding some C++ templatized types yields method type encoding strings that can be many kilobytes in size (I think the record I ran into was around 30K of type information).
So, no, it isn't correct to use sizeof() to generate the numbers because they are effectively meaningless to everything. The only reason why they still exist is for binary compatibility; there are bits of esoteric code here and there that still parse the type encoding string with the expectation that there will be random numbers sprinkled here and there.
Note that there are vestiges of API in the ObjC runtime that still lead one to believe that it might be possible to encode/decode stack frames on the fly. It really isn't as the C ABI doesn't guarantee that argument registers will be preserved across call boundaries in the face of optimization. You'd have to drop to assembly and things get ugly really really fast (>shudder<).
The full encoding string is constructed (in clang) by the method ASTContext::getObjCEncodingForMethodDecl, which you can find in lib/AST/ASTContext.cpp.
The method that does the size rounding is ASTContext::getObjCEncodingTypeSize, in the same file. It forces each size to be at least the size of an int. On all of Apple's current platforms, an int is 4 bytes.
The stack frame size and argument offsets are calculated by the compiler. I'm actually trying to track this down in the Clang source myself this week; it possibly has something to do with CodeGenTypes::arrangeObjCMessageSendSignature. (Looks like Rob just made my life a lot easier!)
The first number is the sum of the others, yes -- it's the total space occupied by the arguments. To get the size of the type represented by an ObjC type encoding in your code, you should use NSGetSizeAndAlignment().
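For example, a short sketch (my own, using Foundation and the Objective-C runtime) of both halves of that: sizing a single encoded type with NSGetSizeAndAlignment(), and dumping a method's full encoding string, offsets and all:

#import <Foundation/Foundation.h>
#import <objc/runtime.h>

int main(void) {
    @autoreleasepool {
        // Size and alignment of one encoded type.
        NSUInteger size = 0, align = 0;
        NSGetSizeAndAlignment(@encode(BOOL), &size, &align);
        NSLog(@"BOOL encodes as '%s', size %lu, alignment %lu",
              @encode(BOOL), (unsigned long)size, (unsigned long)align);

        // The historical method encoding string, with the embedded offsets.
        Method m = class_getInstanceMethod([NSString class], @selector(isEqualToString:));
        NSLog(@"isEqualToString: encoding = %s", method_getTypeEncoding(m));
    }
    return 0;
}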
Reading Objective-C type encodings documentation (GCC's and Apple's pages somewhat complement each other), I stumbled upon the _Complex keyword. I've never heard about it, and when I tried to look it up, I found tons of results talking about erroneous uses of it, but never what it really did.
What is _Complex, and how does it work?
A complex number type: the object holds a real part and an imaginary part, each of the named base type, so half of its storage is the real part and half the imaginary part. Quoting the GCC documentation:
_Complex double x; declares x as a variable whose real part and imaginary part are both of type double.
_Complex short int y; declares y to have real and imaginary parts of type short int; this is not likely to be useful, but it shows that the set of complex types is complete.
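A minimal C99 sketch of _Complex in use (compiles with clang; <complex.h> is only needed for the I macro and the creal/cimag/cabs helpers, not for the keyword itself):

#include <complex.h>
#include <stdio.h>

int main(void) {
    _Complex double z = 3.0 + 4.0 * I;   // real part 3.0, imaginary part 4.0
    printf("sizeof(_Complex double) = %zu (two doubles)\n", sizeof z);
    printf("re = %.1f, im = %.1f, |z| = %.1f\n", creal(z), cimag(z), cabs(z));
    return 0;
}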
Posts about "EXC_BAD_ACCESS _Complex double return"
http://hintsforums.macworld.com/showthread.php?t=92768
http://developer.apple.com/library/mac/#documentation/DeveloperTools/gcc-4.0.1/gcc/Complex.html
Complex numbers.