Xcode 9 / iOS 11: data pointers inside a struct are lost - objective-c

In my application, I have a struct that contains all of my data. It works well with previous versions of Xcode, but when I build it with Xcode 9 beta 5 for an iOS 11 device, some of the data inside is lost.
That happens when I pass the struct as a function parameter to another object.
My struct looks like this:
typedef struct {
    NSString *title;
    MainController *mainController;
    //....
    //....
    //more than 200 objects
} mystruct;
After the data is lost, trying to access it gives EXC_BAD_ACCESS and the application crashes.
Is there any size limit for structs in Xcode 9?
UPDATE 1: added the code that creates and passes the struct
//declare struct
mystruct m;
memset(&m, 0, sizeof(mystruct));
// setting data for struct
....
...
...
// passing param
[anotherObj showData:&m];
The method in the other object that reads the value:
- (void)showData:(mystruct *)ms
{
    // get data of struct
    [self showText:ms->title];
}
NOTE: It only happens with Xcode 9 and iOS 11:
Xcode 9 + iOS 11 -> Error
Xcode 9 + iOS 10 -> OK
Xcode 8 + iOS 11 -> OK
Xcode 8 + iOS 10 -> OK
Xcode 7 + iOS 11 -> OK
Xcode 7 + iOS 10 -> OK

I solved it myself. The reason is that when the app is built with Xcode 9, it is forced to build with the 64-bit compiler.
There is a warning in the Apple documentation, "Be Careful When Aligning 64-Bit Integer Types", but I didn't notice it.
Link: Apple doc about 64-bit alignment
Then I just added the pragma as the documentation shows, and it worked well:
#pragma pack(4)
struct bar {
    int32_t foo0;
    int32_t foo1;
    int32_t foo2;
    int64_t bar;
};
#pragma options align=reset
Another way is to reorder the members so that those with the largest alignment requirements come first and the smallest last, which avoids the padding bytes; see the sketch below.
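A minimal sketch of that reordering idea, with illustrative field names (not the fields from my struct); the sizes assume natural alignment on a 64-bit build:
// Natural alignment: 1 + 7 (padding) + 8 + 4 + 4 (tail padding) = 24 bytes
struct padded {
    char    flag;
    int64_t big;
    int32_t count;
};
// Largest members first: 8 + 4 + 1 + 3 (tail padding) = 16 bytes
struct reordered {
    int64_t big;
    int32_t count;
    char    flag;
};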
@Amin Negm-Awad: Thanks for your help

Related

Determine macOS 10.15 or older [duplicate]

I am using the code below to check the OS X version at runtime.
if (floor(NSAppKitVersionNumber) <= NSAppKitVersionNumber10_10)
{
    /* On a 10.10.x or earlier system */
}
But this condition returns false on OS X 10.10.4. I am using Xcode 6.3.2.
According to the AppKit Release Notes for OS X v10.11, it should work:
if (floor(NSAppKitVersionNumber) <= NSAppKitVersionNumber10_9) {
    /* On a 10.9.x or earlier system */
} else if (floor(NSAppKitVersionNumber) <= NSAppKitVersionNumber10_10) {
    /* On a 10.10 - 10.10.x system */
} else {
    /* 10.11 or later system */
}
Since Xcode 9.0, you can use the code below.
See the Apple documentation for reference.
Swift
if #available(macOS 10.13, *) {
    // macOS 10.13 or later code path
} else {
    // code for earlier than 10.13
}
Objective-C
if (@available(macOS 10.13, *)) {
    // macOS 10.13 or later code path
} else {
    // code for earlier than 10.13
}
So the #define for 10_10 you see there is for 10.10.0.
If you look at older version numbers, you'll see specific #defines for Mac OS 10.7.4 and Mac OS 10.5.3.
What is happening here is that on a 10.10.4 machine (like yours and mine), the AppKit version number on 10.10.4 is greater than the one defined for 10.10.0.
That is, in Swift, I did:
func applicationDidFinishLaunching(aNotification: NSNotification) {
    print("appkit version number is \(NSAppKitVersionNumber)")
}
And I got:
appkit version number is 1348.17
So your code is actually checking for 10.10.0 and older.
If you want to check for all versions of Yosemite and newer, you'll probably want to do something like
#ifdef NSAppKitVersionNumber10_11
if (floor(NSAppKitVersionNumber) < NSAppKitVersionNumber10_11)
{
    /* On a 10.10.x or earlier system */
}
#endif
which will compile once you start building with Xcode 7 (and once Apple gets around to defining the official shipping version/build number for the El Capitan release).
FWIW, the Xcode 7 beta I have includes "NSAppKitVersionNumber10_10_3" in the 10.11 SDK.
If you need to support multiple systems (you can adapt the method's return value):
#import <objc/message.h>
+ (NSString *)systemVersion
{
    static NSString *systemVersion = nil;
    if (!systemVersion) {
        typedef struct {
            NSInteger majorVersion;
            NSInteger minorVersion;
            NSInteger patchVersion;
        } MyOperatingSystemVersion;
        if ([[NSProcessInfo processInfo] respondsToSelector:@selector(operatingSystemVersion)]) {
            MyOperatingSystemVersion version = ((MyOperatingSystemVersion (*)(id, SEL))objc_msgSend_stret)([NSProcessInfo processInfo], @selector(operatingSystemVersion));
            systemVersion = [NSString stringWithFormat:@"Mac OS X %ld.%ld.%ld", (long)version.majorVersion, (long)version.minorVersion, (long)version.patchVersion];
        }
        else {
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wdeprecated-declarations"
            SInt32 versMaj, versMin, versBugFix;
            Gestalt(gestaltSystemVersionMajor, &versMaj);
            Gestalt(gestaltSystemVersionMinor, &versMin);
            Gestalt(gestaltSystemVersionBugFix, &versBugFix);
            systemVersion = [NSString stringWithFormat:@"Mac OS X %d.%d.%d", versMaj, versMin, versBugFix];
#pragma clang diagnostic pop
        }
    }
    return systemVersion;
}
Starting with Yosemite, you can use [NSProcessInfo processInfo].operatingSystemVersion and test the result, which is an NSOperatingSystemVersion struct.
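For example, a minimal sketch of that check (the 10.10 threshold here is just an illustration):
NSOperatingSystemVersion v = [NSProcessInfo processInfo].operatingSystemVersion;
NSLog(@"Running %ld.%ld.%ld", (long)v.majorVersion, (long)v.minorVersion, (long)v.patchVersion);

NSOperatingSystemVersion yosemite = (NSOperatingSystemVersion){10, 10, 0};
if ([[NSProcessInfo processInfo] isOperatingSystemAtLeastVersion:yosemite]) {
    // 10.10 or later
} else {
    // older than 10.10
}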
Update:
Use #define NSAppKitVersionNumber10_10_Max 1349
Old:
From the 10.11 SDK:
#define NSAppKitVersionNumber10_7_2 1138.23
#define NSAppKitVersionNumber10_7_3 1138.32
#define NSAppKitVersionNumber10_7_4 1138.47
#define NSAppKitVersionNumber10_8 1187
#define NSAppKitVersionNumber10_9 1265
#define NSAppKitVersionNumber10_10 1343
#define NSAppKitVersionNumber10_10_2 1344
#define NSAppKitVersionNumber10_10_3 1347
For 10.10.4, it is 1348.0 (from NSLog output).
They increase the decimal part for the 10.10.x constants.
The workaround is to use the CFBundleVersion value from /System/Library/Frameworks/AppKit.framework/Resources/Info.plist on 10.11.
if (NSAppKitVersionNumber < 1391.12)
{
    /* On a 10.10.x or earlier system */
}
NOTE: My OS X 10.11 build version is 15A244a. If someone has a later build, please update the value in the if condition.

How to declare packed struct (without padding) for LLVM?

It's possible to tell GCC it should not use padding for the struct. This is done using __attribute__((packed)).
typedef struct {
    uint8_t startSymbol;
    uint8_t packetType;
    uint32_t deviceId;
    uint16_t packetCRC;
} PacketData __attribute__((packed));
However, the newest Xcode uses LLVM and does not recognize the attribute. How do I define a packed struct for LLVM?
The full description of the problem can be found here.
UPDATE: I'm using Xcode 4.5.1 for iOS, which uses the Apple LLVM 4.1 compiler. I'm getting a "'packed' attribute ignored" warning in Xcode for the code example above.
Did you actually try it? I just tested it on my machine, and __attribute__((packed)) compiled fine using clang.
Edit: I got the same warning ("Warning: packed attribute unused") for
typedef struct {
    int a;
    char c;
} mystruct __attribute__((packed));
and in this case sizeof(mystruct) was 8.
However,
typedef struct __attribute__((packed)) {
    int a;
    char c;
} mystruct;
worked just fine, and sizeof(mystruct) was 5.
Conclusion: it seems that the attribute needs to precede the struct definition in order for this to work.
You can use a preprocessor directive to specify byte alignment for the struct, so that no padding is added by the compiler:
#pragma pack(1)
typedef struct {
    char t1;
    long long t2;
    char t3;
} struct_size_test;
#pragma options align=reset
See the answer to this question on Stack Overflow.
With clang 3.5 on Linux,
typedef struct __attribute__((packed)) thing1 { int blah; } THING_ONE;
worked.
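If you want the build to fail when the packing does not take effect, a compile-time size check is a cheap safeguard. A minimal sketch, assuming a C11 compiler (for _Static_assert) and the field layout from the question:
#include <stdint.h>

typedef struct __attribute__((packed)) {
    uint8_t  startSymbol;
    uint8_t  packetType;
    uint32_t deviceId;
    uint16_t packetCRC;
} PacketData;

/* 1 + 1 + 4 + 2 = 8 bytes only if the attribute actually took effect */
_Static_assert(sizeof(PacketData) == 8, "PacketData is not packed");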

Assigning to array element in Objective C assigns to wrong index

I'm seeing some rather strange behavior as I'm debugging my iPhone project. Among other things, in the header file for a UIViewController descendant, I have this:
@interface OOGameDetailsViewController : UIViewController <OOXMLParserDelegate, OOServerDelegate>
{
    ...
    OOXMLNode *node;
    float elheights[6];
    ...
}
@property (readwrite, retain) OOXMLNode *gameNode;
...
@end
Then in the code, I'm assigning values to this array:
- (void)setNode:(OOXMLNode *)aNode
{
    node = aNode;
    [self updateUserDefaults];
    [background setImage:[gameNode imageFromKey:@"background"]];
    CGFloat maxw = 240.0f;
    NSString *text = [gameNode elementNodeWithKey:@"title"].value;
    CGSize sizeOfText = [text sizeWithFont:[UIFont boldSystemFontOfSize:20.0f]
                         constrainedToSize:CGSizeMake(maxw, CGFLOAT_MAX)
                             lineBreakMode:UILineBreakModeWordWrap];
    ...
    elheights[0] = 10 + sizeOfText.height;
    text = [gameNode elementNodeWithKey:@"age"].value;
    sizeOfText = [text sizeWithFont:[UIFont systemFontOfSize:16.0f]
                  constrainedToSize:CGSizeMake(maxw, CGFLOAT_MAX)
                      lineBreakMode:UILineBreakModeWordWrap];
    ...
    elheights[1] = 10 + sizeOfText.height;
    ...
    elheights[2] = 10 + height;
    ...
    elheights[3] = 10 + separatorHeight;
    text = [gameNode elementNodeWithKey:@"intro"].value;
    sizeOfText = [text sizeWithFont:[UIFont boldSystemFontOfSize:14.0f]
                  constrainedToSize:CGSizeMake(maxw, CGFLOAT_MAX)
                      lineBreakMode:UILineBreakModeWordWrap];
    ...
    elheights[4] = sizeOfText.height;
    ...
    elheights[5] = 20 + btnHeight;
    [mainTable reloadData];
}
The idea is quite simple: I'm assigning 6 floating-point values to a 6-element array, with indices from 0 to 5. However, as I step through this code in the debugger, I consistently see each assignment write to the element with an index one higher than indicated. That is, elheights[0] = ... actually assigns the value to elheights[1], and so on. The last assignment (to the element with index 5) simply does nothing.
To prove that I'm not crazy, here are screenshots of the debugger (crops of the relevant part, to keep the image size to a minimum). The first one is right before the assignment, the second one is right after the assignment.
I cleaned the project and restarted; I closed the simulator, closed Xcode, reopened and restarted: same results. Is there something special about the way Objective-C handles plain C-style arrays?
Thanks in advance,
-Aleks
Xcode 4 uses the new LLDB debugger by default, which does not always work perfectly yet. (For example, the current version does not recognize the structure alignment attribute.) It also has some bugs here and there.
Switch to GDB (click Scheme / Edit Scheme in the toolbar; there is a “Debugger” popup button with two choices) if you're having problems with it.
I've experienced strange debugger behaviour in Xcode when stepping through optimised (i.e. non -O0) code, caused by the compiler re-ordering instructions. Try making sure GCC_OPTIMIZATION_LEVEL is set to what you expect.

Why does backtrace not contain Objective-C symbols regardless of -rdynamic?

Update: I'm working with the GNU runtime on Linux. The problem does not occur on macOS with the Apple runtime.
Update 2: I compiled the GNU runtime on macOS and built the example with it. The error does not occur on macOS with the GNU runtime. I would say the problem is in glibc (since backtrace and backtrace_symbols are glibc extensions).
When printing a backtrace in a GCC-compiled Objective-C app using backtrace and backtrace_symbols, I don't get any Objective-C symbols. Only the filenames, addresses and C symbols appear.
I compiled with -g and linked with -rdynamic.
My test app:
#include <execinfo.h>
#include <stdio.h>
#include <stdlib.h>

void _printTrace()
{
    void *addr[1024];
    int aCount = backtrace(addr, 1024);
    char **frameStrings = backtrace_symbols(addr, aCount);
    for (int i = 0; i < aCount; i++) {
        printf("%s\n", frameStrings[i]);
    }
    free(frameStrings);
}

@interface TheObject
+ (void)_printTrace;
+ (void)printTrace;
@end

@implementation TheObject
+ (void)_printTrace
{
    _printTrace();
}
+ (void)printTrace
{
    [self _printTrace];
}
@end

void printTrace()
{
    [TheObject printTrace];
}

int main(int argc, char **argv)
{
    printTrace();
    return 0;
}
and its output:
./test.bin(_printTrace+0x1f) [0x8048e05]
./test.bin() [0x8048e60]
./test.bin() [0x8048e8b]
./test.bin(printTrace+0x34) [0x8048ec5]
./test.bin(main+0xf) [0x8048eda]
/lib/libc.so.6(__libc_start_main+0xe5) [0xb7643bb5]
./test.bin() [0x8048b51]
Is there a way to let the Objective-C symbols appear in this backtrace?
dladdr() only reports global and weak symbols. But all Objective-C function symbols are local:
$ readelf -s so_backtrace
Symbol table '.dynsym' contains 29 entries:
…
Symbol table '.symtab' contains 121 entries:
Num: Value Size Type Bind Vis Ndx Name
…
49: 08048a01 13 FUNC LOCAL DEFAULT 14 _c_TheObject___printTrace
50: 08048a0e 47 FUNC LOCAL DEFAULT 14 _c_TheObject__printTrace
…
You can verify that local symbols are never returned by looking at the GNU libc source code yourself. backtrace_symbols() is defined in sysdeps/generic/elf/backtracesyms.c. It relies on _dl_addr(), which is defined in elf/dl-addr.c, to provide it with the symbol names. That ultimately calls determine_info(). If it can, it uses the GNU hash table, which does not include local symbols by design:
49 /* We look at all symbol table entries referenced by the hash
50 table. */
…
60 /* The hash table never references local symbols so
61 we can omit that test here. */
If the GNU hash table isn't present, it falls back to the standard hash table. This includes all the symbols, but the determine_info() code filters out all but the global and weak symbols:
90 if ((ELFW(ST_BIND) (symtab->st_info) == STB_GLOBAL
91 || ELFW(ST_BIND) (symtab->st_info) == STB_WEAK)
To symbolicate the Objective-C function addresses, you would have to perform the look-up yourself and not filter out the local function symbols. Further, you would have to demangle the Objective-C function symbols to restore _c_TheObject___printTrace to +[TheObject _printTrace].
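A rough sketch of that demangling step, assuming the GNU-runtime naming scheme seen in the readelf output above (_c_ for class methods, _i_ for instance methods, then Class__selector); since colons in selectors also become underscores, the result is not always unambiguous:
#include <stdio.h>
#include <string.h>

/* Turn "_c_TheObject___printTrace" into "+[TheObject _printTrace]". */
static void demangle_objc(const char *sym, char *out, size_t outlen)
{
    char kind;
    if (strncmp(sym, "_c_", 3) == 0)      kind = '+';
    else if (strncmp(sym, "_i_", 3) == 0) kind = '-';
    else { snprintf(out, outlen, "%s", sym); return; }   /* not an ObjC method symbol */

    const char *sep = strstr(sym + 3, "__");              /* class/selector separator */
    if (!sep) { snprintf(out, outlen, "%s", sym); return; }

    snprintf(out, outlen, "%c[%.*s %s]", kind,
             (int)(sep - (sym + 3)), sym + 3, sep + 2);
}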
GNUstep's NSException implementation doesn't use backtrace; instead, it uses libbfd (binary file descriptor). I think the function that actually does the work is static void find_address, which you can view here. Using this trivial example, I get the results that follow.
#include <Foundation/Foundation.h>
#include <objc/runtime.h>

@interface Test : NSObject {}
+ (void) test;
@end

@implementation Test
+ (void) test
{
    Class GSStackTrace = objc_getClass("GSStackTrace");
    id stack = [GSStackTrace currentStack];
    for (int i = 0; i < [stack frameCount]; i++)
    {
        NSLog(@"%@", [[stack frameAt:i] function]);
    }
}
@end

int main(int argc, char **argv)
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    [Test test];
    [pool release];
    return 0;
}
Output (when compiled with debug symbols):
2010-10-18 14:14:46.188 a.out[29091] +[GSStackTrace currentStack]
2010-10-18 14:14:46.190 a.out[29091] +[Test test]
2010-10-18 14:14:46.190 a.out[29091] main
2010-10-18 14:14:46.190 a.out[29091] __libc_start_main
You may be able to pick apart GSStackTrace. It is a “private” class (that's why I need to use objc_getClass, you'll also get lots of unrecognised selector warnings), but it seems to contain all the code necessary to read Objective-C class names.
Tested on Ubuntu 9.04 with GNUstep configured with --enable-debug (so that GSFunctionInfo is included in the build).
I expect you'll need to ask the Objective-C runtime about the addresses to get symbol information. The addresses returned from backtrace() could probably be passed to something like object_getClass() to get the class, for example. I haven't tried any of this, but it's where I'd look next in this case.

declaring integer array in Objective-C without NSArray

(Question updated after first comment)
int max_size = 20;
int h[max_size];
Debugging gives a value of [-1] for h when using max_size to declare the array.
If instead I use an integer literal, so the code is int h[20], it works fine.
This was with GCC 4.2 on Mac OS X 10.6.
I just compiled and ran the following program incorporating your code:
#import <Foundation/Foundation.h>
int main() {
    int max_size = 20;
    int h[max_size];
    h[0] = 5;
    NSLog(@"It is %d", h[0]);
    return 0;
}
It worked fine. The problem is something besides simply declaring an array.
This was with GCC 4.0.1 on Mac OS X 10.4.
If I recall correctly, some compilers need to know the size of stack-allocated arrays explicitly at compile time. If that is the case here, you could make your max_size variable const or a #define macro (or use an integer literal, as you've done). Alternatively, you could dynamically allocate the array, and then the size can be any old variable,
ex:
int *array = calloc(max_size, sizeof(int));
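A short self-contained sketch of both options (compile-time constant vs. heap allocation); remember to free the heap-allocated array:
#include <stdlib.h>

#define MAX_SIZE 20

int main(void) {
    int h1[MAX_SIZE];                        /* size known at compile time */
    h1[0] = 5;

    int max_size = 20;                       /* size only known at run time */
    int *h2 = calloc(max_size, sizeof(int)); /* heap allocation instead */
    h2[0] = 5;
    free(h2);
    return 0;
}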