Is it possible to prevent POSIX symbol name pollution in Objective-C? - objective-c

I've run into a somewhat unexpected behavior in Xcode/Objective-C. I know it's probably not advised, but if I want to make my own struct in_addr in a .m file, it seems I can't. This implies something rather strange about namespaces and symbol pollution in Objective-C. The same seems to apply for many other networking types and perhaps other POSIX-y things as well.
I came up with a very basic example that demonstrates this behavior. Note that this snippet is the entire contents of the .m file.
#define _SYS_SOCKET_H_
#define _NETINET_IN_H_
#include <stdint.h>
struct in_addr {
    uint32_t foo;
};
which yields the build error Redefinition of 'in_addr'.
This implies some fairly strange things about Objective-C. For starters, I wouldn't expect <stdint.h> to bring in any networking types. But even allowing that it might, defining _NETINET_IN_H_ first should prevent the definition of struct in_addr. And yet even still, this code refuses to build.
Is it possible to somehow forgo this forced symbol visibility? Is there a list of symbols that are included, no matter what? Is there a good reason for this behavior?
Edit: Stranger still, if I remove the #include of <stdint.h> and change uint32_t to int, this actually does compile.

If you go into the Report navigator and read the full error emitted by the clang tool, you'll see a big hint:
In module 'Darwin' imported from /Users/csrstka/Desktop/asdfasdf/asdfasdf/main.m:1:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.13.sdk/usr/include/netinet/in.h:302:12: note: field has name 's_addr' here
in_addr_t s_addr;
As you can see, the existing in_addr comes from the Darwin module, which is implicitly imported because the stdint.h you #include is part of the Darwin module. You can see this if you go to Product > Perform Action > Preprocess in Xcode: instead of copying in all the headers you've included, the preprocessed output contains just one line importing Darwin.C.stdint.
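For illustration, the preprocessed output of the snippet in the question ends up looking roughly like this (a sketch; the exact syntax of the import line varies between clang and Xcode versions):
@import Darwin.C.stdint; /* the entire #include collapses into a single module import */

struct in_addr {
    uint32_t foo;
};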
Basically, modules serve a few purposes: they improve compile times by cutting down on redundant compilation work, and they prevent people from messing with library headers via #defines, as you're trying to do here. ;-) For more on Objective-C modules, how they work, and the rationale behind them, see:
https://clang.llvm.org/docs/Modules.html#introduction
Of particular interest to your question are the following excerpts:
The primary user-level feature of modules is the import operation, which provides access to the API of software libraries. However, today’s programs make extensive use of #include, and it is unrealistic to assume that all of this code will change overnight. Instead, modules automatically translate #include directives into the corresponding module import. For example, the include directive
#include <stdio.h>
will be automatically mapped to an import of the module std.io. Even with specific import syntax in the language, this particular feature is important for both adoption and backward compatibility: automatic translation of #include to import allows an application to get the benefits of modules (for all modules-enabled libraries) without any changes to the application itself. Thus, users can easily use modules with one compiler while falling back to the preprocessor-inclusion mechanism with other compilers.
And later on:
If any submodule of a module is imported into any part of a program, the entire top-level module is considered to be part of the program. As a consequence of this, Clang may diagnose conflicts between an entity declared in an unimported submodule and an entity declared in the current translation unit, and Clang may inline or devirtualize based on knowledge from unimported submodules.
Or, if you'd prefer to turn them off and get more traditional C-like behavior, you can simply set Enable Modules (C and Objective-C) to No in Xcode's Build Settings, or compile without the -fmodules flag if you're using the command line.
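For example, from the command line (a sketch; the exact diagnostics depend on your SDK and clang version):
clang -c main.m              # modules off (the command-line default): plain textual inclusion, the snippet builds
clang -fmodules -c main.m    # modules on: fails with "Redefinition of 'in_addr'"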

Related

dllexport in headers confusion

I'm confused as to why __declspec(dllexport) or its equivalent needs to go in the header file. Say I'm writing a library. Surely the users don't need to know or care whether symbols are exported or not; all they care about is that the function declarations are there and that they will be linked against the shared or static library itself. So why can't all this boilerplate go into source files, for use only at build time?
The only use case I can think of is a situation where someone is writing a wrapper around my library and needs to export all of my functions as well, but in general that is not the case. Is it really worth the hassle of having all the export stuff inside public headers? Is there something I'm missing, or is this a technical limitation of linkers?
I'm asking because I like my headers and build system to be clean, and since dllexport is generally set or not set based on whether we are building the library as a shared or static library, I find it strange that it should end up inside public headers when it's (to my understanding) fundamentally a build-time concept. So can someone please enlighten me on what I am missing?
I'm not really sure I can provide a great answer. My impression is that it serves several purposes:
It speeds the loading of DLLs, particularly when lots of DLLs are used (because there are fewer exported symbols to search through)
It reduces the possibility of symbols colliding at run-time (because there are fewer exported symbols)
It allows the linker to complain about undefined symbols (instead of just assuming it might find them at run time).
I'm sure there are other reasons. I generally wrap my APIs in something like this:
#if defined(MY_LIB_CREATION)
#define MY_LIB_API __declspec(dllexport)
#else
#define MY_LIB_API __declspec(dllimport)
#endif
And then all of my API functions & classes are marked with MY_LIB_API:
class MY_LIB_API Foo {};
MY_LIB_API void bar();
And then in the project file, define MY_LIB_CREATION for the project implementing your library.
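For example, the define can come from the compiler command line when building the library itself, while consumers compile without it (a sketch using MSVC's cl and hypothetical file names):
cl /c /DMY_LIB_CREATION mylib.cpp
cl /c consumer.cpp
In the first command MY_LIB_API expands to __declspec(dllexport); in the second, MY_LIB_CREATION is undefined, so MY_LIB_API expands to __declspec(dllimport). In a Visual Studio project the same effect is achieved with the Preprocessor Definitions setting.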

Xcode: define preprocessor macro in one project used by another project

I have multiple app projects which all link to the same static library project. Each app project needs to compile the static library project using different settings.
At the moment I have a conditional compilation header in the static library project, let's call it ViewType.h, which adds types, typedefs, macros, etc. specific to each view.
#define VIEW_A 1
#define VIEW_B 2
#define VIEW_C 3
#ifndef VIEWTYPE
#define VIEWTYPE VIEW_A
#endif
#if VIEWTYPE == VIEW_A
// further typedefs and defines tailored to VIEW_A
#elif VIEWTYPE == VIEW_B
// further typedefs and defines tailored to VIEW_B
#elif VIEWTYPE == VIEW_C
// further typedefs and defines tailored to VIEW_C
#endif
The problem here is that each app project needs to change the VIEWTYPE in the static library project, and every time I switch app projects I have to change the VIEWTYPE again.
Unfortunately it seems I cannot define VIEWTYPE=2 (for example) as a preprocessor macro in the app target. And I can't define it in the static library project either, because all three apps include the same static library project: the .xcodeproj is shared between the three apps (i.e. it is dragged and dropped onto each app project; I'm not using a workspace).
I understand one issue is that, the static library being a dependent target, it is built before the app target is even considered. So perhaps there's some way to decide which app the library is being built for based on other conditions (e.g. checking for a file, or including an optional app-specific header).
Question: How can I create a macro, or otherwise perform conditional compilation, based on macros/settings defined by the app target that are then adhered to by the static library project?
The first, simplest approach, is to get rid of the static library, and just include the source files directly into the dependent projects. I often find that intermediate static libraries are much more trouble than they're worth. Their one big benefit comes when they provide a significant build-performance improvement, but they can't here since you're rebuilding the static library for every final target anyway.
I will say that the use of type #defines almost always makes me cry, and may suggest a design flaw that could be handled better. For instance, you may want to implement methods that return the required class (the way UIView's layerClass does). Preprocessor trickery that changes type definitions can lead to extremely subtle bugs. (I just chased down a case of this last year… it was a horrible, horrible crash to figure out.)
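As a sketch of the class-returning-method idea mentioned above (class names are hypothetical, and Foundation classes stand in for real app-specific types): the library asks a class method which type to use, and each app overrides it in a subclass instead of redefining types with the preprocessor.
#import <Foundation/Foundation.h>

// In the static library: a default implementation.
@interface LibraryWidget : NSObject
+ (Class)itemClass;
@end

@implementation LibraryWidget
+ (Class)itemClass { return [NSDictionary class]; }
@end

// In one app target: a subclass overrides the choice, no macros needed.
@interface AppAWidget : LibraryWidget
@end

@implementation AppAWidget
+ (Class)itemClass { return [NSMutableDictionary class]; }
@end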
That said, certain versions of this problem can be solved with xcconfig files. For example, if there are actually multiple copies of the static library (i.e. it's a library that is commonly copied into other projects), then you can give it an xcconfig file that has an #include "../SpecialTypeDefs.xcconfig". That file would be provided by each project to set project-specific definitions. If the file is missing, you get a build error, so it's hard to forget it.
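A sketch of that setup, with assumed file names; the library target in each copy uses an xcconfig that includes a file each app project provides:
// StaticLib.xcconfig, assigned to the static library target
#include "../SpecialTypeDefs.xcconfig"

// ../SpecialTypeDefs.xcconfig, supplied by each app project
GCC_PREPROCESSOR_DEFINITIONS = $(inherited) VIEWTYPE=2
If SpecialTypeDefs.xcconfig is missing, the #include fails and the build stops, which is the safety net mentioned above.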
But personally, I'd just include the files into the actual project directly and skip the library unless they're really enormous.

GTest not finding tests in separate compilation units

I've got a program written in C++, with some subfolders containing libraries linked in. There's a top level SConscript, which calls SConscript files in the subfolders/libraries.
Inside a library .cpp, there is a GTest test function:
TEST(X, just_a_passing_test) {
    EXPECT_EQ(true, true);
}
There is main() in the top-level program source, which just calls GTest's main and has another GTest test within it:
int main(int argc, char** argv) {
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}

TEST(Dummy, should_pass) {
    EXPECT_EQ(true, true);
}
Now the issue is that when I run the program, GTest only runs the test in the main.cpp source, ignoring the test in the library. It gets bizarre: when I reference an unrelated class from the same library .cpp in main.cpp in a way that has no side effects (e.g. SomeClass foo;), the test magically appears. I've tried using -O0 and other tricks to force gcc not to optimize out code that isn't called. I've even tried Clang.
I suspect it's something to do with how GTest does test discovery during compilation, but I can't find any info on this issue. I believe it uses static initialization, so maybe there's some weird ordering going on there.
Any help/info is greatly appreciated!
Update: I found a section in the FAQ that sounds like this problem, even though it refers specifically to Visual C++. It includes a trick/hack to force the compiler not to discard the library when it isn't referenced.
It recommends not putting tests in libraries, but that leaves me wondering how else you would test libraries without having an executable for every one, which makes quickly running them a pain and bloats the output.
https://code.google.com/p/googletest/wiki/Primer#Important_note_for_Visual_C++_users
From the scene-setting one gathers that the library whose gtest test case
goes missing is statically linked in the application build. Also that the
GNU toolchain is in use.
The cause of the problem behaviour is straightforward. The test
program contains no references to anything in the library that contains
TEST(X, just_a_passing_test). So the linker doesn't need to link any
object file from that library to link the program. So it doesn't. So the
gtest runtime doesn't find that test in the executable, because it's not there.
It helps to understand that a static library in GNU format is an archive
of object files, adorned with a house-keeping header block and a global symbol table.
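You can see this for yourself with the binutils tools (archive name assumed here, matching the example used below):
ar t libcool.a     # lists the member object files; a library built from one .cpp has exactly one
nm -C libcool.a    # lists the symbols each member defines, including the test's generated class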
The OP discovered that by coding in the program an ad hoc reference to
any public symbol in the problem library, he could "magically" compel its
test case into the program.
No magic. To satisfy the reference to that public symbol, the linker is
now obliged to link an object file from the library - the one that contains
the definition of the symbol. And the OP imparts that the library is made
from a .cpp. So there is only one object file in the library, and it
contains the definition of the test case, too. With that object file in the
linkage, the test case is in the program.
The OP twiddled in vain with the compiler options, switching from GCC to clang,
in search of a more respectable way to achieve the same end. The compiler is
irrelevant. GCC or clang, it gets its linking done by the system linker, ld
(unless unusual measures have been taken to replace it).
Is there a more respectable way to get ld to link an object file from a
static library even when the program refers to no symbols in that object file?
There is. Say the problem program is app and the problem static library is
libcool.a
Then the usual GCC commandline that links app resembles this, in the relevant
points:
g++ -o app -L/path/to/the/libcool/archive -lcool
This delegates a commandline to ld, with additional linker options and
libraries that g++ deems to be defaults for the system where it finds itself.
When the linker comes to consider -lcool, it will figure out this is a request
for the archive /path/to/the/libcool/archive/libcool.a. Then it will figure
out whether at this point it has still got any unresolved symbol references in hand
whose definitions are compiled in object files in libcool.a. If there are
any, then it will link those object files into app. If not, then it links
nothing from libcool.a and passes on.
But we know there are symbol definitions in libcool.a that we want to
link, even though app does not refer to them. In that case, we can tell
the linker to link the object files from libcool.a even though they are
not referenced. More precisely, we can tell g++ to tell the linker to do that,
like so:
g++ -o app -L/path/to/the/libcool/archive -Wl,--whole-archive -lcool -Wl,-no-whole-archive
Those -Wl,... options tell g++ to pass the options ... to ld. The --whole-archive
option tells ld to link all object files from subsequent archives, whether they
are referenced or not, until further notice. The -no-whole-archive option tells
ld to stop doing that and resume business as usual.
It may look as if -Wl,-no-whole-archive is redundant, as it's the last thing on the
g++ commandline. But it's not. Remember that g++ appends system default libraries
to the commandline, behind the scenes, before passing it to ld. You definitely
do not want --whole-archive to be in force when those default libraries are linked.
(The linkage will fail with multiple definition errors).
Apply this solution to the problem case and TEST(X, just_a_passing_test)
will be executed, without the hack of forcing the program to make some no-op
reference into the object file that defines that test.
There's an obvious downside to this solution in the general case: the library from
which we want to force linkage of some unreferenced object file may also contain a
bunch of other unreferenced object files that we really don't need.
--whole-archive links all of them too, and they're just bloat in the program.
The --whole-archive solution may be more respectable than the no-op reference
hack, but it's not respectable. It doesn't even look respectable.
The real solution here is just to do the reasonable thing. If you want the
linker to link the definition of something in your program, then don't keep that a secret from
the linker. At least declare the thing in each compilation unit where you
expect its definition to be used.
Doing the reasonable thing with gtest test-cases involves understanding that
a gtest macro like TEST(X, just_a_passing_test) expands to a class definition,
in this case:
class X_just_a_passing_test_Test : public ::testing::Test {
public:
    X_just_a_passing_test_Test() {}
private:
    virtual void TestBody();
    static ::testing::TestInfo* const test_info_ __attribute__ ((unused));
    X_just_a_passing_test_Test(X_just_a_passing_test_Test const &);
    void operator=(X_just_a_passing_test_Test const &);
};
(plus a static initializer for test_info_ and a definition for TestBody()).
Likewise for the TEST_F, TEST_P variants. Consequently, you can deploy these
macros in your code with just the same constraints and expectations that would
apply to class definitions.
In this light, if you have a library libcool defined in cool.h, implemented in cool.cpp
and want gtest unit tests for it, to be executed by a test program tests
that is implemented in tests.cpp, the reasonable thing is:-
Write a header file, cool_test.h
#include "cool.h" in it
#include <gtest/gtest.h> in it.
Then define your libcool test cases in it
#include "cool_test.h" in tests.cpp,
Compile and link tests.cpp with libcool and libgtest (a minimal sketch of cool_test.h follows)
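A minimal sketch of that layout, with a hypothetical cool_add() standing in for whatever cool.h actually declares:
// cool_test.h
#ifndef COOL_TEST_H
#define COOL_TEST_H

#include "cool.h"
#include <gtest/gtest.h>

TEST(Cool, adds_numbers) {
    EXPECT_EQ(4, cool_add(2, 2));   // cool_add() is a stand-in for the real libcool API
}

#endif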
And it's obvious why you wouldn't do what the OP has done. You would not define
classes that are needed by tests.cpp, and not needed by cool.cpp, within cool.cpp
and not in tests.cpp.
The OP was averse to the advice against defining the test cases in the library
because:
how else would you test libraries, without having an executable for every one,
making quickly running them a pain.
As a rule of thumb I would recommend the practice of maintaining a gtest executable
per library to be unit-tested: running them quickly is painless with commonplace automation tools
such as make, and it's far better to get a pass/fail verdict per library than
just a verdict for a bunch of libraries. But if you don't want to do that there's still nothing to the
objection:
// tests.cpp
#include "cool_test.h"
#include "cooler_test.h"
#include "coolest_test.h"
int main(int argc, char** argv) {
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}
Compile and link with libcool, libcooler, libcoolest and libgtest

Add no-objc-arc to the implementation or interface file

I have a class (NDTrie on GitHub) which uses a C struct for its internal structure. It would be easier for users to use it in projects with automatic reference counting if I could attach the -fno-objc-arc flag to the source file itself, instead of requiring users to set it in the build phase for that source file. Is there a way to do that?
Per-file compiler flags are tacked onto files on a per-project basis (files meant to be compiled down to some form of machine code generally try to avoid carrying metadata). If you'd like to specify the flag and have it set the required field in any Xcode project, you can use CocoaPods and write a podspec for your dependency, and let the underlying tool handle the rest.
No, it's not possible in this case to mark off portions of a translation unit based on features and alter the compiler flags for just those portions.
You should approach it from another angle. The ARC feature treats all preprocessed input as having ARC enabled or not, based on the compiler flags of the current translation.
The most obvious workarounds:
0) Impact reduction: Hide the implementation, where possible
1) Translation-specific conditions: Use __has_feature(objc_arc) to determine whether or not you are dealing with an ARC-enabled translation; if you are using clang, the expression __has_feature(objc_arc) expands to 1 when ARC is enabled. Then you can conditionally make portions of your program visible or annotate them differently, depending on whether or not ARC is enabled (a sketch follows after workaround 2).
2) Detect and fail: In some remaining cases, you may opt for:
// >> detect clang or GCC if needed
#if __has_feature(objc_arc)
#error This file cannot be compiled with ARC enabled (+HYPERLINK so you don't get a flooded inbox)
#endif
// << detect clang or GCC if needed
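As a sketch of workaround 1), with hypothetical MYLIB_ macro names, a header can annotate memory management differently depending on whether the including translation unit has ARC enabled:
// Fall back gracefully on compilers that lack __has_feature
#ifndef __has_feature
#define __has_feature(x) 0
#endif

#if __has_feature(objc_arc)
#define MYLIB_RETAIN(obj)  (obj)          // ARC inserts retain/release automatically
#define MYLIB_RELEASE(obj)
#else
#define MYLIB_RETAIN(obj)  [(obj) retain]
#define MYLIB_RELEASE(obj) [(obj) release]
#endif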

import = dynamic linking? & include = static linking?

I'm wondering about the difference between #import and #include in Objective-C.
By the way, I'm also not clear on the difference between dynamic and static linking.
If I use a library with static linking, does that mean the code I need is copied from the library into my program and linked with it, so that my program works with its own copy of the library code?
If I use a library with dynamic linking, does that mean my program only references the code it needs from the library while the program is running, and works with that "referenced code"?
#import vs. #include and static vs. dynamic linking are two completely unrelated topics.
#include includes the contents of a file directly in another file, and is available in C (and therefore also in Objective-C). However, it's common to want to include the contents of a file only if that file hasn't already been included. (You don't, for example, want to declare the same variables twice; it'd cause compiler errors!) That's why #import was added in Objective-C; it does exactly that: includes the contents of a file only if that file hasn't already been #imported. If you're not sure what to use, you should probably be using #import.
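For example, with #include the header itself has to guard against double inclusion, whereas #import makes the guard unnecessary (foo.h is a hypothetical header):
// foo.h, written defensively for use with #include
#ifndef FOO_H
#define FOO_H
extern int fooCount;    // example declaration
#endif

// With #import, no guard is needed; the second import below is simply a no-op
#import "foo.h"
#import "foo.h"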
Static vs. dynamic linking is completely different: linking happens after compilation, so it couldn't possibly be related to #import and #include, which are part of the source code. Your thoughts on linking are exactly correct, however: statically linked libraries are copied into your app, so your users don't need to have them. Dynamically linked libraries are only referenced, and must be present on your users' machines for your app to run.