I need to use a library in a Cocoa application and can use either a dynamic (.dylib) or a static (.a) version of it. I come from the Linux world and would happily use the dylib. However, since the app bundle will contain all the dependencies (including the dylib), I thought it would not be a problem to have a bigger binary due to static linking. What is the best solution?
In this case, my concern would be responsiveness with respect to the loading time of one big executable versus a small executable plus multiple libraries. The difference may be small.
On iOS you cannot ship a bare dynamic library (.dylib), but you are able to create a dynamic framework with the .dylib inside. The answer depends on your needs:
[iOS static vs dynamic library]
[Create Objective-C dynamic framework]
An iOS app should NOT contain any dynamic libraries. Your only option is to statically link the code.
How do you reference DLLs from Objective-C? I use GNUstep Make files on Windows.
Rich
Ooh... this takes me back. A bit of a guess, based on the most common problem I ever ran into.
If GNUstep's DLLs on Windows work like they did a decade ago, then you:
Link to the DLL like you would any other DLL. I don't remember the exact syntax, but there should be about a zillion examples available.
Make sure you have a static reference to a symbol in each DLL from the main program (or from some other DLL).
In particular, when compiling something that is pure Objective-C, it is quite easy to end up in a situation where the Windows link loader doesn't load a DLL because it doesn't see a hard reference to any symbol in that DLL. When I ran into this with WebObjects applications, I would typically export something like:
int businessLogicDLLVersion;
And then refer to that symbol quite specifically in my main program. That static reference was enough to cause the link loader to load the DLL and the runtime to hook up all the classes.
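Something along these lines (a minimal sketch; businessLogicDLLVersion is the example symbol from above, and the __declspec annotations assume a Microsoft-style toolchain):

    /* In the DLL: export the marker symbol. */
    __declspec(dllexport) int businessLogicDLLVersion = 1;

    /* In the main program: import it and touch it once. */
    __declspec(dllimport) extern int businessLogicDLLVersion;

    int main(int argc, char *argv[])
    {
        /* Reading the symbol gives the link loader a hard reference,
           so it maps the DLL, and the Objective-C runtime can then
           register the classes that live in it. */
        volatile int version = businessLogicDLLVersion;
        (void)version;
        /* ... rest of the program ... */
        return 0;
    }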
Windows still uses DLLs and Mac programs seem not to use DLLs at all. Are there benefits or disadvantages to using either technique?
If a program installation includes all the DLLs it requires so that it will work 100% well, will it be the same as statically linking all the libraries?
Mac OS X, like other flavours of Unix, uses shared libraries, which are just another form of DLL.
And yes, both are advantageous, as the DLL or shared library code can be shared between multiple processes. The OS does this by loading the DLL or shared library once and mapping it into the virtual address space of each process that uses it.
On Windows, you have to use dynamically loaded libraries, because the GDI and USER libraries are available as DLLs only. You can't link either of those in statically or talk to them using a protocol that doesn't involve dynamic loading.
On other OSes, you want to use dynamic loading anyway for complex apps; otherwise your binary would bloat for no good reason, and it increases the probability that your app would be incompatible with the system in the long run (however, in the short run, static linking can somewhat shield you from tiny breaking changes in libraries). And you can't statically link proprietary libraries on OSes which rely on them.
Windows still uses DLLs and Mac programs seem not to use DLLs at all. Are there benefits or disadvantages to using either technique?
Any kind of modularization is good, since it makes updating the software easier: you do not have to update the whole program binary when a bug is fixed. If the bug is in some DLL, only that DLL needs to be updated.
The only downside, imo, is that you introduce extra complexity into the development of the program, e.g. if a DLL is a C or C++ DLL you have to deal with different calling conventions, etc.
If a program installation includes all the DLLs it requires, will it be the same as statically linking all the libraries?
More or less, yes. It depends on whether you are calling functions in a DLL that you link against at build time. A DLL could just as well be a "free-standing" dynamic library that you can only access via LoadLibrary() and GetProcAddress() etc.
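For the run-time case, the pattern looks roughly like this (a minimal sketch; mathlib.dll and its add function are hypothetical names):

    #include <windows.h>
    #include <stdio.h>

    typedef int (*add_fn)(int, int);

    int main(void)
    {
        /* Load the DLL at run time instead of linking against it. */
        HMODULE lib = LoadLibrary(TEXT("mathlib.dll"));
        if (lib == NULL) {
            fprintf(stderr, "could not load mathlib.dll\n");
            return 1;
        }

        /* Look up the exported function by name. */
        add_fn add = (add_fn)GetProcAddress(lib, "add");
        if (add != NULL)
            printf("2 + 3 = %d\n", add(2, 3));

        FreeLibrary(lib);
        return 0;
    }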
One big advantage of shared libraries (DLLs on Windows or .so files on Unix) is that you can rebuild the library and its consumers separately, while with static libraries you have to rebuild the library and then relink all the consumers, which is very slow on Unix systems and not very fast on Windows.
Mac OS software uses "DLLs" as well; they are just named differently (shared libraries).
DLLs make sense if you have code you want to reuse in different components of your software. Mostly this applies to big software projects.
Static linking makes sense for small single-component applications, when there is no need for code reuse. It simplifies distribution since your component has no external dependencies.
Besides memory/disk space usage, another important advantage of using shared libraries is that updates to the library will be automatically picked up by all programs on the system which use the library.
When there was a security vulnerability in the InfoZIP ZIP libraries, an update to the DLL/.so automatically made all software that used them safe. Software that was linked statically had to be recompiled.
Windows still uses DLLs and Mac programs seem not to use DLLs at all. Are there benefits or disadvantages to using either technique?
Both use shared libraries, they just use a different name.
If a program installation includes all the DLLs it requires so that it will work 100% well, will it be the same as statically linking all the libraries?
Somewhat. When you statically link libraries into a program, you get a single, very big file; with DLLs, you have many files.
The statically linked file won't need the "resolve shared libraries" step (which happens while the program loads). A long time ago, loading a static program meant that the whole program was first loaded into RAM and then the "resolve shared libraries" step happened. Today, only the parts of the program which are actually executed are loaded, on demand. So with a static program, you don't need to resolve the DLLs; with DLLs, you don't need to load everything at once. Performance-wise, they should be roughly on par.
Which leaves the "DLL Hell". Many programs on Windows bring all the DLLs they need and write them into the Windows directory. The net effect is that the last installed program works and everything else might be broken. But there is a simple workaround: install the DLLs into the same directory as the EXE. Windows searches the current directory first and then the various Windows paths. This way, you'll waste a bit of disk space, but your program will work and, more importantly, you won't break anything else.
One might argue that you shouldn't install DLLs which already exist (with the same version) in the Windows directory, but then you're again vulnerable to some bad app which overwrites the version you need with something that breaks your neck. The drawback is that you must distribute security fixes for your app yourself; you can't rely on Windows Update or similar things to secure your code. This is a tight spot; crackers are making lots of money from security issues, and people will not like you when someone steals their banking data because you didn't issue security fixes soon enough.
If you plan to support your application very tightly for many years, say 20, installing all DLLs in the program directory is for you. If not, write code which checks that suitable versions of all DLLs are installed and tells the user about it, so they know why your app suddenly starts to crash.
Yes, see this text:
Dynamic linking has the following advantages:

- Saves memory and reduces swapping. Many processes can use a single DLL simultaneously, sharing a single copy of the DLL in memory. In contrast, Windows must load a copy of the library code into memory for each application that is built with a static link library.
- Saves disk space. Many applications can share a single copy of the DLL on disk. In contrast, each application built with a static link library has the library code linked into its executable image as a separate copy.
- Upgrades to the DLL are easier. When the functions in a DLL change, the applications that use them do not need to be recompiled or relinked as long as the function arguments and return values do not change. In contrast, statically linked object code requires that the application be relinked when the functions change.
- Provides after-market support. For example, a display driver DLL can be modified to support a display that was not available when the application was shipped.
- Supports multilanguage programs. Programs written in different programming languages can call the same DLL function as long as the programs follow the function's calling convention. The programs and the DLL function must be compatible in the following ways: the order in which the function expects its arguments to be pushed onto the stack, whether the function or the application is responsible for cleaning up the stack, and whether any arguments are passed in registers.
- Provides a mechanism to extend the MFC library classes. You can derive classes from the existing MFC classes and place them in an MFC extension DLL for use by MFC applications.
- Eases the creation of international versions. By placing resources in a DLL, it is much easier to create international versions of an application. You can place the strings for each language version of your application in a separate resource DLL and have the different language versions load the appropriate resources.

A potential disadvantage to using DLLs is that the application is not self-contained; it depends on the existence of a separate DLL module.
From my point of view, a shared component has some advantages that are sometimes perceived as disadvantages.
A shared component defines interfaces in your process, so you are forced to decide which components/interfaces are visible outside and which are hidden. This automatically defines which interfaces have to be stable and which do not and can therefore be refactored without affecting any code outside the component.
Memory management, in the case of C++ and Windows, must be thought through carefully: normally, memory allocated inside a DLL should also be freed inside that same DLL. If you don't follow this rule, your component may fail when different runtimes or compiler versions are used.
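A minimal sketch of that rule, using hypothetical widget_create/widget_destroy names (export/import decoration omitted for brevity):

    /* widget.h -- shared between the DLL and its consumers */
    typedef struct Widget Widget;
    Widget *widget_create(void);
    void    widget_destroy(Widget *w);

    /* widget.c -- compiled into the DLL */
    #include <stdlib.h>
    struct Widget { int id; };

    /* Allocation and deallocation both live inside the DLL,
       so they always use the same C runtime heap. */
    Widget *widget_create(void)       { return calloc(1, sizeof(Widget)); }
    void    widget_destroy(Widget *w) { free(w); }

Consumers call widget_destroy() instead of free(), so the memory is released by the same runtime that allocated it, even if the EXE and the DLL were built with different compiler versions.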
So I think that using shared components helps the software get better organized.
For me, a library is a collection of classes that do useful things, typically something that can be useful in a lot of projects. Is that also the case in Objective-C? What exactly is a library there? Only classes that have methods? Or also collections of functions? And do they have to be compiled to be called a "library"? Where is the line between a library and a "framework"? Aren't both the same thing?
According to Wikipedia: "Frameworks are functionally similar to shared libraries, a compiled object that can be dynamically loaded into a program's address space at runtime, but frameworks add associated resources, header files, and documentation."
A framework is essentially a shared library (binary, similar to a DLL) in a bundle that also includes all of the information needed to use that library (like header files, documentation, internationalization resources, etc). A framework without all of the extras is just a library.
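On disk, an OS X framework bundle looks roughly like this (a simplified sketch of the usual versioned layout):

    Foo.framework/
        Foo -> Versions/Current/Foo
        Headers -> Versions/Current/Headers
        Resources -> Versions/Current/Resources
        Versions/
            A/
                Foo            (the shared library binary)
                Headers/       (header files)
                Resources/     (Info.plist, nibs, localized strings)
            Current -> A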
There is no requirement that a framework be object-oriented in nature, though I assume that's the norm with Cocoa.
For Cocoa, the concept of a framework generally replaces (enhances) the concept of a library. However, the Objective-C toolchain imposes no such requirement. You can use source-only "libraries" or unix-style binary libraries (e.g. an .so file). I think of a "library" in these generic terms... it's just a collection of useful code, in source or binary form. A framework, on the other hand, is a specific thing with a specific meaning for OS X.
Assuming you are talking about a library that uses the Cocoa frameworks and not just one written in the plain old Objective-C language, a library (or framework) is a collection of classes that work together to perform a specific task. I would not organize an ObjC framework as a collection of functions since that totally goes against the paradigm of the language.
As for the difference between a library and framework, that's probably a bit subjective. To me, a library (in the context of your question) is something written in C that probably more closely resembles a non-OO collection of functions. A framework would be a full package of classes as I described above. So the Messaging framework on CocoaDev would be a framework, whereas the sqlite3 APIs you can access on the iPhone would be a library. Again, that's just me. Other people may interpret the terms differently.
I've got a lot of small DLLs which I would like to combine into one big(ger) DLL (as suggested here). I can do so by merging my projects, but I would like a less intrusive way.
Can several DLLs be merged into one unit?
A quick search found this thread that claims this is not possible. Does anyone know otherwise?
Note that I'm talking about native C++ code, not .NET, so ILMerge is out.
I don't know about merging DLLs, but I'm sure you can link the intermediate object files into one DLL. This would only require changes in your build script.
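With the Microsoft toolchain, that would look roughly like this (a sketch; the object file and DLL names are illustrative, and it assumes the sources already mark their exports with __declspec(dllexport)):

    rem Link all the intermediate object files into a single DLL
    rem instead of producing several small ones.
    link /DLL /OUT:combined.dll a.obj b.obj c.obj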
As far as I know you cannot merge DLL files directly. But it should be possible with static libraries or object files. If it is possible for you to build static libraries of your projects you can merge them using the Library Manager by extracting object files from all libraries and packaging them into a new library.
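With the Microsoft Library Manager this is a one-liner (a sketch; foo.lib and bar.lib are hypothetical names):

    rem lib.exe can combine static libraries directly:
    lib /OUT:combined.lib foo.lib bar.lib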
Also, there was a product that made a .LIB out of .DLLs. You could then link your EXE against that .LIB and get rid of the .DLLs altogether. Perhaps you could link a .DLL out of the .LIB - I'm not sure.
The product is here:
http://www.binary-soft.com/dll2lib/dll2lib.htm
I'm not sure if it still works, whether it's supported, or even still sold. It sure appears pricey, but it used to have a (nag-enabled) free trial period.