Objective-C frameworks - Dynamic Library Install Name

I'm new to Objective-C and OS X architecture. I started playing with building a framework and then using it, following a great tutorial.
During the tutorial, I had to set the framework target's Dynamic Library Install Name to @rpath/MyFramework.framework/Versions/A/MyFramework. My understanding is that @rpath will expand to the loader's (consumer's) run-path search paths.
It seems as if the responsibility of loading the framework is split between the framework author and the consumer author. Could someone please explain why the author of the framework needs to be concerned with the consumer's run-path search path? For example, if the framework author set the Dynamic Library Install Name to point to some random directory (instead of @rpath), how would the client be able to consume the framework?
Thanks in advance.

It depends a lot on how the framework is being used. And it's important to remember that the framework construct has existed for a long time on the platform.
For a system framework, such as the ones that Apple creates, you're going to be quite happy that they keep the frameworks in a known location. In those cases, the paths that they use are fixed for the OS, and that guarantees you don't accidentally load the wrong one. Further, as indicated in the framework documentation, these frameworks are loaded only once on the machine, regardless of how many times they are used (see Apple: What Are Frameworks?). The benefit here is performance, and it applies to both the code and, in many cases, the resources.
Given the recent move to randomize framework locations, and Apple's comment in the release notes that "Mountain Lion randomly relocates the kernel, kexts, and system frameworks at system boot," it certainly appears they're still sharing these resources, and thus still gaining from this benefit.
For embedded frameworks, the situation is a lot more tedious, and Apple has moved through a variety of methods over the years to make it easier to find frameworks wherever they may be. Again, because of their shared nature, it makes sense for applications that share common library requirements to share them on the machine, both for efficiency and to make sure they're at the same version if they're sharing data. So, for example, if you have two separate apps that use the same framework to work with shared data, you might put the shared framework in /Library/Frameworks and have both apps explicitly look for it there, making sure that some other (possibly older) version of the framework, loaded by another app, is not used instead.
In the end, there's a lot of flexibility for the Framework producer and consumer the way that it currently works. It means that the developer can decide to share a framework, include a private copy of the framework, or even do both, depending upon whether the framework exists on the machine or not. However, the price for that flexibility is the complexity that we have today.
Another example of a reason you might not want to use @rpath specifically is for tightly-linked embedded frameworks (yes, people embed frameworks within other frameworks). In these cases, you don't know where the outer framework will be loaded from, but you want to put the embedded framework inside it so that they stay together. Here @loader_path is the right choice, because it is resolved relative to the code that is doing the loading, so your plug-in's framework can find its resources correctly.
As for your specific example of somebody setting the Dynamic Library Install Name to a "random" location: in that case, you'd simply have to know that location. There might be many reasons for doing this, such as wanting to discourage reuse by other programs, or because there are large resources within the framework that should only be installed in a known, shared location.

Related

VB.NET Cross platform app

I made a game using VB.NET and WPF, but I want it to run on Windows, Linux, and Mac. How can I do that?
I'm sorry, my English is really bad :D I use Google Translate.
Implementing a solution for multiple platforms is not an easy task, and you need to be familiar with all the platforms you plan to support, starting with trivialities such as different path schemes and ending with checking every reference required by your project for compatibility with each target platform.
Please have a look at http://www.mono-project.com. When you install the Mono package on your system, you can, under certain circumstances, run your compiled .exe as-is from the shell.
Obviously, you need to decide whether you try to create one application that runs on all target platforms OR if you want to create platform specific applications all referencing to the same game engine.
If you stick very close to the framework and avoid external references, your chances of achieving the former are higher. If the main logic of your game can easily be compartmentalized into a dedicated project, the latter is the way to go.
In general, cross-platform compatibility is easier to achieve if your application's backend is a console application accessed through a web browser installed on the system, i.e. using a web frontend. As long as you do not require accelerated graphics, this should be feasible.

MonoTouch: creating multiplatform apps using Portable Class Libraries

My scenario: I am trying to port a small part of an application created by our company from native code (Objective-C for iOS / Java for Android) to C#. The project will interact with our web services. The goal of this project is to figure out how feasible it is to port our whole app to Mono.
To create URLs, I'd like to use String.Format(). I thought it'd be a wise idea to put this 'service layer' inside a Portable Class Library (PCL) since I don't expect this code to change across platforms. Sadly, it seems the String library is not available for PCLs.
So my question is the following:
I think the main advantage of PCLs over "normal" libraries is that they shouldn't need a recompile for different platforms. Is this assumption correct?
This experience makes me think that, for the moment, PCLs are rather limited. Should I try to stick with PCLs and work around these kinds of issues, or might it be better to use a "normal" library for now? (I'll assume the "normal" library has more functionality exposed.)
You can use PCLs currently across many platforms, but it does require some small hacks to your setup.
These hacks are listed in http://slodge.blogspot.co.uk/2012/12/cross-platform-winrt-monodroid.html
Once you've got those working, the functionality available is quite broad, and it definitely includes things like String.Format.
For the situations where the PCL profile is not broad enough, you can use several techniques for extending them; see http://blogs.msdn.com/b/dsplaisted/archive/2012/08/27/how-to-make-portable-class-libraries-work-for-you.aspx. The technique I generally use is MvvmCross plugins, which are basically PCL interfaces with platform-specific implementations. But these plugins are generally at the level of 'make Bluetooth work' rather than at the level of String.Format.
I do lots of PCL work across WinRT, WP, WPF, MonoTouch and Mono for Android - see http://slodge.blogspot.co.uk/p/mvvmcross-quicklist.html for lots of links to PCL work.
It's true that Xamarin recommended against using PCLs for a couple of years, but that situation has now changed, and official support for PCLs is under way - see http://slodge.blogspot.co.uk/2013/02/the-future-is-portable.html
From a development perspective - especially from the point of view of using refactoring and testing tools - I don't hesitate to recommend you use PCLs now... especially for operations at the String.Format level. However, each project is unique... so it's not always the right answer.
One important note: right now it's better to not reuse the PCL binary files across to the MonoTouch platform - for now, build your portable libraries against the specific MonoTouch library platform. See http://slodge.blogspot.co.uk/2013/01/almost-portable-binaries.html?m=1
Perhaps you want to look at the efforts of others who have got PCLs working to a considerable degree with MonoTouch and Mono for Android.
For example, see http://www.slideshare.net/cirrious/mvvm-cross-going-portable. You'll also find instructions on how to set up PCL support for MvvmCross here: http://slodge.blogspot.co.uk/2012/09/mvvmcross-vnext-portable-class.html
Xamarin has recently committed to providing far greater PCL support, rather than the workarounds people have had to make so far; it is worth the effort.

Creating your own custom libraries in iOS?

I'm fairly new to programming and wanted to start programming more efficiently. Try as I may, I often find myself straying from the MVC model.
I was wondering: are there any tips or methods for keeping your code organized when coding in Xcode with Objective-C? To be more specific (I know you guys like that :) I want to:
Be able to write libraries or self-contained code that I can bring from one project to another
Share my code with others as open sourced projects
Prevent myself from writing messy code that does not follow proper structure
Use a high warning level. Build cleanly.
Remove all static analyzer issues.
Write some unit tests.
Keep the public interfaces small.
Specify your library's dependencies (e.g. minimum SDK versions and dependent libraries).
Compile against multiple/supported OS versions regularly.
Learn to create and manage static library targets. This is all you should need to support and reuse the library in another project (unless you drag external resources into the picture, which becomes a pain).
No global state (e.g. singletons, global variables).
Be precise about what you support in multithreaded contexts (most commonly, that concurrency is the client's responsibility).
Document your public interface (maybe your private one too…).
Define a precise and uniform error model.
You can never have enough error detection.
Set very high standards: build your libraries for reuse, as reference implementations.
Determine the granularity of the libraries early on. These should be very small and focused.
Consider using C or C++ implementations for your backend/core libraries (that stuff can be stripped); a sketch of such an interface follows this list.
Do establish and specify prefixes for your library's Objective-C classes and categories. Use good prefixes, too.
Minimize visible dependencies (e.g. don't #import tons of frameworks which could be hidden).
Be sure it compiles without the client needing to add additional #imports.
Don't rely on clients putting things in specific places, or that resources will have specific names.
Be very conservative about memory consumption and execution costs.
No leaks.
No zombies.
No slow blocking operations on the main thread.
Don't publish something until it's been well tested and has been stable for some time. Bugs break clients' code, and clients are less likely to reuse your library if it keeps breaking their programs.
Study, use, and learn from good libraries.
Ask somebody (ideally, who's more experienced than you) to review your code.
Do use/exercise the libraries wherever appropriate in your projects.
Fix bugs before adding features.
Don't let all that scare you; it can be really fun, and you can learn a lot in the process.
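
As a rough illustration of several of the points above (a small public interface, a uniform error model, no global state, good prefixes, and a C core), here is a hypothetical header sketch; all of the names (the XYZ prefix, xyz_buffer_*) are invented for the example.

    /* xyz_buffer.h - hypothetical public interface for a tiny library. */
    #ifndef XYZ_BUFFER_H
    #define XYZ_BUFFER_H

    #include <stddef.h>

    /* Uniform error model: every call reports success or a typed failure. */
    typedef enum {
        XYZ_OK = 0,
        XYZ_ERR_NO_MEMORY,
        XYZ_ERR_BAD_ARGUMENT
    } xyz_status;

    /* Opaque type: the layout stays private, so it can change later
       without breaking clients (keeps the public interface small). */
    typedef struct xyz_buffer xyz_buffer;

    /* All state lives in objects the client owns; no globals. */
    xyz_status xyz_buffer_create(size_t capacity, xyz_buffer **out);
    xyz_status xyz_buffer_append(xyz_buffer *buf, const void *bytes, size_t len);
    void       xyz_buffer_destroy(xyz_buffer *buf);

    #endif /* XYZ_BUFFER_H */

Note how the header compiles on its own (it includes stddef.h for size_t) and reveals nothing about the implementation, which keeps clients decoupled from it.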
There are a number of ways you can reuse code:
Store the code in a common directory and include that directory in your projects. Simple, but can have versioning issues.
Create a separate project which builds a static iOS library, and then create a framework from it. This is more complex to set up because it involves scripting to build the framework directory structure, but it is easy to use in other projects and can handle versioning and combined device/simulator libs.
Create a separate project which builds a static iOS library, and then include it as a subproject in other projects. This avoids having to build frameworks, and the results can be more optimised.
Those are the basic three; there are of course a number of variations on them and on how you go about them. A lot of what you decide to do comes down to who you are doing this for. For example, I like subprojects for my own code, but for code I want to make available to others, I think frameworks are better, even if they are more work to create. Plus I can then wrap them up with docsets of the API documentation and upload the whole lot as a DMG to GitHub for others to download.

what are the pros and cons of using a DLL?

Windows still uses DLLs, and Mac programs seem not to use DLLs at all. Are there benefits or disadvantages to using either technique?
If a program installation includes all the DLLs it requires so that it will work 100% well, will it be the same as statically linking all the libraries?
Mac OS X, like other flavours of Unix, uses shared libraries, which are just another form of DLL.
And yes, both are advantageous, since the DLL or shared library code can be shared between multiple processes. The OS achieves this by loading the DLL or shared library once and mapping it into the virtual address space of every process that uses it.
On Windows, you have to use dynamically-loaded libraries, because the GDI and USER libraries are available only as DLLs. You can't link either of them in statically or talk to them using a protocol that doesn't involve dynamic loading.
On other OSes, you want to use dynamic loading anyway for complex apps; otherwise your binary would bloat for no good reason, and it increases the probability that your app will be incompatible with the system in the long run (although in the short run, static linking can somewhat shield you from tiny breaking changes in libraries). And you can't statically link proprietary libraries on OSes that rely on them.
Windows still uses DLLs, and Mac programs seem not to use DLLs at all. Are there benefits or disadvantages to using either technique?
Any kind of modularization is good, since it makes updating the software easier: you do not have to update the whole program binary when a bug is fixed. If the bug is in some DLL, only that DLL needs to be updated.
The only downside, in my opinion, is that you introduce additional complexity into the development of the program; e.g. if a DLL is a C or C++ DLL, you have to deal with different calling conventions and so on.
If a program installation includes all the DLLs it requires, will it be the same as statically linking all the libraries?
More or less, yes. It depends on whether you are calling functions in a DLL that you linked against at build time. A DLL could just as well be a "free-standing" dynamic library that you can only access via LoadLibrary() and GetProcAddress(), etc.
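
For illustration, here is a minimal C sketch of that "free-standing" style of use. mylib.dll and the add function are made-up names for the example, while LoadLibraryA(), GetProcAddress(), and FreeLibrary() are the real Win32 calls.

    #include <windows.h>
    #include <stdio.h>

    typedef int (*add_fn)(int, int);

    int main(void) {
        /* Locate and map the DLL at run time; no import library needed. */
        HMODULE lib = LoadLibraryA("mylib.dll");
        if (!lib) {
            fprintf(stderr, "could not load mylib.dll\n");
            return 1;
        }

        /* Look the function up by name. There is no link-time check,
           so the cast encodes our assumption about its signature. */
        add_fn add = (add_fn)GetProcAddress(lib, "add");
        if (!add) {
            fprintf(stderr, "symbol 'add' not found\n");
            FreeLibrary(lib);
            return 1;
        }

        printf("2 + 3 = %d\n", add(2, 3));
        FreeLibrary(lib);   /* unmap the DLL when done */
        return 0;
    }

A DLL accessed this way needs no import library at all; the trade-off is that every assumption about the function's name and signature is checked only at run time.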
One big advantage of shared libraries (DLLs on Windows, .so on Unix) is that you can rebuild the library and its consumers separately, while with static libraries you have to rebuild the library and then relink all the consumers, which is very slow on Unix systems and not very fast on Windows.
MacOS software uses "DLLs" as well; they are just named differently (shared libraries).
DLLs make sense if you have code you want to reuse in different components of your software. This mostly makes sense in big software projects.
Static linking makes sense for small single-component applications, when there is no need for code reuse. It simplifies distribution since your component has no external dependencies.
Besides memory/disk space usage, another important advantage of using shared libraries is that updates to the library will be automatically picked up by all programs on the system which use the library.
When there was a security vulnerability in the InfoZIP ZIP libraries, an update to the DLL/.so automatically fixed all software that used them. Software that was linked statically had to be recompiled.
Windows still uses DLLs, and Mac programs seem not to use DLLs at all. Are there benefits or disadvantages to using either technique?
Both use shared libraries, they just use a different name.
If a program installation includes all the DLLs it requires so that it will work 100% well, will it be the same as statically linking all the libraries?
Somewhat. When you statically link libraries into a program, you get a single, very big file; with DLLs, you have many files.
The statically linked file won't need the "resolve shared libraries" step (which happens while the program loads). A long time ago, loading a static program meant that the whole program was first loaded into RAM, and only then did the "resolve shared libraries" step happen. Today, only the parts of the program which are actually executed are loaded, on demand. So with a static program, you don't need to resolve the DLLs; with DLLs, you don't need to load them all at once. Performance-wise, they should be roughly on par.
Which leaves the "DLL Hell". Many programs on Windows bring all DLLs they need and they write them into the Windows directory. The net effect is that the last installed programs works and everything else might be broken. But there is a simple workaround: Install the DLLs into the same directory as the EXE. Windows will search the current directory first and then the various Windows paths. This way, you'll waste a bit of disk space but your program will work and, more importantly, you won't break anything else.
One might argue that you shouldn't install DLLs which already exist (in the same version) in the Windows directory, but then you're again vulnerable to some badly behaved app overwriting the version you need with something that breaks your code. The drawback is that you must distribute security fixes for your app yourself; you can't rely on Windows Update or similar services to secure your code. This is a tight spot: crackers make lots of money from security issues, and people will not like you when someone steals their banking data because you didn't issue security fixes soon enough.
If you plan to support your application very tightly for, say, 20 years, installing all DLLs in the program directory is for you. If not, then write code which checks that suitable versions of all DLLs are installed and tells the user about it, so they know why your app suddenly starts to crash.
Yes, see this text:
Dynamic linking has the following advantages:
Saves memory and reduces swapping. Many processes can use a single DLL simultaneously, sharing a single copy of the DLL in memory. In contrast, Windows must load a copy of the library code into memory for each application that is built with a static link library.
Saves disk space. Many applications can share a single copy of the DLL on disk. In contrast, each application built with a static link library has the library code linked into its executable image as a separate copy.
Upgrades to the DLL are easier. When the functions in a DLL change, the applications that use them do not need to be recompiled or relinked as long as the function arguments and return values do not change. In contrast, statically linked object code requires that the application be relinked when the functions change.
Provides after-market support. For example, a display driver DLL can be modified to support a display that was not available when the application was shipped.
Supports multilanguage programs. Programs written in different programming languages can call the same DLL function as long as the programs follow the function's calling convention. The programs and the DLL function must be compatible in the following ways: the order in which the function expects its arguments to be pushed onto the stack, whether the function or the application is responsible for cleaning up the stack, and whether any arguments are passed in registers.
Provides a mechanism to extend the MFC library classes. You can derive classes from the existing MFC classes and place them in an MFC extension DLL for use by MFC applications.
Eases the creation of international versions. By placing resources in a DLL, it is much easier to create international versions of an application. You can place the strings for each language version of your application in a separate resource DLL and have the different language versions load the appropriate resources.
A potential disadvantage to using DLLs is that the application is not self-contained; it depends on the existence of a separate DLL module.
From my point of view, a shared component has some advantages that are sometimes perceived as disadvantages:
A shared component defines interfaces within your process. You are forced to decide which components/interfaces are visible outside and which are hidden. This automatically defines which interfaces have to be stable and which can be refactored without affecting any code outside the component.
Memory management, in the case of C++ on Windows, must be well thought out. Normally you should not free memory outside of the DLL in which it was allocated; if you do, your component may fail when different runtimes or compiler versions are used.
So I think that using shared components helps keep the software better organized.

Is this a reasonable "Application entry point"?

I have recently come across a situation where code is dynamically loading some libraries, wiring them up, then calling what is termed the "application entry point" (one of the libraries must implement IApplication.Run()).
Is this a valid "application entry point"?
I would always have considered the application entry point to be before the loading of the libraries, and I found it slightly misleading that IApplication.Run() is called only after a considerable amount of work.
The terms "application" and "system" are used so widely and diversely that you need to agree upfront with your conversation partner on what they mean. E.g. sometimes an application is something with a UI, and a system is UI-less. In general it's just a case of "you say potato, I say potahto".
As for the example you use: that's just what a runtime (e.g. .NET or java) does: loading a set of libraries and calling the application entry point, i.e. the "main" method.
So in your case, the code loading the libraries is doing just the same, probably calling a method on an interface; you could then consider the loading code to be the runtime for that application. It's just a matter of perspective.
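
Here is a small C sketch of that perspective, assuming a made-up library name (libapp.so) and a made-up entry-point symbol (run_application); the dlopen()/dlsym() calls are the real POSIX API (on Windows, the equivalents are LoadLibrary and GetProcAddress).

    /* A tiny "runtime": load the application library, then hand it control. */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        void *app = dlopen("./libapp.so", RTLD_NOW);  /* load the "application" */
        if (!app) {
            fprintf(stderr, "%s\n", dlerror());
            return 1;
        }

        /* Find the agreed-upon application entry point by name. */
        int (*run)(void) = (int (*)(void))dlsym(app, "run_application");
        if (!run) {
            fprintf(stderr, "no entry point: %s\n", dlerror());
            dlclose(app);
            return 1;
        }

        int rc = run();   /* from the library's point of view, this is "main" */
        dlclose(app);
        return rc;
    }

From the host's point of view, run() is just a function call made after a lot of setup; from the library's point of view, it is the application entry point.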
The term "application" can mean whatever you want it to mean. "Application" merely means a collection of resources (libraries, code, images, etc) that work together to help you solve a problem.
So to answer your question, yes, it's a valid use of the term 'application'.
On its own, "application" actually means nothing. It is often used to refer to computer programs that provide some value to the user. A more precise term is application software, which has the following definition:
Application software is a subclass of computer software that employs the capabilities of a computer directly and thoroughly to a task that the user wishes to perform. This should be contrasted with system software, which is involved in integrating a computer's various capabilities but typically does not directly apply them in the performance of tasks that benefit the user. In this context the term application refers to both the application software and its implementation.
And since application really means application software, and software is any piece of code that performs any kind of task on a computer, I'd say a library can also be an application.
Most terms are of an artificial nature anyway. Is a plugin not an application? Is the Flash plugin in your browser not an application? People say no, it's just a plugin. Why? Because it can't run on its own; it needs to be loaded into a real process. But there is no definition saying that only things which "can run on their own" are applications. The same holds true for a library. The core application could be just an empty container, and all logic and functionality, even the interaction with the user, could be performed by plugins or libraries, in which case those would be more of an application than the empty container that merely provides some context for them to run. Compare this to Java: a Java application can't run on its own, it must run within a Java Virtual Machine (JVM). Does that mean the JVM is the application and the Java code is just... well, what? Isn't the Java code the real application, and the JVM just an empty runtime environment that provides nothing to the end user without the loaded Java code?
I think in this context "application entry point" means "the point at which the application (your code) enters the library".
I think what you're probably referring to is the main() function in C/C++ code, or WinMain in a Windows app; that is, the point where execution normally starts in an app. Your question is pretty broad and vague (for example, which OS are you running this on?), but this may be what you're looking for. This might also address the question.
Bear in mind when you're asking questions, details are your friend. People can give you a much better, more informed answer when you provide them with details.
EDIT:
In a broader context, consider what has to happen from the standpoint of the OS. When the user asks to run an app, the OS loads the app from the hard drive, and once the app is in memory, it has to pass control to some point in the memory block occupied by the newly loaded app to continue execution. That is the "application entry point". When an app is built with dynamically linked code, the OS has to load all that dynamically linked code in order to get the complete app image into memory. Loading those shared bits of code does not change the fact that the OS must have a point to which to pass control once the app is loaded.