If I statically link an executable on Ubuntu, is there any chance that that executable won't work within another distribution such as Linux Mint or Fedora? I know processor types are a factor, but other than that is there anything else I have to be wary of? Sorry if this is a dumb question. Thanks for any help.
There are a few corner cases, but for the most part you should be in good shape with static linking. The one that comes to mind is libnss. This particular library is essentially impossible to link statically, because of the way it does its job (permissions, authentication, security tasks). As long as the glibc versions are similar, you should be OK on this issue, though.
If your program needs to work with subtle features of the kernel, like volume managers, you've got a pretty slim chance of getting your program to work, statically linked, across distros, because the kernel interfaces may change slightly.
Most typical applications, the kind for which it even makes sense to discuss portability (network services, GUI applications, language tools like compilers and interpreters), won't have a problem with any of this.
If you statically link a program on one computer and then move it to another computer on which the system basically runs the same way, it should work just fine. That's the point of static linking: there are no other files the program depends on. It's entirely self-contained, so as long as it can run at all, it will run the same way it does on its "host" system.
This contrasts with dynamic linking, in which the program incorporates elements of other files (libraries) at runtime. If you move a dynamically linked program to another system where the libraries it depends on are different (or nonexistent), it won't work.
In most cases, your executable will work just fine. As long as your executable doesn't depend on anything unusual being present for it to function, there will be no problem. (And, if it does depend on something unusual being present, then you'll have the same issue even if you dynamically link.)
Static linking is usually safer than dynamic linking for compatibility between different UNIX environments, as long as the same CPU architecture is in use.
To have a statically linked binary fail, again assuming the same processor architecture, you would have to do something such as link on a system using the a.out binary format and try to execute it on a system running ELF, in which case the dynamically linked version would fail just as badly.
So why do people not routinely link statically? Two reasons:
It makes the executable larger, sometimes MUCH larger, and
If bugs in the libraries are fixed, you'll have to relink your program to get access to the bug fixes. If a critical security bug is fixed in the libraries, you have to relink and redistribute your exe.
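To make the tradeoff concrete, here is a minimal sketch (assuming a GNU toolchain on Linux; the file name hello.c is just for illustration) showing how the two kinds of build differ and how to check the result with ldd:

```c
/* hello.c - used to illustrate static vs. dynamic linking.
 *
 * Dynamic (default) build:
 *     gcc hello.c -o hello
 *     ldd ./hello           # lists libc.so.6, the dynamic loader, etc.
 *
 * Static build:
 *     gcc -static hello.c -o hello_static
 *     ldd ./hello_static    # prints "not a dynamic executable"
 *
 * Comparing the two with `ls -l` shows the size cost mentioned above:
 * the static binary carries its own copy of the C library.
 */
#include <stdio.h>

int main(void)
{
    puts("hello from a (possibly) statically linked binary");
    return 0;
}
```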
On the contrary. Whatever your chances are of getting a binary to work across distributions or even OSes, those chances are maximized by static linking. Static linking makes an executable self-contained in terms of libraries. It can still go wrong if it tries to read a file that's not there on another system.
For even better chances of portability, try linking against dietlibc or some other libc. An article at Linux Journal mentions some candidates. A smaller, simpler libc is less likely to depend on things in the filesystem that differ from distro to distro.
I would, for the reasons noted above, avoid statically linking something unless you absolutely must.
That being said, it should work on any other similar kernel of the same architecture. For example, if you statically link on a machine running Linux 2.4.x, the loader VDSO is going to be different on Linux 2.6 (the VDSO, or virtual dynamic shared object, is a shared object containing loader code that the kernel exposes to every process).
Other pitfalls include things in /etc not being where you'd think, logs being in different places, system utilities being absent or different (Ubuntu uses update-rc.d, RHEL uses chkconfig), etc.
Sometimes you just have no choice. I was writing a program that talked to LVM2's string-based cmdlib interface instead of using execv(). Lo and behold, 30% of the distros I needed to support did NOT include that library and offered no way of getting it. So I had to link against the static object when producing binary packages.
If you are using glibc, you can be confident that stuff like getpwnam() and friends will still work; just make sure to watch out for any hard-coded paths (better yet, make them configurable at run time).
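As a rough illustration of the NSS caveat (a minimal sketch; nothing here is specific to any one distro), even a `gcc -static` build of the following program will still pull in the NSS shared objects at runtime, because getpwnam() consults /etc/nsswitch.conf and loads the configured backends:

```c
/* nss_demo.c - getpwnam() goes through NSS, so even when this program is
 * built with `gcc -static nss_demo.c -o nss_demo`, glibc will still try to
 * load libnss_*.so at runtime (gcc typically prints a warning about this
 * at link time). The target system therefore needs a compatible glibc.
 */
#include <pwd.h>
#include <stdio.h>

int main(void)
{
    struct passwd *pw = getpwnam("root");
    if (pw != NULL)
        printf("home directory of root: %s\n", pw->pw_dir);
    else
        printf("lookup failed (user unknown or NSS backend unavailable)\n");
    return 0;
}
```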
As long as you can guarantee it'll only be executed on a similar version of the OS on similar hardware, your program will work fine if it is statically linked. So, if you build for a 2.6 Linux kernel and statically link, you will be fine to run on (almost) all 2.6 Linux distributions.
Be warned that you can't statically link some parts of glibc, so if you're using them you'll have to link dynamically anyway. From memory, the name service (NSS) parts required dynamic linking when I was investigating it.
You can't statically link a program for (say) Linux then expect it to run on BSD or Windows. BSD and Unix don't present or handle their system calls in the same way Linux does. I tell a slight lie because the BSDs have a Linux emulation layer that can be enabled, but out of the box it won't work.
No, it will not work. Static linking for distribution independence is a concept from the old Unix days and is not recommended. Often you can't do it anyway, as many libraries are not available as static libraries.
Follow the Linux Standard Base (LSB) way; this is your only chance to get as much cross-distribution portability as possible.
The LSB also works fine if you program for FreeBSD and Solaris.
There are two compatibility questions at issue here: library versions and library inventory.
You don't say what libraries you are using.
If you have no '-l' options, then the only 'library' is glibc itself, which serves as the interface to the kernel. Glibc versions are upward compatible. If you link on a glibc 2.x system you can run on a glibc 2.y, for y > x. The developers make a firm commitment to this.
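A small sketch of what that compatibility promise looks like in practice (gnu_get_libc_version() is a GNU extension; the objdump command in the comment is just one way to inspect which versioned symbols a binary needs):

```c
/* glibc_version.c - print the glibc version the running system provides.
 *
 * The versioned symbols a dynamically linked binary requires can be listed
 * with something like:  objdump -T ./myprog | grep GLIBC_
 * A binary linked against glibc 2.x runs on any system whose glibc
 * satisfies all of those versions, i.e. any 2.y with y >= x.
 */
#include <gnu/libc-version.h>
#include <stdio.h>

int main(void)
{
    printf("runtime glibc version: %s\n", gnu_get_libc_version());
    return 0;
}
```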
If you have -l options, static linking is always safe. If you are dynamically linked, you have to ensure that (1) the library is present on the target system, and (2) has a compatible version. Your Mileage Might Vary as to whether the target distro has what you need.
This might be the wrong forum, but it's filled with so many smart people that someone might know a solution.
One of my customers has given me several assignments that require quite similar yet very different development setups (different versions of class libraries and such), and the problem is that each time I need to switch between the projects I have to do a lot of configuring to get the compilation to work and avoid pulling in the wrong versions of the tools used.
If I don't get it right, there might be a lot of cleanup afterwards.
It takes me at least an hour to switch projects. Often several.
Now I realize that the customer has an issue with a lot of branching in their setup and they are working on that, but that's a long process.
So, my question is: is there some tool that allows me to take "snapshots" of working project environments and switch between them?
I'm working on windows.
There are many options. One of them is virtualization: running an entire operating system on top of your operating system. It will emulate a fake hard drive, fake network card, etc. This will run an entire desktop 'in a box'.
In recent years containerization has become very popular: running a separate environment (file system, OS configuration) while leaving the heavy stuff, such as networking and hardware, to the host (in your case, Windows). This is geared towards running a single application (though this is flexible) 'in a box'.
In your case, since you need to encapsulate an entire development environment, it would seem to be best to go with a virtual machine.
A no-cost solution is VirtualBox. Paid alternatives such as VMware may offer better usability, performance or features.
If your class libraries are in a specific location, you could containerize the compiler: you work on your PC, then build inside of a 'box' with its own environment. Docker is often used.
Depending on what language you are programming in, there may be a specific solution. Python has VirtualEnv, for example.
What are the differences between the different Rebol 3 branches, especially with the new REN branch?
Is it the platforms they'll run on, the feature set, code organization, the C standard compliance?
This is an answer destined to become outdated, hence set to Community Wiki. This information is as of Sep-2015. So if updating this answer after some time has passed, please modify the date as well.
Binary download of Rebol3 from rebol.com
Last build was 5-Mar-2011 and pre-dates the open source release.
No GUI support, no HTTPS support, no serial port support, no UDP support, no smart console...
No 64-bit builds. Binaries are for Windows x86, OS/X (PPC or x86), Linux (x86 or PPC), FreeBSD x86.
While Rebol2 binaries are archived for many "esoteric" systems (BeOS, AIX, Windows DEC Alpha, QNX, Solaris...) similar binaries were not provided for Rebol3. The only "weird" build is for Amiga, and only an OS4 PowerPC Amiga. No successful builds of Rebol3 for Amiga emulators have been reported.
Open source release of Rebol3 on Github rebol/rebol
Open-sourcing was on 12-Dec-2012.
The rebol.com binary downloads were not rebuilt as part of this release. However, a community member (#earl here on SO) created a build farm at rebolsource.net that follows this GitHub master whenever it updates. Given that GitHub's rebol/rebol master hasn't been updated since March 2014, this dynamism is currently underused.
Building the source at the time of release produced an executable not distinguishable (?) in functionality from the builds of 5-Mar-2011. This suggests few changes to the source were made besides some cleanup and Apache-licensing edits to prepare for publication.
Minor patches and bugfixes were integrated sporadically, with most PRs sitting idle. Last PR accepted at time of writing was Mar 3, 2014, which is over a year ago.
The most noticeable "breaking" PR that did get approved was to repurpose the FUNCTION name. It was considered to be worth breaking the old arity 3 form to let the word be taken for the much more useful implementation as locals-gathering FUNCT. (This also brought Rebol in alignment with Red, whose FUNCTION is arity 2 and acts similarly.) FUNCT was kept around as-is for legacy code.
The most major non-breaking PR that was taken is probably not requiring blocks around IF, UNLESS, or EITHER bodies. This has been received well among those who know it's there, as fitting the freeform and non-boilerplate philosophy of the language. It allows some code constructs to get "prettier" and gives programmers more choice, while it doesn't seem to cause any more problems than anything else. It's certainly less of a speedbump than if [condition] [...], in fact it seems almost no one knows this feature got added, so it must not be biting anyone. (If anyone can bend ears over at Red to make sure it gets IF and IF/ONLY then that would be ideal.)
RETURN/REDO was removed. Rationale was that it permitted functions to effectively behave with variable arity, and that this was unnecessary and took terra firma away by no longer being able to predict a function's arity from its spec. Perhaps this stance warrants a second look...as Lisp users who are pressuring for the addition of Lisp-style macros seemingly aren't worried about that very much. (Here in the StackExchange universe, this provoked a Programmers.SE question Would Rebol (or Red) benefit from Lisp-style Macros?, which hasn't gotten much in the way of answers yet.)
The fork by Saphirion: "Saphir"
Prior to the open-sourcing of Rebol, Saphirion AG had a special relationship with Rebol technologies. They had access to the source and were taking responsibility for most of the development work for Rebol3 GUI features. They also added several other things like HTTPS.
Saphir is available as a binary download from their website, but only provided for 32-bit Windows. There was at one time an experimental .APK for Android from Saphirion.
Some (but not all) of Saphir's source was released after the open-sourcing. Notable omissions were the android build and some Rebol3 code for encapping...a way of injecting compressed scripts and resources into binaries of the interpreter without needing to recompile it.
(Note: Under Apache2 license there is no requirement to release source code for one's derived work.)
"Community" Integration at Rebolsource on GitHub
With the GitHub rebol/rebol being held up on integrations, a fork at rebolsource/r3 was established to be a "community build" where work could be staged.
Rebolsource changes were conservative, seemingly aimed toward showing process for how GitHub's rebol/rebol might adopt changes "in the spirit in which Rebol was conceived" should that repository be delegated to the community. (For that spirit, see this.) Hence it integrated non-controversial bugfixes and tweaks, instead of large third-party cryptography libraries for implementing HTTPS. Also: no allowance for adding build dependencies besides a C compiler (no GNU autotools, for instance).
Binaries for the community build were produced on an as-needed basis for those requesting them who could not build it themselves.
Atronix Engineering's Rebol "3.0" at Github zsx/r3
Atronix is an industrial automation solutions provider that uses Rebol. How they do so is described in a video here by David den Haring, director of Engineering, and their ZOE software is built on their version of Rebol.
After the open sourcing, Atronix partnered with Saphirion to port the GUI to Linux. Atronix publishes their source publicly as it is developed, and David den Haring notes in the video above that they have only one proprietary component they developed (an industrial control driver). Other than that they are happy to share the source for all Rebol development they do.
Atronix integrated the 64-bit patches from Rebolsource, created a Windows 64-bit target, and offer up-to-date binaries of their development branch for Windows and Linux x86/x64, as well as Linux ARMv7.
Besides having the features of Saphir, the Atronix build added support for CALL with /INPUT, /OUTPUT, /ERROR. It also added a Foreign Function Interface, implementing LIBRARY!, ROUTINE! and STRUCT! for communicating with non-Rebol dynamic libraries. It brings in encapping support as well on Windows and Linux.
Rebol's "religion" was at times at odds with expedience, so the Rebol-based build process was replaced when needed by hand-edited makefiles and Visual Studio projects. The FFI library introduced a dependency on GNU autotools to build.
All Atronix builds include the GUI, so there is no "Core" build. And again, only Linux and Windows.
Ren-C
(Bias Note: This fork is the initiative #HostileFork started, knows the most about, and will speak most enthusiastically about.)
Ren-C started as an extraction of a Core build out of Atronix's codebase. That gave it features like HTTPS, the enhanced CALL, and the Foreign Function Interface on essentially all the platforms that Rebolsource was able to build for. Updates Jul/Sep-2015: Ren/C supports line continuations in the console, user infix functions, several bugfixes...
Ren-C makes large-scale changes and fixes fundamental issues in R3-Alpha, which are tracked on a Trello that provides more information. There is a new FAQ as a GitHub wiki. Critical issues like definitionally-scoped returns have been solved, with continuous work on other outstanding problems.
Though Atronix's R3/View required some additional dependencies, Ren/C pushed back to being able to be built with nothing besides a C compiler, and eliminated all handmade makefiles/projects.
Beyond Windows, Linux and Mac in both 32-bit and 64-bit variants, Ren/C has also been built for smaller players like HaikuOS and yes, even Syllable. This is interesting more for the demonstration of how broadly turnkey builds of the C89 code work (simply as make -f makefile.boot) as opposed to there being a particularly large userbase of those particular OSes!
From the point of view of language rigor, Ren/C is pushing on modern techniques. Although it can still build as C89, it can be built as C99 and C11 as well. It has also been verified to build as C++98 through C++14, and with some strategic modifications under #ifdef __cplusplus it can take advantage of modern C++ as a kind of static analysis tool over the C code. Warnings are raised, type errors all fixed up, and it's "const correct". The necessary changes were carefully considered to make Rebol's baseline C code not just more correct but cleaner and clearer source across the board.
From a point of view of C developers, Ren/C should be stable, organized, and commented enough for anyone who knows C to "modify with confidence" and try new features. That means being able to implement definitionally scoped returns (actually written, but not pushed), or try developing features like NewPath.
From a point of view of architecture, Ren/C is intended to not have an executable at all...but to be a library for embedding a Rebol interpreter into other programs. It is now the basis for Ren/C++, which was designed to anticipate working with Red as well.
From the point of view of testing, Ren/C intends to whip everything into shape for engineering rigor and zero bug tolerance. This means avoiding practices like zero-filling memory to obscure uninitialized memory accesses, and using Address Sanitizer, Valgrind, and a test suite that can pass the highest settings on both.
While enabling all the extra functionality has made Ren/C's executable nearly twice the size of Rebolsource's, there has not yet been any audit to see how this can be brought down. It has been confirmed, for instance, that there are duplicate copies of Zlib and of PNG encoding/decoding (Saphirion included LodePNG, likely to work around a bug in the existing PNG code because that was easier than fixing it...yet did not mothball the previous code). Also, being able to do a build which selectively integrates only the codecs you want to use is on the agenda.
Ren/C currently has the stakeholders from Atronix and Rebolsource participating in its development and direction, which strengthens the likelihood that it may evolve into "the" Rebol Core. It is now being linked in as the code backing Ren Garden, and using a similar approach it may be set up as the library used by Atronix's R3/View...then Rebolsource...and perhaps ultimately rebol/rebol itself.
The fork by Oldes
(Bias Note: this edit is added 28-Feb-2019 by Oldes himself)
Forked from the community branch. The main focus is on keeping the code close to Carl's original release, without blindly taking everything from Atronix/Saphirion, but still slowly picking up the good things from those branches.
Unlike Ren-C, this version is not trying to introduce new syntax, but rather to stay closer to the original Rebol2 and the new Red language.
I am simply reading information off the interwebs, currently the CMake About page, and I need this information to fill in the gaps; it helps to see the big picture.
Surely the answer is straightforward, I hope. What is a native build environment?
Context: I need to know how to build software on my machine (CodeBlocks, etc.), why I need to do this, the advantages of doing this, etc. But first I need to understand every piece of jargon I come across, and I could not find any explanation of exactly what a "native build environment" is, although I can speculate to some degree.
"Native" as in "runs directly in the host operating system" and not "runs in a virtual machine or emulator."
The particular point that CMake's about page is trying to convey is the manner in which CMake achieves cross-platform functionality: specifically not by virtualizing, but by cooperating/collaborating directly with the host system, and the "normal" ways the host system is used to doing things.
Is the build environment then just the directory holding all the garbage needed for a compiler to build the software then?
That's an oversimplification – there's nothing to say that it's a single directory – but more or less, yes. The term is not jargon, it literally means "the state of the world" (aka, environment) needed for the build.
What would you call the other thing then, Non-native?
Sure, or virtualized, or emulated, or whatever other intermediate layer has been added.
Why do we need the distinction as well?
Why not? It's useful to have a concise, clear, simple term so we can communicate precisely and with minimal confusion and ambiguity.
Why 'non-native'? If you haven't already figured this out - there is something called cross-compilation.
Simply put, if I don't have access to target hardware (or an equivalent virtual machine) on which the software needs to run, how do I develop this software on my host and package it to run on that target?
Cross-compilation addresses this by providing the necessary tools that perform the translation (or other important low-level work) to give you the final software. Such a software development environment is called non-native.
Well, I believe we need the term to state the technique.
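For instance, a cross-compilation (non-native) build looks roughly like the sketch below; the arm-linux-gnueabihf-gcc toolchain named in the comments is one common example chosen for illustration, not something the text above prescribes:

```c
/* hello.c - the same source, built natively and cross-compiled.
 *
 * Native build (produces a binary for the host itself):
 *     gcc hello.c -o hello
 *
 * Cross build for 32-bit ARM Linux (requires an installed cross toolchain):
 *     arm-linux-gnueabihf-gcc hello.c -o hello_arm
 *
 * `file hello` reports a binary for the host architecture, while
 * `file hello_arm` reports an ARM executable that the host cannot run
 * directly (an emulator such as qemu would be needed).
 */
#include <stdio.h>

int main(void)
{
    puts("built in a native or non-native build environment");
    return 0;
}
```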
Suppose I have a piece of software and I want to make cross-platform plugins. You compile the plugin for a virtual machine, and any platform running my software would be able to run this code.
I am wondering if it is possible to use the LLVM interpreter and bytecode for this purpose. Also, I am wondering whether it makes sense to use LLVM for this purpose instead of something else, i.e. is this what LLVM was made for?
I'm not sure that LLVM was designed for it. However, I doubt there is anything that hasn't been done using LLVM1
Other virtual-machine-based script engines are created specifically for this job:
Lua is very popular
Wikipedia lists some other Extension/embeddable languages under the Scripting language entry
If you're looking for embeddable virtual machines:
IKVM supports embedding JVM and CLR in a bridged mode (interoperable)
Parrot supports embedding (and includes a Python interpreter; mind you, you can just run python bytecode images)
Perl has similar architecture and supports embedding
Javascript supports embedding (not sure about the architecture of v8, but I guess it would use a virtual machine)
Mono's CLR engine supports embedding: http://www.mono-project.com/Embedding_Mono
1 including compiling C++ to JavaScript to run in your browser...
There is VMIR (https://github.com/andoma/vmir) which is a LLVM bitcode interpreter / JIT engine that's intended to be embedded into other apps.
Disclaimer: I'm the author of it and it's still work-in-progress, but it works reasonably well.
In theory, there exists a limited subset of LLVM IR which can be portable across various platforms. You shall not specify alignments, you shall not bitcast pointers to integral types, you must avoid intrinsics, etc. This means you can't immediately use code generated by a stock C compiler (llvm-gcc, Clang, whatever) unless you specify a limited target for it and implement sanitising LLVM passes. Another issue is that the bitcode format from different LLVM versions is not guaranteed to be compatible.
In practice, I would not go there. Mono is a reasonably small, embeddable, fast VM, and the whole .NET stack of tools is available for it. The VM itself is pretty low-level (as long as you do not care about verifiability).
LLVM includes an interpreter, so if you can build this interpreter for your target platforms, you can then evaluate LLVM bitcode on the fly.
It's apparently not so fast though.
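For what it's worth, the basic workflow looks something like the sketch below (assuming clang and the lli tool that ships with LLVM are available; exact flags can differ between LLVM versions):

```c
/* plugin.c - compiled to LLVM bitcode rather than native machine code.
 *
 *     clang -O2 -emit-llvm -c plugin.c -o plugin.bc   # produce bitcode
 *     lli plugin.bc                                   # interpret/JIT it
 *
 * In principle the same plugin.bc can be run by lli on another platform,
 * subject to the portability caveats mentioned above (ABI assumptions,
 * intrinsics, bitcode version compatibility).
 */
#include <stdio.h>

int main(void)
{
    puts("running from LLVM bitcode");
    return 0;
}
```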
In their classic discussion about LLVM vs. libJIT (one you do not want to miss if you're a fan of open source, LLVM, or compilers), which happened long before LLVM became famous and established, the author of libJIT, Rhys Weatherley, raised this particular issue: he stated that LLVM is not suitable for embedding, while Chris Lattner, the author of LLVM, stated the opposite, namely that it is modular and you can use it in any possible fashion, including embedding only the parts you need.
Windows still uses DLLs and Mac programs seem not to use DLLs at all. Are there benefits or disadvantages to using either technique?
If a program installation includes all the DLLs it requires so that it will work 100% well, will it be the same as statically linking all the libraries?
MacOS X, like other flavours of Unix, uses shared libraries, which are just another form of DLL.
And yes, both are advantageous, as the DLL or shared library code can be shared between multiple processes. The OS does this by loading the DLL or shared library and mapping it into the virtual address space of each process that uses it.
On Windows, you have to use dynamically loaded libraries because the GDI and USER libraries are available as DLLs only. You can't link either of those in or talk to them using a protocol that doesn't involve dynamic loading.
On other OSes, you want to use dynamic loading anyway for complex apps, otherwise your binary would bloat for no good reason, and it increases the probability that your app would be incompatible with the system in the long run (however, in the short run static linking can somewhat shield you from tiny breaking changes in libraries). And you can't link in proprietary libraries on OSes which rely on them.
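For reference, runtime loading of a shared library on the Unix side looks roughly like this (a sketch; libm is chosen only because it exists on practically every Linux system, and the library file name differs on macOS):

```c
/* dlopen_demo.c - load a shared library at runtime on Linux.
 * Build with:  gcc dlopen_demo.c -o dlopen_demo -ldl
 */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *lib = dlopen("libm.so.6", RTLD_LAZY);  /* the math library */
    if (lib == NULL) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Look up the "cos" symbol by name and call it. */
    double (*cosine)(double) = (double (*)(double))dlsym(lib, "cos");
    if (cosine == NULL) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(lib);
        return 1;
    }

    printf("cos(0.0) = %f\n", cosine(0.0));
    dlclose(lib);
    return 0;
}
```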
Windows still uses DLLs and Mac programs seem not to use DLLs at all. Are there benefits or disadvantages to using either technique?
Any kind of modularization is good since it makes updating the software easier; you do not have to update the whole program binary if a bug is fixed. If the bug is in some DLL, only that DLL needs to be updated.
The only downside, IMO, is that you introduce extra complexity into the development of the program, e.g. if a DLL is a C or C++ DLL, you have to deal with different calling conventions, etc.
If a program installation includes all the DLLs it requires, will it be the same as statically linking all the libraries?
More or less, yes. It depends on whether you are calling functions in a DLL as though they were statically linked (i.e. resolved automatically at load time). The DLL could just as well be a "free-standing" dynamic library that you can only access via LoadLibrary() and GetProcAddress(), etc.
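Here is a minimal sketch of that "free-standing" style on Windows (mylib.dll and its exported add function are hypothetical names used only for illustration):

```c
/* loadlib_demo.c - access a DLL purely at runtime, without an import library. */
#include <windows.h>
#include <stdio.h>

typedef int (*add_fn)(int, int);

int main(void)
{
    HMODULE lib = LoadLibraryA("mylib.dll");   /* hypothetical DLL */
    if (lib == NULL) {
        fprintf(stderr, "could not load mylib.dll\n");
        return 1;
    }

    /* Resolve the exported symbol by name. */
    add_fn add = (add_fn)GetProcAddress(lib, "add");
    if (add == NULL) {
        fprintf(stderr, "symbol 'add' not found\n");
        FreeLibrary(lib);
        return 1;
    }

    printf("2 + 3 = %d\n", add(2, 3));
    FreeLibrary(lib);
    return 0;
}
```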
One big advantage of shared libraries (DLLs on Windows, .so on Unix) is that you can rebuild the library and its consumers separately, while with static libraries you have to rebuild the library and then relink all the consumers, which is very slow on Unix systems and not very fast on Windows.
MacOS software uses "DLLs" as well; they are just named differently (shared libraries).
DLLs make sense if you have code you want to reuse in different components of your software. Mostly this applies to big software projects.
Static linking makes sense for small single-component applications, when there is no need for code reuse. It simplifies distribution since your component has no external dependencies.
Besides memory/disk space usage, another important advantage of using shared libraries is that updates to the library will be automatically picked up by all programs on the system which use the library.
When there was a security vulnerability in the InfoZIP ZIP libraries, an update to the DLL/.so automatically made all software safe which used these. Software that was linked statically had to be recompiled.
Windows still uses DLLs and Mac programs seem not to use DLLs at all. Are there benefits or disadvantages to using either technique?
Both use shared libraries, they just use a different name.
If a program installation includes all the DLLs it requires so that it will work 100% well, will it be the same as statically linking all the libraries?
Somewhat. When you statically link libraries into a program, you get a single, very big file; with DLLs, you have many files.
The statically linked file won't need the "resolve shared libraries" step (which happens while the program loads). A long time ago, loading a program meant that the whole thing was first loaded into RAM and only then did the "resolve shared libraries" step happen. Today, only the parts of the program which are actually executed are loaded, on demand. So with a static program you don't need to resolve the DLLs, and with DLLs you don't need to load them all at once. Performance-wise, they should be on par.
Which leaves the "DLL Hell". Many programs on Windows bring all the DLLs they need and write them into the Windows directory. The net effect is that the last installed program works and everything else might be broken. But there is a simple workaround: install the DLLs into the same directory as the EXE. Windows will search the current directory first and then the various Windows paths. This way, you'll waste a bit of disk space, but your program will work and, more importantly, you won't break anything else.
One might argue that you shouldn't install DLLs which already exist (with the same version) in the Windows directory but then, you're again vulnerable to some bad app which overwrites the version you need with something that breaks your neck. The drawback is that you must distribute security fixes for your app yourself; you can't rely on Windows Update or similar things to secure your code. This is a tight spot; crackers are making lots of money from security issues and people will not like you when someone steals their banking data because you didn't issue security fixes soon enough.
If you plan to support your application very tightly for many years, say 20, installing all DLLs in the program directory is for you. If not, then write code which checks that suitable versions of all DLLs are installed and tells the user about it, so they know why your app suddenly started to crash.
Yes, see this text:

Dynamic linking has the following advantages:

Saves memory and reduces swapping. Many processes can use a single DLL simultaneously, sharing a single copy of the DLL in memory. In contrast, Windows must load a copy of the library code into memory for each application that is built with a static link library.

Saves disk space. Many applications can share a single copy of the DLL on disk. In contrast, each application built with a static link library has the library code linked into its executable image as a separate copy.

Upgrades to the DLL are easier. When the functions in a DLL change, the applications that use them do not need to be recompiled or relinked as long as the function arguments and return values do not change. In contrast, statically linked object code requires that the application be relinked when the functions change.

Provides after-market support. For example, a display driver DLL can be modified to support a display that was not available when the application was shipped.

Supports multilanguage programs. Programs written in different programming languages can call the same DLL function as long as the programs follow the function's calling convention. The programs and the DLL function must be compatible in the following ways: the order in which the function expects its arguments to be pushed onto the stack, whether the function or the application is responsible for cleaning up the stack, and whether any arguments are passed in registers.

Provides a mechanism to extend the MFC library classes. You can derive classes from the existing MFC classes and place them in an MFC extension DLL for use by MFC applications.

Eases the creation of international versions. By placing resources in a DLL, it is much easier to create international versions of an application. You can place the strings for each language version of your application in a separate resource DLL and have the different language versions load the appropriate resources.

A potential disadvantage to using DLLs is that the application is not self-contained; it depends on the existence of a separate DLL module.
From my point of view, a shared component has some advantages that are sometimes perceived as disadvantages.
A shared component defines interfaces in your process, so you are forced to decide which components/interfaces are visible outside and which are hidden. This automatically defines which interfaces have to be stable and which do not and can therefore be refactored without affecting any code outside the component.
Memory management, in the case of C++ and Windows, must be thought through carefully. Normally you should not allocate memory in a DLL that isn't also freed in the same DLL; if you do, your component may fail when different runtimes or compiler versions are used (see the sketch at the end of this answer).
So I think that using shared components will help the software to be better organized.
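The sketch below illustrates the memory-ownership point: the component that allocates is also the one that frees (widget_create/widget_destroy are hypothetical names, not from any particular library):

```c
/* widget_dll.c - keep allocation and deallocation on the same side of the
 * DLL boundary so callers never free memory with a different runtime. */
#include <stdlib.h>

#ifdef _WIN32
#define WIDGET_API __declspec(dllexport)
#else
#define WIDGET_API
#endif

typedef struct widget {
    int value;
} widget;

/* The DLL allocates the object... */
WIDGET_API widget *widget_create(int value)
{
    widget *w = malloc(sizeof *w);
    if (w != NULL)
        w->value = value;
    return w;
}

/* ...and the same DLL frees it, so the caller never touches the DLL's heap. */
WIDGET_API void widget_destroy(widget *w)
{
    free(w);
}
```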