How to debug C++/CLI MFC app that crashes before InitInstance - .net-4.0

I have this monstrous C++ application built using MFC (mainly) and COM. It links to several libraries and does a lot of scientific computing. Now I want to add some new features to it, and as an organizational policy we are developing everything new in .NET. So this new UI feature is going to be built using WPF and will be consumed by this existing C++ application.
I know how to use a WPF control in a C++ application, so that's not the problem. The problem is that when I turn on CLR support for this project, the linking stage takes a long time (around 10 minutes) to produce a mixed-mode executable. In the end it manages to do so and successfully creates the executable. But whenever I launch this executable, it crashes. I tried to debug InitInstance, but it crashes somewhere before that. I am a little stumped on what to try next.
Does anybody have any idea what might be the cause of this?
The target framework of the mixed-mode assembly is 4.0 (as it should be), so THIS is not the problem here.
FYI, ILDasm fails to load this exe as well. It takes a lot of time, gives no error, but it doesn't load it either. This gives me the impression that somehow the managed image is not created properly.

Not sure if this is necessary, but it doesn't hurt either. New MFC projects have this in the constructor of your CWinApp-derived class:
#ifdef _MANAGED
// If the application is built using Common Language Runtime support (/clr):
// 1) This additional setting is needed for Restart Manager support to work properly.
// 2) In your project, you must add a reference to System.Windows.Forms in order to build.
System::Windows::Forms::Application::SetUnhandledExceptionMode(System::Windows::Forms::UnhandledExceptionMode::ThrowException);
#endif
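If you need to find out where it dies before InitInstance, one trick is to break into the debugger during C++ static initialization, which runs long before InitInstance (also set Debugger Type to Mixed in the project's Debugging properties so you see both native and managed frames). A minimal sketch, assuming you can add a translation unit to the project; the struct name is hypothetical, while IsDebuggerPresent() and __debugbreak() are standard Win32/MSVC:
#include <windows.h>

// Breaks into the debugger during static initialization, well before
// InitInstance. Keep this block native: managed code here would run
// under the loader lock, which is itself a classic mixed-mode crash.
#pragma managed(push, off)
struct EarlyBreak
{
    EarlyBreak()
    {
        if (::IsDebuggerPresent())
            ::__debugbreak();   // step from here to find the faulting initializer
    }
};
static EarlyBreak g_earlyBreak;
#pragma managed(pop)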

Related

Updating an old C++ Builder project to a new IDE

I'm currently trying to compile an old program (made with C++ Builder 2 or 3) with the "current" Embarcadero RAD Studio XE2.
So I was wondering whether there is an easy way to use the old code, as Borland once claimed to be fully compatible with lower versions... however, I couldn't find a project file, only source code (.cpp, .h, .res, etc.).
I tried to "add to project" the main .cpp, but there seem to be some wrong include paths... it also seems to use the OWL package and include its important source files...
I'm a bit confused about which type of main project I have to open first, since you need to create a new project before adding the source to it. As the running .exe has a GUI, I tried a form window first, but it may be better to use a console or service project, as the real form is produced within the code, as far as I understand.
So, after installing OWL and correcting the include-paths, do you think it should be running fine? Or is there something else to take care of?
If your old project was using OWL, you're probably well outside of the supported upgrade path.
That being said, valid C++ code should still compile and work, and I've heard of people using OWL with recent versions of C++Builder (via OWLNext).
Regarding your confusion as to which type of project to use, I believe a console application would be your best bet. A forms application is completely wrong: it will bring in the VCL and give you no end of problems trying to reconcile the different windowing systems. A service application is a completely different beast as well, and isn't meant for GUI applications. A console application should work, but you'll need more than that; the OWLNext project has a wiki that should help quite a bit.

OpenNETCF and Windows Embedded Standard O/S

If I have an application that is written in the .NET Compact Framework (and runs on Windows CE) and is theoretically compatible with the Windows Embedded Standard OS, would it still be compatible if it makes use of OpenNETCF functionality?
Things like running .exe files with the help of OpenNETCF, for example. I am assuming that OpenNETCF uses P/Invoke under the hood, which might make the application incompatible with operating systems other than Windows CE.
I don't use P/Invoke in my own code, but I can't answer for sure whether OpenNETCF does or does not.
OpenNETCF does use P/Invoke extensively.
It is effectively a wrapper for core OS functionality in Windows CE and its derivatives that is not otherwise exposed by the Compact Framework. In practice this means extensive P/Invoking of coredll.dll, the basic OS module for Windows CE.
Windows Embedded Standard is based on Windows XP. For this reason I would not expect you to be able to use OpenNETCF.
Depending on the version you're using, you may be able to get hold of the OpenNETCF code here (or buy the latest, of course) and see what's going on under the hood. You may also find that the calls you are making to OpenNETCF are actually implemented anyway when compiling for Windows Embedded Standard.
One way to approach this is to make another project targeting this platform, containing exactly the same code files but no reference to OpenNETCF, then work through fixing the compile errors.
You can add a conditional compilation symbol to either the CE project or the Windows Embedded project and then fix the errors like so (this example is not for OpenNETCF, but you get the idea):
// Requires: using System.Reflection;
public static string ExecutingAssembly
{
    get
    {
#if WindowsCE
        // The Compact Framework does not support Assembly.Location,
        // so fall back to the assembly's CodeBase.
        return Assembly.GetExecutingAssembly().GetName().CodeBase;
#else
        return Assembly.GetExecutingAssembly().Location;
#endif
    }
}
Obviously you will then have to create a build for each platform, as the output assemblies will now differ.
As Chris points out, the SDF (Smart Device Framework) makes heavy use of coredll P/Invokes. That's not to say everything does, but it's certainly a minefield. I tend to have a CF project and a full-framework (FFX) project, and in places where they overlap I use aliasing, like this:
#if WindowsCE
using Thread = OpenNETCF.Threading.Thread2;
#else
using Thread = System.Threading.Thread;
#endif
Then in the code you just do your normal
var thread = new Thread(...);
And things work out.
Now ages ago we did start an interesting side project of creating a coredll "shim" for the desktop. What that means is that a p/invoke to "coredll" on the desktop would actually call that DLL, which would in turn marshal the calls off to kernel32, user32 or whatever. Our testing for what we implemented (and there's a fair bit there) showed that it worked just fine, so if you're using a limited subset of APIs, just dropping it on the PC might make the CF assembly "just work".
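For the curious, the bulk of such a shim can be plain linker forwarding. A minimal sketch, assuming MSVC, whose linker supports forwarded exports (via a .def file's EXPORTS section or /export:name=module.name); the APIs shown are illustrative, and anything CE-specific with no desktop equivalent still needs a hand-written stub:
// coredll_shim.cpp - built as coredll.dll and placed next to the EXE.
// Each CE-style import is forwarded to the desktop module that
// actually implements it.
#pragma comment(linker, "/export:CreateFileW=kernel32.CreateFileW")
#pragma comment(linker, "/export:WriteFile=kernel32.WriteFile")
#pragma comment(linker, "/export:RegOpenKeyExW=advapi32.RegOpenKeyExW")
#pragma comment(linker, "/export:MessageBoxW=user32.MessageBoxW")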

C++/cli - Native error trap for "Framework not available" Errors?

When creating DLLs (add-ins) for a third-party program that loads native DLLs dynamically, is there a way, in a mixed-mode DLL (C++/CLI), to natively detect that the .NET Framework is not available, so that the parent program dynamically trying to consume this DLL does not throw an error?
It might be possible to do something with a custom entry point in the DLL, but I expect you are walking in 'undocumented' territory.
The only 'simple' way I can think of to do this would be to create a native shim DLL that performs the check and handles the condition in whatever way you see fit. If the framework is present, it in turn loads the real plugin DLL and mirrors all calls through to it.
How easy this is will depend on the complexity of the plugin interface you are working with.
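A minimal sketch of that shim, staying fully native; the plugin file name and entry point are hypothetical placeholders, and the framework check simply probes mscoree.dll (present whenever any .NET Framework is installed) for CLRCreateInstance (added with .NET 4):
#include <windows.h>

typedef int (__cdecl *PluginEntryFn)(void);

static HMODULE g_realPlugin = nullptr;

// Rough check for an installed runtime: mscoree.dll exists on any
// machine with a .NET Framework, and CLRCreateInstance appears with 4.0.
static bool IsFrameworkAvailable()
{
    HMODULE mscoree = ::LoadLibraryW(L"mscoree.dll");
    if (!mscoree)
        return false;
    bool hasV4 = ::GetProcAddress(mscoree, "CLRCreateInstance") != nullptr;
    ::FreeLibrary(mscoree);
    return hasV4;  // tighten or relax to match the runtime your add-in needs
}

// Exported in place of the real plugin's entry point (name is hypothetical).
extern "C" __declspec(dllexport) int __cdecl PluginEntry(void)
{
    if (!IsFrameworkAvailable())
        return -1;  // report "not available" instead of crashing the host

    if (!g_realPlugin)
        g_realPlugin = ::LoadLibraryW(L"RealPlugin.dll");  // the mixed-mode DLL
    if (!g_realPlugin)
        return -1;

    PluginEntryFn entry = reinterpret_cast<PluginEntryFn>(
        ::GetProcAddress(g_realPlugin, "PluginEntry"));
    return entry ? entry() : -1;
}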

How to create a cross-platform DLL in .net

I have an interesting problem: where I work, we've built a home-grown ERP system in VB6 that we are slowly moving over to VB.NET. Some projects are already in .NET: we have a hand-held C# project that uses a web service to talk to our database, and I've built some reporting screens using Crystal plus some smaller maintenance screens.
As we have been plotting out the conversion, we want a way to separate our business logic from the UI, so that the UI can be a WinForms/WebForms or Smart Device project. Is this even possible? When I try to reference the DLL in a test project, it gives me this error when trying to debug using an emulator:
Deployment and/or registration failed with error: 0x8973190e. Error writing file '%csidl_program_files%\smartdeviceproject1\system.windows.forms.dll'. Error 0x80070070: There is not enough space on the disk.
I'm not sure what it's doing... if I take my DLL out, it works fine. Does anyone know of a way I can create a DLL that can target all of these UIs without many changes?
This post here helped me a lot. Using linked projects with conditional compilation seems to work in my case.

what are the pros and cons of using a DLL?

Windows still uses DLLs, and Mac programs seem not to use DLLs at all. Are there benefits or disadvantages to either technique?
If a program installation includes all the DLLs it requires, so that it will work 100% well, will it be the same as statically linking all the libraries?
MacOS X, like other flavours of Unix, uses shared libraries, which are just another form of DLL.
And yes, both are advantageous, as the DLL or shared library code can be shared between multiple processes: the OS loads the DLL or shared library once and maps it into the virtual address space of each process that uses it.
On Windows, you have to use dynamically loaded libraries because the GDI and USER libraries are available as DLLs only. You can't link either of those in or talk to them using a protocol that doesn't involve dynamic loading.
On other OSes, you want to use dynamic loading anyway for complex apps; otherwise your binary would bloat for no good reason, and it increases the probability that your app would be incompatible with the system in the long run (however, in the short run, static linking can somewhat shield you from tiny breaking changes in libraries). And you can't statically link in proprietary libraries on OSes which rely on them.
Windows still use DLLs and Mac programs seem to not use DLL at all. Are there benefits or disadvantages of using either technique?
Any kind of modularization is good, since it makes updating the software easier: you do not have to update the whole program binary when a bug is fixed. If the bug appears in some DLL, only that DLL needs to be updated.
The only downside, imo, is that you introduce extra complexity into the development of the program, e.g. if a DLL is a C or C++ DLL you have to deal with different calling conventions, etc.
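To illustrate the calling-convention point: pinning down linkage and convention explicitly at the DLL boundary removes most of that ambiguity. A small sketch with a hypothetical export:
// extern "C" turns off C++ name mangling; __stdcall fixes who cleans up
// the stack, so any language following the convention can call this.
// Note: on 32-bit builds the exported name is still decorated to
// _ComputeChecksum@8 unless a .def file is used to control it.
extern "C" __declspec(dllexport)
int __stdcall ComputeChecksum(const unsigned char* data, int length)
{
    int sum = 0;
    for (int i = 0; i < length; ++i)
        sum = (sum + data[i]) & 0xFFFF;  // toy checksum, for illustration only
    return sum;
}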
If a program installation includes all the DLLs it requires, will it be the same as statically linking all the libraries?
More or less, yes. It depends on whether you call the DLL's functions as if they were statically linked (load-time dynamic linking), or whether the DLL is a free-standing dynamic library that you only access via LoadLibrary() and GetProcAddress().
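For illustration, the free-standing variant looks like this in C++; the DLL name and function are hypothetical:
#include <windows.h>
#include <cstdio>

// Explicit run-time linking: no import library and no load-time dependency;
// the program starts even if mathlib.dll is missing.
typedef int (__cdecl *AddFn)(int, int);

int main()
{
    HMODULE lib = ::LoadLibraryW(L"mathlib.dll");
    if (!lib) {
        std::printf("mathlib.dll not found\n");
        return 1;
    }
    AddFn add = reinterpret_cast<AddFn>(::GetProcAddress(lib, "Add"));
    if (add)
        std::printf("2 + 3 = %d\n", add(2, 3));
    ::FreeLibrary(lib);
    return 0;
}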
One big advantage of shared libraries (DLLs on Windows, .so on Unix) is that you can rebuild the library and its consumers separately, while with static libraries you have to rebuild the library and then relink all the consumers, which is very slow on Unix systems and not very fast on Windows.
MacOS software uses "DLLs" as well; they are just named differently (shared libraries).
DLLs make sense if you have code you want to reuse in different components of your software. Mostly this matters in big software projects.
Static linking makes sense for small, single-component applications where there is no need for code reuse. It simplifies distribution, since your component has no external dependencies.
Besides memory/disk space usage, another important advantage of using shared libraries is that updates to the library will be automatically picked up by all programs on the system which use the library.
When there was a security vulnerability in the InfoZIP ZIP libraries, an update to the DLL/.so automatically made all software that used them safe. Software that was linked statically had to be recompiled.
Windows still use DLLs and Mac programs seem to not use DLL at all. Are there benefits or disadvantages of using either technique?
Both use shared libraries; they just use a different name.
If a program installation includes all the DLL it requires so that it will work 100% well, will it be the same as statically linking all the libraries?
Somewhat. When you statically link libraries into a program, you get a single, very big file; with DLLs, you will have many files.
The statically linked file won't need the "resolve shared libraries" step (which happens while the program loads). A long time ago, the whole program was loaded into RAM before it started, and the "resolve shared libraries" step happened as part of that. Today, only the parts of the program which are actually executed are loaded, on demand. So with a static program, you don't need to resolve the DLLs; with DLLs, you don't need to load everything at once. Performance-wise, they should be roughly on par.
Which leaves the "DLL Hell". Many programs on Windows bring all the DLLs they need and write them into the Windows directory. The net effect is that the last installed program works and everything else might be broken. But there is a simple workaround: install the DLLs into the same directory as the EXE. Windows searches the application's own directory before the various system paths, so this way you'll waste a bit of disk space, but your program will work and, more importantly, you won't break anything else.
One might argue that you shouldn't install DLLs which already exist (with the same version) in the Windows directory, but then you're again vulnerable to some bad app which overwrites the version you need with something that breaks your neck. The drawback is that you must distribute security fixes for your app yourself; you can't rely on Windows Update or similar mechanisms to secure your code. This is a tight spot: crackers make lots of money from security issues, and people will not like you when someone steals their banking data because you didn't issue security fixes soon enough.
If you plan to support your application very tightly for many years, say 20, then installing all DLLs in the program directory is for you. If not, write code which checks that suitable versions of all DLLs are installed and tells the user about it, so they know why your app suddenly starts to crash.
Yes, see this text:
Dynamic linking has the following advantages:
Saves memory and reduces swapping. Many processes can use a single DLL simultaneously, sharing a single copy of the DLL in memory. In contrast, Windows must load a copy of the library code into memory for each application that is built with a static link library.
Saves disk space. Many applications can share a single copy of the DLL on disk. In contrast, each application built with a static link library has the library code linked into its executable image as a separate copy.
Upgrades to the DLL are easier. When the functions in a DLL change, the applications that use them do not need to be recompiled or relinked as long as the function arguments and return values do not change. In contrast, statically linked object code requires that the application be relinked when the functions change.
Provides after-market support. For example, a display driver DLL can be modified to support a display that was not available when the application was shipped.
Supports multilanguage programs. Programs written in different programming languages can call the same DLL function as long as the programs follow the function's calling convention. The programs and the DLL function must be compatible in the following ways: the order in which the function expects its arguments to be pushed onto the stack, whether the function or the application is responsible for cleaning up the stack, and whether any arguments are passed in registers.
Provides a mechanism to extend the MFC library classes. You can derive classes from the existing MFC classes and place them in an MFC extension DLL for use by MFC applications.
Eases the creation of international versions. By placing resources in a DLL, it is much easier to create international versions of an application. You can place the strings for each language version of your application in a separate resource DLL and have the different language versions load the appropriate resources.
A potential disadvantage to using DLLs is that the application is not self-contained; it depends on the existence of a separate DLL module.
From my point of view, a shared component has some advantages that are sometimes perceived as disadvantages:
A shared component defines interfaces in your process. You are forced to decide which components/interfaces are visible outside and which are hidden. This automatically defines which interfaces have to be stable and which can be refactored without affecting any code outside the component.
Memory management with C++ on Windows must be thought through carefully: normally, memory allocated inside a DLL should also be freed inside that same DLL. If you don't follow this rule, your component may fail when different runtimes or compiler versions are used. For example:
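A sketch of that rule with a hypothetical Widget type: the DLL exports a matching create/destroy pair, so allocation and deallocation both happen in the same module, with the same runtime:
// Inside the DLL:
struct Widget { int value; };

extern "C" __declspec(dllexport) Widget* Widget_Create()
{
    return new Widget{ 0 };  // allocated with this DLL's CRT heap
}

extern "C" __declspec(dllexport) void Widget_Destroy(Widget* w)
{
    delete w;  // freed by the same CRT; the consumer must never delete it directly
}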
So I think that using shared components helps the software get better organized.