How can a DLL have zero exports?

I recently ran across a DLL installed on my system that Dependency Walker (and every other utility I tried) says has zero exports by name or ordinal, yet the file is approximately 4 MB in size. I thought the sole purpose of a DLL was to export functions for use by other code, so what would be the purpose of a DLL with no visible exports?

One way to think of a DLL is as a container for functions. Exporting a function from a DLL makes that function visible to callers outside of the DLL. While exporting functions from a DLL is perhaps the most common way to provide access to them, many platforms provide other ways to reach code that has not been exported, such as reflection in the .NET Framework and Java; in Win32, LoadLibrary / GetProcAddress can only resolve exported functions, so a DLL with no exports has to expose its functionality through some other mechanism.
Reasons for doing this vary; often it is beneficial to the developer to keep functions in a library but undesirable for those functions to be callable from external applications.

Resource-only DLL, maybe? Those are used quite often for localization purposes, for example.
EDIT: it's also possible to have a DLL with code that does something in DllMain() to somehow make its functionality available. The DLL can register itself with some global dispatcher, for example, or create named kernel objects...
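A minimal sketch of that idea, with an entirely hypothetical registration hook: the host EXE is assumed to export a function named RegisterWithDispatcher, and the DLL hands it a function pointer from DllMain() instead of exporting anything itself.

    // exportless.cpp - a DLL with an empty export table (illustrative sketch).
    #include <windows.h>

    static void DoWork()
    {
        // The real functionality lives here and is never exported.
    }

    BOOL WINAPI DllMain(HINSTANCE hinst, DWORD reason, LPVOID)
    {
        if (reason == DLL_PROCESS_ATTACH)
        {
            DisableThreadLibraryCalls(hinst);
            // Hand the host process a function pointer instead of exporting one.
            // RegisterWithDispatcher is a hypothetical export of the host EXE.
            using RegisterFn = void (*)(void (*)());
            auto reg = reinterpret_cast<RegisterFn>(
                GetProcAddress(GetModuleHandleW(nullptr), "RegisterWithDispatcher"));
            if (reg)
                reg(&DoWork);
        }
        return TRUE;
    }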

Related

Is there a way to know what functions are called from an external DLL?

I have an old set of DLLs developed in the late 90s with the Visual C++ of that time, and an application which uses them. Is there any way to know what functions (and their signatures, e.g. argument and return value types) are called from these DLLs?
There's a more general question. Is there a way to monitor all DLL calls which are made by any process in the system?
The only precise way to see what functions are used from a DLL is to debug the application that uses the DLL and inspect the stack before each call.
If you want something more generic you can log every LoadLibrary and GetProcAddress API call, but it is a daunting task.
You can also run an API monitor software like this one from Rohitab: APImonitor
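If the application links to the old DLLs through import libraries, you can at least enumerate the statically imported functions by walking the application's import table (dumpbin /imports gives the same information). A rough sketch follows; it only shows names, not signatures, and anything resolved at run time through GetProcAddress will not appear here.

    #include <windows.h>
    #include <dbghelp.h>
    #include <stdio.h>

    #pragma comment(lib, "dbghelp.lib")

    // Lists the functions an EXE (or DLL) imports, grouped by module.
    int wmain(int argc, wchar_t** argv)
    {
        if (argc < 2)
        {
            printf("usage: listimports <path-to-exe-or-dll>\n");
            return 1;
        }

        // Map the file as an image, but do not run DllMain or resolve imports.
        HMODULE mod = LoadLibraryExW(argv[1], nullptr, DONT_RESOLVE_DLL_REFERENCES);
        if (!mod)
        {
            printf("could not load %ls\n", argv[1]);
            return 1;
        }

        ULONG size = 0;
        auto imports = static_cast<PIMAGE_IMPORT_DESCRIPTOR>(
            ImageDirectoryEntryToData(mod, TRUE, IMAGE_DIRECTORY_ENTRY_IMPORT, &size));
        auto base = reinterpret_cast<BYTE*>(mod);

        for (; imports && imports->Name; ++imports)
        {
            // Name of the DLL this import descriptor refers to.
            printf("%s\n", reinterpret_cast<char*>(base + imports->Name));

            DWORD thunkRva = imports->OriginalFirstThunk ? imports->OriginalFirstThunk
                                                         : imports->FirstThunk;
            auto thunk = reinterpret_cast<PIMAGE_THUNK_DATA>(base + thunkRva);
            for (; thunk->u1.AddressOfData; ++thunk)
            {
                if (IMAGE_SNAP_BY_ORDINAL(thunk->u1.Ordinal))
                    printf("    ordinal %u\n", (unsigned)IMAGE_ORDINAL(thunk->u1.Ordinal));
                else
                    printf("    %s\n", reinterpret_cast<PIMAGE_IMPORT_BY_NAME>(
                                           base + thunk->u1.AddressOfData)->Name);
            }
        }

        FreeLibrary(mod);
        return 0;
    }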

Register for COM Interop and ComVisible(true)

I have a common library which will be used by multiple projects. I have signed it with a strong name in order to manage multiple versions across the projects. In this library I have a couple of classes with the [ComVisible(true)] attribute. However, I don't want to register this library for COM interop, yet I am getting a compilation error which asks me to "Please register your assembly for COM Interop".
I am new to COM in .NET. I assume that I only need to register the library for COM interop if it contains a [ComRegisterFunction]. Correct me if I am wrong.
If I register it for COM interop, then strong naming will not be helpful in maintaining multiple versions across the projects.
Any help in this would be greatly appreciated.
However, I don't want to register this library for COM interop.
You have to register it; that's the way the client program finds your DLL. The client code uses a number when it creates the object (the CLSID guid), and the registry tells it which DLL implements that number. Technically the client program can use a manifest and its own local copy of your DLL, but that is not under your control.
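For illustration, a native-client sketch of that lookup: the client only knows a CLSID, and COM uses the registry (or a manifest) to find and load the DLL that implements it. The GUID below is a placeholder for whatever GUID Regasm.exe registered for your class.

    #include <windows.h>
    #include <objbase.h>

    #pragma comment(lib, "ole32.lib")
    #pragma comment(lib, "uuid.lib")

    // Placeholder CLSID: substitute the GUID registered for your [ComVisible]
    // class (visible under HKCR\CLSID after registration).
    static const CLSID CLSID_MyComVisibleClass =
        { 0x12345678, 0x1234, 0x1234, { 0x12, 0x34, 0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc } };

    int main()
    {
        CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED);

        // COM maps the CLSID to a DLL via HKCR\CLSID\{...}\InprocServer32
        // (or via a side-by-side manifest) and loads that DLL for us.
        IDispatch* obj = nullptr;
        HRESULT hr = CoCreateInstance(CLSID_MyComVisibleClass, nullptr,
                                      CLSCTX_INPROC_SERVER, IID_IDispatch,
                                      reinterpret_cast<void**>(&obj));
        if (SUCCEEDED(hr))
            obj->Release();

        CoUninitialize();
        return 0;
    }
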
I assume that only if I have [ComRegisterFunction] in my library,
It is very rarely required. Only necessary if additional registry keys need to be written beyond the standard ones that Regasm.exe writes. It does not in any way address a versioning problem.
then Strong naming will not be helpful
A strong name permits you to put the assembly in the GAC. That actually is helpful: it avoids a new version of a DLL file overwriting an old version and ensures that an old client program that was not recompiled can still find the old version of your DLL. It is not the only technique; you can also give new versions of the DLL a different file name so they can live together in a single directory without overwriting each other. That is not a very solid way to do it, though; programmers tend to take shortcuts when they have to fight a build system, and that's a very dangerous shortcut to take. The GAC is often needed anyway, since the CLR will otherwise have trouble locating dependent assemblies.
Do keep in mind that the GAC only solves one of the DLL Hell problems associated with COM. Another very important rule in COM is that interfaces and coclasses are immutable. If you make a change to one of them then you must give it a different number. This ensures that the client program cannot accidentally create an incompatible object. That is very important; versioning problems are extraordinarily hard to diagnose when the client program uses early binding. Getting a different number is automatic in .NET if you don't explicitly use the [Guid] attribute on a [ComVisible] type.

Is there any way to export a function (not a class) in VB6?

I want to create an ActiveX DLL from Visual Basic 6 from which I would like to call some public functions. I will call this DLL only from VB6. However, it seems that only classes get exported. Is there any workaround?
I know there is a way to create DLLs from VB6 with standard WINAPI functions. This is not what I want, because I would have to type thousands of Declare statements, and I would lose the dynamic linking that spares me from recompiling applications when the DLL changes.
I will state my problem just in case anyone has a better idea. I've got a bunch of relatively big projects, each with its own code, and then I have a lot of "generic" code which is used in several projects. It's an annoyance to add every file to each new project and to have to recompile all of them for each minor change. So I thought of creating a DLL, so I would just "Add Reference" when I begin a new project and not have to worry anymore about recompiling (at least for minor changes), but I raged when I discovered that only classes get exported.
I wouldn't mind reorganizing the code into classes, but it's an overwhelming task: there are some 10 years of code from 3-4 people, so it's not something I can do overnight.
Yes, it's easy.
Put all the utility routines in special classes in the DLL.
Set the Instancing property of those classes to GlobalMultiUse.
Build the DLL.
In your client project (with a reference to the DLL) you will now be able to call the functions and subroutines as if they were in a module in that project. You won't need to create any objects.
You can read more in the VB6 manual.

How to check whether a PE file (DLL,EXE) is a COM component?

I need to write a stub module which, when given a PE file (DLL/EXE) as input, will determine whether it is a normal Win32 DLL/EXE or a COM DLL/EXE. I need to determine this programmatically.
Are there any Windows APIs for this purpose?
I suspect that this is something that would be very hard to do with near 100% accuracy. Some thoughts though:
A COM DLL will export functions like DllRegisterServer and DllUnregisterServer. You could use LoadLibrary() to load the DLL, and then GetProcAddress() to check for the presence of these functions. If they're there then it's highly likely that it's a COM DLL.
A plain Win32 DLL's DllMain is its entry point, not an export, so there is no equivalent check in the other direction; the absence of the COM entry points is the best hint that you have a plain Win32 DLL.
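A quick sketch of that check (a heuristic only, as noted above): load the module without running its entry point and probe for the well-known COM exports.

    #include <windows.h>
    #include <stdio.h>

    // Heuristic: a DLL exporting these entry points is almost certainly an
    // in-process COM server; their absence suggests a plain Win32 DLL.
    bool LooksLikeComDll(const wchar_t* path)
    {
        HMODULE mod = LoadLibraryExW(path, nullptr, DONT_RESOLVE_DLL_REFERENCES);
        if (!mod)
            return false;

        bool isCom = GetProcAddress(mod, "DllGetClassObject") != nullptr &&
                     GetProcAddress(mod, "DllRegisterServer") != nullptr;

        FreeLibrary(mod);
        return isCom;
    }

    int wmain(int argc, wchar_t** argv)
    {
        if (argc > 1)
            printf("%ls: %s\n", argv[1],
                   LooksLikeComDll(argv[1]) ? "probably COM" : "probably not COM");
        return 0;
    }
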
I'm not aware of a way to discover if an EXE is a COM server. Servers written using ATL often have a registration script embedded in their resource table, but they don't have to, and you don't need to use ATL to write a COM server. Servers using "registry-less COM" will similarly have an embedded manifest. You could scan the registry (below HKLM\Software\Classes) to see if the EXE is registered, but it may be that the EXE is using registry-less COM or just hasn't been registered yet.
Hope that helps.
For a traditional COM DLL, you can look for the well-known exported functions (search MSDN for them):
DllGetClassObject
DllRegisterServer
DllUnregisterServer
DllCanUnloadNow
I am not sure about EXE COM servers though, because they generally use command-line parameters for registration/unregistration and usually register their class objects by calling CoRegisterClassObject when the EXE starts.
Most COM servers are traditionally also registered in the registry, but you can create registration-free servers now.
Are you also looking for a .NET assembly with some COM visible classes?

What are the pros and cons of using a DLL?

Windows still uses DLLs and Mac programs seem not to use DLLs at all. Are there benefits or disadvantages to using either technique?
If a program installation includes all the DLLs it requires so that it will work 100% well, will it be the same as statically linking all the libraries?
MacOS X, like other flavours of Unix, uses shared libraries, which are just another form of DLL.
And yes, both are advantageous, as the DLL or shared library code can be shared between multiple processes. The OS achieves this by loading the DLL or shared library once and mapping it into the virtual address space of each process that uses it.
On Windows, you have to use dynamically-loaded libraries because the GDI and USER libraries are available as DLLs only. You can't link either of those in or talk to them using a protocol that doesn't involve dynamic loading.
On other OSes, you want to use dynamic loading anyway for complex apps, otherwise your binary would bloat for no good reason, and it increases the probability that your app would be incompatible with the system in the long run (however, in the short run static linking can somewhat shield you from tiny breaking changes in libraries). And you can't statically link in proprietary libraries on OSes which rely on them.
Windows still uses DLLs and Mac programs seem not to use DLLs at all. Are there benefits or disadvantages to using either technique?
Any kind of modularization is good since it makes updating the software easier, i.e. you do not have to update the whole program binary if a bug is fixed. If the bug appears in some DLL, only that DLL needs to be updated.
The only downside, in my opinion, is that you introduce extra complexity into the development of the program, e.g. if a DLL is a C or C++ DLL you have to deal with different calling conventions, etc.
If a program installation includes all the DLLs it requires, will it be the same as statically linking all the libraries?
More or less, yes. It depends on whether you are calling functions in a DLL that you assume load-time linkage with. The DLL could just as well be a "free standing" dynamic library that you can only access via LoadLibrary() and GetProcAddress(), etc.
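For comparison, this is what the "free standing" usage looks like; the DLL and function names here (mylib.dll, Compute) are made up for the sketch:

    #include <windows.h>

    // Explicit run-time linking: no import library, everything resolved by hand.
    typedef int (__cdecl *ComputeFn)(int);

    int CallCompute(int value)
    {
        HMODULE lib = LoadLibraryW(L"mylib.dll");   // hypothetical DLL
        if (!lib)
            return -1;

        int result = -1;
        auto fn = reinterpret_cast<ComputeFn>(GetProcAddress(lib, "Compute"));
        if (fn)
            result = fn(value);

        FreeLibrary(lib);
        return result;
    }
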
One big advantage of shared libraries (DLLs on Windows or .so on Unix) is that you can rebuild the library and its consumers separately while with static libraries you have to rebuild the library and then relink all the consumers which is very slow on Unix systems and not very fast on Windows.
MacOS software uses "DLLs" as well; they are just named differently (shared libraries).
DLLs make sense if you have code you want to reuse in different components of your software. Mostly this makes sense in big software projects.
Static linking makes sense for small, single-component applications where there is no need for code reuse. It simplifies distribution since your component has no external dependencies.
Besides memory/disk space usage, another important advantage of using shared libraries is that updates to the library will be automatically picked up by all programs on the system which use the library.
When there was a security vulnerability in the Info-ZIP libraries, an update to the DLL/.so automatically made all software that used them safe. Software that was linked statically had to be recompiled.
Windows still uses DLLs and Mac programs seem not to use DLLs at all. Are there benefits or disadvantages to using either technique?
Both use shared libraries, they just use a different name.
If a program installation includes all the DLLs it requires so that it will work 100% well, will it be the same as statically linking all the libraries?
Somewhat. When you statically link libraries into a program, you get a single, very big file; with DLLs, you will have many files.
The statically linked file won't need the "resolve shared libraries" step (which happens while the program loads). A long time ago, loading a program meant that the whole program was first loaded into RAM and then the "resolve shared libraries" step happened. Today, only the parts of the program which are actually executed are loaded on demand. So with a static program, you don't need to resolve the DLLs; with DLLs, you don't need to load them all at once. Performance-wise, they should be roughly on par.
Which leaves the "DLL Hell". Many programs on Windows bring all the DLLs they need and write them into the Windows directory. The net effect is that the last installed program works and everything else might be broken. But there is a simple workaround: install the DLLs into the same directory as the EXE. Windows searches the application's own directory before the various Windows paths. This way, you'll waste a bit of disk space but your program will work and, more importantly, you won't break anything else.
One might argue that you shouldn't install DLLs which already exist (with the same version) in the Windows directory but then, you're again vulnerable to some bad app which overwrites the version you need with something that breaks your neck. The drawback is that you must distribute security fixes for your app yourself; you can't rely on Windows Update or similar things to secure your code. This is a tight spot; crackers are making lots of money from security issues and people will not like you when someone steals their banking data because you didn't issue security fixes soon enough.
If you plan to support your application very tightly for many years, say 20, installing all DLLs in the program directory is for you. If not, write code which checks that suitable versions of all DLLs are installed and tells the user about it, so they know why your app suddenly starts to crash.
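As a sketch of that last suggestion, an application can read a DLL's embedded version resource at startup and complain if it is older than expected (the path and the expected version would be whatever your installer ships; names here are illustrative):

    #include <windows.h>
    #include <vector>

    #pragma comment(lib, "version.lib")

    // Reads the file version of a DLL so the app can verify at startup that a
    // suitable version is installed (and tell the user if it is not).
    bool GetDllVersion(const wchar_t* path, DWORD& major, DWORD& minor,
                       DWORD& build, DWORD& revision)
    {
        DWORD handle = 0;
        DWORD size = GetFileVersionInfoSizeW(path, &handle);
        if (!size)
            return false;

        std::vector<BYTE> data(size);
        if (!GetFileVersionInfoW(path, 0, size, data.data()))
            return false;

        VS_FIXEDFILEINFO* info = nullptr;
        UINT len = 0;
        if (!VerQueryValueW(data.data(), L"\\", reinterpret_cast<void**>(&info), &len))
            return false;

        major    = HIWORD(info->dwFileVersionMS);
        minor    = LOWORD(info->dwFileVersionMS);
        build    = HIWORD(info->dwFileVersionLS);
        revision = LOWORD(info->dwFileVersionLS);
        return true;
    }
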
Yes, see this text:
Dynamic linking has the following advantages:
Saves memory and reduces swapping. Many processes can use a single DLL simultaneously, sharing a single copy of the DLL in memory. In contrast, Windows must load a copy of the library code into memory for each application that is built with a static link library.
Saves disk space. Many applications can share a single copy of the DLL on disk. In contrast, each application built with a static link library has the library code linked into its executable image as a separate copy.
Upgrades to the DLL are easier. When the functions in a DLL change, the applications that use them do not need to be recompiled or relinked as long as the function arguments and return values do not change. In contrast, statically linked object code requires that the application be relinked when the functions change.
Provides after-market support. For example, a display driver DLL can be modified to support a display that was not available when the application was shipped.
Supports multilanguage programs. Programs written in different programming languages can call the same DLL function as long as the programs follow the function's calling convention. The programs and the DLL function must be compatible in the following ways: the order in which the function expects its arguments to be pushed onto the stack, whether the function or the application is responsible for cleaning up the stack, and whether any arguments are passed in registers.
Provides a mechanism to extend the MFC library classes. You can derive classes from the existing MFC classes and place them in an MFC extension DLL for use by MFC applications.
Eases the creation of international versions. By placing resources in a DLL, it is much easier to create international versions of an application. You can place the strings for each language version of your application in a separate resource DLL and have the different language versions load the appropriate resources.
A potential disadvantage to using DLLs is that the application is not self-contained; it depends on the existence of a separate DLL module.
From my point of view, a shared component has some advantages that are sometimes perceived as disadvantages.
A shared component defines interfaces in your process. You are forced to decide which components/interfaces are visible outside and which are hidden. This automatically defines which interfaces have to be stable and which do not, and can therefore be refactored without affecting any code outside the component.
Memory management, in the case of C++ on Windows, must be thought through carefully. Normally, memory allocated inside a DLL should also be freed inside that same DLL; if you don't follow this rule, your component may fail when different runtimes or compiler versions are used.
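A common way to follow that rule (sketched here with made-up names: Widget, CreateWidget, DestroyWidget) is to export a matching create/destroy pair, so memory is always released by the same module, and therefore the same CRT heap, that allocated it:

    // Inside the DLL (names are made up for the sketch).
    struct Widget { int value = 0; };

    extern "C" __declspec(dllexport) Widget* CreateWidget()
    {
        return new Widget();
    }

    extern "C" __declspec(dllexport) void DestroyWidget(Widget* w)
    {
        // The client never calls delete/free on the pointer itself;
        // it hands it back here instead.
        delete w;
    }
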
So I think that using shared components helps the software to get better organized.