How does WinSxS resolve DLL names to DLL locations?

If I link a module against mydll.dll, which is deployed using WinSxS, the PE header in my module will simply reference "mydll.dll". How does Windows determine at runtime, firstly that this dependency should be loaded via WinSxS, and secondly what assembly it's in?

It does so via the "activation context". In my opinion, WinSxS does not make any sense until you learn about the activation context and how it is managed.
I've put up a rather long blog post explaining the details at http://omnicognate.wordpress.com/2009/10/05/winsxs/.
To summarise the article: the activation context is a structure that WinSxS uses at runtime to resolve unversioned object names (e.g. DLL names) to the full information about how to locate them. The purpose of "manifests" in WinSxS is to construct activation contexts. Without understanding how and when these activation contexts are constructed and how they are managed (they are held on a thread-local stack), it is not possible to reason through the steps involved in loading a DLL via WinSxS, and it is therefore impossible to diagnose the majority of problems that can arise.
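For a concrete picture of the mechanism, here is a minimal sketch (C# via P/Invoke; the manifest path and DLL name are placeholders) of building an activation context from a manifest file and activating it on the current thread before loading a DLL. Normally the loader does the equivalent for you using the manifest embedded in your EXE, but the explicit API makes the thread-local activation context stack visible:

    using System;
    using System.Runtime.InteropServices;

    class ActCtxSketch
    {
        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
        struct ACTCTX
        {
            public int cbSize;
            public uint dwFlags;
            public string lpSource;               // path to a manifest file
            public ushort wProcessorArchitecture;
            public ushort wLangId;
            public string lpAssemblyDirectory;
            public string lpResourceName;
            public string lpApplicationName;
            public IntPtr hModule;
        }

        [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
        static extern IntPtr CreateActCtx(ref ACTCTX pActCtx);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool ActivateActCtx(IntPtr hActCtx, out UIntPtr lpCookie);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool DeactivateActCtx(uint dwFlags, UIntPtr ulCookie);

        [DllImport("kernel32.dll")]
        static extern void ReleaseActCtx(IntPtr hActCtx);

        [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
        static extern IntPtr LoadLibrary(string lpFileName);

        static void Main()
        {
            // Build an activation context from a manifest that contains a
            // <dependentAssembly> entry for the assembly providing mydll.dll.
            var ctx = new ACTCTX
            {
                cbSize = Marshal.SizeOf(typeof(ACTCTX)),
                lpSource = @"c:\example\app.manifest"   // placeholder path
            };
            IntPtr hCtx = CreateActCtx(ref ctx);
            if (hCtx == new IntPtr(-1))                  // INVALID_HANDLE_VALUE
                throw new System.ComponentModel.Win32Exception();

            if (ActivateActCtx(hCtx, out UIntPtr cookie))
            {
                // While this context is on the thread's activation-context stack,
                // the unversioned name "mydll.dll" is resolved through the
                // manifest's assembly references rather than the normal search path.
                IntPtr hDll = LoadLibrary("mydll.dll");
                // ... use the DLL ...
                DeactivateActCtx(0, cookie);
            }
            ReleaseActCtx(hCtx);
        }
    }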

Related

Environment Variable To Register Libraries From Custom Location (OCX, DLL)

I've searched far and wide for this specific problem, but I only find separate solutions for each problem individually. I basically want to know what the name of the environment variable should be. My assumption is that the name of the variable should be the name of the component and that it should be a User variable and not a System variable, for example:
name -> "mydll.dll"
path -> "c:\myCustomPath\mydll.dll"
I want to do this for two reasons. First, I often run my custom-made tools either directly from the source code in a VM (which is sort of a pain), or I compile them and run them in Windows 10. However, I just cannot do that with more complex apps that have dependencies, because then I would have to register tons of DLLs into the system root, and I know I would lose track of them easily. Second, I read this reply where the author says it's not recommended to use the system root for private libraries and suggests using an environment variable, which sounded like a good solution to my problem.
The reason I have not tested this myself through trial and error is that I'm afraid of leaving my only computer unusable if I put something wrong in the variable. Also, all the libraries and EXE files I'm using are written and compiled in VB6, so I have no easy way around it: I already tried merging multiple projects into one on a rather small project and ended up rewriting almost the whole thing, because VB6 doesn't like public types, enums, etc. in private object classes.
Finally, I am not sure if my question should be here since it doesn't involve programming, but I just felt it would be better understood here.
If I understand your question correctly, you are asking where you can place COM DLLs so that you can register them on your computer.
The answer is - fundamentally - that it does not matter where they are located because registration has a "global" effect. (Simplifying a little).
Now of course there are standards or conventions for where system-wide registered DLLs should go - e.g., Windows\SysWOW64 folder. But the point is that if you register the wrong thing, or leave out dependencies, or remove a registered DLL without unregistering it - etc. etc. - you will cause problems.
I am not aware of any environment variable that has anything to do with this basic function of COM DLLs. (I may be ignorant of something).
If you are actually using an application manifest (as maybe implied in the question) then you don't need to and should not register any DLL which is manifested.

Get importlib directives from type library

How can one programmatically determine which type libraries (GUID and version) a given native, VB6-generated DLL/OCX depends on?
For background: The VB6 IDE chokes when opening a project where one of the referenced type libraries can't load one of its dependencies, but it's not so helpful as to say which dependency can't be met--or even which reference has the dependency that can't be met. This is a common occurrence at my company, so I'm trying to supplement the VB6 IDE's poor troubleshooting information.
Relevant details/attempts:
I do have the VB source code. That tells me the GUIDs and versions as of a particular revision in the repo, but when analyzing a DLL/OCX/TLB file I don't know which version of the repo (if any--could be from a branch or might never have been committed to a branch) a given DLL/OCX corresponds to.
I've tried using tlbinf32.dll, but it doesn't appear to be able to list imports.
I don't know much about PE, but I popped open one of the DLLs in a PE viewer and it only shows MSVBVM60.dll in the imports section. This appears to be a special quirk of VB6-produced type libraries: they link only to MSVBVM60 but have some sort of delay-loading mechanism for the rest of the dependencies.
Even most of the existing tools I've tried don't give the information--e.g., depends.exe only finds MSVBVM60.dll.
However: OLEView, a utility that used to ship with Visual Studio, somehow produces an IDL file, which includes the importlib directives. Given that VB doesn't use IDL files, it's clearly generating the information somehow. So it's possible--I just have no idea how.
Really, if OLEView didn't do it I'd have given it up by now as impossible. Any thoughts on how to accomplish this?
It turns out that I was conflating basic DLL functionality and COM. (Not all DLLs are COM DLLs.)
For basic DLLs, the Portable Executable format includes a section describing their imports: data directory entry 1 in the Optional Header points to the import table, whose entries are IMAGE_IMPORT_DESCRIPTOR structures. This is a starting point for learning about that.
COM DLLs don't seem to have an equivalent as such, but you can discover which other COM components their public interfaces need: for each exposed interface, list out the types of its properties and method arguments, and then use the Registry to look up where those types come from. tlbinf32.dll provides some of the basic functionality for listing members, etc. Here's an intro to that.
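For what it's worth, here is a sketch (C#, using the COM type-library interfaces) of one way to get similar information programmatically: load the type library embedded in the DLL/OCX and, for each interface a type implements or inherits, ask which type library actually defines it. The input path is a placeholder, and this simplified version only follows implemented/inherited interfaces; a fuller one would also resolve the href of each property and parameter type.

    using System;
    using System.Collections.Generic;
    using System.Runtime.InteropServices;
    using System.Runtime.InteropServices.ComTypes;

    class TypeLibDeps
    {
        enum RegKind { Default = 0, Register = 1, None = 2 }

        // Loads the type library stored in a DLL/OCX/TLB without registering it.
        [DllImport("oleaut32.dll", CharSet = CharSet.Unicode, PreserveSig = false)]
        static extern ITypeLib LoadTypeLibEx(string szFile, RegKind regKind);

        static void Main()
        {
            ITypeLib lib = LoadTypeLibEx(@"c:\path\MyControl.ocx", RegKind.None);  // placeholder path
            var seen = new HashSet<Guid>();

            int count = lib.GetTypeInfoCount();
            for (int i = 0; i < count; i++)
            {
                lib.GetTypeInfo(i, out ITypeInfo ti);
                ti.GetTypeAttr(out IntPtr pAttr);
                var attr = Marshal.PtrToStructure<TYPEATTR>(pAttr);

                // Walk every interface this type implements or derives from and
                // find the type library that defines it.
                for (int j = 0; j < attr.cImplTypes; j++)
                {
                    ti.GetRefTypeOfImplType(j, out int href);
                    ti.GetRefTypeInfo(href, out ITypeInfo refTi);
                    refTi.GetContainingTypeLib(out ITypeLib owner, out _);

                    owner.GetLibAttr(out IntPtr pLibAttr);
                    var libAttr = Marshal.PtrToStructure<TYPELIBATTR>(pLibAttr);
                    if (seen.Add(libAttr.guid))
                    {
                        owner.GetDocumentation(-1, out string name, out _, out _, out _);
                        Console.WriteLine($"{name}  {libAttr.guid}  {libAttr.wMajorVerNum}.{libAttr.wMinorVerNum}");
                    }
                    owner.ReleaseTLibAttr(pLibAttr);
                }
                ti.ReleaseTypeAttr(pAttr);
            }
        }
    }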

Cocoa/Objective-C Plugins Collisions

My application has a plugin system that allows my users to write their own plugins that get loaded at runtime. Usually this is fine, but in some cases two plugins use the same libraries, which causes a collision between the two.
Example:
Plugin A wants to use TouchJSON for working with JSON, so its creator adds the TouchJSON code to the plugin source and it gets compiled and linked into the plugin binary. Later, Plugin B also wants to use that same library and does exactly the same. Now when my app loads these two different plugins it detects this and spits out a warning like this:
Class CJSONScanner is implemented in both [path_to_plugin_a] and [path_to_plugin_b]. One of the two will be used. Which one is undefined.
Since my app just loads plugins and makes sure they conform to a certain protocol, I have no control over which plugins are loaded or whether two or more of them use the same library.
As long as both plugins use the exact same version of the library this will probably work, but as soon as the API changes in one plugin, a bunch of problems will arise.
Is there anything I can do about this?
The bundle loading system provides no means to peacefully resolve name conflicts. In fact, we're told to make sure ourselves that the problem doesn't happen, rather than being told what to do when it does. (Obviously, in your case, that's not possible.)
You could file a bug report with this issue.
If this is absolutely critical to your application, you may want to have bundles live in separate processes and use some kind of IPC, possibly NSDistantObject, to pass the data from your program to the plugin hosts. However, I'm fairly sure this is a bag of hurt, so if you don't have very clearly-defined interfaces that allow for distribution into different processes, it might be quite an undertaking.
In a single-process model, the only way to deal with this is to ensure that the shared code (more precisely, the shared Objective-C classes) is loaded once. There are two ways to do this:
Put the shared code in a framework.
Put the shared code in a loadable bundle, and load the bundle when the plug-in is loaded if the relevant classes aren’t already available (check using NSClassFromString()). The client code would also have to use NSClassFromString() rather than referring to classes directly.
Of course, if you aren't in control of the plug-ins you can't enforce either of these schemes. The best you can do is provide appropriate guidelines and possibly infrastructure; for instance, in the second case the loading could be handled by the application, perhaps by letting the plug-in's Info.plist specify a class to check for and the name of an embedded bundle to load if that class isn't available.

Should DLLs have their own configuration files?

Apologies if this is a duplicate, but I've not managed to find this question being asked directly.
The general opinion here (that's me and the guy sitting across from me) is that they shouldn't, the reason being that DLLs can be shared; the idea of having application-specific information in a DLL is therefore nonsense. If the information is not application-specific, then constants can be used.
A further question is, assuming that DLLs do not have their own config file, whether DLLs should use the configuration file of the executable that loaded them, or instead be passed the relevant data as part of some kind of constructor. Our opinion here is the latter, as it is more testable, the downside being that it will sometimes be necessary to pass a significant amount of data to the DLL.
Opinions?
There's no reason why you can't have the best of both worlds in terms of "simple to configure with config files" and "testable". Have a static method which can create instances from the configuration file, but also provide a constructor for more control and testability. The static method would just grab the settings and call the constructor.
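A sketch of that pattern (the class and setting names are invented; ConfigurationManager needs a reference to System.Configuration):

    using System;
    using System.Configuration;

    // Sketch of the "constructor plus static factory" pattern. ServiceClient and
    // the setting names are invented for illustration.
    public class ServiceClient
    {
        public Uri ServiceUrl { get; }
        public TimeSpan Timeout { get; }

        // The constructor takes everything it needs, which keeps the class easy
        // to unit test.
        public ServiceClient(Uri serviceUrl, TimeSpan timeout)
        {
            ServiceUrl = serviceUrl;
            Timeout = timeout;
        }

        // Convenience factory for production code: grabs the same values from
        // the host application's config file and forwards them to the constructor.
        public static ServiceClient FromConfiguration()
        {
            var url = new Uri(ConfigurationManager.AppSettings["ServiceUrl"]);
            var seconds = double.Parse(ConfigurationManager.AppSettings["TimeoutSeconds"]);
            return new ServiceClient(url, TimeSpan.FromSeconds(seconds));
        }
    }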
I believe it's possible to create settings classes for DLLs just like any other project; you then just need to put the actual text into the application's config file instead of one for the DLL. Basically, ignore the app.config generated for the library project, except to use it as a template for the application's central one.
Alternatively, use something like Spring.NET to manage this sort of thing :)
Usually, I guess you should pass relevant information to the functions you're calling, or set relevant properties on objects you're creating that are defined within the DLL. I guess that's why .NET does not really support config files for DLLs (you can create them, but they won't be read at runtime).
I have one scenario where DLLs read a config file, but it is very special: the .NET DLL exports objects as COM objects for use by Microsoft Navision. It communicates with a factoring bank using an XML-RPC interface.
While the DLL is installed on every user's machine, the configuration for the interface is common to all users, so I have a configuration placed on a network drive that's mapped on every PC and the configuration (URL, credentials, etc.) is read from that common file.
Whether that's good practice is up to the reader, but in that scenario having a common config file just made sense...

Best way for a dll to get configuration information?

We have developed a number of custom DLLs which are called by third-party Windows applications. These DLLs are loaded and unloaded as required.
Most of the DLLs call web services, and these need to have URLs, timeouts, etc. configured.
Because the dll is not permanently in memory, it has to read the configuration every time it is invoked. This seems sub-optimal to me.
Is there a better way to handle this?
Note: The configurable information is in an XML file so that the IT department can alter it as required. They would not accept registry edits.
Note: These DLLs cater for a number of third-party applications; essentially, they implement an external EDMS interface. The vendors would not accept passing the required parameters.
Note: It's a .NET application and the DLL is written in C#. Essentially, there are both thick (Windows application) and thin clients that access this DLL when they need to perform some kind of EDMS operation. The EDMS interface is defined as a set of calls that have to be implemented in the DLL, and the DLL decides how to implement the EDMS functions; e.g. for some clients, "Register Document" would update a DB, and for others the same call would utilise a third-party EDMS system. There are no ASP clients.
My understanding is that the dll is loaded when the client wants to access an EDMS operation and is then unloaded when the call is finished. The client may not need to do another EDMS operation for a while (in some cases over an hour).
Use the registry to store your configuration information; it's definitely fast enough.
I think you need to provide more information. There are so many approaches to persisting configuration information. We don't even know the development platform. .Net?
I wouldn't rely on the registry unless I was sure it would always be available. You might get away with that on client machines, but you've already mentioned webservices.
An XML file in the current directory seems to be very popular now for server-side third-party DLLs, but those configurations are optional.
If this is ASP, your trust level will be very important in choosing a configuration persistence method.
You may be able to use your application server's "application scope", which gets loaded once per lifetime of the application. Your DLL can invalidate that data if it detects it needs to.
I've used text files, XML files, database, various IPC like shared memory segments, application scope, to persist configuration information. It depends a lot on the specifics of your project.
Care to elaborate further?
EDIT. Considering your clarifications, I'd go with an XML file. This custom XML file would be loaded using a search path that has been predefined and documented. If this is ASP.Net you can use Server.MapPath() for example to check various folders like App_Data. The DLL would check the current directory for the configuration file first though. You can then use a "manager" thread that holds the configuration data and passes it to any child threads that require it. The sharing can use IPC like a shared memory segment.
This seems like a hassle, but you have to store the information in some scope... either on disk or in memory (application scope, session scope, DLL global scope, another process/IPC, etc.).
ASP.Net also gives you the ability to add custom configuration sections to standard configuration files like web.config. You can access those sections at will and they will not depend on when your DLL was loaded.
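A minimal sketch of that idea, assuming a hypothetical file name and probe locations; the Lazy field also means the file is only read once per process rather than on every call:

    using System;
    using System.IO;
    using System.Xml.Linq;

    // Sketch of a "predefined, documented search path" loader. The file name
    // and probe directories are assumptions; adjust to whatever the IT
    // department agrees to.
    static class EdmsConfig
    {
        private static readonly Lazy<XDocument> _doc = new Lazy<XDocument>(Load);

        public static XDocument Document => _doc.Value;

        private static XDocument Load()
        {
            string fileName = "EdmsInterface.config.xml";   // hypothetical name
            string[] probeDirs =
            {
                AppDomain.CurrentDomain.BaseDirectory,      // next to the host EXE
                Environment.CurrentDirectory,               // current directory
                @"\\fileserver\config"                      // hypothetical shared location
            };

            foreach (string dir in probeDirs)
            {
                string path = Path.Combine(dir, fileName);
                if (File.Exists(path))
                    return XDocument.Load(path);
            }
            throw new FileNotFoundException(
                "EDMS configuration file not found in any probe location.", fileName);
        }
    }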
Why do you believe your DLL is being removed from memory?
Why don't you let the calling application fill out a data structure with the stuff you need? This can be done as part of an init call or so.
How often is the DLL getting unloaded? COM DLLs can control when they are unloaded via the DllCanUnloadNow function. If these are COM components, you could look at implementing some kind of timeout here to prevent frequent loads and unloads. Unless the DLL is reloading the configuration at a significant frequency, it is unlikely to be a real performance bottleneck.
Knowing that the dll will reload its configuration at certain points is a useful feature, since it prevents the users wondering if they have to restart the host process, reboot the machine, etc for the configuration to take effect. You could even watch the file for changes to keep it up to date.
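As a sketch of the file-watching idea (the path is a placeholder and must be a full path; a real version would also cope with the watcher firing while the file is still being written):

    using System;
    using System.IO;
    using System.Xml.Linq;

    // Caches the configuration and invalidates the cache whenever the XML file
    // on disk changes, so updates take effect without restarting the host.
    class WatchedConfig
    {
        private readonly string _path;
        private readonly FileSystemWatcher _watcher;
        private volatile XDocument _cached;

        public WatchedConfig(string path)   // e.g. @"c:\example\edms.config.xml"
        {
            _path = path;
            _cached = XDocument.Load(path);

            _watcher = new FileSystemWatcher(Path.GetDirectoryName(path), Path.GetFileName(path))
            {
                NotifyFilter = NotifyFilters.LastWrite,
                EnableRaisingEvents = true
            };
            // Drop the cached copy; the next reader reloads it lazily.
            _watcher.Changed += (s, e) => _cached = null;
        }

        public XDocument Current => _cached ?? (_cached = XDocument.Load(_path));
    }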
I think the best way for a DLL to get configuration information is via the application that is using it - either via implicit "Init"-calls, like Nils suggested, or via their configuration files.
DLLs shouldn't usually "configure themselves", as they can never be sure in which context they are used. Different users (as in applications) may have different configuration settings to make.
Since you said that the application is written in .NET, you should probably simply require them to put the necessary configuration for your DLL's functions in their configuration file ("whatever.exe.config") and access it from your DLL via AppSettings or even better via a custom configuration section.
Additionally, you may want to provide sensible default values for settings where that is possible (probably not for network addresses though).
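A sketch of the custom-section approach; the section name, property names and assembly name are invented:

    using System;
    using System.Configuration;

    // A custom configuration section that the host application declares in its
    // own whatever.exe.config / web.config.
    public class EdmsSettingsSection : ConfigurationSection
    {
        [ConfigurationProperty("serviceUrl", IsRequired = true)]
        public string ServiceUrl => (string)this["serviceUrl"];

        [ConfigurationProperty("timeoutSeconds", DefaultValue = 30)]
        public int TimeoutSeconds => (int)this["timeoutSeconds"];
    }

    // In the host's config file (names are placeholders):
    //   <configSections>
    //     <section name="edmsSettings" type="MyDll.EdmsSettingsSection, MyDll" />
    //   </configSections>
    //   <edmsSettings serviceUrl="https://edms.example.com/api" timeoutSeconds="60" />
    //
    // Read from the DLL:
    //   var settings = (EdmsSettingsSection)ConfigurationManager.GetSection("edmsSettings");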
If the DLLs are loaded and unloaded from memory only every hour or so, the inefficiency due to small initializations (reading a file or the registry) will be negligible.
However, if this happens more frequently, a bigger inefficiency would be the physical act of loading and unloading the DLLs. This could cost more than the small initializations.
It might therefore be better to keep them pinned in memory. That way the initialization performed at load time does not get repeated, and you also avoid the inefficiency of loading and unloading. You solve two issues this way.
I could tell you how to do this in C++: GetModuleHandle plus an extra LoadLibrary call on the module is how I would do it there. Not sure how you would do this in C#.
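One possible C# equivalent is to P/Invoke GetModuleHandleEx with the PIN flag, which keeps a native module loaded for the life of the process; whether that helps depends on what exactly is being unloaded (a purely managed assembly is not unloaded from its AppDomain anyway). The module name below is a placeholder:

    using System;
    using System.Runtime.InteropServices;

    // Sketch: pin an already-loaded native module so it stays in memory until
    // the process exits.
    static class ModulePinner
    {
        private const uint GET_MODULE_HANDLE_EX_FLAG_PIN = 0x00000001;

        [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
        private static extern bool GetModuleHandleEx(uint dwFlags, string lpModuleName, out IntPtr phModule);

        public static bool Pin(string moduleName)
        {
            // Only succeeds if the module is already loaded in this process.
            return GetModuleHandleEx(GET_MODULE_HANDLE_EX_FLAG_PIN, moduleName, out _);
        }
    }

    // Usage: ModulePinner.Pin("mydll.dll");   // placeholder module name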
One way to do it is to have an interface in the DLL which specifies the required settings.
Then it's up to the "application project" to have a class that implements this interface and pass it to the DLL at initialization. This leaves you free to change the implementation depending on the project: one might read from web.config while another reads from a DB.
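A sketch of what that might look like (the member names are illustrative; the app.config-backed implementation needs a reference to System.Configuration):

    using System;

    // Defined in the DLL: the settings it requires, expressed as an interface.
    public interface IEdmsSettings
    {
        Uri ServiceUrl { get; }
        TimeSpan Timeout { get; }
    }

    // Defined in the DLL: takes whatever implementation the host hands it.
    public class EdmsService
    {
        private readonly IEdmsSettings _settings;

        // The host passes the settings in at initialization.
        public EdmsService(IEdmsSettings settings) => _settings = settings;
    }

    // Defined in the host application; another host could read from a database
    // or a different config source instead.
    public class AppConfigSettings : IEdmsSettings
    {
        public Uri ServiceUrl =>
            new Uri(System.Configuration.ConfigurationManager.AppSettings["EdmsUrl"]);
        public TimeSpan Timeout => TimeSpan.FromSeconds(30);
    }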