Environment Variable To Register Libraries From Custom Location (OCX, DLL)

I've searched far and wide for this specific problem, but I only find separate solutions for each part of it. I basically want to know what the name of the environment variable should be. My assumption is that the name of the variable should be the name of the component, and that it should be a user variable rather than a system variable, for example:
name -> "mydll.dll"
path -> "c:\myCustomPath\mydll.dll"
I want to do this for two reasons. First, I often run my custom-made tools either directly from the source code in a VM (which is sort of a pain), or I compile them and run them on W10. However, I just cannot do that with more complex apps that have dependencies, because then I would have to register tons of DLLs into the system root, and I know I would lose track of them easily. Second, I read this reply where the author says it's not recommended to use the system root for private libraries; he also suggests using an environment variable, which sounded like a good solution to my problem.
The reason I have not tested this myself through trial and error is that I'm afraid of leaving my only computer unusable if I put something wrong in the variable. Also, all the libraries and EXE files I'm using are written and compiled in VB6, so I have no easy way around the problem: I already tried merging multiple projects into one on a rather small project and ended up rewriting almost the whole thing, because VB6 doesn't like public types, enums, etc. in private object classes.
Finally, I am not sure if my question should be here since it doesn't involve programming, but I just felt it would be better understood here.

If I understand your question correctly, you are asking where you can place COM DLLs so that you can register them on your computer.
The answer is - fundamentally - that it does not matter where they are located because registration has a "global" effect. (Simplifying a little).
Now of course there are standards or conventions for where system-wide registered DLLs should go - e.g., Windows\SysWOW64 folder. But the point is that if you register the wrong thing, or leave out dependencies, or remove a registered DLL without unregistering it - etc. etc. - you will cause problems.
I am not aware of any environment variable that has anything to do with this basic function of COM DLLs. (I may be ignorant of something).
If you are actually using an application manifest (as is maybe implied in the question), then you don't need to - and should not - register any DLL that is manifested.
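For what it's worth, the manifest route (registration-free COM) is what lets you keep COM DLLs in a private folder next to the EXE without touching the registry at all. A minimal sketch of an application manifest, where the CLSID, ProgID, and file name are placeholders for your component's real values:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity type="win32" name="MyApp" version="1.0.0.0"/>
  <file name="mydll.dll">
    <!-- clsid is a placeholder: use the coclass's real GUID -->
    <comClass clsid="{11111111-2222-3333-4444-555555555555}"
              threadingModel="Apartment"
              progid="MyDll.MyClass"/>
  </file>
</assembly>

With a manifest like this embedded in the EXE (or named after it and placed beside it), COM activation resolves the class from mydll.dll in the application folder instead of going through the registry.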

Related

Get importlib directives from type library

How can one programmatically determine which type libraries (GUID and version) a given native, VB6-generated DLL/OCX depends on?
For background: the VB6 IDE chokes when opening a project where one of the referenced type libraries can't load one of its dependencies, but it's not so helpful as to say which dependency can't be met--or even which reference has the dependency that can't be met. This is a common occurrence at my company, so I'm trying to supplement the VB6 IDE's poor troubleshooting information.
Relevant details/attempts:
I do have the VB source code. That tells me the GUIDs and versions as of a particular revision in the repo, but when analyzing a DLL/OCX/TLB file I don't know which revision of the repo (if any - it could be from a branch, or might never have been committed) a given DLL/OCX corresponds to.
I've tried using tlbinf32.dll, but it doesn't appear to be able to list imports.
I don't know much about PE, but I popped open one of the DLLs in a PE viewer and it only shows MSVBVM60.dll in the imports section. This appears to be a special quirk of VB6-produced type libraries: they link only to MSVBVM60 but have some sort of delay-loading mechanism for the rest of the dependencies.
Even most of the existing tools I've tried don't give the information--e.g., depends.exe only finds MSVBVM60.dll.
However: OLEView, a utility that used to ship with Visual Studio, somehow produces an IDL file, which includes the importlib directives. Given that VB doesn't use IDL files, it's clearly generating the information somehow. So it's possible--I just have no idea how.
Really, if OLEView didn't do it I'd have given it up by now as impossible. Any thoughts on how to accomplish this?
It turns out that I was conflating basic DLL functionality and COM. (Not all DLLs are COM DLLs.)
For basic DLLs, the Portable Executable format includes a section describing their imports: data directory entry 1 of the Optional Header (IMAGE_DIRECTORY_ENTRY_IMPORT) points at the import table, whose structure is given by IMAGE_IMPORT_DESCRIPTOR. That's a starting point for learning about it.
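If you just want to dump those classic PE imports, here's a minimal C++ sketch using the dbghelp helper ImageDirectoryEntryToData (error handling kept to a bare minimum):

// pe_imports.cpp - list the import table of a DLL (link with dbghelp.lib)
#include <windows.h>
#include <dbghelp.h>
#include <cstdio>
#pragma comment(lib, "dbghelp.lib")

int main(int argc, char** argv)
{
    if (argc < 2) { fprintf(stderr, "usage: pe_imports <dll>\n"); return 1; }

    // Map the image without running DllMain or resolving its imports.
    HMODULE mod = LoadLibraryExA(argv[1], NULL, DONT_RESOLVE_DLL_REFERENCES);
    if (!mod) return 1;

    ULONG size = 0;
    // Data directory entry 1 is the import table.
    IMAGE_IMPORT_DESCRIPTOR* imp = (IMAGE_IMPORT_DESCRIPTOR*)
        ImageDirectoryEntryToData(mod, TRUE, IMAGE_DIRECTORY_ENTRY_IMPORT, &size);

    // The descriptor array is terminated by an all-zero entry.
    for (; imp && imp->Name; ++imp)
        printf("%s\n", (const char*)mod + imp->Name);  // Name is an RVA

    FreeLibrary(mod);
    return 0;
}

Run against a VB6-built DLL, this prints just MSVBVM60.DLL, matching the observation above.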
COM DLLs don't seem to have an equivalent as such, but you can discover which other COM components their public interfaces need: for each exposed interface, list out the types of its properties and method arguments, then use the registry to look up where those types come from. tlbinf32.dll provides some of the basic functionality for listing members, etc. Here's an intro to that.
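As for how OLEView does it: the raw type library API can surface the same information. Load the embedded type library and, for each type, ask which foreign library each referenced type lives in. A minimal C++ sketch; note it only walks implemented/referenced interfaces (not every member's parameter types), skips most error handling, and doesn't filter out references back into the same library:

// typelib_refs.cpp - list type libraries referenced by a DLL/OCX/TLB
// (link with ole32.lib and oleaut32.lib)
#include <windows.h>
#include <oleauto.h>
#include <cstdio>
#pragma comment(lib, "ole32.lib")
#pragma comment(lib, "oleaut32.lib")

int wmain(int argc, wchar_t** argv)
{
    if (argc < 2) return 1;
    CoInitialize(NULL);

    ITypeLib* lib = NULL;
    if (FAILED(LoadTypeLibEx(argv[1], REGKIND_NONE, &lib))) return 1;

    for (UINT i = 0; i < lib->GetTypeInfoCount(); ++i) {
        ITypeInfo* info = NULL;
        if (FAILED(lib->GetTypeInfo(i, &info))) continue;

        TYPEATTR* attr = NULL;
        info->GetTypeAttr(&attr);
        // Walk the interfaces this type implements or refers to.
        for (UINT j = 0; j < attr->cImplTypes; ++j) {
            HREFTYPE href; ITypeInfo* ref = NULL;
            if (FAILED(info->GetRefTypeOfImplType(j, &href)) ||
                FAILED(info->GetRefTypeInfo(href, &ref))) continue;

            ITypeLib* refLib = NULL; UINT idx;
            if (SUCCEEDED(ref->GetContainingTypeLib(&refLib, &idx))) {
                TLIBATTR* la = NULL;
                refLib->GetLibAttr(&la);
                OLECHAR guid[64];
                StringFromGUID2(la->guid, guid, 64);
                // LIBID plus major.minor version of the referenced library
                wprintf(L"%s %u.%u\n", guid, la->wMajorVerNum, la->wMinorVerNum);
                refLib->ReleaseTLibAttr(la);
                refLib->Release();
            }
            ref->Release();
        }
        info->ReleaseTypeAttr(attr);
        info->Release();
    }
    lib->Release();
    CoUninitialize();
    return 0;
}

Filtering out entries whose LIBID matches the loaded library itself leaves exactly the importlib-style dependencies (GUID plus major.minor version) you're after.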

Will having two different versions of a DLL cause issues?

I have a VB6 program that needs to use MSOLAP80.dll to display its pivot tables properly. But because MSOLAP90.dll has compatibility issues with this, I cannot use MSOLAP90.dll and still have the pivot tables display.
I have registered MSOLAP90.dll and then registered MSOLAP80.dll again, and everything seems to be fine. However, I don't know whether both are actually registered or whether MSOLAP80.dll is the only one registered, because I have no reference point as to what is new in MSOLAP90.dll. Is it possible that both are registered and my program is just using MSOLAP80.dll, while programs that need MSOLAP90.dll will know to use that one?
I guess I am just confused about how registering DLLs works and whether it is possible to have both of these registered at the same time. Can somebody help with an explanation?
If you want to know for sure which one is registered, you can:
Look at the References dialog for a type library whose path matches one of your DLLs.
Open up RegEdit, and search for MSOLAP80.DLL or MSOLAP90.DLL (uncheck "Match whole string only").
If you find references for both DLLs, then you are safe because you can bind to a specific version. If you find a reference to the wrong DLL, then unregister the wrong one, and register the right one.
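If you'd rather check programmatically than hunt through RegEdit, here's a minimal C++ sketch: given the CLSID of one of the coclasses (a placeholder argument here; take the real GUID from the type library), it prints which server DLL HKCR currently maps it to:

// whichserver.cpp - show the server registered for a CLSID
// (link with advapi32.lib); pass the CLSID in {braced-guid} form
#include <windows.h>
#include <cstdio>
#pragma comment(lib, "advapi32.lib")

int main(int argc, char** argv)
{
    if (argc < 2) return 1;

    char key[256];
    _snprintf_s(key, _TRUNCATE, "CLSID\\%s\\InprocServer32", argv[1]);

    char path[MAX_PATH];
    LONG len = sizeof(path);
    // Read the default value of the key under HKEY_CLASSES_ROOT.
    if (RegQueryValueA(HKEY_CLASSES_ROOT, key, path, &len) == ERROR_SUCCESS)
        printf("%s\n", path);
    else
        printf("not registered\n");
    return 0;
}

If the printed path points at the wrong DLL, the last registration won, and you know which one to unregister and re-register.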
COM originally only allowed one registration at a time for a given set of CLSIDs (which uniquely identify classes) and IIDs (which uniquely identify classes' interfaces). It is possible to have more than one reference to a LIBID (which identifies a type library - a resource embedded in the DLL), but they have to have different versions.
From Windows XP onwards, it became possible to do side-by-side DLL access, in which an executable can access a specific version of a DLL, overriding the value in the registry. You need to embed a manifest in the EXE or place a .manifest file in the same folder as the EXE.
Unfortunately, the documentation for this seems to have disappeared from MSDN, and is only referred to by a couple of Knowledge Base articles:
http://support.microsoft.com/kb/828629
http://support.microsoft.com/kb/843524
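For reference, the mechanism works by declaring the dependency in the application's manifest. A minimal sketch, where the assembly name, version, and publicKeyToken are placeholders for the real identity of the assembly you want to bind to:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity type="win32" name="MyApp" version="1.0.0.0"/>
  <dependency>
    <dependentAssembly>
      <!-- Placeholder identity: bind to the specific version you need -->
      <assemblyIdentity type="win32" name="Vendor.SomeLibrary"
                        version="8.0.0.0" processorArchitecture="x86"
                        publicKeyToken="0000000000000000"/>
    </dependentAssembly>
  </dependency>
</assembly>

The loader then resolves that assembly by the declared version rather than by whatever happens to be registered.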

What is the best way to organize source code of a large Cocoa application in Xcode?

Here is what I'm looking for:
I'd like to separate pieces of functionality into modules or components of some sort, limiting class visibility so that not every class has access to every other class, which over time results in spaghetti code.
In Java & Eclipse, for example, I would use packages and put each package into a separate project with a clearly defined dependency structure.
Things I have considered:
Using separate folders for source files and using Groups in Xcode:
Pros: simple to do, almost no Xcode configuration needed
Cons: no compile-time separation of functionality, i.e. access to everything is only one #import statement away
Using Frameworks:
Pros: Framework code cannot access classes outside the framework. This enforces encapsulation and keeps things separate
Cons: Code management is cumbersome if you work on multiple Frameworks at the same time. Each Framework is a separate Xcode project with a separate window
Using Plugins:
Pros: Similar to Frameworks, Plugin code can't access code of other plugins. Clean separation at compile-time. Plugin source can be part of the same Xcode project.
Cons: Not sure. This may be the way to go...
Based on your experience, what would you choose to keep things separate while being able to edit all sources in the same project?
Edit:
I'm targeting Mac OS X
I'm really looking for a solution to enforce separation at compile time
By plugins I mean Cocoa bundles (http://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/LoadingCode/Concepts/Plugins.html)
I have worked on some good-sized Mac projects (>2M SLOC in my last one, spread across 90 xcodeproj files) and here are my thoughts on managing them:
Avoid dynamic loads like Frameworks, Bundles, or dylibs unless you are actually sharing the binaries between groups. These tend to create more complexity than they solve, in my experience. Plus they don't port easily to iOS, which means maintaining multiple approaches. Worse, having lots of dynamic libraries increases the likelihood of including the same symbols twice, leading to all kinds of crazy bugs. This happens when you include some "helper" class directly in more than one library. If it includes a global variable, the bugs are awesome, as different threads use different instances of the global.
Static libraries are the best choice in many if not most cases. They resolve everything at build time, allowing code stripping in your C/C++ and other optimizations not possible in dynamic libraries. They get rid of "hey, it loads on my system but not the customer's" (when you use the wrong value for the framework path). No need to deal with slides when computing line numbers from crash stacks. They catch duplicate symbols at build time, saving many hours of debugging pain.
Separate major components into separate xcodeproj. Really think about what "major" means here, though. My 90-project product was way too many. Just doing dependency checking can become a very non-trivial exercise. (Xcode 4 can improve this, but I left the project before we ever were able to get Xcode 4 to reliably build it, so I don't know how well it did in the end.)
Separate public from private headers. You can do this with static libs just as well as you can with Frameworks. Put the public headers in a different directory. I recommend each component have its own public include directory for this purpose.
Do not copy headers. Include them directly from the public include directory for the component. Copying headers into a shared tree seems like a great idea until you do it. Then you find that you're editing the copy rather than the real one, or you're editing the real one, but not actually copying it. In any case, it makes development a headache.
Use xcconfig files, not the build pane. The build pane will drive you crazy in these kinds of big projects. Mine tend to have lines like this:
common="../../common"
foo="$(common)/foo"
HEADER_SEARCH_PATHS = $(inherited) $(foo)/include
Within your public header path, include your own bundle name. In the example above, the path to the main header would be common/foo/include/foo/foo.h. The extra level seems a pain, but it's a real win when you import. You then always import like this: #import <foo/foo.h>. Keeps everything very clean. Don't use double-quotes to import public headers. Only use double-quotes to import private headers in your own component.
I haven't decided the best way for Xcode 4, but in Xcode 3, you should always link your own static libraries by adding the project as a subproject and dragging the ".a" target into your link step. Doing it this way ensures that you'll link the one built for the current platform and configuration. My really huge projects haven't been able to convert to Xcode 4 yet, so I don't have a strong opinion yet on the best way there.
Avoid searching for custom libraries (the -L and -l flags at the link step). If you build the library as part of the project, then use the advice above. If you pre-build it, then add the full path in OTHER_LDFLAGS. Searching for libraries involves some surprising algorithms and makes the whole thing hard to understand. Never drop a pre-built library into your link step. If you drop a pre-built libssl.a into your link step, Xcode actually adds a -L parameter for its path and then adds -lssl. Under default search rules, even though you show libssl.a in your build pane, you'll actually link to the system libssl.dylib. Deleting the library will remove the -l but not the -L, so you can wind up with bizarre search paths. (I hate the build pane.) Do it this way instead in xcconfig:
OTHER_LDFLAGS = "$(openssl)/lib/libssl.a"
If you have stable code that is shared between several projects, and while developing those projects you're never going to mess with this code (and don't want the source code available), then a Framework can be a reasonable approach. If you need plugins to avoid loading large amounts of unnecessary code (and you really won't load that code in most cases), then bundles may be reasonable. But in the majority of cases for application developers, one large executable linked together from static libraries is the best approach IMO. Shared libraries and frameworks only make sense if they're actually shared at runtime.
My suggestion would be:
Use Frameworks. They're the most easily reusable build artifact of the options you list, and the way you describe the structure of what you are trying to achieve sounds very much like creating a set of Frameworks.
Use a separate project for each Framework. You'll never be able to get the compiler to enforce the kind of access restrictions you want if everything is dumped into a single project. And if you can't get the compiler to enforce it, then good luck getting your developers to do so.
Upgrade to Xcode 4 (if you haven't already). This will allow you to work on multiple projects in a single window (pretty much like Eclipse does it), without intermingling the projects. This pretty much eliminates the cons you listed under the Frameworks option.
And if you are targeting iOS, I very strongly recommend that you build real frameworks as opposed to the fake ones that you get by using the bundle-hack method, if you aren't building real frameworks already.
I've managed to keep my sanity working on my project, which has grown over the past months to be fairly large (in number of classes), by forcing myself to practice Model-View-Controller (MVC) diligently, plus a healthy amount of comments and the indispensable source control (Subversion, then Git).
In general, I observe the following:
"Model" Classes that serialize data (doesn't matter from where, and including app's 'state') in an Objective-C 1 class subclassed from NSObject or custom "model" classes that inherits from NSObject. I chose Objective-C 1.0 more for compatibility as it's the lowest common denominator and I didn't want to be stuck in the future writing "model" classes from scratch because of dependency of Objective-C 2.0 features.
View classes are in XIBs, with the XIB version set to support the oldest toolchain I need (so I can use Xcode 3 in addition to Xcode 4). I tend to start with the Apple-provided Cocoa Touch APIs and frameworks to benefit from any optimizations/enhancements Apple may introduce as those APIs evolve.
Controller classes contain the usual code that manages display/animation of views (programmatically as well as from XIBs) and serialization of data from the model classes.
If I find myself reusing a class a few times, I explore refactoring and optimizing it (measured using Instruments) into what I call "utility" classes, or into protocols.
Hope this helps, and good luck.
This depends largely on your situation and your own specific preferences.
If you're coding "proper" object-oriented classes then you will have a class structure with methods and variables hidden from other classes where necessary. Unless your project is huge and built of hundreds of different distinguishable modules then its probably sufficient to just group classes and resources into folders/groups in XCode and work with it that way.
If you've really got a huge project with easily distinguishable modules, then by all means create a framework. I would suggest, though, that this is only really necessary where you are using the same code in different applications, in which case creating a framework/extra project is a good way to effectively share code between projects. In practically all other cases it would probably be overkill and much more complicated than needed.
Your last idea seems to be a mix of the first two. Plugins (as I understand your description - tell me if I'm wrong) are just separated classes in the same project? This is probably the best way, and should be done (to an extent) in any case. If you are creating functionality to draw graphs (for example), you should section off a new folder/group and build your classes and functionality within that, only including those classes in your main application where necessary.
Let me put it this way. There's no reason to go over the top... but, even if just for your own sanity - or the maintainability of your code - you should always endeavour to group everything up into descriptive groups/folders.

Cocoa/Objective-C Plugins Collisions

My application has a plugin system that allows my users to write their own plugins that get loaded at runtime. Usually this is fine, but in some cases two plugins use the same libraries, which causes a collision between the two.
Example:
Plugin A wants to use TouchJSON for working with JSON, so its creator adds the TouchJSON code to the plugin source, and it gets compiled and linked into the plugin binary. Later, Plugin B also wants to use that same library and does exactly the same. Now when my app loads these two different plugins, it detects this and spits out a warning like this:
Class CJSONScanner is implemented in both [path_to_plugin_a] and [path_to_plugin_b]. One of the two will be used. Which one is undefined.
Since my app just loads plugins and makes sure they conform to a certain protocol, I have no control over which plugins are loaded or whether two or more use the same library.
As long as both plugins use the exact same version of the library, this will probably work, but as soon as the API changes in one plugin, a bunch of problems will arise.
Is there anything I can do about this?
The bundle-loading system provides no means of resolving name conflicts peacefully. In fact, we're told to ensure the problem doesn't happen in the first place, rather than being told what to do when it does. (Obviously, in your case, prevention isn't possible.)
You could file a bug report with this issue.
If this is absolutely critical to your application, you may want to have bundles live in separate processes and use some kind of IPC, possibly NSDistantObject, to pass the data from your program to the plugin hosts. However, I'm fairly sure this is a bag of hurt, so if you don't have very clearly-defined interfaces that allow for distribution into different processes, it might be quite an undertaking.
In a single-process model, the only way to deal with this is to ensure that the shared code (more precisely, the shared Objective-C classes) is loaded once. There are two ways to do this:
Put the shared code in a framework.
Put the shared code in a loadable bundle, and load the bundle when the plug-in is loaded if the relevant classes aren’t already available (check using NSClassFromString()). The client code would also have to use NSClassFromString() rather than referring to classes directly.
Of course, if you aren’t in control of the plug-ins you can’t enforce either of these schemes. The best you can do is provide appropriate guidelines and possibly infrastructure; for instance, in the second case the loading could be handled by the application, perhaps by specifying a class to check for and the name of an embedded bundle to load if it isn’t available in the plug-in’s Info.plist.

Should DLLs have their own configuration files?

Apologies if this is a duplicate, but I've not managed to find this question being asked directly.
The general opinion here (that's me and him across from me) is that they shouldn't, the reason being that DLLs can be shared; therefore the idea of having application-specific information in a DLL is nonsense. If the information is not application-specific, then constants can be used.
A further question is, assuming that DLLs do not have their own config files, whether a DLL should use the configuration file of the executable that loaded it, or instead be passed the relevant data as part of some kind of constructor. Our opinion here is the latter, as it makes things more testable; the downside is that it will sometimes be necessary to pass a significant amount of data to the DLL.
Opinions?
There's no reason why you can't have the best of both worlds in terms of "simple to configure with config files" and "testable". Have a static method which can create instances from the configuration file, but also provide a constructor for more control and testability. The static method would just grab the settings and call the constructor.
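The pattern is language-agnostic; here's a minimal C++ sketch of the idea, where the BankClient name, its settings, and the key=value file format are all made up for illustration:

// Hypothetical component: the constructor takes explicit settings (testable),
// while a static factory reads them from a simple key=value file (convenient).
#include <fstream>
#include <stdexcept>
#include <string>
#include <utility>

class BankClient {
public:
    BankClient(std::string url, std::string user)
        : url_(std::move(url)), user_(std::move(user)) {}

    // Factory: parse the config file, then delegate to the constructor.
    static BankClient fromConfigFile(const std::string& path) {
        std::ifstream in(path);
        if (!in) throw std::runtime_error("cannot open " + path);
        std::string line, url, user;
        while (std::getline(in, line)) {
            std::string::size_type eq = line.find('=');
            if (eq == std::string::npos) continue;
            std::string key = line.substr(0, eq);
            std::string value = line.substr(eq + 1);
            if (key == "url") url = value;
            else if (key == "user") user = value;
        }
        return BankClient(url, user);
    }

private:
    std::string url_, user_;
};

// Tests construct BankClient("http://test", "alice") directly;
// production code calls BankClient::fromConfigFile("interface.cfg").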
I believe it's possible to create settings classes for DLLs just like for any other project; you just need to put the actual text into the application's config file instead of one for the DLL. Basically, ignore the app.config generated for the library project, except as a template for the application's central one.
Alternatively, use something like Spring.NET to manage this sort of thing :)
Usually, I guess, you should pass relevant information to the functions you're calling, or set relevant properties on objects you're creating that are defined within the DLL. I guess that's why .NET does not really support config files for DLLs (you can create them, but they won't be used at runtime).
I have one scenario, where DLLs are reading a config file, but that is very special: The .NET DLL exports objects as COM objects for use by Microsoft Navision. It communicates with a factoring bank using an XML-RPC interface.
While the DLL is installed on every user's machine, the configuration for the interface is common to all users, so I have a configuration file placed on a network drive that's mapped on every PC, and the configuration (URL, credentials, etc.) is read from that common file.
Whether that's good practice is up to the reader, but in that scenario having a common config file just made sense...