Questions about DLL load order

As we know, the implicitly linked DLLs of an executable are loaded into memory by the loader at load time, and the loader also calls their entry points to initialize them. This is a linear process: they are loaded one by one and initialized one by one. So the most important thing is the order, and the order affects a lot of things.
Q1: The initialization order can differ from the load order - is this true?
Q2: The load order is affected by the import table - is this true?
Q3: The initialization order of independent DLLs is affected by the import table, so a DLL may be initialized first simply because it appears first in the table - is this true?

"The process is created in a suspended state with the
CREATE_SUSPENDED flag to CreateProcess. Detours then modifies the
image of the application binary in the new process to include the
specified DLL as its first import. Execution in the process is
then resumed. When execution resumes, the Windows process loader
will first load the target DLL and then any other DLLs in the
application's import table, before calling the application entry
point."
I found this important passage in the Microsoft Detours documentation. So for Q2 and Q3, yes, it's true. I will do more research on this topic.
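As a rough way to check Q2/Q3 empirically, the import table of a binary can be dumped and compared against what the loader actually does (for example with loader snaps). Below is a minimal C# sketch that lists imported DLLs in the order they appear in the import table; it uses the PEReader type from System.Reflection.PortableExecutable, and the hand-rolled IMAGE_IMPORT_DESCRIPTOR parsing is illustrative rather than production-grade.

using System;
using System.IO;
using System.Reflection.PortableExecutable;
using System.Text;

class DumpImports
{
    static void Main(string[] args)
    {
        using (var stream = File.OpenRead(args[0]))
        using (var pe = new PEReader(stream))
        {
            // RVA of the import directory (IMAGE_DIRECTORY_ENTRY_IMPORT).
            var importDir = pe.PEHeaders.PEHeader.ImportTableDirectory;
            var reader = pe.GetSectionData(importDir.RelativeVirtualAddress).GetReader();

            while (true)
            {
                // IMAGE_IMPORT_DESCRIPTOR: five DWORDs, terminated by an all-zero entry.
                int originalFirstThunk = reader.ReadInt32();
                int timeDateStamp = reader.ReadInt32();
                int forwarderChain = reader.ReadInt32();
                int nameRva = reader.ReadInt32();
                int firstThunk = reader.ReadInt32();
                if (nameRva == 0)
                    break;

                // The Name field is an RVA to a null-terminated ASCII DLL name.
                var nameReader = pe.GetSectionData(nameRva).GetReader();
                var name = new StringBuilder();
                byte b;
                while ((b = nameReader.ReadByte()) != 0)
                    name.Append((char)b);
                Console.WriteLine(name.ToString());   // printed in import-table order
            }
        }
    }
}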

Related

"Object already exported, no package change is possible" while mass package assignment

I need to change the package for ~250 SAP development objects (ABAP classes, data elements, tables, etc.). I'm getting error message TR242 (Object already exported, no package change is possible) when I try to make the change via the SE24/SE80 transactions or via the RSWBO052 report.
The SAP help docs say the object must be copied under a new name, the old one deleted, and the new one renamed back to the old name. However, that's not a workable approach for 250 objects.
Is there any way to do a mass package change other than call transaction/LSMW in this case?
The problem occurred because I was trying to move the development objects to a non-transportable package, as @vwegert mentioned above. The target package was marked as non-transportable because it was flagged as a legacy package. This happened because the target package had been moved from a system with a lower basis level than the current system's. The following steps are necessary to resolve the issue:
1) Migrate the legacy package via report RS_MIGRATE_PACKAGES (see note 1711900). The 'legacy package' flag will be removed, but the package will still be non-transportable. However, you will be able to recreate the package after the migration.
2) Delete the non-transportable target package and create a new one as a copy of the non-TMS package.
3) Assign all the necessary objects to the package created in step 2 using the RSWBO052 report.
This message occurs if you try to move objects from a transport-enabled package to a non-transportable package like $TMP. The rationale behind this is:
The object was once in a transportable package, so it must have been added to at least one transport request.
The transport request might have been transported to another system (directly or via ToC), so the other system might have that object.
The current system is the original system of the object, so it is responsible for notifying the other systems (via transport) when the object is to be deleted.
Moving the object to a non-transportable package is semantically equivalent to deleting it for the rest of the system landscape.
Since that process happens very infrequently, it's usually sufficient to direct the developer to copy and delete the object.

RazorEngine IsolatedTemplateService is not preventing growth of number of loaded assemblies

I am attempting to use the IsolatedTemplateService in RazorEngine 3.2.0.
According to http://www.fidelitydesign.net/?p=473, this should prevent an ever-increasing number of assemblies from being loaded into my main AppDomain.
This doesn't seem to be the case, however. I am using PerfMon to monitor the number of loaded assemblies and each time I execute a template, the counter value goes up.
Repro:
1) Open perfmon and start watching the .NET CLR Loading: Current Assemblies and .NET CLR Loading: Current Appdomains counters
2) Parse a template using the IsolatedTemplateService
Results:
The "Current Appdomains" counter DOES spike up when the template is parsed, and then drops back down afterwards. It does look like a separate appdomain is being created.
The "Current Assemblies" counter just keeps going up. It does NOT drop back down at any time.
Am I reading these counters wrong? Or is the IsolatedTemplateService failing to constrain the dynamic assembly creation to the temporary AppDomain?
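Here is roughly what the repro looks like in code. This is only a sketch: the exact Parse overloads changed between RazorEngine 3.x releases, and the model class is marked [Serializable] because IsolatedTemplateService marshals the model into the child AppDomain.

using System;
using RazorEngine.Templating;

[Serializable]
public class Model
{
    public string Name { get; set; }
}

class Repro
{
    static void Main()
    {
        for (int i = 0; i < 20; i++)
        {
            using (var service = new IsolatedTemplateService())
            {
                // Parse a trivial template in the isolated AppDomain.
                service.Parse("Hello @Model.Name", new Model { Name = "World" });
            }

            // Assemblies visible in the *main* AppDomain; the PerfMon
            // "Current Assemblies" counter described above keeps climbing.
            Console.WriteLine(AppDomain.CurrentDomain.GetAssemblies().Length);
        }
    }
}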

Is there a way to change referenced library files during processing?

I have a custom .NET framework that functions much like a workflow. It uses reflection to get a listing of all the processes it is capable of running from a specific folder, and starts them via reflection through a known entry point (all of them have a method called "Process"). Since these files are only called to do the processing and are not part of the compile... is there a way for me to drop in a new reference library (DLL) for one of the processes being updated without restarting the whole thing?
Here is my flow...
START
Load list of references
Load work, assign to references
After X time, refresh references (or on a WCF refresh command being sent)
Is it possible to do this, or do I need to actually stop and restart the assembly base for it to recognize the new reference file?
Yes, you can with Assembly.Load, but I think you need to look at MEF first.
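To expand on that in code: one rough approach (illustrative names, not from the question) is to load each process DLL from a byte array with Assembly.Load, so the file on disk is never locked and can be overwritten by an updated copy. The caveat is that assemblies already loaded this way stay in memory until their AppDomain is unloaded, which is why a plugin container such as MEF, or a secondary AppDomain you can tear down, is the better long-term answer.

using System;
using System.IO;
using System.Reflection;

static class ProcessRunner
{
    public static void RunAll(string folder)
    {
        foreach (var dll in Directory.GetFiles(folder, "*.dll"))
        {
            // Loading from bytes avoids a file lock, so the DLL can be replaced on disk.
            var assembly = Assembly.Load(File.ReadAllBytes(dll));

            foreach (var type in assembly.GetTypes())
            {
                // Convention from the question: every process exposes a "Process" method.
                var method = type.GetMethod("Process");
                if (method == null || type.IsAbstract)
                    continue;

                var instance = Activator.CreateInstance(type);
                method.Invoke(instance, null);
            }
        }
    }
}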

How to reliably handle files uploaded periodically by an external agent?

It's a very common scenario: some process wants to drop a file on a server every 30 minutes or so. Simple, right? Well, I can think of a bunch of ways this could go wrong.
For instance, processing a file may take more or less than 30 minutes, so it's possible for a new file to arrive before I'm done with the previous one. I don't want the source system to overwrite a file that I'm still processing.
On the other hand, the files are large, so it takes a few minutes to finish uploading them. I don't want to start processing a partial file. The files are just transferred with FTP or SFTP (my preference), so OS-level locking isn't an option.
Finally, I do need to keep the files around for a while, in case I need to manually inspect one of them (for debugging) or reprocess one.
I've seen a lot of ad-hoc approaches to shuffling upload files around, swapping filenames, using datestamps, touching "indicator" files to assist in synchronization, and so on. What I haven't seen yet is a comprehensive "algorithm" for processing files that addresses concurrency, consistency, and completeness.
So, I'd like to tap into the wisdom of crowds here. Has anyone seen a really bulletproof way to juggle batch data files so they're never processed too early, never overwritten before done, and safely kept after processing?
The key is to do the initial juggling at the sending end. All the sender needs to do is:
Store the file with a unique filename.
As soon as the file has been sent, move it to a subdirectory called e.g. completed.
Assuming there is only a single receiver process, all the receiver needs to do is:
Periodically scan the completed directory for any files.
As soon as a file appears in completed, move it to a subdirectory called e.g. processed, and start working on it from there.
Optionally delete it when finished.
On any sane filesystem, file moves are atomic provided they occur within the same filesystem/volume. So there are no race conditions.
Multiple Receivers
If processing could take longer than the period between files being delivered, you'll build up a backlog unless you have multiple receiver processes. So, how to handle the multiple-receiver case?
Simple: each receiver process operates exactly as before. The key is that we attempt to move a file to processed before working on it: that, and the fact that same-filesystem file moves are atomic, means that even if multiple receivers see the same file in completed and try to move it, only one will succeed. All you need to do is check the return value of rename(), or whatever OS call you use to perform the move, and only proceed with processing if it succeeded. If the move failed, some other receiver got there first, so just go back and scan the completed directory again.
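In C#, a receiver following that recipe might look roughly like this (the completed and processed directory names are the ones from the description above; File.Move throws an IOException when another receiver has already claimed the file, which plays the role of checking the return value of rename()):

using System;
using System.IO;
using System.Threading;

class Receiver
{
    static void Main()
    {
        const string completed = "completed";
        const string processed = "processed";
        Directory.CreateDirectory(processed);

        while (true)
        {
            foreach (var file in Directory.GetFiles(completed))
            {
                var claimed = Path.Combine(processed, Path.GetFileName(file));
                try
                {
                    // Atomic on the same filesystem: only one receiver wins this move.
                    File.Move(file, claimed);
                }
                catch (IOException)
                {
                    continue;   // another receiver got there first, or the file vanished
                }

                ProcessFile(claimed);   // work on the file we now own
            }

            Thread.Sleep(TimeSpan.FromSeconds(30));
        }
    }

    static void ProcessFile(string path)
    {
        // actual processing (and optional archiving/deletion) goes here
    }
}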
If the OS supports it, use file system hooks to intercept open and close file operations, something like Dazuko. Other operating systems may let you know about file operations in another way; for example, Novell Open Enterprise Server lets you define epochs and read the list of files modified during an epoch.
Just realized that on Linux you can use the inotify subsystem, or the utilities from the inotify-tools package.
File transfer is one of the classics of system integration. I'd recommend getting the Enterprise Integration Patterns book to build your own answer to these questions; to some extent, the answer depends on the technologies and platforms you are using for endpoint implementation and for file transfer. It's a quite comprehensive collection of workable patterns, and fairly well written.

Executing and then Deleting a DLL in c#

I'm creating a self-updating app where the majority of the code is in a separate DLL. It's a command-line app and will eventually run on Mono. For now I'm just trying to get this code to work in C# on Windows at the command line.
How can I create a C# application from which I can delete a supporting DLL while it's running?
AppDomain domain = AppDomain.CreateDomain("MyDomain");
ObjectHandle instance = domain.CreateInstance( "VersionUpdater.Core", "VersionUpdater.Core.VersionInfo");
object unwrap = instance.Unwrap();
Console.WriteLine(((ICommand)unwrap).Run());
AppDomain.Unload(domain);
Console.ReadLine();
At the ReadLine, VersionUpdater.Core.dll is still locked and cannot be deleted.
The ICommand interface is in VersionUpdater.Common.dll, which is referenced by both the command-line app and VersionUpdater.Core.dll.
The only way I've ever managed something similar is to have the DLL in a separate AppDomain to the assembly that is trying to delete it. I unload the other AppDomain and then delete the DLL from disk.
If you're looking for a way to perform the update, off the top of my head I would go for a stub exe that spawns the real AppDomain. Then, when that stub exe detects an update is to be applied, it quits the other AppDomain and then does the update magic.
EDIT: The updater cannot share DLLs with the thing it is updating, otherwise it will lock those DLLs and therefore prevent them from being deleted. I suspect this is why you still get an exception. The updater has to be standalone and not reliant on anything that the other AppDomain uses, and vice versa.
Unwrap will load the assembly of the object's type into the AppDomain that calls it. One way around this is to create a type in your "base" assembly that calls command.Run, then load that into your new AppDomain. This way you never have to call Unwrap on an object whose type lives in a different assembly, and you can delete that assembly on disk.
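A rough sketch of that idea, assuming the runner type lives in VersionUpdater.Common (the assembly both sides already reference) and that ICommand.Run() returns something that can safely cross the AppDomain boundary as a string:

// In VersionUpdater.Common.dll:
public class CommandRunner : MarshalByRefObject
{
    public string Run(string assemblyName, string typeName)
    {
        // Runs entirely inside the child AppDomain, so VersionUpdater.Core is
        // only ever loaded there; only the string result crosses the boundary.
        var command = (ICommand)Activator.CreateInstance(assemblyName, typeName).Unwrap();
        return command.Run().ToString();
    }
}

// In the command-line app:
AppDomain domain = AppDomain.CreateDomain("MyDomain");
var runner = (CommandRunner)domain.CreateInstanceAndUnwrap(
    typeof(CommandRunner).Assembly.FullName, typeof(CommandRunner).FullName);
Console.WriteLine(runner.Run("VersionUpdater.Core", "VersionUpdater.Core.VersionInfo"));
AppDomain.Unload(domain);
File.Delete("VersionUpdater.Core.dll");   // no longer locked once the domain is gone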
When I built a self-updating app, I used the stub idea, but the stub was the app itself.
The app would start, look for updates. If it found an update, it would download a copy of the new app to temp storage, and then start it up (System.Diagnostics.Process.Start()) using a command-line option that said "you are being updated". Then the original exe exits.
The spawned exe starts up, sees that it is an update, and copies itself to the original app directory. It then starts the app from that new location. Then the spawned exe ends.
The newly started exe from the original app install location starts up - sees the temp file and deletes it. Then resumes normal execution.
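A compressed sketch of that hand-off (the --updating argument, paths, retry loop, and helper methods are illustrative; the original description just uses a "you are being updated" command-line option):

using System;
using System.Diagnostics;
using System.IO;
using System.Reflection;
using System.Threading;

class SelfUpdatingApp
{
    static void Main(string[] args)
    {
        string current = Assembly.GetEntryAssembly().Location;
        string installed = @"C:\MyApp\MyApp.exe";                            // assumed install location
        string temp = Path.Combine(Path.GetTempPath(), "MyApp.update.exe");

        if (args.Length > 0 && args[0] == "--updating")
        {
            // We are the downloaded copy running from temp storage. The original
            // exe may still be shutting down, so retry the overwrite briefly.
            for (int i = 0; i < 10; i++)
            {
                try { File.Copy(current, installed, true); break; }
                catch (IOException) { Thread.Sleep(500); }
            }
            Process.Start(installed);   // relaunch from the install location, then exit
            return;
        }

        if (File.Exists(temp))
        {
            try { File.Delete(temp); }   // normal start-up: remove any leftover temp copy
            catch (IOException) { }      // the temp copy may still be exiting; retry next run
        }

        if (UpdateAvailable())
        {
            DownloadUpdateTo(temp);              // hypothetical helpers, not from the answer
            Process.Start(temp, "--updating");
            return;                              // exit so the install location can be overwritten
        }

        // ... normal application execution continues here ...
    }

    static bool UpdateAvailable() { return false; }
    static void DownloadUpdateTo(string path) { }
}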
You can always use MOVEFILE_DELAY_UNTIL_REBOOT to delete the file on reboot. This is most likely the least hacky way to do this sort of thing; by hacky I mean things like loading up new DLLs, injecting into explorer.exe, or even patching a system DLL to get loaded into another process, etc.
From the MSDN documentation for MoveFileEx:
lpNewFileName [in, optional] - The new name of the file or directory on the local computer.
When moving a file, the destination can be on a different file system or volume. If the destination is on another drive, you must set the MOVEFILE_COPY_ALLOWED flag in dwFlags. When moving a directory, the destination must be on the same drive.
If dwFlags specifies MOVEFILE_DELAY_UNTIL_REBOOT and lpNewFileName is NULL, MoveFileEx registers the lpExistingFileName file to be deleted when the system restarts. If lpExistingFileName refers to a directory, the system removes the directory at restart only if the directory is empty.
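For completeness, the delete-on-reboot call from C# is a small P/Invoke. This is a sketch; note that MOVEFILE_DELAY_UNTIL_REBOOT writes to the PendingFileRenameOperations registry value and therefore normally requires administrative rights.

using System;
using System.ComponentModel;
using System.Runtime.InteropServices;

class PendingDelete
{
    const uint MOVEFILE_DELAY_UNTIL_REBOOT = 0x4;

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern bool MoveFileEx(string lpExistingFileName, string lpNewFileName, uint dwFlags);

    static void Main(string[] args)
    {
        // A null new name plus MOVEFILE_DELAY_UNTIL_REBOOT schedules the file
        // for deletion on the next restart instead of moving it.
        if (!MoveFileEx(args[0], null, MOVEFILE_DELAY_UNTIL_REBOOT))
            throw new Win32Exception(Marshal.GetLastWin32Error());
    }
}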