Can I enable/disable aspects from an external application? - aop

Suppose I have two applications: one has features based on aspect-oriented programming, such as authentication, authorization, logging and exception handling, and the other one connects to the first application and has buttons that enable/disable its aspects. Is there a way I can do this (while the AOP application is running)?

You could write your program so that logging is disabled whenever some (global static) property is false; toggling that property from the other application then effectively disables logging. (And yes, you could implement this behavior in one or many aspects.)
However, an AOP application is a compiled program, and aspects are just compiled code. There is no "now run the program as if the aspect weren't compiled in". Even if you recompiled your code without the aspects, you wouldn't be guaranteed that everything still worked (except for the logging). Some aspects modify inputs that your program relies on, for example; without those aspects your program might simply crash.
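The question does not name a language or AOP framework, so purely as a hedged illustration of the "global static property" idea, here is a minimal plain C++ sketch (no AOP framework assumed) in which the cross-cutting logging code checks an atomic flag that the external application would flip through whatever remote interface you expose:

    #include <atomic>
    #include <iostream>
    #include <string>

    // Hypothetical global switch; the external app would toggle it via some
    // IPC mechanism you expose (socket, named pipe, etc.) - not shown here.
    std::atomic<bool> g_loggingEnabled{true};

    // Stand-in for the "logging aspect": every call site funnels through here,
    // so flipping the flag disables the behavior without recompiling.
    void logAspect(const std::string& message) {
        if (!g_loggingEnabled.load(std::memory_order_relaxed))
            return;                       // aspect is "disabled"
        std::cout << "[LOG] " << message << '\n';
    }

    int main() {
        logAspect("service started");     // printed
        g_loggingEnabled = false;         // what the remote toggle would do
        logAspect("this call is suppressed");
    }

The same guard idea carries over to a real aspect: the advice body checks the flag and returns early, which is as close to "disabling" compiled-in aspects at runtime as you can get.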

Related

How to execute an untrusted function efficiently in a cross-platform way?

I am writing an open-source, cross-platform application in C++ that targets Windows, Mac, and Linux on x86 CPUs. The application produces a stream of data (integers) that needs to be validated, and my application will perform actions depending on the validation result. There are multiple validators, which we shall call "modules", and they can be swapped out for one another.
Anybody can write and share modules with other users, so my application has to ensure that maliciously-written modules cannot harm the user in any way (perhaps except via high CPU usage, in which case my application should be able to kill the module after some amount of time - this can be done by using a surrogate process). Furthermore, the stream of data is being sent at a high rate (up to 100kB/s).
Fortunately, the code in these modules is usually simple arithmetic operations on data in the stream (usually processing each incoming integer in constant time), and they do not need to make any system calls (not even heap allocation).
I've considered the following possibilities (all of them with some drawbacks):
Kernel-based sandboxing
On Linux, we can use secure computing (seccomp), which prevents a process from making any system calls except for reading and writing with already-open file descriptors. Module creators would write their modules as a single function that takes in input and output file descriptors (in a language like C or C++), compile it into a shared object, then distribute that shared object.
My application will probably prepare input and output file descriptors, then fork() itself or exec() a surrogate process, and this child process uses dlopen() and dlsym() to get a pointer to the untrusted function. Then strict secure computing mode will be enabled, before executing the untrusted function.
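As a rough, hedged sketch of that flow on Linux (compile with -ldl; the module path ./module.so, the exported symbol process_stream, and the pipe wiring are all illustrative assumptions, not a fixed interface):

    #include <dlfcn.h>
    #include <linux/seccomp.h>
    #include <sys/prctl.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>
    #include <cstdio>

    // Assumed module contract: reads integers from in_fd, writes verdicts to
    // out_fd until EOF. The symbol name is an illustrative choice.
    using module_fn = void (*)(int in_fd, int out_fd);

    int main()
    {
        int to_child[2], from_child[2];
        if (pipe(to_child) != 0 || pipe(from_child) != 0) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid == 0) {                                     // child: runs untrusted code
            void* handle = dlopen("./module.so", RTLD_NOW); // note: module ctors run here!
            if (!handle) { std::fprintf(stderr, "%s\n", dlerror()); _exit(1); }
            auto fn = reinterpret_cast<module_fn>(dlsym(handle, "process_stream"));
            if (!fn) _exit(1);

            // Strict seccomp: from here on, only read/write on already-open fds,
            // _exit and sigreturn are permitted; anything else kills the child.
            if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0) _exit(1);

            fn(to_child[0], from_child[1]);                 // untrusted, but confined
            _exit(0);
        }

        // Parent: stream data into to_child[1], read results from from_child[0],
        // and kill the child on a timeout if it spins (the surrogate-process idea).
        close(to_child[0]);
        close(from_child[1]);
        // ... write/read loop elided ...
        int status = 0;
        waitpid(pid, &status, 0);
        return 0;
    }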
Drawbacks: There's the problem that dlopen() will actually run the constructor function from the shared library. This would have to be properly sandboxed as well, and I can't think of a way to do so. Also, of course, this thing will only work on Linux. As far as I know, there is no way to ban WinNT system calls on Windows, so a similar solution on Windows won't be very secure.
Application-level sandboxing
[[ Any form of application-level sandboxing means that we cannot run untrusted machine code of any form. An untrusted function can overwrite its return value or data outside its call stack, thereby compromising the whole application (and effectively acquiring any permissions that the original application had). ]]
Make modules use a simple scripting language that does not support any system calls - just pure arithmetic operations and perhaps the ability to read an input stream. My application would contain an interpreter for this language.
Drawbacks: Unfortunately I have not found such a scripting language. Many scripting languages have extensive functionality (e.g. Python), and a sandbox (e.g. PyPy's sandbox) simply filters OS system calls. I would be shipping a lot of useless interpreter code with my application, and it is arguably more prone to security issues due to bugs in the interpreter than a language with simply no functionality to do anything other than simple calculations and control flow (basically a function that does not make any system calls). Furthermore, marshalling the data between C++ (machine code) and the scripting language is usually a slow process.
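To make the "no functionality by construction" idea concrete, here is a toy sketch of the sort of interpreter meant above: a tiny stack machine whose only opcodes are arithmetic, so a module (which is just data, not machine code) physically cannot issue a system call. The opcode set and encoding are invented for illustration, and the max_steps budget shows how the host can bound a module that spins:

    #include <cstddef>
    #include <cstdint>
    #include <iostream>
    #include <stdexcept>
    #include <vector>

    // Toy opcode set: the "module" is data, not machine code, so the worst it
    // can do is waste CPU, which the instruction budget cuts off.
    enum Op : std::int64_t { PUSH_INPUT, PUSH_CONST, ADD, MUL, DONE };

    // Runs one module over one input integer and returns its verdict.
    std::int64_t run_module(const std::vector<std::int64_t>& code, std::int64_t input,
                            std::size_t max_steps = 10000)
    {
        std::vector<std::int64_t> stack;
        auto pop = [&stack]() {
            if (stack.empty()) throw std::runtime_error("stack underflow");
            std::int64_t v = stack.back(); stack.pop_back(); return v;
        };
        std::size_t steps = 0;
        for (std::size_t pc = 0; pc < code.size(); ++pc) {
            if (++steps > max_steps) throw std::runtime_error("instruction budget exceeded");
            switch (static_cast<Op>(code[pc])) {
                case PUSH_INPUT: stack.push_back(input); break;
                case PUSH_CONST: stack.push_back(code.at(++pc)); break;
                case ADD: { auto b = pop(), a = pop(); stack.push_back(a + b); break; }
                case MUL: { auto b = pop(), a = pop(); stack.push_back(a * b); break; }
                case DONE: return pop();
                default: throw std::runtime_error("bad opcode");
            }
        }
        throw std::runtime_error("module fell off the end");
    }

    int main()
    {
        // A "module" computing input * 3 + 7, expressed purely as data:
        std::vector<std::int64_t> module = { PUSH_INPUT, PUSH_CONST, 3, MUL,
                                             PUSH_CONST, 7, ADD, DONE };
        std::cout << run_module(module, 10) << '\n';   // prints 37
    }

Because there is no marshalling boundary other than passing one integer per call, the "interpreter is slow" drawback mostly reduces to the per-opcode dispatch cost.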
Distribute modules with a 'safe' compiled language that again does not support any system calls. My application would contain a JIT for this language.
Marshalling won't be necessary because my application would call into the JITted machine code of the untrusted module, so performance across this boundary should be fast. The untrusted module now won't be able to corrupt the stack, attempt return-oriented programming, or perform any other malicious actions, due to the language restrictions and checks of the 'safe' language. WebAssembly is the first and only language that comes to mind (if it can be called a language). (As far as I can tell, WebAssembly seems to provide the security guarantees for my use case, right?)
Drawbacks: The existing implementations of WebAssembly seem to be all browser-based, so I would have to steal an implementation from an open source browser. This does seem like a lot of work, considering that I would have to uncouple it from all the JavaScript and other browser bits. However, a standalone WebAssembly JIT based on LLVM seems to be under development.
Question:
What is the best way to execute an untrusted function efficiently that works on Windows, Mac, and Linux?
Right now, I think that the scripting language way would probably be the safest, and be the easiest for module writers. But for a more efficient solution, WebAssembly is probably better. Am I right, or are there better or easier solutions that I have not thought of?
(Remark: I think several pairs of tags used in this question have never been seen together before!)
Regarding WebAssembly:
Unfortunately, there is no production-quality stand-alone implementation yet. I expect some to show up in the future, but it hasn't happened yet.
For historical reasons, existing production implementations are all part of a JavaScript VM. Fortunately, none of these VMs is tied to a browser. If you don't mind including some unused JS baggage, you can embed them as they are (ripping out the JS would be very hard). One problem, though, is that these VMs don't yet provide embedding interfaces for Wasm specifically. You have to go through JS, which is stupid.
There is an initial design for a C and C++ API for WebAssembly, which would give direct access to an embedded Wasm VM. It is meant to be VM-neutral, i.e., could be implemented by any existing VM (the repo contains a prototype implementation on top of V8). This may evolve into a standard, but I cannot promise any timeline. Right now it's only for the brave.

Why is it a good idea to limit the use of custom actions in my WiX / MSI setups?

Why is it a good idea to limit the use of custom actions in my WiX / MSI setups?
Deployment is a crucial part of most development. Please give this content a chance. It is my firm belief that software quality can be dramatically improved by small changes in application design that make deployment more logical and more reliable - and that is what this "answer" is really about: software development.
This is a Q/A-style question split from an answer that became too long: How do I avoid common design flaws in my WiX / MSI deployment solution?.
Core: Essentially, custom actions are complex to get right because the complexity arises from several directions at once:
Sequencing, conditioning (when it runs - in which installation mode: install, repair, modify, uninstall, etc.) and impersonation issues (the security context it runs in).
The result is overall poor debuggability - it is very hard work to test all the permutations.
Application Launch: Solution? Prefer application launch code. You get a familiar debugging context and can usually keep the code in the main application source - very important. The same issue applies to licensing in setups (take it out of the setup if you can).
Custom Action Debugging: With all the warnings in place (you will read the short "folksy" version below - yes you will - Jedi trick), and with built-in constructs preferred when available, here is how you can improve custom action debugging:
Common Causes for Custom Action runtime failures
Debugging Custom Actions
For native code / C++, just attach a debugger to msiexec.exe (a minimal debugger-attach sketch follows this list)
Advanced Installer's Debug C# Custom Actions video tutorial
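For the native C++ route, one common trick is to have the action announce the hosting process ID and pause so you can attach before it does its work. A minimal sketch follows; the entry point name MyDebuggableAction is made up, and the MessageBox pause is only compiled into debug builds:

    // Minimal native MSI custom action (compile as a DLL, link msi.lib).
    // The MessageBox pause is a debug aid: while the box is up, attach your
    // debugger to the msiexec.exe process whose PID is shown, set breakpoints,
    // then dismiss the box and step through.
    #include <windows.h>
    #include <msi.h>
    #include <msiquery.h>

    extern "C" UINT __stdcall MyDebuggableAction(MSIHANDLE hInstall)   // name is illustrative
    {
    #ifdef _DEBUG
        wchar_t msg[64];
        wsprintfW(msg, L"Attach debugger to PID %lu, then click OK.",
                  GetCurrentProcessId());
        MessageBoxW(nullptr, msg, L"MyDebuggableAction", MB_OK | MB_ICONINFORMATION);
    #endif

        // Read-only example work: log a property value to the MSI log.
        wchar_t value[256]; DWORD cch = 256;
        if (MsiGetPropertyW(hInstall, L"ProductVersion", value, &cch) == ERROR_SUCCESS)
        {
            PMSIHANDLE hRecord = MsiCreateRecord(1);
            MsiRecordSetStringW(hRecord, 0, L"MyDebuggableAction: ProductVersion = [1]");
            MsiRecordSetStringW(hRecord, 1, value);
            MsiProcessMessage(hInstall, INSTALLMESSAGE_INFO, hRecord);
        }
        return ERROR_SUCCESS;   // never fail a purely diagnostic action
    }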
Short "Folksy" Version:
Overall: a modern setup should be "declared", not coded. "think SQL queries, not scripting". Apply well-tested logic that you just invoke to do the work. More on that later.
Avoid custom actions if you can.
They will screw with your head (and do murder to your girly figure)!
Seriously: they are the number one cause of deployment failure.
They feature a "conspiratory complexity" whereby things often break unexpectedly and without warning after you have deployed your setup for some time and it is "in the wild".
Deployment is a process - you will have to deal with "past sins" for every new release. It can become a real nightmare to clean things up after broken deployments. Each iteration adds new, potential error sources (my fix for my failed fix, failed, etc...).
Runtime requirements for managed code and scripting custom actions are very error prone (.NET in particular in my opinion). A setup - if anything - should feature "minimum dependencies" - since it is used to install dependencies outright and should run on "any machine" in "any state" in "any language".
Apart from using built-in MSI constructs or WiX custom features, custom actions can often be avoided by minor changes to the application design so that complex custom actions are no longer needed (samples below).
Spend the time to find and review built-in MSI features and extensions from WiX, Installshield, Advanced Installer (and other vendor tools).
These extensions are made by the best deployment experts available, and have in most cases been tested by thousands of people (millions and billions in the case of built-in MSI features).
How can you conceivably match this with your own proprietary code? If the existing solution is good enough - which it normally is - then your own code is entirely added risk for no gain whatsoever. The ready-made solutions even support rollback - a notoriously neglected and hard-to-implement feature - that is actually required by design.
In other words: when you use the existing solutions, you borrow not just the implementation, but crucially also the QA, UAT and testing of the extensions when you use them. This can save weeks of work and contribute greatly to deployment stability.
All that is needed is to spend some time searching and reading up on what is already available, rather than jump-starting things with your own code. Heck, even your own manager could get involved searching... In a dream world.
A setup should generally not be coded, it should be declared! (think SQL). This is what Windows Installer is all about: you declare what to be installed, and the sequence is taken care of by the installer engine itself. Benefits result - when done right.
That's about it really. Read on if you dare! Here be dragons in the midst of serious ranting. And crucially: real-life experience from dealing with both setup development, and also large scale deployment of such setups in corporate environments. Common problems become clearer - they are more identifiable. And you may get burned with problems you really have to answer for - that you feel you could never have foreseen (doing corporate packaging and repackaging you get to see just how much can be wrong with a single package - it can take days to beat them into shape at times).
Please remember, and this is not meant as bad as it sounds: deployment is generally a low status technology endeavor, but a high profile failure. Errors really show. And someone will get to answer for it, and support will really feel it.
Debugging during development is hard, but for deployment it is sometimes nearly impossible since you generally have no access to the problem systems, and each problem system will be in a totally different state.
In all honesty: failing to get software deployed correctly to end users may be close to being the most expensive error to make in software development - nobody can witness the greatness of your solution - and it affects sales, support, marketing and further solution development.
Poor deployment can definitely sink a product. Unfortunately good deployment isn't enough to save a product.
Fleshed Out Ranting Version :-)
As stated above this section was split from an existing answer with broader scope: How do I avoid common design flaws in my WiX / MSI deployment solution? (an answer intended to help developers make better deployment decisions).
Erroneous or unnecessary use of custom actions.
A plethora of deployment problems can be caused by custom action use - most of them very serious. If your setup fails to complete or crashes, it is a fair bet that an erroneous custom action is at fault.
Accordingly, the obvious solution is to limit your use of custom actions whenever possible. Custom actions are (often) "black box" (hidden code), whereas most of MSI offers a lot of transparency. The MSI format can be viewed easily (it is a COM structured storage file), and everything that an MSI file does can essentially be deduced from its MSI database file - with the exception of compiled custom actions (script custom actions are still white box - you can see what is going on, unless they are obfuscated).
In this age of malware, these black box custom actions - that run with elevated rights - may become frowned upon in more ways than one. They are not just stability and reliability issues, but a full-blown security issue.
6.1 The Overuse of Custom Actions
Pardon the "trifles" listed below. A lot of this may be banality, but maybe use the arguments listed to make a case for yourself to avoid custom actions and to work on better solutions - usually involving smarter application launch sequence and application self-configuration. Trust us, application coding will be more interesting than dealing with Windows's Installer's complex conditioning, sequencing and impersonation for custom actions.
The below has been updated so many times that sections are a little bit out of sequence or repetitive here and there. Will be cleaned up as time allows.
Developers are good at coding - so with all due respect - you tend to overuse custom actions to do things that are better done using built-in MSI features or ready-made MSI extension solutions such as those available in WiX for advanced things such as XML file updates, IIS, COM+, firewall rules, driver installation, custom permissioning for disk and registry entries, modify NT privileges, etc... Such support is also found in commercial tools such as Installshield and Advanced Installer.
It is also possible to do a lot of what is done in custom actions as part of an application's launch sequence. The prime example is initializing user data and copying settings files to each user profile. There is a little summary of this issue here: Create folder and file on Current user profile, from Admin Profile.
It is obvious, but very often custom actions are used out of ignorance of what is already available "out of the box" by means of ready-made solutions. This happens to all of us all the time, doesn't it? There is always some smarter way to do things that would have saved you lots of grief? In general anyway - though sometimes you are breaking new ground. Don't break new ground in your setup - unless you absolutely have to! Save it for your application.
I want to emphasize that these built-in Windows Installer solutions and extensions from WiX and commercial tools have been written by the best deployment specialists available. And moreover, and even more importantly, they have been used and tested by thousands, millions - heck even billions of people for built-in MSI features. Do you really think you can do any better? Moral of the story: pick your battles and use your great coding skills to solve new problems, and let deployment be as dumb as possible. Use what already works, and don't reinvent the wheel. There are too many unknowns in deployment, too many variables that can't be controlled - you are dealing with "any machine" in "any state" and in "any language". See the Complexity of Deployment section here if you want examples - a bit down the page - just a short dump of all the ways your target systems may differ in their states when your package hits it. Each variable is a new bear trap for custom code - from OS version, to application estate to malware situation. The list goes on and on. As the conclusion reads in the linked content: "Deployment is a simple concept, with a complicated mix of variables that can cause the most mysterious errors - including the developer favorite: the intermittent bug. As we all know the seriousness of such bugs can not be overstated as they are often impossible to debug properly."
Certain advanced things must be done in your setup - since they require elevated rights and your application should not request this whilst running. This is what a setup is for, advanced, elevated system configuration - so embrace this complexity, but use ready made solutions! Don't roll your own script and solutions, use built-in, well tested stuff. Will your script run correctly in Korea? Will your custom action be blocked by a major anti-malware solution of caliber that you never had the time to test your script with? Do you have time to write your own check for whether a certain prerequisite runtime is installed on the computer - which works in all locales and across all OS versions? The potential for bugs here is staggering - and sometimes impossible to test properly. Do you have test machines in Arabic, Chinese, Korean or Japanese? Maybe you do. But do you have a terminal server to test on? How about a Korean terminal server? Did you test your setup with MSI advertisement? For advanced system management features to work, you have to make your setup as dumb and standard as possible. Spend several days looking for ready-made solutions before you write anything on your own I'd say. Remember, with ready-made solutions you are not just borrowing their code, but crucially their creator's QA, testing and UAT - which you can almost never hope to repeat.
Real-world testing is the only yardstick that matters - there is no substitute. Don't take on this world of pain! A slight change in Windows deployed via a Windows Update and your custom action breaks in any number of ways. Choose a better battle to make use of your skills. If you fight the design, Windows Installer fights back.
And there is a lot going on with deployment that makes it complex - and not dumb like we want it (just copy some darn files), your fight is to make it as simple as possible, but no simpler. Here is a list of deployment tasks showing why it is hard to get your package "dumb enough": What is the benefit and real purpose of program installation? Use only ready-made constructs whenever you can - it is the first easy win. All you need to do is to read and search a little.
Finally I will add that all custom actions are supposed to support rollback to put the system back to the original state if the install fails. In the real-world this is almost never done in an ad-hoc custom action (in my experience). Believe me it is complicated to deal with - I fear the word "rollback" as a Siberian husky fears the word "bath" after I implemented MSI rollback in C++. Not the most fun I ever had, but it eventually worked properly. If you want your setup to be without too many dependencies, C++ is the way to go, and as we all know it is not trivial to deal with. The ready-made solutions support rollback out of the box - it is an easy-win for only a bit of reading and searching to enable them. Complexity yes, but you are on more solid ground. And crucially: once an obscure error happens on a Japanese machine we can work for a community fix, or a third party vendor needs to sort it.
After all those "general observations", on a purely technical level, custom actions in Windows Installer are very complex with regard to implementation, scheduling, conditioning and rollback, and should hence be used only when absolutely necessary (often for early-adopter stuff). A further complexity is runtime requirements - for example dependencies on specific versions of the .NET runtime, or PowerShell, or the Installscript runtime (Installshield's scripting language - it had a runtime dependency; this used to be a problem but is largely solved now).
Errors from missing runtime requirements can be an enormous hairball to deal with - for zero gain. For this reason I only use minimal-dependency C++ DLLs or Installscript when making custom actions. It happens that I use VBScript or JavaScript for read-only custom actions in the user interface sequence. These just retrieve data and make no system changes. These are the only types of custom actions that are not that error prone. They should, however, be set to run without checking exit codes (to prevent them from triggering rollback or abort in a setup that is being run - often in major upgrade mode).
Also: Windows Installer hosts its own scripting runtime engine, so you know your active scripts (VBScript, Javascript, etc...) can run unless your target system is actually broken (there is no missing runtime on a normal system). This is in contrast to managed code custom actions - it is entirely possible for target systems to have no .NET runtime at this point (Jan 2018). Now this will change in the future, as .NET becomes a really built-in and required feature in Windows (or some minimal version of it hosted by Windows Installer itself). There are still problems with managed custom action code failing for strange reasons - such as the wrong .NET version being used to run it, or the wrong version of the CLR being loaded and being used, etc... I have limited experience here, but the problems are serious in my opinion. Eventually, though, we will all write managed code custom actions I think. I would still use C++ for worldwide distribution scenarios though.
PowerShell scripts are particularly hairy for custom actions as they apparently run out of process and can't access the MSI's session object (source: MSI expert Chris Painter). PowerShell also requires the .NET Framework to be installed. I would never use PowerShell scripts for custom actions - not even for internal, corporate deployment. Your call. Just as a "sample", here is an expert in the field stating his opinion (aging blog item, but still relevant if you ask me): Don’t use managed code to write your custom actions!
I tried to write a summary of the pros and cons of different custom action types. Frankly I am not too happy with it, but here it is: Windows Installer fails on Win 10 but not Win 7 using WIX. What am I not happy with? The recommendation of JavaScript, for one thing - I have used it seldom, and although it is a better language than VBScript - particularly for error handling - I have had lots of problems using JavaScript with MSI. It may be a case of the problem existing between the keyboard and the chair :-), but I think there are some real gotchas with JavaScript as well. Try to avoid complex use of scripts. For simple things like retrieving properties they seem to work OK for me. However, the experts in the field are merciless on script custom actions: Don’t use vbscript/jscript to write your custom actions! (Aaron Stebner), VBScript (and Jscript) MSI CustomActions suck (Rob Mensching - WiX benevolent dictator).
Let me also add that custom action complexity is "silly, conspiratory complexity" not fun-to-deal-with stuff. It bites you. Gotchas. You will discover all the mistakes you have made in due course - as you move along (and then you have some explaining to do) - they will often not be immediately obvious (problems suddenly arise when upgrading, patching, uninstalling, your custom action runs erroneously during repair because it is not conditioned properly and wipes out certain registry settings, silent install doesn't work properly - it leaves the install incomplete because the custom action only exists in the user interface sequence, there are runtime errors on Windows XP that you never tested your custom action code with, there are broken things on target systems that you didn't shield yourself from, someone calls you and tells you your package doesn't work with active directory, your package can't be advertised, it fails on all Korean and Japanese systems, self-repair triggers runtime warnings and access denied for normal users, etc...). You will have no time to fix this properly once it hits you, and you will likely have to proceed with sub-standard solutions to get yourself ahead over the next hurdle. And no, I didn't experience all of this myself. In fact I discovered most of these problems when fixing up third party vendor MSI files for corporate deployment. You really discover a lot of potential error sources when you see, compare, review and adapt hundreds of packages to a corporate standard for large scale deployment. In all honesty there are serious shortcomings and errors in all but the simplest of packages. And obviously you get to see what you did wrongly when you made such vendor setups a few years earlier. And guess what tops the list? Overuse of custom actions when built-in constructs were available.
And to add to it all, the inherent complexity of deployment with its requirement to work on any machine, anywhere in any state, there is the issue of MSI technology borderline anti-patterns. Parts of the MSI technology that are causing repeated deployment problems because they are poorly understood, and sometimes fragile in real-world use. There is a brief summary of this in this answer (towards the bottom): How to make better use of MSI files (along with a list of the very important corporate benefits of MSI - and a random dump of common technical problems seen in real-world packages). Particularly the latter link has struck me as less than ideal after I wrote it. Take it for what it is: messy, real-world advice without much else going for it. It identifies too many problems without showing much in the line of fixes in places.
A substantial proportion of a software's support calls tend to come from problems seen during the deployment process. As the story goes: "...failing to properly install your great software may be close to being the most expensive error to make in software development. You can never hope to sell a software that was never possible to test" (from one of the linked answers above). A bad custom action is very often behind such problems.
The most common unnecessary custom action uses we see are in my opinion:
You install Windows Services via custom actions. This is much better done in the MSI itself using built-in constructs.
You install .NET assemblies to the GAC via a custom action. This is fully supported by Windows Installer itself without a line of (risky) code.
You run custom .NET assembly installer classes. These are to be used for development and testing only. They should never be run as part of deployment. Rather your MSI should use built-in constructs to deploy and register your assembly.
You run prerequisite setups and runtime installers via a custom action in your own MSI. This should be done entirely differently. What you need is a bootstrapper that can launch your setup and its prerequisites in a sequence. Commercial deployment tools such as Advanced Installer and Installshield have features for this, but free frameworks such as WiX have support via the Burn feature, and there are also free GUI applications such as DOTNetInstaller (untested by me) with these kinds of features.
Please take my word for it in this particular case: running embedded setups via a custom action will fail eventually - usually rather immediately. MSI is too complicated in its scheduling, impersonation, transaction, rollback and overall runtime architecture to make this possible. Only one MSI transaction can run at a time by design, and this makes things very complicated. It used to be you could run embedded MSI files as a concept in MSI, but this was deprecated - it didn't work properly. You must run each setup in sequence. The correct sequence. There are bootstrappers / chainers available that will allow you to define such an installation sequence. Here is an answer which describes some of them (don't let the question title throw you off - it is about bootstrappers / chainers and other deployment considerations): Wix - How to run/install application without UI.
6.2 Custom Action Alternatives
The ready-made solutions found in WiX and other tools such as Installshield and Advanced Installer, are tried and tested, and crucially also implement proper rollback support - a feature almost always missing from custom action implementations - even in otherwise good vendor MSI setups. Rollback is a very important MSI feature. You cannot compete with the quality delivered from a large user community with active use and testing in all kinds of environments. Make use of these features.
Apart from using built-in MSI constructs or WiX custom features, custom actions can often be avoided by minor changes to the application design so that complex custom actions are no longer needed. I see licensing as an example of how deployment can be simplified to improve reliability.
This is a very important point, and one that is very often ignored. Sometimes some quality hours or days of coding and some quality UAT / QA time can prevent long term deployment problems from ever existing. No amount of support time can solve the worst deployment problems you can create.
This answer provides a blurb about the overall complexity of deployment: Windows Installer and the creation of WiX (towards the bottom). Deployment is a complicated "delivery process" with several serious challenges: 1) the cumulative nature of deployment errors, 2) the difficulty of proper debugging, and 3) the virtually unlimited range of external factors and variables affecting target machines world-wide - the conclusion is that deployment must be made as simple as possible to be reliable. There are so many intervening factors leaving you with the developer favorite: the intermittent bug. The link above is recommended for a more fleshed out explanation.
6.3 Advanced Custom Action Problems (beyond runtime failures)
A very common custom action issue is incorrect scheduling and attempting to modify the system from "non-elevated", immediate-mode actions. These are advanced MSI design flaws that are not immediately recognizable to people who see an otherwise working setup.
Never insert immediate mode custom actions after InstallFinalize in the InstallExecuteSequence of your MSI that attempt to modify the system (read/write). These actions will never run if your setup is deployed in silent mode via deployment systems such as SCCM (again, last time I checked).
Never try to change the system using immediate mode custom actions activated from the setup GUI. This will appear to work for admin users, but immediate mode custom actions are not elevated and hence regular users will not be able to run them without error. In addition any changes to the system done from the MSI GUI will never be done when the setup is run in silent mode (then the whole GUI sequence is skipped). This is a very serious MSI design flaw: installing silently and interactively causes different results. This issue is mentioned in section 10 on silent install as well in this answer: How do I avoid common design flaws in my WiX / MSI deployment solution?. Silent install is a crucial feature for corporate deployment of your software. In corporations with control of their deployment, ALL deployment is done silently. Your setup GUI is simply never seen. There are only a few exceptions I can think of, such as certain server deployments that may be done interactively, but all desktop and client software is deployed silently. Not supporting silent install properly is hence a horrendous deficiency and design flaw of your MSI setup. Please heed this advice. It is crucial for corporate acceptance of your software - I have recommended certain software to be thrown out from the standard system because its deployment is so dangerous for the system that we just don't want to deal with it. Virtualization and sandboxing (APP-V - virtual packages - or full on virtual machines) are big today precisely because of these issues (misbehaving applications and setups).
All custom actions that make changes to the system must be inserted in the "transacted section" of the installation sequence. This Windows Installer Transaction (think database transaction commit) runs between the standard actions InstallInitialize and InstallFinalize in the InstallExecuteSequence and runs with elevated rights. All changes to the system are to take place in this transaction - anything else is erroneous (but unfortunately quite common). All custom actions inserted here should have a corresponding rollback custom action implemented as well, and this should undo any changes done to the system in case the install fails and gets rolled back.
The idea that custom actions should be limited comes from the fact that it is easy to write a poorly implemented custom action. A poorly implemented custom action is one that, in the event of installation failure, cannot roll back the changes already made to the system (installing a Windows service, for example).
That being said, if a custom action is written such that it has a complementary rollback custom action in place, then I don't think it's a problem.
The pattern I like to follow is that every "custom action" I think I need (for example, FoobarCustomAction) actually consists of three custom actions (a native-code sketch of the deferred/rollback halves follows this list):
FoobarCustomActionImmediate - This is an immediate custom action that prepares 2 payloads:
First, a payload to pass to FoobarCustomActionDeferred, which dictates the system altering changes to make.
Second, a payload to pass to FoobarCustomActionRollback, which dictates how to rollback the system altering changes.
FoobarCustomActionRollback - This is a rollback custom action that is scheduled before FoobarCustomActionDeferred in sequence, that uses the payload provided by FoobarCustomActionImmediate to rollback the changes made by FoobarCustomActionDeferred in the event of a critical failure.
FoobarCustomActionDeferred - This is a deferred custom action that makes all of the system altering changes to the system.
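As a hedged native-code sketch of how the deferred and rollback halves read their payloads (the function names follow the Foobar placeholders above; the payload format, parsing, and the actual system change are elided since they depend on what the action does):

    #include <windows.h>
    #include <msi.h>
    #include <msiquery.h>
    #include <string>

    // Deferred actions cannot read normal properties; the immediate action must
    // serialize everything they need into the "CustomActionData" property that
    // Windows Installer exposes to them under the action's own name.
    static std::wstring GetCustomActionData(MSIHANDLE hInstall)
    {
        DWORD cch = 0;
        wchar_t probe[1] = L"";
        MsiGetPropertyW(hInstall, L"CustomActionData", probe, &cch); // query length
        std::wstring data(cch, L'\0');
        ++cch;                                                       // room for the null
        MsiGetPropertyW(hInstall, L"CustomActionData", &data[0], &cch);
        return data;
    }

    // FoobarCustomActionDeferred: applies the system change described by its payload.
    extern "C" UINT __stdcall FoobarCustomActionDeferred(MSIHANDLE hInstall)
    {
        std::wstring payload = GetCustomActionData(hInstall);
        // ... parse payload and make the change (e.g. write a config file) ...
        return ERROR_SUCCESS;
    }

    // FoobarCustomActionRollback: undoes the change, driven by its own payload.
    extern "C" UINT __stdcall FoobarCustomActionRollback(MSIHANDLE hInstall)
    {
        std::wstring payload = GetCustomActionData(hInstall);
        // ... parse payload and restore the previous state ...
        return ERROR_SUCCESS;   // rollback actions should not themselves fail hard
    }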
Like many other technologies, I think there is always a way to write poorly maintainable code in WiX. Take the example above: if this is a custom action that I want to use in many different installers, I should provide the developers with a .wxs file containing a fragment that ensures all the needed custom actions are referenced and sequenced correctly. If instead the WiX code is simply copy-pasted carelessly across many different installers, the chance of the custom action being implemented incorrectly increases. It's our job as developers to help consumers of our code fall into the pit of success, and I think WiX provides for that through careful use of fragments and wixlibs.
But the developer of the installer has to care about such things; WiX does not enforce it!

NSBundle sandboxing in Objective-C

Odd title, I know. What I REALLY mean to ask is something like "can I create a TrustedBSD MAC framework or iOS-style sandbox for NSBundles?" I have a very dynamic application in which modules can be "plugged in" or "unplugged" from a "switchboard" at any time. The user operating it is on another computer, perhaps miles away. There's no way for users to know whether the switchboard modules are compliant and won't wreck the system in their absence. I know iOS sandboxing and TrustedBSD (as well as Lion's Seatbelt.framework) can do this for processes, but how can I do this for bundle code?
Things I've thought of:
Static binary analysis of all POSIX calls and ObjC messaging (seems impossible, as binaries are... well, binary at this point, not source code).
Tracing ObjC messages at runtime, logging them, and locking the module out after a single stray call. (This wouldn't cover POSIX calls or any C calls for that matter, nor assembly, and at minimum one stray call would already have been sent.)
Creating a sandboxed external process using XPC, loading modules there, and using PDO to route switchboard calls to that process (very doable, but I need Snow Leopard compatibility here).
Any ideas? I realize that a bundle, once loaded, ends up as PART of your application, so this adds further problems to implementing any security.
The instant that you load a bundle into your process, you are giving it full control over your process. (This is because a bundle can contain code that runs at initialization time, before you even call any of its functions.)
Tracing messages at runtime is pointless, as a malicious bundle could simply disable the code responsible for message tracing before it fires. (It's running in the same memory space, after all, and it needn't use any ObjC messages to do this.)
In short: do not load bundles you do not trust. If you must make use of an untrusted bundle, use some form of sandboxing (whether that's XPC or something else).

VB: get compiled DLL's calling application info; COM security

Through COM, one can potentially gain absolute control over a target system. For example: using JavaScript's ActiveXObject object in IE, one can create certain objects which were designed to have direct access to, or interaction with, system properties and files. One would think common sense dictates users disable ActiveX features in IE immediately after installing the browser to ensure their system is protected while surfing the net, or at least pay close attention to which websites they permit. But I doubt many average PC users know how or why to do this, or they just get tired of micro-managing it over time. I think any PC user or admin my COM class caters to would greatly appreciate not having to deal with that. Thankfully, it looks like IE versions come packaged with ActiveX disabled by default nowadays.
I've built a very versatile COM class library in VB. I didn't intend for it to be callable from any website, but that capability is just part of the COM platform. I'd like to prevent the library from being called from IE unless the website is on a white-listed domain, to proactively protect the user (and ultimately their entire intranet) from harm from malicious websites. What would be the best method in VB.Net to tell which application called my DLL, so I can tell whether it was called from any command or process originating from IE? And what domain called my DLL?
Edit: I believe this might be a duplicate. See: Calling Assembly to get Application Name VB.NET
System.Environment.GetCommandLineArgs()(0) gets me the calling application path. With this info, I can compare it to a black/white-list of applications. Problem solved for now.
Don't mark the control as Safe for Scripting.
Default security settings will not allow such controls to be scripted.
Self-answer, and possibly duplicate, I suppose. See System.Environment.GetCommandLineArgs()(0) from Calling Assembly to get Application Name VB.NET
In this case, the class never was marked as safe for scripting and the intent was already never to mark it safe. The issue was how to obtain the calling application info so I could add additional security measures in case those which the calling application had were not enough.

Is this a reasonable "Application entry point"?

I have recently come across a situation where code is dynamically loading some libraries, wiring them up, then calling what is termed the "application entry point" (one of the libraries must implement IApplication.Run()).
Is this a valid "Appliation entry point"?
I would always have considered the application entry point to be before the loading of the libraries and found the IApplication.Run() being called after a considerable amount of work slightly misleading.
The terms application and system are terms that are so widely and diversely used that you need to agree what they mean upfront with your conversation partner. E.g. sometimes an application is something with a UI, and a system is 'UI-less'. In general it's just a case of you say potato, I say potato.
As for the example you use: that's just what a runtime (e.g. .NET or java) does: loading a set of libraries and calling the application entry point, i.e. the "main" method.
So in your case, the code loading the libraries is doing just the same, probably calling a method on an interface; you could then consider the loading code to be the runtime for that application. It's just a matter of perspective.
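For illustration only, this is roughly what such "loading code" boils down to on Windows (the DLL name app_impl.dll and the exported symbol application_run are placeholders for whatever wiring resolves IApplication.Run() in your codebase):

    #include <windows.h>
    #include <cstdio>

    // The host is, in effect, a tiny runtime: it locates the library, resolves
    // the agreed-upon entry symbol, and only then hands over control - which is
    // what an OS loader or the .NET/Java runtime does before your "main" runs.
    int main()
    {
        HMODULE lib = LoadLibraryW(L"app_impl.dll");          // placeholder name
        if (!lib) { std::fprintf(stderr, "LoadLibrary failed: %lu\n", GetLastError()); return 1; }

        using entry_fn = int (*)();
        auto run = reinterpret_cast<entry_fn>(GetProcAddress(lib, "application_run")); // placeholder symbol
        if (!run) { std::fprintf(stderr, "entry point not found\n"); FreeLibrary(lib); return 1; }

        int rc = run();    // the "application entry point" from the host's point of view
        FreeLibrary(lib);
        return rc;
    }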
The term "application" can mean whatever you want it to mean. "Application" merely means a collection of resources (libraries, code, images, etc) that work together to help you solve a problem.
So to answer your question, yes, it's a valid use of the term 'application'.
The word "application" on its own actually means nothing. It is often used to talk about computer programs that provide some value to the user. A more correct term is application software, which has the following definition:
Application software is a subclass of computer software that employs the capabilities of a computer directly and thoroughly to a task that the user wishes to perform. This should be contrasted with system software which is involved in integrating a computer's various capabilities, but typically does not directly apply them in the performance of tasks that benefit the user. In this context the term application refers to both the application software and its implementation.
And since application really means application software, and software is any piece of code that performs any kind of task on a computer, I'd say also a library can be an application.
Most terms are of an artificial nature anyway. Is a plugin not an application? Is the Flash plugin in your browser not an application? People say no, it's just a plugin. Why? Because it can't run on its own; it needs to be loaded into a real process. But there is no definition saying only things that "can run on their own" are applications. The same holds true for a library. The core application could just be an empty container, and all logic and functionality - even the interaction with the user - could be performed by plugins or libraries, in which case those would be more of an application than the empty container that just provides some context for the application to run. Compare this to Java. A Java application can't run on its own; it must run within a Java Virtual Machine (JVM). Does that mean the JVM is the application and the Java code is just... well, what? Isn't the Java code the real application, and the JVM just an empty runtime environment that provides nothing to the end user without the loaded Java code?
I think in this context "application entry point" means "the point at which the application (your code) enters the library".
I think probably what you're referring to is the main() function in C/C++ code or WinMain in a Windows app. That is, it's the point where execution is normally started in an app. Your question is pretty broad and vague--for example, which OS are you running this on--but this may be what you're looking for. This might also address the question.
Bear in mind when you're asking questions, details are your friend. People can give you a much better, more informed answer when you provide them with details.
EDIT:
In a broader context, consider what has to happen from the standpoint of the OS. When the user specifies that they want to run an app, the OS has to load the app from the hard drive, and then, once the app is loaded into memory, it has to pass control to some point in the memory block occupied by the newly loaded app to continue execution. That would be the "application entry point". When an app is constructed with dynamically linked code, the OS has to load all that dynamically linked code in order to get the correct app image into memory. Loading up those shared bits of code does not change the fact that the OS must have a point to which to pass control when the app is loaded into memory.