OOM error: Mule 3.7 community standalone, Object to JSON transformer is shown in the leak suspects

A Mule app using the 3.7 community standalone version has crashed due to an Out Of Memory error. Analysing the heap dump, the suspect looks to be one of the transformers widely used in the code. The out-of-the-box "Object to JSON" transformer is reported in the "leak suspects" with over 10k object references at the time of the crash, holding ~90% of the heap.
Reading about the scope of transformers in the Mule docs, transformers have prototype scope and are cached/pooled, but are definitely not singletons.
From Mule 4 onwards the recommendation is to use DataWeave instead of transformers; DataWeave can swap the payload to a file (off-heap) if it cannot be accommodated in memory.
Is the only possible solution to benchmark the app at peak load and adjust the JVM heap settings? I cannot find any configuration that controls the indefinite growth of transformer objects. Please share some guidelines if anyone has experienced the same issue, as I am not a Mule expert.

Increasing the heap is only useful when the application requires more memory than is assigned. If it is leaking memory, it usually gets to the point where no heap is enough, though in some cases a bigger heap can partially mitigate the problem, for example if it makes the application crash every few days instead of every hour. But it is not a full solution and might not work at all.
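For reference, if you do end up tuning the heap: in a Mule standalone installation the JVM memory settings normally live in conf/wrapper.conf (the Tanuki Java service wrapper). A minimal sketch only, assuming the default wrapper layout; <n> is a placeholder for the next unused property index in your file:

    # conf/wrapper.conf (Tanuki wrapper): JVM memory for Mule standalone.
    # Initial and maximum heap, in MB (equivalent to -Xms / -Xmx):
    wrapper.java.initmemory=1024
    wrapper.java.maxmemory=2048
    # Capture a dump automatically on the next OOM, for offline analysis:
    wrapper.java.additional.<n>=-XX:+HeapDumpOnOutOfMemoryError
    wrapper.java.additional.<n+1>=-XX:HeapDumpPath=/opt/mule/dumps

That does not fix a leak, but it guarantees you a heap dump from the actual crash instead of whatever you manage to capture manually.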
The thing is, memory leaks depend on several factors. They are often complex to troubleshoot and require a lot of information: several heap dumps, logs, access to the application, some understanding of how the application is implemented (and in this case of the Mule runtime too), and many other things. In a few cases the cause and the fix can be easy to identify.
In this case you haven't provided enough information to start a full troubleshooting effort. A Stack Overflow question is not an adequate venue for this kind of activity either.
I can, however, offer the following advice:
You are using a very old version of Mule that is not even supported in its Enterprise Edition. Mule 3.7.0 was released around 2015.
Community editions lack patches. 3.7 has a lot of known issues that were patched in later releases and versions, including some major security vulnerabilities reported over the years.
A possible leak related to what you describe was reported against Mule 3.7. Because of the previous points, you would need to upgrade to get the fix.
My recommendation is to test and migrate to Mule 3.9, where at least some of these issues were resolved. If possible, skip Mule 3 completely and migrate directly to the latest Mule 4 available; however, that will require a rewrite of the application. There is a community tool called Mule Migration Assistant that you can use to help start the migration to Mule 4.
Even if you stay on Mule 3.7, migrating the transformation to DataWeave may resolve the issue, since it has a different implementation. Having said that, Mule 3 Community Edition doesn't support DataWeave, so it would not be usable in your installation.
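For readers on Enterprise Edition, the swap looks roughly like this in a Mule 3 flow (a sketch only, using DataWeave 1.0 syntax, with the json/dw namespace declarations omitted; again, not available in CE as noted above):

    <!-- Before: the out-of-the-box transformer reported as the leak suspect -->
    <json:object-to-json-transformer doc:name="Object to JSON"/>

    <!-- After (EE only): the same conversion expressed in DataWeave 1.0 -->
    <dw:transform-message doc:name="Object to JSON">
        <dw:set-payload><![CDATA[%dw 1.0
    %output application/json
    ---
    payload]]></dw:set-payload>
    </dw:transform-message>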

Related

Understand Heap-dump and Thread-dump for Large-scale application

I went through some tutorials on Java profilers (JVisualVM, JProfiler, YourKit) on YouTube as well as Pluralsight. I got a bit of an idea of how to inspect a heap dump and how to find a memory leak, but all these tutorials are elementary.
My query is: when I analyse a heap dump, I see only three types of objects, char[], java.lang.String and java.lang.Object[], which cover almost all the memory (always more than 70%), but none from my application.
The same goes for the thread dump: I see HTTP-8080 threads (the port I am using), and they lead me to Runnable's run method or the java.util.concurrent package, and again not to any code specific to my project.
I also discussed the problem with some of my friends and analysed their applications as well (which don't face any memory-leak or performance issues), but their results are almost the same.
Could you help me understand how to analyse heap dumps and thread dumps in JVisualVM for a large-scale application? Any video, blog, anything would be helpful.
I am using OpenJDK 11, AWS ECS (Docker) and Tomcat as the web server.
Check out the Eclipse Memory Analyzer (https://www.eclipse.org/mat/). I have used it several times in the past to successfully find memory leaks, but it takes some time to get familiar with it.
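One hedged pointer on the char[]/String/Object[] symptom: those types are almost always just the leaves, since strings dominate most Java heaps. What matters is who is holding them, so in MAT open the dominator tree, or right-click the String/char[] class and use "Merge Shortest Paths to GC Roots" (excluding weak/soft references); the retaining chain usually ends in an application or framework class you will recognise. A minimal sketch for capturing a dump from OpenJDK 11 (jcmd and jmap ship with the JDK; pid and paths are placeholders):

    # Capture a heap dump with jcmd:
    jcmd <pid> GC.heap_dump /tmp/app-heap.hprof
    # Alternative with jmap; 'live' forces a full GC first so the dump
    # contains only reachable objects:
    jmap -dump:live,format=b,file=/tmp/app-heap.hprof <pid>

Then open the .hprof file in MAT rather than JVisualVM; its leak-suspects report and retained-size views are far better suited to large dumps.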
Another piece of advice I can give you is to create benchmark tests with Apache JMeter (https://jmeter.apache.org/) or another tool that lets you reproduce the performance/memory issue and identify the execution path that causes your problems.
Be aware that AWS doesn't like it when someone executes performance/penetration tests against their services (https://aws.amazon.com/aup/).

Are applications dependent on the environment where they were compiled?

We are getting a System.BadImageFormatException from our MSI installers. I have already read about target frameworks, but we already checked and it's targeting the correct framework (.NET Framework 4.5, same as our QA machines).
We have exactly the same source code, but the MSI installer compiled by our build team fails, while the MSI installer compiled by us (dev) works. The question is: does the environment where an application was built and compiled affect the output (for example, MSI installers)?
There are basically two reasons for this error:
A cross-architecture call from 32-bit code to 64-bit (or vice versa). Different architectures require different MSI setups (see Heath Stewart's blog), so everything in a 32-bit setup (especially managed custom action code) should be explicitly 32-bit, and explicitly 64-bit in a 64-bit install. For example, when an x64 system encounters AnyCPU code it might load the x64 runtime, and then a reference to a 32-bit assembly will fail with this error.
An attempt to load the "wrong" .NET Framework runtime. The .NET 4 runtime is somewhat backwards compatible, so you are most likely to get this error when code expecting the .NET 2 runtime encounters a .NET 4 engine. The devil is in the details here, but again, this is much like the architecture issue: if anything loads the .NET 2 runtime and the calling sequence tries to make a .NET 4 assembly run in the 2.0 framework, it will fail with this message.
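One way to check for the first cause is to compare the bitness baked into the assemblies produced by the two build environments. A sketch using corflags.exe, which ships with the Windows/.NET SDK (the file names here are hypothetical placeholders for your custom action assemblies):

    corflags BuildTeam\CustomAction.dll
    corflags Dev\CustomAction.dll

    REM Relevant output fields:
    REM   PE        : PE32 = 32-bit image, PE32+ = 64-bit image
    REM   32BITREQ  : 1 = must run as 32-bit (x86)
    REM   32BITPREF : 1 = AnyCPU, 32-bit preferred

If the two builds disagree here, a build-environment difference (for example a different PlatformTarget in the build configuration) is the likely culprit.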
Having said that, it's not clear exactly how you are calling the managed code, whether through DTF or something else (such as the Visual Studio InstallUtilLib mechanism). And finally the machine you build on makes no difference to the eventual runtime environment. It's no different from a code file which will work on one machine but fail on another because (for example) it can't find the C++ runtime. The issue isn't the build machine, it's the environment of the target machine.
Some Suggested Debugging Steps
So is it the actual MSI file which triggers these errors, or the application after installation?
Below are some thoughts and questions to consider when trying to debug issues such as these (in no particular order). My bet is on issue 3 in this first list:
Does this exception occur as you run the MSI itself (or is it a setup.exe?), or as you try to launch the application after installation? Just to verify - I assume the MSI.
Do you have custom actions in the setup? If you have managed code custom actions in your MSI, what platforms do you target in your build? Any CPU I would presume? Please verify. I think there are some issues with COM-interop here, but I am fuzzy on the details. Sometimes you may have to pick a specific platform. In this case you can get such error messages (bad image). See section "Managed Code" below for a whole "rant" about managed code and deployment - and some problems that may result.
Regardless of the above, in your WiX source file, what is the value of the Platform attribute (set on the Package element in WiX 3)? Possible values: x86, x64, intel64, intel, arm, ia64. Please report (and try to test with other values as well - x86 and x64, for example); see the sketch after this list. This affects the MSI's platform setting. If you don't use WiX, open the compiled MSI file and check the summary stream for the Platform setting. Using Orca this is View => Summary Information... - look for Platform.
Do you have any malware scanners, security software or other "potential blockers" for MSI compilation and/or installation on your build computer? Or on the test system where you try to install? (We must always mention these issues - people can waste days if we don't - even if they are rarely the actual cause.)
Is this a localized MSI using Asian characters? (Or Arabic, or any other complex character set?) I only mention this - frankly I don't see how it is 100% relevant, but I want to clarify this "variable" for your scenario (i.e. can we eliminate it as a potential error source?). It would generally cause runtime errors, not System.BadImageFormatException issues - I believe.
I assume a compare of the different MSI files may not work because one of the files is a "bad image"? Did you try? Maybe it is still a valid COM structured storage file - but the msiexec.exe engine can't handle it? If it is, then tools may be able to read the content inside the file just fine - I don't know, give it a try.
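As referenced in point 3 above, here is a minimal sketch of where the platform setting lives in a WiX 3 source file (names and GUIDs are placeholders):

    <!-- Fragment of a .wxs file: in WiX 3 the Platform attribute sits on Package -->
    <Product Id="*" Name="MyApp" Version="1.0.0.0"
             Manufacturer="MyCompany" UpgradeCode="PUT-GUID-HERE">
      <!-- Platform="x86" (the default) or "x64"; compiled into the MSI summary stream -->
      <Package InstallerVersion="200" Compressed="yes" Platform="x86" />
    </Product>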
For your scenario: I initially thought a single, compiled MSI behaved differently in different locations (environments) - and hence suggested to check for any damage in transit (network issues, samba issues, storage issues, malware issue, etc...) by doing a binary diff on the copies in the different locations (bit-level comparison). Since you seem to compile two (or more) MSI files from the same sources, such a binary compare is obviously meaningless. Differences are certain.
However, a "content compare" could tell you something - this compares actual content in the tables / streams inside the MSIs. I think I will add a Q/A on how to compare MSI files that I can link to from here (added: How can I compare the content of two (or more) MSI files?). This presumes that the MSI is readable - even if it is not runnable. Only way to know is to try.
I hope and believe that the above list should help you sort out your problem.
I wrote myself off a cliff below on the subject of managed code issues. The idea was to describe a couple of issues to check, but it became a long discussion. I may delete all the stuff below and perhaps resurrect it elsewhere. It may not be relevant for you at all. The overall topic is managed code and how it can crash in new and "interesting" ways:
Managed Code
This is another one of those sprawling answers that got out of hand. I think it still has value, so I am leaving it in.
A couple of further issues with regards to .NET custom actions (managed code). I am far from an expert on this topic, since I shun them like the plague (for now - this may change over time).
Some of this veered quite a bit off topic - for your purpose - but I will leave it in as general comments on managed code for MSI use.
MSI expert Chris Painter is the man for this topic - he has taken on this potential "world of pain" and seems to benefit from such custom actions too, but managed custom actions are generally accepted to be problematic - if you approach them in a naive way. Be pragmatic and weigh the benefits against the potential problems listed below.
A friendly piece of advice: for worldwide distribution I would never use managed code as of now - though it is "getting safer" - we have to admit that. There are too many potential error sources for a large scale distribution MSI package using such custom actions (home users may uninstall .NET, corporate users may see versions of .NET disabled, and the whole list of problems below, and I fear "catch 22" uninstall problems more than anything - a whole section on this below, etc...).
As I said, I am not an expert, but there are many serious problems. Maybe Chris can correct me if they have been sorted out by now. The DTF framework (distributed along with WiX) features support for embedding a managed custom action dll inside a regular Win32 dll wrapper. This helps reliability. I will dig up a few links here for reference. Chris has been a pioneer and early adopter here.
Partial list of managed code problems for custom action use:
The .NET framework may be missing, disabled or corrupted (entirely or in the version requested / needed for your code). Now, what if all your 3000 corporate packages have a .NET dll with managed code embedded in them? They can't even uninstall in this case - much less upgrade. More below in issue #5.
When targeting different versions of the .NET runtime with different custom actions, all will load the same CLR version. So they tell me (I could not believe it whilst reading it - please read it!). Enough for me to run for the hills :-). "This can blow up in any number of ways" is what I hear myself think. Apply suitable paranoia accordingly! The resident evil of all things rears its ugly head again - etc... Seriously, don't listen to paranoia, but be on the alert for serious problems. Is this problem manageable? I guess I would have to say yes, but it is not a problem to ignore. Serious UAT / QA is needed on many different OS and .NET versions. Would a native dll do better? I think so.
Components installed to the GAC cannot be used as dependencies for your managed custom actions in the setup (chicken or the egg, I suppose). This has to do with the commit models of Fusion / MSI.
Bob Arnson has commented on this - check it out (he is on the core WiX team). I don't know if this still is his top issue with managed code - along with rollback.
Small digression: I have read Arnson stating that VBScript actions are worse than managed code (Painter certainly agrees, and definitely the WiX boss himself Rob Mensching - blog). I think this is true for just about all cases, but not for corporate application package scenarios (which I have experience with) - or ad-hoc testing that will never be used in production (quick and easy).
I describe the reasoning behind this here (pragmatic issue): Windows Installer fails on Win 10 but not Win 7 using WIX (essentially anything compiled adds a source control problem in the real, chaotic world - and corporate packagers have to pick up each other's work on the fly and a fully embedded, transparent source file in the MSI saves the day - all the time, and there is a skill set issue as well, and there is more...).
I do not recommend VBScript for anything but corporate use in controlled environments (standardized workstations). VBScript is not good enough for public, worldwide MSI releases in any shape or form. They can work for read-only custom actions returning no error codes and set to ignore all runtime errors, but no - there are better ways.
UPDATE: I can add that in a pinch I would use VBScript in read-only custom actions in the GUI sequence (just a property-setter script) in order to get rid of the .NET framework as a dependency altogether; a tiny sketch of such an action follows below. The time will come when the .NET framework is on all target machines, but it is not quite there yet (and even if it is there, it could be broken). Windows now actively ensures ActiveScript is always available - and MSI hosts its own ActiveScripting runtime - so scripts will run, but you could still easily mess up the code yourself and make the custom actions horrendous.
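A tiny sketch of the kind of read-only property setter meant above, expressed as a WiX 3 script custom action (the Id, property name and path logic are made up for illustration; it reads state and sets a property, and changes nothing on the system):

    <!-- WiX 3: an immediate-mode, GUI-sequence, read-only property setter -->
    <CustomAction Id="SetMyAppDir.Hypothetical" Script="vbscript">
      <![CDATA[
        ' Reads an existing property and derives another - no system changes.
        Session.Property("MYAPPDIR") = Session.Property("ProgramFilesFolder") & "MyApp\"
      ]]>
    </CustomAction>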
I should add that my recommendation to use Javascript over VBScript in the link above will be removed soon. Javascript has proved just as bad as VBScript in practical use, with some added snags that are too detailed to go into. The enhanced exception handling offered by Javascript does not make up for the fact that the MSI API seems to have been tested with VBScript during its development. Javascript was probably not, and hence has a few clunky issues when working with the MSI API that are not immediately apparent. I have wasted costly time on this - I would recommend you don't waste yours.
I also use scripting for testing, prototyping and debugging my MSI packages (to debug property settings, app searches, override command lines for testing, etc...). I find this the quickest way (who wants to compile something ad-hoc for this?). Just don't roll with your script test code for release! If using Installshield I use Installscript for such "scripting".
And for the future: one good use for managed code would be embedding it directly in the MSI, in inspectable (and reusable) form - making custom actions white-box - with full source embedded and with full code access security too, making them unable to run with unrestricted elevated rights. Just thinking about what could come - let us see what you are doing in this custom action of yours?
To elaborate on issue 1, managed code may hard-code a certain .NET runtime version that is not available. I guess this is probably the easier problem to deal with? Correct me if I am wrong, Chris. I am just a dabbler with this. Setting "latest version" could still cause issues though...
Let me add a pet peeve of mine as well: if a managed custom action fails during uninstall due to a corrupt .NET framework (or for any other reason - focused on managed code issues - for example a design / security change in Windows itself from Windows Update), you can't uninstall, and therefore cannot (major-)upgrade your existing installation. A serious catch-22 in my opinion. Try this if you have 3000 live packages and thousands of desktops to manage and the dll is embedded in each MSI...
Creating custom action code of any kind that triggers errors on uninstall / upgrade was my big fear when making a C++ custom action dll as well - so it is not unique to managed code. A classic error is to set custom actions after InstallFinalize or in the UI sequence to "check exit code" - and a trivial error returned causes full rollback of a major upgrade. A classic "catch-22": now you cannot upgrade without fixing the problem in the old product's uninstall sequence.
Despite this being a general problem for all custom action code, I feel the risk is heightened quite a bit with managed code. What if some weird policy change to the .NET framework makes all packages in a large corporation un-uninstallable and un-upgradeable since they all have embedded the same problematic custom action dll? Or worse yet, it is a Windows design change that you can't roll-back?
A contingency should be available in such cases. This is the core reason I stay away from managed code entirely - I like down to the wire better - fewer layers to depend upon. Minimal dependencies, minimal entanglements (no imperial entanglements). If a minimal dependency C++ dll does not run, then the core of Windows is generally broken and the system needs a rebuild in most cases anyway. For .NET custom actions you would minimally have to fix the .NET framework (which might be easier than I think - for all I know - don't think so though).
I was looking at ways to make the DLL external to all corporate packages in a pre-requisite package (ideally with a minimal, baseline, embedded DLL in the setup itself as well - if the external DLL is missing / not found). The idea being that an external DLL is preferred once available, and upgradable for all packages by a single, updated "prerequisite package". All 3000 packages fixed - all at once?
I never got around to determining the technical feasibility of this. Bear with me, I am getting off topic for your purpose. If the WiX guys are reading - what are the technical possibilities here off the top of your head? Essentially I am expecting to hear "impossible" - and then I am done with it. When thinking about this I was preparing for potential problems with the embedded DLL in Asian and Arabic locations (potentially serious, unexpected and fatal runtime failures due to Unicode / code page issues), and also for any unexpected security changes in Windows (that we keep seeing - Windows 10 ransomware protection, which currently intermittently triggers runtime failures for files installed to userprofile folders, or the sudden need for admin rights for MSI repair - kb2918614, which appeared out of the blue on Vista - and whatever else they keep changing unexpectedly...). I did not want to sit with thousands of un-upgradable, un-uninstallable packages - already deployed to tens of thousands of machines.
My "last resort" contingency for corporate use was to "hack patch" all cached MSI files in the local, super-hidden MSI cache folder using a "home grown" patcher EXE deployed by a hotfix package. Generally insane in every way, but it looked technically possible (until digital signatures shut off the possibility?). And for me it was the only acceptable "last resort" I could think of if tens of thousands of trading-floor machines were hit by disaster suddenly.
I can think of at least two other options - one of which is to minor upgrade affected packages (lots of work, cleaner, guaranteed to work). The last option will not be mentioned :-) - (Voldemort, "those we do not speak of", etc...).
An auto generation feature for minor upgrade patches to patch the embedded custom action dll's was also on my list of contingencies - the minor upgrade would only patch the dll - no other changes. Then problems could be handled on a package-by-package basis. This patch should be available at the click of a button when pointing to a live MSI package in need of patching. An "embedded custom action dll hotfixer". A thing that should not ever be used if at all possible. Contingency "solutions" are rarely pretty.
My two cents: I can think of few scenarios where minimal dependencies are more important than for an embedded custom action in an MSI. It must work on any machine, in any state, in any language, in any location in any installation mode (and uninstall is the catch 22 here) ideally without any non-standard dependencies at all. I statically link C++ code for this very reason. For worldwide distribution I feel this is the only thing that is currently good enough - statically linked C++ code - (with the possible exception of Installscript - from Installshield - which is now running without dependencies apparently - embedded runtime? I don't know how they do it - in the olden days there were legendary problems with the required runtime pre-requisite for the Installscript language. It should be fixed since version 12 of Installshield).
This is not a complete list. It is my "run for the hills list" :-).
No fear though - just be aware of it all - and use the benefits of managed code if they are substantial enough for you, but don't expect entirely smooth sailing is my take on it. I would be upfront with my manager about these potential bear traps, without sounding like a total, paranoid lunatic. A good manager will be able to "sell" any contingency plans as necessities, that you can get time to work on and even demonstrate quickly (believe me, attention span here is short - it has to be the quickest demo ever). The big question is whether you have one package to deal with, or thousands like we do in corporate deployment. Things change a lot for the latter. Risk must be minimized for all features that are embedded in all deployed packages.
If I am 100% honest, it is not as bad with managed code as it used to be. Using DTF and other frameworks has helped. But the potential runtime issues causing uninstall problems are worrying. A global change to the .NET framework in the company - and all your packages can no longer uninstall? Or a newer version of the .NET framework reveals unknown bugs in the custom action not found when it was deployed? It may suddenly "manifest itself" on an attempted uninstall / upgrade. Manageable, but you will curse yourself...
I would prepare your support guys for the above managed code issues - they should know about the issues and really understand what .NET is about.
"We have never seen any problems with our managed code custom actions" - famous last words - to be honest.
If your target computers are uniform and standardized (SOE environment) - which is normal for corporations - then your packages may appear better than they really are (now this is true for packages with scripts too). Just wait for the next SOE version based on a new operating system... I would pilot test early with all packages in the package estate.
You could still face the irony that all target computers start failing in exactly the same way (Windows design changes in Windows Updates, security software updates that trigger interference, SOE updates that fail for some locations, etc...).
For worldwide distribution things are quite different and things tend to fail in any number of ways that are hard to debug and fix or even work out at all. You normally have no access to the problem system at all - for starters. Maybe read some further comments in "The Complexity of Deployment"-section here: Windows Installer and the creation of WiX.
So I would never use managed code for global distribution of a complex package - unless you are delivering a very specific product and know the nature of your target machines in more detail than normal. Cost / benefit.
I would have a contingency for what to do if many machines are affected by unforeseen triggers of "deadlocks" such as not being able to uninstall / upgrade. Some paranoia in this scenario, but not impossible. Silly "war games". Risk is for your manager to manage, and for you to handle technically.
Adding a link to an aging, but still valid FAQ entry from installsite.org on the topic of managed custom actions and their problems: How can I create Custom Actions in Managed Languages, like C#?.
And be skeptical of any custom actions in the first place!
Managed code just adds to custom action volatility. Custom actions are complex and difficult to get right in the first place. They run impersonated or in the wrong context unintentionally, they run twice unexpectedly, they don't run at all when expected to, they run in the wrong installation mode, they crash due to missing dependencies, they cause exceptions due to bad coding that fail upgrades and uninstalls alike by triggering rollback, you hard code references to localized folders so your setup crashes in non-English machines, you name it...
Built-in constructs in MSI itself, or pre-written custom actions (with rollback support) in frameworks such as WiX or commercial tools such as Installshield and Advanced Installer have been tested by thousands, millions or even billions (!) of users - and they are written by the best deployment experts available. Even for these components, bugs are still found - which says it all. Do you think you could do it better on your own? Always prefer ready-made, tested and maintained solutions - if available.
A whole rant about the problems with custom actions in general: Why is it a good idea to limit the use of custom actions in my WiX / MSI setups?
"Sources"
Some further links (some of this content may be showing its age by now, but these are trustworthy sources - not to be ignored - Mensching is the WiX benevolent dictator):
Don’t use managed code to write your custom actions!
Link to more details about the dangers of managed code custom actions in an MSI.
Managed Code CustomActions, no support on the way and here's why.

ServiceMix -> NetBeans OpenESB?

I've picked up a project that needs to import some (old) JBI components that were developed using ServiceMix about three years ago. I need to bring these into a modern GlassFish environment. So far, it's not very clear what I should do or how. Any tips or pointers?
My worst case scenario is to wrap the JBI component call in a POJO class, stripping out the ServiceMix bits, to see if that will at least get the gears spinning again.
I note elsewhere that the JBI code in ServiceMix apparently is not JBI certified. So maybe that is an indication this is a non-starter.
TIA!
Andrew
I would think and re-think and then re-think once more before moving or importing anything into OpenESB. The OpenESB project has basically been left without funding since Oracle bought Sun.
I did one better. I wrote my own data flow processing system from scratch. Everything else available was just too heavyweight and complex.
My new system, code named LightRail, works great. All connectivity is component driven and defined through a single JSON configuration file. All processing and flow control is handled through a single BeanShell script.
I've already deployed 10 different data flows in the last 10 months, connecting to IMAP, SFTP, FTP, Files and a database or two. Life is good again...
Andrew

How to upgrade PowerBuilder code?

I have code from PowerBuilder 5 that can't be built. The compiler just stops before it is done without any error codes.
I would like to upgrade the code to the most recent version of PowerBuilder, but some intermediate versions of PowerBuilder have binary dependencies on an old Microsoft Java DLL that Microsoft can no longer distribute due to some court case.
So, is there a way to get my code running in a newer environment?
/johan/
Firstly, you don't need to use "intermediate versions of PowerBuilder" to migrate up to a current version, so even though this Java DLL dependency sounds questionable to me (at least it doesn't ring a bell), it's irrelevant unless it affects the target version of PowerBuilder.
For migrating, you might want to check out this migration guide, as well as a list of changes to PB that may affect you.
Very unusual-sounding problem. You could try migrating the code to a more recent version of PowerBuilder and see if it compiles, or at least fails with some useful error messages.
I would also recommend posting this in the PowerBuilder section of the Sybase newsgroups. They are very active and full of some brilliant PB minds with lots of experience. You can find them here: http://forums.sybase.com
From here: http://forums.sybase.com/cgi-bin/webnews.cgi?cmd=item-4558&group=sybase.public.powersite
I just learned that the combination of the "severe" message and the message that psdwc70.dll was unable to self-register is probably because msjava.dll is not present and/or registered on your machine. The psdwc70.dll file relies on msjava.dll in order to install properly.
/johan/
Have you tried exporting the code in PB5 and importing it in the new version?

What's the best way of finding ALL your memory when developing on the Compact Framework?

I've used the CF Remote Performance Monitor; however, this seems to only track memory initialised in the managed world, as opposed to the unmanaged world. Well, I can only presume this, as the numbers listed in the profiler fall way short of the maximum allowed (32 MB on CE 5). Profiling a particular app with the RPM showed me that the total usage of all the caches only manages to get to about 12 MB and then slowly shrinks as (I assume) something unmanaged starts to claim more memory.
The memory slider in System also shows that the device is very short on memory. If I kill the process the slider shows all the memory coming back. So it must (?) be this managed process that is swallowing the memory.
Is there any simple(ish?) way to track unmanaged memory usage that might enable me to match it up with the corresponding P/Invoke calls?
EDIT: To all you re-taggers it isn't .NET, tagging the question like this confuses things. It's .NETCF / Compact Framework. I know they appear to be similar but they're different because .NET rocks whereas CF is basically just a wrapper around NotImplementedException.
Try enabling Interop logging.
Also, if you have access to the code of the native dll you are using, check this out: http://msdn.microsoft.com/en-us/netframework/bb630228.aspx
I've definitely been fighting with unmanaged issues in a C# managed app for a while -- it's not easy.
What I've found most helpful is regular output to a text log file. For example, you can print the output of GlobalMemoryStatus every couple of minutes, along with logging every time you load a new form. From there you can at least see whether memory gradually erodes, or huge chunks of memory disappear at specific times of the day. A sketch of such a logger follows below.
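A minimal sketch of that periodic logger for the Compact Framework, assuming Windows CE (GlobalMemoryStatus lives in coredll.dll there; the log path and the tag convention are made up for illustration):

    using System;
    using System.IO;
    using System.Runtime.InteropServices;

    public static class MemoryLogger
    {
        [StructLayout(LayoutKind.Sequential)]
        private struct MEMORYSTATUS
        {
            public uint dwLength;
            public uint dwMemoryLoad;    // percent of memory in use
            public uint dwTotalPhys;
            public uint dwAvailPhys;
            public uint dwTotalPageFile;
            public uint dwAvailPageFile;
            public uint dwTotalVirtual;
            public uint dwAvailVirtual;
        }

        // On Windows CE the Win32 API lives in coredll.dll, not kernel32.
        [DllImport("coredll.dll")]
        private static extern void GlobalMemoryStatus(ref MEMORYSTATUS ms);

        // Call from a timer, and from form constructors with a tag like
        // "load OrderForm", to correlate memory drops with user actions.
        public static void Log(string tag)
        {
            MEMORYSTATUS ms = new MEMORYSTATUS();
            ms.dwLength = (uint)Marshal.SizeOf(typeof(MEMORYSTATUS));
            GlobalMemoryStatus(ref ms);
            using (StreamWriter w = new StreamWriter(@"\Temp\mem.log", true))
            {
                w.WriteLine("{0:u} {1} availPhys={2} availVirtual={3}",
                    DateTime.Now, tag, ms.dwAvailPhys, ms.dwAvailVirtual);
            }
        }
    }

The dwAvailVirtual column is the interesting one for the 32 MB per-process limit on CE 5: if it erodes while the managed profiler's numbers stay flat, something unmanaged (or something reached via P/Invoke) is consuming the space.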
For us, we found a gradual memory loss all day as long as the device was being used. From there we eventually found that the barcode scanning device was being initialized for no particular reason in our Form base class (I blame the previous developer! :-)
Setting up this logging may be a small hassle, but for us it paid huge dividends in the long run, especially with the device in live use: we can get real data, instrumentation, stack traces from exceptions, etc.
Ok, I'm using C++ on CE, not C# so this may not be helpful, but...
I use a package called Entrk toolbox which monitors memory and resource usage, leaks, and exceptions under Windows CE. It's pretty much a lightweight CE version of BoundsChecker. Does the trick most of the time.