How to increase the stack size of an ASP.NET Core binary (32 bit)?

Is it possible to increase the stack size of an ASP.NET Core 2 binary? I have to use a 32-bit COM interop component which happens to smash the stack under certain well-defined conditions. It's not infinite recursion, just a workload that happens to hit the limit when the largest possible data set is requested, so increasing the stack size might be an acceptable fix.
Modifying the stack size via EDITBIN only works if applied to dotnet.exe directly, which obviously is not a preferred solution.

Starting with .NET Core 3.0, the output of a build on Windows is an .exe file. We can simply apply editbin to that exe before or after publishing. This works fine if we run the web application out-of-process; I have tested it. With in-process hosting there is no other option than applying editbin to the IIS worker process.
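For reference, a typical invocation might look like this (run from a Visual Studio developer prompt; the 2 MB reserve size and the MyWebApp.exe name are placeholders, not values from the question):

    editbin /STACK:2097152 MyWebApp.exe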

For .NET 5, the default stack size can be set via the environment variable COMPlus_DefaultStackSize. The value is interpreted as hex, so COMPlus_DefaultStackSize=100000 means a 1 MB stack.
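For example, to get a 4 MB default stack (0x400000 bytes), one could set the variable before launching the app (Windows command prompt; MyWebApp.dll is a placeholder):

    set COMPlus_DefaultStackSize=400000
    dotnet MyWebApp.dll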

Related

ARM Cortex-M3 Startup Code

I'm trying to understand how the initialization code that ships with Keil (RealView v4) for the STM32 microcontrollers works. Specifically, I'm trying to understand how the stack is initialized.
In the documentation on ARM's website it mentions that one of the routines in startup_xxx.s, __user_initial_stackheap, should not use more than 88 bytes of stack. Do you know where that limitation comes from?
It seems that when the reset handler calls System_Init, it executes a couple of functions in a C environment, which I believe means it is using some form of temporary stack (it allocates a few automatic variables). However, all of those stack-allocated items should be out of scope once it returns and then calls __main, which is where __user_initial_stackheap is called from.
So why is there this requirement for __user_initial_stackheap not to use more than 88 bytes? Does the rest of __main use a ton of stack or something?
Any explanation of the cortex-m3 stack architecture as it relates to the startup sequence would be fantastic.
You will see from the __user_initial_stackheap() documentation that the function is for legacy support and that it is superseded by __user_setup_stackheap(); the documentation for the latter provides a clue regarding your question:
Unlike __user_initial_stackheap(), __user_setup_stackheap() works with systems where the application starts with a value of sp (r13) that is already correct, for example, Cortex-M3
[..]
Using __user_setup_stackheap() rather than __user_initial_stackheap() improves code size because there is no requirement for a temporary stack.
On Cortex-M the sp is initialised on reset by the hardware from a value stored in the vector table; on older ARM7 and ARM9 devices this is not the case and it is necessary to set the stack pointer in software. The start-up code needs a small stack for use before the user-defined stack is applied - this may be the case, for example, if the user stack is in external memory and cannot be used until the memory controller has been initialised. The 88-byte restriction is imposed simply because this temporary stack is sized to be as small as possible, since it is probably unused after start-up.
In your case on the STM32 (a Cortex-M device), it is likely that there is in fact no such restriction, but you should perhaps update your start-up code to use the newer function to be certain. That said, given the required behaviour of this function and the fact that its results are returned in registers, I would suggest that 88 bytes would be rather extravagant if you were to need that much! Moreover, you only need to reimplement it if you are using a scatter-loading file as described.
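To illustrate why Cortex-M needs no temporary stack: the hardware loads sp from the first vector-table entry before the reset handler runs a single instruction. A minimal sketch of the relevant entries (GCC/arm-none-eabi C++ syntax rather than the Keil assembly in startup_xxx.s; _estack is an assumed linker-script symbol for the initial stack top):

    extern "C" void Reset_Handler();   // first code executed after reset
    extern "C" unsigned int _estack;   // assumed: top of stack, defined in the linker script

    // Placed at the start of flash; the core reads entry 0 into sp and
    // entry 1 into pc on reset, so sp is valid before any code runs.
    __attribute__((section(".isr_vector"), used))
    const void * const g_vectors[] = {
        &_estack,                      // entry 0: initial stack pointer
        (const void *)Reset_Handler,   // entry 1: reset vector
    };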

Visual Studio 2012 adds numbers to my double values if I enter them in vb.net

A very strange, presumably meant-to-be-helpful behaviour of Visual Studio 2012.
When I enter a double in vb.net like:
Dim myD as Double = 1.4
When I hit Enter or move my focus another way, the formatting kicks in and changes the above to:
Dim myD as Double = 1.39999999999999998
or changes 1.6 to:
Dim myD as Double = 1.6000000000000001
This behaviour does not appear to happen for all doubles; 1.3, 1.5, 1.7 and 1.8 are left alone.
See this youtube movie for the behaviour in action:
http://youtu.be/afw4jg58-aU
Why, and more importantly, how can I prevent this?
Edit:
Extensions installed are: (list not preserved)
Second Edit:
The behaviour seems to have gone away. I do not know what caused this, so for future reference this is of little use to anyone, but for now I'm happy that I don't have to go troubleshooting as suggested.
The exact same issue is also reported in this question, which applied to VS2010 but otherwise has no usable answer.
This is an environmental problem: code is getting loaded into Visual Studio that messes with the FPU control word on your machine, a processor register that determines how floating point operations work.
The Rounding Control bits are a good source of trouble like this; they determine how the internal 80-bit precision floating point value is truncated to 64 bits. Options are round-up, round-down, round-toward-zero and round-to-nearest. The Precision Control bits are also a good candidate; options are full 64-bit precision, 53 and 24 bits.
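A quick way to check whether something on your machine has already tampered with these bits is to read the control word from a small native program. A minimal sketch for 32-bit x86, using the MSVC CRT's _controlfp_s (the masks and default values shown are the real <float.h> constants):

    #include <cstdio>
    #include <float.h>

    int main()
    {
        unsigned int cw = 0;
        _controlfp_s(&cw, 0, 0);   // mask of 0 = read the current control word only
        std::printf("control word: 0x%08x\n", cw);
        std::printf("rounding:  %s\n",
            (cw & _MCW_RC) == _RC_NEAR ? "round-to-nearest (default)" : "changed!");
        std::printf("precision: %s\n",
            (cw & _MCW_PC) == _PC_53 ? "53-bit (Windows default)" : "changed!");
        return 0;
    }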
Both VS2012 and the .NET Framework rely on the operating system default, with the expectation that this will not change afterwards. Pretty hard-to-diagnose trouble arises when code actually does change it; your observation strongly fits the pattern. The most common troublemakers are:
- code that uses DirectX without the D3DCREATE_FPU_PRESERVE option. DirectX reprograms the precision and rounding control bits to squeeze out a bit more perf.
- code that was written in an older Borland language product. Its runtime library initializes the FPU control word in a non-standard way. More generally, this is a problem with software that relies on an old runtime library or a legacy initialization that was carried through into later releases.
- in general, any code that uses a media codec or media API. Such code tends to reprogram the FPU to squeeze out perf for the same reasons DirectX does. This was especially notorious in a product I worked on that uses such codecs heavily: the codebase was peppered with calls that reset the FPU control word after making a call into external code (a sketch of that pattern follows this list).
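The defensive reset pattern might look roughly like this (32-bit x86; CallIntoCodec is a hypothetical external call, while _controlfp_s, _CW_DEFAULT and the _MCW_* masks are the real MSVC CRT names):

    #include <float.h>

    extern void CallIntoCodec();   // hypothetical call into external media code

    void SafeCallIntoCodec()
    {
        CallIntoCodec();
        // The external code may have reprogrammed the FPU; put the
        // exception masks, rounding and precision bits back to the
        // Windows defaults before returning.
        unsigned int cw;
        _controlfp_s(&cw, _CW_DEFAULT, _MCW_EM | _MCW_RC | _MCW_PC);
    }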
Finding and eliminating such code can be very difficult. DLLs get injected into another process by a large variety of well-intended malware. The SysInternals Autoruns utility can be very useful; it shows all the possible ways code can be injected, with an easy way to disable it. Be prepared to be shocked at what you see, and readily disable stuff that doesn't carry a Microsoft copyright.
For dynamic injection, you'll need a debugger to see what is loaded into VS. Start VS again and use Tools + Attach to Process to attach to the first instance, selecting the unmanaged debugger. Debug + Windows + Modules shows you which DLLs are loaded. Do beware that a DLL can be transient: a shell dialog like File + Open + File will dynamically load shell extensions into VS and unload them again afterwards. Good luck with it; you'll need it, and sometimes the only fix is a rather drastic one.
I had the exact same issue with Visual Studio 2012.
The "solution" if it happens to you is:
write your number in your code
let VS screw it up, e.g. when you move to another line
CTRL-Z to revert VS mess
continue writing your code

Esent crashes with Windows 8 [duplicate]

I've been using ESENT for my projects quite extensively and I really love how easy and fast it works. And stable, too!
But I have a HUGE problem with Windows 8! Regardless of how I link to esent.dll (dynamically or statically), whenever I call something other than JetSetSystemParameter the DLL crashes, taking my app down the cliff with it.
Unfortunately I still can't get it running. My code had no problem running on Windows 7 or older, but on Windows 8 esent.dll crashes when I try to create an instance (floating point invalid operation).
I tried all possible calling conventions; that is definitely NOT the problem. I tried some more and discovered this weird situation:
1. I created a demo application using VS 2012 and JetCreateInstance worked just fine.
2. Exactly the same code in Delphi XE3 sends esent.dll crashing.
3. I created a DLL using VS 2012, exporting the method that worked perfectly in the above demo app, thinking it was a Delphi bug.
4. I then loaded the DLL in a demo Delphi project (tried with Delphi 6, XE2 and XE3), called the method and BOOM. Same crash.
Now my assumption is that Microsoft won't allow any other development environment to work correctly with esent.dll. Is this possible?
The error, a floating point invalid operation, makes the problem sound as though it is related to the floating point control word.
By default, Delphi unmasks floating point exceptions. So when code asks the floating point unit to perform operations that result in errors, the FPU signals an error, which is then converted to an exception.
But most other Windows development environments mask these exceptions on the FPU. Such code is written under the assumption that the execution environment has FPU exceptions masked. If you call a DLL from Delphi, however, the execution environment will have unmasked FPU exceptions, breaking that assumption. I suspect that if you mask FPU exceptions, your problems will disappear.
To test if this is the problem, you can simply add this to your code, executed early in its life:
Set8087CW($027F);
This will mask all exceptions and set the FPU control word to the default Windows setting.
In the longer term you may wish to mask exceptions before each call into this DLL, and then restore the FPU control word when the call returns.
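That pattern might look like this (a sketch in Delphi, matching the Set8087CW call above; CallIntoEsent is a hypothetical wrapper around your esent.dll import, while Get8087CW/Set8087CW are the real System-unit routines):

    procedure CallIntoEsentSafely;
    var
      SavedCW: Word;
    begin
      SavedCW := Get8087CW;          // remember Delphi's control word
      Set8087CW(SavedCW or $3F);     // mask the six FPU exception bits
      try
        CallIntoEsent;               // hypothetical call into esent.dll
      finally
        Set8087CW(SavedCW);          // restore, even if the call raises
      end;
    end;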
That is a slightly dangerous game using the libraries supplied with Delphi, since Set8087CW is not thread-safe due to its use of the global variable Default8087CW. If you wish to read more about that issue, I refer you to QC#107411.

Mixed mode DLL requires delay loading

I've created a mixed DLL (C++/CLI) and, after successfully calling it from a plain ANSI C application, I've moved on to calling it from a C++ COM server (using the same C entry points). However, even before the COM server successfully starts or calls into the DLL, I get an "access violation" in ntdll.dll. The call stack just has ntdll.dll!ExecuteHandler2 repeated multiple times, to the point where a stack overflow is reported in the VS debug output. I can see my mixed-mode DLL and mscoree.dll are loaded.
I added the mixed DLL to the delay loaded DLL options of the COM server and it appears to work.
Why does the mixed DLL need to be delay-loaded in the C++ COM server when the C application I wrote works fine without delay loading? How do I go about debugging this problem (unless this is expected behaviour, though I couldn't find anything documenting it)?
Looks like a COM apartment needs to be initialized. COM apartments take care of thread synchronization.
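If the calling thread has not joined an apartment yet, initializing one explicitly before touching the DLL may resolve the crash. A minimal sketch (CoInitializeEx/CoUninitialize are the standard Win32 calls; the choice of a single-threaded apartment here is an assumption - pick whatever threading model your COM server requires):

    #include <objbase.h>   // link with ole32.lib

    int main()
    {
        // Join (or create) a single-threaded apartment for this thread.
        HRESULT hr = CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED);
        if (FAILED(hr))
            return 1;

        // ... load and call into the mixed-mode DLL here ...

        CoUninitialize();  // leave the apartment on the way out
        return 0;
    }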

"Failed to request ThreadStore" - WinDbg debugging live process

I am debugging a live process (not a dump) of PresentationHost.exe. It used to work fine, but a few days ago I suddenly started getting the above error message. !Threads, !pe - virtually every SOS command fails.
All I remember is that I installed Visual Studio 2010 and .NET Framework 4.0 before I started getting that error. Is it related?
UPDATE:
I cannot reproduce the problem I was having. Probably I was debugging a 32-bit process with a 64-bit debugger, or a .NET 4 process with the .NET 2.0 SOS, or vice versa, or some combination of both bitness and DLL-version mismatches.
Apologies; this question may not be valid.
When are you trying to issue the command?
This error is quite common when attempting to issue SOS commands before the CLR has fully loaded.
You could attempt to break right after the CLR has finished its initialization procedure. To break at that point, you can place a breakpoint in the following manner: bp clr!EEStartup "gu". This will cause the debugger to break on the EEStartup function and continue execution until the function completes.
When the debugger breaks on that breakpoint, you should be able to issue your SOS commands.
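Put together, a session might look like this (a sketch assuming a .NET 4 target, where the runtime module is clr.dll; on .NET 2.0 the module is mscorwks, so the symbol and the .loadby line change accordingly):

    $$ Break at EEStartup, then run until it returns:
    bp clr!EEStartup "gu"
    $$ Resume; the debugger stops once EEStartup completes:
    g
    $$ Load the SOS extension that matches the loaded runtime:
    .loadby sos clr
    $$ SOS commands such as !threads should work from this point on:
    !threads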