I'm using a single-server Ignite.NET configuration. After the server has been running for a varying amount of time, it crashes and the following message appears:
Possible too long JVM pause
But I doubt that the issue is related to JVM garbage collection, because I noticed one regularity:
this message is always preceded by the following warning:
[WARNING][sys-stripe-11-#12][Marshaller] Type 'System.ArgumentException' implements 'System.Runtime.Serialization.ISerializable'. It will be written in Ignite binary format, however, the following limitations apply: DateTime fields would not work in SQL; sbyte, ushort, uint, ulong fields would not work in DML.
I don't think this is a coincidence, but I don't store any ArgumentException instances in Ignite, so I don't understand how ArgumentException is related to the issue.
The warning likely happens at the beginning of a data load operation, while the JVM pause may be caused by the bulk data load itself.
It does seem that you have an exception stored somewhere.
As you may know, it is pretty easy to end up with active code of a class that contains syntax errors (for instance, someone activated the code despite syntax warnings, or someone changed the signature of a method the class calls).
This means that even dynamic instantiation of such a class via
CREATE OBJECT my_object TYPE (class_name).
will fail with an apparently uncatchable SYNTAX_ERROR exception. The goal is to write code that does not terminate when this occurs.
Known solutions:
Wrap the CREATE OBJECT statement inside an RFC function module, call the module with destination NONE, then catch the (classic) exception SYSTEM_FAILURE from the RFC call. If the RFC succeeds, actually create the object (you can't pass the created object out of the RFC because RFC function modules can't pass references, and objects cannot be passed other than by reference as far as I know).
This solution is not only inelegant, but impacts performance rather harshly since an entirely new LUW is spawned by the RFC call. Additionally, you're not actually preventing the SYNTAX_ERROR dump, just letting it dump in a thread you don't care about. It will still, annoyingly, show up in ST22.
Before attempting to instantiate the class, call
cl_abap_typedescr=>describe_by_name( class_name )
and catch the class-based exception CX_SY_RTTI_SYNTAX_ERROR it throws when the code it attempts to describe has syntax errors.
This performs much better than the RFC variant, but still seems to add unnecessary overhead: usually I don't want the type information that describe_by_name returns; I'm calling it solely to get a catchable exception, and when it succeeds, its result is thrown away.
Is there a way to prevent the SYNTAX_ERROR dump without adding such overhead?
This is the most efficient way we could come up with:
METHODS has_correct_syntax
  IMPORTING
    class_name    TYPE seoclsname
  RETURNING
    VALUE(result) TYPE abap_bool.

METHOD has_correct_syntax.
  DATA(include_name) = cl_oo_classname_service=>get_cs_name( class_name ).
  READ REPORT include_name INTO DATA(source_code).
  SYNTAX-CHECK FOR source_code MESSAGE DATA(message) LINE DATA(line) WORD DATA(word).
  result = xsdbool( sy-subrc = 0 ).
ENDMETHOD.
Still a lot of overhead for loading the source and syntax-checking it, but at least none of the additional overhead of building type descriptors you are not interested in.
We investigated this while building a dependency manager that wires classes together on startup and needs to exclude syntactically incorrect candidates.
CS includes don't always exist, so get_cs_name might come back empty; this seems to depend on the NetWeaver version and the editor the developer used.
If you are certain that the syntax errors are caused by the classes' own code, consider buffering the results of the syntax checks and revalidating only when a class has changed since it was last checked. This does not work if you expect syntax errors to be caused by something outside those classes.
In my experience, every language which supports exceptions has a hierarchy of exception types. This allows a single catch clause to match a group of related exceptions by catching their common parent. For example, part of Python's hierarchy:
FloatingPointError < ArithmeticError < Exception < BaseException
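The same kind of grouping works in C++, for example, where a handler for a base class also catches derived exception types (a minimal illustration using standard exception classes):

#include <iostream>
#include <stdexcept>

int main()
{
    try
    {
        // std::out_of_range derives from std::logic_error, which derives
        // from std::exception, mirroring the Python chain above.
        throw std::out_of_range("index too large");
    }
    catch (const std::logic_error& e)
    {
        // Catches out_of_range (and any other logic_error) via the parent type.
        std::cout << "logic error: " << e.what() << '\n';
    }
    catch (const std::exception& e)
    {
        // Would catch any remaining standard exception.
        std::cout << "other exception: " << e.what() << '\n';
    }
}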
Go, on the other hand, famously does not support exceptions and also has "no type hierarchy". Some people think exceptions should be added to Go - would it be possible to do this without adding a type hierarchy?
Are there other languages which have exceptions but no type hierarchy? Do they group related exceptions in some other way?
SuperTalk has effectively no data types, but it has exceptions: basically, you throw an error code and check for that. That's also how many early Mac OS application frameworks worked, even in C++.
So, just as an object can be approximated by a simple data structure with a type selector, exceptions can be made to work without a type hierarchy.
on doFoo
  throw "myError"
end doFoo

on startUp
  try
    doFoo
  catch tError
    if tError = "myError" then
      -- do something about it
    else
      throw tError
    end if
  end try
end startUp
Instead of "myError", you can throw any string or number, so you could use a formatted string, like "copyFileError,/path/to/source/file.txt,/path/to/dest/file.txt" (of course with proper escaping of dangerous characters like "," in this case) and then just compare the first item in this list to tell whether it's the error you want to handle.
If you're going with plain error numbers without any additional payload, you can segment the number space to get error "classes", e.g. "fatal errors are negative, recoverable ones positive" or "1-100 are file system errors" (see HTTP status codes for an example of using error code ranges to define error classes).
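A rough sketch of that kind of range-based classification (the ranges and category names here are made up, shown in C++ for concreteness):

#include <cstdio>

// Toy classification of plain numeric error codes by range; the ranges and
// names are invented for illustration, loosely following the idea of
// HTTP-style status classes.
const char* classify(int code)
{
    if (code < 0)                   return "fatal";
    if (code == 0)                  return "ok";
    if (code >= 1 && code <= 100)   return "file system error";
    if (code >= 101 && code <= 200) return "network error";
    return "other recoverable error";
}

int main()
{
    int codes[] = { -3, 0, 42, 150, 999 };
    for (int c : codes)
        std::printf("%4d -> %s\n", c, classify(c));
}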
I'd rather post this as a comment, but the sentiment was too long to get across within the limitations of a comment. I am aware that this is primarily opinion based, and I apologize for that.
Go does not support exceptions because it does not need to. Exceptions are a crutch that developers have been lured into depending on because they don't want to handle errors properly. In Go, it is idiomatic to handle every error, on the spot, every time. If you do this, your programs run better, and you know exactly when and where errors happen, so you can fix them.

Using catch in other languages ends up being more difficult to debug, because you are not always aware of exactly where the error originally happened. By wrapping your code in try/catch blocks, you essentially mask the bugs in your code. try and catch are also terribly inefficient, because all the optimizations in the binary grind to a halt while the program figures out what unexpectedly happened. Using errors properly in Go circumvents this: you capture errors and handle them, thereby "expecting" them as an eventuality and dealing with them properly.
In a COM object generally there are two ways of indicating that a function failed (that I'm aware of):
return S_OK and have an [out] parameter to give failure info
return a failure HRESULT, and use ICreateErrorInfo to set the info.
Currently I am using the failure-HRESULT method for failures that are "really bad", i.e. my object will be basically inoperable because this function failed, for example being unable to open its configuration file.
Is this correct, or should failure HRESULTs be reserved only for things like dispatch argument type mismatches?
The short version:
In COM you should use HRESULTs (and strive to use ISupportErrorInfo, etc.) for most/all types of error conditions. The HRESULT mechanism should be viewed as a form of exception throwing. If you are familiar with that, consider "Error conditions" as anything for which you would normally throw an exception in a language that supports them. Use custom return values for things for which you would not normally use exceptions.
For example, use a failure HRESULT for invalid parameters, an invalid sequence of operations, network failures, database errors, unexpected conditions such as out-of-memory, etc. On the other hand, use custom out parameters for things like 'polling, data is not ready yet', EOF conditions, or maybe 'checked the data and it doesn't pass validation'. There are plenty of discussions out there about what belongs in each category (e.g. Stroustrup's TC++PL). The specifics will depend heavily on your particular object's semantics.
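A rough sketch of that split as seen from a C++ client (IDataFeed, FetchNext and DummyFeed below are invented stand-ins, not a real COM interface; only the HRESULT/out-parameter pattern is the point):

#include <windows.h>
#include <cstdio>

struct IDataFeed
{
    // Failure HRESULT = "exception-like" error condition.
    // dataReady out-parameter = ordinary, expected status ("not ready yet").
    virtual HRESULT FetchNext(VARIANT_BOOL* dataReady, long* payload) = 0;
    virtual ~IDataFeed() {}
};

HRESULT ReadOne(IDataFeed* feed)
{
    VARIANT_BOOL dataReady = VARIANT_FALSE;
    long payload = 0;

    HRESULT hr = feed->FetchNext(&dataReady, &payload);
    if (FAILED(hr))
    {
        // Genuine error (network failure, bad state, ...): propagate the
        // failure HRESULT; rich details would travel via IErrorInfo.
        return hr;
    }
    if (dataReady == VARIANT_FALSE)
    {
        // Expected, non-exceptional outcome: reported through the out
        // parameter, no failure HRESULT needed.
        std::printf("no data yet, try again later\n");
        return S_FALSE;
    }
    std::printf("got payload %ld\n", payload);
    return S_OK;
}

// Minimal fake implementation so the sketch actually runs.
struct DummyFeed : IDataFeed
{
    HRESULT FetchNext(VARIANT_BOOL* dataReady, long* payload)
    {
        *dataReady = VARIANT_TRUE;
        *payload = 42;
        return S_OK;
    }
};

int main()
{
    DummyFeed feed;
    ReadOne(&feed);
}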
The longer version:
At a fundamental level, the COM HRESULT mechanism is just an error code mechanism which has been standardized by the infrastructure. This is mostly because COM must support a number of features such as inter-process (DCOM) and inter-threaded (Apartments) execution, system managed services (COM+), etc. The infrastructure has a need to know when something has failed, and it has a need to communicate to both sides its own infrastructure-related errors. Everybody needs to agree on how to communicate errors.
Each language and programmer has a choice of how to present or handle those errors. In C++, we typically handle the HRESULTs as error codes (although you can translate them into exceptions if you prefer error handling that way). In .NET languages, failure HRESULTs are translated into exceptions because that's the preferred error mechanism in .NET.
VB6 supports "either". Now, I know VB6's so-called exception handling has a painful syntax and limited scoping options for handlers, but you don't have to use it if you don't want to. You can always use ON ERROR RESUME NEXT and do it by hand if you think the usage pattern justifies it in a specific situation. It's just that instead of writing something like this:
statusCode = obj.DoSomething(param1)
If IS_FAILURE(statusCode) Then
    'handle error
End If
You write it like this:
On Error Resume Next
...
obj.DoSomething param1
If Err.Number <> 0 Then
    'handle error
End If
VB6 is simply hiding the error-code return value from the method call (and allowing the object's programmer to expose a "virtual return value" in its place via [retval]).
If you make up your own error reporting mechanism instead of using HRESULTs, you will:
Spend a lot of time reinventing a rich error-reporting mechanism that will probably mirror what ISupportErrorInfo already gives you (or, more likely, not provide any rich error information at all).
Hide the error status from COM's infrastructure (which might or might not matter).
Force your VB6 clients into one specific choice out of the two options they have: they must do explicit line-by-line checks, or, more likely, simply ignore the error condition by mistake, even if they would have preferred an error handler.
Force your (say) C# clients to handle your errors in ways that run contrary to the natural style of the language (they have to check every method call explicitly and then... likely throw an exception by hand).
In my VB6 application I make several calls to a COM server my team created from an Ada project (using GNATCOM). There are basically two methods available on the COM server. Their prototypes in VB are:
Sub PutParam(Param As Parameter_Type, Value)
Function GetParam(Param As Parameter_Type)
where Parameter_Type is an enumerated type which distinguishes the many parameters I can put to/get from the COM server, and Value is a Variant. PutParam() receives a Variant and GetParam() returns a Variant. (I don't really know why the VB6 Object Browser shows no reference to the Variant type on the COM server interface.)
The product of this project has been used continuously this way for years, without any problems with this interface, on computers running Windows XP SP2. On computers with Windows XP SP3, we get the error 0x800706F7, "The stub received bad data", when trying to put parameters of type Long.
Does anybody have any clue what could be causing this? The COM server is still being built on a system with SP2. Should building it on a system with SP3 make any difference (like when we build for x64 on x64 systems)?
One of the calls that cause the problem is the following (some variable names changed):
Dim StructData As StructData_Type
StructData.FirstLong = 1234567
StructData.SecondLong = 8901234
StructData.Status = True
Call ComServer.PutParam(StructDataParamType, StructData)
Where the definition of StructData_Type is:
Type StructData_Type
    FirstLong As Long
    SecondLong As Long
    Status As Boolean
End Type
(the following has been added after the question was first posted)
The definition of the primitive calls on the interface of the COM server in IDL are presented below:
// Service to receive data
HRESULT PutParam([in] Parameter_Type Param, [in] VARIANT *Value);
//Service to send requested data
HRESULT GetParam([in] Parameter_Type Param, [out, retval] VARIANT *Value);
The definition of the structure I'm trying to pass is:
struct StructData_Type
{
    int FirstLong;
    int SecondLong;
    VARIANT_BOOL Status;
} StructData_Type;
I find it strange that this definition uses 'int' as the type of FirstLong and SecondLong, while in the VB6 Object Browser they are typed 'Long'. By the way, when I extract the IDL from the COM server (using a specific utility), those parameters are defined as Long.
Update:
I have tested the same code with a version of my COM server compiled for Windows 7 (different GNAT version, same GNATCOM version) and it works! I don't really know what happened here. I'll keep trying to identify the problem on WinXP SP3, but it is good to know that it works on Win7. If you have a similar problem, it may be worth trying to migrate to Win7.
I'll focus on explaining what the error means; there are too few hints in the question to provide a simple answer.
A "stub" is used in COM when you make calls across an execution boundary. It wasn't stated explicitly in the question but your Ada program is probably an EXE and implements an out-of-process COM server. Crossing the boundary between processes in Windows is difficult due to their strong isolation. This is done in Windows by RPC, Remote Procedure Call, a protocol for making calls across such boundaries, a network being the typical case.
To make an RPC call, the arguments of a function must be serialized into a network packet. COM doesn't know how to do this on its own, because it doesn't know enough about the actual arguments of a function; it needs the help of a proxy, a piece of code that does know what the argument types are. On the receiving end is a very similar piece of code that does the exact opposite of the proxy: it deserializes the arguments and makes the internal call. This is the stub.
One way this can fail is when the stub receives a network packet that contains more or less data than required for the function's argument values. Clearly it won't know what to do with that packet; there is no sensible way to turn it into a StructData_Type value, and it will fail with the "The stub received bad data" error.
So the very first explanation to consider for this error is a DLL Hell problem: a mismatch between the proxy and the stub. If this app has been stable for a long time, though, that is not a likely explanation.
There's another aspect of your code snippet that is likely to induce this problem. Structures are very troublesome beasts in software: their members are aligned to their natural storage boundary, and the alignment rules are subject to interpretation by the respective compilers. This can certainly be the case for the structure you quoted. It needs 10 bytes to store the fields, 4 + 4 + 2, and they align naturally, but the structure is actually 12 bytes long: two bytes are padded at the end to ensure that the ints still align when the structure is stored in an array. This also makes COM's job very difficult, since COM hides implementation details and structure alignment is a massive detail. COM needs help to copy a structure; that is the job of the IRecordInfo interface. The stub will also fail when it cannot find an implementation of that interface.
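To see the padding concretely, here is a small stand-alone C++ sketch, with Status modelled as a 16-bit short the way VARIANT_BOOL is defined; it only illustrates default alignment, not the actual marshalling:

#include <iostream>

// Plain C++ mirror of the structure from the question, for illustration only.
struct StructData
{
    int   FirstLong;   // 4 bytes
    int   SecondLong;  // 4 bytes
    short Status;      // 2 bytes, followed by 2 bytes of trailing padding so
                       // the struct size stays a multiple of the int alignment
};

int main()
{
    // With default packing on common compilers this prints 12, not 10.
    std::cout << "sizeof(StructData) = " << sizeof(StructData) << '\n';
}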
I'll talk a bit about the proxy, the stub and IRecordInfo. There are two basic ways a proxy/stub pair is generated. One way is by describing the interfaces in a language called IDL, Interface Definition Language, and compiling that with MIDL. That compiler is capable of auto-generating the proxy/stub code, since it knows the function argument types. You'll get a DLL that needs to be registered on both the client and the server. Your server might be using that, I don't know.
The second way is what VB6 uses: it takes advantage of a universal proxy that's built into Windows, called FactoryBuffer, with CLSID {00000320-0000-0000-C000-000000000046}. It works by using a type library, a machine-readable description of the functions in a COM server, good enough for FactoryBuffer to figure out how to serialize the function arguments. The type library is also what provides the info IRecordInfo needs to figure out how the members of a structure are aligned. I don't know how it is done on the server side; I've never heard of GNATCOM before.
So a strong candidate explanation for this problem is a problem with the type library. This is especially tricky in VB6, because you cannot directly control the GUIDs it uses: it likes to generate new ones when you make trivial changes, and the only way to avoid that is by selecting the binary compatibility option, which uses an old copy of the type library and tries to keep the new one as compatible as possible. If you don't have that option turned on, then do expect trouble, especially for the GUID of the structure. Kaboom if it changed and the other end is still using the old GUID.
Just some hints on where to start looking. Do not assume it is a problem caused by SP3; this COM infrastructure hasn't changed in a very long time. But certainly expect this kind of problem after a new operating-system version is installed and everything has to be re-registered. SysInternals' ProcMon is a good utility to see how the programs use the registry to find the proxy, stub and type library. And you'd certainly get help from a COM Spy kind of utility, albeit that they are very hard to find these days.
If it had been working happily on XP and suddenly stopped, the first culprit I'd look for is a type mismatch. It is possible that 'long' on such systems is now 64 bits, while your Ada COM code (and/or perhaps your C ints) is expecting 32 bits. With a traditionally compiled system this would have been checked for you by your compiler, but the extra indirection you have with COM makes that difficult.
The bit you wrote about "when we build for 64-bit systems" makes me particularly leery. 64-bit compiles may change the size of many C types, you know.
This related post suggests you need padding in your struct, as the marshalling code may expect more data than you actually send (which is a bug, of course). Your struct contains 10 bytes (4 bytes for each of the ints/longs and 2 for the Boolean). Try adding padding so that your struct size is a multiple of 4 bytes (or, failing that, a multiple of 8, as the post isn't clear about the expected size).
I am also suggesting that the problem is due to a padding issue in your structure. I don't know whether you can control this using a #pragma, but it might be worth looking at your documentation.
I think it would be a good idea to pad your struct so that the resulting type-library struct is a multiple of four (or eight) bytes. Your Status member takes up 2 bytes, so maybe you should insert a dummy value of the same type either before or after Status, which would bring the size up to 12 bytes (if packing to eight bytes, this would have to be three dummy variables).
If an exception occurs in a try block, how is execution transferred to the catch block? This is not a C#/Java/C++ question; I'm just wondering how it works internally.
This is not a C#/Java/C++ question. I'm asking how it works internally, how execution knows to go to the catch statement.
How this works internally pretty much makes it a C#/Java/C++ question, because it is implemented differently in each.
In Java, each try block is recorded in a special exception table in the class file. When the JVM throws an exception, it looks at that table to see which catch or finally block to transfer control to.
When an exception occurs, a special instruction is executed (usually called an interrupt). This leads to executing a generic error handler that works out which is the most recently installed suitable exception handler; that handler is then executed.
There is a difference in how exceptions are technically handled between natively compiled languages such as C++ and languages using byte code executed on a virtual machine, such as Java or C#.
C++ compilers usually generate code that records at runtime the information needed for exception handling. A dedicated data structure is used to remember entry to and exit from try blocks and the associated exception handlers. When an exception occurs, an interrupt is generated and control is passed to the runtime, which in turn inspects the call stack and determines which exception handler to call.
Further details are pretty well explained in the following article by Vishal Kochhar:
How a C++ compiler implements exception handling
In Java or .NET there is no need for this runtime bookkeeping, as the runtime can inspect the byte code and its exception tables to find the relevant exception handler. As a consequence, only exceptions that are actually thrown cause any overhead.
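As a rough, language-neutral illustration of such a table lookup, here is a toy model in C++; the table entries and names are invented and do not correspond to real compiler or JVM output:

#include <cstdio>
#include <vector>

// Toy model of an exception/handler table: each entry says "if a throw
// happens between start and end, jump to handler". Real compilers emit
// similar tables into the binary or class file.
struct HandlerEntry
{
    int start;    // first "instruction index" covered by the try block
    int end;      // last covered index
    int handler;  // index of the catch block's code
};

int find_handler(const std::vector<HandlerEntry>& table, int throw_site)
{
    // Search from the innermost (most recently listed) entry outwards.
    for (auto it = table.rbegin(); it != table.rend(); ++it)
    {
        if (throw_site >= it->start && throw_site <= it->end)
            return it->handler;
    }
    return -1; // no handler found: the runtime would terminate the program
}

int main()
{
    std::vector<HandlerEntry> table = {
        {10, 50, 200},  // outer try block
        {20, 30, 100},  // nested try block
    };

    std::printf("throw at 25 -> handler %d\n", find_handler(table, 25)); // 100
    std::printf("throw at 40 -> handler %d\n", find_handler(table, 40)); // 200
}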
It is basically down to the parsing fundamentals of the language.
You can get all the info here.
It should work in all languages somewhat like this:
if (error_occurred_xy while doing things in try) {
    call_catch_part(error_xy)
}
You could do the same in C, even though there is no exception handling per se: there you would use setjmp/longjmp. Unfortunately you then do not get stack unwinding and have to handle all the nitty-gritty yourself.
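A minimal sketch of that setjmp/longjmp pattern (all names invented; note that, unlike real exceptions, no stack unwinding happens and no destructors run when longjmp transfers control):

#include <csetjmp>
#include <cstdio>

static std::jmp_buf error_context;

static void do_work(bool fail)
{
    if (fail)
    {
        // The "throw": jump back to where setjmp saved the context,
        // delivering the error code 42.
        std::longjmp(error_context, 42);
    }
    std::puts("work completed");
}

int main()
{
    // The "try": setjmp returns 0 when it saves the context, and the value
    // passed to longjmp when an "exception" arrives here later.
    switch (setjmp(error_context))
    {
    case 0:
        do_work(true);
        break;
    default:
        // The "catch".
        std::puts("caught an error from do_work");
        break;
    }
    return 0;
}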