Do we still need to do DEFINITION LOAD in ECC 6 EHP4+? (ABAP)

I am adapting RM_FM_EXCEL_TO_INTERNAL_TAB to my needs and I saw the following code:
"Get TAB-sign for separation of fields
CLASS cl_abap_char_utilities DEFINITION LOAD.
ld_separator = cl_abap_char_utilities=>horizontal_tab.
Can you confirm that there is absolutely no need to do the DEFINITION LOAD any more? I assume this is some ancient legacy code. I never load a class before using its static methods or attributes, and it appears to work.

As always, I'd recommend taking the official documentation as a reference:
The variants of the statements CLASS and INTERFACE with the addition LOAD are obsolete. ABAP Compiler ignores these statements.
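In other words, the statement can simply be dropped. A minimal sketch of the modern equivalent, reusing the variable name from the question's snippet:

* No CLASS ... DEFINITION LOAD needed - the class is loaded on first use
DATA ld_separator TYPE c LENGTH 1.
ld_separator = cl_abap_char_utilities=>horizontal_tab.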

Related

What is the modern replacement to obsolete FORM subroutine in ABAP?

The ABAP documentation lists three kinds of modularization structures:
Methods. Problem: methods don't accept parameters.
Function modules. Problem: FMs belong to function groups and can be called from other programs. Apparently they are meant to be reused across the system.
Forms. Problem: are marked as "obsolete".
Is there a newer structure that replaces the obsolete FORM structure, that is:
Local to our program.
Accepts parameters.
Doesn't require ABAP Objects syntax?
Methods. Problem: methods don't accept parameters.
I am not sure how you came to that conclusion, because methods support parameters very well. The only limitation compared to FORMs is that they don't support TABLES parameters to take a TABLE WITH HEADER LINE. But they support CHANGING parameters with internal tables, which covers any case where you don't actually need the header-line. And in the rare case that you are indeed forced to deal with a TABLE WITH HEADER LINE and the method actually needs the header-line (I pity you), you can pass the header-line as a separate parameter.
You declare a method with parameters like this:
CLASS lcl_main DEFINITION.
  PUBLIC SECTION.
    METHODS foo
      IMPORTING iv_bar          TYPE i
      EXPORTING es_last_message TYPE bapiret2
      CHANGING  ct_all_messages TYPE bapiret2_t.
ENDCLASS.
And you call it either like this:
main->foo( EXPORTING iv_bar          = 1
           IMPORTING es_last_message = t_messages
           CHANGING  ct_all_messages = t_messages[] ).
or, with the more classic syntax, like this:
CALL METHOD main->foo
  EXPORTING iv_bar          = 1
  IMPORTING es_last_message = t_messages
  CHANGING  ct_all_messages = t_messages[].
Function modules. Problem: FMs belong to function groups and can be called from other programs. Apparently they are meant to be reused across the system.
Yes, function modules are supposed to be global while FORMs are supposed to be local ("supposed to": you can actually call a FORM in another program with PERFORM formname IN PROGRAM programname).
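As a rough sketch of that cross-program call (the report and subroutine names here are invented for illustration):

DATA lv_total TYPE i.
* Calls FORM calc_total defined in the report ZSOME_OTHER_REPORT
PERFORM calc_total IN PROGRAM zsome_other_report
  USING    10
  CHANGING lv_total.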
But classes can be local or global, depending on how you created them. A global class can be used by any program in the system. So function groups can be substituted by global classes in most cases.
The one use case where function modules cannot be substituted by methods of classes is RFC-enabled function modules. RFC is the Remote Function Call protocol, which allows external systems to execute a function module in another system over the network. However, if you do need some other system to communicate with your SAP system, you might want to consider using web services instead, which can be implemented in pure ABAP OO. They also offer much better interoperability with non-SAP systems because they don't require a proprietary protocol.
Is there a newer structure that replaces the obsolete FORM structure, that [...] Doesn't require ABAP Objects syntax?
Here is where you have a problem. ABAP Objects syntax is the way we are supposed to program ABAP now. There is currently a pretty hard push to forget all the non-OO ways of writing ABAP and fully embrace the ABAP OO style. With every new release, more classic syntax that can be substituted by ABAP OO syntax is declared obsolete.
However, so far SAP follows a philosophy of 100% backward compatibility. While they might try their best to compel people not to use certain obsolete language constructs (including adding scary-sounding warnings to the syntax check), they very rarely actually remove language features. They hardly can, because they themselves have tons of legacy code which uses them and which would be far too expensive and risky to rewrite. The only case I can think of where they actually removed language features was the introduction of Unicode, which made certain direct assignments between now-incompatible types syntactically illegal.
You have some wrong information there. I don't know what system version you are on, but this info could help you out:
Methods: They actually accept parameters (it would be crazy if they didn't). In fact, they accept IMPORTING, EXPORTING, CHANGING and RETURNING parameters.
Forms: Indeed they are obsolete, but in my opinion there is no risk in using them; almost every standard component relies on programs made out of FORMs. FORMs are a core concept in ABAP programming. They are the "function" or "def" of many other languages. They accept USING, CHANGING and TABLES parameters.
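For illustration, a hedged sketch of a FORM with all three kinds of parameters (the names are invented; gt_messages is assumed to be a TABLE OF bapiret2 WITH HEADER LINE and gv_count an integer):

FORM add_message TABLES   pt_messages STRUCTURE bapiret2
                 USING    pv_text     TYPE csequence
                 CHANGING pv_count    TYPE i.
* pt_messages has a header line inside the FORM
  pt_messages-message = pv_text.
  APPEND pt_messages.
  pv_count = pv_count + 1.
ENDFORM.

It would be called like this:

PERFORM add_message TABLES   gt_messages
                    USING    'Hello'
                    CHANGING gv_count.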

How can I have a "private" Erlang module?

I prefer working with files that are less than 1000 lines long, so am thinking of breaking up some Erlang modules into more bite-sized pieces.
Is there a way of doing this without expanding the public API of my library?
What I mean is, any time there is a module, any user can do module:func_exported_from_the_module. The only way to really have something be private that I know of is to not export it from any module (and even then holes can be poked).
So if there is technically no way to accomplish what I'm looking for, is there a convention?
For example, there are no private methods in Python classes, but the convention is to use a leading _ in _my_private_method to mark it as private.
I accept that the answer may be, "no, you must have 4K LOC files."
The closest thing to a convention is to use EDoc tags, like @private and @hidden.
From the docs:
@hidden
Marks the function so that it will not appear in the documentation (even if "private" documentation is generated). Useful for debug/test functions, etc. The content can be used as a comment; it is ignored by EDoc.
@private
Marks the function as private (i.e., not part of the public interface), so that it will not appear in the normal documentation. (If "private" documentation is generated, the function will be included.) Only useful for exported functions, e.g. entry points for spawn. (Non-exported functions are always "private".) The content can be used as a comment; it is ignored by EDoc.
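As a quick sketch of how the tags are used (module and function names are made up):

-module(my_worker).
-export([start/1, init/1]).

start(Args) ->
    spawn(?MODULE, init, [Args]).

%% @private
%% Entry point for spawn/3; must be exported, but not part of the public API.
init(Args) ->
    loop(Args).

%% Non-exported functions are private regardless of any tag.
loop(State) ->
    receive
        stop -> State
    end.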
Please note that this answer started as a comment to @legoscia's answer.
Different visibilities for different functions are not currently supported.
The current convention, if you want to call it that, is to have one (or several) "facade" my_lib.erl module(s) that export the public API of your library/application. Calling any internal module of the library is playing with fire and should be avoided (call them at your own risk).
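A minimal sketch of such a facade (the module and function names are invented; only my_lib is meant to be called by users):

-module(my_lib).
-export([parse/1, render/1]).

%% Delegates to internal modules that are not part of the public API.
parse(Bin)  -> my_lib_parser:parse(Bin).
render(Ast) -> my_lib_render:render(Ast).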
There are some very nice features in the BEAM VM that rely on being able to call exported functions from any module, such as:
Callbacks (funs/anonymous funs), MFA, erlang:apply/3: The calling code does not need to know anything about the library, just that it's something that needs to be called
Behaviours such as gen_server need the previous point to work
Hot reloading: You can upgrade the bytecode of any module without stopping the VM. The code server inside the VM maintains at most two versions of the bytecode for any module, redirecting external calls (those with the Module: prefix) to the most recent version and internal calls to the current version. That's why you may see some ?MODULE: calls in long-running servers, to be able to upgrade the code (see the sketch below).
You could argue that these points would still be achievable with more fine-grained, BEAM-oriented visibility levels, true. But I don't think it would solve anything that's not already solved with facade modules, and it would complicate other parts of the VM/code a great deal.
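Here is the hot-reloading pattern mentioned above, as a rough sketch (my_server is a made-up name): the fully qualified ?MODULE: call is an external call, so each iteration of the loop picks up the newest loaded version of the module.

-module(my_server).
-export([loop/1]).

loop(State) ->
    receive
        {set, NewState} -> ?MODULE:loop(NewState);
        stop            -> ok
    after 5000 ->
        ?MODULE:loop(State)
    end.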
Bonus
Something similar applies to records and opaque types: records only exist at compile time, and opaque types only at Dialyzer time. Nothing stops you from accessing their internals anywhere, but you'll only find problems if you go that way:
You insert a new field in the record and suddenly all your {record_name, ...} = ... pattern matches break
You use a library that returns an opaque_adt(); you know it's a list and use it as such. The library is upgraded to also include the size of the list, so now opaque_adt() is a tuple() and chaos ensues
Only those functions that are specified in the -export attribute are visible to other modules, i.e. the "public" functions. All other functions are private. Only if you have specified -compile(export_all) are all functions in the module visible from outside, and using -compile(export_all) is not recommended.
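A tiny sketch of the difference (names are made up):

-module(foo).
-export([public_fun/0]).     %% callable as foo:public_fun()

public_fun() -> helper().
helper() -> secret.          %% not exported: only callable inside foo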
I don't know of any existing convention for Erlang, but why not adopt the Python convention? Let's say that "library-private" functions are prefixed with an underscore. You'll need to quote function names with single quotes for that to work:
-module(bar).
-export(['_my_private_function'/0]).
'_my_private_function'() ->
    foo.
Then you can call it as:
> bar:'_my_private_function'().
foo
To me, that communicates clearly that I shouldn't be calling that function unless I know what I'm doing. (and probably not even then)

Implements vs Binary Compatibility

I have one VB6 ActiveX DLL that exposes a class INewReport. I added some new methods to this class and I was able to rebuild it and keep binary compatibility.
I have a second DLL that exposes a class clsNewReport, which implements the first class using:
Implements RSInterfaces.INewReport
Since I added new methods to INewReport, I had to also add those new methods to clsNewReport.
However, when I try to compile the second DLL, I get the binary-compatibility error "...class implemented an interface in the version-compatible component, but not in the current project".
I'm not sure what is happening here. Since I'm only adding to the class, why can't I maintain binary compatibility with the second DLL? Is there any way around this?
I think the following is a correct explanation of what is happening, along with some potential workarounds.
I made up a test case which reproduced the problem in the description and then dumped the IDL (using OLEView) from the old and new DLLs containing the interface.
Comparing the old and new IDL dumped from INewReport, the important differences are:
The UUID of interface _INewReport has changed
A typedef called INewReport___v0 has been added which refers to the original UUID of the interface
(I assume that this is also what is happening to the code referred to in the question.)
So now, in the client project, the bincomp DLL refers to the original interface UUID; but that UUID now matches a different name (INewReport___v0 instead of INewReport) than it did originally. I think this is the reason VB6 thinks there is a bincomp mismatch.
How to fix this problem? I've not been able to find anything in VB6 that would allow you to use the updated interface DLL with the client code without breaking binary compatibility of the client code.
A (bad) option could be to just change the client DLL to use project compatibility... but that may or may not be acceptable in your circumstances. It could cause whatever uses the client DLL to break unless all the consumers were also recompiled. (And this could potentially cause a cascade of broken bincomp).
A better but more complex option would be to define the interface in IDL itself, use the MIDL compiler to generate a typelib (TLB file), and reference that directly. Then you would have full control over the interface naming, etc. You could use the IDL generated from OLEView as a starting point for doing this.
This second option assumes that the interface class is really truly an interface only and has no functional code in it.
Here's how I set up a case to reproduce this:
Step 1. Original interface definition - class called INewReport set to binary compatible:
Sub ProcA()
End Sub
Sub ProcB()
End Sub
Step 2. Create a test client DLL which implements INewReport, also set to binary compatible:
Implements INewReport
Sub INewReport_ProcA()
End Sub
Sub INewReport_ProcB()
End Sub
Step 3: Add ProcC to INewReport and recompile (which also registers the newly built DLL):
(above code, plus:)
Sub ProcC()
End Sub
Step 4: Try to run or compile the test client DLL - instantly get the OP's error. No need to change any references or anything at all.
I was able to recreate your problem, using something similar to DaveInCaz's code. I tried a number of things to fix it, probably repeating things you've already tried. I came up with a possible hypothesis as to why this is happening. It doesn't fix the problem, but it may throw some additional light on it.
Quoting from this doc page:
To ensure compatibility, Visual Basic places certain restrictions on changes you make to default interfaces. Visual Basic allows you to add new classes, and to enhance the default interface of any existing class by adding properties and methods. Removing classes, properties, or methods, or changing the arguments of existing properties or methods, will cause Visual Basic to issue incompatibility warnings.
Another quote:
The ActiveX rule you must follow to ensure compatibility with multiple interfaces is simple: once an interface is in use, it can never be changed. The interface ID of a standard interface is fixed by the type library that defines the interface.
So, here's a hypothesis. The first quote mentions the default interface, which suggests that it may not be possible to alter custom interfaces in any way. That's suggested by the second quote as well. You're able to alter the interface class, because you are essentially altering its default interface. However, when you attempt to alter the implementing class in kind, to reflect the changes in your interface, your implementation reference is pointing to the older version of the interface, which no longer exists. Of course, the error message doesn't hint at this at all, because it appears to be based on the idea that you didn't attempt to implement the interface.
I haven't been able to prove this, but looking at DaveInCaz's answer, the fact that the UUID has changed seems to bear this idea out.

ByteBuddy - rebase already loaded class

I have the following code working in a Spring Boot application, and it does what I'm expecting.
TypePool typePool = TypePool.Default.ofClassPath();
ByteBuddyAgent.install();
new ByteBuddy()
.rebase(typePool.describe("com.foo.Bar").resolve(), ClassFileLocator.ForClassLoader.ofClassPath())
.implement(typePool.describe("com.foo.SomeInterface").resolve())
.make()
.load(ClassLoader.getSystemClassLoader());
It makes it so that the class com.foo.Bar implements the interface com.foo.SomeInterface (which has a default implementation).
I would like to use the above code by referring to the class as Bar.class instead of the string representation of its name. But if I do that, I get the following exception:
java.lang.UnsupportedOperationException: class redefinition failed: attempted to change superclass or interfaces
I believe this is due to the fact that it causes the class to be loaded prior to the redefinition. I'm just now learning to use Byte Buddy.
I want to avoid some reflection at runtime by adding the interface and an implementation using Byte Buddy. I have some other code that checks for this interface.
This is impossible, not because of Byte Buddy, but because no tool is allowed to do this on a regular VM. (There is the so-called Dynamic Code Evolution VM, which is capable of that.)
If you want to avoid the problem, use redefine rather than rebase. Note, however, that whenever you instrument a method this way, you replace the original.
If this is not acceptable, have a look at the Advice class, which you can apply via the .visit API to wrap logic around your original code without replacing it.
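As a rough sketch of that second approach (the LoggingAdvice class and the matcher are assumptions, not taken from the question; Bar stands for com.foo.Bar):

import net.bytebuddy.ByteBuddy;
import net.bytebuddy.agent.ByteBuddyAgent;
import net.bytebuddy.asm.Advice;
import net.bytebuddy.dynamic.loading.ClassReloadingStrategy;
import net.bytebuddy.matcher.ElementMatchers;

public class LoggingAdvice {
    @Advice.OnMethodEnter
    static void enter(@Advice.Origin String method) {
        System.out.println("entering " + method);
    }
}

// Somewhere during startup, with the agent already installed:
ByteBuddyAgent.install();
new ByteBuddy()
    .redefine(Bar.class)   // keeps the original superclass and interfaces
    .visit(Advice.to(LoggingAdvice.class).on(ElementMatchers.isMethod()))
    .make()
    .load(Bar.class.getClassLoader(), ClassReloadingStrategy.fromInstalledAgent());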

How do you implement C#4's IDynamicObject interface?

To implement "method-missing"-semantics and such in C# 4.0, you have to implement IDynamicObject:
public interface IDynamicObject
{
    MetaObject GetMetaObject(Expression parameter);
}
As far as I can figure out, IDynamicObject is actually part of the DLR, so it is not new. But I have not been able to find much documentation on it.
There are some very simple example implementations out there (e.g. here and here), but could anyone point me to more complete implementations or some real documentation?
In particular, how exactly are you supposed to handle the "parameter" parameter?
The short answer is that the MetaObject is what's responsible for actually generating the code that will be run at the call site. The mechanism that it uses for this is LINQ expression trees, which have been enhanced in the DLR. So instead of starting with an object, it starts with an expression that represents the object, and ultimately it's going to need to return an expression tree that describes the action to be taken.
When playing with this, please remember that the version of System.Core in the CTP was taken from a snapshot at the end of August. It doesn't correspond very cleanly to any particular beta of IronPython. A number of changes have been made to the DLR since then.
Also, for compatibility with the CLR v2 System.Core, releases of IronPython starting with either beta 4 or beta 5 now rename everything that's in the System namespace to be in the Microsoft namespace instead.
If you want an end-to-end sample, including source code, resulting in a dynamic object that stores values for arbitrary properties in a Dictionary, then my post "A first look at Duck Typing in C# 4.0" could be right for you. I wrote that post to show how a dynamic object can be cast to statically typed interfaces. It has a complete working implementation of a Duck that is an IDynamicObject and can act like an IQuack.
If you need more information, contact me on my blog and I will help you along as best I can.
I just blogged about how to do this here:
http://mikehadlow.blogspot.com/2008/10/dynamic-dispatch-in-c-40.html
Here is what I have figured out so far:
The Dynamic Language Runtime is currently maintained as part of the IronPython project. So that is the best place to go for information.
The easiest way to implement a class supporting IDynamicObject seems to be to derive from Microsoft.Scripting.Actions.Dynamic and override the relevant methods, for instance the Call-method to implement function call semantics. It looks like Microsoft.Scripting.Actions.Dynamic hasn't been included in the CTP, but the one from IronPython 2.0 looks like it will work.
I am still unclear on the exact meaning of the "parameter" parameter, but it seems to provide context for the binding of the dynamic object.
This presentation also provides a lot of information about the DLR:
Deep Dive: Dynamic Languages in Microsoft .NET by Jim Hugunin.