Naming conventions for replacement APIs / classes

Do you have a naming convention for APIs or Classes that are being phased in to replace an older version that performed the same function / filled the same role?
E.g. Windows does this by adding "Ex" to the end of the function:
ShellExecute // old
ShellExecuteEx // new
What do you prefer, and what are your reasons?
Appending 2, V2, New, NowInStereo?
Doing a one-time rename of the old API from Something to SomethingOld and using Something for the new stuff? This option worries me when it comes to version control, but it also seems the least likely to be burdened with a V3 or ReallyNew problem in the future.
Making up a completely different name that may describe the function less accurately, but at least is different.

Much of the time you can get away with changing the package name, rather than the class name itself.

If the new class performs the same function as the old one, I remove the old one and replace it with the new one under the same name. I can still consult the old one in version control if needed.

Why change it at all? Interfaces being the contracts that they are, why would you break the interface after everything is already using it?
Edit: Sorry to botch your comment Josh...
I think that if you've gone to the trouble to create the interface you need to do everything you can to maintain it. What kind of bad choice were you thinking about when you commented before?

You only need to create a new function if its arguments change because of new requirements.
The best naming convention for an API is the one the users of the API would expect. If the API is used only on Windows then adding an Ex (or Ex2 for the next rev) may be appropriate. I'm not aware of other conventions on other platforms. Also, your programming language may have a convention for extending API methods.
If you are using an object oriented language and this is an object method, you do not have to change the name because you can have methods with the same name and different signatures. However, it may still make sense to provide a new name to let users know that you want them to migrate to the new method.
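For instance, a sketch in C# (all names here are made up): the old signature can stay as an overload that forwards to the new one, and an [Obsolete] attribute signals the migration without breaking existing callers.
using System;

public class ShellRunner
{
    // Old entry point, kept for compatibility; forwards to the new overload.
    [Obsolete("Use Execute(string command, ExecuteOptions options) instead.")]
    public void Execute(string command)
    {
        Execute(command, new ExecuteOptions());
    }

    // New entry point with the extended signature.
    public void Execute(string command, ExecuteOptions options)
    {
        // new implementation goes here
    }
}

public class ExecuteOptions
{
    // new optional settings
}
This keeps one method name for callers while still allowing the richer signature, which is usually preferable to an "Ex"/"2" suffix when the language supports overloading.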

Enforcing API boundaries at the Module (Distribution?) level

How do I structure Raku code so that certain symbols are public within the library I am writing, but not public to users of the library? (I'm saying "library" to avoid the terms "distribution" and "module", which the docs sometimes use in overlapping ways. But if there's a more precise term that I should be using, please let me know.)
I understand how to control privacy within a single file. For example, I might have a file Foo.rakumod with the following contents:
unit module Foo;
sub private($priv) { #`[do internal stuff] }
our sub public($input) is export { #`[ code that calls &private ] }
With this setup, &public is part of my library's public API, but &private isn't – I can call it within Foo, but my users cannot.
How do I maintain this separation if &private gets large enough that I want to split it off into its own file? If I move &private into Bar.rakumod, then I will need to give it our (i.e., package) scope and export it from the Bar module in order to be able to use it from Foo. But doing so in the same way I exported &public from Foo would result in users of my library being able to use Foo and call &private – exactly the outcome I am trying to avoid. How do I maintain &private's privacy?
(I looked into enforcing privacy by listing Foo as a module that my distribution provides in my META6.json file. But from the documentation, my understanding is that provides controls which modules package managers like zef install by default but does not actually control the privacy of the code. Is that correct?)
[EDIT: The first few responses I've gotten make me wonder whether I am running into something of an XY problem. I thought I was asking about something in the "easy things should be easy" category. I'm coming at the issue of enforcing API boundaries from a Rust background, where the common practice is to make modules public within a crate (or just to their parent module) – so that was the X I asked about. But if there's a better/different way to enforce API boundaries in Raku, I'd also be interested in that solution (since that's the Y I really care about)]
I will need to give it our (i.e., package) scope and export it from the Bar module
The first step is not necessary. The export mechanism works just as well on lexically scoped subs too, and means they are only available to modules that import them. Since there is no implicit re-export, the module user would have to explicitly use the module containing the implementation details to have them in reach. (As an aside, personally, I pretty much never use our scope for subs in my modules, and rely entirely on exporting. However, I see why one might decide to make them available under a fully qualified name too.)
It's also possible to use export tags for the internal things (is export(:INTERNAL), and then use My::Module::Internals :INTERNAL) to provide an even stronger hint to the module user that they're voiding the warranty. At the end of the day, no matter what the language offers, somebody sufficiently determined to re-use internals will find a way (even if it's copy-paste from your module). Raku is, generally, designed with more of a focus on making it easy for folks to do the right thing than on making it impossible to do "wrong" things if they really want to, because sometimes that wrong thing is still less wrong than the alternatives.
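A minimal sketch of that suggestion, reusing the Foo/Bar names from the question (the :INTERNAL tag is just a label I chose):
# Bar.rakumod – implementation details, exported only under the :INTERNAL tag
unit module Bar;
my sub private($priv) is export(:INTERNAL) { #`[do internal stuff] }

# Foo.rakumod – the public face of the library
use Bar :INTERNAL;
unit module Foo;
our sub public($input) is export { private($input) }
A user who writes use Foo; only gets &public; they would have to deliberately write use Bar :INTERNAL; to reach &private.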
Off the bat, there's very little you can't do as long as you're in control of the meta-object protocol. Anything that's syntactically possible you could, in principle, implement with a specific kind of method or class declared through it. For instance, you could have a private-class which would be visible only to members of the same namespace (to whatever level you design). There's Metamodel::Trusting, which defines, for a particular entity, who it does trust (please bear in mind that this is part of the implementation, not the spec, and thus subject to change).
A less scalable way would be to use trusts. The new, private modules would need to be classes and issue a trusts X for every class that should access them. That could include classes belonging to the same distribution... or not; that's up to you to decide. It's the Metamodel class above that supplies this trait, so using it directly might give you a greater level of control (with a lower level of programming).
There is no way to enforce this 100%, as others have said. Raku simply provides the user with too much flexibility for you to be able to perfectly hide implementation details externally while still sharing them between files internally.
However, you can get pretty close with a structure like the following:
# in Foo.rakumod
use Bar;
unit module Foo;
sub public($input) is export { #`[ code that calls &private ] }
# In Bar.rakumod
unit module Bar;
sub private($priv) is export is implementation-detail {
    unless callframe(1).code.?package.^name eq 'Foo' {
        die '&private is a private function. Please use the public API in Foo.'
    }
    #`[do internal stuff]
}
This function will work normally when called from a function declared in the mainline of Foo, but will throw an exception if called from elsewhere. (Of course, the user can catch the exception; if you want to prevent that, you could exit instead – but then a determined user could overwrite the &*EXIT handler! As I said, Raku gives users a lot of flexibility).
Unfortunately, the code above has a runtime cost and is fairly verbose. And, if you want to call &private from more locations, it would get even more verbose. So it is likely better to keep private functions in the same file the majority of the time – but this option exists for when the need arises.

What is the modern replacement to obsolete FORM subroutine in ABAP?

The ABAP documentation lists three kinds of modularization structures:
Methods. Problem: methods don't accept parameters.
Function modules. Problem: FMs belong to function groups and can be called from other programs. Apparently they are meant to be reused across the system.
Forms. Problem: are marked as "obsolete".
Is there a newer structure that replaces the obsolete FORM structure, that is:
Local to our program.
Accepts parameters.
Doesn't require ABAP Objects syntax?
Methods. Problem: methods don't accept parameters.
I am not sure how you came to that conclusion, because methods support parameters very well. The only limitation compared to FORMs is that they don't support TABLES parameters to take a TABLE WITH HEADER LINE. But they support CHANGING parameters with internal tables, which covers any case where you don't actually need the header-line. And in the rare case that you are indeed forced to deal with a TABLE WITH HEADER LINE and the method actually needs the header-line (I pity you), you can pass the header-line as a separate parameter.
You declare a method with parameters like this:
CLASS lcl_main DEFINITION.
  PUBLIC SECTION.
    METHODS foo
      IMPORTING iv_bar          TYPE i
      EXPORTING es_last_message TYPE bapiret2
      CHANGING  ct_all_messages TYPE bapiret2_t.
ENDCLASS.
And you call it either like that:
main->foo( EXPORTING iv_bar          = 1
           IMPORTING es_last_message = t_messages
           CHANGING  ct_all_messages = t_messages[] ).
or with the more classic syntax like that:
CALL METHOD main->foo
  EXPORTING iv_bar          = 1
  IMPORTING es_last_message = t_messages
  CHANGING  ct_all_messages = t_messages[].
Function modules. Problem: FMs belong to function groups and can be called from other programs. Apparently they are meant to be reused across the system.
Yes, function modules are supposed to be global while FORMs are supposed to be local (supposed to: you can actually call a FORM in another program with PERFORM formname IN PROGRAM programname).
But classes can be local or global, depending on how you created them. A global class can be used by any program in the system. So function groups can be substituted by global classes in most cases.
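For illustration, a sketch of that substitution (the class, method and function-module names are invented, and to_upper is assumed to declare a RETURNING parameter):
" Calling a method of a (hypothetical) global class instead of a function module
DATA lv_result TYPE string.
lv_result = zcl_string_tools=>to_upper( iv_text = 'demo' ).

" Roughly the classic equivalent using a (hypothetical) function module
CALL FUNCTION 'Z_STRING_TO_UPPER'
  EXPORTING
    iv_text   = 'demo'
  IMPORTING
    ev_result = lv_result.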
The one use-case where function modules cannot be substituted by methods of classes is RFC-enabled function modules. RFC is the Remote Function Call protocol which allows external systems to execute a function module in another system over the network. However, if you do need some other system to communicate with your SAP system, then you might want to consider using web services instead, which can be implemented in pure ABAP-OO. They also offer much better interoperability with non-SAP systems because they don't require a proprietary protocol.
Is there a newer structure that replaces the obsolete FORM structure, that [...] Doesn't require ABAP Objects syntax ?
Here is where you got a problem. ABAP Objects syntax is the way we are supposed to program ABAP now. There is currently a pretty hard push to forget all the non-OO ways to write ABAP and fully embrace the ABAP-OO styles of writing code. With every new release, more classic syntax which can be substituted by ABAP-OO syntax gets declared obsolete.
However, so far SAP follows the philosophy of 100% backward compatibility. While they might try their best to compel people not to use certain obsolete language constructs (including adding scary-sounding warnings to the syntax check), they very rarely actually remove any language features. They hardly can, because they themselves have tons of legacy code which uses them and which would be far too expensive and risky to rewrite. The only case I can think of when they actually removed language features was when they introduced Unicode, which made certain direct assignments between now incompatible types syntactically illegal.
You have some wrong information there. I don't know what system version you are on, but this info could help you out:
Methods: They do accept parameters (it would be crazy if they didn't). In fact, they accept IMPORTING, EXPORTING, CHANGING and RETURNING parameters.
Forms: Indeed they are obsolete, but in my opinion there is no risk in using them; almost every standard component relies on programs made of FORMs. FORMs are a core concept in ABAP programming. They are the "function" or "def" of many other languages. They accept USING, CHANGING and TABLES parameters.

How can I have a "private" Erlang module?

I prefer working with files that are less than 1000 lines long, so am thinking of breaking up some Erlang modules into more bite-sized pieces.
Is there a way of doing this without expanding the public API of my library?
What I mean is, any time there is a module, any user can do module:func_exported_from_the_module. The only way to really have something be private that I know of is to not export it from any module (and even then holes can be poked).
So if there is technically no way to accomplish what I'm looking for, is there a convention?
For example, there are no private methods in Python classes, but the convention is to use a leading _ in _my_private_method to mark it as private.
I accept that the answer may be, "no, you must have 4K LOC files."
The closest thing to a convention is to use EDoc tags, like @private and @hidden.
From the docs:
@hidden
Marks the function so that it will not appear in the
documentation (even if "private" documentation is generated). Useful
for debug/test functions, etc. The content can be used as a comment;
it is ignored by EDoc.
@private
Marks the function as private (i.e., not part of the public
interface), so that it will not appear in the normal documentation.
(If "private" documentation is generated, the function will be
included.) Only useful for exported functions, e.g. entry points for
spawn. (Non-exported functions are always "private".) The content can
be used as a comment; it is ignored by EDoc.
Please note that this answer started as a comment to @legoscia's answer.
Different visibilities for different methods is not currently supported.
The current convention, if you want to call it that, is to have one (or several) 'facade' my_lib.erl module(s) that export the public API of your library/application. Calling any internal module of the library is playing with fire and should be avoided (call them at your own risk).
There are some very nice features in the BEAM VM that rely on being able to call exported functions from any module, such as
Callbacks (funs/anonymous funs), MFA, erlang:apply/3: The calling code does not need to know anything about the library, just that it's something that needs to be called
Behaviours such as gen_server need the previous point to work
Hot reloading: You can upgrade the bytecode of any module without stopping the VM. The code server inside the VM maintains at most two versions of the bytecode for any module, redirecting external calls (those with the Module:) to the most recent version and the internal calls to the current version. That's why you may see some ?MODULE: calls in long-running servers, to be able to upgrade the code
You could argue that these points would still be achievable with more fine-grained, BEAM-oriented visibility levels, true. But I don't think that would solve anything that isn't already solved with facade modules, and it would complicate other parts of the VM/code a great deal.
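To make the hot-reloading point concrete, here is the usual fully qualified self-call pattern (module and message names are illustrative):
-module(worker).
-export([start/0, loop/1]).

start() -> spawn(fun() -> loop(0) end).

loop(N) ->
    receive
        upgrade -> ?MODULE:loop(N);   % external call: picks up the newest loaded version
        _Msg    -> loop(N + 1)        % local call: keeps running the current version
    end.
Note that loop/1 must be exported precisely so that the ?MODULE: call works – another reason the VM leans on exported functions being callable from anywhere.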
Bonus
Something similar applies to records and opaque types: records only exist at compile time, and opaque types only at Dialyzer time. Nothing stops you from accessing their internals anywhere, but you'll only find problems if you go that way:
You insert a new field in the record and suddenly all your {record_name, ...} = pattern matches break
You use a library that returns an opaque_adt(); you know it's a list and use it as such. The library is upgraded to also carry the size of the list, so now opaque_adt() is a tuple() and chaos ensues
Only the functions listed in the -export attribute are visible to other modules, i.e. "public" functions. All other functions are private. Only if you specify -compile(export_all) are all functions in the module visible outside, and using -compile(export_all) is not recommended.
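A small illustration of that (module and function names are made up):
-module(my_lib).
-export([public_fun/1]).    % only public_fun/1 is callable from other modules

public_fun(X) ->
    2 * private_helper(X).

private_helper(X) ->        % not exported: other modules cannot call my_lib:private_helper/1
    X + 1.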
I don't know of any existing convention for Erlang, but why not adopt the Python convention? Let's say that "library-private" functions are prefixed with an underscore. You'll need to quote function names with single quotes for that to work:
-module(bar).
-export(['_my_private_function'/0]).
'_my_private_function'() ->
    foo.
Then you can call it as:
> bar:'_my_private_function'().
foo
To me, that communicates clearly that I shouldn't be calling that function unless I know what I'm doing. (and probably not even then)

Class versioning

I'm looking for a clean way to make incremental updates to my code library, without breaking backwards compatibility. This could mean adding new members to classes, or changing existing members to provide additional functionality. Sometimes I am required to change a member in such a way that it would break existing code (e.g. renaming a method or changing its return type), so I'd rather not touch any of my existing types once they are shipped.
The way I currently set this up is through inheritance and polymorphism by creating a new class that extends the previous "version" of that class.
The way this works is by creating the appropriate version of StatusResult (e.g. StatusResultVersion3), based on the actual value of the ProtocolVersion property, and returning it as an instance of CommandResult.
Because .NET does not seem to have a concept of class versioning, I had to come up with my own: appending the version number to the end of the class name. This will no doubt make you cringe. I can easily imagine you scratching your eyes out after zooming in on the diagram. But it works. I can add new members and override existing members, without introducing any breaking changes.
Is there a better way to version my classes?
There are typically two approaches when considering existing code and assembly updates:
Regression Testing
This is a great approach for non-breaking changes, where you can simply overload functions to provide new parameters, etc. Visual Studio has some very advanced unit testing capabilities to make your regression testing relatively easy and automated.
Assembly Versions
If your changes are going to start breaking things, like rewriting the way some utility works, then it's time for a new assembly version. .NET is very good about working with assembly versions. You can deploy the versioned assemblies to different folders so that existing code can continue to reference the old version while new code can take advantage of the features in the new version.
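As a concrete illustration (the version numbers are arbitrary), the assembly version is what you bump for a breaking release, typically in AssemblyInfo.cs:
using System.Reflection;

// Bumped from 1.0.0.0 to 2.0.0.0 because this release contains breaking changes.
[assembly: AssemblyVersion("2.0.0.0")]
[assembly: AssemblyFileVersion("2.0.0.0")]
Old callers keep referencing the 1.x assembly deployed side by side, while recompiled callers bind to 2.x.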
The problem with interfaces is that once published they're largely set in stone. To quote Anders Hejlsburg:
... It's like adding a method to an interface. After you publish an interface, it is for all practical purposes immutable, because any implementation of it might have the methods that you want to add in the next version. So you've got to create a new interface instead.
So you can never just update an interface, you need to create a completely new one. Of course, you can have a single class implement both interfaces so your maintainability effort is fairly low compared with (say) polymorphic classes where your code will become spread out between multiple classes over time.
Multiple interfaces also allow you to remove methods in a way that classes do not (sure, you can deprecate them, but that can result in very noisy IntelliSense after a few iterations).
I personally lean towards having entirely stand-alone versions of the interface in each assembly version.
That is to say...
v 0.1.0.0
interface IExample
{
    String DoSomething();
}
v 0.2.0.0
interface IExample
{
    void DoSomethingElse();
}
How you implement them behind the scenes is up to you, but most likely it'll be the same classes with slightly different methods doing similar jobs (otherwise, why use the same interface?)
All the old code should be referencing 0.1.x.x and new code will reference 0.2.x.x. About the only issue is when you find (say) a security flaw and the fix needs to be back-ported to an earlier version. This is where a decent VCS comes in (Personal preference is TFS but SVN or anything else which supports branching/merging will do).
Merge the fixes from the 0.2 branch back into the 0.1 branch and then do a recompile to result in (say) 0.1.1.0.
As long as you stick to a process like this:
Major or Minor build will increment if there are any breaking changes (aka signatures will not change on Build/Revision increments)
Use publisher policies if the new Major/Minor version should be used by older programs (equivalent to guaranteeing nothing broke so use the new version anyway)
References in client apps should point at a Major/Minor version but not specify revision/build
This gives you:
A clean codebase without legacy clutter
Allows clients to use the latest version with no code changes if nothing has broken
Prevents clients using newer versions of an assembly which do have breaking changes until they recompile (and, one hopes, update their code as appropriate to take advantage of the new features.)
Allows you to release security patches for previous versions
The OP solved his problem as indicated by this comment:
In the end, I went with the interfaces idea because it allows me to keep multiple versions of a class member in a single class file. When I need to update the class, I'll just add the new interface, shadowing the member that has been changed, and change the return type on some of my methods. This works without breaking backwards compatibility because of polymorphism.
If this is mainly for serialization, this can be achieved in .NET using DataContractSerializer and the data contract attributes. They can deserialize different versions of a class into the same object, leaving any properties they can't map blank.
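A minimal sketch of that idea with DataContractSerializer (the type and member names are mine): members added in a later version are left optional, so payloads produced by the old version still deserialize.
using System.Runtime.Serialization;

[DataContract]
public class StatusResult
{
    [DataMember]
    public string Message { get; set; }

    // Added in a later version. IsRequired = false (the default) means an old payload
    // without this element still deserializes, leaving the property at its default value.
    [DataMember(IsRequired = false)]
    public int? ErrorCode { get; set; }
}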

How do you implement C#4's IDynamicObject interface?

To implement "method-missing"-semantics and such in C# 4.0, you have to implement IDynamicObject:
public interface IDynamicObject
{
    MetaObject GetMetaObject(Expression parameter);
}
As far as I can figure out IDynamicObject is actually part of the DLR, so it is not new. But I have not been able to find much documentation on it.
There are some very simple example implementations out there (e.g. here and here), but could anyone point me to more complete implementations or some real documentation?
Especially, how exactly are you supposed to handle the "parameter"-parameter?
The short answer is that the MetaObject is what's responsible for actually generating the code that will be run at the call site. The mechanism that it uses for this is LINQ expression trees, which have been enhanced in the DLR. So instead of starting with an object, it starts with an expression that represents the object, and ultimately it's going to need to return an expression tree that describes the action to be taken.
When playing with this, please remember that the version of System.Core in the CTP was taken from a snapshot at the end of August. It doesn't correspond very cleanly to any particular beta of IronPython. A number of changes have been made to the DLR since then.
Also, for compatibility with the CLR v2 System.Core, releases of IronPython starting with either beta 4 or beta 5 now rename everything that's in the System namespace to be in the Microsoft namespace instead.
If you want an end-to-end sample including source code, resulting in a dynamic object that stores values for arbitrary properties in a Dictionary, then my post "A first look at Duck Typing in C# 4.0" could be right for you. I wrote that post to show how a dynamic object can be cast to statically typed interfaces. It has a complete working implementation of a Duck that is an IDynamicObject and can act like an IQuack.
If you need more information contact me on my blog and I will help you along, as good as I can.
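For readers on the released .NET 4.0 rather than the CTP: the IDynamicObject surface shipped as IDynamicMetaObjectProvider in System.Dynamic, and the DynamicObject convenience base class hides the MetaObject/expression-tree plumbing. A rough sketch of the "store arbitrary properties in a Dictionary" idea against that shipped API (the class name is mine):
using System.Collections.Generic;
using System.Dynamic;

public class PropertyBag : DynamicObject
{
    private readonly Dictionary<string, object> values = new Dictionary<string, object>();

    // Called for reads of members that are not statically defined.
    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        return values.TryGetValue(binder.Name, out result);
    }

    // Called for writes to members that are not statically defined.
    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        values[binder.Name] = value;
        return true;
    }
}

// usage:
//   dynamic bag = new PropertyBag();
//   bag.Greeting = "hello";            // stored under the key "Greeting"
//   Console.WriteLine(bag.Greeting);   // prints "hello"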
I just blogged about how to do this here:
http://mikehadlow.blogspot.com/2008/10/dynamic-dispatch-in-c-40.html
Here is what I have figured out so far:
The Dynamic Language Runtime is currently maintained as part of the IronPython project. So that is the best place to go for information.
The easiest way to implement a class supporting IDynamicObject seems to be to derive from Microsoft.Scripting.Actions.Dynamic and override the relevant methods, for instance the Call-method to implement function call semantics. It looks like Microsoft.Scripting.Actions.Dynamic hasn't been included in the CTP, but the one from IronPython 2.0 looks like it will work.
I am still unclear on the exact meaning of the "parameter"-parameter, but it seems to provide context for the binding of the dynamic-object.
This presentation also provides a lot of information about the DLR:
Deep Dive: Dynamic Languages in Microsoft .NET by Jim Hugunin.