Related
Making subs/procedures available for reuse is one core function of modules, and I would argue that it is the fundamental way a language becomes composable and therefore efficient with programmer time:
if you create a type in your module, I can create my own module that adds a sub that operates on your type. I do not have to extend your module to do that.
# your module
class Foo {
    has $.id;
    has $.name;
}
# my module
sub foo-str(Foo:D $f) is export {
    return "[{$f.id}-{$f.name}]";
}
# someone else using yours and mine together for profit
my $f = Foo.new(:id(1234), :name("brclge"));
say foo-str($f);
As seen in Overloading operators for a class, this composability of modules works equally well for operators, which to me makes sense since operators are just some kind of syntactic sugar for subs anyway (in my head at least). Note that the definition of such an operator does not cause any surprising change in the behavior of existing code; you need to import it into your code explicitly to get access to it, just like the sub above.
Given this, I find it very odd that we do not have a similar mechanism for methods (see e.g. the discussion at How do you add a method to an existing class in Perl 6?), especially since Perl 6 is such a method-happy language. If I want to extend the usage of an existing type, I would want to do that in the same style as the original module was written in. If there is a .is-prime on Int, it must be possible for me to add a .is-semi-prime as well, right?
I read the discussion at the link above, but don't quite buy the "action at a distance" argument: how is that different from me exporting another multi sub from a module? For example, the Rust way of making this a lexical change (Trait + impl ... for) seems quite hygienic to me, and would be very much in line with the operator approach above.
More interesting (to me at least) than the technicalities is the question of language design: isn't the ability to provide new verbs (subs, operators, methods) for existing nouns (types) a core design goal for a language like Perl 6? If it is, why would it treat methods differently? And if it does treat them differently for a good reason, does that not mean we are using way too many non-composable methods where we should be using subs instead?
From a language design perspective, it all comes down to a simple question: which language are we speaking? In Perl 6, this is a question about which we always try to be very clear.
The notion of one's current language in Perl 6 is defined entirely in terms of lexical scope. Sub declarations are lexically scoped. When we import symbols from a module, including extra multi candidates, those are lexically scoped. When we perform language tweaks - such as introducing new operators - those are lexically scoped. Verbs in our current language - that is, subroutine calls - are those with a lexical definition. (Operators are simply sub calls with more interesting parsing.) Since lexical scopes are closed at the end of compile time, the compiler has a complete view of the current language. That's why sub calls to non-existent subs, or references to undeclared variables, are detected and reported at compile time, along with some basic compile-time type checking; future Perl 6 versions are likely to extend the set of compile-time checks that can be expected. The current language is the static, early-bound, part of Perl 6.
By contrast, a method call is a verb to be interpreted in the target object's language. This is the dynamic, late-bound, part of Perl 6. While the most immediate result of that is the typical polymorphism found in various forms in implementations of OO, thanks to meta-programming even the manner in which a verb is interpreted is up for grabs. For example, a monitor will acquire a lock while it interprets the verb and release it afterwards. Other objects might have been constructed based on things other than Perl 6 code, and so the interpretation of a verb doesn't mean invoking code written as a Perl 6 method. Or the code might be somewhere over the network. Who knows? Well, certainly not the caller, and that's the point, and the power, and the risk, of late binding.
The Perl 6 answer to "I want to extend the range of verbs I can use with this object in my current language" is very simple: use language features that relate to extending the current language! There's even a special syntax, $obj.&foo, that allows for a verb foo to be defined in the current language - by writing a sub - and then invoked much as if it's a method on the object. However, the small syntactic distinction makes it clear to the reader - and to the compiler - what is going on, and which language is getting to define that verb.
Through the use of augment it is possible to extend the language defined by some type of objects. However, it's rarely the best way to do things, given that it will have global effect, and also scatter the definition of the language of the object.
Much of what we do in programming is about building languages. By that I don't mean new syntax; most of our new languages - even in a language as open to mutation as Perl 6 - are just nouns and verbs defined using standard language features. However, in any non-trivial program, we can't keep every detail of every language in mind at once. When I go to the restaurant and order a schnitzel, I don't know how the order will be transported to the kitchen, what the kitchen looks like, whether the schnitzel is hammered out, breaded, and cooked on demand, or just served from a (hopefully not too stale) cache of prepared schnitzels. The kitchen and I have just enough shared meaning to make the right kind of thing happen, but I don't know how they'll precisely react to my request and they need not know what I'll do in the meantime. This kind of thinking is acknowledged by OO itself - at least when we fully embrace it - and at a larger scale by concepts such as bounded contexts, as found in Domain Driven Design.
In summary, Perl 6 tries to help us keep our languages straight: to know what is in our current language, and what we express with only limited understanding. That distinction is encoded by the sub/method distinction, which also turns out to be a sensible place to hang a static/dynamic distinction too.
The browser-based software StudyTRAX (http://wiki.studytrax.com), used for research data management, allows for custom form and form variable management via JavaScript. However, a StudyTRAX "variable" (essentially, a representation of both an element of a form [HTML properties included] and its corresponding parameter, with some data typing, etc.) must be referred to with #<varname>, while regular JavaScript variables are just <varname>.
Is this sort of thing done to make parsing easier, or is it just to distinguish between the two so that researchers who aren't so technologically-inclined won't have as much trouble figuring out what they're doing? Given the nature of JavaScript, I would think the StudyTRAX "variables" are just regular JavaScript objects defined in such a way to make form design and customization simpler, and thus the latter would make more sense, but am I wrong?
Also, I know that there are other programming languages that require specific variable prefixes (though I can't think of any off the top of my head at the moment); what is/was the usual reasoning for that choice in language design?
Two-part answer: StudyTRAX is almost certainly using a preprocessor to do some magic. JavaScript makes this relatively easy, but not as easy as a Lisp would. You still need to parse the code. By prefixing, the parser can ignore a lot of the complicated syntax of JavaScript and get to the good part without needing a "picture perfect" compiler. Actually, a lot of templating systems do this. It is an implementation of Lisp's quasi-quote (see Greenspun's Tenth Rule).
As for prefixes in general, the best way to understand them is to try to write a parser for a language without them. For very dynamic and pure languages like Lisp and JavaScript, where everything is a list / object, it is not too bad. When you get languages where methods are distinct from objects, or where functions are not first class, the parser starts having to ask itself what type of thing "foo" refers to. An annoying example from Ruby: an unprefixed identifier is either a local variable or a method implicitly called on self. In Rails there are a few functions that are implemented with method_missing. Person.find_first_by_rank works fine, but
class Person < ActiveRecord::Base
  def promotion(name)
    p = find_first_by_rank
    [...]
  end
end
gives an error because find_first_by_rank looks like it might be a local variable and Ruby is scared to call method_missing on something that might just be a misspelled local variable.
Now imagine trying to distinguish between instance variables (prefix @), class variables (prefix @@), global variables (prefix $), constants (first letter capitalized), method names, and local variables (no prefix, lowercase) by context alone.
(From a compiler & language hobbyist designer.)
Your question is more specific to the StudyTRAX software.
In the early days of programming, variables in BASIC used the $ sigil (as a suffix, e.g. "a$") to distinguish strings from numeric values. FORTRAN typed variables implicitly by their first letter: names starting with I through N were integers, the rest floats.
Transforming and then executing code is a complex task, which is why many language designers take shortcuts like adding prefixes or suffixes to variables.
Many colleges and universities have specialized courses ("Compilers", "Automata", "Language Design") on transforming code from a programming language into something the computer can run, because it is not an easy task.
Perl requires different variable prefixes, depending on the type of data:
$scalar = 4.2;
@array = (1, 4, 9, 16);
%map = ("foo" => 42, "bar" => 17, "baz" => 137);
As I understand it, this is so the reader can immediately identify what kind of object they're dealing with. It's not a matter of whether the reader is technologically inclined or not: if you reduce the programmer's cognitive load, he can use his brainpower for more important things than figuring out fiddly syntactic details.
Whether Perl's design is successful in this respect is another question, but I believe that's the reasoning behind the feature.
One of the big uses of code generation in C++ is to support message serialisation. Typically, you want to specify message contents and layout in a single step and produce, for that message type, code that gives you objects capable of being serialised to/from communication streams. In the past, this has usually resulted in code that looks like:
class MyMessage : public SerialisableObject
{
    // message members
    int myNumber_;
    std::string myString_;
    std::vector<MyOtherSerialisableObject> aBunchOfThingsIWantToSerialise_;

public:
    // ctor, dtor, accessors, mutators, then:
    virtual void Serialise(SerialisationStream & stream)
    {
        stream & myNumber_;
        stream & myString_;
        stream & aBunchOfThingsIWantToSerialise_;
    }
};
The problem with this kind of design is that it violates an important rule of good architecture: you should not have to specify the intent of a design twice. Duplication of intent, like duplicated code and other common kinds of duplication, leaves room for one place in the code to diverge from the other, causing errors.
In the above, the duplication is the list of members. Potential errors include adding a member to the class but forgetting to add it to the serialisation list, serialising a member twice (perhaps by not following the member declaration order, or by misspelling a similarly named member, among other ways), or serialising something that is not a member at all (which might produce a compiler error, unless name lookup finds something in a different scope that matches). That kind of mistake is the same reason we no longer try to match every heap allocation with a delete (instead using smart pointers) or every file open with a close (using RAII ctor/dtor mechanisms): we don't want to have to match up our intent in multiple places, because there are times we - or another engineer less familiar with the intent - make mistakes.
Generally, therefore, this has been one of the things that code generation could take care of. You might create a file MyMessage.cg to specify both layout and members in one step:
serialisable MyMessage
{
    int myNumber_;
    std::string myString_;
    std::vector<MyOtherSerialisableObject> aBunchOfThingsIWantToSerialise_;
};
that would be run through a code generation utility and produce the code.
I was wondering whether it is yet possible to do this in C++0x without external code generation. Are there any new language mechanisms that make it possible to specify a class as serialisable once, so that the names and layout of its members are used to lay out the message during serialisation?
To be clear, I know that there are tricks with Boost tuples and Fusion that can come close to this kind of behavior even in the pre-C++0x language. Those usages, though, being based on indexing into the tuple rather than on by-member-name access, have all been brittle to changes in layout, as other places in the code that access the messages would then also need to be reordered. Some kind of by-member-name access is necessary to avoid duplicating the layout specification in the places that use the messages.
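To illustrate the brittleness with a contrived sketch (the names here are invented, not from any particular library): with index-based access, every use site encodes the layout.
#include <string>
#include <tuple>

using MessageTuple = std::tuple<int, std::string>;   // (id, name)

int idOf(const MessageTuple & m)
{
    // If the tuple is ever reordered, this either fails to compile or,
    // worse, quietly reads a different field that happens to have the same type.
    return std::get<0>(m);
}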
Also, I know it might be nice to take this up to the next level and ask for specifying when some of the members shouldn't be serialised. Other languages that offer serialisation built in often offer some kind of attribute to do this, so
int myNonSerialisedNumber_ [[noserialise]];
might seem natural. However, I personally think it is bad design to have serialisable objects where not everything is serialised, since the lifetime of messages is in the transport to/from the communications layer, separate from other data lifetimes. Also, you could have an object that holds a purely serialisable object as one of its members, so such functionality doesn't buy anything the language doesn't already offer.
Is this possible? Or did the standards committee leave out this kind of introspective capability? I don't need it to look like the code-gen file above - any simple method for compile-time specification of layout and members in a single step would solve this common problem.
This is both possible and practical in C++11 – in fact it was possible back in C++03, the syntax was just a little too unwieldy. I wrote a small library based around the same idea - see the following:
www.github.com/molw5/framework
Sample syntax:
class Object : serializable <Object,
    value <NAME("Field 1"), int>,
    value <NAME("Field 2"), float>,
    value <NAME("Field 3"), double>>
{
};
Most of the underlying code could be reproduced, in principle, in C++03 – some of the implementation details without variadic templates would have been... tricky, but I believe it would have been possible to recover the core functionality. What you could not reproduce in C++03 is the NAME macro above, and the syntax relies fairly heavily on it. The macro provides the machinery necessary to generate a unique typename from a string; that is, the following:
NAME("Field 1")
expands to
type_string <'F', 'i', 'e', 'l', 'd', ' ', '1'>
through the use of some common macros and constexpr (for character extraction). Back in C++03 something similar to the type_string above would need to be entered manually.
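For the curious, here is a minimal sketch of the general technique - my own simplification for illustration, not the library's actual implementation (char_at, NAME_CHAR, and the 8-character limit are inventions of this sketch):
#include <cstddef>

// A compile-time "string" encoded as a pack of chars.
template <char... Cs>
struct type_string
{
    static constexpr char value[sizeof...(Cs) + 1] = {Cs..., '\0'};
};

template <char... Cs>
constexpr char type_string<Cs...>::value[];

// constexpr character extraction: the i-th character of a literal,
// or '\0' once we run past its end.
constexpr char char_at(const char * s, std::size_t len, std::size_t i)
{
    return i < len ? s[i] : '\0';
}

// A toy NAME macro limited to 8 characters; a real implementation expands to
// a longer, macro-generated list and strips the trailing '\0' padding with a
// little more metaprogramming.
#define NAME_CHAR(s, i) char_at(s, sizeof(s) - 1, i)
#define NAME(s) type_string< \
    NAME_CHAR(s, 0), NAME_CHAR(s, 1), NAME_CHAR(s, 2), NAME_CHAR(s, 3), \
    NAME_CHAR(s, 4), NAME_CHAR(s, 5), NAME_CHAR(s, 6), NAME_CHAR(s, 7)>

// NAME("Field 1") then names (roughly) type_string<'F','i','e','l','d',' ','1','\0'>.
using field1 = NAME("Field 1");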
C++, of any form, supports neither introspection nor reflection (to the extent that they are different).
One nice thing about doing serialization manually (i.e. without introspection or reflection) is that you can provide object versioning. You can support older forms of the serialization and simply create reasonable defaults for the data that wasn't in the old versions. Or, if a new version removes some data, you can simply deserialize it and discard it.
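As a rough sketch of what that versioning can look like in practice (the InStream type below is a toy, text-based stand-in I made up for the example, not a real serialization stream):
#include <cstdint>
#include <iostream>
#include <sstream>
#include <string>

// A toy input stream, just enough to demonstrate the idea; a real
// serialization stream would be binary and handle strings properly.
struct InStream
{
    std::istream & in;
    template <typename T>
    InStream & operator&(T & value) { in >> value; return *this; }
};

class MyMessage
{
public:
    std::uint32_t version = 2;          // bumped whenever the wire format changes
    int myNumber = 0;
    std::string myString;
    std::string addedInV2 = "default";  // absent from version-1 payloads

    void Deserialise(InStream & stream)
    {
        stream & version;
        stream & myNumber;
        stream & myString;
        if (version >= 2)               // older payloads keep the default
            stream & addedInV2;
    }
};

int main()
{
    std::istringstream v1payload("1 42 hello");   // a version-1 message
    InStream stream{v1payload};
    MyMessage m;
    m.Deserialise(stream);
    std::cout << m.myString << " / " << m.addedInV2 << "\n";  // prints "hello / default"
}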
It seems to me that what you need is Boost.Serialization.
Do you have any input on how to organize and name utility classes?
Whenever I run into some code duplication, even if it's just a couple of lines, I move it to a utility class.
After a while, I tend to get a lot of small static classes, usually with only one method, which I usually put in a utility namespace that gets bloated with classes.
Examples:
ParseCommaSeparatedIntegersFromString( string )
CreateCommaSeparatedStringFromIntegers( int[] )
CleanHtmlTags( string )
GetListOfIdsFromCollectionOfX( CollectionX )
CompressByteData( byte[] )
Usually, naming conventions tell you to name your class as a noun. I often end up with a lot of classes like HtmlHelper and CompressHelper, but they aren't very informative. I've also tried being really specific, like HtmlTagCleaner, which usually ends up with one class per utility method.
Have you any ideas on how to name and group these helper methods?
I believe there is a continuum of complexity, with correspondingly different organizations. Examples follow; choose depending on the complexity of your project and your utilities, and adapt to other constraints:
One class (called Helper), with a few methods
One package (called helper), with a few classes (called XXXHelper), each class with a few methods.
Alternatively, the classes may be split in several non-helper packages if they fit.
One project (called helper), with a few packages (called XXX), each package with ...
Alternatively, the packages can be split in several non-helper packages if they fit.
Several helper projects (split by tier, by library in use or otherwise)...
At each grouping level (package, class):
the common part of the meaning becomes the name of the grouping
the code inside no longer needs to carry that meaning (so names are shorter and more focused, don't need abbreviations, and can use full words).
For projects, I usually repeat the common meaning in a superpackage name. Although not my preferred choice in theory, I don't see in my IDE (Eclipse) from which project a class is imported, so I need the information repeated. The project is actually only used as:
a shipping unit: some deliverables or products will include the jar, those that don't need it won't;
a way to express dependencies: for example, a business project has no dependency on web-tier helpers; having expressed that in the project dependencies, we made an improvement in apparent complexity, good for us; or, on finding such a dependency, we know something is wrong and start to investigate; also, by reducing the dependencies, we may speed up compilation and building;
a way to categorize the code, to find it faster: only relevant when the project is huge, I'm talking about thousands of classes.
Please note that all the above applies to dynamic methods as well, not only static ones.
This is actually our good practice for all our code.
Now that I've tried to answer your question (although in a broad way), let me add another thought (I know you didn't ask for it).
Static methods (except those using static class members) work without context; all data has to be passed as parameters. We all know that, in OO code, this is not the preferred way. In theory, we should look for the object most relevant to the method and move the method onto that object. Remember that code sharing doesn't have to be static, it only has to be public (or otherwise visible).
Examples of where to move a static method :
If there is only one parameter, to that parameter.
If there are several parameters, choose between moving the method onto:
the parameter that is used most: the one with several fields or methods used, or the one used in conditionals (ideally, some conditionals would be removed by subclasses overriding the method);
an existing object that already has good access to several of the parameters;
a new class built for that need.
Although this method-moving may seem overly OO-purist, we find it actually helps us in the long run (and it proves invaluable when we want to subclass, to alter an algorithm). Eclipse moves a method in less than a minute (with all verifications), and we gain much more than a minute when we look for some code, or when we avoid re-coding a method that already exists.
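As a small sketch of the kind of move being described (Account, balance, and isOverdrawn are invented names, not from the question), the rule moves from a context-free helper onto the one parameter it operates on:
class Account
{
public:
    double balance() const { return balance_; }

    // After the move: the behaviour lives on the object most relevant to it,
    // and a subclass can override it to alter the algorithm.
    virtual bool isOverdrawn() const { return balance_ < 0.0; }

    virtual ~Account() = default;

private:
    double balance_ = 0.0;
};

// Before the move, the same rule would have lived in a helper with no context:
namespace AccountHelper
{
    inline bool isOverdrawn(const Account & account)
    {
        return account.balance() < 0.0;
    }
}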
Limitations: some classes can't be extended, usually because they are out of our control (JDK, libraries...). I believe this is the real justification for helpers: when you need to put a method on a class that you can't change.
Our good practice then is to name the helper after the class we would like to extend, with a Helper suffix (StringHelper, DateHelper). This close match between the class where we would like the code to live and the helper lets us find those methods in a few seconds, even without knowing whether someone else in our project has already written them.
The Helper suffix is a good convention, since it is used in other languages (at least in Java; IIRC Rails uses it too).
The intent of your helper should be conveyed by the method name, with the class used only as a placeholder. For example, ParseCommaSeparatedIntegersFromString is a bad name for a couple of reasons:
too long, really
it is redundant: in a statically typed language you can drop the FromString suffix, since it is deduced from the signature
What do you think about:
CSVHelper.parse(String)
CSVHelper.create(int[])
HTMLHelper.clean(String)
...
I often find myself needing a reference to an object that is several objects away, or so it seems. The options I see are passing the reference through a middle-man or just making something available statically. I understand the danger of global scope, but passing a reference through an object that does nothing with it feels ridiculous. I'm okay with a little bit of passing around, I suppose. I suspect there's a line to be drawn somewhere.
Does anyone have insight on where to draw this line?
Or a good way to deal with the problem of distributing references amongst dependent objects?
Use the Law of Demeter (with moderation and good taste, not dogmatically). If you're coding a.b.c.d.e, something IS wrong -- you've nailed forevermore the implementation of a to have a b which has a c which... EEP!-) One or at the most two dots is the maximum you should be using. But the alternative is NOT to plump things into globals (and ensure thread-unsafe, buggy, hard-to-maintain code!); it is to have each object "surface" those characteristics it is designed to maintain as part of its interface to clients going forward, instead of just letting poor clients go through such unending chains of nested refs!
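A tiny sketch of that "surfacing" idea (the Order/Customer/Address names are hypothetical): instead of clients writing order.customer().address().city(), the Order exposes what it is designed to provide.
#include <string>

class Address
{
public:
    std::string city() const { return city_; }
private:
    std::string city_;
};

class Customer
{
public:
    const Address & address() const { return address_; }
private:
    Address address_;
};

class Order
{
public:
    // Surfaced characteristic: callers no longer chain through the internals.
    std::string shippingCity() const { return customer_.address().city(); }
private:
    Customer customer_;
};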
This smells of an abstraction that may need some improvement. You seem to be violating the Law of Demeter.
In some cases a global isn't too bad.
Consider: you're probably programming against an operating system's API. That's full of globals; you can access a file or the registry, write to the console, look up a window handle. You can do loads of stuff to access state that is global across the whole computer, or even across the internet... and you don't have to pass a single reference to your class to access it. All this stuff is global if you access it through the OS's API.
So, when you consider the number of global things that often exist, a global in your own program probably isn't as bad as many people try and make out and scream about.
However, if you want to have very nice OO code that is all unit-testable, I suppose you should be writing wrapper classes around any access to globals, whether they come from the OS or are declared yourself, to encapsulate them. This means your class that uses this global state can get references to the wrappers, and they can be replaced with fakes.
Hmm, anyway. I'm not quite sure what advice I'm trying to give here, other than say, structuring code is all a balance! And, how to do it for your particular problem depends on your preferences, preferences of people who will use the code, how you're feeling on the day on the academic to pragmatic scale, how big the code base is, how safety critical the system is and how far off the deadline for completion is.
I believe your question is revealing something about your classes. Maybe the responsibilities could be improved? Maybe moving some code would solve problems?
Tell, don't ask.
That's how it was explained to me. There is a natural tendency to call classes to obtain some data. Taken too far, asking too much typically leads to heavy "getter sequences". But there is another way. I must admit it is not easy to find, but it improves gradually in a given code base and in the coder's habits.
Class A wants to perform a calculation and asks for B's data. Sometimes, it is more appropriate for A to tell B to do the job, possibly passing some parameters. This could replace B's getName(), used by A to check the validity of the name, with an isValid() method on B.
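A minimal before/after sketch of that replacement (the class and method names are made up for illustration):
#include <string>

// Before: A "asks" B for its data and applies the rule itself.
class PersonData
{
public:
    std::string getName() const { return name_; }
private:
    std::string name_;
};

bool acceptBefore(const PersonData & b)
{
    return !b.getName().empty();   // the validity rule lives in the caller
}

// After: A "tells" B to answer the question; the rule lives with the data.
class Person
{
public:
    bool isValid() const { return !name_.empty(); }
private:
    std::string name_;
};

bool acceptAfter(const Person & b)
{
    return b.isValid();
}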
"Asking" has been replaced by "telling" (calling a method that executes the computation).
For me, this is the question I ask myself when I find too many getter calls. Gradually, the methods find their place in the correct objects, and everything gets a bit simpler: I have fewer getters and fewer calls to them. I have less code, and it carries more meaning, better aligned with the functional requirements.
Move the data around
There are other cases where I move some data. For example, if a field moves two objects up, the length of the "getter chain" is reduced by two.
I believe nobody can find the correct model at first.
I first think about it (hand-drawn diagrams are quick and a big help), then code it, then think again when facing the real thing... Then I code the rest, and whenever I smell something in the code, I think again...
Split and merge objects
If a method on A needs data from C, with B as a middle man, I check whether A and C have something in common. Possibly, A or a part of A could become C (a possible splitting of A, or a merging of A and C)...
However, there are cases where I keep the getters of course.
But it's less likely a long chain will be created.
A long chain will probably get broken by one of the techniques above.
I have three patterns for this:
Pass the necessary reference to the object's constructor -- the reference can then be stored as a data member of the object, and doesn't need to be passed again; this implies that the object's factory has the necessary reference. For example, when I'm creating a DOM, I pass the element name to the DOM node when I construct the DOM node.
Let things remember their parent, and get references to properties via their parent; this implies that the parent or ancestor has the necessary property. For example, when I'm creating a DOM, there are various things which are stored as properties of the top-level DomDocument ancestor, and its child nodes can access those properties via the reference which each one has to its parent.
Put all the different things which are passed around as references into a single class, and then pass around just that one class instance as the only thing that's passed around. For example, there are many properties required to render a DOM (e.g. the GDI graphics handle, the viewport coordinates, callback events, etc.) ... I put all of these things into a single 'Context' instance which is passed as the only parameter to the methods of the DOM nodes to be rendered, and each method can get whichever properties it needs out of that context parameter.
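A compact sketch of the third pattern (RenderContext and DomNode are hypothetical names): all rendering inputs travel in one context object, and each method pulls out what it needs.
#include <string>
#include <vector>

// Everything a render pass needs, gathered into one object
// instead of a long parameter list.
struct RenderContext
{
    void *      deviceHandle;   // e.g. a GDI device context
    int         viewportX, viewportY, viewportWidth, viewportHeight;
    std::string defaultFont;
};

class DomNode
{
public:
    virtual ~DomNode() = default;

    // Each node reads whatever it needs from the single context argument.
    virtual void Render(const RenderContext & ctx) const
    {
        for (const DomNode * child : children_)
            child->Render(ctx);
    }

protected:
    std::vector<const DomNode *> children_;
};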