?- assertz(:- module(foo1, [f/1])).
true.
?- foo1:assertz(f(1)).
true.
?- foo1:f(1).
true.
?- foo2:f(1).
Correct to: "foo1:f(1)"? no
ERROR: Undefined procedure: foo2:f/1
ERROR: In:
ERROR: [8] foo2:f(1)
ERROR: [7] <user>
Makes sense to me. But then (from scratch)....
?- assertz(:- module(foo1, [f/1])).
true.
?- assertz(f(1)).
true.
?- foo1:f(1).
true.
?- foo2:f(1).
true. # Wait, what? foo2 doesn't appear in my program. Should fail?
?- frobnoz:f(1).
true. # Also odd!
But then...
?- foo2:assertz(f(1)).
true.
?- foo2:f(1).
true.
?- frobnoz:f(1).
ERROR: Undefined procedure: frobnoz:f/1
How does f get added to foo2 when I don't mention foo2?
Why does frobnoz:f succeed in the second example, but fail in the third?
What are modules? I thought they were namespaces, but am now confused.
1st question:
How does f get added to foo2 when I don't mention foo2?
From the SWI-Prolog Manual, module autoload section:
SWI-Prolog by default supports autoloading from its standard library. Autoloading implies that when a predicate is found missing during execution, the library is searched and the predicate is imported lazily using use_module/2.
You can go deeper but, basically, when a predicate is not explicitly defined, Prolog searches for it and silently loads it. That is the default behavior. You can change it with the autoload flag.
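For example, checking and then disabling that flag at the top level (a minimal sketch; the exact output may vary between SWI-Prolog versions):
?- current_prolog_flag(autoload, F).
F = true.
?- set_prolog_flag(autoload, false).
true.
With autoloading disabled, calling a missing library predicate raises an existence error (subject to the unknown flag) instead of the library being loaded silently.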
2nd question:
Why does frobnoz:f succeed in the second example, but fail in the third?
Probably frobnoz:f can be resolved as a dependency coming from the foo1 module, which you are not referencing in the third example.
3rd question:
What are modules? I thought they were namespaces, but am now confused.
As the SWI-Prolog Reference Manual reads:
A Prolog module is a collection of predicates which defines a public interface by means of a set of provided predicates and operators. Prolog modules are defined by an ISO standard. Unfortunately, the standard is considered a failure and, as far as we are aware, not implemented by any concrete Prolog implementation. The SWI-Prolog module system syntax is derived from the Quintus Prolog module system. The Quintus module system has been the starting point for the module systems of a number of mainstream Prolog systems, such as SICStus, Ciao and YAP. The underlying primitives of the SWI-Prolog module system differ from the mentioned systems. These primitives allow for multiple modules in a file, hierarchical modules, emulation of other modules interfaces, etc. (source)
In classic Prolog systems, all predicates are organised in a single namespace and any predicate can call any predicate. [...]
A Prolog module encapsulates a set of predicates and defines an interface. Modules can import other modules, which makes the dependencies explicit. Given explicit dependencies and a well-defined interface, it becomes much easier to change the internal organisation of a module without breaking the overall application. (source)
Typically, the name of a module is the same as the name of the file by which it is defined without the filename extension, but this naming is not enforced. Modules are organised in a single and flat namespace and therefore module names must be chosen with some care to avoid conflicts. As we will see, typical applications of the module system rarely use the name of a module explicitly in the source text. (source)
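For comparison, the usual file-based workflow those passages describe looks like this (a minimal sketch; the file and predicate names are illustrative):
% foo1.pl
:- module(foo1, [f/1]).
f(1).

% then, at the top level or in another module:
?- use_module(foo1).
true.
?- f(X).          % f/1 was imported into the calling module
X = 1.
?- foo1:f(X).     % explicit qualification also works
X = 1.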
Consider a neon register such as:
uint16x8_t foo;
To access an individual lane, one is supposed to use vgetq_lane_u16(foo, 3). However, one might be tempted to write foo[3] instead, given the intuition that foo is an array of shorts. When doing so, gcc (10) compiles it without warnings, but it is not clear that it does what was intended.
The gcc documentation does not specifically mention the indexed access, but it says that operations behave like C++ valarrays. Those do support indexed access in the intuitive way.
Regardless of what foo[3] evaluates to, doing so seems to be faster than vgetq_lane_u16(foo, 3), so probably they're different or we wouldn't need both.
So what exactly does foo[3] mean? Is its behavior defined at all? If not, why does gcc happily compile it?
The foo[3] form is the GCC Vector extension form, as you have found and linked documentation for; it behaves as follows:
Vectors can be subscripted as if the vector were an array with the same number of elements and base type. Out of bound accesses invoke undefined behavior at run time. Warnings for out of bound accesses for vector subscription can be enabled with -Warray-bounds.
This can have surprising results when used on big-endian systems, so Arm’s Arm C Language Extensions recommend using vget_lane if you are using other Neon intrinsics like vld1 in the same code path.
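To make the comparison concrete, here is a small sketch (assuming GCC or Clang targeting a little-endian Arm system with Neon available):
#include <arm_neon.h>
#include <stdio.h>

int main(void) {
    uint16_t data[8] = {10, 11, 12, 13, 14, 15, 16, 17};
    uint16x8_t foo = vld1q_u16(data);

    uint16_t a = foo[3];                  /* GCC vector-extension subscript */
    uint16_t b = vgetq_lane_u16(foo, 3);  /* Neon intrinsic lane access */

    printf("%u %u\n", a, b);              /* both print 13 on little-endian targets */
    return 0;
}
On big-endian targets the two forms may disagree, which is exactly the surprise the recommendation above is about.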
I've got an embarrassingly simple question here. I'm a Smalltalk newbie (I attempt to dabble with it every 5 years or so), and I've got Pharo 6.1 running. How do I go about finding the official standard library documentation? Especially for the compiler class? Things like the compile and evaluate methods? I don't see how to perform a search with the Help Browser, and the method comments in the compiler class are fairly terse and cryptic. I also don't see an obvious link to the standard library API documentation at: http://pharo.org/documentation. The books "Pharo by Example" and "Deep into Pharo" don't appear to cover that class either. I imagine the class is probably similar for Squeak and other Smalltalks, so a link to their documentation for the compiler class could be helpful as well.
Thanks!
There are several classes that collaborate in the compilation of a method (or expression) and, given your interest in the subject, I'm tempted to stimulate you even further in their study and understanding.
Generally speaking, the main classes are the Scanner, the Parser, the Compiler and the Encoder. Depending on the dialect these may have slightly different names and implementations but the central idea remains the same.
The Scanner parses the stream of characters of the source code and produces a stream of tokens. These tokens are then parsed by the Parser, which transforms them into the nodes of the AST (Abstract Syntax Tree). Then the Compiler visits the nodes of the AST to analyze them semantically. Here all variable nodes are classified: method arguments, method temporaries, shared, block arguments, block temporaries, etc. It is during this analysis where all variables get bound in their corresponding scope. At this point the AST is no longer "abstract" as it has been annotated with binding information. Finally, the nodes are revisited to generate the literal frame and bytecodes of the compiled method.
Of course, there are lots of things I'm omitting from this summary (pragmas, block closures, etc.) but with these basic ideas in mind you should now be ready to debug a very simple example. For instance, start with
Object compile: 'm ^3'
to internalize the process.
After some stepping into and over, you will reach the first interesting piece of code, which is the method OpalCompiler >> #compile. If we remove the error handling blocks this method speaks for itself:
compile
    | cm |
    ast := self parse.
    self doSemanticAnalysis.
    self callPlugins.
    cm := ast generate: self compilationContext compiledMethodTrailer.
    ^cm
First we have the #parse message where the parse nodes are created. Then we have the semantic analysis I mentioned above, and finally #generate: produces the encoding. You should debug each of these methods to understand the compilation process in depth. Given that you are dealing with a tree, be prepared to navigate through a lot of visitors.
Once you become familiar with the main ideas you may want to try more elaborate, yet simple, examples to see other objects entering the scene.
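For instance, two small expressions to evaluate and step through (Pharo-specific; Smalltalk compiler answers the Opal compiler facade, and the selector #double: is made up for the example):
Object compile: 'double: x ^ x * 2'.
Smalltalk compiler evaluate: '3 double: 4'.   "=> 8, exercising the parse / analyse / generate pipeline described above"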
Here are some simple facts:
Evaluation in Smalltalk is available everywhere: in workspaces, in the Transcript, in Browsers, inspectors, the debugger, etc. Basically, if you are allowed to edit text, most likely you will also be allowed to evaluate it.
There are 4 evaluation commands:
Do it (evaluates without showing the answer)
Print it (evaluates and prints the answer next to the expression)
Inspect it (evaluates and opens an inspector on the result)
Debug it (opens a debugger so you can evaluate your expression step by step).
Your expression can contain any literal (numbers, arrays, strings, characters, etc.)
17 "valid expression"
Your expression can contain any message.
3 + 4.
'Hello world' size.
1 bitShift: 28
Your expression can use any Global variable
Object new.
Smalltalk compiler
Your expression can reference self, super, true, nil, false.
SharedRandom globalGenerator next < 0.2 ifTrue: [nil] ifFalse: [self]
Your expression can use any variables declared in the context of the pane where you are writing. For example:
If you are writing in a class browser, self will be bound to the current class
If you are writing in an inspector, self is bound to the object under inspection. You can also use its instance variables in the expression.
If you are in the debugger, your expression can reference self, the instance variables, message arguments, temporaries, etc.
Finally, if you are in a workspace (a.k.a. Playground), you can use any temporaries there, which will be automatically created and remembered, without you having to declare them.
As near as I can tell, there is no API documentation for the Pharo standard library, like you find with other programming languages. This seems to be confirmed on the Pharo User's mailing list: http://forum.world.st/Essential-Documentation-td4916861.html
...there is a draft version of the ANSI standard available: http://wiki.squeak.org/squeak/uploads/172/standard_v1_9-indexed.pdf
...but that doesn't seem to cover the compiler class.
I've read the spec but I'm still confused about how my class differs from [our] class. What are the differences and when should I use which?
The my scope declarator implies lexical scoping: following its declaration, the symbol is visible to the code within the current set of curly braces. We thus tend to call the region within a pair of curly braces a "lexical scope". For example:
sub foo($p) {
    # say $var; # Would be a compile time error, it's not declared yet
    my $var = 1;
    if $p {
        $var += 41; # Inner scope, $var is visible
    }
    return $var; # Same scope that it was declared in, $var is visible
}
# say $var; # $var is no longer available, the scope ended
Since the variable's visibility is directly associated with its location in the code, lexical scope is really helpful in being able to reason about programs. This is true for:
The programmer (both for their own reasoning about the program, but also because more errors can be detected and reported when things have lexical scope)
The compiler (lexical scoping permits easier and better optimization)
Tools such as IDEs (analyzing and reasoning about things with lexical scope is vastly more tractable)
Early on in the design process of the language that would become Raku, subroutines did not default to having lexical scope (and had our scope like in Perl), however it was realized that lexical scope is a better default. Making subroutine calls always try to resolve a symbol with lexical scope meant it was possible to report undeclared subroutines at compile time. Furthermore, the set of symbols in lexical scope is fixed at compile time, and in the case of declarative constructs like subroutines, the routine is bound to that symbol in a readonly manner. This also allows things like compile-time resolution of multiple dispatch, compile-time argument checking, and so forth. It is likely that future versions of the Raku language will specify an increasing number of compile-time checks on lexically scoped program elements.
So if lexical scoping is so good, why does our (also known as package) scope exist? In short, because:
Sometimes we want to share things more widely than within a given lexical scope. We could just declare everything lexical and then mark things we want to share with is export, but..
Once we get to the point of using a lot of different libraries, having everything try to export things into the single lexical scope of the consumer would likely lead to a lot of conflicts
Packages allow namespacing of symbols. For example, if I want to use the Cro clients for both HTTP and WebSockets in the same code, I can happily use both, and refer to them as Cro::HTTP::Client and Cro::WebSocket::Client respectively.
Packages are introduced by package declarators, such as class, module, grammar, and (with caveats) role. An our declaration will make an installation in the enclosing package construct.
These packages ultimately exist within a top-level package named GLOBAL - which is fitting, since they are effectively globally visible. If we declare an our-scoped variable, it is thus a global variable (albeit hopefully a namespaced one), about which enough has been written that we know we should pause for thought and wonder if a global variable is the best API decision (because, ultimately, everything that ends up visible via GLOBAL is an API decision).
Where things do get a bit blurry, however, is that we can have lexical packages. These are packages that do not get installed in GLOBAL. I find these extremely useful when doing OO programming. For example, I might have:
# This class that ends up in GLOBAL...
class Cro::HTTP::Client {
    # Lexically scoped classes, which are marked `my` and thus hidden
    # implementation details. This means I can refactor them however I
    # want, and never have to worry about downstream fallout!
    my class HTTP1Pipeline {
        # Implementation...
    }
    my class HTTP2Pipeline {
        # Implementation...
    }

    # Implementation...
}
Lexical packages can also be nested and contain our-scoped variables; however, these don't end up being globally visible (unless we somehow choose to leak them out).
Different Raku program elements have been ascribed a default scope:
Subroutines default to lexical (my) scope
Methods default to has scope (only visible through a method dispatch)
Type (class, role, grammar, subset) and module declarations default to package (our) scope
Constants and enumerations default to package (our) scope
Effectively, things that are most often there to be shared default to package scope, and the rest do not. (Variables do force us to pick a scope explicitly, however the most common choice is also the shortest one to type.)
Personally, I'm hesitant to make a thing more visible than the language defaults, however I'll often make them less visible (for example, my on constants that are for internal use, and on classes that I'm using to structure implementation details). When I could do something by exposing an our-scoped variable in a globally visible package, I'll still often prefer to make it my-scoped and provide a sub (exported) or method (visible by virtue of being on a package-scoped class) to control access to it, to buy myself some flexibility in the future. I figure it's OK to make wrong choices now if I've given myself space to make them righter in the future without inconveniencing anyone. :-)
In summary:
Use my scope for everything that's an implementation detail
Also use my scope for things that you plan to export, but remember exporting puts symbols into the single lexical scope of the consumer and risks name clashes, so be thoughtful about exporting particularly generic names
Use our for things that are there to be shared, and when it's desired to use namespacing to avoid clashes
The elements we'd most want to share default to our scope anyway, so explicitly writing our should give pause for thought
As with variables, my binds a name lexically, whereas our additionally creates an entry in the surrounding package.
module M {
    our class Foo {}
    class Bar {} # same as above, really
    my class Baz {}
}
say M::Foo; # ok
say M::Bar; # still ok
say M::Baz; # BOOM!
Use my for classes internal to your module. You can of course still make such local symbols available to importing code by marking them is export.
The my vs our distinction is mainly relevant when generating the symbol table. For example:
my $a;            # Create symbol <$a> at top level
package Foo {     # Create symbol <Foo> at top level
    my $b;        # Create symbol <$b> in Foo scope
    our $c;       # Create symbol <$c> in Foo scope
}                 # and <Foo::<$c>> at top level
In practice this means that anything that is our scoped is readily shared to the outside world by prefixing the package identifier ($Foo::c or Foo::<$c> are synonymous), and anything that is my scoped is not readily available — although you can certainly provide access to it via, e.g., getter subs.
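A runnable sketch of that last point (the names are illustrative):
package Foo {
    my  $b = 'lexical only';
    our $c = 'package scoped';
    our sub get-b() { $b }   # getter providing controlled access to the my-scoped variable
}
say $Foo::c;       # package scoped
say Foo::<$c>;     # package scoped (same symbol, via symbol-table lookup)
say Foo::get-b();  # lexical only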
Most of the time you'll want to use my. Most variables just belong to their current scope, and no one has any business peeking in. But our can be useful in some cases:
constants that don't pollute the symbol table (this is why, actually, using constant implies an our scope). So you can make more C-style enums/constants by using package Colors { constant red = 1; constant blue = 2; } and then referencing them as Colors::red
classes or subs that should be accessible but needn't be exported (or shouldn't be, because their symbols would overlap with builtins or other modules). Exporting symbols can be great, but sometimes it's also nice to have the package/module namespace to remind you what the stuff goes with. As such, it's also a nice way to manage options at runtime via subs: CoolModule::set-preferences( ... ) (although dynamic variables can be used to nice effect here as well).
I'm sure others will comment with other times the our scope is useful, but these are the ones from my own experience.
TL;DR
Please provide a piece of code written in some well known dynamic language (e.g. JavaScript), show how that code would look in Java bytecode using invokedynamic, and explain why the usage of invokedynamic is a step forward here.
Background
I have googled and read quite a lot about the not-that-new-anymore invokedynamic instruction, which everyone on the internet agrees will help speed up dynamic languages on the JVM. Thanks to stackoverflow I managed to get my own bytecode instructions with Sable/Jasmin to run.
I have understood that invokedynamic is useful for lazy constants and I also think that I understood how the OpenJDK takes advantage of invokedynamic for lambdas.
Oracle has a small example, but as far as I can tell the usage of invokedynamic in this case defeats the purpose, as the example for "adder" could be expressed much more simply, faster, and with roughly the same effect with the following bytecode:
aload whereeverAIs
checkcast java/lang/Integer
aload whereeverBIs
checkcast java/lang/Integer
invokestatic IntegerOps/adder(Ljava/lang/Integer;Ljava/lang/Integer;)Ljava/lang/Integer;
because for some reason Oracle's bootstrap method knows that both arguments are integers anyway. They even "admit" that:
[..]it assumes that the arguments [..] will be Integer objects. A bootstrap method requires additional code to properly link invokedynamic [..] if the parameters of the bootstrap method (in this example, callerClass, dynMethodName, and dynMethodType) vary.
Well yes, and without that interesting "additional code" there is no point in using invokedynamic here, is there?
So after that and a couple of further Javadoc and blog entries I think that I have a pretty good grasp on how to use invokedynamic as a poor replacement where invokestatic/invokevirtual or getfield would work just as well.
Now I am curious how to actually apply the invokedynamic instruction to a real world use case so that it actually is an improvement over what we could do with "traditional" invocations (except lazy constants, I got those...).
Actually, lazy operations are the main advantage of invokedynamic if you take the term “lazy creation” broadly. E.g., the lambda creation feature of Java 8 is a kind of lazy creation that includes the possibility that the actual class containing the code that will be finally invoked by the invokedynamic instruction doesn’t even exist prior to the execution of that instruction.
This can be projected to all kinds of scripting languages delivering code in a different form than Java bytecode (maybe even in source code). Here, the code may be compiled right before the first invocation of a method and remains linked afterwards. But it may even become unlinked if the scripting language supports redefinition of methods. This uses the second important feature of invokedynamic, to allow mutable CallSites which may be changed afterwards while supporting maximal performance when being invoked frequently without redefinition.
This possibility to change an invokedynamic target afterwards allows another option, linking to an interpreted execution on the first invocation, counting the number of executions and compiling the code only after exceeding a threshold (and relinking to the compiled code then).
Regarding dynamic method dispatch based on a runtime instance, it’s clear that invokedynamic can’t elide the dispatch algorithm. But if you detect at runtime that a particular call-site will always call the method of the same concrete type, you may relink the CallSite to optimized code which does a short check that the target is the expected type, performs the optimized action if so, and branches to the generic code performing the full dynamic dispatch only if that test fails. The implementation may even de-optimize such a call-site if it detects that the fast path check failed a certain number of times.
This is close to how invokevirtual and invokeinterface are optimized internally in the JVM as for these it’s also the case that most of these instructions are called on the same concrete type. So with invokedynamic you can use the same technique for arbitrary lookup algorithms.
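As a sketch of that relinking idea (this is not JDK code; the call-site shape (Object,Object)Object and the dynamically dispatched method name "plus" are assumptions made for the example):
import java.lang.invoke.*;

public class InlineCacheSketch {
    static final MethodHandles.Lookup L = MethodHandles.lookup();

    // Bootstrap method named by an invokedynamic instruction; every call site
    // starts out linked to the generic slow path.
    public static CallSite bootstrap(MethodHandles.Lookup caller, String name, MethodType type)
            throws ReflectiveOperationException {
        MutableCallSite site = new MutableCallSite(type);
        MethodHandle slow = L.findStatic(InlineCacheSketch.class, "slowPath",
                MethodType.methodType(Object.class, MutableCallSite.class, Object.class, Object.class));
        site.setTarget(slow.bindTo(site).asType(type));
        return site;
    }

    // Generic dispatch: resolve against the receiver's actual class, then install a
    // class-guarded fast path, keeping the current target as the fallback.
    static Object slowPath(MutableCallSite site, Object receiver, Object arg) throws Throwable {
        Class<?> seen = receiver.getClass();
        MethodHandle target = L.findVirtual(seen, "plus",
                MethodType.methodType(Object.class, Object.class)).asType(site.type());
        MethodHandle test = L.findStatic(InlineCacheSketch.class, "sameClass",
                MethodType.methodType(boolean.class, Class.class, Object.class, Object.class))
                .bindTo(seen);
        site.setTarget(MethodHandles.guardWithTest(test, target, site.getTarget()));
        return target.invoke(receiver, arg);
    }

    static boolean sameClass(Class<?> expected, Object receiver, Object arg) {
        return receiver.getClass() == expected;
    }
}
If the guard keeps failing (a megamorphic call site), a real runtime would typically relink back to the fully generic path, which is the de-optimization mentioned above.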
But if you want an entirely different use case, you can use invokedynamic to implement friend semantics which are not supported by the standard access modifier rules. Suppose you have classes A and B which are meant to have such a friend relationship, in that A is allowed to invoke private methods of B. Then all these invocations may be encoded as invokedynamic instructions with the desired name and signature and pointing to a public bootstrap method in B which may look like this:
public static CallSite bootStrap(Lookup l, String name, MethodType type)
        throws NoSuchMethodException, IllegalAccessException {
    if(l.lookupClass() != A.class || (l.lookupModes() & 0xf) != 0xf)
        throw new SecurityException("unprivileged caller");
    l = MethodHandles.lookup();
    return new ConstantCallSite(l.findStatic(B.class, name, type));
}
It first verifies that the provided Lookup object has full access to A, as only A is capable of constructing such an object. So sneaky attempts by wrong callers are sorted out at this point. Then it uses a Lookup object having full access to B to complete the linkage. So, each of these invokedynamic instructions is permanently linked to the matching private method of B after the first invocation, running at the same speed as ordinary invocations afterwards.
I understand that the module! type provides a better structure for protected namespaces than object! or the 'use function. How are words bound within the module? I notice some errors related to unbound words:
REBOL [Type: 'module] set 'foo "Bar"
Also, how does Rebol distinguish between a word local to the module ('foo) and that of a system function ('set)?
Minor update, shortly after:
I see there's a switch that changes the method of binding:
REBOL [Type: 'module Options: [isolate]] set 'foo "Bar"
What does this do differently? What gotchas are there in using this method by default?
OK, this is going to be a little tricky.
In Rebol 3 there are no such things as system words; there are just words. Some words have been added to the runtime library lib, and set is one of those words, which happens to have a function assigned to it. Modules import words from lib, though what "import" means depends on the module options. That might be trickier than you were expecting, so let me explain.
Regular Modules
For starters, I'll go over what importing means for "regular" modules, ones that don't have any options specified. Let's start with your first module:
REBOL [Type: 'module] set 'foo "Bar"
First of all, you have a wrong assumption here: the word foo is not local to the module; it is treated just the same as set. If you want to define foo as a local word you have to use the same method as you do with objects: use the word as a set-word at the top level, like this:
REBOL [Type: 'module] foo: "Bar"
The only difference between foo and set is that you hadn't exported or added the word foo to lib yet. When you reference words in a module that you haven't declared as local words, the module has to get their values and/or bindings from somewhere. For regular modules, it binds the code to lib first, then overrides that by binding the code again to the module's local context. Any words defined in the local context will be bound to it. Any words not defined in the local context will retain their old bindings, in this case to lib. That is what "importing" means for regular modules.
In your first example, assuming that you haven't done so yourself, the word foo was not added to the runtime library ahead of time. That means that foo wasn't bound to lib, and since it wasn't declared as a local word it wasn't bound to the local context either. So as a result, foo wasn't bound to anything at all. In your code that was an error, but in other code it might not be.
Isolated Modules
There is an "isolate" option that changes the way that modules import stuff, making it an "isolated" module. Let's use your second example here:
REBOL [Type: 'module Options: [isolate]] set 'foo "Bar"
When an isolated module is made, every word in the module, even in nested code, is collected into the module's local context. In this case, it means that set and foo are local words. The initial values of those words are set to whatever values they have in lib at the time the module is created. That is, if the words are defined in lib at all. If the words don't have values in lib, they won't initially have values in the module either.
It is important to note that this import of values is a one-time thing. After that initial import, any changes to these words made outside the module don't affect the words in the module. That is why we say the module is "isolated". In the case of your code example, it means that someone could change lib/set and it wouldn't affect your code.
But there's another important module type you missed...
Scripts
In Rebol 3, scripts are another kind of module. Here's your code as a script:
REBOL [] set 'foo "Bar"
Or if you like, since script headers are optional in Rebol 3:
set 'foo "Bar"
Scripts also import their words from lib, and they import them into an isolated context, but with a twist: All scripts share the same isolated context, known as the "user" context. This means that when you change the value of a word in a script, the next script to use that word will see the change when it starts. So if after running the above script, you try to run this one:
print foo
Then it will print "Bar", rather than have foo be undefined, even though foo is still not defined in lib. You might find it interesting to know that if you are using Rebol 3 interactively, entering commands into the console and getting results, every command line you enter is a separate script. So if your session looks like this:
>> x: 1
== 1
>> print x
1
The x: 1 and print x lines are separate scripts, the second taking advantage of the changes made to the user context by the first.
The user context is actually supposed to be task-local, but for the moment let's ignore that.
Why the difference?
Here is where we get back to the "system function" thing, and that Rebol doesn't have them. The set function is just like any other function. It might be implemented differently, but it's still a normal value assigned to a normal word. An application will have to manage a lot of these words, so that's why we have modules and the runtime library.
In an application there will be stuff that needs to change, and other stuff that needs to not change, and which stuff is which depends on the application. You will want to group your stuff, to keep things organized or for access control. There will be globally defined stuff, and locally defined stuff, and you will want to have an organized way to get the global stuff to the local places, and vice-versa, and resolve any conflicts when more than one thing wants to define stuff with the same name.
In Rebol 3, we use modules to group stuff, for convenience and access control. We use the runtime library lib as a place to collect the exports of the modules, and resolve conflicts, in order to control what gets imported to the local places like other modules and the user context(s). If you need to override some stuff, you do this by changing the runtime library, and if necessary propagating your changes out to the user context(s). You can even upgrade modules at runtime, and have the new version of the module override the words exported by the old version.
For regular modules, when things are overridden or upgraded, your module will benefit from such changes. Assuming those changes are a benefit, this can be a good thing. A regular module cooperates with other regular modules and scripts to make a shared environment to work in.
However, sometimes you need to stay separate from these kinds of changes. Perhaps you need a particular version of some function and don't want to be upgraded. Perhaps your module will be loaded in a less trustworthy environment and you don't want your code hacked. Perhaps you just need things to be more predictable. In cases like this, you may want to isolate your module from these kinds of external changes.
The downside to being isolated is that, if there are changes to the runtime library that you might want, you're not going to get them. If your module is somehow accessible (such as by having been imported with a name), someone might be able to propagate those changes to you, but if you're not accessible then you're out of luck. Hopefully you've thought to monitor lib for changes you want, or reference the stuff through lib directly.
Still, we've missed another important issue...
Exporting
The other part of managing the runtime library and all of these local contexts is exporting. You have to get your stuff out there somehow. And the most important factor is something that you wouldn't suspect: whether or not your module has a name.
Names are optional for Rebol 3's modules. At first this might seem like just a way to make it simpler to write modules (and in Carl's original proposal, that is exactly why). However, it turns out that there is a lot of stuff that you can do when you have a name that you can't when you don't, simply because of what a name is: a way to refer to something. If you don't have a name, you don't have a way to refer to something.
It might seem like a trivial thing, but here are some things that a name lets you do:
You can tell whether a module is loaded.
You can make sure a module is only loaded once.
You can tell whether an older version of a module was there earlier, and maybe upgrade it.
You can get access to a module that was loaded earlier.
When Carl decided to make names optional, he gave us a situation where it would be possible to make modules for which you couldn't do any of those things. Given that module exports were intended to be collected and organized in the runtime library, we had a situation where you could have effects on the library that you couldn't easily detect, and modules that got reloaded every time they were imported.
So for safety we decided to cut out the runtime library completely and just export words from these unnamed modules directly to the local (module or user) contexts that were importing them. This makes these modules effectively private, as if they are owned by the target contexts. We took a potentially awkward situation and made it a feature.
It was such a feature that we decided to support it explicitly with a private option. Making this an explicit option helps us deal with the last problem not having a name caused us: making private modules not have to reload over and over again. If you give a module a name, its exports can still be private, but it only needs one copy of what it's exporting.
However, named or not, private or not, that gives us 3 export types.
Regular Named Modules
Let's take this module:
REBOL [type: module name: foo] export bar: 1
Importing this adds a module to the loaded modules list, with the default version of 0.0.0, and exports one word bar to the runtime library. "Exporting" in this case means adding a word bar to the runtime library if it isn't there, and setting that word lib/bar to the value that the word foo/bar has after foo has finished executing (if it isn't set already).
It is worth noting that this automatic exporting happens only once, when the body of foo is finished executing. If you make a change to foo/bar after that, that doesn't affect lib/bar. If you want to change lib/bar too, you have to do it manually.
It is also worth noting that if lib/bar already exists before foo is imported, you won't have another word added. And if lib/bar is already set to a value (not unset), importing foo won't overwrite the existing value. First come, first served. If you want to override an existing value of lib/bar, you'll have to do so manually. This is how we use lib to manage overrides.
The main advantage that the runtime library gives us is that we can manage all of our exported words in one place, resolving conflicts and overrides. However, another advantage is that most modules and scripts don't actually have to say what they are importing. As long as the runtime library is filled in properly ahead of time with all the words you need, your script or module that you load later will be fine. This makes it easy to put a bunch of import statements and any overrides in your startup code, which sets up everything the rest of your code will need. This is intended to make it easier to organize and write your application code.
Named Private Modules
In some cases, you don't want to export your stuff to the main runtime library. Stuff in lib gets imported into everything, so you should only export stuff to lib that you want to make generally available. Sometimes you want to make modules that only export stuff for the contexts that want it. Sometimes you have some related modules, a general facility and a utility module or so. If this is the case, you might want to make a private module.
Let's take this module:
REBOL [type: module name: foo options: [private]] export bar: 1
Importing this module doesn't affect lib. Instead, its exports are collected into a private runtime library that is local to the module or user context that is importing this module, along with those of any other private modules that the target is importing, then imported to the target from there. The private runtime library is used for the same conflict resolution that lib is used for. The main runtime library lib takes precedence over the private lib, so don't count on the private lib overriding global things.
This kind of thing is useful for making utility modules, advanced APIs, or other such tricks. It is also useful for making strong-modular code which requires explicit imports, if that is what you're into.
It's worth noting that if your module doesn't actually export anything, there is no difference between a named private module or a named public module, so it's basically treated as public. All that matters is that it has a name. Which brings us to...
Unnamed Modules
As explained above, if your module doesn't have a name then it pretty much has to be treated as private. More than private though, since you can't tell if it's loaded, you can't upgrade it or even keep from reloading it. But what if that's what you want?
In some cases, you really want your code run for effect. In these cases having your code rerun every time is what you want to do. Maybe it's a script that you're running with do but structuring as a module to avoid leaking words. Maybe you're making a mixin, some utility functions that have some local state that needs initializing. It could be just about anything.
I frequently make my %rebol.r file an unnamed module because I want to have more control over what it exports and how. Plus, since it's done for effect and doesn't need to be reloaded or upgraded there's no point in giving it a name.
No need for a code example, your earlier ones will act this way.
I hope this gives you enough of an overview of the design of R3's module system.