I found this line in an old branch and, as I have a lot of respect for the (now unreachable) author, I'm trying to make sense of it, more precisely of the lambda at the end:
container.Register(Of ServiceStack.Configuration.IResolver)(Function(x) x)
container is a Funq.Container obtained through ServiceStack. IntelliSense tells me that the lambda is filling in for a (factory As Func(Of Funq.Container, ServiceStack.Configuration.IResolver)) parameter.
There are two things I can assume about the author: he's a better coder than I am, and he may have left this unfinished. For now I'm guessing that this lambda was deliberate and not some unfinished line with no clear intent, but so far nobody has been able to help me understand why.
The container is a dependency injection container. Code elsewhere will ask the container for instances of interfaces. The code here is registration code, which tells the container how to provide an IResolver. Furthermore, it's designed to accept a factory function; resolution later will call the function to produce the requested IResolver.
In this case, it appears that the container itself implements IResolver. The lambda is a function that returns its argument, so it's trivial; its argument is a Funq.Container and it returns a ServiceStack.Configuration.IResolver, so the only way this can compile is if the container implements that interface.
Thus: The container implements IResolver. The code registers a factory function which always returns back the container itself when called.
It seems a rather odd thing to do. I don't know ServiceStack at all, so I'm not sure why it's done this way.
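For illustration, here is a hedged sketch (not the original project's code) of what the registration and a later resolution amount to, assuming the ServiceStack Funq container does implement IResolver as argued above:

container.Register(Of ServiceStack.Configuration.IResolver)(Function(c) c)

' Later, when something asks for an IResolver, Funq invokes the factory with the
' container itself as the argument, so the container is handed straight back:
Dim resolver = container.Resolve(Of ServiceStack.Configuration.IResolver)()
' resolver Is container  ->  True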
I prefer working with files that are less than 1000 lines long, so am thinking of breaking up some Erlang modules into more bite-sized pieces.
Is there a way of doing this without expanding the public API of my library?
What I mean is, any time there is a module, any user can do module:func_exported_from_the_module. The only way to really have something be private that I know of is to not export it from any module (and even then holes can be poked).
So if there is technically no way to accomplish what I'm looking for, is there a convention?
For example, there are no private methods in Python classes, but the convention is to use a leading _ in _my_private_method to mark it as private.
I accept that the answer may be, "no, you must have 4K LOC files."
The closest thing to a convention is to use EDoc tags like @private and @hidden.
From the docs:
@hidden
Marks the function so that it will not appear in the
documentation (even if "private" documentation is generated). Useful
for debug/test functions, etc. The content can be used as a comment;
it is ignored by EDoc.
@private
Marks the function as private (i.e., not part of the public
interface), so that it will not appear in the normal documentation.
(If "private" documentation is generated, the function will be
included.) Only useful for exported functions, e.g. entry points for
spawn. (Non-exported functions are always "private".) The content can
be used as a comment; it is ignored by EDoc.
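For example, a hypothetical module (invented names) showing where the tags go; EDoc tags live in the %% comments directly above each function:

-module(my_module).
-export([start/0, init_worker/0, dump_state/0]).

%% @doc Public entry point of the library.
start() ->
    ok.

%% @private
%% Exported only so it can be used as an entry point for spawn; it is not part
%% of the public API, so it won't appear in the normal documentation.
init_worker() ->
    ok.

%% @hidden
%% Debug helper; kept out of the generated documentation entirely.
dump_state() ->
    ok.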
Please note that this answer started as a comment to @legoscia's answer.
Different visibilities for different functions are not currently supported.
The current convention, if you want to call it that, is to have one (or several) 'facade' modules, e.g. my_lib.erl, that export the public API of your library/application. Calling any internal module of the library is playing with fire and should be avoided (call them at your own risk).
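A minimal sketch of that convention (module names invented): my_lib.erl is the only module callers are meant to use, and it simply delegates to internal modules:

-module(my_lib).
-export([parse/1, store/2]).

%% my_lib_parser and my_lib_storage are hypothetical internal modules;
%% users are expected to go through my_lib only.
parse(Bin) -> my_lib_parser:parse(Bin).
store(Key, Value) -> my_lib_storage:store(Key, Value).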
There are some very nice features in the BEAM VM that rely on being able to call exported functions from any module, such as
Callbacks (funs/anonymous funs), MFA, erlang:apply/3: The calling code does not need to know anything about the library, just that it's something that needs to be called
Behaviours such as gen_server need the previous point to work
Hot reloading: You can upgrade the bytecode of any module without stopping the VM. The code server inside the VM maintains at most two versions of the bytecode for any module, redirecting external calls (those made with the fully qualified Module: prefix) to the most recent version, while internal calls keep running in the version the caller is already in. That's why you may see some ?MODULE: calls in long-running servers: they make sure the code can be upgraded (a minimal sketch follows this list)
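Here is a minimal, hypothetical illustration of that last point: a hand-rolled server loop that uses a fully qualified ?MODULE: call so that the next iteration picks up newly loaded code:

-module(counter).
-export([start/0, loop/1]).

start() -> spawn(fun() -> loop(0) end).

loop(N) ->
    receive
        {inc, From} ->
            From ! {ok, N + 1},
            %% fully qualified call: after a new version of this module is
            %% loaded, the next iteration runs the newest code
            ?MODULE:loop(N + 1);
        stop ->
            ok
    end.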
You could argue that these points would still hold with more fine-grained, BEAM-level visibility rules, true. But I don't think that would solve anything that isn't already solved with facade modules, and it would complicate other parts of the VM and the code a great deal.
Bonus
Something similar applies to records and opaque types: records only exist at compile time, and opaque types only at Dialyzer time. Nothing stops you from accessing their internals anywhere, but you'll only find problems if you go that way:
You insert a new field in the record and, suddenly, all your {record_name, ...} = ... matches break (see the sketch after this list)
You use a library that returns an opaque_adt(); you know it's a list, so you use it as one. The library is upgraded to include the size of the list, so now opaque_adt() is a tuple() and chaos ensues
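A tiny, hypothetical illustration of the record point:

-module(record_pitfall).
-export([new/1, get_name/1]).

%% original definition; imagine it later grows to -record(user, {name, age}).
-record(user, {name}).

new(Name) -> #user{name = Name}.

%% "cheating" by matching on the raw tuple instead of using #user{name = Name}:
get_name({user, Name}) -> Name.

%% once the record gains the extra field, #user{} becomes a 3-tuple and
%% get_name(new("bob")) crashes with a function_clause error.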
Only the functions listed in the -export attribute are visible to other modules, i.e. the "public" functions. All other functions are private. Only if you have specified -compile(export_all) are all functions in the module visible from outside, and using -compile(export_all) is not recommended.
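For example (invented module), only area/1 below can be called from other modules:

-module(math_utils).
-export([area/1]).   %% the module's whole "public" API

area({circle, R}) -> pi_times(R * R);
area({square, S}) -> S * S.

%% not exported, so it can only be called from inside this module
pi_times(X) -> math:pi() * X.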
I don't know of any existing convention for Erlang, but why not adopt the Python convention? Let's say that "library-private" functions are prefixed with an underscore. You'll need to quote function names with single quotes for that to work:
-module(bar).
-export(['_my_private_function'/0]).
'_my_private_function'() ->
    foo.
Then you can call it as:
> bar:'_my_private_function'().
foo
To me, that communicates clearly that I shouldn't be calling that function unless I know what I'm doing. (and probably not even then)
Suppose I have a function (in Kotlin over Java):
fun <E> myFun() = ...
where E is a general type I know nothing about. Can I determine within this function whether there exists an extension function E.extFun()? And if so, how?
I very much doubt this is possible.
Note that extension functions are resolved statically, at compile time.
And that they're dependent on the extension function being in scope, usually via a relevant import. In particular, it's possible to have more than one extension function with the same name for the same class, as long as they're defined in different places; the one that's in scope will get called.
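For example, here is a small hypothetical sketch: two member extension functions with the same name, where which one runs is decided entirely by what is in scope at compile time:

object A { fun String.describe() = "A's extension" }
object B { fun String.describe() = "B's extension" }

fun main() {
    // the same call resolves to different functions depending purely on
    // which receiver scope is active where the call is compiled
    with(A) { println("hello".describe()) }  // prints "A's extension"
    with(B) { println("hello".describe()) }  // prints "B's extension"
}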
Within your function, you won't have access to any of that context. So even if you use reflection (which is the usual, and much-abused, ‘get out of jail free card’ for this sort of issue), you still won't be able to find the relevant extension function(s). (Not unless you have prior knowledge of where they might be defined — but in that case, you can probably use that knowledge to come up with a better approach.)
So while I can't say for certain, it seems highly unlikely.
Why do you want to determine this? What are you trying to achieve by it?
I am learning the command design pattern. As far as I know, the four terms always associated with the command pattern are command, receiver, invoker and client.
A concrete command class has an execute() method and the invoker has a couple of commands. The invoker decides when to call the execute() method of a command.
When the execute() method is called, it calls a method of the receiver. Then, the receiver does the work.
I don't understand why we need the receiver class. We could do the work inside the execute() method; the receiver class seems redundant.
Thanks in advance.
Design patterns are used to solve software problems.
You have to understand the problem before trying to understand the solution (in this case, the Command pattern).
The problems the Command pattern applies to arise in the context of an object A (the client) invoking a method on an object B (the receiver), so the receiver is part of the problem, not part of the solution.
The solution, or idea, that the Command pattern offers is to encapsulate the method invocation from A to B in an object (the command); in fact, this is close to the formal definition of the pattern. When you manage a request as an object, you are able to solve certain problems or implement certain features. (You will also need other pieces, like the one called the invoker.)
This list can give you some good examples of the kinds of problems or features the Command pattern is suited for.
Note: the Command pattern is not necessarily about decoupling. In fact, in the most common example implementation, the client needs to create a new instance of the receiver, so we cannot really talk about decoupling there.
Imagine a class that can do a couple of things, like Duck: it can eat and quack. Duck is the receiver in this example. To apply the command pattern here, you need to be able to wrap eating and quacking into commands. They should be separate classes deriving from a Command base class with an execute() method, because Duck itself could only have a single execute() method. So EatCommand.execute() calls Duck.eat() and QuackCommand.execute() calls Duck.quack().
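A minimal Kotlin sketch of that example (names follow the answer; an interface stands in for the Command base class):

interface Command {
    fun execute()
}

// the receiver: it knows how to do the actual work
class Duck {
    fun eat() = println("Duck is eating")
    fun quack() = println("Quack!")
}

// each command wraps exactly one receiver action
class EatCommand(private val duck: Duck) : Command {
    override fun execute() = duck.eat()
}

class QuackCommand(private val duck: Duck) : Command {
    override fun execute() = duck.quack()
}

// the invoker only sees Command, never Duck
class RemoteControl(private val command: Command) {
    fun press() = command.execute()
}

fun main() {
    val duck = Duck()
    RemoteControl(EatCommand(duck)).press()
    RemoteControl(QuackCommand(duck)).press()
}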
The goal of the command pattern is to decouple the invoker from the receiver.
The receiver must do the work, not the command itself; the command just knows which receiver method to call, or the command can execute other commands. With the command pattern, the invoker doesn't know what is being called except for the command.
So a command can be reused by many invokers to execute the same action on the receiver.
The short answer is: it depends. And this is not based on my opinion alone. From the GoF book, Command pattern, Implementation section (page 238):
"How intelligent should a command be? A command can have a wide range of abilities. At one extreme it merely defines a binding between a receiver and the actions that carry out the request. At the other extreme it implements everything itself without delegating to a receiver at all. The latter extreme is useful when you want to define commands that are independent of existing classes, when no suitable receiver exists, or when a command knows its receiver implicitly. For example, a command that creates another application window may be just as capable of creating the window as any other object."
So I do not think one should create a receiver class just for the sake of it, or because most examples say so. Create it only if there is a real need. One such case is when a class that acts as a receiver already exists as a separate class. If you have to write the code that is going to be invoked/executed and see no reason to create a separate class for it, then I see no fault in putting that code in the command itself.
I need to serialize a function in Erlang, send it over to another node, deserialize it and execute it there. The problem I am having is with files. If the function reads from a file which is not present on the second node, I get an error. Is there a way I can differentiate between serializable and non-serializable constructs in Erlang, so that if a function makes use of a file or a pid, it fails to serialize?
Thanks
First of all, if you are sending anonymous functions, be extremely careful with that. Or, rather, just don't do it.
There are a couple of cases when this function won't even be executed or will be executed in a completely wrong way.
Every function in Erlang, even an anonymous one, belongs to some module, the one it was constructed in, to be precise. If the function was built in the REPL, it's bound to the erl_eval module, which is even more dangerous (I'll explain why further on).
Say, you start two nodes, one of them has a module named 'foo', and one doesn't have such a module loaded (and cannot load it). If you construct a lambda inside the module 'foo', send it to the second node and try to call it, you'll fail with {error, undef}.
There can be another funny problem. Make two different versions of module 'foo', implement a 'bar' function in each of them, and construct a lambda inside each one (but make the two lambdas different). You'll get yet another error when trying to call the sent lambda.
I think, there could possibly be other tricky parts of sending lambdas to different nodes, but trust me, that's already quite a lot.
Secondly, there are tons of ways you can end up with a process or a port inside a lambda without knowing it in advance.
Even though there is a way of extracting the closed-over variables from a lambda (if you look at a lambda serialized to a binary, all the external variables used inside it are listed starting from the 2nd byte), they are not the only source of potential pids or ports.
Consider an easy example: you call the self() function inside your lambda. What will it return? Right, a pid. Okay, we could probably parse the binary and catch this function call, along with a dozen other built-in functions. But what will you do when you call some external function? ets:lookup(sometable, somekey)? some_module:some_function_that_returns_god_knows_what()? You don't know what they are going to return.
Now, on to what you can actually do here:
When working with files, always send filenames, not descriptors. If you need the file's position or something similar, send it as well. File descriptors shouldn't be known outside the process they've been opened in.
As I mentioned, do everything you can to avoid sending lambdas to other nodes. It's hard to say how to avoid that, since I don't know your exact task. Maybe you can send a list of functions to execute, like:
[{module1, parse_query},
{module1, dispatch_parsed_query},
{module2, validate_response},
{module2, serialize_query}]
and pass arguments through this sequence of functions (be sure all the modules exist everywhere). Maybe you can stick to a single module that is going to be frequently changed and deployed over the entire cluster. Maybe you might want to switch to JS/Lua and use externally started ports (Riak uses SpiderMonkey to process JS-written lambdas for MapReduce requests). Finally, you can actually get a module's object code, send it over to another node and load it there. Just keep in mind that this isn't entirely safe either: you can break some running processes, lose some constructed lambdas, and so on.
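A minimal sketch of two of those options (module name invented): run_pipeline/2 threads a value through a list of {Module, Function} pairs, and load_on/2 ships a module's object code to another node:

-module(remote_exec).
-export([run_pipeline/2, load_on/2]).

%% every module referenced in Funs must already be loaded on the node
%% where this runs
run_pipeline(Funs, Input) ->
    lists:foldl(fun({M, F}, Acc) -> erlang:apply(M, F, [Acc]) end, Input, Funs).

%% fetch the local object code for Module and load it on Node
load_on(Node, Module) ->
    {Module, Bin, File} = code:get_object_code(Module),
    rpc:call(Node, code, load_binary, [Module, File, Bin]).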
Is there an elegant/convenient way (without creating many "empty" classes, or at least without them being annoying) to have fluent interfaces that enforce ordering at the compile-time level?
Fluent interfaces:
http://en.wikipedia.org/wiki/Fluent_interface
The idea is to permit this to compile:
var fluentConfig = new ConfigurationFluent().SetColor("blue")
.SetHeight(1)
.SetLength(2)
.SetDepth(3);
and to reject this:
var fluentConfig = new ConfigurationFluent().SetLength(2)
.SetColor("blue")
.SetHeight(1)
.SetDepth(3);
Each step in the chain needs to return an interface or class that only includes the methods that are valid to use after the current step. In other words, if SetColor must come first, ConfigurationFluent should only have a SetColor method. SetColor would then return an object that only has a SetHeight method, and so forth.
In reality, the return values could all be the same instance of ConfigurationFluent but cast to different interfaces explicitly implemented by that class.
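A hedged C# sketch of that idea (interface names invented); explicit interface implementation keeps each setter off the class's public surface until the previous step has returned the right interface:

public interface IHeightStep { ILengthStep SetHeight(int height); }
public interface ILengthStep { IDepthStep SetLength(int length); }
public interface IDepthStep  { ConfigurationFluent SetDepth(int depth); }

public class ConfigurationFluent : IHeightStep, ILengthStep, IDepthStep
{
    private string _color;
    private int _height, _length, _depth;

    // the only method available on a fresh instance
    public IHeightStep SetColor(string color) { _color = color; return this; }

    // explicit implementations: invisible unless you hold the matching interface
    ILengthStep IHeightStep.SetHeight(int height) { _height = height; return this; }
    IDepthStep ILengthStep.SetLength(int length) { _length = length; return this; }
    ConfigurationFluent IDepthStep.SetDepth(int depth) { _depth = depth; return this; }

    public override string ToString() => $"{_color} {_height}x{_length}x{_depth}";
}

// compiles:
//   new ConfigurationFluent().SetColor("blue").SetHeight(1).SetLength(2).SetDepth(3);
// does not compile, because SetLength is not visible on ConfigurationFluent itself:
//   new ConfigurationFluent().SetLength(2).SetColor("blue");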
I've got a set of three ways of doing this in C++, using essentially a compile-time FSM to validate the actions. You can find the code on GitHub.
The short answer is no, there is no elegant or convenient way to enforce the construction order of a class that properly implements the "Fluent Interface" pattern you've linked to.
The longer answer starts with playing devil's advocate. If I had dependent properties (i.e. properties that required other properties to be set first), then I could implement them something like this:
method SetLength(int millimeters)
if color is null throw new ValidationException
length = millimeters
return this
end
(NOTE: the above does not map to any real language, it is just pseudocode)
So now I have exceptions to worry about. If I don't obey the rules, the fluent object will throw an exception. Now let's say I have a declaration like yours:
var config = new Fluent().SetLength(2).SetHeight(1).SetDepth(3).SetColor("blue");
When I catch the ValidationException because length depends on the color being set first, how am I, as the user, supposed to know what the correct order is? Even if I put each SetX method on a different line, in most languages the stack trace will just give me the line where the config variable was declared. Furthermore, how am I supposed to keep the rules of this object straight in my head compared to other objects? It is a cacophony of conflicting ideals.
Such precedence checks violate the spirit of the "Fluent Interface" approach. That approach was designed to make configuring complex objects convenient. You take the convenience away when you attempt to enforce order.
To implement a fluent interface properly and elegantly, there are a couple of guidelines that are best observed to make consumers of your class thank you:
Provide meaningful default values: this minimizes the need to change values and the chance of creating an invalid object.
Do not perform configuration validation until explicitly asked to do so. That event can be when we use the configuration to create a new fully configured object, or when the consumer explicitly calls a Validate() method.
In any exceptions thrown, make sure the error message is clear and points out any inconsistencies.
Maybe the compiler could check that methods are called in the same order as they are defined; this could be a new feature for compilers.
Or maybe by means of annotations, something like:
class ConfigurationFluent {
    @Called-Before SetHeight
    SetColor(..) {}
    @Called-After SetColor
    SetHeight(..) {}
    @Called-After SetHeight
    SetLength(..) {}
    @Called-After SetLength
    SetDepth(..) {}
}
You can implement a state machine of the valid sequences of operations, and in each method call into the state machine to verify that the operation is allowed, throwing an exception if it is not.
I would not suggest this approach for configuration objects, though; it can get very messy and hard to read.
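For illustration, a rough C# sketch of that runtime check (names invented); unlike the interface-per-step approach, this only fails at run time, not at compile time:

using System;

public class ConfigurationFluent
{
    private enum Step { Start, Color, Height, Length, Depth }
    private Step _step = Step.Start;

    // verify we are at the expected step, then move to the next one
    private void Advance(Step expected, Step next)
    {
        if (_step != expected)
            throw new InvalidOperationException(
                $"Call out of order: expected the {expected} step, but was at {_step}.");
        _step = next;
    }

    public ConfigurationFluent SetColor(string color) { Advance(Step.Start, Step.Color); return this; }
    public ConfigurationFluent SetHeight(int height)  { Advance(Step.Color, Step.Height); return this; }
    public ConfigurationFluent SetLength(int length)  { Advance(Step.Height, Step.Length); return this; }
    public ConfigurationFluent SetDepth(int depth)    { Advance(Step.Length, Step.Depth); return this; }
}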