Which actor.Tell method is preferable and why? - akka.net

I am wondering which Tell method should be used by default?
The docs at
http://getakka.net/docs/working-with-actors/sending-messages
hint that Tell(message, sender) is the preferred way of sending a message; however, looking at the Akka.NET code, it seems that Tell(message) calls the two-argument version anyway, with the sender filled in automatically.
Apart from the two-argument version running slightly simpler code (fewer ifs under the hood), is there another reason to use it instead of the single-argument version (when calling from inside an actor)?

I lean towards calling things with the fewest dependencies necessary to achieve the task at hand.
Anyway, aside from that: the article you are referring to is really saying favour Tell over Ask<>. I do not think the intent is to specify which overload is preferable. Often you will use the overload with a sender because you want the response to go to a different actor.
Calling Tell(blah, Self) seems horribly redundant, which is probably why the single-argument overload exists. The times you need to be careful are when you are telling from a place where you do not have a reference to Self or a suitable Sender, e.g. from tests; a sketch of both cases follows below.
Another common scenario is at a service layer, i.e. the surface of the system, where you will often find Ask<> is appropriate if a response is required synchronously. The point of that article is that reactive systems generally do not want to be synchronous (ask-based), so you should have tell-based pathways throughout (e.g. in a web context, to an Rx hub perhaps).
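To make the overload difference concrete, here is a minimal Akka.NET sketch; the actor and message names are made up for illustration.

using Akka.Actor;

public class ForwardingActor : ReceiveActor
{
    private readonly IActorRef _worker;

    public ForwardingActor(IActorRef worker)
    {
        _worker = worker;

        Receive<string>(msg =>
        {
            // Inside an actor, the single-argument overload implicitly uses this
            // actor's Self as the sender, so the worker's reply comes back here.
            _worker.Tell(msg);

            // The two-argument overload matters when the reply should go somewhere
            // else, e.g. straight back to the original sender of this message.
            _worker.Tell(msg, Sender);
        });
    }
}

// Outside an actor (e.g. in a test or at the system boundary) there is no implicit
// Self, so an explicit sender (or ActorRefs.NoSender) makes the intent clear:
// workerRef.Tell("do work", ActorRefs.NoSender);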

Related

What are the drawbacks of encapsulating arguments for different cases in one object?

I'll give you an example about path finding. When you want to find a path, you can pick a final destination and an initial position and find the fastest way between the two, or you can define only the initial position and let the algorithm show every path you could take, or you may want to mock this for a test and just give the final destination and assume you "teleport" there, and so on. It's clear that the function is the same: finding a path. But the arguments may vary between implementations. I've searched a lot and found a lot of solutions: getting rid of the interface, putting all the arguments as fields in the implementation, using the visitor pattern...
But I'd like to know from you what the drawbacks are of putting every possible argument (not state) in one object (let's call it MovePreferences) and letting every implementation take what it needs. Sure, if you ever need another implementation that takes an argument you didn't expect, you will need to change MovePreferences, but that doesn't sound too bad, since you will only add to it, not refactor anything existing. Even though this MovePreferences is not an object of my domain, I'm still tempted to do it. What do you think?
(If you have a better solution to this problem, feel free to add it to your answer.)
The question you are asking is really: why have interfaces at all? Or rather, why have any concept of context short of 'whatever I need'? I think the answers to that are pretty straightforward: programming with shared global state is easy for you, the programmer, and quickly turns into a vortex for everyone else once they have to coalesce different features for different customers, render enhancements, etc.
Now the far other end of the spectrum is the DbC argument: every single interface must be a highly constrained contract that not only keeps the knowledge exchanged to an absolute minimum, but makes the possibility of mayhem minimal.
Frankly, this is one of the reasons why dependency injection can quickly turn into a mess: as soon as design issues like this come up, people just start injecting more 'objects,' often to get access to just one property, whose scope might not be the same as the scope of the present operation. [Different kind of nightmare.]
Unfortunately, there's almost no information in your question. Do I think it would be possible to correctly model the notion of a Route? Sure. That doesn't sound very challenging. Here are a few ideas:
Make a class called Route that has starting and ending points. Then a collection of Traversals. The idea here would be that a Route could completely ignore the notion of how someone got from point a to point b, where traversal could contain information about roads, traffic, closures, whatever. Then your mocked case could just have no Traversals inside.
Another option would be to make Route a Composite so that each trip is then seen as the stringing together of various segments. That's the way routes are usually presented: go 2 miles on 2 South, exit, go 3 miles east on Santa Monica Boulevard, etc. In this scenario, you could just have Routes that have no children.
Finally, you will probably need a creational pattern. Perhaps a Builder. That simplifies mocking things too because you can just make a mock builder and have it construct Routes that consist of whatever you need.
The other advantage of combining the Composite and Builder is that you could make a builder that can build a new Route from an existing one by trying to improve only the troubling subsegments, e.g. it got traffic information that the 2S was slow, it could just replace that one segment and present its new route.
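A rough sketch of the Composite-plus-Builder idea described above; all type names here (Route, Traversal, RouteBuilder) are illustrative assumptions, not a prescribed design.

using System.Collections.Generic;

public record Point(double Lat, double Lon);

public class Traversal
{
    public Point From { get; init; }
    public Point To { get; init; }
    public string Description { get; init; }   // road, traffic notes, closures, etc.
}

public class Route
{
    public Point Start { get; }
    public Point End { get; }
    public List<Traversal> Traversals { get; } = new();   // empty in the mocked/"teleport" case

    public Route(Point start, Point end) { Start = start; End = end; }
}

public class RouteBuilder
{
    private Point _start, _end;
    private readonly List<Traversal> _segments = new();

    public RouteBuilder From(Point p) { _start = p; return this; }
    public RouteBuilder To(Point p)   { _end = p;   return this; }
    public RouteBuilder AddSegment(Traversal t) { _segments.Add(t); return this; }

    public Route Build()
    {
        var route = new Route(_start, _end);
        route.Traversals.AddRange(_segments);
        return route;
    }
}

The mocked case then becomes a Route built with no segments, and a "re-route" can be produced by copying an existing Route into a builder and replacing only the troubling segments.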
Consider an example:
say 5 arguments are encapsulated in an object and passed on to 3 methods.
If the object's structure changes, then we need to rerun the test cases for all 3 methods. If instead each method accepts only the arguments it needs, the unaffected methods need not be retested.
The only problem I see with this is an increase in testing effort; a small illustration follows below.
Secondly, you will naturally violate the Single Responsibility Principle (SRP) if you pass more arguments than a method actually needs.
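A tiny illustration of that trade-off, using hypothetical path-finding types: the first interface depends on the whole preferences object, the second only on what it actually uses.

using System.Collections.Generic;

public record Point(int X, int Y);
public record Path(IReadOnlyList<Point> Waypoints);

public class MovePreferences
{
    public Point Start { get; set; }
    public Point Destination { get; set; }
    public bool AssumeTeleport { get; set; }
    // every new option added here potentially affects every method that takes MovePreferences
}

public interface IPathFinder
{
    // coarse-grained: any structural change to MovePreferences means retesting this
    Path FindPath(MovePreferences preferences);
}

public interface IPointToPointPathFinder
{
    // fine-grained: unaffected by new preference fields it never uses
    Path FindPath(Point start, Point destination);
}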

What is the implementation difference between dojo.aspect and dojox.lang.aspect?

While implementing aspect-oriented programming I am getting confused:
Why does Dojo have two different aspect library files?
When should dojo.aspect be used, and when dojox.lang.aspect?
I have never heard about dojox.lang.aspect before, but according to git log the latest commit dates to 2008.
You can find an explanation why dojox.lang.aspect exists in an article by its author Eugene Lazutkin: AOP aspect of JavaScript with Dojo.
While it doesn't make much sense in some cases, we should realize that the primary role of dojo.connect() is to process DOM events.
[...]
To sum it up: events != AOP. Events cannot simulate all aspects of AOP, and AOP cannot be used in lieu of events.
So AOP in JavaScript sounds simple! Well, in reality there are several sticky places:
Implementing every single advice as a function is expensive.
Usually iterations are more efficient than recursive calls.
The size of call stack is limited.
If we have chained calls, it'll be impossible to rearrange them. So if we want to remove one advice in the middle, we are out of luck.
The “around” advice will “eat” all previously attached “before” and “after” advices changing the order of their execution. We cannot guarantee anymore that “before” advices run before all “around” advices, and so on.
Usually in order to decouple an “around” advice from the original function, the proceed() function is used. Calling it will result in calling the next-in-chain around advice method or the original method.
Our aspect is an object but in some cases we want it to be a static object, in other cases we want it to be a dynamic object created taking into account the state of the object we operate upon.
These and some other problems were resolved in dojox.lang.aspect (currently in the trunk, will be released in Dojo 1.2).
As of the latest Dojo 1.7, there is a strong tendency to differentiate between events and aspects, i.e. between dojo/on and dojo/aspect (both were implemented via dojo.connect before).
From a usage standpoint, dojo/aspect is a very simplified version of dojox/lang/aspect.
With dojo/aspect, you can create aspects corresponding to a named function (e.g. the "get" method in the "xhr" class), allowing you to create a before, after or around advice anytime xhr.get is called.
On the other hand (IMHO), only dojox/lang/aspect provides enough features to play with AOP.
It allows you to define your pointcuts with regular expressions, therefore allowing things like "execute an around advice for any functions whose name starts with get on any object"...
You can even pass-in an array of function names or regular expressions to which your aspects will be applied.
The blog post pointed to by phusick gives good examples of that.
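The mechanics the quoted article describes (each "around" advice wrapping the next and calling proceed() to continue the chain) can be sketched in a language-neutral way. The snippet below uses C# delegates purely to illustrate that chaining and why an advice cannot easily be removed from the middle of such a chain; it is not Dojo code, and the Advised class is a made-up name.

using System;

class Advised
{
    private Func<int, int> _target;

    public Advised(Func<int, int> original) => _target = original;

    // Wrap the current target in an "around" advice; the advice receives a
    // "proceed" delegate representing the rest of the chain.
    public void Around(Func<Func<int, int>, int, int> advice)
    {
        var next = _target;               // capture the current chain
        _target = x => advice(next, x);   // the advice becomes the new head
    }

    public int Invoke(int x) => _target(x);
}

class Demo
{
    static void Main()
    {
        var advised = new Advised(x => x * 2);            // original method
        advised.Around((proceed, x) => proceed(x) + 1);   // inner around advice
        advised.Around((proceed, x) => proceed(x * 10));  // outermost around advice
        Console.WriteLine(advised.Invoke(3));             // ((3 * 10) * 2) + 1 = 61
    }
}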

Using NSStringFromSelector to send method over a network

I'm currently building a client-client simulation in Objective-C across two computers (mac1 and mac2).
I have a class Client, and each computer has an instance of Client on it (client1, client2). I expect both clients to stay synchronized: they will be equal apart from their memory locations.
When a user presses a key on mac1, I want both client1 and client2 to execute a given method from class Client (so that they stay synchronized, i.e. they are the same apart from their memory locations on each mac).
To this approach, my current idea is to make 2 methods:
- (void) sendSelector:(Client*)toClient,...;
- (void) receiveSelector:(Client*)fromClient,...;
sendSelector: uses NSStringFromSelector() to transform the method into an NSString and send it over the network (let's not worry about sending strings over the net for now).
On the other hand, receiveSelector: uses NSSelectorFromString() to transform an NSString back into a selector.
My first question/issue is: to what extent is this approach "standard" on networking with objective-c?
My second question:
And what about the method's arguments? Is there any way of "packing" a given class instance and sending it over the network? I understand the pointer problem when packing, but every instance in my program has a unique identity, so that should be no problem, since both clients will know how to retrieve the object from its identity.
Thanks for your help
Let me address your second question first:
And what about the method's arguments? Is there any way of "packing" a given class instance and sending it over the network?
Many Cocoa classes implement/adopt the NSCoding protocol. This means they support a default implementation for serializing to a byte stream, which you could then send over the network. You would be well advised to use the NSCoding approach unless it's fundamentally not suited to your needs for some reason (i.e. use the highest level of abstraction that gets the job done).
Now for the more philosophical side of your first question; I'll rephrase your question as "is it a good approach to use serialized method invocations as a means of communication between two clients over a network?"
First, you should know that Objective-C has a not-often-used-any-more, but reasonably complete, implementation for handling remote invocations between machines with a high level of abstraction. It was called Distributed Objects. Apple appears to be shoving it under the rug to some degree (with good reason -- keep reading), but I was able to find an old cached copy of the Distributed Objects Programming Topics guide. You may find it informative. AFAIK, all the underpinnings of Distributed Objects still ship in the Objective-C runtime/frameworks, so if you wanted to use it, if only to prototype, you probably could.
I can't speculate as to the exact reasons that you can't seem to find this document on developer.apple.com these days, but I think it's fair to say that, in general, you don't want to be using a remote invocation approach like this in production, or over insecure network channels (for instance: over the Internet.) It's a huge potential attack vector. Just think of it: If I can modify, or spoof, your network messages, I can induce your client application to call arbitrary selectors with arbitrary arguments. It's not hard to see how this could go very wrong.
At a high level, let me recommend coming up with some sort of protocol for your application, with some arbitrary wire format (another person mentioned JSON -- It's got a lot of support these days -- but using NSCoding will probably bootstrap you the quickest), and when your client receives such a message, it should read the message as data and make a decision about what action to take, without actually deriving at runtime what is, in effect, code from the message itself.
From a "getting things done" perspective, I like to share a maxim I learned a while ago: "Make it work; Make it work right; Make it work fast. In that order."
For prototyping, maybe you don't care about security. Maybe when you're just trying to "make it work" you use Distributed Objects, or maybe you roll your own remote invocation protocol, as it appears you've been thinking of doing. Just remember: you really need to "make it work right" before releasing it into the wild, or those decisions you made for prototyping expedience could cost you dearly. The best approach here will be to create a class or group of classes that abstracts away the network protocol and wire format from the rest of your code, so you can swap out networking implementations later without having to touch all your code.
One more suggestion: I read in your initial question a desire to 'keep an object (or perhaps an object graph) in sync across multiple clients.' This is a complex topic, but you may wish to employ a "Command Pattern" (see the Gang of Four book, or any number of other treatments in the wild.) Taking such an approach may also inherently bring structure to your networking protocol. In other words, once you've broken down all your model mutation operations into "commands" maybe your protocol is as simple as serializing those commands using NSCoding and shipping them over the wire to the other client and executing them again there.
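To make the Command idea above concrete, here is a rough, language-agnostic sketch (written in C# rather than Objective-C, since it only illustrates the pattern): model mutations become small serializable command objects that both clients apply locally and ship over the wire. All names here (ClientCommand, MoveCommand, ClientState) are illustrative assumptions; in Cocoa you would likely serialize with NSCoding instead of JSON.

using System;
using System.Text.Json;

public abstract class ClientCommand
{
    public Guid TargetId { get; set; }          // the shared identity both clients know
    public abstract void Apply(ClientState state);
}

public class MoveCommand : ClientCommand
{
    public int Dx { get; set; }
    public int Dy { get; set; }
    public override void Apply(ClientState state) => state.Move(TargetId, Dx, Dy);
}

public class ClientState
{
    public void Move(Guid id, int dx, int dy) { /* mutate the local model */ }
}

public static class CommandWire
{
    // Serialize to bytes for the network; both sides run the same Apply() code,
    // so the two models stay in sync without sending method names around.
    public static byte[] Pack(MoveCommand cmd) => JsonSerializer.SerializeToUtf8Bytes(cmd);
    public static MoveCommand Unpack(byte[] data) => JsonSerializer.Deserialize<MoveCommand>(data);
}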
Hopefully this helps, or at least gives you some starting points and things to consider.
These days it would seem that the most standard way is to package everything up as JSON.

How much responsibility should a method have?

This is most certainly a language agnostic question and one that has bothered me for quite some time now. An example will probably help me explain the dilemma I am facing:
Let us say we have a method which is responsible for reading a file, populating a collection with some objects (which store information from the file), and then returning the collection...something like the following:
public List<SomeObject> loadConfiguration(String filename);
Let us also say that at the time of implementing this method, it would seem infeasible for the application to continue if the returned collection were empty (a size of 0). Now, the question is: should this validation (checking for an empty collection and perhaps the subsequent throwing of an exception) be done within the method? Or should this method's sole responsibility be to perform the load of the file, ignoring the task of validation and allowing validation to be done at some later stage outside of the method?
I guess the general question is: is it better to decouple the validation from the actual task being performed by a method? Will this make things, in general, easier to change or build upon at a later stage? In the case of my example above, a different strategy might later be added to recover from an empty collection being returned from the 'loadConfiguration' method; this would be difficult if the validation (and resulting exception) were done inside the method.
Perhaps I am being overly pedantic in the quest for some dogmatic answer, where instead it simply just relies on the context in which a method is being used. Anyhow, I would be very interested in seeing what others have to say regarding this.
Thanks all!
My recommendation is to stick to the single responsibility principle, which says, in a nutshell, that each object should have one purpose. In this instance, your method has three purposes, or four if you count the validation aspect.
Here's my recommendation on how to handle this and how to provide a large amount of flexibility for future updates.
Keep your LoadConfig method
Have it call a new method that reads the file.
Pass that method's return value to another method that loads the data into the collection.
Pass the object collection into some validation method.
Return the collection.
That's taking 1 method initially and breaking it into 4, with one calling the other 3; a rough sketch follows below. This should allow you to change pieces without having any impact on the others.
Hope this helps
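Here is that decomposition sketched out, under the assumption that the data comes from a text file; all names other than SomeObject and loadConfiguration are invented for illustration.

using System.Collections.Generic;
using System.IO;

public class SomeObject { /* fields parsed from the file */ }

public class ConfigurationLoader
{
    public List<SomeObject> LoadConfiguration(string filename)
    {
        var contents = ReadFile(filename);
        var items = ParseEntries(contents);
        Validate(items);                 // kept separate so it can be swapped or dropped later
        return items;
    }

    private string ReadFile(string filename) => File.ReadAllText(filename);

    private List<SomeObject> ParseEntries(string contents)
    {
        var result = new List<SomeObject>();
        // ... build SomeObject instances from the file contents ...
        return result;
    }

    private void Validate(List<SomeObject> items)
    {
        if (items.Count == 0)
            throw new InvalidDataException("Configuration was empty.");
    }
}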
I guess the general question is: is it better to decouple the validation from the actual task being performed by a method?
Yes. (At least if you really insist on answering such a general question – it's always quite easy to find a counter-example.) If you keep the two parts of the solution separate, you can exchange, drop or reuse either of them. That's a clear plus. Of course you must be careful not to jeopardize your object's invariants by exposing the non-validating API, but I think you are aware of that. You'll have to do a little extra typing, but that won't hurt you.
I will answer your question with a question: do you want various validation methods for the product of your method?
This is the same as the 'constructor' issue: is it better to raise an exception during construction, or to initialize an empty object and then call an 'init' method? You are sure to raise a debate here!
In general, I would recommend performing the validation as soon as possible: this is known as Fail Fast, which advocates that finding problems as soon as possible is better than delaying detection, since diagnosis is immediate, while later you would have to unwind the whole flow.
If you're not convinced, think of it this way: do you really want to write 3 lines (load, parse, validate) every time you load a file? That violates the DRY principle.
So, go agile there:
write your method with validation: it is responsible for loading a valid configuration (1)
if you ever need some parametrization, add it then (like a 'check' parameter with a default value that preserves the old behaviour, of course; see the sketch below)
(1) Of course, I don't advocate a single method doing all this at once... it's a matter of organization: under the covers this method should call dedicated methods to keep the code organized :)
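A minimal sketch of the 'check parameter with a default value' idea, with invented names:

using System.Collections.Generic;
using System.IO;

public class SomeObject { /* as in the question */ }

public class ConfigLoader
{
    // 'validate' defaults to true, preserving fail-fast behaviour; a caller that
    // wants to handle an empty configuration itself can pass false.
    public List<SomeObject> LoadConfiguration(string filename, bool validate = true)
    {
        var items = ParseEntries(File.ReadAllText(filename));
        if (validate && items.Count == 0)
            throw new InvalidDataException("Configuration was empty.");
        return items;
    }

    private List<SomeObject> ParseEntries(string contents)
    {
        // ... build SomeObject instances from the file contents ...
        return new List<SomeObject>();
    }
}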
To deflect the question to a more basic one, each method should do as little as possible. So in your example, there should be a method that reads in the file, a method that extracts the necessary data from the file, another method to write that data to the collection, and another method that calls these methods. The validation can go in a separate method, or in one of the others, depending on where it makes the most sense.
using System.IO;
using System.Collections.ObjectModel;

public struct FileData { /* fields extracted from the file contents */ }

public class FileDataCollection : Collection<FileData> { }

private string ReadFile(string fileSpec)
{
    // read in the file and return its contents
    return File.ReadAllText(fileSpec);
}

private FileData GetFileData(string fileContents)
{
    // create a FileData struct from the file contents
    var data = new FileData();
    // ... populate data from fileContents ...
    return data;
}

public void DoItAll(string fileSpec, FileDataCollection filDtaCol)
{
    filDtaCol.Add(GetFileData(ReadFile(fileSpec)));
}

Add validation and verification to each of the methods as appropriate.
You are designing an API and should not make any unnecessary assumptions about your client. A method should take only the information that it needs, return only the information requested, and only fail when it is unable to return a meaningful value.
So, with that in mind, if the configuration is loadable but empty, then returning an empty list seems correct to me. If your client has an application specific requirement to fail when provided an empty list, then it may do so, but future clients may not have that requirement. The loadConfiguration method itself should fail when it really fails, such as when it is unable to read or parse the file.
But you can continue to decouple your interface. For example, why must the configuration be stored in a file? Why can't I provide a URL, a row in a database, or a raw string containing the configuration data? Very few methods should take a file path as an argument since it binds them tightly to the local file system and makes them responsible for opening, reading, and closing files in addition to their core logic. Consider accepting an input stream as an alternative. Or if you want to allow for elaborate alternatives -- like data from a database -- consider accepting a ConfigurationReader interface or similar.
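For illustration, a sketch of what such a decoupled source abstraction might look like; IConfigurationSource and its implementations are invented names, not an existing API.

using System.IO;

public interface IConfigurationSource
{
    TextReader Open();   // could be backed by a file, a database row, a URL, a raw string...
}

public class FileConfigurationSource : IConfigurationSource
{
    private readonly string _path;
    public FileConfigurationSource(string path) => _path = path;
    public TextReader Open() => new StreamReader(_path);
}

public class StringConfigurationSource : IConfigurationSource
{
    private readonly string _raw;
    public StringConfigurationSource(string raw) => _raw = raw;
    public TextReader Open() => new StringReader(_raw);
}

// The loader no longer knows or cares where the configuration came from:
// var config = loader.LoadConfiguration(new StringConfigurationSource("..."));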
Methods should be highly cohesive... that is, single-minded. So my opinion would be to separate the responsibilities as you have described. I sometimes feel tempted to say it is just a short method, so it does not matter... and then I regret it 1.5 weeks later.
I think this depends on the case: if you can think of a scenario where you would use this method and an empty list would be an acceptable result, then I would not put the validation inside the method. But for, e.g., a method which inserts data into a database where the data has to be validated (is the email address correct, has a name been specified, ...), it is fine to put validation code inside the function and throw an exception.
Another alternative, not mentioned above, is to support Dependency Injection and have the method client inject a validator. This would allow the preservation of the "strong" Resource Acquisition Is Initialization principle, that is to say Any Object which Loads Successfully is Ready For Business (Matthieu's mention of Fail Fast is much the same notion).
It also allows a resource implementation class to create its own low-level validators that rely on the structure of the resource, without exposing clients to implementation details unnecessarily, which can be useful when dealing with multiple disparate resource providers such as those Ryan listed.
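A short sketch of the injected-validator idea, with invented names:

using System.Collections.Generic;
using System.IO;

public class SomeObject { /* as in the question */ }

public interface IConfigurationValidator
{
    void Validate(IReadOnlyList<SomeObject> items);   // throw if the loaded data is unacceptable
}

public class NonEmptyValidator : IConfigurationValidator
{
    public void Validate(IReadOnlyList<SomeObject> items)
    {
        if (items.Count == 0)
            throw new InvalidDataException("Configuration was empty.");
    }
}

public class ValidatingLoader
{
    private readonly IConfigurationValidator _validator;

    public ValidatingLoader(IConfigurationValidator validator) => _validator = validator;

    public List<SomeObject> LoadConfiguration(string filename)
    {
        var items = ParseEntries(File.ReadAllText(filename));
        _validator.Validate(items);   // the loader is "ready for business" only if this passes
        return items;
    }

    private List<SomeObject> ParseEntries(string contents)
    {
        // ... build SomeObject instances from the file contents ...
        return new List<SomeObject>();
    }
}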

Passing object references needlessly through a middleman

I often find myself needing a reference to an object that is several objects away, or so it seems. The options I see are passing the reference through a middleman or just making it available statically. I understand the danger of global scope, but passing a reference through an object that does nothing with it feels ridiculous. I'm okay with a little bit of passing around, I suppose. I suspect there's a line to be drawn somewhere.
Does anyone have insight on where to draw this line?
Or a good way to deal with the problem of distributing references amongst dependent objects?
Use the Law of Demeter (with moderation and good taste, not dogmatically). If you're coding a.b.c.d.e, something IS wrong -- you've nailed forevermore the implementation of a to have a b which has a c which... EEP!-) One or at the most two dots is the maximum you should be using. But the alternative is NOT to plump things into globals (and ensure thread-unsafe, buggy, hard-to-maintain code!); it is to have each object "surface" those characteristics it is designed to maintain as part of its interface to clients, instead of just letting poor clients go through such unending chains of nested refs!
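For example (hypothetical Order/Customer/Address classes), "surfacing" might look like this:

public class Address  { public string City { get; set; } }
public class Customer { public Address Address { get; set; } }

public class Order
{
    public Customer Customer { get; set; }

    // Instead of clients writing order.Customer.Address.City (three dots, three
    // nailed-down implementation details), the Order surfaces what clients need:
    public string ShippingCity => Customer.Address.City;
}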
This smells of an abstraction that may need some improvement. You seem to be violating the Law of Demeter.
In some cases a global isn't too bad.
Consider: you're probably programming against an operating system's API. That's full of globals; you can access a file or the registry, write to the console, look up a window handle. You can do loads of things to access state that is global across the whole computer, or even across the internet... and you don't have to pass a single reference into your class to access it. All this stuff is global if you access the OS's API.
So, when you consider the number of global things that already exist, a global in your own program probably isn't as bad as many people make out and scream about.
However, if you want very nice OO code that is all unit testable, I suppose you should be writing wrapper classes around any access to globals, whether they come from the OS or are declared yourself, to encapsulate them. This means your class that uses this global state can take references to the wrappers, and they can be replaced with fakes; a sketch follows below.
Hmm, anyway. I'm not quite sure what advice I'm trying to give here, other than to say: structuring code is all a balance! How to do it for your particular problem depends on your preferences, the preferences of the people who will use the code, how you're feeling on the day on the academic-to-pragmatic scale, how big the code base is, how safety-critical the system is and how far off the deadline for completion is.
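As a sketch of that wrapper idea (invented names, trivial fake):

using System.IO;

public interface IFileSystem
{
    bool FileExists(string path);
    string ReadAllText(string path);
}

public class RealFileSystem : IFileSystem
{
    public bool FileExists(string path) => File.Exists(path);
    public string ReadAllText(string path) => File.ReadAllText(path);
}

public class FakeFileSystem : IFileSystem          // used in unit tests
{
    public bool FileExists(string path) => true;
    public string ReadAllText(string path) => "stub contents";
}

// A class that needs file access takes an IFileSystem in its constructor instead
// of reaching for the global File API directly, so tests can hand it the fake.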
I believe your question is revealing something about your classes. Maybe the responsibilities could be improved? Maybe moving some code would solve problems?
Tell, don't ask.
That's how it was explained to me. There is a natural tendency to call classes to obtain some data. Taken too far, asking too much typically leads to heavy "getter sequences". But there is another way. I must admit it is not easy to find, but it improves gradually in a given codebase and in the coder's habits.
Class A wants to perform a calculation and asks B for its data. Sometimes it is more appropriate for A to tell B to do the job, possibly passing some parameters. This could replace B's getName(), used by A to check the validity of the name, with an isValid() method on B.
"Asking" has been replaced by "telling" (calling a method that executes the computation); a small sketch follows below.
For me, this is the question I ask myself when I find too many getter calls. Gradually, the methods find their place in the correct object, and everything gets a bit simpler: I have fewer getters and fewer calls to them. I have less code, and it carries more meaning, aligning better with the functional requirements.
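A small sketch of that getName()/isValid() shift, with invented class names:

public class B_Asked
{
    public string Name { get; set; }
}

public class B_Told
{
    private readonly string _name;
    public B_Told(string name) => _name = name;

    // B owns the rule, so A no longer pulls the data out to check it.
    public bool IsValid() => !string.IsNullOrWhiteSpace(_name);
}

public class A
{
    // Asking: A reaches into B's data and applies B's rule itself.
    public bool CheckByAsking(B_Asked b) => !string.IsNullOrWhiteSpace(b.Name);

    // Telling: A tells B to do the job.
    public bool CheckByTelling(B_Told b) => b.IsValid();
}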
Move the data around
There are other cases where I move some data. For example, if a field moves two objects up, the length of the "getter chain" is reduced by two.
I believe nobody can find the correct model at first.
I first think about it (using hand-written diagrams is quick and a big help), then code it, then think again facing the real thing... Then I code the rest, and any smells I feel in the code, I think again...
Split and merge objects
If a method on A needs data from C, with B as a middleman, I can check whether A and C have something in common. Possibly, A or a part of A could become C (a possible splitting of A, or merging of A and C)...
However, there are cases where I keep the getters of course.
But it's less likely a long chain will be created.
A long chain will probably get broken by one of the techniques above.
I have three patterns for this:
Pass the necessary reference to the object's constructor -- the reference can then be stored as a data member of the object, and doesn't need to be passed again; this implies that the object's factory has the necessary reference. For example, when I'm creating a DOM, I pass the element name to the DOM node when I construct the DOM node.
Let things remember their parent, and get references to properties via their parent; this implies that the parent or ancestor has the necessary property. For example, when I'm creating a DOM, there are various things which are stored as properties of the top-level DomDocument ancestor, and its child nodes can access those properties via the reference which each one has to its parent.
Put all the different things which are passed around as references into a single class, and then pass around just that one class instance as the only thing that's passed around. For example, there are many properties required to render a DOM (e.g. the GDI graphics handle, the viewport coordinates, callback events, etc.) ... I put all of these things into a single 'Context' instance which is passed as the only parameter to the methods of the DOM nodes to be rendered, and each method can get whichever properties it needs out of that context parameter; a sketch of this follows below.
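A sketch of that third pattern; the names RenderContext, DomNode and TextNode are illustrative, not an actual DOM library.

using System;

public class RenderContext
{
    public IntPtr GraphicsHandle { get; init; }
    public (int X, int Y, int Width, int Height) Viewport { get; init; }
    public Action<string> OnNodeRendered { get; init; }
}

public abstract class DomNode
{
    // Every node receives the single context parameter and picks out what it needs,
    // instead of each property being threaded through every intermediate node.
    public abstract void Render(RenderContext context);
}

public class TextNode : DomNode
{
    private readonly string _text;
    public TextNode(string text) => _text = text;

    public override void Render(RenderContext context)
    {
        // ... draw _text using context.GraphicsHandle within context.Viewport ...
        context.OnNodeRendered?.Invoke(_text);
    }
}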