Can double dispatch be used in an instance creation method, like
ClassName new
To add some detail: we can write an = or an arithmetic method in an abstract class, then double-dispatch it in the subclasses. Could we use the same technique for instance creation?
I have tried overriding new, but it fails and falls back to some predefined basic new method.
Double dispatch doesn't really make sense in the new case. The idea behind double dispatch is that you can't determine the correct behavior by dispatching only on the receiver. The type of the (single) argument has an equal effect on which behavior is chosen (dispatched). In other words, double dispatch only makes sense if your methods take arguments; new, being unary, does not.
That said, you can certainly implement your own new method that overrides the stock default inherited one. And you can make it do all kinds of interesting things. It's common to do some sort of environment check to determine what subclass is appropriate.
AbstractClass class>>new
^self platform = #unix
ifTrue: [SubclassThatLeveragesUnix basicNew]
ifFalse: [CrappyPCSubclass basicNew]
Note that we use basicNew here, rather than new. If you used new, you would need to implement distinct overrides of new in those subclasses; otherwise they would just inherit and re-send the AbstractClass class>>new message again, recursing endlessly.
... or you could do something like:
AbstractClass class>>#new
^ (self platform concreteClassFor: self) basicNew initialize.
which is basically the same idea, but without the ifs :)
The key point of double dispatch is that by swapping the receiver and the argument of the original message, you perform a second dynamic (virtual) call, and you thereby get the effect of selecting a method based on both the receiver of the message and its argument. Therefore you need a message that takes an argument.
Here is a typical example of double dispatch: adding integers and floats, performing the adequate conversions.
Integer>>+ arg
^ arg sumFromInteger: self
Float>>+ arg
^ arg sumFromFloat: self
Integer>>sumFromInteger: anInt
<primitive adding two ints>
Integer>>sumFromFloat: aFloat
^ self asFloat + aFloat
Float>>sumFromFloat: aFloat
<primitive adding two floats>
Float>>sumFromInteger: anInt
^ self + anInt asFloat
Now 1 + 1.0 will first hit + on Integer, then Float>>sumFromInteger:, then + on Float, and finally Float>>sumFromFloat:. Note that at that point we already have enough information, so we could shortcut the second + invocation.
What the example shows is that during the first call, the dynamic message resolution finds one method (acting as a dynamic case analysis on the receiver), and then, by swapping the argument and the receiver, the dynamic message resolution performs another case analysis, this time based on the argument. So in the end you get a method selected using the two objects of the original call.
Now about your question: in Pharo, class-side messages are looked up dynamically, so you could implement instance creation methods using double dispatch without a problem, but the goal is unclear.
MyClass class>>newWith: arg
    ^ arg newFromMyClass: self
I've found the InvokeDynamic class and have made it work with a static method handle acquired via MethodHandles.Lookup.findStatic().
Now I am trying to do the same thing, but with a virtual method handle acquired via MethodHandles.Lookup.findVirtual().
I can cause my bootstrap method to run, and I make sure in my bootstrap method that I'm returning a ConstantCallSite(mh), where mh is the result of calling MethodHandles.Lookup.findVirtual(). (This part all works fine, i.e. I understand how "indy" works.)
However, when I use the resulting Implementation as the argument to an intercept() call, I cannot pass the actual object on which the method represented by the method handle is to be invoked. This is due to the withArgument() method being used for two contradictory purposes.
Here is my recipe:
Implementation impl =
    InvokeDynamic.bootstrap(myBootstrapDescription, someOtherConstantArgumentsHere)
        .invoke(theMethodName, theMethodReturnType)
        // 0 is the object on which I want to invoke my virtual-method-represented-by-a-method-handle;
        // 1 is the sole argument that the method actually takes.
        .withArgument(0, 1);
There are some problems here.
Specifically, it seems that withArgument() is used by ByteBuddy for two things, not just one:
Specifying the parameter types that will be used to build a MethodType that will be supplied to the bootstrap method. Let's say my virtual method takes one argument.
Specifying how the instrumented method's arguments are passed to the actual method handle execution.
If I have supplied only one argument, the receiver type is left unbound and execution of the resulting MethodHandle cannot happen, because I haven't passed an argument that will be used for the receiver type "slot". If I accordingly supply two arguments to (1) above (as I do in my recipe), then the method handle is not found by my bootstrap method, because the supplied MethodType indicates that the method I am searching for requires two arguments, and my actual method that I'm finding only takes one.
Finally, I can work around this (and validate my hypothesis) by doing some fairly ugly stuff in my bootstrap method:
First, I deliberately continue to pass two arguments, not one, even though my method only takes one argument: withArgument(0, 1)
In my bootstrap method, I now know that the MethodType it will receive will be "incorrect" (it will have two parameter types, not one, where the first parameter type will represent the receiver type). I drop the first parameter using MethodType#dropParameterTypes(int, int).
I call findVirtual() with the new MethodType. It returns a MethodHandle whose type has two parameter types: the receiver type that it adds automatically, and the existing non-dropped parameter type.
(More simply I can just pass a MethodType as a constant to my bootstrap method via, for example, JavaConstant.MethodType.of(myMethodDescription) or built however I like, and ignore the one that ByteBuddy synthesizes. It would still be nice if there were instead a way to control the MethodType that ByteBuddy supplies (is obligated to supply) to the bootstrap method.)
When I do things like this in my bootstrap method, my recipe works. I'd prefer not to tailor my bootstrap method to ByteBuddy, but will here if I have to.
Is it a bug that ByteBuddy does not seem to allow InvokeDynamic to specify the ingredients for a MethodType directly, without also specifying the receiver?
What you described is entirely independent of Byte Buddy. It's just the way invokedynamic works.
JVMS, §5.4.3.6
5.4.3.6. Dynamically-Computed Constant and Call Site Resolution
To resolve an unresolved symbolic reference R to a dynamically-computed constant or call site, there are three tasks. First, R is examined to determine which code will serve as its bootstrap method, and which arguments will be passed to that code. Second, the arguments are packaged into an array and the bootstrap method is invoked. Third, the result of the bootstrap method is validated, and used as the result of resolution.
…
The second task, to invoke the bootstrap method handle, involves the following steps:
An array is allocated with component type Object and length n+3, where n is the number of static arguments given by R (n ≥ 0).
The zeroth component of the array is set to a reference to an instance of java.lang.invoke.MethodHandles.Lookup for the class in which R occurs, produced as if by invocation of the lookup method of java.lang.invoke.MethodHandles.
The first component of the array is set to a reference to an instance of String that denotes N, the unqualified name given by R.
The second component of the array is set to the reference to an instance of Class or java.lang.invoke.MethodType that was obtained earlier for the field descriptor or method descriptor given by R.
Subsequent components of the array are set to the references that were obtained earlier from resolving R's static arguments, if any. The references appear in the array in the same order as the corresponding static arguments are given by R.
A Java Virtual Machine implementation may be able to skip allocation of the array and, without any change in observable behavior, pass the arguments directly to the bootstrap method.
So the first three arguments to the bootstrap method are provided by the JVM according to the rules cited above. Only the other arguments are under the full control of the programmer.
The method type provided as the third argument always matches the type of the invokedynamic instruction, which describes the element types to pop from the operand stack and the type to push afterwards, if not void. Since this happens automatically, there's not even a possibility to create contradicting, invalid bytecode in that regard; there is just a single method type stored in the class file.
If you want to bind the invokedynamic instruction to an invokevirtual operation using a receiver from the operand stack, you have exactly the choices already mentioned in your question. You may derive the method from other bootstrap arguments or drop the first parameter type of the instruction’s type. You can also use that first parameter type to determine the target of the method lookup. There’s nothing ugly in this approach; it’s the purpose of bootstrap methods to perform adaptations.
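For illustration, here is a minimal sketch of such a bootstrap method; the class and method names are hypothetical, and it assumes the call-site descriptor arrives with the receiver as its first parameter type, uses that type as the lookup target, and drops it before calling findVirtual():

import java.lang.invoke.CallSite;
import java.lang.invoke.ConstantCallSite;
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public final class VirtualBootstraps {

    // type arrives as e.g. (Target, String)String for a virtual method String frob(String) on Target
    public static CallSite bootstrap(MethodHandles.Lookup lookup, String name, MethodType type)
            throws NoSuchMethodException, IllegalAccessException {
        // The first parameter type of the call-site descriptor is the receiver that
        // will be on the operand stack; use it as the target of the method lookup ...
        Class<?> receiverType = type.parameterType(0);
        // ... and drop it to recover the nominal type of the virtual method itself.
        MethodType virtualType = type.dropParameterTypes(0, 1);
        // findVirtual() re-prepends the receiver, so the resulting handle matches the
        // call-site descriptor again; asType() covers any remaining reference casts.
        MethodHandle target = lookup.findVirtual(receiverType, name, virtualType);
        return new ConstantCallSite(target.asType(type));
    }
}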
I am using ByteBuddy to generate a class.
Prior to working with DynamicType.Builder, I was going to store a MethodCall as an instance variable:
private final MethodCall frobCall =
MethodCall.invoke(ElementMatchers.named("frob")); // here I invoke a method I'm going to define as part of the instrumented type
Then later in my generation logic for the instrumented type I define the frob method to do something:
.defineMethod("frob")
.intercept(...etc....) // here I define frob to do something
…and I define the (let's say) baz method to invoke frob:
.defineMethod("baz")
.withParameter(...) // etc.
.intercept(frobCall); // invokes "frob", which I've just defined above
(I am trying to keep this simple and may have mistyped something but I hope you can see the gist of what I'm trying to do.)
When I make() my DynamicType, I receive an error that indicates that the dynamic type does not define frob. This is mystifying to me, because of course I have defined it, as you can see above.
Is there some restriction I am unaware of that prohibits ElementMatchers from identifying instrumented type methods that are defined later? Do I really have to use MethodDescription.Latent here?
It should match all methods of the instrumented type. If this is not happening as expected, please set a breakpoint in MethodCall.MethodLocator.ForElementMatcher to see why the method is not showing up. I assume it is filtered by your method matcher.
I noticed, however, that it did not include private methods; this is now fixed and will be released with Byte Buddy 1.10.18.
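For reference, here is a minimal, self-contained sketch of the recipe from the question that resolves as expected (frob and baz are kept from the question, but both are defined public and parameterless here for brevity, since private methods are only matched from Byte Buddy 1.10.18 onwards):

import net.bytebuddy.ByteBuddy;
import net.bytebuddy.description.modifier.Visibility;
import net.bytebuddy.dynamic.DynamicType;
import net.bytebuddy.implementation.FixedValue;
import net.bytebuddy.implementation.MethodCall;
import net.bytebuddy.matcher.ElementMatchers;

public class FrobBazExample {

    public static void main(String[] args) throws Exception {
        // frob is defined first with a trivial body; baz then delegates to it
        // through the same matcher-based MethodCall used in the question.
        DynamicType.Unloaded<Object> unloaded = new ByteBuddy()
            .subclass(Object.class)
            .defineMethod("frob", String.class, Visibility.PUBLIC)
            .intercept(FixedValue.value("frobbed"))
            .defineMethod("baz", String.class, Visibility.PUBLIC)
            .intercept(MethodCall.invoke(ElementMatchers.named("frob")))
            .make();

        Object instance = unloaded.load(FrobBazExample.class.getClassLoader())
            .getLoaded()
            .getDeclaredConstructor()
            .newInstance();

        // Prints "frobbed": baz resolves and invokes the frob defined on the instrumented type.
        System.out.println(instance.getClass().getMethod("baz").invoke(instance));
    }
}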
I know that everything is an object and you send messages to objects in Smalltalk to do almost everything.
Now how can we implement an object (memory representation and basic operations) to represent a primitive data type? For example, how is + for integers implemented?
I looked at the source code of GNU Smalltalk and found this in SmallInt.st. Can someone explain this piece of code?
+ arg [
"Sum the receiver and arg and answer another Number"
<category: 'built ins'>
<primitive: VMpr_SmallInteger_plus>
^self generality == arg generality
ifFalse: [self retrySumCoercing: arg]
ifTrue: [(LargeInteger fromInteger: self) + (LargeInteger fromInteger: arg)]
]
Here is the link to the above code: https://github.com/gnu-smalltalk/smalltalk/blob/62dab58e5231909c7286f1e61e26c9f503b2b3df/kernel/SmallInt.st
Conceptually speaking, primitive methods are pieces of behavior (routines) implemented by the Virtual Machine (VM), not by regular Smalltalk code.
When the Smalltalk compiler finds the statement <primitive: ...> it interprets this as a special type of method whose argument (in your case VMpr_SmallInteger_plus) indicates the integer index of the target routine within the VM.
In this sense a primitive is a global routine not bound to the MethodDictionary of any particular class. The primitive logic is intended for a receiver and arguments of certain classes, and that's why it must check that the receiver and the arguments (if any) conform to its requirements. If they don't, the primitive fails, and in that case control flows to the Smalltalk code that follows the <primitive: ...> statement. Otherwise the primitive succeeds and the Smalltalk code below is not executed. Note also that the compiler will not allow any Smalltalk code other than temporary declarations above the <primitive: ...> statement.
In your example, if the argument arg is not of the expected class (presumably a SmallInteger) the routine gives up trying to sum it to the receiver and delegates the resolution of the operation to the Smalltalk code.
If the argument happens to be a SmallInteger, the primitive will compute the result (using the routine held in the VM) and answer with it.
I haven't seen the code of this primitive but it could also happen that the primitive fails if the result of the sum does not fit in a SmallInteger, in which case both the receiver and the argument would be cast to LargeIntegers and the addition would take place in the #+ method of the appropriate class (LargePositiveInteger or LargeNegativeInteger).
The other branch of the Smalltalk code allows for the implementation of a polymorphic sum between a SmallInteger and any other type of object. For instance this part of the Smalltalk code would take place if you evaluate 3 + 4.0 because in this case the argument is a Float. Something similar happens if you evaluate 3 + (4 / 3), etc.
I'm very new to Smalltalk and would like to understand a few things and confirm others (in order to see if I'm getting the idea or not):
1) In Smalltalk variables are untyped?
2) The only "type check" in Smalltalk occurs when a message is sent, and the inheritance hierarchy is climbed in order to bind the message to a method? And if the class Object is reached without finding the method, a run-time error is thrown because the method doesn't exist?
3) There are no coercions because there are no types...?
4) Is it possible to overload methods or operators?
5) Is there some kind of Genericity? I mean, parametric polymorphism?
6) Is there some kind of compatibility/equivalence check for arguments when a message is sent? or when a variable is assigned?
Most questions probably have very short answers (If I'm in the right direction).
1) Variables have no declared types. They are all implicitly references to objects. The objects know what kind they are.
2) There is no implicit type check but you can do explicit checks if you like. Check out the methods isMemberOf: and isKindOf:.
3) Correct. There is no concept of coercion.
4) Operators are just messages. Any object can implement any method so, yes it has overloading.
5) Smalltalk is the ultimate in generic. Variables and collections can contain any object. Languages that have "generics" make the variables and collections more specific. Go figure. Polymorphism is based on the class of the receiver. To do multiple polymorphism use double dispatching.
6) There are no implicit checks. You can add your own explicit checks as needed.
Regarding answer 3): you can change the type of an object using messages like #changeClassTo:, #changeClassToThatOf:, and #adoptInstance:. There are, of course, caveats on what can be converted to what. See the method comments.
For the sake of completion, an example from the Squeak image:
Integer>>+ aNumber
    "Refer to the comment in Number + "
    aNumber isInteger ifTrue:
        [self negative == aNumber negative
            ifTrue: [^ (self digitAdd: aNumber) normalize]
            ifFalse: [^ self digitSubtract: aNumber]].
    aNumber isFraction ifTrue:
        [^ Fraction numerator: self * aNumber denominator + aNumber numerator
            denominator: aNumber denominator].
    ^ aNumber adaptToInteger: self andSend: #+
This shows:
that classes work as some kind of 'practical typing', effectively differentiating things that can be summed (see below).
a case of explicitly checking for Type/Class. Of course, if the parameter is not an Integer or Fraction, and does not understand #adaptToInteger:andSend:, it will raise a DNU (doesNotUnderstand:, see below).
some kind of 'coercion' going on, but not implicitly. The last line:
^aNumber adaptToInteger: self andSend: #+
asks the method's argument to do the appropriate thing to add itself to an integer. This can involve asking the original receiver to return, say, a version of itself as a Float.
(doesn't really show, but insinuates) that #+ is defined in more than one class. Operators are defined as regular methods; they're called binary methods. The differences are that some Smalltalk dialects limit their selectors to two characters, and that they have their own precedence.
an example of dispatching on the type of the receiver and the argument. It uses double dispatch (see 3).
an explicit check where it's needed. Objects can be seen as having types (classes), but variables are not typed. They just hold references to any object, as Smalltalk is dynamically typed.
This also shows that much of Smalltalk is implemented in Smalltalk itself, so the image is always a good place to look for this kind of thing.
About DNU errors, they are actually a bit more involved:
When the search reaches the top class in the inheritance chain (presumably ProtoObject) and the method is not found, a #doesNotUnderstand: message is sent to the object (with the message that was not understood as its parameter) in case it wants to handle the miss. If #doesNotUnderstand: is not overridden anywhere, the lookup once again climbs up to Object, whose implementation is to throw an error.
Note: I'm not sure about the equivalence between Classes and Types, so I tried to be careful about that point.
I am trying to implement a JSON-RPC solution, using a server connector object which obtains a list of available functions from a server somehow like
NSDictionary *functions = [server callJSONFunction: @"exposedFunctions"
                                          arguments: nil];
which is a simplified description, since callJSONFunction actually triggers an asynchronous NSURLConnection.
An element of the function list consists of a string describing the Objective-C selector, the original function name which will be called using the mechanism mentioned above, the function signature, and an optional array of argument names.
So for example a function list could look like this:
(
    @"someFunctionWithArgumentOne:argumentTwo:" =
    {
        signature = @"@@:@@",
        functionName = @"someFunction",
        arguments = ( @"arg_one", @"arg_two" )
    },
    @"anotherFunction" =
    {
        signature = @"@@:",
        functionName = @"anotherFunction"
    }
)
As soon as the function list was successfully retrieved, the selectors are added to the server connector instance using class_addMethod in a loop:
for ( NSString *selectorName in functions ) {
    SEL aSelector = NSSelectorFromString( selectorName );
    IMP methodIMP = class_getMethodImplementation(
        [ self class ], @selector( callerMethod: ) );
    class_addMethod( [ self class ], aSelector, methodIMP, "v@:@@@@" );
}
where callerMethod: is a wrapper method used to compose the actual request, consisting of the function name as an NSString and an NSDictionary of the form
{ @"argument1_name" = arg1, @"argument2_name" = arg2, ... }
hence the signature "v@:@@". callerMethod: then invokes callJSONFunction: on the server.
After this exhausting introduction (my bad, I just did not know how to shorten it) I'll finally get to the point: to cover the possibility of different numbers of arguments,
I defined the callerMethod like
- (void) callerMethod: (id)argument, ... { }
wherein I use the va_* macros from stdarg.h to obtain the passed arguments. But when I test the mechanism by invoking
[serverConnector someFunctionWithArgumentOne: @"Argument 1"
                                 argumentTwo: @"Argument 2" ];
the first argument returned by id arg = va_arg( list, id ); is always @"Argument 2"!
I'd really appreciate all theories and explanations on why that happens. This thing is really driving me nuts!
Var-args do not map to regular argument passing quite so neatly. Encoding of arguments is actually quite architecture-specific and rife with highly entertaining details that sometimes seem like they are self-conflicting (until you discover the note about the one exception to the rule that makes the whole thing coherent). If you really want a good read [sarcasm intended], go have a look at how ppc64 handles long doubles sometime; there are cases where half of the double will be in a register and the other half on the stack. Whee!
The above long, and slightly frothy due to scarring, paragraph is to say that you can't transparently forward a call from one function to another where the two functions take different arguments. Since an Objective-C method is really just a function, the same holds true for methods.
Instead, use NSInvocation as it is designed to hide all of the esoteric details of argument encoding that comprise any given platform's ABI.
In your case, though, you might be able to get away with class_addMethod() by defining a set of functions that cover all possible combinations of arguments. You don't even really need to make a dictionary, as you can use the dlsym() function to look up the correct function. I.e.
id trampolineForIdIdSELIdIdInt( id self, SEL _cmd, id obj1, id obj2, int anInt ) {
    // ... your magic here ...
}
Then, you could translate the type string "@@:@@i" into that function name and pass it to dlsym(), grab the result, and use class_addMethod()....
I do feel an obligation to also mention this book as it is a sort of "whoah... man... if we represent classes as objects called meta classes that are themselves represented as classes then we can, like, redefine the universe as metaclasses and classes" ultimate end of this line of thinking.
Also see this unfinished book by Gregor Kiczales and Andreas Paepcke.