To create an object (of some class) in a listener - oop

I'm creating a script and am having trouble.
Is it possible to create an object (of some class) from within a listener?
I tried it, but I get an error: "class not found".
I want to do something like:
class ONE {
    class_ONE_code
}
class TWO {
    object o = alloc(ONE)
}
I need this to create a new listener when I execute another listener.

What you wish to do is certainly possible. Most likely you have a syntax error in your code. For example, your implementation of class TWO is invalid since a member variable like "o" cannot be initialized in the member declaration section of the class code. This can only be done within a class method, as illustrated in the example code below.
class One
{
    void DoClassOneAction(Object self)
    {
        OKDialog("Class One action executed.");
    }
}

class Two
{
    Object oneInstance;

    void DoClassTwoAction(Object self)
    {
        if (!oneInstance.ScriptObjectIsValid())
            oneInstance = Alloc(One);
        oneInstance.DoClassOneAction();
    }
}

void main()
{
    Object twoInstance = Alloc(Two);
    twoInstance.DoClassTwoAction();
}

main();
Note that the coding requirements for DM script classes differ somewhat from those of other languages that support objects. You may want to review details in the Scripting > Objects section of the DM on-line help (accessed via Help > Search… menu item).

Related

Is it possible to have a field with a generic type that refers to the actual runtime type of the containing class?

I'm fiddling around with this code where I have a base class Node which can be extended:
open class Node
class SubNode : Node()
Now, I have a Behavior class that can be attached to a node, and when this attachment happens, the behavior object is invoked:
open class Behavior {
    fun attach(node: Node) {
        println("Behavior was attached to a node")
    }
}

open class Node {
    var behavior: Behavior? = null
        set(value) {
            field = value
            value?.attach(this)
        }
}
This works, but could this be generified in such way that the type of the attach method would always refer to the actual type of the attached Node? For instance, if the Behavior class was extended like this:
open class Behavior<NodeType : Node> {
    open fun attach(node: NodeType) {
    }
}

class SubBehavior : Behavior<SubNode>() {
    override fun attach(node: SubNode) {
    }
}
I've tried various ways of setting up the types in Node class, but can't figure any other way than passing the actual subclass type to the base class (which seems rather cumbersome):
open class Node<SubType : Node<SubType>> {
    var behavior: Behavior<SubType>? = null
}

class SubNode : Node<SubNode>()
Is there a way to do this in any other way?
I think what you need are self types, which don't exist in Kotlin (at least, not yet).
Using recursive generics like you did is the most common way around the problem.
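For reference, here is a minimal sketch of that recursive-generics workaround; the SelfT name and the unchecked cast of this are illustrative choices, not the only way to wire it up:
open class Behavior<N : Node<N>> {
    open fun attach(node: N) {
        println("Behavior was attached to a node")
    }
}

open class Node<SelfT : Node<SelfT>> {
    var behavior: Behavior<SelfT>? = null
        @Suppress("UNCHECKED_CAST")
        set(value) {
            field = value
            // The cast is safe as long as every subclass passes itself as SelfT.
            value?.attach(this as SelfT)
        }
}

class SubNode : Node<SubNode>()

class SubBehavior : Behavior<SubNode>() {
    override fun attach(node: SubNode) {
        println("SubBehavior was attached to a SubNode")
    }
}

fun main() {
    SubNode().behavior = SubBehavior()   // prints "SubBehavior was attached to a SubNode"
}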
That said, I have trouble understanding your use case for intertwining these two classes this way. For example, how is the behavior actually used inside your node?

Koin - How to generify Singleton creation?

I have a class InteractorCache<T> that I would like to inject in different places using Koin.
I would like to create a singleton instance of that class based on the type T. So if I have 10 types T, I would like 10 different singletons.
So far I managed to do the above with the following code (this is an example with only 2 types, A and B):
val interactorAModule = module {
    factory {
        InteractorA(get())
    }
}

val aCache = module {
    single(named("A")) {
        InteractorCache<List<A>>()
    }
}

val interactorBModule = module {
    factory {
        InteractorB(get())
    }
}

val bCache = module {
    single(named("B")) {
        InteractorCache<List<B>>()
    }
}
This works but there is a lot of repetition as I have to create a new cache module (aCache, bCache) every time I create a new type. I would like to be able to do something like this instead:
val cacheModule = module {
    single<T> {
        InteractorCache<T>()
    }
}
so there is only 1 declaration that works for any type T.
Is there a way to do this in Koin?
Although this is late: making a generic T a singleton is a bad idea. When you declare a class as a singleton, only a single instance exists at runtime, so the first type you bind to T becomes fixed for that instance. For example, once the instance has been created as an InteractorCache for A, it is effectively a fixed instance for A and can no longer be handed out for B; you would get an incompatible or mismatched InteractorCache.
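If the goal is only to cut down the boilerplate while still keeping one named singleton per type, a plain helper function can generate each module. This is just a sketch: typedCacheModule is a made-up helper, and the module/single/named calls assume the standard Koin DSL:
import org.koin.core.qualifier.named
import org.koin.dsl.module

// Made-up helper: still one distinct named singleton per type,
// but without hand-writing a module block for every new cache.
inline fun <reified T> typedCacheModule(qualifier: String) = module {
    single(named(qualifier)) { InteractorCache<T>() }
}

// One declaration per type instead of one module block per type.
val aCache = typedCacheModule<List<A>>("A")
val bCache = typedCacheModule<List<B>>("B")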

Mockito mocking method with class parameter vs actual object parameter

What is the difference between these two as per Mockito -
Mockito.when(serviceObject.myMethod(Customer.class)).thenThrow(new RuntimeException());
and
Customer customer = new Customer();
Mockito.when(serviceObject.myMethod(customer)).thenThrow(new RuntimeException());
And if both serve the same purpose, which one is considered best practice?
There is a misunderstanding on your side - that method specification myMethod(SomeClass.class) is only possible when the signature of that method allows for a Class parameter. Like:
Whatever myMethod(Object o) {
or directly
Whatever myMethod(Class<X> clazz) {
In other words: it is not Mockito that does something special about a parameter that happens to be of class Class!
Thus your first option is not something that works "in general". Example: I put down this code in a unit test:
static class Inner {
    public int foo(String s) { return 5; }
}

@Test
public void testInner() {
    Inner mocked = mock(Inner.class);
    when(mocked.foo(Object.class)).thenReturn(4);
    System.out.println(mocked.foo(""));
}
And guess what - the above does not compile. Because foo() doesn't allow for a Class parameter. We can rewrite to
static class Inner {
    public int foo(Object o) { return 5; }
}

@Test
public void testInner() {
    Inner mocked = mock(Inner.class);
    when(mocked.foo(Object.class)).thenReturn(4);
    System.out.println(mocked.foo(""));
}
And now the above compiles - but prints 0 (zero) when invoked. Because the above would be the same as mocked.foo(eq(Object.class)). In other words: when your method signature allows for passing a Class instance and you then pass a Class instance, that is a simple mocking specification for Mockito. In my example: when the incoming object would be Object.class - then 4 would be returned. But the incoming object is "" - therefore the Mockito default kicks in and 0 is returned.
I am with the other answer here - I think you are mixing this up with the fact that older versions of Mockito asked you to write down when(mocked.foo(any(ExpectedClass.class))) - which can nowadays be written as when(mocked.foo(any())). But when(mocked.foo(ExpectedClass.class)) is not a Mockito construct - it is a simple method specification that gives a specific object to "match on" - and that specific object happens to be an instance of class Class.
The first one, which uses the Customer class to match the type, can also be written as:
Mockito.when(serviceObject.myMethod(Mockito.any(Customer.class))).thenThrow(new RuntimeException());
In case of the second one, you are passing the actual object that will be used in stubbing.
Usage:
If your method myMethod throws the exception based on the state of the Customer object then you can use the latter approach, where you can set the state of the Customer object appropriately.
However, if your method myMethod does not depend on the Customer object to throw the exception, and you only need it as an argument to invoke the method, then you can take the former approach.

Create Method via GDSL script that has a delegating closure parameter

Using the (scarcely documented) GDSL scripts of IntelliJ, one can add dynamic methods to a class:
contributor(context(ctype: "my.Type")) {
    method name: "doIt", params: [body: {}], type: void
}
One can also configure the delegation of a closure:
contributor(context(scope: closureScope())) {
    def call = enclosingCall("doIt")
    if (call) {
        def method = call.bind()
        def clazz = method?.containingClass
        if (clazz?.qualName == 'my.Type') {
            delegatesTo(findClass('my.Inner'))
        }
    }
}
Which, when doIt is a method that is defined in the code (not dynamically added), also works as designed.
However, when using the closureScope with the previously created method, the containing class of the bound method is always null, meaning that I cannot safely delegate inside the closure to the addressed my.Inner class.
What I want is to add a dynamic method equivalent to:
void doIt(@DelegatesTo(my.Inner) Closure)...
I.e. I want the method to be available in code completion (this works), and inside the closure created this way, I want correct code completion when addressing methods of my.Inner.
So far, I tried various approaches:
include the @DelegatesTo annotation in the param definition
try more esoteric approaches in finding the owner of the closure, which fails because the GrMethodCall simply has no parent
unconditionally delegating all closures named doIt to my.Inner, which works but is not a viable solution, since I have multiple doIt methods (on different classes) delegating to different targets.
So, how can I make IDEA behave as expected and delegate to the correct target?
Edit to make it clearer:
Given the following classes:
package my

class Type {
    void doIt(Closure c) {}
}

class Inner {
    void inInner() {}
}
and the following GDSL:
contributor(context(scope: closureScope())) {
    def call = enclosingCall("doIt")
    if (call) {
        def method = call.bind()
        def clazz = method?.containingClass
        println clazz?.qualName
        if (clazz?.qualName == 'my.Type') {
            delegatesTo(findClass('my.Inner'))
        }
    }
}
when I start typing in a new script:
new Type().doIt {
    inInner()
}
When inside the closure, I get the following:
code completion for inInner
inInner is shown as valid
The console output when started with idea.bat from commandline shows the line my.Type (from the println)
Ctrl-B on inInner correctly links to source code.
(The same behaviour can be achieved without the GDSL by annotating the Closure parameter of the doIt method with @DelegatesTo(Inner).)
However, I do not want to manually include the doIt method in the source of Type; it is generated by an AST transformation, so my source file now looks like this:
package my

class Type {
}

class Inner {
    void inInner() {}
}
I can tell IntelliJ about the new method using the following GDSL snippet:
contributor(context(ctype: "my.Type")) {
    method name: "doIt", params: [body: {}], type: void
}
Now the IDE correctly recognizes the doIt method with a closure parameter. However, inside the Closure, the following happens:
sometimes code completion shows inInner, sometimes after changing something, it does not (when using the original code to fix a type, it was shown, but later declared "unresolved", after going through the code changes of this edited example, it is not shown anymore...)
Even when shown, inInner is shown with "cannot resolve symbol" decoration
the console shows null as clazz, i.e. the method is found, but not linked to an owner ASTNode
Ctrl-B does not link to the corresponding method in Inner
So what I want is the same behaviour for an injected doIt method (via GDSL) as with a method included in the source, i.e. I want the GDSL to inject a doIt method with a closure delegating to Inner into the Type class.
This worked for me: adding the ctype to the closureScope() instead of finding the class type from the bound method:
contributor(context(scope: closureScope(), ctype: 'my.Type')) {
    def call = enclosingCall("doIt")
    if (call) {
        delegatesTo(findClass('my.Inner'))
    }
}

code in the middle is different, everything else the same

I often have a situation where I need to do:
function a1() {
    a = getA;
    b = getB;
    b.doStuff();
    ....  // do some things
    b.send()
    return a - b;
}

function a2() {
    a = getA;
    b = getB;
    b.doStuff();
    ....  // do some things, but different to above
    b.send()
    return a - b;
}
I feel like I am repeating myself, yet where I have ...., the methods are different, have different signatures, etc.
What do people normally do? Add an if (this type) do this stuff, else do the other stuff that is different? It doesn't seem like a very good solution either.
Polymorphism and possibly abstraction and encapsulation are your friends here.
You should specify better what kind of instructions you have on the .... // do some things part. If you're always using the same information, but doing different things with it, the solution is fairly easy using simple polymorphism. See my first revision of this answer. I'll assume you need different information to do the specific tasks in each case.
You also didn't specify if those functions are in the same class/module or not. If they are not, you can use inheritance to share the common parts and polymorphism to introduce different behavior in the specific part. If they are in the same class you don't need inheritance nor polymorphism.
In different classes
Taking into account you're stating in the question that you might need to make calls to functions with different signature depending on the implementation subclass (for instance, passing a or b as parameter depending on the case), and assuming you need to do something with the intermediate local variables (i.e. a and b) in the specific implementations:
Short version: Polymorphism+Encapsulation: Pass all the possible in & out parameters that every subclass might need to the abstract function. Might be less painful if you encapsulate them in an object.
Long Version
I'd store the intermediate state in a member of the generic class and pass it to the implementation methods. Alternatively, you could grab the State from the implementation methods instead of passing it as an argument. Then you can make two subclasses implementing the doSpecificStuff(State) method, grabbing the needed parameters from the intermediate state prepared in the superclass. If needed by the superclass, subclasses might also modify the state.
(Java specifics next, sorry)
public abstract class Generic {

    private State state = new State();

    public Object a() {
        preProcess();
        prepareState();
        doSpecificStuff(state);
        clearState();
        return postProcess();
    }

    protected void preProcess() {
        a = getA;
        b = getB;
        b.doStuff();
    }

    protected Object postProcess() {
        b.send();
        return a - b;
    }

    protected void prepareState() {
        state.prepareState(a, b);
    }

    private void clearState() {
        state.clear();
    }

    protected abstract void doSpecificStuff(State state);
}

public class Specific extends Generic {
    @Override
    protected void doSpecificStuff(State state) {
        state.getA().doThings();
        state.setB(someCalculation);
    }
}

public class Specific2 extends Generic {
    @Override
    protected void doSpecificStuff(State state) {
        state.getB().doThings();
    }
}
In the same class
Another possibility would be making the preProcess() method return a State variable, and using it in the implementations of a1() and a2().
public class MyClass {

    protected State preProcess() {
        a = getA;
        b = getB;
        b.doStuff();
        return new State(a, b);
    }

    protected Object postProcess() {
        b.send();
        return a - b;
    }

    public Object a1() {
        State st = preProcess();
        st.getA().doThings();
        State.clear(st);
        return postProcess();
    }

    public Object a2() {
        State st = preProcess();
        st.getB().doThings();
        State.clear(st);
        return postProcess();
    }
}
Well, don't repeat yourself. My golden rule (which admittedly I break from time to time) is based on the ZOI rule: all code must live exactly zero, one, or infinite times. If you see code repeated, you should refactor it into a common ancestor.
That said, it is not possible to give you a definite answer on how to refactor your code; there are infinite ways to do this. For example, if a1() and a2() reside in different classes, then you can use polymorphism. If they live in the same class, you can create a function that receives an anonymous function as a parameter, and then a1() and a2() are just wrappers around that function. A (shudder) flag parameter that changes the function's behavior can be used, too.
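To make the "function received as a parameter" idea concrete, here is a small sketch in Kotlin; B, getA, getB, doStuff and send are invented stand-ins for the question's pseudocode:
// Stand-ins for the placeholders used in the question.
class B(var value: Int) {
    fun doStuff() { value += 1 }
    fun send() = println("sending $value")
}

fun getA() = 10
fun getB() = B(3)

// The shared skeleton: the differing middle part is passed in as a function.
fun shared(middle: (a: Int, b: B) -> Unit): Int {
    val a = getA()
    val b = getB()
    b.doStuff()
    middle(a, b)          // only this part differs between a1 and a2
    b.send()
    return a - b.value
}

// a1 and a2 become thin wrappers around the shared skeleton.
fun a1() = shared { _, b -> b.value *= 2 }      // "do some things"
fun a2() = shared { a, b -> b.value += a }      // "do some things, but different"

fun main() {
    println(a1())   // "sending 8", then 2
    println(a2())   // "sending 14", then -4
}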
You can solve this in one of two ways. Both a1 and a2 will call a3; a3 will do the shared code, and:
1. call a function that it receives as a parameter, which does either the middle part of a1 or the middle part of a2 (the callers pass the correct function),
- or -
2. receive a flag (e.g. a boolean) that tells it which part it needs to do, and use an if statement to execute the correct code.
This screams out loud for the design pattern "Template Method"
The general part is in the super class:
package patterns.templatemethod;

public abstract class AbstractSuper {

    public Integer doTheStuff(Integer a, Integer b) {
        Integer x = b.intValue() + a.intValue();
        Integer y = doSpecificStuff(x);
        return b.intValue() * y;
    }

    protected abstract Integer doSpecificStuff(Integer x);
}
The specific part is in the subclass:
package patterns.templatemethod;

public class ConcreteA extends AbstractSuper {

    @Override
    protected Integer doSpecificStuff(Integer x) {
        return x.intValue() * x.intValue();
    }
}
For every specific solution you implement a subclass with the specific behavior.
If you put them all in a Collection, you can iterate over them, always calling the common method, and every class does its magic. ;)
Hope this helps.