I'm writing a matrix and a vector class, and I'd like to make them usable with all "sensible" numerical data types, not only the BigFraction data type I wrote myself for my current purpose.
Therefore, I'm wondering whether there is an interface which requires + and * to be implemented (or perhaps even more operator functions).
Thus, I'd like to have a useful generic constraint.
Thanks a lot in advance!
I recently came across the new value class feature in Kotlin.
From what I can tell, its purpose is roughly this:
a value class attaches a distinct type to a value and constrains its usage.
I was wondering what some practical uses of value classes are.
Well, as stated in the Kotlin documentation on inline classes:
Sometimes it is necessary for business logic to create a wrapper around some type. However, it introduces runtime overhead due to additional heap allocations. Moreover, if the wrapped type is primitive, the performance hit is terrible, because primitive types are usually heavily optimized by the runtime, while their wrappers don't get any special treatment.
To solve such issues, Kotlin introduces a special kind of class called an inline class. Inline classes are a subset of value-based classes. They don't have an identity and can only hold values.
A value class can be helpful when, for example, you want to be clear about what unit a certain value uses: does a function expect me to pass my value in meters per second or kilometers per hour? What about miles per hour? You could add documentation on what unit the function expects, but that still would be error-prone. Value classes force developers to use the correct units.
You can also use value classes to give other devs on your project a clear way to perform operations on your data, for example converting from one unit to another.
Value classes also are not assignment-compatible, so they are treated like actual new class declarations: When a function expects a value class of an integer, you still have to pass an instance of your value class - an integer won't work. With type aliases, you could still accidentally use the underlying type, and thus introduce expensive errors.
In other words, if you simply want things to be easier to read, you can just use type aliases. If you need things to be strict and safe in some way, you probably want to use value classes instead.
In statically typed languages, people can use algebraic data types to abstract data and also generate constructors, or use classes, traits, and mixins to deal with data abstraction.
Dynamically typed languages, like Python and Ruby, all provide a class system to their users.
But what about Scheme, the simplest functional language and the one closest to the λ-calculus: how does it abstract data?
Do Scheme programmers usually just put data in a list or a lambda abstraction and write some accessor functions to make it look like a tree or something else? Like EOPL says: specifying data via interfaces.
And then how does this abstraction technique relate to abstract data types (ADTs) and objects, with regard to On Understanding Data Abstraction, Revisited?
What SICP (and, I guess, EOPL) is advocating is just using functions to access data; then you can always switch one set of functions for another, implementing the same-named set of functions to work with another concrete implementation. And that (i.e. the sets of such functions) is what forms the "interfaces", and that's what you put in different source files; by just loading the appropriate one you can switch the concrete implementation while all the other code is none the wiser. That's what makes it an "abstract" datatype.
As for the algebraic data types, the old bare-bones Scheme way is to create closures (that hold and hide the data) which respond to "messages" and thus become "objects" (something about "Scheme mailboxes"). This gives us products, i.e. records, and functions we get for free from Scheme itself. For sum types, just as in C/C++, we can use tagged unions in a disciplined manner (or, again, hide the specifics behind a set of "interface" functions).
EOPL has something called "variant-case" which handles such sum types in a manner similar to pattern matching. Searching brings up e.g. this link saying
I'm using DrScheme w/ the EOPL textbook, which uses define-record and variant-case. I've got the macro definitions from the PLT site, but am now dealing with ...
so seems relevant, as one example.
Hi, I have a situation like this:
I have different items in my design, and each of these items has some specific effect on the Character. Every item has an apply function, so it can use the Character object and change its functionality. But if I change the Character class, I would have to change all the Item classes accordingly.
How can I decouple Item and Character efficiently?
The language I am going to use is C++, and I don't know the other variables and functions inside the Item and Character classes; I just want to decouple them.
You could introduce an interface (an abstract class in C++) that Character would inherit from. Let's call it ItemUser. The Item#apply signature would be changed so that it takes an ItemUser object instead of a Character. Now you are able to change the implementation of Character freely, as long as it respects the ItemUser contract.
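A minimal sketch of that idea in C++ (the member functions on ItemUser here are made up for illustration, since the question doesn't show the real classes):

```cpp
#include <iostream>

// The narrow contract that items are allowed to depend on.
class ItemUser {
public:
    virtual ~ItemUser() = default;
    virtual void addHealth(int amount) = 0;   // hypothetical capability
    virtual void addSpeed(int amount) = 0;    // hypothetical capability
};

// Items know nothing about Character, only about ItemUser.
class Item {
public:
    virtual ~Item() = default;
    virtual void apply(ItemUser& user) const = 0;
};

class HealthPotion : public Item {
public:
    void apply(ItemUser& user) const override { user.addHealth(25); }
};

// Character can be rewritten freely as long as it keeps honoring ItemUser.
class Character : public ItemUser {
public:
    void addHealth(int amount) override { health_ += amount; }
    void addSpeed(int amount) override { speed_ += amount; }
    int health() const { return health_; }
private:
    int health_ = 100;
    int speed_ = 10;
};

int main() {
    Character hero;
    HealthPotion potion;
    potion.apply(hero);                    // the item only sees the ItemUser interface
    std::cout << hero.health() << "\n";    // 125
}
```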
Check out the Decorator design pattern; it seems that this design pattern is what you are looking for. Link: Decorator design pattern
What I have understood from reading your question is: you have multiple Item classes, each with an associated effect. The effect corresponding to the type of Item object is applied to another entity, the Character. Your issue is that whenever there is a change in the Character class, your Item classes also need to change, and you want a cleaner way to avoid this.
A good way to handle change is to define a well-defined contract that is less prone to change. For example, suppose we have functionality to add two integers; later we may need to add two floating-point numbers instead, and later still we may need to replace the add operation with multiplication. In such a case you can define an abstraction Compute(INum num1, INum num2) with INum as the return type. Here INum is an abstraction for the type, and Compute is an abstraction for the behaviour of the function. The actual implementation defines INum and Compute. Code using our code then depends only on the abstractions, and we can freely modify the operation and the actual type without affecting the user code.
While implementing the contract, you can modify the internal implementation without affecting the outside code that uses the contract.
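A rough C++ sketch of that kind of contract (INum, ICompute, and the concrete classes are hypothetical, just mirroring the example above):

```cpp
#include <iostream>
#include <memory>

// Abstraction for the numeric type.
class INum {
public:
    virtual ~INum() = default;
    virtual double value() const = 0;
};

class IntNum : public INum {
public:
    explicit IntNum(int v) : v_(v) {}
    double value() const override { return v_; }
private:
    int v_;
};

// Abstraction for the behaviour; callers only see INum in and INum out.
class ICompute {
public:
    virtual ~ICompute() = default;
    virtual std::unique_ptr<INum> compute(const INum& a, const INum& b) const = 0;
};

class Add : public ICompute {
public:
    std::unique_ptr<INum> compute(const INum& a, const INum& b) const override {
        return std::make_unique<IntNum>(static_cast<int>(a.value() + b.value()));
    }
};

int main() {
    Add op;                                            // could later be swapped for a Multiply
    IntNum x(2), y(3);
    std::cout << op.compute(x, y)->value() << "\n";    // 5
}
```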
You can define an abstract class ICharacter. For attributes whose type can change in the future, you can use templates and generics, or simply create an interface for the attribute type as well and let the concrete type implement it. Refer to all your fields through interfaces. Let ICharacter define public abstract methods whose parameters and return types are also interfaces.
Let the Item class use ICharacter, and when you need to apply an effect for a given item class, just use the abstract functions defined there. The internals of Character can now change without affecting the Item class.
I wonder if the concept of multiple dispatch (that is, built-in support, as if the dynamic dispatch of virtual methods is extended to the method's arguments as well) should be included in an object-oriented language if its impact on performance would be negligible.
Problem
Consider the following scenario: I have a -- not necessarily flat -- class hierarchy containing types of animals. At different locations in my code, I want to perform some actions on an animal object. I do not care, nor can I control, how this object reference is obtained. I might encounter it by traversing a list of animals, or it might be given to me as one of a method's arguments. The action I want to perform should be specialized depending on the runtime type of the given animal. Examples of such actions would be:
Construct a view-model for the animal in order to present it in the GUI.
Construct a data object (to later store into the DB) representing this type of animal.
Feed the animal with some food, but give different kinds of food depending on the type of the animal (whatever is healthier for it).
All of these examples operate on the public API of an animal object, but what they do is not the animal's own business, and therefore cannot be put into the animal itself.
Solutions
One "solution" would be to perform type checks. But this approach is error-prone and uses reflective features, which (in my opinion) is almost always an indication of bad design. Types should be a compile-time concept only.
Another solution would be to "abuse" (sort of) the visitor pattern to mimic double dispatch. But this would require that I change my animals to accept a visitor.
I am sure there are other approaches. Also, the problem of extension should be addressed: If new types of animals join the party, how many code locations need to be adapted, and how can I find them reliably?
The Question
So, in the light of these requirements, shouldn't multiple dispatch be an integral part of any well-designed object-oriented language?
Isn't it natural to make external (not just internal) actions dependent on the dynamic type of a given object?
Best regards!
You are suggesting dynamic dispatching based on method name / signature combined with runtime actual argument types. I think you're crazy.
So, in the light of these requirements, shouldn't multiple dispatch be an integral part of any well-designed object-oriented language?
That there are problems for which the availability of the kind of dispatch strategy you envision would simplify coding is a weak argument for such dispatch being built into any given language, much less every OO language.
Isn't it natural to make external (not just internal) actions dependent on the dynamic type of a given object?
Perhaps, but not everything that seems "natural" is in fact a good idea. Clothes are not natural, for instance, but see what happens if you try going around in public without (somewhere other than Berkeley, anyway).
Some languages already have static dispatch based on argument types, more conventionally called "overloading". Dynamic dispatch based on argument types, on the other hand, is a real mess if there is more than one argument to be considered, and it cannot help but be slow(er). Today's popular OO languages provide for you to perform double dispatch where it is wanted, without the overhead of supporting it in the vast majority of places where you don't want it.
Furthermore, although implementing double-dispatch does present maintenance issues arising from tight coupling between separate components, there are coding strategies that can help keep that manageable. It is anyway unclear to what extent having argument-based multiple dispatch built in to a given language would actually help with that problem.
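For concreteness, here is one minimal way to hand-roll double dispatch in C++ for the question's animal scenario (Cat, Dog, AnimalVisitor, and Feeder are illustrative names, not anything given in the question):

```cpp
#include <iostream>

class Cat;
class Dog;

// The "action" interface: one overload per concrete animal type.
class AnimalVisitor {
public:
    virtual ~AnimalVisitor() = default;
    virtual void visit(Cat&) = 0;
    virtual void visit(Dog&) = 0;
};

class Animal {
public:
    virtual ~Animal() = default;
    virtual void accept(AnimalVisitor& v) = 0;    // first dispatch: on the animal
};

class Cat : public Animal {
public:
    void accept(AnimalVisitor& v) override { v.visit(*this); }   // second dispatch: overload resolution
};

class Dog : public Animal {
public:
    void accept(AnimalVisitor& v) override { v.visit(*this); }
};

// One concrete action, specialized per animal type, living outside the animals.
class Feeder : public AnimalVisitor {
public:
    void visit(Cat&) override { std::cout << "feeding fish\n"; }
    void visit(Dog&) override { std::cout << "feeding kibble\n"; }
};

int main() {
    Cat cat;
    Dog dog;
    Feeder feeder;
    Animal* animals[] = {&cat, &dog};
    for (Animal* a : animals) a->accept(feeder);   // "feeding fish", then "feeding kibble"
}
```

The cost, as noted above, is the coupling: the visitor interface has to know every concrete animal type, and the animals have to expose an accept method.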
One "solution" would be to perform type checks. But this approach is
error-prone and uses reflective features, which (in my opinion) is
almost always an indication of bad design. Types should be a
compile-time concept only.
You're wrong. All uses of virtual functions, virtual inheritance, and such things involve reflective features and dynamic types. The ability to defer typing until runtime when you need to is absolutely critical and is inherent in even the most basic formulation of the situation you're in, which literally cannot even arise without the use of dynamic types. You even describe your problem as wanting to do different things depending on... the dynamic type. After all, if there is no dynamic typing, why would you need to do things differently? You already know the concrete final type.
Of course, a bit of run-time typing can handle the problem you got yourself into with run-time typing.
Simply build a dictionary/hash table from type to function. You can add entries to this structure dynamically for any dynamically linked derived types, it's a nice O(1) lookup, and it requires no built-in language support.
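A minimal sketch of such a registry in C++, assuming the Animal hierarchy from the question (the key is the std::type_index of the dynamic type):

```cpp
#include <functional>
#include <iostream>
#include <typeindex>
#include <unordered_map>

struct Animal { virtual ~Animal() = default; };
struct Cat : Animal {};
struct Dog : Animal {};

// Registry from dynamic type to the action to perform on it.
using Action = std::function<void(Animal&)>;
std::unordered_map<std::type_index, Action> feedActions;

void feed(Animal& a) {
    auto it = feedActions.find(std::type_index(typeid(a)));   // typeid of the dynamic type
    if (it != feedActions.end())
        it->second(a);
    else
        std::cout << "no feeding rule registered\n";
}

int main() {
    // Entries can be added at any time, e.g. by a dynamically loaded module.
    feedActions[typeid(Cat)] = [](Animal&) { std::cout << "feeding fish\n"; };
    feedActions[typeid(Dog)] = [](Animal&) { std::cout << "feeding kibble\n"; };

    Cat cat;
    Dog dog;
    feed(cat);   // feeding fish
    feed(dog);   // feeding kibble
}
```

One caveat with this sketch: looking up the exact dynamic type does not fall back to a handler registered for a base class, so each concrete type needs its own entry.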
If one restricts oneself to the situation where knowledge of how an object of type X should fnorble an object of type Y must be stored in either class X or class Y, one can have the base type of Y include a method that accepts a reference of X's base type and indicates how much an object knows about how to be fnorbled by the object identified by that reference, as well as a method that asks the Y to have an X fnorble it.
Having done that, one can have X's Fnorble(Y) method start by asking the Y how much it knows about being fnorbled by a particular type of X. If the Y knows more about X than X knows about Y, then X's Fnorble(Y) method should call the Y's BeFnorbledBy(X) method; otherwise, the X should fnorble the Y as best it knows how.
Depending upon how many different kinds of X and Y there are, Y could define overloaded BeFnorbledBy methods for different kinds of X, such that when X calls target.BeFnorbledBy(this) it would automatically dispatch directly to a suitable method; such an approach, however, would require every Y to know about every type of X that was "interesting" to anybody, whether or not it had any interest in that particular type itself.
Note that this approach doesn't accommodate the situation where there might be an outside object of class Z which knows things about how an X should fnorble a Y that neither X nor Y knows directly. That kind of situation is best handled by having a "rulebook" object where everything that knows about how various kinds of X should fnorble various kinds of Y can tell the rulebook, and code which wants an X to fnorble a Y can ask the rulebook to make that happen. Although languages could provide assistance in cases where rulebooks are singletons, there may be times when it is useful to have multiple rulebooks. The semantics in those cases are probably best handled by having code use rulebooks directly.
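A rough sketch of such a rulebook in C++ (the X/Y/Laser/Balloon types and the pair-of-type_index key are illustrative assumptions, not a prescribed design):

```cpp
#include <functional>
#include <iostream>
#include <map>
#include <typeindex>
#include <utility>

struct X { virtual ~X() = default; };
struct Y { virtual ~Y() = default; };
struct Laser : X {};
struct Balloon : Y {};

// Rulebook: maps a (dynamic X type, dynamic Y type) pair to the fnorble rule.
class Rulebook {
public:
    using Rule = std::function<void(X&, Y&)>;

    void addRule(std::type_index xType, std::type_index yType, Rule rule) {
        rules_[{xType, yType}] = std::move(rule);
    }

    void fnorble(X& x, Y& y) const {
        auto it = rules_.find({std::type_index(typeid(x)), std::type_index(typeid(y))});
        if (it != rules_.end())
            it->second(x, y);
        else
            std::cout << "no rule for this pair\n";
    }

private:
    std::map<std::pair<std::type_index, std::type_index>, Rule> rules_;
};

int main() {
    Rulebook rules;   // rulebooks need not be singletons; several can coexist
    rules.addRule(typeid(Laser), typeid(Balloon),
                  [](X&, Y&) { std::cout << "laser pops balloon\n"; });

    Laser laser;
    Balloon balloon;
    rules.fnorble(laser, balloon);   // laser pops balloon
}
```

Using std::map keyed by a pair of std::type_index values keeps the sketch simple; an unordered_map would need a custom hash for the pair.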
I see that Sorm already supports org.joda.time.DateTime. Is there a possibility to add support for other types?
For example, my case class has a java.nio.charset.Charset or Locale field, which I would like to convert to a string. Suppose I have functions to accomplish the conversion from the custom type to/from a SQL type, how can I tell Sorm to use it?
SORM's support for a certain datatype is quite a bit more complex than just the ability to convert to and from an SQL type. Values of some types may span several columns (e.g. Tuple, Range), others may require intermediate tables (Seq, Set, Map), and all of them require an individual approach to translating query clauses. All of that would have resulted in a quite complex ad-hoc type-mapping API if one were to be exposed.
But the thing is, the above is really not the reason why such an API is not exposed and most probably never will be. You see, SORM's philosophy is essentially all about a pure immutable data model, and the cleanest way to design one is to use Scala's standard immutable datatypes and case classes.
So the clean way for you to design your application with SORM would be to convert those stateful Java classes to immutable values in your application. For instance, you could implement a custom case class Charset(...) in your model, register it with SORM's Instance, and have your conversion functions work between this type and the Java one in your application. Besides that, you could implement this Charset as an Enumeration, which seems to be the most appropriate.
Concerning your point about the Joda Time types: support for them is there mostly because some datatypes were needed to represent SQL's timestamps. See this logic as the reverse of what you were thinking of.