In Eiffel it is said that we should "loosen the pre-conditions and tighten the post-conditions", but I am not sure what this means. How does this benefit, or how is it benefited by, sub-classing?
Thank you
In Design by Contract, you specify a set of pre-conditions and a set of post-conditions for a function. For example, let's say you were writing a memory allocation function. You require that it accept a positive integer as input, and you ensure that it produces an evenly aligned pointer as its result.
Loosening the precondition means that when you create a derived class, it has to accept any input that the base class could accept, but might accept other inputs as well. Using the example above, a derived class could be written to accept a non-negative integer instead of just positive integers.
On the result side, you have to ensure that the result from a derived function meets all the requirements placed on the base function -- but it can also add more restrictions. For example, a derived version of the function above could decide to only produce results that were multiples of 8. Every multiple of 8 is clearly even, so it still meets the requirement of the base function, but has imposed an additional restriction as well.
The opposite would not work: if the base class function allows non-negative integers as input, then the derived class must continue to accept all non-negative integers as input. Attempting to change it to accept only positive integers (i.e., reject 0, which is allowed by the base class) would not be allowed -- your derived class can no longer be substituted for the base version under all circumstances.
Likewise with results: if the base class imposed a "multiple of 8" requirement on a result, the derived version must also ensure that all results are multiples of 8. Returning 2 or 4 would violate that requirement.
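To make that concrete, here is a rough sketch in Kotlin rather than Eiffel (the class and function names are invented for illustration); the same contract rules apply, with require() standing in for the precondition and check() for the postcondition:

    // Hypothetical allocator sketch: the override demands less of its caller
    // and promises more about its result.
    open class Allocator {
        open fun allocate(size: Int): Long {
            require(size > 0) { "size must be positive" }                  // precondition
            val address = size.toLong() * 2                                // stand-in for real allocation
            check(address % 2 == 0L) { "result must be evenly aligned" }   // postcondition
            return address
        }
    }

    class AlignedAllocator : Allocator() {
        override fun allocate(size: Int): Long {
            require(size >= 0) { "size must be non-negative" }             // looser: also accepts 0
            val address = (size.toLong() / 8 + 1) * 8                      // stand-in for real allocation
            check(address % 8 == 0L) { "result is a multiple of 8" }       // tighter: multiples of 8 are also even
            return address
        }
    }

A caller written against Allocator can pass any positive size to an AlignedAllocator and still count on getting an evenly aligned result, which is exactly the substitutability these rules preserve.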
In Kotlin, the Number type sounds quite useful: A type to use whenever I need something numeric.
When actually using it, however, I quickly noticed it is pretty useless: I cannot use any operators on these numbers. As soon as I need to do something with them, I need to explicitly convert them (even for comparing).
Why did the language designers choose to not include operators in the Number specification?
Thinking about this, I noticed it could be tricky to implement Number.plus(n: Number): Number, because n might be of a different type than this.
On the other hand, such implementations do exist in all Number subtypes I checked. And of course they are necessary if I want to type 1 + 1.2, which calls Int.plus(d: Double): Double.
The result for me is that I have to call .toDouble() every time I use a number. This makes the code hard to read (compare a.toDouble() < b.toDouble() with a < b).
Is there any technical reason why operators were omitted from Number?
The problem is the implementation of the compareTo method. While it sounds reasonable and easy to add it in the first place, the devil lies in the details:
How would you compare instances of arbitrary Number classes to each other? Kotlin could implement the compare method using toDouble(); however this has problems with equality/precision: How do you compare a BigDecimal to a Double? Using toDouble() on the BigDecimal might lose precision, and two (actually different) BigDecimals might be considered equal using this method.
The mess gets even worse when you start to assume one or both types were supplied by libraries, where you cannot make assumptions on precision etc.
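A small Kotlin illustration of that precision problem (assuming java.math.BigDecimal, as on the JVM): two BigDecimals that are clearly different collapse to the same Double, so a toDouble()-based comparison would wrongly report them as equal.

    import java.math.BigDecimal

    fun main() {
        val a = BigDecimal("1.00000000000000000001")
        val b = BigDecimal("1.00000000000000000002")
        println(a.compareTo(b) == 0)            // false: BigDecimal sees two different values
        println(a.toDouble() == b.toDouble())   // true: both round to 1.0 as a Double
    }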
In Java, the Number type is not Comparable either.
Furthermore, some Number values like NaN might not be comparable at all.
If you need a Number to be comparable, you can easily implement your own compareTo-method as extension function. This has some additional limitations though, as most Number subtypes implement Comparable, and the extension function will lose against that implementation.
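Such an extension might look roughly like this (a sketch only, and only reasonable if Double precision is acceptable for the Number subtypes you actually use):

    // Extension-based comparison for Number; loses precision for BigDecimal/BigInteger,
    // and is only chosen when the static types involved are Number (a member compareTo
    // on Int, Double, etc. wins over the extension otherwise).
    operator fun Number.compareTo(other: Number): Int =
        this.toDouble().compareTo(other.toDouble())

    fun main() {
        val a: Number = 1      // statically a Number, dynamically an Int
        val b: Number = 1.2    // statically a Number, dynamically a Double
        println(a < b)         // true, resolved through the extension above
    }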
Credit for this answer goes to Roland; I only extended his comments on the question into an answer.
It is my understanding that the memory layout of a Common Lisp object (bitwise tagging) is defined by CLOS (classes).
I understand that every class has a corresponding type, but not every type has a corresponding class, because types can be compound (lists). I think that types are like logical constraints, as opposed to classes that are concrete "types" with a tagging scheme.
If this is correct, does the type system serve any other purpose other than being a logical constraint (such as specifying that an integer must be within a certain range, or that an array contains a particular type)?
If this is not correct, what purpose does the type system actually serve in light of CLOS? Thanks.
An object has only one class at a time, whereas it can satisfy multiple types.
The type system is a lattice, where you can compute a least upper bound and a greatest lower bound of two types (using or and and, respectively), and which admits a top type (T) and a bottom type (the NIL type, which is not the same as the NULL type).
An implementation of Common Lisp must be able to determine if a value belongs to a type, and that starts with atomic type specifiers, like character or integer, and grows with compound type specifiers (which can be defined by the user).
But whether this is done using tags or by static analysis is left to the implementation; in practice, CL is such that there are cases where you cannot statically determine the type of an object precisely (other than T), simply because an object can be redefined at a later point: you cannot assume its type is fixed (say: a function; that's why inlining or global declarations may help with type inference).
But if you have a scope in which a type can be guaranteed to be invariant, then the compiler is free to use unboxed data types to store values. Then you don't have tagged data. That is the case for local declarations of types for variables, but also for specialized arrays: once an array is built, its element type does not change over time, and in some cases knowing that an array contains only (integer 0 15) elements can be used to pack data more efficiently.
CLOS was added to CL fairly late in the game (and it was not the only object system designed for CL).
Even with CLOS, the type system can be used by the compiler for optimizations and by users to reason about their code.
I think it's important to get away from the implementation of things, and instead concentrate on how the language thinks about them. Clearly the implementation needs to have enough information to know what sort of thing a given object is, and it's going to do that with some kind of 'tag' (which may or may not be some extra bits attached to the object -- some of it might be the leading bits of the address for instance). Below I've called this the 'representational type'. But you really have almost no access to that implementation detail from the language. It's tempting to think that type-of tells you something which maps 1-1 onto the representational type, but that's not true: (type-of (cons 1 2)) is permitted to return (cons integer integer) for instance, and I think it is probably allowed to return (cons integer number) or (cons (integer 1 1) (integer 2 2)). It's unlikely that there are distinct representational types for all of these: indeed there can't be, since (type-of 1) can return (integer m n) for an infinite number of values of m & n.
So here's a take on how the language thinks about things, and the differences between classes and types, in CL.
Both the type system and the class system consist of a bounded lattice of types / classes. Being a lattice means that for any pair of types / classes there is a unique supremum (so, for types, a unique least type of which both types are subtypes) and a unique infimum (the reverse). Being bounded means there is a top and a bottom type / class.
Classes
Classes are first-class objects (you can store a class in a variable for instance).
All objects (including classes) belong to a class, and there is a well-defined operator to find the immediate class to which any object belongs.
There are a finite number of classes.
The class of an object corresponds fairly closely to its representational type, but not completely (there may be specialized array types which do not have corresponding classes for instance).
Classes can serve as types: (typep 1 (class-of 1)) works, as does (subtypep (class-of 1) '(integer 0 1)) (the answers being t and nil, t respectively).
Types
Types are ways to denote collections of objects with common properties, but they are not themselves objects: they are, if anything, just names for collections of things -- the language specification calls these 'type specifiers'. In particular there are an infinite number of types: think of the type (integer m n) for instance. A small number of this infinitude of types correspond to representational types -- the actual information that tells the system what sort of thing something is -- but obviously most of them do not. There may be representational types which do not have corresponding types.
Types in practice serve three purposes I think.
Type information can tell the system what representational types to use, which can help it check that things are the right representational type and optimise things.
Type information can let the system make inferences which can help things significantly.
Type information can let programmers talk about what sort of things they are dealing with, even when that information is not helpful to the system. The system can treat such declarations as assertions about types which can make programs safer & easier to debug. This is an important reason for types: even if the system does not check them, it is useful for the person reading your code to know that it expects, say, an integer in [0, 30], i.e. an (integer 0 30). Indeed, even if the system does not automatically check declarations, you can force checks with, say, (check-type x (integer 0 30) ...).
The second case is interesting. Let's say I have something which I have told the system is of type (double-float 0.0d0). This is very unlikely to be more useful in terms of representational type than double-float would be. But if I take the square root of this thing then knowing this type might be very useful indeed: the system can know that the result is a double-float, rather than a (complex double-float), and those types are extremely unlikely to be representationally the same. So the system can use my type declaration to make inferences in this way (and these inferences can cascade through the program). Note that classes can't do this (at least CL's classes can't), and neither can the representational type of an object: you need more information than that.
So yes, types serve a number of very useful purposes which aren't satisfied by classes.
A type is a set of values.
A type specifier is some way to succinctly represent a type.
Implementations may do all kinds of markings and registering in order to help them sort out the types of things, but that is not inherent to the concept of types.
A class is an object describing a set of other objects. Since having a succinct name for such a set (type) is quite useful, Common Lisp registers the class name as a type specifier for the corresponding set of objects. That is the whole relation of types to classes.
The type system defines different objects that do different things. The CLOS system is used more for methods that define special behaviors for types, in a way that feels more logical to some programmers. Coming from Java, the CLOS system seemed more logical and systematic to me, so it has a role for some programmers. I like to think of the CLOS system as being like a class in Java, such as the Integer class, and of the type system as being similar to primitives in Java. The CLOS system simply helps you extend your objects with methods in a more systematic way than creating a structure, imho.
Is it a data-type? And what language is it?
"A real data type is a data type used in a computer program to represent an approximation of a real number. Because the real numbers are not countable, computers cannot represent them exactly using a finite amount of information. Most often, a computer will use a rational approximation to a real number."
https://en.wikipedia.org/wiki/Real_data_type
Real is known in SQL for example, but other languages have corresponding datatypes.
The specification "says" : DataTypes model Types whose instances are distinguished only by their value.
It means, as I understand it, that each instance has an identifier (technically, for me, it could be seen as the address in memory if no other identifier is available), but two instances can have the same attribute values.
For example, you can have a class Person with an attribute name.
Two different instances of Person may have the same name because they have another identifier (they are not at the same address).
For a DataType, this is not possible, because the identifier is the value.
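A rough analogy in Kotlin (the names are invented, and Kotlin is only standing in for any object-oriented language): a plain class has identity beyond its attribute values, while a data class behaves much like a UML DataType in that the value is all that distinguishes instances.

    class Person(val name: String)      // a Class: instances have their own identity
    data class Money(val amount: Int)   // behaves like a DataType: distinguished only by value

    fun main() {
        println(Person("Alice") == Person("Alice"))  // false: same attribute values, different identities
        println(Money(10) == Money(10))              // true: the value is the identity
    }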
DataType is not the same as PrimitiveType: a PrimitiveType defines a predefined DataType, without any substructure. A PrimitiveType may have an algebra and operations defined outside of UML, for example, mathematically (see 10.5.7 of the specification document).
Real is a PrimitiveType defined as follows (see 21.1 of the specification document):
"An instance of Real is a value in the (infinite) set of real numbers. Typically an implementation will internally represent Real numbers using a floating point standard such as ISO/IEC/IEEE 60559:2011 (whose content is identical to the predecessor IEEE 754 standard)."
hope this helps you.
I'm attempting to create a class, A, that has a collection of objects, X[].
Each element in X will contain a reference to another class, B, and associate a Boolean value, U, with that reference.
In this way, I'll be able to create an instance of an object, and poll whether its relationship with X[i] is true, false, or none.
Is there a standard practice for doing this?
The particular problem I'm trying to solve is that I have an array of cells, each of which is defined by a positive or negative relationship to its bounding surfaces.
I want to loop through the cells and find the path length of a ray that traverses a series of them.
Don't restrict your thinking to objects or data structures. Think dynamically. If the Boolean value you want to associate with every class can be deduced from some logical rules, as is likely the case, implement a message that will return that value. Then enumerate the classes and collect the Boolean values by sending the (same) message to all of them.
Then think dynamically again and apply the very same concept to calculate the collection of classes: do not hardcode them in a list or array; implement the message that will select them based on the logic that dictates such selection.
Of course, the ability to do all of this depends on the language of your choice as it will have to support classes as first-class objects. But hey, if you have a problem that can be better expressed in some language, different from the one you are currently using, take the opportunity to give it a try.
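As a rough illustration of that advice, here is a sketch in Kotlin terms using the cell/surface vocabulary of the question (all names are invented): rather than storing the Boolean alongside each reference, let each related object answer the question itself, and collect the answers by sending the same message to all of them.

    // Hypothetical sketch: the relationship is computed by a "message" rather than hardcoded.
    interface Surface {
        fun orientationOf(cell: Cell): Boolean?   // true, false, or null for "no relationship"
    }

    class Cell(private val surfaces: List<Surface>) {
        // Collect the Boolean values by sending the same message to every surface.
        fun orientations(): List<Boolean?> = surfaces.map { it.orientationOf(this) }
    }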
What's a sound approach to override the CompareTo() method in a custom class with multiple approaches to comparing the data contained in the class? I'm attempting to implement IComparable(Of T) just so I have a few of the baseline interfaces implemented. Not planning on doing any sorting yet, but this will save me down the road if I need to.
Reading MSDN, it mostly states that we have to return 0 if the objects are equal, -1 if obj1 is less than obj2, or 1 if obj1 is greater than obj2. But that's rather simplistic.
Consider an IPv4 address (which is what I'm implementing in my class). There are two main numbers to consider -- the IP address itself, and the CIDR. An IPv4 address by itself is assumed to have a CIDR of /32, so in that case, in a CompareTo method, I can just compare the addresses directly to determine if one is greater or less than the other. But when the CIDRs are different, then things get tricky.
Assume obj1 is 10.0.0.0/8 and obj2 is 192.168.75.0/24. I could compare these two addresses a number of ways. I could just ignore the CIDR, and still regard obj2 as being greater than obj1. I could compare them based on their CIDR, which would be comparing the size of the network (and a /8 will trump a /24 quite easily). I could compare them on both their numerical address AND their CIDR, on the off chance obj2 was actually an address inside the network defined by obj1.
What's the kind of approach used to handle situations like this? Can I define two CompareTo methods, overloaded, such that one would evaluate one address relative to another address, and the second would evaluate the size of the overall network? How would the .NET framework be told which one to use depending on how one might want to sort an array? Or do some other function that relies on CompareTo()?
For CompareTo, you should use a comparison that represents the default, normal sort order for a particular type. For example, in the example you gave, I would probably expect it to sort on the address first, then on the subnet size.
But for the case where there is no obvious "default" sort order, or when there are multiple ways to compare (such as case sensitive or not when comparing strings), the recommended approach is to use an IComparer<T>. This would be a separate object that is able to compare two instances of your type. For example, an AddressComparer or a SubnetComparer. You could even make them static properties of a class, which is what StringComparer does.
Just about all methods that take IComparable types should also have an overload that allows you to specify an IComparer to use instead. You don't have to implement both, but if it makes sense, do it. That way you can specify a particular comparer when needed or use the default built-in IComparable logic of your type.
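The question is about .NET, but the split is the same on any platform; here is a rough sketch of the idea in Kotlin syntax, where Comparable plays the role of IComparable(Of T) and Comparator the role of IComparer(Of T) (the type and comparer names are invented for illustration):

    // Default order on the type itself, plus named alternative orderings exposed as
    // companion properties, in the spirit of StringComparer's static properties.
    data class Ipv4Network(val address: Long, val cidr: Int) : Comparable<Ipv4Network> {
        // "Natural" sort order: by address first, then by prefix length.
        override fun compareTo(other: Ipv4Network): Int =
            compareValuesBy(this, other, { it.address }, { it.cidr })

        companion object {
            // Alternative ordering: by network size (a /8 sorts before a /24).
            val byNetworkSize: Comparator<Ipv4Network> = compareBy<Ipv4Network> { it.cidr }
            // Alternative ordering: by numeric address only, ignoring the prefix.
            val byAddressOnly: Comparator<Ipv4Network> = compareBy<Ipv4Network> { it.address }
        }
    }

    fun main() {
        val nets = listOf(Ipv4Network(0xC0A84B00L, 24), Ipv4Network(0x0A000000L, 8))
        println(nets.sorted())                               // uses compareTo: 10.0.0.0/8 first
        println(nets.sortedWith(Ipv4Network.byNetworkSize))  // uses the explicit comparator
    }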