Using online dictionary tools doesn't really help. I think the way "encapsulate" is used in computer science doesn't exactly match its meaning in plain English.
What is the antonym of computer science's version of "encapsulate"? More specifically, what is an antonym for "encapsulate" that would work as a function name?
Why should I care? Here's my motivation:
// A class with a private member variable
class Private
{
public:
    // Test will be able to access Private's private members
    class Test;
private:
    int i;
};

// Make Test exactly like Private
class Private::Test : public Private
{
public:
    // Make Private's copy of i available publicly in Test
    using Private::i;
};
// A convenience function to quickly break encapsulation on a class to be tested.
// I don't have a good name for what it does
Private::Test& foo( Private& p )
{ return *reinterpret_cast<Private::Test*>(&p); } // power cast
void unit_test()
{
    Private p;
    // Use the function to quickly grab access to p's internals
    // (set the private member, then check it).
    // Obviously it would be evil to use this anywhere except in unit tests.
    foo(p).i = 42;
    assert( foo(p).i == 42 );
}
The antonym is "C".
Ok, just kidding. (Sort of.)
The best terms I can come up with are "expose" and "violate".
The purpose behind encapsulation is to hide/cover/protect. The antonym would be reveal/expose/make public.
How about "decapsulation"? It isn't a computer science term, but in medical science it means the surgical removal of a capsule or enveloping membrane.
"Removing/Breaking encapsulation" is about the closest thing I've seen, honestly.
If you think of the word in the English sense, to encapsulate means to enclose within something. But in the CS sense, there's this concept of protection levels and it looks like you want to imply circumventing the access levels as well, so something like "extraction" doesn't really convey the meaning you're looking for.
But if you just think of it in terms of what the access levels are, it looks like you're making something public so, how about "publicizing"?
This is not such a simple question - Scott Meyers had an interesting article to demonstrate some of the nuances around encapsulation here.
I'll start with the punchline: If you're writing a function that can be implemented as either a member or as a non-friend non-member, you should prefer to implement it as a non-member function. That decision increases class encapsulation. When you think encapsulation, you should think non-member functions.
How about "Bad Idea"?
The true antonym of "Encapsulation" is "Global State".
The general opposite of encapsulation is coupling and we often talk about systems that are tightly coupled or loosely coupled.
The reason you'd want components to be encapsulated is because it makes it easier to reason about how they work.
Take the analogy of trains: the consequence of coupling the railcars is that the driver must consider the characteristics (inertia, length) of the entire train.
Obviously, though, we couple systems because we need them to work together.
Inverted encapsulation and data structures
There's another term that I've been digging for, which is how I came across this question, that refers to a non-standard style of data structures.
The standard style of encapsulation is exemplified by Java's LinkedList; the actual nodes of the list are designed to be inaccessible to the consumer. The theory is that this is an implementation detail and can change to improve performance, while existing code will continue to run.
Another style is the classic functional cons-list. This is a singly linked list, and the idea is that it's so simple that there's nothing to improve about the data structure, e.g.
data [a] = [] | a : [a] deriving (Eq, Ord)
-- Haskellers then work directly with the list
-- There's nothing to hide because it's so simple
typicalHaskell :: [a] -> b
typicalHaskell [] = emptyValue
typicalHaskell (h : t) = h `doAThing` typicalHaskell t
That's the definition from Haskell's standard prelude, though the report notes that it isn't valid Haskell syntax, and in practice [a] is defined in the guts of the compiler.
Then there's what I'm calling an "inverted" data structure, but I'm still looking for the correct term. This is, I think, really the opposite of encapsulation.
A good example of this is Python's heapq module. The data structure here is a binary heap, but there isn't a Heap class. Rather, you get a collection of functions that operate on generic Python lists and you're responsible for using those methods correctly to ensure the heap invariants are maintained.
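A rough Haskell analogue of that inverted style (a sketch with hypothetical names, not a real library): a priority queue kept as a plain sorted list, exposed as free functions rather than a class, where the caller is responsible for only passing in lists that already satisfy the invariant.
import Data.List (insert)

-- No PriorityQueue type: just functions over ordinary lists.
-- The "sorted" invariant is the caller's responsibility to maintain.
pqPush :: Ord a => a -> [a] -> [a]
pqPush = insert            -- preserves sortedness

pqPop :: [a] -> (a, [a])   -- partial: caller must check for emptiness
pqPop (x:xs) = (x, xs)
pqPop []     = error "pqPop: empty queue"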
How about "spaghetti"?
Related
I have a lot to learn in the way of OO patterns and this is a problem I've come across over the years. I end up in situations where my classes' sole purpose is procedural, just basically wrapping a procedure up in a class. It doesn't seem like the right OO way to do things, and I wonder if someone is experienced with this problem enough to help me consider it in a different way. My specific example in the current application follows.
In my application I'm taking a set of points from engineering survey equipment and normalizing them to be used elsewhere in the program. By "normalize" I mean a set of transformations of the full data set until a destination orientation is reached.
Each transformation procedure will take the input of an array of points (i.e. of the form class point { float x; float y; float z; }) and return an array of the same length but with different values. For example, a transformation like point[] RotateXY(point[] inList, float angle). The other kind of procedure would be of the analysis type, used to supplement the normalization process and decide what transformation to do next. This type of procedure takes in the same points as a parameter but returns a different kind of dataset.
My question is, what is a good pattern to use in this situation? The one I was about to code up was a Normalization class which inherits from class types such as RotationXY, for instance. But RotationXY's sole purpose is to rotate the points, so it would basically be implementing a single function. This doesn't seem very nice, though, for the reasons I mentioned in the first paragraph.
Thanks in advance!
The most common/natural approach for finding candidate classes in your problem domain is to look for nouns and then scan for the verbs/actions associated with those nouns to find the behavior that each class should implement. While this is generally good advice, it doesn't mean that your objects must only represent concrete elements. When processes (which are generally modeled as methods) start to grow and become complex, it is a good practice to model them as objects. So, if your transformation carries enough weight on its own, it is OK to model it as an object and do something like:
class RotateXY
{
    public function apply(point p)
    {
        // Apply the transformation
    }
}

t = new RotateXY();
newPoint = t->apply(oldPoint);
In case you have many transformations, you can create a polymorphic hierarchy and even chain one transformation after another. If you want to dig a bit deeper, you can also take a look at the Command design pattern, which closely relates to this.
Some final comments:
If it fits your case, it is a good idea to model the transformation at the point level and then apply it to a collection of points. That way you properly isolate the transformation concept, and it is also easier to write test cases. You can later even create a Composite of transformations if you need.
I generally don't like the Utils (or similar) classes with a bunch of static methods, since in most of the cases it means that your model is missing the abstraction that should carry that behavior.
HTH
Typically, when it comes to classes that contain only static methods, I name them Util, e.g. DbUtil for facading DB access, FileUtil for file I/O, etc. So find some term that all your methods have in common and use it to name the class SomethingUtil. Maybe in your case GeometryUtil or something along those lines.
Since the particulars of the transformations you apply seem ad-hoc for the problem and possibly prone to change in the future you could code them in a configuration file.
The point's client would read from the file and know what to do. As for the rotation or any other transformation method, they could go well as part of the Point class.
I see nothing particularly wrong with classes/interfaces having just essentially one member.
In your case the member is an "operation with some arguments of one type that returns the same type" - common for some math/functional problems. You may find it convenient to have an interface/base class and helper methods that combine multiple transformation classes together into a more complex transformation.
Alternative approach: if your language supports it, just go functional style altogether (similar to LINQ in C#).
On the functional-style suggestion: I'd start with the following basic functions (probably just find them in the standard libraries for the language):
collection = map(collection, perItemFunction) to transform all items in a collection (Select in C#)
item = reduce(collection, aggregateFunction) to reduce all items into a single entity (Aggregate in C#)
combine 2 functions on an item: funcOnItem = combine(funcFirst, funcSecond). Can be expressed as a lambda in C#: Func<T,T> combined = x => second(first(x)).
"bind"/curry - fix one of arguments of a function functionOfOneArg = curry(funcOfArgs, fixedFirstArg). Can be expressed in C# as lambda Func<T,T> curried = x => funcOfTwoArg(fixedFirstArg, x).
This list will let you do something like "rotate all points in the collection about the X axis by 10 and shift Y by 15": map(points, combine(curry(rotateX, 10), curry(shiftY, 15))).
The syntax will depend on the language. E.g. in JavaScript you just pass functions (and map/reduce are part of the language already); in C#, lambdas and the Func classes (like the one-argument function Func<T,R>) are an option. In some languages you have to explicitly use a class/interface to represent a "function" object.
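To make that concrete, here is a hedged sketch of the same pipeline in Haskell (Point, rotateXY and shiftY are hypothetical stand-ins for the question's procedures), where "combine" is just function composition and currying is built in:
data Point = Point { x, y, z :: Float } deriving Show

rotateXY :: Float -> Point -> Point
rotateXY a (Point px py pz) =
    Point (px * cos a - py * sin a) (px * sin a + py * cos a) pz

shiftY :: Float -> Point -> Point
shiftY d (Point px py pz) = Point px (py + d) pz

-- "rotate by 10, then shift Y by 15" over a whole collection:
normalize :: [Point] -> [Point]
normalize = map (shiftY 15 . rotateXY 10)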
Alternative approach: if you are actually dealing with points and transformations, another traditional approach is to use matrices to represent all linear operations (if your language supports custom operators you get very natural-looking code).
My question is pretty much what the title says: Is it possible to have a programming language which does not allow explicit type casting?
To clarify what I mean, assume we're working in some C#-like language with a parent Base class and a child Derived class. Clearly, such code would be safe:
Base a = new Derived();
Since going up the inheritance hierarchy is safe, but
Derived b = (Derived)a;
is not guaranteed safe, since going down is not safe.
But, regardless of the safety, such downcasts are valid in many languages (like Java or C#) - the code will compile, and will simply fail at runtime if the types aren't right. So technically, the code is still safe, but via runtime checks and not compile-time checks (btw, I'm not a fan of runtime checks).
I would personally find complete compile-time type safety to be very important, at least from a theoretical perspective, and at most from the perspective of reliable code. A consequence of compile-time type safety is that casts are no longer needed (which I think is great, 'cause they're ugly anyways). Any cast-like behaviour can be implemented by an implicit conversion operator or by a constructor.
So I'm wondering, are there currently any OO languages which provide such rigorous type safety at compile-time that casts are obsolete? I.e., they don't allow any unsafe conversion operations whatsoever? Or is there a reason this wouldn't work?
Thanks for any input.
Edit
If I can clarify by example, here's the big reason I hate downcasts so much.
Let's say I have the following (loosely based on C#'s collections):
public interface IEnumerable<T>
{
    IEnumerator<T> GetEnumerator();
    IEnumerable<T> Filter( Func<T, bool> predicate );
}

public class List<T> : IEnumerable<T>
{
    // All of list's implementation here
}
Now suppose someone decides to write code like this:
List<int> list = new List<int>( new int[]{1, 2, 3, 4, 5, 6} );
// Let's filter out the odd numbers
List<int> result = (List<int>)list.Filter( x => x % 2 != 0 );
Notice how the cast is necessary on that last line. But is it valid? Not in general. Sure, it makes sense that the implementation of List<T>.Filter will return another List<T>, but this is not guaranteed (it could be any subtype of IEnumerable<T>). Even if this runs at one point in time, a later version may change this, exposing how brittle the code is.
Pretty much all of the situations I can think that require downcasts would boil down to something like this example - a method has a return type of some class or interface, but since we know some implementation details, we're confident in downcasting the result. But this is anti-OOP, since OOP actually encourages abstracting from implementation details. So why do we do it anyways, even in purely OOP languages?
Downcasts can be gradually eliminated by improving the power of the type system.
One proposed solution to the example you gave is to add the ability to declare the return type of a method as "the same as this". This allows a subclass to return a subclass without requiring a cast. Thus you get something like this:
public interface IEnumerable<T>
{
    IEnumerator<T> GetEnumerator();
    This<T> Filter( Func<T, bool> predicate );
}

public class List<T> : IEnumerable<T>
{
    // All of list's implementation here
}
Now the cast is unnecessary:
List<int> list = new List<int>( new int[]{1, 2, 3, 4, 5, 6} );
// Compiler "knows" that Filter returns the same type as its receiver
List<int> result = list.Filter( x => x % 2 != 0 );
Other cases of downcasting also have proposed solutions by improving the type system, but these improvements have not yet been made to C#, Java, or C++.
Well, it's certainly possible to have programming languages that don't have subtyping at all, and then naturally there's no need for downcasts there. Most non-OO languages fall into that class.
Even in a class-based OO language like Java, most downcasts could formally be replaced simply by letting the base class have a method
Foo meAsFoo() {
    return null;
}
which the subclass would then override to return itself. However, that would still just be another way to express a run-time test, with the added downside of being more complicated to use. And it would be hard to forbid the pattern without losing all other advantages of inheritance-based subtyping.
Of course, this is only possible if you're able to modify the parent class. I suspect you might consider that a plus, but given how often one can modify the parent class and so use the workaround, I'm not sure how much that would be worth in terms of encouraging "good" design (for some more or less arbitrary value of "good").
A case could be made that it would encourage safe programming more if the language offered a case-matching construct instead of a downcast expression:
Shape x = .... ;
switch( x ) {
    case Rectangle r:
        return 5*r.diagonal();
    case Circle c:
        return c.radius();
    case Point:
        return 0;
    default:
        throw new RuntimeException("This can't happen, and I, "+
            "the programmer, take full responsibility");
}
However, it might then be a problem in practice that without a closed-world assumption (which modern programming languages seem to be reluctant to make) many of those switches would need default: cases that the programmer knows can never happen, which might well desensitize the programmer to the resultant throws.
There are many languages with duck typing and/or implicit type conversion. Perl certainly comes to mind; the intricacies of how subtypes of the scalar type are converted internally are a frequent source of criticism, but also receive praise because when they do work like you expect, they contribute to the DWIM feel of the language.
Traditional Lisp is another good example - all you have is atoms and lists, and nil which is both at the same time. Otherwise, the twain never meet ...
(You seem to come from a universe where programming languages are necessarily object-oriented, strongly typed, and compiled, though.)
I am a C# developer. Coming from OO side of the world, I start with thinking in terms of interfaces, classes and type hierarchies. Because of lack of OO in Haskell, sometimes I find myself stuck and I cannot think of a way to model certain problems with Haskell.
How to model, in Haskell, real world situations involving class hierarchies such as the one shown here: http://www.braindelay.com/danielbray/endangered-object-oriented-programming/isHierarchy-4.gif
First of all: Standard OO design is not going to work nicely in Haskell. You can fight the language and try to make something similar, but it will be an exercise in frustration. So step one is look for Haskell-style solutions to your problem instead of looking for ways to write an OOP-style solution in Haskell.
But that's easier said than done! Where to even start?
So, let's disassemble the gritty details of what OOP does for us, and think about how those might look in Haskell.
Objects: Roughly speaking, an object is the combination of some data with methods operating on that data. In Haskell, data is normally structured using algebraic data types; methods can be thought of as functions taking the object's data as an initial, implicit argument.
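For instance (a minimal sketch with hypothetical names), an "account object" becomes a record plus functions that take it as their first argument:
data Account = Account { owner :: String, balance :: Int }

-- cf. account.deposit(amount) in an OO language
deposit :: Account -> Int -> Account
deposit acct amount = acct { balance = balance acct + amount }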
Encapsulation: However, the ability to inspect an object's data is usually limited to its own methods. In Haskell, there are various ways to hide a piece of data, two examples are:
Define the data type in a separate module that doesn't export the type's constructors. Only functions in that module can inspect or create values of that type. This is somewhat comparable to protected or internal members.
Use partial application. Consider the function map with its arguments flipped. If you apply it to a list of Ints, you'll get a function of type (Int -> b) -> [b]. The list you gave it is still "there", in a sense, but nothing else can use it except through the function. This is comparable to private members, and the original function that's being partially applied is comparable to an OOP-style constructor.
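Here is the flipped-map trick spelled out (a small sketch): once partially applied, the list is captured in the closure and can only be reached through the returned function:
overList :: [Int] -> (Int -> b) -> [b]
overList = flip map

-- The [1, 2, 3] is "private" now; callers can only map over it.
hidden :: (Int -> b) -> [b]
hidden = overList [1, 2, 3]

-- hidden (* 2) evaluates to [2, 4, 6]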
"Ad-hoc" polymorphism: Often, in OO programming we only care that something implements a method; when we call it, the specific method called is determined based on the actual type. Haskell provides type classes for compile-time function overloading, which are in many ways more flexible than what's found in OOP languages.
Code reuse: Honestly, my opinion is that code reuse via inheritance was and is a mistake. Mix-ins as found in something like Ruby strike me as a better OO solution. At any rate, in any functional language, the standard approach is to factor out common behavior using higher-order functions, then specialize the general-purpose form. A classic example here are fold functions, which generalize almost all iterative loops, list transformations, and linearly recursive functions.
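To see how folds generalize iteration, note that sum, length and map can all be written as specializations of foldr (a standard exercise, sketched here):
sum' :: Num a => [a] -> a
sum' = foldr (+) 0

length' :: [a] -> Int
length' = foldr (\_ n -> n + 1) 0

map' :: (a -> b) -> [a] -> [b]
map' f = foldr (\v acc -> f v : acc) []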
Interfaces: Depending on how you're using an interface, there are different options:
To decouple implementation: Polymorphic functions with type class constraints are what you want here. For example, the function sort has type (Ord a) => [a] -> [a]; it's completely decoupled from the details of the type you give it other than it must be a list of some type implementing Ord.
Working with multiple types with a shared interface: For this you need either a language extension for existential types, or to keep it simple, use some variation on partial application as above--instead of values and functions you can apply to them, apply the functions ahead of time and work with the results.
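A small sketch of that second option (with hypothetical shape types): instead of keeping a heterogeneous list of "shape objects", apply the interface up front and keep the results:
data Circle = Circle { radius :: Double }
data Rect   = Rect   { width, height :: Double }

circleArea :: Circle -> Double
circleArea c = pi * radius c ^ 2

rectArea :: Rect -> Double
rectArea r = width r * height r

-- "a list of shapes" becomes a list of already-applied observations:
areas :: [Double]
areas = [circleArea (Circle 1), rectArea (Rect 2 3)]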
Subtyping, a.k.a. the "is-a" relationship: This is where you're mostly out of luck. But--speaking from experience, having been a professional C# developer for years--cases where you really need subtyping aren't terribly common. Instead, think about the above, and what behavior you're trying to capture with the subtyping relationship.
You might also find this blog post helpful; it gives a quick summary of what you'd use in Haskell to solve the same problems that some standard Design Patterns are often used for in OOP.
As a final addendum, as a C# programmer, you might find it interesting to research the connections between it and Haskell. Quite a few people responsible for C# are also Haskell programmers, and some recent additions to C# were heavily influenced by Haskell. Most notable is probably the monadic structure underlying LINQ, with IEnumerable being essentially the list monad.
Let's assume the following operations: Humans can speak, Dogs can bark, and all members of a species can mate with members of the same species if they have opposite gender. I would define this in haskell like this:
data Gender = Male | Female deriving Eq

class Species s where
    gender :: s -> Gender

-- Returns true if s1 and s2 can conceive offspring
matable :: Species a => a -> a -> Bool
matable s1 s2 = gender s1 /= gender s2

data Human = Man | Woman
data Canine = Dog | Bitch

instance Species Human where
    gender Man = Male
    gender Woman = Female

instance Species Canine where
    gender Dog = Male
    gender Bitch = Female

bark Dog = "woof"
bark Bitch = "wow"

speak Man s = "The man says " ++ s
speak Woman s = "The woman says " ++ s
Now the operation matable has type Species s => s -> s -> Bool, bark has type Canine -> String and speak has type Human -> String -> String.
I don't know whether this helps, but given the rather abstract nature of the question, that's the best I could come up with.
Edit: In response to Daniel's comment:
A simple hierarchy for collections could look like this (ignoring already existing classes like Foldable and Functor):
class Foldable f where
    fold :: (a -> b -> a) -> a -> f b -> a

class Foldable m => Collection m where
    cmap :: (a -> b) -> m a -> m b
    cfilter :: (a -> Bool) -> m a -> m a

class Indexable i where
    atIndex :: i a -> Int -> a

instance Foldable [] where
    fold = foldl

instance Collection [] where
    cmap = map
    cfilter = filter

instance Indexable [] where
    atIndex = (!!)

sumOfEvenElements :: (Integral a, Collection c) => c a -> a
sumOfEvenElements c = fold (+) 0 (cfilter even c)
Now sumOfEvenElements takes any kind of collection of integrals and returns the sum of all even elements of that collection.
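As a quick sanity check of the definitions above (assuming they compile as given, and ignoring the clash with the Prelude's own Foldable, which the answer already set aside):
main :: IO ()
main = print (sumOfEvenElements [1 .. 10 :: Integer]) -- prints 30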
Instead of classes and objects, Haskell uses abstract data types. These are really two compatible views on the problem of organizing ways of constructing and observing information. The best help I know of on this subject is William Cook's essay Object-Oriented Programming Versus Abstract Data Types. He has some very clear explanations to the effect that
In a class-based system, code is organized around different ways of constructing abstractions. Generally each different way of constructing an abstraction is assigned its own class. The methods know how to observe properties of that construction only.
In an ADT-based system (like Haskell), code is organized around different ways of observing abstractions. Generally each different way of observing an abstraction is assigned its own function. The function knows all the ways the abstraction could be constructed, and it knows how to observe a single property, but of any construction.
Cook's paper will show you a nice matrix layout of abstractions and teach you how to organize any class as an ADT or vice versa.
Class hierarchies involve one more element: the reuse of implementations through inheritance. In Haskell, such reuse is achieved through first-class functions instead: a function in a Primate abstraction is a value and an implementation of the Human abstraction can reuse any functions of the Primate abstraction, can wrap them to modify their results, and so on.
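As a tiny sketch of that kind of reuse (hypothetical names): a Human implementation reuses and wraps a Primate function, which is just an ordinary value:
describePrimate :: String -> String
describePrimate name = name ++ " is a primate"

-- Reuse by wrapping: no inheritance needed, just ordinary calls.
describeHuman :: String -> String
describeHuman name = describePrimate name ++ ", and also a human"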
There is not an exact fit between design with class hierarchies and design with abstract data types. If you try to transliterate from one to the other, you will wind up with something awkward and not idiomatic—kind of like a FORTRAN program written in Java.
But if you understand the principles of class hierarchies and the principles of abstract data types, you can take a solution to a problem in one style and craft a reasonably idiomatic solution to the same problem in the other style. It does take practice.
Addendum: It's also possible to use Haskell's type-class system to try to emulate class hierarchies, but that's a different kettle of fish. Type classes are similar enough to ordinary classes that a number of standard examples work, but they are different enough that there can also be some very big surprises and misfits. While type classes are an invaluable tool for a Haskell programmer, I would recommend that anyone learning Haskell learn to design programs using abstract data types.
Haskell is my favorite language; it is a pure functional language.
It does not have side effects; there is no assignment.
If you find the transition to this language too hard, maybe F# is a better place to start with functional programming. F# is not pure.
Objects encapsulate state; there is a way to achieve this in Haskell, but this is one of the issues that takes more time to learn, because you must learn some category theory concepts to deeply understand monads. There is syntactic sugar that lets you see monads as a kind of non-destructive assignment, but in my opinion it is better to spend more time understanding the basics of category theory (the notion of a category) to get a better understanding.
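For the curious, a minimal sketch of state in Haskell using the State monad from the mtl package (one illustration, not the only way to do it):
import Control.Monad.State

-- A tiny counter "object": the Int state is threaded by the monad
-- rather than mutated in place.
tick :: State Int Int
tick = do
    n <- get
    put (n + 1)
    return n

-- evalState (tick >> tick >> tick) 0 evaluates to 2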
Before trying to program in OO style in Haskell, you should ask yourself if you really use the object-oriented style in C#; many programmers use OO languages, but their programs are written in the structured style.
The data declaration allows you to define data structures combining products (equivalent to struct in the C language) and unions (equivalent to union in C); the deriving part of the declaration lets you obtain default implementations of certain class methods.
A data type (data structure) belongs to a class if it has an implementation of the set of methods in the class.
For example, if you can define a show :: a -> String method for your data type, then it belongs to the class Show; you can define your data type as an instance of the Show class.
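For example (a minimal sketch):
data Color = Red | Green | Blue

instance Show Color where
    show Red   = "Red"
    show Green = "Green"
    show Blue  = "Blue"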
This is different from the use of class in some OO languages, where it is used as a way to define structures + methods.
A data type is abstract if it is independent of its implementation. You create, mutate, and destroy the object through an abstract interface; you do not need to know how it is implemented.
Abstraction is supported in Haskell, and it is very easy to declare.
For example this code from the Haskell site:
data Tree a = Nil
            | Node { left  :: Tree a,
                     value :: a,
                     right :: Tree a }
declares the selectors left, value, right.
The constructors may be defined as follows if you want to add them to the export list in the module declaration:
node = Node
nil = Nil
Modules are built in a similar way as in Modula. Here is another example from the same site:
module Stack (Stack, empty, isEmpty, push, top, pop) where
empty :: Stack a
isEmpty :: Stack a -> Bool
push :: a -> Stack a -> Stack a
top :: Stack a -> a
pop :: Stack a -> (a,Stack a)
newtype Stack a = StackImpl [a] -- opaque!
empty = StackImpl []
isEmpty (StackImpl s) = null s
push x (StackImpl s) = StackImpl (x:s)
top (StackImpl s) = head s
pop (StackImpl (s:ss)) = (s,StackImpl ss)
There is more to say about this subject, I hope this comment helps!
I'm designing a language, and I'm wondering if it's reasonable to make reference types non-nullable by default, and use "?" for nullable value and reference types. Are there any problems with this? What would you do about this:
class Foo {
    Bar? b;
    Bar b2;

    Foo() {
        b.DoSomething();  // valid, but will cause exception
        b2.DoSomething(); // ?
    }
}
My current language design philosophy is that nullability should be something a programmer is forced to ask for, not given by default on reference types (in this, I agree with Tony Hoare - Google for his recent QCon talk).
On this specific example, with the unnullable b2, it wouldn't even pass static checks: Conservative analysis cannot guarantee that b2 isn't NULL, so the program is not semantically meaningful.
My ethos is simple enough. References are an indirection handle to some resource, which we can traverse to obtain access to that resource. Nullable references are either an indirection handle to a resource or a notification that the resource is not available, and one is never sure up front which semantics is being used. This gives either a multitude of checks up front (Is it null? No? Yay!) or the inevitable NPE (or equivalent). Most programming resources are, these days, not massively resource-constrained or bound to some finite underlying model - null references are, simplistically, one of...
Laziness: "I'll just bung a null in here". Which frankly, I don't have too much sympathy with
Confusion: "I don't know what to put in here yet". Typically also a legacy of older languages, where you had to declare your resource names before you knew what your resources were.
Errors: "It went wrong, here's a NULL". Better error reporting mechanisms are thus essential in a language
A hole: "I know I'll have something soon, give me a placeholder". This has more merit, and we can think of ways to combat this.
Of course, solving each of the cases that NULL currently caters for with a better linguistic choice is no small feat, and may add more confusion than it helps. We can always go to immutable resources, so NULL in its only useful states (error, and hole) isn't much real use. Imperative techniques are here to stay, though, and I'm frankly glad - this makes the search for better solutions in this space worthwhile.
Having reference types be non-nullable by default is the only reasonable choice. We are plagued by languages and runtimes that have screwed this up; you should do the Right Thing.
This feature was in Spec#. They defaulted to nullable references and used ! to indicate non-nullables. This was because they wanted backward compatibility.
In my dream language (of which I'd probably be the only user!) I'd make the same choice as you, non-nullable by default.
I would also make it illegal to use the . operator on a nullable reference (or anything else that would dereference it). How would you use them? You'd have to convert them to non-nullables first. How would you do this? By testing them for null.
In Java and C#, the if statement can only accept a bool test expression. I'd extend it to accept the name of a nullable reference variable:
if (myObj)
{
    // in this scope, myObj is non-nullable, so can be used
}
This special syntax would be unsurprising to C/C++ programmers. I'd prefer a special syntax like this to make it clear that we are doing a check that modifies the type of the name myObj within the truth-branch.
I'd add a further bit of sugar:
if (SomeMethodReturningANullable() into anotherObj)
{
    // anotherObj is non-nullable, so can be used
}
This just gives the name anotherObj to the result of the expression on the left of the into, so it can be used in the scope where it is valid.
I'd do the same kind of thing for the ?: operator.
string message = GetMessage() into m ? m : "No message available";
Note that string message is non-nullable, but so are the two possible results of the test above, so the assignment is valid.
And then maybe a bit of sugar for the presumably common case of substituting a value for null:
string message = GetMessage() or "No message available";
Obviously or would only be validly applied to a nullable type on the left side, and a non-nullable on the right side.
(I'd also have a built-in notion of ownership for instance fields; the compiler would generate the IDisposable.Dispose method automatically, and the ~Destructor syntax would be used to augment Dispose, exactly as in C++/CLI.)
Spec# had another syntactic extension related to non-nullables, due to the problem of ensuring that non-nullables had been initialized correctly during construction:
class SpecSharpExampleClass
{
    private string! _nonNullableExampleField;

    public SpecSharpExampleClass(string s)
        : _nonNullableExampleField(s)
    {
    }
}
In other words, you have to initialize fields in the same way as you'd call other constructors with base or this - unless of course you initialize them directly next to the field declaration.
Have a look at the Elvis operator proposal for Java 7. This does something similar, in that it encapsulates a null check and method dispatch in one operator, with a specified return value if the object is null. Hence:
String s = mayBeNull?.toString() ?: "null";
checks whether mayBeNull is null, and evaluates to the string "null" if so, and to mayBeNull.toString() if not. Food for thought, perhaps.
A couple of examples of similar features in other languages:
boost::optional (C++)
Maybe (Haskell)
There's also Nullable<T> (from C#) but that is not such a good example because of the different treatment of reference vs. value types.
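To illustrate the Haskell entry above (a minimal sketch): Maybe makes "might be absent" a distinct type that the compiler forces you to unwrap, which is essentially the non-nullable-by-default behaviour being discussed:
import Data.Maybe (fromMaybe)

getMessage :: Maybe String
getMessage = Nothing  -- hypothetical lookup that found nothing

-- A Maybe String cannot be used where a String is expected; the
-- Nothing case must be handled, e.g. by supplying a default:
message :: String
message = fromMaybe "No message available" getMessage
Note how fromMaybe plays much the same role as the "or" sugar proposed earlier.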
In your example you could add a conditional message send operator, e.g.
b?->DoSomething();
To send a message to b only if it is non-null.
Have the nullability be a configuration setting, enforceable in the author's source code. That way, you allow people who like nullable objects by default to enjoy them in their source code, while allowing those who would like all their objects to be non-nullable by default to have exactly that. Additionally, provide keywords or some other facility to explicitly mark which of your declarations of objects and types can be nullable and which cannot, with something like nullable and not-nullable, to override the global defaults.
For instance
/// "translation unit 1"
#set nullable
{ /// Scope of default override, making all declarations within the scope nullable implicitly
    Bar bar;                /// Can be null
    non-null Foo foo;       /// Overridden, cannot be null
    nullable FooBar foobar; /// Overridden, can be null, even without the scope definition above
}
}
/// Same style for opposite
/// ...
/// Top-bottom, until reset by scoped-setting or simply reset to another value
#set nullable;
/// Nullable types implicitly
#clear nullable;
/// Can also use '#set nullable = false' or '#set not-nullable = true'. Ugly, but the human mind is a very original, mhm, thing.
Many people argue that giving everyone what they want is impossible, but if you are designing a new language, try new things. Tony Hoare introduced the concept of null in 1965 because he could not resist (his own words), and we have been paying for it ever since (also his own words; the man is regretful of it). The point is, smart, experienced people make mistakes that cost the rest of us; don't take anyone's advice on this page as if it were the only truth, including mine. Evaluate and think about it.
I've read many many rants on how it's us poor inexperienced programmers who really don't understand where to really use null and where not, showing us patterns and antipatterns that are meant to prevent shooting ourselves in the foot. All the while, millions of still inexperienced programmers produce more code in languages that allow null. I may be inexperienced, but I know which of my objects don't benefit from being nullable.
Here we are, 13 years later, and C# did it.
And, yes, this is the biggest improvement in languages since Barbara and Stephen introduced abstract data types in 1974:
Programming With Abstract Data Types
Barbara Liskov, Massachusetts Institute of Technology, Project MAC, Cambridge, Massachusetts
Stephen Zilles, Cambridge Systems Group, IBM Systems Development Division, Cambridge, Massachusetts
Abstract: The motivation behind the work in very-high-level languages is to ease the programming task by providing the programmer with a language containing primitives or abstractions suitable to his problem area. The programmer is then able to spend his effort in the right place; he concentrates on solving his problem, and the resulting program will be more reliable as a result. Clearly, this is a worthwhile goal. Unfortunately, it is very difficult for a designer to select in advance all the abstractions which the users of his language might need. If a language is to be used at all, it is likely to be used to solve problems which its designer did not envision, and for which the abstractions embedded in the language are not sufficient. This paper presents an approach which allows the set of built-in abstractions to be augmented when the need for a new data abstraction is discovered. This approach to the handling of abstraction is an outgrowth of work on designing a language for structured programming. Relevant aspects of this language are described, and examples of the use and definitions of abstractions are given.
I think null values are good: They are a clear indication that you did something wrong. If you fail to initialize a reference somewhere, you'll get an immediate notice.
The alternative would be that values are sometimes initialized to a default value. Logical errors are then a lot more difficult to detect, unless you put detection logic in those default values. This would be the same as just getting a null pointer exception.
I am working on a little pinball-game project for a hobby and am looking for a pattern to encapsulate constant variables.
I have a model, within which there are values which will be constant over the life of that model e.g. maximum speed/maximum gravity etc. Throughout the GUI and other areas these values are required in order to correctly validate input. Currently they are included either as references to a public static final, or just plain hard-coded. I'd like to encapsulate these "constant variables" in an object which can be injected into the model, and retrieved by the view/controller.
To clarify, the value of the "constant variables" may not necessarily be defined at compile-time, they could come from reading in a file; user input etc. What is known at compile time is which ones are needed. A way which may be easier to explain it is that whatever this encapsulation is, the values it provides are immutable.
I'm looking for a way to achieve this which:
has compile time type-safety (i.e. not mapping a string to variable at runtime)
avoids anything static (including enums, which can't be extended)
I know I could define an interface which has the methods such as:
public int getMaximumSpeed();
public int getMaximumGravity();
... and inject an instance of that into the model, and make it accessible in some way. However, this results in a lot of boilerplate code, which is pretty tedious to write/test etc (I am doing this for funsies :-)).
I am looking for a better way to do this, preferably something which has the benefits of being part of a shared vocabulary, as with design patterns.
Is there a better way to do this?
P.S. I've thought some more about this, and the best trade-off I could find would be to have something like:
public class Variables {
    enum Variable {
        MaxSpeed(100),
        MaxGravity(10);

        Variable(Object variableValue) {
            // assign value to field, provide getter etc.
        }
    }

    public Object getVariable(Variable v) { /* look up enum and get member */ }
} // end of Variables
I could then do something like:
Model m = new Model(new Variables());
Advantages: the lookup of a variable is protected by having to be a member of the enum in order to compile, variables can be added with little extra code
Disadvantages: enums cannot be extended, brittleness (a recompile is needed to add a variable), variable values would have to be cast from Object (to Integer in this example), which again isn't type safe, though generics may be an option for that... somehow
Are you looking for the Singleton or, a variant, the Monostate? If not, how does that pattern fail your needs?
Of course, here's the mandatory disclaimer that Anything Global Is Evil.
UPDATE: I did some looking, because I've been having similar debates/issues. I stumbled across a list of "alternatives" to classic global/scope solutions. Thought I'd share.
Thanks for all the time spent by you guys trying to decipher what is a pretty weird question.
I think, in terms of design patterns, the closest that comes to what I'm describing is the factory pattern, where I have a factory of pseudo-constants. Technically it's not creating an instance each call, but rather always providing the same instance (in the sense of a Guice provider). But I can create several factories, which can each provide different pseudo-constants, and inject each into a different model, so the model's UI can validate input a lot more flexibly.
If anyone's interested, I've come to the conclusion that an interface providing a method for each pseudo-constant is the way to go:
public interface IVariableProvider {
    public int maxGravity();
    public int maxSpeed();
    // and everything else...
}

public class VariableProvider implements IVariableProvider {
    private final int maxGravity, maxSpeed, ...;

    public VariableProvider(int maxGravity, int maxSpeed) {
        // assign final fields
    }
}
Then I can do:
Model firstModel = new Model(new VariableProvider(2, 10));
Model secondModel = new Model(new VariableProvider(10, 100));
I think as long as the interface doesn't provide a prohibitively large number of variable getters, it wins over some parameterised lookup (which will either be vulnerable at run-time, or will prohibit extension/polymorphism).
P.S. I realise some have been questioning what my problem is with static final values. I made the statement (with tongue in cheek) to a colleague that anything static is inherently not object-oriented. So in my hobby project I used that as the basis for a thought exercise where I try to remove anything static from the project (next I'll be trying to remove all 'if' statements ;-D). If I were on a deadline and satisfied that public static final values wouldn't hamstring testing, I would have used them pretty quickly.
If you're just using java/IOC, why not just dependency-inject the values?
e.g. Spring inject the values via a map, specify the object as a singleton -
<property name="values">
    <map>
        <entry><key><value>a1</value></key><value>b1</value></entry>
        <entry><key><value>a2</value></key><value>b3</value></entry>
    </map>
</property>
your class is a singleton that holds an immutable copy of the map set in spring -
private Map<String, String> m;

public String getValue(String s)
{
    return m.containsKey(s) ? m.get(s) : null;
}

public void setValues(Map<String, String> m)
{
    this.m = Collections.unmodifiableMap(m);
}
From what I can tell, you probably don't need to implement a pattern here -- you just need access to a set of constants, and it seems to me that's handled pretty well through the use of a publicly accessible static interface to them. Unless I'm missing something. :)
If you simply want to "objectify" the constants, though, for some reason, then the Singleton pattern would probably be called for, if any; I know you mentioned in a comment that you don't mind creating multiple instances of this wrapper object, but in response I'd ask, why even introduce the sort of confusion that could arise from having multiple instances at all? What practical benefit are you looking for that would be satisfied by having the data in object form?
Now, if the values aren't constants, then that's different -- in that case, you probably do want a Singleton or Monostate. But if they really are constants, just wrap a set of enums or static constants in a class and be done! Keep-it-simple is as good a "pattern" as any.