I am implementing a custom collection that can be either read-only or writable; that is, all the methods that modify the collection call a function that is the moral equivalent of:
private void ThrowIfReadOnly() {
    if (this.isReadOnly)
        throw new SomeException("Cannot modify a readonly collection.");
}
I am not sure whether I should use NotSupportedException or InvalidOperationException in this case.
MSDN offers only one piece of guidance on this precise topic, in the documentation for NotSupportedException:
For scenarios where it is sometimes possible for the object to perform the requested operation, and the object state determines whether the operation can be performed, see InvalidOperationException.
What follows is purely my own interpretation of the rule:
If the object's state can change so that the operation can become invalid / valid during the object's lifetime, then InvalidOperationException should be used.
If the operation is always invalid / valid during the whole object's lifetime, then NotSupportedException should be used.
Here, "lifetime" means "the whole time that anyone can hold a reference to the object" - that is, it even includes the time after a Dispose() call, which often makes most other instance methods unusable.
As pointed out by Martin Liversage, in the case of an object that has been disposed, the more specific ObjectDisposedException type should be used. (That is still a subtype of InvalidOperationException.)
The practical application of these rules in that case would be as follows:
If isReadOnly can only be set at the time when the object is created (e.g. a constructor argument), and never at any other time, then NotSupportedException should be used.
If isReadOnly can change during the lifetime of the object, then InvalidOperationException should be used.
However, the choice between InvalidOperationException and NotSupportedException is actually moot when implementing a collection: given the description of IsReadOnly on MSDN, the only permitted behavior is that its value never changes after the collection is initialized. In other words, a collection instance can be either modifiable or read-only, but it must choose one at initialization and stick with it for the rest of its lifetime.
I am generating a class in ByteBuddy.
As part of one method implementation, I would like to set a (let's just say) public instance field in another object to the return value of a MethodCall invocation. (Keeping the example public means that access checks etc. are irrelevant.)
I thought I could use MethodCall#setsField(FieldDescription) to do this.
But from my prior question related to this I learned that MethodCall#setsField(FieldDescription) is intended to work only on fields of the instrumented type, and, looking at it now, I'm not entirely sure why or how I thought it was ever going to work.
So: is there a way for a ByteBuddy-generated method implementation to set an instance field of another object to the return value of a method invocation?
If it matters, the "instrumented method" (in ByteBuddy's terminology) accepts the object whose field I want to set as an argument. Naïvely I'd expect to be able to do something like:
MethodCall.invoke(someMethod).setsField(somePublicField).onArgument(2);
There may be problems here that I am not seeing but I was slightly surprised not to see this DSL option. (It may not exist for perfectly good reasons; I just don't know what they would be.)
This is not possible as of Byte Buddy 1.10.18; the mechanism was originally created to support getters/setters when defining beans, for example. That said, it would not be difficult to add; I think it would even be easiest to allow arbitrary custom bytecode to be dispatched as a consumer of the method call.
I will look into how this can be done, but as this is a new feature, it will take some time before I find the time to do so. The change is tracked on GitHub.
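In the meantime, one way to get the same effect is to delegate to plain Java code that performs the assignment itself. The following is a hedged sketch of that workaround; the type, field, and method names (Target, SomeBaseType, instrumentedMethod, FieldSettingInterceptor) are illustrative and not part of the question or of Byte Buddy:

import net.bytebuddy.ByteBuddy;
import net.bytebuddy.implementation.MethodDelegation;
import net.bytebuddy.implementation.bind.annotation.Argument;
import net.bytebuddy.matcher.ElementMatchers;

public class FieldSettingSketch {

    // Hypothetical type whose public field we want to set.
    public static class Target {
        public String value;
    }

    // Hypothetical base type with the method being instrumented; the object whose
    // field we want to set arrives as the third parameter (index 2).
    public static class SomeBaseType {
        public void instrumentedMethod(int a, int b, Target target) {
        }
    }

    // Plain-Java interceptor: Byte Buddy binds argument index 2 (mirroring the
    // onArgument(2) in the question) to the parameter, and the field assignment
    // is ordinary Java code, so no setsField() support is needed.
    public static class FieldSettingInterceptor {
        public static void intercept(@Argument(2) Target target) {
            target.value = computeValue();   // stand-in for the result of the method call
        }

        private static String computeValue() {
            return "computed";
        }
    }

    public static void main(String[] args) throws Exception {
        Class<? extends SomeBaseType> generated = new ByteBuddy()
                .subclass(SomeBaseType.class)
                .method(ElementMatchers.named("instrumentedMethod"))
                .intercept(MethodDelegation.to(FieldSettingInterceptor.class))
                .make()
                .load(FieldSettingSketch.class.getClassLoader())
                .getLoaded();

        Target target = new Target();
        generated.getDeclaredConstructor().newInstance().instrumentedMethod(1, 2, target);
        System.out.println(target.value);   // prints "computed"
    }
}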
Since Kotlin does not have checked exceptions, what is the correct way to document exceptions expected to be thrown by an interface method? Should I document them in the interface or in the implementing class (only if the concrete method actually throws them)?
Since clients program against the interface, I'd suggest documenting them in the Javadoc/KDoc of that interface. Whether you actually should document them is discussed in this thread, for example:
Oracle recommends:
If it's so good to document a method's API, including the exceptions it can throw, why not specify runtime exceptions too? Runtime exceptions represent problems that are the result of a programming problem, and as such, the API client code cannot reasonably be expected to recover from them or to handle them in any way. Such problems include arithmetic exceptions, such as dividing by zero; pointer exceptions, such as trying to access an object through a null reference; and indexing exceptions, such as attempting to access an array element through an index that is too large or too small. Runtime exceptions can occur anywhere in a program, and in a typical one they can be very numerous. Having to add runtime exceptions in every method declaration would reduce a program's clarity.
So, if the information is useful to a client (i.e. the client can reasonably handle the exception, e.g. an IOException), it should be documented. For "regular" runtime exceptions such as IllegalArgumentException I would say no, do not document them.
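A minimal Java-flavored illustration of documenting on the interface (the interface and method names are made up; in Kotlin the same idea applies with a KDoc @throws tag on the interface method):

import java.io.UncheckedIOException;

/**
 * Clients program against this interface, so the exception they may need to
 * handle is documented here rather than on each implementation.
 */
public interface SettingsStore {

    /**
     * Returns the stored value for the given key.
     *
     * @throws UncheckedIOException if the backing storage cannot be read
     */
    String load(String key);
}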
I have noticed that some of my serialized objects stored in Redis have problems deserializing.
This typically occurs when I make changes to the object class being stored in Redis.
I want to understand the problem so that I can have a clear design for a solution.
My question is, what causes deserialization problems?
Would removing a public/private property cause a problem?
Adding new properties, perhaps?
Would adding a new function to the class create problems? How about more constructors?
In my serialized object I have a Map property containing myObject values; what if I change myObject (update some properties, add functions, etc.)? Would that cause a deserialization problem?
what causes deserialization problems?
I would like to give you a bit of background before answering your question.
The serialization runtime associates with each serializable class a version number, called a serialVersionUID, which is used during deserialization to verify that the sender and receiver of a serialized object have loaded classes for that object that are compatible with respect to serialization. If the receiver has loaded a class for the object that has a different serialVersionUID than that of the corresponding sender's class, then deserialization will result in an InvalidClassException.
If a serializable class does not explicitly declare a serialVersionUID, the serialization runtime calculates a default serialVersionUID for that class based on various aspects of the class. It uses the following information about the class to compute the serialVersionUID:
The class name.
The class modifiers written as a 32-bit integer.
The name of each interface sorted by name.
For each field of the class sorted by field name (except private static and private transient fields):
    The name of the field.
    The modifiers of the field written as a 32-bit integer.
    The descriptor of the field.
If a class initializer exists, write out the following:
    The name of the method, <clinit>.
    The modifier of the method, java.lang.reflect.Modifier.STATIC, written as a 32-bit integer.
    The descriptor of the method, ()V.
For each non-private constructor sorted by method name and signature:
    The name of the method, <init>.
    The modifiers of the method written as a 32-bit integer.
    The descriptor of the method.
For each non-private method sorted by method name and signature:
    The name of the method.
    The modifiers of the method written as a 32-bit integer.
    The descriptor of the method.
So, to answer your questions:
Would removing a public/private property cause a problem? Adding new properties, perhaps? Would adding a new function to the class create problems? How about more constructors?
Yes, by default all of these additions and removals will cause a problem.
One way to overcome this is to explicitly declare the serialVersionUID. This tells the serialization system that you know the class will evolve (or has evolved) over time and that it should not throw an error. The deserialization machinery then reads only the fields that are present on both sides and assigns their values; newly added fields on the deserialization side get default values, and if some fields have been removed on the deserialization side, the algorithm simply reads and skips them.
This is how one can declare the serialVersionUID:
private static final long serialVersionUID = 3487495895819393L;
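For example, here is a minimal sketch (the class and field names are made up) of how pinning the serialVersionUID lets the class evolve while old serialized instances remain readable:

import java.io.Serializable;

// Illustrative class. Version 1, as originally written to Redis, only had the
// 'theme' field and the same serialVersionUID.
public class UserSettings implements Serializable {

    // Kept identical across versions, so old serialized instances still load.
    private static final long serialVersionUID = 1L;

    private String theme;

    // Added in version 2: old serialized instances simply deserialize with
    // fontSize == 0 (the default value for int).
    private int fontSize;

    // Also added in version 2: new methods and constructors do not affect
    // deserialization once the serialVersionUID is pinned explicitly.
    public boolean isDarkTheme() {
        return "dark".equals(theme);
    }
}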
We are currently designing an API for storing settings and we are considering having these two types of methods:
public Object getSetting(String key) {
    // return null if key does not exist
}

public Object getSettingExc(String key) {
    // throw a runtime exception if key does not exist
}
Somehow I feel that this just isn't right, but I can't think of any disadvantages except for doubling the number of functions in the API and perhaps decreased code readability (if I know the setting MUST exist, I think I should throw an exception explicitly rather than relying on the get method).
What are your opinions on this?
Exceptions are for exceptional occurrences, when the code cannot continue to function according to its advertised function.
Requesting a setting that isn't set is hardly exception-worthy. If "you" (i.e. the calling code) "know" that setting "must" exist, call getSetting(), check the return value for null, and then throw an exception yourself out of the knowledge that it should have been there. Add a meaningful message about what setting in which context wasn't found. (This is something only the caller knows.)
Don't delegate the throwing of the exception to code that doesn't know the context of the query or that the setting should be there, and needs to be told explicitly (by getting called under a different name). Also, getSettingExc() will most likely be only a null-check-and-throw wrapper around getSetting() anyway, so why not do it at a point where you can make the exception message so much more helpful?
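A minimal sketch of that caller-side check (the setting key, exception type, and message are illustrative):

// The caller knows the context, so it can produce a far more helpful message
// than a generic getSettingExc() ever could.
Object timeout = settings.getSetting("connection.timeout");
if (timeout == null) {
    throw new IllegalStateException(
            "Setting 'connection.timeout' is required to open the connection but was not found");
}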
IMHO. (And this is the point where I realize I should have voted-to-close instead of writing an answer...)
This is introducing a weird kind of coupling between the object structure and the potential error conditions. Regarding your comment:
I'm just trying to gather arguments to persuade other guys in my team.
The onus should be on the proponent of this design to justify it, not on you to argue against it. I have never seen this used in any other design.
This does, however, remind me of another design that may be what your team intended. Consider these two methods:
public Object getSetting(String key) {
    // return the setting or throw an exception
}

public Object getSettingOrDefault(String key) {
    // return the setting or a pre-determined default
}
This aligns the methods more with the functionality than with the error conditions. getSetting() can advertise that it might throw an exception, whereas getSettingOrDefault() can advertise that it will default to a specific value if none can be found in the settings.
Java does not have optional parameters, but via an overload getSettingOrDefault() could even accept the default to use in the event of no such setting as an argument (as sketched below). That might get a little kludgy for consuming code, though; just thinking out loud on that one.
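A hedged sketch of that overload (names are illustrative, and it assumes getSetting() returns null for a missing key):

// Overload that takes the fallback explicitly; no exception, no null handling
// left to the caller.
public Object getSettingOrDefault(String key, Object defaultValue) {
    Object value = getSetting(key);   // assumed to return null when the key is absent
    return value != null ? value : defaultValue;
}

// Usage:
Object retries = config.getSettingOrDefault("network.retries", 3);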
Either way, the API should reflect the functionality and not the error conditions. Ideally there should be only one method, but if there's a noticeable need to differentiate between a method that throws and a method that doesn't (and I could certainly see that being the case in a language with checked exceptions), those two should align with the functionality rather than with the exception.
IMHO having two methods to do precisely the same operation indicates that you as the API designer did not complete the job of 'designing' your API. Choose one way or another, publicize it via the API (javadocs) and then the consumers will be consistent in their usage (one way or the other).
Is there an elegant/convenient way (without creating many "empty" classes, or at least without them being annoying) to have fluent interfaces that enforce call order at compile time?
Fluent interfaces:
http://en.wikipedia.org/wiki/Fluent_interface
The idea is to permit this to compile:
var fluentConfig = new ConfigurationFluent().SetColor("blue")
    .SetHeight(1)
    .SetLength(2)
    .SetDepth(3);
and to reject this:
var fluentConfig = new ConfigurationFluent().SetLength(2)
    .SetColor("blue")
    .SetHeight(1)
    .SetDepth(3);
Each step in the chain needs to return an interface or class that only includes the methods that are valid to use after the current step. In other words, if SetColor must come first, ConfigurationFluent should only have a SetColor method. SetColor would then return an object that only has a SetHeight method, and so forth.
In reality, the return values could all be the same instance of ConfigurationFluent but cast to different interfaces explicitly implemented by that class.
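A minimal Java-flavored sketch of that staged-interface idea (the snippets in the question look like C#, where the stage interfaces could even be implemented explicitly; all names below are illustrative):

public class StagedFluentExample {

    // Each stage interface exposes only the next valid step.
    interface NeedsColor  { NeedsHeight SetColor(String color); }
    interface NeedsHeight { NeedsLength SetHeight(int height); }
    interface NeedsLength { NeedsDepth SetLength(int length); }
    interface NeedsDepth  { ConfigurationFluent SetDepth(int depth); }

    // One backing instance implements every stage and simply returns itself.
    static class ConfigurationFluent implements NeedsColor, NeedsHeight, NeedsLength, NeedsDepth {
        private String color;
        private int height, length, depth;

        private ConfigurationFluent() {
        }

        // The entry point hands out only the first stage.
        static NeedsColor create() {
            return new ConfigurationFluent();
        }

        public NeedsHeight SetColor(String color) { this.color = color; return this; }
        public NeedsLength SetHeight(int height) { this.height = height; return this; }
        public NeedsDepth SetLength(int length) { this.length = length; return this; }
        public ConfigurationFluent SetDepth(int depth) { this.depth = depth; return this; }
    }

    public static void main(String[] args) {
        // Compiles only in the prescribed order:
        ConfigurationFluent fluentConfig = ConfigurationFluent.create()
                .SetColor("blue")
                .SetHeight(1)
                .SetLength(2)
                .SetDepth(3);

        // The out-of-order chain is rejected at compile time, e.g.:
        // ConfigurationFluent.create().SetLength(2);   // error: NeedsColor has no SetLength
    }
}

The static factory replaces the bare constructor from the question so that the initial reference exposes only SetColor.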
I've got a set of three ways of doing this in C++, using essentially a compile-time FSM to validate the actions. You can find the code on GitHub.
The short answer is no, there is no elegant or convenient way to enforce an order when constructing a class that properly implements the "Fluent Interface" as you've linked.
The longer answer starts with playing devil's advocate. If I had dependent properties (i.e. properties that required other properties to be set first), then I could implement them something like this:
method SetLength(int millimeters)
    if color is null throw new ValidationException
    length = millimeters
    return this
end
(NOTE: the above does not map to any real language; it is just pseudocode)
So now I have exceptions to worry about. If I don't obey the rules, the fluent object will throw an exception. Now let's say I have a declaration like yours:
var config = new Fluent().SetLength(2).SetHeight(1).SetDepth(3).SetColor("blue");
When I catch the ValidationException because length depends on the color being set first, how am I as the user supposed to know what the correct order is? Even if I had each SetX method on a different line, in most languages the stack trace will just give me the line where the config variable was declared. Furthermore, how am I supposed to keep the rules of this object straight in my head compared to other objects? It is a cacophony of conflicting ideals.
Such precedence checks violate the spirit of the "Fluent Interface" approach, which was designed to make configuring complex objects convenient. You take the convenience out when you attempt to enforce order.
To implement the fluent interface properly and elegantly, there are a couple of guidelines that are best observed if you want consumers of your class to thank you (a short sketch follows this list):
Provide meaningful default values: this minimizes the need to change values and reduces the chance of creating an invalid object.
Do not perform configuration validation until explicitly asked to do so. That event can be when we use the configuration to create a new fully configured object, or when the consumer explicitly calls a Validate() method.
In any exceptions thrown, make sure the error message is clear and points out any inconsistencies.
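A hedged sketch of those guidelines in Java (names, defaults, and the Validate() details are illustrative):

class ConfigurationFluent {
    // Meaningful defaults keep the object usable even when a setter is skipped.
    private String color = "white";
    private int height = 1, length = 1, depth = 1;

    ConfigurationFluent SetColor(String color) { this.color = color; return this; }
    ConfigurationFluent SetHeight(int height) { this.height = height; return this; }
    ConfigurationFluent SetLength(int length) { this.length = length; return this; }
    ConfigurationFluent SetDepth(int depth) { this.depth = depth; return this; }

    // Validation is deferred until explicitly requested (or until the
    // configuration is used to build the real object).
    void Validate() {
        if (height <= 0 || length <= 0 || depth <= 0) {
            throw new IllegalStateException("Invalid dimensions " + height + "x" + length
                    + "x" + depth + ": all dimensions must be positive");
        }
    }
}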
Maybe the compiler could check that methods are called in the same order as they are defined; this could be a new feature for compilers.
Or maybe by means of annotations, something like:
class ConfigurationFluent {
    #Called-before SetHeight
    SetColor(..) {}

    #Called-after SetColor
    SetHeight(..) {}

    #Called-after SetHeight
    SetLength(..) {}

    #Called-after SetLength
    SetDepth(..) {}
}
You can implement a state machine of the valid sequence of operations: on each method call, consult the state machine to verify that the operation is allowed in the current state, and throw an exception if it is not.
I would not suggest this approach for configurations, though; it can get very messy and hard to read.
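For completeness, a minimal sketch of that runtime check (all names illustrative); note that, unlike the staged-interface approach above, it only fails at run time:

class OrderedFluent {
    private enum Step { START, COLOR, HEIGHT, LENGTH, DEPTH }

    private Step step = Step.START;

    // Each setter verifies that the chain is in the expected state before advancing it.
    private void require(Step expected, String method) {
        if (step != expected) {
            throw new IllegalStateException(method + " called out of order (current step: " + step + ")");
        }
    }

    // Values are omitted for brevity; a real builder would store them as well.
    OrderedFluent SetColor(String color) { require(Step.START, "SetColor"); step = Step.COLOR; return this; }
    OrderedFluent SetHeight(int height) { require(Step.COLOR, "SetHeight"); step = Step.HEIGHT; return this; }
    OrderedFluent SetLength(int length) { require(Step.HEIGHT, "SetLength"); step = Step.LENGTH; return this; }
    OrderedFluent SetDepth(int depth) { require(Step.LENGTH, "SetDepth"); step = Step.DEPTH; return this; }
}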