I used to do this (in setupContext in a subclass of SimpleModule):
DeserializationConfig dc = context.getDeserializationConfig();
dc.disable(Feature.CAN_OVERRIDE_ACCESS_MODIFIERS);
dc.disable(Feature.READ_ENUMS_USING_TO_STRING);
dc.disable(Feature.FAIL_ON_UNKNOWN_PROPERTIES);
But I get deprecation warnings in 1.9, so I tried:
DeserializationConfig dc = context.getDeserializationConfig();
dc.without(Feature.CAN_OVERRIDE_ACCESS_MODIFIERS)
.without(Feature.READ_ENUMS_USING_TO_STRING)
.without(Feature.FAIL_ON_UNKNOWN_PROPERTIES);
but this seems to have no effect, since after these calls
dc = context.getDeserializationConfig();
System.out.println(dc.isEnabled(Feature.CAN_OVERRIDE_ACCESS_MODIFIERS));
System.out.println(dc.isEnabled(Feature.READ_ENUMS_USING_TO_STRING));
System.out.println(dc.isEnabled(Feature.FAIL_ON_UNKNOWN_PROPERTIES));
prints
true
false
true
which seem to be the default values. What am I missing here?
These methods create new instances, so you MUST assign the result. They were added to move toward a more functional style, trying to make most objects immutable, which in turn helps a lot with concurrency (immutable instances can be shared without synchronization).
The naming convention tries to make this clear: setXxx() methods change state; withXxx() (and without()) methods are "fluent factories".
Some frameworks use fluent methods purely for chaining, but in most cases those are Builder objects (mutable); the resulting immutable objects have no methods for changing state, though there may be methods to create new Builders.
Jackson uses withXxx() methods in most cases, without builders (there are some cases where full builders are used, but these are minority).
As you correctly note, issue 686 is related to the specific case of changing features via the Module interface. This is an unfortunate side effect of other changes and needs to be addressed in the next release. Until then, you either need to change features directly via ObjectMapper (setDeserializationConfig(...) or configure(...)), or use the deprecated methods if you must go through the Module interface.
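For example, a minimal sketch of the ObjectMapper route (assuming Jackson 1.9's org.codehaus.jackson.map.ObjectMapper and DeserializationConfig; adjust to your setup):

ObjectMapper mapper = new ObjectMapper();
// stateful configuration on the mapper itself:
mapper.configure(DeserializationConfig.Feature.CAN_OVERRIDE_ACCESS_MODIFIERS, false);
mapper.configure(DeserializationConfig.Feature.READ_ENUMS_USING_TO_STRING, false);
mapper.configure(DeserializationConfig.Feature.FAIL_ON_UNKNOWN_PROPERTIES, false);

// or, keeping the immutable style: assign the new instance and set it back
DeserializationConfig dc = mapper.getDeserializationConfig()
    .without(DeserializationConfig.Feature.FAIL_ON_UNKNOWN_PROPERTIES);
mapper.setDeserializationConfig(dc);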
Does anyone know how to use the Web Speech API in a KMM project for a web application? https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API/Using_the_Web_Speech_API
I'm using Kotlin to build the web app, and the web app requires a speech-to-text feature.
I'm not familiar with this particular Web API, but here's the general process of wrapping global JS APIs in Kotlin; hopefully you'll be able to correct the odd inconsistency yourself via trial and error.
Firstly, since the target API is global, there's no need for any meta-information for the compiler about where to source the JS code from; it's present in the global context. Therefore, we only need to declare the shape of that global context. Normally that would be a straightforward task, as outlined in this article; however, there's a caveat here which requires some trickery to make it work on all browsers:
As mentioned earlier, Chrome currently supports speech recognition with prefixed properties, therefore at the start of our code we include these lines to feed the right objects to Chrome, and any future implementations that might support the features without a prefix:
var SpeechRecognition = window.SpeechRecognition || webkitSpeechRecognition;
var SpeechGrammarList = window.SpeechGrammarList || webkitSpeechGrammarList;
var SpeechRecognitionEvent = window.SpeechRecognitionEvent || webkitSpeechRecognitionEvent;
But let's ignore that for now, since the API shape is consistent across implementations and the name is the only difference, which we'll address later. The two main API entities we need to wrap here are SpeechRecognition and SpeechGrammarList, both being classes. However, to make it easier to bridge their inconsistent names later on, in Kotlin it's best to describe their shapes as external interfaces. The process is the same for both, so I'll just outline it for SpeechRecognition.
First, the interface declaration. Here we can already make use of the EventTarget declaration in the Kotlin/JS stdlib. Note that the name does not matter here and will not clash with webkitSpeechRecognition when present, since we declare it as an interface and as such only care about the API shape.
external interface SpeechRecognition : EventTarget {
    val grammars: SpeechGrammarList // or dynamic if you don't want to declare nested types
    var lang: String
    // etc...
}
Once we have the API shape declared, we need to bridge naming inconsistencies and provide a unified way to construct its instances from Kotlin. For that, we'll inject some hacky Kotlin code to act as our constructors.
// We match the function name to the type name here so that, from the Kotlin
// consumer's perspective, it's indistinguishable from an actual constructor.
fun SpeechRecognition(): SpeechRecognition {
    // Use some direct JS code to get the appropriate class reference
    val cls = js("window.SpeechRecognition || webkitSpeechRecognition")
    // Use the class reference to construct an instance, then tell the Kotlin compiler to assume its type
    return js("new cls()").unsafeCast<SpeechRecognition>()
}
Hopefully this gives you the general idea of how things tie together. Let me know if something's still not quite clear.
If I compare Typhoon with Spring, one of the common IoC containers in Java, I cannot find two important features in the documentation:
How do I annotate @Autowired?
How do I annotate @Scope? In particular, how do I distinguish between SCOPE_SINGLETON and SCOPE_PROTOTYPE?
More about Spring here:
http://docs.spring.io/spring/docs/4.0.0.RELEASE/spring-framework-reference/html/beans.html#beans-standard-annotations
Typhoon supports the prototype and singleton scopes along with two other scopes designed specifically for mobile and desktop applications.
In a server-side application, the server may be supporting any of the application's use cases at a given time, so it makes sense for those components to have the singleton scope. In a mobile application, while there are background services, it's more common to service one use case at a time. And there are memory, CPU and battery constraints.
Therefore the default scope in Typhoon is TyphoonScopeObjectGraph, which means that while resolving, say, a top-level controller, references to other components will be shared. In this way a whole object graph can be loaded up and then disposed of when done.
There's also the following:
TyphoonScopeSingleton
TyphoonScopePrototype
TyphoonScopeWeakSingleton
Auto-wiring macros vs native style assembly:
Unfortunately, Objective-C has only limited run-time support for "annotations" using macros. So the option was to use either a compile-time pre-processor, which has some drawbacks, or to work around the limitations and force it in using a quirky style. We decided that it's best (for now) to use macros only for simple convention-over-configuration cases.
For more control we strongly recommend using the native style of assembly. This allows the following:
Modularize an application's configuration, so that the architecture tells a story.
IDE code-completion and refactoring works without any additional plugins.
Components can be resolved at runtime using the assembly interface, using Objective-C's AOP-like dynamism.
To set the scope using the native style:
- (id)rootController
{
    return [TyphoonDefinition withClass:[RootViewController class]
                        configuration:^(TyphoonDefinition* definition)
    {
        definition.scope = TyphoonScopeSingleton;
    }];
}
I'm looking for a clean way to make incremental updates to my code library, without breaking backwards compatibility. This could mean adding new members to classes, or changing existing members to provide additional functionality. Sometimes I am required to change a member in such a way that it would break existing code (e.g. renaming a method or changing its return type), so I'd rather not touch any of my existing types once they are shipped.
The way I currently set this up is through inheritance and polymorphism by creating a new class that extends the previous "version" of that class.
The way this works is by creating the appropriate version of StatusResult (e.g. StatusResultVersion3), based on the actual value of the ProtocolVersion property, and returning it as an instance of CommandResult.
Because .NET does not seem to have a concept of class versioning, I had to come up with my own: appending the version number to the end of the class name. This will no doubt make you cringe. I can easily imagine you scratching your eyes out after zooming in on the diagram. But it works. I can add new members and override existing members, without introducing any code-breaking changes.
Is there a better way to version my classes?
There are typically two approaches when considering existing code and assembly updates:
Regression Testing
This is a great approach for non-breaking changes, where you can simply overload functions to provide new parameters, etc. Visual Studio has some very advanced unit testing capabilities to make your regression testing relatively easy and automated.
Assembly Versions
If your changes are going to start breaking things, like rewriting the way some utility works, then it's time for a new assembly version. .NET is very good about working with assembly versions. You can deploy the versioned assemblies to different folders so that existing code can continue to reference the old version while new code can take advantage of the features in the new version.
The problem with interfaces is that once published they're largely set in stone. To quote Anders Hejlsberg:
... It's like adding a method to an interface. After you publish an interface, it is for all practical purposes immutable, because any implementation of it might have the methods that you want to add in the next version. So you've got to create a new interface instead.
So you can never just update an interface, you need to create a completely new one. Of course, you can have a single class implement both interfaces so your maintainability effort is fairly low compared with (say) polymorphic classes where your code will become spread out between multiple classes over time.
Multiple interfaces also allow you to remove methods in a way that classes do not (sure, you can deprecate them, but that can result in very noisy IntelliSense after a few iterations).
I personally lean towards having entirely stand-alone versions of the interface in each assembly version.
That is to say...
v 0.1.0.0
interface IExample
{
    String DoSomething();
}
v 0.2.0.0
interface IExample
{
    void DoSomethingElse();
}
How you implement them behind the scenes is up to you, but most likely it'll be the same classes with slightly different methods doing similar jobs (otherwise, why use the same interface?)
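For illustration, a hedged sketch of that single-class idea, written in Java for brevity (the V1/V2 suffixes stand in for the two assembly versions, which in .NET would let both interfaces keep the name IExample):

interface IExampleV1 {
    String doSomething();
}

interface IExampleV2 {
    void doSomethingElse();
}

// One concrete class serves clients of both interface versions.
class Example implements IExampleV1, IExampleV2 {
    public String doSomething() { return "behaviour kept for 0.1 clients"; }
    public void doSomethingElse() { /* new 0.2 behaviour */ }
}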
All the old code should reference 0.1.x.x and new code will reference 0.2.x.x. About the only issue is when you find (say) a security flaw and the fix needs to be back-ported to an earlier version. This is where a decent VCS comes in (personal preference is TFS, but SVN or anything else that supports branching/merging will do).
Merge the fixes from the 0.2 branch back into the 0.1 branch and then do a recompile to result in (say) 0.1.1.0.
As long as you stick to a process like this:
The Major or Minor version will increment if there are any breaking changes (i.e., signatures will not change on Build/Revision increments)
Use publisher policies if the new Major/Minor version should be used by older programs (equivalent to guaranteeing nothing broke so use the new version anyway)
References in client apps should point at a Major/Minor version but not specify revision/build
This gives you:
A clean codebase without legacy clutter
Allows clients to use the latest version with no code changes if nothing has broken
Prevents clients using newer versions of an assembly which do have breaking changes until they recompile (and, one hopes, update their code as appropriate to take advantage of the new features.)
Allows you to release security patches for previous versions
The OP solved his problem as indicated by this comment:
In the end, I went with the interfaces idea because it allows me to keep multiple versions of a class member in a single class file. When I need to update the class, I'll just add the new interface, shadowing the member that has been changed, and change the return type on some of my methods. This works without breaking backwards compatibility because of polymorphism.
If this is mainly for serialization, it can be achieved in .NET using DataContractSerializer and data annotations. These can deserialize different versions of an object into the same object, allowing different versions of the same class to be deserialized while leaving any properties they can't map blank.
Is there an elegant/convenient way (without creating many "empty" classes, or at least ones that aren't annoying) to have fluent interfaces that enforce call order at the compilation level?
Fluent interfaces:
http://en.wikipedia.org/wiki/Fluent_interface
The idea is to permit this to compile:
var fluentConfig = new ConfigurationFluent().SetColor("blue")
                                            .SetHeight(1)
                                            .SetLength(2)
                                            .SetDepth(3);
and to reject this:
var fluentConfig = new ConfigurationFluent().SetLength(2)
                                            .SetColor("blue")
                                            .SetHeight(1)
                                            .SetDepth(3);
Each step in the chain needs to return an interface or class that only includes the methods that are valid to use after the current step. In other words, if SetColor must come first, ConfigurationFluent should only have a SetColor method. SetColor would then return an object that only has a SetHeight method, and so forth.
In reality, the return values could all be the same instance of ConfigurationFluent but cast to different interfaces explicitly implemented by that class.
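A hedged sketch of this staged-interface idea, written in Java (the same pattern carries over directly to C#); all type and method names are illustrative:

// Each stage exposes only the one method that is legal next.
interface NeedsColor  { NeedsHeight setColor(String color); }
interface NeedsHeight { NeedsLength setHeight(int height); }
interface NeedsLength { NeedsDepth setLength(int length); }
interface NeedsDepth  { ConfigurationFluent setDepth(int depth); }

class ConfigurationFluent implements NeedsColor, NeedsHeight, NeedsLength, NeedsDepth {
    private String color;
    private int height, length, depth;

    private ConfigurationFluent() {}  // hidden, so callers must start at the first stage
    static NeedsColor create() { return new ConfigurationFluent(); }

    public NeedsHeight setColor(String c) { color = c; return this; }
    public NeedsLength setHeight(int h) { height = h; return this; }
    public NeedsDepth setLength(int l) { length = l; return this; }
    public ConfigurationFluent setDepth(int d) { depth = d; return this; }
}

class Demo {
    public static void main(String[] args) {
        // compiles: the only legal order
        ConfigurationFluent ok = ConfigurationFluent.create()
            .setColor("blue").setHeight(1).setLength(2).setDepth(3);
        // does not compile: NeedsColor has no setLength
        // ConfigurationFluent.create().setLength(2);
    }
}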
I've got a set of three ways of doing this in C++ using essentially a compile time FSM to validate the actions. You can find the code on github.
The short answer is no: there is no elegant or convenient way to enforce an order of construction for a class that properly implements the "Fluent Interface" pattern you've linked to.
The longer answer starts with playing devil's advocate. If I had dependent properties (i.e. properties that required other properties to be set first), then I could implement them something like this:
method SetLength(int millimeters)
    if color is null throw new ValidationException
    length = millimeters
    return this
end
(NOTE: the above does not map to any real language; it is just pseudocode.)
So now I have exceptions to worry about. If I don't obey the rules, the fluent object will throw an exception. Now let's say I have a declaration like yours:
var config = new Fluent().SetLength(2).SetHeight(1).SetDepth(3).SetColor("blue");
When I catch the ValidationException because length depends on the color being set first, how am I, as the user, supposed to know the correct order? Even if I had each SetX method on a different line, in most languages the stack trace will just give me the line where the config variable was declared. Furthermore, how am I supposed to keep this object's rules straight in my head compared to other objects? It is a cacophony of conflicting ideals.
Such precedence checks violate the spirit of the "Fluent Interface" approach, which was designed to conveniently configure complex objects. You take the convenience out when you attempt to enforce order.
To properly and elegantly implement a fluent interface, there are a couple of guidelines that are best observed to make consumers of your class thank you:
Provide meaningful default values: minimizes need to change values, and minimizes chances of creating an invalid object.
Do not perform configuration validation until explicitly asked to do so. That event can be when we use the configuration to create a new fully configured object, or when the consumer explicitly calls a Validate() method.
In any exceptions thrown, make sure the error message is clear and points out any inconsistencies.
Maybe the compiler could check that methods are called in the same order as they are defined; this could be a new feature for compilers.
Or maybe it could be done by means of annotations, something like:
class ConfigurationFluent {
    #Called-Before SetHeight
    SetColor(..) {}

    #Called-After SetColor
    SetHeight(..) {}

    #Called-After SetHeight
    SetLength(..) {}

    #Called-After SetLength
    SetDepth(..) {}
}
You can implement a state machine of the valid sequences of operations: on each method call, consult the state machine and verify that the operation is allowed, throwing an exception if not.
I would not suggest this approach for configurations, though; it can get very messy and unreadable.
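For completeness, a rough Java sketch of that runtime state-machine check (names are illustrative, and as noted above it trades compile-time safety for runtime exceptions):

class ConfigurationFluent {
    private enum Stage { START, COLOR, HEIGHT, LENGTH, DEPTH }
    private Stage stage = Stage.START;

    // Verify we are at the expected stage, then advance; otherwise reject the call.
    private void advance(Stage expected, Stage next) {
        if (stage != expected)
            throw new IllegalStateException("call out of order: expected stage " + expected + ", was at " + stage);
        stage = next;
    }

    ConfigurationFluent setColor(String c) { advance(Stage.START, Stage.COLOR); return this; }
    ConfigurationFluent setHeight(int h) { advance(Stage.COLOR, Stage.HEIGHT); return this; }
    ConfigurationFluent setLength(int l) { advance(Stage.HEIGHT, Stage.LENGTH); return this; }
    ConfigurationFluent setDepth(int d) { advance(Stage.LENGTH, Stage.DEPTH); return this; }
}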
I was just curious, why should we use reflection in the first place?
// Without reflection
Foo foo = new Foo();
foo.hello();
// With reflection
Class<?> cls = Class.forName("Foo");
Object foo = cls.newInstance();
Method method = cls.getMethod("hello");
method.invoke(foo);
We can simply create an object and call the class's method, but why do the same using the forName, newInstance and getMethod functions?
To make everything dynamic?
Simply put: because sometimes you don't know either the "Foo" or "hello" parts at compile time.
The vast majority of the time you do know this, so it's not worth using reflection. Just occasionally, however, you don't - and at that point, reflection is all you can turn to.
As an example, Protocol Buffers allows you to generate code which either contains full statically-typed code for reading and writing messages, or generates just enough so that the rest can be done by reflection: in the reflection case, the load/save code has to get and set properties via reflection; it knows the names of the properties involved due to the message descriptor. This is much (much) slower but results in considerably less code being generated.
Another example would be dependency injection, where the names of the types used for the dependencies are often provided in configuration files: the DI framework then has to use reflection to construct all the components involved, finding constructors and/or properties along the way.
It is used whenever you (= your method / your class) don't know at compile time the type you should instantiate or the method you should invoke.
Also, many frameworks use reflection to analyze and use your objects. For example:
Hibernate/NHibernate (and any object-relational mapper) use reflection to inspect all the properties of your classes so that they are able to update them or use them when executing database operations
you may want to make it configurable which method of a user-defined class is executed by default by your application. The configured value is a String, and you can get the target class, get the method that has the configured name, and invoke it, all without knowing it at compile time (see the sketch after this list)
parsing annotations is done by reflection
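A hedged Java sketch of the second bullet above; the class and method names are hypothetical stand-ins for values read from a configuration file:

import java.lang.reflect.Method;

public class ConfiguredInvoker {
    public static void main(String[] args) throws Exception {
        String className = "com.example.ReportJob";  // hypothetical configured value
        String methodName = "runDefault";            // hypothetical configured value
        Object target = Class.forName(className).getDeclaredConstructor().newInstance();
        Method method = target.getClass().getMethod(methodName);
        method.invoke(target);
    }
}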
A typical usage is a plug-in mechanism, which supports classes (usually implementations of interfaces) that are unknown at compile time.
You can use reflection for automating any process that could usefully use a list of the object's methods and/or properties. If you've ever spent time writing code that does roughly the same thing on each of an object's fields in turn -- the obvious way of saving and loading data often works like that -- then that's something reflection could do for you automatically.
The most common applications are probably these three:
Serialization (see, e.g., .NET's XmlSerializer)
Generation of widgets for editing objects' properties (e.g., Xcode's Interface Builder, .NET's dialog designer)
Factories that create objects with arbitrary dependencies by examining the classes for constructors and supplying suitable objects on creation (e.g., any dependency injection framework)
Using reflection, you can very easily write configurations that refer to methods/fields by name in text, and a framework can read that text description and find the real corresponding member.
e.g. JXPath allows you to navigate objects like this:
//company[@name='Sun']/address
so JXPath will look for a method getCompany() (corresponding to company), then a property on it called name, etc.
You'll find this in lots of frameworks in Java e.g. JavaBeans, Spring etc.
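As a hedged sketch of what that looks like with Apache Commons JXPath (the Root and Company beans are invented for this example; check the JXPath user guide for the exact path semantics):

import org.apache.commons.jxpath.JXPathContext;

public class JXPathDemo {
    public static class Company {
        public String getName() { return "Sun"; }
        public String getAddress() { return "Santa Clara"; }
    }
    public static class Root {
        public Company getCompany() { return new Company(); }
    }
    public static void main(String[] args) {
        // JXPath resolves "company" to getCompany() and "address" to getAddress() via reflection
        Object address = JXPathContext.newContext(new Root()).getValue("company[@name='Sun']/address");
        System.out.println(address);
    }
}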
It's useful for things like serialization and object-relational mapping. You can write a generic function to serialize an object by using reflection to get all of an object's properties. In C++, you'd have to write a separate function for every class.
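For instance, a tiny hedged sketch of such a generic function in Java (a toy stand-in for a real serializer):

import java.lang.reflect.Field;

public class SimpleSerializer {
    // Works for any class: no per-class serialization code needed.
    public static String serialize(Object obj) throws IllegalAccessException {
        StringBuilder sb = new StringBuilder(obj.getClass().getSimpleName()).append(" {");
        for (Field f : obj.getClass().getDeclaredFields()) {
            f.setAccessible(true); // read private fields too
            sb.append(' ').append(f.getName()).append('=').append(f.get(obj)).append(';');
        }
        return sb.append(" }").toString();
    }
}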
I have used it in some validation classes before, where I passed a large, complex data structure into the constructor and then ran a zillion (a couple of hundred, really) methods to check the validity of the data. All of my validation methods were private and returned booleans, so I made one "validate" method you could call, which used reflection to invoke all the private methods in the class that returned booleans.
This made the validate method more concise (it didn't need to enumerate each little method) and guaranteed all the methods were being run (e.g., if someone writes a new validation rule and forgets to call it in the main method).
After changing to use reflection I didn't notice any meaningful loss in performance, and the code was easier to maintain.
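A hedged reconstruction of that pattern (class and method names are illustrative, not the original code):

import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class DataValidator {
    // ... imagine a constructor taking the complex data structure, plus many
    // private no-arg methods returning boolean, one per validation rule ...

    public List<String> validate() throws Exception {
        List<String> failures = new ArrayList<>();
        for (Method m : getClass().getDeclaredMethods()) {
            // pick up every no-arg, boolean-returning rule method automatically,
            // so a newly added rule can never be forgotten
            if (m.getReturnType() == boolean.class && m.getParameterCount() == 0) {
                m.setAccessible(true);
                if (!(boolean) m.invoke(this)) failures.add(m.getName());
            }
        }
        return failures;
    }
}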
In addition to Jon's answer, another usage is to be able to "dip your toe in the water" and test whether a given facility is present in the JVM.
Under OS X, a Java application looks nicer if some Apple-provided classes are called. The easiest way to test whether those classes are present is to probe for them with reflection first.
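A hedged sketch of such a probe (com.apple.eawt.Application is one of those Apple-provided classes; the surrounding code is illustrative):

public class AppleProbe {
    public static boolean hasAppleExtensions() {
        try {
            Class.forName("com.apple.eawt.Application"); // merely loading it tells us the facility exists
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }
}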
Sometimes you need to create an object of a class on the fly, or from somewhere other than Java code (e.g., a JSP). At those times reflection is useful.