What is the use of reflection in Java/C# etc. [duplicate]

I was just curious, why should we use reflection in the first place?
// Without reflection
Foo foo = new Foo();
foo.hello();
// With reflection
Class<?> cls = Class.forName("Foo");
Object foo = cls.newInstance();
Method method = cls.getMethod("hello");
method.invoke(foo);
We can simply create an object and call the class's method, so why do the same thing using the forName, newInstance and getMethod functions?
To make everything dynamic?

Simply put: because sometimes you don't know either the "Foo" or "hello" parts at compile time.
The vast majority of the time you do know this, so it's not worth using reflection. Just occasionally, however, you don't - and at that point, reflection is all you can turn to.
As an example, protocol buffers lets you generate code which either contains full statically-typed code for reading and writing messages, or just enough so that the rest can be done by reflection: in the reflection case, the load/save code has to get and set properties via reflection; it knows the names of the properties involved from the message descriptor. This is much (much) slower but results in considerably less generated code.
Another example would be dependency injection, where the names of the types used for the dependencies are often provided in configuration files: the DI framework then has to use reflection to construct all the components involved, finding constructors and/or properties along the way.
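As a minimal sketch of that DI case, assuming a hypothetical Greeter interface and a made-up config key (neither comes from any particular framework):

import java.io.FileInputStream;
import java.util.Properties;

interface Greeter {
    void greet();
}

public class TinyInjector {
    public static void main(String[] args) throws Exception {
        // Hypothetical config file containing a line such as:
        //   greeter.impl=com.example.ConsoleGreeter
        Properties config = new Properties();
        config.load(new FileInputStream("app.properties"));

        // Only the Greeter interface is known at compile time; the
        // concrete class name arrives as a string at run time.
        Class<?> cls = Class.forName(config.getProperty("greeter.impl"));
        Greeter greeter = (Greeter) cls.getDeclaredConstructor().newInstance();
        greeter.greet();
    }
}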

It is used whenever you (= your method / your class) don't know at compile time the type you should instantiate or the method you should invoke.
Also, many frameworks use reflection to analyze and use your objects. For example:
Hibernate/NHibernate (and any object-relational mapper) use reflection to inspect all the properties of your classes, so that they can read and update them when executing database operations
you may want to make it configurable which method of a user-defined class your application executes by default. The configured value is a String, and you can get the target class, look up the method with the configured name, and invoke it, all without knowing it at compile time (see the sketch after this list)
parsing annotations is done by reflection
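A rough sketch of that "configurable method" case; the class and method names here are invented purely for illustration:

import java.lang.reflect.Method;

public class ConfiguredCall {
    public static void main(String[] args) throws Exception {
        // In practice both strings would come from a configuration file.
        String className = "com.example.ReportTask"; // hypothetical class
        String methodName = "runDaily";              // hypothetical method

        Class<?> cls = Class.forName(className);
        Object target = cls.getDeclaredConstructor().newInstance();

        // Look up the no-argument method by its configured name and call it.
        Method method = cls.getMethod(methodName);
        method.invoke(target);
    }
}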

A typical usage is a plug-in mechanism, which supports classes (usually implementations of interfaces) that are unknown at compile time.
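The JDK's own ServiceLoader is a small example of this pattern; the Plugin interface below is a stand-in for whatever contract your application defines:

import java.util.ServiceLoader;

// The host application compiles only against this interface;
// implementations come from jars dropped on the classpath later.
interface Plugin {
    String name();
    void run();
}

class PluginHost {
    public static void main(String[] args) {
        // ServiceLoader instantiates, via reflection, every implementation
        // listed in a META-INF/services descriptor on the classpath.
        for (Plugin plugin : ServiceLoader.load(Plugin.class)) {
            System.out.println("Loading plugin: " + plugin.name());
            plugin.run();
        }
    }
}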

You can use reflection for automating any process that could usefully use a list of the object's methods and/or properties. If you've ever spent time writing code that does roughly the same thing on each of an object's fields in turn -- the obvious way of saving and loading data often works like that -- then that's something reflection could do for you automatically.
The most common applications are probably these three:
Serialization (see, e.g., .NET's XmlSerializer)
Generation of widgets for editing objects' properties (e.g., Xcode's Interface Builder, .NET's dialog designer)
Factories that create objects with arbitrary dependencies by examining the classes for constructors and supplying suitable objects on creation (e.g., any dependency injection framework)
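To make the field-walking idea concrete, here is a bare-bones sketch; a real serializer would also have to handle nesting, collections, and escaping:

import java.lang.reflect.Field;

public class NaiveSerializer {
    // Dump every declared field of any object as key=value lines,
    // without writing per-class code.
    static String serialize(Object obj) throws IllegalAccessException {
        StringBuilder sb = new StringBuilder();
        for (Field field : obj.getClass().getDeclaredFields()) {
            field.setAccessible(true); // reach private fields too
            sb.append(field.getName()).append('=')
              .append(field.get(obj)).append('\n');
        }
        return sb.toString();
    }
}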

Using reflection, a framework can let you describe methods and fields as plain text in configuration, then read that description and find the real corresponding member at run time.
e.g. JXPath allows you to navigate objects like this:
//company[@name='Sun']/address
so JXPath will look for a method getCompany() (corresponding to company), then a property called name on the result, and so on.
You'll find this in lots of Java frameworks, e.g. JavaBeans, Spring, etc.
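Under the hood, resolving a step such as company boils down to a getter lookup like this (a simplified sketch, not JXPath's actual implementation):

import java.lang.reflect.Method;

class PropertyResolver {
    // Map a path step like "company" to a getCompany() call by name.
    static Object readProperty(Object bean, String name) throws Exception {
        String getter = "get" + Character.toUpperCase(name.charAt(0))
                + name.substring(1);
        Method method = bean.getClass().getMethod(getter);
        return method.invoke(bean);
    }
}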

It's useful for things like serialization and object-relational mapping. You can write a generic function to serialize an object by using reflection to get all of an object's properties. In C++, you'd have to write a separate function for every class.

I have used it in some validation classes before, where I passed a large, complex data structure in the constructor and then ran a zillion (a couple of hundred, really) methods to check the validity of the data. All of my validation methods were private and returned booleans, so I made one public "validate" method you could call, which used reflection to invoke all the private methods in the class that returned booleans.
This made the validate method more concise (no need to enumerate each little method) and guaranteed all the methods were actually run (e.g. when someone writes a new validation rule and forgets to call it in the main method).
After changing to use reflection I didn't notice any meaningful loss in performance, and the code was easier to maintain.
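A reconstruction of that pattern (not the original code) might look like this:

import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

public class Validator {
    // Constructor taking the complex data structure omitted for brevity.

    // Run every private no-argument method that returns boolean; a new
    // rule only has to be written, never wired up by hand.
    public boolean validate() throws Exception {
        for (Method m : getClass().getDeclaredMethods()) {
            if (Modifier.isPrivate(m.getModifiers())
                    && m.getParameterCount() == 0
                    && m.getReturnType() == boolean.class) {
                m.setAccessible(true);
                if (!(boolean) m.invoke(this)) {
                    return false;
                }
            }
        }
        return true;
    }

    private boolean totalsAddUp() { return true; } // example rule
}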

In addition to Jon's answer, another usage is being able to "dip your toe in the water": testing whether a given facility is present in the JVM.
Under OS X, a Java application looks nicer if some Apple-provided classes are used. The easiest way to test whether those classes are present is to probe for them with reflection first.
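The probe itself is just Class.forName wrapped in a try/catch; com.apple.eawt.Application is one of the classes that ships only with Apple's JVMs:

public class FeatureProbe {
    static boolean isClassAvailable(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Only wire up the Mac-specific integration if the class exists.
        if (isClassAvailable("com.apple.eawt.Application")) {
            System.out.println("Apple extensions present");
        }
    }
}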

Sometimes you need to create an object of a class on the fly, or from somewhere other than Java code (e.g. a JSP). In those cases, reflection is useful.

Related

Kotlin: Idiomatic usage of extension functions - putting extension functions next to the class it extends

I see some usages of extension functions in Kotlin that I personally don't think make sense, but it seems that there are some guidelines that "apparently" support them (a matter of interpretation).
Specifically: defining an extension function outside a class (but in the same file):
data class AddressDTO(
    val state: State,
    val zipCode: String,
    val city: String,
    val streetAddress: String
)
fun AddressDTO.asXyzFormat() = "${streetAddress}\n${city}\n${state.name} $zipCode"
Where asXyzFormat() is widely used and cannot be defined as private/internal (but also for the cases where it could be).
In my common sense, if you own the code (AddressDTO) and the usage is not local to some class/module (hence being private/internal), there is no reason to define an extension function: just define it as a member function of that class.
Edge case: if you want to avoid serialization of a function starting with get, annotate the class to get the desired behavior (e.g. @JsonIgnore on the function). This IMHO still doesn't justify an extension function.
The counter-response I got to this is that the approach of having an extension function of this fashion is supported by the Official Kotlin Coding Conventions. Specifically:
Use extension functions liberally. Every time you have a function that works primarily on an object, consider making it an extension function accepting that object as a receiver.
Source
And:
In particular, when defining extension functions for a class which are relevant for all clients of this class, put them in the same file where the class itself is defined. When defining extension functions that make sense only for a specific client, put them next to the code of that client. Do not create files just to hold "all extensions of Foo".
Source
I'll appreciate any commonly accepted source/reference explaining why it makes more sense to move the function to be a member of the class and/or pragmatic arguments support this separation.
That quote about using extension functions liberally, I'm pretty sure means use them liberally as opposed to top level non-extension functions (not as opposed to making it a member function). It's saying that if a top-level function conceptually works on a target object, prefer the extension function form.
I've searched before for the answer to why you might choose to make a function an extension function instead of a member function when working on a class you own the source code for, and have never found a canonical answer from JetBrains. Here are some reasons I think you might, but some are highly subject to opinion.
Sometimes you want a function that operates on a class with a specific generic type. Think of List<Int>.sum(), which is only available to a subset of Lists, but not a subtype of List.
Interfaces can be thought of as contracts. Functions that do something to an interface may make more sense conceptually since they are not part of the contract. I think this is the rationale for most of the standard library extension functions for Iterable and Sequence. A similar rationale might apply to a data class, if you think of a data class almost like a passive struct.
Extension functions afford the possibility of allowing users to pseudo-override them, but forcing them to do it in an independent way. Suppose your asXyzFormat() were an open member function. In some other module, you receive AddressDTO instances and want to get the XYZ format of them, exactly in the format you expect. But the AddressDTO you receive might have overridden asXyzFormat() and provide you something unexpected, so now you can't trust the function. If you use an extension function, then you allow users to replace asXyzFormat() in their own packages with something applicable to that space, but you can always trust the function asXyzFormat() in the source package.
Similarly for interfaces, a member function with default implementation invites users to override it. As the author of the interface, you may want a reliable function you can use on that interface with expected behavior. Although the end-user can hide your extension in their own module by overloading it, that will have no effect on your own uses of the function.
For what it's worth, I think it would be very rare to choose to make an extension function for a class (not an interface) when you own the source code for it. And I can't think of any examples of that in the standard library. Which leads me to believe that the Coding Conventions document is using the word "class" in a liberal sense that includes interfaces.
Here's a reverse argument…
One of the main reasons for adding extension functions to the language is being able to add functionality to classes from the standard library, and from third-party libraries and other dependencies where you don't control the code and can't add member functions (AKA methods).  I suspect it's mainly those cases that that section of the coding conventions is talking about.
In Java, the only option in these cases is utility methods: static methods, usually in a utility class gathering together lots of such methods, each taking the relevant object as its first parameter:
public static String[] splitOnChar(String str, char separator)
public static boolean isAllDigits(String str)
…and so on, interminably.
The main problem there is that such methods are hard to find (no help from the IDE unless you already know about all the various utility classes).  Also, calling them is long-winded (though it improved a bit once static imports were available).
Kotlin's extension methods are implemented exactly the same way down at the bytecode level, but their syntax is much simpler and exactly like member functions: they're written the same way (with this &c), calling them looks just like calling a member function, and your IDE will suggest them.
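For instance, the asXyzFormat() extension from the question compiles to roughly this Java shape (names are approximate; the actual holder class is named after the file, e.g. AddressDTOKt, and State is assumed to be an enum):

// Roughly what the Kotlin compiler emits for a top-level extension
// function declared in AddressDTO.kt: a static method whose first
// parameter is the receiver.
public final class AddressDTOKt {
    public static String asXyzFormat(AddressDTO receiver) {
        return receiver.getStreetAddress() + "\n" + receiver.getCity()
                + "\n" + receiver.getState().name() + " " + receiver.getZipCode();
    }
}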
(Of course, they have drawbacks, too: no dynamic dispatch, no inheritance or overriding, scoping/import issues, name clashes, references to them are awkward, accessing them from Java or reflection is awkward, and so on.)
So: if the main purpose of extension functions is to substitute for member functions when member functions aren't possible, why would you use them when member functions are possible?!
(To be fair, there are a few reasons why you might want them.  For example, you can make the receiver nullable, which isn't possible with member functions.  But in most cases, they're greatly outweighed by the benefits of a proper member function.)
This means that the vast majority of extension functions are likely to be written for classes that you don't control the source code for, and so you don't have the option of putting them next to the class.

How can deserialization of polymorphic trait objects be added in Rust if at all?

I'm trying to solve the problem of serializing and deserializing Box<SomeTrait>. I know that in the case of a closed type hierarchy, the recommended way is to use an enum and there are no issues with their serialization, but in my case using enums is an inappropriate solution.
At first I tried to use Serde as it is the de-facto Rust serialization mechanism. Serde is capable of serializing Box<X> but not in the case when X is a trait. The Serialize trait can’t be implemented for trait objects because it has generic methods. This particular issue can be solved by using erased-serde so serialization of Box<SomeTrait> can work.
The main problem is deserialization. To deserialize a polymorphic type, you need to have some type marker in the serialized data. This marker should be deserialized first and then used to dynamically get the function that will return Box<SomeTrait>.
std::any::TypeId could be used as a marker type, but the main problem is how to dynamically get the deserialization function. I do not consider the option of registering a function for each polymorphic type that should be called manually during application initialization.
I know two possible ways to do it:
Languages that have runtime reflection, like C#, can use it to get the deserialization method.
In C++, the cereal library uses magic of static objects to register deserializer in a static map at the library initialization time.
But neither of these options is available in Rust. How can deserialization of polymorphic objects be added in Rust if at all?
This has been implemented by dtolnay in the typetag crate.
The concept is quite clever and is explained in the README:
How does it work?
We use the inventory crate to produce a registry of impls of your trait, which is built on the ctor crate to hook up initialization functions that insert into the registry. The first Box<dyn Trait> deserialization will perform the work of iterating the registry and building a map of tags to deserialization functions. Subsequent deserializations find the right deserialization function in that map. The erased-serde crate is also involved, to do this all in a way that does not break object safety.
To summarize, every implementation of the trait declared as [de]serializable is registered at compile-time, and this is resolved at runtime in case of [de]serialization of a trait object.
All your libraries could provide a registration routine, guarded by std::sync::Once, that registers some identifier into a common static mut; but obviously your program must call them all.
I've no idea if TypeId yields consistent values across recompiles with different dependencies.
A library to do this should be possible. To create such a library, we would create a bidirectional mapping from TypeId to type name before using the library, and then use that for serialization/deserialization with a type marker. It would be possible to have a function for registering types that are not owned by your package, and to provide a macro annotation that automatically does this for types declared in your package.
If there's a way to access a type ID in a macro, that would be a good way to instrument the mapping between TypeId and type name at compile time rather than runtime.

Has Freemarker something similar to toolbox.xml-file of Velocity?

I have a Struts 1 application which uses Velocity as its template language. I have to replace Velocity with FreeMarker, and am looking for something similar to the 'toolbox.xml' file from VelocityViewServlet (there you can map names to Java classes, and using these names it is possible to access methods and variables of various Java classes in the Velocity template).
Does someone know what is possible with FreeMarker instead? So far I have found only information about the form beans. I'd be glad if someone can help.
For the utility functions and macros that are View-related (not Model-related), the standard practice is to implement them in FreeMarker, put them into one or more templates, and #import (or #include) them. It's also possible to pull TemplateDirectiveModel-s and TemplateMethodModelEx-es (these are similar to macros and functions, but implemented in Java) into the template that you will #import/#include, as <#assign foo = 'com.example.Foo'?new()>.
As for calling plain static Java methods, you may use the ObjectWrapper's getStaticModels() (assuming it's a BeansWrapper subclass) and then get the required methods as TemplateMethodModelEx-es with staticModels.get("com.example.MyStatics"). Now that you have them, you can put them into the data-model (Velocity context) in the Controller, or pick methods from them in an #import-ed template, etc. Of course, you can also put POJO objects into the data-model so you can call their non-static methods.
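Wiring that up looks something like this; the version constant and the com.example.MyStatics name are placeholders to check against your own FreeMarker release:

import freemarker.ext.beans.BeansWrapper;
import freemarker.ext.beans.BeansWrapperBuilder;
import freemarker.template.Configuration;
import freemarker.template.TemplateHashModel;

import java.util.HashMap;
import java.util.Map;

public class StaticsSetup {
    public static void main(String[] args) throws Exception {
        Configuration cfg = new Configuration(Configuration.VERSION_2_3_21);

        // Expose the static methods of a class to templates.
        BeansWrapper wrapper =
                new BeansWrapperBuilder(Configuration.VERSION_2_3_21).build();
        cfg.setObjectWrapper(wrapper);
        TemplateHashModel statics = wrapper.getStaticModels();

        Map<String, Object> dataModel = new HashMap<>();
        // Templates can now call ${MyStatics.someMethod(...)}.
        dataModel.put("MyStatics", statics.get("com.example.MyStatics"));
    }
}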
The third method, which is not much different from putting things into the data-model is using "shared variables", which are variables (possibly including TemplateMethodModelEx-es and TemplateDirectiveModel-s) defined on the Configuration level.

FxCop, compose list of callers from dependent assembly

I'm building a couple of custom FxCop rules, and one of the rules needs to enforce that a constructor is called in specific methods. For that, I need to create a list of callers of that specific constructor prior to performing the actual test. How is this possible? Is there some kind of handle to acquire a list of all loaded assemblies in the ApplicationDomain, so I can iterate through the classes and find the constructor's Method object? Ideally the list of callers should be composed in the BeforeAnalysis method.
The Microsoft.FxCop.Sdk.CallGraph.CallersFor(Method) method may give you what you want. However, the general approach you seem to be describing is rarely a good idea, because it would typically assign the problems to the wrong target. For example, in the scenario you describe, it would presumably be desirable to attribute the problems to the methods that should but do not contain the target constructor call. However, if your analysis target is the constructor, the detected problems will be attributed to the constructor rather than to the methods that should have called it.
I think I haven't explained the question very well, but I see your point.
I have 3 different assemblies, and for certain method calls from one assembly to another, I need to ensure that a benchmark constructor is invoked. The benchmark class resides in a 4th assembly. Now my problem was that VS2010 only loads one target assembly for analysis, and when I used the CallGraph to construct a list of methods calling the constructor, it would not find any. When invoking FxCopCmd.exe manually, I could just add the dependent assemblies with the /file: parameter.
My solution is to load the different assemblies manually (not relying on the loaded assembly in RuleUtilities.AnalysisAssemblies) and construct the list of callers in the BeforeAnalysis method:
RuleUtilities.GetAssembly(
        RuleUtilities.AnalysisAssemblies
            .First().Directory + "\\" + additionalAssemblyFilename)
    .Types.SelectMany(type => type.Members)
    .Where(member => member.IsPublic)
    .Where(CanBeCastedToMethod)
    .Cast<Method>()
    .SelectMany(CallGraph.CallersFor);
With this approach I can construct a list of callers for each of the assemblies and for the benchmark class constructor. Works perfectly in VS2010.

Is it good practice to call module functions directly in VB.NET?

I have a Util module in my VB.NET program that has project-wide methods such as logging and property parsing. The general practice where I work seems to be to call these methods directly without prefixing them with Util. When I was new to VB, it took me a while to figure out where these methods/functions were coming from. As I use my own Util methods now, I can't help thinking that it's a lot clearer and more understandable to add Util. before each method call (you know immediately that it's user-defined but not within the current class, and where to find it), and is hardly even longer. What's the general practice when calling procedures/functions of VB modules? Should we prefix them with the module name or not?
Intellisense (and "Goto Definition") should make it trivial to find where things are located, but I always preface the calls with a better namespace, just for clarity of reading. Then it's clear that it's a custom function, and not something built in or local to the class you're working with.
Maybe there's a subtle difference I'm missing, but I tend to use shared classes instead of modules for any code that's common and self-contained - it just seems easier to keep track of for me, and it would also enforce your rule of prefacing it, since you can't just call it from everywhere without giving a namespace to call it from.
I usually put the complete namespace for a shared function, for readability.
Call MyNameSpace.Utils.MySharedFunction()
Util is such a generic name.
An example from the .NET Framework: you have System.Web.HttpUtility.UrlEncode(...). Usually you refer to this as HttpUtility.UrlEncode, since you have an import statement at the top.
The name of the class which has the static utility methods should be readable and explainable. That is good practice. If you have good class names they might just as well reside in a Utils namespace, but the class name should not be Utils.
Put all your logging in a Logger class, all your string handling in a StringUtils class, etc. Try to keep the class names as specific as possible; I'd rather have more classes with fewer functions than the other way around.