I'm trying to sandbox MVEL expression evaluation. Unfortunately, by default MVEL includes all java.lang.* classes in the expression language, so a user could call "Runtime.exit()" and kill the whole system.
How can I exclude all classes that I haven't explicitly added with addImport()?
I haven't been able to make heads or tails of the VariableResolvers.
As far as I know, this is not supported.
I faced this need some time ago on a project at my company. We had to change MVEL quite a bit to introduce a way to configure a custom policy controlling access to types and methods.
The problem is that you can also access any class by its fully qualified name, so it was not just a matter of removing the default imports.
Unfortunately I don't own the code, so I can't make it available.
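One partial workaround is to shadow the dangerous default imports with a harmless class, so that expressions referring to them no longer reach the real java.lang types: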
ParserContext ctx = new ParserContext();
// Re-map the names "System" and "Runtime" to an innocuous class, the intent being
// that expressions such as "System.exit(0)" no longer resolve the real methods.
ctx.addImport("System", String.class);
ctx.addImport("Runtime", String.class);
Have you tried using AspectJ to constrain these calls from MVEL?
I know some of this, but not all of it. Most notably, I am aware of TypeDescription.Generic.Builder but I have a very specific question about it.
Suppose I want to build Supplier<? extends Frob<X>>.
Suppose further that all I know I have is a TypeDefinition for the parameter, but I don't know what it represents (in the example above it would represent Frob<X>). That is, I don't know whether the TypeDefinition I have is a class, a parameterized type, a generic array type, a type variable, a wildcard, or anything else; I just know it's a TypeDefinition.
Obviously if I wanted to make Supplier<Frob<X>>, I could just do:
TypeDescription.Generic.Builder
    .parameterizedType(TypeDescription.ForLoadedType.of(Supplier.class), myTypeDefinition)
    .build();
…assuming I haven't made any typos in the snippet above.
How can I make an upper-bounded wildcard TypeDefinition out of an existing TypeDefinition suitable for supplying as the "parameterized" part of a parameterized type build? Is there an obvious recipe I'm overlooking, or is this a gap in the builder's DSL?
(I'm aware of the asWildcardUpperBound() method on TypeDescription.Generic.Builder, but that presumes I have a builder to work with, and in order to "bootstrap" such a builder I would need to give it a TypeDescription at the very least. But I don't have a TypeDescription; I have a TypeDefinition which might be parameterized, and I don't want to use asErasure().)
(I'm sort of looking for a way to do TypeDescription.Generic.Builder.parameterizedType(myTypeDefinition).asWildcardUpperBound().build(), but there's no obvious way to do that.)
There does seem to be TypeDescription.Generic.OfWildcardType.Latent::boundedAbove but I can't tell if that's supposed to be an "internal use only" class/method or not.
Such an API was indeed missing. I added an API in today's release (1.11.5) to translate an existing generic type description into a builder, which allows transformations to arrays or wildcards. The API is TypeDescription.Generic.Builder.of, which accepts a loaded or unloaded generic type description.
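A rough, untested sketch of how that could be combined with the parameterized-type builder (myTypeDefinition is the Frob&lt;X&gt; description from the question):
TypeDescription.Generic wildcard = TypeDescription.Generic.Builder
        .of(myTypeDefinition.asGenericType())   // start a builder from the existing definition
        .asWildcardUpperBound();                // ? extends Frob<X>
TypeDescription.Generic supplierType = TypeDescription.Generic.Builder
        .parameterizedType(TypeDescription.ForLoadedType.of(Supplier.class), wildcard)
        .build();                               // Supplier<? extends Frob<X>>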
I prefer working with files that are less than 1000 lines long, so I am thinking of breaking up some Erlang modules into more bite-sized pieces.
Is there a way of doing this without expanding the public API of my library?
What I mean is, any time there is a module, any user can do module:func_exported_from_the_module. The only way to really have something be private that I know of is to not export it from any module (and even then holes can be poked).
So if there is technically no way to accomplish what I'm looking for, is there a convention?
For example, there are no private methods in Python classes, but the convention is to use a leading _ in _my_private_method to mark it as private.
I accept that the answer may be, "no, you must have 4K LOC files."
The closest thing to a convention is to use EDoc tags, like @private and @hidden.
From the docs:
@hidden
Marks the function so that it will not appear in the documentation (even if "private" documentation is generated). Useful for debug/test functions, etc. The content can be used as a comment; it is ignored by EDoc.
@private
Marks the function as private (i.e., not part of the public interface), so that it will not appear in the normal documentation. (If "private" documentation is generated, the function will be included.) Only useful for exported functions, e.g. entry points for spawn. (Non-exported functions are always "private".) The content can be used as a comment; it is ignored by EDoc.
Please note that this answer started as a comment to @legoscia's answer.
Different visibilities for different functions are not currently supported.
The current convention, if you want to call it that way, is to have one (or several) 'facade' my_lib.erl module(s) that export the public API of your library/application. Calling any internal module of the library is playing with fire and should be avoided (call them at your own risk).
There are some very nice features in the BEAM VM that rely on being able to call exported functions from any module, such as
Callbacks (funs/anonymous funs), MFA, erlang:apply/3: The calling code does not need to know anything about the library, just that it's something that needs to be called
Behaviours such as gen_server need the previous point to work
Hot reloading: You can upgrade the bytecode of any module without stopping the VM. The code server inside the VM maintains at most two versions of the bytecode for any module, redirecting external calls (those with the Module:) to the most recent version and the internal calls to the current version. That's why you may see some ?MODULE: calls in long-running servers, to be able to upgrade the code
You could argue that these features would still work with more fine-grained, BEAM-oriented visibility levels, and that's true. But I don't think it would solve anything that isn't already solved with facade modules, and it would complicate other parts of the VM and the code a great deal.
Bonus
Something similar applies to records and opaque types: records only exist at compile time, and opaque types only at Dialyzer time. Nothing stops you from accessing their internals anywhere, but you'll only find problems if you go that way:
You insert a new field in the record and suddenly all of your {record_name, ...} = … pattern matches break
You use a library that returns an opaque_adt(); you know it's a list and treat it as one. The library is upgraded to also track the size of the list, so now opaque_adt() is a tuple() and chaos ensues
Only the functions listed in the -export attribute are visible to other modules, i.e. the "public" functions. All other functions are private. Only if you have specified -compile(export_all) are all functions in the module visible from outside, and using -compile(export_all) is not recommended.
I don't know of any existing convention for Erlang, but why not adopt the Python convention? Let's say that "library-private" functions are prefixed with an underscore. You'll need to quote function names with single quotes for that to work:
-module(bar).
-export(['_my_private_function'/0]).
'_my_private_function'() ->
    foo.
Then you can call it as:
> bar:'_my_private_function'().
foo
To me, that communicates clearly that I shouldn't be calling that function unless I know what I'm doing. (and probably not even then)
I have the following code working in a Spring Boot application, and it does what I'm expecting.
TypePool typePool = TypePool.Default.ofClassPath();
ByteBuddyAgent.install();
new ByteBuddy()
.rebase(typePool.describe("com.foo.Bar").resolve(), ClassFileLocator.ForClassLoader.ofClassPath())
.implement(typePool.describe("com.foo.SomeInterface").resolve())
.make()
.load(ClassLoader.getSystemClassLoader());
It makes it so that the class com.foo.Bar implements the interface com.foo.SomeInterface (which has a default implementation).
I would like to use the above code while referring to the class as Bar.class, not using the string representation of its name. But if I do that, I get the following exception.
java.lang.UnsupportedOperationException: class redefinition failed: attempted to change superclass or interfaces
I believe that's because it causes the class to be loaded prior to the redefinition. I'm just now learning to use Byte Buddy.
I want to avoid some reflection at runtime by adding the interface and an implementation using Byte Buddy. I have some other code that checks for this interface.
This is impossible, not because of Byte Buddy, but because no tool is allowed to do this on a regular VM. (There is the so-called dynamic code evolution VM, which is capable of it.)
If you want to avoid the problem, use redefine rather than rebase. Note, however, that whenever you instrument a method this way, you replace the original.
If this is not acceptable, have a look at the Advice class, which you can use via the .visit API to wrap logic around your original code without replacing it.
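A rough, untested sketch of the Advice route, reusing the TypePool lookup from the question so the class is not loaded prematurely (MyAdvice and the method name "doWork" are made-up names, and ByteBuddyAgent.install() is assumed to have run already):
// Advice whose enter code is woven around the original method body.
public class MyAdvice {
    @Advice.OnMethodEnter
    static void enter() {
        System.out.println("entering doWork");
    }
}

new ByteBuddy()
    .redefine(typePool.describe("com.foo.Bar").resolve(),
              ClassFileLocator.ForClassLoader.ofClassPath())
    .visit(Advice.to(MyAdvice.class).on(ElementMatchers.named("doWork")))
    .make()
    .load(ClassLoader.getSystemClassLoader(), ClassReloadingStrategy.fromInstalledAgent());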
I was just curious, why should we use reflection in the first place?
// Without reflection
Foo foo = new Foo();
foo.hello();
// With reflection
Class cls = Class.forName("Foo");
Object foo = cls.newInstance();
Method method = cls.getMethod("hello", null);
method.invoke(foo, null);
We can simply create an object and call the class's method, but why do the same using forName, newInstance, and getMethod?
To make everything dynamic?
Simply put: because sometimes you don't know either the "Foo" or "hello" parts at compile time.
The vast majority of the time you do know this, so it's not worth using reflection. Just occasionally, however, you don't - and at that point, reflection is all you can turn to.
As an example, protocol buffers allows you to generate code which either contains full statically-typed code for reading and writing messages, or it generates just enough so that the rest can be done by reflection: in the reflection case, the load/save code has to get and set properties via reflection - it knows the names of the properties involved due to the message descriptor. This is much (much) slower but results in considerably less code being generated.
Another example would be dependency injection, where the names of the types used for the dependencies are often provided in configuration files: the DI framework then has to use reflection to construct all the components involved, finding constructors and/or properties along the way.
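A stripped-down sketch of that idea (the settings object, the property key, and the class name are all invented for illustration):
// Hypothetical: the implementation class is named in a configuration file, not in source code.
String className = settings.getProperty("greeter.impl");   // e.g. "com.example.ConsoleGreeter"
Class<?> cls = Class.forName(className);
Object greeter = cls.getDeclaredConstructor().newInstance();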
It is used whenever you (that is, your method or your class) don't know at compile time the type you should instantiate or the method you should invoke.
Also, many frameworks use reflection to analyze and use your objects. For example:
Hibernate/NHibernate (and any other object-relational mapper) use reflection to inspect all the properties of your classes so that they can read and update them when executing database operations
you may want to make it configurable which method of a user-defined class your application executes by default. The configured value is a String, and you can load the target class, look up the method with the configured name, and invoke it, without knowing any of it at compile time (see the sketch after this list)
parsing annotations is done by reflection
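A minimal sketch of the configurable-method case from the list above (the class and method names are invented):
// Hypothetical values that would normally come from a configuration file.
String targetClass = "com.example.ReportJob";
String targetMethod = "runDaily";
Object job = Class.forName(targetClass).getDeclaredConstructor().newInstance();
Method method = job.getClass().getMethod(targetMethod);
method.invoke(job);   // neither the class nor the method was known at compile time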
A typical usage is a plug-in mechanism, which supports classes (usually implementations of interfaces) that are unknown at compile time.
You can use reflection for automating any process that could usefully use a list of the object's methods and/or properties. If you've ever spent time writing code that does roughly the same thing on each of an object's fields in turn -- the obvious way of saving and loading data often works like that -- then that's something reflection could do for you automatically.
The most common applications are probably these three:
Serialization (see, e.g., .NET's XmlSerializer)
Generation of widgets for editing objects' properties (e.g., Xcode's Interface Builder, .NET's dialog designer)
Factories that create objects with arbitrary dependencies by examining the classes for constructors and supplying suitable objects on creation (e.g., any dependency injection framework)
Using reflection, you can very easily write configurations that name methods and fields as text; the framework reading the configuration can then take a textual description of a field and find the real corresponding member.
e.g. JXPath allows you to navigate objects like this:
//company[@name='Sun']/address
so JXPath will look for a method getCompany() (corresponding to company), then an attribute or field on the result called name, and so on.
You'll find this in lots of frameworks in Java e.g. JavaBeans, Spring etc.
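A very rough sketch of the kind of lookup such a framework performs internally (root is some arbitrary object, and the getCompany()/getName() accessors are assumed to follow the JavaBeans naming convention):
// Resolve "company/name" against an arbitrary root object purely by naming convention.
Object company = root.getClass().getMethod("getCompany").invoke(root);
Object name = company.getClass().getMethod("getName").invoke(company);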
It's useful for things like serialization and object-relational mapping. You can write a generic function to serialize an object by using reflection to get all of an object's properties. In C++, you'd have to write a separate function for every class.
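For example, a toy field dumper along these lines (just a sketch; real serializers handle nesting, escaping, transient fields and so on):
// Dump every field of any object as name=value lines, without knowing its class in advance.
static String dump(Object obj) throws IllegalAccessException {
    StringBuilder sb = new StringBuilder();
    for (Field field : obj.getClass().getDeclaredFields()) {
        field.setAccessible(true);   // also reach private fields
        sb.append(field.getName()).append('=').append(field.get(obj)).append('\n');
    }
    return sb.toString();
}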
I have used it in some validation classes before, where I passed a large, complex data structure in the constructor and then ran a zillion (a couple hundred, really) methods to check the validity of the data. All of my validation methods were private and returned booleans, so I made one "validate" method you could call, which used reflection to invoke all the private methods in the class that returned booleans.
This made the validate method more concise (it didn't need to enumerate each little method) and guaranteed all the methods were being run (e.g. if someone writes a new validation rule and forgets to call it from the main method, it still gets run).
After changing to use reflection I didn't notice any meaningful loss in performance, and the code was easier to maintain.
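Roughly along these lines (a simplified sketch; the real class was more involved):
// Invoke every private, no-argument method that returns boolean and combine the results.
public boolean validate() throws Exception {
    for (Method m : getClass().getDeclaredMethods()) {
        if (Modifier.isPrivate(m.getModifiers())
                && m.getParameterCount() == 0
                && m.getReturnType() == boolean.class) {
            m.setAccessible(true);
            if (!(boolean) m.invoke(this)) {
                return false;
            }
        }
    }
    return true;
}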
In addition to Jon's answer, another usage is being able to "dip your toe in the water" to test whether a given facility is present in the JVM.
Under OS X, a Java application looks nicer if some Apple-provided classes are used. The easiest way to test whether those classes are present is to check with reflection first.
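Something like this (the Apple class name here is only an illustration of the pattern):
boolean appleExtensionsAvailable;
try {
    // Succeeds only when the optional, platform-specific class is on the classpath.
    Class.forName("com.apple.eawt.Application");
    appleExtensionsAvailable = true;
} catch (ClassNotFoundException e) {
    appleExtensionsAvailable = false;
}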
Sometimes you need to create an object of a class on the fly, or from somewhere other than Java code (e.g. a JSP). In those cases reflection is useful.
Do you have a naming convention for APIs or Classes that are being phased in to replace an older version that performed the same function / filled the same role?
E.g. Windows does this by adding "Ex" to the end of the function:
ShellExecute // old
ShellExecuteEx // new
What do you prefer, and what is your reasoning?
Appending 2, V2, New, NowInStereo?
Doing a one-time rename of the old API from Something to SomethingOld and using Something for the new stuff? This option worries me when it comes to version control, but it also seems the least likely to be burdened with a V3 or ReallyNew problem in the future.
Making up a completely different name that may describe the function less accurately, but at least is different.
Much of the time you can get away with changing the package name, rather than the class name itself.
If the new class performs the same function as the old one, I remove the old one and replace it with the new one with same name. I can still consult the old one in version control if needed.
Why change it at all? Interfaces being the contracts that they are, why would you break the interface after everything is using it?
Edit: Sorry to botch your comment Josh...
I think that if you've gone to the trouble to create the interface you need to do everything you can to maintain it. What kind of bad choice were you thinking about when you commented before?
You only need to create a new function if its arguments change due to new requirements.
The best naming convention for an API is the one the users of the API would expect. If the API is used only on Windows then adding an Ex (or Ex2 for the next rev) may be appropriate. I'm not aware of other conventions on other platforms. Also, your programming language may have a convention for extending API methods.
If you are using an object oriented language and this is an object method, you do not have to change the name because you can have methods with the same name and different signatures. However, it may still make sense to provide a new name to let users know that you want them to migrate to the new method.
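For example, in Java the old entry point can simply delegate to the new overload, optionally marked @Deprecated to nudge callers toward the new signature (the names and the extra parameter here are made up):
@Deprecated
public void export(String path) {
    export(path, false);   // old signature forwards to the new one with a default
}

public void export(String path, boolean overwrite) {
    // new behaviour, required by the new overwrite option, lives here
}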