From PMD:
IntegerInstantiation: In JDK 1.5, calling new Integer() causes memory allocation. Integer.valueOf() is more memory friendly.
ByteInstantiation: In JDK 1.5, calling new Byte() causes memory allocation. Byte.valueOf() is more memory friendly.
ShortInstantiation: In JDK 1.5, calling new Short() causes memory allocation. Short.valueOf() is more memory friendly.
LongInstantiation: In JDK 1.5, calling new Long() causes memory allocation. Long.valueOf() is more memory friendly.
Does the same apply for JDK 1.6? I am just wondering whether the compiler or the JVM optimizes this into the respective valueOf methods.
In theory the compiler could optimize a small subset of cases where (for example) new Integer(n) was used instead of the recommended Integer.valueOf(n).
Firstly, we should note that the optimization can only be applied if the compiler can guarantee that the wrapper object will never be compared against other objects using == or !=. (If it were, the optimization would change the semantics of wrapper objects, such that == and != would behave in a way that is contrary to the JLS.)
In the light of this, it is unlikely that such an optimization would be worth implementing:
The optimization would only help poorly written applications that ignore the javadoc and other recommendations. For a well-written application, testing whether the optimization can be applied only slows down the optimizer, e.g. the JIT compiler.
Even for a poorly written application, the restriction on where the optimization is allowed means that few of the actual calls to new Integer(n) would qualify. In most situations it is too expensive to trace all of the places where the wrapper created by a new expression might be used. If you include reflection in the picture, the tracing is virtually impossible for anything but local variables. Since most uses of the primitive wrappers entail putting them into collections, it is easy to see that a practical optimizer would hardly ever find the optimization to be allowed.
Even in the cases where the optimization was actually applied, it would only help for values of n within a limited range. For example, calling Integer.valueOf(n) for large n will always create a new object.
This holds for Java 6 as well. Try the following with Java 6 to prove it:
System.out.println(new Integer(3) == new Integer(3));
System.out.println(Integer.valueOf(3) == Integer.valueOf(3));
System.out.println(new Long(3) == new Long(3));
System.out.println(Long.valueOf(3) == Long.valueOf(3));
System.out.println(new Byte((byte)3) == new Byte((byte)3));
System.out.println(Byte.valueOf((byte)3) == Byte.valueOf((byte)3));
However, with big numbers the caching no longer applies, as expected.
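For example (the JLS only guarantees Integer caching for values from -128 to 127):
System.out.println(Integer.valueOf(3) == Integer.valueOf(3));     // true: both come from the cache
System.out.println(Integer.valueOf(300) == Integer.valueOf(300)); // false: outside the cache, two fresh objects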
The same applies for Java SE 6. In general it is difficult to optimise away the creation of new objects. It is guaranteed that, say, new Integer(42) != new Integer(42). There is potential in some circumstances to remove the need to get an object altogether, but I believe all that is disabled in HotSpot production builds at the time of writing.
Most of the time it doesn't matter. Unless you have identified a critical piece of code which is called many times (e.g. 10K or more), it is unlikely to make much difference.
If in doubt, assume the compiler does no optimisations; in fact it does very little. The JVM, however, can do lots of optimisations, but removing the need to create an object is not one of them. The general assumption is that object allocation is fast enough most of the time.
Note: code which is run only a few times (< 10K times by default) will not even be fully compiled to native code, and this is likely to slow down your code more than the object allocation.
On a recent JVM with escape analysis and scalar replacement, new Integer() might actually be faster than Integer.valueOf() when the scope of the variable is restricted to one method or block. See Autoboxing versus manual boxing in Java for more details.
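As a rough illustration (a hypothetical method; whether scalar replacement actually kicks in depends on the JVM and its settings), the boxed value below never escapes the method, so the JIT is free to eliminate the allocation entirely:
static long sumSquares(int n) {
    long total = 0;
    for (int i = 0; i < n; i++) {
        Integer boxed = new Integer(i); // never escapes this method,
        total += (long) boxed * boxed;  // so it may be scalar-replaced by the JIT
    }
    return total;
}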
It's easy to crash at runtime with unwrap:
fn main() {
    c().unwrap();
}

fn c() -> Option<i64> {
    None
}
Result:
Compiling playground v0.0.1 (file:///playground)
Running `target/debug/playground`
thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', ../src/libcore/option.rs:325
note: Run with `RUST_BACKTRACE=1` for a backtrace.
error: Process didn't exit successfully: `target/debug/playground` (exit code: 101)
Is unwrap only designed for quick tests and proofs-of-concept?
I cannot claim "My program will not crash here, so I can use unwrap" if I really want to avoid panic! at runtime, and I think avoiding panic! is what we want in a production application.
In other words, can I say my program is reliable if I use unwrap? Or must I avoid unwrap even if the case seems simple?
I read this answer:
It is best used when you are positively sure that you don't have an error.
But I don't think I can be "positively sure".
I don't think this is an opinion question, but a question about Rust core and programming.
While the whole topic of error handling is very complicated and often opinion-based, this question can actually be answered here, because Rust has a rather narrow philosophy. That is:
panic! for programming errors (“bugs”)
proper error propagation and handling with Result<T, E> and Option<T> for expected and recoverable errors
One can think of unwrap() as converting between those two kinds of errors (it is converting a recoverable error into a panic!()). When you write unwrap() in your program, you are saying:
At this point, a None/Err(_) value is a programming error and the program is unable to recover from it.
For example, say you are working with a HashMap and want to insert a value which you may want to mutate later:
use std::collections::HashMap;

let mut age_map = HashMap::new();
age_map.insert("peter", 21);
// ...
if /* some condition */ {
    *age_map.get_mut("peter").unwrap() += 1;
}
Here we use unwrap() because we can be sure that the key holds a value. It would be a programming error if it didn't and, even more importantly, it's not really recoverable. What would you do if at that point there were no value for the key "peter"? Try inserting it again ... ?
But as you may know, there is a beautiful entry API for the maps in Rust's standard library. With that API you can avoid all those unwrap()s. And this applies to pretty much all situations: you can very often restructure your code to avoid the unwrap()! Only in very few situations is there no way around it. But then it's OK to use it, if you want to signal: at this point, it would be a programming bug.
There has been a recent, fairly popular blog post on the topic of error handling whose conclusion is similar to Rust's philosophy. It's rather long but worth reading: "The Error Model". Here is my attempt to summarize the article in relation to this question:
deliberately distinguish between programming bugs and recoverable errors
use a “fail fast” approach for programming bugs
In summary: use unwrap() when you are sure that the recoverable error that you get is in fact unrecoverable at that point. Bonus points for explaining “why?” in a comment above the affected line ;-)
In other words, can I say my program is reliable if I use unwrap? Or must I avoid unwrap even if the case seems simple?
I think using unwrap judiciously is something you have to learn to handle; it can't simply be avoided.
My rhetorical question barrage would be:
Can I say my program is reliable if I use indexing on vectors, arrays or slices?
Can I say my program is reliable if I use integer division?
Can I say my program is reliable if I add numbers?
(1) is like unwrap, indexing panics if you make a contract violation and try to index out of bounds. This would be a bug in the program, but it doesn't catch as much attention as a call to unwrap.
(2) is like unwrap, integer division panics if the divisor is zero.
(3) is unlike unwrap, addition does not check for overflow in release builds, so it may silently result in wraparound and logical errors.
Of course, there are strategies for handling all of these without leaving panicky cases in the code, but many programs simply use, for example, bounds checking as it is.
There are two questions folded into one here:
is the use of panic! acceptable in production
is the use of unwrap acceptable in production
panic! is a tool that is used, in Rust, to signal irrecoverable situations or violated assumptions. It can be used either to crash a program that cannot possibly continue in the face of this failure (for example, an OOM situation) or to work around a case that you know cannot be executed but that the compiler cannot (at the moment) prove unreachable.
unwrap is a convenience that is best avoided in production. The problem with unwrap is that it does not state which assumption was violated; it is better to use expect("...") instead, which is functionally equivalent but also gives a clue as to what went wrong (without opening the source code).
unwrap() is not necessarily dangerous. Just like with unreachable!() there are cases where you can be sure some condition will not be triggered.
Functions returning Option or Result are sometimes just suited to a wider range of conditions, but due to how your program is structured those cases might never occur.
For example: when you create an iterator from a Vec you build yourself, you know its exact length and can be sure for how many calls to next() it will return Some(T) (and you can safely unwrap() it).
unwrap is great for prototyping, but not safe for production. Once you are done with your initial design, you go back and replace unwrap() with proper handling of the Result<Value, ErrorType>.
Does anyone know whether the Cog VM for Pharo and Squeak is able to optimize away simple indirect variable accesses with accessors like this:
SomeClass>>someProperty
    ^ someProperty

SomeClass>>someSecondProperty
    ^ someSecondProperty
that just return an instance variable, so that methods like this:
SomeClass>>someMethod
    ^ self someProperty doWith: self someSecondProperty
will be no slower than methods like this:
SomeClass>>someMethod
    ^ someProperty doWith: someSecondProperty
I did some benchmarks, and they do seem roughly equivalent in speed, but I'm curious if anyone familiar with Cog knows for certain, because if there is a difference (no matter how slight), then there might be situations however rare where one is inappropriate.
There's a little cost right now, but it's so little that you should not bother. If you want performance, there are other parts of your code you would be willing to change first, not instance variable access.
A quick bench:
bench
    ^ { [ iv yourself ] bench . [ self iv yourself ] bench }

=> #('52,400,000 per second.' '49,800,000 per second.')
The difference does not look so big.
Once jitted and executed once, the difference is that "self iv" executes an inline cache check, a CPU call and a CPU return in addition to fetching the instance variable value. The call and return instructions are most probably going to be anticipated by the CPU and not really executed, so it comes down to the inline cache check, which is a very cheap operation.
What the inlining compiler currently in development will add is that the CPU call and return are really going to be removed by inlining, which will cover the cases where the CPU has not anticipated them. In addition, the inline cache check may or may not be removed depending on circumstances.
There are details, such as the getter method needing to be compiled to native code, which takes room in the machine code zone and could increase the number of machine code zone garbage collections, but that's even more anecdotal than the inline cache check overhead.
So in short, there is a very, very small overhead right now, but that overhead will decrease in the future.
Clement
This is a tough question... And I don't know the exact answer. But I can help you learning how to check by yourself with a few clues.
You'll need to load the VMMaker package in an image. In Pharo, there is a procedure to build such an image by just downloading everything from the net and GitHub. See https://github.com/pharo-project/pharo-vm
Then the main hint is that methods that just return an instance variable are compiled as if executing primitive 264 + inst var offset... (for example, you'll see this by inspecting Interval>>#first or any other simple inst var getter)
In classical interpreter VM, this is handled in Interpreter>>internalExecuteNewMethod.
It seems like you pay the cost of a method lookup (some caches make this cheaper), but not of a real method activation.
I suppose this explains why debuggers can't enter such simple methods... This, however, is not real inlining.
In Cog, the same happens in StackInterpreter>>internalQuickPrimitiveResponse if the interpreter is ever used.
As for the JIT, this is handled by Cogit>>compilePrimitive; see also the implementors of genQuickReturnInstVar. This is not proper inlining either, but you can see that very few instructions are generated. Again, I bet you generally don't pay the price of a lookup thanks to the so-called Polymorphic Inline Cache (PIC).
For real inlining, I didn't find a clue after this quick browsing of source code...
My understanding is that it will happen on the image side through a callback from the Sista VM, but this is work in progress and only my vague recollection. Clement Bera is writing a blog about this (the Sista chronicles at http://clementbera.wordpress.com).
If you're afraid of digging into the VMMaker source code, I invite you to ask on vm-dev.lists.squeakfoundation.org. I'm pretty sure Eliot Miranda or Clement will be happy to give you a far more accurate answer.
EDIT
I forgot to tell you the conclusion of the above peregrinations: I think there will be a very small difference if you use the inst. var. directly rather than a getter, but it shouldn't really be noticeable, and in all cases your programming style should NOT be guided by such negligible optimizations.
TL;DR
Please provide a piece of code written in some well known dynamic language (e.g. JavaScript) and how that code would look like in Java bytecode using invokedynamic and explain why the usage of invokedynamic is a step forward here.
Background
I have googled and read quite a lot about the not-that-new-anymore invokedynamic instruction, which everyone on the internet agrees will help speed up dynamic languages on the JVM. Thanks to stackoverflow I managed to get my own bytecode instructions with Sable/Jasmin to run.
I have understood that invokedynamic is useful for lazy constants and I also think that I understood how the OpenJDK takes advantage of invokedynamic for lambdas.
Oracle has a small example, but as far as I can tell the usage of invokedynamic in this case defeats the purpose, as the example for "adder" could be expressed more simply, faster, and with roughly the same effect with the following bytecode:
aload whereeverAIs
checkcast java/lang/Integer
aload whereeverBIs
checkcast java/lang/Integer
invokestatic IntegerOps/adder(Ljava/lang/Integer;Ljava/lang/Integer;)Ljava/lang/Integer;
because for some reason Oracle's bootstrap method knows that both arguments are integers anyway. They even "admit" that:
[..]it assumes that the arguments [..] will be Integer objects. A bootstrap method requires additional code to properly link invokedynamic [..] if the parameters of the bootstrap method (in this example, callerClass, dynMethodName, and dynMethodType) vary.
Well yes, and without that interesting "additional code" there is no point in using invokedynamic here, is there?
So after that and a couple of further Javadoc and blog entries, I think I have a pretty good grasp on how to use invokedynamic as a poor replacement where invokestatic/invokevirtual or getfield would work just as well.
Now I am curious how to actually apply the invokedynamic instruction to a real-world use case so that it actually is an improvement over what we could do with "traditional" invocations (except lazy constants, I got those...).
Actually, lazy operations are the main advantage of invokedynamic if you take the term “lazy creation” broadly. E.g., the lambda creation feature of Java 8 is a kind of lazy creation that includes the possibility that the actual class containing the code that will be finally invoked by the invokedynamic instruction doesn’t even exist prior to the execution of that instruction.
This can be projected to all kinds of scripting languages delivering code in a different form than Java bytecode (it might even be source code). Here, the code may be compiled right before the first invocation of a method and remain linked afterwards. But it may even become unlinked if the scripting language supports redefinition of methods. This uses the second important feature of invokedynamic: it allows mutable CallSites which may be changed afterwards while still supporting maximal performance when invoked frequently without redefinition.
This possibility to change an invokedynamic target afterwards allows another option, linking to an interpreted execution on the first invocation, counting the number of executions and compiling the code only after exceeding a threshold (and relinking to the compiled code then).
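A minimal sketch of that relinking idea, using the method handle API that invokedynamic call sites are built on (class and method names here are hypothetical): a MutableCallSite first points at a slow "interpreted" target and is later relinked to a "compiled" one without its callers noticing:
import java.lang.invoke.*;

public class Relink {
    static final MethodType TYPE = MethodType.methodType(String.class);
    // A call site whose target can be swapped at runtime.
    static final MutableCallSite SITE = new MutableCallSite(TYPE);

    static String interpreted() { return "interpreted"; }
    static String compiled()    { return "compiled"; }

    public static void main(String[] args) throws Throwable {
        MethodHandles.Lookup l = MethodHandles.lookup();
        SITE.setTarget(l.findStatic(Relink.class, "interpreted", TYPE));
        MethodHandle invoker = SITE.dynamicInvoker();

        System.out.println(invoker.invoke()); // "interpreted"

        // Threshold exceeded: relink the same call site to the optimized target.
        SITE.setTarget(l.findStatic(Relink.class, "compiled", TYPE));
        MutableCallSite.syncAll(new MutableCallSite[]{ SITE });

        System.out.println(invoker.invoke()); // "compiled"
    }
}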
Regarding dynamic method dispatch based on a runtime instance, it’s clear that invokedynamic can’t elide the dispatch algorithm. But if you detect at runtime that a particular call-site will always call the method of the same concrete type you may relink the CallSite to an optimized code which will do a short check if the target is that expected type and performs the optimized action then but branches to the generic code performing the full dynamic dispatch only if that test fails. The implementation may even de-optimize such a call-site if it detects that the fast path check failed a certain number of times.
This is close to how invokevirtual and invokeinterface are optimized internally in the JVM as for these it’s also the case that most of these instructions are called on the same concrete type. So with invokedynamic you can use the same technique for arbitrary lookup algorithms.
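A small illustration of that fast-path/fallback pattern with plain method handles (hypothetical methods; in a real implementation the guarded handle would be installed as the target of the invokedynamic call site):
import java.lang.invoke.*;

public class GuardDemo {
    static boolean isString(Object o) { return o instanceof String; }
    static String fast(Object o) { return "fast path, length " + ((String) o).length(); }
    static String slow(Object o) { return "generic path for " + o.getClass().getName(); }

    public static void main(String[] args) throws Throwable {
        MethodHandles.Lookup l = MethodHandles.lookup();
        MethodType t = MethodType.methodType(String.class, Object.class);
        // Cheap type check up front, full dynamic dispatch only when it fails.
        MethodHandle handle = MethodHandles.guardWithTest(
                l.findStatic(GuardDemo.class, "isString",
                        MethodType.methodType(boolean.class, Object.class)),
                l.findStatic(GuardDemo.class, "fast", t),
                l.findStatic(GuardDemo.class, "slow", t));

        System.out.println(handle.invoke((Object) "hello")); // fast path, length 5
        System.out.println(handle.invoke((Object) 42));      // generic path for java.lang.Integer
    }
}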
But if you want an entirely different use case, you can use invokedynamic to implement friend semantics which are not supported by the standard access modifier rules. Suppose you have a class A and B which are meant to have such a friend relationship in that A is allowed to invoke private methods of B. Then all these invocations may be encoded as invokedynamic instructions with the desired name and signature and pointing to a public bootstrap method in B which may look like this:
public static CallSite bootStrap(Lookup l, String name, MethodType type)
        throws NoSuchMethodException, IllegalAccessException {
    if (l.lookupClass() != A.class || (l.lookupModes() & 0xf) != 0xf)
        throw new SecurityException("unprivileged caller");
    l = MethodHandles.lookup();
    return new ConstantCallSite(l.findStatic(B.class, name, type));
}
It first verifies that the provided Lookup object has full access to A as only A is capable of constructing such an object. So sneaky attempts of wrong callers are sorted out at this place. Then it uses a Lookup object having full access to B to complete the linkage. So, each of these invokedynamic instructions is permanently linked to the matching private method of B after the first invocation, running at the same speed as ordinary invocations afterwards.
Both instructions use static rather than dynamic dispatch. It seems like the only substantial difference is that invokespecial will always have, as its first argument, an object that is an instance of the class that the dispatched method belongs to. However, invokespecial does not actually put the object there; the compiler is the one responsible for making that happen by emitting the appropriate sequence of stack operations before emitting invokespecial. So replacing invokespecial with invokestatic should not affect the way the runtime stack / heap gets manipulated -- though I expect that it will cause a VerifyError for violating the spec.
I'm curious about the possible reasons behind making two distinct instructions that do essentially the same thing. I took a look at the source of the OpenJDK interpreter, and it seems like invokespecial and invokestatic are handled almost identically. Does having two separate instructions help the JIT compiler better optimize code, or does it help the classfile verifier prove some safety properties more efficiently? Or is this just a quirk in the JVM's design?
Disclaimer: It is hard to tell for sure since I never read an explicit Oracle statement about this, but I pretty much think this is the reason:
When you look at Java byte code, you could ask the same question about other instructions. Why would the verifier stop you when pushing two ints on the stack and treating them as a single long right after? (Try it, it will stop you.) You could argue that by allowing this, you could express the same logic with a smaller instruction set. (To go further with this argument: a byte cannot express too many instructions, so the Java byte code set should cut down wherever possible.)
Of course, in theory you would not need a byte code instruction for pushing ints and longs to the stack, and you are right about the fact that you would not need two instructions for INVOKESPECIAL and INVOKESTATIC in order to express method invocations. A method is uniquely identified by its method descriptor (name and raw argument types) and you could not define both a static and a non-static method with an identical descriptor within the same class. And in order to validate the byte code, the Java compiler must check whether the target method is static nevertheless.
Remark: This contradicts the answer of v6ak. However, the method descriptor of a non-static method is not altered to include a reference to this.getClass(). The Java runtime could therefore always infer the appropriate method binding from the method descriptor for a hypothetical INVOKESMART instruction. See JVMS §4.3.3.
So much for the theory. However, the intentions that are expressed by both invocation types are quite different. And remember that Java byte code is supposed to be used by other tools than javac to create JVM applications, as well. With byte code, these tools produce something that is more similar to machine code than your Java source code. But it is still rather high level. For example, byte code still is verified and the byte code is automatically optimized when compiled to machine code. However, the byte code is an abstraction that intentionally contains some redundancy in order to make the meaning of the byte code more explicit. And just like the Java language uses different names for similar things to make the language more readable, the byte code instruction set contains some redundancy as well. And as another benefit, verification and byte code interpretation/compilation can speed up since a method's invocation type does not always need to be inferred but is explicitly stated in the byte code. This is desirable because verification, interpretation and compilation are done at runtime.
As a final anecdote, I should mention that a class's static initializer <clinit> was not flagged static before Java 5. In this context, the static invocation could also be inferred by the method's name but this would cause even more run time overhead.
Here are the definitions:
http://docs.oracle.com/javase/specs/jvms/se5.0/html/Instructions2.doc6.html#invokestatic
http://docs.oracle.com/javase/specs/jvms/se5.0/html/Instructions2.doc6.html#invokespecial
There are significant differences. Say we wanted to design an invokesmart instruction, which would choose smartly between invokestatic and invokespecial:
First, it would not be a problem to distinguish between static and virtual calls, since we can't have two methods with the same name, the same parameter types and the same return type, even if one is static and the other is virtual. The JVM does not allow that (for a strange reason). Thanks to raphw for noticing that.
Now, what would invokesmart foo/Bar.baz(I)I mean? It may mean:
A static method call foo.Bar.baz that consumes an int from the operand stack and pushes another int. // (int) -> (int)
An instance method call foo.Bar.baz that consumes a foo.Bar and an int from the operand stack and pushes an int. // (foo.Bar, int) -> (int)
How would you choose from them? There may exist both methods.
We may try to solve it by requiring foo/Bar.baz(Lfoo/Bar;I) for the static call. However, we may have both public static int baz(Bar, int) and public int baz(int).
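For illustration, here is a hypothetical class where both methods legally coexist; under a receiver-folding encoding, both calls would end up looking like foo/Bar.baz(Lfoo/Bar;I)I:
package foo;

public class Bar {
    // static: the Bar argument is explicit in the descriptor
    public static int baz(Bar receiver, int x) { return x; }

    // instance: 'this' is an implicit Bar on the operand stack
    public int baz(int x) { return x; }
}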
We may say that it does not matter and simply disallow such a situation. (I don't think that would be a good idea, but just imagine it.) What would it mean?
If the method is static, there are probably no additional restrictions. On the other hand, if the method is not static, there are some restrictions: "Finally, if the resolved method is protected (§4.6), and it is either a member of the current class or a member of a superclass of the current class, then the class of objectref must be either the current class or a subclass of the current class."
There are some further differences, see the note about ACC_SUPER.
It would mean that all the referenced classes must be loaded before bytecode verification. I hope this is not necessary now, but I am not 100% sure.
So, it would mean very inconsistent behavior.
I am converting some C++ code to Java and I was wondering what I can do about the inlined functions. Can I assume that functions will be inlined by the VM (as and when necessary) and just not worry about this? How do I profile to observe this behaviour? Suppose there is a main outer function, and I throw a for loop around it and cause a million invocations. Should I expect to see improvements as the VM inlines more and more?
Yes Java does inline method calls. The inlining is performed by the JIT compiler, so you won't see it by examining the bytecode files.
Whether inlining actually occurs for a given method call will depend on the size of the method body and whether the call is inlineable. (If a method call still requires dynamic dispatching, even after the JVM has applied its global optimizations designed to remove unnecessary dispatching, then it cannot be inlined.)
The same applies to your example with your outer main function. It depends on how big the method body is. On the other hand, if the method takes a significant time to execute, then the relative importance of the optimization decreases correspondingly.
My advice is to not worry about things like this at this stage. Just write the code clearly and simply, and let the JIT compiler deal with the problem of optimizing. When your application is working, you can profile it and see if there are any "hot spots" in the code that are worthwhile optimizing by hand.
But I should be able to see this in something like VisualVM, right? I mean, initially no inlining, then more and more stuff is inlined, so the average time for the outer method is slightly reduced.
It may be observable and it may not, depending on the amount time spent in making the calls relative to executing the method bodies. (Profiling often relies on sampling the program counter. The reported times may be inaccurate if the number of samples for a given region of code is too small ... and for other reasons.)
It also depends on the JVM you are using. Not all JVMs will re-optimize code that they have previously optimized.
Finally, there is a way to get the JVM to dump the native code output by the JIT compiler. That will give you a definitive answer as to what has been inlined ... if you are prepared to read the machine instructions.
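If you want to try it, here is a minimal sketch (hypothetical class; assumes a HotSpot JVM, where -XX:+PrintCompilation shows what gets JIT-compiled, -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining reports inlining decisions, and -XX:+PrintAssembly additionally needs the hsdis disassembler plugin to dump the native code):
// Run with:
//   java -XX:+PrintCompilation -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining InlineDemo
public class InlineDemo {
    private int value = 21;

    // A tiny accessor: a typical candidate for inlining once the loop gets hot.
    private int getValue() { return value; }

    public static void main(String[] args) {
        InlineDemo d = new InlineDemo();
        long sum = 0;
        for (int i = 0; i < 1000000; i++) { // enough iterations to trigger JIT compilation
            sum += d.getValue();
        }
        System.out.println(sum); // keeps the loop from being eliminated entirely
    }
}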