Java Virtual Machine Specification (JVMS): Bug in "5.4.5 Method overriding"

I filed the following bug on September 28th, 2009. Sadly, I have still not received any response, and the final version of the specification is still incorrect. Is this really a bug? If not, why not? If it is, what should I do?
The section that contains the bug is 5.4.5 (Method overriding): http://docs.oracle.com/javase/specs/jvms/se7/html/jvms-5.html#jvms-5.4.5 in combination with the description of the INVOKEVIRTUAL opcode: http://docs.oracle.com/javase/specs/jvms/se7/html/jvms-6.html#jvms-6.5.invokevirtual
According to 5.4.5, m1 can override m2 even if m1 is private. This can happen when .class files are created by hand or when .class files from two separate compilations are combined.
In my example I have classes A and B, with B extending A. I compiled these classes so that A contains a public method named f and B contains a private method also named f (by first declaring both methods public, compiling, copying A.class to a safe place, then removing the declaration of f in A, changing f to private in B, compiling B, and using the saved version of A.class).
When I now run this, my current Oracle JVM outputs A (meaning the method f in A is invoked). According to the specification, the output should be B (meaning the method f in B should be invoked).
EDIT: Actually, B.f should be resolved. Invocation may fail because of access right checks for the resolved method, if the caller is not B. However, I believe the method resolution part is wrong.
I think that the definition in 5.4.5 should check the access rights of m1, not only m2.
public class A {
public void f();
Code:
0: getstatic #2 // Field java/lang/System.out:Ljava/io/PrintStream;
3: ldc #3 // String A
5: invokevirtual #4 // Method java/io/PrintStream.println:(Ljava/lang/String;)V
8: return
}
public class B extends A {
private void f();
Code:
0: getstatic #2 // Field java/lang/System.out:Ljava/io/PrintStream;
3: ldc #3 // String B
5: invokevirtual #4 // Method java/io/PrintStream.println:(Ljava/lang/String;)V
8: return
}
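The caller is not shown above; a minimal one (the class name Main is just for illustration) would look roughly like this, compiled while B.f was still public so that its call site refers to B.f:
public class Main {
    public static void main(String[] args) {
        // This call site was compiled against the original B, so its
        // constant pool entry is an invokevirtual of B.f()V.
        new B().f();
    }
}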
Thanks,
Carsten

Your issue has finally been addressed. The current version of the Java 8 JVM specification contains the required clarification:
5.4.5 Overriding
An instance method mC declared in class C overrides another instance method mA declared in class A iff either mC is the same as mA, or all of the following are true:
• C is a subclass of A.
• mC has the same name and descriptor as mA.
• mC is not marked ACC_PRIVATE.
• One of the following is true:
  • mA is marked ACC_PUBLIC; or is marked ACC_PROTECTED; or is marked neither ACC_PUBLIC nor ACC_PROTECTED nor ACC_PRIVATE and A belongs to the same run-time package as C.
  • mC overrides a method m' (m' distinct from mC and mA) such that m' overrides mA.
There is another addition in §4.10.1.5 “Type Checking Abstract and Native Methods”:
private methods and static methods are orthogonal to dynamic method dispatch,
so they never override other methods (§5.4.5).
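To illustrate the clarified rule with plain Java source (class names are just for illustration): a private method is never selected by dynamic dispatch, so a same-named method in a subclass does not override it. A minimal sketch:
class Sup {
    private void greet() { System.out.println("Sup.greet"); }
    void callGreet() { greet(); }   // binds statically to Sup.greet
}

class Sub extends Sup {
    // Does NOT override Sup.greet: greet is private in Sup and is therefore
    // orthogonal to dynamic method dispatch (JVMS 5.4.5).
    void greet() { System.out.println("Sub.greet"); }
}

class Demo {
    public static void main(String[] args) {
        new Sub().callGreet();   // prints "Sup.greet"
    }
}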
Less than five years for a fix, that’s fast compared to some other issues…


Explanation for C# Language Specification: 6.2.4 Explicit reference conversions

As I mentioned in this post, I ran into compiler behaviour that I could not understand.
The code:
IEnumerable<IList<MyClass>> myData = //...getMyData
foreach (MyClass o in myData){}
It compiles but fails at runtime with an InvalidCastException; that part is clear to me.
If I change IList to List as follows, the compiler complains:
IEnumerable<List<MyClass>> myData = //...getMyData
foreach (MyClass o in myData){}
When I use var instead of the class type, as follows, IntelliSense recognizes the correct type:
IEnumerable<List<MyClass>> myData = //...getMyData
foreach (var o in myData){}
My first question was: why doesn't the compiler complain? The answer was that this behaviour follows the C# language specification; see chapter 6.2.4, Explicit reference conversions, page 116.
Read the 4th and 5th statements:
• From any interface-type S to any class-type T, provided T is not sealed or provided T implements S.
• From any interface-type S to any interface-type T, provided S is not derived from T.
The second part of the first statement, provided T implements S, is clear, no doubts. But why may we cast an interface-type S to a class-type T when T does not derive from or implement S?
In which scenario, with a non-empty list, would the code run without throwing an InvalidCastException?
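The cast can succeed whenever the runtime type is a class that both derives from T and implements S. Java has the same rule for interface-to-class casts, so a minimal Java sketch of such a scenario (the type names are hypothetical) looks like this:
interface S { }
class T { }                           // T itself does not implement S
class U extends T implements S { }    // ...but this subclass of T does

class CastDemo {
    public static void main(String[] args) {
        S s = new U();
        // Allowed by the compiler because T is not final, and it succeeds
        // at run time because the object referenced by s is actually a U.
        T t = (T) s;
        System.out.println(t.getClass());   // prints "class U"
    }
}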

NullReferenceException on bool, int, or other stack variable

First of all: the title of this post does not match the actual question I have.
But I am also supplying the answer to the original problem (NullReferenceException on bool), so other users will find its solution here via the chosen title.
I have a class, similar to the following:
ref class CTest
{
bool m_bInit;
void func()
{
if (!m_bInit)
return;
...
}
...
};
Today I had the problem that func crashed with a NullReferenceException at some point, although it had been executed successfully many times before.
The exception occurred in the line if (!m_bInit)!
I know you are all now saying that this is impossible, but it actually was this line. The reason was the following:
I have two different variables, both named oTest, in different places. One of them was initialized: oTest = gcnew CTest. Calling func on that oTest worked fine. The first call of func on the other oTest failed with the exception above. The curious thing is that the crash appears to happen at the check of m_bInit, and the stack trace of the exception says so as well. But this was simply the first place where a member of the uninitialized object (it was still nullptr) was accessed.
Therefore, my advice for other users with the same problem: walk the call stack backwards to find a call on an object that is nullptr/null.
My question now is:
Why does execution not fail at the first call of a function on the oTest that is nullptr?
Why is the function entered and executed up to the first access to a member?
In my case, three functions were actually entered and a couple of variables were created on the stack and on the heap...
This code:
void func()
{
if (!m_bInit)
return;
...
}
could actually be written as:
void func()
{
if (!this->m_bInit)
return;
...
}
Hopefully now you can see where the problem comes from.
A member function call is just a regular function call that includes the this parameter implicitly (it's passed along with the other parameters).
The C++/CLI compiler won't perform a nullptr check when calling non-virtual functions - it emits a call MSIL opcode.
This is not the case in C#, since the C# compiler emits the callvirt MSIL opcode even for non-virtual functions. This opcode forces the JIT to perform a null check on the target instance. The only ways you could get this error in C# are by calling the function via reflection or by generating your own IL that uses the call opcode.
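For comparison, in Java every instance-method call (invokevirtual, invokespecial, or invokeinterface) null-checks the receiver, so the equivalent code fails at the call site rather than inside the method. A minimal sketch:
class CTest {
    boolean init;

    void func() {
        if (!init) return;
        // ...
    }
}

class NullDemo {
    public static void main(String[] args) {
        CTest test = null;
        test.func();   // NullPointerException is thrown here, before func is entered
    }
}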

I cannot understand how Dart Editor analyzes source code

Dart Editor version 1.2.0.release (STABLE). Dart SDK version 1.2.0.
This source code produces a runtime exception.
void main() {
test(new Base());
}
void test(Child child) {
}
class Base {
}
class Child extends Base {
}
I assumed that the analyzer would report something like this:
The argument type 'Base' cannot be assigned to the parameter type 'Child'
But I can only detect this error at runtime, when this exception occurs (after the fact):
Unhandled exception:
type 'Base' is not a subtype of type 'Child' of 'child'.
The analyzer is following the language specification here.
It only warns if the static type of the argument expression is not assignable to the type of the function parameter.
In Dart, an expression of one type is assignable to a variable of another type if either type is a subtype of the other.
That is not a safe type check. It does not find all possible errors. On the other hand, it also does not disallow some correct uses, such as:
Base foo = new Child();
void action(Child c) { ... }
action(foo); // Perfectly correct code at runtime.
Other languages have safe assignment checks, but they also prevent some correct programs. You then have to add (unsafe/runtime checked) cast operators to tell the compiler that you know the program is safe. It's a trade-off where Dart has chosen to be permissive and avoid most casts.
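For comparison, a minimal Java sketch of the same situation: Java's stricter assignability check rejects the call unless you add an explicit, runtime-checked downcast.
class Base { }
class Child extends Base { }

class AssignDemo {
    static void action(Child c) { }

    public static void main(String[] args) {
        Base foo = new Child();
        // action(foo);        // does not compile in Java
        action((Child) foo);   // compiles with an explicit cast; checked at run time
    }
}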
Let's try to be polite and answer the question without any prejudice.
I think I understand what you expected, and here is my take on what the error means:
You are invoking the method with an argument of type Base.
The method is expecting an argument of type Child.
Base is not the same as Child, nor a subtype of it (in fact it is Child that is a subtype of Base).
It is working as expected, as it only makes sense to provide an object of the expected type (or its subtypes, i.e. specialisations).
Update:
After reading your question again, I realised that you are pointing out that the editor does not find the type problem. I assume this is because Dart programs are dynamic, and hence certain checks are not done before runtime.
Hope it helps ;-)

SCJP Sierra Bates Chapter 2 Question 2 Default constructor calls

Background info
I have a query regarding a question from the Sierra & Bates SCJP 6 book, namely Chapter 2, question 2. The answer given is that "compilation fails". However, when I tried this in NetBeans, the code compiled and ran without error. It also produced an output of "D", which was not one of the alternatives. There are some other discussions of this same question in various forums, regarding the need to insert super() etc., but none seem to have recognised that it can compile.
Question
1. I expected the constructor Bottom2(String s) to call the super constructor Top(String s), in which case the output would have been "BD" (which happens to be an option for the question). Why does Top(String s) not get called?
2. As there is a Top constructor, would the default compiler-generated constructor still be implicitly created, i.e. in effect a Top() {} constructor which can be called by Bottom2(String s)? This is not how I understood it to happen; the compiler only creates this default constructor if no other constructor is declared.
3. Is there an error in this question, or is it a carry-over from the Java 5 version of the book, and somehow in Java 6 the compiler can now handle this?
4. Could NetBeans have a means to "solve" the compiler problem? This is quite important, as I am studying for the SCJP and I find that not all the questions can be reproduced in NetBeans, in which case I may come to believe some code works when (for exam purposes) it does not.
Code included for ease of reference.
class Top {
public Top(String s) { System.out.print("B"); }
}
public class Bottom2 extends Top {
public Bottom2(String s) { System.out.print("D"); }
public static void main(String [] args) {
new Bottom2("C");
System.out.println(" ");
}
}
Top doesn't have a no-argument constructor (the compiler generates a default constructor only when a class declares no constructors at all). Therefore, the constructor of Bottom2 must explicitly invoke the super constructor (and pass its argument), but it doesn't, and hence compilation fails.
Indeed, Eclipse Helios says:
Implicit super constructor Top() is undefined. Must explicitly invoke another constructor
and javac says:
cannot find symbol
symbol : constructor Top()
location: class tools.Top
public Bottom2(String s) { System.out.print("D"); }
^
Are you really sure you tried the same code in NetBeans?
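For completeness, the usual fix is an explicit call to the superclass constructor; with that change the code compiles and prints "BD":
class Top {
    public Top(String s) { System.out.print("B"); }
}

public class Bottom2 extends Top {
    public Bottom2(String s) {
        super(s);                    // explicit call, since Top has no no-arg constructor
        System.out.print("D");
    }

    public static void main(String[] args) {
        new Bottom2("C");            // prints "BD"
        System.out.println(" ");
    }
}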

Java: Why does the method type in a .class file contain the return type, not only the signature?

There is a "NameAndType" structure in the constant pool of a .class file.
It is used for dynamic binding.
All methods that a class can "export" are described as "signature + return type".
Like
"getVector()Ljava/util/Vector;"
That breaks my code when the return type of a method in some .jar is changed, even if the new type is narrower.
For example:
I have the following code:
List l = some.getList();
External .jar contains:
public List getList()
Then the external .jar changes the method signature to
public ArrayList getList().
And my code dies at run time with a NoSuchMethodError, because it can't find
getList()Ljava/util/List;
So, I have to recompile my code. I do not have to change it, just recompile exactly the same code!
That also gives the ability to have two methods with the same signature but different return types! The compiler would not accept that, but it is possible via direct bytecode generation.
My question is: why?
Why did they do it?
I have only one idea: to avoid sophisticated type checking at runtime.
You would need to look up the hierarchy and check whether there is a parent with the List interface.
That takes time, and only the compiler has it; the JVM does not.
Am I right?
Thanks.
One reason may be that method overloading (as opposed to overriding) is determined at compile time. Consider the following methods:
public void doSomething(List util) {}
public void doSomething(ArrayList util) {}
And consider code:
doSomething(getList());
If Java allowed the return type to change and did not throw an error, the method called would still be doSomething(List) until you recompiled; then it would be doSomething(ArrayList). That would mean working code could change behavior just because it had been recompiled.
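A minimal sketch of that scenario (the Library and Client names are hypothetical): the overload is chosen at compile time from the declared return type, while the call site records the full descriptor, so recompiling only one side either breaks linkage or silently changes which overload is called.
import java.util.ArrayList;
import java.util.List;

// Hypothetical external library, version 1.
// If a later version changes the return type to ArrayList and only the
// library is recompiled, the client's call site still references
// getList()Ljava/util/List; and fails to link with NoSuchMethodError.
class Library {
    public static List getList() { return new ArrayList(); }
}

class Client {
    static void doSomething(List l)      { System.out.println("List overload"); }
    static void doSomething(ArrayList l) { System.out.println("ArrayList overload"); }

    public static void main(String[] args) {
        // Chosen at compile time from the declared return type of getList():
        // with version 1 this binds to doSomething(List).
        doSomething(Library.getList());
    }
}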