Why does Math.sin() delegate to StrictMath.sin()? - jvm

I was wondering why Math.sin(double) delegates to StrictMath.sin(double) when I came across the question in a Reddit thread. The code fragment in question looks like this (JDK 7u25):
Math.java :
public static double sin(double a) {
    return StrictMath.sin(a); // default impl. delegates to StrictMath
}
StrictMath.java :
public static native double sin(double a);
The second declaration is native, which seems reasonable to me. The documentation of Math states that:
Code generators are encouraged to use platform-specific native libraries or microprocessor instructions, where available (...)
And the question is: isn't the native library that implements StrictMath platform-specific enough? What more can a JIT know about the platform than an installed JRE does (please concentrate only on this very case)? In other words, why isn't Math.sin() native already?

I'll try to wrap up the entire discussion in a single post..
Generally, Math delegates to StrictMath. Obviously, the call can be inlined, so this is not a performance issue.
StrictMath is a final class with native methods backed by native libraries. One might think that native means optimal, but this doesn't necessarily have to be the case. Looking through the StrictMath javadoc, one can read the following:
(...) the definitions of some of the numeric functions in this package require that they produce the same results as certain published algorithms. These algorithms are available from the well-known network library netlib as the package "Freely Distributable Math Library," fdlibm. These algorithms, which are written in the C programming language, are then to be understood as executed with all floating-point operations following the rules of Java floating-point arithmetic.
The way I understand this doc, the native library implementing StrictMath is implemented in terms of the fdlibm library, which is multi-platform and known to produce predictable results. Because it's multi-platform, it can't be expected to be an optimal implementation on every platform, and I believe that this is the place where a smart JIT can fine-tune the actual performance, e.g. by statistical analysis of input ranges and adjusting the algorithms/implementation accordingly.
Digging deeper into the implementation, it quickly turns out that the native library backing StrictMath actually uses fdlibm:
The StrictMath.c source in OpenJDK 7 looks like this:
#include "fdlibm.h"
...
JNIEXPORT jdouble JNICALL
Java_java_lang_StrictMath_sin(JNIEnv *env, jclass unused, jdouble d)
{
    return (jdouble) jsin((double)d);
}
and the sine function is defined in fdlibm/src/s_sin.c, referring in a few places to the __kernel_sin function that is declared directly in the header fdlibm.h.
While I'm temporarily accepting my own answer, I'd be glad to accept a more competent one when it comes up.

Why does Math.sin() delegate to StrictMath.sin()?
The JIT compiler should be able to inline the StrictMath.sin(a) call. So there's little point in creating an extra native method for the Math.sin() case ... and adding extra JIT compiler smarts to optimize the calling sequence, etcetera.
In the light of that, your objection really boils down to an "elegance" issue. But the "pragmatic" viewpoint is more persuasive:
Fewer native calls makes the JVM core and JIT easier to maintain, less fragile, etcetera.
If it ain't broken, don't fix it.
At least, that's how I imagine the Java team would view this.

The question assumes that the JVM actually runs the delegation code. On many JVMs, it won't. Calls to Math.sin(), etc. will potentially be replaced by the JIT with some intrinsic function code (if suitable), transparently. This will typically be done in a way that is unobservable to the end user. This is a common trick for JVM implementers where interesting specializations can happen (even if the method is not tagged as native).
Note, however, that most platforms can't simply drop in a single processor instruction for sin, because such an instruction is only suitable for limited input ranges (e.g. see: Intel discussion).
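To make that concrete, here is a minimal, hedged sketch (my own illustration, not from the thread): it compares Math.sin against StrictMath.sin over a spread of inputs. On a JVM that intrinsifies Math.sin the two may disagree in the last bit for some inputs, because, as I read the Math javadoc, it only requires a result within 1 ulp of the exact value, while StrictMath must reproduce fdlibm exactly. A real benchmark would use JMH; this only illustrates the contract difference.

import java.util.Random;

final class SinComparison {
    public static void main(String[] args) {
        Random rnd = new Random(42);
        long mismatches = 0;
        double maxAbsDiff = 0.0;
        for (int i = 0; i < 1_000_000; i++) {
            double x = (rnd.nextDouble() - 0.5) * 2_000.0; // roughly [-1000, 1000)
            double fast = Math.sin(x);         // may be replaced by a JIT intrinsic
            double strict = StrictMath.sin(x); // must match fdlibm exactly
            if (Double.doubleToLongBits(fast) != Double.doubleToLongBits(strict)) {
                mismatches++;
                maxAbsDiff = Math.max(maxAbsDiff, Math.abs(fast - strict));
            }
        }
        System.out.println("Inputs where Math.sin != StrictMath.sin: " + mismatches);
        System.out.println("Largest absolute difference seen: " + maxAbsDiff);
    }
}

Whether any mismatches actually show up depends on the JVM version and hardware; the point is only that the Math API leaves that freedom open, which is what the next answer refers to.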

The Math API permits non-strict but better-performing implementations of its methods, but does not require them; by default, Math simply uses the StrictMath implementation.

Related

Can the Kotlin compiler optimize away wrapper functions?

I'm new to Kotlin, but I want to try using it for game development, targeting at least Android with OpenGL ES 2.0 and HTML5 with WebGL (with which I am reasonably familiar). Not having to have slightly different versions of my rendering engine's classes/functions for WebGL and GLES20 would obviously be a good thing, but is there a practical way to achieve this in Kotlin without overhead?
I think what I'll have to do is write a class that implements WebGLRenderingContextBase or a clone of it (if a clone is necessary I can just use a delegate for the WebGL implementation) in OpenGL ES 2.0, full of methods like this:
override fun bindBuffer(target: Int, buffer: Int) {
    GLES20.glBindBuffer(target, buffer)
}
I'll write a script to do the bulk of the work.
My question is, is the compiler smart enough to optimise away such wrappers and use GLES20.glBindBuffer etc directly in my class' vtable, or whatever equivalent the JVM has? Presumably inline can't be of any use when calling an overridden method via a reference to an interface or base class.
The Kotlin compiler does not optimize the bytecode to this extent, and it does not need to: the JVM itself is quite good at optimizing the code.
Moreover, inline functions were not designed to be a performance tool in Kotlin, instead they are used for non-local control flow and code transformation that cannot be achieved without inlining.
Actually, the JVM performs a lot of optimizations, sparing compilers from the necessity of optimizing the bytecode they generate too heavily on their side. And inlining is one of the optimizations the JVM can do.
Note, though, that neither the compiler nor the JVM can inline native methods, because native code is of a completely different nature.
The Kotlin compiler, in turn, performs some local optimizations that do not affect the overall structure of the program. One more reason for this is the debugging experience, which is hard to preserve with heavy optimizations. To check the exact Kotlin optimizations, you can try disabling them by adding the -Xno-optimize flag to the free compiler arguments, then look through the generated bytecode or do some benchmarking.
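As a rough, hedged Java illustration of the JVM-side inlining mentioned above (not part of the original answer, and it sidesteps the virtual-dispatch aspect of the question): a thin wrapper like this is a prime inlining candidate for HotSpot once it runs hot, and the diagnostic flags in the comment let you watch the decision being made (flag names assume a HotSpot JVM).

final class WrapperInliningDemo {
    // A thin wrapper, analogous in spirit to the bindBuffer example above.
    static double wrappedSin(double x) {
        return Math.sin(x);
    }

    public static void main(String[] args) {
        double acc = 0.0;
        // Run the wrapper hot enough for the JIT to compile and inline it.
        for (int i = 0; i < 5_000_000; i++) {
            acc += wrappedSin(i * 0.001);
        }
        System.out.println(acc);
        // To see the inlining decisions on HotSpot, run with:
        //   java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining WrapperInliningDemo
    }
}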

Programmatic introspection/reflection - easier in VMs?

What makes programmatic introspection/reflection easier in virtual machines rather than native code?
I read somewhere that VMs by nature allow for better introspection/reflection capabilities but I cannot find more information about it online. Would like to know why.
I believe you mean higher-level languages vs lower-level languages instead of virtual machines.
Higher level languages like Java and C# have implemented reflection and introspection, so there are functions available to the developer to use this information.
Languages like C do not have any pre-built reflection capabilities.
Reflection is very expensive (time-consuming) for any language to run, and should not be used in code that needs to be extremely fast.
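As a concrete (hedged, my own) illustration of the kind of built-in facility meant here: on the JVM, reflection ships in java.lang.reflect, whereas a plain C program has nothing comparable without external tooling or debug information.

import java.lang.reflect.Method;

final class ReflectionDemo {
    public static void main(String[] args) throws Exception {
        // Look a class up by name at runtime and inspect its declared methods.
        Class<?> clazz = Class.forName("java.lang.String");
        for (Method m : clazz.getDeclaredMethods()) {
            System.out.println(m.getName() + " takes " + m.getParameterCount() + " parameter(s)");
        }
        // Methods can also be invoked dynamically.
        Method toUpper = clazz.getMethod("toUpperCase");
        System.out.println(toUpper.invoke("hello")); // prints HELLO
    }
}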
Programmatic introspection essentially means to examine & inspect the current call stack, or the current continuation. (Read Appel's book: Compiling with Continuations).
Few programming languages provide this ability. Scheme's call/cc reifies the current continuation, but gives no standard way to inspect it.
The current call stack might be inspectable (e.g. see GCC __builtin_return_address as an ad hoc example).
Most compilers (but not all) do not have an easy way to give information about the layout of the current call frame (however, the debugger DWARF format contains it).
And optimizing compilers (e.g. for C) usually don't give access to the offset of some local variable in the call frame (even if the compiler computes this offset). BTW, the same stack slot might be reused for different variables; read about register spilling.
See also J. Pitrat's CAIA system: the generated C code organizes the stack so that it is able to inspect it.
In a bytecode VM like JVM or NekoVM or Parrot, introspection is easier because each local variable has a well defined slot in the call frame. This is not the case for most compiled languages (e.g. C or C++) because the compiler is able to reuse (for optimization purposes) some slots, or even put a variable only in some machine register, without even allocating any call stack slot to spill it.
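As a small, hedged example of that point (mine, not the answer's): on the JVM, Java 9's StackWalker lets a method inspect the frames above it in a portable way, something a typical optimized C build cannot offer without debug information.

final class StackIntrospectionDemo {
    static void leaf() {
        // Walk the current call stack and print each frame's class and method.
        StackWalker.getInstance().forEach(frame ->
                System.out.println(frame.getClassName() + "#" + frame.getMethodName()));
    }

    static void middle() {
        leaf();
    }

    public static void main(String[] args) {
        middle(); // prints leaf, middle, main from the top of the stack down
    }
}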

Is C++/CLI an extension of Standard ISO C++?

Is Microsofts C++/CLI built on top of the C++ Standard (C++98 or C++11) or is it only "similar" and has deviations?
Or, specifically, is every ISO standard conforming C++ program (either C++98 or C++11), also a conforming C++/CLI program?
Note: I interpret the Wikipedia article above only comparing C++/CLI to MC++, not to ISO Standard C++.
Sure, it is an extension to C++03 and can compile any compliant C++03 program that doesn't conflict with the added keywords. The only thing it doesn't support are some of the Microsoft extensions to C++, the kind that are fundamentally incompatible with managed code execution like __fastcall and __try. MC++ was their first attempt at it, kept compatible by prefixing all added keywords with underscores. The syntax was rather forced and not well received by their customers, C++/CLI dropped the practice and has a much more intuitive syntax. Stanley Lippman of C++ Primer fame was heavily involved btw.
The compiler can be switched between managed and native code generation on-the-fly with #pragma managed, the product is a .NET mixed-mode assembly that contains both MSIL and native machine code. The MSIL produced from native C++ source is not exactly equivalent to the kind produced by, say, the C# or VB.NET compilers. It doesn't magically become verifiable and doesn't get the garbage collector love, you can corrupt the heap or blow the stack just as easily. And no optimizer love either, the MSIL gets translated to machine code at runtime and is optimized just like normal managed code with the time restrictions inherent in a jitter. Getting too much native C++ code translated to MSIL is a very common mistake, the compiler hides it too well.
C++/CLI is notable for introducing syntax that got later adopted into C++11. Like nullptr, override, final and enum class. Bit of a problem, actually, it begat __nullptr to be able to distinguish between a managed and a native null pointer. They never found a great solution for enum class, you have to declare it public to get a managed enum type. Some C++11 extensions work, few beyond the ones it already had, auto is fine but no lambda expressions, quite a loss in .NET programming. The language has been frozen since 2005.
The C++/CX language extension is notable as well, one that makes writing C++ code for Store and Phone apps palatable. The syntax resembles C++/CLI a great deal, including the ref class and hats in the syntax. But with objects allocated with ref new instead of gcnew, the latter would have been too misleading. Otherwise very different from C++/CLI at runtime, you get pure native code out of C++/CX. The language extension hides the COM interop code that's underneath, automatically reference-counting objects, translating error codes into exceptions and mapping generics. The resemblance to C++/CLI syntax is no accident, they basically perform the same role. Mapping C++-like syntax to a foreign type system.
CLI is a set of extensions for standard C++. CLI has full support of standard C++ and adds something more. So every C++ program will compile with CLI enabled, unless you are using a CLI reserved word; this is the weakness of the extension, because it does not respect the double-underscore rule for extensions (such reserved words have to begin with __).
You can deactivate those extensions in the GUI by:
Configuration Properties -> General -> Common Language Runtime Support
Even Bjarne Stroustrup calls CLI an extension:
On the difficult and controversial question of what the CLI binding/extensions to C++ is to be called, I prefer C++/CLI as a shorthand for "The CLI extensions to ISO C++". Keeping C++ as part of the name reminds people what is the base language and will help keep C++ a proper subset of C++ with the C++/CLI extension
Language extensions could always be called deviations from the standard, because code using them (e.g. the ^ pointer) will not compile with a compiler without CLI support.

C code for interpreting Java HelloWorld byte code

What is a simple C/C++ program which can interpret a Java class file (byte code) that only contains System.out.print() statements? (I had a look at simple open-source JVMs, but they are a bit complex because of their completeness.)
Or, where can I find a well-explained guide to making an interpreter (i.e. an explanation of Java byte code)?
Perhaps you're looking for the Java Virtual Machine Specification.
While your question may seem trivial at first, this is only because of how good a facade the JVM places over the internal aspects of even a simple class like this:
final class WorldGreeter {
    public static void main(final String[] argv) {
        System.out.println("Greetings, Earth!");
    }
}
Reading through the fifth chapter of the specification, namely Loading, Linking, and Initializing, you'll see there is plenty of work a virtual machine must do to run even the most simple programs.
To point out the necessity of all of these complex stages, I'll be assuming you're using the standard Oracle JDK; according to this blog post, you can expect the initialization of System.out to require quite a bit of work -- namely, the loading of several classes and, more importantly, a working JNI layer.
Now, there's no reason you'd need to be using the Oracle JDK implementation... sure, you could use a simpler setup, but most of the structure and work put into the loading, linking, and initialization stages still stands. It's not as easy as your hunch might tell you.
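To get a rough feel for how much machinery sits behind that single println (a hedged illustration of my own, assuming a standard HotSpot/Oracle JDK), you can ask the JVM how many classes it has already loaded, linked, and initialized by the time main runs:

import java.lang.management.ManagementFactory;

final class LoadedClassCounter {
    public static void main(final String[] argv) {
        System.out.println("Greetings, Earth!");
        // The class-loading MXBean reports how many classes the running JVM
        // has loaded so far; on a stock JDK this is typically in the hundreds
        // even for a program this small.
        int loaded = ManagementFactory.getClassLoadingMXBean().getLoadedClassCount();
        System.out.println("Classes currently loaded: " + loaded);
    }
}

Running the same class with java -verbose:class also lists every class as it is loaded, which makes the loading and linking stages from chapter five quite visible.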

Is overriding Objective-C framework methods ever a good idea?

ObjC has a very unique way of overriding methods. Specifically, that you can override functions in OSX's own framework. Via "categories" or "Swizzling". You can even override "buried" functions only used internally.
Can someone provide me with an example where there was a good reason to do this? Something you would use in released commercial software and not just some hacked up tool for internal use?
For example, maybe you wanted to improve on some built in method, or maybe there was a bug in a framework method you wanted to fix.
Also, can you explain why this can best be done with features in ObjC, and not in C++ / Java and the like. I mean, I've heard of the ability to load a C library, but allow certain functions to be replaced, with functions of the same name that were previously loaded. How is ObjC better at modifying library behaviour than that?
If you're extending the question from mere swizzling to actual library modification then I can think of useful examples.
As of iOS 5, NSURLConnection provides sendAsynchronousRequest:queue:completionHandler:, which is a block (/closure) driven way to perform an asynchronous load from any resource identifiable with a URL (local or remote). It's a very useful way to be able to proceed as it makes your code cleaner and smaller than the classical delegate alternative and is much more likely to keep the related parts of your code close to one another.
That method isn't supplied in iOS 4. So what I've done in my project is that, when the application is launched (via a suitable + (void)load), I check whether the method is defined. If not I patch an implementation of it onto the class. Henceforth every other part of the program can be written to the iOS 5 specification without performing any sort of version or availability check exactly as if I was targeting iOS 5 only, except that it'll also run on iOS 4.
In Java or C++ I guess the same sort of thing would be achieved by creating your own class to issue URL connections that performs a runtime check each time it is called. That's a worse solution because it's more difficult to step back from. This way around, if I decide one day to support iOS 5 only, I simply delete the source file that adds my implementation of sendAsynchronousRequest:.... Nothing else changes.
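For what it's worth, here is a rough, hedged Java sketch of that wrapper approach; every name in it is hypothetical, and the Class.forName probe merely stands in for whatever new-vs-old API availability check you would really perform.

// Hypothetical wrapper: every caller goes through this class, and the
// availability check is repeated on each call instead of being patched in
// once at load time as in the Objective-C version described above.
final class UrlLoader {
    static byte[] load(String url) {
        if (newApiAvailable()) {
            return loadWithNewApi(url);    // hypothetical modern code path
        }
        return loadWithLegacyApi(url);     // hypothetical fallback path
    }

    private static boolean newApiAvailable() {
        try {
            // Stand-in check: does some class from the newer platform exist?
            Class.forName("java.net.http.HttpClient");
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    private static byte[] loadWithNewApi(String url) { /* ... */ return new byte[0]; }

    private static byte[] loadWithLegacyApi(String url) { /* ... */ return new byte[0]; }
}

Retiring the fallback later means editing this class rather than just deleting one self-contained shim, which is the "more difficult to step back from" point being made above.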
As for method swizzling, the only times I see it suggested are where somebody wants to change the functionality of an existing class and doesn't have access to the code in which the class is created. So you're usually talking about trying to modify logically opaque code from the outside by making assumptions about its implementation. I wouldn't really support that as an idea on any language. I guess it gets recommended more in Objective-C because Apple are more prone to making things opaque (see, e.g. every app that wanted to show a customised camera view prior to iOS 3.1, every app that wanted to perform custom processing on camera input prior to iOS 4.0, etc), rather than because it's a good idea in Objective-C. It isn't.
EDIT: so, in further exposition — I can't post full code because I wrote it as part of my job, but I have a class named NSURLConnectionAsyncForiOS4 with an implementation of sendAsynchronousRequest:queue:completionHandler:. That implementation is actually quite trivial, just dispatching an operation to the nominated queue that does a synchronous load via the old sendSynchronousRequest:... interface and then posts the results from that on to the handler.
That class has a + (void)load, which is the class method you add to a class that will be issued immediately after that class has been loaded into memory, effectively as a global constructor for the metaclass and with all the usual caveats.
In my +load I use the Objective-C runtime directly via its C interface to check whether sendAsynchronousRequest:... is defined on NSURLConnection. If it isn't then I add my implementation to NSURLConnection, so from henceforth it is defined. This explicitly isn't swizzling — I'm not adjusting the existing implementation of anything, I'm just adding a user-supplied implementation of something if Apple's isn't available. Relevant runtime calls are objc_getClass, class_getClassMethod and class_addMethod.
In the rest of the code, whenever I want to perform an asynchronous URL connection I just write e.g.
[NSURLConnection sendAsynchronousRequest:request
                                   queue:[self anyBackgroundOperationQueue]
                       completionHandler:
    ^(NSURLResponse *response, NSData *data, NSError *blockError)
    {
        if(blockError)
        {
            // oh dear; was it fatal?
        }
        if(data)
        {
            // hooray! You know, unless this was an HTTP request, in
            // which case I should check the response code, etc.
        }
        /* etc */
    }];
So the rest of my code is just written to the iOS 5 API and neither knows nor cares that I have a shim somewhere else to provide that one microscopic part of the iOS 5 changes on iOS 4. And, as I say, when I stop supporting iOS 4 I'll just delete the shim from the project and all the rest of my code will continue not to know or to care.
I had similar code to supply an alternative partial implementation of NSJSONSerialization (which dynamically created a new class in the runtime and copied methods to it); the one adjustment you need to make is that references to NSJSONSerialization elsewhere will be resolved once at load time by the linker, which you don't really want. So I added a quick #define of NSJSONSerialization to NSClassFromString(#"NSJSONSerialization") in my precompiled header. Which is less functionally neat but a similar line of action in terms of finding a way to keep iOS 4 support for the time being while just writing the rest of the project to the iOS 5 standards.
There are both good and bad cases. Since you didn't mention anything in particular, these examples will be all over the place.
It's perfectly normal (good idea) to override framework methods when subclassing:
When subclassing NSView (from the AppKit.framework), it's expected that you override drawRect:(NSRect). It's the mechanism used for drawing views.
When creating a custom NSMenu, you could override insertItemWithTitle:action:keyEquivalent:atIndex: and any other methods...
The main thing when subclassing is whether or not your behaviour completely re-defines the old behaviour... or extends it (in which case your override eventually calls [super ...];).
That said, however, you should always steer clear of using (and overriding) any private API methods (those normally have an underscore prefix in their name). This is a bad idea.
You also should not override existing methods via categories. That's also bad. It has undefined behaviour.
If you're talking about categories, you don't override methods with them (because there is no way to call the original method, the way you call super when subclassing); you can only replace them completely with your own, which makes the whole idea mostly pointless. Categories are only useful for safely extending functionality, and that's the only use I have ever seen (and which is a very good, an excellent idea), although indeed they can be used for dangerous things.
If you mean overriding by subclassing, that is not unique. But in Obj-C you can override everything, even private undocumented methods, not just what was declared 'overridable' as in other languages. Personally, I think it's nice; I remember that in Delphi and C++ I used to "hack" access to private and protected members to work around an internal bug in a framework. This is not a good idea, but at some moments it can be a life saver.
There is also method swizzling, but that's not a standard language feature; it's a hack. Hacking undocumented internals is rarely a good idea.
And regarding "how can you explain why this can best be done with features in ObjC", the answer is simple: Obj-C is dynamic, and this freedom is common to almost all dynamic languages (JavaScript, Python, Ruby, Io, and many more). Unless artificially disabled, every dynamic language has it.
Refer to the Wikipedia page on dynamic languages for a longer explanation and more examples. For example, an even more miraculous thing possible in Obj-C and other dynamic languages is that an object can change its type (class) in place, without recreation.