eval in Objective-C [duplicate]

Possible Duplicate:
Is there something like Matlab's eval statement in Objective-C 2.0?
Is there an eval function (as in Lisp, JavaScript, Python, Ruby...) in Objective-C?
What I mean by eval is a function that can take in arbitrary Objective-C code (possibly containing class definitions, side effects, introspection, etc.) as a string (not a block or an NSInvocation or IMP or something like that), and evaluate each expression, respecting the current runtime environment state, side effects, class definitions, etc.
If not, is it possible to implement within the confines of the existing runtime?

Neither the language nor Apple's frameworks directly support such a thing. However, one of the goals of LLVM is to be an embeddable compiler suite. I'm pretty sure it can generate executable code right into memory. The hard part would be providing that code with access to the pre-existing environment of the calling code, for example compiling code that references a local variable of the caller.
(Mind you, this approach is forbidden for the iOS App Store, but it could maybe be workable on Mac OS X.)
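To make the idea concrete, here is a minimal, hedged sketch of the cruder "generate, compile, load" variant on Mac OS X: instead of embedding LLVM in-process, it writes a source file, shells out to clang, and loads the result with dlopen. The file paths and the snippet function are invented for illustration, and error handling is omitted.
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* A "poor man's eval": write a snippet, compile it into a dylib,
       load it, and call it. */
    FILE *src = fopen("/tmp/snippet.c", "w");
    fprintf(src, "double snippet(double x) { return x * x + 1.0; }\n");
    fclose(src);

    system("clang -dynamiclib -o /tmp/snippet.dylib /tmp/snippet.c");

    void *lib = dlopen("/tmp/snippet.dylib", RTLD_NOW);
    double (*snippet)(double) = (double (*)(double))dlsym(lib, "snippet");
    printf("%f\n", snippet(2.0)); /* prints 5.000000 */

    dlclose(lib);
    return 0;
}
An embedded LLVM/Clang could do the compile step in-process and target memory directly, but the hard part mentioned above remains: the freshly built code has no automatic access to the caller's local variables.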

Absolutely not. Objective-C is a fully compiled language. Only interpreted languages can do that sort of thing.

No. Code eval is a feature of dynamic languages. Although Objective-C has dynamic features, and even a rich runtime underpinning Cocoa, it is still (generally) considered a static language.

Related

Programmatic introspection/reflection - easier in VMs?

What makes programmatic introspection/reflection easier in virtual machines than in native code?
I read somewhere that VMs by nature allow for better introspection/reflection capabilities, but I cannot find more information about it online. I would like to know why.
I believe you mean higher-level languages vs lower-level languages instead of virtual machines.
Higher level languages like Java and C# have implemented reflection and introspection, so there are functions available to the developer to use this information.
Languages like C do not have any pre-built reflection capabilities.
Reflection is very expensive (time-consuming) for any language to run, and should not be used in code that needs to be extremely fast.
Programmatic introspection essentially means examining and inspecting the current call stack, or the current continuation. (Read Appel's book: Compiling with Continuations.)
Few programming languages provide this ability. Scheme's call/cc reifies the current continuation, but gives no standard way to inspect it.
The current call stack might be inspectable (e.g. see GCC __builtin_return_address as an ad hoc example).
Most compilers (but not all) do not have an easy way to give information about the layout of the current call frame (however, the DWARF debugging format used by debuggers contains it).
And optimizing compilers (e.g. for C) usually don't give access to the offset of some local variable in the call frame (even if the compiler computes this offset). BTW, the same stack slot might be reused for different variables; read about register spilling.
See also J. Pitrat's CAIA system, where the generated C code organizes the stack so that it can inspect it.
In a bytecode VM like the JVM, NekoVM or Parrot, introspection is easier because each local variable has a well-defined slot in the call frame. This is not the case for most compiled languages (e.g. C or C++), because the compiler is able to reuse some slots for optimization purposes, or even keep a variable only in a machine register without ever allocating a call-stack slot to spill it.
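As a small, compiler-specific illustration of the __builtin_return_address point above (GCC/Clang only; a sketch, not a portable introspection facility):
#include <stdio.h>

/* Peek at the return address of the current frame. This is roughly as much
   ad hoc stack introspection as C offers without unwind tables or DWARF. */
void report_caller(void) {
    void *ret = __builtin_return_address(0);
    printf("will return to %p\n", ret);
}

int main(void) {
    report_caller();
    return 0;
}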

Is C++/CLI an extension of Standard ISO C++?

Is Microsoft's C++/CLI built on top of the C++ Standard (C++98 or C++11), or is it only "similar", with deviations?
Or, specifically, is every ISO standard conforming C++ program (either C++98 or C++11), also a conforming C++/CLI program?
Note: I interpret the Wikipedia article above as only comparing C++/CLI to MC++, not to ISO Standard C++.
Sure, it is an extension to C++03 and can compile any compliant C++03 program that doesn't conflict with the added keywords. The only things it doesn't support are some of the Microsoft extensions to C++, the kind that are fundamentally incompatible with managed code execution, like __fastcall and __try. MC++ was their first attempt at it, kept compatible by prefixing all added keywords with underscores. The syntax was rather forced and not well received by their customers; C++/CLI dropped the practice and has a much more intuitive syntax. Stanley Lippman of C++ Primer fame was heavily involved, btw.
The compiler can be switched between managed and native code generation on-the-fly with #pragma managed, the product is a .NET mixed-mode assembly that contains both MSIL and native machine code. The MSIL produced from native C++ source is not exactly equivalent to the kind produced by, say, the C# or VB.NET compilers. It doesn't magically become verifiable and doesn't get the garbage collector love, you can corrupt the heap or blow the stack just as easily. And no optimizer love either, the MSIL gets translated to machine code at runtime and is optimized just like normal managed code with the time restrictions inherent in a jitter. Getting too much native C++ code translated to MSIL is a very common mistake, the compiler hides it too well.
C++/CLI is notable for introducing syntax that was later adopted into C++11, like nullptr, override, final and enum class. Bit of a problem, actually: it begat __nullptr to be able to distinguish between a managed and a native null pointer. They never found a great solution for enum class; you have to declare it public to get a managed enum type. Some C++11 additions work, though few beyond the ones it already had: auto is fine, but there are no lambda expressions, quite a loss in .NET programming. The language has been frozen since 2005.
The C++/CX language extension is notable as well, one that makes writing C++ code for Store and Phone apps palatable. The syntax resembles C++/CLI a great deal, including the ref class and hats in the syntax. But with objects allocated with ref new instead of gcnew, the latter would have been too misleading. Otherwise very different from C++/CLI at runtime, you get pure native code out of C++/CX. The language extension hides the COM interop code that's underneath, automatically reference-counting objects, translating error codes into exceptions and mapping generics. The resemblance to C++/CLI syntax is no accident, they basically perform the same role. Mapping C++-like syntax to a foreign type system.
CLI is a set of extensions for standard C++. CLI has full support of standard C++ and adds something more, so every C++ program will compile with CLI enabled, unless it uses a CLI reserved word. That is the weakness of the extension, because it does not respect the double-underscore rule for extensions (such reserved words are supposed to begin with __).
You can deactivate those extensions in the GUI by:
Configuration Properties -> General -> Common Language Runtime Support
Even Bjarne Stroustrup calls CLI an extension:
On the difficult and controversial question of what the CLI binding/extensions to C++ is to be called, I prefer C++/CLI as a shorthand for "The CLI extensions to ISO C++". Keeping C++ as part of the name reminds people what is the base language and will help keep C++ a proper subset of C++ with the C++/CLI extension
Language extensions could always be called deviations from the standard, because code that uses them will not compile with a compiler lacking CLI support (e.g. the ^ pointer).

What is the formal comp. sci. name of this language property?

As a self-taught programmer, my definitions get fuzzy sometimes.
I'm very used to C and ObjC. In both of those your code must adhere to the language "structure". You can only do certain things in certain places. As an example, this is an error:
// beginning of file
NSLog(@"Hello world!"); // can't do this
@implementation MYClass
...
@end
However, in Ruby, anything you put anywhere is executed as the interpreter goes through it. So what is the difference between Ruby and Objective-C that allows this?
At first I thought it was that one was interpreted and the other compiled. Then I read some SO posts and the wikipedia definitions. Interpreted or compiled is a property of the implementation not the language. So that would mean there could (theoretically) be an interpreted implementation of Objective-C? In that case, the fact that a statement cannot be outside the implementation can't be a property of compiled languages, and vice-versa if there was a compiled implementation of Ruby. Or am I wrong in assuming that different implementations of a language would work the same way?
I'm not sure there's a technical term for it, but in most programming languages the context of the statement is extremely important.
Ruby has a concept of a root or main context where code is allowed. Other scripting languages follow this convention, presumably made popular by languages like Perl which allowed for very concise programming.
This allows things like this to be a complete and valid program:
print "Hello world!\n"
In other languages you need to define an entry point, such as a main routine, that is executed instead. Arbitrary code is not really allowed at the top level, which instead is reserved for things like function, type, constant, structure and class definitions.
A language like Ruby has a lot of control over the order in which the code is executed. C, by comparison, is usually composed of separate source files that are then linked together, where there's no inherent order to the way things are linked. All the modules are simply assembled into the final library or executable. This is why the main entry point is required, it defines which function to run first.
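For contrast with the Ruby one-liner above, here is the shortest comparable C program. Statements cannot appear at file scope, so even a one-line greeting has to live inside the main entry point:
#include <stdio.h>

/* Only declarations and definitions may appear at file scope in C;
   executable statements must sit inside a function, and execution
   starts at main. */
int main(void) {
    printf("Hello world!\n");
    return 0;
}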
In short, it boils down to syntax, context, and language design considerations.
Ruby hides lots of stuff.
Ruby is OO like C++, Objective-C and Java, and it has a main like C, but you don't see it.
puts(42) is a method call. It is called on the top-level object, which is called main. You can see that by typing puts self.
If you don't specify the receiver (receiver.method()) Ruby will use the implicit one, main.
Check available methods:
puts Object.private_methods.sort
Why can you put everything anywhere?
C and C++ look for a function called main, and when they find it, it is executed.
Ruby, on the other hand, doesn't need a main or any other method/class to run first.
It executes code from the first line until it meets the end of the file (or __END__ on a separate line).
class Strongman
  puts "I'm the best!"
end
is just syntactic sugar for Class.new method:
Strongman = Class.new do
  puts "I'm the best!"
end
The same goes for 'module'.
for calls each and returns some kind of object, so you may think of it as something similar to a method call.
a = for i in 1..12; 42;end
puts a
# 1..12
In the end, it doesn't matter whether it is a method call or some kind of structure like C's int main(). The programming language decides what it should run first.

Why does Math.sin() delegate to StrictMath.sin()?

I started wondering why Math.sin(double) delegates to StrictMath.sin(double) when I came across the issue in a Reddit thread. The code fragment in question looks like this (JDK 7u25):
Math.java:
public static double sin(double a) {
    return StrictMath.sin(a); // default impl. delegates to StrictMath
}
StrictMath.java:
public static native double sin(double a);
The second declaration is native, which seems reasonable to me. The doc of Math states that:
Code generators are encouraged to use platform-specific native libraries or microprocessor instructions, where available (...)
And the question is: isn't the native library that implements StrictMath platform-specific enough? What more can the JIT know about the platform than an installed JRE does (please concentrate only on this very case)? In other words, why isn't Math.sin() native already?
I'll try to wrap up the entire discussion in a single post.
Generally, Math delegates to StrictMath. Obviously, the call can be inlined, so this is not a performance issue.
StrictMath is a final class with native methods backed by native libraries. One might think that native means optimal, but this doesn't necessarily have to be the case. Looking through the StrictMath javadoc, one can read the following:
(...) the definitions of some of the numeric functions in this package require that they produce the same results as certain published algorithms. These algorithms are available from the well-known network library netlib as the package "Freely Distributable Math Library," fdlibm. These algorithms, which are written in the C programming language, are then to be understood as executed with all floating-point operations following the rules of Java floating-point arithmetic.
The way I understand this doc is that the native library implementing StrictMath is implemented in terms of the fdlibm library, which is multi-platform and known to produce predictable results. Because it's multi-platform, it can't be expected to be an optimal implementation on every platform, and I believe that this is the place where a smart JIT can fine-tune the actual performance, e.g. by statistical analysis of input ranges and adjusting the algorithms/implementation accordingly.
Digging deeper into the implementation, it quickly turns out that the native library backing StrictMath actually uses fdlibm:
StrictMath.c source in OpenJDK 7 looks like this:
#include "fdlibm.h"
...
JNIEXPORT jdouble JNICALL
Java_java_lang_StrictMath_sin(JNIEnv *env, jclass unused, jdouble d)
{
    return (jdouble) jsin((double)d);
}
and the sine function is defined in fdlibm/src/s_sin.c, referring in a few places to the __kernel_sin function that comes directly from the header fdlibm.h.
While I'm temporarily accepting my own answer, I'd be glad to accept a more competent one when it comes up.
Why does Math.sin() delegate to StrictMath.sin()?
The JIT compiler should be able to inline the StrictMath.sin(a) call. So there's little point creating an extra native method for the Math.sin() case ... and adding extra JIT compiler smarts to optimize the calling sequence, etcetera.
In the light of that, your objection really boils down to an "elegance" issue. But the "pragmatic" viewpoint is more persuasive:
Fewer native calls makes the JVM core and JIT easier to maintain, less fragile, etcetera.
If it ain't broken, don't fix it.
At least, that's how I imagine how the Java team would view this.
The question assumes that the JVM actually runs the delegation code. On many JVMs, it won't. Calls to Math.sin(), etc.. will potentially be replaced by the JIT with some intrinsic function code (if suitable) transparently. This will typically be done in an unobservable way to the end user. This is a common trick for JVM implementers where interesting specializations can happen (even if the method is not tagged as native).
Note however that most platforms can't simply drop in a single processor instruction for sin, because the hardware instruction typically isn't accurate enough across the full input range (e.g. see the Intel discussion of fsin accuracy).
The Math API permits non-strict but better-performing implementations of its methods, but does not require them; by default, Math simply uses the StrictMath implementation.

Compiling Objective-C into runtime form? [duplicate]

If I understand correctly, besides the fact that the Objective-C language is a strict superset of "clean" C, the added OOP paradigm is implemented by a set of functions partially described in the Objective-C Runtime Reference.
Therefore, I'd expect it to be possible to somehow compile Objective-C code into an intermediate C/C++ file (maybe with some asm inserts).
Is this generally possible?
You could use the clang rewriter to convert to C++. Not aware of a way to go to C though.
The rewriter is available via the "-rewrite-objc" command line option.
As far as I know, there is no software that preprocesses Objective-C code into intermediate C code.
But you could write your Objective-C program entirely in C by calling directly into the Objective-C runtime. The trouble is just that the code might vary between implementations or even different versions of the same runtime.
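As a rough sketch of what that looks like on Apple's runtime (where objc_msgSend has to be cast to an appropriate function-pointer type before calling), the plain-C equivalent of [[NSObject alloc] init] is roughly:
#include <objc/runtime.h>
#include <objc/message.h>
#include <stdio.h>

int main(void) {
    /* id obj = [[NSObject alloc] init]; written against the runtime's C API. */
    Class nsObject = objc_getClass("NSObject");
    id obj = ((id (*)(id, SEL))objc_msgSend)((id)nsObject, sel_registerName("alloc"));
    obj = ((id (*)(id, SEL))objc_msgSend)(obj, sel_registerName("init"));

    printf("created an instance of %s\n", class_getName(object_getClass(obj)));

    /* [obj release]; assuming manual reference counting. */
    ((void (*)(id, SEL))objc_msgSend)(obj, sel_registerName("release"));
    return 0;
}
Built with something like clang demo.c -lobjc, this runs, but every message send becomes this verbose and leans on runtime details that may differ between runtime versions.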
The question is, is it actually worth the trouble?