How should I implement Obj C-like headers in Lua?

Right now I have an import(a) function that, in short, dofile's a header from a .framework, like this:
import("<Kakao/KARect>") => dofile("/System/Library/Frameworks/Kakao.framework/Headers/KARect.lua")
And in KARect.lua for example I have:
KARect = {}
function KARect:new(_x, _y, _width, _height, _colorBack)
    local new = setmetatable({}, {__index = self})
    new.id = KAEntities:generateID()
    ...
    return new
end
function KARect:draw()
    ...
end
After some time I thought about reworking this system and making "headers" work like typical Lua modules loaded through require(), so the function would do e.g.:
import("<Kakao/KARect>") => package.path = "/System/Library/Frameworks/Kakao.framework/Headers/?.lua"; KARect = require("KARect")
and file will contain:
local KARect = {}
...
return KARect
Because headers should not contain anything but the classes and their names? I'm getting confused thinking about it, as I have never used Obj-C :s
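A minimal sketch of the reworked import, assuming module-style headers as above (appending to package.path instead of replacing it, and assigning the returned table to a global, are choices made purely for illustration):
function import(name)
    local framework, header = name:match("^<(.-)/(.-)>$")
    assert(framework and header, "import expects a name like <Framework/Header>")
    local headers = "/System/Library/Frameworks/" .. framework .. ".framework/Headers/?.lua"
    if not package.path:find(headers, 1, true) then
        package.path = package.path .. ";" .. headers -- extend the search path
    end
    _G[header] = require(header) -- require() caches, so importing twice is harmless
end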

I never used Obj C
Then why are you trying to implement its headers in a language that does not use headers at all?
Header! What is a header?
Header files in C-like languages store more than just names. They store constants and macros, argument and return types of functions and class methods, and structure and class fields. In essence, the contents of a header file are forward declarations. They came into existence due to the need to make the same forward declarations across many files.
I don't know what additional rules and features were added to header files in Obj-C, but you can get a general understanding of what they do from the following links: 1, 2, 3, 4, with the last one being the most spot-on.
Answer to the question presented
Lua is a dynamically typed, interpreted language. It does not do compile-time type checks and, typically, Lua programs can and should be structured in a way that does not need forward declarations across files. So there is no meaningful way for a programmer to create header files, nor for the Lua bytecode generator and interpreter to use them.
Lua does not have classes at all. The code you've posted is syntactic sugar for assigning a function with a slightly different signature to a table that imitates a class:
KARect.new = function(first_arg_is_self, _x, _y, _width, _height, _colorBack)
    local new = setmetatable({}, {__index = first_arg_is_self})
    return new
end
There are no declarations here, only the creation of an anonymous function and its assignment to a field in a table. Other parts of the program do not need to know anything in advance about a particular field, variable, or function (which is itself just stored in a variable), unlike in C.
So, with no declarations, there is nothing to separate from the implementation. You can of course first list the fields of the class-table and make dummy assignments to them, but, again, Lua will have no use for those. If you want to give hints to humans, it is probably better to write a dedicated manual or put comments in the implementation.
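For example, such a hint block is purely documentation; Lua neither checks nor uses it (the field names here just mirror the question's KARect):
KARect = {}
-- Documentation only; Lua does not verify any of this:
--   fields : id, x, y, width, height, colorBack (set in KARect:new)
--   methods: new(_x, _y, _width, _height, _colorBack), draw()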
Lua does have situations where forward declarations are needed, namely to reference local functions that are defined later. But this situation does not arise in object-oriented code, as all methods are accessed through a reference to the object, and by the time the first object is created, the class table is usually fully constructed.
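For completeness, the local-function case where a forward declaration is genuinely needed looks like this (a toy example of mutual recursion):
local is_even, is_odd -- forward declarations: both names must exist before the bodies use them

is_even = function(n)
    if n == 0 then return true end
    return is_odd(n - 1)
end

is_odd = function(n)
    if n == 0 then return false end
    return is_even(n - 1)
end

print(is_even(10)) --> true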

Related

Inherit from table returned from function

There is an API-provided function, let's call it createBase, which returns a table (object). I want to add methods to this table, but I can't just do x = createBase() and then function x:foo(), because I want another function similar to createBase, call it createExtended. It might be easier to explain with the code I have so far:
import api --I don't know how you'd do this in vanilla Lua, I'd use os.loadAPI("api") but that's ComputerCraft-specific, I think
Extended = {}
function Extended:foo()
    print("foo from extended")
end
function createExtended(params)
    x = api.createBase(params)
    Extended.__index = x
    return Extended --this is obviously wrong: you can't return a class and expect it to be an object
end
Of course, this doesn't work, but I don't know how I might make it work either. Let's assume the table returned by createBase has a function called bar which just prints "bar from base". With this test code, the following outputs are given:
e = createExtended()
e.foo() --prints "foo from extended"
e.bar() --nil, therefore error
How can I make this possible, short of defining function x.bar() inside createExtended?
Thanks in advance.
The very simplest way is to attach the method to it directly, instead of using a metatable.
local function extend(super_instance)
    super_instance.newMethod = newMethod -- newMethod is whatever function you want to add
    return super_instance
end
local function createExtended(...)
    return extend(createSuper(...))
end
This will work, unless your superclass uses __newindex (for example, preventing you from writing to unknown properties/methods), or iterates over the keys using pairs or next, since it will now have an additional key.
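As a self-contained sanity check, here is the same idea with a stand-in createBase and the foo/bar names from the question (the real API function would of course replace the stub):
-- Stand-in for the API-provided function from the question.
local function createBase()
    return { bar = function() print("bar from base") end }
end

local function extend(instance)
    instance.foo = function() print("foo from extended") end
    return instance
end

local function createExtended(...)
    return extend(createBase(...))
end

local e = createExtended()
e.foo() -- prints "foo from extended"
e.bar() -- prints "bar from base"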
If for some reason you cannot modify the object, you will instead have to 'wrap' it up.
You could make a new instance which "proxies" all of its methods, properties, and operators to another instance, except that it adds additional fields and methods.
local function extend(super_instance)
    local extended_instance = {newMethod = newMethod}
    -- and also `__add`, `__mul`, etc. as needed
    return setmetatable(extended_instance, {__index = super_instance, __newindex = super_instance})
end
local function createExtended(...)
    return extend(createSuper(...))
end
This will work for simple classes, but won't work for all uses:
Table iteration like pairs and next won't find the keys from the original table, since they're not actually there. If the superclass inspects the metatable of the object it is given (or if the superclass is actually a userdata), it will also not work, since you'll find the extension metatable instead.
However, many pure-Lua classes will not do those things, so this is still a fairly simple approach that will probably work for you.
You could also do something similar to Go: instead of having a way to 'extend' a class, you simply embed that class as a field and offer convenience methods on the wrapping class that just call the corresponding methods on the embedded class.
This is slightly complicated by how 'methods' work in Lua. You can't tell if a property is a function-that-is-a-property or if it's actually a method. The code below assumes that all of the properties with type(v) == "function" are actually methods, which will usually be true, but may not actually be for your specific case.
In the worst case, you could just manually maintain the list of methods/properties you want to 'proxy', but depending on how many classes you need to proxy and how many properties they have, that could become unwieldy.
local function extend(super_instance)
    return setmetatable({
        newMethod = newMethod, -- also could be provided via a more complicated __index
    }, {
        __index = function(self, k)
            -- Proxy everything but `newMethod` to `super_instance`.
            local super_field = super_instance[k]
            if type(super_field) == "function" then
                -- Assume the access is for getting a method, since it's a function.
                return function(self2, ...)
                    assert(self == self2) -- assume it's being called like a method
                    return super_field(super_instance, ...)
                end
            end
            return super_field
        end,
        -- similar __newindex and __add, etc. if necessary
    })
end
local function createExtended(...)
    return extend(createSuper(...))
end

Variable Encapsulation in Case Statement

While modifying an existing program's CASE statement, I had to add a second block where some logic is repeated to set NetWeaver portal settings. This is done by setting values in a local variable, then assigning that variable to a CHANGING parameter. I copied over the code and did a Pretty Print, expecting the compiler to complain about the unknown variable. To my surprise, however, this code actually compiles just fine:
CASE i_actionid.
  WHEN 'DOMIGO'.
    DATA: ls_portal_actions TYPE powl_follow_up_sty.
    CLEAR ls_portal_actions.
    ls_portal_actions-bo_system = 'SAP_ECC_Common'.
    " [...]
    c_portal_actions = ls_portal_actions.
  WHEN 'EBELN'.
    ls_portal_actions-bo_system = 'SAP_ECC_Common'.
    " [...]
    C_PORTAL_ACTIONS = ls_portal_actions.
ENDCASE.
As in every other programming language I have seen, the DATA: declaration in the first WHEN block should be encapsulated and available only inside that block. Does SAP ignore this encapsulation and make the variable available in the entire CASE statement? Is this documented anywhere?
Note that this code compiles just fine and double-clicking the local variable in the second switch takes me to the data declaration in the first. I have however not been able to test that this code executes properly as our testing environment is down.
In short, you cannot do this. You have the following scopes in an ABAP program within which to declare variables (from local to global):
Form routine: all variables between FORM and ENDFORM
Method: all variables between METHOD and ENDMETHOD
Class: all variables between CLASS and ENDCLASS, but only in the CLASS DEFINITION section
Function module: all variables between FUNCTION and ENDFUNCTION
Program/global: anything not in one of the above is global in the current program, including variables in PBO and PAI modules
Having the ability to define variables locally in a loop or an IF block is really useful, but unfortunately it is not possible in ABAP. The closest you will come to publicly available documentation on this is on help.sap.com: Local Data in the Subroutine
As for the compile process, do not assume that ABAP will optimize out any variables you do not use; it won't. Use the Code Inspector to find and remove them yourself. Since ABAP works the way it does, I personally define all my variables at the start of a modularization unit rather than inline with other code, and I have gone so far as to modify the Pretty Printer to move any inline definitions to the top of the current scope.
Your assumption that a CASE statement defines its own scope of variables in ABAP is simply wrong (and would be wrong for a number of other programming languages as well). It's a bad idea to litter your code with variable declarations because that makes it awfully hard to read and to maintain, but it is possible. The DATA statements - as well as many other declarative statements - are only evaluated at compile time and are completely ignored at runtime. You can find more information about the scopes in the online documentation.
Inline variable declarations are now possible with the newest version of SAP NetWeaver. Here is the link to the documentation: DATA - inline declaration. There are also some guidelines on good and bad usage of this new feature.
Here is a quote from this site:
A declaration expression with the declaration operator DATA declares a variable var used as an operand in the current writer position. The declared variable is visible statically in the program from DATA(var) and is valid in the current context. The declaration is made when the program is compiled, regardless of whether the statement is actually executed.
Personally, I have not had time to check it out yet, for lack of access to such a system.

Naming convention to distinguish in-place mutation/creation function

When I wrote some general-purpose utility code, I found that it's good to have both an in-place mutator and a new-object-creating member function for the same piece of functionality.
For example, a class which represents a path in a file system may have "normalize" functionality. The path object may either mutate itself into the normalized form, or return a new, normalized path object.
class path {
    ...
    void normalize_itself();
    path get_new_normalized_path();
    ...
};
I've tried several conventions for this, but none of them are fully satisfying.
'normalize!' for the in-place function, as in Ruby - good, but most other languages don't allow special characters in identifiers.
'normalize_ip' for the in-place function - since most of my usages are in-place, I think it's too ugly.
'get_normalized' for the non-in-place function - acceptable, but it can be confused with a simple getter for a member.
'normalized' for the non-in-place function - sometimes not uniform, and easily confused with its in-place counterpart.
writing the non-in-place function as a free function - loses the IDE's IntelliSense assistance and sometimes has visibility issues.
I'd like to find a good, practical convention to distinguish the two functions.
I think normalize for your mutator and grab_normalized for your object creator would work.
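For illustration, that pairing could look like this (sketched in Lua for brevity; the path representation and the normalization rule are made up for the example):
local Path = {}
Path.__index = Path

function Path.new(s)
    return setmetatable({ value = s }, Path)
end

-- Mutator: normalizes this path in place.
function Path:normalize()
    self.value = self.value:gsub("//+", "/") -- toy normalization: collapse repeated slashes
end

-- Creator: returns a new, normalized path and leaves self untouched.
function Path:grab_normalized()
    return Path.new((self.value:gsub("//+", "/")))
end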

Is custom generalized object serialization possible in Matlab?

I would like to save my objects as XML so that other applications can read from and write to the data files -- something that is very difficult with Matlab's binary mat files.
The underlying problem I'm running into is that Matlab's equivalent of reflection (which I've used to do similar things in .NET) is not very capable with respect to private properties. Matlab's struct(object) function offers a workaround for writing XML from an object because, while I can't do
x = myInstance.myPrivateProperty;
...I can do
props = struct(myInstance);
x = props.myPrivateProperty;
So, I can create a pure (contains no objects) struct from any object using the code below, and then it's trivial to write an XML file using a pure struct.
But, is there any way to reverse the process? That is, to create an object instance using the data saved by the code below (that data contains a list of all non-Dependent, non-Constant, non-Transient properties of the class instance, and the name of the class)? I was thinking about having all my objects inherit from a class called XmlSerializable which would accept a struct as a single argument in the constructor and then assign all the values contained in the struct to the correspondingly-named properties. But, this doesn't work because if MyClass inherits XmlSerializable, the code in XmlSerializable isn't allowed to set the private properties of MyClass (related to How can I write generalized functions to manipulate private properties?). This would be no problem in .NET (See Is it possible to set private property via reflection?), but I'm having trouble figuring it out in Matlab.
This code creates a struct that contains all of the state information for the object(s) passed in, yet contains no object instances. The resulting struct can be trivially written to XML:
function s = toPureStruct(thing)
  if isstruct(thing)
    s = collapseObjects(thing);
    s.classname = 'struct';
  elseif isobject(thing)
    s.classname = class(thing);
    warning off MATLAB:structOnObject;
    allprops = struct(thing);
    warning on MATLAB:structOnObject
    mc = metaclass(thing);
    for i = 1:length(mc.PropertyList)
      p = mc.PropertyList(i);
      if strcmp(p.Name, 'classname')
        error('toStruct:PropertyNameCollision', 'Objects used in toStruct may not have a property named ''classname''');
      end
      if ~(p.Dependent || p.Constant || p.Transient)
        if isobject(allprops.(p.Name))
          s.(p.Name) = toPureStruct(allprops.(p.Name));
        elseif isstruct(allprops.(p.Name))
          s.(p.Name) = collapseObjects(allprops.(p.Name));
        else
          s.(p.Name) = allprops.(p.Name);
        end
      end
    end
  else
    error(['Conversion to pure struct from ' class(thing) ' is not possible.']);
  end
end

function s = collapseObjects(s)
  fnames = fields(s);
  for i = 1:length(fnames)
    f = s.(fnames{i});
    if isobject(f)
      s.(fnames{i}) = toPureStruct(f);
    elseif isstruct(f)
      s.(fnames{i}) = collapseObjects(f);
    end
  end
end
EDIT: One of the other "applications" that I would like to be able to read the saved files is a version control system (to track changes in parameters in configurations defined by Matlab objects), so any viable solution must be capable of producing human-intelligible text. The toPureStruct method above does this once the struct is converted to XML.
You might be able to sidestep this issue by using the new v7.3 MAT-file format for your saved objects. Unlike the older MAT-file formats, v7.3 is a variant of HDF5, and there is HDF5 support and there are HDF5 libraries in other languages. This could be a lot less work, and you'd probably get better performance too, since HDF5 will have a more efficient representation of numeric arrays than naive XML will.
It's not the default format; you can enable it using the -v7.3 switch to the save function.
To the best of my knowledge, what I want to do is impossible in Matlab 2011b. It might be possible, per @Andrew Janke's answer, to do something similar using Matlab's load command on binary HDF5 files that can be read and modified by other programs. But this adds an enormous amount of complexity, as Matlab's HDF5 representation of even the simplest class is extremely opaque. For instance, if I create a SimpleClass classdef in Matlab with two standard properties (prop1 and prop2), the HDF5 binary generated with the -v7.3 switch is 7k, the expanded XML is 21k, and the strings "prop1" and "prop2" do not appear anywhere. What I really want to create from that SimpleClass is this:
<root>
<classname>SimpleClass</classname>
<prop1>123</prop1>
<prop2>456</prop2>
</root>
I do not think it is possible to produce the above text from class properties in a generalized fashion in Matlab, even though it would be possible, for instance, in .NET or Java.

Write a compiler for a language that looks ahead and multiple files?

In my language, I can use a class variable in a method even when the definition appears below the method. A method can also call methods defined below it, and so on. There are no 'headers'. Take this C# example.
class A
{
    public void callMethods() { print(); B b = new B(); b.notYetSeen(); }
    public void print() { Console.Write("v = {0}", v); }
    int v = 9;
}
class B
{
    public void notYetSeen() { Console.Write("notYetSeen()\n"); }
}
How should I compile that? What I was thinking is:
pass 1: convert everything to an AST
pass 2: go through all classes and build a list of defined classes/variables/etc.
pass 3: go through the code, check for errors such as undefined variables or wrong usage, and create my output
But it seems like, for this to work, I have to do passes 1 and 2 for ALL files before doing pass 3. It also feels like a lot of work to do before I can report an error (other than the obvious ones that can be caught at parse time, such as forgetting to close a brace or writing 0xLETTERS instead of a hex value). My gut says there is some other way.
Note: I am using bison/flex to generate my compiler.
My understanding of languages that handle forward references is that they typically just use the first pass to build a list of valid names. Something along the lines of just putting an entry in a table (without filling out the definition) so you have something to point to later when you do your real pass to generate the definitions.
If you try to actually build full definitions as you go, you would end up having to rescan repeatedly, each time saving any references to undefined things until the next pass. Even that would fail if there are circular references.
I would go through on pass one and collect all of your class/method/field names and types, ignoring the method bodies. Then in pass two check the method bodies only.
I don't know that there can be any other way than traversing all the files in the source.
I think that you can get it down to two passes: on the first pass, build the AST and, whenever you find a variable name, add it to a list that contains that block's symbols (it would probably be useful to add that list to the corresponding scope in the tree). Step two is to linearly traverse the tree and make sure that each symbol used references a symbol in that scope or a scope above it.
My description is oversimplified, but the basic answer is: look-ahead requires at least two passes.
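As a toy illustration of the two-pass idea (sketched in Lua; the AST shape here is entirely hypothetical, just a flat list of class declarations and the names each one uses):
-- Pass 1: collect every declared class name into a symbol table.
local function collect(units)
    local symbols = {}
    for _, unit in ipairs(units) do
        for _, class in ipairs(unit.classes) do
            symbols[class.name] = true
        end
    end
    return symbols
end

-- Pass 2: check that every referenced name was declared somewhere.
local function check(units, symbols)
    for _, unit in ipairs(units) do
        for _, class in ipairs(unit.classes) do
            for _, ref in ipairs(class.uses) do
                if not symbols[ref] then
                    error(("undefined name '%s' in class %s"):format(ref, class.name))
                end
            end
        end
    end
end

-- Both passes see all files before any body is checked.
local files = {
    { classes = { { name = "A", uses = { "B" } } } },
    { classes = { { name = "B", uses = {} } } },
}
check(files, collect(files)) -- no error: B is found even though it is declared later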
The usual approach is to save B as "unknown". It's probably some kind of type (because of the place where you encountered it). So you can just reserve the memory (a pointer) for it even though you have no idea what it really is.
For the method call, you can't do much. In a dynamic language, you'd just save the name of the method somewhere and check whether it exists at runtime. In a static language, you can save it under "unknown methods" somewhere in your compiler, along with the unknown type B. Since method calls eventually translate to a memory address, you can again reserve the memory.
Then, when you encounter B and the method, you can clear up your unknowns. Since you know a bit about them, you can say whether they behave like they should or if the first usage is now a syntax error.
So you don't have to read all the files twice, although reading them twice certainly makes things simpler.
Alternatively, you can generate these header files as you encounter the sources and save them somewhere where you can find them again. This way, you can speed up the compilation (since you won't have to consider unchanged files in the next compilation run).
Lastly, if you write a new language, you shouldn't use bison and flex anymore. There are much better tools by now. ANTLR, for example, can produce a parser that can recover after an error, so you can still parse the whole file. Or check this Wikipedia article for more options.