What OCaml standard library types cannot be marshalled?

I have a failure marshalling a data structure (error: abstract type (Custom)). There is one known abstract type in use, namely Big_int; however, that marshals fine. There is no custom C code in the application. Apart from Nums, the Unix library is also used (however, I don't believe there are any live values of its types). We're marshalling with the Closures flag.
Only two third-party libraries are in use: OCS Scheme (a Scheme interpreter in pure OCaml) and Dypgen (an extensible GLR parser, also in pure OCaml). The problem is with a new feature of Dypgen: saving a dynamically extended parser.
The OCaml error message is next to useless (it doesn't identify which abstract type with the Custom tag is the culprit).
We suspected the Lexbuf as the culprit, because it contains a closure over an OCaml channel and can't be marshalled, but it seems this isn't the problem. So my question is:
Which standard library components can't be marshalled?

Weak arrays cannot be marshalled. I am not familiar with OCS Scheme, but I would expect an interpreter for a garbage-collected language written in OCaml to use weak pointers (they let you piggy-back on OCaml's memory management).
In OCaml's defense, I do not think that the custom method block contains the name of the type (in retrospect, that seems like a good thing to have).
EDIT: Yep:
$ grep Weak ~/Downloads/ocs-1.0.3/src/*.ml
/Users/pascal/Downloads/ocs-1.0.3/src/ocs_sym.ml:module SymTable = Weak.Make (HashSymbol)
EDIT2:
As pointed out by ygrek, there is room for a name in the custom method block. I should also clarify that weak arrays are not custom values, since my answer seemed to imply that. Weak arrays have the Abstract tag and are chained using the first word of data so that the garbage collector can traverse them in special weak-pointer-related phases of the collection cycle.
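If you want to reproduce the failure and hunt for the culprit, here is a small OCaml sketch (the exact exception text varies between compiler versions, so don't rely on its wording): Marshal refuses both a weak array (Abstract tag) and a value that is a custom block without a serializer, such as a channel.

let () =
  (* a weak array: Abstract tag, not marshallable *)
  let w = Weak.create 3 in
  (try ignore (Marshal.to_string w [Marshal.Closures])
   with e -> Printf.printf "weak array: %s\n" (Printexc.to_string e));
  (* a channel is a custom block with no serializer, so it cannot be
     marshalled either *)
  (try ignore (Marshal.to_string stdin [Marshal.Closures])
   with e -> Printf.printf "channel: %s\n" (Printexc.to_string e))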

Finding the Pharo documentation for the compile & evaluate methods, etc. in the compiler class

I've got an embarrassingly simple question here. I'm a Smalltalk newbie (I attempt to dabble with it every 5 years or so), and I've got Pharo 6.1 running. How do I go about finding the official standard library documentation? Especially for the compiler class? Things like the compile and evaluate methods? I don't see how to perform a search with the Help Browser, and the method comments in the compiler class are fairly terse and cryptic. I also don't see an obvious link to the standard library API documentation at: http://pharo.org/documentation. The books "Pharo by Example" and "Deep into Pharo" don't appear to cover that class either. I imagine the class is probably similar for Squeak and other Smalltalks, so a link to their documentation for the compiler class could be helpful as well.
Thanks!
There are several classes that collaborate in the compilation of a method (or expression) and, given your interest in the subject, I'm tempted to stimulate you even further in their study and understanding.
Generally speaking, the main classes are the Scanner, the Parser, the Compiler and the Encoder. Depending on the dialect these may have slightly different names and implementations but the central idea remains the same.
The Scanner reads the stream of characters of the source code and produces a stream of tokens. These tokens are then parsed by the Parser, which transforms them into the nodes of the AST (Abstract Syntax Tree). Then the Compiler visits the nodes of the AST to analyze them semantically. Here all variable nodes are classified: method arguments, method temporaries, shared variables, block arguments, block temporaries, etc. It is during this analysis that all variables get bound in their corresponding scope. At this point the AST is no longer "abstract", as it has been annotated with binding information. Finally, the nodes are revisited to generate the literal frame and bytecodes of the compiled method.
Of course, there are lots of things I'm omitting from this summary (pragmas, block closures, etc.) but with these basic ideas in mind you should now be ready to debug a very simple example. For instance, start with
Object compile: 'm ^3'
to internalize the process.
After some stepping into and over, you will reach the first interesting piece of code, which is the method OpalCompiler >> #compile. If we remove the error handling blocks, this method speaks for itself:
compile
    | cm |
    ast := self parse.
    self doSemanticAnalysis.
    self callPlugins.
    cm := ast generate: self compilationContext compiledMethodTrailer.
    ^cm
First we have the #parse message, where the parse nodes are created. Then we have the semantic analysis I mentioned above, and finally #generate: produces the encoding. You should debug each of these methods to understand the compilation process in depth. Given that you are dealing with a tree, be prepared to navigate through a lot of visitors.
Once you become familiar with the main ideas, you may want to try more elaborate, yet still simple, examples to see other objects entering the scene.
Here are some simple facts:
Evaluation in Smalltalk is available everywhere: in workspaces, in the Transcript, in browsers, in inspectors, in the debugger, etc. Basically, if you are allowed to edit text, you will most likely also be allowed to evaluate it.
There are four evaluation commands:
Do it (evaluates without showing the answer)
Print it (evaluates and prints the answer next to the expression)
Inspect it (evaluates and opens an inspector on the result)
Debug it (opens a debugger so you can evaluate your expression step by step).
Your expression can contain any literal (numbers, arrays, strings, characters, etc.)
17 "valid expression"
Your expression can contain any message.
3 + 4.
'Hello world' size.
1 bitShift: 28
Your expression can use any global variable.
Object new.
Smalltalk compiler
Your expression can reference self, super, true, nil, false.
SharedRandom globalGenerator next < 0.2 ifTrue: [nil] ifFalse: [self]
Your expression can use any variables declared in the context of the pane where you are writing. For example:
If you are writing in a class browser, self will be bound to the current class
If you are writing in an inspector, self is bound to the object under inspection. You can also use its instance variables in the expression.
If you are in the debugger, your expression can reference self, the instance variables, message arguments, temporaries, etc.
Finally, if you are in a workspace (a.k.a. Playground), you can use any temporaries there, which will be automatically created and remembered, without you having to declare them.
As near as I can tell, there is no API documentation for the Pharo standard library, like you find with other programming languages. This seems to be confirmed on the Pharo User's mailing list: http://forum.world.st/Essential-Documentation-td4916861.html
...there is a draft version of the ANSI standard available: http://wiki.squeak.org/squeak/uploads/172/standard_v1_9-indexed.pdf
...but that doesn't seem to cover the compiler class.

Do COM's put_XXX methods change to set_XXX in a .NET RCW?

I have a COM component that has get_XXX and put_XXX methods inside it. I used it in a .NET project and an RCW was generated for it. I now see get_XXX and set_XXX methods and NOT the put_XXX ones. Is that automatic, or is it defined somewhere in the IDL?
These are property accessor methods. A compiler that uses the COM server is expected to generate a call to get_Xxx() when the client program reads the property, put_Xxx() when it writes it. A special one that C# doesn't have at all is putref_Xxx(), used to unambiguously access an object instead of a value.
The normal translation performed by Tlbimp.exe is as a plain C# property. But that doesn't always work; C# is a lot more strict about what a property can look like:
The default property, the one that's annotated as DISPID_VALUE (dispid 0), must take a single argument to be compatible. This maps to the C# indexer property, the one that makes it look like you are indexing an array.
Any other property cannot take an argument; C# does not support indexed properties other than the indexer.
C# does not have the equivalent of putref_Xxx(); the syntax ambiguity it resolves cannot occur in a C# program because of the previous two bullets. And that is the core reason the C# team decided to put these restrictions in place: they greatly disliked ambiguity in the language.
So Tlbimp.exe is forced to deal with these restrictions: if the COM property accessors are not compatible, then it must fall back to exposing them as plain methods instead of a property. With default names, they'll get the get_ and set_ prefixes. The latter explains your question; why they didn't pick put_ is otherwise unclear.
Notably, C# version 4 relaxed several of these restrictions, most of all to make interop with Office programs easier, which was quite painful in earlier C# versions, to put it mildly. It extended the property syntax to lessen the pain, but only for COM interop. If you are still stuck on an old version of .NET, now is a good time to consider updating; it is very strongly recommended.
The properties themselves have no prefixes (put_ etc.); they have names, a getter method, and a setter method, but no prefixes. The method table generated from the type library receives prefixes to distinguish between getters and setters, hence the prefixes. The exact prefix string depends on the preference of whoever generates the names.
See also:
#pragma import attributes - raw_property_prefixes
By default, low-level propget, propput, and propputref methods are exposed by member functions named with prefixes of get_, put_, and putref_ respectively. These prefixes are compatible with the names used in the header files generated by MIDL.

How would you implement a toString method, on the base class, if the string class inherits from the base class?

In many modern OOP languages, such as Java and C#, reference types have a base class typically called Object from which all other reference types inherit their core functionality.
These languages also have a universal .toString() method shared across all the reference types, so that it's easy to extract data as a string from any of them.
Now here's the question: If the String class is a subclass of Object, how can Object implement a .toString() method without causing a circular dependency?
If A uses B and B implements A, that's bound to cause problems. Or am I totally wrong about this?
Regarding C# (and I'm pretty sure the same goes for Java), the compiler doesn't require that source files be provided in dependency order.
This means that, unlike other compilers (the F# compiler and gcc, I believe), the C# compiler allows you to refer to symbols that haven't been seen by the compiler yet (as long as both types are in the same assembly).
In other words, yes, there is a circular dependency, but the compiler takes care of that for you. If you want to know how compilers handle circular dependencies, that has been asked on programmers.stackexchange already.
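To make the shape of the cycle concrete, here is a rough analogue in OCaml (used purely as an illustration to keep the examples in this thread in one language; it is not how Java or C# actually implement Object and String): two mutually recursive types that refer to each other, accepted because the compiler sees the whole group at once instead of requiring the definitions in dependency order.

(* obj's to_string produces a str, and every str carries an obj as its
   base part: a circular reference, fine because both types are declared
   together in one recursive group with 'and'. *)
type obj = { to_string : unit -> str }
and str = { base : obj; chars : string }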

Keep around a piece of context built at compile time for later use at runtime?

I'm aware this might be a broad question (there's no specific code for you to look at), but I'm hoping I'd get some insights as to what to do, or how to approach the problem.
To keep things simple, suppose the compiler that I'm writing performs these three steps:
parse (and bind all variables)
typecheck
codegen
Also, the language that I'm building the compiler for wants to support late analysis/late binding (i.e., it has a function that takes a String, which is to be compiled and executed as a piece of source code at runtime).
Now, during the parse phase, I have a piece of context that I need to keep around until run-time for the sole benefit of the aforementioned function (because it needs to parse and typecheck its argument in that context).
So the question, how should I do this? What do other compilers do?
Should I just serialise the context object to disk (codegen for it) and resurrect it during run-time or something?
Thanks
Yes, you'll need to emit the type information (or other context, you weren't very specific) in your object/executable files, so that your eval can read it at runtime. You might look at Java's .class file format for inspiration; Java doesn't have eval as such, but you can dynamically spin new bytecode at runtime that must be linked in a type-safe manner. David Conrad's comment is spot-on: this information can also be used to implement reflection, if your language has such a feature.
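A minimal sketch of the "serialise the context and resurrect it at runtime" idea, written in OCaml purely for illustration (the type names and functions below are made up for the example, and OCaml's Marshal performs no type check when reading the data back, so in practice you would add a version tag or checksum):

(* A made-up "context": a map from variable names to their types. *)
type ty = TInt | TString | TFun of ty * ty
module Env = Map.Make (String)
type context = ty Env.t

(* Written out by the compiler alongside the generated code... *)
let save_context (file : string) (ctx : context) : unit =
  let oc = open_out_bin file in
  Marshal.to_channel oc ctx [];
  close_out oc

(* ...and read back by the runtime 'eval' before it parses and
   typechecks the source string it was handed. *)
let load_context (file : string) : context =
  let ic = open_in_bin file in
  let (ctx : context) = Marshal.from_channel ic in
  close_in ic;
  ctx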
That's as much as I can help you without more specifics.

For a language to work with COM objects, does it need an API and compiler developed specifically for COM?

In the documentation for COM it says that it works with literally every language. Do you need a specific API for that language so it can interface with COM, or can any language literally just use it out of the box? Also, do you need a special compiler? Sorry if this is a stupid question, but I have never used it before, and I have been trying to find this answer. When I look at demos of COM examples, they all seem to access the objects in a C-style syntax; are there bindings and APIs for other languages (literally all of them)?
The key thing about COM is that it is a "binary standard", which is to say that it doesn't care what language is used, so long as the bits and bytes in memory end up in the right place.
COM basically specifies that all COM objects must have a specific layout in memory: the interface pointer points to a pointer that in turn points to a table of function pointers. That table has at least three members; the first three are pointers to the IUnknown functions (QueryInterface, AddRef, Release), and the remainder are pointers to the other functions in the interface. COM also specifies how arguments are passed to these functions, so that the caller and callee agree on how the stack is used and who pops off the values.
This requirement happily matches how C++ just happens to work on Windows, so most C++ classes that implement IUnknown will just happen to end up being valid COM classes. This is because Microsoft's implementation of C++ happens to use an object layout that matches what COM requires: the C++ object's vtable pointer is the same as the COM pointer-to-table-of-function-pointers, the C++ table of function pointers is exactly what COM requires for its table-of-function-pointers, and so on. (This isn't entirely just a happy coincidence: COM was likely designed to take advantage of the most common way that C++ objects are implemented in memory, which is the technique that MS's compiler uses. Note that the C++ language specification doesn't actually specify any particular object layout, so you could have a third-party C++ compiler that implemented C++ in a way that gave you classes that are not usable by COM. But no compiler vendor in their right mind would do that, since they would appear to be broken compared to the others!)
In plain C, you can create a COM object by creating suitable structs-containing-pointers manually. This works because C essentially allows you to specify binary-level memory layout for structs manually; you can create structs that you know will have the appropriate layout that COM is expecting.
In other languages, especially those that don't allow the user to specify memory layout explicitly, you need support from the language to allow for COM support. All the .Net languages - C#, VB.Net, and so on - use support in the .Net runtime that understands what COM expects, and produces the appropriate wrappers as needed to allow the interop to work.
So, long story short, it's not the case that any language under the sun will automatically work with COM; it's really the case that a couple of languages - namely C and C++ - are already aligned with COM's requirements; and most other languages will need some compiler support to make it work.