Why can't CMake functions return values?

A question for CMake experts out there.
According to the CMake function documentation, a function simply does not return anything. To change a variable's value, one has to pass it to the function and, inside the function, set the new value specifying the PARENT_SCOPE option.
Fine, this is a well-known feature of CMake.
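For reference, a minimal sketch of that workaround (the add_one function and the answer variable are made up for illustration):
function(add_one input out_var)
  math(EXPR result "${input} + 1")
  # Instead of returning, write the result into the caller's variable.
  set(${out_var} ${result} PARENT_SCOPE)
endfunction()

add_one(41 answer)
message(STATUS "answer = ${answer}")  # prints: -- answer = 42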
My question here is not about the how, but about the why: why do CMake functions not return values? What is the idea behind this?
For example, a function cannot be used inside an if expression, or called inside a set command.
If I remember correctly, it is the same with autotools, so I do not think it is like this just by chance.
Is there an expert who knows why?

You can find a partial answer by Ken Martin in a message from the CMake mailing list:
With respect to the general question of functions returning values it
could be done but it is a bit of a big change. Functions and commands
look the same (and should act the same IMO) to the people using them.
So really we are talking about commands returning values. This is
mostly just a syntax issue. Right now we have
command(arg arg arg)
to support return values we need something that could handle
command (arg command2(arg arg) arg arg)
or in your case
if(assertdef(foo))
or in another case
set(foo get_property())
etc. This hits the parser and the argument processing in CMake but I
think it could be done. I guess I’m not sure if we should do it.
Open to opinions here.

How can one invoke the non-extension `run` function (the one without scope / "object reference") in environments where there is an object scope?

Example:
data class T(val flag: Boolean) {
    constructor(n: Int) : this(run {
        // Some computation here...
        <Boolean result>
    })
}
In this example, the custom constructor needs to run some computation in order to determine which value to pass to the primary constructor, but the compiler does not accept the run, citing Cannot access 'run' before superclass constructor has been called. If I understand correctly, this means that instead of interpreting it as the non-extension run (the variant with no object reference in https://kotlinlang.org/docs/reference/scope-functions.html#function-selection), the compiler construes it as a call to this.run (the variant with an object reference in the above table) - which is invalid because the object has not been completely instantiated yet.
What can I do in order to let the compiler know I mean the run function which is not an extension method and doesn't take a scope?
Clarification: I am interested in an answer to the question as asked, not in a workaround.
I can think of several workarounds - ways to rewrite this code in a way that works as intended without calling run: extracting the code to a function; rewriting it as a (possibly highly nested) let expression; removing the run and invoking the lambda (with () after it) instead (funnily enough, IntelliJ IDEA tags that as Redundant lambda creation and suggests to Inline the body, which reinstates the non-compiling run). But the question is not how to rewrite this without using run - it's how to make run work in this context.
A good answer should do one of the following things:
Explain how to instruct the compiler to call a function rather than an extension method when a name is overloaded, in general; or
Explain how to do that specifically for run; or
Explain that (and ideally also why) it is not possible to do (ideally with supporting references); or
Explain what I got wrong, in case I got something wrong and the whole question is irrelevant (e.g. if my analysis is incorrect, and the problem is something other than the compiler construing the call to run as this.run).
If someone has a neat workaround not mentioned above they're welcome to post it in a comment - not as an answer.
In case it matters: I'm using multi-platform Kotlin 1.4.20.
Kotlin favors the receiver overload if it is in scope. The solution is to use the fully qualified name of the non-receiver function:
kotlin.run { //...
The specification is explained here.
Another option when the overloads are not in the same package is to use import renaming, but that won't work in this case since both run functions are in the same package.
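Applied to the constructor from the question, the fix looks something like this (a sketch only; the n > 0 check stands in for the elided computation):
data class T(val flag: Boolean) {
    // kotlin.run names the top-level, non-receiver overload explicitly,
    // so it is legal before delegation to the primary constructor.
    constructor(n: Int) : this(kotlin.run {
        // Some computation here...
        n > 0
    })
}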

inspect regex made with a variable

sub make-regex {
    my $what's-in-the-box = rand > .5 ?? 'x' !! 'y';
    /$what's-in-the-box/
}
my $lol-who-knows = make-regex;
$lol-who-knows.gist.say;
How do you see the innards of the regex (i.e. x or y)? Forcing a match is not a solution.
How do you see the innards of the regex (i.e. x or y)?
You:
Statically compile the regex. Doing so will involve use of a raku compiler, i.e. Rakudo.
Dynamically evaluate the regex so that the $what's-in-the-box variable in your regex gets interpolated and turns into x or y. Doing so will involve running the regex as code. That in turn means both using Rakudo and running the regex with a Match object invocant (or sub-class instance or, conceivably, a mock object equivalent).
View the resulting regex. Doing so will involve using compiler (Rakudo) toolchain specific introspection or debug functionality.
Forcing a match is not a solution.
You can run a regex with no input if your concern is just to avoid the need for a successful match:
say Match.new.&$lol-who-knows; # #<failed match>
But it must be run, otherwise the $what's-in-the-box variable won't turn into an x or y. You might think you could cheat on this by writing something that mimics this part of raku regex construction/use but there's good reason to think that's not going to work out[1].
And then you must view its innards, using Rakudo toolchain features, after you've started running it and before it finishes running.
A regex is a method
In raku, a Regex is code, a sub-class of Method.
Until you run it, by using it in a match, that code is just /$what's-in-the-box/ (aka regex { $what's-in-the-box }). It's the equivalent of something like:
method {
    ...self should be a Match or a sub-class of it
    ...if self is not an instance, create a new one
    ...do matching -- compiler evaluates $what's-in-the-box during this
    ...return updated self / new instance
}
(To see a bit more detail, see Moritz's answer to the SO question Can I change the slang inside a method?.)
A routine is a closure
You create your regex inside the closure named make-regex. The compiler spots that you've used $what's-in-the-box and therefore hangs on to the variable even after the make-regex closure returns. Later on, if/when your regex is run, the $what's-in-the-box variable will be replaced by its value at the moment the regex is run.
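A small sketch (not from the original answer) of that capture in action:
my $r;
{
    my $c = 'x';
    $r = /$c/;        # the regex closes over $c
}
say so 'x' ~~ $r;     # True - $c is interpolated only when the regex runs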
Footnotes
[1] There are plenty of huge complications you'll encounter if you try to do an end-run around using the compiler. Even something simple like interpolation is non-trivial. To quote Jonathan Worthington, in 2020:
I thought oh I'm only building [a tool that's] not the real compiler. I can ... have a simpler model of the symbol resolution. In the end it turned out that no I couldn't really. That started to create us some problems. When we aligned the way that we resolved symbols with the same algorithms and lookup structures that was being used in the compiler then suddenly it all became a lot simpler.

Differences between .Bool, .so, ? and so

I’m trying to figure out what the differences are between the above-mentioned routines, and whether statements like
say $y.Bool;
say $y.so;
say ? $y;
say so $y;
would ever produce a different result.
So far the only difference that is apparent to me is that ? has a higher precedence than so. .Bool and .so seem to be completely synonymous. Is that correct and (practically speaking) the full story?
What I've done to answer your question is to spelunk the Rakudo compiler source code.
As you note, one aspect that differs between the prefixes is parsing. The variations have different precedences, and so is alphabetic whereas ? is punctuation.
To see the precise code controlling this parsing, view Rakudo's Grammar.nqp and search within that page for prefix:sym<...> where the ... is ?, so, etc. It looks like ternary (... ?? ... !! ...) turns into an if.
I see that none of these tokens have correspondingly named Actions.pm6 methods. As a somewhat wild guess, perhaps the code generation that corresponds to them is handled by this part of method EXPR. (Anyone know, or care to follow the instructions in this blog post to find out?)
The definitions in Bool.pm6 and Mu.pm6 show that:
In Mu.pm6 the method .Bool returns False for an undefined object and .defined otherwise. In turn, .defined returns False for an undefined object and True otherwise. So these are the defaults.
.defined is documented as overridden in two built-in classes and .Bool in 19.
so, .so, and ? all call the same code that defers to Bool / .Bool. In theory classes/modules could override these instead of, or as well as, overriding .Bool or .defined, but I can't see why anyone would ever do that, either in the built-in classes/modules or in userland ones. (Overriding .Bool alone already changes all of them, as the sketch after this list shows.)
not and ! are the same (except that use of ! with :exists dies) and both turn into calls to nqp::hllbool(nqp::not_i(nqp::istrue(...))). I presume the primary reason they don't go through the usual .Bool route is to avoid marking Failures as handled.
There are .so and .not methods defined in Mu.pm6. They just call .Bool.
There are boolean bitwise operators that include a ?. They are far adrift from your question but their code is included in the links above.
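For illustration, a quick sketch (the Box class is made up) showing that overriding .Bool alone changes what ?, so, .so and .Bool all report:
class Box {
    has $.payload;
    # ?, so, .so and .Bool all defer to this method.
    method Bool { $.payload.defined }
}

my $b = Box.new;
say ? $b;      # False
say so $b;     # False
say $b.so;     # False
say $b.Bool;   # False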

Write a compiler for a language that looks ahead and spans multiple files?

In my language I can use a class variable in a method even when the definition appears below the method. A method can also call methods defined below it, and so on. There are no 'headers'. Take this C# example.
class A
{
    public void callMethods() { print(); B b = new B(); b.notYetSeen(); }
    public void print() { Console.Write("v = {0}", v); }
    int v = 9;
}
class B
{
    public void notYetSeen() { Console.Write("notYetSeen()\n"); }
}
How should I compile that? What I was thinking is:
pass1: convert everything to an AST
pass2: go through all classes and build a list of defined classes/variables/etc.
pass3: go through the code, check for errors such as undefined variables or wrong usage, and create my output
But it seems like for this to work I have to do pass 1 and pass 2 for ALL files before doing pass 3. It also feels like a lot of work before I find an error (other than the obvious ones that can be caught at parse time, such as forgetting to close a brace or writing 0xLETTERS instead of a hex value). My gut says there is some other way.
Note: I am using bison/flex to generate my compiler.
My understanding of languages that handle forward references is that they typically just use the first pass to build a list of valid names. Something along the lines of just putting an entry in a table (without filling out the definition) so you have something to point to later when you do your real pass to generate the definitions.
If you try to actually build full definitions as you go, you would end up having to rescan repeatedly, each time saving any references to undefined things until the next pass. Even that would fail if there are circular references.
I would go through on pass one and collect all of your class/method/field names and types, ignoring the method bodies. Then in pass two check the method bodies only.
I don't know that there can be any other way than traversing all the files in the source.
I think that you can get it down to two passes - on the first pass, build the AST and whenever you find a variable name, add it to a list that contains that block's symbols (it would probably be useful to add that list to the corresponding scope in the tree). Step two is to linearly traverse the tree and make sure that each symbol used references a symbol in that scope or a scope above it.
My description is oversimplified but the basic answer is -- lookahead requires at least two passes.
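A minimal sketch of that two-pass shape (in Kotlin, with made-up node types; it is not tied to bison/flex or any real front end):
data class ClassDecl(
    val name: String,
    val members: List<String>,                    // declared methods/fields
    val references: List<Pair<String, String>>    // (class, member) used in bodies
)

// Pass 1: record every declared name; method bodies are ignored entirely.
fun pass1(classes: List<ClassDecl>): Map<String, Set<String>> =
    classes.associate { it.name to it.members.toSet() }

// Pass 2: resolve each reference against the table built in pass 1.
fun pass2(classes: List<ClassDecl>, symbols: Map<String, Set<String>>): List<String> =
    classes.flatMap { cls ->
        cls.references.mapNotNull { (target, member) ->
            if (symbols[target]?.contains(member) == true) null
            else "undefined reference: $target.$member (used in ${cls.name})"
        }
    }

fun main() {
    val classes = listOf(
        ClassDecl("A", listOf("callMethods", "print", "v"), listOf("B" to "notYetSeen", "A" to "print")),
        ClassDecl("B", listOf("notYetSeen"), emptyList())
    )
    // B.notYetSeen resolves even though B is declared after A in the source.
    println(pass2(classes, pass1(classes)))   // []
}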
The usual approach is to save B as "unknown". It's probably some kind of type (because of the place where you encountered it). So you can just reserve the memory (a pointer) for it even though you have no idea what it really is.
For the method call, you can't do much. In a dynamic language, you'd just save the name of the method somewhere and check whether it exists at runtime. In a static language, you can save it under "unknown methods" somewhere in your compiler, along with the unknown type B. Since method calls eventually translate to a memory address, you can again reserve the memory.
Then, when you encounter B and the method, you can clear up your unknowns. Since you know a bit about them, you can say whether they behave like they should or if the first usage is now a syntax error.
So you don't have to read all the files twice, but doing so surely makes things simpler.
Alternatively, you can generate header-like summary files as you encounter the sources and save them somewhere where you can find them again. This way, you can speed up compilation (since you won't have to consider unchanged files in the next compilation run).
Lastly, if you write a new language, you shouldn't use bison and flex anymore. There are much better tools by now. ANTLR, for example, can produce a parser that can recover after an error, so you can still parse the whole file. Or check this Wikipedia article for more options.

Store Variable Name in String in VB.NET

I'm trying to store the names of some variables inside strings. For example:
Dim Foo1 as Integer
Dim Foo1Name as String
' -- Do something to set Foo1Name to the name of the other variable --
MessageBox.Show(Foo1Name & " is the variable you are looking for.")
' Outputs:
' Foo1 is the variable you are looking for.
This would help with some debugging I'm working on.
Well, you can clearly just set Foo1Name = "Foo1" - but I strongly suspect that's not what you're after.
How would you know which variable you're trying to find the name of? What's the bigger picture? What you want may be possible with reflection, if we're talking about non-local variables, but I suspect it's either not feasible, or there's a better way to attack the problem in the first place.
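If the values live in fields rather than locals, a minimal sketch of that reflection route could look like this (the Holder class and its Foo1 field are made up for illustration):
Imports System
Imports System.Reflection

Public Class Holder
    Public Foo1 As Integer = 9
End Class

Module Demo
    Sub Main()
        Dim h As New Holder()
        ' Enumerate public fields and read their values by name.
        For Each f As FieldInfo In GetType(Holder).GetFields()
            Console.WriteLine("{0} = {1}", f.Name, f.GetValue(h))   ' Foo1 = 9
        Next
    End Sub
End Module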
Does this example from msdn using reflection help?
One solution would be to use an associative array to store your variables. I did this once in .NET, but I think I wrote a custom class to do it.
myArray("foo1Name") = "foo1"
Then, you can just store a list of your variable names, or you can wrap that up in the same class.
If myArray(variableName(x)) = whatImLookingFor Then Console.WriteLine(variableName(x) & " is it")
I think this really depends on what you are trying to debug. Two possible things to look at are the Reflection and StackTrace classes. That said, when your program is compiled, the compiler and runtime do not guarantee that names will be consistent with the original program.
This is especially the case with debug vs. release builds. The point of the .PDB files (symbols) in the debug version is to include more information about the original program. For native C/C++ applications it is strongly recommended that you generate symbols for every build (debug+release) of your application to help with debugging. In .NET this is less of an issue since there are features like Reflection. IIRC John Robbins recommends that you always generate symbols for .NET projects too.
You might also find Mike Stall's blog and the managed debugger samples useful.
For finding the variable name, see: Finding the variable name passed to a function
This would apply to VB.Net as well.