Reading Idris2 code, I've seen several functions "decorated" with %inline and also %tcinline. I've been searching for a clear explanation of them but haven't found anything, except that they "can" be used to give some "hints" to help with foreign calls; it's still not clear what their main purpose is, or when they should and should not be used.
Additionally, it would be really good to know whether these "decorators", which happen to start with %, have any common purpose.
From the change log:
New function flag %tcinline which means that the function should be inlined for the purposes of totality checking (but otherwise not inlined). This can be used as a hint for totality checking, to make the checker look inside functions that it otherwise might not.
From the documentation on pragmas:
%inline Instruct the compiler to inline the following definition when it is applied. It is generally best to let the compiler and the backend you are using optimize code based on its predetermined rules, but if you want to force a function to be inlined when it is called, this pragma will force it.
Suppose I have a function (in Kotlin over Java):
fun <E> myFun() = ...
where E is a general type I know nothing about. Can I determine within this function whether there exists an extension function E.extFun()? And if so, how?
I very much doubt this is possible.
Note that extension functions are resolved statically, at compile time.
And that they're dependent on the extension function being in scope, usually via a relevant import. In particular, it's possible to have more than one extension function with the same name for the same class, as long as they're defined in different places; the one that's in scope will get called.
Within your function, you won't have access to any of that context. So even if you use reflection (which is the usual, and much-abused, ‘get out of jail free card’ for this sort of issue), you still won't be able to find the relevant extension function(s). (Not unless you have prior knowledge of where they might be defined — but in that case, you can probably use that knowledge to come up with a better approach.)
So while I can't say for certain, it seems highly unlikely.
Why do you want to determine this? What are you trying to achieve by it?
I have been looking in Rakudo source for the implementation of require, first out of curiosity and second because I wanted to know if it was returning something.
I looked up sub require and it returned this hit, which actually seems to be the source for require, but it's called sub REQUIRE_IMPORT. It returns Nil and is declared as such, which pretty much answers my original question. But now my question is: Where's the mapping from that sub to require? Is it really the implementation for that function? Are there some other functions that are declared that way?
require is not a sub, but rather a statement control (so, in the same category as things like use, if, for, etc.). It is parsed by the Perl 6 grammar, and there are a few different cases that are accepted. It is compiled in the Perl 6 actions, which has quite a bit to handle.
Much of the work is delegated to the various CompUnit objects, which are also involved with use/need. It also has to take care of stubbing symbols that the require will bring in, since the set of symbols in a given lexical scope is fixed at compile time, and the REQUIRE_IMPORT utility sub is involved with the runtime symbol import too.
The answer to your question as to what it will evaluate to comes at the end of the method:
$past.push($<module_name>
?? self.make_indirect_lookup($longname.components())
!! $<file>.ast);
Which means:
If it was a require Some::Module, then it evaluates to a lookup of Some::Module
If it was a require $file style case, it evaluates to the filename
Being a newbie Kotlin coder, I wonder whether there are some good practices, or even language constructs, for declaring preconditions in functions.
In Java I have been using Guava's Preconditions checking utilities:
https://github.com/google/guava/wiki/PreconditionsExplained
After some further investigation I came across the require function:
https://kotlinlang.org/api/latest/jvm/stdlib/kotlin/require.html
Is this what is generally used for checking preconditions on functions?
Of course. You can find all of the preconditions in Preconditions.kt. In addition to the require function, there are also requireNotNull, check, and checkNotNull.
The Kotlin documentation describes it only briefly, but you can look at the JDK's Objects#requireNonNull documentation for more detail:
Checks that the specified object reference is not null. This method is designed primarily for doing parameter validation in methods and constructors.
I use assert() and require() from the stdlib.
https://kotlinlang.org/api/latest/jvm/stdlib/kotlin/assert.html
https://kotlinlang.org/api/latest/jvm/stdlib/kotlin/require.html
Actually, 'require' appears not to be inherited: if a subclass overrides a function that has a 'require' statement, the 'require' in the parent function is not enforced. A true precondition would also apply to a redefinition of the inherited function, so (IMO) 'require' does not truly provide full precondition-checking functionality.
(I say "appears" because, being new to Kotlin, I've learned this from a simple experiment using inheritance, and it's possible I'm wrong: e.g., there's a bug in the compiler causing incorrect behavior, or I've done something wrong in compiling/setup. I don't think that's likely, though.)
Yes, it seems that toolforger is right about 'require'. I just searched https://kotlinlang.org for "require" and couldn't find it, either as a keyword or as a documented function. It appears to be undocumented (unless the doc for require is hidden somewhere I couldn't find); of course, that means we can't count on it to implement the standard design-by-contract "require" behavior, so the logical assumption is that it is simply the equivalent of "assert" in C.
I'm working on an iOS app using C and Objective-C, and I want to write a very small piece of code that will be executed thousands of times from more than one place. Is it safe to make this an inline function and be sure that it will always be expanded (I won't ever be taking its address) or should I make it a macro? The code is small and it will be executed very frequently, so I'd like to make sure I won't end up with thousands of function calls for it, but still I'd like the type safety of the function approach if possible...
If you want to be sure that a function is inlined, make it "extern inline" (this is a GNU-C feature; note that ISO C99's own "extern inline" means something different). Such functions are only used for inlining; the compiler will never generate a "real" standalone function for them. Thus, if the inlining fails, you should get linker errors. I assume clang has "inherited" this feature.
In general, always use inline instead of macros if possible. There's a reason why many C compilers had it for ages and C++ finally added it as a core feature: it makes things a lot safer and more reliable to use. There are still things that need macros, but those are few and far between.
Yes, you should use an inline function over a macro.
The performance will be identical to a macro (the code is inline, after all) and you'll get type safety as well.
N.B., this assumes that your function is simple enough for the compiler to inline. gcc's -Winline option warns if this isn't the case; not sure what flags do the same on your platform.
Also see this post for cases when you might prefer a macro (e.g., deferred evaluation), but based on your question it sounds like an inline function is the clear choice.
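To make the trade-off concrete, here is a minimal sketch with made-up names (MAX_MACRO, max_int, next) showing the two classic differences between a function-like macro and a static inline function: argument re-evaluation and type checking.

    #include <stdio.h>

    /* Function-like macro: no type checking, and the argument text is pasted in,
       so an argument with side effects can be evaluated more than once. */
    #define MAX_MACRO(a, b) ((a) > (b) ? (a) : (b))

    /* static inline function: arguments are type-checked and evaluated exactly once.
       "static" avoids linker trouble if the compiler chooses not to inline a call. */
    static inline int max_int(int a, int b) { return a > b ? a : b; }

    static int counter = 0;
    static int next(void) { return ++counter; }   /* has a visible side effect */

    int main(void) {
        counter = 0;
        /* Expands to ((next()) > (0) ? (next()) : (0)), so next() runs twice here:
           the result is 2, not 1. */
        int m = MAX_MACRO(next(), 0);
        printf("macro:  result=%d, next() calls=%d\n", m, counter);

        counter = 0;
        m = max_int(next(), 0);   /* next() runs exactly once: result is 1 */
        printf("inline: result=%d, next() calls=%d\n", m, counter);

        /* Type safety: max_int("oops", 0) gets a clear diagnostic at the call site,
           while the macro's complaint would come from inside its expansion. */
        return 0;
    }

With optimization enabled, a call like max_int(a, b) is typically expanded in place, so there is no call overhead to worry about, just none of the macro surprises.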
I may be wrong, but I understand a compiler can only inline functions that are in the same source file. If your inline function is in file A and you're trying to use it elsewhere, it cannot be inlined, unless the linker does link-time optimization.
This is because the compiler compiles one C file at a time into one object file. It cannot obtain the inlined function from another object file because, firstly, that file may not yet have been compiled and, secondly, it wouldn't know which object file to look in anyway.
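If cross-file inlining matters in your build and you are not using link-time optimization, the usual workaround is to make the body visible in every translation unit, typically by defining the function as static inline in a shared header. Here is a minimal single-file sketch of that pattern (clamp and the header name are hypothetical):

    /* In a real project the definition below would live in a shared header
       (say, a hypothetical clamp.h) that every .c file calling it includes,
       so each translation unit sees the body and the compiler can inline
       the call without relying on link-time optimization. */
    #include <stdio.h>

    static inline int clamp(int v, int lo, int hi) {
        return v < lo ? lo : (v > hi ? hi : v);
    }

    int main(void) {
        printf("%d\n", clamp(42, 0, 10));   /* prints 10; the call is a normal inlining candidate */
        return 0;
    }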
My questions are:
Is there a way to conclusively determine if a function is async-signal-safe if you don't have access to its implementation?
If not, is there a way to test if function would be async-signal-safe enough to call from a signal handler?
If you read the man pages for signal() or sigaction(), you get a list of async-signal-safe functions (functions that can be safely called inside a signal handler). However, I believe that this list is not exhaustive. For example, the following page http://linux.die.net/man/7/signal, under the Async-signal-safe functions header, reads:
POSIX.1-2004 (also known as POSIX.1-2001 Technical Corrigendum 2) requires an implementation to guarantee that the following functions can be safely called inside a signal handler:
And then it proceeds to list the normal async-signal-safe functions listed in the man pages above. As I read it, it says "it requires", not "these are the only ones".
For example, this site says that backtrace_symbols_fd() is async-signal-safe. That function obtains its data from dladdr() and it doesn't use malloc() the way backtrace_symbols() does, so it looks like it may be safe. Also, I did some testing, and the output struct of dladdr() contains char* variables, but these are NOT malloc'ed at runtime. The char string they point to exists at run-time even before dladdr() is called.
Any thoughts or ideas that can point me in the right direction are appreciated.
If you don't have access to the function's implementation, you can look at the manual page. If the manual page doesn't say it is async-safe, and the POSIX standard doesn't say it is async-safe, the only safe conclusion is "it is not async-safe" (coupled with "do not use it").
There is no 100% reliable way to test whether a function is async-safe. Remember, testing can only show the presence of bugs, not their absence (Dijkstra). The mere fact that you don't manage to tickle the function into misbehaving under test may simply mean that your testing is not adequate (but rest assured, the important customer who you can't afford to offend will immediately and accidentally devise a devastatingly effective test that demonstrates that the function is not async-safe almost as soon as you release the code with the faulty assumption).
What are you hoping to achieve in the signal handler? You should consider whether it is the right place for it. It is probably best to follow the advice of the man page:
In general, signal handlers should do little more than set a flag; most other actions are not safe.
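As a concrete illustration of that advice, here is a minimal sketch of the flag-setting pattern (on_sigint and got_sigint are made-up names): the handler writes only to a volatile sig_atomic_t, and everything that is not async-signal-safe, such as printf(), happens back in the main loop.

    #define _POSIX_C_SOURCE 200809L   /* for sigaction() under strict -std modes */
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* The handler does nothing except set this flag; writing to a
       volatile sig_atomic_t is async-signal-safe by definition. */
    static volatile sig_atomic_t got_sigint = 0;

    static void on_sigint(int signo) {
        (void)signo;
        got_sigint = 1;            /* no malloc, no stdio, no locks */
    }

    int main(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_sigint;
        sigemptyset(&sa.sa_mask);
        if (sigaction(SIGINT, &sa, NULL) == -1) {
            perror("sigaction");
            return 1;
        }

        while (!got_sigint) {
            pause();               /* sleep until a signal arrives */
        }

        /* Anything not async-signal-safe (printf, cleanup, logging, ...)
           is done here, in normal context, outside the handler. */
        printf("Caught SIGINT, exiting cleanly.\n");
        return 0;
    }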