As far as I can tell, the given: label (which starts the "given" block) does nothing at all in Spock tests, and they function exactly the same if you just omit it.
Does anyone know different?
From the Spock documentation,
The given block is where you do any setup work for the feature that you are describing. It may not be preceded by other blocks, and may not be repeated. A given block doesn’t have any special semantics. The given: label is optional and may be omitted, resulting in an implicit given block. Originally, the alias setup: was the preferred block name, but using given: often leads to a more readable feature method description (see Specifications as Documentation).
So, ultimately yes, it functions just the same if you omit it, because Spock implicitly puts the label there anyway; but making the given label explicit can help to document your tests.
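For instance, a minimal sketch (class and feature names are made up): these two feature methods behave identically, the only difference being that the first documents its setup explicitly.

import spock.lang.Specification

class MaxSpec extends Specification {

    def "max picks the larger number (explicit given)"() {
        given: "two numbers"
        def a = 5
        def b = 9

        expect:
        Math.max(a, b) == 9
    }

    def "max picks the larger number (implicit given)"() {
        def a = 5      // statements before the first label form an implicit given block
        def b = 9

        expect:
        Math.max(a, b) == 9
    }
}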
Reading Idris2 code, I've seen several functions "decorated" with %inline and also %tcinline. I've been searching for a clear explanation of them but haven't found much, beyond the fact that they "can" be used to give the compiler "hints" (for example around foreign calls); it's not clear what their main purpose is, or when they should and should not be used.
Additionally, it would be really good to know whether these "decorators", which all happen to start with %, have any common purpose.
From the change log:
New function flag %tcinline which means that the function should be inlined for the purposes of totality checking (but otherwise not inlined). This can be used as a hint for totality checking, to make the checker look inside functions that it otherwise might not.
From the documentation on pragmas:
%inline: Instruct the compiler to inline the following definition when it is applied. It is generally best to let the compiler and the backend you are using optimize code based on its predetermined rules, but if you want to force a function to be inlined when it is called, this pragma will force it.
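To connect this with the second part of the question: identifiers starting with % are Idris2 pragmas, i.e. directives to the compiler rather than ordinary code, and %inline / %tcinline are just two of them. A minimal sketch (the module and function names are made up) of how they attach to definitions:

module InlineDemo

-- %inline: ask the compiler to inline `double` wherever it is applied.
%inline
double : Nat -> Nat
double n = n + n

-- %tcinline: inline `twice` only for the purposes of totality checking,
-- so the checker can look inside it; code generation is unaffected.
%tcinline
twice : (Nat -> Nat) -> Nat -> Nat
twice f n = f (f n)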
I have been looking in Rakudo source for the implementation of require, first out of curiosity and second because I wanted to know if it was returning something.
I looked up sub require and it returned this hit, which actually seems to be the source for require, but it's called sub REQUIRE_IMPORT. It returns Nil and is declared as such, which pretty much answers my original question. But now my question is: Where's the mapping from that sub to require? Is it really the implementation for that function? Are there some other functions that are declared that way?
require is not a sub, but rather a statement control (so it is in the same category as use, if, for, etc.). It is parsed by the Perl 6 grammar, and there are a few different cases that are accepted. It is compiled in the Perl 6 actions, which have quite a bit to handle.
Much of the work is delegated to the various CompUnit objects, which are also involved with use/need. It also has to take care of stubbing symbols that the require will bring in, since the set of symbols in a given lexical scope is fixed at compile time, and the REQUIRE_IMPORT utility sub is involved with the runtime symbol import too.
The answer to your question as to what it will evaluate to comes at the end of the method:
$past.push($<module_name>
    ?? self.make_indirect_lookup($longname.components())
    !! $<file>.ast);
Which means:
If it was a require Some::Module, it evaluates to a lookup of Some::Module
If it was a require $file style case, it evaluates to the filename
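A rough sketch of what that looks like at the use site, assuming the behaviour described above (the file path is purely hypothetical):

# Module-name form: the whole statement evaluates to a lookup of the loaded module.
my \loaded = (require Test);
say loaded.^name;                            # prints the name of the package that was loaded

# File form (hypothetical path): evaluates to the file name instead.
# my $what = (require 'lib/Helpers.rakumod');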
I've got an embarrassingly simple question here. I'm a Smalltalk newbie (I attempt to dabble with it every 5 years or so), and I've got Pharo 6.1 running. How do I go about finding the official standard library documentation? Especially for the compiler class, and things like the compile and evaluate methods? I don't see how to perform a search with the Help Browser, and the method comments in the compiler class are fairly terse and cryptic. I also don't see an obvious link to the standard library API documentation at http://pharo.org/documentation. The books "Pharo by Example" and "Deep into Pharo" don't appear to cover that class either. I imagine the class is probably similar for Squeak and other Smalltalks, so a link to their documentation for the compiler class could be helpful as well.
Thanks!
There are several classes that collaborate in the compilation of a method (or expression) and, given your interest in the subject, I'm tempted to stimulate you even further in their study and understanding.
Generally speaking, the main classes are the Scanner, the Parser, the Compiler and the Encoder. Depending on the dialect these may have slightly different names and implementations but the central idea remains the same.
The Scanner parses the stream of characters of the source code and produces a stream of tokens. These tokens are then parsed by the Parser, which transforms them into the nodes of the AST (Abstract Syntax Tree). Then the Compiler visits the nodes of the AST to analyze them semantically. Here all variable nodes are classified: method arguments, method temporaries, shared, block arguments, block temporaries, etc. It is during this analysis where all variables get bound in their corresponding scope. At this point the AST is no longer "abstract" as it has been annotated with binding information. Finally, the nodes are revisited to generate the literal frame and bytecodes of the compiled method.
Of course, there are lots of things I'm omitting from this summary (pragmas, block closures, etc.) but with these basic ideas in mind you should now be ready to debug a very simple example. For instance, start with
Object compile: 'm ^3'
to internalize the process.
After some stepping into and over, you will reach the first interesting piece of code, which is the method OpalCompiler >> #compile. If we remove the error handling blocks, this method speaks for itself:
compile
    | cm |
    ast := self parse.                "build the AST from the source code"
    self doSemanticAnalysis.          "classify and bind variables to their scopes"
    self callPlugins.                 "run any registered compiler plugins"
    cm := ast generate: self compilationContext compiledMethodTrailer.   "emit the CompiledMethod"
    ^ cm
First we have the #parse message, where the parse nodes are created. Then we have the semantic analysis I mentioned above, and finally #generate: produces the encoding. You should debug each of these methods to understand the compilation process in depth. Given that you are dealing with a tree, be prepared to navigate through a lot of visitors.
Once you become familiar with the main ideas, you may want to try more elaborate (yet simple) examples to see other objects entering the scene.
Here are some simple facts:
Evaluation in Smalltalk is available everywhere: in workspaces, in the Transcript, in Browsers, in inspectors, in the debugger, etc. Basically, if you are allowed to edit text, most likely you will also be allowed to evaluate it.
There are 4 evaluation commands:
Do it (evaluates without showing the answer)
Print it (evaluates and prints the answer next to the expression)
Inspect it (evaluates and opens an inspector on the result)
Debug it (opens a debugger so you can evaluate your expression step by step).
Your expression can contain any literal (numbers, arrays, strings, characters, etc.)
17 "valid expression"
Your expression can contain any message.
3 + 4.
'Hello world' size.
1 bitShift: 28
Your expression can use any global variable.
Object new.
Smalltalk compiler
Your expression can reference self, super, true, nil, false.
SharedRandom globalGenerator next < 0.2 ifTrue: [nil] ifFalse: [self]
Your expression can use any variables declared in the context of the pane where you are writing. For example:
If you are writing in a class browser, self will be bound to the current class
If you are writing in an inspector, self is bound to the object under inspection. You can also use its instance variables in the expression.
If you are in the debugger, your expression can reference self, the instance variables, message arguments, temporaries, etc.
Finally, if you are in a workspace (a.k.a. Playground), you can use any temporaries there, which will be automatically created and remembered, without you having to declare them.
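Putting a few of these facts together, here is a small sketch you can Print it in a Playground; it touches exactly the two methods asked about (compile: on a class, and evaluate: on the compiler). The selector double: is made up for the example.

"Compile a brand-new method into a class at runtime (as in the Object compile: 'm ^3' example above)..."
Object compile: 'double: aNumber ^ aNumber * 2'.
Object new double: 21.                 "answers 42"

"...and evaluate an arbitrary expression, which is what Do it / Print it use under the hood."
Smalltalk compiler evaluate: '3 + 4'.  "answers 7"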
As near as I can tell, there is no API documentation for the Pharo standard library, like you find with other programming languages. This seems to be confirmed on the Pharo User's mailing list: http://forum.world.st/Essential-Documentation-td4916861.html
...there is a draft version of the ANSI standard available: http://wiki.squeak.org/squeak/uploads/172/standard_v1_9-indexed.pdf
...but that doesn't seem to cover the compiler class.
As a self-taught programmer, my definitions get fuzzy sometimes.
I'm very used to C and ObjC. In both of those your code must adhere to the language "structure". You can only do certain things in certain places. As an example, this is an error:
// beginning of file
NSLog(@"Hello world!"); // can't do this
@implementation MYClass
...
@end
However, in Ruby, anything you put anywhere is executed as the interpreter goes through it. So what is the difference between Ruby and Objective-C that allows this?
At first I thought it was that one was interpreted and the other compiled. Then I read some SO posts and the wikipedia definitions. Interpreted or compiled is a property of the implementation not the language. So that would mean there could (theoretically) be an interpreted implementation of Objective-C? In that case, the fact that a statement cannot be outside the implementation can't be a property of compiled languages, and vice-versa if there was a compiled implementation of Ruby. Or am I wrong in assuming that different implementations of a language would work the same way?
I'm not sure there's a technical term for it, but in most programming languages the context of the statement is extremely important.
Ruby has a concept of a root or main context where code is allowed. Other scripting languages follow this convention, presumably made popular by languages like Perl which allowed for very concise programming.
This allows things like this to be a complete and valid program:
print "Hello world!\n"
In other languages you need to define an entry point, such as a main routine, that is executed instead. Arbitrary code is not really allowed at the top level, which instead is reserved for things like function, type, constant, structure and class definitions.
A language like Ruby gives you a lot of control over the order in which code is executed. C, by comparison, is usually composed of separate source files that are then linked together, with no inherent order to the way things are linked. All the modules are simply assembled into the final library or executable. This is why the main entry point is required: it defines which function to run first.
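For contrast, here is a minimal sketch of the C equivalent of the one-line Ruby program above:

#include <stdio.h>

int main(void)
{
    /* Top-level executable code is not allowed in C; it must live in a
       function, and the designated entry point is main(). */
    printf("Hello world!\n");
    return 0;
}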
In short, it boils down to syntax, context, and language design considerations.
Ruby hides lots of stuff.
Ruby is object-oriented like C++, Objective-C and Java, and it has a main like C, but you don't see it.
puts(42) is a method call. It is sent to the top-level object, which is called main; you can see it by typing puts self.
If you don't specify the receiver (as in receiver.method()), Ruby will use the implicit one, main.
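You can check this for yourself at the top level (a quick sketch to run in irb or a script):

puts self          #=> main
puts self.class    #=> Object
puts 42            # really self.puts(42), sent to the implicit receiver "main"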
Check available methods:
puts Object.private_methods.sort
Why can you put everything anywhere?
C/C++ look for a function called main, and when they find it, that is what gets executed.
Ruby, on the other hand, doesn't need a main or any other special method/class to run first.
It executes code from the first line until it meets the end of the file (or __END__ on a separate line).
class Strongman
puts "I'm the best!"
end
is essentially syntactic sugar for a Class.new call:
Strongman = Class.new do
puts "I'm the best!"
end
The same goes for module.
for calls each under the hood and returns some kind of object, so you can think of it as something similar to a method call.
a = for i in 1..12; 42;end
puts a
# 1..12
In the end, it doesn't matter whether it is a method call or some structure like C's int main(); the programming language decides what it should run first.
My questions are:
Is there a way to conclusively determine if a function is async-signal-safe if you don't have access to its implementation?
If not, is there a way to test whether a function would be async-signal-safe enough to call from a signal handler?
If you read the man pages of signal() or sigaction(), you get a list of async-signal-safe functions (functions that can be safely called inside a signal handler). However, I believe that this list is not exhaustive. For example, the following page http://linux.die.net/man/7/signal, under the Async-signal-safe functions header, reads:
POSIX.1-2004 (also known as POSIX.1-2001 Technical Corrigendum 2) requires an implementation to guarantee that the following functions can be safely called inside a signal handler:
And then it proceeds to list the normal async-signal-safe functions listed in the man pages above. As I read it, it says "it requires", not "these are the only ones".
For example, this site says that backtrace_symbols_fd() is async-signal-safe. That function obtains its data from dladdr(), and it doesn't use malloc() like backtrace_symbols() does, so it looks like it may be safe. Also, I did some testing, and the output struct of dladdr() contains char* members, but these are NOT malloc'ed at runtime; the strings they point to exist at run time even before dladdr() is called.
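For concreteness, the kind of handler I have in mind looks roughly like this (a sketch only; backtrace() and backtrace_symbols_fd() are not on the POSIX async-signal-safe list, which is exactly what prompts the question):

#include <execinfo.h>
#include <signal.h>
#include <unistd.h>

static void crash_handler(int signo)
{
    void *frames[64];
    int depth = backtrace(frames, 64);

    /* Writes the symbol names straight to a file descriptor, avoiding the
       malloc() that backtrace_symbols() would do. Whether that is enough to
       make it async-signal-safe is the open question. */
    backtrace_symbols_fd(frames, depth, STDERR_FILENO);

    _exit(128 + signo);   /* _exit() is on the async-signal-safe list */
}

int main(void)
{
    signal(SIGSEGV, crash_handler);   /* sketch only; sigaction() is more robust */
    raise(SIGSEGV);                   /* trigger the handler just to demonstrate */
    return 0;
}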
Any thoughts or ideas that can point me in the right direction are appreciated.
If you don't have access to the function's implementation, you can look at the manual page. If the manual page doesn't say it is async-safe, and the POSIX standard doesn't say it is async-safe, the only safe conclusion is "it is not async-safe" (coupled with "do not use it").
There is no 100% reliable way to test whether a function is async-safe. Remember, testing can only show the presence of bugs, not their absence (Dijkstra). The mere fact that you don't manage to tickle the function into misbehaving under test may simply mean that your testing is not adequate (but rest assured, the important customer who you can't afford to offend will immediately and accidentally devise a devastatingly effective test that demonstrates that the function is not async-safe almost as soon as you release the code with the faulty assumption).
What are you hoping to achieve in the signal handler? You should consider whether it is the right place for it. It is probably best to follow the advice of the man page:
In general, signal handlers should do little more than set a flag; most other actions are not safe.
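A minimal sketch of that flag-only pattern (the names are my own):

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

static void handler(int signo)
{
    (void)signo;
    got_signal = 1;            /* the only work done inside the handler */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = handler;
    sigemptyset(&sa.sa_mask);
    if (sigaction(SIGINT, &sa, NULL) == -1)
        return 1;

    while (!got_signal)
        pause();               /* the "unsafe" work happens out here, in normal context */

    printf("Caught SIGINT, shutting down cleanly.\n");
    return 0;
}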