In Racket, how do I execute a button's callback function, when the function is in another file?

In Racket, how do I execute a button's callback function, when the function is in another file?
I have a file GUI.rkt with my GUI code:
#lang racket/gui
(provide (all-defined-out))
(define main (new frame% [label "App"]))
(new button% [parent main] [label "Click"]
     [callback (lambda (button event) (begin-capture))])
I have a main file, proj.rkt:
#lang racket
(require "GUI.rkt")
(define (begin-capture)
  ;do stuff
  ;...
  )
The compiler gives an error saying that begin-capture is an unbound identifier.
I know it is an unbound identifier because I didn't define the variable in the GUI file. The Racket documentation shows how to set the callback function in the object definition, but not outside of the definition. Ideally, I would like to access functions in the other file from my GUI, so that all my GUI code is in the GUI.rkt file.

If "GUI.rkt" needs identifiers from "proj.rkt" then "proj.rkt" needs to provide them and "GUI.rkt" needs to require "proj.rkt", not the other way around. If the two modules need identifiers from each other then you almost certainly have a design problem.
If you want the GUI part of the program to be something that is required by other parts, then an obvious approach is for it to provide procedures that construct the GUI objects and take arguments such as callbacks:
(provide
 ...
 make-main-frame
 ...)

(define (make-main-frame ... capture-callback ...)
  (define main (new frame% [label "App"]))
  (new button% [parent main] [label "Click"]
       [callback (lambda (button event) (capture-callback))])
  ...
  main)
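On the "proj.rkt" side this might then look something like the following sketch (assuming, for illustration, that make-main-frame ends up taking just the callback):

#lang racket
(require "GUI.rkt")

(define (begin-capture)
  ;; do stuff
  (void))

;; build the GUI, handing it the callback it should invoke, then show it
(define main (make-main-frame begin-capture))
(send main show #t)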
Note, however, that I don't know anything about how people conventionally organize programs with GUIs, let alone how they do it in Racket, since I haven't written that sort of code for a very long time. The basic deal, I think, for any program with modules is:
you want the module structure of programs to not have loops in it – even if it's possible for Racket's module system to have loops, their presence in a program would ring alarm bells for me;
where a 'lower' module in the graph (a module which is being required by some 'higher' module in the graph) may need to use functionality from that higher module, it should probably do so by providing procedures which take arguments which the higher module can supply, or equivalent functionality to that.
The above two points are my opinion only: I may be wrong about what the best style is in Racket.
A possible example
Here's one way of implementing a trivial GUI in such a way that the callback can be changed, but the GUI code and the implementation code are isolated.
First of all the gui lives in "gui.rkt" which looks like this:
#lang racket/gui
(provide (contract-out
          (selection-window (->* (string?
                                  (listof string?)
                                  (-> string? any))
                                 (#:initial-choice string?)
                                 (object/c)))))
(define (selection-window name choices selection-callback
                          #:initial-choice (initial-choice (first choices)))
  ;; make & show a selection window for a number of choices.
  ;; selection-callback gets called with the choice, a string.
  (define frame (new frame% [label name]))
  (new choice%
       [parent frame]
       [label "state"]
       [choices choices]
       [selection (index-of choices initial-choice)]
       [callback (λ (self event)
                   (selection-callback (send self get-string-selection)))])
  (send frame show #t)
  frame)
So this provides a single function which constructs the GUI. In real life you'd probably want to provide some additional functionality to manipulate the returned object without users of this module needing to know about it.
The function takes a callback function as an argument, and this is called in a way which might be useful to the implementation, not the GUI (so in particular it's called with the selected string).
"gui.rkt" doesn't provide any way to change the callback. But that's OK: users of the module can do that, for instance like this:
#lang racket
(require "gui.rkt")

(define selection-callback-implementation
  (make-parameter (λ (s)
                    (printf "selected ~A~%" s))))

(selection-window "foo" '("red" "amber" "green")
                  (λ (s) ((selection-callback-implementation) s))
                  #:initial-choice "green")
Now the parameter selection-callback-implementation is essentially the callback, and can be adjusted to change what it is. Of course you can do this without parameters if you want, but parameters are quite a nice approach I think (although, perhaps, unrackety).
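For example, here is a small sketch (using the definitions above) of swapping in different behaviour after the window exists; because the window's callback looks up the parameter's current value on every selection, the change takes effect immediately:

;; replace the callback for all subsequent selections
(selection-callback-implementation
 (λ (s) (printf "the new callback saw ~a~%" s)))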

Related

What exactly does it mean for a handler to "decline" to handle a signal?

In the HyperSpec entry for HANDLER-BIND, it says that a handler can decline to handle a signal.
However, the linked glossary entry for decline to handle a signal is not very enlightening:
decline v. (of a handler) to return normally without having handled the condition being signaled, permitting the signaling process to continue as if the handler had not been present.
This definition raises, but does not answer, the question: what constitutes not returning normally?
Is there a full list of actions that constitute "handling" a signal?
I know from hands-on experience that INVOKE-RESTART appears to fit this criterion. But is that the only way for a handler to "handle" a signal, or are there others?
I think to understand the true meaning of “handle”, you should consider how the condition system works under the hood. Kent Pitman's sample implementation, written during the standardization process, is a good place to start (even though it has kludgy stuff like essentially implementing an entire object system, since CLOS was not yet part of the language).
Roughly speaking, the action of handler-bind is to set up a special variable, which we will call *handler-clusters*, so that it holds a list of "clusters", each of which is a list of (type . function) pairs corresponding to one form's bindings. A possible definition is
(defmacro handler-bind (bindings &body forms)
  `(let ((*handler-clusters*
           (cons (list ,@(mapcar #'(lambda (x) `(cons ',(car x) ,(cadr x)))
                                 bindings))
                 *handler-clusters*)))
     ,@forms))
The signal function, then, goes through the clusters; if it finds a binding for the correct condition type, it calls the associated function. Definition:
(defun signal (datum &rest arguments)
  (let ((condition (coerce-to-condition datum arguments :default 'simple-condition))
        (*handler-clusters* *handler-clusters*)) ; save current value
    (when (typep condition *break-on-signals*)
      (with-simple-restart (continue "Continue the signaling process")
        (break "Break caused by *BREAK-ON-SIGNALS*")))
    (loop for cluster = (pop *handler-clusters*)
          while cluster
          do (loop for binding in cluster
                   do (when (typep condition (car binding))
                        (funcall (cdr binding) condition)))))
  nil)
where coerce-to-condition is a function that deals with “condition designators”. A subtle point is that we can't simply (loop for cluster in *handler-clusters* do …), because if, during the call of a handler, a condition of the same type as that which is being handled is signaled, the handler would be called recursively, which is probably not desirable. Thus the previous value is saved and we destructively pop the cluster off.
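To see why this matters, here is a sketch of a handler that itself signals a condition of the type it is handling; with the save-and-pop scheme above (and in conforming implementations) this does not loop:

(handler-bind ((warning (lambda (c)
                          (declare (ignore c))
                          ;; handlers established by this same HANDLER-BIND are
                          ;; no longer visible here, so this inner SIGNAL does
                          ;; not re-enter this handler; it just returns NIL
                          (signal 'warning))))
  (signal 'warning))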
Now, remember that Common Lisp allows closures over block names and tagbody tags. That is, after the definitions
(defvar *transfer-control*)
(defun weird (function)
  (tagbody
     (go :start)
   :tag
     (print 'transferred)
     (return-from weird)
   :start
     (setf *transfer-control* (lambda () ; captures the tag :tag
                                (go :tag)))
     (funcall function)))
a form like
(weird (lambda () (funcall *transfer-control*)))
is allowed, and will print 'transferred. The control transfer happens, in a sense, outside of the lexical scope of the tagbody; the ability to (go :tag) has “escaped” its enclosing scope. (It would be an error to funcall *transfer-control* after weird has returned, because the dynamic extent of the tagbody has been exited.)
All this is to say that calling an ordinary Common Lisp function can cause a transfer of control, instead of returning a value. Calling *transfer-control* does nothing but unwind the dynamic environment up to the point of the tagbody, and then jumps to :tag. The function doesn't “return” in the usual sense of the term, because the evaluation of the expression in which it is embedded will be abruptly stopped, never to resume. (With weird and *transfer-control*, we've defined a primitive substitute for catch and throw that simply transfers control but doesn't convey a value at the same time. To see definitions of tagbody, block, and catch in terms of each other, see Henry Baker's “Metacircular Semantics for Common Lisp Special Forms”.)
Therefore, when signal calls the handler for a condition, two things can happen:
The handler transfers control, aborting the evaluation of signal and unwinding the stack to the place to which control was transferred.
The handler does not transfer control, but returns a value. In this case, as the definition above shows, signal will continue looking for a handler until reaching the end of *handler-clusters* or until another handler transfers control. This is called “declining”.
(In a way, it can also do neither or both, by, for instance, calling signal on another condition. The specification calls this deferring.)
For example, the hyperspec gives a sample expansion for handler-case. The form
(handler-case form
  (type1 (var1) . body1)
  (type2 (var2) . body2) ...)
becomes (ignoring problems of variable capture)
(block return-point
  (let ((condition nil))
    (tagbody
       (handler-bind ((type1 #'(lambda (temp)
                                 (setq condition temp)
                                 (go :handler-tag-1)))
                      (type2 #'(lambda (temp)
                                 (setq condition temp)
                                 (go :handler-tag-2)))
                      ...)
         (return-from return-point form))
     :handler-tag-1
       (return-from return-point (let ((var1 condition)) . body1))
     :handler-tag-2
       (return-from return-point (let ((var2 condition)) . body2))
       ...)))
(I've rewritten the hyperspec's code to be more readable, although unhygienic, and I also fixed an error in the original.)
As you can see, the handlers established by handler-case unconditionally transfer control if called. Thus handler-case handlers definitely “handle” conditions.
Restarts are implemented in a very similar way, by restart-bind setting up the dynamic environment and invoke-restart using it to call a function. Because restarts are just functions, they need not transfer control, and so calling invoke-restart is not always an act of “handling”, although it is if the restart was established by restart-case or with-simple-restart—or, of course, if the restart transfers control.
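A minimal sketch of the difference (the restart names note and skip, and the messages, are made up for illustration):

;; A restart established by RESTART-BIND is just a function: it can simply
;; return, in which case the handler below still declines and SIGNAL goes on.
(restart-bind ((note (lambda () (format t "noted, carrying on~%"))))
  (handler-bind ((warning (lambda (c)
                            (declare (ignore c))
                            (invoke-restart 'note))))
    (signal 'warning)))

;; A restart established by RESTART-CASE transfers control to its clause,
;; so invoking it from a handler does handle the condition.
(restart-case
    (handler-bind ((warning (lambda (c)
                              (declare (ignore c))
                              (invoke-restart 'skip))))
      (signal 'warning)
      (format t "this line is skipped~%"))
  (skip () (format t "control transferred here~%")))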
The glossary describes things exactly but tersely: if a handler returns normally, it has declined to handle the condition. In order to handle the condition, it must instead transfer control so that it never returns. In CL this means it must transfer control 'upwards'.
An example might be
(block done
  (handler-bind ((error (lambda (c)
                          (return-from done c))))
    (error "exploded")))
Here the handler for conditions of type error is handling the condition, since it never returns normally but rather returns from the done block.
A full description of this is here.
Apologies for any indentation / paren errors: my lisp machine has emitted smoke so I am typing this on a more primitive system.

If CLOS had a compile-time dispatch, what would happen to this code snippet?

I am reading the Wikipedia article about CLOS.
It says that:
This dispatch mechanism works at runtime. Adding or removing methods thus may lead to changed effective methods (even when the generic function is called with the same arguments) at runtime. Changing the method combination also may lead to different effective methods.
Then, I inserted:
; declare the common argument structure prototype
(defgeneric f (x y))
; define an implementation for (f integer t), where t matches all types
(defmethod f ((x integer) y) 1)
Using SBCL and SLIME, I compiled the region containing this code and got the following result:
CL-USER> (f 1 2)
1
Then, I added to the definition:
; define an implementation for (f integer real)
(defmethod f ((x integer) (y real)) 2)
Again, I repeated the process compiling the new region and using the REPL to eval:
CL-USER> (f 1 2.0)
2
First question, if CLOS had the opposite behavior of run-time dispatch (compile-time dispatch, I suppose), what would the result be?
Second question, I decided to comment out the second method, leaving just the generic function and the first written method. Then, I re-compiled the region with Emacs.
When calling the function f in the REPL with (f 1 2), I thought I would get 1 since the second method is out. Instead, I got 2.
CL-USER> (f 1 2.0)
2
Why did this happen?
The only way I can get back to (f 1 2) returning 1 is re-starting the Slime REPL and compiling the region (with the second method being commented out). Third question, is there a better way to have this result without having to re-start the REPL?
Thanks!
These two are actually the same question. The answer is: you are modifying a system while it is running.
If CLOS objects weren't re-definable at run-time, this would simply not work, or you'd not be allowed to do that. Try such re-definitions with basic structs (i. e. the things you get when using defstruct), and you will often run into pretty severe warnings or even errors when the change is not compatible. Of course, structs have other limitations, too, e. g. only single dispatch, so that it's not so easy to make an exactly analogous example. But try to remove a slot from a defstruct.
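A quick way to see this at the REPL (the exact behaviour is implementation-dependent; on SBCL, for example, an incompatible redefinition typically signals a continuable error rather than being applied silently):

(defstruct point x y z)
(make-point :x 1 :y 2 :z 3)

;; re-evaluating the definition with a slot removed is an incompatible
;; redefinition of the structure class
(defstruct point x y)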
Just commenting out some source code doesn't change the fact that you evaluated (compiled and loaded) it before. You are manipulating a running system, and the source code is just that: source. If you want to remove a method from the running system, you can use remove-method (see also How to remove a defmethod for a struct). Most Lisp IDEs have ways to do that interactively, e. g. in SLIME using the SLIME inspector.
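For instance, assuming the generic function f and the (integer real) method from the question are still loaded, a sketch of removing that method at the REPL:

;; find the method object by its qualifiers and specializers, then remove it
(remove-method #'f
               (find-method #'f
                            '()   ; no qualifiers
                            (list (find-class 'integer) (find-class 'real))))

;; (f 1 2.0) now falls back to the (integer t) method and returns 1 again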

How to make SBCL optimize away possible call to FDEFINITION?

Apologies: I don't have sufficient knowledge to rework this as an easy to understand code snippet.
I've been using the SBCL compiler notes as signs to what might be improved but I'm well out of my depth with this —
; compiling (DEFUN EXECUTE-PARALLEL ...)
; file: /home/dunham/8000-benchmarksgame/bench/spectralnorm/spectralnorm.sbcl-8.sbcl
; in: DEFUN EXECUTE-PARALLEL
; (FUNCALL FUNCTION START END)
; --> SB-C::%FUNCALL THE
; ==>
; (SB-KERNEL:%COERCE-CALLABLE-FOR-CALL FUNCTION)
;
; note: unable to
; optimize away possible call to FDEFINITION at runtime
; because:
; FUNCTION is not known to be a function
—
#+sb-thread
(defun execute-parallel (start end function)
  (declare (type int31 start end))
  (let* ((num-threads 4))
    (loop with step = (truncate (- end start) num-threads)
          for index from start below end by step
          collecting (let ((start index)
                           (end (min end (+ index step))))
                       (sb-thread:make-thread
                        (lambda () (funcall function start end))))
            into threads
          finally (mapcar #'sb-thread:join-thread threads))))

#-sb-thread
(defun execute-parallel (start end function)
  (funcall function start end))
(The program is here. Measurements for similar programs are here.)
Is it practical to make SBCL "optimize away possible call to FDEFINITION" or is that compiler note an explanation rather than an opportunity?
The reason for the possible call to fdefinition is that the compiler does not know that function is a function: it might be the name of one; in general it may be a function designator rather than a function. To keep the compiler quiet, tell it that the variable is a function with a suitable type declaration, which here is (declare (type function function)).
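For illustration only (not the benchmark code), a sketch of the difference; compiled with a high speed setting, the first definition tends to produce the same note:

;; FUNCALL on a variable of unknown type may need FDEFINITION at runtime,
;; because F could be a symbol naming a function rather than a function.
(defun call-it (f x)
  (funcall f x))

;; Declaring the variable to be a FUNCTION lets the compiler call through
;; the function object directly.
(defun call-it/declared (f x)
  (declare (type function f))
  (funcall f x))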
Rainer is right: there is ε chance that this is ever going to be a performance problem, given you're starting a new thread. In particular it is fairly likely that adding a declaration will make no difference at all:
without a declaration the call to funcall will get compiled as something like 'check the type of the object: if it is a function, call it; if it is not, call fdefinition on it and call the result';
with a declaration the overall function looks like 'check that the object is a function, signalling an error if not ... call the function'.
In both cases, if the object is a function, there is one type check and one call: the type check is just in a different place. In the first case the code will still work if the object is merely the name of a function, while with the declaration it won't.
And in both of these cases this is code where you are calling make-thread: if making a thread is anything like as fast as a function call, even one via fdefinition, I would be really impressed by the threading system! Almost certainly the performance of this function is entirely dominated by the overhead of creating threads.
In real code, avoid optimizations like that - unless really needed
Is it practical to make SBCL "optimize away possible call to FDEFINITION" or is that compiler note an explanation rather than an opportunity?
Generally it does not matter, especially since most Lisp code should not be compiled with optimization qualities (speed 3) (safety 0) (space 0), since it may open up the software to runtime errors and crashes depending on the implementation and program used. Calling things unchecked (without safety), other than functions or symbols naming functions, via funcall might be dangerous enough to cause a program crash.
For a specific benchmark one might check via timings if a type declaration and a specialized fdefinition compilation brings any advantage.
a type declaration
A type declaration to make clear that a variable named fn is referencing an object of type function would be:
(declare (type function fn))
in the specific benchmark program FDEFINITION won't be called anyway
In the example you have provided, fdefinition will not be called anyway.
(setf foo (lambda (x) x)) ; foo references a function object
(funcall foo 3)
funcall is probably implemented by something like this:
(etypecase f
  ((or cons symbol) (funcall (fdefinition f) ...))
  (function ...))
Since your code passes a function object, there is never the need to call fdefinition.
The optimization benefit then will be that the runtime type dispatch can be removed and the dead code for the cons or symbol case...
You ask a question about removing an fdefinition, but your question actually relies on the premise that the SBCL notes are a good way to drive optimisations and improvements. The notes are a good way to spot obvious issues and places where type declarations can help. They do not tell you what actually makes your program slow. The correct way to improve the performance of a program is to 1. think whether there is a faster algorithm, and 2. measure its performance and work out what is slow.
A single fdefinition call will only matter if it happens in a tight loop (i.e. it is not single but very plural).
In this case it happens to start a thread. If you are starting threads in a tight loop then your performance problem comes from starting threads in a tight loop. Don’t do that.
If you aren’t starting threads in a tight loop (looking at your code, it appears you are not), there are bigger fish to fry. Why waste time on an fdefinition that maybe gets called 4 times per call to execute-parallel when you can optimise the inner function instead.

How can I attach a type tag to a closure in Scheme?

How can I attach an arbitrary tag to a closure in Scheme?
Here are a couple things I'd like to use this for:
(1) To mark closures that provide an interface to produce a string for what they represent, like what #kud0h asked for here. A general ->string procedure could include code something like this:
(display (if (stringable? x)
             (x 'string)
             x)
         str-port)
(2) More generally, to determine if a closure is an "object" that obeys the rules of a general object interface, or maybe to tell the class of an object (something like what #KPatnode was asking about here).
I can't query a procedure to see if it supports a certain interface by calling it, because if it doesn't support a known interface, calling the procedure will produce unpredictable results, most likely a run-time error.
Chez Scheme has putprop and getprop procedures that allow you to add keys and values to symbols. However, closures can be anonymous, or bound to different symbols, so I'd prefer to attach a calling-convention tag to the closure itself, not a symbol that it's bound to.
The only idea I have right now is to maintain a global hash table of all "stringable" or "object" closures in the system. That seems a little clunky. Is there a simpler, more elegant, or more efficient way?
Racket has applicable structures: you can give a structure type an apply hook to be called if an instance is used as a function.
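A rough sketch of that approach (the struct name tagged-proc and its fields are made up for illustration):

;; a "tagged procedure": behaves like a function when applied,
;; but carries an arbitrary tag that can be inspected safely
(struct tagged-proc (tag proc)
  #:property prop:procedure (struct-field-index proc))

(define stringable
  (tagged-proc 'stringable
               (λ (msg) (if (eq? msg 'string) "I am stringable" (void)))))

(tagged-proc? stringable)     ; => #t, so it is safe to ask for its tag
(tagged-proc-tag stringable)  ; => 'stringable
(stringable 'string)          ; => "I am stringable"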
If you want a more portable solution, you can use a hash table to associate your data with certain procedures. Unless your Scheme provides weak hashtables, though, keep in mind that the hashtable will prevent the procedures from being garbage-collected.
I think you might, instead of tagging procedures per se, want to look at Racket's object system, which has a concept of interfaces. It sounds quite similar to what you're after.
You could go extreme and redefine lambda syntax. Something like this (but untested by me):
(define *properties* '()) ;; example only

(define-syntax lambda
  (let-syntax ((sys-lambda
                (syntax-rules ()
                  ((_ args body ...)
                   (lambda args body ...)))))
    (syntax-rules ()
      ((_ args body ...)
       (let ((func (sys-lambda args body ...)))
         (set! *properties*
               (cons (cons func '(NO-PROPERTIES))
                     *properties*))
         func)))))

Methods and properties in scheme: is OOP possible in Scheme?

I will use a simple example to illustrate my question. In Java, C, or any other OOP language, I could create a pie class in a way similar to this:
class Pie {
    public String flavor;
    public int pieces;
    private int tastiness;
    public int goodness() {
        return tastiness * pieces;
    }
}
What's the best way to do that with Scheme? I suppose I could do it with something like this:
(define make-pie
  (lambda (flavor pieces tastiness)
    (list flavor pieces tastiness)))

(define pie-goodness
  (lambda (pie)
    (* (list-ref pie 1) (list-ref pie 2))))

(pie-goodness (make-pie 'cherry 2 5))
;output: 10
...where cherry is the flavor, 2 is the pieces, and 5 is the tastiness. However then there's no type-safety or visibility, and everything's just shoved in an unlabeled list. How can I improve that?
Sidenote: The make-pie procedure expects 3 arguments. If I want to make some of them optional (like I'd be able to in curly-brace languages like Java or C), is it good practice to just take the arguments in as a list (that is treat the arguments as a list - not require one argument which is a list) and deal with them that way?
Update:
I've received a couple answers with links to various extensions/libraries that can satisfy my hunger for OOP in scheme. That is helpful, so thank you.
However although I may not have communicated it well, I'm also wondering what the best way is to implement the pie object above without such classes or libraries, so I can gain a better understanding of scheme best practices.
In some sense, closures and objects are equivalent, so it's certainly possible. There are a heaping helping of different OO systems for Scheme -- a better question might be which one to use!
On the other hand, if this is an educational exercise, you could even roll your own using the closure-object equivalency. (Please forgive any errors, my Scheme is rather rusty.)
(define (make-pie flavor pieces tastiness)
  (lambda (selector)
    (cond ((eqv? selector 'flavor) flavor)
          ((eqv? selector 'pieces) pieces)
          ((eqv? selector 'tastiness) tastiness)
          ((eqv? selector 'goodness) (* pieces tastiness))
          (else '()))))
This is a simple constructor for a pie object. The parameter variables flavor, pieces and tastiness are closed over by the lambda expression, becoming fields of the object, and the first (and for simplicity's sake here, only) argument to the closure is the method selector.
That done, you can instantiate and poke at some:
> (define pie1 (make-pie "rhubarb" 8 4))
> (define pie2 (make-pie "pumpkin" 6 7))
> (pie1 'flavor)
"rhubarb"
> (pie1 'goodness)
32
> (pie2 'flavor)
"pumpkin"
Many Schemes allow you to define classes that contain fields and methods. For example, see:
Bigloo Object System
PLT scheme Classes and Objects
This is how I would recommend implementing this:
(define PersonClass
  (lambda (name age strength life)
    (let ((name name) (age age) (life life) (strength strength))
      (lambda (command data)
        (cond
          ((< life 1)
           "I am dead")
          ((equal? command "name")
           name)
          ((equal? command "age")
           age)
          ((equal? command "birthday")
           (set! age (+ age 1)))
          ((equal? command "receive damage")
           (begin (set! life (- life data)) (display "I received damage\n")))
          ((equal? command "hit")
           (data "receive damage" strength)))))))
Use it like such: (define Karl (PersonClass "Karl" 30 10 100)) and then, for example, (Karl "name" 0).
Most schemes support SRFI-9 records or the similar R7RS records, and R6RS also provides records with slightly different syntax. These records are a way to make new types in scheme. In addition, most schemes, together with R6RS and R7RS, support modules or libraries, which are one way to encapsulate operations on such types.
Many scheme programmers use these instead of OOP to write their programs, depending on the nature of the application. The record provides the type and its fields; an associated procedure is provided which creates new objects of this type; other procedures which take the record as an argument (conventionally the first argument) provide the required operations on the type; and the module/library definition determines which of these are exported to user code and which are private to the implementation of the module/library.
Where a field of the record is itself a procedure, it can also have private data as a closure: but often you want to use the module definition for data hiding and encapsulation rather than closures (it is also usually more efficient).
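For instance, a minimal sketch of the question's pie type using SRFI-9 records (the accessor names are my choice):

;; record type definition: constructor, predicate, and field accessors
(define-record-type pie
  (make-pie flavor pieces tastiness)
  pie?
  (flavor pie-flavor)
  (pieces pie-pieces)
  (tastiness pie-tastiness))

;; an operation on the type, defined alongside it and exported (or not)
;; by the enclosing module/library
(define (pie-goodness p)
  (* (pie-pieces p) (pie-tastiness p)))

(pie-goodness (make-pie 'cherry 2 5))  ; => 10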