How to know whether a Racket variable is defined or not

How can you get different behaviour depending on whether a variable is defined or not in the Racket language?

There are several ways to do this. But I suspect that none of these is what you want, so I'll only provide pointers to the functions (and explain the problems with each one):
namespace-variable-value is a function that retrieves the value of a toplevel variable from some namespace. This is useful only with REPL interaction and REPL code though, since code that is defined in a module is not going to use these things anyway. In other words, you can use this function (and the corresponding namespace-set-variable-value!) to get values (if any) and set them, but the only use of these values is in code that is not itself in a module. To put this differently, using this facility is as good as keeping a hash table that maps symbols to values, only it's slightly more convenient at the REPL since you just type names...
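For instance, here is a minimal REPL sketch of that facility (assumptions: you are at the top level, not inside a module; the #f argument makes the lookup read the top-level variable directly, and the thunk is called when the name is unbound):

(namespace-set-variable-value! 'x 3)
(namespace-variable-value 'x #f (lambda () 'not-defined)) ; -> 3
(namespace-variable-value 'y #f (lambda () 'not-defined)) ; -> not-defined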
More likely, these kinds of things are done in macros. The first way to do this is to use the special #%top macro. This macro gets inserted automatically for all names in a module that are not known to be bound. The usual thing this macro does is throw an error, but you can redefine it in your code (or make up your own language that redefines it) to do something else with these unknown names.
A slightly more sophisticated way to do this is to use the identifier-binding function -- again, in a macro, not at runtime -- and use it to get information about some name that is given to the macro and decide what to expand to based on that name.
The last two options are the more useful ones, but they're not the newbie-level kind of macros, which is why I suspect that you're asking the wrong question. To clarify, you can use them to write a kind of defined? special form that checks whether some name is defined, but that question is one that would be answered by a macro, based on the rest of the code, so it's not really useful to ask it. If you want something like the predicate found in other dynamic languages, then the best way to go about it is to redefine #%top to do some kind of lookup (hash table or global namespace) instead of throwing a compilation error -- but again, the difference between that and using a hash table explicitly is mostly cosmetic (and again, this is not a newbie thing).
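To make the #%top idea concrete, here is a rough, untested sketch of a tiny language module that rewrites unknown names into hash-table lookups instead of raising an unbound-identifier error. The file names and the globals table are made up for this illustration:

;; mylang.rkt
#lang racket
(provide (except-out (all-from-out racket) #%top)
         (rename-out [lookup-top #%top]))

(define globals (make-hash))

;; The expander wraps unbound names as (#%top . id); since this module
;; exports lookup-top under the name #%top, such names become lookups.
(define-syntax lookup-top
  (syntax-rules ()
    [(_ . id) (hash-ref globals 'id 'not-defined)]))

;; client.rkt, written in that language (saved next to mylang.rkt)
#lang s-exp "mylang.rkt"
(define x 3)
x            ; -> 3 (bound, so #%top is not inserted)
some-unknown ; -> not-defined (handled by our #%top)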

First, read Eli's answer. Then, based on Eli's answer, you can implement the defined? macro this way:
#lang racket
; The macro
(define-syntax (defined? stx)
  (syntax-case stx ()
    [(_ id)
     (with-syntax ([v (identifier-binding #'id)])
       #''v)]))
; Tests
(define x 3)
(if (defined? x) 'defined 'not-defined) ; -> defined
(let ([y 4])
  (if (defined? y) 'defined 'not-defined)) ; -> defined
(if (defined? z) 'defined 'not-defined) ; -> not-defined
It works for this basic case, but it has a problem: if z is undefined, the branch of the if that assumes z is defined and uses its value will raise a compile-time error, because a normal if only checks its condition at run time (dynamically):
; This doesn't work because z in `(list z)' is undefined:
(if (defined? z) (list z) 'not-defined)
So what you probably want is an if-defined macro that decides at compile time (instead of at run time) which branch of the if to take:
#lang racket
; The macro
(define-syntax (if-defined stx)
  (syntax-case stx ()
    [(_ id iftrue iffalse)
     (let ([where (identifier-binding #'id)])
       (if where #'iftrue #'iffalse))]))
; Tests
(if-defined z (list z) 'not-defined) ; -> not-defined
(if-defined t (void) (define t 5))
t ; -> 5
(define x 3)
(if-defined x (void) (define x 6))
x ; -> 3


Is it possible / what are examples of using hygienic macros for compile-time computational optimization?

I've been reading through https://lispcast.com/when-to-use-a-macro, and it states (about Clojure's macros):
Another example is performing expensive calculations at compile time as an optimization
I looked it up, and it seems Clojure has unhygienic macros. Can this also be applied to hygienic ones? I'm particularly talking about Scheme. As far as I understand hygienic macros, they only transform syntax, and the actual execution of code is deferred until run time no matter what.
Yes. Macro hygiene just refers to whether or not macro expansion can accidentally capture identifiers. Whether or not a macro is hygienic, regular macro expansion (as opposed to reader macro expansion) occurs at compile time. Macro expansion replaces the macro call with the result of executing the macro's code. The two major use cases are transforming syntax (e.g. DSLs), enhancing performance by eliminating computations at run time, or both.
A few examples come to mind:
You prefer to write your code with angles in degrees, but all of the calculations are actually in radians. You could have a macro eliminate these trivial but (at run time) unnecessary conversions at compile time; see the sketch after these examples.
Memoization is a broad example of compute optimization that macros can be used for.
You have a string representing a SQL statement or complex textual math expression which you want to parse and possibly even execute at compile time.
You could also combine the examples and have a memoizing SQL parser. Pretty much any scenario where you have all the necessary inputs at compile time and can therefore compute the result is a candidate.
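As a small illustration of the degrees example above, here is a hedged Racket sketch (sin-degrees is a made-up name for this example): when the angle is a literal number, the degrees-to-radians conversion is folded away during expansion; otherwise it is left for run time.

#lang racket
(require (for-syntax racket/math)) ; pi at expansion time

(define-syntax (sin-degrees stx)
  (syntax-case stx ()
    [(_ x)
     (number? (syntax-e #'x))
     ;; literal angle: do the conversion now, at expansion time
     (with-syntax ([rad (* (syntax-e #'x) (/ pi 180))])
       #'(sin rad))]
    [(_ x)
     ;; non-literal angle: convert at run time
     #'(sin (* x (/ pi 180)))]))

(sin-degrees 90)    ; -> 1.0, conversion already folded into the expansion
(define angle 45)
(sin-degrees angle) ; -> 0.707..., converted at run time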
Yes, hygienic macros can do this sort of thing. As an example, here is a macro called plus in Racket which is like + except that, at macroexpansion time, it sums sequences of adjacent literal numbers. It thus does some of the work you might expect to be done at run time at macroexpansion time (so, effectively, at compile time). For instance,
(plus a b 1 2 3 c 4 5)
expands to
(+ a b 6 c 9)
Some notes on this macro.
It's probably not very idiomatic Racket, because I'm a mostly-unreformed CL hacker, which means I live in a cave and wear animal skins and say 'ug' a lot. In particular I am sure I should use syntax-parse but I can't understand it.
It might not even be right.
There are subtleties with arithmetic which mean that this macro can return different results than +. In particular, + is defined to add pairwise from left to right, while plus does not in general: all the literals get added first. For instance (assuming you have done (require racket/flonum), and +max.0 &c have the same values as they do on my machine), (+ -max.0 1.7976931348623157e+308 1.7976931348623157e+308) has a value of 1.7976931348623157e+308, while (plus -max.0 1.7976931348623157e+308 1.7976931348623157e+308) has a value of +inf.0, because the two literals get added first and this overflows.
In general this is a useless thing: it's safe to assume, I think, that any reasonable compiler will do this kind of optimisation for you. The only purpose of it is to show that it's possible to detect and compile away compile-time constants.
Remarkably, at least from the point of view of caveman-lisp users like me, you can treat this just like + because of the last clause in the syntax-case: it works to say (apply plus ...) for instance (although no clever optimisation happens in that case, of course).
Here it is:
(require (for-syntax racket/list))

(define-syntax (plus stx)
  (define +/stx (datum->syntax stx +))
  (syntax-case stx ()
    [(_)
     ;; return additive identity
     #'0]
    [(_ a)
     ;; identity with one argument
     #'a]
    [(_ a ...)
     ;; the interesting case: there's more than one argument, so walk over them
     ;; looking for literal numbers. This is probably overcomplicated and
     ;; unidiomatic
     (let* ([syntaxes (syntax->list #'(a ...))]
            [reduced (let rloop ([current (first syntaxes)]
                                 [tail (rest syntaxes)]
                                 [accum '()])
                       (cond
                         [(null? tail)
                          (reverse (cons current accum))]
                         [(and (number? (syntax-e current))
                               (number? (syntax-e (first tail))))
                          (rloop (datum->syntax stx
                                                (+ (syntax-e current)
                                                   (syntax-e (first tail))))
                                 (rest tail)
                                 accum)]
                         [else
                          (rloop (first tail)
                                 (rest tail)
                                 (cons current accum))]))])
       (if (= (length reduced) 1)
           (first reduced)
           ;; make sure the operation is our +
           #`(#,+/stx #,@reduced)))]
    [_
     ;; plus on its own is +, but we want our one. I am not sure this is right
     +/stx]))
It is possible to do this even more aggressively, in fact, so that (plus a b 1 2 c 3) is turned into (+ a b c 6). This probably has even more exciting might-get-different-answers implications. It's worth noting what the CL spec says about this:
For functions that are mathematically associative (and possibly commutative), a conforming implementation may process the arguments in any manner consistent with associative (and possibly commutative) rearrangement. This does not affect the order in which the argument forms are evaluated [...]. What is unspecified is only the order in which the parameter values are processed. This implies that implementations may differ in which automatic coercions are applied [...].
So an optimisation like this is clearly legal in CL: I'm not clear that it's legal in Racket (although I think it should be).
(require (for-syntax racket/list))

(define-for-syntax (split-literals syntaxes)
  ;; split a list into literal numbers and the rest
  (let sloop ([tail syntaxes]
              [accum/lit '()]
              [accum/nonlit '()])
    (if (null? tail)
        (values (reverse accum/lit) (reverse accum/nonlit))
        (let ([current (first tail)])
          (if (number? (syntax-e current))
              (sloop (rest tail)
                     (cons (syntax-e current) accum/lit)
                     accum/nonlit)
              (sloop (rest tail)
                     accum/lit
                     (cons current accum/nonlit)))))))

(define-syntax (plus stx)
  (define +/stx (datum->syntax stx +))
  (syntax-case stx ()
    [(_)
     ;; return additive identity
     #'0]
    [(_ a)
     ;; identity with one argument
     #'a]
    [(_ a ...)
     ;; the interesting case: there's more than one argument: split the
     ;; arguments into literals and nonliterals and handle appropriately
     (let-values ([(literals nonliterals)
                   (split-literals (syntax->list #'(a ...)))])
       (if (null? literals)
           (if (null? nonliterals)
               #'0
               #`(#,+/stx #,@nonliterals))
           (let ([sum/stx (datum->syntax stx (apply + literals))])
             (if (null? nonliterals)
                 sum/stx
                 #`(#,+/stx #,@nonliterals #,sum/stx)))))]
    [_
     ;; plus on its own is +, but we want our one. I am not sure this is right
     +/stx]))

Optimization for accessing arrays in Lisp

I am trying to learn how to make type declarations in lisp. I figured out that aref causes problems:
(defun getref (seq k)
  (declare (optimize (speed 3) (safety 0)))
  (declare (type (vector fixnum *) seq) (type fixnum k))
  (aref seq k))
Compiled, it says:
; in: DEFUN GETREF
; (AREF MORE-LISP::SEQ MORE-LISP::K)
; ==>
; (SB-KERNEL:HAIRY-DATA-VECTOR-REF ARRAY SB-INT:INDEX)
;
; note: unable to
; avoid runtime dispatch on array element type
; due to type uncertainty:
; The first argument is a (VECTOR FIXNUM), not a SIMPLE-ARRAY.
;
; compilation unit finished
; printed 1 note
The same thing happens in every other function where I want to use aref (and I do, since I need adjustable vectors). How do I fix it?
It's not a problem and not an error. It is just information (a note) from the SBCL compiler that it can't optimize the code any further. The code will work just fine. You can safely ignore it.
If you can't use a simple vector (a one-dimensional simple array), then this is the price to pay for it: aref might be slightly slower.
The optimization hint you get comes from the docstring of a deftransform defined in sbcl/src/compiler/generic/vm-tran.lisp:
(deftransform hairy-data-vector-ref ((array index) (simple-array t) *)
  "avoid runtime dispatch on array element type"
  ...)
It has a comment which says:
This and the corresponding -SET transform work equally well on non-simple
arrays, but after benchmarking (on x86), Nikodemus didn't find any cases
where it actually helped with non-simple arrays -- to the contrary, it
only made for bigger and up to 100% slower code.
The code for arrays is quite complex and it is hard to say why and how things are designed as they are. You should probably ask the SBCL developers on sbcl-help; see the mailing lists section on SourceForge.
Currently it seems preferable to favor simple arrays if possible.
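For reference, here is a small sketch of the simple-array route (getref-simple is a made-up name): if the vector does not need to be adjustable or displaced, declaring it as a simple-array lets SBCL open-code the access and the note goes away.

(defun getref-simple (seq k)
  (declare (optimize (speed 3) (safety 0)))
  ;; a one-dimensional, non-adjustable, non-displaced fixnum vector
  (declare (type (simple-array fixnum (*)) seq)
           (type fixnum k))
  (aref seq k))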

Emacs function returns "Symbol's value as variable is void"

I am fairly new to Emacs but I know enough to be dangerous. I've built my .emacs file from scratch and now have it in an org file. I am now trying to take it to the next level and make my configuration more user friendly for myself.
I mostly use Emacs for writing. Books, blogs, screenwriting, etc. I am trying to create a function that will turn on multiple modes and add the settings on the fly.
For example, I use olivetti-mode when writing. It centers the text. Each time I have to adjust the olivetti-set-width. I thought I would get fancy and enable the spell checker and turn off linum-mode as well.
However, every time I try it I get the error:
Symbol's value as variable is void: my-writing
Can anyone explain what I am doing wrong? I've google-fu'd quite a bit but I clearly have a gap in my understanding of what I am doing.
#+BEGIN_SRC emacs-lisp
(defun my-writing ()
  "Start olivetti mode, set the width to 120, turn on spell-check."
  ((interactive)
   (olivetti-mode)
   (setq olivetti-set-width . 120)
   (flyspell-mode)
   (global-linum-mode 0)))

(add-hook 'olivetti-mode-hook
          (lambda () olivetti-mode my-writing t))
#+END_SRC
To disable global-linum-mode for specific major-modes, see automatically disable a global minor mode for a specific major mode
[Inasmuch as olivetti-mode is a minor-mode that is enabled subsequent to whatever major-mode is already present in the buffer, the original poster may wish to turn off linum-mode locally in the current buffer by adding (linum-mode -1) to the tail end of the function my-writing (see below). That idea, however, assumes that the original poster wanted to have linum-mode active in the current buffer just prior to calling my-writing.]
The function my-writing in the initial question contains an extra set of parenthesis that should be omitted, and the hook setting is not in proper form.
olivetti-set-width is a function that takes one argument, so you cannot use setq -- see the function beginning at line 197 of https://github.com/rnkn/olivetti/blob/master/olivetti.el. setq is used when setting a variable, not a function.
Although flyspell-mode is generally buffer-local, it is a good idea to get in the habit of using an argument of 1 to turn on a minor-mode or a -1 or 0 to turn it off. When an argument is omitted, calling the minor-mode works as an on/off toggle.
Unless there are other items already attached to the olivetti-mode-hook that require prioritization or special reasons for using a hook with buffer-local settings, you do not need the optional arguments for add-hook -- i.e., APPEND and LOCAL.
There is no apparent reason to call (olivetti-mode) as part of the olivetti-mode-hook that gets called automatically at the tail end of initializing the minor-mode, so there is now a check to see whether that mode has already been enabled. The olivetti-mode-hook is being included in this example to demonstrate how to format its usage. However, the original poster should consider eliminating (add-hook 'olivetti-mode-hook 'my-writing) as it appears to serve no purpose if the user will be calling M-x my-writing instead of M-x olivetti-mode. The hook would be useful in the latter circumstance -- i.e., when typing M-x olivetti-mode -- in which case, there is really no need to have (unless olivetti-mode (olivetti-mode 1)) as part of my-writing.
#+BEGIN_SRC emacs-lisp
(defun my-writing ()
  "Start olivetti mode, set the width to 120, turn on spell-check."
  (interactive)
  (unless olivetti-mode (olivetti-mode 1))
  (linum-mode -1) ;; see comments above
  (olivetti-set-width 120)
  (flyspell-mode 1))

;; original poster to consider eliminating this hook
(add-hook 'olivetti-mode-hook 'my-writing)
#+END_SRC
lawlist's answer describes how you can go about doing what you're actually trying to accomplish, but the particular error you're getting is because Emacs Lisp (like Common Lisp, but not Scheme) is a Lisp-2. When you associate a symbol with a function using defun, it doesn't make the value of that symbol (as a variable) that function, it makes the function value of that symbol the function. You'll get the same error in a much simplified situation:
(defun foo ()
  42)

(list foo)
The symbol foo has no value here as a variable. To get something that you could later pass to funcall or apply, you need to either use the symbol foo, e.g.:
(funcall 'foo)
;=> 42
or the form (function foo):
(funcall (function foo))
;=> 42
which can be abbreviated with the shorthand #':
(funcall #'foo)
;=> 42
You're getting the error because of:
(add-hook 'olivetti-mode-hook
          (lambda () olivetti-mode my-writing t))
which tries to use my-writing as a variable, but it has no variable value at that point.

Check if variable is empty or filled

I have the following problem:
Prolog program:
man(thomas, 2010).
man(leon, 2011).
man(thomas, 2012).
man(Man) :- once(man(Man, _)).
problem:
?- man(thomas).
true ; %I want only one true even if there are more "thomas" *working because of once()*
?- man(X).
X = thomas ; %I want all men to be listed *isn't working*
goal:
?- man(thomas).
true ;
?- man(X).
X = thomas ;
X = leon ;
X = thomas ;
I do understand why this happens, but I still want to get the names of all the men.
So my solution would be to check whether "Man" is instantiated: if yes, then "once..", else then... something like that:
man(Man) :- ( ->check<-, once(man(Man, _)) ; man(Man, _) ).
At "check" should be the code snippet that checks whether the variable "Man" is bound.
Is this possible?
One way to achieve this is as follows:
man(X) :-
    (nonvar(X), man(X, _)), !
    ;
    man(X, _).
Or, more preferred, would be:
man(X) :-
    (   var(X)
    ->  man(X, _)
    ;   once(man(X, _))
    ).
The cut will ensure at most one solution for an instantiated X, whereas the non-instantiated case will run its course. Note that, with the cut, you don't need once/1. The reason once/1 doesn't work as expected without the cut is that backtracking will still come back, take the "or" branch, and succeed there as well.
man(X) :-
    setof(t, Y^man(X,Y), _).
In addition to what you are asking for, this also removes redundant answers/solutions.
The built-in setof/3 describes in its last argument the sorted list of solutions found in the first argument. And that for each different instantiation of the free variables of the goal.
Free variables are those which neither occur in the first argument nor as an existential variable – the term on the left of (^)/2.
In our case this means that the last argument will always be [t] which is uninteresting. Therefore the _.
The two variables occurring in the goal are X and Y; or, to be more precise, the variables contained in X and Y. Y is an existential variable.
The only free variable is X, so all solutions for X are enumerated without redundancies. Note that you cannot depend on the precise order of the answers, which just happens to be sorted in this concrete case in many implementations.
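For illustration, with the three man/2 facts above, the setof/3 version answers roughly like this (the order shown is what an implementation that enumerates the groups in sorted order produces, e.g. SWI-Prolog; as said, you cannot rely on it):

?- man(X).
X = leon ;
X = thomas.

?- man(thomas).
true.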

Optional named arguments without wrapping them all in "OptionValue"

Suppose I have a function with optional named arguments but I insist on referring to the arguments by their unadorned names.
Consider this function that adds its two named arguments, a and b:
Options[f] = {a->0, b->0}; (* The default values. *)
f[OptionsPattern[]] :=
  OptionValue[a] + OptionValue[b]
How can I write a version of that function where that last line is replaced with simply a+b?
(Imagine that that a+b is a whole slew of code.)
The answers to the following question show how to abbreviate OptionValue (easier said than done) but not how to get rid of it altogether: Optional named arguments in Mathematica
Philosophical Addendum: It seems like if Mathematica is going to have this magic with OptionsPattern and OptionValue it might as well go all the way and have a language construct for doing named arguments properly where you can just refer to them by, you know, their names. Like every other language with named arguments does. (And in the meantime, I'm curious what workarounds are possible...)
Why not just use something like:
Options[f] = {a->0, b->0};
f[args___] := (a+b) /. Flatten[{args, Options[f]}]
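A quick illustration of how that rule-replacement version behaves (this assumes a and b themselves have no global values at call time):

In[1]:= f[]
Out[1]= 0

In[2]:= f[a -> 5]
Out[2]= 5

In[3]:= f[a -> 5, b -> 2]
Out[3]= 7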
For more complicated code I'd probably use something like:
Options[f] = {a->0, b->0};
f[OptionsPattern[]] := Block[{a,b}, {a,b} = OptionValue[{a,b}]; a+b]
and use a single call to OptionValue to get all the values at once. (Main reason is that this cuts down on messages if there are unknown options present.)
Update, to programmatically generate the variables from the options list:
Options[f] = {a -> 0, b -> 0};
f[OptionsPattern[]] :=
  With[{names = Options[f][[All, 1]]},
    Block[names, names = OptionValue[names]; a + b]]
Here is the final version of my answer, containing the contributions from the answer by Brett Champion.
ClearAll[def];
SetAttributes[def, HoldAll];
def[lhs : f_[args___] :> rhs_] /; !FreeQ[Unevaluated[lhs], OptionsPattern] :=
  With[{optionNames = Options[f][[All, 1]]},
    lhs := Block[optionNames, optionNames = OptionValue[optionNames]; rhs]];
def[lhs : f_[args___] :> rhs_] := lhs := rhs;
The reason why the definition is given as a delayed rule in the argument is that this way we can benefit from the syntax highlighting. The Block trick is used because it fits the problem: it does not interfere with possible nested lexical scoping constructs inside your function, and therefore there is no danger of inadvertent variable capture. We check for the presence of OptionsPattern since this code will not be correct for definitions without it, and we want def to also work in that case.
Example of use:
Clear[f, a, b, c, d];
Options[f] = {a -> c, b -> d};
(*The default values.*)
def[f[n_, OptionsPattern[]] :> (a + b)^n]
You can now look at the definition:
Global`f
f[n$_,OptionsPattern[]]:=Block[{a,b},{a,b}=OptionValue[{a,b}];(a+b)^n$]
f[n_,m_]:=m+n
Options[f]={a->c,b->d}
We can test it now:
In[10]:= f[2]
Out[10]= (c+d)^2
In[11]:= f[2,a->e,b->q]
Out[11]= (e+q)^2
The modifications are done at "compile time" and are pretty transparent. While this solution saves some typing compared to Brett's, it determines the set of option names at "compile time", while Brett's does so at "run time". Therefore, it is a bit more fragile than Brett's: if you add some new option to the function after it has been defined with def, you must Clear it and rerun def. In practice, however, it is customary to start with ClearAll and put all definitions in one piece (cell), so this does not seem to be a real problem. Also, it cannot work with string option names, but your original concept also assumes they are Symbols. Also, they should not have global values, at least not at the time when def executes.
Here's a kind of horrific solution:
Options[f] = {a->0, b->0};
f[OptionsPattern[]] := Module[{vars, tmp, ret},
  vars = Options[f][[All,1]];
  tmp = cat[vars];
  each[{var_, val_}, Transpose[{vars, OptionValue[Automatic,#]& /@ vars}],
    var = val];
  ret = a + b; (* finally! *)
  eval["ClearAll[", StringTake[tmp, {2,-2}], "]"];
  ret]
It uses the following convenience functions:
cat = StringJoin@@(ToString/@{##})&;   (* Like sprintf/strout in C/C++. *)
eval = ToExpression[cat[##]]&;         (* Like eval in every other lang. *)
SetAttributes[each, HoldAll];          (* each[pattern, list, body] *)
each[pat_, lst_, bod_] := ReleaseHold[    (* converts pattern to body for *)
  Hold[Cases[Evaluate@lst, pat:>bod];]];  (* each element of list. *)
Note that this doesn't work if a or b has a global value when the function is called. But that was always the case for named arguments in Mathematica anyway.