What exactly does it mean for a handler to "decline" to handle a signal? - error-handling

In the HyperSpec entry for HANDLER-BIND, it says that a handler can decline to handle a signal.
However, the linked glossary entry for decline to handle a signal is not very enlightening:
decline v. (of a handler) to return normally without having handled the condition being signaled, permitting the signaling process to continue as if the handler had not been present.
This definition raises, but does not answer, the question: what constitutes not returning normally?
Is there a full list of actions that constitute "handling" a signal?
I know from hands-on experience that INVOKE-RESTART appears to fit this criterion. But is that the only way for a handler to "handle" a signal, or are there others?

I think to understand the true meaning of “handle”, you should consider how the condition system works under the hood. Kent Pitman's sample implementation, written during the standardization process, is a good place to start (even though it has kludgy stuff like essentially implementing an entire object system, since CLOS was not yet part of the language).
Roughly speaking, the action of handler-bind is to set up a special variable, which we will call *handler-clusters*, so that it holds a list of clusters, each cluster being a list of (type . function) pairs corresponding to one handler-bind's bindings. A possible definition is
(defmacro handler-bind (bindings &body forms)
  `(let ((*handler-clusters*
           (cons (list ,@(mapcar #'(lambda (x) `(cons ',(car x) ,(cadr x)))
                                 bindings))
                 *handler-clusters*)))
     ,@forms))
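For instance, under this sketch a form like the following (where foo stands for an arbitrary body form) would expand into roughly what the second snippet shows, so an inner handler-bind's cluster sits in front of, but does not replace, any outer ones:

(handler-bind ((error #'print))
  (foo))

;; expands, roughly, into:
(let ((*handler-clusters*
        (cons (list (cons 'error #'print))
              *handler-clusters*)))
  (foo))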
The signal function, then, goes through the clusters; if it finds one for the correct condition type, it calls the associated function. Definition:
(defun signal (datum &rest arguments)
  (let ((condition (coerce-to-condition datum arguments :default 'simple-condition))
        (*handler-clusters* *handler-clusters*)) ; save the current value
    (when (typep condition *break-on-signals*)
      (with-simple-restart (continue "Continue the signaling process")
        (break "Break caused by *BREAK-ON-SIGNALS*")))
    (loop while *handler-clusters*
          do (let ((cluster (pop *handler-clusters*)))
               (dolist (binding cluster)
                 (when (typep condition (car binding))
                   (funcall (cdr binding) condition))))))
  nil)
where coerce-to-condition is a function that deals with “condition designators”. A subtle point is that we can't simply (loop for cluster in *handler-clusters* do …), because if, during the call of a handler, a condition of the same type as that which is being handled is signaled, the handler would be called recursively, which is probably not desirable. Thus the previous value is saved and we destructively pop the cluster off.
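A rough sketch of what coerce-to-condition might look like, hedged: the keyword interface matches the call above, but real implementations differ in the details and do more error checking.

(defun coerce-to-condition (datum arguments &key default)
  ;; A condition object designates itself; a symbol names a condition type;
  ;; a string (or function) is a format control for the DEFAULT condition type.
  (etypecase datum
    (condition datum)
    (symbol (apply #'make-condition datum arguments))
    ((or string function)
     (make-condition default
                     :format-control datum
                     :format-arguments arguments))))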
Now, remember that Common Lisp allows closures over block names and tagbody tags. That is, after the definitions
(defvar *transfer-control*)

(defun weird (function)
  (tagbody
     (go :start)
   :tag
     (print 'transferred)
     (return-from weird)
   :start
     (setf *transfer-control* (lambda () ; captures the tag :tag
                                (go :tag)))
     (funcall function)))
a form like
(weird (lambda () (funcall *transfer-control*)))
is allowed, and will print 'transferred. The control transfer happens, in a sense, outside of the lexical scope of the tagbody; the ability to (go :tag) has “escaped” its enclosing scope. (It would be an error to funcall *transfer-control* after weird has returned, because the dynamic extent of the tagbody has been exited.)
All this is to say that calling an ordinary Common Lisp function can cause a transfer of control, instead of returning a value. Calling *transfer-control* does nothing but unwind the dynamic environment up to the point of the tagbody, and then jumps to :tag. The function doesn't “return” in the usual sense of the term, because the evaluation of the expression in which it is embedded will be abruptly stopped, never to resume. (With weird and *transfer-control*, we've defined a primitive substitute for catch and throw that simply transfers control but doesn't convey a value at the same time. To see definitions of tagbody, block, and catch in terms of each other, see Henry Baker's “Metacircular Semantics for Common Lisp Special Forms”.)
Therefore, when signal calls the handler for a condition, two things can happen:
The handler transfers control, aborting the evaluation of signal and unwinding the stack to the place to which control was transferred.
The handler does not transfer control, but returns a value. In this case, as the definition above shows, signal will continue looking for a handler until reaching the end of *handler-clusters* or until another handler transfers control. This is called “declining”.
(In a way, it can also do neither or both, by, for instance, calling signal on another condition. The specification calls this deferring.)
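To make the two outcomes concrete, here is a small example (assuming a standard CL image): the inner handler declines by returning normally, so the search continues outward to a handler-case handler that does transfer control.

(handler-case
    (handler-bind ((error (lambda (c)
                            (declare (ignore c))
                            (format t "inner handler runs and returns normally~%"))))
      (error "boom"))
  (error (c)
    (format t "outer handler handles: ~A~%" c)))
;; Both lines are printed: the inner handler declined, so signaling continued
;; and the outer HANDLER-CASE clause handled the condition.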
For example, the hyperspec gives a sample expansion for handler-case. The form
(handler-case form
  (type1 (var1) . body1)
  (type2 (var2) . body2)
  ...)
becomes (ignoring problems of variable capture)
(block return-point
  (let ((condition nil))
    (tagbody
       (handler-bind ((type1 #'(lambda (temp)
                                 (setq condition temp)
                                 (go :handler-tag-1)))
                      (type2 #'(lambda (temp)
                                 (setq condition temp)
                                 (go :handler-tag-2)))
                      ...)
         (return-from return-point form))
     :handler-tag-1
       (return-from return-point (let ((var1 condition)) . body1))
     :handler-tag-2
       (return-from return-point (let ((var2 condition)) . body2))
       ...)))
(I've rewritten the hyperspec's code to be more readable, although unhygienic, and I also fixed an error in the original.)
As you can see, the handlers established by handler-case unconditionally transfer control if called. Thus handler-case handlers definitely “handle” conditions.
Restarts are implemented in a very similar way, by restart-bind setting up the dynamic environment and invoke-restart using it to call a function. Because restarts are just functions, they need not transfer control, and so calling invoke-restart is not always an act of “handling”, although it is if the restart was established by restart-case or with-simple-restart—or, of course, if the restart transfers control.
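A small sketch of that difference: a restart established by restart-bind may simply return, while one established by restart-case always unwinds to the restart-case form.

;; A RESTART-BIND restart need not transfer control:
(restart-bind ((note (lambda () (format t "restart function ran~%"))))
  (invoke-restart 'note)              ; returns normally; nothing was unwound
  (format t "still in the body~%"))

;; A RESTART-CASE restart always transfers control to its clause:
(restart-case
    (progn (invoke-restart 'use-value 42)
           (format t "never printed~%"))
  (use-value (v) v))                  ; => 42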

The glossary describes things exactly but tersely: if a handler returns normally, it has declined to handle the condition. In order to handle the condition, it must instead transfer control so that it never returns. In CL this means it must transfer control 'upwards'.
An example might be
(block done
  (handler-bind ((error (lambda (c)
                          (return-from done c))))
    (error "exploded")))
Here the handler for conditions of type error is handling the condition, since it never returns normally but rather returns from the done block.
A full description of this is here.
Apologies for any indentation / paren errors: my lisp machine has emitted smoke so I am typing this on a more primitive system.


If CLOS had a compile-time dispatch, what would happen to this code snippet?

I am reading the Wikipedia article about CLOS.
It says that:
This dispatch mechanism works at runtime. Adding or removing methods thus may lead to changed effective methods (even when the generic function is called with the same arguments) at runtime. Changing the method combination also may lead to different effective methods.
Then, I inserted:
; declare the common argument structure prototype
(defgeneric f (x y))
; define an implementation for (f integer t), where t matches all types
(defmethod f ((x integer) y) 1)
Using SBCL and SLIME, I compiled the region containing this code and got the following result:
CL-USER> (f 1 2)
1
Then, I added to the definition:
; define an implementation for (f integer real)
(defmethod f ((x integer) (y real)) 2)
Again, I repeated the process, compiling the new region and evaluating in the REPL:
CL-USER> (f 1 2.0)
2
First question, if CLOS had the opposite behavior of run-time dispatch (compile-time dispatch, I suppose), what would the result be?
Second question: I decided to comment out the second method, leaving just the generic function and the first method. Then, I re-compiled the region with Emacs.
When calling the function f in the REPL with (f 1 2) I thought I would get 1, since the second method is out. Instead, I got 2.
CL-USER> (f 1 2.0)
2
Why did this happen?
The only way I can get back to (f 1 2) returning 1 is restarting the SLIME REPL and compiling the region (with the second method commented out). Third question: is there a better way to get this result without having to restart the REPL?
Thanks!
These two are actually the same question. The answer is: you are modifying a system while it is running.
If CLOS objects weren't re-definable at run-time, this would simply not work, or you'd not be allowed to do it. Try such re-definitions with basic structs (i.e. the things you get from defstruct), and you will often run into pretty severe warnings or even errors when the change is not compatible. Of course, structs have other limitations, too, e.g. only single dispatch, so it's not easy to construct an exactly analogous example. But try to remove a slot from a defstruct.
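For example (hedged: the exact behavior is implementation-dependent; SBCL, for instance, typically signals a continuable error on an incompatible redefinition):

(defstruct point x y z)
(make-point :x 1 :y 2 :z 3)

;; Later, "commenting out" the Z slot and recompiling is an incompatible
;; redefinition of a type that is already in use; SBCL complains loudly
;; instead of silently updating existing POINT instances.
(defstruct point x y)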
Just commenting out some source code doesn't change the fact that you evaluated (compiled and loaded) it before. You are manipulating a running system, and the source code is just that: source. If you want to remove a method from the running system, you can use remove-method (see also How to remove a defmethod for a struct). Most Lisp IDEs have ways to do that interactively, e.g. in SLIME via the SLIME inspector.
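For instance, to remove the (integer real) method from the running image by hand (a sketch assuming the definitions from the question):

;; Remove the method specialized on (INTEGER REAL) from the generic function F.
(remove-method #'f
               (find-method #'f
                            '()   ; no qualifiers
                            (list (find-class 'integer) (find-class 'real))))

;; Now (f 1 2.0) falls back to the (INTEGER T) method and returns 1 again.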

In Racket, how do I execute a button's callback function, when the function is in another file?

I have a file GUI.rkt with my GUI code:
#lang racket/gui
(provide (all-defined-out))
(define main (new frame% [label "App"]))
(new button% [parent main] [label "Click"]
     [callback (lambda (button event) (begin-capture))])
I have a main file, proj.rkt:
#lang racket
(require "GUI.rkt")
(define (begin-capture)
  ; do stuff
  ; ...
  )
The compiler gives an error saying that begin-capture is an unbound identifier.
I know it is an unbound identifier because I didn't define the variable in the GUI file. The Racket documentation shows how to set the callback function in the object definition, but not outside of the definition. Ideally, I would like to access functions in the other file from my GUI, so that all my GUI code is in the GUI.rkt file.
If "GUI.rkt" needs identifiers from "proj.rkt" then "proj.rkt" needs to provide them and "GUI.rkt" needs to require "proj.rkt", not the other way around. If the two modules need identifiers from each other then you almost certainly have a design problem.
If you want the GUI part of the program to be something that is required by other parts, then an obvious approach is for it to provide procedures to make things which take arguments which are things like callbacks:
(provide
 ...
 make-main-frame
 ...)

(define (make-main-frame ... capture-callback ...)
  (define main (new frame% [label "App"]))
  (new button% [parent main] [label "Click"]
       [callback (lambda (button event) (capture-callback))])
  ...
  main)
Note, however that I don't know anything about how people conventionally organize programs with GUIs, let alone how they do it in Racket, since I haven't written that sort of code for a very long time. The basic deal, I think, for any program with modules is:
you want the module structure of programs to not have loops in it – even if it's possible for Racket's module system to have loops, their presence in a program would ring alarm bells for me;
where a 'lower' module in the graph (a module which is being required by some 'higher' module in the graph) may need to use functionality from that higher module, it should probably do so by providing procedures which take arguments which the higher module can supply, or equivalent functionality to that.
The above two points are my opinion only: I may be wrong about what the best style is in Racket.
A possible example
Here's one way of implementing a trivial GUI in such a way that the callback can be changed, but the GUI code and the implementation code are isolated.
First of all the gui lives in "gui.rkt" which looks like this:
#lang racket/gui

(provide (contract-out
          (selection-window (->* (string?
                                  (listof string?)
                                  (-> string? any))
                                 (#:initial-choice string?)
                                 (object/c)))))

(define (selection-window name choices selection-callback
                          #:initial-choice (initial-choice (first choices)))
  ;; make & show a selection window for a number of choices.
  ;; selection-callback gets called with the choice, a string.
  (define frame (new frame% [label name]))
  (new choice%
       [parent frame]
       [label "state"]
       [choices choices]
       [selection (index-of choices initial-choice)]
       [callback (λ (self event)
                   (selection-callback (send self get-string-selection)))])
  (send frame show #t)
  frame)
So this provides a single function which constructs the GUI. In real life you'd probably want to provide some additional functionality to manipulate the returned object without users of this module needing to know about it.
The function takes a callback function as an argument, and this is called in a way which might be useful to the implementation, not the GUI (so in particular it's called with the selected string).
"gui.rkt" doesn't provide any way to change the callback. But that's OK: users of the module can do that, for instance like this:
#lang racket

(require "gui.rkt")

(define selection-callback-implementation
  (make-parameter (λ (s)
                    (printf "selected ~A~%" s))))

(selection-window "foo" '("red" "amber" "green")
                  (λ (s) ((selection-callback-implementation) s))
                  #:initial-choice "green")
Now the parameter selection-callback-implementation is essentially the callback, and can be adjusted to change what it is. Of course you can do this without parameters if you want, but parameters are quite a nice approach I think (although, perhaps, unrackety).

How to make SBCL optimize away possible call to FDEFINITION?

Apologies: I don't have sufficient knowledge to rework this as an easy to understand code snippet.
I've been using the SBCL compiler notes as signs to what might be improved but I'm well out of my depth with this —
; compiling (DEFUN EXECUTE-PARALLEL ...)
; file: /home/dunham/8000-benchmarksgame/bench/spectralnorm/spectralnorm.sbcl-8.sbcl
; in: DEFUN EXECUTE-PARALLEL
; (FUNCALL FUNCTION START END)
; --> SB-C::%FUNCALL THE
; ==>
; (SB-KERNEL:%COERCE-CALLABLE-FOR-CALL FUNCTION)
;
; note: unable to
; optimize away possible call to FDEFINITION at runtime
; because:
; FUNCTION is not known to be a function
—
#+sb-thread
(defun execute-parallel (start end function)
  (declare (type int31 start end))
  (let* ((num-threads 4))
    (loop with step = (truncate (- end start) num-threads)
          for index from start below end by step
          collecting (let ((start index)
                           (end (min end (+ index step))))
                       (sb-thread:make-thread
                        (lambda () (funcall function start end))))
            into threads
          finally (mapcar #'sb-thread:join-thread threads))))

#-sb-thread
(defun execute-parallel (start end function)
  (funcall function start end))
(The program is here. Measurements for similar programs are here.)
Is it practical to make SBCL "optimize away possible call to FDEFINITION" or is that compiler note an explanation rather than an opportunity?
The reason for the possible call to fdefinition is that the compiler doesn't know that function is a function: it might be the name of one; in general it may be a function designator rather than a function. To keep the compiler quiet, tell it that it is a function with a suitable type declaration, namely (declare (type function function)): you just need to declare that its type is function.
Rainer is right: there is ε chance that this is ever going to be a performance problem, given you're starting a new thread. In particular it is fairly likely that adding a declaration will make no difference at all:
without a declaration, the call to funcall will get compiled as something like 'check the type of the object: if it is a function, call it; if it is not, call fdefinition on it and call the result';
with a declaration, the overall function looks like 'check that the object is a function, signalling an error if not ... call the function'.
In both cases, if the object is a function, there is one type check and one call: the type check is just in a different place. In the first case the code will still work if the object is merely the name of a function; in the second it won't.
And in both of these cases this is code where you are calling make-thread: if creating a thread is anything like as fast as a function call, even one via fdefinition, I would be really impressed by the threading system! Almost certainly the performance of this function is entirely dominated by the overhead of creating threads.
In real code, avoid optimizations like that - unless really needed
Is it practical to make SBCL "optimize away possible call to FDEFINITION" or is that compiler note an explanation rather than an opportunity?
Generally it does not matter, especially since most Lisp code should not be compiled with optimization qualities (speed 3) (safety 0) (space 0), since these may open the software up to runtime errors and crashes depending on the implementation and the program. Calling something other than a function, or a symbol naming a function, via funcall without safety checks may be dangerous enough to crash the program.
For a specific benchmark one might check via timings if a type declaration and a specialized fdefinition compilation brings any advantage.
a type declaration
A type declaration to make clear that a variable named fn is referencing an object of type function would be:
(declare (type function fn))
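As a minimal sketch (not the benchmark code itself), here is such a declaration applied to a small call wrapper; with it, SBCL should compile the funcall as a direct call and the FDEFINITION note goes away:

(defun call-with-range (start end function)
  (declare (type fixnum start end)
           (type function function)) ; FUNCTION is now known to be a function object
  (funcall function start end))

;; Callers must now pass a function object rather than a symbol naming one:
;; (call-with-range 0 10 (lambda (s e) (- e s))) ; => 10
;; (call-with-range 0 10 '-) ; signals a TYPE-ERROR under normal safety settings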
in the specific benchmark program FDEFINITION won't be called anyway
In the example you have provided, fdefinition will not be called anyway.
(setf foo (lambda (x) x)) ; foo references a function object
(funcall foo 3)
funcall is probably implemented by something like this:
(etypecase f
  ((or cons symbol) (funcall (fdefinition f) ...))
  (function ...))
Since your code passes a function object, there is never the need to call fdefinition.
The optimization benefit then will be that the runtime type dispatch can be removed and the dead code for the cons or symbol case eliminated.
You ask a question about removing an fdefinition call, but your question actually relies on the premise that the SBCL notes are a good way to drive optimisations and improvements. The notes are a good way to spot obvious issues and places where type declarations can help. They do not tell you what actually makes your program slow. The correct way to improve the performance of a program is to 1. think whether there is a faster algorithm, and 2. measure its performance and work out what is slow.
A single fdefinition call will only matter if it happens in a tight loop (i.e. it is not single but very plural).
In this case it happens to start a thread. If you are starting threads in a tight loop then your performance problem comes from starting threads in a tight loop. Don’t do that.
If you aren’t starting threads in a tight loop (looking at your code, it appears you are not), there are bigger fish to fry. Why waste time on an fdefinition that maybe gets called 4 times per call to execute-parallel when you can optimise the inner function instead?

Sidestepping errors by defining vars with SETF

Crew,
I'm one of those types that insists on defining my variables with SETF. I've upgraded to a new machine (and a new version of SBCL) and it's not letting me get away with doing that (naturally, I get the appropriate "==> undefined variable..." error)
My problem here is, I've already written 20,000 lines of code (incorrectly) defining my variables with SETF, and I don't like the prospect of re-writing all of my code to get the interpreter to digest it all.
Is there a way to shut down that error such that interpretation can continue?
Any help is appreciated.
Sincerely,
-Todd
One option is to set up your package environment so that a bare symbol setf refers to my-gross-hack::setf instead of cl:setf. For example, you could set things up like:
(defpackage #:my-gross-hack
  (:use #:cl)
  (:shadow #:setf))

;;; define your own setf here

(defpackage #:my-project
  (:use #:cl)
  (:shadowing-import-from #:my-gross-hack #:setf))

;;; proceed to use setf willy-nilly
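What "your own setf" might look like is left open above; here is one hedged sketch (deliberately crude, which is part of why it is a gross hack):

(in-package #:my-gross-hack)

;; Proclaim every symbol used as a SETF place SPECIAL at macroexpansion time,
;; then defer the real work to CL:SETF. (This also proclaims lexical variables
;; special, which is exactly the kind of damage the package name promises.)
(defmacro setf (&rest pairs)
  (loop for (place nil) on pairs by #'cddr
        when (symbolp place)
          do (proclaim `(special ,place)))
  `(cl:setf ,@pairs))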
You can handle warnings, so that compilation terminates; while doing so, you can collect enough information to fix your code.
Tested with SBCL 1.3.13.
Handle warnings
I have warnings when using setf with undefined variables. The following invokes the debugger, from which I can invoke muffle-warning:
(handler-bind ((warning #'invoke-debugger))
  (compile nil '(lambda () (setf *shame* :on-you))))
The warning is of type SIMPLE-WARNING, which has the following accessors: SIMPLE-CONDITION-FORMAT-CONTROL and SIMPLE-CONDITION-FORMAT-ARGUMENTS.
(defparameter *setf-declarations* nil)

(defun handle-undefined-variables (condition)
  (when (and (typep condition 'simple-warning)
             (string= (simple-condition-format-control condition)
                      "undefined ~(~A~): ~S"))
    (let* ((arguments (simple-condition-format-arguments condition))
           (variable (and (eq (first arguments) :variable)
                          (second arguments))))
      (when variable
        (proclaim `(special ,variable))
        (push variable *setf-declarations*)
        (invoke-restart 'muffle-warning)))))
Use that as a handler:
(handler-bind ((warning #'handle-undefined-variables))
  ;; compilation, quickload, asdf ...
  )
The above handler is not robust: the error message might change in future versions, the code assumes the arguments follow a given pattern, ... But this only needs to work once, since from now on you are going to declare all your variables.
Fix your code
Now that your code compiles, get rid of the ugly. Or at least, add proper declarations.
(with-open-file (out #P"declarations.lisp" :direction :output)
  (let ((*package* (find-package :cl-user)))
    (format out
            "(in-package :cl-user)~%~%~{(defvar ~(~S~))~%~}"
            *setf-declarations*)))
This iterates over all the symbols you collected and writes declarations into a single file.
In my example, it would contain:
(in-package :cl-user)
(defvar *shame*)
Try to cleanly recompile without handling errors, by loading this file early in your compilation process, but after packages are defined. Eventually, you may want to find the time to move those declarations in place of the setf expressions that triggered a warning.

What is the difference between an Idempotent and a Deterministic function?

Are idempotent and deterministic functions both just functions that return the same result given the same inputs?
Or is there a distinction that I'm missing?
(And if there is a distinction, could you please help me understand what it is)
In simpler terms:
Pure deterministic function: The output is based entirely, and only, on the input values and nothing else: there is no other (hidden) input or state that it relies on to generate its output. There are no side-effects or other output.
Impure deterministic function: As with a pure deterministic function, the output is based entirely, and only, on the input values and nothing else: there is no other (hidden) input or state that it relies on to generate its output - however there is other output (side-effects).
Idempotency: The practical definition is that you can safely call the same function multiple times without fear of negative side-effects. More formally: there are no changes of state between subsequent identical calls.
Idempotency does not imply determinacy (as a function can alter state on the first call while being idempotent on subsequent calls), but all pure deterministic functions are inherently idempotent (as there is no internal state to persist between calls). Impure deterministic functions are not necessarily idempotent.
|              | Pure deterministic | Impure deterministic | Pure nondeterministic | Impure nondeterministic | Idempotent |
|--------------|--------------------|----------------------|-----------------------|-------------------------|------------|
| Input        | Only parameter arguments (incl. this) | Only parameter arguments (incl. this) | Parameter arguments and hidden state | Parameter arguments and hidden state | Any |
| Output       | Only return value | Return value or side-effects | Only return value | Return value or side-effects | Any |
| Side-effects | None | Yes | None | Yes | After 1st call: maybe. After 2nd call: none |
| SQL example  | UCASE | CREATE TABLE | GETDATE | DROP TABLE | |
| C# example   | String.IndexOf | | DateTime.Now | | Directory.Create(String) (see Footnote 1) |
Footnote 1 - Directory.Create(String) is idempotent because if the directory already exists it doesn't raise an error; instead it returns a new DirectoryInfo instance pointing to the specified extant filesystem directory (rather than creating the filesystem directory first and then returning a new DirectoryInfo instance pointing to it) - this is just like how Win32's CreateFile can be used to open an existing file.
A temporary note on non-scalar parameters, this, and mutating input arguments:
(I'm currently unsure how instance methods in OOP languages (with their hidden this parameter) should be categorized as pure/impure or deterministic or not - especially when it comes to mutating the target of this - so I've asked the experts on CS.SE to help me come to an answer; once I've got a satisfactory answer there I'll update this answer.)
A note on Exceptions
Many (most?) programming languages today treat thrown exceptions as either a separate "kind" of return (i.e. "return to nearest catch") or as an explicit side-effect (often due to how that language's runtime works). However, as far as this answer is concerned, a given function's ability to throw an exception does not alter its pure/impure/deterministic/non-deterministic label - ditto idempotency (in fact, throwing is often how idempotency is implemented in the first place: a function can avoid causing any side-effects simply by throwing right before it makes those state changes - though alternatively it could simply return instead).
So, for our CS-theoretical purposes, if a given function can throw an exception then you can consider the exception as simply part of that function's output. What does matter is whether the exception is thrown deterministically or not (e.g. List<T>.get(int index) deterministically throws if index < 0).
Note that things are very different for functions that catch exceptions, however.
Determinacy of Pure Functions
For example, UCASE(val) in SQL and String.IndexOf in C#/.NET are both deterministic because the output depends only on the input. Note that in instance methods (such as IndexOf) the instance object (i.e. the hidden this parameter) counts as input, even though it's "hidden":
"foo".IndexOf("o") == 1 // first cal
"foo".IndexOf("o") == 1 // second call
// the third call will also be == 1
Whereas in SQL NOW() or in C#/.NET DateTime.UtcNow is not deterministic because the output changes even though the input remains the same (note that property getters in .NET are equivalent to a method that accepts no parameters besides the implicit this parameter):
DateTime.UtcNow == 2016-10-27 18:10:01 // first call
DateTime.UtcNow == 2016-10-27 18:10:02 // second call
Idempotency
A good example in .NET is the Dispose() method: See Should IDisposable.Dispose() implementations be idempotent?
a Dispose method should be callable multiple times without throwing an exception.
So if a parent component X makes an initial call to foo.Dispose() then it will invoke the disposal operation and X can now consider foo to be disposed. Execution/control then passes to another component Y which also then tries to dispose of foo, after Y calls foo.Dispose() it too can expect foo to be disposed (which it is), even though X already disposed it. This means Y does not need to check to see if foo is already disposed, saving the developer time - and also eliminating bugs where calling Dispose a second time might throw an exception, for example.
Another (general) example is in REST: the RFC for HTTP1.1 states that GET, HEAD, PUT, and DELETE are idempotent, but POST is not ( https://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html )
Methods can also have the property of "idempotence" in that (aside from error or expiration issues) the side-effects of N > 0 identical requests is the same as for a single request. The methods GET, HEAD, PUT and DELETE share this property. Also, the methods OPTIONS and TRACE SHOULD NOT have side effects, and so are inherently idempotent.
So if you use DELETE then:
Client->Server: DELETE /foo/bar
// `foo/bar` is now deleted
Server->Client: 200 OK
Client->Server DELETE /foo/bar
// `foo/bar` is already deleted, so there's nothing to do, but inform the client that foo/bar doesn't exist
Server->Client: 404 Not Found
// the client asks again:
Client->Server: DELETE /foo/bar
// `foo/bar` is already deleted, so there's nothing to do, but inform the client that foo/bar doesn't exist
Server->Client: 404 Not Found
So you see in the above example that DELETE is idempotent in that the state of the server did not change between the last two DELETE requests, but it is not deterministic because the server returned 200 for the first request but 404 for the second request.
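The same distinction can be sketched in a couple of lines of Lisp (illustrative only): removing a key from a hash table is idempotent in this practical sense, but its result depends on hidden state, so it is not deterministic.

(defvar *table* (make-hash-table))
(setf (gethash :foo *table*) 42)

(remhash :foo *table*) ; => T   (the entry existed and was removed)
(remhash :foo *table*) ; => NIL (nothing left to remove: same state afterwards, different result)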
A deterministic function is just a function in the mathematical sense. Given the same input, you always get the same output. On the other hand, an idempotent function is a function which satisfies the identity
f(f(x)) = f(x)
As a simple example: if UCase() is a function that converts a string to an upper-case string, then clearly UCase(UCase(s)) = UCase(s).
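The same identity in Lisp terms, for example:

(string-upcase "Hello")                  ; => "HELLO"
(string-upcase (string-upcase "Hello"))  ; => "HELLO", i.e. f(f(x)) = f(x)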
Idempotent functions are a subset of all functions.
A deterministic function will return the same result for the same inputs, regardless of how many times you call it.
An idempotent function may NOT return the same result (it will return the result in the same form, but the value could be different; see the HTTP example below). It only guarantees that it will have no side effects. In other words, it will not change anything.
For example, the GET verb is meant to be idempotent in the HTTP protocol. If you call "~/employees/1" it will return the info for the employee with ID 1 in a specific format. It should never change anything but simply return the employee information. If you call it 10, 100 or more times, the returned format will always be the same. However, by no means can it be deterministic. Maybe if you call it the second time, the employee info has changed, or perhaps the employee no longer even exists. But it should never have side effects or return the result in a different format.
My Opinion
Idempotent is a weird word, but knowing the origin can be very helpful: idem means same and potent means power. In other words, it means having the same power, which clearly doesn't mean no side effects, so I'm not sure where that usage comes from. A classic example of "There are only two hard things in computer science: cache invalidation and naming things." Why couldn't they just use read-only? Oh wait, they wanted to sound extra smart, perhaps? Perhaps like cyclomatic complexity?