Condition inside a loop in Smalltalk

I'm trying to draw a chain of symbols using a loop. I'm doing it this way, but it always draws x circles...
1 to: x do: [
    (self lastWasSquare)
        ifTrue: [ self drawCircle ]
        ifFalse: [ self drawSquare ]
]
I also tried:
x timesRepeat: [
    (self lastWasSquare)
        ifTrue: [ self drawCircle ]
        ifFalse: [ self drawSquare ]
].
But it still draws circles. I also tried to do it by adding a :n | block variable to the loop and asking whether it is even, but again, it always executes the circle code.
What am I doing wrong?
Thank you

It looks like your call to self lastWasSquare keeps returning true so that your #ifTrue:ifFalse: keeps going into the block that calls self drawCircle. You can either:
Make sure that your drawCircle and drawSquare methods properly set your lastWasSquare instance variable (at least I'm assuming that's just a getter method).
Move the decision of whether the last item drawn was a circle or a square into a temporary variable.
The first way is better if you need the lastWasSquare value anywhere outside the method you're working on. The second way is better if it's the only place where you're drawing circles or squares (keep the scope as small as it needs to be) and could look something like this:
| lastWasSquare |
lastWasSquare := false.
x timesRepeat: [
    lastWasSquare
        ifTrue: [ self drawCircle ]
        ifFalse: [ self drawSquare ].
    lastWasSquare := lastWasSquare not
].
So you're continually toggling the lastWasSquare between true and false, and it will draw alternating shapes. (I'm assuming that's what you're trying to achieve when you say "draw a chain of symbols"...)
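For the first option, a minimal sketch (assuming lastWasSquare is an instance variable of your class and that drawCircle and drawSquare are your own drawing methods) could look like:
drawCircle
    "... your existing circle drawing code ..."
    lastWasSquare := false

drawSquare
    "... your existing square drawing code ..."
    lastWasSquare := true

lastWasSquare
    "getter; default to false so that the first shape drawn is a square"
    ^ lastWasSquare ifNil: [ false ]
With that in place your original 1 to: x do: loop alternates shapes as well, because each drawing method records what it just drew.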
If neither of these apply, then, as Uko said in the comments, you'd need to post more of your code in order for us to be able to help you.

Related

Rebol: Dynamic binding of block words

In Rebol, there are words like foreach that allow "block parametrization" over a given word and a series, e.g., foreach w [1 2 3] [print w]. Since I find that syntax very convenient (as opposed to passing func blocks), I'd like to use it for my own words that operate on lazy lists, e.g., map/stream x s [... x ... ].
What is that syntax idiom called? How is it properly implemented?
I was searching the docs, but I could not find a straight answer, so I tried to implement foreach on my own. Basically, my implementation comes in two parts. The first part is a function that binds a specific word in a block to a given value and yields a new block with the bound words.
bind-var: funct [block word value] [
    qw: load rejoin ["'" word]
    do compose [
        set (:qw) value
        bind [(block)] (:qw)
        [(block)] ; This shouldn't work? see Question 2
    ]
]
Using that, I implemented foreach as follows:
my-foreach: func ['word s block] [
    if empty? block [return none]
    until [
        do bind-var block word first s
        s: next s
        tail? s
    ]
]
I find that approach quite clumsy (and it probably is), so I was wondering how the problem can be solved more elegantly. Regardless, after coming up with my contraption, I am left with two questions:
1. In bind-var, I had to do some wrapping in bind [(block)] (:qw) because (block) would "dissolve". Why?
2. Because (?) of 1, the bind operation is performed on a new block (created by the [(block)] expression), not the original one passed to my-foreach, with separate bindings, so I have to operate on that. By mistake, I added [(block)] and it still works. But why?
Great question. :-) Writing your own custom loop constructs in Rebol2 and R3-Alpha (and now, history repeating with Red) has many unanswered problems. These kinds of problems were known to the Rebol3 developers and considered blocking bugs.
(The reason that Ren-C was started was to address such concerns. Progress has been made in several areas, though at time of writing many outstanding design problems remain. I'll try to just answer your questions under the historical assumptions, however.)
In bind-var, I had to do some wrapping in bind [(block)] (:qw) because (block) would "dissolve". Why?
That's how COMPOSE works by default...and it's often the preferred behavior. If you don't want that, use COMPOSE/ONLY and blocks will not be spliced, but inserted as-is.
qw: load rejoin ["'" word]
You can convert WORD! to LIT-WORD! via to lit-word! word. You can also shift the quoting responsibility into your boilerplate, e.g. set quote (word) value, and avoid qw altogether.
Avoiding LOAD is also usually preferable, because it always brings things into the user context by default--so it loses the binding of the original word. Doing a TO conversion will preserve the binding of the original WORD! in the generated LIT-WORD!.
do compose [
    set (:qw) value
    bind [(block)] (:qw)
    [(block)] ; This shouldn't work? see Question 2
]
Presumably you meant COMPOSE/DEEP here, otherwise this won't work at all... with regular COMPOSE the embedded PAREN!s (cough, GROUP!s) for [(block)] will not be substituted.
By mistake, I added [(block)] and it still works. But why?
If you do a test like my-foreach x [1] [print x probe bind? 'x] the output of the bind? will show you that it is bound into the "global" user context.
Fundamentally, you don't have any MAKE OBJECT! or USE to create a new context to bind the body into. Hence all you could potentially be doing here would be stripping off any existing bindings in the code for x and making sure they are into the user context.
But originally you did have a USE, which you later edited out. That was more on the right track:
bind-var: func [block word value /local qw] [
    qw: load rejoin ["'" word]
    do compose/deep [
        use [(qw)] [
            set (:qw) value
            bind [(block)] (:qw)
            [(block)] ; This shouldn't work? see Question 2
        ]
    ]
]
You're right to suspect something is askew with how you're binding. But the reason this works is that your BIND is only redoing work that USE itself does. USE already deep-walks the body to make sure the word bindings are adjusted. So you could omit the bind entirely:
do compose/deep [
    use [(qw)] [
        set (:qw) value
        [(block)]
    ]
]
the bind operation is performed on a new block (created by the [(block)] expression), not the original one passed to my-foreach, with separate bindings
Let's adjust your code by taking out the deep-walking USE to demonstrate the problem you thought you had. We'll use a simple MAKE OBJECT! instead:
bind-var: func [block word value /local obj qw] [
    do compose/deep [
        obj: make object! [(to-set-word word) none]
        qw: bind (to-lit-word word) obj
        set :qw value
        bind [(block)] :qw
        [(block)] ; This shouldn't work? see Question 2
    ]
]
Now if you try my-foreach x [1 2 3] [print x] you'll get what you suspected... "x has no value" (assuming you don't have some latent global definition of x it picks up, which would just print that same latent value 3 times).
But to make you sufficiently sorry you asked :-), I'll mention that my-foreach x [1 2 3] [loop 1 [print x]] actually works. That's because, while you were right to say a bind in the past shouldn't affect a new block, this COMPOSE only creates one new BLOCK!. The topmost level is new; any "deeper" embedded blocks referenced in the source material are aliases of the original material:
>> original: [outer [inner]]
== [outer [inner]]
>> composed: compose [<a> (original) <b>]
== [<a> outer [inner] <b>]
>> append original/2 "mutation"
== [inner "mutation"]
>> composed
== [<a> outer [inner "mutation"] <b>]
Hence if you do a mutating BIND on the composed result, it can deeply affect some of your source.
until [
    do bind-var block word first s
    s: next s
    tail? s
]
On a general note of efficiency, you're running COMPOSE and BIND operations on each iteration of your loop. No matter how creative new solutions to these kinds of problems get (there's a LOT of new tech in Ren-C affecting your kind of problem), you're still probably going to want to do it only once and reuse it on the iterations.

Squeak Smalltalk: Game loop

In many languages you can do something like the following:
while true:
    handle events like keyboard input
    update game world
    draw screen
    (optional: delay execution)
While this is far from optimal, it should suffice for simple games.
How do you do this in Squeak Smalltalk?
I can read keyboard input and react to it as described on wiki.squeak.org. But if I try to execute something like
1 to: 10 do: [ :i | game updateAndDraw ]
all the events are only ever handled after the loop has executed.
Morphic already provides that main loop. It's in MorphicProject class>>spawnNewProcess:
uiProcess := [
    [ world doOneCycle. Processor yield ] repeat.
] newProcess ...
And if you dig into doOneCycle you will find that it:
(optionally) does a delay (interCyclePause:)
checks for screen resize
processes events
processes step methods
re-displays the world
Your code should hook into these phases by adding mouse/keyboard event handlers, step methods for animation, and draw methods for redisplaying. All of these should be methods in your own game morph. You can find examples throughout the system.
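As a rough sketch of how such a game morph could hook in (GameMorph, its game instance variable, and game's update, handleKey:, and drawOn: methods are made-up names here):
GameMorph >> step
    "called by Morphic every stepTime milliseconds"
    game update.
    self changed    "request a redraw on the next cycle"

GameMorph >> stepTime
    ^ 50    "milliseconds between steps, roughly 20 updates per second"

GameMorph >> handlesMouseDown: evt
    ^ true

GameMorph >> mouseDown: evt
    evt hand newKeyboardFocus: self    "take keyboard focus when clicked"

GameMorph >> handlesKeyboard: evt
    ^ true

GameMorph >> keyStroke: evt
    game handleKey: evt keyCharacter

GameMorph >> drawOn: aCanvas
    game drawOn: aCanvas
Depending on the image you may still have to tell the morph to start stepping (for example by sending it startStepping once it is open in the world); the Morphic cycle then calls step, delivers the events, and redraws the morph without any explicit loop on your side.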
To perform an action a fixed number of times:
10 timesRepeat: [game updateAndDraw]
To use while semantics:
i := 5.
[i > 0] whileTrue: [
    i printNl.
    i := i - 1.
]
To create a perpetual loop using while semantics:
[true] whileTrue: [something do]
You should be able to take advantage of the Morphic event loop by using the Object >> #when:send:to: message.

Does Pharo provide tail-call optimisation?

The implementation of Integer>>#factorial in Pharo is:
factorial
    "Answer the factorial of the receiver."
    self = 0 ifTrue: [^ 1].
    self > 0 ifTrue: [^ self * (self - 1) factorial].
    self error: 'Not valid for negative integers'
This is a tail-recursive definition. However, I can evaluate 10000 factorial without error in the workspace.
Does Pharo perform tail-call optimisation in any circumstances, is it doing some other optimisation, or is it just using a really deep stack?
It's a really deep stack. Or rather, no stack at all.
Pharo is a descendant of Squeak, which inherits its execution semantics directly from Smalltalk-80. There is no linear fixed-size stack; instead, every method call creates a new MethodContext object which provides the space for arguments and temporary variables in each recursive call. It also points to the sending context (for later return), creating a linked list of contexts (which is displayed just like a stack in the debugger). Context objects are allocated on the heap just like any other object. That means call chains can be very deep, since all available memory can be used. You can inspect thisContext to see the currently active method context.
Allocating all these context objects is expensive. For speed, modern VMs (such as the Cog VM used in Pharo) do actually use a stack internally, which consists of linked pages, so it can be arbitrarily large as well. The context objects are only created on demand (e.g. while debugging) and refer to the hidden stack frames and vice versa. This machinery behind the scenes is quite complex, but fortunately hidden from the Smalltalk programmer.
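As a quick illustration (just a snippet to evaluate in a workspace or playground), you can follow the chain of sender contexts yourself:
| ctx depth |
ctx := thisContext.
depth := 0.
[ ctx notNil ] whileTrue: [
    depth := depth + 1.
    ctx := ctx sender ].
depth    "number of contexts currently linked beneath the evaluator"
Each element of that chain is a context object created on demand, which is why a deeply recursive factorial simply keeps extending the chain instead of overflowing a fixed-size stack.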
There is no mystery in the execution model of Pharo. The recursive fragment
^ self * (self - 1) factorial
that happens inside the second ifTrue: compiles to the following sequence of bytecodes:
39 <70> self ; receiver of outer message *
40 <70> self ; receiver of inner message -
41 <76> pushConstant: 1 ; argument of self - 1
42 <B1> send: - ; subtract
43 <D0> send: factorial ; send factorial (nothing special here!)
44 <B8> send: * ; multiply
45 <7C> returnTop ; return
Note that in line 43 nothing special happens. The code just sends factorial in the same way it would, had the selector been any other. In particular we can see that there is no special manipulation of the stack here.
This doesn't mean that there cannot be optimizations in the underlying native code, but that is a different discussion. It is the execution model that matters to the programmer, because any optimization underneath the bytecodes is meant to support this model at the conceptual level.
UPDATE
Interestingly, the non-recursive version
factorial2
    | f |
    f := 1.
    2 to: self do: [:i | f := f * i].
    ^f
is a little bit slower than the recursive one (in Pharo). The reason must be that the overhead associated with incrementing i is a little greater than that of the recursive send mechanism.
Here are the expressions I tried:
[25000 factorial] timeToRun
[25000 factorial2] timeToRun
IMHO, the initial code, which is presumed to make a tail-recursive call to factorial,
factorial
    "Answer the factorial of the receiver."
    self = 0 ifTrue: [^ 1].
    self > 0 ifTrue: [^ self * (self - 1) factorial].
    self error: 'Not valid for negative integers'
isn't actually tail recursive. The bytecode reported in Leandro's reply proves it:
39 <70> self ; receiver of outer message *
40 <70> self ; receiver of inner message -
41 <76> pushConstant: 1 ; argument of self - 1
42 <B1> send: - ; subtract
43 <D0> send: factorial ; send factorial (nothing special here!)
44 <B8> send: * ; multiply
45 <7C> returnTop ; return
Right before the returnTop there is a send of * instead of factorial, so the recursive send is not in tail position. I would have written a method using an accumulator, as in:
factorial: acc
    ^ self = 0
        ifTrue: [ acc ]
        ifFalse: [ self - 1 factorial: acc * self ]
which produces bytecode where the send of factorial: is immediately followed by returnTop.
Btw,
n := 10000.
[n slowFactorial] timeToRun.
[n factorial] timeToRun.
[n factorial: 1] timeToRun.
The first and second ones each take 29 milliseconds, while the last one takes 595 milliseconds on a fresh Pharo 9 image. Why is it so slow?
No, Pharo and its VM do not optimize recursive tail calls.
It is apparent from running tests on a Pharo 9 image, and a master's thesis on the subject confirms it.
As of today, Pharo ships with two factorial methods: one (Integer >> factorial) uses a 2-partition algorithm and is the most efficient; the other looks like this:
Integer >> slowFactorial [
    self > 0
        ifTrue: [ ^ self * (self - 1) factorial ].
    self = 0
        ifTrue: [ ^ 1 ].
    self error: 'Not valid for negative integers'
]
It has an outer recursive structure, but actually still calls the non-recursive factorial method. That probably explains why Massimo Nocentini got nearly identical results when he timed them.
If we try this modified version:
Integer >> recursiveFactorial [
    self > 0
        ifTrue: [ ^ self * (self - 1) recursiveFactorial ].
    self = 0
        ifTrue: [ ^ 1 ].
    self error: 'Not valid for negative integers'
]
we now have a real recursive method, but, as Massimo pointed out, it's still not tail recursive.
This is tail recursive:
tailRecursiveFactorial: acc
    ^ self = 0
        ifTrue: [ acc ]
        ifFalse: [ self - 1 tailRecursiveFactorial: acc * self ]
Without tail call optimization this version shows by far the worst performance, even compared to recursiveFactorial. I think that's because it burdens the stack with all the redundant intermediate results.

How do I animate a morph without using step?

I want to animate a dice being rolled, but don't want to use the Morph>>step methods because I want more control over when the roll finishes. I know that I can use Delay>>wait within a forked block to see my animation, but then how should I call this method from other methods to ensure I get the final numberRolled?
Here's my roll method:
roll
    | n t |
    numberRolled := nil.
    [
        t := 10 + (10 atRandom).
        t timesRepeat: [
            n := 6 atRandom.
            self showNumber: n.
            (Delay forSeconds: 0.1) wait.
        ].
        numberRolled := n.
    ] fork.
So if I call this from a method like guessLower the roll method returns instantly because the real work is completed in the forked process.
guessLower
    previousNumberRolled := numberRolled.
    self roll.
    "this next line is called before the dice has finished rolling"
    self checkWin: (numberRolled < previousNumberRolled)
My current solution is to modify the roll method to take a block that is executed after the rolling has finished, e.g. rollAndThen: aBlock, but is there a more elegant / simpler solution?
In Morphic it is a Really Bad Idea to use Delays and explicit looping.
But it is really simple to make the step method do what you want: inside it, you simply check whether it should continue rolling or not. Once it is done, you do self stopStepping. self checkWin: ....
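A minimal sketch of that approach, assuming the dice is a Morph and rollsLeft is an extra instance variable (the variable name is made up; showNumber: and checkWin: are the methods from the question):
roll
    numberRolled := nil.
    rollsLeft := 10 + 10 atRandom.
    self startStepping

stepTime
    ^ 100    "milliseconds between frames, like the original 0.1 second delay"

step
    | n |
    n := 6 atRandom.
    self showNumber: n.
    rollsLeft := rollsLeft - 1.
    rollsLeft = 0 ifTrue: [
        numberRolled := n.
        self stopStepping.
        self checkWin: (numberRolled < previousNumberRolled) ]
guessLower then only records previousNumberRolled and sends roll; the comparison happens inside step once the roll has actually finished, instead of racing a forked process.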

Seaside - clear form on re-render?

Is there a way to reset all of the text inputs for a page when it's re-rendered? The page keeps loading with the text from the previous rendering still in the inputs.
That depends very much on the way you render those inputs. If you use Seaside components, then you might implement your own logic within the callback:
html textInput
    callback: [ :value | self setOrResetMyInputWith: value ];
    with: 'my input'.
#setOrResetMyInputWith: might then look like this:
setOrResetMyInputWith: aString
    myInputValue := self allCriteriaMet
        ifTrue: [ aString ]
        ifFalse: [ nil ]
Keep in mind that you cannot predict the order in which the callbacks will be evaluated. Therefore, it might be easier to do the check before rendering:
renderContentOn: html
    self checkMyInputs.
    "continue rendering process"
    ...
You could then simply reset your instance variables if the criteria are not satisfied.
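For example, checkMyInputs could be as simple as the sketch below (allCriteriaMet and myInputValue stand in for whatever your component actually uses); the input then has to be rendered from that instance variable so that it picks up the reset value:
checkMyInputs
    "forget the previously submitted value when the criteria are not met"
    self allCriteriaMet
        ifFalse: [ myInputValue := nil ]

renderContentOn: html
    self checkMyInputs.
    html textInput
        value: myInputValue;
        callback: [ :value | myInputValue := value ]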
That's for components. If you use Magritte, then Magritte's verification mechanism should take care of this. All you need to do is to enable verification in the respective descriptions.