understanding a piece of code with ``boolean`` and ``switch`` - processing.js

I was looking at some examples of keyboard interaction and stumbled upon this code, which I found interesting. But I'm having trouble understanding a certain part of it (it's marked down below). I don't get how the whole ``boolean`` declaration, ``switch`` and ``case`` business works; I tried to look in the reference but I'm still stuck. Could someone explain in a simple manner how these work?
float x = 300;
float y = 300;
float speed = 5;
boolean isLeft, isRight, isUp, isDown;
int i = 0;

void keyPressed() {
  setMove(keyCode, true);
  if (isLeft) {
    x -= speed;
  }
  if (isRight) {
    x += speed;
  }
}
void keyReleased() {
  setMove(keyCode, false);
}

boolean setMove(int k, boolean b) { // <<<--- From this part down
  switch (k) {
    case UP:
      return isUp = b;
    case DOWN:
      return isDown = b;
    case LEFT:
      return isLeft = b;
    case RIGHT:
      return isRight = b;
    default:
      return b;
  }
}

Questions like these are best answered by the reference:
Works like an if else structure, but switch() is more convenient when you need to select between three or more alternatives. Program control jumps to the case with the same value as the expression. All remaining statements in the switch are executed unless redirected by a break. Only primitive datatypes which can convert to an integer (byte, char, and int) may be used as the expression parameter. The default is optional.
The rest of the code is setting the corresponding variable to whatever value you passed in as the b parameter, and then returning it. (The ``boolean isLeft, isRight, isUp, isDown;`` line at the top simply declares four boolean variables in one statement; they all start out as false.)
You should get into the habit of debugging your code. Add print statements to figure out exactly what the code is doing.
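If it helps to see those pieces in isolation, here is a minimal sketch (not from the original question; the println calls are just the kind of debugging suggested above) that you can run and watch in the console:

boolean isLeft, isRight, isUp, isDown;

void setup() {
  // Simulate a few calls to see what the switch does:
  println(setMove(LEFT, true));   // prints true;  isLeft is now true
  println(setMove(LEFT, false));  // prints false; isLeft is now false
  println(setMove(SHIFT, true));  // no case matches, so default runs and no flag changes
}

boolean setMove(int k, boolean b) {
  switch (k) {            // jump to the case whose value equals k
    case UP:
      return isUp = b;    // assign b to isUp, then return that same value
    case DOWN:
      return isDown = b;
    case LEFT:
      return isLeft = b;
    case RIGHT:
      return isRight = b;
    default:              // any other key: change nothing, just return b
      return b;
  }
}

Each case ends in a return, so no break statements are needed; the function exits as soon as a case matches.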

Related

Creating single use intermediate variables

I've read somewhere that a variable should only be introduced into the code if it is reused. But when I write my code to make the logic transparent, I sometimes create intermediate variables (with names reflecting what they contain) that are used only once.
How incorrect is this approach?
PS:
I want to do it right.
It is important to note that most of the time clarity takes precedence over re-usability or brevity. This is one of the basic principles of clean code. Most modern compilers optimize code anyway, so creating new variables need not be a performance concern at all.
It is perfectly fine to create a new variable if it would add clarity to your code. Make sure to give it a meaningful name. Consider the following function:
public static boolean isLeapYear(final int yyyy) {
    if ((yyyy % 4) != 0) {
        return false;
    }
    else if ((yyyy % 400) == 0) {
        return true;
    }
    else if ((yyyy % 100) == 0) {
        return false;
    }
    else {
        return true;
    }
}
Even though the boolean expressions are used only once, they may confuse the reader of the code. We can rewrite it as follows
public static boolean isLeapYear(int year) {
    boolean fourth = year % 4 == 0;
    boolean hundredth = year % 100 == 0;
    boolean fourHundredth = year % 400 == 0;
    return fourth && (!hundredth || fourHundredth);
}
These boolean variables add much more clarity to the code.
This example is from the Clean Code book by Robert C. Martin.

Sprite Smooth movement and facing position according to movement

I'm trying to build keyboard-controlled movement using some sprites and I got stuck on two problems.
1) The character's movement doesn't match the animation: it only starts moving after a second or so, even though it's already being animated. What I really want is for it to move without the "initial acceleration feeling" that this delay causes.
2) I can't think of a way to make the character face the direction it should be facing when the key is released. I'll post the code here, but since it needs images to work correctly and isn't that small, I've made a sketch available at this link if you want to check it out: https://www.openprocessing.org/sketch/439572
PImage[] reverseRun = new PImage [16];
PImage[] zeroArray = new PImage [16];

void setup() {
  size(800, 600);
  // Right Facing
  for (int i = 0; i < zeroArray.length; i++) {
    zeroArray[i] = loadImage(i + ".png");
    zeroArray[i].resize(155, 155);
  }
  // Left Facing
  for (int z = 0; z < reverseRun.length; z++) {
    reverseRun[z] = loadImage("mirror" + z + ".png");
    reverseRun[z].resize(155, 155);
  }
}
void draw() {
  frameRate(15);
  background(255);
  imageMode(CENTER);
  if (x > width + 10) {
    x = 0;
  } else if (x < -10) {
    x = width;
  }
  if (i >= zeroArray.length) {
    i = 3; // looping to generate constant motion
  }
  if (z >= reverseRun.length) {
    z = 3; // looping to generate constant motion
  }
  if (isRight) {
    image(zeroArray[i], x, 300);
    i++;
  } // going through the images in the array
  else if (isLeft) {
    image(reverseRun[z], x, 300);
    z++;
  } // going through the images in the array
  else if (!isRight) {
    image(zeroArray[i], x, 300);
    i = 0;
  } // "stopped" sprite
}
// movement
float x = 300;
float y = 300;
float i = 0;
float z = 0;
float speed = 25;
boolean isLeft, isRight, isUp, isDown;

void keyPressed() {
  setMove(keyCode, true);
  if (isLeft) {
    x -= speed;
  }
  if (isRight) {
    x += speed;
  }
}

void keyReleased() {
  setMove(keyCode, false);
}

boolean setMove(int k, boolean b) {
  switch (k) {
    case UP:
      return isUp = b;
    case DOWN:
      return isDown = b;
    case LEFT:
      return isLeft = b;
    case RIGHT:
      return isRight = b;
    default:
      return b;
  }
}
The movement problem is caused by your operating system setting a delay between key presses. Try this out by going to a text editor and holding down a key. You'll notice that a character shows up immediately, followed by a delay, followed by the character repeating until you release the key.
That delay is also happening between calls to the keyPressed() function. And since you're moving the character (by modifying the x variable) inside the keyPressed() function, you're seeing a delay in the movement.
The solution to this problem is to check which key is pressed instead of relying solely on the keyPressed() function. You could use the keyCode variable inside the draw() function, or you could keep track of which key is pressed using a set of boolean variables.
Note that you're actually already doing that with the isLeft and isRight variables. But you're only checking them in the keyPressed() function, which defeats the purpose of them because of the problem I outlined above.
In other words, move this block from the keyPressed() function so it's inside the draw() function instead:
if (isLeft) {
  x -= speed;
}
if (isRight) {
  x += speed;
}
As for knowing which way to face when the character is not moving, you could do that using another boolean value that keeps track of which direction you're facing.
Side note: you should really try to properly indent your code, as right now it's pretty hard to read.
Shameless self-promotion: I wrote a tutorial on user input in Processing available here.
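Putting those two suggestions together, here is a stripped-down sketch (no sprite images, just a rectangle standing in for the character, and a hypothetical facingRight flag) showing movement handled in draw() and the facing direction remembered after the key is released:

float x = 300;
float speed = 5;
boolean isLeft, isRight;
boolean facingRight = true;  // remembers the last direction of movement

void setup() {
  size(800, 600);
}

void draw() {
  background(255);

  // Movement is applied every frame here, not in keyPressed(),
  // so there is no key-repeat delay.
  if (isLeft) {
    x -= speed;
    facingRight = false;
  }
  if (isRight) {
    x += speed;
    facingRight = true;
  }

  // Stand-in for the sprite: a rectangle with a dot marking the facing side.
  rectMode(CENTER);
  fill(200);
  rect(x, 300, 40, 80);
  fill(0);
  ellipse(x + (facingRight ? 15 : -15), 270, 10, 10);
}

void keyPressed() {
  setMove(keyCode, true);
}

void keyReleased() {
  setMove(keyCode, false);
}

boolean setMove(int k, boolean b) {
  switch (k) {
    case LEFT:
      return isLeft = b;
    case RIGHT:
      return isRight = b;
    default:
      return b;
  }
}

In your own sketch you would use the facingRight flag to pick between zeroArray and reverseRun when the character is standing still.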

Use multiple return statements

Is there a considerable difference in optimization between these two pieces of code (in Java and/or C++; I guess it's roughly the same in every language)? Or is it just a question of code readability?
int foo(...) {
    if (cond) {
        if (otherCondA)
            return 1;
        if (otherCondB)
            return 2;
        return 3;
    }
    int temp = /* context and/or param-dependent */;
    if (otherCondA)
        return 4 * temp;
    if (otherCondB)
        return 4 / temp;
    return 4 % temp;
}
and
int foo(...) {
    int value = 0;
    if (cond) {
        if (otherCondA)
            value = 1;
        else if (otherCondB)
            value = 2;
        else
            value = 3;
    }
    else {
        int temp = /* context and/or param-dependent */;
        if (otherCondA)
            value = 4 * temp;
        else if (otherCondB)
            value = 4 / temp;
        else
            value = 4 % temp;
    }
    return value;
}
The first one is shorter, avoids several levels of nested else statements, and saves a variable (or at least seems to), but I'm not sure it really changes anything...
After looking deeper into the different assembly listings generated by GCC, here are the results:
The multiple-return version is more efficient in a "normal" (unoptimized) compilation, but with the -O flags the balance changes:
The more you optimize the code, the less the first approach is worth. It makes the code harder to optimize, so use it carefully. As said in the comments, it's very powerful when used at the front of the function for testing preconditions, but in the middle of a function it's a nightmare for the compiler.
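For example, the precondition case mentioned above usually looks something like this (a made-up Java snippet, not from the question):

// Guard clauses return early for the exceptional cases,
// so the main computation stays flat and has a single exit.
static double average(int[] values) {
    if (values == null) {
        throw new IllegalArgumentException("values must not be null");
    }
    if (values.length == 0) {
        return 0.0;  // early return: nothing to average
    }

    long sum = 0;
    for (int v : values) {
        sum += v;
    }
    return (double) sum / values.length;
}

Here the early returns sit at the top of the function, which is exactly the situation where they help readability rather than hurt optimization.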
Of course multiple return statements are acceptable:
they let you exit the function as soon as its work is done.

Solving math equations from a text field

I am trying to increase the performance of the update() function below. The numbers inside the mathNumber array will come from an NSString created from a text field. Even though I'm using five numbers here, I would like it to handle any amount the user enters into the text field. What are some ways I could speed up the code in update() using C and/or Objective-C? I also would like it to work on both the Mac and the iPhone.
typedef struct {
    float *left;
    float *right;
    float *equals;
    int operation;
} MathVariable;

#define MULTIPLY 1
#define DIVIDE 2
#define ADD 3
#define SUBTRACT 4

MathVariable *mathVariable;
float *mathPointer;
float newNumber;

void init();
void update();
float solution(float *left, float *right, int *operation);

void init()
{
    float *mathNumber = (float *) malloc(sizeof(float) * 9);
    mathNumber[0] = -1.0;
    mathNumber[1] = -2.0;
    mathNumber[2] = 3.0;
    mathNumber[3] = 4.0;
    mathNumber[4] = 5.0;
    mathNumber[5] = 0.0;
    mathNumber[6] = 0.0;
    mathNumber[7] = 0.0;
    mathNumber[8] = 0.0;
    mathVariable = (MathVariable *) malloc(sizeof(MathVariable) * 4);
    mathVariable[0].equals = &mathPointer[5];
    mathVariable[0].left = &mathPointer[2];
    mathVariable[0].operation = MULTIPLY;
    mathVariable[0].right = &mathPointer[3];
    mathVariable[1].equals = &mathPointer[6];
    mathVariable[1].left = &mathPointer[1];
    mathVariable[1].operation = SUBTRACT;
    mathVariable[1].right = &mathPointer[5];
    mathVariable[2].equals = &mathPointer[7];
    mathVariable[2].left = &mathPointer[0];
    mathVariable[2].operation = ADD;
    mathVariable[2].right = &mathPointer[6];
    mathVariable[3].equals = &mathPointer[8];
    mathVariable[3].left = &mathPointer[7];
    mathVariable[3].operation = MULTIPLY;
    mathVariable[3].right = &mathPointer[4];
    return self;
}
// This is updated with a timer
void update()
{
    int i;
    for (i = 0; i < 4; i++)
    {
        *mathVariable[i].equals = solution(mathVariable[i].left, mathVariable[i].right, &mathVariable[i].operation);
    }
    // Below is the equivalent of: newNumber = (-1.0 + (-2.0 - 3.0 * 4.0)) * 5.0;
    // newNumber should equal -75
    newNumber = mathPointer[8];
}

float solution(float *left, float *right, int *operation)
{
    if ((*operation) == MULTIPLY)
    {
        return (*left) * (*right);
    }
    else if ((*operation) == DIVIDE)
    {
        return (*left) / (*right);
    }
    else if ((*operation) == ADD)
    {
        return (*left) + (*right);
    }
    else if ((*operation) == SUBTRACT)
    {
        return (*left) - (*right);
    }
    else
    {
        return 0.0;
    }
}
EDIT:
I first must say thank you for all of your kind posts; this is the first forum where people haven't told me I'm a complete idiot. Sorry about the return self; I didn't realize this was an Objective-C forum too (which is why I hastily used C). I have my own parser, which is slow, but I'm not concerned with its speed. All I want is to speed up the update() function, since it slows everything down and 90% of the objects use it. Also, I'm trying to get it to run faster on iOS devices, since I can't compile anything entered in the text boxes there. If you have any other advice on making update() faster, I thank you.
Thanks again,
Jonathan
EDIT 2:
Well I got it to run faster by changing it from:
int i;
for (i = 0; i < 4; i++)
{
    *mathVariable[i].equals = solution(*mathVariable[i].left, *mathVariable[i].right, mathVariable[i].operation);
}
To:
*mathVariable[0].equals = solution(*mathVariable[0].left, *mathVariable[0].right, mathVariable[0].operation);
*mathVariable[1].equals = solution(*mathVariable[1].left, *mathVariable[1].right, mathVariable[1].operation);
*mathVariable[2].equals = solution(*mathVariable[2].left, *mathVariable[2].right, mathVariable[2].operation);
*mathVariable[3].equals = solution(*mathVariable[3].left, *mathVariable[3].right, mathVariable[3].operation);
Is there any other way to make the loop run as fast as the manually unrolled version above?
Your code is a mix of styles and contains some unwarranted uses of pointers (e.g. when passing operation to solution). It is unclear why you are passing the floats by reference, but maybe you intend for them to be changed and the expression reevaluated?
Below are some changes, both to tidy the code and, incidentally, to speed it up - the cost of any of this is not high, and you may be guilty of premature optimization. As @Dave commented, there are libraries to do the parsing for you, but if you're targeting simple math expressions, an operator-precedence, stack-based parser/evaluator is easy enough to code.
Suggestion 1: use enum - cleaner:
typedef enum { MULTIPLY, DIVIDE, ADD, SUBTRACT } BinaryOp;

typedef struct
{
    float *left;
    float *right;
    float *equals;
    BinaryOp operation;
} MathVariable;
Suggestion 2: use switch - cleaner and probably faster as well:
float solution(float left, float right, int operation)
{
    switch (operation)
    {
        case MULTIPLY:
            return left * right;
        case DIVIDE:
            return left / right;
        case ADD:
            return left + right;
        case SUBTRACT:
            return left - right;
        default:
            return 0.0;
    }
}
Note I also removed passing pointers, the call is now:
*mathVariable[i].equals = solution(*mathVariable[i].left,
                                   *mathVariable[i].right,
                                   mathVariable[i].operation);
Now an OO person will probably object (:-)) to the switch (or the if/else) and argue each node (your MathVariable) should be an instance which knows how to perform its own operation. A C person might suggest you use function pointers in the node so they can perform their own operation. All this is design and you'll have to figure that out yourself.
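For what it's worth, a rough sketch of that OO flavour might look like this (written in Java rather than C, purely to illustrate the design; the names are made up):

// Each operation knows how to apply itself, so no switch is needed at the call site.
enum BinaryOp {
    MULTIPLY { float apply(float l, float r) { return l * r; } },
    DIVIDE   { float apply(float l, float r) { return l / r; } },
    ADD      { float apply(float l, float r) { return l + r; } },
    SUBTRACT { float apply(float l, float r) { return l - r; } };

    abstract float apply(float left, float right);
}

// A node just delegates to its operation.
class MathNode {
    float left, right;
    BinaryOp operation;

    MathNode(float left, float right, BinaryOp operation) {
        this.left = left;
        this.right = right;
        this.operation = operation;
    }

    float evaluate() {
        return operation.apply(left, right);
    }
}

The function-pointer version in C achieves the same effect: store a float (*op)(float, float) in the struct and call it directly, so the dispatch decision is made once when the expression is parsed rather than on every update().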

How do I write a generic memoize function?

I'm writing a function to find triangle numbers and the natural way to write it is recursively:
function triangle (x)
  if x == 0 then return 0 end
  return x + triangle(x-1)
end
But attempting to calculate the first 100,000 triangle numbers fails with a stack overflow after a while. This is an ideal function to memoize, but I want a solution that will memoize any function I pass to it.
Mathematica has a particularly slick way to do memoization, relying on the fact that hashes and function calls use the same syntax:
triangle[0] = 0;
triangle[x_] := triangle[x] = x + triangle[x-1]
That's it. It works because the rules for pattern-matching function calls are such that it always uses a more specific definition before a more general definition.
Of course, as has been pointed out, this example has a closed-form solution: triangle[x_] := x*(x+1)/2. Fibonacci numbers are the classic example of how adding memoization gives a drastic speedup:
fib[0] = 1;
fib[1] = 1;
fib[n_] := fib[n] = fib[n-1] + fib[n-2]
Although that too has a closed-form equivalent, albeit messier: http://mathworld.wolfram.com/FibonacciNumber.html
I disagree with the person who suggested this was inappropriate for memoization because you could "just use a loop". The point of memoization is that any repeat function calls are O(1) time. That's a lot better than O(n). In fact, you could even concoct a scenario where the memoized implementation has better performance than the closed-form implementation!
You're also asking the wrong question for your original problem ;)
This is a better way for that case:
triangle(n) = n * (n + 1) / 2
Furthermore, supposing the formula didn't have such a neat solution, memoisation would still be a poor approach here. You'd be better off just writing a simple loop in this case. See this answer for a fuller discussion.
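Coming back to the generic-memoize part of the question: in Java, a minimal version of such a wrapper might look like this (a sketch in the same spirit as the C# and C++ versions further down; the class and method names are just for illustration):

import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

class Memo {
    // Wraps any one-argument function in a cache; repeat calls with the
    // same argument become a single hash lookup.
    static <A, R> Function<A, R> memoize(Function<A, R> f) {
        Map<A, R> cache = new HashMap<>();
        return a -> cache.computeIfAbsent(a, f);
    }

    public static void main(String[] args) {
        Function<Integer, Long> triangle = Memo.memoize(n -> n * (n + 1L) / 2);
        System.out.println(triangle.apply(100_000)); // computed once...
        System.out.println(triangle.apply(100_000)); // ...then served from the cache
    }
}

Note that, as with the C# example below, recursive calls inside a function only benefit if they go through the memoized wrapper rather than the raw function.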
I bet something like this should work with variable argument lists in Lua:
local function varg_tostring(...)
  local s = select(1, ...)
  for n = 2, select('#', ...) do
    s = s..","..select(n,...)
  end
  return s
end
local function memoize(f)
  local cache = {}
  return function (...)
    local al = varg_tostring(...)
    if cache[al] then
      return cache[al]
    else
      local y = f(...)
      cache[al] = y
      return y
    end
  end
end
You could probably also do something clever with metatables and __tostring so that the argument list could just be converted with a tostring(). Oh, the possibilities.
In C# 3.0 - for recursive functions, you can do something like:
public static class Helpers
{
    public static Func<A, R> Memoize<A, R>(this Func<A, Func<A, R>, R> f)
    {
        var map = new Dictionary<A, R>();
        Func<A, R> self = null;
        self = (a) =>
        {
            R value;
            if (map.TryGetValue(a, out value))
                return value;
            value = f(a, self);
            map.Add(a, value);
            return value;
        };
        return self;
    }
}
Then you can create a memoized Fibonacci function like this:
var memoized_fib = Helpers.Memoize<int, int>((n,fib) => n > 1 ? fib(n - 1) + fib(n - 2) : n);
Console.WriteLine(memoized_fib(40));
In Scala (untested):
def memoize[A, B](f: (A)=>B) = {
  var cache = Map[A, B]()
  { x: A =>
    if (cache contains x) cache(x) else {
      val back = f(x)
      cache += (x -> back)
      back
    }
  }
}
Note that this only works for functions of arity 1, but with currying you could make it work. The more subtle problem is that memoize(f) != memoize(f) for any function f. One very sneaky way to fix this would be something like the following:
val correctMem = memoize(memoize _)
I don't think that this will compile, but it does illustrate the idea.
Update: Commenters have pointed out that memoization is a good way to optimize recursion. Admittedly, I hadn't considered this before, since I generally work in a language (C#) where generalized memoization isn't so trivial to build. Take the post below with that grain of salt in mind.
I think Luke likely has the most appropriate solution to this problem, but memoization is not generally the solution to any issue of stack overflow.
Stack overflow usually is caused by recursion going deeper than the platform can handle. Languages sometimes support "tail recursion", which re-uses the context of the current call, rather than creating a new context for the recursive call. But a lot of mainstream languages/platforms don't support this. C# has no inherent support for tail-recursion, for example. The 64-bit version of the .NET JITter can apply it as an optimization at the IL level, which is all but useless if you need to support 32-bit platforms.
If your language doesn't support tail recursion, your best option for avoiding stack overflows is either to convert to an explicit loop (much less elegant, but sometimes necessary) or to find a non-recursive algorithm, such as the closed-form formula Luke provided for this problem.
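For the triangle function from the question, the explicit-loop version is short (shown here in Java for illustration; the original is Lua):

// Iterative version: no recursion, so there is no call stack to overflow.
static long triangle(int x) {
    long sum = 0;
    for (int i = 1; i <= x; i++) {
        sum += i;
    }
    return sum;
}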
function memoize (f)
  local cache = {}
  return function (x)
    if cache[x] then
      return cache[x]
    else
      local y = f(x)
      cache[x] = y
      return y
    end
  end
end
triangle = memoize(triangle);
Note that to avoid a stack overflow, triangle would still need to be seeded - that is, called on smaller arguments first so the cache fills up gradually, rather than letting the first call recurse 100,000 levels deep.
Here's something that works without converting the arguments to strings.
The only caveat is that it can't handle a nil argument. But the accepted solution can't distinguish the value nil from the string "nil", so that's probably OK.
local function m(f)
  local t = { }
  local function mf(x, ...) -- memoized f
    assert(x ~= nil, 'nil passed to memoized function')
    if select('#', ...) > 0 then
      t[x] = t[x] or m(function(...) return f(x, ...) end)
      return t[x](...)
    else
      t[x] = t[x] or f(x)
      assert(t[x] ~= nil, 'memoized function returns nil')
      return t[x]
    end
  end
  return mf
end
I've been inspired by this question to implement (yet another) flexible memoize function in Lua.
https://github.com/kikito/memoize.lua
Main advantages:
Accepts a variable number of arguments
Doesn't use tostring; instead, it organizes the cache in a tree structure, using the parameters to traverse it.
Works just fine with functions that return multiple values.
Pasting the code here as reference:
local globalCache = {}

local function getFromCache(cache, args)
  local node = cache
  for i = 1, #args do
    if not node.children then return {} end
    node = node.children[args[i]]
    if not node then return {} end
  end
  return node.results
end

local function insertInCache(cache, args, results)
  local arg
  local node = cache
  for i = 1, #args do
    arg = args[i]
    node.children = node.children or {}
    node.children[arg] = node.children[arg] or {}
    node = node.children[arg]
  end
  node.results = results
end

-- public function
local function memoize(f)
  globalCache[f] = { results = {} }
  return function (...)
    local results = getFromCache( globalCache[f], {...} )
    if #results == 0 then
      results = { f(...) }
      insertInCache(globalCache[f], {...}, results)
    end
    return unpack(results)
  end
end

return memoize
Here is a generic C# 3.0 implementation, in case it helps:
public static class Memoization
{
    public static Func<T, TResult> Memoize<T, TResult>(this Func<T, TResult> function)
    {
        var cache = new Dictionary<T, TResult>();
        var nullCache = default(TResult);
        var isNullCacheSet = false;
        return parameter =>
        {
            TResult value;
            if (parameter == null && isNullCacheSet)
            {
                return nullCache;
            }
            if (parameter == null)
            {
                nullCache = function(parameter);
                isNullCacheSet = true;
                return nullCache;
            }
            if (cache.TryGetValue(parameter, out value))
            {
                return value;
            }
            value = function(parameter);
            cache.Add(parameter, value);
            return value;
        };
    }
}
(Quoted from a French blog article.)
In the vein of posting memoization in different languages, I'd like to respond to #onebyone.livejournal.com with a non-language-changing C++ example.
First, a memoizer for single arg functions:
template <class Result, class Arg, class ResultStore = std::map<Arg, Result> >
class memoizer1 {
public:
    template <class F>
    const Result& operator()(F f, const Arg& a) {
        typename ResultStore::const_iterator it = memo_.find(a);
        if (it == memo_.end()) {
            it = memo_.insert(make_pair(a, f(a))).first;
        }
        return it->second;
    }
private:
    ResultStore memo_;
};
Just create an instance of the memoizer, feed it your function and argument. Just make sure not to share the same memo between two different functions (but you can share it between different implementations of the same function).
Next, a driver function and an implementation. Only the driver function needs to be public.
int fib(int); // driver
int fib_(int); // implementation
Implemented:
int fib_(int n) {
    ++total_ops; // total_ops: a global call counter declared elsewhere
    if (n == 0 || n == 1)
        return 1;
    else
        return fib(n-1) + fib(n-2);
}
And the driver, to memoize
int fib(int n) {
    static memoizer1<int, int> memo;
    return memo(fib_, n);
}
Permalink showing output on codepad.org. Number of calls is measured to verify correctness. (insert unit test here...)
This only memoizes single-argument functions. Generalizing to multiple or varying arguments is left as an exercise for the reader.
In Perl, generic memoization is easy to get. The Memoize module is part of the Perl core and is highly reliable, flexible, and easy to use.
The example from its manpage:
# This is the documentation for Memoize 1.01
use Memoize;
memoize('slow_function');
slow_function(arguments); # Is faster than it was before
You can add, remove, and customize memoization of functions at run time! You can provide callbacks for custom memento computation.
Memoize.pm even has facilities for making the memento cache persistent, so it does not need to be re-filled on each invocation of your program!
Here's the documentation: http://perldoc.perl.org/5.8.8/Memoize.html
Extending the idea, it's also possible to memoize functions with two input parameters:
function memoize2 (f)
  local cache = {}
  return function (x, y)
    if cache[x..','..y] then
      return cache[x..','..y]
    else
      local z = f(x,y)
      cache[x..','..y] = z
      return z
    end
  end
end
Notice that parameter order matters in the caching algorithm, so if parameter order doesn't matter in the functions to be memoized the odds of getting a cache hit would be increased by sorting the parameters before checking the cache.
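As a sketch of that idea (in Java, with made-up names), normalizing the argument order before the cache lookup means f(a, b) and f(b, a) share one entry:

import java.util.HashMap;
import java.util.Map;
import java.util.function.BinaryOperator;

class SymmetricMemo {
    // Only correct for commutative functions, where f(a, b) == f(b, a).
    static BinaryOperator<Long> memoizeSymmetric(BinaryOperator<Long> f) {
        Map<String, Long> cache = new HashMap<>();
        return (a, b) -> {
            long lo = Math.min(a, b), hi = Math.max(a, b);
            String key = lo + "," + hi;  // order-independent cache key
            return cache.computeIfAbsent(key, k -> f.apply(a, b));
        };
    }
}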
But it's important to note that some functions can't be profitably memoized. I wrote memoize2 to see if the recursive Euclidean algorithm for finding the greatest common divisor could be sped up.
function gcd (a, b)
  if b == 0 then return a end
  return gcd(b, a%b)
end
As it turns out, gcd doesn't respond well to memoization. The calculation it does is far less expensive than the caching algorithm. Even for large numbers, it terminates fairly quickly. After a while, the cache grows very large. This algorithm is probably as fast as it can be.
Recursion isn't necessary. The nth triangle number is n(n+1)/2, so...
public int triangle(final int n) {
    return n * (n + 1) / 2;
}
Please don't recurse this. Either use the x*(x+1)/2 formula or simply iterate the values and memoize as you go.
public static int triangle(final int n) {
    int[] memo = new int[n + 1];
    int sum = 0;
    for (int i = 0; i <= n; ++i) {
        sum += i;
        memo[i] = sum; // memoize each partial sum as we go
    }
    return memo[n];
}