Simultaneous variable assignment in Pascal

I wish to do simultaneous variable assignment in Pascal.
As far as I know, it's not possible. Googling on the issue, I can see that many programming languages implement that, but I can't find how to do it in Pascal.
For example, in Python I can do this:
(x, y) = (y, x)
In Pascal, I need an additional variable to hold the value of x before it's overwritten, something like this:
bubble := x;
x := y;
y := bubble;
So, is there simultaneous assignment in Pascal, or should I rewrite the code to something like the bubble thing above?
I don't just have to do swaps; sometimes I have to do things like this:
(x,y) = (x+1,y+x)
Would it be ok to do it like the following?
old_x := x;
old_y := y;
x := x + 1; // maybe x := old_x + 1;
y := old_y + old_x;

Pascal does not have simultaneous variable assignment.
Nor does it have a predefined SWAP(X,Y) procedure.
You have to do it yourself.
You might want to consider buying a copy of [Jensen & Wirth]. It is still the best reference manual available for the language. If you are using one of the Borland Pascal systems, use the manual that came with it: Borland made some incompatible changes that nevertheless made the language significantly easier to use.
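If you find yourself doing this a lot, you can wrap the temporary in a procedure of your own. A minimal sketch (the procedure name and the Integer type are my choices, nothing predefined by the language):

procedure Swap(var a, b: Integer);
var
  temp: Integer;
begin
  temp := a;
  a := b;
  b := temp;
end;

{ usage: }
Swap(x, y);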

I'm not familiar at all with Pascal, and I can't find any special swap function that does what you want.
In any case, what you're doing is perfectly reasonable; any standard implementation of swap requires a temporary variable to hold one of the values being swapped. The only thing I would change in the code you have written above is to rename the variable to temp, to make it clear that the variable only exists temporarily for the purposes of the swap:
temp := x;
x := y;
y := temp;
EDIT: There's also nothing wrong with what you're doing when changing x and y. If you need to keep the old value as part of your calculations, it's perfectly fine to assign the old value to a variable and then use it.

Related

How does declaring variables in swi prolog work?

Sorry for the inconvenience, I'm still new to Prolog's syntax. I'm just asking how one would declare/instantiate and manipulate a variable X (if possible) and be able to write/print it out, similar to other languages. For example:
int x = 5;
print(x);
or even swapping variables
int x = 5, y = 10, z;
z = y;
x = z;
y = x;
Is it possible to implement these in prolog? If so, how? If not, why?
You are not the first person to ask this exact question. Even searching around on Stack Overflow first would have been easier. By now I suspect there is a CompSci professor out there just trolling their students with this question.
Your example looks like C or a C-like language. Other very popular languages do not require declarations, for example Python. Declaring variables is an artifact of early compiled languages. This is a very broad topic that belongs to a university-level lecture on compiler design, programming language design and so on.
You "swap variables" when your variables are handles to registers in the processor or memory locations. Prolog is on a very different level, to the point that I feel like saying "unask the question" despite not being too wise.
So, let's start again: what are you trying to achieve by swapping variables?
If you want to print a 5, write:
?- X = 5.
X = 5.
or maybe:
?- write(5).
5
true.
and so on and so forth. Long story short, there is more to this question than meets the eye, but I am not convinced this is the right place to ask it.
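If the goal really is to exchange two values, the Prolog way is to build a new term with new variables rather than to overwrite anything. A made-up illustration (swap/2 is not a built-in):

% swap the two components of a pair; nothing is reassigned,
% the "swapped" result is just a new binding
swap(X-Y, Y-X).

?- swap(5-10, Swapped).
Swapped = 10-5.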

Conditional Equations with Variable in GAMS

I need your help to solve this "Little" problem I'm having programming with GAMS.
In my objective function I have a term of the form z = [...] - TWC(j)*HS(j).
Here HS(j) is a variable.
Now, TWC(j) should be a parameter that works like this:
TWC(j) = 0 when HS(j) < 1000
and
TWC(j) = 3.21 when HS(j) >=1000.
Any idea how to implement this in GAMS? My attempts have all failed.
EDIT: this is what I tried. I defined an equation called TWCup(j):
TWCup(j)$(HS.l(j) >= 1000).. TWC(j) =e= 3.21;
Thanks ;)
Probably not relevant for the OP anymore (since the question is more than 3 years old), but maybe useful for someone else that looks at this question.
If TWC(j) is a function of your variable HS(j), it is not a parameter. It is another variable. So you should define TWC(j) as a variable and not as a parameter. This is probably the reason you were getting errors.
There are some ways to fix your problem. One is to actually turn TWC(j) into a variable, but this would make your problem non-linear, which could be an issue (or not). It could also require binary variables, which could likewise become a problem (again, or not).
But I think this issue can be resolved with a different specification of the LP. The cost function f(HS(j)) = TWC(j)*HS(j) is piecewise linear and convex, which you can represent in a standard LP using auxiliary variables (assuming you are minimizing).
* declare auxiliary variable
Variable
w(j);
* declare equations for the piecewise-linear cost function
Equation
costfun1(j)
costfun2(j);
* define costfun1 and costfun2
costfun1(j).. w(j) =g= 0;
costfun2(j).. w(j) =g= -3210 + 3.21*HS(j);
*redefine objective function (note that I changed to plus because I assumed this is a cost function that you are minimizing)
z = [...]+w(j)
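Written out as a proper objective equation it would look roughly like this (the equation name objdef is my own, and the omitted part of the original objective still has to be filled in):
Equation objdef;
objdef.. z =e= sum(j, w(j));
* ... plus the remaining terms of your original objective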
This solution is very problem dependent. I assumed you were minimizing and I changed the sign in the objective function to '+'. If this was not the case, this would not work (would not be convex). Then we would need to check other approaches.
But the takeaway here is to stress that something that is a function of a variable is also a variable. You may, however, have options to reformulate your model to get around the issue.

Why does 22, when converted to a string with printOn or storeOn, still add like an integer?

This is my code:
x := 22 storeString.
y := x + x.
Transcript show: y.
Expected output: 2222
Actual output: 44.
I thought that the storeString message, sent to 22, assigned to x, would result in a string value being stored into x.
So I thought, I'm pretty new in smalltalk. Maybe it's order of operations? So I tried this:
x := (22 storeString).
y := x + x.
Transcript show: y.
Same result, and the same if I use printOn instead of storeOn. This is probably a day-one, tutorial-following type question. But what is going on? Note that I know about the concatenation operator (,), but I am still wondering how it is that you can add two strings together like this. Is some implicit conversion from string back to integer happening as part of +?
Only a few things are implicit in Smalltalk. You can browse the implementation of the #+ selector in the String class and find out for yourself what is going on, or print the String >> #+ definition.
You can also check out the internals of any running object instance, so you could have evaluated x inspect, to find out that x really is a String.
#+ is implemented on String and does a coercion to a Number before doing the addition.
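Not the actual Squeak source, but a rough sketch of the effect, using String>>asNumber to make the coercion explicit:
x := '22'.
y := x asNumber + x asNumber. "44, an Integer, not a String"
Transcript show: y printString.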
Squeak has a lot of eToys (a Smalltalk variation for kids) code spread throughout its core codebase. This is likely the reason why String implements all the math operators. In Pharo the math operators have been mostly removed from String, so '1' + '2' raises an error like in any other Smalltalk.
Open a workspace. Enter:
'12' + '34'
Highlight it and then use the right button menu to invoke "debug it". If ever there was a "killer app" for Smalltalk, it is the way the Smalltalk debugger interacts with the "all objects all the time" nature of Smalltalk. You can see what everything is and how it does what it does. If you use "into", you'll be able to see exactly how it pulls off turning that into '46'.
Even cooler (I think), is that you can do
12 + '34'
(the first is no longer a string, rather a direct number). Again, you can use the debugger, and the whole double dispatch mechanism Smalltalk uses for mixed-type arithmetic will be opened up to you.
You can even do weirder examples like
4.0 + #('13' 2)
(here we're adding a number to an array, and the array contents are of mixed type)
Happy Smalltalking!
This behavior may appear puzzling to anyone not familiar with Smalltalk, especially since in other languages the exact opposite tends to happen (numbers are coerced to strings).
The reason this is not a problem is that string concatenation is done with ,. Once aware of that, it becomes clear that '22' + 22 or even '22' + '22' can never be '2222'; it will either fail or produce 44.
So if string concatenation is what you want, you need to send the right message:
x := 22 storeString.
y := x , x.
Transcript show: y.

signal vs variable

VHDL provides two major object types for holding data, namely signal and variable, but I can't find anywhere that clearly explains when to use one over the other. Can anyone shed some light on their strengths/limitations/scope/synthesis/situations in which using one would be better than the other?
Signals can be used to communicate values between processes; variables cannot. There are shared variables, which older compilers will let you use for this, but you really are asking for problems (race conditions) if you do that - unless you use protected types, which are a bit like classes. Those are safe to use for communication, but not (as far as I know) synthesisable.
This fundamental restriction on communication comes from the way updates on signals and variables work.
The big distinction comes because variables update immediately when they are assigned to (with the := operator). Signals have an update scheduled when assigned to (with the <= operator), but the value that anyone sees when they read the signal will not change until some time passes.
(Aside: That amount of time could be as small as a delta cycle, which is the smallest amount of time in a VHDL simulator - no "real" time passes. Something like wait for 0 ps; causes the simulator to wait for the next delta cycle before continuing.)
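A small illustration of that scheduling (the signal and clock names here are made up): because both right-hand sides are read before either update takes effect, two signals can be swapped in a clocked process without a temporary:

process (clk)
begin
if rising_edge(clk) then
x <= y; -- both assignments see the values from before the clock edge,
y <= x; -- so x and y really do swap on every rising edge
end if;
end process;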
If you need the same logic to feed into multiple flipflops a variable is a good way of factoring that logic into a single point, rather than copying/pasting code.
In terms of logic, within a clocked process, signals always infer a flipflop. Variables can be used for both combinatorial logic and inferring a flipflop. Sometimes both for the same variable. Some think this confusing, personally, I think it's fine:
process (clk)
variable something : std_logic;
begin
if rising_edge(clk) then
if reset = '1' then
something := '0';
else
output_b <= something or input_c; -- using the previous clock's value of 'something' infers a register
something := input_a and input_b; -- comb. logic for a new value
output_a <= something or input_c; -- which is used immediately, not registered here
end if;
end if;
end process;
One thing to watch when using variables is that, because no register output is used when they are read after being written, you can get long chains of logic, which can lead to missing your fmax target.
One thing to watch when using signals (in clocked processes) is that they always infer a register, and hence add latency.
As others have said signals get updated with their new value at the end of the time slice, but variables are updated immediately.
-- inside some process
-- varA = sigA = 0. sigB = 2
varA := sigB + 1; -- varA is now 3
sigC <= varA + 1; -- sigC will be 4
sigA <= sigB + 1; -- sigA will be 3
sigD <= sigA + 1; -- sigD will be 1 (original sigA + 1)
For hardware design, I use variables very infrequently. It's normally when I'm hacking in some feature that really needs the code to be re-factored, but I'm on a deadline. I avoid them because I find the mental model of working with signals and variables too different to live nicely in one piece of code. That's not to say it can't be done, but I think most RTL engineers avoid mixing... and you can't avoid signals.
Other points:
Signals have entity scoping. Variables are local to the process.
Both synthesize

Optional named arguments without wrapping them all in "OptionValue"

Suppose I have a function with optional named arguments but I insist on referring to the arguments by their unadorned names.
Consider this function that adds its two named arguments, a and b:
Options[f] = {a->0, b->0}; (* The default values. *)
f[OptionsPattern[]] :=
OptionValue[a] + OptionValue[b]
How can I write a version of that function where that last line is replaced with simply a+b?
(Imagine that that a+b is a whole slew of code.)
The answers to the following question show how to abbreviate OptionValue (easier said than done) but not how to get rid of it altogether: Optional named arguments in Mathematica
Philosophical Addendum: It seems like if Mathematica is going to have this magic with OptionsPattern and OptionValue it might as well go all the way and have a language construct for doing named arguments properly where you can just refer to them by, you know, their names. Like every other language with named arguments does. (And in the meantime, I'm curious what workarounds are possible...)
Why not just use something like:
Options[f] = {a->0, b->0};
f[args___] := (a+b) /. Flatten[{args, Options[f]}]
For more complicated code I'd probably use something like:
Options[f] = {a->0, b->0};
f[OptionsPattern[]] := Block[{a,b}, {a,b} = OptionValue[{a,b}]; a+b]
and use a single call to OptionValue to get all the values at once. (Main reason is that this cuts down on messages if there are unknown options present.)
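For example, with that definition in place, calls like these (made-up values) pick up the defaults for anything not passed:
f[a -> 5] (* 5, since b keeps its default of 0 *)
f[a -> 5, b -> 7] (* 12 *)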
Update, to programmatically generate the variables from the options list:
Options[f] = {a -> 0, b -> 0};
f[OptionsPattern[]] :=
With[{names = Options[f][[All, 1]]},
Block[names, names = OptionValue[names]; a + b]]
Here is the final version of my answer, containing the contributions from the answer by Brett Champion.
ClearAll[def];
SetAttributes[def, HoldAll];
def[lhs : f_[args___] :> rhs_] /; !FreeQ[Unevaluated[lhs], OptionsPattern] :=
With[{optionNames = Options[f][[All, 1]]},
lhs := Block[optionNames, optionNames = OptionValue[optionNames]; rhs]];
def[lhs : f_[args___] :> rhs_] := lhs := rhs;
The reason why the definition is given as a delayed rule in the argument is that this way we can benefit from the syntax highlighting. The Block trick is used because it fits the problem: it does not interfere with possible nested lexical scoping constructs inside your function, and therefore there is no danger of inadvertent variable capture. We check for the presence of OptionsPattern since this code will not be correct for definitions without it, and we want def to also work in that case.
Example of use:
Clear[f, a, b, c, d];
Options[f] = {a -> c, b -> d};
(*The default values.*)
def[f[n_, OptionsPattern[]] :> (a + b)^n]
You can look now at the definition:
Global`f
f[n$_,OptionsPattern[]]:=Block[{a,b},{a,b}=OptionValue[{a,b}];(a+b)^n$]
f[n_,m_]:=m+n
Options[f]={a->c,b->d}
We can test it now:
In[10]:= f[2]
Out[10]= (c+d)^2
In[11]:= f[2,a->e,b->q]
Out[11]= (e+q)^2
The modifications are done at "compile time" and are pretty transparent. While this solution saves some typing compared to Brett's, it determines the set of option names at "compile time", while Brett's determines them at "run time". Therefore, it is a bit more fragile than Brett's: if you add some new option to the function after it has been defined with def, you must Clear it and rerun def. In practice, however, it is customary to start with ClearAll and put all definitions in one piece (cell), so this does not seem to be a real problem. Also, it cannot work with string option names, but your original concept also assumes they are Symbols. Also, they should not have global values, at least not at the time when def executes.
Here's a kind of horrific solution:
Options[f] = {a->0, b->0};
f[OptionsPattern[]] := Module[{vars, tmp, ret},
vars = Options[f][[All,1]];
tmp = cat[vars];
each[{var_, val_}, Transpose[{vars, OptionValue[Automatic,#]& /@ vars}],
var = val];
ret =
a + b; (* finally! *)
eval["ClearAll[", StringTake[tmp, {2,-2}], "]"];
ret]
It uses the following convenience functions:
cat = StringJoin@@(ToString/@{##})&; (* Like sprintf/strout in C/C++. *)
eval = ToExpression[cat[##]]&; (* Like eval in every other lang. *)
SetAttributes[each, HoldAll]; (* each[pattern, list, body] *)
each[pat_, lst_, bod_] := ReleaseHold[ (* converts pattern to body for *)
Hold[Cases[Evaluate@lst, pat:>bod];]]; (* each element of list. *)
Note that this doesn't work if a or b has a global value when the function is called. But that was always the case for named arguments in Mathematica anyway.