In natural languages, we would say "some color is a primary color if the color is red, blue, or yellow."
In every programming language I've seen, that translates into something like:
isPrimaryColor = someColor == "Red" or someColor == "Blue" or someColor == "Yellow"
Why isn't there a syntax that more closely matches the English sentence? After all, you wouldn't say "some color is a primary color if that color is red, or that color is blue, or that color is yellow."
I realize that simply writing isPrimaryColor = someColor == ("Red" or "Blue" or "Yellow") wouldn't work, because instead of Red, Blue, and Yellow they could be boolean statements, in which case boolean logic applies. But what about something like:
isPrimaryColor = someColor ( == "Red" or == "Blue" or == "Yellow")
As an added bonus, that syntax would allow for more flexibility. Say you wanted to see whether a number is between 1 and 100 or between 1000 and 2000; you could say:
someNumber ((>= 1 and <=100) or (>=1000 and <=2000))
Edit:
Very interesting answers, and point taken that I should learn more languages. After reading through the answers I agree that for strict equality comparison, something similar to set membership is a clear and concise way of expressing the same thing (for languages that have support for concise inline lists or sets and for testing membership).
One issue that came up is that if the value to compare is the result of an expensive calculation, a temporary variable would need to be (well, should be) created. The other issue is that there may be different evaluations that need to be checked, such as "the result of some expensive calculation should be prime and between 200 and 300".
These scenarios are also covered by more functional languages (though depending on the language the result may not be more concise), or really by any language that can take a function as a parameter. For instance, the previous example could be
MeetsRequirements(GetCalculatedValue(), f(x):x > 200, f(x):x < 300, IsPrime)
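In Python, for instance, a minimal sketch of this idea might look as follows (meets_requirements, get_calculated_value and is_prime are hypothetical names, not any particular library's API):

def meets_requirements(value, *predicates):
    # the expensive value is computed once by the caller, then tested against every predicate
    return all(p(value) for p in predicates)

def is_prime(n):
    return n > 1 and all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

meets_requirements(get_calculated_value(), lambda x: x > 200, lambda x: x < 300, is_prime)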
I think that most people consider something like
isPrimaryColor = ["Red", "Blue", "Yellow"].contains(someColor)
to be sufficiently clear that they don't need extra syntax for this.
In python you can do something like this:
color = "green"
if color in ["red", "green", "blue"]:
print 'Yay'
It is called the in operator, and it tests for membership.
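One detail worth knowing: the same test works with any container, and a set literal makes the lookup constant-time on average, which matters for large collections:

if color in {"red", "green", "blue"}:   # set literal instead of a list
    print('Yay')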
In Perl 6 you could do this with junctions:
if $color eq 'Red'|'Blue'|'Green' {
    doit()
}
Alternatively, you could do it with the smart match operator (~~). The following is roughly equivalent to Python's if value in list: syntax, except that ~~ does a lot more in other contexts.
if ($color ~~ qw/Red Blue Green/) {
    doit()
}
The parens also make it valid Perl 5 (>= 5.10); in Perl 6 they're optional.
In Haskell, it is easy to define a function to do this:
matches x ps = foldl (||) False $ map (\ p -> p x) ps
This function takes a value and a list of predicates (of type a -> Bool) and returns True if any of the predicates match the value.
This allows you to do something like this:
isMammal m = m `matches` [(=="Dog"), (=="Cat"), (=="Human")]
The nice thing is that it doesn't have to be just equality; you can use anything with the correct type:
isAnimal a = a `matches` [isMammal, (=="Fish"), (=="Bird")]
Ruby
Contained in list:
irb(main):023:0> %w{red green blue}.include? "red"
=> true
irb(main):024:0> %w{red green blue}.include? "black"
=> false
Numeric Range:
irb(main):008:0> def is_valid_num(x)
irb(main):009:1> case x
irb(main):010:2> when 1..100, 1000..2000 then true
irb(main):011:2> else false
irb(main):012:2> end
irb(main):013:1> end
=> nil
irb(main):014:0> is_valid_num(1)
=> true
irb(main):015:0> is_valid_num(100)
=> true
irb(main):016:0> is_valid_num(101)
=> false
irb(main):017:0> is_valid_num(1050)
=> true
So far, nobody has mentioned SQL. It has what you are suggesting:
SELECT
    employee_id
FROM
    employee
WHERE
    hire_date BETWEEN '2009-01-01' AND '2010-01-01' -- range of values
    AND employment_type IN ('C', 'S', 'H', 'T')     -- list of values
COBOL uses 88 levels to implement named values, named groups of values
and named ranges of values.
For example:
01  COLOUR PIC X(10).
    88 IS-PRIMARY-COLOUR VALUE 'Red', 'Blue', 'Yellow'.
...
MOVE 'Blue' TO COLOUR
IF IS-PRIMARY-COLOUR
    DISPLAY 'This is a primary colour'
END-IF
Range tests are covered as follows:
01  SOME-NUMBER PIC S9(4) BINARY.
    88 IS-LESS-THAN-ZERO VALUE -9999 THRU -1.
    88 IS-ZERO VALUE ZERO.
    88 IS-GREATER-THAN-ZERO VALUE 1 THRU 9999.
...
MOVE +358 TO SOME-NUMBER
EVALUATE TRUE
    WHEN IS-LESS-THAN-ZERO
        DISPLAY 'Negative Number'
    WHEN IS-ZERO
        DISPLAY 'Zero'
    WHEN IS-GREATER-THAN-ZERO
        DISPLAY 'Positive Number'
    WHEN OTHER
        DISPLAY 'How the heck did this happen!'
END-EVALUATE
I guess this all happened because COBOL was supposed to emulate English
to some extent.
You'll love Perl 6 because it has:
chained comparison operators:
(1 <= $someNumber <= 100) || (1000 <= $someNumber <= 2000)
junctive operators:
$isPrimaryColor = $someColor ~~ "Red" | "Blue" | "Yellow"
And you can combine both with ranges:
$someNumber ~~ (1..100) | (1000..2000)
Python actually gives you the ability to do the last thing quite well:
>>> x=5
>>> (1<x<1000 or 2000<x<3000)
True
In Python you can say ...
isPrimaryColor = someColor in ('Red', 'Blue', 'Yellow')
... which I find more readable than your (== "Red" or == "Blue") syntax. There are a few reasons to add syntax support for a language feature:
Efficiency: Not a reason here, since there's no speed improvement.
Functionality: Also not a concern; there's nothing you can do in the new syntax that you can't do in the old.
Legibility: Most languages handle the case where you're checking the equality of multiple values just fine. In other cases (e.g., someNumber (> 1 and < 10)) it might be more useful, but even then it doesn't buy you much (and Python allows you to say 1 < someNumber < 10, which is even clearer).
So it's not clear the proposed change is particularly helpful.
My guess would be that languages are designed by force of habit. Early languages had only binary comparison operators because they are simpler to implement. Everyone got used to writing (x > 0 and x < y), and language designers never bothered to support the common mathematical form, (0 < x < y).
In most languages a comparison operator returns a boolean type. In the case of 0 < x < y, if this is interpreted as (0 < x) < y it would be meaningless, since < does not make sense for comparing booleans. Therefore, a new compiler could interpret 0 < x < y as tmp:=x, 0 < tmp && tmp < y without breaking backward compatibility. In the case of x == y == z, however, if the variables are already booleans, it is ambiguous whether this means x == y && y == z or (x == y) == z.
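Python, designed with chained comparisons from the start, adopted exactly this interpretation, and it evaluates the shared operand only once; a small illustrative sketch:

def x():
    print("evaluated")     # printed once, not twice
    return 5

result = 0 < x() < 10      # behaves like: tmp = x(); 0 < tmp and tmp < 10
print(result)              # True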
In C# I use the following extension method so that you can write someColor.IsOneOf("Red", "Blue", "Yellow"). It is less efficient than direct comparison (what with the array, loop, Equals() calls and boxing if T is a value type), but it sure is convenient.
public static bool IsOneOf<T>(this T value, params T[] set)
{
    object value2 = value;
    for (int i = 0; i < set.Length; i++)
        if (set[i].Equals(value2))
            return true;
    return false;
}
Icon has the facility you describe.
if y < (x | 5) then write("y=", y)
I rather like that aspect of Icon.
In C#:
if ("A".IsIn("A", "B", "C"))
{
}
if (myColor.IsIn(colors))
{
}
Using these extensions:
using System.Collections.Generic;

public static class ObjectExtensions
{
    public static bool IsIn(this object obj, params object[] list)
    {
        foreach (var item in list)
        {
            // use Equals rather than ==, which would compare references for boxed values
            if (Equals(obj, item))
            {
                return true;
            }
        }
        return false;
    }

    public static bool IsIn<T>(this T obj, ICollection<T> list)
    {
        return list.Contains(obj);
    }

    public static bool IsIn<T>(this T obj, IEnumerable<T> list)
    {
        // == is not defined for an unconstrained type parameter,
        // so use the default equality comparer instead
        foreach (var item in list)
        {
            if (EqualityComparer<T>.Default.Equals(obj, item))
            {
                return true;
            }
        }
        return false;
    }
}
You'll have to go a bit down the abstraction layer to find out the reason why. x86's comparison and jump instructions are binary, two-operand operations (they can be computed in a few clock cycles), and that's the way things have stayed.
If you want, many languages offer an abstraction for that. In PHP, for example, you could use:
$isPrimaryColor = in_array($someColor, array('Red', 'Blue', 'Yellow'));
I don't see an Objective-C answer yet. Here is one:
BOOL isPrimaryColour = [[NSSet setWithObjects: @"red", @"green", @"blue", nil] containsObject: someColour];
The question is reasonable, and I wouldn't regard the change as syntactic sugar. If the value being compared is the result of computation, it would be nicer to say:
if (someComplicatedExpression ?== 1 : 2 : 3 : 5)
than to say
int temp;
temp = someComplicatedExpression;
if (temp == 1 || temp == 2 || temp == 3 || temp == 5)
particularly if there was no other need for the temp variable in question. A modern compiler could probably recognize the short useful lifetime of 'temp' and optimize it to a register, and could probably recognize the "see if variable is one of certain constants" pattern, but there'd be no harm in allowing a programmer to save the compiler the trouble. The indicated syntax wouldn't compile on any existing compiler, but I don't think it would be any more ambiguous than (a+b >> c+d) whose behavior is defined in the language spec.
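For what it's worth, languages with a membership test already eliminate the temporary without new syntax; in Python, for example (someComplicatedExpression standing in for the expensive call above):

if someComplicatedExpression() in (1, 2, 3, 5):   # the expression is evaluated exactly once
    ...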
As to why nobody's done that, I don't know.
I'm reminded of when I first started to learn programming, in Basic, and at one point I wrote
if X=3 OR 4
I intended this like you are describing, if X is either 3 or 4. The compiler interpreted it as:
if (X=3) OR (4)
That is, if X=3 is true, or if 4 is true. Since the language defined anything non-zero as true, 4 is true, anything OR true is true, and so the expression was always true. I spent a long time figuring that one out.
I don't claim this adds anything to the discussion. I just thought it might be a mildly amusing anecdote.
As a mathematician, I would say that the colour is primary if and only if it is a member of the set {red, green, blue} of primary colours.
And this is exactly how you could say in Delphi:
isPrimary := Colour in [clRed, clGreen, clBlue]
In fact, I employ this technique very often. The last time was three days ago: implementing my own scripting language's interpreter, I wrote
const
  LOOPS = [pntRepeat, pntDoWhile, pntFor];
and then, a few lines later,
if Nodes[x].Type in LOOPS then
The Philosophical Part of the Question
@supercat, etc. ("As to why nobody's done that, I don't know."):
Probably because the designers of programming languages are mathematicians (or, at least, mathematically inclined). If a mathematician needs to state the equality of two objects, she would say
X = Y,
naturally. But if X can be one of a number of things A, B, C, ..., then she would define a set S = {A, B, C, ...} of these things and write
X ∈ S.
Indeed, it is extremely common for mathematicians to write X ∈ S, where S is the set
S = {x ∈ D; P(x)}
of objects in some universe D that have the property P, instead of writing P(X). For instance, instead of saying "x is a positive real number", or "PositiveReal(x)", one would say x ∈ ℝ⁺.
It's because programming languages are influenced by mathematics, in particular by logic and set theory. Boolean algebra defines the ∧ and ∨ operators in a way that does not work like spoken natural language. Your example would be written as:
Let p(x) be a unary relation which holds if and only if x is a primary color
p(x) ⇔ r(x) ∨ g(x) ∨ b(x)
or
p(x) ⇔ (x=red) ∨ (x=green) ∨ (x=blue)
As you can see, it's pretty similar to the notation that would be used in a programming language. Since mathematics provides strong theoretical foundations, programming languages are based on mathematics rather than on natural language, which always leaves a lot of room for interpretation.
EDIT: The above statement could be simplified by using set notation:
p(x) ⇔ x ∈ {red, green, blue}
and indeed, some programming languages, most notably Pascal, included sets, so you could write:
type
  color = (red, green, blue, yellow, cyan, magenta, black, white);

function is_primary (x : color) : boolean;
begin
  is_primary := x in [red, green, blue]
end;
But sets as a language feature didn't catch on.
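The membership notation did survive outside Pascal, though: Python's in operator with a set literal reads almost like the mathematical x ∈ S (a one-line sketch):

is_primary = x in {"red", "green", "blue"}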
The latter examples you give are effectively syntactic sugar: they'd have to evaluate to the same code as the longer form, since at some point the executed code has to compare your value with each of the conditions in turn.
The array-membership syntax, given in several forms here, comes closer, and I suspect there are other languages which get closer still.
The main problem with making syntax closer to natural language is that natural language is not just ambiguous, it's hideously ambiguous. Even keeping ambiguity to a minimum, we still manage to introduce bugs into our apps; can you imagine what it would be like if you programmed in natural English?!
Just to add to the language examples:
Scheme
(define (isPrimaryColor color)
  (cond ((member color '(red blue yellow)) #t)
        (else #f)))

(define (someNumberTest x)
  (cond ((or (and (>= x 1) (<= x 100)) (and (>= x 1000) (<= x 2000))) #t)
        (else #f)))
Two possibilities
Java
boolean isPrimary = Arrays.asList("red", "blue", "yellow").contains(someColor);
Python
a = 1500
if 1 < a < 10 or 1000 < a < 2000:
    print("In range")
This can be replicated in Lua with some metatable magic :D
local function operator(func)
    return setmetatable({}, {
        __sub = function(a, _)
            return setmetatable({a}, {
                __sub = function(self, b)
                    return func(self[1], b)   -- apply the wrapped comparison
                end
            })
        end
    })
end

local smartOr = operator(function(a, b)
    for i = 1, #b do
        if a == b[i] then
            return true
        end
    end
    return false
end)

local isPrimaryColor = someColor -smartOr- {"Red", "Blue", "Yellow"}
Note: You can change the name of -smartOr- to something like -isEither- to make it even MORE readable.
Languages on computers compare in binary because they are all designed for machines that use binary to represent information. They were designed using similar logic and with broadly similar goals. The English language wasn't designed logically or to describe algorithms, and human brains (the hardware it runs on) aren't based on binary. They're tools designed for different tasks.
Related
Hello fellows, I am learning Julia and integer programming but I am stuck at one point: how to model "then" in Julia/JuMP for integer programming.
Stuck here:
# Define the variables of the model
@variable(mo, x[1:N,1:S], Bin)
@variable(mo, a[1:S] >= 0)
# Assignment constraint
@constraint(mo, [i=1:N], sum(x[i,j] for j=1:S) == 1)
# @constraint(mo, PLEASE HELP )
In cases like this you usually need to use Big-M constraints
So this will be:
a_ij >= s_i^2 - M*(1-x_ij)
where M is a "big enough" number. This means that if x_ij == 0 the inequality is always satisfied (and hence effectively turned off). On the other hand, when x_ij == 1 the M-part is zeroed out and the constraint is enforced.
In JuMP terms the code will look like this:
const M = 10_000
@constraint(mo, [i=1:N, j=1:S], a[i, j] >= s[i]^2 - M*(1 - x[i, j]))
However, if s[i] is an external parameter rather than a model variable, you could simply use x[i,j] <= a[j]/s[i]^2, as proposed by @DanGetz. When s[i] is a @variable, though, you really want to avoid dividing or multiplying variables by each other, so this big-M approach is more general across use cases.
Yes, I've seen this answer - What is pseudopolynomial time? How does it differ from polynomial time? - but I still don't understand.
Why does the representation in bits make a difference only sometimes?
For this program for example
function isPrime(n):
    for i from 2 to n - 1:
        if (n mod i) = 0, return false
    return true
it says the complexity is not polynomial, because n requires log n bits to write out, so the complexity is O(2^(4*log n)). But if I use that reasoning on every other problem, then it could also be pseudopolynomial, right? (Unless I'm getting it all wrong here.) What makes this program so special that it is measured in the number of bits required to write out n?
You have linked to other questions where this is explained fairly well for someone who understands the concept, so here comes a very brief version.
for i from 2 to n - 1:
can be rewritten as
i = 2
while (i < n - 1):
    if (n mod i) == 0:
        return false
    i = i + 1
Very often, we assume that the operations i < n - 1, i = i + 1 and n mod i are O(1). But this is not necessarily true. It is usually true for small values, and on a 32-bit machine a "small value" is on the order of a billion.
A number that requires more than 32 bits to represent will take more time to perform operations on than a number that fits in 32 bits, and it will take even more if it requires more than 64 bits.
In practice, this rarely matters.
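The effect is easy to observe in Python, whose built-in integers are arbitrary-precision; a rough timing sketch (absolute numbers will vary by machine):

import timeit

small = 2 ** 30      # fits comfortably in one machine word
big = 2 ** 100000    # roughly 30,000 decimal digits

print(timeit.timeit(lambda: small * small, number=1000))  # effectively constant-time
print(timeit.timeit(lambda: big * big, number=1000))      # noticeably slower: cost grows with the number of digits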
A very simple way to visualize this is to imagine that you are given the task of implementing the common mathematical operations with the operands represented as strings. Here is a simple Python function that takes two strings representing binary numbers and returns the sum as a string. It was quickly hacked together and assumes both strings have the same length. It can most likely be refined, but it demonstrates the point: this function adds two numbers, but it takes longer for longer numbers.
def binadd(a, b):
    carry = '0'
    result = list('0' * (len(a) + 1))
    for i in range(len(a) - 1, -1, -1):
        xor = '1' if (a[i] == '1') != (b[i] == '1') else '0'
        val = '1' if (xor == '1') != (carry == '1') else '0'
        carry = '1' if (carry == '1' and xor == '1') or (a[i] == '1' and b[i] == '1') else '0'
        result[i + 1] = val    # the sum bit lands one slot to the right of the final carry
    result[0] = carry
    return ''.join(result)
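For instance (11 + 6 = 17):

>>> binadd('1011', '0110')
'10001'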
What makes this program so special to be measured in the amount of bits required to write out n?
There's nothing special about this particular program, at least not theoretically. In practice it is special in the sense that determining whether a VERY big number is prime is a common problem. Or to be more accurate: fast (probabilistic) primality tests do exist and are used routinely; it is the related problem of factoring very big numbers for which no fast algorithm is known, and if one existed, it would basically break encryption as we know it today.
I'm learning OCaml, and the docs and books I'm reading aren't very clear on some things.
In simple words, what are the differences between
-10
and
~-10
To me they seem the same. I've encountered other resources trying to explain the difference, but they explain it in terms that I'm not yet familiar with, as the only thing I know so far is variables.
In fact, - is a binary operator, so some expressions can be ambiguous: f 10 -20 is treated as (f 10) - 20. For example, let's imagine this dummy function:
let f x y = (x, y)
If I want to produce the tuple (10, -20), I would naïvely write something like f 10 -20, which leads me to the following error:
# f 10 -20;;
Error: This expression has type 'a -> int * 'a
       but an expression was expected of type int
because the expression is evaluated as (f 10) - 20 (so a subtraction applied to a function!!). So you can write the expression like this: f 10 (-20), which is valid, or f 10 ~-20, since ~- (and ~+, as well as ~-. and ~+. for floats) are unary operators, so the precedence is properly respected.
It is easier to start by looking at how user-defined unary (~-) operators work.
type quaternion = { r:float; i:float; j:float; k:float }
let zero = { r = 0.; i = 0.; j = 0.; k = 0. }
let i = { zero with i = 1. }
let (~-) q = { r = -.q.r; i = -.q.i; j = -. q.j; k = -. q.k }
In this situation, the unary operator - (and +) is a shorthand for ~- (and ~+) when the parsing is unambiguous. For example, defining -i with
let mi = -i
works because this - could not be the binary operator -.
Nevertheless, the binary operator has a higher priority than the unary -, thus
let wrong = Fun.id -i
is read as
let wrong = (Fun.id) - (i)
In this context, I can either use the full form ~-
let ok' = Fun.id ~-i
or add some parenthesis
let ok'' = Fun.id (-i)
Going back to types with literals (e.g. integers or floats): for those types, the unary + and - symbols can be part of the literal itself (e.g. -10) rather than an operator. For instance, redefining ~- and ~+ does not change the meaning of the integer literals in
let (~+) = ()
let (~-) = ()
let really = -10
let positively_real = +10
This can be "used" to create some quite obfuscated expression:
let (~+) r = { zero with r }
let (+) x y = { r = x.r +. y.r; i = x.i +. y.i; j = x.j +. y.j; k = x.k +. y.k }
let ( *. ) s q = { r = s *. q.r; i = s *. q.i; j = s *. q.j; k = s *. q.k }
let this_is_valid x = +x + +10. *. i
OCaml has two kinds of operators - prefix and infix. The prefix operators precede expressions and infix occur in between the two expressions, e.g., in !foo we have the prefix operator ! coming before the expression foo and in 2 + 3 we have the infix operator + between expressions 2 and 3.
Operators are like normal functions except that they have a different syntax for application (aka calling). Whilst functions are applied to an arbitrary number of arguments using a simple syntax of juxtaposition, e.g., f x1 x2 x3 x4 [1], operators can have only one (prefix) or two (infix) arguments. Prefix operators are very close to normal functions, cf. f x and !x, but they have higher precedence (bind tighter) than normal function application. Contrary to that, infix operators, since they are put between two expressions, enable a more natural syntax, e.g., x + y vs. (+) x y, but have lower precedence (bind less tightly) than normal function application. Moreover, they enable chaining several operators in a row, e.g., x + y + z is interpreted as (x + y) + z, which is much more readable than add (add x y) z.
Operators in OCaml are distinguished purely syntactically. This means that the kind of an operator is fully determined by the first character of that operator, not by a special compiler directive as in some other languages (i.e., there is no infix + directive in OCaml). If an operator starts with a prefix symbol, e.g., !, ?#, ~%, it is considered prefix, and if it starts with an infix symbol then it is, correspondingly, an infix operator.
The - and -. operators are treated specially and can appear both as prefix and infix. E.g., in 1 - -2 we have - occurring in both the infix and the prefix position. However, it is only possible to disambiguate between the infix and prefix versions of the - (and -.) operators when they occur together with other operators (infix or prefix); in a general expression, the - operator is treated as infix. E.g., max 0 -1 is interpreted as (max 0) - 1 (remember that operators have lower precedence than function application, so when the two appear without parentheses, functions are applied first and operators after that). Another example is Some -1, which is interpreted as Some - 1, not as Some (-1). To disambiguate such code, you can use either parentheses, e.g., max 0 (-1), or the prefix-only versions, e.g., Some ~-1 or max 0 ~-1.
As a matter of personal style, I actually prefer parentheses as it is usually hard to keep these rules in mind when you read the code.
[1] Purists will say that functions in OCaml can have only one argument and f x1 x2 x3 x4 is just (((f x1) x2) x3) x4, which would be a totally correct statement, but a little bit irrelevant to the current discussion.
I have recently sat a computing exam at university in which we were never taught beforehand about the modulus function or any other odd/even check, and we had no access to external documentation except our previous lecture notes. Is it possible to do this without these, and how?
Bitwise AND (&)
Extract the last bit of the number using the bitwise AND operator. If the last bit is 1 the number is odd; otherwise it's even. This is the simplest and most efficient way of testing it. Examples in some languages:
C / C++ / C#
bool is_even(int value) {
    return (value & 1) == 0;
}
Java
public static boolean is_even(int value) {
    return (value & 1) == 0;
}
Python
def is_even(value):
    return (value & 1) == 0
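This works because every bit except the last contributes an even amount (a multiple of 2) to the value, so the last bit alone decides the parity. A quick check in a Python REPL:

>>> bin(10), 10 & 1   # 0b1010 ends in 0: even
('0b1010', 0)
>>> bin(7), 7 & 1     # 0b111 ends in 1: odd
('0b111', 1)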
I assume this is only for integer numbers as the concept of odd/even eludes me for floating point values.
For these integer numbers, checking the Least Significant Bit (LSB) as proposed by Rotem is the most straightforward method, but there are many other ways to accomplish it.
For example, you could use integer division as a test. This is one of the most basic operations and it is implemented on virtually every platform. The result of an integer division is always another integer. For example:
>> x = int64( 13 ) ;
>> x / 2
ans =
7
Here I cast the value 13 to int64 to make sure MATLAB treats the number as an integer instead of as a double.
Also, here the result is actually rounded to the next integral value. This is MATLAB-specific behaviour; other platforms might round down, but it does not matter for us: the only behaviour we rely on is consistent rounding, whichever way it goes. The rounding allows us to define the following behaviour:
If a number is even: Dividing it by 2 will produce an exact result, such that if we multiply this result by 2, we obtain the original number.
If a number is odd: Dividing it by 2 will result in a rounded result, such that multiplying it by 2 will yield a different number than the original input.
Now that you have the logic worked out, the code is pretty straightforward:
%% sample input
x = int64(42) ;
y = int64(43) ;

%% define the checking function
% uses only the multiplication and division operators, no high-level function
is_even = @(x) int64(x) == (int64(x)/2)*2 ;
And obviously, this will yield:
>> is_even(x)
ans =
1
>> is_even(y)
ans =
0
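The same divide-and-multiply trick carries over to any language with integer division; a minimal Python sketch (here // is floor division, but as noted above, any consistent rounding works):

def is_even(x):
    # x // 2 discards the remainder, so the round trip reproduces x only when x is even
    return (x // 2) * 2 == x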
I found out from a fellow student how to solve this simply with maths instead of functions.
Using (-1)^n:
If n is odd, then the outcome is -1.
If n is even, then the outcome is 1.
This is some pretty out-of-the-box thinking, but it would be the only way to solve this without prior knowledge of more complex functions, including mod.
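Expressed as code, the trick is a one-liner; a tiny Python sketch (meaningful for integer n):

def is_even(n):
    return (-1) ** n == 1   # (-1)**n is 1 for even n and -1 for odd n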
I recently came across this code example in Mercury:
append(X,Y,Z) :-
X == [],
Z := Y.
append(X,Y,Z) :-
X => [H | T],
append(T,Y,NT),
Z <= [H | NT].
Being a Prolog programmer, I wonder: what's the difference between a normal unification =
and the := or => which are use here?
In the Mercury reference, these operators get different priorities, but they don't explain the difference.
First let's re-write the code using indentation:
append(X, Y, Z) :-
    X == [],
    Z := Y.
append(X, Y, Z) :-
    X => [H | T],
    append(T, Y, NT),
    Z <= [H | NT].
The code above isn't real Mercury code; it is pseudo-code. It doesn't make sense as real Mercury code because the <= and => operators are used for typeclasses (IIRC), not unification. Additionally, I haven't seen the := operator before, so I'm not sure what it does.
In this style of pseudo-code, I believe the author is trying to show that := is an assignment type of unification, where Z is assigned the value of Y. Similarly, => shows a deconstruction of X, and <= shows a construction of Z. Likewise, == shows an equality test between X and the empty list. All of these operations are types of unification. The compiler knows which type of unification should be used for each mode of the predicate. For this code, the mode that makes sense is append(in, in, out).
Mercury is different from Prolog in this respect: it knows which type of unification to use, and therefore it can generate more efficient code and ensure that the program is mode-correct.
One more thing, the real Mercury code for this pseudo code would be:
:- pred append(list(T)::in, list(T)::in, list(T)::out) is det.

append(X, Y, Z) :-
    X = [],
    Z = Y.
append(X, Y, Z) :-
    X = [H | T],
    append(T, Y, NT),
    Z = [H | NT].

Note that every unification is a =, and that a predicate and mode declaration has been added.
In concrete Mercury syntax the operator := is used for field updates.
These operators may no longer be usable in recent Mercury releases, but they are in fact specialized unifications, according to the description in Nancy Mazur's thesis.
'=>' stands for deconstruction, e.g. X => f(Y1, Y2, ..., Yn), where X is input and all the Yn are output; it is semidet. '<=' is the opposite, and is det. '==' is used where both sides are ground, and it is semidet. ':=' is just like a regular assignment operator in any other language, and it is det. In older papers I have even seen '==' used instead of '=>' to perform a deconstruction.