Maxima: how to do formal calculations with complex numbers? - complex-numbers

I am new to Maxima and I can't find in the documentation how to do formal calculations with complex numbers. When I use unspecified variables, Maxima seems to assume that they are real:
conjugate(x) returns x, for instance.
Is there any way to solve this issue?
Thanks in advance.

You can declare a variable complex:
(%i1) declare(x, complex) $
(%i2) conjugate(x);
(%o2) conjugate(x)
(%i3) conjugate(realpart(x));
(%o3) realpart(x)
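Other functions respect the declaration too; for instance, rectform will now decompose x symbolically instead of treating it as real (a quick sketch continuing the session above; display details may vary by version):
(%i4) rectform(x);
(%o4) %i imagpart(x) + realpart(x)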

Here are some tests with the to_poly package:
(%i1) load(to_poly);
(%o1) home/a/maxima/share/to_poly_solve/to_poly.lisp
(%i2) r;
(%o2) r
(%i3) isreal_p(r);
(%o3) true /* When I use unspecified variables, Maxima seems to assume that they are real */
(%i4) z:x+y*%i;
(%o4) %i y + x
(%i5) isreal_p(z);
(%o5) isreal_p(%i y) /* maxima can't check if it is real or not */
(%i6) isreal_p(x);
(%o6) true
(%i7) isreal_p(y);
(%o7) true
(%i8) complex_number_p(z);
(%o8) false
(%i9) declare(z, complex);
(%o9) done
(%i10) complex_number_p(z);
(%o10) false /* still not complex */
It seems more "complex".
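The last false is expected, by the way: complex_number_p tests whether its argument is a literal complex constant a + b*%i with numeric a and b, not whether a symbol has been declared complex. For instance, continuing the session:
(%i11) complex_number_p(2 + 3*%i);
(%o11) true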

Related

simplifying composite function (diff) in maxima (chain rule for differentiation)

How can the following formula be simplified in Maxima:
diff(h((x-1)^2),x,1)
Mathematically it should be: 2*(x-1)*h'((x-1)^2)
But Maxima gives: d/dx h((x-1)^2)
Maxima doesn't apply the chain rule by default, but there is an add-on package named pdiff (which is bundled with the Maxima installation) which can handle it.
pdiff means "positional derivative" and it uses a different, more precise, notation to indicate derivatives. I'll try it on the expression you gave.
(%i1) load ("pdiff") $
(%i2) diff (h((x - 1)^2), x);
                    2
(%o2) 2 h   ((x - 1) ) (x - 1)
         (1)
The subscript (1) indicates a first derivative with respect to the argument of h. You can convert the positional derivative to the notation which Maxima usually uses.
(%i3) convert_to_diff (%);
                                !
                   d            !
(%o3) 2 (x - 1) (----- (h(g485))!               )
                 dg485          !              2
                                !g485 = (x - 1)
The made-up variable name g485 is just a place-holder; the name of the variable could be anything (and if you run this again, chances are you'll get a different variable name).
At this point you can substitute for h or x to get some specific values. Note that ev(something, nouns) means to call any quoted (evaluation postponed) functions in something; in this case, the quoted function is diff.
(%i4) ev (%, h(u) := sin(u));
                                  !
                   d              !
(%o4) 2 (x - 1) (----- (sin(g485))!               )
                 dg485            !              2
                                  !g485 = (x - 1)
(%i5) ev (%, nouns);
                   2
(%o5) 2 cos((x - 1) ) (x - 1)
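If you'd rather avoid pdiff, another common trick is to tell Maxima the derivative of h yourself with gradef; here dh is just a made-up name standing for h' (output shown in linear form):
(%i1) gradef(h(u), dh(u)) $
(%i2) diff(h((x - 1)^2), x);
(%o2) 2*(x - 1)*dh((x - 1)^2)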

Proving linear equations/Inequalities automatically

I'm looking for a tool for determining whether a given set of linear equations/inequalities (A) entails another given set of linear equations/inequalities (B). The return value should be either 'true' or 'false'.
To illustrate, let's look at possible instances of A and B and the expected return value of the algorithm:
A: {Z=3,Y=Z+2, X < Y} ;
B: {X<5} ;
Expected result: true
A: {Z=3,Y=Z+2, X < Y} ;
B: {X<10} ;
Expected result: true
A: {Z=3,Y=Z+2, X < Y} ;
B: {X=3} ;
Expected result: false
A: {X<=Y,X>=Y} ;
B: {X=Y} ;
Expected result: true
A: {X<=Y,X>=Y} ;
B: {X=Y, X>Z+2} ;
Expected result: false
Typically A contains up to 10 equations/inequalities, and B contains 1 or 2. All of them are linear and relatively simple. We may even assume that all variables are integers.
This task - of determining whether A entails B - is part of a bigger system and therefore I'm looking for tools/source code/packages that already implemented something like that and I can use.
Things I started to look at:
Theorem provers with algebra - Otter, EQP and Z3 (Vampire is currently unavailable for download).
Coq formal proof management system.
Linear Programming.
However, my experience with these tools is very limited, and so far I haven't found a promising direction. Any guidelines and ideas from people more experienced than me will be highly appreciated.
Thank you for your time!
I think I found a working solution. The problem can be rephrased as an assignment problem and then it can be solved by theorem provers such as Z3 and with some work probably also by linear programming solvers.
To illustrate, let's look at the first example I gave above:
A: {Z=3, Y=Z+2, X<Y} ;
B: {X<5} ;
Determining whether A entails B is equivalent to determining whether it is impossible for A to be true while B is false; formally, A entails B exactly when the conjunction of A with the negation of B is unsatisfiable. This is a simple logical equivalence. In our case, it means that rather than checking whether the condition in B follows from the ones in A, we can check whether there is no assignment of X, Y and Z that satisfies the conditions in A but falsifies the one in B.
Phrased this way, as a question about the existence of an assignment, the task can be handed to a theorem prover such as Z3. The following code checks whether the conditions in A are satisfiable while the one in B is not:
(declare-fun x () Int)
(declare-fun y () Int)
(declare-fun z () Int)
; the conditions in A
(assert (= z 3))
(assert (= y (+ z 2)))
(assert (< x y))
; the negation of the condition in B
(assert (not (< x 5)))
(check-sat)
(get-model)
(exit)
Z3 reports unsat: there is no model that satisfies these conditions. It is therefore impossible for A to hold while B fails, and we can conclude that B follows from A. (Because the result is unsat, the (get-model) command has nothing to print and could be omitted.)
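For contrast, here is the same encoding for the third example above (B: {X=3}, which A should not entail); this time Z3 should report sat and produce a witness assignment (e.g. x = 0, y = 5, z = 3), showing that A does not entail B:
(declare-fun x () Int)
(declare-fun y () Int)
(declare-fun z () Int)
(assert (= z 3))
(assert (= y (+ z 2)))
(assert (< x y))
; the negation of the condition in B
(assert (not (= x 3)))
(check-sat)
(get-model)
(exit)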

Unsigned to signed without comparison

To convert a 32-bit unsigned integer to a signed integer, one could use:
function convert(n)
  if n >= 2 ^ 31 then
    return n - 2 ^ 32
  end
  return n
end
Is it possible to do it without that comparison?
PS: This is Lua, hence I cannot "cast" as in C.
Maybe you could do it with bit operations. In Smalltalk that would be:
^self - (2*(self bitAnd: 16r80000000))
Apparently bit operations are not native in Lua, but various bit libraries seem to be available; see http://lua-users.org/wiki/BitwiseOperators
Once you find an appropriate bitand function, that would be something like
return n - bitand(n, 0x80000000) * 2
where 0x80000000 is 2^31, the sign bit, so the subtraction only takes effect when that bit is set.
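For instance, with the bit32 library that ships with Lua 5.2 this becomes (a sketch; LuaJIT's bit.band would work the same way):
-- 0x80000000 masks the sign bit; subtracting twice its masked value
-- maps [2^31, 2^32) onto [-2^31, 0) and leaves smaller values alone.
local function convert(n)
  return n - 2 * bit32.band(n, 0x80000000)
end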
Not in plain Lua. You could of course optimize the exponentiation and the if-statement away by writing:
local MAXINT, SUBT = math.pow(2, 31), math.pow(2, 32)

function convert(n)
  -- Like C's ternary operator
  return (n >= MAXINT and n - SUBT) or n
end
I do not know whether optimizing the if-statement away will help the interpreter much; probably not for LuaJIT, I think, but perhaps for plain Lua?
If you really want to avoid the comparison, go for C, e.g. (untested code!):
int convert(lua_State *L)
{
    lua_pushinteger(L, (int) ((unsigned int) luaL_checklong(L, 1)));
    return 1;
}
However, stack overhead will probably defeat the purpose.
Any specific reason for the micro-optimization?
Edit: I've been thinking about this, and it is actually possible in plain Lua:
local DIV, SUBT = math.pow(2, 31), math.pow(2, 32)

-- n MUST be an integer in [0, 2^32)!
function convert(n)
  -- math.floor(n / DIV) evaluates to 0 for integers 0 through 2^31 - 1;
  -- otherwise it is 1 and SUBT is subtracted.
  return n - (math.floor(n / DIV) * SUBT)
end
I'm unsure whether it will improve performance; the division would have to be faster than the conditional jump.
Technically however, this answers the question and avoids the comparison.
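A quick sanity check of the boundary values (using the convert above):
print(convert(0))          --> 0
print(convert(2^31 - 1))   --> 2147483647
print(convert(2^31))       --> -2147483648
print(convert(2^32 - 1))   --> -1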

":=" and "=>" in Mercury

I recently came across this code example in Mercury:
append(X, Y, Z) :-
    X == [],
    Z := Y.
append(X, Y, Z) :-
    X => [H | T],
    append(T, Y, NT),
    Z <= [H | NT].
Being a Prolog programmer, I wonder: what's the difference between a normal unification = and the := or => which are used here?
In the Mercury reference manual these operators are given different priorities, but the difference in meaning isn't explained.
The code above isn't real Mercury code; it is pseudo code. It doesn't make sense as real Mercury code because the <= and => operators are used for typeclasses (IIRC), not unification. Additionally, I haven't seen the := operator before, and I'm not sure what it does.
In this style of pseudo code, I believe the author is trying to show that := is an assignment kind of unification, where Z is assigned the value of Y. Similarly, => shows a deconstruction of X, <= shows a construction of Z, and == shows an equality test between X and the empty list. All of these operations are kinds of unification. The compiler knows which kind of unification should be used for each mode of the predicate; for this code the mode that makes sense is append(in, in, out).
Mercury is different from Prolog in this respect: because it knows which kind of unification to use, it can generate more efficient code and ensure that the program is mode-correct.
One more thing, the real Mercury code for this pseudo code would be:
:- pred append(list(T)::in, list(T)::in, list(T)::out) is det.

append(X, Y, Z) :-
    X = [],
    Z = Y.
append(X, Y, Z) :-
    X = [H | T],
    append(T, Y, NT),
    Z = [H | NT].
Note that every unification is now a plain =, and a predicate and mode declaration has been added.
In concrete Mercury syntax the operator := is used for field updates.
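For example (a hypothetical record type, just to illustrate the field-update syntax):
:- type employee ---> employee(name :: string, salary :: int).

:- pred give_raise(employee::in, employee::out) is det.
give_raise(E0, E) :-
    NewSalary = E0 ^ salary + 1000,
    E = (E0 ^ salary := NewSalary).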
These operators (':=', '<=', '=>', '==') may no longer be usable in recent Mercury releases, but they are in fact specialized unifications, according to the description in Nancy Mazur's thesis.
'=>' stands for a deconstruction, e.g. X => f(Y1, Y2, ..., Yn), where X is input and every Yi is output; it is semidet. '<=' is the opposite, a construction, and is det. '==' is used in situations where both sides are ground, and it is semidet. ':=' is just like the regular assignment operator in any other language, and it is det. In older papers '==' is even used instead of '=>' to perform a deconstruction.

Finite difference in Haskell, or how to disable potential optimizations

I'd like to implement the following naive (first order) finite differencing function:
finite_difference :: Fractional a => a -> (a -> a) -> a -> a
finite_difference h f x = ((f $ x + h) - (f x)) / h
As you may know, there is a subtle problem: one has to make sure that (x + h) and x differ by an exactly representable number. Otherwise, the result has a huge error, amplified by the fact that (f $ x + h) - (f x) involves catastrophic cancellation (and one also has to choose h carefully, but that is not my problem here).
In C or C++, the problem can be solved like this:
volatile double temp = x + h;
h = temp - x;
and the volatile modifier disables any optimization pertaining to the variable temp, so we are assured that a "clever" compiler will not optimize away those two lines.
I don't know enough Haskell yet to know how to solve this. I'm afraid that
let temp = x + h
    hh = temp - x
in ((f $ x + hh) - (f x)) / h
will get optimized away by Haskell (or the backend it uses). How do I get the equivalent of volatile here (if possible without sacrificing laziness)? I don't mind GHC-specific answers.
I have two solutions and a suggestion:
First solution: You can guarantee that this won't be optimized out with two helper functions and the NOINLINE pragma:
norm1 x h = x+h
{-# NOINLINE norm1 #-}
norm2 x tmp = tmp-x
{-# NOINLINE norm2 #-}
normh x h = norm2 x (norm1 x h)
This will work, but will introduce a small cost.
Second solution: Write the normalization function in C using volatile and call it through the FFI. The performance penalty will be minimal.
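A minimal sketch of that second approach (the C helper and its name are made up for illustration):
{-# LANGUAGE ForeignFunctionInterface #-}

-- Assumes a C file compiled alongside, containing:
--   double normh(double x, double h)
--   { volatile double temp = x + h; return temp - x; }
foreign import ccall unsafe "normh" normhC :: Double -> Double -> Double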
Now for the suggestion: currently the math isn't optimized out, so the naive version works properly with today's compiler. You're afraid it will break under a future compiler. I think this is unlikely, but not so unlikely that it isn't worth guarding against. So write some unit tests that cover the cases in question; then, if it does break in the future (for any reason), you'll know exactly why.
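Such a guard test can be tiny; for example (a sketch assuming the finite_difference from the question, with the normalization applied; the tolerance and sample point are arbitrary):
main :: IO ()
main = do
  -- the derivative of sin at 1.0 is cos 1.0
  let d = finite_difference 1e-8 sin 1.0
  if abs (d - cos 1.0) < 1e-6
    then putStrLn "finite_difference: ok"
    else error ("finite_difference broke: " ++ show d)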
One way is to look at the Core.
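For instance (hypothetical module name; the exact flags vary by GHC version):
$ ghc -O2 -ddump-simpl FiniteDifference.hs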
Specializing to Doubles (which will be the case most likely to trigger some optimization):
finite_difference :: Double -> (Double -> Double) -> Double -> Double
finite_difference h f x = ((f $ x + hh) - (f x)) / h
  where
    temp = x + h
    hh = temp - x
Is compiled to:
A.$wfinite_difference h f x =
  case f (case x of
            D# x' -> D# (+## x' (-## (+## x' h) x'))
         ) of
    D# x'' -> case f x of D# y -> /## (-## x'' y) h
And similarly (with even less rewriting) for the polymorphic version.
So while the variables are inlined, the math isn't optimized away.
Beyond looking at the Core, I can't think of a way to guarantee the property you want.
I don't think that
temp = unsafePerformIO $ return $ x + h
would get optimised out. Just a guess.