Resetting/deleting/forgetting variables in Mathematica notebooks

I am computing some formulae in a notebook. Suppose I define a function
Myf[x_] := Sin[c*x] + Tanh[x/c]*Exp[-x]
and then compute
Integrate[Myf[y], {y, -1, 1}]
Now, just to do some sanity check, I define c as
c = 1
and evaluate Integrate[Myf[y], {y, -1, 1}] to get
1/E - E + 2 ArcCot[1/E] - 2 ArcCot[E]
Now, even if I delete the c = 1 line, Integrate[Myf[y], {y, -1, 1}] still evaluates to
1/E - E + 2 ArcCot[1/E] - 2 ArcCot[E]
instead of the unsubstituted
(1/(-2 + c)) E^(-1 - 2/c) (c E^2 Hypergeometric2F1[1, 1 - c/2, 2 - c/2, -E^(-2/c)] -
  E^(2/c) (c E^(2/c) Hypergeometric2F1[1, 1 - c/2, 2 - c/2, -E^(2/c)] +
    (-2 + c) (E^2 Hypergeometric2F1[1, -(c/2), 1 - c/2, -E^(-2/c)] -
      Hypergeometric2F1[1, -(c/2), 1 - c/2, -E^(2/c)])))
How do I delete/forget the value of c for the notebook once I have defined it?
What is the best way to deal with these situations? I suppose people use Substitute or something like that.

Apparently, x=. or Clear[x] clears x.

Quit[]
This function quits the kernel, clearing all the variables and other state accumulated since the notebook was opened.
You can also try:
ClearAll["Global`*"]

Related

How to convert the following if conditions to Linear integer programming constraints?

These are the conditions:
if(x > 0)
{
y >= a;
z <= b;
}
It is quite easy to convert the conditions into linear programming constraints if x were a binary variable, but I have not found a way to do this for continuous x.
You can do this in 2 steps
Step 1: Introduce a binary dummy variable
Since x is continuous, we introduce a binary 0/1 dummy variable. Let's call it x_positive.
If x > 0 then we want x_positive = 1. We can achieve that via the following constraint, where M is a very large number (an upper bound on x):
x <= x_positive * M
(LP solvers only accept non-strict inequalities, so we use <= rather than <.)
Note that this forces x_positive to become 1, if x is itself positive. If x is negative, x_positive can be anything. (We can force it to be zero by adding it to the objective function with a tiny penalty of the appropriate sign.)
Step 2: Use the dummy variable to implement the next 2 constraints
In English: if x_positive = 1, then y >= a
However, if x_positive = 0, y can be anything (y > -inf)
y >= a - M * (1 - x_positive)
Similarly,
if x_positive = 1, then z <= b
z <= b + M * (1 - x_positive)
Both the linear constraints above will kick in if x>0 and will be trivially satisfied if x <=0.
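The two big-M encodings above can be sanity-checked with a small brute-force sketch in Python (the feasibility helper and the sample values a = 10, b = 20, M = 10^6 are my own illustration, not part of any solver API):

```python
# Brute-force sanity check of the big-M encoding described above:
#   x <= M * x_pos               -> forces x_pos = 1 whenever x > 0
#   y >= a - M * (1 - x_pos)     -> enforces y >= a only when x_pos = 1
#   z <= b + M * (1 - x_pos)     -> enforces z <= b only when x_pos = 1

M = 1_000_000  # "big M": larger than any value x, y, z can take

def feasible(x, x_pos, y, z, a, b):
    return (x <= M * x_pos
            and y >= a - M * (1 - x_pos)
            and z <= b + M * (1 - x_pos))

# When x > 0, x_pos = 0 is infeasible, so a solver must pick x_pos = 1,
# which activates y >= a and z <= b.
assert not feasible(x=5, x_pos=0, y=0, z=0, a=10, b=20)
assert feasible(x=5, x_pos=1, y=10, z=20, a=10, b=20)
assert not feasible(x=5, x_pos=1, y=9, z=20, a=10, b=20)   # y < a rejected

# When x <= 0, x_pos = 0 is allowed and y, z are effectively unconstrained.
assert feasible(x=0, x_pos=0, y=-999, z=999, a=10, b=20)
```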

Calculating Time Complexity of a recursive function

T(n) = T(cn) + T((1 - c)n) + 1, where 0 < c < 1
Base level:
if(n<=1) return;
data type - positive integers
I have to find the Big-Theta function of this recursive function.
I've tried to unfold the recurrence, but it gets more complicated at every level and no pattern emerges.
I also tried this -
assume that c<(1-c).
so -
2T(cn) + 1 <= T(cn) + T((1-c)n)+1 <= 2T((1-c)n)+1
It gave me some lower bound and upper bound but not a theta bound :(
As c approaches either 0 or 1, the recursion approaches T(n) = T(n-1) + 2 (assuming that T(0) = 1 as well). This has as a solution the linear function T(n) = 2n - 1 for n > 0.
For c = 1/2, the recursion becomes T(n) = 2T(n/2) + 1. It looks like T(n) = 2n - 1 is a solution to this for n > 0.
This seems like strong evidence that the function T(n) = 2n - 1 is a solution for all c: it works on both ends and in the middle. If we sub in...
2n - 1 = 2cn - 1 + 2(1-c)n - 1 + 1
= 2cn - 1 + 2n - 2cn - 1 + 1
= 2n - 1
We find that T(n) = 2n - 1 is a solution for the general case.
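The claim can also be checked numerically; here is a small Python sketch that evaluates the recurrence directly for c = 1/2, so that the subproblem sizes stay exact on powers of two:

```python
# Numeric check of the claim T(n) = 2n - 1 for the recurrence
# T(n) = T(cn) + T((1 - c)n) + 1 with base case T(n) = 1 for n <= 1,
# evaluated at c = 1/2 on powers of two.

def T(n, c=0.5):
    if n <= 1:
        return 1
    return T(c * n, c) + T((1 - c) * n, c) + 1

for n in [2, 4, 8, 16, 64, 1024]:
    assert T(n) == 2 * n - 1
```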

How do you calculate combined orders of growth?

Suppose I have a recursive procedure with a formal parameter p. This procedure
wraps the recursive call in a Θ(1) (deferred) operation
and executes a Θ(g(k)) operation before that call.
k is dependent upon the value of p. [1]
The procedure calls itself with the argument p/b where b is a constant (assume it terminates at some point in the range between 1 and 0).
Question 1.
If n is the value of the argument to p in the initial call to the procedure, what are the orders of growth of the space and the number of steps executed, in terms of n, for the process this procedure generates
if k = p? [2]
if k = f(p)? [3]
Footnotes
[1] i.e., upon the value of the argument passed into p.
[2] i.e., the size of the input to the nested operation is same as that for our procedure.
[3] i.e., the size of the input to the nested operation is some function of the input size of our procedure.
Sample procedure
(define (* a b)
  (cond ((= b 0) 0)
        ((even? b) (double (* a (halve b))))
        (else (+ a (* a (- b 1))))))
This procedure performs integer multiplication as repeated additions based on the rules
a * b = double (a * (b / 2)) if b is even
a * b = a + (a * (b - 1)) if b is odd
a * b = 0 if b is zero
Pseudo-code:
define *(a, b) as
{
if (b is 0) return 0
if (b is even) return double of *(a, halve (b))
else return a + *(a, b - 1)
}
Here
the formal parameter is b.
argument to the recursive call is b/2.
double x is a Θ(1) operation like return x + x.
halve k is Θ(g(k)) with k = b i.e., it is Θ(g(b)).
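For concreteness, the pseudo-code can be run in Python with a step counter added (the names times, double, and halve mirror the pseudo-code; halve is taken to be Θ(1) here, which is my own simplifying assumption):

```python
def double(x):
    # The Theta(1) deferred operation from the text.
    return x + x

def halve(x):
    # Stands in for the Theta(g(b)) operation; here it is Theta(1).
    return x // 2

def times(a, b, steps=0):
    """Integer multiplication by repeated addition, as in the pseudo-code.

    Returns (product, number of recursive steps taken)."""
    if b == 0:
        return 0, steps
    if b % 2 == 0:
        p, steps = times(a, halve(b), steps + 1)
        return double(p), steps
    p, steps = times(a, b - 1, steps + 1)
    return a + p, steps

assert times(7, 13)[0] == 91
# With halve in Theta(1), each pair of steps at least halves b,
# so the number of recursive steps is Theta(log b):
_, steps = times(3, 1024)
assert steps <= 2 * (1024).bit_length()
```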
Question 2.
What will be the orders of growth, in terms of n, when *(a, n) is evaluated?
Before You Answer
Please note that the primary questions are the two parts of question 1.
Question 2 can be answered as the first part. For the second part, you can assume f(p) to be any function you like: log p, p/2, p^2 etc.
I saw someone has already answered question 2, so I'll answer question 1 only.
The first thing to notice is that the two parts of the question are equivalent. In the first part, k = p, so we execute a Θ(g(p)) operation for some function g. In the second, k = f(p), and we execute a Θ(g(f(p))) = Θ((g∘f)(p)) operation. Replace g from the first part by g∘f and the second part is solved.
Thus, let's consider the first case only, i.e. k=p. Denote the time complexity of the recursive procedure by T(n) and we have that:
T(n) = T(n/b) + g(n) [The free term should be multiplied by a constant c, but we can talk about complexity in "amount of c's" and the theta bound will obviously remain the same]
The solution of the recursive formula is T(n) = g(n) + g(n/b) + ... + g(n/b^i) + ... + g(1)
We cannot simplify it further unless given additional information about g. For example, if g is a polynomial, g(n) = n^k, we get that
T(n) = n^k * (1 + b^-k + b^-2k + b^-3k + ... + b^(-k*log_b(n))) <= n^k * 1/(1 - b^-k) <= n^k * c for a constant c, thus T(n) = Θ(n^k).
But, if g(n) = log_b(n), [from now on I omit the base of the log] we get that T(n) = log(n) + log(n/b) + ... + log(n/(b^log_b(n))) = log(n^log(n) * 1/b^(1 + 2 + ... + log(n))) = log(n)^2 - log(n)^2 / 2 - log(n) / 2 = Θ(log(n) ^ 2) = Θ(g(n)^2).
You can easily prove, using a proof similar to the polynomial case, that when g = Ω(n), i.e., at least linear, the complexity is Θ(g(n)). But when g is sublinear the complexity may well be bigger than g(n), as g(n/b) may be much bigger than g(n)/b.
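Both regimes can be checked numerically; the following Python sketch (with b = 2 and tolerance bounds of my own choosing) evaluates the sum g(n) + g(n/b) + ... + g(1) directly:

```python
import math

# T(n) = g(n) + g(n/b) + ... + g(1), here with b = 2.
def T(n, g):
    total = 0.0
    while n >= 1:
        total += g(n)
        n /= 2
    return total

# Polynomial g(n) = n^2: the sum is dominated by its first term,
# so T(n) = Theta(n^2); the ratio is bounded by 1/(1 - b^-k) = 4/3.
r_poly = T(2**20, lambda n: n**2) / (2**20)**2
assert 1.0 <= r_poly <= 2.0

# Logarithmic g(n) = log2(n): the sum grows like log(n)^2 / 2,
# which is much bigger than g(n) itself.
r_log = T(2**20, math.log2) / (math.log2(2**20)**2 / 2)
assert 0.9 <= r_log <= 1.3
```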
You need to apply the worst-case analysis.
First, you can approximate the solution by using powers of two:
If b = 2^k, then clearly the algorithm takes k steps (where k = log2(b)).
If b is an odd number, then after applying -1 you get an even number and you divide by 2; you can repeat this only log2(b) times, so the number of steps is at most about 2*log2(b). The case of b being an odd number is clearly the worst case, and this gives you the answer.
(I think you need an additional base case for b = 1.)

Mathematica: how to find the max of an expression with exponents as parameters

I'm using Mathematica 8 to find an analytic solution to the max of an expression. When I use the Maximize command to try to find a solution, it just repeats what I entered, implying that Mathematica doesn't know how to do it. I've narrowed down the problem to this: it seems like if there is an exponent that is a parameter, Maximize doesn't work. Here's an example. This is the likelihood function from a Bernoulli trial, where a and b are the successes and failures.
Maximize[{t^a*(1 - t)^b, {t >= 0, t <= 1, a > 0, b > 0}}, {t}]
What I would like to get as a solution is a/(a+b) in this case. If I provide constants like 3 and 2 instead of a and b then it finds the solution.
Is there a different way to specify the expression or the constraints so that Mathematica can find a maximum to expressions with exponents that are parameters? I feel like there's something I'm missing because this is so simple.
I've been playing with it, i.e. moving conditions, changing expression form, removing conditions, and I can't get Maximize to behave, either. However, this can be solved directly, as follows
Solve[ D[ t^a (1 - t)^b, t ] == 0, t]
which gives, as you said, {{t -> a/(a + b)}}. Sometimes Reduce can be used to help understand why a function like Maximize misbehaves by giving a more complete picture of the solution space. It is invoked like Solve, as follows
Reduce[ D[ t^a (1 - t)^b, t ] == 0, t]
giving
((-1 + t) t != 0 && a == 0 && b == 0) ||
(a + b != 0 && a b != 0 && t == a/(a + b)) ||
(Re[b] > 1 && t == 1) ||
(Re[a] > 1 && t == 0)
which isn't all that helpful, in this case.
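As a cross-check outside Mathematica, a plain grid search reproduces the analytic optimum t -> a/(a + b) (the grid resolution and the sample values a = 3, b = 2 are arbitrary choices of mine):

```python
# Maximize t^a * (1 - t)^b on [0, 1] by grid search
# and compare with the analytic optimum a/(a + b).
a, b = 3, 2
grid = [i / 100000 for i in range(100001)]
t_star = max(grid, key=lambda t: t**a * (1 - t)**b)
assert abs(t_star - a / (a + b)) < 1e-4   # analytic optimum: 3/5
```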
The Maximize function in Mathematica, applied to an expression with symbolic exponents, only works if you maximize with respect to all parameters (a, b, and t in your case). Here you maximize with respect to t only, which does not work.
Consider this easy example (using Mathematica 8.0):
Maximize[{Exp[a + b], a <= 1, b <= 1}, {a, b}]
Maximize[{Exp[a + b], a <= 1, b <= 1}, {a}]
Maximize[{a + b, a <= 1, b <= 1}, {a}]

In csh, why does 4 - 3 + 1 == 0?

#!/bin/csh
@ cows = 4 - 3 + 1
echo $cows
This simple csh script when run produces "0" for output when I'd expect "2".
~root: csh simple.1
0
I did a bunch of looking and the only thing I could think of was that the "-" was being read as a unary negation rather than subtraction, therefore changing operator precedence and ending up with 4 - 4 rather than 2 + 1. Is this correct? If so, any reason why? If not...help!
Edit: So they're right associative! These operators are NOT right associative in C, are they? Is C-Shell that different from C?
While you are expecting the operators to be left associative, they are right associative in csh, so it's evaluated as 4-(3+1)
  -
 / \
4   +
   / \
  3   1
The + and - operators are right-associative in csh. This means that '4 - 3 + 1' is evaluated as '4 - (3 + 1)'.
Operator grouping. It's reading the operation as 4 - (3 + 1), as opposed to (4 - 3) + 1.