Mathematica Solve[] returns function with variable slots - vb.net

When I solve for a high-degree polynomial in Wolfram Mathematica, it returns a function with variable slots (the "#1"s) in it, like this:
In[1]:= Solve[p^4 + 4*p^4 (1 - p) + 10*p^4*(1 - p)^2 + 20*p^5*(1 - p)^3 + (
40*p^6*(1 - p)^4)/(1 - 2*p*(1 - p)) == x && 0 < p < 1 && 0 < x < 1, p, Reals]
Out[1]:= {{p -> Root[x - 2 x #1 + 2 x #1^2 - 15 #1^4 + 34 #1^5 - 28 #1^6 +
8 #1^7 &, 2]}}
How can I get it to give me the answer without the variable slots?
It's not as if it needs more information, because if I assign a value to x it will evaluate the expression completely:
In[2]:= x=0.7
Out[2]:= 0.7
In[3]:= Root[x - 2 x #1 + 2 x #1^2 - 15 #1^4 + 34 #1^5 - 28 #1^6 + 8 #1^7 &, 2]
Out[3]:= 0.583356
The Mathematica help shows this syntax under the reference for Root[] but gives no explanation.
I need to use this result in terms of x in a VB program so I need to know how to get rid of the #1's. Any help would be greatly appreciated, thank you!

This is more of a mathematical issue than a programming one. The fact is that there are plenty of mathematical functions whose roots cannot be expressed in closed form. The classic result is the Abel-Ruffini theorem, which states that the roots of the general polynomial of degree five or higher cannot be expressed in radicals. Representing the roots of such polynomials exactly is the whole purpose of Mathematica's Root object. Here's a simple example:
roots = x /. Solve[x^5 - x - 1 == 0, x]
(* Out:
{Root[-1 - #1 + #1^5 &, 1], Root[-1 - #1 + #1^5 &, 2],
Root[-1 - #1 + #1^5 &, 3], Root[-1 - #1 + #1^5 &, 4],
Root[-1 - #1 + #1^5 &, 5]}
*)
These are exact representations of the roots of the polynomial. They can be estimated to whatever precision you want:
N[roots, 20]
(* Out:
{1.1673039782614186843,
-0.76488443360058472603 - 0.35247154603172624932 I,
-0.76488443360058472603 + 0.35247154603172624932 I,
0.18123244446987538390 - 1.08395410131771066843 I,
0.18123244446987538390 + 1.08395410131771066843 I}
*)
Now, in your case, you are asking when a rational function of degree 7 in $p$ is equal to $x$. The answer is
Root[x - 2 x #1 + 2 x #1^2 - 15 #1^4 + 34 #1^5 - 28 #1^6 + 8 #1^7 &, 2]
This tells you that you need
x - 2 x p + 2 x p^2 - 15 p^4 + 34 p^5 - 28 p^6 + 8 p^7 = 0
and there is no simpler closed form. Now, if you set x=0.7, or some other decimal approximation, then you'll get a numerical estimate that's good for that particular x value. It's still not a closed form, though. For comparison, try x=7/10. You should get
x=7/10
Root[7 - 14 #1 + 14 #1^2 - 150 #1^4 + 340 #1^5 - 280 #1^6 + 80 #1^7 &, 2]
Now, you could certainly write a function f using the Root object to help you explore what's going on.
f[x_] = Root[x - 2 x #1 + 2 x #1^2 - 15 #1^4 + 34 #1^5 - 28 #1^6 + 8 #1^7 &,2];
f[0.7]
(* Out: 0.583356 *)
You can even plot it.
Plot[f[x], {x, 0, 1}]
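Since the Root object has no closed form to extract, the practical route for the VB.NET program is to solve the defining polynomial numerically at runtime. Here is a minimal sketch of that idea in Python (plain bisection on (0, 1), which is assumed to bracket the root you want; the same few lines port directly to VB.NET):

```python
def g(p, x):
    # The polynomial whose root the Root object encodes:
    # x - 2 x p + 2 x p^2 - 15 p^4 + 34 p^5 - 28 p^6 + 8 p^7
    return x - 2*x*p + 2*x*p**2 - 15*p**4 + 34*p**5 - 28*p**6 + 8*p**7

def solve_p(x, lo=0.0, hi=1.0, tol=1e-12):
    # Bisection: repeatedly halve the bracket [lo, hi] that contains
    # a sign change of g(., x).
    flo = g(lo, x)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        fmid = g(mid, x)
        if flo * fmid <= 0:
            hi = mid
        else:
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)

print(solve_p(0.7))  # about 0.583356, matching N[Root[...]] above
```

Any root finder will do here (Newton's method converges faster); bisection is shown because it needs nothing beyond the polynomial itself.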

Related

Calculating Time Complexity of a recursive function

T(n) = T(cn) + T((1 - c)n) + 1, where 0 < c < 1
Base case:
if (n <= 1) return;
The input n is a positive integer.
I have to find the Big-Theta bound of this recursive function.
I've tried to expand the recurrence, but it gets more complicated from level to level and no pattern emerges.
I also tried this:
assume that c < (1 - c).
Then
2T(cn) + 1 <= T(cn) + T((1-c)n) + 1 <= 2T((1-c)n) + 1
This gave me a lower bound and an upper bound, but not a Theta bound :(
As c approaches either 0 or 1, the recursion approaches T(n) = T(n-1) + 2 (assuming that T(0) = 1 as well). This has as a solution the linear function T(n) = 2n - 1 for n > 0.
For c = 1/2, the recursion becomes T(n) = 2T(n/2) + 1. It looks like T(n) = 2n - 1 is a solution to this for n > 0.
This seems like strong evidence that the function T(n) = 2n - 1 is a solution for all c: it works on both ends and in the middle. If we sub in...
2n - 1 = 2cn - 1 + 2(1-c)n - 1 + 1
= 2cn - 1 + 2n - 2cn - 1 + 1
= 2n - 1
We find that T(n) = 2n - 1 is a solution for the general case.
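A quick numeric sanity check of that closed form, assuming integer sizes and c = 1/2 (the floor/ceiling split below keeps the two halves summing to n):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = T(cn) + T((1-c)n) + 1 with c = 1/2, base case T(n) = 1 for n <= 1
    if n <= 1:
        return 1
    return T(n // 2) + T(n - n // 2) + 1

# T(n) = 2n - 1 holds for every n, not just powers of two
for n in [1, 2, 3, 8, 100, 1024]:
    assert T(n) == 2 * n - 1
```

The same assertion passes for any split n = k + (n - k) with 1 <= k < n, which matches the argument that the answer is independent of c.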

How to find a polynomial as an approximate solution to a nonlinear equation?

For my small FLOSS project, I want to approximate the Green et al. equation for maximum shear stress for point contact (the equation and its plot appear as images in the original post). The same equation in Maxima:
A: (3 / 2 / (1 + zeta^2) - 1 - nu + zeta * (1 + nu) * acot(zeta)) / 2;
Now, to find the maximum 𝜏max, I differentiate the above equation with respect to 𝜁:
diff(A, zeta);
trying to solve the derivative for 𝜁:
solve(diff(A, zeta), zeta);
I ended up with a multipage equation that I can't actually use or test.
Now I was wondering if I can find a polynomial
𝜁max = a + b * 𝜈 + c * 𝜈^2 + ...
that approximately solves the
diff(A, zeta) = 0
equation for 0 < 𝜈 < 0.5 and 0 < 𝜁 < 1.
(1) Probably the first thing to try is just to solve diff(A, zeta) = 0 numerically (via find_root in this case). Here is an approximate solution for one value of nu:
(%i2) A: (3 / 2 / (1 + zeta^2) - 1 - nu + zeta * (1 + nu) * acot(zeta)) / 2;
3
(nu + 1) zeta acot(zeta) + ------------- - nu - 1
2
2 (zeta + 1)
(%o2) -------------------------------------------------
2
(%i3) dAdzeta: diff(A, zeta);
(nu + 1) zeta 3 zeta
(nu + 1) acot(zeta) - ------------- - ------------
2 2 2
zeta + 1 (zeta + 1)
(%o3) --------------------------------------------------
2
(%i4) find_root (subst ('nu = 0.25, dAdzeta), zeta, 0, 1);
(%o4) 0.4643131929806135
Here I'll plot the approximate solution for different values of nu:
(%i5) plot2d (find_root (dAdzeta, zeta, 0, 1), [nu, 0, 0.5]) $
Let's plot that together with Eq. 10 which is the approximation derived in the paper by Green:
(%i6) plot2d ([find_root (dAdzeta, zeta, 0, 1), 0.38167 + 0.33136*nu], [nu, 0, 0.5]) $
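As a cross-check on find_root, the same computation can be sketched stand-alone in Python (bisection on dA/d𝜁, using acot(zeta) = atan(1/zeta) for zeta > 0; the bracket (0, 1) is assumed to contain exactly one sign change):

```python
from math import atan

def dAdzeta(zeta, nu):
    # Derivative of A with respect to zeta, as in Maxima's %o3;
    # acot(zeta) = atan(1/zeta) for zeta > 0.
    acot = atan(1.0 / zeta)
    return ((nu + 1) * acot
            - (nu + 1) * zeta / (zeta**2 + 1)
            - 3 * zeta / (zeta**2 + 1)**2) / 2

def zeta_max(nu, lo=1e-9, hi=1.0, tol=1e-12):
    # Bisection on the bracket (0, 1).
    flo = dAdzeta(lo, nu)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        fmid = dAdzeta(mid, nu)
        if flo * fmid <= 0:
            hi = mid
        else:
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)

print(zeta_max(0.25))  # about 0.4643131929806, matching %o4
```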
(2) I looked at some different ways to get a symbolic solution, and here is something that may be workable. Note that this is also an approximation, since it's derived from a Taylor series. You would have to check whether it works well enough.
Find a low-order Taylor series for acot and plug it into dAdzeta.
(%i7) acot_approx: taylor (acot(zeta), zeta, 1/2, 3);
1 1 2 1 3
4 (zeta - -) 8 (zeta - -) 16 (zeta - -)
2 2 2
(%o7)/T/ atan(2) - ------------ + ------------- + -------------- + . . .
5 25 375
(%i8) dAdzeta_approx: subst (acot(zeta) = acot_approx, dAdzeta);
(25 atan(2) - 10) nu + 25 atan(2) - 34
(%o8)/T/ --------------------------------------
50
1 1 2
(80 nu + 104) (zeta - -) (320 nu + 1184) (zeta - -)
2 2
- ------------------------ + ---------------------------
125 625
1 3
(640 nu + 11584) (zeta - -)
2
- ---------------------------- + . . .
9375
The approximate dAdzeta is a cubic polynomial in zeta, so we can solve it. The result is a big messy expression. The first two solutions are complex and the third is real, so I guess that's the one we want.
(%i9) zeta_max: solve (dAdzeta_approx = 0, zeta);
<large mess omitted here>
(%i10) grind (zeta_max[3]);
zeta = ((625*sqrt((22500*atan(2)^2+30000*atan(2)-41200)*nu^4
+(859500*atan(2)^2-1878000*atan(2)+926000)
*nu^3
+(9022725*atan(2)^2-15859620*atan(2)+7283316)
*nu^2
+(15556950*atan(2)^2-36812760*atan(2)
+19709144)
*nu+7371225*atan(2)^2-22861140*atan(2)
+17716484))
/(256*(10*nu+181)^2)
+((3*((9375*nu+9375)*atan(2)+4810*nu+6826))/(1280*nu+23168)
-((90*nu+549)*(1410*nu+4281))/((10*nu+181)*(80*nu+1448)))
/6+(90*nu+549)^3/(27*(10*nu+181)^3))
^(1/3)
-((1410*nu+4281)/(3*(80*nu+1448))
+((-1)*(90*nu+549)^2)/(9*(10*nu+181)^2))
/((625*sqrt((22500*atan(2)^2+30000*atan(2)-41200)*nu^4
+(859500*atan(2)^2-1878000*atan(2)+926000)
*nu^3
+(9022725*atan(2)^2-15859620*atan(2)+7283316)
*nu^2
+(15556950*atan(2)^2-36812760*atan(2)
+19709144)
*nu+7371225*atan(2)^2-22861140*atan(2)
+17716484))
/(256*(10*nu+181)^2)
+((3*((9375*nu+9375)*atan(2)+4810*nu+6826))
/(1280*nu+23168)
-((90*nu+549)*(1410*nu+4281))
/((10*nu+181)*(80*nu+1448)))
/6+(90*nu+549)^3/(27*(10*nu+181)^3))
^(1/3)+(90*nu+549)/(3*(10*nu+181))$
I tried some ideas to simplify the solution, but didn't find anything workable. Whether it's usable in its current form, I'll let you be the judge. Plotting the approximate solution along with the other two seems to show they're all pretty close together.
(%i18) plot2d ([find_root (dAdzeta, zeta, 0, 1),
0.38167 + 0.33136*nu,
rhs(zeta_max[3])],
[nu, 0, 0.5]) $
Here's a different approach, which is to calculate some approximate values by find_root and then assemble an approximation function which is a cubic polynomial. This makes use of a little function I wrote named polyfit. See: https://github.com/maxima-project-on-github/maxima-packages/tree/master/robert-dodier and then look in the polyfit folder.
(%i2) A: (3 / 2 / (1 + zeta^2) - 1 - nu + zeta * (1 + nu) * acot(zeta)) / 2;
3
(nu + 1) zeta acot(zeta) + ------------- - nu - 1
2
2 (zeta + 1)
(%o2) -------------------------------------------------
2
(%i3) dAdzeta: diff(A, zeta);
(nu + 1) zeta 3 zeta
(nu + 1) acot(zeta) - ------------- - ------------
2 2 2
zeta + 1 (zeta + 1)
(%o3) --------------------------------------------------
2
(%i4) nn: makelist (k/10.0, k, 0, 5);
(%o4) [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
(%i5) makelist (find_root (dAdzeta, zeta, 0, 1), nu, nn);
(%o5) [0.3819362006941755, 0.4148794361988409,
0.4478096487716516, 0.4808644852928955, 0.5141748609122403,
0.5478684611102143]
(%i7) load ("polyfit.mac");
(%o7) polyfit.mac
(%i8) foo: polyfit (nn, %o5, 3) $
(%i9) grind (foo);
[beta = matrix([0.4643142407230925],[0.05644202066198245],
[2.746081069103333e-4],[1.094924180450318e-4]),
Yhat = matrix([0.3819365703555216],[0.4148782994206623],
[0.4478104992708994],[0.4808650578507559],
[0.5141738631047557],[0.5478688029774219]),
residuals = matrix([-3.696613460890674e-7],
[1.136778178534303e-6],
[-8.504992477509354e-7],
[-5.725578604010018e-7],
[9.97807484637292e-7],
[-3.418672076538343e-7]),
mse = 5.987630959972099e-13,Xmean = 0.25,
Xsd = 0.1707825127659933,
f = lambda([X],
block([Xtilde:(X-0.25)/0.1707825127659933,X1],
X1:[1,Xtilde,Xtilde^2,Xtilde^3],
X1 . matrix([0.4643142407230925],
[0.05644202066198245],
[2.746081069103333e-4],
[1.094924180450318e-4])))]$
(%o9) done
Not sure which pieces are going to be most relevant, so I just returned several things. Items can be extracted via assoc. Here I'll extract the constructed function.
(%i10) assoc ('f, foo);
X - 0.25
(%o10) lambda([X], block([Xtilde : ------------------, X1],
0.1707825127659933
2 3
X1 : [1, Xtilde, Xtilde , Xtilde ],
[ 0.4643142407230925 ]
[ ]
[ 0.05644202066198245 ]
X1 . [ ]))
[ 2.746081069103333e-4 ]
[ ]
[ 1.094924180450318e-4 ]
(%i11) %o10(0.25);
(%o11) 0.4643142407230925
Plotting the function shows it is close to the values returned by find_root.
(%i12) plot2d ([find_root (dAdzeta, zeta, 0, 1), %o10], [nu, 0, 0.5]);
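For use outside Maxima, the fitted cubic is easy to re-evaluate; this Python sketch simply replays polyfit's lambda using the beta, Xmean, and Xsd values printed above:

```python
# Coefficients copied from the polyfit output (%o9) above.
beta = [0.4643142407230925, 0.05644202066198245,
        2.746081069103333e-4, 1.094924180450318e-4]
Xmean, Xsd = 0.25, 0.1707825127659933

def f(x):
    # Standardize the input, then evaluate the cubic in the
    # standardized variable, exactly as polyfit's lambda does.
    t = (x - Xmean) / Xsd
    return sum(b * t**k for k, b in enumerate(beta))

print(f(0.25))  # 0.4643142407230925, matching %o11
```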

Different big O notation for same calculation(Cracking the coding interview)

In Cracking the Coding Interview, 6th edition, page 6, the amortized time for insertion is explained as:
As we insert elements, we double the capacity when the size of the array is a power of 2. So after X elements, we double the capacity at
array sizes 1, 2, 4, 8, 16, ... , X.
That doubling takes, respectively, 1, 2, 4, 8, 16, 32, 64, ... , X
copies. What is the sum of 1 + 2 + 4 + 8 + 16 + ... + X?
If you read this sum left to right, it starts with 1 and doubles until
it gets to X. If you read right to left, it starts with X and halves
until it gets to 1.
What then is the sum of X + X/2 + X/4 + ... + 1? This is roughly 2X.
Therefore, X insertions take O(2X) time. The amortized time for each
insertion is O(1).
While for this code snippet (a recursive algorithm):
int f(int n) {
    if (n <= 1) {
        return 1;
    }
    return f(n - 1) + f(n - 1);
}
The explanation is:
The tree will have depth N. Each node has two children. Therefore,
each level will have twice as many calls as the one above it.
Therefore, there will be 2^0 + 2^1 + 2^2 + 2^3 + ... + 2^N (which is
2^(N+1) - 1) nodes. In this case, this gives us O(2^N).
My question is:
In the first case, we have the GP 1 + 2 + 4 + 8 + ... + X. In the second case we have the same GP 1 + 2 + 4 + 8 + ... + 2^N. Why is the sum 2X in one case while it is 2^(N+1) - 1 in the other?
I think that it might be because we can't represent X as 2^N but I'm not sure.
Because in the second case N is the depth of the tree, not the total number of nodes. With X = 2^N, as you already stated, the two results agree: 2^(N+1) - 1 = 2X - 1, which is "roughly 2X".
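A quick numeric check of that equivalence, taking X = 2^N:

```python
# For depth N, the doubling sum and the call-tree node count are the
# same geometric series; with X = 2^N the two answers coincide.
for N in range(1, 11):
    X = 2 ** N
    doubling_sum = sum(2 ** k for k in range(N + 1))  # 1 + 2 + ... + X
    nodes = 2 ** (N + 1) - 1                          # 2^0 + ... + 2^N
    assert doubling_sum == nodes == 2 * X - 1         # "roughly 2X"
```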

complexity of the sum of the squares of geometric progression

I have a question in my data structure course homework and I thought of 2 algorithms to solve this question, one of them is O(n^2) time and the other one is:
T(n) = 3 * n + 1*1 + 2*2 + 4*4 + 8*8 + 16*16 + ... + logn*logn
And I'm not sure which one is better.
I know that the sum of geometric progression from 1 to logn is O(logn) because I can use the geometric series formula for that. But here I have the squares of the geometric progression and I have no idea how to calculate this.
You can rewrite it as:
log n * log n + ((log n) / 2) * ((log n) / 2) + ((log n) / 4) * ((log n) / 4) ... + 1
if you substitute (for easier understanding) log^2 n with x, you get:
x + x/4 + x/16 + x/64 + ... + 1
You can use a formula to sum the series, but if you don't have to be formal, basic logic is enough. Just imagine you have 1/4 of a pie, then add 1/16 of a pie, then 1/64, etc.; you can clearly see it will never reach a whole pie, therefore:
x + x/4 + x/16 + x/64 + ... + 1 < 2x
Which means it's O(x).
Changing back the x for log^2 n:
T(n) = O(3*n + log^2 n) = O(n)
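A quick numeric check of the bound, sketched in Python (sum_of_squares is a hypothetical helper for the series in the question, not part of the homework):

```python
from math import log2

def sum_of_squares(n):
    # Sum of squares of the doubling sequence 1, 2, 4, ..., up to L = log2(n):
    # 1*1 + 2*2 + 4*4 + ... + L*L
    L = log2(n)
    total, t = 0, 1
    while t <= L:
        total += t * t
        t *= 2
    return total, L * L

# The claim: the sum stays below 2 * (log n)^2, i.e. it is O(log^2 n).
for n in [2**4, 2**8, 2**16, 2**32]:
    s, L2 = sum_of_squares(n)
    assert s < 2 * L2
```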

Rewrite Recurrence Function Using the Idea of Dynamic Programming

Can someone help me?
Rewrite the pseudo-code of Test(n) below using the idea of Dynamic Programming, and determine the time complexity.
Test(n)
    If n = 1 return 1
    Tn = 0
    For k = 1 to n-1
        Tn = Tn + Test(k) * Test(n-k)
    Return Tn
Add memoization to get a DP solution from a recursive one.
Python Code:
d = {}

def test(n):
    if n == 1:
        return 1
    if d.get(n) is not None:
        return d[n]
    ans = 0
    for k in range(1, n):
        ans += test(k) * test(n - k)
    d[n] = ans
    return ans
You can check (it's the Catalan numbers, in fact; learn more about them in the OEIS):
for i in range(1, 10):
    print(i, test(i))
Output:
1 1
2 1
3 2
4 5
5 14
6 42
7 132
8 429
9 1430
The time complexity is O(n^2): computing one state takes O(n) (the loop runs k from 1 to n - 1), and we compute n states in total to get test(n).
In fact, we can achieve an O(n) solution, since these are the Catalan numbers...