I am trying to evaluate the following expression in my program:
7.7^2 * 0.012^2 / (0.2145 * 1.67^(16/3))
That should equal 0.002582 (verified with Google and a scientific calculator).
This is the code that I am using:
CGFloat eX1 = pow(7.7, 2) * pow(0.012, 2)/(0.2145 * pow(1.67, (16/3)));
NSLog(@"%f", eX1);
Even though I believe my code should give the same result, it is actually giving me 0.002679.
What am I doing wrong? What can I do to obtain the correct answer?
Change (16/3) to (16.0/3.0). Otherwise 16/3 results in 5, not 5.33333349.
Also, the 0.002679 you are seeing matches a first factor of 7.2 rather than 7.7 (7.2^2 * 0.012^2 / (0.2145 * 1.67^5) ≈ 0.002679), so check the code you are actually running.
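For anyone who wants to see both issues in isolation, here is a small plain-C sketch (the same integer-division rule applies in Objective-C; the constants are taken from the question):
#include <math.h>
#include <stdio.h>

int main(void) {
    // Both operands are integers, so the division truncates toward zero.
    printf("16/3     = %d\n", 16 / 3);        // prints 5
    printf("16.0/3.0 = %f\n", 16.0 / 3.0);    // prints 5.333333

    // The expression from the question, with a floating-point exponent.
    double eX1 = pow(7.7, 2) * pow(0.012, 2) / (0.2145 * pow(1.67, 16.0 / 3.0));
    printf("%f\n", eX1);                      // ~0.002582
    return 0;
}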
println(log(it.toDouble(), 10.0).toInt()+1) // n1
println(log10(it.toDouble()).toInt() + 1) // n2
I had to count the "length" of a number in base n (for reasons unrelated to this question) and stumbled upon a bug (or rather, unexpected behavior): for it == 1000 these two functions give different results.
n1(1000) = 3,
n2(1000) = 4.
Checking values before conversion to int resulted in:
n1_double(1000) = 3.9999999999999996,
n2_double(1000) = 4.0
I understand that some floating-point arithmetic magic is involved, but what is especially weird to me is that for 100, 10000 and other inputs I checked, n1 == n2.
What is special about it == 1000? And how do I ensure that log gives me the intended result (4, not 3.99...)? Right now I can't even figure out which cases I need to double-check, since it is not powers of 10 in general that misbehave; it is 1000 (and probably some other numbers) specifically.
I looked into the implementations of log() and log10(). log is implemented as
if (base <= 0.0 || base == 1.0) return Double.NaN
return nativeMath.log(x) / nativeMath.log(base) //log() here is a natural logarithm
while log10 is implemented as
return nativeMath.log10(x)
I suspect the division in the first case is the source of the error, but I can't figure out why it goes wrong only in specific cases.
I also found this question:
Python math.log and math.log10 giving different results
But I already know that one is more precise than the other. However, there is no log10 analogue for an arbitrary base n, so I'm curious why it is specifically 1000 that goes wrong.
PS: I understand there are ways to calculate the length of a number without floating-point arithmetic and base-n logarithms, but at this point it is scientific curiosity.
but I can't figure out why it causes an error only in specific cases.
return nativeMath.log(x) / nativeMath.log(base)
//log() here is a natural logarithm
Consider x = 1000 and nativeMath.log(x). The natural logarithm is not exactly representable. It is near
6.90775527898213_681... (Double answer)
6.90775527898213_705... (closer answer)
Consider base = 10 and nativeMath.log(base). The natural logarithm is not exactly representable. It is near
2.302585092994045_901... (Double)
2.302585092994045_684... (closer answer)
The only exactly correct nativeMath.log(x) for a finite x is when x == 1.0.
The quotient of the division of 6.90775527898213681... / 2.302585092994045901... is not exactly representable. It is near 2.9999999999999995559...
The conversion of the quotient to text is not exact.
So there are 4 computations, and at each step the system gives us a close (rounded) result instead of the exact one.
Sometimes these rounding errors cancel out in a way we find acceptable and the value "3.0" is reported. Sometimes not.
Performed with higher-precision math, it is easy to see that the computed log(1000) came out below the true value while the computed log(10) came out above it. These two round-off errors in opposite directions compounded in the division, leaving the quotient 1 ULP lower than hoped.
When log(x, 10) is computed for other powers of 10 and the computed log(x) happens to land slightly above the true value, I'd expect the quotient to end up 1 ULP off less often. Perhaps it is roughly 50/50 across all powers of 10.
log10(x) computes the logarithm in a different fashion, exploiting the fact that the base is 10.0, and it is exact for powers of 10.
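The difference is easy to reproduce outside Kotlin. Here is a small C sketch (the exact digits depend on the platform's libm, so they may differ slightly from the Kotlin output above); on a typical machine the ln-ratio column shows values such as 2.9999999999999996 for x = 1000 while the log10 column shows exact integers:
#include <math.h>
#include <stdio.h>

int main(void) {
    for (int p = 1; p <= 6; ++p) {
        double x = pow(10.0, p);
        double via_ratio = log(x) / log(10.0); // what Kotlin's log(x, 10.0) does
        double via_log10 = log10(x);           // what Kotlin's log10(x) does
        printf("x = %8.0f  ln-ratio = %.17g  log10 = %.17g\n",
               x, via_ratio, via_log10);
    }
    return 0;
}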
In[11]:= $Version
Out[11]= 9.0 for Linux x86 (32-bit) (November 20, 2012)
In[12]:= DSolve[{f[0] == d, f'[0] == v0, f''[t] == g*m2/f[t]^2}, f, t]
DSolve::bvimp: General solution contains implicit solutions. In the boundary
value problem these solutions will be ignored, so some of the solutions will
be lost.
Out[12]= {}
The code above pretty much says it all. I get the same error if I replace g*m2 with 1.
This seems like a really simple differential equation to solve. I'd like to tell DSolve to assume all variables are real and that d, g, and m2 are all greater than 0, but unfortunately there's no way to do that.
Thoughts?
You are trying for a symbolic solution. And unfortunately, symbolic integration is hard (while symbolic differentiation is easy).
The way this integration works is to obtain the energy functional by integrating once,
E = 1/2*f'[t]^2 + C/f[t]    (with C = g*m2)
and then to isolate f'[t]. The resulting integral is not easy to solve and leads to the mentioned implicit solutions.
Did you really want to get the symbolic solution or only some function table to plot the solutions or compute other related quantities?
Since it was clarified that the requested quantity is the maximum of certain solutions: This can be computed by setting v=0 in the energy equation
C/x = E = 1/2*v0^2 + C/x0
or
x = C*x0/(C + 1/2*v0^2*x0 )
One would have to analyze the time dependence to make sure that this extremum is actually reached before the solution passes the initial point x0 again.
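As a quick numerical cross-check of that turning-point formula, here is a small C sketch with made-up values for g*m2, x0 and v0 (v0 is chosen negative so that a turning point where f' = 0 exists); it integrates the ODE with velocity Verlet until the velocity changes sign and compares the position with the formula:
#include <math.h>
#include <stdio.h>

/* f''(t) = C / f(t)^2, with C = g*m2. */
static double accel(double f, double C) { return C / (f * f); }

int main(void) {
    const double C  = 1.0;   /* g*m2 (illustrative)             */
    const double x0 = 1.0;   /* f(0)                            */
    const double v0 = -0.5;  /* f'(0), moving toward the center */

    double predicted = C * x0 / (C + 0.5 * v0 * v0 * x0);

    /* Velocity-Verlet integration until f' changes sign. */
    double f = x0, v = v0, a = accel(f, C), dt = 1e-6;
    while (v < 0.0) {
        f += v * dt + 0.5 * a * dt * dt;
        double a_new = accel(f, C);
        v += 0.5 * (a + a_new) * dt;
        a = a_new;
    }
    printf("turning point: formula %.9f, integration %.9f\n", predicted, f);
    return 0;
}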
I'm writing this post in case anyone else is having the same issue I've been having with the lack of documentation for the CVDisplayLink API.
Intro:
In my CVDisplayLink code I've been using the following code to obtain the deltaSeconds value between calls to its callback:
float deltaTime = 1.0 / (outputTime->rateScalar * (float)outputTime->videoTimeScale / (float)outputTime->videoRefreshPeriod);
It seems like this line of code is widely used across different apps & engines.
The issue:
While running my OpenGL app I've noticed that this value is now constant (0.016669 to be precise). I haven't made any big changes to account for this change of behaviour, other than using Mavericks and the new development tools.
Finding the cause has been a lost cause so far.
I've found what I believe is a good way to calculate the deltaSeconds between frames by using the following alternative code:
double deltaSeconds = (outputTime->videoTime - self.previousOutputVideoTime) / (double)outputTime->videoTimeScale;
self.previousOutputVideoTime = outputTime->videoTime;
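For completeness, here is roughly how that fits into a display link callback (a sketch only: the CVTimeStamp field names are the ones used above, the previous timestamp is kept in a static variable instead of a property, and the CVDisplayLinkSetOutputCallback registration is omitted):
#include <CoreVideo/CoreVideo.h>
#include <stdio.h>

// Sketch of a CVDisplayLink output callback that derives deltaSeconds
// from consecutive output video timestamps, as described above.
static CVReturn displayLinkCallback(CVDisplayLinkRef displayLink,
                                    const CVTimeStamp *inNow,
                                    const CVTimeStamp *inOutputTime,
                                    CVOptionFlags flagsIn,
                                    CVOptionFlags *flagsOut,
                                    void *displayLinkContext)
{
    static int64_t previousVideoTime = 0;

    double deltaSeconds = 0.0;
    if (previousVideoTime != 0) {
        deltaSeconds = (double)(inOutputTime->videoTime - previousVideoTime)
                     / (double)inOutputTime->videoTimeScale;
    }
    previousVideoTime = inOutputTime->videoTime;

    printf("deltaSeconds = %f\n", deltaSeconds);
    return kCVReturnSuccess;
}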
While working with wxMaxima 11.08.0 (Ubuntu 12.04, Maxima version 5.24.0), I followed an example from P. Lutus, and his second example didn't work for me.
eq: y(t) = -r*c*'diff(y(t),t)+m*sin(%omega*t);
sol:desolve( eq, y(t) );
Is %omega zero or nonzero? nonzero
After I answer, Maxima stops responding until I restart it.
Has something changed in Maxima that I need to activate or define first to get the result?
The expected output is Lutus's solution, in which no exponential term remains (it is reproduced in the answer below).
There is a second part to my question, for the case where I define the equation by hand:
sol: y(t) = (m * sin(%omega*t)) / (%omega^2*c^2*r^2 + 1) -
(%omega*c*m*r*cos(%omega*t)) / (%omega^2*c^2*r^2 + 1) +
(%omega*c*m*r*%e^-((1*t)/(c*r))) / (%omega^2*c^2*r^2 + 1);
Initial conditions for a continuous process:
init_val:-(c*m*r*(%e^-(t/r*c))*%omega)/(c^2*r^2*%omega^2+1);
atvalue(y(t),t=0, init_val);
try2 : desolve(sol,y(t));
"Is "%omega" zero or nonzero?" nonzero;
Here the last term is still there. Are these problems caused by the use of trigonometric functions?
Best regards,
Marcus
I updated via PPA to wxMaxima 13.04.0 & Maxima 5.29.1. Now desolve finishes, but the last term seems very complicated.
Using init_val with the negative last term and the desolve command still leaves the %e^(..)*... factor in the equation.
You might get more interest in this question on the Maxima mailing list. See: http://maxima.sourceforge.net/maximalist.html
For the first version of Lutus example 2, I get:
y(t) = m*sin(%omega*t)/(%omega^2*c^2*r^2+1)
-%omega*c*m*r*cos(%omega*t)/(%omega^2*c^2*r^2+1)
+(y(0)*%omega^2*c^3*r^3+%omega*c^2*m*r^2+y(0)*c*r)*%e^-(t/(c*r))
/(c*r*(%omega^2*c^2*r^2+1))$
which is the same as the expected result, if y(0) = 0. However, I don't see where that is assumed.
After atvalue(y(t),t=0,init_val), I get the same result as Lutus, namely:
y(t) = m*sin(%omega*t)/(%omega^2*c^2*r^2+1)
-%omega*c*m*r*cos(%omega*t)/(%omega^2*c^2*r^2+1)$
I am working with Maxima 5.31.1, built with Clisp, on Linux.
Does anyone know how I can calculate pi (π) in VB?
System.Math.PI
Assuming you actually want to compute pi instead of just using the built-in constant, there are a bunch of ways you can do it. Here are a few links that could be useful (a small sketch of a Machin-like formula follows the list):
http://www.codeproject.com/KB/recipes/CRHpi.aspx
http://en.wikipedia.org/wiki/Pi#Computation_in_the_computer_age
http://en.wikipedia.org/wiki/Machin-like_formula
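To illustrate the Machin-like approach from the last link (in C rather than VB, and only to double precision; for many digits you would need arbitrary-precision arithmetic like in the first link):
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239) */
    double pi = 16.0 * atan(1.0 / 5.0) - 4.0 * atan(1.0 / 239.0);
    printf("%.15f\n", pi);   /* 3.141592653589793 */
    return 0;
}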
If you mean VB6, it doesn't have a pi constant. You can use:
Dim pi As Double
pi = 4 * Atn(1)
If the OP is asking about algorithms as a learning experience, good for him/her.
If the OP wanted help finding the built-in value, s/he has it now.
But if the goal is a good value of higher precision than the built-in value with a minimum of effort, here's pi to one million digits:
http://www.eveandersson.com/pi/digits/1000000
That should be enough.
I hope the OP isn't asking how to recalculate the value of Pi each and every time it's used. That would be madness.
Meh, such efficient, accurate and, most of all, boring approximations... Try this instead! Pseudocode ensues (a runnable C version follows the pseudocode):
initialize inside and total as 0
repeat an insane amount of times:
assign both x and y random values between (and including) 0 and +1.
assign distance as the square root of (x^2 + y^2)
if distance ≤ 1, add 1 to inside
add 1 to total
assign pi as inside / total * 4
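If anyone actually wants to run that, here is a direct translation of the pseudocode into C (rand() is fine for a toy like this; expect only a few correct digits no matter how insane the repeat count is):
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    srand((unsigned)time(NULL));

    long inside = 0, total = 0;
    for (long i = 0; i < 10000000; ++i) {       /* "an insane amount of times" */
        double x = (double)rand() / RAND_MAX;   /* random in [0, 1] */
        double y = (double)rand() / RAND_MAX;
        if (sqrt(x * x + y * y) <= 1.0)         /* inside the quarter circle */
            ++inside;
        ++total;
    }
    printf("pi ~= %f\n", 4.0 * (double)inside / (double)total);
    return 0;
}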
If you don't want to use the built-in values in the .NET math library...
22 / 7