In Tinkercad, the amplitude definition for function generators and the scale definition for oscilloscopes are quite confusing. Here is a screenshot of Tinkercad's function generator:
On the device, 6.20 V is shown as the peak-to-peak voltage (see the red lines I've marked). But on the panel on the right-hand side, we enter it as the amplitude (see the green line I've marked). Which one is correct?
And I cannot deduce the answer using an oscilloscope, because there is not enough information about the oscilloscope (at least, I couldn't find enough). Here is the input signal from the function generator above:
The answer is not obvious, because the meaning of the 10 V placed on the y-axis is ambiguous. Is it ±10 V, i.e. 20 V in total, so that the voltage per division is 2 V (first explanation)? Or is it ±5 V, i.e. 10 V in total, so that the voltage per division is 1 V (second explanation)? In some YouTube lectures the explanation is the first one, but I'm not quite sure: if 6.2 V is the amplitude and the voltage per division is 2 V, the readings are consistent; but if 6.2 V is the peak-to-peak voltage and the voltage per division is 1 V, they are also consistent. Again, which one is true?
Also, while studying, I've realized that a real-life experiment indicates that the second explanation should be true. Let me explain the experiment step by step.
Theory: Full Wave Rectifier Circuits
Assume we apply V_in as the amplitude; then the peak-to-peak voltage is V_peaktopeak = 2 * V_in. For the output signal we have
V_out = (V_in - n * V_diode) * R_L / (R_L + r_d),
where n is the number of diodes in conduction, V_diode is the forward voltage drop of a diode, and R_L is the load resistor. The load resistor is chosen large enough that R_L >> r_d, so we get
V_out = V_in - n * V_diode.
In a real experiment r_d is between 1 Ω and 25 Ω, and we choose R_L on the order of kilo-ohms; therefore we can safely drop the R_L / (R_L + r_d) factor.
And for the DC voltage corresponding to the output signal we have
V_DC = 2 * V_out / π ≈ 0.637 * V_out.
Scheme of the Circuit in the Experiment
Here is the circuit scheme:
As you may see, for the positive half-period only two of the four diodes conduct, and for the negative half-period the other two conduct. Thus n is 2 for this circuit. Let's build this experiment in Tinkercad. I didn't use a breadboard, to keep the built circuit visually closer to the circuit scheme.
Scenario #1 - Theoretical Expectations
Let's assume 6.2 V is the amplitude. Then V_in = 6.2 V and V_peaktopeak is 12.4 V. For the output signal we calculate
V_out = V_in - n * V_diode = 6.2 V - 2 * 0.7 V = 4.8 V.
And for the DC equivalent we theoretically get,
V_DC = 0.637 * V_out = 3.06 V.
But on the multimeter we see 1.06 V, which is nearly a 65% error.
Scenario #2 - Theoretical Expectations
Let's assume 6.2 V is the peak-to-peak voltage. Then V_in = 3.1 V and V_peaktopeak is 6.2 V. For the output signal we calculate
V_out = V_in - n * V_diode = 3.1 V - 2 * 0.7 V = 1.7 V.
And for the DC equivalent we theoretically get,
V_DC = 0.637 * V_out = 1.08 V.
And on the multimeter we see 1.06 V. These values are pretty close to each other.
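As a quick arithmetic check, here is a minimal Python sketch that evaluates both scenarios with the formulas above (the function name and default values are mine):

import math

def v_dc(v_in, n=2, v_diode=0.7):
    # V_out = V_in - n * V_diode, then V_DC = (2 / pi) * V_out
    v_out = v_in - n * v_diode
    return 2 * v_out / math.pi

print(v_dc(6.2))  # scenario 1: 6.2 V as amplitude    -> ~3.06 V
print(v_dc(3.1))  # scenario 2: 6.2 V as peak-to-peak -> ~1.08 V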
Conclusion
Based on these results, we may conclude that 6.2 V is the peak-to-peak voltage: the drawing on the function generator is correct, the label "Amplitude" in the function generator's description is wrong, and the y-scale of an oscilloscope represents the total voltage, half of which is positive and the other half negative.
BUT
I cannot be sure, and since I'll teach this material in my electronics laboratory class, I really need to be certain about this conclusion. So I'm asking here for your opinions, conclusions, or references that I may have missed.
Tinkercad refers to peak-to-peak voltage as amplitude for some reason. I believe the second explanation (±5 V, 10 V total) is correct, based on the x-axis and the frequency value.
I'm trying to implement Algorithm D from Knuth's "The Art of Computer Programming, Vol 2" in Rust, but I'm having trouble understanding how to implement the very last step, unnormalizing. My natural numbers are a class where each number is a vector of u64 digits, i.e. base 2^64. Addition, subtraction, and multiplication have been implemented.
Knuth's Algorithm D is a Euclidean division algorithm which takes two natural numbers x and y and returns (q, r), where q = x / y (integer division) and r = x % y, the remainder. The algorithm depends on an approximation method which only works if the first digit of y is greater than b/2, where b is the base you're representing the numbers in. Since not all numbers are of this form, it uses a "normalizing trick": for example (if we were in base 10), instead of doing 200 / 23, we calculate a normalizer d and do (200 * d) / (23 * d) so that 23 * d has a first digit greater than b/2.
So when we use the approximation method, we end up with the desired q, but the remainder is multiplied by a factor of d. So the last step is to divide r by d to get the q and r we want. My problem is that I'm a bit confused about how we're supposed to do this last step, since it requires division and the method it's part of is the one trying to implement division.
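To see why that last division is exact, here is a small Python sketch of the base-10 example above, using Python's big integers in place of the limb vector (all names are mine):

# Toy demo in base b = 10: dividing (x*d) by (y*d) leaves the
# quotient unchanged but scales the remainder by d.
x, y = 200, 23
d = 9 // 2                    # floor((b-1) / first digit of y); first digit of 23 is 2, so d = 4
q, r_scaled = divmod(x * d, y * d)
r = r_scaled // d             # unnormalize: this division is exact
assert (q, r) == divmod(x, y)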
(Maybe helpful?):
The way that d is calculated is just by taking the integer floor of b-1 divided by the first digit of y. However, Knuth suggests that it's possible to make d a power of 2, as long as d times the first digit of y is greater than b/2. I think he makes this suggestion so that, instead of dividing, we can just do a binary shift for this last step. But I don't think I can do that, given that my numbers are represented as vectors of u64 values instead of bits.
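For what it's worth, if d = 2^k (with 0 < k < 64), the unnormalizing step does reduce to a bit shift carried across the limbs, even though the limbs are u64 values rather than bits. A hedged Python sketch of that limb-wise shift (names are mine; the Rust version would be the same loop):

MASK = (1 << 64) - 1  # limbs model u64 values

def shr_limbs(limbs, k):
    # Divide a little-endian base-2^64 number by 2**k, 0 < k < 64.
    # Each limb takes its own bits shifted down, plus the low
    # k bits of the limb above it.
    out = [0] * len(limbs)
    carry = 0
    for i in range(len(limbs) - 1, -1, -1):
        out[i] = ((carry << (64 - k)) | (limbs[i] >> k)) & MASK
        carry = limbs[i] & ((1 << k) - 1)
    return out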
Any suggestions?
Suppose we have an algorithm that is of order O(2^n). Furthermore, suppose we multiply the input size n by 2, so now we have an input of size 2n. How is the time affected? Do we look at the problem as if the original time was 2^n and it now became 2^(2n), so the answer would be that the new time is the square of the previous time?
Big-O is not for telling you the actual running time, just how the running time is affected by the size of the input. If you double the size of the input, the complexity is still O(2^n); n is just bigger.
number of elements (n)    units of work (2^n)
 1                         2
 2                         4
 3                         8
 4                        16
 5                        32
 ...                      ...
 10                       1024
 20                       1048576
There's a misunderstanding here about how Big-O relates to execution time.
Consider the following formulas which define execution time:
f1(n) = 2^n + 5000n^2 + 12300
f2(n) = (500 * 2^n) + 6
f3(n) = 500n^2 + 25000n + 456000
f4(n) = 400000000
Each of these functions is O(2^n); that is, each can be shown to be less than M * 2^n for some constant M and starting value n0. But obviously, the change in execution time you notice when doubling the size from n1 to 2 * n1 will vary wildly between them (not at all in the case of f4(n)). You cannot use Big-O analysis to determine effects on execution time. It only defines an upper bound on the execution time (which is not even guaranteed to be the tightest upper bound).
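As a throwaway Python check of this point, here is how each of the formulas above responds to doubling n:

fns = {
    "f1": lambda n: 2**n + 5000 * n**2 + 12300,
    "f2": lambda n: 500 * 2**n + 6,
    "f3": lambda n: 500 * n**2 + 25000 * n + 456000,
    "f4": lambda n: 400000000,
}
for name, f in fns.items():
    # slowdown factor from doubling the input size at n = 20
    print(name, f(40) / f(20))

f1 and f2 slow down by factors in the hundreds of thousands, f3 by roughly 2, and f4 not at all, even though all four are O(2^n).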
Some related academia below:
There are three notable bounding functions in this category:
O(f(n)): Big-O - This defines an upper bound.
Ω(f(n)): Big-Omega - This defines a lower bound.
Θ(f(n)): Big-Theta - This defines a tight bound.
A given time function f(n) is Θ(g(n)) only if it is also Ω(g(n)) and O(g(n)) (that is, both upper and lower bounded).
You are dealing with Big-O, which is the usual "entry point" to the discussion; we will neglect the other two entirely.
Consider the definition from Wikipedia:
Let f and g be two functions defined on some subset of the real numbers. One writes:
f(x)=O(g(x)) as x tends to infinity
if and only if there is a positive constant M such that for all sufficiently large values of x, the absolute value of f(x) is at most M multiplied by the absolute value of g(x). That is, f(x) = O(g(x)) if and only if there exists a positive real number M and a real number x0 such that
|f(x)| <= M|g(x)| for all x > x0
Going from here, assume we have f1(n) = 2^n. If we were to compare that to f2(n) = 2^(2n) = 4^n, how would f1(n) and f2(n) relate to each other in Big-O terms?
Is 2^n <= M * 4^n for some M and n0? Of course! Using M = 1 and n0 = 1, it is true. Thus, 2^n is upper-bounded by O(4^n).
Is 4^n <= M * 2^n for some M and n0? This is where you run into problems: no constant M can make M * 2^n keep up with 4^n as n gets arbitrarily large. Thus, 4^n is not upper-bounded by O(2^n).
See the comments for further explanation, but indeed, this is just an example I came up with to help you grasp the Big-O concept; it is not the precise algorithmic meaning.
Suppose you have an array, arr = [1, 2, 3, 4, 5].
An example of an O(1) operation would be directly accessing an index, such as arr[0] or arr[2].
An example of an O(n) operation would be a loop that iterates through your whole array, such as for elem in arr:.
n would be the size of your array. If your array is twice as big as the original array, n would also be twice as big. That's how variables work.
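A minimal Python illustration of the two kinds of operation described above:

arr = [1, 2, 3, 4, 5]

value = arr[2]       # O(1): a single step, regardless of len(arr)

total = 0
for elem in arr:     # O(n): the number of steps grows with len(arr)
    total += elem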
See the Big-O Cheat Sheet for complementary information.
I have a solid object that is spinning with a torque W, and I want to calculate the force F applied at a certain point that is D units away from the center of the object. All these values are represented in Vector3 format (x, y, z).
I know so far that W = D x F, where x is the cross product, so by expanding this I get:
Wx = Dy*Fz - Dz*Fy
Wy = Dz*Fx - Dx*Fz
Wz = Dx*Fy - Dy*Fx
So I have this system of equations, and I need to find (Fx, Fy, Fz); I'm thinking of using the simplex method to solve it.
Since the F vector can also have negative values, I split each F variable into two (F = G - H), so the new equations look like this:
Wx = Dy*Gz - Dy*Hz - Dz*Gy + Dz*Hy
Wy = Dz*Gx - Dz*Hx - Dx*Gz + Dx*Hz
Wz = Dx*Gy - Dx*Hy - Dy*Gx + Dy*Hx
Next, I define the simplex table (we need <= inequalities, so I duplicate each equation and multiply it by -1).
Also, I define the objective function as: minimize (Gx - Hx + Gy - Hy + Gz - Hz).
The table looks like this:
Gx Hx Gy Hy Gz Hz <= RHS
============================================================
0 0 -Dz Dz Dy -Dy <= Wx = Gx
0 0 Dz -Dz -Dy Dy <= -Wx = Hx
Dz -Dz 0 0 Dx -Dx <= Wy = Gy
-Dz Dz 0 0 -Dx Dx <= -Wy = Hy
-Dy Dy Dx -Dx 0 0 <= Wz = Gz
Dy -Dy -Dx Dx 0 0 <= -Wz = Hz
============================================================
1 -1 1 -1 1 -1 0 = Z
The problem is that when I run it through an online solver I get Unbounded solution.
Can anyone please point me to what I'm doing wrong?
Thanks in advance.
Edit: I'm sure I messed up some signs somewhere (for example, Z should be defined as a max), but I suspect the real mistake is in how I've defined something more important.
There exists no unique solution to the problem as posed. You can only solve for the tangential projection of the force. This comes from the properties of the vector (cross) product: it is zero for collinear vectors, and in particular for the vector product of a vector with itself. Therefore, if F is a solution of W = r x F, then F' = F + kr is also a solution for any k:
r x F' = r x (F + kr) = r x F + k (r x r) = r x F
since the r x r term is zero by the definition of vector product. Therefore, there is not a single solution but rather a whole linear space of vectors that are solutions.
If you restrict the solution to forces that have zero projection in the direction of r, then you could simply take the vector product of W and r:
W x r = (r x F) x r = -[r x (r x F)] = -[(r . F)r - (r . r)F] = |r|^2 F
with the first term of the expansion being zero because the projection of F onto r is zero (the dot denotes the scalar (inner) product). Therefore:
F = (W x r) / |r|^2
If you are also given the magnitude of F, i.e. |F|, then you can compute the radial component (if any) but there are still two possible solutions with radial components in opposing directions.
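For illustration, here is a minimal Python sketch of this answer's formula F = (W x r) / |r|^2, under the zero-radial-projection assumption above (the function and argument names are mine):

def force_from_torque(W, r):
    # cross product W x r
    cx = W[1] * r[2] - W[2] * r[1]
    cy = W[2] * r[0] - W[0] * r[2]
    cz = W[0] * r[1] - W[1] * r[0]
    r2 = r[0] ** 2 + r[1] ** 2 + r[2] ** 2   # |r|^2
    return (cx / r2, cy / r2, cz / r2)

# e.g. W = (0, 0, 1) about z, applied at r = (1, 0, 0) gives F = (0, 1, 0)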
Quick and dirty derivation...
Given D and F, you get W perpendicular to them. That's what a cross product does.
But you have W and D and need to find F. This is a bad assumption in general, but let's assume F is perpendicular to D. Call it Fp, since it's not necessarily the same as F. Ignoring magnitudes, WxD should give you the direction of Fp.
This ignored magnitudes, so fix that with a little arithmetic. Starting with W = DxF applied to Fp:
mag(W) = mag(D) * mag(Fp)    (with Fp perpendicular to D, the sine factor is 1)
mag(Fp) = mag(W) / mag(D)
Combining the cross product bit for direction with this stuff for magnitude,
Fp = WxD / mag(WxD) * mag(Fp)
   = WxD / (mag(W) * mag(D)) * (mag(W) / mag(D))
   = WxD / mag(D)^2.
Note that given any solution Fp to W=DxF, you can add any vector proportional to D to Fp to obtain another solution F. That is a totally free parameter to choose as you like.
Note also that if the torque applies to some sort of axle or object constrained to rotate about some axis, and F is applied to some oddball lever sticking out at a funny angle, then vector D points in some funny direction. You want to replace D with just the part perpendicular to the axle/axis, otherwise the "/mag(D)" part will be wrong.
So from your comment it is clear that the object spins around its center of gravity. In that case:
F = M / r
F force [N]
M torque [N·m]
r scalar distance from the center of rotation [m]
This way you know the scalar size of your force.
Now you need the direction: it is perpendicular to the rotation axis and tangent to the rotation at that point.
dir = r x axis
F = F * dir / |dir|
Bold names are vectors, the rest are scalars; x is the cross product, dir is the force direction, and axis is the rotation axis direction. Now just adjust the sign according to the rotation direction (the signum of the actual omega) and your coordinate system setup, so either negate F or not.
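For illustration, a short Python sketch of the recipe so far (all names are mine; the sign flip for the rotation direction is left to the caller, as noted above):

import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def tangential_force(M, r, axis):
    # |F| = |M| / |r|, direction = normalized r x axis
    r_mag = math.sqrt(sum(c * c for c in r))
    d = cross(r, axis)
    d_mag = math.sqrt(sum(c * c for c in d))
    f = M / r_mag
    return tuple(f * c / d_mag for c in d)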
But in free 3D rotation this is a very improbable scenario: the object would have to be symmetric in its mass distribution, or the initial driving forces would have to have been applied in a manner that achieves this. Also beware that after the first hit from any interacting force this will no longer be true! So if you just want to compute the force generated at a certain point when a collision occurs, this is fine, but immediately afterwards the spin will change, and for non-symmetric objects the rotation will most likely move off the center of gravity. If your object then disintegrates, you do not need to worry; if not, you have to apply rotational and translational dynamics.
Rotation Dynamics
M = alpha * I
M torque [N·m]
alpha angular acceleration [rad/s^2]
I moment of inertia about the actual rotation axis [kg·m^2]
epsilon'' = omega' = alpha
where ' means the derivative with respect to time, omega is the angular speed, and epsilon is the angle.
I've coded a basic Mandelbrot explorer in C#, but I have those horrible bands of color, and it's all greyscale.
I have the equation for smooth coloring:
mu = N + 1 - log (log |Z(N)|) / log 2
where N is the escape count and |Z(N)| is the modulus of the complex number after the value has escaped; it's this value that I'm unsure of.
My code is based on the pseudocode given on the Wikipedia page: http://en.wikipedia.org/wiki/Mandelbrot_set#For_programmers
The complex number is represented by the real values x and y. Using this method, how would I calculate the value of |Z(N)|?
|Z(N)| means the distance to the origin, so you can calculate it via sqrt(x*x + y*y).
If you run into an error with the logarithm, check the iteration count first: if the point is part of the Mandelbrot set (iteration == max_iteration), the first logarithm will return 0 and the second will raise an error.
So just use this snippet in place of your old return code:
if (i < iterations)
{
    return i + 1 - Math.Log(Math.Log(Math.Sqrt(x * x + y * y))) / Math.Log(2);
}
return i;
Later, you should divide this value by max_iterations and multiply it by 255. This will give you a nice RGB value.
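For reference, here is the same idea as a self-contained Python sketch (names are mine; the escape test and the smoothing formula are the ones quoted above):

import math

def smooth_escape(cr, ci, max_iter=1000):
    x = y = 0.0
    for i in range(max_iter):
        if x * x + y * y > 4.0:          # |Z| > 2: the orbit has escaped
            mod = math.sqrt(x * x + y * y)
            # mu = N + 1 - log(log |Z(N)|) / log 2
            return i + 1 - math.log(math.log(mod)) / math.log(2)
        x, y = x * x - y * y + cr, 2 * x * y + ci
    return max_iter                      # treated as inside the set

# grey = int(255 * smooth_escape(cr, ci) / max_iter) gives a smooth shade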
I am processing a series of points which all have the same Y value, but different X values. I go through the points by incrementing X by one. For example, I might have Y = 50 and X is the integers from -30 to 30. Part of my algorithm involves finding the distance to the origin from each point and then doing further processing.
After profiling, I've found that the sqrt call in the distance calculation is taking a significant amount of my time. Is there an iterative way to calculate the distance?
In other words:
I want to efficiently calculate r[n] = sqrt(x[n]*x[n] + y*y). I can save information from the previous iteration. Each iteration changes by incrementing x, so x[n] = x[n-1] + 1. I cannot use sqrt or trig functions because they are too slow, except at the beginning of each scanline.
I can use approximations as long as they are good enough (less than 0.1% error) and the errors introduced are smooth (I can't bin to a pre-calculated table of approximations).
Additional information:
x and y are always integers between -150 and 150
I'm going to try a couple ideas out tomorrow and mark the best answer based on which is fastest.
Results
I did some timings
Distance formula: 16 ms / iteration
Pete's interpolating solution: 8 ms / iteration
wrang-wrang's pre-calculation solution: 8 ms / iteration
I was hoping the test would decide between the two, because I like both answers. I'm going to go with Pete's because it uses less memory.
Just to get a feel for it: in your range, y = 50, x = 0 gives r = 50, and y = 50, x = ±30 gives r ≈ 58.3. You want an approximation good to ±0.1%, or ±0.05 absolute. That's much lower accuracy than most library sqrt implementations provide.
There are two approximate approaches: calculate r by interpolating from the previous value, or use a few terms of a suitable series.
Interpolating from previous r
r = (x^2 + y^2)^(1/2)
dr/dx = (1/2) * 2x * (x^2 + y^2)^(-1/2) = x/r
double r = 50;
for ( int x = 0; x <= 30; ++x ) {
    double r_true = Math.sqrt ( 50*50 + x*x );
    System.out.printf ( "x: %d r_true: %f r_approx: %f error: %f%%\n", x, r_true, r, 100 * Math.abs ( r_true - r ) / r );
    r = r + ( x + 0.5 ) / r;
}
Gives:
x: 0 r_true: 50.000000 r_approx: 50.000000 error: 0.000000%
x: 1 r_true: 50.009999 r_approx: 50.010000 error: 0.000002%
....
x: 29 r_true: 57.801384 r_approx: 57.825065 error: 0.040953%
x: 30 r_true: 58.309519 r_approx: 58.335225 error: 0.044065%
which seems to meet the requirement of 0.1% error, so I didn't bother coding the next one, as it would require quite a bit more calculation steps.
Truncated Series
The Taylor series for sqrt(1 + x) for x near zero is
sqrt(1 + x) = 1 + (1/2)x - (1/8)x^2 + ... + (-1/2)^(n+1) x^n
Using r = y * sqrt(1 + (x/y)^2), you're looking for a term t = (-1/2)^(n+1) * 0.36^n with magnitude less than 0.001: log(0.002) > n log(0.18), i.e. n > 3.6, so taking terms up to x^4 should be OK.
Y = 10000
Y2 = Y*Y
for x = 0..Y do
    D[x] = sqrt(Y2 + x*x)

norm(x, y) =
    if (y == 0) x
    else if (x > y) norm(y, x)
    else {
        s = Y/y
        D[round(x*s)] / s
    }
If your coordinates are smooth, then the idea can be extended with linear interpolation. For more precision, increase Y.
The idea is that s*(x,y) is on the line y=Y, which you've precomputed distances for. Get the distance, then divide it by s.
I assume you really do need the distance and not its square.
You may also be able to find a general sqrt implementation that sacrifices some accuracy for speed, but I have a hard time imagining that beating what the FPU can do.
By linear interpolation, I mean to change D[round(x)] to:
f=floor(x)
a=x-f
D[f]*(1-a)+D[f+1]*a
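Here is a runnable Python version of this answer's table-plus-interpolation idea (names are mine; Y is the precision parameter described above):

import math

Y = 10000                                   # precision parameter
D = [math.sqrt(Y * Y + x * x) for x in range(Y + 1)]

def norm(x, y):
    x, y = abs(x), abs(y)
    if y == 0:
        return x
    if x > y:
        x, y = y, x
    s = Y / y                               # scale (x, y) onto the line y = Y
    t = x * s
    f = int(t)                              # linear interpolation between entries
    a = t - f
    d = D[f] * (1 - a) + D[min(f + 1, Y)] * a
    return d / s

# e.g. norm(3, 4) -> 5.0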
This doesn't really answer your question, but may help...
The first questions I would ask would be:
"do I need the sqrt at all?".
"If not, how can I reduce the number of sqrts?"
then yours: "Can I replace the remaining sqrts with a clever calculation?"
So I'd start with:
Do you need the exact radius, or would the radius squared be acceptable? There are fast approximations to sqrt, but they're probably not accurate enough for your spec.
Can you process the image using mirrored quadrants or eighths? By processing all pixels at the same radius value in a batch, you can reduce the number of calculations by 8x.
Can you precalculate the radius values? You only need a table that is a quarter (or possibly an eighth) of the size of the image you are processing, and the table would only need to be precalculated once and then re-used for many runs of the algorithm.
So clever maths may not be the fastest solution.
Well, there's always trying to optimize your sqrt; the fastest one I've seen is the old Carmack Quake 3 inverse square root:
http://betterexplained.com/articles/understanding-quakes-fast-inverse-square-root/
That said, since sqrt is non-linear, you're not going to be able to do simple linear interpolation along your line to get your result. The best idea is to use a table lookup, since that will give you blazing-fast access to the data. And since you appear to be iterating by whole integers, a table lookup should be exceedingly accurate.
Well, you can mirror around x = 0 to start with (you need only compute n >= 0, then duplicate those results for the corresponding n < 0). After that, I'd take a look at using the derivative of sqrt(a^2 + b^2) (or the corresponding sin) to take advantage of the constant dx.
If that's not accurate enough, may I point out that this is a pretty good job for SIMD, which will provide you with a reciprocal square root op on both SSE and VMX (and shader model 2).
This is sort of related to a HAKMEM item:
ITEM 149 (Minsky): CIRCLE ALGORITHM
Here is an elegant way to draw almost circles on a point-plotting display:

NEW X = OLD X - epsilon * OLD Y
NEW Y = OLD Y + epsilon * NEW(!) X

This makes a very round ellipse centered at the origin with its size determined by the initial point. epsilon determines the angular velocity of the circulating point, and slightly affects the eccentricity. If epsilon is a power of 2, then we don't even need multiplication, let alone square roots, sines, and cosines! The "circle" will be perfectly stable because the points soon become periodic.

The circle algorithm was invented by mistake when I tried to save one register in a display hack! Ben Gurley had an amazing display hack using only about six or seven instructions, and it was a great wonder. But it was basically line-oriented. It occurred to me that it would be exciting to have curves, and I was trying to get a curve display hack with minimal instructions.
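For the curious, a tiny Python sketch of the item's recurrence (the starting point and epsilon are arbitrary; note that the y update uses the already-updated x, the "NEW(!) X"):

eps = 1.0 / 64            # a power of 2, so in integer arithmetic only shifts are needed
x, y = 100.0, 0.0
points = []
for _ in range(1024):
    x = x - eps * y
    y = y + eps * x       # uses the NEW x, as in the item
    points.append((x, y))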