howmany() Macro in Objective-C

While using Xcode, I accidentally auto-completed to the macro howmany(x, y) and traced it to types.h. The entire line reads as follows:
#define howmany(x, y) __DARWIN_howmany(x, y) /* # y's == x bits? */
This didn't really make much sense, so I followed the path a little more and found __DARWIN_howmany(x, y) in _fd_def.h. The entire line reads as follows:
#define __DARWIN_howmany(x, y) ((((x) % (y)) == 0) ? ((x) / (y)) : (((x) / (y)) + 1)) /* # y's == x bits? */
I have no idea what __DARWIN_howmany(x, y) does. Does the comment at the end of the line shed any light on its intended function? Could someone please explain what this macro does, how it is used, and its relevance in _fd_def.h?

This is a fairly commonly used macro that helps programmers quickly answer the question: if my containers can each hold y things, how many containers do I need to hold x things?
So if your containers can hold five things each, and you have 18 things:
n = howmany(18, 5);
will tell you that you need four containers. Or, if my buffers are allocated in words, but I need to put n characters into them, and words are 8 characters long, then:
n = howmany(n, 8);
returns the number of words needed. This sort of computation is ubiquitous in buffer allocation code.
It is frequently computed as:
#define howmany(x, y) (((x)+(y)-1)/(y))
Also related is roundup(x, y), which rounds x up to the next multiple of y:
#define roundup(x, y) (howmany(x, y)*(y))

Based on what you've posted, the macro seems to be intended to answer a question like, "How many chars does it take to hold 18 bits?" That question could be answered with this line of code
int count = howmany( 18, CHAR_BIT );
which will set count to 3.
The macro works by first checking if y divides evenly into x. If so, it returns x/y, otherwise it divides x by y and rounds up.
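To see both macros in action, here is a minimal, self-contained sketch (the macro bodies are the ones quoted above; CHAR_BIT comes from limits.h):

#include <stdio.h>
#include <limits.h>

#define howmany(x, y) (((x) + (y) - 1) / (y))   /* ceiling division */
#define roundup(x, y) (howmany(x, y) * (y))     /* next multiple of y */

int main(void)
{
    printf("%d\n", howmany(18, 5));        /* 4 containers for 18 things */
    printf("%d\n", howmany(18, CHAR_BIT)); /* 3 chars to hold 18 bits */
    printf("%d\n", roundup(18, 8));        /* 24, the next multiple of 8 */
    return 0;
}

Note that the (x)+(y)-1 form can overflow when x is near the top of its type's range, which the ternary version in _fd_def.h avoids.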

Related

Efficient implementation of while loop in brainfuck

I am having trouble with implementing a brainfuck assembler for codegolf.se. I managed to load a string into memory, find its length, cat it out, print strings n times, etc., but I can't seem to load just the non-lowercase characters into memory. So let's take the following loop, which performs some wizardry. (Hash marks are debugging markers.)
#,#[>#<[<]<<#+#>>>[>]#,#]<[<]
It starts at pointer 512 and writes the string as ASCII values to the cells after 512.
Now if (for whatever reason) I wish to strip out lowercase characters, it will look like this in pseudo-BF:
#,#[>#<[<]<<#+#>>>[>]#do{,(takes input and assigns it)}
while(input>=96/*Go arbitrarily to the right for this implementation but
make sure that the first non-lowercase number is stored at the index*/)#
//Also be sure to zero out any temporary cells used
<[<]
Now my question is: how do I implement such a while loop while using only the cells to the right of 512 as storage AND clearing them out later?
For those curious, this is the problem I wish to solve in brainfuck.
Your code can be simplified to
,[[<]<+>>[>],]<[<]
(the <<+>> is probably a result of using an online compiler that forgets cell 255)
and repeated, to produce the outputting operation:
>.[[<]<->>[>]<.>]<[<]
If you want to use only the empty cells in your way, you can do it. But you will need to establish some protocol of your own for defining the next cell, like saving every data cell with the following cell stating the distance to the next one, as:
[..., 104, 5, x, x, x, x, 108, 3, x, x, 102, 2, ...]
      data pointer        data pointer  data pointer
where x is some arbitrary, non-zero value (otherwise your [<] and [>] scans would stop there). This implementation would be kind of a linked list, but notice it would be space- and code-expensive.
Zeroing cells, or as you call it cleaning them, can be done the same way you did the [<] - by using [-]. This will decrease the cell's value until it reaches 0 and then exit the loop. You can walk to the end of the string and then go back, clearing every cell, until you hit the beginning (a 0, or another reserved value you put there).

Efficient random permutation of n-set-bits

For the problem of producing a bit-pattern with exactly n set bits, I know of two practical methods, but they both have limitations I'm not happy with.
First, you can enumerate all of the possible word values which have that many bits set in a pre-computed table, and then generate a random index into that table to pick out a possible result. This has the problem that as the output size grows the list of candidate outputs eventually becomes impractically large.
Alternatively, you can pick n non-overlapping bit positions at random (for example, by using a partial Fisher-Yates shuffle) and set those bits only. This approach, however, computes a random state in a much larger space than the number of possible results. For example, it may choose the first and second bits out of three, or it might, separately, choose the second and first bits.
This second approach must consume more bits from the random number source than are strictly required. Since it is choosing n bits in a specific order when their order is unimportant, this means that it is making an arbitrary distinction between n! different ways of producing the same result, and consuming at least floor(log_2(n!)) more bits than are necessary.
Can this be avoided?
There is obviously a third approach of iteratively computing and counting off the legal permutations until a random index is reached, but that's simply a space-for-time trade-off on the first approach, and isn't directly helpful unless there is an efficient way to count off those n permutations.
clarification
The first approach requires picking a single random number between zero and w!/((w-n)! * n!) - 1 (where w is the output size), as this is the number of possible solutions.
The second approach requires picking n random values between zero and w-1, zero and w-2, etc., and these have a product of w!/(w-n)!, which is n! times larger than the first approach.
This means that the random number source has been forced to produce roughly log2(n!) extra bits to distinguish n! different results which are all equivalent. I'd like to know if there's an efficient method to avoid relying on this superfluous randomness. Perhaps by using an algorithm which produces an un-ordered list of bit positions, or by directly computing the nth unique permutation of bits.
Seems like you want a variant of Floyd's algorithm:
Algorithm to select a single, random combination of values?
Should be especially useful in your case, because the containment test is a simple bitmask operation. This will require only k calls to the RNG. In the code below, I assume you have randint(limit), which produces a uniform random integer from 0 to limit-1, and that you want k bits set in a 32-bit int:
unsigned mask = 0;
for (int j = 32 - k; j < 32; ++j) {
    int r = randint(j + 1);       /* uniform in [0, j] */
    unsigned b = 1u << r;
    if (mask & b)
        mask |= (1u << j);        /* r already taken: use position j instead */
    else
        mask |= b;
}
How many bits of entropy you need here depends on how randint() is implemented. If k > 16, run it with 32 - k instead and complement (bitwise NOT) the result.
Your alternative suggestion of generating a single random number representing one combination among the set (mathematicians would call this a rank of the combination) is simpler if you use colex order rather than lexicographic rank. This code, for example:
for (i = k; i >= 1; --i) {
    while ((b = binomial(n, i)) > r) --n;
    buf[i-1] = n;
    r -= b;
}
will fill the array buf[] with indices from 0 to n-1 for the k-combination at colex rank r. In your case, you'd replace buf[i-1] = n with mask |= (1 << n). The binomial() function computes the binomial coefficient, which I do with a lookup table (see this). That would make the most efficient use of entropy, but I still think Floyd's algorithm would be a better compromise.
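If you don't have a table-driven binomial() handy, a minimal sketch of one might look like this (the names and the 32-item limit are my own illustrative choices):

#include <stdint.h>

#define MAXN 33

static uint64_t binom_table[MAXN][MAXN];

/* Fill a Pascal's-triangle table once, so binomial() is a plain lookup. */
static void init_binomial(void)
{
    for (int n = 0; n < MAXN; ++n) {
        binom_table[n][0] = 1;
        for (int k = 1; k <= n; ++k)
            binom_table[n][k] = binom_table[n-1][k-1]
                              + (k <= n-1 ? binom_table[n-1][k] : 0);
    }
}

static uint64_t binomial(int n, int k)
{
    return (k < 0 || n < 0 || k > n) ? 0 : binom_table[n][k];
}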
[Expanding my comment:] If you only have a little raw entropy available, then use a PRNG to stretch it further. You only need enough raw entropy to seed a PRNG. Use the PRNG to do the actual shuffle, not the raw entropy. For the next shuffle reseed the PRNG with some more raw entropy. That spreads out the raw entropy and makes less of a demand on your entropy source.
If you know exactly the range of numbers you need out of the PRNG, then you can, carefully, set up your own LCG PRNG to cover the appropriate range while needing the minimum entropy to seed it.
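For illustration, a minimal 32-bit LCG sketch in C; the multiplier and increment are the well-known Numerical Recipes constants, and everything else here is an assumption to adapt to your range:

#include <stdint.h>

static uint32_t lcg_state;

/* Seed once from raw entropy; every later draw is "stretched" entropy. */
void lcg_seed(uint32_t raw_entropy) { lcg_state = raw_entropy; }

uint32_t lcg_next(void)
{
    /* Full-period 32-bit LCG (Numerical Recipes constants). */
    lcg_state = lcg_state * 1664525u + 1013904223u;
    return lcg_state;
}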
ETA: In C++ there is a next_permutation() method. Try using that. See std::next_permutation Implementation Explanation for more.
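For plain C words, an analogous trick (often credited to Gosper, and listed in the Bit Twiddling Hacks collection) steps directly to the next word with the same number of set bits; this sketch assumes the GCC/Clang __builtin_ctz:

#include <stdint.h>

/* Return the next larger 32-bit value with the same popcount as v.
   Assumes v != 0 and that a next value exists. */
uint32_t next_bit_permutation(uint32_t v)
{
    uint32_t t = v | (v - 1);   /* fill in the low run of ones */
    return (t + 1) | (((~t & -~t) - 1) >> (__builtin_ctz(v) + 1));
}

Starting from (1u << k) - 1 and iterating enumerates every k-bit pattern in increasing order, which is exactly the counting-off that the third approach in the question needs.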
Is this a theory problem or a practical problem?
You could still do the partial shuffle, but keep track of the order of the ones and forget the zeroes. There are log(k!) bits of unused entropy in their final order for your future consumption.
You could also just use the recurrence (n choose k) = (n-1 choose k-1) + (n-1 choose k) directly. Generate a random number between 0 and (n choose k)-1; call it r. Iterate over the bits from the nth down to the first. If j of the i remaining bits still have to be set: set the ith bit if r < (i-1 choose j-1); otherwise clear it and subtract (i-1 choose j-1) from r.
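Here is a sketch of that recurrence in C; the helper and function names are mine, and any exact binomial routine (such as a lookup table) would do:

#include <stdint.h>

/* Exact binomial coefficient for small arguments. */
static uint64_t binomial(unsigned n, unsigned k)
{
    if (k > n) return 0;
    uint64_t c = 1;
    for (unsigned i = 0; i < k; ++i)
        c = c * (n - i) / (i + 1);   /* exact: numerator is always divisible */
    return c;
}

/* Return the word whose k set bits form the combination of rank r,
   0 <= r < binomial(n, k), by walking the Pascal recurrence above. */
uint32_t unrank_combination(unsigned n, unsigned k, uint64_t r)
{
    uint32_t mask = 0;
    unsigned j = k;                          /* set bits still to place */
    for (unsigned i = n; i >= 1 && j > 0; --i) {
        uint64_t c = binomial(i - 1, j - 1); /* combinations with bit i-1 set */
        if (r < c) {
            mask |= (uint32_t)1 << (i - 1);
            --j;
        } else {
            r -= c;                          /* skip those and clear bit i-1 */
        }
    }
    return mask;
}

Feeding it a single uniform r in [0, binomial(n, k)) consumes only the log2(n choose k) bits of entropy that the question identifies as the theoretical minimum.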
Practically, I wouldn't worry about the couple of words of wasted entropy from the partial shuffle; generating a random 32-bit word with 16 bits set costs somewhere between 64 and 80 bits of entropy, and that's entirely acceptable. The growth rate of the required entropy is asymptotically worse than the theoretical bound, so I'd do something different for really big words.
For really big words, you might generate n independent bits that are 1 with probability k/n. This immediately blows your entropy budget (and then some), but it only uses linearly many bits. The number of set bits is tightly concentrated around k, though. For a further expected linear entropy cost, I can fix it up. This approach has much better memory locality than the partial shuffle approach, so I'd probably prefer it in practice.
I would use solution number 3: generate the i-th permutation directly.
But do you need to generate the first i-1 ones?
You can do it a bit faster than that with the kind of divide-and-conquer method proposed here: Returning i-th combination of a bit array - and maybe you can improve the solution a bit.
Background
From the formula you have given - w! / ((w-n)! * n!) - it looks like your problem has to do with the binomial coefficient, which counts unique combinations, not permutations, which count the same elements arranged in different positions as distinct.
You said:
"There is obviously a third approach of iteratively computing and counting off the legal permutations until a random index is reached, but that's simply a space-for-time trade-off on the first approach, and isn't directly helpful unless there is an efficient way to count off those n permutations.
...
This means that the random number source has been forced to produce bits to distinguish n! different results which are all equivalent. I'd like to know if there's an efficient method to avoid relying on this superfluous randomness. Perhaps by using an algorithm which produces an un-ordered list of bit positions, or by directly computing the nth unique permutation of bits."
So, there is a way to efficiently compute the nth unique combination, or rank, from the k-indexes. The k-indexes refer to a unique combination. For example, let's say that the n choose k case of 4 choose 3 is taken. This means that there are a total of 4 numbers that can be selected (0, 1, 2, 3), which is represented by n, and they are taken in groups of 3, which is represented by k. The total number of unique combinations can be calculated as n! / (k! * (n-k)!). The rank of zero corresponds to the k-index of (2, 1, 0). Rank one is represented by the k-index group of (3, 1, 0), and so forth.
Solution
There is a formula that can be used to very efficiently translate between a k-index group and the corresponding rank without iteration. Likewise, there is a formula for translating between the rank and corresponding k-index group.
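As one concrete instance of such a formula (this is the colex rank, which agrees with the (2, 1, 0) -> 0, (3, 1, 0) -> 1 ordering above; the names are mine, not taken from the paper):

#include <stdint.h>

/* Exact binomial coefficient for small arguments. */
static uint64_t choose(unsigned n, unsigned k)
{
    if (k > n) return 0;
    uint64_t c = 1;
    for (unsigned i = 0; i < k; ++i)
        c = c * (n - i) / (i + 1);
    return c;
}

/* Rank of a combination from its k-indexes, supplied in ascending order
   idx[0] < idx[1] < ... < idx[k-1]. No iteration over other combinations. */
uint64_t rank_combination(const unsigned *idx, unsigned k)
{
    uint64_t r = 0;
    for (unsigned i = 0; i < k; ++i)
        r += choose(idx[i], i + 1);
    return r;
}

For example, rank_combination of {0, 1, 2} is 0 and of {0, 1, 3} is 1, matching the ranks described above.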
I have written a paper on this formula and how it can be seen from Pascal's Triangle. The paper is called Tablizing the Binomial Coefficient.
I have written a C# class which is in the public domain that implements the formula described in the paper. It uses very little memory and can be downloaded from the site. It performs the following tasks:
Outputs all the k-indexes in a nice format for any N choose K to a file. The K-indexes can be substituted with more descriptive strings or letters.
Converts the k-index to the proper lexicographic index or rank of an entry in the sorted binomial coefficient table. This technique is much faster than older published techniques that rely on iteration. It does this by using a mathematical property inherent in Pascal's Triangle and is very efficient compared to iterating over the entire set.
Converts the index in a sorted binomial coefficient table to the corresponding k-index. The technique used is also much faster than older iterative solutions.
Uses Mark Dominus's method to calculate the binomial coefficient, which is much less likely to overflow and works with larger numbers. This version returns a long value. There is at least one other method that returns an int; make sure that you use the method that returns a long value.
The class is written in .NET C# and provides a way to manage the objects related to the problem (if any) by using a generic list. The constructor of this class takes a bool value called InitTable that when true will create a generic list to hold the objects to be managed. If this value is false, then it will not create the table. The table does not need to be created in order to use the 4 above methods. Accessor methods are provided to access the table.
There is an associated test class which shows how to use the class and its methods. It has been extensively tested with at least 2 cases and there are no known bugs.
The following tested example code demonstrates how to use the class and will iterate through each unique combination:
public void Test10Choose5()
{
    String S;
    int Loop;
    int N = 10; // Total number of elements in the set.
    int K = 5;  // Total number of elements in each group.
    // Create the bin coeff object required to get all
    // the combos for this N choose K combination.
    BinCoeff<int> BC = new BinCoeff<int>(N, K, false);
    int NumCombos = BinCoeff<int>.GetBinCoeff(N, K);
    // The KIndexes array specifies the indexes for a lexicographic element.
    int[] KIndexes = new int[K];
    StringBuilder SB = new StringBuilder();
    // Loop thru all the combinations for this N choose K case.
    for (int Combo = 0; Combo < NumCombos; Combo++)
    {
        // Get the k-indexes for this combination.
        BC.GetKIndexes(Combo, KIndexes);
        // Verify that the KIndexes returned can be used to retrieve the
        // rank or lexicographic order of the KIndexes in the table.
        int Val = BC.GetIndex(true, KIndexes);
        if (Val != Combo)
        {
            S = "Val of " + Val.ToString() + " != Combo Value of " + Combo.ToString();
            Console.WriteLine(S);
        }
        SB.Remove(0, SB.Length);
        for (Loop = 0; Loop < K; Loop++)
        {
            SB.Append(KIndexes[Loop].ToString());
            if (Loop < K - 1)
                SB.Append(" ");
        }
        S = "KIndexes = " + SB.ToString();
        Console.WriteLine(S);
    }
}
So, the way to apply the class to your problem is by considering each bit in the word size as the total number of items. This would be n in the n! / (k! * (n-k)!) formula. To obtain k, or the group size, simply count the number of bits set to 1. You would have to create a list or array of the class objects for each possible k, which in this case would be 32. Note that the class does not handle N choose N, N choose 0, or N choose 1, so the code would have to check for those cases and return 1 for both the 32 choose 0 and 32 choose 32 cases, and 32 for 32 choose 1.
If you need to use values not much larger than 32 choose 16 (the worst case for 32 items - it yields 601,080,390 unique combinations), then you can use 32 bit integers, which is how the class is currently implemented. If you need to use 64 bit integers, then you will have to convert the class to use 64 bit longs. The largest value that an unsigned 64 bit integer can hold is 18,446,744,073,709,551,615, which is 2^64 - 1 (a signed long tops out at 2^63 - 1). The worst case for n choose k when n is 64 is 64 choose 32, which is 1,832,624,140,942,590,534 - so a long value will work for all 64 choose k cases. If you need numbers bigger than that, then you will probably want to look into using some sort of big integer class. In C#, the .NET framework has a BigInteger class. If you are working in a different language, it should not be hard to port.
If you are looking for a very good PRNG, one of the fastest, most lightweight, and highest-quality ones is the Tiny Mersenne Twister, or TinyMT for short. I ported the code over to C++ and C#; it can be found here, along with a link to the original author's C code.
Rather than using a shuffling algorithm like Fisher-Yates, you might consider doing something like the following example instead:
// Get 7 random cards.
ulong Card;
ulong SevenCardHand = 0;
for (int CardLoop = 0; CardLoop < 7; CardLoop++)
{
    do
    {
        // The card has a value between 0 and 51 (CardsInDeck = 52). So, get a
        // random value and left shift it into the proper bit position.
        Card = (1UL << RandObj.Next(CardsInDeck));
    } while ((SevenCardHand & Card) != 0); // reject cards already in the hand
    SevenCardHand |= Card;
}
The above code is faster than any shuffling algorithm (at least for obtaining a subset of random cards) since it only works on 7 cards instead of 52. It also packs the cards into individual bits within a single 64 bit word. It makes evaluating poker hands much more efficient as well.
As a side note, the best binomial coefficient calculator I have found that works with very large numbers (it accurately calculated a case that yielded over 15,000 digits in the result) can be found here.

Is this considered memoisation?

In optimising some code recently, we ended up performing what I think is a "type" of memoisation but I'm not sure we should be calling it that. The pseudo-code below is not the actual algorithm (since we have little need for factorials in our application, and posting said code is a firing offence) but it should be adequate for explaining my question. This was the original:
def factorial (n):
    if n == 1 return 1
    return n * factorial (n-1)
Simple enough, but we added fixed points so that large numbers of calculations could be avoided for larger numbers, something like:
def factorial (n):
    if n == 1 return 1
    if n == 10 return 3628800
    if n == 20 return 2432902008176640000
    if n == 30 return 265252859812191058636308480000000
    if n == 40 return 815915283247897734345611269596115894272000000000
    # And so on.
    return n * factorial (n-1)
This, of course, meant that 12! was calculated as 12 * 11 * 3628800 rather than the less efficient 12 * 11 * 10 * 9 * 8 * 7 * 6 * 5 * 4 * 3 * 2 * 1.
But I'm wondering whether we should be calling this memoisation since that seems to be defined as remembering past results of calculations and using them. This is more about hard-coding calculations (not remembering) and using that information.
Is there a proper name for this process or can we claim that memoisation extends back not just to calculations done at run-time but also those done at compile-time and even back to those done in my head before I even start writing the code?
I'd call it pre-calculation rather than memoization. You're not really remembering any of the calculations you've done in the process of computing a final answer for a given input; rather, you're pre-calculating some fixed number of answers for specific inputs. Memoization as I understand it is really more akin to "caching" a set of results as you calculate them for later reuse. If you were to store each value calculated so that you didn't need to recalculate it again later, that would be memoization. Your solution differs in that you never store any "calculated" results from your program, only the fixed points that have been pre-calculated. With memoization, if you reran the function with an input it had already seen (even one different from the pre-calculated ones), it would not have to recalculate the result; it would simply reuse it.
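To make the distinction concrete, here is a small C sketch (the names and the choice of factorial are mine, purely illustrative): the first function only ever knows its hard-coded fixed points, while the second fills a cache at run-time:

#include <stdint.h>

/* Pre-calculation: the fixed points are baked in at compile time. */
uint64_t factorial_precalc(unsigned n)
{
    if (n <= 1) return 1;
    if (n == 10) return 3628800ULL;
    if (n == 20) return 2432902008176640000ULL;
    return n * factorial_precalc(n - 1);
}

/* Memoization: results are cached the first time they are computed.
   Assumes n <= 20 so the result fits in 64 bits and the cache is big enough. */
static uint64_t cache[21];   /* 0 means "not yet computed" */

uint64_t factorial_memo(unsigned n)
{
    if (n <= 1) return 1;
    if (cache[n] == 0)
        cache[n] = n * factorial_memo(n - 1);
    return cache[n];
}

After a single call to factorial_memo(20), every smaller input is answered by a lookup; factorial_precalc always recomputes anything that is not one of its fixed points.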
Whether or not you are hard-coding the results in, this is still memoization, because you have already calculated results that you are expecting to calculate again. This may happen at run-time or at compile-time, but either way, it's memoization.
Memoization is done at run-time. You are optimizing at compile time. So, it is not.
See for example ... Wikipedia
Or ...
Memoization
The term memoization was coined by Donald Michie (1968) to refer to the process by which a function is made to automatically remember the results of previous computations. The idea has become more popular in recent years with the rise of functional languages; Field and Harrison (1988) devote a whole chapter to it. The basic idea is just to keep a table of previously computed input/result pairs.
Peter Norvig
University of California
(the bold is mine)
Link
def memoisation(f):
    dct = {}
    def myfunction(x):
        if x not in dct:
            dct[x] = f(x)
        return dct[x]
    return myfunction

@memoisation
def fibonacci(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fibonacci(n-1) + fibonacci(n-2)

def nb_appels(n):
    if n == 0 or n == 1:
        return 0
    else:
        return 1 + nb_appels(n-1) + 1 + nb_appels(n-2)

print(fibonacci(13))
print('nbappel', nb_appels(13))

get output as a vector in R during a loop

How can I get the output as a vector in R?
For example, if I want to have
for (i in 1:1000) {if i mod 123345 = 0, a = list(i)}
a
but I would want to find all i that divide evenly into 123345 (i.e., factors), and not just the largest one.
There may be a more concise way to do this, but I would do it this way:
i <- 1:1000
j <- i[12345 %% i == 0 ]
The resulting vector j contains the values in i which are factors of 12345. In R the modulo operator is %% and it's a bit of a bitch to find when searching on your own. It's buried in the help document for arithmetic operators; you can find it by searching for "+", which must be in quotes, like ?"+", and then reading down a bit.
You better add a VBA tag if you want to find a VBA answer. But I suspect it will involve the VBA modulo operator ;)
JD Long's method is really the first that came to mind, but another:
Filter(function(x) !(12345 %% x), 1:1000)
I think it's kind of fun to avoid any need for an explicit assignment. (Kind of too bad to create a new function each time.) (In this case, "!" converts a non-zero value to FALSE and zero to TRUE. "Filter" picks out each element evaluating to TRUE.)
Also avoiding the need for a separate allocation and not creating a new function:
which(!(12345 %% 1:1000))
Timing:
> y <- 1:1000
> system.time(replicate(1e5, y[12345 %% y == 0 ]))
user system elapsed
8.486 0.058 8.589
> system.time(replicate(1e5, Filter(function(x) !(12345 %% x), y)))
Timing stopped at: 90.691 0.798 96.118 # I got impatient and killed it
# Even pulling the definition of the predicate outside,
# it's still too slow for me want to wait for it to finish.
# I'm surprised Filter is so slow.
> system.time(replicate(1e5, which(!12345 %% y)))
user system elapsed
11.618 0.095 11.792
So, looks like JD Long's method is the winner.
You wrote:
for (i in 1:1000) {if i mod 123345 = 0, a = list(i)} a
JD Long's code is much better, but if you wanted this loopy strategy to work try instead:
a <- vector(mode="list"); for (i in 1:1000) { if (123345 %% i == 0) { a <- c(a, i) } }
as.vector(unlist(a))

Uniform distance between points

Given a path defined by several points that are not at a uniform distance from each other, how could I redefine the same number of points along the same path but at a uniform distance? I'm trying to do this in Objective-C with NSArrays of CGPoints, but so far I haven't had any luck.
Thank you for any help.
EDIT
I was wondering if it would help to reduce the number of points (for example, when 3 points are collinear we could remove the middle one), but I'm not sure that would help.
EDIT
Illustrating:
Red are the original points, blue the post-processed points.
The new path defined by the blue dots does not correspond to the original one.
I don't think you can do what you state that you want to do. But that could be a misunderstanding on my part. For example, I have understood from your comment that the path is straight between successive points, not curved.
Take, for example, a simple path of 3 points (0,1,2) and 2 line segments (0-1,1-2) of different lengths. Leave points 0 and 2 where they are and introduce a new point 1' which is equidistant from points 0 and 2. If point 1' is on one of the line segments 0-1, 1-2, then one of the line segments 0-1', 1'-2 is not coincident with 0-1, 1-2. (Easier to draw this, which I suggest you do.) If point 1' is not on either of the original line segments then the entire path is new, apart from its endpoints.
So, what relationship between the new path and the old path do you want?
EDIT: more of an extended comment really, like my 'answer' but the comment box is too small.
I'm still not clear how you want to define the new path and what relationship it has to the old path. First you wanted to keep the same number of points, but in your edit you say that this is not necessary. You agree that replacing points by new points will shift the path. Do you want, perhaps, a new path from point 0 to point N-1, defined by N points uniformly spaced on a path which minimises the area between the old and new paths when drawn on the Cartesian plane?
Or, perhaps you could first define a polynomial (or spline or other simple curve) path through the original points, then move the points to and fro along the curve until they are uniformly spaced ?
I think the problem is simple and easily solvable actually :)
The basic idea is:
First check if the distance between your current point (P) and the end point of the line segment you are on is >= the desired spacing between P and the next point (Q).
If it is, great, we use some simple trigonometry to find Q on this segment.
Else, we move to the adjacent line segment (in your ordering), subtract the distance between P and the endpoint of the current segment from the desired spacing, and continue the process.
Pseudocode:
Defined previously
struct LineSegment
{
    Point start, end;
    int ID;
    double len;            // len = EuclideanDistance(start, end);
    LineSegment *next_segment;
    double theta;          // theta = atan2(end.y - start.y, end.x - start.x);
}

Function [LineSegment nextseg] = FindNextLineSegment(LineSegment lineseg)
Input: LineSegment object of the current line segment
Output: LineSegment object of the adjacent line segment in your ordering;
nextseg.ID = -1 if there are no more segments

Function: Find the next point along your path
Function [Point Q, LineSegment Z] = FindNextPt(Point P, LineSegment lineseg, double dist):
Input: The current point P, the LineSegment which contains P, and the desired distance to the next point.
Output: The next point Q, and the line segment it is on (Z.ID = -1 once the path runs out).
Procedure:

distToEndpt = EuclideanDistance(P, lineseg->end);
if( distToEndpt >= dist )
{
    // Q lies on this segment: step dist from P along the segment's direction.
    Point Q(P.x + dist*cos(lineseg->theta), P.y + dist*sin(lineseg->theta));
    Z = lineseg;
}
else
{
    // Carry the leftover distance over onto the next segment.
    nextseg = lineseg->next_segment;
    if( nextseg.ID != -1 )
    {
        [Q, Z] = FindNextPt(nextseg->start, nextseg, dist - distToEndpt);
    }
    else
    {
        Q = P;
        Z = lineseg;
        Z.ID = -1;    // signal that the path has ended
    }
}
return [Q, Z]

Entry point
Function main()
Output: vector of points
Procedure:

vector<LineSegment> line_segments;
// Define it somehow, giving each segment all its properties
// ....
vector<Point> equidistant_points;
const double d = DIST;
[Q, Z] = FindNextPt(line_segments[0].start, line_segments[0], d);
while( Z.ID != -1 )
{
    equidistant_points.push_back(Q);
    [Q, Z] = FindNextPt(Q, Z, d);
}
My sense is that this is a very hard problem.
It basically amounts to a constrained optimization problem. The objective function measures how close the new line is from the old one. The constraints enforce that the new points are the same distance apart.
Finding a good objective function is the tricky bit, since it must be differentiable, and we don't know ahead of time on which segments each new point will lie: for instance, it's possible for two new points to lie on an extra-long old segment, and no new points lying on some extra-short old segment. If you somehow know a priori on which segments the new points will lie, you can sum the distances between points and their target segments and use that as your objective function (note that this distance function is nontrivial, since the segments are finite: it is composed of three pieces and its level-sets are "pill-shaped.")
Or you might forget about requiring the new points to lie on old segments, and just look for a new polyline that's "close" to the old one. For instance, you might try to write down an L2-like metric between polylines, and use that as your objective function. I don't expect this metric to be pleasant to write down, or differentiate.
I think a perturbative approach will work for this one.
I assume:
we know how to slide a point along the path and recalculate the distances (pretty trivial), and
the end points must remain fixed (otherwise the whole problem becomes trivial).
Then just iterate over the remaining (n-2) points: if point k is closer to point (k-1) than to point (k+1), move it a little forward along the path. Likewise if it's closer to point (k+1), move it a little back along the path.
It's probably best to start with large step sizes (for speed) then make them smaller (for precision). Even if the points pass each other, I think this approach will sort them back into order.
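A minimal sketch of that iteration in C, assuming each point is represented by its arc-length position t[k] along the path (mapping a t value back to an (x, y) point on the polyline is the "slide a point along the path" operation assumed above):

/* Relax interior points toward uniform spacing in arc length.
   t[0] and t[n-1] stay fixed; the step size shrinks each pass. */
void relax_spacing(double *t, int n, int iterations, double step)
{
    for (int it = 0; it < iterations; ++it) {
        for (int k = 1; k < n - 1; ++k) {
            double d_prev = t[k] - t[k - 1];   /* distance back to point k-1 */
            double d_next = t[k + 1] - t[k];   /* distance on to point k+1  */
            /* nudge toward whichever neighbour is farther away */
            t[k] += step * (d_next - d_prev);
        }
        step *= 0.9;   /* large steps first for speed, small ones for precision */
    }
}

With step below 0.5 each update keeps a point strictly between its neighbours, so the ordering is preserved while the spacing evens out.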
This will use quite a bit of vector math but is really quite simple.
First you will need to find the total length of the path. How you do it depends on how the points of the path are stored. Here is a basic example on a 2-dimensional path in pseudo-code.
// This would generally be done with vectors; however, I'm not sure
// if you would like to make your own class for them as I do, so I will use arrays.
// The collection of points
double Points[4][2] = { {0,0}, {1,2}, {5,4}, {6,5} };
double Points2[4][2];           // the recomputed, uniformly spaced points
double x, y, d = 0, dist, m, b;
// goes to 3 because 4 points define 3 segments
for(int i=0; i<3; i++) {
    x = Points[i+1][0] - Points[i][0];
    y = Points[i+1][1] - Points[i][1];
    d += sqrt(( x * x ) + ( y * y ));
}
// divide the total distance by the number of segments (points - 1)
// to get the uniform spacing
dist = d/3;
// now that you have the new distance you must find the points
// on your path that are that far from your current point
Points2[0][0] = Points[0][0];   // the first point stays fixed
Points2[0][1] = Points[0][1];
// same deal here... goes to 3 because there are 4 points
for(int i=0; i<3; i++) {
    // slope
    m = ( Points[i+1][1] - Points[i][1] ) / ( Points[i+1][0] - Points[i][0] );
    // y intercept
    b = -(m * Points[i][0]) + Points[i][1];
    // processor heavy which makes this problem difficult
    // if someone knows a better way please say something
    // sweep a circle of radius dist around the current point in 0.1 degree
    // steps, grabbing the point where the circle meets the line;
    // if it doesn't, check the next segment.
    for(double j=0; j<360; j += 0.1) {
        double rad = j * M_PI / 180.0;        // sin/cos expect radians
        x = Points2[i][0] + dist * cos(rad);
        y = Points2[i][1] + dist * sin(rad);
        if (fabs(y - (m * x + b)) < 0.001) {
            // then the point is (approximately) on the line, so set it
            Points2[i+1][0] = x;
            Points2[i+1][1] = y;
            break;
        }
    }
}
The last step (the brute-force angle sweep) is what makes it computationally unreasonable, but this should work for you.
There may be a small math error somewhere I double checked this several times but there could be something I missed. So if anyone notices something please inform me and I will edit it.
Hope this helps,
Gale