We are given an N*N grid, and we start at its top-left corner. Every square of the grid has a value attached to it: whoever reaches that square wins an amount of money in dollars equal to that value. The legal moves are one step to the right or one step down. We have to reach the bottom-right corner of the grid along a path that maximizes the amount of money won. Obviously we have to stay within the grid and cannot wander off it.
I started this problem with a greedy approach: at each step, look at the square immediately to the right and the square immediately below the current square, and move to whichever has the higher value. But this does not give the right result all the time. For example, in the following grid,
{ 6, 9, 18, 1 }
{ 2, 10, 0, 2 }
{ 1, 100, 1, 1 }
{ 1, 1, 1, 1 }
my algorithm gives the maximum-valued path as
6 -> 9 -> 18 -> 1 -> 2 -> 1 -> 1
which totals 38, but we could have earned more on the path
6 -> 9 -> 10 -> 100 -> 1 -> 1 -> 1
which totals 128. Could you please help me build a suitable algorithm? I have not coded this one yet, because it would give a wrong output anyway. I don't know how to approach this problem without brute force, which would consist of looking at the values along all the possible paths and then taking the maximum.
#include <iostream>
#include <queue>
#include <vector>
using namespace std;

int main()
{
    int n;
    cin >> n;

    // a holds the grid values, b holds the best total found so far for each cell
    vector<vector<int>> a(n, vector<int>(n)), b(n, vector<int>(n));
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
        {
            cin >> a[i][j];
            b[i][j] = a[i][j];
        }

    // relax cells in BFS order; coordinates are pushed as two consecutive ints
    queue<int> q;
    int u, v, m = 0;
    q.push(0); q.push(0);
    while (!q.empty())
    {
        u = q.front(); q.pop();
        v = q.front(); q.pop();
        if (v < n - 1)                      // try the cell to the right
        {
            m = b[u][v] + a[u][v + 1];
            if (m > b[u][v + 1])
                b[u][v + 1] = m;
            q.push(u); q.push(v + 1);
        }
        if (u < n - 1)                      // try the cell below
        {
            m = b[u][v] + a[u + 1][v];
            if (m > b[u + 1][v])
                b[u + 1][v] = m;
            q.push(u + 1); q.push(v);
        }
    }

    cout << b[n - 1][n - 1];
    return 0;
}
The problem can be solved with the following approach. Each cell at position (i,j) is associated with a value val(i,j), which is the maximum total value achievable by reaching it from position (0,0) using the described legal moves (down, right). The value at position (0,0) is simply its grid value; in the sequel the grid value is denoted grid(i,j) for every i, j in {0,...,N-1}. We obtain the following recurrence relation
val(i,j) = grid(i,j) + max{ val(i-1,j),    // path coming from the upper cell
                            val(i,j-1) }   // path coming from the left cell
where indices outside of {0,...,N-1} x {0,...,N-1} are assumed to yield a value of negative infinity and are never actually used. The recurrence relation is valid because there are at most two ways to reach a cell, namely from its upper neighbour or from its left neighbour (except for cells on the border, which may be reachable from only one neighbour).
The key point for an efficient evaluation of val is to organize the calculation so that all needed neighbours are already evaluated; this can be done by successively starting at the leftmost cell for which val has not yet been calculated and working from there in an upwards-rightwards manner (along anti-diagonals) until the top row is reached. This is iterated until val(N-1,N-1) is evaluated, which yields the desired result.
If, in addition, the specific path to (N-1,N-1) is required, either backtracking or an auxiliary data structure has to be used to store how each value in the above recurrence relation was calculated, i.e. which term yielded the maximum.
Edit
Alternatively, the evaluation can be done row-wise from left to right, which also has the desired property that all values needed by the recurrence relation are already calculated; this is arguably easier to implement. In either case, the runtime bound is O(N^2).
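For illustration, here is a minimal C++ sketch of the row-wise evaluation under this recurrence (the function and variable names are my own, not from the question):

#include <algorithm>
#include <vector>

// Returns the maximum total value collectible on a monotone (right/down)
// path from (0,0) to (N-1,N-1). grid is an N x N matrix of cell values.
long long maxPathValue(const std::vector<std::vector<int>>& grid)
{
    const int n = grid.size();
    std::vector<std::vector<long long>> val(n, std::vector<long long>(n));
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
        {
            long long best = 0;                       // val(0,0) has no predecessor
            if (i > 0 && j > 0) best = std::max(val[i-1][j], val[i][j-1]);
            else if (i > 0)     best = val[i-1][j];   // first column: only from above
            else if (j > 0)     best = val[i][j-1];   // first row: only from the left
            val[i][j] = grid[i][j] + best;
        }
    return val[n-1][n-1];
}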
Actually this is a problem that is solvable using dynamic programming. You only need to adapt the algorithm for calculating the edit distance, allowing for varying rewards.
The algorithm is described, for example, in https://web.stanford.edu/class/cs124/lec/med.pdf
The basic idea is that you start from the top and fill in a cell as soon as you know its neighbouring (top, left) cells.
The value you put in the cell is the higher of the two neighbours plus the value of the current cell. When you reach the bottom-right corner you just have to follow the path back.
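To make the "follow the path back" step concrete, here is a minimal sketch assuming the table val from the recurrence above has already been filled (the helper name is my own):

#include <algorithm>
#include <utility>
#include <vector>

// Recover the cell sequence of an optimal path by walking backwards from
// (N-1,N-1), always stepping to the neighbour whose val produced the maximum.
std::vector<std::pair<int,int>> recoverPath(const std::vector<std::vector<long long>>& val)
{
    const int n = val.size();
    std::vector<std::pair<int,int>> path;
    int i = n - 1, j = n - 1;
    path.push_back({i, j});
    while (i > 0 || j > 0)
    {
        // Prefer the predecessor with the larger accumulated value.
        if (i > 0 && (j == 0 || val[i-1][j] >= val[i][j-1])) --i;
        else --j;
        path.push_back({i, j});
    }
    std::reverse(path.begin(), path.end());   // path now runs (0,0) -> (N-1,N-1)
    return path;
}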
Related
I am working on developing an efficient iterative code to compute m^n. After some thinking and googling I found this code:
public static int power(int n, int m)
// Efficiently calculates m to the power of n iteratively
{
int pow=m, acc=1, count=n;
while(count!=0)
{
if(count%2==1)
acc=acc*pow;
pow=pow*pow;
count=count/2;
}
return acc;
}
This logic makes sense to me except for one thing: why are we squaring the value of pow at the end of each iteration? I am familiar with the similar recursive approach, but this squaring does not look very intuitive to me. Could I kindly get some help here? An example with an explanation would be really helpful.
The base (pow) is being squared each iteration because count (the remaining exponent) is being halved each iteration.
If the count is odd, the accumulator is multiplied by the current base first. The algorithm relies on integer arithmetic, which discards the fractional part of the division, so an odd count is effectively decremented by 1 before being halved.
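One way to see why the squaring works: the loop maintains the invariant acc * pow^count == m^n. When count is even, pow^count == (pow*pow)^(count/2), so squaring the base and halving the count preserves the product; when count is odd, one factor of pow is first moved into acc. Here is the same code again with that invariant spelled out in comments (the comments are mine):

// Computes m^n iteratively by exponentiation by squaring (n >= 0).
int power(int n, int m)
{
    int pow = m, acc = 1, count = n;
    // Invariant before each iteration: acc * pow^count == m^n
    while (count != 0)
    {
        if (count % 2 == 1)
            acc = acc * pow;   // odd: move one factor of pow into acc
        pow = pow * pow;       // pow^count == (pow*pow)^(count/2) for the now-even count
        count = count / 2;     // halve the remaining exponent
    }
    return acc;                // count == 0, so acc == m^n
}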
This is a very tricky solution to understand. I was solving this problem on LeetCode and found this iterative solution. I spent a whole day understanding this beautiful iterative solution. The main difficulty is that the iterative solution does not work quite like its recursive counterpart.
Let's pick an example to demonstrate. But first I have to rewrite your code, renaming some variables, because the names in the given code are very confusing.
// Find m^n
public static int power(int n, int m)
{
int pow=n, result=1, base=m;
while(pow > 0)
{
if(pow%2 == 1) result = result * base;
base = base * base;
pow = pow/2;
}
return result;
}
Let's understand the beauty step by step.
Let's say base = 2 and power = 10.
| Calculation | Description |
| --- | --- |
| 2^10 = (2*2)^5 = 4^5 (even) | We change the base to 4 and the power to 5, so it is now enough to find 4^5 (the base is multiplied by itself and the power is halved). |
| 4^5 = 4 * 4^4 (odd) | We separate out a single 4, which is the base of the current iteration, and store it in the result variable. We will now find the value of 4^4 and then multiply it with result. |
| 4^4 = (4*4)^2 = 16^2 (even) | We change the base to 16 and the power to 2. It is now enough to find 16^2. |
| 16^2 = (16*16)^1 = 256^1 (even) | We change the base to 256 and the power to 1. It is now enough to find 256^1. |
| 256^1 = 256 * 256^0 (odd) | We separate out a single 256, which is the current base (it comes from the evaluation of 4^4), and multiply it with the previous result. We continue evaluating the remaining 256^0. |
| 256^0 (power is zero) | The power is 0, so we stop the iteration. |
So, after translating the process into pseudo code, it will look like this:
If power is even:
    base = base * base
    power /= 2
If power is odd:
    result = result * base
    power -= 1
Now, let's make another observation: floor(5 / 2) and (5 - 1) / 2 are the same.
So, for an odd power, we can directly do power /= 2 instead of power -= 1. The pseudo code then becomes:
If power is odd:
    result = result * base
Then, whether power is odd or even:
    base = base * base
    power /= 2
I hope you now see what is going on behind the scenes.
I was reading a book about competitive programming and encountered a problem where we have to count all possible paths in an n*n matrix.
Now the conditions are:
1. All cells must be visited exactly once (no cell may be left unvisited or visited more than once)
2. The path should start at (1,1) and end at (n,n)
3. Possible moves from the current cell are right, left, up and down
4. You cannot go out of the grid
Now this is my code for the problem:
typedef long long ll;

// Counts paths from (r,c) to (n-1,n-1) that visit every remaining cell exactly once.
// done marks the cells already on the current path.
ll path_count(ll n, vector<vector<bool>>& done, ll r, ll c) {
    ll count = 0;
    done[r][c] = true;
    if (r == (n - 1) && c == (n - 1)) {
        // reached the target: count it only if every cell has been visited
        for (ll i = 0; i < n; i++) {
            for (ll j = 0; j < n; j++) {
                if (!done[i][j]) {
                    done[r][c] = false;
                    return 0;
                }
            }
        }
        count++;
    } else {
        // try all four directions that stay inside the grid and are unvisited
        if ((r + 1) < n  && !done[r + 1][c]) count += path_count(n, done, r + 1, c);
        if ((r - 1) >= 0 && !done[r - 1][c]) count += path_count(n, done, r - 1, c);
        if ((c + 1) < n  && !done[r][c + 1]) count += path_count(n, done, r, c + 1);
        if ((c - 1) >= 0 && !done[r][c - 1]) count += path_count(n, done, r, c - 1);
    }
    done[r][c] = false;  // backtrack
    return count;
}
Here, if we define a recurrence relation, it could be: T(n) = 4T(n-1) + n^2
Is this recurrence relation true? I don't think so, because if we use the master theorem it gives the result O(4^n * n^2), and I don't think the algorithm can be of this order.
The reason I say this is that when I run it for a 7*7 matrix it takes around 110.09 seconds, and I don't think an O(4^n * n^2) algorithm should take that much time for n = 7.
If we calculate it for n = 7, the approximate number of operations would be 4^7 * 7^2 = 802816 ~ 10^6. For that amount of work it should not take that much time. So I conclude that my recurrence relation is false.
This code generates the output 111712 for n = 7, which is the same as the book's output, so the code is right.
So what is the correct time complexity?
No, the complexity is not O(4^n * n^2).
Consider the 4^n in your notation. It means going to a depth of at most n (7 in your case) and having 4 choices at each level. But that is not what happens here: at the 8th level you still have multiple choices of where to go next. In fact, you keep branching until you complete the path, which has depth n^2.
So a non-tight bound gives us O(4^(n^2) * n^2). This bound, however, is far from tight, as it assumes you have 4 valid choices in each recursive call, which is not the case.
I am not sure how much tighter it can be made, but a first attempt drops it to O(3^(n^2) * n^2), since you cannot go back to the cell you just came from. This bound is still far from optimal.
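If you want to check a proposed bound empirically, one option is to count the recursive calls directly for a few small values of n and compare them with the candidate formulas. A rough, self-contained sketch (the global calls counter and the driver are my own additions, not part of the book's code):

#include <iostream>
#include <vector>
using namespace std;
typedef long long ll;

ll calls = 0;  // total number of recursive invocations

ll path_count(ll n, vector<vector<bool>>& done, ll r, ll c) {
    ++calls;
    ll count = 0;
    done[r][c] = true;
    if (r == n - 1 && c == n - 1) {
        bool all = true;                      // did we cover every cell?
        for (ll i = 0; i < n && all; i++)
            for (ll j = 0; j < n && all; j++)
                if (!done[i][j]) all = false;
        if (all) count = 1;
    } else {
        if (r + 1 < n  && !done[r + 1][c]) count += path_count(n, done, r + 1, c);
        if (r - 1 >= 0 && !done[r - 1][c]) count += path_count(n, done, r - 1, c);
        if (c + 1 < n  && !done[r][c + 1]) count += path_count(n, done, r, c + 1);
        if (c - 1 >= 0 && !done[r][c - 1]) count += path_count(n, done, r, c - 1);
    }
    done[r][c] = false;
    return count;
}

int main() {
    for (ll n = 2; n <= 5; n++) {
        vector<vector<bool>> done(n, vector<bool>(n, false));
        calls = 0;
        ll paths = path_count(n, done, 0, 0);
        cout << "n=" << n << " paths=" << paths << " calls=" << calls << "\n";
    }
}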
Despite the last 30 minutes I spent trying to understand time and space complexity better, I still can't confidently determine them for the algorithm below:
bool checkSubstr(std::string sub)
{
    // 6 OR(||) connected if statements (checks whether the parameter
    // is among the items in the list)
}

void checkWords(int start, int end)
{
    int wordList[2] = {0};
    int j = 0;
    if (start < 0)
    {
        start = 0;
    }
    if (end > cAmount)
    {
        end = cAmount - 1;
    }
    if (end - start < 2)
    {
        return;
    }
    for (int i = start; i <= end - 2; i++)
    {
        if (crystals[i] == 'I' || crystals[i] == 'A')
        {
            continue;
        }
        if (checkSubstr(crystals.substr(i, 3)))
        {
            wordList[j] = i;
            j++;
        }
    }
    if (j == 1)
    {
        crystals.erase(wordList[0], 3);
        cAmount -= 3;
        checkWords(wordList[0] - 2, wordList[0] + 1);
    }
    else if (j == 2)
    {
        crystals.erase(wordList[0], (wordList[1] - wordList[0] + 3));
        cAmount -= wordList[1] - wordList[0] + 3;
        checkWords(wordList[0] - 2, wordList[0] + 1);
    }
}
The function basically checks a sub-string of the whole string for predetermined 3-letter combinations (e.g. "SAN"). The sub-string length can be 4-6; there is no real way to pin it down, since it depends on the input (pretty sure it's not relevant, although not 100%).
My reasoning:
If there are n letters in the string, in the worst-case scenario we have to check each of them. Again, depending on the input, this can happen in 3 ways.
All 6-length sub-strings: if this is the case, the function runs n/6 times, each run doing 8 (or 10?) operations, which (I think) means its time complexity is O(n).
All 4-length sub-strings: pretty much the same reasoning as above, O(n).
4- and 6-length sub-strings mixed: I can't see why this would be different from the previous two, O(n).
As for the space complexity, I am completely lost. However, I have an idea:
If the function recurs the maximum number of times, it will require:
n/4 x the amount used in one run
which made me think it should be O(n), although I'm not convinced this is correct. I thought maybe seeing someone else's thought process on this example would help me understand how to calculate time and space complexity better.
Thank you for your time.
EDIT: Let me provide clearer information. We read a combination of 6 different letters into a string; this can be (almost) any combination of any length. 'crystals' is the string, and we are looking for 6 different 3-letter combinations in that list of letters, sort of like a jewel-matching game. The starting list contains no matches (none of the 6 predetermined combinations exists in the first place). Therefore the only way matches can occur from then on is through swaps or through matches disappearing. Once a swap is processed by the top-level code, the function is called to check for matches, and if a match is found the function recurses after deleting the "match" part of the string.
Now let's look at how the code is looking for a match. To demonstrate a swap of 2 letters:
ABA B-R ZIB (no spaces or '-' in the actual string; they are used here for better demonstration),
B and R are being swapped. This swap only affects the 6 letters starting at the 2nd letter and ending at the 7th letter. In other words, the letters that the first A and the last B can form a match with are the same before and after the swap, so there is no point checking for matches including those letters. So a sub-string of 6 letters is sent to the checking algorithm. Similarly, if a formed match disappears (gets deleted from the string), the range of affected letters is 4. So when I thought of a worst-case scenario, I imagined either 1 swap creating a whole chain reaction that keeps matching until there aren't enough letters left to form a match, or every match happening via a swap. Again, I am not saying this is how we should think when calculating time and space complexity, but this is how the code works. I hope this is clear enough; if not, let me know and I can provide more details. It's also important to note that the number and positions of the swaps are part of the input we read.
EDIT: Here is how the function is called at the top level for the first time:
checkWords(swaps[i]-2,swaps[i]+3);
Sub-string length can be 4-6 no real way to determine, depends on the
input (pretty sure it's not relevant, although not 100%).
That's not what the code shows; the line if (checkSubstr(crystals.substr(i,3))) conveys that substrings always have exactly 3 characters. If the substring length did vary, it would be relevant, since your naive substring match would degrade to O(N*M) in the general case, where N is end-start+1 (the size of the input string) and M is the size of the substring being searched. This happens because in the worst case you compare M characters for each of the N characters of the source string.
The rest of this answer assumes that substrings are of size 3, since that's what the code shows.
If substrings are always 3 characters long, it's different: you can essentially assume checkSubstr() is O(1) because you will always compare at most 3 characters. The bulk of the work happens inside the for loop, which is O(N), where N is end-1-start.
After the loop, in the worst case (when one of the ifs is entered), you erase a bunch of characters from crystals. Assuming this is a string backed by an array in memory, this is an O(cAmount) operation, because all elements after wordList[0] must be shifted. The recursive call always passes in a range of size 4; it does not grow or shrink with the size of the input, so you can also say there are O(1) recursive calls.
Thus, the time complexity is O(N + cAmount) (where N is end-1-start), and the space complexity is O(1).
I have a set of images and want to do cross-matching between all of them and display the results using trackbars, with OpenCV 2.4.6 (ROS Hydro package). The matching part is done using a vector of vectors of vectors of cv::DMatch objects:
image[0] --- image[3] -------- image[8] ------ ...
| | |
| cv::DMatch-vect cv::DMatch-vect
|
image[1] --- ...
|
image[2] --- ...
|
...
|
image[N] --- ...
Because we omit matching an image with itself (there is no point in doing that), and because a query image might not be matched with all the rest, each set of matched train images for a query image might have a different size from the others. Note that, the way it's implemented right now, I actually match a pair of images twice, which of course is not optimal (especially since I use a BruteForce matcher with cross-check turned on, which basically means that I match a pair of images 4 times!), but for now that's how it is. In order to avoid drawing matched pairs of images on the fly, I have populated a vector of vectors of cv::Mat objects. Each cv::Mat represents the current query image and some matched train image (I populate it using cv::drawMatches()):
image[0] --- cv::Mat[0,3] ---- cv::Mat[0,8] ---- ...
|
image[1] --- ...
|
image[2] --- ...
|
...
|
image[N] --- ...
Note: In the example above cv::Mat[0,3] stands for cv::Mat that stores the product of cv::drawMatches() using image[0] and image[3].
Here are the GUI settings:
Main window: here I display the current query image. Using a trackbar - let's call it TRACK_QUERY - I iterate through each image in my set.
Secondary window: here I display the matched pair (query,train), where the combination between the position of TRACK_QUERY's slider and the position of the slider of another trackbar in this window - let's call it TRACK_TRAIN - allows me to iterate through all the cv::Mat-match-images for the current query image.
The issue here comes from the fact that each query can have a variable number of matched train images. My TRACK_TRAIN should be able to adjust to the number of matched train images, that is, to the number of elements in the cv::Mat-vector for the current query image. Sadly, so far I have been unable to find a way to do that. cv::createTrackbar() requires a count parameter, which from what I can see sets the limit of the trackbar's slider and cannot be altered later on. Do correct me if I'm wrong, since this is exactly what's bothering me. A possible solution (less elegant and involving various checks to avoid out-of-range errors) is to take the size of the largest set of matched train images and use it as the limit for my TRACK_TRAIN. I would like to avoid that if possible. Another possible solution involves creating one trackbar per query image, each with the appropriate value range, and swapping them in my secondary window according to the selected query image. For now this seems the easier way to go, but it creates a big overhead of trackbars, not to mention that I haven't heard of OpenCV allowing you to hide GUI controls. Here are two examples that might clarify things a little more:
Example 1:
In the main window I select image 2 using TRACK_QUERY. For this image I have managed to match 5 other images from my set. Let's say those are images 4, 10, 17, 18 and 20. The secondary window updates automatically and shows me the match between image 2 and image 4 (the first in the subset of matched train images). TRACK_TRAIN has to go from 0 to 4. Moving the slider in both directions allows me to go through images 4, 10, 17, 18 and 20, updating the secondary window each time.
Example 2:
In the main window I select image 7 using TRACK_QUERY. For this image I have managed to match 4 other images from my set. Let's say those are images 0, 1, 11 and 19. The secondary window updates automatically and shows me the match between image 7 and image 0 (the first in the subset of matched train images). TRACK_TRAIN has to go from 0 to 3. Moving the slider in both directions allows me to go through images 0, 1, 11 and 19, updating the secondary window each time.
If you have any questions feel free to ask and I'll try to answer them as well as I can. Thanks in advance!
PS: Sadly, the ROS package ships with the bare minimum of what OpenCV can offer: no Qt integration, no OpenMP, no OpenGL etc.
After doing some more research I'm pretty sure that this is currently not possible. That's why I implemented the first proposal from my question: use the match-vector with the largest number of matches in it to determine a maximum size for the trackbar, and then use some checking to avoid out-of-range exceptions. Below is a more or less detailed description of how it all works. Since the matching procedure in my code involves some additional checks that don't concern the problem at hand, I'll skip it here. Note that, in a given set of images we want to match, I refer to an image as an object-image when that image (example: a card) is currently matched to a scene-image (example: a set of cards); this corresponds to the top level of the matches-vector (see below) and is equal to the index in processedImages (see below). I find the train/query notation in OpenCV somewhat confusing. This scene/object notation is taken from http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html. You can change or swap the notation to your liking, but make sure you change it everywhere accordingly, otherwise you might end up with some weird results.
// stores all the images that we want to cross-match
std::vector<cv::Mat> processedImages;
// stores keypoints for each image in processedImages
std::vector<std::vector<cv::KeyPoint> > keypoints;
// stores descriptors for each image in processedImages
std::vector<cv::Mat> descriptors;
// fill processedImages here (read images from files, convert to grayscale, undistort, resize etc.), extract keypoints, compute descriptors
// ...
// I use brute force matching since I also used ORB, which has binary descriptors, and NORM_HAMMING is the way to go
cv::BFMatcher matcher;
// matches contains the match-vectors for each image matched to all other images in our set
// top level index matches.at(X) is equal to the image index in processedImages
// middle level index matches.at(X).at(Y) gives the match-vector for the Xth image and some other Yth from the set that is successfully matched to X
std::vector<std::vector<std::vector<cv::DMatch> > > matches;
// contains images that store visually all matched pairs
std::vector<std::vector<cv::Mat> > matchesDraw;
// fill all the vectors above with data here, don't forget about matchesDraw
// stores the highest count of matches for all pairs - I used simple exclusion by simply comparing the size() of the current std::vector<cv::DMatch> vector with the previous value of this variable
long int sceneWithMaxMatches = 0;
// ...
// after all is ready do some additional checking here in order to make sure the data is usable in our GUI. A trackbar for example requires AT LEAST 2 for its range since a range (0;0) doesn't make any sense
if(sceneWithMaxMatches < 2)
return -1;
// in this window show the image gallery (scene-images); the user can scroll through all image using a trackbar
cv::namedWindow("Images", CV_GUI_EXPANDED | CV_WINDOW_AUTOSIZE);
// just a dummy to store the state of the trackbar
int imagesTrackbarState = 0;
// create the first trackbar that the user uses to scroll through the scene-images
// IMPORTANT: use processedImages.size() - 1 since indexing in vectors is the same as in arrays - it starts from 0 and not reducing it by 1 will throw an out-of-range exception
cv::createTrackbar("Images:", "Images", &imagesTrackbarState, processedImages.size() - 1, on_imagesTrackbarCallback, NULL);
// in this window we show the matched object-images relative to the selected image in the "Images" window
cv::namedWindow("Matches for current image", CV_WINDOW_AUTOSIZE);
// yet another dummy to store the state of the trackbar in this new window
int imageMatchesTrackbarState = 0;
// IMPORTANT: again since sceneWithMaxMatches stores the SIZE of a vector we need to reduce it by 1 in order to be able to use it for the indexing later on
cv::createTrackbar("Matches:", "Matches for current image", &imageMatchesTrackbarState, sceneWithMaxMatches - 1, on_imageMatchesTrackbarCallback, NULL);
while(true)
{
    char key = cv::waitKey(20);
    if(key == 27)
        break;

    // from here on the magic begins
    // show the image gallery; use the position of the "Images:" trackbar to call the image at that position
    cv::imshow("Images", processedImages.at(cv::getTrackbarPos("Images:", "Images")));

    // store the index of the current scene-image by calling the position of the trackbar in the "Images:" window
    int currentSceneIndex = cv::getTrackbarPos("Images:", "Images");
    // we have to make sure that the match of the currently selected scene-image actually has something in it
    if(matches.at(currentSceneIndex).size())
    {
        // store the index of the current object-image that we have matched to the current scene-image in the "Images:" window
        int currentObjectIndex = cv::getTrackbarPos("Matches:", "Matches for current image");
        cv::imshow(
            "Matches for current image",
            matchesDraw.at(currentSceneIndex).at(
                currentObjectIndex < matchesDraw.at(currentSceneIndex).size() ? // is the current object index within range for this scene-image?
                currentObjectIndex :                                            // yes, return the correct index
                matchesDraw.at(currentSceneIndex).size() - 1));                 // if outside the range, show the last matched pair!
    }
}
// do something else
// ...
The tricky part is the trackbar in the second window, which is responsible for accessing the images matched to the currently selected image in the "Images" window. As explained above, I set the trackbar "Matches:" in the "Matches for current image" window to have a range from 0 to (sceneWithMaxMatches - 1). However, not all images have the same number of matches with the rest of the image set (this applies tenfold if you have done some additional filtering to ensure reliable matches, for example by exploiting the properties of the homography, a ratio test, a min/max distance check etc.). Because I was unable to find a way to dynamically adjust the trackbar's range, I needed to validate the index; otherwise, for some of the images and their matches, the application would throw an out-of-range exception. This happens simply because for some matches we try to access a match-vector with an index greater than its size minus 1, since cv::getTrackbarPos() goes all the way up to (sceneWithMaxMatches - 1). If the trackbar's position goes out of range for the currently selected match-vector, I simply set the matchDraw-image in "Matches for current image" to the very last one in the vector. Here I exploit the fact that neither the indexing nor the trackbar's position can go below zero, so there is no need to check the lower bound, only what comes after the initial position 0. If this is not your case, make sure you check the lower bound too, not only the upper one.
Hope this helps!
How could I, given a path defined by several points that are not at a uniform distance from each other, redefine along that same path the same number of points but at a uniform distance? I'm trying to do this in Objective-C with NSArrays of CGPoints, but so far I haven't had any luck with it.
Thank you for any help.
EDIT
I was wondering if it would help to reduce the number of points first, for example when 3 points are collinear we could remove the middle one, but I'm not sure that would help.
EDIT
Illustrating:
Red are the original points, blue the post-processed points:
The new path defined by the blue dots does not correspond to the original one.
I don't think you can do what you say you want to do, but that could be a misunderstanding on my part. For example, I have understood from your comment that the path is straight between successive points, not curved.
Take, for example, a simple path of 3 points (0, 1, 2) and 2 line segments (0-1, 1-2) of different lengths. Leave points 0 and 2 where they are and introduce a new point 1' which is equidistant from points 0 and 2. If point 1' is on one of the line segments 0-1, 1-2, then one of the line segments 0-1', 1'-2 is not coincident with 0-1, 1-2. (For instance, with 0=(0,0), 1=(2,0) and 2=(2,1), the point 1'=(1.25,0) on segment 0-1 is equidistant from 0 and 2, but the new segment 1'-2 cuts the corner and leaves the original path. It is easier to draw this, which I suggest you do.) If point 1' is not on either of the original line segments, then the entire path is new, apart from its endpoints.
So, what relationship between the new path and the old path do you want ?
EDIT: more of an extended comment really, like my 'answer' but the comment box is too small.
I'm still not clear on how you want to define the new path and what relationship it should have to the old one. First you wanted to keep the same number of points, but in your edit you say that this is not necessary. You agree that replacing points with new points will shift the path. Do you want, perhaps, a new path from point 0 to point N-1, defined by N points uniformly spaced on a path that minimises the area between the old and new paths when drawn on the Cartesian plane?
Or perhaps you could first fit a polynomial (or spline or other simple curve) through the original points, then move the points back and forth along that curve until they are uniformly spaced?
I think the problem is actually simple and easily solvable :)
The basic idea is:
First check whether the distance between your current point (P) and the end point of the line segment you are on is >= the desired distance between P and the next point (Q).
If it is, great, we use some simple trigonometry to find Q.
Otherwise, we move on to the adjacent line segment (in your ordering), subtract the distance between P and the endpoint of the segment you are on from the desired distance, and continue the process.
Pseudocode:

Defined previously:

struct LineSegment
{
    Point start, end;
    int ID;
    double len;              // len = EuclideanDistance(start, end);
    LineSegment *next_segment;
    double theta;            // theta = atan2(end.y - start.y, end.x - start.x);
}

Function [LineSegment nextseg] = FindNextLineSegment(LineSegment lineseg)
Input:  LineSegment object of the current line segment
Output: LineSegment object of the adjacent line segment in your ordering;
        nextseg.ID = -1 if there are no more segments

Function: find the next point along your path

Function [Point Q, LineSegment Z] = FindNextPt(Point P, LineSegment lineseg, double dist):
Input:  the current point P, the LineSegment that contains P, and the desired
        distance between P and the next point
Output: the next point Q, and the line segment it lies on
Procedure:
    distToEndpt = EuclideanDistance(P, lineseg->end);
    if( distToEndpt >= dist )
    {
        // the next point lies on the current segment, dist away from P
        Q = Point(P.x + dist*cos(lineseg->theta), P.y + dist*sin(lineseg->theta));
        Z = lineseg;
    }
    else
    {
        // move to the adjacent segment and use up the distance already covered
        nextseg = lineseg->next_segment;        // i.e. FindNextLineSegment(lineseg)
        if( nextseg->ID != -1 )
        {
            [Q, Z] = FindNextPt(nextseg->start, nextseg, dist - distToEndpt);
        }
        else
        {
            return [P, nextseg];                // ID == -1 signals that the path is exhausted
        }
    }
    return [Q, Z]

Entry point:

Function main()
Output: vector of points
Procedure:
    vector<LineSegment> line_segments;
    // define it somehow, giving each segment all of the properties above
    // ....
    vector<Point> equidistant_points;
    const double d = DIST;
    equidistant_points.push_back(line_segments[0].start);   // keep the starting point
    [Q, Z] = FindNextPt(line_segments[0].start, line_segments[0], d);
    while( Z.ID != -1 )
    {
        equidistant_points.push_back(Q);
        [Q, Z] = FindNextPt(Q, Z, d);
    }
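For reference, here is a small self-contained C++ sketch of the same idea: walk the polyline and emit a point every dist units of arc length, assuming straight segments between the given points (all names are mine, not from the pseudocode above):

#include <cmath>
#include <vector>

struct Point { double x, y; };

// Returns points spaced `dist` apart (by arc length) along the polyline `path`.
// The first input point is always included.
std::vector<Point> resampleUniform(const std::vector<Point>& path, double dist)
{
    std::vector<Point> out;
    if (path.size() < 2 || dist <= 0) return out;

    out.push_back(path[0]);
    double need = dist;                         // arc length still to walk before the next sample
    for (size_t i = 0; i + 1 < path.size(); ++i)
    {
        double dx = path[i + 1].x - path[i].x;
        double dy = path[i + 1].y - path[i].y;
        double len = std::hypot(dx, dy);
        double walked = 0;                      // how far along this segment we already are
        while (len - walked >= need)            // the next sample lies on this segment
        {
            walked += need;
            double t = walked / len;            // fraction along the segment
            out.push_back({path[i].x + t * dx, path[i].y + t * dy});
            need = dist;
        }
        need -= (len - walked);                 // carry the shortfall into the next segment
    }
    return out;
}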
My sense is that this is a very hard problem.
It basically amounts to a constrained optimization problem. The objective function measures how close the new line is to the old one. The constraints enforce that the new points are the same distance apart.
Finding a good objective function is the tricky bit, since it must be differentiable, and we don't know ahead of time on which segments each new point will lie: for instance, it's possible for two new points to lie on an extra-long old segment, and no new points lying on some extra-short old segment. If you somehow know a priori on which segments the new points will lie, you can sum the distances between points and their target segments and use that as your objective function (note that this distance function is nontrivial, since the segments are finite: it is composed of three pieces and its level-sets are "pill-shaped.")
Or you might forget about requiring the new points to lie on old segments, and just look for a new polyline that's "close" to the old one. For instance, you might try to write down an L2-like metric between polylines, and use that as your objective function. I don't expect this metric to be pleasant to write down, or differentiate.
I think a perturbative approach will work for this one.
I assume:
we know how to slide a point along the path and recalculate the distances (pretty trivial), and
the end points must remain fixed (otherwise the whole problem becomes trivial).
Just iterate over the remaining (n-2) points: if point k is closer to point (k-1) than to point (k+1), move it a little forward along the path; likewise, if it's closer to point (k+1), move it a little back along the path.
It's probably best to start with large step sizes (for speed), then make them smaller (for precision). Even if the points pass each other, I think this approach will sort them back into order.
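A rough, self-contained C++ sketch of this perturbative idea, where each interior point is tracked by its arc-length position along the fixed original path and nudged toward whichever neighbour is currently farther away (the step schedule and all names are assumptions of mine):

#include <algorithm>
#include <cmath>
#include <vector>

struct Pt { double x, y; };

static double dist(const Pt& a, const Pt& b) { return std::hypot(a.x - b.x, a.y - b.y); }

// Point at arc length s along the polyline (s clamped to the path's extent).
static Pt pointAt(const std::vector<Pt>& path, double s)
{
    for (size_t i = 0; i + 1 < path.size(); ++i)
    {
        double len = dist(path[i], path[i + 1]);
        if (s <= len || i + 2 == path.size())
        {
            double t = len > 0 ? std::min(std::max(s / len, 0.0), 1.0) : 0.0;
            return {path[i].x + t * (path[i + 1].x - path[i].x),
                    path[i].y + t * (path[i + 1].y - path[i].y)};
        }
        s -= len;
    }
    return path.back();
}

// Perturbative relaxation: endpoints stay fixed; every interior point slides
// along the original path toward its currently farther neighbour.
std::vector<Pt> relaxAlongPath(const std::vector<Pt>& path, int iterations)
{
    if (path.size() < 3) return path;
    const size_t n = path.size();
    double total = 0;
    std::vector<double> s(n, 0);                              // arc-length parameter of each point
    for (size_t i = 1; i < n; ++i)
    {
        total += dist(path[i - 1], path[i]);
        s[i] = total;
    }

    double step = total / (2.0 * n);                          // start with a fairly large step ...
    for (int it = 0; it < iterations; ++it, step *= 0.9)      // ... and shrink it each sweep
    {
        for (size_t k = 1; k + 1 < n; ++k)
        {
            Pt cur = pointAt(path, s[k]);
            double dPrev = dist(cur, pointAt(path, s[k - 1]));
            double dNext = dist(cur, pointAt(path, s[k + 1]));
            if (dPrev < dNext)      s[k] = std::min(s[k] + step, total);  // too close to the previous point: slide forward
            else if (dNext < dPrev) s[k] = std::max(s[k] - step, 0.0);    // too close to the next point: slide back
        }
    }

    std::vector<Pt> out;
    for (size_t k = 0; k < n; ++k) out.push_back(pointAt(path, s[k]));
    return out;
}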
This will use quite a bit of vector math but is really quite simple.
First you will need to find the total length of the path. How you do it depends on how the points of the path are stored. Here is a basic example for a 2-dimensional path, in pseudo-code.
// This would generally be done with vectors; however, I'm not sure
// if you would like to make your own class for them as I do, so I will use arrays.

// The collection of points
double Points[4][2] = { {0,0}, {1,2}, {5,4}, {6,5} };
// Will hold the recomputed, uniformly spaced points (starts out as a copy of Points)
double Points2[4][2] = { {0,0}, {1,2}, {5,4}, {6,5} };

// total length of the path; goes to 3 because there are 4 points (3 segments)
double d = 0;
for(int i=0; i<3; i++) {
    x = Points[i+1][0] - Points[i][0];
    y = Points[i+1][1] - Points[i][1];
    d += sqrt(( x * x ) + ( y * y ));
}

// divide the total length by the number of segments (points - 1)
// to get the uniform spacing between consecutive points
dist = d/3;

// now that you have the new spacing you must find the points
// on your path that are that far from your current point
// same deal here... goes to 3 because there are 4 points
for(int i=0; i<3; i++) {
    // slope of the current segment
    m = ( Points[i+1][1] - Points[i][1] ) / ( Points[i+1][0] - Points[i][0] );
    // y intercept
    b = -(m * Points[i][0]) + Points[i][1];

    // processor heavy, which is what makes this problem difficult
    // if someone knows a better way please say something
    // sweep a circle of radius dist around the previous new point,
    // grabbing the candidate that lands on the current line;
    // if none does, check the next segment.
    for(float j=0; j<360; j += 0.1) {
        x = Points2[i][0] + dist * cos(j * PI / 180);
        y = Points2[i][1] + dist * sin(j * PI / 180);
        if ( fabs(y - (m * x + b)) < EPSILON ) {
            // then the point is (approximately) on the line, so set it
            Points2[i+1][0] = x;
            Points2[i+1][1] = y;
        }
    }
}
The last step is what makes it computationally unreasonable, but this should work for you.
There may be a small math error somewhere. I double-checked this several times, but there could be something I missed, so if anyone notices something, please tell me and I will edit it.
Hope this helps,
Gale