Given a set of points P in the plane and a threshold t, I'd like to compute a connected graph G that minimizes the sum of the lengths of its edges, subject to the following constraints:
The vertices of G contain all the points in P.
For every pair of points u and v in P, their distance in G is no greater than t times their Euclidean distance.
When t=1, this problem is solved by constructing a complete graph on P. When t is infinite (or simply large enough), this problem is the Euclidean Steiner Tree Problem.
If there is already a name for this problem, I'm curious what it is. More than that, does anyone have any suggestions for an algorithm? Since it contains the Euclidean Steiner Tree Problem as a special case, it can't be any easier, so I'm not looking for anything particularly time-efficient. Thanks!
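For anyone experimenting with heuristics for this, it helps to be able to verify the stretch constraint for a candidate graph. Here is a minimal sketch (plain Python; points, edges, t, and num_terminals are placeholder names I'm introducing, with any Steiner points appended to the point list after the terminals):

import math

def satisfies_stretch(points, edges, t, num_terminals):
    """Check that graph distance <= t * Euclidean distance for every pair
    among the first num_terminals points (the set P). Candidate Steiner
    points may follow the terminals in `points`."""
    n = len(points)
    INF = float("inf")
    d = [[INF] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0.0
    for i, j in edges:
        w = math.dist(points[i], points[j])  # edge length = Euclidean length
        d[i][j] = min(d[i][j], w)
        d[j][i] = min(d[j][i], w)
    # Floyd-Warshall all-pairs shortest paths
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return all(
        d[i][j] <= t * math.dist(points[i], points[j])
        for i in range(num_terminals)
        for j in range(i + 1, num_terminals)
    )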
I have a VRP in which I would like to include fuel consumption as a soft constraint, where consumption differs between vehicles based on type. I would want the engine to select the vehicles with the least fuel consumption.
I thought about adding a multiplier to the vehicle type so that it is multiplied with the distance as a soft constraint. Is that possible, and would it affect the result negatively?
Thanks,
Yes, that's possible.
Your distances can be in km. Then your score rule just multiplies each distance (in km) driven by a vehicle by that vehicle's vehicle.getCostPerKm().
You can also keep track of the driving time in seconds for each distance and build one big weighted function:
addSoft(..., - ($distanceInKm * $vehicle.getCostPerKm() + $distanceInSeconds * $vehicle.getDriverWagePerSecond()));
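In plain arithmetic, the rule above just penalizes a per-vehicle weighted cost. A minimal sketch of that calculation outside OptaPlanner (plain Python; the field names and numbers are illustrative, not real API):

# Hypothetical per-vehicle totals; names mirror getCostPerKm()/getDriverWagePerSecond().
vehicles = [
    {"distance_km": 120.0, "seconds": 9000, "cost_per_km": 0.45, "wage_per_second": 0.004},
    {"distance_km": 80.0, "seconds": 6500, "cost_per_km": 0.30, "wage_per_second": 0.004},
]

# The soft score is the negated weighted sum, so cheaper vehicles are preferred.
soft_score = -sum(
    v["distance_km"] * v["cost_per_km"] + v["seconds"] * v["wage_per_second"]
    for v in vehicles
)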
Consider a problem whose solution maximizes an objective function.
Problem: from 500 elements, 15 need to be selected (a candidate solution). The value of the objective function depends on the pairwise relationships between the elements in a candidate solution, and on some other factors besides.
The steps for solving such a problem are described below:
1. Generate a set of candidate solutions (the population) in a guided random manner // not purely random; a direction is given for generating the population
2. Evaluate the objective function for the current population
3. If the current_best_solution exceeds the global_best_solution, replace the global_best with the current_best
4. Repeat steps 1, 2, 3 N times (N arbitrary)
where the population size and N are both small (approx. 50).
After N iterations, the candidate solution stored in global_best_solution is returned.
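To pin the description down, here is the loop in sketch form (plain Python; guided_random_candidate and objective are stand-ins for the problem-specific parts):

import random

def guided_random_candidate(elements, k=15):
    # Stand-in for the guided generation; here it is purely random.
    return random.sample(elements, k)

def search(elements, objective, population_size=50, n_iterations=50):
    global_best, global_best_value = None, float("-inf")
    for _ in range(n_iterations):
        # Steps 1-2: generate and evaluate a population.
        population = [guided_random_candidate(elements) for _ in range(population_size)]
        current_best = max(population, key=objective)
        # Step 3: keep the best solution seen so far.
        if objective(current_best) > global_best_value:
            global_best, global_best_value = current_best, objective(current_best)
    return global_best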
Is this the description of a well-known algorithm?
If it is, what is the name of that algorithm? If not, under which category do these types of algorithms fall?
What you have sounds like you are just fishing. Note that you might as well get rid of steps 3 and 4, since running the loop N times is the same as doing it once with an initial population N times as large.
If you think of the objective function as a random variable which is a function of random decision variables, then what you are doing would, say, give you something in the 99.9th percentile with very high probability -- but there is no limit to how far the optimum might be from the 99.9th percentile.
To illustrate the difficulty, consider the following sort of Travelling Salesman Problem. Imagine two clusters of points, A and B, each of which has 100 points. Within a cluster, each point is arbitrarily close to every other point (e.g. 0.0000001). Between the clusters, however, the distance is say 1,000,000. The optimal tour would clearly have length 2,000,000 (plus a negligible amount). A random tour is just a random permutation of those 200 decision points. Getting an optimal or near-optimal tour would be akin to shuffling a deck of 200 cards, 100 red and 100 black, and having all of the red cards in the deck form a block (counting blocks that "wrap around") -- vanishingly unlikely (it can be calculated as 99 * 100! * 100! / 200! = 1.09 x 10^-57). Even if you generate quadrillions of tours, it is overwhelmingly likely that each of those tours would be off by millions. This is a min problem, but it is also easy to come up with max problems where it is vanishingly unlikely that you will get a near-optimal solution by purely random settings of the decision variables.
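For anyone who wants to check that figure, it reproduces directly with exact integer arithmetic (plain Python):

from math import factorial

# 99 * 100! * 100! / 200!, the probability quoted above
p = 99 * factorial(100) ** 2 / factorial(200)
print(p)  # ~1.09e-57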
This is an extreme example, but it is enough to show that purely random fishing for a solution isn't very reliable. It would make more sense to use evolutionary algorithms or other heuristics such as simulated annealing or tabu search.
Why do you work with a population if the members of that population do not interact?
What you have there is random search.
If you add mutation, it looks like an Evolution Strategy: https://en.wikipedia.org/wiki/Evolution_strategy
Hi all. I've got hundreds of signals of this form, on which I have detected peaks above some threshold.
I define peak width as FWHM (full width at half maximum). However, I've fitted a cubic polynomial to the valleys of the signal, so I've defined a peak's height as the distance from this baseline (at the same index as the peak) to the peak.
I'm calculating peak width as the greatest distance between the intersections of the signal and a horizontal line at half the maximum. It looks like this:
import itertools as it

roots = indices_of_intersections  # x positions where the half-max line crosses the signal
intersection_lengths = [abs(y - x) for x, y in it.combinations(roots, 2)]
calculated_width = max(intersection_lengths)
I'm having problems calculating peak width consistently, because sometimes the line intersects points on different peaks.
I've restricted the domain on which this intersecting line is defined:
Domain = [a little to the left of the peak, a little to the right of it]
but this domain restriction is the same for all peaks.
I've thought about somehow having this domain change for different peaks but not sure how to implement that. My code is almost fully automated, and I have to keep it that way.
Posting my question on here helped me realize an easy solution:
# Intersections strictly to the right and to the left of the current peak index.
more_than_peak = [x for x in roots if x > peaks[i]]
less_than_peak = [x for x in roots if x < peaks[i]]
if more_than_peak and less_than_peak:
    width = min(more_than_peak) - max(less_than_peak)
Here, I'm finding the intersections to the left and right of the peak index. Then, I'm finding the smallest one to the right of the peak and the biggest one to the left (x axis increases to the right). So simple and fast!
I'm trying to create a solar system simulation, and I'm having problems trying to figure out initial velocity vectors for random objects I've placed into the simulation.
Assume:
- I'm using Gaussian grav constant, so all my units are AU/Solar Masses/Day
- Using x,y,z for coordinates
- One star, fixed at (0,0,0); a quasi-random mass is determined for it
- I place a planet at a random (x,y,z) coordinate, with its own quasi-random mass determined.
Before I start the nbody loop (using RK4), I would like the initial velocity of the planet to be such that it has a circular orbit around the star. Other placed planets will, of course, pull on it once the simulation starts, but I want to give it the chance to have a stable orbit...
So, in the end, I need to have an initial velocity vector (x,y,z) for the planet that means it would have a circular orbit around the star after 1 timestep.
Help? I've been beating my head against this for weeks and I don't believe I have any reasonable solution yet...
It is quite simple if you assume that the mass of the star M is much bigger than the total mass of all planets sum(m[i]). This simplifies the problem as it allows you to pin the star to the centre of the coordinate system. Also it is much easier to assume that the motion of all planets is coplanar, which further reduces the dimensionality of the problem to 2D.
First determine the magnitude of the circular orbit velocity given the magnitude of the radius vector r[i] (the radius of the orbit). It only depends on the mass of the star, because of the above mentioned assumption: v[i] = sqrt(mu / r[i]), where mu is the standard gravitational parameter of the star, mu = G * M.
Pick a random orbital phase parameter phi[i] by sampling uniformly from [0, 2*pi). Then the initial position of the planet in Cartesian coordinates is:
x[i] = r[i] * cos(phi[i])
y[i] = r[i] * sin(phi[i])
With circular orbits the velocity vector is always perpendicular to the radius vector, i.e. its direction is phi[i] +/- pi/2 (+pi/2 for counter-clockwise (CCW) rotation and -pi/2 for clockwise rotation). Taking CCW rotation as an example, the Cartesian coordinates of the planet's velocity are:
vx[i] = v[i] * cos(phi[i] + pi/2) = -v[i] * sin(phi[i])
vy[i] = v[i] * sin(phi[i] + pi/2) = v[i] * cos(phi[i])
This easily extends to coplanar 3D motion by adding z[i] = 0 and vz[i] = 0, but that makes little sense: since there are no forces in the Z direction, z[i] and vz[i] would forever stay equal to 0 (i.e. you would be solving a 2D subspace of the full 3D problem).
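Putting the 2D recipe together, a minimal sketch in plain Python (mu = G * M in whatever unit system you use):

import math
import random

def circular_orbit_2d(r, mu):
    # mu is the standard gravitational parameter G * M of the central star.
    v = math.sqrt(mu / r)                     # circular-orbit speed
    phi = random.uniform(0.0, 2.0 * math.pi)  # random orbital phase
    x, y = r * math.cos(phi), r * math.sin(phi)
    vx, vy = -v * math.sin(phi), v * math.cos(phi)  # perpendicular to the radius, CCW
    return (x, y), (vx, vy)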
With a full 3D simulation where each planet moves in a randomly inclined initial orbit, one can proceed as follows:
1. This step is the same as step 1 from the 2D case: compute the speed v[i] = sqrt(mu / r[i]).
2. Pick an initial position on the surface of the unit sphere. See here for examples of how to do that in a uniformly random fashion. Then scale the unit-sphere coordinates by the magnitude of r[i].
3. In the 3D case, instead of two possible perpendicular vectors, there is a whole tangential plane in which the planet's velocity lies. The tangential plane has its normal vector collinear with the radius vector, so dot(r[i], v[i]) = x[i]*vx[i] + y[i]*vy[i] + z[i]*vz[i] = 0. One could pick any vector that is perpendicular to r[i], for example e1[i] = (-y[i], x[i], 0). This results in a null vector at the poles, so there one could pick e1[i] = (0, -z[i], y[i]) instead. Another perpendicular vector can then be found by taking the cross product of r[i] and e1[i]:

e2[i] = r[i] x e1[i] = (y[i]*e1[i].z - z[i]*e1[i].y, z[i]*e1[i].x - x[i]*e1[i].z, x[i]*e1[i].y - y[i]*e1[i].x)

Now e1[i] and e2[i] can be normalised by dividing them by their norms:

n1[i] = e1[i] / ||e1[i]||
n2[i] = e2[i] / ||e2[i]||

where ||a|| = sqrt(dot(a, a)) = sqrt(a.x^2 + a.y^2 + a.z^2). Now that you have an orthonormal basis in the tangential plane, you can pick one random angle omega in [0, 2*pi) and compute the velocity vector as v[i] * (cos(omega) * n1[i] + sin(omega) * n2[i]), where v[i] is the speed from step 1. In Cartesian components:

vx[i] = v[i] * (cos(omega) * n1[i].x + sin(omega) * n2[i].x)
vy[i] = v[i] * (cos(omega) * n1[i].y + sin(omega) * n2[i].y)
vz[i] = v[i] * (cos(omega) * n1[i].z + sin(omega) * n2[i].z)
Note that by construction the basis in step 3 depends on the radius vector, but this does not matter since a random direction (omega) is added.
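The whole 3D construction also fits in a few lines (again a sketch in plain Python, using the Gaussian trick for uniform sampling on the sphere):

import math
import random

def circular_orbit_3d(r, mu):
    v = math.sqrt(mu / r)  # step 1: circular-orbit speed
    # Step 2: uniform random point on the unit sphere, scaled by r.
    gx, gy, gz = random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1)
    norm = math.sqrt(gx * gx + gy * gy + gz * gz)
    x, y, z = r * gx / norm, r * gy / norm, r * gz / norm
    # Step 3: orthonormal basis of the tangential plane (switch formula near the poles).
    e1 = (-y, x, 0.0) if (abs(x) > 1e-12 or abs(y) > 1e-12) else (0.0, -z, y)
    e2 = (y * e1[2] - z * e1[1], z * e1[0] - x * e1[2], x * e1[1] - y * e1[0])
    n1 = tuple(c / math.sqrt(sum(a * a for a in e1)) for c in e1)
    n2 = tuple(c / math.sqrt(sum(a * a for a in e2)) for c in e2)
    # Random direction within the tangential plane, scaled to the orbital speed.
    omega = random.uniform(0.0, 2.0 * math.pi)
    vel = tuple(v * (math.cos(omega) * a + math.sin(omega) * b) for a, b in zip(n1, n2))
    return (x, y, z), vel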
As to the choice of units: in simulation science we always tend to keep things in natural units, i.e. units where all computed quantities are dimensionless and kept in [0, 1], or at least within 1-2 orders of magnitude, so that the full resolution of the limited floating-point representation can be used. If you take the star mass in units of Solar mass, distances in AU and time in years, then mu = G * M_sun = 4*pi^2 AU^3/yr^2, so for an Earth-like planet at 1 AU around a Sun-like star the magnitude of the orbital velocity would be sqrt(4*pi^2 / 1) = 2*pi (AU/yr) and the magnitude of the radius vector would be 1 (AU).
Just let centripetal acceleration equal gravitational acceleration.
m1 * v^2 / r = G * m1 * m2 / r^2
v = sqrt(G * m2 / r)
Of course the star mass m2 must be much greater than the planet mass m1 or you don't really have a one-body problem.
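As a sanity check of that formula with mks values for the Sun and a 1 AU orbit (a minimal sketch; constants rounded):

import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # kg
r = 1.496e11       # m, roughly 1 AU

v = math.sqrt(G * M_sun / r)
print(v / 1000.0)  # ~29.8, close to Earth's actual mean orbital speed in km/s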
Units are a pain in the butt when setting up physics problems. I've spent days resolving errors in seconds vs timestep units. Your choice of AU/Solar Masses/Day is utterly insane. Fix that before anything else.
And keep in mind that computers have inherently limited precision. An n-body simulation accumulates integration error, so after a million or a billion steps you will certainly not have a circle, regardless of the step duration. I don't know much about that math, but I think stable n-body systems keep themselves stable through resonances which absorb minor variations, whether introduced by nearby stars or by the FPU. So the setup might work fine for a stable 5-body problem but still fail for a 1-body problem.
As Ed suggested, I would use mks units rather than some other set of units.
For the initial velocity, I would agree with part of what Ed said, but I would use the vector form of the centripetal acceleration:
(m1 * v^2 / r) * r_hat = (G * m1 * m2 / r^2) * r_hat
Set z to 0 and convert from polar to Cartesian coordinates (x, y). Then you can assign either y or x an initial velocity and compute the other variable so as to satisfy the circular-orbit criterion. This should give you an initial (Vx, Vy) from which you can start your n-body problem. There should also be quite a bit of literature out there on numerical recipes for n-body central-force problems.