Time evolution with MATLAB bvp4c - differential-equations

Is there any way to simulate the time evolution of a BVP with bvp4c? I have a problem with a moving boundary, so I created a loop over the boundary's position in time. At each iteration, the bvp4c solver computes the solution using the solution from the previous iteration as the initial guess, given the current position of the boundary in question. This is my trick to include time in the system of equations. But since I am also including the time evolution of temperature, I doubt this is the right approach. I cannot use PDE solvers for this problem. Thank you very much.
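For what it's worth, the warm-start loop described above can be sketched as follows. Since the actual equations aren't shown, this uses a toy ODE and SciPy's solve_bvp (the Python analogue of bvp4c); the boundary positions and ODE are hypothetical placeholders.

    import numpy as np
    from scipy.integrate import solve_bvp

    def rhs(x, y):
        # y[0] = temperature, y[1] = its spatial derivative; y'' = -1 as a toy ODE
        return np.vstack([y[1], -np.ones_like(x)])

    def bc(ya, yb):
        # Fixed value at both ends of the current domain
        return np.array([ya[0], yb[0]])

    boundary_positions = np.linspace(1.0, 2.0, 20)  # s(t) at successive time steps
    prev = None
    for s in boundary_positions:
        x = np.linspace(0.0, s, 50)
        if prev is None:
            y_guess = np.zeros((2, x.size))
        else:
            # Warm start: evaluate the previous solution on the new mesh,
            # clipping to the old domain where the domain has grown
            y_guess = prev.sol(np.clip(x, 0.0, prev.x[-1]))
        prev = solve_bvp(rhs, bc, x, y_guess)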

Related

How to solve the scheduling problem with random durations? (undo moves will result in different scores)

"A shadow variable is in essence the result of a formula/algo based on at least 1 planning variable (and maybe some problem properties). The same planning variables state should always deliver the exact same shadow variable state." However, for project scheduling problems with random durations, even the start time of a job remains the same as before move or undomove, the end time of the job will be different, because the duration is a random variable. Both the start time and the end time of a job are shadow variables. Then the score after undomove and beforemove will be different. How to deal with this situation?
When you say:
Then the score after the undo move will differ from the score before the move.
That is the root of your problem. Assume a solution X and a move M. Move M transforms solution X into solution Y. The undo move of M must then transform solution Y back into solution X. (X and Y do not need to be, and are not going to be, the same instance; they just need to represent the exact same state of the problem.)
Where you fall short is in modelling the random duration of tasks. Every time a task's duration changes, the problem itself changes: you are no longer solving the same problem, and you need to tell the solver that.
There are two ways of doing that:
Externally via ProblemChange. This will effectively restart the solver.
During a custom move, using the ScoreDirector's before...() and after...() methods. But if you do that, then your undo moves must restore the solution back to its original state: they must reset the duration to what it was before (see the sketch below).
There really is no way around this. Undo moves restore the solution to the exact same state as there was before the original move.
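To illustrate the second option, here is a framework-agnostic sketch of the record-and-restore contract (Python pseudocode, not OptaPlanner's Java API; all names are hypothetical). In a real OptaPlanner move you would additionally wrap the mutation in the ScoreDirector's before/after notification calls.

    class ChangeDurationMove:
        def __init__(self, task, new_duration):
            self.task = task
            self.new_duration = new_duration
            self.old_duration = None

        def do_move(self):
            # Record the pre-move state before mutating it
            self.old_duration = self.task.duration
            self.task.duration = self.new_duration

        def undo_move(self):
            # Restore exactly the recorded state, so the score after the
            # undo equals the score before the move
            self.task.duration = self.old_duration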
That said, I honestly do not understand how you implement randomness in your planning entities. If you share your code, maybe I will be able to give a more targeted answer.

Oxyplot: IsValidPoint on realtime LineSeries

I've been using OxyPlot for a month now and I'm pretty happy with what it delivers. I'm getting data from an oscilloscope and, after some fast processing, I'm plotting it to a graph in real time.
However, if I compare my application's CPU usage to that of the application provided by the oscilloscope manufacturer, mine loads the CPU a lot more. Maybe they're using some GPU-based plotter, but I think I can reduce my CPU usage with some modifications.
I'm capturing 10,000 samples per second and adding them to a LineSeries. I'm not plotting all that data; I'm decimating it to a constant number of points, say 80 points for a 20-second measurement, so I have 4 points/sec while fully zoomed out and a bit more detail if I zoom in to a specific range.
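The decimation step described here can be sketched generically as follows (in Python for illustration; the real code is C#). It assumes `samples` is a NumPy array and keeps evenly spaced points; the target count is a placeholder.

    import numpy as np

    def decimate(samples, target_points=80):
        # Reduce an arbitrarily long sample buffer to a fixed number of
        # points by keeping evenly spaced samples
        if len(samples) <= target_points:
            return samples
        idx = np.linspace(0, len(samples) - 1, target_points).astype(int)
        return samples[idx]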
With the aid of ReSharper, I've noticed that the application (I have 6 different plots) is calling the IsValidPoint method a huge number of times (something like 400,000,000), which is taking a lot of time.
I think the problem is that, when I add new points to the series, it checks every point in the series for validity, instead of only the newly added values.
It also spends a lot of time in the MeasureText/DrawText methods.
My question is: is there a way to override those methods and adapt them to my needs? I'm adding 10,000 new values each second, but the earlier ones remain the same, so there's no need to re-validate them. Likewise, the text shown doesn't change.
Thank you in advance for any advice you can give me. Have a good day!

Stages in genetic programming

When evolving a genetic program, how is the required time distributed between the different stages of development? I mean: is 90 percent of the time devoted to becoming a little better than random programs, after which improving the program to its final version is not very computation-intensive?
Most metaheuristics (including genetic algorithms, I think) show progress like the green and red lines in the image (a steep early climb that flattens out): they try to reach the best score as fast as possible, and it gets harder and harder to find a better score.
However, some (like simulated annealing, the blue line) can be told how much time they'll be given and behave differently based on that. In that case you can get a more linear progression.
Generally progress is quicker earlier and slows in later generations. But it does depend on the nature of the problem. Why not test it on a few different problems and plot the progress?
An approximate indication of this can be the size of the program. If the program size becomes stable but you notice that fitness is still improving, then most likely all the random programs have been weeded out, and the fitness improvement can be attributed to minor numerical changes in, say, the coefficients.
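A minimal sketch of the experiment suggested above: log the best fitness and the mean program size per generation, then plot both to see where most of the improvement happens. `init_population`, `run_one_generation`, and the `fitness`/`size` attributes are placeholders for whatever GP framework you use.

    import matplotlib.pyplot as plt

    population = init_population()          # hypothetical GP setup
    best_fitness, mean_size = [], []
    for gen in range(200):
        population = run_one_generation(population)   # hypothetical GP step
        best_fitness.append(max(ind.fitness for ind in population))
        mean_size.append(sum(ind.size for ind in population) / len(population))

    fig, (ax1, ax2) = plt.subplots(2, sharex=True)
    ax1.plot(best_fitness)
    ax1.set_ylabel("best fitness")
    ax2.plot(mean_size)
    ax2.set_ylabel("mean program size")
    ax2.set_xlabel("generation")
    plt.show()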

How to run gradient descent algorithm when parameter space is constrained?

I would like to maximize a function with one parameter.
So I run gradient descent (or ascent, actually): I start with an initial parameter and keep adding the gradient (times a learning-rate factor that gets smaller and smaller), re-evaluating the gradient at the new parameter, and so on until convergence.
But there is one problem: my parameter must stay positive; it is not supposed to become <= 0 because my function would be undefined. My gradient search sometimes goes into such regions anyway (when the parameter was positive, the gradient told it to go a bit lower, and it overshoots).
And to make things worse, the gradient at such a point might be negative, driving the search toward even more negative parameter values. (The reason is that the objective function contains logs, but the gradient doesn't.)
What are some good (simple) algorithms that deal with this constrained optimization problem? I'm hoping for just a simple fix to my algorithm. Or maybe ignore the gradient and do some kind of line search for the optimal parameter?
Each time you update your parameter, check to see if it's negative, and if it is, clamp it to zero.
If clamping to zero is not acceptable, try adding a "log barrier" (Google it). Basically, it adds a smooth "soft" wall to your objective function (modifying your gradient accordingly) to keep the search away from regions you don't want it to enter. You then repeatedly re-run the optimization, hardening the wall to make it ever steeper while starting from the previously found solution. In the limit (and in practice only a few iterations are needed), the problem you are solving is identical to the original problem with a hard constraint.
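A hedged sketch of that idea: add mu*log(x) to the objective (for maximization) so the gradient gains a mu/x term repelling the iterate from x = 0, then shrink mu and warm-start each round. `f_grad` is a placeholder for your objective's gradient; the step size and schedule are made up.

    # f_grad is a placeholder for the gradient of your objective f.
    def barrier_ascent(x, mu, lr=1e-3, steps=2000):
        # Gradient ascent on f(x) + mu * log(x); the barrier gradient mu / x
        # pushes the iterate away from x = 0
        for _ in range(steps):
            x += lr * (f_grad(x) + mu / x)
            x = max(x, 1e-12)  # guard against a step overshooting the barrier
        return x

    x = 1.0                             # strictly positive starting point
    for mu in [1.0, 0.1, 0.01, 0.001]:  # harden the wall, warm-starting each time
        x = barrier_ascent(x, mu)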
Without knowing more about your problem, it's hard to give specific advice. Your gradient ascent algorithm may not be particularly suitable for your function space. However, given that's what you've got, here's one tweak that would help.
You're following what you believe is an ascending gradient. But when you move forward in the direction of the gradient, you discover you have fallen into a pit of negative values. This implies that there was a nearby local maximum, but also a very sharp negative-gradient cliff. The obvious fix is to backtrack to your previous position and take a smaller step (e.g. half the size). If you still fall in, repeat with a still smaller step. This iterates until you find the local maximum at the edge of the cliff.
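A minimal sketch of that backtracking step: if a step would leave the domain (x <= 0) or lower the objective, halve it and retry from the previous point. `f` and `f_grad` are placeholders for the objective and its gradient.

    # f and f_grad are placeholders for your objective and its gradient.
    def ascend_with_backtracking(x, lr=0.1, steps=1000):
        for _ in range(steps):
            step = lr * f_grad(x)
            # Backtrack: halve the step while it would leave the domain or
            # lower the objective ("fall into the pit")
            while x + step <= 0 or f(x + step) < f(x):
                step *= 0.5
                if abs(step) < 1e-12:   # stuck at the cliff edge: done
                    return x
            x += step
        return x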
The problem is, there is no guarantee that your local maximum is actually global (unless you know more about your function than you are sharing). This is the main limitation of naive gradient ascent: it always fixates on the first local maximum it finds and converges to it. If you don't want to switch to a more robust algorithm, one simple approach that could help is to run n iterations of your code, starting each time from a random position in the function space, and keeping the best maximum you find overall. This Monte Carlo approach increases the odds that you will stumble on the global maximum, at the cost of increasing your run time by a factor of n. How effective it is will depend on the 'bumpiness' of your objective function.
A simple trick to restrict a parameter to be positive is to re-parametrize the problem in terms of its logarithm (making sure to transform the gradient appropriately). Of course, it is possible that the optimum moves off to -infinity under this transformation, in which case the search will not converge.
At each step, constrain the parameter to be positive. This is (in short) the projected gradient method, which you may want to google.
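In its simplest form, that is just "step, then project back onto the feasible set." A tiny sketch, with `f_grad` as a placeholder gradient and a small epsilon so the log-containing objective stays defined:

    x, lr, eps = 1.0, 0.01, 1e-9
    for _ in range(1000):
        # Ascent step, then project onto [eps, infinity)
        x = max(x + lr * f_grad(x), eps)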
I have three suggestions, in order of how much thinking and work you want to do.
First, in gradient descent/ascent, you move each time by the gradient times some factor, which you refer to as a "learning rate factor." If, as you describe, this move causes x to become negative, there are two natural interpretations: Either the gradient was too big, or the learning rate factor was too big. Since you can't control the gradient, take the second interpretation. Check whether your move will cause x to become negative, and if so, cut the learning rate factor in half and try again.
Second, to elaborate on Aniko's answer: let x be your parameter and f(x) your function. Then define a new function g(x) = f(e^x), and note that although the domain of f is (0, infinity), the domain of g is (-infinity, infinity), so g cannot suffer the problems that f suffers. Use gradient ascent to find the value x_0 that maximizes g. Then e^(x_0), which is positive, maximizes f. To apply gradient ascent to g, you need its derivative, which is f'(e^x)*e^x by the chain rule.
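A sketch of that reparametrization in code, with `f_grad` as a placeholder for f' and a made-up step count:

    import math

    # Maximize g(u) = f(e^u) over all real u; g'(u) = f'(e^u) * e^u.
    u, lr = 0.0, 0.01           # u = 0 corresponds to x = e^0 = 1
    for _ in range(1000):
        x = math.exp(u)
        u += lr * f_grad(x) * x
    x_opt = math.exp(u)         # guaranteed positive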
Third, it sounds like you're trying maximize just one function, not write a general maximization routine. You could consider shelving gradient descent, and tailoring the
method of optimization to the idiosyncrasies of your specific function. We would have to know a lot more about the expected behavior of f to help you do that.
Just use Brent's method for minimization. It is stable and fast and the right thing to do if you have only one parameter. It's what the R function optimize uses. The link also contains a simple C++ implementation. And yes, you can give it MIN and MAX parameter value limits.
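In SciPy, the analogous one-liner is minimize_scalar with the bounded (Brent-style) method; negate f to maximize, and the bounds here are hypothetical MIN/MAX limits keeping the parameter positive.

    from scipy.optimize import minimize_scalar

    # f is a placeholder for your objective; SciPy minimizes, so negate it.
    res = minimize_scalar(lambda x: -f(x), bounds=(1e-9, 100.0), method="bounded")
    x_opt = res.x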
You're getting good answers here. Reparameterizing is what I would recommend. Also, have you considered another search method, like Metropolis-Hastings? It's actually quite simple once you bull through the scary-looking math, and it gives you standard errors as well as an optimum.
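For the curious, a bare-bones Metropolis-Hastings sketch on the positive half-line, treating exp(f(x)) as an unnormalized target density (one common way to use MH as a stochastic search; `f` is a placeholder log-objective, and the proposal scale is made up). The multiplicative log-normal proposal keeps samples positive; its asymmetry contributes the log(prop/x) Hastings correction.

    import math, random

    x, samples = 1.0, []
    for _ in range(10000):
        prop = x * math.exp(random.gauss(0.0, 0.2))   # multiplicative proposal
        # Accept with the Metropolis-Hastings ratio in log space
        if math.log(random.random()) < f(prop) - f(x) + math.log(prop / x):
            x = prop
        samples.append(x)
    # The samples' mode estimates the optimum; their spread gives a
    # standard-error-like uncertainty estimate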

Modeling human running on a soccer field

In a soccer game, I am computing a steering force using steering behaviors. That part is OK.
However, I am looking for the best way to implement simple 2d human locomotion.
For instance, a player should not "steer" (i.e., simply add the acceleration computed from the steering force to its current velocity) when the cos(angle) between the steering force and the current velocity or heading vector is lower than 0.5, because it looks as if the player is a vehicle. When there is a significant change of direction, a human slows down, and only once it has slowed enough does it start accelerating in the new direction.
Does anyone have any advice, ideas on how to achieve this behavior? Thanks in advance.
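For concreteness, here is a minimal sketch of the rule the question describes, called once per physics tick: steer normally only when the steering force is within about 60 degrees of the current velocity (cosine >= 0.5); otherwise brake first. The vector representation, braking rate, and thresholds are placeholders.

    import numpy as np

    def apply_steering(velocity, steering_force, dt, brake_rate=8.0, slow_speed=1.0):
        speed = np.linalg.norm(velocity)
        if speed > slow_speed:
            cos_angle = velocity @ steering_force / (
                speed * np.linalg.norm(steering_force) + 1e-9)
            if cos_angle < 0.5:
                # Sharp change of direction: decelerate along current heading
                return velocity * max(0.0, 1.0 - brake_rate * dt / speed)
        # Aligned enough (or already slow): steer as usual
        return velocity + steering_force * dt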
Make it change direction very quickly but without perfect friction, e.g. Super Mario.
Edit: but feet should not slide - use procedural animation for feet
This has already been researched and developed in an initiative called RoboCup. They have a 2D simulation league that should be really similar to what you are trying to accomplish.
Here's a link that should point you in the right direction:
http://wiki.robocup.org/wiki/Main_Page
Maybe you could compute the curvature. If the curvature value is too big, slow the player down.
http://en.wikipedia.org/wiki/Curvature
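For a 2D trajectory, the curvature can be computed from velocity and acceleration as kappa = |vx*ay - vy*ax| / (vx^2 + vy^2)^(3/2). A small sketch; the threshold and slow-down factor are made-up placeholders.

    import math

    def curvature_2d(vx, vy, ax, ay):
        # kappa = |v x a| / |v|^3 for a 2D trajectory
        speed_sq = vx * vx + vy * vy
        if speed_sq == 0.0:
            return 0.0
        return abs(vx * ay - vy * ax) / speed_sq ** 1.5

    vx, vy, ax, ay = 5.0, 0.0, 0.0, 3.0      # example velocity and acceleration
    if curvature_2d(vx, vy, ax, ay) > 0.5:   # too sharp for this speed
        vx, vy = vx * 0.9, vy * 0.9          # slow down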
At low speed a human can turn on a dime. At high speed only very slight turns require no slowing. The speed and radius of the turn are thus strongly correlated.
How much a human slows down when aiming toward a target is actually a judgment call, not an automatic computation. One human might come to almost a complete stop, turn sharply, and run directly toward the target. Another might slow only a little and make a wide curving arc, even if this increases the total path length to the target. The only caveat is that if the desired target is inside the turning radius at the current speed, the only reasonable choice is to slow down, since otherwise the player would have to loop far away from the target in order to reach it (rather than circling it endlessly).
Here's how I would go about doing it. I apologize for the Imperial units if you prefer metric.
The fastest human ever recorded traveled just under 28 mph. Each of your human units should be given a personal top speed between 1 and 28 mph.
Create a 29-element table of the maximum acceleration and deceleration rates of a human traveling at each whole mph in a straight line. It doesn't have to be exact; approximate acceleration and deceleration values for each speed will do. Create fast, medium, and slow versions of the 29-element table and assign each human to one of them. The table chosen may be mapped to the unit's top speed, so a unit with a maximum of 10 mph would be a slow accelerator.
Create a 29-element table of the sharpest turn radius a human can manage at each whole mph (0-28).
Now, when animating each human unit, if you have target information and must choose an acceleration from that, the task is harder. If instead you just have a force vector, it is easier. Let's start with the force vector.
If the force vector's net acceleration and resultant angle would exceed the limit of the unit's ability, restrict the unit's new vector to the maximum angle allowed, and also decelerate the unit at its maximum rate for its current linear speed.
During the next clock tick, being slower, it will be able to turn more sharply.
If the force vector can be entirely accommodated, but the unit is traveling slower than its maximum speed for that curvature, apply the maximum acceleration the unit has at that speed.
I know the details are going to be quite difficult, but I think this is a good start.
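Here is a sketch of that table-driven, force-vector update. All table values, units (mph change and radians per tick), and thresholds are made-up placeholders for tuning; the structure is what matters: turn limits indexed by current speed, braking when the requested turn exceeds them.

    import math

    MAX_MPH = 28
    # Max accel/decel (mph per tick) while moving at each whole mph; placeholders
    ACCEL = [3.0 - 2.5 * v / MAX_MPH for v in range(MAX_MPH + 1)]
    DECEL = [4.0 for _ in range(MAX_MPH + 1)]
    # Sharpest turn (radians per tick) at each mph; turning on a dime when slow
    MAX_TURN = [math.pi if v == 0 else min(math.pi, 2.0 / v)
                for v in range(MAX_MPH + 1)]

    def update(speed, heading, desired_heading, top_speed):
        v = min(int(round(speed)), MAX_MPH)
        # Shortest signed angle between current and desired heading
        want = (desired_heading - heading + math.pi) % (2 * math.pi) - math.pi
        if abs(want) > MAX_TURN[v]:
            # Requested turn exceeds the unit's ability: turn as sharply as
            # allowed and brake so that next tick it can turn harder
            heading += math.copysign(MAX_TURN[v], want)
            speed = max(0.0, speed - DECEL[v])
        else:
            # Turn fits within limits: complete it and accelerate toward top speed
            heading += want
            speed = min(top_speed, speed + ACCEL[v])
        return speed, heading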
For the pathing version, where you have a target and need to choose a force to apply, the problem is a bit different and even harder. I'm out of ideas for now, but suffice it to say that, given the example of a human already running away from the target at top speed, the best-time path will lie somewhere between slowing just enough while turning to trace a perfect arc to the target, on the one hand, and stopping completely, turning around, and running straight at the target, on the other.