Plotting a direction field for a system of three equations in SageMath using Maxima

I would like to plot the direction field for a system of 3 or more equations in SageMath using Maxima. I know how to do this for a system of 2 equations, but I don't know what to modify to extend it to 3 or more. I tried the following example for a system of two equations:
maxima('plotdf([x,-y],[x,y],[x,-2,2],[y,-2,2])')
I was thinking that for three or more equations I simply have to add more variables, like
maxima('plotdf([x,-y,z],[x,y,z],[x,-2,2],[y,-2,2],[z,-2,2])')
but it's not working. I don't know what I am missing.

The Maxima documentation for plotdf makes the syntax clear. I'm not sure what a slope/direction field would look like for more equations unless you had more variables, but then it would have to be in three dimensions, which plotdf does not support.
In any case I'm surprised this worked from within Sage; you must have had wxMaxima or something analogous already installed.
Finally, note that SageMath can draw slope and vector fields natively, though this may not correspond with your workflow.
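For instance, here is a minimal sketch using Sage's built-in plot_vector_field, with plot_vector_field3d for the three-variable case (function names from memory, so double-check them against your Sage version):

x, y, z = var('x y z')

# 2D direction field of dx/dt = x, dy/dt = -y, the same system as above
plot_vector_field((x, -y), (x, -2, 2), (y, -2, 2))

# 3D analogue for the three-equation system, which plotdf cannot draw
plot_vector_field3d((x, -y, z), (x, -2, 2), (y, -2, 2), (z, -2, 2))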

Related

Multi-objective optimization but the function equation is unknown?

Firstly, I am totally out of my expertise zone so please bear with me.
I developed a fluid dynamics engine with 5 exposed parameters (say A, B, C, D, E). When you give this engine these 5 parameters, it does magic and gives out a value Z.
I want to write a script which can explore which combinations of A-E give the lowest (or close to lowest) value of Z.
I know optimization algorithms exist, but all of the examples I have found use an explicit function.
So I guess my objective would simply be to minimize Z? But where do A-E go?
Not really an answer, but some questions and ideas that might help you think through the best way to address this. We have no understanding of how big a range of values needs to be explored for those parameters, or how Z behaves, so this is very vague...
If you look at the values of Z for given values of A...E, does the value of Z jump around a lot for small changes in the parameter values, or does it change reasonably smoothly?
If the Z value is not too erratic, you could try some kind of gradient descent approach, using calculated values of Z for nearby parameter values to approximate the gradient. Suppose changing the value of A from 1 to 2 improves Z more than a similar-sized change in any other parameter; then try other values of A, keeping the other parameters fixed, until you find the value of A that gives the best Z. Then try changing the other parameter values to see which one gives the steepest descent, and find a better value for that parameter. Repeat this process until you can't find any improvement, and you will have found a (local) minimum. You could then start at a different place in your parameter space and try again; you will probably find several local minima, and may just choose the best of those. Not provably optimal, but it may be good enough. Of course you can get clever and use things like conjugate gradients, Newton-Raphson or similar if Z is smooth enough.
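If it helps, here is a rough sketch of that idea in Python, treating the Z calculation as a black box. The engine() function is just a hypothetical stand-in for your simulation, and the step sizes and ranges are arbitrary:

import random

def engine(params):
    # hypothetical stand-in for the real fluid dynamics engine:
    # takes the five parameters A..E and returns Z
    a, b, c, d, e = params
    return (a - 1.5) ** 2 + (b + 0.3) ** 2 + c * c + d * d + e * e

def coordinate_descent(params, step=0.5, min_step=1e-3):
    best = engine(params)
    while step > min_step:
        improved = False
        for i in range(len(params)):        # try each parameter in turn
            for delta in (step, -step):
                trial = list(params)
                trial[i] += delta
                z = engine(trial)
                if z < best:                # keep any move that lowers Z
                    params, best, improved = trial, z, True
        if not improved:
            step /= 2                       # no move helped: refine the step
    return params, best

start = [random.uniform(-2, 2) for _ in range(5)]
print(coordinate_descent(start))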
If the Z values are very erratic, then you might have to just sample possible combinations of A...E, compute Z for each, and choose the best you can find. Again, you might do that in some systematic way (e.g. points on a grid in your parameter space), entirely at random, or a combination of both.
If you find that there are 'clusters' of good solutions with similar parameter values, then maybe some kind of local search would help: there is often a better solution in the local neighbourhood of a known good solution. So try perturbing your parameter values a bit from a known good solution to see if that leads to a better one, either by some gradient descent method or by random sampling; a sketch of this follows.
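Sketched below: the random sampling from the previous paragraph as stage 1, then local perturbation around the best sample as stage 2, with the same hypothetical engine() as above and placeholder bounds:

import random

BOUNDS = [(-2.0, 2.0)] * 5     # placeholder min/max ranges for A..E

def random_point():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def perturb(params, scale=0.05):
    # jitter each parameter a little, clipped back into its range
    return [min(hi, max(lo, p + random.gauss(0, scale * (hi - lo))))
            for p, (lo, hi) in zip(params, BOUNDS)]

# stage 1: random sampling of the parameter space
best = min((random_point() for _ in range(1000)), key=engine)
best_z = engine(best)

# stage 2: local search in the neighbourhood of the best sample
for _ in range(500):
    trial = perturb(best)
    z = engine(trial)
    if z < best_z:
        best, best_z = trial, z
print(best, best_z)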
Unfortunately, if your Z calculation is complex, then any method using it as a black box will likely be slow as it will need to be re-evaluated many times.
You could use a Genetic Algorithm, where each chromosome is formed from the 5 candidate values of the variables you have to optimize, and your optimization/fitness "function" is the simulation itself, outputting Z.
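A minimal hand-rolled sketch of that GA, with engine() once more a hypothetical stand-in for your simulation and all the GA constants (population size, mutation rate, generations) chosen arbitrarily:

import random

POP, GENS, MUT = 40, 100, 0.2
BOUNDS = [(-2.0, 2.0)] * 5            # placeholder ranges for A..E

def engine(params):                   # hypothetical stand-in for the simulation
    return sum((p - 0.5) ** 2 for p in params)

def crossover(p1, p2):
    # uniform crossover: each gene taken from one of the two parents
    return [random.choice(pair) for pair in zip(p1, p2)]

def mutate(ch):
    # occasionally resample a gene anywhere within its allowed range
    return [random.uniform(lo, hi) if random.random() < MUT else g
            for g, (lo, hi) in zip(ch, BOUNDS)]

pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=engine)              # fitness = Z (lower is better)
    elite = pop[:POP // 2]            # survivors
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(POP - len(elite))]
best = min(pop, key=engine)
print(best, engine(best))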
Other viable alternatives are Particle Swarm Optimization and Ant Colony Optimization. All of these are usable algorithms for this kind of optimization problem.

How to set the initials when using NLsolve, Julia

Please, I want to use NLsolve in Julia for solving systems of nonlinear equations. The package requires you to set initial values for the unknowns of your system. With my system of equations (sorry, I cannot include it here since I need at least 10 reputation to be allowed to), when I keep the initial values [0.1; 1.2] as in the example from the documentation, I obtain the "paper-and-pencil" solution. But if I set the initial values to [1.1; 2.2], for example, I receive the following error:
DomainError:
Exponentiation yielding a complex result requires a complex argument.
Replace x^y with (x+0im)^y, Complex(x)^y, or similar.
Please, how should I come up with suitable initial values for a given system of equations?
A rootfinder is always dependent on the initial condition; that's just how those algorithms work. The closer your guess is to the true solution, the better off you are. (The DomainError most likely means that, from that starting point, the iteration wandered somewhere it had to raise a negative number to a fractional power.) Trust region methods are more robust than Newton methods, but you'll never get away from the fact that these are local methods.
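The usual workaround is multi-start: draw several random initial guesses, keep the runs that converge, and deduplicate the results. Not Julia, but here is the idea sketched in Python with scipy.optimize.root (nlsolve plays the same role; the system f below is just an example with a root at (1, 2)):

import numpy as np
from scipy.optimize import root

def f(v):
    # example system; replace with your own equations
    x, y = v
    return [x**2 + y - 3.0, x + y**2 - 5.0]

rng = np.random.default_rng(0)
roots = []
for _ in range(20):
    x0 = rng.uniform(-3.0, 3.0, size=2)   # random initial guess
    sol = root(f, x0)
    if sol.success:
        roots.append(tuple(np.round(sol.x, 6)))

print(sorted(set(roots)))                  # distinct converged solutions

If your equations only make sense for, say, positive unknowns (which is what that DomainError hints at), restrict the sampling box accordingly.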

Range of possible values for alpha, gamma and eta params of HLDA's Mallet implementation

I'm trying to run the HLDA algorithm to produce a descriptive hierarchy of the input documents. The problem is that I'm running various parameter configurations and trying to understand how it works "empirically", because I cannot match the parameters to the ones used in the original papers (I understand it's a different team). E.g. alpha in Mallet seems to be eta in the paper, but I'm not very sure. Besides, I cannot find the bounds for each of them, I mean the range of possible values for each parameter.
In the source code, there is some help:
double alpha; // smoothing on topic distributions
double gamma; // "imaginary" customers at the next, as yet unused table
double eta; // smoothing on word distributions.
First, I used the default values: alpha = 10.0; gamma = 1.0; eta = 0.1;
Then I tried running the algorithm with different values and interpreting the results, but I can't understand what they mean. E.g. I think changing gamma (in Mallet) has an effect on the customers' decision: to start a new node in the tree or to be placed in an existing one. So, if I set gamma = 0.5, fewer nodes should be produced, because 0.5 is half the default probability, right? But the run with gamma = 1 gives me 87 nodes, and with gamma = 0.5 it returns 98! And that raises a new question: is gamma a probability at all? I was trying to find the range of possible values in these two papers, but I didn't find them:
Hierarchical Topic Models and the Nested Chinese Restaurant Process
The Nested Chinese Restaurant Process and Bayesian Nonparametric Inference of Topic Hierarchies
I know I could be missing something because I don't have a good background on this, but that's why I'm asking here; maybe someone has already had this problem and can help me understand those limits.
Thanks in advance!
It may be helpful to run multiple times with each hyperparameter setting. I suspect that gamma does not have a big influence on the final number of topics, and that what you are seeing could just be typical variability in the sampling process.
In my experience the parameter that has by far the strongest influence on the number of topics is actually eta, the topic-word smoothing.

definition of a graph

I have two questions:
What is the name of the graph (or circuit) that goes along the outer vertices of existing nodes?
What would be the formal definition of that graph?
For simplicity, I have added a sample figure; the red highlighted graph is what I want.
To show how I got these outer vertices: I have a set of subgraphs, so I took the UNION and INTERSECTION, then the DIFFERENCE, one after the other. Finally I ended up with a graph similar to my red-edge graph.
So the final graph I got was {1,2,6,7,9,8,10,11,10,7,6,5,3,4,3,1} if I start from 1.
Please tell me whether I am using the correct approach or not, as I am coming to the end of my work.
You may be trying to define some kind of geometric hull, but that is surely not a graph property. These two graphs are equivalent and indistinguishable as far as graph theory goes:
Edit
Perhaps this old answer of mine may help you

optimizing a function to find global and local peaks with R

I have 6 parameters for which I know the max and min values. I have a complex function that takes the 6 parameters and returns a 7th value (say Y). I say complex because Y is not directly related to the 6 parameters; there are many embedded functions in between.
I would like to find the combination of the 6 parameters which returns the highest Y value. I first tried to calculate Y for every combination by constructing a hypercube, but I do not have enough memory in my computer. So I am looking for something like a Markov chain which progresses through the delimited parameter space and is able to escape local peaks.
When I give one combination of the 6 parameters, I would like to know the highest local Y value. I tried to write code with an iterative chain like a Markov one, but I am not sure how to proceed when the chain reaches an edge of the parameter space. Obviously, some algorithms should already exist for this.
Question: does anybody know the best functions in R to do these two things? I read that optim() could be appropriate for finding the global peak, but I am not sure that it can deal with complex functions (I prefer asking before engaging in a long (for me) process of code writing). And for the local peaks? optim() does not seem able to do this.
Thank you in advance for any leads,
Julien from France
Take a look at the Optimization and Mathematical Programming Task View on CRAN. I've personally found the differential evolution algorithm to be very fast and robust. It's implemented in the DEoptim package. The rgenoud package is another good candidate.
I like to use the Metropolis-Hastings algorithm. Since you are limiting each parameter to a range, the simple thing to do is let your proposal distribution simply be uniform over the range. That way, you won't run off the edges. It won't be fast, but if you let it run long enough, it will do a good job of sampling your space. The samples will congregate at each peak, and will spread out around them in a way that reflects the local curvature.
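Not R, but the whole scheme fits in a few lines; here is a sketch in Python of that uniform-proposal Metropolis-Hastings sampler, where f() is a hypothetical stand-in for your 6-parameter function and the temperature T is an arbitrary tuning constant:

import math, random

BOUNDS = [(0.0, 1.0)] * 6      # placeholder min/max for the 6 parameters

def f(params):                 # hypothetical stand-in for the complex function
    return -sum((p - 0.4) ** 2 for p in params)

def propose():
    # uniform over the whole box, so the chain can never leave the space
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

T = 0.05                       # lower temperature concentrates samples on peaks
x = propose()
fx = f(x)
best, best_f = x, fx
for _ in range(100000):
    y = propose()
    fy = f(y)
    # accept with probability min(1, exp((fy - fx) / T))
    if math.log(random.random()) < (fy - fx) / T:
        x, fx = y, fy
        if fx > best_f:
            best, best_f = x, fx
print(best, best_f)

Keeping all accepted points, not just the best one, also gives you a map of where the local peaks are, which addresses your second question.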