Suggestions for optimizing a fractal visualization method

I've written up a variation on Melinda Green's Buddhabrot method for visualizing the Mandelbrot set. Here it is:
http://pastebin.com/RH6dD77F
To create an animation I rendered hundreds of the individual images with slight variations. The variation is a transformation of the coefficients of the generating function as if they were an abstract vector in a space of coefficients. All of that produced incredible structures in the video...
http://www.youtube.com/watch?v=S2uMAvL_5Fo
The problem? As you can tell, the quality on each image is rather low because it takes forever using the method I came up with (the copies I have on my computer are a little better quality, but still look like old reel-to-reel movies). I'm hoping to find a few methods for increasing quality or lowering output time.
Thanks for any suggestions. I would really like to produce more detailed versions of these. Obviously there is much more structure in the graininess of these images.

You can try something like box counting: http://imagej.nih.gov/ij/plugins/fraclac/FLHelp/BoxCounting.htm. If the Buddhabrot is some sort of Mandelbrot rendering, you can skip some empty boxes. You can use a kd-tree, as in packing lightmaps, to subdivide the surface.
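To make the "skip empty boxes" idea concrete, here is a minimal C++ sketch of my own (not taken from the linked code): it marks coarse grid cells whose probe points never escape under the usual z -> z^2 + c iteration, so the main Buddhabrot pass can discard samples in those cells immediately. The grid resolution, probe count and function names are mine, and this is a lossy heuristic near the set boundary, so treat it as a starting point rather than a finished optimization.

```cpp
#include <complex>
#include <vector>

// Returns true if iterating z -> z^2 + c escapes within maxIter steps.
// Points that never escape lie inside the Mandelbrot set and contribute
// nothing to a Buddhabrot histogram, so regions of them can be skipped.
static bool escapes(std::complex<double> c, int maxIter) {
    std::complex<double> z = 0.0;
    for (int i = 0; i < maxIter; ++i) {
        z = z * z + c;
        if (std::norm(z) > 4.0) return true;
    }
    return false;
}

// Build a coarse grid over the sampled rectangle; a cell is marked skippable
// only if a small set of probe points inside it all fail to escape.
// This is a heuristic near the set boundary: tune grid and probe density.
std::vector<bool> buildSkipGrid(double xMin, double xMax, double yMin, double yMax,
                                int gridW, int gridH, int probes, int maxIter) {
    std::vector<bool> skip(static_cast<size_t>(gridW) * gridH, false);
    for (int gy = 0; gy < gridH; ++gy) {
        for (int gx = 0; gx < gridW; ++gx) {
            bool anyEscapes = false;
            for (int p = 0; p < probes * probes && !anyEscapes; ++p) {
                double fx = (gx + (p % probes + 0.5) / probes) / gridW;
                double fy = (gy + (p / probes + 0.5) / probes) / gridH;
                std::complex<double> c(xMin + fx * (xMax - xMin),
                                       yMin + fy * (yMax - yMin));
                anyEscapes = escapes(c, maxIter);
            }
            skip[static_cast<size_t>(gy) * gridW + gx] = !anyEscapes;
        }
    }
    return skip;
}
```

During the main rendering pass, look up the cell of each random sample and discard it right away if the cell is marked skippable; only samples in "interesting" cells pay the full iteration cost.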

Related

Information about CGAL and alternatives

I'm working on a problem that will eventually run on an embedded microcontroller (ESP8266). I need to perform some fairly simple operations on linear equations. I don't need much, but I do need to be able to work with points and linear equations to:
Define an equation for a line, either from two known points or from one point and a gradient
Calculate a new x,y point on an equation line that is a specific distance from another point on that equation line
Drop a perpendicular onto an equation line from a point
Perform variations of cosine-rule calculations on points and triangle sides defined as equations
I roughed out some code for this a while ago based on high-school "y = mx + c" concepts, but it's flawed (it fails with infinities when lines are vertical), and it's currently in Scala. Since I suspect I'm reinventing a wheel, and this isn't my primary goal, I'd like to use someone else's work for this!
I've come across CGAL, and it seems very likely it's capable of all this and more, but I have two questions about it (given that it seems to take ages to get enough understanding of this kind of huge library to actually be able to answer simple questions!)
It seems to assert some kind of mathematical perfection in its calculations, but that's not important to me, and my system will be severely memory constrained. Does it use or offer memory-efficient approximations?
Is it possible (and hopefully easy) to separate out just a limited subset of features, or am I going to find the entire library (or even a very large subset) heading into my memory limited machine?
And, I suppose the inevitable follow up: are there more suitable libraries I'm unaware of?
TIA!
The problems that you are mentioning sound fairly simple indeed, so I'm wondering if you really need any library at all. Maybe if you post your original code we could help you fix it; your problem sounds like you need to redo a calculation while avoiding a division by zero.
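If you do decide to fix your own code, one common way to remove the vertical-line special case is to store a line as a point plus a unit direction vector (or equivalently as ax + by + c = 0) instead of y = mx + c. A minimal C++ sketch of that idea (your code is in Scala, but it carries over; the type and function names here are mine, not from any library):

```cpp
#include <cmath>

struct Point { double x, y; };

// A line stored as a point plus a unit direction vector: vertical lines
// are no longer a special case, unlike with y = mx + c.
struct Line {
    Point p;       // any point on the line
    double dx, dy; // unit direction along the line

    static Line through(Point a, Point b) {
        double len = std::hypot(b.x - a.x, b.y - a.y); // assumes a != b
        return {a, (b.x - a.x) / len, (b.y - a.y) / len};
    }

    // Point on the line at signed distance t from the anchor point.
    Point at(double t) const { return {p.x + t * dx, p.y + t * dy}; }

    // Foot of the perpendicular dropped from q onto the line.
    Point footOfPerpendicular(Point q) const {
        double t = (q.x - p.x) * dx + (q.y - p.y) * dy; // projection length
        return at(t);
    }
};
```

With this representation, "a point at a specific distance from another point on the line" and "drop a perpendicular onto the line" from your list fall out directly, with no division by a possibly-zero slope.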
As for your point (2) about separating a limited number of features from CGAL: given the size and the coding style of that project, from my experience that will be significantly more complicated (if at all possible) than fixing your own code.
In case you want to try a simpler library than CGAL, maybe you could try Boost.Geometry
Regards,

OpenCascade: How to subdivide elongated triangles?

I am using OpenCascade to import STEP/IGES as meshes in my software. Works nicely.
But I need small triangles, and the ones I get are sometimes very large (in flat areas) or very elongated (e.g. when meshing a cylinder). The best would be to split any triangle edge longer than some absolute value, while avoiding T-vertices too.
I wasn't able to google anything about that... So, currently, I pass the mesh to OpenMesh, apply the OpenMesh::Subdivider::Uniform::LongestEdgeT operator, then pass it back to my software. Tedious and costly when I manage several million triangles...
Questions:
Is there an equivalent in OpenCascade?
Or a simple code snippet to implement my own loop to do so?
Thanks!
The meshing algorithm BRepMesh_IncrementalMesh coming with Open CASCADE Technology is mainly focused on two usage scenarios:
Visualization in the 3D Viewer. Large and elongated triangles do no harm to the presentation, as vertex normals ensure proper smooth shading. Deflection parameters allow managing presentation quality.
Computing algorithms using triangulation as an approximation (to speed up calculations compared to the same algorithm working on exact geometry). In this case, deflection parameters determine the target precision of the algorithm. Large and elongated triangles should not cause problems here, as deviation from the exact geometry is controlled by the meshing parameters.
There are, however, some categories of algorithms for which the shape of mesh elements is very important. Solvers (numerical simulation) form one such category, where unfortunate mesh elements may cause numerical instability or other issues. What exactly matters or causes issues depends on the specific algorithm; this may include element skewness, element aspect ratio, element size and the element grid. Some solvers work much better on quads than on triangles.
If you take a look at the meshing result of the BRepMesh_IncrementalMesh algorithm, you may notice that not only are there large elongated triangles, but the entire mesh structure is somewhat suboptimal for solver algorithms.
There are several options you may consider:
Triangulation refinement algorithm. Such an algorithm processes an existing triangulation and tries to heal properties like skewness; this is, I suppose, what OpenMesh does in your current pipeline. Such a postprocessing algorithm might give satisfactory results at good performance, but the final result will depend dramatically on the properties of the original meshing algorithm. For the moment, OCCT doesn't have any refinement tool, although it is possible to write such an algorithm on your own (I cannot give you a small code snippet, because such an algorithm is not as small and trivial as it may look at first glance; a deliberately naive sketch follows after this list).
Consider an alternative meshing algorithm. A probably incomplete list:
Express Mesh by Open Cascade (hence, working directly on OCCT shapes). This tool generates a triangulation with a nice grid-like structure (for smooth surfaces), a configurable element size and a quad-dominant generation option. It is a commercial product, though.
Netgen mesher. This open-source tool provides bindings to OCCT, and although it is focused on 3D tetrahedral mesh generation, it may also be used for generating a common triangular mesh. I cannot say much good about this tool: it was rather slow and unstable when I saw it at work many years ago.
MeshGems surface meshing. Another commercial tool providing an interface to OCCT. I have never worked with this product, so I cannot share any opinion on it.
Consider improving BRepMesh_IncrementalMesh. As OCCT is an open-source framework, you may consider extending its meshing algorithm with more parameters and contributing to the project.
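To make the first option more concrete, here is a deliberately naive, non-OCCT sketch of one longest-edge split pass over an indexed triangle list (the type names and single-pass structure are my own). It illustrates the splitting idea from your question, but it does not update neighbouring triangles, so it creates exactly the T-vertices you want to avoid; propagating splits across shared edges is what makes a real refinement tool non-trivial.

```cpp
#include <array>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

static double dist(const Vec3& a, const Vec3& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// One pass of naive longest-edge splitting over an indexed triangle list.
// Triangles whose longest edge exceeds maxEdge are replaced by two triangles
// sharing the midpoint of that edge. NOTE: neighbouring triangles are not
// updated, so this creates T-vertices on shared edges; a real refinement
// tool must propagate the split to the neighbour as well.
void splitLongEdgesOnce(std::vector<Vec3>& verts,
                        std::vector<std::array<int, 3>>& tris,
                        double maxEdge) {
    std::vector<std::array<int, 3>> out;
    for (const auto& t : tris) {
        // Find the longest edge (from corner i to corner (i+1) mod 3).
        int longest = 0;
        double best = 0.0;
        for (int i = 0; i < 3; ++i) {
            double d = dist(verts[t[i]], verts[t[(i + 1) % 3]]);
            if (d > best) { best = d; longest = i; }
        }
        if (best <= maxEdge) { out.push_back(t); continue; }
        int a = t[longest], b = t[(longest + 1) % 3], c = t[(longest + 2) % 3];
        Vec3 mid{(verts[a].x + verts[b].x) / 2,
                 (verts[a].y + verts[b].y) / 2,
                 (verts[a].z + verts[b].z) / 2};
        int m = static_cast<int>(verts.size());
        verts.push_back(mid);
        out.push_back({a, m, c});
        out.push_back({m, b, c});
    }
    tris.swap(out);
}
```

You would run such a pass repeatedly until no triangle changes, and add edge bookkeeping so that both triangles sharing a split edge receive the new midpoint.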

Methods of labeling human muscles on tensorflow

I want to be able to label all of the muscles on an athlete's body. I have a lot of images in which the athletes are in almost the same body pose, but the issue I am running into is that drawing a box around each muscle is inaccurate, as it ends up overlapping other muscles. Drawing exact outlines around them is difficult because there are a lot of smaller muscles, and it creates inconsistency over 20-30 images. I was wondering if there is a way to feed in a human anatomy model and then have TensorFlow go in and label all of the muscles in given pictures.
Or I was wondering if you all had a different idea on how to approach this problem that I'm running into.
I don't have anybody else to ask and I've been researching this for a while, so if I missed or overlooked something please forgive me.
The way I see it, you need to combine this with some preprocessing steps to normalize your target object in the image, such as:
identify the human,
identify the pose or skeleton (for which there are nowadays many open-source tools, such as openpose-plus),
use the pose estimation results to label the limbs or parts of the body, from which you can continue either with hand-crafted image processing or with another segmentation model.
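As a very rough illustration of that last step (a sketch of my own, not tied to any particular pose library): once a pose estimator gives you joint keypoints in image coordinates, a muscle that runs along a limb can be approximated as an oriented box between two joints and then refined by segmentation. The names and the fixed width here are hypothetical placeholders.

```cpp
#include <array>
#include <cmath>

// Joint keypoint in image coordinates, as produced by a pose estimator.
struct Keypoint { double x, y; };

// Returns the 4 corners of an oriented box covering the limb segment from
// joint a to joint b, widened by halfWidth pixels on each side.
// Assumes a and b are distinct points.
std::array<Keypoint, 4> limbRegion(Keypoint a, Keypoint b, double halfWidth) {
    double dx = b.x - a.x, dy = b.y - a.y;
    double len = std::sqrt(dx * dx + dy * dy);
    double nx = -dy / len, ny = dx / len; // unit normal to the limb axis
    std::array<Keypoint, 4> corners{{
        {a.x + nx * halfWidth, a.y + ny * halfWidth},
        {a.x - nx * halfWidth, a.y - ny * halfWidth},
        {b.x - nx * halfWidth, b.y - ny * halfWidth},
        {b.x + nx * halfWidth, b.y + ny * halfWidth}}};
    return corners;
}
```

For example, passing the detected shoulder and elbow keypoints with a width proportional to the athlete's scale gives a first-pass region for the upper arm, which tends to be more consistent across 20-30 similar-pose images than hand-drawn boxes.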

Operating speed of processing planar images vs interleaved images

I am reading a book on advanced iOS programming which says "It's generally much faster to work on planar formats than interleaved formats. If at all possible, get your data into planar format as soon as you can, and leave it in that format through the entire transformation process."
I got to wondering why that is. The only reason I could come up with is that, as you iterate over your pixels, you can use the ++ operator instead of expressions like w*y+x to calculate your offsets.
I am familiar with planar vs interleaved as well as RGBA vs YUV. I've written a lot of C code to do things like image format conversion, rotation, flipping, resizing, etc. The conversions were typically either to or from i420.
Is there something I'm overlooking?
Sweeping generalizations like the one you quote are often of limited practical applicability (i.e. almost, but not quite, entirely generalized from one or two data points ;-)
You say you have written a lot of C code for image processing. Good, take it to the next step: learn how to use a profiler, write microbenchmarks to unit-test the performance of your algorithms, especially when you are exploring the solution space.
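As a starting point, a small self-contained benchmark along these lines (my own sketch, not from the book or your code) already shows how the two layouts differ for a per-channel operation such as summing one channel; whether the difference matters for your conversions is exactly what profiling will tell you.

```cpp
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <vector>

// Toy microbenchmark: sum the "red" channel of a W x H image stored either
// as planar (one contiguous plane per channel) or as interleaved RGBA.
// Results depend heavily on the operation, the compiler and the cache,
// which is exactly why measuring beats a blanket claim.
int main() {
    const int W = 4096, H = 4096;
    std::vector<uint8_t> planarR(W * H, 1);          // R plane only
    std::vector<uint8_t> interleaved(W * H * 4, 1);  // RGBA RGBA ...

    auto time = [](const char* label, auto&& fn) {
        auto t0 = std::chrono::steady_clock::now();
        uint64_t result = fn();
        auto t1 = std::chrono::steady_clock::now();
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
        std::printf("%s: sum=%llu  %lld us\n", label,
                    (unsigned long long)result, (long long)us);
    };

    time("planar", [&] {          // contiguous reads of the R plane
        uint64_t s = 0;
        for (size_t i = 0; i < planarR.size(); ++i) s += planarR[i];
        return s;
    });
    time("interleaved", [&] {     // strided reads, 3 of every 4 bytes unused
        uint64_t s = 0;
        for (size_t i = 0; i < interleaved.size(); i += 4) s += interleaved[i];
        return s;
    });
    return 0;
}
```

Note that for an operation touching every channel of every pixel (like an RGBA-to-i420 conversion), the interleaved layout is read just as sequentially, which is one more reason to measure your actual workloads rather than trust the generalization.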

transform a path along an arc

I'm trying to transform a path along an arc.
My project is running on OS X 10.8.2 and the painting is done via CoreAnimation in CALayers.
There is a waveform in my project which is painted as a path. There are about 200 sample points, which are mirrored to the bottom side. These are painted 60 times per second and updated to the song position.
Please ignore the white line, it is just a rotation indicator.
What I am trying to achieve is drawing the waveform along an arc. "Up" should point to the middle. It does not need to go all the way around. The waveform should be painted along the green circle. Please take a look at the sketch provided below.
I'm not sure how to achieve this in a performant manner. There are many points per second that need coordinate correction.
I tried coming up with some ideas of my own:
1) There is the possibility of adding linear transformations to paths, which, I think, will not help me here. The only thing I can think of is adding a point, rotating the path with a transformation, adding another point, rotating, and so on. But this would be very slow, I think.
2) Drawing the path into an image and bending it would surely lead to image-artifacts.
3) Maybe the best idea would be to precompute sample points on an arc, then save a vector to the center. Then take the y-coordinates of the waveform, place them on the sample points and move them along the vector to the center.
But maybe I am just not seeing some kind of easy solution to this problem. Help is really appreciated and fresh ideas are very welcome. Thank you in advance!
IMHO, the most efficient way to go (in terms of CPU usage) would be to use some form of pre-computed approach that would take into account the resolution of the display.
Cleverly precomputed values
I would go for the mathematical transformation (from linear to polar) and combine two facts:
There is no need to perform expensive mathematical computation
There is no need to render two points that are too close to each other
I have no ready-made algorithm for you, but you could use a pre-computed sin or cos table, and match the data range to the display size in order to work with integers.
For instance imagine we have some data ranging from 0 to 1E6 and we need to display the sin value of each point in a 100 pix height rectangle. We can use a pre-computed sin table and work with integers. This way displaying the sin value of a point would be much quicker. This concept can be refined to get a nicer result.
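As a concrete illustration of that idea (a sketch of my own; the table size, the use of doubles instead of integers, and the names are arbitrary): the trigonometric work is done once when the table is built, and placing a waveform sample on the arc then costs only index arithmetic and two lookups per point.

```cpp
#include <cmath>
#include <vector>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

// Precomputed sin/cos tables: the per-frame loop then needs no trigonometric
// calls, only index arithmetic and lookups.
struct ArcTable {
    std::vector<double> cosv, sinv;

    explicit ArcTable(int entries) : cosv(entries), sinv(entries) {
        for (int i = 0; i < entries; ++i) {
            double angle = 2.0 * M_PI * i / entries;
            cosv[i] = std::cos(angle);
            sinv[i] = std::sin(angle);
        }
    }

    // Place sample number i of count along the arc. The waveform amplitude
    // (in pixels, positive meaning "toward the center") is applied along the
    // precomputed radial direction; cx, cy and radius describe the circle.
    void pointOnArc(int i, int count, double amplitude,
                    double cx, double cy, double radius,
                    double& outX, double& outY) const {
        size_t idx = static_cast<size_t>(i) * (cosv.size() - 1) / (count - 1);
        outX = cx + (radius - amplitude) * cosv[idx];
        outY = cy + (radius - amplitude) * sinv[idx];
    }
};
```

For a partial arc you would simply map the sample index into a narrower slice of the table, and, as suggested above, the table values can be pre-scaled to integer pixel offsets to shave off further per-frame work.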
Also, there are some ways to retain only significant points of a curve so that the displayed curve actually looks like the original (see the Ramer–Douglas–Peucker algorithm on wikipedia). But I found it to be inefficient for quickly displaying ever-changing data.
Using multicore rendering
You could compute different areas of the curve using multiple cores (can be tricky)
Or you could do the precomputation on several cores and use one core to finish the job.