Can Isolation Forest be used to detect colored anomalies in images? - object-detection

I am a bit confused about how isolation forest might work on images.
My problem is the following:
I have lots of images and on some of them there is a colored dot-ish anomaly somewhere. I would like to detect that and put a bounding box around it.
Is isolation forest a good method to do so?
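One way to make this concrete: a straightforward way to apply Isolation Forest to a single image is per pixel rather than per image — treat each pixel's colour as a sample, flag the pixels scored as outliers, and wrap a bounding box around them. A minimal sketch with scikit-learn; the function name anomaly_bbox and the contamination value are placeholders of mine, not an established recipe:

```python
# Hedged sketch: per-pixel Isolation Forest on RGB values, then a bounding
# box around the outlier pixels. Contamination is a placeholder to tune.
import numpy as np
from sklearn.ensemble import IsolationForest

def anomaly_bbox(image_rgb, contamination=0.001):
    """image_rgb: HxWx3 uint8 array. Returns (x_min, y_min, x_max, y_max) or None."""
    h, w, _ = image_rgb.shape
    pixels = image_rgb.reshape(-1, 3).astype(np.float32)

    clf = IsolationForest(contamination=contamination, random_state=0)
    labels = clf.fit_predict(pixels)          # -1 = outlier, 1 = inlier

    ys, xs = np.nonzero(labels.reshape(h, w) == -1)
    if len(xs) == 0:
        return None
    return xs.min(), ys.min(), xs.max(), ys.max()
```

Whether this beats a plain colour threshold depends on how distinctive the anomaly's colour is; if the anomalous colour is known in advance, simple thresholding (e.g. in HSV space) may be both simpler and faster.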

Related

Blender UV Mapping, coherence of textures

I created a connected surface (several deformed and bent planes). Long story short, the UV mapping looked like this:
I did not find any short tutorial on how to get a connected UV map for such a surface. I am aware of the mathematics of a curved surface, and I don't need absolute control over the look of the texture; it's just that, the way it is right now, the textures look very roughly pixelated.
Is it possible to stretch a texture over the whole surface and get a more connected UV map?
I also think this question is important, so others don't have to buy a whole Udemy course; there has to be a simpler way.

Basics of face Sculpting in Blender

I mean, the basics...
1) In the online videos I have seen, people model a character (or anything else) through one object only: they extrude, add loop cuts, scale, and so on until the whole character is done. Why don't they design the different objects separately (like hands separately, legs separately, body separately, and then join them together and make one object)?
2) What does the texturing department need to see so that they don't send the model back to the modelling department? I mean things like: the polygons over the model's face must be quads, not triangles, while modelling a character.
What basics should I know? Is there a checklist, or a set of fundamentals, I should go through before modelling a character?
Please correct me if I am wrong, and please answer both questions. Thanks.
It may be common, but it definitely isn't mandatory, to have a model as one solid mesh. Some models will have the parts of the body underneath clothing removed to reduce the poly count. How the model is to be used will be a big factor in how you model it: for a single still image it is easy to get away with multiple parts, while a character animated in a cartoony style could be stretched and distorted in ways that would show holes in a model made of multiple pieces. When working in a team, there may be rules in place determining whether a solid or multi-part model is considered acceptable.
An example of an animated model made from multiple parts is Sintel, the main character in the Sintel short animation.
There is nothing stopping you from making a library of separate body parts and joining them together when you make your model. Be aware that this can bring complications: if you model an arm with 12 verts and then make your hand with 15, you have to fiddle around to merge them together.
You will also find some extra freedom to work with multiple body parts during the sculpting phase, since there you are creating a high-density mesh that is only used as a template to model a clean mesh over. That later step is called retopology.
It is more likely that the rigging department will send a model back for fixing than the texturing department. When adding a rig and deforming the mesh in different ways, any parts that deform badly will be revealed and need fixing.
[...] (like hands separately, legs separately, body separately, and then join them together and make one object) [...]
Some modelers I know do precisely this: they block in the design using broad primitive shapes, slice in some edge loops and add broad details, merge everything together, sculpt it a bit further with high-res sculpting tools, and finally retopologize everything.
The main modelers I know who do this, however, model in a way that tries to adhere as closely as possible to the concept artist's illustration. They're not creating their own designs from scratch; instead they're given top/front/back/side illustrations of a character, for example, and are just trying to match them as closely as possible.
When you start modeling everything in small pieces, it helps to have that concept illustration, since otherwise you can get lost in the topology, and fusing organic meshes together can be difficult to do cleanly.
[...] why don't they design the different objects separately? [...]
Again, they sometimes do, but one of the appeals of creating an organic mesh by keeping it seamless the entire time is that you can focus on how edge loops propagate across the entire model. It helps to know that the base of a finger is a hexagon, for example, when figuring out how to cleanly propagate and terminate the edge loops for a hand, and likewise to have a strategy for how the hand cleanly propagates and terminates edge loops as it joins into the forearm.
It can be hard to get the topology to match up cleanly if you designed everything in small pieces and then had to figure out how to merge it all together. Polygonal modeling is very topology-oriented. It tends to require as much thinking about the wireframe and edge flows as it does about the shape of the model, since the topology needs to be a certain way for everything to subdivide cleanly and smoothly and to animate predictably with subdivision surfaces.
I used to work with developers who took one glance at the topology-dominated workflow of polygonal modeling and immediately wanted to jump to alternatives, like voxel sculpting. With voxels you could potentially model everything in pieces and fuse it all together in a nice, smooth, organic way without thinking about topology at all.
However, that loses sight of the key appeal of polygonal meshes. Their wire flow forms a control lattice with a small, finite number of control points the artist can animate and move around to predictably control the shape of the model. You immediately lose that with a voxel representation: while voxels free the artist from thinking about how the topology works and how the wireframe flows through the model, they also lose all the control benefits of having that lattice. So when people use voxel sculpting, they often end up meticulously retopologizing everything at the end anyway, to gain back the coarse and predictable control they have with polygonal meshes.
[...] I mean things like: the polygons over the model's face must be quads, not triangles, while modelling a character. [...]
This is all in the context of subdivision surfaces, the most popular of which are variants of Catmull-Clark. Catmull-Clark favors quads, which give the most predictable subdivision. It's much easier for the artist to predict how everything will look and deform if they favor, as much as possible, uniform grids of quadrangles wrapped around the model, with 4-valence vertices and every polygon having 4 points. Only where they need to "join" these quad grids together might they create some funky topology: a 5-valence vertex here, a 3-valence vertex there, a 5-sided polygon here, a triangle there. Those cases tend to deform a bit unpredictably (or at least unintuitively), so artists try to avoid them as much as possible.
When artists model polygonal meshes in this way, they are not just trying to create a statue with a nice shape. If that were all they wanted, they'd save themselves a lot of grief by avoiding individual vertices/edges/polygons in the first place and using something like Sculptris. Instead they are designing not only shapes but also a control lattice: a wire flow and a set of control points they can easily move around in the future to get predictable behavior out of their control cage. They're essentially designing controls, almost an interactive "GUI/rig" for themselves, through how they lay out the topology.
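To make the quad/valence bookkeeping concrete, here is a small sketch (my own, not part of the answer) using Blender's Python API to count face side counts and poles on the active mesh. The function name topology_report is made up, and note that on an open mesh the boundary vertices will naturally show up as non-4-valence too:

```python
# Hypothetical helper: a quick report on how "subdivision friendly" a mesh is,
# counting non-quad faces and non-4-valence vertices (poles).
import bpy
from collections import Counter

def topology_report(obj):
    """Count triangles/quads/ngons and vertex valences for a mesh object."""
    mesh = obj.data

    # Face side counts: 3 = triangle, 4 = quad, 5+ = ngon.
    face_sides = Counter(len(poly.vertices) for poly in mesh.polygons)

    # Vertex valence = number of edges touching each vertex.
    valence = Counter()
    for edge in mesh.edges:
        for v in edge.vertices:
            valence[v] += 1
    poles = sum(1 for v, n in valence.items() if n != 4)

    print(f"{obj.name}: faces by side count {dict(face_sides)}")
    print(f"{obj.name}: {poles} of {len(mesh.vertices)} vertices have valence != 4")

topology_report(bpy.context.active_object)
```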
2) What does the texturing department need to see so that they don't send the model back to the modelling department?
Generally, how a mesh is modeled shouldn't directly affect the texture department's work much at all if they're working with UV maps and painting textures over them (at that point it doesn't really matter whether a model has clean wire flows, since all the texture artists do is paint images over the 2D UV map or directly onto the 3D model).
However, if the modeler also does the UV mapping, then regardless of whether the mesh uses quads and clean wire flows, a poor UV layout will make the resulting textures look distorted. So the UV maps need to be made with minimal distortion, though that's usually easy to do automatically these days.
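As an illustration of that "automatic" unwrapping, a minimal Blender Python sketch (mine, not from the answer) that selects everything on the active object and runs an angle-based unwrap; it assumes a mesh object is active in the current context:

```python
# Hedged sketch: automatic, low-distortion UV unwrap of the active object.
import bpy

obj = bpy.context.active_object

# Enter Edit Mode and select everything so the unwrap covers the whole mesh.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')

# Angle-based unwrap tends to give low-distortion islands; Smart UV Project
# (bpy.ops.uv.smart_project) is an alternative when the mesh has no seams.
bpy.ops.uv.unwrap(method='ANGLE_BASED', margin=0.001)

bpy.ops.object.mode_set(mode='OBJECT')
```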
The other exception is if the department doesn't use UV maps and instead uses, say, PTex from Disney. PTex really favors quads. In the original paper at least, it only worked with quads.

using HAAR training for post-it note recognition

I need to be able to detect a variety of coloured post-it notes in a Microsoft Kinect video stream. I have tried edge detection with Emgu CV, but it doesn't seem to locate the vertices/edges reliably. I have also tried colour segmentation/detection, but considering the variety of colours, that may not be robust enough.
I am now attempting to use Haar classification. Can anyone suggest the best variety of positive/negative images to use? For example, for the positive images should I take pictures of many differently coloured post-it notes in various lighting conditions and orientations? Seeing as a post-it note is quite a simple shape (a square), is Haar classification over-complicating things?
Haar classifiers are typically used on greyscale images and trigger primarily on morphological, edge-like features. If you want to find post-it notes in an image, the easiest method would be to look at colors, since post-it notes come in very distinct colors. Have you tried training an SVM or random forest classifier to detect post-it notes based on color alone? Once you've identified areas of the image that are probably post-it notes, you could start looking at things like shape as additional validation that you are indeed looking at a post-it note.
Take a look at the following as an example of how to find rectangles in an image using the Hough transform:
https://opencv-code.com/tutorials/automatic-perspective-correction-for-quadrilateral-objects/
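To sketch the colour-first idea in code, here is a rough example of my own in Python/OpenCV rather than Emgu CV: threshold one colour range in HSV, then keep only blobs whose contour approximates a quadrilateral. The HSV bounds are placeholders for a yellow-ish note and would need tuning per note colour and lighting:

```python
# Hedged sketch: colour mask in HSV + quadrilateral check on the contours.
import cv2
import numpy as np

def find_postit_candidates(frame_bgr, lower_hsv=(20, 80, 80), upper_hsv=(35, 255, 255)):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))

    # Clean up the mask a little before looking for contours.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # OpenCV 4.x signature; 3.x returns an extra value first.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    boxes = []
    for c in contours:
        if cv2.contourArea(c) < 500:          # ignore tiny specks
            continue
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4:                  # roughly quadrilateral
            boxes.append(cv2.boundingRect(approx))
    return boxes

# Usage: boxes = find_postit_candidates(cv2.imread("frame.png"))
```

Once you have these candidate boxes, you could add the shape and depth checks mentioned above as extra validation.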

Suggestions for optimizing a fractal visualization method

I've written up a variation on Melinda Green's Buddhabrot method for visualizing the Mandelbrot set. Here it is:
http://pastebin.com/RH6dD77F
To create an animation I rendered hundreds of the individual images with slight variations. The variation is a transformation of the coefficients of the generating function as if they were an abstract vector in a space of coefficients. All of that produced incredible structures in the video...
http://www.youtube.com/watch?v=S2uMAvL_5Fo
The problem? As you can tell, the quality of each image is rather low, because it takes forever using the method I came up with (the copies I have on my computer are a little better quality, but still look like old reel-to-reel movies). I'm hoping to find a few methods for increasing quality or lowering the output time.
Thanks for any suggestions. I would really like to produce more detailed versions of these. Obviously there is much more structure in the graininess of these images.
You can try something like box counting: http://imagej.nih.gov/ij/plugins/fraclac/FLHelp/BoxCounting.htm. If the Buddhabrot is some sort of Mandelbrot variant, you can skip some of the empty boxes. You can also use a kd-tree, like the ones used for packing lightmaps, to subdivide the surface.
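In the same spirit of skipping work that cannot contribute, here is a small Python sketch of my own (not the pastebin code, and a different optimization from the kd-tree idea): a Buddhabrot accumulator that rejects sample points lying inside the main cardioid and period-2 bulb up front, since those orbits never escape and only burn the full iteration budget without ever being plotted:

```python
# Hedged sketch: Buddhabrot accumulation with cheap cardioid/bulb rejection.
import numpy as np

def in_cardioid_or_bulb(c):
    """True if c lies in the main cardioid or the period-2 bulb (never escapes)."""
    x, y = c.real, c.imag
    q = (x - 0.25) ** 2 + y * y
    return q * (q + (x - 0.25)) < 0.25 * y * y or (x + 1.0) ** 2 + y * y < 0.0625

def buddhabrot(samples=2_000_000, max_iter=500, size=800, seed=0):
    """Accumulate the escape orbits of random points that leave the Mandelbrot set."""
    rng = np.random.default_rng(seed)
    hist = np.zeros((size, size), dtype=np.uint32)
    for _ in range(samples):
        c = complex(rng.uniform(-2.0, 1.0), rng.uniform(-1.5, 1.5))
        if in_cardioid_or_bulb(c):
            continue                        # cheap rejection: this point can never escape
        z, orbit = 0j, []
        for _ in range(max_iter):
            z = z * z + c
            orbit.append(z)
            if abs(z) > 2.0:                # escaped: deposit the whole orbit
                for p in orbit:
                    i = int((p.real + 2.0) / 3.0 * size)
                    j = int((p.imag + 1.5) / 3.0 * size)
                    if 0 <= i < size and 0 <= j < size:
                        hist[j, i] += 1
                break
        # points that never escape contribute nothing, per the usual Buddhabrot rule
    return hist
```

With generalized coefficients, as in the animation, the same idea still applies only approximately, so the rejection test would need to be rechecked or dropped per frame.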

Elegant representations of graphs in R^3

If I have a graph of a reasonable size (e.g. ~100 nodes, ~40 edges coming out of each node) and I want to represent it in R^3 (i.e. map each node to a point in R^3 and draw a straight line between any two nodes which are connected in the original graph) in a way which would make it easy to understand its structure, what do you think would make a good drawing criterion?
I know this question is ill-posed; it's not objective. The idea behind it is easier to understand with an extreme case. Suppose you have a connected graph in which each node connects to exactly two other nodes, except for two nodes which connect to only one other node. It's not difficult to see that this graph can be drawn in R^3 as a straight line (with the nodes sprinkled along the line). Nevertheless, it is possible to draw it in a way which makes its very simple structure almost impossible to see, e.g. by "twisting" it as much as possible around some fixed point in R^3. So, for this simple case, it's clear that the simplest 3D representation is a straight line. However, it is not clear what this simplicity property is in the general case.
So, the question is: how would you define this simplicity property?
I'm happy with any kind of answer, be it a definition of "simplicity" that is computable for graphs, or a greedy approximate algorithm that transforms graph drawings and converges to "simpler" 3D representations.
Thanks!
EDIT
In the meantime I've put the force-based graph drawing ideas suggested in the answer into practice and wrote an OCaml/OpenGL program to simulate how imposing a repulsive electrical force between nodes (Coulomb's law) and a spring-like behaviour on edges (Hooke's law) would turn out. I've posted the video on YouTube. The video starts with an initial graph of 100 nodes, each with approximately 1-2 outgoing edges, placed randomly in 3D space. Then all the forces I mentioned are switched on and the system is left to move around subject to them. In the beginning the graph is a mess and it's very difficult to see the structure; closer to the end, it is clear that the graph is almost linear. I've also experimented with larger graphs, but sometimes the geometry of the graph is just a mess and no matter how you plot it, you won't be able to visualise anything. And here is an even more extreme example with 500 nodes.
One simple approach is described, e.g., at http://en.wikipedia.org/wiki/Force-based_algorithms_%28graph_drawing%29 . The underlying notion of "simplicity" is something like "minimal potential energy", which doesn't really correspond to simplicity in any useful sense but might be good enough in practice.
(If you have 100 nodes of degree 40, I have some doubt as to whether any way of drawing them is going to reveal much in the way of human-accessible structure. That's a lot of edges. Still, good luck!)
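For reference, here is a compact Python/NumPy sketch of the same Coulomb-plus-Hooke scheme described in the edit above (the original program was OCaml/OpenGL; the function name, constants, and step clipping here are my own choices):

```python
# Hedged sketch: naive 3D force-directed layout with pairwise repulsion
# (Coulomb-style) and springs on edges (Hooke-style).
import numpy as np

def force_layout(n, edges, steps=2000, k_rep=0.05, k_spring=0.02,
                 rest_len=1.0, dt=0.05, max_step=0.1, seed=0):
    """Return an (n, 3) array of node positions after relaxing the forces."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1.0, 1.0, size=(n, 3))

    for _ in range(steps):
        # Pairwise repulsion ~ 1/d^2, pushing every node away from every other.
        diff = pos[:, None, :] - pos[None, :, :]          # (n, n, 3)
        dist = np.linalg.norm(diff, axis=-1) + 1e-9
        np.fill_diagonal(dist, np.inf)                    # no self-force
        total = (k_rep * diff / dist[..., None] ** 3).sum(axis=1)

        # Spring attraction along edges toward a rest length (Hooke's law).
        for i, j in edges:
            d = pos[j] - pos[i]
            length = np.linalg.norm(d) + 1e-9
            f = k_spring * (length - rest_len) * d / length
            total[i] += f
            total[j] -= f

        # Clip the displacement per step so very close node pairs don't explode.
        pos += np.clip(dt * total, -max_step, max_step)
    return pos

# Example: a 100-node path graph should relax toward a nearly straight chain,
# much like the video described in the edit.
edges = [(i, i + 1) for i in range(99)]
coords = force_layout(100, edges)
```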