Where is the sharp feature with Loop subdivision - CGAL

Is there sharp feature support with Loop subdivision or other subdivision algorithms in CGAL? Thank you so much!

It's not there because there is a patent by Pixar on it. If you need sharp features, you can use the OpenSubdiv library from Pixar, which IIRC includes sharp feature handling without the patent restriction.

Related

Library for fitting parametric curves

Does anyone know of a library (any language, though preferably Python/R/MATLAB) for parametric curve fitting, i.e. given a set of points in the plane {(x_i, y_i)}, finding parameter estimates for two (polynomial) functions y = f_y(t) and x = f_x(t) for some (arc-length?) parametrization t? This is especially useful if you have some multi-valued function (e.g. a circle) for which ordinary regression wouldn't work.
There are a number of papers detailing algorithms (e.g. 'Parametric Curve Fitting', Grossman 1971), but I can't find any corresponding software, which would save a lot of time over coding it up myself.
For future reference, I ended up using the princurve library in R based on principal curves by Trevor Hastie.
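If a plain polynomial fit (rather than principal curves) is enough, a minimal sketch of the chord-length approach in Python/NumPy could look like the following. It assumes the points are already ordered along the curve; chord length stands in for arc length, and the degree and sample data are arbitrary illustration choices.

import numpy as np

def fit_parametric_curve(x, y, degree=5):
    # chord-length parametrization, normalized to [0, 1], as a stand-in for arc length
    d = np.hypot(np.diff(x), np.diff(y))
    t = np.concatenate(([0.0], np.cumsum(d)))
    t /= t[-1]
    # independent least-squares polynomial fits for x(t) and y(t)
    fx = np.polynomial.Polynomial.fit(t, x, degree)
    fy = np.polynomial.Polynomial.fit(t, y, degree)
    return t, fx, fy

# example: noisy points on a circle, a multi-valued "function" where y-on-x regression fails
theta = np.linspace(0, 2 * np.pi, 200)
x = np.cos(theta) + 0.01 * np.random.randn(theta.size)
y = np.sin(theta) + 0.01 * np.random.randn(theta.size)
t, fx, fy = fit_parametric_curve(x, y)
print(fx(0.25), fy(0.25))  # a point on the fitted curve at parameter t = 0.25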

Using Haar training for post-it note recognition

I need to be able to detect a variety of coloured post-it notes via a Microsoft Kinect video stream. I have tried using EmguCV for edge detection, but it doesn't seem to locate the vertices/edges. I have also tried colour segmentation/detection; however, considering the variety of colours, that may not be robust enough.
I am attempting to use Haar classification. Can anyone suggest the best variety of positive/negative images to use? For example, for the positive images should I take pictures of many different coloured post-it notes in various lighting conditions and orientations? Seeing as it is quite a simple shape (a square), is using Haar classification over-complicating things?
Haar classifiers are typically used on black-and-white images and trigger primarily on morphological, edge-like features. It seems like if you want to find post-it notes in an image, the easiest method would be to look at colours (since they come in very distinct colours). Have you tried training an SVM or Random Forest classifier to detect post-it notes based on colour alone? Once you've identified areas in the image that are probably post-it notes, you could start looking at things like shape as additional validation that you are indeed looking at a post-it note.
Take a look at the following as an example of how to find rectangles in an image using the Hough transform:
https://opencv-code.com/tutorials/automatic-perspective-correction-for-quadrilateral-objects/
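As a rough illustration of the colour-first suggestion above (rather than the Haar route), a sketch using OpenCV's Python bindings might look like this. The HSV bounds are placeholder values for a yellowish note and would need tuning for your camera and lighting; the quadrilateral test is only a crude shape validation.

import cv2
import numpy as np

def find_post_its(bgr_image, lower_hsv=(20, 80, 80), upper_hsv=(35, 255, 255)):
    # threshold a hue/saturation/value range to isolate note-coloured pixels
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    # remove small specks before looking for blobs
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    # OpenCV 4.x returns (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    notes = []
    for c in contours:
        if cv2.contourArea(c) < 500:  # ignore tiny regions
            continue
        # a post-it blob should simplify to roughly four corners
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4:
            notes.append(approx.reshape(-1, 2))
    return notes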

OpenSceneGraph non-uniform terrain support

I would like to add terrain to my project, which uses OSG.
I've read the osgTerrain documentation. As I understand from its interface, it treats data as a uniform height field -- a grid of heights.
I want the terrain to be non-uniform: it would be represented as a triangulation with heights specified at the vertices.
Does osgTerrain support this out of the box? Or should I implement it myself, deriving from Layer? Where can I find extensive docs? Where should I start?
osgTerrain at one point, through the VPB tool, supported irregular triangulated terrain models. There's nothing in OSG itself that prevents you from still doing this. However, I must question your reasons for doing so. Are you looking for performance? The reason OSG uses regular heightfields now is that, with modern GPUs, they're just as fast as the old indexed triangles. Are you planning on making modifications to the terrain at runtime that require an irregular mesh?
Also, you might consider osgEarth. It is effectively the replacement terrain subsystem for OSG and is much more feature-rich than osgTerrain, though it too uses quadtrees of regular grids.

What frameworks for depth cameras are out there?

I want to evaluate the performance of several SDKs / frameworks for depth cameras. These cameras can either be using Time-of-Flight or structured light.
The framework should be capable (at least) of person tracking / blob detection and gesture recognition.
So far I found the following frameworks:
OpenNI (structured light only)
Microsoft Kinect SDK (Kinect only)
Beckon SDK by Omek Interactive (ToF and structured light)
iisu by SoftKinetic (ToF and structured light)
Are there any other frameworks I should be aware of?
EDIT: I found this article by Techradar that seems to indicate that these are indeed the only options currently available.
Any feedback would be very much appreciated!
I have found some interesting links on this. You can take MIT's approach using CoDAC. They list lots of facts in this post; I will quote the most important ones here.
9. What are limitations of this technique?
The main limitation of our framework is inapplicability to scenes with curvilinear objects, which would require extensions of the current mathematical model. Another limitation is that a periodic light source creates a wrap-around error as it does in other TOF devices. For scenes in which surfaces have high reflectance or texture variations, availability of a traditional 2D image prior to our data acquisition allows for improved depth map reconstruction as discussed in our paper.
10. What are advantages of this technique/device and how does it compare with existing TOF-based range sensing techniques?
In laser scanning, spatial resolution is limited by the scanning time. TOF cameras do not provide high spatial resolution because they rely on a low-resolution 2D pixel array of range-sensing pixels. CoDAC is a single-sensor, high spatial resolution depth camera which works by exploiting the sparsity of natural scene structure.
11. What is the range resolution and spatial resolution of the CoDAC system?
We have demonstrated sub-centimeter range resolution in our experiments. This is significantly better than the fundamental limit of about 10 cm that would arise from using a detector with 0.7 nanosecond rise time if we were not using parametric signal modeling. The improvement in range resolution comes from the parametric modeling and deconvolution in our framework. We refer the reader to our publications for complete details and analysis. We have demonstrated 64-by-64 pixel spatial resolution, as this is the spatial resolution of our spatial light modulator. Spatially patterning with a digital micromirror device (DMD) will enable much higher spatial resolution. Our experiments use only 205 projection patterns, which correspond to just 5% of the number of pixels in the reconstructed depth map. This is a significant improvement over raster scanning in LIDAR, and it is obtained without the 2D sensor array used in TOF cameras.
Another interesting project I found on YouTube uses libfreenect and libusb.
There is also dSensingNI, which is described as:
This work presents an approach to overcome the disadvantages of existing interaction frameworks and technologies for touch detection and object interaction. The robust and easy to use framework dSensingNI (Depth Sensing Natural Interaction) is described, which supports multitouch and tangible interaction with arbitrary objects. It uses images from a depth-sensing camera and provides tracking of users' fingers or palms of hands and combines this with object interaction, such as grasping, grouping and stacking, which can be used for advanced interaction techniques.
So you have hit most of the ones out there, especially those that use the Kinect, but there are a few other options! Hope this helps!

Non-Speech Noise or Sound Recognition Software?

I'm working on some software for children, and looking to add the ability for the software to respond to a number of non-speech sounds. For instance, clapping, barking, whistling, fart noises, etc.
I've used CMU Sphinx and the Windows Speech API in the past; however, as far as I can tell, neither of these has any support for non-speech noises, and in fact I believe they actively filter them out.
In general I'm looking for "How do I get this functionality?", but I suspect it may help if I break it down into three questions that are my guesses for what to search for next:
1. Is there a way to use one of the main speech recognition engines to recognize non-word sounds by changing an acoustic model or pronunciation lexicon?
2. (or) Is there already an existing library to do non-word noise recognition?
3. (or) I have a bit of familiarity with Hidden Markov Models and the underlying tech of voice recognition from college, but no good estimate on how difficult it would be to create a very small noise/sound recognizer from scratch (suppose <20 noises to be recognized). If 1) and 2) fail, any estimation on how long it would take to roll my own?
Thanks
Yes, you can use speech recognition software like CMU Sphinx for recognition of non-speech sounds. For this, you need to create your own acoustic and language models and define a lexicon restricted to your task. But to train the corresponding acoustic model, you must have enough training data with annotated sounds of interest.
In short, the sequence of steps is the following:
First, prepare resources for training: lexicon, dictionary, etc. The process is described here: http://cmusphinx.sourceforge.net/wiki/tutorialam. But in your case, you need to redefine the phoneme set and the lexicon. Namely, you should model fillers as real words (so, no ++ around them) and you don't need to define the full phoneme set. There are many possibilities, but probably the simplest one is to have a single model for all speech phonemes. Thus, your lexicon will look like:
CLAP CLAP
BARK BARK
WHISTLE WHISTLE
FART FART
SPEECH SPEECH
Second, prepare training data with labels: something similar to VoxForge, but the text annotations must contain only labels from your lexicon. Of course, non-speech sounds must be labeled correctly as well. A good question here is where to get a large enough amount of such data, but I guess it should be possible.
Having that, you can train your model. The task is simpler compared to speech recognition; for instance, you don't need to use triphones, just monophones.
Assuming equal prior probability of any sound/speech, the simplest language model can be a loop-like grammar (http://cmusphinx.sourceforge.net/wiki/tutoriallm):
#JSGF V1.0;
/**
* JSGF Grammar for Hello World example
*/
grammar foo;
public <foo> = (CLAP | BARK | WHISTLE | FART | SPEECH)+ ;
This is the very basic approach to using an ASR toolkit for your task. It can be further improved by fine-tuning the HMM configurations, using statistical language models, and using fine-grained phoneme modeling (e.g. distinguishing vowels and consonants instead of having a single SPEECH model; this depends on the nature of your training data).
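For completeness, once the acoustic model, dictionary, and grammar above exist, decoding with the classic pocketsphinx Python bindings could look roughly like the sketch below. The file names are hypothetical and the exact API differs between pocketsphinx versions, so treat this as an outline rather than a working recipe.

from pocketsphinx.pocketsphinx import Decoder

# hypothetical paths: a model trained with SphinxTrain, the CLAP/BARK/... dictionary,
# and the loop grammar shown above
config = Decoder.default_config()
config.set_string('-hmm', 'sound_model')
config.set_string('-dict', 'sounds.dict')
config.set_string('-jsgf', 'sounds.jsgf')
decoder = Decoder(config)

# decode a 16 kHz, 16-bit mono raw recording in one shot
decoder.start_utt()
with open('recording.raw', 'rb') as f:
    decoder.process_raw(f.read(), False, True)  # no_search=False, full_utt=True
decoder.end_utt()

if decoder.hyp() is not None:
    print(decoder.hyp().hypstr)  # e.g. "CLAP CLAP SPEECH BARK"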
Outside the framework of speech recognition, you can build a simple static classifier that analyzes the input data frame by frame; convolutional neural networks that operate over spectrograms also perform quite well for this task.
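To make the frame-by-frame idea concrete (this sketch uses plain MFCC features and a random forest rather than the CNN-over-spectrograms variant), something along these lines could serve as a baseline; the file names, labels, and hyperparameters are placeholders.

import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def frame_features(path, sr=16000):
    # 13 MFCCs per analysis frame; transpose to one feature vector per frame
    y, sr = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T

# training: every frame of a clip inherits the clip's label
X, labels = [], []
for path, label in [('clap_01.wav', 'clap'), ('bark_01.wav', 'bark')]:
    feats = frame_features(path)
    X.append(feats)
    labels.extend([label] * len(feats))
clf = RandomForestClassifier(n_estimators=200).fit(np.vstack(X), labels)

# prediction: classify each frame of an unknown clip, then take a majority vote
frames = frame_features('unknown.wav')
votes = clf.predict(frames)
values, counts = np.unique(votes, return_counts=True)
print(values[np.argmax(counts)])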
I don't know of any existing libraries you can use; I suspect you may have to roll your own.
Would this paper be of interest? It has some technical detail; they seem to be able to recognise claps and differentiate them from whistles.
http://www.cs.bham.ac.uk/internal/courses/robotics/halloffame/2001/team14/sound.htm