Precision issue with CGAL's Delaunay triangulation - cgal

I am having an issue that is probably due to the mismatch between the precision of a double and the infinite precision given in CGAL, but I cannot seem to solve it, and do not see any way to set a tolerance.
I input a set of points (initially the locations are doubles).
When the points are aligned horizontally on the upper part (and it only happens there), I sometimes (not always) get too many triangles (with really small, almost-zero areas) being generated: see image
(*notice in the upper section how there is a line that seems to be thicker than the rest, because there are at least 3 triangles there).
This is what I am doing in my code; I tried to set the kernel to handle the imprecision of doubles:
#include <CGAL/Simple_cartesian.h>
#include <CGAL/Filtered_kernel.h>
#include <CGAL/Polygon_2.h>
#include <CGAL/Triangulation_vertex_base_with_info_2.h>
#include <CGAL/Triangulation_face_base_2.h>
#include <CGAL/Delaunay_triangulation_2.h>
#include <boost/iterator/zip_iterator.hpp>
#include <boost/tuple/tuple.hpp>
#include <vector>

typedef CGAL::Simple_cartesian<double> CK;
typedef CGAL::Filtered_kernel<CK> K;
//typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef K::FT FT;
typedef K::Point_2 Point;
typedef K::Segment_2 Segment;
typedef CGAL::Polygon_2<K> Polygon_2;
typedef CGAL::Triangulation_vertex_base_with_info_2<unsigned long, K> Vb2;
typedef CGAL::Triangulation_face_base_2<K> Fb; // this typedef was missing; Fb was undefined
typedef CGAL::Triangulation_data_structure_2<Vb2,Fb> Tds2;
typedef CGAL::Delaunay_triangulation_2<K,Tds2> Delaunay;
std::vector<std::vector<long> > Geometry::delaunay(std::vector<double> xs, std::vector<double> ys){
    std::vector<Point> points;
    std::vector<unsigned long> indices;
    points.resize(xs.size());
    indices.resize(xs.size());
    for(unsigned long i = 0; i < xs.size(); i++){
        indices[i] = i;
        points[i] = Point(xs[i], ys[i]);
    }
    std::vector<long> idAs;
    std::vector<long> idBs;
    Delaunay dt;
    dt.insert(boost::make_zip_iterator(boost::make_tuple(points.begin(), indices.begin())),
              boost::make_zip_iterator(boost::make_tuple(points.end(), indices.end())));
    for(Delaunay::Finite_edges_iterator it = dt.finite_edges_begin(); it != dt.finite_edges_end(); ++it)
    {
        Delaunay::Edge e = *it;
        long i1 = e.first->vertex((e.second + 1) % 3)->info();
        long i2 = e.first->vertex((e.second + 2) % 3)->info();
        idAs.push_back(i1);
        idBs.push_back(i2);
    }
    std::vector<std::vector<long> > result;
    result.resize(2);
    result[0] = idAs;
    result[1] = idBs;
    return result;
}
I am completely new to CGAL, and this code is something that I have been able to put together after a lot of looking things up on web pages over the last 2 days. So if there is something else that might be improved, please do not hesitate to mention it; the syntax of CGAL is not really straightforward.
*the code works perfectly for random points, and 70% of the time even for points that are aligned, but the other 30% worries me.
THE QUESTION IS: how can I set a tolerance so that CGAL does not generate triangles on top of points that are almost aligned? Or is there a better kernel for this? (As you can see, I also tried the Exact_predicates_inexact_constructions_kernel, but it is even worse.)

Related

PS::Simplification with 3d points

I'm trying to simplify 3d polylines using PS::simplify. It is a terrain, so the elevation does not matter for the simplification, but I need to carry the z values through because I need them on the simplified polylines. The polylines can be open or closed (polygons).
The problem occurs when I try to call PS::simplify with 3d points. I checked, and it works fine with 2d points. The funny thing is that it accepts 3d points for the begin and end parameters of the polyline, but does not accept them for the back_inserter parameter.
Is there any version of simplify that works completely with 3d points, or am I missing something?
on the code:
PS::simplify(P1.begin(), P1.end(), CostSquare(), Stop(0.5), std::back_inserter(Result),Closed);
Template and parameter definitions:
namespace PS = CGAL::Polyline_simplification_2;
typedef CGAL::Exact_predicates_exact_constructions_kernel Epic;
typedef CGAL::Projection_traits_xy_3<Epic> K;
typedef CGAL::Polygon_2<K> Polygon_2;
typedef K::Point_2 Point_2; // with Projection_traits_xy_3 this is really a 3d point
Polygon_2 P1;
std::deque<Point_2> Result;
Thank you
Carlos A. Rabelo

Using data type like uint64_t in cgal's exact kernel

I am beginning with CGAL. What I would like to do is create a point whose coordinates are numbers ~ 2^51.
typedef CGAL::Exact_predicates_exact_constructions_kernel K;
typedef K::Point_2 P;
uint_64 x,y;
//init them somehow
P sp0(x,y);
Then I get a long template error. Could someone help?
I guess you realize that changing the kernel may have other effects on your program.
Concerning your original question, if your integer values are smaller than 2^51, then they fit exactly in doubles (which have a 53-bit mantissa), so one simple option is to cast them to double, as in:
P sp0((double)x,(double)y);
Otherwise, the Exact_predicates_exact_constructions_kernel should have its main number type be able to read your uint64 values (maybe cast them to unsigned long long if that's OK on your platform):
typedef K::FT FT;
P sp0((FT)x,(FT)y);
CGAL number types are only documented to interoperate with int and double. I recently added some code so that we can construct more number types from long (required for Eigen), and your code will work in the next version of CGAL (except that you typo-ed uint64_t) on platforms where uint64_t is unsigned int or unsigned long (not Windows). For long long support, since many of our number types are based on other libraries (GMP) that do not support long long themselves yet, it may have to wait a bit.
OK, I think I found a solution. The problem was that I used an exact kernel that supports only double; switching to an inexact kernel solved the problem. It was also possible to just use double. (One of the requirements was to use a data type that supports integers up to 2^48.)

How to bitwise-and CFBitVector

I have two instances of CFMutableBitVector, like so:
CFBitVectorRef ref1, ref2;
How can I do bit-wise operations on these guys? For right now I only care about and, but obviously xor, or, etc. would be useful to know.
Obviously I can iterate through the bits in the vector, but that seems silly when I'm working at the bit level. I feel like there are just some Core Foundation functions that I'm missing, but I can't find them.
Thanks,
Kurt
Well a
CFBitVectorRef
is a
typedef const struct __CFBitVector *CFBitVectorRef;
which is a
struct __CFBitVector {
    CFRuntimeBase _base;
    CFIndex _count;               /* number of bits */
    CFIndex _capacity;            /* maximum number of bits */
    __CFBitVectorBucket *_buckets;
};
Where
/* The bucket type must be unsigned, at least one byte in size, and
a power of 2 in number of bits; bits are numbered from 0 from left
to right (bit 0 is the most significant) */
typedef uint8_t __CFBitVectorBucket;
So you can dive in and do byte-wise operations, which could speed things up. Of course, it being non-mutable might hinder things a bit :D
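If you'd rather not poke at that private layout, here is a minimal sketch of a byte-wise AND that uses only the public CFBitVector API (it assumes both vectors have the same length; BitVectorAnd is just an illustrative name):

#include <CoreFoundation/CoreFoundation.h>
#include <vector>

CFBitVectorRef BitVectorAnd(CFBitVectorRef a, CFBitVectorRef b) {
    const CFIndex count = CFBitVectorGetCount(a); // assumes CFBitVectorGetCount(b) is equal
    const CFIndex numBytes = (count + 7) / 8;
    std::vector<UInt8> bytesA(numBytes), bytesB(numBytes);
    CFBitVectorGetBits(a, CFRangeMake(0, count), bytesA.data());
    CFBitVectorGetBits(b, CFRangeMake(0, count), bytesB.data());
    for (CFIndex i = 0; i < numBytes; ++i)
        bytesA[i] &= bytesB[i]; // AND eight bits at a time
    return CFBitVectorCreate(kCFAllocatorDefault, bytesA.data(), count);
}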

How to 'checksum' an array of noisy floating point numbers?

What is a quick and easy way to 'checksum' an array of floating point numbers, while allowing for a specified small amount of inaccuracy?
e.g. I have two algorithms which should (in theory, with infinite precision) output the same array. But they work differently, and so floating point errors will accumulate differently, though the array lengths should be exactly the same. I'd like a quick and easy way to test if the arrays seem to be the same. I could of course compare the numbers pairwise, and report the maximum error; but one algorithm is in C++ and the other is in Mathematica and I don't want the bother of writing out the numbers to a file or pasting them from one system to another. That's why I want a simple checksum.
I could simply add up all the numbers in the array. If the array length is N, and I can tolerate an error of 0.0001 in each number, then I would check if abs(sum1-sum2)<0.0001*N. But this simplistic 'checksum' is not robust, e.g. to an error of +10 in one entry and -10 in another. (And anyway, probability theory says that the error probably grows like sqrt(N), not like N.) Of course, any checksum is a low-dimensional summary of a chunk of data so it will miss some errors, if not most... but simple checksums are nonetheless useful for finding non-malicious bug-type errors.
Or I could create a two-dimensional checksum, [sum(x[n]), sum(abs(x[n]))]. But is that the best I can do, i.e. is there a different function I might use that would be "more orthogonal" to sum(x[n])? And if I used some arbitrary functions, e.g. [sum(f1(x[n])), sum(f2(x[n]))], then how should my 'raw error tolerance' translate into 'checksum error tolerance'?
I'm programming in C++, but I'm happy to see answers in any language.
I have a feeling that what you want may be possible via something like Gray codes. If you could translate your values into Gray codes and use some kind of checksum that was able to correct n bits, you could detect whether or not the two arrays were the same except for n-1 bits of error, right? (Each bit of error means a number is "off by one", where the mapping would be such that this was a variation in the least significant digit.)
But the exact details are beyond me, particularly for floating point values.
I don't know if it helps, but what Gray codes solve is the problem of pathological rounding. Rounding sounds like it will solve the problem: a naive solution might round and then checksum. But simple rounding always has pathological cases; for example, if we use floor, then 0.9999999 and 1 are distinct. A Gray code approach seems to address that, since neighbouring values are always a single bit away, so a bit-based checksum will accurately reflect "distance".
[update:] More exactly, what you want is a checksum that gives an estimate of the Hamming distance between your Gray-encoded sequences (and the Gray-encoding part is easy if you just care about 0.0001, since you can multiply everything by 10000 and use integers).
And it seems like such checksums do exist: "Any error-correcting code can be used for error detection. A code with minimum Hamming distance d can detect up to d − 1 errors in a code word. Using minimum-distance-based error-correcting codes for error detection can be suitable if a strict limit on the minimum number of errors to be detected is desired."
So, just in case it's not clear:
divide by the minimum error (i.e. multiply by 10000 for a tolerance of 0.0001) to get integers
convert to the Gray code equivalent
use an error-detecting code with a minimum Hamming distance larger than the error you can tolerate.
But I am still not sure that's right. You still get the pathological rounding in the conversion from float to integer, so it seems like you need a minimum Hamming distance that is 1 + len(data) (worst case, with a rounding error on each value). Is that feasible? Probably not for large arrays. (A tiny sketch of the first two steps appears below.)
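In case it's useful, here is a minimal sketch of steps 1 and 2 (it assumes non-negative values; to_gray is just an illustrative name):

#include <cmath>
#include <cstdint>

uint32_t to_gray(double v, double tol) {
    // step 1: quantize by the tolerance (assumes v >= 0; offset first otherwise)
    uint32_t q = (uint32_t)std::lround(v / tol);
    // step 2: binary -> Gray code, so neighbouring values differ in exactly one bit
    return q ^ (q >> 1);
}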
Maybe ask again with better tags/description now that a general direction is possible? Or just add tags now? We need someone who does this for a living. [I added a couple of tags]
I've spent a while looking for a deterministic answer, and been unable to find one. If there is a good answer, it's likely to require heavy-duty mathematical skills (functional analysis).
I'm pretty sure there is no solution based on "discretize in some cunning way, then apply a discrete checksum", e.g. "discretize into strings of 0/1/?, where ? means wildcard". Any discretization will have the property that two floating-point numbers very close to each other can end up with different discrete codes, and then the discrete checksum won't tell us what we want to know.
However, a very simple randomized scheme should work fine. Generate a pseudorandom string S from the alphabet {+1,-1}, and compute csx=sum(X_i*S_i) and csy=sum(Y_i*S_i), where X and Y are my original arrays of floating point numbers. If we model the errors as independent Normal random variables with mean 0, then it's easy to compute the distribution of csx-csy. We could do this for several strings S, and then do a hypothesis test that the mean error is 0. The number of strings S needed for the test is fixed; it doesn't grow linearly with the size of the arrays, so it satisfies my need for a "low-dimensional summary". This method also gives an estimate of the standard deviation of the error, which may be handy.
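For what it's worth, a minimal sketch of this scheme (the xorshift64 generator and the seed parameter are just one way to make S reproducible on both systems; the hypothesis test itself is left to the caller):

#include <cstdint>
#include <vector>

double signed_checksum(const std::vector<double>& x, uint64_t seed) {
    uint64_t state = seed | 1; // xorshift64 state must be nonzero
    double sum = 0.0;
    for (double v : x) {
        state ^= state << 13;  // xorshift64: a tiny deterministic PRNG,
        state ^= state >> 7;   // guaranteed to produce the same S on
        state ^= state << 17;  // any platform given the same seed
        sum += (state & 1 ? 1.0 : -1.0) * v; // S_i in {+1,-1}
    }
    return sum;
}

Run it on both arrays with the same seed (several seeds in practice) and test whether the differences are consistent with zero-mean rounding noise.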
Try this:
#include <complex>
#include <cmath>
#include <iostream>

// PARAMETERS
const size_t no_freqs = 3;
const double freqs[no_freqs] = {0.05, 0.16, 0.39}; // (for example)

int main() {
    std::complex<double> spectral_amplitude[no_freqs];
    for (size_t i = 0; i < no_freqs; ++i) spectral_amplitude[i] = 0.0;

    size_t n_data = 0;
    {
        std::complex<double> datum;
        while (std::cin >> datum) {
            for (size_t i = 0; i < no_freqs; ++i) {
                spectral_amplitude[i] += datum * std::exp(
                    std::complex<double>(0.0, 1.0) * freqs[i] * double(n_data)
                );
            }
            ++n_data;
        }
    }

    std::cout << "Fuzzy checksum:\n";
    for (size_t i = 0; i < no_freqs; ++i) {
        std::cout << real(spectral_amplitude[i]) << "\n";
        std::cout << imag(spectral_amplitude[i]) << "\n";
    }
    std::cout << "\n";
    return 0;
}
It returns just a few arbitrary points of a Fourier transform of the entire data set. These make a fuzzy checksum, so to speak.
How about computing a standard integer checksum after zeroing the least significant digits of the data, the ones you don't care about?
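A rough sketch of that idea (with the caveat from the Gray-code answer above that values sitting near a quantization boundary can still flip; quantized_checksum and the multiplier constant are only illustrative):

#include <cmath>
#include <cstdint>
#include <vector>

uint64_t quantized_checksum(const std::vector<double>& x, double tol) {
    uint64_t sum = 0;
    for (double v : x) {
        int64_t q = std::llround(v / tol);   // zero out the digits below tol
        sum = sum * 1000003u + (uint64_t)q;  // feed the integers to any ordinary checksum
    }
    return sum;
}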

How to do numerical integration with quantum harmonic oscillator wavefunction?

How should I do numerical integration (what numerical method, and what tricks to use) for one-dimensional integration over an infinite range, where one or more functions in the integrand are 1d quantum harmonic oscillator wave functions? Among other things, I want to calculate matrix elements of some function in the harmonic oscillator basis:
\phi_n(x) = N_n H_n(x) \exp(-x^2/2)
where H_n(x) is a Hermite polynomial, and
V_{m,n} = \int_{-\infty}^{\infty} \phi_m(x) V(x) \phi_n(x) dx
I also need the case where the harmonic oscillator wavefunctions have different widths.
The problem is that the wavefunctions \phi_n(x) have oscillatory behaviour, which is a problem for large n; algorithms like the adaptive Gauss-Kronrod quadrature from GSL (GNU Scientific Library) take a long time to compute and have large errors.
An incomplete answer, since I'm a little short on time at the moment; if others can't complete the picture, I can supply more details later.
Apply orthogonality of the wavefunctions whenever and wherever possible. This should significantly cut down the amount of computation.
Do analytically whatever you can. Lift constants, split integrals by parts, whatever. Isolate the region of interest; most wavefunctions are band-limited, and reducing the area of interest will do a lot to save work.
For the quadrature itself, you probably want to split the wavefunctions into three pieces and integrate each separately: the oscillatory bit in the center plus the exponentially-decaying tails on either side. If the wavefunction is odd, you get lucky and the tails will cancel each other, meaning you only have to worry about the center. For even wavefunctions, you only have to integrate one and double it (hooray for symmetry!).

Otherwise, integrate the tails using a high-order Gauss-Laguerre quadrature rule. You might have to calculate the rules yourself; I don't know if tables list good Gauss-Laguerre rules, as they're not used too often. You probably also want to check the error behavior as the number of nodes in the rule goes up; it's been a long time since I used Gauss-Laguerre rules and I don't remember if they exhibit Runge's phenomenon.

Integrate the center part using whatever method you like; Gauss-Kronrod is a solid choice, of course, but there's also Fejer quadrature (which sometimes scales better to high numbers of nodes, which might work nicer on an oscillatory integrand) and even the trapezoidal rule (which exhibits stunning accuracy with certain oscillatory functions). Pick one and try it out; if results are poor, give another method a shot.
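Whatever rule you pick for the center, for large n the wavefunctions themselves must be evaluated stably. A minimal sketch, assuming a unit-frequency oscillator (hbar = m = omega = 1): it uses the three-term recurrence for the normalized wavefunctions (never forming H_n or n! explicitly, which would overflow) plus a plain trapezoidal rule on a truncated interval. V(x) = x^2 is a hypothetical example potential, chosen because the exact diagonal element <n|x^2|n> = n + 1/2 is known for checking.

#include <cmath>
#include <cstdio>

// psi_0 = pi^{-1/4} exp(-x^2/2);  psi_k = x sqrt(2/k) psi_{k-1} - sqrt((k-1)/k) psi_{k-2}
double psi(int n, double x) {
    const double pi = std::acos(-1.0);
    double p0 = std::pow(pi, -0.25) * std::exp(-0.5 * x * x);
    if (n == 0) return p0;
    double p1 = std::sqrt(2.0) * x * p0;
    for (int k = 2; k <= n; ++k) {
        double pk = x * std::sqrt(2.0 / k) * p1 - std::sqrt((k - 1.0) / k) * p0;
        p0 = p1;
        p1 = pk;
    }
    return p1;
}

int main() {
    const int m = 40, n = 40;                        // matrix element indices
    const double L = std::sqrt(2.0 * n + 1.0) + 8.0; // well past the classical turning point
    const int N = 20000;                             // trapezoid panels
    const double h = 2.0 * L / N;
    double sum = 0.0;
    for (int i = 0; i <= N; ++i) {
        double x = -L + i * h;
        double f = psi(m, x) * (x * x) * psi(n, x);  // integrand psi_m V psi_n
        sum += (i == 0 || i == N) ? 0.5 * f : f;
    }
    std::printf("V_%d,%d ~ %.10f (exact: %.1f)\n", m, n, sum * h, n + 0.5);
    return 0;
}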
Hardest question ever on SO? Hardly :)
I'd recommend a few other things:
Try transforming the function onto a finite domain to make the integration more manageable.
Use symmetry where possible: break it up into the sum of two integrals, one from negative infinity to zero and one from zero to infinity, and see if the function is symmetric or anti-symmetric. It could make your computing easier.
Look into Gauss-Laguerre quadrature and see if it can help you.
The WKB approximation?
I am not going to explain or qualify any of this right now. This code is written as-is and is probably incorrect. I am not even sure if it is the code I was looking for; I just remember that years ago I did this problem, and upon searching my archives I found this. You will need to plot the output yourself; some instruction is provided. I will say that the integration over an infinite range is a problem that I addressed, and upon execution the code states the round-off error at 'infinity' (which numerically just means large).
// compile: g++ base.cc -lm
#include <iostream>
#include <cstdlib>
#include <fstream>
#include <math.h>
using namespace std;

double propagator(double E, int parity);
double eigen(double E, int parity);

int main ()
{
    double dE, E, E_0;
    int n, parity, order;

    ofstream datas ("test.dat"); // (never actually written to in this version)
    E_0 = 1.602189e-19; // eV -> joules conversion
    dE = E_0 * .001;
    // w^2 = k/m, V = 1/2 k x^2; here V = E_0 (x/xmax)^2
    // w = sqrt( (2*E_0)/(m*xmax) );
    // E = (0+.5)*hbar*w;
    cout << "Enter which energy level you're looking for, as a (0,1,2...) INTEGER: ";
    cin >> order;
    E = 0;
    for (n = 0; n <= order; n++)
    {
        parity = 0;
        if ((n % 2) == 0) { parity = 1; } // even n -> even parity
        cout << "Energy " << n << " has these parameters: ";
        E = eigen(E, parity);
        if (n == order)
        {
            propagator(E, parity);
            cout << " The positive values of the wave function were written to sho.dat \n";
            cout << " To plot, the data should be reflected about the y-axis, \n";
            cout << " evenly for even energy levels and oddly for odd energy levels\n";
        }
        E = E + dE;
    }
    return 0;
}
double propagator(double E, int parity)
{
    ofstream datas ("sho.dat");
    double hbar = 1.054e-34;
    double m    = 9.109534e-31;
    double E_0  = 1.602189e-19;
    double dx   = 1.0e-10;
    double xmax = 100.0e-10 + dx;
    double x = dx;
    double psi_0 = 0.0;
    double psi_1 = 1.0;
    double psi_2 = 0.0;

    if (parity == 1) // even parity: psi'(0) = 0, so psi starts at 1
    {
        psi_0 = 1.0;
        psi_1 = m * (1.0/(hbar*hbar)) * dx*dx * (0 - E) + 1;
    }
    do
    {
        datas << x << "\t" << psi_0 << "\n";
        psi_2 = (2.0*m*(dx/hbar)*(dx/hbar)*(E_0*(x/xmax)*(x/xmax) - E) + 2.0)*psi_1 - psi_0;
        psi_0 = psi_1;
        psi_1 = psi_2;
        x = x + dx;
    } while (x <= xmax);
    // 666 is a dummy value, returned to check that the function has run
    return 666;
}
double eigen(double E, int parity)
{
    double hbar = 1.054e-34;
    double m    = 9.109534e-31;
    double E_0  = 1.602189e-19;
    double dx   = 1.0e-10;
    double xmax = 100.0e-10 + dx;
    double dE   = E_0 * .001;
    double last = 1;
    double x, psi_0, psi_1, psi_2;

    do
    {
        psi_0 = 0.0;
        psi_1 = 1.0;
        if (parity == 1)
        {
            // bug fix: the original redeclared psi_0/psi_1 with 'double' here,
            // creating shadowing locals, so the parity branch had no effect
            psi_0 = 1.0;
            psi_1 = m * (1.0/(hbar*hbar)) * dx*dx * (0 - E) + 1;
        }
        x = dx;
        do
        {
            psi_2 = (2.0*m*(dx/hbar)*(dx/hbar)*(E_0*(x/xmax)*(x/xmax) - E) + 2.0)*psi_1 - psi_0;
            psi_0 = psi_1;
            psi_1 = psi_2;
            x = x + dx;
        } while (x <= xmax);
        if (sqrt(psi_2*psi_2) <= 1.0e-3)
        {
            cout << E << " is an eigen energy and " << psi_2 << " is psi at 'infinity' \n";
            return E;
        }
        else if ((last > 0.0 && psi_2 < 0.0) || (psi_2 > 0.0 && last < 0.0))
        {
            // psi at 'infinity' changed sign: step back and refine the energy step
            E = E - dE;
            dE = dE / 10.0;
        }
        last = psi_2;
        E = E + dE;
    } while (E <= E_0);
    return E; // bug fix: the original fell off the end without returning a value
}
If this code seems correct, wrong, or interesting, or if you have specific questions, ask and I will answer them.
I am a student majoring in physics, and I also encountered this problem. These days I have kept thinking about this question and have arrived at my own answer. I think it may help you solve it.
1. In GSL, there are functions that can help you integrate an oscillatory function: qawo and qawf. You can set a value a, and the integration can be separated into two parts, [0,a] and [a, positive infinity). In the first interval you can use any GSL integration function you want, and in the second interval you can use qawo or qawf.
2. Or you can integrate the function up to an upper limit b, i.e. over [0,b], and compute it with the Gauss-Legendre method, which is also provided in GSL. Although there may be some difference between the true value and the computed value, if you set b properly the difference can be neglected, as long as it is less than the accuracy you want. The GSL function is called only once and its results can be reused many times, because what you get back is the points and their corresponding weights, and the integral is just the sum of f(xi)*wi (for more details, look up Gauss-Legendre quadrature on Wikipedia; see also the sketch after this list). Multiplication and addition are much faster than adaptive integration.
3. There is also a function that can calculate an infinite-range integral, qagi; you can find it in the GSL user's guide. But it is called every time you need the integral, which may be time-consuming; I'm not sure how long it would take in your program.
I suggest choice No. 2.
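A minimal sketch of option 2 (the integrand here is a placeholder for phi_m(x) V(x) phi_n(x), and the truncation limit b is an assumption you must choose, e.g. well past the classical turning point; link with -lgsl -lgslcblas):

#include <cstdio>
#include <gsl/gsl_integration.h>

double integrand(double x) { return x * x; } // placeholder for phi_m * V * phi_n

int main() {
    const size_t order = 64;        // number of Gauss-Legendre nodes
    const double a = 0.0, b = 10.0; // truncation limit: chosen by the caller
    gsl_integration_glfixed_table *t = gsl_integration_glfixed_table_alloc(order);
    double sum = 0.0;
    for (size_t i = 0; i < order; ++i) {
        double xi, wi;
        gsl_integration_glfixed_point(a, b, i, &xi, &wi, t); // node and weight, computed once
        sum += wi * integrand(xi);  // the integral is just the sum of w_i * f(x_i)
    }
    std::printf("integral on [%g,%g] ~ %.12f\n", a, b, sum);
    gsl_integration_glfixed_table_free(t);
    return 0;
}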
If you are going to work with Harmonic oscillator functions less than n = 100 you might want to try:
http://www.mymathlib.com/quadrature/gauss_hermite.html
The program computes an integral via Gauss-Hermite quadrature with 100 zeroes and weights (the zeroes of H_100). Once you go over Hermite_100, the integrals are not as accurate.
Using this integration method, I wrote a program calculating exactly what you want to calculate, and it works fairly well. Also, there might be a way to go beyond n = 100 by using the asymptotic form of the Hermite-polynomial zeroes, but I haven't looked into it.
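For what it's worth, the step that makes Gauss-Hermite such a natural fit here (left implicit above): the product of two oscillator wavefunctions already contains the Gauss-Hermite weight exp(-x^2), since \phi_m(x) \phi_n(x) V(x) = N_m N_n H_m(x) H_n(x) V(x) \exp(-x^2), so with nodes x_i (the zeroes of H_100) and weights w_i the matrix element reduces to V_{m,n} \approx \sum_i w_i N_m N_n H_m(x_i) H_n(x_i) V(x_i).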