Remeshing a vtk file with mmg (mmgs) using a size map

I’m using mmgtools' mmgs to remesh some polydata (vtp files). I need to control the cell size according to a metric, so I provide a size map. However, I can’t get mmgs to take this size map into account. For now, I'm just trying with a constant size.
If I provide a constant size at the command line (mmgs_O3 test.vtp -hsiz .001), it works as expected.
However, if I save this same size in the point data, in a field suffixed with :metric (as explained in the prerequisite section):
> mesh.point_data["size:metric"]
pyvista_ndarray([0.001, 0.001, 0.001, ..., 0.001, 0.001, 0.001])
Then mmgs (mmgs_O3 test.vtp) just remeshes while ignoring the size map.
I note, however, that mmgs does read this field: if I create a second field suffixed with :metric, it fails with the error ## Error:MMG5_count_vtkEntities: 2 metric fields detected (labelled with a string containing the 'metric' keyword)..
So I must be missing something, but I can’t find what. Does anyone have experience with this tool? What am I missing for mmgs to take this size into account?
Thank you in advance!

To answer myself: it's because the mesh contained other data arrays. In that case mmgs doesn't fail, but remeshes while ignoring the size metric that was passed.
For it to work, the mesh's cells and points must be stripped of any other data and contain only the :metric field.
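For reference, a minimal pyvista sketch of that stripping step (file names and the constant size are illustrative):

import numpy as np
import pyvista as pv

mesh = pv.read("test.vtp")

# Drop all other point/cell/field arrays; as noted above, mmgs silently
# ignores the metric when other data arrays are present.
mesh.clear_data()

# Re-attach only the size map: one value per point, suffixed with ":metric".
mesh.point_data["size:metric"] = np.full(mesh.n_points, 0.001)

mesh.save("test_clean.vtp")

Then remesh with mmgs_O3 test_clean.vtp as before.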

Related

Use of base anchor size in Single Shot Multi-box detector

I was digging into the Tensorflow Object Detection API in order to check out the anchor box generation for the SSD architecture. In this py file, where the anchor boxes are generated on the fly, I am unable to understand the usage of base_anchor_size. In the corresponding paper there is no mention of such a thing. Two questions in short:
What is the use of the base_anchor_size parameter? Is it important?
How does this parameter affect training when the original input image is square in shape, and when it isn't?
In the SSD architecture, the anchor scales are fixed ahead of time, e.g. linearly spaced values across the range 0.2-0.9. These values are relative to the image size. For example, given a 320x320 image, the smallest anchor (with 1:1 ratio) will be 64x64, and the largest anchor will be 288x288. However, if you wish to feed your model a larger image, e.g. 640x640, but without changing the anchor sizes (for example because these are images of far-away objects, so there's no need for large anchors; keeping the anchor sizes untouched also means you don't have to fine-tune the model on the new resolution), then you can simply set base_anchor_size=0.5, meaning the anchor scales become 0.5*[0.2-0.9] relative to the input image size.
The default value for this parameter is [1.0, 1.0], meaning it has no effect.
The entries correspond to [height, width] relative to the maximal square you can fit in the image, i.e. [min(image_height, image_width), min(image_height, image_width)]. So if, for example, your input image is VGA, i.e. 640x480, the base_anchor_size is taken relative to [480, 480].
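As a quick worked example of that arithmetic (plain Python, illustrative numbers only):

# Anchor side for a 1:1 anchor: scale * base_anchor_size * min_dim
min_dim = min(640, 480)       # maximal square in a VGA (640x480) input
base_anchor_size = 1.0        # the default [1.0, 1.0], as [height, width]

for scale in (0.2, 0.9):      # smallest and largest SSD scales
    print(scale * base_anchor_size * min_dim)   # 96.0 and 432.0

# With a 640x640 input, base_anchor_size = 0.5 reproduces the absolute
# anchor sizes of a 320x320 setup: 0.5*0.2*640 = 64, 0.5*0.9*640 = 288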

How can I plot a histogram of discrete distribution on tensorboard?

I'm using tensorboard (tensorflow 1.1.0) to show the results of my CNN classifier.
I added some output vectors as tf.summary.histogram in order to show the counts of outputs in each bin, but tensorboard seems to automatically interpolate and shows them as a (somehow) smoothed distribution
(and therefore I cannot find the exact counts for the bins).
Could someone tell me how I can avoid the interpolation and show usual histograms using bars?
I'm not sure there is an easy way to do it.
I'm quite unsure about the text below, so correct me if I'm wrong.
From this file https://github.com/tensorflow/tensorboard/blob/master/tensorboard/plugins/histogram/vz_histogram_timeseries/index.html it seems that the histogram arrives in tensorboard as double values.
The summary op uses either the histogram from https://github.com/tensorflow/tensorflow/blob/r1.2/tensorflow/python/ops/histogram_ops.py (1) or from https://github.com/tensorflow/tensorflow/blob/r1.2/tensorflow/core/lib/histogram/histogram.cc (2).
I suppose it uses the 2nd, because here https://github.com/tensorflow/tensorflow/blob/r1.2/tensorflow/python/summary/summary.py#L189 it calls a function from a generated file. In my installed package, the code in this generated file makes another function call:
result = _op_def_lib.apply_op("HistogramSummary", tag=tag, values=values,
                              name=name)
I grepped the whole repo and there seems to be no other Python code that defines something called "HistogramSummary", so it seems it is really defined here https://github.com/tensorflow/tensorflow/blob/r1.2/tensorflow/core/kernels/summary_op.cc, and this code uses the code mentioned above (2).
So it seems to me that the histogram being used is buried deep inside the framework, and I'm not sure it would be easy to rewrite.
On this page there is an email for support: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/python/summary . I suppose it's best to contact that person or open an issue on github.
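If you just need the exact counts visible in TensorBoard today, one possible workaround (my own sketch, not an official API; the helper name bar_histogram_summary is made up) is to compute the bins yourself with numpy, render a bar chart with matplotlib, and log it as an image summary:

import io

import matplotlib
matplotlib.use("Agg")          # render off-screen, no display needed
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf       # TF 1.x

def bar_histogram_summary(tag, values, bins=30):
    """Return a tf.Summary containing an exact bar histogram of values."""
    counts, edges = np.histogram(values, bins=bins)
    fig, ax = plt.subplots()
    ax.bar(edges[:-1], counts, width=np.diff(edges), align="edge")
    ax.set_ylabel("count")
    buf = io.BytesIO()
    fig.savefig(buf, format="png")
    width, height = fig.canvas.get_width_height()
    plt.close(fig)
    image = tf.Summary.Image(height=height, width=width, colorspace=4,
                             encoded_image_string=buf.getvalue())
    return tf.Summary(value=[tf.Summary.Value(tag=tag, image=image)])

# usage:
# writer = tf.summary.FileWriter(logdir)
# writer.add_summary(bar_histogram_summary("outputs", output_vector), step)

The bars then appear under the Images tab with the true per-bin counts, bypassing TensorBoard's smoothed histogram view entirely.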

Errors on mean and sigma differ when fitting with or without normalization. Why?

When I fit a histogram with a Gaussian, the errors on the mean and sigma seem to be fine. You can look at it here.
But when I first normalize the histogram and then fit it with a Gaussian, the parameter values are exactly the same as in the previous case, but the errors on the mean and sigma are almost equal to the parameter values themselves, or greater.
One reason for this may be that the error is taken as 1/sqrt{n}: after normalizing, n decreased and hence the error increased.
Please let me know what is happening and how I can fix it.
You probably want to call
hist->Sumw2()
before rescaling the histogram. Otherwise the uncertainties on all bin contents are just the square roots of the bin contents (which is a huge relative error for bin contents smaller than 1, as is the case after rescaling). Sumw2() makes the histogram store the sum of squared weights in each bin, and not only the bin contents (i.e. the sum of the weights in each bin).
See also the documentation of Sumw2() for further details (and also the explanation of weights at the top of the TH1 documentation page).
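A minimal PyROOT sketch of the right call order (the histogram contents here are just a random Gaussian sample):

import ROOT

h = ROOT.TH1F("h", "gaussian sample", 50, -5.0, 5.0)
for _ in range(10000):
    h.Fill(ROOT.gRandom.Gaus(0.0, 1.0))

h.Sumw2()                    # store per-bin sums of squared weights first
h.Scale(1.0 / h.Integral())  # then normalize; uncertainties rescale correctly
h.Fit("gaus")                # fitted errors on mean/sigma stay sensible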

How to make gnuplot generate figures with smaller/fixed size (Bytes)?

I would like to avoid using the every command, since it simply discards data that might be very important (like a spike, for instance). I would also like to avoid downsizing the file afterwards, since this might deteriorate the text in the figure.
Is there a way/option to force gnuplot to generate files (eps) with a maximum size?
You'd need some adaptive compression on your data. Without actually knowing the data, that's rather tough.
The stats command can tell you how many datapoints you actually have, and you can then adjust the every statement to a sensible value. Otherwise, you can use smooth to reduce the data to a predefined (set samples) number of datapoints, or (if you have a sensible model for your data) you can do a fit and simply plot the fitted model function instead of your dataset.
If you specifically want outliers to show in the plot, this might be helpful:
f(x) = a*x + b    # example model; a, b and 'data.dat' are placeholders
fit f(x) 'data.dat' via a, b
plot f(x), 'data.dat' using 1:((abs($2 - f($1)) > threshold) ? $2 : NaN)
This plots a fit to your dataset, together with all actual datapoints that deviate from the fit by more than threshold.

How to make the sizes of two images match in Idrisi?

So, I have the usual error message where the number of rows and columns of my images don't match (in cross ref).
I generalised one of my images and then used expand to make the resolutions match again.
However, in the process I lost a few columns (which doesn't bother me), but now I don't know what to do to make both my images the same size again.
Can someone help me?
Thank you very much
L.
To make the columns and rows match:
1. Convert the layer from raster to vector (raster/vector conversion tool).
2. Convert the vector layer back to raster (resample it to the layer that has the required columns and rows).