I remember that there is a function to calculate the differential/derivative of a line plot in some DM version; the menu path looked something like Process > Non-Linear Filter > Derivative. But I do not remember which version has this function. Any suggestions?
The UI functionality for spectral filtering is found in the Spectrum menu:
Since GMS 3 this functionality is part of the free software; before that it was part of the Spectroscopy license (any of them).
The menu only works on line profiles that are spectra; the Convert Data To menu can be used to convert a profile when required.
As with all menu commands, you can access them from a script using the ChooseMenuItem command, as in:
GetFrontImage().SelectImage() // Make sure the image window is selected, or the menu is disabled if the script-window is frontmost!
ChooseMenuItem("Spectrum","Numerical Filters","First derivative")
The mathematical functions behind this menu are also available as (unofficial, undocumented) script commands. They do not use the preference settings but take the parameters directly, in uncalibrated 'channel' units.
So you could also use:
image src := GetFrontImage()
number chWidth = 5 // The values matching the settings
number chDelta = 1 // The values matching the settings
number chShift = trunc((chWidth + chDelta)/2 + 0.5)
number norm = chWidth + chDelta
image fDev := src.FDeriv_Spectrum( chWidth, chShift, norm )
fDev.ShowImage()
Just be warned that there is no guarantee that the command FDeriv_Spectrum will remain available in future versions of GMS (it is not an officially supported command).
Finally, the math of a first derivative is really simple, so you could just recreate the function with pure DM-script commands such as offset and the arithmetic operators.
A simple, non-smoothed 1-channel derivative would simply be:
image src := GetFrontImage()
image fdev := src - src.offset(-1,0)
fdev.ShowImage()
I'm relatively new to Yosys. I've been tinkering with it using some proprietary standard cell libraries and am trying to extract some QoR/PPA metrics, similar to those you can get from DC.
Minimum slack (including worst-case negative slack/WNS)
Max logic depth [0]
Cell area [1]
For [0], I know there's the ltp command, but it only reports topological paths per module. I tried flattening the design using flatten, but there still seems to be a hierarchy in the netlist. Where should I insert the flatten command to actually flatten the netlist?
For [1], I know you can get the number of cells in the netlist using the stat command, but this doesn't tell me the equivalent of DC's CellArea metric (since each cell has a different area). I could just build a library of cell areas for each cell type based on the cell library datasheet, but that's rather laborious.
Also, is it possible to specify a target clock rate for synthesis? I think for abc there was a -D flag for delay, but this sounds to me more like input delay rather than clock period.
Thanks!
-D passed to abc is indeed the clock period, not an input delay. When specified, this should also cause abc to print slack information.
Have you tried stat -liberty file.lib to use a liberty file for cell areas? If this isn't calculating areas as expected (I didn't quite understand your issue), then please create a feature request on GitHub describing the difference.
flatten should be run after hierarchy -top top_module_name to do hierarchical elaboration and set the top module.
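Putting those pieces together, here is a minimal sketch of one possible flow, driven from Python only so it is easy to rerun; mylib.lib, top.v and top_module_name are placeholders, and the exact pass order depends on your design and library:
import subprocess
import tempfile

# Placeholder file and module names; adjust to your design.
yosys_script = """
read_liberty -lib mylib.lib
read_verilog top.v
hierarchy -top top_module_name
flatten
synth
dfflibmap -liberty mylib.lib
abc -liberty mylib.lib -D 1000   # -D: delay target (clock period), in ps on my version
stat -liberty mylib.lib          # cell counts plus area taken from the liberty file
"""

with tempfile.NamedTemporaryFile("w", suffix=".ys", delete=False) as f:
    f.write(yosys_script)
    script_path = f.name

subprocess.run(["yosys", "-s", script_path], check=True)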
Using pyiron, I want to calculate the mean square displacement of the ions in my system. How do I see the total displacement (i.e. not folded back by periodic boundary conditions) without dumping very frequently and checking when an atom passes over the boundary and gets wrapped?
Try comparing job['output/generic/unwrapped_positions'][-1] and job.structure.positions+job.output.total_displacements[-1]. If they give the same values, either way is fine. If not, you can post the relevant lines of your notebook here.
I'd like to add a few comments to Jan's answer:
While job['output/generic/unwrapped_positions'] returns the unwrapped positions parsed from the output files, job.output.total_displacements returns the displacement of atoms calculated from each pair of consecutive snapshots. So if an atom moves more than half the box length in any direction, job.output.total_displacements will give wrong displacements. Therefore, job['output/generic/unwrapped_positions'] is generally more trustworthy, but it is not available for all codes (some codes simply do not provide unwrapped positions in their output).
Moreover, if an interactive job is used, it is possible that job.structure.positions does not return the initial positions, i.e. job.structure.positions+job.output.total_displacements won't be initial positions + displacements.
So, in short, my answer to your question would be rather "Use job['output/generic/unwrapped_positions'] and if it's not available, use job.structure.positions+job.output.total_displacements but be aware of potential problems you might be running into."
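As a concrete example, here is a minimal sketch of how the mean square displacement could then be computed; job is assumed to be a finished pyiron MD job, and depending on the pyiron version the lookup of a missing output entry may return None or raise:
import numpy as np

def mean_square_displacement(job):
    """MSD per snapshot, averaged over all atoms, relative to the first snapshot."""
    unwrapped = job["output/generic/unwrapped_positions"]
    if unwrapped is not None:
        positions = np.asarray(unwrapped)          # shape (n_steps, n_atoms, 3)
    else:
        # Fall back to initial positions + accumulated displacements,
        # keeping the caveats discussed above in mind.
        positions = job.structure.positions + job.output.total_displacements
    displacements = positions - positions[0]
    return (displacements ** 2).sum(axis=-1).mean(axis=-1)

msd = mean_square_displacement(job)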
I am currently trying to do the signal processing of multiple channels in parallel using a custom source block. So far I have created an OOT source block which streams data for a single channel into one output perfectly fine.
Now I am searching for a way to expand this block so that it supports a larger number of channels (= outputs of the source block; up to 64) in parallel. Because the protocol used to pull the samples pulls them all at once, it is not possible to use multiple instances of the same source block.
Things I have found so far:
A PDF which seems to explain exactly what to do, but it appears to be outdated and this functionality no longer seems to be supported in GNU Radio.
The description of a feature which should be implemented in the future.
Is there a known solution or workaround for this problem?
Look at the add block: it has a configurable number of inputs!
Now, the trick here is threefold:
define an io_signature for input and output that allows for an adjustable number of ports. If you use gr_modtool add to create a new block, your io_signatures will be filled with <+MIN_IN+>, <+MAX_IN+>, <+MIN_OUT+> and <+MAX_OUT+>. Adjust these to reflect your actual minimum and maximum numbers of input and output ports. If you want to allow 1 to infinity inputs, use 1, -1.
in your (general_)work method, check for the number of inputs by doing something like ninputs = input_items.size(), and for the number of outputs by doing noutputs = output_items.size().
(optionally, if you want to use GRC) modify the <sink>/<source> definitions in your block GRC XML:
<sink>
<name>in</name>
<type>complex</type>
<nports>$num_inputs</nports>
</sink>
num_inputs could be a block parameter; compare the add_XX block source code.
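If you prefer to prototype in Python rather than C++, the same idea (one block with an adjustable number of output ports) can be sketched with a Python block; the block name and the random stand-in samples below are made up for illustration:
import numpy as np
from gnuradio import gr

class multi_channel_source(gr.sync_block):
    """Toy source block with a configurable number of output ports."""

    def __init__(self, num_channels=4):
        gr.sync_block.__init__(
            self,
            name="multi_channel_source",
            in_sig=None,                            # pure source: no inputs
            out_sig=[np.complex64] * num_channels,  # one stream per channel
        )

    def work(self, input_items, output_items):
        noutputs = len(output_items)                # number of connected outputs
        nitems = len(output_items[0])
        for ch in range(noutputs):
            # Here you would de-multiplex the samples pulled from your protocol;
            # random data is just a stand-in.
            samples = np.random.randn(nitems) + 1j * np.random.randn(nitems)
            output_items[ch][:] = samples.astype(np.complex64)
        return nitems
In GRC you would still pair this with an nports entry as in the XML above (or its YAML equivalent in GNU Radio 3.8 and later).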
I'm using Selenium to automate webpage functional testing. It's important for us to do a pixel-by-pixel comparison when we roll out new code, so we're using Selenium to take screenshots and comparing the base64 encoded strings to see if anything has changed.
We're finding that in practice, it's hard to get complete pixel consistency, especially with images. I would like minor blurriness / rendering artifacts to count as a "pass" instead of a "fail", so I'm wondering if there's a way of doing a fuzzy comparison to make our tests a bit less fragile.
I was thinking of maybe looking at the Levenshtein distance between the base64 strings as a starting point, but I don't really know if that's a good approach, or what the tolerances should be that distinguish "something moved on the page" from "rendering artifact". Any ideas / approaches?
So I ended up going with the ImageMagick command-line tool (because why re-invent image comparison). The "Peak Absolute Error" metric of the "compare" tool tells you how much you have to fuzz the pixels before two images are identical. This seems to work well... for an image with slight graphical distortions, there might be a lot of pixels that don't match, but slight fuzzing is enough to make them match; whereas for two images that are actually different, even though most pixels might match, the ones that don't tend to be very different. Right now I'm checking for a PAE of less than 15% to decide whether the images should be counted as identical. The command line I'm using is:
compare -metric PAE original.png new.png comparison.png
Documentation on ImageMagick's compare tool is here: http://www.imagemagick.org/script/compare.php
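If you want to drive that check from test code, a small wrapper along these lines can work; compare typically writes the metric to stderr with the normalized value in parentheses, and the 0.15 threshold just mirrors the 15% mentioned above:
import re
import subprocess

def images_match(original, new, diff_out="comparison.png", max_pae=0.15):
    """Return True if the normalized peak absolute error is at most max_pae."""
    result = subprocess.run(
        ["compare", "-metric", "PAE", original, new, diff_out],
        capture_output=True, text=True,
    )
    # Typical output: "4321 (0.0659)"; the parenthesized value is normalized to 0..1.
    match = re.search(r"\(([-+.\deE]+)\)", result.stderr)
    if match is None:
        raise RuntimeError("could not parse compare output: %r" % result.stderr)
    return float(match.group(1)) <= max_pae

# e.g.: assert images_match("original.png", "new.png")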
I've been using perceptualdiff, which uses a model of the human visual system to try to avoid reporting unnoticeable changes (the authors used it for renderer regression testing). Usage is quite simple:
perceptualdiff -output diff.ppm baseline.png test.png
(where diff.ppm is a PPM format image highlighting the areas of difference)
The needle regression testing framework has support for using pdiff to compare screenshots:
http://needle.readthedocs.org/en/latest/#engines
Use an image format that does not create compression artifacts (like BMP or PNG); then you can do a per-pixel comparison.
I think you can check each pixel with a simple Euclidean distance.
To improve performance a little, do not calculate the square root; compare the squared distances instead:
// Maximum color distance allowed for two pixels to count as consistent.
const float maxDistanceAllowed = 5.0f;
// Square of that distance, used in the comparison below.
const float maxD = maxDistanceAllowed * maxDistanceAllowed;
public bool isPixelConsistent(Color pixel1, Color pixel2)
{
// Euclidean distance in 3-dimensions.
float distanceSquared = (pixel1.R - pixel2.R)*(pixel1.R - pixel2.R) + (pixel1.G - pixel2.G)*(pixel1.G - pixel2.G) + (pixel1.B - pixel2.B)*(pixel1.B - pixel2.B);
// If the actual distance is less than the max allowed, the pixel is
// consistent and the method returns TRUE
return distanceSquared <= maxD;
}
I didn't test the C# code, but it should give you the idea. Give it a few tries and adjust maxDistanceAllowed to your needs.
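For reference, the same per-pixel check can be written in a few lines of Python with Pillow and NumPy (the 5.0 threshold just mirrors the C# snippet; tune it to taste):
import numpy as np
from PIL import Image

def consistent_pixel_fraction(path1, path2, max_distance=5.0):
    """Fraction of pixels whose RGB Euclidean distance is within max_distance."""
    a = np.asarray(Image.open(path1).convert("RGB"), dtype=np.float64)
    b = np.asarray(Image.open(path2).convert("RGB"), dtype=np.float64)
    if a.shape != b.shape:
        raise ValueError("images have different dimensions")
    # Compare squared distances to avoid the square root, as above.
    dist_sq = ((a - b) ** 2).sum(axis=-1)
    return (dist_sq <= max_distance ** 2).mean()

# e.g. treat the pair as identical if 99.9% of pixels are consistent:
# assert consistent_pixel_fraction("original.png", "new.png") >= 0.999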
If anyone else is looking for something similar, there is Depicted-dpxdt. It is designed to be used as part of a CI/CD process.
It combines a perceptual diff with a server, a command-line tool, and a wrapper for PhantomJS.
Its demonstrated functionality includes crawling an entire site and comparing the pages for differences.
In the GIMP GUI, the QuickMask is very useful for many things, but this functionality doesn't seem to be directly available through script-fu; I couldn't find any obvious equivalents in the procedure browser.
In particular, putting the (value/gray) pixels of a layer into the selection mask is the basic thing I need to do. I tried using gimp-image-get-selection to get the selection channel's id number, then gimp-edit-paste into it, but the following anchor operation caused Gimp to crash.
My other answer contains the "theoretical" way of doing it - however, the O.P. found a bug in GIMP, as of version 2.6.5, as can be seen in the comments to that answer.
I have a workaround for what the O.P. intends to do: paste the contents of a given image layer into the image selection. As noted, edit-copy -> edit-paste on the selection drawable triggers a program crash.
The workaround is to create a new image channel with the desired contents, through the copy and paste method, and then use gimp-selection-load to make the selection equal the channel contents:
The functions that need to be called are as follows (I won't paste Scheme code, as I am not proficient with all the parentheses - I did the tests using the Python console in GIMP):
>>> img = gimp.image_list()[0]
>>> ch = pdb.gimp_channel_new(img, img.width, img.height, "bla", 0, (0,0,0))
>>> ch
<gimp.Channel 'bla'>
>>> pdb.gimp_edit_copy(img.layers[0])
1
>>> pdb.gimp_image_add_channel(img, ch, 0)
>>> fl = pdb.gimp_edit_paste(ch, 0)
>>> fl
<gimp.Layer 'Pasted Layer'>
>>> pdb.gimp_floating_sel_anchor(fl)
>>> pdb.gimp_selection_load(ch)
Using the QuickMask through the user interface is exactly equivalent to drawing on the selection, treating the selection as a drawable object.
So, to use the equivalent of the QuickMask in script-fu, all one needs to do is retrieve the selection as a drawable and pass it as a parameter to the calls that will modify it.
And to get the selection, one just has to call 'gimp-image-get-selection'.
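For example, a minimal sketch in the Python console, assuming gimp-edit-fill works on the selection drawable (the fill-type constant is spelled WHITE_FILL or FILL_WHITE depending on the GIMP version, so the raw value 2 is used here):
>>> img = gimp.image_list()[0]
>>> sel = pdb.gimp_image_get_selection(img)   # the selection mask as a drawable
>>> pdb.gimp_edit_fill(sel, 2)                # 2 = white fill, i.e. select everything
>>> pdb.gimp_displays_flush()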