I'm relatively new to Yosys. I've been tinkering with it using some proprietary standard cell libraries and am trying to extract some QoR/PPA metrics, similar to those you can get from DC (Synopsys Design Compiler):
Minimum slack (including worst-case negative slack/WNS)
Max logic depth [0]
Cell area [1]
For [0], I know there's the ltp command, but it only reports topological paths per module. I tried flattening the design using flatten, but there still seems to be a hierarchy in the netlist. Where should I insert the flatten command to actually flatten the netlist?
For [1], I know you can get the number of cells in the netlist using the stat command, but this doesn't tell me the equivalent of DC's CellArea metric (since each cell has a different area). I could just build a library of cell areas for each cell type based on the cell library datasheet, but that's rather laborious.
Also, is it possible to specify a target clock rate for synthesis? I think for abc there was a -D flag for delay, but this sounds to me more like input delay rather than clock period.
Thanks!
-D passed to abc is indeed clock period, not input delay. When specified this should also cause abc to print slack information.
Have you tried stat -liberty file.lib to use a liberty file for cell areas? If this isn't calculating areas as expected (I didn't quite understand your issue) then please create a feature request on GitHub with the difference.
flatten should be run after hierarchy -top top_module_name to do hierarchical elaboration and set the top module.
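For reference, here is a minimal sketch of a script tying those commands together. It assumes a single Verilog source called design.v, a top module named top, and a liberty file named file.lib; adapt the pass list to your own flow, and note that (as far as I recall) abc's -D value is interpreted in picoseconds.

read_verilog design.v
hierarchy -top top              # hierarchical elaboration, set the top module
flatten                         # flatten the hierarchy below the top module
synth -top top                  # generic synthesis
dfflibmap -liberty file.lib     # map flip-flops to the library
abc -liberty file.lib -D 1000   # technology mapping; -D is the delay target in ps (here 1 ns)
ltp                             # longest topological path (max logic depth)
stat -liberty file.lib          # cell counts and total cell area from the liberty file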
I need some help in creating a VI that generates virtual or calculated channels based on several channels I measure.
e.g.
I measure voltage on several AIs, let's say channels A, B, C, D, E, where B, C and E represent current on a shunt, and I would like to calculate the power of the system:
Q[A] = B+C
R[W] = A*Q
S[W] = D*E
T[W] = R+S
I would like to load the equations from an external configuration file that may vary from one project to another. The equations would come in the form of strings: Q=A+B , R= A*Q .....
*(During a run the equation and channel counts don't change - they only change when the config is loaded.)
The main issue that I am facing is that the inputs to each equation may depend on virtual channels that do not have data yet.
I was trying to use:
Formula Nodes / MathScript: https://zone.ni.com/reference/en-XX/help/371361R-01/lvconcepts/formula_nodes/
https://knowledge.ni.com/KnowledgeArticleDetails?id=kA03q000000x30HCAQ&l=en-IL
All data should be chunked into a data stream (continuous sampling) that can be presented on a chart/graph and saved to CSV/TDMS.
Do I need any additional packages?
I have tried the following based on the example given, but I am getting strange results.
Answer
The elements you are looking for are not the Formula/Math Nodes but rather the:
Formula Parsing VIs
Using these VIs you are able to pass a calculation in the form of a string, together with an array of variable names, and then evaluate the formula. This allows for run-time variable scripting, whereas most other nodes require the formula to be fixed at compile time (with the exception of the Python node).
Example
An example of a very simple program that evaluates two different calculations using the same values and variables.
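For the dependency issue mentioned in the question (equations referring to virtual channels that have no data yet), the usual trick is to evaluate the formulas in dependency order, retrying any formula whose inputs are not ready. This is not LabVIEW code, just a minimal Python sketch of that ordering idea; in the actual VI the eval() call would be replaced by the Formula Parsing VIs, and the formula and channel values here are made up:

# Formula strings as they might come from a config file; right-hand sides may
# reference measured channels (A..E) or other virtual channels (Q, R, ...).
formulas = {"Q": "B + C", "R": "A * Q", "S": "D * E", "T": "R + S"}
channels = {"A": 2.0, "B": 0.5, "C": 0.3, "D": 1.5, "E": 0.7}  # one sample per measured channel

def evaluate_all(formulas, channels):
    values = dict(channels)      # start with the measured channels
    pending = dict(formulas)     # virtual channels still to be computed
    while pending:
        progressed = False
        for name, expr in list(pending.items()):
            try:
                values[name] = eval(expr, {}, values)  # stand-in for the Formula Parsing VIs
            except NameError:
                continue         # depends on a virtual channel that has no value yet
            del pending[name]
            progressed = True
        if not progressed:
            raise ValueError("circular or unresolved dependency: %s" % list(pending))
    return values

print(evaluate_all(formulas, channels))  # Q=0.8, R=1.6, S=1.05, T=2.65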
Using pyiron, I want to calculate the mean square displacement of the ions in my system. How do I see the total displacement (i.e. not folded back by periodic boundary conditions) without dumping very frequently and checking when an atom passes over the boundary and gets wrapped?
Try to compare job['output/generic/unwrapped_positions'][-1] and job.structure.positions+job.output.total_displacements[-1]. If they deliver the same values, it's definitely fine both ways. If not, you can post the relevant lines in your notebook here.
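In other words (a minimal sketch in Python/NumPy, assuming job is a finished pyiron job):

import numpy as np

a = job["output/generic/unwrapped_positions"][-1]
b = job.structure.positions + job.output.total_displacements[-1]
print(np.allclose(a, b))   # True means both routes agree for this job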
I'd like to add a few comments to Jan's answer:
While job['output/generic/unwrapped_positions'] returns the unwrapped positions parsed from the output files, job.output.total_displacements returns the displacement of atoms calculated from each pair of consecutive snapshots. So if an atom moves more than half the box length in any direction between two consecutive snapshots, job.output.total_displacements will give wrong coordinates. Therefore, job['output/generic/unwrapped_positions'] is generally more trustworthy, but it is not available for all codes (since some codes simply do not provide an output for unwrapped positions).
Moreover, if an interactive job is used, it is possible that job.structure.positions does not return the initial positions, i.e. job.structure.positions+job.output.total_displacements won't be initial positions + displacements.
So, in short, my answer to your question would be rather "Use job['output/generic/unwrapped_positions'] and if it's not available, use job.structure.positions+job.output.total_displacements but be aware of potential problems you might be running into."
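As a concrete example, here is a minimal sketch of computing the mean square displacement from the unwrapped positions, assuming job is a finished MD job and that unwrapped_positions has the shape (n_steps, n_atoms, 3):

import numpy as np

pos = job["output/generic/unwrapped_positions"]    # (n_steps, n_atoms, 3), not wrapped back into the box
disp = pos - pos[0]                                # displacement of every atom relative to the first snapshot
msd = np.mean(np.sum(disp**2, axis=-1), axis=-1)   # average over atoms -> one MSD value per snapshot

# Fallback if the code does not write unwrapped positions:
# disp = job.output.total_displacements            # beware of atoms moving more than half the box between snapshots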
I am trying to get familiar with working with a 74LS148 priority encoder IC. I am powering the IC from a constant 5 Volt supply and have a 0 Volt ground. I tried connecting every input to the 5 Volt source to set it to a HIGH logic state, which, according to the truth table, should give HIGH on the output. I then tried setting some inputs to LOW, but that had no effect on the output, which remained in a HIGH state.
I then tried using pull-up resistors, whose use and circuit configuration are not entirely clear to me, which I think is the problem. I connected the resistors as shown in the picture below, which should give a HIGH state at the output. I then tried connecting some inputs, along with their resistors, to ground. The output still remained in a HIGH state, at around 4.3 Volts.
I repeated the entire process with another 74LS148 IC to make sure the first one was working.
I could really use a little help. Thank you all!
The 74LS148's truth table (see its datasheet) shows that the inputs and outputs are all active-LOW: EI (pin 5) must be LOW (connect it to ground) to enable the encoder, an input is asserted by pulling it LOW, and the A2/A1/A0 outputs go LOW to encode the highest-priority asserted input.
The enable input EI must be grounded. Also, your diagram indicates that inputs 0 through 7 are pulled high; because the inputs are active-LOW, that means no input is asserted, so the outputs will always be HIGH.
I think you should include a bleeder resistor at the output to draw some constant current. This will help the regulator function as desired.
I am currently trying to do the signal processing of multiple channels in parallel using a custom source block. So far I have created an OOT source block which streams data for a single channel into one output perfectly fine.
Now I am searching for a way to expand this module so that it can support a higher number of channels (= outputs of the source block; up to 64) in parallel. Because the protocol used to pull the samples pulls them all at once, it is not possible to use multiple instances of the same source block.
Things I have found so far:
A PDF which seems to explain exactly what to do, but it appears to be outdated and this functionality no longer seems to be supported in GNU Radio.
The description of a feature which should be implemented in the future.
Is there a known solution or workaround for this problem?
Look at the Add block: it has a configurable number of inputs!
Now, the trick here is threefold:
define an io_signature as input and output that allows for adjustable numbers. If you use gr_modtool add to create a new block, your io_signatures will be filled with <+MIN_IN+>, <+MAX_IN+>, <+MIN_OUT+> and <+MAX_OUT+>. Adjust these to reflect your actual minimum and maximum in- and output port numbers. If you want to have 1 to infinity inputs, use 1, -1.
in your (general_)work method, check for the number of inputs by doing something like ninputs = input_items.size(), and for the number of outputs by doing noutputs = output_items.size().
(optionally, if you want to use GRC) modify the <sink>/<source> definitions in your block GRC XML:
<sink>
<name>in</name>
<type>complex</type>
<nports>$num_inputs</nports>
</sink>
num_inputs could be a block parameter; compare the add_XX block source code.
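The steps above describe the C++/gr_modtool route. If it helps to see the same variable-port-count idea in a compact form, here is a minimal sketch of a GNU Radio Python source block; the block name, the num_channels parameter and the zero-fill placeholder are only illustrative, and the real block would pull one chunk from your protocol and de-interleave it across the channels:

import numpy as np
from gnuradio import gr

class multi_channel_source(gr.sync_block):
    # Hypothetical sketch: one source block with a configurable number of output channels.
    def __init__(self, num_channels=4):
        gr.sync_block.__init__(
            self,
            name="multi_channel_source",
            in_sig=None,                             # a source has no inputs
            out_sig=[np.complex64] * num_channels,   # one output port per channel
        )
        self.num_channels = num_channels

    def work(self, input_items, output_items):
        nout = len(output_items[0])
        # Pull one chunk of samples from the hardware/protocol here and
        # split it into the per-channel outputs. Placeholder: zeros.
        for ch in range(self.num_channels):
            output_items[ch][:] = np.zeros(nout, dtype=np.complex64)
        return nout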
The only effect AudioUnit on iOS is the "iTunes EQ", which only lets you use EQ presets. I would like to use a customized EQ in my audio graph.
I came across this question on the subject and saw an answer suggesting using this DSP code in the render callback. This looks promising, and people seem to be using it effectively on various platforms. However, my implementation has a ton of noise even with a flat EQ.
Here's my 20 line integration into the "MixerHostAudio" class of Apple's "MixerHost" example application (all in one commit):
https://github.com/tassock/mixerhost/commit/4b8b87028bfffe352ed67609f747858059a3e89b
Any ideas on how I could get this working? Any other strategies for integrating an EQ?
Edit: Here's an example of the distortion I'm experiencing (with the EQ flat):
http://www.youtube.com/watch?v=W_6JaNUvUjA
In the code in EQ3Band.c, the filter coefficients are used without being initialized. The init_3band_state method initializes just the gains and frequencies, but the coefficients themselves (es->f1p0 etc.) are not initialized and therefore contain garbage values. That might be the reason for the bad output.
This code seems wrong in more than one way.
A digital filter is normally represented by the filter coefficients, which are constant; the filter's inner state history (since in most cases the output depends on history); and the filter topology, i.e. the arithmetic used to calculate the output given the input and the filter (coefficients + state history). In most cases, and certainly when filtering audio data, you expect to get 0's at the output if you feed 0's to the input.
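To make that concrete, here is a minimal sketch (in Python rather than the C of the render callback) of the structure just described: constant coefficients, a small state history, and an output that is a weighted sum, so an all-zero input produces an all-zero output. The coefficient values themselves would come from whatever EQ design you use:

class Biquad:
    # One second-order filter section (Direct Form I).
    def __init__(self, b0, b1, b2, a1, a2):
        self.b0, self.b1, self.b2, self.a1, self.a2 = b0, b1, b2, a1, a2  # constant coefficients
        self.x1 = self.x2 = self.y1 = self.y2 = 0.0                       # state history, starts at zero

    def process(self, x):
        y = (self.b0 * x + self.b1 * self.x1 + self.b2 * self.x2
             - self.a1 * self.y1 - self.a2 * self.y2)
        self.x2, self.x1 = self.x1, x   # shift the input history
        self.y2, self.y1 = self.y1, y   # shift the output history
        return y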
The problems in the code you linked to:
The filter coefficients are changed in each call to the processing method:
es->f1p0 += (es->lf * (sample - es->f1p0)) + vsa;
The input sample is usually multiplied by the filter coefficients, not added to them. It doesn't make any physical sense - the sample and the filter coeffs don't even have the same physical units.
If you feed in 0's, you do not get 0's at the output, just some values which do not make any sense.
I suggest you look for other code; the other option is debugging this one, which would be harder.
In addition, you'd benefit from reading about digital filters:
http://en.wikipedia.org/wiki/Digital_filter
https://ccrma.stanford.edu/~jos/filters/