I have recently started using yosys, synthesized a DSP block with cmos_cells.lib, and got the following results:
ABC RESULTS: NAND cells: 2579
ABC RESULTS: NOR cells: 2771
ABC RESULTS: NOT cells: 447
ABC RESULTS: internal signals: 3728
ABC RESULTS: input signals: 133
ABC RESULTS: output signals: 128
I don't have access to a commercial standard cell library at the moment, but I am trying to get an estimate of the die size for this design in, e.g., a TSMC 28nm process.
I would appreciate it if someone could help me with this.
Thanks
There's no getting around needing a cell library for (roughly) the process you want. Once you have one, map to it and then run stat -liberty cells.lib to calculate total cell area.
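If you just want to see the arithmetic that stat -liberty performs, here is a rough Python sketch using the cell counts above. The per-cell areas and the utilization factor below are made-up placeholders; real 28nm numbers only come from the (NDA'd) liberty file of the target library.

# Back-of-the-envelope area estimate from the ABC cell counts above.
cell_counts = {"NAND": 2579, "NOR": 2771, "NOT": 447}

# PLACEHOLDER areas in um^2 -- replace with the 'area' attribute of the
# corresponding cells in your .lib file.
cell_area_um2 = {"NAND": 0.5, "NOR": 0.5, "NOT": 0.3}

logic_area = sum(cell_counts[c] * cell_area_um2[c] for c in cell_counts)

# Logic area alone is optimistic: routing, power grid and I/O typically limit
# utilization, so divide by an assumed utilization factor.
utilization = 0.7
print(f"logic area   ~ {logic_area:.1f} um^2")
print(f"die estimate ~ {logic_area / utilization:.1f} um^2 at {utilization:.0%} utilization")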
Has anyone managed to obtain a Monte Carlo error for a parameter when running a Bayesian model in R2OpenBUGS?
It is provided in the standard output of OpenBUGS, but when run under R2OpenBUGS, the log file doesn't have the MC error. Is there a way to ask R2OpenBUGS to calculate the MC error? Or maybe there is a way to calculate it manually? Please let me know if you have heard of any way to do that. Thank you!
Here is the standard log output of R2OpenBUGS:
$stats
              mean      sd  val2.5pc    median  val97.5pc  sample
beta0      1.04700 0.13250    0.8130   1.03800    1.30500    1500
beta1     -0.31440 0.18850   -0.6776  -0.31890    0.03473    1500
beta2     -0.05437 0.05369   -0.1648  -0.05408    0.04838    1500
deviance 588.70000 7.87600  575.3000 587.50000  606.90000    1500

$DIC
       Dbar  Dhat   DIC    pD
t     588.7 570.9 606.5 17.78
total 588.7 570.9 606.5 17.78
A simple way to calculate Monte Carlo standard error (MCSE) is to divide the standard deviation of the chain by the square root of the effective number of samples. The standard deviation is provided in your output, but the effective sample size should be given as n.eff (the rightmost column) when you print the model output - or at least that is the impression I get from:
https://cran.r-project.org/web/packages/R2OpenBUGS/vignettes/R2OpenBUGS.pdf
I don't use OpenBUGS any more so I can't easily check for you, but there should be something there that indicates the effective sample size (this is NOT the same as the number of iterations you have sampled, as it also takes into account the loss of information due to correlation within the chains).
Otherwise you can obtain it yourself by extracting the raw MCMC chains and then either computing the effective sample size using the coda package (?coda::effectiveSize) or using LaplacesDemon::MCSE to calculate the Monte Carlo standard error directly. For more information see:
https://rdrr.io/cran/LaplacesDemon/man/MCSE.html
Note that some people (including me!) would suggest focusing on the effective sample size directly rather than looking at the MCSE, as the old "rule of thumb" that MCSE should be less than 5% of the sample standard deviation is equivalent to saying that the effective sample size should be at least 400 (1/0.05^2). But opinions do vary :)
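For example, taking the beta1 row above (sd ≈ 0.1885) and a purely hypothetical effective sample size of 1200, the MCSE would be roughly 0.1885 / sqrt(1200) ≈ 0.0054, comfortably below 5% of the standard deviation.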
The MCMC error is reported as the Time-series SE, and can be found in the statistics section of the summary of the coda object:
library(R2OpenBUGS)
library(coda)
my_result <- bugs(...., codaPkg = TRUE)   # write CODA output files instead of a bugs object
my_coda <- read.bugs(my_result)           # read them back as an mcmc.list
summary(my_coda)$statistics               # the Time-series SE column is the MC error
Is there a way to pass extra feature tokens along with the existing word token (training features / source file vocabulary) and feed them to the encoder RNN of seq2seq? Currently it accepts only one word token from the sentence at a time.
Let me put this more concretely. Consider the example of machine translation/NMT: say I have 2 more feature columns for the corresponding source vocabulary set (Feature1 here). For example, consider the table below:
+---------+----------+----------+
|Feature1 | Feature2 | Feature3 |
+---------+----------+----------+
|word1 | x | a |
|word2 | y | b |
|word3 | y | c |
|. | | |
|. | | |
+---------+----------+----------+
To summarise: currently the seq2seq dataset is a parallel corpus with a one-to-one mapping between the source feature (the vocabulary, i.e. Feature1 alone) and the target (label/vocabulary). I'm looking for a way to map more than one feature (i.e. Feature1, Feature2, Feature3) to the target (label/vocabulary).
Moreover, I believe this is glossed over in the seq2seq-pytorch tutorial (https://github.com/spro/practical-pytorch/blob/master/seq2seq-translation/seq2seq-translation.ipynb), as quoted below:
When using a single RNN, there is a one-to-one relationship between
inputs and outputs. We would quickly run into problems with different
sequence orders and lengths that are common during translation... With
the seq2seq model, by encoding many inputs into one vector, and
decoding from one vector into many outputs, we are freed from the
constraints of sequence order and length. The encoded sequence is
represented by a single vector, a single point in some N dimensional
space of sequences. In an ideal case, this point can be considered the
"meaning" of the sequence.
Furthermore, I tried TensorFlow; it took me a lot of time to debug and make the appropriate changes, and I got nowhere. I heard from my colleagues that PyTorch would have the flexibility to do this and would be worth checking out.
Please share your thoughts on how to achieve the same in TensorFlow or PyTorch. It would be great if anyone could explain how to practically implement/get this done. Thanks in advance.
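To make the idea concrete, here is a rough PyTorch sketch of what I have in mind (all vocabulary sizes and dimensions are made up): embed each feature column separately, concatenate the embeddings at every time step, and feed the result to the encoder GRU. Is this the right direction?

import torch
import torch.nn as nn

class MultiFeatureEncoder(nn.Module):
    def __init__(self, word_vocab=10000, feat2_vocab=10, feat3_vocab=10,
                 word_dim=256, feat_dim=16, hidden_size=512):
        super().__init__()
        self.word_emb = nn.Embedding(word_vocab, word_dim)
        self.feat2_emb = nn.Embedding(feat2_vocab, feat_dim)
        self.feat3_emb = nn.Embedding(feat3_vocab, feat_dim)
        self.rnn = nn.GRU(word_dim + 2 * feat_dim, hidden_size, batch_first=True)

    def forward(self, words, feat2, feat3):
        # words/feat2/feat3: LongTensors of shape (batch, seq_len)
        x = torch.cat([self.word_emb(words),
                       self.feat2_emb(feat2),
                       self.feat3_emb(feat3)], dim=-1)
        outputs, hidden = self.rnn(x)   # hidden would feed the decoder as usual
        return outputs, hidden

# Toy usage: a batch of 2 sentences, 5 tokens each, with two extra feature ids per token.
enc = MultiFeatureEncoder()
words = torch.randint(0, 10000, (2, 5))
feat2 = torch.randint(0, 10, (2, 5))
feat3 = torch.randint(0, 10, (2, 5))
outputs, hidden = enc(words, feat2, feat3)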
So I have followed this tutorial and retrained using my own images.
https://www.tensorflow.org/tutorials/image_retraining
So I now have an "output_graph.pb" and an "output_labels.txt" (which I can use with other code to classify images).
But how do I actually generate a confusion matrix using a folder of testing images (or at least with the images it was trained on)?
There is https://www.tensorflow.org/api_docs/python/tf/confusion_matrix
but that doesn't seem very helpful.
This thread seems to just be using numbers to represent labels rather than actual files, but I'm not really sure: how to create confusion matrix for classification in tensorflow
And I'm not really sure how to use the code in this thread either:
How do i create Confusion matrix of predicted and ground truth labels with Tensorflow?
I would try to create your confusion matrix manually, using something like these steps:
Modify the label_image example to print out just the top label.
Write a script to call the modified label_image repeatedly for all images in a folder.
Have the script print out the ground truth label, and then call label_image to print the predicted one.
You should now have a text list of all your labels in the console, something like this:
apple,apple
apple,pear
pear,pear
pear,orange
...
Now, create a spreadsheet with both row and column names for all the labels:
| apple | pear | orange
-------+----------------------
apple |
pear |
orange |
The value for each cell will be the number of pairs that show up in your console list for that row, column combination. For a small set of images you can count this manually, or you can write a script to calculate it if there are too many; a sketch of such a script is below.
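For example, a minimal Python sketch (assuming the truth,prediction pairs were saved to a hypothetical file called pairs.txt, one pair per line) could tally the matrix like this:

from collections import Counter

pairs = Counter()
labels = set()
with open("pairs.txt") as f:            # lines like "apple,pear"
    for line in f:
        truth, pred = line.strip().split(",")
        pairs[(truth, pred)] += 1
        labels.update([truth, pred])

labels = sorted(labels)
print("".join(f"{l:>10}" for l in [""] + labels))
for truth in labels:
    row = "".join(f"{pairs[(truth, pred)]:>10}" for pred in labels)
    print(f"{truth:>10}{row}")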
I am trying to implement the Naive Bayes algorithm by writing my own code in MATLAB. I am confused about what distribution to choose for one of the continuous attributes. It has values as follows:
MovieAge :
1
2
3
4
..
10
1
11
2
12
1
3
13
2
1
4
14
3
2
5
15
4
3
6
16
5
4
....
32
9
3
15
Please let me know which distribution to use for such data. Also, in my test set this attribute will sometimes contain values that are not included in the training data. How do I handle this problem? Thanks
As in Ben's answer, starting with a histogram sounds good.
I took your input and plotted a histogram of it. Save your data into a text file called histdata, one line per value, then the following Python code generates the plot:
import matplotlib.pyplot as plt

data = []
with open('./histdata') as f:      # one integer value per line
    for line in f:
        data.append(int(line))

plt.hist(data, bins=10)
plt.xlabel('Movie Age')
plt.ylabel('Counts')
plt.show()
Assuming this variable takes integer values, rather than being continuous (based on the example), the simplest method is a histogram-type approach: the probability of some value is the fraction of times it occurs in the training data. Consider a final bin for all values above some number (maybe 20 or so based on your example). If you have problems with zero counts, add one to all of them (can be seen as a Dirichlet prior if you're that way inclined).
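A minimal sketch of that idea in Python (the data below is just a placeholder for your MovieAge column, and the cutoff of 20 is arbitrary):

import numpy as np

train = np.array([1, 2, 3, 4, 10, 1, 11, 2, 12, 3, 15, 32])   # placeholder data

cutoff = 20                      # values above this share one overflow bin
counts = np.ones(cutoff + 1)     # add-one smoothing: bins 1..20 plus overflow all start at 1
for v in train:
    counts[min(v, cutoff + 1) - 1] += 1
probs = counts / counts.sum()

def p(value):
    # P(MovieAge = value) under the smoothed histogram model
    return probs[min(value, cutoff + 1) - 1]

print(p(3))      # a value seen in training
print(p(50))     # an unseen large value falls into the overflow bin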
As for a parametric form, if you prefer one, the Poisson distribution is a possibility. A qq plot, or even a goodness of fit test, will suggest how appropriate this is in your case, but I suspect you're going to be better off with the histogram-based method.
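If you do want to check the Poisson option, a quick (hypothetical) comparison of observed frequencies against the Poisson pmf, with lambda estimated by the sample mean, could look like this:

import numpy as np
from scipy import stats

train = np.array([1, 2, 3, 4, 10, 1, 11, 2, 12, 3, 15, 32])   # same placeholder data as above
lam = train.mean()                                            # MLE of the Poisson rate

values, observed = np.unique(train, return_counts=True)
expected = len(train) * stats.poisson.pmf(values, lam)
for v, o, e in zip(values, observed, expected):
    print(f"value {v:2d}: observed {o}, expected {e:.2f}")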
I'm trying to do simple voice-to-text mapping using pocketsphinx. The grammar is very simple, such as:
public <grammar> = (Matt, Anna, Tom, Christine)+ (One | Two | Three | Four | Five | Six | Seven | Eight | Nine | Zero)+ ;
e.g:
Tom Anna Three Three
yields
Tom Anna 33
I adapted the acoustic model (to take into account my foreign accent) and after that I received decent performance (~94% accuracy). I used a training dataset of ~3 minutes.
Right now I'm trying to do the same but by whispering into the microphone. The accuracy dropped significantly to ~50% without training, and with training for my accent I got ~60%. I tried other things including denoising and boosting the volume. I read the whole documentation, but I was wondering if anyone could answer some questions so I can better understand in which direction I should go to improve performance.
1) In the tutorial you adapt the hub4wsj_sc_8k acoustic model. I guess "8k" is a sampling parameter. When using sphinx_fe you use "-samprate 16000". Was it deliberate to adapt an 8k model using data with a 16k sampling rate? Why wasn't data with 8k sampling used? Does it influence performance?
2) In Sphinx 4.1 (in comparison to pocketsphinx) there are different acoustic models, e.g. WSJ_8gau_13dCep_16k_40mel_130Hz_6800Hz.jar. Can those models be used with pocketsphinx? Will an acoustic model with 16k sampling typically perform better on data with a 16k sampling rate?
3) When recording data for training, should I speak normally (to adapt only to my accent) or whisper (to adapt to both whispering and my accent)? I think I tried both scenarios and didn't notice enough difference to draw any conclusion, but I don't know the pocketsphinx internals, so I might be doing something wrong.
4) I used the following script from the tutorial to record the adaptation training and testing data:
for i in `seq 1 20`; do
fn=`printf arctic_%04d $i`;
read sent; echo $sent;
rec -r 16000 -e signed-integer -b 16 -c 1 $fn.wav 2>/dev/null;
done < arctic20.txt
I noticed that each time I hit Control-C, the keypress is audible in the recorded audio, which led to errors. Trimming the audio sometimes corrected this or led to other errors instead. Is there any requirement that each recording have a few seconds of quiet before and after speaking?
5) When accumulating observation counts, are there any settings I can tinker with to improve performance?
6) What's the difference between a semi-continuous and a continuous model? Can pocketsphinx use a continuous model?
7) I noticed that the 'mixture_weights' file from sphinx4 is much smaller compared to the one in pocketsphinx-extra. Does it make any difference?
8) I tried different combinations of removing white noise (using the 'sox' toolkit, e.g. sox noisy.wav filtered.wav noisered profile.nfo 0.1). Depending on the last parameter, sometimes it improved things a little bit (~3%) and sometimes it made them worse. Is it good to remove noise, or is that something pocketsphinx does as well? My environment is quiet; there is only white noise, which I guess can have more impact when the audio is recorded whispering.
9) I noticed that boosting the volume (gain) alone most of the time made the performance a little bit worse, even though for humans it was easier to distinguish words. Should I avoid it?
10) Overall I tried different combinations, and the best result I got was ~65% (when only removing noise), so only a slight (5%) improvement. Below are some stats:
//ORIGINAL UNPROCESSED TESTING FILES
TOTAL Words: 111 Correct: 72 Errors: 43
TOTAL Percent correct = 64.86% Error = 38.74% Accuracy = 61.26%
TOTAL Insertions: 4 Deletions: 13 Substitutions: 26
//DENOISED + VOLUME UP
TOTAL Words: 111 Correct: 76 Errors: 42
TOTAL Percent correct = 68.47% Error = 37.84% Accuracy = 62.16%
TOTAL Insertions: 7 Deletions: 4 Substitutions: 31
//VOLUME UP
TOTAL Words: 111 Correct: 69 Errors: 47
TOTAL Percent correct = 62.16% Error = 42.34% Accuracy = 57.66%
TOTAL Insertions: 5 Deletions: 12 Substitutions: 30
//DENOISE, threshold 0.1
TOTAL Words: 111 Correct: 77 Errors: 41
TOTAL Percent correct = 69.37% Error = 36.94% Accuracy = 63.06%
TOTAL Insertions: 7 Deletions: 3 Substitutions: 31
//DENOISE, threshold 0.21
TOTAL Words: 111 Correct: 80 Errors: 38
TOTAL Percent correct = 72.07% Error = 34.23% Accuracy = 65.77%
TOTAL Insertions: 7 Deletions: 3 Substitutions: 28
I applied this processing only to the testing data. Should the training data be processed in the same way? I think I tried that, but there was barely any difference.
11) In all those tests I used an ARPA language model. When using JSGF, the results were usually much worse (I have the latest pocketsphinx branch). Why is that?
12) Because in each sentence the maximum number would be '999' and there are no more than 3 names, I modified the JSGF and replaced the repetition sign '+' by repeating the content in the parentheses manually. This time the results were much closer to ARPA. Is there any way in the grammar to specify a maximum number of repetitions, as in regular expressions?
13) When using the ARPA model, I generated it from all possible combinations (since the dictionary is fixed and really small: ~15 words), but during testing I still sometimes received illegal results, e.g. Tom Anna (without any required number). Is there any way to enforce some structure using an ARPA model?
14) Should the dictionary be limited to only those ~15 words, or will a full dictionary only affect speed but not accuracy?
15) Is modifying the dictionary (phonemes) the way to go to improve recognition when whispering? (I'm not an expert, but when we whisper I guess some words might sound different?)
16) Any other tips on how to improve accuracy would be really helpful!
Regarding whispering: when you do so, the sound waves don't have meaningful periodic parts (the vibrations that result from your vocal cords resonating in normal speech are absent when whispering). You can try this by putting your finger on your throat while loudly saying 'aaaaaa', and then just whispering it.
AFAIR acoustic modeling relies a lot on taking the frequency spectrum of the sound to detect peaks (formants) and relate them to phones (like vowels).
Educated guess: when whispering, the spectrum is mostly white noise, slightly shaped by the oral position (tongue, openness of the mouth, etc.), which is enough for humans but not nearly enough to make the peaks distinguishable by a computer.