We have this:
Decoder
m=4 data bits
r=3 check bits
The check bits are at positions 1, 2, and 4, i.e. at the powers of two.
All other positions are data bits.
The check bits were chosen in such a way that red is for p1, green is for p2, and blue is for p3.
Now we have to represent this Hamming code with gates and circuits, but how can that be done?
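Since each check bit is an even parity bit over a fixed set of positions, the hardware is just XOR trees plus a small correction stage. A minimal Python sketch of the decoder logic, assuming the standard (7,4) coverage (p1 checks positions 1, 3, 5, 7; p2 checks 2, 3, 6, 7; p4 checks 4, 5, 6, 7); the function name is invented here:

```python
def hamming74_decode(bits):
    """Decode a 7-bit Hamming codeword; bits[0] is position 1.

    Each syndrome bit is the XOR (parity) of the positions the
    corresponding check bit covers, so in hardware each one is a
    small tree of XOR gates; together they spell out the error
    position in binary.
    """
    p1, p2, d3, p4, d5, d6, d7 = bits
    s1 = p1 ^ d3 ^ d5 ^ d7        # check bit 1 covers positions 1, 3, 5, 7
    s2 = p2 ^ d3 ^ d6 ^ d7        # check bit 2 covers positions 2, 3, 6, 7
    s4 = p4 ^ d5 ^ d6 ^ d7        # check bit 4 covers positions 4, 5, 6, 7
    error_pos = s1 + 2 * s2 + 4 * s4   # 0 means no single-bit error
    if error_pos:
        bits = list(bits)
        bits[error_pos - 1] ^= 1       # flip the faulty bit
    return [bits[2], bits[4], bits[5], bits[6]]   # data bits d3, d5, d6, d7

# Codeword for data 0001 with bit 5 flipped; the decoder corrects it.
print(hamming74_decode([1, 1, 0, 1, 1, 0, 1]))    # -> [0, 0, 0, 1]
```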
I am studying an Advanced Electronics subject and I came across the topics of one-quadrant and four-quadrant multipliers. What I can't understand is how a one-quadrant multiplier can operate only when both inputs are positive (e.g. +2 and +1) but cannot operate when the input voltages are both negative or of opposite polarity (+-, -+, --), while a four-quadrant multiplier can operate on all four possible input sign combinations (++, -+, +-, --). I would be very happy for an explanation. Thank you.
[Four quadrant multiplier / one quadrant multiplier](https://i.stack.imgur.com/Bkg2H.jpg)
I just don't understand why a one-quadrant multiplier cannot operate when the inputs are +-, -+, or --. I just want an explanation of why it cannot operate on these values, and also how a four-quadrant multiplier can operate on them.
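This doesn't answer the circuit-level "why", but a toy model of the terminology used in the question may help fix the definitions (function names are invented here):

```python
def one_quadrant_multiply(x, y):
    """Ideal one-quadrant multiplier: its output is only defined when
    both inputs lie in the ++ quadrant of the input plane."""
    if x < 0 or y < 0:
        raise ValueError("inputs outside the first quadrant")
    return x * y

def four_quadrant_multiply(x, y):
    """Ideal four-quadrant multiplier: valid for ++, +-, -+ and --;
    the output sign follows the algebraic product."""
    return x * y

print(one_quadrant_multiply(+2, +1))    #  2
print(four_quadrant_multiply(+2, -1))   # -2: sign of the product preserved
```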
I am required to turn an image into N triangles with Delaunay triangulation, one color for each triangle (colors can be repeated).
The loss function is the sum over pixels of the squared difference between each pixel's color in the original image and in the triangulated one.
So how do I optimize the colors and the vertices of the triangles?
A recursive splitting procedure outline:

- Terminate the recursion if N < 2.
- Split the given area A into two triangles A1 and A2 in such a way that the sum of the standard deviations of the pixel colors is divided evenly between them.
- Assign a budget of N/2 triangles (colors) to A1 and N - N/2 to A2.
- Recursively split A1 and A2.
The resulting net of N triangles is colored to minimize the loss function:
for every triangle, the chosen color is the average color of the pixels within it, since the mean is the constant that minimizes a sum of squared differences.
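A minimal Python sketch of this outline. For simplicity it splits each triangle across the midpoint of its longest edge rather than searching for the cut that balances the standard deviations, and `triangle_mask` and `split_recursive` are names invented here:

```python
import numpy as np

def triangle_mask(shape, tri):
    """Boolean mask of the pixels inside triangle `tri` (3x2 array of (x, y))."""
    h, w = shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1)
    def side(a, b, p):          # sign of the cross product: which side of edge a-b
        return (b[0]-a[0])*(p[:,1]-a[1]) - (b[1]-a[1])*(p[:,0]-a[0])
    s1 = side(tri[0], tri[1], pts)
    s2 = side(tri[1], tri[2], pts)
    s3 = side(tri[2], tri[0], pts)
    inside = ((s1 >= 0) & (s2 >= 0) & (s3 >= 0)) | \
             ((s1 <= 0) & (s2 <= 0) & (s3 <= 0))
    return inside.reshape(h, w)

def split_recursive(image, tri, n):
    """Recursively split `tri` into `n` triangles; returns (triangle, color) pairs."""
    mask = triangle_mask(image.shape, tri)
    if n < 2 or not mask.any():
        # Optimal constant color under squared loss is the mean.
        color = image[mask].mean(axis=0) if mask.any() else np.zeros(image.shape[2])
        return [(tri, color)]
    # Simplification (assumption): cut across the longest edge's midpoint
    # instead of searching for the std-dev-balancing split.
    edges = [(0, 1), (1, 2), (2, 0)]
    i, j = max(edges, key=lambda e: np.linalg.norm(tri[e[0]] - tri[e[1]]))
    k = 3 - i - j
    mid = (tri[i] + tri[j]) / 2.0
    a1 = np.array([tri[i], mid, tri[k]])
    a2 = np.array([mid, tri[j], tri[k]])
    return (split_recursive(image, a1, n // 2) +
            split_recursive(image, a2, n - n // 2))
```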
It might be worthwhile to conduct a survey of the existing literature on the topic. A first search engine hit returned *Fractal image compression based on Delaunay triangulation and vector quantization*.
At first blush this presumably means:
(1) look only at lower IR frequencies;
(2) select an IR frequency cut-off for the low-frequency buckets of the u/v FFT grid;
(3) once we have that, derive the power distribution (squares of amplitudes) for the IR range of frequency buckets the camera supports;
(4) fit that distribution against the classical [Rayleigh–Jeans black-body radiation formula](https://en.wikipedia.org/wiki/Rayleigh%E2%80%93Jeans_law#Other_forms_of_Rayleigh%E2%80%93Jeans_law) (a least-squares sketch follows after this list);
(5) assign a temperature of best fit.
The Rayleigh–Jeans law gives B(ν, T) = 2ν²kT/c², so the units of B(ν, T) are power per unit frequency per unit surface area (per unit solid angle) at equilibrium temperature T.
Of course, this leaves many details out, such as (6) cancelling the background, but one could perhaps use the opposite-facing camera to assist with that. Where buckets do not straddle the temperature of interest, (7) use a one-sided distribution to derive an inferred Gaussian curve that fits the Rayleigh–Jeans curve at the derived central frequency ν for the measured temperature T.
Finally, (8) check whether this procedure can consistently distinguish a high from a low surface temperature, and (9) check whether it can consistently identify a 'fever' temperature (say, 101 °F / 38.3 °C) when pointed at a forehead.
If all that can be done, (10) voilà: a body fever detector.
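For steps (4)–(5), here is a minimal Python sketch of the fit, under the (hypothetical) assumption that the FFT bucket powers were actually proportional to spectral radiance; `fit_temperature` is a name invented here. Because B(ν, T) is linear in T, the least-squares temperature has a closed form:

```python
import numpy as np

# Rayleigh-Jeans law: B(nu, T) = 2 * nu**2 * k_B * T / c**2
K_B = 1.380649e-23    # Boltzmann constant, J/K
C = 2.99792458e8      # speed of light, m/s

def fit_temperature(nu, power):
    """Least-squares fit of a measured power distribution to the
    Rayleigh-Jeans curve. `nu` and `power` are hypothetical arrays of
    frequency buckets and their measured power (squared amplitudes),
    as in steps (3)-(5)."""
    model = 2.0 * nu**2 * K_B / C**2          # B(nu, T) = model * T
    # Minimizing sum((power - model*T)^2) over T gives:
    return float(np.dot(model, power) / np.dot(model, model))

# Toy check: synthesize power at T = 311 K (~100.1 F) and recover it.
nu = np.linspace(1e13, 3e13, 8)               # IR frequency buckets, Hz
power = 2.0 * nu**2 * K_B * 311.0 / C**2
print(fit_temperature(nu, power))             # ~311.0
```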
So, can those who are capable fill us in on whether this is possible, for eventual posting at an app store as a free COVID-19 safe-body-temperature app? I have a strong sense there are quite a few people out there who could verify this in a week or two!
It appears that the analog signal assumed in (1) and (2) is not available in the Android digital Camera2 interface.
The Android RAW image stream, i.e. uncompressed YUV, is already encoded: Y is a green-weighted monochrome (luma) channel, and U and V are the blue and red shifts from zero used to convert that monochrome back to color.
The original analog frequency/energy signal is not directly accessible, so this adaptation is not possible (yet).
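For reference, a minimal sketch of what that YUV encoding means, assuming the BT.601 conversion coefficients (Android may use a slightly different matrix):

```python
import numpy as np

def yuv_to_rgb(y, u, v):
    """Convert one YUV pixel to RGB using BT.601 coefficients.
    y is the luma ("green-weighted monochrome") channel in 0..1;
    u and v are the blue/red shifts centered on zero."""
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    return np.clip([r, g, b], 0.0, 1.0)

print(yuv_to_rgb(0.5, 0.0, 0.0))   # pure luma -> mid grey [0.5 0.5 0.5]
```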
I am learning about attention models and following along with Jay Alammar's amazing blog tutorial, The Illustrated Transformer. He gives a great walkthrough of how the attention scores are calculated, but I get a bit lost at a certain point and am not seeing how the attention output matrix Z he explains is used to interpret the strength of associations between different words within an input sequence.
He mentions that, given some input matrix X with shape N x D, where N is the number of elements in an input sequence and D is the input dimensionality, we multiply X with three separate weight matrices of shape D x d, where d is some lower dimensionality that represents the projected space of the query, key, and value matrices.
The query and key matrices are multiplied (dotted), the result is divided by a scaling factor (usually the square root of the projected dimensionality d), and then run through a softmax function. This produces a weight matrix of size N x N, which is multiplied by the value matrix to get an output Z of shape N x d, about which Jay says:
> That concludes the self-attention calculation. The resulting vector is one we can send along to the feed-forward neural network.
The screenshot from his blog for this calculation is below:
However, this is where I'm confused. Z is N x d, and I don't particularly understand what I'm supposed to do with this matrix in an interpretability sense. As far as I understand, for a particular sequence element (e.g. the word *cats* in the sequence "I love pets, especially cats"), self-attention is supposed to score other parts of the sequence high when they are relevant or strongly associated with that word embedding. I'd therefore expect Z to be N x N, so that I could select Z[i,j] and say that for the i-th word in the sequence, the j-th word relates or associates with it this much or that much.
In fact, wouldn't it make much more sense to use only the softmax output of the weights (without multiplying them by the value matrix), since it already is N x N? In essence, how is Jay determining the strength of these associations with the word *it* in this particular sequence?
This is an N by 1 relationship he is showing: there are N values that correspond to the strength of association with the word *it*.
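For concreteness, here is a minimal NumPy sketch of the computation described above (the dimensions and weights are made up). The N x N matrix A is the softmax weight matrix the question is asking about; Z is the N x d output:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

N, D, d = 5, 16, 8                      # sequence length, input dim, projected dim
rng = np.random.default_rng(0)
X = rng.normal(size=(N, D))             # input embeddings
Wq, Wk, Wv = (rng.normal(size=(D, d)) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv        # queries, keys, values: each N x d
A = softmax(Q @ K.T / np.sqrt(d))       # N x N attention weights; A[i, j] is
                                        # how strongly position i attends to j
Z = A @ V                               # N x d output passed on to the FFN

# For interpretability, read a row of A (e.g. the row for "it"), not Z:
# Z mixes the value vectors and has no per-word-pair meaning on its own.
print(A[-1])
```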
When checking this method, I was expecting red, green, and blue to be in the 0-255 range. Instead, they're in 0-1.
Am I the only one who thinks this is weird?
Is there any reason not to use the more common 0-255 values for RGB, or even hex numbers (as in HTML)?
In my opinion this is not weird. Both 0-255 and 0.0-1.0 levels are widely used on different platforms. You can always convert between them with something like this:
#define FLOAT_COLOR_VALUE(n) ((n)/255.0)
The reason RGB values are sometimes represented as floats rather than 0 to 255 is that 0 to 255 assumes you are using 8 bits for each colour component, and hence 24 bits for each colour in your frame buffers. That may not be the case if you are using displays that only support 256 colours in total, or ones that support far more than 16 million.
In theory there can be an infinite number of shades of red, green, or blue. The number of bits you use to represent them depends on how accurately you need to represent colour and how much memory you have on graphics cards to hold images.
For many cases 0 to 255 is fine. But there is another world out there where it isn't fine, and for those devices and accurate-rendering requirements, floating-point numbers provide a much-needed alternative.
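A minimal Python sketch of the conversions in question, including the HTML-style hex form the question mentions (function names are invented here):

```python
def to_float(c8):
    """Convert an 8-bit channel (0-255) to the 0.0-1.0 range."""
    return c8 / 255.0

def to_8bit(cf):
    """Quantize a 0.0-1.0 channel back to 8 bits (lossy for deeper colour)."""
    return round(max(0.0, min(1.0, cf)) * 255)

print(to_float(128))    # 0.50196...
print(to_8bit(0.5))     # 128
print(f"#{to_8bit(1.0):02x}{to_8bit(0.5):02x}{to_8bit(0.0):02x}")  # #ff8000
```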