number of bits in an RGB image - embedded

I chanced upon this statement while studying the allocation of RAM in an embedded device.
Basically, suppose we have an image sensor that uses the RGB 5-6-5 format and captures an image of size 320x240. The author writes:
"There are two 150-KB data buffers that contain the raw data from the image sensor (320x240 in the
RGB 5-6-5 format). "
Does anyone know how two 150-KB data buffers are enough to store the raw image? How can I calculate the image size in bits?
I tried calculating:
(2^5 * 2^6 * 2^5 * 320 * 240) * 0.000125 = 629145.6 // in KB

You should look closer at the definition of the RGB 5:6:5 format. Each pixel takes up 2 bytes (5 bits for red, 6 bits for green and 5 bits for blue, adding up to 16 bits == 2 bytes), so a raw 320x240 picture takes 320 * 240 * 2 bytes, i.e. 153600 bytes or 150 KB. Your calculation multiplies the number of representable values per channel (2^5, 2^6, 2^5), but storage depends on the number of bits per channel, not on how many distinct values those bits can encode.
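
As a minimal C sketch of that arithmetic (nothing sensor-specific here, just pixel count times bytes per pixel):

#include <stdio.h>

/* RGB 5-6-5: every pixel occupies exactly 16 bits (5 + 6 + 5), i.e.
 * 2 bytes, regardless of how many values each channel can represent. */
int main(void)
{
    const unsigned width        = 320;
    const unsigned height       = 240;
    const unsigned bytes_per_px = 2;   /* 16 bits */

    unsigned long bytes = (unsigned long)width * height * bytes_per_px;
    printf("%lu bytes = %lu KB\n", bytes, bytes / 1024); /* 153600 bytes = 150 KB */
    return 0;
}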

Related

Using GPU to rasterize image with 128 color channels

I need to rasterize a multispectral image, where each pixel contains the intensity (8 bits) at 128 different wavelengths, for a total of 1024 bits/pixel.
Currently I am using OpenGL, and rasterizing in 43 passes, each producing an image with 3 of the 128 channels, but this is too slow.
Is it possible to do it in a single pass by somehow telling the GPU to rasterize a 128 color component image (not necessarily using OpenGL)?

How to get FLAC frame length

I'm studying a FLAC decoding problem, but I can't figure out how to get the FLAC frame length. Please help.
https://xiph.org/flac/format.html
I decoded METADATA_BLOCK_STREAMINFO and got the data below:
mMinBlock: 4096
mMaxBlock: 4096
mMinFrame: 1201
mMaxFrame: 12804
mSampleRate: 44100
mBitPerSample: 16
mTotalSample: 14170212
Then I started to analyse the first frame; below is the info from the first frame header:
isFixBlock = true
blockSize = 12
sampleRate = 9
channel = 10
sampleSize = 4
number = 0
The blocking strategy is fixed-blocksize;
Block size code: 1100, which means 256 * 2^(12-8) = 4096 samples;
Sample rate code: 1001, which means 44.1 kHz;
Channels: 2;
Sample size code: 100, which means 16 bits per sample.
So from the above information we know this frame has 4096 samples at 16 bits per sample. That means the frame length should be at least (ignoring the subframe headers, frame footer, etc.) 4096 * 16 / 8 = 8192 bytes. But if I check the FLAC file manually, the offset gap between the first and second frames is only 2976 bytes, which means the length of the first frame is only 2976 bytes. Is there anything wrong with my calculation?
My goal is to get the frame offset and frame length of every frame; is there a good way to do this? I know there is the sync code 0xFF F8, but scanning for it is very inefficient.
Thanks in advance!
From http://lists.xiph.org/pipermail/flac-dev/2016-February/005845.html
The frame length you calculated (8192 bytes) is that of the decoded frame, not of the FLAC frame. As it is compressed, it should indeed be smaller than 8192 bytes.
There is no direct way to find the frame length except finding where the
next frame starts.
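
A minimal C sketch of that approach, assuming the whole file is already in memory (find_frames is a hypothetical helper; offsets found this way are only candidates, since the sync pattern can also occur by chance inside compressed audio data, so a real parser should verify the frame header CRC-8 before trusting them):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Scan for the FLAC frame sync code: byte 0xFF followed by 0xF8 or
 * 0xF9 (the last bit is the blocking strategy). The gap between
 * consecutive candidates is the compressed frame length. */
static void find_frames(const uint8_t *buf, size_t len)
{
    size_t prev = (size_t)-1;
    for (size_t i = 0; i + 1 < len; i++) {
        if (buf[i] == 0xFF && (buf[i + 1] & 0xFE) == 0xF8) {
            if (prev != (size_t)-1)
                printf("frame at %zu, length %zu bytes\n", prev, i - prev);
            prev = i;
        }
    }
    if (prev != (size_t)-1)
        printf("frame at %zu, length %zu bytes (last)\n", prev, len - prev);
}

int main(void)
{
    /* toy buffer with two fake sync codes, just to exercise the scan */
    const uint8_t demo[] = {0xFF, 0xF8, 0x01, 0x02, 0xFF, 0xF9, 0x03};
    find_frames(demo, sizeof demo);
    return 0;
}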

Anyone know 10-bit raw RGB? (about Omnivision)

I'm using the Omnivision OV5620.
http://electronics123.net/amazon/datasheet/OV5620_CLCC_DS%20(1.3).pdf
This is the datasheet.
There you can see the output format: 10-bit digital RGB raw data.
First, I know that RGB raw data is a Bayer array.
So does 10-bit RGB mean each channel has a 1024-level scale, i.e. a range of 0~1023?
Or is it 8-bit RGB per channel, with the pixels' leftover LSBs forming a new fifth byte of pixel data?
Please refer to the image.
Which is correct?
They pack every four adjacent 10-bit pixels (0..1023) of a line into 5 sequential bytes, where each of the first 4 bytes contains the 8 MSBs of one pixel, and the 5th byte contains the 2 LSBs of all four pixels packed together into one byte.
This is a convenient format, because if you want to convert it to RGB8 you just ignore that fifth byte.
Also, each displayed line begins with a packet header (PH) byte and terminates with a packet footer (PF) byte, and the whole frame begins with a frame start (FS) byte and terminates with a frame end (FE) byte.
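
A small C sketch of unpacking one such 5-byte group, assuming the 5th byte carries pixel 0's two LSBs in its lowest bits (the exact LSB ordering within that byte is an assumption; check the datasheet):

#include <stdint.h>
#include <stdio.h>

/* Bytes 0..3 hold the 8 MSBs of pixels 0..3; byte 4 holds the 2 LSBs
 * of each pixel. To convert to 8-bit raw, just drop byte 4. */
static void unpack_raw10(const uint8_t in[5], uint16_t out[4])
{
    for (int i = 0; i < 4; i++)
        out[i] = (uint16_t)((in[i] << 2) | ((in[4] >> (2 * i)) & 0x3));
}

int main(void)
{
    const uint8_t group[5] = {0xAB, 0xCD, 0xEF, 0x12, 0x1B};
    uint16_t px[4];
    unpack_raw10(group, px);
    for (int i = 0; i < 4; i++)
        printf("pixel %d = %u\n", i, px[i]); /* each in 0..1023 */
    return 0;
}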

Extra bytes on the end of YUV buffer - RaspberryPi

I've started editing the RaspiStillYUV.c code. I eventually want to process the image I receive, but for now, I'm just working to understand it. Why am I working with YUV instead of RGB? So I can learn something new. I've made minor changes to the function camera_buffer_callback. All I am doing is the following:
fprintf(stderr, "GREAT SUCCESS! %d\n", buffer->length);
The line this is replacing:
bytes_written = fwrite(buffer->data, 1, buffer->length, pData->file_handle);
Now, the dimensions should be 2592 x 1944 (w x h) as set in the code. Working off of Wikipedia (YUV420), I have concluded that the file size should be w * h * 1.5, since the Y component has 1 byte of data for each pixel and the U and V components each have 1 byte of data for every 4 pixels (1 + 1/4 + 1/4 = 1.5). Great. Doing the math in Python:
>>> 2592 * 1944 * 1.5
7558272.0
Unfortunately, this does not line up with the output of my program:
GREAT SUCCESS! 7589376
That leaves a difference of 31104 bytes.
I figure that the buffer is allocated in fixed-size chunks (the output size is evenly divisible by 512). While I would like to understand that mystery, I'm fine with the fixed-size-chunk explanation.
My question is if I am missing something. Are the extra bytes beyond the expected size meaningful in this format? Should they be ignored? Are my calculations off?
The documentation at this location supports your theory on padding: http://www.raspberrypi.org/wp-content/uploads/2013/07/RaspiCam-Documentation.pdf
Specifically:
Note that the image buffers saved in raspistillyuv are padded to a horizontal size divisible by 16 (so there may be unused bytes at the end of each line to make the width divisible by 16). Buffers are also padded vertically to be divisible by 16, and in the YUV mode, each plane of Y,U,V is padded in this way.
So my interpretation of this is the following:
The width is 2592 (divisible by 16, so this is OK).
The height is 1944, which is 8 short of being divisible by 16, so an extra 8 * 2592 bytes are added (also multiplied by 1.5), giving your 31104 extra bytes.
Although this kind of helps with the size of the file, it doesn't properly explain the structure of the YUV output. I am having a look at this description to see if it provides a hint to start with: http://en.wikipedia.org/wiki/YUV#Y.27UV420p_.28and_Y.27V12_or_YV12.29_to_RGB888_conversion
From this I believe it is as follows:
Y channel: 2592 * (1944 + 8) = 5059584
U channel: 1296 * (972 + 4) = 1264896
V channel: 1296 * (972 + 4) = 1264896
Giving a sum of: 5059584 + 2 * 1264896 = 7589376
This makes the numbers add up, so the only thing left is to confirm whether this interpretation is correct.
I am also trying to do the YUV decode (for image comparisons), so if you can confirm whether this actually corresponds to what you are reading in the YUV file, that would be much appreciated.
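
If that interpretation is right, the calculation can be written as a small C sketch (pad16 is just an illustrative helper for rounding up to a multiple of 16, assuming each plane is padded independently as the documentation quote says):

#include <stdio.h>

/* YUV420: U and V planes are at half resolution in both directions;
 * every plane's width and height are padded up to a multiple of 16. */
static unsigned pad16(unsigned v) { return (v + 15) & ~15u; }

int main(void)
{
    const unsigned w = 2592, h = 1944;
    unsigned long y  = (unsigned long)pad16(w) * pad16(h);
    unsigned long uv = (unsigned long)pad16(w / 2) * pad16(h / 2);
    printf("Y = %lu, U = V = %lu, total = %lu\n", y, uv, y + 2 * uv);
    /* prints: Y = 5059584, U = V = 1264896, total = 7589376 */
    return 0;
}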
You have to read the manual carefully. Buffers are padded to multiples of 16, but colour data is half-size, so your image size needs to be in multiples of 32 to avoid problems with padding breaking external software.

how to calculate how much data can be embedded into an image

I want to know how much data can be embedded into images of different sizes.
For example, in a 30-KB image file, how much data can be stored without distorting the image?
It depends on the image type and the algorithm. If I take as an example a 24-bit bitmap image used to store ASCII characters, hiding one bit per pixel:
Number of ASCII characters that can be stored = number of pixels / 8 (one ASCII character = 8 bits).
It depends on two points:
How many bits per pixel your image has.
How many bits you will embed in one pixel.
OK, let's suppose that your color model is RGB and each pixel = 8 * 3 bits (one byte for each color), and you want to embed 3 bits in one pixel.
Data that can be embedded into the image = (number of pixels * 3) bits.
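
A quick C sketch of that formula (capacity_bytes is just an illustrative helper; the 3 hidden bits per pixel come from the example above, not from any fixed rule):

#include <stdio.h>

/* Embedding capacity in bytes when hiding `hidden_bits` bits in each
 * pixel (3 in the example above: one LSB in each of R, G and B). */
static unsigned long capacity_bytes(unsigned long pixels, unsigned hidden_bits)
{
    return pixels * hidden_bits / 8;
}

int main(void)
{
    /* e.g. a 320x240 image with 3 hidden bits per pixel */
    printf("%lu bytes\n", capacity_bytes(320UL * 240UL, 3)); /* 28800 */
    return 0;
}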
If you use the LSBs to hide your information, a 30-KB image gives you 30000 bits of available space, i.e. 3750 bytes.
As the LSB contributes a 1 or a 0 to a byte that takes values from 0-255, the worst-case scenario, in which you modify all the LSBs, gives a distortion of 1/256, which equals about 0.4%.
In the statistically average scenario, where only half the LSBs actually change, you get about 0.2% distortion.
So it depends on which bit of the byte you are going to change.
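
And a minimal sketch of the LSB embedding itself, assuming one hidden bit per image byte (embed_lsb is a hypothetical helper, not from any particular library):

#include <stddef.h>
#include <stdint.h>

/* Overwrite the least significant bit of each image byte with one
 * payload bit (MSB-first). Changing an LSB alters a byte by at most
 * 1 of its 256 levels (~0.4%); on average only half the LSBs change. */
static void embed_lsb(uint8_t *image, size_t image_len,
                      const uint8_t *payload, size_t payload_len)
{
    /* capacity check: one payload bit per image byte */
    if (payload_len * 8 > image_len)
        return;

    for (size_t bit = 0; bit < payload_len * 8; bit++) {
        uint8_t b = (payload[bit / 8] >> (7 - bit % 8)) & 1;
        image[bit] = (uint8_t)((image[bit] & ~1u) | b);
    }
}

int main(void)
{
    uint8_t image[32] = {0};             /* stand-in for pixel bytes */
    const uint8_t secret[] = {'H', 'i'}; /* 2 bytes = 16 bits */
    embed_lsb(image, sizeof image, secret, sizeof secret);
    return 0;
}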