dcraw: preserving 14-bit depth in a 16-bit TIFF output

How is it possible to convert a 14-bit raw file (Nikon D7000 NEF) to a 16-bit TIFF while preserving the bit depth (i.e. a maximum of 16383) rather than getting a maximum of 65535?
I'm using
dcraw -T -4 input.NEF
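Assuming the issue is that dcraw -4 scales the data up to the full 16-bit range, one workaround is to shift the result right by two bits, which maps 0-65535 back onto the 14-bit 0-16383 range. A minimal NumPy sketch with synthetic values (not read from an actual NEF):

```python
import numpy as np

# Synthetic stand-in for a 16-bit TIFF produced by dcraw -T -4;
# the values here are invented, just to illustrate the mapping.
img16 = np.array([0, 4, 65532, 65535], dtype=np.uint16)

# Shifting right by 2 bits divides by 4, mapping the scaled
# 0-65535 range back onto the 14-bit 0-16383 range.
img14 = img16 >> 2
print(img14.max())  # 16383
```

Alternatively, dcraw's document mode (-D) is supposed to write the raw sensor values without any scaling, though it also skips demosaicing.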

Related

number of bits in an RGB image

I chanced upon this statement while studying the allocation of RAM in an embedded device.
Basically, suppose we have an image sensor that uses the RGB 5-6-5 format and captures an image of size 320x240. The author states:
"There are two 150-KB data buffers that contain the raw data from the image sensor (320x240 in the
RGB 5-6-5 format)."
Does anyone know how two 150-KB data buffers are enough to store the raw image? How can I calculate the image size in bits?
I tried calculating
( 2^5 * 2^6 * 2^5 * 320 * 240 ) * 0.000125 = 629145.6 // in KB.
You should look closer at the definition of the RGB 5:6:5 format. Each pixel takes up 2 bytes (5 bits for red, 6 bits for green and 5 bits for blue, adding up to 16 bits == 2 bytes), so a raw 320x240 picture takes 320 * 240 * 2 bytes, i.e. 153600 bytes or 150 KB.
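The arithmetic can be checked directly; the mistake in the question was multiplying the per-channel level counts (2^5 * 2^6 * 2^5) instead of adding the bit widths (5 + 6 + 5):

```python
# Each RGB 5-6-5 pixel is 5 + 6 + 5 = 16 bits = 2 bytes,
# so the buffer size is width * height * 2.
width, height = 320, 240
bits_per_pixel = 5 + 6 + 5            # 16 bits, NOT 2**5 * 2**6 * 2**5
bytes_per_pixel = bits_per_pixel // 8
buffer_bytes = width * height * bytes_per_pixel
print(buffer_bytes)          # 153600 bytes
print(buffer_bytes / 1024)   # 150.0 KB
```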

Converting TIFF to JPEG with gdal_translate

I'm converting a 16-bit TIFF image to JPEG with the gdal_translate library, but it drops from 16 bits to 8 bits. Is there any way I can keep the bit depth at 16? Or do I need a TIFF image with a higher bit depth?
Options:
scale = "0-65535"
options_list = [
    '-ot Byte',
    '-of JPEG',
    '-b 1',
    '-co QUALITY=100',
    scale
]
You have used -ot Byte which is 8 bits by definition. I don't think UInt16 JPEG is supported. It should work with PNG.
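For reference, a corrected options list along the lines the answer suggests, assuming the GDAL PNG driver is available (the filenames are placeholders):

```python
# Sketch of corrected gdal_translate options: standard JPEG is
# 8-bit only, so to keep 16 bits write a PNG with -ot UInt16
# instead of -ot Byte.
options_list = [
    '-ot', 'UInt16',  # keep 16-bit samples instead of Byte
    '-of', 'PNG',     # PNG supports 16-bit samples; JPEG does not
    '-b', '1',
]
cmd = 'gdal_translate ' + ' '.join(options_list) + ' input.tif output.png'
print(cmd)
```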

How to convert 16 bit grayscale image to 8 bit using linear mapping with libvips

I would like to convert a 16-bit grayscale image to 8-bit, where the lowest value of the 16-bit image becomes 0 and the highest becomes 255.
As far as I can see, I can call vips-hist-norm, but that will map the values across the full 16-bit range.
However, it's unclear to me how I can then convert to 8-bit.
vips-scale will perform linear mapping of current type to uchar type.
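To make that concrete, here is roughly what such a linear min/max mapping does, sketched in NumPy with made-up values (vips-scale is described as searching the image for its actual minimum and maximum in a similar way):

```python
import numpy as np

# Synthetic 16-bit grayscale data, just to illustrate the mapping.
img16 = np.array([[1000, 2000], [30000, 50000]], dtype=np.uint16)

# Linear mapping: the image minimum becomes 0, the maximum becomes 255.
lo, hi = int(img16.min()), int(img16.max())
img8 = ((img16.astype(np.float64) - lo) * 255.0 / (hi - lo)).round().astype(np.uint8)
print(img8.min(), img8.max())  # 0 255
```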

gdal_translate 8bits (Byte format), how to handle nodata-values

I have a satellite image in GTiff with a range of 0 - 65535 which I need to convert to Byte format (0-255).
using:
gdal_translate -a_nodata 0 -scale 0 65535 0 255 -ot Byte infile.tif outfile.tif
This works fine, but I get a lot of pixels that are rounded down (truncated) to 0, which is my nodata value; this means they become transparent when visualized.
I have tried playing around with -a_nodata 0 and -scale 1 65535 0 255, but I haven't been able to find a solution that works for me.
What I'm looking for is getting 0 as nodata and 1-255 as the data range.
If anybody else stumbles onto this, I would just like to post the solution I found.
The routine gdal_calc.py, which enables one to use Python functions from e.g. numpy and math, can do the trick easily.
gdal_calc.py -A inputfile.tif --outfile=outputfile.tif --calc="A/258+1" --NoDataValue=0
Then one just needs to convert it to Byte format with gdal_translate or gdalwarp (if one needs to re-grid the data as well).
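As a sanity check on that expression, here is the A/258 + 1 mapping sketched in NumPy (the sample values are arbitrary; integer division mirrors the truncation that happens when the result is written as Byte):

```python
import numpy as np

# The gdal_calc.py expression A/258 + 1 compresses the full 16-bit
# range into 1..255, leaving 0 free to serve as the nodata value.
A = np.array([0, 258, 32768, 65535], dtype=np.uint16)
out = (A // 258 + 1).astype(np.uint8)
print(out)  # [  1   2 128 255]
```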

How to write integer value "60" in 16bit binary, 32bit binary & 64bit binary

How to write integer value "60" in other binary formats?
8bit binary code of 60 = 111100
16bit binary code of 60 = ?
32bit binary code of 60 = ?
64bit binary code of 60 = ?
Is it 111100000000 for 16-bit binary?
Why does the 8-bit binary code contain 6 bits instead of 8?
I googled for the answers but couldn't find them. Please provide answers, as I'm still a beginner in this area.
Imagine you're writing the decimal value 60. You can write it using 2 digits, 4 digits or 8 digits:
1. 60
2. 0060
3. 00000060
In our decimal notation, the most significant digits are to the left, so increasing the number of digits for representation, without changing the value, means just adding zeros to the left.
Now, in most binary representations, this would be the same. The decimal 60 needs only 6 bits to represent it, so an 8bit or 16bit representation would be the same, except for the left-padding of zeros:
1. 00111100
2. 00000000 00111100
Note: Some OSs, software, hardware or storage devices might have different endianness, which means they might store 16-bit values with the least significant byte first, then the most significant byte. Binary notation is still MSB-on-the-left, as above, but reading the memory of such little-endian devices will show that any 16-bit chunk is internally reversed:
1. 00111100 - 8bit - still the same.
2. 00111100 00000000 - 16bit, bytes are flipped.
Every number has exactly one binary representation.
On a 16/32/64-bit system, 111100 (60) would look the same, just with more zeros added in front of the number (usually not shown):
16-bit: 0000000000111100
32-bit: 00000000000000000000000000111100
and so on.
For storage, endianness matters; otherwise, zeros are always prefixed up to the bit width, so 60 would be:
8-bit: 00111100
16-bit: 0000000000111100
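Both points can be checked quickly in Python: format() handles the zero-padding to any width, and struct shows how endianness only affects the stored byte order, not the notation:

```python
import struct

# Zero-padding 60 to different bit widths.
print(format(60, '08b'))   # 00111100
print(format(60, '016b'))  # 0000000000111100
print(format(60, '032b'))  # same pattern, 26 leading zeros

# Endianness affects only the byte order in storage:
# '<' packs little-endian, '>' packs big-endian.
le = struct.pack('<H', 60)  # least significant byte first
be = struct.pack('>H', 60)  # most significant byte first
print(le.hex(), be.hex())   # 3c00 003c
```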