I need to encode an image in 16UC1 format, but I receive the error:
cv_bridge.core.CvBridgeError: encoding specified as 16UC1, but image has incompatible type 32FC1
I tried to use the skimage function img_as_uint, but since my image values are not between -1 and 1 it doesn't work. I also tried to "normalize" my values by dividing all of them by the value obtained from np.amax, but passing the result to the skimage function still only returns a blank image.
Is there a way of achieving this conversion?
This is the original 32FC1 image
With numpy you should be able to:
import numpy as np
img = np.random.normal(0, 1, (300, 300, 3)).astype(np.float32) # simulated image
uimg = img.astype(np.uint16)  # cast to 16-bit unsigned (no rescaling)
You will probably first want to do some kind of normalization if the data isn't already in an unsigned range. Probably something like:
img_normalized = (img - img.min()) / (img.max() - img.min()) * 65535  # 65535 is the largest uint16 value; scaling by 256**2 would overflow the brightest pixel
But your normalization strategy will depend on what you want to accomplish.
Thanks for sharing an image. I can visualize as follows:
import numpy as np
import matplotlib.pyplot as plt
arr = np.load('32FC1_image.npz')
img = arr['arr_0']
img = np.squeeze(img)  # drop the extra singleton dimensions; they stop matplotlib from treating the array as an image and may also be causing your problems
img_normalized = (img - img.min()) / (img.max() - img.min()) * 65535  # scale into the uint16 range
img_normalized = img_normalized.astype(np.uint16)
plt.imshow(img_normalized)
Try using the normalized 16 bit image.
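If the end goal is to hand the array to cv_bridge as 16UC1, a minimal sketch of that last step could look like the following (this assumes a standard ROS cv_bridge setup and reuses img from above; adapt the message handling to your own node):
import numpy as np
from cv_bridge import CvBridge

bridge = CvBridge()

# rescale the 32FC1 data into the uint16 range and cast
img_u16 = ((img - img.min()) / (img.max() - img.min()) * 65535).astype(np.uint16)

# cv_bridge should now accept the array as 16UC1
msg = bridge.cv2_to_imgmsg(img_u16, encoding='16UC1')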
I'm trying to convert the NumPy array of a PIL image I have into a binary one, but nothing I have tried so far works.
This is what I have so far:
from PIL import Image
import numpy as np
pixels=np.array(Image.open("covid_encrypted_new.png").getdata())
def to_bin(pixels):
    return [format(i, "08b") for i in pixels]
I also tried iterating over the array and converting each value to bin, but that didn't go well either.
What else can I try?
thanks
This could be what you're looking for.
Originally from here: How to read the file and convert it to a binary image in Python
from PIL import Image
import numpy as np

# Read image
img = Image.open(file_path)
# Convert image to a NumPy array
img = np.array(img)
# Apply a threshold to make it binary, and cast to uint8 so PIL can handle it
binarr = np.where(img > 128, 255, 0).astype(np.uint8)
# Convert the NumPy array back to an image
binimg = Image.fromarray(binarr)
You could even use OpenCV for the conversion:
import cv2
img = np.array(Image.open(file_path))
_, bin_img = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
I separated the 3 channels of a colour image. I created a new NumPy array of the same size as the image and stored the 3 channels of the image in 3 slices of the 3D NumPy array. After plotting the new array, the plotted image is not the same as the original. Why is this happening?
Both the img and new_img arrays have the same elements, but the displayed images are different.
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
img=mpimg.imread('/storage/emulated/0/1sumint/kali5.jpg')
new_img=np.empty(img.shape)
new_img[:,:,0]=img[:,:,0]
new_img[:,:,1]=img[:,:,1]
new_img[:,:,2]=img[:,:,2]
plt.imshow(new_img)
plt.show()
I expected the same image as the original.
The problem is that your new image will be created with the default data type of float64 on this line:
new_img=np.empty(img.shape)
unless you specify a different dtype. For a JPEG, img is uint8 with values in 0-255, but matplotlib treats float arrays as containing values in the range 0-1 and clips anything above, so the float64 copy is not displayed the same way as the original.
You can either (best) copy the original image's dtype like this:
new_img = np.empty(img.shape, dtype=img.dtype)
or use something like this:
new_img = np.zeros_like(img)
or (worst) specify one you happen to know matches your data, like this:
new_img = np.empty(img.shape, dtype=np.uint8)
I presume you have some reason for copying one channel at a time, but if not, you can avoid all the foregoing issues and just do:
new_img = np.copy(img)
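For illustration, a minimal sketch of the question's code with the fix applied (I shortened the file path; any JPEG will do):
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np

img = mpimg.imread('kali5.jpg')  # uint8 array for a JPEG

# keep the original dtype so imshow interprets the copy the same way
new_img = np.empty(img.shape, dtype=img.dtype)
new_img[:, :, 0] = img[:, :, 0]
new_img[:, :, 1] = img[:, :, 1]
new_img[:, :, 2] = img[:, :, 2]

plt.imshow(new_img)  # now renders the same as plt.imshow(img)
plt.show()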
I have an image which I converted to a NumPy array. Applying a median filter from the SciPy library to it changes all elements of the ndarray to zero. I don't know why this is happening, and I assume it should not happen.
from PIL import Image
import numpy as np
from scipy.signal import medfilt
train = np.array(Image.open("26.jpg").getdata(), dtype=float).reshape(176, 208, 1)
s = np.sum(train, axis=0)
print(s)
train = medfilt(train, kernel_size=3)
s1 = np.sum(train, axis=0)
print(s1)
Because of this issue I can't move on to further image processing.
medfilt effectively zero-pads at the boundaries. Since one of your dimensions has size 1, in that direction every pixel is sandwiched between two planes of zeros, which outvote everything else in the median.
Try omitting the third dimension
train = np.array(Image.open("26.jpg").getdata(), dtype=float).reshape(176, 208)
and you should be fine.
You can add it after filtering if need be.
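For instance, a minimal sketch that filters in 2D and then restores the singleton dimension afterwards (reusing the file name and shape from the question):
from PIL import Image
import numpy as np
from scipy.signal import medfilt

# load as a plain 2D array so the 3x3 kernel only sees real neighbours
train = np.array(Image.open("26.jpg").getdata(), dtype=float).reshape(176, 208)
train = medfilt(train, kernel_size=3)

# add the singleton dimension back if later steps expect shape (176, 208, 1)
train = train[:, :, np.newaxis]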
The original picture (size: 128*128) looks like this:
After using this function:
image = tf.image.resize_images(original_image, (128, 128))
I finally use plt.imshow() to show my hand picture, but the result no longer looks like the original.
The problem comes from tensorflow's resize_images function returning floats.
To properly resize and view the image you would need something like:
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
with tf.Session() as sess:
    tf.global_variables_initializer().run()
    image = tf.image.resize_images(original_image, (128, 128))
    # Cast image to np.uint8 so it can be properly viewed,
    # and eval() the tensor to get a numpy array.
    image = tf.cast(image, np.uint8).eval()
    plt.imshow(image)
The colours are inverted, i.e. each pixel's colour [r, g, b] is being displayed as [255 - r, 255 - g, 255 - b].
This could have something to do with the data type of the image you obtain in step 2. Try the following after resizing the image:
image = image.astype(np.uint8)
I will be using the tensorflow library as tf.
tf.image.resize resizes the images correctly, but it returns floats. When we then use plt.imshow on the result, any float data, be it 0.5 or 221.3, gets clipped into the range [0, 1], with the warning:
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
This was the problem in my case.
Original image pixel: [ 91 105 166]
After resizing: tf.Tensor([ 91.01 105.01 166.01], shape=(3,), dtype=float32)
You can see that the resizing is correct, but the clipping is what hurts.
To use the function properly:
img_resize = tf.image.resize(random_img, [250, 250])
img_resize = tf.cast(img_resize, 'int64')  # back to integers so imshow does not clip to [0, 1]
plt.imshow(img_resize)
This should take care of the issue.
I'm trying to convert NumPy arrays into the Nifti file format using Nibabel. Some of my NumPy arrays have dtype('<i8') when they should be dtype('uint8') when Nibabel reports the data type:
arr.get_data_dtype()
Does anyone know how to convert a NumPy array's data type and save the result?
The question in the title is slightly different from the question in the text. So...
If you want to change the data-type of a numpy array arr to np.int8, you are looking for arr.astype(np.int8).
Mind that you may lose precision due to data casting (see astype documentation)
To save it afterwards you may want to see ?np.save and ?np.savetxt (or check the pickle library, which can save more general objects than a NumPy array).
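For example, a minimal sketch (the array and file name here are made up):
import numpy as np

arr = np.arange(12, dtype=np.int64).reshape(3, 4)

# cast to int8; values outside [-128, 127] would be lost, see the note on precision above
arr_int8 = arr.astype(np.int8)

np.save('my_arr.npy', arr_int8)       # the .npy file preserves the dtype
print(np.load('my_arr.npy').dtype)    # int8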
If you want to change the data-type of a nifti image saved in my_image.nii.gz, you have to go for:
import nibabel as nib
import numpy as np
image = nib.load('my_image.nii.gz')
# to be extra sure of not overwriting data:
new_data = np.copy(image.get_data())
hd = image.header
# in case you want to remove nan:
new_data = np.nan_to_num(new_data)
# update data type:
new_dtype = np.int8 # for example to cast to int8.
new_data = new_data.astype(new_dtype)
image.set_data_dtype(new_dtype)
# if nifti1
if hd['sizeof_hdr'] == 348:
    new_image = nib.Nifti1Image(new_data, image.affine, header=hd)
# if nifti2
elif hd['sizeof_hdr'] == 540:
    new_image = nib.Nifti2Image(new_data, image.affine, header=hd)
else:
    raise IOError('Input image header problem')
nib.save(new_image, 'my_image_new_datatype.nii.gz')
Finally if you have a numpy array my_arr and you want to save it into a nifti image with a given data-type np.my_dtype, you can do:
import nibabel as nib
import numpy as np
new_image = nib.Nifti1Image(my_arr, np.eye(4))
new_image.set_data_dtype(np.my_dtype)
nib.save(new_image, 'my_arr.nii.gz')
Hope it helps!
NOTE: If you are using ITKsnap you may want to use np.float32, np.float64, np.uint16, np.uint8, np.int16, np.int8. Other choices may not produce images that can be opened with this software.
Seems like you could also do
import nibabel
img = nibabel.load(filename)
img.set_data_dtype(dtype)
img.to_filename(new_filename)
You can use nilearn for a tidy solution. Here is an example if you want to change the data type of a nifti image to int16:
from nilearn import image
import numpy as np
vol = image.load_img(input_file)
vol = image.new_img_like(vol, np.int16(vol.get_fdata()))
vol.to_filename(output_file)
Datatypes for .nii files can also be specified in the .to_filename() function:
import nibabel as nib
import numpy as np
new_image = nib.Nifti2Image(my_arr, affine)
new_image.to_filename(fn, dtype=np.uint8)