I have an image with dimensions (128, 19, 3) (its name is frame_0). I create a 5D zeros tensor (its name is e) and want to add the image to e[0, 0], but I get the error below. Thanks if anyone can help.
Error:
TypeError: Invalid dimensions for image data
import os
import cv2
import numpy as np
image_folder = 'D:\\thesis\\Paper 3\\Feature Extraction\\two_dimension_Feature_extraction\\stft_feature\\Training_set\\P300'
images = [img for img in os.listdir(image_folder) if img.endswith(".png")]
frame_0 = cv2.imread(os.path.join(image_folder, images[0]))
e = np.zeros((1100,5,128,19,3),dtype=np.uint8)
e[0,0]=frame_0
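A quick sanity check along these lines may help (not from the original post; a hedged diagnostic sketch assuming the problem is a shape mismatch between frame_0 and the slot e[0, 0]):
assert frame_0 is not None, "image failed to load"   # cv2.imread returns None on failure
print(frame_0.shape)   # cv2.imread gives (height, width, channels), so this may be (19, 128, 3) rather than (128, 19, 3)
print(e[0, 0].shape)   # the two shapes must match for e[0, 0] = frame_0 to work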
Here is the part of my code:
import PIL
from PIL import Image
import numpy as np
ramp = "$#B%8&WM#*oahkbdpqwmZO0QLCJUYXzcvunxrjft/\|()1{}[]?-_+~<>i!lI;:,^`'."
def average(image):
    im = np.array(image)
    return np.average(im.flatten())
def convert(path, imgScale, fontScale):
    if imgScale > 1:
        raise Exception('isnt right scale')
    image = Image.open(path).convert("L")
    W, H = image.size
I have searched for solutions. People say it is because of the PIL version, but I have the latest one (https://i.stack.imgur.com/04mlK.png).
I have a list of image files that I want to convert to numpy arrays and append to a txt file, one array per line. This is my code:
from PIL import Image
import numpy as np
import os
data = os.listdir("inputs")
print(len(data))
with open('np_arrays.txt', 'a+') as file:
    for dt in data:
        img = Image.open("inputs\\" + dt)
        np_img = np.array(img)
        file.write(np_img)
        file.write('\n')
but file.write() requires a string and does not accept a numpy ndarray. How can I solve this?
Numpy also allows you to save directly to .txt files with np.savetxt.
I'm still not entirely sure what format you want your text file to be in but a solution might be something like:
from PIL import Image
import numpy as np
import os
data = os.listdir("inputs")
print(len(data))
shape = (len(data), .., ..)  # input the desired shape
np_imgs = np.empty(shape)
for i, dt in enumerate(data):
    img = Image.open("inputs\\" + dt)
    np_imgs[i] = np.array(img)  # a caveat here is that all images should be of the exact same shape, to fit nicely in a numpy array
np.savetxt('np_arrays.txt', np_imgs)
Note that np.savetxt() has a lot of parameters that allow you to finetune the outputted txt file.
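For illustration, a minimal hedged sketch of a couple of those parameters (the fmt and delimiter values are arbitrary choices, and np.savetxt expects a 1D or 2D array, so each image is flattened to one row here):
flat_imgs = np_imgs.reshape(len(data), -1)   # one flattened image per row, so the array is 2D
np.savetxt('np_arrays.txt', flat_imgs, fmt='%d', delimiter=',')   # integer formatting, comma-separated values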
The write() function only allows strings as its input. Try using numpy.array2string.
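A hedged sketch of how that could slot into the original loop (threshold=sys.maxsize is only there so numpy prints every element instead of summarizing large arrays with '...'):
import sys
import os
from PIL import Image
import numpy as np
data = os.listdir("inputs")
with open('np_arrays.txt', 'a+') as file:
    for dt in data:
        np_img = np.array(Image.open(os.path.join("inputs", dt)))
        file.write(np.array2string(np_img, threshold=sys.maxsize))   # convert the array to a string first
        file.write('\n')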
I am trying to create a simple image classification tool.
I would like the code below to work for classifying images. It works fine when the data is a non-image NumPy array.
#https://e2eml.school/images_to_numbers.html
import numpy as np
from sklearn.utils import Bunch
from PIL import Image
monkey = [1]
dog = [2]
example_animals = Bunch(data = np.array([monkey,dog]),target = np.array(['monkey','dog']))
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=2) #with KMeans you get to pre specify the number of Clusters
KModel = kmeans.fit(example_animals.data) #fit a model using the training data , in this case original example animal data passed through
import pandas as pd
crosstab = pd.crosstab(example_animals.target,KModel.labels_)
print(crosstab)
I have looked into how to make an image into a NumPy array at https://e2eml.school/images_to_numbers.html
The code below, where I have converted the images to NumPy arrays, doesn't work.
When run, it gives the following error:
'setting an array element with a sequence'
#https://e2eml.school/images_to_numbers.html
import numpy as np
from sklearn.utils import Bunch
from PIL import Image
monkey = np.asarray(Image.open("monkey.jpg"))
dog = np.asarray(Image.open("dog.jpeg"))
example_animals = Bunch(data = np.array([monkey,dog]),target = np.array(['monkey','dog']))
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=2) #with KMeans you get to pre specify the number of Clusters
KModel = kmeans.fit(example_animals.data) #fit a model using the training data , in this case original example animal data passed through
import pandas as pd
crosstab = pd.crosstab(example_animals.target,KModel.labels_)
print(crosstab)
I would appreciate any insight into how I can fix the error 'setting an array element with a sequence' so that the images will be compatible with the sklearn processing.
You need to be sure that your images "monkey.jpg" and "dog.jpeg" have the same number of pixels. Otherwise, you will have to resize the images to the same size. Moreover, the data of your Bunch object needs to be of shape (n_samples, n_features) (you can check the documentation https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html#sklearn.cluster.KMeans.fit).
You also need to be aware that you are using an unsupervised learning model (KMeans), so the output of the model is not directly "monkey" or "dog".
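In other words, something along these lines (a minimal sketch, assuming both images have already been resized to the same dimensions; the 180x220 size is just an example):
# Each image becomes one row (a sample) of flattened pixel values (the features).
monkey_arr = np.asarray(Image.open("monkey.jpg").resize((180, 220))).reshape(-1)
dog_arr = np.asarray(Image.open("dog.jpeg").resize((180, 220))).reshape(-1)
X = np.stack([monkey_arr, dog_arr])   # shape (n_samples, n_features), which is what KMeans.fit expects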
I found the solution to the error 'setting an array element with a sequence'.
KMeans requires the data arrays being compared to be the same size.
This means that if you import pictures, they need to be resized, converted into a numpy array (a format compatible with KMeans), and finally flattened into a one-dimensional array.
#https://e2eml.school/images_to_numbers.html
#https://machinelearningmastery.com/how-to-load-and-manipulate-images-for-deep-learning-in-python-with-pil-pillow/
import numpy as np
from matplotlib import pyplot as plt
from sklearn.utils import Bunch
from PIL import Image
from sklearn.cluster import KMeans
import pandas as pd
monkey = Image.open("monkey.jpg")
dog = Image.open("dog.jpeg")
#resize pictures
monkey1 = monkey.resize((180,220))
dog1 = dog.resize((180,220))
#make pictures into numpy array
monkey2 = np.asarray(monkey1)
dog2 = np.asarray(dog1)
#https://www.quora.com/How-do-I-convert-image-data-from-2D-array-to-1D-using-python
#make numpy array into 1 dimensional array
monkey3 = monkey2.reshape(-1)
dog3 = dog2.reshape(-1)
example_animals = Bunch(data = np.array([monkey3,dog3]),target = np.array(['monkey','dog']))
kmeans = KMeans(n_clusters=2) #with KMeans you get to pre specify the number of Clusters
KModel = kmeans.fit(example_animals.data) #fit a model using the training data, in this case the original example animal data passed through
crosstab = pd.crosstab(example_animals.target,KModel.labels_)
print(crosstab)
I'm experimenting with numpy and I'd like to ask for a solution to the following code. I'd like to generate a 256x256 image from scratch using a random RGB scheme -- that is probably the way to go. Any numpy insights would be welcome!
# -*- coding: utf-8 -*-
from PIL import Image
import numpy as np
import random
def transform_matrice(data):
    aux_data = []
    for e in data:
        aux = []
        for a in e:
            aux.append(np.array([[random.randrange(255), random.randrange(255), random.randrange(255)]]))
        aux_data.append(aux)
    return aux_data
w, h = 250, 250
data = np.zeros((h, w, 3), dtype=np.uint8)
ret = transform_matrice(data)
img = Image.fromarray(ret, 'RGB')
img.save('eg.png')
img.show()
with this code I got the following error:
AttributeError: 'list' object has no attribute '__array_interface__'
You do not need to create an empty data table, nor do you need to use for loops; numpy can do it for you!
np.random.randint will create a 3D matrix of size (w, h, 3) with integers from 0 to 255 using the following command:
def transform_matrice(w, h):
    return np.random.randint(0, 256, size=(w, h, 3)).astype('uint8')
ret = transform_matrice(250, 250)
Note that I put 256 and not 255 as the second parameter, since that parameter is one above the largest integer you want.
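With that array you can go straight back to the PIL calls from the question (a brief usage sketch):
from PIL import Image
ret = transform_matrice(250, 250)
img = Image.fromarray(ret, 'RGB')   # works now, because ret is a (250, 250, 3) uint8 array rather than a nested list
img.save('eg.png')
img.show()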
I am trying to use tf.data.Dataset.map to port over this old code, because I get a deprecation warning.
Old code which reads a set of custom protos from a TFRecord file:
record_iterator = tf.python_io.tf_record_iterator(path=filename)
for record in record_iterator:
    example = MyProto()
    example.ParseFromString(record)
I am trying to use eager mode and map, but I get this error.
def parse_proto(string):
    proto_object = MyProto()
    proto_object.ParseFromString(string)
dataset = tf.data.TFRecordDataset(dataset_paths)
parsed_protos = raw_tf_dataset.map(parse_proto)
This code works:
for raw_record in raw_tf_dataset:
    proto_object = MyProto()
    proto_object.ParseFromString(raw_record.numpy())
But the map gives me an error:
TypeError: a bytes-like object is required, not 'Tensor'
What is the right way to use the argument that map passes to the function and treat it like a string?
You need to extract the string from the tensor and use it in the map function. Below are the steps to implement this in the code.
You have to wrap the map function with tf.py_function(get_path, [x], [tf.float32]). You can find more about tf.py_function here. In tf.py_function, the first argument is the map function, the second argument is the element to be passed to it, and the final argument is the return type.
You can get the string part by using bytes.decode(file_path.numpy()) in the map function.
So modify your program as below,
parsed_protos = raw_tf_dataset.map(parse_proto)
to
parsed_protos = raw_tf_dataset.map(lambda x: tf.py_function(parse_proto, [x], [function return type]))
Also modify parse_proto as below,
def parse_proto(string):
    proto_object = MyProto()
    proto_object.ParseFromString(string)
to
def parse_proto(string):
    proto_object = MyProto()
    proto_object.ParseFromString(bytes.decode(string.numpy()))
In the simple program below, we use tf.data.Dataset.list_files to read the path of the image. Next, in the map function, we read the image using load_img and then use tf.image.central_crop to crop the central part of the image.
Code -
%tensorflow_version 2.x
import tensorflow as tf
from keras.preprocessing.image import load_img
from keras.preprocessing.image import img_to_array, array_to_img
from matplotlib import pyplot as plt
import numpy as np
def load_file_and_process(path):
    image = load_img(bytes.decode(path.numpy()), target_size=(224, 224))
    image = img_to_array(image)
    image = tf.image.central_crop(image, np.random.uniform(0.50, 1.00))
    return image
train_dataset = tf.data.Dataset.list_files('/content/bird.jpg')
train_dataset = train_dataset.map(lambda x: tf.py_function(load_file_and_process, [x], [tf.float32]))
for f in train_dataset:
    for l in f:
        image = np.array(array_to_img(l))
        plt.imshow(image)
Output - (the cropped image displayed by plt.imshow)
Hope this answers your question. Happy Learning.