Why do I get an empty dataframe after LazyClassifier?

Given training data as x_train and y_train dataframes, I also have the test dataframe and the actual classes. I want to use LazyClassifier to see which classifier fits my data best. When I pass my data into the classifier's fit method, I get an empty result. Here is what I have and what I get:
[Screenshots: the code used and the resulting empty dataframe]
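For reference, here is a minimal sketch of how LazyClassifier is typically called (assuming the lazypredict package; the variable names are illustrative, not the asker's actual code):

from lazypredict.Supervised import LazyClassifier

clf = LazyClassifier(verbose=0, ignore_warnings=True, custom_metric=None)

# fit() takes both the train and test splits and returns a dataframe of
# per-model scores plus predictions. LazyClassifier silently skips any
# model that raises an exception, so an empty result dataframe usually
# means every candidate model failed on the data (e.g. non-numeric
# columns or NaNs).
models, predictions = clf.fit(x_train, x_test, y_train, y_test)
print(models)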

Related

Regression analysis on color images using a CNN, and on the underlying image array, to extract a parameter

I am doing a regression analysis to extract a parameter from my data, which is a 2D array, using a CNN. The array represents a kind of map of the underlying parameter I am trying to extract. I converted the arrays into JPGs/PNGs and fed them into a 3-channel 2D CNN model. So far the CNN model is able to extract the underlying parameter from the images, but the images are generated by converting the array with matplotlib's plt.imshow() function, which gives a color (3-channel) image with compression of the data.
The issue in this case is the loss of information from compressing the array during conversion to RGB images. So I tried building a CNN model where I input the raw array directly into the network without converting it into an image, but the regression is very poor, whereas for the same datasets the regression is quite good if I feed in the converted JPG or PNG images.
I suspect that the 3 channels in the image are responsible for the CNN performing better with images. Logically speaking, an array converted to RGB is mapped to levels 0 to 255 for each channel; isn't that the same as feature-scaling the data, just from 0 to 255 instead of 0 to 1?
[Figures: prediction results for color images vs. the raw array]
So I tried scaling the raw array to the range 0 to 1 and stacking it up 3 times, making it a 3-channel raw array, and fed that into the network, but the prediction was still quite poor.
If my logic is correct, then I want to make use of the 3 channels in the CNN to extract the parameter from the raw array without loss of information. Is there any way to do that? What else can I implement to get a prediction similar to the one from images, but from the raw 2D array instead?
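For what it's worth, here is a minimal sketch of the scale-and-stack approach described above, plus an alternative that applies a matplotlib colormap in memory, which yields a true 3-channel image at the array's native resolution and avoids the resampling introduced by saving plt.imshow() figures. Shapes and names are illustrative assumptions:

import numpy as np
from matplotlib import cm

def to_three_channels(arr2d):
    # Min-max scale the raw 2D array to [0, 1] ...
    lo, hi = arr2d.min(), arr2d.max()
    scaled = (arr2d - lo) / (hi - lo + 1e-12)
    # ... then replicate it across three channels, mimicking RGB input.
    return np.stack([scaled] * 3, axis=-1)          # shape (H, W, 3)

def to_colormapped(arr2d):
    # Apply a colormap directly to the scaled array; unlike saving an
    # imshow() figure, this keeps the original array resolution.
    lo, hi = arr2d.min(), arr2d.max()
    scaled = (arr2d - lo) / (hi - lo + 1e-12)
    return cm.viridis(scaled)[..., :3]              # drop alpha -> (H, W, 3)

x = to_colormapped(np.random.rand(64, 64))
print(x.shape)  # (64, 64, 3)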

Output vector given an image for Siamese model

This page (https://keras.io/examples/mnist_siamese/) shows how to train a Siamese model. The model outputs a score given two input images. What I want is, during inference, given a single image, for it to return a 128-dimensional vector that represents the image. How should I achieve that?
If you run model.summary() you will see a summary of all model layers. In your case, 'model' appears to be the layer of interest. You can then select the layer that contains the 128-dimensional output using the get_layer() method, and finally extract the output as below.
model.get_layer('model').output
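To actually get the 128-dimensional vector at inference time, you can wrap that output in a new model. A minimal sketch, assuming a Keras setup where the shared base network is the nested layer named 'model' (adjust the layer name and input shape to your architecture):

from tensorflow import keras

base = model.get_layer('model')  # the shared base network inside the Siamese model
embedding_model = keras.Model(inputs=base.input, outputs=base.output)

# single_image is assumed to be a NumPy array with the base network's
# input shape; add a batch dimension before predicting.
vec = embedding_model.predict(single_image[None, ...])  # shape (1, 128)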

ConvLSTM2D data preparation

I'm trying to use ConvLSTM2D in Keras on 1700 samples of 90x30 data.
I already did Conv2D, for which the data shape is (1700, 90, 30, 1). The data format is (batch, rows, cols, channels).
Now I want to use ConvLSTM2D, but I found out I should change the data format to (samples, time, rows, cols, channels):
samples=1700, rows=90, cols=30, channels=1
How do I determine the "time" dimension?
ConvLSTM2D, or an LSTM as a special type of recurrent neural network in general, is used when the input data is a time series. This makes it possible to take advantage of temporal properties within the data.
In the case of ConvLSTM2D, the input is usually a video consisting of multiple frames. Consequently, you have to reshape the data the following way:
samples=1700, time=t, rows=90, cols=30, channels=1
where t is the number of frames in the video.
As an example, let's say we want to do video classification (or frame prediction) based on a short video clip of 10 frames; then t=10.
This of course only makes sense if the image frames you have are in temporal order. For the reshape itself, simply use tf.reshape(...), as in the sketch below.
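A minimal sketch of that reshape in NumPy, assuming the 1700 frames are consecutive and are grouped into non-overlapping clips of t=10 frames (1700 / 10 = 170 clips):

import numpy as np

t = 10
x = np.random.rand(1700, 90, 30, 1)   # (batch, rows, cols, channels)
assert x.shape[0] % t == 0            # 1700 frames -> 170 clips of 10 each

# (samples, time, rows, cols, channels) as ConvLSTM2D expects
x_seq = x.reshape(x.shape[0] // t, t, 90, 30, 1)
print(x_seq.shape)  # (170, 10, 90, 30, 1)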

What is the structure of the data and labels in tensorflow.examples.tutorials.mnist input_data

I'm trying to learn how to feed data into conv nets properly in TensorFlow, and the majority of example code uses from tensorflow.examples.tutorials.mnist import input_data.
It's simple when you can use this to access the MNIST data, but it is not helpful when trying to establish the equivalent way to structure and feed non-MNIST data into similar models.
What is the structure of the data being imported through the mnist examples, so that I can use example cnn walkthrough code and manipulate my data to mirror the structure of the mnist data?
The format of the MNIST data obtained from that example code depends on exactly how you initialize the DataSet class. Calling DataSet.next_batch(batch_size) returns two NumPy arrays, representing batch_size images and labels respectively. They have the following formats.
If the DataSet was initialized with reshape=True (the default), the images array is a batch_size by 784 matrix, in which each row contains the pixels of one MNIST image. The default type is tf.float32, and the values are pixel intensities between 0.0 and 1.0.
If the DataSet was initialized with reshape=False, the images array is a batch_size by 28 by 28 by 1 four-dimensional tensor. The 28s correspond to the height and width of each image in pixels; the 1 corresponds to the number of channels in the images, which are grayscale and so have only a single channel.
If the DataSet was initialized with one_hot=False (the default), the labels array is a vector of length batch_size, in which each value is the label (an integer from 0 to 9) representing the digit in the respective image.
If the DataSet was initialized with one_hot=True, the labels array is a batch_size by 10 matrix, in which each row is all zeros, except for a 1 in the column that corresponds to the label of the respective image.
Note that if you are interested in convolutional networks, initializing the DataSet with reshape=False is probably what you want, since that will retain spatial information about the images that will be used by the convolutional operators.
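Putting that together, a minimal sketch using the legacy TF 1.x tutorial module (since removed in TF 2.x), with the settings recommended above for convolutional networks:

from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('MNIST_data',
                                  reshape=False,  # keep the (28, 28, 1) spatial shape
                                  one_hot=True)   # labels as 10-way one-hot rows

images, labels = mnist.train.next_batch(64)
print(images.shape)  # (64, 28, 28, 1)
print(labels.shape)  # (64, 10)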

TensorFlow: Convolutional Neural Network with non-image input

I am interested in using TensorFlow to train my data for binary classification with a CNN.
Now I am wondering how to set the filter values and the number of output nodes in the convolution process.
I have read many tutorials and examples; however, most of them use image data, and I cannot compare that with my data, which is customer data, not pixels.
Could you suggest something for this issue?
If your data varies in time or space then you can use a CNN. I am currently working with an EEG data set, which varies in time. You can also refer to this paper, where the input data (which is not an image) is presented as an image to the CNN:
http://www.nlpr.ia.ac.cn/english/irds/People/lwang/M-MCG_EN/Publications/2015/YD2015ACPR.pdf
You have to reshape the data to be 4-dimensional. In this example, I have only 4 columns:
# 4 features per sample, reshaped into a 2x2 "image" with a single channel
x_train = np.reshape(x_train, (x_train.shape[0], 2, 2, 1))
x_test = np.reshape(x_test, (x_test.shape[0], 2, 2, 1))
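For completeness, a minimal sketch of a small Keras CNN for binary classification on data reshaped this way; the filter count and layer sizes are illustrative starting points, not values from the answer:

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(2, 2, 1)),                       # 4 features as a 2x2 "image"
    layers.Conv2D(8, kernel_size=2, activation='relu'),  # 8 filters, 2x2 kernel
    layers.Flatten(),
    layers.Dense(16, activation='relu'),
    layers.Dense(1, activation='sigmoid'),               # one output node for binary classes
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()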
This is a good example of using non-image data:
https://github.com/fengjiqiang/LSTM-Wind-Speed-Forecasting
You just need to change the following:
prediction_cols
feature_cols
features
and dataload
There is also a tutorial for text data: Here!
You might use one of the following classes:
class Dataset: Represents a potentially large set of elements.
class FixedLengthRecordDataset: A Dataset of fixed-length records from one or more binary files.
class Iterator: Represents the state of iterating through a Dataset.
class TFRecordDataset: A Dataset comprising records from one or more TFRecord files.
class TextLineDataset: A Dataset comprising lines from one or more text files.
Tutorial
official documentation
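As a minimal sketch, here is how non-image (tabular) data can be fed to a model through tf.data.Dataset; the arrays are illustrative stand-ins for real customer data:

import numpy as np
import tensorflow as tf

features = np.random.rand(1000, 4).astype('float32')  # e.g. 4 customer-data columns
labels = np.random.randint(0, 2, size=(1000,))        # binary targets

# from_tensor_slices pairs each feature row with its label; shuffle and
# batch before training.
ds = tf.data.Dataset.from_tensor_slices((features, labels))
ds = ds.shuffle(buffer_size=1000).batch(32)

for batch_x, batch_y in ds.take(1):
    print(batch_x.shape, batch_y.shape)  # (32, 4) (32,)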