I am running TensorFlow in a Docker container and I am a newbie. This is the code, which I just copied from the TensorFlow tutorial, to load a CSV dataset.
import tensorflow as tf
import numpy as np
# Data sets
IRIS_TRAINING = "iris_training.csv"
IRIS_TEST = "iris_test.csv"
# Load datasets.
training_set = tf.contrib.learn.datasets.base.load_csv(filename=IRIS_TRAINING, target_dtype=np.int)
test_set = tf.contrib.learn.datasets.base.load_csv(filename=IRIS_TEST, target_dtype=np.int)
Anyway, this throws the following error:
AttributeError: 'module' object has no attribute 'datasets'
My question is: how do I load a CSV file that I have downloaded locally to my desktop? Should it be something like this?
IRIS_TRAINING = "C:\Users\priya\Desktop\iris_training.csv"
IRIS_TEST = "C:\Users\priya\Desktop\iris_test.csv"
How do I load the CSV? Is there any documentation available?
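For what it's worth, the tutorial's load_csv helper was renamed in later releases (to load_csv_with_header), which is one likely cause of the AttributeError. A minimal, version-independent sketch with plain NumPy (assuming the tutorial's file layout: four float feature columns, an integer label column, and a one-line header):

import numpy as np

# use a raw string (r"...") so backslashes in the Windows path are not
# interpreted as escape sequences
IRIS_TRAINING = r"C:\Users\priya\Desktop\iris_training.csv"

# skip_header=1 assumes the one-line header used by the tutorial files
data = np.genfromtxt(IRIS_TRAINING, delimiter=",", skip_header=1, dtype=np.float32)
features, target = data[:, :-1], data[:, -1].astype(np.int32)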
I'm trying to run the GAN in the link with my own dataset. First of all, I wanted to try it with the MNIST dataset and see the results. I am running it on Colab. When I use the existing versions of TensorFlow and Keras in Colab, the outputs are noisy and the results are bad. An example from epoch 1400:
But when I downgrade TensorFlow to 2.2.0 and Keras to 2.3.1, the results are very good. An example from epoch 1350:
Then, when I ran it with my own dataset without changing the existing library versions in Colab, I still got noisy, bad results. So I downgraded the versions as before. However, this time I get the following error:
FailedPreconditionError: Error while reading resource variable
_AnonymousVar45 from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource
localhost/_AnonymousVar45/N10tensorflow3VarE does not exist. [[node
mul_1/ReadVariableOp (defined at
/usr/local/lib/python3.7/dist-packages/keras/backend/tensorflow_backend.py:3009)
]] [Op:__inference_keras_scratch_graph_5103]
Function call stack: keras_scratch_graph
If this error were caused by the TensorFlow and Keras versions, I think I would have gotten the same error with MNIST. So I couldn't find the source of the error. Maybe it has to do with the way I load my data, although the existing library versions had no problem with it. Anyway, here is how I load the data:
import zipfile                        # unzipping
import glob                           # finding image paths
import numpy as np                    # creating numpy arrays
from skimage.io import imread         # reading images
from skimage.transform import resize  # resizing images

# 1. Unzip images
path = '/content/gdrive/My Drive/gan/RealImages.zip'
with zipfile.ZipFile(path, 'r') as zip_ref:
    zip_ref.extractall('/content/gdrive/My Drive/gan/extracted')

# 2. Obtain paths of images (.jpg used here)
img_list = sorted(glob.glob('/content/gdrive/My Drive/gan/extracted/RealImages/RealImages/*.jpg'))
print(img_list)

# 3. Read images & convert to numpy arrays
## create placeholder numpy array
IMG_SIZE = 28
x_data = np.empty((len(img_list), IMG_SIZE, IMG_SIZE, 3), dtype=np.float32)

## read and convert to arrays
for i, img_path in enumerate(img_list):
    # read image
    img = imread(img_path)
    print(img_path)
    # resize image (3 channels here; use 1 for gray-scale, 3 for RGB)
    img = resize(img, output_shape=(IMG_SIZE, IMG_SIZE, 3), preserve_range=True)
    # save to numpy array
    x_data[i] = img
Then, I changed the old line:
(X_train, _), (_, _) = mnist.load_data()
to:
X_train = x_data
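One difference worth checking (an assumption on my part, since the GAN code itself is not shown here): mnist.load_data() returns uint8 pixels in [0, 255], and most GAN examples rescale them to [-1, 1] before training. Because resize(..., preserve_range=True) keeps the arrays in [0, 255], the same rescaling is probably needed for x_data:

# hypothetical rescaling step; match whatever range the GAN's generator
# output activation expects (tanh usually implies [-1, 1])
X_train = (x_data - 127.5) / 127.5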
I couldn't find what I did wrong. I would be very happy if you could help.
I'm trying to learn how to use some ML stuff on Android. I got the Text Classification demo working, and it seems to work fine. So then I tried creating my own model.
The code I used to create my own model was this:
import numpy as np
import os
from tflite_model_maker import model_spec
from tflite_model_maker import text_classifier
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.text_classifier import AverageWordVecSpec
from tflite_model_maker.text_classifier import DataLoader
import tensorflow as tf
assert tf.__version__.startswith('2')
tf.get_logger().setLevel('ERROR')
spec = model_spec.get('mobilebert_classifier')
train_data = DataLoader.from_csv(
    filename='/path to file/train.csv',
    text_column='sentence',
    label_column='label',
    model_spec=spec,
    is_training=True)
model = text_classifier.create(train_data, model_spec=spec, epochs=10)
model.export(export_dir='average_word_vec')
The code appeared to run fine and it created a model.tflite file for me. I then replaced the demo tflite file with mine. But when I run the demo I get the following error:
java.lang.AssertionError: Error occurred when initializing NLClassifier: Type mismatch for input tensor serving_default_input_type_ids:0. Requested STRING, got INT32.
at org.tensorflow.lite.task.text.nlclassifier.NLClassifier.initJniWithByteBuffer(Native Method)
at org.tensorflow.lite.task.text.nlclassifier.NLClassifier.access$100(NLClassifier.java:67)
at org.tensorflow.lite.task.text.nlclassifier.NLClassifier$2.createHandle(NLClassifier.java:223)
at org.tensorflow.lite.task.core.TaskJniUtils.createHandleFromLibrary(TaskJniUtils.java:91)
at org.tensorflow.lite.task.text.nlclassifier.NLClassifier.createFromBufferAndOptions(NLClassifier.java:219)
at org.tensorflow.lite.task.text.nlclassifier.NLClassifier.createFromFileAndOptions(NLClassifier.java:175)
at org.tensorflow.lite.task.text.nlclassifier.NLClassifier.createFromFile(NLClassifier.java:150)
at org.tensorflow.lite.examples.textclassification.client.TextClassificationClient.load(TextClassificationClient.java:44)
at org.tensorflow.lite.examples.textclassification.MainActivity.lambda$onStart$1$MainActivity(MainActivity.java:67)
at org.tensorflow.lite.examples.textclassification.-$$Lambda$MainActivity$eJaQnJq74KcmPEczFE5swJIGydg.run(Unknown Source:2)
What am I missing?
In your code you trained a MobileBERT model, but saved it to the path average_word_vec?
spec = model_spec.get('mobilebert_classifier')
model.export(export_dir='average_word_vec')
One possibility is: you used the average_word_vec model but added MobileBERT metadata, so the preprocessing doesn't match.
Could you follow the Model Maker tutorial and try again?
https://colab.sandbox.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb
Make sure to change the export path.
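For illustration, a hedged sketch of one consistent setup, reusing the code from the question (the CSV path is elided there):

spec = model_spec.get('average_word_vec')  # spec that the demo's NLClassifier expects
train_data = DataLoader.from_csv(
    filename='/path to file/train.csv',  # path elided in the question
    text_column='sentence',
    label_column='label',
    model_spec=spec,
    is_training=True)
model = text_classifier.create(train_data, model_spec=spec, epochs=10)
model.export(export_dir='average_word_vec')  # export path now matches the spec

If you do want MobileBERT, its exported model takes INT32 token-id tensors, so on the Android side it would be loaded with the Task library's BertNLClassifier rather than NLClassifier.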
I am trying to apply quantization to my model (tflite).
I want to change float32 to float16 through dynamic-range quantization.
This is my code:
import tensorflow as tf
import json
import sys
import pprint
from tensorflow import keras
import numpy as np
converter = tf.lite.TFLiteConverter.from_saved_model('models')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_quant_model = converter.convert()
open("quant.tflite", "wb").write(tflite_quant_model)
On my MacBook there is a folder called 'models', which contains two tflite files.
When I execute the code, the following error occurs:
converter = tf.lite.TFLiteConverter.from_saved_model('quantization')
OSError: SavedModel file does not exist at: models/{saved_model.pbtxt|saved_model.pb}
I checked most of the related posts on Stack Overflow, but I couldn't find a solution.
Please review my code and give me some advice.
I have uploaded my tflite file, since I guess it may be necessary to check whether the problem is in the model itself.
This is my model(download link):
https://drive.google.com/file/d/13gft7bREsv2vZYFvfoCiP5ndxHkfGKIM/view?usp=sharing
Thank you so much.
The tf.lite.TFLiteConverter.from_saved_model function takes a TensorFlow SavedModel (.pb) as its input. You are giving it a TensorFlow Lite (.tflite) model instead, which necessarily leads to an error. If you want to convert your model to float16, the only way I know of is to take the original model in ".pb" format and convert that as you want.
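For reference, a minimal sketch of the conversion, assuming 'saved_model_dir' (a hypothetical path) is a directory containing the original model's saved_model.pb rather than the .tflite files:

import tensorflow as tf

# point the converter at the SavedModel directory, not at .tflite files
converter = tf.lite.TFLiteConverter.from_saved_model('saved_model_dir')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]  # float16 post-training quantization
tflite_quant_model = converter.convert()

with open('quant.tflite', 'wb') as f:
    f.write(tflite_quant_model)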
I have deployed a TensorFlow model on AWS SageMaker, and I want to be able to invoke it using a CSV file as the body of the call. The documentation talks about creating a serving_input_fn like the one below:
def serving_input_fn(hyperparameters):
    # Logic to do the following:
    # 1. Define placeholders that TensorFlow Serving will feed with inference requests
    # 2. Preprocess input data
    # 3. Return a tf.estimator.export.ServingInputReceiver or
    #    tf.estimator.export.TensorServingInputReceiver, which packages the
    #    placeholders and the resulting feature Tensors together.
In step 2, where it says to preprocess the input data, how do I get a handle on the input data in order to process it?
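For reference, a minimal sketch of such a function (the column count of 4 and the feature name 'inputs' are assumptions; the request is assumed to arrive as a batch of CSV rows):

import tensorflow as tf

def serving_input_fn(hyperparameters):
    # 1. Placeholder that TensorFlow Serving feeds with inference requests;
    #    each element is one CSV row as a string.
    csv_rows = tf.placeholder(dtype=tf.string, shape=[None], name='csv_rows')
    # 2. Preprocess: this is where you get a handle on the input data.
    columns = tf.decode_csv(csv_rows, record_defaults=[[0.0]] * 4)
    features = {'inputs': tf.stack(columns, axis=1)}
    # 3. Package the placeholder and the feature tensors together.
    return tf.estimator.export.ServingInputReceiver(features, {'csv_rows': csv_rows})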
I had the same problem but I wanted to handle jpeg requests.
Once you have your model_data ready, you can deploy it with the following lines.
from sagemaker.tensorflow.model import TensorFlowModel
sagemaker_model = TensorFlowModel(
    model_data='s3://path/to/model/model.tar.gz',
    role=role,
    framework_version='1.12',
    entry_point='train.py',
    source_dir='my_src',
    env={'SAGEMAKER_REQUIREMENTS': 'requirements.txt'}
)

predictor = sagemaker_model.deploy(
    initial_instance_count=1,
    instance_type='ml.m4.xlarge',
    endpoint_name='resnet-tensorflow-classifier'
)
Your notebook should have a my_src directory which contains a file train.py and a requirements.txt file. The train.py file should have a function input_fn defined. For me, that function handled image/jpeg content, but the pattern is the same.
import io
import numpy as np
from PIL import Image
from keras.applications.resnet50 import preprocess_input
from keras.preprocessing import image

JPEG_CONTENT_TYPE = 'image/jpeg'
CSV_CONTENT_TYPE = 'text/csv'

# Deserialize the Invoke request body into an object we can perform prediction on
def input_fn(request_body, content_type=JPEG_CONTENT_TYPE):
    # process an image uploaded to the endpoint
    if content_type == JPEG_CONTENT_TYPE:
        img = Image.open(io.BytesIO(request_body)).resize((300, 300))
        img_array = np.array(img)
        expanded_img_array = np.expand_dims(img_array, axis=0)
        x = preprocess_input(expanded_img_array)
        return x
    # for CSV you would have something like this:
    if content_type == CSV_CONTENT_TYPE:
        # handle input
        return handled_input
    else:
        raise errors.UnsupportedFormatError(content_type)
If your train.py code imports some modules, you must supply a requirements.txt defining those dependencies (that was the part I had trouble finding in the docs).
Hope this helps someone in the future.
You can preprocess input data by adding an input_fn, which will be invoked every time you invoke the endpoint. It receives the input data and the content type of the data.
def input_fn(data, content_type):
    # do some data preprocessing
    return preprocessed_data
This article explains it in more depth:
https://docs.aws.amazon.com/sagemaker/latest/dg/tf-training-inference-code-template.html
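For the CSV case the original question asks about, a rough sketch (assuming the request body is a UTF-8 CSV payload and the model expects a float matrix):

import io
import numpy as np

CSV_CONTENT_TYPE = 'text/csv'

def input_fn(request_body, content_type=CSV_CONTENT_TYPE):
    if content_type == CSV_CONTENT_TYPE:
        # decode raw bytes if needed, then parse comma-separated floats
        body = request_body.decode('utf-8') if isinstance(request_body, bytes) else request_body
        return np.genfromtxt(io.StringIO(body), delimiter=',', dtype=np.float32)
    raise ValueError('Unsupported content type: {}'.format(content_type))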
I ran this code snippet:
import os
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
from tensorflow.contrib.tensorboard.plugins import projector

LOG_DIR = 'logs'
metadata = os.path.join(LOG_DIR, 'metadata.tsv')

mnist = input_data.read_data_sets('MNIST_data')
input_1 = mnist.train.next_batch(10)
images = tf.Variable(input_1[0], name='images')

with open(metadata, 'w') as metadata_file:
    for row in input_1[1]:
        metadata_file.write('%d\n' % row)

with tf.Session() as sess:
    saver = tf.train.Saver([images])
    sess.run(images.initializer)
    saver.save(sess, os.path.join(LOG_DIR, 'images.ckpt'))

config = projector.ProjectorConfig()
# One can add multiple embeddings.
# Link this tensor to its metadata file (e.g. labels).
embedding = config.embeddings.add()
embedding.tensor_name = images.name
embedding.metadata_path = metadata
# Saves a config file that TensorBoard will read during startup.
projector.visualize_embeddings(tf.summary.FileWriter(LOG_DIR), config)
After this, I opened the TensorBoard embeddings tab, and it showed 'Parsing metadata'. However, it kept loading that way endlessly. I tried other code, and in that case it kept loading at 'Fetching sprite image'. Is there something wrong with my TensorBoard?
The problem is that TensorBoard can't find your metadata file, because it looks for the metadata file relative to the directory you specified with the '--logdir' parameter of the 'tensorboard' command.
So if you open TensorBoard with 'tensorboard --logdir logs', it will look for the metadata file at 'logs/logs/metadata.tsv'.
A possible fix for your code is to replace this line
embedding.metadata_path = metadata
with this one:
embedding.metadata_path = 'metadata.tsv'
In general, in order to debug TensorBoard errors, you have to look at the error messages in your browser's console while viewing TensorBoard.
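For illustration, the layout TensorBoard expects when started with 'tensorboard --logdir logs' (file names taken from the code above):

logs/
├── checkpoint, images.ckpt.*   (written by tf.train.Saver)
├── metadata.tsv                (resolved relative to logs/, hence 'metadata.tsv')
└── projector_config.pbtxt      (written by projector.visualize_embeddings)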