I have built a deep learning model that classifies cats and dogs. I have successfully mounted Google Drive and trained on the images as needed. However, I am now trying to make a single prediction by uploading one image and having Keras predict on it.
In a regular IDE like Spyder, it's like this:
test_image = image.load_img('image1.jpg',target_size=(64,64))
But it throws this error:
Transport endpoint is not connected: 'image1.jpg'
I remounted the drive, and then it tells me:
No such file or directory: 'image1.jpg'
After that, I experimented with how to write the path in the image.load_img() method, but I have run out of ideas at this point.
You can mount the drive, authenticate, then import the files you want and predict using your model. Please check the GitHub gist here. You can follow the steps below and let me know if you need any help. Thanks!
from google.colab import drive
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import tensorflow as tf

# This will prompt for authorization.
drive.mount('/content/drive')

# Predicting Roses
img = mpimg.imread('/content/drive/My Drive/5602220566_5cdde8fa6c_n.jpg')
imgplot = plt.imshow(img)
img = tf.expand_dims(img, 0)  # need this to make batch_shape = 1
img = img / 255  # normalizing the image
img = tf.image.resize(img, size=(224, 224))  # resizing the image
Prob = loaded_model.predict(img)  # prediction
indd = tf.argmax(Prob[0], axis=-1).numpy()  # index of the most probable class
print(indd)
print(labels_string[indd])
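To tie this back to the original question: once the drive is mounted, the Keras-style load from the question also works as long as you pass the full path under /content/drive. Below is a minimal sketch of that, assuming the file sits in My Drive and a trained model named classifier that expects 64x64 RGB input (both are placeholders for your own names and paths).
import numpy as np
from tensorflow.keras.preprocessing import image

img_path = '/content/drive/My Drive/image1.jpg'  # full path after mounting (assumed location)
test_image = image.load_img(img_path, target_size=(64, 64))
test_image = image.img_to_array(test_image) / 255.0  # to array, then normalize
test_image = np.expand_dims(test_image, axis=0)      # add the batch dimension
prediction = classifier.predict(test_image)          # `classifier` is your trained model
print(prediction)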
Related
I am training a machine learning model in Google Colab and fixed the automatic disconnecting by using this code in the browser console (inspector view), taken from this question (Google Colab session timeout):
function ClickConnect(){
    console.log("Working");
    document.querySelector("#top-toolbar > colab-connect-button").shadowRoot.querySelector("#connect").click();
}
var clicker = setInterval(ClickConnect, 60000);
However, after a given amount of time, all of my variables become undefined, which I require after the model has been trained. Is there any way to avoid this?
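One pattern that can help (a sketch, not a guaranteed fix): periodically write the state you care about to Drive during training, so a runtime reset only costs the time since the last checkpoint. Here model, x_train and y_train are placeholders for your own objects.
from google.colab import drive
import tensorflow as tf

drive.mount('/content/drive')

# Save weights to Drive after every epoch.
ckpt_path = '/content/drive/My Drive/checkpoints/model-{epoch:02d}.h5'
ckpt_cb = tf.keras.callbacks.ModelCheckpoint(ckpt_path, save_weights_only=True)

model.fit(x_train, y_train, epochs=50, callbacks=[ckpt_cb])

# After a disconnect, rebuild the model and reload the latest checkpoint
# instead of retraining:
# model.load_weights('/content/drive/My Drive/checkpoints/model-50.h5')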
I'm using Google's Natural Language API to run sentiment analysis on text blocks, and according to some instructions I'm following, I need to be using the latest version of google-cloud-language.
So I'm running this at the start of my Colab notebook.
!pip install --upgrade google-cloud-language
When that finishes, it requires me to restart the runtime, which means I can't run this automatically along with the rest of my code; instead I have to attend to the runtime restart manually.
This SO post touches on the topic, but only offers the 'crash' solution, and I'm wondering if anything else is available now 3 years later.
Restart kernel in Google Colab
So I'm curious if there's any workaround, or a way to permanently upgrade google-cloud-language, to avoid that?
Thank you for any input.
Here's the NL code I'm running, if helpful.
# Imports the Google Cloud client library
from google.cloud import language_v1
# Instantiates a client
client = language_v1.LanguageServiceClient()
def get_sentiment(text):
    # The text to analyze
    document = language_v1.Document(
        content=text,
        type_=language_v1.types.Document.Type.PLAIN_TEXT
    )
    # Detects the sentiment of the text
    sentiment = client.analyze_sentiment(
        request={"document": document}
    ).document_sentiment
    return sentiment

dfTW01["sentiment"] = dfTW01["text"].apply(get_sentiment)
I am trying to save my model by using tf.keras.callbacks.ModelCheckpoint with filepath pointing to a folder in Drive, but I am getting this error:
File system scheme '[local]' not implemented (file: './ckpt/tensorflow/training_20220111-093004_temp/part-00000-of-00001')
Encountered when executing an operation using EagerExecutor. This error cancels all future operations and poisons their output tensors.
Does anybody know the reason for this and a workaround?
It looks to me like you are trying to access the file system of your host VM from the TPU, which is not directly possible.
When you are using the TPU and want to access files on, e.g., the Google Colab VM, you should place the code within:
with tf.device('/job:localhost'):
<YOUR_CODE>
Now to your problem:
The local host acts as the parameter server when training on a TPU, so if you want to checkpoint your training, the local host must do so.
When you check the documentation for said callback, you can find the parameter options.
checkpoint_options = tf.train.CheckpointOptions(experimental_io_device='/job:localhost')
checkpoint = tf.keras.callbacks.ModelCheckpoint(<YOUR_PATH>, options = checkpoint_options)
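For completeness, a short usage sketch under the assumptions above; model and train_ds are placeholders for your own objects, and the path lives on the Colab VM's local disk, which is exactly what the options argument makes possible here.
import tensorflow as tf

checkpoint_options = tf.train.CheckpointOptions(experimental_io_device='/job:localhost')
checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
    './ckpt/tensorflow/training_{epoch:02d}',  # local path on the host VM
    save_weights_only=True,
    options=checkpoint_options)

model.fit(train_ds, epochs=10, callbacks=[checkpoint_cb])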
Hope this solves your issue!
Best,
Sascha
So basically I forgot to save my model in each training loop. How do I turn the /tmp/tflearn_logs/ subdirectory back into the model? Is there any way to reconstruct it as a model like:
# Save a model
model.save('my_model.tflearn')
from the event logs?
And after that I can automatically load it with:
# Load a model
model.load('my_model.tflearn')
Here are my event logs:
Thank you..
Never mind, there's no method to do that. The logs only store event data for visualization purposes, while the model file stores what tflearn has learned; they serve different purposes. Solution: add the model.save('my_model.tflearn') call back into the training script and run it again from scratch. :)
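For the rerun, here is a small sketch of how saving after every loop might look, so only the TensorBoard logs would be lost next time; net, X, Y and n_loops are placeholders for your own network definition, data and loop count.
import tflearn

model = tflearn.DNN(net, tensorboard_dir='/tmp/tflearn_logs')
for i in range(n_loops):
    model.fit(X, Y, n_epoch=10, show_metric=True)
    model.save('my_model.tflearn')  # persist the weights after every loop

# Later, rebuild the same `net` and restore the weights:
# model = tflearn.DNN(net)
# model.load('my_model.tflearn')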
I know how to load models into the TensorFlow Serving container and communicate with it via HTTP requests, but I am a little confused about how to use protobufs. What are the steps for using protobufs? Do I just load a model into the container and use something like the code below:
from tensorflow_serving.apis import predict_pb2
request = predict_pb2.PredictRequest()
request.model_spec.name = 'resnet'
request.model_spec.signature_name = 'serving_default'
Or do I have to do some extra steps before/after loading the model?
Here is the sample code for making an inference call to gRPC in Python:
resnet_client_grpc.py
In the same folder, you will find an example of calling the REST endpoint.
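As a rough illustration of the pattern in that sample (a sketch, not the sample itself): the server address, model name, signature name and the input key 'input_1' below are assumptions you would adapt to your own exported model, and image_batch stands for your already preprocessed input.
import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

channel = grpc.insecure_channel('localhost:8500')  # gRPC port of the serving container
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'resnet'
request.model_spec.signature_name = 'serving_default'
request.inputs['input_1'].CopyFrom(tf.make_tensor_proto(image_batch))  # preprocessed batch

response = stub.Predict(request, timeout=10.0)
print(response.outputs)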