How to use the Embedding Projector with the latest version of TensorBoard? (TF 2.0) - tensorflow

I'm trying to build a model so I can predict gender from handwriting. It's basically a binary classification problem, where the labels are either 0 for men or 1 for women. To have better control over how my model is behaving, I'm using the TensorBoard callback in model.fit(). However, I have two issues:
1. When I try to visualize my images in the Images dashboard on TensorBoard, all I get is grey and black boxes, even though I set write_images to True. How can this be solved? (code below and image)
2. When I try to use the Projector dashboard, I see my data without labels, so all I can see is the set of data points in 3 dimensions (PCA set of 3). Does anyone have an idea of how this can be solved? (image)
import os
from tensorflow.keras.callbacks import TensorBoard

!rm -rf logs
log_dir = "./logs/1"
if not os.path.exists(log_dir):
    os.makedirs(log_dir)

tbCallBack = TensorBoard(log_dir=log_dir,
                         histogram_freq=1,
                         embeddings_freq=1,
                         write_graph=False,
                         write_images=True)
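For the missing labels in the Projector (issue 2), one possible fix, sketched here rather than taken from the question, is to write a metadata.tsv file with one label per line and point the callback at it through the embeddings_metadata argument of tf.keras.callbacks.TensorBoard. The label list below is a placeholder:

# Hypothetical labels: one entry per embedding row, in the order the
# embedding layer sees the data.
labels = [0, 1, 1, 0]

with open(os.path.join(log_dir, "metadata.tsv"), "w") as f:
    for label in labels:
        f.write("men\n" if label == 0 else "women\n")

tbCallBack = TensorBoard(log_dir=log_dir,
                         histogram_freq=1,
                         embeddings_freq=1,
                         embeddings_metadata="metadata.tsv",
                         write_graph=False,
                         write_images=True)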

Related

How to use FasterRCNN Openimages v4?

I can't seem to find any documentation on how to use this model.
I am trying to use it to print out the objects that appear in a video.
Any help would be greatly appreciated. I am just starting out, so go easy on me.
"I am trying to use it to print out the objects that appear in a video."
I interpret that your problem is to print out the names of the found objects.
I don't know how you implemented it or where you got a Faster R-CNN trained on Open Images v4, so I will give you the way with the model from TensorFlow Hub (see also Google Colab and AI Hub).
After some digging around and a LOT of trial and error I came up with this
#!/home/ahmed/anaconda3/envs/TensorFlow/bin/python3.8
import tensorflow as tf
import tensorflow_hub as hub
import time, imageio, sys, pickle

# sys.argv[1] is used for taking the video path from the terminal
video = sys.argv[1]
# passing the video file to imageio to be read later as frames
video = imageio.get_reader(video)
dictionary = {}

# download and extract the model (faster_rcnn/openimages_v4/inception_resnet_v2 or
# openimages_v4/ssd/mobilenet_v2) in the same folder
module_handle = "*Path to the model folder*"
detector = hub.load(module_handle).signatures['default']

# looping over every frame in the video
for index, frames in enumerate(video):
    # converting the images (video frames) to tf.float32, the only accepted input format
    image = tf.image.convert_image_dtype(frames, tf.float32)[tf.newaxis]
    # passing the converted image to the model
    detector_output = detector(image)
    class_names = detector_output["detection_class_entities"]
    scores = detector_output["detection_scores"]
    # in case there are multiple objects in the frame
    for i in range(len(scores)):
        if scores[i] > 0.3:
            # converting from bytes to string
            obj = class_names[i].numpy().decode("ascii")
            # recording which frames each object appears in
            if obj not in dictionary:
                dictionary[obj] = [index]
            else:
                dictionary[obj].append(index)

print(dictionary)
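Assuming you save the script as, say, detect.py (the file name is hypothetical), you would invoke it as python detect.py path/to/video.mp4; the printed dictionary then maps each detected object name to the list of frame indices in which it appears.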

fastai ULMFiT model trained on a CUDA machine, used on a CPU

I have an export.pkl model which was trained on a CUDA machine. I want to use it on a MacBook:
from fastai.text import load_learner
from utils import get_corpus
learner = load_learner('./models')
corpus = get_corpus()
res = [ str(learner.predict(c)[0]) for c in corpus ]
I get the following error:
...
File "/Users/gautiergilabert/Envs/cc/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 146, in forward
"them on device: {}".format(self.src_device_obj, t.device))
RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cpu
I have two questions:
I found the raise in my export.pkl:
for t in chain(self.module.parameters(), self.module.buffers()):
    if t.device != self.src_device_obj:
        raise RuntimeError("module must have its parameters and buffers "
                           "on device {} (device_ids[0]) but found one of "
                           "them on device: {}".format(self.src_device_obj, t.device))
The docstring says about the module: "module to be parallelized". I don't really understand what that is. My MacBook?
Apart from my MacBook, I would like to run the model on a CPU.
Is there a way to make this export.pkl model work on a CPU?
Is there a way to make another export.pkl on CUDA and make it available on a CPU?
Thanks
One way is to load the learner by specifying the model with an empty dataset and loading the model weights afterwards. For a ResNet image classifier, something like this should work:
from fastai.vision import *
# path where the model is saved under path/models/model-name
path = "model_path"
tfms = get_transforms()
data = ImageDataBunch.single_from_classes(".", classes=["class1", "class2"], ds_tfms=tfms)
learner = cnn_learner(data, models.resnet34, metrics=accuracy)
# loads model from model_path/models/model_name.pth
learner.load("model_name")
image = open_image("test.jpg")
pred_class, pred_idx, outputs = learner.predict(image)
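Another option, a sketch on my part rather than part of the original answer: the traceback points at torch.nn.DataParallel, so if you control the CUDA machine that produced export.pkl, unwrapping the DataParallel module before exporting should yield a file that loads cleanly on a CPU-only machine (learn is assumed to be the trained Learner):

import torch.nn as nn

# on the CUDA machine, before exporting:
if isinstance(learn.model, nn.DataParallel):
    learn.model = learn.model.module  # drop the multi-GPU wrapper
learn.export()  # the resulting export.pkl no longer insists on cuda:0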

Training a model to be biased to a given background

I am re-training an Inception ResNet v2 model to recognize a sequence of 3-digit numbers (of a particular font type). The sequence is artificially generated with black lettering on a white background. I am of the opinion that subjecting the model to only a specific background will help me eliminate false detections (in this case, any other 3-digit sequence not on a white background), as the model will not predict high probabilities for sequences whose backgrounds are not white. Is that a valid assumption to make?
PS: I previously tried using Tesseract to perform text extraction from an image. I used the EAST text detector for detection, which gave me the bounding boxes for the text. I followed that with OCR using pytesseract, but it always returned an empty string. Furthermore, on rotation of the digits, the EAST text detector failed to recognize the rotated sequence. I was hence left with no option but to train a neural network model to perform text detection and extraction.
code for pytesseract:
import cv2
import numpy as np
import pytesseract
from pytesseract import image_to_string
from PIL import Image

refPt = [(486,302),(540,308),(538,328),(484,323)]  # the bbox returned by EAST
roi_corners = np.array(refPt, dtype=np.int32).reshape((-1, 1, 2))

inp_img = cv2.imread("1.jpg")
mask = np.zeros(inp_img.shape, dtype=np.uint8)
channel_count = inp_img.shape[2]
ignore_mask_color = (255,) * channel_count
# fillPoly expects a list of polygons, hence the surrounding brackets
mask = cv2.fillPoly(mask, [roi_corners], ignore_mask_color)
masked_image = cv2.bitwise_and(inp_img, mask)
print(image_to_string(Image.fromarray(masked_image), lang='eng'))
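As an aside (my own suggestion, not from the original post): pytesseract often returns an empty string when the text occupies only a small region of a mostly black image, so cropping to the polygon's bounding box and binarizing before OCR may help:

# crop to the region of interest and binarize it before OCR
x, y, w, h = cv2.boundingRect(roi_corners)
crop = cv2.cvtColor(masked_image[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
_, crop = cv2.threshold(crop, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print(image_to_string(Image.fromarray(crop), lang='eng'))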

How to get chosen class images from Imagenet?

Background
I have been playing around with Deep Dream and Inceptionism, using the Caffe framework to visualize layers of GoogLeNet, an architecture built for the Imagenet project, a large visual database designed for use in visual object recognition.
You can find Imagenet here: Imagenet 1000 Classes.
To probe into the architecture and generate 'dreams', I am using three notebooks:
https://github.com/google/deepdream/blob/master/dream.ipynb
https://github.com/kylemcdonald/deepdream/blob/master/dream.ipynb
https://github.com/auduno/deepdraw/blob/master/deepdraw.ipynb
The basic idea here is to extract some features from each channel in a specified layer from the model or a 'guide' image.
Then we input an image we wish to modify into the model and extract the features in the same layer specified (for each octave),
enhancing the best matching features, i.e., the largest dot product of the two feature vectors.
So far I've managed to modify input images and control dreams using the following approaches:
(a) applying layers as 'end' objectives for the input image optimization. (see Feature Visualization)
(b) using a second image to guide the optimization objective on the input image.
(c) visualizing GoogLeNet model classes generated from noise.
However, the effect I want to achieve sits in-between these techniques, and I haven't found any documentation, paper, or code for it.
Desired result (not part of the question to be answered)
To have one single class or unit belonging to a given 'end' layer (a) guide the optimization objective (b) and have this class visualized (c) on the input image:
An example where class = 'face' and input_image = 'clouds.jpg':
please note: the image above was generated using a model for face recognition, which was not trained on the Imagenet dataset. For demonstration purposes only.
Working code
Approach (a)
from cStringIO import StringIO
import numpy as np
import scipy.ndimage as nd
import PIL.Image
from IPython.display import clear_output, Image, display
from google.protobuf import text_format
import matplotlib as plt
import caffe

model_name = 'GoogLeNet'
model_path = 'models/dream/bvlc_googlenet/' # substitute your path here
net_fn = model_path + 'deploy.prototxt'
param_fn = model_path + 'bvlc_googlenet.caffemodel'

model = caffe.io.caffe_pb2.NetParameter()
text_format.Merge(open(net_fn).read(), model)
model.force_backward = True
open('models/dream/bvlc_googlenet/tmp.prototxt', 'w').write(str(model))

net = caffe.Classifier('models/dream/bvlc_googlenet/tmp.prototxt', param_fn,
                       mean=np.float32([104.0, 116.0, 122.0]),  # ImageNet mean, training set dependent
                       channel_swap=(2, 1, 0))  # the reference model has channels in BGR order instead of RGB

def showarray(a, fmt='jpeg'):
    a = np.uint8(np.clip(a, 0, 255))
    f = StringIO()
    PIL.Image.fromarray(a).save(f, fmt)
    display(Image(data=f.getvalue()))

# a couple of utility functions for converting to and from Caffe's input image layout
def preprocess(net, img):
    return np.float32(np.rollaxis(img, 2)[::-1]) - net.transformer.mean['data']

def deprocess(net, img):
    return np.dstack((img + net.transformer.mean['data'])[::-1])

def objective_L2(dst):
    dst.diff[:] = dst.data

def make_step(net, step_size=1.5, end='inception_4c/output',
              jitter=32, clip=True, objective=objective_L2):
    '''Basic gradient ascent step.'''
    src = net.blobs['data']  # input image is stored in Net's 'data' blob
    dst = net.blobs[end]

    ox, oy = np.random.randint(-jitter, jitter+1, 2)
    src.data[0] = np.roll(np.roll(src.data[0], ox, -1), oy, -2)  # apply jitter shift

    net.forward(end=end)
    objective(dst)  # specify the optimization objective
    net.backward(start=end)
    g = src.diff[0]
    # apply normalized ascent step to the input image
    src.data[:] += step_size/np.abs(g).mean() * g

    src.data[0] = np.roll(np.roll(src.data[0], -ox, -1), -oy, -2)  # unshift image

    if clip:
        bias = net.transformer.mean['data']
        src.data[:] = np.clip(src.data, -bias, 255-bias)

def deepdream(net, base_img, iter_n=20, octave_n=4, octave_scale=1.4,
              end='inception_4c/output', clip=True, **step_params):
    # prepare base images for all octaves
    octaves = [preprocess(net, base_img)]
    for i in xrange(octave_n-1):
        octaves.append(nd.zoom(octaves[-1], (1, 1.0/octave_scale, 1.0/octave_scale), order=1))

    src = net.blobs['data']
    detail = np.zeros_like(octaves[-1])  # allocate image for network-produced details
    for octave, octave_base in enumerate(octaves[::-1]):
        h, w = octave_base.shape[-2:]
        if octave > 0:
            # upscale details from the previous octave
            h1, w1 = detail.shape[-2:]
            detail = nd.zoom(detail, (1, 1.0*h/h1, 1.0*w/w1), order=1)

        src.reshape(1, 3, h, w)  # resize the network's input image size
        src.data[0] = octave_base + detail
        for i in xrange(iter_n):
            make_step(net, end=end, clip=clip, **step_params)

            # visualization
            vis = deprocess(net, src.data[0])
            if not clip:  # adjust image contrast if clipping is disabled
                vis = vis*(255.0/np.percentile(vis, 99.98))
            showarray(vis)
            print octave, i, end, vis.shape
            clear_output(wait=True)

        # extract details produced on the current octave
        detail = src.data[0] - octave_base
    # returning the resulting image
    return deprocess(net, src.data[0])
I run the code above with:
end = 'inception_4c/output'
img = np.float32(PIL.Image.open('clouds.jpg'))
_=deepdream(net, img)
Approach (b)
"""
Use one single image to guide
the optimization process.
This affects the style of generated images
without using a different training set.
"""
def dream_control_by_image(optimization_objective, end):
# this image will shape input img
guide = np.float32(PIL.Image.open(optimization_objective))
showarray(guide)
h, w = guide.shape[:2]
src, dst = net.blobs['data'], net.blobs[end]
src.reshape(1,3,h,w)
src.data[0] = preprocess(net, guide)
net.forward(end=end)
guide_features = dst.data[0].copy()
def objective_guide(dst):
x = dst.data[0].copy()
y = guide_features
ch = x.shape[0]
x = x.reshape(ch,-1)
y = y.reshape(ch,-1)
A = x.T.dot(y) # compute the matrix of dot-products with guide features
dst.diff[0].reshape(ch,-1)[:] = y[:,A.argmax(1)] # select ones that match best
_=deepdream(net, img, end=end, objective=objective_guide)
and I run the code above with:
end = 'inception_4c/output'
# image to be modified
img = np.float32(PIL.Image.open('img/clouds.jpg'))
guide_image = 'img/guide.jpg'
dream_control_by_image(guide_image, end)
Question
Now for the failed approach: this is how I tried to access individual classes, one-hot encoding the matrix of classes and focusing on one (so far to no avail):
def objective_class(dst, class_idx=50):
    # note: 'class' is a reserved word in Python, hence 'class_idx'
    # according to the imagenet classes
    # 50: 'American alligator, Alligator mississipiensis'
    one_hot = np.zeros_like(dst.data)
    one_hot.flat[class_idx] = 1.
    dst.diff[:] = one_hot.flat[class_idx]
To make this clear: the question is not about the dream code, which is interesting background and already working code; it is only about the last paragraph's question: could someone please guide me on how to get images of a chosen class (take class #50: 'American alligator, Alligator mississipiensis') from ImageNet, so that I can use them as input, together with the cloud image, to create a dream image?
The question is how to get images of the chosen class #50: 'American alligator, Alligator mississipiensis' from ImageNet.
Go to image-net.org.
Go to "Download".
Follow the instructions for "Download Image URLs":
How to download the URLs of a synset from your browser?
1. Type a query in the Search box and click "Search" button
The alligator is not shown: ImageNet is under maintenance, and only ILSVRC synsets are included in the search results. No problem, we are fine with the similar animal "alligator lizard", since this search is only about getting to the right branch of the WordNet treemap. I do not know whether you would get the direct ImageNet images here even if there were no maintenance.
2. Open a synset page
Scrolling down:
Searching for the American alligator, which happens to be a saurian diapsid reptile as well, as a near neighbour:
3. You will find the "Download URLs" button at the bottom-left corner of the image browsing window.
You will get all of the URLs of the chosen class. A text file pops up in the browser:
http://image-net.org/api/text/imagenet.synset.geturls?wnid=n01698640
We see here that it is just about knowing the right WordNet id that needs to be put at the end of the URL.
Manual image download
The text file looks as follows:
http://farm1.static.flickr.com/136/326907154_d975d0c944.jpg
http://weeksbay.org/photo_gallery/reptiles/American20Alligator.jpg
...
continuing until image number 1261.
As an example, the first URL links to a working image, the second and third are dead links, and the fourth works again.
The images at these URLs are publicly available, but many links are dead, and the pictures are of lower resolution.
Automated image download
From the ImageNet guide again:
How to download by HTTP protocol? To download a synset by HTTP
request, you need to obtain the "WordNet ID" (wnid) of a synset first.
When you use the explorer to browse a synset, you can find the WordNet
ID below the image window.(Click Here and search "Synset WordNet ID"
to find out the wnid of "Dog, domestic dog, Canis familiaris" synset).
To learn more about the "WordNet ID", please refer to
Mapping between ImageNet and WordNet
Given the wnid of a synset, the URLs of its images can be obtained at
http://www.image-net.org/api/text/imagenet.synset.geturls?wnid=[wnid]
You can also get the hyponym synsets given wnid, please refer to API
documentation to learn more.
So what is in that API documentation?
It contains everything needed to get all of the WordNet IDs (so-called "synset IDs") and their words for all synsets; that is, every class name and its WordNet ID are at hand, for free.
Obtain the words of a synset
Given the wnid of a synset, the words of
the synset can be obtained at
http://www.image-net.org/api/text/wordnet.synset.getwords?wnid=[wnid]
You can also Click Here to
download the mapping between WordNet ID and words for all synsets,
Click Here to download the
mapping between WordNet ID and glosses for all synsets.
If you know the WordNet IDs of choice and their class names, you can use nltk.corpus.wordnet from "nltk" (the Natural Language Toolkit); see the WordNet interface.
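For illustration, a sketch assuming nltk and its WordNet corpus are installed (the synset name is my assumption): the wnid is simply the part-of-speech letter 'n' followed by the zero-padded WordNet offset, so you can move between the two like this:

from nltk.corpus import wordnet as wn

synset = wn.synset('american_alligator.n.01')
wnid = "n{:08d}".format(synset.offset())  # should give 'n01698640'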
In our case, we just need the images of class #50: 'American alligator, Alligator mississipiensis'. Since we already know what we need, we can leave nltk.corpus.wordnet aside (see tutorials or Stack Exchange questions for more). We can automate the download of all alligator images by looping through the URLs that are still alive. We could also widen this to the full WordNet with a loop over all WordNet IDs, of course, though that would take far too much time for the whole treemap, and it is not recommended either, since the images will stop being there if thousands of people download them daily.
I am afraid I will not take the time to write Python code that accepts the ImageNet class number "#50" as the argument, though that should be possible as well, using mapping tables from WordNet to ImageNet. The class name and WordNet ID should be enough.
For a single WordNet ID, the code could be as follows:
import urllib.request
import csv

wnid = "n01698640"
url = "http://image-net.org/api/text/imagenet.synset.geturls?wnid=" + str(wnid)

# From https://stackoverflow.com/a/45358832/6064933
req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
with open(wnid + ".csv", "wb") as f:
    with urllib.request.urlopen(req) as r:
        f.write(r.read())

failed = []  # collect the URLs that could not be downloaded
with open(wnid + ".csv", "r") as f:
    counter = 1
    for line in f.readlines():
        print(line.strip("\n"))
        try:
            with urllib.request.urlopen(line.strip()) as r2:
                with open(f'{wnid}_{counter:05}.jpg', "wb") as f2:
                    f2.write(r2.read())
        except Exception:
            failed.append(f'{counter:05}, {line}'.strip("\n"))
        counter += 1
        if counter == 10:  # stop after a few images for testing
            break

with open(wnid + "_failed.csv", "w", newline="") as f3:
    writer = csv.writer(f3)
    writer.writerow(failed)
If you need the images even behind the dead links and in original quality, and if your project is non-commercial, you can sign in, see "How do I get a copy of the images?" at the Download FAQ.
In the URL above, you see wnid=n01698640 at the end, which is the WordNet ID that is mapped to ImageNet. Alternatively, in the "Images of the Synset" tab, just click on "Wordnet IDs". You can use the WordNet ID to get the original images.
If your use is commercial, I would say contact the ImageNet team.
Add-on
Taking up the idea from a comment: if you do not want many images, but just the one single "class image" that represents the class as well as possible, have a look at Visualizing GoogLeNet Classes and try to use that method with images from ImageNet instead. It uses the deepdream code as well.
Visualizing GoogLeNet Classes
July 2015
Ever wondered what a deep neural network thinks a Dalmatian should
look like? Well, wonder no more.
Recently Google published a post describing how they managed to use
deep neural networks to generate class visualizations and modify
images through the so called “inceptionism” method. They later
published the code to modify images via the inceptionism method
yourself, however, they didn’t publish code to generate the class
visualizations they show in the same post.
While I never figured out exactly how Google generated their class
visualizations, after butchering the deepdream code and this ipython
notebook from Kyle McDonald, I managed to coach GoogLeNet into drawing
these:
... [with many other example images to follow]
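To connect this back to the failed one-hot attempt in the question: the deepdraw-style class visualizations drive the final classifier layer rather than an intermediate blob. Below is a minimal sketch of such an objective against the Caffe net defined earlier; 'loss3/classifier' is GoogLeNet's final InnerProduct layer, but treat the details as my assumptions rather than the blog's exact code:

def objective_class(dst, class_idx=50):
    # 50: 'American alligator, Alligator mississipiensis'
    one_hot = np.zeros_like(dst.data)
    one_hot[0, class_idx] = 1.0  # put gradient only on the chosen class logit
    dst.diff[:] = one_hot

# drive the unnormalized class scores instead of an inner layer
_ = deepdream(net, img, end='loss3/classifier', objective=objective_class)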

TensorFlow: Opening log data written by SummaryWriter

After following this tutorial on summaries and TensorBoard, I've been able to successfully save and look at data with TensorBoard. Is it possible to open this data with something other than TensorBoard?
By the way, my application is to do off-policy learning. I'm currently saving each state-action-reward tuple using SummaryWriter. I know I could manually store/train on this data, but I thought it'd be nice to use TensorFlow's built in logging features to store/load this data.
As of March 2017, the EventAccumulator tool has been moved from TensorFlow core to the TensorBoard backend. You can still use it to extract data from TensorBoard log files as follows:
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator
event_acc = EventAccumulator('/path/to/summary/folder')
event_acc.Reload()
# Show all tags in the log file
print(event_acc.Tags())
# E. g. get wall clock, number of steps and value for a scalar 'Accuracy'
w_times, step_nums, vals = zip(*event_acc.Scalars('Accuracy'))
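As a small follow-up (my addition, assuming pandas is installed): Scalars() returns a list of named tuples, so the result drops straight into a DataFrame:

import pandas as pd

df = pd.DataFrame(event_acc.Scalars('Accuracy'))  # columns: wall_time, step, value
print(df.head())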
Easy: the data can actually be exported to a .csv file within TensorBoard under the Events tab, and that file can then be loaded into a Pandas dataframe in Python, for example. Make sure you check the "Data download links" box.
For a more automated approach, check out the TensorBoard readme:
If you'd like to export data to visualize elsewhere (e.g. iPython
Notebook), that's possible too. You can directly depend on the
underlying classes that TensorBoard uses for loading data:
python/summary/event_accumulator.py (for loading data from a single
run) or python/summary/event_multiplexer.py (for loading data from
multiple runs, and keeping it organized). These classes load groups of
event files, discard data that was "orphaned" by TensorFlow crashes,
and organize the data by tag.
As another option, there is a script
(tensorboard/scripts/serialize_tensorboard.py) which will load a
logdir just like TensorBoard does, but write all of the data out to
disk as json instead of starting a server. This script is setup to
make "fake TensorBoard backends" for testing, so it is a bit rough
around the edges.
I think the data are protobufs encoded in RecordReader format. To get serialized strings out of the files you can use py_record_reader or build a graph with a TFRecordReader op, and to deserialize those strings to protobufs use the Event schema. If you get a working example, please update this question, since we seem to be missing documentation on this.
I did something along these lines for a previous project. As mentioned by others, the main ingredient is TensorFlow's event accumulator:
from tensorflow.python.summary import event_accumulator as ea

acc = ea.EventAccumulator("folder/containing/summaries/")
acc.Reload()

# Print tags of contained entities; use these names to retrieve entities as below
print(acc.Tags())

# E.g. get all values and steps of a scalar called 'l2_loss'
xy_l2_loss = [(s.step, s.value) for s in acc.Scalars('l2_loss')]

# Retrieve images, e.g. the first one labeled as 'generator'
img = acc.Images('generator/image/0')
with open('img_{}.png'.format(img.step), 'wb') as f:
    f.write(img.encoded_image_string)
You can also use tf.train.summary_iterator: to extract events from a ./logs folder where only classic scalars lr, acc, loss, val_acc and val_loss are present, you can use this GIST: tensorboard_to_csv.py
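A minimal sketch of that approach (mine, assuming TF 2.x, where the iterator lives under tf.compat.v1):

import tensorflow as tf

# print every scalar stored in one event file
for event in tf.compat.v1.train.summary_iterator('./logs/events.out.tfevents.xxx'):
    for value in event.summary.value:
        if value.HasField('simple_value'):
            print(event.step, value.tag, value.simple_value)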
Chris Cundy's answer works well when you have fewer than 10000 data points in your tfevent file. However, when you have a large file with over 10000 data points, TensorBoard will automatically sample them and only give you at most 10000 points. This is quite annoying underlying behavior, as it is not well documented. See https://github.com/tensorflow/tensorboard/blob/master/tensorboard/backend/event_processing/event_accumulator.py#L186.
To get around it and get all data points, a slightly hacky way is:
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

class FalseDict(object):
    def __getitem__(self, key):
        return 0
    def __contains__(self, key):
        return True

event_acc = EventAccumulator('path/to/your/tfevents', size_guidance=FalseDict())
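A less hacky alternative with the same effect for scalars (my suggestion): size_guidance is a dict from tag type to the number of items to keep, and 0 means keep everything:

from tensorboard.backend.event_processing import event_accumulator

event_acc = event_accumulator.EventAccumulator(
    'path/to/your/tfevents',
    size_guidance={event_accumulator.SCALARS: 0},  # 0 = keep all scalar events
)
event_acc.Reload()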
It looks like for TensorBoard versions >= 2.3 you can streamline the process of converting your TensorBoard events to a pandas DataFrame using tensorboard.data.experimental.ExperimentFromDev().
It requires you to upload your logs to TensorBoard.dev, though, which is public. There are plans to expand the capability to locally stored logs in the future.
https://www.tensorflow.org/tensorboard/dataframe_api
You can also use the EventFileLoader to iterate through a tensorboard file
from tensorboard.backend.event_processing.event_file_loader import EventFileLoader

for event in EventFileLoader('path/to/events.out.tfevents.xxx').Load():
    print(event)
Surprisingly, the Python package tbparse has not been mentioned yet.
From the documentation:
Installation:
pip install tensorflow  # or tensorflow-cpu
pip install -U tbparse  # requires Python >= 3.7
Note: If you don't want to install TensorFlow, see Installing without TensorFlow.
We suggest using an additional virtual environment for parsing and plotting the TensorBoard events, so no worries if your training code uses Python 3.6 or older versions.
Reading one or more event files with tbparse only requires 5 lines of code:
from tbparse import SummaryReader
log_dir = "<PATH_TO_EVENT_FILE_OR_DIRECTORY>"
reader = SummaryReader(log_dir)
df = reader.scalars
print(df)
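As a usage note (my addition): the returned DataFrame has tag, step, and value columns, so pulling out a single metric is plain pandas; the tag name 'loss' here is hypothetical:

loss = df[df['tag'] == 'loss']
print(loss[['step', 'value']])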