LICENSE.txt when loading data into tensorflow transfer learning - tensorflow

I am using code provided by tensorflow to load data: https://www.tensorflow.org/beta/tutorials/load_data/text
When I put in my own photos, they go into a different directory. The code wants attributions from my LICENSE.txt, but I am not sure what the purpose of this code segment is.
I made my own LICENSE.txt file by just making a text file with each line being the title of an image. When I do this, it makes attributions a dictionary in which each key is the file name and each corresponding value is ''. When I run the caption code below, I get a KeyError for every file.
import os
attributions = (data_root/"LICENSE.txt").open(encoding='utf-8').readlines()
attributions = [line.split('\n') for line in attributions]
print(attributions)
attributions = dict(attributions)
import IPython.display as display
def caption_image(image_path):
    image_rel = pathlib.Path(image_path).relative_to(data_root)
    return "Image (CC BY 2.0) " + ' -'.join(attributions[str(image_rel)].split(' - ')[:-1])

for n in range(3):
    image_path = random.choice(all_image_paths)
    display.display(display.Image(image_path))
    print(caption_image(image_path))
    print()
I do not really know what to expect when I run the for loop in a Jupyter notebook, but it gives me a KeyError, the key being the file name.
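For what it's worth, here is a minimal illustration (assuming a LICENSE.txt line that is just a bare file name, as described above) of why the dictionary values come out as '' and why the caption join then produces nothing useful:
line = "my_photo.jpg\n"
print(line.split('\n'))                  # ['my_photo.jpg', ''] -> dict key 'my_photo.jpg', value ''
print(' -'.join(''.split(' - ')[:-1]))   # '' -> every caption built from such a value is empty
The KeyError itself most likely means that str(image_rel) (which includes the subfolder, e.g. 'daisy/my_photo.jpg') does not exactly match the bare file names used as dictionary keys.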

I wrote that tutorial. The license lookup is only there so we can directly attribute the individual photographers when we publish it. If you're working with your own images, you don't need that part of the code at all.
All it's really doing is choosing a random image and displaying it. You can simplify it to:
import random
import IPython.display as display

for n in range(3):
    image_path = random.choice(all_image_paths)
    display.display(display.Image(image_path))
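If you do want captions for your own images, here is a minimal sketch of a LICENSE.txt parser, assuming (hypothetically) that each line is formatted as "relative/path.jpg - Photographer - CC BY 2.0" to match the ' - ' split used in the tutorial, with data_root defined as in the question:
import pathlib

attributions = {}
with (data_root / "LICENSE.txt").open(encoding='utf-8') as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        # split into path, photographer and license (this format is an assumption; adjust to your file)
        rel_path, photographer, licence = [part.strip() for part in line.split(' - ', 2)]
        attributions[rel_path] = photographer + ' - ' + licence

def caption_image(image_path):
    image_rel = pathlib.Path(image_path).relative_to(data_root)
    # .get() avoids the KeyError when a file has no entry in LICENSE.txt
    return "Image (CC BY 2.0) " + attributions.get(str(image_rel), "no attribution found")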

Is there a way I can build a language model as described below?

Here's what I want to do:
I have collected data in a .txt file (i.e., lyrics of some 100-odd songs from one artist) and I want to use it to train an AI language model, then have it give me some kind of output when I input a small phrase.
Here's what I tried:
I used gpt-2-simple (by Max Woolf on GitHub) and trained it with a .txt file that I created, but it gave me a completely unrelated, random output.
How can I do what I want to do?
Here's the code that I used:
import gpt_2_simple as gpt2
import os
import requests

model_name = "124M"
if not os.path.isdir(os.path.join("models", model_name)):
    print(f"Downloading {model_name} model...")
    gpt2.download_gpt2(model_name=model_name)

file_name = "myway.txt"

sess = gpt2.start_tf_sess()
gpt2.finetune(sess,
              file_name,
              model_name=model_name,
              steps=10)
gpt2.generate(sess)
The output that I got:
The file has lyrics from Led Zeppelin, so I doubt that the output is relevant to the training that I wanted to give it.
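For what it's worth, gpt-2-simple can condition generation on a phrase via its prefix parameter, and 10 finetuning steps is almost certainly too few for the model to pick anything up from the lyrics. A rough sketch (parameter values are guesses to experiment with, not recommendations):
import gpt_2_simple as gpt2

sess = gpt2.start_tf_sess()
gpt2.finetune(sess,
              dataset="myway.txt",   # the lyrics file from the question
              model_name="124M",
              steps=1000)            # far more than 10; watch the loss and the sample output

# continue a short input phrase
gpt2.generate(sess,
              prefix="In the days of my youth",  # hypothetical input phrase
              length=100,
              temperature=0.7,
              nsamples=3)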

Template Matching through python API on Linux desktop

I'm following the tutorial on using your own template images to do object 3D pose tracking, but I'm trying to get it working on Ubuntu 20.04 with a live webcam stream.
I was able to successfully make my index .pb file with extracted KNIFT features from my custom images.
It seems the next thing to do is load the provided template matching graph (in mediapipe/graphs/template_matching/template_matching_desktop.pbtxt) (replacing the index_proto_filename of the BoxDetectorCalculator with my own index file), and run it on a video input stream to track my custom object.
I was hoping that would be easiest to do in python, but am running into dependency problems.
(I installed mediapipe python with pip3 install mediapipe)
First, I couldn't find how to directly load a .pbtxt file as a graph in the mediapipe python API, but that's ok. I just load the text it contains and use that.
template_matching_graph_filepath=os.path.abspath("~/mediapipe/mediapipe/graphs/template_matching/template_matching_desktop.pbtxt")
graph = mp.CalculatorGraph(graph_config=open(template_matching_graph_filepath).read())
But I get missing calculator targets.
No registered object with name: OpenCvVideoDecoderCalculator; Unable to find Calculator "OpenCvVideoDecoderCalculator"
or
[libprotobuf ERROR external/com_google_protobuf/src/google/protobuf/text_format.cc:309] Error parsing text-format mediapipe.CalculatorGraphConfig: 54:70: Could not find type "type.googleapis.com/mediapipe.TfLiteInferenceCalculatorOptions" stored in google.protobuf.Any.
It seems similar to this troubleshooting case but, since I'm not trying to compile an application, I'm not sure how to link in the missing calculators.
How do I make the mediapipe python API aware of these graphs?
UPDATE:
I made decent progress by adding the calculators that the template_matching graph depends on to the cc_library deps of the mediapipe/python/BUILD file:
cc_library(
    name = "builtin_calculators",
    deps = [
        "//mediapipe/calculators/image:feature_detector_calculator",
        "//mediapipe/calculators/image:image_properties_calculator",
        "//mediapipe/calculators/video:opencv_video_decoder_calculator",
        "//mediapipe/calculators/video:opencv_video_encoder_calculator",
        "//mediapipe/calculators/video:box_detector_calculator",
        "//mediapipe/calculators/tflite:tflite_inference_calculator",
        "//mediapipe/calculators/tflite:tflite_tensors_to_floats_calculator",
        "//mediapipe/calculators/util:timed_box_list_id_to_label_calculator",
        "//mediapipe/calculators/util:timed_box_list_to_render_data_calculator",
        "//mediapipe/calculators/util:landmarks_to_render_data_calculator",
        "//mediapipe/calculators/util:annotation_overlay_calculator",
        ...
I also modified solution_base.py so it knows about BoxDetector's options.
from mediapipe.calculators.video import box_detector_calculator_pb2
...
CALCULATOR_TO_OPTIONS = {
    'BoxDetectorCalculator':
        box_detector_calculator_pb2.BoxDetectorCalculatorOptions,
Then I rebuilt and installed mediapipe python from source with:
~/mediapipe$ python3 setup.py install --link-opencv
Then I was able to make my own class derived from SolutionBase:
import numpy as np
from typing import NamedTuple

from mediapipe.python.solution_base import SolutionBase

class ObjectTracker(SolutionBase):
    """Process a video stream and output a video with edges of templates highlighted."""

    def __init__(self,
                 object_knift_index_file_path):
        # object_pose_estimation_binary_file_path (defined elsewhere) points at the compiled binary graph
        super().__init__(binary_graph_path=object_pose_estimation_binary_file_path,
                         calculator_params={"BoxDetector.index_proto_filename": object_knift_index_file_path},
                         )

    def process(self, image: np.ndarray) -> NamedTuple:
        return super().process(input_data={'input_video': image})

ot = ObjectTracker(object_knift_index_file_path="/path/to/my/object_knift_index.pb")
Finally, I process a video frame from a cv2.VideoCapture
cv_video = cv2.VideoCapture(0)
result, frame = cv_video.read()
input_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
res = ot.process(image=input_frame)
So close! But I run into this error which I just don't know what to do with.
/usr/local/lib/python3.8/dist-packages/mediapipe/python/solution_base.py in process(self, input_data)
326 if data.shape[2] != RGB_CHANNELS:
327 raise ValueError('Input image must contain three channel rgb data.')
--> 328 self._graph.add_packet_to_input_stream(
329 stream=stream_name,
330 packet=self._make_packet(input_stream_type,
RuntimeError: Graph has errors:
Calculator::Open() for node "BoxDetector" failed: ; Error while reading file: /usr/local/lib/python3.8/dist-packages/
Looks like CalculatorNode::OpenNode() is trying to open the python API install path as a file. Maybe it has to do with the default_context. I have no idea where to go from here. :(

error: (-215:Assertion failed) !_src.empty() in function 'cvtColor' while using OpenCV 4.2 with swift [duplicate]

I am trying to do a basic colour conversion in Python, however I can't seem to get past the error below. I have re-installed Python and OpenCV, and tried on both Python 3.4.3 (latest) and Python 2.7 (which is on my Mac).
I installed OpenCV using pip (the opencv-python package).
Here is the code that fails:
frame = cv2.imread('frames/frame%d.tiff' % count)
frame_HSV= cv2.cvtColor(frame,cv2.COLOR_RGB2HSV)
This is the error message:
cv2.error: OpenCV(3.4.3) /Users/travis/build/skvark/opencv-python/opencv/modules/imgproc/src/color.cpp:181: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'
This error happens because the image didn't load properly, so there is a problem with the previous line, cv2.imread. My suggestions are:
check that the image exists in the path you give
check that the count variable holds a valid number
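A minimal check along those lines, reusing the pattern from the question (count is assumed to be defined as in the original loop):
import cv2

frame = cv2.imread('frames/frame%d.tiff' % count)
if frame is None:  # imread returns None instead of raising an error when the file can't be read
    raise FileNotFoundError('Could not read frames/frame%d.tiff' % count)
frame_HSV = cv2.cvtColor(frame, cv2.COLOR_RGB2HSV)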
If anyone is experiencing this same problem when reading a frame from a webcam:
Verify whether your webcam is being used by another task and close that task. This will solve the problem.
I spent some time with this error until I realized my camera was live in a Google Hangouts group. Also, make sure your webcam drivers are up to date.
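A small sketch of how to check the camera before converting, so a busy or missing webcam fails with a clear message instead of the cvtColor assertion:
import cv2

cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise RuntimeError('Webcam could not be opened; it may be in use by another application')
ok, frame = cap.read()
if not ok or frame is None:
    raise RuntimeError('Webcam opened but returned an empty frame')
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
cap.release()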
I kept getting this error too:
Traceback (most recent call last):
File "face_detector.py", line 6, in <module>
gray_img=cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cv2.error: OpenCV(4.1.0) C:\projects\opencv-python\opencv\modules\imgproc\src\color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cv::cvtColor
My cv2.cvtColor(...) was working fine with \photo.jpg but not with \news.jpg. I finally realized that when working with Python on Windows, those escape characters will get you every time! My "bad" photo was being "escaped" because its file name begins with "n": Python took the \n as an escape character, so OpenCV couldn't find the file!
Solution:
Prefix file-name strings in Windows Python with r"...\...\" (a raw string), as in
cv2.imread(r".\images\news.jpg")
If the path is correct and the name of the image is OK, but you are still getting the error, use:
from skimage import io
img = io.imread(file_path)
instead of:
cv2.imread(file_path)
The function imread loads an image from the specified file and returns it. If the image cannot be read (because of a missing file, improper permissions, or an unsupported or invalid format), the function returns an empty matrix (Mat::data==NULL).
check if the image exists in the path and verify the image extension (.jpg or .png)
Check whether it is a jpg, png or bmp file that you are providing, and write the extension accordingly.
Another thing which might be causing this is a 'weird' symbol in your file and directory names. All umlaut (äöå) and other (éóâ etc) characters should be removed from the file and folder names. I've had this same issue sometimes because of these characters.
Most probably there is an error in loading the image; try checking the directory again.
Print the image to confirm whether it actually loaded or not.
In my case, the image was incorrectly named. Check if the image exists and try
import numpy as np
import cv2
img = cv2.imread('image.png', 0)
cv2.imshow('image', img)
I've been in the same situation as well, and in my case it was because of Korean letters in the path...
After I removed the Korean letters from the folder name, it worked.
OR put
# -*- coding: utf-8 -*-
(without the surrounding brackets) or something like that in the first line to make Python understand Korean, or whatever your language is.
Then it worked for me even with some Korean in the path.
So the thing is, it seems to be something about the path or the characters in it.
The people who answered are saying similar things. Hope you solve it!
I had the same problem, and it turned out that my image names included special characters (e.g. château.jpg), which could not be handled by cv2.imread. My solution was to make a temporary copy of the file, renaming it to e.g. temp.jpg, which could be loaded by cv2.imread without any problems.
Note: I did not benchmark shutil.copy2 against other options, so there is probably a better/faster way to make a temporary copy.
import shutil, sys, os, dlib, glob, cv2

for f in glob.glob(os.path.join(myfolder_path, "*.jpg")):
    shutil.copy2(f, myfolder_path + 'temp.jpg')
    img = cv2.imread(myfolder_path + 'temp.jpg')
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    os.remove(myfolder_path + 'temp.jpg')
If there are only a few files with special characters, renaming can also be done as an exception, e.g.
for f in glob.glob(os.path.join(myfolder_path, "*.jpg")):
    try:
        img = cv2.imread(f)
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    except:
        shutil.copy2(f, myfolder_path + 'temp.jpg')
        img = cv2.imread(myfolder_path + 'temp.jpg')
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        os.remove(myfolder_path + 'temp.jpg')
In my case it was a permission issue. I had to:
chmod a+wrx the image,
then it worked.
Please note that the error is in cv2.imread(): give the right path to the image. First check whether your system loads the image at all; this can be verified with a simple cv2.imread() call.
After that, see this code for face detection:
import numpy as np
import cv2

cascPath = "/Users/mayurgupta/opt/anaconda3/lib/python3.7/site-packages/cv2/data/haarcascade_frontalface_default.xml"
eyePath = "/Users/mayurgupta/opt/anaconda3/lib/python3.7/site-packages/cv2/data/haarcascade_eye.xml"
smilePath = "/Users/mayurgupta/opt/anaconda3/lib/python3.7/site-packages/cv2/data/haarcascade_smile.xml"

face_cascade = cv2.CascadeClassifier(cascPath)
eye_cascade = cv2.CascadeClassifier(eyePath)
smile_cascade = cv2.CascadeClassifier(smilePath)

img = cv2.imread('WhatsApp Image 2020-04-04 at 8.43.18 PM.jpeg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

faces = face_cascade.detectMultiScale(gray, 1.3, 5)
for (x, y, w, h) in faces:
    img = cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2)
    roi_gray = gray[y:y+h, x:x+w]
    roi_color = img[y:y+h, x:x+w]
    eyes = eye_cascade.detectMultiScale(roi_gray)
    for (ex, ey, ew, eh) in eyes:
        cv2.rectangle(roi_color, (ex, ey), (ex+ew, ey+eh), (0, 255, 0), 2)

cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Here, cascPath, eyePath and smilePath should hold the actual paths to the Haar cascade XML files, which are picked up from lib/python3.7/site-packages/cv2/data.
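If you installed OpenCV via opencv-python, a more portable way to build those paths is cv2.data.haarcascades, which points at the data folder bundled with the package; a short sketch:
import cv2

cascPath = cv2.data.haarcascades + 'haarcascade_frontalface_default.xml'
face_cascade = cv2.CascadeClassifier(cascPath)
if face_cascade.empty():  # empty() is True when the XML file could not be loaded
    raise IOError('Could not load cascade from ' + cascPath)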
Your code can't find the image, or the name of your image can't be handled, as indicated by the error message.
Solution:
import cv2
import numpy as np
import matplotlib.pyplot as plt
img=cv2.imread('哈哈.jpg')#solution:img=cv2.imread('haha.jpg')
print(img)
If anyone is experiencing this same problem when reading a frame from a webcam (with code similar to "frame = cv2.VideoCapture(0)") and works in a Jupyter notebook, you may try:
ensure the previously tried code is not running already, and restart the Jupyter notebook kernel
put "frame = cv2.VideoCapture(0)" in a SEPARATE cell (previous code in the cell above, following code in the cell below)
then run all the cells above the one containing "frame = cv2.VideoCapture(0)"
then run the cell whose only code is "frame = cv2.VideoCapture(0)" - AND - before you continue executing other cells, ENSURE that the asterisk on the left side of this particular cell disappears and an execution number appears instead; only then continue
now you can try executing the rest of your code, as your camera input should not be empty anymore :-)
At the end, make sure you close everything and restart the kernel to prepare it for another run
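Releasing the capture at the end of the cell also helps, so that a re-run does not find the camera still held by the previous run; a minimal sketch:
import cv2

cap = cv2.VideoCapture(0)
try:
    ok, frame = cap.read()
    if not ok:
        raise RuntimeError('Camera returned no frame (busy or not ready yet)')
    # ... work with frame here ...
finally:
    cap.release()  # free the device so the next cell or kernel run can open it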
As @shaked litbak said, this error arose on my first use of the ASCII-generator, because I naively thought I just had to add my file to the ./data directory and it would load automatically.
I had to pass the desired file path with the --input option instead.
I checked my image file path and it was correct, and I made sure there were no corrupt images. The problem was with my Mac: it sometimes keeps a hidden file called .DS_Store in the same folder as the image files, and cv2 was having a problem with that file. So I solved the problem by deleting .DS_Store.
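One way to avoid tripping over such hidden files is to filter them out when collecting image paths; a small sketch (folder is a hypothetical directory variable):
import glob, os

image_paths = [p for p in glob.glob(os.path.join(folder, '*'))
               if not os.path.basename(p).startswith('.')]  # skips .DS_Store and other hidden files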
I also encountered this type of error:
error: OpenCV(4.1.2) /io/opencv/modules/imgproc/src/color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'
The solution was to load the image properly. Since the file mentioned was wrong, the image was not loaded and hence it threw this error. Check the path of the image, or, if you are uploading an image through Colab or Drive, make sure that the image is present in the drive.
I encountered the problem when I tried to load an image from a non-ASCII path.
If I simply use imread to load the image, I only get None.
Here is my solution:
import cv2
import numpy as np
path = r'D:\map\上海地图\abc.png'
image = cv2.imdecode(np.fromfile(path, dtype=np.uint8), cv2.IMREAD_UNCHANGED)
A similar thing happens when saving an image to a non-ASCII path: it will not be saved, without any warning. Here is what I did:
import cv2
import numpy as np
path = r'D:\map\上海地图\abc.png'
cv2.imencode('.png', image)[1].tofile(path)
path = os.path.join(raw_folder, folder, file)
print('[DEBUG] path:', path)
img = cv2.imread(path)  # read the image from the path
if img is None:         # check whether the image exists in the path you give
    print('Wrong path:', path)
else:                   # it completes the steps
    img = cv2.resize(img, dsize=(128, 128))
    pixels.append(img)
The solution is to add './' before the name of the image before reading it...
Just try downgrading OpenCV.
Check the installed version in the Python shell (in cmd):
>>> import cv2
>>> cv2.__version__
After checking, in cmd:
pip uninstall opencv-python
and after uninstalling, install an older version:
pip install opencv-python==3.4.8.29

How to get chosen class images from Imagenet?

Background
I have been playing around with Deep Dream and Inceptionism, using the Caffe framework to visualize layers of GoogLeNet, an architecture built for the Imagenet project, a large visual database designed for use in visual object recognition.
You can find Imagenet here: Imagenet 1000 Classes.
To probe into the architecture and generate 'dreams', I am using three notebooks:
https://github.com/google/deepdream/blob/master/dream.ipynb
https://github.com/kylemcdonald/deepdream/blob/master/dream.ipynb
https://github.com/auduno/deepdraw/blob/master/deepdraw.ipynb
The basic idea here is to extract some features from each channel in a specified layer from the model or a 'guide' image.
Then we input an image we wish to modify into the model and extract the features in the same layer specified (for each octave),
enhancing the best matching features, i.e., the largest dot product of the two feature vectors.
So far I've managed to modify input images and control dreams using the following approaches:
(a) applying layers as 'end' objectives for the input image optimization (see Feature Visualization)
(b) using a second image to guide the optimization objective on the input image
(c) visualizing GoogLeNet model classes generated from noise
However, the effect I want to achieve sits in-between these techniques, of which I haven't found any documentation, paper, or code.
Desired result (not part of the question to be answered)
To have one single class or unit belonging to a given 'end' layer (a) guide the optimization objective (b) and have this class visualized (c) on the input image:
An example where class = 'face' and input_image = 'clouds.jpg':
please note: the image above was generated using a model for face recognition, which was not trained on the Imagenet dataset. For demonstration purposes only.
Working code
Approach (a)
from cStringIO import StringIO
import numpy as np
import scipy.ndimage as nd
import PIL.Image
from IPython.display import clear_output, Image, display
from google.protobuf import text_format
import matplotlib as plt
import caffe
model_name = 'GoogLeNet'
model_path = 'models/dream/bvlc_googlenet/' # substitute your path here
net_fn = model_path + 'deploy.prototxt'
param_fn = model_path + 'bvlc_googlenet.caffemodel'
model = caffe.io.caffe_pb2.NetParameter()
text_format.Merge(open(net_fn).read(), model)
model.force_backward = True
open('models/dream/bvlc_googlenet/tmp.prototxt', 'w').write(str(model))
net = caffe.Classifier('models/dream/bvlc_googlenet/tmp.prototxt', param_fn,
                       mean=np.float32([104.0, 116.0, 122.0]),  # ImageNet mean, training set dependent
                       channel_swap=(2, 1, 0))                  # the reference model has channels in BGR order instead of RGB
def showarray(a, fmt='jpeg'):
    a = np.uint8(np.clip(a, 0, 255))
    f = StringIO()
    PIL.Image.fromarray(a).save(f, fmt)
    display(Image(data=f.getvalue()))

# a couple of utility functions for converting to and from Caffe's input image layout
def preprocess(net, img):
    return np.float32(np.rollaxis(img, 2)[::-1]) - net.transformer.mean['data']

def deprocess(net, img):
    return np.dstack((img + net.transformer.mean['data'])[::-1])

def objective_L2(dst):
    dst.diff[:] = dst.data

def make_step(net, step_size=1.5, end='inception_4c/output',
              jitter=32, clip=True, objective=objective_L2):
    '''Basic gradient ascent step.'''
    src = net.blobs['data']  # input image is stored in Net's 'data' blob
    dst = net.blobs[end]

    ox, oy = np.random.randint(-jitter, jitter+1, 2)
    src.data[0] = np.roll(np.roll(src.data[0], ox, -1), oy, -2)  # apply jitter shift

    net.forward(end=end)
    objective(dst)  # specify the optimization objective
    net.backward(start=end)
    g = src.diff[0]
    # apply normalized ascent step to the input image
    src.data[:] += step_size/np.abs(g).mean() * g

    src.data[0] = np.roll(np.roll(src.data[0], -ox, -1), -oy, -2)  # unshift image

    if clip:
        bias = net.transformer.mean['data']
        src.data[:] = np.clip(src.data, -bias, 255-bias)

def deepdream(net, base_img, iter_n=20, octave_n=4, octave_scale=1.4,
              end='inception_4c/output', clip=True, **step_params):
    # prepare base images for all octaves
    octaves = [preprocess(net, base_img)]
    for i in xrange(octave_n-1):
        octaves.append(nd.zoom(octaves[-1], (1, 1.0/octave_scale, 1.0/octave_scale), order=1))

    src = net.blobs['data']
    detail = np.zeros_like(octaves[-1])  # allocate image for network-produced details
    for octave, octave_base in enumerate(octaves[::-1]):
        h, w = octave_base.shape[-2:]
        if octave > 0:
            # upscale details from the previous octave
            h1, w1 = detail.shape[-2:]
            detail = nd.zoom(detail, (1, 1.0*h/h1, 1.0*w/w1), order=1)

        src.reshape(1, 3, h, w)  # resize the network's input image size
        src.data[0] = octave_base + detail
        for i in xrange(iter_n):
            make_step(net, end=end, clip=clip, **step_params)

            # visualization
            vis = deprocess(net, src.data[0])
            if not clip:  # adjust image contrast if clipping is disabled
                vis = vis*(255.0/np.percentile(vis, 99.98))
            showarray(vis)
            print octave, i, end, vis.shape
            clear_output(wait=True)

        # extract details produced on the current octave
        detail = src.data[0] - octave_base
    # returning the resulting image
    return deprocess(net, src.data[0])
I run the code above with:
end = 'inception_4c/output'
img = np.float32(PIL.Image.open('clouds.jpg'))
_=deepdream(net, img)
Approach (b)
"""
Use one single image to guide
the optimization process.
This affects the style of generated images
without using a different training set.
"""
def dream_control_by_image(optimization_objective, end):
    # this image will shape input img
    guide = np.float32(PIL.Image.open(optimization_objective))
    showarray(guide)
    h, w = guide.shape[:2]
    src, dst = net.blobs['data'], net.blobs[end]
    src.reshape(1, 3, h, w)
    src.data[0] = preprocess(net, guide)
    net.forward(end=end)
    guide_features = dst.data[0].copy()

    def objective_guide(dst):
        x = dst.data[0].copy()
        y = guide_features
        ch = x.shape[0]
        x = x.reshape(ch, -1)
        y = y.reshape(ch, -1)
        A = x.T.dot(y)  # compute the matrix of dot-products with guide features
        dst.diff[0].reshape(ch, -1)[:] = y[:, A.argmax(1)]  # select ones that match best

    _ = deepdream(net, img, end=end, objective=objective_guide)
and I run the code above with:
end = 'inception_4c/output'
# image to be modified
img = np.float32(PIL.Image.open('img/clouds.jpg'))
guide_image = 'img/guide.jpg'
dream_control_by_image(guide_image, end)
Question
Now the failed approach, in which I tried to access individual classes by one-hot encoding the matrix of classes and focusing on one (so far to no avail):
def objective_class(dst, class_idx=50):
    # 'class' is a reserved word in Python, hence class_idx
    # according to the imagenet classes
    # 50: 'American alligator, Alligator mississipiensis',
    one_hot = np.zeros_like(dst.data)
    one_hot.flat[class_idx] = 1.
    dst.diff[:] = one_hot.flat[class_idx]
To make this clear: the question is not about the dream code, which is the interesting background and already working code; it is only about this last paragraph's question: could someone please guide me on how to get images of a chosen class (take class #50: 'American alligator, Alligator mississipiensis') from ImageNet, so that I can use them as input, together with the cloud image, to create a dream image?
The question is how to get images of the chosen class #50: 'American alligator, Alligator mississipiensis' from ImageNet.
Go to image-net.org.
Go to "Download".
Follow the instructions for "Download Image URLs":
How to download the URLs of a synset from your Browser?
1. Type a query in the Search box and click "Search" button
The alligator is not shown. ImageNet is under maintenance. Only ILSVRC synsets are included in the search results. No problem, we are fine with the similar animal "alligator lizard", since this search is about getting to the right branch of the WordNet treemap. I do not know whether you will get the direct ImageNet images here even if there were no maintenance.
2. Open a synset page
Scrolling down:
Scrolling down:
Searching for the American alligator, which happens to be a saurian diapsid reptile as well, as a near neighbour:
3. You will find the "Download URLs" button at the bottom-left corner of the image browsing window.
You will get all of the URLs with the chosen class. A text file pops up in the browser:
http://image-net.org/api/text/imagenet.synset.geturls?wnid=n01698640
We see here that it is just about knowing the right WordNet id that needs to be put at the end of the URL.
Manual image download
The text file looks as follows:
http://farm1.static.flickr.com/136/326907154_d975d0c944.jpg
http://weeksbay.org/photo_gallery/reptiles/American20Alligator.jpg
...
till image number 1261.
As an example, the first URL links to:
And the second is a dead link:
The third link is dead, but the fourth is working.
The images of these URLs are publicly available, but many links are dead, and the pictures are of lower resolution.
Automated image download
From the ImageNet guide again:
How to download by HTTP protocol? To download a synset by HTTP
request, you need to obtain the "WordNet ID" (wnid) of a synset first.
When you use the explorer to browse a synset, you can find the WordNet
ID below the image window. (Click Here and search "Synset WordNet ID"
to find out the wnid of "Dog, domestic dog, Canis familiaris" synset).
To learn more about the "WordNet ID", please refer to
Mapping between ImageNet and WordNet
Given the wnid of a synset, the URLs of its images can be obtained at
http://www.image-net.org/api/text/imagenet.synset.geturls?wnid=[wnid]
You can also get the hyponym synsets given wnid, please refer to API
documentation to learn more.
So what is in that API documentation?
It has everything needed to get all of the WordNet IDs (so-called "synset IDs") and their words for all synsets; that is, every class name and its WordNet ID are at hand, for free.
Obtain the words of a synset
Given the wnid of a synset, the words of
the synset can be obtained at
http://www.image-net.org/api/text/wordnet.synset.getwords?wnid=[wnid]
You can also Click Here to
download the mapping between WordNet ID and words for all synsets,
Click Here to download the
mapping between WordNet ID and glosses for all synsets.
If you know the WordNet IDs of choice and their class names, you can use nltk.corpus.wordnet from "nltk" (the Natural Language Toolkit); see the WordNet interface.
In our case, we just need the images of class #50: 'American alligator, Alligator mississipiensis'. We already know what we need, so we can leave nltk.corpus.wordnet aside (see tutorials or Stack Exchange questions for more). We can automate the download of all alligator images by looping through the URLs that are still alive. We could also widen this to the full WordNet with a loop over all WordNet IDs, though that would take far too much time for the whole treemap, and it is also not recommended, since the images will stop being there if thousands of people download them daily.
I am afraid I will not take the time to write Python code that accepts the ImageNet class number "#50" as the argument, though that should be possible as well, using mapping tables from WordNet to ImageNet. Class name and WordNet ID should be enough.
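For reference, a minimal sketch of the class-name-to-WordNet-ID step with nltk (assuming the WordNet corpus is downloaded; ImageNet wnids are the WordNet 3.0 noun offsets, zero-padded and prefixed with 'n'):
from nltk.corpus import wordnet as wn  # requires a one-time nltk.download('wordnet')

synset = wn.synsets('American_alligator', pos='n')[0]
wnid = 'n{:08d}'.format(synset.offset())
print(wnid)  # should print n01698640, the id used in the download URL above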
For a single WordNet ID, the code could be as follows:
import urllib.request
import csv
wnid = "n01698640"
url = "http://image-net.org/api/text/imagenet.synset.geturls?wnid=" + str(wnid)
# From https://stackoverflow.com/a/45358832/6064933
req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
with open(wnid + ".csv", "wb") as f:
with urllib.request.urlopen(req) as r:
f.write(r.read())
with open(wnid + ".csv", "r") as f:
counter = 1
for line in f.readlines():
print(line.strip("\n"))
failed = []
try:
with urllib.request.urlopen(line) as r2:
with open(f'''{wnid}_{counter:05}.jpg''', "wb") as f2:
f2.write(r2.read())
except:
failed.append(f'''{counter:05}, {line}'''.strip("\n"))
counter += 1
if counter == 10:
break
with open(wnid + "_failed.csv", "w", newline="") as f3:
writer = csv.writer(f3)
writer.writerow(failed)
Result:
If you need the images even behind the dead links and in original quality, and if your project is non-commercial, you can sign in, see "How do I get a copy of the images?" at the Download FAQ.
In the URL above, you see the wnid=n01698640 at the end of the URL which is the WordNet id that is mapped to ImageNet.
Or in the "Images of the Synset" tab, just click on "Wordnet IDs".
To get to:
or right-click -- save as:
You can use the WordNet id to get the original images.
If you are commercial, I would say contact the ImageNet team.
Add-on
Taking up the idea from a comment: if you do not want many images, but just the "one single class image" that represents the class as much as possible, have a look at Visualizing GoogLeNet Classes and try to use that method with the images of ImageNet instead. It uses the deepdream code as well.
Visualizing GoogLeNet Classes
July 2015
Ever wondered what a deep neural network thinks a Dalmatian should
look like? Well, wonder no more.
Recently Google published a post describing how they managed to use
deep neural networks to generate class visualizations and modify
images through the so called “inceptionism” method. They later
published the code to modify images via the inceptionism method
yourself, however, they didn’t publish code to generate the class
visualizations they show in the same post.
While I never figured out exactly how Google generated their class
visualizations, after butchering the deepdream code and this ipython
notebook from Kyle McDonald, I managed to coach GoogLeNet into drawing
these:
... [with many other example images to follow]

Accessing carray of pointcloud using pytables

I am having a hard time understanding how to access the data in a carray.
http://carray.pytables.org/docs/manual/index.html
I have a carray that I can view in a group structure using ViTables, but how to open it and retrieve the data is beyond me.
The data are a point cloud, three levels down, that I want to make a scatter plot of and extract as a .obj file.
I then have to loop through (many) clouds and do the same thing.
Is there anyone that can give me a simple example of how to do this?
This was my attempt:
import carray as ca
fileName = 'hdf5_example_db.h5'
a = ca.open(rootdir=fileName)
print a
I managed to solve my issue: I wasn't treating the carray any differently from the rest of the hierarchy. I needed to first load the entire db, then refer to the data I needed. I ended up not having to use carray, and just stuck to h5py:
from __future__ import print_function
import h5py
import numpy as np

# read the hdf5 format file
fileName = 'hdf5_example_db.h5'
f = h5py.File(fileName, 'r')

# full path of the carray-type data (which is in ply format)
dataspace = '/objects/object_000/object_model'

# view the data
print(f[dataspace])

# print to ply file
with open('object_000.ply', 'w') as fo:
    for line in f[dataspace]:
        fo.write(line + '\n')
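To loop through many clouds, one sketch (assuming every cloud sits under /objects/<name>/object_model like the example above) would be:
import h5py

with h5py.File('hdf5_example_db.h5', 'r') as f:
    for name, group in f['/objects'].items():
        model = group['object_model']        # the carray holding the ply data
        with open(name + '.ply', 'w') as fo:
            for line in model:
                if isinstance(line, bytes):  # h5py may hand back bytes in Python 3
                    line = line.decode()
                fo.write(line + '\n')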