I have a problem with my dataset classification - TensorFlow

I'm Latino so my English is not so good, but I'll try to explain my problem.
I have a dataset with 3 classes, but when I validate the trained model on other images, it only recognizes 1 class, even though I have 3. I followed the tutorials, and I think the problem is in the coco128.yml file when I customize it:
train: /content/dataset-train/images/train
val: /content/dataset-train/images/validation
test:  # test images (optional)

# Classes
nc: 3
names:
  0: pitbull
  1: husky
  2: sanbernando
This is my config, but it only recognizes the class "pitbull". Can someone help me?
train: /content/dataset-train/images/train
val: /content/dataset-train/images/validation
test:  # test images (optional)

# Classes
nc: 3
names: ["pitbull", "husky", "sanbernando"]
I tried this variant before as well, but it didn't work either. I don't know how to fix it.
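One thing worth checking (my suggestion, not from the post): if every line in your label files starts with class id 0, the model can only ever learn "pitbull". A minimal sketch to count the class ids across the training labels, assuming the usual YOLO layout where the labels mirror the images under a labels/ folder:

from collections import Counter
from pathlib import Path

# Hypothetical label directory matching the images/train path above.
label_dir = Path("/content/dataset-train/labels/train")

counts = Counter()
for label_file in label_dir.glob("*.txt"):
    for line in label_file.read_text().splitlines():
        if line.strip():
            counts[line.split()[0]] += 1  # the first token is the class id

print(counts)  # ids 0, 1 and 2 should all appear; only {'0': ...} would explain one class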

Related

How to set custom data classes on YOLOv6? My data classes don't work

Question: How can I set my custom classes on YOLOv6?
I have tried this:
https://github.com/meituan/YOLOv6/blob/main/docs/Train_custom_data.md
my data.yaml:
train: ./data/volleyball_07_03/img_label/images/train
val: ./data/volleyball_07_03/img_label/images/val
test: ./data/volleyball_07_03/img_label/images/val
is_coco: False
nc: 5  # number of classes
names: ['player', 'libero', 'umpire', 'volleyball', 'net']
this is my inference result:
https://github.com/pei-ci/photos/blob/main/test_1.jpg?raw=true
I don't have a class named "person", but it appears in the results.
My data.yaml works on YOLOv5!
I solved the problem. Add --yaml data/data.yaml to the inference command:
python tools/infer.py --weights output_dir/name/weights/best_ckpt.pt --source img.jpg --device 0 --yaml data/data.yaml

ValueError: Error when checking input: expected keras_layer_input to have 4 dimensions, but got array with shape (10, 1)

Before this gets marked as a duplicate: I already tried all of the similar questions, and most of them were never resolved; where they had an answer, it did not work for my problem. The original code has more than 10 samples.
Input: a list of model input np.arrays; sample_train_emb1 has length 2.
Problem: model.fit() raises ValueError: Error when checking input: expected keras_layer_input to have 4 dimensions, but got array with shape (10, 1)
Here is my plot_model image:
The model.fit() looks like this:
model.fit(
    sample_train_emb1,
    sample_y_train,
    validation_data=(sample_valid_emb1, sample_y_valid),
    epochs=epoch,
    batch_size=batch_size,
    verbose=1,
)
Thank you! Let me know if you need more details to help me solve this problem. There are many similar posts that remained unresolved, so I thought this would help anybody who faces the same problem in the future.
What I've tried so far:
Swapping the two features.
Converting the image feature into a TensorShape([Dimension(1), Dimension(224), Dimension(224), Dimension(3)]), based on a similar question's answer.
I eventually figured it out, using the answer from this post:
sample_train_emb1[1] = np.array([x for x in sample_train_emb1[1]])
Hope this helps anyone who runs into this in the future.
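For context, a sketch of why this works, under my assumption that each element of the offending input is itself a (224, 224, 3) image array (matching the TensorShape mentioned above):

import numpy as np

# A stand-in for the problematic input: an object array whose elements are
# separate (224, 224, 3) image arrays, so Keras sees 2 dimensions, not 4.
images = np.empty(10, dtype=object)
for i in range(10):
    images[i] = np.zeros((224, 224, 3), dtype=np.float32)

# Rebuilding the array stacks the per-sample arrays into one 4-D block.
stacked = np.array([x for x in images])
print(stacked.shape)  # (10, 224, 224, 3)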

How to handle punctuation and symbol in rasa?

Rasa version: 1.3.7
pipeline: "supervised_embeddings"
I have trained the bot with no punctuation in the intent examples, like:
intent: ask_holiday_in_a_year
How many holidays do we have in a year?
If I ask the bot the following questions:
1. How many holidays do we have in a year? (NLU is able to recognize it correctly.)
2. How, many ()? Holidays!!,do!##we have$%^ in a %^& year. (NLU is able to recognize it correctly.)
3. How many ###################### holidays do we have in a year? (NLU is not able to recognize it correctly.)
4. How many ####### holidays %%^&*$$% do we have in a year? (NLU is not able to recognize it correctly.)
Cases 1 and 2 work, but cases 3 and 4 don't. Is there any way (some setting in the pipeline, perhaps) I can handle these symbols and punctuation and get the expected result?
I'm not sure whether this is the correct approach, but you can try creating random data with a tool like Chatito and include symbols in your training data. Again, I'm not sure if this is correct or not.
First of all, Rasa recognizes examples based on what you give it: if you provide training examples like your sentences 3 and 4, Rasa will recognize them. Thinking outside the box, there can be endless variations like this, and there is no way Rasa can recognize them all, so you want to provide examples that are close to the questions users are likely to ask the bot.
This can be handled using a custom NLU component. In the third parameter of the maketrans function, add the symbols you want to delete. This custom pipeline component will remove all of the defined characters and pass the filtered text on to the NLU:
from rasa.nlu.components import Component
import typing
from typing import Any, Optional, Text, Dict

if typing.TYPE_CHECKING:
    from rasa.nlu.model import Metadata


class DeleteSymbols(Component):
    provides = ["text"]
    # requires = []
    defaults = {}
    language_list = None

    def __init__(self, component_config=None):
        super(DeleteSymbols, self).__init__(component_config)

    def train(self, training_data, cfg, **kwargs):
        pass

    def process(self, message, **kwargs):
        mt = message.text
        # the third argument of maketrans lists the characters to delete
        message.text = mt.translate(mt.maketrans('', '', '$%&(){}^'))

    def persist(self, file_name: Text, model_dir: Text) -> Optional[Dict[Text, Any]]:
        pass

    @classmethod
    def load(
        cls,
        meta: Dict[Text, Any],
        model_dir: Optional[Text] = None,
        model_metadata: Optional["Metadata"] = None,
        cached_component: Optional["Component"] = None,
        **kwargs: Any
    ) -> "Component":
        """Load this component from file."""
        if cached_component:
            return cached_component
        else:
            return cls(meta)
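As a quick sanity check of the translate/maketrans idea (a sketch of mine; '#' and '*' are added to the deletion set here because the failing examples above contain them, while the component above only strips '$%&(){}^'):

# str.maketrans('', '', chars) builds a table that deletes every character
# in chars; str.translate then applies that table.
text = "How many ###### holidays %%^&*$$% do we have in a year?"
print(text.translate(str.maketrans('', '', '#$%&(){}^*')))
# -> How many  holidays  do we have in a year?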
Add the component to the pipeline in config.yml:
# Configuration for Rasa NLU.
# https://rasa.com/docs/rasa/nlu/components/
language: en

pipeline:
  - name: "Pipelines.TextParsing.TextParsingPipeline"
  - name: "WhitespaceTokenizer"
  - name: "RegexFeaturizer"
  - name: "CRFEntityExtractor"
  - name: "EntitySynonymMapper"
  - name: "CountVectorsFeaturizer"
  - name: "EmbeddingIntentClassifier"
Source - https://forum.rasa.com/t/how-to-handle-punctuation-and-symbol-in-rasa/19454

TensorFlow Serving Error: 'StatelessIf has '_lower_using_switch_merge' attr set but it does not support lowering.'

When attempting to serve a new model coded using TensorFlow 2.0 with TensorFlow serving, I get the following error from my Docker container logs:
2019-09-03 08:56:24.984824: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: /models/model_modeFact/1567500955
2019-09-03 08:56:24.989902: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve }
2019-09-03 08:56:25.002593: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:311] SavedModel load for tags { serve }; Status: fail. Took 17772 microseconds.
2019-09-03 08:56:25.002658: E tensorflow_serving/util/retrier.cc:37] Loading servable: {name: model_modeFact version: 1567500955} failed: Internal: Node {{node zero_fraction/total_zero/zero_count/else/_1/zero_fraction/cond}} of type StatelessIf has '_lower_using_switch_merge' attr set but it does not support lowering.
Using the saved_model_cli, the model works fine and can make predictions.
Initially I was getting the error described in "TensorFlow Serving crossed columns strange error". I found that that error might be fixed by switching to tf-nightly-2.0-preview==2.0.0.dev20190819, but now I can't get my model to be served at all.
The only changes I made to the code to compile my model in TF2 are:
# Added this line to disable eager execution, necessary for tf.placeholder
tf.compat.v1.disable_eager_execution()

# For every usage of tf.estimator...
tf.compat.v1.estimator

# For every usage of tf.placeholder...
tf.compat.v1.placeholder
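Put together, a minimal sketch of how these compat shims sit at the top of a training script (the placeholder and estimator below are hypothetical stand-ins, not the original model code):

import tensorflow as tf

# Disable eager execution so tf.compat.v1.placeholder can be used.
tf.compat.v1.disable_eager_execution()

# v1 placeholder instead of tf.placeholder (hypothetical input).
x = tf.compat.v1.placeholder(tf.float32, shape=[None, 4], name="x")

# v1 estimator namespace instead of tf.estimator.
columns = [tf.compat.v1.feature_column.numeric_column("x", shape=[4])]
classifier = tf.compat.v1.estimator.DNNClassifier(
    hidden_units=[32],
    feature_columns=columns,
    n_classes=6,  # six classes, matching the saved_model_cli output below
)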
As in the previous problem, the goal is to get a prediction output from my served model, similar to what I see when I use saved_model_cli. Something like this:
Result for output key all_class_ids:
[[0 1 2 3 4 5]]
Result for output key all_classes:
[[b'0' b'1' b'2' b'3' b'4' b'5']]
Result for output key class_ids:
[[2]]
Result for output key classes:
[[b'2']]
Result for output key logits:
[[ 0.11128154 -0.44881764 0.31520572 -0.08318427 -0.3479367 -0.08883157]]
Result for output key probabilities:
[[0.19719791 0.11263006 0.2418051 0.16234797 0.12458517 0.16143374]]
Most probably this happens because you are using a TF2 Docker image; try the
tensorflow/serving:1.15.0-rc2
Docker image instead. I hope it fixes this problem.
Also try calling tf.compat.v1.disable_v2_behavior() when the app that saves your model starts.
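A minimal sketch of where that call would go in the script that builds and saves the model (the call itself is real TF API; everything else here is placeholder):

import tensorflow as tf

# Disable all TF2 behaviors (eager execution, V2 control flow) before any
# graph is built; the idea is that V1 control flow avoids emitting the
# StatelessIf node that the 1.x model server rejects.
tf.compat.v1.disable_v2_behavior()

# ... build the tf.compat.v1.estimator model and export it as before ...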

How to get chosen class images from Imagenet?

Background
I have been playing around with Deep Dream and Inceptionism, using the Caffe framework to visualize layers of GoogLeNet, an architecture built for the Imagenet project, a large visual database designed for use in visual object recognition.
You can find Imagenet here: Imagenet 1000 Classes.
To probe into the architecture and generate 'dreams', I am using three notebooks:
https://github.com/google/deepdream/blob/master/dream.ipynb
https://github.com/kylemcdonald/deepdream/blob/master/dream.ipynb
https://github.com/auduno/deepdraw/blob/master/deepdraw.ipynb
The basic idea here is to extract some features from each channel in a specified layer from the model or a 'guide' image.
Then we input an image we wish to modify into the model and extract the features in the same layer specified (for each octave),
enhancing the best matching features, i.e., the largest dot product of the two feature vectors.
So far I've managed to modify input images and control dreams using the following approaches:
(a) applying layers as 'end' objectives for the input image optimization. (see Feature Visualization)
(b) using a second image to guide the optimization objective on the input image.
(c) visualizing GoogLeNet model classes generated from noise.
However, the effect I want to achieve sits in-between these techniques, of which I haven't found any documentation, paper, or code.
Desired result (not part of the question to be answered)
To have one single class or unit belonging to a given 'end' layer (a) guide the optimization objective (b) and have this class visualized (c) on the input image:
An example where class = 'face' and input_image = 'clouds.jpg':
Please note: the example image was generated using a model for face recognition, which was not trained on the ImageNet dataset. It is for demonstration purposes only.
Working code
Approach (a)
from cStringIO import StringIO
import numpy as np
import scipy.ndimage as nd
import PIL.Image
from IPython.display import clear_output, Image, display
from google.protobuf import text_format
import matplotlib.pyplot as plt
import caffe

model_name = 'GoogLeNet'
model_path = 'models/dream/bvlc_googlenet/'  # substitute your path here
net_fn = model_path + 'deploy.prototxt'
param_fn = model_path + 'bvlc_googlenet.caffemodel'

model = caffe.io.caffe_pb2.NetParameter()
text_format.Merge(open(net_fn).read(), model)
model.force_backward = True
open('models/dream/bvlc_googlenet/tmp.prototxt', 'w').write(str(model))

net = caffe.Classifier('models/dream/bvlc_googlenet/tmp.prototxt', param_fn,
                       mean=np.float32([104.0, 116.0, 122.0]),  # ImageNet mean, training set dependent
                       channel_swap=(2, 1, 0))  # the reference model has channels in BGR order instead of RGB

def showarray(a, fmt='jpeg'):
    a = np.uint8(np.clip(a, 0, 255))
    f = StringIO()
    PIL.Image.fromarray(a).save(f, fmt)
    display(Image(data=f.getvalue()))

# a couple of utility functions for converting to and from Caffe's input image layout
def preprocess(net, img):
    return np.float32(np.rollaxis(img, 2)[::-1]) - net.transformer.mean['data']

def deprocess(net, img):
    return np.dstack((img + net.transformer.mean['data'])[::-1])

def objective_L2(dst):
    dst.diff[:] = dst.data

def make_step(net, step_size=1.5, end='inception_4c/output',
              jitter=32, clip=True, objective=objective_L2):
    '''Basic gradient ascent step.'''
    src = net.blobs['data']  # input image is stored in Net's 'data' blob
    dst = net.blobs[end]

    ox, oy = np.random.randint(-jitter, jitter+1, 2)
    src.data[0] = np.roll(np.roll(src.data[0], ox, -1), oy, -2)  # apply jitter shift

    net.forward(end=end)
    objective(dst)  # specify the optimization objective
    net.backward(start=end)
    g = src.diff[0]
    # apply normalized ascent step to the input image
    src.data[:] += step_size / np.abs(g).mean() * g

    src.data[0] = np.roll(np.roll(src.data[0], -ox, -1), -oy, -2)  # unshift image

    if clip:
        bias = net.transformer.mean['data']
        src.data[:] = np.clip(src.data, -bias, 255-bias)

def deepdream(net, base_img, iter_n=20, octave_n=4, octave_scale=1.4,
              end='inception_4c/output', clip=True, **step_params):
    # prepare base images for all octaves
    octaves = [preprocess(net, base_img)]
    for i in xrange(octave_n-1):
        octaves.append(nd.zoom(octaves[-1], (1, 1.0/octave_scale, 1.0/octave_scale), order=1))

    src = net.blobs['data']
    detail = np.zeros_like(octaves[-1])  # allocate image for network-produced details
    for octave, octave_base in enumerate(octaves[::-1]):
        h, w = octave_base.shape[-2:]
        if octave > 0:
            # upscale details from the previous octave
            h1, w1 = detail.shape[-2:]
            detail = nd.zoom(detail, (1, 1.0*h/h1, 1.0*w/w1), order=1)

        src.reshape(1, 3, h, w)  # resize the network's input image size
        src.data[0] = octave_base + detail
        for i in xrange(iter_n):
            make_step(net, end=end, clip=clip, **step_params)

            # visualization
            vis = deprocess(net, src.data[0])
            if not clip:  # adjust image contrast if clipping is disabled
                vis = vis * (255.0 / np.percentile(vis, 99.98))
            showarray(vis)
            print octave, i, end, vis.shape
            clear_output(wait=True)

        # extract details produced on the current octave
        detail = src.data[0] - octave_base

    # returning the resulting image
    return deprocess(net, src.data[0])
I run the code above with:
end = 'inception_4c/output'
img = np.float32(PIL.Image.open('clouds.jpg'))
_=deepdream(net, img)
Approach (b)
"""
Use one single image to guide
the optimization process.
This affects the style of generated images
without using a different training set.
"""
def dream_control_by_image(optimization_objective, end):
# this image will shape input img
guide = np.float32(PIL.Image.open(optimization_objective))
showarray(guide)
h, w = guide.shape[:2]
src, dst = net.blobs['data'], net.blobs[end]
src.reshape(1,3,h,w)
src.data[0] = preprocess(net, guide)
net.forward(end=end)
guide_features = dst.data[0].copy()
def objective_guide(dst):
x = dst.data[0].copy()
y = guide_features
ch = x.shape[0]
x = x.reshape(ch,-1)
y = y.reshape(ch,-1)
A = x.T.dot(y) # compute the matrix of dot-products with guide features
dst.diff[0].reshape(ch,-1)[:] = y[:,A.argmax(1)] # select ones that match best
_=deepdream(net, img, end=end, objective=objective_guide)
and I run the code above with:
end = 'inception_4c/output'
# image to be modified
img = np.float32(PIL.Image.open('img/clouds.jpg'))
guide_image = 'img/guide.jpg'
dream_control_by_image(guide_image, end)
Question
Now for the failed approach: how I tried to access individual classes, one-hot encoding the matrix of classes and focusing on one (so far to no avail):
def objective_class(dst, class_idx=50):  # 'class' is a reserved word in Python, so renamed
    # according to imagenet classes
    # 50: 'American alligator, Alligator mississipiensis'
    one_hot = np.zeros_like(dst.data)
    one_hot.flat[class_idx] = 1.
    dst.diff[:] = one_hot.flat[class_idx]
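For comparison, the deepdraw notebook linked above drives the optimization with a one-hot gradient on the classifier layer. A minimal sketch, assuming end='loss3/classifier' (the GoogLeNet layer whose 1000 channels map to the ImageNet classes) and the renamed class_idx argument:

def objective_class_onehot(dst, class_idx=50):
    # 50: 'American alligator, Alligator mississipiensis'
    # Zero the gradient everywhere, then set a 1 at the chosen class
    # channel, so gradient ascent maximizes only that class's activation.
    one_hot = np.zeros_like(dst.data)
    one_hot.flat[class_idx] = 1.
    dst.diff[:] = one_hot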
To make this clear: the question is not about the dream code, which is interesting background and already working; it is only about this last paragraph's question. Could someone please guide me on how to get images of a chosen class (take class #50: 'American alligator, Alligator mississipiensis') from ImageNet, so that I can use them as input, together with the cloud image, to create a dream image?
The question is how to get images of the chosen class #50: 'American alligator, Alligator mississipiensis' from ImageNet.
Go to image-net.org.
Go to "Download".
Follow the instructions for "Download Image URLs":
How to download the URLs of a synset from your Browser?
1. Type a query in the Search box and click the "Search" button
The alligator is not shown: ImageNet is under maintenance, and only ILSVRC synsets are included in the search results. No problem; we are fine with the similar animal "alligator lizard", since this search is only about getting to the right branch of the WordNet treemap. I do not know whether you would get the direct ImageNet images here even if there were no maintenance.
2. Open a synset page
Scrolling down the synset page, we search for the American alligator, which happens to be a saurian diapsid reptile as well, as a near neighbour:
3. You will find the "Download URLs" button at the bottom-left corner of the image browsing window.
You will get all of the URLs for the chosen class; a text file pops up in the browser:
http://image-net.org/api/text/imagenet.synset.geturls?wnid=n01698640
We see here that it is just a matter of knowing the right WordNet ID to put at the end of the URL.
Manual image download
The text file looks as follows:
http://farm1.static.flickr.com/136/326907154_d975d0c944.jpg
http://weeksbay.org/photo_gallery/reptiles/American20Alligator.jpg
...
and so on, until image number 1261.
As an example, the first URL links to a working image, while the second is a dead link. The third link is dead too, but the fourth works.
The images of these URLs are publicly available, but many links are dead, and the pictures are of lower resolution.
Automated image download
From the ImageNet guide again:
How to download by HTTP protocol? To download a synset by HTTP request, you need to obtain the "WordNet ID" (wnid) of a synset first. When you use the explorer to browse a synset, you can find the WordNet ID below the image window. (Click Here and search "Synset WordNet ID" to find out the wnid of the "Dog, domestic dog, Canis familiaris" synset.) To learn more about the "WordNet ID", please refer to: Mapping between ImageNet and WordNet.
Given the wnid of a synset, the URLs of its images can be obtained at:
http://www.image-net.org/api/text/imagenet.synset.geturls?wnid=[wnid]
You can also get the hyponym synsets given a wnid; please refer to the API documentation to learn more.
So what is in that API documentation? Everything needed to get all of the WordNet IDs (so-called "synset IDs") and their words for all synsets; that is, it has every class name and its WordNet ID at hand, for free:
Obtain the words of a synset
Given the wnid of a synset, the words of the synset can be obtained at:
http://www.image-net.org/api/text/wordnet.synset.getwords?wnid=[wnid]
You can also Click Here to download the mapping between WordNet ID and words for all synsets, or Click Here to download the mapping between WordNet ID and glosses for all synsets.
If you know the WordNet IDs of choice and their class names, you can use nltk.corpus.wordnet from "nltk" (the Natural Language Toolkit); see the WordNet interface.
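A minimal sketch of that lookup (assuming the nltk wordnet corpus is installed; the "n" + zero-padded-8-digit-offset convention is the standard mapping from WordNet noun synsets to ImageNet wnids):

from nltk.corpus import wordnet as wn

# Look up the synset for the American alligator and derive its wnid:
# "n" followed by the zero-padded 8-digit WordNet offset.
synset = wn.synsets("American_alligator")[0]
wnid = "n{:08d}".format(synset.offset())
print(synset.name(), wnid)  # expected: american_alligator.n.01 n01698640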
In our case, we just need the images of class #50: 'American alligator, Alligator mississipiensis'. We already know what we need, so we can leave nltk.corpus.wordnet aside (see tutorials or Stack Exchange questions for more). We can automate the download of all alligator images by looping through the URLs that are still alive. We could also widen this to the full WordNet with a loop over all WordNet IDs, though that would take far too much time for the whole treemap, and it is also not recommended, since the images will stop being there if thousands of people download them daily.
I am afraid I will not take the time to write Python code that accepts the ImageNet class number "#50" as the argument, though that should be possible as well, using mapping tables from WordNet to ImageNet; the class name and WordNet ID should be enough.
For a single WordNet ID, the code could be as follows:
import urllib.request
import csv

wnid = "n01698640"
url = "http://image-net.org/api/text/imagenet.synset.geturls?wnid=" + str(wnid)

# From https://stackoverflow.com/a/45358832/6064933
req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
with open(wnid + ".csv", "wb") as f:
    with urllib.request.urlopen(req) as r:
        f.write(r.read())

failed = []  # kept outside the loop so failures accumulate across URLs
with open(wnid + ".csv", "r") as f:
    counter = 1
    for line in f.readlines():
        line = line.strip("\n")
        print(line)
        try:
            with urllib.request.urlopen(line) as r2:
                with open(f'''{wnid}_{counter:05}.jpg''', "wb") as f2:
                    f2.write(r2.read())
        except Exception:
            failed.append(f'''{counter:05}, {line}''')
        counter += 1
        if counter == 10:  # stop after the first nine URLs for this demo
            break

with open(wnid + "_failed.csv", "w", newline="") as f3:
    writer = csv.writer(f3)
    writer.writerow(failed)
Result: the script saves the first few images to disk (the loop above stops after nine URLs) and records the failed downloads in n01698640_failed.csv.
If you need the images even behind the dead links and in original quality, and if your project is non-commercial, you can sign in; see "How do I get a copy of the images?" in the Download FAQ.
In the URL above, you see wnid=n01698640 at the end; this is the WordNet ID that is mapped to ImageNet.
Or, in the "Images of the Synset" tab, just click on "Wordnet IDs" to get to the synset's page, where you can right-click an image and save it. You can use the WordNet ID to get the original images.
If you are a commercial user, I would say contact the ImageNet team.
Add-on
Taking up the idea from a comment: if you do not want many images, but just one single image that represents the class as well as possible, have a look at Visualizing GoogLeNet Classes and try to use this method with the images of ImageNet instead. It builds on the deepdream code as well.
Visualizing GoogLeNet Classes
July 2015
Ever wondered what a deep neural network thinks a Dalmatian should
look like? Well, wonder no more.
Recently Google published a post describing how they managed to use deep neural networks to generate class visualizations and modify images through the so-called "inceptionism" method. They later published the code to modify images via the inceptionism method yourself; however, they didn't publish code to generate the class visualizations they show in the same post.
While I never figured out exactly how Google generated their class visualizations, after butchering the deepdream code and this ipython notebook from Kyle McDonald, I managed to coach GoogLeNet into drawing these:
... [with many other example images to follow]