How does one create a custom dataset of images with masks for image segmentation? (specifically for TensorFlow)

Every tutorial I find involves using a pre-made dataset, but the project I'm trying to do is image segmentation on pictures of playing cards. The dataset will be one I create, but I'm finding little to no resources about creating the dataset and the needed image masks. Any help would be great!

I use Gimp (https://www.gimp.org/) with layers. You can use several useful tools, such as "Bucket Fill", to quickly color a region. Then you just have to export the layers to a new file to obtain the mask. VGG Image Annotator is also useful (https://www.robots.ox.ac.uk/~vgg/software/via/via-1.0.6.html).
For 3D images you can use VTK and ITK-SNAP (http://www.itksnap.org/pmwiki/pmwiki.php) for volume identification, visualization, and exporting. MIPAV (https://mipav.cit.nih.gov/) is also useful.
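Once you have the exported image/mask pairs, loading them into TensorFlow could look like this (a minimal sketch, assuming matching PNG filenames under hypothetical images/ and masks/ directories):

    import tensorflow as tf

    # A minimal sketch: RGB PNG images under "images/" and single-channel
    # PNG masks under "masks/" with matching filenames (placeholder paths).
    def load_pair(image_path, mask_path):
        image = tf.io.decode_png(tf.io.read_file(image_path), channels=3)
        mask = tf.io.decode_png(tf.io.read_file(mask_path), channels=1)
        image = tf.image.resize(image, (128, 128)) / 255.0
        # Nearest-neighbour keeps mask values as valid class indices.
        mask = tf.image.resize(mask, (128, 128), method="nearest")
        return image, mask

    image_paths = sorted(tf.io.gfile.glob("images/*.png"))
    mask_paths = sorted(tf.io.gfile.glob("masks/*.png"))
    dataset = (tf.data.Dataset.from_tensor_slices((image_paths, mask_paths))
               .map(load_pair, num_parallel_calls=tf.data.AUTOTUNE)
               .batch(8)
               .prefetch(tf.data.AUTOTUNE))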

VGG Image Annotator (VIA); here is a quick demo. There is also labelme.

Related

How to pick the right images for an object detection model for only 1 tag?

Use case: I'm trying to extract certain parts of a screenshot taken from a game (with a TF object detection model) and then extract the text within that part (using a custom model for the font used in the game).
I have trained a custom model based on SSD MobileNet V2, and the object detection works okish, but sometimes the bounding box is off. I googled selecting the right images and the right amount for training a custom model, but I couldn't find a good hint pointing in the right direction.
I'm trying to extract the following (surrounded by red in the screenshot):
The environment can change:
Resolution of the game can be different (1920x1080, WQHD, etc.)
Text in the box is not always the same
I have trained with 120 self-made images (1920x1080) (90% for training, 10% for testing); all of these images were screenshots of the game, and as I mentioned the results are okish. Sometimes the detected area is off (cutting off the content of the box or including a lot of the box's surroundings).
Maybe someone can help me by answering the following questions:
Could a bigger training dataset increase the accuracy?
Should I also take different resolutions into account when creating the training data (see the rescaling sketch below)?
Would it make sense to feed only the boxes, without the rest of the game screenshot, into the training? Or should I mix screenshots of the whole game and box-only screenshots?
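For the resolution point, here is the kind of rescaling augmentation I have in mind (a rough sketch, assuming TF 2.x and the normalized [0, 1] box coordinates the TF Object Detection API uses):

    import tensorflow as tf

    # Rough sketch: randomly rescale a screenshot to simulate different
    # game resolutions. With normalized [0, 1] box coordinates, the
    # boxes stay valid unchanged.
    def random_rescale(image):
        scale = tf.random.uniform([], 0.7, 1.3)
        height = tf.cast(tf.cast(tf.shape(image)[0], tf.float32) * scale, tf.int32)
        width = tf.cast(tf.cast(tf.shape(image)[1], tf.float32) * scale, tf.int32)
        return tf.image.resize(image, (height, width))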
Thank you in advance! :)

How to get labels for ILSVRC2012 Classification Task

The ILSVRC 2012 small classification dataset is not separated into folders and doesn't have a labels file. How do I get the labels for the training set?
I tried the nonpub downloads page, but it does not exist anymore, and I tried going by the filenames, but they don't have the synset ID in them.
I've been having the same issue today, following this tutorial on reproducing the ImageNet validation results. I think I've found an answer, even if a partial one.
In the article they point to this link to get the validation set for object detection. I downloaded it and had the same issue as you: it only contains images, without labels. What I found is that the same website has this other link for the bounding boxes. I downloaded it, and along with the bboxes it comes with the proper class for each image.
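For example, each annotation file can be read like this (a small sketch; ImageNet's bounding-box annotations use the PASCAL VOC XML layout, where the name field of each object holds the synset ID):

    import xml.etree.ElementTree as ET

    # Sketch: extract the synset IDs (class labels) from one
    # PASCAL-VOC-style annotation file.
    def synsets_from_annotation(xml_path):
        root = ET.parse(xml_path).getroot()
        return [obj.find("name").text for obj in root.findall("object")]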
Hope this helps!

Extract text from an image using TensorFlow

Apart from image classification and other cool applications, is there any way we can extract text from images using TensorFlow? The image can be in any format, or a PDF.
With TensorFlow you would have to train a model to detect digital or handwritten characters. The better way would be to use OpenCV and pytesseract.
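For example (a minimal sketch; "page.png" is a placeholder input, and Tesseract itself must be installed separately):

    import cv2
    import pytesseract

    # Binarize with OpenCV, then run OCR with pytesseract.
    image = cv2.imread("page.png")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    print(pytesseract.image_to_string(binary))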

How to display filters in TensorBoard

I have a simple MNIST model from the TensorFlow tutorial. I want to see how the first convolutional layer's filters change over time. When I use tf.summary.image, only one of the steps is displayed and the rest are ignored. Is there any way to work around this?
TensorBoard does not support videos, but you can generate an image at each step, save them in some directory, and then create a video from them.
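For example (a sketch, assuming a Keras model whose first layer is a Conv2D named "conv1", a hypothetical name):

    import tensorflow as tf

    # Save the first conv layer's kernels as one PNG per training step.
    def save_filter_image(model, step):
        k = model.get_layer("conv1").kernel              # (kh, kw, in_ch, out_ch)
        # Normalize kernel values to [0, 1] for display.
        k = (k - tf.reduce_min(k)) / (tf.reduce_max(k) - tf.reduce_min(k))
        # One tile per filter (first input channel only), laid side by side.
        tiles = tf.transpose(k[:, :, :1, :], [3, 0, 1, 2])
        strip = tf.concat(tf.unstack(tiles, axis=0), axis=1)
        png = tf.io.encode_png(tf.cast(strip * 255, tf.uint8))
        tf.io.gfile.makedirs("filters")
        tf.io.write_file(f"filters/step_{step:05d}.png", png)

The saved PNGs can then be stitched into a video, e.g. with ffmpeg -i filters/step_%05d.png filters.mp4.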

Correct way to export Ogre from Blender into jMonkeyEngine?

I'm learning jMonkeyEngine, and I'm still at about the same stage as in this question, where I asked about loading models:
Enabling materials and textures for OGre 3D model in jmonkeyengine?
Now I've looked more at Blender, and at least I can get the basic use case to work: export to Ogre 3D from Blender and then load the model in jMonkeyEngine. But for more advanced models with textures, it won't work.
I'm trying to load an Ogre 3D model into jMonkeyEngine, but I think the conversion to the Ogre format is not working. I can open the model in Blender, but when I try to export it, all I get is a .scene file and no .mesh.xml.
Could you tell me what I'm doing wrong?
For instance, opening this model in Blender and exporting it to Ogre doesn't work for me.
For what it's worth: after hours of trying to figure out why JME wouldn't locate the materials, especially when using submeshes, it turned out you need to rename your material file to .material. If you use more than one material, append all material files to that one file.
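The appending step can be scripted, e.g. (a small sketch; the export/ directory and model.material name are placeholders):

    from pathlib import Path

    # Merge all exported .material files into a single file.
    parts = sorted(Path("export").glob("*.material"))
    merged = "\n".join(p.read_text() for p in parts)
    Path("model.material").write_text(merged)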