Can anyone help me generate new images using a GAN? - google-colaboratory

I've been trying to generate 255x255 images using a GAN. I am able to train on and generate 28x28 MNIST images, but I cannot generate my custom images: I've tried changing many parameters in the generator and discriminator networks, yet all I get is a blank or dark image. Please help me find a proper solution, as my semester deadline is approaching.
[Image: the output I'm getting]
[Image: the output I'm expecting]
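One frequent cause of all-black or blank GAN output is a mismatch between the generator's output activation and the normalization of the training images (e.g. a tanh output while the real images stay in [0, 255]). Also note that a stack of stride-2 transposed convolutions naturally produces power-of-two sizes such as 256x256 rather than 255x255. A minimal Keras sketch under those assumptions (the layer sizes are illustrative, not your actual network):

import tensorflow as tf
from tensorflow.keras import layers

# Scale real images to [-1, 1] so they match a tanh generator output:
# real_images = (real_images.astype('float32') - 127.5) / 127.5

generator = tf.keras.Sequential([
    layers.Dense(8 * 8 * 256, input_shape=(100,)),  # latent vector -> feature map
    layers.Reshape((8, 8, 256)),
    layers.Conv2DTranspose(128, 4, strides=2, padding='same', activation='relu'),  # 16x16
    layers.Conv2DTranspose(64, 4, strides=2, padding='same', activation='relu'),   # 32x32
    layers.Conv2DTranspose(32, 4, strides=2, padding='same', activation='relu'),   # 64x64
    layers.Conv2DTranspose(16, 4, strides=2, padding='same', activation='relu'),   # 128x128
    layers.Conv2DTranspose(3, 4, strides=2, padding='same', activation='tanh'),    # 256x256, RGB
])

# When visualizing, map [-1, 1] back to [0, 1], otherwise the plot looks dark:
# plt.imshow((generated_image + 1) / 2)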

Related

How to add hard negative examples in the TensorFlow Object Detection API

I am training a pre-trained SSD on my custom dataset, and it works fine on test images. But when a new image with no objects comes in (a hard negative), the model produces false positives. I looked for a way to add these hard negative examples to training but could not find an exact procedure. Could someone help me out here?
How do I add hard negative examples to training?
Do I have to create XML files/bounding boxes for the hard negative images?
How do I create tf records for these hard negative images? (see the sketch after these questions)
Do I have to edit the code files or the config file to generate the tf records?
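With the TensorFlow Object Detection API, one commonly used approach is to include negative images as training examples whose box and class lists are simply empty, so no XML annotation is needed for them. A minimal sketch of building such a tf.train.Example (the path and sizes are hypothetical; the feature keys follow the API's standard tf record format):

import tensorflow as tf

def negative_tf_example(image_path, height, width):
    # A hard negative carries the encoded image but empty box/class lists.
    with tf.io.gfile.GFile(image_path, 'rb') as f:
        encoded_jpg = f.read()
    def bytes_feat(values):
        return tf.train.Feature(bytes_list=tf.train.BytesList(value=values))
    def float_feat(values):
        return tf.train.Feature(float_list=tf.train.FloatList(value=values))
    def int64_feat(values):
        return tf.train.Feature(int64_list=tf.train.Int64List(value=values))
    feature = {
        'image/encoded': bytes_feat([encoded_jpg]),
        'image/format': bytes_feat([b'jpeg']),
        'image/height': int64_feat([height]),
        'image/width': int64_feat([width]),
        # Empty lists: no ground-truth objects in this image.
        'image/object/bbox/xmin': float_feat([]),
        'image/object/bbox/xmax': float_feat([]),
        'image/object/bbox/ymin': float_feat([]),
        'image/object/bbox/ymax': float_feat([]),
        'image/object/class/label': int64_feat([]),
        'image/object/class/text': bytes_feat([]),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))

Write these examples into the same tf record file as your annotated images; typically no config change is needed for them to act as pure background.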

Extract 2D-image patches for training image and mask used in Semantic Segmentation

I was wondering what the best approach to this problem would be. I have a 6000x6000 image with a 6000x6000 mask. I want to crop the image into several sub-images before training, and I came across extract_patches_2d in scikit-learn. It looks like the right tool for the job, but I have one issue: if I run it on a single image, how can I be sure it will use the same patch locations for the image mask as well?
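extract_patches_2d draws its patch locations from the random_state you pass, so calling it twice with the same seed on same-sized inputs yields patches at identical positions. A minimal sketch (the array sizes and patch count are arbitrary; the function lives in sklearn.feature_extraction.image):

import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d

# Stand-ins for the real 6000x6000 image and mask (kept small here).
image = np.random.rand(600, 600, 3)
mask = np.random.randint(0, 2, (600, 600))

# Same random_state -> same sampled top-left corners, so patch i of the
# image is spatially aligned with patch i of the mask.
seed = 42
img_patches = extract_patches_2d(image, (64, 64), max_patches=100, random_state=seed)
mask_patches = extract_patches_2d(mask, (64, 64), max_patches=100, random_state=seed)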

Object detection using CNTK

I am very new to CNTK.
I want to train on a set of images (to detect objects like alcohol glasses/bottles) using CNTK with ResNet/Fast R-CNN.
I am trying to follow the documentation below from GitHub; however, it does not appear to be a straightforward procedure. https://github.com/Microsoft/CNTK/wiki/Object-Detection-using-Fast-R-CNN
I cannot find proper documentation on generating ROIs for images of different sizes and shapes, or on creating object labels based on the trained models. Can someone point me to proper documentation or a training link I can use to work with the CNTK model? Please see the attached image, in which I was able to load a sample image with default ROIs in the script. How do I properly set the size and label the object in the image? Thanks in advance!
[Image: sample image loaded for training]
Not sure what you mean by proper documentation. This is an implementation of the Fast R-CNN paper (https://arxiv.org/pdf/1504.08083.pdf). It looks like you are trying to generate ROIs; look through the helper functions documented at the site to find what you need:
To run the toy example, make sure that in PARAMETERS.py the datasetName is set to "grocery".
Run A1_GenerateInputROIs.py to generate the input ROIs for training and testing.
Run A2_RunCntk_py3.py to train a Fast R-CNN model using the CNTK Python API and compute test results.
The algorithm works on several candidate regions and then generates two outputs: one for the classes of the objects and another for the bounding boxes of the objects belonging to those classes. Please refer to the code for the details of the implementation.
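As a rough illustration of those two outputs (the shapes are hypothetical and plain NumPy is used only to show the structure, not the CNTK API):

import numpy as np

num_rois = 100       # candidate regions proposed for one image
num_classes = 17     # e.g. 16 object classes + background

# Head 1: one class-score vector per ROI.
cls_scores = np.random.rand(num_rois, num_classes)
# Head 2: four bounding-box refinement values per class per ROI.
bbox_deltas = np.random.rand(num_rois, num_classes * 4)

# The predicted label of each ROI is the argmax over its class scores;
# the matching four deltas refine that ROI's box.
labels = cls_scores.argmax(axis=1)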
Can someone point me to proper documentation or a training link I can use to work with the CNTK model?
You can take a look at my repository on GitHub.
It will guide you through all the steps required to train your own model for object detection and classification with CNTK.
But in short the proper steps should look something like this:
Setup environment
Prepare data
Tag images (ground truth)
Download a pretrained model and create mappings for your custom dataset (see the sketch after this list)
Run training
Evaluate the model on a test set
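For the mapping step, this usually means a plain-text file that maps class names to integer ids. A hedged sketch (the file name, tab-separated format, and class names are all assumptions; verify them against the repository you are following):

# Hypothetical class-map writer; detection scripts typically expect a
# background class plus one line per object class.
classes = ['__background__', 'bottle', 'glass']
with open('class_map.txt', 'w') as f:
    for idx, name in enumerate(classes):
        f.write(f'{name}\t{idx}\n')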

How to train a set of images (apart from the tutorial) and use it to classify in TensorFlow?

I am using the classify_image.py script from the TensorPy GitHub repo (built by Michael Mintz) (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/image/imagenet/classify_image.py) to easily handle image classification of individual or multiple images directly from web pages. However, I want to train the system on my own set of images and then run the classifier. Could anyone please point me in the right direction for training a new model, and how do I use it to classify a set of bulk images?
Thank you.
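One common route is transfer learning with tf.keras: freeze a pretrained backbone and train only a small classification head on your own images (the older retrain.py script in the TensorFlow repo served the same purpose). A minimal sketch, assuming a data/train/<class_name>/*.jpg folder layout and a MobileNetV2 backbone (both are assumptions):

import tensorflow as tf

# Labels are inferred from the sub-folder names.
train_ds = tf.keras.utils.image_dataset_from_directory(
    'data/train', image_size=(224, 224), batch_size=32)
num_classes = len(train_ds.class_names)

# Pretrained backbone with its classifier head removed, weights frozen.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights='imagenet')
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1. / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_ds, epochs=5)

# Classify a batch of new images the same way:
# test_ds = tf.keras.utils.image_dataset_from_directory('data/test', image_size=(224, 224))
# predictions = model.predict(test_ds)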

Tensorflow Serving with image input

I'm trying to send image input over HTTP for classification with TensorFlow. I have looked in detail at the C++ code for https://www.tensorflow.org/versions/r0.9/tutorials/image_recognition/index.html
I have implemented the inception-v3 example model using the C++ API. It takes image input in the following form:
bazel-bin/tensorflow/examples/label_image/label_image --image=my_image.png
However, I want to add the case of:
bazel-bin/tensorflow/examples/label_image/label_image --image=http://www.somewebsite.com/my_image.png
This is because the example only accepts local image files. I want to add the ability to take a file pointer to an online image and classify it in memory. I'm currently working on this, but so far no luck. Can anyone offer some insight into how I would go about implementing this?
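The general flow is: fetch the raw bytes with an HTTP client, then hand them to the decode op instead of reading from disk; in the C++ example that would mean replacing the ReadFile step (e.g. with bytes fetched via libcurl) before DecodeJpeg/DecodePng. Here is the same idea sketched with the Python API purely for illustration (the URL and input size are assumptions):

import urllib.request
import tensorflow as tf

url = 'http://www.somewebsite.com/my_image.png'
image_bytes = urllib.request.urlopen(url).read()  # raw bytes, never written to disk

# Decode and preprocess entirely in memory.
image = tf.io.decode_image(image_bytes, channels=3, expand_animations=False)
image = tf.image.resize(image, (299, 299))        # inception-v3 input size
image = tf.expand_dims(image / 255.0, axis=0)     # add batch dimension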