Project topic for neural network for freshers? [closed]

I want to start working on a neural network for my final project. I'm looking for a topic that can be completed in 2-3 months of work and is approachable for a fresher, as I am new to this area and want to learn by doing the project. It should not be too tough to understand and start working on.

You could write a simple OCR using a Hopfield neural network.
A good start would be:
A comparative study of neural network algorithms applied to optical character recognition
Hopfield Networks: A Simple OCR Application
It is a relatively simple, fun project.
It would be even easier if you could use Matlab and some of its modules. But even if you were to implement it in Java or a similar language, I think it should be doable in 3 months for a beginner.
In Matlab, you could start with the following:
Hopfield Neural Network
Hopfield Two Neuron Design
You will need the Neural Network Toolbox, which I think has to be purchased separately.
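If you would rather prototype without Matlab, here is a minimal sketch of the Hopfield idea in Python/NumPy (not taken from the linked articles; the 5x5 "T" glyph, the network size, and the update schedule are just illustrative choices). It stores binary glyphs with the Hebbian rule and recalls them from noisy input, which is essentially what a Hopfield OCR does with letter bitmaps.

```python
import numpy as np

class HopfieldNetwork:
    """Minimal Hopfield network trained with the Hebbian rule."""

    def __init__(self, n_units):
        self.n_units = n_units
        self.weights = np.zeros((n_units, n_units))

    def train(self, patterns):
        # patterns: shape (n_patterns, n_units), values in {-1, +1}
        for p in patterns:
            self.weights += np.outer(p, p)
        np.fill_diagonal(self.weights, 0)
        self.weights /= len(patterns)

    def recall(self, pattern, steps=10):
        state = pattern.copy()
        for _ in range(steps):
            # asynchronous updates in random order
            for i in np.random.permutation(self.n_units):
                state[i] = 1 if self.weights[i] @ state >= 0 else -1
        return state

# Example: store a 5x5 "T" glyph and recover it from a noisy copy.
T = np.array([
    [ 1,  1,  1,  1,  1],
    [-1, -1,  1, -1, -1],
    [-1, -1,  1, -1, -1],
    [-1, -1,  1, -1, -1],
    [-1, -1,  1, -1, -1],
]).flatten()

net = HopfieldNetwork(25)
net.train(np.array([T]))

noisy = T.copy()
noisy[np.random.choice(25, 3, replace=False)] *= -1  # flip 3 pixels
print(np.array_equal(net.recall(noisy), T))  # usually True
```

For OCR you would store one such bitmap per character and recall whichever stored pattern a scanned, noisy glyph converges to.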

Your first question should be what and not how you want to classify. Depending on the problem, you can choose a fitting classifier. It's hard for you to decide the detailed solution before knowing the actual problem.
Simple topics (depending on your personal background) can be text, audio or image analysis. OCR is quite typical (you can use the MNIST database for that, it's well researched so you can compare your own results). To get a general idea of what applications are out there, you should also definitely have a look at the UCI database. It has all sorts of data.
The easiest type of neural network to understand and implement is a single-layer perceptron. To also classify non-linearly separable data (which is needed in most real-world scenarios), you can use a multi-layer perceptron with 3 layers (input/hidden/output).
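For a concrete picture of the single-layer perceptron, here is a from-scratch sketch in Python/NumPy (the AND toy data, learning rate, and epoch count are arbitrary illustration choices, not part of any referenced tutorial):

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Single-layer perceptron with a bias term; y must be in {0, 1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            # perceptron learning rule: update only on mistakes
            w += lr * (yi - pred) * xi
            b += lr * (yi - pred)
    return w, b

# Toy example: learn the (linearly separable) AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([1 if xi @ w + b > 0 else 0 for xi in X])  # [0, 0, 0, 1]
```

A multi-layer perceptron adds one or more hidden layers with non-linear activations and is trained with backpropagation instead of this simple update rule.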

Related

Predict the position of an image in another image [closed]

If one image is a part of another image, then how to compute the accurate location in deep learning way?
Now I could compute this by extracting and matching key points using OpenCV, but I hope to solve it with neural networks.
Any ideas to design the networks and loss functions?
Thanks very much.
This is a detection problem. The simplest approach is to create a network with two heads, one for classification and the other for the bounding box (regression).
You feed your network the image and its respective label, sum the losses, and do a backward pass. Train for some epochs and you'll get yourself a detection model that you can use to detect what you need. But that's just a simple approach, and it can get much more complex.
You may as well skip this and use an existing detection architecture, or better yet a framework that simplifies your life considerably.
For TensorFlow I believe you can use the Object Detection API, and for PyTorch you can use Detectron, Detectron2, and mmdetection, among others.
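To make the two-head idea concrete, here is a rough PyTorch sketch (the ResNet-18 backbone, the single-object assumption, and the random tensors standing in for a real data loader are all placeholder choices, not a recommended recipe):

```python
import torch
import torch.nn as nn
import torchvision

class TwoHeadDetector(nn.Module):
    """Single-object sketch: one classification head, one bounding-box regression head."""

    def __init__(self, num_classes):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)  # pretrained=False on older torchvision
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop the final fc layer
        self.cls_head = nn.Linear(512, num_classes)  # class logits
        self.box_head = nn.Linear(512, 4)            # (x1, y1, x2, y2), normalized to [0, 1]

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.cls_head(f), self.box_head(f)

model = TwoHeadDetector(num_classes=2)
cls_loss_fn = nn.CrossEntropyLoss()
box_loss_fn = nn.SmoothL1Loss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One dummy training step; random tensors stand in for a real data loader.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
boxes = torch.rand(8, 4)

logits, pred_boxes = model(images)
loss = cls_loss_fn(logits, labels) + box_loss_fn(pred_boxes, boxes)  # sum the two losses
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

A real detector would of course handle multiple objects per image, which is exactly what the frameworks mentioned above take care of.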

Why are cardboard boxes hard to detect? [closed]

I'm trying to train a neural network to detect cardboard boxes along with multiple classes of persons (people).
While it easily detects persons and classifies them correctly, it's incredibly hard for it to detect cardboard boxes.
The boxes look like this:
My suspicion is that box is too simple of an object, and the neural network has a hard time detecting it because there are too few features to extract from the object.
The division of the dataset looks like this:
personA: 1160
personB: 1651
personC: 2136
person: 1959
box: 2798
Persons wear different safety items; they are classified based on those items, while being detected as whole persons, not just the item.
I tried the following architectures:
ssd300_inceptionv2
ssd512_inceptionv2
faster_rcnn_inceptionv2
All of these detect and classify persons much better than boxes. I cannot provide exact mAP values (I don't have them).
I used a pretrained COCO model from the TensorFlow model zoo.
Any ideas why it is so hard to detect the boxes?
Thanks.
PS: I asked this question on Data Science Stack Exchange but didn't get a relevant answer.
You are starting from a model pre-trained on COCO, which itself includes the "person" category but not the "box" category, so it sounds normal to me that the box category is harder.
I don't think your hypothesis is correct since a CNN should be more than capable of extracting the right features for simple objects as well as complex ones.

Machine Learning & Image Recognition: How to start? [closed]

I've been a full stack web developer for 15 years now and would like to be involved in machine learning. There is already a specific scenario for this: We have a database with several million products and one product image each. There is also a database with about 5000 terms.
A product image is linked to several terms (usually 3 - 20), whereby the link still has a weighting (1-100%). The terms are always of a visual nature, that is, they describe a visually recognizable feature on the image.
The aim is to upload a new image (thematically related, of course) and to get back possible terms (with probabilities) based on the already classified images.
Do you have any advice on how best to start here? Is there a framework that comes close to this scenario? Is TensorFlow relevant for this task? What new language should I learn?
Thank you very much!
TensorFlow can be used, but it's pretty "low-level". So if you're just starting out, you might be better off using Keras with a TensorFlow backend, as it's more user-friendly.
Regarding languages, you will probably use Python, so if you don't know it already you should get started. In my opinion you can also learn it on the fly while practicing, since you're already a developer.
As for tutorials, you will probably have to pick out the relevant bits of many different tutorials. You could get started with something like this:
https://www.pyimagesearch.com/2018/05/07/multi-label-classification-with-keras/
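Since the described scenario is multi-label image classification (several terms per image), a rough Keras sketch along those lines could look like this (the MobileNetV2 backbone, the 5000-unit output layer, and all hyperparameters are assumptions for illustration, not taken from the tutorial):

```python
from tensorflow import keras

NUM_TERMS = 5000          # one output unit per term in the term database
IMAGE_SIZE = (224, 224)   # arbitrary choice

# Transfer learning: a frozen ImageNet backbone plus a multi-label head.
backbone = keras.applications.MobileNetV2(
    input_shape=IMAGE_SIZE + (3,), include_top=False, pooling="avg", weights="imagenet"
)
backbone.trainable = False

model = keras.Sequential([
    backbone,
    keras.layers.Dropout(0.3),
    # Sigmoid (not softmax): each term is an independent yes/no decision.
    keras.layers.Dense(NUM_TERMS, activation="sigmoid"),
])

# Binary cross-entropy treats each term independently; the existing link
# weightings (1-100%) could even serve as soft targets in [0, 1].
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[keras.metrics.AUC(multi_label=True, num_labels=NUM_TERMS)])

# model.fit(train_dataset, validation_data=val_dataset, epochs=10)
# probs = model.predict(new_images)   # per-term probabilities for a new image
```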

Any way to manually make a variable more important in a machine learning model? [closed]

Sometimes you know from experience or from expert knowledge that a certain variable will play a key role in a model. Is there a way to manually make that variable count more, so that training speeds up and the method incorporates some human knowledge/wisdom/intelligence?
I still think machine learning combined with human knowledge is the strongest weapon we have now.
This might work by scaling your input data accordingly.
On the other hand, the strength of a neural network is to figure out which features are in fact important, and which combinations with other features are important, from the data.
You might argue that you'll decrease training time. Somebody else might argue that you're biasing your training in such a way that it might even take more time.
Anyway, if you want to do this, assuming a fully connected layer, you could initialize the weights of the input feature you consider important with larger values.
Another way could be to first pretrain the model with a training loss that has your feature as an output. Then keep the weights and switch to the actual loss. I have never tried this, but it could work.
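A tiny sketch of the two simplest ideas above, input scaling and larger weight initialization, assuming PyTorch and a made-up feature index (both the index and the scale factor are arbitrary illustrations):

```python
import numpy as np
import torch
import torch.nn as nn

IMPORTANT_FEATURE = 2   # hypothetical: we believe column 2 matters most
SCALE = 5.0             # arbitrary emphasis factor

X = np.random.randn(1000, 8).astype("float32")

# Option 1: scale the input column so its contribution to activations (and gradients) is larger.
X_scaled = X.copy()
X_scaled[:, IMPORTANT_FEATURE] *= SCALE

# Option 2: initialize the first-layer weights for that column with larger values.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
with torch.no_grad():
    model[0].weight[:, IMPORTANT_FEATURE] *= SCALE
```

Nothing stops training from undoing either nudge; it only changes the starting point.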

How is a dependency parse tree used for sentiment analysis? [closed]

With the announcement from Google on the release of Parsey McParseface (SyntaxNet),
which is claimed to be the most accurate dependency parser, I want to understand how this parser can be used for more accurate sentiment analysis. Could someone share some blogs, research papers, or tutorials that would help me understand the overall flow?
Good question. I'm not an expert; in fact I got intrigued when you asked the question.
tl;dr: more accurate dependency parsers would allow one to propagate sentiment through a graph and thus better determine sentiment polarity, at least in theory.
It seems from my reading that sentiment analysis using dependency tree graphs propagates the independent (prior, i.e., lexicon-derived) sentiment of words to compose the overall sentiment polarity of the text.
This approach uses the composition of language (its grammatical structure) to determine sentiment. This is somewhat* opposed to a statistical (naive Bayes, logistic regression, neural networks) approach to understanding sentiment.
Here's the paper I scanned:
http://www.aaai.org/ocs/index.php/FLAIRS/FLAIRS14/paper/viewFile/7869/7837
For a deeper exploration of what's possible, this might help:
https://arxiv.org/pdf/1401.6330.pdf
A more thorough introduction to dependency parsing, if you're interested, is https://web.stanford.edu/~jurafsky/slp3/14.pdf
*Somewhat, in the sense that convolutional networks in particular do learn a certain composition of language, and so do RNNs.
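A toy illustration of the propagation idea, using spaCy's dependency parse and a tiny made-up lexicon (this is not the method from the papers above, just a minimal sketch of flipping prior sentiment under negation):

```python
import spacy

# Requires the small English model: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

# Tiny, made-up prior-sentiment lexicon; a real system would use SentiWordNet or similar.
LEXICON = {"good": 1.0, "great": 1.5, "bad": -1.0, "terrible": -1.5}

def sentence_polarity(text):
    """Toy propagation: sum lexicon priors, flipping the sign under negation."""
    doc = nlp(text)
    score = 0.0
    for token in doc:
        prior = LEXICON.get(token.lemma_.lower(), 0.0)
        if prior == 0.0:
            continue
        # Flip polarity if the word or its head is modified by a negation ("not good").
        negated = any(c.dep_ == "neg" for c in token.children) or \
                  any(c.dep_ == "neg" for c in token.head.children)
        score += -prior if negated else prior
    return score

print(sentence_polarity("The movie was not good"))  # negative
print(sentence_polarity("The acting was great"))    # positive
```

The papers above go much further (propagating and combining scores along the whole tree), but the mechanism is the same: the parse tells you which words modify which, and the sentiment flows along those edges.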