I was wondering if there is any good and clean object-oriented programming (OOP) implementation of Bayesian filtering for spam and text classification? This is just for learning purposes.
I definitely recommend Weka, an open-source data mining package written in Java:
Weka is a collection of machine learning algorithms for data mining tasks. The algorithms can either be applied directly to a dataset or called from your own Java code. Weka contains tools for data pre-processing, classification, regression, clustering, association rules, and visualization. It is also well-suited for developing new machine learning schemes.
As mentioned above, it ships with a bunch of different classifiers like SVM, Winnow, C4.5, Naive Bayes (of course) and many more (see the API doc).
Note that a lot of classifiers are known to have much better performance than Naive Bayes in the field of spam detection or text classification.
Furthermore, Weka comes with a very powerful GUI…
Check out Chapter 6 of Programming Collective Intelligence
Maybe https://ci-bayes.dev.java.net/ or http://www.cs.cmu.edu/~javabayes/Home/node2.html?
I haven't played with either of them, though.
Here is an implementation of Bayesian filtering in C#: A Naive Bayesian Spam Filter for C# (hosted on CodeProject).
nBayes - another C# implementation hosted on CodePlex
PHP Naive Bayesian Filter - it's in French, but you should be able to find the download link :)
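If you mainly want something small enough to read end to end for learning purposes, here is a minimal sketch of the idea behind all of these projects: a hand-rolled multinomial naive Bayes spam filter in Python with Laplace smoothing. The class and the toy training messages below are purely illustrative and not taken from any of the libraries above.

    import math
    from collections import Counter

    class NaiveBayesSpamFilter:
        """Minimal multinomial naive Bayes text classifier with add-one smoothing."""

        def __init__(self):
            self.word_counts = {"spam": Counter(), "ham": Counter()}
            self.doc_counts = {"spam": 0, "ham": 0}
            self.vocabulary = set()

        def train(self, text, label):
            words = text.lower().split()
            self.word_counts[label].update(words)
            self.doc_counts[label] += 1
            self.vocabulary.update(words)

        def _log_likelihood(self, words, label):
            # log P(label) + sum of log P(word | label), with Laplace smoothing
            total_docs = sum(self.doc_counts.values())
            log_prob = math.log(self.doc_counts[label] / total_docs)
            total_words = sum(self.word_counts[label].values())
            vocab_size = len(self.vocabulary)
            for word in words:
                count = self.word_counts[label][word]
                log_prob += math.log((count + 1) / (total_words + vocab_size))
            return log_prob

        def classify(self, text):
            words = text.lower().split()
            return max(("spam", "ham"),
                       key=lambda label: self._log_likelihood(words, label))

    # Toy usage; the messages are made up purely for illustration.
    f = NaiveBayesSpamFilter()
    f.train("cheap meds buy now", "spam")
    f.train("win money now claim prize", "spam")
    f.train("meeting agenda for tomorrow", "ham")
    f.train("lunch tomorrow with the team", "ham")
    print(f.classify("claim your cheap prize now"))   # expected: spam
    print(f.classify("agenda for the team meeting"))  # expected: ham

The libraries above do the same counting and smoothing, just with better tokenization, persistence, and evaluation tooling around it.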
Suppose that I want to make a model that does something. When I search the topic on Google or YouTube, I find many related tutorials, and it seems like some clever programmer has already implemented that model with deep learning.
But how do they know what type of layers, activation functions, loss function, optimizer, number of units, etc. they need to solve that particular problem using deep learning?
Are there any techniques for knowing this, or is it just a matter of understanding and experience? It would also be very helpful if somebody could point me to some videos or articles answering my question.
This is more of a matter of understanding and experience. When building a model from scratch, you must understand which optimizer, loss, etc. makes sense for your particular problem. In order to choose these appropriately, you must understand the differences between the available optimizers, loss functions, etc.
How many layers and nodes to use, what batch size, what learning rate, etc. are all hyperparameters that you will need to test and tune as you experiment with your model.
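To make that concrete, here is a minimal Keras sketch showing where each of those choices actually appears in code. The layer sizes, activations, optimizer, learning rate, batch size, and the random dummy data are arbitrary placeholders, not recommendations.

    import numpy as np
    import tensorflow as tf

    # Dummy data just so the example runs; the shapes are arbitrary.
    x_train = np.random.rand(200, 20).astype("float32")
    y_train = np.random.randint(0, 2, size=(200, 1))

    # Number of layers, units per layer, and activation functions are architecture choices.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # binary classification head
    ])

    # The optimizer, learning rate, and loss function are training choices.
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )

    # Batch size and number of epochs are hyperparameters you tune by experimenting.
    model.fit(x_train, y_train, batch_size=32, epochs=5, validation_split=0.2)

Understanding why you would swap any one of these values for another (e.g. a different loss for multi-class problems) is exactly the part that comes from the fundamentals plus experience.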
I have a Deep Learning Fundamentals YouTube playlist that you may find helpful. It covers the fundamental basics of each of these topics in short videos. Additionally, this Deep Learning with Keras playlist may also be beneficial if you're wanting to focus more on coding after getting the basic concepts down.
Thanks for the question.
The Stanford CS231n lectures on CNNs are the best for beginners; refer to the video lectures here, and the class notes are available here.
After watching the lectures and completing the assignments, you will get a basic idea of what deep learning is and which algorithms are available.
But when it comes to solving real-world problems, this won't be sufficient, so take this course by Jeremy Howard, where he teaches how to approach a problem using the Kaggle platform.
Keep solving more problems and experimenting with new models and algorithms on platforms like HackerEarth, Kaggle, Topcoder, etc.
I am a newbie in TensorFlow.
Actually, I am working through some examples on the TensorFlow website, and I am starting to understand some features of the framework, but what I don't understand is how to design my own architecture, I mean the number of layers, the types of layers (conv, pool, ...), and whether it is even necessary to do that, because there are many predefined architectures like AlexNet.
Thanks,
I would strongly recommend working through their hands-on tutorials, depending on whether you have previous ML experience (https://www.tensorflow.org/get_started/mnist/pros) or not (https://www.tensorflow.org/get_started/mnist/beginners). The questions you are asking are answered in there.
Whether to use a predefined architecture or define your own depends on your use case. If you want to do something simple, like classifying whether or not there is a car in the scene, a shallower architecture might work better because it is faster, and a deeper one would be overkill. However, most architectures are similar to the ones already defined in the literature.
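As a rough illustration of what defining your own architecture looks like, here is a small Keras conv/pool stack; every number of layers, filters, and units below is an arbitrary design choice, not a recipe.

    import tensorflow as tf

    # A deliberately shallow conv/pool architecture; all sizes here are design choices.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, (3, 3), activation="relu", input_shape=(64, 64, 3)),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g. "car" vs. "no car"
    ])
    model.summary()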
Another question that arises naturally when talking about predefined architectures is transfer learning / fine-tuning. Predefined architectures have often already been trained on some big dataset (mostly ImageNet) and already perform really well out of the box for many tasks. With little training data, it makes a lot of sense to use this. With lots of training data, it can hinder your progress, though.
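Here is a sketch of that fine-tuning idea with Keras. MobileNetV2 is just one example of a predefined, ImageNet-pretrained architecture, and the frozen-base-plus-new-head setup below is only one common pattern.

    import tensorflow as tf

    # Load a predefined architecture with ImageNet weights, without its classification head.
    base = tf.keras.applications.MobileNetV2(
        input_shape=(160, 160, 3), include_top=False, weights="imagenet"
    )
    base.trainable = False  # freeze the pretrained features when training data is scarce

    # Add a small head for your own task (here: one binary output, e.g. car / no car).
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.summary()

With lots of training data you would typically unfreeze part or all of the base and train with a lower learning rate instead.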
I want to do research in deep learning, but I don't know which framework to choose between TensorFlow and PaddlePaddle. Can anyone compare the two frameworks? Which one is better, especially in terms of CPU running efficiency?
It really depends what you are shooting for...
If you plan on training, a CPU is not going to work well for you. Use Colab or Kaggle.
Assuming you do get a GPU, it depends if you want to focus on classification or object detection.
If you focus on classification, Keras is probably the easiest to work with or pytorch if you want some advanced stuff and to be able to change things.
If you plan on object detection, things get complicated... Inference is reasonably easy, but training is complicated. There are actually 4 platforms you should consider:
TensorFlow - powerful but very difficult to work with. If you do not use Keras (and for OD you usually can't), you need to preprocess the dataset into TFRecords, and it is a pain. The OD API has very cryptic messages and is very sensitive to the combination of TF version and API version. On the other hand, cool models like EfficientDet are more or less easy to use.
MMDetection - a very powerful framework that has lots of advanced models, and once you understand how to work with it, you can easily work with any of the models it supports. The downside is that some models are slow to arrive (EfficientDet, for example).
PaddlePaddle - if you know Chinese, this should work ok, maybe. The documentation is a bit behind and usually requires lots of improvisation. Basically, it is similar to MMDetection, just with a few unique models and a few missing models.
Detectron2 - I haven't worked with this one, but it seems to support only a few models.
You probably first need to define for yourself what you want to do, and then choose.
Good luck!
It is not that trivial. Some models run faster with one kind of framework, others with another. Furthermore, it depends on the hardware as well. See this blog. If inference is your only concern, then you can develop your model in any of the popular frameworks like TensorFlow, PyTorch, etc. In the end, convert your model to ONNX format and benchmark its performance with DNN-Bench to choose the best inference engine for your application.
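For example, a minimal export-and-check round trip from PyTorch to ONNX might look like the sketch below. The model, shapes, and file name are placeholders, and DNN-Bench has its own CLI which I won't reproduce from memory here; this only shows the ONNX export and a quick ONNX Runtime sanity check before benchmarking.

    import numpy as np
    import torch
    import torch.nn as nn
    import onnxruntime as ort

    # Placeholder model; substitute whatever you trained in TensorFlow, PyTorch, etc.
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
    model.eval()

    # Export to ONNX with a dummy input that fixes the expected input shape.
    dummy = torch.randn(1, 10)
    torch.onnx.export(model, dummy, "model.onnx",
                      input_names=["input"], output_names=["output"])

    # Run the exported model with ONNX Runtime to check it before benchmarking.
    session = ort.InferenceSession("model.onnx")
    outputs = session.run(None, {"input": np.random.rand(1, 10).astype(np.float32)})
    print(outputs[0].shape)  # expected: (1, 2)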
I am trying to code a Bayesian network in .NET. I found a library called Infer.NET by Microsoft Research which is used for probabilistic reasoning about networks. But it would be easier if I could find a simple example implementing a Bayesian network using Infer.NET. I searched but couldn't find one. Can someone point me to an example implementation of a Bayesian network in .NET or using Infer.NET?
Thanks,
I'm not aware of a .NET implementation of Bayesian networks, but I'm using SMILE for my research (http://genie.sis.pitt.edu/). It's a C++ library, but they provide a .NET wrapper. It's pretty well documented, so it should be a good starting point.
There is a simple Bayesian network example included with Infer.NET. The code is at (your install folder)/Samples/C#/ExamplesBrowser/WetGrassSprinklerRain.cs.
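Infer.NET aside, if it helps to see what that kind of sample computes, here is a tiny Python sketch that does exact inference by enumeration on a sprinkler/rain/wet-grass network. The probability tables are made-up illustrative numbers, not the ones from the Infer.NET sample, and a real library would use message passing rather than brute force.

    from itertools import product

    # Illustrative conditional probability tables (True/False), not from the sample.
    P_rain = {True: 0.2, False: 0.8}
    P_sprinkler = {True: 0.3, False: 0.7}
    P_wet = {  # P(wet_grass = True | sprinkler, rain)
        (True, True): 0.99, (True, False): 0.9,
        (False, True): 0.8, (False, False): 0.05,
    }

    def joint(sprinkler, rain, wet):
        p_wet = P_wet[(sprinkler, rain)] if wet else 1 - P_wet[(sprinkler, rain)]
        return P_sprinkler[sprinkler] * P_rain[rain] * p_wet

    # P(rain | wet_grass = True) by enumerating over the hidden sprinkler variable.
    numerator = sum(joint(s, True, True) for s in (True, False))
    evidence = sum(joint(s, r, True) for s, r in product((True, False), repeat=2))
    print("P(rain | wet grass) =", numerator / evidence)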
I see that Bayesian filters work well for binary choices (spam:not spam, male:female, etc.). Is there any way for them to categorize multiple values (e.g. php+javascript, house+yard)?
I've seen Naive bayesian classifier - multiple decisions, but I want to know if multiple outputs are possible.
If not, what other approaches are suggested for categorization (with or without learning)? Especially for PHP.
As the accepted answer of the question you linked to says: "It's definitely possible to have more than two classes.". In practice, one approach is to train multiple classifiers in parallel, e.g. one classifier for php vs. not php and another classifier for javascript vs. not javascript.
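Here is a sketch of that one-binary-classifier-per-tag idea using scikit-learn in Python; the documents and tags are made up for illustration, and for PHP you would need to port the idea or call out to an external process.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import MultiLabelBinarizer

    # Each document can carry several tags at once (multi-label, not just multi-class).
    docs = [
        "echo json_encode for the ajax response",
        "addEventListener click handler on the button",
        "pdo prepared statement with fetch in the callback",
        "mow the yard and repaint the house fence",
    ]
    tags = [["php"], ["javascript"], ["php", "javascript"], ["house", "yard"]]

    # One binary naive Bayes classifier is trained per tag (one-vs-rest).
    mlb = MultiLabelBinarizer()
    Y = mlb.fit_transform(tags)
    clf = make_pipeline(CountVectorizer(), OneVsRestClassifier(MultinomialNB()))
    clf.fit(docs, Y)

    pred = clf.predict(["fetch the button click in a prepared statement"])
    print(mlb.inverse_transform(pred))  # a document can receive several tags (or none)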
Other widely used multivariate classification methods include
artificial neural networks (also called multilayer perceptrons)
(boosted) decision trees
support vector machines
If you have a more detailed/follow-up question on this, post it on http://stats.stackexchange.com.
I'm not sure what libraries for such a task are available for PHP, but SWIG is a tool that makes libraries written in C/C++ usable from PHP.