Is there any way to customize QnA Maker so that it understands semantic phrases in a sentence?
A question from my KB is:
What is the company address?
An alternative question is: Company headoffice?
When a user asks "Company head office?" (note the space between "head" and "office"), QnA Maker responds with some other answer that contains the term "head office".
Example:
Q: Company head office?
A: The head office has all the facilities...
Here QnA Maker matches the term "head office" inside that answer's text and returns that response.
So is there any way to change the underlying ML algorithms?
Can we change the underlying ML algorithm for QnA Maker? Sorry, the answer is no. QnA Maker is a prebuilt model offered as a service, and it does not support that kind of customization.
QnA Maker uses Natural Language Processing (NLP) and Azure Search layered ranking over the knowledge base, which is stored in blob storage. In the example provided, "head office" is matched with the facilities-related response because that answer scores higher in the ranking; this is expected behavior. You can, however, try adding synonyms via word alterations to improve the ranking for your scenario; a minimal sketch follows.
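As a sketch only: word alterations can be registered through the QnA Maker v4.0 REST API. The resource name and subscription key below are placeholders, and the endpoint shape is my assumption based on the public v4.0 Alterations API, so verify it against your service's documentation.

```python
import requests

# Placeholders: substitute your own QnA Maker resource name and key
endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"
key = "<your-subscription-key>"

# Treat "head office" and "headoffice" as interchangeable during ranking
body = {
    "wordAlterations": [
        {"alterations": ["head office", "headoffice", "head-office"]}
    ]
}

resp = requests.put(
    f"{endpoint}/qnamaker/v4.0/alterations",
    headers={
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "application/json",
    },
    json=body,
)
resp.raise_for_status()  # the service should reply with 204 No Content on success
```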
I'm new to machine learning (AI) technology. I'm developing a messenger app for Android/iOS where I would like to recommend to the user, based on their texts/words/conversations, a product from a relatively small product portfolio.
Example 1:
In case the user of the messenger writes a sentence including the words "wine", "dinner", and "date", the AI should recommend a bottle of wine to the user.
Example 2:
In case the user of the app writes that they drank a good coffee this morning, the AI should recommend a mug to the user.
Example 3:
In case the user writes something about a cute boy she met the other day, the AI should recommend a teddy bear to the user.
I've been a software developer for almost 20 years, with experience developing C/C++/Java-based applications (Android and iOS apps) as well as some experience with Google Cloud Platform. ML/AI technology is completely new to me. Okay, I know the basics (input data is needed to train the ML/AI system, etc.), but I wonder if there is already a framework that could help me develop a system that solves the use cases described above.
I would appreciate it, if you could give me some hints where and how to start.
Thank you and regards
It is definitely possible to implement such an application; in case you want to do it on Google Cloud, you will need some understanding of TensorFlow.
First of all, I recommend doing the Machine Learning Crash Course for a good introduction to machine learning and to start familiarizing yourself with TensorFlow. Afterwards, I recommend taking a look at the TensorFlow tutorials, which give a more practical introduction to TensorFlow and include various examples of building/training/testing models.
Once you are familiar with TensorFlow, you can jump into learning how to run jobs in the Machine Learning Engine; you can start by following the quickstart. The documentation includes detailed guides on how to use the ML Engine, plus multiple samples and tutorials.
Since I believe your application falls into the recommender-system category, here you can see an example model, on Google Cloud ML Engine, of how to recommend items to users based on their previous searches. In your case, you would have to build a model that recommends items to users based on the previous words in their sentences; a toy sketch of that idea follows.
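To make the idea concrete, here is a toy sketch (not the linked ML Engine sample) that treats "which product fits this sentence?" as a tiny text-classification problem in TensorFlow. The products, training sentences, and labels are all made up for illustration; a real model would need far more data.

```python
import tensorflow as tf

products = ["wine", "mug", "teddy bear"]
sentences = [
    "dinner date with a nice bottle of wine",
    "planning a romantic dinner date",
    "had a really good coffee this morning",
    "need more coffee to wake up",
    "met a cute boy yesterday",
    "she met someone really cute last week",
]
labels = [0, 0, 1, 1, 2, 2]  # index into `products`

# Turn raw strings into TF-IDF vectors inside the model itself
vectorize = tf.keras.layers.TextVectorization(output_mode="tf_idf")
vectorize.adapt(sentences)

model = tf.keras.Sequential([
    vectorize,
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(len(products), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(tf.constant(sentences), tf.constant(labels), epochs=100, verbose=0)

pred = model.predict(tf.constant(["wine and dinner on our date"]))
print(products[int(pred.argmax())])  # expected: "wine"
```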
The second option, in case you don't want to go through the hassle of building a new model from scratch, would be to use the Google Cloud Natural Language API, which you can think of as pre-trained models built on Google's (incredibly big) data. In your case, I believe the content classification API would help you achieve what your application intends to do. The outputs (which you can see here) are limited to what the model was trained on and might not be specific enough for your application; however, it is an easy solution, and you can still profit from this API by extracting labels/information and sending them as input to another model. A minimal call sketch follows.
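Here is a minimal sketch of a content-classification call, assuming the google-cloud-language Python client library with application credentials already configured (note that classify_text needs a reasonably long input, roughly 20+ tokens):

```python
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()
document = language_v1.Document(
    content=(
        "We had a lovely dinner date last night at a small Italian "
        "restaurant, and we shared a bottle of red wine with our pasta."
    ),
    type_=language_v1.Document.Type.PLAIN_TEXT,
)
response = client.classify_text(request={"document": document})
for category in response.categories:
    # Category names come from the API's fixed taxonomy, e.g. "/Food & Drink/..."
    print(category.name, category.confidence)
```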
I hope that these links give you some foundation on what is possible to do with TensorFlow in the ML Engine, and that they are useful to you.
I am making a little personal project.
Ideally I would like to be able to perform a Google search programmatically and get the count of results. (My goal is to compare result counts across a large number (100,000+) of different phrases.)
Is there a free way to run a web search and compare the popularity of different texts, using Google, Bing, or whatever (the source is not really important)?
I tried Google, but it seems I can only make 10 free requests per day.
Bing is more permissive (5,000 free requests per month).
Are there other tools or ways to get the result count for a particular phrase for free?
Thanks in advance.
There are several things you're going to need if you're seeking to create a simple search engine.
First of all, you should read and understand where the field of information retrieval started, with G. Salton's paper, or at least read the wiki page on the vector space model; a minimal sketch of the model follows below. It will require learning at least some undergraduate linear algebra; I suggest Gilbert Strang's MIT video lectures for this.
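As a minimal sketch of the vector space model: documents and queries become TF-IDF vectors, and relevance is their cosine similarity (scikit-learn is used here for brevity; the documents are made up):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "the head office has all the facilities",
    "the company address is 1 main street",
    "our search engine ranks documents by relevance",
]
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)

# A query is embedded into the same vector space as the documents
query_vector = vectorizer.transform(["company head office"])
scores = cosine_similarity(query_vector, doc_vectors)[0]

# Rank documents by descending similarity to the query
for doc, score in sorted(zip(docs, scores), key=lambda p: -p[1]):
    print(f"{score:.3f}  {doc}")
```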
You can then move on to the Brin/Page PageRank paper, which lays out the original concept behind the hyperlink matrix and quickly calculating its eigenvector for ranking, or read the wiki page; a power-iteration sketch follows.
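Here is a small power-iteration sketch of PageRank on a made-up four-page link graph, following the damping-factor formulation from the Brin/Page paper:

```python
import numpy as np

# links[i] = pages that page i links to (a made-up four-page web)
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n = len(links)
d = 0.85  # damping factor from the original paper

# Column-stochastic transition matrix: M[j, i] = 1/outdegree(i) if i links to j
M = np.zeros((n, n))
for i, outs in links.items():
    for j in outs:
        M[j, i] = 1.0 / len(outs)

# Power iteration: repeatedly apply the damped transition until it settles
rank = np.full(n, 1.0 / n)
for _ in range(100):
    rank = (1 - d) / n + d * M @ rank

print(rank)  # PageRank scores; they sum to 1
```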
You may also be interested in looking at the code for Apache Lucene.
To get into contemporary search-algorithm techniques, you need calculus and regression analysis in order to learn machine learning and deep learning, as current Google search has moved away from PageRank and utilizes these instead. This is partially due to how link farming enabled people to artificially engineer search results, and partially due to the huge amount of metadata that modern browsers and web servers allow to be collected.
EDIT:
For the web-crawler portion only, I'd recommend WebSPHINX; a sketch of the basic crawl loop follows. I used WebSPHINX in my senior research in college, in conjunction with Lucene.
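WebSPHINX itself is a Java toolkit; purely for illustration, here is a minimal breadth-first crawl loop sketched in Python with requests and BeautifulSoup (a real crawler should also honor robots.txt and rate limits):

```python
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def crawl(seed, max_pages=20):
    """Breadth-first crawl starting from `seed`, fetching at most `max_pages`."""
    seen, frontier = {seed}, deque([seed])
    fetched = 0
    while frontier and fetched < max_pages:
        url = frontier.popleft()
        try:
            html = requests.get(url, timeout=5).text
        except requests.RequestException:
            continue  # skip pages that fail to load
        fetched += 1
        print("fetched:", url)
        # Enqueue every absolute http(s) link we haven't seen yet
        for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            link = urljoin(url, a["href"])
            if link.startswith("http") and link not in seen:
                seen.add(link)
                frontier.append(link)

crawl("https://example.com")
```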
I need some info on machine learning, especially sentiment analysis. I need software that can parse a comment (collected from social media platforms) and then judge its polarity (positiveness vs. negativeness) on multiple attributes.
Say the attributes are cleanliness, promptness of service, and ease of room booking,
and the comment is:
"was able to book a room easily. However rooms werent very clean"
Then the result would be: 1. cleanliness: negative; 2. ease of booking: positive; 3. promptness: neutral.
Any leads on what software I can go for, or whether there are any pre-written programs for this in a language that is easily available online?
I'm researching the same area as you. From what I found out, an SVM classifier gives the best results for sentiment analysis; a minimal training sketch follows.
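Here is a minimal sketch of an SVM sentiment classifier with scikit-learn. The tiny training set is made up for illustration; real use would need labeled data, ideally one classifier per attribute (cleanliness, booking, promptness):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_texts = [
    "was able to book a room easily",
    "booking was a nightmare",
    "rooms were spotless and clean",
    "rooms weren't very clean",
]
train_labels = ["positive", "negative", "positive", "negative"]

# TF-IDF features feeding a linear SVM, the classic sentiment baseline
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(train_texts, train_labels)

print(clf.predict(["the room was very clean"]))  # -> ['positive']
```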
I am a PhD student in translation studies, and I am currently working on my dissertation. I am using the LSA similarity interface as a method of analysis in my dissertation. My background is in linguistics, not computer science. I tried to find an easy LSA document-categorisation tool, but I could not find any. I tried to play with Gensim, but it did not work; I think my problem is with linking my corpus (txt files) to Gensim to do the analysis (I don't know how to do this step). I would greatly appreciate it if anyone could help me with the analysis or direct me to a tool or easy tutorial for doing it with Gensim.
I want to do the following: apply document-to-document queries to retrieve the 5 documents from the corpus most relevant to the query document.
I have 15 query documents.
I have one corpus of 150 texts; the texts are short stories.
I am desperate, and I was hesitant to post this question here. I am sure that applying LSA in translation studies would add to the field, and this makes me more persistent in finding a way to do my analysis.
The only really easy, user-friendly tool for LSA out there right now is http://lsa.colorado.edu/ . Unfortunately, it is a web-based tool only, and it does not allow you to train LSA on your own corpora. But depending on your needs, that may not matter.
If I'm understanding you correctly, you need document-document similarity scores between each of your 15 query documents and each of your 150 short stories (a total of 15*150 = 2250 similarity scores). If these query documents and short stories are in English, then you can use the version of LSA trained on the TASA corpus, used in many studies of LSA, as follows:
Go to http://lsa.colorado.edu/
Select One-To-Many Comparison
Copy-paste one of the short stories into the "Main text" box, and the 15 queries, separated by blank lines, into the "Texts to compare" box
Repeat for each of your short stories. A huge pain? Yes. But if you are desperate...
If you program a little bit in Python or R, other tools for LSA include http://clic.cimec.unitn.it/composes/toolkit/introduction.html and http://cran.r-project.org/web/packages/lsa/lsa.pdf , and they would save you the manual labor of the above suggestion. Also, I know you already tried Gensim, but there is a nice tutorial for it at http://radimrehurek.com/gensim/tutorial.html that you might try following if you haven't already; a minimal sketch for your exact setup follows.
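Here is a minimal Gensim LSA sketch for your setup: load a folder of .txt short stories, train an LSI model (Gensim's name for LSA), then rank the stories against one query document. The file paths and num_topics value are placeholders to adjust:

```python
from pathlib import Path

from gensim import corpora, models, similarities

# Load the 150 short stories from a folder of .txt files (path is a placeholder)
story_files = sorted(Path("corpus").glob("*.txt"))
stories = [p.read_text(encoding="utf-8").lower().split() for p in story_files]

# Map words to ids and convert each story to a bag-of-words vector
dictionary = corpora.Dictionary(stories)
bow_corpus = [dictionary.doc2bow(doc) for doc in stories]

# Train LSA and build a similarity index over the whole corpus
lsi = models.LsiModel(bow_corpus, id2word=dictionary, num_topics=100)
index = similarities.MatrixSimilarity(lsi[bow_corpus])

# Compare one query document against all 150 stories
query = Path("queries/query01.txt").read_text(encoding="utf-8").lower().split()
sims = index[lsi[dictionary.doc2bow(query)]]

# Print the 5 most similar stories
for doc_id in sims.argsort()[::-1][:5]:
    print(f"{sims[doc_id]:.3f}  {story_files[doc_id]}")
```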
And, especially, did they release an API we can use in our own development?
I think that an OCR system trained on millions of books should simply have become nearly perfect by now, and I've also heard that Google greatly improved its algorithm with neural networks for recognizing house numbers in Street View.
So I really wonder whether such an API is available.
Basically, for Google Books, Google customizes the open-source OCRopus OCR engine with a lot of additional techniques.
The funniest is the crowdsourcing one, via CAPTCHAs composed of two words: one is generated and ensures the security, while the second is an unrecognized word from Google Books or Street View (building numbers)...
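Google's customized pipeline isn't public, but if you just want to run open-source OCR yourself, here is a minimal sketch using pytesseract (Python bindings for Tesseract, named here as a stand-in since OCRopus is invoked differently); "page.png" is a placeholder for your own scan:

```python
from PIL import Image
import pytesseract

# Run OCR over a single scanned page; requires the tesseract binary installed
text = pytesseract.image_to_string(Image.open("page.png"))
print(text)
```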