named entity tagged corpus - entity

I am looking for a named-entity tagged corpus for the English news domain, covering both text and transcribed speech from the same period of time. If anybody has any information about such a corpus, please send me the link.
Thanks
Khadaka

I've found the Open American National Corpus to be quite useful. They do provide a named-entity tagged portion containing both news text and transcribed speech, but note that it's tagged using the BBN NE Tagger, not an army of people. I've had decent results bootstrapping other models using this kind of corpus, though.
Best of luck. I'd be curious to hear of your results.

Related

Can any TTS engine change a voice's language, and subsequently its phonemes?

Let's say I want to have some English text spoken with an Italian accent.
Many of the engine demos I have tried on their respective sites offer an Italian voice, but when you try to get it to pronounce a few sentences in English, the result is often highly unintelligible because the voice is operating with a different phoneme inventory.
There are phoneme tags in SSML, and I know one site that lets you demo with SSML directly. I tried feeding this common and generic Italian conversation into their Italian voice:
Mama mia! Princess Peach and my friends have been kidnapped?
Chase Bowser, so we can eat some spaghetti!
And it comes out fairly unintelligible. Using SSML or something else, can I keep the accent but correct the phonemes enough to make the speech intelligible?
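For reference, the kind of SSML phoneme override I mean looks roughly like the snippet below (built here as a Python string). The IPA transcription, and whether a given engine's Italian voice actually honors the tag, are assumptions that vary by vendor.

    # Minimal sketch: wrap one word in an SSML <phoneme> tag with an IPA
    # transcription. IPA string and engine support are assumptions.
    ssml = """
    <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="it-IT">
      Chase <phoneme alphabet="ipa" ph="ˈbaʊzɚ">Bowser</phoneme>,
      so we can eat some spaghetti!
    </speak>
    """.strip()
    print(ssml)  # paste into a demo page or send to an engine that accepts SSML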
You can hire a voice talent with an Italian accent and build a new TTS voice with an engine that supports custom voices. Even with several hours of speech you can get a decent model.
The second option is speech morphing, but it requires some effort as well as knowledge of the domain.

WordNet, Query Expansion, Step by Step

I want to build a project on query expansion using WordNet, but it's hard to find a step-by-step method for doing it.
Based on this article, I should take the following steps (assuming a sentence as input to the program):
Tokenization
Tagging part of speech
Stemming word
Word sense disambiguation
Computing semantic similarity between two synsets (this part is still confusing to me)
...and then we can conclude that the word with larger score is the query expansion from the input. However, I'm still confused about how to perform each of these steps. Is there any source which covers these in more detail?
Query Expansion is a huge field in itself under IR (Information Retrieval).
Also, WordNet is itself huge, so it is difficult to find a single set of step-by-step directions.
However, there are tons of very good resources. I got started with it by taking several web examples and trying them out myself.
Resources you will find useful in getting started:
The WordNet site itself (with examples)
The WordNet Wikipedia page
Python Programming.net has a WordNet tutorial page
Even if you don't know Python, I would highly recommend the O'Reilly book "Natural Language Processing with Python". Its website has TONS of examples to get you started.
Hope that helps you get going.
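To make the steps from the question concrete, here is a minimal NLTK sketch (assuming NLTK and its punkt, averaged_perceptron_tagger and wordnet data packages are installed). The sense choice and scoring below are simplistic placeholders, not the method from the article.

    # Rough sketch: tokenize, tag, stem, pick a sense, then rank expansion
    # candidates by WordNet path similarity. Not a production WSD approach.
    import nltk
    from nltk.corpus import wordnet as wn
    from nltk.stem import PorterStemmer

    query = "cheap flight to Rome"

    tokens = nltk.word_tokenize(query)                   # 1. tokenization
    tagged = nltk.pos_tag(tokens)                        # 2. part-of-speech tagging
    stems = [PorterStemmer().stem(t) for t in tokens]    # 3. stemming

    for word in tokens:
        synsets = wn.synsets(word)
        if not synsets:
            continue
        sense = synsets[0]          # 4. crude WSD: first sense (nltk.wsd.lesk is better)
        # 5. candidate expansion terms: lemmas of related synsets,
        #    scored by path similarity to the chosen sense
        related = sense.hypernyms() + sense.hyponyms() + synsets[1:]
        scored = []
        for syn in related:
            sim = sense.path_similarity(syn) or 0.0
            for lemma in syn.lemma_names():
                term = lemma.replace('_', ' ')
                if term.lower() != word.lower():
                    scored.append((sim, term))
        print(word, sorted(scored, reverse=True)[:3])

The highest-scoring terms would then be appended to the original query.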

How can I start building a wordnet for the Turkish language to use in sentiment analysis?

Although I have an EE background, I never got the chance to attend natural language processing classes.
I would like to build a sentiment analysis tool for the Turkish language. I think it is best to create a Turkish wordnet database rather than translating the text to English and analyzing the buggy translation with existing tools. (Is it?)
So what do you guys recommend I do? Should I first take NLP classes from an open courseware site? I really don't know where to start. Could you help me and maybe provide a step-by-step guide? I know this is an academic-scale project, but I am interested in building skills in this area as a hobby.
Thanks in advance.
Here is the process I have used before (making Japanese, Chinese, German and Arabic semantic networks):
Gather at least two English/Turkish dictionaries. They must be independent, not derived from each other. You can use Wikipedia to auto-generate one of your dictionaries. If you need to publish your network, then you may need open source dictionaries, or license fees, or a lawyer.
Use those dictionaries to translate English WordNet, producing a confidence rating for each synset (see the sketch at the end of this answer).
Keep those with strong confidence; manually approve or fix those with medium or low confidence.
Finish it off manually.
I expanded on this in the "Automatic Translation Of WordNet" section of my 2008 paper: http://dcook.org/mlsn/about/papers/nlp2008.MLSN_A_Multilingual_Semantic_Network.pdf
(For your stated goal of a Turkish sentiment dictionary, there are other approaches that do not involve a semantic network. E.g. "Sentiment Analysis and Opinion Mining" by Bing Liu is a good round-up of the research. But a semantic network approach will, IMHO, always give better results in the long run, and has so many other uses.)
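A toy sketch of step 2, just to illustrate the confidence idea (the dictionaries here are tiny hypothetical stand-ins, and this is not the MLSN code from the paper):

    # Translate English WordNet synsets into Turkish with two independent
    # dictionaries; agreement between them raises the confidence rating.
    from nltk.corpus import wordnet as wn

    dict_a = {"dog": ["köpek"], "hound": ["tazı"]}         # hypothetical EN-TR dictionary 1
    dict_b = {"dog": ["köpek", "it"], "hound": ["köpek"]}  # hypothetical EN-TR dictionary 2

    def translate(synset):
        votes = {}
        for lemma in synset.lemma_names():
            for d in (dict_a, dict_b):
                for tr in d.get(lemma, []):
                    votes[tr] = votes.get(tr, 0) + 1
        # 2+ votes -> strong confidence; 1 vote -> flag for manual review
        return {tr: ("strong" if n >= 2 else "medium") for tr, n in votes.items()}

    dog = wn.synsets("dog")[0]          # Synset('dog.n.01')
    print(dog.name(), translate(dog))   # e.g. {'köpek': 'strong', 'it': 'medium', ...}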

Entity Extraction/Recognition with free tools while feeding Lucene Index

I'm currently investigating options to extract person names, locations, tech terms and categories from text (a lot of articles from the web), which will then be fed into a Lucene/ElasticSearch index. The additional information is added as metadata and should increase the precision of the search.
E.g. when someone queries 'wicket', he should be able to decide whether he means the cricket sport or the Apache project. I tried to implement this on my own with only minor success so far. Now I have found a lot of tools, but I'm not sure whether they are suited for this task, which of them integrate well with Lucene, or whether the precision of entity extraction is high enough.
Dbpedia Spotlight, the demo looks very promising
OpenNLP requires training. Which training data to use?
OpenNLP tools
Stanbol
NLTK
balie
UIMA
GATE -> example code
Apache Mahout
Stanford CRF-NER
maui-indexer
Mallet
Illinois Named Entity Tagger (not open source, but free)
wikipedianer data
My questions:
Does anyone have experience with some of the tools listed above and their precision/recall? Or whether training data is required and available for them?
Are there articles or tutorials where I can get started with entity extraction (NER) for each tool?
How can they be integrated with Lucene?
Here are some questions related to that subject:
Does an algorithm exist to help detect the "primary topic" of an English sentence?
Named Entity Recognition Libraries for Java
Named entity recognition with Java
The problem you are facing in the 'wicket' example is called entity disambiguation, not entity extraction/recognition (NER). NER can be useful, but only when the categories are specific enough. Most NER systems don't have enough granularity to distinguish between a sport and a software project (both would fall outside the typically recognized types: person, organization, location).
For disambiguation, you need a knowledge base against which entities are disambiguated. DBpedia is a typical choice due to its broad coverage. See my answer to How to use DBPedia to extract Tags/Keywords from content?, where I provide more explanation and mention several tools for disambiguation, including:
Zemanta
Maui-indexer
Dbpedia Spotlight
Extractiv (my company)
These tools often expose a language-independent API such as REST, and I do not know whether they provide Lucene support directly, but I hope my answer has been helpful for the problem you are trying to solve.
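As a rough illustration of how such a REST-based disambiguator can feed Lucene/ElasticSearch metadata, here is a sketch using DBpedia Spotlight's public annotate endpoint and ElasticSearch's document API. The endpoint URL, parameter names and field names are assumptions to verify against the current docs.

    # Sketch: annotate an article with DBpedia Spotlight, then index the text
    # together with the disambiguated entity URIs as metadata fields.
    import requests

    article = "Apache Wicket is a component-based web framework for Java."

    # 1. Entity disambiguation via the Spotlight REST API (parameters assumed)
    resp = requests.get(
        "https://api.dbpedia-spotlight.org/en/annotate",
        params={"text": article, "confidence": 0.4},
        headers={"Accept": "application/json"},
    )
    entities = [r["@URI"] for r in resp.json().get("Resources", [])]

    # 2. Index text + entity metadata into ElasticSearch (index/field names are ours)
    requests.put(
        "http://localhost:9200/articles/_doc/1",
        json={"body": article, "entities": entities},
    )

A query for 'wicket' can then filter or boost on the entities field to separate the cricket sense from the Apache project.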
You can use OpenNLP to extract names of people, places and organisations without training it yourself. You just use the pre-existing models, which can be downloaded from here: http://opennlp.sourceforge.net/models-1.5/
For an example of how to use one of these models, see: http://opennlp.apache.org/documentation/1.5.3/manual/opennlp.html#tools.namefind
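A small sketch of driving one of those pre-trained models from the command-line launcher, called here from Python. It assumes the opennlp script is on PATH and en-ner-person.bin has been downloaded from the link above.

    # Run OpenNLP's TokenNameFinder on whitespace-tokenized sentences and
    # collect the <START:person> ... <END> spans it marks up.
    import re
    import subprocess

    sentences = "Pierre Vinken , 61 years old , will join the board .\n"
    result = subprocess.run(
        ["opennlp", "TokenNameFinder", "en-ner-person.bin"],
        input=sentences, capture_output=True, text=True, check=True,
    )
    names = re.findall(r"<START:person>(.*?)<END>", result.stdout)
    print([n.strip() for n in names])   # e.g. ['Pierre Vinken']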
Rosoka is a commercial product that provides a computation of "salience", which measures the importance of a term or entity to the document. Salience is based on linguistic usage rather than frequency. Using the salience values you can determine the primary topic of the document as a whole.
The output is in your choice of XML or JSON, which makes it very easy to use with Lucene.
It is written in Java.
There is an Amazon Cloud version available at https://aws.amazon.com/marketplace/pp/B00E6FGJZ0. The cost to try it out is $0.99/hour. The Rosoka Cloud version does not have all of the Java API features available to it that the full Rosoka does.
Yes, both versions perform entity and term disambiguation based on linguistic usage.
Disambiguation, whether by a human or by software, requires enough contextual information to determine the difference. The context may be contained within the document, within a corpus constraint, or within the context of the users; the former is more specific, while the latter carries the greater potential ambiguity. For example, the keyword "wicket" typed into a Google search could refer to cricket, the Apache software, or the Star Wars Ewok character (i.e. an entity). The sentence "The wicket is guarded by the batsman" has contextual clues within the sentence to interpret it as an object. "Wicket Wystri Warrick was a male Ewok scout" should interpret "Wicket" as the given name of the person entity "Wicket Wystri Warrick". "Welcome to Apache Wicket" has contextual clues that "Wicket" is part of a product name, etc.
Lately I have been fiddling with Stanford CRF NER. They have released quite a few versions: http://nlp.stanford.edu/software/CRF-NER.shtml
The good thing is that you can train your own classifier. You should follow this link, which has guidelines on how to train your own NER: http://nlp.stanford.edu/software/crf-faq.shtml#a
Unfortunately, in my case, the named entities were not extracted effectively from the documents. Most of the entities went undetected.
Just in case you find it useful.
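If you want to try it quickly before training anything, NLTK ships a thin wrapper around the Stanford tagger (deprecated in newer NLTK releases in favour of the CoreNLP server, so check your version). A sketch, with the jar and model paths as placeholders for wherever you unpacked the download:

    # Tag a sentence with a pre-trained Stanford CRF model via NLTK's wrapper.
    # Requires Java plus the stanford-ner download; paths below are placeholders.
    from nltk.tag import StanfordNERTagger

    st = StanfordNERTagger(
        "/path/to/english.all.3class.distsim.crf.ser.gz",  # pre-trained 3-class model
        "/path/to/stanford-ner.jar",
    )
    tokens = "Rami Eid is studying at Stony Brook University in NY".split()
    print(st.tag(tokens))
    # [('Rami', 'PERSON'), ('Eid', 'PERSON'), ..., ('NY', 'LOCATION')]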

Context Specific Spelling Engine

I'm sure more than a few of you will have seen the Google Wave demonstration. I was wondering about the spell-checking technology specifically. How revolutionary is a spell checker which works by figuring out where a word appears contextually within a sentence in order to make its suggestions?
I haven't seen this technique before, but are there examples of it elsewhere?
And if so, are there code examples and literature on how it works?
My 2 cents: given that translate.google.com is a statistical machine translation engine, and given "The Unreasonable Effectiveness of Data" by A. Halevy, P. Norvig (Director of Research at Google) and F. Pereira, I would bet that this is a statistically driven spell checker.
How it could work: you collect a very large corpus of the language you want to spell check. You store this corpus as phrase tables in suitable data structures (suffix arrays, for example, if you have to count the n-gram subsets) that keep track of the count of each n-gram (and so an estimate of its probability).
For example, if your corpus consists only of:
I had bean soup last dinner.
From this entry, you will generate the following bi-grams (sets of 2 words):
I had, had bean, bean soup, soup last, last dinner
and the tri-grams (sets of 3 words):
I had bean, had bean soup, bean soup last, soup last dinner
But these will be pruned by tests of statistical relevance; for example, we can assume that the tri-gram
I had bean
will disappear from the phrase-table.
Now, spell checking only has to look in these big phrase-tables and check the "probabilities". (You need a good infrastructure to store these phrase-tables in an efficient data structure and in RAM. Google has it for translate.google.com, so why not for this? It's easier than statistical machine translation.)
Ex: you type
I had been soup
and in the phrase-table there is a
had bean soup
tri-gram with a much higher probability than what you just typed! Indeed, you only need to change one word (this is a "not so distant" tri-gram) to get a tri-gram with a much higher probability. There should be an evaluation function that handles the distance/probability trade-off. This distance could even be calculated in terms of characters: we are doing spell checking, not machine translation.
This is only my hypothetical opinion. ;)
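A toy illustration of that idea (purely hypothetical, nothing to do with Google's actual implementation): count tri-grams from a corpus, then rank one-word substitutions of the typed tri-gram by corpus count and character closeness.

    # Toy context-sensitive correction: prefer the candidate tri-gram that the
    # corpus has actually seen, weighted by character-level similarity.
    from collections import Counter
    from difflib import SequenceMatcher

    corpus = "i had bean soup last dinner . she had bean soup too .".split()
    trigrams = Counter(zip(corpus, corpus[1:], corpus[2:]))
    vocab = set(corpus)

    def suggest(typed):
        """Try swapping each word of the typed tri-gram for a similar vocab word."""
        best = (trigrams[tuple(typed)], 1.0, tuple(typed))   # keep the input as baseline
        for i, word in enumerate(typed):
            for cand in vocab:
                sim = SequenceMatcher(None, word, cand).ratio()  # character closeness
                if sim < 0.5:
                    continue
                candidate = tuple(typed[:i] + [cand] + typed[i + 1:])
                best = max(best, (trigrams[candidate], sim, candidate))
        return best

    print(suggest("had been soup".split()))   # -> (2, 0.75, ('had', 'bean', 'soup'))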
You should also watch an official video by Casey Whitelaw of the Google Wave team that describes the techniques used: http://www.youtube.com/watch?v=Sx3Fpw0XCXk
You can learn all about topics like this by diving into natural language processing. You can even go as in-depth as making a statistical guess as to which word will come next after a string of given words.
If you are interested in such a topic, I highly suggest using NLTK (the Natural Language Toolkit), written entirely in Python. It is a very extensive piece of work, with many tools and pretty good documentation.
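For instance, the next-word guess mentioned above is only a few lines in NLTK (assuming the Brown corpus data has been fetched with nltk.download('brown')):

    # Predict likely next words from bigram counts over the Brown corpus.
    import nltk
    from nltk.corpus import brown

    cfd = nltk.ConditionalFreqDist(nltk.bigrams(w.lower() for w in brown.words()))
    print(cfd["living"].most_common(3))   # e.g. [('room', ...), ('in', ...), ...]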
There are a lot of papers on this subject. Here are some good resources:
This doesn't use context sensitivity, but it's a good base to build from
http://norvig.com/spell-correct.html
This is probably a good and easy to understand view of a more powerful spell checker
http://acl.ldc.upenn.edu/acl2004/emnlp/pdf/Cucerzan.pdf
From here you can dive deep into the particulars. I'd recommend using Google Scholar to look up the references in the paper above and to search for 'spelling correction'.