Protege fuzzyOWL2 plugin fuzzy query - sparql

I'm learning about fuzzy ontologies. There are several different approaches, such as f-SWRL, the Fuzzy OWL 2 plugin, and f-SPARQL, but the Fuzzy OWL 2 plugin seems to be the best-known tool for fuzzification.
Some researchers have used only SWRL, while others have combined SWRL with the Fuzzy OWL 2 plugin to implement fuzzy ontologies.
I've downloaded fuzzyWine.owl, which includes a number of classes, data/object properties, and datatypes, but there are no examples of fuzzy queries.
I do not know whether the query should be run in the fuzzy reasoner tab or via SWRL rules. I need a simple example of a fuzzy query over this ontology to understand fuzzy ontologies better.

The Plugin "OWL2 Fuzzy" was documented with examples in a published paper by its authors.
Fernando Bobillo, Umberto Straccia. Fuzzy ontology representation using OWL 2. International Journal of Approximate Reasoning archive, Volume 52 Issue 7, Pages 1073-1094, October, 2011.
Hope you success,
With best Wishes

Related

Probabilistic and/or defeasible reasoning in Protege?

I have found some promising (old) articles but the trail has run cold.
Ideally I am looking for working plugins/code, but if they are simply not available, then any concrete directions on how to build probabilistic and/or defeasible reasoning for integration into Protege would still be useful.
RaMP Defeasible Reasoning plugin for Protege. Appears to be dormant/abandoned. Perhaps project/code was lost along with http://code.google.com/p/nomor/
PR-OWL http://www.pr-owl.org/. Extends OWL to support probabilistic ontologies. Appears to be dormant/abandoned.
Defeasible RuleML looked interesting, but I cannot find any concrete implementations/plugins/code. http://ruleml.org/1.0/defeasible/defeasible.html.
Defeasible Logic RuleML-compatible Rule Language. Even if code can be found, this implementation looks experimental and does not leverage more recent standards and formats. Paper: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.103.5914&rep=rep1&type=pdf Resources: http://lpis.csd.auth.gr/systems/resources.html#drdchairruleml2010
There's Pronto (available here), which is built on top of Pellet. I cannot recall if it comes as a Protege plugin, but as it is tied to Pellet I imagine it shouldn't be too hard to implement an OWLReasoner wrapper for the two.

How is a semantic reasoner for Protégé made?

I'm new in "ontology world". I've been practicing Protegé and ontologies for 2 months and now I would like to understand (and if it is possible to create) a reasoner. But I don't know what is its structure, the language used by it and so on.
Can you please me provide me a piece of information and something to read? Thank you.
The task of a reasoner is to produce inferences. Standard reasoning tasks are consistency checking, realization, instance checking, and satisfiability. You can find all of these defined in a number of books and articles about description logic.
Protégé uses the OWL API to interface with reasoners, so reasoners are implementations of the OWLReasoner interface. Not all of them are written in Java (e.g., FaCT++ is written in C++).
They are quite complex systems, so describing how to implement one takes chapters - too big for an answer here.
I'd recommend exploring the source code of a few of them. Open Source ones, off the top of my head: HermiT, FaCT++, Pellet, JFact, ELK.
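To get a feel for the client side of that interface before diving into reasoner internals, here is a minimal sketch (HermiT is chosen arbitrarily; the file name and IRI are placeholders) that exercises the standard tasks mentioned above through OWLReasoner:

    import java.io.File;
    import org.semanticweb.HermiT.ReasonerFactory;
    import org.semanticweb.owlapi.apibinding.OWLManager;
    import org.semanticweb.owlapi.model.*;
    import org.semanticweb.owlapi.reasoner.OWLReasoner;

    public class ReasoningTasksDemo {
        public static void main(String[] args) throws Exception {
            OWLOntologyManager man = OWLManager.createOWLOntologyManager();
            OWLOntology ont = man.loadOntologyFromOntologyDocument(new File("example.owl"));
            OWLDataFactory df = man.getOWLDataFactory();

            // Any OWLReasonerFactory works here; HermiT is one open source choice
            OWLReasoner reasoner = new ReasonerFactory().createReasoner(ont);
            OWLClass c = df.getOWLClass(IRI.create("http://example.org/onto#C"));

            System.out.println("Consistent: " + reasoner.isConsistent());       // consistency check
            System.out.println("C satisfiable: " + reasoner.isSatisfiable(c));  // satisfiability
            for (OWLNamedIndividual i : reasoner.getInstances(c, false).getFlattened()) {
                System.out.println("Instance of C: " + i);                      // instance retrieval
            }
        }
    }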

What can be done using OWL reasoning?

I'm working on an OWL ontology and need help with some specific issues.
I only need the ontology schema (TBox), and I'm lost as to which operations can be performed using reasoning, SPARQL, and the OWL API.
More specifically, I need the following:
1- check cardinalities between classes and properties.
2- find subsumption relationships for a specific class.
3- check whether specific facts hold (e.g., whether two classes are disjoint)
4- find the paths (a class-property series) between a set of classes.
What is each of reasoning, SPARQL, and the OWL API used for, and which one is suitable for my situation?
Actually, I don't know how to start or which technique to use.
In addition, would you please refer me to some references?
Thanks.
Number 1 is not clear: do you want to know which cardinality axioms are asserted? This can be done without a reasoner. Number 4 is a bit vague as well; can you provide an example?
Numbers 2 and 3 require a reasoner to be performed accurately.
A reasoner is a program that makes implicit information explicit: subsumption, realisation, and consistency checks are all operations for which a reasoner is needed. Among your tasks, subsumption clearly needs one.
The OWL API is a Java API for manipulating OWL ontologies; in your case, it could be useful for writing the connecting code that drives a reasoner for your tasks. Compatible reasoners include Pellet, HermiT, FaCT++, and a few more.
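As a rough illustration (the file name "schema.owl" and the example IRIs are invented; HermiT stands in for whichever reasoner you pick), tasks 2 and 3 might look like this with the OWL API:

    import java.io.File;
    import org.semanticweb.HermiT.ReasonerFactory;
    import org.semanticweb.owlapi.apibinding.OWLManager;
    import org.semanticweb.owlapi.model.*;
    import org.semanticweb.owlapi.reasoner.OWLReasoner;

    public class TBoxTasksDemo {
        public static void main(String[] args) throws Exception {
            OWLOntologyManager man = OWLManager.createOWLOntologyManager();
            OWLOntology ont = man.loadOntologyFromOntologyDocument(new File("schema.owl"));
            OWLDataFactory df = man.getOWLDataFactory();
            OWLReasoner reasoner = new ReasonerFactory().createReasoner(ont);

            OWLClass a = df.getOWLClass(IRI.create("http://example.org/onto#A"));
            OWLClass b = df.getOWLClass(IRI.create("http://example.org/onto#B"));

            // Task 2: subsumption - all inferred subclasses of A (direct and indirect)
            for (OWLClass sub : reasoner.getSubClasses(a, false).getFlattened()) {
                System.out.println(sub + " is subsumed by A");
            }

            // Task 3: disjointness - A and B are disjoint iff (A and B) is unsatisfiable
            OWLClassExpression aAndB = df.getOWLObjectIntersectionOf(a, b);
            System.out.println("A, B disjoint: " + !reasoner.isSatisfiable(aAndB));
        }
    }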
SPARQL is an RDF query language; the OWL API does not support it. You could use it for your tasks, but they look more OWL-oriented than RDF-oriented to me. Jena is a Java library supporting RDF, OWL, and SPARQL, and it interfaces with reasoners such as Pellet. Depending on how you decide to solve the above tasks, it might fit your requirements better than the OWL API.
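For comparison, a minimal Jena/SPARQL sketch (same invented file name as above) that lists subclass links; note that plain SPARQL over the raw model only sees asserted triples unless you wrap the model with an inference model:

    import org.apache.jena.query.*;
    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;

    public class JenaSparqlDemo {
        public static void main(String[] args) {
            Model model = ModelFactory.createDefaultModel();
            model.read("schema.owl");  // reads RDF/XML by default

            // For inferred triples, wrap with e.g. ModelFactory.createRDFSModel(model)
            String q = "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> "
                     + "SELECT ?sub ?super WHERE { ?sub rdfs:subClassOf ?super }";

            try (QueryExecution qe = QueryExecutionFactory.create(q, model)) {
                ResultSet rs = qe.execSelect();
                while (rs.hasNext()) {
                    QuerySolution s = rs.next();
                    System.out.println(s.get("sub") + " rdfs:subClassOf " + s.get("super"));
                }
            }
        }
    }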
Jena tutorials:
https://jena.apache.org/tutorials/index.html
OWLAPI documentation:
https://github.com/owlcs/owlapi/wiki/Documentation

Entity Extraction/Recognition with free tools while feeding Lucene Index

I'm currently investigating options for extracting person names, locations, tech words, and categories from text (a lot of articles from the web), which will then be fed into a Lucene/ElasticSearch index. The additional information is then added as metadata and should increase the precision of the search.
E.g., when someone queries 'wicket', he should be able to decide whether he means the cricket sport or the Apache project. I have tried to implement this on my own with only minor success so far. Now I have found a lot of tools, but I'm not sure whether they are suited to this task, which of them integrate well with Lucene, and whether the precision of entity extraction is high enough.
Dbpedia Spotlight, the demo looks very promising
OpenNLP requires training. Which training data to use?
OpenNLP tools
Stanbol
NLTK
balie
UIMA
GATE -> example code
Apache Mahout
Stanford CRF-NER
maui-indexer
Mallet
Illinois Named Entity Tagger (not open source but free)
wikipedianer data
My questions:
Does anyone have experience with some of the tools listed above and their precision/recall? Or whether training data is required and available?
Are there articles or tutorials where I can get started with entity extraction (NER) for each and every tool?
How can they be integrated with Lucene?
Here are some questions related to that subject:
Does an algorithm exist to help detect the "primary topic" of an English sentence?
Named Entity Recognition Libraries for Java
Named entity recognition with Java
The problem you are facing in the 'wicket' example is called entity disambiguation, not entity extraction/recognition (NER). NER can be useful, but only when the categories are specific enough. Most NER systems don't have enough granularity to distinguish between a sport and a software project (both types would fall outside the typically recognized types: person, org, location).
For disambiguation, you need a knowledge base against which entities are disambiguated. DBpedia is a typical choice due to its broad coverage. See my answer to How to use DBPedia to extract Tags/Keywords from content?, where I provide more explanation and mention several tools for disambiguation, including:
Zemanta
Maui-indexer
Dbpedia Spotlight
Extractiv (my company)
These tools often use a language-independent API like REST, and I do not know whether they directly provide Lucene support, but I hope my answer has been helpful for the problem you are trying to solve.
You can use OpenNLP to extract names of people, places, and organisations without training. You just use pre-existing models, which can be downloaded from here: http://opennlp.sourceforge.net/models-1.5/
For an example of how to use one of these models, see: http://opennlp.apache.org/documentation/1.5.3/manual/opennlp.html#tools.namefind
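For instance, a sketch along these lines (assuming opennlp-tools on the classpath and the en-ner-person.bin model from the page above; the tokens are hard-coded here for brevity):

    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.util.Arrays;
    import opennlp.tools.namefind.NameFinderME;
    import opennlp.tools.namefind.TokenNameFinderModel;
    import opennlp.tools.util.Span;

    public class OpenNlpNerDemo {
        public static void main(String[] args) throws Exception {
            try (InputStream in = new FileInputStream("en-ner-person.bin")) {
                NameFinderME finder = new NameFinderME(new TokenNameFinderModel(in));

                // In a real pipeline the tokens come from a sentence detector + tokenizer
                String[] tokens = {"Pierre", "Vinken", "is", "61", "years", "old", "."};
                for (Span span : finder.find(tokens)) {
                    String[] name = Arrays.copyOfRange(tokens, span.getStart(), span.getEnd());
                    System.out.println(span.getType() + ": " + String.join(" ", name));
                }
                finder.clearAdaptiveData();  // reset document-level context between documents
            }
        }
    }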
Rosoka is a commercial product that provides a computation of "Salience", which measures the importance of a term or entity to the document. Salience is based on linguistic usage, not frequency. Using the salience values, you can determine the primary topic of the document as a whole.
The output is in your choice of XML or JSON, which makes it very easy to use with Lucene.
It is written in Java.
There is an Amazon Cloud version available at https://aws.amazon.com/marketplace/pp/B00E6FGJZ0. The cost to try it out is $0.99/hour. The Rosoka Cloud version does not have all of the Java API features available to it that the full Rosoka does.
Yes, both versions perform entity and term disambiguation based on linguistic usage.
Disambiguation, whether done by a human or by software, requires enough contextual information to determine the difference. The context may be contained within the document, within a corpus constraint, or within the context of the users, the former being more specific and the latter having the greater potential ambiguity. For example, typing the keyword "wicket" into a Google search could refer to cricket, the Apache software, or the Star Wars Ewok character (i.e., an entity). The sentence "The wicket is guarded by the batsman" has contextual clues within the sentence to interpret it as an object. "Wicket Wystri Warrick was a male Ewok scout" should interpret "Wicket" as the given name of the person entity "Wicket Wystri Warrick". "Welcome to Apache Wicket" has the contextual clues that "Wicket" is part of a product name, etc.
Lately I have been fiddling with Stanford CRF NER. They have released quite a few versions: http://nlp.stanford.edu/software/CRF-NER.shtml
The good thing is that you can train your own classifier. You should follow this link, which has guidelines on how to train your own NER: http://nlp.stanford.edu/software/crf-faq.shtml#a
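For reference, a minimal usage sketch (assuming the stanford-ner jar and the bundled english.all.3class model; the exact model file name may differ between releases):

    import edu.stanford.nlp.ie.crf.CRFClassifier;
    import edu.stanford.nlp.ling.CoreLabel;

    public class StanfordNerDemo {
        public static void main(String[] args) throws Exception {
            CRFClassifier<CoreLabel> classifier =
                    CRFClassifier.getClassifier("english.all.3class.distsim.crf.ser.gz");
            String text = "Jim bought 300 shares of Acme Corp. in London.";
            // Wraps recognized entities in inline XML tags, e.g. <PERSON>Jim</PERSON>
            System.out.println(classifier.classifyWithInlineXML(text));
        }
    }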
Unfortunately, in my case, the named entities are not efficiently extracted from the document. Most of the entities go undetected.
Just in case you find it useful.

Developing a Semantic Web Application

Although I have a little experience in developing dynamic websites using ASP technologies, I am new to semantic web programming, and I intend to implement a website based on semantic web technology. I would like to develop a search engine where a web user can query for keywords from a backend RDF triple store. I want to implement the website using Java and JSP. I have the following questions:
I am currently studying the Jena framework and SPARQL to start with, but I am not sure what other technologies I need to study in order to implement the website.
What is the difference between RDF and OWL? I have gone through a lot of web resources, but I am still confused. As per my understanding, RDF and OWL both define relationships between concepts, but OWL is richer in terms of defining relations.
What is meant by the different OWL vocabularies like FOAF, SIOC, etc.? Why do we need these vocabularies?
What exactly is the purpose of OpenLink Virtuoso (http://ods.openlinksw.com/dataspace/dav/wiki/Main/VirtJenaProvider)?
Any help would be highly appreciated.
Thanks!
I would definitely like to be kept up to date on your progress. I'm not experienced with Java or JSP; I wonder if this could be done in PHP? I know that some work has been done in Python on this kind of thing.
There are some extensions to Drupal that work with these semantic web technologies, and Semantic MediaWiki is good too.
Check out this and the related links at the bottom. The difference between microformats and vocabularies can be difficult to understand, but I think there is a difference, say, between a vocabulary like FOAF and a microformat like hCard, hCalendar, or hResume. Oh, the link:
http://en.wikipedia.org/wiki/FOAF_(software)
Anyway these related terms are included.
Thanks,
Bruce
http://futurewavedesigns.com
Re: your first question - why do you want to use RDF to implement a keyword search? Keyword search isn't semantic, and there are many established frameworks and APIs for keyword search, such as Lucene.
Re: your second question, comparing RDF and OWL is comparing apples and oranges. RDF is basically for declaring data, but OWL is a layer on top of RDF that is for declaring ontologies (schemas). A more meaningful comparison would be between RDFS (RDF Schema) and OWL, which both address the ontology layer.
Example:
In RDF you might state that John Smith is a Person who hasAge "42" and is marriedTo Jill Smith.
In RDFS or OWL you would declare that Person is a class, hasAge is a property (with domain of Person and range of xsd:integer) and marriedTo is a property (with domain and range of Person).
In OWL you can also declare that marriedTo is a symmetric property (if A is marriedTo B, then B must be marriedTo A). RDF isn't this powerful: you can't make this particular statement, so you can't draw inferences about symmetric properties, etc.
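To make the example concrete, here is a small sketch using Jena (the namespace and resource names are invented for illustration) that asserts the RDF-level facts and the OWL-level symmetry axiom:

    import org.apache.jena.rdf.model.*;
    import org.apache.jena.vocabulary.OWL;
    import org.apache.jena.vocabulary.RDF;

    public class RdfVsOwlDemo {
        public static void main(String[] args) {
            String ns = "http://example.org/family#";
            Model m = ModelFactory.createDefaultModel();

            Resource person = m.createResource(ns + "Person", OWL.Class);
            Property hasAge = m.createProperty(ns, "hasAge");
            Property marriedTo = m.createProperty(ns, "marriedTo");

            // RDF level: plain facts about individuals
            Resource jill = m.createResource(ns + "JillSmith");
            m.createResource(ns + "JohnSmith")
             .addProperty(RDF.type, person)
             .addLiteral(hasAge, 42)
             .addProperty(marriedTo, jill);

            // OWL level: a schema statement RDF alone cannot express -
            // marriedTo is symmetric, so a reasoner can infer Jill marriedTo John
            m.add(marriedTo, RDF.type, OWL.SymmetricProperty);

            m.write(System.out, "TURTLE");
        }
    }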