Semantic web: degree of relationship between two entities in a single ontology

Suppose that we have concepts in an ontology such as grand_mother, mother, and son.
The grand_mother concept has entities like Mrs. Brown, Mrs. Linda, ...
The mother concept has entities like Mrs. Jennifer, Mrs. King, ...
The son concept has Mike, Bill, ...
The degree between entities belonging to two neighbouring concepts (like mother and son, or grand_mother and mother) is 0.6; the degree between entities belonging to two distant concepts (like son and grand_mother) is 0.6 * 0.6.
A user can type keywords into the search box, and I must measure the degree between them.
For example, the 1st keyword is Mrs. Brown and the 2nd is Mike.
I have no idea how to do this (I use reasoners, but I don't know how to measure the degree). Are there technologies for this?
Can anyone help me?
Thanks in advance.

I think the keyword you're looking for is weighted graphs, together with a shortest-path algorithm, e.g. http://en.wikipedia.org/wiki/Dijkstra's_algorithm
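To make that suggestion concrete, here is a minimal sketch of the idea under the question's own assumptions: concepts form a graph, every hop between neighbouring concepts multiplies the degree by 0.6, and the degree between two entities is 0.6 raised to the number of hops between their concepts. All names and mappings below are illustrative, not a real ontology API.

```python
from collections import deque

# Toy concept graph (undirected, unit-weight edges), as in the question.
CONCEPT_EDGES = {
    "grand_mother": ["mother"],
    "mother": ["grand_mother", "son"],
    "son": ["mother"],
}

# Hypothetical mapping from entities (keywords) to their concepts.
ENTITY_CONCEPT = {
    "Mrs. Brown": "grand_mother",
    "Mrs. Linda": "grand_mother",
    "Mrs. Jennifer": "mother",
    "Mike": "son",
    "Bill": "son",
}

def hops(start, goal):
    """Shortest number of edges between two concepts (BFS, since all edges
    have equal weight; Dijkstra would be needed for varying edge weights)."""
    if start == goal:
        return 0
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        for nxt in CONCEPT_EDGES.get(node, []):
            if nxt == goal:
                return dist + 1
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # concepts are not connected

def degree(entity_a, entity_b, per_hop=0.6):
    """Degree of relationship = per_hop ** (hops between the two concepts)."""
    h = hops(ENTITY_CONCEPT[entity_a], ENTITY_CONCEPT[entity_b])
    return None if h is None else per_hop ** h

d1 = degree("Mrs. Brown", "Mike")    # two hops: 0.6 ** 2
d2 = degree("Mrs. Jennifer", "Mike") # one hop: 0.6
```

In a real system you would populate `CONCEPT_EDGES` and `ENTITY_CONCEPT` from the ontology (e.g. via a SPARQL query or the OWL API) rather than hard-coding them.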

Related

How to find the number of [instanceOf OR subclassOf] hops between any two Wikidata entities?

For example, if I want to find the number of hops between Politician (Q82955) and President of the US (Q11696), the answer should be 2:
POTUS (Q11696) – subclassOf -> HeadOfGovernment (Q2285706) – subclassOf -> Politician (Q82955)
How can I write a query for this? I know I need to use COUNT, but I don't know how.
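Against the live endpoint this would normally be a SPARQL query (a common pattern is to follow a `wdt:P279*` property path and COUNT the intermediate nodes), but the hop-counting logic itself is just a breadth-first search over subclassOf links. The sketch below shows that logic over the two hard-coded triples from the example; the edge data is illustrative, not fetched from Wikidata.

```python
from collections import deque

# Hypothetical subclassOf (P279) edges, taken from the example in the question.
SUBCLASS_OF = {
    "Q11696": ["Q2285706"],   # President of the US -> head of government
    "Q2285706": ["Q82955"],   # head of government -> politician
}

def count_hops(start, goal):
    """BFS upward along subclassOf links; returns the number of hops,
    or None if no path exists."""
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for parent in SUBCLASS_OF.get(node, []):
            if parent not in seen:
                seen.add(parent)
                queue.append((parent, dist + 1))
    return None

print(count_hops("Q11696", "Q82955"))  # 2
```

To run this for real, you would replace the `SUBCLASS_OF` dictionary with lookups against the Wikidata Query Service.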

What is Query Relaxation of RDF Query

I have done some research on knowledge bases and I came across a new term. I've looked it up everywhere but still don't get the idea: what do we mean by query relaxation?
Query relaxation is one of the cooperative techniques that provides users with alternative answers instead of an empty result,
i.e., it returns data satisfying the query conditions with varying degrees of exactness, and can also rank the results of a query depending on how "closely" they satisfy the query conditions.
Tony Vincent's answer is right but I'll give an example to illustrate.
Suppose you are looking for blonde girls with dark eyes who are 170 cm tall, are PhD students working in Tokyo and whose parents are both Full Professors in the USA.
Probably, you'd get nothing.
But the system may relax the query and tell you about a few female students in Tokyo who have parents in academia.
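A minimal sketch of the idea, over in-memory records rather than RDF: constraints are listed from most to least important, and the least important one is dropped repeatedly until the query returns something. The records and attribute names are made up for illustration; real RDF query relaxation rewrites SPARQL triple patterns, often guided by an ontology, but the control flow is the same.

```python
# Illustrative records; in the example from the answer these would be people.
PEOPLE = [
    {"hair": "brown", "eyes": "dark", "city": "Tokyo", "role": "PhD student"},
    {"hair": "blonde", "eyes": "blue", "city": "Tokyo", "role": "PhD student"},
]

def query(records, constraints):
    """Exact-match conjunctive query: keep records satisfying every constraint."""
    return [r for r in records if all(r.get(k) == v for k, v in constraints)]

def relaxed_query(records, constraints):
    """Drop the last (least important) constraint until something matches.
    Returns the hits and the constraints that were actually kept."""
    remaining = list(constraints)
    while remaining:
        hits = query(records, remaining)
        if hits:
            return hits, remaining
        remaining.pop()  # relax: discard one condition and retry
    return records, []

hits, kept = relaxed_query(
    PEOPLE,
    [("city", "Tokyo"), ("role", "PhD student"),
     ("hair", "blonde"), ("eyes", "dark")],
)
# The strict query is empty; after dropping ("eyes", "dark") one record matches.
```

A ranking step, as mentioned above, could then order results by how many of the original constraints each one satisfies.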

How to implement weighted data property in protege 4

I am implementing an ontology to check for semantic similarity between individuals of different classes of animals. Say a cow is exactly similar to a cow and nearly similar to a buffalo/bull etc., but a cow is not similar to a dog. I have different data properties with which I want to associate a weight or degree. E.g. for cow, the isDomestic property has weight 90 (i.e. in 90% of cases a cow is kept as a domestic animal), but isSecurity has weight 0, i.e. in no case is a cow kept for security. For dog, isDomestic is again 90, but dog isSecurity has a value of, say, 70 or more. That is how cow and dog would turn out not to be similar.
I have come across RDF reification, but I need to implement it in Protégé and then use the OWL API to reason and check the semantic similarity of individuals.
Any hint or guidance will be appreciated. Thanks in advance.
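The reasoner itself won't compute a numeric similarity; typically you read the weight values out of the ontology (e.g. with the OWL API, storing the weights as annotation properties or via reification) and compute the similarity in ordinary code. Here is a sketch of that last step in Python, using cosine similarity over the weight vectors from the question; the weights for bull are invented for illustration.

```python
import math

# Hypothetical per-property weights (0-100), as described in the question.
WEIGHTS = {
    "cow":  {"isDomestic": 90, "isSecurity": 0},
    "dog":  {"isDomestic": 90, "isSecurity": 70},
    "bull": {"isDomestic": 85, "isSecurity": 5},   # assumed values
}

def cosine_similarity(a, b):
    """Cosine similarity between two animals' property-weight vectors:
    1.0 for identical direction, closer to 0 the more they differ."""
    props = sorted(set(WEIGHTS[a]) | set(WEIGHTS[b]))
    va = [WEIGHTS[a].get(p, 0) for p in props]
    vb = [WEIGHTS[b].get(p, 0) for p in props]
    dot = sum(x * y for x, y in zip(va, vb))
    na = math.sqrt(sum(x * x for x in va))
    nb = math.sqrt(sum(x * x for x in vb))
    return dot / (na * nb) if na and nb else 0.0
```

With these numbers, cow vs. bull scores higher than cow vs. dog, matching the intuition in the question; other measures (weighted overlap, Euclidean distance) would work the same way once the weights are extracted from the ontology.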

How to *really* write UML cardinalities?

I would like to know once and for all how to write UML cardinalities, since I have very often had to debate them (so proofs and sources are very welcome :)
If I want to express that a Mother can have several Children but a Child has one and only one Mother, should I write:
Mother * ---------- 1 Child
Or
Mother 1 ---------- * Child
?
The second one:
Mother 1 ----------------- 1..* Child
You will find many examples in the UML specification, in all the figures related to the Abstract Syntax.
Of course Red Beard is right, the correct answer is the second one.
As for a tip for remembering this, I advise thinking in English: you say "A child has ONE mother", and in this sentence, as in UML, ONE is written next to Mother. Fairly simple.
Many people have this question when they start using UML, especially when they come from another notation where the names are always read clockwise, regardless of which end of the line they're on. That's really confusing!
Red Beard is correct. Although the UML spec does not explicitly state where association-end information (i.e., name and multiplicity) is written, it implies it in several places. For example, Figures 7.11 (showing attributes) and 7.12 (showing unidirectional associations with association ends next to the arrowheads) are equivalent property notations; thus, the multiplicity does indeed go next to the property's type.
One way I learned to remember which end has which multiplicity is to imagine a unidirectional graph of instances and write the number next to the arrowheads that point at the target.
BTW, you should use descriptive association end names. These often turn into attribute names in Java, element names in XSD, and so on. For example, in Java, the Mother class might have a "children" attribute of type "Set<Child>". If you don't name them, you'll often get undesirable default names.
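To make the association-end point concrete, here is a minimal sketch (in Python, with illustrative names) of the Mother 1 ----- 1..* Child association mapped to code, using "children" and "mother" as the association-end names:

```python
class Mother:
    def __init__(self, name):
        self.name = name
        self.children = []  # the 1..* end; the lower bound of 1 would need
                            # to be enforced by application logic

class Child:
    def __init__(self, name, mother):
        self.name = name
        self.mother = mother          # the 1 end: exactly one Mother per Child
        mother.children.append(self)  # keep both ends of the association in sync

alice = Mother("Alice")
bob = Child("Bob", alice)
carol = Child("Carol", alice)
```

Reading the multiplicities as "a Child navigates to 1 Mother, a Mother navigates to 1..* Children" matches where each number sits on the diagram: next to the type being navigated to.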

How did WordNet come into being

I wonder how the hierarchical relationships between the words in WordNet were derived.
Was that done manually, or via computational techniques?
If via computational techniques, what are they?
From the FAQ:
q.1.2 Where do you get the definitions for WordNet? (short answer) Our
lexicographers write them.
Where do you get the definitions for WordNet? (long answer) From the
foreword to WordNet: An Electronic Lexical Database, pp. xviii-xix:
People sometimes ask, "Where did you get your words?" We began in 1985
with the words in Kučera and Francis's Standard Corpus of Present-Day
Edited English (familiarly known as the Brown Corpus), principally
because they provided frequencies for the different parts of speech.
We were well launched into that list when Henry Kučera warned us that,
although he and Francis owned the Brown Corpus, the syntactic tagging
data had been sold to Houghton Mifflin. We therefore dropped our plan
to use their frequency counts (in 1988 Richard Beckwith developed a
polysemy index that we use instead). We also incorporated all the
adjective pairs that Charles Osgood had used to develop the semantic
differential. And since synonyms were critically important to us, we
looked words up in various thesauruses: for example, Laurence Urdang's
little "Basic Book of Synonyms and Antonyms" (1978), Urdang's revision
of Rodale's "The Synonym Finder" (1978), and Robert Chapman's 4th
edition of "Roget's International Thesaurus" (1977) -- in such works,
one word quickly leads on to others. Late in 1986 we received a list
of words compiled by Fred Chang at the Naval Personnel Research and
Development Center, which we compared with our own list; we were
dismayed to find only 15% overlap.
So Chang's list became input. And in 1993 we obtained the list of
39,143 words that Ralph Grishman and his colleagues at New York
University included in their common lexicon, COMLEX; this time we were
dismayed that WordNet contained only 74% of the COMLEX words. But that
list, too, became input. In short, a variety of sources have
contributed; we were not well disciplined in building our vocabulary.
The fact is that the English lexicon is very large, and we were lucky
that our sponsors were patient with us as we slowly crawled up the
mountain.