I have a knowledge base which includes multiple graphs, and I want a way to formally define these graphs in my metadata layer, but I can't seem to find a standard way to do it. More specifically: if I want to say A is a class, I can use rdfs:Class. But what if I want a collection that contains the names of all of my graphs, and I want to say that these names are named graphs? All I could think of was to define a graph as an rdf:Bag of rdf:Statement, but I don't think that is a good approach. Is there an existing vocabulary that I can use for this?
It seems to me the DCAT vocabulary might be worth a look.
The Data Catalog Vocabulary (DCAT) is a W3C RDF vocabulary designed to facilitate interoperability between data catalogs published on the Web.
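As a rough sketch of how this could look in Turtle: the catalog, dataset and graph IRIs below are made up for illustration, and the sd: terms come from the SPARQL 1.1 Service Description vocabulary, which is another standard way of saying "this is a named graph" (attaching sd:namedGraph to a dcat:Dataset is a modeling choice, not something either spec mandates):

    @prefix dcat:    <http://www.w3.org/ns/dcat#> .
    @prefix dcterms: <http://purl.org/dc/terms/> .
    @prefix sd:      <http://www.w3.org/ns/sparql-service-description#> .

    # Hypothetical catalog listing the graphs in the knowledge base
    <http://example.org/catalog> a dcat:Catalog ;
        dcterms:title "My knowledge base" ;
        dcat:dataset <http://example.org/dataset/people> ,
                     <http://example.org/dataset/places> .

    # Each dataset is declared to live in a specific named graph
    <http://example.org/dataset/people> a dcat:Dataset ;
        dcterms:title "People graph" ;
        sd:namedGraph [ a sd:NamedGraph ; sd:name <http://example.org/graph/people> ] .

    <http://example.org/dataset/places> a dcat:Dataset ;
        dcterms:title "Places graph" ;
        sd:namedGraph [ a sd:NamedGraph ; sd:name <http://example.org/graph/places> ] .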
I am learning about this subject matter as well. Apparently an Australian regulatory body for georesources has bundled their graphs (concerning geology, the science, and mining, the engineering discipline) with DCAT: CGIVocPrez. You might look there for an application example, although I cannot tell how "good" it actually is.
I want to develop a smart e-learning system which retrieves suitable learning objects for the learner according to his/her learning style.
Now I need two ontologies: the first for the learner model and the second for the learning object.
Please, I want a reusable ontology about a course, like 'java.owl', 'c#.owl', or 'any course.owl'.
I tried to search a lot, but I cannot find what I need.
For example, when I type these words ('java.owl', 'c#.owl', or 'any course.owl') into a search engine, I cannot find any useful results about course ontologies.
Can anyone help me, please? Thanks in advance.
I don't know where to find ontologies specific to programming languages, but there are a few ontologies about learning objects. One example would be the ontologies used at the BBC for describing school curricula, available here. I believe the license allows you to reuse them.
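If it helps, here is a minimal, entirely hypothetical sketch of how you could start a course ontology of your own while reusing an existing learning-object vocabulary via owl:imports; every IRI below is a placeholder, not a real ontology:

    @prefix owl:  <http://www.w3.org/2002/07/owl#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix :     <http://example.org/ontology/java-course#> .

    <http://example.org/ontology/java-course> a owl:Ontology ;
        # Placeholder: point this at whichever learning-object ontology you decide to reuse
        owl:imports <http://example.org/ontology/learning-object> .

    :Course a owl:Class ;
        rdfs:label "Course" .

    :JavaCourse a owl:Class ;
        rdfs:subClassOf :Course ;
        rdfs:label "Java course" .

    :hasLearningObject a owl:ObjectProperty ;
        rdfs:domain :Course ;
        rdfs:label "has learning object" .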
How do we know which vocabulary/namespace to use to describe data with RDFa?
I have seen a lot of examples that use xmlns:dcterms="http://purl.org/dc/terms/" or xmlns:sioc="http://rdfs.org/sioc/ns#", and then there is this video that uses the FOAF vocabulary.
This is all pretty confusing and I am not sure what these vocabularies mean or what is best to use for the data I am describing. Is there some trick I am missing?
There are many vocabularies. And you could create your own, too, of course (but you probably shouldn't before you've checked possible alternatives).
You’d have to look for vocabularies for your specific needs, for example
by browsing and searching on http://lov.okfn.org/dataset/lov/ (they collect and index open vocabularies),
on W3C’s RDFa Core Initial Context (it lists vocabularies that have pre-defined prefixes for use with RDFa), or
by browsing through http://prefix.cc/ (it’s a lookup for typically used namespaces, but you might get an overview by that).
After some time you get to know the big/broad ones: Schema.org, Dublin Core, FOAF, RSS, SKOS, SIOC, vCard, DOAP, Open Graph, Ontology for Media Resources, GoodRelations, DBpedia Ontology, ….
The simplest thing is to check if schema.org covers your needs. Schema.org is backed by Google and the other major search engines and generally pretty awesome.
If it doesn't suit your needs, then enter a few of the terms you need into a vocabulary search engine. My recommendation is LOV.
Another option is to just ask the community about the best vocabularies for the specific domain you need to represent. A good place is answers.semanticweb.com, which is like StackOverflow but with more RDF experts hanging out.
Things have changed quite a bit since that video was posted. First, like Richard said, you should check whether schema.org fits your needs. Personally, when I need to describe something that's not covered by schema.org, I check LOV as well. If, and only if, I can't find anything in LOV, I will then consider creating a new type or property. A quick way to do this is to use http://open.vocab.org/
Newer versions of RDFa have been published since that video was released: RDFa 1.1 and RDFa Lite. If you want to use schema.org only, I'd recommend checking http://www.w3.org/TR/rdfa-lite/
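For what it's worth, a minimal RDFa Lite snippet using the schema.org vocabulary might look roughly like this (the type and property names are taken from schema.org; the person and organisation are made up):

    <div vocab="http://schema.org/" typeof="Person">
      <span property="name">Alice Example</span> works as a
      <span property="jobTitle">software engineer</span> at
      <span property="worksFor" typeof="Organization">
        <span property="name">Example Corp</span>
      </span>.
    </div>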
Vocabularies are usually domain specific. The xmlns approach is deprecated; the RDFa 1.1 initial context at http://www.w3.org/profile/rdfa-1.1 lists the vocabularies that are available with pre-defined prefixes.

Sometimes vocabularies overlap in the context of your data. Just as a math problem can be solved by an algebraic, a geometric, or some other technique, mixing vocabularies is fine. Equivalent terms can be found using http://sameas.org/. To cater to your consumers' preferences among vocabularies, skos:closeMatch and skos:exactMatch may be used, e.g. "gr:Brand skos:closeMatch owl:Thing", with any terms you please.

The prefix attribute can be used for vocabularies beyond those covered by the initial context, like: prefix="fb: http://ogp.me/ns/fb# vocab2: path2 ..." (an example follows below).

For cross-cutting concerns across different domain vocabularies, such as customizing presentation in search results, microdata following the schema.org guidelines should be beneficial. However, as that has nothing to do with specialization in any particular domain, prefixes are unavailable in that syntax. RDFa vocabularies have been helpful in specific domain contexts where the content appeals to a participative audience, while microdata targets those who have lost their way. For tasks that are too simple to merit a full-fledged vocabulary, but that still have semantic implications, try http://microformats.org/

Interchanging profile URIs for vocabularies among the three syntaxes is valid, but of little use, owing to the lack of manpower to implement alternative support for the vocabularies at Web scale. How and why the schema.org vocabulary merited a separate microdata syntax of its own is discussed by Google employee Ian Hickson (a.k.a. Hixie, the editor of the WHATWG HTML5 draft) at http://logbot.glob.com.au/?c=freenode%23whatwg&s=28+Nov+2012&e=28+Nov+2012#c747855 or http://krijnhoetmer.nl/irc-logs/whatwg/20121128#l-1122. If Google had simply implemented a parser for the one syntax whose working group included its own employee, RDFa Lite within RDFa would have been like Core Java within Java, and there would have been no need for a separate, mocking rip-off named microdata, but alas, ours is an imperfect world!
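To illustrate the prefix attribute mentioned above, here is a rough sketch that declares two extra prefixes (Open Graph and Dublin Core terms) and mixes them in one block; the literal values are invented:

    <div prefix="og: http://ogp.me/ns# dcterms: http://purl.org/dc/terms/">
      <span property="og:title">An example page title</span>
      by <span property="dcterms:creator">Jane Doe</span>
    </div>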
Although I have an EE background, I didn't get the chance to attend Natural Language Processing classes.
I would like to build a sentiment analysis tool for the Turkish language. I think it is better to create a Turkish WordNet database rather than translating the text to English and analyzing the buggy translated text with the available tools. (Is it?)
So what do you recommend I do? Should I start by taking NLP classes on an open courseware website? I really don't know where to start. Could you help me and maybe provide a step-by-step guide? I know this is an academic project, but I am interested in building skills in that area as a hobby.
Thanks in advance.
Here is the process I have used before (making Japanese, Chinese, German and Arabic semantic networks):
Gather at least two English/Turkish dictionaries. They must be independent, not derived from each other. You can use Wikipedia to auto-generate one of your dictionaries. If you need to publish your network, then you may need open source dictionaries, or license fees, or a lawyer.
Use those dictionaries to translate the English WordNet, producing a confidence rating for each synset (a rough sketch of this step follows after the list).
Keep the synsets with strong confidence, and manually approve or fix those with medium or low confidence.
Finish it off manually.
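As a very rough illustration of the translation step above (this is not code from the paper, and the class and method names are invented), the confidence for one synset could simply be the fraction of candidate Turkish translations on which the two independent dictionaries agree:

    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    public class SynsetTranslator {

        // dict1 and dict2 map an English word to its candidate Turkish translations.
        // They are assumed to be loaded elsewhere and must come from independent sources.
        private final Map<String, Set<String>> dict1;
        private final Map<String, Set<String>> dict2;

        public SynsetTranslator(Map<String, Set<String>> dict1, Map<String, Set<String>> dict2) {
            this.dict1 = dict1;
            this.dict2 = dict2;
        }

        /**
         * Returns a confidence in [0,1]: the fraction of Turkish candidates for this
         * synset's English words on which both dictionaries agree.
         */
        public double confidence(List<String> englishSynsetWords) {
            Set<String> fromDict1 = new HashSet<>();
            Set<String> fromDict2 = new HashSet<>();
            for (String word : englishSynsetWords) {
                fromDict1.addAll(dict1.getOrDefault(word, Set.of()));
                fromDict2.addAll(dict2.getOrDefault(word, Set.of()));
            }
            Set<String> union = new HashSet<>(fromDict1);
            union.addAll(fromDict2);
            if (union.isEmpty()) {
                return 0.0; // no translation found at all
            }
            Set<String> agreed = new HashSet<>(fromDict1);
            agreed.retainAll(fromDict2);
            return (double) agreed.size() / union.size();
        }
    }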
I expanded on this in the "Automatic Translation Of WordNet" section of my 2008 paper: http://dcook.org/mlsn/about/papers/nlp2008.MLSN_A_Multilingual_Semantic_Network.pdf
(For your stated goal of a Turkish sentiment dictionary, there are other approaches that do not involve a semantic network. E.g. "Sentiment Analysis and Opinion Mining" by Bing Liu is a good round-up of the research. But a semantic network approach will, IMHO, always give better results in the long run, and has so many other uses.)
I'm currently investigating options to extract person names, locations, tech words and categories from text (a lot of articles from the web), which will then be fed into a Lucene/Elasticsearch index. The additional information is then added as metadata and should increase the precision of the search.
E.g. when someone queries 'wicket', he should be able to decide whether he means the cricket sport or the Apache project. I tried to implement this on my own with minor success so far. Now I have found a lot of tools, but I'm not sure whether they are suited for this task, which of them integrate well with Lucene, or whether the precision of the entity extraction is high enough.
DBpedia Spotlight; the demo looks very promising
OpenNLP requires training. Which training data to use?
OpenNLP tools
Stanbol
NLTK
balie
UIMA
GATE -> example code
Apache Mahout
Stanford CRF-NER
maui-indexer
Mallet
Illinois Named Entity Tagger (not open source but free)
wikipedianer data
My questions:
Does anyone have experience with some of the tools listed above and their precision/recall? Or can you say whether training data is required and available?
Are there articles or tutorials where I can get started with entity extraction (NER) for each of these tools?
How can they be integrated with Lucene?
Here are some questions related to that subject:
Does an algorithm exist to help detect the "primary topic" of an English sentence?
Named Entity Recognition Libraries for Java
Named entity recognition with Java
The problem you are facing in the 'wicket' example is called entity disambiguation, not entity extraction/recognition (NER). NER can be useful, but only when the categories are specific enough. Most NER systems don't have enough granularity to distinguish between a sport and a software project (both would fall outside the typically recognized types: person, organization, location).
For disambiguation, you need a knowledge base against which entities are disambiguated. DBpedia is a typical choice due to its broad coverage. See my answer to "How to use DBPedia to extract Tags/Keywords from content?", where I provide more explanation and mention several tools for disambiguation, including:
Zemanta
Maui-indexer
Dbpedia Spotlight
Extractiv (my company)
These tools usually expose a language-independent API such as REST, and I don't know whether they directly provide Lucene support, but I hope my answer has been beneficial for the problem you are trying to solve.
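If you go down the DBpedia Spotlight route, a minimal sketch of calling its REST interface from Java could look like the code below. The endpoint URL and the JSON field names are assumptions based on the public Spotlight demo service, so check the current documentation before relying on them:

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    public class SpotlightClient {

        public static void main(String[] args) throws Exception {
            String text = "Wicket is a component-based web framework from Apache.";

            // Assumed endpoint of the public demo service; host your own instance for real workloads.
            String url = "https://api.dbpedia-spotlight.org/en/annotate"
                    + "?text=" + URLEncoder.encode(text, StandardCharsets.UTF_8)
                    + "&confidence=0.5";

            HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                    .header("Accept", "application/json")
                    .GET()
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            // The JSON body is expected to contain a "Resources" array whose entries carry the
            // DBpedia URI ("@URI") and the matched surface form ("@surfaceForm"); parse it with
            // any JSON library and store the URIs as metadata fields next to the document text.
            System.out.println(response.body());
        }
    }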
You can use OpenNLP to extract names of people, places and organisations without training. You just use pre-existing models which can be downloaded from here: http://opennlp.sourceforge.net/models-1.5/
For an example of how to use one of these models, see: http://opennlp.apache.org/documentation/1.5.3/manual/opennlp.html#tools.namefind
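For example, applying the pre-trained person-name model from that page could look roughly like this (the model file name is the one distributed there; error handling kept minimal):

    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.util.Arrays;

    import opennlp.tools.namefind.NameFinderME;
    import opennlp.tools.namefind.TokenNameFinderModel;
    import opennlp.tools.tokenize.SimpleTokenizer;
    import opennlp.tools.util.Span;

    public class PersonFinder {

        public static void main(String[] args) throws Exception {
            // Pre-trained model downloaded from the models page linked above
            try (InputStream modelIn = new FileInputStream("en-ner-person.bin")) {
                TokenNameFinderModel model = new TokenNameFinderModel(modelIn);
                NameFinderME nameFinder = new NameFinderME(model);

                String[] tokens = SimpleTokenizer.INSTANCE.tokenize(
                        "Pierre Vinken joined the board as a nonexecutive director.");

                // Each Span holds the start/end token indices of a detected person name
                for (Span span : nameFinder.find(tokens)) {
                    System.out.println(String.join(" ",
                            Arrays.copyOfRange(tokens, span.getStart(), span.getEnd())));
                }
            }
        }
    }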
Rosoka is a commercial product that provides a computation of "Salience" which measures the importance of the term or entity to the document. Salience is based on the linguistic usage and not the frequency. Using the salience values you can determine the primary topic of the document as a whole.
The output is in your choice of XML or JSON which makes it very easy to use with Lucene.
It is written in java.
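On the Lucene side, and independently of which extractor you pick, adding extracted entities as extra metadata fields next to the article text is fairly mechanical; here is a rough sketch against a recent Lucene API (the exact constructor signatures have shifted between releases, and the field names are just examples):

    import java.nio.file.Paths;
    import java.util.List;

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.StringField;
    import org.apache.lucene.document.TextField;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.FSDirectory;

    public class EntityIndexer {

        // entities: whatever your extractor returned for this article, e.g. DBpedia URIs or labels
        public static void index(String articleText, List<String> entities) throws Exception {
            try (IndexWriter writer = new IndexWriter(
                    FSDirectory.open(Paths.get("index")),
                    new IndexWriterConfig(new StandardAnalyzer()))) {

                Document doc = new Document();
                doc.add(new TextField("body", articleText, Field.Store.YES));
                for (String entity : entities) {
                    // One keyword field per entity so queries can filter or boost on them
                    doc.add(new StringField("entity", entity, Field.Store.YES));
                }
                writer.addDocument(doc);
            }
        }
    }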
There is an Amazon Cloud version available at https://aws.amazon.com/marketplace/pp/B00E6FGJZ0. The cost to try it out is $0.99/hour. The Rosoka Cloud version does not have all of the Java API features available to it that the full Rosoka does.
Yes both versions perform entity and term disambiguation based on the linguistic usage.
Disambiguation, whether by a human or by software, requires enough contextual information to determine the difference. The context may be contained within the document, within a corpus constraint, or within the context of the users, the former being more specific and the latter having the greater potential ambiguity. E.g. typing the keyword "wicket" into a Google search could refer to cricket, the Apache software, or the Star Wars Ewok character (i.e. an entity). The sentence "The wicket is guarded by the batsman" has contextual clues within the sentence to interpret it as an object. "Wicket Wystri Warrick was a male Ewok scout" should interpret "Wicket" as the given name of the person entity "Wicket Wystri Warrick". "Welcome to Apache Wicket" has the contextual clues that "Wicket" is part of a place name, etc.
Lately I have been fiddling with Stanford CRF NER. They have released quite a few versions: http://nlp.stanford.edu/software/CRF-NER.shtml
The good thing is that you can train your own classifier. You should follow the link, which has the guidelines on how to train your own NER: http://nlp.stanford.edu/software/crf-faq.shtml#a
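A minimal way to try one of their pre-trained classifiers from Java looks roughly like this; the classifier file name is the 3-class model shipped with the Stanford NER download, so check the distribution for the exact path:

    import edu.stanford.nlp.ie.crf.CRFClassifier;
    import edu.stanford.nlp.ling.CoreLabel;

    public class StanfordNerDemo {

        public static void main(String[] args) throws Exception {
            // 3-class model (PERSON, LOCATION, ORGANIZATION) shipped with the Stanford NER download
            CRFClassifier<CoreLabel> classifier =
                    CRFClassifier.getClassifier("classifiers/english.all.3class.distsim.crf.ser.gz");

            String text = "Jimmy Wales founded Wikipedia in San Francisco.";

            // Wraps each recognised entity in inline tags, e.g. <PERSON>Jimmy Wales</PERSON>
            System.out.println(classifier.classifyWithInlineXML(text));
        }
    }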
Unfortunately, in my case, the named entities are not efficiently extracted from the document. Most of the entities go undetected.
Just in case you find it useful.
I am currently trying to draw a set of UML diagrams to represent products, offers, orders, deliveries and payments. These diagrams have probably been invented by a million developers before me.
Are there any efforts to standardize the modeling of such common things? Or even the modeling of specific domains (for example car-manufacturing).
Do you know if there is some sort of repository containing UML diagrams (class diagrams, sequence diagrams, state diagrams...)?
There is a movement for documenting (as opposed to standardizing) models for certain domains. These are called analysis patterns, a term Martin Fowler came up with. He actually wrote a book called Analysis Patterns. He also has a dedicated section on his website where he presents some of these patterns accompanied by UML diagrams.
Maybe you'll find some inspiration that will help you in modeling your domain. I've stressed the word inspiration, as I think different businesses have different requirements even though they operate in the same domain, so the solutions you might read about may not be appropriate for your problem.
There are many tools out there that do both - but they're generally not free!
Microsoft Visio does both and is extensible. For UML artefacts it comes with auto-generators for VB/Java template code, but you can modify them to auto-generate any code. Many Visio users have created models that can be used as templates.
Artisan Enterprize is by far the most powerful UML tool (but it's not cheap).
Some would argue that Rational Rose or RUP is the better tool
But for car manufacturing and other similar real-world modelling, by far the best tool is MathWorks Simulink (and not because it's one of the most expensive). It is by far the best tool because you can animate the model: you can prove the model works before generating the code (in whatever grammar/language/other models you care to push it to)!
You can obtain a student license for around £180, with the 'real thing' pushing £4,000 (for car-related artefacts). The full product with all the trimmings is about £15k. Simulink is also extensible with a C-like language, and there is a .NET add-in and APIs for a plethora of other languages. And, just like with Visio, there is a world-wide forum creating saleable, shareware and freeware real-world model templates. Many auto manufacturers worldwide are already using Simulink.
I think that MiniQuark's question is really good and will sooner or later be answered by vendors such as Omondo, Rational/IBM etc. Users don't just need tools; they need models out of the box, so they can just add their business rules inside an existing, well-defined architecture. Why develop a new architecture from scratch if the job has already been done? In Java we use plenty of frameworks, existing methods etc., so why not go one level higher and reuse architectures? It is impossible today to guess how a project will evolve, and new demands come in every day. We therefore need a stable architecture which has been tested previously and is extensible. I have seen so many projects start with a nice architecture, then realize in the middle of the project that it is not the best fit, and then change their architecture. Renaming classes, splitting classes, creating packages etc.: after the first iteration it becomes a real mess. Can you imagine what we found after 10 iterations? A total mess!
This mess would have been avoided by using a predefined model which had been tested previously, because the missing class or package etc. would already have been created, and only a class rename would be needed for architecture purposes. Adding the business rule methods would then end the coding stage before deployment testing.
I think there is some confusion between patterns and the initial question, which is about UML model reusability.
There is today no reusable out-of-the-box model that has been developed. This is really strange, but the job has either never been done or never been shared.
Omondo has tried to launch an initiative without real success. I have heard that they are working on hundreds of out-of-the-box models which will be open source and given to the community for free. I hope this gets done, because it is really important to me and would save me a lot of time at the beginning of a project.