semantic web reasoner and on the fly rules injection - semantic-web

Is there any semantic web reasoner (e.g. Pellet) that accepts rules (SWRL) on the fly, or must the rules be hard-coded before the reasoner is started?

It is unclear what you mean by "on the fly". You can edit a reasoner's rule base, so it is not something you need to set in stone from the outset. But the reasoner will need to perform some bookkeeping if you add new rules, or any other axioms, before it can be used to answer queries. There's no way around that.
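For concreteness, here is a minimal sketch using Jena's generic rule reasoner (not Pellet/SWRL specifically, but the same bookkeeping point applies); the example.org URIs and rule bodies are made up:

import java.util.ArrayList;
import java.util.List;
import org.apache.jena.rdf.model.InfModel;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.reasoner.rulesys.GenericRuleReasoner;
import org.apache.jena.reasoner.rulesys.Rule;

public class RuleInjection {
    public static void main(String[] args) {
        Model data = ModelFactory.createDefaultModel();
        // ... load your facts into `data` here ...

        List<Rule> rules = new ArrayList<>();
        rules.add(Rule.parseRule(
            "[r1: (?a <http://example.org/parentOf> ?b) -> (?b <http://example.org/childOf> ?a)]"));
        InfModel inf = ModelFactory.createInfModel(new GenericRuleReasoner(rules), data);
        // ... query `inf` ...

        // Later, "inject" another rule: extend the rule list and bind a fresh
        // inference model so the reasoner can redo its bookkeeping before the
        // next query.
        rules.add(Rule.parseRule(
            "[r2: (?b <http://example.org/childOf> ?a) -> (?a <http://example.org/relatedTo> ?b)]"));
        InfModel updated = ModelFactory.createInfModel(new GenericRuleReasoner(rules), data);
    }
}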

Related

designing an ontology semantic web

I'm trying to design an ontology, and I'm forced to use SemFacet as part of the project.
SemFacet is an open-source search engine built on Semantic Web technology. It works as follows: I create an ontology using Protégé, upload it to SemFacet, and then search my ontology.
My ontology has courses and a predicate that describes what these courses are about. For example, suppose I have an individual CS101 instantiated from the Course class. The Course class has a data property called description whose type is xsd:string.
My problem is that whenever the predicate (i.e. the description property) is prefixed with a URI (an "imaginary URI"), SemFacet can't find what I'm talking about. But if I remove the URI, everything seems to work just fine.
I told my professor about the issue, and he said it's because I'm using a URI that does not exist. To be honest, I'm not convinced that using a non-existent URI is the problem.
What do you think?
Chances are, SemFacet does not support blank nodes (that's the proper name for "imaginary URIs") correctly.
Unless SemFacet tries to resolve the resources the URI points to, you don't need to create a live URI (i.e. one that returns HTTP 200 OK), only a valid one.
Make sure that you don't leave empty IRIs in Protégé.
@berezovskiy I think the OP did not mean blank nodes by "imaginary URIs"; he meant URIs that he created himself and that do not exist, like http://mysuperfancyuri.com
So maybe your professor just wants you to be more standards-conformant and use existing predicates instead of creating your own. You could, for example, look at dcterms:description (http://purl.org/dc/terms/description) for a description predicate.
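For illustration, a minimal Jena sketch (the example.org namespace and course names are hypothetical) that reuses dcterms:description instead of a home-grown predicate:

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.vocabulary.DCTerms;
import org.apache.jena.vocabulary.RDF;

public class CourseDescription {
    public static void main(String[] args) {
        Model m = ModelFactory.createDefaultModel();
        String ns = "http://example.org/courses#";          // hypothetical namespace
        Resource courseClass = m.createResource(ns + "Course");
        m.createResource(ns + "CS101")
         .addProperty(RDF.type, courseClass)
         // reuse the well-known Dublin Core predicate instead of inventing one
         .addProperty(DCTerms.description, "Introduction to Computer Science");
        m.write(System.out, "TURTLE");
    }
}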

Understanding the difference between SPARQL and semantic reasoning using Pellet

I have a pizza ontology that defines different types of pizzas, ingredients and relations among them.
I just want to understand several basic things:
Is it correct that I should use SPARQL if I want to obtain information without reasoning, e.g. which pizzas contain onion?
What is the difference between SPARQL and reasoning algorithms like Pellet? Which queries cannot be answered by SPARQL but can be answered by Pellet? Some example queries (question-like) for the pizza ontology would be helpful.
As far as I understand, to use SPARQL from Java with Jena, I should save my ontology in RDF/XML format. However, to use Pellet with Jena, which format do I need to select? Pellet uses OWL 2...
SPARQL is a query language, that is, a language for formulating questions in. Reasoning, on the other hand, is the process of deriving new information from existing data. These are two different, complementary processes.
To retrieve information from your ontology you use SPARQL, yes. You can do this without reasoning, or in combination with a reasoner, too. If you have a reasoner active it means your queries can be simpler, and in some cases reasoners can derive information that is not really retrievable at all with just a query.
Reasoners like Pellet don't really answer queries, they just reason: they figure out what implicit information can be derived from the raw facts, and can do things like verifying that things are consistent (i.e. that there are no logical contradictions in your data). Pellet can figure out that if you own a Toyota, which is of type Car, you own a Vehicle (because a Car is a type of Vehicle). Or it can figure out that if you define a pizza to have the ingredient "Parmesan", you have a pizza of type "Cheesy" (because it knows Parmesan is a type of Cheese). So you use a reasoner like Pellet to derive this kind of implicit information, and then you use a query language like SPARQL to actually ask: "Ok, give me an overview of all Cheesy pizzas that also have anchovies".
APIs like Jena are toolkits that treat RDF as an abstract model. Which syntax format you save your file in is immaterial; Jena can read almost any RDF syntax. As soon as you have read it into a Jena model, you can execute the Pellet reasoner on it - it doesn't matter which syntax your original file was in. Details on how to do this can be found in the Jena documentation.
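A rough sketch of the combination, assuming the classic Pellet-Jena bindings (the package name differs in newer Openllet releases) and the class/property names of the Protégé pizza tutorial ontology (CheesyPizza, hasTopping, AnchoviesTopping) - adjust these to your own ontology:

import org.apache.jena.ontology.OntModel;
import org.apache.jena.query.*;
import org.apache.jena.rdf.model.ModelFactory;
import org.mindswap.pellet.jena.PelletReasonerFactory;

public class PizzaQueries {
    public static void main(String[] args) {
        // Jena reads RDF/XML, Turtle, N-Triples, ... - the serialization does not matter
        OntModel model = ModelFactory.createOntologyModel(PelletReasonerFactory.THE_SPEC);
        model.read("file:pizza.owl");   // hypothetical local file

        // With Pellet behind the model, this also returns pizzas that are only
        // *inferred* to be CheesyPizza (e.g. because they have a Parmesan topping).
        String q = "PREFIX pizza: <http://www.co-ode.org/ontologies/pizza/pizza.owl#> "
                 + "SELECT ?p WHERE { ?p a pizza:CheesyPizza ; pizza:hasTopping ?t . "
                 + "                  ?t a pizza:AnchoviesTopping }";
        try (QueryExecution qe = QueryExecutionFactory.create(QueryFactory.create(q), model)) {
            ResultSetFormatter.out(qe.execSelect());
        }
    }
}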

Semantic Model from a grammar

I would like to ask for some thoughts about the concepts of a Domain Object and a Semantic Model.
So, I really want to understand what a Domain Object / Semantic Model is for and what it is not for.
As far as I've been able to figure out, given a grammar, it is absolutely advisable to separate these concepts.
However, I can't quite figure out how to do it. For example, given this small grammar, how do you build a Domain Object or a Semantic Model?
That's exactly what I'm trying to figure out...
Most books suggest this approach for working through an AST: instead of translating directly as you walk the AST, you create a semantic model and then connect an interpreter to it.
Example (SQL syntax tree):
Instead of generating a SQL statement directly, I create a semantic model, and then I can connect an interpreter that translates this semantic model into a SQL statement.
Abstract Syntax Tree -> Semantic Model -> Interpreter
This way, I could have a Transact-SQL interpreter and another one for SQLite.
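Roughly what I have in mind, as a minimal Java sketch (all class names are made up): the semantic model captures what the query means, and each dialect gets its own interpreter.

import java.util.List;

// Semantic model: what the query *means*, independent of any SQL dialect.
record SelectQuery(List<String> columns, String table, int limit) {}

// One interpreter per target dialect.
interface SqlInterpreter {
    String render(SelectQuery q);
}

class TransactSqlInterpreter implements SqlInterpreter {
    public String render(SelectQuery q) {
        // T-SQL expresses row limits with TOP
        return "SELECT TOP " + q.limit() + " " + String.join(", ", q.columns())
                + " FROM " + q.table();
    }
}

class SqliteInterpreter implements SqlInterpreter {
    public String render(SelectQuery q) {
        // SQLite expresses row limits with LIMIT
        return "SELECT " + String.join(", ", q.columns())
                + " FROM " + q.table() + " LIMIT " + q.limit();
    }
}

class Demo {
    public static void main(String[] args) {
        // In a real system this model would be built while walking the AST
        SelectQuery q = new SelectQuery(List.of("name", "price"), "products", 10);
        System.out.println(new TransactSqlInterpreter().render(q));
        System.out.println(new SqliteInterpreter().render(q));
    }
}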
The terms "domain object" and "semantic model" aren't really standard terms from the compiler literature, so you'll get lots of random answers.
The usual terms related to parsing are "concrete syntax tree" (matches the shape of the grammar rules) and "abstract syntax tree" (an attempt to make a tree that contains less accidental detail, although it might not be worth the trouble).
Parsing is only a small part of the problem of processing a language. You need a lot of semantic interpretation of syntax, however you represent it (AST, CST, ...). This includes concepts such as :
Name resolution (for every identifier, where is it defined? where is it used?)
Type resolution (for every identifier/expression/syntax construct, what is the type of that entity?)
Type checking (is that syntax construct used in a valid way?)
Control flow analysis (what order are the program parts executed in, possibly even parallel/dynamic/constraint-determined)
Data flow analysis (where are values defined? consumed?)
Optimization (replacement of one set of syntax constructs by another semantically equivalent set with some nice property [executes faster after compilation is common]), at high or low levels of abstraction
High-level code generation, e.g., interpreting sets of syntactic constructs in the language to equivalent sets in the targeted [often assembly-language-like] language
Each of these concepts more or less builds on top of the preceding ones.
The closest I can come to "semantic model" is that high-level code generation. That takes a lot of machinery that you have to build on top of trees.
ANTLR parses. You have to do/supply the rest.

Should an aggregate root's behaviour be dependent on other aggregate root's attributes?

I'm reading a book about DDD and I see an example domain that involves cars, engines, wheels and tires.
Above is the model as it appears in the book. Customer is also an aggregate root.
Given that model, there might be a case where the engine has height, width and length attributes.
What happens when you need to attach a big engine to a small car? The engine might not fit.
Is it a problem if the car checks the engine's attributes and decides whether or not the engine can be part of the car?
The engine has global identity (e.g. each engine has a serial/manufacturer number). Maybe the engines need to be tracked by the manufacturer.
So I'm asking again: is it a problem if the car uses the engine's attributes to decide whether the engine fits (i.e. to allow it or not to be part of the car)?
Is it a problem if the car checks for the engine attributes and allows it or not to be a part of the car?
No.
That being said, your validation may be complex enough to introduce a domain service. Since two aggregates are involved you could have this:
car.Fit(engine)
Or this:
engine.Fit(car)
However, you probably want to be checking against a car model anyway :)
Since the rules are going to be somewhat more advanced and involve some data, you probably want to introduce a domain service and possibly use double-dispatch on the objects:
So rather than car.Fit(engine) you could have this:
car.Fit(engine, IModelServiceImplementation)
And in the Fit method call:
if (!IModelServiceImplementation.CanFit(car, engine)) { throw new Exception(); }
The service could possibly load the correct model and check that against the engine instead. Depending on the domain, one may even have modification levels and other rules to deal with.
Since a Car instance would not contain the actual Engine instance but only the EngineId or possibly some value object, there would be no real assignment of engine to car. You could still pass the engine instance to the car and have it create the association however it sees fit, for example:
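A rough Java sketch of that double-dispatch arrangement (all type and method names here are illustrative, not from any particular DDD framework):

import java.util.UUID;

interface ModelService {
    // Domain service that knows the fitment rules for a given car model
    boolean canFit(Car car, Engine engine);
}

class Engine {
    final UUID id = UUID.randomUUID();   // global identity (think serial number)
    final int height, width, length;
    Engine(int height, int width, int length) {
        this.height = height; this.width = width; this.length = length;
    }
}

class Car {
    final UUID id = UUID.randomUUID();
    private UUID engineId;               // reference by identity, not the whole aggregate

    void fit(Engine engine, ModelService modelService) {
        // The car delegates the cross-aggregate rule to the domain service
        if (!modelService.canFit(this, engine)) {
            throw new IllegalStateException("Engine does not fit this car model");
        }
        this.engineId = engine.id;       // only the identity is stored
    }
}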
The solution proposed by 'Enrico S.' is possibly more relevant to scenarios where changes are made to aggregate roots when not all of them are available, or even where the aggregate roots live in separate bounded contexts. Even if Car and Engine were in separate BCs, one would probably be able to query the validity somehow. Some things are fine for eventual consistency, but others may not be.
As usual there are many things to consider :)
From DDD book, p128:
Any rule that spans AGGREGATES will not be expected to be up-to-date at all times. Through event processing, batch processing, or other update mechanisms, other dependencies can be resolved within some specific time.
So it really depends on what the Car aggregate is designed for: if it requires strong consistency with the Engine, then the Engine should be part of the Car aggregate.
Otherwise, if it requires only "eventual consistency", you might put that validation logic inside a Domain Event.
See this post by Udi Dahan.

Conceptual / Rule Implementation / String Manipulation

So I'm working on software in VB.Net where I need to scrape information and process it according to rules. For example, simple string-replace rules like turning "Det" into "Detached" for a specific field, or a split/join rule - basic string operations. All my scraping rules are regular expressions, and I store them in a database in rows of rule sets for different situations.
What is the best way to create and store rules for manipulating text? I don't want to hardcode the rules into the software; I want to be able to add more as the need arises. I want to store them in a database, but then how do I interpret them? I'm assuming I would have to create a whole system to interpret them, like a rules engine? Maybe you can give me a different outlook on this problem.
I've written rules engines before. They are usually a Bad Idea (tm).
I would consider writing the rules in your application code and leaving the database and rules engine out of it. First, a rules engine often obscures intent: it's hard to see exactly what is going on when you come back in a couple of months for a maintenance patch. Second, VB (or C# or any other language you choose) contains a more appropriate vocabulary for defining rules than anything you will likely have time to implement. Trust me, XML is a poor representation of rules. Lastly, non-programmers won't be able to write regexes anyhow, so you aren't gaining anything for all the added complexity.
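For example, a minimal sketch of rules-as-code (shown in Java for brevity; the same shape works in VB.Net, and the field and rules below are made up):

import java.util.List;
import java.util.function.UnaryOperator;

class FieldRules {
    // Each rule is just a function from the raw value to the cleaned value,
    // so the compiler checks them and they read as plain code.
    static final List<UnaryOperator<String>> PROPERTY_TYPE_RULES =
            List.<UnaryOperator<String>>of(
                s -> s.replaceAll("\\bDet\\b", "Detached"),
                s -> s.replaceAll("\\bSemi\\b", "Semi-Detached"),
                String::trim
            );

    static String apply(String raw, List<UnaryOperator<String>> rules) {
        String value = raw;
        for (UnaryOperator<String> rule : rules) {
            value = rule.apply(value);
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(apply("  Det ", PROPERTY_TYPE_RULES)); // prints "Detached"
    }
}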
You can mitigate most of the deployment headaches by using ClickOnce deployment.
Hope that helps.