Is there a way to get inferences from HermiT reasoner that contain negation (ObjectComplementOf)? Here is what I tried:
OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
OWLDataFactory dataFactory = manager.getOWLDataFactory();
IRI iri = IRI.create("http://www.test.owl");
OWLOntology ontology = manager.createOntology(iri);
OWLClass clsA = dataFactory.getOWLClass(IRI.create(iri + "#A"));
OWLClass clsB = dataFactory.getOWLClass(IRI.create(iri + "#B"));
OWLAxiom axiom = dataFactory.getOWLSubClassOfAxiom(clsA, clsB.getComplementNNF());
OWLIndividual john = dataFactory.getOWLNamedIndividual(IRI.create(iri + "#JOHN"));
OWLClassAssertionAxiom assertionAxiom = dataFactory.getOWLClassAssertionAxiom(clsA, john);
ontology.add(axiom);
ontology.add(assertionAxiom);
OWLReasonerFactory reasoner_factory = new ReasonerFactory();
OWLReasoner reasoner = reasoner_factory.createReasoner(ontology);
OWLOntology inferred_ontology = manager.createOntology();
// Create an inferred axiom generator, and add the generators of choice.
List<InferredAxiomGenerator<? extends OWLAxiom>> gens = new ArrayList<>();
gens.add(new InferredSubClassAxiomGenerator());
gens.add(new InferredClassAssertionAxiomGenerator());
gens.add(new InferredDisjointClassesAxiomGenerator());
gens.add(new InferredEquivalentClassAxiomGenerator());
// Create the inferred ontology generator, and fill the empty ontology.
InferredOntologyGenerator iog = new InferredOntologyGenerator(reasoner, gens);
iog.fillOntology(dataFactory, inferred_ontology);
The (cleaned) result:
//KB: A SubClassOf not(B), A(JOHN)
ENTAILMENTS:{
SubClassOf(A owl:Thing),
SubClassOf(B owl:Thing),
DisjointClasses(A owl:Nothing),
DisjointClasses(B owl:Nothing),
DisjointClasses(A B),
ClassAssertion(owl:Thing JOHN),
ClassAssertion(A JOHN)
}
My question: How can I also get this assertion:
ClassAssertion(ObjectComplementOf(B) JOHN)?
It's not possible to generate all inferred axioms that use class or property expressions. The reason for that is that the list of class (and property) expressions is infinite, so the reasoner would embark on an impossible task if it tried to generate all axioms of that kind.
If you have a criterion for selecting a finite subset of class expressions (e.g., the list of complements for each named class in the ontology) you could implement an axiom generator that asks the reasoner whether those classes have instances, and create the axioms that way.
Reasoners could also do the same, to provide a partial answer to a question with infinite answers - but, as far as I'm aware, there are no reasoners that do that. HermiT certainly does not.
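As a concrete illustration of that criterion, here is a sketch of such a generator restricted to the complements of named classes. It assumes the OWL API 5 style used in the question (streams, OWLOntology.add); it is not a built-in InferredAxiomGenerator, just one possible implementation of the idea:

```java
import java.util.stream.Collectors;

import org.semanticweb.owlapi.model.OWLClass;
import org.semanticweb.owlapi.model.OWLClassExpression;
import org.semanticweb.owlapi.model.OWLDataFactory;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.reasoner.OWLReasoner;

public class ComplementAssertionGenerator {
    // For every named class C in the source ontology, ask the reasoner for
    // the instances of ObjectComplementOf(C) and add the corresponding
    // ClassAssertion axioms to the target ontology. This restricts the
    // infinite space of class expressions to a finite, useful subset.
    public static void generate(OWLOntology source, OWLOntology target,
                                OWLDataFactory factory, OWLReasoner reasoner) {
        for (OWLClass cls : source.classesInSignature().collect(Collectors.toSet())) {
            OWLClassExpression complement = factory.getOWLObjectComplementOf(cls);
            // false = retrieve both direct and indirect instances
            reasoner.getInstances(complement, false).entities().forEach(ind ->
                target.add(factory.getOWLClassAssertionAxiom(complement, ind)));
        }
    }
}
```

Run over the KB in the question, this should produce ClassAssertion(ObjectComplementOf(B) JOHN): HermiT can answer instance retrieval for arbitrary class expressions; only the enumeration of expressions needs to be made finite.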
I am currently in the process of upgrading a search engine application from Lucene 3.5.0 to version 4.10.3. There have been some substantial API changes in version 4 that break backward compatibility. I have managed to fix most of them, but a few issues remain that I could use some help with:
"cannot override final method from Analyzer"
The original code extended the Analyzer class and overrode tokenStream(...):
@Override
public TokenStream tokenStream(String fieldName, Reader reader) {
CharStream charStream = CharReader.get(reader);
return
new LowerCaseFilter(version,
new SeparationFilter(version,
new WhitespaceTokenizer(version,
new HTMLStripFilter(charStream))));
}
But this method is final now and I am not sure how to understand the following note from the change log:
ReusableAnalyzerBase has been renamed to Analyzer. All Analyzer implementations must now use Analyzer.TokenStreamComponents, rather than overriding .tokenStream() and .reusableTokenStream() (which are now final).
There is another problem in the method quoted above:
"The method get(Reader) is undefined for the type CharReader"
There seem to have been some considerable changes here, too.
"TermPositionVector cannot be resolved to a type"
This class is gone now in Lucene 4. Are there any simple fixes for this? From the change log:
The term vectors APIs (TermFreqVector, TermPositionVector, TermVectorMapper) have been removed in favor of the above flexible indexing APIs, presenting a single-document inverted index of the document from the term vectors.
Probably related to this:
"The method getTermFreqVector(int, String) is undefined for the type IndexReader."
Both problems occur here, for instance:
TermPositionVector termVector = (TermPositionVector) reader.getTermFreqVector(...);
("reader" is of Type IndexReader)
I would appreciate any help with these issues.
I found core developer Uwe Schindler's response to your question on the Lucene mailing list. It took me some time to wrap my head around the new API, so I need to write down something before I forget.
These notes apply to Lucene 4.10.3.
Implementing an Analyzer (1-2)
new Analyzer() {
@Override
protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
Tokenizer source = new WhitespaceTokenizer(new HTMLStripCharFilter(reader));
TokenStream sink = new LowerCaseFilter(source);
return new TokenStreamComponents(source, sink);
}
};
The constructor of TokenStreamComponents takes a source and a sink. The sink is the end result of your token stream, returned by Analyzer.tokenStream(), so set it to your filter chain. The source is the token stream before you apply any filters.
HTMLStripCharFilter, despite its name, is actually a subclass of java.io.Reader which removes HTML constructs, so you no longer need CharReader.
Term vector replacements (3-4)
Term vectors work differently in Lucene 4, so there are no straightforward method swaps. The specific answer depends on what your requirements are.
If you want positional information, you have to index your fields with positional information in the first place:
Document doc = new Document();
FieldType f = new FieldType();
f.setIndexed(true);
f.setStoreTermVectors(true);
f.setStoreTermVectorPositions(true);
doc.add(new Field("text", "hello", f));
Finally, in order to get at the frequency and positional info of a field of a document, you drill down the new API like this (adapted from this answer):
// IndexReader ir;
// int docID = 0;
Terms terms = ir.getTermVector(docID, "text");
terms.hasPositions(); // should be true if you set the field to store positions
TermsEnum termsEnum = terms.iterator(null);
BytesRef term = null;
// Explore the terms for this field
while ((term = termsEnum.next()) != null) {
// Enumerate through documents, in this case only one
DocsAndPositionsEnum docsEnum = termsEnum.docsAndPositions(null, null);
int docIdEnum;
while ((docIdEnum = docsEnum.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {
for (int i = 0; i < docsEnum.freq(); i++) {
System.out.println(term.utf8ToString() + " " + docIdEnum + " "
+ docsEnum.nextPosition());
}
}
}
It'd be nice if Terms.iterator() returned an actual Iterable.
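In the meantime you can write that adapter yourself. The following is a hypothetical helper, not part of Lucene: it wraps a TermsEnum in an Iterable<BytesRef> so the term loop above becomes a for-each, tunneling the checked IOException through an unchecked one:

```java
import java.io.IOException;
import java.util.Iterator;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.util.BytesRef;

// Hypothetical helper: adapts a TermsEnum to Iterable so the terms of a
// field can be consumed with a for-each loop. Single-pass only, like the
// underlying enum.
public class IterableTerms implements Iterable<BytesRef> {
    private final TermsEnum termsEnum;

    public IterableTerms(TermsEnum termsEnum) { this.termsEnum = termsEnum; }

    @Override
    public Iterator<BytesRef> iterator() {
        return new Iterator<BytesRef>() {
            private BytesRef next = advance();

            private BytesRef advance() {
                try {
                    return termsEnum.next(); // null when the enum is exhausted
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            }

            @Override public boolean hasNext() { return next != null; }

            @Override public BytesRef next() {
                BytesRef current = next;
                next = advance();
                return current;
            }

            @Override public void remove() { throw new UnsupportedOperationException(); }
        };
    }
}
```

With this, `while ((term = termsEnum.next()) != null) { ... }` becomes `for (BytesRef term : new IterableTerms(termsEnum)) { ... }`.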
I have written an interface that, given for example the Pizza ontology, can distinguish between asserted and inferred axioms. For example, given Class: Food SubClassOf: Thing it will tell you that the axiom is inferred. My question: if I have an unsatisfiable class TomatoTopping and I want to know whether Class: TomatoTopping SubClassOf: Nothing is asserted, inferred, or neither, I get the response that the axiom doesn't exist, even though I can see TomatoTopping under Nothing in the hierarchy. What is the problem? Can't I query such an axiom, or is my logic flawed?
This is the code that identifies Class: Food SubClassOf: Thing but not Class: TomatoTopping SubClassOf: Nothing
// parsing input
Set<OntologyAxiomPair> frame = parsingProcess(tool, input);
Iterator<OntologyAxiomPair> frameit = frame.iterator();
while (frameit.hasNext()) {
OntologyAxiomPair newPair = frameit.next();
OWLAxiom tempAx = newPair.getAxiom();
// the axiom type must be anything but declaration
if (!tempAx.isOfType(AxiomType.DECLARATION)) {
// get asserted and inferred axioms
Set<OWLAxiom> asserted = getOntology().getAxioms();
if (asserted.contains(tempAx)) {
Result = "System: this axiom is an asserted axiom.";
displayUserMessage(Result, asetBlue);
break;
}
// Use an inferred axiom generator
else
{
// Calling TrOWL
RELReasoner reasoner = relfactory.createReasoner(getOntology());
if(reasoner.isEntailed( tempAx)){
Result = "System: this axiom is an inferred axiom.";
displayUserMessage(Result, asetMagenta);
break;
}
}
}
}
I think I discovered what the problem is. The Manchester Syntax parser needs a data factory.
OWLOntologyManager manager = inputOntology.getOWLOntologyManager();
this.dataFactory = manager.getOWLDataFactory();
Since the ontology is not classified by a reasoner, owl:Nothing does not appear. Do you know of a way to classify the ontology and then extract the data factory?
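One way to side-step the parsing problem entirely is to build the SubClassOf(C, owl:Nothing) axiom with the data factory and ask the reasoner about it directly: an unsatisfiable class is exactly one for which that axiom is entailed. A sketch, reusing getOntology() and relfactory from the code above (the TomatoTopping IRI and the variable names are assumptions for illustration):

```java
// Sketch: detect that TomatoTopping is unsatisfiable by checking whether
// SubClassOf(TomatoTopping, owl:Nothing) is entailed. Such an axiom is
// practically never asserted, so a positive answer means "inferred".
OWLDataFactory factory = getOntology().getOWLOntologyManager().getOWLDataFactory();
OWLClass tomato = factory.getOWLClass(
        IRI.create("http://www.co-ode.org/ontologies/pizza/pizza.owl#TomatoTopping"));
OWLAxiom unsat = factory.getOWLSubClassOfAxiom(tomato, factory.getOWLNothing());

RELReasoner reasoner = relfactory.createReasoner(getOntology());
// getUnsatisfiableClasses() is part of the OWLReasoner interface and avoids
// relying on isEntailed() support for this axiom type.
if (reasoner.getUnsatisfiableClasses().contains(tomato)) {
    System.out.println("System: this axiom is an inferred axiom.");
}
```

Whether TrOWL's isEntailed() handles SubClassOf axioms with owl:Nothing is something to verify; the getUnsatisfiableClasses() route is the more portable check.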
I'm writing a program in Java that uses the OWL API version 3.1.0. I have a String representing an axiom in Manchester OWL Syntax, and I would like to convert this string into an OWLAxiom object, because I need to add the resulting axiom to an ontology using the method addAxiom(OWLOntology owl, OWLAxiom axiom) of OWLOntologyManager. How can I do that?
How about something like the following Java code? Note that I'm parsing a complete, but small, ontology. If you're actually expecting just some Manchester text that won't be parsable as a complete ontology, you may need to prepend some standard prefix to everything. That's more of a concern for the particular application though. You'll also need to make sure that you're getting the kinds of axioms that you're interested in. There will, necessarily, be declaration axioms (e.g., that Person is a class), but you're more likely interested in TBox and ABox axioms, so I've added some notes about how you can get those.
One point to note is that if you're only trying to add the axioms to an existing ontology, that's what the OWLParser methods do, although the Javadoc doesn't make this particularly clear (in my opinion). The documentation about OWLParser says that
An OWLParser parses an ontology document into an OWL API object representation of an ontology.
and that's not strictly true. If the ontology argument to parse() already has content, and parse() doesn't remove it, then the ontology argument ends up being an object representation of a superset of the ontology document (it's the ontology document plus the prior content). Fortunately, though, this is exactly what you want in your case: you want to read a snippet of text and add it to an existing ontology.
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import org.coode.owlapi.manchesterowlsyntax.ManchesterOWLSyntaxParserFactory;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.io.OWLParser;
import org.semanticweb.owlapi.io.StreamDocumentSource;
import org.semanticweb.owlapi.model.OWLAxiom;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyCreationException;
import org.semanticweb.owlapi.model.OWLOntologyManager;
public class ReadManchesterString {
public static void main(String[] args) throws OWLOntologyCreationException, IOException {
// Get a manager and create an empty ontology, and a parser that
// can read Manchester syntax.
final OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
final OWLOntology ontology = manager.createOntology();
final OWLParser parser = new ManchesterOWLSyntaxParserFactory().createParser( manager );
// A small OWL ontology in the Manchester syntax.
final String content = "" +
"Prefix: so: <http://stackoverflow.com/q/21005908/1281433/>\n" +
"Class: so:Person\n" +
"Class: so:Young\n" +
"\n" +
"Class: so:Teenager\n" +
" SubClassOf: (so:Person and so:Young)\n" +
"";
// Create an input stream from the ontology, and use the parser to read its
// contents into the ontology.
try ( final InputStream in = new ByteArrayInputStream( content.getBytes() ) ) {
parser.parse( new StreamDocumentSource( in ), ontology );
}
// Iterate over the axioms of the ontology. There are more than just the subclass
// axiom, because the class declarations are also axioms. All in all, there are
// four: the subclass axiom and three declarations of named classes.
System.out.println( "== All Axioms: ==" );
for ( final OWLAxiom axiom : ontology.getAxioms() ) {
System.out.println( axiom );
}
// You can iterate over more specific axiom types, though. For instance,
// you could just iterate over the TBox axioms, in which case you'll just
// get the one subclass axiom. You could also iterate over
// ontology.getABoxAxioms() to get ABox axioms.
System.out.println( "== TBox Axioms: ==" );
for ( final OWLAxiom axiom : ontology.getTBoxAxioms( false ) ) {
System.out.println( axiom );
}
}
}
The output is:
== All Axioms: ==
SubClassOf(<http://stackoverflow.com/q/21005908/1281433/Teenager> ObjectIntersectionOf(<http://stackoverflow.com/q/21005908/1281433/Person> <http://stackoverflow.com/q/21005908/1281433/Young>))
Declaration(Class(<http://stackoverflow.com/q/21005908/1281433/Person>))
Declaration(Class(<http://stackoverflow.com/q/21005908/1281433/Young>))
Declaration(Class(<http://stackoverflow.com/q/21005908/1281433/Teenager>))
== TBox Axioms: ==
SubClassOf(<http://stackoverflow.com/q/21005908/1281433/Teenager> ObjectIntersectionOf(<http://stackoverflow.com/q/21005908/1281433/Person> <http://stackoverflow.com/q/21005908/1281433/Young>))
I'm using the OWL API for OWL 2.0 and there is one thing I can't seem to figure out. I have an OWL/XML file and I would like to retrieve the annotations for my object property assertions. Here are snippets from my OWL/XML and Java code:
OWL:
<ObjectPropertyAssertion>
<Annotation>
<AnnotationProperty abbreviatedIRI="rdfs:comment"/>
<Literal datatypeIRI="http://www.w3.org/2001/XMLSchema#string">Bob likes sushi</Literal>
</Annotation>
<ObjectProperty IRI="#Likes"/>
<NamedIndividual IRI="#UserBob"/>
<NamedIndividual IRI="#FoodSushi"/>
</ObjectPropertyAssertion>
Java:
OWLIndividual bob = manager.getOWLDataFactory().getOWLNamedIndividual(IRI.create(base + "#UserBob"));
OWLObjectProperty likes = manager.getOWLDataFactory().getOWLObjectProperty(IRI.create(base + "#Likes"));
OWLIndividual sushi = factory.getOWLNamedIndividual(IRI.create(base + "#FoodSushi"));
OWLObjectPropertyAssertionAxiom ax = factory.getOWLObjectPropertyAssertionAxiom(likes, bob, sushi);
for(OWLAnnotation a: ax.getAnnotations()){
System.out.println(a.getValue());
}
The problem is, nothing gets returned even though the OWL states there is one rdfs:comment. It has been troublesome to find any documentation on how to retrieve this information. Adding axioms with comments or whatever is not an issue.
In order to retrieve the annotations you need to walk over the axioms of interest. A factory getSomething() call creates a new, unannotated object rather than looking anything up in the ontology, so, as noted in the comments, it is not possible to retrieve your annotated axiom this way. Here is the code, adapted from the OWL-API guide:
//Get rdfs:comment
final OWLAnnotationProperty comment = factory.getRDFSComment();
//Create a walker
OWLOntologyWalker walker =
new OWLOntologyWalker(Collections.singleton(ontology));
//Define what's going to visited
OWLOntologyWalkerVisitor<Object> visitor =
new OWLOntologyWalkerVisitor<Object>(walker) {
//In your case you visit the annotations made with rdfs:comment
//over the object properties assertions
@Override
public Object visit(OWLObjectPropertyAssertionAxiom axiom) {
//Print them
System.out.println(axiom.getAnnotations(comment));
return null;
}
};
//Walks over the structure - triggers the walk
walker.walkStructure(visitor);
I have some data coming in from a RabbitMQ. The data is formatted as triples, so a message from the queue could look something like this:
:Tom foaf:knows :Anna
where : is the standard namespace of the ontology into which I want to import the data, but other prefixes from imports are also possible. The triples consist of subject, property/predicate and object and I know in each message which is which.
On the receiving side, I have a Java program with an OWLOntology object that represents the ontology where the newly arriving triples should be stored temporarily for reasoning and other stuff.
I kind of managed to get the triples into a Jena OntModel but that's where it ends. I tried to use OWLRDFConsumer but I could not find anything about how to apply it.
My function looks something like this:
public void addTriple(RDFTriple triple) {
//OntModel model = ModelFactory.createOntologyModel();
String subject = triple.getSubject().toString();
subject = subject.substring(1,subject.length()-1);
Resource s = ResourceFactory.createResource(subject);
String predicate = triple.getPredicate().toString();
predicate = predicate.substring(1,predicate.length()-1);
Property p = ResourceFactory.createProperty(predicate);
String object = triple.getObject().toString();
object = object.substring(1,object.length()-1);
RDFNode o = ResourceFactory.createResource(object);
Statement statement = ResourceFactory.createStatement(s, p, o);
//model.add(statement);
System.out.println(statement.toString());
}
I did the substring operations because the RDFTriple class adds <> around the arguments of the triple and the constructor of Statement fails as a consequence.
If anybody could point me to an example that would be great. Maybe there's a much better way that I haven't thought of to achieve the same thing?
It seems like the OWLRDFConsumer is generally used to connect the RDF parsers with OWL-aware processors. The following code seems to work, though, as I've noted in the comments, there are a couple of places where I needed an argument and put in the only available thing I could.
The following code: creates an ontology; declares two named individuals, Tom and Anna; declares an object property, likes; and declares a data property, age. Once these are declared we print the ontology just to make sure that it's what we expect. Then it creates an OWLRDFConsumer. The consumer constructor needs an ontology, an AnonymousNodeChecker, and an OWLOntologyLoaderConfiguration. For the configuration, I just used one created by the no-argument constructor, and I think that's OK. For the node checker, the only convenient implementer is the TurtleParser, so I created one of those, passing null as the Reader. I think this will be OK, since the parser won't be called to read anything. Then the consumer's handle(IRI,IRI,IRI) and handle(IRI,IRI,OWLLiteral) methods are used to process triples one at a time. We add the triples
:Tom :likes :Anna
:Tom :age 35
and then print out the ontology again to ensure that the assertions got added. Since you've already been getting the RDFTriples, you should be able to pull out the arguments that handle() needs. Before processing the triples, the ontology contained:
<NamedIndividual rdf:about="http://example.org/Tom"/>
and afterward this:
<NamedIndividual rdf:about="http://example.org/Tom">
<example:age rdf:datatype="http://www.w3.org/2001/XMLSchema#integer">35</example:age>
<example:likes rdf:resource="http://example.org/Anna"/>
</NamedIndividual>
Here's the code:
import java.io.Reader;
import org.coode.owlapi.rdfxml.parser.OWLRDFConsumer;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.IRI;
import org.semanticweb.owlapi.model.OWLDataFactory;
import org.semanticweb.owlapi.model.OWLDataProperty;
import org.semanticweb.owlapi.model.OWLEntity;
import org.semanticweb.owlapi.model.OWLNamedIndividual;
import org.semanticweb.owlapi.model.OWLObjectProperty;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyCreationException;
import org.semanticweb.owlapi.model.OWLOntologyLoaderConfiguration;
import org.semanticweb.owlapi.model.OWLOntologyManager;
import org.semanticweb.owlapi.model.OWLOntologyStorageException;
import uk.ac.manchester.cs.owl.owlapi.turtle.parser.TurtleParser;
public class ExampleOWLRDFConsumer {
public static void main(String[] args) throws OWLOntologyCreationException, OWLOntologyStorageException {
// Create an ontology.
OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
OWLDataFactory factory = manager.getOWLDataFactory();
OWLOntology ontology = manager.createOntology();
// Create some named individuals and an object property.
String ns = "http://example.org/";
OWLNamedIndividual tom = factory.getOWLNamedIndividual( IRI.create( ns+"Tom" ));
OWLObjectProperty likes = factory.getOWLObjectProperty( IRI.create( ns+"likes" ));
OWLDataProperty age = factory.getOWLDataProperty( IRI.create( ns+"age" ));
OWLNamedIndividual anna = factory.getOWLNamedIndividual( IRI.create( ns+"Anna" ));
// Add the declarations axioms to the ontology so that the triples involving
// these are understood (otherwise the triples will be ignored).
for ( OWLEntity entity : new OWLEntity[] { tom, likes, age, anna } ) {
manager.addAxiom( ontology, factory.getOWLDeclarationAxiom( entity ));
}
// Print the ontology to see that the entities are declared.
// The important result is
// <NamedIndividual rdf:about="http://example.org/Tom"/>
// with no properties
manager.saveOntology( ontology, System.out );
// Create an OWLRDFConsumer for the ontology. TurtleParser implements AnonymousNodeChecker, so
// it was a candidate for use here (but I make no guarantees about whether it's appropriate to
// do this). Since it won't be reading anything, we pass it a null Reader, and this doesn't
// *seem* to cause any problem. Hopefully the default OWLOntologyLoaderConfiguration is OK, too.
OWLRDFConsumer consumer = new OWLRDFConsumer( ontology, new TurtleParser((Reader) null), new OWLOntologyLoaderConfiguration() );
// The consumer handles (IRI,IRI,IRI) and (IRI,IRI,OWLLiteral) triples.
consumer.handle( tom.getIRI(), likes.getIRI(), anna.getIRI() );
consumer.handle( tom.getIRI(), age.getIRI(), factory.getOWLLiteral( 35 ));
// Print the ontology to see the new object and data property assertions. The important
// content is now:
// <NamedIndividual rdf:about="http://example.org/Tom">
// <example:age rdf:datatype="http://www.w3.org/2001/XMLSchema#integer">35</example:age>
// <example:likes rdf:resource="http://example.org/Anna"/>
// </NamedIndividual>
manager.saveOntology( ontology, System.out );
}
}
In ONT-API, which is an extended Jena-based implementation of OWL-API, it is quite simple:
OWLOntologyManager manager = OntManagers.createONT();
OWLOntology ontology = manager.createOntology(IRI.create("http://example.com#test"));
((Ontology)ontology).asGraphModel().createResource("http://example.com#clazz1").addProperty(RDF.type, OWL.Class);
ontology.axioms(AxiomType.DECLARATION).forEach(System.out::println);
For more information, see the ONT-API wiki and examples.