I'm using the OWL API for OWL 2 and there is one thing I can't seem to figure out: I have an OWL/XML file and I would like to retrieve the annotations for my object property assertions. Here are snippets from my OWL/XML and Java code:
OWL:
<ObjectPropertyAssertion>
<Annotation>
<AnnotationProperty abbreviatedIRI="rdfs:comment"/>
<Literal datatypeIRI="http://www.w3.org/2001/XMLSchema#string">Bob likes sushi</Literal>
</Annotation>
<ObjectProperty IRI="#Likes"/>
<NamedIndividual IRI="#UserBob"/>
<NamedIndividual IRI="#FoodSushi"/>
</ObjectPropertyAssertion>
Java:
OWLDataFactory factory = manager.getOWLDataFactory();
OWLIndividual bob = factory.getOWLNamedIndividual(IRI.create(base + "#UserBob"));
OWLObjectProperty likes = factory.getOWLObjectProperty(IRI.create(base + "#Likes"));
OWLIndividual sushi = factory.getOWLNamedIndividual(IRI.create(base + "#FoodSushi"));
OWLObjectPropertyAssertionAxiom ax = factory.getOWLObjectPropertyAssertionAxiom(likes, bob, sushi);
for (OWLAnnotation a : ax.getAnnotations()) {
    System.out.println(a.getValue());
}
The problem is that nothing gets returned, even though the OWL file clearly contains one rdfs:comment. It has been troublesome to find any documentation on how to retrieve this information. Adding axioms with comments is not an issue.
In order to retrieve the annotations you need to walk over the axioms of interest. The getSomething() factory methods create new (annotation-free) objects rather than looking anything up in the ontology, so, as noted in the comments, you cannot retrieve your annotated axiom that way. Here is the code, adapted from the OWL API guide:
// Get rdfs:comment
final OWLAnnotationProperty comment = factory.getRDFSComment();
// Create a walker
OWLOntologyWalker walker = new OWLOntologyWalker(Collections.singleton(ontology));
// Define what is going to be visited
OWLOntologyWalkerVisitor<Object> visitor = new OWLOntologyWalkerVisitor<Object>(walker) {
    // In your case, visit the annotations made with rdfs:comment
    // over the object property assertions
    @Override
    public Object visit(OWLObjectPropertyAssertionAxiom axiom) {
        // Print them
        System.out.println(axiom.getAnnotations(comment));
        return null;
    }
};
// Walk over the structure - this triggers the walk
walker.walkStructure(visitor);
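If you only need the annotations on assertions about a single individual, the walker may be more machinery than necessary. Here is a minimal alternative sketch, assuming the ontology and comment objects from above and the bob individual from the question; getObjectPropertyAssertionAxioms returns the axioms as they are stored in the ontology, annotations included:
// Look up the stored (annotated) assertion axioms for bob, instead of
// creating a fresh, annotation-free axiom with the data factory.
for (OWLObjectPropertyAssertionAxiom axiom : ontology.getObjectPropertyAssertionAxioms(bob)) {
    for (OWLAnnotation a : axiom.getAnnotations(comment)) {
        System.out.println(a.getValue());
    }
}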
Is there a way to get inferences from the HermiT reasoner that contain negation (ObjectComplementOf)? Here is what I tried:
OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
OWLDataFactory dataFactory = manager.getOWLDataFactory();
IRI iri = IRI.create("http://www.test.owl");
OWLOntology ontology = manager.createOntology(iri);
OWLClass clsA = dataFactory.getOWLClass(IRI.create(iri + "#A"));
OWLClass clsB = dataFactory.getOWLClass(IRI.create(iri + "#B"));
OWLAxiom axiom = dataFactory.getOWLSubClassOfAxiom(clsA, clsB.getComplementNNF());
OWLIndividual john = dataFactory.getOWLNamedIndividual(IRI.create(iri + "#JOHN"));
OWLClassAssertionAxiom assertionAxiom = dataFactory.getOWLClassAssertionAxiom(clsA, john);
ontology.add(axiom);
ontology.add(assertionAxiom);
OWLReasonerFactory reasoner_factory = new ReasonerFactory();
OWLReasoner reasoner = reasoner_factory.createReasoner(ontology);
OWLOntology inferred_ontology = manager.createOntology();
// Create an inferred axiom generator, and add the generators of choice.
List<InferredAxiomGenerator<? extends OWLAxiom>> gens = new ArrayList<>();
gens.add(new InferredSubClassAxiomGenerator());
gens.add(new InferredClassAssertionAxiomGenerator());
gens.add(new InferredDisjointClassesAxiomGenerator());
gens.add(new InferredEquivalentClassAxiomGenerator());
// Create the inferred ontology generator, and fill the empty ontology.
InferredOntologyGenerator iog = new InferredOntologyGenerator(reasoner, gens);
iog.fillOntology(dataFactory, inferred_ontology);
The (cleaned) result:
//KB: A SubClassOf not(B), A(JOHN)
ENTAILMENTS:{
SubClassOf(A owl:Thing),
SubClassOf(B owl:Thing),
DisjointClasses(A owl:Nothing),
DisjointClasses(B owl:Nothing),
DisjointClasses(A B),
ClassAssertion(owl:Thing JOHN),
ClassAssertion(A JOHN)
}
My question: How can I also get this assertion:
ClassAssertion(ObjectComplementOf(B) JOHN)?
It's not possible to generate all inferred axioms that use class or property expressions. The reason is that the set of class (and property) expressions is infinite, so a reasoner that tried to generate every axiom of that kind would never finish.
If you have a criterion for selecting a finite subset of class expressions (e.g., the list of complements for each named class in the ontology) you could implement an axiom generator that asks the reasoner whether those classes have instances, and create the axioms that way.
Reasoners could do the same thing to provide a partial answer to a question with infinitely many answers, but, as far as I'm aware, no reasoner does. HermiT certainly does not.
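As a minimal sketch of that finite-subset approach, assuming the manager, ontology, reasoner, and inferred_ontology objects from your code (this is an illustration, not a ready-made InferredAxiomGenerator):
OWLDataFactory df = manager.getOWLDataFactory();
// For each named class C, ask the reasoner for the instances of not(C) ...
for (OWLClass c : ontology.getClassesInSignature()) {
    OWLClassExpression notC = df.getOWLObjectComplementOf(c);
    for (OWLNamedIndividual i : reasoner.getInstances(notC, false).getFlattened()) {
        // ... and materialize ClassAssertion(ObjectComplementOf(C) i),
        // e.g. ClassAssertion(ObjectComplementOf(B) JOHN) for the KB above.
        manager.addAxiom(inferred_ontology, df.getOWLClassAssertionAxiom(notC, i));
    }
}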
I'm trying to create an individual (instance) using Jena with the method below:
public void createInstance(String name) {
    String NS = ontology.getNsPrefixURI("http://james.miranda.br/Onto");
    OntClass ontClass = ontology.createClass(NS + "Requisito");
    Individual instance = ontClass.createIndividual(NS + name);
    System.out.println("Instance created: " + instance.getURI());
}
ontology is an OntModel instance based on this ontology (some terms are in Portuguese). The method does not work because getNsPrefixURI returns null.
When I iterate over the classes using the code below:
ExtendedIterator<OntClass> classIterator = ontology.listClasses();
while (classIterator.hasNext()) {
OntClass ontClass = classIterator.next();
System.out.println(ontClass.toString());
}
the (partial) result is:
http://james.miranda.br/Onto#Requisito
http://james.miranda.br/Onto#Micro
http://james.miranda.br/Onto#Certo
http://james.miranda.br/Onto#Objetivo
http://james.miranda.br/Onto#Individuo
Using getNsPrefixURI("") I get the namespace http://www.w3.org/2002/07/owl#, and my method does not work with that either. I looked for how to define the base URI here on SO, but the solution did not work in my case.
Trying to get all the namespaces, I used this code:
Map<String,String> list = ontology.getNsPrefixMap();
System.out.println(list.toString());
The result is: {=http://www.w3.org/2002/07/owl#, xsd=http://www.w3.org/2001/XMLSchema#, rdfs=http://www.w3.org/2000/01/rdf-schema#, owl=http://www.w3.org/2002/07/owl#, rdf=http://www.w3.org/1999/02/22-rdf-syntax-ns#}.
I didn't get a prefix for http://james.miranda.br/Onto. Should it be declared somewhere?
Is there anything wrong with my code?
The documentation of getNsPrefixURI says "Get the URI bound to a specific prefix, null if there isn't one." i.e. prefix in, URI out.
"http://james.miranda.br/Onto" is not a legal prefix.
There is no namespace prefix for http://james.miranda.br/Onto.
This does not set a prefix:
<Ontology rdf:about="http://james.miranda.br/Onto">
We would expect to see:
xmlns:onto="http://james.miranda.br/Onto#"
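A minimal fix sketch, assuming the namespace shown in your class listing (the onto prefix name is just illustrative): register a prefix for the namespace yourself, then look the URI up by that prefix (or simply hard-code the namespace string):
// Register a prefix for the ontology namespace (note the trailing '#'),
// then query by prefix - prefix in, URI out.
ontology.setNsPrefix("onto", "http://james.miranda.br/Onto#");
String ns = ontology.getNsPrefixURI("onto"); // "http://james.miranda.br/Onto#"
OntClass requisito = ontology.createClass(ns + "Requisito");
Individual instance = requisito.createIndividual(ns + "MeuRequisito");
System.out.println("Instance created: " + instance.getURI());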
I am currently in the process of upgrading a search engine application from Lucene 3.5.0 to version 4.10.3. There have been some substantial API changes in version 4 that break backward compatibility. I have managed to fix most of them, but a few issues remain that I could use some help with:
"cannot override final method from Analyzer"
The original code extended the Analyzer class and overrode tokenStream(...):
@Override
public TokenStream tokenStream(String fieldName, Reader reader) {
    CharStream charStream = CharReader.get(reader);
    return new LowerCaseFilter(version,
            new SeparationFilter(version,
                    new WhitespaceTokenizer(version,
                            new HTMLStripFilter(charStream))));
}
But this method is final now, and I am not sure how to interpret the following note from the change log:
ReusableAnalyzerBase has been renamed to Analyzer. All Analyzer implementations must now use Analyzer.TokenStreamComponents, rather than overriding .tokenStream() and .reusableTokenStream() (which are now final).
There is another problem in the method quoted above:
"The method get(Reader) is undefined for the type CharReader"
There seem to have been some considerable changes here, too.
"TermPositionVector cannot be resolved to a type"
This class is gone now in Lucene 4. Are there any simple fixes for this? From the change log:
The term vectors APIs (TermFreqVector, TermPositionVector, TermVectorMapper) have been removed in favor of the above flexible indexing APIs, presenting a single-document inverted index of the document from the term vectors.
Probably related to this:
"The method getTermFreqVector(int, String) is undefined for the type IndexReader."
Both problems occur here, for instance:
TermPositionVector termVector = (TermPositionVector) reader.getTermFreqVector(...);
("reader" is of Type IndexReader)
I would appreciate any help with these issues.
I found core developer Uwe Schindler's response to your question on the Lucene mailing list. It took me some time to wrap my head around the new API, so I'll write things down here before I forget.
These notes apply to Lucene 4.10.3.
Implementing an Analyzer (issues 1 and 2)
Analyzer analyzer = new Analyzer() {
    @Override
    protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
        Tokenizer source = new WhitespaceTokenizer(new HTMLStripCharFilter(reader));
        TokenStream sink = new LowerCaseFilter(source);
        return new TokenStreamComponents(source, sink);
    }
};
The constructor of TokenStreamComponents takes a source and a sink. The sink is the end result of your token stream, returned by Analyzer.tokenStream(), so set it to your filter chain. The source is the token stream before you apply any filters.
HTMLStripCharFilter, despite its name, is actually a subclass of java.io.Reader which removes HTML constructs, so you no longer need CharReader.
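To see the components in action, here is a short usage sketch (the field name and input string are made up) that runs the analyzer above over some HTML and prints the tokens it produces; it assumes the usual imports for StringReader and CharTermAttribute:
// Analyze a string and print each token produced by the filter chain.
try (TokenStream ts = analyzer.tokenStream("text", new StringReader("<b>Hello World</b>"))) {
    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    ts.reset();                              // mandatory before the first incrementToken()
    while (ts.incrementToken()) {
        System.out.println(term.toString()); // prints "hello", then "world"
    }
    ts.end();
}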
Term vector replacements (issues 3 and 4)
Term vectors work differently in Lucene 4, so there are no straightforward method swaps. The specific answer depends on what your requirements are.
If you want positional information, you have to index your fields with positional information in the first place:
Document doc = new Document();
FieldType f = new FieldType();
f.setIndexed(true);                  // the field must be indexed
f.setStoreTermVectors(true);         // store term vectors for it...
f.setStoreTermVectorPositions(true); // ...including position information
doc.add(new Field("text", "hello", f));
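For completeness, here is a hedged sketch of getting from that document to an IndexReader (the in-memory directory and the analyzer choice are illustrative only):
// Index the document above so there is a term vector to read back.
Directory dir = new RAMDirectory();
IndexWriterConfig conf = new IndexWriterConfig(Version.LUCENE_4_10_3, new StandardAnalyzer());
try (IndexWriter writer = new IndexWriter(dir, conf)) {
    writer.addDocument(doc);
}
IndexReader ir = DirectoryReader.open(dir);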
Finally, in order to get at the frequency and positional info of a field of a document, you drill down into the new API like this (adapted from this answer):
// IndexReader ir;
// int docID = 0;
Terms terms = ir.getTermVector(docID, "text");
terms.hasPositions(); // should be true if you set the field to store positions
TermsEnum termsEnum = terms.iterator(null);
BytesRef term = null;
// Explore the terms for this field
while ((term = termsEnum.next()) != null) {
    // Enumerate through documents, in this case only one
    DocsAndPositionsEnum docsEnum = termsEnum.docsAndPositions(null, null);
    int docIdEnum;
    while ((docIdEnum = docsEnum.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {
        for (int i = 0; i < docsEnum.freq(); i++) {
            System.out.println(term.utf8ToString() + " " + docIdEnum + " "
                    + docsEnum.nextPosition());
        }
    }
}
It'd be nice if Terms.iterator() returned an actual Iterable.
I am writing an XML validator that checks documents against their schemas using Rest-Assured. However, I am having trouble handling XSDs that reference other XSDs, because I retrieve the original XSD from a URL using GET.
I have been trying to implement my own parsing to consolidate the XSDs (Strings) into one XSD (String), but it is becoming a recursive monster, and extremely inefficient and difficult to maintain. The algorithm is at the end of the post.
I have two questions:
1) My problem is that I am using GET to retrieve the XSD, so the schemas it references are never resolved. Is there a way to GET all referenced XSDs and consolidate them using Rest-Assured? I wouldn't know how to go about this.
2) Is there a better way to handle includes in general? As you can see, my algorithm is very costly and overcomplicated (especially the ref attribute), and I'm sure something will break easily if I change my test cases.
My algorithm(Pseudo-Code to avoid complexity) so far is like the following:
boolean xmlValid(String xmlAddress, String xsdAddress) {
    LinkedList XSDList = new LinkedList();
    XSDList.add(xsdAddress);
    xsdString = getExternalXSDStrings(XSDList);
    try { // No pseudo-code here
        RestAssured.expect().
            statusCode(200).
            body(RestAssuredMatchers.matchesXsd(xsdString)).
            when().
            get(xmlAddress);
    } catch Exceptions {...}
}
String getExternalXSDStrings(LinkedList xsdReferences, String prevString) {
    LinkedList recursiveXSDReferences = new LinkedList();
    for (xsdRef : xsdReferences) {
        xsdAddress = "http://..." + xsdRef;
        Open InputStream from URL;
        while (inputLine != null) {
            if (prologFlag)
                ; // Do nothing, this is to avoid multiple prologs
            else if (includeFlag) {
                if (refFlag) Note reference;
                else recursiveXSDReferences.add(includeReference);
            } else if (refFlag) {
                referenceDefinition = Extract reference element definition;
                xsdString = xsdString + referenceDefinition;
            } else {
                xsdString = xsdString + inputLine;
            }
        }
        Close input stream;
    }
    xsdString = prevString + xsdString;
    if (xsdReferences.length > 0) return getExternalXSDStrings(recursiveXSDReferences, xsdString);
    else return xsdString;
}
Thank you very much in advance!
Perhaps you can make use of XmlConfig in Rest-Assured's detailed configuration. This gives you access to configure features, namespaces, etc. For example, if you want to disable the loading of external DTDs you could do:
given().config(RestAssured.config().xmlConfig(xmlConfig().disableLoadingOfExternalDtd())) ...
So perhaps you could look at how the disableLoadingOfExternalDtd method is implemented to get some hints.
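As for question 2, handling includes without any manual consolidation may also be possible outside Rest-Assured: the JDK's javax.xml.validation API resolves xs:include/xs:import relative to the schema's URL when you hand it the URL rather than a pre-fetched String. A minimal sketch, assuming the xmlAddress and xsdAddress values from your pseudo-code:
import java.net.URL;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class XsdUrlValidation {
    static void validate(String xmlAddress, String xsdAddress) throws Exception {
        SchemaFactory sf = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        // Passing the URL gives the parser a base URI, so relative
        // xs:include/xs:import references are fetched automatically.
        Schema schema = sf.newSchema(new URL(xsdAddress));
        Validator validator = schema.newValidator();
        // The systemId form lets the validator fetch the XML itself;
        // it throws SAXException if the document is invalid.
        validator.validate(new StreamSource(xmlAddress));
    }
}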
I have some data coming in from a RabbitMQ. The data is formatted as triples, so a message from the queue could look something like this:
:Tom foaf:knows :Anna
where : is the standard namespace of the ontology into which I want to import the data, though other prefixes from imports are also possible. The triples consist of subject, predicate, and object, and for each message I know which is which.
On the receiving side, I have a Java program with an OWLOntology object that represents the ontology where the newly arriving triples should be stored temporarily for reasoning and other stuff.
I kind of managed to get the triples into a Jena OntModel but that's where it ends. I tried to use OWLRDFConsumer but I could not find anything about how to apply it.
My function looks something like this:
public void addTriple(RDFTriple triple) {
    //OntModel model = ModelFactory.createOntologyModel();
    String subject = triple.getSubject().toString();
    subject = subject.substring(1, subject.length() - 1);
    Resource s = ResourceFactory.createResource(subject);
    String predicate = triple.getPredicate().toString();
    predicate = predicate.substring(1, predicate.length() - 1);
    Property p = ResourceFactory.createProperty(predicate);
    String object = triple.getObject().toString();
    object = object.substring(1, object.length() - 1);
    RDFNode o = ResourceFactory.createResource(object);
    Statement statement = ResourceFactory.createStatement(s, p, o);
    //model.add(statement);
    System.out.println(statement.toString());
}
I did the substring operations because the RDFTriple class wraps the triple's arguments in angle brackets (<...>), and the Statement constructor fails as a consequence.
If anybody could point me to an example that would be great. Maybe there's a much better way that I haven't thought of to achieve the same thing?
It seems like OWLRDFConsumer is generally used to connect RDF parsers with OWL-aware processors. The following code seems to work, though, as I've noted in the comments, there are a couple of places where I needed an argument and put in the only available thing I could.
The following code creates an ontology; declares two named individuals, Tom and Anna; declares an object property, likes; and declares a data property, age. Once these are declared, we print the ontology just to make sure it's what we expect.
Then it creates an OWLRDFConsumer. The consumer constructor needs an ontology, an AnonymousNodeChecker, and an OWLOntologyLoaderConfiguration. For the configuration, I just used one created by the no-argument constructor, and I think that's OK. For the node checker, the only convenient implementer is the TurtleParser, so I created one of those, passing null as the Reader. I think this will be OK, since the parser won't be called to read anything.
Then the consumer's handle(IRI,IRI,IRI) and handle(IRI,IRI,OWLLiteral) methods are used to process triples one at a time. We add the triples
:Tom :likes :Anna
:Tom :age 35
and then print out the ontology again to ensure that the assertions got added. Since you've already been getting the RDFTriples, you should be able to pull out the arguments that handle() needs. Before processing the triples, the ontology contained:
<NamedIndividual rdf:about="http://example.org/Tom"/>
and afterward this:
<NamedIndividual rdf:about="http://example.org/Tom">
<example:age rdf:datatype="http://www.w3.org/2001/XMLSchema#integer">35</example:age>
<example:likes rdf:resource="http://example.org/Anna"/>
</NamedIndividual>
Here's the code:
import java.io.Reader;
import org.coode.owlapi.rdfxml.parser.OWLRDFConsumer;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.IRI;
import org.semanticweb.owlapi.model.OWLDataFactory;
import org.semanticweb.owlapi.model.OWLDataProperty;
import org.semanticweb.owlapi.model.OWLEntity;
import org.semanticweb.owlapi.model.OWLNamedIndividual;
import org.semanticweb.owlapi.model.OWLObjectProperty;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyCreationException;
import org.semanticweb.owlapi.model.OWLOntologyLoaderConfiguration;
import org.semanticweb.owlapi.model.OWLOntologyManager;
import org.semanticweb.owlapi.model.OWLOntologyStorageException;
import uk.ac.manchester.cs.owl.owlapi.turtle.parser.TurtleParser;
public class ExampleOWLRDFConsumer {
    public static void main(String[] args) throws OWLOntologyCreationException, OWLOntologyStorageException {
        // Create an ontology.
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLDataFactory factory = manager.getOWLDataFactory();
        OWLOntology ontology = manager.createOntology();

        // Create some named individuals, an object property, and a data property.
        String ns = "http://example.org/";
        OWLNamedIndividual tom = factory.getOWLNamedIndividual(IRI.create(ns + "Tom"));
        OWLObjectProperty likes = factory.getOWLObjectProperty(IRI.create(ns + "likes"));
        OWLDataProperty age = factory.getOWLDataProperty(IRI.create(ns + "age"));
        OWLNamedIndividual anna = factory.getOWLNamedIndividual(IRI.create(ns + "Anna"));

        // Add the declaration axioms to the ontology so that the triples involving
        // these entities are understood (otherwise the triples will be ignored).
        for (OWLEntity entity : new OWLEntity[] { tom, likes, age, anna }) {
            manager.addAxiom(ontology, factory.getOWLDeclarationAxiom(entity));
        }

        // Print the ontology to see that the entities are declared.
        // The important result is
        //   <NamedIndividual rdf:about="http://example.org/Tom"/>
        // with no properties.
        manager.saveOntology(ontology, System.out);

        // Create an OWLRDFConsumer for the ontology. TurtleParser implements AnonymousNodeChecker,
        // so it was a candidate for use here (but I make no guarantees about whether it's
        // appropriate to do this). Since it won't be reading anything, we pass it a null Reader,
        // and this doesn't *seem* to cause any problem. Hopefully the default
        // OWLOntologyLoaderConfiguration is OK, too.
        OWLRDFConsumer consumer = new OWLRDFConsumer(ontology, new TurtleParser((Reader) null),
                new OWLOntologyLoaderConfiguration());

        // The consumer handles (IRI,IRI,IRI) and (IRI,IRI,OWLLiteral) triples.
        consumer.handle(tom.getIRI(), likes.getIRI(), anna.getIRI());
        consumer.handle(tom.getIRI(), age.getIRI(), factory.getOWLLiteral(35));

        // Print the ontology to see the new object and data property assertions.
        // The important part is now:
        //   <NamedIndividual rdf:about="http://example.org/Tom">
        //     <example:age rdf:datatype="http://www.w3.org/2001/XMLSchema#integer">35</example:age>
        //     <example:likes rdf:resource="http://example.org/Anna"/>
        //   </NamedIndividual>
        manager.saveOntology(ontology, System.out);
    }
}
In ONT-API, which is an extended Jena-based implementation of OWL-API, it is quite simple:
OWLOntologyManager manager = OntManagers.createONT();
OWLOntology ontology = manager.createOntology(IRI.create("http://example.com#test"));
((Ontology) ontology).asGraphModel()
        .createResource("http://example.com#clazz1")
        .addProperty(RDF.type, OWL.Class);
ontology.axioms(AxiomType.DECLARATION).forEach(System.out::println);
For more information, see the ONT-API wiki and examples.