How to add RDF triples to an OWLOntology? - java

I have some data coming in from a RabbitMQ. The data is formatted as triples, so a message from the queue could look something like this:
:Tom foaf:knows :Anna
where : is the standard namespace of the ontology into which I want to import the data, but other prefixes from imports are also possible. The triples consist of subject, property/predicate and object and I know in each message which is which.
On the receiving side, I have a Java program with an OWLOntology object that represents the ontology where the newly arriving triples should be stored temporarily for reasoning and other stuff.
I kind of managed to get the triples into a Jena OntModel but that's where it ends. I tried to use OWLRDFConsumer but I could not find anything about how to apply it.
My function looks something like this:
public void addTriple(RDFTriple triple) {
    //OntModel model = ModelFactory.createOntologyModel();
    String subject = triple.getSubject().toString();
    subject = subject.substring(1, subject.length() - 1);
    Resource s = ResourceFactory.createResource(subject);
    String predicate = triple.getPredicate().toString();
    predicate = predicate.substring(1, predicate.length() - 1);
    Property p = ResourceFactory.createProperty(predicate);
    String object = triple.getObject().toString();
    object = object.substring(1, object.length() - 1);
    RDFNode o = ResourceFactory.createResource(object);
    Statement statement = ResourceFactory.createStatement(s, p, o);
    //model.add(statement);
    System.out.println(statement.toString());
}
I did the substring operations because the RDFTriple class adds <> around the arguments of the triple and the constructor of Statement fails as a consequence.
If anybody could point me to an example that would be great. Maybe there's a much better way that I haven't thought of to achieve the same thing?

It seems like the OWLRDFConsumer is generally used to connect the RDF parsers with OWL-aware processors. The following code seems to work, though, as I've noted in the comments, there are a couple of places where I needed an argument and put in the only available thing I could.
The following code: creates an ontology; declares two named individuals, Tom and Anna; declares an object property, likes; and declares a data property, age. Once these are declared, we print the ontology just to make sure that it's what we expect.
Then it creates an OWLRDFConsumer. The consumer constructor needs an ontology, an AnonymousNodeChecker, and an OWLOntologyLoaderConfiguration. For the configuration, I just used one created by the no-argument constructor, and I think that's OK. For the node checker, the only convenient implementer is the TurtleParser, so I created one of those, passing null as the Reader. I think this will be OK, since the parser won't be called to read anything. Then the consumer's handle(IRI,IRI,IRI) and handle(IRI,IRI,OWLLiteral) methods are used to process triples one at a time. We add the triples
:Tom :likes :Anna
:Tom :age 35
and then print out the ontology again to ensure that the assertions got added. Since you've already been getting the RDFTriples, you should be able to pull out the arguments that handle() needs (see the sketch after the code below). Before processing the triples, the ontology contained:
<NamedIndividual rdf:about="http://example.org/Tom"/>
and afterward this:
<NamedIndividual rdf:about="http://example.org/Tom">
    <example:age rdf:datatype="http://www.w3.org/2001/XMLSchema#integer">35</example:age>
    <example:likes rdf:resource="http://example.org/Anna"/>
</NamedIndividual>
Here's the code:
import java.io.Reader;
import org.coode.owlapi.rdfxml.parser.OWLRDFConsumer;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.IRI;
import org.semanticweb.owlapi.model.OWLDataFactory;
import org.semanticweb.owlapi.model.OWLDataProperty;
import org.semanticweb.owlapi.model.OWLEntity;
import org.semanticweb.owlapi.model.OWLNamedIndividual;
import org.semanticweb.owlapi.model.OWLObjectProperty;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyCreationException;
import org.semanticweb.owlapi.model.OWLOntologyLoaderConfiguration;
import org.semanticweb.owlapi.model.OWLOntologyManager;
import org.semanticweb.owlapi.model.OWLOntologyStorageException;
import uk.ac.manchester.cs.owl.owlapi.turtle.parser.TurtleParser;
public class ExampleOWLRDFConsumer {
    public static void main(String[] args) throws OWLOntologyCreationException, OWLOntologyStorageException {
        // Create an ontology.
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLDataFactory factory = manager.getOWLDataFactory();
        OWLOntology ontology = manager.createOntology();

        // Create some named individuals and an object property.
        String ns = "http://example.org/";
        OWLNamedIndividual tom = factory.getOWLNamedIndividual( IRI.create( ns+"Tom" ));
        OWLObjectProperty likes = factory.getOWLObjectProperty( IRI.create( ns+"likes" ));
        OWLDataProperty age = factory.getOWLDataProperty( IRI.create( ns+"age" ));
        OWLNamedIndividual anna = factory.getOWLNamedIndividual( IRI.create( ns+"Anna" ));

        // Add the declaration axioms to the ontology so that the triples involving
        // these entities are understood (otherwise the triples will be ignored).
        for ( OWLEntity entity : new OWLEntity[] { tom, likes, age, anna } ) {
            manager.addAxiom( ontology, factory.getOWLDeclarationAxiom( entity ));
        }

        // Print the ontology to see that the entities are declared.
        // The important result is
        //   <NamedIndividual rdf:about="http://example.org/Tom"/>
        // with no properties.
        manager.saveOntology( ontology, System.out );

        // Create an OWLRDFConsumer for the ontology. TurtleParser implements AnonymousNodeChecker, so
        // it was a candidate for use here (but I make no guarantees about whether it's appropriate to
        // do this). Since it won't be reading anything, we pass it a null Reader, and this doesn't
        // *seem* to cause any problem. Hopefully the default OWLOntologyLoaderConfiguration is OK, too.
        OWLRDFConsumer consumer = new OWLRDFConsumer( ontology, new TurtleParser((Reader) null), new OWLOntologyLoaderConfiguration() );

        // The consumer handles (IRI,IRI,IRI) and (IRI,IRI,OWLLiteral) triples.
        consumer.handle( tom.getIRI(), likes.getIRI(), anna.getIRI() );
        consumer.handle( tom.getIRI(), age.getIRI(), factory.getOWLLiteral( 35 ));

        // Print the ontology to see the new object and data property assertions. The important
        // content is now:
        //   <NamedIndividual rdf:about="http://example.org/Tom">
        //     <example:age rdf:datatype="http://www.w3.org/2001/XMLSchema#integer">35</example:age>
        //     <example:likes rdf:resource="http://example.org/Anna"/>
        //   </NamedIndividual>
        manager.saveOntology( ontology, System.out );
    }
}
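Since you're already receiving RDFTriple objects from the queue, a small bridge into the consumer might look like this (a sketch only, not tested against your setup; the stripAngleBrackets helper is hypothetical and just mirrors the substring logic from the question):
void addTriple(RDFTriple triple, OWLRDFConsumer consumer) {
    // Works for (IRI, IRI, IRI) triples; a literal object would need the
    // handle(IRI, IRI, OWLLiteral) overload instead.
    IRI subject = IRI.create( stripAngleBrackets( triple.getSubject().toString() ));
    IRI predicate = IRI.create( stripAngleBrackets( triple.getPredicate().toString() ));
    IRI object = IRI.create( stripAngleBrackets( triple.getObject().toString() ));
    consumer.handle( subject, predicate, object );
}

static String stripAngleBrackets(String s) {
    // RDFTriple.toString() wraps IRIs in angle brackets; strip them if present.
    return (s.startsWith("<") && s.endsWith(">")) ? s.substring(1, s.length() - 1) : s;
}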

In ONT-API, which is an extended Jena-based implementation of OWL-API, it is quite simple:
OWLOntologyManager manager = OntManagers.createONT();
OWLOntology ontology = manager.createOntology(IRI.create("http://example.com#test"));
((Ontology)ontology).asGraphModel().createResource("http://example.com#clazz1").addProperty(RDF.type, OWL.Class);
ontology.axioms(AxiomType.DECLARATION).forEach(System.out::println);
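Tying this back to the original question, the same Jena-level model accepts arbitrary triples. A hedged sketch (the FOAF IRI and example namespace are illustrative, and whether a triple surfaces as an OWL axiom depends on how the entities involved are typed and declared):
// Add ":Tom foaf:knows :Anna" through the underlying Jena model.
Model m = ((Ontology) ontology).asGraphModel();
m.createResource("http://example.com#Tom")
        .addProperty(m.createProperty("http://xmlns.com/foaf/0.1/knows"),
                m.createResource("http://example.com#Anna"));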
For more information, see the ONT-API wiki and examples.

Related

Get actual field name from JPMML model's InputField

I have a scikit model that I'm using in my Java app using JPMML. I'm trying to set the InputFields using the name of the column that was used during training, but "inField.getName().getValue()" is obfuscated to "x{#}". Is there any way I could map "x{#}" back to the original feature/attribute name?
Map<FieldName, FieldValue> arguments = new LinkedHashMap<>();
for (InputField inField : patternEvaluator.getInputFields()) {
    int value = activeFeatures.contains(inField.getName().getValue()) ? 1 : 0;
    FieldValue inputFieldValue = inField.prepare(value);
    arguments.put(inField.getName(), inputFieldValue);
}
Map<FieldName, ?> results = patternEvaluator.evaluate(arguments);
Here's how I'm generating the model:
from sklearn2pmml import PMMLPipeline, sklearn2pmml
import os
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
import numpy as np

data = pd.read_csv('/pydata/training.csv')
X = data[data.keys()[:-1]].as_matrix()
y = data['classname'].as_matrix()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
estimators = [("read", RandomForestClassifier(n_jobs=5, n_estimators=200, max_features='auto'))]
pipe = PMMLPipeline(estimators)
pipe.fit(X_train, y_train)
pipe.active_fields = np.array(data.columns)
sklearn2pmml(pipe, "/pydata/model.pmml", with_repr=True)
Thanks
Does the PMML document contain actual field names at all? Open it in a text editor and see what the values of the /PMML/DataDictionary/DataField#name attributes are.
Your question indicates that the conversion from Scikit-Learn to PMML was incomplete, because it didn't include information about active field (aka input field) names. In that case they are assumed to be x1, x2, .., xn.
Your pipeline only includes the estimator; that is why the names are lost. You have to include all the preprocessing steps as well in order to get them into the PMML.
Let's assume you do not do any preprocessing at all; then this is probably what you need (parts of your code that this snippet still relies on are not repeated):
from sklearn_pandas import DataFrameMapper

nones = [(d, None) for d in data.columns]
mapper = DataFrameMapper(nones, df_out=True)
# The estimator step must be an estimator, not a nested list, so reuse the
# (name, estimator) tuples from the question's `estimators` list directly.
lm = PMMLPipeline([("mapper", mapper)] + estimators)
lm.fit(X_train, y_train)
sklearn2pmml(lm, "ScikitLearnNew.pmml", with_repr=True)
In case you do require some preprocessing on your data, instead of None you can use any other transformer (e.g. LabelBinarizer). But the preprocessing has to happen inside the pipeline in order to be included in the PMML.

Jena - why is MinCardinalityRestriction set "1 Thing"?

I'm trying to learn how to use Jena. I have found this code on the net. The code runs and it creates an ontology but I have some questions about it.
This is the code:
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import com.hp.hpl.jena.ontology.AllValuesFromRestriction;
import com.hp.hpl.jena.ontology.DatatypeProperty;
import com.hp.hpl.jena.ontology.IntersectionClass;
import com.hp.hpl.jena.ontology.MaxCardinalityRestriction;
import com.hp.hpl.jena.ontology.MinCardinalityRestriction;
import com.hp.hpl.jena.ontology.ObjectProperty;
import com.hp.hpl.jena.ontology.OntClass;
import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.RDFList;
import com.hp.hpl.jena.rdf.model.RDFNode;
import com.hp.hpl.jena.vocabulary.XSD;
public class people {
    public static void main(String[] args) {
        // Create an empty ontology model
        OntModel ontModel = ModelFactory.createOntologyModel();
        String ns = new String("http://www.example.com/onto1#");
        String baseURI = new String("http://www.example.com/onto1");

        // Create ‘Person’, ‘MalePerson’ and ‘FemalePerson’ classes
        OntClass person = ontModel.createClass(ns + "Person");
        OntClass malePerson = ontModel.createClass(ns + "MalePerson");
        OntClass femalePerson = ontModel.createClass(ns + "FemalePerson");

        // FemalePerson and MalePerson are subclasses of Person
        person.addSubClass(malePerson);
        person.addSubClass(femalePerson);

        // FemalePerson and MalePerson are disjoint
        malePerson.addDisjointWith(femalePerson);
        femalePerson.addDisjointWith(malePerson);

        // Create object property ‘hasSpouse’
        ObjectProperty hasSpouse = ontModel.createObjectProperty(ns + "hasSpouse");
        hasSpouse.setDomain(person);
        hasSpouse.setRange(person);

        // Create an AllValuesFromRestriction on hasSpouse:
        // MalePersons hasSpouse only FemalePerson
        AllValuesFromRestriction onlyFemalePerson = ontModel.createAllValuesFromRestriction(null, hasSpouse, femalePerson);

        // A MalePerson can have at most one spouse -> MaxCardinalityRestriction
        MaxCardinalityRestriction hasSpouseMaxCard = ontModel.createMaxCardinalityRestriction(null, hasSpouse, 1);

        // Constrain MalePerson with the two constraints defined above
        malePerson.addSuperClass(onlyFemalePerson);
        malePerson.addSuperClass(hasSpouseMaxCard);

        // Create class ‘MarriedPerson’
        OntClass marriedPerson = ontModel.createClass(ns + "MarriedPerson");
        MinCardinalityRestriction mincr = ontModel.createMinCardinalityRestriction(null, hasSpouse, 1);

        // A MarriedPerson is a Person AND has at least 1 spouse.
        // A list must be created that will hold the Person class
        // and the min cardinality restriction.
        RDFNode[] constraintsArray = { person, mincr };
        RDFList constraints = ontModel.createList(constraintsArray);

        // The two classes are combined into one intersection class
        IntersectionClass ic = ontModel.createIntersectionClass(null, constraints);

        // ‘MarriedPerson’ is declared as an equivalent of the
        // intersection class defined above
        marriedPerson.setEquivalentClass(ic);

        ontModel.write(System.out, "RDF/XML");
    }
}
When I open it in Protégé, I see for "marriedPerson": Person and (hasSpouse min 1 Thing).
The questions are:
How can I set the marriedPerson section in order to have Person and (hasSpouse min 1 Person)?
At the moment, after running the code, the ontology sets the marriedPerson section as equivalent to Person and hasSpouse min 1 Thing. Is it better to have Person and hasSpouse min 1 Person, or 1 Thing?
A class expression like hasSpouse min 1 Person is a qualified cardinality restriction. These didn't exist in the original OWL, but were added in OWL2. Jena doesn't officially support OWL2, so there's no convenient way to add the qualified cardinality restriction.
That said, Jena is an RDF API, not an OWL API, and it is just providing a wrapper around the RDF serialization of OWL ontologies. You can access that serialization directly and create the triples that encode a qualified cardinality restriction.
See How to add qualified cardinality in JENA.
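For illustration, a rough sketch of that triple-level encoding (hedged: it assumes a Jena release that ships the com.hp.hpl.jena.vocabulary.OWL2 vocabulary class and the XSDDatatype class; variable names match the code above):
// Build the blank node for "hasSpouse min 1 Person" by hand, following the
// OWL 2 RDF mapping (owl:minQualifiedCardinality plus owl:onClass).
Resource qcr = ontModel.createResource();
qcr.addProperty(RDF.type, OWL.Restriction);
qcr.addProperty(OWL.onProperty, hasSpouse);
qcr.addProperty(OWL2.onClass, person);
qcr.addProperty(OWL2.minQualifiedCardinality,
        ontModel.createTypedLiteral("1", XSDDatatype.XSDnonNegativeInteger));
The resulting qcr resource could then be used in place of mincr in the intersection list.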
1. (hasSpouse min 1 Person)
This requires a qualified minimal cardinality restriction (i.e. Q instead of N). In Jena there are two different methods to create these.
Replace
ontModel.createMinCardinalityRestriction(null, hasSpouse, 1);
by
ontModel.createMinCardinalityQRestriction(null, hasSpouse, 1, person);
2. Is it better to have Person and (hasSpouse min 1 Person) or 1 Thing?
You already have
hasSpouse.setRange(person);
which globally asserts that everything hasSpouse points to is a Person. Hence the qualification in the cardinality restriction is redundant; both versions are semantically equivalent.
The question to answer is: is the qualification a property of the restriction, or a property of the object property/role itself?

How to create a dynamic Interface with properties file at compile time?

The problem here is that the property file we use has insanely huge names as keys, and most of us run into incorrect key-naming issues. So it got me thinking: is there a way to generate the following interface based on the property file? Every change we make to the property file would auto-adjust the Properties interface. Or is there another solution?
Property File
A=Apple
B=Banana
C=Cherry
Should Generate The following Interface
interface Properties {
    public static final String A = "A"; // keys
    public static final String B = "B";
    public static final String C = "C";
}
So in my application code
String a_value = PROP.getString(Properties.A);
There is an old rule in programming (and not only in programming): if something looks beautiful, then most probably it is the right way to do it.
This approach does not look good, from my point of view.
The first thing:
Do not declare constants in interfaces. It violates encapsulation. Please check this article: http://en.wikipedia.org/wiki/Constant_interface
The second thing:
Use a prefix for the name part of the properties that are somehow special, say key_.
Then, when you load your properties file, iterate over the keys, extract those whose names start with key_, and use their values as you planned to use the constants in your question.
UPDATE
Assume we generate a huge properties file during the compilation process, using our Apache Ant script.
For example, let's say the properties file (myapp.properties) looks like this:
key_A = Apple
key_B = Banana
key_C = Cherry
anotherPropertyKey1 = blablabla1
anotherPropertyKey2 = blablabla2
The special properties we want to handle have key names starting with the key_ prefix.
So, we write the following code (please note, it is not optimized; it is just a proof of concept):
package propertiestest;

import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;
import java.util.Enumeration;
import java.util.HashSet;
import java.util.Properties;
import java.util.Set;

public class PropertiesTest {
    public static void main(String[] args) throws IOException {
        final String PROPERTIES_FILENAME = "myapp.properties";
        SpecialPropertyKeysStore spkStore =
                new SpecialPropertyKeysStore(PROPERTIES_FILENAME);
        System.out.println(Arrays.toString(spkStore.getKeysArray()));
    }
}

class SpecialPropertyKeysStore {
    private final Set<String> keys;

    public SpecialPropertyKeysStore(String propertiesFileName)
            throws FileNotFoundException, IOException {
        // prefix of the name of a special property key
        final String KEY_PREFIX = "key_";
        Properties propertiesHandler = new Properties();
        keys = new HashSet<>();
        try (InputStream input = new FileInputStream(propertiesFileName)) {
            propertiesHandler.load(input);
            Enumeration<?> enumeration = propertiesHandler.propertyNames();
            while (enumeration.hasMoreElements()) {
                String key = (String) enumeration.nextElement();
                if (key.startsWith(KEY_PREFIX)) {
                    keys.add(key);
                }
            }
        }
    }

    public boolean isKeyPresent(String keyName) {
        return keys.contains(keyName);
    }

    public String[] getKeysArray() {
        String[] strTypeParam = new String[0];
        return keys.toArray(strTypeParam);
    }
}
The SpecialPropertyKeysStore class filters and collects all special keys into its instance.
You can get an array of these keys, or check whether a particular key is present.
If you run this code, you will get:
[key_C, key_B, key_A]
It is a string representation of the returned array of special key names.
Change this code as you want to meet your requirements.
I would not generate a class or interface from properties, because you would lose the ability to:
document those properties, since hand-written constants are real Java elements with Javadocs
reference those properties in your code as plain old Java constants, with the compiler having full knowledge of them. Refactoring them would also be possible, while it would not be with automatically generated names.
You can also use enums, or create some special Property class with a name as its only (final) field. Then you only need a get method that takes a Properties, a Map, or whatever, as sketched below.
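For instance, one possible shape for such a Property holder (a sketch of the idea above; the names are illustrative):
public final class PropertyKey {
    private final String name;

    public PropertyKey(String name) {
        this.name = name;
    }

    // Look the key up in whatever Properties instance the caller supplies.
    public String get(java.util.Properties props) {
        return props.getProperty(name);
    }
}
Usage would then be something like new PropertyKey("A").get(props).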
As for your request, you can execute code with the maven-exec-plugin. You should simply create a main that reads your properties file and, for each key:
converts the key to a valid Java identifier (you can use isJavaIdentifierStart and isJavaIdentifierPart to replace invalid characters with a _)
writes your class/interface/whatever you like using plain old Java (and don't forget to escape any double quotes or backslashes!), as in the sketch below
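A minimal sketch of such a generator (the file names and output location are made up for illustration; a real build would write into a generated-sources directory):
import java.io.FileInputStream;
import java.io.FileWriter;
import java.io.IOException;
import java.io.InputStream;
import java.io.PrintWriter;
import java.util.Properties;

public class PropertiesInterfaceGenerator {
    public static void main(String[] args) throws IOException {
        Properties props = new Properties();
        try (InputStream in = new FileInputStream("myapp.properties")) {
            props.load(in);
        }
        try (PrintWriter out = new PrintWriter(new FileWriter("Properties.java"))) {
            out.println("interface Properties {");
            for (String key : props.stringPropertyNames()) {
                // Replace characters that are not legal in a Java identifier with '_'.
                StringBuilder id = new StringBuilder();
                for (int i = 0; i < key.length(); i++) {
                    char c = key.charAt(i);
                    boolean legal = (i == 0)
                            ? Character.isJavaIdentifierStart(c)
                            : Character.isJavaIdentifierPart(c);
                    id.append(legal ? c : '_');
                }
                // Escape backslashes and double quotes in the key literal.
                String literal = key.replace("\\", "\\\\").replace("\"", "\\\"");
                out.printf("    String %s = \"%s\";%n", id, literal);
            }
            out.println("}");
        }
    }
}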
Since it would be part of your build, say before building other classes that depend on those constants, I would recommend you create a specific Maven project to isolate this build step.
Still, I really wouldn't do that; I would use a POJO loaded by whatever you need (CDI, Spring, static initialization, etc.).

Convert String in manchester syntax to OWLAxiom object using owlapi 3 in Java

I'm writing a program in Java that exploits the OWL API version 3.1.0. I have a String that represents an axiom using the Manchester OWL Syntax, and I would like to convert this string into an OWLAxiom object, because I need to add the resulting axiom to an ontology using the method addAxiom(OWLOntology owl, OWLAxiom axiom) (a method of OWLOntologyManager). How can I do that?
How about something like the following Java code? Note that I'm parsing a complete, but small, ontology. If you're actually expecting just some Manchester text that won't be parsable as a complete ontology, you may need to prepend some standard prefix to everything. That's more of a concern for the particular application though. You'll also need to make sure that you're getting the kinds of axioms that you're interested in. There will, necessarily, be declaration axioms (e.g., that Person is a class), but you're more likely interested in TBox and ABox axioms, so I've added some notes about how you can get those.
One point to note is that if you're only trying to add the axioms to an existing ontology, that's what the OWLParser methods do, although the Javadoc doesn't make this particularly clear (in my opinion). The documentation about OWLParser says that
An OWLParser parses an ontology document into an OWL API object representation of an ontology.
and that's not strictly true. If the ontology argument to parse() already has content, and parse() doesn't remove it, then the ontology argument ends up being an object representation of a superset of the ontology document (it's the ontology document plus the prior content). Fortunately, though, this is exactly what you want in your case: you want to read a snippet of text and add it to an existing ontology.
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import org.coode.owlapi.manchesterowlsyntax.ManchesterOWLSyntaxParserFactory;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.io.OWLParser;
import org.semanticweb.owlapi.io.StreamDocumentSource;
import org.semanticweb.owlapi.model.OWLAxiom;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyCreationException;
import org.semanticweb.owlapi.model.OWLOntologyManager;
public class ReadManchesterString {
    public static void main(String[] args) throws OWLOntologyCreationException, IOException {
        // Get a manager and create an empty ontology, and a parser that
        // can read Manchester syntax.
        final OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        final OWLOntology ontology = manager.createOntology();
        final OWLParser parser = new ManchesterOWLSyntaxParserFactory().createParser( manager );

        // A small OWL ontology in the Manchester syntax.
        final String content = "" +
                "Prefix: so: <http://stackoverflow.com/q/21005908/1281433/>\n" +
                "Class: so:Person\n" +
                "Class: so:Young\n" +
                "\n" +
                "Class: so:Teenager\n" +
                "    SubClassOf: (so:Person and so:Young)\n" +
                "";

        // Create an input stream from the ontology, and use the parser to read its
        // contents into the ontology.
        try ( final InputStream in = new ByteArrayInputStream( content.getBytes() ) ) {
            parser.parse( new StreamDocumentSource( in ), ontology );
        }

        // Iterate over the axioms of the ontology. There are more than just the subclass
        // axiom, because the class declarations are also axioms. All in all, there are
        // four: the subclass axiom and three declarations of named classes.
        System.out.println( "== All Axioms: ==" );
        for ( final OWLAxiom axiom : ontology.getAxioms() ) {
            System.out.println( axiom );
        }

        // You can iterate over more specific axiom types, though. For instance,
        // you could just iterate over the TBox axioms, in which case you'll just
        // get the one subclass axiom. You could also iterate over
        // ontology.getABoxAxioms() to get ABox axioms.
        System.out.println( "== TBox Axioms: ==" );
        for ( final OWLAxiom axiom : ontology.getTBoxAxioms( false ) ) {
            System.out.println( axiom );
        }
    }
}
The output is:
== All Axioms: ==
SubClassOf(<http://stackoverflow.com/q/21005908/1281433/Teenager> ObjectIntersectionOf(<http://stackoverflow.com/q/21005908/1281433/Person> <http://stackoverflow.com/q/21005908/1281433/Young>))
Declaration(Class(<http://stackoverflow.com/q/21005908/1281433/Person>))
Declaration(Class(<http://stackoverflow.com/q/21005908/1281433/Young>))
Declaration(Class(<http://stackoverflow.com/q/21005908/1281433/Teenager>))
== TBox Axioms: ==
SubClassOf(<http://stackoverflow.com/q/21005908/1281433/Teenager> ObjectIntersectionOf(<http://stackoverflow.com/q/21005908/1281433/Person> <http://stackoverflow.com/q/21005908/1281433/Young>))
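If what arrives is just an axiom snippet rather than a whole ontology document, the prefix-prepending idea mentioned above might look like this (a sketch reusing the parser and ontology from the code above; so:Child is an invented class name):
// Prepend the prefix declarations so the parser can resolve the names,
// then parse the fragment straight into the existing ontology.
final String prefixes = "Prefix: so: <http://stackoverflow.com/q/21005908/1281433/>\n";
final String fragment = "Class: so:Child\n    SubClassOf: so:Person\n";
try ( final InputStream in = new ByteArrayInputStream( (prefixes + fragment).getBytes() ) ) {
    parser.parse( new StreamDocumentSource( in ), ontology );
}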

Get annotations from ObjectPropertyAssertion OWLAPI

I'm using the OWL API for OWL 2.0 and there is one thing I can't seem to figure out. I have an OWL/XML file and I would like to retrieve the annotations for my object property assertions. Here are snippets from my OWL/XML and Java code:
OWL:
<ObjectPropertyAssertion>
    <Annotation>
        <AnnotationProperty abbreviatedIRI="rdfs:comment"/>
        <Literal datatypeIRI="http://www.w3.org/2001/XMLSchema#string">Bob likes sushi</Literal>
    </Annotation>
    <ObjectProperty IRI="#Likes"/>
    <NamedIndividual IRI="#UserBob"/>
    <NamedIndividual IRI="#FoodSushi"/>
</ObjectPropertyAssertion>
Java:
OWLIndividual bob = manager.getOWLDataFactory().getOWLNamedIndividual(IRI.create(base + "#UserBob"));
OWLObjectProperty likes = manager.getOWLDataFactory().getOWLObjectProperty(IRI.create(base + "#Likes"));
OWLIndividual sushi = factory.getOWLNamedIndividual(IRI.create(base + "#FoodSushi"));
OWLObjectPropertyAssertionAxiom ax = factory.getOWLObjectPropertyAssertionAxiom(likes, bob, sushi);
for (OWLAnnotation a : ax.getAnnotations()) {
    System.out.println(a.getValue());
}
The problem is, nothing gets returned even though the OWL file states there is one rdfs:comment. It has been troublesome to find any documentation on how to retrieve this information. Adding axioms with comments or whatever is not an issue.
In order to retrieve the annotations, you need to walk over the axioms of interest in the ontology. The factory's getOWLObjectPropertyAssertionAxiom(...) call creates a fresh, unannotated axiom object; as noted in the comments, it is not possible to retrieve your annotated axiom that way. Here is the code, adapted from the OWL API guide:
// Get rdfs:comment
final OWLAnnotationProperty comment = factory.getRDFSComment();

// Create a walker
OWLOntologyWalker walker =
        new OWLOntologyWalker(Collections.singleton(ontology));

// Define what's going to be visited
OWLOntologyWalkerVisitor<Object> visitor =
        new OWLOntologyWalkerVisitor<Object>(walker) {
            // In your case you visit the annotations made with rdfs:comment
            // over the object property assertions
            @Override
            public Object visit(OWLObjectPropertyAssertionAxiom axiom) {
                // Print them
                System.out.println(axiom.getAnnotations(comment));
                return null;
            }
        };

// Walks over the structure - triggers the walk
walker.walkStructure(visitor);
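Alternatively, instead of a walker, you can iterate over the object property assertion axioms directly (a sketch using the same OWL API 3.x types as above; AxiomType comes from org.semanticweb.owlapi.model):
for (OWLObjectPropertyAssertionAxiom axiom :
        ontology.getAxioms(AxiomType.OBJECT_PROPERTY_ASSERTION)) {
    // Print any rdfs:comment annotations carried by this assertion.
    for (OWLAnnotation annotation : axiom.getAnnotations(comment)) {
        System.out.println(annotation.getValue());
    }
}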
