Basic RDFS inferencing with the Jena API - java

I'm currently following the Jena API inferencing tutorial:
https://jena.apache.org/documentation/inference/
and as an exercise to test my understanding, I'd like to rewrite the first example, which demonstrates trivial RDFS reasoning over a programmatically built model:
import com.hp.hpl.jena.rdf.model.*;
import com.hp.hpl.jena.vocabulary.*;

public class Test1 {
    static public void main(String... argv) {
        String NS = "foo:";
        Model m = ModelFactory.createDefaultModel();
        Property p = m.createProperty(NS, "p");
        Property q = m.createProperty(NS, "q");
        m.add(p, RDFS.subPropertyOf, q);
        m.createResource(NS + "x").addProperty(p, "bar");
        InfModel im = ModelFactory.createRDFSModel(m);
        Resource x = im.getResource(NS + "x");
        // verify that property q of x is "bar" (which follows
        // from x having property p, and p being a subproperty of q)
        System.out.println("Statement: " + x.getProperty(q));
    }
}
to something which does the same, but with the model read from this Turtle file instead (which is my own translation of the above, and thus might be buggy):
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>.
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>.
@prefix foo: <http://example.org/foo#>.

foo:p a rdf:Property.
foo:q a rdf:Property.
foo:p rdfs:subPropertyOf foo:q.
foo:x foo:p "bar".
with this code:
import com.hp.hpl.jena.rdf.model.*;

public class Test2 {
    static public void main(String... argv) {
        String NS = "foo:";
        Model m = ModelFactory.createDefaultModel();
        m.read("foo.ttl");
        InfModel im = ModelFactory.createRDFSModel(m);
        Property q = im.getProperty(NS + "q");
        Resource x = im.getResource(NS + "x");
        System.out.println("Statement: " + x.getProperty(q));
    }
}
which doesn't seem to be the right approach (I suspect in particular that my extraction of the q property is somehow not right). What am I doing wrong?

The problem is the namespace. With String NS = "foo:", the call

m.createResource(NS + "x")

creates a resource whose URI is literally "foo:x", whereas in the Turtle version foo:x expands to http://example.org/foo#x, because the foo: prefix is resolved against its @prefix declaration when the file is parsed. You can see the difference by printing the model with im.write(System.out, "TTL");. The fix is to change NS = "foo:" to NS = "http://example.org/foo#".
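For concreteness, here is a corrected Test2 (a sketch, assuming foo.ttl sits in the working directory and contains the @prefix declarations shown above):

import com.hp.hpl.jena.rdf.model.*;

public class Test2 {
    static public void main(String... argv) {
        // must match the namespace bound to foo: in the Turtle file
        String NS = "http://example.org/foo#";
        Model m = ModelFactory.createDefaultModel();
        m.read("foo.ttl");
        InfModel im = ModelFactory.createRDFSModel(m);
        Property q = im.getProperty(NS + "q");
        Resource x = im.getResource(NS + "x");
        // should now print the inferred statement [foo:x, foo:q, "bar"]
        System.out.println("Statement: " + x.getProperty(q));
    }
}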

Related

I can't find individuals in Protégé that I create in Eclipse

I've made an ontology in Protégé and I want to create new individuals using Eclipse. I'm using this code:
public class testOwl2 {
    public static final String SOURCE_URL = "http://www.semanticweb.org/nira/ontologies/2022/3/untitled-ontology-9";

    // where we've stashed it on disk for the time being
    protected static final String SOURCE_FILE = "C:\\Users\\benni\\Ontologies\\L'ontologie classique.owl";

    // the namespace of the ontology
    public static final String NS = SOURCE_URL + "#";

    /***********************************/
    /* External signature methods      */
    /***********************************/

    public void run() {
        OntModel m = ModelFactory.createOntologyModel( OntModelSpec.OWL_MEM );
        loadModel( m );

        // get an OntClass reference to one of the classes in the model
        // note: ideally, we would delegate this step to Jena's schemagen tool
        OntClass Patient = m.getOntClass( NS + "Patient" );
        //OntProperty Patient_relation = m.getObjectProperty( NS + "Has_sign" );

        // similarly a reference to the attack duration property,
        // and again, using schemagen would be better
        OntProperty Patient_Crea = m.getDatatypeProperty( NS + "Creatinine_value" );

        // create an instance of the attack class to represent the current attack
        Individual Patient1 = m.createIndividual( NS + "P4", Patient );

        // add a duration to the attack
        Patient1.addProperty( Patient_Crea, m.createTypedLiteral( 10 ) );
        m.prepare();

        // finally, print out the model to show that we have some data
        m.write( System.out, "Turtle" );
    }

    /***********************************/
    /* Internal implementation methods */
    /***********************************/

    /** read the ontology and add it as a sub-model of the given ontmodel */
    protected void loadModel( OntModel m ) {
        FileManager.get().getLocationMapper().addAltEntry( SOURCE_URL, SOURCE_FILE );
        Model baseOntology = FileManager.get().loadModel( SOURCE_URL );
        m.addSubModel( baseOntology );

        // for compactness, add a prefix declaration st: (for Sam Thomas)
        m.setNsPrefix( "st", NS );
    }

    public static void main( String[] args ) {
        new testOwl2().run();
    }
}
Output
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix st: <http://www.semanticweb.org/nira/ontologies/2022/3/untitled-ontology-9#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .

st:P4
    a st:Patient ;
    st:Creatinine_value "10"^^xsd:int .
But in the .owl file (in Protégé) I don't have any of the individuals that I create in my ontology. Could you tell me what's wrong with this code and where I can find these individuals? Thank you.
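Note that run() only prints the model to System.out and never writes it back to disk, so the .owl file that Protégé opens is unchanged. A minimal sketch of persisting the updated model after the individual has been added (the output path is hypothetical; writing to a copy rather than over the original is safer while experimenting):

import java.io.FileOutputStream;
import java.io.OutputStream;

// ... at the end of run():
try (OutputStream out = new FileOutputStream(
        "C:\\Users\\benni\\Ontologies\\ontologie-with-individuals.owl")) {
    m.write(out, "RDF/XML"); // RDF/XML is the usual serialization for .owl files
}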

How can I feed a sparse placeholder in a TensorFlow model from Java

I'm trying to calculate the best match for a given address with the kNN algorithm in TensorFlow, which works pretty well, but when I try to export the model and use it in our Java environment, I get stuck on how to feed the sparse placeholders from Java.
Here is a pretty much stripped-down version of the Python part, which returns the smallest distance between the test name and the best reference name. So far this works as expected. When I export the model and import it in my Java program, it always returns the same value (the distance of the placeholders' default). I assume that the Python function sparse_from_word_vec(word_vec) isn't in the model, which would totally make sense to me, but then how should I make this sparse tensor? My input is a single string and I need to create a fitting sparse tensor (value) to calculate the distance. I also searched for a way to generate the sparse tensor on the Java side, but without success.
import tensorflow as tf
import pandas as pd

d = {'NAME': ['max mustermann',
              'erika musterfrau',
              'joseph haydn',
              'johann sebastian bach',
              'wolfgang amadeus mozart']}
df = pd.DataFrame(data=d)

input_name = tf.placeholder_with_default('max musterman', (), name='input_name')
output_dist = tf.placeholder(tf.float32, (), name='output_dist')
test_name = tf.sparse_placeholder(dtype=tf.string)
ref_names = tf.sparse_placeholder(dtype=tf.string)
output_dist = tf.edit_distance(test_name, ref_names, normalize=True)

def sparse_from_word_vec(word_vec):
    num_words = len(word_vec)
    indices = [[xi, 0, yi] for xi, x in enumerate(word_vec) for yi, y in enumerate(x)]
    chars = list(''.join(word_vec))
    return tf.SparseTensorValue(indices, chars, [num_words, 1, 1])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    t_data_names = tf.constant(df['NAME'])
    reference_names = [el.decode('UTF-8') for el in (t_data_names.eval())]
    sparse_ref_names = sparse_from_word_vec(reference_names)
    sparse_test_name = sparse_from_word_vec([str(input_name.eval().decode('utf-8'))] * 5)
    feeddict = {test_name: sparse_test_name,
                ref_names: sparse_ref_names,
                }
    output_dist = sess.run(output_dist, feed_dict=feeddict)
    output_dist = tf.reduce_min(output_dist, 0)
    print(output_dist.eval())
    tf.saved_model.simple_save(sess,
                               "model-simple",
                               inputs={"input_name": input_name},
                               outputs={"output_dist": output_dist})
And here is my Java method:
public void run(ApplicationArguments args) throws Exception {
    log.info("Loading model...");
    SavedModelBundle savedModelBundle = SavedModelBundle.load("/model", "serve");
    byte[] test_name = "Max Mustermann".toLowerCase().getBytes("UTF-8");
    List<Tensor<?>> output = savedModelBundle.session().runner()
            .feed("input_name", Tensor.<String>create(test_name))
            .fetch("output_dist")
            .run();
    System.out.println("Nearest distance: " + output.get(0).floatValue());
}
I was able to get your example working. I have a couple of comments on your Python code before diving in.
You use the variable output_dist for three different value types throughout the code. I'm not a Python expert, but I think it's bad practice. You also never actually use the input_name placeholder, except for exporting it as an input. Lastly, tf.saved_model.simple_save is deprecated, and you should use tf.saved_model.Builder instead.
Now for the solution.
Looking at the libtensorflow jar file using the command jar tvf libtensorflow-x.x.x.jar (thanks to this post), you can see that there are no useful bindings for creating a sparse tensor (maybe file a feature request?). So we have to change the input to a dense tensor and then add operations to the graph that convert it to sparse. In your original code the sparse conversion was on the Python side, which means that the loaded graph in Java wouldn't have any ops for it.
Here is the new python code:
import tensorflow as tf
import pandas as pd

def model():
    # use dense tensors then convert to sparse for edit_distance
    test_name = tf.placeholder(shape=(None, None), dtype=tf.string, name="test_name")
    ref_names = tf.placeholder(shape=(None, None), dtype=tf.string, name="ref_names")

    # Java does not play well with the empty character so use "/" instead
    test_name_sparse = tf.contrib.layers.dense_to_sparse(test_name, "/")
    ref_names_sparse = tf.contrib.layers.dense_to_sparse(ref_names, "/")

    output_dist = tf.edit_distance(test_name_sparse, ref_names_sparse, normalize=True)

    # output the index to the closest ref name
    min_idx = tf.argmin(output_dist)
    return test_name, ref_names, min_idx

# Python code to be replicated in Java
def pad_string(s, max_len):
    return s + ["/"] * (max_len - len(s))

d = {'NAME': ['joseph haydn',
              'max mustermann',
              'erika musterfrau',
              'johann sebastian bach',
              'wolfgang amadeus mozart']}
df = pd.DataFrame(data=d)
input_name = 'max musterman'

# pad dense tensor input
max_len = max([len(n) for n in df['NAME']])
test_input = [list(input_name)] * len(df['NAME'])
# no need to pad, all same length
ref_input = list(map(lambda x: pad_string(x, max_len), [list(n) for n in df['NAME']]))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    test_name, ref_names, min_idx = model()

    # run a test to make sure the model works
    feeddict = {test_name: test_input,
                ref_names: ref_input,
                }
    out = sess.run(min_idx, feed_dict=feeddict)
    print("test output:", out)

    # save the model with the new Builder API
    signature_def_map = {
        "predict": tf.saved_model.signature_def_utils.predict_signature_def(
            inputs={"test_name": test_name, "ref_names": ref_names},
            outputs={"min_idx": min_idx})
    }
    builder = tf.saved_model.Builder("model")
    builder.add_meta_graph_and_variables(sess, ["serve"], signature_def_map=signature_def_map)
    builder.save()
And here is the Java to load and run it. There is probably a lot of room for improvement here (Java isn't my main language), but it gives you the idea.
import org.tensorflow.Graph;
import org.tensorflow.Session;
import org.tensorflow.Tensor;
import org.tensorflow.TensorFlow;
import org.tensorflow.SavedModelBundle;
import java.util.ArrayList;
import java.util.List;
import java.util.Arrays;

public class Test {
    public static byte[][] makeTensor(String s, int padding) throws Exception {
        int len = s.length();
        int extra = padding - len;
        byte[][] ret = new byte[len + extra][];
        for (int i = 0; i < len; i++) {
            String cur = "" + s.charAt(i);
            byte[] cur_b = cur.getBytes("UTF-8");
            ret[i] = cur_b;
        }
        for (int i = 0; i < extra; i++) {
            byte[] cur = "/".getBytes("UTF-8");
            ret[len + i] = cur;
        }
        return ret;
    }

    public static byte[][][] makeTensor(List<String> l, int padding) throws Exception {
        byte[][][] ret = new byte[l.size()][][];
        for (int i = 0; i < l.size(); i++) {
            ret[i] = makeTensor(l.get(i), padding);
        }
        return ret;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Loading model...");
        SavedModelBundle savedModelBundle = SavedModelBundle.load("model", "serve");

        List<String> str_test_name = Arrays.asList("Max Mustermann",
                "Max Mustermann",
                "Max Mustermann",
                "Max Mustermann",
                "Max Mustermann");
        List<String> names = Arrays.asList("joseph haydn",
                "max mustermann",
                "erika musterfrau",
                "johann sebastian bach",
                "wolfgang amadeus mozart");

        // get the max length for each array
        int pad1 = str_test_name.get(0).length();
        int pad2 = 0;
        for (String var : names) {
            if (var.length() > pad2)
                pad2 = var.length();
        }

        byte[][][] test_name = makeTensor(str_test_name, pad1);
        byte[][][] ref_names = makeTensor(names, pad2);

        // use a try-with-resources block so the close method is called
        try (Tensor t_test_name = Tensor.<String>create(test_name)) {
            try (Tensor t_ref_names = Tensor.<String>create(ref_names)) {
                List<Tensor<?>> output = savedModelBundle.session().runner()
                        .feed("test_name", t_test_name)
                        .feed("ref_names", t_ref_names)
                        .fetch("ArgMin")
                        .run();
                System.out.println("Nearest distance: " + output.get(0).longValue());
            }
        }
    }
}

OWL Class Expression for Data Property

In my ontology, I have an individual with this data property assertion:
hasName "somaName"^^string
However, when I build a class expression and send it to the reasoner to get the instances, I get an empty set with the following query:
OWLClassExpression x = schema.getFactory().getOWLDataHasValue(schema.getDataProperty("hasName"), schema.getFactory().getOWLLiteral("somaName"));
System.out.println(reasoner.getInstances(x, true));
getDataProperty is just a small helper method:
public OWLDataProperty getDataProperty(String dataProperty) {
    return factory.getOWLDataProperty("#" + dataProperty, pm);
}
The following code snippet works; compare it to your code to see what's different (note, for instance, that the data property is created from a full IRI). You should also use a reasoner that supports this type of construct (HermiT does).
//Initiate everything
OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
String base = "http://www.example.org/";
OWLOntology ontology = manager.createOntology(IRI.create(base + "ontology.owl"));
OWLDataFactory factory = manager.getOWLDataFactory();

//Add the stuff to the ontology
OWLDataProperty hasName = factory.getOWLDataProperty(IRI.create(base + "hasName"));
OWLNamedIndividual john = factory.getOWLNamedIndividual(IRI.create(base + "john"));
OWLLiteral lit = factory.getOWLLiteral("John");
OWLDataPropertyAssertionAxiom ax =
        factory.getOWLDataPropertyAssertionAxiom(hasName, john, lit);
AddAxiom addAx = new AddAxiom(ontology, ax);
manager.applyChange(addAx);

//Init of the reasoner
//I use HermiT because it supports the construct of interest
OWLReasonerFactory reasonerFactory = new Reasoner.ReasonerFactory();
OWLReasoner reasoner = reasonerFactory.createReasoner(ontology);
reasoner.precomputeInferences();

//Prepare the expression for the query
OWLDataProperty p = factory.getOWLDataProperty(IRI.create(base + "hasName"));
OWLClassExpression ex =
        factory.getOWLDataHasValue(p, factory.getOWLLiteral("John"));

//Print out the results, John is inside
Set<OWLNamedIndividual> result = reasoner.getInstances(ex, true).getFlattened();
for (OWLNamedIndividual owlNamedIndividual : result) {
    System.out.println(owlNamedIndividual);
}

Read restriction values using Jena

I have a restriction defined as follows:
hasYear some integer[minLength 2, maxLength 4, >=1995, <=2012]
How can I read the individual values defined in the restriction using Jena?
You can use different approaches. First of all, you can traverse the Jena Model with the following code:
model.read(...);
StmtIterator si = model.listStatements(
        model.getResource("required property uri"), RDFS.range, (RDFNode) null);
while (si.hasNext()) {
    Statement stmt = si.next();
    Resource range = stmt.getObject().asResource();
    // get restrictions collection
    Resource nextNode = range.getPropertyResourceValue(OWL2.withRestrictions);
    for (;;) {
        Resource restr = nextNode.getPropertyResourceValue(RDF.first);
        if (restr == null)
            break;
        StmtIterator pi = restr.listProperties();
        while (pi.hasNext()) {
            Statement restrStmt = pi.next();
            Property restrType = restrStmt.getPredicate();
            Literal value = restrStmt.getObject().asLiteral();
            // print type and value for each restriction
            System.out.println(restrType + " = " + value);
        }
        // go to the next element of collection
        nextNode = nextNode.getPropertyResourceValue(RDF.rest);
    }
}
If you use the OntModel representation of the RDF graph, the code can be simplified by using
model.listRestrictions()
ontClass.asRestriction()
etc. A good example of such an approach exists (thanks to Ian Dickinson); a sketch follows below.
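A minimal sketch of that OntModel route (the filename ontology.ttl is hypothetical):

OntModel ont = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);
ont.read("ontology.ttl");

// iterate all owl:Restriction class expressions in the model
ExtendedIterator<Restriction> it = ont.listRestrictions();
while (it.hasNext()) {
    Restriction r = it.next();
    // narrow to someValuesFrom restrictions such as "hasYear some ..."
    if (r.isSomeValuesFromRestriction()) {
        SomeValuesFromRestriction svf = r.asSomeValuesFromRestriction();
        System.out.println(svf.getOnProperty() + " some " + svf.getSomeValuesFrom());
    }
}

Note that the facet values (>=1995, <=2012, etc.) still live in the owl:withRestrictions list on the datatype, so the list-walking code above applies to the resource returned by getSomeValuesFrom().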
Another way is to use a SPARQL 1.1 query with the same meaning:
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?datatype ?restr_type ?restr_value {
    ?prop rdfs:range ?range .
    ?range owl:onDatatype ?datatype ;
           owl:withRestrictions ?restr_list .
    ?restr_list rdf:rest*/rdf:first ?restr .
    ?restr ?restr_type ?restr_value
}
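For completeness, a minimal sketch of running that query from Java with Jena ARQ (queryString is assumed to hold the SPARQL text above, and model the parsed ontology):

Query query = QueryFactory.create(queryString);
QueryExecution qe = QueryExecutionFactory.create(query, model);
try {
    ResultSet results = qe.execSelect();
    while (results.hasNext()) {
        QuerySolution sol = results.next();
        // one row per facet, e.g. xsd:minInclusive = 1995
        System.out.println(sol.get("restr_type") + " = " + sol.get("restr_value"));
    }
} finally {
    qe.close();
}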

How to find min-max occurrence of an element in xsd using xsom

I want to find out the minimum and maximum occurrence of an XSD element using XSOM for Java. I got this code to find complex elements. Can anyone help me find the occurrence of all the XSD elements? At least give me a code snippet with the class and method to be used to find the occurrence.
String xmlfile = "Calendar.xsd";
String newline = System.getProperty("line.separator");
XSOMParser parser = new XSOMParser();
parser.parse(new File(xmlfile));
XSSchemaSet sset = parser.getResult();
XSSchema s = sset.getSchema(1);
if (s.getTargetNamespace().equals("")) // this is the ns with all the stuff in
{
    // try ElementDecls
    Iterator jtr = s.iterateElementDecls();
    while (jtr.hasNext()) {
        XSElementDecl e = (XSElementDecl) jtr.next();
        System.out.print("got ElementDecls " + e.getName());
        // ok we've got a CALENDAR.. what next?
        // not this anyway
        /*
         * XSParticle[] particles = e.asElementDecl() for (final XSParticle p :
         * particles) { final XSTerm pterm = p.getTerm(); if
         * (pterm.isElementDecl()) { final XSElementDecl ed =
         * pterm.asElementDecl(); System.out.println(ed.getName()); }
         */
    }

    // try all complex types in the schema
    Iterator<XSComplexType> ctiter = s.iterateComplexTypes();
    while (ctiter.hasNext()) {
        // this will be a eSTATUS. Let's type and get the extension to
        // see it's an ENUM
        XSComplexType ct = (XSComplexType) ctiter.next();
        String typeName = ct.getName();
        System.out.println(typeName + newline);

        // as Content
        XSContentType content = ct.getContentType();
        // now what?

        // as Particle?
        XSParticle p2 = content.asParticle();
        if (null != p2) {
            System.out.print("We got particle thing!" + newline);
            // might would be good if we got here but we never do :-(
        }

        // try complex type Element Decls
        List<XSElementDecl> el = ct.getElementDecls();
        for (XSElementDecl ed : el) {
            System.out.print("We got ElementDecl! " + ed.getName() + newline);
            // would be good if we got here but we never do :-(
        }

        Collection<? extends XSAttributeUse> c = ct.getAttributeUses();
        Iterator<? extends XSAttributeUse> i = c.iterator();
        while (i.hasNext()) {
            XSAttributeDecl attributeDecl = i.next().getDecl();
            System.out.println("type: " + attributeDecl.getType());
            System.out.println("name: " + attributeDecl.getName());
        }
    }
}
Assuming you are referring to com.sun.xml.xsom, the occurrence bounds are specific to a particle (elements are not the only particles). The relevant APIs are the maxOccurs and minOccurs accessors on XSParticle.
For one example of how to traverse a schema tree using XSOM, take a look here: it shows how the visitor pattern works with XSOM (for which Sun built a package). A short sketch of reading the bounds directly follows.
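A minimal sketch (assuming Calendar.xsd from the question, and that the content models of interest are plain model groups; nested groups would need a recursive walk or the visitor pattern mentioned above):

import com.sun.xml.xsom.*;
import com.sun.xml.xsom.parser.XSOMParser;
import java.io.File;

public class OccurrencePrinter {
    public static void main(String[] args) throws Exception {
        XSOMParser parser = new XSOMParser();
        parser.parse(new File("Calendar.xsd"));
        XSSchemaSet sset = parser.getResult();
        for (XSSchema schema : sset.getSchemas()) {
            for (XSComplexType ct : schema.getComplexTypes().values()) {
                XSParticle particle = ct.getContentType().asParticle();
                if (particle == null) continue; // simple content has no particle
                XSTerm term = particle.getTerm();
                if (!term.isModelGroup()) continue;
                for (XSParticle child : term.asModelGroup().getChildren()) {
                    if (child.getTerm().isElementDecl()) {
                        // the occurrence bounds sit on the particle wrapping the element
                        System.out.println(child.getTerm().asElementDecl().getName()
                                + " minOccurs=" + child.getMinOccurs()
                                + " maxOccurs=" + child.getMaxOccurs());
                    }
                }
            }
        }
    }
}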
