Given an index created with Lucene 8, but without knowledge of the fields used, how can I programmatically extract all the fields? (I'm aware that the Luke browser can be used interactively, thanks to @andrewjames, and of the examples for using the latest version of Lucene.) The scenario is that, during a development phase, I have to read indexes without prescribed schemas.
I'm using
IndexReader reader = DirectoryReader.open(FSDirectory.open(Paths.get(index)));
IndexSearcher searcher = new IndexSearcher(reader);
The reader has methods such as:
reader.getDocCount(field);
but this requires knowing the fields in advance.
I understand that documents in the index may be indexed with different fields; I'm quite prepared to iterate over all documents and extract the fields from each (these indexes are not huge).
I'm using Lucene 8.5.*, so posts and tutorials based on earlier Lucene versions may not work.
You can access basic field info as follows:
import java.util.List;
import java.io.IOException;
import java.nio.file.Paths;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexableField;
import org.apache.lucene.store.FSDirectory;
public class IndexDataExplorer {

    private static final String INDEX_PATH = "/path/to/index/directory";

    public static void doSearch() throws IOException {
        IndexReader reader = DirectoryReader.open(FSDirectory.open(Paths.get(INDEX_PATH)));
        for (int i = 0; i < reader.numDocs(); i++) {
            Document doc = reader.document(i);
            List<IndexableField> fields = doc.getFields();
            for (IndexableField field : fields) {
                // use these to get field-related data:
                //field.name();
                //field.fieldType().toString();
            }
        }
    }
}
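If you mainly need the field names and their index-time metadata rather than per-document values, it may be simpler to read the index's merged FieldInfos instead of iterating stored documents. A minimal sketch, assuming Lucene 8.x where FieldInfos.getMergedFieldInfos(IndexReader) is available (class and path names are placeholders):
import java.io.IOException;
import java.nio.file.Paths;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.FieldInfo;
import org.apache.lucene.index.FieldInfos;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.store.FSDirectory;
public class IndexFieldLister {

    public static void listFields(String indexPath) throws IOException {
        try (IndexReader reader = DirectoryReader.open(FSDirectory.open(Paths.get(indexPath)))) {
            // Merge the per-segment field infos into a single view of the whole index
            FieldInfos fieldInfos = FieldInfos.getMergedFieldInfos(reader);
            for (FieldInfo fieldInfo : fieldInfos) {
                System.out.println(fieldInfo.name + " (indexOptions=" + fieldInfo.getIndexOptions() + ")");
            }
        }
    }
}
This reports every field that occurs anywhere in the index, including fields that only appear on some documents.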
Related
I'm relatively new to Lucene and want to implement my own CustomScoreQuery, since I need it for my university work.
I used the Lucene demo as my starting point to index all documents in a folder, and I want to score them using my own algorithm.
Here are the links to the source code of the demo.
https://lucene.apache.org/core/7_1_0/demo/src-html/org/apache/lucene/demo/IndexFiles.html
https://lucene.apache.org/core/7_1_0/demo/src-html/org/apache/lucene/demo/SearchFiles.html
I'm checking my index with Luke (the Lucene Toolbox Project) and it looks as expected. My problem occurs when accessing it.
package CustomModul;
import java.io.IOException;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Terms;
import org.apache.lucene.queries.CustomScoreProvider;
import org.apache.lucene.queries.CustomScoreQuery;
import org.apache.lucene.search.Query;
public class CountingQuery extends CustomScoreQuery {

    public CountingQuery(Query subQuery) {
        super(subQuery);
    }

    public class CountingQueryScoreProvider extends CustomScoreProvider {

        String _field;

        public CountingQueryScoreProvider(String field, LeafReaderContext context) {
            super(context);
            _field = field;
        }

        public float customScore(int doc, float subQueryScore, float valSrcScores[]) throws IOException {
            IndexReader r = context.reader();
            // getTermVector returns null
            Terms vec = r.getTermVector(doc, _field);
            // *TO-DO* Algorithm
            return (float) 1.0f;
        }
    }

    protected CustomScoreProvider getCustomScoreProvider(
            LeafReaderContext context) throws IOException {
        return new CountingQueryScoreProvider("contents", context);
    }
}
In my customScore function I access the index as described in most tutorials. I should get access to the index using getTermVector, but it returns null.
In other posts I read that this could be caused by contents being a TextField, which is how it is declared in the Lucene demo IndexFiles.
After trying a lot of different approaches I came to the conclusion that I need help and here I am.
My question now is whether I need to adjust the indexing process (and if so, how), or whether there is another way to access the index in the ScoreProvider other than getTermVector.
I was able to solve the problem myself and want to share my solution in case someone finds this question looking for answers.
The problem was indeed caused by contents being a TextField in
https://lucene.apache.org/core/7_1_0/demo/src-html/org/apache/lucene/demo/IndexFiles.html
To solve this problem, one has to construct their own Field, which I did by replacing line 193 in said IndexFiles with
FieldType myFieldType = new FieldType(TextField.TYPE_STORED);
myFieldType.setOmitNorms(true);
myFieldType.setIndexOptions(IndexOptions.DOCS_AND_FREQS);
myFieldType.setStored(false);
myFieldType.setStoreTermVectors(true);
myFieldType.setTokenized(true);
myFieldType.freeze();
Field myField = new Field("contents",
        new BufferedReader(new InputStreamReader(stream, StandardCharsets.UTF_8)),
        myFieldType);
doc.add(myField);
This allows the use of getTermVector in the customScore function. I hope this will help someone in the future.
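For completeness, once term vectors are stored as above, customScore can iterate them through a TermsEnum. A minimal sketch of that access (the scoring itself is still a placeholder, as in the original TO-DO; it additionally needs imports for org.apache.lucene.index.TermsEnum and org.apache.lucene.util.BytesRef):
public float customScore(int doc, float subQueryScore, float valSrcScores[]) throws IOException {
    IndexReader r = context.reader();
    Terms vec = r.getTermVector(doc, _field);
    if (vec == null) {
        // no term vector stored for this document/field
        return subQueryScore;
    }
    TermsEnum termsEnum = vec.iterator();
    BytesRef term;
    long tokenCount = 0;
    while ((term = termsEnum.next()) != null) {
        // totalTermFreq() is the frequency of this term within this document
        tokenCount += termsEnum.totalTermFreq();
    }
    // *TO-DO* replace with the real algorithm; this simply returns the token count
    return (float) tokenCount;
}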
I want to use the Stanford parser within CoreNLP.
I already got this example working:
http://stanfordnlp.github.io/CoreNLP/simple.html
BUT: I need the German model. So I downloaded "stanford-german-2016-01-19-models.jar".
But how can I set this jar file for usage?
I only found:
LexicalizedParser lp = LexicalizedParser.loadModel("englishPCFG.ser.gz");
but I have a jar with the German models, NOT a ...ser.gz file.
Can anybody help?
Here is some sample code for parsing a German sentence:
import edu.stanford.nlp.io.IOUtils;
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.pipeline.*;
import edu.stanford.nlp.simple.*;
import edu.stanford.nlp.trees.*;
import edu.stanford.nlp.util.CoreMap;
import edu.stanford.nlp.util.PropertiesUtils;
import edu.stanford.nlp.util.StringUtils;
import java.util.*;
public class SimpleGermanExample {

    public static void main(String[] args) {
        String sampleGermanText = "...";
        Annotation germanAnnotation = new Annotation(sampleGermanText);
        Properties germanProperties = StringUtils.argsToProperties(
                new String[]{"-props", "StanfordCoreNLP-german.properties"});
        StanfordCoreNLP pipeline = new StanfordCoreNLP(germanProperties);
        pipeline.annotate(germanAnnotation);
        for (CoreMap sentence : germanAnnotation.get(CoreAnnotations.SentencesAnnotation.class)) {
            Tree sentenceTree = sentence.get(TreeCoreAnnotations.TreeAnnotation.class);
            System.out.println(sentenceTree);
        }
    }
}
Make sure you download the full toolkit to use this sample code.
http://stanfordnlp.github.io/CoreNLP/
Also make sure you have the German models jar in your CLASSPATH. The code above will know to look at all the jars in your CLASSPATH and will recognize that file as being in the German jar.
First of all: this works, thank you!
But I don't need this complex approach with all these annotators. That's why I wanted to start with the Simple CoreNLP API. This is my code:
import edu.stanford.nlp.simple.*;
import java.util.*;
public class Main {

    public static void main(String[] args) {
        Sentence sent = new Sentence("Lucy is in the sky with diamonds.");
        List<String> posTags = sent.posTags();
        List<String> words = sent.words();
        for (int i = 0; i < posTags.size(); i++) {
            System.out.println(words.get(i) + " " + posTags.get(i));
        }
    }
}
How can I get the German properties file to work with this example?
Or the other way around: how do I get just the words with their POS tags in your example?
The German equivalent to the English example is the following:
LexicalizedParser lp = LexicalizedParser.loadModel("germanPCFG.ser.gz");
Extract the latest stanford-german-corenlp-2018-10-05-models.jar file and you will find it inside the folder: stanford-german-corenlp-2018-10-05-models\edu\stanford\nlp\models\lexparser
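Putting it together, the parser can then load the German model directly by its resource path inside that jar, provided the models jar is on the CLASSPATH. A small sketch (the resource path is assumed from the folder layout above, and the sample sentence is arbitrary):
import edu.stanford.nlp.parser.lexparser.LexicalizedParser;
import edu.stanford.nlp.trees.Tree;
public class GermanParserExample {

    public static void main(String[] args) {
        // Load the German PCFG from the models jar on the classpath
        LexicalizedParser lp = LexicalizedParser.loadModel(
                "edu/stanford/nlp/models/lexparser/germanPCFG.ser.gz");
        Tree tree = lp.parse("Das ist ein einfacher Satz.");
        tree.pennPrint();
    }
}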
I have a bunch of source files for Java classes. I want to find those classes which are annotated with a given annotation class. The names of those classes should be written to a service provider list file.
Is there any machinery I could use to help me with this task? Or do I have to implement this myself from scratch?
If I had to do this myself, there are several approaches I can think of.
Write an Ant Task in Java. Have it create a ClassLoader using a suitable (probably configurable) class path. Use that loader to (attempt to) load the classes matching the input files, in order to inspect their annotations. Requires annotation retention at runtime, and full initialization of all involved classes and their dependencies.
Use javap to inspect the classes. Since I don't know of a programmatic interface to javap (do you?), this probably means iterating over the files and running a new process for each of them, then massaging the created output in a suitable way. Perhaps a <scriptdef>-ed task could be used for this. This would work with class-file annotation retention, and require no initialization.
Use an annotation processor to collect the information at compile time. This should be able to work with source-code-only retention. But I have no experience writing or using annotation processors, so I'm not sure this will work, and it will need a lot of research to figure out some of the details. In particular, how to activate the processor for use by Ant (Java 6 annotation processing configuration with Ant gives some pointers on this, as does What is the default annotation processors discovery process?) and when to create the output file (in each round, or only in the last round).
Which of these do you think has the greatest chances of success? Can you suggest code samples for one of these, which might be close to what I want and which I could adapt appropriately?
Encouraged by Thomas' comment, I gave approach 3 a try and got the following annotation processor working reasonably well:
import java.io.IOException;
import java.io.Writer;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.ProcessingEnvironment;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.Element;
import javax.lang.model.element.QualifiedNameable;
import javax.lang.model.element.TypeElement;
import javax.tools.StandardLocation;
@SupportedSourceVersion(SourceVersion.RELEASE_7)
public class AnnotationServiceProcessor extends AbstractProcessor {
// Map name of the annotation to name of the corresponding service interface
private static Map<String, String> annotationToServiceMap = new HashMap<>();
static {
// Adapt this to your use, or make it configurable somehow
annotationToServiceMap.put("Annotation1", "Service1");
annotationToServiceMap.put("Annotation2", "Service2");
}
@Override public Set<String> getSupportedAnnotationTypes() {
return annotationToServiceMap.keySet();
}
// Map name of the annotation to list of names
// of the classes which carry that annotation
private Map<String, List<String>> classLists;
@Override public void init(ProcessingEnvironment env) {
super.init(env);
classLists = new HashMap<>();
for (String ann: getSupportedAnnotationTypes())
classLists.put(ann, new ArrayList<String>());
}
public boolean process(Set<? extends TypeElement> annotations,
RoundEnvironment env) {
for (TypeElement ann: annotations) {
List<String> classes =
classLists.get(ann.getQualifiedName().toString());
for (Element elt: env.getElementsAnnotatedWith(ann)) {
QualifiedNameable qn = (QualifiedNameable)elt;
classes.add(qn.getQualifiedName().toString());
}
}
if (env.processingOver()) { // Only write results at the end
for (String ann: getSupportedAnnotationTypes()) {
try {
write(ann, classLists.get(ann));
} catch (IOException e) {
throw new RuntimeException(e); // UGLY!
}
}
}
return true;
}
// Write the service file for each annotation we found
private void write(String ann, List<String> classes) throws IOException {
if (classes.isEmpty())
return;
String service = annotationToServiceMap.get(ann);
Writer w = processingEnv.getFiler()
.createResource(StandardLocation.CLASS_OUTPUT,
"", "META-INF/services/" + service)
.openWriter();
classes.sort(null); // Make the processing order irrelevant
for (String cls: classes) {
w.write(cls);
w.write('\n');
}
w.close();
}
}
So far I've hooked this up to Ant using <compilerarg>s as shown in https://stackoverflow.com/a/3644624/1468366. I'll try something better and, if I succeed, will edit this post to include some Ant snippet.
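As an alternative to the Ant wiring, the processor can also be handed directly to a compilation task through the javax.tools API, which sidesteps processor discovery entirely. A rough sketch (source file and output directory are made up for illustration):
import java.util.Arrays;
import javax.tools.JavaCompiler;
import javax.tools.StandardJavaFileManager;
import javax.tools.ToolProvider;
public class RunProcessor {

    public static void main(String[] args) throws Exception {
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        try (StandardJavaFileManager fm = compiler.getStandardFileManager(null, null, null)) {
            JavaCompiler.CompilationTask task = compiler.getTask(
                    null, fm, null,
                    Arrays.asList("-d", "build/classes"),        // hypothetical output directory
                    null,
                    fm.getJavaFileObjects("src/Example1.java")); // hypothetical annotated source
            // Hand the processor instance to the task instead of relying on discovery
            task.setProcessors(Arrays.asList(new AnnotationServiceProcessor()));
            System.out.println("Compilation " + (task.call() ? "succeeded" : "failed"));
        }
    }
}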
I am trying to use the HermiT reasoner to check consistency. By default, HermiT does not provide any justifications/explanations for the inconsistencies.
EDITED VERSION: I'm currently trying with OWLReasoner, but it still throws an error.
import java.util.Set;
import org.semanticweb.HermiT.Reasoner;
import org.semanticweb.owl.explanation.api.Explanation;
import org.semanticweb.owl.explanation.api.ExplanationGeneratorFactory;
import org.semanticweb.owl.explanation.api.ExplanationManager;
import org.semanticweb.owl.explanation.impl.blackbox.checker.InconsistentOntologyExplanationGeneratorFactory;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.IRI;
import org.semanticweb.owlapi.model.OWLAxiom;
import org.semanticweb.owlapi.model.OWLClass;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyManager;
import org.semanticweb.owlapi.reasoner.Node;
import org.semanticweb.owlapi.reasoner.OWLReasoner;
import org.semanticweb.owl.explanation.api.ExplanationGenerator;
import org.semanticweb.owlapi.model.OWLDataFactory;
import org.semanticweb.owlapi.model.OWLNamedIndividual;
import org.semanticweb.owlapi.model.OWLOntologyCreationException;
import org.semanticweb.owlapi.reasoner.OWLReasonerFactory;
public class ConsistencyChecker {
public static void main(String[] args) throws Exception {
OWLOntologyManager m=OWLManager.createOWLOntologyManager();
OWLOntology o=m.loadOntologyFromOntologyDocument(IRI.create("http://www.cs.ox.ac.uk/isg/ontologies/UID/00793.owl"));
// Reasoner hermit=new Reasoner(o);
OWLReasoner owlreasoner=new Reasoner.ReasonerFactory().createReasoner(o);
System.out.println(owlreasoner.isConsistent());
//System.out.println(hermit.isConsistent());
//---------------------------- Copied from example---------
OWLDataFactory df = m.getOWLDataFactory();
OWLClass testClass = df.getOWLClass(IRI.create("urn:test#testclass"));
m.addAxiom(o, df.getOWLSubClassOfAxiom(testClass, df.getOWLNothing()));
OWLNamedIndividual individual = df.getOWLNamedIndividual(IRI
.create("urn:test#testindividual"));
m.addAxiom(o, df.getOWLClassAssertionAxiom(testClass, individual));
//----------------------------------------------------------
Node<OWLClass> unsatisfiableClasses = owlreasoner.getUnsatisfiableClasses();
//Node<OWLClass> unsatisfiableClasses = hermit.getUnsatisfiableClasses();
for (OWLClass owlClass : unsatisfiableClasses) {
System.out.println(owlClass.getIRI());
}
//-----------------------------
ExplanationGeneratorFactory<OWLAxiom> genFac = ExplanationManager.createExplanationGeneratorFactory((OWLReasonerFactory) owlreasoner);
ExplanationGenerator<OWLAxiom> gen = genFac.createExplanationGenerator(o);
//-------------------------
InconsistentOntologyExplanationGeneratorFactory igf = new InconsistentOntologyExplanationGeneratorFactory((OWLReasonerFactory) owlreasoner, 10000);
//InconsistentOntologyExplanationGeneratorFactory igf = new InconsistentOntologyExplanationGeneratorFactory((OWLReasonerFactory) hermit, 10000);
ExplanationGenerator<OWLAxiom> generator = igf.createExplanationGenerator(o);
OWLAxiom entailment = df.getOWLClassAssertionAxiom(df.getOWLNothing(),
individual);
//-------------
Set<Explanation<OWLAxiom>> expl = gen.getExplanations(entailment, 5);
//------------
System.out.println("Explanation "
+ generator.getExplanations(entailment, 5));
}
}
The output is
true
http://www.w3.org/2002/07/owl#Nothing
http://www.co-ode.org/ontologies/pizza/pizza.owl#CheeseyVegetableTopping
http://www.co-ode.org/ontologies/pizza/pizza.owl#IceCream
Exception in thread "main" java.lang.ClassCastException: org.semanticweb.HermiT.Reasoner cannot be cast to org.semanticweb.owlapi.reasoner.OWLReasonerFactory
at ConsistencyChecker.main(ConsistencyChecker.java:82)
Any help in integrating the owlexplanation API [1] with the HermiT reasoner/OWLReasoner would be appreciated.
[1] https://github.com/matthewhorridge/owlexplanation
The error is because you're casting an OWLReasoner to an OWLReasonerFactory.
The OWLReasonerFactory for HermiT is the one you've used a few lines above:
new Reasoner.ReasonerFactory()
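So the explanation generator has to be built from that factory rather than from the reasoner instance. A minimal sketch against the variables in the question (o, df, individual), assuming the owlexplanation factory keeps its (OWLReasonerFactory, timeout) constructor:
// Use the HermiT reasoner *factory*, not the OWLReasoner, for the explanation machinery
OWLReasonerFactory reasonerFactory = new Reasoner.ReasonerFactory();
InconsistentOntologyExplanationGeneratorFactory igf =
        new InconsistentOntologyExplanationGeneratorFactory(reasonerFactory, 10000L);
ExplanationGenerator<OWLAxiom> generator = igf.createExplanationGenerator(o);
OWLAxiom entailment = df.getOWLClassAssertionAxiom(df.getOWLNothing(), individual);
System.out.println("Explanation " + generator.getExplanations(entailment, 5));
The complete example below, adapted from the HermiT examples package, shows another route: using the OWL API's BlackBoxExplanation and HSTExplanationGenerator together with HermiT.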
//package org.semanticweb.HermiT.examples;
import java.util.Set;
import org.semanticweb.HermiT.Configuration;
import org.semanticweb.HermiT.Reasoner;
import org.semanticweb.HermiT.Reasoner.ReasonerFactory;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.IRI;
import org.semanticweb.owlapi.model.OWLAxiom;
import org.semanticweb.owlapi.model.OWLClass;
import org.semanticweb.owlapi.model.OWLDataFactory;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyManager;
import org.semanticweb.owlapi.reasoner.OWLReasoner;
import com.clarkparsia.owlapi.explanation.BlackBoxExplanation;
import com.clarkparsia.owlapi.explanation.HSTExplanationGenerator;
public class Explanations {
public static void main(String[] args) throws Exception {
// First, we create an OWLOntologyManager object. The manager will load and
// save ontologies.
OWLOntologyManager manager=OWLManager.createOWLOntologyManager();
// We will create several things, so we save an instance of the data factory
OWLDataFactory dataFactory=manager.getOWLDataFactory();
// Now, we create the file from which the ontology will be loaded.
// Here the ontology is stored in a file locally in the ontologies subfolder
// of the examples folder.
//File inputOntologyFile = new File("examples/ontologies/pizza.owl");
// We use the OWL API to load the ontology.
//OWLOntology ontology=manager.loadOntologyFromOntologyDocument(inputOntologyFile);
// We use the OWL API to load the Pizza ontology.
OWLOntology ontology=manager.loadOntologyFromOntologyDocument(IRI.create("http://www.cs.ox.ac.uk/isg/ontologies/UID/00793.owl"));
// Let's make things worse and turn Pizza into an inconsistent ontology by asserting that the
// unsatisfiable icecream class has some instance.
// First, create an instance of the OWLClass object for the unsatisfiable icecream class.
IRI icecreamIRI=IRI.create("http://www.co-ode.org/ontologies/pizza/pizza.owl#IceCream");
OWLClass icecream=dataFactory.getOWLClass(icecreamIRI);
// Now we can start and create the reasoner. Since explanation is not natively supported by
// HermiT and is realised in the OWL API, we need to instantiate HermiT
// as an OWLReasoner. This is done via a ReasonerFactory object.
ReasonerFactory factory = new ReasonerFactory();
// We don't want HermiT to throw an exception for inconsistent ontologies because then we
// can't explain the inconsistency. This can be controlled via a configuration setting.
Configuration configuration=new Configuration();
configuration.throwInconsistentOntologyException=false;
// The factory can now be used to obtain an instance of HermiT as an OWLReasoner.
OWLReasoner reasoner=factory.createReasoner(ontology, configuration);
// Let us confirm that icecream is indeed unsatisfiable:
System.out.println("Is icecream satisfiable? "+reasoner.isSatisfiable(icecream));
System.out.println("Computing explanations...");
// Now we instantiate the explanation classes
BlackBoxExplanation exp=new BlackBoxExplanation(ontology, factory, reasoner);
HSTExplanationGenerator multExplanator=new HSTExplanationGenerator(exp);
// Now we can get explanations for the unsatisfiability.
Set<Set<OWLAxiom>> explanations=multExplanator.getExplanations(icecream);
// Let us print them. Each explanation is one possible set of axioms that cause the
// unsatisfiability.
for (Set<OWLAxiom> explanation : explanations) {
System.out.println("------------------");
System.out.println("Axioms causing the unsatisfiability: ");
for (OWLAxiom causingAxiom : explanation) {
System.out.println(causingAxiom);
}
System.out.println("------------------");
}
// Let us make the ontology inconsistent to also get explanations for an
// inconsistency, which is slightly more involved since we dynamically
// have to change the factory constructor; otherwise, we can't suppress
// the inconsistent ontology exceptions that the OWL API requires a
// reasoner to throw.
// Let's start by adding a dummy individual to the unsatisfiable Icecream class.
// This will cause an inconsistency.
OWLAxiom ax=dataFactory.getOWLClassAssertionAxiom(icecream, dataFactory.getOWLNamedIndividual(IRI.create("http://www.co-ode.org/ontologies/pizza/pizza.owl#dummyIndividual")));
manager.addAxiom(ontology, ax);
// Let us confirm that the ontology is inconsistent
reasoner=factory.createReasoner(ontology, configuration);
System.out.println("Is the changed ontology consistent? "+reasoner.isConsistent());
// Ok, here we go. Let's see why the ontology is inconsistent.
System.out.println("Computing explanations for the inconsistency...");
factory=new Reasoner.ReasonerFactory() {
protected OWLReasoner createHermiTOWLReasoner(org.semanticweb.HermiT.Configuration configuration,OWLOntology ontology) {
// don't throw an exception since otherwise we cannot compute explanations
configuration.throwInconsistentOntologyException=false;
return new Reasoner(configuration,ontology);
}
};
exp=new BlackBoxExplanation(ontology, factory, reasoner);
multExplanator=new HSTExplanationGenerator(exp);
// Now we can get explanations for the inconsistency
explanations=multExplanator.getExplanations(dataFactory.getOWLThing());
// Let us print them. Each explanation is one possible set of axioms that cause the
// unsatisfiability.
for (Set<OWLAxiom> explanation : explanations) {
System.out.println("------------------");
System.out.println("Axioms causing the inconsistency: ");
for (OWLAxiom causingAxiom : explanation) {
System.out.println(causingAxiom);
}
System.out.println("------------------");
}
}
}
This is the code I have written, but the new builtin does not seem to work. I get this error:
Exception in thread "main" com.hp.hpl.jena.reasoner.rulesys.impl.LPRuleSyntaxException: Syntax error in backward rule: matematica Unknown builtin operation mysum
Can anyone tell me where the error is? Here is my code:
package JenaRules;
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.Arrays;
import java.util.List;
import org.semanticweb.owlapi.model.OWLOntologyCreationException;
import org.semanticweb.owlapi.model.OWLOntologyStorageException;
import com.hp.hpl.jena.graph.Node;
import com.hp.hpl.jena.query.Query;
import com.hp.hpl.jena.query.QueryExecution;
import com.hp.hpl.jena.query.QueryExecutionFactory;
import com.hp.hpl.jena.query.QueryFactory;
import com.hp.hpl.jena.query.ResultSet;
import com.hp.hpl.jena.query.ResultSetFormatter;
import com.hp.hpl.jena.rdf.model.InfModel;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.Resource;
import com.hp.hpl.jena.reasoner.Reasoner;
import com.hp.hpl.jena.reasoner.rulesys.*;
import com.hp.hpl.jena.reasoner.rulesys.builtins.BaseBuiltin;
import com.hp.hpl.jena.util.FileManager;
import com.hp.hpl.jena.vocabulary.RDFS;
import com.hp.hpl.jena.vocabulary.ReasonerVocabulary;
public class RulesOntology_MT {
public static void main(String[] args) throws OWLOntologyStorageException,
OWLOntologyCreationException, IOException {
BuiltinRegistry.theRegistry.register(new BaseBuiltin() {
@Override
public String getName() {
return "mysum";
}
@Override
public int getArgLength() {
return 2;
}
@Override
public boolean bodyCall(Node[] args, int length, RuleContext context) {
checkArgs(length, context);
BindingEnvironment env = context.getEnv();
Node n1 = getArg(0, args, context);
Node n2 = getArg(1, args, context);
if (n1.isLiteral() && n2.isLiteral()) {
Object v1 = n1.getLiteralValue();
Object v2 = n2.getLiteralValue();
Node sum = null;
if (v1 instanceof Number && v2 instanceof Number) {
Number nv1 = (Number)v1;
Number nv2 = (Number)v2;
int sumInt = nv1.intValue()+nv2.intValue();
sum = Util.makeIntNode(sumInt);
return env.bind(args[2], sum);
}
}
return false;
}
});
// NOT NEEDED
// final String exampleRuleString2 =
// "[mat1: equal(?s ?p )\n\t-> print(?s ?p ?o),\n\t (?s ?p ?o)\n]"+
// "";
final String exampleRuleString =
"[matematica:"+
"(?p http://www.semanticweb.org/prova_rules_M#totale_crediti ?x)"+
" -> " +
"(?p rdf:type http://www.semanticweb.org/prova_rules_M#:Persona)"+
"(?e rdf:type http://www.semanticweb.org/prova_rules_M#:Esame)"+
"(?p http://www.semanticweb.org/prova_rules_M#:haSostenutoEsameDi ?e)"+
"(?e http://www.semanticweb.org/prova_rules_M/persona#crediti_esame ?cr)"+
"mysum(?cr,2)"+
"]";
System.out.println(exampleRuleString);
/* I tend to use a fairly verbose syntax for parsing out my rules when I construct them
* from a string. You can read them from whatever other sources.
*/
final List<Rule> rules;
try( final BufferedReader src = new BufferedReader(new InputStreamReader(new ByteArrayInputStream(exampleRuleString.getBytes()))) ) {
rules = Rule.parseRules(Rule.rulesParserFromReader(src));
}
/* Construct a reasoner and associate the rules with it */
// create an empty non-inferencing model
GenericRuleReasoner reasoner = (GenericRuleReasoner) GenericRuleReasonerFactory.theInstance().create(null);
reasoner.setRules(rules);
/* Create & Prepare the InfModel. If you don't call prepare, then
* rule firings and inference may be deferred until you query the
* model rather than happening at insertion. This can make you think
* that your Builtin is not working, when it is.
*/
InfModel infModel = ModelFactory.createInfModel(reasoner, ModelFactory.createDefaultModel());
infModel.prepare();
infModel.createResource(RDFS.Class);
//write down the result in RDFXML form
infModel.write(System.out);
}
}
Using the code that you provided, and Apache Jena 2.11.1, I cannot replicate the exception you are getting. Do note that when you call BuiltinRegistry.theRegistry.register(...), you are telling the reasoner that the builtin exists.
Solution
The exception that you are getting is likely because, in your actual code, you are not calling BuiltinRegistry.theRegistry.register(...) prior to calling Rule.parseRules(Rule.rulesParserFromReader(src));, so as far as the rule parser is concerned, you are using a Builtin which doesn't exist. To fix it, merely call register before parsing your rules. The toy example provided does not have this problem.
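In other words, the order has to be: register first, parse afterwards. A minimal ordering sketch (the builtin body is trimmed down for illustration):
// 1. Make the rule parser aware of the custom builtin...
BuiltinRegistry.theRegistry.register(new BaseBuiltin() {
    @Override
    public String getName() { return "mysum"; }

    @Override
    public int getArgLength() { return 2; }

    @Override
    public boolean bodyCall(Node[] args, int length, RuleContext context) {
        checkArgs(length, context);
        return true; // real summation logic omitted
    }
});
// 2. ...and only then parse the rules that mention it.
List<Rule> rules = Rule.parseRules("[r1: (?s ?p ?o), mysum(?a, ?b) -> (?s ?p ?o)]");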
Using the example provided
I also noted that the provided code example did not include anything that would actually stimulate the rule to fire, so, in lieu of infModel.createResource(RDFS.Class);, I added the following lines:
final Resource s = infModel.createResource();
final Property p = infModel.createProperty("http://www.semanticweb.org/prova_rules_M#totale_crediti");
final Resource o = infModel.createResource();
infModel.add(s,p,o);
This stimulated the rule to fire, and led to the following exception trace:
com.hp.hpl.jena.reasoner.rulesys.BuiltinException: Error in clause of rule (matematica) mysum: builtin mysum not usable in rule heads
at com.hp.hpl.jena.reasoner.rulesys.builtins.BaseBuiltin.headAction(BaseBuiltin.java:86)
at com.hp.hpl.jena.reasoner.rulesys.impl.RETEConflictSet.execute(RETEConflictSet.java:184)
at com.hp.hpl.jena.reasoner.rulesys.impl.RETEConflictSet.add(RETEConflictSet.java:81)
at com.hp.hpl.jena.reasoner.rulesys.impl.RETEEngine.requestRuleFiring(RETEEngine.java:249)
at com.hp.hpl.jena.reasoner.rulesys.impl.RETETerminal.fire(RETETerminal.java:80)
at com.hp.hpl.jena.reasoner.rulesys.impl.RETEClauseFilter.fire(RETEClauseFilter.java:227)
at com.hp.hpl.jena.reasoner.rulesys.impl.RETEEngine.inject(RETEEngine.java:469)
at com.hp.hpl.jena.reasoner.rulesys.impl.RETEEngine.runAll(RETEEngine.java:451)
at com.hp.hpl.jena.reasoner.rulesys.impl.RETEEngine.add(RETEEngine.java:174)
at com.hp.hpl.jena.reasoner.rulesys.FBRuleInfGraph.performAdd(FBRuleInfGraph.java:654)
at com.hp.hpl.jena.graph.impl.GraphBase.add(GraphBase.java:202)
at com.hp.hpl.jena.rdf.model.impl.ModelCom.add(ModelCom.java:1138)
at SO.test(SO.java:108)
As a note: my test class is SO.java and line 108 is where we call infModel.add(s,p,o).
The exception that I get is different than the exception you encountered, but it is worth explaining. The implementation that you provided implements Builtin#bodyCall(...), but not Builtin#headAction(...). We can see the exception is thrown from BaseBuiltin#headAction(...). This default behavior assumes that you didn't implement the method because your Builtin doesn't support it. In the toy problem, this is correct behavior because the example implementation cannot be used in rule heads.