How can I modify or remove property values? - Jena API - Java

I'm using Jena. I would like to know if there is a method that allows me to modify or remove the property values of an instance?
Thanks

Statements in Jena are, by design, immutable. To change the value of a property p of some subject s, you need to add a new statement with the same subject and predicate, and remove the old statement. This is always true in Jena, even if the API sometimes hides this from you. For example, OntResource and its subclasses have a variety of setProperty variants, but under the hood these are performing the same add-the-new-triple-and-delete-the-old process.

It depends which Jena API you are using. For instance, if you are using Jena 3.0 and the Model API, you can use Model.remove(Statement) to remove a property by choosing the appropriate subject/predicate/object for the Statement. Modification can be achieved by removing the old version of a Statement and adding the new version.
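For illustration, here is a minimal sketch of that remove-then-add pattern with the plain Model API; the resource and property URIs are invented for the example:

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;

public class UpdatePropertyExample {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        Resource subject = model.createResource("http://example.org/instance1");
        Property property = model.createProperty("http://example.org/", "hasName");
        model.add(subject, property, "old name");

        // "Modify" the value: remove the old statement, then add the new one.
        model.remove(subject, property, model.createLiteral("old name"));
        model.add(subject, property, "new name");

        // Or drop every value of this property for the subject in one call.
        subject.removeAll(property);

        model.write(System.out, "TURTLE");
    }
}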

To only remove the statement itself, i.e. the relation between the instance and the property value, you can use:
OntResource.removeProperty(Property, RDFNode)
If you want to remove the property value altogether, i.e. the value and all relations to it, you can use: OntResource.remove()
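For illustration, a minimal, hedged sketch of those two calls; the URIs and the property are invented, and this assumes the ontology API from org.apache.jena.ontology:

// Hedged sketch, assuming an OntModel from org.apache.jena.ontology and invented URIs.
OntModel ontModel = ModelFactory.createOntologyModel();
OntResource instance = ontModel.createOntResource("http://example.org/instance1");
Property note = ontModel.createProperty("http://example.org/hasNote");
instance.addProperty(note, "a note");

// Remove only the relation between the instance and this particular value.
instance.removeProperty(note, instance.getPropertyValue(note));

// Or remove the resource altogether, i.e. every statement that refers to it.
instance.remove();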

I had a similar task: I needed to delete a property with a specified value. I hope the following code snippet helps someone.
public void removeLabel(String language, String value) {
    // List only the labels of this resource (the two-argument overload restricts
    // the search to the given subject).
    NodeIterator nodeIterator = resource.getModel().listObjectsOfProperty(resource, RDFS.label);
    RDFNode foundToDelete = null;
    while (nodeIterator.hasNext()) {
        RDFNode next = nodeIterator.next();
        boolean langsAreIdentical = next.asLiteral().getLanguage().equals(language);
        boolean valuesAreIdentical = next.asLiteral().getLexicalForm().equals(value);
        if (langsAreIdentical && valuesAreIdentical) {
            foundToDelete = next;
            break;
        }
    }
    if (foundToDelete != null) { // avoid removing anything when no label matched
        resource.getModel().remove(resource, RDFS.label, foundToDelete);
    }
}

Related

Parse a .ttl file and map it to a Java class

I am new to OWL 2, and I want to parse a ".ttl" file with the OWL API, but I found that the OWL API is not the same as the API I used before. It seems that I should write a "visitor" if I want to get the content of an OWLAxiom or OWLEntity, and so on. I have read some tutorials, but I didn't find the proper way to do it. Also, the tutorials I found use older versions of the OWL API. So I would like a detailed example that parses an instance and stores the content in a Java class.
I have made some attempts; my code is as follows, but I don't know how to go on.
OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
File file = new File("./source.ttl");
OWLOntology localAcademic = manager.loadOntologyFromOntologyDocument(file);
Stream<OWLNamedIndividual> namedIndividualStream = localAcademic.individualsInSignature();
Iterator<OWLNamedIndividual> iterator = namedIndividualStream.iterator();
while (iterator.hasNext()) {
    OWLNamedIndividual namedIndividual = iterator.next();
}
An example instance is as follows. In particular, I want to store the "#en" attached to the object of "ecrm:P3_has_note".
<http://data.doremus.org/performance/4db95574-8497-3f30-ad1e-f6f65ed6c896>
a mus:M42_Performed_Expression_Creation ;
ecrm:P3_has_note "Créée par Teodoro Anzellotti, son commanditaire, en novembre 1995 à Rotterdam"#en ;
ecrm:P4_has_time-span <http://data.doremus.org/performance/4db95574-8497-3f30-ad1e-f6f65ed6c896/time> ;
ecrm:P9_consists_of [ a mus:M28_Individual_Performance ;
ecrm:P14_carried_out_by "Teodoro Anzellotti"
] ;
ecrm:P9_consists_of [ a mus:M28_Individual_Performance ;
ecrm:P14_carried_out_by "à Rotterdam"
] ;
efrbroo:R17_created <http://data.doremus.org/expression/2fdd40f3-f67c-30a0-bb03-f27e69b9f07f> ;
efrbroo:R19_created_a_realisation_of
<http://data.doremus.org/work/907de583-5247-346a-9c19-e184823c9fd6> ;
efrbroo:R25_performed <http://data.doremus.org/expression/b4bb1588-dd83-3915-ab55-b8b70b0131b5> .
The contents I want are as follows:
class Instance {
    String subject;
    Map<String, Set<Object>> predicateToObject = new HashMap<String, Set<Object>>();
}
class Object {
    String value;
    String type;
    String language = null;
}
The version of the OWL API I am using is 5.1.0. I downloaded the jar and the doc from there. I just want to know how to get the content I need into the Java class.
If there are some tutorials that describe the way to do it, please tell me.
Thanks a lot.
Update: I have figured out how to do it. When I finish, I will write an answer; I hope it can help latecomers to the OWL API.
Thanks again.
What you need, once you have the individual, is to retrieve the data property assertion axioms and collect the literals asserted for each property.
So, in the for loop in your code:
// Let's rename your Object class to Literal so we don't get confused with java.lang.Object
Instance instance = new Instance();
localAcademic.dataPropertyAssertionAxioms()
        .forEach(ax -> instance.predicateToObject.put(
                ax.getProperty().asOWLDataProperty().getIRI().toString(),
                Collections.singleton(new Literal(ax.getObject()))));
This code assumes properties only appear once - if your properties appear multiple times, you'll have to check whether a set already exists for the property and just add to it instead of replacing the value in the map. I left that out to simplify the example.
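A hedged sketch of that accumulation, assuming predicateToObject is declared as Map<String, Set<Literal>> after the rename and that Literal has a constructor taking an OWLLiteral:

// Hedged sketch only: accumulate every asserted literal per property instead of
// replacing the singleton set; computeIfAbsent creates the set on first use.
localAcademic.dataPropertyAssertionAxioms()
        .forEach(ax -> instance.predicateToObject
                .computeIfAbsent(ax.getProperty().asOWLDataProperty().getIRI().toString(),
                                 key -> new HashSet<>())
                .add(new Literal(ax.getObject())));
// The language tag the question asks about is available via ax.getObject().getLang().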
A visitor is not necessary for this scenario, because you already know what axiom type you're interested in and what methods to call on it. It could have been written as an OWLAxiomVisitor implementing only visit(OWLDataPropertyAssertionAxiom) but in this case there would be little advantage in doing so.

Force YAML Tag on JavaBean properties

I have been using SnakeYAML for certain serialization/deserialization. My application combines Python and Java, so I need some "reasonable behaviour" on the Tags and the Types.
My problem / the current state of the YAML document:
!!mypackage.MyClassA
someFirstField: normal string
someSecondField:
  a: !!mypackage.ThisIsIt
    subField: 1
    subOtherField: 2
  b: !!mypackage.ThisIsIt
    subField: 3
    subOtherField: 4
someThirdField:
  subField: 5
  subOtherField: 6
I achieved the use of the tags inside collections (see someSecondField in the example) by overriding checkGlobalTag and simply returning immediately. This, if I understood correctly, disables SnakeYAML's smart tag clean-up and keeps the tags. So far so good: I need the type everywhere.
However, this is not enough, because someThirdField is also a !!mypackage.ThisIsIt but its tag is implicit, and this is a problem (Python does not understand it). There are some other corner cases which are beside the point (I tried to take some shortcuts on the Python side, and they turned out to be a Bad Idea).
What is the correct way to ensure that the tags appear for all user-defined classes? I assume that I should override some method on the Representer, but I have not been able to find which one.
The line responsible for that "smart tag auto-clean" is the following:
if (property.getType() == propertyValue.getClass())
which can be found in representJavaBeanProperty in the Representer class.
The (ugly) solution I found is to extend the Representer and @Override representJavaBeanProperty with the following:
@Override
protected NodeTuple representJavaBeanProperty(Object javaBean,
                                              Property property,
                                              Object propertyValue,
                                              Tag customTag) {
    // Copy paste starts here...
    ScalarNode nodeKey = (ScalarNode) representData(property.getName());
    // the first occurrence of the node must keep the tag
    boolean hasAlias = this.representedObjects.containsKey(propertyValue);
    Node nodeValue = representData(propertyValue);
    if (propertyValue != null && !hasAlias) {
        NodeId nodeId = nodeValue.getNodeId();
        if (customTag == null) {
            if (nodeId == NodeId.scalar) {
                if (propertyValue instanceof Enum<?>) {
                    nodeValue.setTag(Tag.STR);
                }
            }
            // Copy-paste ends here !!!
            // Ignore the else block -- always maintain the tag.
        }
    }
    return new NodeTuple(nodeKey, nodeValue);
}
This also forces the explicit-tag-on-lists behaviour (previously enforced through the override of the checkGlobalTag method, now already implemented in the representJavaBeanProperty code).
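For completeness, a hypothetical usage sketch; ForcedTagRepresenter is an invented name for a Representer subclass containing the override above:

// Hypothetical usage; ForcedTagRepresenter is an invented name for the subclass
// that contains the representJavaBeanProperty override above.
Representer representer = new ForcedTagRepresenter();
Yaml yaml = new Yaml(representer);
System.out.println(yaml.dump(myClassAInstance)); // every bean value keeps its !!mypackage tag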

Optimized way to populate data in an object

I am working on a solution where I need to populate certain fields in a DataObject. The fields are predefined, but the source from which I need to populate this data is not under my control and I cannot modify it.
This is the structure of my source object:
SourceObject
-Collection<Features>
-Collection<FeatureData>
The attribute name defined in SourceObject helps me decide whether I want that attribute value or not (there are many attributes, framework-provided plus custom ones), and the value is provided by Collection<FeatureData>:
for (SourceData sourceData : productData.getSourceData()) {
    if (sourceData.getCode().equalsIgnoreCase("classification")) {
        if (CollectionUtils.isNotEmpty(sourceData.getFeatures())) {
            for (FeatureData featureData : sourceData.getFeatures()) {
                if (CollectionUtils.isNotEmpty(featureData.getFeatureValues())) {
                    if (featureData.getCode().contains("customValue1")) {
                        for (FeatureValueData featureDataValue : featureData.getFeatureValues()) {
                            productData.setPower(featureDataValue.getValue());
                            break;
                        }
                    }
                }
                break;
            }
        }
    }
}
But that means I have to do this (check and fill) for all my custom attributes. Is there a way I can handle this more cleanly?
Please do not pay much attention to the syntax or any potential NPEs etc., as I am going to take care of those issues.
From what I can comprehend from your code, you are trying to find the very first sourceData whose code matches the "classification" value and whose features collection isn't empty.
Then, once such a sourceData is found, you are trying to find the very first feature from the sourceData's features collection whose code contains "customValue1" and whose featureValues collection isn't empty.
Once such a feature is found, you are effectively setting the productData's power to the value held by the very first featureValueData of that feature's featureValues collection.
Such code can be rewritten as follows:
// Start
SourceData sourceData = findFirstValidSource(productData.getSourceData());
if (sourceData == null) // Can remove this check if sure that at least one valid source data will always exist.
{
    return;
}
FeatureData feature = findFirstValidFeature(sourceData.getFeatures());
if (feature == null) // Can also remove this check if sure that at least one valid feature data will always exist.
{
    return;
}
FeatureValueData featureValueData = feature.getFeatureValues().iterator().next();
productData.setPower(featureValueData.getValue());
// End
The findFirstValidSource() method is implemented as follows:
private SourceData findFirstValidSource(Collection<SourceData> sources)
{
    for (SourceData source : sources)
    {
        if (source.getCode().equalsIgnoreCase("classification") && CollectionUtils.isNotEmpty(source.getFeatures()))
        {
            return source;
        }
    }
    return null;
}
The findFirstValidFeature() method is implemented as follows:
private FeatureData findFirstValidFeature(Collection<FeatureData> features)
{
    for (FeatureData feature : features)
    {
        if (feature.getCode().contains("customValue1") && CollectionUtils.isNotEmpty(feature.getFeatureValues()))
        {
            return feature;
        }
    }
    return null;
}
The above code does exactly the same thing your code is doing, except that it's more readable and understandable now. The code can save a little more processing if you make your getFeatureValues() method return a List instead of a Collection, as then it can grab the very first element by index (for an ArrayList-based implementation) or by getting the first element (for a LinkedList-based implementation), which takes constant time.
Not that iterator().next() takes any more time; it is just unnecessary to create an iterator when there is no real need for one, as I did in the line:
FeatureValueData featureValueData = feature.getFeatureValues().iterator().next();
If getFeatureValues() had returned a List, we could have written:
FeatureValueData featureValueData = feature.getFeatureValues().get(0);
instead.
Furthermore, the code can be made more compact by using lambda expressions and streams, which were introduced in JDK 8. If you have no issues using the new features of Java 8, I can update my answer to incorporate a terser solution as well.
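For reference, here is a hedged sketch of what such a stream-based version could look like, assuming the same getters and CollectionUtils as above:

// Hedged sketch of a Java 8 stream version, assuming the same getters and CollectionUtils.
productData.getSourceData().stream()
        .filter(source -> source.getCode().equalsIgnoreCase("classification"))
        .filter(source -> CollectionUtils.isNotEmpty(source.getFeatures()))
        .findFirst()
        .flatMap(source -> source.getFeatures().stream()
                .filter(feature -> feature.getCode().contains("customValue1"))
                .filter(feature -> CollectionUtils.isNotEmpty(feature.getFeatureValues()))
                .findFirst())
        .map(feature -> feature.getFeatureValues().iterator().next().getValue())
        .ifPresent(productData::setPower);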
Let me know if my answer helped or if your expectations are still not met.

Upsert for LDAP directory in Java

I'm attempting to execute an upsert using the Novell JLDAP library; unfortunately, I'm having trouble finding an example of this. Currently, I have to:
public EObject put(EObject eObject) {
    Subject s = (Subject) eObject;
    // Query and grab attributes from the subject
    LDAPAttributes attr = resultsToAttributes(getLDAPConnection().get(s));
    // No modification needed - return
    if (s.getAttributes().equals(attr)) {
        return eObject;
    } else {
        // Keys: REPLACE, ADD, DELETE - depending on which attributes are present
        // in the maps, I choose the operation which will be used
        Map<String, LDAPAttribute> operationalMap = figureOutWhichAttributesArePresent(s.getAttributes(), attr);
        // Add the modifications to a modification list
        ArrayList<LDAPModification> modList = new ArrayList<LDAPModification>();
        for (Map.Entry<String, LDAPAttribute> entry : operationalMap.entrySet()) {
            // Specify whether it is an update, delete, or insert here (entry.getKey())
            modList.add(new LDAPModification(entry.getKey(), entry.getValue()));
        }
        // Commit
        connection.modify("directorypathhere", modList.toArray(new LDAPModification[modList.size()]));
        return eObject;
    }
}
I'd prefer not to have to query the customer first, which also results in cycling through the subject's attributes. Is anyone aware whether JNDI or another library can execute an update/insert without running multiple statements against LDAP?
Petesh was correct - the abstraction was implemented within the Novell library (as well as the UnboundId library). I was able to "upsert" values using the Modify.REPLACE param for every attribute that came in, passing in null for empty values. This effectively created, updated, and deleted the attributes without having to parse them first.
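As an illustration, a hedged sketch of that per-attribute REPLACE approach with the Novell JLDAP classes; the DN, attribute names, and helper method are placeholders:

import com.novell.ldap.LDAPAttribute;
import com.novell.ldap.LDAPConnection;
import com.novell.ldap.LDAPModification;

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class LdapUpsertSketch {
    // REPLACE behaves like a per-attribute upsert: it adds the attribute if it is
    // missing, overwrites it if it exists, and removes it when no values are given.
    public static void upsert(LDAPConnection connection, String dn,
                              Map<String, String> attributes) throws Exception {
        List<LDAPModification> mods = new ArrayList<LDAPModification>();
        for (Map.Entry<String, String> entry : attributes.entrySet()) {
            LDAPAttribute attribute = (entry.getValue() == null)
                    ? new LDAPAttribute(entry.getKey())                    // no value -> attribute is removed
                    : new LDAPAttribute(entry.getKey(), entry.getValue()); // value -> added or overwritten
            mods.add(new LDAPModification(LDAPModification.REPLACE, attribute));
        }
        connection.modify(dn, mods.toArray(new LDAPModification[mods.size()]));
    }
}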
In LDAP, via LDIF files, an upsert would be a single event with two steps: a remove and an add of a value. This is denoted by a single dash on a line between the remove and the add.
I am not sure how you would do it in this library. I would try a modList.remove and then a modList.add one after another and see if that works.
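To mirror that two-step LDIF idea in Java, a hedged sketch would be to send a DELETE and an ADD for the same attribute in a single modify call; the attribute name, values, and DN are made up, and connection is the LDAPConnection from the question:

// Hedged sketch: delete the old value and add the new one in a single modify request.
LDAPModification[] mods = new LDAPModification[] {
        new LDAPModification(LDAPModification.DELETE, new LDAPAttribute("mail", "old@example.org")),
        new LDAPModification(LDAPModification.ADD, new LDAPAttribute("mail", "new@example.org"))
};
connection.modify("cn=someone,ou=people,dc=example,dc=org", mods);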

Get declared methods in order they appear in source code

The situation seems to be abnormal, but I was asked to build a serializer that turns an object into a string by concatenating the results of its "get" methods. The values should appear in the same order in which the corresponding getters are declared in the source file.
So, for example, we have
class TestBean1 {
    public String getValue1() {
        return "value1";
    }
    public String getValue2() {
        return "value2";
    }
}
The result should be:
"value1 - value2"
and not
"value2 - value1"
It can't be done with the Class object, according to the documentation. But I wonder if I can find this information in the "*.class" file, or is it lost? If such data exists, maybe someone knows a ready-to-use tool for that purpose? If the information can't be recovered, please suggest the most professional way of achieving the goal. I thought about adding some kind of custom annotation to the getters of the class that should be serialized.
If you want that you have to parse the source code, not the byte code.
There are a number of libraries that parse a source file into a node tree, my favorite is the javaparser (hosted at code.google.com), which, in a slightly modified version, is also used by spring roo.
On the usage page you can find some samples. Basically you will want to use a Visitor that listens for MethodDefinitions.
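As a hedged sketch, here is roughly what that visitor could look like with the current com.github.javaparser API (the answer refers to the older code.google.com version, so names may differ; in today's library the node type is called MethodDeclaration):

import com.github.javaparser.StaticJavaParser;
import com.github.javaparser.ast.CompilationUnit;
import com.github.javaparser.ast.body.MethodDeclaration;
import com.github.javaparser.ast.visitor.VoidVisitorAdapter;

import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class GetterOrder {
    // Returns the getter names in the order they appear in the source file.
    public static List<String> gettersInSourceOrder(File sourceFile) throws Exception {
        CompilationUnit cu = StaticJavaParser.parse(sourceFile);
        final List<String> getters = new ArrayList<String>();
        new VoidVisitorAdapter<Void>() {
            @Override
            public void visit(MethodDeclaration method, Void arg) {
                super.visit(method, arg);
                if (method.getNameAsString().startsWith("get")) {
                    getters.add(method.getNameAsString());
                }
            }
        }.visit(cu, null);
        return getters;
    }
}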
Although reflection no longer (as of Java 7, I think) returns the methods in the order in which they appear in the source code, the class file appears to still (as of Java 8) contain the methods in source order.
So, you can parse the class file looking for method names and then sort the methods based on the file offset at which each method name was found.
If you want to do it in a less hacky way you can use Javassist, which will give you the line number of each declared method, so you can sort methods by line number.
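A minimal, hedged sketch of that Javassist idea - sorting declared methods by the line number recorded in the debug information (this assumes the class was compiled with line numbers, which is the javac default):

import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtMethod;

import java.util.Arrays;
import java.util.Comparator;

public class MethodOrder {
    // Sorts the declared methods by the first line number recorded in the debug info.
    public static CtMethod[] declaredMethodsInSourceOrder(String className) throws Exception {
        CtClass ctClass = ClassPool.getDefault().get(className);
        CtMethod[] methods = ctClass.getDeclaredMethods();
        Arrays.sort(methods, Comparator.comparingInt(
                (CtMethod m) -> m.getMethodInfo().getLineNumber(0)));
        return methods;
    }
}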
I don't think the information is retained.
JAXB, for example, has @XmlType(propOrder = {"field1", "field2"}), where you define the order of the fields when they are serialized to XML. You could implement something similar.
Edit: This works only on concrete classes (the class to inspect has its own .class file). I changed the code below to reflect this. Until I dive deeper into the ClassFileAnalyzer library to work with classes directly instead of reading them from a temporary file, this limitation remains.
The following approach works for me:
Download and import the following library: ClassFileAnalyzer
Add the following two static methods (attention: getClassDump() needs a little modification for writing out the class file to a temporary file; I removed my code here because it is very specific at this point):
public static String getClassDump(Class<?> c) throws Exception {
    String classFileName = c.getSimpleName() + ".class";
    URL resource = c.getResource(classFileName);
    if (resource == null) {
        throw new RuntimeException("Works only for concrete classes!");
    }
    String absolutePath = ...; // write to temp file and get absolute path
    ClassFile classFile = new ClassFile(absolutePath);
    classFile.parse();
    Info infos = new Info(classFile, absolutePath);
    StringBuffer infoBuffer = infos.getInfos();
    return infoBuffer.toString();
}

public static <S extends List<Method>> S sortMethodsBySourceOrder(Class<?> c, S methods) throws Exception {
    String classDump = getClassDump(c);
    int index = classDump.indexOf("constant_pool_count:");
    final String dump = classDump.substring(index);
    // lineSeparator: presumably a field holding the line separator used in the dump (not shown)
    Collections.sort(methods, new Comparator<Method>() {
        public int compare(Method o1, Method o2) {
            Integer i1 = Integer.valueOf(dump.indexOf(" " + o1.getName() + lineSeparator));
            Integer i2 = Integer.valueOf(dump.indexOf(" " + o2.getName() + lineSeparator));
            return i1.compareTo(i2);
        }
    });
    return methods;
}
Now you can call sortMethodsBySourceOrder with any List of methods (because sorting arrays is not very comfortable) and you will get the list back sorted.
It works by looking at the class dump's constant pool, which in turn can be determined by the library.
Greetz,
GHad
Write your custom annotation to store ordering data, then use Method.getAnnotation(Class annotationClass)
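As a hedged sketch of that annotation-based approach, with a hypothetical @Order annotation on each getter:

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.Comparator;

@Retention(RetentionPolicy.RUNTIME)
@interface Order {
    int value();
}

class TestBean1 {
    @Order(1) public String getValue1() { return "value1"; }
    @Order(2) public String getValue2() { return "value2"; }
}

public class AnnotationOrderDemo {
    public static void main(String[] args) throws Exception {
        TestBean1 bean = new TestBean1();
        Method[] getters = TestBean1.class.getDeclaredMethods();
        // Reflection gives no guaranteed order, so sort by the annotation value.
        Arrays.sort(getters, Comparator.comparingInt(
                (Method m) -> m.getAnnotation(Order.class).value()));
        StringBuilder result = new StringBuilder();
        for (Method getter : getters) {
            if (result.length() > 0) {
                result.append(" - ");
            }
            result.append(getter.invoke(bean));
        }
        System.out.println(result); // value1 - value2
    }
}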
