The OWL API provides several IRI mappers to cache ontology documents locally. Do any of them use OASIS XML Catalogs, as Protege does? Even better, is there one that automatically caches read-in ontologies locally and checks the original IRI for updates before using the local copy?
The Protege team has released the xmlcatalog component as a standalone module (separate from the rest of Protege), and it includes an implementation of OWLOntologyIRIMapper:
https://github.com/protegeproject/xmlcatalog/blob/master/src/main/java/org/protege/xmlcatalog/owlapi/XMLCatalogIRIMapper.java
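For anyone wanting to wire that mapper in, here is a minimal sketch. It assumes the standalone xmlcatalog artifact is on the classpath, that XMLCatalogIRIMapper can be constructed from the catalog File (check the constructors in the linked source), and an OWL API 4.x manager; the catalog file name and ontology IRI below are made up for illustration:

```java
import java.io.File;

import org.protege.xmlcatalog.owlapi.XMLCatalogIRIMapper;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.IRI;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyManager;

public class CatalogMapperExample {
    public static void main(String[] args) throws Exception {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();

        // "catalog-v001.xml" is the file name Protege generates; adjust as needed.
        File catalogFile = new File("catalog-v001.xml");
        manager.getIRIMappers().add(new XMLCatalogIRIMapper(catalogFile));

        // Imports that match a catalog entry are now loaded from the
        // local copies instead of being fetched from the web.
        OWLOntology ontology = manager.loadOntology(
                IRI.create("http://example.org/ontologies/my-ontology"));
        System.out.println(ontology.getOntologyID());
    }
}
```

On the older OWL API 3.x the mapper is registered with manager.addIRIMapper(...) instead.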
I just went through the source code, looking for implementations of OWLOntologyIRIMapper. As far as I can tell, none of the implementations save their mappings to disk, much less in the OASIS XML Catalog format.
I'd be very happy to find out I am wrong, so please let me know if I am!
Related
I am working on a project that requires validating many XML files against their XSDs. The trouble I am having is that many of the XSD files depend on other XSDs, which makes the usual validation approach troublesome. Is there an elegant way to resolve this issue?
I would prefer, if possible, to work with those files in memory; the files are not in a consistent directory structure that matches their import paths.
Just to note I am working with the Java language.
Assuming here that you work with JAXP, so that you can setSchema() on either SAXParserFactory or DocumentBuilderFactory.
One solution I was part of was to read all XSD sources into an aggregated Schema object using SchemaFactory.newSchema(Source[] schemas). This aggregated Schema was then able to validate any XML document that referenced any "top" schema; all imported schemas had to be part of the aggregated Schema. As I remember it, it was necessary to order the Source array by dependency, so that if schema A imported schema B, B had to occur before A in the array.
Also, as I recall, <include> didn't work very well with this mechanism.
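Here is a minimal sketch of that aggregated-schema approach; the file names are invented, and the sources could just as well wrap in-memory streams (StreamSource accepts an InputStream or Reader), which fits the "keep it in memory" requirement:

```java
import java.io.File;

import javax.xml.XMLConstants;
import javax.xml.transform.Source;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class AggregatedSchemaValidation {
    public static void main(String[] args) throws Exception {
        SchemaFactory factory =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);

        // Order matters: if a.xsd imports b.xsd, b.xsd must come first.
        Source[] schemas = {
                new StreamSource(new File("common-types.xsd")), // imported by both
                new StreamSource(new File("b.xsd")),            // imported by a.xsd
                new StreamSource(new File("a.xsd"))             // the "top" schema
        };
        Schema aggregated = factory.newSchema(schemas);

        // Validate a document against the aggregated schema.
        Validator validator = aggregated.newValidator();
        validator.validate(new StreamSource(new File("document.xml")));
        System.out.println("document.xml is valid");
    }
}
```

The Schema object is immutable and thread-safe, so it can be built once and shared; each validation just needs its own Validator.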
Another solution would be to set an LSResourceResolver on the SchemaFactory. You would have to implement your own LSResourceResolver that serves byte or character streams based on the input to the resolver. I haven't personally used or researched this solution.
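An untested sketch of that second option: an LSResourceResolver that serves schema text from an in-memory map keyed by systemId (real code may also need to key on namespaceURI, since parsers often pass relative systemIds):

```java
import java.io.StringReader;
import java.util.HashMap;
import java.util.Map;

import org.w3c.dom.bootstrap.DOMImplementationRegistry;
import org.w3c.dom.ls.DOMImplementationLS;
import org.w3c.dom.ls.LSInput;
import org.w3c.dom.ls.LSResourceResolver;

/**
 * Resolves schema imports/includes from an in-memory map keyed by systemId,
 * so no particular directory layout is required.
 */
public class InMemorySchemaResolver implements LSResourceResolver {

    private final Map<String, String> schemasBySystemId = new HashMap<>();
    private final DOMImplementationLS ls;

    public InMemorySchemaResolver() throws Exception {
        ls = (DOMImplementationLS) DOMImplementationRegistry
                .newInstance().getDOMImplementation("LS");
    }

    public void register(String systemId, String schemaText) {
        schemasBySystemId.put(systemId, schemaText);
    }

    @Override
    public LSInput resolveResource(String type, String namespaceURI,
                                   String publicId, String systemId,
                                   String baseURI) {
        String schemaText = schemasBySystemId.get(systemId);
        if (schemaText == null) {
            return null; // fall back to the parser's default resolution
        }
        LSInput input = ls.createLSInput();
        input.setCharacterStream(new StringReader(schemaText));
        input.setSystemId(systemId);
        input.setBaseURI(baseURI);
        return input;
    }
}
```

You would install it with schemaFactory.setResourceResolver(new InMemorySchemaResolver()) before calling newSchema(...).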
The first solution has of course the benefit that schema parsing and processing can be done once and reused for all validations that follow; something that will probably be difficult to achieve with the second option.
Another thing to keep in mind (depending on your context): It is a good design choice to control the whole "resolving" process (i.e. control how the parsers get access to external resources), from a performance as well as a security perspective.
This is more of a high-level question about using JAXB and XSLT, as I try to gain a better understanding of what I need to do and what I need to learn more about.
I have inherited an application that has Java class files generated from an XSD schema (using JAXB). It does some stuff, then writes one of these objects to a serialized 'save file'.
I currently need to make changes to the XSD, which of course means some of the originally generated classes will change. However, I still need to be able to load the old serialized save files for backwards compatibility. Does this mean I need to maintain a copy of the current XSD, and all the generated class files, in order to load the old serialized save files? Does anyone have a suggested way to do this, if I must be able to load the old files?
For all future versions of the XSD, I intend to output saved files as XML and use XSLT to transform the file before unmarshalling it, which I think will work, as mentioned in this thread: How should I manage different incompatible formats of XML based documents. That doesn't help me with the older serialized files though - any ideas?
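For what it's worth, the transform-then-unmarshal pipeline can be wired up without an intermediate file using javax.xml.bind.util.JAXBResult. A sketch, where SaveFile stands in for the JAXB-generated root class and v1-to-v2.xslt for the upgrade stylesheet (both names are hypothetical):

```java
import java.io.File;

import javax.xml.bind.JAXBContext;
import javax.xml.bind.util.JAXBResult;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamSource;

public class UpgradeAndUnmarshal {
    public static void main(String[] args) throws Exception {
        JAXBContext context = JAXBContext.newInstance(SaveFile.class);

        Transformer transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new File("v1-to-v2.xslt")));

        // Pipe the transformation output straight into the unmarshaller.
        JAXBResult result = new JAXBResult(context);
        transformer.transform(new StreamSource(new File("old-save.xml")), result);

        SaveFile upgraded = (SaveFile) result.getResult();
        System.out.println("Loaded: " + upgraded);
    }
}
```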
Thanks.
Probably the main drawback of JAXB, and of data binding in general, is that it makes schema evolution very cumbersome. XML is a technology where people expect to change and extend the schema/data model frequently, whereas in Java it is hard-coded and hard to change. Use of XML-oriented languages like XSLT and XQuery is a big advantage in such situations.
Saving persistent data in the form of serialized Java objects seems completely perverse to me. Before you move to your new schema format, convert it all back to XML. The whole point of XML is that the data is then in a format that is far more durable, and not dependent on the continued existence of the software that created it.
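A one-off migration along those lines might look like this sketch. It must be compiled against the OLD generated classes, since only those can still deserialize the files (SaveFile is again a hypothetical stand-in, and it assumes the old classes marshal cleanly, e.g. carry @XmlRootElement):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.ObjectInputStream;

import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;

public class MigrateSaveFiles {
    public static void main(String[] args) throws Exception {
        // Read the Java-serialized save file with the OLD classes...
        try (ObjectInputStream in = new ObjectInputStream(
                new FileInputStream("old-save.ser"))) {
            SaveFile save = (SaveFile) in.readObject();

            // ...and write it back out as durable XML.
            Marshaller marshaller = JAXBContext.newInstance(SaveFile.class)
                    .createMarshaller();
            marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
            marshaller.marshal(save, new File("old-save.xml"));
        }
    }
}
```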
Is anyone aware of a library that makes the process of creating XSDs in Java a little easier than using something like a DocumentBuilder? Something like a helper or utility class. I came across the org.eclipse.xsd Maven jar, but I'm having ClassNotFoundException issues when working with it in Eclipse, and I'm not entirely sure it's meant to be used as a standalone kind of thing. This is a bit difficult to Google for as well, since there are a lot of search results about automatic generation/translation from Java to XSD and vice versa.
Essentially what I need to do is to programmatically create an XSD from a certain source of data -- not Java classes.
Apache XMLSchema is a lightweight Java object model that can be used to manipulate and generate XML schema representations. You can use it to read XML Schema (xsd) files into memory and analyze or modify them, or to create entirely new schemas from scratch.
The fact that this API can create an XSD from scratch makes it sound like a good starting point for what you're asking; as to how well it fits, that depends on what that "certain source of data" is.
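A minimal from-scratch sketch, assuming the Apache XMLSchema 2.x API (org.apache.ws.commons.schema); the namespace and element name are placeholders:

```java
import org.apache.ws.commons.schema.XmlSchema;
import org.apache.ws.commons.schema.XmlSchemaCollection;
import org.apache.ws.commons.schema.XmlSchemaElement;
import org.apache.ws.commons.schema.constants.Constants;

public class BuildSchemaFromScratch {
    public static void main(String[] args) throws Exception {
        String tns = "http://example.org/generated";
        XmlSchema schema = new XmlSchema(tns, new XmlSchemaCollection());

        // Declare a top-level <xs:element name="title" type="xs:string"/>.
        XmlSchemaElement title = new XmlSchemaElement(schema, true);
        title.setName("title");
        title.setSchemaTypeName(Constants.XSD_STRING);

        // Serialize the in-memory model back out as XSD text.
        schema.write(System.out);
    }
}
```

Your code would walk the "certain source of data" and emit one such element or type per item.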
I have an OWL ontology and I want to store data as RDF. When I searched Google I saw that the Jena library is used for this purpose, but I could not understand how I can represent data as RDF in Jade. Can somebody please help me?
Jade and Jena are more-or-less independent libraries, so using them both in a project is not hard. Indeed, they have been used together in various projects - try a Google search for AgentOWL, for example.
Your agents will need one or more Jena Model objects to hold the RDF information they are going to reason with. These models can be loaded into memory in each agent instance, or you can use a persistent store, such as TDB.
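A small sketch of both options, assuming a recent Apache Jena (older releases used the com.hp.hpl.jena packages) and made-up paths:

```java
import org.apache.jena.query.Dataset;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.tdb.TDBFactory;

public class AgentModelSetup {
    public static void main(String[] args) {
        // In-memory: load the ontology/instance data into the agent.
        Model memModel = ModelFactory.createDefaultModel();
        memModel.read("file:data/my-ontology.owl"); // hypothetical path

        // Persistent alternative: back the data with a TDB store on disk.
        Dataset dataset = TDBFactory.createDataset("data/tdb"); // hypothetical path
        Model persistent = dataset.getDefaultModel();
        persistent.add(memModel);
        dataset.close();
    }
}
```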
When agents need to send inter-agent messages via Jade, as I recall the default mechanism that Jade uses is Java object serialization (this may have changed, it has been a while since I looked at Jade). Serialization won't work for Jena objects, so you'll need to construct a model that contains just the RDF triples you want to send, and then write that model out as a string for the content of an ACL message. I'd suggest using Turtle as the serialization format; it's more compact and easier to read.
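Roughly, that send/receive pattern could look like this sketch (Jena plus JADE's ACLMessage; untested, and the "text/turtle" language tag is just a convention you'd agree on between agents):

```java
import java.io.StringReader;
import java.io.StringWriter;

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

import jade.core.AID;
import jade.lang.acl.ACLMessage;

public class RdfMessaging {

    // Sender side: serialize just the triples to ship, as Turtle.
    public static ACLMessage toMessage(Model triplesToSend, AID receiver) {
        StringWriter out = new StringWriter();
        triplesToSend.write(out, "TTL");

        ACLMessage msg = new ACLMessage(ACLMessage.INFORM);
        msg.addReceiver(receiver);
        msg.setLanguage("text/turtle");
        msg.setContent(out.toString());
        return msg;
    }

    // Receiver side: parse the content back into a fresh model.
    public static Model fromMessage(ACLMessage msg) {
        Model model = ModelFactory.createDefaultModel();
        model.read(new StringReader(msg.getContent()), null, "TTL");
        return model;
    }
}
```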
I have created an ontology. Now I want to create an application, but how can I perform CRUD operations on the OWL file? I came across different APIs like dotNetRDF, Jena, etc. They all support RDF/RDFS, but there is no support for OWL files.
http://www.semanticoverflow.com/questions/2704/using-jena-to-query-owl-files
Problem of reading OWL/XML
Also, most of the APIs are available in Java, and I don't know how to write a simple hello world program in Java. I am confused by servlets, JSP and .java files, and lots of configuration is required. So I prefer PHP.
So is there any API, or any alternative way, to query an OWL file in PHP?
Regards,
anas anjaria
The only libraries I know that support the Semantic Web standards in PHP are rdfapi [1] and the Redland PHP bindings [2], but they work at the RDF level (i.e. the building block of RDFS and OWL); you will need to add your CRUD operations at the triple level (i.e. on simple axioms like foaf:knows).
[1] http://www4.wiwiss.fu-berlin.de/bizer/rdfapi/
[2] http://librdf.org/docs/php.html
So, it looks like you're talking about the Web Ontology Language, an XML/RDF dialect.
A few moments in Google shows pretty much zero interest in this in the world of PHP.
But, being XML, you can use one of the PHP XML extensions to read and work with the XML directly without a problem. How well this will actually work for you, I can't say. OWL looks freakishly complex, and working with it at the DOM node level will very likely stretch your sanity far worse than working with the mature, established libraries in Java.
I did my final project at university using Jena. The research group where I work developed an ontology generator tool which is capable of all CRUD operations. They also developed an Eclipse plug-in for this project.
You just create your OWL data model in the editor and right-click the data model to create everything: it creates the OWL files, the CRUD class, and its test code for you.
Check it out:
Download
The plug-in is named "SEAGENT Ontology Generator Plugin (Beta)".
I hope it will be as beneficial for you as it was for me.