I need to convert data from an OWLOntology object (part of the OWL API) to a Model object (part of the Jena API). My Java program should be able to load an OWL file and send its content to a Fuseki server. According to what I have read, working with a Fuseki server from a Java program is only possible with the Jena API, which is why I use it.
So I found an example of sending ontologies to a Fuseki server using the Jena API and modified it into this function:
private static void sendOntologyToFuseki(DatasetAccessor accessor, OWLOntology owlModel) {
    Model model;
    /*
     * ..
     * conversion from OWLOntology to Model
     * ..
     */
    if (accessor != null) {
        accessor.add(model);
    }
}
This function should add new ontologies to the Fuseki server. Any ideas how to fill in the missing conversion? Or any other ideas on how to send ontologies to a Fuseki server using the OWL API?
I have read the solution to this question:
Sparql query doesn't upadate when insert some data through java code
but the purpose of my Java program is to send these ontologies incrementally, because the data is quite big, and if I load it all into local memory my computer cannot manage it.
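To make it concrete, the plan is roughly the loop below (the service URI and the ontologies collection are placeholders; sendOntologyToFuseki is the function above):
// Push the ontologies one at a time, so only one converted model is held in memory at once.
DatasetAccessor accessor = DatasetAccessorFactory.createHTTP("http://localhost:3030/ds/data"); // placeholder URI
for (OWLOntology ont : ontologies) {
    sendOntologyToFuseki(accessor, ont);
}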
The idea is to write to a Java OutputStream and pipe this into an InputStream. A possible implementation could look like this:
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.semanticweb.owlapi.formats.TurtleDocumentFormat;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyStorageException;

/**
 * Converts an OWL API ontology into a Jena API model.
 * @param ontology the OWL API ontology
 * @return the Jena API model
 */
public static Model getModel(final OWLOntology ontology) {
    Model model = ModelFactory.createDefaultModel();
    try (PipedInputStream is = new PipedInputStream(); PipedOutputStream os = new PipedOutputStream(is)) {
        new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    // Serialize the ontology as Turtle into the piped output stream.
                    ontology.getOWLOntologyManager().saveOntology(ontology, new TurtleDocumentFormat(), os);
                    os.close();
                } catch (OWLOntologyStorageException | IOException e) {
                    e.printStackTrace();
                }
            }
        }).start();
        // Read the Turtle from the piped input stream into the Jena model.
        model.read(is, null, "TURTLE");
        return model;
    } catch (Exception e) {
        throw new RuntimeException("Could not convert OWL API ontology to JENA API model.", e);
    }
}
Alternatively, you could simply use ByteArrayOutputStream and ByteArrayInputStream instead of piped streams.
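For example, a minimal sketch of that in-memory variant (getModelInMemory is just an illustrative name; it uses the same OWL API and Jena imports as above and is fine whenever a single serialized ontology fits in memory):
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;

public static Model getModelInMemory(final OWLOntology ontology) {
    try {
        // Serialize the ontology as Turtle into a byte array.
        ByteArrayOutputStream os = new ByteArrayOutputStream();
        ontology.getOWLOntologyManager().saveOntology(ontology, new TurtleDocumentFormat(), os);
        // Read the bytes back into a Jena model.
        Model model = ModelFactory.createDefaultModel();
        model.read(new ByteArrayInputStream(os.toByteArray()), null, "TURTLE");
        return model;
    } catch (OWLOntologyStorageException e) {
        throw new RuntimeException("Could not convert OWL API ontology to JENA API model.", e);
    }
}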
To avoid this kind of awkward transformation through I/O streams, you can use ONT-API: it reads the OWL axioms directly from the graph, without any conversion.
Related
I am still researching this subject; I cannot find a simple solution, and I am not sure one even exists.
Part 1
I have a service in my application that generates an Excel document from dynamic DB data.
public static void notiSubscribersToExcel(List<NotificationsSubscriber> data) {
    // generating the file dynamically from the DB's data
    // (wb is the workbook built from `data`; its construction is omitted here)
    String prefix = "./src/main/resources/static";
    String directoryName = prefix + "/documents/";
    String fileName = directoryName + "subscribers_list.xlsx";

    File directory = new File(directoryName);
    if (!directory.exists()) {
        directory.mkdir();
        // If you require it to make the entire directory path including parents,
        // use directory.mkdirs(); here instead.
    }

    try (OutputStream fileOut = new FileOutputStream(fileName)) {
        wb.write(fileOut);
        wb.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Part 2
I want to access it from the browser, so that when I request it, the file gets downloaded.
I know that for static content, all I need to do is request the file from the browser like this:
http://localhost:8080/documents/myfile.xlsx
Once I can do that, all I need is to create a link to this URL from my client app.
The problem -
Currently, if I request the file as above, it will only download the file that was already there at compile time; if I generate new files after the app is running, that content won't be available.
It seems that the content is (as the name suggests) "static" and cannot be changed after startup.
So my question is
is there a way to define a folder in the app structure that is dynamic? I just want to access the newly generated file.
BTW, I found this answer and others that use configuration methods or web services, but I don't want all of that. I have tried some of them, and the result is the same.
FYI, I don't bundle my client app with the server app; I run them from different hosts.
The problem is to download a file with dynamic content from a Spring app.
This can be solved with Spring Boot. Here is the solution, as shown in this illustration: when I click Download report, my app generates a dynamic Excel report and it is downloaded to the browser.
From JS, make a GET request to a Spring controller:
function DownloadReport(e) {
    // Navigate to the controller's /report endpoint; the browser downloads the response
    window.location = "../report";
}
Here is the Spring Controller GET Method with /report:
@RequestMapping(value = ["/report"], method = [RequestMethod.GET])
@ResponseBody
fun report(request: HttpServletRequest, response: HttpServletResponse) {
    // Call exportExcel to generate an Excel doc with data using jxl.Workbook
    val excelData = excel.exportExcel(myList)
    try {
        // Download the report.
        val reportName = "ExcelReport.xls"
        response.contentType = "application/vnd.ms-excel"
        response.setHeader("Content-disposition", "attachment; filename=$reportName")
        org.apache.commons.io.IOUtils.copy(excelData, response.outputStream)
        response.flushBuffer()
    } catch (e: Exception) {
        e.printStackTrace()
    }
}
This code is implemented in Kotlin - but you can implement it as easily in Java too.
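For reference, here is a minimal Java sketch of the same handler (like the Kotlin version, it assumes excel.exportExcel(myList) returns the generated report as an InputStream; the surrounding controller class and fields are omitted):
@RequestMapping(value = "/report", method = RequestMethod.GET)
@ResponseBody
public void report(HttpServletRequest request, HttpServletResponse response) {
    try {
        // excel.exportExcel(...) is assumed to return the report as an InputStream
        InputStream excelData = excel.exportExcel(myList);
        String reportName = "ExcelReport.xls";
        response.setContentType("application/vnd.ms-excel");
        response.setHeader("Content-disposition", "attachment; filename=" + reportName);
        org.apache.commons.io.IOUtils.copy(excelData, response.getOutputStream());
        response.flushBuffer();
    } catch (Exception e) {
        e.printStackTrace();
    }
}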
I'm working on a Spring Boot API that can receive very large objects and tries to save them in a MongoDB database. Because of this, the program sometimes throws the following error:
org.bson.BsonMaximumSizeExceededException: Payload document size is larger than maximum of 16793600.
I've read that MongoDB only permits documents smaller than 16 MB, which is very inconvenient for my system because an object can easily exceed this limit. To solve this I read about GridFS, a technology that allows you to go beyond the 16 MB limit.
Now I'm trying to implement GridFS in my system, but I have only seen examples that save files to the database, something like this:
gridFsOperations.store(new FileInputStream("/Users/myuser/Desktop/text.txt"), "myText.txt", "text/plain", metaData);
But what I want is not to take the data from a file, but to have the API receive an object and save it, something like this:
@PostMapping
public String save(@RequestBody Object object) {
    DBObject metaData = new BasicDBObject();
    metaData.put("type", "data");
    gridFsOperations.store(object, metaData);
    return "Stored successfully...";
}
Is there a possible way of doing this?
Get an InputStream from the request and pass it to a GridFSBucket. Here's a rough example:
In your controller:
@PostMapping
public ResponseEntity<String> uploadFile(MultipartHttpServletRequest request) throws IOException
{
    Iterator<String> iterator = request.getFileNames();
    String filename = iterator.next();
    MultipartFile mf = request.getFile(filename);
    // I always have a service layer between controller and repository but for purposes of this example...
    ObjectId oid = myDao.uploadFile(filename, mf.getInputStream());
    return ResponseEntity.ok(oid.toHexString());
}
In your DAO/repository:
private GridFSBucket bucket;

@Autowired
void setMongoDatabase(MongoDatabase db)
{
    bucket = GridFSBuckets.create(db);
}

public ObjectId uploadFile(String filename, InputStream is)
{
    Document metadata = new Document("type", "data");
    GridFSUploadOptions opts = new GridFSUploadOptions().metadata(metadata);
    ObjectId oid = bucket.uploadFromStream(filename, is, opts);
    try
    {
        is.close();
    }
    catch (IOException ioe)
    {
        throw new UncheckedIOException(ioe);
    }
    return oid;
}
I paraphrased this from existing code, so it may not be perfect, but it should be good enough to point you in the right direction.
How can I create a connection to a Fuseki server from Android Studio and upload my OWL file to the Fuseki server, in order to send SPARQL queries and get the results?
I did it from the command line and it works fine, but I need to do it from Android Studio.
I found some code, but DatasetAccessor and DatasetAccessorFactory cannot be resolved:
public static void uploadRDF(File rdf, String serviceURI) throws IOException {
    // parse the file
    Model m = ModelFactory.createDefaultModel();
    try (FileInputStream in = new FileInputStream(rdf)) {
        m.read(in, null, "RDF/XML");
    }
    // upload the resulting model
    DatasetAccessor accessor = DatasetAccessorFactory.createHTTP(serviceURI);
    accessor.putModel(m);
}
I've been trying to save a JSON (currently, it's simply a generic file) to my device's internal storage. I've looked at tons of examples that look exactly like what I've written, but I can never find the file on my device.
Code to save file
public void saveJSONStringToFile(String json) {
    FileOutputStream outfile = null;
    String filename = "json_test.json";
    try {
        // openFileOutput writes to the app's private internal storage directory
        outfile = openFileOutput(filename, MODE_PRIVATE);
        outfile.write(json.getBytes());
        outfile.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
I know it's not working because I've also implemented code to retrieve that file, and the app just hangs when I try it, implying that it's looking for something that isn't there.
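For context, the retrieval is essentially the standard openFileInput pattern in the same Activity (a simplified sketch, not the exact code; uses java.io.BufferedReader and InputStreamReader):
public String readJSONStringFromFile() {
    String filename = "json_test.json";
    StringBuilder sb = new StringBuilder();
    try (BufferedReader reader = new BufferedReader(
            new InputStreamReader(openFileInput(filename)))) {
        // openFileInput reads from the same private internal storage directory used by openFileOutput
        String line;
        while ((line = reader.readLine()) != null) {
            sb.append(line);
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    return sb.toString();
}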
I have a Java web application based on Spring MVC.
The task is to generate a PDF file. As everyone knows, Spring has built-in iText support, so generating a PDF file is really simple. The first thing to do is to extend AbstractView and create some PdfView; the second is to use that view in a controller. But in my application I also have to be able to store the generated PDF files on a local drive, or give my users a link to download the file. So a view alone is not suitable for me in that case.
I want to create a universal PDF generator that creates a PDF file and returns the byte array, so I can use that array to store the file (on the hard drive) or stream it directly to the browser. The question is: is there any way to use such an engine (one that returns only the byte array) in the PdfView solution? I am asking because the overridden buildPdfDocument method (in PdfView) already has PdfWriter and Document parameters.
Thank you
tl;dr: you should be able to use a view and also save its output to a file.
Try using Flying Saucer and its ITextRenderer when you override a view class (the sketch below extends AbstractView).
import java.io.ByteArrayOutputStream;
import java.util.Map;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.web.servlet.view.AbstractView;
import org.xhtmlrenderer.pdf.ITextRenderer;

public class MyAbstractView extends AbstractView {

    private byte[] pdfBytes;

    @Override
    protected void renderMergedOutputModel(Map<String, Object> model,
            HttpServletRequest request,
            HttpServletResponse response) throws Exception {
        // process model params as needed, then render the PDF into a byte array
        ByteArrayOutputStream os = new ByteArrayOutputStream();

        ITextRenderer renderer = new ITextRenderer();
        String url = "http://www.mysite.com"; // set your sample URL/namespace here
        renderer.setDocument(url);            // or renderer.setDocumentFromString(html)
        renderer.layout();
        renderer.createPDF(os);

        pdfBytes = os.toByteArray();

        // stream the same bytes to the browser
        response.setContentType("application/pdf");
        response.setContentLength(pdfBytes.length);
        response.getOutputStream().write(pdfBytes);
        response.flushBuffer();
    }

    public byte[] getPDFAsBytes() {
        return pdfBytes;
    }
}
You'll probably have to tweak the sample implementation shown here, but that should provide a basic gist.
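For the file-storage part of the question, a short usage sketch (myPdfView and the output path are placeholders): once the view has rendered, the same bytes can be written to disk with a plain FileOutputStream.
// Reuse the rendered bytes to store the PDF on the local drive.
byte[] pdf = myPdfView.getPDFAsBytes();
try (OutputStream out = new FileOutputStream("/path/to/report.pdf")) { // placeholder path
    out.write(pdf);
}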