SPARQL query doesn't update when inserting data through Java code - java

I'm trying to insert data through my Java code into an OWL file which is loaded into a Fuseki server. The update query doesn't give any error message, but the OWL file doesn't update. I'm using the Jena library and implemented this in Java. What is wrong in my code?
public boolean addLecturerTriples(String fName, String lName,
                                  String id, String module) {
    try {
        ArrayList<String> subject = new ArrayList<String>();
        ArrayList<String> predicate = new ArrayList<String>();
        ArrayList<String> object = new ArrayList<String>();
        subject.add("<http://people.brunel.ac.uk/~csstnns/university.owl#" + fName + ">");
        predicate.add("<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>");
        object.add("<http://people.brunel.ac.uk/~csstnns/university.owl#Lecturer>");
        for (int i = 0; i < subject.size(); i++) {
            String qry = "INSERT DATA" +
                         "{" +
                         subject.get(i) + "\n" +
                         predicate.get(i) + "\n" +
                         object.get(i) + "\n" +
                         "}";
            UpdateRequest update = UpdateFactory.create(qry);
            UpdateProcessor qexec = UpdateExecutionFactory.createRemote(update, "http://localhost:3030/ds/update");
            qexec.execute();
        }
    } catch (Exception e) {
        return false;
    }
    return true;
}

It would help if you had provided a minimal complete example, i.e. if you had included your Fuseki configuration and the details of how your OWL file is loaded into Fuseki.
However, I will assume you have not used any specific configuration and are just launching Fuseki like so:
java -jar fuseki-server-VER.jar --update --loc /path/to/db /ds
So what you've done here is launch Fuseki with updates enabled, using the location /path/to/db as the on-disk TDB database and the URL /ds for your dataset.
Then you open your browser, click through Control Panel > /ds, and use the Upload file function to upload your OWL file. When you upload a file it is read into Fuseki and copied into the dataset; in this example your dataset is the on-disk TDB database located at /path/to/db.
It is important to understand that no reference to the original file is kept, since Fuseki has simply copied the data from the file into the dataset.
You then use the SPARQL Update form to add some data (or, in your case, you do this via Java code). The update is applied to the dataset, which, to reiterate, is in this example the on-disk TDB database located at /path/to/db and has no reference to the original file. Therefore your original file will not change.
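In other words the data is there, just not in the file: if you query the Fuseki dataset itself you should see the inserted triples. A minimal sketch with Jena (classes from org.apache.jena.query in Jena 3.x, or com.hp.hpl.jena.query in Jena 2.x), assuming the default /ds/query endpoint:
// List every resource typed as Lecturer in the Fuseki dataset
String qry = "SELECT ?s WHERE { ?s a <http://people.brunel.ac.uk/~csstnns/university.owl#Lecturer> }";
Query query = QueryFactory.create(qry);
// Query the dataset (the TDB database behind /ds), not the original OWL file
QueryExecution qexec = QueryExecutionFactory.sparqlService("http://localhost:3030/ds/query", query);
try {
    ResultSetFormatter.out(System.out, qexec.execSelect());
} finally {
    qexec.close();
}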
Using SPARQL Update to update the original file
If Fuseki is not essential then you could just load your file into local memory and run the update there instead:
Model m = ModelFactory.createDefaultModel();
m.read("example.owl", "RDF/XML");
// Prepare your update...
// Create an UpdateProcessor over the local model
UpdateProcessor processor = UpdateExecutionFactory.create(update, GraphStoreFactory.create(m));
processor.execute();
// Save the updated model over the original file
m.write(new FileOutputStream("example.owl"), "RDF/XML");
However, if you want to (or must) stick with Fuseki, you can update your original file by retrieving the modified graph from Fuseki and writing it back out to your file, e.g.
DatasetAccessor accessor = DatasetAccessorFactory.createHTTP("http://localhost:3030/ds/data");
// Download the updated model
Model updated = accessor.getModel();
// Save the updated model over the original file
updated.write(new FileOutputStream("example.owl"), "RDF/XML");
This example assumes that you have loaded the OWL file into the default graph; if not, use the getModel("http://graph") overload to retrieve the relevant named graph.

Related

How do I reveal a MS Graph (search result) driveItem File's path (folder)?

I have the following (Kotlin/Java based) query in MS Graph:
var driveItemSearchCollectionRequestBuilder =
    safeGraphServiceClient
        .sites(SHAREPOINT_SITE_ID)
        .drive()
        .root()
        .search("¤A=118628")
do {
    driveItemSearchCollectionPage = driveItemSearchCollectionRequestBuilder?.buildRequest()?.get() ?: break
    driveItemSearchCollectionPage.currentPage.map { driveItem ->
        driveItem?.let { safeDriveItem ->
            // Here I need to find my `safeDriveItem`'s (which is a file) path (where the file is stored)... (or folder)
            // `safeDriveItem.folder` is null... (since this is a file)
        }
    }
    driveItemSearchCollectionRequestBuilder = driveItemSearchCollectionPage.nextPage
} while (driveItemSearchCollectionRequestBuilder != null)
which results in a set (page) of driveItems. This search can find the file in any folder in my SharePoint tree. Where (or how) can I find the driveItem file's folder (i.e. '\MyFolder\Level1\Level2\Level3')? (The folder item is null for the driveItem here, and I haven't found any other value which contains it.) Or do I need to do some "clever" backtracking?
Whenever you search using the above code, as you said, you will get the driveItems. Pick the id of the driveItem you want the folder path for and then call
https://graph.microsoft.com/v1.0/sites/{siteid}/drive/Items/{driveItemid}
which will pull the whole drive item object, which has a parentReference object that in turn has a path property in it.
SharePoint has two different data sources; search pulls its data by indexing from one source, so a few properties may not show up. Pulling an object directly gives you all the properties.
@Shiva found the solution, and I can now query the path for my file like this:
var driveItemSearchCollectionRequestBuilder =
    safeGraphServiceClient
        .sites(SHAREPOINT_SITE_ID)
        .drive()
        .root()
        .search("¤A=118628")
do {
    driveItemSearchCollectionPage = driveItemSearchCollectionRequestBuilder?.buildRequest()?.get() ?: break
    driveItemSearchCollectionPage.currentPage.map { driveItem ->
        driveItem?.let { safeDriveItem ->
            val pathItem = safeGraphServiceClient
                .sites(SHAREPOINT_SITE_ID)
                .drive()
                .items(safeDriveItem.id)
                .buildRequest()
                .get()
            val path = pathItem.parentReference.path
            galleryItems.add(path, driveItem.name) // My function now adds the path and file to the db
        }
    }
    driveItemSearchCollectionRequestBuilder = driveItemSearchCollectionPage.nextPage
} while (driveItemSearchCollectionRequestBuilder != null)
I hope that in the future driveItem.parentReference.path will be populated in the search results so we can avoid a secondary call to Graph, or that there will be some switch to set on the search request to disclose the path (from a communication cost perspective).

Is there any way to get records from a database as a comma separated text file

Friends,
I am stuck on this task.
I have a table empdb with 1000 rows like below:
id   name   desgn
---  -----  ----------------
1    Mike   Analyst
2    Jim    Manager
3    John   Engg
...and so on
I want to write a servlet that will query this table and download it in text format, say "emp_info.txt", like below:
1,sam,Engg
2,Mike,Excecutive
3,Jim,Manager
I mean every record in the DB should be on a separate line.
Please guide.
We can't get records directly from the DB as a CSV file, but there are many third-party libraries available for working with CSV files.
Let's take a look at a few of them:
Open CSV: Another popular and actively-maintained CSV library
Apache Commons CSV: Apache's CSV offering for working with CSV Files
Flatpack: An open-source CSV library being actively developed
CSVeed: Open-source and actively-maintained.
Sample code for OpenCSV is below; add the OpenCSV jar to your classpath.
Create a CSV file and add its location to the code below:
File file = new File(filePath);
try {
    // create FileWriter object with file as parameter
    FileWriter outputfile = new FileWriter(file);
    // create CSVWriter object with the FileWriter object as parameter
    CSVWriter writer = new CSVWriter(outputfile);
    // adding header to csv
    String[] header = { "id", "name", "design" };
    writer.writeNext(header);
    // add data to csv, in a loop
    String[] data = {};
    writer.writeNext(data);
    // repeat the two lines above until no data remains
    // closing writer connection
    writer.close();
}
catch (IOException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}
After creating the file, set the content type to CSV and flush the buffer; the browser will then download the file for the end user.
resp.setContentType("application/csv");
// Content-Disposition tells the browser to download the response as a file
resp.setHeader(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=\"emp_info.txt\"");
InputStream inStrm = new FileInputStream(file);
FileCopyUtils.copy(inStrm, resp.getOutputStream());
resp.flushBuffer();
I assume that you want the user who comes to your web app to be able to download all the data in this "empdb" DB table as an "emp_info.txt" file. Just to be clear what a servlet is:
Servlet - resides on the server side and generates a dynamic web page
You should split this task into different parts:
Connect your database to the Java application by using JPA and e.g. Hibernate; also implement a repository.
Create a service that will use the repository to fetch all data from the table and write it to a .txt file.
Implement a button or other functionality in the GUI which will use the service you have created and will send the file to the user. A rough sketch of what the servlet end could look like is below.
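If you would rather skip JPA for a first version, a bare servlet can stream the rows straight from JDBC to the response as comma-separated lines. The sketch below is only an illustration: the class name, JDBC URL and credentials are placeholder assumptions, and only the table and column names come from the question.
import java.io.IOException;
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class EmpExportServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/plain");
        // Ask the browser to save the output as emp_info.txt
        resp.setHeader("Content-Disposition", "attachment; filename=\"emp_info.txt\"");
        // Placeholder JDBC URL and credentials - replace with your own
        try (Connection con = DriverManager.getConnection("jdbc:mysql://localhost:3306/mydb", "user", "password");
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT id, name, desgn FROM empdb");
             PrintWriter out = resp.getWriter()) {
            while (rs.next()) {
                // one record per line, comma separated
                out.println(rs.getInt("id") + "," + rs.getString("name") + "," + rs.getString("desgn"));
            }
        } catch (Exception e) {
            throw new IOException("Export failed", e);
        }
    }
}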

Merge RDF .ttl files into one file database - filtering and keeping only the data/triples needed

I need to merge 1000+ .ttl files into one file database. How can I merge them while filtering the data in the source files and keeping only the data needed in the target file?
Thanks
There are a number of options, but the simplest way is probably to use a Turtle parser to read all the files, and let that parser pass its output to a handler which does the filtering before in turn passing the data to a Turtle writer.
Something like this would probably work (using RDF4J):
RDFWriter writer = org.eclipse.rdf4j.rio.Rio.createWriter(RDFFormat.TURTLE, outFile);
writer.startRDF();
for (File file : /* loop over your 1000+ input files */) {
    Model data = Rio.parse(new FileInputStream(file), "", RDFFormat.TURTLE);
    for (Statement st : data) {
        if (/* you want to keep this statement */) {
            writer.handleStatement(st);
        }
    }
}
writer.endRDF();
Alternatively, just load all the files into an RDF Repository, and use SPARQL queries to get the data out and save it to an output file, or, if you prefer, use SPARQL updates to remove the data you don't want before exporting the entire repository to a file.
Something along these lines (again using RDF4J):
Repository rep = ... // your RDF repository, e.g. an in-memory store or native RDF database
try (RepositoryConnection conn = rep.getConnection()) {
    // load all files into the database
    for (File file : /* loop over input files */) {
        conn.add(file, "", RDFFormat.TURTLE);
    }
    // do a SPARQL update to remove all instances of ex:Foo
    conn.prepareUpdate("DELETE WHERE { ?s a ex:Foo; ?p ?o }").execute();
    // export to file
    conn.export(Rio.createWriter(RDFFormat.TURTLE, outFile));
} finally {
    rep.shutDown();
}
Depending on the amount of data and the size of your files, you may need to extend this basic setup a bit (for example by using transactions instead of just letting the connection auto-commit), but hopefully you get the general idea.
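A rough sketch of that extension, assuming the input files are available in a collection called inputFiles and picking an arbitrary batch size, might look like this:
int batchSize = 100; // assumed batch size, tune for your data and memory
int count = 0;
try (RepositoryConnection conn = rep.getConnection()) {
    conn.begin();
    for (File file : inputFiles) {
        conn.add(file, "", RDFFormat.TURTLE);
        if (++count % batchSize == 0) {
            conn.commit(); // flush this batch to the store
            conn.begin();  // start the next batch
        }
    }
    conn.commit(); // commit whatever is left in the last batch
}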

Saving existent CSV file in Android with Processing

I am new to Android programming and I am struggling with saving an existing CSV file. I wrote the code in Java with Processing, and it works on the PC, but now I would like to switch to Android mode. How can I move the CSV file to my phone? And is there an easy way so I can use the command:
table = loadTable("Vocstest.csv", "header");
I use Processing 3.37
The code you wrote will work just fine on an Android phone. I have used the same code as yours in an app.
The difference may be that I do not try to overwrite it (by saving); I am only accessing it to retrieve the data.
You have to add your file to the "data" directory in your project. If the "data" folder does not exist, you can create it and put your CSV file in it.
example:
Table aQaK = loadTable("aQaK_ar.csv", "header");
TableRow myrow = aQaK.getRow(myversenum);
String myversetxt = myrow.getString("AyahText");
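If you also need to save changes, Processing's saveTable() should do the job. A minimal sketch, continuing the example above; the output file name is just an illustration, and on Android the write goes to the sketch's writable storage rather than back into the packaged data folder (APK assets are read-only):
// write a value back into the row, then save the table under a new name
myrow.setString("AyahText", myversetxt);
saveTable(aQaK, "aQaK_ar_out.csv");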
Hope this helps. Peace.

How to save models from ML Pipeline to S3 or HDFS?

I am trying to save thousands of models produced by ML Pipeline. As indicated in the answer here, the models can be saved as follows:
import java.io._

def saveModel(name: String, model: PipelineModel) = {
  val oos = new ObjectOutputStream(new FileOutputStream(s"/some/path/$name"))
  oos.writeObject(model)
  oos.close
}

schools.zip(bySchoolArrayModels).foreach {
  case (name, model) => saveModel(name, model)
}
I have tried using s3://some/path/$name and /user/hadoop/some/path/$name as I would like the models to be saved to Amazon S3 eventually, but they both fail with messages indicating the path cannot be found.
How to save models to Amazon S3?
One way to save a model to HDFS is as following:
// persist model to HDFS
sc.parallelize(Seq(model), 1).saveAsObjectFile("hdfs:///user/root/linReg.model")
Saved model can then be loaded as:
val linRegModel = sc.objectFile[LinearRegressionModel]("linReg.model").first()
For more details see (ref)
Since Apache Spark 1.6, and in the Scala API, you can save your models without using any tricks, because all models from the ML library come with a save method. You can check this in LogisticRegressionModel; indeed it has that method. By the way, to load the model you can use a static method:
val logRegModel = LogisticRegressionModel.load("myModel.model")
FileOutputStream saves to the local filesystem (not through the Hadoop libraries), so saving to a local directory is the way to go about doing this. That being said, the directory needs to exist, so make sure the directory exists first.
Depending on your model, you may also wish to look at https://spark.apache.org/docs/latest/mllib-pmml-model-export.html (PMML export).
