I created an ontology model using Protégé, and I used Java to populate it (create users, resources, etc.), saving all modifications to a file.
Now I need to integrate an RDF server to persist my changes. After some research I found that Fuseki is one of the best servers I can use.
After some more research I also found that I need to use RDFConnection to communicate with the Fuseki server, but I am having some difficulties integrating the server and manipulating all of my Java classes.
To query my ontology, I used RDFConnection. Example:
public static void main(String[] args) {
    RDFConnection conn1 = RDFConnectionFactory.connect("http://localhost:3030/test/");
    try (QueryExecution qExec = conn1.query(
            "PREFIX ex: <http://example.org/> SELECT * { ?s ?p ?o }")) {
        ResultSet rs = qExec.execSelect();
        ResultSetFormatter.out(rs, qExec.getQuery());
    }
}
but I am running into issues trying to create the Agent (user) or resources. Below you will find just a part of my Java code:
private final OntModel onto;
private OntModel inferred;

public test() {
    onto = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);
    OntDocumentManager manager = onto.getDocumentManager();
    manager.addAltEntry("http://www-test/1.0.0", "ontologies/test.owl");
}

public String createUri(String prefix, String localName) {
    String uri = prefix + "#" + localName;
    uri = uri.replaceAll(" ", "_");
    return uri;
}

// To create an Agent
public Resource createAgent(String uri) throws AlreadyExistingRdfResourceException {
    Resource agent = this.createEntity(uri);
    if (agent == null) return null;
    Statement s = ResourceFactory.createStatement(agent, RDF.type,
            onto.getIndividual(EngineConstants.CD_Agent));
    onto.add(s);
    this.synchronize();
    return agent;
}
// To get an Agent's activities
public Set<Resource> getAgentActivities(String agentUri) {
    final String query = "SELECT ?entity WHERE { ?entity CD:hasAgent <" + agentUri + "> }";
    ResultSet resultSet = this.queryExec(this.getInferred(), query);
    return this.getRdfResources(resultSet, "entity");
}
I need to know if someone can help me and give me an example of how I can use and integrate Fuseki to modify and query my ontology.
Thank you for your help.
Note: you probably first want to retrieve your graph using the fetch() method - http://jena.apache.org/documentation/javadoc/rdfconnection/org/apache/jena/rdfconnection/RDFDatasetAccessConnection.html#fetch-java.lang.String- - which will be more efficient than querying for it as you do now, e.g.
Model model = connection.fetch("http://your-graph-name");
If you are just using the default graph you can just do connection.fetch() to retrieve that.
Once you have the local copy modify it with Jena APIs as you desire.
You can then use the put() method to update a graph - http://jena.apache.org/documentation/javadoc/rdfconnection/org/apache/jena/rdfconnection/RDFConnection.html#put-java.lang.String-org.apache.jena.rdf.model.Model- - with your local changes e.g.
connection.put("http://your-graph-name", model);
This will overwrite the existing graph with the current contents of model. Again if you are just using the default graph you can just do connection.put(model).
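Putting the pieces together, a minimal fetch-modify-put round trip could look like the sketch below. This is an illustration, not the asker's actual code: the dataset URL is taken from the question, while the resource and class URIs are made-up placeholders.
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.rdfconnection.RDFConnection;
import org.apache.jena.rdfconnection.RDFConnectionFactory;
import org.apache.jena.vocabulary.RDF;

public class FusekiRoundTrip {
    public static void main(String[] args) {
        // Dataset URL from the question; adjust to match your Fuseki service
        try (RDFConnection conn = RDFConnectionFactory.connect("http://localhost:3030/test/")) {
            // Fetch the default graph into a local model
            Model model = conn.fetch();
            // Modify it locally with the ordinary Jena API (placeholder URIs)
            Resource agent = model.createResource("http://example.org/agent1");
            model.add(agent, RDF.type, model.createResource("http://example.org/Agent"));
            // Write the modified model back, replacing the default graph
            conn.put(model);
        }
    }
}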
I have a Java method in my code in which I am using the following line of code to fetch data from Azure Cosmos DB:
Iterable<FeedResponse<Object>> feedResponseIterator =
cosmosContainer
.queryItems(sqlQuery, queryOptions, Object.class)
.iterableByPage(continuationToken, pageSize);
The whole method looks like this:
public List<LinkedHashMap> getDocumentsFromCollection(
String containerName, String partitionKey, String sqlQuery) {
List<LinkedHashMap> documents = new ArrayList<>();
String continuationToken = null;
do {
CosmosQueryRequestOptions queryOptions = new CosmosQueryRequestOptions();
CosmosContainer cosmosContainer = createContainerIfNotExists(containerName, partitionKey);
Iterable<FeedResponse<Object>> feedResponseIterator =
cosmosContainer
.queryItems(sqlQuery, queryOptions, Object.class)
.iterableByPage(continuationToken, pageSize);
int pageCount = 0;
for (FeedResponse<Object> page : feedResponseIterator) {
long startTime = System.currentTimeMillis();
// Access all the documents in this result page
page.getResults().forEach(document -> documents.add((LinkedHashMap) document));
// Along with page results, get a continuation token
// which enables the client to "pick up where it left off"
// in accessing query response pages.
continuationToken = page.getContinuationToken();
pageCount++;
log.info(
"Cosmos Collection {} deleted {} page with {} number of records in {} ms time",
containerName,
pageCount,
page.getResults().size(),
(System.currentTimeMillis() - startTime));
}
} while (continuationToken != null);
log.info(containerName + " Collection has been collected successfully");
return documents;
}
My question is: can we use the same line of code to execute a delete query like DELETE * FROM c? If yes, what would it return in the Iterable<FeedResponse<Object>> feedResponseIterator object?
SQL statements can only be used for reads. Delete operations must be done using DeleteItem().
Here are Java SDK samples (sync and async) for all document operations in Cosmos DB.
Java v4 SDK Document Samples
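For illustration, a rough sketch of the usual pattern with the v4 sync SDK is shown below: query just enough fields to address each item, then delete them one by one with deleteItem(). The field names (id, pk) and the partition-key layout are assumptions, not part of the original question.
import com.azure.cosmos.models.CosmosItemRequestOptions;
import com.azure.cosmos.models.CosmosQueryRequestOptions;
import com.azure.cosmos.models.FeedResponse;
import com.azure.cosmos.models.PartitionKey;
import com.fasterxml.jackson.databind.node.ObjectNode;

// Assumes cosmosContainer is an initialized CosmosContainer
Iterable<FeedResponse<ObjectNode>> pages = cosmosContainer
        .queryItems("SELECT c.id, c.pk FROM c", new CosmosQueryRequestOptions(), ObjectNode.class)
        .iterableByPage();
for (FeedResponse<ObjectNode> page : pages) {
    for (ObjectNode doc : page.getResults()) {
        // SQL DELETE is not supported, so each item is deleted individually
        cosmosContainer.deleteItem(
                doc.get("id").asText(),
                new PartitionKey(doc.get("pk").asText()),
                new CosmosItemRequestOptions());
    }
}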
I've recently developed a "classic" 3-tier web application using Java EE.
I used GlassFish as the application server, MS SQL Server as the DBMS, and XHTML pages with PrimeFaces components for the front end.
Now, for educational purposes, I want to substitute the relational DB with a pure triplestore database, but I'm not sure about the procedure to follow.
I've searched a lot on Google and on this site, but I didn't find what I was looking for, because every answer I found was more theoretical than practical.
If possible, I need a sort of tutorial or some practical tips.
I've read the documentation about Apache Jena but I'm not able to find a solid starting point.
In particular:
- In order to use MS SQL Server with GlassFish I used a JDBC driver, created a datasource and a connection pool. Is there an equivalent procedure to set up a triplestore database?
- To handle user authentication, I've used a Realm. What should I do now?
For the moment I've created an RDF schema "by hand" and, using Jena Schemagen, I've translated it into a Java class. What should I do now?
After several attempts and further research on the net, I finally achieved my goal.
I decided to develop a hybrid solution: I manage user logins and their navigation permissions via MS SQL Server and a JDBCRealm, while I use Jena TDB to store all the other data.
Starting from an RDF schema, I created a Java class that contains resources and properties so I can easily create my statements in code. Here's an example:
<rdf:RDF
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
xmlns="http://www.stackoverflow.com/example#"
xml:base="http://www.stackoverflow.com/example">
<rdfs:Class rdf:ID="User"></rdfs:Class>
<rdfs:Class rdf:ID="Project"></rdfs:Class>
<rdf:Property rdf:ID="email"></rdf:Property>
<rdf:Property rdf:ID="name"></rdf:Property>
<rdf:Property rdf:ID="surname"></rdf:Property>
<rdf:Property rdf:ID="description"></rdf:Property>
<rdf:Property rdf:ID="customer"></rdf:Property>
<rdf:Property rdf:ID="insertProject">
<rdfs:domain rdf:resource="http://www.stackoverflow.com/example#User"/>
<rdfs:range rdf:resource="http://www.stackoverflow.com/example#Project"/>
</rdf:Property>
</rdf:RDF>
And this is the Java class:
public class MY_ONTOLOGY {
private static final OntModel M = ModelFactory.createOntologyModel(OntModelSpec.RDFS_MEM);
private static final String NS = "http://www.stackoverflow.com/example#";
private static final String BASE_URI = "http://www.stackoverflow.com/example/";
public static final OntClass USER = M.createClass(NS + "User");
public static final OntClass PROJECT = M.createClass(NS + "Project");
public static final OntProperty EMAIL = M.createOntProperty(NS + "hasEmail");
public static final OntProperty NAME = M.createOntProperty(NS + "hasName");
public static final OntProperty SURNAME = M.createOntProperty(NS + "hasSurname");
public static final OntProperty DESCRIPTION = M.createOntProperty(NS + "hasDescription");
public static final OntProperty CUSTOMER = M.createOntProperty(NS + "hasCustomer");
public static final OntProperty INSERTS_PROJECT = M.createOntProperty(NS + "insertsProject");
public static String getBaseURI() {
return BASE_URI;
}
}
Then I've created a directory on my PC where I want to store the data, like C:\MyTDBdataset.
To store data inside it, I use the following code:
String directory = "C:\\MyTDBdataset";
Dataset dataset = TDBFactory.createDataset(directory);
dataset.begin(ReadWrite.WRITE);
try {
Model m = dataset.getDefaultModel();
Resource user = m.createResource(MY_ONTOLOGY.getBaseURI() + "Ronnie", MY_ONTOLOGY.USER);
user.addProperty(MY_ONTOLOGY.NAME, "Ronald");
user.addProperty(MY_ONTOLOGY.SURNAME, "Red");
user.addProperty(MY_ONTOLOGY.EMAIL, "ronnie@myemail.com");
Resource project = m.createResource(MY_ONTOLOGY.getBaseURI() + "MyProject", MY_ONTOLOGY.PROJECT);
project.addProperty(MY_ONTOLOGY.DESCRIPTION, "This project is fantastic");
project.addProperty(MY_ONTOLOGY.CUSTOMER, "Customer & Co");
m.add(user, MY_ONTOLOGY.INSERTS_PROJECT, project);
dataset.commit();
} finally {
dataset.end();
}
If I want to read statements in my TDB, I can use something like this:
dataset.begin(ReadWrite.READ);
try {
Model m = dataset.getDefaultModel();
StmtIterator iter = m.listStatements();
while (iter.hasNext()) {
Statement stmt = iter.nextStatement();
Resource subject = stmt.getSubject();
Property predicate = stmt.getPredicate();
RDFNode object = stmt.getObject();
System.out.println(subject);
System.out.println("\t" + predicate);
System.out.println("\t\t" + object);
System.out.println("");
}
m.write(System.out, "RDF/XML"); // print the data to the console as RDF/XML
} finally {
dataset.end();
}
If you want to navigate your model in different ways, look at this tutorial provided by Apache.
If you want to remove specific statements in your model, you can write something like this:
dataset.begin(ReadWrite.WRITE);
try {
Model m = dataset.getDefaultModel();
m.remove(m.createResource("http://www.stackoverflow.com/example/Ronnie"), MY_ONTOLOGY.NAME, m.createLiteral("Ronald"));
dataset.commit();
} finally {
dataset.end();
}
That's all! Bye!
I'm using JavaLite ActiveJDBC to pull data from a local MySQL server. Here is my simple RestController:
@RequestMapping(value = "/blogs")
@ResponseBody
public Blog getAllBlogs() throws SQLException {
    Base.open( "com.mysql.jdbc.Driver", "jdbc:mysql://localhost:3306/rainydaymatt", "root", "" ) ;
    List<Blog> blogs = Blog.where( "postType = 'General'" ) ;
    return blogs.get( 0 ) ;
}
And here's my simple model, which extends the ActiveJDBC Model class:
public class Blog extends Model {
}
Now, here's the problem: when I navigate to the path handled by the controller, I get this output stream:
{"frozen":false,"id":1,"valid":true,"new":false,"compositeKeys":null,"modified":false,"idName":"id","longId":1}
I can tell that this is metadata about the returned objects, because the number of these clusters changes with my parameters - i.e., when I select all there are four, when I use a parameter I get as many as meet the criteria, and only one when I pull the first. What am I doing wrong? Interestingly, when I revert to an old-school DataSource and use the old Connection/PreparedStatement/ResultSet approach, I'm able to pull data just fine, so the problem can't be in my Tomcat's context.xml or in the path of the Base.open.
ActiveJDBC models can't be dumped out as raw output streams. You need to do something like this to narrow your selection down to one model, and then refer to its fields.
Base.open( "com.mysql.jdbc.Driver", "jdbc:mysql://localhost:3306/rainydaymatt", "root", "" ) ;
List<Blog> blogs = Blog.where( "postType = 'General'" ) ;
Blog tempBlog = blogs.get( 0 ) ;
return (String)tempBlog.get( "postBody" ) ;
As you stated already, a model is not a String, so dumping it into a stream is not the best idea. Since you are writing a service, you probably need JSON, XML or some other form of a String. Your alternatives are:
JSON:
public String getAllBlogs() throws SQLException {
    Base.open( "com.mysql.jdbc.Driver", "jdbc:mysql://localhost:3306/rainydaymatt", "root", "" ) ;
    String json = Blog.where( "postType = 'General'" ).toJson(false);
    Base.close();
    return json;
}
XML:
public String getAllBlogs() throws SQLException {
    Base.open( "com.mysql.jdbc.Driver", "jdbc:mysql://localhost:3306/rainydaymatt", "root", "" ) ;
    String xml = Blog.where( "postType = 'General'" ).toXml(true, false);
    Base.close();
    return xml;
}
Maps:
public String getAllBlogs() throws SQLException {
    Base.open( "com.mysql.jdbc.Driver", "jdbc:mysql://localhost:3306/rainydaymatt", "root", "" ) ;
    List<Map> maps = Blog.where( "postType = 'General'" ).toMaps() ;
    String output = doWhatYouNeedToGenerateOutput(maps);
    Base.close();
    return output;
}
Additionally, I must say you are not closing the connection in your code, and generally opening connections inside methods like this is not the best idea: your code will be littered with connection open and close statements in every method. It is best to use a servlet filter to open and close connections before/after specific URIs, as sketched below.
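A minimal sketch of such a filter, assuming the javax.servlet API and the same connection settings as above (the class name is made up for illustration):
import java.io.IOException;
import javax.servlet.*;
import org.javalite.activejdbc.Base;

public class ConnectionFilter implements Filter {
    @Override
    public void init(FilterConfig config) {}

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        // Open a connection before the request, and always close it afterwards
        Base.open("com.mysql.jdbc.Driver", "jdbc:mysql://localhost:3306/rainydaymatt", "root", "");
        try {
            chain.doFilter(req, res);
        } finally {
            Base.close();
        }
    }

    @Override
    public void destroy() {}
}
Map the filter to the URIs that need database access (e.g. /blogs/*) in web.xml or with @WebFilter; the controller methods can then use the models without any Base.open/Base.close calls.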
First of all, I'm not sure if the title or the tags are correct. If not, someone please correct me.
My question is whether there are any tools or ways to create an autocomplete list with items from an external source, having NetBeans parse it and warn me if there are any errors.
-- The problem: I use JDBC and I want to model somehow all my schemas, tables and columns so that NetBeans can parse them and warn me if I have anything wrong. For example, with normal use of JDBC I would have a function:
ResultSet execSelect(String cols, String table) {
    return statement.executeQuery("SELECT " + cols + " FROM " + table);
}
The problem is that the caller has to know exactly which values are valid in order to pass the correct strings. I would like NetBeans to show me an autocomplete list with all the available options.
PS. I had exactly the same problem when I was building a web application and I wanted somehow to get all paths for my external resources like images, .js files, .css files etc.
-- Thoughts so far:
My thoughts so far were to put public static final String vars in a .java file, with nested static classes, so that I could access them from anywhere. For example:
DatabaseModel.MySchema.TableName1.ColumnName2
would be a String variable holding the 'ColumnName2' column of the 'TableName1' table. That would help me with autocompletion, but the problem is that there is no type checking. In other words, someone could use any string, globally defined or not, as a table or as a column, which is not correct either. I'm thinking of using nested enums somehow to cover the type-checking cases, but I'm not sure whether that would be a good solution; a rough sketch of the idea follows.
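For example (hypothetical names, reusing the statement field from the snippet above):
// Hypothetical sketch of the nested-enum idea: each table becomes an enum of its
// columns, so both the table and the column get checked by the compiler.
enum TableName1 { ColumnName1, ColumnName2 }
enum TableName2 { ColumnName1, ColumnName3 }

ResultSet execSelect(Enum<?> column) throws SQLException {
    // The declaring class names the table, the constant names the column
    String table = column.getDeclaringClass().getSimpleName();
    return statement.executeQuery("SELECT " + column.name() + " FROM " + table);
}
// Usage: execSelect(TableName1.ColumnName2);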
Any thoughts?
Finally I came up with writing a "script" that connects to MySQL, gets all the metadata (every column of every table of every schema) and creates a Java file with predefined classes and Strings that describe the model. For example:
- If you want the name of column C1 from table T1 in schema S1, you would type DatabaseModel.S1.T1.C1._, which is a public static final String holding the column name.
- If you want table T2 from schema S2, you would type DatabaseModel.S2.T2, which is a class that implements the DatabaseTable interface. So the function execSelect could take a DatabaseTable and a DatabaseColumn as parameters (see the usage sketch after the code).
Here is the code (not tested, but I think the idea is clear):
public static void generateMysqlModel(String outputFile) throws IOException, SQLException{
//** Gather the database model
// Maps a schema -> table -> column
HashMap<String,HashMap<String,ArrayList<String>>> model =
new HashMap<String,HashMap<String,ArrayList<String>>>();
openDatabase();
Connection sqlConn = DriverManager.getConnection(url, username, password);
DatabaseMetaData md = sqlConn.getMetaData();
ResultSet schemas = md.getSchemas(); // Get schemas
while( schemas.next() ){ // For every schema
String schemaName = schemas.getString(1);
model.put( schemaName, new HashMap<String,ArrayList<String>>() );
ResultSet tables = md.getTables(null, null, "%", null); // Get tables
while (tables.next()) { // For every table
String tableName = tables.getString(3);
model.get(schemaName).put( tableName, new ArrayList<String>() );
// Get columns for table
Statement s = sqlConn.createStatement(); // Get columns
s.execute("show columns in "+tables.getString(3)+";");
ResultSet columns = s.getResultSet();
while( columns.next() ){ // For every column
String columnName = columns.getString(1);
model.get(schemaName).get(tableName).add( columnName );
}
}
}
closeDatabase();
//** Create the java file from the collected model
new File(outputFile).createNewFile();
BufferedWriter bw = new BufferedWriter( new FileWriter(outputFile) ) ;
bw.append( "public class DatabaseModel{\n" );
bw.append( "\tpublic interface DatabaseSchema{};\n" );
bw.append( "\tpublic interface DatabaseTable{};\n" );
bw.append( "\tpublic interface DatabaseColumn{};\n\n" );
for( String schema : model.keySet() ){
HashMap<String,ArrayList<String>> schemaTables = model.get(schema);
bw.append( "\tpublic static final class "+schema+" implements DatabaseSchema{\n" );
//bw.append( "\t\tpublic static final String _ = \""+schema+"\";\n" );
for( String table : schemaTables.keySet() ){
System.out.println(table);
ArrayList<String> tableColumns = schemaTables.get(table);
bw.append( "\t\tpublic static final class "+table+" implements DatabaseTable{\n" );
//bw.append( "\t\t\tpublic static final String _ = \""+table+"\";\n" );
for( String column : tableColumns ){
System.out.println("\t"+column);
bw.append( "\t\t\tpublic static final class "+column+" implements DatabaseColumn{"
+ " public static final String _ = \""+column+"\";\n"
+ "}\n" );
}
bw.append( "\t\t\tpublic static String val(){ return this.toString(); }" );
bw.append( "\t\t}\n" );
}
bw.append( "\t\tpublic static String val(){ return this.toString(); }" );
bw.append( "\t}\n" );
}
bw.append( "}\n" );
bw.close();
}
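To illustrate the intended usage, here is a hypothetical sketch of how the generated model could be consumed; execSelect and the model names (S1, T1, C1) are illustrative assumptions:
// A typed variant of execSelect: class tokens from the generated model give both
// autocomplete and compile-time checking of table and column names.
ResultSet execSelect(Statement statement,
                     Class<? extends DatabaseModel.DatabaseColumn> column,
                     Class<? extends DatabaseModel.DatabaseTable> table) throws SQLException {
    // getSimpleName() recovers the generated class name, i.e. the column/table name
    return statement.executeQuery(
            "SELECT " + column.getSimpleName() + " FROM " + table.getSimpleName());
}
// Usage: execSelect(stmt, DatabaseModel.S1.T1.C1.class, DatabaseModel.S1.T1.class);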
PS. For the resources case in a web application, I guess someone could recursively collect all files from the "resources" folder and fill in the model variable, roughly as in the sketch below. That would create a Java file with the file paths. The interfaces in that case could be the file types or any other "file view" you want.
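A rough sketch of that collection step, assuming a hypothetical resources location:
import java.io.IOException;
import java.nio.file.*;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Recursively collect all regular files under the resources folder (path is an assumption)
static List<String> collectResourcePaths() throws IOException {
    try (Stream<Path> paths = Files.walk(Paths.get("src/main/webapp/resources"))) {
        return paths.filter(Files::isRegularFile)
                    .map(Path::toString)
                    .collect(Collectors.toList());
    }
}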
I also thought it would be useful to be able to create the .java file from an XML file, so anyone could just write some kind of definition in an XML file for that purpose.
If someone implements anything like that, please post it here.
Any comments/improvements are welcome.
My problem is similar to the one asked in this question:
Is there a difference between a CONSTRUCT query sent to a Virtuoso endpoint and one sent to a Jena one?
I am using Virtuoso open source as my graph store and the Jena provider to access the data in that graph store. I am doing several queries and everything is working fine (except for the staggering amount of memory and time that every inference with Virtuoso takes, but that should go in another question...).
The problem came when I tried to generate a model using a CONSTRUCT query. I have tried using the VirtuosoQueryExecutionFactory with the query as a string, and the default QueryExecutionFactory with the query factory:
qexec = VirtuosoQueryExecutionFactory.create(queryString,inputModel);
model = qexec.execConstruct();
And
Query query = QueryFactory.create(queryString);
qexec = QueryExecutionFactory.create(query,inputModel);
model = qexec.execConstruct();
The query gives the expected result at the SPARQL endpoint but an empty model when run from the code.
LOGGER.info("The model is: {}", model);
LOGGER.info("The size is: {}", model.size());
Gives the following output:
The model is: <ModelCom {} | >
The size is: 0
The model on which I execute the queries is not empty, and I ran the same query from the SPARQL endpoint, as I said, receiving the expected results.
Does anyone know where the mistake could be?
Thanks.
Daniel.
EDIT:
Here is the query I am trying to execute.
PREFIX rdf:<http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#>
PREFIX owl:<http://www.w3.org/2002/07/owl#>
PREFIX xsd:<http://www.w3.org/2001/XMLSchema#>
PREFIX spatiaCore:<http://www.cedint.upm.es/residentialontology.owl#>
PREFIX test:<http://test.url#>
CONSTRUCT {
?u ?p ?o1.
?o1 ?p2 ?o2.
?o2 ?p3 ?o3.
?o3 ?p4 ?o4.
?o4 ?p5 ?o5.
?o6 ?p6 ?u.
?o7 ?p7 ?o6
}
WHERE {
?u rdf:type spatiaCore:User.
?u spatiaCore:id "0000000003B3B474"^^<http://www.w3.org/2001/XMLSchema#string>.
?u ?p ?o1.
OPTIONAL {
?o1 ?p2 ?o2.
OPTIONAL {
?o2 ?p3 ?o3.
OPTIONAL {
?o3 ?p4 ?o4.
OPTIONAL {
?o4 ?p5 ?o5.
}
}
}
}
OPTIONAL {
?o6 ?p6 ?u.
OPTIONAL {
?o7 ?p7 ?o6
}
}
}
As you can see, the query tries to construct a graph containing all the nodes the user is linked to, up to a depth of five relationships, together with the nodes that link to the user, up to a depth of two relationships.
Which method did you use to create the VirtModel object?
NOTE:
If you used:
public static VirtModel openDefaultModel(DataSource ds);
public static VirtModel openDefaultModel(String url, String user, String password);
then the Model will contain only data from the "virt:DEFAULT" graph,
and VirtuosoQueryExecutionFactory will add the following pragma to the query text:
define input:default-graph-uri <virt:DEFAULT>
If you used something like:
public static VirtModel openDatabaseModel(String graphName, DataSource ds);
public static VirtModel openDatabaseModel(String graphName, String url, String user, String password)
then the Model will contain only data from the graphName graph,
and VirtuosoQueryExecutionFactory will add the following pragma to the query text:
define input:default-graph-uri <graphName>
If you want to use data from all graphs, you must call:
VirtModel vmodel = ....create model method...
vmodel.setReadFromAllGraphs(true);
If you set the above to TRUE, the pragma for default-graph-uri will not be added.
A working example of using CONSTRUCT with the Virtuoso Jena provider:
url = "jdbc:virtuoso://localhost:1111";
VirtGraph set = new VirtGraph ("test1", url, "dba", "dba");
set.clear();
String qry = "INSERT INTO GRAPH <test1> { <aa> <bb> 'cc' . <aa1> <bb> 'zz' }";
VirtuosoUpdateRequest vur = VirtuosoUpdateFactory.create(qry, set);
vur.exec();
Model inputModel = new VirtModel(set);
System.out.println("InputModel :"+inputModel);
System.out.println("InputModel size :"+inputModel.size());
System.out.println();
qry = "CONSTRUCT { ?x <a> ?y } WHERE { ?x <bb> ?y }";
QueryExecution vqe = VirtuosoQueryExecutionFactory.create (qry, inputModel);
Model model = vqe.execConstruct();
System.out.println("Model :"+model);
System.out.println("Model size :"+model.size());