I have an endpoint that lists all games matching a {name} parameter, and what I want to implement now is giving the user the option of choosing the ordering of the results.
games?name={game}
Something like this:
games?name={game}&order={order}
Below you can see the partial implementation of my endpoint. Currently api:orderBy is hard-coded:
api:selector [
    api:where " ?item a epic:Game . ?item epic:Name ?name . FILTER (regex(?name, ?game, 'i')) " ;
    api:orderBy "DESC(?name)"
] .
I am using ELDA.
I have a Neo4j query like:
...
"WITH DISTINCT k " +
// classic for each loop for the new rankings information
"FOREACH (app in $apps | " +
// upsert the app
" MERGE (a:App{appId:app.appId}) " +
...
// end of loop
") " +
I'm using gremlin-java. Here, I want to pass $apps as a custom parameter. I've checked the Gremlin documentation but couldn't find a foreach step. Is there a suggestion? I was hoping for something like:
graph.foreach(apps: map)...
Solved with:
...constant(apps).unfold().as("app")...
As you noted, you can use the constant() step to inject a value into a query. However, you can also use the inject() step to insert a collection of values in a similar way. Here are a couple of simple examples; you can extend these patterns to include id, label, and multiple property values as needed.
gremlin> g.inject([[id:1],[id:2],[id:3],[id:4]]).
unfold().as('a').
addV('test').
property('SpecialId',select('a').select('id'))
==>v[61367]
==>v[61369]
==>v[61371]
==>v[61373]
gremlin> g.V().hasLabel('test').valueMap(true)
==>[id:61367,label:test,SpecialId:[1]]
==>[id:61369,label:test,SpecialId:[2]]
==>[id:61371,label:test,SpecialId:[3]]
==>[id:61373,label:test,SpecialId:[4]]
gremlin> g.inject(1,2,3,4).as('a').
addV('test2').
property('SpecialId',select('a'))
==>v[61375]
==>v[61377]
==>v[61379]
==>v[61381]
gremlin> g.V().hasLabel('test2').valueMap(true)
==>[id:61375,label:test2,SpecialId:[1]]
==>[id:61377,label:test2,SpecialId:[2]]
==>[id:61379,label:test2,SpecialId:[3]]
==>[id:61381,label:test2,SpecialId:[4]]
The first query injects a list of maps, the second a simple list. This is a bit like the UNWIND pattern you may be used to in Cypher, and it works in a similar way.
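Since the original question was about gremlin-java rather than the Groovy console, here is a rough Java translation of the first example. This is only a sketch, assuming TinkerPop 3.x with a TinkerGraph for testing; the apps list stands in for the $apps parameter of the Cypher query:

import static org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.__.select;

import java.util.List;
import java.util.Map;

import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerGraph;

public class InjectExample {
    public static void main(String[] args) {
        GraphTraversalSource g = TinkerGraph.open().traversal();

        // stands in for the $apps parameter of the Cypher query
        List<Map<String, Object>> apps = List.of(
                Map.of("appId", "app-1"),
                Map.of("appId", "app-2"));

        // inject() + unfold() plays the role of Cypher's UNWIND/FOREACH:
        // one traverser per map, each addressable via the "app" label
        g.inject(apps)
         .unfold().as("app")
         .addV("App")
         .property("appId", select("app").select("appId"))
         .iterate();

        System.out.println(g.V().hasLabel("App").valueMap("appId").toList());
    }
}

Note that addV() always creates new vertices; to reproduce the MERGE (upsert) semantics of the original Cypher you would need the fold()/coalesce() upsert idiom, or mergeV() on TinkerPop 3.6+.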
(Using GraphDB 8.1 free).
http://graphdb.ontotext.com/documentation/free/full-text-search.html says that I can enable a custom AnalyzerFactory for GraphDB full-text search, using the luc:analyzer param, by implementing the interface com.ontotext.trree.plugin.lucene.AnalyzerFactory. However, I can't find this interface anywhere. It is not in the jar graphdb-free-runtime-8.1.0.jar.
I checked the feature matrix at http://ontotext.com/products/graphdb/editions/#feature-comparison-table and it seems the "Connectors Lucene" feature is available in the free edition of GraphDB.
In which jar is the com.ontotext.trree.plugin.lucene.AnalyzerFactory interface located? What do I need to import in my project to implement this interface?
Are there pre-existing AnalyzerFactories included with GraphDB for using other Lucene analyzers? (I am interested in using a FrenchAnalyzer.)
Thanks!
GraphDB offers two different Lucene-based plugins.
The Lucene FTS plugin indexes RDF molecules; the correct documentation link is http://graphdb.ontotext.com/documentation/free/full-text-search.html
The Lucene Connector performs online synchronization between the RDF and Lucene document models, using configurations that map ?subject propertyPath ?object chains to document id|field values. The correct documentation link is http://graphdb.ontotext.com/documentation/free/lucene-graphdb-connector.html
I encourage you to use the Lucene Connector unless you have a special case for RDF molecules. Here is a simple example of how to configure the connector with the French analyzer and index all values of the rdfs:label predicate for resources of type urn:MyClass. Select a repository and, from the SPARQL query view, execute:
PREFIX : <http://www.ontotext.com/connectors/lucene#>
PREFIX inst: <http://www.ontotext.com/connectors/lucene/instance#>
INSERT DATA {
  inst:labelFR :createConnector '''
    {
      "fields": [
        {
          "indexed": true,
          "stored": true,
          "analyzed": true,
          "multivalued": true,
          "fieldName": "label",
          "propertyChain": [
            "http://www.w3.org/2000/01/rdf-schema#label"
          ],
          "facet": true
        }
      ],
      "types": [
        "urn:MyClass"
      ],
      "stripMarkup": false,
      "analyzer": "org.apache.lucene.analysis.fr.FrenchAnalyzer"
    }
  ''' .
}
Then manually add some sample test data from Import > Text area:
<urn:instance:test> <http://www.w3.org/2000/01/rdf-schema#label> "C'est un exemple".
<urn:instance:test> a <urn:MyClass>.
Once you commit the transaction, the Connector will update the Lucene index. Now you can run search queries like:
PREFIX : <http://www.ontotext.com/connectors/lucene#>
PREFIX inst: <http://www.ontotext.com/connectors/lucene/instance#>
SELECT ?entity ?snippetField ?snippetText {
  ?search a inst:labelFR ;
      :query "label:*" ;
      :entities ?entity .
  ?entity :snippets _:s .
  _:s :snippetField ?snippetField ;
      :snippetText ?snippetText .
}
To create a custom analyzer, follow the instructions in the documentation and extend the org.apache.lucene.analysis.Analyzer class. Put the custom analyzer JAR in the lib/plugins/lucene-connector/ path.
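For reference, a custom analyzer could look more or less like the sketch below. The class name and the token filter chain are purely illustrative, and the exact createComponents() signature depends on the Lucene version bundled with your GraphDB release:

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.LowerCaseFilter;
import org.apache.lucene.analysis.fr.FrenchLightStemFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;

// Illustrative analyzer: lowercases tokens, then applies French light stemming.
public class MyFrenchAnalyzer extends Analyzer {
    @Override
    protected TokenStreamComponents createComponents(String fieldName) {
        StandardTokenizer source = new StandardTokenizer();
        TokenStream filter = new LowerCaseFilter(source);
        filter = new FrenchLightStemFilter(filter);
        return new TokenStreamComponents(source, filter);
    }
}

Package it as a JAR, drop it in the lib/plugins/lucene-connector/ path mentioned above, and reference its fully qualified class name in the "analyzer" setting of the connector configuration.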
The problem (or missing feature) is the lack of expressiveness between different query parameters. As I see it, you can only specify AND between parameters, but how do you solve it if you want NOT EQUAL, OR, or XOR?
I would like to be able to express things like:
All users with age 22 or the name Bosse
/users?age=22|name=Bosse
All users except David and Lennart
/users?name!=David&name!=Lennart
My first idea is to use a query parameter called _filter and take a String with my expression like this:
All users with age 22 or a name that is not Bosse
/users?_filter=age eq 22 or name neq Bosse
What is the best solution for this problem?
I am writing my API with Java and Jersey, so if there is any special solution for Jersey, let me know.
I can see two solutions to achieve that:
Using a special query parameter containing the expression when executing a GET method. This is the way OData does it with its $filter parameter (see this link: https://msdn.microsoft.com/fr-fr/library/gg309461.aspx#BKMK_filter). Here is a sample:
/AccountSet?$filter=AccountCategoryCode/Value eq 2 or AccountRatingCode/Value eq 1
Parse.com also uses such an approach with its where parameter, but the query is described using a JSON structure (see this link: https://parse.com/docs/rest/guide#queries). Here is a sample:
curl -X GET \
-H "X-Parse-Application-Id: ${APPLICATION_ID}" \
-H "X-Parse-REST-API-Key: ${REST_API_KEY}" \
-G \
--data-urlencode 'where={"score":{"$gte":1000,"$lte":3000}}' \
https://api.parse.com/1/classes/GameScore
If the expression is too complex to describe in a query parameter, you could also use a POST method and specify the query in the request payload. ElasticSearch uses such an approach for its query support (see this link: https://www.elastic.co/guide/en/elasticsearch/reference/current/search.html). Here is a sample:
$ curl -XGET 'http://localhost:9200/twitter/tweet/_search?routing=kimchy' -d '{
    "query": {
        "bool" : {
            "must" : {
                "query_string" : {
                    "query" : "some query string here"
                }
            },
            "filter" : {
                "term" : { "user" : "kimchy" }
            }
        }
    }
}
'
Hope it helps you,
Thierry
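Since the question mentions Jersey: there is no built-in filter-expression support there, but receiving an OData-style expression through a single query parameter is straightforward. A minimal sketch, where the _filter name comes from the question and FilterParser is a hypothetical class you would write yourself:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/users")
public class UserResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Response list(@QueryParam("_filter") String filter) {
        // filter arrives URL-decoded, e.g. "age eq 22 or name neq Bosse"
        if (filter == null || filter.isEmpty()) {
            return Response.ok(/* all users */).build();
        }
        // FilterParser is hypothetical: tokenize the expression, build a
        // predicate tree, then translate it to your data-access layer
        // (SQL, JPA Criteria, an in-memory Predicate<User>, etc.)
        // Predicate<User> predicate = FilterParser.parse(filter);
        return Response.ok(/* users matching the predicate */).build();
    }
}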
OK, so here it is.
You could add + or - to include or exclude, and an inclusive filter keyword for AND and OR.
For excluding
GET /users?name=-David,-Lennart
For including
GET /users?name=+Bosse
For OR
GET /users?name=+Bosse&age=22&inclusive=false
For AND
GET /users?name=+Bosse&age=22&inclusive=true
This way the API is very intuitive and readable, and it also does the work you want it to do.
EDIT: a very, very difficult question; however, I would do it this way:
GET /users?name=+Bosse&age=22&place=NewYork&inclusive=false,true
This means the first relation is not inclusive, in other words an OR;
the second relation is inclusive, in other words an AND.
This solution assumes evaluation proceeds from left to right.
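If you adopt this convention, the server-side parsing is simple. A minimal sketch in plain Java, assuming the comma-separated +/- format suggested above:

import java.util.HashSet;
import java.util.Set;

public class NameFilter {
    final Set<String> include = new HashSet<>();
    final Set<String> exclude = new HashSet<>();

    // Parses a value like "-David,-Lennart" or "+Bosse" into include/exclude sets.
    // Caveat: clients must send "+" as %2B, since a bare "+" in a query string
    // decodes to a space.
    static NameFilter parse(String raw) {
        NameFilter f = new NameFilter();
        for (String token : raw.split(",")) {
            token = token.trim();
            if (token.startsWith("-")) {
                f.exclude.add(token.substring(1));
            } else if (token.startsWith("+")) {
                f.include.add(token.substring(1));
            } else {
                f.include.add(token);
            }
        }
        return f;
    }
}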
It seems impossible if you go with query params.
If you need advanced expressions, go with path params, so you can use regular expressions to filter.
But to allow only a particular name="Bosse" you would need to write a very strict regex.
Instead of dedicating a REST endpoint only for the sake of the condition, allow any name value and then write the logic that checks it manually within the program.
I'm using the Java API of Apache Jena to store and retrieve documents and the words within them. For this I decided to set up the following data structure:
_dataset = TDBFactory.createDataset("./database");
_dataset.begin(ReadWrite.WRITE);
Model model = _dataset.getDefaultModel();

// the document resource
Resource document = model.createResource("http://name.space/Source/DocumentA");
document.addProperty(RDF.value, "Document A");

// the word resource
Resource word = model.createResource("http://name.space/Word/aword");
word.addProperty(RDF.value, "aword");

// blank node tying the word to its occurrence count in this document
Resource resource = model.createResource();
resource.addProperty(RDF.value, word);
resource.addProperty(RSS.items, "5");
document.addProperty(RDF.type, resource);

_dataset.commit();
_dataset.end();
The code example above represents a document ("Document A") that contains five (5) occurrences of the word "aword". The occurrences of a word in a document are counted and stored as a property. A word can also occur in other documents, therefore the occurrence count relating a specific word to a specific document is attached via a blank node. (I'm not entirely sure if this structure makes any sense, as I'm fairly new to this way of storing information, so please feel free to provide better solutions!)
My major question is: how can I get a list of all distinct words and the sum of their occurrences over all documents?
Your data model is a bit unconventional, in my opinion. With your code, you'll end up with data that looks like this (in Turtle notation), and which uses rdf:type and rdf:value in unconventional ways:
:doc rdf:value "document a" ;
     rdf:type :resource .
:resource rdf:value :word ;
          :items 5 .
:word rdf:value "aword" .
It's unusual, because usually you wouldn't have such complex information on the type attribute of a resource. From the SPARQL standpoint though, rdf:type and rdf:value are properties just like any other, and you can still retrieve the information you're looking for with a simple query. It would look more or less like this (though you'll need to define some prefixes, etc.):
select ?word (sum(?n) as ?nn) where {
  ?document rdf:type ?type .
  ?type rdf:value/rdf:value ?word ;
        :items ?n .
}
group by ?word
That query will produce a result for each word, and with each will be the sum of all the values of the :items properties associated with the word. There are lots of questions on Stack Overflow that have examples of running SPARQL queries with Jena. E.g., (the first one that I found with Google): Query Jena TDB store.
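For completeness, here is a sketch of how such a query could be run against the dataset from the question with a recent Apache Jena (org.apache.jena packages assumed). The rss: prefix matches the RSS.items predicate used in the question's code, and xsd:integer() casts the count, which the question stored as the string "5", so that sum() can aggregate it:

import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ReadWrite;
import org.apache.jena.query.ResultSet;

String queryString =
    "PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> " +
    "PREFIX rss: <http://purl.org/rss/1.0/> " +
    "PREFIX xsd: <http://www.w3.org/2001/XMLSchema#> " +
    "SELECT ?word (SUM(xsd:integer(?n)) AS ?nn) WHERE { " +
    "  ?document rdf:type ?type . " +
    "  ?type rdf:value/rdf:value ?word ; " +
    "        rss:items ?n . " +
    "} GROUP BY ?word";

// read transaction over the same TDB dataset as in the question
_dataset.begin(ReadWrite.READ);
try (QueryExecution qe = QueryExecutionFactory.create(queryString, _dataset)) {
    ResultSet results = qe.execSelect();
    while (results.hasNext()) {
        QuerySolution row = results.next();
        System.out.println(row.get("word") + " -> " + row.get("nn"));
    }
} finally {
    _dataset.end();
}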
I am quite new to Sesame in Java. I am following this tutorial: http://openrdf.callimachus.net/sesame/2.7/docs/users.docbook?view . I know how to create statements and add them to a Sesame repository. At the moment, I am trying to describe classes and properties for the statements I am going to add. For example, I have the ones below:
:Book rdf:type rdfs:Class .
:bookTitle rdf:type rdf:Property .
:bookTitle rdfs:domain :Book .
:bookTitle rdfs:range rdfs:Literal .
:MyBook rdf:type :Book .
:MyBook :bookTitle "Open RDF" .
As shown, Book is defined as a Class, and bookTitle is defined as a Property. My question is: how can I do this in Java OpenRDF using org.openrdf.model.vocabulary.RDFS? To clarify the point, here is another example:
con.add(alice, RDF.TYPE, person);
alice is of type person. How can I define person as a class using org.openrdf.model.vocabulary.RDFS? Your assistance would be very much appreciated.
You'd do this in exactly the same way as you're describing alice as a person. Like this:
con.add(person, RDF.TYPE, RDFS.CLASS);
Similarly for the other things you want to add (assuming you've created a URI for bookTitle):
con.add(bookTitle, RDF.TYPE, RDF.PROPERTY);
con.add(bookTitle, RDFS.DOMAIN, book);
etc.
I should point out that although it is of course possible to create your schema or ontology in this fashion, it might be easier to instead create a file containing your ontology (e.g. in Turtle or N-Triples syntax), and then simply upload that file to your Sesame repository.
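For the file-based approach, the upload itself is a single call on the connection. A minimal sketch, where the file name and base URI are illustrative and con is the RepositoryConnection used above:

import java.io.File;

import org.openrdf.rio.RDFFormat;

File ontology = new File("book-schema.ttl"); // illustrative file name
con.begin();
con.add(ontology, "http://example.org/", RDFFormat.TURTLE);
con.commit();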