The problem (or missing feature) is the lack of expressiveness between different query parameters. As I see it, you can only express AND between parameters, but how do you solve it if you want NOT EQUAL, OR, or XOR?
I would like to be able to express things like:
All users with age 22 or the name Bosse
/users?age=22|name=Bosse
All users except David and Lennart
/users?name!=David&name!=Lennart
My first idea is to use a query parameter called _filter and take a String with my expression like this:
All users with age 22 or a name that is not Bosse
/users?_filter=age eq 22 or name neq Bosse
What is the best solution for this problem?
I am writing my API with Java and Jersey, so if there is any special solution for Jersey, let me know.
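For context, this is roughly how the expression would reach my Jersey resource (a minimal sketch; parsing the string into actual predicates is the part I have not solved yet):
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.QueryParam;

@Path("/users")
public class UserResource {

    @GET
    public String getUsers(@QueryParam("_filter") String filter) {
        // For /users?_filter=age%20eq%2022%20or%20name%20neq%20Bosse
        // the injected value is "age eq 22 or name neq Bosse".
        return "received filter: " + filter;
    }
}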
I can see two solutions to achieve that:
Using a special query parameter containing the expression when executing a GET method. This is the approach OData takes with its $filter parameter (see this link: https://msdn.microsoft.com/fr-fr/library/gg309461.aspx#BKMK_filter). Here is a sample:
/AccountSet?$filter=AccountCategoryCode/Value eq 2 or AccountRatingCode/Value eq 1
Parse.com also uses such an approach with its where parameter, but the query is described using a JSON structure (see this link: https://parse.com/docs/rest/guide#queries). Here is a sample:
curl -X GET \
-H "X-Parse-Application-Id: ${APPLICATION_ID}" \
-H "X-Parse-REST-API-Key: ${REST_API_KEY}" \
-G \
--data-urlencode 'where={"score":{"$gte":1000,"$lte":3000}}' \
https://api.parse.com/1/classes/GameScore
If the query is too complex to describe in a query parameter, you could also use a POST method and specify the query in the request payload. Elasticsearch uses such an approach for its query support (see this link: https://www.elastic.co/guide/en/elasticsearch/reference/current/search.html). Here is a sample:
$ curl -XGET 'http://localhost:9200/twitter/tweet/_search?routing=kimchy' -d '{
    "query": {
        "bool" : {
            "must" : {
                "query_string" : {
                    "query" : "some query string here"
                }
            },
            "filter" : {
                "term" : { "user" : "kimchy" }
            }
        }
    }
}
'
Hope it helps you,
Thierry
OK so here it is
You could add + or - to include or exclude values, and an inclusive query parameter to switch between AND and OR.
For excluding
GET /users?name=-David,-Lennart
For including
GET /users?name=+Bossee
For OR
GET /users?name=+Bossee&age=22&inclusive=false
For AND
GET /users?name=+Bossee&age=22&inclusive=true
This way the API is very intuitive and readable, and it does what you want it to do.
EDIT - a very difficult question, but I would do it this way:
GET /users?name=+Bossee&age=22&place=NewYork&inclusive=false,true
which means the first relation is not inclusive - in other words, it is OR -
and the second relation is inclusive - in other words, it is AND.
This solution assumes that evaluation is from left to right.
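A minimal sketch of how the server could split such a signed value (plain Java, framework-independent; the class and method names are just illustrative):
import java.util.ArrayList;
import java.util.List;

public class SignedFilter {

    // Splits a value like "+Bossee,-David,-Lennart" into includes and excludes.
    public static void main(String[] args) {
        String raw = "+Bossee,-David,-Lennart";
        List<String> includes = new ArrayList<>();
        List<String> excludes = new ArrayList<>();

        for (String token : raw.split(",")) {
            if (token.startsWith("-")) {
                excludes.add(token.substring(1));
            } else if (token.startsWith("+")) {
                includes.add(token.substring(1));
            }
        }

        System.out.println("include: " + includes + ", exclude: " + excludes);
    }
}
Whether the two lists are then combined with AND or OR would be decided by the inclusive parameter.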
Hey, it seems impossible if you go with query params...
If you need advanced expressions, go for path params, so you can use regular expressions to filter.
But to allow only a particular name="Bosse" you need to write a stringent regex.
Instead of creating a REST endpoint only for the condition's sake, accept any name value and then write the logic to check it manually within the program.
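For example, in Jersey (JAX-RS) a path parameter can carry a regex constraint; a sketch that only accepts the literal name Bosse (the resource and method names are made up):
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

@Path("/users")
public class UserByNameResource {

    // The regex after the colon restricts which values match this route;
    // here only the literal "Bosse" is accepted.
    @GET
    @Path("{name: Bosse}")
    public String getUser(@PathParam("name") String name) {
        return "user: " + name;
    }
}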
Related
When using the Java API as below:
query.must(matchQuery("name", object.getName()));
The resulting Elasticsearch query is:
"bool":{
"must":[
{"match":{"name":{"query":"De Michael Schuster","operator":"OR","boost":1.0}}}
.....
Right now I am getting back documents whose name matches De OR Michael OR Schuster, as expected.
I want to change the operator to AND to match the whole string.
I know I can use term query, but that is not an option in my scenario.
I came across this, but the answer is not given - https://discuss.elastic.co/t/changing-the-default-operator-for-search-api/47033
How can I achieve this using Java?
Simply like this:
query.must(matchQuery("name", object.getName()).operator(Operator.AND));
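For completeness, a minimal sketch with imports (assuming a client version where Operator lives in org.elasticsearch.index.query; in older 2.x clients the constant was MatchQueryBuilder.Operator.AND):
import org.elasticsearch.index.query.BoolQueryBuilder;
import org.elasticsearch.index.query.Operator;
import org.elasticsearch.index.query.QueryBuilders;

public class AndMatchExample {

    public static void main(String[] args) {
        // AND means every term of "De Michael Schuster" must match the name field.
        BoolQueryBuilder query = QueryBuilders.boolQuery()
                .must(QueryBuilders.matchQuery("name", "De Michael Schuster")
                        .operator(Operator.AND));
        System.out.println(query);  // prints the generated JSON
    }
}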
I want to write a little .jar that is used as a "translator" for SQL queries directed to a z/OS DB2 database.
My goal is for the application to accept SQL queries as command-line arguments, entered manually or via shell script/cron, alongside other parameters like IP, port, user, etc.
Is there a way to leave those arguments unaffected while passing them to the jar?
Example:
java -jar db2sql.jar SQL=={SELECT * FROM TABLE1 TAB1, TABLE2 TAB2 WHERE TAB1.XYZ = TAB2.ZYX AND TAB2.ABC LIKE 'blabla' AND TAB1.DATE >= '01.01.2015'} IP=={192.168.0.1} User=={Santa} Password=={CLAUS}
(please ignore that this statement is senseless, but I hope you get the trick)
My problem is reading out those command-line parameters, mostly special characters like *, ", ' etc.
Questions:
Is there a list of all possible SQL parameters that must be escaped?
Is there a special character that can be used as a delimiter and that will never occur in an SQL query?
Is it possible to pass all kinds of SQL statements as ONE argument?
Is it possible to leave special characters unhandled, e.g. argument "*" = String "*", and not .classpath etc.?
Kind Regards
Although I wouldn't recommend what you're trying to do for several reasons, at least in a *NIX environment you could just use the standard way.
java -jar foo.jar -s "SELECT * FROM SOMETHING WHERE foo = 2" -u username -h hostname
You can use additional libraries to parse the parameters, but this way you would use -s to specify the SQL query and wrap the parameter value in " to make it a single argument with automatic escaping.
In your main method you can then get the full query with (simplified)
if (args[0].equals("-s"))
    sqlString = args[1];
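A slightly fuller sketch of that main method (plain argument loop, no extra library; the flag names are just a suggestion):
public class Db2Sql {

    public static void main(String[] args) {
        String sql = null, user = null, host = null;

        // Each flag is followed by its value; the shell already stripped the
        // surrounding quotes, so the whole SELECT arrives as one array element.
        for (int i = 0; i < args.length - 1; i++) {
            switch (args[i]) {
                case "-s": sql  = args[++i]; break;
                case "-u": user = args[++i]; break;
                case "-h": host = args[++i]; break;
            }
        }

        System.out.println("sql=" + sql + ", user=" + user + ", host=" + host);
    }
}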
While sending off Cypher queries to Neo4J's transactional Cypher API, I am running into the following error:
Neo.ClientError.Request.InvalidFormat Unable to deserialize request:
Unrecognized character escape ''' (code 39)
My Cypher query looks like this
MATCH (n:Test {id:'test'}) SET n.`label` = 'John Doe\'s house';
While this query works just fine when executed in Neo4j's browser interface, it fails when using the REST API. Is this a bug or am I doing something wrong? In case this is not a bug, how do I have to escape ' to get it working in both?
Edit:
I found this answer and tested the triple single and triple double quotes but they just caused another Neo.ClientError.Request.InvalidFormat error to be thrown.
Note: I am using Neo4J 2.2.2
Note 2: Just in case it's important, below is the JSON body I am sending to the endpoint.
{"statements":[
{"statement": "MATCH (n:Test {id:'test'}) SET n.`label` = 'John Doe\'s house';"}
]}
You'll have to escape the \ too:
{"statements":[
{"statement": "MATCH (n:Test {id:'test'}) SET n.`label` = 'John Doe\\'s house';"}
]}
But if you use parameters (recommended), you can do
{"statements":[
{"statement": "MATCH (n:Test {id:'test'}) SET n.`label` = {lbl}",
"parameters" : {"lbl" : "Jane Doe's house"}
}
]}
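If you build the request body in Java anyway, letting a JSON library do the escaping sidesteps the problem entirely. A sketch using Jackson (the library choice is an assumption, any JSON serializer will do):
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ArrayNode;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class CypherRequestBody {

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        ObjectNode statement = mapper.createObjectNode();
        statement.put("statement",
                "MATCH (n:Test {id:'test'}) SET n.`label` = {lbl}");
        statement.putObject("parameters").put("lbl", "John Doe's house");

        ObjectNode body = mapper.createObjectNode();
        ArrayNode statements = body.putArray("statements");
        statements.add(statement);

        // Jackson escapes quotes and backslashes during serialization,
        // so the apostrophe needs no manual treatment.
        System.out.println(mapper.writeValueAsString(body));
    }
}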
I would like to know if there is a way to tell Elasticsearch that I don't mind missing or erroneous indices in my search query. In other words, I have a query which targets 7 different indices, but one of them might be missing depending on the circumstances. What I want to know is: is there a way to say, forget the broken one and get me the results of the other 6 indices?
SearchRequestBuilder builder = elasticsearchClient.getClient().prepareSearch(indices)
    .setQuery(Query.buildQueryFrom(term1, term2))
    .addAggregation(AggregationBuilders.terms("term")
        .field("field")
        .shardSize(shardSize)
        .size(size)
        .minDocCount(minCount));
The query above is an example.
Take a look at the ignore_unavailable option, which is part of the multi index syntax. This has been available since at least version 1.3 and allows you to ignore missing or closed indexes when performing searches (among other multi index operations).
It is exposed in the Java API by IndicesOptions. Browsing through the source code, I found there is a setIndicesOptions() method on the SearchRequestBuilder used in the example. You need to pass it an instance of IndicesOptions.
There are various static factory methods on the IndicesOptions class for building an instance with your specific desired options. You would probably benefit from using the more convenient lenientExpandOpen() factory method (or the deprecated version, lenient(), depending on your version) which sets ignore_unavailable=true,allow_no_indices=true, and expand_wildcards=open.
Here is a modified version of the example query which should provide the behavior you are looking for:
SearchRequestBuilder builder = elasticsearchClient.getClient().prepareSearch(indices)
    .setQuery(Query.buildQueryFrom(term1, term2))
    .addAggregation(AggregationBuilders.terms("term")
        .field("field")
        .shardSize(shardSize)
        .size(size)
        .minDocCount(minCount))
    .setIndicesOptions(IndicesOptions.lenientExpandOpen());
Have you tried using Index Aliases?
Rather than referring to the individual indexes, you can specify a single alias. Behind it can be several indexes.
Here I'm adding two indexes to the alias and removing the missing / broken one:
curl -XPOST 'http://localhost:9200/_aliases' -d '
{
  "actions" : [
    { "remove" : { "index" : "bad-index", "alias" : "alias-index" } },
    { "add" : { "index" : "good-index1", "alias" : "alias-index" } },
    { "add" : { "index" : "good-index2", "alias" : "alias-index" } }
  ]
}'
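The same alias changes can be made from the Java API used in the question (a sketch, assuming the transport-client style admin API):
import org.elasticsearch.client.Client;

public class AliasSetup {

    // "client" is the same Client instance used for the search.
    static void pointAliasAtGoodIndexes(Client client) {
        client.admin().indices().prepareAliases()
                .removeAlias("bad-index", "alias-index")
                .addAlias("good-index1", "alias-index")
                .addAlias("good-index2", "alias-index")
                .get();
    }
}
The search then simply targets "alias-index" instead of the list of concrete indexes.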
I have a regular expression I am trying to use to rewrite an incoming REST URL, and I am getting stuck on one use case where one section of the URL is omitted.
Here is the regex I'm currently using:
^(/[^/]+/(?:books))/([^/]+?)(?:/(?:(?!page).+?))?(?:/page/(\\d+))?$
As an example, I'm using "$1 - $2 - $3" as the parts to use when writing the new URL.
Here are the examples that are working correctly...
"/mySite/books/topic1/page/2" results in "/mySite/books - topic1 - 2"
"/mySite/books/topic1/subtopic1/page/2" results in "/mySite/books - topic1 - 2"
All the above work as intended. The problem is that when the URL omits the "topic1" part, the results are not what I need. Example:
"/mySite/books/page/2" results in "/mySite/books - page - "
What I need is for $2 to be blank, because there is no topic, with the page number still captured as $3. The output I need...
"/mySite/books/page/2" results in "/mySite/books - - 2"
What can I change in my regex to satisfy that scenario without disrupting the existing ones that work correctly? This is being done in Java.
You might try to use regex pattern
^(/[^/]+/books)/(?:(?!page/)([^/]+)/)?page/(\\d+)$
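A quick check of that pattern against two of the sample URLs (a small test harness, not part of any rewrite rule):
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RewriteRegexTest {

    public static void main(String[] args) {
        Pattern p = Pattern.compile("^(/[^/]+/books)/(?:(?!page/)([^/]+)/)?page/(\\d+)$");

        for (String url : new String[] { "/mySite/books/topic1/page/2", "/mySite/books/page/2" }) {
            Matcher m = p.matcher(url);
            if (m.matches()) {
                // group(2) is null when the topic segment is absent
                System.out.println(m.group(1) + " - "
                        + (m.group(2) == null ? "" : m.group(2)) + " - " + m.group(3));
            }
        }
    }
}
This prints "/mySite/books - topic1 - 2" and "/mySite/books -  - 2".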
It should suffice to make your second group ungreedy. Then the engine will first try to find a match without using it (trying only /page/\\d+ instead). And if that fails it tries to include the second group:
^(/[^/]+/(?:books))/([^/]+?)(?:/(?:(?!page).+?))??(?:/page/(\\d+))?$
Appending ? to any kind of quantifier (+, *, ? and {..}) makes it ungreedy.