I was thinking about an API path like .../lists/items?listId=1,2,3 that returns a response like the one below.
// Map<Integer, List>
{
  "1": [
    { "field": "item1" },
    { "field": "item11" }
  ],
  "2": [
    { "field": "item2" },
    { "field": "item22" }
  ],
  "3": [
    { "field": "item3" },
    { "field": "item33" }
  ]
}
or
// List<Object>
[
  { "field": "item1" },
  { "field": "item11" },
  { "field": "item2" },
  { "field": "item22" },
  { "field": "item3" },
  { "field": "item33" }
]
But at the same time, I thought the API path could also be used to request the following payload.
// List<List<Object>>
[
  [
    { "field": "item1" },
    { "field": "item11" }
  ],
  [
    { "field": "item2" },
    { "field": "item22" }
  ],
  [
    { "field": "item3" },
    { "field": "item33" }
  ]
]
Is there a correct answer among the three cases above for designing the REST API path? If not, I would appreciate it if you shared your experience. Thanks.
I'd use the first method.
The second is the worst: you would have to split the lists apart again client-side, which is unnecessary, since JSON can send them separately.
The third could be used, but you would have to rebuild the id-to-list association yourself, since the lists don't carry a key.
The map approach maintains the id-to-list association and still keeps the lists separate.
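For illustration, here is a minimal sketch of what the first (map) shape could look like as a Spring MVC endpoint. The Item type, controller wiring, and lookup logic are made up for the example and are not from the question:

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical controller: GET /lists/items?listId=1,2,3 returns Map<Integer, List<Item>>
@RestController
public class ListItemsController {

    record Item(String field) {}

    @GetMapping("/lists/items")
    public Map<Integer, List<Item>> getItems(@RequestParam("listId") List<Integer> listIds) {
        Map<Integer, List<Item>> result = new LinkedHashMap<>();
        for (Integer id : listIds) {
            // stand-in for the real lookup; the map keeps the id -> list association in the response
            result.put(id, List.of(new Item("item" + id), new Item("item" + id + id)));
        }
        return result; // Jackson serializes this as {"1": [...], "2": [...], "3": [...]}
    }
}

Spring binds the comma-separated listId values to the List<Integer> parameter, so the same path works for one id or many.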
I am using Java to perform queries on Elasticsearch via the ElasticSearchClient. Since the variables returned are large, I would like to retrieve only the relevant ones, but the variables in _source are nested.
Below is a sample response (multiple indices can be returned with the same _source structure):
[
  {
    "_index": "kn-tas-20200630",
    "_type": "_doc",
    "_id": "1122334455",
    "_score": null,
    "_source": {
      "variables": [
        { "rawValue": "DEFH", "name": "MANAGER" },
        { "rawValue": "ABCD", "name": "EMPLOYEE" },
        { "rawValue": "[{\"rowId\":102030,\"rowType\":\"SIM\"}]", "name": "extData" }
      ]
    },
    "sort": [
      1665735632119
    ]
  }
]
I would like to create a query using SearchSourceBuilder to query ES and only retrieve the following:
Get the rawValue by name (I provide MANAGER, I get "DEFH")
Get the rowType value (I provide extData + rowType, I get "SIM")
Below is my query:
{
  "from": 0,
  "size": 100,
  "query": {
    "bool": {
      "must": [
        {
          "terms": {
            "prcKey": [ "K-112" ],
            "boost": 1.0
          }
        }
      ],
      "must_not": [
        {
          "exists": {
            "field": "endDate",
            "boost": 1.0
          }
        },
        {
          "term": {
            "personInCharge": {
              "value": "ABC",
              "boost": 1.0
            }
          }
        }
      ],
      "adjust_pure_negative": true,
      "boost": 1.0
    }
  },
  "_source": {
    "includes": [
      "variables.name",
      "variables.rawValue"
    ],
    "excludes": []
  },
  "sort": [
    {
      "createTime": {
        "order": "desc"
      }
    }
  ]
}
How can I fix my query? I tried using nested queries but without any luck.
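For reference, the JSON above corresponds roughly to the following SearchSourceBuilder calls. This is a sketch against the 7.x high-level REST client; it just reproduces the posted query and does not by itself restrict which entries of the variables array come back:

import org.elasticsearch.index.query.BoolQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.sort.SortOrder;

public class TaskSearchSourceExample {

    // Builds the same request body as the JSON query shown above.
    static SearchSourceBuilder buildSource() {
        BoolQueryBuilder query = QueryBuilders.boolQuery()
                .must(QueryBuilders.termsQuery("prcKey", "K-112"))
                .mustNot(QueryBuilders.existsQuery("endDate"))
                .mustNot(QueryBuilders.termQuery("personInCharge", "ABC"));

        return new SearchSourceBuilder()
                .query(query)
                .from(0)
                .size(100)
                // _source filtering keeps only these paths, but for every entry of the variables array
                .fetchSource(new String[] { "variables.name", "variables.rawValue" }, new String[0])
                .sort("createTime", SortOrder.DESC);
    }
}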
I have a set of phrases such as [remix], [18+], etc. How can I search by a single character, for example "[", to find all of these variants?
Right now I have the following analyzers config:
{
  "analysis": {
    "analyzer": {
      "bigram_analyzer": {
        "type": "custom",
        "tokenizer": "keyword",
        "filter": [
          "lowercase",
          "bigram_filter"
        ]
      },
      "full_text_analyzer": {
        "type": "custom",
        "tokenizer": "ngram_tokenizer",
        "filter": [
          "lowercase"
        ]
      }
    },
    "filter": {
      "bigram_filter": {
        "type": "edge_ngram",
        "max_gram": 2
      }
    },
    "tokenizer": {
      "ngram_tokenizer": {
        "type": "ngram",
        "min_gram": 3,
        "max_gram": 3,
        "token_chars": [
          "letter",
          "digit",
          "symbol",
          "punctuation"
        ]
      }
    }
  }
}
Mapping occurs at the Java entity level, using the Spring Boot Data Elasticsearch starter.
If I understand your problem correctly, you want to implement an autocomplete analyzer that will return any term starting with "[" (or any other character). To do so, you can create a custom analyzer with an edge_ngram autocomplete filter. Here is an example:
Here is the testing index:
PUT /testing-index-v3
{
  "settings": {
    "number_of_shards": 1,
    "analysis": {
      "filter": {
        "autocomplete_filter": {
          "type": "edge_ngram",
          "min_gram": 1,
          "max_gram": 15
        }
      },
      "analyzer": {
        "autocomplete": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": [
            "lowercase",
            "autocomplete_filter"
          ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "term": {
        "type": "text",
        "analyzer": "autocomplete"
      }
    }
  }
}
Here are the input documents:
POST /testing-index-v3/_doc
{ "term": "[+18]" }

POST testing-index-v3/_doc
{ "term": "[remix]" }

POST testing-index-v3/_doc
{ "term": "test" }
And finally our search:
GET testing-index-v3/_search
{
  "query": {
    "match": {
      "term": {
        "query": "[remi",
        "analyzer": "keyword",
        "fuzziness": 0
      }
    }
  }
}
As you can see, I chose the keyword tokenizer for the autocomplete analyzer, combined with an edge_ngram filter with min_gram: 1 and max_gram: 15. This means each indexed term is split into prefix tokens, e.g. input-query becomes i, in, inp, inpu, input, and so on, up to 15 characters. This happens only at index time. In the query we specify the keyword analyzer, which is applied at search time and matches the search string as a single, exact token. Here are some example searches and results:
GET testing-index-v3/_search
{
  "query": {
    "match": {
      "term": {
        "query": "[",
        "analyzer": "keyword",
        "fuzziness": 0
      }
    }
  }
}
result:
"hits" : [
  {
    "_index" : "testing-index-v3",
    "_type" : "_doc",
    "_id" : "w5c_IHsBGGZ-oIJIi-6n",
    "_score" : 0.7040055,
    "_source" : {
      "term" : "[remix]"
    }
  },
  {
    "_index" : "testing-index-v3",
    "_type" : "_doc",
    "_id" : "xJc_IHsBGGZ-oIJIju7m",
    "_score" : 0.7040055,
    "_source" : {
      "term" : "[+18]"
    }
  }
]
GET testing-index-v3/_search
{
  "query": {
    "match": {
      "term": {
        "query": "[+",
        "analyzer": "keyword",
        "fuzziness": 0
      }
    }
  }
}
result:
"hits" : [
  {
    "_index" : "testing-index-v3",
    "_type" : "_doc",
    "_id" : "xJc_IHsBGGZ-oIJIju7m",
    "_score" : 0.7040055,
    "_source" : {
      "term" : "[+18]"
    }
  }
]
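If you want to verify which tokens the autocomplete analyzer actually produces at index time, the _analyze API is handy. Here is a minimal sketch with the 7.x Java high-level REST client; the host and client setup are assumptions:

import org.apache.http.HttpHost;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.indices.AnalyzeRequest;
import org.elasticsearch.client.indices.AnalyzeResponse;

public class AnalyzeCheck {
    public static void main(String[] args) throws Exception {
        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("localhost", 9200, "http")))) {

            // Run "[remix]" through the index's "autocomplete" analyzer
            AnalyzeRequest request = AnalyzeRequest.withIndexAnalyzer(
                    "testing-index-v3", "autocomplete", "[remix]");
            AnalyzeResponse response = client.indices().analyze(request, RequestOptions.DEFAULT);

            // Expected edge_ngram prefixes: "[", "[r", "[re", "[rem", "[remi", "[remix", "[remix]"
            response.getTokens().forEach(token -> System.out.println(token.getTerm()));
        }
    }
}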
Hope this answer helps you. Good luck with your adventures with elasticsearch!
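Since the question mentions that mapping happens at the Java entity level via Spring Data Elasticsearch, here is a hedged sketch of how the autocomplete analyzer above might be attached to a field. The entity name and the settings file path are assumptions; the settings JSON would contain the analysis block from the index definition above:

import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.Field;
import org.springframework.data.elasticsearch.annotations.FieldType;
import org.springframework.data.elasticsearch.annotations.Setting;

// Hypothetical entity mapped to the index created above
@Document(indexName = "testing-index-v3")
@Setting(settingPath = "elasticsearch/testing-index-settings.json")
public class Phrase {

    @Id
    private String id;

    // Indexed with the edge_ngram "autocomplete" analyzer, searched with "keyword"
    @Field(type = FieldType.Text, analyzer = "autocomplete", searchAnalyzer = "keyword")
    private String term;

    // getters and setters omitted for brevity
}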
Elasticsearch version is 7.x.
Here is some nested data below:
data1:
[{ "name": "tom" }, { "name": "jack" }]
data2:
[{ "name": "tom" }, { "name": "rose" }]
data3:
[{ "name": "tom" }, { "name": "rose3" }]
...
dataN:
[{ "name": "tom" }, { "name": "roseN" }]
When I use the terms query, I only want to match tom and jack, but I don't want to include rose ... roseN:
query: {
  terms: { ["tom", "jack"] }
}
This code is not effective.
Adding a working example
Index Data:
PUT /_doc/1
{
  "names": [
    { "name": "tom" },
    { "name": "jack" }
  ]
}

PUT /_doc/2
{
  "names": [
    { "name": "tom" },
    { "name": "rose" }
  ]
}
Search Query:
{
  "query": {
    "bool": {
      "must": {
        "terms": {
          "names.name": [ "tom", "jack" ]
        }
      },
      "must_not": {
        "match": {
          "names.name": "rose"
        }
      }
    }
  }
}
Search Result:
"hits": [
{
"_index": "65838516",
"_type": "_doc",
"_id": "1",
"_score": 1.0,
"_source": {
"names": [
{
"name": "tom"
},
{
"name": "jack"
}
]
}
}
]
This is my query. It does the nested sort, but I also want it to sort the data inside the item_numbers array, together with the nested sort, in a single Elasticsearch query.
{
  "query": {
    "nested": {
      "query": {
        "bool": {
          "must": [
            {
              "match": {
                "item_numbers.type": "catalog"
              }
            }
          ]
        }
      },
      "path": "item_numbers"
    }
  },
  "sort": [
    {
      "item_numbers.value.keyword": {
        "order": "asc",
        "nested": {
          "path": "item_numbers"
        }
      }
    }
  ]
}
My output for the above query is below:
{
  "data": [
    {
      "item_numbers": [
        { "value": "Ball", "value_phonetic": "", "type": "catalog" },
        { "value": "Apple", "value_phonetic": "", "type": "catalog" },
        { "value": "Cat", "value_phonetic": "", "type": "catalog" }
      ]
    },
    {
      "item_numbers": [
        { "value": "Cococola", "value_phonetic": "", "type": "catalog" },
        { "value": "Appy", "value_phonetic": "", "type": "catalog" }
      ]
    }
  ]
}
But when a single document contains multiple entries in the array, I want that array sorted within the document as well.
Expected output:
{
  "data": [
    {
      "item_numbers": [
        { "value": "Apple", "value_phonetic": "", "type": "catalog" },
        { "value": "Ball", "value_phonetic": "", "type": "catalog" },
        { "value": "Cat", "value_phonetic": "", "type": "catalog" }
      ]
    },
    {
      "item_numbers": [
        { "value": "Appy", "value_phonetic": "", "type": "catalog" },
        { "value": "Cococola", "value_phonetic": "", "type": "catalog" }
      ]
    }
  ]
}
Does anyone know what changes need to be made to the query to get this output?
The global sort, even though it is nested, is applied only at the top level, meaning the inner docs don't get sorted.
What you're looking for is sorted inner_hits:
{
  "_source": "sorted_item_numbers",    <--
  "query": {
    "nested": {
      "query": {
        "bool": {
          "must": [
            {
              "match": {
                "item_numbers.type": "catalog"
              }
            }
          ]
        }
      },
      "inner_hits": {                  <--
        "name": "sorted_item_numbers",
        "sort": {
          "item_numbers.value.keyword": "asc"
        }
      },
      "path": "item_numbers"
    }
  },
  "sort": [
    {
      "item_numbers.value.keyword": {
        "order": "asc",
        "nested": {
          "path": "item_numbers"
        }
      }
    }
  ]
}
Note that the response will be slightly different from the standard hits, but the top-level docs will be sorted (the doc with the best item_numbers.value taking precedence) and so will the actual contents of item_numbers.
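If you consume this from the Java high-level REST client, the sorted entries live in the named inner hits rather than in the regular top-level _source. Here is a minimal sketch, assuming you already have a SearchResponse for the query above (client setup omitted):

import java.util.Map;

import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.SearchHits;

public class SortedItemNumbersReader {

    // Prints each document's item_numbers in the order produced by the sorted inner hits
    static void printSorted(SearchResponse response) {
        for (SearchHit hit : response.getHits()) {
            SearchHits sortedItems = hit.getInnerHits().get("sorted_item_numbers");
            for (SearchHit itemHit : sortedItems) {
                Map<String, Object> item = itemHit.getSourceAsMap();
                System.out.println(item.get("value")); // e.g. Apple, Ball, Cat
            }
        }
    }
}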
I have the collection below and am trying to update an array element.
If a lineItem's _id value is 1, I want to go to its spec list and, where specName is "Model", update characteristicsValue from 900 to 50. As you can see, _id is itself an array.
Collection data:
{
  "_id": "100",
  "name": "Campaign",
  "status": "Active",
  "parts": {
    "lineItem": [
      {
        "_id": [
          { "name": "A", "value": "1" }
        ],
        "spec": [
          {
            "specName": "Brand",
            "characteristicsValue": [ { "value": "500" } ]
          },
          {
            "specName": "Model",
            "characteristicsValue": [ { "value": "900" } ]
          }
        ]
      },
      {
        "_id": [
          { "name": "B", "value": "2" }
        ],
        "spec": [
          {
            "specName": "Brand",
            "characteristicsValue": [ { "value": "300" } ]
          },
          {
            "specName": "Model",
            "characteristicsValue": [ { "value": "150" } ]
          }
        ]
      },
      {
        "_id": [
          { "name": "C", "value": "2" }
        ]
      }
    ]
  }
}
The update below doesn't work as I expected:
db.Collection.update(
  { "parts.lineItem._id.value": "1", "parts.lineItem.spec.specName": "Model" },
  { $set: { "parts.lineItem.spec.$.characteristicsValue": "50" } }
)
EDIT:
Every _id has a spec array, so we need to find the _id, then go to the spec array under that _id, find the matching spec, and update its value.
Try it this way:
db.Collection.update(
{},
{ $set: { "parts.lineItem.$[outer].spec.$[inner].characteristicsValue" : "50" } },
{ multi: true, arrayFilters: [{"outer._id.value" : "1"}, {"inner.specName" : "Model"}]}
);
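If you run this from the MongoDB Java driver rather than the shell, the same arrayFilters update could look roughly like this; the connection string and database name are assumptions:

import java.util.Arrays;

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.UpdateOptions;
import com.mongodb.client.model.Updates;
import org.bson.Document;

public class UpdateCharacteristicsValue {
    public static void main(String[] args) {
        try (MongoClient mongoClient = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> collection =
                    mongoClient.getDatabase("test").getCollection("Collection");

            // Same update as the shell command above: match the lineItem whose _id.value is "1",
            // then the spec whose specName is "Model", and set its characteristicsValue to "50".
            collection.updateMany(
                    new Document(),
                    Updates.set("parts.lineItem.$[outer].spec.$[inner].characteristicsValue", "50"),
                    new UpdateOptions().arrayFilters(Arrays.asList(
                            Filters.eq("outer._id.value", "1"),
                            Filters.eq("inner.specName", "Model"))));
        }
    }
}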