I am new to Elasticsearch. I am trying to do an exact match with an AND operation. I have tried many ways, but the response is always a mess for me; it behaves like a fuzzy match. I need an exact match, as in an RDBMS:
SELECT * FROM my_table WHERE IP = '1.1.1.1' AND NAME = 'ETH1/10'
Thanks in advance.
If you need an exact match, then instead of a match query use a term query.
Adding a working example
Index mapping
{
  "mappings": {
    "properties": {
      "name": {
        "type": "keyword"
      },
      "ip": {
        "type": "ip"
      }
    }
  }
}
Index sample doc
{
  "name": "ETH1/10",
  "ip": "1.1.1.1"
}
And the search query. Use `filter` (as pointed out by @Val in the comments) and `term` queries for the exact match:
{
  "query": {
    "bool": {
      "filter": [
        {
          "term": {
            "ip": "1.1.1.1"
          }
        },
        {
          "term": {
            "name": "ETH1/10"
          }
        }
      ]
    }
  }
}
And the search result (note the `_score` of 0.0: `filter` clauses do not contribute to scoring):
"hits": [
{
"_index": "65167713",
"_type": "_doc",
"_id": "1",
"_score": 0.0,
"_source": {
"name": "ETH1/10",
"ip": "1.1.1.1"
}
}
]
How about this?
{
  "query": {
    "bool": {
      "must": [
        {
          "match": {
            "IP": "1.1.1.1"
          }
        },
        {
          "match": {
            "NAME": "ETH1/10"
          }
        }
      ]
    }
  }
}
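Note that match queries are analyzed, so on text fields this gives the fuzzy-looking behavior the question describes rather than a strict exact match. If the index uses default dynamic mapping, a stricter variant of the same query (a sketch, assuming the auto-generated IP.keyword and NAME.keyword subfields exist) would be:
{
  "query": {
    "bool": {
      "filter": [
        { "term": { "IP.keyword": "1.1.1.1" } },
        { "term": { "NAME.keyword": "ETH1/10" } }
      ]
    }
  }
}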
I have a set of the following phrases: [remix], [18+], etc. How can I search by a single character, for example "[", to find all of these variants?
Right now I have the following analyzer config:
{
  "analysis": {
    "analyzer": {
      "bigram_analyzer": {
        "type": "custom",
        "tokenizer": "keyword",
        "filter": [
          "lowercase",
          "bigram_filter"
        ]
      },
      "full_text_analyzer": {
        "type": "custom",
        "tokenizer": "ngram_tokenizer",
        "filter": [
          "lowercase"
        ]
      }
    },
    "filter": {
      "bigram_filter": {
        "type": "edge_ngram",
        "max_gram": 2
      }
    },
    "tokenizer": {
      "ngram_tokenizer": {
        "type": "ngram",
        "min_gram": 3,
        "max_gram": 3,
        "token_chars": [
          "letter",
          "digit",
          "symbol",
          "punctuation"
        ]
      }
    }
  }
}
Mapping is done at the Java entity level using the Spring Boot Data Elasticsearch starter.
If I understand your problem correctly, you want to implement an autocomplete analyzer that returns any term starting with [ or any other character. To do so, you can create a custom analyzer using an edge-ngram autocomplete filter. Here is an example:
Here is the testing index:
PUT /testing-index-v3
{
"settings": {
"number_of_shards": 1,
"analysis": {
"filter": {
"autocomplete_filter": {
"type": "edge_ngram",
"min_gram": 1,
"max_gram": 15
}
},
"analyzer": {
"autocomplete": {
"type": "custom",
"tokenizer": "keyword",
"filter": [
"lowercase",
"autocomplete_filter"
]
}
}
}
},
"mappings": {
"properties": {
"term": {
"type": "text",
"analyzer": "autocomplete"
}
}
}
}
Here are the documents to index:
POST /testing-index-v3/_doc
{
"term": "[+18]"
}
POST testing-index-v3/_doc
{
"term": "[remix]"
}
POST testing-index-v3/_doc
{
"term": "test"
}
And finally our search:
GET testing-index-v3/_search
{
"query": {
"match": {
"term": {
"query": "[remi",
"analyzer": "keyword",
"fuzziness": 0
}
}
}
}
As you can see, I chose the keyword tokenizer for the autocomplete analyzer. The edge_ngram filter with min_gram: 1 and max_gram: 15 means each indexed value is split into prefix tokens: input-query = i, in, inp, inpu, input, etc., up to 15 characters. This happens only at index time. In the search query we specify the keyword analyzer as the search-time analyzer, so the search term stays a single token and must match one of those indexed prefixes exactly.
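To inspect the tokens the autocomplete analyzer actually produces at index time, you can run the _analyze API (a quick check against the testing-index-v3 index created above):
GET testing-index-v3/_analyze
{
  "analyzer": "autocomplete",
  "text": "[remix]"
}
This should return the prefix tokens [, [r, [re, [rem, [remi, [remix and [remix], which is why even a single-character search like "[" can match both documents. Here are some example searches and results: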
GET testing-index-v3/_search
{
"query": {
"match": {
"term": {
"query": "[",
"analyzer": "keyword",
"fuzziness": 0
}
}
}
}
result:
"hits" : [
{
"_index" : "testing-index-v3",
"_type" : "_doc",
"_id" : "w5c_IHsBGGZ-oIJIi-6n",
"_score" : 0.7040055,
"_source" : {
"term" : "[remix]"
}
},
{
"_index" : "testing-index-v3",
"_type" : "_doc",
"_id" : "xJc_IHsBGGZ-oIJIju7m",
"_score" : 0.7040055,
"_source" : {
"term" : "[+18]"
}
}
]
GET testing-index-v3/_search
{
"query": {
"match": {
"term": {
"query": "[+",
"analyzer": "keyword",
"fuzziness": 0
}
}
}
}
result:
"hits" : [
{
"_index" : "testing-index-v3",
"_type" : "_doc",
"_id" : "xJc_IHsBGGZ-oIJIju7m",
"_score" : 0.7040055,
"_source" : {
"term" : "[+18]"
}
}
]
Hope this answer helps you. Good luck on your adventures with Elasticsearch!
Elasticsearch version is 7.x.
I have some nested data like below:
data1:
[{name:"tom"},{name:"jack"}]
data2:
[{name:"tom"},{name:"rose"}]
data3:
[{name:"tom"},{name:"rose3"}]
...
dataN:
[{name:"tom"},{name:"roseN"}]
When I use the terms query, I just want to search for tom and jack, but I don't want to include rose...roseN.
My query:
{
  "query": {
    "terms": {
      "names.name": ["tom", "jack"]
    }
  }
}
This code is not effective.
Adding a working example
Index Data:
PUT /65838516/_doc/1
{
"names": [
{
"name": "tom"
},
{
"name": "jack"
}
]
}
PUT /65838516/_doc/2
{
"names": [
{
"name": "tom"
},
{
"name": "rose"
}
]
}
Search Query:
{
"query": {
"bool": {
"must": {
"terms": {
"names.name": [
"tom",
"jack"
]
}
},
"must_not": {
"match": {
"names.name": "rose"
}
}
}
}
}
Search Result:
"hits": [
{
"_index": "65838516",
"_type": "_doc",
"_id": "1",
"_score": 1.0,
"_source": {
"names": [
{
"name": "tom"
},
{
"name": "jack"
}
]
}
}
]
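Note that the example above relies on names being an ordinary object field (the default mapping for JSON arrays of objects). If names is explicitly mapped as the nested type, the same clauses have to be wrapped in nested queries (a sketch, assuming a nested mapping for names):
{
  "query": {
    "bool": {
      "must": {
        "nested": {
          "path": "names",
          "query": {
            "terms": {
              "names.name": ["tom", "jack"]
            }
          }
        }
      },
      "must_not": {
        "nested": {
          "path": "names",
          "query": {
            "match": {
              "names.name": "rose"
            }
          }
        }
      }
    }
  }
}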
I am trying to fetch records from Elasticsearch using wildcard queries.
Please find the query below:
GET my_index12/_search
{
"query": {
"wildcard": {
"code.keyword": {
"value": "*ARG*"
}
}
}
}
It's working and giving expected results for the above query, but it does not work when the value's casing differs:
GET my_index12/_search
{
"query": {
"wildcard": {
"code.keyword": {
"value": "*Arg*"
}
}
}
}
Try the following:
Mapping:
PUT my_index12
{
"settings": {
"analysis": {
"analyzer": {
"custom_analyzer": {
"type": "custom",
"tokenizer": "whitespace",
"char_filter": [
"html_strip"
],
"filter": [
"lowercase",
"asciifolding"
]
}
}
}
},
"mappings": {
"doc": {
"properties": {
"code": {
"type": "text",
"analyzer": "custom_analyzer"
}
}
}
}
}
Then run a query_string query:
GET my_index12/_search
{
"query": {
"query_string": {
"default_field": "code",
"query": "AB\\-7000*"
}
}
}
It will also work for ab-7000*, since the lowercase filter makes matching case-insensitive.
Let me know if it works for you.
You have to normalize your keyword field; see the Elasticsearch normalizer documentation.
Something like this (from the documentation):
PUT index
{
"settings": {
"analysis": {
"normalizer": {
"my_normalizer": {
"type": "custom",
"char_filter": [],
"filter": ["lowercase", "asciifolding"]
}
}
}
},
"mappings": {
"_doc": {
"properties": {
"foo": {
"type": "keyword",
"normalizer": "my_normalizer"
}
}
}
}
}
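With this mapping in place, values of foo are indexed lowercased and ASCII-folded, so an all-lowercase pattern should match regardless of the original casing (a minimal sketch against the example foo field from the snippet above):
GET index/_search
{
  "query": {
    "wildcard": {
      "foo": {
        "value": "*arg*"
      }
    }
  }
}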
UPDATE
Some additional info from the documentation:
Only parts of the analysis chain that operate at the character level are applied. So for instance, if the analyzer performs both lowercasing and stemming, only the lowercasing will be applied: it would be wrong to perform stemming on a word that is missing some of its letters.
By setting analyze_wildcard to true, queries that end with a * will be analyzed and a boolean query will be built out of the different tokens, by ensuring exact matches on the first N-1 tokens, and prefix match on the last token.
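For example, a query_string search with analyze_wildcard enabled (a sketch, reusing the hypothetical foo field from above) lets a mixed-case pattern be lowercased by the analysis chain before matching:
GET index/_search
{
  "query": {
    "query_string": {
      "default_field": "foo",
      "query": "Arg*",
      "analyze_wildcard": true
    }
  }
}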
I was trying to handle the following case:
I want to search for a name that ends with a particular word. For example:
name : group Test
name : group test
name : group test org
Here is my wildcard query:
"bool" : {
"must" : [
{
"wildcard" : {
"name.keyword"" : {
"wildcard" : "*test",
"boost" : 1.0
}
}
}
]
}
It returns "group test" with a case-sensitive search, but I need to get both "group Test" and "group test", i.e. a case-insensitive search.
My mapping is as follows:
"name":{
"type":"text",
"fielddata":true
"fields":{
"keyword":{
"type":"keyword"
}
}
}
Can anyone help me find such a query in the Elasticsearch Java API, or any other way to do this search?
Elasticsearch version 6.1.2.
Any help is really appreciated.
Unfortunately there is no direct way to do this with the ES configuration, as the keyword type does not have the analyzer property, but I found a workaround. Please take a look at this solution:
PUT test
{
"settings": {
"analysis": {
"analyzer": {
"folding": {
"tokenizer": "standard",
"filter": [ "lowercase", "asciifolding" ]
}
},
"normalizer": {
"lowerasciinormalizer": {
"type": "custom",
"filter": [ "lowercase", "asciifolding" ]
}
}
}
},
"mappings": {
"_default_": {
"dynamic_templates": [
{
"string_as_keyword": {
"match_mapping_type": "string",
"match": "*_k",
"mapping": {
"type": "keyword",
"normalizer": "lowerasciinormalizer"
}
}
}
]
}
}
}
PUT test/1/123
{
"str_k" : "string âgáÈÒU is cool"
}
GET test/_search
{
"query": {
"wildcard": {
"str_k": "*agaeou*"
}
}
}
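You can verify what the normalizer does to a value with the _analyze API (a quick check; a normalizer emits a single token):
GET test/_analyze
{
  "normalizer": "lowerasciinormalizer",
  "text": "string âgáÈÒU is cool"
}
This should return the single token string agaeou is cool, which is why the all-lowercase, ASCII-only pattern *agaeou* matches.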
This is my locations collection in MongoDB:
{ "_id" : ObjectId("5270d36f28f31fd8fa016441"), "stateName" : "A5", "cityName" : "ABCNEW2" }
{ "_id" : ObjectId("5270d37328f31fd8fa016442"), "stateName" : "A5", "cityName" : "ABC" }
{ "_id" : ObjectId("5270d37b28f31fd8fa016443"), "stateName" : "65", "cityName" : "ABCRW" }
I created an index using Elasticsearch:
POST /bwitter
{"index":
{ "number_of_shards": 1,
"analysis": {
"filter": {
"mynGram" : {"type": "nGram", "min_gram": 2, "max_gram": 10}
},
"analyzer": { "a1" : {
"type":"custom",
"tokenizer": "standard",
"filter": ["lowercase", "mynGram"]
}
}
}
}
}
I created a mapping using Elasticsearch:
PUT /bwitter/bweet/_mapping
{
"bweet" : {
"index_analyzer" : "a1",
"search_analyzer" : "standard",
"properties" : {
"stateName": {"type":"string", "analyzer":"standard"},
"cityName" : {"type" : "string" }
}
}
}
I created a river as follows:
PUT /_river/mongodb/_meta
{
"type": "mongodb",
"mongodb": {
"db": "rakeshdb",
"collection": "locations"
},
"index": {
"name": "locations",
"type": "bweet"
}
}
If I query GET /locations/_search?q=ABC, I get only one record (full-word search works, but partial-word search does not).
I spent almost a whole day on this but was not able to solve it. Where am I going wrong?
I guess that it should be:
PUT /_river/mongodb/_meta
{
"type": "mongodb",
"mongodb": {
"db": "rakeshdb",
"collection": "locations"
},
"index": {
"name": "bwitter",
"type": "bweet"
}
}
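The river's index.name must point at the index where the a1 analyzer is defined (bwitter); otherwise the documents land in a plain locations index with default analysis. Once the documents are indexed into bwitter, a partial-word query should start matching (a hypothetical check, assuming the mapping above):
GET /bwitter/_search?q=cityName:AB
Since cityName has no explicit analyzer, it falls back to the type-level index_analyzer a1, whose 2-10 character nGrams make partial matching possible.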