I have an Elasticsearch query as below.
{
"query":{
"bool":{
"filter":{
"bool":{
"must_not":{
"terms":{
"names":[
"john",
"jose"
]
}
}
}
}
}
}
}
I am trying to build the corresponding query in code, something like this:
BoolQuery.Builder builder = new BoolQuery.Builder();
List<String> names = List.of("john", "jose");
TermsQueryField field = new TermsQueryBuilder().value(names).build();
builder.mustNot(TermsQuery.of(t -> t.field("names").terms(field))._toQuery());
But I am getting an error on this line, because the value function expects a List of FieldValue, not a List of String.
TermsQueryField field = new TermsQueryBuilder().value(names).build();
Can someone help on this?
You need to use the code below to create FieldValues for your names:
List<FieldValue> fieldValues = names.stream().map(FieldValue::of).toList();
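Putting it together, the whole query from the question could be built roughly like this (a minimal sketch assuming the new Elasticsearch Java API client; the variable names are only illustrative):

import co.elastic.clients.elasticsearch._types.FieldValue;
import co.elastic.clients.elasticsearch._types.query_dsl.BoolQuery;
import co.elastic.clients.elasticsearch._types.query_dsl.Query;
import co.elastic.clients.elasticsearch._types.query_dsl.TermsQuery;
import co.elastic.clients.elasticsearch._types.query_dsl.TermsQueryField;
import java.util.List;

List<String> names = List.of("john", "jose");

// Wrap each plain String in a FieldValue, which is what the terms query expects.
List<FieldValue> fieldValues = names.stream().map(FieldValue::of).toList();

TermsQueryField termsField = new TermsQueryField.Builder()
        .value(fieldValues)
        .build();

// terms query on the "names" field, negated inside the inner bool
Query mustNotTerms = TermsQuery.of(t -> t
        .field("names")
        .terms(termsField))._toQuery();

// bool > filter > bool > must_not > terms, mirroring the JSON above
Query query = BoolQuery.of(b -> b
        .filter(f -> f.bool(inner -> inner.mustNot(mustNotTerms)))
)._toQuery();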
I have a requirement to build a JSON dynamically and need to call an external API.
For instance,
Input : "FIRST_NAME": "XXX"
Based on the above input I need to build a JSON dynamically like below
{
"Req":{
"user":{
"CreatedTime":"2017-03-02T07:52:58Z",
"UpdatedTime":"2017-03-02T07:52:58Z",
"Details":{
"Names":[
{
"Name":{
"First":"kirtq"
}
}
]
}
}
}
}
If I get a contact number as input: CONTACT_NUMBER: 889999999
Then I have to build a JSON like below
{
"UpdateMemberReq": {
"Customer": {
"CreatedTime": "2017-03-02T07:52:58Z",
"UpdatedTime": "2017-03-02T07:52:58Z",
"CustomerDetails": {
"Contacts": {
"MobilePhone": {
"value": "07888728687"
}
}
}
}
}
}
Like this, I have around 30 fields. For each request I will get one field, and based on that I have to build the JSON dynamically. Once the JSON is prepared, I have to call an external API (POST), passing the JSON as the raw request body.
I have implemented it like below:
List<Object> list = new ArrayList<>();
Name user = mapper.readValue(json2, Name.class);
System.out.println(user);
Map<String, Object> name1 = new HashMap<>();
name1.put("Name", user);
list.add(name1);
Map<String, Object> map1 = new HashMap<>();
map1.put("Names", list);
Map<String, Object> map2 = new HashMap<>();
map2.put("CustomerDetails", map1);
Map<String, Object> map = new HashMap<>();
map.put("Customer", map2);
Can anyone suggest to me the best way to handle this in java/spring boot?
Thanks!!
Can anyone suggest to me the best way to handle this in java/spring boot?
Given that you don't have a fixed schema for the JSON you want to create, you'll have to do it pretty much the way you are doing it.
This means assembling a map dynamically and then serializing it to a JSON string.
What you can do to improve is to extract common, reusable components for building certain parts of the request.
I'd recommend creating a class structure to keep things manageable, with classes like the following (see the sketch after this list):
JsonGenerationService (the main service the rest of the code uses)
UserJsonGenerator -> generates JSON for user entities
CustomerJsonGenerator -> generates JSON for customers
JsonGeneratorCommon -> contains all the common methods
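As a rough illustration of that structure (just a sketch, assuming Jackson's ObjectMapper for the final serialization; class and method names are only suggestions, and the timestamp fields are omitted):

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.HashMap;
import java.util.Map;

// Builds the nested "UpdateMemberReq" structure for a contact-number input.
class CustomerJsonGenerator {

    Map<String, Object> buildContactRequest(String contactNumber) {
        Map<String, Object> mobilePhone = new HashMap<>();
        mobilePhone.put("value", contactNumber);

        Map<String, Object> contacts = new HashMap<>();
        contacts.put("MobilePhone", mobilePhone);

        Map<String, Object> customerDetails = new HashMap<>();
        customerDetails.put("Contacts", contacts);

        Map<String, Object> customer = new HashMap<>();
        customer.put("CustomerDetails", customerDetails);

        Map<String, Object> updateMemberReq = new HashMap<>();
        updateMemberReq.put("Customer", customer);

        Map<String, Object> root = new HashMap<>();
        root.put("UpdateMemberReq", updateMemberReq);
        return root;
    }
}

// The main service the rest of the code uses: picks a generator and serializes the map.
class JsonGenerationService {
    private final ObjectMapper mapper = new ObjectMapper();
    private final CustomerJsonGenerator customerGenerator = new CustomerJsonGenerator();

    String generateContactJson(String contactNumber) throws JsonProcessingException {
        return mapper.writeValueAsString(customerGenerator.buildContactRequest(contactNumber));
    }
}

The resulting string can then be sent as the raw body of your POST call.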
I'm just wondering how to get fields from the SearchResponse that is the result of my query.
Below is my query:
{"size":99,"timeout":"10s","query":{"bool":{"filter":[{"bool":{"must":[{"range":{"LOG_GEN_TIME":{"from":"2018-11-01 12:00:01+09:00","to":"2018-11-01 23:59:59+09:00","include_lower":true,"include_upper":true,"boost":1.0}}},{"wrapper":{"query":"eyAiYm9vbCIgOiB7ICJtdXN0IiA6IFsgeyAidGVybSIgOiB7ICJBU1NFVF9JUCIgOiAiMTAuMTExLjI1Mi4xNiIgfSB9LCB7ICJ0ZXJtIiA6IHsgIkFDVElPTl9UWVBFX0NEIiA6ICIyIiB9IH0sIHsgInRlcm0iIDogeyAiRFNUX1BPUlQiIDogIjgwIiB9IH0gXSB9IH0="}}],"adjust_pure_negative":true,"boost":1.0}}],"adjust_pure_negative":true,"boost":1.0}},"_source":{"includes":["LOG_GEN_TIME","LOG_NO","ASSET_NO"],"excludes":[]},"sort":[{"LOG_GEN_TIME":{"order":"desc"}},{"LOG_NO":{"order":"desc"}}]}
and when I query this, like below:
SearchResponse searchResponse = request.get();
I got the right result:
{
"took":1071,
"timed_out":false,
"_shards":{
"total":14,
"successful":14,
"skipped":0,
"failed":0
},
"_clusters":{
"total":0,
"successful":0,
"skipped":0
},
"hits":{
"total":2,
"max_score":null,
"hits":[
{
"_index":"log_20181101",
"_type":"SEC",
"_id":"1197132746951492963",
"_score":null,
"_source":{
"ASSET_NO":1,
"LOG_NO":1197132746951492963,
"LOG_GEN_TIME":"2018-11-01 09:46:28+09:00"
},
"sort":[
1541033188000,
1197132746951492963
]
},
{
"_index":"log_20181101",
"_type":"SEC",
"_id":"1197132746951492963",
"_score":null,
"_source":{
"ASSET_NO":2,
"LOG_NO":1197337264704454700,
"LOG_GEN_TIME":"2018-11-01 23:00:06+09:00"
},
"sort":[
1541080806000,
1197337264704454700
]
}
]
}
}
To use this result, I need to map it by field and value.
I think there's a way to map the fields and values into the 'fields' parameter so that we can use them conveniently, but I cannot find it.
I hope I can use the result this way:
SearchHit hit = ...
Map<String, SearchHitField> fields = hit.getFields();
String logNo = fields.get("LOG_NO").value();
And it seems like this is the common way to use it.
Or am I misunderstanding something? Please tell me if there's a better way.
Any comment would be appreciated. Thanks.
I'm not clear on which client you are using to query Elasticsearch. If you are using the Elasticsearch High Level REST Client, you can loop through the hits and use hit.getSourceAsMap() to get the _source fields as key-value pairs.
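For example, with the High Level REST Client and the searchResponse from the question, you could read the _source fields roughly like this (a sketch; the field names follow the response shown above):

import org.elasticsearch.search.SearchHit;
import java.util.Map;

for (SearchHit hit : searchResponse.getHits().getHits()) {
    // _source of the hit as key-value pairs
    Map<String, Object> source = hit.getSourceAsMap();
    Object logNo = source.get("LOG_NO");
    Object logGenTime = source.get("LOG_GEN_TIME");
    Object assetNo = source.get("ASSET_NO");
    // use the values as needed
}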
For your comment:
First, create a POJO class which corresponds to _source (i.e. the index properties; the way the data is stored in Elasticsearch).
Then use hit.getSourceAsString() to get _source in JSON format.
Use Jackson's ObjectMapper to map the JSON to your POJO.
Assuming you created a POJO class AssetLog
ObjectMapper objectMapper = new ObjectMapper();
SearchHit[] searchHits = searchResponse.getHits().getHits();
for (SearchHit searchHit : searchHits) {
    // _source of the hit as a JSON string
    String hitJson = searchHit.getSourceAsString();
    AssetLog source = objectMapper.readValue(hitJson, AssetLog.class);
    //Store source to map/array
}
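For completeness, a minimal AssetLog POJO matching the _source above might look like this (just a sketch; public fields are used only to keep it short):

import com.fasterxml.jackson.annotation.JsonProperty;

public class AssetLog {

    @JsonProperty("ASSET_NO")
    public long assetNo;

    @JsonProperty("LOG_NO")
    public long logNo;

    @JsonProperty("LOG_GEN_TIME")
    public String logGenTime;
}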
Hope this helps.
I think this should be easy to do, but I just couldn't figure it out.
What I'm trying to achieve is this query
{inbox:{$in:["main","fun-inbox"]} ,status:"Open"}
I managed to make it work like this
Bson q = Filters.and(
        Filters.in("inbox", inboxes),
        Filters.eq("status", statusID));
but it's not the same thing because I used the $and operator.
Can this be done using Document?
Here is what I've tried. I know the way I define it is wrong, but I'll include the example to better convey what I'm trying to achieve:
Document q1 = new Document()
.append("inbox", Filters.in("inbox", inboxes))
.append("status", statusID);
What you have is correct, and it is not explicitly $anded.
The Java Mongo driver figures out behind the scenes when to use $and and when not to.
For example
Without $and
Bson bson = Filters.and(Filters.in("inbox", inboxes), Filters.eq("status", status));
BsonDocument bsonDocument = bson.toBsonDocument(BsonDocument.class, MongoClient.getDefaultCodecRegistry());
System.out.print(bsonDocument.toString()); //{ "inbox" : { "$in" : inboxes }, "status" : status }
With $and
Bson bson = Filters.and(Filters.in("inbox", inboxes), Filters.eq("inbox", inbox));
BsonDocument bsonDocument = bson.toBsonDocument(BsonDocument.class, MongoClient.getDefaultCodecRegistry());
System.out.print(bsonDocument.toString()); //{ "$and" : [{ "inbox" : { "$in" : inboxes } }, { "inbox" : inbox }] }
Here is your query converted to Java code, returning a FindIterable<Document>:
FindIterable<Document> iterable = database.getCollection("mails").find(new Document("inbox", new Document("$in", inValues)).append("status", "open"));
where inValues is an ArrayList:
ArrayList<String> inValues = new ArrayList<String> ();
inValues.add("main");
inValues.add("fun-inbox");
I'm using the Java API for Elasticsearch. I'm attempting to highlight my fields but it's not working. The correct results that match the search term are being returned, so there is content to highlight, but it simply won't do it. I set my SearchResponse and HighlightBuilder like this:
QueryBuilder matchQuery = simpleQueryStringQuery(searchTerm);
...
HighlightBuilder highlightBuilder = new HighlightBuilder()
.postTags("<highlight>")
.preTags("</highlight>")
.field("description");
SearchResponse response = client.prepareSearch("mediaitems")
.setTypes("mediaitem")
.setSearchType(SearchType.DFS_QUERY_THEN_FETCH)
.setQuery(matchQuery) // Query
.setFrom(from)
.setSize(pageSize)
.setExplain(true)
.highlighter(highlightBuilder)
.get();
and in my JSON->POJO code, I check to see which fields have been highlighted, but the returned Map is empty.
Arrays.stream(hits).forEach((SearchHit hit) -> {
String source = hit.getSourceAsString();
Map<String, HighlightField> highlightFields = hit.getHighlightFields();
try {
MediaItem mediaItem = objectMapper.readValue(source, MediaItem.class);
mediaItemList.add(mediaItem);
} catch (IOException e) {
e.printStackTrace();
}
});
Why on earth is my highlighting request being ignored?
Any help is greatly appreciated.
You have to set the highlighted field in HighlightBuilder.
For example:
HighlightBuilder.Field field = new HighlightBuilder.Field(fieldName);
highlightBuilder.field(field);
I saw you are using a simple query string query, so you can do the following.
Your query string has the form: fieldname: searched text
So, for example, your query string could be:
price: >2000 && city: Manchester
With this query string you specify the fields in the query too.
Now the highlighter should work.
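Once highlighting comes back, you can read the fragments per hit, roughly like this (a sketch using the same transport-client classes and the response variable from the question):

import org.elasticsearch.common.text.Text;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.fetch.subphase.highlight.HighlightField;
import java.util.Map;

for (SearchHit hit : response.getHits().getHits()) {
    Map<String, HighlightField> highlightFields = hit.getHighlightFields();
    HighlightField descriptionHighlight = highlightFields.get("description");
    if (descriptionHighlight != null) {
        for (Text fragment : descriptionHighlight.fragments()) {
            // each fragment contains the matched text wrapped in the configured tags
            System.out.println(fragment.string());
        }
    }
}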
According to the official documentation on Update API - Upserts, one can use scripted_upsert in order to handle an update (for an existing document) or an insert (for a new document) from within the script. The thing is, they never show what the script should look like to do that. The Java Update API docs don't have any information on how scriptedUpsert is used.
This is the code I'm using:
//My function to build and use the upsert
public void scriptedUpsert(String key, String parent, String scriptSource, Map<String, ? extends Object> parameters) {
Script script = new Script(scriptSource, ScriptType.INLINE, null, parameters);
UpdateRequest request = new UpdateRequest(index, type, key);
request.scriptedUpsert(true);
request.script(script);
if (parent != null) {
request.parent(parent);
}
this.bulkProcessor.add(request);
}
//A test call to validate the function
String scriptSource = "if (!ctx._source.hasProperty(\"numbers\")) {ctx._source.numbers=[]}";
Map<String, List<Integer>> parameters = new HashMap<>();
List<Integer> numbers = new LinkedList<>();
numbers.add(100);
parameters.put("numbers", numbers);
bulk.scriptedUpsert("testUser", null, scriptSource, parameters);
And I'm getting the following exception when the "testUser" document doesn't exist:
DocumentMissingException[[user][testUser]: document missing
How can I make scriptedUpsert work from the Java code?
This is what a scripted_upsert command should look like (including its script):
POST /sessions/session/1/_update
{
"scripted_upsert": true,
"script": {
"inline": "if (ctx.op == \"create\") ctx._source.numbers = newNumbers; else ctx._source.numbers += updatedNumbers",
"params": {
"newNumbers": [1,2,3],
"updatedNumbers": [55]
}
},
"upsert": {}
}
If you run the above command and the index doesn't exist, it will be created, together with the newNumbers values in the new document. If you run the exact same command again, the numbers values will become 1,2,3,55.
And in your case, you are missing the "upsert": {} part.
As Andrei suggested, I was missing the upsert part; changing the function to:
public void scriptedUpsert(String key, String parent, String scriptSource, Map<String, ? extends Object> parameters) {
Script script = new Script(scriptSource, ScriptType.INLINE, null, parameters);
UpdateRequest request = new UpdateRequest(index, type, key);
request.scriptedUpsert(true);
request.script(script);
request.upsert("{}"); // <--- The change
if (parent != null) {
request.parent(parent);
}
this.bulkProcessor.add(request);
}
fixed it.