I use Elasticsearch 2.3.3.
I need to update one of several arrays inside the documents of my index.
This is a part of a document in Elasticsearch:
"sizes": [
{
"characteristicId": 11154209,
"localized": [
{
"country": "kz",
"price": 19580,
"priceWithSale": 15460,
"quantity": 6
},
{
"country": "ru",
"price": 3660,
"priceWithSale": 2891,
"quantity": 6
}
],
"typeId": 0,
"sizeName": "35",
"wbSize": {
"id": 19,
"value": "35"
},
"techSize": {
"id": 58,
"value": "35"
}
}
]
I tried to use the "Update by merging documents" feature of the Java API like this:
updateRequest = new UpdateRequest();
updateRequest.index("index");
updateRequest.type("type");
updateRequest.id("2148069");
updateRequest.doc(XContentFactory.jsonBuilder()
.startObject()
.startArray("sizes")
.startObject()
.field("characteristicId", 9099140)
.startArray("localized")
.startObject()
.field("country", "kz")
.field("price", 15)
.field("priceWithSale", 15)
.endObject()
.startObject()
.field("country", "ru")
.field("price", 3)
.field("priceWithSale", 3)
.endObject()
.endArray()
.endObject()
.endArray()
.endObject());
client.update(updateRequest).get();
But this just rewrites the entire array, while I need to update only some fields inside it.
Is there some way to do this?
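A merge-style partial update treats arrays as atomic values, so the whole sizes array is always replaced. One common workaround (a sketch only; patchPrice and sampleSizes are illustrative names, not API calls, and the hand-built maps stand in for a document fetched with a GET) is read-modify-write: fetch the document, change the one element in memory, and send the complete modified array back with updateRequest.doc(...):

```java
import java.util.*;

public class SizesPatch {

    // Change one localized price inside the fetched "sizes" array; afterwards
    // the whole modified array is re-sent with updateRequest.doc(...).
    @SuppressWarnings("unchecked")
    static void patchPrice(List<Map<String, Object>> sizes,
                           int characteristicId, String country, int newPrice) {
        for (Map<String, Object> size : sizes) {
            if (((Number) size.get("characteristicId")).intValue() != characteristicId) {
                continue;
            }
            for (Map<String, Object> loc : (List<Map<String, Object>>) size.get("localized")) {
                if (country.equals(loc.get("country"))) {
                    loc.put("price", newPrice); // only this field changes
                }
            }
        }
    }

    // Hand-built stand-in for the document fetched from Elasticsearch.
    static List<Map<String, Object>> sampleSizes() {
        Map<String, Object> kz = new LinkedHashMap<>();
        kz.put("country", "kz");
        kz.put("price", 19580);
        Map<String, Object> ru = new LinkedHashMap<>();
        ru.put("country", "ru");
        ru.put("price", 3660);
        Map<String, Object> size = new LinkedHashMap<>();
        size.put("characteristicId", 11154209);
        size.put("localized", new ArrayList<>(Arrays.asList(kz, ru)));
        return new ArrayList<>(Collections.singletonList(size));
    }

    public static void main(String[] args) {
        List<Map<String, Object>> sizes = sampleSizes();
        patchPrice(sizes, 11154209, "kz", 15460);
        System.out.println(sizes);
    }
}
```

In ES 2.x a scripted update (updateRequest.script(...)) walking ctx._source.sizes can do the same in one round trip, but inline scripting has to be enabled on the cluster for that.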
I have a JSON output like this:
{
"items": [
{
"id": "1",
"name": "Anna",
"values": [
{
"code": "Latin",
"grade": 1
},
{
"code": "Maths",
"grade": 5
}
]
},
{
"id": "2",
"name": "Mark",
"values": [
{
"code": "Latin",
"grade": 5
},
{
"code": "Maths",
"grade": 5
}
]
}
]
}
I need to get the values field for "name": "Anna". I am getting a RestAssured Response and would like to use my beans to do that, but I could also use jsonPath() or jsonObject(); I just don't know how. I have searched many topics but did not find anything.
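Assuming the response body has already been mapped onto beans (Item and Value below are hypothetical class names; with RestAssured you could fill them via response.jsonPath().getList("items", Item.class)), picking out the values for one name is a plain stream filter:

```java
import java.util.*;
import java.util.stream.*;

public class FindValues {

    // Hypothetical beans mirroring the JSON structure above.
    static class Value {
        final String code;
        final int grade;
        Value(String code, int grade) { this.code = code; this.grade = grade; }
    }

    static class Item {
        final String id;
        final String name;
        final List<Value> values;
        Item(String id, String name, List<Value> values) {
            this.id = id; this.name = name; this.values = values;
        }
    }

    // All "values" entries of the item(s) with the given name.
    static List<Value> valuesFor(List<Item> items, String name) {
        return items.stream()
                .filter(i -> name.equals(i.name))
                .flatMap(i -> i.values.stream())
                .collect(Collectors.toList());
    }

    // Sample data matching the JSON output in the question.
    static List<Item> sample() {
        return Arrays.asList(
                new Item("1", "Anna", Arrays.asList(new Value("Latin", 1), new Value("Maths", 5))),
                new Item("2", "Mark", Arrays.asList(new Value("Latin", 5), new Value("Maths", 5))));
    }

    public static void main(String[] args) {
        for (Value v : valuesFor(sample(), "Anna")) {
            System.out.println(v.code + "=" + v.grade);
        }
    }
}
```

Alternatively, RestAssured's JsonPath accepts GPath expressions, so something like items.findAll { it.name == 'Anna' }.values extracts the same data without defining any beans.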
I have been working with Elasticsearch recently, and I have run into a problem I don't know how to solve.
I have JSON like this:
{
"objects": [
"object1": {
"id" : "12345",
"name":"abc"
},
"12345"
]
}
object2 is a reference to object1. When I try to save it (i.e. index it) into Elasticsearch, it fails with:
"org.elasticsearch.index.mapper.MapperParsingException: failed to parse"
After googling I found that this is because object1 is an object while object2 is treated as a plain string.
We cannot change the JSON in our project, so how can I save it into Elasticsearch in this case?
Thanks for any help and suggestions.
How do you do that?
I ran the following command and it works:
PUT test/t1/1
{
"objects": {
"object1": {
"id" : "12345",
"name":"abc"
},
"object2": "12345"
}
}
and the result is:
{
"_index": "test",
"_type": "t1",
"_id": "1",
"_version": 1,
"result": "created",
"_shards": {
"total": 2,
"successful": 2,
"failed": 0
},
"created": true
}
UPDATE 1
Depending on your requirements, one of these may solve your problem:
PUT test/t1/2
{
"objects": [
{
"object1": {
"id": "12345",
"name": "abc"
}
},
{
"object2": "12345"
}
]
}
PUT test/t1/2
{
"objects": [
{
"object1": {
"id": "12345",
"name": "abc"
},
"object2": "12345"
},
{
...
}
]
}
I have the following JSON:
{
"items": [
{
"id": "1",
"name": "John",
"location": {
"town": {
"id": "10"
},
"address": "600 Fake Street",
},
"creation_date": "2010-01-19",
"last_modified_date": "2017-05-18"
},
{
"id": "2",
"name": "Sarah",
"location": {
"town": {
"id": "10"
},
"address": "76 Evergreen Street",
},
"creation_date": "2010-01-19",
"last_modified_date": "2017-05-18"
},
{
"id": "3",
"name": "Hamed",
"location": {
"town": {
"id": "20"
},
"address": "50 East A Street",
},
"creation_date": "2010-01-19",
"last_modified_date": "2017-05-18"
}
]
}
And I need to count how many times each town id appears, i.e. get something like this:
[ { "10": 2 }, { "20": 1 } ]
I'm trying to find the most efficient way to do this. Any ideas?
The most efficient way is to load the String into a StringBuilder and remove all line breaks and whitespace. Then search for the index of the string "town":{"id":" (where a town id starts) and then for the index of the closing "} (where it ends). Using the two indexes you can extract the town ids and count them.
No need to deserialize the JSON into POJO objects :) and extract the values by path from the POJOs.
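A sketch of that string-scan idea in Java, using a regular expression instead of manual index arithmetic (the pattern tolerates whitespace, so the StringBuilder clean-up step becomes optional):

```java
import java.util.*;
import java.util.regex.*;

public class TownCounter {

    // Count occurrences of each town id by scanning the raw JSON text for
    // the "town": {"id": "..."} pattern, without deserializing anything.
    static Map<String, Integer> countTownIds(String json) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        Matcher m = Pattern
                .compile("\"town\"\\s*:\\s*\\{\\s*\"id\"\\s*:\\s*\"(\\d+)\"")
                .matcher(json);
        while (m.find()) {
            counts.merge(m.group(1), 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        String json = "{\"items\":[{\"id\":\"1\",\"location\":{\"town\":{\"id\":\"10\"}}},"
                + "{\"id\":\"2\",\"location\":{\"town\":{\"id\":\"10\"}}},"
                + "{\"id\":\"3\",\"location\":{\"town\":{\"id\":\"20\"}}}]}";
        System.out.println(countTownIds(json)); // prints {10=2, 20=1}
    }
}
```

Note that this trades robustness for speed: it will miscount if the documents ever contain escaped quotes or a differently shaped town object, so a streaming parser is the safer choice for untrusted input.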
Can anyone help me with the following aggregation in MongoDB: given a collection of items with ids and group ids, group them by group id. For example, for this collection of items:
{
"id": 1,
"group_id": 10,
"data": "some_data",
"name": "first"
},
{
"id": 2,
"group_id": 10,
"data": "some_data",
"name": "second"
},
{
"id": 3,
"group_id": 20,
"data": "some_data",
"name": "third"
}
I want to create a new collection of groups with the following structure:
{
"id": 10,
"items": [
{
"id": 1,
"group_id": 10,
"data": "some_data",
"name": "first"
},
{
"id": 2,
"group_id": 10,
"data": "some_data",
"name": "second"
}
]
},
{
"id": 20,
"items": [
{
"id": 3,
"group_id": 20,
"data": "some_data",
"name": "third"
}
]
}
A corresponding snippet in Java with spring-data-mongodb would also be appreciated.
In fact, I am doing the same in Java right now and want to move this logic into MongoDB to optimise paging.
You can do it with the following simple $group aggregation:
db.table.aggregate(
[
{
$group: {
_id : "$group_id",
items : { "$push" : "$$ROOT" }
}
}
]
);
When you want to write the output of the aggregation into a new collection, use the $out operator.
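For comparison, the in-memory grouping the question currently does in Java boils down to Collectors.groupingBy; this sketch mirrors the $group stage above on plain maps (with spring-data-mongodb the pipeline itself would be built roughly as newAggregation(group("group_id").push(Aggregation.ROOT).as("items")), untested here):

```java
import java.util.*;
import java.util.stream.*;

public class GroupItems {

    // In-memory version of the $group stage: bucket items by their group_id,
    // pushing each whole item (the $$ROOT equivalent) into its group's list.
    static Map<Object, List<Map<String, Object>>> byGroupId(List<Map<String, Object>> items) {
        return items.stream()
                .collect(Collectors.groupingBy(i -> i.get("group_id"),
                        LinkedHashMap::new, Collectors.toList()));
    }

    // Helper to build one item document.
    static Map<String, Object> item(int id, int groupId, String name) {
        Map<String, Object> m = new LinkedHashMap<>();
        m.put("id", id);
        m.put("group_id", groupId);
        m.put("name", name);
        return m;
    }

    public static void main(String[] args) {
        List<Map<String, Object>> items = Arrays.asList(
                item(1, 10, "first"), item(2, 10, "second"), item(3, 20, "third"));
        System.out.println(byGroupId(items));
    }
}
```

Moving this into the aggregation pipeline lets MongoDB page over the groups server-side instead of materialising all of them in the application.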
I need to serve JSON from my backend to the user, but before sending it over the wire I need to remove some data because it is confidential: every element whose key starts with conf_.
Assume I have the following JSON source:
{
"store": {
"book": [
{
"category": "reference",
"conf_author": "Nigel Rees",
"title": "Sayings of the Century",
"conf_price": 8.95
},
{
"category": "fiction",
"conf_author": "Evelyn Waugh",
"title": "Sword of Honour",
"conf_price": 12.99
},
{
"category": "fiction",
"conf_author": "Herman Melville",
"title": "Moby Dick",
"isbn": "0-553-21311-3",
"conf_price": 8.99
},
{
"category": "fiction",
"conf_author": "J. R. R. Tolkien",
"title": "The Lord of the Rings",
"isbn": "0-395-19395-8",
"conf_price": 22.99
}
],
"bicycle": {
"color": "red",
"conf_price": 19.95
}
},
"expensive": 10
}
Since the structure of the source JSON may vary (it is not known in advance), I need a way to identify the elements to remove by a pattern on the key name (^conf_).
So the resulting JSON should be:
{
"store": {
"book": [
{
"category": "reference",
"title": "Sayings of the Century"
},
{
"category": "fiction",
"title": "Sword of Honour"
},
{
"category": "fiction",
"title": "Moby Dick",
"isbn": "0-553-21311-3"
},
{
"category": "fiction",
"title": "The Lord of the Rings",
"isbn": "0-395-19395-8"
}
],
"bicycle": {
"color": "red"
}
},
"expensive": 10
}
Since my source JSON will have 1M+ entries in the books array, where every entry has 100+ fields (child objects), I'm looking for a stream/event-based approach like StAX rather than parsing the whole JSON into a JSONObject, for performance and resource reasons.
I looked at Jolt, JSONPath and JsonSurfer, but these libraries haven't got me anywhere so far.
Can anyone provide some details on how my use case could be implemented best?
Regards!
You can use Jackson's Streaming API, which can parse huge JSON documents, up to gigabytes in size, without loading them completely into memory. It lets you pull out the data you want and ignore the rest.
Read more: http://wiki.fasterxml.com/JacksonStreamingApi
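A sketch of such a filter with Jackson's streaming API (assuming jackson-core is on the classpath): the parser and generator walk the token stream in lockstep, and every field whose name starts with conf_ is dropped together with its value, whether scalar or subtree:

```java
import com.fasterxml.jackson.core.*;

import java.io.*;

public class ConfFilter {

    // Stream the input token by token, skipping every field whose name
    // starts with "conf_" (and its value), copying everything else verbatim.
    static String filterConf(String json) throws IOException {
        JsonFactory factory = new JsonFactory();
        StringWriter out = new StringWriter();
        try (JsonParser p = factory.createParser(json);
             JsonGenerator g = factory.createGenerator(out)) {
            while (p.nextToken() != null) {
                if (p.getCurrentToken() == JsonToken.FIELD_NAME
                        && p.getCurrentName().startsWith("conf_")) {
                    p.nextToken();     // move onto the value...
                    p.skipChildren();  // ...and skip it (no-op for scalars)
                    continue;
                }
                g.copyCurrentEvent(p);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) throws IOException {
        String json = "{\"title\":\"Moby Dick\",\"conf_price\":8.99,"
                + "\"conf_meta\":{\"a\":1},\"isbn\":\"0-553-21311-3\"}";
        System.out.println(filterConf(json));
    }
}
```

For the real 1M-entry payload you would pass an InputStream and OutputStream to createParser/createGenerator instead of strings, so memory use stays constant regardless of input size.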