I have a JSON output like this:
{
  "items": [
    {
      "id": "1",
      "name": "Anna",
      "values": [
        { "code": "Latin", "grade": 1 },
        { "code": "Maths", "grade": 5 }
      ]
    },
    {
      "id": "2",
      "name": "Mark",
      "values": [
        { "code": "Latin", "grade": 5 },
        { "code": "Maths", "grade": 5 }
      ]
    }
  ]
}
I need to get the "values" field for the item with "name": "Anna". I am getting a RestAssured Response and would like to use my beans to do that, but I could also use jsonPath() or jsonObject(); I just don't know how. I have searched many topics but did not find anything.
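For what it's worth, here is a minimal sketch of one way to do this straight from the Response with jsonPath() and a GPath filter, without going through beans (the response variable is an assumption standing in for whatever your request returns):

import io.restassured.response.Response;

import java.util.List;
import java.util.Map;

// Find the item whose name is "Anna" and pull out its "values" array.
// The result is a list of maps, e.g. [{code=Latin, grade=1}, {code=Maths, grade=5}].
List<Map<String, Object>> annaValues =
        response.jsonPath()
                .getList("items.find { it.name == 'Anna' }.values");

If you would rather keep your beans, the same jsonPath() can deserialize the whole body with getObject(...) into a wrapper class and you can filter the resulting list in plain Java.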
I need to create lists of IDs for all objects that have identical parameters (the same values and the same number of them). I am looking for a solution that is more efficient than two nested loops and an if.
Object structure in the dataset:
case class MergedProduct(id: String, products: List[Product])
case class Product(productUrl: String, productId: String)
Example of data in dataset:
[
  {
    "id": "ID1",
    "products": [
      { "productUrl": "SOMEURL", "productId": "1" },
      { "productUrl": "SOMEOTHERURL", "productId": "1" }
    ]
  },
  {
    "id": "ID2",
    "products": [
      { "productUrl": "SOMEURL", "productId": "1" },
      { "productUrl": "SOMEOTHERURL", "productId": "1" }
    ]
  },
  {
    "id": "ID3",
    "products": [
      { "productUrl": "DIFFERENTURL", "productId": "1" },
      { "productUrl": "SOMEOTHERURL", "productId": "1" }
    ]
  },
  {
    "id": "ID4",
    "products": [
      { "productUrl": "SOMEOTHERURL", "productId": "1" },
      { "productUrl": "DIFFERENTURL", "productId": "1" }
    ]
  },
  {
    "id": "ID5",
    "products": [
      { "productUrl": "NOTDUPLICATEDURL", "productId": "1" },
      { "productUrl": "DIFFERENTURL", "productId": "1" }
    ]
  }
]
In this example, four objects are duplicates (two pairs), so I would like to get their IDs in corresponding lists.
Example output is List[List[String]]:
List(List("ID1", "ID2"), List("ID3","ID4"))
I am looking for something efficient and readable - the dataset we are talking about has nearly 700 million objects.
Since the only goal is to log the duplicates, I can remove them from the dataset (it does not affect the database). I was thinking of taking MergedProducts one by one, searching for other MergedProducts with identical products, getting their IDs, logging them if any exist, then removing those MergedProduct IDs from the dataset and moving on to the next one until the whole dataset is checked. But in that case I would have to collect it first as a List[MergedProduct] and then do all the operations, which seems like going the long way around.
After trying some options and looking for a neat solution, I think this is roughly OK:
private def getDuplicates(mergedProducts: List[MergedProduct]): List[List[String]] = {
  val duplicates = mergedProducts
    .groupBy(_.products.sortBy(p => (p.productUrl, p.productId))) // sort so the order of products does not matter
    .filter(_._2.size > 1)
    .values.toList
  duplicates.map(_.map(_.id))
}
I have this JSON schema. I want to read it and collect all "name" key values into an ArrayList of Strings until it reaches a key called "entity". When it reaches an "entity", I want to read the entity value and store it in a String. After that, the loop should continue through the rest of the schema, reading all the "name" values into a new ArrayList until it meets another "entity", and so on.
I've only included part of the schema; it's a valid schema.
{
  "entity": "Data",
  "xement": [
    {
      "code": "MA",
      "entity": "MH",
      "attr": [
        { "name": "Id", "position": 1 },
        { "name": "mdc", "position": 4 },
        { "name": "minr", "position": 10 },
        { "name": "mx", "position": 11 }
      ],
      "maltr": [
        { "name": "sub", "position": 13 }
      ]
    },
    {
      "code": "war",
      "entity": "subl",
      "attr": [
        { "name": "se", "position": 1 },
        { "name": "go", "position": 2 },
        { "name": "re", "position": 4 },
        { "name": "pr", "position": 10 },
        { "name": "mxp", "position": 11 },
        { "name": "rpl", "position": 45 },
        { "name": "rtr", "position": 47 },
        { "name": "net", "position": 55 }
      ],
      "groups": [
        {
          "entity": "ro",
          "grattr": [
            { "name": "rmn", "position": 5 },
            { "name": "aib", "position": 6 },
            { "name": "nxr", "position": 7 },
            { "name": "xer", "position": 8 },
            { "name": "rog", "position": 9 },
            { "name": "ccc", "position": 16 }
          ]
        }
      ]
    }
  ]
}
What is the best way to do that?
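Since the schema can be large, one option is Jackson's streaming JsonParser instead of reading the file line by line as text. Below is a rough sketch under that assumption (class and variable names are illustrative, not from the original post): every "name" value is collected into a list attached to the most recently seen "entity" value.

import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;

import java.io.File;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class SchemaScanner {

    // Returns a map from each entity value to the "name" values that follow it.
    public static Map<String, List<String>> scan(File schemaFile) throws Exception {
        Map<String, List<String>> namesByEntity = new LinkedHashMap<>();
        List<String> currentNames = new ArrayList<>();
        String currentEntity = null;

        try (JsonParser parser = new JsonFactory().createParser(schemaFile)) {
            while (parser.nextToken() != null) {
                if (parser.getCurrentToken() != JsonToken.FIELD_NAME) {
                    continue;
                }
                String field = parser.getCurrentName();
                if ("entity".equals(field)) {
                    // A new entity starts: store the names gathered for the previous one.
                    if (currentEntity != null) {
                        namesByEntity.put(currentEntity, currentNames);
                    }
                    parser.nextToken();            // move to the entity value
                    currentEntity = parser.getText();
                    currentNames = new ArrayList<>();
                } else if ("name".equals(field)) {
                    parser.nextToken();            // move to the name value
                    currentNames.add(parser.getText());
                }
            }
        }
        if (currentEntity != null) {
            namesByEntity.put(currentEntity, currentNames);
        }
        return namesByEntity;
    }
}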
I have been working with Elasticsearch recently, and I have run into a problem that I don't know how to solve.
I have JSON like this:
{
  "objects": [
    "object1": {
      "id": "12345",
      "name": "abc"
    },
    "12345"
  ]
}
object2 is a reference to object1. When I try to save (i.e. index) this into Elasticsearch, it says:
"org.elasticsearch.index.mapper.MapperParsingException: failed to parse"
After googling, I found that this happens because object1 is an object while object2 is treated as a string.
We cannot change the JSON in our project, so in this case how can I save it in Elasticsearch?
Thanks for any help and suggestions.
How exactly are you indexing it? I ran this command and it works:
PUT test/t1/1
{
  "objects": {
    "object1": {
      "id": "12345",
      "name": "abc"
    },
    "object2": "12345"
  }
}
and the result is:
{
  "_index": "test",
  "_type": "t1",
  "_id": "1",
  "_version": 1,
  "result": "created",
  "_shards": {
    "total": 2,
    "successful": 2,
    "failed": 0
  },
  "created": true
}
UPDATE 1
Depending on your requirements one of these may solve your problem:
PUT test/t1/2
{
  "objects": [
    {
      "object1": {
        "id": "12345",
        "name": "abc"
      }
    },
    {
      "object2": "12345"
    }
  ]
}
PUT test/t1/2
{
  "objects": [
    {
      "object1": {
        "id": "12345",
        "name": "abc"
      },
      "object2": "12345"
    },
    {
      ...
    }
  ]
}
I have the following JSON:
{
  "items": [
    {
      "id": "1",
      "name": "John",
      "location": {
        "town": { "id": "10" },
        "address": "600 Fake Street"
      },
      "creation_date": "2010-01-19",
      "last_modified_date": "2017-05-18"
    },
    {
      "id": "2",
      "name": "Sarah",
      "location": {
        "town": { "id": "10" },
        "address": "76 Evergreen Street"
      },
      "creation_date": "2010-01-19",
      "last_modified_date": "2017-05-18"
    },
    {
      "id": "3",
      "name": "Hamed",
      "location": {
        "town": { "id": "20" },
        "address": "50 East A Street"
      },
      "creation_date": "2010-01-19",
      "last_modified_date": "2017-05-18"
    }
  ]
}
And I need to count how many times each town id appears, getting something like this:
[ { "10": 2 }, { "20": 1 } ]
I'm trying to find the most efficient way to do this. Any ideas?
The most efficient way is to load the String into a StringBuilder and remove all line breaks and whitespace. Then search for the index of the "town":{"id":" string (the start of a town id) and then for the end index (the closing "} sequence). Using the two indexes you can extract the town ids and count them.
No need to deserialize the JSON into POJOs :) and extract the values from them.
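As a rough sketch of that approach (assuming the ids are plain strings and the input really is compacted first), something like this would do the counting:

import java.util.HashMap;
import java.util.Map;

public class TownCounter {

    public static Map<String, Integer> countTownIds(String json) {
        String compact = json.replaceAll("\\s", "");       // strip line breaks and spaces
        Map<String, Integer> counts = new HashMap<>();
        String marker = "\"town\":{\"id\":\"";
        int from = 0;
        while ((from = compact.indexOf(marker, from)) != -1) {
            int start = from + marker.length();
            int end = compact.indexOf("\"", start);         // closing quote of the id value
            counts.merge(compact.substring(start, end), 1, Integer::sum);
            from = end;
        }
        return counts;
    }
}

Bear in mind this relies on the exact key order and spelling; if the structure can vary at all, a real JSON parser (Jackson, Gson, or a JsonPath library) is the safer route.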
Can anyone help me with the following aggregate operation in MongoDB: given a collection of items with ids and group ids, group them by group id. For example, for this collection of items:
{
  "id": 1,
  "group_id": 10,
  "data": "some_data",
  "name": "first"
},
{
  "id": 2,
  "group_id": 10,
  "data": "some_data",
  "name": "second"
},
{
  "id": 3,
  "group_id": 20,
  "data": "some_data",
  "name": "third"
}
create a new collection of groups with the following structure:
{
  "id": 10,
  "items": [
    {
      "id": 1,
      "group_id": 10,
      "data": "some_data",
      "name": "first"
    },
    {
      "id": 2,
      "group_id": 10,
      "data": "some_data",
      "name": "second"
    }
  ]
},
{
  "id": 20,
  "items": [
    {
      "id": 3,
      "group_id": 20,
      "data": "some_data",
      "name": "third"
    }
  ]
}
A corresponding snippet using Java and spring-data-mongodb would also be appreciated.
In fact, I'm doing the same thing in Java right now and want to move this logic into Mongo to optimise paging.
You can do it with the following simple $group aggregation:
db.table.aggregate([
  {
    $group: {
      _id: "$group_id",
      items: { "$push": "$$ROOT" }
    }
  }
]);
When you want to write the output of the aggregation into a new collection, use the $out operator.
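Since the question also asked for a Java / spring-data-mongodb version, here is a hedged sketch of the same pipeline; the "items" source collection, the "groups" output collection and the injected mongoTemplate are assumptions, so adjust the names to your project:

import static org.springframework.data.mongodb.core.aggregation.Aggregation.*;

import org.bson.Document;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.aggregation.Aggregation;
import org.springframework.data.mongodb.core.aggregation.AggregationResults;

// $group by group_id, pushing the whole document into "items",
// then $out the grouped documents into the "groups" collection.
Aggregation aggregation = newAggregation(
        group("group_id").push(Aggregation.ROOT).as("items"),
        out("groups")
);
AggregationResults<Document> results =
        mongoTemplate.aggregate(aggregation, "items", Document.class);

With the $out stage in place the grouped documents land in "groups", which then lets you page over them with ordinary queries instead of re-running the aggregation.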