I have a list of FacetField objects built in a loop. The loop adds every facet fetched from the Solr facet results, which leaves duplicate entries in the final list 'allFacetFields':
[
    movie-1: [
        manufaturer(10),
        producers(5),
        actors(12)
    ],
    movie-2: [
        manufaturer(10),
        producers(5),
        actors(12)
    ],
    movie-3: [
        manufaturer(10),
        producers(5),
        actors(12)
    ],
    movie-1: [
        manufaturer(3),
        producers(2),
        actors(2)
    ],
    movie-2: [
        manufaturer(4),
        producers(7),
        actors(6)
    ]
]
The code below collects all facet fields returned by the Solr query for each core and adds them to allFacetFields via facetFieldIterator:
List<FacetField> allFacetFields = new ArrayList<FacetField>();
for (Map.Entry<String, Integer> entry : coresResult.entrySet()) {
    List<FacetField> coreFacets = respForCores.getFacetFields();
    Iterator<FacetField> facetFieldIterator = coreFacets.iterator();
    while (facetFieldIterator.hasNext()) {
        allFacetFields.add(facetFieldIterator.next());
    }
}
How can I detect a duplicate facet before adding it to the final list allFacetFields, and merge the results as follows:
[
    movie-1: [
        manufaturer(13),
        producers(7),
        actors(14)
    ],
    movie-2: [
        manufaturer(14),
        producers(12),
        actors(18)
    ],
    movie-3: [
        manufaturer(10),
        producers(5),
        actors(12)
    ]
]
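One way to do this is to accumulate counts keyed by facet-field name and value instead of validating each FacetField on insert. Here is a minimal sketch of that merging step, modeling each core's facet result as a map from field name to value counts (the class and method names are mine, not SolrJ's):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class FacetMerge {

    // Merges duplicate facet fields (e.g. two "movie-1" entries coming from
    // different cores) by summing the counts of matching values.
    // LinkedHashMap keeps the insertion order of fields and values.
    static Map<String, Map<String, Long>> merge(List<Map<String, Map<String, Long>>> coreResults) {
        Map<String, Map<String, Long>> merged = new LinkedHashMap<>();
        for (Map<String, Map<String, Long>> core : coreResults) {
            for (Map.Entry<String, Map<String, Long>> field : core.entrySet()) {
                Map<String, Long> counts =
                        merged.computeIfAbsent(field.getKey(), k -> new LinkedHashMap<>());
                field.getValue().forEach((value, count) -> counts.merge(value, count, Long::sum));
            }
        }
        return merged;
    }
}
```

With SolrJ you would fill the input maps from each FacetField's getName() and its Count entries' getName()/getCount(), then rebuild FacetField objects from the merged map, assuming those accessors exist in your SolrJ version.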
I have a list of Java POJOs coming from a third-party MySQL stored procedure. Each entry holds location information (region, city, building and floor) and the incident count for each location, grouped by floor. I need to aggregate the incident count for each region, city and building as well, and collect the children under each parent in a POJO as a location tree:
How can I do this with the Java Stream API? Thanks in advance!
input data :
output json :
[{
    "label": "EMEA",
    "type": "REGION",
    "incidentCount": 78,
    "children": [
        {
            "label": "PAIS",
            "type": "CITY",
            "incidentCount": 37,
            "children": [
                {
                    "label": "F1",
                    "type": "FLOOR",
                    "incidentCount": 18
                },
                {
                    "label": "F2",
                    "type": "FLOOR",
                    "incidentCount": 19
                },
                ...
            ]
        },
        ...
    ]
},
...
]
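Without the input rows it is hard to be exact, but the aggregation half can be sketched with nested groupingBy collectors. Everything below is an assumption for illustration: the Row shape, the city "MADRID", and the class name are mine, and a real solution would then map the nested maps into the POJO tree with label/type/children fields.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class LocationTree {

    // One row from the stored procedure; the building level is omitted here
    // to keep the sketch short.
    static final class Row {
        final String region;
        final String city;
        final String floor;
        final int incidentCount;

        Row(String region, String city, String floor, int incidentCount) {
            this.region = region;
            this.city = city;
            this.floor = floor;
            this.incidentCount = incidentCount;
        }
    }

    // region -> city -> summed incident count: the aggregation for the
    // "children" level of the tree.
    static Map<String, Map<String, Integer>> cityTotals(List<Row> rows) {
        return rows.stream().collect(Collectors.groupingBy(
                r -> r.region,
                Collectors.groupingBy(r -> r.city,
                        Collectors.summingInt(r -> r.incidentCount))));
    }

    // region -> summed incident count: the top level of the tree.
    static Map<String, Integer> regionTotals(List<Row> rows) {
        return rows.stream().collect(Collectors.groupingBy(
                r -> r.region,
                Collectors.summingInt(r -> r.incidentCount)));
    }
}
```

The same pattern extends to the building level by adding one more groupingBy between city and floor.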
I have the following documents in a collection named mail_test. Some of them have a tags field, which is an array:
/* 1 */
{
    "_id" : ObjectId("601a7c3a57c6eb4c1efb84ff"),
    "email" : "aaaa#bbb.com",
    "content" : "11111"
}
/* 2 */
{
    "_id" : ObjectId("601a7c5057c6eb4c1efb8590"),
    "email" : "aaaa#bbb.com",
    "content" : "22222"
}
/* 3 */
{
    "_id" : ObjectId("601a7c6d57c6eb4c1efb8675"),
    "email" : "aaaa#bbb.com",
    "content" : "33333",
    "tags" : [
        "x"
    ]
}
/* 4 */
{
    "_id" : ObjectId("601a7c8157c6eb4c1efb86f4"),
    "email" : "aaaa#bbb.com",
    "content" : "4444",
    "tags" : [
        "yyy",
        "zzz"
    ]
}
There are two documents with a non-empty tags field, so I want the result to be 2.
I use the following statement to aggregate, and it returns the correct tag_count:
db.getCollection('mail_test').aggregate([{ $group: {
    "_id": null,
    "all_count": { $sum: 1 },
    "tag_count": { "$sum": { $cond: [ { $ne: ["$tags", undefined] }, 1, 0 ] } }
    // if `undefined` is replaced with `null`, tag_count comes out as 4, which is not what I want
    // I also tried `$exists`, but it cannot be used here
}}])
and the result is:
{
    "_id" : null,
    "all_count" : 4.0,
    "tag_count" : 2.0
}
I use Spring Data Mongo in Java to do this:
private void test() {
    Aggregation agg = Aggregation.newAggregation(
            Aggregation.match(new Criteria()), // some condition here
            Aggregation.group(Fields.fields())
                    .sum(ConditionalOperators.when(Criteria.where("tags").ne(null))
                            .then(1).otherwise(0)).as("tag_count")
            // I need an `undefined` instead of `null`; is there any other solution?
    );
    AggregationResults<MailTestGroupResult> results =
            mongoTemplate.aggregate(agg, MailTest.class, MailTestGroupResult.class);
    List<MailTestGroupResult> mappedResults = results.getMappedResults();
    int tag_count = mappedResults.get(0).getTag_count();
    System.out.println(tag_count); // prints 4, wrong
}
I need an undefined instead of null, but I don't know how to express this. Or is there any other solution?
You can use aggregation operators to check whether the tags field exists, with one of the following constructs in the $group stage of your query (to calculate the tag_count value):
"tag_count": { "$sum": { $cond: [ { $gt: [ { $size: { $ifNull: ["$tags", []] } }, 0 ] }, 1, 0 ] } }
// - OR -
"tag_count": { "$sum": { $cond: [ { $eq: [ { $type: "$tags" }, "array" ] }, 1, 0 ] } }
Both constructs return the same result (as you posted). Note that they differ for a document whose tags is an empty array: the $size version would not count it, while the $type version would.
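To see why the $ifNull/$size variant counts 2, here is the same predicate sketched in plain Java over the four example documents modeled as maps. This is an illustration of the counting logic only, not the Spring Data code; the class and method names are mine.

```java
import java.util.List;
import java.util.Map;

public class TagCount {

    // Mirrors { $gt: [ { $size: { $ifNull: ["$tags", []] } }, 0 ] }:
    // a missing tags field falls back to an empty array, and only documents
    // whose tags array is non-empty are counted.
    static long tagCount(List<Map<String, Object>> docs) {
        return docs.stream()
                .filter(d -> !((List<?>) d.getOrDefault("tags", List.of())).isEmpty())
                .count();
    }
}
```

On the Spring Data side, if the fluent operator builders get awkward, the same $ifNull/$size expression can be supplied to the pipeline as a raw BSON Document stage.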
I want to delete data on a date basis (the date is present inside an array). This is what my Mongo document looks like:
{
    "_id" : ObjectId("5d3d94df83f68f8bf751f367"),
    "branchName" : "krishYogaCenter",
    "Places" : [
        "Pune",
        "Bangalore",
        "Hyderabad",
        "Delhi"
    ],
    "rulesForDateRanges" : [
        {
            "fromDate" : ISODate("2019-01-07T18:30:00.000Z"),
            "toDate" : ISODate("2019-03-06T18:30:00.000Z"),
            "place" : "Delhi",
            "ruleIds" : [
                5,
                6,
                7
            ]
        },
        {
            "fromDate" : ISODate("2019-03-07T18:30:00.000Z"),
            "toDate" : ISODate("2019-05-06T18:30:00.000Z"),
            "place" : "Hyderabad",
            "ruleIds" : [
                1,
                2
            ]
        }
    ],
    "updatedAt" : ISODate("2019-07-28T12:31:35.694Z"),
    "updatedBy" : "Theia"
}
Here, if "toDate" is less than today, I want to delete that object from the "rulesForDateRanges" array. I searched Google but did not find a way to do this in Morphia.
If the date were not nested inside the array objects, I could simply have deleted every document whose date is less than today. Here I want to remove the expired objects from the array, and only if "rulesForDateRanges" becomes empty should the whole document be deleted.
I am using Morphia. Please suggest a way to do this in Morphia, or the query to do it.
Searching Google suggested fetching the documents one by one with a query and running an UpdateOperation over each, but that means a separate update operation for every document.
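In plain MongoDB this is typically done in two passes: a $pull of the expired sub-documents, e.g. { $pull: { rulesForDateRanges: { toDate: { $lt: new Date() } } } } applied to all documents, followed by deleting documents whose array ended up empty, e.g. with the filter { rulesForDateRanges: { $size: 0 } }. The per-document decision those two steps encode can be sketched in plain Java, modeling each rule as a map with an Instant under "toDate" (class and method names are mine):

```java
import java.time.Instant;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class RulePruning {

    // Keeps only the rules whose toDate is today or later. If nothing
    // survives, the caller should delete the whole document instead of
    // updating the array.
    static List<Map<String, Object>> activeRules(List<Map<String, Object>> rules, Instant today) {
        return rules.stream()
                .filter(rule -> !((Instant) rule.get("toDate")).isBefore(today))
                .collect(Collectors.toList());
    }
}
```

In Morphia the $pull step would be expressed through its update-operations API against the rulesForDateRanges field; the exact calls depend on your Morphia version, so check its update documentation rather than taking a signature from here.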
I've got this JSON object:
{
    "type" : "employee",
    "columns" : [
        {
            "id" : 1,
            "human" : [
                {
                    "name" : "ANA",
                    "age" : "23"
                },
                {
                    "name" : "IULIA",
                    "age" : "22"
                }
            ]
        },
        {
            "id" : 2,
            "human" : [
                {
                    "name" : "ADI",
                    "age" : "21"
                },
                {
                    "name" : "GELU",
                    "age" : "18"
                }
            ]
        }
    ]
}
and I need to extract the first name from each human list.
I've tried .body("columns.human.name[0]", everyItem(containsString("A"))) but it doesn't work. Any ideas?
Using JsonPath you can get all columns and all humans.
Each JSON object is represented as a HashMap. If it contains only simple fields it is a HashMap<String, String>, but if it contains arrays or nested JSON objects it is a HashMap<String, Object>, where Object is either another JSON object or an array.
Given the above, you can use the following code to get all columns and the name of the first human in each column:
JsonPath path = response.jsonPath();
List<HashMap<String, Object>> columns = path.getList("columns");
for (HashMap<String, Object> singleColumn : columns) {
    List<HashMap<String, Object>> humans =
            (List<HashMap<String, Object>>) singleColumn.get("human");
    System.out.println(humans.get(0).get("name"));
}
The above code will print ANA and ADI in the console.
You can store the results in a List<String> for further processing.
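For example, the loop above can be condensed into a stream that collects the first names directly. FirstNames is a hypothetical helper class; the input is the same parsed columns structure:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class FirstNames {

    // Takes the parsed "columns" list and pulls the first human's name
    // from each column. The cast is unchecked because the parsed JSON is
    // untyped maps and lists.
    @SuppressWarnings("unchecked")
    static List<String> firstNames(List<Map<String, Object>> columns) {
        return columns.stream()
                .map(column -> (List<Map<String, Object>>) column.get("human"))
                .map(humans -> (String) humans.get(0).get("name"))
                .collect(Collectors.toList());
    }
}
```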
You can get every "name" from humans with the JsonPath $.columns[*].human[*].name; it gives the result below:
[
"ANA",
"IULIA",
"ADI",
"GELU"
]
And if you want only the first "name", use the JsonPath $.columns[*].human[0].name; this gives you the result below:
[
"ANA",
"ADI"
]
I am trying to figure out an efficient way to fetch all great-grandchildren of the root. I understand this is not the best way to store the information, but I have no control over the structure. Here is my JSON:
{
    "root": "some value",
    "other-attribute": "some value",
    "Level1": [
        {
            "attr": "value",
            "Level2": [
                {
                    "attr": "value",
                    "Level3": [
                        {
                            "attr": "value",
                            "Level4": [
                                { "attr": "value" }
                            ]
                        }
                    ]
                }
            ]
        }
    ]
}
How can I fetch a list of all "Level3" and "Level4" elements from the JSON? Do I have to traverse each level of the hierarchy, making my program's complexity O(n^3) for 3 levels and O(n^4) for 4 levels?
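You do not need one pass per level: if the JSON is parsed into nested maps and lists (e.g. by Jackson or Gson), a single depth-first walk visits each node exactly once, so collecting all "Level3" and "Level4" values is O(n) in the total number of nodes. A sketch, with class and method names of my own choosing:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class LevelCollector {

    // Depth-first walk over a parsed JSON tree (maps and lists), collecting
    // every value stored under any of the wanted keys. Each node is visited
    // once, so one call handles all levels in a single O(n) pass.
    static List<Object> collect(Object node, List<String> wantedKeys, List<Object> out) {
        if (node instanceof Map) {
            for (Map.Entry<?, ?> entry : ((Map<?, ?>) node).entrySet()) {
                if (wantedKeys.contains(entry.getKey())) {
                    out.add(entry.getValue());
                }
                collect(entry.getValue(), wantedKeys, out);
            }
        } else if (node instanceof List) {
            for (Object child : (List<?>) node) {
                collect(child, wantedKeys, out);
            }
        }
        return out;
    }
}
```

A usage call would look like LevelCollector.collect(parsedRoot, List.of("Level3", "Level4"), new ArrayList<>()), returning the array stored under each matching key.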