I have a problem with the inclusion and exclusion of nested fields in an aggregation. My collection contains nested objects, and when I build the query including only some fields of those nested objects, it works as a plain query but not as an aggregation.
This is a simplified preview of a document in my collection:
{
    "id": "1234",
    "name": "place name",
    "address": {
        "city": "city name",
        "gov": "gov name",
        "country": "country name",
        "location": [0.0, 0.0],
        // some other data
    },
    // some other data
}
When I configure my fields with a query it works:
query.fields().include("name").include("address.city").include("address.gov")
It also works when I run the aggregation in the shell:
db.getCollection("places").aggregate([
    { $project: {
        "name": 1,
        "address.city": 1,
        "address.gov": 1
    } },
])
but it doesn't work with an aggregation in Spring:
val aggregation = Aggregation.newAggregation(
    Aggregation.match(criteria),
    Aggregation.project().andInclude("name").andInclude("address.city").andInclude("address.gov")
)
This aggregation in Spring always returns the address field as null. If I include the "address" field itself, without its nested fields, the result contains the full address object instead of just the nested fields I want.
Can someone tell me how to fix that?
I found a solution: use the nested() function.
val aggregation = Aggregation.newAggregation(
    Aggregation.match(criteria),
    Aggregation.project().andInclude("name")
        .and("address").nested(Fields.fields("address.city", "address.gov"))
)
But that only works with hardcoded fields. So if you want a function that takes the list of fields to include or exclude, you can use this solution:
fun fieldsConfig(fields : List<String>) : ProjectionOperation
{
val mainFields = fields.filter { !it.contains(".") }
var projectOperation = Aggregation.project().andInclude(*mainFields.toTypedArray())
val nestedFields = fields.filter { it.contains(".") }.map { it.substringBefore(".") }.distinct()
nestedFields.forEach { mainField ->
val subFields = fields.filter { it.startsWith("${mainField}.") }
projectOperation = projectOperation.and(mainField).nested(Fields.fields(*subFields.toTypedArray()))
}
return projectOperation
}
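For example, for the document shown earlier, an illustrative call could look like this:
val projection = fieldsConfig(listOf("name", "address.city", "address.gov"))
val aggregation = Aggregation.newAggregation(Aggregation.match(criteria), projection)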
The problem with this solution is that it takes a lot of code, object allocations, and configuration just to include fields. It also works only for inclusion: if you use it to exclude fields, it throws an exception. And it does not handle the deepest fields of your document.
So I implemented a simpler and more elegant solution, which covers most cases of inclusion and exclusion of fields.
This builder class allows you to create an object containing the fields you want to include or exclude.
class DbFields private constructor(private val list : List<String>, val include : Boolean) : List<String>
{
override val size : Int get() = list.size
// other List functions overridden by delegating to list.
/**
* the builder of the fields.
*/
class Builder
{
private val list = ArrayList<String>()
private var include : Boolean = true
/**
* add a new field.
*/
fun withField(field : String) : Builder
{
list.add(field)
return this
}
/**
* add new fields.
*/
fun withFields(fields : Array<String>) : Builder
{
fields.forEach {
list.add(it)
}
return this
}
fun include() : Builder
{
include = true
return this
}
fun exclude() : Builder
{
include = false
return this
}
fun build() : DbFields
{
if (include && !list.contains("id"))
{
list.add("id")
}
else if (!include && list.contains("id"))
{
list.remove("id")
}
return DbFields(list.distinct(), include)
}
}
}
To build your fields configuration
val fields = DbFields.Builder()
.withField("fieldName")
.withField("fieldName")
.withField("fieldName")
.include()
.build()
You can pass this object to your repository to configure the inclusion or exclusion.
I also created this class, which configures the inclusion and exclusion as a raw document through a custom aggregation operation:
class CustomProjectionOperation(private val fields : DbFields) : ProjectionOperation()
{
override fun toDocument(context : AggregationOperationContext) : Document
{
val fieldsDocument = BasicDBObject()
fields.forEach {
fieldsDocument.append(it, fields.include)
}
val operation = Document()
operation.append("\$project", fieldsDocument)
return operation
}
}
Now you just have to use this class in your aggregation:
class RepoCustomImpl : RepoCustom
{
    @Autowired
    private lateinit var mongodb : MongoTemplate

    override fun getList(fields : DbFields) : List<Result>
    {
        val aggregation = Aggregation.newAggregation(
            CustomProjectionOperation(fields)
        )
        return mongodb.aggregate(aggregation, Result::class.java, Result::class.java).mappedResults
    }
}
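For example, to include only the name and the address city and gov (a sketch; repo stands for whatever bean exposes the RepoCustom interface):
val fields = DbFields.Builder()
    .withFields(arrayOf("name", "address.city", "address.gov"))
    .include()
    .build()
val results = repo.getList(fields)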
If an "action" key-value pair is repeated, I want to append each associated "myObject" to a list as shown below. Is there a way to achieve this using GSON or JACKSON? Unfortunately, there is no option to edit the input JSON. If the ask is not clear, please let me know.
Input
[
{
myObject: {
name: "foo",
description: "bar"
},
action: "create",
},
{
myObject: {
name: "baz",
description: "qux"
},
action: "create",
},
];
Required Output
{
"action": "create",
"myObject": [
{
name: "foo",
description: "bar"
},
{
name: "baz",
description: "qux"
},
]
};
I am new to JSON parsing in Java and unfortunately haven't found a use case like mine on StackOverflow. I have tried configuring my ObjectMapper like so -
new ObjectMapper().configure(DeserializationFeature.ACCEPT_SINGLE_VALUE_AS_ARRAY, true);
and using the @JsonAnySetter annotation, but haven't gotten them to work yet.
You could solve this with two separate model classes, one for the original structure and one for the transformed one. For simplicity I call them OriginalModel and TransformedModel below; you should probably pick more meaningful names. The following code uses Gson, but you can probably achieve something similar with Jackson as well.
class OriginalModel {
String action;
MyObjectData myObject;
}
class TransformedModel {
String action;
List<MyObjectData> myObject;
public TransformedModel(String action, List<MyObjectData> myObject) {
this.action = action;
this.myObject = myObject;
}
}
class MyObjectData {
String name;
String description;
}
If you declare these classes as nested classes you should make them static.
Then you can first parse the JSON data with the original model class, manually create the desired result structure using the transformed class and serialize that to JSON:
Gson gson = new Gson();
List<OriginalModel> originalData = gson.fromJson(json, new TypeToken<List<OriginalModel>>() {});
// Group MyObjectData objects by action name
// Uses LinkedHashMap to preserve order
Map<String, List<MyObjectData>> actionsMap = new LinkedHashMap<>();
for (OriginalModel model : originalData) {
actionsMap.computeIfAbsent(model.action, k -> new ArrayList<>())
.add(model.myObject);
}
List<TransformedModel> transformedData = new ArrayList<>();
for (Map.Entry<String, List<MyObjectData>> entry : actionsMap.entrySet()) {
transformedData.add(new TransformedModel(entry.getKey(), entry.getValue()));
}
String transformedJson = gson.toJson(transformedData);
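The same grouping should work with Jackson too. A rough sketch in Kotlin, assuming the model classes above expose their fields to Jackson (for example via public fields or getters):
val mapper = ObjectMapper()
val originalData: List<OriginalModel> =
    mapper.readValue(json, object : TypeReference<List<OriginalModel>>() {})

// Group MyObjectData objects by action name, preserving insertion order
val actionsMap = LinkedHashMap<String, MutableList<MyObjectData>>()
originalData.forEach { actionsMap.getOrPut(it.action) { mutableListOf() }.add(it.myObject) }

val transformedData = actionsMap.map { (action, objects) -> TransformedModel(action, objects) }
val transformedJson = mapper.writeValueAsString(transformedData)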
I can see heaps of these sorts of questions, but looking through them I'm struggling to find the answer, and I have already spent a couple of days on this issue. I could use some direction on deserializing a response I'm receiving, to pull the required fields into an iterable.
API: https://statsapi.web.nhl.com/api/v1/schedule?teamId=55&startDate=2022-10-01&endDate=2023-04-21
The problem for me is that there are multiple levels here, and I'm concerned the nested lists might be an issue. Trying to grab lower-level objects keeps returning null. This is the example JSON (the full output is at the URL above).
{
"copyright" : "",
...
"metaData" : {
"timeStamp" : "20220723_234058"
},
"wait" : 10,
"dates" : [ {
"date" : "2022-10-12",
...
"games" : [ {
"gamePk" : 2022020009,
...
"status" : {
"abstractGameState" : "Preview",
...
},
"teams" : {
"away" : {
"leagueRecord" : {
"wins" : 0,
...
},
"score" : 0,
"team" : {
"id" : 55,
"name" : "Seattle Kraken",
"link" : "/api/v1/teams/55"
}
},
"home" : {
"leagueRecord" : {
"wins" : 0,
...
},
"score" : 0,
"team" : {
"id" : 24,
"name" : "Anaheim Ducks",
"link" : "/api/v1/teams/24"
}
}
},
"venue" : {
"id" : 5046,
"name" : "Honda Center",
"link" : "/api/v1/venues/5046"
},
"content" : {
"link" : "/api/v1/game/2022020009/content"
}
} ],
"events" : [ ],
"matches" : [ ]
}, ...
I started by just trying to slice it up in my controller for testing, but beyond the 'games' level it starts returning null for everything. Dates were easy enough to get, but the actual team names just came back as null.
@GetMapping("/test")
@ResponseBody
public ArrayList<String> teamSchedule(@RequestParam int team) throws JsonProcessingException {
    String nhlScheduleAPI = "https://statsapi.web.nhl.com/api/v1/schedule?teamId=";
    String nhlScheduleRange = "&startDate=2022-10-01&endDate=2023-04-21";
    String teamScheduleURL = nhlScheduleAPI + team + nhlScheduleRange;

    RestTemplate restTemplate = new RestTemplate();
    JsonNode data = restTemplate.getForObject(teamScheduleURL, JsonNode.class);

    ArrayList<String> dates = new ArrayList<>();
    data.forEach(game -> {
        dates.add(data.get("dates").toString());
    });
    return dates;
}
I've started to create a POJO class but am a bit overwhelmed by the number of fields and sub-classes involved. I am attempting to rebuild a schedule app that I previously created in Python/Django, but I'm struggling to sanitize the data from the API. I only need three items for each of the 82 games:
[<date>, <home_team>, <away_team>]
Is there an easier way to do this? Really appreciate any guidance here.
If you inspect the JSON node structure carefully, you can access the dates like this:
JsonNode data = restTemplate.getForObject(teamScheduleURL, JsonNode.class);
data = data.get("dates");
ArrayList<String> dates = new ArrayList<>();
data.forEach(d -> {
dates.add(d.get("date").toString());
});
return dates;
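Going one level further with the same JsonNode approach, the [<date>, <home_team>, <away_team>] triples the question asks for can be collected roughly like this (a Kotlin sketch; field names taken from the sample payload, restTemplate and teamScheduleURL as in the question):
val root = restTemplate.getForObject(teamScheduleURL, JsonNode::class.java)
val rows = ArrayList<List<String>>()
root.get("dates").forEach { dateNode ->
    val date = dateNode.get("date").asText()
    dateNode.get("games").forEach { game ->
        val teams = game.get("teams")
        rows.add(listOf(
            date,
            teams.get("home").get("team").get("name").asText(),
            teams.get("away").get("team").get("name").asText()))
    }
}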
For the sake of others who may be learning JSON with Java/Spring, or more specifically parsing the NHL API, I am adding my solution below. It likely isn't the best way to get a reduced list of games, but it works. The problem I was having throughout was not having a good understanding of how Java classes map to nested JSON objects.
SchedulePOjO
@JsonIgnoreProperties
public class SchedulePOjO {
    private ArrayList<DatesPOjO> dates;
    // Getters and Setters
}
DatesPOjO
@JsonIgnoreProperties
public class DatesPOjO {
    private ArrayList<GamesPOjO> games;

    public ArrayList<GamesPOjO> getGames() {
        return games;
    }
    // Getters and Setters
}
GamesPOjO
@JsonIgnoreProperties
public class GamesPOjO {
    private String gameDate;
    private TeamsPOjO teams;
    // Getters and Setters
}
TeamsPOjO
@JsonIgnoreProperties
public class TeamsPOjO {
    private AwayPOjO away;
    private HomePOjO home;
    // Getters and Setters
}
AwayPOjO
@JsonIgnoreProperties
public class AwayPOjO {
    private TeamPOjO team;
    // Getters and Setters
}
TeamPOjO
@JsonIgnoreProperties
public class TeamPOjO {
    private int id;
    private String name;
    private String link;
    // Getters and Setters
}
ScheduleController
@GetMapping("/test")
@ResponseBody
public SchedulePOjO teamSchedule(@RequestParam int team) throws JsonProcessingException {
    // construct url
    String nhlScheduleAPI = "https://statsapi.web.nhl.com/api/v1/schedule?teamId=";
    String nhlScheduleRange = "&startDate=2022-10-01&endDate=2023-04-21";
    String teamScheduleURL = nhlScheduleAPI + team + nhlScheduleRange;
    // collect data
    RestTemplate restTemplate = new RestTemplate();
    SchedulePOjO schedulePOjO = restTemplate.getForObject(teamScheduleURL, SchedulePOjO.class);
    return schedulePOjO;
}
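With those classes in place, flattening the schedule into the [<date>, <home_team>, <away_team>] triples could look roughly like this (a Kotlin sketch; it assumes the getters shown as comments above, and a HomePOjO class mirroring AwayPOjO):
val rows = schedulePOjO.dates.flatMap { day ->
    day.games.map { game ->
        listOf(game.gameDate, game.teams.home.team.name, game.teams.away.team.name)
    }
}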
I would like to map a nested object that needs a value from the parent object. I could use the solution mentioned in "mapstruct - Propagate parent field value to collection of nested objects": either set the value on the child object directly after mapping, or use a context. But in my case I work with immutable objects.
example:
data class Worker(
val name: String,
val businessCard: BusinessCard? = null,
)
data class BusinessCard(
val companyName: String,
)
data class WorkerDto(
val name: String,
val businessCard: BusinessCardDto? = null,
)
data class BusinessCardDto(
val text: String, // "worker name | company name"
)
Is there a way to map the value directly, without @AfterMapping modifications?
Something like this?
@Mapper(config = CustomMappingConfig::class, uses = [ComputerMapper::class])
abstract class WorkerMapper {
@Mapping(target = "businessCard.text", expression = "java(mapBcText(worker))")
abstract fun mapWorker(worker: Worker): WorkerDto
protected fun mapBcText(worker: Worker) = "${worker.name} | ${worker.businessCard?.companyName}"
}
But sadly the code above generates:
@Override
public WorkerDto mapWorker(Worker worker) {
if ( worker == null ) {
return null;
}
String name = null;
BusinessCardDto businessCard = null;
name = worker.getName();
businessCard = businessCardToBusinessCardDto( worker.getBusinessCard() );
WorkerDto workerDto = new WorkerDto( name, businessCard );
return workerDto;
}
protected BusinessCardDto businessCardToBusinessCardDto(BusinessCard businessCard) {
if ( businessCard == null ) {
return null;
}
BusinessCardDto businessCardDto = new BusinessCardDto();
businessCardDto.setText( mapBcText(worker) ); // WORKER IS NOT ACCESSIBLE HERE
return businessCardDto;
}
Does anybody have an idea how to achieve this mapping?
...I also tried to create a custom BusinessCard mapper, but then I cannot access the parent data (Worker) in it...
You need to use var instead of val in your data class.
MapStruct doesn't seem to handle immutable Kotlin classes at the moment.
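A minimal sketch of what that suggestion means for the DTOs in the question: mutable properties plus default values, so MapStruct can use the no-arg constructor and setters. Whether the nested expression mapping then resolves as hoped is untested.
data class WorkerDto(
    var name: String = "",
    var businessCard: BusinessCardDto? = null,
)

data class BusinessCardDto(
    var text: String = "", // "worker name | company name"
)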
I have documents with dynamic fields, and I need to find the count of records matching a given complex query criteria.
Example Entity
@Document(collection = "UserAttributes")
public class UserAttributesEntity {

    @Id
    @Getter
    private String id;

    @NotNull
    @Size(min = 1)
    @Getter @Setter
    private String userId;

    @NotNull
    @Getter @Setter
    private Map<String, Object> attributes = new HashMap<>();
}
Example Data
{
"_id" : ObjectId("6164542362affb14f3f2fef6"),
"userId" : "89ee6942-289a-48c9-b0bb-210ea7c06a88",
"attributes" : {
"age" : 61,
"name" : "Name1"
}
},
{
"_id" : ObjectId("6164548045cc4456792d5325"),
"userId" : "538abb29-c09d-422e-97c1-df702dfb5930",
"attributes" : {
"age" : 40,
"name" : "Name2",
"location" : "IN"
}
}
Expected Query Expression
"((attributes.name == 'Name1' && attributes.age > 40) OR (attributes.location == 'IN'))
The MongoDB aggregation query for the $match stage is shown below; however, the same is not available through the Spring MongoDB API:
{
$expr:
{
"$and": [{
"$gt": ["$attributes.age", 40]
}, {
"$eq": ["$attributes.name", "Name2"]
}]
}
}
Am I missing anything here?
Library used: org.springframework.data:spring-data-mongodb:3.1.1
You can implement your own AggregationOperation to deal with your varying conditions. I haven't tried this code myself, but it should be something like this:
AggregationOperation myMatch (List<Document> conditions) {
return new AggregationOperation() {
@Override
public String getOperator() {
return "$match";
}
@Override
public Document toDocument(AggregationOperationContext context) {
return new Document("$match",
new Document("$expr",
new Document("$and", conditions)
)
);
}
};
}
and call it like this (to match the query in your question):
void callMyMatch() {
myMatch(List.of(
new Document("$gt", List.of("$attributes.age", 40)),
new Document("$eq", List.of("$attributes.name", "Name2"))
));
}
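The operation returned by myMatch still has to be handed to an aggregation; for example (a Kotlin sketch, assuming a MongoTemplate named mongoTemplate and the UserAttributes collection from the question):
val aggregation = Aggregation.newAggregation(
    myMatch(listOf(
        Document("\$gt", listOf("\$attributes.age", 40)),
        Document("\$eq", listOf("\$attributes.name", "Name2")))))
// count the matching records by running the pipeline and sizing the result
val count = mongoTemplate.aggregate(aggregation, "UserAttributes", Document::class.java)
    .mappedResults.size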
The $project stage allows us to use a query expression, so I converted my approach to the following to achieve the result:
private Mono<Long> aggregate() {
final Aggregation aggregation = Aggregation
.newAggregation(
Aggregation.project("userAttributes.playerLevel", "userAttributes.name")
.andExpression("((userAttributes.name == 'Name1' && userAttributes.age > 40) OR (userAttributes.location == 'IN'))")
.as("result"),
Aggregation.match(Criteria.where("result").is(true)),
Aggregation.group().count().as("count"));
return mongoTemplate.aggregate(aggregation, mongoTemplate.getCollectionName(UserAttributesEntity.class), Map.class)
.map(result -> Long.valueOf(result.get("count").toString()))
.next();
}
Support for the $expr operator in the spring-data-mongodb library is still missing. However, there is a workaround using MongoTemplate to solve this problem.
Aggregation.match() provides an overload that accepts an AggregationExpression as a parameter. It can be used to build the $match pipeline stage with the $expr operator.
Example usage of AggregationExpression for the $match stage:
Aggregation aggregationQuery = Aggregation.newAggregation(Aggregation.match(AggregationExpression.from(MongoExpression.create("'$expr': { '$gte': [ '$foo', '$bar'] }"))));
mongoTemplate.aggregate(aggregationQuery, Entity.class);
The above code is equivalent to this query:
db.collection.aggregate([{"$match": {"$expr": {"$gte": ["$foo", "$bar"]}}}])
The code for the question would be something like this:
Aggregation aggregationQuery = Aggregation.newAggregation(Aggregation.match(AggregationExpression.from(MongoExpression.create("'$expr': { '$and': [{ '$gt': ['$attributes.age', 40] }, { '$eq': ['$attributes.name', 'Name2'] }] }"))));
mongoTemplate.aggregate(aggregationQuery, Entity.class);
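The question's full expression also includes an $or branch; following the same pattern, it could be written like this (a Kotlin sketch of the same MongoExpression approach, untested):
val expr = MongoExpression.create(
    "'\$expr': { '\$or': [ " +
        "{ '\$and': [ { '\$eq': ['\$attributes.name', 'Name1'] }, { '\$gt': ['\$attributes.age', 40] } ] }, " +
        "{ '\$eq': ['\$attributes.location', 'IN'] } ] }")
val aggregationQuery = Aggregation.newAggregation(Aggregation.match(AggregationExpression.from(expr)))
mongoTemplate.aggregate(aggregationQuery, UserAttributesEntity::class.java, Document::class.java)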
I've got a tricky issue concerning collection filtering in Kotlin.
I have a base class that manages a list of items, and I want to be able to filter the list with a keyword, so I extended the class with the Filterable methods.
What I want to do is to be able to extend multiple classes with this 'base class' so the filter mechanism is the same for all of them.
These classes don't have the same properties: in one, filtering depends on whether the keyword is found in the 'name' property, while in another class the filtering is done on the 'comment' property.
Here is some code:
data class ProductInfo(private var _name: String) {
    var name: String
        get() = _name
        set(value) { _name = value }
}
abstract class BaseFirestoreAdapter<T : BaseFirestoreAdapter.DataInterface, VH : RecyclerView.ViewHolder> : RecyclerView.Adapter<VH>(), Filterable
{
var sourceList: MutableList<ProductInfo> = ArrayList()
...
override fun performFiltering(keyword: CharSequence): FilterResults {
val keywordRegex = keyword.toString().toRegex(setOf(RegexOption.IGNORE_CASE, RegexOption.LITERAL))
filteredList = sourceList.filter {
keywordRegex.containsMatchIn(Normalizer.normalize(it.name, Normalizer.Form.NFD).replace("[^\\p{ASCII}]".toRegex(RegexOption.IGNORE_CASE), ""))
}
results.values = filteredList.sortedWith(orderComparator)
results.count = filteredList.size
}
...
}
I developed the 'base class' so it works with the first class mentioned above (filtering is done on 'it.name') and it works, but now that I'm trying to make it generic (T) to use it with the second class (comments), I can't find a way to do it.
I thought I could pass a class-specific predicate defining how to match the items during filtering, but since the keyword is only known inside the performFiltering method, I can't properly create the predicate outside of that method.
I'm kinda out of ideas now! lol
Any of you have an idea?
UPDATE: Following @Tenfour04's suggestion, I tried adapting it to my code, which passes the filtering predicates via a method instead of the constructor, but it does not compile unless I replace "ActivyInfo::comments" with something like "ActivyInfo::comments.name", and then the value I get for "searchedProperty(it)" while debugging is "name", which is not the comment value.
Here is the code:
CommentAdapter:
override fun getFilter(): Filter {
super.setFilter(
{ it.state != ProductState.HIDDEN },
{ ActivyInfo::comments },
compareBy<ProductInfo> { it.state }.thenBy(String.CASE_INSENSITIVE_ORDER) { it.name })
return super.getFilter()
}
BaseAdapter:
lateinit var defaultFilterPredicate : (T) -> Boolean
lateinit var searchedProperty : (T) -> CharSequence
lateinit var orderComparator : Comparator<T>
fun setFilter(defaultPredicate: (T) -> Boolean, property: (T) -> CharSequence, comparator: Comparator<T> ) {
defaultFilterPredicate = defaultPredicate
searchedProperty = property
orderComparator = comparator
}
override fun performFiltering(constraint: CharSequence): FilterResults {
...
filteredList = sourceList.filter {
constraintRegex.containsMatchIn(Normalizer.normalize(searchedProperty(it), Normalizer.Form.NFD).replace("[^\\p{ASCII}]".toRegex(RegexOption.IGNORE_CASE), ""))
}
...
}
You can pass into the constructor a parameter that specifies the property as a function.
abstract class BaseFirestoreAdapter<T : BaseFirestoreAdapter.DataInterface, VH : RecyclerView.ViewHolder>(val filteredProperty: (T) -> CharSequence) : RecyclerView.Adapter<VH>(), Filterable
{
var sourceList: MutableList<T> = ArrayList()
// ...
override fun performFiltering(keyword: CharSequence): FilterResults {
val keywordRegex = keyword.toString().toRegex(setOf(RegexOption.IGNORE_CASE, RegexOption.LITERAL))
filteredList = sourceList.filter {
keywordRegex.containsMatchIn(Normalizer.normalize(filteredProperty(it), Normalizer.Form.NFD).replace("[^\\p{ASCII}]".toRegex(RegexOption.IGNORE_CASE), ""))
}
results.values = filteredList.sortedWith(orderComparator)
results.count = filteredList.size
}
...
}
The changes I made to yours were adding the constructor parameter filteredProperty, changing the sourceList element type to T, and replacing it.name with filteredProperty(it).
So subclasses will have to call this super-constructor, passing the property in like this:
data class SomeData(val comments: String)

// SomeViewHolder stands for whatever RecyclerView.ViewHolder this adapter uses
class SomeDataAdapter : BaseFirestoreAdapter<SomeData, SomeViewHolder>(SomeData::comments) {
    //...
}
Or if you want to keep it generic:
class SomeDataAdapter(filteredProperty: (SomeData) -> CharSequence) : BaseFirestoreAdapter<SomeData, SomeViewHolder>(filteredProperty) //...
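Applied to the comments case from the update, the key point is to hand the property reference itself to the base class rather than a lambda that returns the reference. A rough sketch (CommentViewHolder is just a placeholder for whatever view holder the adapter uses):
class CommentAdapter : BaseFirestoreAdapter<ActivyInfo, CommentViewHolder>(ActivyInfo::comments) {
    // ActivyInfo::comments is already an (ActivyInfo) -> String,
    // so it satisfies the (T) -> CharSequence constructor parameter without wrapping it in { }
}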