How to count distinct values of a reference collection in mongo - java

Having a list of books that points to a list of authors, I want to display a tree, having in each node the author name and the number of books he wrote. Initially, I have embedded the authors[] array directly into books collection, and this worked like a charm, using the magic of aggregation framework. However, later on, I realise that it would be nice to have some additional information attached to each author (e.g. it's picture, biographical data, birth date, etc). For the first solution, this is bad because:
it duplicates the data (not a big deal, and yes, I know that mongo's purpose is to encapsulate full objects, but let's ignore that for now);
whenever an additional property is created or updated, the old records won't benefit from the change, unless I specifically query for some unique old property and update all the book authors with the new/updated values.
The next thing was to use the second collection, called authors, and each books document is referencing a list of author ids, like this:
{
    "_id" : ObjectId("58ed2a254374473fced950c1"),
    "authors" : [
        "58ed2a254d74s73fced950c1",
        "58ed2a234374473fce3950c1"
    ],
    "title" : "Book title"
    ....
}
For getting the author details, I have two options:
make an additional query to get the data from the author collection;
use DBRefs.
Questions:
Does using DBRefs automatically load the author data into the book object, similar to what JPA's @ManyToOne does, for instance?
Is it possible to get the number of written books for each author, without having to query for each author's book count? When the authors were embedded, I was able to aggregate the distinct author names along with the number of book documents each one was present on. Is such a query possible between two collections?
What would be your recommendation for implementing this behaviour? (I am using Spring Data)

You can try the aggregation below in your Spring Data MongoDB application.
// $unwind the author id array (preserveNullAndEmptyArrays = true keeps books without authors)
UnwindOperation unwindAuthorIds = Aggregation.unwind("authorsIds", true);
// $lookup the referenced author documents from the authors collection into a "ref" field
LookupOperation lookupAuthor = Aggregation.lookup("authors_collection", "authorsIds", "_id", "ref");
UnwindOperation unwindRefs = Aggregation.unwind("ref", true);
// $group by the author name and count the books
GroupOperation groupByAuthor = Aggregation.group("ref.authorName").count().as("count");
Aggregation aggregation = Aggregation.newAggregation(unwindAuthorIds, lookupAuthor, unwindRefs, groupByAuthor);
List<BasicDBObject> results = mongoOperations.aggregate(aggregation, "book_collection", BasicDBObject.class).getMappedResults();

Following @Veeram's suggestion, I was able to write this query:
db.book_collection.aggregate([
    {
        $unwind: "$authorsIds"
    },
    {
        $lookup: {
            from: "authors_collection",
            localField: "authorsIds",
            foreignField: "_id",
            as: "ref"
        }
    },
    { $group: { _id: "$ref.authorName", count: { $sum: 1 } } }
])
which returns something like this:
/* 1 */
{
    "_id" : [
        "Paulo Coelho"
    ],
    "count" : 1
}

/* 2 */
{
    "_id" : [
        "Jules Verne"
    ],
    "count" : 2
}
This is exactly what I needed, and it sounds about right. I only need to do an additional query now to get the books with no author set.
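For that additional query, a minimal sketch in Spring Data (assuming the same mongoOperations bean and book_collection as above; the authorsIds field name is taken from the aggregation) could look like this:

import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;

// Count the books whose author id array is missing or empty.
Query noAuthors = new Query(new Criteria().orOperator(
        Criteria.where("authorsIds").exists(false),
        Criteria.where("authorsIds").size(0)));
long booksWithoutAuthor = mongoOperations.count(noAuthors, "book_collection");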

Related

optimize mongo query to get max date in a very short time

I'm using the query below to get the max date (a field named extractionDate) in a collection called KPI, since I'm only interested in the field extractionDate:
@Override
public Mono<DBObject> getLastExtractionDate(MatchOperation matchOperation, ProjectionOperation projectionOperation) {
    return Mono.from(mongoTemplate.aggregate(
            newAggregation(
                    matchOperation,
                    projectionOperation,
                    group().max(EXTRACTION_DATE).as("result"),
                    project().andExclude("_id")
            ),
            "kpi",
            DBObject.class
    ));
}
As you can see above, I first need to filter the result using the match operation (matchOperation); after that, I do a projection operation to extract only the max of the field extractionDate and rename it as result.
But this query takes a lot of time (sometimes more than 20 seconds) because I have a huge amount of data. I already added an index on the field extractionDate but did not gain much, so I'm looking for a way to make it as fast as possible.
Update:
Number of documents we have in the collection kpi: 42.8m documents
The query that is being executed:
Streaming aggregation: [{ "$match" : { "type" : { "$in" : ["INACTIVE_SITE", "DEVICE_NOT_BILLED", "NOT_REPLYING_POLLING", "MISSING_KEY_TECH_INFO", "MISSING_SITE", "ACTIVE_CIRCUITS_INACTIVE_RESOURCES", "INCONSISTENT_STATUS_VALUES"]}}}, { "$project" : { "extractionDate" : 1, "_id" : 0}}, { "$group" : { "_id" : null, "result" : { "$max" : "$extractionDate"}}}, { "$project" : { "_id" : 0}}] in collection kpi
explain plan:
Example of a document in the collection KPI:
And finally, the indexes that already exist on this collection:
Index tuning will depend more on the properties in the $match expression. You should be able to run the query in mongosh and get an explain plan to determine whether your query is scanning the collection.
Another thing to consider is the size of the collection versus the working set of the server.
Perhaps update your question with the $match expression, the explain plan, and the current set of index definitions, and we can refine the indexing strategy.
Finally, "huge" is rather subjective. Are you querying millions or billions of documents, and what is the average document size?
Update:
Given that you're filtering on only one field, and aggregating on one field, you'll find the best result will be an index
{ "type": 1, "extractionDate": 1 }
That index should cover your query. The $in means an index scan will be selected, but a scan over a small index is significantly better than a scan over the whole collection of documents.
NB. The existing index extractionDate_1_customer.irType_1 will not be any help for this query.
I was able to optimize the request, thanks to the previous answers, using this approach:
@Override
public Mono<DBObject> getLastExtractionDate(MatchOperation matchOperation, ProjectionOperation projectionOperation) {
    return Mono.from(mongoTemplate.aggregate(
            newAggregation(
                    matchOperation,
                    sort(Sort.Direction.DESC, EXTRACTION_DATE),
                    limit(1),
                    projectionOperation
            ),
            "kpi",
            DBObject.class
    ));
}
Also, I had to create a compound index on extractionDate and type (the field I had in matchOperation), like below:
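A sketch of how such a compound index could be created with Spring Data (assuming the same reactive mongoTemplate and the type/extractionDate field names used in the $match and sort above):

import org.springframework.data.domain.Sort;
import org.springframework.data.mongodb.core.index.Index;

// Creates { type: 1, extractionDate: -1 } on the kpi collection; with the reactive template,
// ensureIndex returns a Mono that must be subscribed to (or composed) for the index to be built.
mongoTemplate.indexOps("kpi")
        .ensureIndex(new Index().on("type", Sort.Direction.ASC).on("extractionDate", Sort.Direction.DESC))
        .subscribe();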

Spring Data MongoDB - projection and search

I am using a "wildcard text index" in order to search for a pattern in every field of my class. I am also using a projection in order to remove a certain field:
@Query(value = "{ $text: { $search: ?0 } }", fields = "{ 'notWantedField': 0 }")
However, I would like to prevent from matching something from the unwanted field.
In other words, I would like first to project (and remove fields), then search on the remaining fields.
Is there a way to combine projection and search while keeping the wildcard search?
Thanks a lot.
I am using spring-data-mongodb 1.10.8
A possible solution could be a $and operator combined with a $regex.
For example, following the MongoDB documentation https://docs.mongodb.com/manual/reference/operator/query/text, suppose you create a text index combining subject and author (db.articles.createIndex({"author": "text", "subject": "text"})); you can then exclude the author field with this query:
db.articles.find( {$and: [{ $text: { $search: "coffee" } }, {"author": {'$regex' : '^((?!coffee).)*$', '$options' : 'i'}}]}, {"author": 0})
In your case, considering that your index is a wildcard index, you must exclude, using the regex, all the fields that are also excluded in the projection.
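The same idea can be sketched with the plain Java driver, using $not around a regex instead of the negative-lookahead pattern (this assumes a MongoCollection<Document> named articles, as in the documentation example above):

import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Projections;
import org.bson.conversions.Bson;

// Text-search for "coffee", drop documents whose author field itself matches the term,
// and then project the author field away from the result.
Bson query = Filters.and(
        Filters.text("coffee"),
        Filters.not(Filters.regex("author", "coffee", "i")));
articles.find(query).projection(Projections.exclude("author"));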

mongodb java driver pullByFilter

I have document schema such as
{
    "_id" : 18,
    "name" : "Verdell Sowinski",
    "scores" : [
        {
            "type" : "exam",
            "score" : 62.12870233109035
        },
        {
            "type" : "quiz",
            "score" : 84.74586220889356
        },
        {
            "type" : "homework",
            "score" : 81.58947824932574
        },
        {
            "type" : "homework",
            "score" : 69.09840625499065
        }
    ]
}
I have a solution using pull that copes with removing a single element at a time, but I want a general solution that copes with an irregular schema where there could be anywhere between one and many matching elements in the array, and I would like to remove all such elements based on a condition.
I'm using MongoDB driver 3.2.2 and saw this pullByFilter, which sounded good:
Creates an update that removes from an array all elements that match the given filter.
I tried this
Bson filter = and(eq("type", "homework"), lt("score", highest));
Bson u = Updates.pullByFilter(filter);
UpdateResult ur = collection.updateOne(studentDoc, u);
Unsurprisingly, this did not have any effect, since I wasn't specifying the scores array.
I get an error
The positional operator did not find the match needed from the query. Unexpanded update: scores.$.type
when I change the filter to be
Bson filter = and(eq("scores.$.type", "homework"), lt("scores.$.score", highest));
Is there a one step solution to this problem?
There seems to be very little info I can find on this particular method. This question may relate to How to Update Multiple Array Elements in mongodb
After some more "thinking" (and a little trial and error), I found the correct Filters method to wrap my basic filter in. I think I was focusing too much on array operators.
I'll not post it here in case of flaming.
Clue: think "matches..." (as in regex pattern matching) when dealing with Filters helper methods ;)
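For reference, since the exact update is not posted above, here is a sketch that follows MongoDB's documented $pull form for arrays of embedded documents (field names taken from the schema above, highest as in the earlier snippet); it is one way to remove all matching elements in a single update, not necessarily the method hinted at:

import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Updates;
import org.bson.Document;
import org.bson.conversions.Bson;

// Builds { $pull: { scores: { type: "homework", score: { $lt: highest } } } },
// which removes every matching element from the scores array in one update.
Bson pullLowHomework = Updates.pullByFilter(
        new Document("scores",
                new Document("type", "homework")
                        .append("score", new Document("$lt", highest))));
collection.updateOne(Filters.eq("_id", 18), pullLowHomework);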

How to use distinct/aggregate to get all fields that match several queries

I just learned how to use distinct.
What I do is create a BasicDBObject, put what I want to match as the query parameter of distinct, and pass the field whose values I want returned as the field parameter.
Now I want to do something similar, but with several query conditions. That means I want the query to match several keys of the document (id and date have to be the same as the input I get), and return which sessions in the collection match that.
I tried doing something similar to find, but for distinct, where you add more fields to the query parameter with append() or put().
This syntax does not seem to work and I found no one using similar code, so I guess it's not possible.
I've found the aggregate() method, but it seems to be used to match several FIELDS, not queries. Explanation with code:
array.put(coll.distinct(field, query));
I want that query parameter to have several keys, so that all of them match my input, and I get the unique values of field for the documents that match both (or however many) keys in the query.
Thanks in advance!
Edit:
Basics: MongoDB 3.2.2
Data manipulation:
"Session" : "value1", "car" : "carNumber", "date" : "20130321"
I have a very large collection with a number of documents that have, among other keys, these ones. I want, given a car and a number, to get every UNIQUE session value and return it as JSON (for which, so far, I put the values into an array and transform it into JSON).
Driver/framework specific question: I do not know how to query this in the mongodb shell. I know how to use distinct, but not aggregations.
There are multiple parts in your question. I would like to answer the last part which is highlighted in bold. The solution is written in Java as the thread is tagged as Java.
The code below would give you the distinct Session values for a car and car number; you can change the filter accordingly for your requirement. It covers the basic distinct concept you need; I assume you can add code to turn the result set into JSON (you can use the Jackson or Gson libraries for generating JSON).
import com.mongodb.MongoClient;
import com.mongodb.client.MongoCursor;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.model.Filters;

public class MongoReadDistinct {

    public static void main(String[] args) {
        MongoClient client = new MongoClient();
        MongoDatabase database = client.getDatabase("cars");

        MongoCursor<String> mongoCursorIds = database
                .getCollection("sessions").distinct("Session",
                        Filters.and(Filters.eq("car", "Nisson_Note"), Filters.eq("carnumber", 123)), String.class)
                .iterator();

        while (mongoCursorIds.hasNext()) {
            System.out.println(mongoCursorIds.next());
            // You can convert the result to JSON
        }
    }
}
Sample Data:-
/* 1 */
{
    "_id" : ObjectId("576a6860d317ab85059c76d4"),
    "Session" : "value1",
    "car" : "Nisson_Note",
    "carnumber" : 123,
    "date" : "20130321"
}

/* 2 */
{
    "_id" : ObjectId("576a6896d317ab85059c76d5"),
    "Session" : "value2",
    "car" : "Nisson_Note",
    "carnumber" : 123,
    "date" : "20130321"
}

/* 3 */
{
    "_id" : ObjectId("576a68b4d317ab85059c76d6"),
    "Session" : "value2",
    "car" : "Nisson_Note",
    "carnumber" : 123,
    "date" : "20140321"
}
Output:-
value1
value2
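For completeness, since the question mentions not knowing aggregations: a sketch of the aggregation-pipeline equivalent of the distinct call above, using the same database handle, collection, and sample field values (this assumes a driver version that has the Aggregates builder, 3.1+):

import com.mongodb.client.model.Aggregates;
import com.mongodb.client.model.Filters;
import org.bson.Document;
import java.util.Arrays;

// $match on car/carnumber, then $group by Session; each resulting _id is a distinct Session value.
for (Document doc : database.getCollection("sessions").aggregate(Arrays.asList(
        Aggregates.match(Filters.and(Filters.eq("car", "Nisson_Note"), Filters.eq("carnumber", 123))),
        Aggregates.group("$Session")))) {
    System.out.println(doc.get("_id"));
}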
Well, to answer my own question: it is actually possible to pass several query conditions to the distinct method, and it can be done both in the mongodb shell and in the Java driver (unfortunately I did not get the other answer to work; not that it is wrong, I just didn't manage).
So for the mongodb shell (I include it because I didn't know how to do this either, which was part of the problem):
db.colectionLocalCC.distinct("Session", {date: "20130303", Car: "55"})
And for the Java driver:
BasicDBObject query = new BasicDBObject();
query.put("date", date);
query.put("car",car);
String fields = "Session";
array.put(coll.distinct(fields, query));

Mongo and Java: Create indexes for aggregation framework

Situation: I have a collection with a huge amount of documents after a map-reduce (aggregation). Documents in the collection look like this:
/* 0 */
{
    "_id" : {
        "appId" : ObjectId("1"),
        "timestamp" : ISODate("2014-04-12T00:00:00.000Z"),
        "name" : "GameApp",
        "user" : "test@mail.com",
        "type" : "game"
    },
    "value" : {
        "count" : 2
    }
}

/* 1 */
{
    "_id" : {
        "appId" : ObjectId("2"),
        "timestamp" : ISODate("2014-04-29T00:00:00.000Z"),
        "name" : "ScannerApp",
        "user" : "newUser@company.com",
        "type" : "game"
    },
    "value" : {
        "count" : 5
    }
}
...
And I searching inside this collection with aggregation framework:
db.myCollection.aggregate([match, project, group, sort, skip, limit]); // aggregation can return result on Daily or Monthly time base depends of user search criteria, with pagination etc...
Possible search criteria:
1. {appId, timestamp, name, user, type}
2. {appId, timestamp}
3. {name, user}
I'm getting the correct result, exactly what I need. But from an optimisation point of view, I have doubts about indexing.
Questions:
Is it possible to create indexes for such a collection?
How can I create indexes for such an object with a complex _id field?
How can I do the analog of db.collection.find().explain() to verify which index is used?
And is it a good idea to index such a collection, or is this just my performance paranoia?
Answer summarisation:
MongoDB creates an index on the _id field automatically, but that is useless in the case of a complex _id field like in the example. For a field like _id: {name: "", timestamp: ""} you must use an index such as *.ensureIndex({"_id.name": 1, "_id.timestamp": 1}); only after that will your collection be properly indexed by the _id field (a Java driver sketch of creating such an index follows after this summary).
For tracking how your indexes work with the aggregation framework, you cannot use db.myCollection.aggregate().explain(); the proper way of doing that is:
db.runCommand({
    aggregate: "collection_name",
    pipeline: [match, proj, group, sort, skip, limit],
    explain: true
})
My testing on a local computer shows that such indexing seems to be a good idea, but this requires more testing with big collections.
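As a sketch, the compound index on the _id sub-fields from the summary above could be created with the Java driver's Indexes builder like this (the database handle and the myCollection name are assumptions):

import com.mongodb.client.model.Indexes;

// Creates { "_id.name": 1, "_id.timestamp": 1 } so that pipelines matching on these
// sub-fields of the complex _id can use an index instead of a collection scan.
database.getCollection("myCollection").createIndex(
        Indexes.ascending("_id.name", "_id.timestamp"));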
First, indexes 1 and 3 are probably worth investigating. As for explain, you can pass explain as an option to your pipeline. You can find docs here and an example here.
