MongoDB Java driver performance - Java

I'm using the MongoDB Java driver in my project to execute queries (finds, aggregates, map-reduce, ...) over a big collection (5 million documents).
The driver version is:
<!-- MongoDB driver -->
<dependency>
    <groupId>org.mongodb</groupId>
    <artifactId>mongo-java-driver</artifactId>
    <version>3.0.3</version>
</dependency>
My problem is that when I use the find API with some filters from Java, the operation takes 15 seconds.
....
Iterable<Document> messageList = collection.find().filter(... some filters).sort(... fields);
// Find documents
for (Document message : messageList) {
....
// some code
....
}
I checked the mongo server log file and saw that the trace is a COMMAND instead of a QUERY:
2015-09-01T12:11:47.496+0200 I COMMAND [conn503] command b.$cmd command: count { count: "logs", query: { timestamp: { $gte: new Date(1433109600000) }, aplicacion: "APP1", event: "Event1" } } planSummary: IXSCAN { timestamp: 1, aplicacion: 1 } keyUpdates:0 writeConflicts:0 numYields:19089 reslen:44 locks:{ Global: { acquireCount: { r: 19090 } }, MMAPV1Journal: { acquireCount: { r: 19090 } }, Database: { acquireCount: { r: 19090 } }, Collection: { acquireCount: { R: 19090 } } } 14297ms
If I run the same query from a MongoDB client (Robomongo), it takes 0.05 ms.
db.getCollection('logs').find({ timestamp: { $gte: new Date(1427839200000) }, aplicacion: "APP1", event: "Event1" })
and in the server log it appears as a QUERY.
Are all queries made with the Java driver (find, aggregate, ...) transformed into commands? The performance is much worse than from the mongo shell.

I think the issue is that when you run the query in the mongo shell it returns only the top 20 results at a time, whereas here you are trying to read all the documents and put them into an array.
try this query and see
List<Document> messageList = collection.find(filter).sort(... fields).limit(20).into(new ArrayList<Document>());
It's highly recommended to create an index on the query fields.
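For reference, here is a minimal sketch of both suggestions with the 3.x sync driver. The filter, sort and index fields (timestamp, aplicacion, event) are taken from the log line in the question and are assumptions on my part, so adjust them to your real query:
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Sorts;
import org.bson.Document;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

public class FindFirstPage {
    static List<Document> firstPage(MongoCollection<Document> collection) {
        // Compound index covering the equality fields and the range field of the query.
        collection.createIndex(new Document("aplicacion", 1).append("event", 1).append("timestamp", 1));

        return collection.find(
                        Filters.and(
                                Filters.eq("aplicacion", "APP1"),
                                Filters.eq("event", "Event1"),
                                Filters.gte("timestamp", new Date(1433109600000L))))
                .sort(Sorts.descending("timestamp"))
                .limit(20)                      // fetch only the first page, like the shell does
                .into(new ArrayList<Document>());
    }
}
Iterating the cursor (the FindIterable) page by page instead of materializing the whole result set is what keeps the response time comparable to the shell.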

Related

(Datastax 4.1.0) (Cassandra) How do I collect all responses with session.executeAsync?

I want to make an async call to the Cassandra DB with an executeAsync call.
In the manual I found this code, but I couldn't understand how to collect all the rows into a list.
It's a really basic call, like SELECT * FROM table, and I want to store all the results.
https://docs.datastax.com/en/developer/java-driver/4.4/manual/core/async/
CompletionStage<CqlSession> sessionStage = CqlSession.builder().buildAsync();
// Chain one async operation after another:
CompletionStage<AsyncResultSet> responseStage =
sessionStage.thenCompose(
session -> session.executeAsync("SELECT release_version FROM system.local"));
// Apply a synchronous computation:
CompletionStage<String> resultStage =
responseStage.thenApply(resultSet -> resultSet.one().getString("release_version"));
// Perform an action once a stage is complete:
resultStage.whenComplete(
(version, error) -> {
if (error != null) {
System.out.printf("Failed to retrieve the version: %s%n", error.getMessage());
} else {
System.out.printf("Server version: %s%n", version);
}
sessionStage.thenAccept(CqlSession::closeAsync);
});
You need to refer to the section about asynchronous paging - you need to provide a callback that will collect the data into a list supplied as an external object. The documentation has the following example:
CompletionStage<AsyncResultSet> futureRs =
session.executeAsync("SELECT * FROM myTable WHERE id = 1");
futureRs.whenComplete(this::processRows);
void processRows(AsyncResultSet rs, Throwable error) {
if (error != null) {
// The query failed, process the error
} else {
for (Row row : rs.currentPage()) {
// Process the row...
}
if (rs.hasMorePages()) {
rs.fetchNextPage().whenComplete(this::processRows);
}
}
}
In this case processRows can store the data in a list that is part of the current object, something like this:
class Abc {
    List<Row> rows = new ArrayList<>();

    // issue executeAsync(...) and chain .whenComplete(this::processRows) here

    void processRows(AsyncResultSet rs, Throwable error) {
        if (error != null) {
            // The query failed, process the error
        } else {
            for (Row row : rs.currentPage()) {
                rows.add(row);
            }
            if (rs.hasMorePages()) {
                rs.fetchNextPage().whenComplete(this::processRows);
            }
        }
    }
}
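If you prefer not to keep mutable state on the enclosing object, another option is to accumulate the pages into a CompletableFuture that completes with the full list once the last page arrives. A minimal sketch of that idea (the AsyncCollector class and collectAllRows method are my own names, not part of the driver API):
import com.datastax.oss.driver.api.core.cql.AsyncResultSet;
import com.datastax.oss.driver.api.core.cql.Row;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

class AsyncCollector {
    static CompletableFuture<List<Row>> collectAllRows(CompletionStage<AsyncResultSet> firstPage) {
        CompletableFuture<List<Row>> result = new CompletableFuture<>();
        firstPage.whenComplete((rs, error) -> accumulate(rs, error, new ArrayList<>(), result));
        return result;
    }

    private static void accumulate(AsyncResultSet rs, Throwable error,
                                   List<Row> acc, CompletableFuture<List<Row>> result) {
        if (error != null) {
            result.completeExceptionally(error);
            return;
        }
        for (Row row : rs.currentPage()) {
            acc.add(row);
        }
        if (rs.hasMorePages()) {
            // Keep chaining until the server reports no more pages.
            rs.fetchNextPage().whenComplete((next, err) -> accumulate(next, err, acc, result));
        } else {
            result.complete(acc);
        }
    }
}
Usage (this blocks the calling thread, so only do it where blocking is acceptable): List<Row> allRows = AsyncCollector.collectAllRows(session.executeAsync("SELECT * FROM myTable")).join();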
But you'll need to be very careful with select * from table as it may return a lot of results, and it may time out if you have too much data - in that case it's better to perform a token range scan (I have an example for driver 3.x, but none for 4.x yet).
Here is a sample for 4.x (you'll also find a sample for reactive code, available from 4.4, BTW):
https://github.com/datastax/cassandra-reactive-demo/blob/master/2_async/src/main/java/com/datastax/demo/async/repository/AsyncStockRepository.java

Spark SQL join performance issue with mongo-spark and spark-redshift connectors

We are using Apache Spark with the mongo-spark library (for connecting to MongoDB) and the spark-redshift library (for connecting to the Amazon Redshift DWH).
We are experiencing very bad performance for our job.
So I am hoping to get some help to understand whether we are doing anything wrong in our program, or whether this is what we can expect with the infrastructure we are using.
We are running our job with the Mesos resource manager on 4 AWS EC2 nodes, with the following configuration on each node:
RAM: 16GB, CPU cores: 4, SSD: 200GB
We have 3 tables in Redshift cluster:
TABLE_NAME   SCHEMA                                     NUMBER_OF_ROWS
table1       (table1Id, table2FkId, table3FkId, ...)    50M
table2       (table2Id, phonenumber, email, ...)        700M
table3       (table3Id, ...)                            2K
and in MongoDB we have a collection of 35 million documents, with a sample document as below (all emails and phone numbers are unique here, no duplication):
{
"_id": "19ac0487-a75f-49d9-928e-c300e0ac7c7c",
"idKeys": {
"email": [
"a#gmail.com",
"b#gmail.com"
],
"phonenumber": [
"1111111111",
"2222222222"
]
},
"flag": false,
...
...
...
}
We are filtering and flattening this (see the code at the end for the mongo-spark aggregation pipeline) with the mongo-spark connector into the following format, as we need to JOIN the data from Redshift and Mongo ON an email OR phonenumber match (the other option available is array_contains() in Spark SQL, which is a bit slow):
{"_id": "19ac0487-a75f-49d9-928e-c300e0ac7c7c", "email": "a#gmail.com", "phonenumber": null},
{"_id": "19ac0487-a75f-49d9-928e-c300e0ac7c7c","email": "b#gmail.com","phonenumber": null},
{"_id": "19ac0487-a75f-49d9-928e-c300e0ac7c7c","email": null,"phonenumber": "1111111111"},
{"_id": "19ac0487-a75f-49d9-928e-c300e0ac7c7c","email": null,"phonenumber": "22222222222"}
Spark computation steps (please refer to the code below to understand these steps better):
First we load all the data from the 3 Redshift tables into table1Dataset, table2Dataset and table3Dataset respectively, using the spark-redshift connector.
Then we join these 3 tables with Spark SQL and create a new Dataset, redshiftJoinedDataset (this operation on its own finishes in 6 hours).
We load the MongoDB data into mongoDataset using the mongo-spark connector.
We join mongoDataset and redshiftJoinedDataset (here is the bottleneck, as we need to join over 50 million rows from Redshift with over 100 million flattened rows from MongoDB).
Note: mongo-spark also seems to have some internal issue with its aggregation pipeline execution, which might be making it very slow.
Then we do some aggregation and group the data on finalId.
here is the code for the steps mentioned above:
import com.mongodb.spark.MongoSpark;
import com.mongodb.spark.rdd.api.java.JavaMongoRDD;
import org.apache.spark.SparkContext;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.SparkSession;
import org.bson.Document;
import java.util.Arrays;
public class SparkMongoRedshiftTest {
private static SparkSession sparkSession;
private static SparkContext sparkContext;
private static SQLContext sqlContext;
public static void main(String[] args) {
sparkSession = SparkSession.builder().appName("redshift-spark-test").getOrCreate();
sparkContext = sparkSession.sparkContext();
sqlContext = new SQLContext(sparkContext);
Dataset table1Dataset = executeRedshiftQuery("(SELECT table1Id,table2FkId,table3FkId FROM table1)");
table1Dataset.createOrReplaceTempView("table1Dataset");
Dataset table2Dataset = executeRedshiftQuery("(SELECT table2Id,phonenumber,email FROM table2)");
table2Dataset.createOrReplaceTempView("table2Dataset");
Dataset table3Dataset = executeRedshiftQuery("(SELECT table3Id FROM table3)");
table3Dataset.createOrReplaceTempView("table3Dataset");
Dataset redshiftJoinedDataset = sqlContext.sql(" SELECT a.*,b.*,c.*" +
" FROM table1Dataset a " +
" LEFT JOIN table2Dataset b ON a.table2FkId = b.table2Id" +
" LEFT JOIN table3Dataset c ON a.table3FkId = c.table3Id");
redshiftJoinedDataset.createOrReplaceTempView("redshiftJoinedDataset");
JavaMongoRDD<Document> userIdentityRDD = MongoSpark.load(getJavaSparkContext());
Dataset mongoDataset = userIdentityRDD.withPipeline(
Arrays.asList(
Document.parse("{$match: {flag: false}}"),
Document.parse("{ $unwind: { path: \"$idKeys.email\" } }"),
Document.parse("{$group: {_id: \"$_id\",emailArr: {$push: {email: \"$idKeys.email\",phonenumber: {$ifNull: [\"$description\", null]}}},\"idKeys\": {$first: \"$idKeys\"}}}"),
Document.parse("{$unwind: \"$idKeys.phonenumber\"}"),
Document.parse("{$group: {_id: \"$_id\",phoneArr: {$push: {phonenumber: \"$idKeys.phonenumber\",email: {$ifNull: [\"$description\", null]}}},\"emailArr\": {$first: \"$emailArr\"}}}"),
Document.parse("{$project: {_id: 1,value: {$setUnion: [\"$emailArr\", \"$phoneArr\"]}}}"),
Document.parse("{$unwind: \"$value\"}"),
Document.parse("{$project: {email: \"$value.email\",phonenumber: \"$value.phonenumber\"}}")
)).toDF();
mongoDataset.createOrReplaceTempView("mongoDataset");
Dataset joinRedshiftAndMongoDataset = sqlContext.sql(" SELECT a.* , b._id AS finalId " +
" FROM redshiftJoinedData AS a INNER JOIN mongoDataset AS b " +
" ON b.email = a.email OR b.phonenumber = a.phonenumber");
//aggregating joinRedshiftAndMongoDataset
//then storing to mysql
}
private static Dataset executeRedshiftQuery(String query) {
return sqlContext.read()
.format("com.databricks.spark.redshift")
.option("url", "jdbc://...")
.option("query", query)
.option("aws_iam_role", "...")
.option("tempdir", "s3a://...")
.load();
}
public static JavaSparkContext getJavaSparkContext() {
sparkContext.conf().set("spark.mongodb.input.uri", "");
sparkContext.conf().set("spark.sql.crossJoin.enabled", "true");
return new JavaSparkContext(sparkContext);
}
}
The estimated time to finish this job on the above-mentioned infrastructure is over 2 months.
So to summarize the joins quantitatively:
RedshiftDataWithMongoDataJoin => (RedshiftDataJoin) INNER_JOIN (MongoData)
=> (50M LEFT_JOIN 700M LEFT_JOIN 2K) INNER_JOIN (~100M)
=> (50M) INNER_JOIN (~100M)
Any help with this will be appreciated.
So, after a lot of investigation, we came to know that 90% of the data in table2 had either email or phonenumber as null, and I had missed handling joins on null values in the query.
That was the main cause of this performance bottleneck.
After fixing this problem the job now runs within 2 hours.
So there are no issues with spark-redshift or mongo-spark; they are performing exceptionally well :)
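For illustration, here is a minimal sketch of that kind of null handling (my own, not the poster's actual code), reusing the temp views registered in the code above: drop rows whose join keys are both null and guard each branch of the OR so null keys never match.
// requires: import org.apache.spark.sql.Dataset; import org.apache.spark.sql.Row;
Dataset<Row> redshiftNonNull = sqlContext.sql(
        "SELECT * FROM redshiftJoinedDataset" +
        " WHERE email IS NOT NULL OR phonenumber IS NOT NULL");
redshiftNonNull.createOrReplaceTempView("redshiftJoinedNonNull");

Dataset<Row> joined = sqlContext.sql(
        " SELECT a.*, b._id AS finalId" +
        " FROM redshiftJoinedNonNull AS a INNER JOIN mongoDataset AS b" +
        " ON (a.email IS NOT NULL AND b.email = a.email)" +
        " OR (a.phonenumber IS NOT NULL AND b.phonenumber = a.phonenumber)");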

Execute raw MongoDB query from the Java MongoDB driver

I'm developing a web application to run native Mongo queries through the Java driver, so that I can see the results in a nice UI. I didn't find a straightforward way to do that, but running JS functions seems to be one way.
I can run the following script from mongo shell.
rs1:PRIMARY> function showShortedItems() { return db.Items.find({});}
rs1:PRIMARY> showShortedItems()
But when I try the same thing from the Java driver, it doesn't work.
val db = connection.getDatabase(Database.Name)
val command = new BasicDBObject("eval", "function() { return db.Items.find(); }")
val result = db.runCommand(command)
Error:
Caused by: com.mongodb.MongoCommandException: Command failed with error 13: 'not authorized on shipping-db to execute command { eval: "function() { return db.Items.find(); }" }' on server localhost:27017.
The full response is { "ok" : 0.0, "errmsg" : "not authorized on shipping-db to execute command { eval: \"function() { return db.Items.find(); }\" }", "code" : 13 }
rs1:PRIMARY> db.system.users.find({}) is empty.
mongo.conf
storage:
journal:
enabled: false
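As a side note, a sketch of my own (not from this thread): most shell-style reads can be sent as plain database commands with runCommand instead of eval, e.g. the find command (available on MongoDB 3.2+), which avoids the extra privileges that the deprecated eval command requires (eval was removed in MongoDB 4.2). The database name is taken from the error message above; the limit value is arbitrary.
import com.mongodb.MongoClient;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

public class RawCommandExample {
    public static void main(String[] args) {
        MongoDatabase db = new MongoClient("localhost", 27017).getDatabase("shipping-db");
        // Equivalent of db.Items.find({}), expressed as a raw command document
        Document result = db.runCommand(Document.parse("{ find: \"Items\", filter: {}, limit: 20 }"));
        // The matching documents come back under cursor.firstBatch
        System.out.println(result.toJson());
    }
}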

Cannot start Neo4j Server after Spatial data load

I've been trying to use the Neo4j Spatial plugin with data loaded via Java. I have added the plugin, and when I start an empty database this is confirmed by the response to the following GET request to the server:
{
"extensions": {
"SpatialPlugin": {
"addSimplePointLayer": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/addSimplePointLayer",
"findClosestGeometries": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/findClosestGeometries",
"addNodesToLayer": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/addNodesToLayer",
"addGeometryWKTToLayer": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/addGeometryWKTToLayer",
"findGeometriesWithinDistance": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/findGeometriesWithinDistance",
"addEditableLayer": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/addEditableLayer",
"addCQLDynamicLayer": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/addCQLDynamicLayer",
"addNodeToLayer": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/addNodeToLayer",
"getLayer": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/getLayer",
"findGeometriesInBBox": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/findGeometriesInBBox",
"updateGeometryFromWKT": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/updateGeometryFromWKT"
}
},
"node": "http://localhost:7474/db/data/node",
"node_index": "http://localhost:7474/db/data/index/node",
"relationship_index": "http://localhost:7474/db/data/index/relationship",
"extensions_info": "http://localhost:7474/db/data/ext",
"relationship_types": "http://localhost:7474/db/data/relationship/types",
"batch": "http://localhost:7474/db/data/batch",
"cypher": "http://localhost:7474/db/data/cypher",
"indexes": "http://localhost:7474/db/data/schema/index",
"constraints": "http://localhost:7474/db/data/schema/constraint",
"transaction": "http://localhost:7474/db/data/transaction",
"node_labels": "http://localhost:7474/db/data/labels",
"neo4j_version": "2.3.2"
}
However, when I stop the server and load my spatial data via Java with a SpatialIndexProvider.SIMPLE_WKT_CONFIG index, adding it with:
try (Transaction tx = db.beginTx()) {
Index<Node> index = db.index().forNodes("location", SpatialIndexProvider.SIMPLE_WKT_CONFIG);
for (String line : lines) {
String[] columns = line.split(",");
Node node = db.createNode();
node.setProperty("wkt", String.format("POINT(%s %s)", columns[4], columns[3]));
node.setProperty("name", columns[0]);
index.add(node, "dummy", "value");
}
tx.success();
}
After a restart, I get the error:
2016-02-23 13:44:36.747+0000 ERROR [o.n.k.KernelHealth] setting TM not OK. Kernel has encountered some problem, please perform necessary action (tx recovery/restart) No index provider 'spatial' found. Maybe the intended provider (or one more of its dependencies) aren't on the classpath or it failed to load.
in Messages.log inside the graph.db. Is there anything obvious that I'm doing wrong?
I'm on Windows 8, Neo4j 2.3.2, Java 8 and neo4j-spatial-0.15-neo4j-2.3.0.jar.
Did you unzip the full spatial zip into the plugins directory?
Otherwise some classes that spatial needs can't be found.

MongoDB Update with upsert and unique index atomicity when document is not in the working set

In summary, we have run into this weird behavior when doing concurrent updates on an existing document, when the document is not part of the working set (not in resident memory).
More details:
Given a collection with a unique index, when running concurrent updates (3 threads) with upsert set to true on a given existing document, 1 to 2 of the threads raise the following exception:
Processing failed (Write failed with error code 11000 and error message 'insertDocument :: caused by :: 11000 E11000 duplicate key error index: db1.col1.$key_1 dup key: { : 1008 }'):
According to the documentation, I would expect all three updates to succeed because the document I am trying to update already exists. Instead, it looks like it is trying to do an insert for some or all of the update requests, and some of them fail due to the unique index.
Repeating the same concurrent update on the document does not raise any exceptions. Also, using find() on a document to bring it into the working set and then running the concurrent updates on that document works as expected.
Also, using findAndModify with the same query and settings does not have the same problem.
Is this working as expected or am I missing something?
Setup:
- MongoDB Java driver 3.0.1
- 3-node replica set running MongoDB version "2.6.3"
Query:
BasicDBObject query = new BasicDBObject();
query.put("docId", 123L);
collection.update(query, object, true, false);
Index:
name: docId_1
unique: true
key: {"docId":1}
background: true
Updated on May 28 to include sample code to reproduce the issue.
Run MongoDB locally as follows (note that the test will write about ~4 GB of data):
./mongodb-osx-x86_64-2.6.10/bin/mongod --dbpath /tmp/mongo
Run the following code, restart the database, comment out "fillUpCollection(testMongoDB.col1, value, 0, 300);", then run the code again. Depending on the machine, you may need to tweak some of the numbers to be able to see the exceptions.
package test;
import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.mongodb.Mongo;
import com.mongodb.MongoClient;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
public class TestMongoDB {
public static final String DOC_ID = "docId";
public static final String VALUE = "value";
public static final String DB_NAME = "db1";
public static final String UNIQUE = "unique";
public static final String BACKGROUND = "background";
private DBCollection col1;
private DBCollection col2;
private static DBCollection getCollection(Mongo mongo, String collectionName) {
DBCollection col = mongo.getDB(DB_NAME).getCollection(collectionName);
BasicDBObject index = new BasicDBObject();
index.append(DOC_ID, 1);
DBObject indexOptions = new BasicDBObject();
indexOptions.put(UNIQUE, true);
indexOptions.put(BACKGROUND, true);
col.createIndex(index, indexOptions);
return col;
}
private static void storeDoc(String docId, DBObject doc, DBCollection dbCollection) throws IOException {
BasicDBObject query = new BasicDBObject();
query.put(DOC_ID, docId);
dbCollection.update(query, doc, true, false);
//dbCollection.findAndModify(query, null, null, false, doc, false, true);
}
public static void main(String[] args) throws Exception{
final String value = new String(new char[1000000]).replace('\0', 'a');
Mongo mongo = new MongoClient("localhost:27017");
final TestMongoDB testMongoDB = new TestMongoDB();
testMongoDB.col1 = getCollection(mongo, "col1");
testMongoDB.col2 = getCollection(mongo, "col2");
fillUpCollection(testMongoDB.col1, value, 0, 300);
//restart Database, comment out previous line, and run again
fillUpCollection(testMongoDB.col2, value, 0, 2000);
updateExistingDocuments(testMongoDB, value);
}
private static void updateExistingDocuments(TestMongoDB testMongoDB, String value) {
List<String> docIds = new ArrayList<String>();
for(int i = 0; i < 10; i++) {
docIds.add(new Random().nextInt(300) + "");
}
multiThreadUpdate(testMongoDB.col1, value, docIds);
}
private static void multiThreadUpdate(final DBCollection col, final String value, final List<String> docIds) {
Runnable worker = new Runnable() {
@Override
public void run() {
try {
System.out.println("Started Thread");
for(String id : docIds) {
storeDoc(id, getDbObject(value, id), col);
}
} catch (Exception e) {
System.out.println(e);
} finally {
System.out.println("Completed");
}
}
};
for(int i = 0; i < 8; i++) {
new Thread(worker).start();
}
}
private static DBObject getDbObject(String value, String docId) {
final DBObject object2 = new BasicDBObject();
object2.put(DOC_ID, docId);
object2.put(VALUE, value);
return object2;
}
private static void fillUpCollection(DBCollection col, String value, int from, int to) throws IOException {
for(int i = from ; i <= to; i++) {
storeDoc(i + "", getDbObject(value, i + ""), col);
}
}
}
Sample Output on the second run:
Started Thread
Started Thread
Started Thread
Started Thread
Started Thread
Started Thread
Started Thread
Started Thread
com.mongodb.DuplicateKeyException: Write failed with error code 11000 and error message 'insertDocument :: caused by :: 11000 E11000 duplicate key error index: db1.col1.$docId_1 dup key: { : "290" }'
Completed
com.mongodb.DuplicateKeyException: Write failed with error code 11000 and error message 'insertDocument :: caused by :: 11000 E11000 duplicate key error index: db1.col1.$docId_1 dup key: { : "170" }'
Completed
com.mongodb.DuplicateKeyException: Write failed with error code 11000 and error message 'insertDocument :: caused by :: 11000 E11000 duplicate key error index: db1.col1.$docId_1 dup key: { : "241" }'
Completed
com.mongodb.DuplicateKeyException: Write failed with error code 11000 and error message 'insertDocument :: caused by :: 11000 E11000 duplicate key error index: db1.col1.$docId_1 dup key: { : "127" }'
Completed
com.mongodb.DuplicateKeyException: Write failed with error code 11000 and error message 'insertDocument :: caused by :: 11000 E11000 duplicate key error index: db1.col1.$docId_1 dup key: { : "120" }'
Completed
com.mongodb.DuplicateKeyException: Write failed with error code 11000 and error message 'insertDocument :: caused by :: 11000 E11000 duplicate key error index: db1.col1.$docId_1 dup key: { : "91" }'
Completed
com.mongodb.DuplicateKeyException: Write failed with error code 11000 and error message 'insertDocument :: caused by :: 11000 E11000 duplicate key error index: db1.col1.$docId_1 dup key: { : "136" }'
Completed
Completed
This looks like a known issue with MongoDB, at least up to version 2.6. Their recommended fix is to have your code retry the upsert on error.
https://jira.mongodb.org/browse/SERVER-14322
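A minimal retry sketch of that workaround (my own illustration, fitted to the storeDoc helper from the question; it assumes import com.mongodb.DuplicateKeyException, and the retry limit is arbitrary):
private static void storeDocWithRetry(String docId, DBObject doc, DBCollection dbCollection) {
    BasicDBObject query = new BasicDBObject(DOC_ID, docId);
    final int maxAttempts = 3;
    for (int attempt = 1; ; attempt++) {
        try {
            dbCollection.update(query, doc, true, false);
            return;
        } catch (DuplicateKeyException e) {
            // Another thread won the race and inserted the document first;
            // on the next attempt the update matches the existing document.
            if (attempt >= maxAttempts) {
                throw e;
            }
        }
    }
}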
Your query is too specific and does not find the document even though it has been created, e.g. because it is not searching only on the unique field. The upsert then tries to create it a second time (from another thread) but fails, as the document actually exists but wasn't found. Please see http://docs.mongodb.org/manual/reference/method/db.collection.update/#upsert-behavior for more details.
Boiled down from the docs: to avoid inserting the same document more than once, only use upsert: true if the query field is uniquely indexed.
Use update operators like $set to include your query document in the upserted document.
If you feel that this isn't the case for you, please provide us with the query and some information about your index.
Update:
If you try to run your code from the CLI, you'll see the following:
> db.upsert.ensureIndex({docid:1},{unique:true})
{
"createdCollectionAutomatically" : true,
"numIndexesBefore" : 1,
"numIndexesAfter" : 2,
"ok" : 1
}
> db.upsert.update({"docid":123},{one:1,two:2},true,false)
WriteResult({
"nMatched" : 0,
"nUpserted" : 1,
"nModified" : 0,
"_id" : ObjectId("55637413ad907a45eec3a53a")
})
> db.upsert.find()
{ "_id" : ObjectId("55637413ad907a45eec3a53a"), "one" : 1, "two" : 2 }
> db.upsert.update({"docid":123},{one:1,two:2},true,false)
WriteResult({
"nMatched" : 0,
"nUpserted" : 0,
"nModified" : 0,
"writeError" : {
"code" : 11000,
"errmsg" : "insertDocument :: caused by :: 11000 E11000 duplicate key error index: test.upsert.$docid_1 dup key: { : null }"
}
})
You have the following issue:
You want to update the document but don't find it. And since your update contains no update operators, your docid field won't be included in the newly created document (or rather, it is set to null, and null can only appear once in a unique index, too).
The next time you try to update your document, you still don't find it, because of the last step. So MongoDB tries to insert it following the same procedure as before, and fails again. No second null allowed.
Simply change your update so that in the upsert case your query fields are included in the new document via an update operator:
> db.upsert.update({"docid":123},{$set:{one:1,two:2}},true,false)
WriteResult({
"nMatched" : 0,
"nUpserted" : 1,
"nModified" : 0,
"_id" : ObjectId("5562164f0f63858bf27345f3")
})
> db.upsert.find()
{ "_id" : ObjectId("5562164f0f63858bf27345f3"), "docid" : 123, "one" : 1, "two" : 2 }
> db.upsert.update({"docid":123},{$set:{one:1,two:2}},true,false)
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 0 })
