After upgrading my MongoDB Java driver from version 2.14 to 3.2, I changed from using DBCursor to MongoCursor.
Previously, I was using snapshot() to prevent repetition when iterating through my large database of thousands of documents. However, I can't seem to find an equivalent method for MongoCursor. This is causing troubling repetition, e.g. 5571 iterations over 4493 documents. That's about 24% more iterations than documents!
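For reference, my 2.14 code looked roughly like this (a sketch; dbCollection stands in for my old DBCollection handle):
DBCursor cursor = dbCollection.find().snapshot();
while (cursor.hasNext()) {
    DBObject doc = cursor.next();
    // $snapshot prevents a document from being returned more than once
    // when it moves on disk during the iteration
}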
So, my question is: is there a simple way or an equivalent method for MongoCursor that can prevent this from happening? If not, should I switch back to using DBCursor? It still appears to be supported in version 3.2.
Please kindly advise! Thank you!
After banging a few things around and checking the profiler logs, I actually got confirmation on this:
MongoCursor<Document> cursor = collection.find().modifiers(
        new Document("$snapshot", true)
).iterator();
So you need to call .modifiers() while still on the FindIterable, with $snapshot set to true. This is consistent over the wire with the .snapshot() cursor modifier from the 2.x API.
Both record in the profiler like this:
"query" : {
"find" : "sample",
"filter" : {
},
"snapshot" : true
},
This shows the modifier being placed correctly.
I am trying to run an aggregation against MongoDB (3.6) using Morphia (1.3.2).
Currently it is a simple match and unwind to understand Morphia's API.
The problem I am facing, however, is related to MongoDB 3.6:
Changed in version 3.6: MongoDB 3.6 removes the use of the aggregate command without the cursor option unless the command includes the explain option. Unless you include the explain option, you must specify the cursor option.
This paragraph comes directly from the MongoDB documentation (MongoDB Aggregate).
This means that a cursor is mandatory for the aggregate to work. However, I can't find a way to do this using Morphia. Therefore my aggregate does not work.
AggregationPipeline data = aggregation.match(query).unwind("data");
Iterator<LoraHourData> out = data.aggregate(Data.class);
The error produced by the above code is as follows:
Command failed with error 9: 'The cursor option is required, except for aggregate with the explain argument' on server localhost:27017. The full response is { "ok" : 0.0, "errmsg" : "The cursor option is required, except for aggregate with the explain argument", "code" : 9, "codeName" : "FailedToParse" }
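For what it's worth, a sketch of one approach that may satisfy the server: pass a com.mongodb.AggregationOptions with cursor output mode to the aggregate overload that takes options (assumption: Morphia 1.3.x exposes aggregate(Class, AggregationOptions) as used here):
AggregationOptions options = AggregationOptions.builder()
        .outputMode(AggregationOptions.OutputMode.CURSOR)
        .batchSize(100)
        .build();
AggregationPipeline pipeline = aggregation.match(query).unwind("data");
// Requesting cursor output makes the driver send the cursor option the server now requires.
Iterator<LoraHourData> out = pipeline.aggregate(LoraHourData.class, options);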
I want to create a query like
UPDATE foo SET map_clm['bar'] = 'biz' WHERE id = 7 IF map_clm['boo'] = 'bang';
using the QueryBuilder of DataStax's Java driver for Cassandra. I can create an Assignment using something like QueryBuilder.put("map_clm", "bar", "biz"), but I am stuck on creating a clause for the IF condition map_clm['boo'] = 'bang'. Is there any way to do that?
IMHO, the most straightforward way is to use the raw function and put your IF condition into it. But you need to be careful with escaping of arguments if you aren't using bind markers.
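If the builder fights you on the condition, a minimal fallback sketch is a plain statement with bind values, which sidesteps QueryBuilder for the IF clause entirely (table and column names taken from the question; session is an assumed Session instance):
// Driver 3.x: bind markers keep escaping concerns out of your hands.
SimpleStatement stmt = new SimpleStatement(
        "UPDATE foo SET map_clm['bar'] = ? WHERE id = ? IF map_clm['boo'] = ?",
        "biz", 7, "bang");
ResultSet rs = session.execute(stmt);
// rs.wasApplied() tells you whether the IF condition held.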
I've set up full text search in MongoDB and it's working quite well (Mongo 2.6.5).
However, it does an OR instead of an AND.
1) Is it possible to make the query an AND query, while still getting all the benefits of full text search (stemming etc.)?
2) And if so, is it possible to add this option via the Morphia wrapper library?
EDIT
I see that the full text search includes a 'score' for each document returned. Is it possible to only return docs with a certain score or above? Is there some score that would represent a 'fuzzy' AND query, i.e. usually all tokens are in the document but not absolutely always? If so, this would solve the problem as well.
Naturally, if it's possible to do this via Morphia, that would be super helpful. But I can use the native Java driver as well.
Any pointers in the correct direction, much appreciated.
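To be concrete, something along these lines is what I have in mind for the score cutoff (a sketch with the native driver's aggregation API; the 1.5 threshold is an arbitrary placeholder and collection is a DBCollection):
DBObject textMatch = new BasicDBObject("$match",
        new BasicDBObject("$text", new BasicDBObject("$search", "grey vests")));
// Project the text score (plus any real fields you need) so it can be filtered on.
DBObject projectScore = new BasicDBObject("$project",
        new BasicDBObject("score", new BasicDBObject("$meta", "textScore")));
DBObject scoreFilter = new BasicDBObject("$match",
        new BasicDBObject("score", new BasicDBObject("$gte", 1.5)));
AggregationOutput out = collection.aggregate(
        Arrays.asList(textMatch, projectScore, scoreFilter));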
EDIT
Code looks like this, I'm using Morphia 1.0.1:
Datastore ds = Dao.instance().getDatabase();
Query<Product> q = ds.createQuery(Product.class).search("grey vests");
List<Product> prods = q.asList();
Printing the query gives:
{ "$text" : { "$search" : "grey vests"}}
Note: I am able to take an intersection of multiple result sets to create an AND query. However, this is very slow, since something like "grey" will return a massive result set and be slow at feeding the results back.
EDIT
I've tried chaining the search() calls, adding a single 'token' to each call, but I am getting a runtime error. The code becomes:
q.search("grey").search("vests");
The query I get (which looks like it's doing the right thing) is:
{ "$and" : [ { "$text" : { "$search" : "grey"}} , { "$text" : { "$search" : "vests"}}]}
The error is:
com.mongodb.MongoQueryException: Query failed with error code 17287 and error message 'Can't canonicalize query: BadValue Too many text expressions' on server ...
at com.mongodb.connection.ProtocolHelper.getQueryFailureException(ProtocolHelper.java:93)
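For the record, one workaround sketch that gives AND semantics within a single $text expression: quote each term, since $text requires every quoted phrase to be present. The trade-off is that quoted terms are matched without stemming (collection is an assumed DBCollection):
// One $text expression, two required phrases: AND semantics, no stemming on quoted terms.
DBObject textQuery = new BasicDBObject("$text",
        new BasicDBObject("$search", "\"grey\" \"vests\""));
DBCursor cursor = collection.find(textQuery);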
Using Mongo Java Driver 2.13 and Mongo 3.0.
I am trying to move from Spring Data save() to MongoDB API's Bulk Writing since I am saving/updating about 100K objects. I am trying to write the Service/Repository layer code where I can pass in a Collection of my specific Objects and be able to either create new records or update existing records, or in other words upsert. When I do an insert the performance is very acceptable.
If I update the code to do upserts, the performance is just way too slow. Am I doing something wrong in the following code sample? (Note it is scaled down to just the necessary logic, i.e. no error handling.)
public void save(Collection<MyDomainObject> objects) {
    BulkWriteOperation bulkWriter = dbCollection.initializeUnorderedBulkOperation();
    for (MyDomainObject mdo : objects) {
        DBObject dbObject = convert(mdo);
        bulkWriter.find(new BasicDBObject("id", mdo.getId()))
                  .upsert()
                  .updateOne(new BasicDBObject("$set", dbObject));
    }
    bulkWriter.execute(writeConcern);
}
Note that I also tried replaceOne() instead of updateOne() with the same results.
I also noticed in the Mongo log that "nscannedObjects" keeps increasing while "nMatched", "nModified" and "upsert" are never larger than 1. Does this mean that it is table scanning for each record?
Am I using upsert the correct way? Any other suggestions?
Thanks to ry_donahue I figured out the issue.
I was not using the correct ID field, which is the indexed one. In the conversion of the domain object to a DBObject, there ended up being both an "id" and an "_id" field.
I also changed updateOne() to replaceOne(). So now the code looks like this:
public void save(Collection<MyDomainObject> objects) {
    BulkWriteOperation bulkWriter = dbCollection.initializeUnorderedBulkOperation();
    for (MyDomainObject mdo : objects) {
        DBObject dbObject = convert(mdo);
        bulkWriter.find(new BasicDBObject("_id", new ObjectId(mdo.getId())))
                  .upsert()
                  .replaceOne(dbObject);
    }
    bulkWriter.execute(writeConcern);
}
This now gives very good performance.
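One more note from the investigation: the steadily growing "nscannedObjects" meant each upsert was scanning the collection, because the filter field ("id") had no index. If you ever need to match on a field other than _id, index it first; a one-liner sketch (DBCollection.createIndex is available in driver 2.12+):
// Without this, every bulkWriter.find(...) on "id" walks the whole collection.
dbCollection.createIndex(new BasicDBObject("id", 1));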
New poster here. I found a previous post on this, but it's in C#.
I tried putting this query straight into the Java code of a JSP page, but for some reason it doesn't accept the info in the {} of the find() query and just gives an error...
So peeps, how do I do this in Java:
// retrieve ssn field for documents where last_name == 'Smith':
db.users.find({last_name: 'Smith'}, {'ssn': 1});
Thanks!
PS: why the hell does C# have the nice little .Exclude() and .Include() commands and Java doesn't? cries
The Java driver follows the exact same API as the shell. Just pass a DBObject containing your field projection as the second argument to find() or findOne().
As far as I know the official C# driver doesn't expose Include() and Exclude() methods as they violate the standard API.
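A minimal sketch of the shell query above in Java (2.x driver API; db is an assumed DB handle):
DBCollection users = db.getCollection("users");
DBObject query = new BasicDBObject("last_name", "Smith");
// The second argument is the projection: include only the ssn field.
DBObject projection = new BasicDBObject("ssn", 1);
DBCursor cursor = users.find(query, projection);
while (cursor.hasNext()) {
    System.out.println(cursor.next());
}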