I'm using Spring Data Mongo version 1.10.18 with Java 8. I don't understand the behavior I am seeing with the GridFsOperations.findOne method.
Query maxAccountSetVersionQuery = new Query().addCriteria(GridFsCriteria.whereMetaData("tenantId").is(tenantId))
.addCriteria(GridFsCriteria.whereMetaData("contextId").is(businessContextId))
.addCriteria(GridFsCriteria.whereMetaData("collection").is("genericAuthorizationAccount"))
.with(new Sort(Sort.Direction.DESC, "metadata.accountSetVersion"));
final GridFSDBFile findOneResult = gridOperations.findOne(maxAccountSetVersionQuery);
final List<GridFSDBFile> gridFSDBFiles = gridOperations.find(maxAccountSetVersionQuery);
final GridFSDBFile firstInListResult = gridFSDBFiles.get(0);
final String output = String.format("findOneResult: %s\nfirstInListResult: %s",
findOneResult.getMetaData().get("accountSetVersion"),
firstInListResult.getMetaData().get("accountSetVersion"));
System.out.println(output);
Console output is:
findOneResult: 1
firstInListResult: 4
To be clear, the answer I am expecting is 4, which means firstInListResult is referencing the expected document.
So, two questions:
Why aren't findOneResult and firstInListResult referencing one and the same document? Or, to ask it another way, why doesn't findOne find the first document?
Is there a way to get Spring Data Mongo to return the first document of the sorted query results, instead of my code having to load the entire result set into memory just to get the first element?
It turns out that this is currently a bug in Spring Data MongoDB's GridFsTemplate implementation: https://jira.spring.io/browse/DATAMONGO-2411. Surprisingly, a pull request with a fix was created just 4 days ago, after I originally asked this question.
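Until a release contains that fix, one possible workaround (just a sketch, assuming a MongoTemplate is wired alongside the GridFsOperations, that the default fs.files bucket is in use, and that your converter can return raw DBObjects) is to resolve the newest file's _id against fs.files with a sorted, limited query, then fetch exactly that one file by _id, which sidesteps the sort that findOne ignores:
// Workaround sketch for DATAMONGO-2411. Assumptions: a MongoTemplate ("mongoTemplate")
// is available next to the GridFsOperations and the default "fs.files" bucket is used.
Query newestFileQuery = new Query()
        .addCriteria(GridFsCriteria.whereMetaData("tenantId").is(tenantId))
        .addCriteria(GridFsCriteria.whereMetaData("contextId").is(businessContextId))
        .addCriteria(GridFsCriteria.whereMetaData("collection").is("genericAuthorizationAccount"))
        .with(new Sort(Sort.Direction.DESC, "metadata.accountSetVersion"))
        .limit(1);
// The sort and limit are applied server-side against fs.files, so only one document comes back.
DBObject newestFileMetadata = mongoTemplate.findOne(newestFileQuery, DBObject.class, "fs.files");
// Looking up by _id is unaffected by the ignored sort because there is exactly one match.
GridFSDBFile newestFile = newestFileMetadata == null
        ? null
        : gridOperations.findOne(Query.query(Criteria.where("_id").is(newestFileMetadata.get("_id"))));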
Related
I'm looking to perform a query on my Couchbase database using the Java client SDK, which will return a list of results that include the document id for each result. Currently I'm using:
Statement stat = select("*").from(i("myBucket"))
.where(x(fieldIwantToGet).eq(s(valueIwantToGet)));
N1qlQueryResult result = bucket.query(stat);
However, N1qlQueryResult seems to only return a list of JsonObjects without any of the associated metadata. Looking at the documentation, it seems like I want a method that returns a list of Document objects, but I can't see any Bucket methods I can call that do the job.
Anyone know a way of doing this?
You need to use the query below to get the document id:
Statement stat = select("meta(myBucket).id").from(i("myBucket"))
.where(x(fieldIwantToGet).eq(s(valueIwantToGet)));
The above returns an array of document ids.
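For completeness, here is a sketch (assuming the Couchbase Java SDK 2.x) of iterating the rows and fetching the full document for each id, in case you need the content alongside it:
// Each row carries the id projected from meta(myBucket).id; fetch the full document with it.
N1qlQueryResult result = bucket.query(stat);
for (N1qlQueryRow row : result) {
    String id = row.value().getString("id");
    JsonDocument doc = bucket.get(id);   // full document: content plus cas/expiry metadata
    System.out.println(id + " -> " + doc.content());
}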
// The code below is not working.
// Here my query just returns one object, so I am trying to use the findOne() method.
Query<Topic> query = Ebean.find(Topic.class);
Topic topic = Topic.find.where().eq("columnName", "nameToMatch").findOne();
// The part below works if I use findList(), but then I have to call get(0) to fetch the topic, which I don't think is good practice.
List<Topic> topicList = Ebean.find(Topic.class).where().eq("columnName", "nameToMatch").findList();
Topic topic = topicList.get(0);
Can anyone provide ideas on how to return just one object instead of a list?
I don't know if findOne exists in Ebean, but when I need to retrieve only one object I use findUnique().
If you're sure the object you want to find is unique, you can get it via findUnique(): Topic.find.where().eq("columnName", "nameToMatch").findUnique();
Otherwise you can use findList() together with setMaxRows(), since you don't want to load the whole result set into memory:
Topic.find.where().eq("columnName", "nameToMatch").setMaxRows(1).findList();
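Putting that together, a small sketch (assuming the Model.Finder style API shown in the question) that limits the query to one row and guards against an empty result:
// Limit the query to a single row and handle the case where nothing matches.
List<Topic> topics = Topic.find.where()
        .eq("columnName", "nameToMatch")
        .setMaxRows(1)
        .findList();
Topic topic = topics.isEmpty() ? null : topics.get(0);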
I am using MongoDB 3.4 and I want to get the last inserted document's id. I have searched around and found that the code below can be used if I use a BasicDBObject.
BasicDBObject docs = new BasicDBObject(doc);
collection.insertOne(docs);
ObjectId id = (ObjectId) docs.get("_id");
But the problem is I am using the Document type, not BasicDBObject, so I tried to get it like this: doc.getObjectId(). But that method asks for a parameter, so does anyone know how to get the id?
EDIT
This is how I am inserting it into MongoDB:
Document doc = new Document("jarFileName", jarDataObj.getJarFileName())
.append("directory", jarDataObj.getPathData())
.append("version", jarDataObj.getVersion())
.append("artifactID", jarDataObj.getArtifactId())
.append("groupID", jarDataObj.getGroupId());
If I use doc.toJson() it shows me the whole document. Is there a way to extract only the _id?
This gives me only the value I want (the ObjectId), so I can use it as a reference key.
collection.insertOne(doc);
jarID = doc.get("_id");
System.out.println(jarID); // 59a4db1a6812d7430c3ef2a5
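If you want the value typed as an ObjectId rather than Object, the parameter that getObjectId asks for is simply the key "_id" (a minimal sketch):
collection.insertOne(doc);
ObjectId jarID = doc.getObjectId("_id");  // the driver populates _id on insert if it is absent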
Based on the ObjectId Javadoc, you can simply instantiate an ObjectId from its 24-character hex string representation, which is what 59a4db1a6812d7430c3ef2a5 is. Why don't you just do new ObjectId("59a4db1a6812d7430c3ef2a5")? Although, I'd say that exposing ObjectId outside the layer that integrates with Mongo is a design flaw.
In Neo4j, I want to use the Bolt protocol.
I installed Neo4j 3.1.
In my Java project, which already works well with the normal HTTP REST API of Neo4j, I pulled in the needed drivers with Maven and managed to perform requests over Bolt.
The problem is that everywhere you search about Bolt, you find examples like this one:
MATCH (a:Product) return a.name
But I don't want just the name, I want all the data of every product, whether or not I know in advance what the properties are, like here:
MATCH (a:Product) return * --> here I retrieve only the ids of nodes
I found at https://github.com/neo4j-contrib/neo4j-jdbc/tree/master/neo4j-jdbc-bolt that the result can be "flattened", but it either does not work or I did not understand how it works:
GraphDatabase.driver( "bolt://localhost:7687/?flatten=-1", AuthTokens.basic( "neo4j", "......." ) );
I put ?flatten=-1 at the end of my connection address... but that changed nothing.
Can anyone help? Or confirm that it's not possible or not working?
Thanks
OK, I understood my error: I didn't dig deep enough into the returned object. Being used to JSON-formatted responses, I didn't see that I have to look inside the StatementResult object to find the node I wanted, with its properties. In fact, Eclipse's "Expressions" view shows only the ids on the fly, but the data is there inside the object.
Record oneRecord = rs.next();
String src = oneRecord.get("m").get("source").asString();
That way I can reconstruct my object.
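To make this concrete, here is a sketch assuming the official Neo4j Java driver 1.x (with a "driver" instance created as in the question): return the whole node and read its property map, so no property names need to be known in advance.
// Return whole nodes and read every property without knowing the keys up front.
try (Session session = driver.session()) {
    StatementResult rs = session.run("MATCH (a:Product) RETURN a");
    while (rs.hasNext()) {
        Record record = rs.next();
        Node node = record.get("a").asNode();            // the full node, not just its id
        Map<String, Object> properties = node.asMap();   // every property key/value on the node
        System.out.println(node.id() + " -> " + properties);
    }
}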
I'm new to DynamoDB and I'm struggling to work out how to do this (using the Java SDK).
I currently have a table (in Mongo) for notifications. The schema is basically as follows (I've simplified it):
id: string
notifiedUsers: [123, 345, 456, 567]
message: "this is a message"
created: 12345678000 (epoch millis)
I want to migrate to DynamoDB, but I can't work out the best way to select all notifications that went to a particular user after a certain date.
I gather I can't have an index on a list like notifiedUsers, therefore I can't use a query in this case - is that correct?
I'd prefer not to scan and then filter; there could be a lot of records.
Is there a way to do this using a query or another approach?
EDIT
This is what I'm trying now; it's not working, and I'm not sure where to take it (if anywhere).
Condition rangeKeyCondition = new Condition()
.withComparisonOperator(ComparisonOperator.CONTAINS.toString())
.withAttributeValueList(new AttributeValue().withS(userId));
if (startTimestamp != null) {
rangeKeyCondition = rangeKeyCondition.withComparisonOperator(ComparisonOperator.GT.toString())
.withAttributeValueList(new AttributeValue().withS(startTimestamp));
}
NotificationFeedDynamoRecord replyKey = new NotificationFeedDynamoRecord();
replyKey.setId(partitionKey);
DynamoDBQueryExpression<NotificationFeedDynamoRecord> queryExpression = new DynamoDBQueryExpression<NotificationFeedDynamoRecord>()
.withHashKeyValues(replyKey)
.withRangeKeyCondition(NOTIFICATIONS, rangeKeyCondition);
In case anyone else comes across this question: in the end we flattened the schema, so that there is now a record per userId. This has led to problems, because it's not possible with DynamoDB to atomically batch-write records. With the original schema we had one record and could write it atomically, ensuring that all users got that notification. Now we cannot be certain, and this is causing pain.
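For anyone following the same route, here is a sketch of how the flattened schema might then be queried; the table name "notifications", partition key "userId", and sort key "created" (epoch millis) are hypothetical, and it assumes the AWS SDK for Java v1:
// Hypothetical flattened table "notifications": partition key "userId", sort key "created".
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();

Map<String, AttributeValue> values = new HashMap<>();
values.put(":userId", new AttributeValue().withS(userId));
values.put(":after", new AttributeValue().withN(Long.toString(startTimestamp)));

QueryRequest request = new QueryRequest("notifications")
        .withKeyConditionExpression("userId = :userId AND created > :after")
        .withExpressionAttributeValues(values);

// A key-condition query against the per-user records, so no full-table scan is needed.
List<Map<String, AttributeValue>> notifications = client.query(request).getItems();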