I use the Jersey/Jackson stack to address a Neo4j database via the REST API, but I have some issues with how to interpret the result.
If I read a node by its ID (/db/data/node/xxx), the result can be mapped to my DTO very easily by calling readEntity(MyDto.class) on the response. However, using internal IDs is not recommended, and various use cases require querying by custom properties. Here Cypher comes into play (/db/data/cypher).
Assuming a node exists with a property "myid" and a value of "1234", I can fetch it with the Cypher query "MATCH (n {myid: 1234}) RETURN n". The result is a JSON string with a bunch of resources and, eventually, the "data" I want to unmarshal to a Java object. Unmarshalling it directly fails with a ProcessingException (error reading entity from input stream), and I see no API that allows iterating over the result's data.
My idea is to define some kind of generic wrapper class with an attribute "data", hand that to the unmarshaller, and unwrap my DTO afterwards. I wonder if there is a more elegant way to do this, like using "RETURN n.data" (which does not work) or something similar. Is there?
You should look into Neo4j 2.0, where RETURN n just returns the property map.
I usually tend to deserialize the result as a nested list/map structure (i.e. have ObjectMapper read to Object.class or Map.class) and grab the data map directly out of that.
There's probably a way to tell Jackson to ignore all the information around that data field.
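A rough sketch of that nested-map approach against the legacy /db/data/cypher response format (responseBody and MyDto stand in for your own response string and DTO):

import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.List;
import java.util.Map;

ObjectMapper mapper = new ObjectMapper();
// The cypher endpoint returns {"columns": [...], "data": [[ row cells ]]}
Map<String, Object> result = mapper.readValue(responseBody, new TypeReference<Map<String, Object>>() {});
List<List<Map<String, Object>>> rows = (List<List<Map<String, Object>>>) result.get("data");
Map<String, Object> node = rows.get(0).get(0); // first row, first column: the node resource
Map<String, Object> props = (Map<String, Object>) node.get("data"); // the node's property map
MyDto dto = mapper.convertValue(props, MyDto.class);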
If you want to have a nicer presentation you can also check out my cypher-rs project which returns only the data in question, nothing more.
I have some incoming JSON (the field-order of which is not my choice) that embeds a dependent pair:
{
"data": {...},
"evt": "READY",
...
}
and what type I should read data into depends on the value of evt. With just a JsonParser this is impossible, because there is no way to store data for later so that it can be returned to once evt is reached.
All of the data I'm parsing (unfortunately) already exists in a ByteBuffer, so is there a better interface to use than JsonParser? I don't want to bring in any more dependencies than jackson-core if it can be helped.
Looks like there is no simple way to achieve this without any additional dependencies.
I suppose you need to add at least jackson-databind (and also jackson-annotations, if it is not pulled in automatically via Maven/Gradle).
Then you can use an ObjectMapper as an ObjectCodec for the parser and parse the complete JSON into a TreeNode structure that can be partially parsed later into the correct type. Alternatively, if you have classes for all types of data, you may be able to parse the complete object directly with the matching data type. If needed, a custom ObjectCodec could be implemented to first collect the unknown data and then process it later once the type is known, but implementing an ObjectCodec does not seem to be that easy.
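As a rough sketch of the TreeNode route (this assumes jackson-databind is available and the bytes have been copied out of the ByteBuffer; ReadyData is a hypothetical class for the READY payload):

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

ObjectMapper mapper = new ObjectMapper();
// Parse everything into a tree first, so field order no longer matters.
JsonNode root = mapper.readTree(bytes);
String evt = root.get("evt").asText();
if ("READY".equals(evt)) {
    // Bind the buffered "data" subtree now that the type is known.
    ReadyData data = mapper.treeToValue(root.get("data"), ReadyData.class);
}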
Instead of Jackson, you could use Gson, which can parse the data either into a complete object structure or into a generic JSON object tree, without any additional dependencies.
If you really cannot add additional dependencies, you could implement SAX-parser-like logic using JsonParser.nextToken, but I suppose that would require a lot of custom logic.
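For completeness, a rough jackson-core-only sketch of the first pass: scan the top-level fields for evt while skipping nested values such as data; a second pass over the same bytes could then re-parse data with the now-known type. Everything besides the Jackson API is a placeholder:

import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import java.io.IOException;

JsonFactory factory = new JsonFactory();
String evt = null;
try (JsonParser p = factory.createParser(bytes)) {
    if (p.nextToken() != JsonToken.START_OBJECT) {
        throw new IOException("expected a JSON object");
    }
    while (p.nextToken() == JsonToken.FIELD_NAME) {
        String name = p.getCurrentName();
        p.nextToken(); // advance to the field's value
        if ("evt".equals(name)) {
            evt = p.getText();
            break;
        }
        p.skipChildren(); // no-op for scalars, skips whole objects/arrays like "data"
    }
}
// Second pass: createParser(bytes) again, seek to "data", and bind it based on evt.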
Let's say I have a Foo class and a JAX-RS FooResource that exposes an API to CRUD Foos.
Foo represents a MongoDB document.
In FooResource, I'll have something like this:
@PATCH
@Path("{id}")
public Response update(@PathParam("id") ObjectId id, Foo foo) {
    return Response.ok(fooService.update(id, foo)).build();
}
The problem is that the foo object in JSON will only contain the fields that have changed, but I never know upfront which fields those will be.
I use Quarkus with the Panache extension, and the only way I see is to retrieve the entity from the DB, check every single field of the foo object I received in the HTTP request to see whether it is null, set the new value in the entity if it is not, and finally call update() on the entity.
But that would become a nightmare if I have a class with dozens of fields.
It's such a common use case that I can't imagine (or don't want to believe) that this is the only way to do this.
If there were a way to send an incomplete document to MongoDB so that it would take care of changing only the fields present in that document, it would be perfect. But I didn't find a way to do this, neither with Quarkus (with or without Panache) nor with the Java driver for the Mongo API.
So is there an easier way to do this? I'd prefer a solution with the Quarkus MongoDB with Panache extension, but a solution without Panache, or even one using the Java driver API directly, would be OK.
PS: sending the full object from the frontend and replacing the whole document is not an option for me.
Thanks.
You can use the update(updateDocument, updateParams).where(query, params) method; it provides a more flexible way to update.
From the example in the documentation guide:
// set the name of all living persons to 'Mortal'
long updated = Person.update("name", "Mortal").where("status", Status.Alive);
For your use case, the query part is on the _id field, but you still need to dynamically build the update part based on which fields of your Foo object are present.
As you said, there is no easy way to do this through the Java client: when you use a typed collection (a collection bound to a Java type and not to a Document), null fields would be reflected to the database (how could the MongoDB client know whether null means "don't update the field" or "update the field to null"?).
So for this use case you can:
Forge an update Document with only the needed fields (this is what update(updateDocument, updateParams).where(query, params) does for you). A sketch of this option follows below.
Get the entity first, merge it manually (you can easily create a reflection-based helper for that), then update it.
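A rough sketch of the first option against the plain Java driver (the Foo getters and field names here are hypothetical; only the non-null fields end up in the update document):

import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Updates;
import org.bson.conversions.Bson;
import java.util.ArrayList;
import java.util.List;

// Collect a $set entry for every field that was actually sent.
List<Bson> sets = new ArrayList<>();
if (foo.getName() != null) {
    sets.add(Updates.set("name", foo.getName()));
}
if (foo.getColor() != null) {
    sets.add(Updates.set("color", foo.getColor()));
}
if (!sets.isEmpty()) {
    // Apply the partial update to the matching document only.
    collection.updateOne(Filters.eq("_id", id), Updates.combine(sets));
}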
I'm collecting info from a Neo4j DB, but the values I return are picked from multiple nodes, so what I'm basically returning is a table with some properties. For this example, let's say I return the properties color:String, name:String, count:String. I query these results using session.query(*QUERY*, queryParams).
Now, when I get the results, I want to map them to an existing Java object that I created to hold this data. This is somewhat different from "normal" mapping: in general, you map your graph nodes to objects that represent those nodes, whereas here my POJOs have nothing to do with the graph nodes.
I managed to do this using custom CompositeAttributeConverter classes for each of my data objects, but I feel there must be a better solution than writing a new class for every new object.
You might want to take a look at executing arbitrary Cypher queries using the Session object. You can get an Iterable<Map<String, Object>> from the returned Result object, which you can iterate over or just dump into a collection of Map results.
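A minimal sketch of that approach, assuming Neo4j OGM's Session and Result (the Cypher query and the MyRow class are placeholders):

import org.neo4j.ogm.model.Result;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

Result result = session.query(
        "MATCH (n) RETURN n.color AS color, n.name AS name, n.count AS count",
        Collections.emptyMap());

// Result is an Iterable<Map<String, Object>>: one map per returned row.
List<MyRow> rows = new ArrayList<>();
for (Map<String, Object> row : result) {
    rows.add(new MyRow(
            (String) row.get("color"),
            (String) row.get("name"),
            (String) row.get("count")));
}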
Or, if you have APOC Procedures installed, you can always write up a query to return your results as a JSON string, and convert that to JSON objects in Java with the appropriate library and use those as needed.
I am trying to insert a document (a JSON string) into a MongoDB. One of its keys, "profile", has a value which is itself a JSON string, so it is basically a nested JSON structure. I know it's possible to insert nested JSON by abusing collection refs / one-many relationships in the document class.
The issue I am facing here is that the JSON structure of the nested part is not fixed and hence cannot be abstracted into a Java class, as it is custom JSON data fetched from social networking APIs. Defining "profile" as a Java String inserts the profile data with slashes, escaping the double quotes, curly brackets, etc. in the JSON data.
Is there any other way, without casting it to another object?
The way to go is probably to make profile a Map<String, Object> in the containing class. This way, you can store arbitrary data within it.
class MyDocument {
    Map<String, Object> profile;
}
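One way to fill that map from the raw JSON string, sketched here with Jackson (MyDocument is the class above; profileJson stands in for the string from the social networking API):

import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.Map;

ObjectMapper mapper = new ObjectMapper();
MyDocument doc = new MyDocument();
// Parse the arbitrary JSON into a generic map, so it is stored as a
// nested document rather than an escaped string.
doc.profile = mapper.readValue(profileJson, new TypeReference<Map<String, Object>>() {});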
The answer of using a Map was a little unclear to me: how do you convert an arbitrary JSON String to a Map in a way that Spring Data will persist it as-is?
I found that using a property of type "com.mongodb.DBObject" works. Then set your property using JSON.parse:
profile = (DBObject) JSON.parse(arbitraryJSONString);
I am doing the following in my application server:
DBCollection collection = mongoDB.getCollection("collection");
DBCursor dbCursor = collection.find();
I have to send a JSON object to the client from the server, so how can I convert the DBCursor object to JSON?
Actually, I am sending a large collection to the client, and if I convert the documents of this collection into Java objects, it eats a lot of memory. So is there a way to convert the DBCursor directly to a JSON string, or any other method which solves my problem?
It would be a great help.
Thanks.
It looks like the Mongo driver creates DBObject instances when the find method is called, so it will not help to convert the DBCursor itself into JSON (converting the individual objects is possible using the serialize() method of the com.mongodb.util.JSON class).
Revised question:
Is there any way to get the data of a collection directly, without using the find method?
You don't want to convert the cursor to a JSON String; the cursor is simply a way to iterate over the values returned from find.
Asking if there's a way to get data from the collection without using "find" is a bit like asking how to get data from a SQL database without querying it - you can't. At some point you have to make a call to the database to get the data you want. In the MongoDB Java driver, this is done via the find() method. If you want to limit the number of documents returned (not surprisingly, a whole collection could take a lot of memory), pass the query details into the find method to filter your results.
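If memory is the concern, one option is to serialize each document as you iterate instead of materializing the whole collection as Java objects. A rough sketch with the legacy driver API from the question (the filter is a placeholder):

import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.util.JSON;

DBCollection collection = mongoDB.getCollection("collection");
DBCursor cursor = collection.find(new BasicDBObject("status", "active"));
StringBuilder json = new StringBuilder("[");
while (cursor.hasNext()) {
    // JSON.serialize turns one DBObject at a time into its JSON text
    json.append(JSON.serialize(cursor.next()));
    if (cursor.hasNext()) {
        json.append(",");
    }
}
json.append("]");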
However, if you are working exclusively with JSON, you might want to investigate third party libraries like MongoJack - this uses the Java driver under the covers, but translates the results of your query directly into JSON for you.