I'm building a Java Jersey API which uses MongoDB and the MongoDB driver.
The resources should output the stored MongoDB documents as JSON, to be used in a frontend project built with Svelte.
Due to the standard org.bson.Document.toJson() implementation, the output of my documents looks something like:
[{ "_id" : { "$oid" : "5e97f08f2175aa9174dbec0e" }, "hour" : 8, "minute" : 15, "enabled" : true, "duration" : 120 }
I would rather like it to be:
[{ "_id" : "5e97f08f2175aa9174dbec0e", "hour" : 8, "minute" : 15, "enabled" : true, "duration" : 120 }
That way it's easier to handle the id in the frontend. So how do I get rid of the $oid object?
I already managed to get the format as I wish by using:
import org.bson.json.JsonMode;
import org.bson.json.JsonWriterSettings;

JsonWriterSettings settings = JsonWriterSettings.builder()
    .outputMode(JsonMode.RELAXED)
    .objectIdConverter((value, writer) -> writer.writeString(value.toHexString()))
    .build();
System.out.println(doc.toJson(settings));
But how do I register this settings object globally so that every doc.toJson() call will use it?
And what will happen if I send modified or new documents from the frontend to the API and do:
Document document = Document.parse(doc);
Is my modified _id field automatically converted back to an ObjectId? Or do I need an org.bson.codecs.Decoder or a CodecRegistry? How would this be done?
$oid refers to the ObjectId field type in the BSON spec. As far as I know, you need to manipulate your document to replace the ObjectId in _id with a String:
String oidAsString = document.getObjectId("_id").toString();
document.put("_id", oidAsString);
I have a CSV structure like this, and I also have a JSON response:
[
{
"ID" : "1",
"Name" : "abc",
"Mobile" : "123456"
},
{
"ID" : "2",
"Name" : "cde",
"Mobile" : "123345"
}
]
I need the output like this
If your intention is to convert the JSON directly, then the Baeldung solution you were given is good.
Otherwise, the way I see it, and based on the info you're giving, you will need a representation of that JSON in a Java object, which will either represent some kind of request coming from somewhere or data you're getting from your database, in order to write it out as CSV (see the sketch after the links below).
Check these out; they might be useful:
https://www.codejava.net/frameworks/spring-boot/csv-export-example
https://zetcode.com/springboot/csv/
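As a rough sketch of that second approach, assuming Jackson is on the classpath and using the field names from the JSON above (the real CSV columns depend on your structure, which isn't shown):

import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.List;
import java.util.Map;

public class JsonToCsv {
    public static void main(String[] args) throws Exception {
        String json = "[{\"ID\":\"1\",\"Name\":\"abc\",\"Mobile\":\"123456\"},"
                + "{\"ID\":\"2\",\"Name\":\"cde\",\"Mobile\":\"123345\"}]";

        // Read the JSON array into one map per object.
        ObjectMapper mapper = new ObjectMapper();
        List<Map<String, String>> rows =
                mapper.readValue(json, new TypeReference<List<Map<String, String>>>() {});

        // Emit a header line, then one CSV line per object.
        StringBuilder csv = new StringBuilder("ID,Name,Mobile\n");
        for (Map<String, String> row : rows) {
            csv.append(row.get("ID")).append(',')
               .append(row.get("Name")).append(',')
               .append(row.get("Mobile")).append('\n');
        }
        System.out.print(csv);
    }
}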
I am executing a NativeSearchQuery in Spring Data Elasticsearch to perform a search operation. I am trying to fetch the "_version" value from the response through the Java API. The following is the sample request and response related to this use case:
GET /my-index/_doc/foo-bar
{
"_index" : "my-index",
"_type" : "_doc",
"_id" : "foo-bar",
"_version" : 88,
"_seq_no" : 169,
"_primary_term" : 1,
"found" : true,
"_source" : {
"_class" : "fooBar",
"value" : 1250087,
"creatdDateTime" : "20210203T124928.913Z",
"ModifiedDateTime" : "20210203T124928.913Z"
}
}
I am trying to fetch _version=88 from the above response through the Java API using NativeSearchQuery. How can I get this value?
When using Spring Data Elasticsearch, you need to have a property of type Long annotated with @Version. Spring Data Elasticsearch will then populate this property with the version value.
At the request level, you don't need to do anything as the _version field is always returned from search queries.
Each search hit present in the response will be transformed to a Document instance and if the _version value is >= 0, then the version value will be set on the Document instance.
Executing the search will then return an instance of SearchHits which contains an array of SearchHit, and that class provides a getVersion() method.
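A minimal sketch of such an entity, assuming the index and field names from the response above (they are illustrative):

import org.springframework.data.annotation.Id;
import org.springframework.data.annotation.Version;
import org.springframework.data.elasticsearch.annotations.Document;

@Document(indexName = "my-index")
public class FooBar {

    @Id
    private String id;

    // Spring Data Elasticsearch populates this with the _version of the document.
    @Version
    private Long version;

    private Long value;

    public Long getVersion() {
        return version;
    }
}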
I have an Apache Beam streaming job which reads data from Kafka and writes to Elasticsearch using ElasticsearchIO.
The issue I'm having is that messages in Kafka already have a key field, and using ElasticsearchIO.Write.withIdFn() I'm mapping this field to the document _id field in Elasticsearch.
Given the big volume of data, I don't want the key field to also be written to Elasticsearch as part of _source.
Is there an option/workaround that would allow doing that?
Using the Ingest API and the remove processor, you'll be able to solve this quite easily using only your Elasticsearch cluster. You can also simulate the ingest pipeline and inspect the results.
I've prepared an example which will probably cover your case:
POST _ingest/pipeline/_simulate
{
"pipeline": {
"description": "remove id form incoming docs",
"processors": [
{"remove": {
"field": "id",
"ignore_failure": true
}}
]
},
"docs": [
{"_source":{"id":"123546", "other_field":"other value"}}
]
}
You see, there is one test document containing a field "id". This field is not present in the response/result anymore:
{
"docs" : [
{
"doc" : {
"_index" : "_index",
"_type" : "_type",
"_id" : "_id",
"_source" : {
"other_field" : "other value"
},
"_ingest" : {
"timestamp" : "2018-12-03T16:33:33.885909Z"
}
}
}
]
}
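To apply this outside of the simulation endpoint, you would store the pipeline and reference it at index time, for example as the index's default pipeline. A sketch (the pipeline name remove-id is mine, and index.default_pipeline requires Elasticsearch 6.5 or later):

PUT _ingest/pipeline/remove-id
{
  "description": "remove id from incoming docs",
  "processors": [
    {"remove": {"field": "id", "ignore_failure": true}}
  ]
}

PUT my-index/_settings
{
  "index.default_pipeline": "remove-id"
}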
I've created a ticket in the Apache Beam JIRA describing this issue.
For now, the original issue cannot be resolved as part of the indexing process using the Apache Beam API.
The workaround that Etienne Chauchot, one of the maintainers, proposed is to have a separate task which will clear the indexed data afterwards.
See Remove a field from a Elasticsearch document for an example.
If someone would also like to leverage such a feature in the future, you might want to follow the linked ticket.
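For reference, such a cleanup task could use the Update By Query API to strip the field after indexing; a sketch, with index and field names being illustrative:

POST my-index/_update_by_query
{
  "query": { "exists": { "field": "id" } },
  "script": { "source": "ctx._source.remove('id')" }
}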
I'd like to confirm that a parser I wrote is working correctly. It takes a JavaScript MongoDB command that could be run from the terminal and converts it to a Java object for the MongoDB Java drivers.
Is the following .toString() result valid?
{ "NumShares " : 1 , "attr4 " : 1 , "symbol" : { "$regex" : "ESLR%"}}
This was converted from the following JavaScript:
db.STOCK.find({ "symbol": "ESLR%" }, { "NumShares" : 1, "attr4" : 1 })
And of course, the data as it rests in the collection:
{ "_id" : { "$oid" : "538c99e41f12e5a479269ed1"} , "symbol" : "ESLR" , "NumShares" : 3471.0}
Thanks for all your help
You've combined the query document and the projection document from that find() call into one document. That's probably not what you want. But those documents are just JSON, so you could use any parser to convert them. There are a few gotchas you'd have to deal with around ObjectIds, dates, DBRefs, and particularly regular expressions, but those can be managed without too much trouble by escaping/quoting them before parsing.
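For illustration, here is a sketch of how that call would split into two separate documents with the legacy Java driver (the DBCollection is assumed to be already obtained):

import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;

public class FindExample {
    // Mirrors db.STOCK.find({"symbol": "ESLR%"}, {"NumShares": 1, "attr4": 1}):
    // find() takes the query and the projection as two separate documents.
    static DBCursor find(DBCollection stock) {
        BasicDBObject query = new BasicDBObject("symbol", "ESLR%");
        BasicDBObject projection = new BasicDBObject("NumShares", 1).append("attr4", 1);
        return stock.find(query, projection);
    }
}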
A very quick question: how am I going to do the following?
> db.blog.posts.findOne()
{
"_id" : ObjectId("4b253b067525f35f94b60a31"),
"title" : "A Blog Post",
"content" : "...",
"author" : {
"name" : "joe",
"email" : "joe#example.com"
}
}
I saw that the answer in JavaScript is like:
> db.blog.posts.update({"author.name" : "joe"}, {"$set" : {"author.name" : "joe schmoe"}})
But how am I going to do that in Java?
If I have a value at a very deep level that has to be changed, am I supposed to do it this way: "person.abc.xyz.name.address"?
Using dot notation to access nested documents will work perfectly well in the Java Driver. Take a look at this StackOverflow answer:
MongoDB nested documents searching
For the Java Driver, the basic idea is to replace the JavaScript objects with instances of BasicDBObject.
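A minimal sketch of the update from the question (the posts variable is assumed to be a DBCollection):

import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;

public class UpdateExample {
    // Equivalent of:
    // db.blog.posts.update({"author.name": "joe"}, {"$set": {"author.name": "joe schmoe"}})
    static void renameAuthor(DBCollection posts) {
        BasicDBObject query = new BasicDBObject("author.name", "joe");
        BasicDBObject update = new BasicDBObject("$set",
                new BasicDBObject("author.name", "joe schmoe"));
        posts.update(query, update);
    }
}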
Here's another good reference for updating:
MongoDb's $set equivalent in its java Driver