What is the JacksonDB equivalent of db.collection.find({},{"fieldName":1})? - java

I am new to Jackson DB. I know that to get the entire document list of a collection using Jackson DB we need to do:
CollectionClass.coll.find().toArray();
which is the Jackson DB equivalent of the mongodb command:
db.collection.find()
So what is the Jackson DB equivalent of, say:
db.collection.find({},{"fieldName":1, "_id":0})

As given here, this might be helpful to you (not tested):
coll.find(DBQuery.empty(), // add your search criteria here
    DBProjection.include("fieldName")).toArray();
Note: DBQuery.is() takes a field and a value, so for a match-everything query like your {} use DBQuery.empty() instead.

Related

How to convert the result set of a query to POJO classes which can further be parsed to create JSON?

I have a requirement where I have a complex DB query returning a certain result set. I have to map the result to POJOs. How can I achieve this with optimized code? Finally, I have to parse the POJOs to create JSON (the JSON schema is pictured below).
(image: db_objects JSON schema)
Example of query result set (pipe separated):
object_id|object_name|object_owner|object_type|status|parent_id|last_modified_timestamp
123_S1|ABC_S1|XYZ_S1|schema|valid|none|2019-11-09_20:40:11
123_S1T1|ABC_S1T1|XYZ_S1T1|table|valid|123_S1|2019-11-09_20:40:11
123_S1T1C1|ABC_S1T1C1|XYZ_S1T1C1|column|valid|123_S1T1|2019-11-09_20:40:11
123_S1T1C2|ABC_S1T1C2|XYZ_S1T1C2|column|valid|123_S1T1|2019-11-09_20:40:11
123_S1T1C3|ABC_S1T1C3|XYZ_S1T1C3|column|valid|123_S1T1|2019-11-09_20:40:11
123_S1T2|ABC_S1T2|XYZ_S1T2|table|valid|123_S1|2019-11-09_20:40:11
123_S1T2C1|ABC_S1T2C1|XYZ_S1T2C1|column|valid|123_S1T2|2019-11-09_20:40:11
123_S1T2C2|ABC_S1T2C2|XYZ_S1T2C2|column|valid|123_S1T2|2019-11-09_20:40:11
123_S1T2C3|ABC_S1T2C3|XYZ_S1T2C3|column|valid|123_S1T2|2019-11-09_20:40:11
123_S1V1|ABC_S1V1|XYZ_S1V1|view|valid|123_S1|2019-11-09_20:40:11
123_S1V1C1|ABC_S1V1C1|XYZ_S1V1C1|column|valid|123_S1V1|2019-11-09_20:40:11
123_S1V1C2|ABC_S1V1C2|XYZ_S1V1C2|column|valid|123_S1V1|2019-11-09_20:40:11
123_S1V1C3|ABC_S1V1C3|XYZ_S1V1C3|column|valid|123_S1V1|2019-11-09_20:40:11
123_S1V2|ABC_S1V2|XYZ_S1V2|view|valid|123_S1|2019-11-09_20:40:11
123_S1V2C1|ABC_S1V2C1|XYZ_S1V2C1|column|valid|123_S1V2|2019-11-09_20:40:11
123_S1V2C2|ABC_S1V2C2|XYZ_S1V2C2|column|valid|123_S1V2|2019-11-09_20:40:11
123_S1V2C3|ABC_S1V2C3|XYZ_S1V2C3|column|valid|123_S1V2|2019-11-09_20:40:11
PS: I tried the row-mapper approach but I am confused about how to maintain parent-child relationships: a schema can have a list of tables/views, and similarly a table/view can have multiple columns.
It would be much easier if you used an ORM tool like Hibernate. That way, your queries can return entity POJOs which can later be converted to JSON using a library like Jackson or Gson.
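If adding an ORM is not an option, the hierarchy can also be assembled by hand after a row-mapper pass. A minimal stdlib-only sketch (the class and field names are assumptions, not from the question): index every row by its id in a map, then wire each row into its parent's child list; the resulting roots can then be serialized to JSON with Jackson.

```java
import java.util.*;

// Sketch: build the parent-child tree from the flat pipe-separated
// result set in one indexing pass plus one linking pass.
// DbObject and its field names are illustrative assumptions.
public class ObjectTreeBuilder {
    static class DbObject {
        final String objectId, name, owner, type, status, parentId;
        final List<DbObject> children = new ArrayList<>();
        DbObject(String[] f) {
            objectId = f[0]; name = f[1]; owner = f[2];
            type = f[3]; status = f[4]; parentId = f[5];
        }
    }

    // Index every row by id, then attach each row to its parent's children list.
    static List<DbObject> buildTree(List<String[]> rows) {
        Map<String, DbObject> byId = new LinkedHashMap<>();
        for (String[] row : rows) byId.put(row[0], new DbObject(row));
        List<DbObject> roots = new ArrayList<>();
        for (DbObject o : byId.values()) {
            DbObject parent = byId.get(o.parentId);
            if (parent == null) roots.add(o);   // parent "none" or unknown -> root
            else parent.children.add(o);
        }
        return roots;
    }

    public static void main(String[] args) {
        // Three rows from the example (timestamp column omitted for brevity)
        List<String[]> rows = Arrays.asList(
            "123_S1|ABC_S1|XYZ_S1|schema|valid|none".split("\\|"),
            "123_S1T1|ABC_S1T1|XYZ_S1T1|table|valid|123_S1".split("\\|"),
            "123_S1T1C1|ABC_S1T1C1|XYZ_S1T1C1|column|valid|123_S1T1".split("\\|"));
        List<DbObject> roots = buildTree(rows);
        System.out.println(roots.size());                                  // 1
        System.out.println(roots.get(0).children.get(0).children.size());  // 1
    }
}
```

This keeps the row-mapper you already have; only the linking step is new, and it is O(n) over the result set.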

How to get the newest record for every user using spring data mongodb?

I am struggling with a mongo query. I need to find a collection of documents in a single query: for every user, the document with the newest date (field createdAt).
Here is a test case in Spock to demonstrate what I am trying to achieve:
def 'should filter the newest location for every user'() {
    given:
    List locationsInDb = [
            buildLocation(USERNAME_1, '2017-02-03T10:37:30.00Z'),
            buildLocation(USERNAME_1, '2017-03-04T10:37:30.00Z'),
            buildLocation(USERNAME_2, '2017-02-05T10:37:30.00Z'),
            buildLocation(USERNAME_2, '2017-03-06T10:37:30.00Z')
    ]
    insertToMongo(locationsInDb)

    when:
    List filteredLocations = locationRepository.findLastForEveryUser()

    then:
    filteredLocations == [locationsInDb.get(1), locationsInDb.get(3)]
}
I found that the distinct methods are part of version 2.1.0.M1, so they are not available yet.
I was also trying the @Query annotation, but the documentation (link below) does not explain how to create a query like mine.
https://docs.spring.io/spring-data/data-document/docs/current/reference/html/#d0e3309
Thanks for your help.
There is no way to express the query you are looking for via a derived query in Spring Data, nor with the MongoDB native query operators alone. Distinct will not do the job either, as it just extracts the distinct values of a single field into an array.
Please consider using an aggregation; the Spring Data specifics can be found in the reference documentation.
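The pipeline itself is a $sort on createdAt descending followed by a $group per user taking $first of $$ROOT. Its logic can be sketched in plain stdlib Java over an in-memory list (the Location record and its field names are assumptions standing in for the real domain class):

```java
import java.time.Instant;
import java.util.*;
import java.util.stream.Collectors;

// Plain-Java sketch of the aggregation's semantics:
// keep, per user, the document with the greatest createdAt.
// Equivalent MongoDB pipeline (what a Spring Data Aggregation would build):
//   [ { $sort:  { createdAt: -1 } },
//     { $group: { _id: "$username", doc: { $first: "$$ROOT" } } } ]
public class NewestPerUser {
    record Location(String username, Instant createdAt) {}

    static Collection<Location> newestPerUser(List<Location> all) {
        return all.stream()
                .collect(Collectors.toMap(
                        Location::username,
                        l -> l,
                        // on a username collision, keep the newer document
                        (a, b) -> a.createdAt().isAfter(b.createdAt()) ? a : b,
                        LinkedHashMap::new))
                .values();
    }

    public static void main(String[] args) {
        List<Location> db = List.of(
                new Location("user1", Instant.parse("2017-02-03T10:37:30.00Z")),
                new Location("user1", Instant.parse("2017-03-04T10:37:30.00Z")),
                new Location("user2", Instant.parse("2017-02-05T10:37:30.00Z")),
                new Location("user2", Instant.parse("2017-03-06T10:37:30.00Z")));
        newestPerUser(db).forEach(l ->
                System.out.println(l.username() + " " + l.createdAt()));
    }
}
```

In the repository you would implement findLastForEveryUser() in a custom repository fragment that runs the commented pipeline through MongoTemplate.aggregate.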

Spring Data Elasticsearch - Create keyword field with normalizer

We are using the spring-data-elasticsearch project to interface with our elasticsearch clusters, and have been using it now for around a year. Recently, we moved to elasticsearch 5.x (from 2.x) where we now have the "keyword" datatype.
I would like to index these keywords as lowercase values, which I know can be done with field normalizers. I can't find anywhere in the documentation or online where I can add a normalizer to a field through the annotation based mapping.
E.g.:
@Field(type = FieldType.Keyword, <some_other_param = some_normalizer>)
Is this something that can be done? I know that we can use JSON based mapping definitions as well, so I will fall back to that option if needed, but would like to be able to do it this way if possible.
Any help would be very appreciated!
Since the pull request of @xhaggi has been merged (spring-data-elasticsearch 3.1.3+ or Spring Boot 2.1.1), we have a normalizer attribute in the @Field annotation.
To use it, we need to:
declare a @Field or an @InnerField with params type = FieldType.Keyword, normalizer = "%NORMALIZER_NAME%"
add @Setting(settingPath = "%PATH_TO_NORMALIZER_JSON_FILE%") at the class level
put the normalizer mapping into a JSON file at %PATH_TO_NORMALIZER_JSON_FILE%
Example of usage
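A minimal sketch of those three steps (entity name, index name, normalizer name, and paths are all assumptions; requires spring-data-elasticsearch 3.1.3+ and is not runnable standalone):

```java
import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.Field;
import org.springframework.data.elasticsearch.annotations.FieldType;
import org.springframework.data.elasticsearch.annotations.Setting;

@Document(indexName = "articles")
@Setting(settingPath = "/elasticsearch/article-settings.json")  // step 2
public class Article {

    @Id
    private String id;

    // Step 1: keyword field indexed through the custom lowercase normalizer
    @Field(type = FieldType.Keyword, normalizer = "lowercase_normalizer")
    private String title;
}

/* Step 3: src/main/resources/elasticsearch/article-settings.json
{
  "analysis": {
    "normalizer": {
      "lowercase_normalizer": {
        "type": "custom",
        "filter": ["lowercase"]
      }
    }
  }
}
*/
```

With this in place, values of title are indexed lowercased, so keyword-style term queries match case-insensitively.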
FYI, for anyone looking at this: the answer is that there was no way to do this at the time.
You can, however, achieve it by creating your mappings file as JSON in the Elasticsearch format. See:
https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-mapping.html
You can then create that JSON file and link it to your domain model with:
@Mapping(mappingPath = "some/path/mapping.json")
Note that this is not, in my experience, compatible with the annotation-based mapping for fields.
There is a pending issue https://jira.spring.io/browse/DATAES-492 waiting for review.

Mapping a document with partly-defined schema

I'm writing a demo app using Spring & MongoDB as a database.
My main domain class looks like:
@Document
public class Person {

    @Id
    private String id;

    // Some other fields

    private DBObject additionalData;
}
The key point is that additionalData is a subdocument with no schema specified; it is a kind of user-defined JSON. But when I parse this JSON (using a (DBObject) JSON.parse(value) expression), it is stored as a string in MongoDB, and I need it to be a nested document structure.
I searched for a couple of hours and found no solution. Any ideas?
I'm not really sure what result to expect from casting the result of
JSON.parse(value)
to DBObject, which is an interface, not a class.
Try casting the result to an implementation of DBObject such as BasicDBObject (or BasicDBList), or to a Map<String, Object> as mentioned in the comments (it is also an interface, but it does work).
If you're working with Spring Data Rest, you will probably not need to deserialize "manually", Spring will do it for you. Check this answer for a basic example of what to do.
Having data with no schema specified may not be the best idea around (MongoDB relieves you of enforcing one at the database level, but you should still do it at the application level), but I use similar tricks in production, and you can make it work.

Regular expression Spring data mongodb repositories

Good morning,
I'm trying to combine regular expressions with a Spring Data MongoDB repository using the @Query annotation.
What I want is to search for a substring inside a string attribute of my Mongo document.
I have been looking on Google and here, but I did not find anything elegant, and I was wondering if Spring Data has anything official for this using repositories.
Regards.
It seems like an old question, so maybe you've already found a solution, but here is how I handled the same issue:
@Query(value = "{'title': {$regex : ?0, $options: 'i'}}")
Foo findByTitleRegex(String regexString);
Using the /?0/ notation won't work, since Spring Data binds the String value with quotes.
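As an aside on what that query matches: $options: 'i' is MongoDB's case-insensitive flag, the counterpart of Pattern.CASE_INSENSITIVE in plain Java, and $regex does an unanchored substring search. A stdlib-only sketch of those semantics (the titles and the pattern are made up for illustration):

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

// Stdlib sketch of what {'title': {$regex: ?0, $options: 'i'}} matches:
// an unanchored, case-insensitive regex search over the title field.
public class RegexFilterDemo {
    static List<String> findByTitleRegex(List<String> titles, String regexString) {
        Pattern p = Pattern.compile(regexString, Pattern.CASE_INSENSITIVE);
        return titles.stream()
                .filter(t -> p.matcher(t).find())  // find() = substring match, like $regex
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> titles =
                List.of("Spring Data Guide", "MongoDB Basics", "spring boot tips");
        System.out.println(findByTitleRegex(titles, "spring"));
    }
}
```

Note that matcher.find(), not matches(), mirrors MongoDB's behavior: the pattern may occur anywhere in the string unless you anchor it with ^ and $.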
