I'm currently using Spring Boot with Spring Data JPA to connect to an Oracle database. With a single parameter I can just use the Spring repository method findById(Long id) and it works great. Search, on the other hand, is much more complicated for me. In our case, the user provides a nested JSON object with multiple optional search parameters (at this point I can't change the way they send their data; it has to be through this nested JSON). Here is what the JSON input object looks like:
{
  "agent_filter": {
    "first_name": "string",
    "last_name": "string",
    "agentNumber": "string",
    "agentCode": "string"
  },
  "account": "string",
  "status": "string",
  "paid": "string",
  "amount": "string",
  "person_filter": {
    "date_of_birth": "string",
    "first_name": "string",
    "last_name": "string",
    "tax_id": "string"
  }
}
All the search criteria are optional, except that at least one parameter must be provided.
On the back-end we have the following entities:
@Entity
Account {
    @OneToMany
    List<PersonRole> personRoles;
    @OneToMany
    List<AgentRole> agentRoles;
}
@Entity
PersonRole {
    String role;
    Person person;
}
@Entity
AgentRole {
    String role;
    Agent agent;
}
@Entity
Person { ... }
@Entity
Agent { ... }
So to provide the search functionality I can do multiple joins. I started with JPQL and the @Query annotation, but I had to add "is null or" checks for each parameter and it became a big mess. I started looking into other options and saw mentions of QueryDSL, the Criteria API, and Specifications, but I wasn't sure which one I should focus on and learn. Unfortunately I don't know a whole lot about this subject, and I was hoping someone could point me in the right direction for a good implementation of this search. Thank you!
QueryDSL ftw!
Let me give you an example from my code, from when I had a very similar problem to yours: a bunch of fields I wanted to filter on, many of which could be null.
By the way, since you mention needing multiple joins ("So to provide the search functionality I can do multiple joins"), you're probably going to end up using QueryDSL directly. These examples are for QueryDSL 3, so you might have to adjust them for QueryDSL 4.
First you create yourself a BooleanBuilder and then do something like this:
BooleanBuilder builder = new BooleanBuilder();
QContent content = QContent.content;

if (contentFilter.headlineFilter == null || contentFilter.headlineFilter.trim().length() == 0) {
    // no filtering on headline, as the headline filter is null or blank
} else if (contentFilter.headlineFilter.equals(Filter.NULL_STRING)) {
    // special case when you want to filter for a specifically null headline
    builder.and(content.label.isNull());
} else {
    try {
        // numeric input is treated as an id lookup
        long parseLong = Long.parseLong(contentFilter.headlineFilter);
        builder.and(content.id.eq(parseLong));
    } catch (NumberFormatException e) {
        // otherwise do a substring match on the label
        builder.and(content.label.contains(contentFilter.headlineFilter));
    }
}
if (contentFilter.toDate != null) {
    builder.and(content.modifiedDate.loe(contentFilter.toDate));
}
if (contentFilter.fromDate != null) {
    builder.and(content.modifiedDate.goe(contentFilter.fromDate));
}
So based on whether or not you have each field you can add it to the filter.
To get this to work you're going to need to generate the QueryDSL metadata; that is done with the com.mysema.query.apt.jpa.JPAAnnotationProcessor annotation processor, which generates the QContent.content class used above.
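Once the predicate is built, Spring Data can execute it for you if the repository extends the QueryDSL executor interface (called QueryDslPredicateExecutor in older Spring Data releases, QuerydslPredicateExecutor from Spring Data 2.0 on). A minimal sketch, with ContentRepository and Content as illustrative names:

```java
// Sketch: letting Spring Data execute the BooleanBuilder.
// ContentRepository / Content are illustrative names.
public interface ContentRepository
        extends JpaRepository<Content, Long>, QuerydslPredicateExecutor<Content> {
}

// In a service method, after building the predicate as shown above:
// Iterable<Content> results = contentRepository.findAll(builder);
```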
Note that BooleanBuilder implements Predicate, so it can be passed anywhere a Predicate is expected.
Going with QueryDSL, the Criteria API, or Specifications is a good approach, but each of them takes some learning.
Your problem can also be solved using JpaRepository alone. Your AccountRepository presumably extends JpaRepository, which in turn extends QueryByExampleExecutor.
QueryByExampleExecutor provides methods like findOne(Example<S> example) and findAll(Example<S> example), which return results based on the Example object you pass.
Creating an Example is simple:
Person person = new Person();
person.setFirstname("Dave");
Example<Person> example = Example.of(person);
This will match every Person whose firstName is "Dave".
Read more on Spring Data Query by Example.
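For the nested, all-optional filter in the question, a probe object combined with an ExampleMatcher may get you further, since unset (null) properties are ignored by default. A sketch, assuming a Person entity with a firstname property:

```java
// Sketch: Query by Example with a matcher; null fields on the probe
// are ignored, which suits optional search parameters.
Person probe = new Person();
probe.setFirstname("Dave");

ExampleMatcher matcher = ExampleMatcher.matching()
        .withIgnoreCase()                              // case-insensitive comparison
        .withStringMatcher(StringMatcher.CONTAINING);  // substring instead of exact match

Example<Person> example = Example.of(probe, matcher);
// personRepository.findAll(example);
```

Be aware that Query by Example has limits: it does not support constraints across nested to-many associations, so on its own it may not cover the multi-join search in the question.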
You can also use a custom query to create your own search query.
@Query("select u from User u where u.firstname = :#{#customer.firstname}")
List<User> findUsersByCustomersFirstname(@Param("customer") Customer customer);
Now you can add as many parameters as you want.
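To make each of those parameters optional within a single @Query, the usual pattern is a null guard per parameter (the same "is null or" style the question mentioned). A sketch with illustrative entity and field names:

```java
// Sketch: one JPQL query where every parameter is optional.
// Account and its fields here are illustrative names.
@Query("select a from Account a "
     + "where (:account is null or a.accountNumber = :account) "
     + "and (:status is null or a.status = :status)")
List<Account> search(@Param("account") String account,
                     @Param("status") String status);
```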
Related
I have these entities:
entity Employee {
  firstName String
  lastName String
}
entity Role {
  name String
}
relationship OneToMany {
  Employee{role} to Role{employee required}
}
And I want the ParentDTO to return something like this:
{
  "id": 1,
  "firstName": "John",
  "lastName": "Doe",
  "roles": [
    {
      "id": 1,
      "name": "Admin"
    },
    {
      "id": 2,
      "name": "Collector"
    }
  ]
}
I don't know how to do it and I want to learn.
Can someone please tell me how to do it, manually or automatically, and what specifically to change? I am new to JHipster and MapStruct.
Thanks, and sorry for disturbing.
Have a look at the JHipster documentation on "A unidirectional one-to-many relationship". That is what you have defined, but it is not supported as-is: from a DB perspective, the entity of which there are many needs to keep track of the one it is associated with.
You probably need to review that entire page, but they recommend a bi-directional relationship:
relationship OneToMany {
  Owner{car} to Car{owner required}
}
I made the owner required so that the fake data would be generated. Remove it if cars can be created without owners.
Adding the DTO option automatically creates the services. You will need to modify the OwnerDTO to add a cars attribute, and then modify the OwnerMapper to populate the cars by fetching them from the CarRepository, to which you need to add a findByOwner method.
This should help, although it doesn't follow the same pattern as the latest generated code:
https://www.jhipster.tech/using-dtos/#advanced-mapstruct-usage
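Following that advice, the mapper change might look roughly like this. OwnerMapper, CarMapper, and findByOwner are names assumed from the description above, not generated code, and the generated mapper method names may differ:

```java
// Sketch: enriching the generated DTO with the cars fetched per owner.
// All names here follow the Owner/Car example and are assumptions.
@Mapper(componentModel = "spring", uses = { CarMapper.class })
public abstract class OwnerMapper {

    @Autowired
    protected CarRepository carRepository;

    @Autowired
    protected CarMapper carMapper;

    public OwnerDTO toDto(Owner owner) {
        OwnerDTO dto = new OwnerDTO();
        dto.setId(owner.getId());
        dto.setFirstName(owner.getFirstName());
        dto.setLastName(owner.getLastName());
        // findByOwner is the finder to add to CarRepository
        dto.setCars(carMapper.toDto(carRepository.findByOwner(owner)));
        return dto;
    }
}
```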
Correction: when children are not appearing for domain entities, that is just the default lazy loading. You can change it by adding the fetch type, e.g.
@OneToMany(mappedBy = "owner", fetch = FetchType.EAGER)
I'm having trouble finding data with MongoDB. I can't query the data using the id.
This is my Entity:
@Document(language = "german", collection = "feature")
@ToString
@Data
@EqualsAndHashCode
public class Feature {
    @Id private String id;
    private String name;
    private FeatureType type;
    private List<String> productIds;
    private Client client;
}
I'm querying the data like this:
mongoTemplate.find(query(where("_id").is(new ObjectId("60d0ae8d01aad528e2496930"))), Feature.class);
Whether I use an ObjectId or just the id String, it doesn't work; I always get an empty array back. I have confirmed that the object with that id is in the database. Just using the repository method is not a solution for me, because this is a simplified example of a custom query that does not work either.
Also, using id or _id generates the same query (see the third code listing), so replacing one with the other makes no difference.
When I run the outputted query directly against the database, it works though. The outputted query is the first one below; I can't use $oid there, because it is an unknown operator, so I replace it with new ObjectId, resulting in the second query. According to the documentation it should make no difference.
db.feature.find({"_id": {"$oid": "60d0ae8d01aad528e2496930"}});
db.feature.find({"_id": new ObjectId("60d0ae8d01aad528e2496930")});
Funnily enough, when I use is or in I get an empty array, but when I use ne or nin I get all elements except the one with the given id.
mongoTemplate.find(query(where("_id").ne(new ObjectId("60d0ae8d01aad528e2496930"))), Feature.class);
Is this a mistake on my end, or is there an error in how the is method is implemented? Does anyone have an idea?
Update:
Here is the document I expect from the database. I've had to replace the package path of the class, though.
[
{
"_id": {"$oid": "60d0ae8d01aad528e2496930"},
"_class": "<path>.Feature",
"client": "CLIENTNAME",
"name": "all products",
"productIds": ["60c6f7f7f0708603ff37bc5a", ... , "60c6fba7f0708603ff38b5f4"],
"type": "ALL"
}
]
I am trying to write deserialization code for responses of user-defined GraphQL queries. The code has access to the query response in JSON-serialized form and the underlying GraphQL schema (by querying the endpoint's schema.json or making introspection requests).
Assume the following schema:
scalar Date
type User {
name: String
birthday: Date
}
type Query {
allUsers: [User]
}
schema {
query: Query
}
And the following query:
query {
allUsers {
name
birthday
}
}
The response may look like this (only includes the data.allUsers-field from the full response for brevity):
[
{"name": "John Doe", "birthday": "1983-12-07"}
]
What I am attempting to do is deserialize the above response in a manner that preserves type information, including for any custom scalars. In the above example, I know by convention that the GraphQL scalar Date should be deserialized as LocalDate in Java, but just from the response alone I do not know that the birthday field represents the GraphQL scalar type Date, since it's serialized as a regular string in JSON.
What I can do is try to utilize the GraphQL schema for this. For the above example, the schema may look something like this (shortened for brevity):
...
"types": [
  {
    "kind": "OBJECT",
    "name": "User",
    "fields": [
      {
        "name": "name",
        "type": {
          "kind": "SCALAR",
          "name": "String"
        }
      },
      {
        "name": "birthday",
        "type": {
          "kind": "SCALAR",
          "name": "Date"
        }
      }
...
From this information I can deduce that the response's birthday field is of type Date, and deserialize it accordingly. However, things get more complicated if the query uses non-trivial GraphQL features. Take aliasing, for example:
query {
allUsers {
name
dayOfBirth: birthday
}
}
At this point I would already need to keep track of any aliasing (which I could do since that information is available if I parse the query), and backtrack those to find the correct type. I fear it might get even more complicated if e.g. fragments are used.
Given that I use graphql-java, which appears to already handle all of these cases for serialization, I wondered whether there is an easier way to do this than manually backtracking the types from the query and schema.
How about generating Java classes from the schema and then using those classes to deserialize? There is a plugin I have used for this before: graphql-java-generator. It basically generates a Java client for invoking your GraphQL queries in a Java way.
You may need to enhance the plugin a bit to support your custom scalars, though.
I had the same problem deserializing a LocalDate attribute, even when using the graphql-java-extended-scalars library.
Researching, I found that this library works well for queries but not so well for mutations.
I fixed my problem by customizing SchemaParserOptions, like this:
@Bean
public SchemaParserOptions schemaParserOptions() {
    return SchemaParserOptions.newOptions().objectMapperConfigurer((mapper, context) -> {
        mapper.registerModule(new JavaTimeModule());
    }).build();
}
On the attribute itself I didn't use any serialization or deserialization annotations.
I need to support a Spring app that uses Elasticsearch as data storage, and I need to extract some data filtered by term, like:
POST http://localhost:1234/library/myType/_search
{
"query": {
"bool": {
"filter": {"term": {"myTextField": "filterValue"}}
}
}
}
The problem is that the fields in the Java models are annotated like
@Field(type = FieldType.String)
and not like
@Field(type = FieldType.Keyword)
I've tried to google the Keyword annotation, but it looks like there is a workaround I haven't discovered. How do I annotate a model field so it can be filtered by terms in a query?
The keyword data type was added in Elasticsearch 5, so you won't find it in spring-data-elasticsearch 2.0.3.
You need to declare your field as not_analyzed instead, i.e. like this:
@Field(type = FieldType.String, index = FieldIndex.not_analyzed)
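With the field mapped as not_analyzed, the term filter from the question can then be issued from Java. A sketch against the spring-data-elasticsearch 2.x API, where MyType and the template/field names are taken from the question or assumed:

```java
// Sketch: the bool/term filter from the question, built with the
// Elasticsearch 2.x query builders that spring-data-elasticsearch 2.x wraps.
SearchQuery searchQuery = new NativeSearchQueryBuilder()
        .withQuery(QueryBuilders.boolQuery()
                .filter(QueryBuilders.termQuery("myTextField", "filterValue")))
        .build();

List<MyType> hits = elasticsearchTemplate.queryForList(searchQuery, MyType.class);
```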
I just can't get my head around this one.
I have an object called PermissionDto:
@XmlRootElement(name = "permission")
@XmlAccessorType(FIELD)
public class PermissionDto implements Serializable {
    private static final long serialVersionUID = 1L;
    private Link entity;
    // ... some other properties, constructors and getters
}
This will produce the following JSON:
{
"entity":
{
"rel": "users",
"href": ...
}
}
The entity rel can be (currently) "users" or "roles". What I would like would be to produce the following JSON when rel is "users":
{
"users":
{
"rel": "users",
"href": ...
}
}
and when the rel is "roles":
{
"roles":
{
"rel": "roles",
"href": ...
}
}
without having to create a UserPermissionDto and a separate RolePermissionDto, considering they are exactly the same except for this one property. I can call entity.getRel() to find out the rel of the Link.
Please note that my server can also produce XML representations of these responses, which means the element
<entity rel="users" href="http://localhost:8080/users/1"/>
should also be renamed as indicated above (either "users" or "roles" instead of "entity").
I use JAXB for XML and Jackson for JSON.
Any help is much appreciated.
Thanks
I'm not sure if this will work for JSON, but try @XmlElements:
@XmlElements({
    @XmlElement(name = "users", type = Users.class),
    @XmlElement(name = "roles", type = Roles.class)
})
public Link entity;
This assumes Users and Roles extend Link. In this case, depending on whether your object has an instance of Users or Roles, you'll get users or roles element names. Not sure about JSON.
It may well be that this only works for collection properties.
We actually turned the problem upside down and found a better solution.
We decided to implement our API in full HATEOAS fashion; this way, users and roles are nothing more than links to those specific entities.
Using the Spring HATEOAS project, PermissionDto was refactored into a PermissionResource, which extends ResourceSupport. This means a PermissionResource can be composed with either user or role URIs depending on which permission resource assembler is used.