Using Spring Data JPA 1.11.6.RELEASE, I have the following problem:
I have TableA and TableB with a simple one-to-many relationship between them, where A contains multiple B. I also have RepoA, which modifies not only TableA but also TableB as a dependent child.
So, when I have the following in the DB (entities written as JSON):
{
"uuid": "f10cdd75-ffbe-49e6-b7a5-ad6f8e15b2b5",
"name": "title",
"listOfB": [{
"pk": 1
}, {
"pk": 2
}]
}
and I'd like to update TableA, and consequently its TableB's, with the following through RepoA:
{
"uuid": "f10cdd75-ffbe-49e6-b7a5-ad6f8e15b2b5",
"name": "title",
"listOfB": [{
"pk": 2
}, {
"pk": 3
}]
}
But I am getting a constraint violation because Hibernate follows its famous order of operations: it tries to insert all the dependent TableB values without first removing the original ones.
Is there no way to overwrite the TableB entities? The only solution I was able to find was this (sketched in code after the steps):
select `TableA` with `TableB` values
clear `"listOfB"`
save & flush `TableA` // deletes all current `TableB`
add new `B`'s to `"listOfB"`
save again
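Roughly, that workaround looks like this in code (a minimal sketch with hypothetical names; it assumes the mapping is set up so that clearing `"listOfB"` actually removes the child rows, e.g. via orphan removal):
// Minimal sketch of the workaround steps above (hypothetical entity/repository names).
TableA existing = repoA.findOne(incoming.getUuid()); // select TableA with TableB values
existing.getListOfB().clear();                        // clear "listOfB"
repoA.saveAndFlush(existing);                         // save & flush -> deletes all current TableB
existing.getListOfB().addAll(incoming.getListOfB());  // add new B's to "listOfB"
repoA.save(existing);                                 // save again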
But it is very laborious and ugly, and the more such tables I have, the more of this code I have to write. Can't I have some definition in JPA to allow this behaviour automatically, i.e. treat this table not as a proper table but as a basic resource that should be overwritten in every update request?
I have these entities:
entity Employee {
firstName String
lastName String
}
entity Role {
Name String
}
relationship OneToMany {
Employee{role} to Role{employee required}
}
And I want to return in the ParentDTO something like this:
{
"id": 1,
"firstName": "John",
"lastName": "Doe",
"roles":
[
{
"id": 1
"name": "Admin"
},
{
"id": 2
"name": "Collector"
}
]
}
I don't know how to do it and I want to learn.
Can someone please tell me how to do it, manually or automatically, and what specifically to change? I am new to JHipster and MapStruct.
Thanks, and sorry for disturbing.
Have a look at A unidirectional one-to-many relationship. This is what you have defined but it is not supported as-is. From a DB perspective the entity of which there are many needs to keep track of the one it is associated with.
You probably need to review that entire page, but they recommend a bi-directional relationship:
relationship OneToMany {
Owner{car} to Car{owner required}
}
I made the owner required so that the fake data would be generated. Remove it if cars can be created without owners.
Adding the DTO option automatically creates the services. You will need to modify the OwnerDTO to add the cars attribute. You will then need to modify the OwnerMapper to add in the cars, by getting them from the CarRepository to which you need to add findByOwner.
This should help, although it doesn't follow the same pattern as the latest generated code:
https://www.jhipster.tech/using-dtos/#advanced-mapstruct-usage
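For illustration, a rough sketch of those changes (hypothetical code, assuming a MapStruct mapper declared as an abstract class; the latest generated JHipster code is structured differently):
// CarRepository: add the finder used by the mapper (hypothetical sketch).
public interface CarRepository extends JpaRepository<Car, Long> {
    List<Car> findByOwner(Owner owner);
}

// OwnerDTO: add the cars attribute (getters/setters omitted).
public class OwnerDTO implements Serializable {
    private Long id;
    private List<CarDTO> cars;
}

// OwnerMapper: fill in the cars after the basic mapping has run.
@Mapper(componentModel = "spring")
public abstract class OwnerMapper {

    @Autowired
    private CarRepository carRepository;

    @Autowired
    private CarMapper carMapper;

    public abstract OwnerDTO toDto(Owner owner);

    @AfterMapping
    protected void addCars(Owner owner, @MappingTarget OwnerDTO ownerDTO) {
        ownerDTO.setCars(carRepository.findByOwner(owner).stream()
                .map(carMapper::toDto)
                .collect(Collectors.toList()));
    }
}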
Correction: When children are not appearing for domain entities, that is just the default Lazy loading. You can change it by adding the fetch type: e.g.
@OneToMany(mappedBy = "owner", fetch = FetchType.EAGER)
I am trying to write deserialization code for responses of user-defined GraphQL queries. The code has access to the query response in JSON-serialized form and the underlying GraphQL schema (by querying the endpoint's schema.json or making introspection requests).
Assume the following schema:
scalar Date
type User {
name: String
birthday: Date
}
type Query {
allUsers: [User]
}
schema {
query: Query
}
And the following query:
query {
allUsers {
name
birthday
}
}
The response may look like this (only the data.allUsers field from the full response is included for brevity):
[
{"name": "John Doe", "birthday": "1983-12-07"}
]
What I am attempting to do is deserialize the above response in a manner that preserves type information, including for any custom scalars. In the above example, I know by convention that the GraphQL scalar Date should be deserialized as LocalDate in Java, but just from the response alone I do not know that the birthday field represents the GraphQL scalar type Date, since it's serialized as a regular string in JSON.
What I can do is try to utilize the GraphQL schema for this. For the above example, the schema may look something like this (shortened for brevity):
...
"types": [
{
"kind": "OBJECT",
"name": "User",
"fields": [
{
"name": "name",
"type": {
"kind": "SCALAR",
"name": "String"
}
},
{
"name": "birthday"
"type": {
"kind": "SCALAR",
"name": "Date"
}
}
...
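For illustration, the lookup from an object type and field name to the scalar type name could be sketched like this (a hypothetical helper, assuming the introspection JSON has been parsed into a Jackson JsonNode):
// Hypothetical helper: resolveScalar(schema, "User", "birthday") would return "Date".
public static String resolveScalar(JsonNode schemaRoot, String objectType, String fieldName) {
    for (JsonNode type : schemaRoot.path("types")) {
        if ("OBJECT".equals(type.path("kind").asText())
                && objectType.equals(type.path("name").asText())) {
            for (JsonNode field : type.path("fields")) {
                if (fieldName.equals(field.path("name").asText())
                        && "SCALAR".equals(field.path("type").path("kind").asText())) {
                    return field.path("type").path("name").asText();
                }
            }
        }
    }
    return null; // not found or not a plain scalar field
}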
From this information I can deduce that the response's birthday field is of type Date, and deserialize it accordingly. However, things get more complicated if the query uses non-trivial GraphQL features. Take aliasing, for example:
query {
allUsers {
name
dayOfBirth: birthday
}
}
At this point I would already need to keep track of any aliasing (which I could do, since that information is available if I parse the query) and map those aliases back to find the correct type. I fear it might get even more complicated if e.g. fragments are used.
Given that I use graphql-java, and it appears to already need to handle all of these cases for serialization, I wondered if there was an easier way to do this than to manually backtrack the types from the query and schema.
How about generating Java classes from the schema and then using those classes to deserialize? There is one plugin I have used before for this: graphql-java-generator.
You may need to enhance the plugin a bit to support your custom scalars, though.
It basically generates a Java client for invoking your GraphQL queries in a Java way.
I had the same problem deserializing a LocalDate attribute, even when using the graphql-java-extended-scalars library.
While researching, I found that this library works well for queries but not so well for mutations.
I fixed my problem by customizing SchemaParserOptions, like this:
@Bean
public SchemaParserOptions schemaParserOptions() {
    return SchemaParserOptions.newOptions().objectMapperConfigurer((mapper, context) -> {
        mapper.registerModule(new JavaTimeModule());
    }).build();
}
On the object I didn't use any serialization or deserialization annotations.
So I built the API with CRUD on Spring Boot; the issue arises due to the bidirectional nature of the entities.
I can create it fine manually through the application (non-API), and it appears with its children and all.
However, once the API is up, I try to POST (to create) a JSON such as this:
{
"idReserva": 1,
"comentarios": "",
"fechaIngreso": "0019-07-15",
"fechaSalida": "0019-10-30",
"cantidadDePersonas": 3,
"usuario": {
"idUsuario": 1,
"nombres": "test",
"apellidos": "test",
"contrasena": "1234",
"codUsuario": "USU01",
"email": "test#gmail.com",
"foto": ""
},
"pagos": [
{
"idPago": 1,
"tipo": "Efectivo",
"total": 1500
}
],
"habitaciones": [
{
"idHabitacion": 1,
"descripcion": "HabitaciĆ³n Ejecutiva",
"tipo": 3,
"numero": "5",
"codHabitacion": "HAB01",
"precio": "1500 dolares"
}
]
}
The issue is that in my "create" method inside the repository I can't receive the nested entities; it does create the "reserva" entry in the database, but it doesn't give it its children.
List<Pago> listPagos = new ArrayList<>();
for (Pago pago : reserva.getPagos()) {
    log.info(pago.getIdPago() + "");
    pagoService.create(pago);
    listPagos.add(pago);
}
reserva.setPagos(listPagos);
I tried something like the above to obtain each "pago" (payment) entity from the JSON and then create it and add it to the reserva, since I need it to have the fields of its child payments in the database. But when I log the entities I get "null", as if nothing is being received. Is there any specific way I need to obtain the nested entities?
Alright, so after a few hours of working around it, I found the issue. The API itself was missing something crucial: when you want to save inside the resource (API) layer, before you actually .save() through the service layer, you need to go through the child entities from the JSON. Using a for loop, set the parent on each child instance; JPA will then automatically create the children and attach them to the parent entity as well.
Example:
for (Habitacion habitacion : reserva.getHabitaciones()) {
    habitacion.setReserva(reserva);
}
for (Pago pago : reserva.getPagos()) {
    pago.setReserva(reserva);
}
Usuario usuario = reserva.getUsuario();
usuario.setReserva(reserva);
(this is inside the createReserva method from the resource layer)
I'm currently using Spring Boot with Spring Data JPA to connect to an Oracle database. With one parameter I just use the Spring repository findById(Long id) and it works great. Search, on the other hand, is much more complicated for me. In our case, the user provides a nested JSON object with multiple optional search parameters (at this point I can't change the way they send their data; it has to be through the nested JSON). Here is what the JSON input object looks like:
{
"agent_filter": {
"first_name": "string",
"last_name": "string",
"agentNumber": "string",
"agentCode": "string"
},
"account": "string",
"status": "string",
"paid": "string",
"amount": "string",
"person_filter": {
"date_of_birth": "string",
"first_name": "string",
"last_name": "string",
"tax_id": "string"
}
}
All the search criteria are optional (except that at least one parameter must be provided).
On the back-end we have the following entities:
@Entity
class Account {
    @OneToMany
    List<PersonRole> personRoles;
    @OneToMany
    List<AgentRole> agentRoles;
}

@Entity
class PersonRole {
    String role;
    Person person;
}

@Entity
class AgentRole {
    String role;
    Agent agent;
}

@Entity
class Person {...}

@Entity
class Agent {...}
So to provide the search functionality I can do multiple joins. I started using JPQL with an @Query annotation, but I had to do `is null or` checks for each parameter and it became a big mess. I started looking into other options and saw stuff about QueryDSL, Criteria, and Specifications, but I wasn't sure which one I should focus on and learn. Unfortunately I don't know a whole lot about this subject, and I was hoping someone could point me in the right direction for a good implementation of this search. Thank you!
QueryDSL ftw!
Let me give you an example from my code, from when I had a very similar problem to yours: a bunch of fields I wanted to filter on, many of which could be null...
Btw, these examples are for QueryDSL 3, so you might have to adjust them for QueryDSL 4. And since you mentioned "to provide the search functionality I can do multiple joins", you're probably going to need to use QueryDSL directly for those fancy joins.
First you create yourself a BooleanBuilder and then do something like this:
BooleanBuilder builder = new BooleanBuilder();
QContent content = QContent.content;
if (contentFilter.headlineFilter == null || contentFilter.headlineFilter.trim().length() == 0) {
    // no filtering on headline as headline filter = null or blank
} else if (contentFilter.headlineFilter.equals(Filter.NULL_STRING)) {
    // special case when you want to filter for specific null headline
    builder.and(content.label.isNull());
} else {
    try {
        long parseLong = Long.parseLong(contentFilter.headlineFilter);
        builder.and(content.id.eq(parseLong));
    } catch (NumberFormatException e) {
        builder.and(content.label.contains(contentFilter.headlineFilter));
    }
}
if (contentFilter.toDate != null) {
    builder.and(content.modifiedDate.loe(contentFilter.toDate));
}
if (contentFilter.fromDate != null) {
    builder.and(content.modifiedDate.goe(contentFilter.fromDate));
}
So based on whether or not you have each field you can add it to the filter.
To get this to work you're going to need to generate the QueryDSL metadata - that is done with the com.mysema.query.apt.jpa.JPAAnnotationProcessor annotation processor. It generates the QContent.content classes used above.
That BooleanBuilder is an implementation of Predicate.
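Once built, the predicate can be handed straight to Spring Data, assuming your repository also extends the Querydsl executor interface (a sketch; the interface is called QueryDslPredicateExecutor in older Spring Data versions and QuerydslPredicateExecutor in newer ones):
// Sketch: the repository picks up Querydsl support in addition to the usual CRUD methods.
public interface ContentRepository
        extends JpaRepository<Content, Long>, QuerydslPredicateExecutor<Content> {
}

// The BooleanBuilder from above is a Predicate, so it can be passed directly:
Iterable<Content> matches = contentRepository.findAll(builder);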
However, going with QueryDSL, Criteria, or Specifications is a good approach, but it will require learning them.
Your problem can be solved using JpaRepository alone. Your AccountRepository probably extends JpaRepository, which in turn extends QueryByExampleExecutor.
QueryByExampleExecutor provides methods like findOne(Example<S> example) and findAll(Example<S> example), which give you results back based on the Example object you pass.
Creating an Example is simple:
Person person = new Person();
person.setFirstname("Dave");
Example<Person> example = Example.of(person);
This will match all Person entities that have firstName = "Dave".
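A minimal usage sketch (hypothetical PersonRepository; no extra repository methods are needed because JpaRepository already pulls in QueryByExampleExecutor):
// Sketch: the probe's non-null fields become the match criteria.
public interface PersonRepository extends JpaRepository<Person, Long> {
}

Person probe = new Person();
probe.setFirstname("Dave");
List<Person> daves = personRepository.findAll(Example.of(probe));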
Read more on Spring Data Query by Example.
You need to use a custom query to create your own search query.
@Query("select u from User u where u.firstname = :#{#customer.firstname}")
List<User> findUsersByCustomersFirstname(@Param("customer") Customer customer);
Now you can add as many params as you want.
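For example, a rough sketch with a second SpEL-bound field (hypothetical field names; adjust them to your Customer class):
// Sketch: each additional criterion binds another field of the passed-in object via SpEL.
@Query("select u from User u "
     + "where u.firstname = :#{#customer.firstname} "
     + "and u.lastname = :#{#customer.lastname}")
List<User> findUsersByCustomersName(@Param("customer") Customer customer);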
I am using Jackson's Hibernate4Module to deal with serialization issues when working with lazily loaded proxies in a Spring Data REST project.
In general it solves the issue of Jackson trying to serialise uninitialized proxies; however, one side effect is that the JSON output differs:
Fetched directly: api/cases/5400
{
"id": 5400,
"practiceReference": "DWPYI9"
}
Fetched via a lazily loaded @ManyToOne: api/submissions/11901/parentCase
{
"content": {
"id": 5400,
"practiceReference": "DWPYI9"
}
}
Fetched via a non-lazily loaded @ManyToOne: api/submissions/11901/parentCase
{
"id": 5400,
"practiceReference": "DWPYI9"
}
As can be seen above, the JSON representation differs when serializing a lazy @ManyToOne association: the entity is wrapped in a "content" node.
If the association is non-lazy, the same representation is written regardless of the path.
Is there a reason for this and can the additional "content" node somehow be prevented?
Update
I have found the same (deleted) question here:
https://stackoverflow.com/questions/33194554/two-different-resulting-jsons-when-serializing-lazy-objects-and-simple-objects
which is referenced from:
https://github.com/FasterXML/jackson-datatype-hibernate/issues/77
Also reported here, so it seems like a known issue:
https://github.com/FasterXML/jackson-datatype-hibernate/issues/97