Persisting a JSON Object using Hibernate and JPA - java

I am trying to store a JSON object in a MySQL database in Spring Boot. I know I am doing something wrong, but I can't figure out what it is because I am fairly new to Spring.
I have a REST endpoint where I receive the following JSON object (via HTTP PUT), and I need to store it in the database so that the user can fetch it later (via HTTP GET).
{
  "A": {
    "Name": "Cat",
    "Age": "1"
  },
  "B": {
    "Name": "Dog",
    "Age": "2"
  },
  "C": {
    "Name": "Horse",
    "Age": "1"
  }
}
Note that in the above case the number of keys in the object may vary. Due to that requirement I am using a HashMap to capture the object in the controller.
@RequestMapping(method = RequestMethod.POST)
public String addPostCollection(@RequestBody HashMap<String, Animal> hp) {
    hp.forEach((key, animal) -> postRepository.save(animal));
    return "OK";
}
As you can see in the method, I can iterate over the HashMap and persist each Animal object in the db. But I am looking for a way to persist the entire HashMap in a single record. I have done some reading, and the suggestions point to a @ManyToMany mapping.
Can anyone point me in a direction to persist the HashMap in a different way? (Or is using @ManyToMany the only and right way to do this?)

Maven dependency
The first thing you need to do is to set up the following Hibernate Types Maven dependency in your project pom.xml configuration file:
<dependency>
    <groupId>com.vladmihalcea</groupId>
    <artifactId>hibernate-types-52</artifactId>
    <version>${hibernate-types.version}</version>
</dependency>
Domain model
Let's assume you have the following entity:
@Entity(name = "Book")
@Table(name = "book")
@TypeDef(
    typeClass = JsonType.class,
    defaultForType = JsonNode.class
)
public class Book {

    @Id
    @GeneratedValue
    private Long id;

    @NaturalId
    private String isbn;

    @Column(columnDefinition = "jsonb")
    private JsonNode properties;

    //Getters and setters omitted for brevity
}
Notice that @TypeDef is used to instruct Hibernate to map the JsonNode object using the JsonType offered by the Hibernate Types project.
Testing time
Now, if you save an entity:
Book book = new Book();
book.setIsbn("978-9730228236");
book.setProperties(
    JacksonUtil.toJsonNode(
        "{" +
        "   \"title\": \"High-Performance Java Persistence\"," +
        "   \"author\": \"Vlad Mihalcea\"," +
        "   \"publisher\": \"Amazon\"," +
        "   \"price\": 44.99" +
        "}"
    )
);

entityManager.persist(book);
Hibernate is going to generate the following SQL statement:
INSERT INTO
book
(
isbn,
properties,
id
)
VALUES
(
'978-9730228236',
'{"title":"High-Performance Java Persistence","author":"Vlad Mihalcea","publisher":"Amazon","price":44.99}',
1
)
And you can also load it back and modify it:
Session session = entityManager.unwrap( Session.class );
Book book = session
.bySimpleNaturalId( Book.class )
.load( "978-9730228236" );
LOGGER.info( "Book details: {}", book.getProperties() );
book.setProperties(
JacksonUtil.toJsonNode(
"{" +
" \"title\": \"High-Performance Java Persistence\"," +
" \"author\": \"Vlad Mihalcea\"," +
" \"publisher\": \"Amazon\"," +
" \"price\": 44.99," +
" \"url\": \"https://www.amazon.com/High-Performance-Java-Persistence-Vlad-Mihalcea/dp/973022823X/\"" +
"}"
)
);
Hibernate takes care of the UPDATE statement for you:
SELECT b.id AS id1_0_
FROM book b
WHERE b.isbn = '978-9730228236'
SELECT b.id AS id1_0_0_ ,
b.isbn AS isbn2_0_0_ ,
b.properties AS properti3_0_0_
FROM book b
WHERE b.id = 1
-- Book details: {"price":44.99,"title":"High-Performance Java Persistence","author":"Vlad Mihalcea","publisher":"Amazon"}
UPDATE
book
SET
properties = '{"title":"High-Performance Java Persistence","author":"Vlad Mihalcea","publisher":"Amazon","price":44.99,"url":"https://www.amazon.com/High-Performance-Java-Persistence-Vlad-Mihalcea/dp/973022823X/"}'
WHERE
id = 1
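The same JsonType approach also works for the HashMap from the original question. Here is a sketch; the entity, table, and column names are assumptions, the `Animal` class is the one from the question, and on MySQL the column type would be `json` rather than PostgreSQL's `jsonb`:

```java
import com.vladmihalcea.hibernate.type.json.JsonType;
import org.hibernate.annotations.Type;
import org.hibernate.annotations.TypeDef;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;
import java.util.HashMap;
import java.util.Map;

// Sketch: the whole HashMap<String, Animal> is stored as one JSON column,
// i.e. one database record. Names here are illustrative assumptions.
@Entity(name = "AnimalGroup")
@Table(name = "animal_group")
@TypeDef(name = "json", typeClass = JsonType.class)
public class AnimalGroup {

    @Id
    @GeneratedValue
    private Long id;

    // Persisted and reloaded as a single JSON document.
    @Type(type = "json")
    @Column(columnDefinition = "json")
    private Map<String, Animal> animals = new HashMap<>();

    public Map<String, Animal> getAnimals() { return animals; }

    public void setAnimals(Map<String, Animal> animals) { this.animals = animals; }
}
```

The controller could then set the whole map on one `AnimalGroup` and save that single entity instead of looping over the entries.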

You can use FasterXML (or a similar library) to parse the JSON into an actual object (you need to define the class) and use Json.toJson(yourObj).toString() to retrieve the JSON String. It also simplifies working with the objects, since your data class may also have functionality.
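For instance, a round trip with Jackson could look like the sketch below. The `Animal` field names follow the JSON in the question; the wrapper class itself is an assumption:

```java
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.util.Map;

public class JsonRoundTrip {

    // Minimal class matching the JSON in the question.
    public static class Animal {
        public String Name;
        public String Age;
    }

    public static void main(String[] args) throws Exception {
        String json = "{\"A\":{\"Name\":\"Cat\",\"Age\":\"1\"},\"B\":{\"Name\":\"Dog\",\"Age\":\"2\"}}";
        ObjectMapper mapper = new ObjectMapper();

        // Parse the incoming JSON into a map of Animal objects...
        Map<String, Animal> animals =
            mapper.readValue(json, new TypeReference<Map<String, Animal>>() {});
        System.out.println(animals.get("A").Name); // Cat

        // ...and turn it back into a String for storage in a single column.
        String stored = mapper.writeValueAsString(animals);
        System.out.println(stored.contains("Dog")); // true
    }
}
```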

Your JSON is well structured, so usually there's no need to persist the entire map in one single record; with a single record you won't be able to use the Hibernate/JPA query functions and a lot more.
If you really want to persist the entire map in one single record, you could persist the map in its string representation and, as already proposed, use a JSON parser like Jackson to rebuild your HashMap:
@Entity
public class Animals {

    private String animalsString;

    public void setAnimalsString(String val) {
        this.animalsString = val;
    }

    public String getAnimalsString() {
        return this.animalsString;
    }

    public HashMap<String, Animal> getAnimalsMap() throws IOException {
        ObjectMapper mapper = new ObjectMapper();
        TypeReference<HashMap<String, Animal>> typeRef = new TypeReference<HashMap<String, Animal>>() {};
        return mapper.readValue(animalsString, typeRef);
    }
}
Your animal class:
public class Animal {
    private String name;
    private int age;
    /* getters and setters */
    /* ... */
}
And you could change your controller method to
@RequestMapping(method = RequestMethod.POST)
public String addPostCollection(@RequestBody String hp) {
    Animals animals = new Animals();
    animals.setAnimalsString(hp);
    animalsRepository.save(animals);
    return "OK";
}

One animal is one record. You are saving multiple records, not one. You can, however, commit multiple records in one transaction.
See: How to persist a lot of entities (JPA)
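For example, the controller's loop could be moved into a service method wrapped in one transaction, so that all animals commit (or roll back) together. A sketch; the service name is an assumption, `PostRepository` and `Animal` are from the question, and `saveAll` assumes Spring Data 2.x (older versions use `save(Iterable)`):

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

import java.util.HashMap;

// Sketch: each Animal is still its own row, but all rows are written in
// one transaction; if any save fails, the whole batch rolls back.
@Service
public class AnimalService {

    private final PostRepository postRepository;

    public AnimalService(PostRepository postRepository) {
        this.postRepository = postRepository;
    }

    @Transactional
    public void saveAll(HashMap<String, Animal> animals) {
        postRepository.saveAll(animals.values());
    }
}
```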

Related

Spring Data Neo4J hierarchy mapping on multi-level projection

I have a simple hierarchy in Neo4j, directly derived from the business model.
@Node
public class Team {
    @Id private String teamId;
    private String name;
}

@Node
public class Driver {
    @Id private String driverId;
    private String name;
    @Relationship(direction = Relationship.Direction.OUTGOING)
    private Team team;
}

@Node
public class Car {
    @Id private String carId;
    private String name;
    @Relationship(direction = Relationship.Direction.OUTGOING)
    private Driver driver;
}
which results in the corresponding graph (Team)<--(Driver)<--(Car); usually all requests start at Car.
A new use case needs to create a tree structure starting at Team nodes. The Cypher query aggregates the data on neo and returns it to SDN.
public List<Projection> loadHierarchy() {
    return neo4jClient.query("""
        MATCH (t:Team)<--(d:Driver)<--(c:Car)
        WITH t, d, collect(distinct c{.carId, .name}) AS carsEachDriver
        WITH t, collect({driver: d{.driverId, .name}, cars: carsEachDriver}) AS driverEachTeam
        WITH collect({team: t{.teamId, .name}, drivers: driverEachTeam}) AS teams
        RETURN teams
        """)
        .fetchAs(Projection.class)
        .mappedBy((typeSystem, record) -> new Projection() {
            @Override
            public Team getTeam() {
                return record.get... // how to access single object?
            }
            @Override
            public List<Retailers> getRetailers() {
                return record.get... // how to access nested list objects?
            }
        })
        .all();
}
The result is a list of following objects:
{
  "drivers": [
    {
      "driver": {
        "name": "Mike",
        "driverId": "15273c10"
      },
      "cars": [
        {
          "carId": "f4ca4581",
          "name": "green car"
        },
        {
          "carId": "11f3bcae",
          "name": "red car"
        }
      ]
    }
  ],
  "team": {
    "teamId": "4586b33f",
    "name": "Blue Racing Team"
  }
}
The problem now is how to map the response into a corresponding Java model. I don't use the entity classes.
I tried multi-level projection with nested interfaces.
public interface Projection {

    Team getTeam();
    List<Drivers> getDrivers();

    interface Drivers {
        Driver getDriver();
        List<Cars> getCars();
    }

    interface Driver {
        String getDriverId();
        String getName();
    }

    interface Car {
        String getCarId();
        String getName();
    }

    interface Team {
        String getTeamId();
        String getName();
    }
}
I struggle to access the nested lists and objects, to put them into the model.
SDN is the Spring Boot Starter in version 2.6.3.
An example of how to map a nested object in a list would be a good starting point.
Or may be my approach is totally wrong? Any help is appreciated.
Projections are not meant to be something like a view or a wrapper of arbitrary data.
Within the context you can get a Neo4jMappingContext instance.
You can use this to obtain the mapping function for an already existing entity.
With this, you do not have to take care of mapping the Car and (partially, because of the team relationship) the Driver.
BiFunction<TypeSystem, MapAccessor, Car> mappingFunction = neo4jMappingContext.getRequiredMappingFunctionFor(Car.class);
The mapping function accepts an object of type MapAccessor.
This is a Neo4j Java driver type that is implemented besides others also by Node and MapValue.
You can use those values from your result, e.g. drivers, in a loop (it should be possible to call asList on the record), and within this loop you would also assign the cars.
Of course using the mapping function would only make sense if you have a lot more properties to map because nothing in the return structure (as you already said between the lines) applies to the entity structure regarding the relationships.
Here is an example of using the mapping function and direct mapping.
You have to decide what matches best for your use case.
public Collection<Projection> loadHierarchy() {
    var teamMappingFunction = mappingContext.getRequiredMappingFunctionFor(Team.class);
    var driverMappingFunction = mappingContext.getRequiredMappingFunctionFor(Driver.class);
    return neo4jClient.query("""
        MATCH (t:Team)<--(d:Driver)<--(c:Car)
        WITH t, d, collect(distinct c{.carId, .name}) AS carsEachDriver
        WITH t, collect({driver: d{.driverId, .name}, cars: carsEachDriver}) AS driverEachTeam
        WITH {team: t{.teamId, .name}, drivers: driverEachTeam} AS team
        RETURN team
        """)
        .fetchAs(Projection.class)
        .mappedBy((typeSystem, record) -> {
            Team team = teamMappingFunction.apply(typeSystem, record.get("team"));
            List<DriverWithCars> drivers = record.get("team").get("drivers").asList(value -> {
                var driver = driverMappingFunction.apply(typeSystem, value);
                var cars = value.get("carsEachDriver").asList(carValue ->
                    new Car(carValue.get("name").asString()));
                return new DriverWithCars(driver, cars); // create wrapper object incl. cars
            });
            return new Projection(team, drivers);
        })
        .all();
}
(Disclaimer: I did not execute this on a data set, so there might be typos or wrong access to the record)
Please note that I changed the Cypher statement a little bit to get one Team per record instead of the whole list.
Maybe this is already what you have asked for.

Is there any built-in function in Spring Mongo that can be used to extract data from two different documents having a one-to-one relationship?

I am looking to extract data from two different documents in MongoDB that have a one-to-one relationship, using MongoTemplate in Spring.
I have two documents, "Rosters" (an embedded document belonging to User) and "Unread", both identified by "ucid", an acronym for "unique conversation identifier". On the basis of ucid I'd like to perform an inner-join operation and extract the data.
There are ample examples where lookups and aggregations are used for one-to-many relationships, but not for one-to-one.
Following are the classes:
**User**
class User
{
    private List<Roster> rosterList;
}
**Roster**
public class Roster extends Parent
{
    @Id
    private ObjectId _id;
    @Indexed
    private String author;
    @Unique
    private String ucid;
}
**Unread**
public class Unread
{
    @Id
    @Indexed(unique = true) //todo: create index on mongo side as well
    private String ucid;
    private Map<String, Long> memberAndCount;
}
----------------------------------------------------
Sample Data:
USER (roster)
{
  "user": {
    "id": 1001,
    "username": "dilag",
    "roster": [
      { "ucid": "r0s122", "name": "sam" },
      { "ucid": "r0s123", "name": "ram" },
      { "ucid": "r0s124", "name": "rat" }
    ]
  }
}
UNREAD
{ "ucid": "r0s122", "usernameAndCount": [ { "username": "dilag", "count": 100 }, { "username": "ramg", "count": 20 } ] }
{ "ucid": "r0s123", "usernameAndCount": [ { "username": "dilag", "count": 100 }, { "username": "ramg", "count": 20 } ] }
Desired Output
{ "ucid": "r0s122", "name": "sam", "usernameAndCount": [ { "username": "dilag", "count": 100 }, { "username": "ramg", "count": 20 } ] }
{ "ucid": "r0s123", "name": "ram", "usernameAndCount": [ { "username": "dilag", "count": 100 }, { "username": "ramg", "count": 20 } ] }
The following Spring code is not tested, but it's written based on a working Mongo playground.
Basically you need to know about joining in MongoDB using $lookup. Here I have used joining with uncorrelated sub-queries:
public List<Object> test() {
    Aggregation aggregation = Aggregation.newAggregation(
        l -> new Document("$lookup",
            new Document("from", "user")
                .append("let", new Document("uid", "$ucid"))
                .append("pipeline",
                    Arrays.asList(
                        new Document("$unwind", "$user.roster"),
                        new Document("$match",
                            new Document("$expr",
                                new Document("$eq", Arrays.asList("$user.roster.ucid", "$$uid"))
                            )
                        )
                    )
                ).append("as", "users")
        ),
        a -> new Document("$addFields",
            new Document("name",
                new Document("$ifNull",
                    Arrays.asList(
                        new Document("$arrayElemAt", Arrays.asList("$users.user.roster.name", 0)),
                        ""
                    )
                )
            )
        )
    ).withOptions(AggregationOptions.builder().allowDiskUse(Boolean.TRUE).build());
    return mongoTemplate.aggregate(aggregation, mongoTemplate.getCollectionName(Unread.class), Object.class).getMappedResults();
}
Read this to understand the trick of building an aggregation stage with new Document() when Mongo Spring Data doesn't provide a corresponding operation.

Hibernate envers "cannot evaluate $HibernateProxy$.toString()"

I am using envers to audit my entities. My code looks somewhat like this
@Audited( targetAuditMode = RelationTargetAuditMode.NOT_AUDITED )
@AuditOverride( forClass = Task.class, isAudited = true )
public class Job extends Task
{...}

@Inheritance( strategy = InheritanceType.JOINED )
@Audited( targetAuditMode = RelationTargetAuditMode.NOT_AUDITED )
public class Task
{
    ...
    @ManyToOne( fetch = FetchType.LAZY )
    @LazyToOne( value = LazyToOneOption.NO_PROXY )
    @Fetch( value = FetchMode.SELECT )
    @JoinColumn( name = "id_util" )
    @Audited( targetAuditMode = RelationTargetAuditMode.AUDITED )
    private Utility utility;
}
@Entity
@DynamicInsert
@DynamicUpdate
@Audited( targetAuditMode = RelationTargetAuditMode.NOT_AUDITED )
public class Utility
{
    @Override
    public String toString()
    {
        StringBuilder builder = new StringBuilder();
        builder.append( this.getClass().getName() ).append( "#" ).append( getId() );
        builder.append( "[" );
        appendAttributeValues( builder );
        builder.append( "]" );
        return builder.toString();
    }

    public Long getId()
    {
        return id;
    }
}
When I try to fetch the revisions of a certain job entity, the field utility is not loaded correctly. Instead, hibernate gives a
Method threw 'org.hibernate.exception.GenericJDBCException' exception. Cannot evaluate Utility$HibernateProxy$9GVDBIUC.toString()
The rest of the entity revisions, which consist of strings and numbers, is loaded just fine. I also don't get this error when auditing and querying other entities which do not have an inheritance structure.
The _aud tables for the entities Job, Task and Utility are all filled correctly. What might be causing this error?
Before using the linked object you should "materialize" the proxy object created by Hibernate, as described in this answer: https://stackoverflow.com/a/2216603/3744622
Hibernate.initialize(entity);
if (entity instanceof HibernateProxy) {
    entity = (T) ((HibernateProxy) entity).getHibernateLazyInitializer()
        .getImplementation();
}
return entity;
or, for instance, use the special method unproxy:
Object unproxiedObject = Hibernate.unproxy(proxiedObject);
TL;DR: The reason is inconsistent audit data. It happens when you start auditing already existing entities and their relations.
Solution: Copy all entity data into the corresponding _aud tables with rev = 1, revtype = 0. Insert rev = 1, revtstmp = 0 into the revinfo table.
See also Safe Envers queries when the audit history is incomplete
Explanation: Because the entity data existed before Envers was integrated, the audit tables are initially empty. When you update an entity A which owns a reference to entity B, an audit record is created in A_aud but not in B_aud. When A is queried for revisions, it tries to look up the referred entity B in the empty B_aud and yields an EntityNotFoundException, which gets wrapped in the given GenericJDBCException.

Marshalling two similar json fields to the same java field

I have a sample dummy JSON response that looks like :
{
  "id": 1,
  "teacher_name": "Foo",
  "teacher_address": "123 Main St.",
  "teacher_phone_num": 1234567891,
  "student_name": "Bar",
  "student_address": "546 Main St.",
  "student_phone_num": 9184248576
}
The above is a silly example, but it helps illustrate the issue I am having trying to de-serialize the above into a Java class called "Employee" using Jackson:
public class Employee {
    String name;
    String address;
    String phoneNumber;
}
The issue is that the JSON uses two different prefixes, so I cannot annotate each field in Employee and have the ObjectMapper map teacher_name and student_name to the name field of an Employee object. Is there a way in Jackson to specify two differently named nodes that map to the same Java field?
So in my example, I should end up with two Employee objects (I am guaranteed to have one pair per response)
It is not possible with Jackson. It is designed to map one-to-one: one JSON object to one Java object. But you want to end up with two Java objects from one JSON.
I would recommend you go the straightforward way and implement a processing layer that consumes the response and maps it to two Employee objects.
I think you need to write code to do the mapping whether you use annotation or not.
Simple is the best.
If you read the json as JsonNode it should be trivial to code the assignments.
import com.fasterxml.jackson.annotation.JsonAutoDetect.Visibility;
import com.fasterxml.jackson.annotation.PropertyAccessor;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.IOException;

public class Q44094751 {

    public static void main(String[] args) throws IOException {
        String json = "{\"id\": 1," +
            "\"teacher_name\": \"Foo\", \"teacher_address\": \"123 Main St.\", \"teacher_phone_num\": 1234567891," +
            "\"student_name\": \"Bar\", \"student_address\": \"546 Main St.\", \"student_phone_num\": 9184248576" +
            "}";
        ObjectMapper mapper = new ObjectMapper();
        // Read
        JsonNode node = mapper.readValue( json, JsonNode.class );
        Employee teacher = new Employee(),
                 student = new Employee();
        teacher.name = node.get( "teacher_name" ).asText();
        teacher.address = node.get( "teacher_address" ).asText();
        teacher.phoneNumber = node.get( "teacher_phone_num" ).asText();
        student.name = node.get( "student_name" ).asText();
        student.address = node.get( "student_address" ).asText();
        student.phoneNumber = node.get( "student_phone_num" ).asText();
        // Check
        mapper.setVisibility( PropertyAccessor.FIELD, Visibility.ANY );
        System.out.println( mapper.writeValueAsString( teacher ) );
    }

    public static class Employee {
        String name;
        String address;
        String phoneNumber;
    }
}
Given a "silly example", the answer that may looks silly.
For example if you have a hundred properties, a loop with reflection should work better.
If you have requirement(s) that is not reflected in the example,
please edit your question to better describe the actual problem(s) you are facing.
I believe that it is not possible with Jackson. The workarounds that I can think of are:
Split it into 2 objects, Teacher and Student. You can still map the same JSON twice, but into the different classes Teacher and Student. It works, but what about the id field?
Make a Java class that mirrors the JSON.
If you would like to make it a more meaningful structure, then use this structure:
{
  "id": 1,
  "teacher": {
    "name": "Foo",
    "address": "123 Main St.",
    "phone_num": 1234567891
  },
  "student": {
    "name": "Bar",
    "address": "546 Main St.",
    "phone_num": 9184248576
  }
}
You may use the @JsonUnwrapped annotation to unwrap the employee's properties inline in the parent object. This annotation also provides the option to specify a prefix which will be used while serializing/deserializing the object.
Below is your example:
public static class Employee {
    private String name;
    private String address;
    @JsonProperty("phone_num")
    private String phoneNumber;
    // setters / getters omitted for brevity
}

public static class YourJsonObject {
    private int id;
    @JsonUnwrapped(prefix = "teacher_")
    private Employee teacher;
    @JsonUnwrapped(prefix = "student_")
    private Employee student;
    // setters / getters omitted for brevity
}
Now configure the ObjectMapper to use the appropriate naming strategy (from your example, I see the snake case strategy is desired).
Test Case:
@Test
public void performTest() throws Exception {
    final String inputJson = "{\n" +
        "    \"id\": 1,\n" +
        "    \"teacher_name\": \"Foo\",\n" +
        "    \"teacher_address\": \"123 Main St.\",\n" +
        "    \"teacher_phone_num\": 1234567891,\n" +
        "    \"student_name\": \"Bar\",\n" +
        "    \"student_address\": \"546 Main St.\",\n" +
        "    \"student_phone_num\": 9184248576\n" +
        "  }";
    ObjectMapper mapper = new ObjectMapper();
    // important one
    mapper.setPropertyNamingStrategy(PropertyNamingStrategy.SNAKE_CASE);
    // read your json as object
    YourJsonObject myObject = mapper.readValue(inputJson, YourJsonObject.class);
    assertThat(myObject.getTeacher().getName(), is("Foo"));
    assertThat(myObject.getStudent().getName(), is("Bar"));
}
Hope this helps.

JPA: update only specific fields

Is there a way to update only some fields of an entity object using the save method from Spring Data JPA?
For example I have a JPA entity like this:
@Entity
public class User {

    @Id
    private Long id;

    @NotNull
    private String login;

    @Id
    private String name;

    // getter / setter
    // ...
}
With its CRUD repo:
public interface UserRepository extends CrudRepository<User, Long> { }
In Spring MVC I have a controller that get an User object for update it:
@RequestMapping(value = "/rest/user", method = RequestMethod.PUT, produces = MediaType.APPLICATION_JSON_VALUE)
@ResponseBody
public ResponseEntity<?> updateUser(@RequestBody User user) {
    // Assuming that user has its id and is already stored in the database,
    // and user.login is null since I don't want to change it,
    // while user.name has the new value,
    // I would update only its name while the login value should keep the value
    // in the database
    userRepository.save(user);
    // ...
}
I know that I could load the user using findOne, then change its name and update it using save... But if I have 100 fields and I want to update 50 of them, it could be very annoying to change each value.
Is there no way to tell it something like "skip all null values when saving the object"?
I had the same question and, as M. Deinum points out, the answer is no, you can't use save. The main problem is that Spring Data wouldn't know what to do with nulls. Is the null value not set, or is it set because it needs to be deleted?
Now, judging from your question, I assume you also had the same thought that I had, which was that save would allow me to avoid manually setting all the changed values.
So is it possible to avoid all the manual mapping then? Well, if you choose to adhere to the convention that null always means "not set" and you have the original model's id, then yes.
You can avoid any mapping yourself by using Spring's BeanUtils.
You could do the following:
Read the existing object
Use BeanUtils to copy values
Save the object
Now, Spring's BeanUtils actually doesn't support skipping null values, so it will overwrite any values not set with null on the existing model object. Luckily, there is a solution here:
How to ignore null values using springframework BeanUtils copyProperties?
So, putting it all together, you would end up with something like this:
@RequestMapping(value = "/rest/user", method = RequestMethod.PUT, produces = MediaType.APPLICATION_JSON_VALUE)
@ResponseBody
public ResponseEntity<?> updateUser(@RequestBody User user) {
    User existing = userRepository.findOne(user.getId());
    copyNonNullProperties(user, existing);
    userRepository.save(existing);
    // ...
}

public static void copyNonNullProperties(Object src, Object target) {
    BeanUtils.copyProperties(src, target, getNullPropertyNames(src));
}

public static String[] getNullPropertyNames(Object source) {
    final BeanWrapper src = new BeanWrapperImpl(source);
    java.beans.PropertyDescriptor[] pds = src.getPropertyDescriptors();
    Set<String> emptyNames = new HashSet<String>();
    for (java.beans.PropertyDescriptor pd : pds) {
        Object srcValue = src.getPropertyValue(pd.getName());
        if (srcValue == null) emptyNames.add(pd.getName());
    }
    String[] result = new String[emptyNames.size()];
    return emptyNames.toArray(result);
}
Using JPA you can do it this way.
CriteriaBuilder builder = session.getCriteriaBuilder();
CriteriaUpdate<User> criteria = builder.createCriteriaUpdate(User.class);
Root<User> root = criteria.from(User.class);
criteria.set(root.get("lastSeen"), date);
criteria.where(builder.equal(root.get("id"), user.getId()));
session.createQuery(criteria).executeUpdate();
You are able to write something like
@Modifying
@Query("update StudentXGroup iSxG set iSxG.deleteStatute = 1 where iSxG.groupId = ?1")
Integer deleteStudentsFromDeletedGroup(Integer groupId);
Or, if you want to update only the fields that were modified, you can use the annotation
@DynamicUpdate
Code example:
@Entity
@Table(name = "lesson", schema = "oma")
@Where(clause = "delete_statute = 0")
@DynamicUpdate
@SQLDelete(sql = "update oma.lesson set delete_statute = 1, "
    + "delete_date = CURRENT_TIMESTAMP, "
    + "delete_user = '#currentUser' "
    + "where lesson_id = ?")
@JsonIgnoreProperties({"hibernateLazyInitializer", "handler"})
public class Lesson { /* ... */ }
If you are reading the request as a JSON String, this can be done using the Jackson API. The code below compares an existing POJO's elements and creates a new one with the updated fields. Use the new POJO to persist.
public class TestJacksonUpdate {

    class Person implements Serializable {
        private static final long serialVersionUID = -7207591780123645266L;
        public String code = "1000";
        public String firstNm = "John";
        public String lastNm;
        public Integer age;
        public String comments = "Old Comments";

        @Override
        public String toString() {
            return "Person [code=" + code + ", firstNm=" + firstNm + ", lastNm=" + lastNm + ", age=" + age
                + ", comments=" + comments + "]";
        }
    }

    public static void main(String[] args) throws JsonProcessingException, IOException {
        TestJacksonUpdate o = new TestJacksonUpdate();
        String input = "{\"code\":\"1000\",\"lastNm\":\"Smith\",\"comments\":\"Jackson Update WOW\"}";
        Person persist = o.new Person();
        System.out.println("persist: " + persist);
        ObjectMapper mapper = new ObjectMapper();
        Person finalPerson = mapper.readerForUpdating(persist).readValue(input);
        System.out.println("Final: " + finalPerson);
    }
}
The final output would be as follows; notice that only lastNm and comments reflect the changes:
persist: Person [code=1000, firstNm=John, lastNm=null, age=null, comments=Old Comments]
Final: Person [code=1000, firstNm=John, lastNm=Smith, age=null, comments=Jackson Update WOW]
skip all null values when save the object
As others have pointed out, there is no straightforward solution in JPA.
But thinking outside the box, you can use MapStruct for that.
This means you use the right find() method to get the object to update from the DB, overwrite only the non-null properties, and then save the object.
You can use JPA as you know it and just use MapStruct like this in Spring to update only the non-null properties of the object from the DB:
@Mapper(componentModel = "spring")
public interface HolidayDTOMapper {

    /**
     * Null values in the fields of the DTO will not be set as null in the target. They will be ignored instead.
     *
     * @return The target Holiday object
     */
    @BeanMapping(nullValuePropertyMappingStrategy = NullValuePropertyMappingStrategy.IGNORE)
    Holiday updateWithNullAsNoChange(HolidayDTO holidayDTO, @MappingTarget Holiday holiday);
}
See the MapStruct docu on that for details.
You can inject the HolidayDTOMapper the same way you do it with other beans (@Autowired, Lombok, ...).
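Putting it together, a service method could look like the sketch below. The service and repository names, and the `patch` method itself, are assumptions for illustration; only `HolidayDTOMapper.updateWithNullAsNoChange` comes from the mapper above:

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Sketch: load the entity, let MapStruct copy only non-null DTO fields
// onto it, then save. HolidayRepository is an assumed Spring Data repository.
@Service
public class HolidayService {

    private final HolidayDTOMapper mapper;
    private final HolidayRepository repository;

    public HolidayService(HolidayDTOMapper mapper, HolidayRepository repository) {
        this.mapper = mapper;
        this.repository = repository;
    }

    @Transactional
    public Holiday patch(Long id, HolidayDTO dto) {
        Holiday existing = repository.findById(id).orElseThrow();
        // Null fields in the DTO leave the existing values untouched.
        mapper.updateWithNullAsNoChange(dto, existing);
        return repository.save(existing);
    }
}
```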
The problem is not Spring Data JPA related, but related to the JPA implementation library you are using.
In the case of Hibernate you may have a look at:
http://www.mkyong.com/hibernate/hibernate-dynamic-update-attribute-example/
For long, int and other primitive types, you can extend the null check with a zero check, so that 0 is also treated as "not set":
if (srcValue == null || (src.getPropertyTypeDescriptor(pd.getName()).getType().equals(long.class) && srcValue.toString().equals("0")))
    emptyNames.add(pd.getName());
