I have a very simple class with a @ManyToOne association.
@Entity
public class Place {
    ...
    @ManyToOne
    @RestResource(exported = false)
    private Category category;
    ...
}
Both Place and Category have their own respective repositories, which are exported. But in our case, we need the Category field here not to be exported. IDs are exposed for Category.
However, when I try to update an existing Place with an existing Category, Hibernate fails the update.
Example PUT to /places/foo
{
    ...
    "category": {
        "name": "Farm",
        "id": 4
    },
    ...
}
Caused by: org.hibernate.HibernateException: identifier of an instance of com.phrankly.places.model.Category was altered from 2 to 4
I'm not sure how that's even happening, given I don't set any Cascade options - Categories are managed elsewhere.
I don't want to have to write a custom Controller for Place. I also tried using an EventHandler to manually set the field to see if that helped.
@Component
@RepositoryEventHandler(Place.class)
public class PlaceEventHandler {

    @Autowired
    private CategoryRepository categoryRepository;
    ...
    @HandleBeforeSave
    public void onUpdateExisting(Place place) {
        if (place.getCategory() != null) {
            // or find by another field if you're not exposing IDs
            place.setCategory(categoryRepository.findOne(place.getCategory().getId()));
        }
    }
    ...
}
But there was no change in behavior. I know I'm going outside of what Spring Data REST suggests, but what can I do here? Is this even an SDR issue, or am I mapping this incorrectly?
Related
I'm trying to use Spring Data REST to implement a full set of services for about 60 entities. Right now, I'm getting by with just letting Spring use my repositories rather than implementing controllers, which is great!
The data I'm having to model isn't ideal--I'd prefer to have customerId come as part of the order object.
{
    "tenantId": 42,
    "id": "00000001",
    "customer": {
        "tenantId": 42,
        "id": "CUST001",
        "name": "Arthur Dent"
    }
}
I have the ID for a related entity as a property on my JSON object.
public class Order {
    Long tenantId;
    String id;
    String customerId;
}
I don't really want to pull the full Customer entity and all of the other related entities and place them as members on my Order object. Instead, I'd just like to add some links to the _links collection.
I believe I've finally figured out WebMvcLinkBuilder and have the basic idea in place. However, JpaRepository.findById returns a java.util.Optional.
@Bean
public RepresentationModelProcessor<EntityModel<Order>> orderProcessor() {
    return new RepresentationModelProcessor<EntityModel<Order>>() {
        @Override
        public EntityModel<Order> process(final EntityModel<Order> model) {
            final CustomerRepository controller = WebMvcLinkBuilder.methodOn(CustomerRepository.class);
            final CustomerId id = new CustomerId(model.getContent().getTenantId(),
                                                 model.getContent().getCustomerId());
            model.add(WebMvcLinkBuilder.linkTo(controller.findById(id)).withRel("customer"));
            return model;
        }
    };
}
The error I get is:
Could not generate CGLIB subclass of class java.util.Optional: Common causes of this problem include using a final class or a non-visible class; nested exception is java.lang.IllegalArgumentException: Cannot subclass final class java.util.Optional
How can I add a link to my resource?
Hello everyone, I'm new to the Spring world. I want to know how we can use a converter to update an object instead of updating each field one by one with set and get. Right now, in my controller I have:
@PostMapping("/edit/{userId}")
public Customer updateCustomer(@RequestBody Customer newCustomer, @PathVariable final String userId) {
    return customerService.update(userId, newCustomer);
}
and this is how I'm updating the customer object:
@Override
public Customer update(String id, Customer newCustomer) {
    Customer customer = customerRepository.findById(id).get();
    customer.setFirstName(newCustomer.getFirstName());
    customer.setLastName(newCustomer.getLastName());
    customer.setEmail(newCustomer.getEmail());
    return customerRepository.save(customer);
}
Instead of calling set and get every time, I want to use a converter.
Passing the entity's id as a path variable when you're updating it isn't really the right approach. Think about it: you already have a @RequestBody, so why not include the id inside that body too? Why specify a path variable for it?
Now, if you have the full Customer with its id from the body, you don't have to make any extra calls to your repository, because Hibernate already puts it into the persistent state based on its id, and a simple
public Customer update(Customer newCustomer) {
    return customerRepository.save(newCustomer);
}
should work.
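For illustration, here is a minimal sketch of the matching controller method with the id carried in the body instead of the path (the /edit mapping and the service call are simply carried over from the question, not a prescribed API):
@PostMapping("/edit")
public Customer updateCustomer(@RequestBody Customer newCustomer) {
    // the client sends the id inside the body, so no @PathVariable is needed
    return customerService.update(newCustomer);
}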
Q: What is a persistent state?
A: A persistent entity is one that has been associated with a database table row and is being managed by the currently running Persistence Context. (customerRepository.findById() just asks the DB whether an entity with the specified id exists and puts it into the persistent state. Hibernate manages this whole process as long as you have an @Id-annotated field and it is filled in; in other words:
Customer customer = new Customer();
customer.setId(1);
is ALMOST the same thing as:
Customer customer = customerRepository.findById(1).get();
)
TIP: in any case, you shouldn't expose (in case you didn't know) a model in the controller layer. Why? Let's say your Customer model can have multiple permissions. One possible structure could look like this:
@Entity
public class Customer {
    // private fields here;

    @OneToMany(mappedBy = "customer" /* other configs here */)
    private List<Permission> permissions;
}
and
@Entity
public class Permission {

    @Id
    private Long id;

    private String name;
    private String creationDate;

    @ManyToOne(/* configs here */)
    private Customer customer;
}
You can see that there is a cross-reference between the Customer and Permission entities, which will eventually lead to a stack overflow exception when the object is serialized (if this isn't clear, think of a recursive function with no stopping condition that keeps calling itself over and over => stack overflow; the same thing happens here).
What can you do? Create a so-called DTO class that the client receives instead of the model. How do you design this DTO? Think about what the user NEEDS to know.
1) Is "creationDate" from Permission a necessary field for the user? Not really.
2) Is "id" from Permission a necessary field for the user? In some cases yes, in others, not.
A possible CustomerDTO could look like this:
public class CustomerDTO {
    private String firstName;
    private String lastName;
    private List<String> permissions;
}
and you can notice that I'm using a List<String> instead of a List<Permission> for the customer's permissions; the strings are in fact the permissions' names.
public CustomerDTO convertModelToDto(Customer customer) {
    // hard way
    CustomerDTO customerDTO = new CustomerDTO();
    customerDTO.setFirstName(customer.getFirstName());
    customerDTO.setLastName(customer.getLastName());
    customerDTO.setPermissions(
            customer.getPermissions()
                    .stream()
                    .map(permission -> permission.getName())
                    .collect(Collectors.toList()));

    // easy way => using a ModelMapper
    customerDTO = modelMapper.map(customer, CustomerDTO.class);

    return customerDTO;
}
Use ModelMapper to map one model into another.
First define a function that can map the source data onto the target model. Keep it as a small utility you can reuse whenever you want.
public static <T> void merge(T source, T target) {
    ModelMapper modelMapper = new ModelMapper();
    modelMapper.getConfiguration().setMatchingStrategy(MatchingStrategies.STRICT);
    modelMapper.map(source, target);
}
Use merge to map the data:
Customer customer = customerRepository.findById(id).get();
merge(newCustomer, customer);
customerRepository.save(customer);
Add the ModelMapper dependency to pom.xml:
<dependency>
    <groupId>org.modelmapper</groupId>
    <artifactId>modelmapper</artifactId>
    <version>2.3.4</version>
</dependency>
Let's start with this entity:
@Entity
public class MyEntity {
    ...
    @Column(length = 80)
    private String description;

    @Column(name = "enum_column", precision = 18)
    @Convert(converter = EnumColumnConverter.class)
    private MyEnum enumColumn;
    ...
}
Here, you see two columns that are nullable (in my entity and in the database). The converter replaces the enum with a Long value in the database. A repository class is defined accordingly:
@Repository
public interface MyEntityRepository extends JpaRepository<MyEntity, Long> {}
A DTO is defined from a service package:
public class MyEntityDto {
    ...
    private String description;
    private MyEnum enumColumn;
    ...
}
Mapping between DTOs and entities is done using Dozer. A DTO is modified from a JavaFX UI. A service has been defined between the UI and persistence to save modified entities.
@Service
@Transactional
public class MyEntityService {

    @Autowired MyEntityRepository myEntityRepository;
    ...
    public List<MyEntityDto> save(List<MyEntityDto> dtosToSave) {
        List<MyEntityDto> results = Collections.emptyList();
        if (dtosToSave != null && !dtosToSave.isEmpty()) {
            Iterable<MyEntity> entities = convertDtosWithDozer(dtosToSave);
            List<MyEntity> savedEntities = myEntityRepository.saveAll(entities);
            results = convertEntitiesWithDozer(savedEntities);
        }
        return results;
    }
From the UI, I modify an existing row where both description and enumColumn are not null, and I set both values to null.
The problem is that none of them is set to null in the database. In the logs, the update request generated by Hibernate does not include these columns. When I debug the code, these columns are null in dtosToSave, entities, savedEntities and results.
I created a unit test for MyEntityRepository where I save an entity with a non-null description and enumColumn. I reload the entity from the database using the repository to be sure these columns are not null. I then set them to null, save the entity, and load it back from the database. Now both columns are indeed null, which is what I expected.
My question: what am I missing here? Why does the repository not save the null columns? If I set any non-null values, it works perfectly.
Thanks in advance.
UPDATE: could my problem be related to this? Jpa Repository save() doesn't update existing data
You convert your DTOs to entities via Dozer, but at this point the entities are still in the detached state. To update existing entities, you first need to load them from the database via your repository, with something like repository.findById(id).
Then you get the entity in the "attached" (managed) state, so the state transitions (updates on its fields) will be applied.
During save(), all of the entity's state transitions are translated into the corresponding DML, and your update should work.
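As a rough sketch of what that could look like inside the save() method from the question (assuming the DTO exposes the entity id and a Dozer Mapper bean named dozerMapper is injected; neither appears in the original code):
public List<MyEntityDto> save(List<MyEntityDto> dtosToSave) {
    List<MyEntity> entities = new ArrayList<>();
    for (MyEntityDto dto : dtosToSave) {
        // load the managed ("attached") entity first ...
        MyEntity entity = myEntityRepository.findById(dto.getId())
                .orElseThrow(() -> new IllegalArgumentException("Unknown id " + dto.getId()));
        // ... then copy the DTO state onto the managed instance
        dozerMapper.map(dto, entity);
        entities.add(entity);
    }
    List<MyEntity> savedEntities = myEntityRepository.saveAll(entities);
    return convertEntitiesWithDozer(savedEntities);
}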
And regarding this statement:
I reload the entity from the database using the repository to be sure these columns are not null. I set them to null, save the entity, and load it back from the database. Now both columns are indeed null, which is what I've been expecting.
As you said, there you reload the entity from the database first, which is why it works.
I'm currently working on a SpringBoot API to interface with a MongoRepository, but I'm having trouble understanding how the JSON being passed becomes a Document for storage within Mongo. I currently have a simple API that stores a group of users:
@Document
@JsonInclude
public class Group {

    @Id
    @JsonView(Views.Public.class)
    private String id;

    @JsonView(Views.Public.class)
    private String name;

    @JsonView(Views.Public.class)
    private Set<GroupMember> groupMembers = new HashSet<>();
}
There are also setter and getter methods for each of the fields, although I don't know how necessary those are either (see questions at the end).
Here is the straightforward component I'm using:
@Component
@Path("/groups")
@Api(value = "/groups", description = "Group REST")
public class Groups {

    @Autowired
    private GroupService groupService;

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    @ApiOperation(value = "Get all Groups", response = Group.class, responseContainer = "List")
    @JsonView(Views.Public.class)
    public List<Group> getAllGroups() {
        return groupService.getAllGroups();
    }

    @POST
    @Produces(MediaType.APPLICATION_JSON)
    @Consumes(MediaType.APPLICATION_JSON)
    @ApiOperation(value = "Create a Group", response = Group.class)
    @JsonView(Views.Detailed.class)
    public Group submitGroup(Group group) {
        return groupService.addGroup(group);
    }
}
Finally, I have a Service class:
@Service
public class GroupServiceImpl implements GroupService {

    @Autowired
    private GroupRepository groupRepository;

    @Override
    public Group addGroup(Group group) {
        group.setId(null);
        return groupRepository.save(group);
    }

    @Override
    public List<Group> getAllGroups() {
        return groupRepository.findAll();
    }
}
The GroupRepository is simply an interface which extends MongoRepository<Group, String>.
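Spelled out, that interface is roughly:
public interface GroupRepository extends MongoRepository<Group, String> {
    // no custom query methods; the CRUD operations come from MongoRepository
}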
Now, when I actually make a call to the POST method, with a body containing:
{
    "name": "group001",
    "groupMembers": []
}
I see that it properly inserts this group with a random Mongo UUID. However, if I try to insert GroupMember objects inside the list, I receive a null pointer exception. From this, I have two questions:
How does SpringBoot (Jackson?) know which fields to deserialize from the JSON being passed? I tested this after deleting the getter and setter methods, and it still works.
How does SpringBoot handle nested objects, such as the Set inside the class? I tested with List instead of Set, and it worked, but I have no idea why. My guess is that for each object that is both declared in my class and listed in my JSON object, SpringBoot is calling a constructor that it magically created behind the scenes, and one doesn't exist for the Set interface.
Suppose I'm adamant on using Set (the same user shouldn't show up twice anyway). What tools can I use to get SpringBoot to work as expected?
It seems to me that a lot of the things that happen in Spring are very behind-the-scenes, which makes it difficult for me to understand why things work when they do. Not knowing why things work makes it difficult to construct things from scratch, which makes it feel as though I'm hacking together a project rather than actually engineering one. So my last question is something like, is there a guide that explains the wiring behind the scenes?
Finally, this is my first time working with Spring... so please excuse me if my questions are entirely off the mark, but I would appreciate any answers nonetheless.
How can one configure JPA entities so that related entities are not fetched unless a certain execution parameter is provided?
According to Spring's documentation, 4.3.9. Configuring Fetch- and LoadGraphs, you need to use the @EntityGraph annotation to specify the fetch policy for queries; however, this doesn't let me decide at runtime whether I want to load those entities.
I'm okay with getting the child entities in a separate query, but to do that I would need to configure my repository or entities not to retrieve any children. Unfortunately, I cannot seem to find any strategies for doing this. The FetchType is ignored, and EntityGraph only helps when specifying which entities I want to retrieve eagerly.
For example, assume Account is the parent and Contact is the child, and an Account can have many Contacts.
I want to be able to do this:
if (fetchPolicy.contains("contacts")) {
    account.setContacts(contactRepository.findByAccountId(account.getAccountId()));
}
The problem is that Spring Data eagerly fetches the contacts anyway.
The Account Entity class looks like this:
@Entity
@Table(name = "accounts")
public class Account {

    protected String accountId;
    protected Collection<Contact> contacts;

    @OneToMany
    //@OneToMany(fetch = FetchType.LAZY) --> doesn't work, Spring repositories ignore this
    @JoinColumn(name = "account_id", referencedColumnName = "account_id")
    public Collection<Contact> getContacts() {
        return contacts;
    }

    // getters & setters
}
The AccountRepository class looks like this:
public interface AccountRepository extends JpaRepository<Account, String> {
    //@EntityGraph ... <-- has type = LOAD or FETCH, but neither can help me prevent retrieval
    Account findOne(String id);
}
Lazy fetching should work properly as long as no method is called on the object returned by getContacts().
If you prefer more manual work and really want control over this (perhaps with different behavior depending on the use case), I would suggest removing contacts from the Account entity and mapping the account on the Contact side instead. One way to tell Hibernate to ignore the field is to map it with the @Transient annotation.
@Entity
@Table(name = "accounts")
public class Account {

    protected String accountId;
    protected Collection<Contact> contacts;

    @Transient
    public Collection<Contact> getContacts() {
        return contacts;
    }

    // getters & setters
}
Then in your service class, you could do something like:
public Account getAccountById(String accountId, Set<String> fetchPolicy) {
    Account account = accountRepository.findOne(accountId);
    if (fetchPolicy.contains("contacts")) {
        account.setContacts(contactRepository.findByAccountId(account.getAccountId()));
    }
    return account;
}
Hope this is what you are looking for. Btw, the code is untested, so you should probably check again.
You can use @Transactional for that.
For that, you need to fetch your Account entity lazily.
@Transactional annotations should be placed around all operations that are inseparable.
Write a method in your service layer that accepts a flag for whether to fetch the contacts eagerly:
@Transactional
public Account getAccount(String id, boolean fetchEagerly) {
    Account account = accountRepository.findOne(id);

    // If you want to fetch contacts, pass fetchEagerly as true
    if (fetchEagerly) {
        // accessing the collection here initializes it while the session is still open
        account.getContacts().size();
    }
    return account;
}
@Transactional allows multiple calls to be made in a single transaction without closing the connection in between.
Hope you find this useful. :)
For more details, refer to this link.
Here is an example that works with JPA 2.1.
Set only the attribute(s) you want to load (with the attributeNodes list).
Your entity, with entity graph annotations:
@Entity
@NamedEntityGraph(name = "accountGraph", attributeNodes = {
        @NamedAttributeNode("accountId")})
@Table(name = "accounts")
public class Account {

    protected String accountId;
    protected Collection<Contact> contacts;

    @OneToMany(fetch = FetchType.LAZY)
    @JoinColumn(name = "account_id", referencedColumnName = "account_id")
    public Collection<Contact> getContacts() {
        return contacts;
    }
}
Your custom repository interface:
public interface AccountRepository extends JpaRepository<Account, String> {

    @EntityGraph("accountGraph")
    Account findOne(String id);
}
Only the "accountId" property will be loaded eagerly. All others properties will be loaded lazily on access.
Spring Data does not ignore fetch = FetchType.LAZY.
My problem was that I was using Dozer mapping to convert my entities to graphs. Evidently Dozer calls the getters and setters to map the two objects, so I needed to add a custom field mapper configuration to ignore uninitialized PersistentCollections...
GlobalCustomFieldMapper.java:
public class GlobalCustomFieldMapper implements CustomFieldMapper {

    public boolean mapField(Object source, Object destination, Object sourceFieldValue,
                            ClassMap classMap, FieldMap fieldMapping) {
        if (!(sourceFieldValue instanceof PersistentCollection)) {
            // Allow dozer to map as normal
            return false;
        }
        if (((PersistentCollection) sourceFieldValue).wasInitialized()) {
            // Allow dozer to map as normal
            return false;
        }
        // Tell dozer the field is already handled, so the uninitialized lazy
        // collection is skipped and the destination field is left unset
        return true;
    }
}
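For completeness, the custom field mapper still has to be registered on the Dozer mapper itself; a minimal sketch (the bean configuration below is an assumption, not part of the original setup):
@Bean
public Mapper dozerMapper() {
    DozerBeanMapper mapper = new DozerBeanMapper();
    // register the custom field mapper so uninitialized lazy collections are skipped
    mapper.setCustomFieldMapper(new GlobalCustomFieldMapper());
    return mapper;
}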
If you are trying to send the result set of your entities to a client, I recommend using data transfer objects (DTOs) instead of the entities. You can create the DTO directly within the HQL/JPQL.
For example:
"select new com.test.MyTableDto(my.id, my.name) from MyTable my"
and if you want to pass the child
"select new com.test.MyTableDto(my.id, my.name, my.child) from MyTable my"
That way you have full control over what is created and passed to the client.
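For this to work, the DTO needs a constructor matching the select clause; a minimal sketch (the class and field names are taken from the JPQL above, while the field types are assumptions):
package com.test;

public class MyTableDto {

    private final Long id;
    private final String name;

    public MyTableDto(Long id, String name) {
        this.id = id;
        this.name = name;
    }

    // getters ...
}
A Spring Data repository method could then return the DTOs directly, for example:
@Query("select new com.test.MyTableDto(my.id, my.name) from MyTable my")
List<MyTableDto> findAllAsDto();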