Cache invalidation through a setter or service? (Java)

I've recently had to implement a cache invalidation system, and ended up hesitating between several ways of doing it.
I'd like to know what the best practice is in my case. I have a classic java back-end, with entities, services and repositories.
Let's say I have a Person object, with the usual setters and getters, persisted in a database.
public class Person {
private Long id;
private String firstName;
private String lastName;
...
}
A PersonRepository that extends JpaRepository<Person, Long> (JpaRepository is an interface, so the repository is declared as an interface; save(), findById() and delete() are inherited):
public interface PersonRepository extends JpaRepository<Person, Long> {
// save(), findById() and delete() are inherited from JpaRepository
}
I have a PersonService, with the usual save(), find(), delete() methods and other more functional methods.
public class PersonService {
public Person save(Person person) {
doSomeValidation(person);
return personRepository.save(person);
}
...
}
Now I also have some jobs that run periodically and manipulate the Person objects. One of them runs every second and uses a cache of Person objects that needs to be rebuilt only if the firstName attribute of a Person has been modified elsewhere in the application.
public class EverySecondPersonJob {
private List<Person> cache;
private boolean cacheValid;
public void invalidateCache() {
cacheValid = false;
}
public void execute() { // run every second
if (!cacheValid)
cache = buildCache();
doStuff(cache);
}
}
There are lots of places in the code that manipulate Person objects and persist them; some may change the firstName attribute, requiring an invalidation of the cache, while others change other things and don't require it. For example:
public class ServiceA {
public void doStuffA(Person person) {
doStuff();
person.setFirstName("aaa");
personRepository.save(person);
}
public void doStuffB(Person person) {
doStuff();
person.setLastName("aaa");
personService.save(person);
}
}
What is the best way of invalidating the cache?
First idea:
Create a PersonService.saveAndInvalidateCache() method, then check every method that calls personService.save(): if it modifies a relevant attribute, make it call PersonService.saveAndInvalidateCache() instead:
public class PersonService {
public Person save(Person person) {
doSomeValidation(person);
return personRepository.save(person);
}
public Person saveAndInvalidateCache(Person person) {
doSomeValidation(person);
Person saved = personRepository.save(person);
everySecondPersonJob.invalidateCache();
return saved;
}
...
}
public class ServiceA {
public void doStuffA(Person person) {
doStuff();
person.setFirstName("aaa");
personService.saveAndInvalidateCache(person);
}
public void doStuffB(Person person) {
doStuff();
person.setLastName("aaa");
personService.save(person);
}
}
It requires lots of modifications and becomes error prone as doStuffX() methods are modified or added: every doStuffX() has to know whether it must invalidate the cache of an entirely unrelated job.
Second idea:
Modify setFirstName() to track the state of the Person object, and make PersonService.save() handle the cache invalidation:
public class Person {
private Long id;
private String firstName;
private String lastName;
private boolean mustInvalidateCache;
public void setFirstName(String firstName) {
this.firstName = firstName;
this.mustInvalidateCache = true;
}
...
}
public class PersonService {
public Person save(Person person) {
doSomeValidation(person);
Person saved = personRepository.save(person);
if (person.isMustInvalidateCache())
everySecondPersonJob.invalidateCache();
return saved;
}
...
}
That solution is less error prone, since no doStuffX() needs to know whether it must invalidate the cache, but it makes the setter do more than just set the attribute, which seems to be a big no-no.
Which solution is the best practice and why?
Thanks in advance.
Clarification: if the cache is invalid, my every-second job calls a method that retrieves the Person objects from the database, builds a cache of other objects based upon the properties of the Person objects (here, firstName), and doesn't modify the Person objects.
The job then uses that cache of other objects for its job, and doesn't persist anything in the database either, so there is no potential consistency issue.

1) You don't
In the usage scenario you described, the best practice is not to do any home-grown caching but to use the cache inside the JPA implementation. A lot of JPA implementations provide one (e.g. Hibernate, EclipseLink, DataNucleus, Apache OpenJPA).
Now I also have some jobs that run periodically and manipulate the Person objects
You would never manipulate a cached object. To manipulate, you need a session/transaction context, and the JPA implementation makes sure that you have the current object.
If you do "invalidation", as you described, you loose transactional properties and get inconsistencies. What happens if a transaction fails and you updated the cache with the new value already? But if you update the cache after the transaction went through, concurrent jobs read the old value.
2) Different Usage Scenario with an Eventually Consistent View
You could do caching "on top" of your data storage layer that provides an eventually consistent view. But you cannot write data back into the same object.
JPA always updates (and caches) the complete object.
Maybe you can store the data that your "doStuff" code derives in another entity?
If this is a possibility, then you have several options. I would "wire in" the cache invalidation via JPA triggers or the "Change Data Capture" capabilities of the database. JPA triggers are similar to your second idea, except that you don't need all code to go through your PersonService. If you run the trigger inside the application, your application cannot have multiple instances, so I would prefer getting change events from the database. You should also reread everything from time to time in case you miss an event.
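As a rough sketch of the "JPA trigger" idea using JPA lifecycle callbacks (the listener registration and the way the job reference is obtained are assumptions, not code from the question; as noted above, this only works with a single application instance):
// register on the entity with @EntityListeners(PersonCacheListener.class)
public class PersonCacheListener {
@PostPersist
@PostUpdate
@PostRemove
public void onPersonChanged(Person person) {
// invalidate the job's cache whenever a Person is written;
// a finer check could compare firstName against its previous value
everySecondPersonJob.invalidateCache(); // hypothetical injected/looked-up reference
}
}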

Related

Hibernate/Spring - excessive memory usage in org.hibernate.engine.internal.StatefulPersistenceContext

I have an application that uses Hibernate and it's running out of memory with a medium volume dataset (~3 million records). When analysing the memory dump using Eclipse's Memory Analyser I can see that StatefulPersistenceContext appears to be holding a copy of the record in memory in addition to the object itself, doubling the memory usage.
I'm able to reproduce this on a slightly smaller scale with a defined workflow, but am unable to simplify it to the level that I can put the full application here. The workflow is:
Insert ~400,000 records (Fruit) into the database from a file
Get all of the Fruits from the database and find if there are any complementary items to create ~150,000 Baskets (containing two Fruits)
Retrieve all of the data - Fruits & Baskets - and save to a file
It's running out of memory at the final stage, and the heap dump shows StatefulPersistenceContext has hundreds of thousands of Fruits in memory, in addition to the Fruits we retrieved to save to the file.
I've looked around online and the suggestion appears to be to use QueryHints.READ_ONLY on the query (I put it on the getAll), or to wrap it in a Transaction with the readOnly property set - but neither of these seem to have stopped the massive StatefulPersistenceContext.
Is there something else I should be looking at?
Examples of the classes / queries I'm using:
public interface ShoppingService {
public void createBaskets();
public void loadFromFile(ObjectInput input);
public void saveToFile(ObjectOutput output);
}
@Service
public class ShoppingServiceImpl implements ShoppingService {
@Autowired
private FruitDAO fDAO;
@Autowired
private BasketDAO bDAO;
@Override
public void createBaskets() {
bDAO.add(Basket.generate(fDAO.getAll()));
}
@Override
public void loadFromFile(ObjectInput input) {
SavedState state = ((SavedState) input.readObject());
fDAO.add(state.getFruits());
bDAO.add(state.getBaskets());
}
@Override
public void saveToFile(ObjectOutput output) {
output.writeObject(new SavedState(fDAO.getAll(), bDAO.getAll()));
}
public static void main(String[] args) throws Throwable {
ShoppingService service = null; // shortened: in the real application the service is obtained from the Spring context
try (ObjectInput input = new ObjectInputStream(new FileInputStream("path\\to\\input\\file"))) {
service.loadFromFile(input);
}
service.createBaskets();
try (ObjectOutput output = new ObjectOutputStream(new FileOutputStream("path\\to\\output\\file"))) {
service.saveToFile(output);
}
}
}
@Entity
public class Fruit {
@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE)
private Long id;
private String name;
// ~ 200 string fields
}
public interface FruitDAO {
public void add(Collection<Fruit> elements);
public List<Fruit> getAll();
}
@Repository
public class JPAFruitDAO implements FruitDAO {
@PersistenceContext
private EntityManager em;
@Override
@Transactional
public void add(Collection<Fruit> elements) {
elements.forEach(em::persist);
}
@Override
public List<Fruit> getAll() {
return em.createQuery("FROM Fruit", Fruit.class).getResultList();
}
}
@Entity
public class Basket {
@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE)
private Long id;
@OneToOne
@JoinColumn(name = "arow")
private Fruit aRow;
@OneToOne
@JoinColumn(name = "brow")
private Fruit bRow;
public static Collection<Basket> generate(List<Fruit> fruits) {
// Some complicated business logic that does things
return null;
}
}
public interface BasketDAO {
public void add(Collection<Basket> elements);
public List<Basket> getAll();
}
@Repository
public class JPABasketDAO implements BasketDAO {
@PersistenceContext
private EntityManager em;
@Override
@Transactional
public void add(Collection<Basket> elements) {
elements.forEach(em::persist);
}
@Override
public List<Basket> getAll() {
return em.createQuery("FROM Basket", Basket.class).getResultList();
}
}
public class SavedState {
private Collection<Fruit> fruits;
private Collection<Basket> baskets;
}
Have a look at this answer here... How does Hibernate detect dirty state of an entity object?
Without access to the heap dump or your complete code, I believe you are seeing exactly what you describe. As long as Hibernate believes that it is possible that the entities will change, it keeps a complete copy in memory so that it can compare the current state of each object to the state as it was originally loaded from the database. Then at the end of the transaction (the transactional block of code), it will automatically write the changes to the database. In order to do this, it needs to know what the state of the object used to be, to avoid a large number of (potentially expensive) write operations.
I believe that setting the transaction block to read-only is a step in the right direction. Not completely sure, but I hope this at least helps you understand why you are seeing large memory consumption.
1. Fetching all Fruits from the database at once, or persisting a large set of Baskets at once, will hurt both database and application performance because of the huge number of objects held in heap memory (young gen + old gen, depending on how long objects survive). Process the data in batches instead of all at once.
Use Spring Batch or implement custom logic to process the data in chunks.
2. The persistence context stores newly created and modified entities in memory. Hibernate sends these changes to the database when the transaction is synchronized. This generally happens at the end of a transaction. However, calling EntityManager.flush() also triggers a transaction synchronization.
Secondly, the persistence context serves as an entity cache, also referred to as the first level cache. To clear entities in the persistence context, we can call EntityManager.clear().
You can take a reference for batch processing from here.
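A sketch of such chunked processing (the batch size is illustrative, and a surrounding transaction is assumed):
// persist a large collection in chunks, clearing the persistence context
// periodically so it does not accumulate every entity
void addInChunks(EntityManager em, List<Fruit> fruits) {
final int batchSize = 50; // illustrative value; tune for your setup
int i = 0;
for (Fruit fruit : fruits) {
em.persist(fruit);
if (++i % batchSize == 0) {
em.flush(); // push pending inserts to the database
em.clear(); // detach managed entities to free memory
}
}
}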
3. If you don't plan on modifying Fruit, you could just fetch entries in read-only mode: Hibernate will not retain the dehydrated state which it normally uses for the dirty checking mechanism, so you get half the memory footprint.
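A sketch of what the read-only variant of the earlier getAll() could look like ("org.hibernate.readOnly" is the standard Hibernate query hint):
@Override
public List<Fruit> getAll() {
return em.createQuery("FROM Fruit", Fruit.class)
.setHint("org.hibernate.readOnly", true) // skip the snapshot used for dirty checking
.getResultList();
}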
Quick solution: if you only execute this method once, e.g. to populate the database, increase the JVM -Xmx value.
Real solution: when you try to persist everything in a single transaction, all the data is kept in memory until commit, and memory is easily exhausted. Instead, save the data in smaller units, for example:
EntityManager em = ...;
// note: close the EntityManager after the loop, not inside it,
// so that every iteration can reuse it
try {
for (Fruit fruit : fruits) {
try {
em.getTransaction().begin();
em.persist(fruit);
em.getTransaction().commit();
} catch (RuntimeException e) {
if (em.getTransaction().isActive()) {
em.getTransaction().rollback();
}
throw e;
}
}
} finally {
if (em.isOpen())
em.close();
}

Axon: Create and Save another Aggregate in Saga after creation of an Aggregate

Update: The issue seems to be the id that I'm using twice, or in other words, the id from the product entity that I want to use for the productinventory entity. As soon as I generate a new id for the productinventory entity, it seems to work fine. But I want to have the same id for both, since they're the same product.
I have 2 Services:
ProductManagementService (saves a Product entity with product details)
1.) For saving the Product Entity, I implemented an EventHandler that listens to ProductCreatedEvent and saves the product to a MySQL database.
ProductInventoryService (saves a ProductInventory entity with stock quantities of product to a certain productId defined in ProductManagementService )
2.) For saving the ProductInventory Entity, I also implemented an EventHandler that listens to ProductInventoryCreatedEvent and saves the product inventory to a MySQL database.
What I want to do:
When a new Product is created in ProductManagementService, I want to create a ProductInventory entity in ProductInventoryService directly afterwards and save it to my MySQL table. The new ProductInventory entity shall have the same id as the Product entity.
To accomplish that, I created a Saga, which listens to a ProductCreatedEvent and sends a new CreateProductInventoryCommand. As soon as the CreateProductInventoryCommand triggers a ProductInventoryCreatedEvent, the EventHandler described in 2.) should catch it. Except it doesn't.
The only thing that gets saved is the Product entity, so in summary:
1.) works, 2.) doesn't. A ProductInventory Aggregate does get created, but it doesn't get saved since the saving process that is connected to an EventHandler isn't triggered.
I also get an Exception, the application doesn't crash though: Command 'com.myApplication.apicore.command.CreateProductInventoryCommand' resulted in org.axonframework.commandhandling.CommandExecutionException(OUT_OF_RANGE: [AXONIQ-2000] Invalid sequence number 0 for aggregate 3cd71e21-3720-403b-9182-130d61760117, expected 1)
My Saga:
@Saga
@ProcessingGroup("ProductCreationSaga")
public class ProductCreationSaga {
@Autowired
private transient CommandGateway commandGateway;
@StartSaga
@SagaEventHandler(associationProperty = "productId")
public void handle(ProductCreatedEvent event) {
System.out.println("ProductCreationSaga, SagaEventHandler, ProductCreatedEvent");
String productInventoryId = event.productId;
SagaLifecycle.associateWith("productInventoryId", productInventoryId);
//takes ID from product entity and sets all 3 stock attributes to zero
commandGateway.send(new CreateProductInventoryCommand(productInventoryId, 0, 0, 0));
}
@SagaEventHandler(associationProperty = "productInventoryId")
public void handle(ProductInventoryCreatedEvent event) {
System.out.println("ProductCreationSaga, SagaEventHandler, ProductInventoryCreatedEvent");
SagaLifecycle.end();
}
}
The EventHandler that works as intended and saves a Product Entity:
@Component
public class ProductPersistenceService {
@Autowired
private ProductEntityRepository productRepository;
//works as intended
@EventHandler
void on(ProductCreatedEvent event) {
System.out.println("ProductPersistenceService, EventHandler, ProductCreatedEvent");
ProductEntity entity = new ProductEntity(event.productId, event.productName, event.productDescription, event.productPrice);
productRepository.save(entity);
}
@EventHandler
void on(ProductNameChangedEvent event) {
System.out.println("ProductPersistenceService, EventHandler, ProductNameChangedEvent");
ProductEntity existingEntity = productRepository.findById(event.productId).get();
ProductEntity entity = new ProductEntity(event.productId, event.productName, existingEntity.getProductDescription(), existingEntity.getProductPrice());
productRepository.save(entity);
}
}
The EventHandler that should save a ProductInventory Entity, but doesn't:
@Component
public class ProductInventoryPersistenceService {
@Autowired
private ProductInventoryEntityRepository productInventoryRepository;
//doesn't work
@EventHandler
void on(ProductInventoryCreatedEvent event) {
System.out.println("ProductInventoryPersistenceService, EventHandler, ProductInventoryCreatedEvent");
ProductInventoryEntity entity = new ProductInventoryEntity(event.productInventoryId, event.physicalStock, event.reservedStock, event.availableStock);
System.out.println(entity.toString());
productInventoryRepository.save(entity);
}
}
Product-Aggregate:
@Aggregate
public class Product {
@AggregateIdentifier
private String productId;
private String productName;
private String productDescription;
private double productPrice;
public Product() {
}
@CommandHandler
public Product(CreateProductCommand command) {
System.out.println("Product, CommandHandler, CreateProductCommand");
AggregateLifecycle.apply(new ProductCreatedEvent(command.productId, command.productName, command.productDescription, command.productPrice));
}
@EventSourcingHandler
protected void on(ProductCreatedEvent event) {
System.out.println("Product, EventSourcingHandler, ProductCreatedEvent");
this.productId = event.productId;
this.productName = event.productName;
this.productDescription = event.productDescription;
this.productPrice = event.productPrice;
}
}
ProductInventory-Aggregate:
@Aggregate
public class ProductInventory {
@AggregateIdentifier
private String productInventoryId;
private int physicalStock;
private int reservedStock;
private int availableStock;
public ProductInventory() {
}
@CommandHandler
public ProductInventory(CreateProductInventoryCommand command) {
System.out.println("ProductInventory, CommandHandler, CreateProductInventoryCommand");
AggregateLifecycle.apply(new ProductInventoryCreatedEvent(command.productInventoryId, command.physicalStock, command.reservedStock, command.availableStock));
}
@EventSourcingHandler
protected void on(ProductInventoryCreatedEvent event) {
System.out.println("ProductInventory, EventSourcingHandler, ProductInventoryCreatedEvent");
this.productInventoryId = event.productInventoryId;
this.physicalStock = event.physicalStock;
this.reservedStock = event.reservedStock;
this.availableStock = event.availableStock;
}
}
What you are noticing right now is the uniqueness requirement of the [aggregate identifier, sequence number] pair within a given Event Store. This requirement is in place to safeguard you from potential concurrent access to the same aggregate instance, as several events for the same aggregate all need to have a unique overall sequence number. This number is furthermore used to identify the order in which events need to be handled to guarantee the Aggregate is recreated consistently.
So, you might think this would opt for a "sorry there is no solution in place", but that is luckily not the case. There are roughly three things you can do in this set up:
Live with the fact that both aggregates will have unique identifiers.
Use distinct bounded contexts between both applications.
Change the way aggregate identifiers are written.
Option 1 is arguably the most pragmatic and used by the majority. You have however noted that reusing the identifier is necessary, so I am assuming you have already disregarded this option entirely. Regardless, I would try to revisit this approach, as using UUIDs by default for each new entity you create can save you from trouble in the future.
Option 2 reflects the Bounded Context notion from DDD. Letting the Product aggregate and ProductInventory aggregate reside in distinct contexts means you will have distinct event stores for both. Thus, the uniqueness constraint would be kept, as no single store contains both aggregate event streams. Whether this approach is feasible, however, depends on whether both aggregates actually belong to distinct contexts. If they do, you could for example use Axon Server's multi-context support to create two distinct applications.
Option 3 requires a little bit of insight in what Axon does. When it stores an event, it will invoke the toString() method on the #AggregateIdentifier annotated field within the Aggregate. As your #AggregateIdentifier annotated field is a String, you are given the identifier as is. What you could do is have typed identifiers, for which the toString() method doesn't return only the identifier, but it appends the aggregate type to it. Doing so will make the stored aggregateIdentifier unique, whereas from the usage perspective it still seems like you are reusing the identifier.
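As a sketch of that third option, a hypothetical typed identifier whose toString() appends the aggregate type (the class and names are illustrative, not from the question):
public class ProductInventoryId {
private final String identifier;
public ProductInventoryId(String identifier) {
this.identifier = identifier;
}
@Override
public String toString() {
// same raw id as the Product, but stored uniquely per aggregate type
return identifier + "-ProductInventory";
}
}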
Which of the three options suits your solution better is hard to deduce from my perspective. What I did do, is order them in most reasonable from my perspective.
Hoping this will help you further, @Jan!

Pattern for persisting data in Realm?

My issue is how to organize the code. Let say I have a User class
public class User extends RealmObject {
@PrimaryKey
private String id;
@Required
private String name;
public User() { // per requirement of no args constructor
id = UUID.randomUUID().toString();
}
// Assume getter & setter below...
}
and a Util class is needed to handle the save in an asynchronous manner, since RealmObjects cannot have methods other than getters/setters.
public class Util {
public static void save(User user, Realm realm) {
RealmAsyncTask transaction = realm.executeTransaction(new Realm.Transaction() {
@Override
public void execute(Realm realm) {
realm.copyToRealm(user); // <====== Argument needs to be declared final in parent method's argument!
}
}, null);
}
}
The intention is to put save() in a Util class to prevent spreading similar save code all over the code-base so that every time I wanted to save I would just call it as such:
User u = new User();
u.setName("Uncle Sam");
Util.save(u, Realm.getDefaultInstance());
Not sure if this affects performance at all, but I was just going to save all fields overwriting what was there except for the unique id field every single time.
The problem is that I now need to set the "user" argument as final in the Util.save() method, which means I cannot pass in the object I need to save other than once.
Is there a different way of handling this? Maybe a different pattern? Or am I looking at this all wrong and should go back to SQLite?
Why is it a problem to declare public static void save(final User user, Realm realm)? It just means you cannot reassign the user variable to something else.
That said, the existence of a save() method can be a potential code smell as you then spread the update behaviour across the code base. I would suggest looking into something like the Repository pattern (http://martinfowler.com/eaaCatalog/repository.html) instead.
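A minimal sketch of such a repository (the class and names are hypothetical; the transaction call uses the synchronous Realm API, and copyToRealmOrUpdate performs an upsert by the primary key):
public class UserRepository {
private final Realm realm;
public UserRepository(Realm realm) {
this.realm = realm;
}
public void save(final User user) { // final, as required by the anonymous inner class
realm.executeTransaction(new Realm.Transaction() {
@Override
public void execute(Realm realm) {
realm.copyToRealmOrUpdate(user);
}
});
}
}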
Realm is actually working on an example showing how you can combine the Model-View-Presenter architecture with a Repository to encapsulate updates which is a good pattern for what you are trying to do here. You can see the code for it here: https://github.com/realm/realm-java/pull/1960

Which blocks of code should be synchronized?

I have three different classes:
Managed bean (singleton scope)
Managed bean (session scope)
Spring #Controller
I read a few posts here about synchronization, but I still don't understand how it should be done and how it works.
Short examples:
1) Managed bean (singleton scope).
Here all class fields should be the same for all users. All users work with one instance of this object, or with copies of it (???).
public class CategoryService implements Serializable {
private CategoryDao categoryDao;
private TreeNode root; //should be the same for all users
private List<String> categories = new ArrayList<String>();//should be the same for all users
private List<CategoryEntity> mainCategories = new ArrayList<CategoryEntity>();
//should be the same for all users
public void initCategories() {
//get categories from database
}
public List<CategoryEntity> getMainCategories() {
return mainCategories;
}
}
2) Managed bean (session scope)
In this case, every user has his own instance of the object.
When a user tries to delete a category, should he check whether other users are trying to delete the same category? So do we need to use a synchronized block???
public class CategoryServiceSession implements Serializable {
private CategoryDao categoryDao;
private CategoryService categoryService;
private TreeNode selectedNode;
public TreeNode getSelectedNode() {
return selectedNode;
}
public void setSelectedNode(TreeNode selectedNode) {
this.selectedNode = selectedNode;
}
public void deleteCategory() {
CategoryEntity current = (CategoryEntity) selectedNode.getData();
synchronized (this) {
//configure tree
selectedNode = null;
categoryDao.delete(current);
}
categoryService.initCategories();
}
}
3) Spring #Controller
Here all users may share one instance (or does each user have his own instance???). But when some admin tries to change a parameter of some user, should he check whether another admin is trying to do the same operation??
@Controller
@RequestMapping("/rest")
public class UserResource {
@Autowired
private UserDao userDao;
@RequestMapping(value = "/user/{id}", method = RequestMethod.PUT)
public @ResponseBody UserEntity changeBannedStatus(@PathVariable Long id) {
UserEntity user = userDao.findById(id);
synchronized (id) {
user.setBanned(!user.getBanned());
userDao.update(user);
}
return user;
}
}
So, how should it be?
Sorry for my English.
In the code that you've posted -- nothing in particular needs to be synchronised, and the synchronised blocks you've defined won't protect you from anything. Your controller scope is singleton by default.
If your singletons change shared objects (mostly just their fields), then you should likely flag the whole method as synchronised.
Method level variables and final parameters will likely never need synchronization (at least in the programming model you seem to be using), so don't worry about them.
The session object is guarded by serialisation, mostly, but you can still have data races if your user has concurrent requests; you'll have to imagine creative ways to deal with this.
You may/will have concurrency issues in the database (multiple users trying to delete or modify a database row concurrently), but this should be handled by a pessimistic or optimistic locking and transaction policy in your DAO.
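For the optimistic variant with JPA, a version field on the entity is usually all that is needed (a sketch, assuming your entities are JPA-managed):
@Entity
public class CategoryEntity {
@Id
private Long id;
@Version // incremented on every update; a concurrent stale update then fails with an OptimisticLockException
private long version;
// ... other fields
}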
Good luck.
Generally speaking, using synchronized statements in your code reduces scalability. If you ever try to use multiple server instances, your synchronized blocks will most likely be useless. Transaction semantics (using either optimistic or pessimistic locking) should be enough to ensure that your objects remain consistent. So in 2 and 3 you don't need that.
As for the shared variables in CategoryService, it may be possible to synchronize them, but your categories seem to be some kind of cache. If this is the case, you might try to use a cache of your persistence provider (e.g. in Hibernate, a second-level cache or query cache) or of your database.
Also calling categoryService.initCategories() in deleteCategory() probably means you are reloading the whole list, which is not a good idea, especially if you have many categories.

Bi-directional one-to-many relationship does not work

I have made an application that displays a lot of questions from my database. For this I have made a question entity. I want to be able to "report" a question for being poor/good and so on, so for this I made a feedback entity.
The relationship between these would be: one question may have many feedbacks, and one feedback belongs to one question.
The problem is that when I save the question feedback instance, it all maps perfectly in the database, but when I open a question and loop through all the feedbacks, none of the newly added feedbacks is displayed. In order to have them displayed I need to re-deploy the web application.
Why does this happen?
For readability I only show the parts involved
QuestionFeedback entity
public class QuestionFeedback implements Serializable {
@ManyToOne
private Question question;
....
public void setQuestion(Question question) {
this.question = question;
if (!question.getFeedbacks().contains(this)) {
question.getFeedbacks().add(this);
}
}
....
}
Question entity
@Entity
public class Question implements Serializable {
@OneToMany(mappedBy = "question", fetch = FetchType.EAGER)
private List<QuestionFeedback> feedbacks;
public Question() {
feedbacks = new ArrayList<QuestionFeedback>();
}
public void addFeedback(QuestionFeedback questionFeedback) {
if (!getFeedbacks().contains(questionFeedback)) {
getFeedbacks().add(questionFeedback);
}
if (questionFeedback.getQuestion() != this) {
questionFeedback.setQuestion(this);
}
}
}
Backing bean for the report page
The question entity is already retrieved from the database.
public String flag() {
questionFeedback.setQuestion(question);
questionFeedbackService.persist(questionFeedback);
return "index";
}
DAO class
public void persist(QuestionFeedback questionFeedback) {
entityManager.persist(questionFeedback);
}
This is a simple instance of having a dirty session.
Although these can be caused by all sorts of issues, there are usually 2 simple things to keep in mind that will make it very easy to track this bug down.
First, you must always remember that when we persist our data in JPA/Hibernate, we don't necessarily have any guarantee that the transaction has completed in the database. The true meaning of the "persist" method is a common source of errors and questions; make sure you fully understand it and how it relates to your business logic:
What's the advantage of persist() vs save() in Hibernate?
Second, after you have guaranteed that the transaction has completed and the data has been saved, you can use the EntityManager.refresh method to update the state of any object from the database.
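A short sketch of that refresh (assuming the EntityManager and the already-loaded question are in scope):
// after the transaction that persisted the feedback has committed,
// re-read the question's state, including its feedbacks, from the database
entityManager.refresh(question);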
You can clear the JPA cache through the following code:
em.getEntityManagerFactory().getCache().evictAll();
For the record, I always flush after persisting data. Even though your database has the data, I would just try this.
public String flag() {
questionFeedback.setQuestion(question);
questionFeedbackService.persist(questionFeedback);
questionFeedbackService.flush();
return "index";
}
