EntityManager.clear() doesn't stop data being written to the database - java

I have a situation where the user needs to import a file's worth of data into a database. They can select at the start whether to do a standard import and then create a report summary file, or to 'simulate' the import (i.e. create the same report summary, but not actually import anything). The basic setup is below:
public class Importer {
    @Autowired
    protected IConverter converter;
    @Autowired
    protected ISerializer serializer;
    @Autowired
    protected IReporter reporter;

    public void importData( InputStream stream ) throws Exception {
        CustomerData data = converter.convert( stream );
        // ** database at this point has been updated! **
        if( getContext().isSerialize() ) {
            serializer.serialize( data );
        }
        if( getContext().isReport() ) {
            reporter.report( data, "report.xls" );
        }
    }
}
public class Converter implements IConverter {
    @Transactional
    public CustomerData convert( InputStream stream ) throws Exception {
        try {
            CustomerData data = ... // read file and create/match with db entities
            return data;
        } finally {
            if( !getContext().isSerialize() ) {
                // clear any changes made to entities loaded from the db
                getEntityManager().clear();
                // ** database at this point is unaffected **
            }
        }
    }
}
The Importer is a bean class configured in Spring 4.1. Database is JPA 2.1/Hibernate 4.3.11/MySQL 5.5. Using Java 8.
The CustomerData object is a tree of database entity objects, some of which have been matched with data in the database (potentially with properties updated from the import file) and others of which are new entities.
isSerialize() and isReport() allow control over whether the database is updated. When simulating the import, isSerialize() = false and isReport() = true.
Stepping through the code, when I enter the finally block and clear the entity manager, the data in the database is as it was before the import. However, when I return to the importData() method, the database has been updated with the changes to the entities!
Clearly the transactional importData() method completing commits the data, but why did clearing the entity manager not stop the changes from happening? To make sure, I set a breakpoint on [Hibernate] AbstractEntityManagerImpl.flush(), and it is not called here at all.
Could someone please help me understand why clear() doesn't work, and what I should be doing instead?

Thanks to S. Piller for putting me on the right track. For anyone in a similar position: clearing the entity manager won't stop already-flushed changes from being committed to the database at the end of the transaction - clear() only discards changes made to entities since the last flush.
The way around it is to notify the transaction manager that the transaction needs to be rolled back:
@Transactional
public CustomerData convert( InputStream stream ) {
    try {
        CustomerData data = ... // read file and create/match with db entities
        return data;
    } finally {
        if( !getContext().isSerialize() ) {
            // Ensure the current transaction is rolled back once the topmost @Transactional method completes.
            TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
        }
    }
}
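If you prefer to keep the rollback decision out of the converter, the same effect can be had with a programmatic transaction. The following is only a sketch under the assumption that a PlatformTransactionManager bean can be injected; it reuses the IConverter and getContext() names from the question, and TransactionTemplate/setRollbackOnly() are standard Spring APIs:
import java.io.InputStream;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.support.TransactionTemplate;

public class SimulatingImportService {
    @Autowired
    protected IConverter converter;
    @Autowired
    private PlatformTransactionManager txManager;

    public CustomerData convertPossiblySimulated( InputStream stream ) {
        TransactionTemplate tx = new TransactionTemplate(txManager);
        return tx.execute(status -> {
            try {
                // converter.convert() joins this transaction (default REQUIRED propagation)
                CustomerData data = converter.convert(stream);
                if( !getContext().isSerialize() ) {
                    // simulation: mark the whole transaction rollback-only so nothing is committed
                    status.setRollbackOnly();
                }
                return data;
            } catch (Exception e) {
                // checked exception from convert(); rethrowing also rolls the transaction back
                throw new RuntimeException(e);
            }
        });
    }
}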

Related

Spring Data JPA flush does not save changes to database

I have the following code that first checks for a record and, if one is found, deletes that record and flushes the change to the database. However, when I debug, I see that the change is not reflected in the database when the debugger hits the next code block (final Stock stock = new Stock();).
@Transactional
public CommandDTO createOrUpdate(StockRequest request) {
    stockRepository.findByBrandUuidAndProductUuid(
            request.getBrandUuid(),
            request.getProductUuid())
            .ifPresent(stock -> {
                stockRepository.delete(stock);
                stockRepository.flush();
            });
    final Stock stock = new Stock();
    if (request.isOutOfStock()) {
        stock.setBrandUuid(request.getBrandUuid());
        stock.setProductUuid(request.getProductUuid());
        stockRepository.save(stock);
    }
    return CommandDTO.builder().uuid(stock.getUuid()).build();
}
So, what is the mistake in this approach?
JPA doesn't support final fields.
You can use two alternative solutions for an immutable class:
use @Immutable on the entity class, or
change the entity class fields to have only getters.
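For illustration, a minimal sketch of the first option, assuming a stripped-down Stock entity (the field names are taken from the question; @Immutable is the Hibernate-specific annotation org.hibernate.annotations.Immutable, not part of the JPA spec):
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Immutable;

@Entity
@Immutable // Hibernate ignores updates made to instances of this entity
public class Stock {
    @Id
    private String uuid;
    private String brandUuid;
    private String productUuid;

    protected Stock() { } // no-arg constructor required by JPA

    public String getUuid() { return uuid; }
    public String getBrandUuid() { return brandUuid; }
    public String getProductUuid() { return productUuid; }
}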

How can I test a constraint violation when performing a delete in a Spring integration test?

I want to test that my controller endpoint returns an appropriate error code when trying to delete a record with referencing child records. In my integration test, I need to set up the state so that the related records exist, then invoke the deletion endpoint, expect the error condition, and then (ideally) roll the entire DB back to the state it was in before the test.
e.g.
INSERT INTO parent_rec (id) VALUES ("foo");
INSERT INTO child_rec (id, parent_id) VALUES ("bar", "foo");
COMMIT;
DELETE FROM parent_rec WHERE id = "foo"; -- bang!
@PersistenceContext
EntityManager em;

@Transactional
void testDelete() {
    // Set up records
    ParentRecord record = new ParentRecord("foo");
    em.persist(record);
    em.persist(new ChildRecord("bar", record));
    //delete
    mockMvc.perform(delete("/parent/foo")).andExpect(/* some error code */);
}
However, I'm running into issues. If I put the #Transactional annotation at the method or class level, the records aren't persisted until after the deletion is attempted so the deletion returns a 200 OK rather than a 400 Bad Request or similar.
The current solution is for the tests to be run in order (with a previous test setting up records which a subsequent test tries to operate on). However, this makes the tests pretty brittle and dependent on each other, which I'd like to avoid primarily to make changing the code easier.
Can I accomplish what I want without using an additional layer of tooling? In the past, I'd have used DBUnit to do something like this, but if I can avoid adding the additional dependency I'd prefer to keep it simple.
In JEE I solved these issues kind of simply by splitting my code into two parts:
@Transactional(propagation = Propagation.REQUIRES_NEW)
public class ParentRecordTestFacade {

    public void create() {
        // Create record here
    }

    public void delete() {
        // Delete record here
    }
}
and then call both methods in the actual unit test one after another.
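A sketch of how the test could then look, assuming the facade is a Spring bean that persists and removes the parent/child records from the question (the expected status code and the usual static MockMvc imports are assumptions):
@Autowired
private ParentRecordTestFacade facade;

@Test
void deleteParentWithChildReturnsError() throws Exception {
    // assumes static imports of MockMvcRequestBuilders.delete and MockMvcResultMatchers.status
    facade.create(); // commits parent "foo" and child "bar" in its own transaction
    try {
        mockMvc.perform(delete("/parent/foo"))
                .andExpect(status().isBadRequest()); // or whatever code the endpoint maps the violation to
    } finally {
        facade.delete(); // clean up in another REQUIRES_NEW transaction
    }
}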
Running only some code in a separate transaction also comes in handy. You can achieve it, for example, by creating a method for the block of code to invoke in a transaction:
protected <T> T getInsideTransaction(Function<EntityManager, T> transactional) {
    EntityManager em = null;
    EntityTransaction trx = null;
    try {
        em = entityManagerFactory.createEntityManager();
        trx = em.getTransaction();
        trx.begin();
        return transactional.apply(em);
    } catch (Throwable throwable) {
        throw throwable;
    } finally {
        if (trx != null) {
            if (!trx.getRollbackOnly()) {
                trx.commit();
            } else {
                trx.rollback();
            }
        }
        if (em != null) {
            em.close();
        }
    }
}
Now you can invoke it like that:
void testDelete() {
    // Set up records
    getInsideTransaction(em -> {
        ParentRecord record = new ParentRecord("foo");
        em.persist(record);
        em.persist(new ChildRecord("bar", record));
        return null; // Function<EntityManager, T> requires a return value
    });
    //delete
    mockMvc.perform(delete("/parent/foo")).andExpect(/* some error code */);
}
You can invoke an arbitrary block of code within separate transaction that way.
In Spring, especially for testing such cases in the repository layer, I use org.springframework.test.context.transaction.TestTransaction; it looks like it should work for you too. Pay attention to the @Commit annotation on the test method, otherwise your records will not be saved.
@Commit
void testDelete() {
    // Set up records
    ParentRecord record = new ParentRecord("foo");
    em.persist(record);
    em.persist(new ChildRecord("bar", record));
    TestTransaction.end();
    TestTransaction.start();
    //delete
    mockMvc.perform(delete("/parent/foo")).andExpect(/* some error code */);
}
But of course, after the commit you have to delete your records manually.
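A sketch of that manual cleanup, appended to the end of the test above; it assumes the transaction opened by TestTransaction.start() is still active and, because of @Commit, will be committed when the test method returns (the find keys mirror the ids used in the question and are illustrative):
// still inside testDelete(), after the mockMvc assertions
em.remove(em.find(ChildRecord.class, "bar"));   // delete the child first to satisfy the FK
em.remove(em.find(ParentRecord.class, "foo"));
// the transaction started by TestTransaction.start() commits when the test method ends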

Hibernate 5.2.9.Final cache not updated

I have a service (which I for some reason call controller) that is injected into the Jersey resource method.
@Named
@Transactional
public class DocCtrl {
    ...

    public void changeDocState(List<String> uuids, EDocState state, String shreddingCode) throws DatabaseException, WebserviceException, RepositoryException, ExtensionException, LockException, AccessDeniedException, PathNotFoundException, UnknowException {
        List<Document2> documents = doc2DAO.getManyByUUIDs(uuids);
        for (Document2 doc : documents) {
            if (EDocState.SOFT_DEL == state) {
                computeShreddingFor(doc, shreddingCode); // here the state change happens and it is persisted to db
            }
            if (EDocState.ACTIVE == state)
                unscheduleShredding(doc);
        }
    }
}
doc2DAO.getManyByUUIDs(uuids) fetches the entity objects from the database.
@Repository
public class Doc2DAO {

    @PersistenceContext(name = Vedantas.PU_NAME, type = PersistenceContextType.EXTENDED)
    private EntityManager entityManager;

    public List<Document2> getManyByUUIDs(List<String> uuids) {
        if (uuids.isEmpty())
            uuids.add("-3");
        TypedQuery<Document2> query = entityManager.createNamedQuery("getManyByUUIDs", Document2.class);
        query.setParameter("uuids", uuids);
        return query.getResultList();
    }
}
However, when I make a second request to my API, I see the state of this entity object unchanged, i.e. the same as before the logic above ran.
In the DB, the changed status is still there.
After a restart of the API service, I get the entity in the correct state.
As I understand it, Hibernate uses its L2 cache for the managed objects.
So can you please point me to what I am doing wrong here? Obviously, I need to get the cached entity with the changed state without a service restart, and I would like to keep entities attached to the persistence context for performance reasons.
In the logic I am making some changes to this object. After completion of the changeDocState method, the state is properly changed and persisted in the database.
Thanks for the answers.

Spring refresh entity if db state has changed

I have a Spring Boot application and an Oracle DB with lots of PL/SQL procedures, and these change the state of the DB all the time.
So now I want to change a loaded entity and save it. If the entity state in the entity manager and the state in the DB are equal, everything works fine. But in some cases they are not equal: I load an entity and make some changes, and during this a PL/SQL procedure changes the DB table. If I then save the entity, I of course get an exception. So I tried to catch the exception and, in the catch block, refresh the entity before saving it. But I still get an exception. Is the transaction not yet finished? How can I handle this problem?
I hope the example code explains a little bit.
@RestController
@RequestMapping("/*")
public class FacadeController {
    ...

    @ResponseStatus(HttpStatus.OK)
    @RequestMapping( value= "/test4" , method=RequestMethod.GET)
    public String test4(){
        Unit unit = unitSvice.loadUnit(346497519L);
        List<UnitEntry> entries = unit.getEntries();
        for (UnitEntry g : entries) {
            if (g.getUnitEntryId() == 993610345L) {
                g.setTag("AA");
                g.setVersion(g.getVersion() + 1);
                g.setStatus("SaveOrUpdate");
            }
        }
        // <-- DB table changed here: the entity managed by the entity manager
        //     and the DB table are no longer equal.
        try {
            unitSvice.updateUnit(unit , false);
        } catch(DataAccessException | IllegalArgumentException e) {
            unitSvice.updateUnit(unit , true);
        }
        ...
    }
}
#Service("unitSvice")
public class UnitSvice {
#Autowired
private UnitDao repoUnit;
#Transactional
public Unit loadUnit(Long _id) {
Unit unit = repoUnit.findOne(_id);
return unit;
}
#Transactional
public void updateUnit(Unit unit, boolean _withrefrsh ) {
if(_withrefrsh) {
getEntityManager().refresh(unit.getId());
}
repoUnit.save(unit);
}
}
I hope someone can help me.
Thanks
Yes, the problem is: when you call the load method, which is a transactional method, the entities become detached from the session/entity manager when you return from that method. So next you are trying to persist a detached object; that's why you get the exception.
So you can probably use session.update() or session.merge() to save the update to the database.
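A minimal sketch of what that could look like in the service from the question, assuming an EntityManager is available there (merge() returns the managed copy, so continue working with that instance):
@PersistenceContext
private EntityManager entityManager;

@Transactional
public void updateUnit(Unit unit) {
    // reattach the detached entity; merge() returns the managed instance
    Unit managed = entityManager.merge(unit);
    repoUnit.save(managed); // Spring Data JPA's save() also merges a detached entity whose id is set
}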

After a committed & shut-down transaction which added a new class to a graph, a new tx doesn't see the class in the schema, though it is persisted

We persist a graph in one piece of code and then have another piece of code that tries to retrieve it. We open our transactions with this Spring bean. Anyone who wants to access the database always calls the getGraph() method of this bean.
public class OrientDatabaseConnectionManager {

    private OrientGraphFactory factory;

    public OrientDatabaseConnectionManager(String path, String name, String pass) {
        factory = new OrientGraphFactory(path, name, pass).setupPool(1,10);
    }

    public OrientGraphFactory getFactory() {
        return factory;
    }

    public void setFactory(OrientGraphFactory factory) {
        this.factory = factory;
    }

    /**
     * Method returns graph instance from the factory's pool.
     * @return
     */
    public OrientGraph getGraph(){
        OrientGraph resultGraph = factory.getTx();
        resultGraph.setThreadMode(OrientBaseGraph.THREAD_MODE.ALWAYS_AUTOSET);
        return resultGraph;
    }
}
(I was unable to quite understand the thread_mode fully, but I think it should not be related to the problem.)
The code, that persists the graph commits and shuts down, as you can see here:
OrientDatabaseConnectionManager connMan; // this is an injected bean from above

public boolean saveGraphToOrientDB(
        SparseMultigraph<SocialVertex, SocialEdge> graph, String label) {
    boolean isSavedCorrectly = false;
    OrientGraph graphO = connMan.getGraph();
    try {
        graphDBinput.saveGraph(graph, label, graphO);
        // LOG System.out.println("Graph was saved with label "+label);
        isSavedCorrectly = true;
    } catch (AlreadyUsedGraphLabelException ex) {
        Logger.getLogger(GraphDBFacade.class.getName()).log(Level.SEVERE, null, ex);
    } finally {
        graphO.shutdown(); // normally calls .commit() automatically, but the commit already happens inside
    }
    return isSavedCorrectly;
}
This commit works well: the data are always persisted (I checked every time in the OrientDB admin interface), and the first persisted graph is always viewable. It might be important to note that during saving, the label used defines a new class (thus modifying the schema, as I understand it) which is then used for the persisted graph.
The retrieval of the graph looks something like this:
@Override
public SocialGraph getSocialGraph(String label) {
    OrientGraph graph = connMan.getGraph();
    SocialGraph socialGraph = null;
    try {
        socialGraph = new SocialGraph(getAllSocialNodes(label, graph), getAllSocialEdges(label, graph));
    } catch (Exception e) {
        logger.error(e);
    } finally {
        graph.shutdown();
    }
    return socialGraph;
}

public List<Node> getAllSocialNodes(String label, OrientGraph graph) {
    return constructNodes(graphFilterMan.getAllNodesFromGraph(label, graph));
}

public Set<Vertex> getAllNodesFromGraph(String graphLabel, OrientGraph graph) {
    Set<Vertex> labelledGraph = new HashSet<>();
    try {
        Iterable<Vertex> configGraph = graph.getVerticesOfClass(graphLabel);
        for(Vertex v : configGraph){ // THE CODE CRASHES HERE, WITH "CLASS WITH NAME graphLabel DOES NOT EXIST"
            labelledGraph.add(v);
        }
    } catch(Exception ex){
        logger.error(ex);
        graph.rollback();
    }
    return labelledGraph;
}
So the problem is that when we persist a new graph with a new class, say "graph01", and then want to retrieve it, it goes okay. Later we create a "graph02" and want to retrieve it, but it crashes as commented above: OrientDB tells you that the class with the name "graph02" does not exist.
It does exist in the admin interface at that time; however, when I debug, the class is actually not in the schema right after the call to factory.getTx().
Right at the beginning, when we get a transaction graph instance from the factory, the schema held by the rawGraph's underlying database metadata does NOT contain the new class, even though I can see it committed in the database. (In the debugger there should be one more class in the schema: the one that was persisted and committed a while ago, which can also be seen in the OrientDB admin interface but is not present in the variable.)
What I presume is happening is that the pool from which the factory gets the transaction has a cached schema of some sort; it does not refresh the schema when we add a new class.
Why does the schema not show the new class when we try to get the new graph out? Does the schema not get refreshed?
I found in the schema documentation that:
NOTE: Changes to the schema are not transactional, so execute them outside a transaction.
So should we create the new class outside a transaction, and would we then get an updated schema in the context?
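For illustration only, creating the class outside a transaction would look roughly like the sketch below. It assumes the connection manager from above and the label variable from saveGraphToOrientDB; getNoTx(), getVertexType() and createVertexType() are the non-transactional Graph API calls for schema work. Whether this alone fixes the pooled-schema staleness is exactly the open question:
// sketch: define the vertex class outside any transaction, then persist transactionally
OrientGraphNoTx noTx = connMan.getFactory().getNoTx();
try {
    if (noTx.getVertexType(label) == null) {
        noTx.createVertexType(label); // schema change, executed outside a transaction
    }
} finally {
    noTx.shutdown();
}
OrientGraph graph = connMan.getGraph(); // now open the transactional graph and save the vertices/edges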
(Maybe I am understanding the concepts wrong: I first came into contact with OrientDB just yesterday, and I am trying to find the problem in already-written code.)
The DB we use is remote:localhost/socialGraph, OrientDB version 1.7.4.
We noticed roughly the same issue in our code: schema changes aren't visible in pooled connections.
We also have a sort of factory that gets a connection. What we do is keep a schema version number; each time an operation changes the schema, we bump the number, and when a new connection is opened, we check whether the schema version has changed.
When the schema has changed, we reload the schema, close the pool and recreate it. This method has proven to work for us (we are currently on version 2.0.15).
Here's the relevant code:
private static volatile int schemaVersion = -1;

private OPartitionedDatabasePool pool;

protected void createPool() {
    pool = new OPartitionedDatabasePool(getUrl(), getUsername(), getPassword());
}

@Override
public synchronized ODatabaseDocumentTx openDatabase() {
    ODatabaseDocumentTx db = pool.acquire();
    // DatabaseInfo is a simple class kept in a static context that holds the schema version.
    DatabaseInfo databaseInfo = CurrentDatabaseInfo.getDatabaseInfo();
    ODocument document = db.load((ORID) databaseInfo.getId(), "schemaVersion:0", true);
    Integer version = document.field("schemaVersion");
    if (schemaVersion == -1) {
        schemaVersion = version;
    } else if (schemaVersion < version) {
        db.getMetadata().getSchema().reload();
        schemaVersion = version;
        pool.close();
        createPool();
        db = pool.acquire();
    }
    return db;
}
In the end the problem was that we had two Liferay projects, each with its own Spring application context in its WAR file; when we deployed these projects as portlets within Liferay, the two projects created two contexts, each holding one OrientDatabaseConnectionManager.
In one context the schema was being changed. Even though I reset the connection and reloaded the schema, that only happened for the connection manager / factory in that one context. The retrieval of the graph, however, was happening in the portlet of the other project, which therefore had an outdated schema (it was not reloaded, because the reloading happened in the other Spring context) - hence the error.
So you have to be careful: either share one Spring application context with beans for all your portlets (which is possible by having a parent application context; you can read more about it here),
OR
check for changes in the schema from within the same project that you will also use to retrieve the data later.
