Flowable not updated after inserting data inside a transaction - java

When we use a Flowable to get update notifications after inserting a new database row, everything works fine.
But when the insertion is done inside another, explicit transaction, the Flowable does not get a notification.
To illustrate the issue, I've forked the BasicRxJavaSample from android-architecture-components and added two test methods to UserDaoTest.java:
@Test
public void testFlowable() {
    // When subscribing to the emissions of the user
    final TestSubscriber<User> userTestSubscriber = mDatabase.userDao().getUser().test();
    userTestSubscriber.assertValueCount(0);

    // When inserting a new user in the data source
    mDatabase.userDao().insertUser(USER);
    userTestSubscriber.assertValueCount(1);
}
This works fine. But when I do the same inside of an explicit transaction it does not work:
@Test
public void testFlowableInTransaction() {
    // When subscribing to the emissions of the user
    final TestSubscriber<User> userTestSubscriber = mDatabase.userDao().getUser().test();
    userTestSubscriber.assertValueCount(0);

    // When inserting a new user in the data source, inside an explicit transaction
    mDatabase.beginTransaction();
    try {
        mDatabase.userDao().insertUser(USER);
        mDatabase.setTransactionSuccessful();
    } finally {
        mDatabase.endTransaction();
    }

    userTestSubscriber.assertValueCount(1);
    // this fails - the userTestSubscriber is still empty!
}
Note: this example is of course simplified, just to illustrate the issue.
Here's the generated DaoImpl.insertUser():
public void insertUser(User user) {
    __db.beginTransaction();
    try {
        __insertionAdapterOfUser.insert(user);
        __db.setTransactionSuccessful();
    } finally {
        __db.endTransaction();
    }
}
We can see that the generated code uses a transaction, and my test code wraps it in another one.
According to the SupportSQLiteDatabase.beginTransaction() docs, nesting transactions should be okay.
Is this maybe a Room bug?
The Room version of this project is 1.0.0-alpha3, but I can also see this problem with version 1.0.0-alpha8 (in another project).

This was a bug in Room: #65471397
I verified that the bug has been fixed with version 1.0.0-rc1.
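For completeness, on a fixed version the explicit transaction in the test can also be written with RoomDatabase.runInTransaction, which wraps the begin/setTransactionSuccessful/end boilerplate. A minimal sketch, assuming the same test fixture as above:

@Test
public void testFlowableWithRunInTransaction() {
    final TestSubscriber<User> userTestSubscriber = mDatabase.userDao().getUser().test();
    userTestSubscriber.assertValueCount(0);

    // runInTransaction handles begin/setTransactionSuccessful/end for us
    mDatabase.runInTransaction(new Runnable() {
        @Override
        public void run() {
            mDatabase.userDao().insertUser(USER);
        }
    });

    userTestSubscriber.assertValueCount(1);
}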

Related

How can I test a constraint violation when performing a delete in a Spring integration test?

I want to test that my controller endpoint returns an appropriate error code when trying to delete a record with referencing child records. In my integration test, I need to set up the state so that the related records exist, then invoke the deletion endpoint, expect the error condition, and then (ideally) roll the entire DB back to the state it was in before the test.
e.g.
INSERT INTO parent_rec (id) VALUES ('foo');
INSERT INTO child_rec (id, parent_id) VALUES ('bar', 'foo');
COMMIT;
DELETE FROM parent_rec WHERE id = 'foo'; -- bang!
@PersistenceContext
EntityManager em;

@Test
@Transactional
void testDelete() throws Exception {
    // Set up records
    ParentRecord record = new ParentRecord("foo");
    em.persist(record);
    em.persist(new ChildRecord("bar", record));

    // delete
    mockMvc.perform(delete("/parent/foo")).andExpect(/* some error code */);
}
However, I'm running into issues. If I put the @Transactional annotation at the method or class level, the records aren't persisted until after the deletion is attempted, so the deletion returns a 200 OK rather than a 400 Bad Request or similar.
The current solution is for the tests to be run in order (with a previous test setting up records which a subsequent test tries to operate on). However, this makes the tests pretty brittle and dependent on each other, which I'd like to avoid primarily to make changing the code easier.
Can I accomplish what I want without using an additional layer of tooling? In the past, I'd have used DBUnit to do something like this, but if I can avoid adding the additional dependency I'd prefer to keep it simple.
In JEE I solved these issues kind of simply by splitting my code into two parts:
@Transactional(propagation = Propagation.REQUIRES_NEW)
public class ParentRecordTestFacade {

    public void create() {
        // Create record here
    }

    public void delete() {
        // Delete record here
    }
}
and then call both methods in the actual unit test one after another, as sketched below.
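A minimal sketch of such a test, assuming the facade is registered as a Spring bean (the expected status is an assumption; use whatever error code your endpoint actually returns):

@Autowired
private ParentRecordTestFacade facade;

@Test
void testDelete() throws Exception {
    // Runs in its own REQUIRES_NEW transaction, so the records are
    // committed before the delete request is performed.
    facade.create();

    mockMvc.perform(delete("/parent/foo"))
            .andExpect(status().isBadRequest());

    // Clean up the committed records in another fresh transaction.
    facade.delete();
}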
Running only some code in a separate transaction also comes in handy in other situations. You can achieve it, for example, with a method that invokes a block of code inside its own transaction:
protected <T> T getInsideTransaction(Function<EntityManager, T> transactional) {
    EntityManager em = null;
    EntityTransaction trx = null;
    try {
        em = entityManagerFactory.createEntityManager();
        trx = em.getTransaction();
        trx.begin();
        return transactional.apply(em);
    } catch (Throwable throwable) {
        // Mark the transaction for rollback, otherwise the finally
        // block below would commit the half-done work.
        if (trx != null) {
            trx.setRollbackOnly();
        }
        throw throwable;
    } finally {
        if (trx != null) {
            if (!trx.getRollbackOnly()) {
                trx.commit();
            } else {
                trx.rollback();
            }
        }
        if (em != null) {
            em.close();
        }
    }
}
Now you can invoke it like this:
void testDelete() throws Exception {
    // Set up records in their own committed transaction
    getInsideTransaction(em -> {
        ParentRecord record = new ParentRecord("foo");
        em.persist(record);
        em.persist(new ChildRecord("bar", record));
        return null; // Function<EntityManager, T> must return a value
    });

    // delete
    mockMvc.perform(delete("/parent/foo")).andExpect(/* some error code */);
}
You can invoke an arbitrary block of code within a separate transaction that way.
In Spring, especially for testing such cases in the repository layer, I use org.springframework.test.context.transaction.TestTransaction; it looks like it should work for you too. Pay attention to the @Commit annotation on the test method, otherwise your records will not be saved.
@Commit
@Test
void testDelete() throws Exception {
    // Set up records
    ParentRecord record = new ParentRecord("foo");
    em.persist(record);
    em.persist(new ChildRecord("bar", record));

    // Commit the setup and start a fresh transaction
    TestTransaction.end();
    TestTransaction.start();

    // delete
    mockMvc.perform(delete("/parent/foo")).andExpect(/* some error code */);
}
But of course, since the setup is committed, you will have to delete the created records manually afterwards, e.g. as sketched below.
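A sketch of that manual cleanup, reusing the entities from the question; it runs at the end of the test in a fresh test-managed transaction that is explicitly flagged for commit:

// Fresh test-managed transaction for the cleanup
TestTransaction.start();
// Remove the child before the parent to satisfy the FK constraint.
em.remove(em.find(ChildRecord.class, "bar"));
em.remove(em.find(ParentRecord.class, "foo"));
TestTransaction.flagForCommit();
TestTransaction.end();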

Unexpected duplicate key exception

I'm seeing a weird behavior with Spring Boot 2.0.4 + Hibernate.
I have an entity that includes a randomly generated code. If the generated code is already used by another entity, a DataIntegrityViolationException is thrown as expected, and the loop can try again with a new code which hopefully is not taken yet. When this happens, the loop continues and a new code is generated, but the call to saveAndFlush() throws the same exception again, saying that the original code which caused the problem (in the previous iteration) is already used (duplicate). However, I'm setting a new code now, not the one the exception mentions.
The only thing I can think of is that Hibernate doesn't remove the operation from its action queue, so when the second call to saveAndFlush() happens, it still tries to perform the first save and then the new one. Obviously, the first save fails just as it did during the first iteration. Maybe I'm wrong, but then what is going on here?
@Entity
public class Entity {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Column(nullable = false, unique = true)
    private int code;

    public void setCode(int code) {
        this.code = code;
    }

    // Other properties
}
@Transactional
public void myFunction() {
    Entity entity = null;
    boolean saved = false;
    do {
        int code = /* Randomly generated code */;
        if (entity == null) {
            entity = new Entity(code, /* other properties */);
        } else {
            entity.setCode(code);
        }
        try {
            entity = myRepository.saveAndFlush(entity);
            saved = true;
        } catch (DataIntegrityViolationException e) {
            /* Ignore so that we can try again */
        }
    } while (!saved);
}
EDIT:
If I replace saveAndFlush() with save(), the issue disappears. I saw somewhere that calling save() after a previous save that failed may be problematic if flush() is also called, which is exactly my case. However, I don't understand why it is a problem. The only reason I call saveAndFlush() instead of save() is to catch the duplicate key exception immediately. With save(), if Hibernate doesn't perform the INSERT or UPDATE directly, the exception is thrown during the flush that occurs just before the transaction is committed, which is not really what I want.
You will need to restart your transaction in such a scenario, as the error is already bound to the transaction context/session.
So instead, put the retry logic outside of the transaction boundary, or, if you need to maintain integrity (all or nothing saved), check first whether the code is already present to avoid the exception being thrown at all.
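A rough sketch of the first option; myService.saveWithCode is a hypothetical @Transactional service method containing a single save attempt, so every retry runs in its own fresh transaction:

// Non-transactional caller: a failed attempt is rolled back together
// with its own transaction instead of staying queued in the session.
public void createWithRetry() {
    boolean saved = false;
    while (!saved) {
        try {
            int code = ThreadLocalRandom.current().nextInt(1_000_000);
            myService.saveWithCode(code);
            saved = true;
        } catch (DataIntegrityViolationException e) {
            // Duplicate code; loop and try again with a new one.
        }
    }
}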
If you debug the code and inspect the state of the persistence context, you may get your answer. You are right that Hibernate maintains a queue of sorts: all the statements issued during a transaction are run on commit/flush.
Please post the value of the persistence context you see while debugging.

aggregate not found in the event store

I am trying to add data using the CQRS framework Axon, but when hitting the API (used to add an order) I am getting the error below:
Command 'com.cqrs.order.commands.CreateOrderCommand' resulted in org.axonframework.modelling.command.AggregateNotFoundException (The aggregate was not found in the event store)
But I already have an aggregate in my code (OrderAggregate.java).
The Full code can be found at - https://github.com/iftekharkhan09/OrderManagementSystem
API to add Order - http://localhost:8080/confirmOrder
Request Body:-
{
    "studentName": "Sunny Khan"
}
Can anyone please tell me where I am going wrong?
Any help is appreciated!
For other readers, let me share the Aggregate you've created in your repository:
@Aggregate
public class OrderAggregate {

    public OrderAggregate(OrderRepositoryData orderRepositoryData) {
        this.orderRepositoryData = orderRepositoryData;
    }

    @AggregateIdentifier
    private Integer orderId;

    private OrderRepositoryData orderRepositoryData;

    @CommandHandler
    public void handle(CreateOrderCommand command) {
        apply(new OrderCreatedEvent(command.getOrderId()));
    }

    @EventSourcingHandler
    public void on(OrderCreatedEvent event) {
        this.orderId = event.getOrderId();
        Order order = new Order("Order New");
        orderRepositoryData.save(order);
    }

    protected OrderAggregate() {
        // Required by Axon to build a default Aggregate prior to Event Sourcing
    }
}
There are several things you can remove entirely from this Aggregate, which are:
The OrderRepositoryData
The OrderAggregate constructor which sets the OrderRepositoryData
The manual saving of an Order in the @EventSourcingHandler annotated function
What you're doing here is mixing the Command Model's concern of making decisions with creating a queryable Order for the Query Model. It would be better to remove this logic entirely from an Aggregate (the Command Model in your example) and move this to an Event Handling Component.
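As a sketch, such an Event Handling Component could look like this (OrderProjection is a hypothetical name; it reuses the repository and event from your code):

@Component
public class OrderProjection {

    private final OrderRepositoryData orderRepositoryData;

    public OrderProjection(OrderRepositoryData orderRepositoryData) {
        this.orderRepositoryData = orderRepositoryData;
    }

    // Handles the event on the query side, outside of the aggregate.
    @EventHandler
    public void on(OrderCreatedEvent event) {
        orderRepositoryData.save(new Order("Order New"));
    }
}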
This is however not the culprit for the AggregateNotFoundException you're receiving.
What you've missed is to make the CreateOrderCommand command handler a constructor.
The CreateOrderCommand will create an Order, as its name already suggests.
Hence, it should be handled by a constructor rather than a regular method.
So, instead of this:
@CommandHandler
public void handle(CreateOrderCommand command) {
    apply(new OrderCreatedEvent(command.getOrderId()));
}
You should be doing this:
@CommandHandler
public OrderAggregate(CreateOrderCommand command) {
    apply(new OrderCreatedEvent(command.getOrderId()));
}
Hope this helps you out, @Sunny!
The main reason for this exception is that when Axon handles the very first command for an aggregate, it has to create the aggregate first, so the command handler must be a constructor:
@CommandHandler
public OrderAggregate(CreateOrderCommand command) {
    apply(new OrderCreatedEvent(command.getOrderId()));
}
Also, written this way, your
private OrderRepositoryData orderRepositoryData;
field won't be initialized, so autowire orderRepositoryData as well:
@Autowired
private OrderRepositoryData orderRepositoryData;
For the subsequent events you should use the same orderId, otherwise it will also throw:
handleThrowable(java.lang.Throwable,org.springframework.web.context.request.WebRequest)
org.axonframework.modelling.command.AggregateNotFoundException: The aggregate was not found in the event store
at org.axonframework.eventsourcing.EventSourcingRepository.doLoadWithLock(EventSourcingRepository.java:122)

After a committed & shut-down transaction which added a new class to a graph, a new tx doesn't see the class in the schema, though it is persisted

We persist a graph in one piece of code and then have another piece that tries to retrieve it. We open our transactions with the Spring bean below; anyone who wants to access the database always calls its getGraph() method.
public class OrientDatabaseConnectionManager {

    private OrientGraphFactory factory;

    public OrientDatabaseConnectionManager(String path, String name, String pass) {
        factory = new OrientGraphFactory(path, name, pass).setupPool(1, 10);
    }

    public OrientGraphFactory getFactory() {
        return factory;
    }

    public void setFactory(OrientGraphFactory factory) {
        this.factory = factory;
    }

    /**
     * Method returns graph instance from the factory's pool.
     * @return
     */
    public OrientGraph getGraph() {
        OrientGraph resultGraph = factory.getTx();
        resultGraph.setThreadMode(OrientBaseGraph.THREAD_MODE.ALWAYS_AUTOSET);
        return resultGraph;
    }
}
(I was unable to fully understand the thread mode, but I think it should not be related to the problem.)
The code that persists the graph commits and shuts down, as you can see here:
OrientDatabaseConnectionManager connMan; // this is an injected bean from above

public boolean saveGraphToOrientDB(
        SparseMultigraph<SocialVertex, SocialEdge> graph, String label) {
    boolean isSavedCorrectly = false;
    OrientGraph graphO = connMan.getGraph();
    try {
        graphDBinput.saveGraph(graph, label, graphO);
        // LOG System.out.println("Graph was saved with label " + label);
        isSavedCorrectly = true;
    } catch (AlreadyUsedGraphLabelException ex) {
        Logger.getLogger(GraphDBFacade.class.getName()).log(Level.SEVERE, null, ex);
    } finally {
        graphO.shutdown(); // normally calls .commit() automatically, but the commit already happens inside
    }
    return isSavedCorrectly;
}
This commit works well - the data is always persisted; I checked every time in the OrientDB admin interface, and the first persisted graph is always viewable okay. It might be important to note that during the save, the label used defines a new class (thus modifying the schema, as I understand it), which is then used for the persisted graph.
The retrieval of the graph looks something like this:
@Override
public SocialGraph getSocialGraph(String label) {
    OrientGraph graph = connMan.getGraph();
    SocialGraph socialGraph = null;
    try {
        socialGraph = new SocialGraph(getAllSocialNodes(label, graph), getAllSocialEdges(label, graph));
    } catch (Exception e) {
        logger.error(e);
    } finally {
        graph.shutdown();
    }
    return socialGraph;
}

public List<Node> getAllSocialNodes(String label, OrientGraph graph) {
    return constructNodes(graphFilterMan.getAllNodesFromGraph(label, graph));
}

public Set<Vertex> getAllNodesFromGraph(String graphLabel, OrientGraph graph) {
    Set<Vertex> labelledGraph = new HashSet<>();
    try {
        Iterable<Vertex> configGraph = graph.getVerticesOfClass(graphLabel);
        for (Vertex v : configGraph) { // THE CODE CRASHES HERE, WITH "CLASS WITH NAME graphLabel DOES NOT EXIST"
            labelledGraph.add(v);
        }
    } catch (Exception ex) {
        logger.error(ex);
        graph.rollback();
    }
    return labelledGraph;
}
So the problem is: when we persist a new graph with a new class, say "graph01", and then want to retrieve it, it works. Later we create a "graph02" and want to retrieve it, but it crashes as commented above - OrientDB tells you that the class with the name "graph02" does not exist.
It does exist in the admin interface at that time; however, when I debug, the class is actually not in the schema right after the call to factory.getTx().
Right at the beginning, when we get a transactional graph instance from the factory, the rawGraph's underlying database metadata (the schema proxy's delegate with the shared schema classes) is WITHOUT the new class, which I can apparently see committed in the database.
[Debugger screenshot: the schema classes in the transaction's context are missing the class that was persisted (and committed) a while ago, even though it is visible in the OrientDB admin interface.]
What I presume is happening is that the pool from which the factory gets the transaction has a cached schema of sorts, and it does not refresh the schema when we add a new class.
Why does the schema not show the new class when we are trying to get the new graph out? Does the schema not get refreshed?
I found the following note in the schema documentation:
NOTE: Changes to the schema are not transactional, so execute them outside a transaction.
So should we create the new class outside a transaction, and would we then get an updated schema in the context? For illustration, a sketch of that idea follows.
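A minimal sketch of what creating the class outside a transaction could look like with the Blueprints API of that era (an assumption based on the code above, not something we have actually run):

// Schema change via a non-transactional graph, before any tx is opened
OrientGraphNoTx noTx = factory.getNoTx();
try {
    if (noTx.getVertexType(label) == null) {
        noTx.createVertexType(label);
    }
} finally {
    noTx.shutdown();
}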
// Maybe I am understanding the concepts wrong - I first got in contact with OrientDB just yesterday, and I have to find the problem in already-written code.
The DB we use is remote:localhost/socialGraph, OrientDB version 1.7.4.
We noticed about the same issue in our code: schema changes aren't visible in pooled connections.
We also have a sort of factory that gets a connection. What we do is keep a schema version number; each time we perform an operation that changes the schema, we bump the number, and whenever a new connection is opened, we check whether the schema version has changed.
When it has, we reload the schema, close the pool, and recreate it. This method is proven to work for us (we are currently on version 2.0.15).
Here's the relevant code:
private static volatile int schemaVersion = -1;

private OPartitionedDatabasePool pool;

protected void createPool() {
    pool = new OPartitionedDatabasePool(getUrl(), getUsername(), getPassword());
}

@Override
public synchronized ODatabaseDocumentTx openDatabase() {
    ODatabaseDocumentTx db = pool.acquire();

    // DatabaseInfo is a simple class, held in a static context, that holds the schema version.
    DatabaseInfo databaseInfo = CurrentDatabaseInfo.getDatabaseInfo();
    ODocument document = db.load((ORID) databaseInfo.getId(), "schemaVersion:0", true);
    Integer version = document.field("schemaVersion");

    if (schemaVersion == -1) {
        schemaVersion = version;
    } else if (schemaVersion < version) {
        db.getMetadata().getSchema().reload();
        schemaVersion = version;
        pool.close();
        createPool();
        db = pool.acquire();
    }

    return db;
}
In the end the problem was that we had two Liferay projects, each with its own Spring application context in its WAR file. When we deployed these projects as portlets within Liferay, they created two contexts, each holding its own OrientDatabaseConnectionManager.
In one context the schema was being changed. And even though I reset the connection and reloaded the schema, that only happened for the connection manager / factory in that one context. The retrieval of the graph, however, was happening in the portlet of the other project, resulting in an outdated schema (which was not reloaded, because the reloading happened in the other Spring context) - thus the error.
So you have to be careful - either share one Spring application context, with its beans, across all your portlets (which is possible by having a parent application context, you can read more about it here)
OR
check for changes in the schema from within the same project which you will also use to retrieve the data later.

How to refresh an entity in a Future?

I am not really sure where my problem lies, as I am experimenting in two areas that I don't have much experience with: JPA and Futures (using the Play! Framework's Jobs and Promises).
I have the following bit of code, which I want to return a Meeting object once one of that object's fields has been given a value by another thread handling another HTTP request. Here is what I have:
Promise<Meeting> meetingPromise = new Job<Meeting>() {
    @Override
    public Meeting doJobWithResult() throws Exception {
        Meeting meeting = Meeting.findById(id);
        while (meeting.bbbMeetingId == null) {
            Thread.sleep(1000);
            meeting = meeting.refresh();    // I tried each of these
            meeting = meeting.merge();      // lines but to no avail; I
            meeting = Meeting.findById(id); // get the same result
        }
        return meeting;
    }
}.now();
Meeting meeting = await(meetingPromise);
As I note in the comments, there are three lines in there, any one of which I think should allow me to refresh the contents of my object from the database. From the debugger, it seems that the many-to-one relationships are refreshed by these calls, but the single values are not.
My Meeting object extends Play! Framework's Model, and for convenience, here is the refresh method:
/**
 * Refresh the entity state.
 */
public <T extends JPABase> T refresh() {
    em().refresh(this);
    return (T) this;
}
and the merge method:
/**
 * Merge this object to obtain a managed entity (useful when the object comes from the Cache).
 */
public <T extends JPABase> T merge() {
    return (T) em().merge(this);
}
So, how can I refresh my model from the database?
So, I ended up cross-posting this question on the play-framework group, and I got an answer there. For the discussion, check out that thread.
In the interest of having the answer come up in a web search for anyone who has this problem in the future, here is what the code snippet I pasted earlier looks like now:
Promise<Meeting> meetingPromise = new Job<Meeting>() {
    @Override
    public Meeting doJobWithResult() throws Exception {
        Meeting meeting = Meeting.findById(id);
        while (meeting.bbbMeetingId == null) {
            Thread.sleep(1000);
            if (JPA.isInsideTransaction()) {
                JPAPlugin.closeTx(false);
            }
            JPAPlugin.startTx(true);
            meeting = Meeting.findById(id);
            JPAPlugin.closeTx(false);
        }
        return meeting;
    }
}.now();
Meeting meeting = await(meetingPromise);
I am not using the @NoTransaction annotation, because that messes up some other code that checks if the request is coming from a valid user.
I'm not sure about it, but JPA transactions are managed automatically by Play in the request/controller context (the JPAPlugin opens a transaction before the invocation and closes it afterwards).
I'm not sure at all what happens within jobs, though; I don't think transactions are auto-managed there (or it's a feature I don't know). So, is your entity attached to an EntityManager, or still transient? Is there a transaction somewhere? I don't really know, but if not, it may explain some weird behavior...
