I'm using mapreduce and I need to persist some entities when they are not in the datastore. I add the new entities to a DatastoreMutationPool so that they can be persisted with batched calls. When the mapreduce ends, a callback function is invoked, and that callback uses some of these entities. My question is: will all the entities have been flushed to the datastore before the callback function is invoked, or can they still be sitting in the DatastoreMutationPool and not yet be in the datastore?
Thanks.
Example of mapper:
public class MyMapper extends AppEngineMapper<Key, Entity, NullWritable, NullWritable> {
    @Override
    public void map(Key key, Entity value, Context context) {
        ...
        DatastoreMutationPool mutationPool = this.getAppEngineContext(context).getMutationPool();
        mutationPool.put(entity);
    }
}
Example of callback:
@RequestMapping(value="/callback/function", method=RequestMethod.POST)
public void callback(@RequestParam("job_id") String jobIdName) {
    JobID jobId = JobID.forName(jobIdName);
    DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
    // search for some entities persisted in the mapper
    ...
}
The mutation pool is per-mapper, while the callback is run outside the mappers once the mapreduce completes. As a result, you can expect that all the mutation pools will have been flushed by the time your callback is run.
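If you would rather not depend on the implicit flush, you can force one at the end of each mapper task. This is only a sketch: both the flush() method on DatastoreMutationPool and the taskCleanup hook are taken from the experimental appengine-mapreduce sources, so verify them against the version you are using.

import java.io.IOException;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.Key;
import com.google.appengine.tools.mapreduce.AppEngineMapper;
import org.apache.hadoop.io.NullWritable;

public class MyMapper extends AppEngineMapper<Key, Entity, NullWritable, NullWritable> {
    @Override
    public void taskCleanup(Context context) throws IOException, InterruptedException {
        // Flush any buffered mutations before this task finishes (assumed API,
        // check your library version).
        this.getAppEngineContext(context).getMutationPool().flush();
        super.taskCleanup(context);
    }
}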
Update: The issue seems to be the id that I'm using twice, in other words the id from the Product entity that I also want to use for the ProductInventory entity. As soon as I generate a new id for the ProductInventory entity, it works fine. But I want both to share the same id, since they represent the same product.
I have 2 Services:
ProductManagementService (saves a Product entity with product details)
1.) For saving the Product entity, I implemented an EventHandler that listens to ProductCreatedEvent and saves the product to a MySQL database.
ProductInventoryService (saves a ProductInventory entity with the stock quantities of a product, referencing the productId defined in ProductManagementService)
2.) For saving the ProductInventory entity, I also implemented an EventHandler that listens to ProductInventoryCreatedEvent and saves the inventory to a MySQL database.
What I want to do:
When a new Product is created in ProductManagementService, I want to create a ProductInventory entity in ProductInventoryService directly afterwards and save it to my MySQL table. The new ProductInventory entity shall have the same id as the Product entity.
To accomplish that, I created a Saga that listens to a ProductCreatedEvent and sends a new CreateProductInventoryCommand. As soon as the CreateProductInventoryCommand triggers a ProductInventoryCreatedEvent, the EventHandler described in 2.) should catch it. Except it doesn't.
The only thing that gets saved is the Product entity, so in summary:
1.) works, 2.) doesn't. A ProductInventory aggregate does get created, but it doesn't get saved, since the saving process connected to the EventHandler isn't triggered.
I also get an exception (the application doesn't crash, though): Command 'com.myApplication.apicore.command.CreateProductInventoryCommand' resulted in org.axonframework.commandhandling.CommandExecutionException(OUT_OF_RANGE: [AXONIQ-2000] Invalid sequence number 0 for aggregate 3cd71e21-3720-403b-9182-130d61760117, expected 1)
My Saga:
@Saga
@ProcessingGroup("ProductCreationSaga")
public class ProductCreationSaga {

    @Autowired
    private transient CommandGateway commandGateway;

    @StartSaga
    @SagaEventHandler(associationProperty = "productId")
    public void handle(ProductCreatedEvent event) {
        System.out.println("ProductCreationSaga, SagaEventHandler, ProductCreatedEvent");
        String productInventoryId = event.productId;
        SagaLifecycle.associateWith("productInventoryId", productInventoryId);
        // takes the id from the product entity and sets all 3 stock attributes to zero
        commandGateway.send(new CreateProductInventoryCommand(productInventoryId, 0, 0, 0));
    }

    @SagaEventHandler(associationProperty = "productInventoryId")
    public void handle(ProductInventoryCreatedEvent event) {
        System.out.println("ProductCreationSaga, SagaEventHandler, ProductInventoryCreatedEvent");
        SagaLifecycle.end();
    }
}
The EventHandler that works as intended and saves a Product Entity:
@Component
public class ProductPersistenceService {

    @Autowired
    private ProductEntityRepository productRepository;

    // works as intended
    @EventHandler
    void on(ProductCreatedEvent event) {
        System.out.println("ProductPersistenceService, EventHandler, ProductCreatedEvent");
        ProductEntity entity = new ProductEntity(event.productId, event.productName, event.productDescription, event.productPrice);
        productRepository.save(entity);
    }

    @EventHandler
    void on(ProductNameChangedEvent event) {
        System.out.println("ProductPersistenceService, EventHandler, ProductNameChangedEvent");
        ProductEntity existingEntity = productRepository.findById(event.productId).get();
        ProductEntity entity = new ProductEntity(event.productId, event.productName, existingEntity.getProductDescription(), existingEntity.getProductPrice());
        productRepository.save(entity);
    }
}
The EventHandler that should save a ProductInventory Entity, but doesn't:
@Component
public class ProductInventoryPersistenceService {

    @Autowired
    private ProductInventoryEntityRepository productInventoryRepository;

    // doesn't work
    @EventHandler
    void on(ProductInventoryCreatedEvent event) {
        System.out.println("ProductInventoryPersistenceService, EventHandler, ProductInventoryCreatedEvent");
        ProductInventoryEntity entity = new ProductInventoryEntity(event.productInventoryId, event.physicalStock, event.reservedStock, event.availableStock);
        System.out.println(entity.toString());
        productInventoryRepository.save(entity);
    }
}
Product-Aggregate:
@Aggregate
public class Product {

    @AggregateIdentifier
    private String productId;
    private String productName;
    private String productDescription;
    private double productPrice;

    public Product() {
    }

    @CommandHandler
    public Product(CreateProductCommand command) {
        System.out.println("Product, CommandHandler, CreateProductCommand");
        AggregateLifecycle.apply(new ProductCreatedEvent(command.productId, command.productName, command.productDescription, command.productPrice));
    }

    @EventSourcingHandler
    protected void on(ProductCreatedEvent event) {
        System.out.println("Product, EventSourcingHandler, ProductCreatedEvent");
        this.productId = event.productId;
        this.productName = event.productName;
        this.productDescription = event.productDescription;
        this.productPrice = event.productPrice;
    }
}
ProductInventory-Aggregate:
@Aggregate
public class ProductInventory {

    @AggregateIdentifier
    private String productInventoryId;
    private int physicalStock;
    private int reservedStock;
    private int availableStock;

    public ProductInventory() {
    }

    @CommandHandler
    public ProductInventory(CreateProductInventoryCommand command) {
        System.out.println("ProductInventory, CommandHandler, CreateProductInventoryCommand");
        AggregateLifecycle.apply(new ProductInventoryCreatedEvent(command.productInventoryId, command.physicalStock, command.reservedStock, command.availableStock));
    }

    @EventSourcingHandler
    protected void on(ProductInventoryCreatedEvent event) {
        System.out.println("ProductInventory, EventSourcingHandler, ProductInventoryCreatedEvent");
        this.productInventoryId = event.productInventoryId;
        this.physicalStock = event.physicalStock;
        this.reservedStock = event.reservedStock;
        this.availableStock = event.availableStock;
    }
}
What you are noticing right now is the uniqueness requirement of the [aggregate identifier, sequence number] pair within a given event store. This requirement is in place to safeguard you from potential concurrent access to the same aggregate instance, as all events for the same aggregate need a unique sequence number. This number is furthermore used to determine the order in which the events need to be handled, to guarantee the aggregate is consistently recreated the same way.
So you might think this means "sorry, there is no solution in place", but that is luckily not the case. There are roughly three things you can do in this setup:
Live with the fact that both aggregates will have unique identifiers.
Use distinct bounded contexts between both applications.
Change the way aggregate identifiers are written.
Option 1 is arguably the most pragmatic and the one used by the majority. You have, however, noted that reusing the identifier is necessary, so I am assuming you have already disregarded this as an option entirely. Regardless, I would try to revisit this approach, as defaulting to UUIDs for each new entity you create can save you trouble in the future.
Option 2 reflects the Bounded Context notion pulled in by DDD. Letting the Product aggregate and ProductInventory aggregate reside in distinct contexts means you will have distinct event stores for both. Thus the uniqueness constraint is kept, as no single store contains both aggregates' event streams. Whether this approach is feasible, however, depends on whether both aggregates actually belong to the same context or not. If they don't, you could for example use Axon Server's multi-context support to create two distinct applications.
Option 3 requires a little insight into what Axon does. When it stores an event, it invokes the toString() method on the @AggregateIdentifier annotated field within the aggregate. As your @AggregateIdentifier annotated field is a String, you get the identifier as is. What you could do instead is use typed identifiers, where the toString() method doesn't return only the identifier but appends the aggregate type to it. Doing so makes the stored aggregateIdentifier unique, whereas from the usage perspective it still seems like you are reusing the identifier.
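As an illustration of option 3, a typed identifier could look roughly like the sketch below. The class and its naming are mine, not something Axon prescribes; the only point is that toString() disambiguates the stored identifier by prefixing the aggregate type.

public class ProductInventoryId {

    private final String identifier;

    public ProductInventoryId(String identifier) {
        this.identifier = identifier;
    }

    @Override
    public String toString() {
        // Same logical id as the Product, but unique within the event store.
        return "ProductInventory-" + identifier;
    }
}

The @AggregateIdentifier annotated field in ProductInventory would then be of type ProductInventoryId instead of String.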
Which of the three options suits your solution best is hard to deduce from my perspective; I have simply ordered them from most to least reasonable as I see it.
Hoping this will help you further, @Jan!
The root problem: I want to set the id of [Entity A] as a foreign key of [Entity B], but the id of [Entity A] is not available until it has been inserted into the database (because it is autogenerated by the DBMS).
Using Architecture Components (Room, ViewModel and LiveData), how can I perform a transaction that saves multiple related entities to the database? The following code currently resides in the ViewModel and works fine. The problem is that I want to move this AsyncTask into the repository layer like the other simple one-operation queries, but is that OK? In that case the repository would be responsible for managing relationships and knowing about entity details.
As I said above, the main problem is that I need the id of the inserted entity so I can store it in another entity. Without that requirement, I could persist each entity one by one in separate AsyncTasks in the repository.
MainViewModel.java:
public void buy(Item item, Store store) {
    new AsyncTask<Void, Void, Void>() {
        @Override
        protected Void doInBackground(Void... voids) {
            long storeId = mRepository.insertStore(store);
            Purchase purchase = new Purchase(storeId); // here uses id of the store
            long purchaseId = mRepository.insertPurchase(purchase);
            item.setPurchaseId(purchaseId); // here uses id of the purchase
            mRepository.updateItem(item);
            return null;
        }
    }.execute();
}
I think what you're doing is fine as long as you keep it in the repository layer. Keeping it in the ViewModel is not a good idea, as it is supposed to be the repository's responsibility to handle your data, in this case the Item and Store objects. Your repository should be responsible for managing this data and its relationships. To answer your question about receiving the id of the inserted entity: have your doInBackground method return an actual value (like the storeId) instead of null, and implement onPostExecute in your AsyncTask to retrieve that value and delegate control to a callback listener of some sort.
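A sketch of that suggestion (the listener interface and its method name are illustrative, not from any library): the task returns the generated id from doInBackground, and onPostExecute hands it to a callback on the main thread.

public void buy(Item item, Store store, OnPurchaseSavedListener listener) {
    new AsyncTask<Void, Void, Long>() {
        @Override
        protected Long doInBackground(Void... voids) {
            long storeId = mRepository.insertStore(store);
            Purchase purchase = new Purchase(storeId);
            long purchaseId = mRepository.insertPurchase(purchase);
            item.setPurchaseId(purchaseId);
            mRepository.updateItem(item);
            return purchaseId;
        }

        @Override
        protected void onPostExecute(Long purchaseId) {
            listener.onPurchaseSaved(purchaseId); // runs on the main thread
        }
    }.execute();
}

// Hypothetical callback interface for the example above.
public interface OnPurchaseSavedListener {
    void onPurchaseSaved(long purchaseId);
}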
You can execute multiple database operations in a transaction using Android Room.
This way you are ensured that your database integrity is not altered if one of those operations fails (the operations are rolled back).
Here is how you can define a Transaction with Room in the Dao class:
@Dao
public abstract class MyDao {

    @Insert
    public abstract long insertStore(Store store);

    @Insert(onConflict = OnConflictStrategy.ROLLBACK)
    public abstract long recordPurchase(Purchase purchase);

    @Update
    public abstract void updateItem(Item updatedItem);

    @Transaction
    public void buyItemFromStore(Item boughtItem, Store store) {
        // Anything inside this method runs in a single transaction.
        long storeId = insertStore(store);
        Purchase purchase = new Purchase(storeId);
        long purchaseId = recordPurchase(purchase);
        boughtItem.setPurchaseId(purchaseId);
        updateItem(boughtItem);
    }
}
You can refer to the documentation for an explanation of how @Transaction works.
Then, in your repository class, call buyItemFromStore from your AsyncTask:
public class MyRepository {

    private MyDao dao;

    public void buy(Item item, Store store) {
        new AsyncTask<Void, Void, Void>() {
            @Override
            protected Void doInBackground(Void... voids) {
                // Everything is saved in a single transaction.
                dao.buyItemFromStore(item, store);
                return null;
            }
        }.execute();
    }
}
Note that it is perfectly fine for the repository layer to be aware of relationships between entities, as long as the stored objects are related in some way (which seems to be the case with Store, Purchase and Item).
If you are unable to alter your Dao class, consider RoomDatabase.runInTransaction.
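A minimal sketch of that variant, assuming db is your RoomDatabase instance and dao your existing Dao (and that this runs off the main thread):

db.runInTransaction(() -> {
    // Same sequence as buyItemFromStore, but without touching the Dao class.
    long storeId = dao.insertStore(store);
    Purchase purchase = new Purchase(storeId);
    long purchaseId = dao.recordPurchase(purchase);
    item.setPurchaseId(purchaseId);
    dao.updateItem(item);
});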
I have a service (which for some reason I call a controller) that is injected into the Jersey resource method.
@Named
@Transactional
public class DocCtrl {
    ...
    public void changeDocState(List<String> uuids, EDocState state, String shreddingCode) throws DatabaseException, WebserviceException, RepositoryException, ExtensionException, LockException, AccessDeniedException, PathNotFoundException, UnknowException {
        List<Document2> documents = doc2DAO.getManyByUUIDs(uuids);
        for (Document2 doc : documents) {
            if (EDocState.SOFT_DEL == state) {
                computeShreddingFor(doc, shreddingCode); // here the state change happens and is persisted to the db
            }
            if (EDocState.ACTIVE == state)
                unscheduleShredding(doc);
        }
    }
}
doc2DAO.getManyByUUIDs(uuids) fetches the entity objects from the database.
@Repository
public class Doc2DAO {

    @PersistenceContext(name = Vedantas.PU_NAME, type = PersistenceContextType.EXTENDED)
    private EntityManager entityManager;

    public List<Document2> getManyByUUIDs(List<String> uuids) {
        if (uuids.isEmpty())
            uuids.add("-3");
        TypedQuery<Document2> query = entityManager.createNamedQuery("getManyByUUIDs", Document2.class);
        query.setParameter("uuids", uuids);
        return query.getResultList();
    }
}
In the logic I am making some changes to this object. After the completion of the changeDocState method, the state is properly changed and persisted in the database.
However, when I make a second request to my API, I see the state of this entity object unchanged, i.e. the same as before the logic above ran. The DB still shows the changed status. After a restart of the API service, I get the entity in the correct state.
As I understand it, Hibernate uses its L2 cache for the managed objects. So can you please point me to what I am doing wrong here? Obviously, I need to get the cached entity with the changed state without a service restart, and I would like to keep entities attached to the persistence context for performance reasons.
Thanks for the answers.
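For what it's worth, one way to force a re-read of entities that are already attached to the extended persistence context is EntityManager.refresh(). This is only a hedged sketch of that idea (the refreshAll helper is mine), not a confirmed fix for the setup above:

@Repository
public class Doc2DAO {

    @PersistenceContext(name = Vedantas.PU_NAME, type = PersistenceContextType.EXTENDED)
    private EntityManager entityManager;

    // Overwrites any stale in-memory state of the attached entities
    // with the current database state.
    public void refreshAll(List<Document2> documents) {
        for (Document2 doc : documents) {
            entityManager.refresh(doc);
        }
    }
}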
In my Java app I want to listen for the event fired when an object is persisted, i.e. added to the DB, and then use the id of this persisted object to perform some queries. This is the code:
@Component
public class DataCreationListener extends EnversPostInsertEventListenerImpl {

    private static final long serialVersionUID = 1L;

    @Autowired
    private DataService dataService;

    public DataCreationListener() {
        super(null);
    }

    @Override
    public void onPostInsert(PostInsertEvent event) {
        if (event.getEntity() instanceof DataDAO) {
            // Here I use the info from event.getEntity() to perform some queries.
            Data data = dataService.fromDao((DataDAO) event.getEntity());
            // other stuff
        }
    }
}
But since the object for which the event has been fired has not been saved yet, the queries raise exceptions. Given that the listener is named onPostInsert, I expected it to fire once an entry had already been made in the DB, but that is not the case. Upon debugging I see that the object only gets saved once control flows out of the listener.
Is there any way to get the desired functionality here, i.e. have the listener fire when the object is actually saved, or do it myself in a reliable and safe way? Can someone point me in the right direction? Thanks!
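One common pattern is to defer the work until after the commit with a transaction synchronization. This is only a sketch, assuming the insert runs inside a Spring-managed transaction and Spring 5.3+, where TransactionSynchronization's methods have default implementations (older versions can extend TransactionSynchronizationAdapter instead):

import org.springframework.transaction.support.TransactionSynchronization;
import org.springframework.transaction.support.TransactionSynchronizationManager;

@Override
public void onPostInsert(PostInsertEvent event) {
    if (event.getEntity() instanceof DataDAO) {
        DataDAO saved = (DataDAO) event.getEntity();
        if (TransactionSynchronizationManager.isSynchronizationActive()) {
            TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
                @Override
                public void afterCommit() {
                    // The row is committed and visible to queries now.
                    Data data = dataService.fromDao(saved);
                    // other stuff
                }
            });
        }
    }
}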
I am trying to use an Objectify transaction, but I have some issues when I need to reload an object created in the same transaction.
Take this sample code:
@Entity
public class MyObject
{
    @Parent
    Key<ParentClass> parent;

    @Index
    String foo;
}
ofy().transact(new VoidWork()
{
    @Override
    public void vrun()
    {
        ParentClass parent = load(); // load the parent
        String fooValue = "bar";
        Key<ParentClass> parentKey = Key.create(ParentClass.class, parent.getId());
        MyObject myObject = new MyObject(parentKey);
        myObject.setFoo(fooValue);
        ofy().save().entity(myObject).now();

        MyObject reloaded = ofy().load().type(MyObject.class).ancestor(parentKey).filter("foo", fooValue).first().now();
        if (reloaded == null)
        {
            throw new RuntimeException("error");
        }
    }
});
My reloaded object is always null. Maybe I am missing something, but logically, within a transaction, shouldn't I be able to query an object that was created in that same transaction?
Thanks
Cloud Datastore differs from relational databases in this particular case. The documentation states:
"Unlike with most databases, queries and gets inside a Cloud Datastore transaction do not see the results of previous writes inside that transaction. Specifically, if an entity is modified or deleted within a transaction, a query or lookup returns the original version of the entity as of the beginning of the transaction, or nothing if the entity did not exist then."
https://cloud.google.com/datastore/docs/concepts/transactions#isolation_and_consistency
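Given that isolation model, the query inside the transaction cannot see the entity saved a few lines earlier. A sketch of one way around it is to keep using the in-memory instance you just saved instead of querying it back (doSomethingWith is a hypothetical stand-in for whatever you need the object for):

ofy().transact(new VoidWork()
{
    @Override
    public void vrun()
    {
        ParentClass parent = load(); // load the parent, as in the original
        Key<ParentClass> parentKey = Key.create(ParentClass.class, parent.getId());
        MyObject myObject = new MyObject(parentKey);
        myObject.setFoo("bar");
        ofy().save().entity(myObject).now();

        // Use the instance you already have; an ancestor query here
        // would still return null until the transaction commits.
        doSomethingWith(myObject);
    }
});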