How can I test an aggregate if the ID is randomly generated? - java

This may be more of a design question, but I have an aggregate member that is generated via a command, and I need to be able to test that the event is generated given the command was run.
However, I don't see any obvious way to do an anyString match on one field of the event in the TestFixture framework.
Is it "bad practice" to generate IDs in the aggregate when created? Should IDs be generated outside of the aggregate?
@AggregateMember(eventForwardingMode = ForwardMatchingInstances.class)
private List<TimeCardEntry> timeCardEntries = new ArrayList<>();

data class ClockInCommand(@TargetAggregateIdentifier val employeeName: String)

@CommandHandler
public TimeCard(ClockInCommand cmd) {
    apply(new ClockInEvent(cmd.getEmployeeName(),
            GenericEventMessage.clock.instant(),
            UUID.randomUUID().toString()));
}

@EventSourcingHandler
public void on(ClockInEvent event) {
    this.employeeName = event.getEmployeeName();
    timeCardEntries.add(new TimeCardEntry(event.getTimeCardEntryId(), event.getTime()));
}
@Data
public class TimeCardEntry {

    @EntityId
    private final String timeCardEntryId;
    private final Instant clockInTime;
    private Instant clockOutTime;

    @EventSourcingHandler
    public void on(ClockOutEvent event) {
        this.clockOutTime = event.getTime();
    }

    private boolean isClockedIn() {
        return clockOutTime == null; // still clocked in until a clock-out time is recorded
    }
}
@ParameterizedTest
@MethodSource(value = "randomEmployeeName")
void testClockInCommand(String employeeName) {
    testFixture.givenNoPriorActivity()
            .when(new ClockInCommand(employeeName))
            .expectEvents(new ClockInEvent(employeeName, testFixture.currentTime(), "Any-String-Works"));
}

Is it "bad practice" to generate IDs in the aggregate when created? Should IDs be generated outside of the aggregate?
Random numbers are a lot like clocks - they are a form of shared mutable state. Put another way, they are a concern of the imperative shell, not of the functional core.
What this usually means for your domain model is that the randomness is passed in as an argument, rather than produced by the aggregate itself. This might mean passing an ID generator to the domain model, or even generating the id in the application and passing in the generated identifier as a value.
Thus, in our unit test, we replace the random generator provided by the target application with a "random" generator provided by the test -- because the test controls the generator, the identifier becomes deterministic, and the assertion is straightforward to write.
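One way to realize that in Axon terms is to make the command handler take the generator as a parameter and have the test fixture inject a deterministic one. The sketch below is an assumption on my part, not the asker's code: the Supplier<String> parameter and the fixed-id wiring are illustrative, while registerInjectableResource and currentTime() come from Axon's test fixture API.
// Production side: the aggregate no longer calls UUID.randomUUID() itself.
@CommandHandler
public TimeCard(ClockInCommand cmd, Supplier<String> idSupplier) {
    apply(new ClockInEvent(cmd.getEmployeeName(),
            GenericEventMessage.clock.instant(),
            idSupplier.get()));
}

// Test side: registering a fixed "generator" makes the expected event exact.
FixtureConfiguration<TimeCard> fixture = new AggregateTestFixture<>(TimeCard.class);
fixture.registerInjectableResource((Supplier<String>) () -> "fixed-id");
fixture.givenNoPriorActivity()
       .when(new ClockInCommand("some-employee"))
       .expectEvents(new ClockInEvent("some-employee", fixture.currentTime(), "fixed-id"));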
In cases where you aren't happy with making the random generator part of the API of your domain model, another option is to expose it as part of the test interface.
// We don't necessarily worry about testing this version; it is "too simple to break".
void doSomethingCool(...) {
    doSomethingCool(new ID(), ...);
}

// Unit tests exercise this overload instead, which is easier to test and holds
// all of the complicated logic.
void doSomethingCool(ID id, ...) {
    // ...
}

Related

Axon: Create and Save another Aggregate in Saga after creation of an Aggregate

Update: The issue seems to be the id that I'm using twice; in other words, the id from the Product entity that I want to reuse for the ProductInventory entity. As soon as I generate a new id for the ProductInventory entity, it seems to work fine. But I want to have the same id for both, since they're the same product.
I have 2 Services:
ProductManagementService (saves a Product entity with product details)
1.) For saving the Product entity, I implemented an EventHandler that listens to ProductCreatedEvent and saves the product to a MySQL database.
ProductInventoryService (saves a ProductInventory entity with the stock quantities of a product for a certain productId defined in ProductManagementService)
2.) For saving the ProductInventory entity, I also implemented an EventHandler that listens to ProductInventoryCreatedEvent and saves the entity to a MySQL database.
What I want to do:
When a new Product is created in ProductManagementService, I want to create a ProductInventory entity in ProductInventoryService directly afterwards and save it to my MySQL table. The new ProductInventory entity shall have the same id as the Product entity.
To accomplish that, I created a Saga which listens to a ProductCreatedEvent and sends a new CreateProductInventoryCommand. As soon as the CreateProductInventoryCommand triggers a ProductInventoryCreatedEvent, the EventHandler described in 2.) should catch it. Except it doesn't.
The only thing that gets saved is the Product entity. So in summary:
1.) works, 2.) doesn't. A ProductInventory aggregate does get created, but it doesn't get saved, since the saving process that is connected to an EventHandler isn't triggered.
I also get an Exception, the application doesn't crash though: Command 'com.myApplication.apicore.command.CreateProductInventoryCommand' resulted in org.axonframework.commandhandling.CommandExecutionException(OUT_OF_RANGE: [AXONIQ-2000] Invalid sequence number 0 for aggregate 3cd71e21-3720-403b-9182-130d61760117, expected 1)
My Saga:
@Saga
@ProcessingGroup("ProductCreationSaga")
public class ProductCreationSaga {

    @Autowired
    private transient CommandGateway commandGateway;

    @StartSaga
    @SagaEventHandler(associationProperty = "productId")
    public void handle(ProductCreatedEvent event) {
        System.out.println("ProductCreationSaga, SagaEventHandler, ProductCreatedEvent");
        String productInventoryId = event.productId;
        SagaLifecycle.associateWith("productInventoryId", productInventoryId);
        // takes the id from the Product entity and sets all 3 stock attributes to zero
        commandGateway.send(new CreateProductInventoryCommand(productInventoryId, 0, 0, 0));
    }

    @SagaEventHandler(associationProperty = "productInventoryId")
    public void handle(ProductInventoryCreatedEvent event) {
        System.out.println("ProductCreationSaga, SagaEventHandler, ProductInventoryCreatedEvent");
        SagaLifecycle.end();
    }
}
The EventHandler that works as intended and saves a Product Entity:
@Component
public class ProductPersistenceService {

    @Autowired
    private ProductEntityRepository productRepository;

    // works as intended
    @EventHandler
    void on(ProductCreatedEvent event) {
        System.out.println("ProductPersistenceService, EventHandler, ProductCreatedEvent");
        ProductEntity entity = new ProductEntity(event.productId, event.productName, event.productDescription, event.productPrice);
        productRepository.save(entity);
    }

    @EventHandler
    void on(ProductNameChangedEvent event) {
        System.out.println("ProductPersistenceService, EventHandler, ProductNameChangedEvent");
        ProductEntity existingEntity = productRepository.findById(event.productId).get();
        ProductEntity entity = new ProductEntity(event.productId, event.productName, existingEntity.getProductDescription(), existingEntity.getProductPrice());
        productRepository.save(entity);
    }
}
The EventHandler that should save a ProductInventory Entity, but doesn't:
@Component
public class ProductInventoryPersistenceService {

    @Autowired
    private ProductInventoryEntityRepository productInventoryRepository;

    // doesn't work
    @EventHandler
    void on(ProductInventoryCreatedEvent event) {
        System.out.println("ProductInventoryPersistenceService, EventHandler, ProductInventoryCreatedEvent");
        ProductInventoryEntity entity = new ProductInventoryEntity(event.productInventoryId, event.physicalStock, event.reservedStock, event.availableStock);
        System.out.println(entity.toString());
        productInventoryRepository.save(entity);
    }
}
Product-Aggregate:
@Aggregate
public class Product {

    @AggregateIdentifier
    private String productId;
    private String productName;
    private String productDescription;
    private double productPrice;

    public Product() {
    }

    @CommandHandler
    public Product(CreateProductCommand command) {
        System.out.println("Product, CommandHandler, CreateProductCommand");
        AggregateLifecycle.apply(new ProductCreatedEvent(command.productId, command.productName, command.productDescription, command.productPrice));
    }

    @EventSourcingHandler
    protected void on(ProductCreatedEvent event) {
        System.out.println("Product, EventSourcingHandler, ProductCreatedEvent");
        this.productId = event.productId;
        this.productName = event.productName;
        this.productDescription = event.productDescription;
        this.productPrice = event.productPrice;
    }
}
ProductInventory-Aggregate:
@Aggregate
public class ProductInventory {

    @AggregateIdentifier
    private String productInventoryId;
    private int physicalStock;
    private int reservedStock;
    private int availableStock;

    public ProductInventory() {
    }

    @CommandHandler
    public ProductInventory(CreateProductInventoryCommand command) {
        System.out.println("ProductInventory, CommandHandler, CreateProductInventoryCommand");
        AggregateLifecycle.apply(new ProductInventoryCreatedEvent(command.productInventoryId, command.physicalStock, command.reservedStock, command.availableStock));
    }

    @EventSourcingHandler
    protected void on(ProductInventoryCreatedEvent event) {
        System.out.println("ProductInventory, EventSourcingHandler, ProductInventoryCreatedEvent");
        this.productInventoryId = event.productInventoryId;
        this.physicalStock = event.physicalStock;
        this.reservedStock = event.reservedStock;
        this.availableStock = event.availableStock;
    }
}
What you are noticing is the uniqueness requirement of the [aggregate identifier, sequence number] pair within a given event store. This requirement is in place to safeguard you from potential concurrent access to the same aggregate instance, as all events for the same aggregate need to have a unique overall sequence number. This number is furthermore used to identify the order in which events need to be handled, to guarantee the aggregate is consistently recreated in the same order.
So you might think this means "sorry, there is no solution in place", but luckily that is not the case. There are roughly three things you can do in this setup:
Live with the fact that both aggregates will have unique identifiers.
Use distinct bounded contexts between both applications.
Change the way aggregate identifiers are written.
Option 1 is arguably the most pragmatic and the one used by the majority. You have however noted that reusing the identifier is necessary, so I am assuming you have already disregarded this option entirely. Regardless, I would try to revisit this approach, as using UUIDs by default for each new entity you create can save you from trouble in the future.
Option 2 reflects the Bounded Context notion from DDD. Letting the Product aggregate and the ProductInventory aggregate reside in distinct contexts means you will have distinct event stores for both. Thus the uniqueness constraint is kept, as no single store contains both aggregates' event streams. Whether this approach is feasible, however, depends on whether both aggregates actually belong to the same context. If they do not, you could for example use Axon Server's multi-context support to create two distinct applications.
Option 3 requires a little insight into what Axon does. When it stores an event, it invokes the toString() method on the @AggregateIdentifier annotated field of the aggregate. As your @AggregateIdentifier annotated field is a String, you get the identifier as is. What you could do instead is use typed identifiers, whose toString() method doesn't return only the identifier but appends the aggregate type to it. Doing so makes the stored aggregateIdentifier unique, whereas from a usage perspective it still looks as if you are reusing the identifier.
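A minimal sketch of such a typed identifier; the class name and prefix below are illustrative, the only requirement is that toString() yields a store-unique value:
public class ProductInventoryId {

    private final String identifier;

    public ProductInventoryId(String identifier) {
        this.identifier = identifier;
    }

    @Override
    public String toString() {
        // Prefix the shared UUID with the aggregate type, so the
        // [aggregateIdentifier, sequence number] pair stays unique per stream.
        return "ProductInventory-" + identifier;
    }
}
The @AggregateIdentifier annotated field in ProductInventory would then be of type ProductInventoryId instead of String, while both aggregates keep sharing the same underlying UUID.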
Which of the three options suits your solution best is hard to deduce from my perspective; I have ordered them from most to least reasonable as I see it.
Hoping this will help you further, @Jan!

How to build a custom intermediate operation pipeline in Java for a series of API calls?

I am working on a project which provides a list of operations to be done on an entity, where each operation is an API call to the backend. Let's say the entity is a file, and the operations are convert, edit, and copy. There are definitely easier ways of doing this, but I am interested in an approach which allows me to chain these operations, similar to intermediate operations in Java Streams, and then, when I hit a terminal operation, decides which API calls to execute and performs any optimisation that might be needed. My API calls are dependent on the results of other operations. I was thinking of creating an interface:
interface Operation {
    Operation copy(Params p);     // intermediate
    Operation convert(Params p);  // intermediate
    Operation edit(Params p);     // intermediate
    FinalResult execute();        // terminal operation
}
Now each of these functions might impact the others, based on the sequence in which the pipeline is created. My high-level approach would be to just save the operation name and params inside the individual implementations of the operation methods, and use that to decide and optimise whatever I'd like in the execute method. That feels like bad practice, since I am technically doing nothing inside the operation methods, and this looks more like a builder pattern while not exactly being one. I'd like to know your thoughts on my approach. Is there a better design for building operation pipelines in Java?
Apologies if the question appears vague; I am basically looking for a way to build an operation pipeline in Java, while getting my approach reviewed.
You should look at a pattern such as
EntityHandler.of(remoteApi, entity)
        .copy()
        .convert(...)
        .get();
public class EntityHandler {

    private final CurrentResult result = new CurrentResult();
    private final RemoteApi remoteApi;

    private EntityHandler(
            final RemoteApi remoteApi,
            final Entity entity) {
        this.remoteApi = remoteApi;
        this.result.setEntity(entity);
    }

    public EntityHandler copy() {
        this.result.setEntity(new Entity(this.result.getEntity())); // copy constructor
        return this;
    }

    public EntityHandler convert(final EntityType type) {
        if (this.result.isErrored()) {
            throw new InvalidEntityException("...");
        }
        if (type == EntityType.PRIMARY) {
            this.result.setEntity(remoteApi.convertToSecondary(this.result.getEntity()));
        } else {
            // handle the remaining entity types
        }
        return this;
    }

    public Entity get() {
        return result.getEntity();
    }

    public static EntityHandler of(
            final RemoteApi remoteApi,
            final Entity entity) {
        return new EntityHandler(remoteApi, entity);
    }
}
The key is to localize the mutable state and its thread-safety concerns in one place, such as CurrentResult in this case.
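Since the asker specifically wants to optimise across the whole chain before any API call fires, a deferred variant is also worth sketching: record each step and only run them at the terminal operation. This is a minimal sketch under that assumption; DeferredEntityHandler and the Function-based step list are illustrative names, not part of any library.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class DeferredEntityHandler {

    private final Entity initial;
    private final List<Function<Entity, Entity>> steps = new ArrayList<>();

    private DeferredEntityHandler(Entity initial) {
        this.initial = initial;
    }

    public static DeferredEntityHandler of(Entity entity) {
        return new DeferredEntityHandler(entity);
    }

    public DeferredEntityHandler copy() {
        steps.add(Entity::new); // copy constructor, recorded but not yet run
        return this;
    }

    public DeferredEntityHandler convert(EntityType type) {
        steps.add(e -> remoteCallFor(type, e)); // recorded; the API call happens in get()
        return this;
    }

    // Terminal operation: 'steps' can be inspected and reordered/merged here
    // before anything executes, then each step runs in order.
    public Entity get() {
        Entity current = initial;
        for (Function<Entity, Entity> step : steps) {
            current = step.apply(current);
        }
        return current;
    }

    private Entity remoteCallFor(EntityType type, Entity e) {
        // placeholder for the actual backend call
        return e;
    }
}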

Pattern for persisting data in Realm?

My issue is how to organize the code. Let's say I have a User class
public class User extends RealmObject {

    @PrimaryKey
    private String id;

    @Required
    private String name;

    public User() { // per the requirement of a no-args constructor
        id = UUID.randomUUID().toString();
    }

    // Assume getters & setters below...
}
and a Util class is needed to handle the save in an asynchronous manner, since RealmObjects cannot have methods other than getters/setters.
public class Util {
    public static void save(User user, Realm realm) {
        RealmAsyncTask transaction = realm.executeTransaction(new Realm.Transaction() {
            @Override
            public void execute(Realm realm) {
                realm.copyToRealm(user); // <====== needs 'user' to be declared final in the parent method's arguments!
            }
        }, null);
    }
}
The intention is to put save() in a Util class to prevent spreading similar save code all over the code base, so that every time I want to save I can just call:
User u = new User();
u.setName("Uncle Sam");
Util.save(u, Realm.getDefaultInstance());
Not sure if this affects performance at all, but I was just going to save all fields every single time, overwriting what was there, except for the unique id field.
The problem is that I now need to declare the user argument as final in the Util.save() method, which means I cannot pass in the object I need to save other than once.
Is there a different way of handling this? Maybe a different pattern? Or am I looking at this all wrong and should go back to SQLite?
Why is it a problem to declare the method as public static void save(final User user, Realm realm)? It just means you cannot reassign the user variable to something else inside the method.
That said, the existence of a save() method can be a potential code smell, as you then spread the update behaviour across the code base. I would suggest looking into something like the Repository pattern (http://martinfowler.com/eaaCatalog/repository.html) instead.
Realm is actually working on an example showing how you can combine the Model-View-Presenter architecture with a Repository to encapsulate updates, which is a good pattern for what you are trying to do here. You can see the code for it here: https://github.com/realm/realm-java/pull/1960
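A minimal sketch of what such a repository could look like for this User class; the interface and class names are illustrative, while copyToRealmOrUpdate and the query API are Realm's own:
public interface UserRepository {
    void save(User user);
    User findById(String id);
}

public class RealmUserRepository implements UserRepository {

    private final Realm realm;

    public RealmUserRepository(Realm realm) {
        this.realm = realm;
    }

    @Override
    public void save(final User user) {
        realm.executeTransaction(new Realm.Transaction() {
            @Override
            public void execute(Realm realm) {
                realm.copyToRealmOrUpdate(user); // upsert keyed on the @PrimaryKey id
            }
        });
    }

    @Override
    public User findById(String id) {
        return realm.where(User.class).equalTo("id", id).findFirst();
    }
}
All update behaviour now lives behind one interface, which also makes it easy to mock in tests.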

HashMaps vs Reactive Programming

I am starting to embrace reactive programming a bit more, and I'm trying to apply it to my typical business problems. One pattern I often design with is database-driven classes. I have some defined unit class like ActionProfile whose instances are managed by an ActionProfileManager, which creates the instances off a database table and stores them in a Map<Integer,ActionProfile>, where the Integer key is the actionProfileId. The ActionProfileManager may clear and re-import the data periodically and notify all dependents to re-pull from its map.
public final class ActionProfileManager {

    private volatile ImmutableMap<Integer,ActionProfile> actionProfiles;

    private ActionProfileManager() {
        this.actionProfiles = importFromDb();
    }

    public void refresh() {
        this.actionProfiles = importFromDb();
        notifyEventBus();
    }

    // called by clients on their construction or when notifyEventBus is called
    public ActionProfile forKey(int actionProfileId) {
        return actionProfiles.get(actionProfileId);
    }

    private ImmutableMap<Integer,ActionProfile> importFromDb() {
        return ImmutableMap.of(); //import data here
    }

    private void notifyEventBus() {
        //notify event through EventBus here
    }
}
However, if I want this to be more reactive, creating the map would kind of break the monad. One approach would be to make the Map itself an Observable and return a monad that looks up a specific key for the client. However, the intermediate imperative operations may not be ideal, especially if I start using rxjava-jdbc down the road. On the other hand, the hashmap may help lookup performance significantly in intensive cases.
public final class ActionProfileManager {

    private final BehaviorSubject<ImmutableMap<Integer,ActionProfile>> actionProfiles;

    private ActionProfileManager() {
        this.actionProfiles = BehaviorSubject.create(importFromDb());
    }

    public void refresh() {
        actionProfiles.onNext(importFromDb());
    }

    public Observable<ActionProfile> forKey(int actionProfileId) {
        return actionProfiles.map(m -> m.get(actionProfileId));
    }

    private ImmutableMap<Integer,ActionProfile> importFromDb() {
        return ImmutableMap.of(); //import data here
    }
}
Therefore, the most reactive approach to me seems to be just pushing everything from the database on each refresh through an Observable<ActionProfile> and filtering for the last matching ID for the client.
public final class ActionProfileManager {

    private final ReplaySubject<ActionProfile> actionProfiles;

    private ActionProfileManager() {
        this.actionProfiles = ReplaySubject.create();
        importFromDb();
    }

    public void refresh() {
        importFromDb();
    }

    public Observable<ActionProfile> forKey(int actionProfileId) {
        return actionProfiles.filter(m -> m.getActionProfileID() == actionProfileId).last();
    }

    private void importFromDb() {
        // call onNext() on actionProfiles and pass each new ActionProfile coming from the database
    }
}
Is this the optimal approach? What about old data causing memory leaks by never being GC'd? Is it more practical to maintain the map and make it observable?
What is the most reactive approach to data-driven classes among the ones above? Or is there a better way I have not discovered?
Using BehaviorSubject is the right thing to do here if you don't care about earlier values.
Note that most posts discouraging Subjects were written in the early days of Rx.NET and are mostly quoted over and over again without much thought. I attribute this to the possibility that those authors didn't really understand how Subjects work, or ran into some problems with them and simply declared they shouldn't be used.
I think Subjects are a great way to multicast events (usually coming from a single thread) where you control, or are, the source of the events and the event dispatching is somewhat 'global' (such as listening to mouse-move events).
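As a usage sketch of the BehaviorSubject version above: a client subscription picks up the current map immediately and every refresh afterwards. The getInstance() accessor and the distinctUntilChanged() call are assumptions on my part, the latter to suppress re-emissions when a refresh leaves a given profile unchanged:
ActionProfileManager manager = ActionProfileManager.getInstance(); // assumed singleton accessor
manager.forKey(42)
        .distinctUntilChanged()
        .subscribe(profile -> System.out.println("Profile updated: " + profile));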

Best design pattern/approach for a long list of if/else/execute branches of code

I have a "legacy" code that I want to refactor.
The code basically does a remote call to a server and gets back a reply. Then according to the reply executes accordingly.
Example of skeleton of the code:
public Object processResponse(String responseType, Object response) {
    if (responseType.equals(CLIENT_REGISTERED)) {
        //code
        //code ...
    }
    else if (responseType.equals(CLIENT_ABORTED)) {
        //code
        //code....
    }
    else if (responseType.equals(DATA_SPLIT)) {
        //code
        //code...
    }
    // etc.
}
The problem is that there are many, many if/else branches, and the code inside each branch is not trivial, so it becomes hard to maintain.
I was wondering: what is the best pattern for this?
One thought I had was to create a single object with method names matching the responseType values, and then inside processResponse use reflection to call the method with the same name as the responseType.
This would clean up processResponse, but it moves the code into a single object with many, many methods, and I think reflection would cause performance issues.
Is there a nice design approach/pattern to clean this up?
Two approaches:
Strategy pattern http://www.dofactory.com/javascript/strategy-design-pattern
Create a dictionary where the key is metadata (in your case, the responseType) and the value is a function or strategy object.
For example, put this in the constructor:
Map<String, SomeAbstraction> responses = new HashMap<>();
responses.put(CLIENT_REGISTERED, new ImplementationForRegisteredClient());
responses.put(CLIENT_ABORTED, new ImplementationForAbortedClient());
where ImplementationForRegisteredClient and ImplementationForAbortedClient implement SomeAbstraction,
and call into this dictionary via
responses.get(responseType).methodOfYourAbstraction(someParams);
If you want to follow the principle of DI, you can inject this map into your client class.
My first cut would be to replace the if/else if structures with switch/case:
public Object processResponse(String responseType, Object response) {
    switch (responseType) {
        case CLIENT_REGISTERED:
            //code ...
            break;
        case CLIENT_ABORTED:
            //code....
            break;
        case DATA_SPLIT:
            //code...
            break;
    }
}
From there I'd probably extract each block as a method, and from there apply the Strategy pattern. Stop at whatever point feels right.
The case you've described seems a perfect fit for the Strategy pattern. In particular, you have many variants of an algorithm, i.e. the code executed according to the response of the remote server call.
Implementing the Strategy pattern means you have to define a class hierarchy, such as the following:
public interface ResponseProcessor {
    public void execute(Context ctx);
}

class ClientRegistered implements ResponseProcessor {
    public void execute(Context ctx) {
        // Actions corresponding to a client that is registered
        // ...
    }
}

class ClientAborted implements ResponseProcessor {
    public void execute(Context ctx) {
        // Actions corresponding to a client aborted
        // ...
    }
}
// and so on...
The Context type should contain all the information needed to execute each 'strategy'. Note that if different strategies share some algorithm pieces, you could also use the Template Method pattern among them.
You need a factory to create a particular Strategy at runtime. The factory will build a strategy starting from the response received; a possible implementation is the one suggested by @Sattar Imamov. The factory will contain the if .. else code.
If the strategy classes are not too heavy to build and don't need any external information at build time, you can also map each strategy to an Enumeration value.
public enum ResponseType {
    CLIENT_REGISTERED(new ClientRegistered()),
    CLIENT_ABORTED(new ClientAborted()),
    DATA_SPLIT(new DataSplit());

    // Processor associated to a response
    private ResponseProcessor processor;

    private ResponseType(ResponseProcessor processor) {
        this.processor = processor;
    }

    public ResponseProcessor getProcessor() {
        return this.processor;
    }
}
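With that enum in place, processResponse collapses to a single dispatch. A small sketch, assuming the incoming responseType strings match the enum constant names and that a Context can be built from the response:
Context ctx = new Context(response); // Context construction is assumed
ResponseType.valueOf(responseType).getProcessor().execute(ctx);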
