Rich domain in the context of packaged domain classes - Java

A legacy Java code-base that I'm currently working with makes use of a notorious framework. It provides out-of-the-box domain classes in nicely packaged jars. The domain classes are nothing but bags of getters and setters.
This is preventing me from growing a rich domain model by extracting the procedural code from static Util classes into its rightful place, i.e. the domain classes themselves. For example, consider the logic within the following method:
public static boolean areFriends(User user1, User user2) {
    for (User friend : user1.getFriends()) {
        if (friend.equals(user2)) {
            return true;
        }
    }
    return false;
}
This could instead be nicely expressed as isFriendOf(User another) in the User class. However, the User class is all locked up. By the way, the framework uses life-cycle methods to pass in the User objects:
// Life-cycle method
public void execute(FrameworkBlob frameworkFattyObject) {
    ...
    User user = frameworkFattyObject.getUser();
    User loggedInUser = getLoggedInUserFromSomewhere();
    boolean areFriends = BadUtilClass.areFriends(user, loggedInUser);
    ...
}
Keeping testability in mind, is there a way I could say something like:
boolean areFriends = user.isFriendOf(loggedInUser);

I'm not familiar with the notorious framework, and this started as a comment but grew too long.
Is it possible to inject something into the class that hosts the life-cycle method?
For example:
public class AClassIDontKnow {

    private DomainModelMapper mapper; // inject this

    // Life-cycle method
    public void execute(FrameworkBlob frameworkFattyObject) {
        ...
        UserDomainModel user = mapper.getUser(frameworkFattyObject);
        UserDomainModel loggedInUser = getLoggedInUserFromSomewhere();
        boolean areFriends = user.isFriendOf(loggedInUser);
        ...
    }
}
public class DomainModelMapper {

    UserDomainModel getUser(FrameworkBlob frameworkFattyObject) {
        User userAnemicModel = frameworkFattyObject.getUser();
        // map the anemic model to a rich domain model
        return ....;
    }
}
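A minimal sketch of what such a rich model might look like (UserDomainModel and its fields are assumptions on my part, not framework types):
import java.util.HashSet;
import java.util.Set;

// Hypothetical rich domain class built by DomainModelMapper from the anemic User.
public class UserDomainModel {

    private final String id;
    private final Set<String> friendIds;

    public UserDomainModel(String id, Set<String> friendIds) {
        this.id = id;
        this.friendIds = new HashSet<>(friendIds); // defensive copy
    }

    public String getId() {
        return id;
    }

    // The logic from BadUtilClass.areFriends now lives on the domain class itself.
    public boolean isFriendOf(UserDomainModel another) {
        return friendIds.contains(another.getId());
    }
}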
Hence the test strategy:
1) DomainModelMapperUnitTest covers the mapping.
2) UserDomainModelUnitTest covers isFriendOf(user).
3) Use a mock for DomainModelMapper in AClassIDontKnowUnitTest if necessary (see the sketch after this list).
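For example, a sketch of point 3 using JUnit 5 and Mockito; constructor injection, the mockability of FrameworkBlob, and the fixture values are all assumptions here:
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.Set;

import org.junit.jupiter.api.Test;

class AClassIDontKnowUnitTest {

    @Test
    void executeChecksFriendshipViaTheRichModel() {
        // Stub the mapper so the life-cycle method never touches the blob's internals.
        DomainModelMapper mapper = mock(DomainModelMapper.class);
        FrameworkBlob blob = mock(FrameworkBlob.class);
        when(mapper.getUser(blob)).thenReturn(new UserDomainModel("u1", Set.of("u2")));

        AClassIDontKnow sut = new AClassIDontKnow(mapper); // assumes constructor injection

        sut.execute(blob);
        // assert whatever execute() is expected to do with the friendship result
    }
}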

Using Optionals correctly in service layer of spring boot application

I'm new to Spring Boot application development. I'm using a service layer in my application, but came across a repository method that returns Optional, as shown below.
@Override
public Questionnaire getQuestionnaireById(Long questionnaireId) {
    Questionnaire returnedQuestionnaire = null;
    Optional<Questionnaire> questionnaireOptional = questionnaireRepository.findById(questionnaireId);
    if (questionnaireOptional.isPresent()) {
        returnedQuestionnaire = questionnaireOptional.get();
    }
    return returnedQuestionnaire;
}
My question is: am I using the Optional correctly here? And is it OK to return the Optional and check it (isPresent()) in the RestController, throwing an exception if it is not present? Like below:
public Optional<Questionnaire> getQuestionnaireById(Long questionnaireId) {
    return questionnaireRepository.findById(questionnaireId);
}
I wouldn't go for either option, to be honest, especially not the first. You don't want to introduce null values inside your domain. Your domain should stay as simple as possible: readable and free of clutter like null checks.
You might want to read through the Optional API for all your options, but personally I would go for something like this:
In repository:
public interface QuestionnaireRepository {
    Optional<Questionnaire> findById(Long questionnaireId);
    // ...
}
In service:
@Service
@RequiredArgsConstructor // Or generate the constructor yourself if you're not using Lombok
public class QuestionnaireService {

    private final QuestionnaireRepository questionnaireRepository;
    // ...

    public Questionnaire getQuestionnaireById(Long questionnaireId) {
        Questionnaire questionnaire = questionnaireRepository.findById(questionnaireId)
                .orElseThrow(() -> new QuestionnaireNotFoundException(questionnaireId));
        // Do whatever you want to do with the Questionnaire...
        return questionnaire;
    }
}
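On the controller side you then don't need any isPresent() checks. One common option (a sketch only, assuming Spring Web) is to let the exception map to a 404:
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.ResponseStatus;

// Marking the exception lets Spring translate it into a 404 response automatically.
@ResponseStatus(HttpStatus.NOT_FOUND)
public class QuestionnaireNotFoundException extends RuntimeException {

    public QuestionnaireNotFoundException(Long questionnaireId) {
        super("Questionnaire not found: " + questionnaireId);
    }
}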
I would go with way 1 that you have mentioned. In case the object is not present, throw a validation exception or something similar. This approach also ensures that the service layer is in charge of the logic and the controller is just used for interacting with the outside world.

How Can I Test Aggregate if ID is randomly generated?

This may be more of a design question, but I have an aggregate member whose ID is generated via a command, and I need to be able to test that the event is generated given the command was run.
However, I don't see any obvious way to do an any-string match on one field of the event in the test fixture framework.
Is it "bad practice" to generate IDs in the aggregate when created? Should IDs be generated outside of the aggregate?
@AggregateMember(eventForwardingMode = ForwardMatchingInstances.class)
private List<TimeCardEntry> timeCardEntries = new ArrayList<>();

data class ClockInCommand(@TargetAggregateIdentifier val employeeName: String)

@CommandHandler
public TimeCard(ClockInCommand cmd) {
    apply(new ClockInEvent(cmd.getEmployeeName(),
            GenericEventMessage.clock.instant(),
            UUID.randomUUID().toString()));
}

@EventSourcingHandler
public void on(ClockInEvent event) {
    this.employeeName = event.getEmployeeName();
    timeCardEntries.add(new TimeCardEntry(event.getTimeCardEntryId(), event.getTime()));
}
@Data
public class TimeCardEntry {

    @EntityId
    private final String timeCardEntryId;
    private final Instant clockInTime;
    private Instant clockOutTime;

    @EventSourcingHandler
    public void on(ClockOutEvent event) {
        this.clockOutTime = event.getTime();
    }

    private boolean isClockedIn() {
        return clockOutTime == null; // still clocked in until a clock-out time is recorded
    }
}
@ParameterizedTest
@MethodSource(value = "randomEmployeeName")
void testClockInCommand(String employeeName) {
    testFixture.givenNoPriorActivity()
            .when(new ClockInCommand(employeeName))
            .expectEvents(new ClockInEvent(employeeName, testFixture.currentTime(), "Any-String-Works"));
}
Is it "bad practice" to generate IDs in the aggregate when created? Should IDs be generated outside of the aggregate?
Random numbers are a lot like clocks - they are a form of shared mutable state. Put another way, they are a concern of the imperative shell, not of the functional core.
What this usually means for your domain model is that the randomness is passed in as an argument, rather than produced by the aggregate itself. This might mean passing an ID generator to the domain model, or even generating the id in the application and passing in the generated identifier as a value.
Thus, in our unit test, we replace the random generator provided by the target application with a "random" generator provided by the test -- because the test controls the generator, the identifier used becomes deterministic, and therefore easy to assert on.
In cases where you aren't happy with making the random generator part of the api of your domain model, another option is to expose it as part of the test interface.
// We don't necessarily worry about testing this version; it is "too simple to break".
void doSomethingCool(...) {
    doSomethingCool(generateRandomId(), ...);
}

// Unit tests exercise this overload instead, which is easier to test and has
// all of the complicated logic.
void doSomethingCool(ID id, ...) {
    // ...
}
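As a plain-Java sketch of the "pass the randomness in" idea (all names here are illustrative, not Axon API):
import java.util.UUID;
import java.util.function.Supplier;

// The aggregate (or its factory) receives the ID source instead of calling
// UUID.randomUUID() itself; names are illustrative assumptions.
public class TimeCardIdSource {

    private final Supplier<String> idSupplier;

    public TimeCardIdSource(Supplier<String> idSupplier) {
        this.idSupplier = idSupplier;
    }

    public String nextTimeCardEntryId() {
        return idSupplier.get();
    }

    // Production wiring uses real randomness...
    public static TimeCardIdSource production() {
        return new TimeCardIdSource(() -> UUID.randomUUID().toString());
    }

    // ...while a test passes a fixed value, making the emitted event deterministic.
    public static TimeCardIdSource fixed(String id) {
        return new TimeCardIdSource(() -> id);
    }
}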

Is it a correct application of the SRP (single responsibility principle)?

I have a java class:
class User {
    private String name;
    private String address;
    private int age;
    private BigDecimal salary;
    // other fields
    // getters, setters
}
I can receive a map of new values for these fields and update the object. Each change arrives as a ChangeItem changeItem, where changeItem.key is the field's name and changeItem.value is the field's new value.
I create strategies for updating each field. For example, the common interface:
public interface UpdateStrategy<T> {
    T updateField(T t, ChangeItem changeItem) throws ValidationExceptions;
}
And an implementation:
public class UpdateNameStrategy implements UpdateStrategy<User> {

    private static final Pattern USER_NAME = Pattern.compile(...);

    @Override
    public User updateField(User user, ChangeItem changeItem) throws ValidationExceptions {
        String fieldValue = changeItem.value;
        if (!validate(fieldValue))
            throw new ValidationExceptions(changeItem);
        user.setName(fieldValue);
        return user;
    }

    private boolean validate(String value) {
        return USER_NAME.matcher(value).matches();
    }
}
In the real project I have 40 fields and 40 strategies, one for each field (with different validation and logic).
I think this class violates the SRP (single responsibility principle), so I moved the validation logic to a separate class. I changed the update method to:
public class UpdateNameStrategy implements UpdateStrategy<User> {

    @Override
    public User updateField(User user, ChangeItem changeItem) throws ValidationExceptions {
        String fieldValue = changeItem.value;
        ValidateFieldStrategy fieldValidator = new UserNameValidate(fieldValue);
        if (!fieldValidator.validate())
            throw new ValidationExceptions(changeItem);
        user.setName(fieldValue); // still update the field once validation passes
        return user;
    }
}
and
public class UserNameValidate implements ValidateFieldStrategy {

    private static final Pattern USER_NAME = Pattern.compile(...);

    private String value;

    public UserNameValidate(String value) {
        this.value = value;
    }

    @Override
    public boolean validate() {
        return USER_NAME.matcher(value).matches();
    }
}
And now I have 40 strategies for updating fields and 40 validators. Is this the correct way? Or can I make this code cleaner?
I'm sorry for being blunt, my eyes are bleeding while looking at this. You took one unnecessarily complicated validation model and you split it in two to make it even more complicated. And none of it has much to do with the Single Responsibility Principle.
Without knowing anything specific to your domain problem, this looks like a superfluous usage of the Strategy pattern.
I've never seen a legitimate domain problem requiring a validation strategy split like this, for every single field.
An object in a domain is not just a collection of fields. It is also behavior governing the fields (which is the object state) and the rules governing the mutability of that state.
In general we want rich objects with behavior. And that behavior typically includes validation.
I sincerely doubt that every single field in a model requires validation to this level of granularity. Put the validation in the object's setter methods and be done with it.
You are killing yourself doing all this elaborate setup. We all want structure, but at some point all of this is just ceremony for building very tall sand castles.
Validation in general is part of an object. An object is responsible for governing its state: the collection of fields and values it possesses and controls.
Single Responsibility Principle does not mean extracting the responsibility of validating fields out of an object. That responsibility is intrinsic to the object.
Single Responsibility Principle concerns itself with "external" responsibility, the responsibility of an object to provide a single coherent function (or set of coherent functions) to someone that uses that object.
Consider a Printer object. This object is responsible to print. It is not responsible to manage the network connections between a printer and a user, for instance.
SRP is not limited to classes, but also packages and modules. A Mathematics module should provide you with, obviously, mathematical routines. It should not provide routines for filesystem manipulation, right?
That's what the SRP is about. What you are doing, extracting validation behavior out of an object, that has little, if anything, to do with SRP.
Sometimes one might want to extract out common validation routines (check if a string is blank or null, or whether a number is a natural number.)
So you might have a class like this:
public class User {
    // some fields, blah blah

    public void setName(final String aName) {
        if (aName == null || aName.trim().length() < 1) {
            throw new SomeException("empty string blah blah");
        }
        this.name = aName.trim(); // model requires this to be trimmed.
    }

    public void setRawField(final String aValue) {
        if (aValue == null || aValue.trim().length() < 1) {
            throw new SomeException("empty string blah blah");
        }
        this.rawField = aValue; // model requires this to not be trimmed.
    }

    public void setRawField2(final String aValue) {
        // model requires this field to be non-null,
        // can be blank, and if not blank, must be all lower case.
        if (aValue == null) {
            throw new NullPointerException("null string blah blah");
        }
        this.rawField2 = aValue.toLowerCase();
    }
}
changed into a class that delegates the minutia to an external validation utility class or module:
public class User {
    // some fields, blah blah

    public void setName(final String aName) {
        // model requires this to be trimmed
        this.name = Validator.notEmptyOrDie(aName).trim();
    }

    public void setRawField(final String aValue) {
        // model requires this to *not* be trimmed
        this.rawField = Validator.notEmptyOrDie(aValue);
    }

    public void setRawField2(final String aValue) {
        // model requires this field to be non-null,
        // can be blank, and if not blank, must be all lower case.
        // too contrived to refactor, leave it here.
        if (aValue == null) {
            throw new NullPointerException("null string blah blah");
        }
        this.rawField2 = aValue.toLowerCase();
    }
}
public class Validator {
    static public String notEmptyOrDie(final String aString) {
        if (aString == null || aString.trim().length() < 1) {
            throw new SomeException("empty string blah blah");
        }
        return aString;
    }
}
This is an approach I actually follow to refactor common parts of validation: I factor out the minutia.
But the core validation logic, if any, remains in the object. Notice that validation is still part of the User class; all that got extracted is the minutia.
The logic that declares the intent of validation (check if blank, or die) still remains part of the User class. It is intrinsic to the class' behavior.
In some models, the User class might not require validation at all. It might be just a data shuttle, a POJO.
OTOH, in a model that requires the object to validate its state, that validation should usually live inside the class, and a developer must have a very good argument for extracting that logic the way you did in your sample code.
SRP says nothing about how you compose responsibility internal to the object, only external to consumers of said object.
As a rule of thumb, validation of object fields belong to the object as logic internal to the object. It is intrinsic to the object's behavior, invariants, pre conditions and post conditions.
Very rarely do you extract the entire validation out of an object (unless we are talking about POJOs serialized and deserialized by an external package, with validations added declaratively via annotations or some sort of controlling configuration descriptor.)
Hit me up if you still have any questions. Not sure how fast I can answer back, but I don't mind to answer questions if I can.
**** EDIT ****
User @qujck mentions a valid concern with this proposed approach: it is not possible to differentiate all validation exceptions (because they use common exceptions for all.)
One possibility (which I've used) is to have overloaded and/or polymorphic validators:
public class Validator {

    static public String notEmptyOrDie(final String aString) {
        return Validator.notEmptyOrDie(aString, null);
    }

    static public String notEmptyOrDie(final String aString,
                                       final String aFieldName) {
        if (aString == null || aString.trim().length() < 1) {
            throw new SomeException(
                    (aFieldName == null ? "" : aFieldName + " ")
                    + "empty string blah blah");
        }
        return aString;
    }
}
If one uses a hierarchy of validation exceptions with common constructors, then one could take this further by passing the desired exception class, and use reflection to create instances to be thrown.
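For example, a minimal sketch of that reflection idea; the convention that every exception in the hierarchy exposes a single-String constructor is an assumption:
public class Validator {

    // Assumes each exception class in the hierarchy has a (String message) constructor.
    static public <E extends RuntimeException> String notEmptyOrDie(
            final String aString, final Class<E> exceptionClass) {
        if (aString == null || aString.trim().length() < 1) {
            try {
                throw exceptionClass.getConstructor(String.class)
                                    .newInstance("empty string blah blah");
            } catch (ReflectiveOperationException e) {
                throw new IllegalStateException("cannot build " + exceptionClass, e);
            }
        }
        return aString;
    }
}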
I've done that also. Actually, I'm doing that now for a common error-throwing mechanism in an EJB layer that itself reaches to another system via network protocols.
But that's something I do to cope with an existing system, not something I would do if I had a design choice. And it still limits itself to refactoring validation or error handling to its core elements.
Actual, object-specific validation still remains at/within the object itself.

accessing child constant in parent class in java

OK, so I have an interesting problem. I am using java/maven/spring-boot/cassandra... and I am trying to create a dynamic instantiation of the Mapper setup they use.
I.e.:
// Users.java
import com.datastax.driver.mapping.annotations.Table;

@Table(keyspace = "mykeyspace", name = "users")
public class Users {
    @PartitionKey
    public UUID id;
    // ...
}
Now, in order to use this I would have to explicitly say ...
Users user = (DB).mapper(Users.class);
obviously replacing (DB) with my db class.
Which is a great model, but I am running into the problem of code repetition. My Cassandra database has 2 keyspaces, both keyspaces have the exact same tables with the exact same columns in the tables, (this is not my choice, this is an absolute must have according to my company). So when I need to access one or the other based on a form submission it becomes a mess of duplicated code, example:
// myWebController.java
import ...;

@RestController
public class MyRestController {

    @RequestMapping(value = "/orders", method = RequestMethod.POST)
    public String getOrders(...) {
        if (Objects.equals(client, "first_client_name")) {
            // do all the things to get first keyspace objects, like....
            FirstClientUsers users = (db).mapper(FirstClientUsers.class);
            // ...
        } else if (Objects.equals(client, "second_client_name")) {
            SecondClientUsers users = (db).mapper(SecondClientUsers.class);
            // ....
        }
        return "";
    }
}
I have been trying to use methods like...
Class cls = Class.forName(STRING_INPUT_VARIABLE_HERE);
and that works OK for base classes, but when trying to use the Accessor stuff it no longer works, because Accessors have to be interfaces; so when you do Class cls, it is no longer an interface.
I am trying to find some other solution to make this work dynamically, without duplicating code for every possible client. Each client will have its own keyspace in Cassandra, with the exact same tables as all the other ones.
I cannot change the database model, this is a must according to the company.
With PHP this is extremely simple since it doesn't care about typecasting as much, I can easily do...
function getData($name) {
    $className = $name . 'Accessor';
    $class = new $className();
}
and poof, I have a dynamic class. But the problem I am running into is the type specification, where I have to explicitly say...
FirstClientUsers users = new FirstClientUsers();
//or even
FirstClientUsers users = Class.forName("FirstClientUsers");
I hope this is making sense. I can't imagine that I am the first person to have this problem, but I can't find any solutions online. So I am really hoping that someone knows how I can get this accomplished without duplicating the exact same logic for every single keyspace we have. It makes the code unmaintainable and unnecessarily long.
Thank you in advance for any help you can offer.
Do not specify the keyspace in your model classes, and instead, use the so-called "session per keyspace" pattern.
Your model class would look like this (note that the keyspace is left undefined):
@Table(name = "users")
public class Users {
    @PartitionKey
    public UUID id;
    // ...
}
Your initialization code would have something like this:
Map<String, Mapper<Users>> mappers = new ConcurrentHashMap<String, Mapper<Users>>();
Cluster cluster = ...;
Session firstClientSession = cluster.connect("keyspace_first_client");
Session secondClientSession = cluster.connect("keyspace_second_client");
MappingManager firstClientManager = new MappingManager(firstClientSession);
MappingManager secondClientManager = new MappingManager(secondClientSession);
mappers.put("first_client", firstClientManager.mapper(Users.class));
mappers.put("second_client", secondClientManager.mapper(Users.class));
// etc. for all clients
You would then store the mappers object and make it available through dependency injection to other components in your application.
Finally, your REST service would look like this:
import ...

@RestController
public class MyRestController {

    @javax.inject.Inject
    private Map<String, Mapper<Users>> mappers;

    @RequestMapping(value = "/orders", method = RequestMethod.POST)
    public String getOrders(...) {
        Mapper<Users> usersMapper = getUsersMapperForClient(client);
        // process the request with the right client's mapper
    }

    private Mapper<Users> getUsersMapperForClient(String client) {
        if (mappers.containsKey(client))
            return mappers.get(client);
        throw new RuntimeException("Unknown client: " + client);
    }
}
Note how the mappers object is injected.
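If you are wiring this with Spring, the initialization code could live in a configuration class. A sketch only, with the contact point and keyspace names as placeholders:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.mapping.Mapper;
import com.datastax.driver.mapping.MappingManager;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MapperConfig {

    // Contact point and keyspace naming scheme are placeholder assumptions.
    @Bean
    public Map<String, Mapper<Users>> usersMappers() {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Map<String, Mapper<Users>> mappers = new ConcurrentHashMap<>();
        for (String client : new String[] {"first_client", "second_client"}) {
            Session session = cluster.connect("keyspace_" + client);
            mappers.put(client, new MappingManager(session).mapper(Users.class));
        }
        return mappers;
    }
}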
Small nit: I would name your class User in the singular instead of Users (in the plural).

What design pattern is appropriate here?

I am working on an application which has REST endpoints, and for a get-by-ID service I populate a resource (basically a POJO) by collecting data from the persistent store. Now, before sending the response back, I have to populate the HREF in the POJO resource. I want to do it in a generic way so that various other REST services (search etc.) can use it, and I want to do this HREF population in a common place for reusability purposes. In a nutshell, my resource POJO can go through various massaging layers that each change its state, and it is finally sent back to the consumer.
Resource POJO --> Massager 1 --> Massager 2 --> Final Massaged POJO
Could someone help me figure out a design pattern that fits my problem?
I thought of the Decorator pattern, but somehow it does not sail my ship.
~ NN
You could adapt the Chain of Responsibility pattern to your needs. Instead of having a series of processing objects which pass your POJO from one to another when they cannot handle it, you could process your POJO and then pass it further.
abstract class Messager {

    private Messager nextMessager;

    void setNextMessager(Messager messager) {
        this.nextMessager = messager;
    }

    Messager getNextMessager() {
        return this.nextMessager;
    }

    abstract void handle(Pojo pojo);
}
class FooMessager extends Messager {
    void handle(Pojo pojo) {
        // operate on your pojo
        if (pojo.getHref() == null) {
            pojo.setHref("broken");
        }
        if (this.getNextMessager() != null) {
            this.getNextMessager().handle(pojo);
        }
    }
}
class BarMessager extends Messager {
    void handle(Pojo pojo) {
        // operate on your pojo
        if (pojo.getHref().contains("broken")) {
            pojo.setHref(pojo.getHref().replace("broken", "fixed"));
        }
        if (this.getNextMessager() != null) {
            this.getNextMessager().handle(pojo);
        }
    }
}
class Pojo {

    private String href;

    public Pojo() {
    }

    public String getHref() {
        return href;
    }

    public void setHref(String href) {
        this.href = href;
    }
}
class Test {
    public static void main(String[] args) {
        Pojo pojo = new Pojo();
        pojo.setHref(null);
        Messager foo = new FooMessager();
        Messager bar = new BarMessager();
        foo.setNextMessager(bar);
        foo.handle(pojo);
    }
}
Even though the previous answers are good and do solve the problem, I want to propose an additional way if you want to go further. Communication between objects is very common, so a lot of concepts are out there and you can choose the one that fits your needs best.
The Command pattern can help you with the encapsulation of a request as an object in "collecting data from the persistent store". It'll allow you to parameterize clients with different requests, and to queue or log requests.
The Mediator pattern can define the communication between the Massager 1 --> Massager 2 classes. By doing this it'll encapsulate your objects' interaction. It also promotes loose coupling by keeping objects from referring to each other explicitly, and it'll let you vary their interaction independently.
If you'll need to notify the Massager 1 --> Massager 2 classes of changes ("my resource POJO can go through various massaging layers to have different state changed"), then the Observer pattern can define a dependency between your objects so that when one object changes state, all its dependents are notified and updated automatically. A minimal sketch follows.
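For instance, reusing the Pojo class from the earlier answer (the listener names are illustrative, not a specific library API):
import java.util.ArrayList;
import java.util.List;

// Minimal Observer sketch for the massaging pipeline.
interface PojoListener {
    void onStateChanged(Pojo pojo);
}

class ObservablePojo extends Pojo {

    private final List<PojoListener> listeners = new ArrayList<>();

    void addListener(PojoListener listener) {
        listeners.add(listener);
    }

    @Override
    public void setHref(String href) {
        super.setHref(href);
        // every massager registered as a listener is notified of the change
        listeners.forEach(l -> l.onStateChanged(this));
    }
}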
