Our team is using Spring Boot 2 with sql2o as the DB library. In the past, for trivial methods in our services, we simply called the repository and returned the model. For example, for a Supplier table, the service had:
@Override
public List<Supplier> findAll() {
    return supplierRepository.findAll();
}
Now, since in 99% of cases our controllers need other objects correlated to the model, I would create a composite class that holds the model together with the related models. For example:
@Override
public List<UnknownName> findAll() {
    List<Supplier> suppliers = supplierRepository.findAll();
    List<UnknownName> res = new ArrayList<>();
    if (suppliers != null) {
        for (Supplier supplier : suppliers) {
            UnknownName unknownName = new UnknownName();
            unknownName.setSupplier(supplier);
            LegalOffice legalOffice = legalOfficeService.findByIdlegaloffice(supplier.getLegalofficeid());
            unknownName.setLegalOffice(legalOffice);
            res.add(unknownName);
        }
    }
    return res;
}
What should the name of the class UnknownName be?
PS: I simplified the code for better readability; in reality I use a generic enrich() function that I call from all the find methods, so I don't duplicate the code.
I would recommend SupplierDto or SupplierLegalOfficeDto. DTO stands for Data Transfer Object, and the term is commonly used for enriched models like this.
Also, you shouldn't check suppliers for null: the repository always returns a non-null list.
In the end, I adopted the suffix Aggregator, following the Domain-driven design wording.
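For illustration, here is a minimal sketch of such a composite class under the Aggregator naming; the Supplier and LegalOffice stubs below stand in for the real domain classes and are not from the original code:

```java
// Stub domain classes, standing in for the real entities.
class Supplier {
    String name;
}

class LegalOffice {
    String city;
}

// Composite class pairing a Supplier with its related LegalOffice.
class SupplierAggregator {
    private Supplier supplier;
    private LegalOffice legalOffice;

    public Supplier getSupplier() { return supplier; }
    public void setSupplier(Supplier supplier) { this.supplier = supplier; }
    public LegalOffice getLegalOffice() { return legalOffice; }
    public void setLegalOffice(LegalOffice legalOffice) { this.legalOffice = legalOffice; }
}
```

The service's enrich step then only has to construct one of these per Supplier and fill in the related objects.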
I am somewhat new to Spring and have recently generated a JHipster monolith application with the WebFlux option. My current aim is to make it compatible with Firestore and to implement some missing features, such as inserting document references. To do so, I currently have the following structure:
A domain object class "Device" which holds a field String firmwareType;
A domain object class "FirmwareType"
A DTO object DeviceDTO which holds a field FirmwareType firmwareType;
Correspondingly, I also have the matching Repository (extending FirestoreReactiveRepository, which extends ReactiveCrudRepository) and Controller implementations, which all work fine. To convert the "full" FirmwareType object in the DTO to a String firmwareTypeId in the Device object, I implemented a MapStruct mapper:
@Mapper(unmappedTargetPolicy = org.mapstruct.ReportingPolicy.IGNORE, componentModel = "spring")
public abstract class DeviceMapper {

    private final Logger logger = LoggerFactory.getLogger(DeviceMapper.class);

    @Autowired
    protected FirmwareTypeRepository fwTypeRepo;

    public abstract Device dtoToDevice(DeviceDTO deviceDTO);
    public abstract DeviceDTO deviceToDto(Device device);
    public abstract List<DeviceDTO> devicesToDTOs(List<Device> devices);
    public abstract List<Device> dtosToDevices(List<DeviceDTO> dtos);

    public String map(FirmwareType value) {
        if (value == null || value.getId() == null || value.getId().isEmpty()) {
            return null;
        }
        return value.getId();
    }

    public FirmwareType map(String value) {
        if (value == null || value.isEmpty()) {
            return null;
        }
        return fwTypeRepo.findById(value).block(); // <<-- this gets stuck!
    }
}
The FirmwareTypeRepository, which is autowired as the fwTypeRepo field:
@Repository
public interface FirmwareTypeRepository extends FirestoreReactiveRepository<FirmwareType> {
    Mono<FirmwareType> findById(String id);
}
The map functions get called perfectly fine, but the fwTypeRepo.findById(..) call in the marked line seems to get stuck somewhere: it never returns and never throws an error. When fwTypeRepo is exercised via its Controller endpoint, it works without any issues.
I suppose it must be some kind of calling-context issue? Is there another way than block() to get a result from a Mono synchronously?
Thanks for your help in advance, everyone!
Edit: At this point, I am sure it has something to do with autowiring the repository. The correct instance does not seem to be used: while a customized Interface+Impl is called correctly, the underlying logic (from FirestoreReactiveRepository/ReactiveCrudRepository) doesn't seem to supply data correctly (also when @Autowired is used in other components!). I found some hints pointing at the package structure (i.e. the Application class needs to be in a root package), but that isn't the issue here.
As far as I know, MapStruct is not reactive, so this approach won't work: you would need MapStruct to return a Mono that builds the object itself, but that doesn't make sense for a mapping library, which only does blocking work.
You could try using two Monos/mappers, one for each DB call, then Mono.zip(dbCall1, dbCall2) and set the mapped output of one call into the other object's field:
var call1 = Mono.fromFuture(() -> db.loadObject1()).map(o -> mapper1.map(o));
var call2 = Mono.fromFuture(() -> db.loadObject2()).map(o -> mapper2.map(o));

Mono.zip(call1, call2)
    .map(t -> {
        var o1 = t.getT1();
        var o2 = t.getT2();
        o1.setField(o2);
        return o1;
    });
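As an aside, in the asker's scenario the second lookup depends on the first result, so a dependent composition (flatMap in Reactor terms) fits more naturally than zip. Below is a runnable sketch of that shape using plain CompletableFuture instead of Reactor, so it needs no extra dependency; all types and values are invented for illustration and are not the project's actual classes:

```java
import java.util.concurrent.CompletableFuture;

// Illustrative types; not the actual project classes.
record FirmwareType(String id, String label) {}
record Device(String name, String firmwareTypeId) {}
record DeviceDTO(String name, FirmwareType firmwareType) {}

class DependentLookupSketch {

    static CompletableFuture<Device> loadDevice() {
        return CompletableFuture.supplyAsync(() -> new Device("sensor-1", "fw-42"));
    }

    static CompletableFuture<FirmwareType> loadFirmwareType(String id) {
        return CompletableFuture.supplyAsync(() -> new FirmwareType(id, "stable"));
    }

    // Load the device, then resolve its firmware type, then assemble the DTO.
    // Nothing blocks until the single join() at the edge of the pipeline,
    // mirroring how a Reactor chain would defer work until subscription.
    static DeviceDTO assemble() {
        return loadDevice()
                .thenCompose(device -> loadFirmwareType(device.firmwareTypeId())
                        .thenApply(fw -> new DeviceDTO(device.name(), fw)))
                .join();
    }
}
```

The point is the shape: keep the whole lookup chain asynchronous and resolve it once at the boundary, rather than calling block() deep inside a mapper.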
I have a child collection:
public class SectorDto {
    List<RentalObjectDto> rentalObjects;
}

@Entity
public class Sector {
    List<RentalObject> rentalObjects;
}
and before mapping I have a Sector with its child collection loaded from the DB. I then use @MappingTarget to update the existing bean:
Sector map(SectorDto sectorDto, @MappingTarget Sector sectorDb);
I need the same possibility for the child elements, but the generated code doesn't use this method with @MappingTarget:
RentalObject map(RentalObjectDto rentalObjectDto, @MappingTarget RentalObject rentalObjectDb);
The same happens when I add a method for the list with @MappingTarget: MapStruct still generates its own method without @MappingTarget for the collection members.
List<RentalObject> mapRentalObjects(List<RentalObjectDto> rentalObjectDtos,
                                    @MappingTarget List<RentalObject> rentalObjectsDb);
My intention is to update collection members that already exist in the DB, add new ones, and remove orphans, because the child entity class (like the parent) has additional properties which aren't in the DTO class, and it is important not to lose them. I ended up with a complicated default method which works in my case:
default List<RentalObject> mapRentalObjects(List<RentalObjectDto> rentalObjectDtos,
                                            @MappingTarget List<RentalObject> rentalObjectsDb) {
    if (rentalObjectDtos == null) {
        return null;
    }
    List<RentalObject> rentalObjectsDbForRemove = new ArrayList<>();
    // update existing members, collecting unmatched entities as orphans
    for (RentalObject rentalObjectDb : rentalObjectsDb) {
        Optional<RentalObjectDto> rentalObjectDto = rentalObjectDtos.stream()
                .filter(ro -> ro.getId() != null)
                .filter(ro -> ro.getId().equals(rentalObjectDb.getId()))
                .findFirst();
        if (rentalObjectDto.isPresent()) {
            map(rentalObjectDto.get(), rentalObjectDb);
        } else {
            rentalObjectsDbForRemove.add(rentalObjectDb);
        }
    }
    // remove orphans
    rentalObjectsDb.removeAll(rentalObjectsDbForRemove);
    // add new members (DTOs without an id)
    rentalObjectDtos.stream()
            .filter(roDto -> roDto.getId() == null)
            .map(roDto -> map(roDto, new RentalObject()))
            .forEach(rentalObjectsDb::add);
    return rentalObjectsDb;
}
Does MapStruct have an easier solution for this case?
Updating collections is a tricky problem and requires a complex solution, as you can see in your own example.
MapStruct therefore does not provide a way to achieve such a complex scenario out of the box.
Writing a custom method like you did and using it for the mapping is currently the only way.
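That said, the update/remove/add bookkeeping can be factored into a small generic helper outside MapStruct and reused for every child collection. This is a hand-written sketch, not a MapStruct feature; it matches by an extracted key and, for brevity, ignores the null-id special case from the question:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;
import java.util.function.Function;

class CollectionMerger {

    // Updates target in place: entries matched by key are updated from the
    // source, unmatched target entries are removed (orphans), and unmatched
    // source entries are added via the create function.
    static <S, T, K> void merge(List<S> source, List<T> target,
                                Function<S, K> sourceKey, Function<T, K> targetKey,
                                BiConsumer<S, T> update, Function<S, T> create) {
        Map<K, S> pending = new LinkedHashMap<>();
        for (S s : source) {
            pending.put(sourceKey.apply(s), s);
        }
        Iterator<T> it = target.iterator();
        while (it.hasNext()) {
            T t = it.next();
            S match = pending.remove(targetKey.apply(t));
            if (match != null) {
                update.accept(match, t); // existing member: update from DTO
            } else {
                it.remove();             // orphan: remove
            }
        }
        for (S s : pending.values()) {
            target.add(create.apply(s)); // new member: add
        }
    }
}
```

In the question's terms, source would be the DTO list, target the entity list, the key functions getId(), update the @MappingTarget mapper, and create a mapping onto a fresh RentalObject.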
I have an endpoint, let's call it "GetPersonInfo". GetPersonInfo is given a few parameters but one of them is "PersonType". Based on this PersonType, multiple downstream services are called. Some of these services could be shared between PersonType's but that is not a guarantee.
For example GetPersonInfo(...) #1:
PersonType = "Adult"
When GetPersonInfo is called for Adult, the API endpoint would need to make two downstream calls and populate the payload model with results:
"GetPersonName()" and "GetFavoriteAlcoholicBeverage()"
For example GetPersonInfo(...) #2:
PersonType = "Child"
When GetPersonInfo is called for Child, the API endpoint would need to make two downstream calls and populate the payload model with the results:
"GetPersonName()" and "GetFavoriteToy()"
For example GetPersonInfo(...) #3:
PersonType = "NamelessPerson"
When GetPersonInfo is called for NamelessPerson, the API endpoint would need to make one downstream call:
"GetPersonIdNumber()"
Each of these calls would be populating the same model PersonInfo but all of the fields are nullable in case the downstream call wasn't required for that person type.
Is there a pattern where I can achieve this without duplicating the common downstream calls in every single per-PersonType implementation of getting the person info?
Below is the initial call
public PersonInfo getPersonInfo(int id, PersonType personType) {
    // logic here based on personType to call the necessary downstreams and populate the PersonInfo model
}
Well, the poor man's approach would be an if-cascade. :-)
But thinking in design patterns, this clearly looks like the strategy pattern. Each of your downstreams would define a strategy, and each PersonType would trigger one or more strategies. We will get to this many-to-many relationship later on.
Let's start with a strategy interface...
public interface PersonStrategy {
    void enrichPerson(String id, PersonInfo result);
}
... and let's have some downstreams implemented as strategies:
public class DefaultStrategy implements PersonStrategy {
    @Override
    public void enrichPerson(String id, PersonInfo result) {
        // fetch basic person data ...
    }
}

public class FavoriteToyStrategy implements PersonStrategy {
    @Override
    public void enrichPerson(String id, PersonInfo result) {
        // fetch toys ...
    }
}

public class FavoriteAlcoholicBeverageStrategy implements PersonStrategy {
    @Override
    public void enrichPerson(String id, PersonInfo result) {
        // fetch beverages ...
    }
}
Once this is provided, your initial method would look like this:
private final Map<PersonType, List<PersonStrategy>> strategies = new LinkedHashMap<>();

public PersonInfo getPersonInfo(String id, PersonType type) {
    final PersonInfo result = new PersonInfo();
    strategies.get(type).forEach(strategy -> strategy.enrichPerson(id, result));
    return result;
}
As you see, I implemented the many-to-many dependency in a multi-value map with the type as key. It's not yet populated, I think you can imagine how it works.
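To make the wiring concrete, here is a self-contained sketch of how the map could be populated. The strategies are reduced to lambdas and PersonInfo to a list of filled field names, purely so the registration and dispatch are visible; the enum values and field names are made up for illustration:

```java
import java.util.ArrayList;
import java.util.EnumMap;
import java.util.List;
import java.util.Map;

enum PersonType { ADULT, CHILD, NAMELESS_PERSON }

class PersonInfo {
    final List<String> filledFields = new ArrayList<>();
}

interface PersonStrategy {
    void enrichPerson(String id, PersonInfo result);
}

class PersonService {

    private final Map<PersonType, List<PersonStrategy>> strategies = new EnumMap<>(PersonType.class);

    PersonService() {
        // Each downstream call is represented by a lambda strategy here.
        PersonStrategy name = (id, r) -> r.filledFields.add("name");
        PersonStrategy beverage = (id, r) -> r.filledFields.add("favoriteAlcoholicBeverage");
        PersonStrategy toy = (id, r) -> r.filledFields.add("favoriteToy");
        PersonStrategy idNumber = (id, r) -> r.filledFields.add("personIdNumber");

        // The many-to-many relationship: one list of strategies per type.
        strategies.put(PersonType.ADULT, List.of(name, beverage));
        strategies.put(PersonType.CHILD, List.of(name, toy));
        strategies.put(PersonType.NAMELESS_PERSON, List.of(idNumber));
    }

    PersonInfo getPersonInfo(String id, PersonType type) {
        PersonInfo result = new PersonInfo();
        strategies.getOrDefault(type, List.of())
                .forEach(strategy -> strategy.enrichPerson(id, result));
        return result;
    }
}
```

In a Spring application, the registration could instead happen in a constructor or @PostConstruct method of the service, with the strategies injected as beans.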
Other possibilities:
- have a factory that returns the strategies belonging to a specific type;
- have each strategy decide if it reacts to a given type, or provide a list of the types it belongs to.
As you see: not a single if statement is needed here.
I am working on a project which provides a list of operations to be done on an entity, where each operation is an API call to the backend. Let's say the entity is a file, and the operations are convert, edit, and copy. There are definitely easier ways of doing this, but I am interested in an approach which lets me chain these operations, similar to intermediate operations on Java streams, and then, when I hit a terminal operation, decides which API calls to execute and performs any optimisation that might be needed. My API calls depend on the results of other operations. I was thinking of creating an interface:
interface Operation {
    Operation copy(Params... params);    // intermediate
    Operation convert(Params... params); // intermediate
    Operation edit(Params... params);    // intermediate
    FinalResult execute();               // terminal op
}
Now each of these functions might impact the others, depending on the sequence in which the pipeline is built. My high-level approach would be to just save the operation name and params inside the individual implementations of the operation methods, and use that to decide and optimise whatever I'd like in the execute method. That feels like bad practice, since I am technically doing nothing inside the operation methods, and this looks more like a builder pattern while not exactly being one. I'd like to hear your thoughts on my approach. Is there a better design for building operation pipelines in Java?
Apologies if the question appears vague; I am basically looking for a way to build an operation pipeline in Java while getting my approach reviewed.
You should look at a pattern such as
EntityHandler.of(remoteApi, entity)
    .copy()
    .convert(...)
    .get();
public class EntityHandler {

    private final CurrentResult result = new CurrentResult();
    private final RemoteApi remoteApi;

    private EntityHandler(
            final RemoteApi remoteApi,
            final Entity entity) {
        this.remoteApi = remoteApi;
        this.result.setEntity(entity);
    }

    public EntityHandler copy() {
        this.result.setEntity(new Entity(this.result.getEntity())); // copy constructor
        return this;
    }

    public EntityHandler convert(final EntityType type) {
        if (this.result.isErrored()) {
            throw new InvalidEntityException("...");
        }
        if (type == EntityType.PRIMARY) {
            this.result.setEntity(remoteApi.convertToSecondary(this.result.getEntity()));
        } else {
            ...
        }
        return this;
    }

    public Entity get() {
        return result.getEntity();
    }

    public static EntityHandler of(
            final RemoteApi remoteApi,
            final Entity entity) {
        return new EntityHandler(remoteApi, entity);
    }
}
The key is to keep the state immutable from the caller's point of view and to handle thread-safety in localized places, such as CurrentResult in this case.
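Alternatively, staying closer to the asker's original "record now, execute later" idea: the fluent calls can simply queue steps, and the terminal operation inspects and runs the queue. A minimal generic sketch (all names invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Each fluent call records a step; nothing runs until execute(), which is
// where cross-step optimisation could inspect and rewrite the queue.
class Pipeline<T> {

    private final T initial;
    private final List<UnaryOperator<T>> steps = new ArrayList<>();

    private Pipeline(T initial) {
        this.initial = initial;
    }

    static <T> Pipeline<T> of(T initial) {
        return new Pipeline<>(initial);
    }

    Pipeline<T> then(UnaryOperator<T> step) {
        steps.add(step);
        return this; // fluent chaining, like an intermediate stream operation
    }

    T execute() {
        // A real implementation could merge or reorder steps here before running.
        T result = initial;
        for (UnaryOperator<T> step : steps) {
            result = step.apply(result);
        }
        return result;
    }
}
```

Usage would look like Pipeline.of("file").then(s -> s + ":converted").then(s -> s + ":copied").execute(). Recording steps as data rather than executing them immediately is exactly what makes terminal-time optimisation possible, so the "doing nothing inside the operation methods" feeling is not necessarily bad practice.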
I am working on a project using ISIS 1.16.2. I have a superclass, called ConfigurationItem, which has some common properties (name, createdTimestamp etc.).
For example, it has a delete action method, annotated with @Action(invokeOn = InvokeOn.OBJECT_AND_COLLECTION, ...), which I need to be callable from an entity's detail view as well as from collection views with selection boxes.
Example:
public class ConfigurationItem {

    @Action(
            invokeOn = InvokeOn.OBJECT_AND_COLLECTION,
            semantics = SemanticsOf.NON_IDEMPOTENT_ARE_YOU_SURE,
            domainEvent = DeletedDomainEvent.class)
    public Object delete() {
        repositoryService.remove(this);
        return null;
    }

    // ...
}

public class ConfigurationItems {

    @Action(semantics = SemanticsOf.SAFE)
    public List<T> listAll() {
        return repositoryService.allInstances(<item-subclass>.class);
    }

    // ...
}
This works pretty well, but the invokeOn attribute is now deprecated. The JavaDoc says that one should switch to @Action(associateWith = "..."), but I don't know how to transfer the semantics of InvokeOn, since I have no collection field to reference.
Instead, I only have the collection of objects returned by the database retrieval action.
My question is: how do I transfer the deprecated @Action(invokeOn = ...) semantics to the new @Action(associateWith = "...") concept for collection return values with no backing property field?
Thanks in advance!
Good question, this obviously isn't explained well enough in the Apache Isis documentation.
The @Action(invokeOn = InvokeOn.OBJECT_AND_COLLECTION) feature has always been a bit of a kludge, because it involves invoking an action against a standalone collection (which is to say, the list of objects returned from a previous query). We don't like this because there is no single object to invoke the action on.
When we implemented that feature, the support for view models was nowhere near as comprehensive as it now is. So, our recommendation now is, rather than returning a bare standalone collection, instead wrap it in a view model which holds the collection.
The view model then gives us a single target to invoke some behaviour on; the idea being that it is the responsibility of the view model to iterate over all selected items and invoke an action on them.
With your code, we can introduce SomeConfigItems as the view model:
@XmlRootElement(name = "configItems")
public class SomeConfigItems {

    @lombok.Getter @lombok.Setter
    private List<ConfigurationItem> items = new ArrayList<>();

    @Action(
            associateWith = "items", // associates with the items collection
            semantics = SemanticsOf.NON_IDEMPOTENT_ARE_YOU_SURE,
            domainEvent = DeletedDomainEvent.class)
    public SomeConfigItems delete(List<ConfigurationItem> items) {
        for (ConfigurationItem item : items) {
            repositoryService.remove(item);
        }
        return this;
    }

    // optionally, select all items for deletion by default
    public List<ConfigurationItem> default0Delete() { return getItems(); }

    // I don't *think* that a choices method is required, but if present it
    // provides the potential list of items for the argument
    //public List<ConfigurationItem> choices0Delete() { return getItems(); }
}
and then change the ConfigurationItems action to return this view model:
public class ConfigurationItems {

    @Action(semantics = SemanticsOf.SAFE)
    public SomeConfigItems listAll() {
        List<ConfigurationItem> items = repositoryService.allInstances(<item-subclass>.class);
        SomeConfigItems viewModel = new SomeConfigItems();
        viewModel.setItems(items);
        return viewModel;
    }
}
Now that you have a view model to represent the output, you'll probably find other things you can do with it.
Hope that makes sense!