I would like to use the same code to sort and manipulate objects on both the client and server sides.
The problem is that on the client we need a proxy interface representing the server class.
Is there a way to use the same interface on both sides? I know RF has a mechanism to copy bean attributes from the server instance to the client instance when it is sent over the wire.
One way to use the same API is to use interfaces that both your proxies extend and your domain objects implement.
// common interfaces
interface Foo { … }
interface Bar<T extends Foo> {
int getX();
void setX(int x);
// setters need to use generics
List<T> getFoos();
void setFoos(List<T> foos);
// with only a getter, things get easier:
Bar getParent();
}
// domain objects
class RealFoo implements Foo { … }
class RealBar implements Bar<RealFoo> {
int x;
List<RealFoo> foos;
RealBar parent;
@Override
public RealBar getParent() { return parent; }
// other getters and setters
}
// proxy interfaces
@ProxyFor(RealFoo.class)
interface FooProxy extends Foo { … }
@ProxyFor(RealBar.class)
interface BarProxy extends Bar<FooProxy> {
@Override
BarProxy getParent();
// other getters and setters
}
You can then use a Comparator<Foo> or Comparator<Bar> on both the client and server side.
I generally only implement traits (aspects, facets, call them what you like) that way though (HasId, HasLabel, HasPosition, etc.), not complete domain objects' APIs. I can then use HasId to get the key of any object, to put objects in a map or compare them for equality; HasLabel for displays (custom Cells on the client side, error messages on the server side that are sent to the client, etc.); HasPosition for sorting; and so on.
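For instance, a shared trait and a comparator over it could look like the following sketch (HasPosition and PositionComparator are illustrative names, not from the original code):
import java.util.Comparator;

interface HasPosition {
    int getPosition();
}

// usable on both sides: the server's RealBar and the client's BarProxy can
// sort their Foos with the same comparator, provided Foo extends HasPosition
class PositionComparator implements Comparator<HasPosition> {
    @Override
    public int compare(HasPosition a, HasPosition b) {
        return Integer.compare(a.getPosition(), b.getPosition());
    }
}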
As Thomas says in his answer, the only way in current GWT to have shared code on the client and server is to implement the same interface on both sides and use it in your shared code.
Since RF copies attributes from the server to the client as you say in your question, in theory we could use the same interface (the proxy one) on both sides (simpler code), setting the @ProxyFor value to point to itself.
Let's see an example:
// Shared interface in client and server sides
@ProxyFor(Foo.class)
interface Foo extends ValueProxy {
String getBar();
}
// Server side implementation
class FooImpl implements Foo {
public String getBar() { return "bar"; }
}
For the record, we use this approach in our product so that we can sell two backend solutions (one based on GAE and the other on CouchDB).
The code above works for client code which does not create new values, but if you want to create them, it is enough to define a value locator:
// Tell RF which locator to use to create instances on the server side
@ProxyFor(value = Foo.class, locator = ALocator.class)
interface Foo extends ValueProxy {
}
public class ALocator extends Locator<Foo, String> {
public Foo create(Class<? extends Foo> clazz) {
return new FooImpl();
}
...
}
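For completeness, client code could then create such values through a request context; a fragment, assuming a hypothetical MyRequestContext accessor on your RequestFactory:
// RequestContext.create() asks RF for a new proxy instance; with the
// locator above, the server-side instance comes from ALocator.create()
MyRequestContext ctx = requestFactory.myContext();
Foo foo = ctx.create(Foo.class);
// ... set properties on foo and fire the request as usual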
Unfortunately, RF does not deal with interfaces on the server side; see issues 7509 and 5762.
But, as you can read in the issue comments, there is already a fix for this (pending review). Hopefully it will be included in an upcoming release of GWT.
In the meantime, you can use this approach, just copying the file ResolverServiceLayer.java into your src folder and applying this patch to it.
The point of RequestFactory is that it does not use the same type. Each request context describes a set of operations to perform when the call gets to the server (create and find things, then apply setters, then run service methods). As calls are described as just proxies to the real thing on the server, you need a 'fake' model object like an EntityProxy or ValueProxy to ensure that the only calls that can be made are getters and setters - and that sometimes, setters are not allowed (when an object has been read from the server but before it has been edited).
If your models are simple, i.e. not holding other objects but only Strings, Dates, and primitives, you can have both the entity and the proxy implement the same interface. However, if the model holds sub-objects, this is more difficult - the only way possible is to leave out those getters and setters, as you can't override those methods in the proxy type to specify the proxy version of that nested object.
Consider using RPC instead if you actually want to reuse the same types on the client and server.
I'm kind of new to DDD and even after reading the blue and red books I still have some questions about how to turn some principles into code, specifically using Kotlin and Java.
For example, I identified a Client aggregate root that receives some parameters needed for its creation, like Name and Address:
class Client : AggregateRoot {
var clientId: ClientId
var name: Name
var address: Address
constructor(name: Name, address: Address) : super() {
// validations ....
this.name = name
this.address = address
}
}
Easy part:
To create a new Client I receive a DTO inside the RS service and try to create a new Client instance, passing the parameters above; if everything is solid and all rules are fulfilled I send the new instance of Client to the repository, pretty straightforward.
clientRepository.store(client)
Other part:
I need to fetch my Client to change the address, so I send the id to the repository, find the Client in the database, convert the database entity to the aggregate root, and return it to the caller.
override fun getById(id: Long): Client {
val clientEntity = em.find(...)
val client = Client(...) // but I need another constructor with ClientId
return client
}
Then I will need a new constructor, one that receives more parameters, like the ClientId:
constructor(clientId: ClientId, name: Name, address: Address) : super() {
The problem is that every service can call this new constructor and create an incorrect instance of my aggregate root, so my questions are:
Is there a way to hide the complete constructor so that only the repository or specific layers can see it, like internal in C#?
Is there any solution in Java or Kotlin to avoid exposing this constructor, which should be used only in tests and integrations?
Another example: suppose I didn't need the address to be passed every time a client is created, but only later via another method like:
client.addAddress(address)
But in both cases I will need to populate the entire Client from the database, so I will need a second constructor with the address parameter.
So, the problem is how to rehydrate an Aggregate from persistence without breaking its encapsulation by exposing the wrong interface to the client code (i.e. the Application layer or the Presentation layer).
I see two solutions to this:
Use reflection to populate the fields. This is the solution that most ORMs use and it is also the most generic. It works for most persistence types, even when there is an impedance mismatch. Some ORMs require annotating fields or relations.
Expose a different interface to the client code. This means that your Aggregate implementation is larger than the interface and contains additional initialization methods used only by the infrastructure.
As an example, in pseudo-code you could have:
// what you want the upper layers to see
interface Client {
void addAddress(address);
}
// the actual implementations
public class ClientAggregate implements Client
{
void rehydrate(clientId, name, address){...}
void addAddress(address){...}
}
public class ClientRepository
{
// this method returns Client (interface)
Client getById(id){
val clientEntity = em.find(...)
val client = new ClientAggregate()
client.rehydrate(clientEntity.id, clientEntity.name, clientEntity.address)
return client // you are returning ClientAggregate but callers see only Client (the interface)
}
}
As a side note, I don't expose a constructor that creates an Aggregate from the Domain point of view. I like to have empty constructors and a dedicated method, named from the Ubiquitous Language, that creates the Aggregate. The reason is that it is not clear that a constructor creates a new Aggregate: the constructor instantiates a new instance of a class, which is more an implementation detail than a domain concern. An example:
class Client {
constructor() { /* some internal initializations, if needed */ }
fun register(name) { ... }
}
I have read several articles and Stackoverflow posts for converting domain objects to DTOs and tried them out in my code. When it comes to testing and scalability I am always facing some issues. I know the following three possible solutions for converting domain objects to DTOs. Most of the time I am using Spring.
Solution 1: Private method in the service layer for converting
The first possible solution is to create a small "helper" method in the service layer code which converts the retrieved database object to my DTO object.
@Service
public class MyEntityService {
public SomeDto getEntityById(Long id){
SomeEntity dbResult = someDao.findById(id);
SomeDto dtoResult = convert(dbResult);
// ... more logic happens
return dtoResult;
}
private SomeDto convert(SomeEntity entity){
//... Object creation and using getter/setter for converting
}
}
Pros:
easy to implement
no additional class for conversion needed -> project doesn't blow up with entities
Cons:
problems when testing, as new SomeEntity() is used in the private method; if the object is deeply nested I have to provide an adequate result for my when(someDao.findById(id)).thenReturn(alsoDeeplyNestedObject) mock to avoid NullPointerExceptions when the conversion also dissolves the nested structure
Solution 2: Additional constructor in the DTO for converting domain entity to DTO
My second solution would be to add an additional constructor to my DTO entity to convert the object in the constructor.
public class SomeDto {
// ... some attributes
public SomeDto(SomeEntity entity) {
this.attribute = entity.getAttribute();
// ... nested conversion & conversion of lists and arrays
}
}
Pros:
no additional class for converting needed
conversion hidden in the DTO entity -> service code is smaller
Cons:
usage of new SomeDto() in the service code, and therefore I have to provide the correct nested object structure as a result of my someDao mocking.
Solution 3: Using Spring's Converter or any other externalized Bean for this converting
I recently saw that Spring offers an interface for conversion purposes, Converter<S, T>, but this solution stands for any externalized class which does the conversion. With this solution I inject the converter into my service code and call it when I want to convert the domain entity to my DTO.
Pros:
easy to test as I can mock the result during my test case
separation of tasks -> a dedicated class is doing the job
Cons:
doesn't "scale" that much as my domain model grows. With a lot of entities I have to create two converters for every new entity (-> converting DTO entitiy and entitiy to DTO)
Do you have more solutions for my problem and how do you handle it? Do you create a new Converter for every new domain object and can "live" with the amount of classes in the project?
Thanks in advance!
Solution 1: Private method in the service layer for converting
I guess Solution 1 will not work well, because your DTOs are domain-oriented and not service-oriented. Thus they will likely be used in different services. So a mapping method does not belong to one service and therefore should not be implemented in one service. How would you re-use the mapping method in another service?
The first solution would work well if you used dedicated DTOs per service method. But more about this at the end.
Solution 2: Additional constructor in the DTO for converting domain entity to DTO
In general a good option, because you can see the DTO as an adapter to the entity. In other words: the DTO is another representation of an entity. Such designs often wrap the source object and provide methods that give you another view on the wrapped object.
But a DTO is a data transfer object, so it might be serialized sooner or later and sent over a network, e.g. using Spring's remoting capabilities. In this case the client that receives this DTO must deserialize it and thus needs the entity classes in its classpath, even if it only uses the DTO's interface.
Solution 3: Using Spring's Converter or any other externalized Bean for this converting
Solution 3 is the solution that I also would prefer. But I would create a Mapper<S,T> interface that is responsible for mapping from source to target and vice versa. E.g.
public interface Mapper<S, T> {
// distinct names, because map(S) and map(T) would have the same erasure
T mapToTarget(S source);
S mapToSource(T target);
}
The implementation can be done using a mapping framework like ModelMapper.
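For illustration, a ModelMapper-backed implementation of this Mapper could look like the following sketch (SomeEntity and SomeDto are the classes from the question; ModelMapper's map(Object, Class) does the actual field matching):
import org.modelmapper.ModelMapper;

public class SomeEntityMapper implements Mapper<SomeEntity, SomeDto> {

    private final ModelMapper modelMapper = new ModelMapper();

    @Override
    public SomeDto mapToTarget(SomeEntity source) {
        return modelMapper.map(source, SomeDto.class);
    }

    @Override
    public SomeEntity mapToSource(SomeDto target) {
        return modelMapper.map(target, SomeEntity.class);
    }
}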
You also said that a converter for each entity
doesn't "scale" that much as my domain model grows. With a lot of entities I have to create two converters for every new entity (-> converting DTO entitiy and entitiy to DTO)
I doubt that you only have to create two converters or one mapper per DTO, because your DTO is domain-oriented.
As soon as you start to use it in another service you will recognize that the other service usually should not or cannot return all the values that the first service does.
You will start to implement another mapper or converter for each other service.
This answer would get too long if I started with the pros and cons of dedicated or shared DTOs, so I can only ask you to read my blog post about the pros and cons of service layer designs.
EDIT
About the third solution: where do you prefer to put the call for the mapper?
In the layer above the use cases. DTOs are data transfer objects, because they pack data in data structures that are best for the transfer protocol. Thus I call that layer the transport layer.
This layer is responsible for mapping use case's request and result objects from and to the transport representation, e.g. json data structures.
EDIT
I see you're ok with passing an entity as a DTO constructor parameter. Would you also be ok with the opposite? I mean, passing a DTO as an Entity constructor parameter?
A good question. The opposite would not be ok for me, because I would then introduce a dependency from the entity to the transport layer. This would mean that a change in the transport layer could impact the entities, and I don't want changes in more detailed layers to impact more abstract layers.
If you need to pass data from the transport layer to the entity layer you should apply the dependency inversion principle.
Introduce an interface that will return the data through a set of getters, let the DTO implement it, and use this interface in the entity's constructor. Keep in mind that this interface belongs to the entity's layer and thus should not have any dependencies on the transport layer.
interface
+-----+ implements || +------------+ uses +--------+
| DTO | ---------------||-> | EntityData | <---- | Entity |
+-----+ || +------------+ +--------+
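A small sketch of this arrangement in Java (the fields and getter names are assumptions for illustration):
// belongs to the entity's layer: no dependency on the transport layer
interface EntityData {
    String getName();
    String getAddress();
}

// transport-layer DTO implements the entity-layer interface
class ClientDto implements EntityData {
    private String name;
    private String address;
    @Override public String getName() { return name; }
    @Override public String getAddress() { return address; }
}

// the entity depends only on the abstraction it owns
class Client {
    private final String name;
    private final String address;
    Client(EntityData data) {
        this.name = data.getName();
        this.address = data.getAddress();
    }
}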
I like the third solution from the accepted answer.
Solution 3: Using Spring's Converter or any other externalized Bean for this converting
And I create DtoConverter in this way:
BaseEntity marker class:
public abstract class BaseEntity implements Serializable {
}
AbstractDto marker class:
public class AbstractDto {
}
GenericConverter interface:
public interface GenericConverter<D extends AbstractDto, E extends BaseEntity> {
E createFrom(D dto);
D createFrom(E entity);
E updateEntity(E entity, D dto);
default List<D> createFromEntities(final Collection<E> entities) {
return entities.stream()
.map(this::createFrom)
.collect(Collectors.toList());
}
default List<E> createFromDtos(final Collection<D> dtos) {
return dtos.stream()
.map(this::createFrom)
.collect(Collectors.toList());
}
}
CommentConverter interface:
public interface CommentConverter extends GenericConverter<CommentDto, CommentEntity> {
}
CommentConverter class implementation:
@Component
public class CommentConverterImpl implements CommentConverter {
@Override
public CommentEntity createFrom(CommentDto dto) {
CommentEntity entity = new CommentEntity();
updateEntity(entity, dto);
return entity;
}
@Override
public CommentDto createFrom(CommentEntity entity) {
CommentDto dto = new CommentDto();
if (entity != null) {
dto.setAuthor(entity.getAuthor());
dto.setCommentId(entity.getCommentId());
dto.setCommentData(entity.getCommentData());
dto.setCommentDate(entity.getCommentDate());
dto.setNew(entity.getNew());
}
return dto;
}
@Override
public CommentEntity updateEntity(CommentEntity entity, CommentDto dto) {
if (entity != null && dto != null) {
entity.setCommentData(dto.getCommentData());
entity.setAuthor(dto.getAuthor());
}
return entity;
}
}
I ended up NOT using some magical mapping library or external converter class, but just adding a small bean of my own which has convert methods from each entity to each DTO I need. The reason is that the mapping was:
either stupidly simple and I would just copy some values from one field to another, perhaps with a small utility method,
or quite complex, and it would be more complicated to express as custom parameters to some generic mapping library than to just write that code out. This is for example the case where the client can send JSON, but under the hood this is transformed into entities, and when the client retrieves the parent object of these entities again, it's converted back into JSON.
This means I can just call .map(converter::convert) on any collection of entities to get back a stream of my DTOs.
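A minimal sketch of such a hand-written converter bean, reusing the CommentEntity/CommentDto names from the earlier answer purely for illustration:
import java.util.List;
import java.util.stream.Collectors;
import org.springframework.stereotype.Component;

@Component
public class CommentDtoConverter {

    // the stupidly simple case: copy a few fields by hand
    public CommentDto convert(CommentEntity entity) {
        CommentDto dto = new CommentDto();
        dto.setAuthor(entity.getAuthor());
        dto.setCommentData(entity.getCommentData());
        return dto;
    }

    // collections stay a one-liner at the call site:
    // entities.stream().map(converter::convert).collect(Collectors.toList())
    public List<CommentDto> convertAll(List<CommentEntity> entities) {
        return entities.stream().map(this::convert).collect(Collectors.toList());
    }
}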
Is it scalable to have it all in one class? Well the custom configuration for this mapping would have to be stored somewhere even if using a generic mapper. The code is generally extremely simple, except for a handful of cases, so I'm not too worried about this class exploding in complexity. I'm also not expecting to have dozens more entities, but if I did I might group these converters in a class per subdomain.
Adding a base class to my entities and DTOs so I can write a generic converter interface and implement it per class just isn't needed (yet?) for me either.
In my opinion the third solution is the best one. Yes, for each entity you'll have to create two new converter classes, but when it comes time for testing you won't have a lot of headaches. You should never choose the solution which makes you write less code at the beginning and then much more when it comes to testing and maintaining that code.
Another point: if you use the second approach and your entity has lazy dependencies, your DTO can't tell whether a dependency is loaded unless you inject the EntityManager into the DTO and use it to check whether the dependency was loaded. I don't like this approach, because a DTO shouldn't know anything about the EntityManager. As a solution I personally prefer converters, but at the same time I prefer to have multiple DTO classes for the same entity. For example, if I am 100% sure that the User entity will be loaded without its corresponding Company, then there has to be a UserDto that doesn't have a CompanyDto field. At the same time, if I know that the User entity will be loaded with its correlated Company, then I use an aggregate, something like a UserCompanyDto class that contains a UserDto and a CompanyDto as fields.
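A sketch of that last idea, with illustrative fields:
// DTO used when User is loaded without its Company
class UserDto {
    private Long id;
    private String name;
    // getters/setters omitted
}

class CompanyDto {
    private Long id;
    private String name;
    // getters/setters omitted
}

// aggregate DTO used when User is loaded together with its Company
class UserCompanyDto {
    private UserDto user;
    private CompanyDto company;
    // getters/setters omitted
}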
On my side I prefer using option 3 with a third-party library such as ModelMapper or MapStruct. I also use it through an interface in a util package, because I don't want any external tool or library to interact directly with my code.
Definition:
public interface MapperWrapper {
<T> T performMapping(Object source, Class<T> destination);
}
@Component
public class ModelMapperWrapper implements MapperWrapper {
private ModelMapper mapper;
public ModelMapperWrapper() {
this.mapper = new ModelMapper();
}
@Override
public <T> T performMapping(Object source, Class<T> destination) {
mapper.getConfiguration()
.setMatchingStrategy(MatchingStrategies.STRICT);
return mapper.map(source, destination);
}
}
Then I can test it easily:
Testing:
@SpringJUnitWebConfig(TestApplicationConfig.class)
class ModelMapperWrapperTest implements WithAssertions {
private final MapperWrapper mapperWrapper;
@Autowired
public ModelMapperWrapperTest(MapperWrapper mapperWrapper) {
this.mapperWrapper = mapperWrapper;
}
@BeforeEach
void setUp() {
}
@Test
void givenModel_whenMapModelToDto_thenReturnsDto() {
var model = new DummyModel();
model.setId(1);
model.setName("DUMMY_NAME");
model.setAge(25);
var modelDto = mapperWrapper.performMapping(model, DummyModelDto.class);
assertAll(
() -> assertThat(modelDto.getId()).isEqualTo(String.valueOf(model.getId())),
() -> assertThat(modelDto.getName()).isEqualTo(model.getName()),
() -> assertThat(modelDto.getAge()).isEqualTo(String.valueOf(model.getAge()))
);
}
@Test
void givenDto_whenMapDtoToModel_thenReturnsModel() {
var modelDto = new DummyModelDto();
modelDto.setId("1");
modelDto.setName("DUMMY_NAME");
modelDto.setAge("25");
var model = mapperWrapper.performMapping(modelDto, DummyModel.class);
assertAll(
() -> assertThat(model.getId()).isEqualTo(Integer.valueOf(modelDto.getId())),
() -> assertThat(model.getName()).isEqualTo(modelDto.getName()),
() -> assertThat(model.getAge()).isEqualTo(Integer.valueOf(modelDto.getAge()))
);
}
}
After that it is very easy to switch to another mapper library. I should also have created an abstract factory, or used a strategy pattern.
In the interface I have an abstract method
Server launchInstance(
Instance instance,
String name,
Set<String> network,
String userData)
throws Exception;
Now, in my class that implements the previous interface, I am overriding this method, but I do not need all the parameters, because computing them causes a lot of unnecessary work. In my implementing class I want to do something like:
@Override
Server launchInstance(Instance instance, String name) throws Exception;
How can I drop some of the unnecessary parameters in my implementing class while overriding?
That's not possible with Java.
An interface defines methods that all implementing classes must support, in order to have a unifying API.
One purpose is to be able to exchange implementations.
I see a couple of options:
Add a second method to the interface with fewer parameters.
But this requires, of course, that all implementations support this.
This may therefore not be viable for you.
Implement an additional second interface, which defines the method with two parameters.
if (x instanceof Server2) {
// short-cut: no need to compute network and userData
((Server2) x).launchInstance(instance, name);
} else {
Set<String> network = …;
x.launchInstance(instance, name, network, userData);
}
Simply ignore the additional parameters.
If you desperately need a unified interface and want to avoid computation costs of the additional arguments, wrap the optional arguments of type T using lazy evaluation (e.g. in a Callable<T>). If you do not need the values, simply never call the Callable.
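A sketch of that last option, using Supplier instead of Callable to avoid checked exceptions, and reusing the Server and Instance types from the question (the lazy signature itself is an assumption, not the original interface):
import java.util.Set;
import java.util.function.Supplier;

// hypothetical lazy variant of the interface; implementations that do not
// need network or userData simply never call get(), so those values are
// never computed
interface LazyServerLauncher {
    Server launchInstance(Instance instance,
                          String name,
                          Supplier<Set<String>> network,
                          Supplier<String> userData) throws Exception;
}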
An interface is a common API for a number of classes. By design you don't want interface implementations to change that API.
However, you can omit unused parameters:
@Override
Server launchInstance(Instance instance, String name, Set<String> network, String userData) throws Exception {
return launch(instance, name);
}
private Server launch(Instance instance, String name) throws Exception {
...
}
or provide a Data object:
class Data {
private Instance instance;
private String name;
private Set<String> network;
private String userData;
}
@Override
Server launchInstance(Data data) {
...
}
Also, the interface (read: data transfer) could be simplified using Dependency Injection.
Nope; you can't just break the promise behind the override... you will need to redesign the method signature into something a little more abstract...
I'm currently working on a project that involves creating an abstraction layer. The goal of the project is to support multiple implementations of server software in the event that I might need to switch over to it. The list of features to be abstracted is rather long, so I'm going to want to look into a rather painless way to do it.
Other applications will be able to interact with my project and make calls that will eventually boil down to being passed to the server I'm using.
Herein lies the problem. I haven't much experience in this area and I'm really not sure how to make this not become a sandwich of death. Here's a chain of roughly what it's supposed to look like (and what I'm trying to accomplish).
/*
Software that is dependent on mine
|
Public API layer (called by other software)
|
Abstraction between API and my own internal code (this is the issue)
|
Internal code (this gets replaced per-implementation, as in, each implementation needs its own layer of this, so it's a different package of entirely different classes for each implementation)
|
The software I'm actually using to write this (which is called by the internal code)
*/
The abstraction layer (the one in the very middle, obviously) is what I'm struggling to put together.
Now, I'm only stuck on one silly aspect. How can I possibly make the abstraction layer something that isn't a series of
public void someMethod() {
if (Implementation.getCurrentImplementation() == Implementation.TYPE1) {
// whatever we need to do for this specific implementation
} else {
throw new NotImplementedException();
}
}
(forgive the pseudo-code; also, imagine the same situation but for a switch/case since that's probably better than a chain of if's for each method) for each and every method in each and every abstraction-level class.
This seems very elementary but I can't come up with a logical solution to address this. If I haven't explained my point clearly, please tell me what I need to elaborate on. Maybe I'm thinking about this whole thing wrong?
Why not use inversion of control?
You have your set of abstractions, you create several implementations, and then you configure your public api to use one of the implementations.
Your API is protected by the set of interfaces that the implementations inherit. You can add new implementations later without modifying the API code, and you can switch even at runtime.
I don't know anymore if inversion of control IS dependency injection, or if DI is a form of IoC, but... the point is that you remove the responsibility of dependency management from your component.
Here, you are going to have
API layer (interface that the client uses)
implementations (infinite)
wrapper (that does the IoC by bringing in the impl)
API layer:
// my-api.jar
public interface MyAPI {
String doSomething();
}
public interface MyAPIFactory {
MyAPI getImplementationOfMyAPI();
}
implementations:
// red-my-api.jar
public class RedMyAPI implements MyAPI {
public String doSomething() {
return "red";
}
}
// green-my-api.jar
public class GreenMyAPI implements MyAPI {
public String doSomething() {
return "green";
}
}
// black-my-api.jar
public class BlackMyAPI implements MyAPI {
public String doSomething() {
return "black";
}
}
A wrapper provides a way to configure the right implementation. Here, you can hide your switch/case in the factory, or load the impl from a config.
// wrapper-my-api.jar
public class NotFunnyMyAPIFactory implements MyAPIFactory {
private Config config;
public MyAPI getImplementationOfMyAPI() {
if (config.implType == GREEN) {
return new GreenMyAPI();
} else if (config.implType == BLACK) {
return new BlackMyAPI();
} else if (config.implType == RED) {
return new RedMyAPI();
} else {
// throw...
}
}
}
public class ReflectionMyAPIFactory implements MyAPIFactory {
private Properties prop;
public MyAPI getImplementationOfMyAPI() {
try {
// instantiate the configured class reflectively
return (MyAPI) Class.forName(prop.getProperty("myApi.implementation.className")).getDeclaredConstructor().newInstance();
} catch (ReflectiveOperationException e) {
throw new IllegalStateException(e);
}
}
}
// other possible strategies
The factory allows using several strategies to load the class. Depending on the solution, you only have to add a new dependency and change a configuration (and reload the app... or not) to change the implementation.
You might want to test the performance as well.
If you use Spring, you can use only the interface in your code and inject the right implementation from a configuration class (Spring is a DI container). But there is no need to use Spring; you can do that at the Main entry point directly (you inject as near to your entry point as possible).
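For example, with Spring the wiring could be as small as the following sketch (the configuration class is an assumption):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MyAPIConfig {

    // switch the implementation here (or decide based on a property)
    @Bean
    public MyAPI myAPI() {
        return new GreenMyAPI();
    }
}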
The my-api.jar does not have dependencies (or maybe some towards the internal layers).
All the implementation jars depend on my-api.jar and on your internal code.
The wrapper jar depends on my-api.jar and on some of the impl jars.
So the client loads the jar they want, uses the factory they want or a configuration that injects the impl, and uses your code. It also depends on how you expose your API.
I have an Android application that loads data from web sources and displays it. I organized each API method those sources support into enum constants, split per source. Let's say SourceA provides Sport and Weather data and SourceB provides Stock and News data.
public enum ServiceA implements ServiceMethod {
GET_SPORT(...) {...},
GET_WEATHER(...) {...};
...
}
public enum ServiceB implements ServiceMethod {
GET_STOCK(...) {...},
GET_NEWS(...) {...};
...
}
Each enum class also has private members to deal with the service API (different APIs for different sources).
And a variable of type ServiceMethod:
ServiceMethod method;
... // initializing
String cleanAndFancyData = method.queryData(); // e.g.
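For reference, the shared interface implied by these snippets could look roughly like this (a guess at its shape, since the question elides it):
// assumed shape of the common interface; the question only shows queryData()
public interface ServiceMethod {
    String queryData();
}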
Now I need to store user history, so I have to serialize and deserialize the ServiceMethod variable and also enumerate all constants from all enums implementing this interface. The serialization itself looks simple and is not the issue here.
Now I just manually iterate over enums:
Collection<ServiceMethod> allMethods = new ArrayList<ServiceMethod>();
allMethods.addAll(EnumSet.allOf(ServiceA.class));
allMethods.addAll(EnumSet.allOf(ServiceB.class));
Is there a cleaner, nicer way for this? Or is my approach completely wrong?
Also, any way to ensure that all ServiceMethod have unique names (across all enums)?
You may be able to use the strategy pattern to combine ServiceA and ServiceB into a single Service.
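For the enumeration and unique-name parts of the question, a small helper can centralize the iteration and fail fast on duplicate names; a sketch (the helper class is hypothetical):
import java.util.ArrayList;
import java.util.EnumSet;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

final class ServiceMethods {
    // collects every ServiceMethod constant and fails fast if two
    // constants across different enums share a name
    static List<ServiceMethod> all() {
        List<ServiceMethod> all = new ArrayList<>();
        all.addAll(EnumSet.allOf(ServiceA.class));
        all.addAll(EnumSet.allOf(ServiceB.class));
        Set<String> names = new HashSet<>();
        for (ServiceMethod m : all) {
            String name = ((Enum<?>) m).name();
            if (!names.add(name)) {
                throw new IllegalStateException("Duplicate method name: " + name);
            }
        }
        return all;
    }
}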