What did Martin Fowler mean by "avoid automatic deserialization" in a REST API? - java

Martin Fowler said to avoid automatic deserialization in an API:
I prefer to avoid automatic deserialization altogether. Automatic
deserialization usually falls into the WSDL pitfall of coupling
consumers and producers by duplicating a static class structure in
both.
What does this mean?
Does it mean receiving all information as JSON in each REST service without any "converter" in the middle?
By "converter" I mean some type adapter, like in GsonBuilder.

By automatic deserialization he means that there is a predefined, rigid structure for the JSON object, and that structure is used to reconstruct the object itself.
This is, however, appropriate for most use cases.
Examples of such a predefined structure are a Java class or an XML XSD.
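For illustration, a minimal sketch of what "automatic" deserialization looks like with Gson (the Customer class and its fields are made up for this example):

import com.google.gson.Gson;

// The consumer duplicates the producer's structure as a static class.
class Customer {
    String name;
    int age;
}

public class AutomaticDeserializationExample {
    public static void main(String[] args) {
        String json = "{\"name\":\"Alice\",\"age\":30}";
        // Gson maps the JSON onto the predefined class; any field not declared
        // on Customer is silently dropped, and a rename on the producer side breaks this code.
        Customer customer = new Gson().fromJson(json, Customer.class);
        System.out.println(customer.name + " / " + customer.age);
    }
}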
Automatic deserialization usually falls into the WSDL pitfall of coupling consumers and producers by duplicating a static class structure in both.
What he means here is that using classes for deserialization is the same as using WSDL to serialize or deserialize objects.
In contrast to the rigid structure of classes and XSD documents, JSON is much more relaxed, as it is based on JavaScript, which allows the object definition to be modified at any point in its life cycle.
So the alternative would be to use a combination of HashMap and ArrayList in Java (or to parse the String itself) to deserialize the object; then, even if the server produces something different (like new fields), no change is needed on the client side, and new clients can take advantage of the new fields.
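A minimal sketch of that looser approach, again using Gson (the field names are illustrative):

import com.google.gson.Gson;
import com.google.gson.reflect.TypeToken;
import java.lang.reflect.Type;
import java.util.Map;

public class LooseDeserializationExample {
    public static void main(String[] args) {
        // The producer added "newField"; this client neither knows nor cares.
        String json = "{\"id\":42,\"name\":\"Widget\",\"newField\":\"only newer clients use this\"}";

        Type type = new TypeToken<Map<String, Object>>() {}.getType();
        Map<String, Object> payload = new Gson().fromJson(json, type);

        // Consume only the fields this client understands; unknown fields are simply ignored.
        System.out.println(payload.get("name"));
    }
}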
With a hard structure, the producer and consumer are strongly coupled through the shared model classes, so any change in the producer has to be reflected in the consumer.
In some SOA projects I worked on, we used to add extra fields to all request/response objects for future use, so that clients already running in production did not have to change to accommodate the needs of a new client. These fields had generic names like customParam1 to customParam5, and their meaning was released with the documentation. These names were not intuitive, all because we had coupled the producer and consumer on the shared structure of the models.

Related

Best Practices for incoming and outgoing DTOs

I'm facing an API design issue. Consider the following flow:
As you can see, I have 2 classes to represent my model (SomethingDTO and SomethingResponse) and 2 classes to represent the 3rd party model (3rdPartyRequest and 3rdPartyResponse). I'm using a mapper to provide translation from the 3rd party model classes to my model classes.
The problem is: all these 4 classes have exactly the same attributes.
Should I repeat these attributes through all these classes? Should I have only one DTO class for the whole flow?
What is the best practice (or pattern) to solve this?
Thanks
As I previously answered, using DTOs helps to decouple the persistence models from the API models. So you seem to be doing the right thing.
The problem is: all these 4 classes have exactly the same attributes.
It's always a good idea to decouple the models of your API from the models of a third party API. If the third party API changes their contract, you won't break your clients. So use different models for each API.
Stick to mapping frameworks, such as MapStruct, to reduce the boilerplate code, as in the sketch below. You may also want to consider Lombok to generate getters, setters, equals(), hashCode() and toString() methods for you.
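For instance, a MapStruct mapper for one direction of the translation might look roughly like this (ThirdPartyResponse and SomethingResponse stand in for your actual model classes):

import org.mapstruct.Mapper;
import org.mapstruct.factory.Mappers;

@Mapper
public interface SomethingMapper {

    SomethingMapper INSTANCE = Mappers.getMapper(SomethingMapper.class);

    // MapStruct generates the field-by-field copy at compile time,
    // so identical attributes don't have to be mapped by hand.
    SomethingResponse toSomethingResponse(ThirdPartyResponse source);
}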
Should I repeat these attributes through all these classes? Should I have only one DTO class for the whole flow?
If both request and response models contain the same set of fields, then you could start with a single class representing both the request and response payloads of each API. When the fields start to differ (request payload different from response payload), create new models to represent each payload.
In the long run, using distinct models for the request and response will give you flexibility, ensuring you'll expose and receive only the attributes you want to.
This is a tough one: pragmatism vs. correctness.
The correct approach (in my view) is to have a different class for each request/response, which is what you've done. This is because I try to design applications using a Ports and Adapters architecture and also use Domain-Driven Design. The benefit is that this gives more flexibility and clarity in the scenario where the objects start diverging.
If the classes have the exact same attributes, I would take the pragmatic approach of having one class per layer: one for your web request/response and one for the 3rd party. But in no case would I mix two integration layers (the front end and the 3rd party).
Having a single class for the whole thing smells really bad because, as I mentioned above, that mixes layers (or ports).

Using DDS Domain Objects in code

I have an architectural question related to the Data Distribution Service (DDS). What are the downsides to using objects imported from DDS directly inside your code for presentation to the user?
I'm working on a program that listens to a large amount of data from various sources and receives everything through DDS. What is the correct approach for handling the objects received via DDS? Or at least, what are the pros and cons of each of the following options?
Use them directly?
Should I encapsulate and pass them through my code with accessors that wrap the fields of the DDS Object?
Convert them to an equivalent business object (including corresponding enumerations) and pass my new object.
The latter two options will allow the DDS domain objects to change with minimal code changes, but is the up-front work of converting all of them worth the time it will take? There is also some extra processing overhead in new object creation.
In the instances where I will be using JavaFX to display information, the third option is required in order to use bindings. For those particular instances, however, the objects will just be updated as new domain objects come in instead of being recreated, so the overhead of object creation is mitigated. That is not the case for all of the DDS data.
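To make the second option concrete, a wrapper could look something like this (VehicleStatus stands in for a type generated from the DDS IDL; its getters are made up for this sketch):

// Encapsulates the DDS-generated type so the rest of the code (and the UI)
// depends only on this wrapper, not on the DDS object itself.
public final class VehicleStatusView {

    private final VehicleStatus ddsSample; // DDS-generated type, assumed here

    public VehicleStatusView(VehicleStatus ddsSample) {
        this.ddsSample = ddsSample;
    }

    public String id() {
        return ddsSample.getVehicleId();
    }

    public double speedKmh() {
        return ddsSample.getSpeed();
    }
}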

Using Protobuf classes vs having a mapping framework/layer in Java

I couldn't find any "Best Practices" online for usage of gRPC and protobuf within a project.
I'm implementing an event-sourced server-side app.
The core defines the domain aggregates, events and services without having external dependencies. The gRPC server calls the core services passing in request objects which eventually translates into events being published. Events are serialized using protobuf and published on the wire.
We're currently in a dilemma over whether our events should be the protobuf-generated classes directly, or whether we should keep the core and events separate and implement a mapper/serializer layer to translate events between protobuf <-> core.
If there's another approach we're not considering, please guide us :)
Thanks for the help.
Domain model objects and data transfer objects (protobuf messages) should be separated as much as possible. The best way to do this is to transform your domain model objects into Google Protobuf messages and vice versa. We've made a protobuf-converter to make this extremely simple.
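Without any extra library, such a translation layer can be as simple as a hand-written mapper per event; the class and field names below are illustrative, assuming a generated OrderCreatedProto message and a core OrderCreated event:

public final class OrderCreatedMapper {

    // Core domain event -> generated protobuf message.
    public static OrderCreatedProto toProto(OrderCreated event) {
        return OrderCreatedProto.newBuilder()
                .setOrderId(event.getOrderId())
                .setAmount(event.getAmount())
                .build();
    }

    // Generated protobuf message -> core domain event.
    public static OrderCreated fromProto(OrderCreatedProto proto) {
        return new OrderCreated(proto.getOrderId(), proto.getAmount());
    }
}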
Protobufs are really good for serialization and backwards compatibility, but not so good at being first-class Java objects. Adding custom functionality to protos is currently not possible. You can get a lot of the benefits by using protobufs at the stub layer, wrapping them in one of your event POJOs, and passing them around internally, like so:
public final class Event {
    private final EventProto proto;

    public Event(EventProto proto) {
        this.proto = proto;
    }

    public void foo() {
        // do something with proto.
    }
}
Most projects don't change their .proto file that often, and almost never in a backwards incompatible way (neither wire nor API). Having to change a lot of code because of proto changes has never been a problem in my experience.

Best Practice (Design Pattern) for copying and augmenting Objects

I'm using an API providing access to a special server environment. This API has a wide range of data objects you can retrieve from it, for example APICar.
Now I'd like to have "my own" data object (MyCar) containing all the information of that data object, but I'd like to either leave out some properties, augment it, or simply rename some of them.
This is because I need those data objects in a JSON-driven client application, so if someone changes the API mentioned above and renames properties, my client application will break immediately.
My question is:
Is there a best practice or a design pattern for copying objects like this, i.e. when you have one object and want to transfer it into another object of another class? I've seen something like that in Eclipse called "AdapterFactory" and was wondering if it's a widely used thing.
To make it more clear: I have ObjectA and I need ObjectB. ObjectA comes from the API and its class can change frequently. I need a method, an object, or a class somewhere which is capable of turning an ObjectA into an ObjectB.
I think you are looking for the Adapter design pattern.
It's really just wrapping an instance of class A in an instance of class B, to provide a different way of using it / different type.
"I think" because you mention copying issues, so it may not be as much a class/type thing as a persistence / transmission thing.
Depending on your situation you may also be interested in dynamic proxying, but that's a Java feature.
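As a rough sketch, an adapter/converter between the two classes from the question could look like this (the APICar getters and MyCar setters are assumed for illustration):

public final class MyCarAdapter {

    // Turns an ObjectA (APICar, owned by the external API) into an ObjectB (MyCar, owned by us).
    public static MyCar fromApi(APICar source) {
        MyCar target = new MyCar();
        target.setBrand(source.getManufacturer());   // renamed property
        target.setTopSpeed(source.getTopSpeedKmh()); // copied property
        // properties of APICar that the client app doesn't need are simply not copied
        return target;
    }
}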

Passing an Entity over a network?

I have been studying Java networking for a while.
I am using ObjectInputStream and ObjectOutputStream for the I/O between sockets.
Is it possible to transfer an Entity or Model from server to client and vice versa?
How can I implement this? Am I supposed to make the Entity or Model implement Serializable?
Your response is highly appreciated.
I am not sure what sort of special thing you mean to denote by capital-E Entity and capital-M Model; these terms don't have any fixed, privileged meaning in Java (although they might with respect to a certain API or framework.) In general, if by these you just mean some specific Java objects, then yes, you can send any sort of objects this way, and yes, they would be required to implement Serializable. The only limitations would be if these objects contained members whose values wouldn't make sense on the other end of the pipe -- like file paths, etc.
Note that if you send one object, you'll end up sending every other object it holds a non-transient reference to, as well.
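A minimal sketch of the round trip (here through an in-memory buffer; with sockets you would wrap socket.getOutputStream() and socket.getInputStream() instead):

import java.io.*;

class User implements Serializable {
    private static final long serialVersionUID = 1L;
    String name;
    transient String tempFilePath; // transient fields are skipped during serialization

    User(String name) { this.name = name; }
}

public class SendEntityExample {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();

        // Sender side: write the object graph.
        try (ObjectOutputStream out = new ObjectOutputStream(buffer)) {
            out.writeObject(new User("alice"));
        }

        // Receiver side: read it back and cast.
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(buffer.toByteArray()))) {
            User received = (User) in.readObject();
            System.out.println(received.name);
        }
    }
}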
First of all... why send an object through an I/O stream? What's wrong with XML?
However, you can always send/receive an object through an I/O stream as long as the sender can serialize the object and the receiver can deserialize it. Hope it helps.
You definitely need to look at one of these two libraries:
Google gson: http://code.google.com/p/google-gson/
Converts Java objects to JSON and back. The advantage is that the objects can be consumed or generated by JavaScript. I have also used this for Java-to-Java RPC, but it gives you flexibility if you want to target browsers later.
Google protocol buffers: http://code.google.com/apis/protocolbuffers/
This is what Google uses for RPC. Implementations exist for Java, C++, and Python. If you need performance and the smallest size, this is the one to go with. (The trade-off is that you can't easily look at the data to debug problems, like you can with Gson, which generates plain-text JSON.)
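For example, the Gson round trip is just a couple of calls (the Car class is made up here):

import com.google.gson.Gson;

class Car {
    String model;
    int year;

    Car(String model, int year) { this.model = model; this.year = year; }
}

public class GsonRoundTripExample {
    public static void main(String[] args) {
        Gson gson = new Gson();
        String json = gson.toJson(new Car("Roadster", 2010)); // {"model":"Roadster","year":2010}
        Car back = gson.fromJson(json, Car.class);            // plain-text JSON is easy to inspect while debugging
        System.out.println(json + " -> " + back.model);
    }
}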
