I have a problem regarding Java custom serialization. I have a graph of objects and want to configure where to stop when I serialize a root object from client to server.
Let's make it a bit more concrete with a sample scenario. I have classes of type:
Company
Employee (abstract)
Manager extends Employee
Secretary extends Employee
Analyst extends Employee
Project
Here are the relations:
Company(1)---(n)Employee
Manager(1)---(n)Project
Analyst(1)---(n)Project
Imagine I'm on the client side and I want to create a new company, assign it 10 employees (new or existing) and send this new company to the server. What I expect in this scenario is that the company and all attached employees are serialized to the server side, because I'll save the relations in the database. So far no problem, since the default Java serialization mechanism serializes the whole object graph, excluding fields that are static or transient.
My goal concerns the following scenario. Imagine I loaded a company and its 1000 employees from the server to the client side. Now I only want to change the company's name (or some other field that belongs directly to the company) and update this record. This time I want to send only the company object to the server side and not the whole list of employees (I just update the name; the employees are irrelevant in this use case). My aim also includes the configurability of saying: transfer the company AND the employees, but not the Project relations, serialization must stop there.
Do you know any possibility of achieving this in a generic way, without implementing writeObject/readObject for every single entity object? What would be your suggestions?
I would really appreciate your answers. I'm open to any ideas and am ready to answer your questions in case something is not clear.
You can make another class (a Data Transfer Object) that contains only the fields you want to transfer.
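For the rename use case described above, such a DTO could carry just the fields that actually change. A minimal sketch (the class and field names are illustrative, not from the original post):

// Hypothetical DTO carrying only the data needed for the "rename company" use case.
public class CompanyRenameDTO implements java.io.Serializable {

    private static final long serialVersionUID = 1L;

    private final Long companyId; // identifies the company on the server
    private final String newName; // the only field being updated

    public CompanyRenameDTO(Long companyId, String newName) {
        this.companyId = companyId;
        this.newName = newName;
    }

    public Long getCompanyId() { return companyId; }
    public String getNewName() { return newName; }
}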
A way of custom serialization is implementing Externalizable
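As a rough sketch of what that could look like for the Company from the question (the transferEmployees flag and the field names are assumptions, not part of the original code):

import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
import java.util.ArrayList;
import java.util.List;

public class Company implements Externalizable {

    private String name;
    private List<Employee> employees = new ArrayList<Employee>();
    // Not part of the persistent state; only controls how much of the graph is written.
    private transient boolean transferEmployees;

    public Company() { } // public no-arg constructor required by Externalizable

    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeObject(name);
        out.writeBoolean(transferEmployees);
        if (transferEmployees) {
            out.writeObject(new ArrayList<Employee>(employees));
        }
    }

    @SuppressWarnings("unchecked")
    public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
        name = (String) in.readObject();
        if (in.readBoolean()) {
            employees = (List<Employee>) in.readObject();
        }
    }
}

The drawback is the same as with writeObject/readObject: you end up writing this by hand for every entity, which is exactly what the question wants to avoid.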
I would say the short answer to your question is no: such varied serialization logic can't easily be implemented without writing the serialization yourself. That said, an alternative might be to write several serializer/deserializer pairs (XML, JSON, whatever your favorite format is, instead of using the built-in serialization) and then to run your objects through those pairs, sending some kind of meta-information preamble.
For example, following your scenarios above, you may have these pairs of (de)serialization mechanisms:
(de)serializeCompany(Company c) - for the base company information
(de)serializeEmployee(Employee e) - for an employee's information
(de)serializeEmployee(Company c) - the base information of employees in a company
(de)serializeRelationships(Company c) - for the project relationships
For XML, each of these can generate a DOM tree, and then you place them all in a root node containing the meta-information, e.g.
<Company describesEmployees="true" describeRelationships="false">
[Elements from (de)serializeCompany]
[Elements from (de)serializeEmployee(Company c)]
</Company>
One potential "gotcha" with this approach is making sure you do the deserialization in the correct order depending on your model (i.e. make sure you deserialize the company first, then the employees, then the relationships). But this approach should afford you the ability to only write the "actual" serialization once, and then you can build your different transport models based on compositions of these pieces.
You could take an object swizzling approach where you send a "stub" object over the wire to your client.
Pros
The same object graph is logically available client-side without the overhead of serializing / deserializing unnecessary data.
Full / stub implementations can be swapped in as necessary without your client code having to change.
Cons
The overhead of calling getters that dynamically load additional attributes via a call to the server is hidden from the client, which can be problematic if you do not control the client code; e.g. an unwitting user could be making an expensive call many times in a tight loop.
If you decide to cache data locally on the client-side you need to ensure it stays in-sync with the server.
Example
/**
* Lightweight company stub that only serializes the company name.
* The collection of employees is fetched on-demand and cached locally.
* The service responsible for returning employees must be "installed"
* client-side when the object is first deserialized.
*/
public class CompanyStub implements Company, Serializable {

    private final String name;
    private transient Set<Employee> employees;
    private transient Service service; // installed client-side, not serialized

    public CompanyStub(String name) {
        this.name = name;
    }

    public Service getService() {
        return service;
    }

    public void setService(Service service) {
        this.service = service;
    }

    public String getName() {
        return name;
    }

    public Set<? extends Employee> getEmployees() {
        if (employees == null) {
            // Employees not loaded yet, so fetch them via the service and cache locally.
            employees = service.getEmployeesForCompany(name);
        }
        return employees;
    }
}
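On the client, the service then has to be wired in right after deserialization, e.g. (a sketch; where the stream and the Service instance come from depends on your transport layer):

// Read the stub and install the client-side Service so the lazy fetch in
// getEmployees() has something to call.
CompanyStub company = (CompanyStub) objectInputStream.readObject();
company.setService(clientSideService);
Set<? extends Employee> employees = company.getEmployees(); // triggers the remote call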
Related
I have the following class:
public class DomainClass {
    private Integer value;
    private Integer total;
    private Integer month;

    public Double getPercent() { /* Business logic */ }
}
I want to do the same getPercent operation over a list of DomainClass objects without repeating code. I have 2 ideas to handle that but I don't know if they're good ones.
Idea 1 - Create a service and iterate the list:
public class DomainObjectService {
    ....
    public Double getPercent(List<DomainClass> list) {
        double value = 0, total = 0;
        for (DomainClass d : list) {
            value += d.getValue();
            total += d.getTotal();
        }
        // do percent operation
    }
}
Idea 2 - Query operation at database, fill object and call the business method
public class DomainObjectService {
    ....
    public Double getPercent() {
        double value, total;
        .... query data with sql and set the above variables
        // do percent operation
        return new DomainBusiness(value, total).getPercentage();
    }
}
I'm reading that in DDD an entity should handle its own logic, but how should a collection operation like this be treated?
Also, in case my basic DDD knowledge is wrong, I would like to know good articles/books/examples of DDD in Java.
How do you manage your entities? Do you use any kind of ORM?
My solution for this kind of operation is to build a class that manages the collection of objects.
So, for example:
public class DomainClasses {
    private final List<DomainClass> domainClasses;
    ....

    public Double getPercent() {
        // manage the percent operation on all the members
        // the way your business is expected to do it on the collection
        return something;
    }

    // class initialization
}
In this way you can reuse the getPercent code of each class, but also implement a specific version of it to be used by the collection. Moreover, the collection can access the package-private getters of DomainClass, if any, to make these calculations. This way you expose nothing more than the functions you need to build your domain objects.
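For instance, a filled-in version of the skeleton above could aggregate the values and compute the percentage over the whole collection. This is only a sketch: the aggregation rule (sum the values and totals, then compute one percentage) and the getter names on DomainClass are assumptions about the business requirement.

import java.util.ArrayList;
import java.util.List;

public class DomainClasses {

    private final List<DomainClass> domainClasses;

    public DomainClasses(List<DomainClass> domainClasses) {
        this.domainClasses = new ArrayList<DomainClass>(domainClasses);
    }

    public Double getPercent() {
        double value = 0;
        double total = 0;
        for (DomainClass d : domainClasses) {
            value += d.getValue(); // assumed package-private getter on DomainClass
            total += d.getTotal();
        }
        return total == 0 ? 0d : (value / total) * 100;
    }
}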
Note: this solution is viable if you manage your persistence without any ORM. If you do want to use an ORM, it will require additional work to configure the container class correctly.
Some links:
https://www.mehdi-khalili.com/orm-anti-patterns-part-4-persistence-domain-model (I work with a DM separated from the PM)
https://softwareengineering.stackexchange.com/questions/371344/why-not-use-an-orm-with-ddd (this is what I'm doing: translating the Domain objects to DTOs that will be persisted - it's a bit of extra code to write for collections, but once it's tested it always works, and what you get is a domain that has no interference from the ORM framework)
Update after the question: I use the Memento pattern.
Storing
My Domain Class has a function that exports all the data into a Memento object. The repository takes a Domain instance, asks for the Memento and then:
I generate the SQL insert/update (plain SQL with transaction management from Spring)
You can load your JPA entity and update it with the Memento information (care should be taken, but if you write tests, once done it will always work - hence, tests are important ;) )
Reading
For the reverse, building a Domain instance from the saved data, I do this:
in the persistence layer, where the repository code is implemented, I've extended my Memento (let's call it PersistedMemento)
when I have to load something, I build a PersistedMemento and use it to build an instance of the Domain Class
my Domain Class has a function that allows building objects from a Memento. Note: this may not always be necessary, but in my case the main constructor has extra checks that cannot be done when the object is rebuilt from a saved one. Anyway, this simplifies the rebuilding of the Domain Class.
To protect the Domain classes from being used outside the world of the domain:
my repositories require an existent transaction, so they cannot be directly used anywhere in the code
the Memento classes have protected constructors, so they are usable only in the Domain package or the Repository package. The PersistedMemento is also hidden in the Repository package, so no outside instances can be created.
Notes
Of course this is not a perfect solution. The Domain Class has 2 functions that are only there to support a non-domain requirement. The Memento class could also be sub-classed, and an instance could be used to build the Domain Class (but why? It's much simpler to build it with the default constructor). But, apart from this small amount of pollution, the domain stays really clean and I can concentrate on the domain requirements without thinking about how to manage the persistence.
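A minimal sketch of the round-trip described above (the Contract class, its fields and the checks are made up for illustration):

// Domain class that can export its state as a Memento and be rebuilt from one.
public class Contract {

    private final String number;
    private final double amount;

    public Contract(String number, double amount) {
        // main constructor with the extra domain checks
        if (amount < 0) {
            throw new IllegalArgumentException("amount must be >= 0");
        }
        this.number = number;
        this.amount = amount;
    }

    // Rebuild a Contract from data that was saved earlier.
    public static Contract fromMemento(Memento m) {
        return new Contract(m.number, m.amount);
    }

    // Export the state so the repository can persist it.
    public Memento toMemento() {
        return new Memento(number, amount);
    }

    // In the description above this is a separate class with a protected
    // constructor; it is nested here only to keep the sketch self-contained.
    public static class Memento {
        final String number;
        final double amount;

        protected Memento(String number, double amount) {
            this.number = number;
            this.amount = amount;
        }
    }
}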
I have a graph of domain objects and I need to build a DTO to send to the view. How do I design this properly? I see 2 options for where to put the DTO-building code:
1) In the DTO constructor. But then the domain object has to expose all its fields to the DTO via getters, so it's not DDD.
public DTO(DomainObject domain) {
    // access internal fields of different domain objects
}
2) In the domain object. There will be no problem with accessing fields, but the domain object will grow very fast as new views are added.
public DTO1 createDTO1() {
    ...
}
public DTO2 createDTO2() {
    ...
}
// and so on...
How should I build DTOs properly?
I think there is a bigger issue at play here. You should not be querying your domain. Your domain should be focused on behaviour and, as such, will quite possibly not contain the data in a format suitable for a view, especially for display purposes.
If you are sending back your entire, say, Customer object to Edit then you are performing entity-based interactions that are very much data focused. You may want to try and place more attention on task-based interactions.
So to get data to your view I'd suggest a simple query layer. Quite often you will need some denormalized data to improve query performance and that will not be present in your domain anyway. If you do need DTOs then map them directly from your data source. If you can get away with a more generic data container structure then that is first prize.
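A rough sketch of such a query layer using Spring's JdbcTemplate; the table/view and the CustomerSummaryDTO are illustrative, not from the question:

import java.util.List;
import org.springframework.jdbc.core.JdbcTemplate;

// Read-only query service: bypasses the domain model and maps denormalized
// rows straight into a view DTO.
public class CustomerQueryService {

    private final JdbcTemplate jdbcTemplate;

    public CustomerQueryService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    public List<CustomerSummaryDTO> findSummaries() {
        return jdbcTemplate.query(
                "SELECT id, name, open_orders FROM customer_summary_view",
                (rs, rowNum) -> new CustomerSummaryDTO(
                        rs.getLong("id"),
                        rs.getString("name"),
                        rs.getInt("open_orders")));
    }

    // Simple immutable DTO shaped for the view, not for the domain.
    public static class CustomerSummaryDTO {
        public final long id;
        public final String name;
        public final int openOrders;

        public CustomerSummaryDTO(long id, String name, int openOrders) {
            this.id = id;
            this.name = name;
            this.openOrders = openOrders;
        }
    }
}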
Variants:
Constructor with simple types in DTO: public DTO(Long id, String title, int year, double price)
Separate class - converter with methods like: DTO1 createDTO1(DomainObject domain)
A framework for copying properties from one object to another, like Dozer: http://dozer.sourceforge.net/
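For variant 2, a converter could look roughly like this (the getter names on DomainObject are assumptions; DTO1 mirrors the constructor shape from variant 1):

// A dedicated converter keeps the mapping out of both the domain object and the DTO.
public class DomainObjectConverter {

    public DTO1 createDTO1(DomainObject domain) {
        return new DTO1(domain.getId(), domain.getTitle(), domain.getYear(), domain.getPrice());
    }

    public static class DTO1 {
        public final Long id;
        public final String title;
        public final int year;
        public final double price;

        public DTO1(Long id, String title, int year, double price) {
            this.id = id;
            this.title = title;
            this.year = year;
            this.price = price;
        }
    }
}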
1) ... the domain object has to present all fields to DTO via getters ...
2) ... the domain object will grow very fast ...
As you can see, the problem is that both alternatives are coupling your model with your DTOs, so you need to decouple them: introduce a layer between them in charge of performing the mapping/translation.
You may find this SO question useful.
domain object has to present all fields to DTO via getters so it's not a DDD
Just because a domain object has getters doesn't make it anemic or anti-DDD. You can have getters and behavior.
Besides, since you want to publish to the View a DTO containing particular data, I can't see how this data can remain private inside the domain object. You have to expose it somehow.
Solution 2) breaks separation of concerns and layering (it's not a domain object's business to create a DTO), so I'd stick with 1) or one of @pasha701's alternatives.
I'm using Jackson to parse JSON for my android app. I also intend to use it in my REST server too, so I'll be sharing my models between client and server.
I've created a POJO to model a domain object "Friend". When the client gets https://www.myserver.com/api/1/friend/1234 I want to return the serialised Friend with ID 1234, perhaps with one or 2 fields missing.
However, when a client gets https://www.myserver.com/api/1/friend/ I want to return all friend objects, but with less data, which might be more appropriate for search results (e.g. just first name, last name and profile image, but excluding their list of friends, date of birth, etc.).
What pattern should I follow here so that I can represent the same underlying model in different ways depending on how it'll be accessed?
Inheritance can be an option in conjunction with @JsonIgnoreProperties.
You can have a class Friend and extend it to restrict which properties are serialized.
@JsonIgnoreProperties({ "friends", "dateOfBirth" })
class RestrictedFriend extends Friend {
}
See if you want to use Inheritance. Have a base class with fields that you want to share with everyone, and a sub-class which has more restricted data. Have two JSON APIs, one for public info, and one for public+secure info, and serialize the base class or sub-class object based on which API was called.
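A sketch of that split (the class names and fields are illustrative; in practice the base class would carry whatever you consider public):

import java.util.Date;
import java.util.List;

// Base class: the lightweight "public info" view of a friend, e.g. for search results.
public class FriendSummary {
    public String firstName;
    public String lastName;
    public String profileImageUrl;
}

// Sub-class: the full view returned by the single-friend API, adding the
// heavier or more sensitive fields.
public class FriendDetail extends FriendSummary {
    public Date dateOfBirth;
    public List<FriendSummary> friends;
}

You can then return FriendSummary instances from the list endpoint and FriendDetail instances from the single-friend endpoint; Jackson serializes the fields of whichever concrete class it is given.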
I am responsible for a Java EE application that provides backend functionality to a number of clients. Some of the clients are also written in Java, so I have extracted my entities into a separate jar, which server and clients share.
The server uses JPA2 for persistence, JAX-RS for communication with clients and JAXB for serialisation to/from XML and JSON. As a result, the (shared) class files contain both JPA- and JAXB-annotations.
Obviously, the same object behaves differently on the server (where it is a managed JPA entity) and on a client (where it is a de-serialized POJO) - especially with regards to one-to-many relationships.
Question: Sometimes I'd like to have individual method calls behave differently depending on where they are executed. Can I solve this through inheritance, so that I don't have to manually maintain two implementations of the same classes?
Example:
A team has many players. A player has a name.
Requests are mapped to GET /team/<id> to get a Team, and GET /team/<id>/<playerName> to get a particular player for a team.
For marshalling and serialisation, the Team shall remain "flat" (don't include players). However, include their names in the serialisation so that the clients know which players they can retrieve in detail.
On the server side, I'd build it like this:
@Entity
@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
public class Team {

    /* some other fields, belonging to the Team */

    @OneToMany(fetch = FetchType.LAZY, mappedBy = "team")
    @XmlTransient // don't marshall the players
    List<Player> players;

    /* getters and setters as necessary */

    @XmlElement
    public List<String> getPlayerNames() {
        List<String> names = new ArrayList<String>();
        for (Player p : getPlayers()) {
            names.add(p.getName());
        }
        return names;
    }
}
On the client side, I'd map it to:
@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
public class Team {

    /* some other fields, belonging to the Team */

    List<String> playerNames;

    public List<String> getPlayerNames() {
        return playerNames;
    }

    public void setPlayerNames(List<String> playerNames) {
        this.playerNames = playerNames;
    }

    /* getters and setters as necessary */
}
This way, the player names get marshalled (through the @XmlElement-annotated getPlayerNames()) on the server side. When the client receives the document, it unmarshalls the list properly. Everybody is happy.
However, now I'd have to maintain two essentially identical classes, where only minor differences occur... What is the best way to do this properly?
Directly serializing entities and transmitting them over the wire may pose a problem, for example if you have circular references or if you'd like to include additional information which might be needed for deserialization, especially in JSON. Another issue might be detached entities (if you send them to the client, the entity manager loses control of the entities and you have to reattach them when they come back) or lazy loading (you can't lazy load on the client). Therefore I would recommend converting the entities into Data Transfer Objects before you transmit them over the wire. See Fowler's book Patterns of Enterprise Application Architecture (page 401; most of the chapter is available through Google Books) for details and motivation.
Using the same classes on the client and server might be problematic too, because they behave differently and might diverge further in the future. You might restrict yourself too much by committing to an identical code base on the client and server, or end up with a big mess.
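Applied to the Team example above, the DTO could be exactly the flat representation the clients need. A sketch (the getter names on Team and Player are assumptions):

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

// Flat transfer object for Team: no JPA relations, no lazy loading, just the
// data that goes over the wire.
public class TeamDTO implements Serializable {

    private static final long serialVersionUID = 1L;

    private String teamName;
    private List<String> playerNames = new ArrayList<String>();

    // Server-side mapping from the JPA entity.
    public static TeamDTO fromEntity(Team team) {
        TeamDTO dto = new TeamDTO();
        dto.teamName = team.getName();
        for (Player p : team.getPlayers()) {
            dto.playerNames.add(p.getName());
        }
        return dto;
    }

    public String getTeamName() { return teamName; }
    public List<String> getPlayerNames() { return playerNames; }
}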
I'm currently thinking about some design details of remoting / serialization between a Java Swing WebStart application (fat client) and some remote services running on Tomcat. I want to use an HTTP-compatible transport to communicate with the server, and since I'm already using Spring, I assume Spring's HTTP remoting is a good choice. But I'm open to other alternatives here. My design problem is best illustrated with a small example.
The client will call some services on the remote side. A sample service interface:
public interface ServiceInterface extends Serializable {

    // Get immutable reference data
    public List<Building> getBuildings();
    public List<Office> getOffices();

    // Create, read and update Employee objects
    public void insertEmployee(Employee employee);
    public Employee getEmployee();
    public void updateEmployee(Employee employee);
}
Building and Office are immutable reference data objects, e.g.
public class Building implements Serializable {

    String name;

    public Building(String name) { this.name = name; }

    public String getName() { return name; }
}

public class Office implements Serializable {

    Building building;
    int maxEmployees;

    public Office(Building building, int maxEmployees) {
        this.building = building;
        this.maxEmployees = maxEmployees;
    }

    public Building getBuilding() { return building; }

    public int getMaxEmployees() { return maxEmployees; }
}
The available Buildings and Offices won't change during runtime and should be prefetched by the client so they are available for selection lists, filter conditions, and so on. I want to have only one instance of each particular Building and Office on the client side and one instance on the server side. On the server side it is not a big problem, but in my eyes the problem starts when I call getOffices() after getBuildings(). The Offices returned by getOffices() share the same Building instance (if they have the same Building assigned), but those Buildings (referenced in the Office objects) are not the same instances as the Buildings returned by getBuildings().
This might be solved by using some getReferenceData() method returning both kinds of information in the same call, but then the problem starts again when I have Employees referencing Offices.
I was thinking about some custom serialization (readObject, writeObject) transferring only the primary key and then getting the instance of the object from some class holding the reference data objects. But is this the best solution to this problem? I assume that this problem is not uncommon, but I did not find anything on Google. Is there a better solution? If not, what would be the best way to implement it?
If you're going to serialize, you'll probably need to implement readResolve to guarantee that you're not creating additional instances:
From the javadoc for Serializable:
Classes that need to designate a replacement when an instance of it is read from the stream should implement this special method with the exact signature:
ANY-ACCESS-MODIFIER Object readResolve() throws ObjectStreamException;
I seem to remember reading about this approach in pre-enum days for handling serialization of objects that had to be guaranteed to be singular, like typesafe enums.
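A sketch of what that could look like for Building, assuming a client-side registry holding the canonical reference-data instances (the BuildingRegistry and keying by name are assumptions):

import java.io.ObjectStreamException;
import java.io.Serializable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class Building implements Serializable {

    private static final long serialVersionUID = 1L;

    private final String name;

    public Building(String name) { this.name = name; }

    public String getName() { return name; }

    // Called by the serialization machinery after the object is read: swap the
    // freshly deserialized copy for the canonical instance already known locally.
    private Object readResolve() throws ObjectStreamException {
        return BuildingRegistry.canonical(this);
    }

    // Hypothetical registry holding one Building instance per name.
    static final class BuildingRegistry {
        private static final Map<String, Building> INSTANCES = new ConcurrentHashMap<String, Building>();

        static Building canonical(Building candidate) {
            return INSTANCES.computeIfAbsent(candidate.getName(), n -> candidate);
        }
    }
}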
I'd also strongly recommend that you include a manual serialVersionUID in your serialized classes so that you can manually control when the application will decide that your classes represent incompatible versions that can't be deserialized.
However, on a more basic level, I'd question the whole approach - rather than trying to guarantee object identity over the network, which sounds like, at the very least, a concurrency nightmare, why not just pass the raw data around and have your logic determine identity by id, the way we did it in my grandpappy's day? Your back end has a Building object, it gets one from the front end and compares via ID (if you've altered the object on the front end, you'll have to commit your object to your central datastore and determine what's changed, which could be a synchronization issue with multiple clients, but you'd have that issue anyway).
Passing data remotely via Spring-httpclient is nice and simple, a bit less low-level than RMI.
Firstly, I'd recommend using RMI for your remoting, which can be proxied over HTTP (IIRC). Secondly, if you serialize the ServiceInterface, I believe the serialization mechanism will maintain the relative references for when it is deserialized in the remote JVM.