Generic - call actual methods of object passed in parameter - java

There are around 6 POJO classes (domain entities, DTOs, DMOs), all of which have almost the same fields. To convert from one object to another, I'm passing one object and calling its getters to set the values on another object.
private UserTemp convertDmoToUserTempEntity(final UserDmo userDmo) {
    final UserTemp userTemp = new UserTemp();
    userTemp.setUsername(userDmo.getUsername());
    userTemp.setPassword(userDmo.getPassword());
    userTemp.setStatus(userDmo.getStatus());
    return userTemp;
}

private UserDmo convertEntityToUserDmo(final UserTemp userTemp) {
    final UserDmo userDmo = new UserDmo();
    userDmo.setUserId(userTemp.getUserId());
    userDmo.setUsername(userTemp.getUsername());
    userDmo.setStatus(userTemp.getStatus());
    return userDmo;
}
There are lots of these conversions: from one entity to another, DTO to DMO, DMO to DTO, etc. I believe a better way to handle this would be generics, passing a source object and a destination object.
public static <E, T> T convert(E e, T t) {
    // call getters of the source object to set values on the destination object
    return t;
}
UserConverterImpl.convertFromTempToUser(userTemp, user);
I need help with this. When I pass an object as a parameter, I need a way to call its methods. Is there a better way to achieve this?

You could try to use a framework for this. In the past I have used Dozer.
In the following post, other frameworks are mentioned as well:
any tool for java object to object mapping?
(Some advice: when mapping JPA entity objects, watch out for lazy fields being automatically mapped by the frameworks.)
Perhaps you don't want to or cannot use a separate framework for whatever reason.
Since you seem to have strict layering, you will probably have mappers for each layer. In that case I would go for a separate mapping for each object. This way you can easily handle special cases such as mapping username to userId. The chances that all objects across the layers have exactly the same method names are not that high anyway, and the names are likely to change.

Ended up using Spring's BeanUtils:
BeanUtils.copyProperties(Object source, Object target)
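For reference, a minimal sketch of how this looks with Spring's BeanUtils, assuming both classes follow JavaBean naming (only properties with matching names and compatible types are copied):

import org.springframework.beans.BeanUtils;

private UserTemp convertDmoToUserTempEntity(final UserDmo userDmo) {
    final UserTemp userTemp = new UserTemp();
    // copies all readable properties of userDmo onto matching writable properties of userTemp
    BeanUtils.copyProperties(userDmo, userTemp);
    return userTemp;
}

Note that Spring's argument order is (source, target); Apache Commons BeanUtils uses the reverse order, which is a common source of bugs when switching between the two.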

Related

Are API calls in a mapper considered a bad practice?

It's quite common to use DTOs as API models. Often you need to map those DTOs to other models afterwards. I will keep it really simple with the following example:
class RequestDto {
    private String companyId;
    // more fields ..
    // getter, setter etc..
}

class SomeModel {
    private Company company;
    // more fields ..
    // getter, setter etc..
}
So in the above case RequestDto is the model used in the API and SomeModel is the model used internally by the server for the business logic. Usually you would create a class to map from one object to the other, e.g.:
class RequestMapper {
    public SomeModel mapRequestToSomeModel(RequestDto request) {
        Company company = fetchCompanyFromApi(request.getCompanyId()); // makes a request to another service
        SomeModel someModel = new SomeModel();
        someModel.setCompany(company);
        // map more fields..
        return someModel;
    }
}
Question
Is it a good practice to put external API call logic (like fetchCompanyFromApi) inside such mapper functions?
Are there better alternatives? (I like to keep mappers very very simple, but maybe that's just me)
It seems a little smelly to me. My (personal) expectation for a mapper, particularly if the other mappers are trivial, is that it is very, very cheap: no database or API calls involved. I would prefer to create some kind of conversion service, which performs the same steps but is named differently (a sketch follows).
A similar question often arises for functions named get…, where I would never expect expensive operations.
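A minimal sketch of that separation, assuming the names CompanyClient and RequestConversionService (both hypothetical) and that fetchCompany performs the remote call:

class RequestMapper { // stays trivial and cheap: pure field shuffling
    public SomeModel mapRequestToSomeModel(RequestDto request, Company company) {
        SomeModel someModel = new SomeModel();
        someModel.setCompany(company);
        // map more fields..
        return someModel;
    }
}

class RequestConversionService {
    private final CompanyClient companyClient; // hypothetical client for the other service
    private final RequestMapper mapper;

    RequestConversionService(CompanyClient companyClient, RequestMapper mapper) {
        this.companyClient = companyClient;
        this.mapper = mapper;
    }

    public SomeModel convert(RequestDto request) {
        // the expensive call is explicit here, not hidden inside the mapper
        Company company = companyClient.fetchCompany(request.getCompanyId());
        return mapper.mapRequestToSomeModel(request, company);
    }
}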
From my point of view, in this case you break the single-responsibility principle: you map some data and you fetch something from another resource. In that case you should name your method fetchDataAndMapRequestToModel, because names should make clear what a method does.
I also expect a mapper class to do simple stuff like pure get/set, or logic based on primitives (if A is null, set B).

How to add java custom or string object to DAO model object at runtime?

I need to add a new String value to each element of a list of custom-type objects, ServiceOrderEntity in this case. I know that this kind of breaks ServiceOrderEntity's integrity, but I have to access this field from a JSP. What is the best way to do it?
DAO class:
...
SQLQuery localSQLQuery = localSession.createSQLQuery(query).addEntity(ServiceOrderEntity.class);
List localList = localSQLQuery.list();
Iterator itr = localList.iterator();
while (itr.hasNext()) {
    String field = "some value";
    itr.next().append(field); // something like that maybe....
}
// return the list to the Service class
...
Service class
...
List localList = /* perform DAO request */;
model.addAttribute("serviceOrderList", localList);
....
UPDATE
I have all models generated by Hibernate and I don't want to touch them. I need to add a field to a custom object, in this case ServiceOrderEntity, or find a workaround. I think I can make a copy of it and append the new field to the copy (using Dozer)? The new field is the result of other complex subqueries.
List of ServiceOrderEntity objects at runtime:
-list
--[0]model.ServiceOrderEntity#d826d3c7
---createdBy = {....}
---serviceRequestFK {java.lang.Integer} // << this one
--[1]
....
etc
I need to get a name using the serviceRequestFK in ServiceOrderEntity. Since Java doesn't allow hot-fixing (adding a custom field to an already created object), I need to find a way to pass the name field to the JSP as well. What is the right way?
I really don't want to issue DAO method requests from the JSP...
Create a separate list of names?...
Since Java does not allow mix-ins (aka monkey-patching), you'll have to either:
Add the field to the base entity, or
Return a sub-class that includes this field.
If you'd like to add the field so that the Service class can do its job, then fair enough. However, if the new field is part of the payload in/out for that particular service, then consider:
Making use-case specific payloads for each service call.
Map the results of these onto your reusable object model. (You can use something like Dozer for this).
The rationale behind this suggestion is to follow the principles of contract-first development.
Your model will be more general-purpose, and thus reusable. You can add reusable behaviors to your model classes; your services will use these behaviors to orchestrate the process (as opposed to having 'anaemic' entities).
Your service payloads can remain relatively stable over time, so changes to your model won't affect all of your service subscribers (aka "don't spill your guts").
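A minimal sketch of the sub-class option (ServiceOrderView and the name field are hypothetical; the property copy could equally be done with Dozer, as you suggested):

// Hypothetical sub-class: the generated entity stays untouched
public class ServiceOrderView extends ServiceOrderEntity {

    private String serviceRequestName;

    public ServiceOrderView(ServiceOrderEntity source, String serviceRequestName) {
        // copy the generated entity's properties onto this instance
        org.springframework.beans.BeanUtils.copyProperties(source, this);
        this.serviceRequestName = serviceRequestName;
    }

    // accessible from the JSP as ${order.serviceRequestName}
    public String getServiceRequestName() {
        return serviceRequestName;
    }
}

The Service class would then combine each ServiceOrderEntity with the result of the name subquery into a ServiceOrderView before handing the list to the model.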

Custom Constructor : Apache Cayenne 3.2M

I'm new to the API. It appears to me that you have to construct objects via the 'context' object like this:
ServerRuntime cayenneRuntime = new ServerRuntime("cayenne-project.xml");
ObjectContext context = cayenneRuntime.newContext();
...
MyEntity entity = context.newObject(MyEntity.class);
Rather than just creating Java objects the usual way with new:
MyEntity entity = new MyEntity();
But I want to create a constructor for my 'MyEntity' class that would do something like:
public MyEntity(String inputFile) {
    // ... call setters based on information derived from inputFile (size, time created, etc.)
}
How can I achieve this? Ideally I want to keep the logic on the MyEntity class itself, rather than having a 'wrapper' class somewhere else instantiate the object and perform the setting.... I guess I could have a 'helper' method which just applies the setters to a previously instantiated instance...but is there an idiom I'm missing here...?
You got it right about creating the object via context.newObject(..) - this is the best way to do it and will keep you out of trouble. Still, you can actually have your own constructor (provided you also maintain a default constructor for the framework to use):
public MyEntity(String inputFile) {
    ...
}

public MyEntity() {
}
Then you can create your object first, and add it to the context after that:
MyEntity e = new MyEntity(inputFile);
context.registerNewObject(e);
As far as idioms go, a very common one is to avoid business logic in persistent objects. ORM models are often reused in more than one application, and behavior you add to the entities doesn't uniformly apply everywhere. The other side of this argument is that anything but the simplest methods depends on knowledge of the surrounding environment - something you don't want your entities to be aware of.
Instead one would write a custom service layer that sits on top of the entities and contains all the business logic (often used with a dependency injection container). Services are not wrappers of entities (in fact services are often singletons). You can think of them as configurable strategy objects. In the Java world such layered design and this type of separation of concerns is very common and is probably the most flexible approach.
But if you want to hack something quickly, and don't envision it to grow into a complex multi-module system, then using a custom constructor or a static factory method in the entity is just fine of course.
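For the quick-hack route, a minimal sketch of a static factory method on the entity (this assumes Cayenne's ObjectContext; the setters and the file-derived values are hypothetical):

public static MyEntity fromInputFile(ObjectContext context, String inputFile) {
    MyEntity entity = context.newObject(MyEntity.class);
    File file = new File(inputFile);
    // hypothetical setters derived from the file's metadata
    entity.setSize(file.length());
    entity.setCreatedAt(new Date(file.lastModified()));
    return entity;
}

This keeps the derivation logic on MyEntity itself while still letting the context manage the object's lifecycle from the start.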

How to observe / trace class member access in Java / Scala?

I'm developing a Scala extension to an existing Java ORM (Ebean). The goal of this project is to add as much type safety as possible to the ORM.
Instead of
Ebean.find(Product.class).fetch("name", "unit").findList()
I would finally like to be able to write something like
(objects of entity[Product] with attributes name and unit) getIt
(note that this is just a very first DSL approach).
The ORM model is already defined as
@Entity
public class Product {
    public String name;
    public String unit;
}
In order to achieve type safety at compile time for the attributes in the query, I would need to access them on e.g. a dummy object like (new Product()).name.
I think this is the best way to ensure that only model members that actually exist on the class are used, but, at runtime, I need a way to recognize that this variable was accessed. Otherwise I would just call that member and wouldn't know about it in my query.
Does anybody know a way how to achieve this? Is there a possibility to trace when a variable is accessed and to give that information, at runtime, to any other object?
I already thought about hooking into getters and setters instead of using public members in the model classes, but this would make either the query or the model very ugly. Another problem is that any additional specific methods would have to be added manually for each model.
I would be happy if anyone could suggest possible solutions. Thanks!
If you are willing to define the fields of your model objects as something like Record Fields, what Emil suggested could work, but if you're building your solution on top of a Java ORM, using custom types might be an issue. If you need to track field access, I think your best bet will be runtime bytecode instrumentation using a library like CGLib or Javassist. You can pass an instrumented "dummy" object into the body of your function, then track which field was accessed in a thread-local. That's how it's done in Squeryl.
You could take a gander at how the Lift folks have implemented Mapper and Record. It allows for type-safe queries using companion objects (as well as using raw SQL). It does require inheriting traits into your model, and the fields are specified as objects rather than regular vals. Might be helpful though. You can find the source for the persistence stuff here.
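A rough sketch of the proxy idea with CGLib (note: a method proxy can only intercept getter calls, not reads of raw public fields like Ebean's; intercepting direct field access really does require bytecode rewriting à la Javassist; all names here are illustrative):

import java.util.ArrayList;
import java.util.List;
import net.sf.cglib.proxy.Enhancer;
import net.sf.cglib.proxy.MethodInterceptor;

public final class AccessTracker {
    // records property names accessed on dummy objects in the current thread
    private static final ThreadLocal<List<String>> ACCESSED =
            ThreadLocal.withInitial(ArrayList::new);

    @SuppressWarnings("unchecked")
    public static <T> T dummy(Class<T> type) {
        Enhancer enhancer = new Enhancer();
        enhancer.setSuperclass(type);
        enhancer.setCallback((MethodInterceptor) (obj, method, args, proxy) -> {
            String name = method.getName();
            if (name.startsWith("get") && args.length == 0) {
                // getName -> "name"
                ACCESSED.get().add(Character.toLowerCase(name.charAt(3)) + name.substring(4));
            }
            return null; // fine for reference-typed getters; primitive returns would need defaults
        });
        return (T) enhancer.create();
    }

    public static List<String> drainAccessed() {
        List<String> result = new ArrayList<>(ACCESSED.get());
        ACCESSED.get().clear();
        return result;
    }
}

A query DSL could then hand a dummy(Product.class) to the caller's attribute-selection function and read drainAccessed() afterwards to learn which properties the query needs to fetch.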

NoSQL Schemaless data and statically typed language

One of the key benefits of NoSQL data stores like MongoDB is that they're schemaless. With dynamically typed languages this seems to be a natural fit. You can receive some arbitrary JSON input, perform business logic on the known fields, and persist the whole thing without first having to define the object.
What if your choice of language is limited to the statically typed, say Java? How could I achieve the same level of flexibility?
A typical data flow looks like the following:
JSON input
Deserialize into a Java object to perform business logic
Serialize into BSON to persist in Mongo
where the deserialize-to-object step is necessary since you want to perform business logic with POJOs, not JSON strings. However, before I can deserialize the input into objects, I must define them first. What if the input contains additional fields undefined in the object? While they may not be used in the business logic, I may still want to be able to persist them. I have seen implementations where the undefined fields are put into a map, but I am not sure if that's the best approach. For one, the undefined fields may be complex objects as well.
Schemaless data doesn't necessarily mean structureless data; the fields are typically known in advance and some type-safe pattern can be applied on top of them to avoid the Magic Container anti-pattern. But this is not always the case. Sometimes keys are entered by the user and cannot be known in advance.
I've used the Role Object Pattern several times to give coherence to a dynamic structure. I think it is well suited here for both cases.
The Role Object Pattern defines a way to access different views of an object. The canonical example being a User that can assume several roles such as Customer, Vendor, and Seller. Each of these views has different operations it can perform and can be accessed from any of the other views. Common fields are typically available at the interface level (especially userId(), or in your case toJson()).
Here's an example of using the pattern:
public void displayPage(User user) {
    display(user.getName());
    if (user.hasView(Customer.class))
        displayShoppingCart(user.getView(Customer.class));
    if (user.hasView(Seller.class))
        displayProducts(user.getView(Seller.class));
}
In the case of data with a known structure, you can have several views bringing different sets of keys into cohesive units. These different views can read the json data on construction.
In the case of data with a dynamic structure, an authoritative RawDataView can hold the data in its dynamic form (i.e. a Magic Container like a HashMap<String, Object>). This can be used to query the dynamic data. At the same time, type-safe wrappers can be created lazily and can delegate to the RawDataView to aid program readability/maintainability:
public class Customer implements User {
    private final RawDataView data;

    public Customer(User source) {
        this.data = source.getView(RawDataView.class);
    }

    // All User views must specify this
    @Override
    public long id() {
        return data.getId();
    }

    @Override
    public <T extends User> T getView(Class<T> view) {
        // construct or look up view
    }

    @Override
    public Json toJson() {
        return data.toJson();
    }

    //
    // Specific to Customer
    //
    public List<Item> shoppingCart() {
        return (List<Item>) data.getValue("items", List.class);
    }

    // etc....
}
I've had success with both of these approaches. Here are some extra pointers that I've discovered along the way:
Keep the structure of your data as static as possible. This makes things a lot easier to maintain. I had to break this rule and use the RawDataView approach when working on a legacy system. You may also have to break it with dynamically-entered user data, as mentioned above; in that case, use a naming convention for the non-dynamic field names, such as a leading underscore (_userId).
Have equals() and hashCode() implemented such that user.getView(A.class).equals(user.getView(B.class)) is always true for the same user.
Have a UserCore class that does all the heavy lifting of common code, such as creating views, performing common operations (like toJson()), returning common fields (like userId()), and implementing equals() and hashCode(). Have all views delegate to this core object (a sketch follows this list).
Have an AbstractUserView that delegates to the UserCore and implements equals() and hashCode().
Use a type-safe heterogeneous container (like Guava's ClassToInstanceMap) for constructing/caching views.
Allow the existence of a view to be queried. This can be done either with a hasView() method or by having getView() return Optional<T>.
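To make the ClassToInstanceMap point concrete, a minimal sketch of a UserCore caching views in Guava's type-safe heterogeneous container (view construction is elided; User is the view interface from the example above):

import com.google.common.collect.ClassToInstanceMap;
import com.google.common.collect.MutableClassToInstanceMap;

public class UserCore {

    private final ClassToInstanceMap<User> views = MutableClassToInstanceMap.create();

    public <T extends User> T getView(Class<T> viewType) {
        T view = views.getInstance(viewType); // type-safe lookup, no cast needed
        if (view == null) {
            view = createView(viewType);
            views.putInstance(viewType, view); // type-safe insert
        }
        return view;
    }

    private <T extends User> T createView(Class<T> viewType) {
        // construct the requested view, wiring it back to this core; elided here
        throw new UnsupportedOperationException("view construction elided: " + viewType);
    }
}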
You can always have a class which provides both:
easy access to the attributes you know about, with optional fallbacks to older formats (for example, it can return "name" if it exists, or fall back to the older "name.first" + "name.last" layout if it doesn't, or some similar scenario), and
easy access to unknown elements, simulating the map interface.
Whether you do full validation or not, and whether you allow extra undefined attributes or not, depends on what you want to achieve. But I think that creating an abstraction which allows both ways of accessing the data is the best solution.
Hopefully, over time you'll get to the stage where your schema is pretty much stable and messing directly with the attributes is no longer needed.
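A minimal sketch of such a class (the attribute names and the legacy fallback are made up; assume the raw document was parsed into a Map<String, Object>):

import java.util.Map;

public class UserDocument {
    private final Map<String, Object> raw;

    public UserDocument(Map<String, Object> raw) {
        this.raw = raw;
    }

    // typed access to a known attribute, with a fallback to a hypothetical older format
    public String name() {
        Object name = raw.get("name");
        if (name != null) {
            return name.toString();
        }
        return raw.get("name.first") + " " + raw.get("name.last");
    }

    // map-like access to anything the schema doesn't know about yet
    public Object get(String key) {
        return raw.get(key);
    }
}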
This is not well solved in Java due to the lack of dynamic types. One way it can be handled is using Maps:
Map<String, Object>
where a value object can again be a Map of objects. This is not an elegant approach, but it works in Java. An example: the SnakeYAML library allows traversing parsed YAML in exactly this way.
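For illustration, a small sketch of walking such a nested structure (the keys are made up):

import java.util.HashMap;
import java.util.Map;

Map<String, Object> address = new HashMap<>();
address.put("city", "Berlin");

Map<String, Object> doc = new HashMap<>();
doc.put("name", "Alice");
doc.put("address", address); // a value can itself be a Map

// traversing the nested structure without any predefined classes
Object inner = doc.get("address");
if (inner instanceof Map) {
    System.out.println(((Map<?, ?>) inner).get("city")); // prints Berlin
}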
