What I need is a unit test that checks whether a given class MyClass is serializable. But I can't just check that the class implements Serializable: all the attributes of the class (and the attributes of those attributes, and so on) have to implement Serializable as well.
Also, a solution like creating an instance of MyClass and trying to serialize and deserialize it is not satisfactory: if someone adds an attribute and does not update the unit test, the test won't cover the new attribute.
One solution would probably be to use reflection recursively to test whether the classes of all the attributes implement Serializable, but that seems really heavy.
Isn't there a better solution?
This is an old question, but I thought I would add my two cents.
Even if you recurse through the whole object graph, you cannot guarantee that your object will serialize. If you have a member variable whose declared type is abstract or an interface, you cannot tell whether the object you eventually store will serialize. If you know what you are doing, it will.
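That caveat is easy to demonstrate with plain java.io: the declared field type gives no guarantee either way. A minimal sketch (the class names Holder and DeclaredVsRuntime are illustrative, not from the question):

```java
import java.io.*;
import java.util.*;

public class DeclaredVsRuntime {
    static class Holder implements Serializable {
        // The declared type List is an interface and is not Serializable itself,
        // but the runtime instance (ArrayList) is.
        List<String> items = new ArrayList<>(Arrays.asList("a"));
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(new Holder()); // succeeds despite the interface-typed field
        }
        System.out.println(bytes.size() > 0);
    }
}
```

A static checker walking declared types would have to either skip the `List` field (as the class below does) or reject it, even though this particular object serializes fine.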
This answer provides a solution by populating your objects with random data, but that is probably overkill.
I have created a class that recurses through the structure, ignoring interfaces, java.lang.Object and abstract classes.
package au.com.tt.util.test;
import java.io.Serializable;
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
public final class SerializableChecker
{
public static class SerializationFailure
{
private final String mContainingClass;
private final String mMemberName;
public SerializationFailure(String inNonSerializableClass, String inMemberName)
{
mContainingClass = inNonSerializableClass;
mMemberName = inMemberName;
}
public String getContainingClass()
{
return mContainingClass;
}
public String getMemberName()
{
return mMemberName;
}
public String getBadMemberString()
{
if (mMemberName == null)
return mContainingClass;
return mContainingClass + "." + mMemberName;
}
@Override
public String toString()
{
return "SerializationFailure [mNonSerializableClass=" + mContainingClass + ", mMemberName=" + mMemberName + "]";
}
}
private static class SerializationCheckerData
{
private Set<Class<?>> mSerializableClasses;
SerializationCheckerData()
{
mSerializableClasses = new HashSet<Class<?>>();
}
boolean isAlreadyChecked(Class<?> inClass)
{
return mSerializableClasses.contains(inClass);
}
void addSerializableClass(Class<?> inClass)
{
mSerializableClasses.add(inClass);
}
}
private SerializableChecker()
{ }
public static SerializationFailure isFullySerializable(Class<?> inClass)
{
if (!isSerializable(inClass))
return new SerializationFailure(inClass.getName(), null);
return isFullySerializable(inClass, new SerializationCheckerData());
}
private static SerializationFailure isFullySerializable(Class<?> inClass, SerializationCheckerData inSerializationCheckerData)
{
for (Field field : declaredFields(inClass))
{
Class<?> fieldDeclaringClass = field.getType();
if (field.getType() == Object.class)
continue;
if (Modifier.isStatic(field.getModifiers()))
continue;
if (field.isSynthetic())
continue;
if (fieldDeclaringClass.isInterface() || fieldDeclaringClass.isPrimitive())
continue;
if (Modifier.isAbstract(field.getType().getModifiers()))
continue;
if (inSerializationCheckerData.isAlreadyChecked(fieldDeclaringClass))
continue;
if (isSerializable(fieldDeclaringClass))
{
inSerializationCheckerData.addSerializableClass(fieldDeclaringClass);
SerializationFailure failure = isFullySerializable(field.getType(), inSerializationCheckerData);
if (failure != null)
return failure;
else
continue;
}
if (Modifier.isTransient(field.getModifiers()))
continue;
return new SerializationFailure(field.getDeclaringClass().getName(), field.getName());
}
return null;
}
private static boolean isSerializable(Class<?> inClass)
{
Set<Class<?>> interfaces = getInterfaces(inClass);
if (interfaces == null)
return false;
boolean isSerializable = interfaces.contains(Serializable.class);
if (isSerializable)
return true;
for (Class<?> classInterface : interfaces)
{
if (isSerializable(classInterface))
return true;
}
if (inClass.getSuperclass() != null && isSerializable(inClass.getSuperclass()))
return true;
return false;
}
private static Set<Class<?>> getInterfaces(Class<?> inFieldDeclaringClass)
{
return new HashSet<Class<?>>(Arrays.asList(inFieldDeclaringClass.getInterfaces()));
}
private static List<Field> declaredFields(Class<?> inClass)
{
List<Field> fields = new ArrayList<Field>(Arrays.asList(inClass.getDeclaredFields()));
Class<?> parentClasses = inClass.getSuperclass();
if (parentClasses == null)
return fields;
fields.addAll(declaredFields(parentClasses));
return fields;
}
}
As a general check, you can use SerializationUtils from Apache Commons, as follows:
byte [] data = SerializationUtils.serialize(obj);
Object objNew = SerializationUtils.deserialize(data);
but it does not guarantee that other instances of that class will serialize properly; for complex objects there is a risk that particular members will not be serializable, depending on their contents.
Using reflection recursively is not perfect, but it can do part of the job.
To do that, you can reuse a utility class like this one.
Usage looks like this:
public class SerializationTests {
@Test
public void ensure_MyClass_is_serializable() {
assertIsSerializable(MyClass.class);
}
@Test
public void ensure_MyComplexClass_is_serializable() {
// We exclude "org.MyComplexClass.attributeName"
// because we cannot be sure it is serializable in a
// reliable way.
assertIsSerializable(MyComplexClass.class, "org.MyComplexClass.attributeName");
}
private static void assertIsSerializable(Class<?> clazz, String... excludes) {
Map<Object, String> results = SerializationUtil.isSerializable(clazz, excludes);
if (!results.isEmpty()) {
StringBuilder issues = new StringBuilder();
for (String issue : results.values()) {
issues.append("\n");
issues.append(issue);
}
fail(issues.toString());
}
}
}
I think serializing and deserializing an instance is OK. If the class changes, the unit test will still be valid; you just need to compare the original and deserialized objects.
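The round-trip-and-compare idea can be sketched with plain java.io, no library assumed (roundTrip is a hypothetical helper name):

```java
import java.io.*;

public class RoundTrip {
    // Serialize the object to a byte array, then deserialize a copy from it.
    @SuppressWarnings("unchecked")
    static <T extends Serializable> T roundTrip(T obj) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(obj);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return (T) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        String copy = roundTrip("hello");
        System.out.println(copy.equals("hello"));
    }
}
```

Note this only exercises whatever state the tested instance happens to carry, which is exactly the limitation the question raises.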
An easy way to test whether an object is serializable in Kotlin:
/**
* Check if an object is Serializable
*/
fun assertIsSerializable(obj: Any) {
val serialized = SerializationUtils.serialize(obj as Serializable)
val unSerialized = SerializationUtils.deserialize<Any>(serialized)
}
I'm learning generics and trying to write a generic method in a project I'm working on. I have a recursive method that is exactly the same in two versions, except that the parameter type is either Document or BasicDBObject. These parameters contain nested lists of objects, so each of them again holds objects of type Document or BasicDBObject. So I have overloaded the method for both types. Can I use generics to change this method to accept either type?
public <T> String convertToRules(T rulesObj) {
final StringBuilder expressionBuilder = new StringBuilder("${");
if (rulesObj.getString("condition") != null) {
String condition = rulesObj.getString("condition");
if (rulesObj.get("rules") != null) {
ArrayList<BasicDBObject> rules = (ArrayList<BasicDBObject>) rulesObj.get("rules");
// rules can be ArrayList<BasicDBObject> or ArrayList<Document> so just mentioning them
// both here in question to avoid confusion of what type they are.
ArrayList<Document> rules = (ArrayList<Document>) rulesObj.get("rules");
rules.forEach(rule -> {
// Some code
expressionBuilder.append(convertToRules(rule));
}
}
}
There are compiler errors in the above code, as I was trying to convert the existing method into a generic one. Below are the actual methods.
public String convertToRules(BasicDBObject rulesObj) {
final StringBuilder expressionBuilder = new StringBuilder("${");
if (rulesObj.getString("condition") != null) {
String condition = rulesObj.getString("condition");
if (rulesObj.get("rules") != null) {
ArrayList<BasicDBObject> rules = (ArrayList<BasicDBObject>) rulesObj.get("rules");
rules.forEach(rule -> {
// Some code
expressionBuilder.append(convertToRules(rule));
});
}
}
return expressionBuilder.toString();
}
public String convertToRules(Document rulesObj) {
final StringBuilder expressionBuilder = new StringBuilder("${");
if (rulesObj.getString("condition") != null) {
String condition = rulesObj.getString("condition");
if (rulesObj.get("rules") != null) {
ArrayList<Document> rules = (ArrayList<Document>) rulesObj.get("rules");
rules.forEach(rule -> {
// Some code
expressionBuilder.append(convertToRules(rule));
});
}
}
return expressionBuilder.toString();
}
I'm pretty sure this is not the right way to do it, so I need some guidance on how this can be achieved. I want to know whether I'm thinking about generics the right way, or whether this implementation is wrong.
RuleType: the contract shared by the two objects.
import java.util.List;
public interface RuleType<T> {
String getString(String parameter);
List<T> get(String param);
}
BasicDBObject
import java.util.ArrayList;
import java.util.List;
public class BasicDBObject implements RuleType<BasicDBObject> {
@Override
public String getString(String parameter) {
return "Something";
}
@Override
public List<BasicDBObject> get(String param) {
return new ArrayList<>();
}
}
Document
import java.util.ArrayList;
import java.util.List;
public class Document implements RuleType<Document> {
@Override
public String getString(String parameter) {
return "Something";
}
@Override
public List<Document> get(String param) {
return new ArrayList<>();
}
}
Generic Method (Logic implementation)
public <T extends RuleType<T>> String convertToRules(T rulesObj) {
final StringBuilder expressionBuilder = new StringBuilder("${");
if (rulesObj.getString("condition") != null) {
String condition = rulesObj.getString("condition");
if (rulesObj.get("rules") != null) {
List<T> rules = rulesObj.get("rules");
rules.forEach(rule ->
// Some code
expressionBuilder.append(convertToRules(rule))
);
}
}
return expressionBuilder.toString();
}
You can make an abstract superclass or interface for both classes. Let's say SuperClass, with the common methods like getString() of BasicDBObject and Document. Then override the methods in the two classes. For the generic method, you can bound it like <? extends SuperClass>. Then you can read the properties of SuperClass easily. I think it should work fine.
The project uses TestNG, Java 11 and Spring Test.
I am writing TestNG tests for an API.
I have a Java object with this kind of structure:
class Object1
private Object2 o2;
private List<Object3> o3;
Where Object2 is not composed only of primitive attributes.
I would like to test if 2 Object1 are equals with these rules:
if actual o2 is null, don't fail even if the other o2 is not
if actual o3 is null or empty, don't fail even if the other o3 is not
if actual o3 is not null nor empty, compare only non null Object3 fields
To sum up, I would like to assert that 2 objects are the same, ignoring null fields recursively.
I could do it
assertThat(actual).usingRecursiveComparison().ignoringActualNullFields().isEqualTo(other);
but the recursive null fields are not ignored.
How can I fix this?
You can also ignore expected null fields.
Can you also provide a simple test to reproduce the issue?
Feel free to raise an issue in https://github.com/joel-costigliola/assertj-core/issues
For me the following code worked:
public class SocialLink {
private String platform;
private String link;
}
SocialLink obj1 = new SocialLink("Facebook", null);
SocialLink obj2 = new SocialLink("Facebook", null);
assertThat(obj1).isEqualToIgnoringNullFields(obj2);
I finally created my own asserts like this:
import org.assertj.core.api.AbstractAssert;
import org.assertj.core.api.Assertions;
import java.util.List;
import java.util.stream.Collectors;
public class Object1Assert extends AbstractAssert<Object1Assert, Object1> {
public Object1Assert isEqualTo(Object1 other) {
// special case for null
if(actual == other) {return this;}
if(actual.getObject2() != null) {
Assertions.assertThat(other.getObject2()).isEqualToIgnoringNullFields(actual.getObject2());
}
if(actual.getObject3() != null) {
for(Object3 object3 : actual.getObject3()) {
my.package.Assertions.assertThat(object3).isIn(other.getObject3());
}
}
// return the current assertion for method chaining
return this;
}
public Object1Assert(Object1 actual) {
super(actual, Object1Assert.class);
}
public static Object1Assert assertThat(Object1 actual) {
return new Object1Assert(actual);
}
}
public class Assertions {
public static Object3Assert assertThat(Object3 actual) {
return new Object3Assert(actual);
}
}
public class Object3Assert extends AbstractAssert<Object3Assert, Object3> {
public Object3Assert isIn(List<Object3> others) {
List<String> otherStringIds = others.stream().map(Object3::getStringId).collect(Collectors.toList());
Assertions.assertThat(otherStringIds).isNotEmpty();
Assertions.assertThat(actual.getStringId()).isIn(otherStringIds);
for (Object3 otherObject3 : others) {
if(actual.getStringId().equalsIgnoreCase(otherObject3.getStringId())) {
Assertions.assertThat(otherObject3).usingComparatorForType(Comparators.bigDecimalComparator, BigDecimal.class).isEqualToIgnoringNullFields(actual);
}
}
// return the current assertion for method chaining
return this;
}
public Object3Assert(Object3 actual) {
super(actual, Object3Assert.class);
}
public static Object3Assert assertThat(Object3 actual) {
return new Object3Assert(actual);
}
}
I created a class like this for each type I needed, following this tutorial:
https://www.baeldung.com/assertj-custom-assertion
I have a class with various properties, and I would like to write a wrapper method around them in order to loop over them more easily.
Some properties return a collection of values, some a single value, and I'm looking for the best approach to handle this.
My first approach is to let the wrapper method return whatever the property getters return.
public class Test {
public Object getValue(String propName) {
if ("attr1".equals(propName)) return getAttribute1();
else if ("attr2".equals(propName)) return getAttribute2();
else return null;
}
public List<String> getAttribute1() {
return Arrays.asList("Hello","World");
}
public String getAttribute2() {
return "Goodbye";
}
public static void main(String[] args) {
final Test test=new Test();
Stream.of("attr1","attr2")
.forEach(p-> {
Object o=test.getValue(p);
if (o instanceof Collection) {
((Collection) o).forEach(v->System.out.println(v));
}
else {
System.out.println(o);
}
});
}
}
The bad point of this approach is that the caller has to test whether the result is a collection or not.
Another approach, seamless for the caller, is to always return a collection, i.e. the wrapper method wraps single values into a Collection. Here it is a HashSet, but we could imagine an ad hoc, minimum-one-element list.
public class TestAlt {
public Collection getValue(String propName) {
if ("attr1".equals(propName))
return getAttribute1();
else if ("attr2".equals(propName)) {
Set s = new HashSet();
s.add(getAttribute2());
return s;
}
else
return null;
}
public List<String> getAttribute1() {
return Arrays.asList("Hello", "World");
}
public String getAttribute2() {
return "Goodbye";
}
public static void main(String[] args) {
final TestAlt test = new TestAlt();
Stream.of("attr1", "attr2")
.forEach(p -> {
test.getValue(p).forEach(v -> System.out.println(v));
});
}
Performance-wise and design-wise, what's your opinion of these approaches? Do you have better ideas?
Well, you could pass the action to be performed on each attribute to the object and let the object decide on how to handle it. E.g.:
in Class Test:
public void forEachAttribute(String propName, Handler h) {
    if ("attr1".equals(propName))
        getAttribute1().forEach(o -> h.handle(o));
    else if ("attr2".equals(propName))
        h.handle(getAttribute2());
}
And a class Handler with a function handle(String s) that does what you want to do.
If you cannot edit Test, you can also move the function outside Test
public void forEachTestAttribute(Test t, String propName, Handler h)...
Performance-wise: this removes an if-clause.
Design-wise: this removes a cast but creates more classes.
Edit: It also maintains type safety, and if there are multiple kinds of attributes (String, int, etc.) you could add more handle functions to keep that type safety.
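Assembled into a runnable sketch (Handler here is written as a hypothetical one-method interface, since the answer does not define it; the attribute getters are taken from the question):

```java
import java.util.*;

// Hypothetical callback type: one handle method per value type would
// let the design stay fully type-safe.
interface Handler { void handle(String s); }

public class AttributeDemo {
    static List<String> getAttribute1() { return Arrays.asList("Hello", "World"); }
    static String getAttribute2() { return "Goodbye"; }

    // The object decides how to iterate, so the caller never needs to
    // test whether a value is a collection.
    static void forEachAttribute(String propName, Handler h) {
        if ("attr1".equals(propName))
            getAttribute1().forEach(h::handle);
        else if ("attr2".equals(propName))
            h.handle(getAttribute2());
    }

    public static void main(String[] args) {
        forEachAttribute("attr1", System.out::println);
        forEachAttribute("attr2", System.out::println);
    }
}
```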
Regarding the design I would rewrite your code into this:
TestAlt.java
import java.util.*;
import java.util.stream.Stream;
public class TestAlt {
private Map<String, AttributeProcessor> map = AttributeMapFactory.createMap();
public Collection getValue(String propName) {
return Optional
.ofNullable(map.get(propName))
.map(AttributeProcessor::getAttribute)
.orElse(Arrays.asList("default")); //to avoid unexpected NPE's
}
public static void main(String[] args) {
final TestAlt test = new TestAlt();
Stream.of("attr1", "attr2")
.forEach(p -> test.getValue(p).forEach(v -> System.out.println(v)));
}
}
AttributeMapFactory.java
import java.util.HashMap;
import java.util.Map;
public class AttributeMapFactory {
public static Map<String, AttributeProcessor> createMap() {
Map<String, AttributeProcessor> map = new HashMap<>();
map.put("attr1", new HiAttributeProcessor());
map.put("attr2", new ByeAttributeProcessor());
return map;
}
}
AttributeProcessor.java
import java.util.Collection;
public interface AttributeProcessor {
Collection<String> getAttribute();
}
HiAttributeProcessor.java
import java.util.Arrays;
import java.util.Collection;
public class HiAttributeProcessor implements AttributeProcessor{
#Override
public Collection<String> getAttribute() {
return Arrays.asList("Hello", "World");
}
}
ByeAttributeProcessor.java
import java.util.Arrays;
import java.util.Collection;
public class ByeAttributeProcessor implements AttributeProcessor{
#Override
public Collection<String> getAttribute() {
return Arrays.asList("Goodbye");
}
}
The main point is that you get rid of if-else statements by using a map and dynamic dispatch.
The main advantage of this approach is that your code becomes more flexible to further changes. In the case of this small program it does not really matter and is overkill, but in a large enterprise application it becomes crucial.
I have two unrelated java classes (only *.class, no *.java) like this:
public class Trick {
public String getName() { return "Jack"; }
public String trick() { ... }
}
and
public class Treat {
public String getName() { return "John"; }
public String treat() { ... }
}
and I would like to generate a sort of proxy class at runtime that represents the union of both classes and forwards calls to the respective instance, or maybe throws if that's not possible. I assume this can be done with cglib, but I don't know where to start.
This is what I would like to do (pseudocode):
// prepare: generate a common interface
TrickOrTreat trickOrTreat = magic.createUnionInterface(Trick.class, Treat.class);
// use with concrete interface A:
Trick trick = new Trick();
TrickOrTreat proxyA = magic.createProxy(trickOrTreat.class, trick);
System.out.println("trick name: " + proxyA.getName());
// use with concrete interface B:
Treat treat = new Treat();
TrickOrTreat proxyB = magic.createProxy(trickOrTreat.class, treat);
System.out.println("treat name: " + proxyB.getName());
Or something to that effect. I would like to do it completely dynamically, probably cglib-based. If that's not possible, I would add a code-generation step in between.
If you are willing to trade in cglib, you can do this with Byte Buddy. I typically refuse to call it magic but here you go:
class Magic {
Class<?> createUnionInterface(Class<?> a, Class<?> b) {
DynamicType.Builder<?> builder = new ByteBuddy().makeInterface();
Set<MethodDescription.SignatureToken> tokens = new HashSet<>();
for (MethodDescription m : new TypeDescription.ForLoadedType(a)
.getDeclaredMethods()
.filter(ElementMatchers.isVirtual())) {
tokens.add(m.asSignatureToken());
builder = builder.defineMethod(m.getName(),
m.getReturnType(),
m.getModifiers()).withoutCode();
}
for (MethodDescription m : new TypeDescription.ForLoadedType(b)
.getDeclaredMethods()
.filter(ElementMatchers.isVirtual())) {
if (!tokens.contains(m.asSignatureToken())) {
builder = builder.defineMethod(m.getName(),
m.getReturnType(),
m.getModifiers()).withoutCode();
}
}
return builder.make()
.load(Magic.class.getClassLoader())
.getLoaded();
}
Object createProxy(Class<?> m, final Object delegate) throws Exception {
return new ByteBuddy()
.subclass(m)
.method(new ElementMatcher<MethodDescription>() {
@Override
public boolean matches(MethodDescription target) {
for (Method method : delegate.getClass()
.getDeclaredMethods()) {
if (new MethodDescription.ForLoadedMethod(method)
.asSignatureToken()
.equals(target.asSignatureToken())) {
return true;
}
}
return false;
}
}).intercept(MethodDelegation.to(delegate))
.make()
.load(Magic.class.getClassLoader())
.getLoaded()
.newInstance();
}
}
Note that you cannot reference a runtime-generated type at compile-time. This is however a given constraint with runtime code generation.
Magic magic = new Magic();
Class<?> trickOrTreat = magic.createUnionInterface(Trick.class, Treat.class);
Trick trick = new Trick();
Object proxyA = magic.createProxy(trickOrTreat, trick);
System.out.println("trick name: " + trickOrTreat.getDeclaredMethod("getName").invoke(proxyA));
Treat treat = new Treat();
Object proxyB = magic.createProxy(trickOrTreat, treat);
System.out.println("treat name: " + trickOrTreat.getDeclaredMethod("getName").invoke(proxyB));
You can overcome this by generating your TrickOrTreat class prior to runtime, so that you can reference the type at compile time.
As for the suggested union-type approach: this would require at least one of the two types to be an interface, as Java does not support multiple class inheritance.
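If a hand-written common interface is acceptable, a plain JDK dynamic proxy is a much lighter sketch of the same forwarding idea, with no bytecode library at all (Named and proxyFor are illustrative names, not part of any API here):

```java
import java.lang.reflect.Proxy;

// Hand-written "union" of the methods shared by Trick and Treat.
interface Named { String getName(); }

class Trick { public String getName() { return "Jack"; } }

public class ProxyDemo {
    // Wrap any delegate in a Named proxy that forwards each call
    // reflectively to the delegate's method of the same name.
    static Named proxyFor(Object delegate) {
        return (Named) Proxy.newProxyInstance(
                Named.class.getClassLoader(),
                new Class<?>[]{Named.class},
                (proxy, method, args) -> delegate.getClass()
                        .getMethod(method.getName())
                        .invoke(delegate));
    }

    public static void main(String[] args) {
        System.out.println(proxyFor(new Trick()).getName());
    }
}
```

The trade-off versus the Byte Buddy answer is that the interface must exist at compile time, which is exactly the constraint the question wanted to avoid.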
If you need functionality of both classes/interfaces you can use
public <TT extends Trick & Treat> void process(TT thing){
//...
}
Edit:
Implement new Interface MyProxyHandler
public interface MyProxyHandler {}
Extend it with the interfaces of your classes, say TreatInterface and TrickInterface.
Create a class ProxyManager that implements java.lang.reflect.InvocationHandler:
public abstract class ProxyManager<T extends MyProxyHandler> implements InvocationHandler {
protected static String LOCK_OBJECT = new String("LOCK");
protected T proxyHandler;
protected List<T> handlers = new ArrayList<>();
@SuppressWarnings("unchecked")
public ProxyManager(Class<T> _clazz) {
proxyHandler = (T) Proxy.newProxyInstance(_clazz.getClassLoader(), new Class[]{_clazz}, this);
}
public T getProxy() {
return proxyHandler;
}
public List<T> getHandlers() {
return handlers;
}
public void setHandlers(List<T> handlers) {
this.handlers = handlers;
}
public boolean registerHandler(T handler) {
synchronized (LOCK_OBJECT) {
boolean add = true;
for (T item : this.handlers) {
if (item.getClass().equals(handler.getClass())) {
add = false;
}
}
if (add)
this.handlers.add(handler);
return add;
}
}
@Override
public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
String result = "";
for (MyProxyHandler handler : getHandlers()) {
try {
// I recommend that methods return some enum like HANDLED/NOTHANDLED
result = (String) method.invoke(handler, args);
if (result.equals("Some flag"))
break;
} catch (InvocationTargetException e) {
throw e.getCause();
}
}
return result;
}
}
Extend that class with your concrete class:
public class TreatTrickProxyManager<T extends TrickInterface & TreatInterface> extends ProxyManager<T> {
public TreatTrickProxyManager(Class<T> _clazz) {
super(_clazz);
}
}
In your business logic class, get an instance of TreatTrickProxyManager.
In your method:
public void retrieveSomeData(){
((TreatTrickProxyManager)getTreatTrickProxyManager().getProxy()).someMethodInvocation();
}
I'm using Dozer version 5.4.0.
I have one class with a Map in it, and another class with a List. I'm trying to write a custom converter that will take the values of the map and put them in the list. But the problem is, the converter always gets passed a null source object for the map, even if the parent source has a populated map. I can't figure out why this is happening; I think the converter should be passed a populated Map object.
Here is some source code that compiles and shows the problem in action:
package com.sandbox;
import org.dozer.DozerBeanMapper;
import org.dozer.loader.api.BeanMappingBuilder;
import org.dozer.loader.api.FieldsMappingOptions;
public class Sandbox {
public static void main(String[] args) {
DozerBeanMapper mapper = new DozerBeanMapper();
mapper.addMapping(new MappingConfig());
ClassWithMap parentSource = new ClassWithMap();
ClassWithList parentDestination = mapper.map(parentSource, ClassWithList.class);
int sourceMapSize = parentSource.getMyField().size();
assert sourceMapSize == 1;
assert parentDestination.getMyField().size() == 1; //this assertion fails!
}
private static class MappingConfig extends BeanMappingBuilder {
@Override
protected void configure() {
mapping(ClassWithMap.class, ClassWithList.class)
.fields("myField", "myField",
FieldsMappingOptions.customConverter(MapToListConverter.class, "com.sandbox.MyMapValue"));
}
}
}
As you can see, that second assertion fails. Here are the other classes I'm using.
MapToListConverter.java:
package com.sandbox;
import org.dozer.DozerConverter;
import org.dozer.Mapper;
import org.dozer.MapperAware;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
public class MapToListConverter extends DozerConverter<Map, List> implements MapperAware {
private Mapper mapper;
public MapToListConverter() {
super(Map.class, List.class);
}
@Override
public List convertTo(Map source, List destination) { //source is always null, why?!
List convertedList = new ArrayList();
if (source != null) {
for (Object object : source.values()) {
Object mappedItem = mapper.map(object, getDestinationClass());
convertedList.add(mappedItem);
}
}
return convertedList;
}
private Class<?> getDestinationClass() {
try {
return Class.forName(getParameter());
} catch (ClassNotFoundException e) {
throw new IllegalArgumentException(e);
}
}
@Override
public Map convertFrom(List source, Map destination) {
throw new UnsupportedOperationException();
}
@Override
public void setMapper(Mapper mapper) {
this.mapper = mapper;
}
}
ClassWithMap.java:
package com.sandbox;
import java.util.HashMap;
import java.util.Map;
public class ClassWithMap {
private Map<String, MyMapValue> myField;
public Map<String, MyMapValue> getMyField() { //this method gets called by dozer, I've tested that with a breakpoint
if (myField == null) {
myField = new HashMap<String, MyMapValue>();
myField.put("1", new MyMapValue());
}
return myField; //myField has an entry in it when called by dozer
}
public void setMyField(Map<String, MyMapValue> myField) {
this.myField = myField;
}
}
ClassWithList.java:
package com.sandbox;
import java.util.List;
public class ClassWithList {
private List<MyMapValue> myField;
public List<MyMapValue> getMyField() {
return myField;
}
public void setMyField(List<MyMapValue> myField) {
this.myField = myField;
}
}
MyMapValue.java
package com.sandbox;
public class MyMapValue {
}
The problem seems to be in Dozer's MapFieldMap.getSrcFieldValue method. These comments were added by me:
@Override
public Object getSrcFieldValue(Object srcObj) {
DozerPropertyDescriptor propDescriptor;
Object targetObject = srcObj;
if (getSrcFieldName().equals(DozerConstants.SELF_KEYWORD)) {
propDescriptor = super.getSrcPropertyDescriptor(srcObj.getClass());
} else {
Class<?> actualType = determineActualPropertyType(getSrcFieldName(), isSrcFieldIndexed(), getSrcFieldIndex(), srcObj, false);
if ((getSrcFieldMapGetMethod() != null)
|| (this.getMapId() == null && MappingUtils.isSupportedMap(actualType) && getSrcHintContainer() == null)) {
// Need to dig out actual map object by using getter on the field. Use actual map object to get the field value
targetObject = super.getSrcFieldValue(srcObj);
String setMethod = MappingUtils.isSupportedMap(actualType) ? "put" : getSrcFieldMapSetMethod();
String getMethod = MappingUtils.isSupportedMap(actualType) ? "get" : getSrcFieldMapGetMethod();
String key = getSrcFieldKey() != null ? getSrcFieldKey() : getDestFieldName();
propDescriptor = new MapPropertyDescriptor(actualType, getSrcFieldName(), isSrcFieldIndexed(), getDestFieldIndex(),
setMethod, getMethod, key, getSrcDeepIndexHintContainer(), getDestDeepIndexHintContainer());
} else {
propDescriptor = super.getSrcPropertyDescriptor(srcObj.getClass());
}
}
Object result = null;
if (targetObject != null) {
result = propDescriptor.getPropertyValue(targetObject); //targetObject is my source map, but the result == null
}
return result;
}
I figured out how to fix this. I'm still not sure whether it's a bug, but I think it is. The solution is to change my configuration to this:
mapping(ClassWithMap.class, ClassWithList.class, TypeMappingOptions.oneWay())
.fields("myField", "myField"
, FieldsMappingOptions.customConverter(MapToListConverter.class, "com.sandbox.MyMapValue")
);
The fix is in TypeMappingOptions.oneWay(). When the mapping is bidirectional, Dozer's MappingsParser tries to use a MapFieldMap, which causes my problem:
// iterate through the fields and see whether or not they should be mapped
// one way class mappings we do not need to add any fields
if (!MappingDirection.ONE_WAY.equals(classMap.getType())) {
for (FieldMap fieldMap : fms.toArray(new FieldMap[]{})) {
fieldMap.validate();
// If we are dealing with a Map data type, transform the field map into a MapFieldMap type
// only apply transformation if it is map to non-map mapping.
if (!(fieldMap instanceof ExcludeFieldMap)) {
if ((isSupportedMap(classMap.getDestClassToMap())
&& !isSupportedMap(classMap.getSrcClassToMap()))
|| (isSupportedMap(classMap.getSrcClassToMap())
&& !isSupportedMap(classMap.getDestClassToMap()))
|| (isSupportedMap(fieldMap.getDestFieldType(classMap.getDestClassToMap()))
&& !isSupportedMap(fieldMap.getSrcFieldType(classMap.getSrcClassToMap())))
|| (isSupportedMap(fieldMap.getSrcFieldType(classMap.getSrcClassToMap())))
&& !isSupportedMap(fieldMap.getDestFieldType(classMap.getDestClassToMap()))) {
FieldMap fm = new MapFieldMap(fieldMap);
classMap.removeFieldMapping(fieldMap);
classMap.addFieldMapping(fm);
fieldMap = fm;
}
}