We have "beans" that are meant to be serialized to JSON, to be then returned to our (vue.js based) UI layer. So far, my beans look like this:
public class ExampleBean {

    private final int id;
    private final String name;

    public ExampleBean(int id, String name) {
        this.id = id;
        this.name = name;
    }

    // getters for all fields
}
They are instantiated by some mapper:
public ExampleBean map(SomeInternalThing foo) {
    int id = getIdFromFoo(foo);
    String name = doSomethingElse(foo.itsBar());
    return new ExampleBean(id, name);
}
I then have some unit tests (for the mapper):
@Test
public void testGetId() {
    // ... mocking setup so that the mapper can do its job
    assertThat(mapperUnderTest.map(someFoo).getId(), is(5));
}
The main advantage of this approach is that the bean objects are immutable (and the compiler tells me when I forget to initialize a field).
But: the number of fields for that bean keeps increasing. That SomeInternalThing context has maybe 30 to 50 "properties", and the number of fields required in the bean has grown from 3 to 5 to 8 by now.
What is really "killing" me is the fact that the mapping code does something different for each required field, which requires me to set up more and more "common" mock specifications.
By now I am wondering if there are better choices for implementing such "data only objects".
Personally, I prefer Lombok (https://projectlombok.org/) when creating data objects. It gets rid of the boilerplate code. You should take a look at the @Builder and @Data annotations.
Since adopting Lombok is always a team decision, you could start by implementing the builder pattern yourself for such data objects.
This enables you to set every property separately and test every property individually; a sketch of such a builder follows below.
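A hand-rolled builder for the ExampleBean from the question might look roughly like this (just a sketch; Lombok's @Builder would generate the equivalent for you):

public class ExampleBean {

    private final int id;
    private final String name;

    private ExampleBean(Builder builder) {
        this.id = builder.id;
        this.name = builder.name;
    }

    public int getId() { return id; }
    public String getName() { return name; }

    public static Builder builder() { return new Builder(); }

    public static class Builder {
        private int id;
        private String name;

        public Builder id(int id) { this.id = id; return this; }
        public Builder name(String name) { this.name = name; return this; }

        public ExampleBean build() { return new ExampleBean(this); }
    }
}

Usage then reads ExampleBean.builder().id(5).name("foo").build(), and each test only has to set the properties it actually cares about.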
That being said, you probably shouldn't use a constructor with every field (see @AllArgsConstructor in Lombok).
As you can see here (https://en.wikipedia.org/wiki/JavaBeans), beans are expected to have a public default constructor.
Related
I have a hierarchical list of converters like the following for example:
@Named
public class NameConverter {

    @Inject
    private AddressConverter addressConverter;

    public package2.Name convert(package1.Name source) {
        package2.Name target = new package2.Name();
        target.setFirstName(source.getName());
        target.setLastName(source.getName());
        target.setAddress(addressConverter.convert(source.getAddress()));
        return target;
    }
}
and AddressConverter has ZipCodeConverter and so on ...
In the unit test class, I would:
1) Create a mock for addressConverter using EasyMock.createNiceMock.
2) Set the expectation:
EasyMock.expect(addressConverter.convert(EasyMock.anyObject(package1.Address.class))).andReturn(addressList); // what should this addressList be?
3) Use Whitebox.setInternalState for the private fields.
Question:
Asserting that the first name and last name are equal to the expected values is straightforward.
But NameConverter is also responsible for setting the converted Address. There is a possibility that NameConverter changes the values of the returned converted Address and the other POJOs inside it.
So how do I ensure, through asserts or something else, that NameConverter just sets the Address (and the POJOs it encapsulates) as-is and does not tamper with the values?
Possible solution: In the EasyMock.expect return, should I create and set values for all POJOs down to the last one in the hierarchy and assert on each of the values?
But that doesn't seem like unit testing!
How should I unit test this converter?
Setting the return value of a mock object and asserting that this return value is put in the right place by your NameConverter is unit testing.
However, perhaps what you're coming across is a failure of appropriate layering. If you have a set of 'converter' classes which then need to be co-ordinated in some fashion, you may want to make each converter independent and move the co-ordination responsibility elsewhere. So your NameConverter should be completely independent of AddressConverter, and you perhaps need a third class which is responsible for calling a set of converters, each of which just does its own job.
You could restructure each converter to be given an instance of both its input and output object, and its unit tests assert that it only acts on known fields within each object. Then the co-ordinator object doesn't need to know anything about what each converter does; it just needs to locate or create instances of the input and output objects and call each converter in turn. That's very amenable to a unit-testing approach, without resulting in a lot of layering concerns.
Example code:
public interface Converter<S, T> {
    void convert(S source, T target);
}

public class NameConverter implements Converter<p1.Name, p2.Name> {

    @Override
    public void convert(p1.Name source, p2.Name target) {
        target.setFirstName(source.getName());
        target.setLastName(source.getName());
    }
}

public class AddressConverter implements Converter<p1.Name, p2.Name> {

    @Override
    public void convert(p1.Name source, p2.Name target) {
        // convert the address-related fields from source to target
    }
}

public class ConversionService {

    private final Set<Converter<p1.Name, p2.Name>> converters;

    @Inject
    public ConversionService(Set<Converter<p1.Name, p2.Name>> converters) {
        this.converters = converters;
    }

    public p2.Name convert(p1.Name source) {
        p2.Name target = new p2.Name();
        converters.forEach(converter -> converter.convert(source, target));
        return target;
    }
}
Then your unit test really just needs to know that all your lower-level converters were called.
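For instance, a test along these lines only verifies that interaction (a sketch using Mockito as one possible mocking library, with import static org.mockito.Mockito.*; covering mock, verify, same and any; the types mirror the example above):

@Test
public void callsEveryRegisteredConverter() {
    // Mocks for two converters; we only care that they are invoked, not what they do.
    @SuppressWarnings("unchecked")
    Converter<p1.Name, p2.Name> nameConverter = mock(Converter.class);
    @SuppressWarnings("unchecked")
    Converter<p1.Name, p2.Name> addressConverter = mock(Converter.class);

    ConversionService service =
            new ConversionService(new HashSet<>(Arrays.asList(nameConverter, addressConverter)));

    p1.Name source = mock(p1.Name.class);
    p2.Name result = service.convert(source);

    // Each converter received the same source and the freshly created target.
    verify(nameConverter).convert(same(source), any(p2.Name.class));
    verify(addressConverter).convert(same(source), any(p2.Name.class));
    assertNotNull(result);
}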
I would suggest three options:
1) Return a new empty instance of Address from your mock and assert that this exact instance is set on the target (see the sketch after this list). Don't test whether the address values are modified; it's OK not to test every single possibility of things going wrong.
2) Return a strict mock of Address without any expectations set. It will throw if there is an attempt to modify it. Again, check for instance equality.
3) Don't use mocks at all and test the entire hierarchy as a whole. It does not look like unit testing, but it may be a good option. I think mocks are often overused and should be avoided when possible. Please see more here.
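A sketch of the first option with EasyMock (it assumes the converted address type is package2.Address, that the mock has been injected into the NameConverter under test, here called nameConverter, e.g. via Whitebox.setInternalState, and that source is a prepared package1.Name):

@Test
public void setsConvertedAddressOnTargetAsIs() {
    // The mock returns this exact (empty) instance...
    package2.Address convertedAddress = new package2.Address();
    EasyMock.expect(addressConverter.convert(EasyMock.anyObject(package1.Address.class)))
            .andReturn(convertedAddress);
    EasyMock.replay(addressConverter);

    package2.Name converted = nameConverter.convert(source);

    // ...and the assertion only checks that the very same instance ends up on the target.
    assertSame(convertedAddress, converted.getAddress());
    EasyMock.verify(addressConverter);
}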
I would recommend the following as a good unit test for NameConverter:
public final class NameConverterTest {

    @SUT
    NameConverter tested;

    @Test
    public void convertNameFromPackage1ToNameFromPackage2() {
        Address address = new Address();
        package1.Name source = new package1.Name("A Name", address);

        package2.Name converted = tested.convert(source);

        assertEquals(source.getName(), converted.getFirstName());
        assertEquals(source.getName(), converted.getLastName());
        assertNotNull(converted.getAddress());
    }
}
According to Martin Fowler's definition, the above is still a unit test for the NameConverter unit, even if it doesn't isolate it from its dependency on AddressConverter (which would have its own unit test).
(For simplicity, I used a hypothetical @SUT annotation which takes care of instantiating the "system under test" with injected dependencies; actual testing libraries that do this exist.)
We are actually using Spring Boot's @ConfigurationProperties basically as a configuration mapper: it gives us an easy shortcut for mapping properties onto objects.
@ConfigurationProperties("my.service")
public class MyService {

    private String filePrefix;
    private Boolean coefficient;
    private Date beginDate;

    // getters/setters mandatory at the time of writing

    public void doBusinessStuff() {
        // ...
    }
}
Although this was a nice productivity boost when we were prototyping the app, we came to question whether this is the right usage.
I mean, configuration properties have a different status in Spring Boot's context: they're exposed through actuator endpoints, they can be used to trigger conditional beans, and they seem more oriented toward technical configuration.
Question: Is it "correct" to use this mechanism on any business property/value, or is it plain misuse?
Any potential drawbacks we missed?
Right now our only concern is that we cannot use @ConfigurationProperties on immutable classes, which is closely related to this issue on Spring Boot's tracker: Allow field based @ConfigurationProperties binding.
If your property represents something that is configurable based on the environment/profile, that is what the mechanism is there for. Though I'm a little unclear on what you mean by
"map properties on objects".
I would not favor this style in general, especially if your bean has multiple properties to set. A more standard idiom is to have a class that encapsulates the properties/settings used to create your bean:
@ConfigurationProperties("my.service")
public class MyServiceProperties {

    private String filePrefix;
    private Boolean coefficient;
    private Date beginDate;

    // getters/setters mandatory at the time of writing
}
then your Service class would look like this:
@EnableConfigurationProperties(MyServiceProperties.class)
public class MyService {

    @Autowired
    private MyServiceProperties properties;

    // do stuff with properties
    public void doBusinessStuff() {
        // ...
    }
}
This would at least allow you to pass the properties easily into an immutable class through its constructor (make copies of any mutable properties). Also, the properties bean can be reused if other parts of your app need some shared configuration.
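For example, the constructor-based variant could look roughly like this (a sketch following the names above; the defensive copy is there because java.util.Date is mutable):

public class MyService {

    private final String filePrefix;
    private final Boolean coefficient;
    private final Date beginDate;

    public MyService(MyServiceProperties properties) {
        this.filePrefix = properties.getFilePrefix();
        this.coefficient = properties.getCoefficient();
        this.beginDate = new Date(properties.getBeginDate().getTime());
    }

    public void doBusinessStuff() {
        // ...
    }
}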
So I have a class like so:
public class HBaseUtil {

    private final String fileName = "hbase.properties";
    private Configuration config;

    private HBaseUtil() {
        try {
            config = new PropertiesConfiguration(fileName);
        } catch (ConfigurationException e) {
            // some exception handling / logging
        }
    }

    // now some getters pulling data out of the config object
    public static String getProperty(String fieldKeyName) {...}
    public static String getColumnFamily(String fieldName) {...}
    // ... some more getters

    // NO setters (thus making this a read-only class)
}
Basically, I have a singleton class that, the very first time it is used, sets up a configuration object and then simply serves get calls. There are a number of problems with this class:
Unit testing the static methods within HBaseUtil becomes difficult because of the tight coupling between the singleton and the configuration file.
What I really want is to be able to supply the file name (or path) to the class so that it can read the configuration properties from that file and serve them to incoming read requests. One important note: I need this flexibility in specifying the properties file ONLY ONCE per JVM launch, so I certainly don't need to maintain mutable state.
Here is what I was able to come up with:
Instead of a Singleton, I have a normal class with all static methods and no explicit constructor defined.
public class HBaseUtil {

    // directly start with getters
    public static String getProperty(Configuration config, String fieldKeyName) {...}
    public static String getColumnFamily(Configuration config, String fieldKeyName) {...}
    // ...and so on
}
And then, instead of using the class in my other code like this:
HBaseUtil.getProperty(fieldKeyName)
I'd use it like so:
Configuration externalConfig = new PropertiesConfiguration("my-custom-hbase.properties");
HBaseUtil.getProperty(externalConfig, fieldKeyName)
My questions:
Am I even thinking in the right direction? My requirement is to have this flexibility only ONCE per JVM. All that needs to be configurable for this is the location/contents of the HBase .properties file. I was thinking a singleton is overkill for this requirement.
What better approaches are there for this requirement?
Thanks!
Note: I've read this StackOverflow discussion, but now it's gotten me even more confused.
You should avoid all static methods and instead design a class which does not mandate its lifecycle: it can be a typical immutable POJO with a public constructor.
Then, when you need it as a singleton, use it as a singleton. For testing, use it in some other way.
Usually, dependency injection is the preferred avenue to solve these problems: instead of hard-coding a pulling mechanism for your configuration object, you have the object delivered to any class which needs it. Then you can decide late what bean you will deliver.
Since you are probably not using Spring (otherwise dependency injection would be your default), consider using Guice, which is a very lightweight and non-intrusive approach to dependency injection.
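For illustration, a minimal sketch with Guice and Commons Configuration (the class and binding names are made up; in a unit test you would simply construct the wrapper with whatever Configuration instance you like):

// Immutable wrapper around the configuration; no static state, no fixed lifecycle.
public final class HBaseSettings {

    private final Configuration config;

    public HBaseSettings(Configuration config) {
        this.config = config;
    }

    public String getProperty(String fieldKeyName) {
        return config.getString(fieldKeyName);
    }
}

// Guice module that decides, in exactly one place per JVM, which properties file is used.
public class HBaseModule extends AbstractModule {

    @Override
    protected void configure() {
        // nothing else to bind
    }

    @Provides
    @Singleton
    HBaseSettings provideSettings() {
        try {
            return new HBaseSettings(new PropertiesConfiguration("hbase.properties"));
        } catch (ConfigurationException e) {
            throw new RuntimeException("Could not load hbase.properties", e);
        }
    }
}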
Disclaimer: I understand that trying to use Spring to inject static variables is considered bad practice (and I know there are ways around it, e.g. here). So ultimately I plan to redesign, but am curious about possible solutions or workarounds.
I am using Jakarta's Unstandard tag library (particularly useConstants) to easily expose public static final objects to my JSP pages. I want these static objects to initialize themselves from my database, which means I need to inject a JDBC Template or Data Source. So I want something like:
public class MyGroup {

    // @Autowired or inject somehow?
    private static /*final?*/ NamedParameterJdbcTemplate jdbcTemplate;

    public static final MyGroup GROUP_A = new MyGroup("GROUP_A");
    public static final MyGroup GROUP_B = new MyGroup("GROUP_B");
    public static final MyGroup GROUP_C = new MyGroup("GROUP_C");

    // Instance fields
    private int id;
    private String name;
    private String description;

    /**
     * Construct a group.
     */
    public MyGroup() {}

    /**
     * Construct a group using information from the database.
     * @param key the key to match
     */
    public MyGroup(String key) {
        // Do DB stuff using the injected JDBC template
        this.id = id_from_DB;
        this.name = name_from_DB;
        this.description = desc_from_DB;
    }
}
In my JSP, I could simply do ${MyGroup.GROUP_A.id} and anywhere else in the Java code I could just MyGroup.GROUP_B.getName().
So the problem is that these groups must be final for the Jakarta library to pick them up, but I can't statically initialize them via Spring. Thoughts?
This isn't a problem with Spring so much as a conflict between what you want and what Java allows. You cannot delay the assignment of a static final property; it has to be set when the class is loaded. Therefore, by the time Spring could inject anything, it is too late.
If the field doesn't have to be final, that opens up some options.
Another possibility is to create an aspect which intercepts access to the property and returns the value you want rather than the stored value. You could then inject the desired value into the aspect.
I've never done this specifically with static properties, but I presume it is possible. It is not possible to use constant fields (static final fields bound to a constant String or primitive value) as a join point, since Java requires those to be inlined, but since you are pointing at a non-String object, I think using an aspect could work.
To make sure Spring injects into your aspect, tell Spring about it via something like this:
<bean id="someId" class="com.yourdomain.YourAspect" factory-method="aspectOf"/>
I'm building a simple RESTful service, and to achieve that I need two things:
Get an instance of my resource (i.e. Book) from the request parameters, so I can persist that instance
Build an XML document from that instance to send the representation to the clients
Right now, I'm doing both things in my POJO class:
public class Book implements Serializable {

    private Long id;

    public Book(Form form) {
        // Initializing attributes
        id = Long.parseLong(form.getFirstValue(Book.CODE_ELEMENT));
    }

    public Element toXml(Document document) {
        // Getting an XML representation of the Book
        Element bookElement = document.createElement(BOOK_ELEMENT);
        // ... populate the element ...
        return bookElement;
    }
}
I remember an OO principle saying that behavior should live where the data is, but now my POJO depends on request and XML APIs, and that doesn't feel right (also, that class carries persistence annotations).
Is there any standard approach/pattern to solve that issue?
EDIT:
The libraries I'm using are Restlet and Objectify.
I agree with you that the behavior should be where the data is. But at the same time, as you say, I just don't feel comfortable polluting a POJO interface with methods used purely for serialization (which can grow considerably depending on how many formats you want to support: JSON, XML, etc.).
1) Build an XML document from that instance to send the representation to the clients
In order to decouple the object from serialization logic, I would adopt the Strategy Pattern:
interface BookSerializerStrategy {
    String serialize(Book book);
}

public class XmlBookSerializerStrategy implements BookSerializerStrategy {

    @Override
    public String serialize(Book book) {
        // Do something to serialize your book to XML.
    }
}

public class JsonBookSerializerStrategy implements BookSerializerStrategy {

    @Override
    public String serialize(Book book) {
        // Do something to serialize your book to JSON.
    }
}
Your POJO would become:
public class Book implements Serializable {

    private Long id;
    private BookSerializerStrategy serializer;

    public String serialize() {
        return serializer.serialize(this);
    }

    public void setSerializer(BookSerializerStrategy serializer) {
        this.serializer = serializer;
    }
}
Using this approach, you isolate the serialization logic in one place and don't pollute your POJO with it. Additionally, by returning a String, you don't need to couple your POJO to the Document and Element classes.
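A quick usage sketch of that design (assuming the revised Book above with its implicit default constructor; which strategy to set would typically be decided by the resource handling the request):

Book book = new Book();
book.setSerializer(new XmlBookSerializerStrategy());
String xml = book.serialize();

book.setSerializer(new JsonBookSerializerStrategy());
String json = book.serialize();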
2) Get an instance of my resource (i.e Book) from request parameters, so I can get that instance to be persisted
Finding a pattern to handle the deserialization is more complex, in my opinion. I really don't see a better way than creating a factory with static methods in order to remove this logic from your POJO.
Another approach to both of your questions would be something like what JAXB uses: two different objects, an Unmarshaller in charge of deserialization and a Marshaller for serialization. Since Java 1.6, JAXB ships with the JDK by default.
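A minimal JAXB sketch along those lines (it assumes Book is annotated with @XmlRootElement and has a no-arg constructor, and that book is an existing instance):

JAXBContext context = JAXBContext.newInstance(Book.class);

// Serialization: object -> XML
Marshaller marshaller = context.createMarshaller();
marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
StringWriter xml = new StringWriter();
marshaller.marshal(book, xml);

// Deserialization: XML -> object
Unmarshaller unmarshaller = context.createUnmarshaller();
Book parsed = (Book) unmarshaller.unmarshal(new StringReader(xml.toString()));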
Finally, those are just suggestions. I've become really interested in your question actually and curious about other possible solutions.
Are you using Spring, or any other framework, in your project? If you used Spring, it would take care of serialization for you, as well as assigning request params to method params (parsing as needed).
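For instance, with Spring MVC the mapping in both directions is largely declarative (a sketch; the annotations are standard Spring Web ones, and it assumes Book can be handled by the configured message converters, e.g. Jackson):

@RestController
@RequestMapping("/books")
public class BookController {

    // Spring binds the request body to the Book parameter and serializes the returned Book
    // for the response (JSON by default, XML if a suitable message converter is on the classpath).
    @PostMapping
    public Book create(@RequestBody Book book) {
        // persist the book here, then return the representation
        return book;
    }
}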