How do you name a class/method that only calls other methods? - java

Say I follow the Single Responsibility Principle and I have the following classes.
public class Extractor {
    public Container extract(List<Container> list) {
        // ... some extraction
    }
}

public class Converter {
    public String convert(Container container) {
        // ... some conversion
    }
}
As you can see, this follows the principle, and the names of the classes/methods say what they do. Now I have another class with a method like this.
public class SomeClass {
    private Extractor extractor = new Extractor();
    private Converter converter = new Converter();
    private Queue queue = new Queue();

    public void someMethod(List<Container> list) {
        Container tmp = extractor.extract(list);
        String result = converter.convert(tmp);
        queue.add(result);
    }
}
As you can see, someMethod only calls extract, convert and add. My question is: how do you name such a class/method? It doesn't actually extract, convert or add anything itself; it just delegates to those methods.
If you name the method after its responsibility, what would that be?

Well, since you add to a queue and don't return anything, I'd call it addToQueue. The fact that you extract and convert along the way is an implementation detail that I don't think needs to be exposed in the name.

What about processAndQueueMessage?
Also (not related to the naming question): you shouldn't create the Extractor and Converter with new inside SomeClass; instead, inject them (through the constructor or setters) and depend on interfaces. That makes the class easier to test and reduces coupling between implementations.
// Assuming Converter and Extractor are interfaces to the actual implementations
public class SomeClass {
    private final Extractor extractor;
    private final Converter converter;
    private Queue queue = new Queue();

    public SomeClass(Extractor extractor, Converter converter) {
        this.converter = converter;
        this.extractor = extractor;
    }

    public void someMethod(List<Container> list) {
        Container tmp = extractor.extract(list);
        String result = converter.convert(tmp);
        queue.add(result);
    }
}
And you create it using:
final SomeClass myProcessor = new SomeClass(new MyExtractorImplementation(), new MyConverterImplementation());
(Or use a DI container, like Spring or Pico)

What you do is think about the composite meaning of the sequence of method calls, turn that into a concise verb or verb phrase and use that as the name. If you can't come up with a concise name then you could use a generic / neutral name (like "process") or use something completely bogus (like "sploddify").

If you want the name to be really generic, I'd go with addToQueue() or populateQueue() since getting something into that object seems to be the point of the method.
But really, at that level I'd name it after the business logic it's trying to accomplish, in which case the name depends on what it's being used for.
If you can't come up with a good name, it is an indication that your procedural abstraction is rather arbitrary / artificial, and a possible hint that there might be a better way to do it. Or maybe not.

Sounds like some kind of builder class. You get data in one format, convert it and then create some kind of output format. So how about "SomethingSomethingBuilder"?
I'm assuming someone downvoted me because I forgot to provide a good name for the method. Sorry about that.
So this method incrementally adds data into your builder class. I would call it "Add", "AddData" or "Push" (I'd probably go with push because that has a very similar meaning in many standard classes).
An alternative to "Builder" could be "SomeKindOfCreator". Obviously you would name it based on whatever it is your class actually creates.

Related

Spring Integration - aggregate and transform

What would be the simplest integration component arrangement in my use case:
Receive messages from multiple sources and in multiple formats (all messages are JSON-serialized objects).
Store messages in a buffer for up to 10 seconds (aggregate).
Group messages by a class-specific property getter (e.g. class1.someId(), class2.otherId(), ...).
Release all grouped messages and transform them into a new aggregated message.
So far (points 1 and 2) I'm using an aggregator, but I don't know whether there is an out-of-the-box solution for point 3, or whether I will have to cast each Message and check the payload type: if it is class1, use someId as the correlation strategy; if class2, use otherId.
For point 4 I could code something manually, but a Transformer seems like the right component; I just don't know whether there is something like an aggregating transformer where I can specify mapping rules for each input type.
UPDATE
Something like this:
class One {
    public String getA() { return "1"; }
}

class Two {
    public Integer getB() { return 1; }
}

class ReduceTo {
    public void setId(Integer id) {}
    public void setOne(One one) {}
    public void setTwo(Two two) {}
}

public class ReducingAggregator {

    @CorrelationStrategyMethod
    public String strategy(One one) {
        return one.getA();
    }

    @CorrelationStrategyMethod
    public String strategy(Two two) {
        return two.getB().toString();
    }

    @AggregatorMethod
    public void reduce(ReduceTo out, One in) {
        out.setId(Integer.valueOf(in.getA()));
        out.setOne(in);
    }

    @AggregatorMethod
    public void reduce(ReduceTo out, Two in) {
        out.setId(in.getB());
        out.setTwo(in);
    }
}
The annotations here would have, I suppose, a different use case than the current Spring ones. ReduceTo could be any object, including a collection. In the configuration we could specify what it should be when passed in for the first time: an empty list or something else (like reduce in Java streams).
I'm not sure what you would like to see as an out-of-the-box solution. These are your classes and your methods; how could the framework make that decision for them?
Well, yes, you need to implement a CorrelationStrategy. Or you can consider using an ExpressionEvaluatingCorrelationStrategy and not write the Java code at all :-).
Please elaborate on what you would like to see as an out-of-the-box feature.
The aggregating transformer is encapsulated exactly in the MessageGroupProcessor function of the Aggregator; by default it is the DefaultAggregatingMessageGroupProcessor. Yes, you can code your own, or again use an ExpressionEvaluatingMessageGroupProcessor and avoid writing Java code :-)
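For illustration, here is a rough sketch of such a POJO aggregator using the real Spring Integration annotations (@CorrelationStrategy and @Aggregator) instead of the hypothetical ones from the update. It assumes the POJO is referenced from an aggregator endpoint in the integration configuration, and it reuses the One, Two and ReduceTo classes above:

import java.util.List;

import org.springframework.integration.annotation.Aggregator;
import org.springframework.integration.annotation.CorrelationStrategy;

public class ReducingAggregator {

    // Chooses the correlation key per payload type, as described in point 3
    @CorrelationStrategy
    public String correlate(Object payload) {
        if (payload instanceof One) {
            return ((One) payload).getA();
        }
        if (payload instanceof Two) {
            return ((Two) payload).getB().toString();
        }
        throw new IllegalArgumentException("Unsupported payload type: " + payload.getClass());
    }

    // Reduces the released group into one aggregated message, as described in point 4
    @Aggregator
    public ReduceTo reduce(List<Object> payloads) {
        ReduceTo out = new ReduceTo();
        for (Object payload : payloads) {
            if (payload instanceof One) {
                out.setId(Integer.valueOf(((One) payload).getA()));
                out.setOne((One) payload);
            } else if (payload instanceof Two) {
                out.setId(((Two) payload).getB());
                out.setTwo((Two) payload);
            }
        }
        return out;
    }
}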

API - How can I let API-users declare their own types (which are then used by the API)?

I want to create a library (for my own use to begin with, but maybe for publishing later, so I want to do it the proper way).
I'll use following (abstract) example to explain my question:
public void onSomeEvent(SomeEvent someEvent) {
    String action = someEvent.getAction();
    EventTypes eventTypes = getEventTypes(); // problem lies with this method
    for (EventType eventType : eventTypes) {
        if (eventType.getAction().equals(action)) {
            eventType.onEvent(someEvent);
        }
    }
}

private EventTypes getEventTypes() {
    // User should have defined his own event-type classes by extending EventType:
    // What is the best way to let the user list/define these EventTypes
    // so my API can access them (e.g. with this method)?
}
My question is as shown in the comment of the example:
What is the best way to let the user of my API define his own EventTypes for this EventReceiver of the API while meeting (in the best case all of) the following criteria:
event-types are easy to define for users
not using reflection
user-types are not registered at runtime, but statically listed somewhere (without annotation-processor)
I don't know if all of these criteria can be fulfilled (I guess not).
But if you neglect one or more of the criteria (the first one shouldn't be neglected in any case), please explain to me why there is no better way of doing it (without writing my own annotation processor).
I hope my question is clear.
If you think it isn't, please suggest how I can make it more precise.
If you think I'm missing something or should take an entirely different approach, I'd be glad for your corrections.
Thank you in advance.
public List<? extends EventType> getEventTypes() {
    ...
}
Time to learn about Generics, my friend:
Lesson: Generics (Updated) (The Java Tutorials > Learning the Java Language)
I would define a new interface that the user of your API can implement:
public interface EventType {
    String getAction();
    [...]
}
Then implement your method for accessing the event types the following way. You will also need a list as a member of your class and a method to add new EventTypes:
private List<EventType> eventTypes = new ArrayList<>();

public void addEventType(EventType type) {
    this.eventTypes.add(type);
}

public List<EventType> getEventTypes() {
    return this.eventTypes;
}
The user of your API is then able to define new EventTypes by creating new classes implementing your interface.
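For example, a user of the API could plug in their own type roughly like this. ClickEventType and the "click" action are made-up names, and it is assumed that EventType also declares the onEvent(SomeEvent) method used in the question's loop:

public final class ClickEventType implements EventType {

    @Override
    public String getAction() {
        return "click";
    }

    @Override
    public void onEvent(SomeEvent someEvent) {
        // user-specific handling of the event
    }
}

// Registration at startup, assuming the receiver exposes the addEventType method above:
// receiver.addEventType(new ClickEventType());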

Unit Testing and Mocking - Approach and Practice for Deeper Hierarchies - JUnit and EasyMock

I have a hierarchical list of converters like the following for example:
@Named
public class NameConverter {

    @Inject
    private AddressConverter addressConverter;

    public package2.Name convert(package1.Name source) {
        package2.Name target = new package2.Name();
        target.setFirstName(source.getName());
        target.setLastName(source.getName());
        target.setAddress(addressConverter.convert(source.getAddress()));
        return target;
    }
}
and AddressConverter has ZipCodeConverter and so on ...
In the unit-test class:
1) I would create a mock for addressConverter with EasyMock.createNiceMock.
2) Set the expectation:
EasyMock.expect(addressConverter.convert(EasyMock.anyObject(package1.Address.class))).andReturn(addressList); // what should this addressList be?
3) Use Whitebox.setInternalState for the private fields.
Question:
I would assert that the first name and last name are equal to the expected values, which is straightforward.
But NameConverter is also responsible for setting the converted Address. There is a possibility that NameConverter changes the values of the returned converted Address and of the other POJOs inside it.
So how do I ensure, through asserts or something else, that NameConverter just sets the Address (and the POJOs encapsulated by it) as it is and does not tamper with the values?
Possible solution: in the EasyMock.expect return value, should I create and set values for all the POJOs down to the last one in the hierarchy and assert on each of the values?
But that doesn't seem like unit testing!
Please advise on how to unit test this converter.
It is unit testing to set the return value of a mock object and to assert that that return value is put in the right place by your NameConverter.
However, perhaps what you're coming across is a failure of appropriate layering. If you have a set of 'converter' classes which then need to be co-ordinated in some fashion, you may want to make each converter independent and bring the co-ordination responsibility elsewhere. So, your NameConverter should be completely independent of AddressConverter, and you perhaps need a third class which is responsible for calling a set of converters, each of which just does its own job.
You could restructure each converter to be given an instance of both its input and output object, and their unit tests assert that they only act on known fields within each object. Then the co-ordinator object doesn't need to know anything about what each converter does; it just needs to locate / create instances of the input and output objects and call each converter in turn. That's very amenable to a unit-testing approach, without resulting in a lot of layering concerns.
Example code:
public interface Converter<S, T> {
    void convert(S source, T target);
}

public class NameConverter implements Converter<p1.Name, p2.Name> {
    @Override
    public void convert(p1.Name source, p2.Name target) {
        target.setFirstName(source.getName());
        target.setLastName(source.getName());
    }
}

public class AddressConverter implements Converter<p1.Name, p2.Name> {
    @Override
    public void convert(p1.Name source, p2.Name target) {
        // more stuff.
    }
}

public class ConversionService {

    private final Set<Converter> converters;

    @Inject
    public ConversionService(Set<Converter> converters) {
        this.converters = converters;
    }

    public p2.Name convert(p1.Name source) {
        p2.Name target = new p2.Name();
        converters.forEach(converter -> converter.convert(source, target));
        return target;
    }
}
Then your unit tests really just need to know that all your lower-level converters were called.
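For instance, a sketch of such a test with EasyMock and JUnit, against the Converter and ConversionService classes above (it assumes p1.Name has a no-argument constructor, which is not shown in the question):

import static org.easymock.EasyMock.anyObject;
import static org.easymock.EasyMock.createMock;
import static org.easymock.EasyMock.replay;
import static org.easymock.EasyMock.verify;

import java.util.LinkedHashSet;
import java.util.Set;
import org.junit.Test;

public class ConversionServiceTest {

    @Test
    @SuppressWarnings({"unchecked", "rawtypes"})
    public void callsEveryRegisteredConverter() {
        Converter<p1.Name, p2.Name> first = createMock(Converter.class);
        Converter<p1.Name, p2.Name> second = createMock(Converter.class);

        // Record: each converter must be invoked once, with whatever source/target pair.
        first.convert(anyObject(p1.Name.class), anyObject(p2.Name.class));
        second.convert(anyObject(p1.Name.class), anyObject(p2.Name.class));
        replay(first, second);

        Set<Converter> converters = new LinkedHashSet<>();
        converters.add(first);
        converters.add(second);

        new ConversionService(converters).convert(new p1.Name());

        // Verify: both lower-level converters were called.
        verify(first, second);
    }
}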
I would suggest three options:
Return a new empty instance of Address from your mock and check that the exact same instance is set on the target. Don't test whether the address values are modified; it's OK not to test every single way things could go wrong.
Return a strict mock of Address without any expectations set. It will throw if there is an attempt to modify it. Again, check for instance equality (both of these options are sketched after this list).
Don't use mocks at all and test the entire hierarchy as a whole. It does not look like unit testing, but it may be a good option. I think mocks are often overused and should be avoided when possible. Please see more here.
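A sketch of options 1 and 2 with EasyMock, assuming the setup described in the question (a mocked AddressConverter injected into the tested NameConverter via Whitebox) and that AddressConverter.convert returns a package2.Address; the test and method names here are made up:

import static org.easymock.EasyMock.anyObject;
import static org.easymock.EasyMock.createNiceMock;
import static org.easymock.EasyMock.createStrictMock;
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;
import static org.easymock.EasyMock.verify;
import static org.junit.Assert.assertSame;

import org.junit.Test;
import org.powermock.reflect.Whitebox;

public class NameConverterAddressTest {

    @Test
    public void passesConvertedAddressThroughUntouched() {
        NameConverter tested = new NameConverter();
        AddressConverter addressConverter = createNiceMock(AddressConverter.class);
        Whitebox.setInternalState(tested, addressConverter);

        // Option 2: a strict mock with no expectations throws if NameConverter touches it.
        package2.Address expected = createStrictMock(package2.Address.class);
        replay(expected);

        expect(addressConverter.convert(anyObject(package1.Address.class))).andReturn(expected);
        replay(addressConverter);

        package2.Name converted = tested.convert(new package1.Name("A Name", new package1.Address()));

        // Option 1: the very same instance must be set on the target.
        assertSame(expected, converted.getAddress());
        verify(expected, addressConverter);
    }
}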
I would recommend the following as a good unit test for NameConverter:
public final class NameConverterTest {

    @SUT
    NameConverter tested;

    @Test
    public void convertNameFromPackage1ToNameFromPackage2() {
        Address address = new Address();
        package1.Name source = new package1.Name("A Name", address);

        package2.Name converted = tested.convert(source);

        assertEquals(source.getName(), converted.getFirstName());
        assertEquals(source.getName(), converted.getLastName());
        assertNotNull(converted.getAddress());
    }
}
According to Martin Fowler's definition, the above is still a unit test for the NameConverter unit, even if it doesn't isolate it from its dependency on AddressConverter (which would have its own unit test).
(For simplicity, I used a hypothetical @SUT annotation which takes care of instantiating the "system under test" with injected dependencies; actual testing libraries for this do exist.)

Design Patterns, override a method without needing to recompile / relink

We are building a product that needs to run in production environments. We need to modify some of the functionality of an existing library. The existing library has classes and methods; we need to override one or more methods so that the caller uses our overridden methods instead of the original library's.
OriginalLibrary
package com.original.library;

public class OriginalLibrary {
    public int getValue() {
        return 1;
    }

    public int getAnotherValue() {
        return 2;
    }
}
Original Client
public class MyClient {

    private OriginalLibrary originalLibraryObject;

    public MyClient() {
        originalLibraryObject = new OriginalLibrary();
        System.out.println(originalLibraryObject.getValue());
        System.out.println(originalLibraryObject.getAnotherValue());
    }
}
Output
1
2
Now, I need to change getValue() to return 3, instead of 1
Needed Output
3
2
package com.original.library.improved;

public class OriginalLibrary extends com.original.library.OriginalLibrary {
    public int getValue() {
        return 3;
    }

    public int getAnotherValue() {
        return super.getAnotherValue();
    }
}
If I do the above, I need to tell my original client to reorder its classpath and use my new com.original.library.improved jar file before com.original.library.
I am almost convinced that this is the most non-intrusive way to launch my improved services on top of the OriginalLibrary, but I would have preferred a solution where I only need to tell the customer to add my jar file, with no need to recompile or relink the client code.
Similar (but not the same) questions from a Google search:
here
here
Javassist is an excellent library for bytecode manipulation. I have modified the code below according to your sample code; you will have to explore Javassist further for your actual requirement.
// Javassist imports: javassist.ClassPool, javassist.CtClass, javassist.CtMethod
CtClass etype = ClassPool.getDefault().get("com.original.library.OriginalLibrary");

// get the method from the class
CtMethod cm = etype.getDeclaredMethod("getValue");

// change the method body
cm.setBody("return 3;");
etype.rebuildClassFile();

// give the path where the classes are placed; in my Eclipse project it is "bin"
etype.writeFile("bin");

OriginalLibrary originalLibraryObject;
originalLibraryObject = new OriginalLibrary();
System.out.println(originalLibraryObject.getValue());
System.out.println(originalLibraryObject.getAnotherValue());
Now the output of getValue() is 3, because I changed the body of that method.
A couple of questions -
How is the client getting an instance of your library's class?
If they are using new OriginalLibrary(), then you're pretty much stuck with creating a new subclass of OriginalLibrary and then asking your client to use your new OriginalLibraryImproved class. This is a common problem encountered in projects and is one reason why a library should not allow its clients to instantiate its classes directly using the new operator.
If instead, your client is instantiating OriginalLibrary using a factory method provided by the library (say, OriginalLibrary.getInstance()), you may want to check if there are any hooks into the factory that allow you to change the object being returned.
Do you have full control of the source code of the original library?
If yes, then you definitely should (and I cannot emphasize this strongly enough) provide factory methods for any class in the library that is instantiable. Doing this allows you to change the actual object being returned without modifying the client (as long as the returned object's class is a subtype of the factory method's declared return type).
If not, then I suggest you do the following.
Create a subclass of OriginalLibrary (say, OriginalLibraryImproved).
Create a Factory class named OriginalLibraryFactory that has a static method named getInstance(). Write code to return an instance of OriginalLibraryImproved from this method.
Ask your client to replace all occurrences of new OriginalLibrary() with OriginalLibraryFactory.getInstance(). Note that this approach will only involve adding an extra import for the factory class. The client will still refer to the returned instance using the same OriginalLibrary reference as before.
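A minimal sketch of steps 1 and 2, using the class names suggested above (shown together here, but each public class would live in its own file):

import com.original.library.OriginalLibrary;

public class OriginalLibraryImproved extends OriginalLibrary {
    @Override
    public int getValue() {
        return 3; // the changed behaviour
    }
}

public final class OriginalLibraryFactory {

    private OriginalLibraryFactory() {
    }

    // Step 3: clients call this instead of new OriginalLibrary();
    // the reference type they hold stays OriginalLibrary.
    public static OriginalLibrary getInstance() {
        return new OriginalLibraryImproved();
    }
}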
The advantage of this approach is that it gives you complete flexibility to change the implementation details of OriginalLibraryImproved without affecting the client in any way. You could also swap OriginalLibraryImproved for a newer version like OriginalLibraryImprovedVer2 and the client would be oblivious to the fact that it is using a new class. You'll just have to make sure that OriginalLibraryImprovedVer2 subclasses OriginalLibrary.
An even more flexible approach is to use the Wrapper or Decorator pattern to avoid the pitfalls of inheritance. You can understand more about the Decorator pattern here.
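As a rough illustration of that wrapper idea (OriginalLibraryWrapper is a made-up name; only getValue() is replaced, everything else delegates):

public class OriginalLibraryWrapper {

    private final OriginalLibrary delegate;

    public OriginalLibraryWrapper(OriginalLibrary delegate) {
        this.delegate = delegate;
    }

    public int getValue() {
        return 3; // replaced behaviour
    }

    public int getAnotherValue() {
        return delegate.getAnotherValue(); // pass-through
    }
}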
In a nutshell, try to avoid forcing your clients to use new and try to avoid inheritance unless you have very compelling reasons.

Patterns: Populate instance from Parameters and export it to XML

I'm building a simple RESTful service, and to achieve that I need two things:
Get an instance of my resource (i.e. Book) from the request parameters, so that instance can be persisted
Build an XML document from that instance to send its representation to the clients
Right now, I'm doing both things in my POJO class:
public class Book implements Serializable {

    private Long id;

    public Book(Form form) {
        // Initializing attributes
        id = Long.parseLong(form.getFirstValue(Book.CODE_ELEMENT));
    }

    public Element toXml(Document document) {
        // Getting an XML representation of the Book
        Element bookElement = document.createElement(BOOK_ELEMENT);
        return bookElement;
    }
}
I remembered an OO principle that says behavior should live where the data is, but now my POJO depends on the Request and XML APIs, and that doesn't feel right (also, that class has persistence annotations).
Is there any standard approach/pattern to solve that issue?
EDIT:
The libraries I'm using are Restlet and Objectify.
I agree with you that the behavior should be where the data is. But at the same time, as you say, I just don't feel comfortable polluting a POJO's interface with methods used purely for serialization (which can grow considerably depending on how you want to do it: JSON, XML, etc.).
1) Build an XML document from that instance to send the representation to the clients
In order to decouple the object from serialization logic, I would adopt the Strategy Pattern:
interface BookSerializerStrategy {
    String serialize(Book book);
}

public class XmlBookSerializerStrategy implements BookSerializerStrategy {
    public String serialize(Book book) {
        // Do something to serialize your book.
    }
}

public class JsonBookSerializerStrategy implements BookSerializerStrategy {
    public String serialize(Book book) {
        // Do something to serialize your book.
    }
}
Your POJO would become:
public class Book implements Serializable {

    private Long id;
    private BookSerializerStrategy serializer;

    public String serialize() {
        return serializer.serialize(this);
    }

    public void setSerializer(BookSerializerStrategy serializer) {
        this.serializer = serializer;
    }
}
Using this approach you isolate the serialization logic in one place and don't pollute your POJO with it. Additionally, by returning a String you don't need to couple your POJO to the Document and Element classes.
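For example, a caller could then pick the strategy at runtime (a tiny usage sketch of the classes above, assuming Book keeps a no-argument constructor):

Book book = new Book();
book.setSerializer(new XmlBookSerializerStrategy()); // choose XML serialization
String xml = book.serialize();                       // delegates to the chosen strategy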
2) Get an instance of my resource (i.e Book) from request parameters, so I can get that instance to be persisted
Finding a pattern to handle the deserialization is more complex, in my opinion. I really don't see a better way than creating a factory with static methods in order to remove this logic from your POJO.
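A hypothetical sketch of such a factory, assuming the Restlet Form API from the question and that Book gains a no-argument constructor and a setId(Long) setter (BookFactory and fromForm are made-up names):

import org.restlet.data.Form;

public final class BookFactory {

    private BookFactory() {
    }

    // Builds a Book from the request parameters, keeping the POJO free of request handling
    public static Book fromForm(Form form) {
        Book book = new Book();
        book.setId(Long.parseLong(form.getFirstValue(Book.CODE_ELEMENT)));
        return book;
    }
}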
Another approach to answer your two questions would be something like JAXB uses: two different objects, an Unmarshaller in charge of deserialization and a Marshaller for serialization. Since Java 1.6, JAXB comes with the JDK by default.
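A minimal JAXB sketch of that marshal/unmarshal split, assuming Book is annotated with @XmlRootElement and has a no-argument constructor, as JAXB requires (BookXml is a made-up helper name):

import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Marshaller;
import javax.xml.bind.Unmarshaller;

public final class BookXml {

    // Serialization: Book -> XML string
    public static String toXml(Book book) throws JAXBException {
        JAXBContext context = JAXBContext.newInstance(Book.class);
        Marshaller marshaller = context.createMarshaller();
        StringWriter writer = new StringWriter();
        marshaller.marshal(book, writer);
        return writer.toString();
    }

    // Deserialization: XML string -> Book
    public static Book fromXml(String xml) throws JAXBException {
        JAXBContext context = JAXBContext.newInstance(Book.class);
        Unmarshaller unmarshaller = context.createUnmarshaller();
        return (Book) unmarshaller.unmarshal(new StringReader(xml));
    }
}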
Finally, those are just suggestions. I've become really interested in your question actually and curious about other possible solutions.
Are you using Spring, or any other framework, in your project? If you used Spring, it would take care of serialization for you, as well as assigning request params to method params (parsing as needed).
