ReentrantLock contains an abstract inner class Sync, and Sync has two subclasses, FairSync and NonfairSync. I want to know: is this the Decorator design pattern?
BTW, is there any good resources about Design Pattern usage in java source code?
No, it's not. Sync (and FairSync/NonfairSync as well) are just inner classes used as an attribute of ReentrantLock; basically, this is plain composition, and no special pattern is involved here.
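To make the composition point concrete, here is a simplified sketch of how the pieces relate (the real JDK class delegates far more work to Sync; this shows only the structure, not the actual source):

// Simplified sketch of the ReentrantLock structure, not the actual JDK source.
public class ReentrantLock {
    private final Sync sync; // composition: the lock holds and delegates to a Sync

    abstract static class Sync { /* shared acquire/release machinery */ }

    static final class NonfairSync extends Sync { }

    static final class FairSync extends Sync { }

    public ReentrantLock(boolean fair) {
        sync = fair ? new FairSync() : new NonfairSync();
    }
}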
The second question will result in opinion-based answers, since each person has their own tastes when it comes to design patterns (so there is no single good resource about them).
If you really want to start somewhere, start on Wikipedia, where each pattern is explained quite neutrally; in any case it will let you know when (and if) it is appropriate to use each one.
Related
More and more I see new labels for the same solutions, just under different names. In this case, why can't we simply say that the Execute Around idiom is the same as the Strategy design pattern?
When I read Jon Skeet's answer to the question "What is the 'Execute Around' idiom?", he states that:
it's the pattern where you write a method to do things which are always required, e.g. resource allocation and clean-up, and make the caller pass in "what we want to do with the resource".
In this case Jon Skeet uses an executeWithFile(String filename, InputStreamAction action) method as an example of the method that does the things which are always required, and the interface InputStreamAction as an abstraction over what we want to do with the resource.
import java.io.IOException;
import java.io.InputStream;

public interface InputStreamAction
{
    void useStream(InputStream stream) throws IOException;
}
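For context, the enclosing method can be sketched roughly like this (reconstructed from the description above; StreamRunner is an invented holder class, not Jon Skeet's exact code):

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamRunner
{
    // Opens the stream, hands it to the caller's action, and guarantees clean-up.
    public static void executeWithFile(String filename, InputStreamAction action)
            throws IOException
    {
        InputStream stream = new FileInputStream(filename);
        try
        {
            action.useStream(stream);
        }
        finally
        {
            stream.close();
        }
    }
}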
Comparing this with the Strategy design pattern, why can't we just say that the interface InputStreamAction defines a family of algorithms? Each implementation of the interface InputStreamAction corresponds to a concrete strategy encapsulating an algorithm. Finally, these algorithms are interchangeable, and we can make use of executeWithFile with any of those strategies.
So, why can't I interpret Jon Skeet's answer as an example of the application of the Strategy design pattern?
By the way, which of the idioms came first? The Execute Around or the Strategy design pattern?
Although the Execute Around idiom is usually associated with a functional programming style, the only documentation I found about it was in the Smalltalk Best Practice Patterns book from 1997, in the scope of the Smalltalk programming language, which follows an OO approach.
Since design patterns describe general solutions to recurrent problems, we could say that Execute Around and Strategy are the same solution applied to different problems. So I ask: do we really need a different name to identify the same solution when it is applied to a different problem?
Although I agree with iluwatar's answer that this is the way to communicate different ideas, I still think that I could transmit the idea of Execute Around just by saying: "I used a Strategy to specify what I want to do with a resource, which is always set up and cleaned up in the same manner."
So the Execute Around idiom is reducing ambiguity (and that's good), but at the same time it is proliferating the names of design patterns. Is that a good practice?
The way I see it, although both may be solutions to similar problems, these are different idioms and, therefore, should have different names.
I call them idioms because Execute Around is just a programming idiom, used in many typical situations (similar to the one presented) and in several languages. And, as far as I know, Execute Around has never been formalized as a software design pattern, at least not yet.
And why are they different? Here's my view:
The Strategy design pattern (from Wikipedia):
defines a family of algorithms,
encapsulates each algorithm, and
makes the algorithms interchangeable within that family.
Typically, the strategy instance has a long-lasting relation with the context instance, usually passed in as a constructor dependency. Even though the context may have setters for the strategy, the same strategy can be used across several calls to the context, without the caller (client) even knowing which one is being used by that particular context at the moment the call is made. Moreover, the same strategy instance may be used by the context in several methods of its public interface, again without the caller knowing anything about its usage.
On the other hand, the Execute Around idiom is suited for parameterized algorithms, where the caller (client) must pass the parameterization function on each call. Therefore, the function passed only influences the behavior of that particular call, not other calls.
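The structural difference can be sketched like this (Strategy, Context, and Resources are illustrative names; InputStreamAction is the interface quoted from Jon Skeet's answer above):

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Strategy: a long-lived collaborator, typically injected at construction time.
interface Strategy {
    void apply();
}

class Context {
    private final Strategy strategy; // lives as long as the context itself

    Context(Strategy strategy) {
        this.strategy = strategy;
    }

    void doWork() {
        strategy.apply(); // callers never choose, or even see, which strategy runs
    }
}

// Execute Around: the behavior is supplied per call and affects only that call.
class Resources {
    static void withStream(InputStreamAction action) throws IOException {
        InputStream stream = new ByteArrayInputStream(new byte[0]); // stand-in setup
        try {
            action.useStream(stream); // caller-supplied behavior, this call only
        } finally {
            stream.close(); // always-required clean-up
        }
    }
}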
Although the differences presented may seem theoretical and rhetorical, if you put the context being called into a multithreaded scenario, that is where I think the differences become most obvious and are most easily seen and "felt".
Without lots of locks and synchronization, you cannot use the Strategy design pattern in a multithreaded scenario, at least if the context's strategy is allowed to change. If it isn't allowed to change, you see the long-lasting nature of the relation between the context and the strategy, as they typically live for the same time.
The Execute Around idiom, if properly implemented, should not suffer from this "disease", at least as long as the passed function doesn't have any side effects.
Wrapping up: although the Strategy design pattern and the Execute Around idiom may seem alike, may be used to solve similar problems, and may seem to have a similar static structure, they are different in nature, the former having a much more OO style and the latter a more functional style, and therefore they should have different names!
I agree with Miguel Gamboa that the proliferation of names that mean the same thing is not good and should be avoided. But, at least in my opinion, this is not such a case.
Hope this helps!
While Strategy and Execute Around are technically very similar, they communicate different ideas. When we discuss interchangeable algorithms, the term to use is Strategy. When we discuss allocating and freeing resources around a business method, we should use the term Execute Around.
To me, the Strategy pattern being a "family of algorithms" implies that there are multiple different ways to achieve a particular goal. For example, there are multiple different algorithms/strategies that can achieve the particular goal of sorting a list of values. But in the example given – where executeWithFile handles the opening and closing of a file – I don't think that there is any particular goal for the InputStreamAction family. The concrete implementations probably all have different goals.
In Java, the Execute Around pattern requires objects that are concrete implementations of interfaces. I think that's why it looks so similar to the Strategy pattern. But in other languages, only a plain old function is required. Take Ruby, for example:
execute_with_file('whatever.txt') do |stream|
stream.write('whatever')
end
There is no InputStreamAction interface, and there is no concrete WhateverWriterAction. There is just a function that takes another function as a parameter. Can a plain old function parameter be considered a "strategy"? It could be, but I wouldn't call it that. And I certainly don't think of it in terms of the Strategy pattern when I'm using or creating an Execute Around implementation.
In summary, if you wanted to be very literal, you could say that Execute Around is a specific type of Strategy pattern. If you consider the intent behind the two patterns, however, they are separate ideas: the Strategy pattern is a family of algorithms that achieve a particular goal, and Execute Around is a generic way to run something before and after a chunk of arbitrary code.
I'm studying the Visitor pattern and I wonder how this pattern is related to the open/closed principle. I read on several websites that "It is one way to follow the open/closed principle" (quoted from Wikipedia).
On another website I learned that it follows the open/closed principle in the sense that it is easy to add new visitors to your program in order to "extend existing functionality without changing existing code". That same website mentions that the Visitor pattern has a major drawback: "If a new visitable object is added to the framework structure all the implemented visitors need to be modified." A solution to this problem is provided by using Java's reflection framework.
Now, isn't this a bit of a hack? I mean, I found this solution on some other blogs as well, but the code looks more like a workaround than a proper solution!
Is there another solution to this problem of adding new visitables to an existing implementation of the visitor pattern?
Visitor is one of the most boilerplate-ridden patterns of all and has the drawbacks regarding non-extensibility that you mention. It is itself a sloppy workaround to introduce double dispatch into a single-dispatch language. When you consider all its drawbacks, resorting to reflection is not such a terrible choice.
In fact, reflection is not a very bad choice in any case: consider how much of today's code is written in dynamic languages, in other words using nothing but reflection, and those applications don't fall apart because of it.
Type safety has its merits, certainly, but when you find yourself hitting the wall of static typing and single dispatch, embrace reflection without remorse. Note also that, with proper caching of Method objects, reflective method invocation is almost as fast as static invocation.
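To illustrate that last point, a reflective visitor with Method caching might be shaped like this (a sketch only; the class names and the convention of public visit methods overloaded by element type are assumptions, not taken from any particular library):

import java.lang.reflect.Method;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public abstract class ReflectiveVisitor {
    // Cache resolved Method objects so reflective dispatch stays close to
    // static invocation speed after the first call per element type.
    private final Map<Class<?>, Method> cache = new ConcurrentHashMap<>();

    public void dispatch(Object element) {
        Method visit = cache.computeIfAbsent(element.getClass(), this::findVisitMethod);
        try {
            visit.invoke(this, element);
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("visit failed for " + element.getClass(), e);
        }
    }

    private Method findVisitMethod(Class<?> type) {
        // Walk up the hierarchy so a brand-new element subclass falls back to a
        // handler for its supertype instead of breaking every existing visitor.
        for (Class<?> c = type; c != null; c = c.getSuperclass()) {
            try {
                return getClass().getMethod("visit", c);
            } catch (NoSuchMethodException ignored) {
                // keep climbing
            }
        }
        throw new IllegalArgumentException("No visit method for " + type);
    }
}

A concrete visitor then just declares public visit(SomeElement e) methods for the element types it cares about, and the element side needs no accept() boilerplate at all.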
It depends on precisely what job the Visitor is supposed to accomplish, but in most cases, this is what interfaces are for. Consider a SortedSet; the implementation needs to be able to compare the different objects in the set to know their ordering, but it doesn't need to understand anything else about the objects. The solution (for sorting by natural order) is to use the Comparable interface.
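For example (Employee is a made-up element type here): the set relies only on the compareTo contract and needs to know nothing else about its elements.

import java.util.SortedSet;
import java.util.TreeSet;

public class NaturalOrderDemo {
    static class Employee implements Comparable<Employee> {
        final String name;
        final int salary;

        Employee(String name, int salary) {
            this.name = name;
            this.salary = salary;
        }

        // Natural order by salary; the SortedSet needs nothing else from Employee.
        @Override
        public int compareTo(Employee other) {
            return Integer.compare(this.salary, other.salary);
        }
    }

    public static void main(String[] args) {
        SortedSet<Employee> staff = new TreeSet<>();
        staff.add(new Employee("Ann", 50_000));
        staff.add(new Employee("Bob", 40_000));
        System.out.println(staff.first().name); // Bob - lowest salary first
    }
}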
Many claim that most of the GoF design patterns are just workarounds for the absence of first-class functions. Now that Java is about to get lambda expressions, which of those patterns will be influenced by them? Which ones can be dramatically simplified or generalized? And which ones will basically remain the same? Any practical example is welcome.
I think you will see the most changes in the behavioral patterns.
Template Method - Template methods will be used less and less; instead we will see callers pass functions to the AbstractTemplate rather than subclassing it. I blogged about this a loooong time ago here: http://hamletdarcy.blogspot.ch/2007/11/groovy-closures-end-of-template-method.html
Observer Pattern - Observer becomes simplified because you no longer need to keep a list of observer objects that get updated on new events; instead you keep a list of functions to call back on new events. So there is no more Observer interface, just function objects.
State/Strategy Pattern - I group these together because they are structurally equivalent, just different in intent. Strategy usage becomes much more common because it is easier to implement: you don't need a parent strategy and strategy subclasses, you just need functions. Simply passing a function as a parameter is, in effect, using the Strategy pattern. A sketch of both this and the Observer simplification follows below.
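A minimal sketch of those two points with Java 8 lambdas (Button, onClick, and sortBy are made-up names for illustration):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.function.Consumer;

class LambdaPatterns {

    // Observer reduced to a list of callback functions; no Observer interface.
    static class Button {
        private final List<Consumer<String>> clickListeners = new ArrayList<>();

        void onClick(Consumer<String> listener) {
            clickListeners.add(listener);
        }

        void click() {
            for (Consumer<String> listener : clickListeners) {
                listener.accept("clicked");
            }
        }
    }

    // Strategy reduced to a function parameter.
    static void sortBy(List<String> items, Comparator<String> strategy) {
        items.sort(strategy);
    }

    public static void main(String[] args) {
        Button button = new Button();
        button.onClick(event -> System.out.println("got " + event));
        button.click(); // prints "got clicked"

        List<String> names = new ArrayList<>(Arrays.asList("bb", "a", "ccc"));
        sortBy(names, Comparator.comparingInt(String::length)); // strategy as a lambda
        System.out.println(names); // [a, bb, ccc]
    }
}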
Overall, I think any pattern that requires a one-method interface becomes easier to implement. This will have two effects: 1) we will use these functional patterns more, and 2) we will stop referring to them as patterns and just call it "passing a function".
You do what you want, but I think "JavaScript the Good Parts" gives a pretty nice introduction to leveraging functions in a language. You might pick it up and read it!
I tried to answer this question myself by writing a series of articles in which I analyzed some GoF patterns and their functional counterparts, with practical code examples. In particular I revisited: Command and Strategy, Template and Observer, Decorator and Chain of Responsibility, Interpreter and Visitor.
The open-closed principle states that "Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification".
However, Joshua Bloch in his famous book "Effective Java" gives the following advice: "Design and document for inheritance, or else prohibit it", and encourages programmers to use the "final" modifier to prohibit subclassing.
I think these two principles clearly contradict each other (am I wrong?). Which principle do you follow when writing your code, and why? Do you leave your classes open, disallow inheritance on some of them (which ones?), or use the final modifier whenever possible?
Frankly, I think the open/closed principle is more an anachronism than not. It stems from the 80s and 90s, when OO frameworks were built on the principle that everything must inherit from something else and that everything should be subclassable.
This was most typified in UI frameworks of the era like MFC and Java Swing. In Swing, you have ridiculous inheritance where (iirc) button extends checkbox (or the other way around), giving one of them behaviour that isn't used (I think it's the setDisabled() call on checkbox). Why do they share an ancestry? No reason other than, well, they had some methods in common.
These days composition is favoured over inheritance. Whereas Java allows inheritance by default, .NET took the (more modern) approach of disallowing it by default, with methods being non-virtual unless explicitly marked, which I think is more correct (and more consistent with Josh Bloch's principles).
DI/IoC have also further made the case for composition.
Josh Bloch also points out that inheritance breaks encapsulation and gives some good examples of why. It's also been demonstrated that changing the behaviour of Java collections is more consistent if done by delegation rather than extending the classes.
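Bloch's collections argument can be sketched like this (CountingSet is a made-up name in the spirit of his forwarding/wrapper classes):

import java.util.Collection;
import java.util.Set;

// Count additions by delegating to any Set, instead of subclassing HashSet,
// whose addAll may (or may not) call add internally - an implementation detail
// a subclass would silently depend on, and which delegation never sees.
class CountingSet<E> {
    private final Set<E> delegate;
    private int addCount = 0;

    CountingSet(Set<E> delegate) {
        this.delegate = delegate;
    }

    boolean add(E e) {
        addCount++;
        return delegate.add(e);
    }

    boolean addAll(Collection<? extends E> c) {
        addCount += c.size();
        return delegate.addAll(c);
    }

    int getAddCount() {
        return addCount;
    }
}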
Personally, these days I largely view inheritance as little more than an implementation detail.
I don't think the two statements contradict each other. A type can be open for extension and still be closed for inheritance.
One way to do this is to employ dependency injection. Instead of creating instances of its own helper types, a type can have these supplied upon creation. This allows you to change the parts (i.e. open for extension) of the type without changing the type itself (i.e. close for modification).
With the open/closed principle (open for extension, closed for modification) you can still use the final modifier. Here is one example:
public final class ClosedClass {
    private final IMyExtension myExtension;

    public ClosedClass(IMyExtension myExtension) {
        this.myExtension = myExtension;
    }

    // methods that use the IMyExtension object
}

public interface IMyExtension {
    void doStuff();
}
The ClosedClass is closed for modification inside the class, but open for extension through another one; in this case, anything that implements the IMyExtension interface. This trick is a variation of dependency injection, since we're feeding the closed class with another object, in this case through the constructor. Since the extension is an interface it can't be final, but its implementing class can be.
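For completeness, extending the closed class's behavior then looks like this (LoggingExtension is a made-up implementation):

public class LoggingExtension implements IMyExtension {
    @Override
    public void doStuff() {
        System.out.println("doing stuff");
    }
}

// New behavior, no modification of ClosedClass:
//   ClosedClass closed = new ClosedClass(new LoggingExtension());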
Using final on classes to close them in Java is similar to using sealed in C#, and there are similar discussions about it on the .NET side.
I respect Joshua Bloch a great deal, and I consider Effective Java to pretty much be the Java bible. But I think that automatically defaulting to private access is often a mistake. I tend to make things protected by default so that they can at least be accessed by extending the class. This mostly grew out of a need to unit test components, but I also find it handy for overriding the default behavior of classes. I find it very annoying when I'm working in my own company's codebase and end up having to copy & modify the source because the author chose to "hide" everything. If it's at all in my power, I lobby to have the access changed to protected to avoid the duplication, which is far worse IMHO.
Also keep in mind that Bloch's background is in designing very public bedrock API libraries; the bar for getting such code "correct" must be set very high, so chances are it's not really the same situation as most code you'll be writing. Important libraries such as the JRE itself tend to be more restrictive in order to ensure that the language is not abused. See all the deprecated APIs in the JRE? It's almost impossible to change or remove them. Your codebase is probably not set in stone, so you do have the opportunity to fix things if it turns out you made a mistake initially.
Nowadays I use the final modifier by default, almost reflexively as part of the boilerplate. It makes things easier to reason about, when you know that a given method will always function as seen in the code you're looking at right now.
Of course, sometimes there are situations where a class hierarchy is exactly what you want, and it would be silly not to use one then. But be scared of hierarchies of more than two levels, or ones where non-abstract classes are further subclassed. A class should be either abstract or final.
Most of the time, using composition is the way to go. Put all the common machinery into one class, put the different cases into different classes, then compose instances to form a working whole.
You can call this "dependency injection", or "strategy pattern" or "visitor pattern" or whatever, but what it boils down to is using composition instead of inheritance to avoid repetition.
The two statements
Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification.
and
Design and document for inheritance, or else prohibit it.
are not in direct contradiction with one another. You can follow the open-closed principle as long as you design and document for it (as per Bloch's advice).
I don't think that Bloch states that you should prefer to prohibit inheritance by using the final modifier, just that you should explicitly choose to allow or disallow inheritance in each class you create. His advice is that you should think about it and decide for yourself, instead of just accepting the default behavior of the compiler.
I don't think that the Open/closed principle as originally presented allows the interpretation that final classes can be extended through injection of dependencies.
In my understanding, the principle is all about not allowing direct changes to code that has been put into production, and the way to achieve that while still permitting modifications to functionality is to use implementation inheritance.
As pointed out in the first answer, this has historical roots. Decades ago, inheritance was in favor, developer testing was unheard of, and recompilation of the codebase often took too long.
Also, consider that in C++ the implementation details of a class (in particular, private fields) were commonly exposed in the ".h" header file, so if a programmer needed to change it, all clients would require recompilation. Notice this isn't the case with modern languages like Java or C#. Besides, I don't think developers back then could count on sophisticated IDEs capable of performing on-the-fly dependency analysis, avoiding the need for frequent full rebuilds.
In my own experience, I prefer to do the exact opposite: "classes should be closed for extension (final) by default, but open for modification". Think about it: today we favor practices like version control (makes it easy to recover/compare previous versions of a class), refactoring (which encourages us to modify code to improve design, or as a prelude to introducing new features), and developer testing, which provides a safety net when modifying existing code.
I just found myself creating a class called "InstructionBuilderFactoryMapFactory". That's 4 "pattern suffixes" on one class. It immediately reminded me of this:
http://www.jroller.com/landers/entry/the_design_pattern_facade_pattern
Is this a design smell? Should I impose a limit on this number?
I know some programmers have similar rules for other things (e.g. no more than N levels of pointer indirection in C.)
All the classes seem necessary to me. I have a (fixed) map from strings to factories - something I do all the time. The list is getting long and I want to move it out of the constructor of the class that uses the builders (that are created by the factories that are obtained from the map...) And as usual I'm avoiding Singletons.
A good tip is: your class's public API (and that includes its name) should reveal intention, not implementation. I (as a client) don't care whether you implemented the Builder pattern or the Factory pattern.
Not only does the class name look bad, it also says nothing about what the class does. Its name is based on its implementation and internal structure.
I rarely use a pattern name in a class name, with the exception of (sometimes) factories.
Edit:
Found an interesting article about naming on Coding Horror, please check it out!
I see it as a design smell - it would make me question whether all those levels of abstraction are pulling their weight.
I can't see why you would want to name a class 'InstructionBuilderFactoryMapFactory'. Are there other kinds of factories - something that doesn't create an InstructionBuilderFactoryMap? Or are there other kinds of InstructionBuilderFactories that need to be mapped?
These are the questions you should be thinking about when you start creating classes like these. It is possible to aggregate all those different factory factories into just a single one and then provide separate methods for creating the factories. It is also possible to put those factory factories in a different package and give them a more succinct name. Think of alternative ways of doing this; one possibility is sketched below.
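For instance, the string-to-factory map from the question could live behind a single, intention-revealing class (InstructionBuilders and its method names are invented for illustration):

import java.util.HashMap;
import java.util.Map;

// One intention-revealing entry point instead of a chain of factory factories.
public class InstructionBuilders {
    private final Map<String, InstructionBuilderFactory> factories = new HashMap<>();

    public void register(String name, InstructionBuilderFactory factory) {
        factories.put(name, factory);
    }

    public InstructionBuilder builderFor(String name) {
        InstructionBuilderFactory factory = factories.get(name);
        if (factory == null) {
            throw new IllegalArgumentException("No builder registered for: " + name);
        }
        return factory.create();
    }
}

interface InstructionBuilderFactory {
    InstructionBuilder create();
}

interface InstructionBuilder { /* builds an instruction */ }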
Lots of patterns in a class name is most definitely a smell, but a smell isn't a definite indicator. It's a signal to "stop for a minute and rethink the design". A lot of the time, when you sit back and think, a clearer solution becomes apparent. Sometimes, due to the constraints at hand (technical/time/manpower/etc.), the smell has to be ignored for now.
As for the specific example, I don't think suggestions from the peanut gallery are a good idea without more context.
I've been thinking the same thing. In my case, the abundance of factories is caused by "build for testability". For example, I have a constructor like this:
ParserBuilderFactoryImpl(ParserFactory psF) {
...
}
Here I have a parser - the ultimate class that I need.
The parser is built by calling methods on a builder.
The builders (a new one for each parser that needs to be built) are obtained from the builder factory.
Now, what the h..l is ParserFactory? Ah, I am glad you asked! In order to test the parser builder implementation, I need to call its methods and then see what sort of parser gets created. The only way to do that without breaking the encapsulation of the particular parser class the builder is creating is to put an interception point right before the parser is created, to see what goes into its constructor. Hence ParserFactory: it's just a way for me to observe, in a unit test, what gets passed to the constructor of a parser.
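A sketch of that interception point (all names besides those in the question, including the grammar parameter, are invented for illustration):

// The seam: production code asks the factory instead of calling `new Parser(...)`.
interface ParserFactory {
    Parser create(String grammar);
}

// In a unit test, a recording factory captures the constructor arguments.
class RecordingParserFactory implements ParserFactory {
    String lastGrammar; // what the builder tried to construct a parser with

    @Override
    public Parser create(String grammar) {
        this.lastGrammar = grammar; // observe without breaking encapsulation
        return new Parser(grammar);
    }
}

class Parser {
    Parser(String grammar) { /* ... */ }
}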
I am not quite sure how to solve this, but I have a feeling we'd be better off passing around classes rather than factories, and Java would do better if it had proper class methods rather than static members.