I want to know the meaning of compile-time decisions - java

What does it mean to say "with inheritance you're locked into compile-time decisions about code behavior"?

I suggest this post from Donal Fellows on Programmers:
Some languages are pretty strongly static, and only allow the
specification of the inheritance relationship between two classes at
the time of definition of those classes. For C++, definition time is
practically the same as compilation time. (It's slightly different in
Java and C#, but not very much.) Other languages allow much more
dynamic reconfiguration of the relationship of classes (and class-like
objects in Javascript) to each other; some go as far as allowing the
class of an existing object to be modified, or the superclass of a
class to be changed. (This can cause total logical chaos, but can also
model real world nasties quite well.)
But it is important to contrast this to composition, where the
relationship between one object and another is not defined by their
class relationship (i.e., their type) but rather by the references
that each has in relation to the other. General composition is a very
powerful and ubiquitous method of arranging objects: when one object
needs to know something about another, it has a reference to that
other object and invokes methods upon it as necessary. As soon as you
start looking for this super-fundamental pattern, you'll find it
absolutely everywhere; the only way to avoid it is to put everything
in one object, which would be massively dumb! (There's also stricter
UML composition/aggregation, but that's not what the GoF book is
talking about there.)
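To make the contrast concrete in Java terms, here is a minimal sketch (the class names are purely illustrative): the inheritance relationship is fixed when the classes are defined, while the composed collaborator behind an interface is just a reference that can be rewired at runtime.
// Inheritance: the relationship is declared once, when the classes are
// defined, and every SportsCar is permanently a Car.
class Car {
    void drive() { System.out.println("driving"); }
}
class SportsCar extends Car { }

// Composition: the relationship lives in a reference and can change at runtime.
interface Engine {
    void start();
}

class ComposedCar {
    private Engine engine;                               // reference to a collaborator

    ComposedCar(Engine engine) { this.engine = engine; }

    void swapEngine(Engine replacement) { this.engine = replacement; }  // runtime decision

    void drive() { engine.start(); }
}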
One of the things about the composition relationship is that
particular objects do not need to be hard-bound to each other. The
pattern of concrete objects is very flexible, even in very static
languages like C++. (There is an upside to having things very static:
it is possible to analyse the code more closely and — at least
potentially — issue better code with less overhead.) To recap,
Javascript, as with many other dynamic languages, can pretend it
doesn't use compilation at all; just pretence, of course, but the
fundamental language model doesn't require transformation to a fixed
intermediate format (e.g., a “binary executable on disk”). That
compilation which is done is done at runtime, and can be easily redone
if things vary too much. (The fascinating thing is that such a good
job of compilation can be done, even starting from a very dynamic
basis…)
Some GoF patterns only really make sense in the context of a language
where things are fairly static. That's OK; it just means that not all
forces affecting the pattern are necessarily listed. One of the key
points about studying patterns is that it helps us be aware of these
important differences and caveats. (Other patterns are more universal.
Keep your eyes open for those.)


Wondering about MicroStream class StorageConfiguration

There are two questions about the MicroStream database and its class StorageConfiguration:
1) What is the difference between the methods New() and Builder() and the DEFAULT construct?
2) Why are the methods written with an uppercase first letter? That does not seem to follow Java naming conventions.
Thanks for any answers!
I am the MicroStream lead developer and I can gladly answer those questions.
To 1)
"New" is a "static factory method" for the type itself.
"Builder" is a static factory method for a "builder" instance of the type.
Both terms can be perfectly googled for more information about them.
Two quick starting points:
"static factory method":
https://www.baeldung.com/java-constructors-vs-static-factory-methods
"builder pattern":
https://en.wikipedia.org/wiki/Builder_pattern
--
To what is actually your second question, about the "DEFAULT" construct:
If I may, there is no "DEFAULT" construct, but "Default".
(Conventions are important ... mostly. See below.)
"Default" is simply the default implementation (= class) of the interface StorageConfiguration.
Building a software architecture directly in classes quickly turns out to be too rigid and thus bad design. Referencing and instantiating classes directly creates a lot of hardcoded dependencies on one single implementation that can't be changed or made more flexible later on. Inheritance is actually only very rarely flexible enough to be a solution for arising architecture flexibility problems.
Interfaces, on the other hand, only define a type; the actual class implementing it hardly matters and can even be easily interchangeable. For example, by designing only via interfaces, every instance can easily be "wrapped" by any desired logic using the decorator pattern, e.g. adding a logging aspect to a type.
There is a good article with an anecdote about James Gosling (the inventor of Java) named "Why extends is evil" that describes this:
https://www.javaworld.com/article/2073649/why-extends-is-evil.html
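To illustrate that decorator point with a generic sketch (the Repository types below are made-up examples, not MicroStream API):
// Code written against the interface never needs to know which implementation it gets.
interface Repository {
    void store(Object entity);
}

class DefaultRepository implements Repository {
    public void store(Object entity) {
        // actual persistence logic would go here
    }
}

// Decorator: adds a logging aspect without touching the wrapped class.
class LoggingRepository implements Repository {
    private final Repository delegate;

    LoggingRepository(Repository delegate) { this.delegate = delegate; }

    public void store(Object entity) {
        System.out.println("storing " + entity);
        delegate.store(entity);
    }
}

// Usage: Repository repo = new LoggingRepository(new DefaultRepository());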
So:
"Default" is just the default class implementing the interface it is nested in. It makes sense to name such a class "Default", doesn't it? There can be other classes next to it, like "Wrapper" or "LazyInitializing" or "Dummy" or "Randomizing" or whatever.
This design pattern is used in the entire code of MicroStream, giving it an incredibly flexible and powerful architecture. For example:
With a single line of code, every part of MicroStream (every single "gear" in the machine) can be replaced by a custom implementation. One that does things differently (maybe better?) or fixes a bug without even needing a new MicroStream version. Or one that adds logging or customized exception handling or that introduces object communication where there normally is none. Maybe directly with the application logic (but at your own risk!). Anything is possible, at least inside the boundaries of the interfaces.
Thinking in interfaces might be confusing in the beginning (which is why a lot of developers "burn mark" interfaces with a counterproductive "I" prefix. It hurts me every time I see that), but THEY are the actual design types in Java. Classes are only their implementation vehicles and next to irrelevant on the design level.
--
To 2)
I think a more fitting term for "static factory method" is "pseudo-constructor". It is a method that acts as a public API constructor for the type, but it isn't an actual constructor. Following the argumentation about the design advantages of such constructor-encapsulating static methods, the question arose of what the best, consistent naming pattern would be. The JDK gives some horribly bad examples that should not be copied, like "of" or "get"; those names hardly carry the meaning of the method's purpose.
The name should be as short but still as descriptive as possible. "create" or "build" would be okay, but are they really the best option? "new" would be best, but ironically, that is the keyword associated with the very constructors that should be hidden from the public API. "neW" or "nEw" would look extremely ugly and would be cumbersome to type. But what about "New"? Yes, it doesn't strictly follow the Java naming conventions. But there is already one kind of method that is an exception to the general naming rule. Which one? Constructors! It's not "new person(...)" but "new Person(...)": a method beginning with a capital letter, since the beginning of Java. So if the static method is meant to take the place of a constructor, wouldn't it be quite logical, and a very good signal, to apply that same exception ... or ... "extension" of the naming convention to it, too? So ... "New" it is. Perfectly short, perfectly clear. It is also no longer than, and VERY similar to, the original constructor call: "Person.New" instead of "new Person".
The "naming convention extension" that fits BOTH naming exceptions alike is: "every static method that starts with a capital letter is guaranteed to return a new instance of that type." Not a cached one. Always a new one. (this can be sometime crucial to guarantee the correctness of algorithms.)
This also has some neat side effects. For example:
The pseudo-constructor method for creating a new instance of "StorageConfigurationBuilder" can be "StorageConfiguration.Builder()". It is self-explaining, simple, clear.
Or if there is a method "public static Vector Normalized(Vector v)", it implicitly tells you that the passed instance will not be changed, but that a new instance will be returned for the normalized vector value. It's like suddenly having the option to give constructors proper names. Instead of a sea of different "Vector(...)" methods and having to rely on the JavaDoc to indirectly explain their meaning, the explanation is right there in the name: "New(...)", "Normalized(...)", "Copy(...)", etc.
AND it also plays along very nicely with the nested-Default-class pattern: there is no need to write "new StorageConfiguration.Default()" (which would be bad because it is too hardcoded, anyway); just "StorageConfiguration.New()" suffices. It will internally create and return a new "StorageConfiguration.Default" instance. And should that internal logic ever change, it won't even be noticeable to the API user.
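A condensed sketch of that pattern (simplified and shortened for illustration, not the actual MicroStream source):
public interface StorageConfiguration
{
    String storageDirectory();

    // Pseudo-constructor: capitalized, guaranteed to return a new instance.
    // A Builder() pseudo-constructor for a builder type would follow the same scheme.
    static StorageConfiguration New(String directory)
    {
        return new Default(directory);   // the concrete choice stays an internal detail
    }

    // Default implementation, nested inside the interface it implements.
    final class Default implements StorageConfiguration
    {
        private final String directory;

        Default(String directory)
        {
            this.directory = directory;
        }

        public String storageDirectory()
        {
            return this.directory;
        }
    }
}

// Usage: StorageConfiguration config = StorageConfiguration.New("/data/storage");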
Why do I do that if no one else does?
If one thinks about it, that cannot be a valid argument. I stick VERY closely to standards and conventions as far as they make sense. They do about 99% of the time, but if they contain a problem (like forbidding a static method to be called "new") or lack a perfectly reasonable feature (like "PersonBuilder b = Person.Builder()", or choosing properly speaking names for constructors), then, after careful thought, I br... extend them as needed. This is called innovation. If no one else has had that insight so far, bad for them, not for me. The question is not why an inventor creates an improvement, but why no one else has done it so far. If there is an obvious possibility for improvement, it can't be a valid reason not to do it just because no one else did. Such thinking causes stagnation and the death of progress. Like locking oneself into a 1970s data storage technology for over 40 years instead of just doing things the obviously easier, faster, more direct, better way.
I suggest seeing the capital-letter method naming extension as a testimony to innovation: if a new idea objectively brings considerably more advantages than disadvantages, it should, or almost MUST, be done.
I hereby invite everyone to adopt it.

Is there any performance decrease in Java for extending classes for "no" reason?

I have a Vector3i class that is useful in a lot of situations, but I've found myself extending it to use the type system to prevent bugs.
For example, I might have an "ego-centric" Vector3i that is local to an object in the world, and a world-coordinate Vector3i.
The two are naturally incompatible without conversion and are meaningless to each other.
It would be a good situation to use True Hungarian Notation, but instead I'm extending the class and adding no new functionality.
Am I incurring a performance loss considering the JVM/Hotspots optimizations?
Inheritance is a powerful tool, but the power comes at a price. Inheritance has a lot of pitfalls and problems of its own. In particular, it breaks encapsulation and can lead to fragile code when not implemented with care.
The inheritance mechanism in Java has been developed continuously for over 15 years. You can rely on it to be fast and efficient. There are no significant performance-related reasons to pass on inheritance when your data model calls for it.
For your case, it may make more sense to represent your functionalities by composition rather than by inheritance (in other words, instead of having ClassB extend ClassA, make ClassA an instance field within ClassB and then delegate method calls to the encapsulated object). You should at least consider it. Compared to inheritance, composition results in code that is more robust, and less fragile.
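For the Vector3i case specifically, a composition-based version might look roughly like this (WorldPos is a hypothetical name, and the sketch assumes your Vector3i exposes an add method):
// Instead of "class WorldPos extends Vector3i {}", wrap the vector and
// expose only what callers need; world and local positions can no longer
// be mixed up, and Vector3i's full API is not inherited wholesale.
final class WorldPos {
    private final Vector3i value;              // composition: has-a, not is-a

    WorldPos(Vector3i value) { this.value = value; }

    Vector3i asVector() { return value; }      // delegate only what is actually needed

    WorldPos translate(Vector3i delta) {
        return new WorldPos(value.add(delta)); // assumes Vector3i.add exists
    }
}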

Why is the Execute Around idiom not considered a Strategy design pattern?

More and more I see new labels for the same solutions, just with different names. In this case, why can't we simply say that the Execute Around idiom is the same as the Strategy design pattern?
When I read Jon Skeet's answer to the question "What is the 'Execute Around' idiom?", he states that:
it's the pattern where you write a method to do things which are always required, e.g. resource allocation and clean-up, and make the caller pass in "what we want to do with the resource".
In this case Jon Skeet uses an executeWithFile(String filename, InputStreamAction action) method as an example of the method that does the things which are always required, and the interface InputStreamAction as an abstraction over what we want to do with the resource.
import java.io.IOException;
import java.io.InputStream;

// The callback: "what we want to do with the resource".
public interface InputStreamAction
{
    void useStream(InputStream stream) throws IOException;
}
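For context, the surrounding method is typically implemented roughly like this (the body below is my own illustration of the idiom, not Jon Skeet's exact code):
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamUtil
{
    // Execute Around: this method owns the open/close boilerplate; the caller
    // only supplies what to do with the open stream.
    public static void executeWithFile(String filename, InputStreamAction action)
            throws IOException
    {
        try (InputStream stream = new FileInputStream(filename))
        {
            action.useStream(stream);
        }
    }
}

// Usage (Java 8+): StreamUtil.executeWithFile("data.txt", stream -> { /* use stream */ });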
Comparing this with the Strategy design pattern, why can't we just say that the interface InputStreamAction defines a family of algorithms? Each implementation of the interface InputStreamAction corresponds to a concrete strategy encapsulating an algorithm. Finally, these algorithms are interchangeable, and we can use executeWithFile with any of those strategies.
So, why can't I interpret Jon Skeet's answer as an example of an application of the Strategy design pattern?
By the way, which of the idioms came first? The Execute Around or the Strategy design pattern?
Although the Execute Around idiom is usually associated with a functional programming style, the only documentation that I found about it was in the Smalltalk Best Practice Patterns book from 1997, in the scope of the Smalltalk programming language, which follows an OO approach.
Because design patterns describe general solutions to recurring problems, we could say that Execute Around and Strategy are the same solution applied to different problems. So I ask: do we really need a different name to identify the same solution when it is applied to a different problem?
Although I agree with iluwatar's answer that this is a way to communicate different ideas, I still think that I could transmit the idea of Execute Around just by saying: "I used a Strategy to specify what I want to do with a resource, which is always set up and cleaned up in the same manner."
So the Execute Around idiom is reducing ambiguity (and that's good), but at the same time it is proliferating the names of design patterns? And is that a good practice?
The way I see it, although both may be solutions to similar problems, these are different idioms and, therefore, should have different names.
I call them idioms because Execute Around is just a programming idiom, used in many typical situations (similar to the one presented) and in several languages. And, at least to my knowledge, Execute Around has never been formalized as a software design pattern, at least not yet.
And why are they different? Here's my view:
The Strategy design pattern is intended to (from Wikipedia):
- define a family of algorithms,
- encapsulate each algorithm, and
- make the algorithms interchangeable within that family.
Typically the strategy instance has a long-lasting relation with the context instance, usually passed in as a constructor dependency. Even though the context may have setters for the strategy, the same strategy can be used in several calls to the context, without the caller (client) even knowing which one is being used by that particular context at the moment the call is made. Moreover, the same strategy instance may be used by the context in several methods of its public interface, without the caller even knowing anything about its usage.
On the other hand, the Execute Around idiom is suited for parameterized algorithms, where the caller (client) must pass the algorithm-parametrizing function in each call. Therefore, the strategy function passed only influences the behavior of that particular call, not of other calls.
Although the presented differences may seem theoretical and rhetorical, putting the context being called into a multithreaded scenario is where I think the differences become more obvious and are more easily seen and "felt".
Without lots of locks and synchronization, you cannot use the Strategy design pattern in a multithreaded scenario, at least if the context's strategy is allowed to change. If you don't allow it to change, you see the longer-lasting nature of the relation between the context and the strategy, as they typically live for the same amount of time.
The Execute Around idiom, if properly implemented, should not suffer from this "disease", at least if the passed function doesn't have any side effects.
Wrapping up: although the Strategy design pattern and the Execute Around idiom may seem alike, may be used to solve similar problems, and may seem to have a similar static structure, they are different in nature, the former having a much more OO style and the latter a more functional one, and therefore they should have different names!
I agree with Miguel Gamboa that the proliferation of names that mean the same thing is not good and should be avoided. But, at least in my opinion, this is not the case here.
Hope this helps!
While Strategy and Execute Around are technically very similar, they communicate different ideas. When we discuss interchangeable algorithms, the term to use is Strategy. When we discuss allocating and freeing resources around a business method, we should use the term Execute Around.
To me, the Strategy pattern being a "family of algorithms" implies that there are multiple different ways to achieve a particular goal. For example, there are multiple different algorithms/strategies that can achieve the particular goal of sorting a list of values. But in the example given – where executeWithFile handles the opening and closing of a file – I don't think that there is any particular goal for the InputStreamAction family. The concrete implementations probably all have different goals.
In Java, the Execute Around pattern requires objects that are concrete implementations of interfaces. I think that's why it looks so similar to the Strategy pattern. But in other languages, only a plain old function is required. Take Ruby, for example:
execute_with_file('whatever.txt') do |stream|
  stream.write('whatever')
end
There is no InputStreamAction interface, and there is no concrete WhateverWriterAction. There is just a function that takes another function as a parameter. Can a plain old function parameter be considered a "strategy"? It could be, but I wouldn't call it a strategy. And I certainly don't think of it in terms of the Strategy pattern when I'm using or creating an Execute Around implementation.
In summary, if you wanted to be very literal, you could say that Execute Around is a specific type of Strategy pattern. If you consider the intent behind the two patterns, however, they are separate ideas: the Strategy pattern is a family of algorithms that achieve a particular goal, and Execute Around is a generic way to run something before and after a chunk of arbitrary code.

Why are ADTs good and inheritance bad?

I am a long-time OO programmer and a functional programming newbie. From my little exposure, algebraic data types look to me like just a special case of inheritance, where you have only a one-level hierarchy and the super class cannot be extended outside the module.
So my (potentially dumb) question is: if ADTs are just that, a special case of inheritance (again, this assumption may be wrong; please correct me in that case), then why does inheritance get all the criticism and ADTs get all the praise?
Thank you.
I think that ADTs are complementary to inheritance. Both of them allow you to create extensible code, but the way the extensibility works is different:
- ADTs make it easy to add new functionality for working with existing types: you can easily add a new function that works with the ADT, which has a fixed set of cases. On the other hand, adding a new case requires modifying all functions.
- Inheritance makes it easy to add new types when you have fixed functionality: you can easily create an inherited class and implement the fixed set of virtual functions. On the other hand, adding a new virtual function requires modifying all inherited classes.
Both the object-oriented world and the functional world have developed their own ways to allow the other type of extensibility. In Haskell, you can use type classes; in ML/OCaml, people would use a dictionary of functions or maybe (?) functors to get the inheritance-style extensibility. On the other hand, in OOP, people use the Visitor pattern, which is essentially a way to get something like ADTs.
The usual programming patterns are different in OOP and FP, so when you're programming in a functional language, you're writing the code in a way that requires the functional-style extensibility more often (and similarly in OOP). In practice, I think it is great to have a language that allows you to use both of the styles depending on the problem you're trying to solve.
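To make the two directions of extensibility concrete in Java terms, here is a rough sketch using sealed interfaces and records (this assumes a recent Java version; exhaustive switch over sealed types needs Java 21, or 17 with preview features):
// A closed, ADT-like sum type: the set of cases is fixed here.
sealed interface Shape permits Circle, Square {}
record Circle(double radius) implements Shape {}
record Square(double side) implements Shape {}

class Geometry {
    // Adding a new *function* over the existing cases is easy, and the
    // compiler checks that every case is handled (no default needed).
    static double area(Shape s) {
        return switch (s) {
            case Circle c -> Math.PI * c.radius() * c.radius();
            case Square q -> q.side() * q.side();
        };
    }
}
// Adding a new *case* (say, Triangle) would require touching every such
// switch, which is exactly the ADT-side trade-off described above.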
Tomas Petricek has got the fundamentals exactly right; you might also want to look at Phil Wadler's writing on the "expression problem".
There are two other reasons some of us prefer algebraic data types over inheritance:
Using algebraic data types, the compiler can (and does) tell you if you have forgotten a case or if a case is redundant. This ability is especially useful when there are many more operations on things than there are kinds of thing. (E.g., many more functions than algebraic datatypes, or many more methods than OO constructors.) In an object-oriented language, if you leave a method out of a subclass, the compiler can't tell whether that's a mistake or whether you intended to inherit the superclass method unchanged.
This one is more subjective: many people have noted that if inheritance is used properly and aggressively, the implementation of an algorithm can easily be smeared out over half a dozen classes, and even with a nice class browser it can be hard to follow the logic of the program (data flow and control flow). Without a nice class browser, you have no chance. If you want to see a good example, try implementing bignums in Smalltalk, with automatic failover to bignums on overflow. It's a great abstraction, but the language makes the implementation difficult to follow. Using functions on algebraic data types, the logic of your algorithm is usually all in one place, or if it is split up, it's split up into functions which have contracts that are easy to understand.
P.S. What are you reading? I don't know of any responsible person who says "ADTs good; OO bad."
In my experience, what people usually consider "bad" about inheritance as implemented by most OO languages is not the idea of inheritance itself but the idea of subclasses modifying the behavior of methods defined in the superclass (method overriding), specifically in the presence of mutable state. It's really the last part that's the kicker. Most OO languages treat objects as "encapsulating state," which amounts to allowing rampant mutation of state inside of objects. So problems arise when, for example, a superclass expects a certain method to modify a private variable, but a subclass overrides the method to do something completely different. This can introduce subtle bugs which the compiler is powerless to prevent.
Note that in Haskell's implementation of subclass polymorphism, mutable state is disallowed, so you don't have such issues.
Also, see this objection to the concept of subtyping.
I am a long-time OO programmer and a functional programming newbie. From my little exposure, algebraic data types look to me like just a special case of inheritance, where you have only a one-level hierarchy and the super class cannot be extended outside the module.
You are describing closed sum types, the most common form of algebraic data types, as seen in F# and Haskell. Basically, everyone agrees that they are a useful feature to have in the type system, primarily because pattern matching makes it easy to dissect them by shape as well as by content and also because they permit exhaustiveness and redundancy checking.
However, there are other forms of algebraic datatypes. An important limitation of the conventional form is that they are closed, meaning that a previously-defined closed sum type cannot be extended with new type constructors (part of a more general problem known as "the expression problem"). OCaml's polymorphic variants allow both open and closed sum types and, in particular, the inference of sum types. In contrast, Haskell and F# cannot infer sum types. Polymorphic variants solve the expression problem and they are extremely useful. In fact, some languages are built entirely on extensible algebraic data types rather than closed sum types.
In the extreme, you also have languages like Mathematica where "everything is an expression". Thus the only type in the type system forms a trivial "singleton" algebra. This is "extensible" in the sense that it is infinite and, again, it culminates in a completely different style of programming.
So my (potentially dumb) question is: if ADTs are just that, a special case of inheritance (again, this assumption may be wrong; please correct me in that case), then why does inheritance get all the criticism and ADTs get all the praise?
I believe you are referring specifically to implementation inheritance (i.e. overriding functionality from a parent class) as opposed to interface inheritance (i.e. implementing a consistent interface). This is an important distinction. Implementation inheritance is often hated whereas interface inheritance is often loved (e.g. in F# which has a limited form of ADTs).
You really want both ADTs and interface inheritance. Languages like OCaml and F# offer both.

Why is using a class as a struct bad practice in Java?

We recently had a code review. One of my classes was used so that I could return/pass more than one type of data from/to methods. The only methods that the class had were getters/setters. One of the team's members (whose opinion I respect) said that having a class like that is bad practice (and not very OOP). Why is that?
There's an argument that classes should either be "data structures" (i.e., focus on storing data with no functionality) or "functionality oriented" (i.e., focus on performing certain actions while storing minimal state). If you follow that argument (which makes sense but isn't always easy to do) then there is nothing necessarily wrong with that.
In fact, one would argue that beans and entity beans are essentially that - data containers with getters and setters.
I have seen certain sources (e.g., the book "Clean Code") arguing that one should avoid methods with multiple parameters and instead pass them as a single object with getters and setters. This is also closer to the "Smalltalk model" of named parameters, where order does not matter.
So I think that when used appropriately, your design makes sense.
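To make that concrete, here is a minimal sketch of such a multi-value object (the names are hypothetical; a Java 16+ record keeps it terse, and a plain class with getters works the same way):
// A small "struct": two values that belong together in one return type.
record ParseResult(int value, String remainingInput) {}

class Parser {
    // Returns both the parsed number and the unconsumed input in one object.
    // (Assumes the input starts with at least one digit.)
    static ParseResult parseLeadingInt(String input) {
        int i = 0;
        while (i < input.length() && Character.isDigit(input.charAt(i))) {
            i++;
        }
        return new ParseResult(Integer.parseInt(input.substring(0, i)),
                               input.substring(i));
    }
}
// Usage: ParseResult r = Parser.parseLeadingInt("42abc");  // r.value() == 42, r.remainingInput() == "abc"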
Note that there are two separate issues here.
Is a "struct-like" class sensible?
Is creating a class to return multiple values from a method sensible?
Struct-like classes
An object class should -- for the most part -- represent a class of real-world objects. A passive, struct-like java bean (all getters and setters) may represent a real-world thing.
However, most real-world things have rules, constraints, behaviors, and basic verbs in which they engage. A struct-like class is rarely a good match for a real-world thing, it's usually some technical thing. That makes it less than ideal OO design.
Multiple returns from a method
While Python has this, Java doesn't. Multiple return values isn't an OO question, per se. It's a question of working through the language limitations.
Multiple return values may mean that an object has changed state. Perhaps one method changes the state and some group of getters return the values stemming from this state change.
To be honest, it sounds fine to me. What alternative did the reviewer suggest?
Following OOP "best practices" and all is fine, but you've got to be pragmatic and actually get the job done.
Using Value Objects like this (OO speak for 'struct') is a perfectly legitimate approach in some cases.
In general, you'll want to isolate the knowledge needed to operate upon a class into the class itself. If you have a class like this, either it is used in multiple places, and thus can take on some of the functionality shared by those places, or it is used in only a single place, and should then be an inner class. If it is used in multiple but completely different ways, such that there is no shared functionality, having it be a single class is misleading, because it indicates a shared functionality where there is none.
However, there are often specific reasons for where these general rules may or may not apply, so it depends on what your class was supposed to represent.
I think he might be confusing "not very OOP" with bad practice. I think he expected you to provide several methods that would each return one value that was needed (as you will have to use them in your new class anyway, that isn't too bad).
Note that in this case you probably shouldn't use getters/setters; just make the data public. No, this is "not very OOP", but it is the right way to do it.
Maybe Josh Bloch offers some insight into this here.
