Is it bad design to use Java enums to call other methods? - java

I have got a problem where I calculate a number and according to this number I have to call a specific method. I ended up with the idea of creating an enum in which each element calls another method. Just as described in this post https://stackoverflow.com/a/4280838/2426316
However, the poster of that answer also mentioned that it would not be considered a very good design, so that I am wondering what I should do. Is it safe to implement an algorithm that uses this design? If not, what else can I do?

The Java Enum type is a language level support (syntactic sugar) for the type-safe enum pattern.
One of the advantages of the type-safe enum pattern and the Java Enum type (compared to other solutions such as C# enums) is that it's designed to support methods, even abstract ones.
Possible usage:
places where you would use the Strategy pattern, but have a fixed set of strategies
replace switch statements with polymorphism (preferred)
...
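The first usage above can be sketched with constant-specific method implementations, similar to the well-known Operation example from Effective Java (names here are illustrative):

```java
// Each constant carries its own behavior, replacing a switch statement
// with polymorphism: adding a constant forces you to supply its method.
enum Operation {
    PLUS  { public int apply(int a, int b) { return a + b; } },
    MINUS { public int apply(int a, int b) { return a - b; } },
    TIMES { public int apply(int a, int b) { return a * b; } };

    // Abstract method every constant must implement.
    public abstract int apply(int a, int b);
}

public class OperationDemo {
    public static void main(String[] args) {
        System.out.println(Operation.PLUS.apply(2, 3));   // 5
        System.out.println(Operation.TIMES.apply(4, 5));  // 20
    }
}
```

The compiler rejects an enum constant that forgets to implement the abstract method, which is exactly the kind of safety a switch statement cannot give you.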
For more information:
Effective Java, by Joshua Bloch.
First edition includes the type-safe enum pattern
Second edition includes the Java Enum type
Refactoring, by Martin Fowler (e.g. Replace conditional with polymorphism)

Well, safety probably isn't the issue here. It is uncommon and can be difficult to follow, though.
My recommendation would be to build around this in one of two ways:
Use enums, but don't include the function calls in the enum code itself. Instead, have a function explore(Level1 level) {...} with a switch statement that differentiates by the level passed.
Use lambda expressions. This is the cleaner solution, as it allows you to pass functions as arguments; sadly, Java won't support this natively until Java 8 is released. There are, however, libraries that simulate lambda expressions, the best known probably being lambdaj.
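A minimal sketch of the first option, with a hypothetical Level enum and explore* methods (names echo the Level1 idea above but are otherwise illustrative):

```java
enum Level { EASY, NORMAL, HARD }

class Game {
    // Behavior stays outside the enum; the switch maps each constant
    // to its own method.
    static String explore(Level level) {
        switch (level) {
            case EASY:   return exploreEasy();
            case NORMAL: return exploreNormal();
            case HARD:   return exploreHard();
            default:     throw new IllegalArgumentException("Unknown level: " + level);
        }
    }

    static String exploreEasy()   { return "easy path"; }
    static String exploreNormal() { return "normal path"; }
    static String exploreHard()   { return "hard path"; }
}
```

The trade-off is the usual one: the enum itself stays dumb and simple, but every new constant requires remembering to extend the switch.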

Related

Why does RandomAccessFile takes a String instead of an enum as "mode" parameter?

I've read many answers here confirming my (quite obvious) opinion about using enums instead of String objects when dealing with a small, closed set of values.
But lately I've noticed a couple of examples in the Java API where the opposite choice was made. I can only remember this one right now:
public RandomAccessFile(File file, String mode)
where the mode parameter can only be "r", "rw", "rws", or "rwd"; otherwise, an IllegalArgumentException is thrown.
The only reason I can think of is that this method predates the introduction of enums in the Java language; am I right? If so, are there situations today where choosing a String instead of an enum makes sense for a closed set?
RandomAccessFile was present as of JDK1.0 whereas enums were introduced in JDK5.0.
As @mrwiggles said, Java enum types were introduced as part of the Java 5 release.
If you're curious about why and/or how they improved the language, there are a lot of blogs and examples out there demonstrating their benefits:
Making the Most of Java 5: http://www.ajaxonomy.com/2007/java/making-the-most-of-java-50-enum-tricks
Java Enum Introduction : http://theopentutorials.com/tutorials/java/enum/introduction/
In general, if you have a series of specifically predefined values, like you've noted here with "r", "rw", etc., it's definitely more logical to go with an enum. Enums are less ambiguous and can be designed much like any other class (containing methods, instance variables, constants, etc.). They can also show minor performance improvements over Strings in some scenarios if they are checked frequently. Read Java: Performance of Enums vs. if-then-else to see exactly what I'm talking about.
Joshua Bloch discusses the benefits of enums over int constants in his book Effective Java. It's not exactly what we're talking about here, but it's pretty close (see More Effective Java with Google's Joshua Bloch):
For enums, the sound bite is "Always use enums in place of int constants" (Item 30). Enums provide so many advantages: compile-time type safety, the ability to add or remove values without breaking clients, meaningful printed values, the ability to associate methods and fields with the values, and so on. Since we have EnumSet, this advice applies equally to bit fields, which should be considered obsolete.
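For illustration only (the real constructor still takes a String), a hypothetical FileMode enum wrapping those mode strings might look like this:

```java
// Hypothetical enum modeling RandomAccessFile's "r", "rw", "rws", "rwd" modes.
// The constructor in the JDK does NOT accept this type; it is a sketch of
// what a type-safe alternative could have looked like.
enum FileMode {
    READ("r"), READ_WRITE("rw"), SYNC_ALL("rws"), SYNC_DATA("rwd");

    private final String modeString;

    FileMode(String modeString) { this.modeString = modeString; }

    // The string the legacy String-based constructor expects.
    public String toModeString() { return modeString; }
}
```

A call site would then read new RandomAccessFile(file, FileMode.READ.toModeString()), and passing an invalid mode becomes a compile-time error instead of a runtime IllegalArgumentException.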

"built in dependency injection" in scala

Hi, the following post says there is "built in dependency injection" in Scala:
"As a Scala and Java developer, I am not even slightly tempted to
replace Scala as my main language for my next project with Java 8. If
I'm forced to write Java, it might better be Java 8, but if I have a
choice, there are so many things (as the OP correctly states) that
make Scala compelling for me beyond Lambdas that just adding that
feature to Java doesn't really mean anything to me. Ruby has Lambdas,
so does Python and JavaScript, Dart and I'm sure any other modern
language. I like Scala because of so many other things other than
lambdas that a single comment is not enough.
But to name a few (some were referenced by the OP)
Everything is an expression, For
comprehensions (especially with multiple futures, resolving the
callback triangle of death in a beautiful syntax IMHO), Implicit
conversions, Case classes, Pattern Matching, Tuples, The fact that
everything has equals and hashcode already correctly implemented (so I
can put a tuple, or even an Array as a key in a map), string
interpolation, multiline string, default parameters, named parameters,
built in dependency injection, most complex yet most powerful type
system in any language I know of, type inference (not as good as
Haskell, but better than the non existent in Java). The fact I always
get the right type returned from a set of "monadic" actions thanks to
infamous things like CanBuildFrom (which are pure genius). Let's not
forget pass by name arguments and the ability to construct a DSL.
Extractors (via pattern matching). And many more.
I think Scala is
here to stay, at least for Scala developers, I am 100% sure you will
not find a single Scala developer that will say: "Java 8 got lambdas?
great, goodbye scala forever!". Only reason I can think of is compile
time and binary compatibility. If we ignore those two, all I can say
is that this just proves how Scala is in the right direction (since
Java 8 lambdas and default interface methods and streams are so clearly
influenced)
I do wish however that Scala will improve Java 8
interoperability, e.g. support functional interfaces the same way. and
add new implicit conversions to Java 8 collections as well as take
advantage to improvements in the JVM.
I will replace Scala as soon as
I find a language that gives me what Scala does and does it better. So
far I didn't find such a language (examined Haskell, Clojure, Go,
Kotlin, Ceylon, Dart, TypeScript, Rust, Julia, D and Nimrod, Ruby
Python, JavaScript and C#, some of them were very promising but since
I need a JVM language, and preferably a statically typed one, it
narrowed down the choices pretty quickly)
Java 8 is by far not even
close, sorry. Great improvement, I'm very happy for Java developers
that will get "permission" to use it (might be easier to adopt than
Scala in an enterprise) but this is not a reason for a Scala shop to
consider moving back to Java." [1]
What exactly is the built-in dependency injection in Scala?
It's not a discrete language feature. I think the author was referring to the fact that Scala's feature set is flexible enough to support a number of techniques that could be said to accomplish DI:
the cake pattern, building on the trait system
the Reader monad, building on higher-kinded types
DI through currying, building on functional techniques
using implicit class parameters, building on Scala's concept of implicits
in my own project, we accomplish DI by requiring function values in the class constructor explicitly
This diversity is rather emblematic of Scala. The language was designed to implement a number of very powerful concepts, mostly orthogonally, resulting in multiple valid ways to solve many problems. The challenge as a Scala programmer is to understand this breadth and then make an intelligent choice for your project. A lot of times, that choice depends on what paradigms are being used internally to implement your components.

What are the key differences between Java 8's Optional, Scala's Option and Haskell's Maybe?

I've read a few posts on Java 8's upcoming Optional type, and I'm trying to understand why people keep suggesting it's not as powerful as Scala's Option. As far as I can tell it has:
Higher order functions like map and filter using Java 8 lambdas.
Monadic flatMap
Short circuiting through getOrElse type functions.
What am I missing?
Some possibilities come to mind (OTOH, I haven't seen people actually saying that, so they might mean something else):
No pattern matching.
No equivalent to Scala's fold or Haskell's fromMaybe: you have to do optional.map(...).orElseGet(...) instead.
No monadic syntax.
I also wouldn't call any of these "less powerful" myself, since you can express everything you can with the corresponding Scala/Haskell types; these are all conciseness/usability concerns.
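To make the fold point concrete, here is the Java 8 workaround mentioned above (a minimal sketch using the proposed java.util.Optional API):

```java
import java.util.Optional;

public class OptionalDemo {
    public static void main(String[] args) {
        // Scala's opt.fold(default)(f) has no direct equivalent;
        // in Java you chain map and orElseGet instead.
        Optional<String> name = Optional.of("world");
        String greeting = name.map(n -> "hello, " + n)
                              .orElseGet(() -> "hello, nobody");
        System.out.println(greeting);  // hello, world

        // On an empty Optional, map is skipped and the default supplier runs.
        String fallback = Optional.<String>empty()
                                  .map(n -> "hello, " + n)
                                  .orElseGet(() -> "hello, nobody");
        System.out.println(fallback);  // hello, nobody
    }
}
```

Both forms express the same computation; the Java version just splits the "present" and "absent" branches across two calls instead of one.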
Optional and Maybe are effectively in correspondence. Scala models None and Some[A] as subclasses of Option[A]; Java could have done the same, so this is perhaps the most direct point of comparison.
Most other differences fall into two groups: the ease of handling Maybe/Option in Haskell/Scala, which won't translate because Java is a less expressive language; and the consistency of use of Maybe/Option in those ecosystems, since many of the guarantees and conveniences afforded by the type only kick in once most libraries have agreed to use optional types instead of nulls or exceptions.
For most purposes, they are equivalent; the main difference is that the Scala one is well-integrated into Scala, while the Java one is well-integrated into Java.
The biggest difference in my mind is that Java's is a value-based class. That's something new to the JVM. At the moment there's no real difference between value-based and regular classes, but the distinction paves the way for JVM runtimes to eliminate Java object overhead. In other words, a future JVM could rewrite Optional code as a directive for how to handle nulls, rather than allocating memory for Optional objects.
Scala does something similar with value classes, though it's done by unboxing types in the compiler rather than in the JVM, and its usage is limited. (Option isn't a value class.)

Visitor-Pattern vs. open/closed principle: how to add new visitable object?

I'm studying the visitor pattern and I wonder how this pattern is related to the open/closed principle. I read on several websites that "It is one way to follow the open/closed principle." (citated from wikipedia).
On another website I learned that it follows the open/closed principle in such a way that it is easy to add new visitors to your program in order to "extend existing functionality without changing existing code". That same website mentions that the visitor pattern has a major drawback: "If a new visitable object is added to the framework structure all the implemented visitors need to be modified." A solution for this problem is provided by using Java's reflection API.
Now, isn't this a bit of a hack? I mean, I found this solution on some other blogs as well, but the code looks more like a workaround!
Is there another solution to this problem of adding new visitables to an existing implementation of the visitor pattern?
Visitor is one of the most boilerplate-ridden patterns of all and has the drawbacks regarding non-extensibility that you mention. It is itself a sloppy workaround to introduce double dispatch into a single-dispatch language. When you consider all its drawbacks, resorting to reflection is not such a terrible choice.
In fact, reflection is not a very bad choice in any case: consider how much today's code is written in dynamic languages, in other words using nothing but reflection, and the applications don't fall apart because of it.
Type safety has its merits, certainly, but when you find yourself hitting the wall of static typing and single dispatch, embrace reflection without remorse. Note also that, with proper caching of Method objects, reflective method invocation is almost as fast as static invocation.
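A hedged sketch of that reflective dispatch, including the Method cache mentioned above (all names are illustrative, not from any particular framework):

```java
import java.lang.reflect.Method;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Single entry point dispatches on the node's runtime class, so adding a
// new visitable type only means adding a visit(NewType) method to a
// visitor -- no central Visitor interface has to change.
class ReflectiveVisitor {
    private final Map<Class<?>, Method> cache = new ConcurrentHashMap<>();

    public void dispatch(Object node) {
        // Cache the Method lookup per concrete class; this is what keeps
        // reflective invocation close to static-call speed.
        Method m = cache.computeIfAbsent(node.getClass(), c -> {
            try {
                return getClass().getMethod("visit", c);
            } catch (NoSuchMethodException e) {
                throw new IllegalArgumentException("No visit method for " + c, e);
            }
        });
        try {
            m.invoke(this, node);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}

// Example visitor: one public visit overload per node type it handles.
class Printer extends ReflectiveVisitor {
    String last;
    public void visit(String s)  { last = "string:" + s; }
    public void visit(Integer i) { last = "int:" + i; }
}
```

Note that Class.getMethod matches parameter types exactly, so each handled type needs its own public visit overload; an unhandled type fails at runtime rather than compile time, which is the price of this flexibility.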
It depends on precisely what job the Visitor is supposed to accomplish, but in most cases, this is what interfaces are for. Consider a SortedSet; the implementation needs to be able to compare the different objects in the set to know their ordering, but it doesn't need to understand anything else about the objects. The solution (for sorting by natural order) is to use the Comparable interface.

What are the advantages of Lambda Expressions for multicore systems?

The Java Tutorials for Lambda Expressions says following:
This section discusses features included in Project Lambda, which aims
to support programming in a multicore environment by adding closures
and related features to the Java language.
My question is, what concrete advantages do I have with Lambda Expressions according to multicore systems and concurrent/parallel programming?
Parallelism is trivial to implement. For example, if you have a collection and you apply a lambda thus:
collection.map { // my lambda }
then the collection itself can parallelise that operation without you having to do the threading etc. yourself. The parallelism is handled within the collection map() implementation.
In a purely functional (i.e. no side effects) system, you can do this for every lambda. For a non-purely functional environment you'd have to select the lambdas for which this would apply (since your lambda may not operate safely in parallel). e.g. in Scala you have to explicitly take the parallel view on a collection in order to implement the above.
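In Java 8 itself, the same idea is exposed through the Streams API; a minimal sketch (assuming the Java 8 Streams API as proposed):

```java
import java.util.Arrays;
import java.util.List;

public class ParallelDemo {
    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);

        // Sequential: the lambda runs on the calling thread.
        int sum = numbers.stream().mapToInt(n -> n * n).sum();

        // Parallel: the library partitions the work across cores.
        // The lambda itself is unchanged, but it must be free of side
        // effects for this to be safe -- exactly the caveat above.
        int parallelSum = numbers.parallelStream().mapToInt(n -> n * n).sum();

        System.out.println(sum + " " + parallelSum);  // 55 55
    }
}
```

Switching between stream() and parallelStream() mirrors Scala's explicit "parallel view": the programmer, not the library, asserts that the lambda is safe to run in parallel.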
Some reference material:
You can read Maurice Naftalin's answer in Why are lambda expressions being added to Java.
Or you can read Mark Reinhold's answer in his article Closures for Java.
Reinhold also wrote, in his blog, a Closures Q&A which seems to address some of your questions.
And there is even an interesting article in JavaWorld about Understanding the Closures Debate.
With full respect to Java 8 lambdas and the intent of their designers, I would like to ask: what is new here, and why is it better than the traditional interface/method approach? Why is it better than, say, forEach(IApply), where IApply is an interface:
public interface IApply<KEY, VALUE> {
    void exec(KEY key, VALUE value);
}
How does this impede parallelism? At the very least, an implementation of IApply can be reused, inherited (extended), or implemented in a static class.
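For what it's worth, a lambda does not replace such an interface; it targets one. A minimal sketch (a generic variant of the IApply idea above; names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class ApplyDemo {
    // An ordinary functional interface, like the IApply above.
    interface IApply<K, V> { void exec(K key, V value); }

    static <K, V> void forEach(Map<K, V> map, IApply<K, V> action) {
        for (Map.Entry<K, V> e : map.entrySet()) {
            action.exec(e.getKey(), e.getValue());
        }
    }

    public static void main(String[] args) {
        Map<String, Integer> m = new HashMap<>();
        m.put("a", 1);

        // Pre-Java-8 style: an anonymous inner class implementing IApply.
        forEach(m, new IApply<String, Integer>() {
            public void exec(String k, Integer v) { System.out.println(k + "=" + v); }
        });

        // Java 8 style: a lambda targets the very same interface, so nothing
        // about reuse or inheritance is lost; the call site just gets shorter.
        forEach(m, (k, v) -> System.out.println(k + "=" + v));
    }
}
```

Both calls invoke the same forEach; the lambda is only syntax for supplying an implementation of the single abstract method.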
That last argument matters: I have seen dozens of junior developers' mistakes where lambda code that accesses "this" of the outer class keeps that class alive in memory even after the last explicit reference to it is already null.
From this point of view, references to static members of a class are much safer (analogous to C# "delegate"s). In principle, a lambda is on the one hand extreme encapsulation, on the other hand not reusable, and at the same time a violation of a basic principle of OO design: it freely accesses its enclosing class's members. From the standpoint of programming culture, it is a step back to the 1970s.
Functional programming? I see, but why mix it into the OOP language that Java is? OO design already has a wonderful pattern, data/behavior decoupling, which elegantly provides the same possibilities. The argument that "it is the same as in JavaScript"? Really! Then give us tools to embed JavaScript in Java and write chunks of the system in JavaScript. So I still don't see as many real benefits as the wave of advertisement suggests.
