Coming from a Java background, I'm used to the common idiom for dealing with collections: there are obviously exceptions, but usually the code would look like this:
public class MyClass {
    private Set<String> mySet;

    public void init() {
        Set<String> s = new LinkedHashSet<String>();
        s.add("Hello");
        s.add("World");
        mySet = Collections.unmodifiableSet(s);
    }
}
I have to confess that I'm a bit befuddled by the plethora of options in Scala. There are:
scala.List (and Seq)
scala.collection.Set (and Map)
scala.collection.immutable.Set (and Map, Stack but not List)
scala.collection.mutable.Set (and Map, Buffer but not List)
scala.collection.jcl
So questions!
Why are List and Seq defined in package scala and not scala.collection (even though implementations of Seq are in the collection sub-packages)?
What is the standard mechanism for initializing a collection and then freezing it (which in Java is achieved by wrapping in an unmodifiable)?
Why are some collection types (e.g. MultiMap) only defined as mutable? (There is no immutable MultiMap.)
I've read Daniel Spiewak's excellent series on scala collections and am still puzzled by how one would actually use them in practice. The following seems slightly unwieldy due to the enforced full package declarations:
class MyScala {
  var mySet: scala.collection.Set[String] = null

  def init(): Unit = {
    val s = scala.collection.mutable.Set.empty[String]
    s += "Hello"
    s += "World"
    mySet = scala.collection.immutable.Set(s.toSeq: _*)
  }
}
Although arguably this is more correct than the Java version, since the immutable collection cannot change (whereas in the Java case the underlying collection could still be altered underneath the unmodifiable wrapper).
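To make that last point concrete, here is a small Java sketch (the class name is made up) showing that Collections.unmodifiableSet only wraps the live set rather than freezing its contents:
import java.util.Collections;
import java.util.LinkedHashSet;
import java.util.Set;

public class UnmodifiableIsOnlyAView {
    public static void main(String[] args) {
        Set<String> backing = new LinkedHashSet<String>();
        backing.add("Hello");
        Set<String> frozen = Collections.unmodifiableSet(backing);
        backing.add("World");          // mutate the backing set directly
        System.out.println(frozen);    // prints [Hello, World] - the "unmodifiable" view changed too
    }
}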
Why are List and Seq defined in package scala and not scala.collection (even though implementations of Seq are in the collection sub-packages)?
Because they are deemed so generally useful that they are automatically imported into all programs via synonyms in scala.Predef.
What is the standard mechanism for initializing a collection and then freezing it (which in Java is achieved by wrapping in an unmodifiable)?
Java doesn't have a mechanism for freezing a collection. It only has an idiom for wrapping the (still modifiable) collection in a wrapper that throws an exception. The proper idiom in Scala is to copy a mutable collection into an immutable one - probably using :_*
Why are some collection types (e.g. MultiMap) only defined as mutable? (There is no immutable MultiMap.)
The team/community just hasn't gotten there yet. The 2.7 branch saw a bunch of additions and 2.8 is expected to have a bunch more.
The following seems slightly unwieldy due to the enforced full package declarations:
Scala allows import aliases so it's always less verbose than Java in this regard (see for example java.util.Date and java.sql.Date - using both forces one to be fully qualified)
import scala.collection.{Set => ISet}
import scala.collection.mutable.{Set => MSet}
class MyScala {
  var mySet: ISet[String] = null

  def init(): Unit = {
    val s = MSet.empty[String]
    s += "Hello"
    s += "World"
    mySet = Set(s.toSeq: _*)
  }
}
Of course, you'd really just write init as def init() { mySet = Set("Hello", "World") } and save all the trouble, or better yet just initialize it in the constructor: var mySet: ISet[String] = Set("Hello", "World").
Mutable collections are useful occasionally (though I agree that you should always look at the immutable ones first). If using them, I tend to write
import scala.collection.mutable
at the top of the file, and (for example):
val cache = new mutable.HashMap[String, Int]
in my code. It means you only have to write "mutable.HashMap", not "scala.collection.mutable.HashMap". As the commenter above mentioned, you could remap the name in the import (e.g., import scala.collection.mutable.{HashMap => MMap}), but:
I prefer not to mangle the names, so that it's clearer what classes I'm using, and
I use 'mutable' rarely enough that having "mutable.ClassName" in my source is not an undue burden.
(Also, can I echo the ‘avoid nulls’ comment too. It makes code so much more robust and comprehensible. I find that I don’t even have to use Option as much as you’d expect either.)
A couple of random thoughts:
I never use null, I use Option, which will then produce a decent error. This practice has gotten rid of a ton of NullPointerException opportunities, and it forces people to write decent error messages.
Try to avoid looking into the "mutable" stuff unless you really need it.
So, my basic take on your Scala example, where you have to initialize the set later, is:
class MyScala {
  private var lateBoundSet: Option[Set[String]] = None

  def mySet = lateBoundSet.getOrElse(error("You didn't call init!"))

  def init {
    lateBoundSet = Some(Set("Hello", "World"))
  }
}
I've been on a tear recently around the office. "null is evil!"
Note that there might be some inconsistencies in the Scala collections API in the current version; for Scala 2.8 (to be released later in 2009), the collections API is being overhauled to make it more consistent and more flexible.
See this article on the Scala website: http://www.scala-lang.org/node/2060
To add to Tristan Juricek's example with a lateBoundSet: Scala has a built-in mechanism for lazy initialization, using the "lazy" keyword:
class MyClass {
lazy val mySet = Set("Hello", "World")
}
By doing this, mySet will be initialized on first use, instead of immediately when creating a new MyClass instance.
I would like to provide some accessors on a value container (which is a java.util.Map under the hood).
When implementing accessors like this
public Optional<Integer> getSomeValueThatMayBeNullOrAnInteger() {
    Integer nullable = this.get("keyToANullableInteger", Integer.class); // casts to the given Class
    return Optional.ofNullable(nullable);
}
... 'ofNullable' is marked with 'unsafe null type conversion'.
Why is this? The parameter of ofNullable is not marked as @NonNull.
Is it because empty() or of() is used internally, and of() checks for non-null?
Is this an Eclipse bug, because empty() is ignored when checking the possibilities?
Can someone please explain this to me, and maybe tell me if there is another way to resolve this warning other than using @SuppressWarnings("null")? Thanks.
Missing nullity annotations in java core
Every type has a nullity status. Thus, in Optional<Integer>, both types mentioned have a nullity status - is it @NonNull Optional<@NonNull Integer>? Presumably the answer is trivially: yes.
There is no point in having a 'nullable optional' (if you have those, stop and fix that first), and the type argument to Optional is trivially @NonNull (you can't have a present Optional whose value is null).
However, eclipse doesn't know that. Furthermore, the java core classes are not nullity annotated.
Presumably, you've marked your class as 'default to -everything is nonnull-', otherwise you'd need to slap a @NonNull on everything, which is unwieldy. Thus, eclipse thinks your method's return type is @NonNull Optional<@NonNull Integer>.
However, the Optional.ofNullable method (in class java.util.Optional) simply returns @WhoKnows Optional<@WhoKnows T> - the j.u.Optional class is not marked as 'default to everything being non null', nor is the method itself annotated with @NonNull, because the java core classes do not have such annotations at all. Therefore, eclipse is warning you, telling you: hey, Optional.ofNullable might return null (we know it won't, but how would eclipse know that?), and you're just returning it verbatim, but your method promises to never return null, so this is a potential problem.
How to use nullity annotations
You need a way to have your nullity checker (eclipse, in your case) know the nullity status of everything. Including libraries you are using, so, at least including java.*. That means you need a concept called 'external annotations' - a file that 'fills in the blanks', so to speak, on java.* and everything else. For starters, it would list that the ofNullable static method in java.util.Optional should be deemed to be annotated with @NonNull.
Without such a system it's a miserable, miserable experience - going halfway on nullity annotations means you get an endless cavalcade of the very warnings you are getting now. Leading you to remove the warnings (which makes the feature pointless) or annotate most of your methods with #SuppressWarnings("null") which would also make the feature pointless.
The last time I checked which is admittedly a very long time ago, there was no such file or system available, but I believe eclipse has at least added the concept of 'external annotations'. I'd search the web and try to find an EA file that includes all/most of the java.* API at least and use it.
Pick a side!
Optional is a way to lift nullity status into the type system.
Annotations are also a way to do that.
Do not use both at the same time.
Useful facts about nullity systems
Optional is not backwards compatible. Take java.util.Map's .get() method. It currently returns V. If you feel Optional is the correct answer, then this method is incorrect as it should be returning Optional<V> instead. However, changing it is backwards incompatible, and java is exceedingly unlikely to ever do that. Thus, Optional will never get to their perfect world, where Optional is used as de-facto mechanism everywhere.
Annotations can be backwards compatible and therefore clearly superior as an answer. Unfortunately, there are at least 9 competing annotation-based nullity systems, and they all work quite differently. Some are type-use based, some are not, for example.
Most annotation-based frameworks don't have a @PolyNull concept (only the Checker Framework's does). That means they are all incapable of representing the nullity status of certain existing methods (a sketch follows below).
Optional doesn't properly compose and doesn't have the poly null concept and cannot be made to have it, so that is another reason Optional will never be able to fully replace null in java.
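To illustrate what @PolyNull expresses, here is a minimal sketch using the Checker Framework's annotation (the method itself is made up): the return value is null exactly when the argument is null, which a single @Nullable/@NonNull signature cannot say without writing the method twice.
import org.checkerframework.checker.nullness.qual.PolyNull;

public class PolyNullExample {
    // Viewed as two methods by the checker: one where @PolyNull means @NonNull
    // (non-null in, non-null out) and one where it means @Nullable.
    public static @PolyNull String normalized(@PolyNull String s) {
        return (s == null) ? null : s.trim().toLowerCase();
    }
}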
Where does that leave you
Nullity annotations are unwieldy (not standardized, lacking external annotation files). Optional is a pipe dream (existing APIs cannot be modified to use it). So where does that leave you?
Mostly, just accept that null is fine. Turn off the annotation-based checks, stop using Optional for stuff like this.
There is something you can do right now, and something that the java API is already doing, as it's entirely backwards compatible:
Work on nicer APIs.
Have a look at the javadoc of java.util.Map, for example. There's computeIfAbsent, getOrDefault, and a few other methods that mostly mean you don't need any type-based nullity at all to work with Map without having to bother with null. For example, you don't have to write this code:
Map<String, List<String>> example = ...;
List<String> list = example.get(key);
if (list == null) {
    list = new ArrayList<>();
    example.put(key, list);
}
list.add("Test");
Instead you can just write:
Map<String, List<String>> example = ...;
example.computeIfAbsent(key, k -> new ArrayList<>()).add("Test");
and you don't have to write:
Map<String, Integer> example = ...;
int v = 0;
// 2 calls - ugly
if (example.containsKey(key)) v = example.get(key);
// or:
Integer vRaw = example.get(key);
if (vRaw != null) v = vRaw.intValue();
You can instead just write:
Map<String, Integer> example = ...;
int v = example.getOrDefault(key, 0);
You can apply the same principles to the code you write. For example:
public int getSomeValue(int defaultValue) {
    Integer nullable = this.get("keyToANullableInteger", Integer.class);
    return nullable == null ? defaultValue : nullable.intValue();
}

public Integer getSomeValueOrNull() {
    return this.get("keyToANullableInteger", Integer.class);
}

public int getSomeValue() {
    Integer n = this.get("keyToANullableInteger", Integer.class);
    if (n == null) throw new NullPointerException("key not present in data store");
    return n.intValue();
}
No chance a caller is going to be confused about what can and cannot be null here. (You probably don't need all 3, think a bit about your API design).
Is there a way to swap myself (this) with some other object in Java?
In Smalltalk we could write
Object subclass:myClass [
"in my method I swap myself with someone else"
swapWith:anObject [
self become:anObject.
^nil
]
]
myClass subclass:subClass [
]
obj := myClass new.
obj swapWith:subClass new.
obj inspect.
The result is An instance of subClass, obviously.
I need to do following in Java:
I am in a one-directional hierarchy (directed acyclic graph)
in one of my methods (event listener method to be exact) I decide that I am not the best suited object to be here, so:
I create a new object (from a subclass of my class, to be exact), swap myself with it, and let myself be garbage-collected in the near future
So, in short, how can I achieve in Java self become: (someClass new:someParameters)? Are there some known design patterns I could use?
In its most general form, arbitrary object swapping is impossible to reconcile with static typing. The two objects might have different interfaces, so this could compromise type safety. If you impose constraints on how objects can be swapped, such a feature can be made type safe. Such features never became mainstream, but they have been investigated in research; look, for instance, at Gilgul.
Closely related is reclassification, the ability to change the class of an object dynamically. This is possible in Smalltalk with some primitives. Again, this puts type safety at risk and never became mainstream, but it has been investigated in research. Look at Wide Classes, Fickle, or Plaid.
A poor man's solution to object swapping is a proxy that you interpose between the client and the object to swap, or the use of the state and strategy design patterns.
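A rough Java sketch of that proxy idea (all names are hypothetical): clients hold a reference to a stable forwarding object, and "become" is simulated by swapping the delegate behind it.
interface Handler {
    void handle(String event);
}

// The object clients actually keep a reference to; its identity never changes.
final class SwappableHandler implements Handler {
    private Handler delegate;

    SwappableHandler(Handler initial) {
        this.delegate = initial;
    }

    // Every call is forwarded to whichever object is currently "active".
    @Override
    public void handle(String event) {
        delegate.handle(event);
    }

    // The closest Java gets to "self become: anObject": replace the delegate in place.
    void swapWith(Handler replacement) {
        this.delegate = replacement;
    }
}
Code that only ever sees the Handler interface cannot tell that the object behind the proxy has been replaced.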
Here is an interesting thread on the official forum. I believe that object encapsulation in combination with strong types makes this feature unable to work in Java. Plus, for the already slow JVM, this could lead to disaster...
this is a reserved word in Java and you cannot override it.
What you're trying to do can be implemented with a simple reference. You just iterate (or walk through your graph) and change the reference to whatever you want to be active.
Consider this:
List<String> stringList = new ArrayList<String>();
// fill your list
String longestWord = "";
for (String s : stringList) {
    if (longestWord.length() < s.length()) {
        longestWord = s;
    }
}
longestWord is pointing to another object now.
I am trying to wrap an external library in more idiomatic Clojure. This includes making its data structures lazy. I'd like to get in the read static method below and make FooList lazy.
I am running into many problems:
Java static methods can't be overridden
proxy only seems to generate instance objects
gen-class seems to kinda sorta work, but I get lost in a sea of namespaces, aliases, etc.
The method in question calls another static and private method (which I'd like to reuse), making overriding the public method difficult.
What's the best strategy to do this? Is it possible to open up FooList and re-implement it as lazy, with the resulting class being available to the rest of my code?
org.apache.commons.collections.list.LazyList seems appropriate for the task, but I'm not sure how to go about actually being able to use it.
Help?
public class Foo {
    public static FooList read(String filename) throws IOException {
        FooList foos = new FooList(); // FooList extends ArrayList
        BufferedReader br = new BufferedReader(new FileReader(filename));
        for (String s = br.readLine(); null != s; s = br.readLine()) {
            Foo f = parseLine(s);
            foos.add(f);
        }
        br.close();
        return foos;
    }

    private static Foo parseLine(String s) {
        // return s as Foo
    }
}
Might be completely out of your scope, but I had a similar problem. So my final choice was to go with java.lang.Iterable which, if you think about it, fits the laziness criteria exactly. Yes, it does not have some of the stuff List has (e.g. size()), but I could afford to sacrifice that for simplicity. Looking back, I have never regretted making this choice.
In Clojure you can just call (seq collection) on any Iterable collection (e.g. your FooList) and get a lazy sequence that traverses the collection. Though the laziness doesn't actually win you much, since you have already loaded the full FooList on the Java side.
If you really want laziness across both Clojure and Java then you could do the following:
Have your Java function return a custom implementation of java.util.Iterator rather than a complete collection. In order to get lazy reading, you should ensure that readLine() only gets called once per iteration (either in the next() or hasNext() method). Remember to close() the reader when you reach the end of the file. (A sketch follows after this list.)
On the Clojure side create the lazy seq with (iterator-seq java-iterator)
If you do this, then your file will be read in and parsed incrementally as the Clojure sequence is consumed.
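As a rough sketch of that Java-side iterator (the class name is mine, and it assumes parseLine is made accessible instead of private): it reads exactly one line per step and closes the reader once the file is exhausted.
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.Iterator;
import java.util.NoSuchElementException;

public class FooIterator implements Iterator<Foo> {
    private final BufferedReader reader;
    private String nextLine;

    public FooIterator(String filename) throws IOException {
        reader = new BufferedReader(new FileReader(filename));
        nextLine = reader.readLine(); // read ahead exactly one line
    }

    @Override
    public boolean hasNext() {
        return nextLine != null;
    }

    @Override
    public Foo next() {
        if (nextLine == null) throw new NoSuchElementException();
        Foo current = Foo.parseLine(nextLine); // assumes parseLine is no longer private
        try {
            nextLine = reader.readLine();         // one readLine() per iteration
            if (nextLine == null) reader.close(); // close at end of file
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return current;
    }

    @Override
    public void remove() {
        throw new UnsupportedOperationException();
    }
}
On the Clojure side you would then wrap an instance of this iterator with iterator-seq, as described above, so the lines are parsed only as the sequence is consumed.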
Generally in Java this is not worth doing, as you usually want to read the whole file. However, if you just want to read the header, or the file is too large to read at once, you would read just the lines you need.
However, as an exercise, the simplest way to change this class is to edit it. You can get the source for this class (decompiling it if you have to) and put it first on your class path, so it is taken as a patch to your current system.
Can't you create Bar and BarList with the methods and the lazy behaviour you want, delegating to Foo and FooList, and then simply use Bar and BarList?
Did I miss the point of your question?
I am not able to understand the point of the Option[T] class in Scala. I mean, I am not able to see any advantages of None over null.
For example, consider the code:
object Main {
  class Person(name: String, var age: Int) {
    def display = println(name + " " + age)
  }

  def getPerson1: Person = {
    // returns a Person instance or null
  }

  def getPerson2: Option[Person] = {
    // returns either Some[Person] or None
  }

  def main(argv: Array[String]): Unit = {
    val p = getPerson1
    if (p != null) p.display
    getPerson2 match {
      case Some(person) => person.display
      case None => /* Do nothing */
    }
  }
}
Now suppose the method getPerson1 returns null; then the call made to display on the first line of main is bound to fail with an NPE. Similarly, if getPerson2 returns None, the display call will again fail with a similar error.
If so, then why does Scala complicate things by introducing a new value wrapper (Option[T]) instead of following the simple approach used in Java?
UPDATE:
I have edited my code as per @Mitch's suggestion. I am still not able to see any particular advantage of Option[T]. I have to test for the exceptional null or None in both cases. :(
If I have understood correctly from @Michael's reply, is the only advantage of Option[T] that it explicitly tells the programmer that this method could return None? Is this the only reason behind this design choice?
You'll get the point of Option better if you force yourself to never, ever, use get. That's because get is the equivalent of "ok, send me back to null-land".
So, take that example of yours. How would you call display without using get? Here are some alternatives:
getPerson2 foreach (_.display)
for (person <- getPerson2) person.display
getPerson2 match {
  case Some(person) => person.display
  case _ =>
}
getPerson2.getOrElse(new Person("Unknown", 0)).display
None of these alternatives will let you call display on something that does not exist.
As for why get exists, Scala doesn't tell you how your code should be written. It may gently prod you, but if you want to fall back to no safety net, it's your choice.
You nailed it here:
is the only advantage of Option[T] that it explicitly tells the programmer that this method could return None?
Except for the "only". But let me restate that in another way: the main advantage of Option[T] over T is type safety. It ensures you won't be sending a T method to an object that may not exist, as the compiler won't let you.
You said you have to test for nullability in both cases, but if you forget (or don't know) that you have to check for null, will the compiler tell you? Or will your users?
Of course, because of its interoperability with Java, Scala allows nulls just as Java does. So if you use Java libraries, if you use badly written Scala libraries, or if you use badly written personal Scala libraries, you'll still have to deal with null pointers.
Two other important advantages of Option I can think of are:
Documentation: a method type signature will tell you whether an object is always returned or not.
Monadic composability.
The latter one takes much longer to fully appreciate, and it's not well suited to simple examples, as it only shows its strength on complex code. So, I'll give an example below, but I'm well aware it will hardly mean anything except for the people who get it already.
for {
  person <- getUsers
  email <- person.getEmail // Assuming getEmail returns Option[String]
} yield (person, email)
Compare:
val p = getPerson1 // a potentially null Person
val favouriteColour = if (p == null) null else p.favouriteColour
with:
val p = getPerson2 // an Option[Person]
val favouriteColour = p.map(_.favouriteColour)
The monadic bind property, which appears in Scala as the flatMap (and map) functions, allows us to chain operations on objects without worrying about whether they are 'null' or not.
Take this simple example a little further. Say we wanted to find all the favourite colours of a list of people.
// list of (potentially null) Persons
for (person <- listOfPeople) yield if (person == null) null else person.favouriteColour
// list of Options[Person]
listOfPeople.map(_.map(_.favouriteColour))
listOfPeople.flatMap(_.map(_.favouriteColour)) // discards all None's
Or perhaps we would like to find the name of a person's father's mother's sister:
// with potential nulls
val father = if (person == null) null else person.father
val mother = if (father == null) null else father.mother
val sister = if (mother == null) null else mother.sister
// with options
val fathersMothersSister = getPerson2.flatMap(_.father).flatMap(_.mother).flatMap(_.sister)
I hope this sheds some light on how options can make life a little easier.
The difference is subtle. Keep in mind that to be truly a function it must return a value; null is not really considered to be a "normal return value" in that sense, but more a bottom type/nothing.
But, in a practical sense, when you call a function that optionally returns something, you would do:
getPerson2 match {
  case Some(person) => // handle a person
  case None => // handle nothing
}
Granted, you can do something similar with null - but this makes the semantics of calling getPerson2 obvious by virtue of the fact it returns Option[Person] (a nice practical thing, rather than relying on someone reading the doc and getting an NPE because they didn't read the doc).
I will try and dig up a functional programmer who can give a stricter answer than I can.
For me, Options are really interesting when handled with the for-comprehension syntax. Taking Synesso's preceding example:
// with potential nulls
val father = if (person == null) null else person.father
val mother = if (father == null) null else father.mother
val sister = if (mother == null) null else mother.sister
// with options
val fathersMothersSister = for {
  father <- person.father
  mother <- father.mother
  sister <- mother.sister
} yield sister
If any of the assignments is None, fathersMothersSister will be None, but no NullPointerException will be raised. You can then safely pass fathersMothersSister to a function taking Option parameters without worrying. So you don't check for null, and you don't care about exceptions. Compare this to the Java version presented in Synesso's example.
You have pretty powerful composition capabilities with Option:
def getURL : Option[URL]
def getDefaultURL : Option[URL]
val (host,port) = (getURL orElse getDefaultURL).map( url => (url.getHost,url.getPort) ).getOrElse( throw new IllegalStateException("No URL defined") )
Maybe someone else pointed this out, but I didn't see it:
One advantage of pattern-matching with Option[T] vs. null checking is that Option is a sealed class, so the Scala compiler will issue a warning if you neglect to code either the Some or the None case. There is a compiler flag that will turn such warnings into errors. So it's possible to prevent the failure to handle the "doesn't exist" case at compile time rather than at runtime. This is an enormous advantage over the use of the null value.
It's not there to help avoid a null check, it's there to force a null check. The point becomes clear when your class has 10 fields, two of which could be null. And your system has 50 other similar classes. In the Java world, you try to prevent NPEs on those fields using some combination of mental horsepower, naming conventions, or maybe even annotations. And every Java dev fails at this to a significant degree. The Option class not only makes "nullable" values visually clear to any developers trying to understand the code, but allows the compiler to enforce this previously unspoken contract.
[ copied from this comment by Daniel Spiewak ]
If the only way to use Option were to pattern match in order to get values out, then yes, I agree that it doesn't improve at all over null. However, you're missing a *huge* class of its functionality. The only compelling reason to use Option is if you're using its higher-order utility functions. Effectively, you need to be using its monadic nature. For example (assuming a certain amount of API trimming):
val row: Option[Row] = database fetchRowById 42
val key: Option[String] = row flatMap { _ get "port_key" }
val value: Option[MyType] = key flatMap (myMap get)
val result: MyType = value getOrElse defaultValue
There, wasn't that nifty? We can actually do a lot better if we use for-comprehensions:
val value = for {
  row <- database fetchRowById 42
  key <- row get "port_key"
  value <- myMap get key
} yield value
val result = value getOrElse defaultValue
You'll notice that we are *never* checking explicitly for null, None or any of its ilk. The whole point of Option is to avoid any of that checking. You just string computations along and move down the line until you *really* need to get a value out. At that point, you can decide whether or not you want to do explicit checking (which you should never have to do), provide a default value, throw an exception, etc.
I never, ever do any explicit matching against Option, and I know a lot of other Scala developers who are in the same boat. David Pollak mentioned to me just the other day that he uses such explicit matching on Option (or Box, in the case of Lift) as a sign that the developer who wrote the code doesn't fully understand the language and its standard library.
I don't mean to be a troll hammer, but you really need to look at how language features are *actually* used in practice before you bash them as useless. I absolutely agree that Option is quite uncompelling as *you* used it, but you're not using it the way it was designed.
One point that nobody else here seems to have raised is that while you can have a null reference, there is a distinction introduced by Option.
That is, you can have Option[Option[A]], which would be inhabited by None, Some(None) and Some(Some(a)), where a is one of the usual inhabitants of A. This means that if you have some kind of container and want to be able to store null pointers in it, and get them out, you need to pass back some extra boolean value to know if you actually got a value out. Warts like this abound in the Java container APIs, and some lock-free variants can't even provide them.
null is a one-off construction, it doesn't compose with itself, it is only available for reference types, and it forces you to reason in a non-total fashion.
For instance, when you check
if (x == null) ...
else x.foo()
you have to carry around in your head throughout the else branch that x != null and that this has already been checked. However, when using something like Option
x match {
  case None => ...
  case Some(y) => y.foo
}
you know y is not None by construction -- and you'd know it wasn't null either, if it weren't for Hoare's billion dollar mistake.
Option[T] is a monad, which is really useful when you are using higher-order functions to manipulate values.
I'll suggest you read the articles listed below; they are really good articles that show you why Option[T] is useful and how it can be used in a functional way.
Martians vs Monads: Null Considered Harmful
Monads are Elephants Part 1
Adding on to Randall's teaser of an answer, understanding why the potential absence of a value is represented by Option requires understanding what Option shares with many other types in Scala—specifically, types modeling monads. If one represents the absence of a value with null, that absence-presence distinction can't participate in the contracts shared by the other monadic types.
If you don't know what monads are, or if you don't notice how they're represented in Scala's library, you won't see what Option plays along with, and you can't see what you're missing out on. There are many benefits to using Option instead of null that would be noteworthy even in the absence of any monad concept (I discuss some of them in the "Cost of Option / Some vs null" scala-user mailing list thread here), but talking about it in isolation is kind of like talking about a particular linked list implementation's iterator type, wondering why it's necessary, all the while missing out on the more general container/iterator/algorithm interface. There's a broader interface at work here too, and Option provides a presence-and-absence model of that interface.
I think the key is found in Synesso's answer: Option is not primarily useful as a cumbersome alias for null, but as a full-fledged object that can then help you out with your logic.
The problem with null is that it is the lack of an object. It has no methods that might help you deal with it (though as a language designer you can add increasingly long lists of features to your language that emulate an object if you really feel like it).
One thing Option can do, as you've demonstrated, is to emulate null; you then have to test for the extraordinary value "None" instead of the extraordinary value "null". If you forget, in either case, bad things will happen. Option does make it less likely to happen by accident, since you have to type "get" (which should remind you that it might be null, er, I mean None), but this is a small benefit in exchange for an extra wrapper object.
Where Option really starts to show its power is helping you deal with the concept of I-wanted-something-but-I-don't-actually-have-one.
Let's consider some things you might want to do with things that might be null.
Maybe you want to set a default value if you have a null. Let's compare Java and Scala:
String s = (input==null) ? "(undefined)" : input;
val s = input getOrElse "(undefined)"
In place of a somewhat cumbersome ?: construct we have a method that deals with the idea of "use a default value if I'm null". This cleans up your code a little bit.
Maybe you want to create a new object only if you have a real value. Compare:
File f = (filename==null) ? null : new File(filename);
val f = filename map (new File(_))
Scala is slightly shorter and again avoids sources of error. Then consider the cumulative benefit when you need to chain things together as shown in the examples by Synesso, Daniel, and paradigmatic.
It isn't a vast improvement, but if you add everything up, it's well worth it everywhere save very high-performance code (where you want to avoid even the tiny overhead of creating the Some(x) wrapper object).
The match usage isn't really that helpful on its own except as a device to alert you about the null/None case. Where it is really helpful is when you start chaining it, e.g., if you have a list of options:
val a = List(Some("Hi"), None, Some("Bye"))
a match {
  case List(Some(x), _*) => println("We started with " + x)
  case _ => println("Nothing to start with.")
}
Now you get to fold the None cases and the List-is-empty cases all together in one handy statement that pulls out exactly the value you want.
Null return values are only present for compatibility with Java. You should not use them otherwise.
It is really a programming style question. Using Functional Java, or by writing your own helper methods, you could have your Option functionality but not abandon the Java language:
http://functionaljava.org/examples/#Option.bind
Just because Scala includes it by default doesn't make it special. Most aspects of functional languages are available in that library and it can coexist nicely with other Java code. Just as you can choose to program Scala with nulls you can choose to program Java without them.
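As a sketch of the "write your own helper methods" route mentioned above (this is a hand-rolled illustration with made-up names, not the Functional Java API): a tiny Option-like class with map and getOrElse is enough to get most of the chaining shown in the Scala answers.
import java.util.NoSuchElementException;

public abstract class Opt<T> {
    public interface F<A, B> { B apply(A a); }

    public abstract boolean isDefined();
    public abstract T get();

    public <R> Opt<R> map(F<T, R> f) {
        return isDefined() ? Opt.<R>some(f.apply(get())) : Opt.<R>none();
    }

    public T getOrElse(T fallback) {
        return isDefined() ? get() : fallback;
    }

    public static <A> Opt<A> some(final A value) {
        return new Opt<A>() {
            public boolean isDefined() { return true; }
            public A get() { return value; }
        };
    }

    public static <A> Opt<A> none() {
        return new Opt<A>() {
            public boolean isDefined() { return false; }
            public A get() { throw new NoSuchElementException("none"); }
        };
    }
}
A getPerson-style method would then return Opt.some(person) or Opt.<Person>none(), and callers chain map/getOrElse instead of writing null checks.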
Admitting in advance that it is a glib answer, Option is a monad.
Actually, I share the doubt with you. About Option, it really bothers me that 1) there is a performance overhead, as there are a lot of "Some" wrappers created everywhere, and 2) I have to use a lot of Some and Option in my code.
So, to see the advantages and disadvantages of this language design decision, we should take alternatives into consideration. As Java just ignores the problem of nullability, it's not an alternative. An actual alternative is provided by the Fantom programming language. It has nullable and non-nullable types, and the ?. and ?: operators instead of Scala's map/flatMap/getOrElse. I see the following bullets in the comparison:
Option's advantage:
simpler language - no additional language constructs required
uniform with other monadic types
Nullable's advantage:
shorter syntax in typical cases
better performance (as you don't need to create new Option objects and lambdas for map, flatMap)
So there is no obvious winner here. And one more note: there is no principal syntactic advantage to using Option. You can define something like:
def nullableMap[T](value: T, f: T => T) = if (value == null) null else f(value)
Or use some implicit conversions to get pretty syntax with dots.
The real advantage of having explicit option types is that you are able to not use them in 98% of all places, and thus statically preclude null exceptions. (And in the other 2% the type system reminds you to check properly when you actually access them.)
Another situation where Option works is where types cannot hold a null value. It is not possible to store null in an Int, Float, Double, etc. value, but with an Option you can use None.
In Java, you would need to use the boxed versions (Integer, ...) of those types.
One of the things that can be a little annoying about Java is the amount of code you need to express concepts. I am a believer in the "less code is better" philosophy, and I'd like to know how I can write Java without being so frustratingly verbose. Recently, I read the Hidden Features of Java question and was introduced to using double-brace initialization to simulate a List or Map literal. There are, of course, drawbacks to using this method, but it does allow you to do certain things with significantly fewer characters and (if you format it right) make the code a lot cleaner and clearer. I'm wondering if there aren't other clever tricks and lesser known language features which could make my code more concise.
I'd like to see answers with an explanation of the technique, the more verbose way which it replaces, and any potential drawbacks to using the technique.
Prior to the introduction of the diamond operator in Java 7, static factory methods for creating generic types could be used to reduce verbosity by reducing the need to repeat the type parameter. (This is because without the diamond operator, Java never infers the type parameter on constructors, but it will on method calls.) Google Collections uses this technique, so you can write:
Set<MyClassWithALongName> set = Sets.newHashSet();
instead of:
Set<MyClassWithALongName> set = new HashSet<MyClassWithALongName>();
Look in the Lists, Sets and Maps classes of Google Collections for methods starting with "new" for more examples of this.
Unless you are writing for an old version of Java, as of Java 7 it is better to just use the diamond operator.
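For instance, with the diamond operator the same declaration simply becomes:
Set<MyClassWithALongName> set = new HashSet<>();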
A similar one you probably already know about, using the "varargs" feature:
String[] array = new String[] {"stack", "over", "flow"};
List<String> list = Arrays.asList(array);
can be abbreviated
List<String> list = Arrays.asList("stack", "over", "flow");
Admittedly not a huge savings, but it does reduce the verbosity a little bit. As Thomas notes, the list will be immutable, so watch out for that. Actually, you can modify the list, you just can't change its length. Thanks to pimlottc for pointing that out.
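A quick sketch of that fixed-size behaviour (variable names are just illustrative):
List<String> list = Arrays.asList("stack", "over", "flow");
list.set(0, "Stack");  // allowed: replaces an element in place
list.add("!");         // throws UnsupportedOperationException: the size cannot change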
Use a dependency injection framework like spring. I'm almost always amazed at how much code construction logic produces.
I've found that the most (only?) effective way to write concise java is to not write java at all. In cases where I needed to write something quickly that still interoperates with Java, I've found Groovy to be an excellent choice. Using a more concise language that still compiles to JVM bytecode can be an excellent solution. While I have no personal experiences with it, I've heard that Scala is an even better choice than Groovy in many cases.
Check out lambdaj. It has lots of features that can help to make your code more concise and readable.
Fluent interfaces can help - using builders and method chaining to make something resembling a DSL in java. The code you end up with can be a little harder to read though as it breaks Java's normal coding conventions such as removing the set / get from properties.
So in a fake Swing fluent interface you might define a button thus:
JButton button = Factory.button().icon(anIcon).tooltip("Wow").swing();
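Under the hood such an interface is usually just a builder whose methods return this; here is a rough sketch (all names hypothetical, loosely mirroring the fake example above):
import javax.swing.Icon;
import javax.swing.JButton;

public class ButtonBuilder {
    private Icon icon;
    private String tooltip;

    // Factory.button() in the example above would hand out one of these
    public static ButtonBuilder button() {
        return new ButtonBuilder();
    }

    public ButtonBuilder icon(Icon icon) {
        this.icon = icon;
        return this; // returning this is what makes the chaining work
    }

    public ButtonBuilder tooltip(String text) {
        this.tooltip = text;
        return this;
    }

    // the terminal call that actually produces the Swing component
    public JButton swing() {
        JButton b = new JButton();
        b.setIcon(icon);
        b.setToolTipText(tooltip);
        return b;
    }
}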
Another approach is to use another language; there are many that integrate well with the JVM, such as:
JRuby
Scala
Cal
A "closeQuietly" method can be used in try/finally blocks in situations where IO exceptions on close are uninteresting (or impossible).
Closeable c = null;
try {
    ...
    c = openIt(...);
    ...
} finally {
    closeQuietly(c);
}
where:
/** Close 'c' if it is not null, squashing IOExceptions */
public void closeQuietly(Closeable c) {
    if (c != null) {
        try {
            c.close();
        } catch (IOException ex) {
            // log error
        }
    }
}
Note that with Java 7 and later, the new "try with resources" syntax makes this particular example redundant.
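For comparison, a minimal try-with-resources sketch (the file name and handling are only illustrative): the resource is closed automatically, even when an exception is thrown.
try (BufferedReader in = new BufferedReader(new FileReader("data.txt"))) {
    System.out.println(in.readLine());
} catch (IOException ex) {
    // handle or log the error; close() has already happened
}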
I found a blog post giving an interesting technique which allows for writing a map literal in Java like you would be able to do in Perl, Python, Ruby, etc: Building your own literals in Java - Tuples and Maps
I really like this approach! I'll just summarize it here.
The basic idea is to create a generic pair class and define static functions that will construct a pair, and a map from a varargs array of pairs. This allows the following concise map literal definition:
Map(o("height", 3), o("width", 15), o("weight", 27));
Where o is the name of the static function to construct a pair of T1 and T2 objects, for any object types T1 and T2, and Map is the name of the static function to construct a Map. I'm not sure I like the choice of Map as the name of the map construction function because it is the same as the name of the Java interface, but the concept is still good.
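For reference, here is a minimal sketch of what those static helpers might look like (my reconstruction of the idea, not the blog post's exact code):
import java.util.HashMap;
import java.util.Map;

public final class Literals {

    // A simple generic pair carrying one key/value entry.
    public static final class Pair<A, B> {
        final A first;
        final B second;
        Pair(A first, B second) {
            this.first = first;
            this.second = second;
        }
    }

    // o("height", 3) constructs a pair.
    public static <A, B> Pair<A, B> o(A first, B second) {
        return new Pair<A, B>(first, second);
    }

    // Map(o("height", 3), o("width", 15)) assembles the pairs into a HashMap.
    public static <A, B> Map<A, B> Map(Pair<A, B>... pairs) {
        Map<A, B> map = new HashMap<A, B>();
        for (Pair<A, B> pair : pairs) {
            map.put(pair.first, pair.second);
        }
        return map;
    }
}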
Double-brace initialisation (instance initialiser blocks)
Example 1 (Map):
Map<String, String> myMap = new HashMap<String, String>() {{
    put("a", "b");
    put("c", "d");
}};
Example 2 (List):
List<String> myList = new ArrayList<String>() {{
    add("a");
    add("b");
    add("c");
}};
More Guava goodness to initialize immutable maps (which I'm finding to be a way more common case than initializing mutable maps): the ImmutableMap.of(...) variants.
Map<Service, Long> timeouts = ImmutableMap.of(
    orderService, 1500L,
    itemService, 500L);
Ever have to iterate through a collection, just to map its elements by one of its properties? No more, thanks to Maps.uniqueIndex():
private void process(List<Module> modules) {
    Map<String, Module> byName = Maps.uniqueIndex(modules, new Function<Module, String>() {
        @Override public String apply(Module input) {
            return input.getName();
        }
    });
}
or if this is frequent enough, make the function a public static final member of Module so that the above is reduced to:
Map<String, Module> byName = Maps.uniqueIndex(modules, Module.KEY_FUNCTION);