I recently watched a YouTube tutorial on the Null Object design pattern. Even though there were some errors in it (for example, the NullCar that doesn't do anything creates an infinite loop), the concept was well explained. My question is: what do you do when the objects that can be null have getters, and those getters are used in your code? How do you know which value to return by default? Or should I implement this pattern inside all the objects? What if I need to return strings or primitives? I'm asking from a Java perspective.
EDIT: Won't I just be trading null-object testing for default-value testing? If not, why not?
The objective of a Null Object is to avoid having Null references in the code. The values returned by Null Object getters depend on your domain. Zero or empty string are usually appropriate.
If we transpose the Null Object pattern to real life, what you're asking is similar to asking "how old is nobody?".
Perhaps your design can be improved as you seem not to follow the tell, don't ask principle.
EDIT: the Null Object design pattern is typically used when an object delegates behavior to another object (as in the Strategy or State design patterns); as Tom Hawtin - tackline commented, use Special Case Objects for objects returning values.
As far as I've understood it, the idea is that the null object's value is as close to "nothing" as possible. That unfortunately means you have to define it yourself. As an example, I personally use "" when I can't pass a null String; my null object number is -1 (mostly because by default most database sequences start at 1 and we use those for item ids a lot, so -1 is a dead giveaway that it's a null object); with lists/maps/sets it's Collections.EMPTY_SET, EMPTY_MAP or EMPTY_LIST; and so on and so forth. If I have a custom class I have to create a null object from, I remove all actual data from it, see where that takes me, and then apply what I just mentioned until it's "empty".
So you really don't "know" which value to return by default, you just have to decide it by yourself.
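For a custom class, a minimal sketch of what that could look like (Customer and Order are made-up placeholder classes, not anything from the question):

import java.util.Collections;
import java.util.List;

class Order { /* ... */ }

class Customer {
    public String getName() { return "Alice"; }
    public long getId() { return 42L; }
    public List<Order> getOrders() { return List.of(new Order()); }
}

// The "null" variant: strip the data and fall back to the neutral defaults above.
class NullCustomer extends Customer {
    @Override
    public String getName() { return ""; }                            // empty string, not null

    @Override
    public long getId() { return -1; }                                // -1 as the "no real id" marker

    @Override
    public List<Order> getOrders() { return Collections.emptyList(); } // shared empty list
}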
what do you do when the objects that can be null have getters, and are used in your code? How do you know which value to return by default?
How do you know which classes to implement? This is a design question, it depends on the application.
Generally speaking, the purpose of the NullObject pattern is to support a Replace Conditional with Polymorphism refactoring in the special case where the conditional is a comparison against the null value of the programming language.
A correct implementation of the example in the video would require delegating the driveCar method to the Car classes. The SlowCar and FastCar classes would perform the loop, presumably through a shared implementation in a base class, and the NullCar would just return immediately.
In a Java context, the NullCar.speed attribute would probably be an unboxed int, so setting it to null is not an option. I would probably hide the attribute behind an accessor and have NullCar.getSpeed throw an exception. Any client code that would need a test to avoid this exception would instead move into the car classes.
Delegating all operations that directly depend on a speed value being available is an application of the Tell Don't Ask principle of object-oriented design mentioned by philippe.
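A rough sketch of what that refactoring could look like (class and method names here are guesses, not taken verbatim from the video):

interface Car {
    int getSpeed();
    void drive(int distance);
}

abstract class AbstractCar implements Car {
    private final int speed;

    protected AbstractCar(int speed) { this.speed = speed; }

    @Override
    public int getSpeed() { return speed; }

    @Override
    public void drive(int distance) {
        // the shared loop lives here, so callers never have to test for null
        for (int travelled = 0; travelled < distance; travelled += speed) {
            // ... move the car one step ...
        }
    }
}

class SlowCar extends AbstractCar {
    SlowCar() { super(10); }
}

class FastCar extends AbstractCar {
    FastCar() { super(100); }
}

class NullCar implements Car {
    @Override
    public void drive(int distance) {
        // deliberately does nothing: returning immediately avoids the infinite loop
    }

    @Override
    public int getSpeed() {
        // "no car" has no meaningful speed
        throw new UnsupportedOperationException("NullCar has no speed");
    }
}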
Where should the Null Object pattern be integrated in the code? I think that DAO objects are the first-level clients for this design pattern, as they look up an entity in the database and simply return it.
The nullability checks for these objects pollute the code in the service layer or command layer where they are actually accessed and used.
Please comment.
It should return the null object for the class you are getting. For example, if you have a class A with a getter that returns an object of class B, then the corresponding NullA's getter should return NullB.
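A minimal illustration (A and B are just placeholders): the null object's getter hands back another null object, so callers can keep chaining without null checks.

class B { /* real behaviour here */ }

class NullB extends B { /* no-op overrides here */ }

class A {
    private B b;
    public B getB() { return b; }
}

class NullA extends A {
    @Override
    public B getB() {
        return new NullB();   // never null: the "nothing" version of B
    }
}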
Part of the project I'm working on got updated, and at some point somebody started sending empty collections instead of null as arguments to a method.
This led to a single bug at first, which then led me to replace if (null == myCollection) with if (CollectionUtils.isEmpty(myCollection)), which in the end led to a cascade of several bugs. This way I discovered that a lot of the code treats these collections differently:
when a collection is empty (i.e. the user specifically wanted to have nothing here)
when a collection is null (i.e. the user did not mention anything here)
Hence, my question: is this good or bad design practice?
In his (very good) book Effective Java, Joshua Bloch addresses this question for the return values of methods (not "in general" like your question):
(About use of null) It is error-prone, because the programmer writing
the client might forget to write the special case code to handle a null
return.
(...)
It is sometimes argued that a null return value is preferable to an
empty array because it avoids the expense of allocating the array.
This argument fails on two counts. First, it is inadvisable to worry
about performance at this level unless profiling has shown that the
method in question is a real contributor to performance problems (Item
55). Second, it is possible to return the same zero-length array from every invocation that returns no items because
zero-length arrays are immutable and immutable objects may be shared
freely (Item 15).
(...)
In summary, there is no reason ever to return null from an
array- or collection-valued method instead of returning an empty array
or collection. (...)
Personally, I use this reasoning as a rule of thumb for any use of Collections in my code. Of course, there are some cases where a distinction between null and empty makes sense, but in my experience it's quite rare.
Nevertheless, as stated by BionicCode in the comments section, in the case of a method that returns null instead of empty to specify that something went wrong, you always have the possibility of throwing an exception instead.
I know this question already has an accepted answer. But because of the discussions I had, I decided to write my own to address this problem.
The inventor of NULL himself, Tony Hoare, says:
I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn’t resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.
First of all, it is very bad practice to introduce breaking changes when updating old code that was already released and tested. Refactoring in this situation is always dangerous and should be avoided. I know that it can hurt to release ugly or stupid code, but if this happened, it is a clear sign of a lack of quality-management tools (e.g. code reviews or conventions). Obviously there were no unit tests either, otherwise the changes would have been reverted immediately by their author because of the failing tests.

The main problem is that your team has no convention for how to handle NULL as an argument or result. The intention of the author of the update to switch from NULL arguments to empty collections is absolutely right and should be supported, but he shouldn't have done it on working code, and he should have discussed it with the team so that the rest of the team could follow and make it more effective. Your team definitely must come together and agree on abandoning NULL as an argument or result value, or at least find a standard. Better: send NULL to hell.

Your only real option is to revert the changes made by the author of the update (I assume you are using version control). Then redo the update, but do it the old-fashioned, nasty way using NULL, as you were doing before. Don't use NULL in new code, for a brighter future. Trying to fix the updated version will escalate the situation for sure and therefore waste time (I assume we are talking about a bigger project). Roll back to the previous version if possible.
To make it short, if you don't like to continue reading: yes, it's very bad practice. You yourself can draw this conclusion from the situation you are in now. You are now witnessing very unstable and unpredictable code. The irrational bugs are the proof that your code has become unpredictable. If you don't have unit tests for at least the business logic, then the code is hot iron. The practice that has led to this code can never be good.

NULL has no intuitive meaning to the user of an API when there is the possibility of a neutral result or parameter (as is the case with a collection). An empty collection is neutral. You can expect a collection to be empty or to contain at least one element. There is nothing else you can expect from a collection. Do you want to indicate an error and that an operation couldn't complete? Then prefer to throw an exception with a good name that communicates the root of the error clearly to the caller.
NULL is a historical relic. Stumbling over a NULL value meant that the programmer had written sloppy code: he forgot to initialize a pointer by assigning a memory address to it. The compiler needs a memory address as a reference in order to create a valid pointer. If you wanted a pointer but didn't want to waste memory just by declaring it, or didn't yet know the address it would point to, NULL was the convention for a pointer that points to nowhere, meaning no memory has to be allocated for the pointer's referent (apart from the pointer itself). Today, in modern OO languages with garbage collection and plenty of available memory, NULL has become largely irrelevant in programming. There are situations where it is used to express, e.g., the absence of data, as in SQL. But in OO programming you can avoid it entirely and thereby make your application more robust.
You have the choice to apply the Null Object pattern. It is also best practice to either use default parameters (if your language, like C#, supports them) or overloads. E.g., if the method parameter is optional, use an overload (or a default parameter). If the parameter is mandatory but you can't provide the value, then simply don't call the method. If the parameter is a collection, then always pass an empty collection whenever you have no values.

The author of the method must handle this case, as he must handle all possible cases and parameter states. This includes NULL, so it's the duty of the method's author to check for NULL and decide how to handle it. Here the convention kicks in: if your team has agreed never to use NULL, these annoying and ugly NULL checks are not required anymore. Some frameworks offer a @NotNull attribute. The author can use it to decorate method parameters to indicate that NULL is not a valid value; the compiler will then do the NULL checks, show an error to the programmer who (mis)uses the method, and simply refuse to compile. Alongside code reviews, this can help to prevent or identify violations and lead to more robust code.
Most libraries provide helper classes, e.g. Array.Empty() or Enumerable.Empty() in C#, to create empty collections, and collections provide methods like IsEmpty(). This makes intentions semantically clear and the code nice to read. It's worth writing your own helper if none exists in your standard library.
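In Java the standard library already covers most of this; a small sketch of the equivalents (the method name namesOrEmpty is just illustrative):

import java.util.Collections;
import java.util.List;

public class EmptyCollectionDemo {

    // Return a shared, immutable empty list instead of null so callers stay simple.
    static List<String> namesOrEmpty(List<String> names) {
        return names == null ? Collections.emptyList() : names;
    }

    public static void main(String[] args) {
        List<String> names = namesOrEmpty(null);
        System.out.println(names.isEmpty());   // true, and no NullPointerException
    }
}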
Try to integrate quality management into your team's routine. Do code reviews to make sure the code to be released conforms to your quality standards (unit tests, no NULL values, always using curly braces for statement bodies, naming, etc.).
I hope you can fix your problem. I know this is a stressful situation to clean up the mess of somebody else. This is why communication in teams is so important.
It depends on your needs. Null is definitely not an empty collection. I'd say that it is bad practice to treat an empty and a non-empty collection separately.
Null, however, indicates a lack of data of some kind. I'd say that if such a situation is legal and you are using Java 8 or higher, you should probably use Optional. In this case Optional.empty() means that there is no collection at all, while Optional.of(collection) means that the collection is there, even if it itself is empty.
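A tiny sketch of that distinction (purely illustrative):

import java.util.Collections;
import java.util.List;
import java.util.Optional;

public class OptionalCollectionDemo {
    public static void main(String[] args) {
        // "the user did not mention anything here"
        Optional<List<String>> nothingSpecified = Optional.empty();

        // "a collection was given, and it happens to be empty"
        Optional<List<String>> explicitlyEmpty = Optional.of(Collections.<String>emptyList());

        System.out.println(nothingSpecified.isPresent()); // false
        System.out.println(explicitlyEmpty.isPresent());  // true
    }
}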
For a collection, it is common to check it like this:
if (myCollection != null && !myCollection.isEmpty()) {
    // Process the logic
}
However, as a matter of design, Joshua Bloch recommends using empty collections in Effective Java.
To quote his statement,
there is no reason ever to return null from an array-valued method
instead of returning a zero-length array.
You can find Effective Java here:
https://www.amazon.com/dp/0321356683
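A minimal sketch of that advice, loosely following Bloch's cheese-shop example (class and field names here are illustrative):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class CheeseShop {
    private final List<String> cheesesInStock = new ArrayList<>();

    // Never returns null: callers can iterate immediately without a null check.
    public List<String> getCheeses() {
        return cheesesInStock.isEmpty()
                ? Collections.emptyList()
                : new ArrayList<>(cheesesInStock);
    }
}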
Well, by accessing a null value you can get a NullPointerException directly, and most of the time (in my experience) it is bad practice to make a difference between a null value and an empty collection deeper in the logic. It should just be empty instead of null.
Let’s put some downvote bait here, shall we? As I understand it, you have got a method like
public void foo(Collection myCollection) {
if (CollectionUtils.isEmpty(myCollection)) {
// ...
}
// ...
}
I also understand that this method is called from many places so it would most practical if you don’t need to change all the calls. And I understand that there is a semantic difference between passing null and passing an empty collection to it. An empty collection means that we know for a fact that there are no elements. Null means that it is not specified whether there are any (and I am assuming that your method is able to do useful work in this case too).
The nice design would have two methods:
/** @param myCollection a possibly empty collection; not null */
public void foo(Collection myCollection);
/** Call this method if you don’t want to specify elements */
public void foo();
However, considering your existing code base, you don’t want to introduce the non-null requirement mentioned all of a sudden and break code that has been working until now. One way forward would be to introduce a comment on the 1-arg method effectively saying
Passing null to this method is deprecated. It works but may be
prohibited in a future version.
This will buy you time to change the code base over the coming months or even years and still allow you to arrive at the better design at some point.
At the same time you may change your 1-arg method to simply delegate to the no-arg method if it receives a null.
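One possible shape of that transition (signatures illustrative; the null branch simply forwards to the no-arg overload):

import java.util.Collection;

public class FooService {

    /** @param myCollection a possibly empty collection; passing null is deprecated */
    public void foo(Collection<?> myCollection) {
        if (myCollection == null) {
            foo();            // treat "null" as "nothing specified"
            return;
        }
        // ... handle the (possibly empty) collection ...
    }

    /** Call this overload when you do not want to specify elements. */
    public void foo() {
        // ... handle the "nothing specified" case ...
    }
}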
Caveat: You need to be very clear in your documentation (Javadoc) about the semantics of not specifying elements (ideally calling the no-arg method) and the semantic difference from passing an empty collection.
I've read here why Optional.of() should be used over Optional.ofNullable(), but the answer didn't satisfy me at all, so I ask slightly different:
If you are SURE that your method does not return null, why should you use Optional at all? As far as I know, its more or less only purpose is to remind the "user of a method" that he might have to deal with null values. If he does not have to deal with null values, why should he be bothered with an Optional?
I ask, because I recently made my service-layer return Optionals instead of nulls (in certain situations). I used Optional.of() and was highly confused when it threw a NullPointer.
A sample of what I did:
Optional<User> valueFromDB = getUserById("12");
User user = valueFromDB.get();
.....
public Optional<User> getUserById(String id) {
//...
return Optional.of(userRepository.findOne(id)); // NullPointerException!
}
If null is not possible, I don't see why one would wrap it in an Optional. The dude in the linked answer said "well, if a NullPointer happens, it happens right away!" But do I really want that? If the sole purpose of an Optional is to remind the programmer who gets such an object to keep null in mind (he HAS to unwrap it), why should I want a NullPointerException at wrapping time?
Edit: I needed to edit the question, because it got marked as duplicate, even though I already linked said question from the start. I also did explain, why the answer did not satisfy me, but now I need to edit my text with an explanation.
But here is some appendix to what I want to ask, since I got 5 answers and everyone answers a different case, but none fully covered what I try to ask here:
Is there a reason that Optional.of(null) is impossible and that they specifically added Optional.ofNullable() for the null case?
Using streams should not be the problem with my idea of the implementation.
I got a lot of insight from your answers, thanks for that. But the real question has not been answered until now, as far as I can tell/read/understand.
Maybe I should have asked: "What if we remove the Optional.of() method and only allow Optional.ofNullable() in Java 9, would there be any problem except backwards-compatibility?"
You are mixing up the API design rationale with knowledge within a particular implementation code. It’s perfectly possible that a method declares to return an Optional, because the value might be absent, while at a certain code location within the method, it is known to be definitely present. I.e.
String content;
public Optional<String> firstMatch(String pattern) {
Matcher m = Pattern.compile(pattern).matcher(content);
return m.find() ? Optional.of(m.group()) : Optional.empty();
}
This method’s return type denotes a String that might be absent, while at the code locations creating an Optional instance, it is known whether the value is present or absent. It’s not about detecting a null value here.
Likewise, within the Stream API methods findFirst() and findAny(), it will be known at some point whether there is a matching element, whereas supporting the conversion of its presence to absence in the case of a matching null element is explicitly unsupported and supposed to raise a NullPointerException, per specification. Therefore, Optional.of will be used to return the matching element, which you can easily recognize in the stack trace when using Stream.of((Object) null).findAny();
The other reason to use Optional.of(value) when you know that value can't be null is when you want to do additional filtering operations on that Optional.
For example:
// Assumes StringUtils (e.g. from Apache Commons Lang) plus a 'page' helper with
// hasPageSize(Long) and getDefaultPageSize() are available in the surrounding class.
public static long getPageSizeFrom(HttpServletRequest request) {
    return Optional.of(request.getParameter("pageSize"))
            .filter(StringUtils::isNumeric)
            .map(Long::valueOf)
            .filter(page::hasPageSize)
            .orElseGet(page::getDefaultPageSize);
}
I think you are right with your opinion that you should not use Optional if you are sure that you always have a return-value.
But your method is not sure that it always returns a value!
Think of a call to getUserById(-1). There is (normally) no User with this id, and your userRepository will return null.
So in this case you should use Optional.ofNullable.
https://docs.oracle.com/javase/8/docs/api/java/util/Optional.html#ofNullable-T-
Optional is one of those things that has been imported from functional programming languages and dumped into the laps of OO and procedural programmers without much background explanation...which has caused much pain and hand wringing.
First, a quick link to a blog post (not by me) which greatly helps to clear the air on this: The Design of Optional
Optional is related to functional programming types like Haskell Maybe. Because of the way strong typing works in functional programming, a programmer in that language would use Maybe to say that a value can be either Something, or Nothing. The Something and Nothing are actually different types, here. Anything that needs the values inside a Maybe has to handle both - the code simply won't compile if it doesn't handle both.
Compare that scenario to what is the typical situation in C-based object-oriented languages (Java, C#, C++, etc.) where an object can either have a value, or be null. If a method needs to handle null parameters as an edge case, you need to explicitly write that code - and being the lazy programmers we all are, it's just as often we don't bother to.
Imagine what coding would be like if code wouldn't compile unless null cases were always explicitly handled. That's a pretty close comparison to what happens when using Maybe in functional languages.
When we pull language features over from functional programming, and the compiler behaves the way it always has, and we code the way we always have... you can see there's a disconnect happening.
Separately, Optional can be used as a simple stand-in for null. Because it seems familiar that way, and is new, magpie developers are prone to using it as a replacement for situations where null-checks would have happened before. But, in the end, is foo.isPresent() really so different from foo != null? If that is the sole difference, it's pointless.
And let's not even get started on how Optional can be a stand-in for autoboxing and unboxing in Java.
Now, getting back to your specific question about the particular API of Optional in Java, comparing ofNullable() vs. of(), the best I can work out is that you probably aren't expected to use those in typical code. They are mainly used at the terminal end of stream() operations. You can look at the code to Optional.of() vs. Optional.ofNullable() and see for yourself that the only difference is that ofNullable checks if the value is null and arranges things for that situation.
My jaded eye doesn't see a whole lot of benefit to using Optional in Java, unless I am using Java 8 streams and I have to. Some would say that the main benefit of using Optional as a type for non-stream usage is to specify that a particular parameter is optional - that is, the logic takes a different route if it's there vs. not there. Which, again, you can simply do with null. But I would say that the mental baggage associated with Optional means that for every forward step of supposedly more verbose code, you are taking multiple steps backwards with requiring people to understand that the Optional (in this specific case) is almost entirely cosmetic. For the situation you described, I would probably go back to using nulls and null checks.
Angelika Langer says that Optional.ofNullable is only a convenience method, calling the other two static methods of Optional. It is implemented as:
return value == null ? empty() : of(value);
She also says that Optional.ofNullable was added to the API late.
Here is her text (in German): http://www.angelikalanger.com/Articles/EffectiveJava/80.Java8.Optional-Result/80.Java8.Optional-Result.html
So I would use Optional.of only when null is an error, which should be found early. This is what Tagir Valeev said in:
Why use Optional.of over Optional.ofNullable?
The practical answer is: on most occasions, no. As you mention, if the whole point of using Optional is not knowing whether a value can be null, and you want to make that explicit in a certain API, the fact that .of() can throw a NullPointerException does not make any sense. I always use ofNullable.
The only situation I can think of is if you have a method that returns Optional (to make this null-value possibility explicit), and that method has a default/fallback value under some circumstances; then you return a "default value" Optional using .of().
public Optional<String> getSomeNullableValue() {
if (defaultSituationApplies()) { return Optional.of("default value"); }
else {
String value = tryToGetValueFromNetworkOrNull();
return Optional.ofNullable(value);
}
}
Then again, someone can question whether in that case you can return this default value in case of a null.
Metaphysical discussions aside, IMHO if you use Optionals, and want them to make any sense and not throw exceptions, use ofNullable().
I agree that Optional.of is counterintuitive and for most use cases you would want to use Optional.ofNullable, but there are various purposes for Optional.of:
To explicitly throw a NullPointerException if the value is null. In this case Optional.of functions as a Guard.
When the value simply cannot be null. For instance, constants like Optional.of("Hello world!"). This is a programming aesthetics thing. Optional.ofNullable("Hello world!") looks weird.
To turn a non-null value into an Optional for further chaining with map or filter. This is more a programming convenience thing. Just like Optional.stream() exists to turn an Optional into a Stream for further chaining.
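A small illustration of the last two points (the value is a constant, so Optional.of reads naturally and only exists to enable the chain):

import java.util.Optional;

public class OptionalOfDemo {
    public static void main(String[] args) {
        String greeting = Optional.of("Hello world!")     // cannot be null, so of() is fine
                .filter(s -> s.length() > 5)
                .map(String::toUpperCase)
                .orElse("too short");

        System.out.println(greeting);   // HELLO WORLD!
    }
}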
"What if we remove the Optional.of() method and only allow Optional.ofNullable() in Java 9, would there be any problem except backwards-compatibility?"
Yes, of course there will be compatibility issues. There's just too much code out there using Optional.of.
I agree with your general sentiment though: Optional.of is doing too much (wrapping the value and null-checking). For null-checks we already have Objects.requireNonNull which is conveniently overloaded to accept a descriptive text.
Optional.of and Optional.ofNullable should have been discarded in favor of a constructor made available for users:
return new Optional<>(value);
For null-checks this would have sufficed:
return new Optional<>(Objects.requireNonNull(value, "cannot be null!"));
The Optional type introduced in Java 8 is new to many developers.
Is a getter method returning an Optional<Foo> type in place of the classic Foo good practice? Assume that the value can be null.
Of course, people will do what they want. But we did have a clear intention when adding this feature, and it was not to be a general purpose Maybe type, as much as many people would have liked us to do so. Our intention was to provide a limited mechanism for library method return types where there needed to be a clear way to represent "no result", and using null for such was overwhelmingly likely to cause errors.
For example, you probably should never use it for something that returns an array of results, or a list of results; instead return an empty array or list. You should almost never use it as a field of something or a method parameter.
I think routinely using it as a return value for getters would definitely be over-use.
There's nothing wrong with Optional that it should be avoided, it's just not what many people wish it were, and accordingly we were fairly concerned about the risk of zealous over-use.
(Public service announcement: NEVER call Optional.get unless you can prove it will never be null; instead use one of the safe methods like orElse or ifPresent. In retrospect, we should have called get something like getOrElseThrowNoSuchElementException or something that made it far clearer that this was a highly dangerous method that undermined the whole purpose of Optional in the first place. Lesson learned. (UPDATE: Java 10 has Optional.orElseThrow(), which is semantically equivalent to get(), but whose name is more appropriate.))
After doing a bit of research of my own, I've come across a number of things that might suggest when this is appropriate. The most authoritative being the following quote from an Oracle article:
"It is important to note that the intention of the Optional class is not to replace every single null reference. Instead, its purpose is to help design more-comprehensible APIs so that by just reading the signature of a method, you can tell whether you can expect an optional value. This forces you to actively unwrap an Optional to deal with the absence of a value." - Tired of Null Pointer Exceptions? Consider Using Java SE 8's Optional!
I also found this excerpt from Java 8 Optional: How to use it
"Optional is not meant to be used in these contexts, as it won't buy us anything:
in the domain model layer (not serializable)
in DTOs (same reason)
in input parameters of methods
in constructor parameters"
Which also seems to raise some valid points.
I wasn't able to find any negative connotations or red flags to suggest that Optional should be avoided. I think the general idea is, if it's helpful or improves the usability of your API, use it.
I'd say that in general it's a good idea to use the Optional type for return values that can be nullable. However, with respect to frameworks, I assume that replacing classical getters with Optional types will cause a lot of trouble when working with frameworks (e.g., Hibernate) that rely on coding conventions for getters and setters.
The reason Optional was added to Java is because this:
return Arrays.asList(enclosingInfo.getEnclosingClass().getDeclaredMethods())
.stream()
.filter(m -> Objects.equals(m.getName(), enclosingInfo.getName()))
.filter(m -> Arrays.equals(m.getParameterTypes(), parameterClasses))
.filter(m -> Objects.equals(m.getReturnType(), returnType))
.findFirst()
.getOrThrow(() -> new InternalError(...));
is cleaner than this:
Method matching =
Arrays.asList(enclosingInfo.getEnclosingClass().getDeclaredMethods())
.stream()
.filter(m -> Objects.equals(m.getName(), enclosingInfo.getName()))
.filter(m -> Arrays.equals(m.getParameterTypes(), parameterClasses))
.filter(m -> Objects.equals(m.getReturnType(), returnType))
.getFirst();
if (matching == null)
throw new InternalError("Enclosing method not found");
return matching;
My point is that Optional was written to support functional programming, which was added to Java at the same time. (The example comes courtesy of a blog by Brian Goetz. A better example might use the orElse() method, since this code will throw an exception anyway, but you get the picture.)
But now, people are using Optional for a very different reason. They're using it to address a flaw in the language design. The flaw is this: There's no way to specify which of an API's parameters and return values are allowed to be null. It may be mentioned in the javadocs, but most developers don't even write javadocs for their code, and not many will check the javadocs as they write. So this leads to a lot of code that always checks for null values before using them, even though they often can't possibly be null because they were already validated repeatedly nine or ten times up the call stack.
I think there was a real thirst to solve this flaw, because so many people who saw the new Optional class assumed its purpose was to add clarity to APIs. Which is why people ask questions like "should getters return Optionals?" No, they probably shouldn't, unless you expect the getter to be used in functional programming, which is very unlikely. In fact, if you look at where Optional is used in the Java API, it's mainly in the Stream classes, which are the core of functional programming. (I haven't checked very thoroughly, but the Stream classes might be the only place they're used.)
If you do plan to use a getter in a bit of functional code, it might be a good idea to have a standard getter and a second one that returns Optional.
Oh, and if you need your class to be serializable, you should absolutely not use Optional.
Optionals are a very bad solution to the API flaw because a) they're very verbose, and b) They were never intended to solve that problem in the first place.
A much better solution to the API flaw is the Nullness Checker. This is an annotation processor that lets you specify which parameters and return values are allowed to be null by annotating them with @Nullable. This way, the compiler can scan the code and figure out if a value that can actually be null is being passed to a value where null is not allowed. By default, it assumes nothing is allowed to be null unless it's annotated so. This way, you don't have to worry about null values. Passing a null value to a parameter will result in a compiler error. Testing an object for null that can't be null produces a compiler warning. The effect of this is to change NullPointerException from a runtime error to a compile-time error.
This changes everything.
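A sketch of what that looks like with the Checker Framework's Nullness Checker enabled as an annotation processor on the build (the class and methods below are made up for illustration):

import org.checkerframework.checker.nullness.qual.Nullable;

public class UserDirectory {

    // The return value is explicitly allowed to be null; unannotated types
    // are treated as non-null by default.
    public @Nullable String findNickname(String userId) {
        // ...
        return null;   // fine: the signature says so
    }

    public int nicknameLength(String userId) {
        String nickname = findNickname(userId);
        // Dereferencing without the null check would be flagged at compile time.
        return nickname == null ? 0 : nickname.length();
    }
}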
As for your getters, don't use Optional. And try to design your classes so none of the members can possibly be null. And maybe try adding the Nullness Checker to your project and declaring your getters and setter parameters @Nullable if they need it. I've only done this with new projects. It probably produces a lot of warnings in existing projects written with lots of superfluous tests for null, so it might be tough to retrofit. But it will also catch a lot of bugs. I love it. My code is much cleaner and more reliable because of it.
(There is also a new language that addresses this. Kotlin, which compiles to Java byte code, allows you to specify if an object may be null when you declare it. It's a cleaner approach.)
Addendum to Original Post (version 2)
After giving it a lot of thought, I have reluctantly come to the conclusion that it's acceptable to return Optional on one condition: That the value retrieved might actually be null. I have seen a lot of code where people routinely return Optional from getters that can't possibly return null. I see this as a very bad coding practice that only adds complexity to the code, which makes bugs more likely. But when the returned value might actually be null, go ahead and wrap it inside an Optional.
Keep in mind that methods that are designed for functional programming, and that require a function reference, will (and should) be written in two forms, one of which uses Optional. For example, Optional.map() and Optional.flatMap() both take function references. The first takes a reference to an ordinary getter, and the second takes one that returns Optional. So you're not doing anyone a favor by returning an Optional where the value can't be null.
Having said all that, I still see the approach used by the Nullness Checker as the best way to deal with nulls, since it turns NullPointerExceptions from runtime bugs into compile-time errors.
If you are using modern serializers and other frameworks that understand Optional then I have found these guidelines work well when writing Entity beans and domain layers:
If the serialization layer (usually a DB) allows a null value for a cell in column BAR of table FOO, then the getter Foo.getBar() can return Optional, indicating to the developer that this value may reasonably be expected to be null and that they should handle it. If the DB guarantees the value will not be null, then the getter should not wrap it in an Optional (see the sketch after this list).
Foo.bar should be private and not be Optional. There's really no reason for it to be Optional if it is private.
The setter Foo.setBar(String bar) should take the type of bar and not Optional. If it's OK to use a null argument then state this in the JavaDoc comment. If it's not OK to use null, an IllegalArgumentException or some appropriate business logic is, IMHO, more appropriate.
Constructors don't need Optional arguments (for reasons similar to point 3). Generally I only include arguments in the constructor that must be non-null in the serialization database.
To make the above more efficient, you might want to edit your IDE templates for generating getters and corresponding templates for toString(), equals(Obj o) etc. or use fields directly for those (most IDE generators already deal with nulls).
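A sketch of guidelines 1-3 above (Foo, bar and the nullable column are illustrative):

import java.util.Optional;

public class Foo {

    // plain private field, not Optional (guideline 2)
    private String bar;

    // the BAR column allows NULL, so the getter advertises that (guideline 1)
    public Optional<String> getBar() {
        return Optional.ofNullable(bar);
    }

    // the setter takes the raw type; null handling is documented or rejected (guideline 3)
    public void setBar(String bar) {
        this.bar = bar;
    }
}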
You have to keep in mind that the often-cited advice came from people who had little experience outside Java, with option types, or with functional programming.
So take it with a grain of salt. Instead, let's look at it from the "good practice" perspective:
Good practice not only means asking "how do we write new code?", but also "what happens to existing code?".
In the case of Optional, my environment found a good and easy to follow answer:
Optional is mandatory to indicate optional values in records:
record Pet(String name, Optional<Breed> breed,
           Optional<ZonedDateTime> dateOfBirth) {}
This means that existing code is good as-is, but code that makes use of record (that is, "new code") causes widespread adoption of Optional around it.
The result has been a complete success in terms of readability and reliability. Just stop using null.
When you define an enum for something that can be "undefined" in your interfaces, should you
define a separate enum value for that, or
just use enumValue = null for those situations?
For example,
serviceX.setPrice(Price priceEnum)
enum Price {
CHEAP, EXPENSIVE, VERRRY_EXPENSIVE, UNKNOWN
}
and priceEnum.UNKNOWN when needed
or
enum Price {
CHEAP, EXPENSIVE, VERRRY_EXPENSIVE
}
and priceEnum = null when needed?
Having a little debate on this. Some points that come to mind:
using Price.UNKNOWN saves some "if (price == null)" code. You can handle all of Price's values in a single switch-case
Depending on view technology, it may be easier to localize Price.UNKNOWN
using Price.UNKNOWN kind of causes "magic number" problem in the code, IMO. Here we have Price.UNKNOWN, elsewhere maybe Color.UNDEFINED, Height.NULLVALUE, etc
using priceValue = null is more uniform with how other data types are handled in Java. We have Integer i = null, DomainObject x = null, String s = null for unknown values as well, don't we?
Price.UNKNOWN forces you to decide whether the null value is allowed universally for all use cases. We may have a method Price getPrice() which may return Price.UNKNOWN and setPrice(Price p) which is not allowed to accept Price.UNKNOWN. Since Price.UNKNOWN is always included in the enum's values, those interfaces look a little unclean. I know priceValue = null has the same problem (you cannot define in the interface whether null is accepted or not) but it feels a little cleaner and a little less misleading(?)
This is actually an example of applying Null Object pattern. IMHO it is always better to have a dummy object rather than null. For instance you can add dummy methods to null-object rather than scattering your code with null-checks all over the place. Very convenient.
Also the name of the enum gives you some additional semantics: is the price unknown, undefined, not trustworthy, not yet known? And what does it mean if the price is null?
UPDATE: As Aaron Digulla points out, Null Object pattern requires memory. But this isn't actually the case most of the time. In the traditional implementation you typically have a singleton for Null object used everywhere as there is no need for separate instances. It gets even better with enums because you get singleton semantics for free.
Another point is that null reference and reference to some object occupy the same amount of memory (say 4 bytes on 32-bit machine). It is the object being referenced that occupies some extra memory. But if this is a singleton, there is practically no memory overhead here.
I'd say go with Price.UNKNOWN if that's a valid value for a price.
I agree with the drawbacks of dealing with null references that you mention, and I think they motivate the decision enough.
New languages, take Scala for example (and some older ones, like Haskell), steer away from null references altogether and use Option/Maybe monads instead... for good reasons.
It depends on how you are going to use this enum. If you use it in switch/case statements, it does not matter.
If you create method(s) in the enum, you actually must define UNKNOWN.
For example, you can define an abstract method
public abstract Icon icon();
in your enum and then implement this method for each member of Price. Probably you will want to display a question mark for an unknown price. In this case, just implement icon() so that it creates the appropriate icon.
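A sketch of that idea (the icon file names are made up): every constant, including UNKNOWN, supplies its own implementation, so callers never branch on null.

import javax.swing.Icon;
import javax.swing.ImageIcon;

enum Price {
    CHEAP            { public Icon icon() { return new ImageIcon("cheap.png"); } },
    EXPENSIVE        { public Icon icon() { return new ImageIcon("expensive.png"); } },
    VERRRY_EXPENSIVE { public Icon icon() { return new ImageIcon("very-expensive.png"); } },
    UNKNOWN          { public Icon icon() { return new ImageIcon("question-mark.png"); } };

    public abstract Icon icon();
}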
There is the Enum-Switch-Null-Trap.
So it seems that just like with any property that is an object,
if it doesn't exist then it is null.
Color or Height would be used in the program logic, which cannot handle an undefined color.
A Price is user data and may be unknown.
The color may be user data as well, but to be used as a color in the code, it must be defined.
So Price may be UNKNOWN (instead of null), but Color may not (there, null may indicate an error).
I'm in the middle of QA'ing a bunch of code and have found several instances where the developer has a DTO which implements Comparable. This DTO has 7 or 8 fields in it. The compareTo method has been implemented on just one field:
private DateMidnight field1; //from Joda date/time library
public int compareTo(SomeObject o) {
if (o == null) {
return -1;
}
return field1.compareTo(o.getField1());
}
Similarly the equals method is overridden and basically boils down to:
return field1.equals(o.getField1());
and finally the hashcode method implementation is:
return field1.hashCode();
field1 should never be null and will be unique across these objects (i.e. we shouldn't get two objects with the same field1).
So, the implementations are consistent, which is good, but should I be concerned that only one field is used? Is this unusual? Is it likely to cause problems or confuse other developers? I'm thinking of the scenario where a list of these objects is passed around and another developer uses a Map or Set of some sort and gets unusual behaviour from these objects. Any thoughts appreciated. Thanks!
I suspect that this is a case of "first use wins" - someone needed to sort a collection of these objects or put them in a hash map, and they only cared about the date. The easiest way of implementing that was to override equals/hashCode and implement Comparable<T> in the way you've said.
For specialist sorting, a better approach would be to implement Comparator<T> in a different class... but Java doesn't have any equivalent class for equality testing, unfortunately. I consider it a major weakness in the Java collections, to be honest.
Assuming this really isn't "the one natural and obvious comparison", it certainly smells in terms of design... and should be very carefully documented.
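For instance (a sketch that reuses the question's SomeObject and getField1, so it is not standalone): the specialised date ordering can live in its own Comparator rather than in compareTo/equals/hashCode.

import java.util.Comparator;
import java.util.List;

class SomeObjectSorting {
    // Orders by field1 only, without redefining equality for the whole DTO.
    static final Comparator<SomeObject> BY_FIELD1 =
            Comparator.comparing(SomeObject::getField1);

    static void sortByDate(List<SomeObject> objects) {
        objects.sort(BY_FIELD1);
    }
}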
Strictly speaking, this violates the Comparable spec:
http://download.oracle.com/javase/6/docs/api/java/lang/Comparable.html
Note that null is not an instance of any class, and e.compareTo(null) should throw a NullPointerException even though e.equals(null) returns false.
Similarly, it looks like the equals method will throw NPE on equals(null) instead of returning false (unless of course you "boiled" out the null handling code).
Is it likely to cause problems or confuse other developers?
Possibly, possibly not. It really depends on how large your project is and how widespread/"reusable"/long-lived your object source code is expected to be used:
Small/short-lived/limited use == probably not a problem.
Large/long-lived/widespread use == counter-intuitive implementation may cause future problems
You shouldn't be concerned about it if field1 is really unique. If it's not, you may have problems. Anyway, my advice is to write some unit tests. They should show the truth.
I don't think you need to be concerned. The contract between the three methods is kept and it's consistent.
Whether it's correct from a business logic point of view is a different question.
If, e.g., field1 maps to a primary key in the database, it's perfectly valid. If field1 is the "firstname" of a person, I would be concerned.