Lombok offers the annotation @NonNull, which performs the null check and throws an NPE (if not configured differently).
I do not understand why I would use that annotation as described in the example of that documentation:
private String name;

public NonNullExample(@NonNull Person person) {
    super("Hello");
    if (person == null) {
        throw new NullPointerException("person is marked @NonNull but is null");
    }
    this.name = person.getName();
}
The NPE would be thrown anyway. The only reason to use the annotation here, IMO, is if you want the exception to be something other than an NPE.
EDIT: I do know that the exception would then be thrown explicitly and thus 'controlled', but at least the text of the error message should be editable, shouldn't it?
Writing a type annotation such as @NonNull serves several purposes.
It is documentation: it communicates the method's contract to clients, in a more concise and precise way than Javadoc text.
It enables run-time checking -- that is, it guarantees that your program crashes with a useful error message (rather than doing something worse) if a buggy client mis-uses your method. Lombok does this for you, without forcing the programmer to write the run-time check. The referenced example shows the two ways to do this: with a single @NonNull annotation or with an explicit programmer-written check. The "Vanilla Java" version either has a typo (a stray @NonNull) or shows the code after Lombok processes it.
It enables compile-time checking. A tool such as the Checker Framework gives a guarantee that the code will not crash at run time. Tools such as NullAway, Error Prone, and FindBugs are heuristic bug-finders that will warn you about some mis-uses of null but do not give you a guarantee.
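As a rough illustration of both kinds of checking (GreeterDemo and its greet method are made up for this sketch, not taken from the Lombok docs):

import lombok.NonNull;

public class GreeterDemo {
    // Lombok inserts a run-time null check at the start of this method.
    static String greet(@NonNull String name) {
        return "Hello, " + name;
    }

    public static void main(String[] args) {
        greet("Alice"); // fine
        greet(null);    // plain javac accepts this; a tool such as the Checker Framework
                        // or NullAway would be expected to flag this call at compile time,
                        // while Lombok's generated check throws a NullPointerException at run time
    }
}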
IMHO, you've misunderstood that documentation page.
That documentation page doesn't imply that you are recommended to use both Lombok @NonNull annotations and explicit if (smth == null) throw …-like checks at the same time (in the same method).
It just says that code like this (let's call it code A):
import lombok.NonNull;

public class NonNullExample extends Something {
    private String name;

    public NonNullExample(@NonNull Person person) {
        super("Hello");
        this.name = person.getName();
    }
}
will be automatically (internally) translated by Lombok into code like the one quoted in the question (let's call it code B).
But that documentation page doesn't say that it would make sense for you to explicitly write code B (though you are allowed to; Lombok will even try to prevent a double check in that case). It just says that with Lombok you are now able to write code A (and explains how it will work — it will be implicitly converted into code B).
Note that code B is “vanilla Java” code. It isn't expected to be processed by Lombok a second time. So @NonNull in code B is just a plain annotation, which has no influence on the behavior (at least, not by Lombok's means).
It's a separate question why Lombok works that way — why it doesn't remove @NonNull from the generated code. Initially I even thought that it might be a bug in that documentation page. But, as the Lombok author explains in his comment, the @NonNull annotations are intentionally kept for the purposes of documentation and possible processing by other tools.
The idea of the annotation is to avoid the if (person == null) in your code and keep your code cleaner.
I love Lombok, but in this case (personally) I prefer to use the @Nonnull annotation from javax.annotation together with Objects.requireNonNull from java.util.Objects.
Using Lombok in this way makes the code cleaner, but also less explicit and readable:
public Builder platform(@NonNull String platform) {
    this.platform = platform;
    return this;
}
This method throws a NullPointerException (with no visible evidence of it), and in addition passing a null argument in a method call is not reported by my IDE (IntelliJ IDEA Ultimate 2020.1 EAP - latest version - with the Lombok plugin).
So I prefer using the @Nonnull annotation from javax.annotation in this way:
public Builder platform(@Nonnull String platform) {
    this.platform = Objects.requireNonNull(platform);
    return this;
}
The code is a little more verbose, but it is clearer, and my IDE is able to warn me if I pass a null argument in a method call!
It serves a similar purpose to java.util.Objects.requireNonNull() or Guava's Preconditions. This just makes the code more compact and fail-fast.
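For comparison, a minimal sketch of the equivalent checks side by side (PlatformHolder and its setter names are made up for this illustration; the Guava variant assumes Guava is on the classpath):

import java.util.Objects;
import lombok.NonNull;
// import com.google.common.base.Preconditions;  // only if Guava is on the classpath

public class PlatformHolder {  // hypothetical class for illustration
    private String platform;

    // Lombok generates the null check and throws NullPointerException:
    public void setWithLombok(@NonNull String platform) {
        this.platform = platform;
    }

    // Hand-written JDK equivalent:
    public void setWithJdk(String platform) {
        this.platform = Objects.requireNonNull(platform, "platform must not be null");
    }

    // Guava equivalent (also throws NullPointerException):
    // public void setWithGuava(String platform) {
    //     this.platform = Preconditions.checkNotNull(platform, "platform must not be null");
    // }
}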
Related
Android 3.5.1
I was using the WebView and I noticed that when I override some of its methods, all the parameters are nullable types:
webview.webViewClient = object : WebViewClient() {
    override fun shouldOverrideUrlLoading(view: WebView?, request: WebResourceRequest?): Boolean {
        return super.shouldOverrideUrlLoading(view, request)
    }
}
This means I have to use the safe call operator to use them. However, when I looked at the WebViewClient class that I have overridden the method from, the parameters are not annotated as nullable in the Java code.
public boolean shouldOverrideUrlLoading(WebView view, WebResourceRequest request) {
    return shouldOverrideUrlLoading(view, request.getUrl().toString());
}
So I am left wondering: do I remove the nullability from the overridden method or keep it?
The source of this issue is the interoperability between Java and Kotlin. There are some basic language-level differences between Java and Kotlin which cause interoperability issues. Android Studio provides some Lint checks to warn about them, such as Unknown Nullness. (reference)
Looking at the details of the Unknown Nullness Lint check on android.com, we see that:
To improve referencing code from Kotlin, consider adding
explicit nullness information here with either @NonNull or @Nullable.
and on developer.android.com:
If you use Kotlin to reference an unannotated name member that is defined in a Java class (e.g. a String), the compiler doesn't know whether the String maps to a String or a String? in Kotlin. This ambiguity is represented via a platform type, String!.
and on kotlinlang.org:
Any reference in Java may be null, which makes Kotlin's requirements of strict null-safety impractical for objects coming from Java. Types of Java declarations are treated specially in Kotlin and called platform types.
Therefore, when we override a Java method whose arguments are not annotated with nullness annotations, the IDE adds the nullable sign (?) to the arguments in the Kotlin class. This avoids throwing a NullPointerException when the method is called from Java with a null value for one of the arguments.
webview.webViewClient = object : WebViewClient() {
    override fun shouldOverrideUrlLoading(
        view: WebView, // <- potential to throw an NPE before executing the function block!
        request: WebResourceRequest // <- as well!
    ): Boolean {
        return super.shouldOverrideUrlLoading(view, request)
    }
}
In a nutshell, we SHOULD NOT remove the ? sign from function arguments when the overridden method is defined in a Java class.
Unlike Kotlin, Java objects can accept null values by default.
The @Nullable annotation is mainly used by tools such as code analysers (e.g. if a @Nullable parameter is not handled inside the method, a warning is shown).
The @NonNull annotation is used to specify that the received value can't/won't be null.
If the Java parameter is annotated @NonNull, you can omit the ? check.
If it is annotated @Nullable, it is mandatory to declare the Kotlin parameter with ?.
If there is no annotation, the ? is not mandatory, but it is the safer choice: passing null from Java into a Kotlin fun whose parameter has no ? will lead to an NPE.
A Kotlin parameter declared with ? corresponds to a Java parameter annotated @Nullable (e.g. @Nullable WebView view); without ?, it corresponds to @NonNull WebView view.
Also refer to this: https://kotlinlang.org/docs/reference/java-to-kotlin-interop.html#null-safety
If a virtual method in Java doesn't specify the nullability of its parameters somehow, for example with the @Nullable/@NotNull annotations, you are free to choose the nullability either way when overriding that method in Kotlin.
But how should you choose?
First, you can consult the method documentation and check the method contract. Does it specify that the method can be called with nulls, and what would these nulls mean when passed to the method?
In this particular case, the WebViewClient.shouldOverrideUrlLoading method doc page doesn't say anything about nulls, so that can be taken as evidence that its parameters are supposed to be non-nullable.
Second, if you are still unsure about the nullability after consulting the docs, consider what you would do with a null parameter value if you received one. If the only reasonable thing in that situation is to throw an exception, you can delegate that check to the parameter-checking code generated by Kotlin — by declaring the parameters as non-nullable.
They are not specified as nullable annotation in the Java code.
If that's true, note that you risk a NullPointerException if the parameters are not annotated as nullable in the Java code and a null value is passed in.
So remove the nullability from the overridden method if nullability is not specified in the Java code.
For more detail, read this and also this.
On the language-level, this can be generalized:
For proper Java interoperability, the Kotlin code should reflect the annotations of the Java code.
The linter only complains about lacking annotations in the other direction, for Kotlin interoperability.
See this recent article on How to write Java friendly Kotlin code?
Null references are a familiar problem for everybody by now, because everything started with native development in C/C++. A reference to an object in memory might be missing or cleaned up for various reasons. Java was designed in the spirit of those native languages, which assume null pointers everywhere.
Managing all that mutable state becomes quite an adventure with thousands of microservices, and it has produced a lot of workarounds for nullable references: Optional objects, Null Object mocks, wrappers around references, annotations, etc. All of this exists to avoid mutating the state of an object allocated somewhere else.
Finally, Kotlin is not the first here. Scala, with its immutable state, already had excellent experience in building and supporting applications. So, to answer this question and summarize: Java was designed this way, following its parent C++, and you should expect null values everywhere. We still check references for null, even when they are not annotated @Nullable, for this reason. Kotlin handles Java usage in the same way, and that is why you need to handle null values in overridden methods.
I'm using Spring AOP and therefore indirectly CGLIB in my Spring MVC controller. Since CGLIB needs a default constructor, I included one, and my controller now looks like this:
@Controller
public class ExampleController {

    private final ExampleService exampleService;

    public ExampleController() {
        this.exampleService = null;
    }

    @Autowired
    public ExampleController(ExampleService exampleService) {
        this.exampleService = exampleService;
    }

    @Transactional
    @ResponseBody
    @RequestMapping(value = "/example/foo")
    public ExampleResponse profilePicture() {
        return this.exampleService.foo(); // IntelliJ reports potential NPE here
    }
}
The problem now is that IntelliJ IDEA's static code analysis reports a potential NullPointerException, because this.exampleService might be null.
My question is:
How can I prevent these false positive null pointer warnings? One solution would be to add assert this.exampleService != null or maybe use Guava's Preconditions.checkNotNull(this.exampleService).
However, this would have to be added to each method for each and every field used in that method. I would prefer a solution I could apply in a single place. Maybe an annotation on the default constructor or something?
EDIT:
This seems to be fixed with Spring 4; however, I'm currently using Spring 3:
http://blog.codeleak.pl/2014/07/spring-4-cglib-based-proxy-classes-with-no-default-ctor.html
You can annotate your field (if you are sure that it will really never be null) with:
// import org.jetbrains.annotations.NotNull;
@NotNull
private final ExampleService exampleService;
This will instruct IDEA to assume the field is non-null in all cases. In this case your real constructor will also be annotated automatically by IDEA:
public ExampleController(@NotNull ExampleService exampleService) {
    this.exampleService = exampleService;
}
You could create a default instance of the ExampleService and assign it in the default constructor rather than assigning null:
public ExampleController() {
    this.exampleService = new ExampleService();
}
or
public ExampleController() {
    this.exampleService = ExampleServiceFactory.create();
}
Since this object should never be used in normal operation, it will have no effect; but if the object is used by the framework, or accidentally used directly because of later changes to the code, this will give you more information in a stack trace than a NullPointerException would, and it also resolves the warning that this.exampleService can be null.
This might require some changes to the ExampleService class, either to allow creating a new instance with default parameters, or to allow creating a new instance that is essentially a shell that does nothing. If it inherits from a base interface type, then a non-functional class can inherit from the same base type specifically as a place-holder. This pattern will also allow you to inject error handling code to provide a clear warning if the application attempts to use a default non-functional instance.
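A minimal sketch of that place-holder idea, assuming ExampleService can be expressed as an interface and ExampleResponse is the response type from the question (NoOpExampleService is a made-up name):

interface ExampleService {
    ExampleResponse foo();
}

// Non-functional place-holder used only by the CGLIB-required default constructor.
class NoOpExampleService implements ExampleService {
    @Override
    public ExampleResponse foo() {
        // Failing loudly with a descriptive message beats a bare NullPointerException.
        throw new IllegalStateException(
                "ExampleController was created through its default constructor; "
                + "no real ExampleService was injected");
    }
}

// In the controller:
// public ExampleController() {
//     this.exampleService = new NoOpExampleService();
// }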
I have found that in languages like Java and C# where almost everything is a pointer, relying on null pointers even in areas where they should never be used makes maintenance more difficult than it should be, since they can often be used accidentally. The underlying virtual machines are designed to have the equivalent of a panic attack whenever code attempts to use a null pointer - I suspect this is because of the legacy of C where null pointers could really mess up the entire running program. Because of this virtual panic attack, they don't provide any useful information that would help diagnose the problem, especially since the value (null) is completely useless for identifying what happened. By avoiding null pointers, and instead specifically designing class hierarchies to determine whether an instantiated object should do any real work, you can avoid the potential problems with null pointers and also make your code easier and safer to maintain.
IntelliJ IDEA's static code analysis reports a potential NullPointerException
You can switch off these reports for specific fields, variables, methods, etc. using @SuppressWarnings({"unchecked", "UnusedDeclaration"}) or a comment. Actually, IDEA itself can suggest this solution to you. See https://www.jetbrains.com/idea/help/suppressing-inspections.html
You can switch the warning off for a single line of code:
void foo(java.util.Set set) {
    @SuppressWarnings("unchecked")
    java.util.Set<String> strings = set;
    System.out.println(strings);
}
Do you know a nice alternative to Apache Commons Validate or Guava Preconditions that would throw IllegalArgumentException instead of NullPointerException when checking that an object is not null (other than Spring's Assert)?
I'm aware that Javadocs say:
Applications should throw instances of this class [NullPointerException] to indicate other
illegal uses of the null object.
Nevertheless, I just don't like it. For me, an NPE has always meant that I simply forgot to guard a null reference somewhere. My eyes are so trained that I can spot one while browsing logs at a speed of a few pages per second, and when I do, a bug alert goes off in my head. Therefore, it would be quite confusing for me to have it thrown where I expect an IllegalArgumentException.
Say I have a bean:
public class Person {
    private String name;
    private String phone;
    //....
}
and a service method:
public void call(Person person) {
    //assert person.getPhone() != null
    //....
}
In some contexts it may be OK for a person to have no phone (my grandma doesn't own any). But if you'd like to call such a person, to me that is calling the call method with an illegal argument. Look at the hierarchy: NullPointerException is not even a subclass of IllegalArgumentException. It basically tells you: once again you tried to call a getter on a null reference.
Besides, there were discussions already and there is this nice answer I fully support. So my question is just - do I need to do ugly things like this:
Validate.isTrue(person.getPhone() != null, "Can't call a person that hasn't got a phone");
to have it my way, or is there a library that would just throw IllegalArgumentException for a notNull check?
Since the topic of this question evolved into "correct usage of IllegalArgumentException and NullPointerException", I would like to point out the straightforward answer in Effective Java, Item 60 (second edition):
Arguably, all erroneous method invocations boil down to an illegal argument
or illegal state, but other exceptions are standardly used for certain kinds of illegal
arguments and states. If a caller passes null in some parameter for which null values
are prohibited, convention dictates that NullPointerException be thrown
rather than IllegalArgumentException. Similarly, if a caller passes an out-of-range
value in a parameter representing an index into a sequence, IndexOutOfBoundsException
should be thrown rather than IllegalArgumentException.
What about Preconditions' checkArgument?
public void call(Person person) {
    Preconditions.checkArgument(person.getPhone() != null);
    // cally things...
}
checkArgument throws IllegalArgumentException instead of NullPointerException.
You can use valid4j with hamcrest-matchers (found on Maven Central as org.valid4j:valid4j). The 'Validation' class has support for regular input validation (i.e. throwing recoverable exceptions):
import static org.valid4j.Validation.*;
validate(argument, isValid(), otherwiseThrowing(InvalidException.class));
Links:
http://www.valid4j.org/
https://github.com/valid4j/valid4j
On a side-note: This library also has support for pre- and post-conditions (like assertions really), and it's possible to register your own customized global policy, if needed:
import static org.valid4j.Assertive.*;
require(x, greaterThan(0)); // throws RequireViolation extends AssertionError
...
ensure(r, notNullValue()); // throws EnsureViolation extends AssertionError
Take a look at https://github.com/cowwoc/requirements.java/ (I'm the author). You can override the default exception type using withException() as follows:
new Verifiers().withException(IllegalArgumentException.class).requireThat(name, value).isNotNull();
Not that I'm aware of. I'd just roll your own to get the behavior you want with a concise invocation, mimicking Guava's implementation but tweaking the exception type.
class Preconditionz {
    public static <T> T checkNotNull(T reference, Object errorMessage) {
        if (reference == null) {
            throw new IllegalArgumentException(String.valueOf(errorMessage));
        }
        return reference;
    }
}
I like to go ahead and import static these really frequently used methods, too, so you can call them super concisely.
import static com.whatever.util.Preconditionz.checkNotNull;

// ...

public void call(Person person) {
    checkNotNull(person, "person");
    checkNotNull(person.getPhone(), "person.phone");
    // ...
}
Depending on your environment, you might want to name it checkNotNull2 so it's easier to add the import via autocompletion in your IDE, or let you use it alongside the standard checkNotNull.
I guess I learned something again here on SO thanks to great comments by Olivier Grégoire, Louis Wasserman, CollinD and Captain Man.
Standards are usually a strong and sufficient reason, as they form a common language that programmers will always understand correctly, but in this particular case I had a small doubt that maybe the convention around NPE isn't quite right. Java is an old language and some of its features turned out to be a bit unlucky (I don't want to say wrong, that's maybe too strong a judgment) - like checked exceptions, although you may disagree. Now I think that this doubt is resolved and I should:
Throw an IllegalArgumentException when, in the particular context, I can tell why the null value is wrong from the business perspective. For instance, in the service method public void call(Person person) I know what it means to the system that the phone number is null.
Throw a NullPointerException when I just know that the null value is wrong and will sooner or later cause a NullPointerException, but in the particular context I'm unaware of what it means from the business perspective. An example would be Guava's immutable collections: when you build one and try to add a null element, it throws an NPE. It doesn't understand what this value means to you - it's too generic - but it knows the value is wrong here, so it decides to tell you immediately, with a more appropriate message, so that you can recognize the problem more effectively.
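To illustrate both cases (reusing the Person bean from the question; CallService is a made-up class, and ImmutableList is Guava's):

import com.google.common.collect.ImmutableList;

public class CallService {  // hypothetical example class
    public void call(Person person) {
        // Business-level rule: here we know what a missing phone means,
        // so an IllegalArgumentException with a domain message fits.
        if (person.getPhone() == null) {
            throw new IllegalArgumentException("Can't call a person that hasn't got a phone");
        }
        // ...
    }

    public void genericNullCheck() {
        // Generic library code has no business context, so it fails fast with an NPE:
        ImmutableList.of("a", null, "c"); // throws NullPointerException
    }
}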
Having the above in mind, I would say the best option for the assertion in the public void call(Person person) example is what Captain Man suggests:
Preconditions.checkArgument(person.getPhone() != null, "msg");
checkArgument is a good name for this method: it's clear that I'm checking the business contract compliance of the person argument, and it's clear that I'm expecting an IllegalArgumentException if it fails. It's a better name than Apache Commons' Validate.isTrue. Saying Validate.notNull or Preconditions.checkNotNull, on the other hand, suggests that I'm checking for a null reference and actually expecting an NPE.
So the final answer would be: there is no such nice library, and there shouldn't be, as it would be confusing. (And Spring's Assert should be corrected.)
You can easily do this:
if (person.getPhone() == null) {
    throw new IllegalArgumentException("Can't call a person that hasn't got a phone");
}
It is clear to other programmers what you mean, and does exactly what you want.
Scenario:
Java 1.6
class Animal {
    private String name;
    ...
    public String getName() { return name; }
    ...
}

class CatDog extends Animal {
    private String dogName;
    private String catName;
    ...
    public String getDogName() { return dogName; }
    public String getCatName() { return catName; }
    public String[] getNames() { return new String[]{ catName, dogName }; }
    ...
    public String getName() { return "ERROR! DO NOT USE ME"; }
}
Problem:
getName doesn't make sense and shouldn't be used in this example. I'm reading about the @Deprecated annotation. Is there a more appropriate annotation method?
Questions:
A) Is it possible to force an error when this function is used (before runtime)?
B) Is there a way to display a customized warning/error message for the annotation method I will use? Ideally when the user is hovering over deprecated/error function.
Generally, you use @Deprecated for methods that have been made obsolete by a newer version of your software, but which you're keeping around for API compatibility with code that depends on the old version. I'm not sure if it's exactly the best tag to use in this scenario, because getName is still being actively used by other subclasses of Animal, but it will certainly alert users of the CatDog class that they shouldn't call that method.
If you want to cause an error at compile time when that function is used, you can change your compiler options to consider use of @Deprecated methods an error instead of a warning. Of course, you can't guarantee that everyone who uses your library will set this option, and there's no way I know of to force a compile error based only on the language specification. Removing the method from CatDog will still allow clients to call it, since the client will just be invoking the default implementation from the superclass Animal (which presumably should still include that method).
It is certainly possible, however, to display a custom message when the user hovers over the deprecated method. The Javadoc @deprecated tag allows you to specify an explanation of why a method was deprecated, and it will pop up instead of the usual description of the method when the user hovers over the method in an IDE like Eclipse. It would look like this:
/**
 *
 * @deprecated Do not use this method!
 */
@Deprecated
public String getName() {
    throw new UnsupportedOperationException();
}
(Note that you can make your implementation of the method throw an exception to guarantee that if the user didn't notice the @Deprecated tag at compile time, they'll definitely notice it at runtime).
Deprecation means the method shouldn't be used any longer and that it may be removed in future releases. Basically exactly what you want.
Yes, there's a trivially easy way to get a compile error when someone tries to use the method: remove the method. That'll cause errors at link time for any code that tries to use it; it's generally not to be recommended for obvious reasons, but if there's a really good reason to break backwards compatibility, that's the easiest way to achieve it. You could also make the method signature incompatible (always possible), but really, the simplest solution that works is generally the best.
If you want a custom message when someone hovers over the method, use the javadoc for it, that's exactly what it's there for:
/**
 * @deprecated
 * explanation of why function was deprecated, if possible include what
 * should be used.
 */
After refactoring our User class, we could not remove the userGuid property, because it was used by mobile apps. Therefore, I have marked it as deprecated. The good thing is that dev tools such as IntelliJ IDEA recognize it and show the message.
public class User {
    ...
    /**
     * @deprecated userGuid is equal to guid, but the SLB mobile app is using user_guid.
     * This is going to be removed in the future.
     */
    @Deprecated
    public String getUserGuid() {
        return guid;
    }
}
Deprecated is the way to go ... you can also configure the compiler to flag certain things as an error as opposed to a warning, but as Edward pointed out, you generally deprecate a method so that you don't have to completely clean up all references to it at this point in time.
In Eclipse, to configure Errors and Warnings, go to Window -> Preferences. Under Java -> Compiler -> Errors/Warnings, you'll see a section for Deprecated APIs. You may choose to instruct the compiler to ignore, warn, or error when a method is deprecated. Of course, if you're working with other developers, they would have to configure their compiler the same way.
Are there any annotations in Java which mark a method as unsupported? E.g. let's say I'm writing a new class which implements the java.util.List interface. The add() methods in this interface are optional and I don't need them in my implementation, so I do the following:
public void add(Object obj) {
    throw new UnsupportedOperationException("This impl doesn't support add");
}
Unfortunately, with this, it's not until runtime that one might discover that, in fact, this operation is unsupported.
Ideally, this would be caught at compile time, and such an annotation (e.g. maybe @UnsupportedOperation) would nudge the IDE to tell any users of this method, "Hey, you're using an unsupported operation", in the way that @Deprecated flags Eclipse to highlight any uses of the deprecated item.
Although on the surface this sounds useful, in reality it would not help much. How do you usually use a list? I generally do something like this:
List<String> list = new XXXList<String>();
There's already one indirection there, so if I call list.add("Hi"), how should the compiler know that this specific implementation of list doesn't support that?
How about this:
void populate(List<String> list) {
    list.add("1");
    list.add("2");
}
Now it's even harder: The compiler would need to verify that all calls to that function used lists that support the add() operation.
So no, there is no way to do what you are asking, sorry.
You can do it using AspectJ if you are familiar with it. You must first create a point-cut, then give an advice or declare an error/warning on join points matching this point-cut. Of course, you also need your own @UnsupportedOperation annotation interface. A simple code fragment follows.
// This is the point-cut matching calls to methods annotated with your
// @UnsupportedOperation annotation.
pointcut unsupportedMethodCalls() : call(@UnsupportedOperation * *.*(..));

// Declare an error for such calls. This causes a compilation error
// if the point-cut matches any unsupported calls.
declare error : unsupportedMethodCalls() : "This call is not supported.";

// Or you can throw an exception just before this call is executed at runtime
// instead of raising a compile-time error.
before() : unsupportedMethodCalls() {
    throw new UnsupportedOperationException(thisJoinPoint.getSignature().getName());
}
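For completeness, a minimal sketch of what the custom @UnsupportedOperation annotation interface mentioned above could look like (the retention and target choices here are assumptions, not a fixed requirement):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME) // RUNTIME retention so the before() advice can also match at run time
public @interface UnsupportedOperation {
}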
(2018) While it may not be possible to detect it at compile time, there could be an alternative. i.e. The IDE (or other tools) could use the annotation to warn the user that such a method is being used.
There actually is a ticket for this: JDK-6447051
From a technical point of view, it shouldn't be much harder to implement than inspections that detect an illegal use of an @NotNull or a @Nullable accessor.
Try this annotation: @DoNotCall
https://errorprone.info/api/latest/com/google/errorprone/annotations/DoNotCall.html