Scenario:
Java 1.6
class Animal {
    private String name;
    ...
    public String getName() { return name; }
    ...
}

class CatDog extends Animal {
    private String dogName;
    private String catName;
    ...
    public String getDogName() { return dogName; }
    public String getCatName() { return catName; }
    public String[] getNames() { return new String[]{ catName, dogName }; }
    ...
    public String getName() { return "ERROR! DO NOT USE ME"; }
}
Problem:
getName doesn't make sense and shouldn't be used in this example. I'm reading about the @Deprecated annotation. Is there a more appropriate annotation?
Questions:
A) Is it possible to force an error when this method is used (before runtime)?
B) Is there a way to display a customized warning/error message for the annotation I use? Ideally shown when the user hovers over the deprecated method.
Generally, you use @Deprecated for methods that have been made obsolete by a newer version of your software but that you're keeping around for API compatibility with code that depends on the old version. I'm not sure it's exactly the right tag for this scenario, because getName is still actively used by other subclasses of Animal, but it will certainly alert users of the CatDog class that they shouldn't call that method.
If you want to cause an error at compile time when that method is used, you can change your compiler options to treat use of @Deprecated methods as an error instead of a warning (for example, javac -Xlint:deprecation -Werror, or the equivalent setting in your IDE). Of course, you can't guarantee that everyone who uses your library will set this option, and as far as I know there's no way to force a compile error based on the language specification alone. Removing the method from CatDog would still allow clients to call it, since they would simply invoke the default implementation from the superclass Animal (which presumably should keep the method).
It is certainly possible, however, to display a custom message when the user hovers over the deprecated method. The Javadoc @deprecated tag lets you explain why a method was deprecated, and that explanation pops up instead of the usual method description when the user hovers over the method in an IDE like Eclipse. It would look like this:
/**
 * @deprecated Do not use this method!
 */
@Deprecated
public String getName() {
    throw new UnsupportedOperationException();
}
(Note that you can make your implementation of the method throw an exception to guarantee that if the user didn't notice the @Deprecated annotation at compile time, they'll definitely notice it at runtime.)
Deprecation means the method shouldn't be used any longer and may be removed in future releases. Basically, exactly what you want.
Yes, there's a trivially easy way to get a compile error when someone tries to use the method: remove it. Any code that calls it will then fail to compile (and clients compiled against the old version will fail at runtime with a NoSuchMethodError). That's generally not to be recommended for obvious reasons, but if there's a really good reason to break backwards compatibility, it's the easiest way to achieve it. You could also make the method signature incompatible, but really, the simplest solution that works is generally the best.
If you want a custom message when someone hovers over the method, use the Javadoc for it; that's exactly what it's there for:
/**
 * @deprecated Explanation of why the method was deprecated; if possible,
 * include what should be used instead.
 */
After refactoring our User class, we could not remove the userGuid property, because it was used by mobile apps. Therefore, I marked it as deprecated. The good thing is that dev tools such as IntelliJ IDEA recognize it and show the message.
public class User {
    ...
    /**
     * @deprecated userGuid is equal to guid, but the SLB mobile app is using user_guid.
     * This is going to be removed in the future.
     */
    @Deprecated
    public String getUserGuid() {
        return guid;
    }
}
Deprecated is the way to go. You can also configure the compiler to flag certain things as an error as opposed to a warning but, as Edward pointed out, you generally deprecate a method so that you don't have to completely clean up all references to it at this point in time.
In Eclipse, to configure Errors and Warnings, go to Window -> Preferences. Under Java -> Compiler -> Errors/Warnings, you'll see a section for Deprecated APIs. You may choose to instruct the compiler to ignore, warn, or error when a method is deprecated. Of course, if you're working with other developers, they would have to configure their compilers the same way.
Related
Lombok offers the annotation @NonNull, which performs the null check and throws an NPE (if not configured differently).
I do not understand why I would use that annotation as described in the example from the documentation:
private String name;

public NonNullExample(@NonNull Person person) {
    super("Hello");
    if (person == null) {
        throw new NullPointerException("person is marked @NonNull but is null");
    }
    this.name = person.getName();
}
The NPE would be thrown anyway. The only reason to use the annotation here, IMO, is if you want the exception to be different from an NPE.
EDIT: I do know that the exception would be thrown explicitly and thus "controlled", but at least the text of the error message should be editable, shouldn't it?
Writing a type annotation such as @NonNull serves several purposes.
It is documentation: it communicates the method's contract to clients, in a more concise and precise way than Javadoc text.
It enables run-time checking -- that is, it guarantees that your program crashes with a useful error message (rather than doing something worse) if a buggy client mis-uses your method. Lombok does this for you, without forcing the programmer to write the run-time check. The referenced example shows the two ways to do this: with a single @NonNull annotation or with an explicit programmer-written check. The "Vanilla Java" version either has a typo (a stray @NonNull) or shows the code after Lombok processes it.
It enables compile-time checking. A tool such as the Checker Framework gives a guarantee that the code will not crash at run time. Tools such as NullAway, Error Prone, and FindBugs are heuristic bug-finders that will warn you about some mis-uses of null but do not give you a guarantee.
IMHO, you've misunderstood that documentation page.
That documentation page doesn't imply that you are recommended to use both Lombok's @NonNull annotation and an explicit if (smth == null) throw …-like check at the same time (in the same method).
It just says that a code like this one (let's call it code A):
import lombok.NonNull;

public class NonNullExample extends Something {
    private String name;

    public NonNullExample(@NonNull Person person) {
        super("Hello");
        this.name = person.getName();
    }
}
will be automatically (internally) translated by Lombok into code like the one quoted in the question (let's call it code B).
But that documentation page doesn't say that it would make sense for you to explicitly write code B (though you are allowed to; Lombok will even try to prevent a double check in this case). It just says that with Lombok you are now able to write code A (and explains how it works — that it will be implicitly converted into code B).
Note that code B is "vanilla Java" code. It isn't expected to be processed by Lombok a second time. So @NonNull in code B is just a plain annotation, which has no influence on the behavior (at least, not by Lombok's means).
It's a separate question why Lombok works that way, i.e. why it doesn't remove @NonNull from the generated code. Initially I even thought it might be a bug in that documentation page. But, as the Lombok author explains in his comment, the @NonNull annotations are intentionally kept for documentation purposes and for possible processing by other tools.
The idea of the annotation is to avoid the if (person == null) check in your code and keep your code cleaner.
I love Lombok, but in this case (personally) I prefer to use the @Nonnull annotation from javax.annotation together with Objects.requireNonNull from java.util.Objects.
Using Lombok in this way makes the code shorter, but also less clear and readable:
public Builder platform(@NonNull String platform) {
    this.platform = platform;
    return this;
}
This method raises a NullPointerException (with no evidence of it in the code), and in addition, passing a null argument in a method call is not reported by my IDE (IntelliJ IDEA Ultimate 2020.1 EAP, latest version, with the Lombok plugin).
So I prefer using the @Nonnull annotation from javax.annotation in this way:

public Builder platform(@Nonnull String platform) {
    this.platform = Objects.requireNonNull(platform);
    return this;
}
The code is a little more verbose, but it is clearer, and my IDE is able to warn me if I pass a null argument in a method call!
It serves a similar purpose to java.util.Objects.requireNonNull() or Guava's Preconditions. It just makes the code more compact and fail-fast.
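As a minimal sketch of that fail-fast behavior (the class name and message below are made up for illustration):

```java
import java.util.Objects;

public class RequireNonNullDemo {
    private final String platform;

    RequireNonNullDemo(String platform) {
        // Throws a NullPointerException with the given message when platform is null,
        // at the point of assignment rather than at some later use.
        this.platform = Objects.requireNonNull(platform, "platform must not be null");
    }

    public static void main(String[] args) {
        try {
            new RequireNonNullDemo(null);
        } catch (NullPointerException e) {
            System.out.println(e.getMessage()); // platform must not be null
        }
    }
}
```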
I am refactoring a class to use a builder with a private constructor instead of public constructors. I want the old, deprecated, public constructors to use the builder, as shown below. (This is an example of the behavior I am trying to achieve.)
// old public constructor
@Deprecated
public MyClazz() {
    return new MyClazzBuilder().Build();
}
This gives a "Cannot return a value from a method with void result type" error.
Is this type of functionality possible in Java? How could it be achieved?
Update: this code is part of a distributed jar; deleting the old constructors is not an option, because I need to maintain backwards compatibility.
No. Constructors operate on an object, they don't return it. [footnote 1]
One way to get this sort of functionality is with an init() method.
@Deprecated
public MyClazz() {
    init();
}

public void init() {
    // do the actual work
}
Now your builder can call the same init() method to avoid having the code in two places.
Because you are keeping the deprecated signatures around, it's hard to avoid splitting the logic for preparing an instance into multiple places. That's not ideal, but it's the price of deprecation and maintaining backwards compatibility.
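A minimal sketch of that shape, with hypothetical names (MyClazz, Builder, and init are invented here; they just illustrate the shared-initialization idea, not the asker's actual class):

```java
public class MyClazz {
    private int value;

    /** @deprecated kept for backwards compatibility; use {@link Builder} instead. */
    @Deprecated
    public MyClazz() {
        init(0); // same initialization path the builder uses
    }

    private MyClazz(Builder b) {
        init(b.value);
    }

    // Single place where instance setup lives, shared by both entry points.
    private void init(int value) {
        this.value = value;
    }

    public int getValue() {
        return value;
    }

    public static class Builder {
        private int value;

        public Builder value(int v) {
            this.value = v;
            return this;
        }

        public MyClazz build() {
            return new MyClazz(this);
        }
    }

    public static void main(String[] args) {
        System.out.println(new MyClazz().getValue());                           // 0
        System.out.println(new MyClazz.Builder().value(42).build().getValue()); // 42
    }
}
```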
[footnote 1] The lifecycle of a Java object is that first the object's memory is allocated, with all fields holding junk content. Next, a constructor runs on that memory to get it into a consistent state, replacing the meaningless values with real ones. Note that the memory the constructor works on is already there, so you can never substitute another object in place of the one being constructed. A return value from a constructor would be exactly that sort of substitution, which the language does not support. If that trick is needed, use a factory or builder instead of a constructor.
Constructors don't return values. Notice how there is no declared return type in a constructor signature. You have a few options:
1) Mark the constructor private and resolve compilation errors immediately
2) Deprecate the constructor, and leave the original implementation there.
I recommend option 2. What you have done is deprecate a constructor and then change its implementation. That is not how deprecation works.
I am using Eclipse Build id: 20120614-1722.
I have a class called "TOY1" with a toString() method in it. I previously understood that when I call
System.out.println(TOY1);
it should return the address. However, for some reason it is returning the result of the toString() I declared, without me specifying the @Override annotation.
Is it safe to keep it that way? Or is this a new feature implemented in the specific build I have?
Thanks
EDIT
As asked this is part of my code:
public class TOY1 {
    // irrelevant declarations

    public String toString() {
        String data;
        data = "Manufacturer=";
        data += manufact;
        data += "DOP:";
        data += date_of_production;
        return data;
    }
}
When declaring TOY1_instance of TOY1 and then printing it out using System.out.println(TOY1_instance),
I am getting the actual data as opposed to some junk address.
My question is: where did I override it? No warning was shown, and nothing extends or overrides this class.
System.out.println(obj); calls obj.toString() internally. It just happens that the default implementation of toString(), if you don't override it, returns an address-like value.
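That default value comes from Object.toString(), which is specified as the class name, an "@", and the unsigned hex representation of the hash code. A quick check:

```java
public class DefaultToStringDemo {
    public static void main(String[] args) {
        Object plain = new Object();
        String s = plain.toString(); // e.g. java.lang.Object@1b6d3586
        System.out.println(s);
        // Matches the documented default:
        // getClass().getName() + "@" + Integer.toHexString(hashCode())
        System.out.println(s.equals(
            plain.getClass().getName() + "@" + Integer.toHexString(plain.hashCode()))); // true
    }
}
```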
You can omit the @Override annotation, but it's safer to use it. It becomes especially useful when you think you are overriding but actually aren't, because of a tiny difference in the signature. E.g.:
@Override
public String tostring() //...
won't compile. An even more common mistake is a wrong equals():
@Override
public boolean equals(TOY1 obj) //...
Do you see why? Without @Override it's very easy to miss such a tremendous bug.
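A runnable illustration of the pitfall (Toy is a made-up class): equals(Toy) overloads rather than overrides Object.equals(Object), so any call through an Object-typed reference, including from collections, silently falls back to identity comparison. Adding @Override to equals(Toy) would turn this into a compile error.

```java
class Toy {
    final String name;

    Toy(String name) {
        this.name = name;
    }

    // Looks like equals, but the parameter type is Toy, not Object,
    // so this OVERLOADS Object.equals instead of overriding it.
    public boolean equals(Toy other) {
        return other != null && name.equals(other.name);
    }
}

public class OverloadPitfall {
    public static void main(String[] args) {
        Toy a = new Toy("car");
        Toy b = new Toy("car");
        System.out.println(a.equals(b));            // true  -> overload chosen at compile time
        System.out.println(((Object) a).equals(b)); // false -> Object.equals, identity only
    }
}
```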
The @Override annotation is optional; the code works the same with and without it.
But it's a matter of good taste to use it. It clearly shows what's going on: that you are redefining a method from a superclass or implementing a method from an interface.
And as Tamasz mentioned, it's impossible to annotate with @Override a method that isn't actually overriding anything. So it can save you sometimes.
It would always run the closest toString() instance method even if @Override is not specified. To be on the safe side, you should include @Override at the top.
The @Override annotation is just for the compiler; the JVM does not care about it.
See the definition of the Override annotation here:
@Target(value=METHOD)
@Retention(value=SOURCE)
public @interface Override
As you can see in the declaration, it is only retained in the source and not in the class file.
OK, let me get this clear. So your overridden function is still called, but the issue is why it is called even when @Override is not used? If that is the case, then this annotation is simply there for the compiler to check whether you have actually overridden the function or not. It should not affect the logic of the program as such. Also, please post code if you want a specific answer.
Actually, System.out.println() is overloaded for several parameter types, and one of them takes an Object reference, for which it prints the result of the object's toString().
Please refer this URL
There is a constructor with three parameters of type enum:
public SomeClass(EnumType1 enum1, EnumType2 enum2, EnumType3 enum3)
{...}
The three parameters of type enum are not allowed to be combined with all possible values:
Example:
EnumType1.VALUE_ONE, EnumType2.VALUE_SIX, EnumType3.VALUE_TWENTY is a valid combination.
But the following combination is not valid:
EnumType1.VALUE_TWO, EnumType2.VALUE_SIX, EnumType3.VALUE_FIFTEEN
Each of the EnumTypes knows with which values it is allowed to be combined:
EnumType1 and the two others implement an isAllowedWith() method to check that, as follows:
public enum EnumType1 {
    VALUE_ONE, VALUE_TWO, ...;

    public boolean isAllowedWith(final EnumType2 type) {
        switch (this) {
            case VALUE_ONE:
                return type.equals(EnumType2.VALUE_THREE);
            case VALUE_TWO:
                return true;
            case VALUE_THREE:
                return type.equals(EnumType2.VALUE_EIGHT);
            ...
        }
    }
}
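A self-contained, cut-down version of such a check (two values per type and the pairing rules are invented for illustration):

```java
enum EnumType2 { VALUE_THREE, VALUE_EIGHT }

enum EnumType1 {
    VALUE_ONE, VALUE_TWO;

    // Invented rules: VALUE_ONE only pairs with VALUE_THREE; VALUE_TWO pairs with anything.
    public boolean isAllowedWith(final EnumType2 type) {
        switch (this) {
            case VALUE_ONE:
                return type == EnumType2.VALUE_THREE;
            case VALUE_TWO:
                return true;
            default:
                return false;
        }
    }
}

public class AllowedWithDemo {
    public static void main(String[] args) {
        System.out.println(EnumType1.VALUE_ONE.isAllowedWith(EnumType2.VALUE_THREE)); // true
        System.out.println(EnumType1.VALUE_ONE.isAllowedWith(EnumType2.VALUE_EIGHT)); // false
    }
}
```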
I need to run that check at compile time because it is of extreme importance in my project that the combinations are ALWAYS correct at runtime.
I wonder if there is a possibility to run that check with user defined annotations?
Every idea is appreciated :)
The posts above don't offer a solution for a compile-time check; here's mine:
Why not use the concept of nested enums?
You would have EnumType1 containing its own values plus a nested EnumType2, and that one a nested EnumType3.
You could organize the whole thing around your useful combinations.
You would end up with 3 classes (EnumType1, 2 and 3), with each concerned value containing the others with their allowed associated values.
And your call would look like this (assuming you want EnumType1.VALUE_ONE associated with EnumType2.VALUE_FIFTEEN):
EnumType1.VALUE_ONE.VALUE_FIFTEEN // second value corresponding to EnumType2
Thus, you could also have: EnumType3.VALUE_SIX.VALUE_ONE (where SIX is known by type 3 and ONE by type 1).
Your call would change to something like:
public SomeClass(EnumType1 enumType)
=> sample:
SomeClass(EnumType1.VALUE_ONE.VALUE_SIX.VALUE_TWENTY) //being a valid combination as said
To clarify it further, check this post: Using nested enum types in Java
So the simplest way to do this is to 1) define documentation explaining the valid combinations and
2) add the checks in the constructor.
If a constructor throws an exception, then handling it is the responsibility of the invoker. Basically you would do something like this:
public MyClass(EnumType1 foo, EnumType2 bar, EnumType3 baz)
{
    if (!validateCombination(foo, bar, baz))
    {
        throw new IllegalStateException("Contract violated");
    }
}

private boolean validateCombination(EnumType1 foo, EnumType2 bar, EnumType3 baz)
{
    // validation logic
}
Now this part is absolutely critical: mark the class as final. It is possible for a partially constructed object to be recovered and abused to break your application. With the class marked final, a malicious program cannot extend the partially constructed object and wreak havoc.
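A self-contained toy version of this idea; the enum values and the single allowed combination below are invented for illustration:

```java
enum EnumType1 { VALUE_ONE, VALUE_TWO }
enum EnumType2 { VALUE_SIX, VALUE_SEVEN }

// final, so a partially constructed instance cannot be captured via a subclass
final class MyClass {
    MyClass(EnumType1 foo, EnumType2 bar) {
        if (!validateCombination(foo, bar)) {
            throw new IllegalStateException("Contract violated: " + foo + " + " + bar);
        }
    }

    private static boolean validateCombination(EnumType1 foo, EnumType2 bar) {
        // Only VALUE_ONE + VALUE_SIX is allowed in this toy example.
        return foo == EnumType1.VALUE_ONE && bar == EnumType2.VALUE_SIX;
    }
}

public class ValidationDemo {
    public static void main(String[] args) {
        new MyClass(EnumType1.VALUE_ONE, EnumType2.VALUE_SIX); // accepted
        try {
            new MyClass(EnumType1.VALUE_TWO, EnumType2.VALUE_SIX);
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```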
One alternative idea is to write some automated tests to catch this, and hook them into your build process as a compulsory step before packaging/deploying your app.
If you think about what you're trying to catch here, it's code which is legal but wrong. While you could catch that during the compilation phase, this is exactly what tests are meant for.
This would fit your requirement of not being able to build any code with an illegal combination, because the build would still fail. And arguably it would be easier for other developers to understand than writing your own annotation processor...
The only way I know of is to work with annotations. Here is what I mean.
Now your constructor accepts 3 parameters:
public SomeClass(EnumType1 enum1, EnumType2 enum2, EnumType3 enum3) {}
so you are calling it as follows:
SomeClass obj = new SomeClass(EnumType1.VALUE1, EnumType2.VALUE2, EnumType3.VALUE3);
Change the constructor to be private. Create a public constructor that accepts one parameter of any type you want; it may be just a fake parameter.
public SomeClass(Placeholder p)
Now you have to require callers to use this constructor with the argument annotated with a special annotation. Let's call it TypeAnnotation:
SomeClass obj = new SomeClass(
    @TypeAnnotation(
        type1 = EnumType1.VALUE1,
        type2 = EnumType2.VALUE2,
        type3 = EnumType3.VALUE3)
    p);
The call is more verbose, but this is the price we pay for compile-time validation.
Now, how to define the annotation?
@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.PARAMETER)
@interface TypeAnnotation {
    EnumType1 type1();
    EnumType2 type2();
    EnumType3 type3();
}
Please pay attention that the target is PARAMETER and the retention is RUNTIME. (@Retention accepts a single policy; RUNTIME allows reading this annotation reflectively at runtime, and a RUNTIME-retained annotation is still visible to an annotation processor that validates the parameters at compile time.)
Now the public constructor will call the 3-parameter private constructor:
public SomeClass(Placeholder p) {
    this(readAnnotation(EnumType1.class), readAnnotation(EnumType2.class), readAnnotation(EnumType3.class));
}
I am not implementing readAnnotation() here; it should be a static method that takes a stack trace, goes 3 elements back (to the caller of the public constructor), and parses the TypeAnnotation annotation.
Now for the most interesting part: you have to implement an annotation processor.
Take a look here for instructions and here for an example of an annotation processor.
You will have to add this annotation processor to your build script and (optionally) to your IDE. In this case you will get a real compilation error when your compatibility rules are violated.
I believe this solution looks too complicated, but if you really need it, you can do it. It may take a day or so. Good luck.
Well, I am not aware of a compile-time check, and I do not think one is possible: how can the compiler know which value will be passed to the constructor, in case the value of your enum variable is computed at runtime (e.g. by an if clause)?
This can only be validated at runtime, using a validator method like the one you implemented for the enum types.
Example :
If in your code you have something like this :
EnumType1 enumVal;
if (<some condition>) {
enumVal = EnumType2.VALUE_SIX;
} else {
enumVal = EnumType2.VALUE_ONE;
}
there is no way the compiler can know which of the values will be assigned to enumVal, so it won't be able to verify what is passed to the constructor until the if block is evaluated (which can happen only at runtime).
Are there any annotations in Java that mark a method as unsupported? E.g., let's say I'm writing a new class that implements the java.util.List interface. The add() methods in this interface are optional, and I don't need them in my implementation, so I do the following:
public void add(Object obj) {
    throw new UnsupportedOperationException("This impl doesn't support add");
}
Unfortunately, with this, it's not until runtime that one might discover that, in fact, this operation is unsupported.
Ideally, this would be caught at compile time, and such an annotation (e.g. maybe @UnsupportedOperation) would nudge the IDE to tell any users of this method, "Hey, you're using an unsupported operation," in the way that @Deprecated flags Eclipse to highlight any uses of the deprecated item.
Although on the surface this sounds useful, in reality it would not help much. How do you usually use a list? I generally do something like this:
List<String> list = new XXXList<String>();
There's already one level of indirection there, so if I call list.add("Hi"), how should the compiler know that this specific implementation of List doesn't support it?
How about this:
void populate(List<String> list) {
list.add("1");
list.add("2");
}
Now it's even harder: The compiler would need to verify that all calls to that function used lists that support the add() operation.
So no, there is no way to do what you are asking, sorry.
You can do it using AspectJ if you are familiar with it. You must first create a pointcut, then give advice or declare an error/warning at join points matching this pointcut. Of course, you need your own @UnsupportedOperation annotation interface. I give a simple code fragment below.
// This is the pointcut matching calls to methods annotated with your
// @UnsupportedOperation annotation.
pointcut unsupportedMethodCalls() : call(@UnsupportedOperation * *.*(..));

// Declare an error for such calls. This causes a compilation error
// if the pointcut matches any unsupported calls.
declare error : unsupportedMethodCalls() : "This call is not supported.";

// Or you can throw an exception just before the call executes at runtime,
// instead of producing a compile-time error.
before() : unsupportedMethodCalls() {
    throw new UnsupportedOperationException(thisJoinPoint.getSignature().getName());
}
(2018) While it may not be possible to detect this at compile time, there could be an alternative: the IDE (or other tools) could use the annotation to warn the user that such a method is being used.
There actually is a ticket for this: JDK-6447051
From a technical point of view, it shouldn't be much harder to implement than the inspections that detect an illegal use of an @NotNull or a @Nullable accessor.
Try the annotation @DoNotCall:
https://errorprone.info/api/latest/com/google/errorprone/annotations/DoNotCall.html