OK, after reviewing some code with the PMD and FindBugs code analyzers, I was able to make great improvements to the reviewed code. However, there are some things I don't know how to fix. I'll list them below, and (for better reference) I will give each question a number. Feel free to answer any or all of them. Thanks for your patience.
1. Even though I have removed some of the rules, the associated warnings are still there after re-evaluating the code. Any idea why?
2. Please look at the declarations:
private Combo comboAdress;
private ProgressBar pBar;
and the references to the objects through getters and setters:
private final Combo getComboAdress() {
return this.comboAdress;
}
private final void setComboAdress(final Combo comboAdress) {
this.comboAdress = comboAdress;
}
private final ProgressBar getpBar() {
return this.pBar;
}
private final void setpBar(final ProgressBar pBar) {
this.pBar = pBar;
}
Now, I wonder why the first declaration doesn't give me any warning in PMD, while the second gives me the following warning:
Found non-transient, non-static member. Please mark as transient or provide accessors.
More details on that warning here.
3. Here is another warning, also given by PMD:
A method should have only one exit point, and that should be the last statement in the method
More details on that warning here.
Now, I agree with that, but what if I write something like this:
public void actionPerformedOnModifyComboLocations() {
if (getMainTree().isFocusControl()) {
return;
}
// ... do stuff, based on the initial test
}
I tend to agree with the rule, but if the performance of the code suggests multiple exit points, what should I do?
4. PMD gives me this:
Found 'DD'-anomaly for variable 'start_page' (lines '319'-'322').
when I declare something like:
String start_page = null;
I get rid of this info (the warning level is info) if I remove the assignment to null, but then I get an error from the IDE, saying that the variable could be uninitialized at some point later in the code. So I am kind of stuck with that. Is suppressing the warning the best one can do?
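One way out, sketched here with invented names (useCustomStart, customStartPage and DEFAULT_START_PAGE are hypothetical, not from the code in question), is to restructure so the variable is assigned exactly once on every path; that satisfies both PMD and the IDE's definite-assignment check:
// Assign once on every path instead of null-then-reassign.
final String start_page;
if (useCustomStart) {
    start_page = customStartPage;
} else {
    start_page = DEFAULT_START_PAGE;
}
// ...or, more compactly:
String startPage = useCustomStart ? customStartPage : DEFAULT_START_PAGE;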
5. PMD warning:
Assigning an Object to null is a code smell. Consider refactoring.
This is the case with singleton GUI components, or with a method that returns a complex object. Assigning the result to null in the catch() section is justified by the need to avoid returning an incomplete/inconsistent object. Yes, a Null Object should be used (see the sketch below), but there are cases where I don't want to do that. Should I suppress that warning then?
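For reference, here is a minimal Null Object sketch with hypothetical types (Result and NullResult are invented for illustration, not from the code in question):
// Callers receive a safe, do-nothing instance instead of null,
// so no null checks are needed downstream.
interface Result {
    void render();
}
final class NullResult implements Result {
    @Override
    public void render() {
        // intentionally does nothing
    }
}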
6. FindBugs warning #1:
Write to static field MyClass.instance from instance method MyClass.handleEvent(Event)
in the method
@Override
public void handleEvent(Event e) {
switch (e.type) {
case SWT.Dispose: {
if (e.widget == getComposite()) {
MyClass.instance = null;
}
break;
}
}
}
of the static variable
private static MyClass instance = null;
The variable allows me to test whether the form is already created and visible or not, and I need to force the re-creation of the form in some cases. I see no other alternative here. Any insights? (MyClass implements Listener, hence the overridden handleEvent() method.)
7. FindBugs warning #2:
Class MyClass2 has a circular dependency with other classes
This warning is displayed based on simple imports of other classes. Do I need to refactor those imports to make this warning go away? Or does the problem lie in MyClass2?
OK, enough said for now. Expect an update, based on more findings and/or your answers. Thanks.
Here are my answers to some of your questions:
Question number 2:
I think you're not capitalizing the properties properly. The methods should be called getPBar and setPBar.
String pBar;
void setPBar(String str) {...}
String getPBar() { return pBar; }
The JavaBeans specification states that:
For readable properties there will be a getter method to read the property value. For writable properties there will be a setter method to allow the property value to be updated. [...] Constructs a PropertyDescriptor for a property that follows the standard Java convention by having getFoo and setFoo accessor methods. Thus if the argument name is "fred", it will assume that the reader method is "getFred" and the writer method is "setFred". Note that the property name should start with a lower case character, which will be capitalized in the method names.
Question number 3:
I agree with the suggestion of the software you're using. For readability, only one exit point is better. For efficiency, using 'return;' might be better. My guess is that the compiler is smart enough to always pick the efficient alternative and I'll bet that the bytecode would be the same in both cases.
FURTHER EMPIRICAL INFORMATION
I did some tests and found out that the java compiler I'm using (javac 1.5.0_19 on Mac OS X 10.4) is not applying the optimization I expected.
I used the following class to test:
public abstract class Test{
public int singleReturn(){
int ret = 0;
if (cond1())
ret = 1;
else if (cond2())
ret = 2;
else if (cond3())
ret = 3;
return ret;
}
public int multReturn(){
if (cond1()) return 1;
else if (cond2()) return 2;
else if (cond3()) return 3;
else return 0;
}
protected abstract boolean cond1();
protected abstract boolean cond2();
protected abstract boolean cond3();
}
Then, I analyzed the bytecode and found that for multReturn() there are several 'ireturn' statements, while there is only one for singleReturn(). Moreover, the bytecode of singleReturn() also includes several gotos to the return statement.
I tested both methods with very simple implementations of cond1, cond2 and cond3. I made sure that the three conditions were equally probable. I found a consistent difference in time of 3% to 6%, in favor of multReturn(). In this case, since the operations are very simple, the impact of the multiple returns is quite noticeable.
Then I tested both methods using more complicated implementations of cond1, cond2 and cond3, in order to make the impact of the different return styles less evident. I was shocked by the result! Now multReturn() is consistently slower than singleReturn() (between 2% and 3%). I don't know what is causing this difference, because the rest of the code should be equal.
I think these unexpected results are caused by the JIT compiler of the JVM.
Anyway, I stand by my initial intuition: the compiler (or the JIT) can optimize this kind of thing, and this frees the developer to focus on writing code that is easily readable and maintainable.
Question number 6:
You could call a class method from your instance method and let that static method alter the class variable.
Then your code would look similar to the following:
public static void clearInstance() {
instance = null;
}
@Override
public void handleEvent(Event e) {
switch (e.type) {
case SWT.Dispose: {
if (e.widget == getComposite()) {
MyClass.clearInstance();
}
break;
}
}
}
This would cause the warning you described in 5, but there has to be some compromise, and in this case it's just a smell, not an error.
Question number 7:
This is simply a smell of a possible problem. It's not necessarily bad or wrong, and you cannot be sure just by using this tool.
If you've got a real problem, like dependencies between constructors, testing should show it.
A different, but related, problem is circular dependencies between jars: while classes with circular dependencies can be compiled, circular dependencies between jars cannot be handled by the JVM because of the way class loaders work.
I have no idea. It seems likely that whatever you did do, it was not what you were attempting to do!
Perhaps the declarations appear in a Serializable class, but the types (e.g. Combo and ProgressBar) are not themselves serializable. If this is UI code, that seems very likely. I would merely comment the class to indicate that it should not be serialized.
This is a valid warning. You can refactor your code thus:
public void actionPerformedOnModifyComboLocations() {
if (!getMainTree().isFocusControl()) {
// ... do stuff, based on the initial test
}
}
This is why I can't stand static analysis tools. A null assignment obviously leaves you open to NullPointerExceptions later. However, there are plenty of places where it is simply unavoidable (e.g. using try/catch/finally to do resource cleanup on a Closeable).
This also seems like a valid warning, and your use of static access would probably be considered a code smell by most developers. Consider refactoring to use dependency injection to inject the resource tracker into the classes where you currently use the static; a sketch follows below.
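Here is a minimal sketch of that direction. FormTracker and DisposeListener are invented names for illustration, and the SWT Listener/Event/SWT types are assumed to be imported; the original e.widget check is omitted for brevity:
// The tracker owns the "is the form alive?" state that the static
// field used to hold.
interface FormTracker {
    boolean isFormVisible(); // callers query this instead of the static
    void formDisposed();
}
// The listener receives its collaborator instead of touching a static.
class DisposeListener implements Listener {
    private final FormTracker tracker;
    DisposeListener(FormTracker tracker) {
        this.tracker = tracker;
    }
    @Override
    public void handleEvent(Event e) {
        if (e.type == SWT.Dispose) {
            tracker.formDisposed();
        }
    }
}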
If your class has unused imports, then these should be removed. This might make the warnings disappear. On the other hand, if the imports are required, you may have a genuine circular dependency, which looks something like this:
class A {
private B b;
}
class B {
private A a;
}
This is usually a confusing state of affairs and leaves you open to initialization problems. For example, you may accidentally add some code in the initialization of A that requires its B instance to be initialized. If you add similar code into B, then the circular dependency would mean that your code was actually broken (i.e. you couldn't construct either an A or a B).
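One common way to break such a cycle, sketched here with a hypothetical Collaborator role, is to have one side depend on an interface that the other implements:
// B no longer knows the concrete class A, only this interface.
interface Collaborator {
    void notifyChanged();
}
class A implements Collaborator {
    private B b;
    @Override
    public void notifyChanged() {
        // react to changes in B
    }
}
class B {
    private Collaborator collaborator; // was: private A a;
}
Now A still depends on B, but B depends only on Collaborator, so the class-level cycle is gone.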
Again an illustration of why I really don't like static analysis tools - they usually just provide you with a bunch of false positives. The circular-dependent code may work perfectly well and be extremely well-documented.
For point 3, probably the majority of developers these days would say the single-return rule is simply flat wrong, and on average leads to worse code. Others see that it is a written-down rule with historical credentials, that some code that breaks it is hard to read, and conclude that not following it is simply wrong.
You seem to agree with the first camp, but lack the confidence to tell the tool to turn off that rule.
The thing to remember is that it is an easy rule to code in any checking tool, and some people do want it. So it is pretty much always implemented.
Whereas few (if any) enforce the more subjective 'guard; body; return calculation;' pattern that generally produces the easiest-to-read and simplest code.
So if you are looking at producing good code, rather than simply avoiding the worst code, that is one rule you probably do want to turn off.
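For concreteness, here is a small sketch of that 'guard; body; return calculation;' shape (the method and its names are made up for illustration; assumes java.util.List):
int totalLength(List<String> items) {
    if (items == null || items.isEmpty()) {
        return 0; // guard: reject the degenerate case up front
    }
    int total = 0; // body: the real work
    for (String s : items) {
        total += s.length();
    }
    return total; // single final return of the calculation
}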
In terms of best practices, suppose I have this code:
public class ClassObject {
private int someNumber;
public void setSomeNumber(int x){
this.someNumber = x;
}
public int getSomeNumber(){
return this.someNumber;
}
//Should I even use this?
public void decreaseSomeNumber(){
--this.someNumber;
}
}
public void doSomeStuff(ClassObject instance){
// do some things
instance.decreaseSomeNumber(); //A
instance.setSomeNumber(instance.getSomeNumber() - 1); //B
}
I am wondering whether line A or B is a code smell. I think decreaseSomeNumber() is likely a redundant/useless method, since I can just do instance.setSomeNumber(instance.getSomeNumber() - 1); everywhere.
On the other hand, it seems slightly more verbose to write instance.setSomeNumber(instance.getSomeNumber() - 1). Which of A and B is the cleaner, better code design?
If you have a multithreaded environment, having (A) a decreaseSomeNumber method is worth it; however, you should make it thread-safe. Otherwise (B), two parallel threads might try to decrease the value at the same time, resulting in just a single decrease operation if they overlap.
That being said, it's typically hard work to really make code thread-safe, and in simple cases occasional glitches might not matter. However, occasional is the key word here: if you ever run into these, reproducing the problem will be horribly hard.
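As a minimal sketch of the thread-safe variant (my assumption: the field can be backed by an AtomicInteger; the names reuse the question's):
import java.util.concurrent.atomic.AtomicInteger;

public class ClassObject {
    private final AtomicInteger someNumber = new AtomicInteger();

    public int getSomeNumber() {
        return someNumber.get();
    }

    public void setSomeNumber(int x) {
        someNumber.set(x);
    }

    // Atomic: concurrent callers never lose a decrement, unlike the
    // get-then-set sequence in option B.
    public void decreaseSomeNumber() {
        someNumber.decrementAndGet();
    }
}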
In terms of best practices, you should avoid, where possible, the form
public void decreaseSomeNumber(){
--this.someNumber;
}
and prefer the standard getters and setters.
But in some cases you do need to decrease the value of a variable. If this need is occasional, it is fine to use the getter and setter:
instance.setSomeNumber(instance.getSomeNumber() - 1);
If instead you need to decrease the variable repeatedly (e.g. a withdrawal from a bank account), using a single method is not bad, but it should be defined like this:
public void decreaseSomeNumber(int many){
this.someNumber -= many;
}
In this way you are making the code more reusable, which is good.
P.S. The A way (a dedicated method) is also the simpler one to synchronize in multi-threaded environments.
I would say it depends on more specific details, but I would probably be in favour of decreaseSomething.
With the getter and setter, you implicitly assume that:
The user of the API implements some (albeit trivial) computation.
The computation is performed at the time of the request.
The caller handles the concurrency-related issues on their own.
Point (1) is rather a philosophical problem, although it might lead to errors caused by inadvertence, like calling get and set on two different objects.
Point (2) can be a practical problem. Maybe you want to use the object from multiple threads. And maybe you don't need to read the number often, but you need to change it often. I believe one could come up with optimizations based on LongAdder, LongAccumulator or AtomicInteger, which can help in highly concurrent places (a sketch follows at the end of this answer). With decreaseSomething, you can do it inside the class implementation. With getters and setters, you would need to somehow replace all occurrences of x.setSomething(x.getSomething() + 1) with something else. That does not look like proper encapsulation…
Point (3) depends on your objective. Some people just write thread-unsafe code and claim it is the programmer's responsibility to handle locks where needed, which can be OK. Sometimes there might be a demand for thread-safe code. With a getter and setter, you would need to use some locking scheme every time you access the data (which also makes (1) a less philosophical issue). Sometimes that can be awful. Sometimes it can be OK, because the caller wants to lock something more than just this one object.
As mentioned at the start of the post, I don't say I would prefer it every time. Maybe there are some cases where I would not go this way.
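To make point (2) concrete, here is a sketch with invented names; the only point is that the LongAdder trick stays hidden behind decreaseSomething():
import java.util.concurrent.atomic.LongAdder;

public class Something {
    private final LongAdder value = new LongAdder();

    // Hot path: cheap even when many threads decrement concurrently,
    // because LongAdder spreads contention across internal cells.
    public void decreaseSomething() {
        value.decrement();
    }

    // Reads are assumed to be rarer; sum() folds the cells together.
    public long currentValue() {
        return value.sum();
    }
}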
Edited
I would recommend changing this class as follows:
public class ClassObject {
private final int someNumber;
public ClassObject(int someNumber) {
this.someNumber = someNumber;
}
public int getSomeNumber() {
return someNumber;
}
public ClassObject decreaseSomeNumber() {
return new ClassObject(someNumber - 1);
}
public void doSomeStuff(ClassObject instance) {
// New ClassObject whose someNumber is the instance's someNumber decreased by one
ClassObject decreasedNumberClassObject = instance.decreaseSomeNumber();
}
}
I mean, if you want to make a change to the class's properties (decrease, increase, multiply, ...), it should return you a new object (of the same class) with the new property value.
This code follows OOP principles closely. It is thread-safe and immutable, and code maintenance will be much easier with this approach.
I'm working in an environment where developers use different IDEs - Eclipse, Netbeans and IntelliJ. I'm using the @Nonnull annotation (javax.annotation.Nonnull) to indicate that a method will never return null:
@Nonnull
public List<Bar> getBars() {
return bars; // this.bars is a final, initialized list
}
I'd like other developers to get a warning if they do one of the following:
Change the method to return null without removing the @Nonnull annotation
Unnecessarily check for null for methods that are explicitly defined with @Nonnull: if (foo.getBars() == null) { ... do something ... }
The first scenario is supported e.g. by IntelliJ. The second is not; the clients are not warned that checking for null is unnecessary.
We're in any case planning to move towards returning clones of collections instead of the collections themselves, so is it better to forget @Nonnull and do this instead:
public List<Bar> getBars() {
return new ArrayList<Bar>(bars);
}
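A related alternative, offered as a suggestion rather than something from the question, is to return a read-only view instead of a copy:
import java.util.Collections;
import java.util.List;

public List<Bar> getBars() {
    // Callers cannot mutate the returned list, but unlike a copy they
    // will observe later changes to the underlying list.
    return Collections.unmodifiableList(bars);
}
Whether sharing the underlying list like this is acceptable depends on the API contract you want.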
Edit:
To clarify, I'm not considering changing IDE's for this. I'd like to know whether what I described above is supported by the mentioned IDEs - or alternatively, if there is a good reason as to why it is not supported.
I get the point about not relying too much on contracts. However, if I write getBars() with the style in the last paragraph (return a clone of the list), then e.g. IntelliJ flags a warning for any code that does
if (foo.getBars() == null) { ... }
If you choose to follow this warning and remove the null check, you seem to be equally reliant on the getBars() implementation not changing. However, in this case you seem to be depending on implementation details instead of an explicit contract (as is the case with @Nonnull).
Edit #2:
I'm not concerned about execution speed; null checks are indeed very fast. I'm concerned about code readability. Consider the following:
Option A:
if ((foo.getBars() == null || foo.getBars().size() < MAXIMUM_BARS) &&
(baz.getFoos() == null || baz.getFoos().size() < MAXIMUM_FOOS)) {
// do something
}
Option B:
if (foo.getBars().size() < MAXIMUM_BARS &&
baz.getFoos().size() < MAXIMUM_FOOS) {
// do something
}
I think Option B is more readable than Option A. Since code is read more often than it is written, I'd like to ensure all code I (and others in our team) write is as readable as possible.
I would recommend the following:
Stick with your @Nonnull and @Nullable annotations, just like you are doing already.
Define and enforce a default. It doesn't really matter what the default is, but it should be the same across your entire code base.
Use FindBugs to check for your cases 1 and 2. FindBugs has plugins for most IDEs, I saw Eclipse, Netbeans, and IntelliJ mentioned, but there are more. (The number 2 case is covered by the RCN_REDUNDANT_NULLCHECK_OF_NONNULL_VALUE rule. I tested it. Just thought I should mention that after reading DavidHarkness' comment elsewhere on this page.)
This approach is IDE agnostic and works also in an external build environment like Hudson or Jenkins.
I guess, with your annotation you want to do two things:
make the program faster
make the programmers work less (because they don't have to type the null checks)
For the first point: a null check consumes hardly any resources. If you run this sample:
public static void main(String[] args) {
long ms = System.currentTimeMillis();
for (int i=0;i<10000;i++) {
returnsNonNull(i);
doNothing();
}
System.out.println("Without nullcheck took "+(System.currentTimeMillis()-ms)+" ms to complete");
ms = System.currentTimeMillis();
for (int i=10000;i<20000;i++) {
if (returnsNonNull(i)!=null) {
doNothing();
}
}
System.out.println("With nullcheck took "+(System.currentTimeMillis()-ms)+" ms to complete");
}
public static String returnsNonNull(int i) {
return "Foo "+i+" Bar";
}
public static void doNothing() {
}
You will see there is hardly any difference between the two tests. The second one (with the null check) is sometimes even faster than the first, meaning they are pretty much indistinguishable when it comes to resource use.
Now to the second point:
You are actually making the programmers work more, not less. Unless there are absolutely no functions in your whole project and its frameworks that ever return null, the programmer now has to look up each function to check whether a null check is needed or not. Seeing that just about every framework in Java can, under some conditions, return null from some functions, you are not reducing the workload for your programmers.
You said you want a warning to appear when they do a null check that is unnecessary because of your @Nonnull annotation. So what they will do is type out the whole null check, then get a warning and have to remove it again. See what I mean?
Also, what if you make a mistake in your code and mark something as @Nonnull that can return null?
What I'd do is document it in the Javadoc under @return. Then the programmers can decide what to do, which is usually better than forcing others to write code in your style.
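A minimal sketch of that Javadoc approach, reusing the question's getBars():
/**
 * Returns the bars held by this object.
 *
 * @return the list of bars; never {@code null}
 */
public List<Bar> getBars() {
    return bars;
}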
I'm doing a bit of playing about to learn a framework I'm contributing to, and an interesting question came up. EDIT: I'm doing some basic filters in the Okapi Framework, as described in this guide, note that the filter must return different event types to be useful, and that resources must be used by reference (as the same resource may be used in other filters later). Here's the code I'm working with:
while (filter.hasNext()) {
Event event = filter.next();
if (event.isTextUnit()) {
TextUnit tu = (TextUnit)event.getResource();
if (tu.isTranslatable()) {
//do something with it
}
}
}
Note the cast of the resource to a TextUnit object on line 4. This works; I know it's a TextUnit because events whose isTextUnit() returns true always have a TextUnit resource. However, an alternative would be to add an asTextUnit() method to the IResource interface that returns the resource as a TextUnit (as well as equivalent methods for each common resource type), so that the line would become:
TextUnit tu = event.getResource().asTextUnit();
Another approach might be to provide a static casting method in TextUnit itself, along the lines of:
TextUnit tu = TextUnit.fromResource(event.getResource());
My question is: what are some arguments for doing it one way or the other? Are there performance differences?
The main advantage I can think of with asTextUnit() (or .fromResource) is that more appropriate exceptions could be thrown if someone tries to get a resource as the wrong type (i.e. with a message like "Cannot get this RawDocument type resource as a TextUnit - use asRawDocument()" or "The resource is not a TextUnit").
The main disadvantage I can think of with asTextUnit() is that each resource type would then have to implement all the methods (most of which would just throw an exception), and if another major resource type were added there would be some refactoring to add the new method to every resource type (although there's no reason the asSomething() methods would have to be defined for every possible type; the less common resources could just be cast, although this would lead to an inconsistent approach). This wouldn't be a problem with fromResource(), since it's just one method per type, and it could be added or not per type depending on preference.
If the aim is to test an object's type and cast it, then I don't see any value in creating / using custom isXyz and asXyz methods. You just end up with a bunch of extra methods that make little difference to code readability.
Re: your point about appropriate exception messages: it is most likely not worth it. It is reasonable to assume that not having a TextUnit when a TextUnit is expected is a symptom of a bug somewhere. IMO, it is not worthwhile trying to provide "user friendly" diagnostics for bugs. The person the information is aimed at is a Java programmer, and for that person the default message and stack trace of a regular ClassCastException (plus the source code) provide all of the information required. (Translating it into pretty language adds no real value.)
On the flip side, the performance difference between the two forms is not likely to be significant. But consider this:
if (x instanceof Y) {
((Y) x).someYMethod();
}
versus
if (x.isY()) {
x.asY().someYMethod();
}
boolean isY(X x) { return x instanceof Y; }
Y asY(X x) { return (Y) x; }
The optimizer might be able to do a better job of the first compared with the second.
It might not inline the method calls in the second case, especially if it is changed to use instanceof and throw a custom exception.
It is less likely to figure out that only one type test is really required in the second case. (It might not in the first case either ... but it is more likely to.)
But either way, the performance difference is going to be small.
Summary, the fancy methods are not really worth the effort, though they don't do any real harm.
Now if the isXyz or asXyz methods were testing the state of the object (not just the object's Java type), or if the asXyz was returning a wrapper, then the answers would be different ...
You could also just go
if (event.getResource() instanceof TextUnit) {
// ...
}
and save yourself the trouble.
To answer your question regarding whether to go with asTextUnit() vs. TextUnit.fromResource(), the performance difference would depend upon how you actually implement these methods.
In the case of the static converter you would have to create and return a new object of type TextUnit. However, in the case of the member function you could simply return this, casted, or you could create and return a new object - it depends upon your use case.
Either way, it seems like instanceof is probably the cleanest approach here.
What if your filter were extended - or wrapped - to return only text unit events? In fact, what if it returned only the resources of text unit events? Then your loop would be much simpler. I would think the clean way to do this would be a second filter, which simply returns just the text unit events, followed by, let's say, an Extractor, which returns the properly cast resource.
If you have a common base class, you can have a single asX method there for every derived class, and needn't refactor all derived classes:
abstract class Base {
A asA () { throw new ClassCastException ("not an A"); }
B asB () { throw new ClassCastException ("not a B"); }
C asC () { throw new ClassCastException ("not a C"); }
// much more ...
}
class A extends Base {
A asA () { /* hard work */ return new A (); }
// no asB, asC requiered
}
class B extends Base {
B asB () { /* hard work */ return new B (); }
// no asA, asC required
}
// and so on.
This looks pretty clever: for a new class N, just add a new method to Base, and all derived classes get it. Only N needs to implement asN.
But it smells.
Why should a B have a method asA if it will always fail? That's not good design. Declared exceptions are cheap if they aren't triggered; only thrown exceptions are costly.
Yes, there are differences. Creating new immutable elements is better than casting. Pass all serializable data (non-transient or computable data) to a Builder and build the appropriate class.
(Please, no advice that I should abstract X more and add another method to it.)
In C++, when I have a variable x of type X* and I want to do something specific if it is also of type Y* (Y being a subclass of X), I am writing this:
if(Y* y = dynamic_cast<Y*>(x)) {
// now do sth with y
}
The same thing seems not to be possible in Java (or is it?).
I have read this Java code instead:
if(x instanceof Y) {
Y y = (Y) x;
// ...
}
Sometimes, when you don't have a variable x but a more complex expression instead, just because of this issue, you need a dummy variable in Java:
X x = something();
if(x instanceof Y) {
Y y = (Y) x;
// ...
}
// x not needed here anymore
(A common case is that something() is iterator.next(). And there you see that you also cannot really call it just twice. You really need the dummy variable.)
You don't really need x at all here - you just have it because you cannot do the instanceof check and the cast at once. Compare that again to the quite common C++ code:
if(Y* y = dynamic_cast<Y*>( something() )) {
// ...
}
Because of this, I have introduced a castOrNull function which makes it possible to avoid the dummy variable x. I can write this now:
Y y = castOrNull( something(), Y.class );
if(y != null) {
// ...
}
Implementation of castOrNull:
public static <T> T castOrNull(Object obj, Class<T> clazz) {
try {
return clazz.cast(obj);
} catch (ClassCastException exc) {
return null;
}
}
Now, I was told that using this castOrNull function in that way is an evil thing to do. Why is that? (Or to put the question more generally: would you agree and also think this is evil? If yes, why so? Or do you think this is a valid (maybe rare) use case?)
As said, I don't want a discussion about whether the usage of such downcasts is a good idea at all. But let me clarify briefly why I sometimes use it:
Sometimes I get into a case where I have to choose between adding another new method for a very specific thing (which will only apply to one single subclass in one specific case) or using such an instanceof check. Basically, I have the choice between adding a function doSomethingVeryVerySpecificIfIAmY() or doing the instanceof check. And in such cases, I feel that the latter is cleaner.
Sometimes I have a collection of some interface/base class, and for all entries of type Y, I want to do something and then remove them from the collection. (E.g. I had a case with a tree structure where I wanted to delete all children which are empty leaves.)
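Here is a minimal sketch of that collection case, with hypothetical Node and Leaf types (and java.util.Iterator assumed):
// Remove every empty leaf from the children collection.
for (Iterator<Node> it = children.iterator(); it.hasNext(); ) {
    Node child = it.next();
    if (child instanceof Leaf && ((Leaf) child).isEmpty()) {
        it.remove(); // safe removal during iteration
    }
}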
Starting with Java 14 (as a preview feature; it became final in Java 16), you can do the instanceof check and the cast at the same time. See https://openjdk.java.net/jeps/305.
Code example:
if (obj instanceof String s) {
// can use s here
} else {
// can't use s here
}
The variable s in the above example is defined if instanceof evaluates to true. The scope of the variable depends on the context; see the link above for more examples.
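Applied to the question's code, this removes the dummy variable entirely; a sketch reusing the question's names:
// No temporary X needed; y is bound only when the test succeeds.
if (something() instanceof Y y) {
    // ... use y directly
}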
Now, I was told that using this castOrNull function in that way is an evil thing to do. Why is that?
I can think of a couple of reasons:
It is an obscure and tricky way of doing something very simple. Obscure and tricky code is hard to read, hard to maintain, a potential source of errors (when someone doesn't understand it) and therefore evil.
The obscure and tricky way that the castOrNull method works most likely cannot be optimized by the JIT compiler. You'll end up with at least 3 extra method calls, plus lots of extra code to do the type check and cast reflectively. Unnecessary use of reflection is evil.
(By contrast, the simple way (with instanceof followed by a class cast) uses specific bytecodes for instanceof and class casting. The bytecode sequences can almost certainly be optimized so that there is no more than one null check and no more than one test of the object's type in the native code. This is a common pattern that should be easy for the JIT compiler to detect and optimize.)
Of course, "evil" is just another way of saying that you REALLY shouldn't do this.
Neither of your two added examples makes the use of a castOrNull method either necessary or desirable. IMO, the "simple way" is better from both the readability and performance perspectives.
In most well written/designed Java code, the use of instanceof and casts rarely happens. With the addition of generics, many uses of casts (and thus instanceof) are no longer needed. They do, on occasion, still occur.
The castOrNull method is evil in that you are making Java code look "unnatural". The biggest problem when changing from one language to another is adopting the conventions of the new language. Temporary variables are just fine in Java. In fact all your method is doing is really hiding the temporary variable.
If you find that you are writing a lot of casts, you should examine your code, see why, and look for ways to remove them. For example, in the case you mention, adding a "getNumberOfChildren" method would allow you to check whether a node is empty and thus prune it without casting (that is a guess; it might not work for you in this case).
Generally speaking casts are "evil" in Java because they are usually not needed. Your method is more "evil" because it is not written in the way most people would expect Java to be written.
That being said, if you want to do it, go for it. It isn't actually "evil", just not the "right" way to do it in Java.
IMHO your castOrNull is not evil, just pointless. You seem to be obsessed with getting rid of a temporary variable and one line of code, while to me the bigger question is why you need so many downcasts in your code? In OO this is almost always a symptom of suboptimal design. And I would prefer solving the root cause instead of treating the symptom.
I don't know exactly why that person said it was evil. However, one possibility for their reasoning is that you were catching an exception afterwards rather than checking before you cast. This is a way to do that:
public static <T> T castOrNull(Object obj, Class<T> clazz) {
if ( obj != null && clazz.isAssignableFrom(obj.getClass()) ) { // null guard keeps the original's behaviour of returning null for null input
return clazz.cast(obj);
} else {
return null;
}
}
Java exceptions are slow. If you're trying to optimize your performance by avoiding a double cast, you're shooting yourself in the foot by using exceptions in lieu of logic. Never rely on catching an exception for something you could reasonably check for and correct (which is exactly what you're doing).
How slow are Java exceptions?
I'm currently trying to build a more or less complete set of unit tests for a small library. Since we want to allow different implementations to exist, we want this set of tests to be (a) generic, so that we can re-use it to test the different implementations, and (b) as complete as possible. For the (b) part I'd like to know if there is any best practice out there for testing enum types. For example, I have an enum as follows:
public enum Month {
January,
February,
...
December;
}
Here I want to ensure that all enum values really exist. Is that even necessary? Currently I'm using Hamcrest's assertThat, as in the following example:
assertThat(Month.January, is(notNullValue()));
A missing "January" enum value would result in a compile-time error, which one can fix by creating the missing enum constant.
I'm using Java here, but I don't mind if your answer is for a different language.
Edit:
As mkato and Mark Heath have both pointed out, testing enums may not be necessary, since the compiler won't compile code that uses an enum value which isn't there. But I still want to test those enums, because we want to build a separate TCK-like test jar which will run the same tests against different implementations. So my question was meant more like: what is the best way to test enum types?
After thinking about it a bit more I changed the Hamcrest statement above to:
assertThat(Month.valueOf("January"), is(notNullValue()));
This statement now throws an IllegalArgumentException when January is not there (yet), instead of failing to compile. Is there anything wrong with this approach?
For enums, I test them only when they actually have methods in them. If it's a pure value-only enum like your example, I'd say don't bother.
But since you're keen on testing it, going with your second option is much better than the first. The problem with the first is that if you use an IDE, any renaming of the enum constants would also rename the ones in your test class.
I agree with aberrant80.
For enums, I test them only when they actually have methods in them. If it's a pure value-only enum like your example, I'd say don't bother.
But since you're keen on testing it, going with your second option is much better than the first. The problem with the first is that if you use an IDE, any renaming of the enum constants would also rename the ones in your test class.
I would expand on that by adding that unit testing an enum can be very useful. If you work in a large code base, build times start to mount up, and a unit test can be a faster way to verify functionality (tests only build their dependencies). Another really big advantage is that other developers cannot unintentionally change the functionality of your code (a huge problem with very large teams).
And as with all test-driven development, tests around an enum's methods reduce the number of bugs in your code base.
Simple Example
public enum Multiplier {
DOUBLE(2.0),
TRIPLE(3.0);
private final double multiplier;
Multiplier(double multiplier) {
this.multiplier = multiplier;
}
Double applyMultiplier(Double value) {
return multiplier * value;
}
}
public class MultiplierTest {
@Test
public void should() {
assertThat(Multiplier.DOUBLE.applyMultiplier(1.0), is(2.0));
assertThat(Multiplier.TRIPLE.applyMultiplier(1.0), is(3.0));
}
}
Usually I would say it is overkill, but there are occasionally reasons for writing unit tests for enums.
Sometimes the values assigned to enumeration members must never change or the loading of legacy persisted data will fail. Similarly, apparently unused members must not be deleted. Unit tests can be used to guard against a developer making changes without realising the implications.
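A sketch of such a guard test, assuming (my assumption) that the persisted form is the constant's ordinal, written JUnit 4 style against the Month enum from the question:
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class MonthPersistenceTest {
    // Pins the wire values: reordering or deleting a constant breaks
    // this test before it breaks the legacy persisted data.
    @Test
    public void ordinalsAreStable() {
        assertEquals(0, Month.January.ordinal());
        assertEquals(11, Month.December.ordinal());
    }
}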
You can test whether it has exactly certain values, for example:
for(MyBoolean b : MyBoolean.values()) {
switch(b) {
case TRUE:
break;
case FALSE:
break;
default:
throw new IllegalArgumentException(b.toString());
}
}
for (String s : new String[]{ "TRUE", "FALSE" }) {
MyBoolean.valueOf(s);
}
If someone removes or adds a value, some of the tests fail.
If you use all of the months in your code, your IDE won't let you compile if one is missing, so I think you don't need a unit test.
But if you are using them via reflection, the code will still compile even if you delete one month, so it's valid to have a unit test.
This is a sample for what we have within our project.
public enum Role {
ROLE_STUDENT("LEARNER"),
ROLE_INSTRUCTOR("INSTRUCTOR"),
ROLE_ADMINISTRATOR("ADMINISTRATOR"),
ROLE_TEACHER("TEACHER"),
ROLE_TRUSTED_API("TRUSTEDAPI");
private final String textValue;
Role(String textValue) {
this.textValue = textValue;
}
public String getTextValue() {
return textValue;
}
}
class RoleTest {
@Test
void testGetTextValue() {
assertAll(
() -> assertEquals("LEARNER", Role.ROLE_STUDENT.getTextValue()),
() -> assertEquals("INSTRUCTOR", Role.ROLE_INSTRUCTOR.getTextValue()),
() -> assertEquals("ADMINISTRATOR", Role.ROLE_ADMINISTRATOR.getTextValue()),
() -> assertEquals("TEACHER", Role.ROLE_TEACHER.getTextValue()),
() -> assertEquals("TRUSTEDAPI", Role.ROLE_TRUSTED_API.getTextValue())
);
}
}