How to test enum types? - java

I'm currently trying to build a more or less complete set of unit tests for a small library. Since we want to allow different implementations to exist we want this set of tests to be (a) generic, so that we can re-use it to test the different implementations and (b) as complete as possible. For the (b) part I'd like to know if there is any best-practice out there for testing enum types. So for example I have an enum as follows:
public enum Month {
    January,
    February,
    // ...
    December;
}
Here I want to ensure that all enum constants really exist. Is that even necessary? Currently I'm using Hamcrest's assertThat, as in the following example:
assertThat(Month.January, is(notNullValue()));
A missing "January" enum would result in a compile time error which one can fix by creation the missing enum type.
I'm using Java here but I don't mind if your answer is for a different language..
Edit:
As mkato and Mark Heath have both pointed out, testing enums may not be necessary since the code won't compile when you use an enum constant which isn't there. But I still want to test those enums, since we want to build a separate TCK-like test.jar which will run the same tests against different implementations. So my question was more meant to be: What is the best way to test enum types?
After thinking about it a bit more I changed the Hamcrest statement above to:
assertThat(Month.valueOf("January"), is(notNullValue()));
This statement now throws an IllegalArgumentException when January is not there (yet). Is there anything wrong with this approach?
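For a TCK-style jar, one option is to drive valueOf from a list of the required constant names. A minimal sketch (the EXPECTED array, class name and test name are assumptions, not part of the original question):

import static org.junit.Assert.fail;
import org.junit.Test;

public class MonthTckTest {

    // Hypothetical list of the constant names the specification requires.
    private static final String[] EXPECTED = {
            "January", "February", "December" // ... remaining months elided
    };

    @Test
    public void allSpecifiedConstantsExist() {
        for (String name : EXPECTED) {
            try {
                Month.valueOf(name);
            } catch (IllegalArgumentException e) {
                fail("Missing enum constant: " + name);
            }
        }
    }
}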

For enums, I test them only when they actually have methods in them. If it's a pure value-only enum like your example, I'd say don't bother.
But since you're keen on testing it, going with your second option is much better than the first. The problem with the first is that if you use an IDE, any renaming on the enums would also rename the ones in your test class.

I agree with aberrant80.
For enums, I test them only when they actually have methods in them. If it's a pure value-only enum like your example, I'd say don't bother. But since you're keen on testing it, going with your second option is much better than the first. The problem with the first is that if you use an IDE, any renaming on the enums would also rename the ones in your test class.
I would expand on it by adding that unit testing an enum can be very useful. If you work in a large code base, build time starts to mount up, and a unit test can be a faster way to verify functionality (tests only build their dependencies). Another really big advantage is that other developers cannot change the functionality of your code unintentionally (a huge problem with very large teams).
And as with all test-driven development, tests around an enum's methods reduce the number of bugs in your code base.
Simple Example
public enum Multiplier {
    DOUBLE(2.0),
    TRIPLE(3.0);

    private final double multiplier;

    Multiplier(double multiplier) {
        this.multiplier = multiplier;
    }

    Double applyMultiplier(Double value) {
        return multiplier * value;
    }
}
public class MultiplierTest {

    @Test
    public void should() {
        assertThat(Multiplier.DOUBLE.applyMultiplier(1.0), is(2.0));
        assertThat(Multiplier.TRIPLE.applyMultiplier(1.0), is(3.0));
    }
}

Usually I would say it is overkill, but there are occasionally reasons for writing unit tests for enums.
Sometimes the values assigned to enumeration members must never change or the loading of legacy persisted data will fail. Similarly, apparently unused members must not be deleted. Unit tests can be used to guard against a developer making changes without realising the implications.
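A hedged sketch of such a guard test. The LegacyStatus enum and its codes are illustrative assumptions; the point is that the expected values are hard-coded in the test, so any change to them makes the build fail.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class LegacyStatusGuardTest {

    enum LegacyStatus {                 // hypothetical persisted enum
        NEW(10), ACTIVE(20), ARCHIVED(30);

        final int code;                 // value stored in the legacy data

        LegacyStatus(int code) { this.code = code; }
    }

    @Test
    public void persistedCodesMustNeverChange() {
        assertEquals(10, LegacyStatus.NEW.code);
        assertEquals(20, LegacyStatus.ACTIVE.code);
        assertEquals(30, LegacyStatus.ARCHIVED.code);
        // Also guards against deleting an "apparently unused" member.
        assertEquals(3, LegacyStatus.values().length);
    }
}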

You can test whether the enum has exactly the expected values, for example:
for (MyBoolean b : MyBoolean.values()) {
    switch (b) {
        case TRUE:
            break;
        case FALSE:
            break;
        default:
            throw new IllegalArgumentException(b.toString());
    }
}
for (String s : new String[] { "TRUE", "FALSE" }) {
    MyBoolean.valueOf(s);
}
If someone removes or adds a value, some of these tests fail.

If you use all of the months in your code, your IDE won't let you compile, so I think you don't need unit testing.
But if you are using them via reflection, then even if you delete one month the code will still compile, so it is worth having a unit test.
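A hedged sketch of that reflection case, assuming the constant name arrives as a plain string (e.g. from configuration), so the compiler cannot help:

import static org.junit.Assert.assertNotNull;
import org.junit.Test;

public class MonthReflectionTest {

    @Test
    public void constantReferencedByNameStillExists() {
        // e.g. a value read from a config file or a database column
        String configuredMonth = "January";
        assertNotNull(Enum.valueOf(Month.class, configuredMonth));
    }
}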

This is a sample of what we have within our project.
public enum Role {
    ROLE_STUDENT("LEARNER"),
    ROLE_INSTRUCTOR("INSTRUCTOR"),
    ROLE_ADMINISTRATOR("ADMINISTRATOR"),
    ROLE_TEACHER("TEACHER"),
    ROLE_TRUSTED_API("TRUSTEDAPI");

    private final String textValue;

    Role(String textValue) {
        this.textValue = textValue;
    }

    public String getTextValue() {
        return textValue;
    }
}
class RoleTest {

    @Test
    void testGetTextValue() {
        assertAll(
            () -> assertEquals("LEARNER", Role.ROLE_STUDENT.getTextValue()),
            () -> assertEquals("INSTRUCTOR", Role.ROLE_INSTRUCTOR.getTextValue()),
            () -> assertEquals("ADMINISTRATOR", Role.ROLE_ADMINISTRATOR.getTextValue()),
            () -> assertEquals("TEACHER", Role.ROLE_TEACHER.getTextValue()),
            () -> assertEquals("TRUSTEDAPI", Role.ROLE_TRUSTED_API.getTextValue())
        );
    }
}

Related

How to unit test the case where there are two public methods, one calling another?

What is the way to test the methods when there are two public methods and a method calls another public method in the same class?
How should I write unit tests in this scenario?
An example
class SpecificIntMath {
    public int add(int a, int b) {
        return a + b;
    }

    public int multiply(int a, int b) {
        int mul = 0;
        for (int i = 0; i < b; i++) {
            mul = add(a, mul);
        }
        return mul;
    }
}
This example doesn't show the complexity of the methods involved, only the concept.
Should I test add and multiply separately? If I test only multiply, I feel like we miss out on cases where multiply cannot provide the parameters to add.
Assuming multiply and add to be tested separately, should I be able to mock add? How is that possible?
Assuming multiply and add are to be tested separately and I shouldn't mock, should I let add perform as it is? If this is the case, how should I deal with the flow of the program inside add?
What is the approach to testing this kind of situation?
Edit 1:
In the below code,
class MCVC {
    public boolean getWhereFrom(List<User> users) {
        boolean allDone = true;
        for (User user : users) {
            String url = user.getUrl();
            switch (url) {
                case Consts.GOOGLE:
                    someDao.updateFromAddr(user);
                    user.setEntry("Search Engine");
                    break;
                case Consts.FACEBOOK:
                    someDao.updateFromAddr(user);
                    user.setEntry("Social Media");
                    break;
                case Consts.HOME:
                    someDao.updateToAddr(user);
                    user.setEntry("Company");
                    break;
                default:
                    user.setEntry(null);
                    allDone = false;
                    break;
            }
        }
        return allDone;
    }

    public void likedDeck() {
        List<User> usersList = deckDao.getPotentialUsers(345L, HttpStatus.OK);
        boolean flag = getWhereFrom(usersList);
        if (flag) {
            for (User user : usersList) {
                // some action
            }
        }
    }
}
Should I consider getWhereFrom() while testing likedDeck(), or should I assume some default situation? If I consider only the default situation, I lose out on cases where the output isn't the default. I am not sure I should mock it, since the class which is calling it is the one being tested (see Spying/Mocking class under test).
You don't care.
You use unit-testing to test the contract of each public method on its own. Thus you write tests that make sure that both add() and multiply() do what they are supposed to do.
The fact that the one uses the other internally is of no interest on the outside. Your tests should neither know nor care about this internal implementation detail.
And just for the record: as your code is written right now, you absolutely do not turn to mocking here. Mocking is not required here and only adds the risk of testing something that has nothing to do with your real production code. You only use mocking in situations where you have to control aspects of objects in order to enable testing. But nothing in your example code needs mocking to be tested. And if it did, that would be an indication of poor design/implementation (given the contract of those methods)!
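A minimal sketch of what that looks like for the example above: each public method gets its own contract test, and neither test knows that multiply() delegates to add() (test names and sample values are my own):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class SpecificIntMathTest {

    private final SpecificIntMath math = new SpecificIntMath();

    @Test
    public void addSumsTwoInts() {
        assertEquals(5, math.add(2, 3));
        assertEquals(-1, math.add(2, -3));
    }

    @Test
    public void multiplyMultipliesTwoInts() {
        assertEquals(6, math.multiply(2, 3));
        assertEquals(0, math.multiply(7, 0));
    }
}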
Edit; given the changes example in the question:
First of all, there is a bug in getWhereFrom(): you iterate a list, but you keep overwriting the return value inside that loop. So when the first iteration sets the result to false, that information might be lost in a later iteration.
I see two options for the actual question:
You turn to Mockito and its "spy" concept to do partial mocking, in case you want to keep your source code as is
Me, personally, I would rather invest time into improving the production code. It looks to me as if getWhereFrom() could be worth its own class (where I would probably not have it work on a list of users but on just one user; that also helps with returning a single boolean value ;-)). And when you do that, you can use dependency injection to acquire a (mocked) instance of that "WhereFromService" class.
In other words: the code you are showing could be reworked/refactored, for example to more clearly follow the SRP. But that is of course a larger undertaking that you need to discuss with the people around you.
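A rough sketch of that refactoring, reusing names from the question; the WhereFromService class, its single-user method and the SomeDao type are assumptions for illustration:

public class WhereFromService {

    private final SomeDao someDao;

    public WhereFromService(SomeDao someDao) {   // injected, easy to mock
        this.someDao = someDao;
    }

    /** Returns true if the user's origin could be classified. */
    public boolean classify(User user) {
        switch (user.getUrl()) {
            case Consts.GOOGLE:
                someDao.updateFromAddr(user);
                user.setEntry("Search Engine");
                return true;
            case Consts.FACEBOOK:
                someDao.updateFromAddr(user);
                user.setEntry("Social Media");
                return true;
            case Consts.HOME:
                someDao.updateToAddr(user);
                user.setEntry("Company");
                return true;
            default:
                user.setEntry(null);
                return false;
        }
    }
}

The class that contains likedDeck() would then receive a WhereFromService via its constructor, so a test for likedDeck() can pass in a mock.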
At least test them both separately. That the multiply test implicitly tests add is not a problem. In most of these cases you should also ask yourself whether it is really necessary that both methods be public.
Should I test add and multiply separately?
You should test them separately, if you are doing unit testing. You would only like to test them together when doing component or integration tests.
Assuming multiply and add to be tested separately, should I be able to mock add?
Yes.
How is that possible?
Use Mockito or any other mocking framework. Exactly how is shown here: Use Mockito to mock some methods but not others
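For reference, a hedged sketch of that partial-mocking approach with Mockito's spy (Mockito 2+ API; the stubbed value is arbitrary): multiply() runs for real while add() is stubbed.

import static org.junit.Assert.assertEquals;
import static org.mockito.ArgumentMatchers.anyInt;
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.spy;
import org.junit.Test;

public class SpecificIntMathPartialMockTest {

    @Test
    public void multiplyUsesWhateverAddReturns() {
        SpecificIntMath math = spy(new SpecificIntMath());
        // Stub add() so only multiply()'s own loop logic is exercised.
        doReturn(10).when(math).add(anyInt(), anyInt());
        assertEquals(10, math.multiply(5, 3));
    }
}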
Assuming multiply and add to be tested separately and I shouldn't mock, should I let add perform as it is?
I wouldn't do that. Internal changes in add could affect the tests from multiply and your tests would get more complicated and unstable.

How to mock/fake a new enum value with JMockit

How can I add a fake enum value using JMockit?
I couldn't find anything in the documentation. Is it even possible?
Related: this question but it is for mockito only, not for JMockIt.
EDIT: I removed the examples I gave in the first place because the examples seem to be distracting. Please have a look at the most upvoted answer on the linked question to see what I'm expecting. I want to know if it's possible to do the same with JMockit.
I think you are trying to solve the wrong problem. Instead, fix the foo(MyEnum) method as follows:
public int foo(MyEnum value) {
    switch (value) {
        case A: return 1;
        case B: return 2;
    }
    return 0; // just to satisfy the compiler
}
Having a throw at the end to capture an imaginary enum element that doesn't exist is not useful, as it will never be reached. If you are concerned about a new element getting added to the enum and the foo method not being updated accordingly, there are better solutions. One of them is to rely on a code inspection from your Java IDE (IntelliJ at least has one for this case: "switch statement on enumerated type misses case") or a rule from a static analysis tool.
The best solution, though, is to put constant values associated to enum elements where they belong (the enum itself), therefore eliminating the need for a switch:
public enum BetterEnum {
    A(1), B(2);

    public final int value;

    BetterEnum(int value) { this.value = value; }
}
Having had a second thought about the problem I have a solution, and surprisingly a really trivial one.
You are asking about mocking an enum or extending it in a test. But the actual problem seems to be the need to guarantee that any enum extension is accompanied by amendments to the function that uses it. So you essentially need a test that would fail if the enum is extended, no matter whether a mock is used or even possible at all. In fact it is better to go without one if possible.
I had exactly the same problem a number of times, but the actual solution came to my mind just now after seeing your question:
The original enum:
public enum MyEnum { A, B }
The function that has been defined when the enum provided only A and B:
public int mapper(MyEnum e) {
    switch (e) {
        case A: return 1;
        case B: return 2;
        default:
            throw new IllegalArgumentException("value not supported");
    }
}
The test that will point out that mapper will need to be dealt with when the enum is extended:
@Test
public void test_mapper_onAllDefinedArgValues_success() {
    for (MyEnum e : MyEnum.values()) {
        mapper(e);
    }
}
The test result:
Process finished with exit code 0
Now let's extend the enum with a new value C and rerun the test:
java.lang.IllegalArgumentException: value not supported
at io.ventu.rpc.amqp.AmqpResponderTest.mapper(AmqpResponderTest.java:104)
...
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
Process finished with exit code 255
Just creating a fake enum value will probably not be enough; you eventually also need to manipulate an integer array that is created by the compiler.
Actually to create a fake enum value, you don't even need any mocking framework. You can just use Objenesis to create a new instance of the enum class (yes, this works) and then use plain old Java reflection to set the private fields name and ordinal and you already have your new enum instance.
Using Spock framework for testing, this would look something like:
given:
def getPrivateFinalFieldForSetting = { clazz, fieldName ->
    def result = clazz.getDeclaredField(fieldName)
    result.accessible = true
    def modifiers = Field.getDeclaredFields0(false).find { it.name == 'modifiers' }
    modifiers.accessible = true
    modifiers.setInt(result, result.modifiers & ~FINAL)
    result
}

and:
def originalEnumValues = MyEnum.values()
MyEnum NON_EXISTENT = ObjenesisHelper.newInstance(MyEnum)
getPrivateFinalFieldForSetting.curry(Enum).with {
    it('name').set(NON_EXISTENT, "NON_EXISTENT")
    it('ordinal').setInt(NON_EXISTENT, originalEnumValues.size())
}
If you also want the MyEnum.values() method to return the new enum, you now can either use JMockit to mock the values() call like
new MockUp<MyEnum>() {
    @Mock
    MyEnum[] values() {
        [*originalEnumValues, NON_EXISTENT] as MyEnum[]
    }
}
or you can again use plain old reflection to manipulate the $VALUES field like:
given:
getPrivateFinalFieldForSetting.curry(MyEnum).with {
    it('$VALUES').set(null, [*originalEnumValues, NON_EXISTENT] as MyEnum[])
}

expect:
true // your test here

cleanup:
getPrivateFinalFieldForSetting.curry(MyEnum).with {
    it('$VALUES').set(null, originalEnumValues)
}
As long as you don't deal with a switch expression, but with some ifs or similar, either just the first part or the first and second part might be enough for you.
If you however are dealing with a switch expression, e.g. wanting 100% coverage for the default case that throws an exception in case the enum gets extended, things get a bit more complicated and at the same time a bit easier.
A bit more complicated because you need to do some serious reflection to manipulate a synthetic field that the compiler generates in a synthetic anonymous inner class that the compiler also generates, so it is not really obvious what you are doing, and you are bound to the actual implementation of the compiler; this could break at any time in any Java version, or even if you use different compilers for the same Java version. It is actually already different between Java 6 and Java 8.
A bit easier, because you can forget the first two parts of this answer: you don't need to create a new enum instance at all, you just need to manipulate an int[] that you would have to manipulate anyway to get the test you want.
I recently found a very good article regarding this at https://www.javaspecialists.eu/archive/Issue161.html.
Most of the information there is still valid, except that now the inner class containing the switch map is no longer a named inner class, but an anonymous class, so you cannot use getDeclaredClasses anymore but need to use a different approach shown below.
Basically summarized, switch at the bytecode level does not work with enums, but only with integers. So what the compiler does is: it creates an anonymous inner class (previously a named inner class as per the article; this is Java 6 vs. Java 8) that holds one static final int[] field called $SwitchMap$net$kautler$MyEnum that is filled with the integers 1, 2, 3, ... at the indices of the MyEnum#ordinal() values.
This means when the code comes to the actual switch, it does
switch (<anonymous class here>.$SwitchMap$net$kautler$MyEnum[myEnumVariable.ordinal()]) {
    case 1: break;
    case 2: break;
    default: throw new AssertionError("Missing switch case for: " + myEnumVariable);
}
If myEnumVariable now had the value NON_EXISTENT created in the first step above, you would either get an ArrayIndexOutOfBoundsException (if you set ordinal to some value greater than the array the compiler generated) or one of the other switch cases (if not); in both cases this would not help to test the wanted default case.
You could now get this int[] field and fix it up to contain a mapping for the ordinal of your NON_EXISTENT enum instance. But as I said earlier, for exactly this use-case, testing the default case, you don't need the first two steps at all. Instead you can simply give any of the existing enum instances to the code under test and manipulate the mapping int[] so that the default case is triggered.
So all that is necessary for this test case is actually this, again written in Spock (Groovy) code, but you can easily adapt it to Java too:
given:
def getPrivateFinalFieldForSetting = { clazz, fieldName ->
    def result = clazz.getDeclaredField(fieldName)
    result.accessible = true
    def modifiers = Field.getDeclaredFields0(false).find { it.name == 'modifiers' }
    modifiers.accessible = true
    modifiers.setInt(result, result.modifiers & ~FINAL)
    result
}
and:
def switchMapField
def originalSwitchMap
def namePrefix = ClassThatContainsTheSwitchExpression.name
def classLoader = ClassThatContainsTheSwitchExpression.classLoader
for (int i = 1; ; i++) {
    def clazz = classLoader.loadClass("$namePrefix\$$i")
    try {
        switchMapField = getPrivateFinalFieldForSetting(clazz, '$SwitchMap$net$kautler$MyEnum')
        if (switchMapField) {
            originalSwitchMap = switchMapField.get(null)
            def switchMap = new int[originalSwitchMap.size()]
            Arrays.fill(switchMap, Integer.MAX_VALUE)
            switchMapField.set(null, switchMap)
            break
        }
    } catch (NoSuchFieldException ignore) {
        // try next class
    }
}
when:
testee.triggerSwitchExpression()
then:
AssertionError ae = thrown()
ae.message == "Unhandled switch case for enum value 'MY_ENUM_VALUE'"
cleanup:
switchMapField.set(null, originalSwitchMap)
In this case you don't need any mocking framework at all. Actually it would not help you anyway, as no mocking framework I'm aware of allows you to mock an array access. You could use JMockit or any mocking framework to mock the return value of ordinal(), but that would again simply result in a different switch-branch or an AIOOBE.
What the code I just showed does is:
it loops through the anonymous classes inside the class that contains the switch expression
in those it searches for the field with the switch map
if the field is not found, the next class is tried
if a ClassNotFoundException is thrown by Class.forName, the test fails, which is intended, because that means that you compiled the code with a compiler that follows a different strategy or naming pattern, so you need to add some more intelligence to cover different compiler strategies for switching on enum values. Because if the class with the field is found, the break leaves the for-loop and the test can continue. This whole strategy of course depends on anonymous classes being numbered starting from 1 and without gaps, but I hope this is a pretty safe assumption. If you are dealing with a compiler where this is not the case, the searching algorithm needs to be adapted accordingly.
if the switch map field is found, a new int array of the same size is created
the new array is filled with Integer.MAX_VALUE which usually should trigger the default case as long as you don't have an enum with 2,147,483,647 values
the new array is assigned to the switch map field
the for loop is left using break
now the actual test can be done, triggering the switch expression to be evaluated
finally (in a finally block if you are not using Spock, in a cleanup block if you are using Spock) to make sure this does not affect other tests on the same class, the original switch map is put back into the switch map field

Junit multiple results in one test

Ok, I know this is considered an anti-pattern, and I am certainly open to a better way of doing this.
I have a map of enum values. I want to ensure that each of those enum values is assigned to something. My test looks like this.
@Test
public void eachRowRequiresCellCalc()
{
    Model model = new Model();
    EnumValues[] values = EnumValues.values();
    for (EnumValues value : values)
    {
        Assert.assertTrue(String.format("%s must be assigned", value.name()), model.hasEnumValue(value));
    }
}
This works and accomplishes 90% of what I'm looking for. What it doesn't do is show me if multiple enum values are unassigned (it fails on the first). Is there a way with JUnit to have multiple fails per test?
Ideally you would not want to keep checking the remaining values once you get a failure, since the test is going to fail anyway.
But here is a workaround I would suggest, though I'm not sure it works for you:
@Test
public void eachRowRequiresCellCalc()
{
    Model model = new Model();
    EnumValues[] values = EnumValues.values();
    List<EnumValues> isFalse = new ArrayList<EnumValues>();
    for (EnumValues value : values)
    {
        if (!model.hasEnumValue(value)) {
            isFalse.add(value);
        }
    }
    // Now you have the list of unassigned values in 'isFalse'
}
You cannot have multiple fails per test. But you can do something similar by tracking the failures in the for loop. Then outside the for loop print out your string in a single assert.
@Test
public void eachRowRequiresCellCalc()
{
    Model model = new Model();
    EnumValues[] values = EnumValues.values();
    String errors = "";
    for (EnumValues value : values)
    {
        if (!model.hasEnumValue(value))
            errors += String.format("%s must be assigned. ", value.name());
    }
    if (!errors.isEmpty()) {
        fail(errors);
    }
}
A way to express this using junit-quickcheck would be:
@RunWith(Theories.class)
public class Models {
    @Theory public void mustHaveValue(@ForAll @ValuesOf EnumValues e) {
        assertTrue(e.name(), new Model().hasEnumValue(e));
    }
}
This would run the theory method for every value of your enum.
Another way to express this would be via a parameterized test.
(Full disclosure: I am the creator of junit-quickcheck.)
I know you asked about JUnit, but if you are in a position to consider TestNG, you can define a method as a @DataProvider, which will supply parameters to a @Test method. What you seem to be looking for fits this perfectly.
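A hedged sketch of that TestNG approach, reusing Model and EnumValues from the question (class and method names are assumptions):

import static org.testng.Assert.assertTrue;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class EachEnumValueAssignedTest {

    @DataProvider(name = "enumValues")
    public Object[][] enumValues() {
        EnumValues[] values = EnumValues.values();
        Object[][] data = new Object[values.length][1];
        for (int i = 0; i < values.length; i++) {
            data[i][0] = values[i];
        }
        return data;
    }

    // Runs once per enum constant, so every missing assignment is reported.
    @Test(dataProvider = "enumValues")
    public void valueIsAssigned(EnumValues value) {
        assertTrue(new Model().hasEnumValue(value), value.name() + " must be assigned");
    }
}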
Another option that comes to mind is that you might want to look into MatcherAssert.assertThat with collection matchers. It also has better logging for your asserts - you might not need to use string formatting.
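For illustration, a hedged sketch of that collection-matcher idea: collect the unassigned values first, then assert that the collection is empty, so the failure message lists every offender at once (class and test names are assumptions):

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.empty;
import static org.hamcrest.Matchers.is;

import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

public class ModelCoverageTest {

    @Test
    public void noEnumValueIsLeftUnassigned() {
        Model model = new Model();
        List<EnumValues> unassigned = new ArrayList<>();
        for (EnumValues value : EnumValues.values()) {
            if (!model.hasEnumValue(value)) {
                unassigned.add(value);
            }
        }
        // The failure message prints the full 'unassigned' list.
        assertThat("unassigned enum values", unassigned, is(empty()));
    }
}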
A suggestion regarding unit tests, if you don't mind: if you break down a test method into given - when - then blocks, it greatly improves readability. given sets up the test case (variables, mocks, etc.), when executes the method that is being tested, and then is the part where you check the result (assert, verify, ...). I have found that following this structure helps others, as well as myself weeks after the test has been written, to understand what is going on.
I am aware this is an older question, but for anyone who stumbles upon it now, like I did - as of today, JUnit provides an assertAll() function, which can be used just for that, so there's no more need to fiddle around trying to get the results of multiple chained assertions :)
Here's the reference, hope this saves you some time! Would for sure have saved some of mine if I had known about it earlier.
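For completeness, a hedged sketch of the assertAll() approach with JUnit 5, reusing Model and EnumValues from the question (the class name is an assumption):

import static org.junit.jupiter.api.Assertions.assertAll;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.stream.Stream;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.function.Executable;

class EachRowRequiresCellCalcTest {

    @Test
    void eachRowRequiresCellCalc() {
        Model model = new Model();
        // One Executable per enum value; assertAll runs them all and reports
        // every failure, not just the first one.
        assertAll(Stream.of(EnumValues.values())
                .map(value -> (Executable) () -> assertTrue(
                        model.hasEnumValue(value),
                        value.name() + " must be assigned")));
    }
}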

Redundant methods across several unit tests/classes

Say in the main code, you've got something like this:
MyClass.java
public class MyClass {
    public List<Obj1> create(List<ObjA> list) {
        return (new MyClassCreator()).create(list);
    }
    // Similar methods for other CRUD operations
}
MyClassCreator.java
public class MyClassCreator {
    Obj1Maker obj1Maker = new Obj1Maker();

    public List<Obj1> create(List<ObjA> list) {
        List<Obj1> converted = new ArrayList<Obj1>();
        for (ObjA objA : list)
            converted.add(obj1Maker.convert(objA));
        return converted;
    }
}
Obj1Maker.java
public class Obj1Maker {
    public Obj1 convert(ObjA objA) {
        Obj1 obj1 = new Obj1();
        obj1.setProp(formatObjAProperty(objA));
        return obj1;
    }

    private String formatObjAProperty(ObjA objA) {
        // get objA prop and do some manipulation on it
    }
}
Assume that the unit test for Obj1Maker is already done, and involves a method makeObjAMock() which mocks complex object A.
My questions:
For unit testing MyClassCreator, how would I test create(List<ObjA> list)? All the method really does is delegate the conversion from ObjA to Obj1 and run it in a loop. The conversion itself is already tested. If I were to create a list of ObjA and test each object in the list of Obj1 I get back, I would have to copy makeObjAMock() into MyClassCreator's unit test. Obviously, this would be duplicate code, so is using verify() enough to ensure that create(List<ObjA> list) works?
For unit testing MyClass, again, its create(List<ObjA>) method just delegates the operation to MyClassCreator. Do I actually need to test this with full test cases, or should I just verify that MyClassCreator's create method was called?
In the unit test for Obj1Maker, I checked that the properties of Obj1 and ObjA corresponded to each other by doing assertEquals(obj1.getProp(), formatObjAProperty(objA)). However, that means I had to duplicate the code of the private formatObjAProperty method from the Obj1Maker class into its unit test. How can I prevent code repetition in this case? I don't want to make this method public/protected just so I can use it in a unit test. Is repetition acceptable in this case?
Thanks, and sorry for the lengthy questions.
Here is my opinion. Picking which methods to test is a hard thing to do.
You have to think about a) whether you are meeting your requirements and b) what could go wrong when someone stupid makes changes to the code in the future. (Actually, the stupid person could be you. We all have bad days.)
I would say writing new code to verify that the two objects have the same data in two formats is a good idea. There is probably no reason to duplicate the code from the private method, and copying the code over is a bad idea. Remember that you are verifying requirements. So if the original string said "6/30/13" and the reformatted one said "June 30th 2013", I would just hard-code the check:
assertEquals("Wrong answer", "June 30th 2013", obj.getProp());
Add some more asserts for edge cases and errors. (In my example, use "2/30/13" and "2/29/12" and "12/1/14" to check illegal date, leap year day and that it gets "1st" not "1th" perhaps.)
In the test on the create method, I would probably just go for the easy error and verify that the returned array had the same number as the one passed in. The one I passed in would have two identical elements and some different ones. I'd just check that the identical ones came back identical and the different ones non-identical. Why? Because we already know the formatter works.
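A rough sketch of the create() test described above. Since ObjA and Obj1 are not shown, their construction (the makeObjA helper, setProp) is an assumption; in the real test it would be replaced by the existing makeObjAMock() helper:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotEquals;

import java.util.Arrays;
import java.util.List;
import org.junit.Test;

public class MyClassCreatorTest {

    @Test
    public void createConvertsEveryElementAndPreservesOrder() {
        ObjA first = makeObjA("same");       // hypothetical factory helper
        ObjA duplicate = makeObjA("same");
        ObjA different = makeObjA("other");
        List<ObjA> input = Arrays.asList(first, duplicate, different);

        List<Obj1> result = new MyClassCreator().create(input);

        // Same number of elements in as out.
        assertEquals(input.size(), result.size());
        // Identical inputs come back identical, different ones do not.
        assertEquals(result.get(0).getProp(), result.get(1).getProp());
        assertNotEquals(result.get(0).getProp(), result.get(2).getProp());
    }

    private ObjA makeObjA(String prop) {
        // Assumed construction; reuse makeObjAMock() in the real test.
        ObjA objA = new ObjA();
        objA.setProp(prop);
        return objA;
    }
}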
I wouldn't test the constructor but would make sure some test ran the code in it. It's good to make sure most of the code actually runs in a test to catch dumb errors like null pointers you missed.
The balance point is what you are looking for.
Enough tests, testing enough different things, to feel good about the code working.
Enough tests, testing obvious things, that stupid changes in the future will get found.
Not so many tests that the tests take forever to run and all the developers (including you) will put off running them because they don't want to wait or lose their train of thought while they run.
Balance!

Simple, general-interest, code-analyzers based, Java questions [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
OK, after reviewing some code with the PMD and FindBugs code analyzers, I was able to make great improvements to the reviewed code. However, there are some things I don't know how to fix. I'll list them below and (for better reference) give each question a number. Feel free to answer any/all of them. Thanks for your patience.
1. Even though I have removed some of the rules, the associated warnings are still there after re-evaluating the code. Any idea why?
2. Please look at the declarations:
private Combo comboAdress;
private ProgressBar pBar;
and the references to the objects via getters and setters:
private final Combo getComboAdress() {
    return this.comboAdress;
}

private final void setComboAdress(final Combo comboAdress) {
    this.comboAdress = comboAdress;
}

private final ProgressBar getpBar() {
    return this.pBar;
}

private final void setpBar(final ProgressBar pBar) {
    this.pBar = pBar;
}
Now, I wonder why the first declaration doesn't give me any warning in PMD, while the second gives me the following warning:
Found non-transient, non-static member. Please mark as transient or provide accessors.
More details on that warning here.
3. Here is another warning, also given by PMD :
A method should have only one exit point, and that should be the last statement in the method
More details on that warning here.
Now, I agree with that, but what if I write something like this:
public void actionPerformedOnModifyComboLocations() {
    if (getMainTree().isFocusControl()) {
        return;
    }
    // ... do stuff, based on the initial test
}
I tend to agree with the rule, but if the performance of the code suggests multiple exit points, what should I do?
4. PMD gives me this :
Found 'DD'-anomaly for variable 'start_page' (lines '319'-'322').
when I declare something like:
String start_page = null;
I can get rid of this info (the warning level is info) if I remove the assignment to null, but then I get an error from the IDE saying that the variable could be uninitialized at some point later in the code. So I am kind of stuck with that. Is suppressing the warning the best you can do?
5. PMD Warning :
Assigning an Object to null is a code smell. Consider refactoring.
This is the case for a singleton use of GUI components, or the case of a method that returns complex objects. Assigning the result to null in the catch() section is justified by the need to avoid returning an incomplete/inconsistent object. Yes, NullObject should be used, but there are cases where I don't want to do that. Should I suppress that warning then?
6. FindBugs warning #1:
Write to static field MyClass.instance from instance method MyClass.handleEvent(Event)
in the method
@Override
public void handleEvent(Event e) {
    switch (e.type) {
        case SWT.Dispose: {
            if (e.widget == getComposite()) {
                MyClass.instance = null;
            }
            break;
        }
    }
}
of the static variable
private static MyClass instance = null;
The variable allows me to test whether the form is already created and visible or not, and I need to force the re-creation of the form in some cases. I see no other alternative here. Any insights? (MyClass implements Listener, hence the overridden handleEvent() method.)
7. FindBugs warning #2:
Class MyClass2 has a circular dependency with other classes
This warning is displayed based on simple imports of other classes. Do I need to refactor those imports to make this warning go away? Or does the problem lie in MyClass2?
OK, enough said for now; expect an update based on more findings and/or your answers. Thanks.
Here are my answers to some of your questions:
Question number 2:
I think you're not capitalizing the properties properly. The methods should be called getPBar and setPBar.
String pBar;
void setPBar(String str) { ... }
String getPBar() { return pBar; }
The JavaBeans specification states that:
For readable properties there will be a getter method to read the property value. For writable properties there will be a setter method to allow the property value to be updated. [...] Constructs a PropertyDescriptor for a property that follows the standard Java convention by having getFoo and setFoo accessor methods. Thus if the argument name is "fred", it will assume that the reader method is "getFred" and the writer method is "setFred". Note that the property name should start with a lower case character, which will be capitalized in the method names.
Question number 3:
I agree with the suggestion of the software you're using. For readability, only one exit point is better. For efficiency, using 'return;' might be better. My guess is that the compiler is smart enough to always pick the efficient alternative and I'll bet that the bytecode would be the same in both cases.
FURTHER EMPIRICAL INFORMATION
I did some tests and found out that the java compiler I'm using (javac 1.5.0_19 on Mac OS X 10.4) is not applying the optimization I expected.
I used the following class to test:
public abstract class Test {
    public int singleReturn() {
        int ret = 0;
        if (cond1())
            ret = 1;
        else if (cond2())
            ret = 2;
        else if (cond3())
            ret = 3;
        return ret;
    }

    public int multReturn() {
        if (cond1()) return 1;
        else if (cond2()) return 2;
        else if (cond3()) return 3;
        else return 0;
    }

    protected abstract boolean cond1();
    protected abstract boolean cond2();
    protected abstract boolean cond3();
}
Then, I analyzed the bytecode and found that for multReturn() there are several 'ireturn' statements, while there is only one for singleReturn(). Moreover, the bytecode of singleReturn() also includes several goto to the return statement.
I tested both methods with very simple implementations of cond1, cond2 and cond3. I made sure that the three conditions were equally probable. I found a consistent difference in time of 3% to 6% in favor of multReturn(). In this case, since the operations are very simple, the impact of the multiple returns is quite noticeable.
Then I tested both methods using a more complicated implementation of cond1, cond2 and cond3, in order to make the impact of the different return less evident. I was shocked by the result! Now multReturn() is consistently slower than singleReturn() (between 2% and 3%). I don't know what is causing this difference because the rest of the code should be equal.
I think these unexpected results are caused by the JIT compiler of the JVM.
Anyway, I stand by my initial intuition: the compiler (or the JIT) can optimize these kind of things and this frees the developer to focus on writing code that is easily readable and maintainable.
Question number 6:
You could call a class method from your instance method and let that static method alter the class variable.
Then your code would look similar to the following:
public static void clearInstance() {
    instance = null;
}

@Override
public void handleEvent(Event e) {
    switch (e.type) {
        case SWT.Dispose: {
            if (e.widget == getComposite()) {
                MyClass.clearInstance();
            }
            break;
        }
    }
}
This would cause the warning you described in 5, but there has to be some compromise, and in this case it's just a smell, not an error.
Question number 7:
This is simply a smell of a possible problem. It's not necessarily bad or wrong, and you cannot be sure just by using this tool.
If you've got a real problem, like dependencies between constructors, testing should show it.
A different, but related, problem are circular dependencies between jars: while classes with circular dependencies can be compiled, circular dependencies between jars cannot be handled in the JVM because of the way class loaders work.
I have no idea. It seems likely that whatever you did do, it was not what you were attempting to do!
Perhaps the declarations appear in a Serializable class but the types (e.g. Combo, ProgressBar) are not themselves serializable. If this is UI code, then that seems very likely. I would merely comment the class to indicate that it should not be serialized.
This is a valid warning. You can refactor your code thus:
public void actionPerformedOnModifyComboLocations() {
    if (!getMainTree().isFocusControl()) {
        // ... do stuff, based on the initial test
    }
}
This is why I can't stand static analysis tools. A null assignment obviously leaves you open to NullPointerExceptions later. However, there are plenty of places where this is simply unavoidable (e.g. using try-catch-finally to do resource cleanup on a Closeable).
This also seems like a valid warning, and your use of static access would probably be considered a code smell by most developers. Consider refactoring by using dependency injection to inject the resource tracker into the classes where you currently use the static field.
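A hedged sketch of that dependency-injection idea; the FormTracker name and its API are assumptions for illustration, and the point is only that the dispose handler no longer writes to a static field:

// (in its own file) Tracks which form, if any, is currently open.
public class FormTracker {
    private MyClass current;

    public MyClass getCurrent() { return current; }
    public void formOpened(MyClass form) { this.current = form; }
    public void formDisposed() { this.current = null; }
}

// (in its own file) MyClass receives the tracker, e.g. via its constructor.
public class MyClass implements Listener {
    private final FormTracker tracker;

    public MyClass(FormTracker tracker) {
        this.tracker = tracker;
        tracker.formOpened(this);
    }

    @Override
    public void handleEvent(Event e) {
        if (e.type == SWT.Dispose && e.widget == getComposite()) {
            tracker.formDisposed();   // no static write any more
        }
    }
}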
If your class has unused imports then these should be removed. This might make the warnings disappear. On the other hand, if the imports are required, you may have a genuine circular dependency, which is something like this:
class A {
    private B b;
}

class B {
    private A a;
}
This is usually a confusing state of affairs and leaves you open to an initialization problem. For example, you may accidentally add some code in the initialization of A that requires its B instance to be initialized. If you add similar code into B, then the circular dependency would mean that your code was actually broken (i.e. you couldn't construct either an A or a B).
Again an illustration of why I really don't like static analysis tools - they usually just provide you with a bunch of false positives. The circular-dependent code may work perfectly well and be extremely well-documented.
For point 3, probably the majority of developers these days would say the single-return rule is simply flat wrong and on average leads to worse code. Others see that it is a written-down rule with historical credentials, that some code which breaks it is hard to read, and so conclude that not following it is simply wrong.
You seem to agree with the first camp, but lack the confidence to tell the tool to turn off that rule.
The thing to remember is it is an easy rule to code in any checking tool, and some people do want it. So it is pretty much always implemented by them.
Whereas few (if any) enforce the more subjective 'guard; body; return calculation;' pattern that generally produces the easiest-to-read and simplest code.
So if you are looking at producing good code, rather than simply avoiding the worst code, that is one rule you probably do want to turn off.
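For illustration, a small hedged example of that 'guard; body; return calculation;' shape (the method, the Order type and its rules are invented):

// guard; body; return calculation - hypothetical example
public int discountFor(Order order) {
    if (order == null || order.isEmpty()) {     // guard clause, fail fast
        return 0;
    }
    int subtotal = order.subtotal();            // body: the main work
    return subtotal > 100 ? subtotal / 10 : 0;  // single final calculation
}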
