Naming convention JUnit suffix or prefix Test [closed] - java

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
Class under test MyClass.java
JUnit test case name alternatives:
TestMyClass.java
MyClassTest.java
http://moreunit.sourceforge.net seems to default to "Test" as a prefix, but I have seen both uses. Both seem to be recognized when running the entire project as a unit test in Eclipse, as it is the @Test annotation inside the classes that is parsed, not the name. I guess Maven does the same thing.
Which is preferred?

Another argument for the suffix - at least in the English language:
A class usually represents a noun, it is a model of a concept. An instance of one of your tests would be a 'MyClass test'. In contrast, a method would model some kind of action, like 'test [the] calculate [method]'.
Because of this, I'd always use the 'suffix' for test classes and the prefix for test methods:
the MyClass test --> MyClassTest
test the calculate method --> testCalculate()

I prefer to use the suffix - it means that looking down the list of files in a directory is simpler: you don't have to mentally ignore the first four letters to get to something meaningful. (I'm assuming you have the tests in a different directory to the production code already.)
It also means that when you use Open Type (Ctrl-T) in Eclipse, you end up seeing both the production code and its test at the same time... which is also a reminder if you don't see a test class :)

Prior to JUnit 4 it was common to name your test classes SomethingTest and then run JUnit across all classes matching *Test.java. These days, with annotation-driven JUnit 4, you just need to annotate your test methods with @Test and be done with it. Your test classes are probably going to be under a different directory structure than your actual source (source in src/, test classes in test/), so prefixes/suffixes are largely irrelevant.
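To make that point concrete, here is a minimal, self-contained sketch of annotation-driven discovery. It uses a hand-rolled @Test annotation and reflection rather than JUnit itself (so no JUnit jar is needed), and the class and method names are invented; the point is that the runner keys on the annotation, not on the class name:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class MiniRunner {
    // Stand-in for org.junit.Test, so this sketch needs no JUnit jar.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Test {}

    // A test class whose name matters to humans, not to the runner.
    static class WhateverYouLikeToCallIt {
        @Test
        public void additionWorks() {
            if (1 + 1 != 2) throw new AssertionError("math is broken");
        }
        public void notATest() {} // ignored: no annotation
    }

    // Discover and run every @Test method, regardless of the class name.
    static int runTests(Class<?> testClass) {
        int run = 0;
        try {
            Object instance = testClass.getDeclaredConstructor().newInstance();
            for (Method m : testClass.getDeclaredMethods()) {
                if (m.isAnnotationPresent(Test.class)) {
                    m.invoke(instance);
                    run++;
                }
            }
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
        return run;
    }

    public static void main(String[] args) {
        System.out.println(runTests(WhateverYouLikeToCallIt.class)); // prints 1
    }
}
```

Since discovery is purely annotation-based, the Test prefix/suffix becomes a convention for humans (and for directory listings), not a technical requirement.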

Not to offend anybody, but I think it is fair to say that "moreunit" is much less well known than JUnit, which is pretty much ubiquitous and established the convention of suffixing test classes with "Test".
Although JUnit 4 did away with the necessity of following both the class and method naming conventions ("suffix Test" and "prefix test", respectively), I think both are still useful for clarity.
Imagine the horror of having src/test/java/.../MyClass.myMethod() tested by src/main/java/.../MyClass.myMethod()...
Sometimes, it is useful to diverge from the JUnit 3 conventions - I find that naming setup methods after what they do ("createTestFactory()") and annotating them @Before is much clearer than the generic "setUp()".
This is particularly useful when several unrelated setup actions need to be performed - they can be in separate methods, each tagged @Before. This communicates the independence of the actions very nicely.

I prefer the TestClassName syntax. With the other syntax I have trouble telling the test apart from the actual class in editors when I have both open: having to look for the last four letters of the name is tiresome, and those letters are not always displayed.
For me the other syntax leads to several wrong swaps between files every day, and that is time consuming.

I think it is important that you feel comfortable with your tests if you are working alone. But if you're in a group, you had better sit down and agree on something fixed. I personally tend to use the suffix for classes and the prefix for methods, and I try to have my groups adopt this convention.

I also use MyClassTest_XXX when I want to split my tests into multiple classes. This is useful when testing a big class where I want the tests logically grouped. (I can't control legacy code, so this scenario does come up.) Then I have something like KitchenSinkTest_ForArray, KitchenSinkTest_ForCollection, etc.

I suggest MyClassTests.
Classes should be noun phrases, so the commonly used MyClassTest and the less common MyClassTests, MyClassTestCase, or MyClassTestFixture all work. Technically, an instance of a JUnit test class represents a test fixture, but TestFixture is a bit too verbose for me.
I think that MyClassTests conveys the intent best because there are typically multiple test methods in a class, each representing a single test (test case).

I prefer the suffix TestCase. This is consistent with: http://xunitpatterns.com/Testcase%20Class.html

Should we avoid usage of Powermock? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
In general, PowerMock allows us to mock/stub static behavior or state. For example, we could mock a static method of a utility class like public static String buildKeyFrom(...) {...} and override its behavior, or even return our own mock instance when the target class tries to create an object with a constructor, new MyService(...).
A few examples of PowerMock API usage:
when(StorageKeyUtils.buildKey(id, group, suffixes)).thenReturn("my:group-test:an-id:suffix1")
whenNew(MyParser.class).withArguments(factory).thenReturn(parserMock)
And... it works; it actually helps us avoid refactoring to improve the testability of our code. There is no longer any need to extract static behavior into separate classes, to introduce factories to instantiate new objects, and so on.
But PowerMock also has disadvantages:
Complicated setup.
In fact, it's not just a single whenSomething call as in Mockito; besides that and the replacement of the test runner, you are also forced to use @PrepareForTest and PowerMockito.mockStatic(..). Try to remember which classes have to be listed in the annotation and inside mockStatic without checking the tests you implemented previously or the documentation.
Sometimes it even works without mockStatic while you are still trying to mock static methods.
Of course, we could spend some time investigating the documentation to clarify all these questions...
Bugs and glitches.
Sometimes it works, sometimes it doesn't. Examples:
Conflicts with coverage tools. Due to conflicts over class instrumentation you may lose test coverage reporting for your code, for example with JaCoCo.
Try to google for powermock mbeanserver... Why does PowerMockito interfere with the MBeanServer and force us to mark our test sets with @PowerMockIgnore? This has been going on since 2013. But sometimes it works fine without the exclusion - why? I don't know.
It is unable to mock a static method or constructor passed as a method reference in a lambda.
It simply encourages the use of static - advocates of OOP are welcome to describe why we shouldn't use static methods, etc.
In general, I would say yes, we should avoid using PowerMock. The one doubtful case I see for it: you have no time for an appropriate design of your code that would make it testable enough without PowerMockito (but do you really need that quality of testing if you don't have time for code design?)
What do you think? Do you use Powermock on regular basis? Do you follow some rules while using Powermock on your project?
Typically, clean code won't need PowerMock for testing, because clean code supports dependency injection, is loosely coupled, and is easy to unit test, so it doesn't rely on static methods.
Legacy/dirty code, on the other hand, is riddled with static methods, is tightly coupled, and doesn't support dependency injection. It's these legacy code bases where you'll need PowerMock for testing.
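As a sketch of the refactoring direction that makes PowerMock unnecessary (all class, interface, and method names here are invented for illustration; only buildKey echoes the question's example): instead of calling a static utility directly, inject the collaborator behind a small interface, so a plain lambda or hand-rolled fake replaces the static mock:

```java
import java.util.Objects;

// Before (hypothetical): a hard-wired static call, only mockable with PowerMock:
//   String key = StorageKeyUtils.buildKey(id, group);
// After: the key-building strategy is injected, so any plain fake will do.
class CacheReader {
    // Single-method interface; a lambda can implement it in tests.
    interface KeyBuilder {
        String buildKey(String id, String group);
    }

    private final KeyBuilder keys;

    CacheReader(KeyBuilder keys) {
        this.keys = Objects.requireNonNull(keys);
    }

    String keyFor(String id, String group) {
        return keys.buildKey(id, group);
    }
}
```

In a test you can now write new CacheReader((id, g) -> "fixed-key") - no test runner replacement, no @PrepareForTest, and coverage tools keep working because no bytecode instrumentation conflict is introduced.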
I don't recommend PowerMock, but life is not always as we would like it; sometimes you come into a project which, let's say, doesn't follow best programming practices, and then PowerMock can be acceptable in my opinion. The question is a bit too broad.

How do I write a TDD test for removing a field from a Java class?

I have a Java class with three fields. I realized I only need two of them due to changes in requirements.
Ideally I'd write a failing test case before modifying code.
Is there a standard way, or should I just ignore TDD for this task?
That's refactoring, so you don't need to start with failing tests.
Find all the methods using the field.
Make sure that they're covered by unit tests.
Refactor the methods so they no longer use the field.
Remove the field.
Ensure that the tests are running.
Does dropping this field change the behavior of the class? If not, just drop it and check that the class still works correctly (i.e., passes the tests you should have already written).
The TDD principle is to write code "designed by tests". That may sound silly, but it means that the first class you should write is the test class, testing the behavior of the class under test. You should iterate over a few steps:
Write the test. It should not compile (you don't have the class/classes under test yet).
Make the test compile. It should fail (you just have an empty class which does not satisfy the assertions in the test).
Make the test pass in the simplest way (usually, just making the method you are testing return the expected value).
Refine/Refactor/Generalize the class under test, and re-run the test (it should still pass). This step should be really fast, usually less than 2 minutes.
Repeat from step 2 until the desired behavior emerges almost naturally.
If you have an exhaustive list of all the fields you need, you can compare it against the actual fields via reflection:
yourObject.getClass().getDeclaredFields() vs. your list of fields
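A minimal sketch of that reflection-based check (the Customer class, its fields, and the helper name are hypothetical; synthetic compiler-generated fields are filtered out so they don't trip the comparison):

```java
import java.lang.reflect.Field;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.stream.Collectors;

public class FieldListCheck {
    // Hypothetical class under test after the removal: only two fields remain.
    static class Customer {
        String name;
        int age;
    }

    // True when the declared (non-synthetic) fields are exactly the expected set.
    static boolean hasExactlyFields(Class<?> cls, Set<String> expected) {
        Set<String> actual = Arrays.stream(cls.getDeclaredFields())
                .filter(f -> !f.isSynthetic())
                .map(Field::getName)
                .collect(Collectors.toSet());
        return actual.equals(expected);
    }

    public static void main(String[] args) {
        Set<String> expected = new HashSet<>(Arrays.asList("name", "age"));
        System.out.println(hasExactlyFields(Customer.class, expected)); // prints true
    }
}
```

Such a test fails while the obsolete field still exists and passes once it is removed, giving you a red-then-green cycle even for a pure deletion.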
Write a test for the constructor without the field you want to remove.
Obviously only works if the constructor takes the field's value as a parameter.
Delete all tests covering the removed functionality (this doesn't count as "writing production code" as per the 3 Rules of TDD).
Delete all references to the obsolete field in the remaining tests. If any of them then fails, you are allowed to write the production code required to make it pass.
Once your tests are green again, all subsequent modifications fall into the "refactoring" category. You are allowed to remove your (now unused) field here.

Class Naming Convention [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
As the size of my project grows, I feel there should be a convention for names of classes that have similar functionality.
Let's assume that there are three data handlers having similar functionality, and the only difference is the data type they handle.
There is interface DataHandler.
interface DataHandler
There are three different types of data, Bitmap, Video, and Sound.
Which option is more widely-used naming convention?
Option 1.
class BitmapHandler
class VideoHandler
class SoundHandler
Option 2.
class DataHandlerBitmap
class DataHandlerVideo
class DataHandlerSound
I am currently using option 1 since it sounds better, but I think option 2 also has advantages, especially when the project is large: I can easily check how many data handlers exist by sorting class names alphabetically, and people can easily discover and use all similar classes through the IDE's IntelliSense.
EDITED
I removed the c# tag. I hadn't considered that C# and Java have different naming conventions.
First of all, the naming convention for an interface would be
IDataHandler
Then, your classes should be:
class BitmapDataHandler
class VideoDataHandler
class SoundDataHandler
In any case, I believe option #2 is not so relevant; to discover classes in your project, use the Find All References command (if, for example, you don't use ReSharper, which makes things much easier).
My company is a very large company (Fortune 30), and the convention is to use something along the lines of your Option 1. It is sufficiently descriptive for even massive projects and just has a nicer ring to it. :)
Your Option 2 only serves to obfuscate the meaning or purpose of your classes without adding any semantic value.
I think you should use namespaces to do the grouping.
On top of that, if project size gets in the way of finding files, use ReSharper. It enables you to search on partial file or class names in a very intuitive way. Using the right tools keeps you from adopting strange standards for the wrong reasons.
Option 1 makes the most sense to me, and seems to be the convention used throughout the built-in Java classes. For example, in the java.io package,
BufferedReader
CharArrayReader
InputStreamReader
are all subclasses of java.io.Reader.
Option 2 seems redundant, e.g. class DataHandlerXX implements DataHandler, and the IDE can identify all subclasses of DataHandler anyway, so you wouldn't need to rely on sorting to determine common functionality.
My preferred option is a combination of many of the answers here.
1) Interfaces should start with an I: IDataHandler. This is common practice.
2) Name classes after Option 1. Names should not describe structure: a class could easily implement several interfaces, listing them all would add significant clutter, and what happens when you implement a new interface or remove one due to changing requirements/updates?
3) Use namespaces to group like items. This will make discovering/understanding the design easier (as is your intention in Option 2).
Combining these things it is apparent with little context how things work.
// create a context with the namespace
namespace Media.Decoders
{
// start interfaces with an I
IDataHandler { ... }
// name classes descriptively
class BitmapHandler : IDataHandler
class VideoHandler : IDataHandler
class SoundHandler : IDataHandler
}
// example if these handlers were decoding media types
// imagine navigating through the structure section by section
MyLibrary.Media.Decoders.BitmapHandler
// if everything is grouped logically, and I were looking for an encoder, a natural place to look would be
MyLibrary.Media.Encoders.BitmapHandler

Unit testing several implementation of the same trait/interface

I program mostly in Scala and Java, using ScalaTest in Scala and JUnit for unit testing. I would like to apply the very same tests to several implementations of the same interface/trait. The idea is to verify that the interface contract is enforced and to check the Liskov substitution principle.
For instance, when testing implementations of lists, tests could include:
An instance should be empty if and only if it has zero size.
After calling clear, the size should be zero.
Adding an element in the middle of a list will increment the index of the elements to its right by one.
etc.
What are the best practices?
In Java/JUnit, I generally handle this by having an abstract test case from which the tests for each specific implementation inherit all the tests, plus a setup method instantiating the implementation. I can't watch the video abyx posted right now, but I suspect it's this general idea.
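A hand-rolled sketch of that abstract-test-case pattern, using plain assertions so it stays self-contained (with JUnit you would instead mark the checks @Test in the abstract base and let each subclass inherit them); the class names are invented:

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

// Abstract contract: the checks are written once, against the interface.
abstract class ListContract {
    // Each concrete subclass supplies the implementation under test.
    protected abstract List<String> createList();

    // Contract from the question: empty iff size == 0; clear() empties the list.
    void checkContract() {
        List<String> list = createList();
        if (list.isEmpty() != (list.size() == 0))
            throw new AssertionError("isEmpty inconsistent with size");
        list.add("x");
        list.clear();
        if (list.size() != 0)
            throw new AssertionError("clear() did not empty the list");
    }
}

// One tiny subclass per implementation; the shared checks are inherited.
class ArrayListContractTest extends ListContract {
    protected List<String> createList() { return new ArrayList<>(); }
}

class LinkedListContractTest extends ListContract {
    protected List<String> createList() { return new LinkedList<>(); }
}
```

Every implementation runs the identical contract checks, which is exactly the Liskov-substitution verification the question asks for: only the factory method differs per subclass.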
Another interesting possibility if you don't mind introducing yet another testing framework would be to use JDave Specification classes.
I haven't tried using either of these with Scalatest or with Scala traits and implementations, but it should be possible to do something similar.
This sounds like it could be a job for shared tests. Shared tests are tests that are shared by different fixture objects. I.e., the same test code is run on different data. ScalaTest does have support for that. Search for "shared tests" in the documentation of your favorite style trait that represents tests as functions (Spec, WordSpec, FunSuite, FlatSpec, etc.). An example is the syntax for FlatSpec:
it should behave like emptyList
See Sharing Tests in the FlatSpec documentation
Contract tests are easy to do with JUnit 4, here's a video by Ben Rady.
For Scala, strongly consider ScalaCheck. All of those contracts are expressible as one-line specifications in ScalaCheck. When run, ScalaCheck will generate a configurable number of sample inputs randomly, and check that all of the specifications hold. It's about the most semantically dense way possible to create unit tests.

Java: Best practices for turning foreign horror-code into clean API...?

I have a project (related to graph algorithms). It is written by someone else.
The code is horrible:
public fields, no getters/setters
huge methods, all public
some classes have over 20 fields
some classes have over 5 constructors (which are also huge)
some of those constructors just leave many fields null
(so I can't make some fields final, because then every second constructor signals errors)
methods and classes rely on each other in both directions
I have to rewrite this into a clean and understandable API.
Problem is: I myself don't understand anything in this code.
Please give me hints on analyzing and understanding such code.
I was thinking, perhaps, there are tools which perform static code analysis
and give me call graphs and things like this.
Oh dear :-) I envy you and I don't, at the same time. OK, let's take one thing at a time. Some of these things you can tackle yourself before you set a code-analyzing tool loose on it; this way you will gain a better understanding and get much further than with a simple tool.
public fields, no getters/setters
Make everything private. Your rule should be to limit access as much as possible.
huge methods, all public
Split them and make pieces private where it makes sense to do so.
some classes have over 20 fields
Ugh.. the Builder pattern in Effective Java, 2nd Ed., is a prime candidate for this.
some classes have over 5 constructors (which are also huge)
Sounds like telescoping constructors; the same pattern as above will help.
some of those constructors just leave many fields null
Yep, telescoping constructors :)
methods and classes rely on each other in both directions
This will be the least fun. Try to remove inheritance unless you're perfectly clear it is required, and use composition instead, via interfaces where applicable.
Best of luck; we are here to help.
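For the telescoping-constructor items above, here is a minimal sketch of the Builder pattern in the spirit of Effective Java (the GraphConfig class and its fields are invented for illustration): instead of five overloaded constructors that leave fields null, one private constructor consumes a builder, so optional fields get explicit defaults and can be made final:

```java
class GraphConfig {
    private final int nodes;        // required
    private final boolean directed; // optional, defaults to false
    private final boolean weighted; // optional, defaults to false

    // Single private constructor: no telescoping overloads, no null fields.
    private GraphConfig(Builder b) {
        this.nodes = b.nodes;
        this.directed = b.directed;
        this.weighted = b.weighted;
    }

    int nodes() { return nodes; }
    boolean directed() { return directed; }
    boolean weighted() { return weighted; }

    static class Builder {
        private final int nodes;
        private boolean directed = false;
        private boolean weighted = false;

        Builder(int nodes) { this.nodes = nodes; }
        Builder directed(boolean d) { this.directed = d; return this; }
        Builder weighted(boolean w) { this.weighted = w; return this; }
        GraphConfig build() { return new GraphConfig(this); }
    }
}
```

Call sites then read like new GraphConfig.Builder(10).directed(true).build(), which also makes it obvious which options were deliberately set and which were left at their defaults.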
WOW!
I would recommend: write unit tests and then start refactoring.
* public fields, no getters/setters
Start by making them private and 'feel' the resistance, using compiler errors as a metric.
* huge methods, all public
Understand their semantics; try to introduce interfaces.
* some classes have over 20 fields
Very common in complex applications, nothing to worry about.
* some classes have over 5 constructors (which are also huge)
Replace them with the builder/creator pattern.
* some of those constructors just leave many fields null
See the above answer.
* methods and classes rely on each other in both directions
Decide whether to rewrite everything (honestly, I have faced cases where only 10% of the code was needed).
Well, the clean-up wizard in Eclipse will scrape off a noticeable percentage of the sludge.
Then you could point Sonar at it and fix everything it complains about, if you live long enough.
For static analysis and call graphs (no graphics, but graph structures), you can use Dependency Finder.
Use an IDE that knows something about refactoring, like IntelliJ. You won't have situations where you move one method and five other classes complain, because IntelliJ is smart enough to make all the required changes.
Unit tests are a must. Someone refactoring without unit tests is like a high-wire performer without a safety net. Get one before you start the long, hard climb.
The answer may be: patience & coffee.
This is the way I would do it:
Start using the code, e.g. from within a main method, as if it were used by the other classes - same arguments, same invocation orders. Do that inside a debugger, so you see each step that the class takes.
Start writing unit tests for that functionality. Once you have reached a reasonable coverage, you will start to notice that this class probably has too many responsibilities.
while ( responsibilities != 1 ) {
Extract an interface which expresses one responsibility of that class.
Make all callers use that interface instead of the concrete type;
Extract the implementation to a separate class;
Pass the new class to all callers using the new interface.
}
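One pass through that loop might look like the following sketch (all names are invented for illustration): a class that both parses and stores graph nodes has its storage responsibility extracted behind an interface, so callers depend on the abstraction and the implementation moves into its own class:

```java
import java.util.HashMap;
import java.util.Map;

// Step 1: extract an interface for one responsibility (storage).
interface NodeStore {
    void save(String id, String label);
    String load(String id);
}

// Step 3: extract the implementation into a separate class.
class InMemoryNodeStore implements NodeStore {
    private final Map<String, String> nodes = new HashMap<>();
    public void save(String id, String label) { nodes.put(id, label); }
    public String load(String id) { return nodes.get(id); }
}

// Steps 2 and 4: the caller now depends only on the interface,
// and receives the implementation from outside.
class GraphParser {
    private final NodeStore store;
    GraphParser(NodeStore store) { this.store = store; }

    void parseLine(String line) {   // e.g. "n1=Start"
        String[] parts = line.split("=", 2);
        store.save(parts[0], parts[1]);
    }
}
```

After this pass, parsing can be unit tested with a fake NodeStore and storage tested on its own, and the loop repeats for the next responsibility still lumped into the original class.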
Not saying tools like Sonar, FindBugs, etc. that some have already mentioned don't help, but there are no magic tricks. Start from something you do understand, create a unit test for it, and once it runs green start refactoring piece by piece. Remember to mock dependencies as you go along.
Sometimes it is easier to rewrite something from scratch. Is this 'horrible code' working as intended, or is it full of bugs? Is it documented?
In my current project, deleting my predecessor's work nearly in its entirety and rewriting it from scratch was the most efficient approach. Granted, this was an extreme case of code obfuscation, utter lack of meaningful comments, and utter incompetence, so your mileage may vary.
Though some legacy code might be barely comprehensible, it can still be refactored and improved to legibility in a stepwise fashion. Have you seen Joshua Kerievsky's Refactoring to Patterns book? It's good on this.
