Can I make JUnit more verbose? - java

I'd like to have it yell hooray whenever an assert statement succeeds, or at the very least have it display the number of successful assert statements that were encountered.
I'm using JUnit4.
Any suggestions?

If you want to see some output for each successful assertion, another simple approach, which requires no external dependencies or source changes, is to define your own Assert class that delegates all methods to the standard JUnit Assert class and also logs successful assertions (failed assertions are reported as usual by the JUnit class).
You then run a global search-and-replace on your test classes from "org.junit.Assert" to "com.myco.test.Assert", which should fix up all regular and static import statements.
You could then also easily migrate to the quieter-is-better camp and change the wrapper class to report only the total number of passed assertions per test or per class, etc.
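A minimal sketch of such a delegating wrapper (the com.myco.test package name and the plain stdout logging are just placeholders):

package com.myco.test;

public final class Assert {

    private Assert() {}

    // Delegate to JUnit; if the call returns normally, the assertion passed.
    public static void assertEquals(Object expected, Object actual) {
        org.junit.Assert.assertEquals(expected, actual);
        log("assertEquals passed: " + expected);
    }

    public static void assertTrue(String message, boolean condition) {
        org.junit.Assert.assertTrue(message, condition);
        log("assertTrue passed: " + message);
    }

    // ... repeat for the other org.junit.Assert methods your tests use ...

    private static void log(String message) {
        System.out.println("HOORAY! " + message);
    }
}

Static imports keep working after the search-and-replace, since the delegating methods keep the same names and signatures.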

Adding some info that would have been helpful to me when I wanted JUnit to be more verbose and stumbled on this question. Maybe it will help other testers in the future.
If you are running JUnit from Ant, and want to see what tests are being run, you can add the following attributes to your junit task:
<junit showoutput="true" printsummary="on" enabletestlistenerevents="true" fork="#{fork}" forkmode="once" haltonfailure="no" timeout="1800000">
Note that showoutput, printsummary, and enabletestlistenerevents are what helped, not the other task attributes. If you set these, you'll get output like:
Running com.foo.bar.MyTest
junit.framework.TestListener: tests to run: 2
junit.framework.TestListener: startTest(myTestOne)
junit.framework.TestListener: endTest(myTestOne)
junit.framework.TestListener: startTest(myTestTwo)
junit.framework.TestListener: endTest(myTestTwo)
Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.495 sec
This was useful to me when my tests were timing out and I wasn't sure which tests were actually taking too long, and which tests got cancelled because they were unlucky enough to be running when the time was up.

You can use AOP (with Spring or AspectJ) to define pointcuts on all assert methods in the junit.framework.Assert class. With Spring you can implement your own class as after-returning advice (http://static.springframework.org/spring/docs/2.5.x/reference/aop.html#aop-advice-after-returning), which will only be called if the assert method passed (otherwise it throws a junit.framework.AssertionFailedError). In your own class you can implement a simple counter and print it at the end.
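A rough annotation-style AspectJ sketch of that idea (note that plain Spring AOP proxies cannot intercept static calls to Assert, so compile-time or load-time weaving is assumed here, and the pointcut targets org.junit.Assert):

import org.aspectj.lang.annotation.AfterReturning;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class PassedAssertionCounter {

    private static int passed = 0;

    // Runs only when an assert method returns normally, i.e. the assertion succeeded;
    // a failed assertion throws and skips this advice.
    @AfterReturning("execution(* org.junit.Assert.assert*(..))")
    public void countPassedAssertion() {
        passed++;
    }

    public static int passedSoFar() {
        return passed;
    }
}

You could print passedSoFar() from an @AfterClass method to get a per-class total.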

Are you really interested in an assertion that succeeds? Normally, the only interesting assertions are ones that fail.
Being a fervent JUnit devotee myself, I try and make the output of the tests as quiet as possible because it improves the signal-to-noise ratio when something doesn't pass. The best test run is one where everything passes and there's not a peep from stdout.
You could always work on your unit test until it succeeds and run "grep Assert test.java | wc -l". :-)

Hard to do. All assert methods are static members of the Assert class, which means the RunNotifier (which counts the successful and failed tests) is not within reach.
If you don't shrink from an ugly hack: take the JUnit sources and patch them to store the current notifier in a static field of Assert while tests are running, so that the static assert methods can report successful assertions to that notifier.

I'm pretty sure you can create a custom TestRunner that does that. We ended up with something similar in our homemade Unit-testing framework (a clone of NUnit).
Oh, wait - now that I'm reading your question again, if you really want output for each successful assertion, you'll have to dig into the plumbing more. The TestRunner only gets called once for each testcase start/end, so it'll count passed and failed tests, not assertions.
This isn't much of a problem for me, since I tend towards one assertion per test, generally.

I don't think it's the goal of JUnit to count matched assertions or print out more verbose information.
If tests are atomic, you'll get most of the information from the test results themselves, so I would review my tests.
You are also able to set up a log file in JUnit. It's possible, but it will decrease test execution performance...

This is not a straight answer to the question and is in fact a misuse of a JUnit feature, but if you need to debug some values that are used in the asserts, you can temporarily add something like:
Assume.assumeTrue(interestingData, false);
This won't fail the build, but will mark the test as IGNORED, and will force the values to be included in the test report output.
❗️ Make sure to remove the statements once you are done. Or, as an alternative, you can change the statement to Assume.assumeTrue(interestingData, true) in case you might want to debug it again in the future.
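A self-contained illustration of the trick (the test class and the interestingData value are invented, and the two-argument assumeTrue(String, boolean) overload is assumed to be available in your JUnit 4 version):

import org.junit.Assume;
import org.junit.Test;

public class DebugViaAssumeTest {

    @Test
    public void parsesDate() {
        // Hypothetical intermediate value you want to see in the report.
        String interestingData = "normalized input = '2020-01-31'";

        // Temporary debugging aid: marks the test as skipped and surfaces
        // interestingData in the report instead of failing the build.
        Assume.assumeTrue(interestingData, false);

        // ... the real assertions would follow here ...
    }
}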

This question was asked long ago, but just to add:
Isn't it better to get the reports generated as HTML under the build folder and refresh the browser after every test?
Using Gradle (which supports JUnit out of the box through its plugins), I'm able to open the file at build/reports/tests/test/index.html in my project and check the results on a per-package, per-class, and per-method basis.
PS: You can install a browser extension that refreshes the page if doing it manually becomes annoying.
Hope this helps someone if the constraints apply.
Here, here and here you can find more on how to generate reports for JUnit test results.

JUnit's javadoc unfortunately says that only failed assertions are recorded (http://junit.sourceforge.net/javadoc_40/index.html), so it seems it would not be possible.

Can you consider:
1) downloading the JUnit source
2) modifying the class org.junit.Assert to make whatever changes you're looking for

Related

Custom unit test result

Is there a way to create a custom unit test result in TestNG/JUnit (or any other Java testing framework)? I understand that unit tests can either pass or fail (or be ignored), but currently I really would like to have a third option.
The company I'm working with right now has adopted the testing style of cleverly comparing screenshots of their application, so a test can either pass, fail, or diff, when the screenshots do not match within a predetermined tolerance. In addition, they have their in-house test "framework" and runners. This was done long before I joined.
What I would like to do is to migrate test framework to the one of the standard ones, but this process should be very gradual.
The approach I was thinking about was to create a special exception (e.g. DiffTolleranceExcededException), fail the test and then customize test result in the report.
Maybe you already mean the following with
The approach I was thinking about was to create a special exception
(e.g. DiffTolleranceExcededException), fail the test and then
customize test result in the report.
but just in case: you can certainly use the option of passing a pre-defined message string to the assertions. In your case, if the screenshots are identical, the tests pass. If they are too different, the tests just fail. If they are within tolerance, you make them fail with a message like "DIFFERENT BUT WITHIN TOLERANCE" or whatever; these failures are then easily distinguishable. You could certainly also invert the logic: add a message to the failures that are not within the tolerance, to make those visually prominent.
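A sketch of that message-based "third state" (the comparison helper and the tolerance value are made up for illustration):

import static org.junit.Assert.fail;

import org.junit.Test;

public class ScreenshotComparisonTest {

    private static final double TOLERANCE = 0.02; // made-up threshold

    @Test
    public void loginScreenMatchesBaseline() {
        // Hypothetical helper returning a normalized difference in [0, 1].
        double diff = compareScreenshots("login-actual.png", "login-baseline.png");

        if (diff == 0.0) {
            return; // identical: the test simply passes
        }
        if (diff <= TOLERANCE) {
            // The "diff" state: a failure, but with an easily recognizable message.
            fail("DIFFERENT BUT WITHIN TOLERANCE (diff=" + diff + ")");
        }
        fail("Screenshots differ beyond tolerance (diff=" + diff + ")");
    }

    private double compareScreenshots(String actual, String baseline) {
        return 0.0; // placeholder for the in-house image comparison
    }
}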
You could follow this approach to customize your test reports, adding a new column to the report and creating your own report (with a screenshot, for example).


How to build up test cases in junit?

I'm coming from a Perl background where I used Test::More to handle unit testing. Using that framework, I knew the order in which the tests took place and could rely on that, which I understand is not encouraged with the JUnit framework. I've seen several ways to get around this, but I want to understand the proper/intended way of doing things.
In my Perl unit testing I would build up tests, knowing that if test #3 passed, I could make some assumptions in further tests. I don't quite see how to structure that in the JUnit world so that I can make every test completely independent.
For example, suppose I have a class that parses a date from a string. Methods include:
parse a simple date (YYYY-MM-DD)
parse a simple date with alternate separator (YYYY_MM_DD or YYYY/MM/DD)
parse a date with a string for a month name (YYYY-MON-DD)
parse a date with a string month name in a different language
and so on
I usually write my code to focus as many of the externally-accessible methods into as few core methods as possible, re-using as much code as possible (which is what most of us would do, I'm sure). So, let's say I have 18 different tests for the first method, 9 that are expected to pass and 9 that throw an exception. For the second method, I only have 3 tests, one each with the separators that work ('_' & '/') and one with a separator that doesn't work ('*') which is expected to fail. I can limit myself to the new code being introduced because I already know that the code properly handles the standard boundary conditions and common errors, because the first 18 tests already passed.
In the Perl world, if test #20 fails, I know that it's probably something to do with the specific separator, and is not a general date parsing error because all of those tests have already passed. In the JUnit world, where tests run in a random order, if test #20 fails, I don't know if it's because of a general date parsing issue or because of the separator. I'd have to go and see which other ones failed and then make some assumptions there. That's not too hard to do, of course, but maybe in a bigger, more complex class, it would be more difficult to do.
How do other people deal with building up a set of tests? Should I put each and every test in a separate class and use a test suite? That seems tedious. (And before someone suggests that I put the first 18 in one class and the second 3 in another, and use a test suite for just those groupings, let's pretend that all 18 of the early tests build on each other, too.)
And, again, I know there are ways around this (FixedMethodOrder in JUnit 4.11+ or JUnit-HierarchicalContextRunner), but I want to understand the paradigm as it's intended to be used.
In the JUnit world, where tests run in a random order, if test #20 fails, I don't know if it's because of a general date parsing issue or because of the separator. I'd have to go and see which other ones failed and then make some assumptions there.
Yes, that is correct. If something in your code is broken, then multiple tests may fail. That is a good thing. Use intent-revealing test method names, and possibly the optional String message parameter in the JUnit assertions, to explain exactly what failed the test.
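For example, something along these lines (DateParser is a hypothetical class under test):

import static org.junit.Assert.assertEquals;

import java.time.LocalDate;

import org.junit.Test;

public class DateParserTest {

    private final DateParser parser = new DateParser(); // hypothetical class under test

    @Test
    public void parsesSimpleIsoDate() {
        assertEquals("basic YYYY-MM-DD parsing is broken",
                LocalDate.of(2020, 1, 31), parser.parse("2020-01-31"));
    }

    @Test
    public void parsesDateWithSlashSeparator() {
        assertEquals("separator handling is broken (basic parsing is covered by other tests)",
                LocalDate.of(2020, 1, 31), parser.parse("2020/01/31"));
    }
}

If both tests fail, the problem is probably general parsing; if only the second fails, it is probably the separator.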
How do other people deal with building up a set of tests? Should I put each and every test in a separate class and use a test suite?
The general convention is one test class per source class. Depending on what build tool you are using, you may or may not need to use test suites. If you are using Ant, you probably need to collect the tests into test suites, but if you are using Maven, its test plugins will find all your test classes for you, so you don't need suites.
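If you do need a suite (for example, with Ant), a minimal JUnit 4 suite looks like this (the listed test classes are placeholders):

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

@RunWith(Suite.class)
@Suite.SuiteClasses({ DateParserTest.class, DateFormatterTest.class })
public class AllTests {
    // No body needed; the annotations tell the runner which classes to include.
}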
I also want to point out that you should be coding to Java interfaces as much as possible. If you are testing a class C that depends on an implementation of interface I, then you should mock your I implementation in your C test class so that C is tested in isolation. Your mock I should behave the way the interface is supposed to. This also keeps the number of failing tests down: if there is a bug in your real I implementation, then only your I tests should fail; the C tests should still all pass (since you are testing C against a fake but working I implementation).
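A rough Mockito-based sketch of that isolation (the interface I and class C are stand-ins, defined inline to keep the example self-contained):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class CTest {

    interface I {                        // stand-in dependency interface
        String lookup(String key);
    }

    static class C {                     // stand-in class under test
        private final I dependency;
        C(I dependency) { this.dependency = dependency; }
        String resolve(String key) { return dependency.lookup(key); }
    }

    @Test
    public void delegatesLookupToItsDependency() {
        I fake = mock(I.class);
        when(fake.lookup("answer")).thenReturn("42");

        C c = new C(fake);

        // C is exercised against a fake-but-working I, so a bug in the real
        // implementation of I cannot make this test fail.
        assertEquals("42", c.resolve("answer"));
        verify(fake).lookup("answer");
    }
}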
Don't worry about suites yet. You'll know when you need them. I've only had to use them a handful of times, and I'm not entirely sold on their usefulness...but I leave that decision up to you.
To the meat of your question - the conventional way with JUnit tests is to neither know nor depend on the order of execution of your tests; this ensures that your tests are not run-order dependent, and if they are, something is wrong with your tests* and validation.
The core concept behind unit tests is that they test a unit of code, as small as a single function. If you're attempting to test five different things at once, your test is far too large and should be broken out. If the method you're testing is monolithic in nature and difficult to test, it should be refactored and broken out into different slices of responsibility.
Tests that exercise a larger flow are better suited for integration-style tests, which tend to be written as unit tests, but aren't actually unit tests.
I've not run into a scenario in which, knowing that a certain test failed, I could expect different behavior from the other tests. I've never found such a thing necessary to note, since the only thing I care about in my unit test is how that unit of code behaves given a certain input.
Keep your tests small and simple to understand; the test should only make one assertion about the result (or a general assertion of the state of your result).
*: That's not to say that it's completely broken, but those sorts of tests should be fixed sooner rather than later.

How to deal with interdependent JUnit tests?

I have a question about JUnit testing.
Our JUnit suite is testing various functions that we wrote that interact with our memory system.
The way our system was designed requires it to be static, and therefore it is initialized prior to running the tests.
The problem we are having is that when subsequent tests are run, they are affected by the tests before them, so it is possible (and likely) that we are getting false positives, or inaccurate failures.
Is there a way to maintain the testing order of our JUnit tests, but have it re-initialize the entire system, as if testing the system from scratch?
The only option we can think of is to write a method that does this, and call it at the end of each test, but as there are lots and lots of things that need to be reset this way, I am hoping there is a simpler way to do this.
I've seen problems with tests many times where they depend on each other (sometimes deliberately!).
First, you need to set up a setUp method:
@Before
public void setUp() {
    // If your test extends a base class with its own setUp, call super.setUp() first.
    // Now clear, reset, etc. all your static data.
}
This is automatically run by JUnit before each test and will reset the environment. You can add one after as well, but before is better for ensuring a clean starting point.
The order of your tests is usually the order they are in the test class. But this should never be assumed and it's a really bad idea to base code on that.
Go back to the documentation if you need more information.
The approach I took to this kind of problem was to do partial reinitialization before each test. Each test knows the preconditions that it requires, and the setup ensures that they are true. Not sure if this will be relevant for you. Relying on order often ends up being a continuing PITA - being able to run tests by themselves is better.
Oh yeah - there's one "test" that's run at the beginning of a suite that's responsible for static initialization.
You might want to look at TestNG, which supports test dependencies for this kind of functional testing (JUnit is a unit testing framework):
@Test
public void f1() {}
@Test(dependsOnMethods = "f1")
public void f2() {}

Can I write a test without any assert in it?

I'd like to know if it is "ok" to write a test without any "assert" in it, so the test would fail only when an exception/error has occurred.
E.g. a test which runs a simple select query, to ensure that the database configuration is right. So when I change some DB configuration, I re-run this test and check that the configuration is still right.
Thanks!
It is perfectly valid to make sure a unit test runs without encountering an exception.
As per Matt B's suggestion, be sure to document what the test is actually testing to be clear and precise.
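A minimal sketch of the kind of test the asker describes (the JDBC URL is a placeholder; the javadoc comment doubles as the documentation suggested above):

import java.sql.Connection;
import java.sql.DriverManager;

import org.junit.Test;

public class DatabaseConfigurationSmokeTest {

    private static final String JDBC_URL = "jdbc:h2:mem:configcheck"; // placeholder URL

    /**
     * Verifies that the configured database is reachable and accepts a trivial query.
     * There are no assertions: the test fails only if an exception is thrown.
     */
    @Test
    public void simpleSelectRunsWithoutException() throws Exception {
        try (Connection connection = DriverManager.getConnection(JDBC_URL)) {
            connection.createStatement().executeQuery("SELECT 1");
        }
    }
}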
As @Kyle noted, your test case is valid. In fact, the opposite would also be valid: writing a test case to confirm that a certain call with specific parameter(s) results in an exception.
Sure you can do that.
It is also perfectly fine to write a test without assertions where the expected outcome is an exception. I know TestNG will let you specify an exception that should be thrown, and the test will fail if the expected exception isn't thrown.
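JUnit 4 offers the same thing through the expected attribute of @Test; a small sketch:

import org.junit.Test;

public class ExpectedExceptionTest {

    // No assertions needed: the test passes only if the exception is thrown.
    @Test(expected = NumberFormatException.class)
    public void rejectsNonNumericInput() {
        Integer.parseInt("not a number");
    }
}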
Testing is a really subjective discussion. Some people will say no, you should always follow the AAA (arrange-act-assert) pattern. Personally, I've written tests that do things very similar to what you're talking about, so I'd say sure, go ahead - if it helps you build a more stable app, then why not.
For example, in NUnit I consider [ExpectedException(typeof(XXXX))] to be logically equivalent to an Assert.
Also, in some tests you might not assert anything but instead expect a particular order of execution, verified via mocks and expectations.
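An assertion-free test of call order, sketched with Mockito (the mocked List simply stands in for a real collaborator):

import static org.mockito.Mockito.inOrder;
import static org.mockito.Mockito.mock;

import java.util.List;

import org.junit.Test;
import org.mockito.InOrder;

public class CallOrderTest {

    @SuppressWarnings("unchecked")
    @Test
    public void collaboratorIsCalledInTheExpectedOrder() {
        List<String> auditLog = mock(List.class);

        // The code under test would normally drive these calls.
        auditLog.add("open");
        auditLog.add("close");

        // No assert: the test fails only if the interactions are missing or out of order.
        InOrder order = inOrder(auditLog);
        order.verify(auditLog).add("open");
        order.verify(auditLog).add("close");
    }
}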
It is surely acceptable to write a unit test that doesn't have any assertions. You could do this for:
Testing a case that ends without an exception. If an exception is the expected outcome instead, it's nice to mark the test with the specific exception type, as in [ExpectedException(typeof(MyException))].
Testing that a feature is there. Even if there is no possibility that the test may generate an exception, you may want this test to fail if someone decides to remove that feature. If the test uses a method and the method is removed, the test will simply fail to build.
The purpose of a test is to check that "X" has an "expected something", and checking that the "expected something" is correct is exactly what assert, expect, and verify are for. This is why most frameworks implement those methods in one way or another.
