I know that one way to do it would be:
@Test
public void foo() {
try {
// execute code that you expect not to throw Exceptions.
} catch(Exception e) {
fail("Should not have thrown any exception");
}
}
Is there any cleaner way of doing this? (Probably using JUnit's @Rule?)
You're approaching this the wrong way. Just test your functionality: if an exception is thrown the test will automatically fail. If no exception is thrown, your tests will all turn up green.
I have noticed this question garners interest from time to time so I'll expand a little.
Background to unit testing
When you're unit testing it's important to define to yourself what you consider a unit of work. Basically: an extraction of your codebase that may or may not include multiple methods or classes that represents a single piece of functionality.
Or, as defined in The Art of Unit Testing, 2nd Edition by Roy Osherove, page 11:
A unit test is an automated piece of code that invokes the unit of work being tested, and then checks some assumptions about a single end result of that unit. A unit test is almost always written using a unit testing framework. It can be written easily and runs quickly. It's trustworthy, readable, and maintainable. It's consistent in its results as long as production code hasn't changed.
What is important to realize is that one unit of work usually isn't just one method: at the most basic level it is one method, and after that it is encapsulated by other units of work.
Ideally you should have a test method for each separate unit of work so you can always immediately see where things are going wrong. In this example there is a basic method called getUserById() which will return a user, and there is a total of three units of work.
The first unit of work should test whether or not a valid user is being returned in the case of valid and invalid input.
Any exceptions that are being thrown by the datasource have to be handled here: if no user is present there should be a test that demonstrates that an exception is thrown when the user can't be found. A sample of this could be the IllegalArgumentException which is caught with the @Test(expected = IllegalArgumentException.class) annotation.
Once you have handled all your usecases for this basic unit of work, you move up a level. Here you do exactly the same, but you only handle the exceptions that come from the level right below the current one. This keeps your testing code well structured and allows you to quickly run through the architecture to find where things go wrong, instead of having to hop all over the place.
Handling a test's valid and faulty input
At this point it should be clear how we're going to handle these exceptions. There are 2 types of input: valid input and faulty input (the input is valid in the strict sense, but it's not correct).
When you work with valid input you're setting the implicit expectation that whatever test you write will work.
Such a test method could look like this: existingUserById_ShouldReturn_UserObject. If this method fails (e.g. an exception is thrown) then you know something went wrong and you can start digging.
By adding another test (nonExistingUserById_ShouldThrow_IllegalArgumentException) that uses the faulty input and expects an exception you can see whether your method does what it is supposed to do with wrong input.
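A minimal sketch of what those two tests could look like, assuming a hypothetical UserService whose getUserById() throws an IllegalArgumentException for unknown ids (UserService, User and InMemoryUserDataSource are illustrative names, not from the question):
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;

import org.junit.Test;

public class UserServiceTest {

    // hypothetical service under test; getUserById(long) is assumed to
    // throw IllegalArgumentException when no user exists for the given id
    private final UserService service = new UserService(new InMemoryUserDataSource());

    @Test
    public void existingUserById_ShouldReturn_UserObject() {
        User user = service.getUserById(1L);

        assertNotNull(user);
        assertEquals(1L, user.getId());
    }

    @Test(expected = IllegalArgumentException.class)
    public void nonExistingUserById_ShouldThrow_IllegalArgumentException() {
        service.getUserById(-1L); // no such user: this unit of work must throw
    }
}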
TL;DR
You were trying to do two things in your test: check for valid and faulty input. By splitting this into two methods that each do one thing, you will have much clearer tests and a much better overview of where things go wrong.
By keeping the layered unit of works in mind you can also reduce the amount of tests you need for a layer that is higher in the hierarchy because you don't have to account for every thing that might have gone wrong in the lower layers: the layers below the current one are a virtual guarantee that your dependencies work and if something goes wrong, it's in your current layer (assuming the lower layers don't throw any errors themselves).
JUnit 5 (Jupiter) provides three functions to check exception absence/presence:
● assertAll(): asserts that all supplied executables do not throw exceptions.
● assertDoesNotThrow(): asserts that execution of the supplied executable/supplier does not throw any kind of exception. This function is available since JUnit 5.2.0 (29 April 2018).
● assertThrows(): asserts that execution of the supplied executable throws an exception of the expectedType and returns the exception.
Example
package test.mycompany.myapp.mymodule;
import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;
class MyClassTest {
@Test
void when_string_has_been_constructed_then_myFunction_does_not_throw() {
String myString = "this string has been constructed";
assertAll(() -> MyClass.myFunction(myString));
}
@Test
void when_string_has_been_constructed_then_myFunction_does_not_throw__junit_v520() {
String myString = "this string has been constructed";
assertDoesNotThrow(() -> MyClass.myFunction(myString));
}
@Test
void when_string_is_null_then_myFunction_throws_IllegalArgumentException() {
String myString = null;
assertThrows(
IllegalArgumentException.class,
() -> MyClass.myFunction(myString));
}
}
I stumbled upon this because of SonarQube's rule "squid:S2699": "Add at least one assertion to this test case."
I had a simple test whose only goal was to go through without throwing exceptions.
Consider this simple code:
public class Printer {
public static void printLine(final String line) {
System.out.println(line);
}
}
What kind of assertion can be added to test this method?
Sure, you can wrap it in a try-catch, but that is just code bloat.
The solution comes from JUnit itself.
In case no exception is thrown and you want to explicitly illustrate this behaviour, simply add expected as in the following example:
@Test(expected = Test.None.class /* no exception expected */)
public void test_printLine() {
Printer.printLine("line");
}
Test.None.class is the default for the expected value.
If you import org.junit.Test.None, you can then write:
@Test(expected = None.class)
which you might find more readable.
For JUnit versions before 5:
With AssertJ fluent assertions 3.7.0:
Assertions.assertThatCode(() -> toTest.method())
.doesNotThrowAnyException();
Update:
JUnit 5 introduced the assertDoesNotThrow() assertion, so I'd prefer to use it instead of adding an additional dependency to your project. See this answer for details.
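For reference, a minimal JUnit 5 sketch of that assertion (Double.parseDouble stands in for whatever method is under test):
import static org.junit.jupiter.api.Assertions.assertDoesNotThrow;

import org.junit.jupiter.api.Test;

class NoExceptionTest {

    @Test
    void parsingAValidDoubleDoesNotThrow() {
        // the test fails with a descriptive message if the lambda throws
        assertDoesNotThrow(() -> Double.parseDouble("1.23"));
    }
}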
Java 8 makes this a lot easier, and Kotlin/Scala doubly so.
We can write a little utility class
class MyAssertions {
public static void assertDoesNotThrow(FailingRunnable action) {
try {
action.run();
}
catch (Exception ex) {
throw new Error("expected action not to throw, but it did!", ex);
}
}
}
@FunctionalInterface interface FailingRunnable { void run() throws Exception; }
and then your code becomes simply:
@Test
public void foo() {
MyAssertions.assertDoesNotThrow(() -> {
//execute code that you expect not to throw Exceptions.
});
}
If you don't have access to Java 8, I would use a painfully old Java facility: arbitrary code blocks and a simple comment
//setup
Component component = new Component();
//act
configure(component);
//assert
/*assert does not throw*/{
component.doSomething();
}
And finally, with Kotlin, a language I've recently fallen in love with:
fun (() -> Any?).shouldNotThrow()
= try { invoke() } catch (ex : Exception){ throw Error("expected not to throw!", ex) }
@Test fun `when foo happens should not throw`(){
//...
{ /*code that shouldn't throw*/ }.shouldNotThrow()
}
Though there is a lot of room to fiddle with exactly how you want to express this, I was always a fan of fluent assertions.
Regarding
You're approaching this the wrong way. Just test your functionality: if an exception is thrown the test will automatically fail. If no exception is thrown, your tests will all turn up green.
This is correct in principle but incorrect in conclusion.
Java allows exceptions for flow of control. This is done by the JRE runtime itself in APIs like Double.parseDouble via a NumberFormatException and Paths.get via an InvalidPathException.
Given you've written a component that validates number strings for Double.parseDouble (maybe using a regex, maybe a hand-written parser, or perhaps something that embeds other domain rules restricting the range of a double to something specific), how best to test this component? I think an obvious test would be to assert that, when the resulting string is parsed, no exception is thrown. I would write that test using either the above assertDoesNotThrow or the /*comment*/{code} block. Something like
@Test public void given_validator_accepts_string_result_should_be_interpretable_by_doubleParseDouble(){
//setup
String input = "12.34E+26"; //a string double with domain significance
//act
boolean isValid = component.validate(input);
//assert -- using the library 'assertJ', my personal favourite
assertThat(isValid).describedAs(input + " was considered valid by component").isTrue();
assertDoesNotThrow(() -> Double.parseDouble(input));
}
I would also encourage you to parameterize this test on input using Theories or Parameterized so that you can more easily re-use this test for other inputs (a sketch follows below). Alternatively, if you want to go exotic, you could go for a test-generation tool. TestNG has better support for parameterized tests.
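As a rough illustration of the Parameterized approach (NumberStringValidator is a hypothetical stand-in for the validating component described above):
import static org.assertj.core.api.Assertions.assertThat;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class ValidatorAcceptsParseableStringsTest {

    @Parameters(name = "{0}")
    public static Collection<Object[]> acceptedInputs() {
        return Arrays.asList(new Object[][] {
            { "12.34E+26" },
            { "0.5" },
            { "-42" }
        });
    }

    private final String input;

    public ValidatorAcceptsParseableStringsTest(String input) {
        this.input = input;
    }

    @Test
    public void accepted_string_is_parseable_by_Double_parseDouble() {
        NumberStringValidator component = new NumberStringValidator();

        assertThat(component.validate(input))
            .describedAs(input + " was considered valid by component")
            .isTrue();

        Double.parseDouble(input); // the test fails if this throws
    }
}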
What I find particularly disagreeable is the recommendation of using @Test(expected = IllegalArgumentException.class): this exception is dangerously broad. If your code changes such that the component under test's constructor contains if(constructorArgument <= 0) throw new IllegalArgumentException(), and your test was supplying 0 for that argument because it was convenient (and this is very common, because generating good test data is a surprisingly hard problem), then your test will still be green even though it tests nothing. Such a test is worse than useless.
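To make the trap concrete, here is a hypothetical sketch of such a falsely green test (NumberStringValidator and its constructor argument are made up for the illustration):
import org.junit.Test;

public class BroadExpectedExceptionTrapTest {

    @Test(expected = IllegalArgumentException.class)
    public void invalid_string_should_be_rejected_with_IllegalArgumentException() {
        // after the refactoring described above, the constructor itself throws
        // IllegalArgumentException because 0 <= 0 ...
        NumberStringValidator component = new NumberStringValidator(0);

        // ... so this line is never reached, yet the test still passes, because
        // the annotation accepts the exception from anywhere in the method
        component.validate("not a number");
    }
}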
If you are unlucky enough to catch all errors in your code, you can stupidly do:
class DumpTest {
Exception ex;
@Test
public void testWhatEver() {
try {
thisShouldThrowError();
} catch (Exception e) {
ex = e;
}
assertEquals(null,ex);
}
}
Although this post is six years old now, a lot has changed in the JUnit world. With JUnit 5, you can now use
org.junit.jupiter.api.Assertions.assertDoesNotThrow()
Ex:
public void thisMethodDoesNotThrowException(){
System.out.println("Hello There");
}
@Test
public void test_thisMethodDoesNotThrowException(){
org.junit.jupiter.api.Assertions.assertDoesNotThrow(
()-> thisMethodDoesNotThrowException()
);
}
Hope this helps people who are using a newer version of JUnit 5.
JUnit 5 adds the assertAll() method for this exact purpose.
assertAll( () -> foo() )
source: JUnit 5 API
To test that a void method like
void testMeWell() throws SomeException {..}
does not throw an exception:
JUnit 5:
assertDoesNotThrow(() -> {
testMeWell();
});
If you want to test whether your test target consumes the exception, just leave the test as follows (collaborator mocked using jMock2):
@Test
public void consumesAndLogsExceptions() throws Exception {
context.checking(new Expectations() {
{
oneOf(collaborator).doSth();
will(throwException(new NullPointerException()));
}
});
target.doSth();
}
The test would pass if your target does consume the exception thrown, otherwise the test would fail.
If you want to test your exception consumption logic, things get more complex. I suggest delegating the consumption to a collaborator which could be mocked. Therefore the test could be:
@Test
public void consumesAndLogsExceptions() throws Exception {
Exception e = new NullPointerException();
context.checking(new Expectations() {
{
allowing(collaborator).doSth();
will(throwException(e));
oneOf(consumer).consume(e);
}
});
target.doSth();
}
But sometimes it's over-designed if you just want to log the exception. In that case, these articles (http://java.dzone.com/articles/monitoring-declarative-transac, http://blog.novoj.net/2008/09/20/testing-aspect-pointcuts-is-there-an-easy-way/) may help if you insist on TDD here.
Use assertNull(...)
@Test
public void foo() {
try {
//execute code that you expect not to throw Exceptions.
} catch (Exception e){
assertNull(e);
}
}
This may not be the best way, but it definitely makes sure that no exception is thrown from the code block being tested.
import org.assertj.core.api.Assertions;
import org.junit.Test;
public class AssertionExample {
@Test
public void testNoException(){
assertNoException();
}
private void assertException(){
Assertions.assertThatThrownBy(this::doNotThrowException).isInstanceOf(Exception.class);
}
private void assertNoException(){
Assertions.assertThatThrownBy(() -> assertException()).isInstanceOf(AssertionError.class);
}
private void doNotThrowException(){
//This method will never throw exception
}
}
I faced the same situation: I needed to check that an exception is thrown when it should be, and only when it should be.
Ended up using the exception handler to my benefit with the following code:
try {
functionThatMightThrowException();
} catch (Exception e) {
Assert.fail("should not throw exception");
}
RestOfAssertions();
The main benefit for me is that it is quite straightforward, and checking the other direction of the "if and only if" is just as easy in this same structure (see the sketch below).
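For the opposite direction, a sketch in the same structure: the test fails unless the exception is thrown (functionThatMightThrowException and RestOfAssertions are the same hypothetical helpers as above).
try {
    functionThatMightThrowException();
    Assert.fail("should have thrown an exception");
} catch (Exception e) {
    // expected: swallow it and carry on
}
RestOfAssertions();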
I ended up doing it like this:
@Test
fun `Should not throw`() {
whenever(authService.isAdmin()).thenReturn(true)
assertDoesNotThrow {
service.throwIfNotAllowed("client")
}
}
You can express the expectation that no exception is thrown by creating a rule.
@Rule
public ExpectedException expectedException = ExpectedException.none();
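A minimal sketch of a test class using that rule: with ExpectedException.none() (the default), the rule expects no exception, so any exception escaping the test body simply fails the test.
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExpectedException;

public class NoExceptionExpectedTest {

    @Rule
    public ExpectedException expectedException = ExpectedException.none();

    @Test
    public void parsingAValidDoubleDoesNotThrow() {
        Double.parseDouble("1.23"); // the test fails if this throws
    }
}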
You can do it by using a @Rule and then calling the method reportMissingExceptionWithMessage as shown below:
This is Scala code.
I stumbled over this issue since I created some generic methods like
@Test
void testSomething() {
checkGeneric(anComplexObect)
}
In https://newbedev.com/sonarqube-issue-add-at-least-one-assertion-to-this-test-case-for-unit-test-with-assertions some annotation-based workarounds are proposed.
The solution is much simpler, though: it's enough to rename the "checkGeneric" method to "assertGeneric".
@Test
void testSomething() {
assertGeneric(anComplexObect)
}
You can create any kind of your own assertions based on the assertions from JUnit, because these are specifically designed for creating user-defined asserts intended to work exactly like the JUnit ones:
static void assertDoesNotThrow(Executable executable) {
assertDoesNotThrow(executable, "must not throw");
}
static void assertDoesNotThrow(Executable executable, String message) {
try {
executable.execute();
} catch (Throwable err) {
fail(message);
}
}
Now, to test the so-called methodMustNotThrow scenario and log all failures in JUnit style:
//test and log with default and custom messages
//the following will succeed
assertDoesNotThrow(()->methodMustNotThrow(1));
assertDoesNotThrow(()->methodMustNotThrow(1), "custom facepalm");
//the following will fail
assertDoesNotThrow(()->methodMustNotThrow(2));
assertDoesNotThrow(()-> {throw new Exception("Hello world");}, "message");
//See implementation of methodMustNotThrow below
Generally speaking, it is possible to instantly fail a test in any scenario, at any place where it makes sense, by calling fail(someMessage), which is designed exactly for this purpose. For instance, use it in a try/catch block to fail if anything is thrown in the test case:
try{methodMustNotThrow(1);}catch(Throwable e){fail("must not throw");}
try{methodMustNotThrow(1);}catch(Throwable e){Assertions.fail("must not throw");}
This is a sample of the method we test, supposing we have a method that must not fail under specific circumstances but can fail:
void methodMustNotThrow(int x) throws Exception {
if (x == 1) return;
throw new Exception();
}
The above method is a simple sample, but this approach works for complex situations where the failure is not so obvious.
These are the imports:
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.function.Executable;
import static org.junit.jupiter.api.Assertions.*;
AssertJ can handle this scenario:
assertThatNoException().isThrownBy(() -> System.out.println("OK"));
Check the docs for more information: https://assertj.github.io/doc/#assertj-core-exception-assertions-no-exception
The following fails the test for all exceptions, checked or unchecked:
@Test
public void testMyCode() {
try {
runMyTestCode();
} catch (Throwable t) {
throw new Error("fail!");
}
}
I am trying to write a test method in TestNG such that, after it fails, the entire test suite stops running.
@Test
public void stopTestingIfThisFailed() throws Exception
{
someTestStesp();
if (softAsserter.isOneFailed()) {
asserter.fail("stopTestingIfThisFailed test Failed");
throw new Exception("Test can't continue, fail here!");
}
}
The exception is being thrown, but other test methods are running.
How to solve this?
You can use the dependsOnMethods or dependsOnGroups annotation parameter in your other test methods:
@Test(dependsOnMethods = {"stopTestingIfThisFailed"})
public void testAnotherTestMethod() {
}
JavaDoc of the dependsOnMethods parameter:
The list of methods this method depends on. There is no guarantee on the order on which the methods depended upon will be run, but you are guaranteed that all these methods will be run before the test method that contains this annotation is run. Furthermore, if any of these methods was not a SUCCESS, this test method will not be run and will be flagged as a SKIP. If some of these methods have been overloaded, all the overloaded versions will be run.
See https://testng.org/doc/documentation-main.html#dependent-methods
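For the dependsOnGroups variant mentioned above, a rough sketch (the group name is illustrative): put the show-stopper test in a group and let the other tests depend on that group, so they are flagged as SKIP when it fails.
import org.testng.annotations.Test;

public class ShowStopperGroupTest {

    @Test(groups = { "showstopper" })
    public void stopTestingIfThisFailed() throws Exception {
        // ... the critical checks from the question go here ...
    }

    @Test(dependsOnGroups = { "showstopper" })
    public void testAnotherTestMethod() {
        // skipped automatically if any test in the "showstopper" group fails
    }
}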
It depends on what you expect (there is no direct support for this in TestNG). You can create a ShowStopperException which is thrown in a @Test and then, in your ITestListener implementation (see the docs), call System.exit(1) (or whatever number) when you find this exception in the result, but there will be no report and in general it's not good practice. The second option is to have some base class which is the parent of all test classes and some context variable which handles the ShowStopperException in a @BeforeMethod in the parent class and throws SkipException (a rough sketch follows the workflow below), so the workflow can be like:
test passed
test passed
showstopper exception in some test
test skipped
test skipped
test skipped
...
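A rough sketch of that base-class option, with illustrative names: a static flag records the show stopper, and every test class extends BaseTest so that later tests are skipped.
import org.testng.SkipException;
import org.testng.annotations.BeforeMethod;

public abstract class BaseTest {

    // set to true by whichever test detects the show-stopper condition
    protected static volatile boolean showStopperHit = false;

    @BeforeMethod(alwaysRun = true)
    public void skipIfShowStopperHit() {
        if (showStopperHit) {
            throw new SkipException("Skipping: a show-stopper failure occurred earlier");
        }
    }
}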
I solved the problem like this: after a test that mustn't fail fails, I write data to a temporary text file.
Later, in the next test, I added code in the @BeforeClass that checks the data in the aforementioned text file. If a show stopper was found, I kill the current process.
If a test that "can't" fail actually fails:
public static void saveShowStopper() {
try {
General.createFile("ShowStopper","tempShowStopper.txt");
} catch (ParseException e) {
e.printStackTrace();
}
}
The @BeforeClass validating code:
@BeforeClass(alwaysRun = true)
public void beforeClass(ITestContext testContext, @Optional String step, @Optional String suiteLoopData,
@Optional String group) throws Exception
{
boolean wasShowStopperFound = APIUtils.loadShowStopper();
if (wasShowStopperFound){
Thread.currentThread().interrupt();
return;
}
}
This works if you throw a specific exception, SkipException, from the @BeforeSuite setup method.
See (possible duplicate):
TestNG - How to force end the entire test suite from the BeforeSuite annotation if a condition is met
If you want to do it from an arbitrary test, it doesn't appear there is a framework mechanism. But you could always flip a flag, and check that flag in the @BeforeTest setup method. Before you jump to that, maybe have a think about whether you could check once before the whole suite runs, and just abort there (i.e. @BeforeSuite).
I'm seeing SonarQube raise issues against several of my unit tests, prompting the following:
Add at least one assertion to this test case.
Each test case resembles this format (where a number of assertions are delegated to a method with common assertions, to avoid duplication):
@Test
public void companyNameOneTooLong() throws Exception {
AddressFormBean formBean = getValidBean();
formBean.setCompanyNameOne("123456789012345678901234567890123456");
assertViolation(validator.validate(formBean), "companyNameOne", "length must be between 0 and 35");
}
private void assertViolation(Set<ConstraintViolation<AddressFormBean>> violations, String fieldname, String message) {
assertThat(violations, hasSize(1));
assertEquals(fieldname, violations.iterator().next().getPropertyPath().iterator().next().getName());
assertEquals(message, violations.iterator().next().getMessage());
}
Now, obviously I could just pull the three assertions out of the private method and put them in the test method - but I'm performing the same checks (on different fields) multiple times.
So, I thought I'd try to emulate the behaviour of the assertion methods, by (re) throwing an AssertionError:
private void assertViolation(Set<ConstraintViolation<AddressFormBean>> violations, String fieldname, String message) throws AssertionError {
try {
assertThat(violations, hasSize(1));
assertEquals(fieldname, violations.iterator().next().getPropertyPath().iterator().next().getName());
assertEquals(message, violations.iterator().next().getMessage());
} catch (AssertionError e) {
throw e;
}
}
Unfortunately, this approach does not work either.
What's special about the JUnit assert methods / what is SonarQube looking for specifically to check that an assertion has been made for each test?
Alternatively - are there other approaches to achieve the same end result (avoiding duplicating the shared assertion code over and over)?
The rule S2699 (Tests should include assertions) from the SonarQube Java Analyzer does not perform cross-procedural analysis and only explores the body of methods identified as test methods (usually annotated with @Test).
Consequently, if the only assertions called when executing the test method are performed by a dedicated method (to avoid duplication), the rule will raise an issue. This is a known limitation of the rule, and it will be dealt with only when cross-procedural analysis can be performed efficiently.
Regarding the issues raised by SonarQube in such cases, you can safely mark them as Won't Fix.
Regarding the detected assertions, the rule considers as assertions the usual assert/fail/verify/expect methods from the following (unit test) frameworks:
JUnit
Fest (1.x & 2.x)
AssertJ
Hamcrest
Mockito
Spring
EasyMock
If you don't expect any exception to be thrown from your test, this can be a workaround:
@Test(expected = Test.None.class /* no exception expected */)
Alternatively, you can suppress the warning for the test method/test class:
@SuppressWarnings("squid:S2699")
One thing I have done in the past is to have the helper method return true, and assert on that:
@Test
public void testSomeThings() {
Thing expected = // . . .
Thing actual = service.methodReturningThing(42);
assertTrue(assertViolation(expected, actual));
}
private boolean assertViolation(Thing expected, Thing actual) {
assertEquals(expected.getName(), actual.getName());
assertEquals(expected.getQuest(), actual.getQuest());
assertEquals(expected.getFavoriteColor(), actual.getFavoriteColor());
return true;
}
I hate this, but I hate duplicated code even more.
The other thing we've done at times, is to simply mark any such objections from SonarQube as Won't Fix, but I hate that, too.
Sometimes you don't need any code or assertion at all, for example a test that only checks that the Spring Boot context loads successfully. In this case, to prevent the Sonar issue when you don't expect any exception to be thrown from your test, you can use this piece of code:
@Test
void contextLoads() {
Assertions.assertDoesNotThrow(this::doNotThrowException);
}
private void doNotThrowException(){
//This method will never throw exception
}
I have several Spring beans/components implementing AutoCloseable, and I expect the Spring Container to close them when the application context is destroyed.
Anyway, my code coverage tool is complaining because the close() methods are "uncovered by tests" from its point of view.
What should I do:
Introduce some trivial close() tests?
Just let it go and accept they will lower the coverage %
Something else?
You're not testing that the app closes the bean, you're testing that the bean closes properly when closed. If the implementation is non-trivial, then you should write a test for that behaviour. If all your method does is call close on a single field then don't bother testing it. However, if your close method calls close on multiple fields or does something a bit more complicated then you should test it.
For instance, given the following Closer class that must close all its Readers when it is closed...
public class Closer implements AutoCloseable {
private Reader[] readers;
public Closer(Reader... readers) {
this.readers = readers;
}
@Override
public void close() {
try {
for (Reader reader : readers) {
reader.close();
}
} catch (IOException ex) {
// ignore
}
}
}
You may wish to test as such:
public class CloserTest {
@Test
public void allReadersClosedWhenOneReaderThrowsException() throws IOException {
// given
Reader badReader = mock(Reader.class);
Reader secondReader = mock(Reader.class);
doThrow(new IOException()).when(badReader).close();
Closer closer = new Closer(badReader, secondReader);
// when
closer.close();
// then
verify(badReader).close();
verify(secondReader).close(); // fails as loop stops on first exception
}
}
Excessively high code coverage can be a bad thing if it means that your unit tests contain large amounts of trivial tests, especially if those tests are fragile. They will increase the amount of effort required to maintain the unit tests, without actually adding anything.
Example code:
public class Count {
static int count;
public static int add() {
return ++count;
}
}
I want test1 and test2 to run completely independently so that they both pass. How can I achieve that? My IDE is IntelliJ IDEA.
public class CountTest {
@Test
public void test1() throws Exception {
Count.add();
assertEquals(1, Count.count);//pass.Now count=1
}
@Test
public void test2() throws Exception {
Count.add();
assertEquals(1, Count.count);//error, now the count=2
}
}
Assume test1 runs before test2.
This is just simplified code. In fact the real code is more complex, so I can't just set count = 0 in an @After method.
There is no automated way of resetting all the static variables in a class. This is one reason why you should refactor your code to stop using statics.
Your options are:
Refactor your code
Use the @Before annotation. This can be a problem if you've got lots of variables. Whilst it's boring code to write, if you forget to reset one of the variables, one of your tests will fail, so at least you'll get the chance to fix it.
Use reflection to dynamically find all the members of your class and reset them (a rough sketch follows below).
Reload the class via the class loader.
Refactor your class. (I know I've mentioned it before, but it's so important I thought it was worth mentioning again.)
3 and 4 are a lot of work for not much gain. Any solution apart from refactoring will still give you problems if you start trying to run your tests in parallel.
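For completeness, a rough sketch of option 3 (the class name is illustrative and only a few field types are handled): reset every static, non-final field of a class to its default value via reflection, e.g. from a @Before method.
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

public final class StaticFieldResetter {

    private StaticFieldResetter() {
    }

    public static void resetStatics(Class<?> clazz) throws IllegalAccessException {
        for (Field field : clazz.getDeclaredFields()) {
            int mods = field.getModifiers();
            if (Modifier.isStatic(mods) && !Modifier.isFinal(mods)) {
                field.setAccessible(true);
                if (field.getType() == int.class) {
                    field.setInt(null, 0);           // e.g. Count.count
                } else if (field.getType() == boolean.class) {
                    field.setBoolean(null, false);
                } else if (!field.getType().isPrimitive()) {
                    field.set(null, null);           // reference types
                }
                // remaining primitive types omitted for brevity
            }
        }
    }
}
With that in place, a @Before method could call StaticFieldResetter.resetStatics(Count.class) before each test.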
Use the @Before annotation to re-initialize your variable before each test:
@Before
public void resetCount(){
Count.count = 0;
}