I'd like to know if it is "ok" to write a test without any "assert" in it, so that the test fails only when an exception / error has occurred.
E.g. a test that runs a simple SELECT query, to ensure that the database configuration is right. Then, whenever I change some DB configuration, I can re-run this test to check whether the configuration is still correct.
Thanks!
It is perfectly valid to make sure a unit test runs without encountering an exception.
As per Matt B's suggestion, be sure to document what the test is actually testing to be clear and precise.
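For illustration, a minimal sketch of such an assertion-free configuration check (the JDBC URL and credentials are placeholders; the test passes as long as no exception is thrown):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

import org.junit.Test;

public class DatabaseConfigurationTest {

    @Test
    public void simpleSelectRunsAgainstConfiguredDatabase() throws Exception {
        // Placeholder connection settings -- substitute your own configuration.
        try (Connection connection = DriverManager.getConnection(
                "jdbc:h2:mem:testdb", "sa", "");
             Statement statement = connection.createStatement()) {
            // No assertion: any SQLException thrown here fails the test.
            statement.executeQuery("SELECT 1");
        }
    }
}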
As @Kyle noted, your test case is valid. In fact, the opposite would also be valid: writing a test case to confirm that a certain call with specific parameter(s) results in an exception.
Sure, you can do that.
It is also perfectly fine to write a test without assertions where the expected outcome is an exception. I know TestNG will let you specify an exception that should be thrown, and the test will fail if the expected exception isn't thrown.
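In TestNG that looks roughly like this (DateParser is a made-up class, used only to illustrate the expectedExceptions attribute):

import org.testng.annotations.Test;

public class DateParserExceptionTest {

    // No assertion in the body: the test passes only if the expected
    // exception is actually thrown.
    @Test(expectedExceptions = IllegalArgumentException.class)
    public void rejectsMalformedInput() {
        new DateParser().parse("not-a-date");
    }
}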
Testing is a really subjective discussion. Some people will say no, you should always have AAA syntax. Personally, I've written tests that do things very similar to what you're talking about, so I'd say: sure, go ahead. If it helps you build a more stable app, then why not?
For example, in NUnit I consider [ExpectedException(typeof(XXXX))] to be logically equivalent to an Assert.
Also in some tests you might not assert anything but expect a particular order of execution via Mocks and Expects.
It is surely acceptable to write a unit test that doesn't have any assertions. You could do this for:
Testing a case that should complete without an exception. (Conversely, for a case that is expected to end with an exception, it's nice to decorate the test with the specific exception type, as in [ExpectedException(typeof(MyException))].)
Testing that a feature is there. Even if there is no possibility that the test may generate an exception, you may want the test to fail if someone decides to remove that feature. If the test uses a method and the method is removed, the test will simply fail to build (see the sketch below).
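A rough sketch of such a "the feature is still there" test (ReportGenerator and exportToCsv() are made-up names):

import org.junit.Test;

public class ReportFeatureTest {

    @Test
    public void csvExportStillExists() {
        // Asserts nothing, but if exportToCsv() is removed this class no
        // longer compiles, so the build (and therefore the test run) fails.
        new ReportGenerator().exportToCsv("ignored.csv");
    }
}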
The purpose of a test is to check whether "X" has an "expected something"; in order to check that the "expected something" is correct, you assert, expect, or verify. This is why most frameworks implement those methods in one way or another.
In my unit test I want to check whether one method or another was called. Thanks to Mockito I can easily verify how many times a given method is called, but verify has no verification mode like "OR". Any workarounds?
In my case I want to check whether .apply() or .commit() was called on a SharedPreferences.Editor, because either of these possibilities satisfies me and saves the data. Unfortunately, if I call verify(mEditor).apply() and someone later changes the implementation to .commit(), for example due to a requirement of instant saving, the test will fail, but it shouldn't, because from this point of view I only want to test whether the data is saved or not. It's a unit test and should be independent of changes like that, checking only the scope of what is tested inside.
The workaround you ask for would be to catch the underlying MockitoAssertionError (or just AssertionError):
try {
    verify(mEditor).apply();
} catch (MockitoAssertionError mae) {
    // apply() was not called. Let's verify commit() instead.
    verify(mEditor).commit();
}
Alternatively, if both apply and commit call some (internal) save method you could also try verifying that (assuming it is exposed -- mock-based testing can be at odds with information hiding). Or, if you have control over the code you're testing you could refactor it along these lines.
The better advice, though, would be to avoid the need for this altogether, as argued in the answer by @GhostCat.
I am not aware of a good way of doing that, and honestly, I think the real answer is: do not do that. Yes, the other answer shows a way to achieve what you ask for, but then:
You know what your production code should be doing. Meaning: instead of writing a single piece of verification code that allows "this or that", rather write two independent tests, one for "this", and one for "that".
In other words: you control what goes into your tests. So write one test that should result in apply(), and one that should result in commit(). And then verify() the one call that each test is expected to see!
Unit tests should be straightforward. When something fails, you quickly look at the unit test and you already know where to look in the production code to spot the root cause. Anything that adds complexity to your tests makes that harder. It is better to have two tests that follow a clear "when, then verify" path than to have one (or more) tests that go "when, then verify this OR verify that".
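A sketch of what that could look like (PreferenceStorage and its save(key, value, instant) method are hypothetical; only the verify() calls mirror the question):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import android.content.SharedPreferences;
import org.junit.Test;

public class PreferenceStorageTest {

    private final SharedPreferences.Editor mEditor = mock(SharedPreferences.Editor.class);
    private final PreferenceStorage storage = new PreferenceStorage(mEditor);

    @Test
    public void savesLazilyByDefault() {
        storage.save("key", "value", /* instant = */ false);
        verify(mEditor).apply();
    }

    @Test
    public void savesImmediatelyWhenInstantSaveIsRequired() {
        storage.save("key", "value", /* instant = */ true);
        verify(mEditor).commit();
    }
}

Each test exercises one path and verifies exactly the call it expects, so a change in saving strategy breaks only the test whose scenario actually changed.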
I'd like JUnit to yell "hooray" whenever an assert statement succeeds, or at the very least to display the number of successful assert statements that were encountered.
I'm using JUnit 4.
Any suggestions?
If you want to see some output for each successful assertion, another simple approach, which requires no external dependencies or changes to JUnit's source code, would be to define your own Assert class that delegates all methods to the standard JUnit Assert class while also logging successful assertions (failed assertions will be reported as usual by the JUnit class).
You then run a global search-and-replace on your test classes from "org.junit.Assert" => "com.myco.test.Assert", which should fix up all regular and static import statements.
You could also then easily migrate your approach to the quieter-is-better camp and change the wrapper class to just report the total number of passed assertions per test or per class, etc.
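A rough sketch of such a wrapper (only two methods shown; the rest would follow the same delegate-then-log pattern, and the package name matches the search-and-replace example above):

package com.myco.test;

public final class Assert {

    private Assert() {}

    public static void assertEquals(Object expected, Object actual) {
        // Delegate first: a failure throws AssertionError as usual, so the
        // logging line below is never reached for failed assertions.
        org.junit.Assert.assertEquals(expected, actual);
        System.out.println("PASSED: assertEquals(" + expected + ", " + actual + ")");
    }

    public static void assertTrue(String message, boolean condition) {
        org.junit.Assert.assertTrue(message, condition);
        System.out.println("PASSED: assertTrue - " + message);
    }
}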
Adding some info that would have been helpful to me when I wanted JUnit to be more verbose and stumbled on this question. Maybe it will help other testers in the future.
If you are running JUnit from Ant and want to see which tests are being run, you can add the following attributes to your junit task:
<junit showoutput="true" printsummary="on" enabletestlistenerevents="true" fork="#{fork}" forkmode="once" haltonfailure="no" timeout="1800000">
Note that showoutput, printsummary, and enabletestlistenerevents are what helped, not the other task attributes. If you set these, you'll get output like:
Running com.foo.bar.MyTest
junit.framework.TestListener: tests to run: 2
junit.framework.TestListener: startTest(myTestOne)
junit.framework.TestListener: endTest(myTestOne)
junit.framework.TestListener: startTest(myTestTwo)
junit.framework.TestListener: endTest(myTestTwo)
Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.495 sec
This was useful to me when my tests were timing out and I wasn't sure which tests were actually taking too long, and which tests got cancelled because they were unlucky enough to be running when the time was up.
You can use AOP (with Spring or AspectJ) to define pointcuts on all assert methods of the junit.framework.Assert class. With Spring you can implement your own class as after-returning advice (http://static.springframework.org/spring/docs/2.5.x/reference/aop.html#aop-advice-after-returning), which will only be called if the assert method passed (otherwise it throws a junit.framework.AssertionFailedError). In your own class you can implement a simple counter and print it at the end.
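A sketch of what such an aspect could look like in AspectJ annotation style (this needs AspectJ compile-time or load-time weaving rather than plain Spring proxies, since the assert methods are static):

import java.util.concurrent.atomic.AtomicInteger;

import org.aspectj.lang.annotation.AfterReturning;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class PassedAssertionCounter {

    private static final AtomicInteger passed = new AtomicInteger();

    // Runs only when an assert method returns normally, i.e. the assertion
    // passed; a failed assertion throws and skips this advice.
    @AfterReturning("call(public static void junit.framework.Assert.assert*(..))")
    public void countPassedAssertion() {
        passed.incrementAndGet();
    }

    public static int passedCount() {
        return passed.get();
    }
}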
Are you really interested in an assertion that succeeds? Normally, the only interesting assertions are ones that fail.
Being a fervent JUnit devotee myself, I try and make the output of the tests as quiet as possible because it improves the signal-to-noise ratio when something doesn't pass. The best test run is one where everything passes and there's not a peep from stdout.
You could always work on your unit test until it succeeds and run "grep Assert test.java | wc -l". :-)
Hard to do. All assert methods are static members of the Assert class, which means the RunNotifier (which counts the successful and failed tests) is not within reach.
If you don't shy away from an ugly hack: take the JUnit sources and patch them to store the current notifier in a static field of Assert while tests are running, so that the static methods can report successful asserts to this notifier.
I'm pretty sure you can create a custom TestRunner that does that. We ended up with something similar in our home-made unit-testing framework (a clone of NUnit).
Oh, wait - now that I'm reading your question again: if you really want output for each successful assertion, you'll have to dig into the plumbing more. The TestRunner only gets called once at each test case's start and end, so it counts passed and failed tests, not assertions.
This isn't much of a problem for me, since I tend towards one assertion per test, generally.
I don't think it's the goal of JUnit to count matched assertions or print more verbose information.
If tests are atomic, you'll get most of that information anyway, so I would review my tests.
You can also set up a log file in JUnit. It's possible, but it will decrease test execution performance...
This is not a straight answer to the question, and it is in fact a misuse of a JUnit feature, but if you need to debug some values that are used in the asserts, you can temporarily add something like:
Assume.assumeTrue(interestingData, false);
This won't fail the build, but will mark the test as IGNORED, and will force the values to be included in the test report output.
❗️ Make sure to remove the statements once you are done. Or, as an alternative, you can change the statement to Assume.assumeTrue(interestingData, true) in case you might want to debug it again in the future.
This question was asked a long time ago, but just to add:
Isn't it better to have the reports generated as HTML under the build folder and refresh the browser after every test?
Using Gradle (which offers support for JUnit out of the box via its plugins), I'm able to open the file at build/reports/tests/test/index.html in my project and check the results on a per-package, per-class, and per-method basis.
PS: You can install an extension in the browser for refreshing the page if it becomes annoying.
Hope this helps someone if the constraints apply.
Here, here, and here you can find more on how to generate reports for JUnit test results.
JUnit's Javadoc unfortunately says that only failed assertions are recorded (http://junit.sourceforge.net/javadoc_40/index.html), so it seems this is not possible out of the box.
Could you consider: 1) downloading the JUnit source, and 2) modifying the class org.junit.Assert to make whatever changes you're looking for?
I'm coming from a Perl background where I used Test::More to handle unit testing. Using that framework, I knew the order in which the tests took place and could rely on that, which I understand is not encouraged with the JUnit framework. I've seen several ways to get around this, but I want to understand the proper/intended way of doing things.
In my Perl unit testing I would build up tests, knowing that if test #3 passed, I could make some assumptions in further tests. I don't quite see how to structure that in the JUnit world so that I can make every test completely independent.
For example, suppose I have a class that parses a date from a string. Methods include:
parse a simple date (YYYY-MM-DD)
parse a simple date with alternate separator (YYYY_MM_DD or YYYY/MM/DD)
parse a date with a string for a month name (YYYY-MON-DD)
parse a date with a string month name in a different language
and so on
I usually write my code so that as many of the externally accessible methods as possible funnel into a few core methods, re-using as much code as possible (which is what most of us would do, I'm sure). So, let's say I have 18 different tests for the first method, 9 that are expected to pass and 9 that throw an exception. For the second method, I only have 3 tests: one for each of the separators that work ('_' and '/') and one with a separator that doesn't work ('*'), which is expected to fail. I can limit myself to testing the newly introduced code because I already know that the code properly handles the standard boundary conditions and common errors, since the first 18 tests already passed.
In the Perl world, if test #20 fails, I know that it's probably something to do with the specific separator, and is not a general date parsing error because all of those tests have already passed. In the JUnit world, where tests run in a random order, if test #20 fails, I don't know if it's because of a general date parsing issue or because of the separator. I'd have to go and see which other ones failed and then make some assumptions there. That's not too hard to do, of course, but maybe in a bigger, more complex class, it would be more difficult to do.
How do other people deal with building up a set of tests? Should I put each and every test in a separate class and use a test suite? That seems tedious. (And before someone suggests that I put the first 18 in one class and the remaining 3 in another, and use a test suite for just those groupings, let's pretend that all 18 of the early tests build on each other, too.)
And, again, I know there are ways around this (FixedMethodOrder in JUnit 4.11+ or JUnit-HierarchicalContextRunner), but I want to understand the paradigm as it's intended to be used.
In the JUnit world, where tests run in a random order, if test #20 fails, I don't know if it's because of a general date parsing issue or because of the separator. I'd have to go and see which other ones failed and then make some assumptions there.
Yes, that is correct. If something in your code is broken, then multiple tests may fail. That is a good thing. Use intent-revealing test method names and possibly the optional String message parameter in the JUnit assertions to explain what exactly failed the test.
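For instance, something along these lines (DateParser, its parse method, and the LocalDate return type are made up to match the example in the question):

import static org.junit.Assert.assertEquals;

import java.time.LocalDate;
import org.junit.Test;

public class DateParserTest {

    @Test
    public void parsesSimpleIsoDate() {
        assertEquals("basic YYYY-MM-DD parsing is broken",
                LocalDate.of(2014, 1, 31), new DateParser().parse("2014-01-31"));
    }

    @Test
    public void parsesDateWithSlashSeparator() {
        // Independent of the test above: if only this one fails, the
        // separator handling is the prime suspect, not general parsing.
        assertEquals("YYYY/MM/DD separator handling is broken",
                LocalDate.of(2014, 1, 31), new DateParser().parse("2014/01/31"));
    }
}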
How do other people deal with building up a set of tests? Should I put each and every test in a separate class and use a test suite?
The general convention is one test class per source class. Depending on what build tool you are using, you may or may not need to use test suites. If you are using Ant, you probably need to collect the tests into test suites, but if you are using Maven, the test plugins for maven will find all your test classes for you so you don't need suites.
I also want to point out that you should be coding to Java interfaces as much as possible. If you are testing a class C that depends on an implementation of interface I, then you should mock your I implementation in your C test class so that C is tested in isolation. Your mock I should follow what the interface is supposed to do. This also keeps the number of failing tests down: if there is a bug in your real I implementation, then only your I tests should fail; the C tests should still all pass (since you are testing C against a fake but working I implementation).
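A minimal sketch of that idea with Mockito (C, I, and the lookup/resolve methods are placeholder names carried over from the paragraph above, not a real API):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class CTest {

    @Test
    public void cUsesWhateverItsDependencyReturns() {
        I fakeDependency = mock(I.class);                 // fake but "working" I
        when(fakeDependency.lookup("key")).thenReturn("value");

        C underTest = new C(fakeDependency);

        // Only C's own logic is exercised; a bug in the real I implementation
        // cannot make this test fail.
        assertEquals("value", underTest.resolve("key"));
    }
}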
Don't worry about suites yet. You'll know when you need them. I've only had to use them a handful of times, and I'm not entirely sold on their usefulness...but I leave that decision up to you.
To the meat of your question - the conventional way with JUnit tests is to neither know nor depend on the order of execution of your tests; this ensures that your tests are not run-order dependent, and if they are, something is wrong with your tests* and validation.
The main core concept behind unit tests is that they test a unit of code - as simple as a single function. If you're attempting to test five different things at once, your test is far too large, and should be broken out. If the method you're testing is monolithic in nature, and difficult to test, it should be refactored and broken out into different slices of responsibility.
Tests that exercise a larger flow are better suited for integration-style tests, which tend to be written as unit tests, but aren't actually unit tests.
I've not run into a scenario in which, knowing that a certain test had failed, I could expect different behavior from the other tests. I've never thought such a thing was necessary to note, since the only thing I care about in a unit test is how that unit of code behaves given a certain input.
Keep your tests small and simple to understand; the test should only make one assertion about the result (or a general assertion of the state of your result).
*: That's not to say that it's completely broken, but those sorts of tests should be fixed sooner rather than later.
Folks, it is always said in TDD that
we should write JUnit tests even before we write the actual code.
Somehow I am not able to understand this in the right spirit. I hope what it means is that you just write empty methods with the right signatures, and your test case is expected to fail initially.
Say in the TDD approach I need to get the list of customers.
As per my understanding, I will write an empty method like the one below:
public List<CustomerData> getCustomers(int custId) {
    return null;
}
Now I will write a JUnit test case where I will check that the size is 10 (which is what I am actually expecting). Is this right?
Basically my question is: in TDD, how can we write a JUnit test case before writing the actual code?
I hope what it means is that you just write empty methods with the right signatures
Yes. And with most modern IDEs, if you call a method in your test that does not exist yet, they will create a stub for you.
Say in the TDD approach I need to get the list of customers. What's the right way to proceed?
Your example is not quite there. If you want to test for a 0-length array and the stub already returns one, the test passes immediately; the stub should first return null, so that the test obviously fails.
Then modify the method so that the test succeeds.
Then create a test method for customer add. Test fails. Fix it. Rinse. Repeat.
So, basically: with TDD, you start by writing tests that you KNOW will fail, and then fix your code so that they pass.
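For the customer example from the question, the "red" step might look like the sketch below (CustomerService is a made-up class that owns getCustomers()):

import static org.junit.Assert.assertEquals;

import java.util.List;
import org.junit.Test;

public class CustomerServiceTest {

    @Test
    public void returnsTenCustomersForKnownId() {
        List<CustomerData> customers = new CustomerService().getCustomers(42);

        // Fails (with a NullPointerException) while getCustomers() still
        // returns null; goes green once the method is properly implemented.
        assertEquals(10, customers.size());
    }
}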
Recommended read.
Often you'll write the test alongside the skeleton of the code. Initially you can write a non-functional implementation (e.g. throw an UnsupportedOperationException) and that will trigger a test failure. Then you'd flesh out the implementation until finally your test passes.
You need to be pragmatic about this. Obviously you can't compile your test until at least your unit under test compiles, and so you have to do a minimal amount of implementation work alongside your test.
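For example, a deliberately non-functional skeleton for the method from the question might look like this:

public List<CustomerData> getCustomers(int custId) {
    // Keeps the code (and therefore the test) compiling, while any call
    // still fails until the real logic is written.
    throw new UnsupportedOperationException("getCustomers not implemented yet");
}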
Check out this recent Dr. Dobb's editorial, which discusses exactly this point and the role of pragmatism around it, especially among the mavens of this practice (Kent Beck et al.):
A key principle of TDD is that you write no code without first writing a failing unit test. But in fact, if you talk to the principal advocates of TDD (such as Kent Beck, who popularized the technique, and Bob Martin, who has taught it to thousands of developers), you find that both of them write some code without writing tests first. They do not — I should emphasize this — view these moments as lapses of faith, but rather as the necessary pragmatism of the intelligent developer.
That's partly right.
Using an IDE (Eclipse, IntelliJ) you can create a test. In that test, invoke a method that does not exist yet and, using a refactoring tool, create the method with the proper signature.
That's a trick that makes working with TDD easier and more fun.
Regarding "Now I will write a JUnit test case where I will check the size... Is this right?": you should write a test that fails, and then provide the proper implementation.
I think: go write the test first, and think about the signature of the function while writing the test.
It's almost the same as writing the signature and then writing the test, but inventing the signature of the function while you write the test is helpful, since you have all the information about the function's responsibility in front of you and can come up with the proper signature.