Can you tell me if there's a way to disable and re-enable junit.framework.Assert calls inline so that they are ignored?
Code example just to show what i mean:
junitOff(); // first method I need help with
assertTrue(false); //would normally cause an AssertionError
junitOn(); // second method I need help with
System.out.println("Broke through asserTrue without Error")
I know it would be possible to use try/catch, but that's not what I'm looking for because then I cannot go on with execution on the line after the assert...
I don't see any way to do this. Looking at the junit code you see:
static public void assertTrue(String message, boolean condition) {
if (!condition)
fail(message);
}
And fail() is:
static public void fail(String message) {
throw new AssertionError(message == null ? "" : message);
}
I see no ways to turn on a switch or otherwise disable it from throwing an AssertionError.
For the record, if you are talking about the Java assert keyword, then assertions can be turned on and off, but only for the whole run: they are disabled by default and are enabled at JVM launch with the -ea switch (and disabled again with -da), not toggled inline.
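If you really need an inline toggle, one workaround (not a JUnit feature; just a sketch that assumes you control the call sites and are willing to call your own wrapper instead of Assert directly) would be:
// Hypothetical wrapper: junitOff()/junitOn() are not JUnit APIs.
public final class ToggleableAsserts {
    private static boolean enabled = true;

    private ToggleableAsserts() {
    }

    public static void junitOff() {
        enabled = false;
    }

    public static void junitOn() {
        enabled = true;
    }

    public static void assertTrue(boolean condition) {
        if (enabled) {
            junit.framework.Assert.assertTrue(condition);
        }
    }
}
With a static import of this class, the snippet in the question would work as written, as long as every assertion goes through the wrapper rather than through Assert directly.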
Related
I would like to debug some code in order to study some behaviour (and introduce some forced errors in the execution). I would like to either throw exceptions or return early from functions before they complete. But using a bare throw or return (as pointed out by @user15244370 in a comment below) makes the code that follows unreachable, and unreachable code is a compilation error.
Currently I am using this snippet to avoid the detection of unreachable code:
if (Math.random() < 1) {
throw new RuntimeException("This is an experiment.");
}
Is there a more compact form of such forced control flow interruption?
I'd say:
x();
// or if you want to specify your own exception:
y(() -> new RuntimeException("..."));
I omitted the boilerplate:
import static foo.Bar.*;
// and:
import java.util.function.Supplier;

public class Bar {
    public static void x() {
        if (true) { // this will just give a dead code warning, no error
            // alternative: 1 == 1
            throw new RuntimeException();
        }
    }

    public static void y(Supplier<RuntimeException> s) {
        if (true) { // same trick as in x()
            throw s.get();
        }
    }
}
To avoid even the warning you can use a more complex condition, like "".equals("").
If you are using Eclipse, then in debug mode you could:
1. Add a conditional breakpoint and add the required code there.
2. Open the Debug Shell view and write the required code there; once the debugger reaches your point of interest, select the code in the Debug Shell, right-click, and choose Force Return.
I would suggest the first approach, because in the second approach you need to select the piece of code and do a Force Return each time the breakpoint is reached, whereas a conditional breakpoint is mostly a one-time setup.
You could add the random exception code as the condition of the breakpoint:
if (Math.random() < 1) {
throw new RuntimeException("This is an experiment.");
}
return false;
The conditional breakpoint feature is available in most IDEs.
Now this is really quite difficult for me to explain so please bear with me.
I've been wondering lately about the best way to "unwind" every chained method back to a main method when certain circumstances are met. For example, say I make a call to a method from Main and from that method I call another one and so on. At some point I may want to cancel all further operations of every method that is chained and simply return to the Main method. What is the best way to do this?
I'll give a scenario:
In the following code there are three methods; when Method1 calls Method2 with a null value, it should unwind all the way back to Main without executing anything further in Method1 (e.g. the "Lots more code after here" part).
public static void main(String[] args)
{
try
{
Method1();
}
catch( ReturnToMainException e )
{
// Handle return \\
}
}
public static void Method1() throws ReturnToMainException
{
String someString = null;
Method2( someString );
// Lots more code after here
}
public static boolean Method2( String someString ) throws ReturnToMainException
{
if( someString == null )
throw new ReturnToMainException();
else if( someString.equals( "Correct" ))
return true;
else
return false;
}
In this example I use a throw, which I've read should only be used in "exceptional circumstances". I often run into this issue and find myself simply using if/else statements to solve it, but when dealing with methods that can only return true/false I find I don't have enough possible return values to decide on an action. I guess I could use enums or classes, but that seems somewhat cumbersome.
I use a throw, which I've read should only be used in "exceptional circumstances". I often run into this issue and find myself simply using if/else statements to solve it
Exception throwing is relatively expensive, so it should not be used without careful thought, but I believe your example is an OK example of proper usage.
In general, you should use exceptions only for "exceptional" behavior of the program. If someString can be null through some sort of user input, database values, or other normal mechanism then typically you should handle that case with normal return mechanisms if possible.
In your case, you could return a Boolean object (not a primitive) and return null if someString is null.
private static Boolean method2( String someString ) {
if (someString == null) {
return null;
}
...
}
Then you would handle the null appropriately in the caller maybe returning a boolean to main based on whether or not the method "worked".
private static boolean method1() {
...
Boolean result = method2(someString);
if (result == null) {
// the method didn't "work"
return false;
}
// ... rest of method1 ...
return true; // method2 gave a non-null answer, so method1 "worked"
}
Then in main you can see if method1 "worked":
public static void main(String[] args) {
if (!method1()) {
// handle error
}
...
}
Notice that I lowercased your method names and changed the visibility of your methods to private, both of which are good patterns.
enums or classes, but that seems somewhat cumbersome.
Yeah, indeed. It depends a bit on how this code is used. If it is an API method that is called by others, you might want to return some sort of Result class that provides feedback, such as a flag indicating that the argument was null; see the sketch below. Or you might throw an IllegalArgumentException in that case. If instead this is an internal, local private method, then I'd vote for a simpler way of handling argument errors. Either way, I'd use Javadoc to document the behavior so you don't trip up future you.
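As a rough sketch of such a Result class (the names here are illustrative, not from the original code):
// Hypothetical result type: carries success/failure plus a reason, so the
// caller doesn't have to interpret null or overload the boolean return value.
public final class Result {
    private final boolean ok;
    private final String message;

    private Result(boolean ok, String message) {
        this.ok = ok;
        this.message = message;
    }

    public static Result ok() {
        return new Result(true, null);
    }

    public static Result failed(String reason) {
        return new Result(false, reason);
    }

    public boolean isOk() {
        return ok;
    }

    public String getMessage() {
        return message;
    }
}
Method2 could then return Result.failed("someString was null"), and Method1 would just check isOk() and return early.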
Hope this helps.
Consider the following code:
public Object getClone(Cloneable a) throws TotallyFooException {
if (a == null) {
throw new TotallyFooException();
}
else {
try {
return a.clone();
} catch (CloneNotSupportedException e) {
e.printStackTrace();
}
}
// can't be reached; only here to satisfy the syntax
return null;
}
The return null; is necessary since an exception may be caught; however, in such a case, since we already checked for null (and let's assume we know the class we are calling supports cloning), we know the try statement will never fail.
Is it bad practice to put in the extra return statement at the end just to satisfy the syntax and avoid compile errors (with a comment explaining it will not be reached), or is there a better way to code something like this so that the extra return statement is unnecessary?
A clearer way without an extra return statement is as follows. I wouldn't catch CloneNotSupportedException either, but let it go to the caller.
if (a != null) {
try {
return a.clone();
} catch (CloneNotSupportedException e) {
e.printStackTrace();
}
}
throw new TotallyFooException();
It's almost always possible to fiddle with the order to end up with a more straight-forward syntax than what you initially have.
It definitely can be reached. Note that you're only printing the stacktrace in the catch clause.
In the scenario where a != null and there will be an exception, the return null will be reached. You can remove that statement and replace it with throw new TotallyFooException();.
In general*, if null is a valid result of a method (i.e. the user expects it and it means something), then returning it as a signal for "data not found" or "an exception happened" is not a good idea. Otherwise, I don't see any reason why you shouldn't return null.
Take for example the Scanner#ioException method:
Returns the IOException last thrown by this Scanner's underlying Readable. This method returns null if no such exception exists.
In this case, the returned value null has a clear meaning, when I use the method I can be sure that I got null only because there was no such exception and not because the method tried to do something and it failed.
*Note that sometimes you do want to return null even when the meaning is ambiguous. For example the HashMap#get:
A return value of null does not necessarily indicate that the map contains no mapping for the key; it's also possible that the map explicitly maps the key to null. The containsKey operation may be used to distinguish these two cases.
In this case null can indicate that the value null was found and returned, or that the hashmap doesn't contain the requested key.
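For illustration, a tiny example of that ambiguity and how containsKey resolves it:
import java.util.HashMap;
import java.util.Map;

public class NullValueDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("a", null);                         // key present, explicitly mapped to null

        System.out.println(map.get("a"));           // null (mapped to null)
        System.out.println(map.get("b"));           // null (no mapping at all)
        System.out.println(map.containsKey("a"));   // true  -> "a" maps to null
        System.out.println(map.containsKey("b"));   // false -> "b" is absent
    }
}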
Is it bad practice to put in the extra return statement at the end just to satisfy the syntax and avoid compile errors (with a comment explaining it will not be reached)
I think return null is bad practice for the terminus of an unreachable branch. It is better to throw a RuntimeException (AssertionError would also be acceptable) as to get to that line something has gone very wrong and the application is in an unknown state.
Most likely this is (as above) because the developer has missed something (objects can be non-null and yet not cloneable).
I'd likely not use InternalError unless I'm very very sure that the code is unreachable (for example after a System.exit()) as it is more likely that I make a mistake than the VM.
I'd only use a custom exception (such as TotallyFooException) if getting to that "unreachable line" means the same thing as anywhere else you throw that exception.
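Applied to the method in the question, the AssertionError suggestion looks roughly like this (a small interface is declared only so the sketch compiles, since Cloneable itself does not declare clone(), a point a later answer also raises):
// Declared only so the example compiles; Cloneable does not expose clone().
interface PubliclyCloneable extends Cloneable {
    Object clone() throws CloneNotSupportedException;
}

class TotallyFooException extends Exception {
}

class CloneExample {
    public Object getClone(PubliclyCloneable a) throws TotallyFooException {
        if (a == null) {
            throw new TotallyFooException();
        }
        try {
            return a.clone();
        } catch (CloneNotSupportedException e) {
            e.printStackTrace();
        }
        // believed unreachable: fail loudly instead of returning null
        throw new AssertionError("getClone reached a branch assumed to be unreachable");
    }
}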
You caught the CloneNotSupportedException which means your code can handle it. But after you catch it, you have literally no idea what to do when you reach the end of the function, which implies that you couldn't handle it. So you're right that it is a code smell in this case, and in my view means you should not have caught CloneNotSupportedException.
I would prefer to use Objects.requireNonNull() to check that the parameter a is not null, so it's clear when you read the code that the parameter must not be null.
And to avoid checked exceptions I would rethrow the CloneNotSupportedException as a RuntimeException.
For both you could add a message explaining why this shouldn't happen or be the case.
import java.util.Objects;

// Assumes a parameter type whose clone() is public and declares
// CloneNotSupportedException (like the PubliclyCloneable interface sketched
// earlier); Object#clone is protected, so a plain Object parameter would not compile.
public Object getClone(PubliclyCloneable a) {
    Objects.requireNonNull(a, "a must not be null");
    try {
        return a.clone();
    } catch (CloneNotSupportedException e) {
        throw new IllegalArgumentException("a cannot be cloned", e);
    }
}
The examples above are valid and very Java. However, here's how I would address the OP's question on how to handle that return:
public Object getClone(Cloneable a) throws CloneNotSupportedException {
return a.clone();
}
There's no benefit in checking a to see if it is null; it's going to NPE anyway. Printing a stack trace is also not helpful: the stack trace is the same regardless of where it is handled.
There is no benefit to junking up the code with unhelpful null tests and unhelpful exception handling. By removing the junk, the return issue is moot.
(Note that the OP included a bug in the exception handling; this is why the return was needed. The OP could not have gotten the method I propose wrong.)
In this sort of situation I would write
public Object getClone(SomeInterface a) throws TotallyFooException {
// Precondition: "a" should be null or should have a someMethod method that
// does not throw a SomeException.
if (a == null) {
throw new TotallyFooException() ; }
else {
try {
return a.someMethod(); }
catch (SomeException e) {
throw new IllegalArgumentException(e) ; } }
}
Interestingly you say that the "try statement will never fail", but you still took the trouble to write a statement e.printStackTrace(); that you claim will never be executed. Why?
Perhaps your belief is not that firmly held. That is good (in my opinion), since your belief is not based on the code you wrote, but rather on the expectation that your client will not violate the precondition. Better to program public methods defensively.
By the way, your code won't compile for me. You can't call a.clone() even if the type of a is Cloneable. At least Eclipse's compiler says so. Expression a.clone() gives error
The method clone() is undefined for the type Cloneable
What I would do for your specific case is
public Object getClone(PubliclyCloneable a) throws TotallyFooException {
if (a == null) {
throw new TotallyFooException(); }
else {
return a.clone(); }
}
Where PubliclyCloneable is defined by
interface PubliclyCloneable {
public Object clone() ;
}
Or, if you absolutely need the parameter type to be Cloneable, the following at least compiles.
public static Object getClone(Cloneable a) throws TotallyFooException {
// Precondition: "a" should be null or point to an object that can be cloned without
// throwing any checked exception.
if (a == null) {
throw new TotallyFooException(); }
else {
try {
return a.getClass().getMethod("clone").invoke(a) ; }
catch( IllegalAccessException e ) {
throw new AssertionError(null, e) ; }
catch( InvocationTargetException e ) {
Throwable t = e.getTargetException() ;
if( t instanceof Error ) {
// Unchecked exceptions are bubbled
throw (Error) t ; }
else if( t instanceof RuntimeException ) {
// Unchecked exceptions are bubbled
throw (RuntimeException) t ; }
else {
// Checked exceptions indicate a precondition violation.
throw new IllegalArgumentException(t) ; } }
catch( NoSuchMethodException e ) {
throw new AssertionError(null, e) ; } }
}
Is having a return statement just to satisfy syntax bad practice?
As others have mentioned, in your case this does not actually apply.
To answer the question, though, Lint type programs sure haven't figured it out! I have seen two different ones fight it out over this in a switch statement.
switch (var)
{
case A:
break;
default:
return;
break; // Unreachable code. Coding standard violation?
}
One complained that not having the break was a coding standard violation. The other complained that having it was one because it was unreachable code.
I noticed this because two different programmers kept re-checking the code in with the break added then removed then added then removed, depending on which code analyzer they ran that day.
If you end up in this situation, pick one and comment the anomaly, which is the good form you showed yourself. That's the best and most important takeaway.
It isn't 'just to satisfy syntax'. It is a semantic requirement of the language that every code path leads to a return or a throw. This code doesn't comply: if the exception is caught, a following return is required.
No 'bad practice' about it, or about satisfying the compiler in general.
In any case, whether syntax or semantic, you don't have any choice about it.
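For instance, here is a minimal illustration (unrelated to the OP's code): javac rejects this method with "missing return statement" if the final line is removed, because the path through the catch block would then fall off the end of the method.
static int parseOrZero(String s) {
    try {
        return Integer.parseInt(s);
    } catch (NumberFormatException e) {
        e.printStackTrace();
    }
    return 0; // remove this line and the method no longer compiles
}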
I would rewrite this to have the return at the end:
if (a == null) {
    throw new TotallyFooException();
}
// no else needed: if this line is reached, a is not null
Object b = null;
try {
    b = a.clone();
} catch (CloneNotSupportedException e) {
    e.printStackTrace();
}
return b;
No one mentioned this yet so here goes:
public static final Object ERROR_OBJECT = ...
//...
public Object getClone(Cloneable a) throws TotallyFooException {
Object ret;
if (a == null)
throw new TotallyFooException();
//no need for else here
try {
ret = a.clone();
} catch (CloneNotSupportedException e) {
e.printStackTrace();
//something went wrong! ERROR_OBJECT could also be null
ret = ERROR_OBJECT;
}
return ret;
}
I dislike return inside try blocks for that very reason.
The return null; is necessary since an exception may be caught; however, in such a case, since we already checked for null (and let's assume we know the class we are calling supports cloning), we know the try statement will never fail.
If you know details about the inputs involved in a way where you know the try statement can never fail, what is the point of having it? Avoid the try if you know for sure things are always going to succeed (though it is rare that you can be absolutely sure for the whole lifetime of your codebase).
In any case, the compiler unfortunately isn't a mind reader. It sees the function and its inputs, and given the information it has, it needs that return statement at the bottom as you have it.
Is it bad practice to put in the extra return statement at the end
just to satisfy the syntax and avoid compile errors (with a comment
explaining it will not be reached), or is there a better way to code
something like this so that the extra return statement is unnecessary?
Quite the opposite: I'd suggest it's good practice to avoid any compiler warnings or errors, even if that costs another line of code. Don't worry too much about line count here. Establish the reliability of the function through testing and then move on. Just pretending you could omit the return statement, imagine coming back to that code a year later and trying to decide whether that return statement at the bottom causes more confusion than some comment detailing the minutiae of why it was omitted because of assumptions you can make about the input parameters. Most likely the return statement is going to be easier to deal with.
That said, specifically about this part:
try {
return a.clone();
} catch (CloneNotSupportedException e) {
e.printStackTrace();
}
...
// can't be reached; only here to satisfy the syntax
return null;
I think there's something slightly odd with the exception-handling mindset here. You generally want to swallow exceptions at a site where you have something meaningful you can do in response.
You can think of try/catch as a transaction mechanism. try making these changes, if they fail and we branch into the catch block, do this (whatever is in the catch block) in response as part of the rollback and recovery process.
In this case, merely printing a stacktrace and then being forced to return null isn't exactly a transaction/recovery mindset. The code transfers the error-handling responsibility to all the code calling getClone to manually check for failures. You might prefer to catch the CloneNotSupportedException and translate it into another, more meaningful form of exception and throw that, but you don't want to simply swallow the exception and return a null in this case since this is not like a transaction-recovery site.
You'll end up leaking the responsibilities to the callers to manually check and deal with failure that way, when throwing an exception would avoid this.
It's like if you load a file, that's the high-level transaction. You might have a try/catch there. During the process of trying to load a file, you might clone objects. If there's a failure anywhere in this high-level operation (loading the file), you typically want to throw exceptions all the way back to this top-level transaction try/catch block so that you can gracefully recover from a failure in loading a file (whether it's due to an error in cloning or anything else). So we generally don't want to just swallow up an exception in some granular place like this and then return a null, e.g., since that would defeat a lot of the value and purpose of exceptions. Instead we want to propagate exceptions all the way back to a site where we can meaningfully deal with it.
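To make the analogy concrete, here is a rough sketch (all names are illustrative, not from the OP's code) of translating the low-level failure and handling it at the top-level "transaction":
// Illustrative only: loadFile() is the high-level operation; a failure in the
// low-level cloning step is translated into one meaningful exception type and
// recovered from at the call site, instead of being swallowed as a null.
class DocumentLoadException extends RuntimeException {
    DocumentLoadException(String message, Throwable cause) {
        super(message, cause);
    }
}

class Loader {
    Object loadFile(String path) {
        try {
            return cloneTemplate(); // ... plus the rest of the loading work
        } catch (CloneNotSupportedException e) {
            // translate, don't swallow: the caller sees one meaningful failure
            throw new DocumentLoadException("Could not load " + path, e);
        }
    }

    private Object cloneTemplate() throws CloneNotSupportedException {
        // stand-in for the cloning step described above
        throw new CloneNotSupportedException();
    }
}

class LoadDemo {
    public static void main(String[] args) {
        try {
            new Loader().loadFile("example.txt");
        } catch (DocumentLoadException e) {
            System.err.println("Recovering gracefully: " + e.getMessage());
        }
    }
}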
Your example is not ideal to illustrate your question as stated in the last paragraph:
Is it bad practice to put in the extra return statement at the end
just to satisfy the syntax and avoid compile errors (with a comment
explaining it will not be reached), or is there a better way to code
something like this so that the extra return statement is unnecessary?
A better example would be the implementation of clone itself:
public class A implements Cloneable {
public Object clone() {
try {
return super.clone() ;
} catch (CloneNotSupportedException e) {
throw new InternalError(e) ; // vm bug.
}
}
}
Here the catch clause should never be entered. Still, the syntax requires either throwing something or returning a value. Since returning something does not make sense, an InternalError is used to indicate a severe VM condition.
I am trying my hand at writing test cases. From what I have read, my tests should fail from the start and I should strive to make tests pass. However, I find myself writing tests checking boundaries and the exceptions they should cause:
@Test(expected=NegativeArraySizeException.class)
public void testWorldMapIntInt() {
WorldMap w = new WorldMap(-1, -1);
}
@Test(expected=IndexOutOfBoundsException.class)
public void testGetnIntnInt() {
WorldMap w = new WorldMap(10,10);
Object o = w.get(-1, -1);
}
However, this test passes by default because Java will throw the exception anyway. Is there a better way to handle these kinds of expected exceptions, possibly a way that fails by default-- forcing me to strive to handle these cases?
I agree that the style you present is not so good. The problem is that it doesn't check where in the method the exception is thrown, so it's possible to get false negatives.
We usually write tests for exceptions like this:
public void testWorldMapIntInt() {
try {
WorldMap w = new WorldMap(-1, -1);
Assert.fail("should have thrown IndexOutOfBoundsException");
}
catch (IndexOutOfBoundsException e) {}
}
1. The expected behaviour for WorldMap is to throw an exception if (-1, -1) is passed into it.
2. Initially it doesn't do that, so your test will fail because it does not see the expected exception.
3. You implement the code for WorldMap correctly, including throwing the exception when (-1, -1) is passed in.
4. You rerun your test, and it passes.
Sounds like good TDD to me!
That seems like a fair test to write. WorldMap is no standard Java class. Presumably it's your own class. Therefore the test wouldn't be passing if you hadn't already written some code. This test will force you to throw (or propagate) an appropriate exception from your class. That sounds like a good test to me, which you should write before implementing the behavior.
I personally look for mistakes like that in the WorldMap constructor and throw an IllegalArgumentException; that way you can provide a better error message, such as what value was passed in and what the expected range is, as sketched below.
As for having that test fail by default, I cannot think of a reasonable way of doing that if you are going to have it actually do something (if you are writing the tests first then it should fail because the constructor won't have any code).
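A sketch of that constructor validation (WorldMap's real internals aren't shown in the question, so the field here is just a placeholder):
public class WorldMap {
    private final Object[][] cells; // placeholder for the real map storage

    public WorldMap(int width, int height) {
        if (width < 0 || height < 0) {
            // clearer than the default NegativeArraySizeException: it reports
            // what was passed and what was expected
            throw new IllegalArgumentException(
                    "Dimensions must be non-negative, got " + width + "x" + height);
        }
        cells = new Object[width][height];
    }
}
The test would then expect IllegalArgumentException instead of NegativeArraySizeException.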
I agree with the accepted answer: the try-fail-catch idiom, although ugly and cluttering the test, is much better than @Test(expected=...), which might report false positives.
A while back I implemented a very simple JUnit rule to deal with exception testing in a safe and readable manner:
public class DefaultFooServiceTest {
@UnderTest
private FooService fooService = new DefaultFooService();
@Rule
public ExceptionAssert exception = new ExceptionAssert();
@Test
public void shouldThrowNpeWhenNullName() throws Exception {
//given
String name = null;
//when
fooService.echo(name);
//then
exception.expect(NullPointerException.class);
}
@Test
public void shouldThrowIllegalArgumentWhenNameJohn() throws Exception {
//given
String name = "John";
//when
fooService.echo(name);
//then
exception.expect(IllegalArgumentException.class)
.expectMessage("Name: 'John' is not allowed");
}
}
See blog post and source.
I'm pretty new to JUnit, and I don't really know what best practices are for exceptions and exception handling.
For example, let's say I'm writing tests for an IPAddress class. It has a constructor IPAddress(String addr) that will throw an InvalidIPAddressException if addr is null. As far as I can tell from googling around, the test for the null parameter will look like this.
@Test
public void testNullParameter()
{
try
{
IPAddress addr = new IPAddress(null);
assertTrue(addr.getOctets() == null);
}
catch(InvalidIPAddressException e)
{
return;
}
fail("InvalidIPAddressException not thrown.");
}
In this case, try/catch makes sense because I know the exception is coming.
But now if I want to write testValidIPAddress(), there's a couple of ways to do it:
Way #1:
@Test
public void testValidIPAddress() throws InvalidIPAddressException
{
IPAddress addr = new IPAddress("127.0.0.1");
byte[] octets = addr.getOctets();
assertTrue(octets[0] == 127);
assertTrue(octets[1] == 0);
assertTrue(octets[2] == 0);
assertTrue(octets[3] == 1);
}
Way #2:
@Test
public void testValidIPAddress()
{
try
{
IPAddress addr = new IPAddress("127.0.0.1");
byte[] octets = addr.getOctets();
assertTrue(octets[0] == 127);
assertTrue(octets[1] == 0);
assertTrue(octets[2] == 0);
assertTrue(octets[3] == 1);
}
catch (InvalidIPAddressException e)
{
fail("InvalidIPAddressException: " + e.getMessage());
}
}
Is it standard practice to let unexpected exceptions propagate to JUnit, or should you deal with them yourself?
Thanks for the help.
Actually, the old style of exception testing is to wrap a try block around the code that throws the exception and then add a fail() statement at the end of the try block. Something like this:
public void testNullParameter() {
try {
IPAddress addr = new IPAddress(null);
fail("InvalidIPAddressException not thrown.");
} catch(InvalidIPAddressException e) {
assertNotNull(e.getMessage());
}
}
This isn't much different from what you wrote but:
Your assertTrue(addr.getOctets() == null); is useless.
The intent and the syntax are clearer IMO and thus easier to read.
Still, this is a bit ugly. But this is where JUnit 4 comes to the rescue as exception testing is one of the biggest improvements in JUnit 4. With JUnit 4, you can now write your test like this:
@Test(expected=InvalidIPAddressException.class)
public void testNullParameter() throws InvalidIPAddressException {
IPAddress addr = new IPAddress(null);
}
Nice, isn't it?
Now, regarding the real question, if I don't expect an exception to be thrown, I'd definitely go for way #1 (because it's less verbose) and let JUnit handle the exception and fail the test as expected.
For tests where I don't expect an exception, I don't bother to catch it. I let JUnit catch the exception (it does this reliably) and don't cater for it at all beyond declaring the throws clause (if required).
I note, re. your first example, that you're not making use of the expected attribute of @Test, viz.
@Test(expected=IndexOutOfBoundsException.class)
public void elementAt() {
int[] intArray = new int[10];
int i = intArray[20]; // Should throw IndexOutOfBoundsException
}
I use this for all tests where I'm testing for thrown exceptions. It's briefer than the equivalent catch/fail pattern that I had to use with JUnit 3.
Since JUnit 4.7 you can use an ExpectedException rule, and you should use it. The rule lets you pinpoint exactly where in your test code the exception should be thrown: only the code executed after the expect() call counts. Moreover, you can easily match a string against the error message of the exception. In your case the code looks like this:
@Rule
public ExpectedException expectedException = ExpectedException.none();
@Test
public void test() {
//working code here...
expectedException.expect(InvalidIPAddressException.class);
IPAddress addr = new IPAddress(null);
}
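To also match the error message, as mentioned above, the rule offers expectMessage (a substring match); a sketch, with an assumed message text:
@Test
public void testNullParameterMessage() {
    expectedException.expect(InvalidIPAddressException.class);
    expectedException.expectMessage("null"); // assumed fragment of the message
    IPAddress addr = new IPAddress(null);
}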
UPDATE: In his book Practical Unit Testing with JUnit and Mockito, Tomek Kaczanowski argues against the use of ExpectedException, because the rule "breaks the arrange/act/assert [...] flow" of a unit test (he suggests using the catch-exception library instead). Although I can understand his argument, I think using the rule is fine if you do not want to introduce another third-party library (using the rule is better than catching the exception "manually" anyway).
For the null test you can simply do this:
public void testNullParameter() {
try {
IPAddress addr = new IPAddress(null);
fail("InvalidIPAddressException not thrown.");
}
catch(InvalidIPAddressException e) { }
}
If the exception has a message, you could also check that message in the catch if you wish. E.g.
String actual = e.getMessage();
assertEquals("Null is not a valid IP Address", actual);
For the valid test you don't need to catch the exception. A test will automatically fail if an exception is thrown and not caught. So way #1 would be all you need as it will fail and the stack trace will be available to you anyway for your viewing pleasure.
If I understand your question, the answer is: either; it's a matter of personal preference.
Personally, I throw my exceptions in tests. In my opinion, a test failing by assertion is equivalent to a test failing by an uncaught exception; both show something that needs to be fixed.
The important thing to remember in testing is code coverage.
In general, way #1 is the way to go; there is no reason to prefer reporting a failure over an error, since either way the test essentially failed.
The only time way #2 makes sense is if you need a good message about what went wrong and a bare exception won't give you that. Then catching and failing can make sense to better announce the reason for the failure.
Regarding testing for exceptions:
I agree with "Pascal Thivent", i.e. use @Test(expected=InvalidIPAddressException.class).
Regarding testValidIPAddress:
IPAddress addr = new IPAddress("127.0.0.1");
byte[] octets = addr.getOctets();
I would write a test like this:
public class IPAddressTests {
    @Test
    public void getOctets_ForAValidIPAddress_ShouldReturnCorrectOctets() {
        // Test code here
    }
}
The point is: when the test input is a valid IP address, the test should exercise the public methods/capabilities of the class, asserting that they work as expected.
IMO it is better to handle the exception and show an appropriate message from the test than to throw it from the test.