Can JUnit make assertions about all Strings?

Is there any way in JUnit (or any other testing framework) to make assertions about all possible inputs to a method?
Something like:
assertTrue(myClass.myMethod(anyString))
EDIT:
I realize that there are an infinite number of possible inputs, but I guess I was wondering if there are frameworks that can statically analyze a method and detect that a certain result is reached in all possible cases. E.g. public static boolean myMethod(String input) { return true; } will always return true.

No, there is a practically unlimited number of possible inputs.
It's your job to partition them into test cases with (expected) equivalent behaviour.
If such an artificial intelligence existed, it could also write the code to be tested.
There are test case generators that auto-create test cases, but they are mostly useless. They produce a huge number of test cases that mainly just touch the code, instead of testing for an expected result.
Such tools raise the test coverage percentage, but in a very dubious way. (I would call that an illegitimate raise of test coverage: you should test, not merely touch!)
One such tool is CodePro from Google. Use CodePro -> Test Case Generation (e.g. within Eclipse).
At first you will be a bit surprised; it's not too bad to try it out. Then you will know the limits of automatic test case generation.

You cannot do this with JUnit. The only way I can think of to do such a thing would be using formal verification.

As said before, it's not possible. However, there is the approach of automated property-based testing, which comes about as close to your idea as possible. Well, still far off, though...
For instance, have a look at ScalaCheck:
ScalaCheck is a library written in Scala and used for automated
property-based testing of Scala or Java programs. ScalaCheck was
originally inspired by the Haskell library QuickCheck, but has also
ventured into its own.
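The core idea behind such property-based tools can be sketched in plain Java without any library: generate many random inputs and check that the property holds for each. The generator, the trial count, and the always-true myMethod below are illustrative assumptions, not ScalaCheck's actual API:

```java
import java.util.Random;
import java.util.function.Predicate;

public class PropertyCheck {
    // Stand-in for the method from the question, which is supposed
    // to return true for every input string.
    static boolean myMethod(String input) {
        return true;
    }

    // Run the property against `trials` randomly generated strings.
    // Returns null if all trials pass, or the first failing input.
    static String checkAll(Predicate<String> property, int trials, long seed) {
        Random rnd = new Random(seed);
        for (int i = 0; i < trials; i++) {
            int len = rnd.nextInt(20);
            StringBuilder sb = new StringBuilder(len);
            for (int j = 0; j < len; j++) {
                sb.append((char) (32 + rnd.nextInt(95))); // printable ASCII
            }
            String candidate = sb.toString();
            if (!property.test(candidate)) {
                return candidate; // counterexample found
            }
        }
        return null; // no counterexample within `trials` attempts
    }

    public static void main(String[] args) {
        String counterexample = checkAll(PropertyCheck::myMethod, 1000, 42L);
        System.out.println(counterexample == null ? "property held" : "failed on: " + counterexample);
    }
}
```

Note that, unlike static analysis, this still only samples the input space; a passing run raises confidence but proves nothing.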

Related

What To Unit Test [closed]

I'm a bit confused how much I should dedicate on Unit Tests.
Say I have a simple function like:
appendRepeats(StringBuilder strB, char c, int repeats)
[This function appends char c to strB, repeats times.
e.g.:
strB = "hello"
c = 'h'
repeats = 5
// result
strB = "hellohhhhh"
]
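A minimal sketch of the function described above, assuming (as the naming of the test cases suggests) that null builders and non-positive repeat counts are silently ignored:

```java
public class AppendUtil {
    // Append c to strB, `repeats` times. A null builder or a
    // non-positive count is treated as a no-op (an assumption
    // drawn from the "DontAppend" test names above).
    static void appendRepeats(StringBuilder strB, char c, int repeats) {
        if (strB == null || repeats <= 0) {
            return; // nothing to do
        }
        for (int i = 0; i < repeats; i++) {
            strB.append(c);
        }
    }

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder("hello");
        appendRepeats(sb, 'h', 5);
        System.out.println(sb); // hellohhhhh
    }
}
```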
For unit testing this function, I feel there's already so many possibilities:
AppendRepeats_ZeroRepeats_DontAppend
AppendRepeats_NegativeRepeats_DontAppend
AppendRepeats_PositiveRepeats_Append
AppendRepeats_NullStrBZeroRepeats_DontAppend
AppendRepeats_NullStrBNegativeRepeats_DontAppend
AppendRepeats_NullStrBPositiveRepeats_Append
AppendRepeats_EmptyStrBZeroRepeats_DontAppend
AppendRepeats_EmptyStrBNegativeRepeats_DontAppend
AppendRepeats_EmptyStrBPositiveRepeats_Append
etc. etc.
strB can be null or empty or have value.
c can be null or have value
repeats can be negative or positive or zero
That seems already 3 * 2 * 3 = 18 test methods. Could be a lot more on other functions if those functions also need to test for special characters, Integer.MIN_VALUE, Integer.MAX_VALUE, etc. etc.
What should be my line of stopping?
Should I assume for the purpose of my own program:
strB can only be empty or have value
c has value
repeats can only be zero or positive
Sorry for the bother. I'm just genuinely confused about how paranoid I should be with unit testing in general. Should I stay within the bounds of my assumptions, or is that bad practice? Should I have a method for each potential case, in which case the number of unit test methods would scale exponentially quite quickly?
There's no right answer, and it's a matter of personal opinion and feelings.
However, some things I believe are universal:
If you adopt Test Driven Development, in which you never write any non-test code unless you've first written a failing unit test, this will guide you in the number of tests you write. With some experience in TDD, you'll get a feel for this, so even if you need to write unit tests for old code that wasn't TDD'd, you'll be able to write tests as if it was.
If a class has too many unit tests, that's an indication that the class does too many things. "Too many" is hard to quantify, however. But when it feels like too many, try to split the class up into more classes each with fewer responsibilities.
Mocking is fundamental to unit testing -- without mocking collaborators, you're testing more than the "unit". So learn to use a mocking framework.
Checking for nulls, and testing those checks, can add up to a lot of code. If you adopt a style in which you never produce a null, then your code never needs to handle a null, and there's no need to test what happens in that circumstance.
There are exceptions to this, for example if you're supplying library code, and want to give friendly invalid parameter errors to the caller
For some methods, property tests can be a viable way to hit your code with a lot of tests. JUnit's @Theory is one implementation of this. It allows you to test assertions like "plus(x, y) returns a positive number for any positive x and positive y".
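That plus example can be checked by hand without the @Theory runner; the plus implementation and the input ranges below are illustrative assumptions:

```java
import java.util.Random;

public class PlusProperty {
    // Placeholder for the method under test.
    static long plus(long x, long y) {
        return x + y;
    }

    // Check the property "plus(x, y) is positive for positive x and y"
    // over `trials` random positive inputs (a @Theory-style check by hand).
    static boolean holdsForRandomPositives(int trials, long seed) {
        Random rnd = new Random(seed);
        for (int i = 0; i < trials; i++) {
            long x = 1 + rnd.nextInt(1_000_000);
            long y = 1 + rnd.nextInt(1_000_000);
            if (plus(x, y) <= 0) {
                return false; // counterexample found
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(holdsForRandomPositives(10_000, 7L));
    }
}
```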
The general rule of thumb is usually that every 'fork' in your code should have a test, meaning you should cover all possible edge-cases.
For example, if you have the following code:
if (x != null) {
    if (x.length > 100) {
        // do something
    } else {
        // do something else
    }
} else {
    // do something completely else
}
You should have three test cases: one for null, one for a value shorter than 100, and one for a longer one.
This is if you are strict and want to be 100% covered.
Whether you use separate tests or a parameterized test is less important; it's more a matter of style, and you can go either way. I think the more important thing is to cover all cases.
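One test per branch can be sketched like this, assuming x is a String and giving each branch a distinguishable result (the classify name and return values are illustrative):

```java
public class BranchTests {
    // A method mirroring the if/else structure above, with each
    // branch returning a distinct value so tests can tell them apart.
    static String classify(String x) {
        if (x != null) {
            if (x.length() > 100) {
                return "long";
            } else {
                return "short";
            }
        } else {
            return "null";
        }
    }

    static void check(boolean condition, String message) {
        if (!condition) throw new AssertionError(message);
    }

    // One test per branch, as the answer recommends.
    public static void main(String[] args) {
        check(classify(null).equals("null"), "null branch");
        check(classify("abc").equals("short"), "short branch");
        check(classify("x".repeat(150)).equals("long"), "long branch");
        System.out.println("all branches covered");
    }
}
```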
The set of test cases you have developed are the result of a black-box test-design approach, in fact they look as if you had applied the classification-tree-method. While it is perfectly fine to temporarily take a black-box perspective when doing unit-testing, limiting yourself to black-box testing only can have some undesired effects: First, as you have observed, you can end up with the Cartesian product of all possible scenarios for each of the inputs, second, you will probably still not find bugs that are specific to the chosen implementation.
By (also) taking a glass-box (aka white-box) perspective, you can avoid creating useless tests: Knowing that your code as the first step handles the special case that the number of repeats is negative means you don't have to multiply this scenario with all the others. Certainly, this means you are making use of your knowledge of implementation details: If you were later to change your code such that the check against negative repeats comes at several places, then you better also adjust your test suite.
Since there seems to be a wide spread concern about testing implementation details: unit-testing is about testing the implementation. Different implementations have different potential bugs. If you don't use unit-testing for finding these bugs, then any other test level (integration, subsystem, system) is definitely less suited for finding them systematically - and in a bigger project you don't want implementation level bugs escape to later development phases or even to the field. On a side note, coverage analysis implies you take a glass-box perspective, and TDD does the same.
It is correct, however, that a test suite or individual tests should not unnecessarily depend on implementation details - but that is a different statement than saying you should not depend on implementation details at all. A plausible approach therefore is, to have a set of tests that make sense from a black-box perspective, plus tests that are meant to catch those bugs that are implementation specific. The latter need to be adjusted when you change your code, but the effort can be reduced by various means, e.g. using test helper methods etc.
In your case, taking a glass-box perspective would probably reduce the number of tests with negative repeats to one, also the null char cases, possibly also the NullStrB cases (assuming you handle that early by replacing the null with an empty string), and so on.
First, use a code coverage tool. That will show you which lines of your code are executed by your tests. IDEs have plugins for code coverage tools so that you can run a test and see which lines were executed. Shoot for covering every line, that may be hard for some cases but for this kind of utility it is very do-able.
Using the code coverage tool makes uncovered edge cases stand out. For tests that are harder to implement code coverage shows you what lines your test executed, so if there's an error in your test you can see how far it got.
Next, understand that no test suite covers everything. There will always be values you don't test. So pick representative inputs that are of interest, and avoid ones that seem redundant. For instance, is passing in an empty StringBuilder really something you care about? It doesn't affect the behavior of the code. There are special values that may cause a problem, like null. If you are testing a binary search, you'll want to cover the case where the array is really big, to see if the midpoint calculation overflows. Look for the kinds of cases that matter.
If you validate upfront and kick out troublesome values, you don't have to do as much work testing. One test for null StringBuilder passed in to verify you throw IllegalArgumentException, one test for negative repeat value to verify you throw something for that.
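Validating up front might look like this sketch, using the appendRepeats example from the question (the exception choices are the ones suggested above; whether to throw or ignore is a design decision):

```java
public class Guards {
    // Reject bad inputs at the boundary so the rest of the method
    // (and its tests) can assume sane values.
    static void appendRepeats(StringBuilder strB, char c, int repeats) {
        if (strB == null) {
            throw new IllegalArgumentException("strB must not be null");
        }
        if (repeats < 0) {
            throw new IllegalArgumentException("repeats must be >= 0");
        }
        for (int i = 0; i < repeats; i++) {
            strB.append(c);
        }
    }
}
```

With the guard clauses in place, two small tests (null builder throws, negative count throws) cover all the "troublesome value" combinations at once.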
Finally, tests are for the developers. Do what is useful for you.

How do I write a TDD test for removing a field from a Java class?

I have a Java class with three fields. I realized I only need two of them due to changes in requirements.
Ideally I'd write a failing test case before modifying code.
Is there a standard way, or should I just ignore TDD for this task?
That's refactoring, so you don't need to start with failing tests.
Find all the methods using the field.
Make sure that they're covered by unit tests.
Refactor the methods so they no longer use the field.
Remove the field.
Ensure that the tests are running.
Does the drop of this field change the behavior of the class? If not, just drop the field and check if the class still works correctly (aka, passes the tests you should have already written).
The TDD principle is to write code "designed by tests". This may sound silly, but it means that the first class you should write is the test class, testing the behavior of the class under test. You should iterate over a few steps:
Write the test. It should not compile (you don't have the class/classes under test yet).
Make the test compile. It should fail (you just have an empty class which does not satisfy the assertions in the test).
Make the test pass in the simplest way (usually, just making the method you are testing return the expected value).
Refine/refactor/generalize the class under test, and re-run the test (it should still pass). This step should be really fast, usually less than 2 minutes.
Repeat from step 2 until the desired behavior emerges almost naturally.
If you have an exhaustive list of all the fields you need, you can compare against that list by reflection:
YourClassName.class.getDeclaredFields() vs. your list of fields
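A sketch of that reflection check, using a hypothetical Customer class with the two remaining fields (the class and field names are assumptions for illustration):

```java
import java.lang.reflect.Field;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class FieldCheck {
    // Example class under test after the third field was removed.
    static class Customer {
        String name;
        String email;
    }

    // Collect the declared field names; a test asserting this set
    // against the expected list fails as soon as someone re-adds
    // (or forgets to remove) a field.
    static Set<String> declaredFieldNames(Class<?> type) {
        Set<String> names = new HashSet<>();
        for (Field f : type.getDeclaredFields()) {
            names.add(f.getName());
        }
        return names;
    }

    public static void main(String[] args) {
        Set<String> expected = new HashSet<>(Arrays.asList("name", "email"));
        System.out.println(declaredFieldNames(Customer.class).equals(expected));
    }
}
```

This gives you a test that fails before the field is removed, in keeping with the red-green rhythm, although it is arguably testing structure rather than behavior.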
Write a test for the constructor without the field you want to remove.
Obviously this only works if the constructor takes the field's value as a parameter.
Delete all tests covering the removed functionality (this doesn't count as "writing production code" as per the 3 Rules of TDD).
Delete all references to the obsolete field in the remaining tests. If any of them fails, you are then allowed to write the required production code to make it pass.
Once your tests are green again, all subsequent modifications fall into the "refactoring" category. You are allowed to remove your (now unused) field here.

Is there a way to generate all expectations using EasyMock?

I am trying to write an EasyMock JUnit test case for some code that has a lot of extra bits and pieces which I find a little overkill to mock.
Take the example from http://java.dzone.com/articles/easymock-tutorial-%E2%80%93-getting,
The following expectation is set to test
portfolio.getTotalValue()
Expectation
EasyMock.expect(marketMock.getPrice("EBAY")).andReturn(42.00);
EasyMock.replay(marketMock);
Now, in my case there are around 30-40 such expectations that I need to set before I can get to the piece of code I want to unit test.
Is there a way to generate the expectations for a piece of code, or generate them dynamically, so that I don't have to do all this manually to test my specific piece of code?
No.
Seriously, what would you expect it to do?
You can save some labor over the long run by looking at patterns of expectations in multiple tests, and combining those into reusable methods or @Before methods.
Actually, it's a code smell: hard-to-test code. Your object might not fulfill the Single Responsibility Principle (SRP).
You can try extracting some expectations into one or more allowXY or createMockedXY helper methods (void allowDownloadDocument(path, name, etc), Document createMockedDocument(...) for example). Eliminating static helper classes could also help.
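The helper-method idea can be sketched like this. With EasyMock you would put the repeated expect(...) calls inside such a helper; to keep the sketch self-contained, a map-backed stub stands in for the mock, and the Market interface and symbol names are assumptions borrowed from the tutorial example:

```java
import java.util.HashMap;
import java.util.Map;

public class StubHelpers {
    // Hypothetical collaborator, standing in for marketMock above.
    interface Market {
        double getPrice(String symbol);
    }

    // Reusable helper that bundles a whole batch of canned answers
    // into one call, instead of repeating 30-40 expectation lines
    // in every test.
    static Market stubMarket(Map<String, Double> prices) {
        return symbol -> {
            Double price = prices.get(symbol);
            if (price == null) {
                throw new IllegalStateException("unexpected symbol: " + symbol);
            }
            return price;
        };
    }

    public static void main(String[] args) {
        Map<String, Double> prices = new HashMap<>();
        prices.put("EBAY", 42.00);
        Market market = stubMarket(prices);
        System.out.println(market.getPrice("EBAY"));
    }
}
```

Each test then declares only the prices it cares about, and an unexpected lookup fails loudly, which mimics a strict mock's behavior.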

Basic JUnit Questions

I was testing a String multiplier class with a multiply() method that takes 2 numbers as inputs (as String) and returns the result number (as String)
public String multiply(String num1, String num2);
I have done the implementation and created a test class with the following test cases involving the input String parameter as
valid numbers
characters
special symbol
empty string
Null value
0
Negative number
float
Boundary values
Numbers that are valid but their product is out of range
numbers with + sign (+23)
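For reference, here is one plausible shape for the method under test, sketched with BigInteger (an assumption; the asker's actual implementation is not shown). Delegating the parsing means characters, symbols, empty strings, and floats all raise NumberFormatException, and "product out of range" ceases to be an issue because BigInteger has arbitrary precision:

```java
import java.math.BigInteger;

public class StringMultiplier {
    // Multiply two integers given as strings, returning the product
    // as a string. Invalid numbers raise NumberFormatException.
    public String multiply(String num1, String num2) {
        if (num1 == null || num2 == null) {
            throw new IllegalArgumentException("arguments must not be null");
        }
        return new BigInteger(num1).multiply(new BigInteger(num2)).toString();
    }

    public static void main(String[] args) {
        System.out.println(new StringMultiplier().multiply("-3", "14")); // -42
    }
}
```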
Now my questions are these:
I'd like to know if "each and every" assertEquals() should be in its own test method? Or can I group similar test cases, like a testInvalidArguments() containing all asserts involving invalid characters, since ALL of them throw the same NumberFormatException?
If testing an input value like character ("a"), do I need to include test cases for ALL scenarios?
"a" as the first argument
"a" as the second argument
"a" and "b" as the 2 arguments
As per my understanding, the benefit of these unit tests is to find the cases where input from a user might fail and result in an exception. Then we can give the user a meaningful message (asking them to provide valid input) instead of an exception. Is that correct? And is it the only benefit?
Are the 11 test cases mentioned above sufficient? Did I miss something? Did I overdo it? When is enough enough?
Following from the above point, have I successfully tested the multiply() method?
Unit testing is great (in the 200 KLOC project I'm working on, I've got as much unit test code as regular code) but (assuming a correct unit test):
a unit test that passes does not guarantee that your code works
Think of it this way:
a unit test that fails proves your code is broken
It is really important to realize this.
In addition to that:
it is usually impossible to test every possible input
And then, when you're refactoring:
all of your unit tests passing does not mean you didn't introduce a regression
But:
if one of your unit tests fails, you know you have introduced a regression
This is really fundamental and should be unit testing 101.
1) I do think it's a good idea to limit the number of assertions you make in each test. JUnit only reports the first failure in a test, so if you have multiple assertions some problems may be masked. It's more useful to be able to see everything that passed and everything that failed. If you have 10 assertEquals in one test and the first one fails, then you just don't know what would have happened with the other 9. Those would be good data points to have when debugging.
2) Yes, you should include tests for all of your inputs.
3) It's not just end-user input that needs to be tested. You'll want to write tests for any public methods that could possibly fail. There are some good guidelines for this, particularly concerning getters and setters, at the JUnit FAQ.
4) I think you've got it pretty well covered. (At least I can't think of anything else, but see #5).
5) Give it to some users to test out. They always find sample data that I never think of testing. :)
1) There is a tradeoff between granularity of tests (and hence ease of diagnosis) and verbosity of your unit test code. I'm personally happy to go for relatively coarse-grained test methods, especially once the tests and tested code have stabilized. The granularity issue is only relevant when tests fail. (If I get a failure in a multi-assertion testcase, I either fix the first failure and repeat, or I temporarily hack the testcase as required to figure out what is going on.)
2) Use your common sense. Based on your understanding of how the code is written, design your tests to exercise all of the qualitatively different subcases. Recognize that it is impossible to test all possible inputs in all but the most trivial cases.
3) The point of unit testing is to provide a level of assurance that the methods under test do what they are required to do. What this means depends on the code being tested. For example, if I am unit testing a sort method, validation of user input is irrelevant.
4) The coverage seems reasonable. However, without a detailed specification of what your class is required to do, and examination of the actual unit tests, it is impossible to say if you have covered everything. For example, is your method supposed to cope with leading/trailing whitespace characters, numbers with decimal points, numbers like "123,456", numbers expressed using non-latin digits, numbers in base 42?
5) Define "successfully tested". If you mean, do my tests prove that the code has no errors, then the answer is a definite "NO". Unless the unit tests enumerate each and every possible input, they cannot constitute a proof of correctness. (And in some circumstances, not even testing all inputs is sufficient.)
In all but the most trivial cases, testing cannot prove the absence of bugs. The only thing it can prove is that bugs are present. If you need to prove that a program has no bugs, you need to resort to "formal methods"; i.e. applying formal theorem proving techniques to your program.
And, as another answer points out, you need to give it to real users to see what they might come up with in the way of unexpected input. In other words ... whether the stated or inferred user requirements are actually complete and valid.
True numbers of tests are, of course, infinite. That is not practical. You have to choose valid representative cases. You seem to have done that. Good job.
1) It's best to keep your tests small and focused. That way, when a test fails, it's clear why the test failed. This usually results in a single assertion per test, but not always.
However, instead of hand-coding a test for each individual "invalid scenario", you might want to take a look at JUnit 4.4 Theories (see the JUnit 4.4 release notes and this blog post), or the JUnit Parameterized test runner.
Parametrized tests and Theories are perfect for "calculation" methods like this one. In addition, to keep things organized, I might make two test classes, one for "good" inputs, and one for "bad" inputs.
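The parameterized layout can be sketched without any framework at all: a table of cases driven through one loop. The multiply stub and the rows below are illustrative assumptions, not the asker's code:

```java
public class MultiplyTableTest {
    // Simplified stand-in for the method under test.
    static String multiply(String a, String b) {
        return String.valueOf(Long.parseLong(a) * Long.parseLong(b));
    }

    public static void main(String[] args) {
        // Each row is {num1, num2, expected product}; with JUnit's
        // Parameterized runner, each row would become its own test.
        String[][] goodCases = {
            {"2", "3", "6"},
            {"0", "99", "0"},
            {"-4", "5", "-20"},
        };
        for (String[] row : goodCases) {
            String actual = multiply(row[0], row[1]);
            if (!actual.equals(row[2])) {
                throw new AssertionError(row[0] + " * " + row[1] + " gave " + actual);
            }
        }
        System.out.println("all cases passed");
    }
}
```

Adding a scenario then costs one line of data rather than one hand-written test method.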
2) You only need to include the test cases that you think are most likely to expose any bugs in your code, not all possible combinations of all inputs (that would be impossible as WizardOfOdds points out in his comments). The three sets that you proposed are good ones, but I probably wouldn't test more than those three. Using theories or parametrized tests, however, would allow you to add even more scenarios.
3) There are many benefits to writing unit tests, not just the one you mention. Some other benefits include:
Confidence in your code - You have a high degree of certainty that your code is correct.
Confidence to Refactor - you can refactor your code and know that if you break something, your tests will tell you.
Regressions - You will know right away if a change in one part of the system breaks this particular method unintentionally.
Completeness - The tests forced you to think about the possible inputs your method can receive, and how the method should respond.
5) It sounds like you did a good job with coming up with possible test scenarios. I think you got all the important ones.
I just want to add that with unit testing you can gain even more if you first think of the possible cases and then implement in the test-driven development fashion, because this will help you stay focused on the current case and will let you create the simplest possible implementation in DRY fashion. You might also use a test coverage tool, e.g. EclEmma in Eclipse, which is really easy to use and will show you whether the tests have executed all of your code; this can help you decide when enough is enough (although it is not a proof, just a metric). Generally, when it comes to unit testing, I was much inspired by Kent Beck's Test Driven Development: By Example book, which I strongly recommend.

Java assertions underused

I'm wondering why the assert keyword is so underused in Java? I've almost never seen them used, but I think they're a great idea. I certainly much prefer the brevity of:
assert param != null : "Param cannot be null";
to the verbosity of:
if (param == null) {
    throw new IllegalArgumentException("Param cannot be null");
}
My suspicion is that they're underused because
They arrived relatively late (Java 1.4), by which time many people had already established their Java programming style/habit
They are turned off at runtime by default
Assertions are, in theory, for testing invariants: assumptions that must be true in order for the code to complete properly.
The example shown tests for valid input, which isn't a typical usage for an assertion, because input is generally user-supplied.
Assertions aren't generally used in production code because there is an overhead and it is assumed that situations where the invariants fail have been caught as coding errors during development and testing.
Your point about them coming "late" to java is also a reason why they aren't more widely seen.
Also, unit testing frameworks address some of the need for programmatic assertions, by moving them outside the code being tested.
It's an abuse of assertions to use them to test user input. Throwing an IllegalArgumentException on invalid input is more correct, as it allows the calling method to catch the exception, display the error, and do whatever it needs to (ask for input again, quit, whatever).
If that method is a private method inside one of your classes, the assertion is fine, because you are just trying to make sure you aren't accidentally passing it a null argument. You test with assertions on, and when you have tested all the paths through and not triggered the assertion, you can turn them off so that you aren't wasting resources on them. They are also useful just as comments. An assert at the start of a method is good documentation to maintainers that they should be following certain preconditions, and an assert at the end with a postcondition documents what the method should be doing. They can be just as useful as comments; moreso, because with assertions on, they actually TEST what they document.
Assertions are for testing/debugging, not error-checking, which is why they are off by default: to discourage people from using assertions to validate user input.
In "Effective Java", Joshua Bloch suggested (in the "Check parameters for validity" topic) that (sort of like a simple rule to adopt), for public methods, we shall validate the arguments and throw a necessary exception if found invalid, and for non-public methods (which are not exposed and you as the user of them should ensure their validity), we can use assertions instead.
From Programming with Assertions
By default, assertions are disabled at runtime. Two command-line switches allow you to selectively enable or disable assertions.
This means that if you don't have complete control over the run-time environment, you can't guarantee that the assertion code will even be called. Assertions are meant to be used in a test-environment, not for production code. You can't replace exception handling with assertions because if the user runs your application with assertions disabled (the default), all of your error handling code disappears.
@Don, you are frustrated that assertions are turned off by default. I was too, and thus wrote this little javac plugin that inlines them (i.e. emits the bytecode for if (!expr) throw Ex rather than the silly assert bytecode).
If you include fa.jar in your classpath while compiling Java code, it will do its magic and then report:
Note: %n assertions inlined.
#see http://smallwiki.unibe.ch/adriankuhn/javacompiler/forceassertions and alternatively on github https://github.com/akuhn/javac
I'm not sure why you would bother to write asserts and then replace them with standard if-then condition statements; why not just write the conditions as ifs in the first place?
Asserts are for testing only, and they have two side effects: Larger binaries and degraded performance when enabled (which is why you can turn them off!)
Asserts shouldn't be used to validate conditions because that means the behaviour of your app is different at run time when asserts are enabled/disabled - which is a nightmare!
Assertions are useful because they:
catch PROGRAMMING errors early
document code using code
Think of them as code self-validation. If they fail it should mean that your program is broken and must stop. Always turn them on while unit testing !
In The Pragmatic Programmer they even recommend to let them run in production.
Leave Assertions Turned On
Use Assertions to Prevent the Impossible.
Note that assertions throw AssertionError if they fail, so they are not caught by catch (Exception e).
tl;dr
Yes, use assertion-testing in production where it makes sense.
Use other libraries (JUnit, AssertJ, Hamcrest, etc.) rather than the built-in assert facility if you wish.
Most of the other answers on this page push the maxim "Assertions aren't generally used in production code". While true in productivity apps such as a word processor or a spreadsheet, in the custom business apps where Java is so commonly used, assertion-testing in production is extremely useful and common.
Like many maxims in the world of programming, what starts out true in one context is misconstrued and then misapplied in other contexts.
Productivity Apps
This maxim of "Assertions aren't generally used in production code", though common, is incorrect.
Formalized assertion-testing originated with apps such as a word processor like Microsoft Word or a spreadsheet like Microsoft Excel. These apps might invoke an array of assertion tests on every keystroke made by the user. Such extreme repetition impacted performance severely, so only the beta versions of such products, in limited distribution, had assertions enabled. Hence the maxim.
Business Apps
In contrast, in business-oriented apps for data-entry, database, or other data-processing, the use of assertion-testing in production is enormously useful. The insignificant hit on performance makes it quite practical – and common.
Test business rules
Verifying your business rules at runtime in production is entirely reasonable, and should be encouraged. For example:
If an invoice must have one or more line items at all times, then write an assertion testing than the count of invoice line items is greater than zero.
If a product name must be at least 3 characters or more, write an assertion testing the length of the string.
When calculating the balance for a cash ledger, you know the result can never be negative, so run a check for a negative number signaling a flaw in the data or code.
Such tests have no significant impact on performance in production.
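The three business rules listed above might look like this as assert statements. The Invoice shape and the rules themselves are assumptions for the sketch; each check names the invariant in its message:

```java
import java.util.List;

public class InvoiceChecks {
    // Business-rule assertions of the kind listed above. These run
    // only when the JVM is started with -ea (or -enableassertions).
    static void checkInvoice(List<String> lineItems, String productName, long balanceCents) {
        assert lineItems != null && !lineItems.isEmpty()
                : "an invoice must have at least one line item";
        assert productName != null && productName.length() >= 3
                : "product name must be at least 3 characters";
        assert balanceCents >= 0
                : "cash ledger balance can never be negative";
    }

    public static void main(String[] args) {
        checkInvoice(List.of("widget"), "Widget", 1_000);
        System.out.println("rules hold");
    }
}
```

Since the whole point here is running assertions in production, remember to deploy with -ea; otherwise every one of these checks silently disappears.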
Runtime conditions
If your app expects certain conditions to always be true when your app runs in production, write those expectations into your code as assertion tests.
If you expect those conditions may reasonably on occasion fail, then do not write assertion tests. Perhaps throw certain exceptions. Then try to recover where possible.
Sanity-checks
Sanity checks at runtime in production are also entirely reasonable and should be encouraged. Testing a few arbitrary conditions that one could not imagine being untrue has saved my bacon in countless situations when some bizarre happening occurred.
For example, testing that rounding a nickel (0.05) to the penny resulted in a nickel (0.05) in a certain library helped me in being one of the first people to discover a floating-point technology flaw that Apple shipped in their Rosetta library during the PowerPC-to-Intel transition. Such a flaw reaching the public would have seemed impossible. But amazingly, the flaw had escaped the notice of the originating vendor, Transitive, and Apple, and the early-access developers testing on Apple’s betas.
(By the way, I should mention… never use floating-point for money, use BigDecimal.)
Choice of frameworks
Rather than use the built-in assert facility, you may want to consider using another assertion framework. You have multiple options, including:
JUnit: see org.junit.jupiter.api.Assertions.
AssertJ: known for its slick fluent interface.
Hamcrest: used across many languages (Java, Python, Ruby, Swift, etc.).
Or roll-your-own. Make a little class to use in your project. Something like this.
package work.basil.example;

public class Assertions {
    static public void assertTrue ( Boolean booleanExpression , CharSequence message ) throws java.lang.AssertionError {
        if ( booleanExpression ) {
            // No code needed here.
        } else {  // If booleanExpression is false rather than expected true, throw assertion error.
            // FIXME: Add logging.
            throw new java.lang.AssertionError( message.toString() );
        }
    }
}
Example usage:
Assertions.assertTrue(
    localTime.isBefore( LocalTime.NOON ),
    "The time-of-day is too late, after noon: " + localTime + ". Message # 816a2a26-2b95-45fa-9b0a-5d10884d819d."
);
Your questions
They arrived relatively late (Java 1.4), by which time many people had already established their Java programming style/habit
Yes, this is quite true. Many people were disappointed by the API that Sun/JCP developed for assertion-testing. Its design was lackluster in comparison to existing libraries. So many ignored the new API, and stuck with known tools (3rd-party tools, or roll-your-own mini-library).
They are turned off at runtime by default, WHY OH WHY??
In its earliest years, Java got a bad rap for poor performance. Ironically, Java quickly evolved to become one of the best-performing platforms, but the bad rap hung around like a stinky odor. So Sun was extremely wary of anything that might measurably impact performance, and from that perspective it made sense to disable assertion-testing by default.
Another reason to disable by default might have been related to the fact that, in adding the new assertion facility, Sun had hijacked the word assert. This was not a previously reserved keyword, and required one of the few changes ever made to the Java language. The method name assert had been used by many libraries and by many developers in their own code. For some discussion of this historical transition, read this old documentation, Programming With Assertions.
Assertions are very limited: you can only test boolean conditions, and you need to write the code for a useful error message every time. Compare this to JUnit's assertEquals(), which allows you to generate a useful error message from the inputs, and even show the two inputs side by side in the IDE in a JUnit runner.
Also, you can't search for assertions in any IDE I've seen so far but every IDE can search for method invocations.
In fact they arrived in Java 1.4.
I think the main problem is that when you code in an environment where you do not manage the JVM options directly yourself, as in Eclipse or on J2EE servers (in both cases it is possible to change the JVM options, but you need to dig to find where), it is easier (that is, it requires less effort) to use if and exceptions (or, worse, to use nothing at all).
As others have stated: assertions are not appropriate for validating user input.
If you are concerned with verbosity, I recommend you check out a library I wrote: https://github.com/cowwoc/requirements.java/. It'll allow you to express these checks using very little code, and it'll even generate the error message on your behalf:
requireThat("name", value).isNotNull();
and if you insist on using assertions, you can do this too:
assertThat("name", value).isNotNull();
The output will look like this:
java.lang.NullPointerException: name may not be null
