I wonder whether many people program in Java with assertions. I think they can be very useful on large projects where contracts are missing or outdated, particularly when you use web services, components, etc.
But I have never seen a project use assertions (except in JUnit and other test code...).
I've noticed that the thrown class, AssertionError, is an Error and not an Exception. Why did they choose an Error? Could it be because an Exception might be unexpectedly caught and not logged/rethrown?
If you develop an application with components, I wonder where you put the assertions:
On the component side, just before returning the data through the public API?
On the component client side? And if the API is called everywhere you set up a facade pattern that will call the assertion mechanism? (Then I guess you put your assertions and facade on some external project and your client projects will depend on this assertion project?)
I understand how to use assertions and when to use them, but I just wonder whether anyone has recommendations based on real experience with assertions.
By the way, are you referring to the assert keyword in Java?
I personally find assertions especially useful for invariants.
Take into account that assertion checking is turned off by default in Java. You have to add the -ea flag to enable assertion checking.
In other words, you can test your application in a kind of debug mode, where the program will halt as soon as an assertion is broken. The release application, on the other hand, will have its assertions turned off and will not incur a time penalty for assertion checking; they are simply ignored.
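For example, this tiny program behaves differently depending on that flag:

public class AssertDemo {
    public static void main(String[] args) {
        int size = -1;
        // Fails only when assertions are enabled with -ea:
        assert size >= 0 : "size was " + size;
        System.out.println("assertions are off, so we got here");
    }
}

Run it with java AssertDemo and it prints the message; run it with java -ea AssertDemo and it halts with java.lang.AssertionError: size was -1.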
In Java, assertions are far less powerful than exceptions and have totally different meanings. Exceptions are there when something unexpected happens and you have to handle it. Assertions are about the correctness of your code. They are here to confirm that what 'should be' is indeed the case.
My rough policy, especially when working with many developers:
public methods: always check arguments and throw IllegalArgumentException when something is wrong
private methods: use assertions to check arguments against null pointers and so on
complex methods: intermediate assertions to ensure that the intermediate results satisfy requested properties
...but actually, I use them sparingly, just in critical or error-prone places (a sketch of the policy follows).
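A minimal sketch of that policy; the Account class and the specific checks are illustrative, not from a real project:

class Account {
    private long balance;
    long getBalance() { return balance; }
    void setBalance(long b) { balance = b; }
}

public class AccountService {

    // Public method: always validate arguments with real exceptions.
    public void deposit(Account account, long amount) {
        if (account == null) {
            throw new IllegalArgumentException("account must not be null");
        }
        if (amount <= 0) {
            throw new IllegalArgumentException("amount must be positive: " + amount);
        }
        credit(account, amount);
    }

    // Private method: arguments were validated upstream, so just assert.
    private void credit(Account account, long amount) {
        assert account != null;
        assert amount > 0 : amount;
        long newBalance = account.getBalance() + amount;
        // Intermediate assertion: the new balance must not have overflowed.
        assert newBalance >= account.getBalance() : "overflow adding " + amount;
        account.setBalance(newBalance);
    }
}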
As for why asserts see so little use, I think making assertions disabled by default was a bad decision.
As for extending Error: I suppose AssertionError extends Error because Errors are throwables that are not expected to be caught. That way, when your code has a catch (Exception) block, the assertion failure isn't caught.
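A tiny illustration of that point (run with -ea):

public class CatchDemo {
    public static void main(String[] args) {
        try {
            assert false : "broken invariant";
        } catch (Exception e) {
            // Never reached: AssertionError extends Error, not Exception,
            // so a blanket catch (Exception) cannot silently swallow it.
            System.out.println("caught: " + e);
        }
    }
}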
And about usage: the best places are preconditions, postconditions, or any invariant you want to check in the middle of the code.
In my opinion, Errors in Java should be treated like Exceptions. Therefore I would enable assertions in development and use them in private methods, to check that my code runs fine and doesn't pass invalid values to private methods.
Since those checks should already have been made in the public methods, I wouldn't check again in the private ones.
To disable assertions explicitly, use the -da flag on the java command line (it's a runtime flag for the JVM, not a compiler flag; the assertions are compiled in either way).
In my opinion, in public methods you should check arguments and either handle or log the resulting exceptions yourself.
Assertions should not be used outside tests because they can be turned off in a production environment, which may cause serious problems due to the lack of proper checking.
However, I've seen the claim that it is acceptable to use them to check parameters in private methods. The reasoning is that data which has managed to reach your private method should already be correct, and if it isn't, the application may as well fail hard.
Related
I work on a large legacy Java 8 (Android) application. We recently found a bug that was caused by the ignored result of a method call. Specifically, a caller of a send() method didn't take the right actions when the sending failed. It's been fixed, but now I want to add some static analysis to help find existing bugs of the same nature in our code, and to prevent new bugs of the same nature from being added in the future.
We already use FindBugs, PMD, Checkstyle, Lint, and SonarQube, so I figured that one of these probably already has the check I'm looking for and just needs to be enabled. But after a few hours of searching and testing, I don't think that's the case.
For reference, this is the code I was testing with:
public class Application {
    public static void main(String[] args) {
        foo(); // I want this to be caught
        Bar aBar = new Bar();
        aBar.baz(); // I want this to be caught
    }

    static boolean foo() {
        return System.currentTimeMillis() % 2 == 0;
    }
}

public class Bar {
    boolean baz() {
        return System.currentTimeMillis() % 2 == 0;
    }
}
I want to catch this on the caller side since some callers may use the value while others do not. (The send() method described above was this case)
I found the following existing static analysis rules, but they only seem to apply in very specific circumstances (to avoid false positives) and don't work on my example:
Return values from functions without side effects should not be ignored (only for immutable classes in the Java API)
Method ignores exceptional return value (only for known methods like File.delete())
Method ignores return value (only for methods annotated with javax.annotation.CheckReturnValue I think...)
Method ignores return value, is this OK? (only when the return value is the same type as the type the method is invoked on)
Return value of method without side effect is ignored (only when the method does not produce any effect other than return value)
So far the best option seems to be #3, but it requires me to annotate EVERY method or class in my HUGE project. Java 9+ seems to allow annotating at the package level, but that's not an option for me. Even if it were, the project has A LOT of packages. I really would like a way to apply this to my whole project from one or a few locations, instead of needing to modify every file.
Lastly, I came across a Stack Overflow answer that showed me that IntelliJ has this check with a "Report all ignored non-library calls" option. Enabling it works as far as highlighting in the IDE goes, but I want it to cause a CI failure. I found there's a way to trigger the inspection from the command line using IntelliJ's tools, but that still outputs an XML/JSON file that I'd need custom code to parse. I'd also need to install the IDE tools onto the CI machine, which seems like overkill.
Does anyone know of a better way to achieve what I want? I can't be the first person to care only about false negatives and not about false positives. It should be manageable to require every currently unused return value either to be flagged or to be explicitly marked as intentionally ignored, via an annotation or an assign-to-a-variable convention like the one Error Prone uses.
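For reference, this is roughly what option #3 looks like on my example (assuming the javax.annotation.CheckReturnValue annotation can be applied at class level, as I believe it can, and Error Prone's "unused" variable convention):

import javax.annotation.CheckReturnValue;

@CheckReturnValue // applies to every method in the class
public class Bar {
    boolean baz() {
        return System.currentTimeMillis() % 2 == 0;
    }
}

class Caller {
    void use(Bar bar) {
        // Now flagged by the analyzer:
        // bar.baz();
        // The "intentionally ignored" escape hatch:
        boolean unused = bar.baz();
    }
}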
Scenarios like the one you describe invariably give rise to a substantial software defect (a true bug in every respect), made more frustrating and knotty because the code fails silently, which allowed the problem to remain hidden. Your desire to identify and correct any similar hidden defects is easy to understand; however, I humbly suggest that static code analysis may not be the best strategy:
Working from the concerns you express in your question: a CheckReturnValue rule runs a high risk of producing a cascade of // ignore comments, rule-suppression clauses, and/or suppression annotations that far outnumber the rule's genuine defect detections.
The Java programming language further increases the likelihood of a high suppression count once you take garbage collection into account and assess how it shapes development habits. Java garbage collection reclaims objects that are no longer reachable, so it makes perfect sense for Java developers to avoid holding unnecessary references, and to naturally adopt the practice of ignoring unimportant method call return values. The ignored instances simply fall off the local call stack, become unreachable, and quickly become eligible for garbage collection.
Shifting now from the negative perspective to the positive, I offer some alternatives for your consideration that (I believe) will improve your results and your odds of a successful outcome.
Based on your description of the scenario and the resulting defect/bug, it feels like the proximate root cause of the problem is a unit-testing or integration-testing failure. For the implementation of a send operation that may (and almost certainly will, at some point) fail, both unit tests and integration tests absolutely should have incorporated multiple possible failure scenarios and verified the handling of each. I obviously don't know, but I'm willing to bet that if you focus on creating and running unit tests and integration tests, the quality of the system will improve at every step, the improvements will be clearly evident, and you may very well uncover some or all of the hidden bugs that are the cause of your current concern, trepidation, stress, and worry.
Consider keeping the gist of your current static code analysis research alive, but shift your approach in a new direction. The first time I read your question, I was struck by the realization that the checks you would like to perform exist in multiple unrelated locations across the code base and quickly become overly complex: the specific details of the checks differ in many sections of code, and each special case makes the overall effort unrealistic. Basically, what you would like to implement is a cross-cutting goal that falls across a sizable portion of the code base, and the implementation details have made a fairly simple good idea ridiculously complex. Your question is almost a textbook example of a problem that is best solved with a cross-cutting, aspect-oriented approach.
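For instance, one exploratory aspect might centralize the handling of failed sends instead of trying to catch each ignoring caller. A sketch under assumptions (AspectJ annotation style, and boolean-returning send() methods as in your description):

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.AfterReturning;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class FailedSendLogger {
    // Runs after any call to a boolean-returning send() method; a false
    // result is logged centrally, whether or not the caller checks it.
    @AfterReturning(pointcut = "call(boolean send(..))", returning = "ok")
    public void logFailure(JoinPoint jp, boolean ok) {
        if (!ok) {
            System.err.println("send() returned false at " + jp.getSourceLocation());
        }
    }
}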
If you have the time and interest, please take a look at the AspectJ framework, maybe code a few exploratory aspects, and let me know what you think. I'd like to hear your thoughts, if you feel like having a geeky dev conversation at some point. I really hope this is helpful.
You may use IntelliJ IDEA's inspection Java | Probable bugs | Result of method call ignored with the "Report all ignored non-library calls" option enabled. It catches both cases in your code sample.
There are quite a few posts on SO about the "checked vs unchecked exception" topic. This answer is probably the most well-rounded and informative. Yet I'm still conflicted about following the logic presented there, and there's a reason for that.
I'm building a wrapper API around a set of services similar to each other. There are, however, minor differences between them (or the possibility of such in the future), so certain functions (of a secondary or shortcut nature) may be supported by some services and not by others. So it seems only logical to go with the following approach:
public interface GenericWrapperInterface {
    void possiblyUnsupportedOperation() throws UnsupportedOperationException;
}
Why UnsupportedOperationException? Because it is designed for exactly this kind of situation.
However, all the relevant SO posts, in addition to Oracle's own manual, postulate that if an exception is used to signal an issue that the client could recover from, or an issue predictable but unavoidable, then that exception should be a checked one. My case conforms to these requirements, because for some operations the possibility of their unavailability may be known in advance and those operations are not critical and may be avoided if needed.
So I'm lost in this conundrum. Should I go with a perfectly fitting standard exception and violate the common logic of exception usage, or should I build my own checked alternative and thus duplicate code and create additional confusion among the API users?
UnsupportedOperationException denotes a failure of the Java OO model; in particular, a violation of the Liskov Substitution Principle.
Usually, that means your code has other, non-OO means to decide if the method in question should be called:
if (instance.isSupportive()) instance.possiblyUnsupportedOperation();
Therefore, calling an unimplemented method is a logic error, on par with assertion failure, stack overflow or running out of memory. As such, it should not be a checked exception.
The rule of thumb is that unchecked exceptions represent programmer errors and checked exceptions represent situations. So as the API author you have to decide whether the exceptional condition should have been prevented by the programmer beforehand, or whether the programmer should be required to deal with the condition after it occurs.
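A hedged sketch of the two designs side by side; the capability-check method and the checked exception are illustrative additions, not part of the original API:

public interface GenericWrapperInterface {
    // Unchecked style: the caller is expected to check support first, so an
    // UnsupportedOperationException afterwards signals a programmer error.
    boolean isOperationSupported();
    void possiblyUnsupportedOperation(); // may throw UnsupportedOperationException

    // Checked alternative: the compiler forces every caller to handle the
    // condition after the fact, e.g.
    // void possiblyUnsupportedOperation() throws UnsupportedServiceException;
}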
I help maintain and build on a fairly large Swing GUI, with a lot of complex interaction. Often I find myself fixing bugs that are the result of things getting into odd states due to some race condition somewhere else in the code.
As the code base gets larger, I've found it's gotten less consistent about specifying, via documentation, which methods have threading restrictions: most commonly, methods that must be run on the Swing EDT. Similarly, it would be useful to know, and have static awareness of, which of our custom listeners are specified to be notified on the EDT.
So it came to me that this should be something that could be easily enforced using annotations. Lo and behold, there exists at least one static analysis tool, CheckThread, that uses annotations to accomplish this. It seems to allow you to declare a method to be confined to a specific thread (most commonly the EDT), and will flag methods that try to call that method without also declaring themselves as confined to that thread.
So on the surface this just seems like a low-pain, huge-gain addition to the source and build cycle. My questions are:
Are there any success stories for people using CheckThread or similar libraries to enforce threading constraints? Any stories of failure? Why did it succeed/fail?
Is this good in theory? Are there theoretical downsides?
Is this good in practice? Is it worth it? What kind of value has it delivered?
If it works in practice, what are good tools to support this? I've just found CheckThread but admit I'm not entirely sure what I'm searching for to find other tools that do the same thing.
I know whether it's right for us depends on our scenario. But I've never heard of people using something like this in practice, and to be honest it doesn't seem to have taken hold much from some general browsing. So I'm wondering why.
This answer is more focused on the theory aspect of your question.
Fundamentally you are making an assertion: "This method runs only on certain threads". This assertion isn't really different from any other assertion you might make ("This method accepts only integers less than 17 for parameter X"). The issues are:
Where do such assertions come from?
Can static analyzers check them?
Where do you get such a static analyzer?
Mostly such assertions have to come from the software designers, as they are the only people who know the intentions. The traditional term for this is "Design by Contract", although most DbC schemes only cover the current program state (C's assert macro), when they should really cover the program's past and future states ("temporal assertions"), e.g., "this routine will allocate a block of storage, and eventually some piece of code will deallocate it". One can build tools that try to determine heuristically what the assertions are (e.g., Engler's assertion-induction work; others have done work in this area). That's useful, but the false positives are an issue.

As a practical matter, asking the designers to code such assertions doesn't seem particularly onerous, and it makes really good long-term documentation. Whether you code such assertions with a specific "contract" language construct, with an if statement ("if (Debug && !assertion) Fail();"), or hidden in an annotation is really just a matter of convenience. It's nice when the language allows you to code such assertions directly.
Checking such assertions statically is difficult. If you stick with current-state-only assertions, the static analyzer pretty much has to do full data flow analysis of your entire application, because the information needed to satisfy the assertion likely comes from data created by another part of the application. (In your case, the "inside EDT" signal has to come from analyzing the whole call graph of the application to see if there is any call path that leads to the method from a thread which is NOT the EDT.) If you use temporal properties, the static checker pretty much needs some kind of state-space verification logic in addition; these are presently still pretty much research tools. Even with all this machinery, static analyzers generally have to be "conservative" in their analyses: if they can't demonstrate that something is false, they pretty much have to assume it is true, because of the halting problem.
Where do you get such analyzers? Given all the machinery needed, they're hard to build, so you should expect them to be rare. If somebody has built one, great. If not... as a general rule, you don't want to do this yourself from scratch. The best long-term hope is to have generic program analysis machinery available on which to build such analyzers, to amortize the cost of building all the infrastructure. (I build program analyzer tool foundations; see our DMS Software Reengineering Toolkit.)
One way to make it "easier" to build such static analyzers is to restrict the cases they handle to narrow scope, e.g., CheckThread. I'd expect CheckThread to do exactly what it presently does, and it would be unlikely to get a lot stronger.
The reason that "assert" macros and other such dynamic "current state" checks are popular is that they can actually be implemented by a simple runtime test. That's pretty practical. The problem is that you may never exercise a path that leads to a failed condition. So, for dynamic analysis, the absence of detected failures is not really evidence of correctness. It still feels good, though.
Bottom line: static analyzers and dynamic analyzers each have their strengths.
We haven't tried any static analysis tools, but we've used AspectJ to write a simple aspect that detects at runtime when any code in java.awt or javax.swing is invoked outside the EDT. It has found several places in our code that were missing a SwingUtilities.invokeLater(). We run with this aspect enabled throughout our QA cycle, then turn it off shortly before release.
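A minimal sketch of what such an aspect can look like (illustrative, not our exact code; a real one would also exempt the few thread-safe Swing methods such as repaint() and invokeLater()):

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import javax.swing.SwingUtilities;

@Aspect
public class EdtViolationDetector {
    // Before any call from woven code into Swing/AWT, verify we are on the EDT.
    @Before("(call(* javax.swing..*.*(..)) || call(* java.awt..*.*(..)))"
            + " && !within(EdtViolationDetector)")
    public void checkEdt(JoinPoint jp) {
        if (!SwingUtilities.isEventDispatchThread()) {
            System.err.println("Swing/AWT call off the EDT: " + jp.getSignature()
                    + " at " + jp.getSourceLocation());
        }
    }
}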
As requested, this doesn't pertain specifically to Java or the EDT, but I've seen good results with Coverity's concurrency static analysis checkers for C/C++. They did have a higher false-positive rate than less complicated checkers, but the code owners seemed willing to put up with that, given how hard threading bugs are to find via testing. The details are confidential, I'm afraid, but Dawson Engler's public papers (e.g., "Bugs as Deviant Behavior") are very good on the general approach of "the following «N» instances of your code do «X» before doing «Y»; this instance doesn't."
Seeing a checked exception in an API is not rare; one of the best-known examples is IOException in Closeable.close(). And dealing with this exception often really annoys me. An even more annoying example was in one of our projects. It consists of several components, and each component declares its own specific checked exception. The problem (in my opinion) is that at design time it was not exactly known what certain exceptions would be. Thus, for instance, the Configurator component declared ConfiguratorException. When I asked why not just use unchecked exceptions, I was told that we want our app to be robust and not to blow up at runtime. But that seems to be a weak argument, because:
Most of those exceptions effectively make the app unusable. Yes, it doesn't blow up, but it can't do anything except flood the log with messages.
Those exceptions are not specific and virtually mean 'something bad happened'. How is the client supposed to recover?
In fact, all the recovery consists of logging the exception and then swallowing it, which is done in one large try-catch statement, roughly like the sketch below.
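(Illustrative only; the second component and its exception are made up to match the description:)

try {
    configurator.configure();  // throws ConfiguratorException
    connector.connect();       // throws ConnectorException
} catch (ConfiguratorException | ConnectorException e) {
    // The whole "recovery": log it, swallow it, carry on unusable.
    log.error("something bad happened", e);
}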
I think this is a recurring pattern. Still, checked exceptions are widely used in APIs. What is the reason for this? Are there certain types of APIs that are better suited to checked exceptions?
There has been a lot of controversy around this issue.
Take a look at this classic article on the subject: http://www.mindview.net/Etc/Discussions/CheckedExceptions
I personally tend to favor runtime exceptions, and have started to consider the use of checked exceptions in an API a bad idea.
In fact, some very popular Java APIs have done the same: Hibernate dropped checked exceptions in favor of runtime exceptions as of version 3, and the Spring Framework also favors runtime over checked exceptions.
One of the problems with large libraries is that they do not document all the exceptions that may be thrown, so your code may bomb at any time if an undocumented RuntimeException just happens to be thrown from deep down in code you do not "own".
By explicitly declaring all those, at least the developer using said library has the compiler's help in dealing with them correctly.
There is nothing like doing forensic analysis at 3 in the morning to discover that some situation triggered such an undeclared exception.
Checked exceptions should only be thrown for things that are 1) exceptional: they are an exception to the rule of success (much poor exception throwing comes from the terrible habit of defensive coding), and 2) actionable by the client. If something happens that the client of the API can't possibly affect in any way, make it a RuntimeException.
There are different views on the matter, but I tend to see things as follows:
a checked exception represents an event which one could reasonably expect to occur under some predictable, exceptional circumstances that are still "within the normal operating conditions of a program/typical caller", and which can typically be dealt with not too far up the call stack;
an unchecked exception represents a condition that we "wouldn't really expect to occur" within the normal running environment of a program, and which can be dealt with fairly high up the call stack (or indeed possibly cause us to shut down the application in the case of a simpler app);
an Error represents a condition which, if it occurs, we would generally expect to result in the application shutting down.
For example, it's quite within the realms of a typical environment that under some exceptional-- but fairly predictable-- conditions, closing a file could cause an I/O error (flushing a buffer to a file on closing when the disk is full). So the decision to let Closeable throw a checked IOException is probably reasonable.
On the other hand, there are some examples within the standard Java APIs where the decision is less defensible. I would say that the XML APIs are typically overfussy when it comes to checked exceptions (why is not finding an XML parser something you really expect to happen and deal with in a typical application...?), as is the reflection API (you generally really expect class definitions to be found and not to be able to plough on regardless if they're not...). But many decisions are arguable.
In general, I would agree that exceptions of the "configuration exception" type should probably be unchecked.
Remember, if you are calling a method which declares a checked exception but you "really really don't expect it to be thrown and really wouldn't know what to do if it were thrown", then you can programmatically "shrug your shoulders" and re-cast it as a RuntimeException or Error...
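A minimal sketch of that programmatic shrug (the close() call is just an example of a checked declaration you don't expect to fire):

try {
    resource.close(); // declares IOException, but "can't happen" here
} catch (IOException e) {
    // Re-cast: wrap the checked exception we can't act on as unchecked,
    // keeping it as the cause so nothing is lost.
    throw new RuntimeException(e);
}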
You can, in fact, use exception tunneling so that a generic exception (such as your ConfiguratorException) can carry more detail about what went wrong (such as a FileNotFoundException).
In general I would caution against this, however, as it is likely to be a leaky abstraction (no one should care whether your configurator is pulling its data from the filesystem, a database, across the network, or wherever).
If you are using checked exceptions then at least you'll know where and why your abstractions are leaky. IMHO, this is a good thing.
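A sketch of the tunneling idea, assuming ConfiguratorException accepts a cause (as the question's component exceptions presumably do):

try {
    config.load(new FileInputStream(configPath));
} catch (FileNotFoundException e) {
    // Tunnel the specific low-level failure inside the generic exception.
    throw new ConfiguratorException("could not load configuration", e);
}

A caller that cares can still inspect getCause() to discover the FileNotFoundException; one that doesn't can treat it generically.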
While poking around the questions, I recently discovered the assert keyword in Java. At first, I was excited. Something useful I didn't already know! A more efficient way for me to check the validity of input parameters! Yay learning!
But then I took a closer look, and my enthusiasm was not so much "tempered" as "snuffed-out completely" by one simple fact: you can turn assertions off.*
This sounds like a nightmare. If I'm asserting that I don't want the code to keep going if the input listOfStuff is null, why on earth would I want that assertion ignored? It sounds like if I'm debugging a piece of production code and suspect that listOfStuff may have been erroneously passed a null but don't see any logfile evidence of that assertion being triggered, I can't trust that listOfStuff actually got sent a valid value; I also have to account for the possibility that assertions may have been turned off entirely.
And this assumes that I'm the one debugging the code. Somebody unfamiliar with assertions might see that and assume (quite reasonably) that if the assertion message doesn't appear in the log, listOfStuff couldn't be the problem. If your first encounter with assert was in the wild, would it even occur to you that it could be turned-off entirely? It's not like there's a command-line option that lets you disable try/catch blocks, after all.
All of which brings me to my question (and this is a question, not an excuse for a rant! I promise!):
What am I missing?
Is there some nuance that renders Java's implementation of assert far more useful than I'm giving it credit for? Is the ability to enable/disable it from the command line actually incredibly valuable in some contexts? Am I misconceptualizing it somehow when I envision using it in production code in lieu of statements like if (listOfStuff == null) barf();?
I just feel like there's something important here that I'm not getting.
*Okay, technically speaking, they're actually off by default; you have to go out of your way to turn them on. But still, you can knock them out entirely.
Edit: Enlightenment requested, enlightenment received.
The notion that assert is first and foremost a debugging tool goes a long, long way towards making it make sense to me.
I still take issue with the notion that input checks for non-trivial private methods should be disabled in a production environment because the developer thinks the bad inputs are impossible. In my experience, mature production code is a mad, sprawling thing, developed over the course of years by people with varying degrees of skill targeted to rapidly changing requirements of varying degrees of sanity. And even if the bad input really is impossible, a piece of sloppy maintenance coding six months from now can change that. The link gustafc provided (thanks!) includes this as an example:
assert interval > 0 && interval <= 1000/MAX_REFRESH_RATE : interval;
Disabling such a simple check in production strikes me as foolishly optimistic. However, this is a difference in coding philosophy, not a broken feature.
In addition, I can definitely see the value of something like this:
assert reallyExpensiveSanityCheck(someObject) : someObject;
My thanks to everybody who took the time to help me understand this feature; it is very much appreciated.
assert is a useful piece of Design by Contract. In that context, assertions can be used in:
Precondition checks.
Postcondition checks.
Intermediate result checks.
Class invariant checks.
Assertions can be expensive to evaluate (take, for example, the class invariant, which must hold before and after calling any public method of your class). Assertions are typically wanted only in debug builds and for testing purposes; you assert things that can't happen, things whose occurrence is synonymous with a bug. Assertions verify your code against its own semantics.
Assertions are not an input validation mechanism. When input could really be correct or wrong in the production environment, i.e. for input-output layers, use other methods, such as exceptions or good old conditional checks.
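A minimal sketch of those four kinds of checks on a made-up class:

public class Counter {
    private int count;

    // Class invariant: the count never goes negative.
    private boolean invariant() {
        return count >= 0;
    }

    public void add(int delta) {
        assert delta > 0 : delta;          // precondition check
        assert invariant();                // invariant holds on entry
        int next = count + delta;
        assert next > count : "overflow";  // intermediate result check
        count = next;
        assert count >= delta;             // postcondition check
        assert invariant();                // invariant holds on exit
    }
}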
Java's assertions aren't really made for argument validation - it's specifically stated that assertions are not to be used instead of dear old IllegalArgumentException (and neither is that how they are used in C-ish languages). They are more there for internal validation, to let you make an assumption about the code which isn't obvious from looking at it.
As for turning them off, you do that in C(++), too, just that if someone's got an assert-less build, they have no way to turn it on. In Java, you just restart the app with the appropriate VM parameters.
Every language I've ever seen with assertions comes with the capability of shutting them off. When you write an assertion you should be thinking "this is silly, there's no way in the universe this could ever be false" -- if you think it could be false, it should be an error check. The assertion is just to help you during development if something goes horribly wrong; when you build the code for production you disable them to save time and avoid (hopefully) superfluous checks.
Assertions are meant to ensure things you are sure that your code fulfills really are fulfilled. It's an aid in debugging, in the development phase of the product, and is usually omitted when the code is released.
What am I missing?
You're not using assertions the way they were meant to be used. You said "check the validity of input parameters" - that's precisely the sort of things you do not want to verify with assertions.
The idea is that if an assertion fails, you 100% have a bug in your code. Assertions are often used for identifying the bug earlier than it would have surfaced otherwise.
I think it's about the way assert usage is interpreted and envisioned.
If you really want a check in your actual production code, why not use an if statement or any other conditional directly?
Since those already exist in the language, the idea of assert was to have developers add assertions only where they don't really expect the condition ever to happen.
E.g., checking an object for null: say a developer wrote a private method and calls it from two places in the class (not an ideal example, but it works for private methods) where he knows he passes a non-null object. Instead of adding an unnecessary if check, he adds an assertion, since as of today there is no way the object could be null.
But if someone tomorrow calls this method with a null argument, the assertion will catch it during the developer's unit testing, and the final code still doesn't need an if check.
Assertions are really a great and concise documentation tool for a code maintainer.
For example I can write:
// foo should be non-null and greater than 0
or put this into the body of the program:
assert foo != null;
assert foo.value > 0;
They are extremely valuable for documenting private/package-private methods, to express the original programmer's invariants.
As an added bonus, when the subsystem starts behaving flaky, you can turn asserts on and get extra validation instantly.
This sounds about right. Assertions are just a tool that is useful for debugging code - they should not be turned on all the time, especially in production code.
For example, in C or C++, assertions are disabled in release builds.
If asserts could not be turned off, why would they even exist?
If you want to perform a validity check on an input, you can easily write
if (foobar<=0) throw new BadFoobarException();
or pop up a message box or whatever is useful in context.
The whole point of asserts is that they are something that can be turned on for debugging and turned off for production.
Assertions aren't for the end user to see. They're for the programmer, so you can make sure the code is doing the right thing while it's being developed. Once the testing's done, assertions are usually turned off for performance reasons.
If you're anticipating that something bad is going to happen in production, like listOfStuff being null, then either your code isn't tested enough, or you're not sanitizing your input before you let your code have at it. Either way, an "if (bad stuff) { throw an exception }" would be better. Assertions are for test/development time, not for production.
Use an assert if you're willing to pay $1 to your end-user whenever the assertion fails.
An assertion failure should be an indication of a design error in the program.
An assertion states that I have engineered the program in such a way that I know and guarantee that the specified predicate always holds.
An assertion is useful to readers of my code, since they see that (1) I'm willing to set some money on that property; and (2) in previous executions and test cases the property did hold indeed.
My bet assumes that the client of my code sticks to the rules, and adheres to the contract he and I agreed upon. This contract can be tolerant (all input values allowed and checked for validity) or demanding (client and I agreed that he'll never supply certain input values [described as preconditions], and that he doesn't want me to check for these values over and over again).
If the client sticks to the rules, and my assertions nevertheless fail, the client is entitled to some compensation.
Assertions are there to indicate a problem in the code that may be recoverable, or to serve as an aid in debugging. You should use a more destructive mechanism, such as stopping the program, for more serious errors.
They can also be used in debugging and testing scenarios to catch an unrecoverable error before the application fails later, helping you narrow down a problem. Part of the reason for this split is so that integrity checking does not reduce the performance of well-tested code in production.
Also, in certain cases, such as a resource leak, the situation may not be desirable, but the consequences of stopping the program are worse than the consequences of continuing on.
This doesn't directly answer your question about assert, but I'd recommend checking out the Preconditions class in guava/google-collections. It allows you to write nice stuff like this (using static imports):
import static com.google.common.base.Preconditions.checkNotNull;

// throw NPE if listOfStuff is null
this.listOfStuff = checkNotNull(listOfStuff);
// same, but the NPE will have "listOfStuff" as its message
this.listOfStuff = checkNotNull(listOfStuff, "listOfStuff");
It seems like something like this might be what you want (and it can't be turned off).