While poking around the questions, I recently discovered the assert keyword in Java. At first, I was excited. Something useful I didn't already know! A more efficient way for me to check the validity of input parameters! Yay learning!
But then I took a closer look, and my enthusiasm was not so much "tempered" as "snuffed out completely" by one simple fact: you can turn assertions off.*
This sounds like a nightmare. If I'm asserting that I don't want the code to keep going if the input listOfStuff is null, why on earth would I want that assertion ignored? It sounds like if I'm debugging a piece of production code and suspect that listOfStuff may have been erroneously passed a null but don't see any logfile evidence of that assertion being triggered, I can't trust that listOfStuff actually got sent a valid value; I also have to account for the possibility that assertions may have been turned off entirely.
And this assumes that I'm the one debugging the code. Somebody unfamiliar with assertions might see that and assume (quite reasonably) that if the assertion message doesn't appear in the log, listOfStuff couldn't be the problem. If your first encounter with assert was in the wild, would it even occur to you that it could be turned off entirely? It's not like there's a command-line option that lets you disable try/catch blocks, after all.
All of which brings me to my question (and this is a question, not an excuse for a rant! I promise!):
What am I missing?
Is there some nuance that renders Java's implementation of assert far more useful than I'm giving it credit for? Is the ability to enable/disable it from the command line actually incredibly valuable in some contexts? Am I misconceptualizing it somehow when I envision using it in production code in lieu of statements like if (listOfStuff == null) barf();?
I just feel like there's something important here that I'm not getting.
*Okay, technically speaking, they're actually off by default; you have to go out of your way to turn them on. But still, you can knock them out entirely.
Edit: Enlightenment requested, enlightenment received.
The notion that assert is first and foremost a debugging tool goes a long, long way towards making it make sense to me.
I still take issue with the notion that input checks for non-trivial private methods should be disabled in a production environment because the developer thinks the bad inputs are impossible. In my experience, mature production code is a mad, sprawling thing, developed over the course of years by people with varying degrees of skill targeted to rapidly changing requirements of varying degrees of sanity. And even if the bad input really is impossible, a piece of sloppy maintenance coding six months from now can change that. The link gustafc provided (thanks!) includes this as an example:
assert interval > 0 && interval <= 1000/MAX_REFRESH_RATE : interval;
Disabling such a simple check in production strikes me as foolishly optimistic. However, this is a difference in coding philosophy, not a broken feature.
In addition, I can definitely see the value of something like this:
assert reallyExpensiveSanityCheck(someObject) : someObject;
My thanks to everybody who took the time to help me understand this feature; it is very much appreciated.
assert is a useful piece of Design by Contract. In that context, assertions can be used in:
Precondition checks.
Postcondition checks.
Intermediate result checks.
Class invariant checks.
Assertions can be expensive to evaluate (take, for example, the class invariant, which must hold before and after calling any public method of your class). Assertions are typically wanted only in debug builds and for testing purposes; you assert things that can't happen, things that are synonymous with a bug. Assertions verify your code against its own semantics.
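For instance, a class invariant check might look like this (a minimal sketch; the Interval class and its fields are illustrative, not from the original question):

public class Interval {
    private int lo, hi;

    // class invariant: must hold before and after every public method
    private boolean invariant() {
        return lo <= hi;
    }

    public void shift(int delta) {
        assert invariant();   // holds on entry
        lo += delta;
        hi += delta;
        assert invariant();   // still holds on exit
    }
}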
Assertions are not an input validation mechanism. When input could really be correct or wrong in the production environment, i.e. for input-output layers, use other methods, such as exceptions or good old conditional checks.
Java's assertions aren't really made for argument validation - it's specifically stated that assertions are not to be used instead of dear old IllegalArgumentException (and neither is that how they are used in C-ish languages). They are more there for internal validation, to let you make an assumption about the code which isn't obvious from looking at it.
As for turning them off: you do that in C(++) too, except that if someone ships an assert-less build, there is no way to turn them back on. In Java, you just restart the app with the appropriate VM parameters.
Every language I've ever seen with assertions comes with the capability of shutting them off. When you write an assertion you should be thinking "this is silly, there's no way in the universe this could ever be false" -- if you think it could be false, it should be an error check. The assertion is just to help you during development if something goes horribly wrong; when you build the code for production, you disable them to save time and avoid (hopefully) superfluous checks.
Assertions are meant to ensure things you are sure that your code fulfills really are fulfilled. It's an aid in debugging, in the development phase of the product, and is usually omitted when the code is released.
What am I missing?
You're not using assertions the way they were meant to be used. You said "check the validity of input parameters" - that's precisely the sort of things you do not want to verify with assertions.
The idea is that if an assertion fails, you 100% have a bug in your code. Assertions are often used for identifying the bug earlier than it would have surfaced otherwise.
I think it's a matter of how assert usage is interpreted and envisioned.
If you really want a check in your actual production code, why not use an if statement or any other conditional directly?
Since those already exist in the language, the idea of assert is for developers to add assertions only where they don't really expect the condition to ever happen.
E.g. checking an object for null: say a developer wrote a private method and called it from two places in the class (not an ideal example, but it works for private methods) where he knows he passes a non-null object. Instead of adding an unnecessary if check, he adds an assertion, since as of today there is no way the object could be null.
But if someone calls this method with a null argument tomorrow, the assertion catches it in the developer's unit testing, and the final code still doesn't need an if check.
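A minimal sketch of that scenario (class and method names are made up for illustration, not taken from any real code base):

class Item {
    private double price;
    double getPrice() { return price; }
    void setPrice(double p) { price = p; }
}

class Inventory {
    void restock(Item item) {
        applyDiscount(item);   // both call sites are known to pass non-null
    }

    void clearance(Item item) {
        applyDiscount(item);
    }

    private void applyDiscount(Item item) {
        // no if check needed today; the assert documents the assumption,
        // and verifies it during testing when the JVM runs with -ea
        assert item != null : "callers guarantee a non-null item";
        item.setPrice(item.getPrice() * 0.9);
    }
}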
Assertions are really a great and concise documentation tool for a code maintainer.
For example I can write:
// foo should be non-null and greater than 0
or put this into the body of the program:
assert foo != null;
assert foo.value > 0;
They are extremely valuable for documenting private/package-private methods to express the original programmer's invariants.
As an added bonus, when the subsystem starts to behave flakily, you can turn asserts on and add extra validation instantly.
This sounds about right. Assertions are just a tool that is useful for debugging code - they should not be turned on all the time, especially in production code.
For example, in C or C++, assertions are disabled in release builds.
If asserts could not be turned off, then why should they even exist?
If you want to perform a validity check on an input, you can easily write
if (foobar <= 0) throw new BadFoobarException();
or pop up a message box or whatever is useful in context.
The whole point of asserts is that they are something that can be turned on for debugging and turned off for production.
Assertions aren't for the end user to see. They're for the programmer, so you can make sure the code is doing the right thing while it's being developed. Once the testing's done, assertions are usually turned off for performance reasons.
If you're anticipating that something bad is going to happen in production, like listOfStuff being null, then either your code isn't tested enough, or you're not sanitizing your input before you let your code have at it. Either way, an "if (bad stuff) { throw an exception }" would be better. Assertions are for test/development time, not for production.
Use an assert if you're willing to pay $1 to your end-user whenever the assertion fails.
An assertion failure should be an indication of a design error in the program.
An assertion states that I have engineered the program in such a way that I know and guarantee that the specified predicate always holds.
An assertion is useful to readers of my code, since they see that (1) I'm willing to put some money on that property; and (2) in previous executions and test cases the property did indeed hold.
My bet assumes that the client of my code sticks to the rules, and adheres to the contract he and I agreed upon. This contract can be tolerant (all input values allowed and checked for validity) or demanding (client and I agreed that he'll never supply certain input values [described as preconditions], and that he doesn't want me to check for these values over and over again).
If the client sticks to the rules, and my assertions nevertheless fail, the client is entitled to some compensation.
Assertions indicate a problem in the code that may be recoverable, or serve as an aid in debugging. You should use a more destructive mechanism, such as stopping the program, for more serious errors.
They can also be used in debugging and testing scenarios to catch an unrecoverable error before the application fails later, helping you narrow down a problem. Part of the reason for this split is so that integrity checking does not reduce the performance of well-tested code in production.
Also, in certain cases, such as a resource leak, the situation may not be desirable, but the consequences of stopping the program are worse than the consequences of continuing on.
This doesn't directly answer your question about assert, but I'd recommend checking out the Preconditions class in guava/google-collections. It allows you to write nice stuff like this (using static imports):
import static com.google.common.base.Preconditions.checkNotNull;

// throw NPE if listOfStuff is null
this.listOfStuff = checkNotNull(listOfStuff);

// same, but the NPE will have "listOfStuff" as its message
this.listOfStuff = checkNotNull(listOfStuff, "listOfStuff");
It seems like something like this might be what you want (and it can't be turned off).
Related
I work on a large legacy Java 8 (Android) application. We recently found a bug that was caused by an ignored result of a method call. Specifically, a caller of a send() method didn't take the right actions when the sending failed. It's been fixed, but now I want to add some static analysis to help find existing bugs of the same nature in our code, and additionally to prevent new bugs of the same nature from being added in the future.
We already use FindBugs, PMD, Checkstyle, Lint, and SonarQube, so I figured that one of these probably already has the check I'm looking for and it just needs to be enabled. But after a few hours of searching and testing, I don't think that's the case.
For reference, this is the code I was testing with:
public class Application {
    public static void main(String[] args) {
        foo(); // I want this to be caught
        Bar aBar = new Bar();
        aBar.baz(); // I want this to be caught
    }

    static boolean foo() {
        return System.currentTimeMillis() % 2 == 0;
    }
}

public class Bar {
    boolean baz() {
        return System.currentTimeMillis() % 2 == 0;
    }
}
I want to catch this on the caller side, since some callers may use the value while others do not. (The send() method described above was this case.)
I found the following existing static analysis rules, but they only seem to apply in very specific circumstances (to avoid false positives) and don't work on my example:
Return values from functions without side effects should not be ignored (only for immutable classes in the Java API)
Method ignores exceptional return value (only for known methods like File.delete())
Method ignores return value (only for methods annotated with javax.annotation.CheckReturnValue I think...)
Method ignores return value, is this OK? (only when the return value is the same type as the type the method is invoked on)
Return value of method without side effect is ignored (only when the method does not produce any effect other than return value)
So far the best option seems to be #3, but it requires me to annotate EVERY method or class in my HUGE project. Java 9+ seems to allow annotating at the package level, but that's not an option for me. Even if it were, the project has A LOT of packages. I really would like a way to configure this for my whole project from one (or a few) locations instead of needing to modify every file.
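For reference, the class-level form of option #3 looks roughly like this (a sketch assuming the JSR-305 javax.annotation.CheckReturnValue annotation is on the classpath; tool support for class- and package-level placement varies, so treat this as something to verify against your FindBugs/SonarQube versions):

import javax.annotation.CheckReturnValue;

@CheckReturnValue   // intended to apply to every method in the class
public class Bar {
    boolean baz() {
        return System.currentTimeMillis() % 2 == 0;
    }
}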
Lastly I came across this Stack Overflow answer that showed me that IntelliJ has this check with a "Report all ignored non-library calls" option. Enabling it works as far as highlighting in the IDE goes, but I want this to cause a CI failure. I found there's a way to trigger the inspection from the command line using IntelliJ's tools, but it outputs an XML/JSON file and I'd need to write custom code to parse that output. I'd also need to install the IDE tools onto the CI machine, which seems like overkill.
Does anyone know of a better way to achieve what I want? I can't be the first person to care only about false negatives and not about false positives. I feel like it should be manageable to require that every currently unused return value either be logged or be explicitly marked as intentionally ignored, via an annotation or an assign-to-a-variable convention like the one Error Prone uses.
Scenarios like the one you describe invariably give rise to a substantial software defect (a true bug in every respect), made more frustrating and knotty because the code fails silently, which allows the problem to remain hidden. Your desire to identify any similar hidden defects (and correct them) is easy to understand; however, I humbly suggest that static code analysis may not be the best strategy:
Working from the concerns you express in your question: a CheckReturnValue rule runs a high risk of producing a cascade of //Ignore code comments, rule-suppression clauses, and/or @SuppressRule-style annotations that far outnumber the rule's positive defect detections.
The Java programming language further increases the likelihood of a high rule suppression count once you take garbage collection into account and consider how it shapes development practice. Because the collector reclaims any object instance that is no longer reachable, it makes perfect sense for Java developers to avoid holding unnecessary references, and to naturally adopt the practice of ignoring unimportant method call return values. The ignored instances simply fall off the local stack, become unreachable, and quickly undergo garbage collection.
Shifting now from a negative perspective to a positive one, I offer some alternatives for your consideration that (I believe) will improve your results, as well as your probability of reaching a successful outcome.
Based on your description of the scenario and the resulting defect/bug, it feels like the proximate root cause of the problem is a unit testing failure or an integration testing failure. For a send operation that may (and almost certainly will, at some point) fail, both unit testing and integration testing absolutely should have incorporated multiple possible failure scenarios and verified the failure handling. I obviously don't know, but I'm willing to bet that if you focus on creating and running unit tests and integration tests, the quality of the system will improve at every step, the improvements will be clearly evident, and you may very well uncover some or all of the hidden bugs that are the cause of your current concern, trepidation, stress, and worry.
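As a concrete illustration of that kind of failure-path test, consider something like the following JUnit 4/Mockito sketch (Sender, Caller, and all the method names here are hypothetical stand-ins, not from your code base):

import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class SendFailureTest {

    // hypothetical collaborator and caller, stubbed out for the sketch
    interface Sender {
        boolean send(String message);
    }

    static class Caller {
        private final Sender sender;
        private boolean lastDeliveryFailed;

        Caller(Sender sender) { this.sender = sender; }

        void deliver(String message) {
            // the bug class under discussion: this result must not be ignored
            lastDeliveryFailed = !sender.send(message);
        }

        boolean lastDeliveryFailed() { return lastDeliveryFailed; }
    }

    @Test
    public void callerReactsWhenSendFails() {
        Sender sender = mock(Sender.class);
        when(sender.send("hello")).thenReturn(false);  // force the failure path

        Caller caller = new Caller(sender);
        caller.deliver("hello");

        assertTrue(caller.lastDeliveryFailed());
    }
}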
Consider keeping the gist of your current static code analysis research alive, but shift your approach in a new direction. The first time I read your question, I was struck by the realization that the checks you would like to perform exist in multiple unrelated locations across the code base and are quickly becoming overly complex; the specific details of the checks differ in many sections of code, and each special case makes the overall effort unrealistic. Basically, what you would like to implement is a cross-cutting goal that falls across a sizable section of the code base, and the implementation details have made a fairly simple good idea ridiculously complex. Your question is almost a textbook example of a problem that is best implemented with a cross-cutting, aspect-oriented approach.
If you have the time and interest, please take a look at the AspectJ framework, maybe code a few exploratory aspects, and let me know what you think. I'd like to hear your thoughts, if you feel like having a geeky dev conversation at some point. I really hope this is helpful.
You may use IntelliJ IDEA's inspection Java | Probable bugs | Result of method call ignored with the "Report all ignored non-library calls" option enabled. It catches both cases provided in your code sample.
We usually put unnecessary checks in our business logic to avoid failures.
E.g.

1.

public ObjectABC funcABC() {
    ObjectABC obj = new ObjectABC();
    ..........
    ..........
    // it's never set to null here.
    ..........
    return obj;
}

ObjectABC o = funcABC();
if (o != null) {
    // do something
}
Why do we need this null check if we are sure that it will never be null?
Is it a good practice or not?
2.

int pplReached = funA(..,..,..);
int totalPpl = funB(..,..,..);

funA() just puts a few more restrictions on the result of funB().

Double percentage = (totalPpl == 0 || totalPpl < pplReached) ? 0.0 : (double) pplReached / totalPpl; // cast to avoid integer division

Do we need the 'totalPpl < pplReached' check?
The question is: aren't we swallowing some fundamental issue by putting in such checks? Issues that ideally should surface are hidden by these checks.
What is the recommended way?
Think about your audience. A check is worthwhile when it
helps you, the programmer, detect errors,
helps other programmers detect errors where their code meets yours,
allows the program to recover from bad input or an invalid state, or
helps a maintainer avoid introducing errors later.
If your null check above does not fall into these, or there is a simpler mechanism which would do the same, then leave it out.
Simpler mechanisms often include:
unit tests.
annotations that communicate intent to the reader and can be checked by FindBugs or similar tools.
asserts that cause code to fail early and communicate intent, without requiring you to write error-handling code that should never be reached and without confusing code coverage tools.
documentation or inline comments.
In this case, I would suggest adding an annotation

public @Nonnull ObjectABC funcABC() {

integrating FindBugs into your build process, and maybe replacing

if (o != null) {
    // do something
}

with

assert o != null : "funcABC() should have allocated a new instance or failed.";
Aren't we swallowing some fundamental issue by putting such checks?
As a rule of thumb,
unit tests are good for checking the behavior of a small piece of code. If you can't write unit tests for important functions, then the fundamental issue is that you aren't writing testable code.
annotations are good for conveying intent to code reviewers, maintainers, and automated tools. If you haven't integrated those tools into your process, then the fundamental issue is that you aren't taking advantage of available code quality tools.
asserts are good for double-checking your assumptions. If you can't sprinkle asserts into your code and quickly tell which are being violated then your fundamental problem is that you don't have a quick way to run your code against representative data to shake out problems.
documentation and inline comments (including source control comments) are good for spreading knowledge about the system among the team -- making sure that more than one person on the team can fix a problem in any part of the code. If they are constantly missing or out-of-sync, then the underlying problem is that you are not writing code with maintainers in mind.
Finally, design by contract is a programming methodology that many have found useful for business logic code. Even if you can't convince your team to adopt the specific tools and practices, reading up on DbC might still help you reason and explain how to enforce the important invariants in your codebase.
I've read (e.g. from Martin Fowler) that we should use guard clauses instead of a single return in a (short) method in OOP. I've also read (from somewhere I don't remember) that the else clause should be avoided when possible.
But my colleagues (I work in a small team of only three) force me not to use multiple returns in a method, and to use the else clause as much as possible, even if there is only one comment line in the else block.
This makes it difficult for me to follow their coding style because, for example, I cannot view all the code of a method on one screen. And when I code, I have to write the guard clauses first, and then try to convert the method into the form without multiple returns.
Am I wrong or what should I do with it?
This is an arguable and purely aesthetic question.
Early returns have historically been avoided in C and similar languages, since an early return could skip the resource cleanup that is usually placed at the end of the function.
Given that Java has exceptions and try/catch/finally, there's no need to fear early returns.
Personally, I agree with you, since I use early returns often - that usually means less code and a simpler code flow with less if/else nesting.
A guard clause is a good idea because it clearly indicates that the current method is not interested in certain cases. When you make clear at the very beginning of the method that it doesn't deal with some cases (e.g. when some value is less than zero), the rest of the method is a pure implementation of its responsibility.
There is an even stronger case for guard clauses: statements that validate input and throw exceptions when some argument is unacceptable, e.g. null. In that case you don't want to proceed with execution, but wish to throw at the very beginning of the method. That is where guard clauses are the best solution, because you don't want to mix exception-throwing logic with the core of the method you're implementing.
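For illustration, here are both kinds of guard clause in one small Java method (a sketch; the User type and its methods are made up):

public String describe(User user) {
    // guard clause that validates input and throws
    if (user == null) {
        throw new IllegalArgumentException("user must not be null");
    }
    // guard clause that says "this method is not interested in this case"
    if (!user.isActive()) {
        return "inactive";
    }
    // from here on, the method is the pure implementation of its responsibility
    return "active: " + user.getName();
}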
When talking about guard clauses that throw exceptions, here is one article about how you can simplify them in C# using extension methods: How to Reduce Cyclomatic Complexity: Guard Clause. Though that method is not available in Java, it is useful in C#.
Have them read http://www.cis.temple.edu/~ingargio/cis71/software/roberts/documents/loopexit.txt and see if it will change their minds. (There is history to their idea, but I side with you.)
Edit: Here are the critical points from the article. The principle of single exits from control structures was adopted on principle, not observational data. But observational data says that allowing multiple ways of exiting control structures makes certain problems easier to solve accurately, and does not hurt readability. Disallowing it makes code harder and more likely to be buggy. This holds across a wide variety of programmers, from students to textbook writers. Therefore we should allow and use multiple exits where appropriate.
I'm in the multiple-return/return-early camp and I would lobby to convince other engineers of this. You can have great arguments and cite great sources, but in the end, all you can do is make your pitch, suggest compromises, come to a decision, and then work as a team, whichever way it works out. (Although revisiting the topic from time to time isn't out of the question either.)
This really just comes down to style and, in the grand scheme of things, a relatively minor one. Overall, you're a more effective developer if you can adapt to either style. If this really "makes it difficult ... to follow their coding style", then I suggest you work on it, because in the end, you'll end up the better engineer.
I had an engineer once come to me and insist he be given dispensation to follow his own coding style (and we had a pretty minimal set of guidelines). He said the established coding style hurt his eyes and made it difficult for him to concentrate (I think he may have even said "nauseous"). I told him that he was going to work on a lot of people's code, not just code he wrote, and vice versa; if he couldn't adapt to work with the agreed-upon style, I couldn't use him, and maybe this type of collaborative project wasn't the right place for him. Coincidentally, it was less of an issue after that (although every code review was still a battle).
My issue with guard clauses is that 1) they can be easily dispersed through code and be easy to miss (this has happened to me on multiple occasions) and 2) I have to remember which code has been "ejected" as I trace code blocks which can become complex and 3) by setting code within if/else you have a contained set of code that you know executes for a given set of criteria. With guard conditions, the criteria is EVERYTHING minus what the guard has ejected. It is much more difficult for me to get my head around that.
I wonder if a lot of people program in Java with assertions. I think this can be very useful on large projects without enough written contracts or outdated contracts. Particularly when you use webservices, components, etc.
But I have never seen any project using assertions (except in JUnit tests and the like...).
I've noticed that the thrown class is an Error and not an Exception. Why did they choose an error? Could it be because an exception might be unexpectedly caught and not logged/rethrown?
If you develop an application with components, I wonder where you put the assertions:
On the component side, just before returning the data through the public API?
On the component client side? And if the API is called everywhere, do you set up a facade that will call the assertion mechanism? (Then I guess you put your assertions and facade in some external project, and your client projects will depend on this assertion project?)
I understand how to use assertions, and when use them, but just wonder if some people have recommendations based on a real experience of assertions.
By the way, do you refer to assert in Java?
I personally find assertions especially useful for invariants.
Take into account that assertion checking is turned off by default in Java. You have to add the -ea flag to enable assertion checking.
In other words, you can test your application in a kind of debug mode, where the program will halt once an assertion is broken. On the other hand, the release application will have its assertions turned off and will not incur a time penalty for assertion checking; they will simply be ignored.
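For reference, this is what enabling and disabling look like on the command line (the class and package names are illustrative):

java -ea com.example.Main                        # enable assertions everywhere
java -ea:com.example.model... com.example.Main   # enable only for a package and its subpackages
java -da com.example.Main                        # explicitly disable (also the default)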
In Java, assertions are far less powerful than exceptions and have totally different meanings. Exceptions are there when something unexpected happens and you have to handle it. Assertions are about the correctness of your code. They are here to confirm that what 'should be' is indeed the case.
My rough policy, especially when working with many developers:
public methods: always check arguments and throw IllegalArgumentException when something is wrong
private methods: use assertions to check arguments against null pointers and so on
complex methods: intermediate assertions to ensure that the intermediate results satisfy requested properties
...but actually, I use them sparingly, just in critical or error-prone places.
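Sketched out, that policy looks something like this (AccountService and its methods are illustrative names, not from any real code):

class Account {
    long balanceCents;
}

public class AccountService {
    // public method: always check arguments and throw
    public void credit(Account account, int amountCents) {
        if (account == null) {
            throw new IllegalArgumentException("account must not be null");
        }
        if (amountCents <= 0) {
            throw new IllegalArgumentException("amount must be positive: " + amountCents);
        }
        applyCredit(account, amountCents);
    }

    // private method: assertions instead of argument checks
    private void applyCredit(Account account, int amountCents) {
        assert account != null;
        assert amountCents > 0 : amountCents;
        long newBalance = account.balanceCents + amountCents;
        // intermediate assertion: the computed result satisfies the expected property
        assert newBalance > account.balanceCents : "balance did not increase";
        account.balanceCents = newBalance;
    }
}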
Regarding the minor usage of asserts, I think it was a bad decision to have assertions disabled by default.
Regarding extending Error: I suppose that's because Errors are exceptions that are not expected to be caught. That way, when your code has a catch (Exception), the assertion isn't caught and swallowed.
And as for usage, the best places are preconditions, postconditions, or any invariant in the middle of the code you want to check.
In my opinion, errors in Java should be treated like exceptions. Therefore I would enable assertions in development and use them in private methods, to check that my code is running fine and doesn't pass invalid values to private methods.
Since those checks should already be made in public methods, I wouldn't check again in private methods.
To disable assertions:
pass the -da flag to the JVM (assertions are also disabled by default).
In my opinion, in public methods you should check, and either manage the exceptions or log them yourself.
Assertions should not be used outside tests because they can be turned off in a production environment, which may cause serious problems due to the lack of proper checking.
However, I've seen the statement that it is acceptable to use them to check parameters in private methods, because you assume that data which managed to reach your private method is correct, and if it isn't, the application may fail hard.
I help maintain and build on a fairly large Swing GUI, with a lot of complex interaction. Often I find myself fixing bugs that are the result of things getting into odd states due to some race condition somewhere else in the code.
As the code base gets large, I've found it's gotten less consistent about specifying via documentation which methods have threading restrictions: most commonly, methods that must be run on the Swing EDT. Similarly, it would be useful to know, and to have statically visible, which (of our custom) listeners are specified to be notified on the EDT.
So it came to me that this should be something that could be easily enforced using annotations. Lo and behold, there exists at least one static analysis tool, CheckThread, that uses annotations to accomplish this. It seems to allow you to declare a method to be confined to a specific thread (most commonly the EDT), and will flag methods that try to call that method without also declaring themselves as confined to that thread.
So on the surface this just seems like a low-pain, huge-gain addition to the source and build cycle. My questions are:
Are there any success stories for people using CheckThread or similar libraries to enforce threading constraints? Any stories of failure? Why did it succeed/fail?
Is this good in theory? Are there theoretical downsides?
Is this good in practice? Is it worth it? What kind of value has it delivered?
If it works in practice, what are good tools to support this? I've just found CheckThread but admit I'm not entirely sure what I'm searching for to find other tools that do the same thing.
I know whether it's right for us depends on our scenario. But I've never heard of people using something like this in practice, and to be honest it doesn't seem to have taken hold much from some general browsing. So I'm wondering why.
This answer is more focused on the theory aspect of your question.
Fundamentally you are making an assertion: "This method runs only under certain threads." This assertion isn't really different from any other assertion you might make ("This method accepts only integers less than 17 for parameter X"). The issues are:
Where do such assertions come from?
Can static analyzers check them?
Where do you get such a static analyzer?
Mostly such assertions have to come from the software designers, as they are the only people who know the intentions. The traditional term for this is "Design by Contract", although most DBC schemes cover only the current program state (C's assert macro), when they should really cover the program's past and future states ("temporal assertions"), e.g., "This routine will allocate a block of storage, and eventually some piece of code will deallocate it". One can build tools that try to determine heuristically what the assertions are (e.g., Engler's assertion induction work; others have done work in this area). That's useful, but the false positives are an issue. As a practical matter, asking the designers to code such assertions doesn't seem particularly onerous, and is really good long-term documentation. Whether you code such assertions with a specific "contract" language construct, with an if statement ("if Debug && Not(assertion) Then Fail();"), or hide them in an annotation is really just a matter of convenience. It's nice when the language allows you to code such assertions directly.
Checking such assertions statically is difficult. If you stick with current-state-only assertions, the static analyzer pretty much has to do full data flow analysis of your entire application, because the information needed to satisfy the assertion likely comes from data created by another part of the application. (In your case, the "inside EDT" signal has to come from analyzing the whole call graph of the application to see if there is any call path that leads to the method from a thread which is NOT the EDT thread.) If you use temporal properties, the static check pretty much needs some kind of state-space verification logic in addition; these are presently still pretty much research tools. Even with all this machinery, static analyzers generally have to be "conservative" in their analyses; if they can't demonstrate that something is false, they pretty much have to assume it is true, because of the halting problem.
Where do you get such analyzers? Given all the machinery needed, they're hard to build, so you should expect them to be rare. If somebody has built one, great. If not... as a general rule, you don't want to do this yourself from scratch. The best long-term hope is to have generic program analysis machinery available on which to build such analyzers, to amortize the cost of building all the infrastructure. (I build program analyzer tool foundations; see our DMS Software Reengineering Toolkit.)
One way to make it "easier" to build such static analyzers is to restrict the cases they handle to narrow scope, e.g., CheckThread. I'd expect CheckThread to do exactly what it presently does, and it would be unlikely to get a lot stronger.
The reason that "assert" macros and other such dynamic "current state" checks are popular is that they can actually be implemented by a simple runtime test. That's pretty practical. The problem here is that you may never exercise a path that leads to a failed condition. So, for dynamic analysis, absence of detected failure is not really evidence of correctness. Still, it feels good.
Bottom line: static analyzers and dynamic analyzers each have their strengths.
We haven't tried any static analysis tools, but we've used AspectJ to write a simple aspect that detects at runtime when any code in java.awt or javax.swing is invoked outside the EDT. It has found several places in our code that were missing a SwingUtilities.invokeLater(). We run with this aspect enabled throughout our QA cycle, then turn it off shortly before release.
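A minimal sketch of what such an aspect might look like (the pointcut details here are illustrative guesses at the approach described, not the poster's actual code):

public aspect EdtViolationDetector {
    // any call into Swing or AWT from our code, excluding the aspect itself
    pointcut swingCall() :
        (call(* javax.swing..*.*(..)) || call(* java.awt..*.*(..)))
        && !within(EdtViolationDetector);

    before() : swingCall() {
        if (!javax.swing.SwingUtilities.isEventDispatchThread()) {
            System.err.println("Swing/AWT call off the EDT: " + thisJoinPointStaticPart);
        }
    }
}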
As requested, this doesn't pertain specifically to Java or the EDT, but I've seen good results with Coverity's concurrency static analysis checkers for C/C++. They did have a higher false positive rate than less complicated checkers, but the code owners seemed willing to put up with that, given how hard threading bugs can be to find via testing. The details are confidential, I'm afraid, but Dawson Engler's public papers (e.g., "Bugs as Deviant Behavior") are very good on the general approach of "The following N instances of your code do X before doing Y; this instance doesn't."