I currently have a disagreement going on with my second-year Java professor that I'm hoping y'all could help settle:
The code we started with was this:
public T peek()
{
    if (isEmpty())
        .........
}

public boolean isEmpty()
{
    return topIndex < 0;
}
And she wants us to remove the isEmpty() reference and place its code directly into the if statement (i.e. change the peek method contents to if (topIndex < 0) .......) to "make the code more efficient". I have argued that a) the runtime/compile-time optimizer would most likely have inlined the isEmpty() call, b) even if it didn't, the 5-10 machine operations would be negligible in nearly every situation, and c) it's just bad style because it makes the program less readable and less changeable.
So, I guess my question is:
Is there any runtime efficiency gained by inlining logic as opposed to just calling a method?
I have tried simple profiling techniques (i.e. a long loop and a stopwatch), but my tests have been inconclusive.
EDIT:
Thank you everyone for the responses! I appreciate you all taking the time. Also, I appreciate those of you who commented on the pragmatism of arguing with my professor, and especially of doing so without data. @Mike Dunlavey I appreciate your insight as a former professor and your advice on the appropriate coding sequence. @ya_pulser I especially appreciate the profiling advice and links you took the time to share.
You are correct in your assumptions about Java code behaviour, but it is impolite to argue with your professor without data :). Arguing without data is pointless; prove your assumptions with measurements and graphs.
You can use JMH ( http://openjdk.java.net/projects/code-tools/jmh/ ) to create a small benchmark and measure the difference between:
inlined by hand (remove the isEmpty method and place its code at the call site)
inlined by the Java JIT compiler (HotSpot, after 100k (?) invocations - see the JIT's PrintCompilation output)
with HotSpot inlining disabled entirely
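For example, a minimal JMH benchmark comparing the first two variants might look like this (the class and field names are made up for illustration, and the stack fixture is simplified to a single index):

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
public class PeekBenchmark {
    private int topIndex = 42; // pretend the stack is non-empty

    private boolean isEmpty() {
        return topIndex < 0;
    }

    @Benchmark
    public boolean viaMethodCall() {
        return isEmpty(); // the JIT should inline this after warmup
    }

    @Benchmark
    public boolean inlinedByHand() {
        return topIndex < 0; // the professor's hand-inlined variant
    }
}

Running it once as-is and once with -XX:-Inline (which disables JIT inlining) covers the third case in the list above. Returning the boolean matters: JMH consumes return values, which keeps the JIT from optimizing the whole benchmark body away.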
Please read http://www.oracle.com/technetwork/java/whitepaper-135217.html#method
Useful parameters could be:
-Djava.compiler=NONE
-XX:+PrintCompilation
Plus each JDK version has its own set of parameters to control the JIT.
If you create a set of graphs from the results of your research and politely present them to the professor, I think it will benefit you in the future.
I think that https://stackoverflow.com/users/2613885/aleksey-shipilev can help with JMH-related questions.
BTW: I once had great success when I inlined plenty of methods into a single huge code loop to achieve maximum speed for a neural network backpropagation routine, because Java was (is?) too reluctant to inline methods within methods within methods. It was unmaintainable and fast :(.
Sad...
I agree with your intuitions about it, especially "the 5-10 machine operations would be negligible in nearly every situation".
I was a C.S. professor a long time ago.
On one hand, professors need all the slack you can give them.
Teaching is very demanding. You can't have a bad day.
If you show up for a class and you're not fully prepared, you're in for a rough ride.
If you give a test on Friday and don't have the grades on Monday the students will say "But you had all weekend!"
You can get satisfaction from seeing your students learn, but you yourself don't learn much, except how to teach.
On the other hand, few professors have much practical experience with real software.
So their opinions tend to be founded on various dogmatic certitudes rather than solid pragmatism.
Performance is a perfect example of this.
They tend to say "Don't do X; do Y because it performs better," which completely misses the point about performance problems - you have to deal in fractions, not absolutes. Everything depends on what else is going on.
The way to approach performance is, as someone said "First make it right. Then make it fast."
And the way you make it fast is not by eyeballing the code (and wondering "should I do this, or should I do that"), but by running it and letting it tell you how it's spending time.
That is the basic idea of profiling.
Now there is such a thing as bad profiling and good profiling, as explained in the second answer here (and usually when professors do teach profiling, they teach the bad kind), but that's the way to go.
As you say, the difference will be small, and in most circumstances the readability should be a higher priority. In this case though, since the extra method consists of a single line, I'm not sure this adds any real readability benefit unless you're calling the same method from elsewhere.
That said, remember that your lecturer's target is to help you learn computer science, and this is a different priority than writing production code. Particularly, she won't want you to be leaving optimization to automated tools, since that doesn't help your learning.
Also, just a practical note - in school and in professional development, we all have to adhere to coding standards we personally disagree with. It's an important skill, and really is necessary for team working, even if it does chafe.
Calling isEmpty is idiomatic and nicely readable. Manually inlining it would be a micro-optimization, something best done in performance-critical situations, and only after a bottleneck has been confirmed by benchmarking in the intended production environment.

Is there a real performance benefit in manually inlining? Theoretically yes, and maybe that's what the lecturer wanted to emphasize. In practice, I don't think you'll find an absolute answer: the automatic inlining behavior may be implementation-dependent, and benchmark results will depend on the JVM implementation, version, and platform. For that reason, this kind of optimization can be useful in rare extreme situations, but is in general detrimental to portability and maintainability.

By the same logic, should we inline all methods, eliminating all indirection at the expense of duplicating large blocks of code? Definitely not. Where exactly you draw the line between decomposition and inlining may also depend on personal taste, to some degree.
Another form of data to look at could be the code that is generated. See the -XX:+PrintAssembly option and friends. See How to see JIT-compiled code in JVM? for more information.
I'm confident that in this particular case the Hotspot JVM will inline the call to isEmpty and there would be no performance difference.
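If you want to see the decision HotSpot actually makes, one way is a tiny driver run under the standard inlining diagnostics. The class below is a made-up stand-in for the question's stack, just hot enough to get compiled:

// Run with: java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining InlineCheck
// Once peek() gets hot enough to be JIT-compiled, isEmpty() should be reported as inlined.
public class InlineCheck {
    private static final int[] stack = new int[16];
    private static int topIndex = 0;

    static boolean isEmpty() {
        return topIndex < 0;
    }

    static int peek() {
        if (isEmpty())
            throw new IllegalStateException("empty stack");
        return stack[topIndex];
    }

    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += peek(); // hot loop, so the JIT compiles peek()
        }
        System.out.println(sum); // keep the result alive
    }
}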
Related
So this may sound simple, but I have a method with a for loop inside. Inside the for loop, the method createPrints needs a map of "parameters", which it gets from getParameters. Now, there are two types of reports: one has a general set of parameters, and the other has that general set plus a set of its own.
I have two options:
Either have two getParameters methods: one that is general, and another that is for rp2 but also calls the general one. If I do this, then it would make sense to add the conditional before the for loop, like this:
theMethod() {
    if (rp1) {
        for loop {
            createPrints(getgenParameters())
            do general forloop stuff
        }
    } else {
        for loop {
            createPrints(getParameters())
            do general forloop stuff
        }
    }
}
This way it only checks once which parameters method to call, instead of having the if statement inside the loop where it is checked every iteration (which is bad, because the report type never changes during the loop). But repeating the for loop like this looks ugly and not clean at all. Is there a cleaner way to design this?
The other option is to pass a boolean into the getParameters method and check inside it which type of report it is, building the map based on that; this, however, also adds a conditional at each iteration.
From a performance perspective, it makes sense to have the conditional outside the loop so that it's not redundantly checked at each iteration, but it doesn't look clean, and my boss really cares about how clean code looks. He didn't like me using an if/else code block instead of doing it with a ternary operator, since the ternary only uses one line (I think performance is still the same, no?).
Forgot to mention: I am using Java, so I cannot assign functions to variables or use callbacks.
Inside the method, there was an if/else code block before the for loop, something like:
String aVariable;
if (condition) {
    aVariable = value1;
} else {
    aVariable = value2;
}
So I initially wanted to just create a boolean variable like isReport1 and assign it inside that if/else code block, since it uses the same condition, and then, as mentioned before, pass it in as the parameter. But my boss again said not to use booleans as parameters. Is this case the same? Should I not do it here?
Branch prediction is a property of the CPU, not of the compiler. All modern CPUs have it; don't worry.
Most compilers can pull a constant condition out of a loop, so don't worry again. Java does it, too. Details: the javac compiler does not do it; its job is to transform source code to bytecode and nothing else. But at runtime, the time-critical parts of the bytecode get compiled to machine code, and many optimizations happen there.
Forgot to mention: I am using Java, so I cannot assign functions to variables or use callbacks.
You sort of can: it's implemented via anonymous classes, and since Java 8 there are lambdas. Surely worth learning, but not relevant to your case.
I'd simply go for
for loop {
    createPrints(rp1 ? getgenParameters() : getParameters())
    do general forloop stuff
}
as it's the shortest and cleanest way for doing it.
There are tons of alternatives, like defining Parameters getSomeParameters(boolean rp1), which you can create easily by applying "extract method" to the above ternary.
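For illustration, that extracted method would be something like this (Parameters and the getter names are taken from the question; everything else is hypothetical):

// One branch, hidden behind a named method; the loop body stays clean.
Parameters getSomeParameters(boolean rp1) {
    return rp1 ? getgenParameters() : getParameters();
}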
From a performance perspective, it makes sense to have the conditional outside the loop so that it's not redundantly checked at each iteration, but it doesn't look clean, and my boss really cares about how clean code looks
From a performance perspective, it all doesn't matter. The compiler is damn smart and knows tons of optimizations. Just write clean code with short methods, so it can do its job properly.
he didn't like me using an if/else code block instead of doing it with a ternary operator, since the ternary only uses one line
A simple ternary one-liner makes the code much easier to understand, so you should go for it (complicated ternaries may be hard to grasp, but that's not the case here).
(I think performance is still the same, no?)
Definitely. Note that
most program parts are irrelevant to performance (Pareto principle)
low-level optimizations rarely matter (the compiler knows them better than you do)
clean code using proper data structures matters
Most "debates" about whether one way of writing some chunk of code is more efficient than another are missing the point.
The JIT compiler performs a lot of optimizations on your code behind the scenes. It (or more precisely the people who wrote it) knows a lot more about how to optimize than you do. And they have the advantage of having done extensive benchmarking and (machine-level) code walk-throughs and the like to figure out the best way to optimize.
The compiler can also optimize more reliably than a human can. And its developers are constantly improving it.
And the compiler does not need to be paid a salary to optimize code. Yes: every hour you spend on optimizing is time that you could be spending on something else.
So the best strategy for you when optimizing is as follows:
First get the code implemented. Time spent on optimizing while coding is probably wasted. That is classic "premature optimization" behavior. Your intuition is probably not good enough to make the right call at this stage. (Few people are genuinely smart enough ...)
Next get the code working and passing its tests. Optimizing buggy code is missing the point.
Set some realistic, quantifiable goals for performance. ("As fast as possible" is neither realistic nor quantifiable.) Your code just needs to be fast enough to meet the business requirements. If it is already fast enough, then you are wasting developer time by optimizing it further.
Create a realistic benchmark so that you can measure your application's performance. If you can't (or don't) measure, you won't know if you have met your goals, or if a particular attempted optimization has improved things.
Use profiling to determine which parts of your code are worth the effort optimizing. Profiling tells you the hotspots where most of the time is spent. Focus there ... not on stuff that is only executed occasionally. This is where the 80-20 rule comes in.
I would talk to my boss about this. If the boss is not on board with the idea of being methodical and scientific about performance / optimization, then don't fight it. Just do what the boss says. (But be willing to push back if you are blamed for missed deadlines that are due to time wasted on unnecessary optimization.)
There are two kinds of efficiency in Software Engineering:
The efficiency of the program
The efficiency of the programmer
These two efficiencies are in conflict.
How about this?

theMethod(getParametersCallback) {
    for loop {
        createPrints(getParametersCallback)
        do general forloop stuff
    }
}

. . . . .

if (rp1) {
    theMethod(getgenParameters)
} else {
    theMethod(getParameters)
}
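If you are on Java 8 or later, that pseudocode maps directly onto java.util.function.Supplier. A minimal sketch, with stub types standing in for the question's real classes:

import java.util.function.Supplier;

class ReportPrinter {
    static class Parameters {} // stand-in for the real parameter map

    Parameters getgenParameters() { return new Parameters(); }
    Parameters getParameters()    { return new Parameters(); }
    void createPrints(Parameters p) { /* print using the parameters */ }

    // The loop never branches on the report type; the caller decides once.
    void theMethod(Supplier<Parameters> parameterSource) {
        for (int i = 0; i < 10; i++) { // stands in for the real loop
            createPrints(parameterSource.get());
            // general for-loop stuff
        }
    }

    void run(boolean rp1) {
        if (rp1) {
            theMethod(this::getgenParameters);
        } else {
            theMethod(this::getParameters);
        }
    }
}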
I am a senior developer, so this appears to me to be a stupid question. My answer should be NO, or rather: WHAT? NO!!!
But I was in a meeting yesterday where I was explaining some PMD results. When we got to the "too long method name" issue, I started to explain, and the customer said: well, remember that a long method name has an impact on performance; the program runs slower.
I said: no, you are wrong. It is only a clean-code rule, and it is important for good code, but it has nothing to do with performance; the bytecode is similar with different names.
But the client, and some people in the meeting who argued for this, were sure about it. They had some projects in which long method names were the cause of poor performance.
The only idea I have is that some introspection or reflection mechanism is related to this, but apart from that, I am sure - or I thought I was sure - that method name length has no performance impact.
Any idea or suggestion about this?
Arguably it will take more space in memory and storage - a jar file containing classes with enormous method names will be larger than one with short names, for example.
However, any difference in performance is incredibly unlikely to be noticeable. I think it's almost certain that the projects where they were blaming long method names for poor performance were actually misdiagnosed. It's not like it would be the first time that's happened.
Of course, the best way to take the heat out of this situation is to provide evidence - if performance is important, you should have tests for performance. Run those tests with long method names, then refactor them to short method names and rerun the tests. I'd be incredibly surprised if there were a significant difference.
Method names are relevant not just for reflection but also during class loading, and of course in both cases a longer method name means that at some level there is more for the CPU to do. However, with method name lengths that are even remotely practical (i.e. not thousands of characters long), I am absolutely certain that this is insignificant compared to the other work that has to be done during reflection or class loading.
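If you want to poke at the reflection side yourself, here is a crude sketch; all the earlier caveats about stopwatch benchmarks apply, and JMH would be the proper tool. The class and method names are invented:

import java.lang.reflect.Method;

// Crude probe of name-length effects on reflective lookup. Not a rigorous benchmark.
public class NameLengthProbe {
    void a() {}
    void aVeryVeryVeryLongMethodNameIndeed() {}

    public static void main(String[] args) throws Exception {
        long sink = 0;
        long t0 = System.nanoTime();
        for (int i = 0; i < 100_000; i++) {
            sink += NameLengthProbe.class.getDeclaredMethod("a").getName().length();
        }
        long t1 = System.nanoTime();
        for (int i = 0; i < 100_000; i++) {
            sink += NameLengthProbe.class
                    .getDeclaredMethod("aVeryVeryVeryLongMethodNameIndeed").getName().length();
        }
        long t2 = System.nanoTime();
        System.out.printf("short: %d ms, long: %d ms, sink=%d%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, sink);
    }
}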
But the client, and some people in the meeting who argued for this, were sure about it. They had some projects in which long method names were the cause of poor performance.
It sounds like a total guess being treated as fact.
This is just a case of some people's general nuttiness about performance.
Even if they happen to be right, it's a total guess.
Every program has room for performance improvement by changing certain things.
Guessing does not inform you what those things are.
If two programs that do the same thing have different performance, it only means they've been optimized to different degrees.
Your challenge is to explain this.
Startup times will be affected positively if class names and member names are shortened. To that end one can use a bytecode shrinker.
For example, yguard (LGPL) can shrink code. It also allows you to deobfuscate stack traces for debugging purposes.
Manually assigning short class and member names for performance reasons is of course a horrible idea.
I can't see how it could possibly impact performance significantly unless you are pulling method names out yourself through reflection and then rendering them in a UI. That is obviously not the case here. So I'm just confused. Are you sure your client isn't confusing method names with file names, or thinking of the cases where some really old programming languages did not support very long method names? Depending on how old that person is, their judgement is definitely absurd to a computer scientist. If they can prove their point with facts, they may as well submit them to the ACM, Oracle/Sun, or MIT to verify their findings.
I think the length of a function name could impact performance in the following ways:
compile time from bytecode to binary code (in Java, .NET, ...): the bytecode still contains the file name, class name, and package name
if you use *.lib, *.dll, or *.so files, it may impact performance (in Android, for example, when you use native code)
when native code calls into a Java function (in Java, Android)
when a black box (lib file, app) connects to other black boxes (lib file, app), it uses the function name in the header file as the identification, so I think the length of the name will impact performance
I've read (e.g. from Martin Fowler) that we should use guard clauses instead of a single return in a (short) method in OOP. I've also read (somewhere I don't remember) that else clauses should be avoided when possible.
But my colleagues (I work in a small team with only 3 guys) force me not to use multiple returns in a method, and to use else clause as much as possible, even if there is only one comment line in the else block.
This makes it difficult for me to follow their coding style because, for example, I cannot view all the code of a method on one screen. And when I code, I have to write the guard clause first and then try to convert it into the form without multiple returns.
Am I wrong or what should I do with it?
This is arguable, and a purely aesthetic question.
Early return has historically been avoided in C and similar languages because it was easy to miss resource cleanup, which is usually placed at the end of the function, when returning early.
Given that Java has exceptions and try, catch, finally, there's no need to fear early returns.
Personally, I agree with you, and I return early often - it usually means less code and a simpler code flow with less if/else nesting.
Guard clause is a good idea because it clearly indicates that current method is not interested in certain cases. When you clear up at the very beginning of the method that it doesn't deal with some cases (e.g. when some value is less than zero), then the rest of the method is pure implementation of its responsibility.
There is one stronger case of guard clauses - statements that validate input and throw exceptions when some argument is unacceptable, e.g. null. In that case you don't want to proceed with execution but wish to throw at the very beginning of the method. That is where guard clauses are the best solution because you don't want to mix exception throwing logic with the core of the method you're implementing.
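For illustration, here is a minimal made-up Java example combining both kinds of guard clause - a validating guard that throws, and an early return for a trivial case:

// Guards first: reject bad input, dismiss the trivial case,
// then the rest of the method is purely its core responsibility.
static double average(int[] values) {
    if (values == null)
        throw new IllegalArgumentException("values must not be null");
    if (values.length == 0)
        return 0.0; // nothing to average

    long sum = 0;
    for (int v : values) {
        sum += v;
    }
    return (double) sum / values.length;
}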
When talking about guard clauses that throw exceptions, here is one article about how you can simplify them in C# using extension methods: How to Reduce Cyclomatic Complexity: Guard Clause. Though that method is not available in Java, it is useful in C#.
Have them read http://www.cis.temple.edu/~ingargio/cis71/software/roberts/documents/loopexit.txt and see if it will change their minds. (There is history to their idea, but I side with you.)
Edit: Here are the critical points from the article. The principle of single exits from control structures was adopted on principle, not observational data. But observational data says that allowing multiple ways of exiting control structures makes certain problems easier to solve accurately, and does not hurt readability. Disallowing it makes code harder to write and more likely to be buggy. This holds across a wide variety of programmers, from students to textbook writers. Therefore we should allow and use multiple exits where appropriate.
I'm in the multiple-return/return-early camp, and I would lobby to convince other engineers of this. You can have great arguments and cite great sources, but in the end, all you can do is make your pitch, suggest compromises, come to a decision, and then work as a team, whichever way it works out. (Although revisiting the topic from time to time isn't out of the question either.)
This really just comes down to style and, in the grand scheme of things, a relatively minor one. Overall, you're a more effective developer if you can adapt to either style. If this really "makes it difficult ... to follow their coding style", then I suggest you work on it, because in the end, you'll end up the better engineer.
I had an engineer once come to me and insist he be given dispensation to follow his own coding style (and we had a pretty minimal set of guidelines). He said the established coding style hurt his eyes and made it difficult for him to concentrate (I think he may have even said "nauseous"). I told him that he would be working on a lot of other people's code, not just code he wrote, and vice versa, and that if he couldn't adapt to the agreed-upon style, I couldn't use him, and maybe this type of collaborative project wasn't the right place for him. Coincidentally, it was less of an issue after that (although every code review was still a battle).
My issue with guard clauses is that 1) they can be easily dispersed through the code and so be easy to miss (this has happened to me on multiple occasions), 2) I have to remember which code has been "ejected" as I trace code blocks, which can become complex, and 3) by setting code within if/else you have a contained set of code that you know executes for a given set of criteria. With guard conditions, the criteria is EVERYTHING minus what the guards have ejected. That is much more difficult for me to get my head around.
When I receive code I have not seen before and need to refactor it into some sane state, I normally fix "cosmetic" things (like converting StringTokenizers to String#split(), replacing pre-1.2 collections with newer collections, making fields final, converting C-style arrays to Java-style arrays, ...) while reading the source code I have to get familiar with.
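For concreteness, the StringTokenizer-to-String#split conversion mentioned above looks like this (a sketch; note that the two are not behaviourally identical on all inputs, which is exactly the retesting risk raised in one of the answers below):

import java.util.StringTokenizer;

public class TokenizerCleanup {
    public static void main(String[] args) {
        String csv = "a,,b";

        // Before: the pre-1.4 idiom.
        StringTokenizer st = new StringTokenizer(csv, ",");
        while (st.hasMoreTokens()) {
            System.out.println(st.nextToken()); // prints "a", "b"
        }

        // After: String#split. Not behaviourally identical: split keeps
        // the empty token between the consecutive commas.
        for (String token : csv.split(",")) {
            System.out.println(token); // prints "a", "", "b"
        }
    }
}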
Are there many people using this strategy (maybe it is some kind of "best practice" I don't know about?), or is this considered too dangerous, with not touching old code unless absolutely necessary generally preferred? Or is it more common to combine the "cosmetic cleanup" step with the more invasive "general refactoring" step?
What are the common "low-hanging fruits" when doing "cosmetic clean-up" (vs. refactoring with more invasive changes)?
In my opinion, "cosmetic cleanup" is "general refactoring." You're just changing the code to make it more understandable without changing its behavior.
I always refactor by attacking the minor changes first. The more readable you can make the code quickly, the easier it will be to do the structural changes later - especially since it helps you look for repeated code, etc.
I typically start by looking at code that is used frequently and will need to be changed often, first. (This has the biggest impact in the least time...) Variable naming is probably the easiest and safest "low hanging fruit" to attack first, followed by framework updates (collection changes, updated methods, etc). Once those are done, breaking up large methods is usually my next step, followed by other typical refactorings.
There is no right or wrong answer here, as this depends largely on circumstances.
If the code is live, working, undocumented, and contains no testing infrastructure, then I wouldn't touch it. If someone comes back in the future and wants new features, I will try to work them into the existing code while changing as little as possible.
If the code is buggy, problematic, missing features, and was written by a programmer that no longer works with the company, then I would probably redesign and rewrite the whole thing. I could always still reference that programmer's code for a specific solution to a specific problem, but it would help me reorganize everything in my mind and in source. In this situation, the whole thing is probably poorly designed and it could use a complete re-think.
For everything in between, I would take the approach you outlined. I would start by cleaning up everything cosmetically so that I can see what's going on. Then I'd start working on whatever code stood out as needing the most work. I would add documentation as I come to understand how it works, to help me remember what's going on.
Ultimately, remember that if you're going to be maintaining the code now, it should be up to your standards. Where it's not, you should take the time to bring it up to your standards - whatever that takes. This will save you a lot of time, effort, and frustration down the road.
The lowest-hanging cosmetic fruit is (in Eclipse, anyway) shift-control-F. Automatic formatting is your friend.
The first thing I do is try to hide most of the things from the outside world. If the code is crappy, most of the time the guy who implemented it did not know much about data hiding and the like.
So my advice, first thing to do:
Turn as many members and methods private as you can without breaking the compilation.
As a second step, I try to identify the interfaces. I replace the concrete classes with the interfaces in all methods of related classes. This way you decouple the classes a bit.
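A tiny made-up example of that replacement:

import java.util.List;

class Renderer {
    // Before: void render(ArrayList<String> lines) - callers were tied to one implementation.
    // After: only the interface is exposed; callers may pass any List.
    void render(List<String> lines) {
        for (String line : lines) {
            System.out.println(line);
        }
    }
}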
Further refactoring can then be done more safely and locally.
You can buy a copy of Refactoring: Improving the Design of Existing Code by Martin Fowler; you'll find a lot of things you can do during your refactoring operation.
Plus you can use tools provided by your IDE and other code analyzers such as FindBugs or PMD to detect problems in your code.
Resources :
www.refactoring.com
wikipedia - List of tools for static code analysis in java
On the same topic :
How do you refactor a large messy codebase?
Code analyzers: PMD & FindBugs
By starting with "cosmetic cleanup" you get a good overview of how messy the code is and this combined with better readability is a good beginning.
I always (yeah, right... sometimes there's something called a deadline that messes with me) start with this approach, and it has served me very well so far.
You're on the right track. By doing the small fixes you'll be more familiar with the code and the bigger fixes will be easier to do with all the detritus out of the way.
Run a tool like JDepend, CheckStyle or PMD on the source. They can automatically flag loads of changes that are cosmetic but based on general refactoring rules.
I do not change old code except to reformat it using the IDE. There is too much risk of introducing a bug - or removing a bug that other code now depends upon! Or introducing a dependency that didn't exist such as using the heap instead of the stack.
Beyond the IDE reformat, I don't change code that the boss hasn't asked me to change. If something is egregious, I ask the boss if I can make changes and state a case of why this is good for the company.
If the boss asks me to fix a bug in the code, I make as few changes as possible. Say the bug is in a simple for loop. I'd refactor the loop into a new method. Then I'd write a test case for that method to demonstrate I have located the bug. Then I'd fix the new method. Then I'd make sure the test cases pass.
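For concreteness, a sketch of that extract-and-test sequence with invented names (JUnit 4 assumed):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class OffByOneTest {
    // Step 1: the suspect loop, extracted from the larger method.
    static int sumFirstN(int[] data, int n) {
        int sum = 0;
        for (int i = 0; i < n; i++) { // the suspected off-by-one lives here
            sum += data[i];
        }
        return sum;
    }

    // Step 2: a test that pins down the expected behaviour before the fix.
    @Test
    public void sumsExactlyThreeElements() {
        assertEquals(6, sumFirstN(new int[] {1, 2, 3, 4}, 3));
    }
}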
Yeah, I'm a contractor. Contracting gives you a different point of view. I recommend it.
There is one thing you should be aware of: the code you are starting with has been TESTED and approved, and your changes automatically mean that retesting must happen, as you may have inadvertently broken some behaviour elsewhere.
Besides, everybody makes errors. Every non-trivial change you make (changing StringTokenizer to split is not an automatic feature in e.g. Eclipse, so you write it yourself) is an opportunity for errors to creep in. Did you get the behaviour of a conditional exactly right, or did you by mere mistake forget a !?
Hence, your changes imply retesting. That work may be quite substantial and severely outweigh the small changes you have made.
I don't normally bother going through old code looking for problems. However, if I'm reading it, as you appear to be doing, and it makes my brain glitch, I fix it.
Common low-hanging fruits for me tend to be more about renaming classes, methods, fields etc., and writing examples of behaviour (a.k.a. unit tests) when I can't be sure of what a class is doing by inspection - generally making the code more readable as I read it. None of these are what I'd call "invasive" but they're more than just cosmetic.
From experience it depends on two things: time and risk.
If you have plenty of time then you can do a lot more, if not then the scope of whatever changes you make is reduced accordingly. As much as I hate doing it I have had to create some horrible shameful hacks because I simply didn't have enough time to do it right...
If the code you are working on has lots of dependencies or is critical to the application then make as few changes as possible - you never know what your fix might break... :)
It sounds like you have a solid idea of what things should look like so I am not going to say what specific changes to make in what order 'cause that will vary from person to person. Just make small localized changes first, test, expand the scope of your changes, test. Expand. Test. Expand. Test. Until you either run out of time or there is no more room for improvement!
BTW When testing you are likely to see where things break most often - create test cases for them (JUnit or whatever).
EXCEPTION:
Two things that I always find myself doing are reformatting (CTRL+SHIFT+F in Eclipse) and commenting code that is not obvious. After that I just hammer the most obvious nail first...
I help maintain and build on a fairly large Swing GUI, with a lot of complex interaction. Often I find myself fixing bugs that are the result of things getting into odd states due to some race condition somewhere else in the code.
As the code base gets large, I've found it's gotten less consistent about specifying via documentation which methods have threading restrictions: most commonly, methods that must be run on the Swing EDT. Similarly, it would be useful to know and provide static awareness into which (of our custom) listeners are notified on the EDT by specification.
So it came to me that this should be something that could be easily enforced using annotations. Lo and behold, there exists at least one static analysis tool, CheckThread, that uses annotations to accomplish this. It seems to allow you to declare a method to be confined to a specific thread (most commonly the EDT), and will flag methods that try to call that method without also declaring themselves as confined to that thread.
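The flavor of the idea, sketched here with a hand-rolled annotation rather than CheckThread's actual API (which I won't try to reproduce):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import javax.swing.SwingUtilities;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface ConfinedToEdt {} // a static checker would flag callers not proven to be on the EDT

class StatusPanel {
    @ConfinedToEdt
    void updateLabel(String text) {
        // Runtime fallback for when no static checker is wired into the build:
        assert SwingUtilities.isEventDispatchThread() : "must be called on the EDT";
        // ... mutate Swing components here ...
    }
}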
So on the surface this just seems like a low-pain, huge-gain addition to the source and build cycle. My questions are:
Are there any success stories for people using CheckThread or similar libraries to enforce threading constraints? Any stories of failure? Why did it succeed/fail?
Is this good in theory? Are there theoretical downsides?
Is this good in practice? Is it worth it? What kind of value has it delivered?
If it works in practice, what are good tools to support this? I've just found CheckThread but admit I'm not entirely sure what I'm searching for to find other tools that do the same thing.
I know whether it's right for us depends on our scenario. But I've never heard of people using something like this in practice, and to be honest it doesn't seem to have taken hold much from some general browsing. So I'm wondering why.
This answer is more focused on the theory aspect of your question.
Fundamentally you are making an assertion: "this method runs only under certain threads". This assertion isn't really different from any other assertion you might make ("this method accepts only integers less than 17 for parameter X"). The issues are:
Where do such assertions come from?
Can static analyzers check them?
Where do you get such a static analyzer?
Mostly such assertions have to come from the software designers, as they are the only people who know the intentions. The traditional term for this is "Design by Contract", although most DbC schemes only cover the current program state (C's assert macro) when they should really cover the program's past and future states ("temporal assertions"), e.g., "this routine will allocate a block of storage, and eventually some piece of code will deallocate it".
One can build tools that try to determine heuristically what the assertions are (e.g., Engler's assertion-induction work; others have done work in this area). That's useful, but the false positives are an issue. As a practical matter, asking the designers to code such assertions doesn't seem particularly onerous, and it is really good long-term documentation. Whether you code such assertions with a specific "contract" language construct, with an if statement ("if Debug && Not(assertion) Then Fail();"), or hidden in an annotation is really just a matter of convenience. It's nice when the language allows you to code such assertions directly.
Checking such assertions statically is difficult. If you stick with current-state assertions only, the static analyzer pretty much has to do full data-flow analysis of your entire application, because the information needed to satisfy the assertion likely comes from data created by another part of the application. (In your case, the "inside EDT" signal has to come from analyzing the whole call graph of the application to see if there is any call path that leads to the method from a thread which is NOT the EDT thread.) If you use temporal properties, the static checker pretty much needs some kind of state-space verification logic in addition; these are presently still pretty much research tools. Even with all this machinery, static analyzers generally have to be "conservative" in their analyses; if they can't demonstrate that something is false, they pretty much have to assume it is true, because of the halting problem.
Where do you get such analyzers? Given all the machinery needed, they're hard to build, so you should expect them to be rare. If somebody has built one, great. If not... as a general rule, you don't want to do this yourself from scratch. The best long-term hope is to have generic program-analysis machinery available on which to build such analyzers, to amortize the cost of building all the infrastructure. (I build program-analyzer tool foundations; see our DMS Software Reengineering Toolkit.)
One way to make it "easier" to build such static analyzers is to restrict the cases they handle to narrow scope, e.g., CheckThread. I'd expect CheckThread to do exactly what it presently does, and it would be unlikely to get a lot stronger.
The reason that "assert" macros and other such dynamic "current state" checks are popular is that they can actually be implemented by a simple runtime test. That's pretty practical. The problem here is that you may never exercise a path that leads to a failed condition. So, for dynamic analysis, absence of detected failure is not really evidence of correctness. It still feels good, though.
Bottom line: static analyzers and dynamic analyzers each have their strength.
We haven't tried any static analysis tools, but we've used AspectJ to write a simple aspect that detects at runtime when any code in java.awt or javax.swing is invoked outside the EDT. It has found several places in our code that were missing a SwingUtilities.invokeLater(). We run with this aspect enabled throughout our QA cycle, then turn it off shortly before release.
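A minimal sketch of what such an aspect can look like (annotation-style AspectJ; the pointcut below covers javax.swing calls only and is an illustration, not the exact aspect described above):

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import javax.swing.SwingUtilities;

@Aspect
public class EdtCheckAspect {
    // Before any call into javax.swing, verify we are on the event dispatch thread.
    // The !within() clause keeps the aspect from advising its own Swing call.
    @Before("call(* javax.swing..*.*(..)) && !within(EdtCheckAspect)")
    public void assertOnEdt() {
        if (!SwingUtilities.isEventDispatchThread()) {
            throw new IllegalStateException(
                    "Swing call off the EDT on thread " + Thread.currentThread().getName());
        }
    }
}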
As requested, this doesn't pertain specifically to Java or the EDT, but I've seen good results with Coverity's concurrency static analysis checkers for C/C++. They did have a higher false-positive rate than less complicated checkers, but the code owners seemed willing to put up with that, given how hard threading bugs can be to find via testing. The details are confidential, I'm afraid, but Dawson Engler's public papers (e.g., "Bugs as Deviant Behavior") are very good on the general approach of "the following «N» instances of your code do «X» before doing «Y»; this instance doesn't."