Program to find ignored JUnit tests - Java

Is there a program out there that can find all ignored JUnit tests?
By this I mean: I have seen unit tests that use @Ignore, and tests with method names like ignore_testFoo() or xtestBar() or xxtestBar1(), all of which get ignored and are sometimes very hard to find.
I could grep for those cases, but I was wondering if there was an application that would find any of those situations automatically.
I tried using Cobertura to obtain coverage on the JUnit tests, to see which methods were being executed and which were not, and picking apart the bad unit tests that way.
I was just wondering if there was a program or another method to obtain this information without hacking something up.

A static analysis tool would serve you well here. Checkstyle is a decent choice among them: it has a long list of modules, and in the worst case you can easily write your own module to validate any coding convention you need.
You would locate or create a module for it, then execute it to find any non-conforming code.
Edit
PMD looks to be an excellent choice for this task. It comes with a set of JUnit rules already built in, and it's very easy to combine rules or create new ones.

It should be easy to detect ignored JUnit 3 tests with a grep over your Java test files: find method declarations containing "test" and parentheses, but whose method names don't start with "test".
For JUnit 4, you could:
* implement your own test runner by extending the default one, and print out ignored tests
* build a small app that loads the test classes, gets all declared methods through introspection, and prints out those marked as ignored (sketched below)
There may be a tool that does this, and maybe some runners already do, but it would only take a few hours to build such a tool from scratch if you really need it.
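A minimal sketch of the second JUnit 4 option (the naming-convention regex is an assumption based on the patterns in the question):

```java
import java.lang.reflect.Method;
import org.junit.Ignore;
import org.junit.Test;

// Loads each test class named on the command line and reports methods that
// are explicitly @Ignore'd, plus likely "disabled by naming convention"
// tests such as xtestBar() or ignore_testFoo().
public class IgnoredTestFinder {
    public static void main(String[] args) throws Exception {
        for (String className : args) {
            Class<?> testClass = Class.forName(className);
            for (Method m : testClass.getDeclaredMethods()) {
                if (m.isAnnotationPresent(Ignore.class)) {
                    System.out.println(className + "#" + m.getName() + " (@Ignore)");
                } else if (!m.isAnnotationPresent(Test.class)
                        && m.getName().matches("(x+|ignore_?)test.*")) {
                    System.out.println(className + "#" + m.getName() + " (naming convention)");
                }
            }
        }
    }
}
```

Run it with the test classpath, e.g. java -cp build:junit.jar IgnoredTestFinder com.example.FooTest.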

Related

Is it possible to debug runtime-generated Groovy code?

I'm working on a project where we need to compile Groovy classes at runtime, then instantiate objects from these classes and execute methods on them. The source for these classes exists only as a string in live environments.
These classes can contain pretty complex code, so there is a good chance of bugs hiding in there.
The problem with our approach is that when we notice misbehavior in these classes, we can't use a comfortable method of debugging.
We can of course write and execute tests against these classes, but often you just want to see what's going on step by step.
So my question is: is there a way to debug runtime-generated Groovy/Java classes?
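For reference, the compile-and-instantiate step looks roughly like this (a minimal sketch, not our actual code; the wrapper class and the invoked method name "execute" are assumed):

```java
import groovy.lang.GroovyClassLoader;

// Compiles a Groovy class from its source string, instantiates it, and
// calls a method on it -- the pattern described above.
public class RuntimeCompiler {
    public static Object compileAndRun(String groovySource) throws Exception {
        try (GroovyClassLoader loader = new GroovyClassLoader()) {
            Class<?> clazz = loader.parseClass(groovySource, "Generated.groovy");
            Object instance = clazz.getDeclaredConstructor().newInstance();
            return clazz.getMethod("execute").invoke(instance); // assumed entry method
        }
    }
}
```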
The steps we currently take to track down bugs:
1) Write tests to reproduce the behavior
2) Read through the code. (This obviously has a very poor success rate in complex classes.)
3) Do instrumentation. We call a static "_break" method which we have written in a utils class (so nothing runtime-generated). In that _break method we can set a breakpoint, so this is almost as if we were debugging the runtime-generated classes directly. The problem with this approach is that you have to recompile and add a new version of the Groovy classes to the test system every time you want to add or remove a _break call. (A sketch of the utility follows this list.)
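The utility is essentially just a breakpoint anchor; a minimal sketch (class and method names assumed):

```java
// Lives in ordinary, non-generated code so the IDE can resolve it.
public final class DebugUtils {
    private DebugUtils() {}

    // The generated Groovy code calls DebugUtils._break(); set an IDE
    // breakpoint inside this method to pause whenever generated code
    // reaches one of the call sites.
    public static void _break() {
        // intentionally empty: exists only as a breakpoint anchor
    }
}
```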
If you're wondering how we can even write tests for these classes, here is how:
For unit tests, we copy the code from the strings into regular Groovy classes. These are used for development and unit testing, because they give us code completion and a fast way to at least execute the classes against simple tests.
We can debug the code at the unit test level. The problem here is that the data setup is too complex to reproduce certain combinations in unit tests.
For integration tests, we do the whole compile, add, execute process, just like it would happen in the live system.
We use IntelliJ 2017 as our IDE, and I currently have no idea if or how we could "connect" the bytecode to either the strings it was generated from, or to the copied Groovy classes we use for unit testing.
Any other tool that would allow us to debug would be fine as well.

Is it correct if requirement verification happens in JUnit tests

I have logic that checks a Java file's content and verifies that it has a comment saying who the author (not necessarily the creator) of that file is, a special requirement of the project.
I wrote a unit test using JUnit to check the logic, and it works fine.
I want all the .java files to adhere to that standard, and I want the build to fail if at least one of them does not comply.
So far I have my JUnit test method do the following:
Read all the .java file contents in the application
For each file, check whether its content contains a comment in the standard format
Fail the test case if at least one of them has no comment in that format (so that the build eventually fails)
Is that a correct approach? It will serve the purpose, but is it good practice to use a JUnit test to do this kind of verification work?
If not, what kind of approach should I use to analyze all the files at build time (using my logic; I have an Analyzer.java file containing it) and have the build succeed if and only if all files comply with the required standard?
EDIT :
The code comment check is only one verification. There are several checks that need to be done (e.g., variable names should end with a given suffix, patterns for using some internal libraries, etc.). All the scenarios are handled in that logic (Analyzer.java). I just need to check all the Java file contents and use that logic to verify them.
It's safe to say I have a Java library with a method that accepts a file name, check(fileName); it analyzes the file and returns true if it passes some internal logic. If it returns false, the build should fail. Since I need to fail the build if something is not right, I'm using it in a JUnit test to check all the .java files in the code base.
If this can be done by a static code analysis tool (one that can use the logic I have), that is also acceptable. But I don't know whether this kind of custom verification is supported by the existing static code analyzers.
Is the approach I'm using correct? ... Is it good practice to use a JUnit test to do some verification work?
No. Unit testing is for checking the integrity of a code unit, ensuring the unit's behavior works properly. You should not be checking comments/documentation in unit tests.
Since I need to fail the build if something is not right...
You need to add more steps to your build process, more specifically a static analysis step.
Unit testing is considered a build step, along with compilation, execution and deployment. Your project requires an extra step, which brings me to the following...
You could use a build tool, such as Apache Ant, to add steps to your project's build. Although static analysis doesn't come bundled (Ant is simply a build automation tool), it does let you ensure that the build fails if a custom build step fails.
With that said, you could add a step that triggers a static analysis program. This page contains an example of using Ant to create multiple build steps, including static code analysis and bug checking. You could even create your own analyzer to use.
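One way to wire that up, as a sketch: the check(fileName) method below is the one described in the question, while the wrapper class, paths, and Ant invocation (e.g. a <java> task with failonerror="true") are assumptions for illustration:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

// Walks the source tree, runs the question's Analyzer.check(fileName) on
// every .java file, and exits non-zero on any violation so the Ant build
// step fails.
public class VerifyMain {
    public static void main(String[] args) throws IOException {
        Path root = Paths.get(args.length > 0 ? args[0] : "src");
        try (Stream<Path> files = Files.walk(root)) {
            boolean allOk = files
                    .filter(p -> p.toString().endsWith(".java"))
                    .allMatch(p -> Analyzer.check(p.toString())); // question's API
            if (!allOk) {
                System.exit(1); // non-zero exit fails the Ant <java> step
            }
        }
    }
}
```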
For more information about build tools & automation:
StackOverflow: What is a Build Tool?
Wiki: Software Build > Build Tools
Wiki: Build Automation
You can use Checkstyle for that. It can fail the build, and it can check comments; this kind of checking is called static code analysis.
To define the format for an author tag or a version tag, set the property authorFormat or versionFormat, respectively, to a regular expression.
If this were my project, I would not place this kind of verification in the standard src/test/java and run it as part of the application's test suite.
A unit test suite should test production logic, not do 'coding style' checks.
A good place for this kind of verification would be, for example, a pre-commit hook on the Git repository. All the scripts checking this (or the style-checking tools) would be invoked there.
You can put everything in one place, but as far as I can see, separation of concerns in all areas of software development has been the leading trend for quite a while, and there is a good point to that.

How to check for dead Java methods at runtime

I am trying to create an index of unused Java methods in the form of a JSON file.
There are a couple of different ways in which the methods can be referenced. I have already checked for all the other ways and have a relatively small list of possibly unused Java methods.
The final way in which a method can be used is in other Java files. They would be called with a basic class.method(args, args2, etc...) syntax somewhere in the Java source code.
My question is: is there an easy way to check my list of possibly unused methods to see if any of them are not used in the Java code? It would be ideal if this could be done at runtime, but it would also work if I could create a file that I could then read in at runtime.
I have tried using pre-built software like UCDetector, but the source code is huge, and running UCDetector takes hours and often doesn't even finish. It also checks all methods to see if they are used, which is a waste of time since I have narrowed it down to a small number of possible methods to check.
You should use your IDE (Eclipse, IntelliJ) or a static code analysis tool such as FindBugs, PMD, or Checkstyle.
It seems like you are trying to reinvent the wheel.
One option might be to use "coverage analysis" tools to see what is not used (https://en.m.wikipedia.org/wiki/Java_Code_Coverage_Tools). If you have good branch coverage with your unit tests, simply running the tests with coverage will yield the result you're looking for. If you don't have good test coverage, you might run the application itself with code instrumented for coverage calculation; but as with unit tests, the quality of the result will depend on the amount of code exercised by your unit tests or manual tests.
Some examples of coverage tools you might use are JaCoCo (http://www.eclemma.org/jacoco/) and Cobertura (http://cobertura.github.io/cobertura).
Alternatively, you might instrument your code yourself to log method usage, as that might be more lightweight than calculating full line coverage (a rough sketch below). This is, however, indeed reinventing the wheel.
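The hand-rolled variant could be as simple as this (the class and signature format are assumptions): add a record(...) call at the top of each candidate method, run a representative workload, then diff the recorded set against your candidate list.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Collects the signatures of methods that were actually entered at runtime.
public final class MethodUsageLog {
    private static final Set<String> SEEN = ConcurrentHashMap.newKeySet();

    private MethodUsageLog() {}

    // Call as the first line of each candidate method, e.g.
    // MethodUsageLog.record("com.example.Foo#bar(String,int)");
    public static void record(String signature) {
        SEEN.add(signature);
    }

    // Dump or compare after the run; anything missing here was never called.
    public static Set<String> seen() {
        return SEEN;
    }
}
```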
This SO question has similar solutions: How to find unused/dead code in java projects

Why can classes being unit tested with JUnit not have a main?

My lecturer mentioned this before, but I don't really understand why this is the case. Would anyone be able to explain?
We are writing a program to compute an ArrayList of prime numbers, and we have to use JUnit to ensure all members of this ArrayList are prime. Why can I not use a main in testing this class?
Thank you very much :)
OK, these answers are for the most part too complex. I think your question is more fundamental, and it's a very good one.
The answer: when you become a Java developer and start writing large amounts of code that get updated and fixed over time, it helps to have a separate test plug-in that automatically runs tests on your code from outside the code, checking that it still works the way you expect. You can fix or debug any aspect of the code, and afterwards, when your boss walks over and asks whether the code still does what the client wanted since your fix, you can answer him without complex in-main error statements mixed up with the normal program output (which also slow down the code in non-test conditions), but with a pretty green JUnit bar that says it all still works. You won't see the value of this until you develop large projects and have hundreds of tests to run. In addition, JUnit has a number of other tricks up its sleeve...
Because JUnit provides a main that calls the functions you provide in your classes. You can still have your own main functions; they just won't get used when you run JUnit. You can use main functions to test your own classes individually, but using JUnit has some advantages, as described in org.life.java's answer.
You can, it just wouldn't be recommended. If you write a unit test for it, then you can use the JUnit test runner to run the test and produce a report indicating whether it passed or failed. If you don't do this, then you'll need to code your own reporting mechanism.
Unit tests have the following structure normally:
Create test infrastructure
Execute test
Validate passed
Your situation has a similar shape and is thus a good candidate for using JUnit.
The unit testing APIs available provide you with useful utilities that you would otherwise have to code yourself.
Why don't you try both approaches and see for yourself?
In unit testing you are not testing anything as a whole. A unit test must test a UNIT, normally a method. So you should write the method that computes your array, and use JUnit to test just that method, as sketched below.
The main method is just an entry point, and it "defines" the flow of the procedure. In unit testing we don't worry about flow; we just focus on the unit. The program flow is verified by system/component tests, not by unit tests.
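For example, a minimal JUnit 4 test (the production class PrimeGenerator and its primesUpTo method are assumed names, not from the question):

```java
import static org.junit.Assert.assertTrue;

import java.util.List;
import org.junit.Test;

public class PrimeGeneratorTest {

    @Test
    public void allGeneratedNumbersArePrime() {
        // PrimeGenerator.primesUpTo(int) is the method under test (assumed name).
        List<Integer> primes = PrimeGenerator.primesUpTo(100);
        for (int candidate : primes) {
            assertTrue(candidate + " should be prime", isPrime(candidate));
        }
    }

    // Independent check used only by the test, so the test doesn't trust
    // the production code's own notion of primality.
    private static boolean isPrime(int n) {
        if (n < 2) return false;
        for (int i = 2; (long) i * i <= n; i++) {
            if (n % i == 0) return false;
        }
        return true;
    }
}
```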
Because JUnit tests are run by a framework, not as a standard console application.
The JUnit test runner finds the tests by reflection.
See the documentation here.
See org.junit.runner.JUnitCore.main(String...); something along those lines is what runs underneath.
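For illustration, you can drive that entry point from your own main (reusing the PrimeGeneratorTest sketched above):

```java
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;

// Programmatic equivalent of what the IDE or console test runner does.
public class RunTests {
    public static void main(String[] args) {
        Result result = JUnitCore.runClasses(PrimeGeneratorTest.class);
        System.out.println("Ran " + result.getRunCount()
                + " tests, " + result.getFailureCount() + " failures");
    }
}
```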

Exclude individual JUnit Test methods without modifying the Test class?

I'm currently re-using JUnit 4 tests from another project against my code. I obtain them directly from the other project's repository as part of my automated Ant build. This is great, as it ensures I keep my code green against the very latest version of the tests.
However, there is a subset of tests that I never expect to pass against my code. But if I start adding @Ignore annotations to those tests, I will have to maintain my own separate copy of the test implementation, which I really don't want to do.
Is there a way of excluding individual tests without modifying the Test source? Here's what I have looked at so far:
As far as I can see, the Ant JUnit task only allows you to exclude entire Test classes, not individual test methods; so that's no good for me, I need method granularity.
I considered putting together a TestSuite that uses reflection to dynamically find and add all of the original tests, then add code to explicitly remove the tests I don't want to run. But I ditched that idea when I noticed that the TestSuite API doesn't provide a method for removing tests.
I can create my own Test classes that extend the original Test classes, override the specific tests I don't want to run, and annotate them with @Ignore (sketched below). I then run JUnit on my subclasses. The downside here is that if new Test classes are added to the original project, I won't pick them up automatically; I'll have to monitor for new Test classes as they are added to the original project. This is my best option so far, but it doesn't feel ideal.
The only other option I can think of is to run the bad tests anyway and ignore the failures. However, these tests take a while to run (and fail!) so I'd prefer to not run them at all. Additionally, I can't see a way of telling the Ant task to ignore failures on specific test methods (again - I see how you can do it for individual Test classes, but not methods).
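The subclass-and-override option looks roughly like this (FooTest and testBar are placeholder names; this assumes the original test methods are not final):

```java
import org.junit.Ignore;
import org.junit.Test;

// Extends the borrowed test class purely to switch off one test.
public class MyFooTest extends FooTest {

    @Override
    @Test
    @Ignore("never expected to pass against this implementation")
    public void testBar() {
        // overridden only to attach @Ignore; the body never runs
    }
}
```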
If you can't touch the original tests at all, you are going to have some serious limitations. Your overriding approach sounds like the best bet, but with a couple of changes:
Build the Ant test run to specifically exclude the superclasses, so that additional classes you don't know about still get run.
You can use the @Rule annotation (new in JUnit 4.7) to know what test is being run and abort it (by returning an empty Statement implementation) rather than overriding specific methods, giving you more flexibility in deciding whether or not to skip the test; a sketch follows below. The only problem with this method is that you can't stop the @Before methods from running this way, which may be slow. If that is a problem (and you really can't touch the tests), then @Ignore in the overridden method is the only thing I can think of.
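A sketch of such a rule (this uses the TestRule interface from slightly later JUnit 4 releases; the original @Rule type in 4.7 was MethodRule, and the skip list here is an assumed mechanism):

```java
import java.util.Set;
import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

// Replaces listed tests with a no-op Statement, so neither the test body
// runs nor a failure is reported.
public class SkipListedTests implements TestRule {
    private final Set<String> skipped;

    public SkipListedTests(Set<String> skipped) {
        this.skipped = skipped;
    }

    @Override
    public Statement apply(Statement base, Description description) {
        if (skipped.contains(description.getMethodName())) {
            return new Statement() {
                @Override
                public void evaluate() {
                    // intentionally empty: test is silently skipped
                }
            };
        }
        return base;
    }
}
```

A subclass would then declare something like @Rule public SkipListedTests skip = new SkipListedTests(Collections.singleton("testBar"));.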
If, however, you can touch those tests, some additional options open up:
You could run them with a custom runner by specifying the @RunWith annotation on the class. This runner would just pass execution over to the standard runner (JUnit4.class) in that project, but in your project (via a system property or some other mechanism) it would inspect the test name and skip the test. This has the advantage of being the least intrusive, but it is the most difficult to implement (runners are hairy beasts; one of the stated goals of @Rule was to eliminate most of the need to write them).
Another is to put an assumeThat statement in the test that checks some configuration setting which is true only if that test should run. That would involve injecting code right into the test, which is most likely a deal breaker in anything remotely labeled a "separate project".
It doesn't help you now, but TestNG supports this sort of ability.
OK, this is a rather heavyweight solution, but don't throw things at me if it sounds ridiculous.
The core of JUnit 4 is the org.junit.runner.Runner class and its various subclasses, most importantly org.junit.runners.Suite. These runners determine what the tests are for a given test class, using things like @Test and @Ignore.
It's quite easy to create custom implementations of a runner, and normally you would hook them up by using the @RunWith annotation on your test classes, but obviously that's not an option for you.
However, in theory you could write your own Ant task, perhaps based on the standard Ant JUnit task, which takes your custom test runner and uses it directly, passing each test class to it in turn. Your runner implementation could use an external config file that specifies which test methods to ignore.
It'd be quite a lot of work, and you'd have to spend time digging around in the prehistoric Ant JUnit codebase to find out how it works. The investment in time may be worth it, however.
It's just a shame that the JUnit Ant task doesn't provide a mechanism for specifying the test Runner; that would be ideal.
One possibility I can think of, to achieve what you want within the stated constraints, is bytecode modification. You could keep a list of classes and methods to ignore in a separate file, and patch the bytecode of the test classes as you load them to remove these methods altogether.
If I am not mistaken, JUnit uses reflection to find the test methods to execute. A method rename operation would then allow you to remove these methods before JUnit finds them. Alternatively, the method could be modified to return immediately, without performing any operation.
A library like BCEL can be used to modify the classes as they are loaded; a rough sketch follows.
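For illustration only (file paths and the method name are assumptions, and this variant rewrites the .class file before the test run rather than at load time):

```java
import org.apache.bcel.classfile.ClassParser;
import org.apache.bcel.classfile.JavaClass;
import org.apache.bcel.classfile.Method;
import org.apache.bcel.generic.ClassGen;

// Strips a listed test method out of a compiled class so JUnit's
// reflection never sees it.
public class StripMethods {
    public static void main(String[] args) throws Exception {
        JavaClass parsed = new ClassParser("build/FooTest.class").parse();
        ClassGen cg = new ClassGen(parsed);
        for (Method m : cg.getMethods()) {
            if (m.getName().equals("testBar")) { // name read from a config file in practice
                cg.removeMethod(m);
            }
        }
        cg.getJavaClass().dump("build/FooTest.class"); // overwrite in place
    }
}
```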
If you want to run only a subset of the tests, it sounds like that class has more than one responsibility and should be refactored. Alternatively, the test class could be broken apart so that the original project keeps all the tests but spreads them over one or more classes (I'm guessing some of the tests are really integration tests that touch the database or network), and you could exclude the class(es) you don't want.
If you can't do any of that, your option of overriding is probably best. Adopt the process of extending a class whenever you need to ignore some of its methods, and adding the original to your Ant exclude list. That way you exclude what you can't pass and will still pull in all new tests (methods you didn't override, and new test classes) without modifying your build.
If the unwanted tests are in specific classes/packages, you could use a fileset exclude in Ant to exclude them during import.
Two options:
Work with the owner of the borrowed tests to extract your ones into a separate class you can both share.
Create your own test class that proxies the test class you want to use. For each method you want to include, have a corresponding method in your class. You'll need to construct an instance of the test class you are calling, and invoke its before and after methods too if the original has them.
Create a custom JUnit runner based on BlockJUnit4ClassRunner and use it to filter the tests you want out or in.
