Can you Unit Test Obfuscated Code?

I am looking to obfuscate our Java web app code within our existing Ant build script, but am running into problems around unit testing. I am obfuscating the code right after it is compiled, before it is jarred and before the unit tests are run.
However, if I obfuscate my production code and not my test code, all my tests fail because they try to call methods that no longer exist: the obfuscator has renamed them. I can mark certain methods as excluded from obfuscation so they can be used by external systems such as our test suite, but since we are shooting for high unit test coverage, we would need to mark all of our methods as un-obfuscatable.
If I obfuscate the test classes as well, I run into two problems:
1: The production classes and the test classes get merged into the same output directory and I am unable to exclude the test classes from the production .jar files
2: I cannot run my normal Ant batchtest call:
<batchtest todir="${basedir}/reports">
    <fileset dir="${basedir}/components/common/build-zkm">
        <include name="**/*Test.class"/>
    </fileset>
</batchtest>
because the obfuscator has changed the names of the tests.
I could just run the obfuscator on the resulting .war/.ear files, but I want to have our unit tests run against the modified code to drive out any bugs caused by the obfuscator.
I am currently working with Zelix KlassMaster, but I am still in the evaluation phase so I would be open to other options if they would work better.

I use yguard (it is free, which is why I mention it).
You should be able to tell the obfuscator not to obfuscate certain things (looking here it seems you can).
So, as others have said: don't obfuscate the tests, but do obfuscate the rest.
However, I would suggest that you do the following:
1. compile
2. jar the un-obfuscated files (if desired)
3. test the un-obfuscated files
4. if they pass the tests, obfuscate and jar the obfuscated files
5. test the obfuscated files
It will be slower, but if the tests fail at step 3 it will (potentially) be easier to fix, and if the tests fail at step 5 then you know there is an issue with the obfuscation, not your source code.

Can you run the obfuscator such that it effectively refactors the code, including the references from the tests (i.e. when a production name changes, the test code changes its reference), but does not obfuscate the tests themselves (i.e. does not change the names of the test classes or their methods)? Given previous experience with obfuscators, I'd expect that to work.
So for example, suppose we had unobfuscated source of:
public class ProductionCode
{
    public void productionMethod() {}
}

public class ProductionCodeTest
{
    public void testProductionMethod()
    {
        new ProductionCode().productionMethod();
    }
}
You want to set the options of the obfuscator to make it effectively:
public class Xyzzy
{
    public void ababa() {}
}

public class ProductionCodeTest
{
    public void testProductionMethod()
    {
        new Xyzzy().ababa();
    }
}
That way your "run the tests" Ant tasks should be able to stay the same, because the API of the tests hasn't changed - merely the implementation of the methods.

The obfuscator should not change your public calls. It seems that you should run the other tests before obfuscation, because they check internal functionality that should not change after obfuscation.
So if that is the case, why not just run the tests that call public functionality? All you need to do is have a separate class with those calls, re-build it against the obfuscated code, and then run that jar.
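As a rough illustration (borrowing the ProductionCode example from the answer above, and assuming the obfuscator has been configured to keep public names), the separate class could be as simple as:

// Hypothetical smoke test, compiled and run against the obfuscated jar;
// it exercises only the public API, which the obfuscator was told to keep.
public class PublicApiSmokeTest {
    public static void main(String[] args) {
        new ProductionCode().productionMethod();
        System.out.println("public API reachable after obfuscation");
    }
}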

Related

JUnit tests influence each other

I'm working with a lot of legacy code. There was a JUnit TestSuite to begin with. When running all tests with Gradle, they failed. When running the tests in IntelliJ, they worked. We configured Gradle to use the test suite.
Now someone reported tests working locally without Gradle, but not with Gradle. It's time we fixed this mess.
Is there a smart way to figure out which test leaves some configuration behind, or which test relies on other tests?
The most likely cause of this "bleed" from one test into another is mutable static values. By default, all tests are run in the same JVM, so a static variable which is "mutated" by one test will be "dirty" in another test.
Mutable statics are evil! I'm currently working on a codebase with mutable statics everywhere, and it's a mess. If possible, you should refactor to use dependency injection and store mutable state in instances, not statics.
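As a minimal sketch of that refactor (the class names here are hypothetical):

// Before: a mutable static that silently leaks state from one test to the next.
class Config {
    static String endpoint = "http://localhost"; // mutated by tests: "dirty" state
}

// After: the state lives in an instance and is injected where it is needed,
// so each test can construct its own fresh configuration.
class InjectedConfig {
    private final String endpoint;
    InjectedConfig(String endpoint) { this.endpoint = endpoint; }
    String endpoint() { return endpoint; }
}

class Service {
    private final InjectedConfig config;
    Service(InjectedConfig config) { this.config = config; } // dependency injection
    String describeCall() { return "GET " + config.endpoint(); }
}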
The best workaround is to find the tests which "dirty" the static mutable variables and do:

@After
public void cleanup() {
    SomeStatic.reset();
}
If you can't find the "dirty" test which is causing the issue, you might be forced to do the following in the "failing" test. This is not preferred, and a little hacky:

@Before
public void cleanBefore() {
    SomeStatic.reset();
}

But this has a slight code "smell". It is better to find the offending test which "dirties" the mutable static.
The "nuclear" option is to run each test in its own jvm. This is a total hack and should be avoided at all costs. It will drastically increase the time it takes to run your tests
test {
forkEvery = 1
}
See Test.forkEvery
I recently diagnosed a similar issue in a Gradle Java project, where a test was working when run individually, but not when run as part of the Gradle build.
In order to track down the offending test that was breaking the subsequent test, I first configured Gradle to print out the tests that were run using beforeTest, so I could tell what all was running prior to my test:
test {
    beforeTest { TestDescriptor descriptor ->
        logger.lifecycle("$descriptor.className#$descriptor.name")
    }
}
This printed the tests in the order in which they ran:
$ ./gradlew test
> Task :test
com.example.Test1#firstTest()
com.example.Test1#secondTest()
com.example.Test2#firstTest()
com.example.Test2#secondTest()
com.example.Test3#firstTest()
com.example.Test3#secondTest()
com.example.BrokenTest#brokenTest()
BrokenTest > brokenTest() FAILED
java.lang.AssertionError at BrokenTest.java:34
com.example.Test4#firstTest()
com.example.Test4#secondTest()
Now that I knew which test classes were running before the broken test, I knew that one (or more) of those tests were causing the test to break.
Next, I added test filtering in my Gradle build to only run those tests that ran before the broken test, plus the broken test itself:
test {
    filter {
        includeTestsMatching 'com.example.Test1'
        includeTestsMatching 'com.example.Test2'
        includeTestsMatching 'com.example.Test3'
        includeTestsMatching 'com.example.BrokenTest'
    }
}
After doing a sanity check Gradle build to confirm that the test was still broken, I commented out groups of tests and reran the build until I was able to narrow it down to the test that caused the build to break:
test {
    filter {
        // includeTestsMatching 'com.example.Test1'
        includeTestsMatching 'com.example.Test2'
        // includeTestsMatching 'com.example.Test3'
        includeTestsMatching 'com.example.BrokenTest'
    }
}
In this example, the build broke when Test2 was run before BrokenTest, but not when Test1 or Test3 were. Armed with this information, I could dive deeper into what specifically about Test2 was affecting the system in such a way as to break the other test when it was run after it.

Running code with JUnit test cases

I have checked out code from CVS and need to make changes to it. The code has 2 folders:
Java
Test
The latter has JUnit test cases. I'm not very familiar with JUnit, but as far as I understand it, the test classes duplicate the production class names. That's why I get this error in the test folder:
Class "xxxxx" already exists
I'm not sure how to run this project without removing the test folder. Is there a way I can make Eclipse ignore the JUnit test cases for now?
Go into the properties of the Eclipse project, open Java Build Path / Source, and remove the Test folder. Eclipse will then ignore the sources in that folder.
Test and normal Java classes are merged together at build time; your error happens because the test classes have exactly the same names as the normal classes. You should rename your test cases with some kind of prefix, like Test, to prevent them conflicting.
Working around the problem will only cause conflicts later when you change the build platform: maybe your current build platform accepts it, but a future platform or editor may not, and then you will have real problems.

Run JUnit tests from a dependency jar in Eclipse

I have some JUnit tests contained in a .jar that is intended to be used as a library. The library contains some tests that should be run whenever the library is used in another project.
However, when I create a new project using the library and run JUnit on it in Eclipse, the tests in the dependency .jar don't run / don't get detected by the JUnit test runner. I get the message:
No tests found with test runner 'JUnit 4'.
Is there a way I can configure the dependency .jar so that the tests will run alongside any tests that might be contained in the main project?
Basically I want the dependency .jar to "export" the tests to whatever projects it is used in.
I'm using Eclipse Juno, JUnit 4.10, and Maven for the dependency management.
EDIT:
The point of this library is to help test the projects that use it - i.e. it runs some specialised tests. This is why I want to be able to import the library .jar and have it contribute the extra tests to the importing project.
You can try Maven Surefire.
In some cases it would be useful to have a set of tests that run with various dependency configurations. One way to accomplish this would be to have a single project that contains the unit tests and generates a test jar. Several test configuration projects could then consume the unit tests and run them with different dependency sets. The problem is that there is no easy way to run tests in a dependency jar. The Surefire plugin should have a configuration to allow me to run all or a set of unit tests contained in a dependency jar.
This can be done as follows (JUnit 3):
Ensure the test jar contains a class which has a static suite() method:

import junit.framework.Test;
import junit.framework.TestSuite;

public class AllTests {
    public static Test suite()
    {
        TestSuite suite = new TestSuite("All Tests");
        suite.addTestSuite(TestOne.class);
        suite.addTestSuite(TestTwo.class);
        return suite;
    }
}
Then in the project using the test-jar dependency, create a TestCase:

package org.melati.example.contacts;

import org.melati.poem.AllExportedTests;
import junit.framework.Test;
import junit.framework.TestCase;

public class PoemTest extends TestCase {
    public static Test suite()
    {
        return AllExportedTests.suite();
    }
}
Now the tests will be found.
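Since the question mentions JUnit 4.10, a rough JUnit 4 analogue of the same pattern (assuming the same TestOne and TestTwo classes ship in the dependency jar) would be a suite class in the consuming project:

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// Points the JUnit 4 runner at test classes that live inside the dependency jar.
@RunWith(Suite.class)
@Suite.SuiteClasses({ TestOne.class, TestTwo.class })
public class ExportedTests {
    // no body needed; the annotations drive the runner
}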
I think that making a library of unit tests (@Test annotated methods) is a bad idea. However, making a library of reusable test components is a good one. We've done this in a few open source projects, and you can take a look at how it works.
One Maven module exports test components (we call them "mocks") from the src/mock/java directory. The exported artifact has a -mock classifier. See rexsl/pom.xml (pay attention to the highlighted lines).
Mock artifacts are deployed to Maven Central together with the usual artifacts: http://repo1.maven.org/maven2/com/rexsl/rexsl-core/0.3.8/ (pay attention to the ...-mock.jar files).
Modules that need those mocks can include them as usual artifacts, for example rexsl-core/pom.xml (see the highlighted lines).
Then, in your unit tests, just use the classes from those mock libraries, like regular mock builders, for example: BulkHttpFeederTest
That's how you can make your test artifacts reusable, in an elegant way. Hope it helps.
@Mikera,
I find that this may help you. Just make one of the Java classes in your project extend the TestCase class; you can then run that particular class as a JUnit test.
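For instance, a minimal JUnit 3-style sketch of that suggestion (class and method names are hypothetical):

import junit.framework.TestCase;

// Any class extending TestCase can be run directly as a JUnit test.
public class MyLibraryTest extends TestCase {
    public void testSomething() {
        assertTrue(true); // replace with a real assertion
    }
}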
I am not sure that this is desirable. On the one hand, if you use a jar, its behaviour might be influenced by the external context, e.g. other libraries on the classpath. From inside the jar, there is no simple way to analyse this context and adjust the tests accordingly. On the other hand, if you write and compile a library, you should test it before packaging it as a jar. You might even want to not include your tests.
If it is really important to you to run the tests again, I would be interested in what could make them fail without changing the jar. In that case, however, you might want to extend the test runner. As far as I know, it uses reflection. You can quite easily load jars in a classloader and go through all their classes. By reflection, you can identify the test classes and assemble test suites. You could look at the test runner for an example. Still, you would need to start this process from outside, e.g. from inside one of your test classes in the client project. Here, QATest's approach might be helpful: by providing an overridden version of the test suite or test runner, you could automate this - if the client uses your overridden API.
Let me know if this rather costly approach seems applicable in your scenario and I can provide code examples.
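To make that idea concrete, here is a rough, untested sketch of such a scanner; the jar path and the *Test naming convention are assumptions, and JUnit 4's JUnitCore is used to run whatever is found:

import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;

public class JarTestScanner {
    public static void main(String[] args) throws Exception {
        File jar = new File("lib/my-tests.jar"); // hypothetical path to the test jar
        try (URLClassLoader loader = new URLClassLoader(
                     new URL[] { jar.toURI().toURL() },
                     JarTestScanner.class.getClassLoader());
             JarFile jarFile = new JarFile(jar)) {
            Enumeration<JarEntry> entries = jarFile.entries();
            while (entries.hasMoreElements()) {
                String name = entries.nextElement().getName();
                if (name.endsWith("Test.class")) {
                    // Turn "com/example/FooTest.class" into "com.example.FooTest"
                    String className = name.replace('/', '.')
                            .substring(0, name.length() - ".class".length());
                    Class<?> testClass = loader.loadClass(className);
                    Result result = JUnitCore.runClasses(testClass);
                    System.out.println(className + ": "
                            + result.getFailureCount() + " failures");
                }
            }
        }
    }
}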
Why should the user of the jar run the test cases inside the jar!!! When the jar is packaged and delivered, that means its unit tests were run successfully.
Typically, the jar itself should be treated either as a separate project or as one of the modules. In both cases, the unit test cases are run before it is delivered.

Can a Java unit test hide a seam interface from production code, as .NET can?

As I read in The Art of Unit Testing (pp. 78-80), .NET can hide seam methods used for testing from production code. For example:
public class LogAnalyzer
{
    ...
    internal LogAnalyzer(IExtensionManager extensionMgr)
    {
        manager = extensionMgr;
    }
}
together with this assembly-level attribute:

using System.Runtime.CompilerServices;
[assembly: InternalsVisibleTo("AOUT.CH3.Logan.Tests")]
So the internal LogAnalyzer constructor can only be called from test classes, without worrying about adding extra cost to the production code purely for the sake of testability.
After a brief survey, it seems Java does not have an equivalent feature.
But does Java have alternatives?
Thanks.
What about implementing your own custom ClassLoader? You can define your own annotation, like @HideFromProductionCode, and have your custom ClassLoader throw an exception if it loads a class that has the @HideFromProductionCode annotation. See How to set my custom class loader to be the default?
Alternatively, just add a script to your build process that goes through all your compiled production code and looks for the @HideFromProductionCode annotation.
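A minimal sketch of what such an annotation might look like (the name and retention policy here are assumptions, not an existing API):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marker for classes that must never be loaded in production; a custom
// ClassLoader or a build-time scan can reject any class that carries it.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface HideFromProductionCode {
}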
One fairly straightforward approach would be to use a Maven-like directory structure, with separate directories for production code and test code (typically directories called src/main and src/test). When unit tests are run, the classpath includes both the main directory and the test directory. But when you build the JAR that gets deployed to production, only classes defined in the main directory are included; this way, production code that references test classes will result in a compile error.
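Relatedly, the closest Java analogue to InternalsVisibleTo is a package-private seam: the test class sits in the same package as the production class, but under the test directory. A sketch, loosely mirroring the book's LogAnalyzer example (the Java names here are assumed):

// src/main/java/com/example/LogAnalyzer.java
package com.example;

interface ExtensionManager {
    boolean isValid(String fileName);
}

class DefaultExtensionManager implements ExtensionManager {
    public boolean isValid(String fileName) {
        return fileName.endsWith(".log");
    }
}

public class LogAnalyzer {
    private final ExtensionManager manager;

    public LogAnalyzer() {
        this(new DefaultExtensionManager());
    }

    // Package-private seam: invisible to other packages in production, but a
    // test class in the same package under src/test/java can inject a fake.
    LogAnalyzer(ExtensionManager manager) {
        this.manager = manager;
    }

    public boolean isValidLogFileName(String fileName) {
        return manager.isValid(fileName);
    }
}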

My JUnit tests work when run in Eclipse, but sometimes randomly fail via Ant

The core of my question is that I am concerned that my Ant build file is missing something that will allow a test to finish and clean itself up. The details are below.
I have a suite of tests that always passes when I run it through Eclipse, but that sometimes passes and sometimes fails when I run it using my Ant build. The tests use OpenCL via JOCL, so I have limited memory on the GPU and it has to be managed correctly. I sometimes get this in my output when I run my Ant build:
[junit] Caused an ERROR
[junit] CL_MEM_OBJECT_ALLOCATION_FAILURE
[junit] org.jocl.CLException: CL_MEM_OBJECT_ALLOCATION_FAILURE
The problem cannot be in the test itself. I think it is that my most memory-hungry test is invoked at the end of the suite. When this last test is invoked, somehow the GPU has been left in a bad state by my previous tests. This doesn't happen when I run the tests through Eclipse, and it has never failed in my Ant build when I make the memory-hungry test the first test in the suite. Is this a familiar case? Why does running the tests through Eclipse always work? Is there anything I can try?
Here is the testing target in my Ant build:
<target name="test" if="testing.enabled">
<mkdir dir="${test.bin.dir}" />
<javac srcdir="test" destdir="${test.bin.dir}" debug="true" classpathref="testclasspath" source="1.6"/>
<junit haltonerror="true" haltonfailure="true">
<classpath refid="testclasspath"/>
<formatter type="plain" usefile="false" />
<batchtest>
<fileset dir="test">
<include name="*Test.java"/>
</fileset>
</batchtest>
</junit>
</target>
If you are really sure no left-over cleanup is missed in your code, you can create a JUnit test suite and run that from both Eclipse and Ant. By creating a test suite, you make yourself independent of the sequence of tests that Eclipse (order within the project?) and Ant (order within the filesystem?) use, and you determine the order yourself in both cases.
If you are not really, really sure your code is issue-free, you could make a test suite which starts off by calling Collections.shuffle() on the list of test classes, to introduce an unknown test execution order in both environments, and see if your tests still never fail.
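A minimal JUnit 3-style sketch of such a shuffled suite (FirstTest and SecondTest are hypothetical placeholders for your real test classes):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import junit.framework.Test;
import junit.framework.TestCase;
import junit.framework.TestSuite;

public class ShuffledSuite {
    public static Test suite() {
        List<Class<? extends TestCase>> classes = new ArrayList<Class<? extends TestCase>>();
        classes.add(FirstTest.class);  // hypothetical test classes
        classes.add(SecondTest.class);
        Collections.shuffle(classes); // a different execution order on every run
        TestSuite suite = new TestSuite("Shuffled");
        for (Class<? extends TestCase> testClass : classes) {
            suite.addTestSuite(testClass);
        }
        return suite;
    }
}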
The problem could be that you do not free memory in your test cases. JUnit instantiates all test classes when it is started and then runs them, as far as I know. If you have fields that reference objects in your test classes, all fields will stay assigned through the whole test run unless you assign null to them in a tearDown() method. For example:
class Test extends TestCase {
    private Data data;

    public void setUp() {
        data = new Data();
    }

    public void tearDown() {
        data = null; // required to allow garbage collection of the Data object
    }
}
Maybe Eclipse drops its references to the Test instances after they are executed, so that the fields can be garbage collected. But using the standard JUnit TestRunner, you will end up with a lot of objects that are no longer used but are still referenced, and they eat up all your memory.
If the tests are passing in Eclipse and failing elsewhere, then you're suffering from one of the many kinds of developer syndrome: "...but it works when I run it here...!"
You have managed to configure Eclipse to let you work with your code, and the functionality is in, yet your code is, thus far, not deployable, which means it's not done.
Shelve Eclipse for a while (stop blaming it), and drop to the command line (or use a different IDE) until things work. Try the code on a different computer, even!
Then go back to Eclipse, and repeat the above cycle until you're certain that any dependencies on Eclipse or your hard disk/setup have been removed. In the end, your code must be able to run on who-knows-which server.
Have you tried having a clean Eclipse installation (on a different computer) take a shot at a source-only snapshot of the code? It would be a good configurations management test that I'm quite sure your code won't pass as it stands.
Seriously try to get Eclipse doing its magic on a clean (virtual) machine. It won't work on a first run, but you'll learn what you did to make it work under your setup.
Let me google that for you:
Ant looks for an environment variable called ANT_OPTS, which is used to set Java parameters. Just set the environment variable and off you go. So I added the following to increase the heap size:
export ANT_OPTS=-Xmx256m
When running within Eclipse, the JVM (and therefore Ant) most likely already has more memory than the default.
