I'm using Spock to write tests for a Java application. The test includes a JMockit MockUp for the class under test. When debugging a test in IntelliJ, I would like to be able to step into the Java code. I realize that stepping into the code with F5 won't work, because of all the Groovy reflection magic that goes on. Unfortunately, even if I set a breakpoint in the Java code, it is never hit, even though the code is executed.
Here is the code to test:
public class TestClass {

    private static void initializeProperties() {
        // Code that will not run during test
    }

    public static boolean getValue() {
        return true; // <=== BREAKPOINT SET HERE WILL NEVER BE HIT
    }
}
Here is the test code:
class TestClassSpec extends Specification {

    void setup() {
        new MockUp<TestClass>() {
            @Mock
            public static void initializeProperties() {}
        }
    }

    void "run test"() {
        when:
        boolean result = TestClass.getValue()

        then:
        result == true
    }
}
The test passes and everything works well, but I'm not able to debug into the Java code.
Note: I can debug if I do not use the JMockit MockUp, but that is required to test a private static method in my production code.
Any ideas on how to get the debugger to stop in the Java code from a Spock test that uses JMockit?
This problem has plagued me for days, and I searched and searched for an answer. Fifteen minutes after posting for help, I found it. So for anyone else who runs into this, here's my solution, all thanks to this answer: https://stackoverflow.com/a/4852727/2601060
In short, the breakpoint is removed when JMockit redefines the class. The solution is to set a breakpoint in the test and, once the debugger has stopped there, THEN set the breakpoint in the Java code.
And there was much rejoicing...
While debugging a test I realized that something was interfering with Mockito's normal operation. Somehow, setting breakpoints in specific classes leads to a different outcome.
I'll illustrate it with a simple example.
import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.ArgumentMatchers.eq;
import static org.mockito.Mockito.when;
import java.util.AbstractMap.SimpleEntry;
import java.util.Map.Entry;
import java.util.function.Function;
import org.junit.jupiter.api.Test;
import org.mockito.Mockito;

public class MockitoTrial {
    @Test
    public void simpleTest() {
        var func = Mockito.mock(Function.class);
        Entry<String, Integer> entry = new SimpleEntry<>("one", 1);
        when(func.apply(eq(Entry.class))).thenReturn(entry);
        assertThat(func.apply(Entry.class)).isEqualTo(entry);
    }
}
If I set a breakpoint, for instance in org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.interceptAbstract, and rerun in debug mode, the test fails.
It appears to be unrelated to the IDE, as it also happens when debugging remotely.
The library versions I am using:
assertj-core-3.22.0
junit-jupiter-api-5.8.2
mockito-core-4.5.1
The question pointed out by Jonasz gives a good clue.
If I instead set the breakpoint in org.mockito.internal.handler.MockHandlerImpl.handle, the test passes even in debug mode.
Now, if I add a watcher for the expression invocation.getMock().toString() and rerun in debug mode, the test fails again.
Mockito is evidently very sensitive to calls made on a mock while it is in the middle of processing another invocation.
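To make that failure mode concrete, here is a minimal sketch (my own illustration, assuming Mockito 4.x behavior; not code from the original post) of how an extra call on a mock in the middle of Mockito's processing trips its internal state machine, much like a debugger watcher evaluating a mock method does:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import java.util.function.Function;

public class InterferenceDemo {
    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        Function<String, String> func = mock(Function.class);

        // Begin a stubbing but never complete it with thenReturn(...):
        when(func.apply("a"));

        // This next call arrives while Mockito's thread-local state still
        // holds the unfinished stubbing; Mockito detects the misuse and
        // throws UnfinishedStubbingException.
        func.apply("b");
    }
}

A watcher such as invocation.getMock().toString() triggers the same kind of mid-processing interaction, because a mock's toString() is itself dispatched through Mockito's interception machinery.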
Example code:
public class Count {

    static int count;

    public static int add() {
        return ++count;
    }
}
I want test1 and test2 to run completely independently, so that they both pass. How can I achieve that? My IDE is IntelliJ IDEA.
public class CountTest {

    @Test
    public void test1() throws Exception {
        Count.add();
        assertEquals(1, Count.count); // passes; count is now 1
    }

    @Test
    public void test2() throws Exception {
        Count.add();
        assertEquals(1, Count.count); // fails; count is now 2
    }
}
Assume test1 runs before test2.
This is just simplified code; in reality the code is more complex, so I can't just set count = 0 in an @After method.
There is no automated way of resetting all the static variables in a class. This is one reason why you should refactor your code to stop using statics.
Your options are:
Refactor your code
Use the @Before annotation. This can be a problem if you've got lots of variables, and whilst it's boring code to write, if you forget to reset one of the variables, one of your tests will fail, so at least you'll get a chance to fix it.
Use reflection to dynamically find all the members of your class and reset them (see the sketch after this list).
Reload the class via the class loader.
Refactor your class. (I know I've mentioned it before, but it's so important I thought it was worth mentioning again.)
Options 3 and 4 are a lot of work for not much gain. Any solution apart from refactoring will still give you problems if you start trying to run your tests in parallel.
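For completeness, here is a rough sketch of option 3 (my own illustration, not from the original answer): use reflection to reset every static, non-final field of a class to a default value.

import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

public final class StaticResetter {

    // Resets every static, non-final field of the given class to a default value.
    public static void resetStatics(Class<?> clazz) throws IllegalAccessException {
        for (Field field : clazz.getDeclaredFields()) {
            int mods = field.getModifiers();
            if (!Modifier.isStatic(mods) || Modifier.isFinal(mods)) {
                continue;
            }
            field.setAccessible(true);
            if (field.getType() == int.class) {
                field.setInt(null, 0); // static fields take a null receiver
            } else if (field.getType() == boolean.class) {
                field.setBoolean(null, false);
            } else if (!field.getType().isPrimitive()) {
                field.set(null, null);
            }
            // ...handle the remaining primitive types the same way as needed.
        }
    }

    private StaticResetter() {}
}

Calling StaticResetter.resetStatics(Count.class) from an @Before method would then reset count before each test, with all the fragility described above.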
Use the @Before annotation to re-initialize your variable before each test:

@Before
public void resetCount() {
    Count.count = 0;
}
I am using JMockit 1.12 and want to verify that AccessController.doPrivileged() was called. This seems rather straightforward:
@Test(expected = MissingInvocation.class)
public void testFoo1() {
    foo(false, true);
}

@Test
public void testFoo2() {
    foo(false, false);
}

@Test
public void testFoo3() {
    foo(true, true);
}

private void foo(boolean usePrivilegedAccess, boolean expectAccessControllerCall) {
    // Partially mock the static methods of AccessController:
    new NonStrictExpectations(AccessController.class) {{
    }};

    if (usePrivilegedAccess) {
        AccessController.doPrivileged((PrivilegedAction<String>) () -> "");
    }

    // Verify that AccessController.doPrivileged was called:
    if (expectAccessControllerCall) {
        new Verifications() {{
            AccessController.doPrivileged(withAny((PrivilegedAction<Object>) () -> null));
        }};
    }
}
Note that testFoo1() does not call AccessController.doPrivileged(), yet still performs the verification, so it is expected to fail with MissingInvocation.
I added that test because I found that sometimes the Verifications block would pass even though AccessController.doPrivileged() was never called. I am using NetBeans 8.0.1, and after a lot of testing I found that if I run the test using "Run Focused Test Method" or "Debug Focused Test Method" (which runs only one test), it passes. If I use "Test File" (which runs all tests), testFoo1() fails because it does not throw MissingInvocation. If I use "Debug Test File" (which also runs all tests), it always fails when I set a breakpoint and fails intermittently when I do not. Very strange.
Is my JMockit usage correct? I am new to it, so any pointers are appreciated, but please note that I want to run the exact same test code from two tests that differ only by a boolean flag; I do not want to copy/paste the test twice.
Is there something up with Netbeans?
Is it something to do with the CGLib injection somewhere in the pipeline?
What probably happens is that, during the test, AccessController.doPrivileged(...) gets called from somewhere else, perhaps from NetBeans or more likely from the JRE itself.
JMockit 1.x does not restrict the mocking of the AccessController class to just the foo(boolean, boolean) method; instead, it registers all invocations of its methods, regardless of where they come from. The test would have to implement a more restrictive verification, perhaps checking the exact PrivilegedAction instance passed to the mocked method, or even checking the call stack to see where the call comes from.
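A sketch of such a restrictive verification (my own illustration of JMockit's withCapture() argument-capturing API, with hypothetical names; not code from the answer) could capture all recorded actions and check only for the exact instance the test passed:

import static org.junit.Assert.assertTrue;
import java.security.AccessController;
import java.security.PrivilegedAction;
import java.util.ArrayList;
import java.util.List;
import org.junit.Test;
import mockit.NonStrictExpectations;
import mockit.Verifications;

public class RestrictiveVerificationTest {

    @Test
    public void verifiesOnlyOurOwnPrivilegedAction() {
        new NonStrictExpectations(AccessController.class) {{
        }};

        // Hypothetical: the exact instance the code under test would pass.
        PrivilegedAction<String> expectedAction = () -> "";
        AccessController.doPrivileged(expectedAction);

        new Verifications() {{
            List<PrivilegedAction<String>> actions = new ArrayList<>();
            AccessController.doPrivileged(withCapture(actions));

            // Invocations from the JRE or the IDE are ignored; we only
            // require that the instance we passed is among those recorded.
            assertTrue(actions.contains(expectedAction));
        }};
    }
}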
For JMockit 2, an API change is planned so that mocking gets scoped to @Tested classes only, avoiding situations like this.
I am running tests with uiautomator. When I get to the end of my test, I need to check my results. My problem is that if one check fails, the others will not run. I need them all to be checked regardless of the results of the other checks. These are my attempts:
public void testSomeUI() {
    //// lots of stuff
    assertEquals(///assertion///);
    assertEquals(///assertion///);
    assertEquals(///assertion///);
    ....and so on
}
Also I tried:
public void testSomeUI() {
    //// lots of stuff
    testValue1();
    testValue2();
    testValue3();
    ....and so on
}

private void testValue1() {
    assertEquals(///assertion///);
}

private void testValue2() {
    assertEquals(///assertion///);
}

private void testValue3() {
    assertEquals(///assertion///);
}

..and so on
If one fails, the remaining ones won't run. Any suggestions? Thanks.
The problem is that once an assert fails, it throws and control leaves the method. That's why the rest don't get run.
Try using a test framework like JUnit (which UIAutomator appears to be built on) and write one test method per assert, as sketched below. That way you will not only get all asserts to run every time, you will also break the tests down into suitably small units. If they're named properly, you may not need to debug at all, since the name of the failing test tells you where the problem really is.
Here's a link to a tutorial for example.
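A minimal sketch of that structure (my own illustration using plain JUnit 4 with hypothetical names; the real test would gather its values from the UI):

import static org.junit.Assert.assertEquals;
import org.junit.Before;
import org.junit.Test;

public class SomeUITest {

    private int value1;
    private int value2;

    @Before
    public void runUiFlow() {
        // ...lots of stuff: drive the UI and record the results.
        value1 = 1; // placeholder values standing in for real UI output
        value2 = 2;
    }

    @Test
    public void value1IsCorrect() {
        assertEquals(1, value1);
    }

    @Test
    public void value2IsCorrect() {
        assertEquals(2, value2);
    }
}

If running the UI flow before every test is too slow, the shared setup can move to a @BeforeClass method that stores its results in static fields.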
Today I needed to ensure that certain code sections are guaranteed to be removed from production, using just core Java (no AspectJ). After a bit of thought I realized I could do the following. Beyond it being a complete abuse of the concept of contractual assertions, can anyone suggest practical problems that could arise?
public class ProductionVerifier {

    static boolean isTest;

    static {
        // This will set isTest to true if run with -ea; otherwise the
        // assertion is never executed and isTest stays false.
        assert isTest = true;
    }

    public static final boolean TEST = isTest;

    public static final void runOnlyIfInTest(Runnable runnable) {
        // javac will remove the following section if TEST == false
        if (TEST) {
            runnable.run();
        }
    }
}
import static ProductionVerifier.*;

public class DemonstrationClass {

    private static Runnable notForClient = new Runnable() {
        public void run() { System.out.println("h4x0r"); }
    };

    public static void main(String[] args) {
        runOnlyIfInTest(notForClient);
    }
}
My main initial concern was that the scope of the test code allows it to be accessed from the production environment, but I suspect that even if I wrapped each set of test statements in if (TEST) blocks there would be more fundamental issues with the pattern.
Edit: To conclude from the answer and the linked question: there is a maintenance/design concern, namely that enabling assertions now changes the behavior of arbitrary parts of the system, and a technical issue, namely that the code in these statements is not actually removed from the class file, because TEST is not a compile-time constant. Both issues could be addressed by removing the assertions hack, although refactoring the code not to need ProductionVerifier at all would be preferable.
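For reference, a minimal sketch of the distinction the edit draws (my own illustration): only a field initialized with a compile-time constant expression lets javac drop the guarded block, the classic JLS "conditional compilation" idiom.

public class Flags {
    // A true compile-time constant: javac omits 'if (DEBUG) ...' blocks
    // from the generated class file entirely.
    public static final boolean DEBUG = false;

    static void demo() {
        if (DEBUG) {
            System.out.println("this statement never reaches the bytecode");
        }
        // ProductionVerifier.TEST, by contrast, is initialized from the
        // non-constant isTest field, so its guarded code is kept.
    }
}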
If someone at the data center, for whatever reason, enables assertions, your production system will happily execute test code in the production environment. For example, an administrator enables assertions to analyze another application but picks the wrong one in his console, or assertions can only be enabled globally. This just happens.
And you cannot really blame him: the basic problem is that the connection between "assertions" and "conditionally execute production or test code" is not obvious.
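One way to decouple the two (my own suggestion, using a hypothetical property name) is to gate test-only code on an explicit flag, so that enabling -ea cannot change behavior:

public final class TestGate {
    // Explicit opt-in via -Dmyapp.testMode=true, unrelated to assertion status.
    public static final boolean TEST = Boolean.getBoolean("myapp.testMode");

    public static void runOnlyIfInTest(Runnable runnable) {
        if (TEST) {
            runnable.run();
        }
    }

    private TestGate() {}
}

The intent is then visible at the call site and in the launch configuration, rather than hidden behind the semantics of assert.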