My project stack: TestNG + Surefire + Maven.
In one of the modules I run mvn clean install.
When all tests are green, I get this result:
Tests run: 277, Failures: 0, Errors: 0, Skipped: 0
Then, one at a time, I introduce an intentional mistake into one of the 3 tests I am currently refactoring. As a result I get 3 totally different outputs:
Test1>AbstractTestNGSpringContextTests.springTestContextPrepareTestInstance:149 » BeanCreation
Tests run: 344, Failures: 1, Errors: 0, Skipped: 100
Test2>AbstractTestNGSpringContextTests.springTestContextPrepareTestInstance:149 » BeanCreation
Tests run: 282, Failures: 1, Errors: 0, Skipped: 8
Test3>AbstractTestNGSpringContextTests.springTestContextPrepareTestInstance:149 » BeanCreation
Tests run: 416, Failures: 1, Errors: 0, Skipped: 205
How is that possible???
All I've done is a one-line change in one of the test classes at a time. I didn't touch testng.xml or pom.xml.
Additionally, if I make a mistake in all 3 of them simultaneously, only one pops up. I didn't set a custom skipAfterFailureCount in Surefire or any other TestNG property. Why doesn't it run through all of them and show me the list of all failing tests at once? All tests are in the same package.
OK, I don't know what influences the test count, but I can answer the second part of my question (why only one failing test pops up at a time).
It happens because TestNG has a property, configfailurepolicy: whether TestNG should continue to execute the remaining tests in the suite or skip them if an @Before* method fails. The default behavior is skip.
And that's what happened in my case: the problems occurred at the test initialization stage, not in the test method itself.
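In case it helps anyone, a minimal sketch of how that policy could be switched, either on the suite in testng.xml or passed through to TestNG via Surefire's provider properties (the suite name and plugin version below are placeholders, and whether the property pass-through applies depends on your TestNG/Surefire versions):

<!-- testng.xml: keep executing the remaining tests after a configuration failure -->
<suite name="MySuite" configfailurepolicy="continue">
    ...
</suite>

<!-- or in pom.xml, forwarding the property through Surefire -->
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <properties>
            <property>
                <name>configfailurepolicy</name>
                <value>continue</value>
            </property>
        </properties>
    </configuration>
</plugin>

With continue, TestNG should carry on with the rest of the suite instead of skipping it, so every failing @Before* method shows up in a single run.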
I'm running mvn test and, based on the generated XML results file, I'm preparing a results summary.
Right now I'm only able to handle passing or failing test cases.
If there is any issue in the code, I get a compilation error from mvn test.
Can we log the compilation errors as well in the surefire-reports?
Expected outcome: include the compilation error case in the result below.
-------------------------------------------------------------------------------
Test set: com.example.helloworld.HelloWorldApplicationTests
-------------------------------------------------------------------------------
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.666 s - in com.example.helloworld.HelloWorldApplicationTests
I followed the EvoSuite Maven documentation to generate unit test cases for my project. Below is the command I used:
mvn -DmemoryInMB=2000 -Dcores=2 evosuite:generate evosuite:export test
The tool takes around 1 hour to generate test cases for my project (about 9700 lines of Java code). However, when it proceeds to the mvn test phase, all test cases fail with the same message:
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0 sec <<< FAILURE! - in com.xxx.yyy.util.Messages_ESTest
com.xxx.yyy.util.Messages_ESTest Time elapsed: 0 sec <<< ERROR!
java.lang.IllegalStateException: Trying to set up the sandbox while executing a test case
at <evosuite>.<evosuite>(<evosuite>)
at <evosuite>.<evosuite>(<evosuite>)
at <evosuite>.<evosuite>(<evosuite>)
When running the generated test cases in IntelliJ IDEA, most of them pass.
Anyone got any idea?
This is clearly a bug, which would be best to report on
https://github.com/EvoSuite/evosuite/issues
Furthermore, currently in 1.0.3 the tests generated by EvoSuite do not work well with "mvn test". The fix is already in the SNAPSHOT, though, and will be part of the next release. However, the "Trying to set up the sandbox" error might be a new bug, as I haven't seen it before... :(
With a view to managing and reducing our build times, I want to identify which unit tests are taking the most time in a parallel test environment using the maven-surefire-plugin.
We are using JUnit (4.10) for unit tests. We use Maven (2.2.1; some plugins we use don't yet support Maven 3) as our primary build tool, and the maven-surefire-plugin (2.19) to run the unit tests.
We are using the maven-surefire-plugin in parallel mode, where both the individual test methods and the unit test classes are run in parallel; this is very important, as it significantly reduces unit test time in the build. The maven-surefire-plugin is configured in the POM as follows:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>2.19</version>
<configuration>
<argLine>-Xmx2G -XX:MaxPermSize=1G -XX:-UseSplitVerifier</argLine>
<failIfNoTests>false</failIfNoTests>
<parallel>classesAndMethods</parallel>
<useUnlimitedThreads>true</useUnlimitedThreads>
</configuration>
</plugin>
However, one of the implications of this is that in the console output, the time elapsed for each JUnit test class is the aggregate time for all the methods in the class.
For example, if a test class had 10 unit test methods, each of which took 1 second to run, then the test class would take about 1 second to run (each method being run in parallel), but the output would be something like:
Running com.package.QuickParallelTest
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.0 sec - in com.package.QuickParallelTest
This makes it hard to distinguish, from the console output alone, this class from another test class with 10 unit test methods of which 9 run almost instantly and 1 takes almost 10 seconds to run. In that case, the test class would take about 10 seconds to run (because of the one slow test method), but the maven-surefire-plugin console output would be effectively the same:
Running com.package.SlowParallelTest
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.0 sec - in com.package.SlowParallelTest
Ideally, I would like the time elapsed to indicate how long the test class took to run (in parallel), not the aggregate time taken to run the methods separately (as if single-threaded).
So, my question(s) is/are:
Is there a maven-surefire-plugin setting that I am missing, so that the printed summary would show the time taken per class rather than the aggregate for the methods?
Is this a known "bug" (or "feature") in the maven-surefire-plugin? (I've checked the Surefire JIRA, but couldn't find anything like this.)
Is there an alternative way for me to identify which tests are taking a long time and are therefore prime candidates for optimisation?
EDIT:
I've tried playing with some additional configuration settings. Curiously, adding the following to the configuration in the POM seems to change the time elapsed in the console output to the time taken to run the test class. However, this is (in my mind) counter-intuitive, since these are the default settings:
<configuration>
...
<forkCount>1</forkCount>
<reuseForks>true</reuseForks>
</configuration>
Adding the reportFormat entry to the Maven Surefire Plugin configuration and setting its value to plain (instead of the default brief) gives you the elapsed time per method.
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>2.19</version>
<configuration>
<argLine>-Xmx2G -XX:MaxPermSize=1G -XX:-UseSplitVerifier</argLine>
<failIfNoTests>false</failIfNoTests>
<parallel>classesAndMethods</parallel>
<useUnlimitedThreads>true</useUnlimitedThreads>
<reportFormat>plain</reportFormat>
</configuration>
</plugin>
</plugins>
</build>
Output with default reportFormat (brief):
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running com.sample.mocking.InternalServiceTestCase
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.241 sec - in com.sample.mocking.InternalServiceTestCase
Output with plain value:
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running com.sample.mocking.InternalServiceTestCase
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.187 sec - in com.sample.mocking.InternalServiceTestCase
test(com.sample.mocking.InternalServiceTestCase) Time elapsed: 0.005 sec
mockTest(com.sample.mocking.InternalServiceTestCase) Time elapsed: 0.17 sec
mockTestFailureTollerance(com.sample.mocking.InternalServiceTestCase) Time elapsed: 0.007 sec
mockProcessfile(com.sample.mocking.InternalServiceTestCase) Time elapsed: 0.003 sec
This option may give you further details on tests and execution time.
I'm applying BDD methodology using Cucumber, which is GREAT!
The problem is that my test suite is getting bigger and bigger, and now I get the following exception, which fails my tests for the wrong reason...
I'm using all sorts of Cucumber features, such as Background, Scenario Outline and simple scenarios.
I run the tests like this:
@RunWith(Cucumber.class)
@Cucumber.Options(features={"...../controller1"})
public class RunCukes1Test {
}
I split my feature files into different directories (controller1, controller2...) and runners (RunCukes1Test, RunCukes2Test...), but this didn't help.
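Each additional runner just mirrors the one above and points at the next feature directory, e.g. (class name and path here are only illustrative):

@RunWith(Cucumber.class)
@Cucumber.Options(features={"...../controller2"})
public class RunCukes2Test {
}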
When I run each test by itself everything is OK, but when I run them all using the Maven test lifecycle, it fails. Does anyone know of any best practices in Java Cucumber to avoid problems like this?
Tests run: 5896, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.082 sec
Running com.kenshoo.urlbuilder.appservice.controller.RunCukes4Test
Tests run: 11838, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 80.833 sec
Exception in thread "Thread-73" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:2882)
at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:515)
at java.lang.StringBuffer.append(StringBuffer.java:306)
at java.io.BufferedReader.readLine(BufferedReader.java:345)
at java.io.BufferedReader.readLine(BufferedReader.java:362)
at org.codehaus.plexus.util.cli.StreamPumper.run(StreamPumper.java:129)
Exception in thread "ThreadedStreamConsumer" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOfRange(Arrays.java:3209)
at java.lang.String.<init>(String.java:215)
at java.lang.StringBuffer.toString(StringBuffer.java:585)
at org.apache.maven.surefire.report.PrettyPrintXMLWriter.escapeXml(PrettyPrintXMLWriter.java:167)
at org.apache.maven.surefire.report.PrettyPrintXMLWriter.addAttribute(PrettyPrintXMLWriter.java:178)
at org.apache.maven.surefire.shade.org.codehaus.plexus.util.xml.Xpp3DomWriter.write(Xpp3DomWriter.java:50)
at org.apache.maven.surefire.shade.org.codehaus.plexus.util.xml.Xpp3DomWriter.write(Xpp3DomWriter.java:55)
at org.apache.maven.surefire.shade.org.codehaus.plexus.util.xml.Xpp3DomWriter.write(Xpp3DomWriter.java:39)
at org.apache.maven.surefire.report.XMLReporter.testSetCompleted(XMLReporter.java:128)
at org.apache.maven.surefire.report.MulticastingReporter.testSetCompleted(MulticastingReporter.java:51)
at org.apache.maven.surefire.report.TestSetRunListener.testSetCompleted(TestSetRunListener.java:115)
at org.apache.maven.plugin.surefire.booterclient.output.ForkClient.consumeLine(ForkClient.java:97)
at org.apache.maven.plugin.surefire.booterclient.output.ThreadedStreamConsumer$Pumper.run(ThreadedStreamConsumer.java:67)
at java.lang.Thread.run(Thread.java:662)
Results :
Tests run: 11790, Failures: 0, Errors: 0, Skipped: 0
I got an answer to another java-heap-space exception I had after the Cucumber tests ran.
You can see it here: related problem
My theory is that -XX:MaxPermSize is a factor during the Cucumber run, since Cucumber generates test code and PermSize is related to the amount of code, as described in what is permsize in java.
-Xmx is a factor after the Cucumber run, while the test results are being parsed.
So the solution is to find the balance between the two and the memory actually available.
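As a rough sketch only, assuming a standard Surefire setup: the sizes below are placeholders to be tuned against the memory you actually have (and if the OutOfMemoryError is thrown in the Maven process itself rather than in the forked test JVM, the -Xmx belongs in MAVEN_OPTS instead):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <!-- placeholder sizes: MaxPermSize covers the code Cucumber generates during
             the run, Xmx covers the heap used while the test results are processed -->
        <argLine>-Xmx1024m -XX:MaxPermSize=256m</argLine>
    </configuration>
</plugin>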
I've got a Jenkins CI server that is set up with a Selenium test project running with maven-surefire. I need the project to be a parameterized build, so that I can trigger the build via URL with -Dtest as a parameter (and only run the tests I specify in the URL). This works great.
Unfortunately, I've been unable to figure out how to run ALL of the tests while in this parameterized configuration. Since it is in parameterized build mode, I must ALWAYS specify the -Dtest parameter.
Based on the Surefire documentation, it seems like I should be able to wildcard the test names, and everything will be run:
-Dtest=* or -Dtest=Test*
The odd result of running these parameters is a print statement (that I created) from all 6 of the tests (denoting that they were all started):
"Test <test_name> started, click here to see the SauceLabs video"
And then the standard test result (below) for only 4 of the 6 tests:
Running <test_class_path>
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.048 sec
Followed by the summary:
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0
If it matters, the tests are being run in parallel using Surefire. One other odd thing: while printing out the individual test results, after the 4th one, the 5th result starts printing but never shows a result, and it includes a $1 at the end:
Running <test_class_path>$1
Please let me know if I can clarify anything or answer any questions.
Thanks in advance for any help!
I think that it's a regular expression:
mvn -Dtest=.*
works for me.