Code coverage for every (different) input data - Java

I want to get the code coverage of a simple test which gets its data from a DataProvider. I need a separate coverage result for each data value that runs through the test. For example:
if (value != 0) {
    // do something
}
if (value == 100) {
    // do something
} else {
    // do something else
}
If the test gets a value like 0 from the DataProvider, it never reaches the first block, so the coverage result is different than if the value were 100.
So how do I get those coverage results for each data value? I am using JaCoCo with the Maven plugin...
It might help if there were a way to run the individual subtests with Maven... Currently I am doing this:
mvn test
but I want to do something like this:
mvn -Dtest=myTestClass#myTest#myData (#myData of course does not work)
However, IntelliJ uses this parameter to specify the subtest:
java.exe -ea [.......] #name0 //-> to run the test only with first Data
java.exe -ea [.......] #name1 //-> to run the test only with second Data
etc.
Thanks for your help in advance!

Code coverage is the percentage of code which is covered by automated tests. Code coverage measurement simply determines which statements in a body of code have been executed through a test run, and which statements have not.
https://confluence.atlassian.com/clover/about-code-coverage-71599496.html
You can pass arguments from the command line and then read them in your tests. Pass them like this:
mvn test -Dtest=<ClassName> -Dvalue=100
then access them in your test with:
int value = Integer.valueOf(System.getProperty("value"));
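
Building on that, here is a minimal sketch of how the two pieces could fit together, assuming TestNG and the hypothetical names MyTestClass/myTest from the question: the DataProvider yields only the value passed in via -Dvalue, so a single Maven run exercises exactly one input and the JaCoCo report for that run reflects only that input.

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class MyTestClass {

    // Hypothetical provider: instead of a fixed list of values, it yields only
    // the one passed via -Dvalue (defaulting to 0), so one run = one input.
    @DataProvider(name = "values")
    public Object[][] values() {
        int value = Integer.valueOf(System.getProperty("value", "0"));
        return new Object[][] { { value } };
    }

    @Test(dataProvider = "values")
    public void myTest(int value) {
        if (value != 0) {
            // do something
        }
        if (value == 100) {
            // do something
        } else {
            // do something else
        }
    }
}

Run mvn test -Dtest=MyTestClass -Dvalue=0, archive the generated coverage file (target/jacoco.exec by default), then repeat with -Dvalue=100; each archived report then corresponds to exactly one input.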

Cannot stop at breakpoints (on Method under test) when debugging by testng and mockito-inline

I wrote a test case for my service (which has the method under test, MUT) using TestNG and Mockito-inline (to mock a static method). My issue: when I set breakpoints in the MUT, Eclipse's cursor does not stop at them. If I set breakpoints in the test code, the cursor stops and highlights correctly.
When I switch back to Mockito-core (same version 3.11.2, though then I cannot use static mocking anymore), every breakpoint (in the test code and in my service) works correctly.
Here are the details:
The test case (breakpoints in this file work correctly):
try (MockedStatic<myClass> mockMyClass = Mockito.mockStatic(myClass.class)) {
    mockMyClass.when(() -> myClass.get(anyInt())).thenReturn(myReturnObject);
    myService.myMethodUnderTest();
}
My service code: another file (myService) with many methods; these methods, like myMethodUnderTest, call the static method myClass.get(int value):
public void myMethodUnderTest(int value) {
    line1; // if I set a breakpoint here, Eclipse's cursor does not stop; everything runs as if the breakpoint were not set, although this line always executes
    myClass.get(value); // the static mock works correctly
    lineN; // same behavior as line 1
}
Here is my environment in detail: Ubuntu 20.04, Eclipse 2018-12, Java 1.8, Spring 5.1.9, TestNG 7.4.0, Mockito 3.11.2 (core and inline).
I tried Mockito-inline 3.8.0, but the issue is the same.
When I debug step by step (F5, F6), Eclipse's cursor is also wrong inside the method under test (it still highlights a line, but the wrong one logically), while it remains correct inside the test code.
Can anyone help me resolve this issue, so that step-by-step debugging works correctly with Mockito-inline?
Thank you very much

How should I manage dependencies among test cases [duplicate]

When I run go test, my output:
--- FAIL: TestGETSearchSuccess (0.00s)
Location: drivers_api_test.go:283
Error: Not equal: 200 (expected)
!= 204 (actual)
--- FAIL: TestGETCOSearchSuccess (0.00s)
Location: drivers_api_test.go:391
Error: Not equal: 200 (expected)
!= 204 (actual)
But after I run go test again, all my tests pass.
Tests fail only when I reset my MySQL database and then run go test for the first time.
For every GET request, I first do a POST request to ensure that there is data created in the DB.
Could anyone help me make sure the tests run sequentially, i.e. that the POST requests run before the GET requests?
You can't / shouldn't rely on test execution order. The order in which tests are executed is not defined, and with the use of testing flags it is possible to exclude tests from running, so you have no guarantee that they will run at all.
For example the following command will only run tests whose name contains a 'W' letter:
go test -run W
Also note that if some test functions mark themselves eligible for parallel execution using the T.Parallel() method, the go tool will reorder the tests to first run non-parallel tests, and then run parallel tests in parallel under certain circumstances (controlled by test flags like -p). You can see examples of this in this answer: Are tests executed in parallel in Go or one by one?
Tests should be independent from each other. If a test function has prerequisites, those cannot be implemented in another test function.
Options to do additional tasks before a test function is run:
You may put it in the test function itself
You may put it in a package init() function, in the _test.go file itself. This will run once before execution of test functions begins.
You may choose to implement a TestMain() function which will be called first and in which you may do additional setup before you call M.Run() to trigger the execution of test functions.
You may mix the above options.
In your case in package init() or TestMain() you should check if your DB is initialized (there are test records inserted), and if not, insert the test records.
Note that starting with Go 1.7, you may use subtests in which you define execution order of subtests. For details see blog post: Using Subtests and Sub-benchmarks, and the package doc of the testing package.
For those who, like me, are running into problems because of multiple concurrent tests running simultaneously: I found a way to limit the maximum number of test binaries running in parallel:
go test -p 1
With this, your tests will run sequentially, one by one.
Apart from third-party libraries like Convey and Ginkgo, with plain Go 1.7 you can run tests sequentially using subtests. You can read more here
func TestFoo(t *testing.T) {
    // <setup code>
    t.Run("A=1", func(t *testing.T) { ... })
    t.Run("A=2", func(t *testing.T) { ... })
    t.Run("B=1", func(t *testing.T) { ... })
    // <tear-down code>
}
And you can run them conditionally with:
go test -run '' # Run all tests.
go test -run Foo # Run top-level tests matching "Foo", such as "TestFooBar".
go test -run Foo/A= # For top-level tests matching "Foo", run subtests matching "A=".
go test -run /A=1 # For all top-level tests, run subtests matching "A=1".
So let's say you have a user package from a REST API that you want to test. You need to test the create handler in order to be able to test the login handler. Usually I would have this in user_test.go:
type UserTests struct{ Test *testing.T }

func TestRunner(t *testing.T) {
    t.Run("A=create", func(t *testing.T) {
        test := UserTests{Test: t}
        test.TestCreateRegularUser()
        test.TestCreateConfirmedUser()
        test.TestCreateMasterUser()
        test.TestCreateUserTwice()
    })
    t.Run("A=login", func(t *testing.T) {
        test := UserTests{Test: t}
        test.TestLoginRegularUser()
        test.TestLoginConfirmedUser()
        test.TestLoginMasterUser()
    })
}
Then I can attach methods to the UserTests type that won't be executed by the go test command, even though they live in a _test.go file:
func (t *UserTests) TestCreateRegularUser() {
    registerRegularUser := util.TableTest{
        Method:      "POST",
        Path:        "/iot/users",
        Status:      http.StatusOK,
        Name:        "registerRegularUser",
        Description: "register Regular User has to return 200",
        Body:        SerializeUser(RegularUser),
    }
    response := util.SpinSingleTableTests(t.Test, registerRegularUser)
    util.LogIfVerbose(color.BgCyan, "IOT/USERS/TEST", response)
}
The best way to achieve that is to create a TestMain, as presented here.
import (
    "os"
    "testing"
)

func TestMain(m *testing.M) {
    // Do your stuff here
    os.Exit(m.Run())
}
It's also possible to synchronize the test using wait groups:
awaitRootElement := sync.WaitGroup{}
awaitRootElement.Add(1)

t.Run("it should create root element", func(t0 *testing.T) {
    // do the test which creates the root element
    awaitRootElement.Done()
})

awaitRootElement.Wait()

t.Run("it should act on root element somehow", func(t0 *testing.T) {
    // do tests which depend on the root element
})
Note that you should wait before scheduling the dependent tests, since the asynchronous test execution might otherwise deadlock (the test goroutine awaits another test which never gets to run).

How to get Jenkins env variable from Java app?

I am running integration tests written in Java inside of a Jenkins pipeline.
In my pipeline I am setting the appium.app.branch variable (env.'appium.app.branch' = branch).
Then I call mvn verify. The problem is that in my Java test code I can't get the appium.app.branch value; the System.getenv("appium.app.branch") call returns null.
How do I get the value?
Use a withEnv() {} block. Something like this should work:
node {
    withEnv(["appium.app.branch=${branch}"]) {
        sh 'mvn verify'
    }
}
However, I am not sure about the variable name; bash, for example, doesn't support variable names with dots. Try an alphanumeric-plus-underscore name like APPIUM_APP_BRANCH instead.
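
For example, a minimal sketch of the Java side, assuming the variable has been renamed to APPIUM_APP_BRANCH (a hypothetical name) in the withEnv() block above:

// System.getenv() reads process environment variables, which is why the
// dotted name appium.app.branch was never visible to the test code.
String appBranch = System.getenv("APPIUM_APP_BRANCH");

Alternatively, dotted names are fine as JVM system properties, so passing sh 'mvn verify -Dappium.app.branch=...' and reading it with System.getProperty("appium.app.branch") should also work.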

JUnit test case failed

I have a simple test case:
public class FileManagerTest {
    String dirPath = "/myDir/";
    MyFileManager mFileManager;

    @Before
    public void setUp() {
        mFileManager = MyFileManager.getInstance();
    }

    @Test
    public void testPersistFiles() {
        System.out.println("testPersistFiles()...");
        // it deletes old files & persists new files to the /myDir/ directory
        boolean successful = mFileManager.persistFiles();
        Assert.assertTrue(successful);
    }

    @Test
    public void testGetFiles() {
        System.out.println("testGetFiles()...");
        mFileManager.persistFiles();
        // I double checked, persistFiles() works, the files are persisted.
        List<File> files = mFileManager.getFilesAtPath(dirPath);
        Assert.assertNotNull(files); // Failure here!!!!
    }

    @Test
    public void testGetFilesMap() {
        System.out.println("testGetFilesMap()...");
        mFileManager.persistFiles();
        Map<String, File> filesMap = mFileManager.getFilesMapAtPath(dirPath);
        Assert.assertNotNull(filesMap);
    }
}
The persistFiles() function in FileManager deletes all files under /myDir/ and then persists the files again.
As you can see above, I have a System.out.println(...) in each test function. When I run the class, I see all the prints in the following order:
testGetFilesMap()...
testGetFiles()...
testPersistFiles()...
However, the run fails at testGetFiles(). Two things I don't understand:
I don't understand why, even though it fails at testGetFiles(), I can still see the print from testPersistFiles(); it seems that even when a test fails, JUnit doesn't stop but continues to run the next test, testPersistFiles(). What is happening behind the scenes in a JUnit test case?
Another thing I don't understand is why testGetFiles() fails at all. I can see in the log that persistFiles() has persisted the files. Why does it get null after that?
I don't understand why, even though it fails at testGetFiles(), I can still see the print from testPersistFiles() [...]
That is how unit testing works. Each test should be isolated, working only with its own set of data. Unit test frameworks run every test so you can see which parts of the system work and which do not; they do not stop on the first failure.
mFileManager.getFilesAtPath(dirPath);
You are not searching for the files in the right place.
String dirPath = "/myDir/";
Are you sure that this path is OK, with a slash before the directory name?
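
As a quick sanity check, here is a small sketch (plain java.io.File) that prints where the JVM actually resolves that path:

import java.io.File;

// Prints the absolute location and whether it exists; a leading slash is
// resolved against the filesystem root, which tests often cannot write to.
File dir = new File("/myDir/");
System.out.println(dir.getAbsolutePath() + " exists: " + dir.exists());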
For each of your tests, JUnit creates a separate instance of the test class and runs it. Since you have 3 tests, JUnit will create 3 instances of your class, execute @Before on each of them to initialize state, and then run them.
The order in which they are run is typically the order in which the tests are written, but this is not guaranteed.
Now about the print statement: it is the first statement in your test, so it is executed. Then mFileManager.persistFiles(); is executed. For some reason it returns false, and hence the test fails.
As to why it returns false: run a local debugger, put a breakpoint at the beginning of that method, single-step through it and see.
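
To illustrate the separate-instance behavior, here is a minimal JUnit 4 sketch (a hypothetical class, not from the question) where an instance field never leaks from one test into another:

import org.junit.Assert;
import org.junit.Test;

public class InstancePerTestDemo {

    private int counter = 0; // re-initialized for every test method

    @Test
    public void first() {
        counter++;
        Assert.assertEquals(1, counter); // passes
    }

    @Test
    public void second() {
        counter++;
        Assert.assertEquals(1, counter); // also passes: JUnit 4 created a fresh instance
    }
}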

Junit preconditions and test data

I have a Java assignment to create an address book, then test and evaluate it. I have created it and written some JUnit tests. In the deliverables section, the assignment says to list all the test cases for the full program in a table, along with:
A unique id
a description of the test
pre-conditions for running the test
the test data
the expected result
Could somebody tell me what they mean by the preconditions and the test data for the test below?
public void testGetName()
{
    Entry entry1 = new Entry("Alison Murray", "34 Station Rd", "Workington", "CA14 4TG");
    assertEquals("Alison Murray", entry1.getName());
}
I tried emailing the tutor (I'm a distance learner) but it's taking too long to get a reply. Would the precondition be that entry1 needs to be populated? Test data: "Alison Murray"? Any help is appreciated.
There are two types of checks with JUnit:
assertions (org.junit.Assert.*);
assumptions (org.junit.Assume.*).
Assertions are usually used to check your test results. If the result is not what was expected, the test fails.
Assumptions are used to check whether the test data is valid (whether it matches the test case). If it doesn't, the test is cancelled (without any error).
As I read your code sample: there are no preconditions and the test data would be entry1.
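
For example, here is a minimal sketch contrasting the two kinds of checks, reusing the Entry class from the question (JUnit 4):

import static org.junit.Assert.assertEquals;
import static org.junit.Assume.assumeNotNull;
import org.junit.Test;

public class EntryTest {

    @Test
    public void testGetName() {
        // Test data: the values the object under test is built from
        Entry entry1 = new Entry("Alison Murray", "34 Station Rd", "Workington", "CA14 4TG");
        // Assumption (precondition): skip, rather than fail, if the data is unusable
        assumeNotNull(entry1);
        // Assertion: the actual expected result of the test
        assertEquals("Alison Murray", entry1.getName());
    }
}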
