I am currently developing a really big app, and we are now facing the problem of unit testing everything in it.
I am trying to record all the interactions in methods and classes at execution time, so that I have inputs and outputs to compare against.
Yeah, I know this is not the proper way of doing unit testing, but we need to do it quickly. We are already working with Mockito/PowerMockito/JUnit.
I already tried AOP and AspectJ, but the problem is having to create new files for each class we have.
I was thinking of a way to intercept the execution flow, or something similar, so that I can dynamically write the inputs + dependency values and the output of each invoked method and class to a JSON file.
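For instance, something along these lines is the direction I have in mind: a JDK dynamic proxy around each dependency. This is only a sketch; the RecordingHandler class and the Jackson-based JSON writing are my own made-up names, not an existing library.

import com.fasterxml.jackson.databind.ObjectMapper;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RecordingHandler implements InvocationHandler {
    private final Object target;
    private final ObjectMapper mapper = new ObjectMapper();

    private RecordingHandler(Object target) { this.target = target; }

    // wrap any interface-typed dependency in a recording proxy
    @SuppressWarnings("unchecked")
    public static <T> T record(T target, Class<T> iface) {
        return (T) Proxy.newProxyInstance(iface.getClassLoader(),
                new Class<?>[] { iface }, new RecordingHandler(target));
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        Object result;
        try {
            result = method.invoke(target, args);
        } catch (InvocationTargetException e) {
            throw e.getCause(); // preserve the original exception
        }
        // append one JSON object per invocation: method name, inputs, output
        Map<String, Object> call = new LinkedHashMap<>();
        call.put("method", method.getName());
        call.put("args", args == null ? List.of() : Arrays.asList(args));
        call.put("result", result);
        Files.writeString(Path.of("recorded-calls.json"),
                mapper.writeValueAsString(call) + System.lineSeparator(),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        return result;
    }
}

The obvious limitation is that a dynamic proxy only covers interface-typed dependencies; for concrete classes I'd be back to bytecode weaving (AspectJ and the like), which is exactly what I'm trying to avoid doing per class.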
Any clues?
We are now facing the problem of unit testing everything in it.
Unit tests do not test code; unit tests verify publicly observable behavior that has justification in your requirements.
Publicly observable behavior does not necessarily mean public methods, but behavior observable from outside the code under test: that is, return values and communication with dependencies.
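To make that concrete, here is a tiny Mockito-style sketch (all names are made up): the test observes the return value and the calls made to a dependency, never the internals.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import org.junit.jupiter.api.Test;

class InvoiceServiceTest {
    @Test
    void billsCustomerAndNotifiesMailer() {
        Mailer mailer = mock(Mailer.class);                  // dependency (made up)
        InvoiceService service = new InvoiceService(mailer); // code under test (made up)

        int total = service.billCustomer("acme", 3);

        assertEquals(30, total);     // observable: the return value
        verify(mailer).send("acme"); // observable: communication with a dependency
    }
}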
Related
I have a small CRUD application that I would like to create integration tests for. I've seen the recommendation that "tests depending on other tests" is a no-go. But how can I keep the code maintainable while at the same time not reusing the data from other tests?
What I mean is easier to show by example with some pseudocode:
TestCreateUser {
    make POST to API creating a user
    verify that a 200 is received back
}
TestReadUser {
    GET the user from the previous test
    verify it's the same user
}
TestUpdateUser {
    PATCH the user from the previous test
    verify the user has the new data
}
So this would be bad, since all the tests depend on the first one. So what are the alternatives? I guess I could use a @BeforeEach:
@BeforeEach
public void initEach() {
    // make POST to API creating a user
    // verify that a 200 is received back
}
And then just skip the create-user test. But this might create unnecessary calls if I, for example, have a test like this:
TestCreateUserWithSpecialData {
    make POST to API creating a user with additional data
    verify that a 200 is received back
    verify the additional data is correct
}
Then the beforeEach would just create a user that the test does not need. What's a good solution to this? Should I split the tests up into smaller classes and more files? Or is there a better solution? I suppose I could put if statements in the beforeEach, but that feels like a hack.
You could use @BeforeAll to create some test data (once) and then have individual tests operate on it.
Or, for a test that’s doing something destructive like “delete”, you could create the user within the test itself.
(Purists might complain that this means the “delete” test will fail if the problem is actually with the “create” operation, but I don’t consider that a big problem — if something is sufficiently messed up with my testing environment that it can’t even create some test data, the exact number of tests that fail is not very interesting to me)
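A rough sketch of both suggestions, assuming a hypothetical api helper that wraps the HTTP calls:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;

class UserCrudTest {
    static final TestApi api = new TestApi(); // hypothetical client wrapping the HTTP calls
    static String sharedUserId;

    @BeforeAll
    static void createSharedUser() {
        sharedUserId = api.createUser("fixture"); // POST; created once for the whole class
    }

    @Test
    void readUser() {
        // non-destructive test reads the shared fixture
        assertEquals(200, api.getUser(sharedUserId).status());
    }

    @Test
    void deleteUser() {
        // destructive test creates its own user so the shared fixture stays intact
        String id = api.createUser("to-delete");
        assertEquals(200, api.deleteUser(id).status());
    }
}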
One way to do that is to do whatever setup you need using files containing database inserts. There's an @Sql annotation in Spring's test support that you can put on a test method to say which SQL file to run before the test.
That way each test has its own dedicated setup and there is no dependence on another test working; it also means the test isn't dependent on Java setup code working.
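For example (a minimal sketch assuming a Spring Boot test; the script name and test body are illustrative):

import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.jdbc.Sql;

@SpringBootTest
class ReadUserTest {

    @Test
    @Sql("/insert-test-user.sql") // runs this script against the test datasource before the method
    void returnsThePreviouslyInsertedUser() {
        // GET the user created by the script and verify its fields
    }
}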
TL;DR Is there any way we could annotate a full class with an extension, while at the same time being able to intercept all the tests and invoke them multiple times?
We are trying to update one of our projects from JUnit 4 to JUnit 5 (find the link here), but we have come across some problems in the migration from using Rules, Runners and Statements to the equivalents using JUnit 5 Extensions.
The project performs mutation testing on SQL statements: each test method is run multiple times, once for each possible mutation of the SQL, in order to generate a coverage report for the SQL. Currently we do this using a JUnit 4 Rule, but we want to change this to support JUnit 5, so we will need to turn the Rule into an Extension. The way this currently works is that when a @Test method gets called, it in turn calls the Rule's apply() method, which we use to return a custom Statement object. In this Statement we create the mutants and then call the base evaluate() method multiple times, once for each mutant, and monitor and store the results.
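Roughly, the existing Rule looks like this (Mutant and MutantGenerator are placeholders for our internal types, not real classes):

import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class SqlMutationRule implements TestRule {
    @Override
    public Statement apply(Statement base, Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                // run the same test once per mutant and record the outcome
                for (Mutant mutant : MutantGenerator.mutantsFor(description)) {
                    mutant.activate();
                    try {
                        base.evaluate();        // the test passed, so the mutant survived
                        mutant.recordSurvived();
                    } catch (Throwable t) {
                        mutant.recordKilled();  // the test failed, so the mutant was killed
                    } finally {
                        mutant.deactivate();
                    }
                }
            }
        };
    }
}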
We tried looking into how to do the equivalent in JUnit 5 with an extension, but the similar functionality provided by @RepeatedTest and @ParameterizedTest (which both run methods multiple times) requires adding an annotation to every single method it applies to. We would prefer to just use @ExtendWith on the class and then be able to intercept all test method calls and inject our multiple-call functionality into all of them. We looked into how @RepeatedTest and @ParameterizedTest do this via the @TestTemplate annotation and TestTemplateInvocationContextProvider, but adding an annotation at the method level is something we want to avoid, as users generally want coverage reports for the entire class, not single methods. We also looked at the various JUnit 5 lifecycle interfaces an Extension can implement, like InvocationInterceptor, but that only allows us to call proceed() once on the method being invoked, while we need to do this multiple times.
Is there any way we could annotate a full class with an extension, while at the same time being able to intercept the tests and make changes to them?
Thank you so much for your time!
Simple question. If I use spring-data to generate CRUD methods for my DAO layer, should I still write unit tests against the generated methods? Or would that be the equivalent of unit testing library code?
Thanks in advance.
EDIT: To clarify, I'm asking whether or not the unit test needs to be written in addition to a suite of integration tests that get run before a release. For example, a unit test for the findAll() method of the DAO layer would be similar to the following:
class DepartmentDAOTest extends spock.lang.Specification {
    /* ... */
    def "returns all departments"() {
        when:
        def result = dao.findAll()

        then:
        result.size() == EXPECTED_SIZE
    }
}
Whereas an integration test would probably be run by a test team or a developer by hand, possibly before tagging a new release. This could be automated using JWebUnit or Geb, and it tests every component (including the platform) to ensure they work as expected when "integrated."
If I were to write the DAO implementation by hand using JdbcTemplate there would be no question that I should unit test every method. When I unit test the service layer (which makes calls to the DAO layer) I can mock out the DAO layer so I don't test it twice.
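For instance, something like this is what I mean by mocking out the DAO (the service class and method names are illustrative):

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import java.util.List;
import org.junit.jupiter.api.Test;

class DepartmentServiceTest {
    @Test
    void returnsAllDepartmentsFromTheDao() {
        DepartmentDAO dao = mock(DepartmentDAO.class);
        when(dao.findAll()).thenReturn(List.of(new Department("Sales")));

        DepartmentService service = new DepartmentService(dao);

        // the DAO is stubbed, so only the service logic is exercised here
        assertEquals(1, service.allDepartments().size());
    }
}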
If I make a call into a third-party library like pdfbox for generating a PDF, there's an expectation for each method to work (because it is tested as a part of the pdfbox project). I don't test that their drawSquare method really draws a square, but during integration testing I'll see that my export PDF functionality correctly exports a PDF the way we want it to.
So the question should really be re-worded as, "Under which testing phase should I test my usage of spring-data?"
First, there is no code generated at all. We build a query meta-model from the query methods you declare and execute those queries dynamically. The short answer here is: you definitely should test the methods you declare. The reason is as obvious as it is simple: the query method declarations - no matter whether they use derived queries or manually declared ones - interact with the mapping metadata you defined for your entities. Thus, it's definitely reasonable to check the query method execution to make sure you see the expected results. This is then of course more of an integration test and a semantic check of the queries executed, rather than a classical unit test.
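A sketch of what such a check can look like, using Spring Boot's @DataJpaTest slice as one option (the repository, entity and derived query method are illustrative):

import static org.junit.jupiter.api.Assertions.assertEquals;
import java.util.List;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;

@DataJpaTest
class DepartmentRepositoryTests {

    @Autowired
    DepartmentRepository repository; // a Spring Data repository declaring a derived query

    @Test
    void findByNameExecutesTheDerivedQueryAgainstTheMapping() {
        repository.save(new Department("Sales"));

        List<Department> result = repository.findByName("Sales");

        assertEquals(1, result.size()); // semantic check of the executed query
    }
}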
No. As a general rule, don't test the platform.
I am new to JUnit, and I got a sample Java project in which I need to write unit tests for all the methods.
Unfortunately the code is poorly designed, and some of the methods are driven from the UI. Furthermore, some of the methods pop up a message box and do not return a value.
I have two questions. First, without modifying the existing code, is there a way I can suppress the message boxes so I don't have to press Enter every time I run the unit tests?
Second: can a test method expect a message box and assert success/failure based on its string content?
I appreciate any help. I know the best solution is to fix the code itself: separate the business logic completely from the UI and test expected results, or, if message boxes are somehow mandatory, use modal message boxes (like humble dialog boxes). But unfortunately I am not allowed to change anything in the code.
Thanks :)
Nili
There are all sorts of ways you could get started if only you were allowed to edit the code, so my first approach would be to see if you can get this restriction relaxed, and to read Working Effectively With Legacy Code.
Failing that you could try using a GUI testing framework like FEST-Swing to check the contents of the message boxes are as expected.
Not allowed to change the code, you say? My first thought is to have a look at JMockit, which really opens up a lot of possibilities when you are severely constrained by code that was not written with much concern for how it should be tested. It should enable you, without modifying any code, to substitute your preferred implementation of the bothersome parts while your test is running; so only in the context of testing would you have altered the test subject (be careful to still write a meaningful test!) or its dependencies. Other mock object frameworks can be useful too, but the investment to learn JMockit is time well spent.
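For example, here is a rough sketch using JMockit's MockUp/fake mechanism, assuming the legacy code uses JOptionPane; only the faked overload is covered, and the exact JMockit API details are worth double-checking against its documentation:

import javax.swing.JOptionPane;
import mockit.Mock;
import mockit.MockUp;

public class MessageBoxFake extends MockUp<JOptionPane> {
    public static String lastMessage;

    @Mock
    void showMessageDialog(java.awt.Component parent, Object message) {
        lastMessage = String.valueOf(message); // capture the text instead of popping up a dialog
    }
}

In a test you would apply it with new MessageBoxFake(); before exercising the code, then assert on MessageBoxFake.lastMessage. That addresses both questions: no dialog appears, and you can still check its text.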
unfortunately I am not allowed to change anything in the code.
There's all sorts of stuff on Google about how to automate Swing testing with JUnit. Unfortunately, there's no way to get around this problem when testing.
I have not used JUnit before and have not done automated unit testing.
Scenario:
We are changing our backend DAOs from SQL Server to Oracle, so on the DB side all the stored procedures were converted to Oracle. Now, when our code calls these new Oracle stored procedures, we want to make sure that the data returned is the same as what the SQL Server stored procedures returned.
So for example I have the following method in a DAO:
// this is the old method; gets data from SQL Server
public IdentifierBean getHeadIdentifiers_old(String head) {
    HashMap parmMap = new HashMap();
    parmMap.put("head", head);
    List result = getSqlMapClientTemplate().queryForList("Income.getIdentifiers", parmMap);
    return (IdentifierBean) result.get(0);
}

// this is the new method; gets data from Oracle
public IdentifierBean getHeadIdentifiers(String head) {
    HashMap parmMap = new HashMap();
    parmMap.put("head", head);
    getSqlMapClientTemplate().queryForObject("Income.getIdentifiers", parmMap);
    // the Oracle procedure returns its rows through the "Result0" entry in the parameter map
    return (IdentifierBean) ((List) parmMap.get("Result0")).get(0);
}
Now I want to write a JUnit test method that first calls getHeadIdentifiers_old and then getHeadIdentifiers, and compares the returned objects (I will have to override equals and hashCode in IdentifierBean). The test passes only when both objects are equal.
In the test method I will have to provide a parameter (head in this case) for the two methods; this will be done manually for now. Yeah, from the front end the parameters could be different, and the SPs might not return exactly the same results for those parameters. But I think having these test cases will give us some confidence that they return the same data...
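The override would look something like this (the bean's fields here are just examples; the real ones come from IdentifierBean):

import java.util.Objects;

public class IdentifierBean {
    private String id;          // example field
    private String description; // example field

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof IdentifierBean)) return false;
        IdentifierBean other = (IdentifierBean) o;
        return Objects.equals(id, other.id)
                && Objects.equals(description, other.description);
    }

    @Override
    public int hashCode() {
        return Objects.hash(id, description);
    }
}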
My questions are:
1. Is this a good approach?
2. I will have multiple DAOs. Do I write the test methods inside the DAO itself, or should I have a separate JUnit test class for each DAO?
3. (Might be a n00b question.) Will all the test cases be run automatically? I do not want to go to the front end and click a bunch of stuff so that the call to the DAO gets triggered.
4. When the tests are run, will I find out which methods failed? And for the ones that failed, will it tell me which test method failed?
5. Lastly, any good starting points? Any tutorials or articles that show how to work with JUnit?
Okay, let's see what can be done...
Is this a good approach?
Not really. Instead of having one obsolete code path with somewhat known functionality, you now have two code paths with unequal and unpredictable functionality. Usually one would create thorough unit tests for the legacy code first and then refactor the original method in place, to avoid carrying two implementations around: what if some part of the jungle of code forming this huge application keeps calling the old method while other parts call the new one?
However, working with legacy code is never optimal, so what you're thinking of may be the best solution.
I will have multiple DAOs. Do I write the test methods inside the DAO itself, or should I have a separate JUnit test class for each DAO?
Assuming you've gone properly OO with your program structure, where each class does one thing and one thing only: yes, you should make a separate class containing the test cases for each individual class. What you're looking for here is mock objects (search for them on SO and on Google in general; lots of info available), which help you decouple the class under test from other classes. Interestingly, a high number of mocks in unit tests usually means that your class could use some heavy refactoring.
(Might be a n00b question.) Will all the test cases be run automatically? I do not want to go to the front end and click a bunch of stuff so that the call to the DAO gets triggered.
All IDEs allow you to run all the JUnit tests at once; for example, in Eclipse just right-click the source folder/top package and choose Run As -> JUnit Test. Also, when running an individual class, all the unit tests contained within it are run in the proper JUnit flow (setUp() -> testX() -> tearDown()).
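To illustrate that flow, here is a minimal JUnit 3-style example (the DAO wiring is made up):

import junit.framework.TestCase;

public class DepartmentDAOLifecycleTest extends TestCase {
    private DepartmentDAO dao;

    protected void setUp() {
        dao = new DepartmentDAO(); // runs before each testX() method
    }

    protected void tearDown() {
        dao = null;                // runs after each testX() method
    }

    public void testGetHeadIdentifiersReturnsSomething() {
        assertNotNull(dao.getHeadIdentifiers("head1"));
    }
}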
When the tests are run, will I find out which methods failed? And for the ones that failed, will it tell me which test method failed?
Yes. Part of Test-Driven Development is the mantra Red-Green-Refactor, which refers to the colored bar IDEs show for unit tests: if any test in the suite fails, the bar is red; if all pass, it's green. Additionally, for JUnit, individual tests that fail an assertion are marked distinctly (blue in some IDEs) from tests that end in unexpected errors.
Lastly, any good starting points? Any tutorials or articles that show how to work with JUnit?
I'm quite sure there are going to be multiple of these in the answers soon, just hang on :)
You'll write a test class.
public class OracleMatchesSqlServer extends TestCase {
    public void testHeadIdentifiersShouldBeEqual() throws Exception {
        String head = "whatever your head should be";
        IdentifierBean originalBean = YourClass.getHeadIdentifiers_old(head);
        IdentifierBean oracleBean = YourClass.getHeadIdentifiers(head);
        assertEquals(originalBean, oracleBean);
    }
}
You might find you need to parameterize this on head; that's straightforward.
Update: It looks like this:
public class OracleMatchesSqlServer extends TestCase {
    public void testHeadIdentifiersShouldBeEqual() throws Exception {
        compareIdentifiersWithHead("head1");
        compareIdentifiersWithHead("head2");
        compareIdentifiersWithHead("etc");
    }

    private static void compareIdentifiersWithHead(String head) {
        IdentifierBean originalBean = YourClass.getHeadIdentifiers_old(head);
        IdentifierBean oracleBean = YourClass.getHeadIdentifiers(head);
        assertEquals(originalBean, oracleBean);
    }
}
* Is this a good approach?
Sure.
* I will have multiple DAOs. Do I write the test methods inside the DAO itself, or should I have a separate JUnit test class for each DAO?
Try it with a separate test class for each DAO; if that gets too tedious, try it the other way and see which you like best. It's probably more helpful to have the finer granularity of separate test classes, but your mileage may vary.
* (Might be a n00b question.) Will all the test cases be run automatically? I do not want to go to the front end and click a bunch of stuff so that the call to the DAO gets triggered.
Depending on your environment, there will be ways to run all the tests automatically.
* When the tests are run, will I find out which methods failed? And for the ones that failed, will it tell me which test method failed?
Yes and yes.
* Lastly, any good starting points? Any tutorials or articles that show how to work with JUnit?
I really like Dave Astels' book.
Another useful introduction to writing and maintaining large unit test suites might be this book (which is partially available online):
xUnit Test Patterns: Refactoring Test Code by Gerard Meszaros
The book is organized in 3 major parts. Part I consists of a series of introductory narratives that describe some aspect of test automation using xUnit. Part II describes a number of "test smells" that are symptoms of problems with how we are automating our tests. Part III contains descriptions of the patterns.
Here's a quick yet fairly thorough intro to JUnit.