I am new to JUnit and I got a sample Java project in which I need to write unit tests for all the methods.
Unfortunately the code is poorly designed, and some of the methods are invoked from the UI. Furthermore, some of the methods pop up a message box and do not return a value.
I have two questions. First: without modifying the existing code, is there a way I can suppress the message boxes so I don't have to press Enter every time I run the unit tests?
Second question: can a test function expect a message box and assert failure/success based on its string content?
I appreciate any help. I know the best solution is to fix the code itself: separate the business logic completely from the UI and test expected results, or, even if message boxes are somehow mandatory, use the Humble Dialog Box pattern. Unfortunately, I am not allowed to change anything in the code.
Thanks :)
Nili
There are all sorts of ways you could get started if only you were allowed to edit the code, so my first approach would be to see if you can get this restriction relaxed, and to read Working Effectively With Legacy Code.
Failing that, you could try using a GUI testing framework like FEST-Swing to check that the contents of the message boxes are as expected.
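A sketch of what that can look like with FEST-Swing, assuming the dialogs come from JOptionPane; LegacyClass.methodThatPopsUpDialog() and the expected text are hypothetical stand-ins for your real code:

import org.fest.swing.core.BasicRobot;
import org.fest.swing.core.Robot;
import org.fest.swing.finder.JOptionPaneFinder;
import org.fest.swing.fixture.JOptionPaneFixture;
import org.junit.Test;

public class MessageBoxContentTest {
    @Test
    public void showsExpectedMessage() {
        Robot robot = BasicRobot.robotWithCurrentAwtHierarchy();

        // Run the legacy method on another thread, because a modal
        // dialog would otherwise block the test thread.
        new Thread(new Runnable() {
            public void run() {
                LegacyClass.methodThatPopsUpDialog(); // hypothetical legacy call
            }
        }).start();

        JOptionPaneFixture optionPane = JOptionPaneFinder.findOptionPane().using(robot);
        optionPane.requireMessage("Operation completed"); // assert on the content
        optionPane.okButton().click();                    // dismiss, so no manual Enter
        robot.cleanUp();
    }
}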
Not allowed to change the code, you say? My first thought is to have a look at JMockit, which really opens up a lot of possibilities when you are severely constrained by code that was not written with much concern for how it should be tested. It should enable you to substitute your preferred implementation of bothersome parts while your test is running, without modifying any code; only in the context of testing would you have altered the test subject (be careful to write a meaningful test!) or its dependencies. Other mock object frameworks can be useful too, but the investment to learn JMockit is time well spent.
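For example, suppressing a JOptionPane message box with a JMockit fake might look like this; LegacyClass.methodThatPopsUpDialog() and the expected text are hypothetical stand-ins for the real code:

import static org.junit.Assert.assertEquals;
import java.awt.Component;
import javax.swing.JOptionPane;
import mockit.Mock;
import mockit.MockUp;
import org.junit.Test;

public class MessageBoxSuppressionTest {
    private String capturedMessage;

    @Test
    public void legacyMethodReportsSuccess() {
        // Redirect JOptionPane.showMessageDialog for the duration of this
        // test: no dialog appears, no Enter key is needed, and the text
        // is captured so it can be asserted on.
        new MockUp<JOptionPane>() {
            @Mock
            void showMessageDialog(Component parent, Object message) {
                capturedMessage = String.valueOf(message);
            }
        };

        LegacyClass.methodThatPopsUpDialog(); // hypothetical legacy call

        assertEquals("Operation completed", capturedMessage);
    }
}

Note that the same fake answers both of your questions at once: the dialog is suppressed, and its would-be content can be asserted on.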
unfortunately I am not allowed to change anything in the code.
There's all sorts of material on Google about how to automate Swing testing with JUnit. Unfortunately, if you can't change the code, there is no way to avoid dealing with the UI when testing.
Related
I have read a lot about test-driven development. My project uses tests, but currently they are written after the code has been written, and I am not clear on how to do it in the other direction.
Simple example: I have a class Rectangle. It has private fields width and height with corresponding getters and setters. Common Java. Now, I want to add a function getArea() which returns the product of both, but I want to write the test first.
Of course, I can write a unit test. But it isn't just the case that it fails: it does not even compile, because there is no getArea() function yet. Does that mean that writing the test always involves changing the production code first, to introduce dummies without functionality? Or do I have to write the test in a way that uses introspection? I don't like the latter approach, because it makes the code less readable, and later refactoring with tools will not discover it and will break the test, and I know that we refactor a lot. Also, adding dummies may involve lots of changes, e.g. if I need additional fields, the database must be changed for Hibernate to continue to work. That seems like far too many production code changes for something that is supposedly "writing tests only". What I would like to have is a situation where I can actually only write code inside src/test/, not touching src/main at all, but without introspection.
Is there a way to do that?
Well, TDD does not mean that you cannot have anything in the production code before writing the test.
For example:
You put your method, e.g. getArea(param1, param2) in your production code with an empty body.
Then you write the test with valid input and your expected result.
You run the test and it will fail.
Then you change the production code and run the test again.
If it still fails: back to the previous step.
If it passes, you write the next test.
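With the Rectangle example from the question, the first two steps might look like this (a minimal sketch, assuming JUnit 4):

// src/main/java/Rectangle.java -- the stub added so the test can compile
public class Rectangle {
    private int width;
    private int height;

    public int getWidth() { return width; }
    public void setWidth(int width) { this.width = width; }
    public int getHeight() { return height; }
    public void setHeight(int height) { this.height = height; }

    // Step 1: an empty/dummy body, just enough to compile.
    public int getArea() {
        return 0;
    }
}

// src/test/java/RectangleTest.java -- Step 2: the test, which fails for now
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class RectangleTest {
    @Test
    public void areaIsWidthTimesHeight() {
        Rectangle r = new Rectangle();
        r.setWidth(3);
        r.setHeight(4);
        assertEquals(12, r.getArea());
    }
}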
A quick introduction can be found for example here: codeutopia -> 5-step-method-to-make-test-driven-development-and-unit-testing-easy
What I would like to have is a situation where I can actually only write code inside src/test/, not touching src/main at all, but without introspection.
There isn't, that I have ever seen, a way to write a test with a dependency on a new part of the API, and have that test immediately compile without first extending the API of the test subject.
It's introspection or nothing.
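For completeness, this is roughly what the introspection route looks like with the Rectangle example: it compiles before getArea() exists, but it is exactly the fragile, refactoring-hostile style the question wants to avoid.

import static org.junit.Assert.assertEquals;
import java.lang.reflect.Method;
import org.junit.Test;

public class RectangleAreaReflectionTest {
    @Test
    public void areaIsWidthTimesHeight() throws Exception {
        Rectangle r = new Rectangle();
        r.setWidth(3);
        r.setHeight(4);

        // The method is looked up by name at runtime, so this test
        // compiles even while getArea() does not yet exist.
        Method getArea = Rectangle.class.getMethod("getArea");
        assertEquals(12, ((Integer) getArea.invoke(r)).intValue());
    }
}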
But it isn't just the case that it fails: it does not even compile, because there is no getArea() function yet
Historically, writing code that couldn't compile was part of the rhythm of TDD. Write a little bit of test code, write a little bit of production code, write a little bit of test code, write a little bit of production code, and so on.
Robert Martin describes this as the nano-cycle of TDD:
... the goal is always to promote the line by line granularity that I experienced while working with Kent so long ago.
I've abandoned the nano-cycle constraint in my own work. Perhaps I fail to appreciate it because I've never paired with Kent.
But I'm perfectly happy to write tests that don't compile, and then back-fill the production code I need once the test is in a satisfactory state. That works well for me because I normally work in a development environment that can generate production implementations with just a few keystrokes.
Another possibility is to consider a discipline like TDD as if you meant it, which does a lot more of the real work in the test source hierarchy before moving code into the production hierarchy.
I've been doing Android development for quite some time, but I never fully adopted TDD on Android. Recently, however, I tried to develop my new app with complete TDD. So here is my opinion.
Does that mean that writing the test always involves changing the production code first, to introduce dummies without functionality?
I think the answer is yes. As I understand it, every test corresponds to a spec or use case of the software. So writing a failing test first is an attempt to capture the requirement spec in test code. Then, when I wrote the production code to pass the just-written test case, I really tried to make it work. After doing this for a while, I was pretty surprised at how small my production code was, yet how much of the requirements it covered.
For me personally, all the failing test cases I wrote before the production code actually came from a list of questions I had brainstormed about the requirements, and I sometimes used them to explore edge cases of the requirements.
So the basic workflow is Red - Green - Refactor, which I got from a presentation by Bryan Breecham: https://www.infoq.com/presentations/tdd-lego/
About this part:
What I would like to have is a situation where I can actually only write code inside src/test/, not touching src/main at all, but without introspection.
For me, I think it's possible if you write all your production logic first; then the unit tests play the role of verifying the requirements. It's just the other way around. So overall I think TDD is the approach, but people may use unit tests for different purposes, e.g. reducing testing time.
I've been working with Java, specifically in Android, for a few months now, and I've found that working with PowerMockito is something I'd rather not do. The complexities of keeping it working have outweighed any benefit of it. I also think I'd agree with most of the comments I've read on Stack Overflow that say not to use PowerMockito, so please keep that in mind when answering my question. I am looking for guidance on testing without PowerMockito.
My question is: when writing code that interfaces with a 3rd-party SDK that has some static method, how would you test it? Specifically, when it seems the only thing really worth testing is a behaviour, i.e. that the static method was called?
I can and do put these 3rd-party services behind adapter classes usually, and I can test that my adapter was called. But how do you live with never being able to test that the 3rd party itself was called, and maybe confirm which arguments it was called with? Is the only thing available in my toolbox to limit logic as much as possible, so that the untested area is less likely to fail?
When explaining this to someone coming from a dynamically typed language, would you just say that the test wasn't valuable? I'm thinking at this point that these kinds of tests are low value, but I can understand why others would want to test this kind of thing. It's the kind of test I've seen written a lot in Ruby projects I've worked on.
The one thing I have done in the past in similar situations:
create a tiny wrapper interface and an impl class calling that static method, plus tests verifying that the wrapper is called;
and a single test case that invokes that impl class, and thereby the real static method.
If one is "lucky" that call has an observable effect, for example some exception gets thrown (that is the problem with a lot of static code in my context - it simply breaks unless the whole stack is running). And then you check for that. But I also agree: there isn't much value in doing so. It proofs correct plumbing, at the cost of being subject to change whenever the behavior of that static method changes.
Whenever I program, I seem to accumulate a lot of "trash" code, code that is no longer in use. Just to keep my code neat, and to avoid any expensive and unnecessary computations, is there an easy way to tell if there is code that is not being used?
One of the basic principles that will help you in this regard is to reduce the visibility of everything as much as possible. If a class can be private, don't make it package-private (default), protected, or public. The same applies to methods and variables. It is much easier when you can say for sure that something is not being used outside a class. In cases like this, even IDEs like Eclipse and IntelliJ IDEA will point out unused code.
Using this practice while developing and refactoring code is the best way to clean unused code confidently without the possibility of breaking the application. This will help in scenarios even when reflection is being used.
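For instance (a minimal sketch; the names are made up):

public class OrderService {

    public double totalWithTax(double net) {
        return net * (1 + taxRate());
    }

    // private: the IDE can flag this method as unused the moment
    // the last call to it inside this class disappears.
    private double taxRate() {
        return 0.19;
    }
}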
It's difficult to do in Java since it's a reflective language. (You can't simply hunt for calls to a certain class or function, for example, since reflection can be used to call a function using strings that can only be resolved at runtime.)
So in full generality, you cannot be certain.
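For example, no textual search or static analysis of the source can prove whether the target of this call is used; the property value and class name here are hypothetical:

import java.util.Properties;

public class ReflectiveDispatch {
    public static void main(String[] args) throws Exception {
        Properties config = new Properties();
        config.setProperty("job.class", "com.example.NightlyReport"); // could equally come from a file

        // The class and method names are plain strings until runtime,
        // so "find usages" on NightlyReport.run() will never see this call.
        String className = config.getProperty("job.class");
        Object job = Class.forName(className).getDeclaredConstructor().newInstance();
        job.getClass().getMethod("run").invoke(job);
    }
}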
If you have adequate unit tests for your code base then the possibility of redundant code should not be a cause for concern.
I think "unused code" means the code that is always not executed at runtime. I hope I interpreted you correctly.
The way to do a simple check is very easy: just use IntelliJ IDEA to write your code. It will point out the parts of your code that will never be executed, and also the parts where the code can be simplified. For example,
if (x == 5) {
}
And then it will tell you that this if statement is redundant. Or if you have this:
return;
someMethod();
The IDE will tell you that someMethod() can never be reached. And it also provides a lot of other cool features.
But sometimes this isn't enough. What if you have
if (x == 5) {
someMethod();
}
But what if, in your actual code, x can only ever be in the range of 1 to 4? The IDE won't tell you about that. You can use a tool that measures your code coverage by running lots of tests; then you can see which parts of your code are not executed.
If you don't want to use such a tool, you can put breakpoints in your methods. Then run some tests by hand. When the debugger steps through your code, you can see exactly where the code goes and exactly which piece(s) of code is not executed.
Another method to do this is to use the Find/Replace function of the IDE. Check if some of your public/private methods are not being called anywhere. For example, to check whether someMethod() is called, search for someMethod in the whole project and see if there are occurrences other than the declaration.
But the most effective way would be,
Stop writing this kind of code in the first place!
I think the best way to check is to install a coverage plugin like EclEmma and create unit and integration tests to get 100% coverage of the code that accomplishes the use cases/tasks you have.
The code that is never executed once those tests have all been written and run is code that you are not using.
Try to avoid accumulating trash in the first place: remove stuff you don't need anymore. (You could make a backup, or better, use a source code management system.)
You should also write unit tests for your functions, so you know whether everything still works after you remove something.
Aside from that, most IDEs will show you unused local variables and private methods.
I can imagine a situation where you have an app that has been developed over years, and some of your functions are no longer used even though they still work. Example: let's assume some code reacts to a specific event on an internal system, but that event no longer occurs.
I would say you could use AspectJ to log method calls, and then analyze the log after some time.
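A minimal sketch of such a logging aspect, assuming AspectJ's annotation style; "com.example" is a placeholder for your own base package:

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class UsageLoggingAspect {

    // Log every call into our own code base; methods that never show up
    // in the log over a long enough period are candidates for removal.
    @Before("execution(* com.example..*.*(..))")
    public void recordCall(JoinPoint joinPoint) {
        System.out.println("CALLED: " + joinPoint.getSignature().toLongString());
    }
}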
I have some important methods in the code that are used in the wrong way: people don't get the whole context of the process and invoke the wrong methods, for example setters. If I had something like @Deprecated, it could highlight/strike through/underline those methods and show some info when somebody uses them. For example, someone set some variables that are not even persisted, because he thought they would be. Another person changed one method and broke a dozen use cases because he didn't know about them.
I use Java 7 and IntelliJ IDEA 14.
Instead of using an annotation, program defensively: check whether the parameters you get make sense, and write tests to verify what happens when invalid input is provided.
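For example, a hypothetical setter rewritten defensively; misuse fails fast and loudly instead of silently leaving the object in a state the caller misunderstood:

public class PriceCalculator {
    private double discountRate;

    public void setDiscountRate(double rate) {
        // Reject input that cannot possibly be a valid rate.
        if (rate < 0.0 || rate > 1.0) {
            throw new IllegalArgumentException(
                    "discount rate must be between 0.0 and 1.0, but was " + rate);
        }
        this.discountRate = rate;
    }

    public double getDiscountRate() {
        return discountRate;
    }
}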
I think Automated Tests, Good Method Names and such will do more good than some fancy IDE plugin to stop other developers from invoking wrong methods.
I'm learning Java by reading "Head First Java" and doing all the puzzles and exercises. The book recommends writing TestDrive classes to test the code and classes I've written. That's a really simple thing to do, but I think I can't fully test my code this way, because I'm writing the test code knowing what I want to get. I don't know if that makes any sense, but I was wondering whether there is a simple way of testing my code that tells me what isn't working correctly. Thanks.
That's right - you know what to expect, and you write test cases to cover that knowledge. In many respects this is normal: you want to test the stuff you've written just so you know it works as you expect.
Now you need to take it to the next step: find a system where it will be working (i.e. integrate it with the other bits and pieces of the complete puzzle) and see if it still works according to your assumptions and knowledge.
Then you need to give it to someone else to test for you - they will quickly find the bits that you never thought of.
Then you give it to a real user, and they not only find the things you and your tester never thought of, but they also find the things that were never thought of by the requirements analyst.
This is the way software works, and possibly the reason it's never finished.
PS. One thing about your test code matters more than anything: once you've run it once and found the code works as expected, you can add more stuff to your app and then run your test code again to make sure it still works as expected. This is called regression testing, and I think it's the only reason to write your own unit tests.
and: Dilbert's take on testing.
What do we mean by code? When unit testing, which is what I think we're talking about here, we are testing specific methods and classes.
I think I can't fully test my code because I'm writing the test code knowing what I want to get
In other words you are investigating whether some code fulfils a contract. Consider this example:
int getInvestValue(int depositCents, double annualInterestRate, int years) {
    // body omitted: we are devising tests against the contract
}
What tests can you devise? If you devise a good set of tests you can have some confidence in this routine. So we could try these kinds of input:
deposit 100, rate 5.0, years 1 : expected answer 105
deposit 100, rate 0, years 1 : expected answer 100
deposit 100, rate 10, years 0 : expected answer 100
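Written as JUnit tests, those three cases might look like this (a sketch; it assumes getInvestValue is in scope, e.g. in the same class or statically imported):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class InvestmentTest {
    @Test
    public void oneYearAtFivePercentYields105() {
        assertEquals(105, getInvestValue(100, 5.0, 1));
    }

    @Test
    public void zeroRateReturnsDepositUnchanged() {
        assertEquals(100, getInvestValue(100, 0.0, 1));
    }

    @Test
    public void zeroYearsReturnsDepositUnchanged() {
        assertEquals(100, getInvestValue(100, 10.0, 0));
    }
}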
What else? How about a negative rate?
More interestingly, how about a very high rate of interest, like 1,000,000.50, and 100,000 years? What happens to the result: would it fit in an integer? The thing about devising this test is that it challenges the interface: why is there no exception documented?
The question then becomes: how do we figure out those test cases? I don't think there is a single approach that leads to building a comprehensive set, but here are a couple of things to consider:
Edges: zero, one, two, many. In my example we don't just test a rate of 5%; we pay particular attention to the special cases. Zero is special, one is special, negative is special, a big number is special ...
Corner cases: combinations of edges. In my example that's a large rate and a large number of years. Picking these is something of an art, and is helped by our knowledge of the implementation: here we know that there's a "multiplier" effect between rates and years.
White box: using knowledge of the implementation to drive code coverage, adjusting the inputs to force the code down particular paths. For example, if you know that the code has an "if negative rate" conditional path, then this is a clue to include a negative-rate test.
One of the tenets of "Test Driven Development" is writing a test first (i.e. before you've written the code). Obviously this test will initially fail (your program may not even compile). If the test doesn't fail, then you know you've got a problem with the test itself. Once the test fails, the objective then becomes to keep writing code until the test passes.
Also, some of the more popular unit testing frameworks such as JUnit will allow you to test whether something works or explicitly doesn't work (i.e. you can assert that a certain type of exception is thrown). This becomes useful for checking bad input, corner cases, etc.
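For example, with JUnit 4, assuming the hypothetical getInvestValue from the answer above is specified to reject negative deposits:

import org.junit.Test;

public class InvestmentErrorTest {
    // The test passes only if the call throws IllegalArgumentException.
    @Test(expected = IllegalArgumentException.class)
    public void negativeDepositIsRejected() {
        getInvestValue(-100, 5.0, 1);
    }
}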
To steal a line from Stephen Covey, just begin with the end in mind and write as many tests as you can think of. This may seem trivial for very simple code, but the idea becomes useful as you move onto more complex problems.
This site has a lot of helpful resources for testing code: SoftwareTestingHelp
First, you need to make sure your code is written to be unit tested. Dependencies on outside classes should be made explicit (required by the constructor if possible) so that it isn't possible to write a unit test without identifying every possible way to break things. If you find that there are too many dependencies, or that it isn't obvious how each dependency will be used, you need to work on the Single Responsibility Principle, which will make your classes smaller, simpler, and more modular.
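For example (a minimal sketch with hypothetical names); every collaborator arrives through the constructor, so a test cannot instantiate the class without deciding what to pass in for each dependency:

import java.time.Clock;
import java.time.Instant;

interface ReportStore {
    void archiveBefore(Instant cutoff);
}

public class ReportService {
    private final Clock clock;
    private final ReportStore store;

    // Both dependencies are explicit: a test can pass a fixed Clock
    // and a fake ReportStore instead of relying on hidden globals.
    public ReportService(Clock clock, ReportStore store) {
        this.clock = clock;
        this.store = store;
    }

    public void archiveOldReports() {
        store.archiveBefore(clock.instant());
    }
}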
Once your code is written so that you can foresee situations that might occur based on your dependencies and input parameters, you should write tests looking for the correct behavior from a variety of those foreseeable situations. One of the biggest advantages I've found to unit testing is that it actually forced me to think, "What if ...", and figure out what the correct behavior would be in each case. For example, I have to decide whether it makes more sense to throw an exception or return a null value in the case of certain errors.
Once you think you've got all your bases covered, you might also want to throw your code at a tool like QuickCheck to help you identify possibilities that you might have missed.
TestDrive
No, you should be writing JUnit or TestNG tests.
Done correctly, your tests are your specification. They define what your code is supposed to do, and each test defines a new aspect of your application. Therefore, you would never write tests looking for things that don't work correctly, since your tests specify how things should work correctly.
Once you think you've finished unit testing and coding your component, one of the best and easiest ways to raise confidence that things are working correctly is to use a technique called Exploratory Testing, which can be thought of as an unscripted exploration of the part of the application you've written looking for bugs based on your intuition and experience (and deviousness!).
Pair Programming is another great way to prevent and flush out the bugs from your code. Two minds are better than one and often someone else will think of something you didn't (and vice versa).