I have read a lot about test-driven development. My project uses tests, but currently they are written after the code, and I am not clear on how to do it the other way around.
Simple example: I have a class Rectangle. It has private fields width and height with corresponding getters and setters. Plain Java. Now, I want to add a function getArea() which returns the product of both, but I want to write the test first.
Of course, I can write a unit test. But it isn’t just that it fails: it does not even compile, because there is no getArea() function yet. Does that mean that writing the test always involves changing the production code first, to introduce dummies without functionality? Or do I have to write the test in a way that uses introspection? I don’t like the latter approach, because it makes the code less readable, and because refactoring tools will not discover the reflective usage, later refactoring will silently break the test, and I know that we refactor a lot. Also, adding ‘dummies’ may involve lots of changes, e.g. if I need additional fields, the database must be changed for Hibernate to keep working... that seems like way too many production code changes while still “writing tests only”. What I would like to have is a situation where I can actually only write code inside src/test/, not touching src/main at all, but without introspection.
Is there a way to do that?
Well, TDD does not mean that you cannot have anything in the production code before writing the test.
For example:
You put your method, e.g. getArea(param1, param2), into your production code with an empty body.
Then you write the test with valid input and your expected result.
You run the test and it will fail.
Then you change the production code and run the test again.
If it still fails: back to the previous step.
If it passes, you write the next test.
A quick introduction can be found for example here: codeutopia -> 5-step-method-to-make-test-driven-development-and-unit-testing-easy
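Applied to the rectangle example from the question, the first two steps might look like this. This is a minimal sketch: the Rectangle fields come from the question, while the test class name and the use of JUnit 4 are my own choices.

public class Rectangle {
    private int width;
    private int height;

    public void setWidth(int width) { this.width = width; }
    public void setHeight(int height) { this.height = height; }

    // Step 1: the method exists so the test compiles, but does nothing useful yet.
    public int getArea() {
        return 0;
    }
}

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class RectangleTest {

    // Step 2: the test states the expected behaviour and stays red
    // while getArea() still returns 0.
    @Test
    public void areaIsWidthTimesHeight() {
        Rectangle r = new Rectangle();
        r.setWidth(3);
        r.setHeight(4);
        assertEquals(12, r.getArea());
    }
}

From there you make getArea() return width * height, run the test again, and watch it go green.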
What I would like to have is a situation where I can actually only write code inside src/test/, not touching src/main at all, but without introspection.
There isn't, that I have ever seen, a way to write a test with a dependency on a new part of the API, and have that test immediately compile without first extending the API of the test subject.
It's introspection or nothing.
It isn’t just that it fails: it does not even compile, because there is no getArea() function yet
Historically, writing code that couldn't compile was part of the rhythm of TDD. Write a little bit of test code, write a little bit of production code, write a little bit of test code, write a little bit of production code, and so on.
Robert Martin describes this as the nano-cycle of TDD:
... the goal is always to promote the line by line granularity that I experienced while working with Kent so long ago.
I've abandoned the nano-cycle constraint in my own work. Perhaps I fail to appreciate it because I've never paired with Kent.
But I'm perfectly happy to write tests that don't compile, and then back fill the production code I need when the test is in a satisfactory state. That works well for me because I normally work in a development environment that can generate production implementations at just a few key strokes.
Another possibility is to consider a discipline like TDD as if you meant it, which does a lot more of the real work in the test source hierarchy before moving code into the production hierarchy.
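A rough sketch of what that looks like (my own illustration of the idea, not a canonical example): the implementation starts life inside the test class, in src/test, and is only promoted into src/main once it has taken shape.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class AreaTest {

    // The "production" logic is written right here in the test hierarchy.
    // Once it stabilizes, it is moved (not rewritten) into src/main.
    static int area(int width, int height) {
        return width * height;
    }

    @Test
    public void areaIsWidthTimesHeight() {
        assertEquals(12, area(3, 4));
    }
}

Note that this only postpones touching src/main; the move still happens eventually, as a refactoring step.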
I've been working in Android development for quite some time, but had never fully adopted TDD on Android. However, I recently tried to develop my new app with complete TDD. So here is my opinion.
Does that mean that writing the test always involves changing the production code first, to introduce dummies without functionality?
I think the answer is yes. As I understand it, every test corresponds to a spec or use case of the software. So writing a failing test first is an attempt to capture the requirement spec in test code. Then, when I filled in the production code to pass the just-written test case, I really tried to make it work. After doing this for a while, I was pretty surprised how small my production code was, yet how much of the requirement it managed to fulfil.
For me personally, all the failing test cases I wrote before the production code actually came from a list of questions I had brainstormed about the requirement, and I sometimes used them to explore edge cases of the requirement.
So the basic workflow is Red - Green - Refactor, which I got from a presentation by Bryan Breecham: https://www.infoq.com/presentations/tdd-lego/
About,
What I would like to have is a situation where I can actually only write code inside src/test/, not touching src/main at all, but without introspection.
For me, I think it's possible when you write all your production logic first; then the unit tests play the role of verifying the requirement. It's just the other way around. So overall, I think TDD is the approach, but people may use unit tests for different purposes, e.g. reducing testing time, etc.
Let us assume that the next feature I have to develop is storing some data in a database. Following the TDD paradigm, I have to write a failing test first. It is not clear to me how to approach this task, considering that I am using JDBC.
The simplest way I can think of is to define a function "storeDataOnDB" and, using some framework like Mockito, check that the function is called exactly once. I don't like this solution. Continuing the TDD approach, next I would write the minimum amount of code that makes the test pass. Simply calling the function would make the test pass, but I would not actually be storing anything in the db. Moreover, I would not be checking whether I am storing the correct data.
Another solution would be to implement an integration test using a test db and verify that the data are stored correctly. But this is an integration test, while in TDD I am trying to write a unit test.
So, what would be the best method to apply TDD on this feature?
Thanks.
in TDD I am trying to write a unit test.
You should drop this constraint -- TDD isn't about "unit" tests, not really (see Chapter 32 of Test Driven Development by Example). It's about "programmer" tests - the small scale tests that act as scaffolding while writing the code.
The tests are there to serve your needs, not the other way around.
There is a trick, though, that you should be aware of: you will often want a design that separates your complicated logic from the parts that actually communicate with the database. Between the two sits a seam (see Working Effectively With Legacy Code, chapter 4), allowing you to write tests with substitute implementations for the part that needs to talk to the database.
The part that actually talks to the database? The ideal is code that is "so simple there are obviously no deficiencies". The expectation here being that if we make the code boring enough, we won't need to change it very often after we get it right (often this sort of code lives unchanged until it is eventually removed completely).
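A hypothetical sketch of that split (the CustomerStore name and the table schema are inventions for illustration; nothing here is prescribed by TDD itself):

interface CustomerStore {
    void save(String customerId, String name);
}

// Test-side substitute: lets the complicated logic be tested without a database.
class InMemoryCustomerStore implements CustomerStore {
    final java.util.Map<String, String> saved = new java.util.HashMap<>();

    public void save(String customerId, String name) {
        saved.put(customerId, name);
    }
}

// Production side: a thin JDBC wrapper, kept "so simple there are obviously
// no deficiencies", and covered by a few integration tests instead.
class JdbcCustomerStore implements CustomerStore {
    private final java.sql.Connection connection;

    JdbcCustomerStore(java.sql.Connection connection) {
        this.connection = connection;
    }

    public void save(String customerId, String name) {
        try (java.sql.PreparedStatement stmt = connection.prepareStatement(
                "INSERT INTO customers (id, name) VALUES (?, ?)")) {
            stmt.setString(1, customerId);
            stmt.setString(2, name);
            stmt.executeUpdate();
        } catch (java.sql.SQLException e) {
            throw new IllegalStateException("could not save customer", e);
        }
    }
}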
TDD is an awareness of the gap between decision and feedback during programming, and techniques to control that gap.
"Unit test" is not the only permitted technique.
Whenever I program, I seem to accumulate a lot of "trash" code, code that is not in use anymore. Just to keep my code neat, and to avoid making any expensive and unnecessary computations, is there an easy way to tell if there is code that is not being used?
One of the basic principles that will help you in this regard is to reduce the visibility of everything as much as possible. If a class can be private, don't make it default, protected or public. The same applies to methods and variables. It is much easier when you can say for sure that something is not being used outside a class. In cases like this, even IDEs like Eclipse and IntelliJ IDEA will warn you about unused code.
Using this practice while developing and refactoring is the best way to clean up unused code confidently, without the possibility of breaking the application. It will help even in scenarios where reflection is being used.
It's difficult to do in Java since it's a reflective language. (You can't simply hunt for calls to a certain class or function, for example, since reflection can be used to call a function using strings that can only be resolved at runtime.)
So in full generality, you cannot be certain.
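As a minimal illustration (the class and method names are hypothetical), consider a helper like this, which no search for callers will ever connect to its target:

import java.lang.reflect.Method;

class ReflectiveCall {

    // The names are plain strings - possibly read from a config file or built
    // at runtime - so static analysis cannot prove the target is ever called.
    static void callByName(String className, String methodName) throws Exception {
        Object target = Class.forName(className).getDeclaredConstructor().newInstance();
        Method method = target.getClass().getMethod(methodName);
        method.invoke(target);
    }
}

A method invoked only via callByName("com.example.ReportGenerator", "run") looks completely unused to every static tool.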
If you have adequate unit tests for your code base then the possibility of redundant code should not be a cause for concern.
I think "unused code" means the code that is always not executed at runtime. I hope I interpreted you correctly.
The way to do a simple check on this is very easy. Just use IntelliJ IDEA to write your code. It will tell you that parts of your code that will never be executed and also the parts where the code can be simplified. For example,
if (x == 5) {
}
And then it will tell you that this if statement is redundant. Or if you have this:
return;
someMethod();
The IDE will tell you that someMethod() can never be reached. And it also provides a lot of other cool features.
But sometimes this isn't enough. What if you have
if (x == 5) {
    someMethod();
}
But what if, in your actual code, x can only ever be in the range of 1 to 4? The IDE won't tell you about this. You can use a tool that shows your code coverage while running lots of tests. Then you can see which parts of your code are not executed.
If you don't want to use such a tool, you can put breakpoints in your methods. Then run some tests by hand. When the debugger steps through your code, you can see exactly where the code goes and exactly which piece(s) of code is not executed.
Another method to do this is to use the Find/Replace function of the IDE. Check if some of your public/private methods are not being called anywhere. For example, to check whether someMethod() is called, search for someMethod in the whole project and see if there are occurrences other than the declaration.
But the most effective way would be,
Stop writing this kind of code in the first place!
I think the best way to check this is to install a coverage plugin like EclEmma and create unit and integration tests until you reach 100% coverage of the code that accomplishes the use cases/tasks you have.
The code that doesn't need to be tested, or that the tests never pass through once they are complete and run, is code that you are not using.
Try to avoid accumulating trash in the first place: remove stuff you don't need anymore. (You could make a backup or, better, use a source code management system.)
You should also write unit tests for your functions, so you know they still work after you remove something.
Aside from that, most IDEs will show you unused local variables and private methods.
I can imagine a situation where you have an app that has been developed over years, and some of its functions are no longer used even though they still work. Example: let's assume you run some logic on internal systems when a specific event occurs, but that event never occurs anymore.
I would say you could use AspectJ to collect such usage data/logs and then analyze them after some time.
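A minimal sketch of that idea (the com.example package is a placeholder, and the pointcut would need tuning for a real code base):

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class UsageLoggingAspect {

    // Fires before every method execution in your own packages.
    @Before("execution(* com.example..*.*(..))")
    public void logInvocation(JoinPoint joinPoint) {
        // A real setup would log to a file or metrics system and deduplicate;
        // System.out is just for illustration.
        System.out.println("called: " + joinPoint.getSignature().toLongString());
    }
}

After the app has run in production for a while, diff the logged signatures against the full list of methods; whatever never shows up is a candidate for removal.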
Folks, it is always said in TDD that
we should write JUnit tests even before we write the actual code.
Somehow I am not able to understand this in the right spirit. I hope what it means is that you just write empty methods with the right signatures, and your test case is expected to fail initially.
Say in the TDD approach I need to get the list of customers.
As per my understanding, I will write an empty method like the one below:
public List<CustomerData> getCustomers(int custId) {
    return null;
}
Now I will write a JUnit test case where I will check that the size is 10 (which is what I am actually expecting). Is this right?
Basically, my question is: in TDD, how can we write a JUnit test case before writing the actual code?
I hope what it means is that you just write empty methods with the right signatures
Yes. And with most modern IDEs, if you write a method name in your test which does not exist yet, they will create a stub for you.
Say in the TDD approach I need to get the list of customers. What's the right way to proceed?
Your example is almost there: since getCustomers() returns null at first, the test will obviously fail.
Then modify the method so that the test succeeds.
Then create a test method for customer add. Test fails. Fix it. Rinse. Repeat.
So, basically: with TDD, you start by writing tests that you KNOW will fail, and then fix your code so that they pass.
Recommended read.
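Concretely, the first red test for the example above might look like this (a sketch: CustomerService and CustomerData are hypothetical names standing in for your real classes, and JUnit 4 is assumed):

import static org.junit.Assert.assertEquals;

import java.util.List;
import org.junit.Test;

public class CustomerServiceTest {

    @Test
    public void returnsTenCustomersForCustomerId42() {
        CustomerService service = new CustomerService(); // hypothetical class under test
        List<CustomerData> customers = service.getCustomers(42);
        // Red: getCustomers() still returns null, so this line fails with a
        // NullPointerException until the method is properly implemented.
        assertEquals(10, customers.size());
    }
}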
Often you'll write the test alongside the skeleton of the code. Initially you can write a non-functional implementation (e.g. throw an UnsupportedOperationException) and that will trigger a test failure. Then you'd flesh out the implementation until finally your test passes.
You need to be pragmatic about this. Obviously you can't compile your test until at least your unit under test compiles, and so you have to do a minimal amount of implementation work alongside your test.
Check out this recent Dr Dobbs editorial, which discusses exactly this point and the role of pragmatism around it, especially by the mavens of this practice (Kent Beck et al):
A key principle of TDD is that you write no code without first writing a failing unit test. But in fact, if you talk to the principal advocates of TDD (such as Kent Beck, who popularized the technique, and Bob Martin, who has taught it to thousands of developers), you find that both of them write some code without writing tests first. They do not — I should emphasize this — view these moments as lapses of faith, but rather as the necessary pragmatism of the intelligent developer.
That's partly right.
Using an IDE (Eclipse, IntelliJ), you can create the test first. In that test, invoke a method that does not exist yet, and then use a refactoring tool to create the method with the proper signature.
That's a trick that makes working with TDD easier and more fun.
Regarding "Now I will write a JUnit test case where I will check that the size is 10. Is this right?": yes, you should write a test that fails, and then provide the proper implementation.
I think you should go and write the test first, and think about the signature of the function while writing it.
It's the same as writing the signature and then writing the test, but inventing the signature while writing the test is helpful: at that point you have all the information about the responsibility of the function, so you will be able to come up with the proper signature.
When I receive code I have not seen before to refactor it into some sane state, I normally fix "cosmetic" things (like converting StringTokenizers to String#split(), replacing pre-1.2 collections with newer collections, making fields final, converting C-style arrays to Java-style arrays, ...) while reading the source code I have to get familiar with.
Are there many people using this strategy (maybe it is some kind of "best practice" I don't know about?), or is this considered too dangerous, so that not touching old code unless absolutely necessary is generally preferred? Or is it more common to combine the "cosmetic cleanup" step with the more invasive "general refactoring" step?
What are the common "low-hanging fruits" when doing "cosmetic clean-up" (vs. refactoring with more invasive changes)?
In my opinion, "cosmetic cleanup" is "general refactoring." You're just changing the code to make it more understandable without changing its behavior.
I always refactor by attacking the minor changes first. The more readable you can make the code quickly, the easier it will be to do the structural changes later - especially since it helps you look for repeated code, etc.
I typically start by looking at code that is used frequently and will need to be changed often, first. (This has the biggest impact in the least time...) Variable naming is probably the easiest and safest "low hanging fruit" to attack first, followed by framework updates (collection changes, updated methods, etc). Once those are done, breaking up large methods is usually my next step, followed by other typical refactorings.
There is no right or wrong answer here, as this depends largely on circumstances.
If the code is live, working, undocumented, and contains no testing infrastructure, then I wouldn't touch it. If someone comes back in the future and wants new features, I will try to work them into the existing code while changing as little as possible.
If the code is buggy, problematic, missing features, and was written by a programmer that no longer works with the company, then I would probably redesign and rewrite the whole thing. I could always still reference that programmer's code for a specific solution to a specific problem, but it would help me reorganize everything in my mind and in source. In this situation, the whole thing is probably poorly designed and it could use a complete re-think.
For everything in between, I would take the approach you outlined. I would start by cleaning up everything cosmetically so that I can see what's going on. Then I'd start working on whatever code stood out as needing the most work. I would add documentation as I understand how it works so that I will help remember what's going on.
Ultimately, remember that if you're going to be maintaining the code now, it should be up to your standards. Where it's not, you should take the time to bring it up to your standards - whatever that takes. This will save you a lot of time, effort, and frustration down the road.
The lowest-hanging cosmetic fruit is (in Eclipse, anyway) shift-control-F. Automatic formatting is your friend.
The first thing I do is try to hide as much as possible from the outside world. If the code is crappy, most of the time the person who implemented it did not know much about data hiding and the like.
So my advice, first thing to do:
Turn as many members and methods as private as you can without breaking the compilation.
As a second step, I try to identify the interfaces. I replace the concrete classes with the interfaces in all methods of related classes. This way you decouple the classes a bit.
Further refactoring can then be done more safely and locally.
You can buy a copy of Refactoring: Improving the Design of Existing Code by Martin Fowler; you'll find a lot of things you can do during your refactoring operation.
Plus you can use tools provided by your IDE, and other code analyzers such as FindBugs or PMD, to detect problems in your code.
Resources :
www.refactoring.com
wikipedia - List of tools for static code analysis in java
On the same topic :
How do you refactor a large messy codebase?
Code analyzers: PMD & FindBugs
By starting with "cosmetic cleanup" you get a good overview of how messy the code is and this combined with better readability is a good beginning.
I always (yeah, right... sometimes there's something called a deadline that messes with me) start with this approach, and it has served me very well so far.
You're on the right track. By doing the small fixes you'll be more familiar with the code and the bigger fixes will be easier to do with all the detritus out of the way.
Run a tool like JDepend, CheckStyle or PMD on the source. They can automatically flag loads of issues that are cosmetic but grounded in general refactoring rules.
I do not change old code except to reformat it using the IDE. There is too much risk of introducing a bug - or removing a bug that other code now depends upon! Or introducing a dependency that didn't exist such as using the heap instead of the stack.
Beyond the IDE reformat, I don't change code that the boss hasn't asked me to change. If something is egregious, I ask the boss if I can make changes and state a case of why this is good for the company.
If the boss asks me to fix a bug in the code, I make as few changes as possible. Say the bug is in a simple for loop. I'd refactor the loop into a new method. Then I'd write a test case for that method to demonstrate I have located the bug. Then I'd fix the new method. Then I'd make sure the test cases pass.
Yeah, I'm a contractor. Contracting gives you a different point of view. I recommend it.
There is one thing you should be aware of: the code you are starting with has been TESTED and approved, and your changes automatically mean that retesting must happen, as you may have inadvertently broken some behaviour elsewhere.
Besides, everybody makes errors. Every non-trivial change you make (changing StringTokenizer to split() is not an automatic feature in e.g. Eclipse, so you write it yourself) is an opportunity for errors to creep in. Did you get the exact behaviour of a conditional right, or did you by mere mistake forget a !?
Hence, your changes imply retesting. That work may be quite substantial and severely outweigh the small changes you have made.
I don't normally bother going through old code looking for problems. However, if I'm reading it, as you appear to be doing, and it makes my brain glitch, I fix it.
Common low-hanging fruits for me tend to be more about renaming classes, methods, fields etc., and writing examples of behaviour (a.k.a. unit tests) when I can't be sure of what a class is doing by inspection - generally making the code more readable as I read it. None of these are what I'd call "invasive" but they're more than just cosmetic.
From experience it depends on two things: time and risk.
If you have plenty of time then you can do a lot more, if not then the scope of whatever changes you make is reduced accordingly. As much as I hate doing it I have had to create some horrible shameful hacks because I simply didn't have enough time to do it right...
If the code you are working on has lots of dependencies or is critical to the application then make as few changes as possible - you never know what your fix might break... :)
It sounds like you have a solid idea of what things should look like so I am not going to say what specific changes to make in what order 'cause that will vary from person to person. Just make small localized changes first, test, expand the scope of your changes, test. Expand. Test. Expand. Test. Until you either run out of time or there is no more room for improvement!
BTW When testing you are likely to see where things break most often - create test cases for them (JUnit or whatever).
EXCEPTION:
Two things that I always find myself doing are reformatting (CTRL+SHFT+F in Eclipse) and commenting code that is not obvious. After that I just hammer the most obvious nail first...
I'm learning Java by reading "Head First Java" and by doing all the puzzles and exercises. In the book they recommend writing TestDrive classes to test the code and classes I've written. That's one really simple thing to do, but by doing this I think I can't fully test my code, because I'm writing the test code knowing what I want to get. I don't know if that makes any sense, but I was wondering if there's any way of testing my code, in a simple way, that tells me what isn't working correctly. Thanks.
That's right - you know what to expect, and you write test cases to cover that knowledge. In many respects this is normal - you want to test the stuff you've written just so you know it works as you expect.
Now, you need to take it to the next step: find a system where it will be working (i.e. integrate it with the other bits and pieces of the complete puzzle) and see if it still works according to your assumptions and knowledge.
Then you need to give it to someone else to test for you - they will quickly find the bits that you never thought of.
Then you give it to a real user, and they not only find the things you and your tester never thought of, but they also find the things that were never thought of by the requirements analyst.
This is the way software works, and possibly the reason it's never finished.
PS. One thing about your test code matters more than anything - once you've run it and found the code works as expected, you can add more stuff to your app and then run your test code again to make sure it still works as expected. This is called regression testing, and I think it's the only reason to write your own unit tests.
and: Dilbert's take on testing.
What do we mean by code? When unit testing, which is what I think we're talking about here, we are testing specific methods and classes.
I think I can't fully test my code because I'm writing the test code knowing what I want to get
In other words, you are investigating whether some code fulfils a contract. Consider this example:
int getInvestValue(int depositCents, double annualInterestRate, int years) {
    // body omitted: it is the contract, not the implementation, under discussion
    throw new UnsupportedOperationException("not yet implemented");
}
What tests can you devise? If you devise a good set of tests you can have some confidence in this routine. So we could try these kinds of input:
deposit 100, rate 5.0, years 1 : expected answer 105
deposit 100, rate 0, years 1 : expected answer 100
deposit 100, rate 10, years 0 : expected answer 100
What else? How about a negative rate?
More interestingly, how about a very high rate of interest like 1,000,000.50 and 100,000 years? What happens to the result - would it fit in an integer? The thing about devising this test is that it challenges the interface: why is there no exception documented?
The question then becomes: how do we figure out those test cases? I don't think there is a single approach that leads to building a comprehensive set, but here are a couple of things to consider:
Edges: Zero, one, two, many. In my example we don't just do a rate of 5%. We consider especially the special cases. Zero is special, one is special, negative is special, a big number is special ...
Corner cases: combinations of edges. In my example that's a large rate and a large number of years. Picking these is something of an art, and is helped by our knowledge of the implementation: here we know that there's a "multiplier" effect between rates and years.
White box: using knowledge of the implementation to drive code coverage. Adjust the inputs to force the code down particular paths. For example, if you know that the code has an "if negative rate" conditional path, then this is a clue to include a negative-rate test.
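Written out as JUnit 4 tests, the cases above might look like this (a sketch; the Investment class is a hypothetical home for getInvestValue):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class InvestmentTest {

    private final Investment investment = new Investment(); // hypothetical class

    @Test
    public void oneYearAtFivePercent() {
        assertEquals(105, investment.getInvestValue(100, 5.0, 1));
    }

    @Test
    public void zeroRateLeavesDepositUnchanged() {
        assertEquals(100, investment.getInvestValue(100, 0.0, 1));
    }

    @Test
    public void zeroYearsLeavesDepositUnchanged() {
        assertEquals(100, investment.getInvestValue(100, 10.0, 0));
    }

    @Test
    public void hugeRateOverManyYearsChallengesTheIntReturnType() {
        // Writing this test forces the interface question: what is the
        // contract when the true result cannot fit in an int?
        investment.getInvestValue(100, 1_000_000.50, 100_000);
    }
}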
One of the tenets of "Test Driven Development" is writing a test first (i.e. before you've written the code). Obviously this test will initially fail (your program may not even compile). If the test doesn't fail, then you know you've got a problem with the test itself. Once the test fails, the objective then becomes to keep writing code until the test passes.
Also, some of the more popular unit testing frameworks such as JUnit will allow you to test that something works or that it explicitly doesn't work (i.e. you can assert that a certain type of exception is thrown). This becomes useful for checking bad input, corner cases, etc.
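For example (JUnit 4 syntax; the Account class and its withdraw method are hypothetical):

import org.junit.Test;

public class AccountTest {

    // Passes only if withdraw() throws the asserted exception type.
    @Test(expected = IllegalArgumentException.class)
    public void withdrawingANegativeAmountIsRejected() {
        new Account().withdraw(-50);
    }
}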
To steal a line from Stephen Covey, just begin with the end in mind and write as many tests as you can think of. This may seem trivial for very simple code, but the idea becomes useful as you move onto more complex problems.
This site has a lot of helpful resources for testing code: SoftwareTestingHelp
First, you need to make sure your code is written to be unit tested. Dependencies on outside classes should be made explicit (required by the constructor if possible) so that it isn't possible to write a unit test without identifying every possible way to break things. If you find that there are too many dependencies, or that it isn't obvious how each dependency will be used, you need to work on the Single Responsibility Principle, which will make your classes smaller, simpler, and more modular.
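As a small illustration of making a dependency explicit through the constructor (the names here are hypothetical, not from any particular framework):

// The collaborator is an interface, so a test can supply a stub.
interface ExchangeRateSource {
    double rateFor(String currencyCode);
}

class PriceCalculator {

    private final ExchangeRateSource rates;

    // The dependency is required by the constructor: you cannot even
    // construct the class without deciding what it talks to.
    PriceCalculator(ExchangeRateSource rates) {
        this.rates = rates;
    }

    double priceIn(String currencyCode, double amountInUsd) {
        return amountInUsd * rates.rateFor(currencyCode);
    }
}

In a test you pass in a stub ExchangeRateSource that returns a fixed rate; in production you pass the real one.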
Once your code is written so that you can foresee situations that might occur based on your dependencies and input parameters, you should write tests looking for the correct behavior from a variety of those foreseeable situations. One of the biggest advantages I've found to unit testing is that it actually forced me to think, "What if ...", and figure out what the correct behavior would be in each case. For example, I have to decide whether it makes more sense to throw an exception or return a null value in the case of certain errors.
Once you think you've got all your bases covered, you might also want to throw your code at a tool like QuickCheck to help you identify possibilities that you might have missed.
TestDrive
No, you should be writing JUnit or TestNG tests.
Done correctly, your tests are your specification. They define what your code is supposed to do. Each test defines a new aspect of your application. Therefore, you would never write tests looking for things that don't work correctly, since your tests specify how things should work correctly.
Once you think you've finished unit testing and coding your component, one of the best and easiest ways to raise confidence that things are working correctly is to use a technique called Exploratory Testing, which can be thought of as an unscripted exploration of the part of the application you've written looking for bugs based on your intuition and experience (and deviousness!).
Pair Programming is another great way to prevent and flush out the bugs from your code. Two minds are better than one and often someone else will think of something you didn't (and vice versa).