I have read a lot about test-driven development. My project has tests, but currently they are written after the code, and I am not clear how to do it the other way around.
Simple example: I have a class Rectangle. It has private fields width and height with corresponding getters and setters. Plain Java. Now I want to add a method getArea() that returns the product of both, but I want to write the test first.
Of course I can write a unit test. But it isn't just that it fails - it does not even compile, because there is no getArea() method yet. Does that mean that writing the test always involves changing the production code to introduce dummies without functionality? Or do I have to write the test so that it uses introspection? I don't like the latter approach, because it makes the code less readable, and later refactoring with tools will not discover it and will break the test - and I know that we refactor a lot. Also, adding "dummies" may require many changes, e.g. if I need additional fields, the database must be changed for Hibernate to keep working... That seems like far too many production code changes while "only writing tests". What I would like is a situation where I can write code only inside src/test/, not touching src/main at all, but without introspection.
Is there a way to do that?
Well, TDD does not mean that you cannot have anything in the production code before writing the test.
For example:
You put your method, e.g. getArea(param1, param2), into your production code with an empty body.
Then you write the test with valid input and your expected result.
You run the test and it will fail.
Then you change the production code and run the test again.
If it still fails: back to the previous step.
If it passes, you write the next test.
A quick introduction can be found for example here: codeutopia -> 5-step-method-to-make-test-driven-development-and-unit-testing-easy
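To make the first cycle concrete, here is a minimal sketch for the Rectangle example from the question (assuming JUnit 4; the class is nested only to keep the snippet self-contained). The production stub has a dummy body, so the test compiles, runs, and fails:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class RectangleTest {

    // The stub that goes into src/main: just enough API for the test to compile.
    static class Rectangle {
        private int width;
        private int height;
        void setWidth(int width) { this.width = width; }
        void setHeight(int height) { this.height = height; }
        int getArea() {
            return 0; // dummy body: the test compiles, runs, and fails (red)
        }
    }

    @Test
    public void areaIsWidthTimesHeight() {
        Rectangle r = new Rectangle();
        r.setWidth(3);
        r.setHeight(4);
        assertEquals(12, r.getArea()); // green once getArea() returns width * height
    }
}

Replacing return 0 with return width * height is the "change the production code" step above.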
What I would like is a situation where I can write code only inside src/test/, not touching src/main at all, but without introspection.
I have never seen a way to write a test that depends on a new part of the API and have that test compile without first extending the API of the test subject.
It's introspection or nothing.
But it isn't just that it fails - it does not even compile, because there is no getArea() method yet
Historically, writing code that couldn't compile was part of the rhythm of TDD. Write a little bit of test code, write a little bit of production code, write a little bit of test code, write a little bit of production code, and so on.
Robert Martin describes this as the nano-cycle of TDD:
... the goal is always to promote the line by line granularity that I experienced while working with Kent so long ago.
I've abandoned the nano-cycle constraint in my own work. Perhaps I fail to appreciate it because I've never paired with Kent.
But I'm perfectly happy to write tests that don't compile, and then backfill the production code I need when the test is in a satisfactory state. That works well for me because I normally work in a development environment that can generate production implementations with just a few keystrokes.
Another possibility is to consider a discipline like TDD as if you meant it, which does a lot more of the real work in the test source hierarchy before moving code into the production hierarchy.
I've been doing Android development for quite some time, but had never fully adopted TDD on Android. Recently, however, I tried to develop my new app with complete TDD. So here is my opinion.
Does that mean that writing the test always involves changing the production code to introduce dummies without functionality?
I think the answer is yes. As I understand it, every test corresponds to a spec or use case of the software. So writing a failing test first is an attempt to capture the requirement spec in test code. Then, when I write production code to pass the just-written test case, I really try to make only that work. After doing this for a while, I was pretty surprised by how small my production code stayed while still covering so much of the requirement.
For me personally, all the failing test cases I wrote before the production code actually came from a list of questions I had brainstormed about the requirement, and I sometimes used them to explore its edge cases.
So the basic workflow is Red - Green - Refactor, which I got from a presentation by Bryan Breecham: https://www.infoq.com/presentations/tdd-lego/
Regarding:
What I would like is a situation where I can write code only inside src/test/, not touching src/main at all, but without introspection.
For me, I think it's possible if you write all your production logic first; the unit tests then play the role of verifying that the requirements are fulfilled. It's just the other way around. So overall I think TDD is the approach, but people may use unit tests for different purposes, e.g. reducing testing time, etc.
I am confused about randomized testing.
This is cited from the proj1b spec:
"The autograder project 1A largely relies on randomized tests. For
example, our JUnit tests on gradescope simply call random methods of
your LinkedListDeque class and our correct implementation
LinkedListDequeSolution and as soon as we see any disagreement, the
test fails and prints out a sequence of operations that caused the
failure. "
(http://datastructur.es/sp17/materials/proj/proj1b/proj1b.html)
I do not understand what it means by:
"call random methods of the tested class and the correct class"
I need to write something really similar to that autograder. But I do not know whether I need to test different methods together, using a loop that randomly picks some of them to call.
If so - since we can already test all the methods with plain JUnit - why do we need randomized tests?
Also, if I combine all the tests together, why would I still call it a JUnit test?
If you don't mind, some examples would make this easier to understand.
Just to elaborate on the "random" testing.
There is a framework called QuickCheck, initially written for the Haskell programming language. But it has been ported to many other languages - also to Java. There is jqwik for JUnit 5, or (probably outdated) jcheck.
The idea is "simply":
you describe properties of your methods, like a(b(x)) == b(a(x))
the framework then creates random input for method calls and tries to find examples where a property doesn't hold
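With jqwik, for example, such a property could look like this (a sketch; "reversing a list twice yields the original list" is just an illustrative property, not one from the question):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import net.jqwik.api.ForAll;
import net.jqwik.api.Property;

class ReverseProperties {

    // jqwik generates many random lists; on failure it shrinks the input
    // to a minimal counterexample before reporting it.
    @Property
    boolean reverseTwiceIsIdentity(@ForAll List<Integer> xs) {
        List<Integer> copy = new ArrayList<>(xs);
        Collections.reverse(copy);
        Collections.reverse(copy);
        return copy.equals(xs);
    }
}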
I assume they are talking about Model-Based Testing. For that you'd have to create a model - a simplified version of your production behaviour. Then you can list the methods that can be invoked and the dependencies between those methods. After that you'd choose a random one and invoke both - the method of your model and the method of your app. If the results are the same, then it works right. If the results differ, there is a bug either in your model or in your app. You can read more in this article.
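A bare-bones sketch of that idea in plain Java, assuming java.util.ArrayDeque as the trusted model and java.util.LinkedList as a stand-in for the implementation under test (a fixed seed keeps failures reproducible):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedList;
import java.util.Random;

public class RandomizedDequeTest {
    public static void main(String[] args) {
        Random rng = new Random(42);                   // fixed seed: failures are reproducible
        Deque<Integer> model = new ArrayDeque<>();     // trusted reference implementation
        Deque<Integer> underTest = new LinkedList<>(); // stand-in for the class being tested
        StringBuilder ops = new StringBuilder();       // operation log, printed on failure

        for (int i = 0; i < 10_000; i++) {
            switch (rng.nextInt(3)) {
                case 0: {                              // addFirst with a random value
                    int v = rng.nextInt(100);
                    model.addFirst(v);
                    underTest.addFirst(v);
                    ops.append("addFirst(").append(v).append(")\n");
                    break;
                }
                case 1: {                              // removeLast, only when non-empty
                    if (model.isEmpty()) break;
                    ops.append("removeLast()\n");
                    int expected = model.removeLast();
                    int actual = underTest.removeLast();
                    if (expected != actual) {
                        throw new AssertionError("disagreement after:\n" + ops
                                + "expected " + expected + " but got " + actual);
                    }
                    break;
                }
                default: {                             // compare sizes
                    ops.append("size()\n");
                    if (model.size() != underTest.size()) {
                        throw new AssertionError("size disagreement after:\n" + ops);
                    }
                }
            }
        }
        System.out.println("model and implementation agreed on every operation");
    }
}

This is still a JUnit-style check if you wrap the loop in a @Test method: the randomness is only in which operations get called, not in the assertion logic.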
In Java you can either write this logic on your own, or use existing frameworks. The only existing one that I know in Java is GraphWalker. But I haven't used it and don't know how good it is.
The original frameworks (like QuickCheck) are also able to "shrink": if it took 50 calls to random methods to find a bug, they will try to find the exact sequence of a few steps that leads to that bug. I don't know whether the Java frameworks offer this, but it may be worth looking into ScalaCheck if you need a JVM (but not necessarily Java) solution.
Whenever I program, I seem to accumulate a lot of "trash" code - code that is not in use anymore. Just to keep my code neat, and to avoid expensive and unnecessary computations, is there an easy way to tell if there is code that is not being used?
One of the basic principles that will help you in this regard is to reduce the visibility of everything as much as possible. If a class can be private, don't make it package-private, protected, or public. The same applies to methods and variables. It is much easier when you can say for sure that something is not used outside a class. In cases like this, even IDEs like Eclipse and IntelliJ IDEA will point out the unused code.
Using this practice while developing and refactoring is the best way to remove unused code confidently, without the risk of breaking the application. It helps even in scenarios where reflection is being used.
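As a small illustration (a hypothetical class): because the helper below is private and has no callers, the IDE can prove it unused; if it were public, no tool could be sure that nothing outside the class calls it.

class InvoicePrinter {

    String print(double net, double taxRate) {
        return String.format("total: %.2f", net * (1 + taxRate));
    }

    // Flagged as unused by IntelliJ IDEA/Eclipse: private and never called.
    private String legacyFormat(double amount) {
        return "total: " + Math.round(amount * 100) / 100.0;
    }
}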
It's difficult to do in Java since it's a reflective language. (You can't simply hunt for calls to a certain class or function, for example, since reflection can be used to call a function using strings that can only be resolved at runtime.)
So in full generality, you cannot be certain.
If you have adequate unit tests for your code base then the possibility of redundant code should not be a cause for concern.
I think "unused code" means the code that is always not executed at runtime. I hope I interpreted you correctly.
The way to do a simple check on this is very easy. Just use IntelliJ IDEA to write your code. It will tell you that parts of your code that will never be executed and also the parts where the code can be simplified. For example,
if (x == 5) {
}
And then it will tell you that this if statement is redundant. Or if you have this:
return;
someMethod();
The IDE will tell you that someMethod() can never be reached. And it also provides a lot of other cool features.
But sometimes this isn't enough. What if you have
if (x == 5) {
someMethod();
}
But what if, actually, x in your code can only be in the range of 1 to 4? The IDE won't tell you about this. You can use a tool that measures your code coverage while running lots of tests. Then you can see which parts of your code are not executed.
If you don't want to use such a tool, you can put breakpoints in your methods and run some tests by hand. When the debugger steps through your code, you can see exactly where the code goes and exactly which pieces of code are not executed.
Another method is to use the Find/Replace function of the IDE to check whether some of your public/private methods are not being called anywhere. For example, to check whether someMethod() is called, search for someMethod in the whole project and see if there are occurrences other than the declaration.
But the most effective way would be,
Stop writing this kind of code in the first place!
I think the best way to check that is to install a coverage plugin like EclEmma and create unit and integration tests until you reach 100% coverage of the code that accomplishes the use cases/tasks you have.
The code that is never executed once those tests are complete and passing is code that you are not using.
Try to avoid accumulating trash in the first place. Remove stuff you don't need anymore. (You could make a backup or better use a source code management system.)
You should also write unit tests for your functions. So you know if it still works after you remove something.
Aside from that, most IDEs will show you unused local variables and private methods.
I can imagine a situation where you have an app that was developed over years, and some of its functions are not used anymore even though they still work. Example: let's assume you do something when a specific event occurs on an internal system, but that event never occurs anymore.
I would say you could use AspectJ to obtain such data/logs and then analyze them after some time.
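A rough sketch of what such an aspect could look like (annotation-style AspectJ, which requires weaving to be set up; the com.example package is a placeholder): every executed method is logged, and methods that never appear in the log over a long enough period are candidates for removal.

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class UsageLogger {

    // Logs every method execution in com.example and its subpackages.
    @Before("execution(* com.example..*.*(..))")
    public void logExecution(JoinPoint jp) {
        System.err.println("executed: " + jp.getSignature().toShortString());
    }
}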
I'm learning Java by reading "Head First Java" and by doing all the puzzles and exercises. In the book they recommend writing TestDrive classes to test the code and classes I've written. That's a really simple thing to do, but by doing it I think I can't fully test my code, because I'm writing the test code knowing what I want to get. I don't know if that makes any sense, but I was wondering if there's any simple way of testing my code that tells me what isn't working correctly. Thanks.
That's right - you know what to expect, and you write test cases to cover that knowledge. In many respects this is normal: you want to test the stuff you've written just so you know it works as you expect.
Now you need to take it to the next step: find a system where it will be working (i.e. integrate it with the other bits and pieces of the complete puzzle) and see if it still works according to your assumptions and knowledge.
Then you need to give it to someone else to test for you - they will quickly find the bits that you never thought of.
Then you give it to a real user, and they not only find the things you and your tester never thought of, but they also find the things that were never thought of by the requirements analyst.
This is the way software works, and possibly the reason it's never finished.
PS. One thing about your test code matters more than anything: once you've run it and found the code works as expected, you can add more stuff to your app and then run your test code again to make sure everything still works. This is called regression testing, and I think it's the only reason to write your own unit tests.
and: Dilbert's take on testing.
What do we mean by code? When Unit testing, which is what I think we're talking about here, we are testing specific methods and classes.
I think I can't fully test my code because I'm writing the test code knowing what I want to get
In other words you are investigating whether some code fulfils a contract. Consider this example:
int getInvestvalue(int depositCents, double annualInterestRate, int years) {
    // body deliberately left open: we are designing tests against the contract
    return 0;
}
What tests can you devise? If you devise a good set of tests you can have some confidence in this routine. So we could try these kinds of input:
deposit 100, rate 5.0, years 1 : expected answer 105
deposit 100, rate 0, years 1 : expected answer 100
deposit 100, rate 10, years 0 : expected answer 100
What else? How about a negative rate?
More interestingly, what about a very high rate of interest, like 1,000,000.50, and 100,000 years? What happens to the result - would it fit in an integer? The thing about devising this test is that it challenges the interface: why is there no exception documented?
The question then becomes: how do we figure out those test cases? I don't think there is a single approach that leads to building a comprehensive set, but here are a couple of things to consider (a sketch of the resulting tests follows the list):
Edges: zero, one, two, many. In my example we don't just use a rate of 5%. We especially consider the special cases: zero is special, one is special, negative is special, a big number is special...
Corner cases: combinations of edges. In my example that's a large rate and a large number of years. Picking these is something of an art, and it is helped by our knowledge of the implementation: here we know that there's a "multiplier" effect between rates and years.
White box: using knowledge of the implementation to drive code coverage, adjusting the inputs to force the code down particular paths. For example, if you know that the code has an "if negative rate" conditional path, then this is a clue to include a negative-rate test.
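As a sketch, those cases could look like this in JUnit 4 (the compound-interest body is a hypothetical implementation, included only so the example is self-contained):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class InvestValueTest {

    // Hypothetical compound-interest body; the original example
    // deliberately leaves the implementation open.
    static int getInvestvalue(int depositCents, double annualInterestRate, int years) {
        return (int) Math.round(
                depositCents * Math.pow(1 + annualInterestRate / 100.0, years));
    }

    @Test
    public void oneYearAtFivePercent() {
        assertEquals(105, getInvestvalue(100, 5.0, 1));
    }

    @Test
    public void zeroRateReturnsDeposit() {
        assertEquals(100, getInvestvalue(100, 0.0, 1));
    }

    @Test
    public void zeroYearsReturnsDeposit() {
        assertEquals(100, getInvestvalue(100, 10.0, 0));
    }

    @Test
    public void extremeRateAndYears() {
        // The corner case from the list: the true answer cannot fit in an int,
        // so this exposes undocumented overflow behaviour - deciding what
        // *should* happen (exception? long return type?) is the real point.
        System.out.println("extreme case: " + getInvestvalue(100, 1_000_000.50, 100_000));
    }
}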
One of the tenets of "Test Driven Development" is writing a test first (i.e. before you've written the code). Obviously this test will initially fail (your program may not even compile). If the test doesn't fail, then you know you've got a problem with the test itself. Once the test fails, the objective then becomes to keep writing code until the test passes.
Also, some of the more popular unit testing frameworks such as JUnit will let you test that something works or that it explicitly doesn't work (i.e. you can assert that a certain type of exception is thrown). This becomes useful for checking bad input, corner cases, etc.
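For example, in JUnit 4 the expected attribute asserts that a test throws (the validation method here is hypothetical):

import org.junit.Test;

public class ValidationTest {

    // Hypothetical guard clause under test.
    static void checkDeposit(int depositCents) {
        if (depositCents < 0) {
            throw new IllegalArgumentException("deposit must be non-negative");
        }
    }

    // Passes only if the expected exception is actually thrown.
    @Test(expected = IllegalArgumentException.class)
    public void rejectsNegativeDeposit() {
        checkDeposit(-1);
    }
}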
To steal a line from Stephen Covey, just begin with the end in mind and write as many tests as you can think of. This may seem trivial for very simple code, but the idea becomes useful as you move onto more complex problems.
This site has a lot of helpful resources on testing: SoftwareTestingHelp
First, you need to make sure your code is written to be unit tested. Dependencies on outside classes should be made explicit (required by the constructor if possible) so that it isn't possible to write a unit test without identifying every possible way to break things. If you find that there are too many dependencies, or that it isn't obvious how each dependency will be used, you need to work on the Single Responsibility Principle, which will make your classes smaller, simpler, and more modular.
Once your code is written so that you can foresee situations that might occur based on your dependencies and input parameters, you should write tests looking for the correct behavior from a variety of those foreseeable situations. One of the biggest advantages I've found to unit testing is that it actually forced me to think, "What if ...", and figure out what the correct behavior would be in each case. For example, I have to decide whether it makes more sense to throw an exception or return a null value in the case of certain errors.
Once you think you've got all your bases covered, you might also want to throw your code at a tool like QuickCheck to help you identify possibilities that you might have missed.
TestDrive
No, you should be writing JUnit or TestNG tests.
Done correctly, your tests are your specification. They define what your code is supposed to do, and each test defines a new aspect of your application. Therefore, you would never write tests looking for things that don't work correctly, since your tests specify how things should work correctly.
Once you think you've finished unit testing and coding your component, one of the best and easiest ways to raise confidence that things are working correctly is to use a technique called Exploratory Testing, which can be thought of as an unscripted exploration of the part of the application you've written looking for bugs based on your intuition and experience (and deviousness!).
Pair Programming is another great way to prevent and flush out the bugs from your code. Two minds are better than one and often someone else will think of something you didn't (and vice versa).
I'm working on a data mining research project and use code from a big SVN repository.
Apparently one of the methods I use from that repository uses randomness somewhere without asking for a seed, which makes two calls to my program return different results. That's annoying for what I want to do, so I'm trying to locate that "uncontrolled" randomness.
Since the classes I use depend on many others, it's pretty painful to do by hand. Any idea how I could find where that randomness comes from?
Edit:
Roughly, my code is structured as :
- stuff i wrote
- call to a method I didn't write, involving lots of other classes
- stuff i wrote
I know that the randomness is introduced in the method I didn't write, but can't locate where exactly...
Idea:
What I'm looking for might be a tool or Eclipse plug-in that would let me see each time Random is instantiated during the execution of my program. Do you know of anything like that?
The default seed of many random number generators is the current time. If it's a cryptographic random number generator, it's a seed that's far more complex than that.
I'd bet that your random numbers are being seeded with the current time. The only way to fix that is to find the code that creates or seeds the random number generator and change it to seed with a constant. I'm not sure what the syntax is in Java, but in my world (C#) it's something like:
Random r = new Random(seedValue);
So even with an answer from StackOverflow, you still have some detective work to do to find the code you want.
Maybe it's a bit old-fashioned style, but...
How about tracing the intermediate results (variables, function arguments) to standard output, gathering the output of two different runs, and checking where they start to differ?
Maybe you want to read this:
In Java, when you create a new Random object, the seed is automatically set to the system clock's current time in nanoseconds. So when you check the source of the Random class, you will see a constructor, something like this:
public Random() {
    this(System.nanoTime());
}
Or maybe this:
In Eclipse you can set your cursor on a variable and then press F3 or F2 (I don't know exactly which). This will bring you to the point where that variable is declared.
A second tool you can use is "Find usages": the IDE will search for all usages of a method, a variable, or whatever you want.
Which "big svn" are you using?
You could write some simple tests to check whether two identical calls to the underlying functions return identical results...
Unless you know where the Random object is created, you're going to have to do some detective work this way.
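A crude sketch of such a test (suspectComputation is a hypothetical stand-in for the library call; here it is intentionally nondeterministic because it creates an unseeded Random):

import java.util.Random;

public class DeterminismProbe {

    // Stand-in for the third-party method under suspicion.
    static int suspectComputation(int n) {
        Random r = new Random(); // unseeded: seeded from the clock, varies per call
        int sum = 0;
        for (int i = 0; i < n; i++) {
            sum += r.nextInt(10);
        }
        return sum;
    }

    public static void main(String[] args) {
        int first = suspectComputation(100);
        int second = suspectComputation(100);
        System.out.println(first == second
                ? "calls agree: no uncontrolled randomness detected here"
                : "calls disagree (" + first + " vs " + second + "): randomness inside");
    }
}

Applying this probe at different depths of the call chain helps narrow down which method introduces the randomness.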
How much of this code is open to you?
Why don't you insert a lot of logging calls (e.g. to standard error) that trace the state of the values you are concerned about throughout the program?
You can compare the trace across two successive runs to narrow down where the randomness is happening by searching for the first difference in the two log files.
Then you can insert more logging calls in that area until you precisely identify the problem.
Java's "Set" class implementations do not guarantee that they iterate the elements the same order. Thus, even if you run a program on the same machine twice, the order in which a set is traversed may change. Can't do anything about it unless one changes all "set" uses into "lists".