I've been working with Java, specifically on Android, for a few months now, and I've found that working with PowerMockito is something I'd rather not do. The complexity of keeping it working has outweighed any benefit. I also agree with most of the comments I've read on Stack Overflow that say not to use PowerMockito, so please keep that in mind when answering my question: I am looking for guidance on testing without PowerMockito.
My question is: when writing code that interfaces with a third-party SDK that has some static method, how would you test it? Specifically, when it seems the only thing really worth testing is a behaviour, i.e. that the static method was called?
I can and do put these third-party services behind adapter classes, and I can test that my adapter was called. But how do you live with never being able to test that the third party itself was called, perhaps confirming which arguments it was called with? Is the only tool in my toolbox to limit logic as much as possible, so that the untested area is less likely to fail?
When explaining this to someone coming from a dynamically typed language, would you just say that the test wasn't valuable? At this point I think these kinds of tests are low value, but I can understand why others would want to test this kind of thing. It's the kind of test I've seen written a lot in Ruby projects I've worked on.
The one thing I have done in the past in similar situations:
created a tiny wrapper interface and an impl class calling that static method, plus tests verifying that the wrapper is called
a single test case that invokes that impl class, and thereby the real static method.
If one is "lucky", that call has an observable effect, for example some exception gets thrown (that is the problem with a lot of static code in my context: it simply breaks unless the whole stack is running). And then you check for that. But I also agree: there isn't much value in doing so. It proves correct plumbing, at the cost of being subject to change whenever the behaviour of that static method changes.
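To make the wrapper approach concrete, here is a minimal sketch. All class names are hypothetical; AnalyticsSdk stands in for the real third-party SDK with its static method:

```java
// Hypothetical stand-in for a third-party SDK exposing only a static method.
final class AnalyticsSdk {
    static String lastEvent; // observable side effect, for illustration only
    static void logEvent(String name) { lastEvent = name; }
}

// Thin wrapper interface that production code depends on instead.
interface AnalyticsLogger {
    void logEvent(String name);
}

// The only class that touches the static call; deliberately logic-free.
class SdkAnalyticsLogger implements AnalyticsLogger {
    @Override
    public void logEvent(String name) {
        AnalyticsSdk.logEvent(name);
    }
}

// Production code receives the interface, so a unit test can pass a fake
// and verify the call and its arguments without PowerMockito.
class CheckoutFlow {
    private final AnalyticsLogger logger;
    CheckoutFlow(AnalyticsLogger logger) { this.logger = logger; }
    void complete() { logger.logEvent("checkout_completed"); }
}
```

In unit tests you hand CheckoutFlow a fake AnalyticsLogger that records calls; a single integration-style test exercises SdkAnalyticsLogger against the real static method.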
Related
I have read a lot about test-driven design. My project is using tests, but currently they are written after the code has been written, and I am not clear how to do it in the other direction.
Simple example: I have a class Rectangle. It has private fields width and height with corresponding getters and setters. Common Java. Now, I want to add a function getArea() which returns the product of both, but I want to write the test first.
Of course, I can write a unit test. But it isn't just that it fails; it does not even compile, because there is no getArea() function yet. Does that mean that writing the test always involves changing the production code to introduce dummies without functionality? Or do I have to write the test in a way that uses introspection? I don't like the latter approach, because it makes the code less readable, and later refactoring with tools will not discover it and will break the test, and I know that we refactor a lot. Also, adding 'dummies' may involve lots of changes, e.g. if I need additional fields, the database must be changed for Hibernate to continue to work. That seems like way too much production code change for me when still "writing tests only". What I would like to have is a situation where I can write code only inside src/test/, not touching src/main at all, but without introspection.
Is there a way to do that?
Well, TDD does not mean that you cannot have anything in the production code before writing the test.
For example:
You put your method, e.g. getArea(param1, param2), in your production code with an empty body.
Then you write the test with valid input and your expected result.
You run the test and it will fail.
Then you change the production code and run the test again.
If it still fails: back to the previous step.
If it passes, you write the next test.
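Applied to the Rectangle example from the question, the steps above might look like this sketch (the comment shows the stub body you would commit first so the test compiles but fails):

```java
// Step 1: write the test; it will not compile yet.
// Step 2: add just enough production code for it to compile but fail.
class Rectangle {
    private int width;
    private int height;

    void setWidth(int width) { this.width = width; }
    void setHeight(int height) { this.height = height; }

    // First committed as `return 0;` so the test fails (red),
    // then replaced with the real computation (green):
    int getArea() { return width * height; }
}
```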
A quick introduction can be found for example here: codeutopia -> 5-step-method-to-make-test-driven-development-and-unit-testing-easy
What I would like to have is a situation where I can actually only write code inside src/test/, not touching src/main at all, but without introspection.
There isn't, that I have ever seen, a way to write a test with a dependency on a new part of the API, and have that test immediately compile without first extending the API of the test subject.
It's introspection or nothing.
But it isn’t the case that it fails, but it does not even compile, because there is no getArea() function yet
Historically, writing code that couldn't compile was part of the rhythm of TDD. Write a little bit of test code, write a little bit of production code, write a little bit of test code, write a little bit of production code, and so on.
Robert Martin describes this as the nano-cycle of TDD
... the goal is always to promote the line by line granularity that I experienced while working with Kent so long ago.
I've abandoned the nano-cycle constraint in my own work. Perhaps I fail to appreciate it because I've never paired with Kent.
But I'm perfectly happy to write tests that don't compile, and then back fill the production code I need when the test is in a satisfactory state. That works well for me because I normally work in a development environment that can generate production implementations at just a few key strokes.
Another possibility is to consider a discipline like TDD as if you meant it, which does a lot more of the real work in the test source hierarchy before moving code into the production hierarchy.
I've been doing Android development for quite some time, but had never fully adopted TDD on Android. However, I recently tried to develop my new app with complete TDD. So here is my opinion.
Does that mean that writing the test always already involves changing the productive code to introduce dummys without functionality?
I think the answer is yes. As I understand it, each test corresponds to a spec/use case of the software. So writing a failing test first is an attempt to express the requirement spec as test code. Then, when I write the production code to pass the just-written test case, I really try to make only that work. After doing this for a while, I was pretty surprised by how small my production code ended up, yet how much of the requirement it covered.
For me personally, all the failing test cases I wrote before the production code actually came from a list of questions I had brainstormed about the requirement, and I sometimes used them to explore edge cases of the requirement.
So the basic workflow is Red - Green - Refactor, which I got from a presentation by Bryan Breecham: https://www.infoq.com/presentations/tdd-lego/
About,
What I would like to have is a situation where I can actually only write code inside src/test/, not touching src/main at all, but without introspection.
For me I think it's possible: you write all your production logic first, and then the unit tests play the role of verifying the requirement. It's just the other way around. So overall I think TDD is the approach, but people may use unit tests for different purposes, e.g. reducing testing time, etc.
I was confused about Randomized testing.
It is quoted from the proj1b spec:
"The autograder project 1A largely relies on randomized tests. For
example, our JUnit tests on gradescope simply call random methods of
your LinkedListDeque class and our correct implementation
LinkedListDequeSolution and as soon as we see any disagreement, the
test fails and prints out a sequence of operations that caused the
failure. "
(http://datastructur.es/sp17/materials/proj/proj1b/proj1b.html)
I do not understand what it means by:
"call random methods of the tested class and the correct class"
I need to write something really similar to that autograder, but I do not know whether I need to write tests for different methods together, using a loop to randomly pick some to test.
If so, since we can test all methods using JUnit, why do we need randomized tests?
Also, if I combine all the tests together, why would I call it JUnit?
If you do not mind, some examples would make this easier to understand.
Just to elaborate on the "random" testing.
There is a framework called QuickCheck, initially written for the Haskell programming language. But it has been ported to many other languages, including Java: there is jqwik for JUnit 5, or the (probably outdated) jcheck.
The idea is "simply":
you describe properties of your methods, like a(b(x)) == b(a(x))
the framework then creates random input for method calls, and tries to find examples where a property doesn't hold
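To illustrate the idea without any framework, here is a hand-rolled sketch of a property check in plain Java (jqwik would express the same thing declaratively with @Property and @ForAll annotations; the property and generator here are illustrative):

```java
import java.util.Random;

class PropertyCheck {
    // Property under test: reversing a string twice yields the original.
    static boolean reverseTwiceIsIdentity(String s) {
        String once = new StringBuilder(s).reverse().toString();
        return new StringBuilder(once).reverse().toString().equals(s);
    }

    // Generate random inputs and hunt for a counterexample, QuickCheck-style.
    // Returns a failing input, or null if the property held on every sample.
    static String findCounterexample(int trials, long seed) {
        Random rnd = new Random(seed);
        for (int i = 0; i < trials; i++) {
            StringBuilder sb = new StringBuilder();
            int len = rnd.nextInt(20);
            for (int j = 0; j < len; j++) {
                sb.append((char) ('a' + rnd.nextInt(26)));
            }
            if (!reverseTwiceIsIdentity(sb.toString())) {
                return sb.toString();
            }
        }
        return null;
    }
}
```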
I assume they are talking about Model-Based Testing. For that you'd have to create models: simplified versions of your production behaviour. Then you list the possible methods that can be invoked and the dependencies between those methods. After that you choose a random one and invoke both the method of your model and the method of your app. If the results are the same, it works correctly. If the results differ, there is a bug either in your model or in your app. You can read more in this article.
In Java you can either write this logic on your own, or use existing frameworks. The only existing one that I know in Java is GraphWalker. But I haven't used it and don't know how good it is.
The original frameworks (like QuickCheck) are also able to "shrink": if it took 50 calls to random methods to find a bug, they will try to find the exact short sequence of steps that leads to that bug. I don't know if there are such capabilities in the Java frameworks, but it may be worth looking into ScalaCheck if you need a JVM (though not necessarily Java) solution.
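A minimal sketch of that model-based approach in plain Java, in the spirit of the autograder described in the question: random operations are driven against an implementation under test and a trusted model (java.util.ArrayDeque), and the operation log is reported on the first disagreement. Class and method names are made up for illustration:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.Objects;
import java.util.Random;

class ModelBasedDequeCheck {
    // Drives random operations against `impl` and a trusted model.
    // Returns the operation log on the first disagreement,
    // or null if the two agreed on every step.
    static List<String> run(Deque<Integer> impl, int steps, long seed) {
        Deque<Integer> model = new ArrayDeque<>(); // reference implementation
        List<String> ops = new ArrayList<>();
        Random rnd = new Random(seed);
        for (int i = 0; i < steps; i++) {
            int choice = rnd.nextInt(3);
            int v = rnd.nextInt(100);
            if (choice == 0) {
                ops.add("addFirst(" + v + ")");
                model.addFirst(v);
                impl.addFirst(v);
            } else if (choice == 1) {
                ops.add("addLast(" + v + ")");
                model.addLast(v);
                impl.addLast(v);
            } else {
                ops.add("pollFirst()");
                // Both poll; a mismatch exposes a bug in `impl` (or the model).
                if (!Objects.equals(model.pollFirst(), impl.pollFirst())) {
                    return ops; // the sequence of operations that caused the failure
                }
            }
        }
        return null;
    }
}
```

Running this against a correct deque (e.g. java.util.LinkedList) yields no disagreement; running it against a buggy student implementation would return the minimal-effort reproduction log.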
I have a public method that calls a group of private methods.
I would like to test each of the private methods with a unit test, as it is too complicated to test everything through the public method.
I think it would be bad practice to change method accessibility only for testing purposes.
But I don't see any other way to test it (maybe reflection, but that is ugly).
Private methods should only exist as a consequence of refactoring a public method, that you've developed using TDD.
If you create a class with public methods and plan to add private methods to it, then your architecture will fail.
I know it's harsh, but what you're asking for is really, really bad software design.
I suggest you buy Uncle Bob's book "Clean Code"
http://www.amazon.co.uk/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882
Which basically gives you a great foundation for getting it right and saving you a lot of grief in your future as a developer.
There is IMO only one correct answer to this question: if the class is too complex, it means it's doing too much and has too many responsibilities. You need to extract those responsibilities into other classes that can be tested separately.
So the answer to your question is NO!
What you have is a code smell. You're seeing the symptoms of a problem, but you're not curing it. What you need to do is use refactoring techniques like Extract Class or Extract Subclass. Try to see if you can extract one of those private methods (or parts of it) into a class of its own. Then you can add unit tests for that new class. Divide and conquer until you have the code under control.
You could, as has been mentioned, change the visibility from private to package, and then ensure that the unit-tests are in the same package (which should normally be the case anyway).
This can be an acceptable solution to your testing problem, given that the interfaces of the (now) private functions are sufficiently stable and that you also do some integration testing (that is, checking that the public methods call the private ones in the correct way).
There are, however, some other options you might want to consider:
If the private functions are interface-stable but sufficiently complex, you might consider creating separate classes for them - it is likely that some of them might benefit from being split into several smaller functions themselves.
If testing the private functions via the public interface is inconvenient (maybe because of the need for a complex setup), this can sometimes be solved by the use of helper functions that simplify the setup and allow different tests to share common setup code.
You are right, changing the visibility of methods just so you are able to test them is a bad thing to do. Here are the options you have:
Test it through existing public methods. You really shouldn't test methods but behavior, which normally needs multiple methods anyway. So stop thinking about testing that method, but figure out the behavior that is not tested. If your class is well designed it should be easily testable.
Move the method into a new class. This is probably the best solution to your problem from a design perspective. If your code is so complex that you can't reach all the paths in that private method, parts of it should probably live in their own class. In that class they will have at least package scope and can easily be tested. Again: you should still test behavior not methods.
Use reflection. You can access private fields and methods using reflection. While this is technically possible, it just adds more legacy code on top of the existing legacy code in order to hide it. In the general case it's a rather bad idea. There are exceptions, for example if for some reason you are not allowed to make even the smallest change to the production source code. If you really need this, google it.
Just change the visibility. Yes, it is bad practice. But sometimes the alternatives are: make large changes without tests, or don't test it at all. So sometimes it is OK to just bite the bullet and change the visibility, especially when it is the first step towards writing some tests and then extracting the behavior into its own class.
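A tiny sketch of the "move the method into a new class" option. The names are illustrative: imagine the tax computation was a complicated private method inside a large service class; after extraction it is a package-visible collaborator that can be unit-tested directly:

```java
// Extracted collaborator: package visibility, directly unit-testable.
class TaxCalculator {
    double taxFor(double net) {
        return net * 0.19; // illustrative flat rate, not real tax logic
    }
}

// The original class now delegates instead of hiding the logic privately.
class InvoiceService {
    private final TaxCalculator taxCalculator = new TaxCalculator();

    double total(double net) {
        return net + taxCalculator.taxFor(net);
    }
}
```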
Whenever I program, I seem to accumulate a lot of "trash" code, code that is not in use anymore. Just to keep my code neat, and to avoid making any expensive and unnecessary computations: is there an easy way to tell if there is code that is not being used?
One of the basic principles which will help you in this regard is to reduce the visibility of everything as much as possible. If a class can be private, don't make it default, protected, or public. The same applies to methods and variables. It is much easier when you can say for sure that something is not being used outside a class. In cases like this, even IDEs like Eclipse and IntelliJ IDEA will warn you about unused code.
Using this practice while developing and refactoring code is the best way to clean unused code confidently without the possibility of breaking the application. This will help in scenarios even when reflection is being used.
It's difficult to do in Java since it's a reflective language. (You can't simply hunt for calls to a certain class or function, for example, since reflection can be used to call a function using strings that can only be resolved at runtime.)
So in full generality, you cannot be certain.
If you have adequate unit tests for your code base then the possibility of redundant code should not be a cause for concern.
I think "unused code" means code that is never executed at runtime. I hope I interpreted you correctly.
The way to do a simple check on this is very easy: just use IntelliJ IDEA to write your code. It will point out the parts of your code that will never be executed, and also the parts where the code can be simplified. For example,
if (x == 5) {
}
And then it will tell you that this if statement is redundant. Or if you have this:
return;
someMethod();
The IDE will tell you that someMethod() can never be reached. And it also provides a lot of other cool features.
But sometimes this isn't enough. What if you have
if (x == 5) {
someMethod();
}
But actually in your code, x can only be in the range of 1 to 4? The IDE won't tell you about this. You can use a tool that shows your code coverage by running lots of tests. Then you can see which part of your code is not executed.
If you don't want to use such a tool, you can put breakpoints in your methods. Then run some tests by hand. When the debugger steps through your code, you can see exactly where the code goes and exactly which piece(s) of code is not executed.
Another method to do this is to use the Find/Replace function of the IDE. Check if some of your public/private methods are not being called anywhere. For example, to check whether someMethod() is called, search for someMethod in the whole project and see if there are occurrences other than the declaration.
But the most effective way would be,
Stop writing this kind of code in the first place!
I think the best way to check that is to install a coverage plugin like EclEmma and create unit and integration tests to get 100% coverage of the code that fulfils the use cases/tasks you have.
The code that the tests do not cover, i.e. that execution never passes through once the tests are complete and run, is code that you are not using.
Try to avoid accumulating trash in the first place. Remove stuff you don't need anymore. (You could make a backup or better use a source code management system.)
You should also write unit tests for your functions. So you know if it still works after you remove something.
Aside from that, most IDEs will show you unused local variables and private methods.
I can imagine a situation where you have an app developed over years, and some of its functions aren't used anymore even though they still work. Example: let's assume you made some changes to internal systems triggered when a specific event occurred, but that event doesn't occur anymore.
I would say you could use AspectJ to obtain such data/logs and then analyze them after some time.
I have inherited a massive system from my predecessor, and I am beginning to understand how it works, but I can't fathom why.
It's in Java and uses interfaces which should add one extra layer, but here they add 5 or 6.
Here's how it goes: when the user-interface button is pressed, it calls a function which looks like this:
foo.create(stuff...)
{
    bar.create(stuff...);
}
bar.create is exactly the same, except it calls foobar.create, and that in turn calls barfoo.create. This goes on through 9 classes before it finds a function that accesses the database.
As far as I know, each extra function call incurs more performance cost, so this seems wasteful to me.
Also, in foo.create all the variables are error-checked. This makes sense, but in every other call the error checks happen again; it looks like cut-and-paste code.
This seems like madness, as once the variables are checked they should not need to be re-checked; this is just wasting processor cycles in my opinion.
This is my first project using Java and interfaces, so I'm just confused as to what's going on.
Can anyone explain why the system was designed like this, what benefits/drawbacks it has, and what I can do to improve it if it is bad?
Thank you.
I suggest you look at design patterns and see if they are being used in the project. Search for words like Factory and Abstract Factory initially. Only then will the intentions of the previous developer be understood correctly.
Also, in general, unless you are running on a resource constrained device, don't worry about the cost of an extra call or level of indirection. If it helps your design, makes it easier to understand or open to extension, then the extra calls are worth making.
However, if there is copy-paste in the code, then that is not a good sign, and the developer probably did not know what he was doing.
It is very hard to understand what exactly is done in your software. Maybe it even makes sense. But I've seen a couple of projects done by some "design pattern maniacs". It looked like they wanted to demonstrate their knowledge of all sorts of delegates, indirections, etc. Maybe that is your case.
I cannot comment on the architecture without carefully examining it, but generally speaking, separation of services across different layers is a good idea. That way, if you change the implementation of one service, the other services remain unchanged. However, this will be true only if there is loose coupling between the different layers.
In addition, it is generally the norm that each service handles the exceptions that specifically pertain to the kind of service it provides, leaving the rest to others. This also allows us to reduce coupling between service layers.
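The loose coupling between layers described above can be sketched like this (all names are hypothetical):

```java
// The caller depends only on the interface, so the storage implementation
// can be swapped (database, in-memory fake for tests) without touching callers.
interface UserRepository {
    String findName(int id);
}

class DatabaseUserRepository implements UserRepository {
    @Override
    public String findName(int id) {
        return "user-" + id; // stand-in for a real database query
    }
}

// The service layer is wired against the abstraction, not the concrete class.
class UserService {
    private final UserRepository repo;
    UserService(UserRepository repo) { this.repo = repo; }
    String greeting(int id) { return "Hello, " + repo.findName(id); }
}
```

Each extra interface costs essentially nothing at runtime (the JVM inlines such calls readily), which is why the indirection itself is rarely the real problem; the duplicated error checking is.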