Right now my JUnit tests read like a long story:
I create 4 users
I delete 1 user
I try to login with the deleted user and make sure it fails
I log in with one of the 3 remaining users and verify that the login succeeds
I send a message from one user to the other and verify that it appears in the outbox of the sender and in the inbox of the receiver.
I delete the message
...
...
Advantages:
The tests are quite effective (very good at detecting bugs) and very stable, because they only use the API; if I refactor the code, the tests are refactored too. Since I don't use "dirty tricks" such as saving and reloading the database in a given state, my tests are oblivious to schema changes and implementation changes.
Disadvantages:
The tests are getting difficult to maintain: any change in one test affects other tests. The tests run 8-9 minutes, which is fine for continuous integration but a bit frustrating for developers. Tests cannot be run in isolation; the best you can do is stop after the test you are interested in has run, but you absolutely must run all the tests that come before it.
How would you go about improving my tests?
First, understand that the tests you have are integration tests (they probably access external systems and hit a wide range of classes). Unit tests should be a lot more specific, which is a challenge on an already-built system. The main obstacle is usually the way the code is structured: classes tightly coupled to external systems (or to other classes that are). To write real unit tests you need to build the classes in such a way that you can avoid hitting external systems during the tests.
Update 1: Read the following, and consider that the resulting design will allow you to actually test the encryption logic without hitting files/databases - http://www.lostechies.com/blogs/gabrielschenker/archive/2009/01/30/the-dependency-inversion-principle.aspx (not in Java, but it illustrates the issue very well) ... also note that you can write really focused integration tests for the readers/writers, instead of having to test it all together.
I suggest:
Gradually introduce real unit tests in your system. You can do this while making changes and developing new features, refactoring appropriately.
While doing that, include focused integration tests where appropriate. Make sure you are able to run the unit tests separately from the integration tests.
Consider that your tests come close to testing the system as a whole, and thus differ from automated acceptance tests only in that they operate at the boundary of the API. Given this, think about factors related to the importance of the API for the product (such as whether it will be used externally), and whether you have good coverage with automated acceptance tests. This can help you understand the value of having these tests on your system, and also why they naturally take so long. Then decide whether you will test the system as a whole at the interface level, or at both the interface and API levels.
Update 2: Based on other answers, I want to clarify something regarding doing TDD. Let's say you have to check whether some given logic sends an email, logs the info to a file, saves data in the database, and calls a web service (not all at once, I know, but you start adding tests for each of those). In each test you don't want to hit the external systems; what you really want to test is whether the logic makes the calls to those systems that you expect it to make. So when you write a test that checks that an email is sent when you create a user, what you test is whether the logic calls the dependency that does that. Notice that you can write these tests and the related logic without actually having to implement the code that sends the email (and then having to access the external system to know what was sent ...). This will help you focus on the task at hand and help you get a decoupled system. It will also make it simple to test what is being sent to those systems.
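For illustration only, a minimal sketch of that idea with JUnit and Mockito (the UserService and Mailer names are invented for the example, not taken from the question):

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;

    import org.junit.Test;

    public class UserServiceTest {

        // Hypothetical collaborator: the real mail-sending code can be written later.
        public interface Mailer {
            void sendWelcomeMail(String address);
        }

        // Hypothetical logic under test: it only needs to *call* the Mailer.
        public static class UserService {
            private final Mailer mailer;

            public UserService(Mailer mailer) {
                this.mailer = mailer;
            }

            public void createUser(String address) {
                // ... persist the user through some other collaborator ...
                mailer.sendWelcomeMail(address);
            }
        }

        @Test
        public void creatingAUserSendsAWelcomeMail() {
            Mailer mailer = mock(Mailer.class);

            new UserService(mailer).createUser("alice@example.com");

            // The test verifies the call to the dependency; no mail server is involved.
            verify(mailer).sendWelcomeMail("alice@example.com");
        }
    }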
Unit tests should, ideally, be independent and able to run in any order. So I would suggest that you:
break up your tests to be independent
consider using an in-memory database as the backend for your tests
consider wrapping each test or suite in a transaction that is rolled back at the end (a sketch follows this list)
profile the unit tests to see where the time is going, and concentrate on that
if it takes 8 minutes to create a few users and send a few messages, the performance problem may not be in the tests, rather this may be a symptom of performance problems with the system itself - only your profiler knows for sure!
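For the transaction-rollback suggestion above, a rough sketch of what that can look like with plain JDBC, assuming an in-memory H2 database on the test classpath (the repository code itself is omitted):

    import java.sql.Connection;
    import java.sql.DriverManager;

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;

    public class UserRepositoryTest {

        private Connection connection;

        @Before
        public void beginTransaction() throws Exception {
            // in-memory database; nothing touches the disk
            connection = DriverManager.getConnection("jdbc:h2:mem:testdb");
            connection.setAutoCommit(false);
        }

        @After
        public void rollBack() throws Exception {
            connection.rollback();   // undo whatever the test inserted or changed
            connection.close();
        }

        @Test
        public void createdUserCanBeFoundAgain() throws Exception {
            // ... exercise the repository through the shared connection and assert on the result
        }
    }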
[Caveat: I do NOT consider these kinds of tests to be 'integration tests', though I may be in the minority; I consider these kinds of tests to be unit tests of features, a la TDD.]
Right now you are testing many things in one method (a violation of One Assertion Per Test). This is a bad thing, because when any of those things changes, the whole test fails. It is then not immediately obvious why a test failed and what needs to be fixed. Also, when you intentionally change the behaviour of the system, you need to change more tests to match the changed behaviour (i.e. the tests are fragile).
To know what kind of tests are good, it helps to read more on BDD: http://dannorth.net/introducing-bdd http://techblog.daveastels.com/2005/07/05/a-new-look-at-test-driven-development/ http://jonkruger.com/blog/2008/07/25/why-behavior-driven-development-is-good/
To improve the test that you mentioned, I would split it into the following three test classes with these context and test method names (a sketch of one of them follows the list):
Creating user accounts
Before a user is created
the user does not exist
When a user is created
the user exists
When a user is deleted
the user does not exist anymore
Logging in
When a user exists
the user can login with the right password
the user can not login with a wrong password
When a user does not exist
the user can not login
Sending messages
When a user sends a message
the message appears in the sender's outbox
the message appears in the receiver's inbox
the message does not appear in any other message boxes
When a message is deleted
the message does not exist anymore
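As an illustration, the "Logging in" context could become a JUnit class along these lines (AuthService stands in for whatever your real API entry point is):

    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    import org.junit.Before;
    import org.junit.Test;

    public class LoggingInTest {

        private AuthService auth;   // placeholder for the real API

        @Before
        public void givenAnExistingUser() {
            auth = new AuthService();
            auth.createUser("alice", "right-password");
        }

        @Test
        public void userCanLoginWithTheRightPassword() {
            assertTrue(auth.login("alice", "right-password"));
        }

        @Test
        public void userCanNotLoginWithAWrongPassword() {
            assertFalse(auth.login("alice", "wrong-password"));
        }

        @Test
        public void nonExistingUserCanNotLogin() {
            assertFalse(auth.login("nobody", "whatever"));
        }
    }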
You also need to improve the speed of the tests. You should have a unit test suite with good coverage that runs in a couple of seconds. If it takes longer than 10-20 seconds to run the tests, you will hesitate to run them after every change, and you lose some of the quick feedback that running the tests gives you. (If it talks to the database, it's not a unit test but a system or integration test; those have their uses, but they are not fast enough to be executed continually.) You need to break the dependencies of the classes under test by mocking or stubbing them. Also, from your description it appears that your tests are not isolated but instead depend on the side effects caused by previous tests - this is a no-no. Good tests are FIRST.
Reduce dependencies between tests. This can be done by using mocks. Martin Fowler writes about this in Mocks Aren't Stubs, in particular about why mocking reduces dependencies between tests.
You can use JExample, an extension of JUnit that allows test methods to have return values that are reused by other tests. JExample tests run with the normal JUnit plugin in Eclipse, and also work side by side with normal JUnit tests. Thus migration should be no problem. JExample is used as follows
@RunWith(JExample.class)
public class MyTest {

    @Test
    public Object a() {
        return new Object();
    }

    @Test
    @Given("#a")
    public Object b(Object object) {
        // do something with object
        return object;
    }

    @Test
    @Given("#b")
    public void c(Object object) {
        // do some more things with object
    }
}
Disclaimer, I am among the JExample developers.
If you use TestNG you can annotate tests in a variety of ways. For example, you can annotate your tests above as long-running. Then you can configure your automated-build/continuous integration server to run these, but the standard "interactive" developer build would not (unless they explicitly choose to).
This approach depends on developers checking into your continuous build on a regular basis, so that the tests do get run!
Some tests will inevitably take a long time to run. The comments in this thread re. performance are all valid. However if your tests do take a long time, the pragmatic solution is to run them but not let their time-consuming nature impact the developers to the point that they avoid running them.
Note: you can do something similar with JUnit by (say) naming tests in different fashions and getting your continuous build to run a particular subset of test classes.
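To make the grouping idea concrete, a sketch with TestNG annotations (the group name and test contents are illustrative); the continuous build would include the "long-running" group in its testng.xml or build configuration, while the default developer build excludes it:

    import org.testng.annotations.Test;

    public class MessagingEndToEndTest {

        // Runs only in builds that include the "long-running" group.
        @Test(groups = { "long-running" })
        public void fullSendAndReceiveScenario() {
            // ... slow end-to-end scenario ...
        }

        // Ungrouped: part of the fast default suite developers run interactively.
        @Test
        public void messageSubjectIsTrimmed() {
            // ... quick, isolated check ...
        }
    }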
By testing stories like you describe, you have very brittle tests. If only one tiny bit of functionality changes, your whole test might be messed up. Then you will likely have to change all tests that are affected by that change.
In fact, the tests you are describing are more like functional tests or component tests than unit tests. So you are using a unit testing framework (JUnit) for non-unit tests. From my point of view there is nothing wrong with using a unit testing framework for non-unit tests, if (and only if) you are aware of it.
So you have the following options:
Choose another testing framework which supports a "story telling" style of testing much better, as other users have already suggested. You will have to evaluate and find a suitable testing framework.
Make your tests more "unit test"-like. To do this you will need to break up your tests and maybe change your current production code. Why? Because unit testing aims at testing small units of code (unit testing purists suggest only one class at a time). By doing this, your unit tests become more independent. If you change the behaviour of one class, you just need to change a relatively small amount of unit test code. This makes your unit tests more robust. During that process you might see that your current code does not support unit testing very well - mostly because of dependencies between classes. This is the reason you will also need to modify your production code.
If you are in a project and running out of time, both options might not help you any further. Then you will have to live with those tests, but you can try to ease your pain:
Remove code duplication in your tests: as in production code, eliminate duplication by moving shared code into helper methods or helper classes. If something changes, you might only need to change the helper method or class. This way you will converge towards the next suggestion.
Add another layer of indirection to your tests: produce helper methods and helper classes which operate on a higher level of abstraction. They should act as an API for your tests. These helpers call your production code. Your story tests should only call those helpers. If something changes, you need to change only one place in your API and don't need to touch all your tests.
Example signatures for your API (a sketch of such a helper follows the list):
createUserAndDelete(String[] usersForCreation, String[] usersForDeletion);
logonWithUser(String user);
sendAndCheckMessageBoxes(String fromUser, String toUser);
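A sketch of what such a helper layer could look like (MessagingSystem and Session are placeholders for your real API types, and the password convention is invented for the example):

    import static org.junit.Assert.assertTrue;

    public class TestApi {

        private final MessagingSystem system = new MessagingSystem(); // placeholder

        public void createUserAndDelete(String[] usersForCreation, String[] usersForDeletion) {
            for (String user : usersForCreation) {
                system.createUser(user, passwordFor(user));
            }
            for (String user : usersForDeletion) {
                system.deleteUser(user);
            }
        }

        public Session logonWithUser(String user) {
            return system.login(user, passwordFor(user));
        }

        public void sendAndCheckMessageBoxes(String fromUser, String toUser) {
            Session sender = logonWithUser(fromUser);
            Session receiver = logonWithUser(toUser);
            String messageId = sender.send(toUser, "hello");
            assertTrue(sender.outbox().contains(messageId));
            assertTrue(receiver.inbox().contains(messageId));
        }

        private String passwordFor(String user) {
            return user + "-password";   // however your tests derive credentials
        }
    }

If the API changes, only this helper has to be adjusted; the story tests keep calling the same three methods.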
For general unit testing, I suggest having a look at xUnit Test Patterns by Gerard Meszaros.
For breaking dependencies in your production code, have a look at Working Effectively with Legacy Code by Michael Feathers.
In addition to the above, pick up a good book on TDD (I can recommend "TDD and Acceptance TDD for Java Developers"). Even though it approaches things from a TDD point of view, there is a lot of helpful information about writing the right kind of unit tests.
Find someone who has a lot of knowledge in the area and use them to figure out how you can improve your tests.
Join a mailing list to ask questions and just read the traffic coming through, for example the JUnit list at Yahoo (something like groups.yahoo.com/junit). Some of the movers and shakers in the JUnit world are on that list and actively participate.
Get a list of the golden rules of unit tests and stick them on your (and others) cubicle wall, something like:
Thou shalt never access an external system
Thou shalt only test the code under test
Thou shalt only test one thing at once
etc.
Since everyone else is talking about structure, I'll pick different points. This sounds like a good opportunity to profile the code to find bottlenecks and to run it through code coverage to see if you are missing anything (given the time it takes to run, the results could be interesting).
I personally use the Netbeans profiler, but there are ones in other IDEs and stand alone ones as well.
For code coverage I use Cobertura, but EMMA works too (EMMA had an annoyance that Cobertura didn't have... I forget what it was and it may not be an issue anymore). Those two are free, there are paid ones as well that are nice.
I work on a project which has existed for many years. The time it takes to build the project with all tests is almost sensational (not in a good way). This is mainly due to a lot of modules, as well as heaps of unit tests which use a repository to set up test data rather than mocking the desired behaviour. Unit tests that use a repository spend a lot of time on test setup and run quite slowly. This adds up to a lot of time, as the system is quite large.
We write all new unit tests using Mockito to mock the repository (except when we are actually testing the repository, obviously). We also try to rewrite existing unit tests to use mocks of the repository instead of an actual repository whenever we have the time and opportunity. Completely eliminating the use of repositories in our tests has a huge effect on how much time it takes to run them.
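A minimal sketch of that style (UserRepository, User and UserService are placeholders for the real domain classes):

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;
    import static org.mockito.Mockito.when;

    import org.junit.Test;

    public class MockedRepositoryTest {

        @Test
        public void resolvesDisplayNameWithoutTouchingTheDatabase() {
            UserRepository repository = mock(UserRepository.class);
            when(repository.findById(42L)).thenReturn(new User(42L, "Alice"));

            UserService service = new UserService(repository);

            assertEquals("Alice", service.displayNameFor(42L));
            verify(repository).findById(42L);   // no real repository, no slow test setup
        }
    }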
A lot of the legacy code sets up its test data by using builders and test utilities which in turn use the repository. As the domain is quite complex, this often involves setting up a fair number of objects and how they relate to each other. Rewriting a class of tests (say ~15 tests) to use only mocked objects can therefore be quite time-consuming. And as everywhere else, time is not an infinite resource.
If we are adding some new functionality to a class, it would be far easier to just write one new repository test (in addition to the existing 15) than to find out exactly how the test data needs to be set up by using different mock objects.
I have tried to find some information on how and to what extent the test setup affects the actual time it takes to run the tests, but I have failed to find any useful information. My only "facts" are the observations I make when running a test class. The test setup for a repo test may easily take ~10 seconds, while the test setup for a mocked test class starts in less than a second.
NOTE: I am aware that I can use JUnit Stopwatch to benchmark a single or a set of tests, but my question is more concerned with best practices than exactly how long it takes me to run my tests.
I have two questions:
Say I encounter a test class which already has 15 unit tests where none of them mocks any behaviour. I write a test and do a small fix in the class. I do not have the time to re-write the whole test class to mock objects. Should I just add a new test without mocking any behaviour and follow the existing (and bad) pattern? Does it really matter whether I have 15 non-mocked tests and 1 mocked test or if I have 16 non-mocked tests?
In my test class with 15 unit tests, some of the tests are easier to refactor than others. Is it worth it to re-write only five of the tests to use mocked objects? Is it against best practice or in any other way not good to have a test class where some of the tests use mocks and some don't?
Your question is really subjective, but I'll try to suggest a few options you can explore. It all depends on how much you're willing to spend.
Should I just add a new test without mocking any behavior and follow the existing (and bad) pattern? Does it really matter whether I have 15 non-mocked tests and 1 mocked test or if I have 16 non-mocked tests?
It's not about just one new test. If you keep writing in the bad/slow pattern, you're just increasing technical debt. You have to lay out the best practices for writing new unit tests yourself.
In my test class with 15 unit tests, some of the tests are easier to refactor than others. Is it worth it to re-write only five of the tests to use mocked objects?
Absolutely. Why not? You said yourself what improvements you're getting by following the newer style of code.
Is it against best practice or in any other way not good to have a test class where some of the tests use mocks and some don't?
Well, one best practice is to have consistent code everywhere. A mix of old-style repository tests and newer ones with mocks does not sit well as far as best practices are concerned. But I'd be more concerned if the code you write is not well covered by unit tests, whatever the style may be.
At the end of the day, you're the one who has to weigh the trade-offs: how much build-time improvement you can achieve with mocked repositories, how frequently you build, and whether the same gain could be achieved through hardware improvements or other factors.
As written by @ShanuGupta, there is no general answer to your question.
But here is my thought:
After correctness, readability is the second most desirable quality of code, especially test code (since it is an executable specification).
On the other hand, there is no rule that a unit cannot have more than one test class. Therefore I'd separate the "old" test methods that don't use mocks from the "mocking" tests by placing them in separate test classes.
I have a class of high school programming students and I would like an automated way to check the validity of their work. I go through their code and look for structure, efficiency and basic expected outcomes but I was hoping to take it to another level.
Would Unit Testing be a viable solution?
Is there an elegant way of checking a bunch of student programs at once?
We are using Eclipse and I've imported their project, containing all of their programs, from their local network drive. Works great. I'm just trying to give them more feedback on how they are doing, and even introduce them to unit testing, which is something I've never done.
Yes, sure, you can do that to check the students' work, since:
Unit Testing reduces the level of bugs in production code.
Automated tests can be run as frequently as required.
At my university we had automated tests for exercises. We just mailed the class files or built JARs to an email address. On the server side you put them on the classpath and start your tests. It's actually quite easy to implement. The important thing is to clearly document the package structure and so on in the given requirements, maybe even supply a project skeleton.
Nice plus: the students were given a smaller set of tests so they could verify their work before they submitted it.
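For example, a grading test could look like this (assuming a hypothetical assignment that asks for a Calculator class with an add method; the names come from the assignment, not from any framework):

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class CalculatorGradingTest {

        @Test
        public void addReturnsTheSumOfTwoIntegers() {
            assertEquals(7, new Calculator().add(3, 4));
        }

        @Test
        public void addHandlesNegativeNumbers() {
            assertEquals(-1, new Calculator().add(2, -3));
        }
    }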
I think introducing them to unit testing and TDD is a great idea.
However, if YOU write the unit tests and give them to your students before they do the assignment, then they won't learn to write unit tests. Also, they will structure their code according to your test, which may or may not be what you want.
If they, on the other hand, write the unit tests, they will learn how to do that, but you won't know what their tests are testing.
Perhaps you could extract code test coverage and assign them to reach 80, 90 or 100% coverage or something like that.
I did a review of a programming test today, for an eventual new hire, and I feel that the reasoning behind programming choices is really important.
We are using JUnit to execute integration tests as well as system integration tests which rely on external test systems (not necessarily maintained by our own company).
I wonder where to put the code that checks whether the system is available prior to running the test cases, so I can tell whether there is a network or other issue rather than a problem with the test itself.
JUnit allows you to set up some parts of a test in JUnit rules. Is it a good idea to set up the service that communicates with the external system within the rule and do some basic checks ("ping") against the external system there? Or to store the state and, within the test, use a JUnit assume(rule.isAvailable()) to avoid having the test executed?
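For illustration, a rough sketch of such a rule (the host, port and timeout are placeholders):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    import org.junit.rules.ExternalResource;

    public class ExternalSystemRule extends ExternalResource {

        private final String host;
        private final int port;
        private boolean available;

        public ExternalSystemRule(String host, int port) {
            this.host = host;
            this.port = port;
        }

        @Override
        protected void before() {
            // "ping" the external system once per test and remember the result
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, port), 1000);
                available = true;
            } catch (IOException e) {
                available = false;
            }
        }

        public boolean isAvailable() {
            return available;
        }
    }

The test would then declare the rule with @Rule and call assumeTrue(rule.isAvailable()) as its first statement, as described above.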
Or would it be smarter to put this verification code in a custom JUnit Runner?
Or is there even another way to do this? (simply create some utils?)
The goal is to skip the tests if some preconditions are not met, since it is obvious the tests will fail. I know this indicates bad exception handling, but there is a lot of legacy code I can't change all at once.
I tried to find some articles myself, but it seems the search terms ("test", "external system" and so on) are a little unrewarding.
Thanks!
Consider using org.junit.Assume.* as described here. When assumptions fail, your tests are ignored by default. You can write a custom runner to do something else when your assumptions fail.
The key thing though is that the tests don't fail when assumptions like the availability of your external services fail.
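A small sketch of the Assume approach (ExternalSystemClient.ping() stands for whatever availability check fits your setup and is a hypothetical helper):

    import static org.junit.Assume.assumeTrue;

    import org.junit.Test;

    public class ExternalSystemIT {

        @Test
        public void exchangesMessagesWithTheExternalSystem() {
            // assumption failure => the test is marked ignored, not failed
            assumeTrue(ExternalSystemClient.ping());   // hypothetical availability check
            // ... the actual integration test ...
        }
    }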
If the ping applies to every single test in the class, I would put the ping call in the @Before method. @Before gets executed before every single test method (i.e., every @Test-annotated method).
If the ping does not apply to all tests in the class, then you would have to hit ping explicitly from those methods.
There seem to be two trends on this topic:
Some answers (such as this one) suggest unit tests should not log anything.
Some questions and answers (such as this one) suggest different logging techniques and formats used in unit tests.
Should unit tests log what they do? Would that additional information be helpful in the unit test reports? Or should unit tests be silent as long as they don't fail?
My question targets Java unit tests, but input from programmers in other languages could be interesting as well.
Unit tests really should be so simple and focused that a failing test already documents what went wrong. You shouldn't need to read through logs for a test case to find that out.
However, it is a good idea to log the total results of a large suite of automated tests so that you don't have to trawl through all the tests to find the ones that failed. It is nice to see a summary at the end that you can focus on.
This is obviously a bit subjective, but I don't see why you would disable logging in your unit tests.
I do think you're misinterpreting the first linked post; the poster is not claiming that you shouldn't log anything, he's saying that a pass/fail should not be something that is just in the logs. It should be returned to the testing framework. It should be a piece of data that is completely separate from the normal logs.
I agree with him on that.
Apart from that you could still have your normal logging. You have it anyway in the classes you're testing (or should have). When a test fails, you might see something in the log which will help you debug it more quickly. I don't see how this could ever be a negative point.
I've used both quiet and verbose logging during unit tests, and I personally prefer it when each test outputs a single line with test name and how it went. I find it more appealing when I can tell what is going on, though I can't say it has any real impact on my work.
If you run from console, colored output is a plus, I think.
I believe, there should be no logging in unit tests, but only assertions. The main reason is that logging hides important information, which is visible in logs only to their author. Here is my blog post about this: Logging in Unit Tests, a Bad Practice.
Normally when you write tests, one of the first things you learn is that a unit test should not connect to anything: database, filesystem, internet. A unit test should be blazing fast and work regardless of the environment you run it in. If it connects to something, it's an integration test.
I'd argue that using a logging framework which may significantly reduce the speed of unit tests goes against the philosophy of unit testing. The whole idea is that you can run thousands, even tens of thousands, of tests at every whim. The ideal would be to have your unit test suite plugged into your save button (not really plausible, but you get my drift).
A logging framework is pretty useless if it doesn't let you enable/disable the logs. So feel free to add logs; just make sure you can enable/disable them separately from everything else.
Tests have assertions. If something is missing in assertions, add more assertions. Logs serve for investigating an error after the fact. But when you do unit testing, you have all the power to automate the search for errors.
If you want to look into logs, it means you don't have enough test cases and checks. Add them.
I have a couple of design/architectural questions that always come up in our shop. I said "our", as opposed to "me" personally. Some of the decisions were made when J2EE was first introduced, so there are some bad design choices and some good ones.
In a web environment, how do you work with filters? When should you use J2EE filters and when shouldn't you? Is it OK to have many filters, especially if they contain too much logic? For example, there is a lot of logic in our authentication process: if you are this user, go to this site; if not, go to another one. It is difficult to debug because one URL path could end up rendering different target pages.
Property resource bundle files for replacement values in JSP files: the consensus in the Java community seems to be to use bundle files that contain the labels and titles that get substituted when a JSP is rendered. I can see the benefit if you are developing for many different languages and switching the label values based on locale. But what if you aren't working with multiple languages? Should every piece of static text in a JSP file or other template file really have to be put into a property file? Once again, we run into debugging issues where text doesn't show up because of misspelled property keys or corrupt property files. Also, we have a process where graphic designers send us HTML templates and we convert them to JSP. It seems more confusing to then remove the static text, add a key, add the key/value pair to a property file, and so on.
E.g. a labels.properties file may contain the "Username:" label; in the JSP the static text is replaced by the key, and the value is rendered to the user.
Unit testing for all J2EE development: we don't encourage unit testing. Some people do, but I have never worked at a shop that uses extensive unit testing. One place did, and then when crunch time hit we stopped writing unit tests; after a while the unit tests were useless and wouldn't even compile. Most of the development I have done has been with servers, web application development, and database connectivity. I see how unit testing can be cumbersome because you need an environment to unit test against. I think unit test manifestos encourage developers not to actually connect to external sources. But it seems like a major portion of the testing should be connecting to a database and running all of the code, not just a particular unit. So that is my question: for all types of development (like CRUD-oriented J2EE development), should we write unit tests in all cases? And if we don't write unit tests, what other developer testing mechanisms could we use?
Edited: Here are some good resources on some of these topics.
http://www.ibm.com/developerworks/java/library/j-diag1105.html
Redirection is a simpler way to handle different pages depending on role. The filter could be used simply for authentication, to get the User object and any associated Roles into the session.
As James Black said, if you had a central controller you could obviate the need to put this logic in the filters. To do this you'd map the central controller to all urls (or all non-static urls). Then the filter passes a User and Roles to the central controller which decides where to send the user. If the user tries to access a URL he doesn't have permission for, this controller can decide what to do about it.
Most major MVC web frameworks follow this pattern, so just check them out for a better understanding of this.
I agree with James here, too - you don't have to move everything there, but it can make things simpler in the future. Personally, I think you often have to trade this one off in order to work efficiently with designers. I've often put the infrastructure and logic in to make it work, but then littered my templates with static text while working with designers. Finally, I went back and pulled all the static text out into the external files. Sure enough, I found some spelling mistakes that way!
Testing - this is the big one. In my experience, a highly disciplined test-first approach can eliminate 90% of the stress in developing these apps. But unit tests are not quite enough.
I use three kinds of tests, as indicated by the Agile community:
acceptance/functional tests - customer defines these with each requirement and we don't ship til they all pass (look at FitNesse, Selenium, Mercury)
integration tests - ensure that the logic is correct and that issues don't come up across tiers or with realistic data (look at Cactus, DBUnit, Canoo WebTest)
unit tests - both defines the usage and expectations of a class and provides assurance that breaking changes will be caught quickly (look at JUnit, TestNG)
So you see that unit testing is really for the benefit of the developers... if there are five of us working on the project, not writing unit tests leads to one of two things:
an explosion of necessary communication as developers try and figure out how to use (or how somebody broke) each other's classes
no communication and increased risk due to "silos" - areas where only one developer touches the code and in which the company is entirely reliant on that developer
Even if it's just me, it's too easy to forget why I put that little piece of special case logic in the class six months ago. Then I break my own code and have to figure out how... it's a big waste of time and does nothing to reduce my stress level! Also, if you force yourself to think through (and type) the test for each significant function in your class, and figure out how to isolate any external resources so you can pass in a mock version, your design improves immeasurably. So I tend to work test-first regardless.
Arguably the most useful, but least often done, is automated acceptance testing. This is what ensures that the developers have understood what the customer was asking for. Sometimes this is left to QA, and I think that's fine, but the ideal situation is one in which these are an integral part of the development process.
The way this works is: for each requirement the test plan is turned into a script which is added to the test suite. Then you watch it fail. Then you write code to make it pass. Thus, if a coder is working on changes and is ready to check in, they have to do a clean build and run all the acceptance tests. If any fail, fix before you can check in.
"Continuous integration" is simply the process of automating this step - when anyone checks code in, a separate server checks out the code and runs all the tests. If any are broken it spams the last developer to check in until they are fixed.
I once consulted with a team that had a single tester. This guy was working through the test plans manually, all day long. When a change took place, however minor, he would have to start over. I built them a spreadsheet indicating that there were over 16 million possible paths through just a single screen, and they ponied up the $10k for Mercury Test Director in a hurry! Now he makes spreadsheets and automates the test plans that use them, so they have pretty thorough regression testing without ever-increasing QA time demands.
Once you've begun automating tests at every layer of your app (especially if you work test-first) a remarkable thing happens. Worry disappears!
So, no, it's not necessary. But if you find yourself worrying about technical debt, about the big deployment this weekend, or about whether you're going to break things while trying to quickly change to meet the suddenly-urgent customer requirements, you may want to more deeply investigate test-first development.
Filters are useful for pulling out logic such as "is the user authenticated?", since you don't want that logic in every page.
Since you don't have a central controller it sounds like your filters are serving this function, which is fine, but, as you mentioned, it does make debugging harder.
This is where unit tests can come in handy, as you can test different situations with each filter individually, then with all the filters in a chain, outside of your container, to ensure everything works properly.
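As a sketch, a single filter can be exercised with mocked request/response objects (AuthenticationFilter is a hypothetical filter that redirects anonymous users; Mockito is used for the mocks):

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.never;
    import static org.mockito.Mockito.verify;
    import static org.mockito.Mockito.when;

    import javax.servlet.FilterChain;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    import org.junit.Test;

    public class AuthenticationFilterTest {

        @Test
        public void anonymousUserIsRedirectedToTheLoginPage() throws Exception {
            HttpServletRequest request = mock(HttpServletRequest.class);
            HttpServletResponse response = mock(HttpServletResponse.class);
            FilterChain chain = mock(FilterChain.class);
            when(request.getSession(false)).thenReturn(null);   // no session => not logged in

            new AuthenticationFilter().doFilter(request, response, chain);

            verify(response).sendRedirect("/login.jsp");
            verify(chain, never()).doFilter(request, response);  // request is not passed on
        }
    }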
Unit testing does require discipline, but if the rule is that nothing goes to QA without a unit test, then it may help, and there are many tools to help generate tests so you just have to write the test itself. Before you debug, write or update the unit test and show that it fails, so the problem is reproduced.
This ensures that the error won't return, that you actually fixed it, and that you have an updated unit test.
For resource bundles: if you are certain you will never support another language, then as you refactor you can remove the need for the bundles, but I think it is easier to make spelling/grammar corrections if the text is all in one place.
Filters in general are expected to perform smaller units of functionality, and filter chaining is used to apply them as needed. In your case, maybe a refactoring can help move some of the logic out into additional filters, and the redirecting logic can be centralized in a controller so it is easier to debug and understand.
Resource bundles are necessary to maintain flexibility, but if you know absolutely that the site is going to be used in a single locale, then you might skip them. Maybe you can move some of the work of maintaining the bundles to the designers, i.e. let them have access to the resource bundles, so that you get the HTML with the keys already in place.
Unit testing is much easier to implement at the beginning of a project than to build into an existing product. For existing software, you may still implement unit tests for new features. However, it requires a certain amount of insistence from team leads, and the team needs to buy into the necessity of having unit tests. Code review of unit tests helps, and a decision on which parts of the code absolutely need to be covered can guide developers. Tools/plugins like Coverlipse can indicate the unit testing coverage, but they tend to look at every possible code path, some of which may be trivial.
At one of my earlier projects, unit tests were compulsory and would be automatically kicked off after each check-in. However, this was not test-driven development, as the tests were mostly written after the small chunks of code. TDD can result in developers writing code just to satisfy the unit tests, and as a result developers can lose the big picture of the component they are developing.
In a web environment, how do you work with filters? When should you use J2EE filters and when shouldn't you?
Filters are meant to steer/modify/intercept the actual requests/responses/sessions. For example: setting the request encoding, determining the logged-in user, wrapping/replacing the request or response, determining which servlet it should forward the request to, and so on.
To control the actual user input (parameters) and output (the results and the destination) and to execute actual business logic, you should use a servlet.
Property resource bundle files for replacement values in JSP files.
If you don't do i18n, just don't use them. But if you ever grow and the customer/users want i18n, then you'll be happy that you're already prepared. And not only that, it also simplifies the use of a CMS to edit the content by just using the java.util.Properties API.
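On the Java side, that lookup is just the standard ResourceBundle mechanism; a tiny sketch (the bundle name and key are illustrative):

    import java.util.Locale;
    import java.util.ResourceBundle;

    public class Labels {

        // reads labels.properties (or labels_de.properties, ...) from the classpath
        public static String get(String key, Locale locale) {
            return ResourceBundle.getBundle("labels", locale).getString(key);
        }
    }

For example, Labels.get("login.username", request.getLocale()) would return the "Username:" text from the bundle matching the user's locale.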
Unit Testing for all J2EE development
JUnit can take care of that. You can also consider "officially" doing user tests only: create several use cases and test them.