So I got a task where I have to verify the REST response of a remote microservice.
So far, I have built an app that calls the given REST endpoint, and now I have to verify the result.
My question is, where should I write the tests? If I write them in the test scope with JUnit/TestNG, they should be unit tests with mocked objects, right? But in that case, I couldn't test the remote service correctly.
Is it a bad pattern to call a real service from the test scope using JUnit?
Thanks!
It really depends on the trade-offs you need to make, and it can be hard to reach a reliable state. For instance, if you want to test in QA but other teams have a different philosophy about what QA is and use it like a local development environment, their services will probably fail more often than they otherwise would. In that case, when you run integration tests against real services that other teams own, your tests may not be very indicative of whether your service ran correctly: a failing service dependency can be what trips the failures.
In that scenario, I would mock out downstream services that your team does not own, because they are out of your control. You can do this by spinning up a stub web service that responds with JSON or XML payloads (or whatever format the service you're consuming responds with), or by mocking at the request-client level, i.e. the lowest level of the library that sends the request.
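For illustration, here is a minimal sketch of the first option (spinning up a stub web service) using the WireMock library; the port, endpoint path, and payload are assumptions made up for the example, not taken from the question.

import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;
import com.github.tomakehurst.wiremock.WireMockServer;

public class DownstreamAccountStub {

    public static WireMockServer start() {
        // Stand-in for the downstream service your team does not own.
        WireMockServer server = new WireMockServer(8089);
        server.start();
        server.stubFor(get(urlEqualTo("/accounts/42"))
                .willReturn(aResponse()
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"id\": 42, \"status\": \"ACTIVE\"}")));
        return server;
    }
}

Your service's client configuration would then point at http://localhost:8089 for the duration of the test, and the stub is stopped when the test finishes.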
As a result, requests to your service go through all the layers of logic in your service, unmocked, down to the lowest part your team controls. It is essentially black-box testing where the service dependencies are isolated, on the assumption that the other team properly versions their services or provides backwards compatibility. This type of testing is the least costly to maintain if you have ever seen the testing pyramid; the gist of the pyramid is that the more integrative your tests are, the costlier they are to build and maintain.
If you want even more guarantees, you can combine this with contract-driven tests: essentially another test suite that exercises the downstream services and makes sure they still adhere to the payload schema your service depends on. This way you decouple the two sets of tests without having to worry about all the logic that comes with them, and failures are more transparent: you know whether your service failed or the service dependency did.
That would be my strategy, and I would advocate using an acceptance testing framework like Cucumber to actually do the job. There are many variants of Cucumber, but essentially you write behavior-driven tests (that even the product owner can read) that are parsed by a language of your choice. See this link for details.
I have a multi-module application. To be more explicit, these are Maven modules where high-level modules depend on low-level modules.
Below are some of the modules:
user-management
common-services
utils
emails
For example, if the user-management module wants to use any services from the utils module, it can call them, as utils is already injected as a dependency of user-management.
To convert my project to a truly microservices architecture, I believe I need to convert each module into an independently deployable service, where each module is a WAR
and provides its services over HTTP (mainly as RESTful web services). Is that correct, or does anything else need to be taken care of as well?
Probably each module now has to be secured and have an authentication layer as well?
If that's the crux of microservices, I really don't understand why someone would ask whether you have worked on microservices, because to me it's not a tool/platform/framework but simply the
concept of dividing your monolithic application into a smaller set of deployable modules whose services are available over HTTP. Isn't it? Maybe it's just another buzzword.
Update:
Obviously there are advantages to going the microservices way, like independently unit-testable modules, scalability (each can be deployed on a separate machine), loose coupling, etc., but I see that I also need to handle two complex concerns:
Authentication: for each module I need to ensure it authenticates the request, which is not the case right now.
Transaction: I cannot maintain transaction atomicity across different services, which I can do very easily at present.
What you have understood is essentially right, and you have found exactly where microservices get more complex than monoliths (distributed transactions), but let me clear up some points about microservices.
Microservices don't have to be independent services exposed over HTTP: a microservice can communicate with other services either synchronously or asynchronously. REST is one solution, applicable for synchronous communication, but you can also communicate asynchronously, e.g. message-driven using Kafka or HornetQ; for synchronous communication an underlying service could also be called over the Thrift protocol.
Microservices follow the SRP: the beauty of microservices is that each service concentrates on only one business domain use case, so it handles only one domain object's functionality. A utils module, however, contains common methods, so every microservice would depend on it; even a small change in utils would force a rebuild of all the other microservices, which violates microservice principles. So dissolve the utils service and make it local to each service.
Handling authentication: to be specific, a microservice can be one of three types:
a. Core service: just performs a domain operation (like account creation/update/deletion).
b. Aggregate service: calls one or more core services, gathers the results, and performs some operation on them.
c. Edge service: exposed to a client (like mobile/browser etc.). We sometimes call it a gateway service; the crux of this service is to take a user request and, based on the URL, forward it to the actual microservice. So it is the ideal place to put authentication if it is common to all microservices.
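As a hedged illustration of point (c), here is a minimal sketch of authentication at the edge/gateway: a plain servlet filter rejects unauthenticated requests before anything is forwarded to the internal services. The header name and the token check are assumptions for the example.

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class EdgeAuthFilter implements Filter {

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        String token = request.getHeader("Authorization");
        if (token == null || !isValid(token)) {
            // Reject here so the core/aggregate services never see unauthenticated traffic.
            ((HttpServletResponse) res).sendError(HttpServletResponse.SC_UNAUTHORIZED);
            return;
        }
        chain.doFilter(req, res); // authenticated: forward to the target microservice
    }

    private boolean isValid(String token) {
        return token.startsWith("Bearer "); // placeholder check; verify a real token here
    }

    public void init(FilterConfig config) { }
    public void destroy() { }
}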
Handling distributed transactions: yes, this is the hardest part of microservices, but you can achieve it in an event-driven/message-driven way. Every action publishes an event; a subscriber of that event receives it, performs some operation, and generates another event. In case of failure it generates a reverse event which compensates for the first event.
For example, say in microservice A we debit 100 rupees, so we create an AccountDebited event. In microservice B we then try to credit the account. If it succeeds, we create AccountCredited, which is received by A and produces a final AmountTransferred event. In case of failure we generate an AccountCreditedFailed event, which is received by A and triggers a reverse event - AccountSpecialCredit - which restores atomicity.
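A minimal sketch of that compensation flow, using the event names from the example above; the record types and the publish mechanism are placeholders (in practice the events would travel over Kafka or another broker).

record AccountDebited(String transferId, long amount) { }
record AccountCredited(String transferId, long amount) { }
record AccountCreditedFailed(String transferId, long amount) { }
record AmountTransferred(String transferId) { }
record AccountSpecialCredit(String transferId, long amount) { }

// In microservice A: react to the outcome published by microservice B.
class TransferSaga {

    void on(AccountCredited event) {
        // Both the debit and the credit succeeded: close the saga.
        publish(new AmountTransferred(event.transferId()));
    }

    void on(AccountCreditedFailed event) {
        // The credit failed: compensate the earlier debit to restore atomicity.
        publish(new AccountSpecialCredit(event.transferId(), event.amount()));
    }

    void publish(Object event) {
        // placeholder: hand the event to the message broker
    }
}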
What you have is mostly correct, but you appear to be considering some things as requirements when they are not, and you are forgetting one very important characteristic that microservices are supposed to have.
The main characteristics of microservices are statelessness and independence. Whether they are "WAR" modules and whether they provide their services over "HTTP" (and certainly whether they are RESTful) are secondary concerns and you may hear arguments to the contrary.
Statelessness means that no individual microservice may contain state. (Except for caches.) Microservices are supposed to always delegate the task of persisting data to some database module so they don't keep any state in memory. The idea is that this way, if one microservice fails, (or if an entire machine containing many microservices fails,) you can just route incoming requests to another instance (or another machine) and everything will continue working.
(Of course, if you want my opinion, it is just a cowardly acknowledgement of the fact that we don't know how to write reliable highly concurrent software, but the database guys are smart and they seem to have figured it all out, so we will just delegate the problem of maintaining our state to the software that they have written.)
In my opinion, microservice architecture marries well with DDD.
I think you should consider your multi-module project as a "monolith" and do your microservice separation based on domain concepts and not on maven projects.
Ex: Do not create a microservice called "utils" but rather a microservice called "accounts" or "user-management" or whatever your domain is. I think without domain-driven design it kind of loses its usefulness.
It is really easy afterwards to work on different aspects of the domain, knowing that each is separated from the rest. You should also check out hexagonal architecture by Alistair Cockburn.
How you split your application depends on the type of modules you have. If a module contains business logic, then it makes sense to create a new service and communicate via HTTP or messaging. On the other hand, if a module has no business logic, but just a set of helper functions, it might be better to extract it into a separate public/private Maven package and just use it as a dependency.
Yes, microservices is a buzzword that only recently became popular, but the concept has been around for a while. It also comes with more than that: while it gives you the benefits of scaling and independent service deployments, it comes at the price of the complexity of managing and orchestrating a large number of services.
For example, in a monolithic application, when you call a function from another module you know for sure it is always available. With microservices, some services might go down because of a disruption or a deployment, in which case you have to think about handling these situations gracefully (for example, by applying the circuit breaker pattern).
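As an illustration, here is a minimal circuit breaker sketch using the Resilience4j library; the service name, thresholds, and fallback value are assumptions, and the same idea can also be hand-rolled or done with another library.

import io.github.resilience4j.circuitbreaker.CallNotPermittedException;
import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.circuitbreaker.CircuitBreakerConfig;
import java.time.Duration;

public class UserServiceClient {

    private final CircuitBreaker breaker = CircuitBreaker.of("user-service",
            CircuitBreakerConfig.custom()
                    .failureRateThreshold(50)                        // open after 50% of calls fail
                    .waitDurationInOpenState(Duration.ofSeconds(30)) // wait before probing again
                    .build());

    public String fetchUserName(String id) {
        try {
            return breaker.executeSupplier(() -> callUserServiceOverHttp(id));
        } catch (CallNotPermittedException e) {
            // Breaker is open: fail fast with a fallback instead of piling up on a dead service.
            return "unknown";
        }
    }

    private String callUserServiceOverHttp(String id) {
        // placeholder for the real HTTP call to the user service
        return "alice";
    }
}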
There are many other things to consider when doing microservices and there are many literature available on this topic. I read Microservices: From Design to Deployment from Nginx. It's short and sweet.
So when people ask "Have you worked with microservices before?", I guess they want to know whether you are familiar with, and have some experience of, all the pitfalls of this concept.
In one way you are correct: from the outside, microservices do look like this. It is when you go inside that you meet the two complex concerns you rightly mention:
Authentication: for each module I need to ensure it authenticates the request, which is not the case right now.
Transaction: I cannot maintain transaction atomicity across different services, which I can do very easily at present.
Apart from this, there are various things one needs to understand, otherwise building and deploying microservices will be very tough:
I am mentioning some of them here; the complete list you can see in my post:
What exactly is a microservice? Some say it should not exceed 1,000 lines of code. Some say it should fit one bounded context (if you don't know what a bounded context is, don't bother with it right now; keep reading).
Even before deciding on what the "micro"service will be, what exactly is a service?
Microservices do not allow updating multiple entities at once; how will I maintain consistency between entities?
Should I have a single database cluster for all my microservices?
What is this eventual consistency thing everyone is talking about?
How will I collate data which is composed of multiple entities residing in different services?
What would happen if one service goes down? How would the dependent services behave?
Should I make a sync invocation between microservices to always get consistent data?
How will I manage version upgrades to a few or all microservices? Is it always possible to do it without downtime?
And the last unavoidable question - how do I test the entire application as an integrated application?
How do I do circuit breaking? (If one service is down, it should not impact the others.)
CI/CD pipelines, and so on...
What we understood while starting our journey with microservices was:
The design pattern for breaking a business problem into microservices is Domain-Driven Design.
A platform that supports microservices development (we used Lagom for this) addresses some of the above concerns out of the box.
So, all in all, while moving towards a multi-process architecture communicating over REST or some other method, new considerations need to be taken care of that are not directly visible in a monolith, and people want to know whether you are aware of those considerations or not.
Today I am beginning a project where I must split a backend into two pieces. I don't know exactly what the backend does, only that I have to move specific services into a new Spring, Jersey, Maven multi-module project.
So, the move itself was really easy, and now it's time to write tests. Previously, the project did not have any tests.
When I started to write JUnit tests for my business objects, I saw that most services only perform a basic operation with a DAO, like getAll, get, save, update and delete. The other services have some business validation, but it is not complex.
So the questions are more theoretical:
Should I write JUnit tests for a simple DAO get where the DAO is mocked (no integration test)? What would be the benefit of doing this?
What would be the correct way to write an integration test for a simple DAO get, getAll, or create (a create that does not have any validation to execute beforehand)?
Unit and integration tests are for developers, so if you don't feel that some developer will benefit from a test, don't write it. Also consider that tests verify behavior, not code, so if you don't see any behavior that needs testing, don't spend the time.
In your case I would at most write integration tests for services, and maybe for DAOs (if there is no ORM).
Anyway, the correct answer to your question depends on the level of quality you need to provide for your project, the team size, the potential for destructive code changes, etc.
Two examples:
1) A small site's admin panel in CRUD style, only one developer introduces changes at a time, and almost no business logic is present. The presence of bugs is non-critical.
In this case I would not spend time on any tests; most likely you need to focus on other things (e.g. the client side).
2) You are starting a complex project that is currently in CRUD style, but interactions between distant services/DAOs are present, and business logic tends to become complex over time. The team is rapidly growing/changing, more than one person is involved, and new developers can't easily understand how the system works. Bugs are bad for business.
In this case I would at least start with integration tests for services.
I have a service which works as a mediator between two other services. It basically validates the inputs, then passes them to the two services sequentially (while trying to keep transactional integrity), and then, if everything goes well, saves a result to the database.
My problem now is to test this service in isolation. Of course, I can provide stubs to satisfy the dependencies. I can also test the validation of inputs, whether appropriate data is saved in the DB in a normal case, as well as whether transactional integrity is kept if any of the dependencies throws an exception.
Yet, this is only half of what the service really does. My dilemma is whether I should also try to prove that the two dependency services actually processed the data appropriately. The scope of my service is quite broad, so I guess it is better to also know if the dependency services did their job well. Yet, this goes beyond the scope of unit testing and moves into integration testing, right?
I am kind of confused here.
If you're asking about unit-testing, then the way to do it is to test the class in isolation using mocks or stubs.
BUT, if you feel that just doing that is not enough, you can write some component tests, where you use all the real classes you want to test, use a stub (or in-memory) database, and mock the dependencies you consider unimportant for what you are trying to test.
In the past, I've tested small clusters of classes that had a lot of interaction between them in this way (sometimes skipping unit tests for those classes, as the component tests covered all the scenarios). Obviously, the problem with doing this is that the number of scenarios grows almost exponentially with the number of classes you're testing. Maybe you can test the bridge and the two real classes that use that bridge.
You should do both.
For unit testing, definitely use mock objects for dependencies, preferably using a tool like EasyMock. As a side note, if you feel that the functionality of your mediator service is too broad for unit testing, you may want to consider breaking it down into smaller pieces.
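For example, a hedged sketch with EasyMock; the collaborator interfaces ServiceA, ServiceB, ResultRepository and the MediatorService constructor are assumptions about your code, not taken from the question.

import static org.easymock.EasyMock.createMock;
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;
import static org.easymock.EasyMock.verify;
import org.junit.Test;

public class MediatorServiceTest {

    @Test
    public void passesValidatedInputThroughBothServicesAndSavesTheResult() {
        ServiceA serviceA = createMock(ServiceA.class);
        ServiceB serviceB = createMock(ServiceB.class);
        ResultRepository repository = createMock(ResultRepository.class);

        expect(serviceA.process("input")).andReturn("a-result");
        expect(serviceB.process("a-result")).andReturn("b-result");
        repository.save("b-result");
        replay(serviceA, serviceB, repository);

        new MediatorService(serviceA, serviceB, repository).handle("input");

        // Fails if the mediator skipped a dependency or called it with the wrong data.
        verify(serviceA, serviceB, repository);
    }
}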
Of course, you additionally should do integration testing as well, using real dependencies, to make sure your services work together as intended.
First-time poster and TDD adopter here. :-) I'll be a bit verbose, so please bear with me.
I've recently started developing SOAP-based web services using the Apache CXF framework, Spring and Commons Chain for implementing the business flow. The problem I'm facing is with testing the web services -- testing as in unit testing and functional testing.
My first attempt at unit testing was a complete failure. To keep the unit tests flexible, I used a Spring XML file to keep my test data in. Also, instead of creating instances of the "components" to be tested, I retrieved them from my Spring application context. The XML files which harbored the data quickly got out of hand; creating object graphs in XML turned out to be a nightmare. Since the "components" to be tested were picked from the Spring application context, each test run loaded all the components involved in my application, the DAO objects used, etc. Also, as opposed to the idea of unit tests being focused on testing only the component, my unit tests started hitting databases, communicating with mail servers, etc. Bad, really bad.
I knew what I had done wrong and started to think of ways to rectify it. Following advice from one of the posts on this board, I looked into Mockito, the Java mocking framework, so that I could do away with using real DAO classes and mail servers and just mock the functionality.
With unit tests a bit under control, this brings me to my second problem: the dependence on data. The web services I have been developing have very little logic but rely heavily on data. As an example, consider one of my components:
public class PaymentScheduleRetrievalComponent implements Command {

    // billingDAO is assumed to be injected (e.g. via Spring); the field was omitted in the original snippet.
    private BillingDAO billingDAO;

    public boolean execute(Context ctx) {
        Policy policy = (Policy) ctx.get("POLICY");
        List<PaymentSchedule> list = billingDAO.getPaymentStatementForPolicy(policy);
        ctx.put("PAYMENT_SCHEDULE_LIST", list);
        return false; // false tells Commons Chain to continue with the next command in the chain
    }
}
A majority of my components follow the same route -- pick a domain object from the context, hit the DAO [we are using iBatis as the SQL mapper here] and retrieve the result.
So, now the questions:
- How are DAO classes tested, especially when a single insertion or update might leave the database in an "unstable" state [in cases where, let's say, 3 insertions into different tables actually form a single transaction]?
- What is the de facto standard for functional testing of web services which move around a lot of data, i.e. mindless insertions/retrievals from the data store?
Your personal experiences/comments would be greatly appreciated. Please let me know in case I've missed out some details on my part in explaining the problem at hand.
-sasuke
I would stay well away from the "Context as global hashmap" 'pattern' if I were you.
Looks like you are testing your persistence mapping...
You might want to take a look at: testing persistent objects without spring
I would recommend running your unit tests against an in-memory database such as HSQLDB. You can use it to create your schema on the fly (for example, if you are using Hibernate, you can use your XML mapping files), then insert/update/delete as required before destroying the database at the end of your unit test. At no time will your test interfere with your actual database.
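A minimal sketch of that approach with plain JDBC and HSQLDB; the table and class names are made up for illustration.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class PaymentScheduleDaoTest {

    public void runAgainstInMemoryDb() throws Exception {
        Class.forName("org.hsqldb.jdbcDriver");
        Connection conn = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "");

        try (Statement st = conn.createStatement()) {
            // Create the schema on the fly and seed the rows the DAO under test expects.
            st.execute("CREATE TABLE payment_schedule (id INT PRIMARY KEY, policy_id INT, amount DECIMAL(10,2))");
            st.execute("INSERT INTO payment_schedule VALUES (1, 42, 99.50)");
        }

        // ... exercise the DAO against conn and assert on the results ...

        try (Statement st = conn.createStatement()) {
            st.execute("SHUTDOWN"); // destroys the in-memory database at the end of the test
        }
    }
}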
For your second problem (end-to-end testing of web services), I have successfully unit tested CXF-based services in the past. The trick is to publish your web service using a lightweight web server at the beginning of your test (Jetty is ideal), then use CXF to point a client at your web service endpoint, run your calls, and finally shut down the Jetty instance hosting the web service once your test has completed.
To achieve this, you can use the JaxWsServerFactoryBean (server-side) and JaxWsProxyFactoryBean (client-side) classes provided with CXF; see this page for sample code:
http://cwiki.apache.org/CXF20DOC/a-simple-jax-ws-service.html#AsimpleJAX-WSservice-Publishingyourservice
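As a rough sketch of that pattern (the PaymentService interface and PaymentServiceImpl class are placeholders; publishing on an http://localhost address uses CXF's embedded Jetty transport under the hood):

import org.apache.cxf.endpoint.Server;
import org.apache.cxf.jaxws.JaxWsProxyFactoryBean;
import org.apache.cxf.jaxws.JaxWsServerFactoryBean;

public class PaymentServiceEndToEndTest {

    public void roundTripThroughRealSoapEndpoint() {
        // Publish the service on a local endpoint for the duration of the test.
        JaxWsServerFactoryBean serverFactory = new JaxWsServerFactoryBean();
        serverFactory.setServiceClass(PaymentService.class);
        serverFactory.setServiceBean(new PaymentServiceImpl());
        serverFactory.setAddress("http://localhost:9000/payment");
        Server server = serverFactory.create();

        try {
            // Point a CXF client proxy at the same endpoint and make real SOAP calls.
            JaxWsProxyFactoryBean clientFactory = new JaxWsProxyFactoryBean();
            clientFactory.setServiceClass(PaymentService.class);
            clientFactory.setAddress("http://localhost:9000/payment");
            PaymentService client = (PaymentService) clientFactory.create();

            // ... call client methods and assert on the responses ...
        } finally {
            server.stop(); // shut down the embedded endpoint once the test completes
        }
    }
}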
I would also give a big thumbs up to SoapUI for doing functional testing of your web service. JMeter is also extremely useful for stress testing web services, which is particularly important for those services doing database lookups.
First of all: Is there a reason you have to retrieve the subject under test (SUT) from the Spring Application context? For efficient unit testing you should be able to create the SUT without the context. It sounds like you have some hidden dependencies somewhere. That might be the root of some of your headache.
How are DAO classes tested, especially when a single insertion or update might leave the database in an "unstable" state [in cases where, let's say, 3 insertions into different tables actually form a single transaction]?
It seems you are worried about the database's consistency after you have run the tests. If possible, use your own database for testing, where you don't need to care about that. If you have such a sandbox database you can delete data as you wish. In that case I would do the following:
Flag all your fake data with some common identifier, like putting a special prefix in a field.
Before running the test, issue a delete statement that removes the flagged data. If there is none, nothing bad happens.
Run your single DAO test. After that, repeat step 2 for the next test.
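A small sketch of steps 1-2; the table, column, and prefix are placeholders for whatever your schema uses.

import java.sql.Connection;
import java.sql.Statement;

public class DaoTestDataHelper {

    // Step 2: clear any leftover flagged rows from earlier runs before the test inserts its own data.
    public static void deleteFlaggedTestData(Connection connection) throws Exception {
        try (Statement st = connection.createStatement()) {
            st.executeUpdate("DELETE FROM payment_schedule WHERE reference LIKE 'TEST_%'");
        }
    }

    // Step 1: every fake row the test creates carries the recognizable 'TEST_' prefix.
    public static String flagged(String reference) {
        return "TEST_" + reference;
    }
}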
What is the de facto standard for functional testing of web services which move around a lot of data, i.e. mindless insertions/retrievals from the data store?
I am not aware of any. From the question you are asking I can infer that you have the web service on one side and the database on the other. Split up the responsibilities and have separate test suites for each side: one just testing database access (as described above), the other just testing web service requests and responses. In the latter case it pays off to stub/fake/mock the layer that talks to the network. Or consider https://wsunit.dev.java.net/.
If the program only shoves data in and out, I think there is not much behavior. If that is the case, then the hardest work is unit testing the database side and the web service side. The point is that you can do unit testing without the need for "realistic" data. For functional testing you will need hand-rolled data that is close to reality. This might be cumbersome, but if you have already unit tested the database and web service parts intensively, it should reduce the need for "realistic" test cases considerably.
First of all, let's make things clear.
In an ideal world the lifecycle of the software you are building is something like this:
- somebody writes up requirements with the customer, so you get a user story with examples of how the application should work
- you generalize the user story, so you get rules, which you call use cases
- you start to write a piece of functional (end-to-end) test, and it fails...
- after that you build the UI and mock out the services, so you get a green functional test and a specification of how your services should work...
- your job is to keep the functional test green, and implement the services step by step, writing integration tests and mocking out dependencies with the same approach until you reach the level of unit tests
- after that you do the next iteration of the use cases, write the next piece of functional test, and so on until the end of the project
- after that you run acceptance tests with the customer, who accepts the product and pays a lot
So what do we learn from this?
There are many types of tests (don't confuse them with each other):
functional tests - for testing the use cases (mock out nothing)
integration tests - for testing application, component, module, class interactions (mock out the irrelevant components)
unit tests - for testing a single class in isolation from its environment (mock out everything)
user acceptance tests - the customer makes sure that she accepts the product (manual functional tests, or a demonstration built from the automated functional tests in action)
You don't need to test everything with functional and integration tests, because that is impossible. Test only the relevant parts with functional and integration tests, and test everything with unit tests! Familiarize yourself with the testing pyramid.
Use TDD, it makes life easier!
How are DAO classes tested, especially when a single insertion or update might leave the database in an "unstable" state [in cases where, let's say, 3 insertions into different tables actually form a single transaction]?
You don't have to test your database transactions. Assume that they work, because the database developers have already tested them, and I am sure you don't want to write concurrency tests... The database is an external component, so you don't have to test it yourself. You can write a data access layer that adapts the data storage to your system, and write integration tests only for those adapters. In case of a database migration these tests will work against the adapters for the new database as well, because you write them against a specific interface... In every other kind of test (except functional tests) you can mock out your data access layer. Do the same with every other external component: write adapters and mock them out. Put this kind of integration test in a different test suite from the other tests, because they are slow due to database access, filesystem access, etc.
What is the de facto standard for functional testing of web services which move around a lot of data, i.e. mindless insertions/retrievals from the data store?
You can mock out your data store with an in-memory database that implements the same storage adapters, until you have implemented everything else except the real database. After that you implement the data access layer for the database and run your functional tests against it as well. It will be slow, but it only has to run occasionally, for example for every new release... If you need functional tests during development, you can mock the store out with the in-memory solution again... An alternative approach is to run only the affected functional tests during development, or to tune the settings of the test database to make things faster, and so on... I am sure there are many test optimization solutions...
I must say I don't really understand your exact problem. Is the problem that your database is left in an altered state after you've run the test?
Yes, there are actually two issues here. The first is the problem of the database being left in an inconsistent state after running the test cases. The second is that I'm looking for an elegant solution for end-to-end testing of web services.
For efficient unit testing you should be able to create the SUT without the context. It sounds like you have some hidden dependencies somewhere. That might be the root of some of your headache.
That was indeed the root cause of my headaches, which I am now about to do away with using a mocking framework.
It seems you are worried about the database's consistency after you have run the tests. If possible, use your own database for testing, where you don't need to care about that. If you have such a sandbox database you can delete data as you wish.
This is indeed one of the solutions to the problem I mentioned in my previous post, but it might not work in all cases, especially when integrating with a legacy system in which the database/data isn't under your control, and in cases where some DAO methods require certain data to already be present in a given set of tables. Should I look into database unit testing frameworks like DBUnit?
In this case it pays off to stub/fake/mock the layer that talks to the network. Or consider https://wsunit.dev.java.net/.
Ah, that looks interesting. I've also heard of tools like SoapUI and the like, which can be used for functional testing. Has anyone here had any success with such tools?
Thanks for all the answers and apologies for the ambiguous explanation; English isn't my first language.
-sasuke
Right now my JUnit tests read like a long story:
I create 4 users
I delete 1 user
I try to login with the deleted user and make sure it fails
I login with one of the 3 remaining users and verify I can login
I send a message from one user to the other and verify that it appears in the outbox of the sender and in the inbox of the receiver.
I delete the message
...
...
Advantages:
The tests are quite effective (very good at detecting bugs) and very stable, because they only use the API; if I refactor the code, the tests are refactored too. As I don't use "dirty tricks" such as saving and reloading the DB in a given state, my tests are oblivious to schema changes and implementation changes.
Disadvantages:
The tests are getting difficult to maintain; any change in a test affects other tests. The tests run for 8-9 minutes, which is fine for continuous integration but a bit frustrating for developers. Tests cannot be run in isolation; the best you can do is to stop after the test you are interested in has run, but you absolutely must run all the tests that come before it.
How would you go about improving my tests?
First, understand that the tests you have are integration tests (they probably access external systems and hit a wide range of classes). Unit tests should be a lot more specific, which is a challenge on an already built system. The main issue in achieving that is usually the way the code is structured:
i.e. classes tightly coupled to external systems (or to other classes that are). To be able to do this you need to build the classes in such a way that you can avoid hitting external systems during the unit tests.
Update 1: Read the following, and consider that the resulting design will allow you to test the encryption logic without hitting files/databases - http://www.lostechies.com/blogs/gabrielschenker/archive/2009/01/30/the-dependency-inversion-principle.aspx (not in Java, but it illustrates the issue very well). Also note that you can write really focused integration tests for the readers/writers, instead of having to test it all together.
I suggest:
Gradually include real unit tests in your system. You can do this while making changes and developing new features, refactoring appropriately.
While doing the above, include focused integration tests where appropriate. Make sure you are able to run the unit tests separately from the integration tests.
Consider that your tests are close to testing the system as a whole, and thus differ from automated acceptance tests only in that they operate at the border of the API. Given this, think about factors related to the importance of the API for the product (for example, whether it will be used externally), and whether you have good coverage with automated acceptance tests. This can help you understand the value of having these tests in your system, and also why they naturally take so long. Then decide whether you will test the system as a whole at the interface level, or at both the interface and API levels.
Update 2: Based on other answers, I want to clarify something regarding TDD. Let's say you have to check whether some given logic sends an email, logs the info to a file, saves data in the database, and calls a web service (not all at once, I know, but you start adding tests for each of those). In each test you don't want to hit the external systems; what you really want to test is whether the logic makes the calls to those systems that you expect it to make. So when you write a test that checks that an email is sent when you create a user, what you test is whether the logic calls the dependency that does the sending. Notice that you can write these tests and the related logic without actually having to implement the code that sends the email (and then having to access the external system to know what was sent...). This will help you focus on the task at hand and help you get a decoupled system. It will also make it simple to test what is being sent to those systems.
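For instance, a hedged Mockito sketch of the email case described above; UserService, Mailer, UserRepository, and the method names are assumptions, not taken from the question.

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import org.junit.Test;

public class UserServiceTest {

    @Test
    public void creatingAUserAsksTheMailerToSendAWelcomeEmail() {
        Mailer mailer = mock(Mailer.class);
        UserRepository repository = mock(UserRepository.class);
        UserService service = new UserService(repository, mailer);

        service.createUser("alice@example.com");

        // We never touch a real mail server; we only check that the call was made.
        verify(mailer).sendWelcomeMail("alice@example.com");
    }
}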
Unit tests should - ideally - be independent, and able to run in any order. So, I would suggest that you:
break up your tests to be independent
consider using an in-memory database as the backend for your tests
consider wrapping each test or suite in a transaction that is rolled back at the end (see the sketch at the end of this answer)
profile the unit tests to see where the time is going, and concentrate on that
If it takes 8 minutes to create a few users and send a few messages, the performance problem may not be in the tests; rather, this may be a symptom of performance problems with the system itself - only your profiler knows for sure!
[Caveat: I do NOT consider these kinds of tests to be 'integration tests', though I may be in the minority; I consider them to be unit tests of features, a la TDD.]
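Regarding the rollback suggestion above, here is a minimal sketch using Spring's test support; the context file name and the UserDao/User types are assumptions. Each test runs in a transaction that Spring rolls back automatically, so the database is left untouched.

import static org.junit.Assert.assertNotNull;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.transaction.annotation.Transactional;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:test-context.xml")
@Transactional
public class UserDaoRollbackTest {

    @Autowired
    private UserDao userDao; // assumed DAO under test

    @Test
    public void createdUserIsVisibleInsideTheTestTransaction() {
        userDao.create(new User("alice"));
        assertNotNull(userDao.findByName("alice"));
        // No cleanup needed: the surrounding transaction is rolled back after the test.
    }
}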
Right now you are testing many things in one method (a violation of One Assertion Per Test). This is bad, because when any of those things changes, the whole test fails. That makes it not immediately obvious why a test failed and what needs to be fixed. Also, when you intentionally change the behaviour of the system, you need to change more tests to correspond to the changed behaviour (i.e. the tests are fragile).
To know what kinds of tests are good, it helps to read more on BDD:
http://dannorth.net/introducing-bdd
http://techblog.daveastels.com/2005/07/05/a-new-look-at-test-driven-development/
http://jonkruger.com/blog/2008/07/25/why-behavior-driven-development-is-good/
To improve the test that you mentioned, I would split it into the following three test classes, with these context and test method names:
Creating user accounts
    Before a user is created
        the user does not exist
    When a user is created
        the user exists
    When a user is deleted
        the user does not exist anymore
Logging in
    When a user exists
        the user can log in with the right password
        the user can not log in with a wrong password
    When a user does not exist
        the user can not log in
Sending messages
    When a user sends a message
        the message appears in the sender's outbox
        the message appears in the receiver's inbox
        the message does not appear in any other message boxes
    When a message is deleted
        the message does not exist anymore
You also need to improve the speed of the tests. You should have a unit test suite with good coverage which can run in a couple of seconds. If it takes longer than 10-20 seconds to run the tests, you will hesitate to run them after every change, and you lose some of the quick feedback that running them gives you. (If it talks to the database, it's not a unit test, but a system or integration test; those have their uses, but are not fast enough to be executed continually.) You need to break the dependencies of the classes under test by mocking or stubbing them. Also, from your description it appears that your tests are not isolated, but instead depend on side effects caused by previous tests - this is a no-no. Good tests are FIRST.
Reduce dependencies between tests. This can be done by using mocks. Martin Fowler discusses this in Mocks Aren't Stubs, in particular why mocking reduces dependencies between tests.
You can use JExample, an extension of JUnit that allows test methods to have return values that are reused by other tests. JExample tests run with the normal JUnit plugin in Eclipse and work side by side with normal JUnit tests, so migration should be no problem. JExample is used as follows:
@RunWith(JExample.class)
public class MyTest {

    @Test
    public Object a() {
        return new Object();
    }

    @Test
    @Given("#a")
    public Object b(Object object) {
        // do something with object
        return object;
    }

    @Test
    @Given("#b")
    public void c(Object object) {
        // do some more things with object
    }
}
Disclaimer: I am among the JExample developers.
If you use TestNG you can annotate tests in a variety of ways. For example, you can annotate tests like yours above as long-running. Then you can configure your automated-build/continuous-integration server to run these, while the standard "interactive" developer build does not (unless developers explicitly choose to).
This approach depends on developers checking in to your continuous build on a regular basis, so that the tests do get run!
Some tests will inevitably take a long time to run. The comments in this thread regarding performance are all valid. However, if your tests do take a long time, the pragmatic solution is to run them, but not let their time-consuming nature impact the developers to the point that they avoid running them.
Note: you can do something similar with JUnit by (say) naming tests in different fashions and getting your continuous build to run a particular subset of test classes.
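A small sketch of the TestNG approach above; the group name "long-running" is just a convention you choose, and the test names are placeholders.

import org.testng.annotations.Test;

public class MessagingStoryTest {

    // Tagged so the CI build can include it while the default developer run excludes the group.
    @Test(groups = { "long-running" })
    public void fullUserAndMessagingStory() {
        // ... the long, story-style scenario ...
    }

    @Test // untagged: runs in every build
    public void loginIsRejectedForADeletedUser() {
        // ... fast, isolated check ...
    }
}

The developer build can then exclude the group in testng.xml (an exclude entry for "long-running" in the groups/run section), while the CI configuration includes it.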
By testing stories as you describe, you have very brittle tests. If only one tiny bit of functionality changes, your whole test might be messed up, and then you will likely have to change all the tests affected by that change.
In fact, the tests you describe are more like functional tests or component tests than unit tests. So you are using a unit testing framework (JUnit) for non-unit tests. In my view there is nothing wrong with using a unit testing framework for non-unit tests, if (and only if) you are aware of it.
So there are the following options:
Choose another testing framework that supports a "story telling" style of testing much better, as other users have already suggested. You will have to evaluate and find a suitable testing framework.
Make your tests more "unit test"-like. To do this you will need to break up your tests and maybe change your current production code. Why? Because unit testing aims at testing small units of code (unit testing purists suggest only one class at a time). By doing this your unit tests become more independent: if you change the behavior of one class, you only need to change a relatively small amount of unit test code. This makes your unit tests more robust. During that process you might see that your current code does not support unit testing very well - mostly because of dependencies between classes. This is why you may also need to modify your production code.
If you are on a project and running out of time, both options might not help you any further. Then you will have to live with those tests, but you can try to ease your pain:
Remove code duplication in your tests: as in production code, eliminate duplication and put the code into helper methods or helper classes. If something changes, you might only need to change the helper method or class. This way you converge towards the next suggestion.
Add another layer of indirection to your tests: produce helper methods and helper classes that operate at a higher level of abstraction. They should act as an API for your tests. These helpers call your production code; your story tests should only call those helpers. If something changes, you need to change only one place in your API and don't need to touch all your tests.
Example signatures for your API:
createUserAndDelete(String[] usersForCreation, String[] usersForDeletion);
logonWithUser(String user);
sendAndCheckMessageBoxes(String fromUser, String toUser);
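One possible shape for such a helper, as a sketch; the ApplicationApi facade and its methods are placeholders for your real production entry points.

public class UserStoryHelper {

    private final ApplicationApi api; // facade over the production API the story tests exercise

    public UserStoryHelper(ApplicationApi api) {
        this.api = api;
    }

    public void createUserAndDelete(String[] usersForCreation, String[] usersForDeletion) {
        for (String name : usersForCreation) {
            api.createUser(name);
        }
        for (String name : usersForDeletion) {
            api.deleteUser(name);
        }
    }
}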
For general unit testing I suggest having a look at xUnit Test Patterns by Gerard Meszaros.
For breaking dependencies in your production code, have a look at Working Effectively with Legacy Code by Michael Feathers.
In addition to the above, pick up a good book on TDD (I can recommend "TDD and Acceptance TDD for Java Developers"). Even though it approaches things from a TDD point of view, there is a lot of helpful information about writing the right kind of unit tests.
Find someone who has a lot of knowledge in the area and use them to figure out how you can improve your tests.
Join a mailing list to ask questions and just read the traffic coming through, e.g. the JUnit list at Yahoo (something like groups.yahoo.com/junit). Some of the movers and shakers in the JUnit world are on that list and actively participate.
Get a list of the golden rules of unit tests and stick them on your (and others') cubicle wall, something like:
Thou shalt never access an external system
Thou shalt only test the code under test
Thou shalt only test one thing at once
etc.
Since everyone else is talking about structure, I'll pick different points. This sounds like a good opportunity to profile the code to find bottlenecks and to run it through code coverage to see if you are missing anything (given the time it takes to run, the results could be interesting).
I personally use the Netbeans profiler, but there are ones in other IDEs and stand alone ones as well.
For code coverage I use Cobertura, but EMMA works too (EMMA had an annoyance that Cobertura didn't have... I forget what it was, and it may not be an issue anymore). Those two are free; there are paid ones as well that are nice.