How would I test a web server in Java?

I am basically practicing Java socket programming by building a client and a server (not necessarily an HTTP server). In brief, clients send requests through sockets to the server, and the server adds each request to a task queue. The thread pool starts with a fixed number of threads, and each free thread is assigned one runnable task from the queue. My web server also has a simple storage component that stores and retrieves data in a file on disk. In this project, I have to take care of several concurrency issues.
Basically, I have to build the client, server, thread pool, handler, and storage. However, I want to test them thoroughly in a systematic way (unit tests, integration tests, etc.). I don't have much experience with testing, so I am looking for pointers, methodologies, frameworks, or tutorials. (I use Ant to automate the build, and am initially considering JUnit and EasyMock for testing.)

Before testing, I'd start by coding some rough-and-ready prototype code, just to see it working and to get a feel for the APIs I will be using.
Then introduce some unit tests with JUnit (there are other frameworks, but JUnit is ubiquitous and you'll find plenty of tutorials to get you started).
If your object needs to interact with some other objects to complete its tasks, then use mocks (EasyMock or whatever) to provide the interaction - this will probably lead to a bit of refactoring.
Once you are happy, you can start to look at testing how your objects interact: write new (integration) tests that replace the mocks with the real thing. Greater interaction results in greater complexity.
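For the handler/storage pair in the question, a first unit test might mock the storage so the test never touches the disk. A minimal sketch with JUnit 4 and EasyMock - the Storage and RequestHandler names, and the "GET key" protocol, are hypothetical stand-ins for your own classes:

```java
import static org.easymock.EasyMock.*;
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical collaborator: the handler reads and writes through this interface.
interface Storage {
    String get(String key);
}

// Hypothetical class under test: turns a request string into a response,
// using Storage as its only collaborator.
class RequestHandler {
    private final Storage storage;
    RequestHandler(Storage storage) { this.storage = storage; }

    String handle(String request) {
        String key = request.substring("GET ".length());
        String value = storage.get(key);
        return value != null ? "200 " + value : "404";
    }
}

public class RequestHandlerTest {
    @Test
    public void returnsStoredValueForKnownKey() {
        Storage storage = createMock(Storage.class);
        expect(storage.get("foo")).andReturn("bar");
        replay(storage);

        RequestHandler handler = new RequestHandler(storage);
        assertEquals("200 bar", handler.handle("GET foo"));

        verify(storage); // fails if the handler never called storage.get("foo")
    }
}
```

Replacing the mock with the real file-backed storage later turns essentially the same test into an integration test.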
Some things to remember
trivial methods aren't worth testing (e.g. simple accessors)
100% coverage is a waste of time
any test is better than none
Unit tests are easier to achieve than integration tests
Not all tests are functional
Testing multi-threaded applications is hard
There is a book on how Google does testing. Basically they don't write tests until something looks viable. They have engineers who advise on how to structure code for testing. The point is:
Runnable code is the goal
Tests add to that goal, but do not replace it
Writing code that can be tested is a learnt skill

Related

How to test complex asynchronous networked code and follow TDD

What would be the best practices for unit testing networked asynchronous code? I am trying to do/learn TDD.
I am currently still planning this part of the library, but in principle:
This is an SSH client library, and I want it to be asynchronous. The SSH connection process is actually very complex. My connect method would set some connection state variables atomically, then use some kind of task executor to schedule a connection task. Connecting requires opening the socket to the server, introducing ourselves by exchanging SSH protocol version strings, and then completing a key exchange process, which itself divides into several cases, because there are several key exchange algorithms, each requiring different packets to be exchanged.
I have heard that I should test the public API, and test private methods through the public methods that use them, but in this case that seems difficult: the task is quite complex, and it is probably easier to fake only parts of the negotiation rather than the whole connection/negotiation, just to check each possible result of a connect method, including the results of every key exchange algorithm.
Is that a good justification for splitting the larger connect task into smaller ones, even though they are not publicly available to the user, and testing each separate connection stage instead of the whole connect method all at once? Does it break best practices somehow, or is there a different way to do it? For example, is it testing implementation details?
What would be the best practices for unit testing networked asynchronous code? I am trying to do/learn TDD.
The reference you need to read is Growing Object-Oriented Software, Guided by Tests, by Freeman and Pryce. That text contains a long walkthrough of how to use tests to develop an asynchronous networked auction client.
The process, as described by the authors, front-loads a lot of the work to get an initial end-to-end test up and running before beginning to fill in the other details.
It's not the only way to do it, of course.
I have heard that I should test the public API, and test private methods through the public methods that use them
Yes, and...
in this case that seems difficult: the task is quite complex, and it is probably easier to fake only parts of the negotiation rather than the whole connection/negotiation, just to check each possible result of a connect method, including the results of every key exchange algorithm.
What often happens is that a complex solution can be broken down into modules, each of which contains its own "public API" -- see On the Criteria To Be Used in Decomposing Systems into Modules, by Parnas. You can then test the modules individually.
It will often turn out, for instance, that your code can be organized into two piles: an internal functional core, and an imperative shell which interacts with the boundary of your system.
As a rule, the functional core is much easier to test than the imperative shell, so strive for a shell that is "so simple that there are obviously no deficiencies."
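As a concrete illustration of that split, the choice of key exchange algorithm is pure decision logic that can live in the functional core. A rough sketch (chooseKexAlgorithm is illustrative; the real RFC 4253 negotiation rules have more cases):

```java
import static org.junit.Assert.assertEquals;
import java.util.Arrays;
import java.util.List;
import org.junit.Test;

// Functional core: pure negotiation logic, testable without sockets or mocks.
final class Negotiation {
    // Pick the first client-preferred algorithm the server also supports,
    // or null if there is no common algorithm.
    static String chooseKexAlgorithm(List<String> clientPrefs, List<String> serverSupported) {
        for (String algo : clientPrefs) {
            if (serverSupported.contains(algo)) {
                return algo;
            }
        }
        return null; // no common algorithm: negotiation must fail
    }
}

public class NegotiationTest {
    @Test
    public void picksFirstMutuallySupportedAlgorithm() {
        assertEquals("curve25519-sha256",
                Negotiation.chooseKexAlgorithm(
                        Arrays.asList("curve25519-sha256", "dh-group14-sha256"),
                        Arrays.asList("dh-group14-sha256", "curve25519-sha256")));
    }
}
```

The imperative shell then only has to read the two algorithm name lists off the socket and hand them to this function; the shell stays thin, and the interesting logic is tested without any I/O.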
So what is the definition of a "public API"?
Roughly: the affordances that are accessible outside of the scope of the implementation.
Put another way, they are the parts of the module that can't be changed without rewriting the code that calls the module.
In this case I would probably split the connection process into subtasks - connecting, the SSH introduction, and the key exchange - and test each subtask in isolation. I would also test the key exchange support in isolation from the specific key exchange algorithm implementations. The requirements for testing each of those parts are different, and only the first requires mocking a socket.
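That decomposition could look something like the following - all of the names are hypothetical, and the point is only the shape: each stage hides behind a small interface, so each can be tested on its own, and connect() shrinks to a pipeline that is itself trivial to test:

```java
import java.io.IOException;
import java.util.List;

// Hypothetical session state shared by the stages (streams, negotiated keys, ...).
class SshSession { }

// One small interface per negotiation stage; each implementation can be
// unit tested on its own, and only the first stage touches a socket.
interface ConnectionStage {
    void run(SshSession session) throws IOException;
}

class Connector {
    private final List<ConnectionStage> stages;
    Connector(List<ConnectionStage> stages) { this.stages = stages; }

    // connect() is now just a pipeline; its own test only has to check
    // that the stages run in order and that a failing stage aborts the rest.
    void connect(SshSession session) throws IOException {
        for (ConnectionStage stage : stages) {
            stage.run(session);
        }
    }
}
```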
You might also want to look at Cory Benfield's talk Building Protocol Libraries the Right Way.
Not sure if that is okay or not.
The TDD police are not going to come kicking in your door if you don't do it "right". At worst, they'll write you a nasty note.

Java Testing that Includes Multiple Virtual Machines

I have a large piece of Java software to which I am looking to add tests for a new module. The new module involves running external applications and handling the results. The external applications are not something I can modify. In order to get results, these external applications need to be able to access and modify 2+ remote machines. Once they make those modifications, those machines are no longer in a valid state for further testing. My current application already includes JUnit tests, so it would be nice if I could write the new tests in JUnit as well.
My question: is there a known technique for this use case? I know I could hack together some kind of solution using Vagrant, where machines are started up and torn down in the setup and teardown blocks of JUnit, but that seems like it could be hard to maintain, would require the tests to run serially, and would be very, very slow.
EDIT: Removed the word "unit" because it was significantly sidetracking the question.
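For what it's worth, the Vagrant idea can at least be kept contained in JUnit's class-level lifecycle hooks, so the machines exist exactly for the duration of one test class. A rough sketch (it assumes a Vagrantfile describing the remote machines, and it does run serially and slowly, as feared):

```java
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class ExternalAppIT {

    // Bring the disposable remote machines up once per test class.
    @BeforeClass
    public static void bootMachines() throws Exception {
        run("vagrant", "up");
    }

    // Tear the machines down so the next run starts from a clean state.
    @AfterClass
    public static void destroyMachines() throws Exception {
        run("vagrant", "destroy", "-f");
    }

    @Test
    public void externalAppModifiesRemoteMachine() throws Exception {
        // invoke the external application and assert on its results here
    }

    private static void run(String... command) throws Exception {
        Process p = new ProcessBuilder(command).inheritIO().start();
        if (p.waitFor() != 0) {
            throw new IllegalStateException("Command failed: " + String.join(" ", command));
        }
    }
}
```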

Approach to performing unit and integration tests from scratch for untested code

The basic question is: "How should one start writing unit and integration tests for an untested project, especially when the person is not familiar with the code and has not done integration testing before?"
Consider the scenario where unit tests and integration tests have to be written for a project. The project uses Java/J2EE technology and does not have any tests written at all.
The dilemma I face is that, since I have not written the code, I don't want to refactor it immediately in order to write tests. I also have to select a testing framework; I am thinking of using Mockito and PowerMock.
I also have to estimate code coverage for the tests, and then perform integration testing. I will have to research integration testing tools and select one. I have not done integration testing or estimated an acceptable level of code coverage for a project before.
Since I am working independently, I would appreciate any strategies, tips, or suggestions on where to start, as well as recommended tools.
First things first:
Understand the architecture: what are the main components?
If you have no good overview of the features and functions the program offers, make a list of them and organize it into a hierarchy
Get familiar with the code. I recommend the following approach:
once you understand where the different components start, try to figure out the method invocation hierarchy (in Eclipse you can easily jump to source definitions by pressing F3)
later you can do the same while debugging the code; this way the IDE jumps to the definitions automatically, plus you can observe how the state of the program changes
For unit testing itself, I can recommend Clean Code, Chapter 9 (circa 12 pages) for starters. It uses JUnit for its example and gives a very good introduction to how good testing is done.
There you will learn things like the F.I.R.S.T. principle, that Unit Tests should be:
Fast, Independent, Repeatable, Self-Validating and Timely
Some clarifications: JUnit is the most widely used and accepted test framework. Mockito and PowerMock are mocking frameworks; they are used together with JUnit when you need to replace the real collaborators of the code under test.
For code coverage I can only recommend Cobertura, but there are many more.
Start with unit tests before you dive into integration tests (bottom-up). You can also do it the other way around (top-down), but since you say you are not very experienced, I would stick to the first.
Finally, just go for it and get started. You will learn the most, and fastest, while actually writing the test code.
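To make "just get started" concrete, here is roughly what a first JUnit 4 + Mockito test looks like - the ExchangeRates/PriceService classes are made-up placeholders; what matters is the arrange/act/assert shape:

```java
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

import org.junit.Test;

public class PriceServiceTest {

    // Hypothetical collaborator and class under test.
    interface ExchangeRates {
        double rateFor(String currency);
    }

    static class PriceService {
        private final ExchangeRates rates;
        PriceService(ExchangeRates rates) { this.rates = rates; }
        double inCurrency(double usd, String currency) {
            return usd * rates.rateFor(currency);
        }
    }

    @Test
    public void convertsUsingTheCurrentRate() {
        // arrange: stub the collaborator
        ExchangeRates rates = mock(ExchangeRates.class);
        when(rates.rateFor("EUR")).thenReturn(0.5);

        // act
        PriceService service = new PriceService(rates);
        double result = service.inCurrency(10.0, "EUR");

        // assert: check the result and the interaction
        assertEquals(5.0, result, 1e-9);
        verify(rates).rateFor("EUR");
    }
}
```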
Stop. "...not familiar with the code...". First get familiar with the code and, most importantly, its expected functionality. You can't refactor or unit test code that you are not comfortable with.
Since you have not written unit tests before, I would suggest learning about and getting comfortable with them first.
Important: bad/wrong unit tests are worse than no unit tests, because the next person who maintains your code will misinterpret the functionality.
There are a bunch of code coverage tools out there; use whichever one suits you best.
Adding tests to legacy code that has no tests is a difficult task. As @Suraj mentioned, get familiar with the code base and the expected functionality. You can't test it if you don't know what it is supposed to do.
In terms of choosing which areas of the code to test, start with the high-business-value areas. Which functionality is most important? You want to make sure you have a strong test set for that code.
Since you don't have any unit/integration tests, I would start with some high-level end-to-end tests that at least ensure that, given some inputs, the system produces the expected outputs. This doesn't ensure correctness, but it at least ensures consistency.
Then, as you develop a test suite, you can be confident that the refactorings you are doing are not changing the behavior of the code (unless, of course, you find bugs that are being fixed).
For testing frameworks, JUnit is the standard unit testing framework. Note that Mockito and PowerMock are not testing frameworks themselves, but mocking frameworks that can be used within JUnit.
For acceptance tests, there are also a variety of frameworks to help. For web UI testing, Selenium is pretty standard. There are also tools like FitNesse for more table-driven testing.
There are also some common frameworks to help with code coverage - Cobertura, EMMA and Clover come to mind.
I would also set up an automated build (a Jenkins build server is pretty simple to set up). This will let you run your tests on every check-in. Even though your code coverage is going to be low to start with, getting into this habit is a good one.

Testing a client-server based app in Java

I have written a client-server application in Java, where the client continually (every 30 seconds) sends some data, and a panel on the server is redrawn according to the incoming messages from the client. There are multiple threads running on the server (one for redrawing, one for reading incoming messages, and one for outgoing messages). For the GUI I have used Swing.
Now, I am a complete newbie at testing, and I would like to know the various methods, tricks and gotchas for testing my application. Any web resources or good texts on the subject would be very helpful. Thanks in advance.
Take a look at the Maveryx test automation framework and its documentation. It contains several examples of creating and running automated scripts for testing Java (Swing-based) applications.
You will, at least, have to do testing at two levels.
You will first need to unit test your Java code, on both the server and the client, by writing e.g. JUnit tests.
You should then test the server and client components individually by simulating the other side. Something like JMock may help you simulate the parts of the code that you are not testing. For testing your GUI, you may refer to What is the best testing tool for Swing-based applications?
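For the client side, simulating the server can be as cheap as a throwaway ServerSocket inside the test itself. A sketch (the message format is made up; in a real test you would start your actual client against localhost instead of writing to the socket inline):

```java
import static org.junit.Assert.assertEquals;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

import org.junit.Test;

public class ClientAgainstFakeServerTest {

    @Test
    public void clientSendsExpectedMessage() throws Exception {
        // A throwaway in-process "server": port 0 picks a free port.
        try (ServerSocket fakeServer = new ServerSocket(0)) {
            int port = fakeServer.getLocalPort();

            // Here you would point your real client at localhost:port.
            // For the sketch we play the client's part inline.
            try (Socket client = new Socket("localhost", port)) {
                new PrintWriter(client.getOutputStream(), true).println("STATUS 42");

                try (Socket accepted = fakeServer.accept()) {
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(accepted.getInputStream()));
                    assertEquals("STATUS 42", in.readLine());
                }
            }
        }
    }
}
```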
There is a lot of good material available on this topic online!
If you're looking to do unit testing, one of the most common ways to do it in Java is with JUnit. The JUnit Cookbook has some decent information to get you started with writing your unit tests.

How to properly test threads in Java?

I am developing an application in Java that uses threads to continuously retrieve data from a website. I would like to use JUnit to test them, but this is not straightforward. How is it possible to test threads that do not even have a termination point?
One possibility is to pull the work that the threads do out into helper methods or classes that can be tested separately in single-threaded unit tests (see the sketch after these suggestions).
Another is to provide mock objects that are invoked by the threads and can check that the expected behaviour occurs.
Another is to spawn the worker threads and have your test poll something that will tell it whether the threads worked OK (preferably with a timeout, so your tests don't run forever). The problem here is that such tests can be slow and non-reproducible.
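A rough JUnit sketch combining the first suggestion with the timeout advice - fetchOnce stands in for whatever one iteration of your thread's loop does:

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

import org.junit.Test;

public class FetcherTest {

    // Hypothetical: the work the thread loops over, pulled out of run()
    // so it can be tested without any threads at all.
    static String fetchOnce() {
        return "payload"; // imagine the real HTTP/socket read here
    }

    @Test
    public void workCanBeTestedSingleThreaded() {
        assertEquals("payload", fetchOnce());
    }

    @Test
    public void workerThreadDeliversAResult() throws Exception {
        CountDownLatch done = new CountDownLatch(1);
        AtomicReference<String> result = new AtomicReference<>();

        Thread worker = new Thread(() -> {
            result.set(fetchOnce());
            done.countDown();
        });
        worker.start();

        // Always use a timeout so a broken thread fails the test
        // instead of hanging the build forever.
        assertTrue("worker did not finish in time", done.await(2, TimeUnit.SECONDS));
        assertEquals("payload", result.get());
    }
}
```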
Why not use jvisualvm (which comes packaged with JDK 6 and up) to monitor the threads?
http://docs.oracle.com/javase/6/docs/technotes/guides/visualvm/threads.html
It is not clear what you mean by ‘test them’. It's hard for me to see what your thread is – how much functionality it has, etc. A classic unit test would test the functions in your class, each on its own, but it seems that is not what you want. I assume you want to test whether many of your threads run in parallel and still do the right thing. This kind of integration test is indeed difficult.
A threaded test is in order here. You have to decide how much of the environment you want to mock – whether to run your tests against the real web site or not. The first may not be viewed kindly by the site's operators; the latter might introduce errors. I would recommend TestNG over JUnit here, as it easily allows you to run tests in parallel on any number of threads.
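For example, TestNG can hammer a single test method from several threads with nothing but annotation attributes (fetch() is a placeholder for your real retrieval code):

```java
import static org.testng.Assert.assertEquals;

import org.testng.annotations.Test;

public class ParallelFetchTest {

    // TestNG runs this method 20 times across 4 threads; timeOut (ms) fails
    // the test if any invocation hangs.
    @Test(threadPoolSize = 4, invocationCount = 20, timeOut = 5000)
    public void fetchIsSafeUnderConcurrency() {
        assertEquals(fetch(), "payload");
    }

    // Hypothetical stand-in for the real data retrieval.
    private String fetch() {
        return "payload";
    }
}
```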
Well I think it depends on exactly what you're trying to test.
If you're just trying to test whether or not threads can be spawned, well, that's silly - it's baked into the JVM and isn't going to fail any time soon. (If you have some particular resource condition, like low memory, that would cause it to fail, I guess that makes sense, but in most cases I'd say not.)
I would break the test up into two components. Have a test that just does the data retrieval, regardless of whether it runs in its own thread or not. Then have your 'black box' test that tells your central component "go get this data", and the component spawns threads as it sees fit.
