How to properly test threads in Java? - java

I am developing an application in Java that uses threads to continuously retrieve data from a website. I would like to use JUnit to test them, but this is not straightforward. How is it possible to test these threads that do not even have a termination point?

One possibility is to pull out the work that the threads do into helper methods or classes that can be tested separately in single-threaded unit tests.
Another is to provide mock objects that are invoked by the threads, and can check that the expected behaviour occurs.
Another is to spawn the worker threads, and get your test to poll something that will tell it whether the threads worked OK (preferably with a timeout so your tests don't run forever). The problem here is that your tests can be slow and non-reproducible.
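To make the first and third options concrete, here is a minimal sketch assuming JUnit 4; Fetcher, fetchOnce() and awaitFirstResult() are hypothetical names, not anything from the question:

    import static org.junit.Assert.assertTrue;
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;
    import org.junit.Test;

    // Worker whose per-iteration work is pulled out into a method that can be unit-tested directly.
    class Fetcher implements Runnable {
        private final CountDownLatch firstResult = new CountDownLatch(1);
        private volatile String latest;

        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                latest = fetchOnce();            // the single-threaded, testable piece of work
                firstResult.countDown();
            }
        }

        String fetchOnce() {
            return "data from website";          // a real HTTP call would go here
        }

        boolean awaitFirstResult(long timeout, TimeUnit unit) throws InterruptedException {
            return firstResult.await(timeout, unit);
        }
    }

    public class FetcherTest {
        // Option 1: call fetchOnce() directly in a plain single-threaded test.
        // Option 3: start the thread and poll with a timeout so the test cannot hang forever.
        @Test
        public void firstResultArrivesWithinFiveSeconds() throws Exception {
            Fetcher fetcher = new Fetcher();
            Thread worker = new Thread(fetcher);
            worker.start();
            assertTrue(fetcher.awaitFirstResult(5, TimeUnit.SECONDS));
            worker.interrupt();
        }
    }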

Why not use jvisualvm (which comes packaged with JDK 6 and up) to monitor the threads?
http://docs.oracle.com/javase/6/docs/technotes/guides/visualvm/threads.html

It is not clear what you mean by ‘test them’. It’s hard for me to see what your thread looks like – how much functionality it has, etc. A classic unit test would test the functions in your class, each on its own. But it seems that is not what you want. I assume you want to test whether many of your threads run in parallel and still do the right thing. This kind of integration test is indeed difficult.
A threaded test is in order here. You have to decide how much of the environment you want to mock – whether to run your tests against the real web site or not. Hitting the real site may not be viewed kindly by its operators, while mocking it might introduce errors of its own. I would recommend TestNG instead of JUnit, as it will easily allow you to run tests in parallel in any number of threads.
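For illustration, a sketch of what that can look like in TestNG (the class name, method name and retrieve() call are placeholders): the same test method is run 20 times across 5 worker threads, and any invocation that exceeds the timeout fails.

    import static org.testng.Assert.assertNotNull;
    import org.testng.annotations.Test;

    public class ParallelRetrievalTest {

        // 20 invocations spread over a pool of 5 threads, each limited to 10 seconds.
        @Test(threadPoolSize = 5, invocationCount = 20, timeOut = 10000)
        public void retrievesDataConcurrently() {
            assertNotNull(retrieve());
        }

        private String retrieve() {
            return "data";                       // placeholder for the real web-site call
        }
    }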

Well I think it depends on exactly what you're trying to test.
If you're just trying to test whether or not threads can be spawned, well that's silly - it's baked into the JVM, and isn't going to fail any time soon. (If you have some particular resource condition like low memory that would cause it to fail, I guess that makes sense, but in most cases I'd say not.)
I would break the test up into two components. Have a test that just does the data retrieval, regardless of whether it runs in its own thread or not. Then have your 'black box' test that tells your central component "Go get this data" and lets it spawn threads as it sees fit.

Related

Is this a valid use of Thread.stop()?

Okay, I know it's dangerous, I know it's deprecated, and I know using it would make baby Jesus cry. I think I'm aware of the implications of calling it and have read this related question.
Here's my scenario. I would like to test a data processing library. It runs multiple jobs, one per thread. Each job only communicates with other jobs via an out-of-process queueing system. Otherwise, jobs are independent: there is no shared state between threads, at least not in my code base.
I would like to test that if some terrible thing, such as an OutOfMemoryError or a cosmic ray killing the VM, happens at some random point in a job, the rest of the system is okay. Therefore I want to stop a thread at a completely arbitrary point, and killing the thread should not leave resources accessible by other threads in an undefined state. The job logic is part of a framework that I don't want to compromise for the purposes of this test, so it's not viable to intersperse random exits throughout the job code.
Is this an appropriate use of Thread.stop()? And so that this is not an XY question, is there any other practical way to accomplish my goal? (I suppose it could be done with bytecode instrumentation but I think that would be tremendously difficult.)
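For concreteness, a rough sketch of the kind of test being described; the Job-related names are placeholders, and Thread.stop() is deprecated (and may not work at all on very recent JDKs), which is exactly the point of the question:

    import java.util.Random;

    public class KillJobAtRandomPointTest {

        @SuppressWarnings("deprecation")
        public static void main(String[] args) throws InterruptedException {
            Thread job = new Thread(KillJobAtRandomPointTest::runJob);   // the framework-driven job
            job.start();

            Thread.sleep(new Random().nextInt(5000));   // let it run for an arbitrary amount of time
            job.stop();                                 // kill it at whatever point it happens to be
            job.join();

            // ...then assert that queues, files and other resources used by the
            // remaining jobs are still in a consistent state.
        }

        private static void runJob() {
            while (true) {
                // placeholder for the real job logic
            }
        }
    }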

Remote debugging threads with different debuggers

I have an application which is a scheduler running different threads.
The application may load new Runnable classes and run them.
Currently the application is in production, that is, it's running on a remote server.
My team consists of 3 people developing Runnable classes.
When the class is ready, it's uploaded to the server and loaded into the scheduler.
I would like to give my team the ability to debug specific threads.
That is: person A may debug threads of Runnable A, person B those of Runnable B, and so on.
Giving them full access to the remote JVM is not a solution, because
the developers are not allowed to see the system core or each other's solutions.
So my question is: how can I allow multiple remote debugging sessions with thread-specific connections?
Preferable IDE: Eclipse
EDIT:
It's possible to connect remotely to a specific thread with jdb.
http://docs.oracle.com/javase/7/docs/technotes/tools/windows/jdb.html
Here is an example: http://www.itec.uni-klu.ac.at/~harald/CSE/Content/debugging.html
1) Find your thread with the jdb threads command
2) Put a breakpoint and enter the wanted thread
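For example, a rough jdb session might look like this (the port, thread id and class name are placeholders; the JVM is assumed to have been started with -agentlib:jdwp=transport=dt_socket,server=y,address=8000):

    jdb -attach 8000
    > threads                               (list all threads and their ids)
    > thread 0x9e2                          (switch to the thread you are interested in)
    > stop at com.example.MyRunnable:42     (breakpoint inside the Runnable you own)
    > cont
    > where                                 (stack trace of the current thread once it stops)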
Still, the security issue remains.
One solution was to compile the protected code without debug symbols, but that only protects the core and still lets developers see each other's threads.
So, next step: digging into the Security Manager. Maybe there's a privilege layer suitable for my situation.
I'm not sure I've got a good answer to your question, but let's see how it pans out.
As I understand it you want to allow different developers to debug their class alone, and their class runs as a thread as part of a single Java process.
On the face of it, that sort of runs counter to the nature of debugging, in that normally you have access to everything in the process. I don't imagine that Java is any different from any other language in this respect (I'm no Java programmer).
So how about running the classes in separate Java processes? That way I presume the standard Eclipse tools would allow each developer to remote-attach and debug their own class.
However I presume that these classes need to interact with each other in some way, otherwise you wouldn't be asking your question in the first place. And running each class in a separate process (JVM) sounds like a bad thing as far as interaction is concerned.
So how about a different form of interaction where the process boundary between each class doesn't really matter that much? You could look at using JCSP which, as far as I can tell, doesn't really care if two threads are in the same process or not.
It's a completely different interaction model, based solely on synchronous message passing. You get some nice fringe benefits - scalability is suddenly no longer a massive problem, and it allows you to dodge many pitfalls normally associated with multithreaded programs (deadlock, etc). However if you've already written a large amount of code, adopting JCSP is probably a significant rewrite.
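Not JCSP itself, but a plain-JDK sketch of the rendezvous style of message passing it is built around: a SynchronousQueue has no capacity, so each put() blocks until the other side is ready to take(), regardless of which thread is on the other end.

    import java.util.concurrent.SynchronousQueue;

    public class RendezvousSketch {
        public static void main(String[] args) throws InterruptedException {
            SynchronousQueue<String> channel = new SynchronousQueue<>();

            Thread producer = new Thread(() -> {
                try {
                    channel.put("job-result");   // blocks until the consumer is ready to receive
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            producer.start();

            System.out.println(channel.take());  // the rendezvous happens here
            producer.join();
        }
    }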
Is that anywhere near the mark? Good luck.

How would I test a web server in Java?

I am basically practicing with Java socket programming by building a client and server (not necessarily an HTTP server). In brief, the clients send requests through sockets to the server and the server adds the requests to a task queue. The thread pool initially has a certain number of threads, and each free one is assigned to a runnable task from the task queue. My web server also has a simple storage component that stores and retrieves data from a file on disk. In this project, I have to take care of several concurrency issues.
Basically, I have to build client, server, thread pool, handler, storage. However, I want to test thoroughly in a good systematic way (unit test, integration test, etc.). I don't have much experience in testing so I am looking for pointers, methodologies, frameworks, or tutorials. (I use Ant to automate building, and initially consider JUnit and EasyMock for testing)
Before testing, I'd start by coding some rough-and-ready prototype code, just to see it working and to get a feel for the APIs I will be using.
Then introduce some unit tests with JUnit (there are other frameworks but JUnit is ubiquitous, and you'll find plenty of tutorials to get you started).
If your object needs to interact with some other objects to complete its tasks, then use mocks (EasyMock or whatever) to provide the interaction - this will probably lead to a bit of refactoring.
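To make the mock step concrete, a small sketch using JUnit 4 and EasyMock; Storage and RequestHandler are hypothetical stand-ins for your own storage and handler classes:

    import static org.easymock.EasyMock.createMock;
    import static org.easymock.EasyMock.expect;
    import static org.easymock.EasyMock.replay;
    import static org.easymock.EasyMock.verify;
    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class RequestHandlerTest {

        interface Storage {                                  // stand-in for your disk-backed storage
            String read(String key);
        }

        static class RequestHandler {                        // stand-in for your request handler
            private final Storage storage;
            RequestHandler(Storage storage) { this.storage = storage; }
            String handle(String request) { return "OK " + storage.read(request); }
        }

        @Test
        public void handlerReadsFromStorage() {
            Storage storage = createMock(Storage.class);
            expect(storage.read("key1")).andReturn("value1");
            replay(storage);

            assertEquals("OK value1", new RequestHandler(storage).handle("key1"));
            verify(storage);                                 // fails if the expected interaction didn't happen
        }
    }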
Once you are happy, you can start to look at testing how your objects interact; you can write new (integration) tests that replace the mocks with the real thing. Greater interaction results in greater complexity.
Some things to remember
trivial methods aren't worth testing (e.g. simple accessors)
100% coverage is a waste of time
any test is better than none
Unit test is easier to achieve than integration test
Not all tests are functional
Testing multi-threaded applications is hard
There is a book on how Google does testing. Basically they don't write tests until something looks viable. They have engineers who advise on how to structure code for testing. The point is:
Runnable code is the goal
Tests add to that goal, but do not replace it
Writing code that can be tested is a learnt skill

How to include performance goals in Java test suite?

Is there an easy way to automatically enforce goals like "This service must support 1,000 transactions per minute" in daily build tests for Java? Is this ever done in JUnit or are there caveats to it?
You can't do exactly what you are looking for, but you can do the following using JUnitPerf.
JUnitPerf tests are intended to be used specifically in situations where you have quantitative performance and/or scalability requirements that you'd like to keep in check while refactoring code. For example, you might write a JUnitPerf test to ensure that refactoring an algorithm didn't incur undesirable performance overhead in a performance-critical code section. You might also write a JUnitPerf test to ensure that refactoring a resource pool didn't adversely affect the scalability of the pool under load.
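For illustration, a sketch of how JUnitPerf decorators wrap an existing functional test (MyServiceTest is a placeholder for a plain JUnit 3-style test of your service, and the decorator signatures may vary slightly between JUnitPerf versions):

    import com.clarkware.junitperf.LoadTest;
    import com.clarkware.junitperf.TimedTest;
    import junit.framework.Test;
    import junit.framework.TestSuite;

    public class ServiceThroughputPerfTest {

        public static Test suite() {
            Test functionalTest = new TestSuite(MyServiceTest.class);  // the existing, plain functional test
            Test underLoad = new LoadTest(functionalTest, 10);         // simulate 10 concurrent users
            return new TimedTest(underLoad, 2000);                     // the whole run must finish within 2000 ms
        }
    }

A decorator like this turns the timing requirement into an ordinary pass/fail result, which is what lets it run as part of a daily build.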
That doesn't feel like a unit test to me. It could be a long-running transaction, one that will delay the completion of your build. You might reconsider if the total running time is longer than your build frequency.
It's not even clear to me how meaningful the question is, because it doesn't take into account simultaneous users.
You can easily do this kind of "poor man's" multi-threaded load test with TestNG. It's not possible in JUnit 4.4, but it might be in a later version.

Unit testing real-time / concurrent software [duplicate]

Possible Duplicate:
How should I unit test threaded code?
Classical unit testing is basically just putting x in and expecting y out, and automating that process. So it's good for testing anything that doesn't involve time. But then, most of the nontrivial bugs I've come across have had something to do with timing. Threads corrupt each other's data, or cause deadlocks. Nondeterministic behavior happens – in one run out of a million. Hard stuff.
Is there anything useful out there for "unit testing" parts of multithreaded, concurrent systems? How do such tests work? Isn't it necessary to run the subject of such test for a long time and vary the environment in some clever manner, to become reasonably confident that it works correctly?
Most of the work I do these days involves multi-threaded and/or distributed systems. The majority of bugs involve "happens-before" type errors, where the developer assumes (wrongly) that event A will always happen before event B. But every 1000000th time the program is run, event B happens first, and this causes unpredictable behavior.
Additionally, there aren't really any good tools to detect timing issues, or even data corruption caused by race conditions. Tools like Helgrind and drd from the Valgrind toolkit work great for trivial programs, but they are not very useful in diagnosing large, complex systems. For one thing, they report false positives quite frequently (Helgrind especially). For another thing, it's difficult to actually detect certain errors while running under Helgrind/drd simply because programs running under Helgrind run almost 1000x slower, and you often need to run a program for quite a long time to even reproduce the race condition. Additionally, since running under Helgrind totally changes the timing of the program, it may become impossible to reproduce a certain timing issue. That's the problem with subtle timing issues; they're almost Heisenbergian in the sense that altering a program to detect timing issues may obscure the original issue.
The sad fact is, the human race still isn't adequately prepared to deal with complex, concurrent software. So unfortunately, there's no easy way to unit-test it. For distributed systems especially, you should plan your program carefully using Lamport's happens-before diagrams to help you identify the necessary order of events in your program. But ultimately, you can't really get away from brute-force unit testing with randomly varying inputs. It also helps to vary the frequency of thread context-switching during your unit-test by, e.g. running another background process which just takes up CPU cycles. Also, if you have access to a cluster, you can run multiple unit-tests in parallel, which can detect bugs much quicker and save you a lot of time.
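One way to implement the context-switching trick from the last paragraph, sketched with plain JDK threads (CpuNoise and runConcurrentScenario() are made-up names):

    import java.util.ArrayList;
    import java.util.List;

    // Starts a few busy-spinning daemon threads so the scheduler has extra work to juggle,
    // which perturbs thread interleavings while a concurrency test runs.
    public class CpuNoise implements AutoCloseable {
        private final List<Thread> burners = new ArrayList<>();

        public CpuNoise(int threads) {
            for (int i = 0; i < threads; i++) {
                Thread t = new Thread(() -> {
                    while (!Thread.currentThread().isInterrupted()) {
                        Math.sqrt(Math.random());        // pointless work that just consumes CPU cycles
                    }
                });
                t.setDaemon(true);
                t.start();
                burners.add(t);
            }
        }

        @Override
        public void close() {
            burners.forEach(Thread::interrupt);
        }
    }

    // Inside a test:
    //   try (CpuNoise noise = new CpuNoise(4)) {
    //       runConcurrentScenario();                   // the scenario you are stress-testing
    //   }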
If you can run your tests under Linux, valgrind includes a tool called helgrind which purports to detect race conditions and potential deadlocks in programs that use pthreads; you might get some benefit from running your multithreaded code under that, since it will report potential errors even if they didn't actually occur in that particular test run.
I have never heard of anything that can.
I guess if someone was to design one, it would have to have exact control over the execution of the threads and execute all possible combinations of stepping of the threads.
Sounds like a major task, not to mention the combinatorial explosion of possible interleavings for non-trivially sized threads once there are a handful or more of them...
Although, a quick search of stackoverflow... Unit testing a multithreaded application?
If the system under test is simple enough, you could control the concurrency quite well by blocking operations in external mock systems. This blocking can be done, for example, by waiting for some other operation to be started. If you can control all external calls, this can work quite well by implementing different blocking sequences. I have tried this and it does reveal lock-level bugs quite well if you know the possible problematic sequences. And compared to many other kinds of concurrency testing it is quite deterministic. However, this approach doesn't detect low-level race conditions very well. I usually just go for load testing to find those, but I guess that isn't exactly unit testing.
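A sketch of that blocking-mock idea with a CountDownLatch (BlockingStubService and fetch() are invented names): the stub parks whichever worker calls it first, so the test can deliberately let a second worker overtake it before releasing the first.

    import java.util.concurrent.CountDownLatch;

    class BlockingStubService {
        final CountDownLatch firstCallStarted = new CountDownLatch(1);
        final CountDownLatch release = new CountDownLatch(1);

        String fetch(String key) throws InterruptedException {
            firstCallStarted.countDown();   // tell the test a worker has reached the external call
            release.await();                // hold it here until the test decides to let it continue
            return "value-for-" + key;
        }
    }

    // In the test: start worker A and wait for firstCallStarted, start worker B so it runs ahead of A,
    // then call release.countDown() and assert that the shared state is still consistent.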
I have seen such concurrency testing frameworks for .NET; I'd assume it's only a matter of time before someone writes one for Java (hopefully).
And not to forget good old code reading. One of the best ways to find concurrency bugs is to just read through the code once again giving it your full concentration.
Perhaps the answer is that you shouldn't. In concurrent systems, there may not always be a single deterministic answer that is correct.
Take the example of people boarding a train and choosing a seat. You are going to end up with different results every time.
Awaitility is a useful framework when you need to deal with asynchronicity in your tests. It allows you to wait until some state somewhere in your system is updated. For example:
await().untilCall( to(myService).myMethod(), equalTo(3) );
or
await().until( fieldIn(myObject).ofType(int.class), greaterThan(1));
It also has Scala and Groovy support.
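A slightly fuller sketch of typical Awaitility usage with JUnit 4 (the counter and background thread are just stand-ins for whatever asynchronous update your code performs; in older Awaitility versions the package is com.jayway.awaitility rather than org.awaitility):

    import static org.awaitility.Awaitility.await;
    import static org.hamcrest.Matchers.greaterThan;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicInteger;
    import org.junit.Test;

    public class AsyncUpdateTest {

        @Test
        public void backgroundThreadEventuallyUpdatesCounter() {
            AtomicInteger counter = new AtomicInteger();         // state updated asynchronously
            new Thread(counter::incrementAndGet).start();

            // Poll until the condition holds, failing the test if it takes longer than 5 seconds.
            await().atMost(5, TimeUnit.SECONDS).until(counter::get, greaterThan(0));
        }
    }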
