Unit testing for void methods and threads in JUnit - java

I'm new to unit testing. I understand the principles, but I still can't figure out how to test my current project. I need to test void methods that operate on java.nio.SocketChannel. These methods are:
- initSelector, where I open a selector, bind a new ServerSocketChannel and register it
- read, which reads data and puts it on a queue (should I write an extra method to verify that the data actually ends up in the queue? And in that case, should I write tests for that method as well?)
- write, which takes data from a queue and writes it to a SocketChannel
I can test that these methods don't throw IOException, but what else?
And how should I test the run() method of a Thread? Or is that not unit testing, but system testing or something else?

Basically, you have two possibilities:
if you want to thoroughly unit test these methods, you should hide the concrete (hardware-dependent) components like sockets behind mockable interfaces, and use mocks in the unit tests to verify that the expected calls with the expected parameters are made on these objects
or you can write integration / system tests using the real sockets within the whole component / app, to verify that the correct sockets are opened, data is transferred properly etc.
Ideally, you should do both, but in the real world, unit testing may not always be feasible, especially for such low-level methods, which depend on some external software/hardware component like sockets, DBs, the file system etc. Then the best approach is to leave as little logic (and thus as few possibilities for failure) in these methods / layers as possible, and abstract the logic out into a higher layer, designed to be unit testable (using mockable interfaces as mentioned above).
To test threads, you can just run them from your unit test as usual. Then you will most likely need to wait for some time before trying to get and verify the results produced by the thread.
Again, if you abstract away the actual logic of the task into e.g. a Callable or Runnable, you can unit test it in isolation much easier. And this also enables you to use the Executor framework (now or later), which makes dealing with concurrency much easier and safer.
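As an illustration, here is a minimal sketch of that kind of abstraction; the class name (WriteTask), the queue element type and the use of WritableByteChannel are assumptions standing in for your own code:

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;
import java.util.concurrent.BlockingQueue;

// Hypothetical task object: one iteration of the write loop, pulled out of the thread's run() method.
public class WriteTask implements Runnable {
    private final BlockingQueue<ByteBuffer> queue;
    private final WritableByteChannel channel;

    public WriteTask(BlockingQueue<ByteBuffer> queue, WritableByteChannel channel) {
        this.queue = queue;
        this.channel = channel;
    }

    @Override
    public void run() {
        ByteBuffer next = queue.poll();
        if (next != null) {
            try {
                channel.write(next);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }
    }
}

// In a test you call run() synchronously - no thread, no sleeping:
//   queue.add(ByteBuffer.wrap("data".getBytes()));
//   new WriteTask(queue, mockChannel).run();
//   verify(mockChannel).write(any(ByteBuffer.class));

The same task object can later be handed to a Thread or an Executor in production code, while the unit test exercises it directly.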

So first, if you are using a real SocketChannel in your unit test, it is not a unit test. You should use a mock (consider Mockito) for the SocketChannel. Doing so will allow you to provide a controlled stream of bytes to the method under test and verify what bytes are passed to the channel.
If your class is creating the instance of the SocketChannel, consider changing the class to accept a SocketChannelFactory. Then you can inject a SocketChannelFactory mock which returns a SocketChannel mock.
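A minimal sketch of what such a test could look like with Mockito; SocketChannelFactory, ConnectionHandler, read() and pollFromQueue() are hypothetical names standing in for your own classes and methods:

import static org.junit.Assert.assertArrayEquals;
import static org.mockito.Mockito.*;

import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import org.junit.Test;

public class ConnectionHandlerTest {

    @Test
    public void readPutsReceivedBytesOnQueue() throws Exception {
        SocketChannel channel = mock(SocketChannel.class);
        // Simulate the channel delivering "hi" on the first read, then end-of-stream.
        when(channel.read(any(ByteBuffer.class)))
                .thenAnswer(invocation -> {
                    ByteBuffer buf = invocation.getArgument(0);
                    buf.put((byte) 'h').put((byte) 'i');
                    return 2;
                })
                .thenReturn(-1);

        // The factory mock lets the class under test "open" a channel without any real socket.
        SocketChannelFactory factory = mock(SocketChannelFactory.class);
        when(factory.create()).thenReturn(channel);

        ConnectionHandler handler = new ConnectionHandler(factory);
        handler.read();

        // Verify the observable result: the bytes ended up on the queue.
        assertArrayEquals(new byte[] {'h', 'i'}, handler.pollFromQueue());
    }
}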
You can just call run() directly in your unit test.
Mockito: https://site.mockito.org/

run() is a method like any other, so you should just be able to call it from a unit test (unless it runs in an endless loop, of course - in that case you might want to test the methods that run() calls instead).
For the SocketChannel I'd say you don't want to test the SocketChannel itself; you want to test how your code interacts with the SocketChannel given a certain set of start conditions. So you could look into creating a mock for it, and having your code talk to the mock. That way you can verify if your code is interacting with the channel in the way you expect (read(), write() and so on).
Check out http://code.google.com/p/powermock/ for example.

Related

How to test Flink Global Window with Trigger And Evictor

I have a pipeline which uses a Flink Global Window with a custom Trigger based on event time (from the timestamp on the arriving element) and an Evictor which cuts unnecessary elements from the window and passes them to the ProcessFunction,
something like:
public SingleOutputStreamOperator<Results> processElements(DataStream<Elements> inputStream) {
    return inputStream
            .keyBy(Elements::getId)
            .window(GlobalWindows.create())
            .trigger(new CustomTrigger())
            .evictor(new CustomEvictor())
            .process(new MyWindowProcessFunction())
            .name("Process")
            .uid("process-elements")
            .returns(Results.class);
}

public void executePipelineFlow(StreamExecutionEnvironment env) throws Exception {
    DataStream<Elements> inputStream = getInputStream(env);
    DataStream<Results> processedInput = processElements(inputStream);
    applySink(processedInput);
}
I know I can test MyWindowProcessFunction with a TestHarness, which provides watermark manipulation, but I need to test the whole flow: Trigger + Evictor + ProcessFunction.
I also tried some kind of timed SourceFunction using Thread.sleep(), but my pipeline works in event time, and this won't work if I have 1000 elements in the test stream (the test would take a couple of hours).
My question is: how can I unit test my whole processElements method?
I can't find any test examples for my case.
Thanks
You might look at how the end-to-end integration tests for the windowing exercise in the Flink training are implemented as an example. This exercise isn't using GlobalWindows or custom triggering, etc, but you can use this overall approach to test any pipeline.
The one thing that's maybe less than ideal about this approach is how it handles watermarking. The applications being tested are using the default periodic watermarking strategy, wherein watermarks are generated every 200msec. Since the tests don't run that long, the only watermark that's actually generated is the one that comes at the end of every job with bounded inputs. This works, but isn't quite the same as what will happen in production. (Is this why you were thinking of having your test source sleep between events?)
BTW, these tests in the Flink training repo are made slightly more complex than is ordinarily necessary, because these tests are used to provide coverage for the Java and the Scala implementations of both the exercises and solutions.
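For a rough idea of the shape of such a test, here is a minimal sketch using a bounded in-memory source and a collecting sink. It assumes processElements() from the question is reachable from the test (MyPipeline is a placeholder for the class that holds it), and the element(...) helper, the Elements timestamp accessor and the final assertion are placeholders for your own types and expectations:

import static org.junit.Assert.assertFalse;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.SinkFunction;
import org.junit.Test;

public class ProcessElementsTest {

    @Test
    public void wholeFlowFiresOnEventTime() throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);

        // Bounded input: the job finishes after these elements, which emits a final
        // watermark, so event-time triggers fire without any Thread.sleep().
        // element(id, timestamp) is a placeholder factory for your Elements type.
        DataStream<Elements> input = env
                .fromElements(element(1, 1000L), element(1, 2000L), element(1, 3000L))
                .assignTimestampsAndWatermarks(
                        WatermarkStrategy.<Elements>forMonotonousTimestamps()
                                .withTimestampAssigner((e, ts) -> e.getTimestamp()));

        new MyPipeline().processElements(input).addSink(new CollectSink());

        env.execute();

        // Assert on whatever Results your Trigger + Evictor + ProcessFunction should produce.
        assertFalse(CollectSink.VALUES.isEmpty());
    }

    // Simple sink that collects results into a static list so the test can inspect them.
    private static class CollectSink implements SinkFunction<Results> {
        static final List<Results> VALUES = Collections.synchronizedList(new ArrayList<>());

        @Override
        public void invoke(Results value, Context context) {
            VALUES.add(value);
        }
    }
}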

Akka Streams - define timeout for mapAsync

I am using Akka Streams with an external service whose methods take time.
I don't want these methods to block, so I defined a flow which uses mapAsyncUnordered as follows:
Flow.of(SomeClass.class).mapAsyncUnordered(numThreads, someService::someMethod);
I want to define a timeout for the method running in the mapAsyncUnordered stage, so that if the method takes too much time, it won't fill the mapAsyncUnordered queue and prevent others from passing.
The only option I saw is to define a separate actor and use its "ask" method, but as described in the documentation, if the timeout is reached, the whole stream is terminated with a failure, and that is not good for me (even though it can be recovered, I can't lose data).
https://doc.akka.io/docs/akka/current/stream/futures-interop.html
Is there another option?

JUnit 5: Difference between BeforeEachCallback and BeforeTestExecutionCallback

I can't find any resources explaining what exactly the difference between BeforeEachCallback and BeforeTestExecutionCallback in the JUnit Jupiter extension model is. (I am of course also interested in the "After" variants.)
To my understanding, the following timeline describes what is happening:
BeforeEach - BeforeTestExecution - Actual execution of the test - AfterTestExecution - AfterEach
I suppose that BeforeTestExecution exists so you can execute code after all the BeforeEach callbacks have run but before the actual test execution. However, this is still unclear to me, because everyone could just use BeforeTestExecution instead of BeforeEach, and then the order of execution of these callbacks would be effectively random again.
So what exactly is BeforeTestExecution for, and what happens if you use this callback in multiple extensions at the same time?
The Javadocs (here and here) don't make a clear distinction between them, but the JUnit 5 docs include the following:
BeforeTestExecutionCallback and AfterTestExecutionCallback define the APIs for Extensions that wish to add behavior that will be executed immediately before and immediately after a test method is executed, respectively. As such, these callbacks are well suited for timing, tracing, and similar use cases. If you need to implement callbacks that are invoked around @BeforeEach and @AfterEach methods, implement BeforeEachCallback and AfterEachCallback instead.
So, if you want to wrap just the test execution without any of the setup then use BeforeTestExecutionCallback. The docs go on to suggest timing and logging test execution as possible use cases for BeforeTestExecutionCallback.
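As a concrete illustration of the timing use case, here is a minimal sketch of such an extension (the namespace key and output format are just illustrative):

import org.junit.jupiter.api.extension.AfterTestExecutionCallback;
import org.junit.jupiter.api.extension.BeforeTestExecutionCallback;
import org.junit.jupiter.api.extension.ExtensionContext;

public class TimingExtension implements BeforeTestExecutionCallback, AfterTestExecutionCallback {

    private static final ExtensionContext.Namespace NAMESPACE =
            ExtensionContext.Namespace.create(TimingExtension.class);

    @Override
    public void beforeTestExecution(ExtensionContext context) {
        // Runs after all @BeforeEach methods, immediately before the test method itself.
        context.getStore(NAMESPACE).put("start", System.currentTimeMillis());
    }

    @Override
    public void afterTestExecution(ExtensionContext context) {
        // Runs immediately after the test method, before any @AfterEach methods.
        long start = context.getStore(NAMESPACE).remove("start", long.class);
        System.out.printf("%s took %d ms%n",
                context.getRequiredTestMethod().getName(),
                System.currentTimeMillis() - start);
    }
}

You register it on a test class with @ExtendWith(TimingExtension.class).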

Unit test consumers in Vertx

I have a snippet of code that I want to unit test.
this.vertx.eventBus().consumer(VERTICLE_ID).toObservable()
        .subscribe(msg -> doSomethingCool());
and my consumer method:
private void doSomethingCool() {
    // Some cool stuff.
}
Now I want to unit test doSomethingCool() without using PowerMockito (I want to have code coverage), and I don't want to make my method visible (public). How can I do that? Is there any hook in Vert.x to do that?
It is actually hard to tell how you should write your test when nothing is known about the purpose of doSomethingCool:
Does it return a value? (i.e. via msg.reply())
Does it modify state of the Verticle? or global state?
Does it make a downstream call?
Does your method invoke a handler once it's done with whatever it does?
A unit test should verify an observable result. So write your unit test to verify one of these outcomes.
In case a handler is invoked, you could work with the vertx-unit TestContext and count down an Async, as sketched below.
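For example, assuming doSomethingCool() is changed to reply to the message (msg.reply(...)), or you verify some other observable effect instead, a test along these lines could work; MyVerticle, the address constant and the payload are placeholders, and eventBus().request(...) assumes a recent Vert.x version (on older versions use send(...) with a reply handler):

import io.vertx.core.Vertx;
import io.vertx.ext.unit.Async;
import io.vertx.ext.unit.TestContext;
import io.vertx.ext.unit.junit.VertxUnitRunner;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(VertxUnitRunner.class)
public class MyVerticleTest {

    // Same address as VERTICLE_ID in the verticle under test.
    private static final String ADDRESS = "verticle.id";

    private Vertx vertx;

    @Before
    public void setUp(TestContext context) {
        vertx = Vertx.vertx();
        // MyVerticle is the verticle that registers the consumer from the question.
        vertx.deployVerticle(new MyVerticle(), context.asyncAssertSuccess());
    }

    @After
    public void tearDown(TestContext context) {
        vertx.close(context.asyncAssertSuccess());
    }

    @Test
    public void consumerDoesSomethingCool(TestContext context) {
        Async async = context.async();
        // Drive the private consumer through the event bus and assert on its observable result.
        vertx.eventBus().request(ADDRESS, "payload", reply -> {
            context.assertTrue(reply.succeeded());
            async.complete();
        });
    }
}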
... and stay away from PowerMockito.

Cleanup after each test method in testng framework

I have 100 test methods. After each test, I need to perform some actions (data cleanup). Each of these 100 tests has different actions. These 100 tests are not in one package or class; they are distributed.
How can I achieve this?
Right now, if a test passes, the cleanup happens, since it is part of the test. However, if the test fails, the cleanup doesn't happen. How can I make this work?
Any pointers would help.
If the tests do not have any common cleanup, you can ensure the test gets cleaned up from within the test method using a try/finally block, something like:
try {
    // do test
}
finally {
    // do cleanup
}
If there is any common cleanup between the test methods, you could use @AfterMethod to do the cleanup.
In your case, it doesn't sound like there is much common cleanup, so the first may work better for you. It might also be worth considering if you need 100 different cleanup methods or if there can be any common setup/cleanup.
@AfterMethod would mean that every class needs to get this method, so you would have to go and edit each class/method. The same goes for @AfterGroups.
What I would suggest is to implement the IInvokedMethodListener. This would give you beforeInvocation and afterInvocation methods. In the afterInvocation method, implement your cleanup code.
Create a suite file with all of your tests which need this cleanup and specify this listener.
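A minimal sketch of such a listener; how the per-test cleanup actions are looked up (CleanupRegistry here) is entirely up to you and is just a hypothetical placeholder:

import org.testng.IInvokedMethod;
import org.testng.IInvokedMethodListener;
import org.testng.ITestResult;

public class CleanupListener implements IInvokedMethodListener {

    @Override
    public void beforeInvocation(IInvokedMethod method, ITestResult testResult) {
        // Nothing to do before the test.
    }

    @Override
    public void afterInvocation(IInvokedMethod method, ITestResult testResult) {
        if (method.isTestMethod()) {
            // Runs whether the test passed or failed.
            CleanupRegistry.cleanupFor(
                    testResult.getTestClass().getRealClass(),
                    method.getTestMethod().getMethodName());
        }
    }
}

You can then register it in the suite XML with a <listeners> element (or with the @Listeners annotation) so it applies to every test in the suite.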
Hope it helps.
It sounds like you may already be using @AfterMethod to clean up after the tests. To make @AfterMethod run even after a failure, you need to use:
@AfterMethod(alwaysRun=true)
You can use groups and run an @AfterGroups method somewhere; there's a @BeforeGroups as well. Setting it up with build tooling is a bit tedious, and there are some interactions with IDEs too. There are also @BeforeSuite and @AfterSuite, I believe.
An alternative could be to use Spring and share the same Spring context across all your tests (the context gets reused that way). You can then do some cleanup when the context is destroyed after your tests.
