I am using Apache HttpClient 4 to communicate with a REST API and most of the time I do lengthy PUT operations. Since these may happen over an unstable Internet connection I need to detect if the connection is interrupted and possibly need to retry (with a resume request).
To try my routines in the real world I started a PUT operation and then I flipped the Wi-Fi switch of my laptop, causing an immediate total interruption of any data flow. However it takes a looong time (maybe 5 minutes or so) until eventually a SocketException is thrown.
How can I speed up the process? I'd like to set a timeout of around 30 seconds.
Update:
To clarify, my request is a PUT operation. So for a very long time (possibly hours) the only operation is a write() operation and there are no read operations. There is a timeout setting for read() operations, but I could not find one for write operations.
I am using my own Entity implementation and thus write directly to an OutputStream, which blocks almost immediately once the Internet connection is interrupted. If OutputStream had a timeout parameter, so that I could call out.write(nextChunk, 30000), I could detect such a problem myself. I actually tried that:
public class TimeoutHttpEntity extends HttpEntityWrapper {
public TimeoutHttpEntity(HttpEntity wrappedEntity) {
super(wrappedEntity);
}
@Override
public void writeTo(OutputStream outstream) throws IOException {
try(TimeoutOutputStreamWrapper wrapper = new TimeoutOutputStreamWrapper(outstream, 30000)) {
super.writeTo(wrapper);
}
}
}
public class TimeoutOutputStreamWrapper extends OutputStream {
private final OutputStream delegate;
private final long timeout;
private final ExecutorService executorService = Executors.newSingleThreadExecutor();
public TimeoutOutputStreamWrapper(OutputStream delegate, long timeout) {
this.delegate = delegate;
this.timeout = timeout;
}
@Override
public void write(int b) throws IOException {
executeWithTimeout(() -> {
delegate.write(b);
return null;
});
}
@Override
public void write(byte[] b) throws IOException {
executeWithTimeout(() -> {
delegate.write(b);
return null;
});
}
@Override
public void write(byte[] b, int off, int len) throws IOException {
executeWithTimeout(() -> {
delegate.write(b, off, len);
return null;
});
}
@Override
public void close() throws IOException {
try {
executeWithTimeout(() -> {
delegate.close();
return null;
});
} finally {
executorService.shutdown();
}
}
private void executeWithTimeout(final Callable<?> task) throws IOException {
try {
executorService.submit(task).get(timeout, TimeUnit.MILLISECONDS);
} catch (TimeoutException e) {
throw new IOException(e);
} catch (ExecutionException e) {
final Throwable cause = e.getCause();
if (cause instanceof IOException) {
throw (IOException)cause;
}
throw new Error(cause);
} catch (InterruptedException e) {
throw new Error(e);
}
}
}
public class TimeoutOutputStreamWrapperTest {
private static final byte[] DEMO_ARRAY = new byte[]{1,2,3};
private TimeoutOutputStreamWrapper streamWrapper;
private OutputStream delegateOutput;
public void setUp(long timeout) {
delegateOutput = mock(OutputStream.class);
streamWrapper = new TimeoutOutputStreamWrapper(delegateOutput, timeout);
}
@AfterMethod
public void teardown() throws Exception {
streamWrapper.close();
}
@Test
public void write_writesByte() throws Exception {
// Setup
setUp(Long.MAX_VALUE);
// Execution
streamWrapper.write(DEMO_ARRAY);
// Evaluation
verify(delegateOutput).write(DEMO_ARRAY);
}
@Test(expectedExceptions = DemoIOException.class)
public void write_passesThruException() throws Exception {
// Setup
setUp(Long.MAX_VALUE);
doThrow(DemoIOException.class).when(delegateOutput).write(DEMO_ARRAY);
// Execution
streamWrapper.write(DEMO_ARRAY);
// Evaluation performed by expected exception
}
@Test(expectedExceptions = IOException.class)
public void write_throwsIOException_onTimeout() throws Exception {
// Setup
final CountDownLatch executionDone = new CountDownLatch(1);
setUp(100);
doAnswer(new Answer<Void>() {
@Override
public Void answer(InvocationOnMock invocation) throws Throwable {
executionDone.await();
return null;
}
}).when(delegateOutput).write(DEMO_ARRAY);
// Execution
try {
streamWrapper.write(DEMO_ARRAY);
} finally {
executionDone.countDown();
}
// Evaluation performed by expected exception
}
public static class DemoIOException extends IOException {
}
}
This is somewhat complicated, but it works quite well in my unit tests. It works in real life as well, except that HttpRequestExecutor catches the exception (in line 127 of its source) and tries to close the connection. However, closing the connection first tries to flush it, which blocks again.
I might be able to dig deeper into HttpClient and figure out how to prevent this flush, but the solution is already not pretty, and it is about to get even worse.
UPDATE:
It looks like this can't be done on the Java level. Can I do it on another level? (I am using Linux).
Java blocking I/O does not support socket timeout for write operations. You are entirely at the mercy of the OS / JRE to unblock the thread blocked by the write operation. Moreover, this behavior tends to be OS / JRE specific.
This might be a legitimate case to consider using an HTTP client based on non-blocking I/O (NIO), such as Apache HttpAsyncClient.
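For illustration, here is a minimal sketch of a PUT with Apache HttpAsyncClient 4.x; the URL, payload, and timeout value are placeholders, and the expectation is that the I/O reactor's socket timeout (an inactivity timeout on the session) also covers a stalled write:
import java.io.File;
import java.util.concurrent.Future;
import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpPut;
import org.apache.http.entity.FileEntity;
import org.apache.http.impl.nio.client.CloseableHttpAsyncClient;
import org.apache.http.impl.nio.client.HttpAsyncClients;
import org.apache.http.impl.nio.reactor.IOReactorConfig;
public class AsyncPutSketch {
    public static void main(String[] args) throws Exception {
        // Socket timeout enforced by the I/O reactor, not by a blocking read()
        IOReactorConfig ioConfig = IOReactorConfig.custom()
                .setSoTimeout(30_000) // 30 seconds
                .build();
        try (CloseableHttpAsyncClient client = HttpAsyncClients.custom()
                .setDefaultIOReactorConfig(ioConfig)
                .build()) {
            client.start();
            HttpPut put = new HttpPut("http://example.com/upload"); // placeholder URL
            put.setEntity(new FileEntity(new File("payload.bin"))); // placeholder payload
            Future<HttpResponse> future = client.execute(put, null);
            System.out.println(future.get().getStatusLine()); // fails fast once the timeout fires
        }
    }
}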
You can configure the socket timeout using RequestConfig:
RequestConfig myRequestConfig = RequestConfig.custom()
.setSocketTimeout(5000) // 5 seconds
.build();
Then, when you make the call, just assign your new configuration. For instance:
HttpPut httpPut = new HttpPut("...");
httpPut.setConfig(myRequestConfig);
...
HttpClientContext context = HttpClientContext.create();
....
httpclient.execute(httpPut, context);
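Alternatively (not shown in the answer above, but standard HttpClient 4.3+ API), you can make the configuration the client-wide default when building the client:
CloseableHttpClient httpclient = HttpClients.custom()
        .setDefaultRequestConfig(myRequestConfig)
        .build();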
For more information regarding timeout configurations, there is a good explanation here.
Here is one of the links I came across that talks about the connection eviction policy: here
public static class IdleConnectionMonitorThread extends Thread {
private final HttpClientConnectionManager connMgr;
private volatile boolean shutdown;
public IdleConnectionMonitorThread(HttpClientConnectionManager connMgr) {
super();
this.connMgr = connMgr;
}
@Override
public void run() {
try {
while (!shutdown) {
synchronized (this) {
wait(5000);
// Close expired connections
connMgr.closeExpiredConnections();
// Optionally, close connections
// that have been idle longer than 30 sec
connMgr.closeIdleConnections(30, TimeUnit.SECONDS);
}
}
} catch (InterruptedException ex) {
// terminate
}
}
public void shutdown() {
shutdown = true;
synchronized (this) {
notifyAll();
}
}
}
I think you might want to look at this.
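For completeness, here is a short usage sketch of my own (assuming a standard PoolingHttpClientConnectionManager) showing how such a monitor thread is typically started and shut down:
PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
CloseableHttpClient httpClient = HttpClients.custom()
        .setConnectionManager(cm)
        .build();
IdleConnectionMonitorThread monitor = new IdleConnectionMonitorThread(cm);
monitor.start();
try {
    // ... issue requests with httpClient ...
} finally {
    monitor.shutdown();
    httpClient.close();
}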
I use the AsyncHttpClient library for async non-blocking requests.
My case: write data to a file as it is received over the network.
To download a file from a remote host and save it to disk I used the default ResponseBodyPartFactory.EAGER and an AsynchronousFileChannel, so as not to block the Netty thread as data arrives. But as my measurements showed, compared with LAZY the memory consumption in the Java heap increases many times over.
So I decided to switch to LAZY, but did not consider the consequences for the files.
This code reproduces the problem:
public static class AsyncChannelWriter {
private final CompletableFuture<Integer> startPosition;
private final AsynchronousFileChannel channel;
public AsyncChannelWriter(AsynchronousFileChannel channel) throws IOException {
this.channel = channel;
this.startPosition = CompletableFuture.completedFuture((int) channel.size());
}
public CompletableFuture<Integer> getStartPosition() {
return startPosition;
}
public CompletableFuture<Integer> write(ByteBuffer byteBuffer, CompletableFuture<Integer> currentPosition) {
return currentPosition.thenCompose(position -> {
CompletableFuture<Integer> writenBytes = new CompletableFuture<>();
channel.write(byteBuffer, position, null, new CompletionHandler<Integer, ByteBuffer>() {
@Override
public void completed(Integer result, ByteBuffer attachment) {
writenBytes.complete(result);
}
@Override
public void failed(Throwable exc, ByteBuffer attachment) {
writenBytes.completeExceptionally(exc);
}
});
return writenBytes.thenApply(writenBytesLength -> writenBytesLength + position);
});
}
public void close(CompletableFuture<Integer> currentPosition) {
currentPosition.whenComplete((position, throwable) -> IOUtils.closeQuietly(channel));
}
}
public static void main(String[] args) throws IOException {
final String filepath = "/media/veracrypt4/files/1.jpg";
final String downloadUrl = "https://m0.cl/t/butterfly-3000.jpg";
final AsyncHttpClient client = Dsl.asyncHttpClient(Dsl.config().setFollowRedirect(true)
.setResponseBodyPartFactory(AsyncHttpClientConfig.ResponseBodyPartFactory.LAZY));
final AsynchronousFileChannel channel = AsynchronousFileChannel.open(Paths.get(filepath), StandardOpenOption.WRITE, StandardOpenOption.TRUNCATE_EXISTING, StandardOpenOption.CREATE);
final AsyncChannelWriter asyncChannelWriter = new AsyncChannelWriter(channel);
final AtomicReference<CompletableFuture<Integer>> atomicReferencePosition = new AtomicReference<>(asyncChannelWriter.getStartPosition());
client.prepareGet(downloadUrl)
.execute(new AsyncCompletionHandler<Response>() {
@Override
public State onBodyPartReceived(HttpResponseBodyPart content) throws Exception {
//if EAGER, content.getBodyByteBuffer() return HeapByteBuffer, if LAZY, return DirectByteBuffer
final ByteBuffer bodyByteBuffer = content.getBodyByteBuffer();
final CompletableFuture<Integer> currentPosition = atomicReferencePosition.get();
final CompletableFuture<Integer> newPosition = asyncChannelWriter.write(bodyByteBuffer, currentPosition);
atomicReferencePosition.set(newPosition);
return State.CONTINUE;
}
@Override
public Response onCompleted(Response response) {
asyncChannelWriter.close(atomicReferencePosition.get());
return response;
}
});
}
In this case, the picture comes out broken. But if I use FileChannel instead of AsynchronousFileChannel, the files come out fine in both cases. Are there any nuances when working with a DirectByteBuffer (as returned by LazyResponseBodyPart.getBodyByteBuffer()) and AsynchronousFileChannel?
What could be wrong with my code, given that everything works fine with EAGER?
UPDATE
As I noticed, if I use LAZY and, for example, add the line Thread.sleep(10) in the onBodyPartReceived method, like this:
@Override
public State onBodyPartReceived(HttpResponseBodyPart content) throws Exception {
final ByteBuffer bodyByteBuffer = content.getBodyByteBuffer();
final CompletableFuture<Integer> currentPosition = atomicReferencePosition.get();
final CompletableFuture<Integer> newPosition = finalAsyncChannelWriter.write(bodyByteBuffer, currentPosition);
atomicReferencePosition.set(newPosition);
Thread.sleep(10);
return State.CONTINUE;
}
The file is saved to disk intact.
As I understand it, the reason is that during these 10 milliseconds, the asynchronous thread in AsynchronousFileChannel manages to write data to the disk from this DirectByteBuffer.
It turns out the file gets corrupted because this asynchronous thread uses the buffer for writing at the same time as the Netty thread.
If we take a look at the source code of EagerResponseBodyPart, we see the following:
private final byte[] bytes;
public EagerResponseBodyPart(ByteBuf buf, boolean last) {
super(last);
bytes = byteBuf2Bytes(buf);
}
@Override
public ByteBuffer getBodyByteBuffer() {
return ByteBuffer.wrap(bytes);
}
Thus, when a piece of data arrives, it is immediately copied into a byte array. We can then safely wrap it in a HeapByteBuffer and hand it over to the asynchronous file channel thread.
But if you look at the code of LazyResponseBodyPart:
private final ByteBuf buf;
public LazyResponseBodyPart(ByteBuf buf, boolean last) {
super(last);
this.buf = buf;
}
@Override
public ByteBuffer getBodyByteBuffer() {
return buf.nioBuffer();
}
As you can see, in the asynchronous file channel thread we actually use Netty's ByteBuf (in this case always a PooledSlicedByteBuf) via its nioBuffer() method.
What can I do in this situation? How can I safely pass the DirectByteBuffer to an async thread without copying the buffer to the Java heap?
I talked to the maintainer of AsyncHttpClient; see here.
The main problem was that I didn't use Netty ByteBuf's retain and release methods.
In the end, I came to two solutions.
First: write the bytes to the file in sequence, tracking the position with a CompletableFuture.
Define a wrapper class for AsynchronousFileChannel:
@Log4j2
public class AsyncChannelNettyByteBufWriter implements Closeable {
private final AtomicReference<CompletableFuture<Long>> positionReference;
private final AsynchronousFileChannel channel;
public AsyncChannelNettyByteBufWriter(AsynchronousFileChannel channel) {
this.channel = channel;
try {
this.positionReference = new AtomicReference<>(CompletableFuture.completedFuture(channel.size()));
} catch (IOException e) {
throw new UncheckedIOException(e);
}
}
public CompletableFuture<Long> write(ByteBuf byteBuffer) {
final ByteBuf byteBuf = byteBuffer.retain();
return positionReference.updateAndGet(x -> x.thenCompose(position -> {
final CompletableFuture<Integer> writenBytes = new CompletableFuture<>();
channel.write(byteBuf.nioBuffer(), position, byteBuf, new CompletionHandler<Integer, ByteBuf>() {
@Override
public void completed(Integer result, ByteBuf attachment) {
attachment.release();
writenBytes.complete(result);
}
@Override
public void failed(Throwable exc, ByteBuf attachment) {
attachment.release();
log.error(exc);
writenBytes.completeExceptionally(exc);
}
});
return writenBytes.thenApply(writenBytesLength -> writenBytesLength + position);
}));
}
public void close() {
positionReference.updateAndGet(x -> x.whenComplete((position, throwable) -> {
try {
channel.close();
} catch (IOException e) {
throw new UncheckedIOException(e);
}
}));
}
}
In fact, the AtomicReference is probably unnecessary if the writing happens from a single thread; if it happens from several threads, synchronization needs to be taken seriously.
And the main usage:
public static void main(String[] args) throws IOException {
final String filepath = "1.jpg";
final String downloadUrl = "https://m0.cl/t/butterfly-3000.jpg";
final AsyncHttpClient client = Dsl.asyncHttpClient(Dsl.config().setFollowRedirect(true)
.setResponseBodyPartFactory(AsyncHttpClientConfig.ResponseBodyPartFactory.LAZY));
final AsynchronousFileChannel channel = AsynchronousFileChannel.open(Paths.get(filepath), StandardOpenOption.WRITE, StandardOpenOption.TRUNCATE_EXISTING, StandardOpenOption.CREATE);
final AsyncChannelNettyByteBufWriter asyncChannelNettyByteBufWriter = new AsyncChannelNettyByteBufWriter(channel);
client.prepareGet(downloadUrl)
.execute(new AsyncCompletionHandler<Response>() {
@Override
public State onBodyPartReceived(HttpResponseBodyPart content) {
final ByteBuf byteBuf = ((LazyResponseBodyPart) content).getBuf();
asyncChannelNettyByteBufWriter.write(byteBuf);
return State.CONTINUE;
}
@Override
public Response onCompleted(Response response) {
asyncChannelNettyByteBufWriter.close();
return response;
}
});
}
The second solution: track the position based on the number of bytes received.
public static void main(String[] args) throws IOException {
final String filepath = "1.jpg";
final String downloadUrl = "https://m0.cl/t/butterfly-3000.jpg";
final AsyncHttpClient client = Dsl.asyncHttpClient(Dsl.config().setFollowRedirect(true)
.setResponseBodyPartFactory(AsyncHttpClientConfig.ResponseBodyPartFactory.LAZY));
final ExecutorService executorService = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors() * 2);
final AsynchronousFileChannel channel = AsynchronousFileChannel.open(Paths.get(filepath), new HashSet<>(Arrays.asList(StandardOpenOption.WRITE, StandardOpenOption.TRUNCATE_EXISTING, StandardOpenOption.CREATE)), executorService);
client.prepareGet(downloadUrl)
.execute(new AsyncCompletionHandler<Response>() {
private long position = 0;
@Override
public State onBodyPartReceived(HttpResponseBodyPart content) {
final ByteBuf byteBuf = ((LazyResponseBodyPart) content).getBuf().retain();
long currentPosition = position;
position+=byteBuf.readableBytes();
channel.write(byteBuf.nioBuffer(), currentPosition, byteBuf, new CompletionHandler<Integer, ByteBuf>() {
@Override
public void completed(Integer result, ByteBuf attachment) {
attachment.release();
if(content.isLast()){
try {
channel.close();
} catch (IOException e) {
throw new UncheckedIOException(e);
}
}
}
@Override
public void failed(Throwable exc, ByteBuf attachment) {
attachment.release();
try {
channel.close();
} catch (IOException e) {
throw new UncheckedIOException(e);
}
}
});
return State.CONTINUE;
}
@Override
public Response onCompleted(Response response) {
return response;
}
});
}
In the second solution, because we don't wait until the bytes are written to the file, AsynchronousFileChannel can create a lot of threads (at least on Linux, because Linux does not implement non-blocking asynchronous file I/O; on Windows the situation is much better).
As my measurements showed, when writing to a slow USB flash drive the number of threads can reach tens of thousands, so you need to limit them by creating your own ExecutorService and passing it to AsynchronousFileChannel.
Are there obvious advantages and disadvantages to the first or second solution? It's hard for me to say. Maybe someone can tell which is more efficient.
We have a streaming application that takes data from MQTT and loads it into another resource. This application has multiple threads to handle its tasks.
Here we have two tasks (threads):
The first one is a READER.
The second one is a WRITER.
The READER reads data from the MQTT broker and writes it to a Java queue, and the WRITER takes this data from that queue and writes it to a database. The application itself monitors these threads to detect any failure. If one of the threads fails, we stop the remaining threads gracefully. The Paho MqttClient class (the READER class) is not a thread we create ourselves, even though it is thread-based; it creates multiple threads in the background.
Because of this we cannot check whether these threads have failed or are still running with Java's isAlive() method. So we just check whether the class still has a connection via MqttClient's isConnected() method. Once isConnected() returns false (5 times), we stop the WRITER thread gracefully. But the READER's background threads cannot be stopped. I have tried the disconnect() and close() methods, but they do not stop any of the background threads; an error is thrown saying the disconnected threads could not be stopped.
Can anybody please help?
What you suggest sounds like an awkward design.
Why not just use the Paho callbacks, in particular connectionLost, as below?
private final MqttCallbackExtended mCallback = new MqttCallbackExtended() {
@Override
public void connectComplete(boolean reconnect, String brokerAddress) {
mqttClient.subscribe("topic", 1, null, mSubscribeCallback);
}
@Override
public void connectionLost(Throwable ex) {
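// e.g. signal the WRITER thread to shut down gracefully here, instead of polling isConnected()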
}
@Override
public void deliveryComplete(IMqttDeliveryToken deliveryToken) {
}
@Override
public void messageArrived(String topic, MqttMessage mqttMessage) throws Exception {
}
};
private final IMqttActionListener mConnectionCallback = new IMqttActionListener() {
@Override
public void onSuccess(IMqttToken asyncActionToken) {
// do nothing, this case is handled in mCallback.connectComplete()
}
@Override
public void onFailure(IMqttToken asyncActionToken, Throwable exception) {
}
};
private final IMqttActionListener mSubscribeCallback = new IMqttActionListener() {
@Override
public void onSuccess(IMqttToken subscribeToken) {
}
@Override
public void onFailure(IMqttToken subscribeToken, Throwable ex) {
}
};
MqttConnectOptions connectOptions = new MqttConnectOptions();
connectOptions.setCleanSession(true);
connectOptions.setAutomaticReconnect(true);
connectOptions.setUserName("username");
connectOptions.setPassword("password".toCharArray());
MqttAsyncClient mqttClient = new MqttAsyncClient("tcp://test.mosquitto.org:1883", "example-client-id"); // the constructor also requires a client id; any unique string works
mqttClient.setCallback(mCallback);
try {
mqttClient.connect(connectOptions, null, mConnectionCallback);
} catch (Exception ex) {
System.err.println(ex.toString());
}
I'm using BufferedWriter with the default size of 8192 characters to write lines to a local file. The lines are read from a socket input stream using BufferedReader's readLine method (blocking I/O).
The average line length is 50 characters. It all works well and fast enough (over 1 million lines per second); however, if the client stops writing, the lines currently stored in the BufferedWriter buffer won't be flushed to disk. In fact, the buffered characters won't be flushed until the client resumes writing or the connection is closed. This translates into a delay between the time a line is transmitted by the client and the time it is committed to the file, so long-tail latency goes up.
Is there a way to flush incomplete BufferedWriter buffer on timeout, e.g. within 100 milliseconds?
What about something like this? It's not a real BufferedWriter, but it is a Writer. It works by periodically checking the time of the last write to the underlying (hopefully unbuffered) writer, then flushing the BufferedWriter if it has been longer than the timeout.
public class PeriodicFlushingBufferedWriter extends Writer {
protected final MonitoredWriter monitoredWriter;
protected final BufferedWriter writer;
protected final long timeout;
protected final Thread thread;
public PeriodicFlushingBufferedWriter(Writer out, long timeout) {
this(out, 8192, timeout);
}
public PeriodicFlushingBufferedWriter(Writer out, int sz, final long timeout) {
monitoredWriter = new MonitoredWriter(out);
writer = new BufferedWriter(monitoredWriter, sz);
this.timeout = timeout;
thread = new Thread(new Runnable() {
@Override
public void run() {
long deadline = System.currentTimeMillis() + timeout;
while (!Thread.interrupted()) {
try {
Thread.sleep(Math.max(deadline - System.currentTimeMillis(), 0));
} catch (InterruptedException e) {
return;
}
synchronized (PeriodicFlushingBufferedWriter.this) {
if (Thread.interrupted()) {
return;
}
long lastWrite = monitoredWriter.getLastWrite();
if (System.currentTimeMillis() - lastWrite >= timeout) {
try {
writer.flush();
} catch (IOException e) {
}
}
deadline = lastWrite + timeout;
}
}
}
});
thread.start();
}
@Override
public synchronized void write(char[] cbuf, int off, int len) throws IOException {
this.writer.write(cbuf, off, len);
}
@Override
public synchronized void flush() throws IOException {
this.writer.flush();
}
@Override
public synchronized void close() throws IOException {
try {
thread.interrupt();
} finally {
this.writer.close();
}
}
private static class MonitoredWriter extends FilterWriter {
protected final AtomicLong lastWrite = new AtomicLong();
protected MonitoredWriter(Writer out) {
super(out);
}
@Override
public void write(int c) throws IOException {
lastWrite.set(System.currentTimeMillis());
super.write(c);
}
@Override
public void write(char[] cbuf, int off, int len) throws IOException {
lastWrite.set(System.currentTimeMillis());
super.write(cbuf, off, len);
}
@Override
public void write(String str, int off, int len) throws IOException {
lastWrite.set(System.currentTimeMillis());
super.write(str, off, len);
}
@Override
public void flush() throws IOException {
lastWrite.set(System.currentTimeMillis());
super.flush();
}
public long getLastWrite() {
return this.lastWrite.get();
}
}
}
@copeg is right - flush it after every line. It is easy to flush on a time period, but what is the point of having only half a record and not being able to process it?
You might apply Observer, Manager, and Factory patterns here and have a central BufferedWriterManager produce your BufferedWriters and maintain a list of active instances. An internal thread might wake periodically and flush the active instances. This might also be an opportunity for Weak references so there is no requirement for your consumers to explicitly free the object. Instead, the GC will do the work and your Manager simply needs to handle the case when its internal reference becomes null (i.e. when all strong references are dropped).
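A rough sketch of that manager idea (all class and method names here are invented for illustration; flushing from the manager thread while the owner is writing is safe because BufferedWriter synchronizes internally):
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.Writer;
import java.lang.ref.WeakReference;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
public class BufferedWriterManager {
    private final List<WeakReference<BufferedWriter>> writers = new CopyOnWriteArrayList<>();
    public BufferedWriterManager(final long flushIntervalMillis) {
        Thread flusher = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    Thread.sleep(flushIntervalMillis);
                } catch (InterruptedException e) {
                    return;
                }
                // forget writers that have been garbage collected
                writers.removeIf(ref -> ref.get() == null);
                for (WeakReference<BufferedWriter> ref : writers) {
                    BufferedWriter w = ref.get();
                    if (w != null) {
                        try {
                            w.flush();
                        } catch (IOException ignored) {
                            // a closed writer simply stays here until it is collected
                        }
                    }
                }
            }
        }, "buffered-writer-flusher");
        flusher.setDaemon(true);
        flusher.start();
    }
    // factory method: callers use the returned writer exactly like a plain BufferedWriter
    public BufferedWriter newWriter(Writer out) {
        BufferedWriter w = new BufferedWriter(out);
        writers.add(new WeakReference<>(w));
        return w;
    }
}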
Don't try this complex scheme, it's too hard. Just reduce the size of the buffer, by specifying it when constructing the BufferedWriter. Reduce it till you find the balance between performance and latency that you need.
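For example (the buffer size and file name here are arbitrary):
Writer out = new BufferedWriter(new FileWriter("output.log"), 512); // 512 chars instead of the default 8192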
My apologies for the random title, but I could not come up with a better name.
class ReportSenderRunnable implements Runnable {
private final LPLogCompressor compressor;
public ReportSenderRunnable(final LPLogCompressor compressor) {
this.compressor = compressor;
}
@Override
public void run() {
executeTasks();
}
private void executeTasks() {
try {
// compressor.compress();
reportStatus = ReportStatus.COMPRESSING;
System.out.println("compressing for 10 seconds");
Thread.sleep(10000);
} catch (final IllegalStateException e) {
logCompressionError(e.getMessage());
} /*catch (final IOException e) {
logCompressionError(e.getMessage());
}*/ catch (InterruptedException e) {
logCompressionError(e.getMessage());
}
try {
reportStatus = ReportStatus.SENDING;
System.out.println("sending for 10 seconds");
Thread.sleep(10000);
} catch (final InterruptedException e) {
reportStatus = ReportStatus.EXCEPTION_IN_SENDING;
}
try {
reportStatus = ReportStatus.SUBMITTING_REPORT;
System.out.println("submitting report for 10 seconds");
Thread.sleep(10000);
} catch (final InterruptedException e) {
reportStatus = ReportStatus.EXCEPTION_IN_SUBMITTING_REPORT;
}
System.out.println("Report Sender completed");
reportStatus = ReportStatus.DONE;
}
private void logCompressionError(final String cause) {
logError(ReportStatus.COMPRESSING, cause);
reportStatus = ReportStatus.EXCEPTION_IN_COMPRESSION;
}
private void logError(final ReportStatus status, final String cause) {
LOGGER.error("{} - {}", status, cause);
}
}
Ideally, statements like
System.out.println("sending for 10 seconds");
Thread.sleep(10000);
will be replaced by actual tasks, but assume for now that this is the case. The way it runs is:
private void submitJob() {
final ExecutorService executorService = Executors.newSingleThreadExecutor();
try {
final LPLogCompressor lpLogCompressor = getLpLogCompressor();
executorService.execute(getReportSenderRunnable(lpLogCompressor));
} catch (final IOException e) {
reportStatus = ReportStatus.EXCEPTION_IN_COMPRESSION;
LOGGER.debug("Error in starting compression: {}", e.getMessage());
}
System.out.println("started Report Sender Job");
}
My question is: how do I test this code effectively? The test I wrote is:
@Test
public void testJobAllStages() throws InterruptedException, IOException {
final ReportSender reportSender = spy(new ReportSender());
doReturn(compressor).when(reportSender).getLpLogCompressor();
when(compressor.compress()).thenReturn("nothing");
reportSender.sendAndReturnStatus();
Thread.sleep(10);
assertEquals(ReportStatus.COMPRESSING, reportSender.getCurrentStatus());
Thread.sleep(10000);
assertEquals(ReportStatus.SENDING, reportSender.getCurrentStatus());
Thread.sleep(10000);
assertEquals(ReportStatus.SUBMITTING_REPORT, reportSender.getCurrentStatus());
}
This runs fine for the above code.
To me this is crappy for the following reasons:
Not all tasks would take the same amount of time in realistic cases.
Testing with Thread.sleep will take too much time and also adds non-determinism.
Question
How do I test this effectively?
You could add a class with a method (e.g., TimedAssertion.waitForCallable) that accepts a Callable, which then uses an ExecutorService to execute that Callable every second until it returns true. If it doesn't return true in a specific period of time, it fails.
You would then call that class from your test like this:
boolean result;
result = new TimedAssertion().waitForCallable(() ->
reportSender.getCurrentStatus() == ReportStatus.COMPRESSING);
assertTrue(result);
result = new TimedAssertion().waitForCallable(() ->
reportSender.getCurrentStatus() == ReportStatus.SENDING);
assertTrue(result);
...etc. This way, you can easily wait for a particular state in your code to be true, without waiting too long -- and you can reuse this new class anywhere that you need this sort of assertion.
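A possible shape for that TimedAssertion helper (the class and method names come from this answer; the polling interval, overall timeout, and the choice to poll directly instead of going through an ExecutorService are my assumptions):
import java.util.concurrent.Callable;
import java.util.concurrent.TimeUnit;
public class TimedAssertion {
    public boolean waitForCallable(Callable<Boolean> condition) throws Exception {
        return waitForCallable(condition, 30, TimeUnit.SECONDS); // assumed default overall timeout
    }
    public boolean waitForCallable(Callable<Boolean> condition, long timeout, TimeUnit unit) throws Exception {
        long deadline = System.nanoTime() + unit.toNanos(timeout);
        while (System.nanoTime() < deadline) {
            if (Boolean.TRUE.equals(condition.call())) {
                return true;
            }
            Thread.sleep(1000); // re-check roughly once per second, as described above
        }
        return false; // the caller's assertTrue(result) then fails the test
    }
}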
Based on @Boris the Spider's comment, I made use of mocks, and here is what my tests look like:
@Mock
private ReportSenderRunnable reportSenderRunnable;
@Mock
private LPLogCompressor compressor;
@Before
public void setUp() throws Exception {
MockitoAnnotations.initMocks(this);
}
@Test(timeout = 1000)
public void testJobNoException() throws InterruptedException, IOException {
final ReportSender reportSender = spy(new ReportSender());
doReturn(compressor).when(reportSender).getLpLogCompressor();
when(compressor.compress()).thenReturn("nothing");
reportSender.sendAndReturnStatus();
Thread.sleep(10);
assertEquals("Job must be completed successfully", ReportStatus.DONE,
reportSender.getCurrentStatus());
}
@Test(timeout = 1000)
public void testJobWithIllegalStateException() throws Exception {
final ReportSender reportSender = spy(new ReportSender());
doReturn(compressor).when(reportSender).getLpLogCompressor();
doThrow(IllegalStateException.class).when(compressor).compress();
reportSender.sendAndReturnStatus();
Thread.sleep(10);
assertEquals("Job must failed during compression", ReportStatus.EXCEPTION_IN_COMPRESSION,
reportSender.getCurrentStatus());
}
I am currently creating a service that allows sending objects from a client to a server and vice versa, but I am experiencing an issue that I unfortunately cannot explain or fix.
First of all, here are the relevant classes (I have not included all methods, such as getters and setters, in this post).
/**
* This launcher creates a NetworkSystem, waits for a connection, sends a message to the connected client and waits for an incoming message
*
*/
public class ServerLauncher {
public static void main(String[] args) {
try {
NetworkSystem n = new NetworkSystem(4096);
n.startServerManager();
while (n.getCommunications().isEmpty()) {
// this line is unexpectedly magic
System.out.println("Waiting for a new connection...");
}
do {
n.getCommunications().get(0).send(new String("Hello, are you available?"));
} while (n.getCommunications().get(0).getReceiveManager().getReadObjects().isEmpty());
System.out.println(n.getCommunications().get(0).getReceiveManager().getReadObjects().get(0));
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
/**
* This launcher creates a NetworkSystem, connects to the server, waits for an incoming message and answers back
*
*/
public class ClientLauncher {
public static void main(String[] args) {
try {
NetworkSystem n = new NetworkSystem(8192);
n.instanciateCommunication(new Socket(InetAddress.getLocalHost(), 4096));
while (n.getCommunications().get(0).getReceiveManager().getReadObjects().isEmpty()) {
// this line is unexpectedly magic
System.out.println("Waiting for an object...");
}
System.out.println(n.getCommunications().get(0).getReceiveManager().getReadObjects().get(0));
n.getCommunications().get(0).getSendManager().send(new String("No, I am not! We will talk later..."));
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
/**
* This class handles every incoming messages.
*/
public class ReceiveManager implements Runnable {
private ObjectInputStream inputStream;
private CommunicationManager communicationManager;
private List readObjects;
private boolean receive;
public ReceiveManager(CommunicationManager communicationManager) throws IOException {
this.communicationManager = communicationManager;
this.inputStream = new ObjectInputStream(this.communicationManager.getSocket().getInputStream());
this.readObjects = new ArrayList();
this.receive = true;
}
@Override
public void run() {
Object object = null;
try {
while ((object = this.inputStream.readObject()) != null && this.hasToReceive()) {
this.readObjects.add(object);
}
} catch (ClassNotFoundException | IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
this.setContinueToReceive(false);
}
}
private boolean hasToReceive() {
return this.receive;
}
public void setContinueToReceive(boolean value) {
this.receive = value;
}
}
/**
* This class allows the user to send messages
*/
public class SendManager {
private ObjectOutputStream outputStream;
private CommunicationManager communicationManager;
public SendManager(CommunicationManager communicationManager) throws IOException {
this.communicationManager = communicationManager;
this.outputStream = new ObjectOutputStream(this.communicationManager.getSocket().getOutputStream());
}
public void send(Object object) throws IOException {
this.outputStream.writeObject(object);
this.outputStream.flush();
}
}
So basically, as you may have noticed in the ServerLauncher and the ClientLauncher, there are two "magic" instructions. When those two lines are commented out and I run the server and then the client, nothing happens: the server and the client simply keep running and never stop. However, when I uncomment these two magic lines, everything works like a charm: messages are properly sent and received.
Would you guys know the reason for this unexpected behaviour?
Oh yeah, I forgot, if you guys want me to upload everything to test the project or whatever, just tell me :-)
You're starving the CPU with those spin loops. You should sleep or wait while the queues are empty, or better still just take() from blocking queues.
NB Your loop condition isn't correct:
readObject() doesn't return null at end of stream. It throws EOFException.
You should also test hasToReceive() before calling readObject() rather than afterwards. Otherwise you always do an extra read.
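Putting both points together, the receive loop could look roughly like this (a sketch based on the ReceiveManager above; switching readObjects to a BlockingQueue so the launchers can take() instead of spinning is my own suggestion):
@Override
public void run() {
    try {
        while (this.hasToReceive()) {
            // blocks until an object arrives; no busy loop needed on the caller's side
            Object object = this.inputStream.readObject();
            this.readObjects.add(object); // ideally a BlockingQueue, consumed with take()
        }
    } catch (EOFException e) {
        // the peer closed the connection: normal end of stream
    } catch (ClassNotFoundException | IOException e) {
        e.printStackTrace();
    } finally {
        this.setContinueToReceive(false);
    }
}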