I'm struggling with the functional style of Supplier<U> and the like, and with writing testable code.
So I have an InputStream that is split into chunks which are processed asynchronously, and I want to know when they are all done. To write testable code, I outsource the processing logic to its own Runnable:
public class StreamProcessor {
public CompletableFuture<Void> process(InputStream in) {
List<CompletableFuture<Void>> futures = new ArrayList<>();
while (true) {
try (SizeLimitInputStream chunkStream = new SizeLimitInputStream(in, 100)) {
byte[] data = IOUtils.toByteArray(chunkStream);
CompletableFuture<Void> f = CompletableFuture.runAsync(createTask(data));
futures.add(f);
} catch (EOFException ex) {
// end of stream reached
break;
} catch (IOException ex) {
return CompletableFuture.failedFuture(ex);
}
}
return CompletableFuture.allOf(futures.toArray(CompletableFuture<?>[]::new));
}
ChunkTask createTask(byte[] data) {
return new ChunkTask(data);
}
public class ChunkTask implements Runnable {
final byte[] data;
ChunkTask(byte[] data) {
this.data = data;
}
@Override
public void run() {
try {
// do something
} catch (Exception ex) {
// checked exceptions must be wrapped
throw new RuntimeException(ex);
}
}
}
}
This works well, but poses two problems:
The processing code cannot return anything; it's a Runnable after all.
Any checked exception caught inside ChunkTask.run() must be wrapped in a RuntimeException. Unwrapping the failed combined CompletableFuture then yields that RuntimeException, which has to be unwrapped again to reach the original cause - unlike the IOException case.
So I'm looking for a way to do this with CompletableFuture.supplyAsync(), but I can't figure out how to do it without lambdas (which are hard to test), or how to return a CompletableFuture.failedFuture() from the processing logic.
I can think of two approaches:
1. With supplyAsync:
When using CompletableFuture.supplyAsync, you need a Supplier instead of a Runnable:
public static class ChunkTask implements Supplier<Object> {
final byte[] data;
ChunkTask(byte[] data) {
this.data = data;
}
@Override
public Object get() {
Object result = ...;
// Do something or throw an exception
return result;
}
}
and then:
CompletableFuture
.supplyAsync( new ChunkTask( data ) )
.whenComplete( (result, throwable) -> ... );
If an exception happens in Supplier.get(), it will be propagated and you can see it in CompletableFuture.whenComplete, CompletableFuture.handle or CompletableFuture.exceptionally.
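This also addresses the first problem (getting results back): keep the per-chunk futures in a list and read them once the combined future is done. A minimal sketch, assuming the chunk result type is plain Object and that the futures list is not modified afterwards:

List<CompletableFuture<Object>> futures = new ArrayList<>();
// ... one supplyAsync(new ChunkTask(data)) per chunk added to futures ...

CompletableFuture<Void> all = CompletableFuture.allOf(futures.toArray(new CompletableFuture<?>[0]));
CompletableFuture<List<Object>> results = all.thenApply(ignored -> {
    List<Object> list = new ArrayList<>();
    for (CompletableFuture<Object> f : futures) {
        list.add(f.join()); // all futures are complete here, join() does not block
    }
    return list;
});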
2. Passing a CompletableFuture to the thread
You can pass a CompletableFuture to ChunkTask:
public class ChunkTask implements Runnable {
final byte[] data;
private final CompletableFuture<Object> future;
ChunkTask(byte[] data, CompletableFuture<Object> future) {
this.data = data;
this.future = future;
}
@Override
public void run() {
try {
Object result = null;
// do something
future.complete( result );
} catch (Throwable ex) {
future.completeExceptionally( ex );
}
}
}
Then the logic becomes:
while (true) {
CompletableFuture<Object> f = new CompletableFuture<>();
try (SizeLimitInputStream chunkStream = new SizeLimitInputStream(in, 100)) {
byte[] data = IOUtils.toByteArray(chunkStream);
startThread(new ChunkTask(data, f));
futures.add(f);
} catch (EOFException ex) {
// end of stream reached
break;
} catch (IOException ex) {
f.completeExceptionally( ex );
return f;
}
}
Number 2 is probably the approach that gives you more flexibility in how you manage the exceptions.
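A nice side effect of number 2 is that the original checked exception is stored in the future as-is, so the caller only needs to unwrap once. A small sketch of the caller side (the variable names are my own):

CompletableFuture<Void> all = CompletableFuture.allOf(futures.toArray(new CompletableFuture<?>[0]));
try {
    all.join();
} catch (CompletionException ex) {
    Throwable original = ex.getCause(); // e.g. the IOException caught inside ChunkTask.run()
    // handle the original exception directly, no double unwrapping needed
}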
Related
I added a simple retry because the operation can occasionally fail. The simplified code is below. The method putObject can sometimes throw an exception, and in that case the retry should invoke the method again. Is it possible to write a JUnit test for this?
I know that with the Mockito library we can force a method to throw an exception, but how can I force this exception to be thrown only once?
public class RetryExample {
Bucket bucket = new Bucket();
static int INTERNAL_EXCEPTION_CODE = 100;
class AException extends RuntimeException {
private static final long serialVersionUID = 1L;
int statusCode;
public AException(String message, Throwable cause) {
super(message, cause);
}
public int getStatusCode() {
return statusCode;
}
}
class Bucket {
public void putObject(String fileName, byte[] data) throws AException {
System.out.println("PutObject=" + fileName + " data=" + data);
}
}
public void process(String fileName, byte[] data) throws AException {
try {
retryOperation((f, d) -> bucket.putObject(f, d), fileName, data);
} catch (Exception ex) {
throw new AException("Failed to write data", ex);
}
}
private <T, U> void retryOperation(BiConsumer<T, U> biConsumer, T t, U u) {
int retries = 0;
boolean retry;
AException lastServiceException = null;
do {
retry = false;
try {
biConsumer.accept(t, u);
return; // success, nothing left to retry
} catch (AException e) {
lastServiceException = e;
int statusCode = e.getStatusCode();
if (statusCode == INTERNAL_EXCEPTION_CODE) {
throw e; // not retryable
}
}
retries++;
if (retries < 3) {
retry = true;
}
} while (retry);
if (lastServiceException != null) {
throw lastServiceException;
}
}
}
Test Class:
public class RetryExampleTest {
...
@Test
public void test() {
RetryExample retryExample = new RetryExample();
String fileName = "TestFile";
byte[] data = simulatedPayload(10000);
try {
retryExample.process(fileName, data);
} catch (Exception e) {
fail("Exception thrown=" + e);
}
}
}
According to the Mockito documentation you can set different behavior for consecutive method calls.
when(mock.someMethod("some arg"))
.thenThrow(new RuntimeException())
.thenReturn("foo");
For a void method you can do something similar (see the Mockito documentation):
doThrow(new RuntimeException())
.doNothing()
.when(mock).doSomething();
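As a self-contained illustration of the "throw only on the first call" stubbing (the Service interface here is hypothetical, not one of the classes above; static imports from org.mockito.Mockito and org.junit.Assert are assumed):

public interface Service {
    void putObject(String fileName, byte[] data);
}

@Test
public void throwsOnlyOnFirstCall() {
    Service service = mock(Service.class);
    // first invocation throws, every later invocation does nothing
    doThrow(new RuntimeException("transient failure"))
            .doNothing()
            .when(service).putObject(anyString(), any(byte[].class));

    try {
        service.putObject("TestFile", new byte[0]);
        fail("first call should have thrown");
    } catch (RuntimeException expected) {
        // expected only on the first call
    }

    service.putObject("TestFile", new byte[0]); // second call succeeds
    verify(service, times(2)).putObject(anyString(), any(byte[].class));
}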
Alternatively, you can keep the failure count in a shared object and have the mocked method consult it to decide whether to throw. That way the number of thrown exceptions is entirely under your control.
I am trying to call a method every 60 seconds until it returns a success response; the method calls a REST endpoint on a different service. As of now I am using a do-while loop and
Thread.sleep(60000);
to make the main thread wait 60 seconds, which I feel is not the ideal way due to possible concurrency issues.
I came across CountDownLatch, used like this:
CountDownLatch latch = new CountDownLatch(1);
boolean processingCompleteWithin60Second = latch.await(60, TimeUnit.SECONDS);
@Override
public void run(){
String processStat = null;
try {
processStat = getStat(processStatId);
if("SUCCEEDED".equals(processStat))
{
latch.countDown();
}
} catch (Exception e) {
e.printStackTrace();
}
}
The run method is in a different class that implements Runnable. I am not able to get this working. Any idea what is wrong?
You could use a CompletableFuture instead of a CountDownLatch to return the result:
CompletableFuture<String> future = new CompletableFuture<>();
invokeYourLogicInAnotherThread(future);
String result = future.get(); // this blocks
And in another thread (possibly in a loop):
@Override
public void run() {
String processStat = null;
try {
processStat = getStat(processStatId);
if("SUCCEEDED".equals(processStat))
{
future.complete(processStat);
}
} catch (Exception e) {
future.completeExceptionally(e);
}
}
future.get() will block until a value is submitted via the complete() method, and then return that value; or it will throw the exception supplied via completeExceptionally(), wrapped in an ExecutionException.
There is also a get() variant with a timeout:
String result = future.get(60, TimeUnit.SECONDS);
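Putting the pieces together, the polling could be driven like this (the scheduler setup and the 10-minute overall deadline are assumptions on my part; getStat and processStatId come from your code):

ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
CompletableFuture<String> future = new CompletableFuture<>();

scheduler.scheduleWithFixedDelay(() -> {
    try {
        String processStat = getStat(processStatId);
        if ("SUCCEEDED".equals(processStat)) {
            future.complete(processStat);
        }
    } catch (Exception e) {
        future.completeExceptionally(e);
    }
}, 0, 60, TimeUnit.SECONDS);

try {
    String result = future.get(10, TimeUnit.MINUTES); // blocks until success, failure or timeout
} finally {
    scheduler.shutdown();
}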
Finally got it to work using the Executor framework.
ScheduledExecutorService pollExecutor = Executors.newSingleThreadScheduledExecutor();
final int[] value = new int[1];
pollExecutor.scheduleWithFixedDelay(new Runnable() {
Map<String, String> statMap = null;
@Override
public void run() {
try {
statMap = coldService.doPoll(id);
} catch (Exception e) {
}
if (statMap != null) {
for (Map.Entry<String, String> entry : statMap
.entrySet()) {
if ("failed".equals(entry.getValue())) {
value[0] = 2;
pollExecutor.shutdown();
}
}
}
}
}, 0, 5, TimeUnit.MINUTES);
try {
pollExecutor.awaitTermination(40, TimeUnit.MINUTES);
} catch (InterruptedException e) {
}
I am writing a job queue using BlockingQueue and ExecutorService. It basically waits for new data in the queue; when data is put into the queue, the executorService fetches it. The problem is that I am using a loop that spins while waiting for the queue to have data, so the CPU usage is extremely high.
I am new to this API and not sure how to improve this.
ExecutorService mExecutorService = Executors.newSingleThreadExecutor();
BlockingQueue<T> mBlockingQueue = new ArrayBlockingQueue<>(100); // ArrayBlockingQueue needs a capacity; 100 is an arbitrary choice
public void handleRequests() {
Future<T> future = mExecutorService.submit(new WorkerHandler(mBlockingQueue, mQueueState));
T value = null;
try {
value = future.get();
} catch (InterruptedException | ExecutionException e) {
e.printStackTrace();
}
if (mListener != null && value != null) {
mListener.onNewItemDequeued(value);
}
}
}
private static class WorkerHandler<T> implements Callable<T> {
private final BlockingQueue<T> mBlockingQueue;
private PollingQueueState mQueueState;
WorkerHandler(BlockingQueue<T> blockingQueue, PollingQueueState state) {
mBlockingQueue = blockingQueue;
mQueueState = state;
}
@Override
public T call() throws Exception {
T value = null;
while (true) { // problem is here, this loop takes full cpu usage if queue is empty
if (mBlockingQueue.isEmpty()) {
mQueueState = PollingQueueState.WAITING;
} else {
mQueueState = PollingQueueState.FETCHING;
}
if (mQueueState == PollingQueueState.FETCHING) {
try {
value = mBlockingQueue.take();
break;
} catch (InterruptedException e) {
Log.e(TAG, e.getMessage(), e);
break;
}
}
}
return value;
}
}
Any suggestions on how to improve this would be much appreciated!
You don't need to test whether the queue is empty; just call take(), so the thread blocks until data is available.
When an element is put on the queue, the thread wakes up and the value is set.
If you don't need to cancel the task you just need:
@Override
public T call() throws Exception {
T value = mBlockingQueue.take();
return value;
}
If you want to be able to cancel the task:
@Override
public T call() throws Exception {
T value = null;
while (value==null) {
try {
value = mBlockingQueue.poll(50L,TimeUnit.MILLISECONDS);
} catch (InterruptedException e) {
Log.e(TAG, e.getMessage(), e);
break;
}
}
return value;
}
if (mBlockingQueue.isEmpty()) {
mQueueState = PollingQueueState.WAITING;
} else {
mQueueState = PollingQueueState.FETCHING;
}
if (mQueueState == PollingQueueState.FETCHING)
Remove these lines, the break;, and the matching closing brace.
public class SOQuestion {
private class TaskResult1 {//some pojo
}
private class TaskResult2{// some other pojo
}
private class Task1 implements Callable<TaskResult1> {
public TaskResult1 call() throws InterruptedException {
// do something...
return new TaskResult1();
}
}
private class Task2 implements Callable<TaskResult2> {
public TaskResult2 call() throws InterruptedException {
// do something else...
return new TaskResult2();
}
}
private void cancelFuturesTask1(List<Future<TaskResult1>> futureList ){
for(Future<TaskResult1> future: futureList){
if(future.isDone())
{
continue;
} else
{
System.out.println("cancelling futures.....Task1.");
future.cancel(true);
}
}
}
private void cancelFuturesTask2(List<Future<TaskResult2>> futureList ){
for(Future<TaskResult2> future: futureList){
if(future.isDone())
{
continue;
} else
{
System.out.println("cancelling futures.....Task2.");
future.cancel(true);
}
}
}
void runTasks() {
ExecutorService executor = Executors.newFixedThreadPool(4);
CompletionService<TaskResult1> completionService1 = new ExecutorCompletionService<TaskResult1>(executor);
List<Future<TaskResult1>> futuresList1 = new ArrayList<Future<TaskResult1>>();
for (int i =0 ;i<10; i++) {
futuresList1.add(completionService1.submit(new Task1()));
}
for (int i = 0; i< 10; i++) {
try {
Future<TaskResult1> f = completionService1.take();
System.out.print(f.get());
System.out.println("....Completed..first one.. cancelling all others.");
cancelFuturesTask1(futuresList1);
} catch (InterruptedException e) {
System.out.println("Caught interrruption....");
break;
} catch (CancellationException e) {
System.out.println("Cancellation execution....");
break;
} catch (ExecutionException e) {
System.out.println("Execution exception....");
break;
}
}
CompletionService<TaskResult2> completionService2 = new ExecutorCompletionService<TaskResult2>(executor);
List<Future<TaskResult2>> futuresList2 = new ArrayList<Future<TaskResult2>>();
try{
for (int i =0 ;i<10; i++) {
futuresList2.add(completionService2.submit(new Task2()));
}
for (int i = 0; i< 10; i++) {
try {
Future<TaskResult2> f = completionService2.take();
System.out.print(f.get());
System.out.println("....Completed..first one.. cancelling all others.");
cancelFuturesTask2(futuresList2);
} catch (InterruptedException e) {
System.out.println("Caught interrruption....");
break;
} catch (CancellationException e) {
System.out.println("Cancellation execution....");
break;
} catch (ExecutionException e) {
System.out.println("Execution exception....");
break;
}
}
}catch(Exception e){
}
executor.shutdown();
}
}
As seen in the example, there is some repetition. I want to use generics and wildcards to generalize the objects and reuse some methods.
My specific question is about cancelFuturesTask1 and cancelFuturesTask2: both methods do the same thing. How can I generalize them?
I read this: https://docs.oracle.com/javase/tutorial/java/generics/subtyping.html
I created a base class TaskResult and made TaskResult1 and TaskResult2 extend it:
private class TaskResult1 extends TaskResult
private class TaskResult2 extends TaskResult
and then used
List<Future<? extends TaskResult>>
It gives me a compilation error, and I am not sure how to extend the idea to List<Future<?>> in this case.
Any pointers or explanation on how to do that will help here.
Thanks in advance, let me know if you need some clarification.
This compiles fine for me, let me know if you get errors on it also.
public class FutureTest
{
public void cancelAll( Future<?> ... futures ) {
for( Future<?> f : futures ) {
if( !f.isDone() ) {
Logger.getLogger(FutureTest.class.getName()).log(
Level.INFO, "Canceling {0}", f);
f.cancel(true);
}
}
}
public <T extends Task1 & Task2> void cancelAll( List<Future<T>> futures ) {
cancelAll( futures.toArray( new Future[futures.size()]) );
}
}
interface Task1 {}
interface Task2 {}
For a more specific type, see my second method. You can do it with a generic method and a bounded type parameter, but only if all but one of the types are interfaces. Java doesn't support multiple inheritance, so you can't write one method that takes multiple (non-covariant) class types. That's why I think unbounded wildcard ("<?>") methods like the first example are better here.
https://docs.oracle.com/javase/tutorial/java/generics/boundedTypeParams.html
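Another option, sticking with the two lists from the question, is a single method over List<? extends Future<?>>; it accepts both List<Future<TaskResult1>> and List<Future<TaskResult2>> without any changes to the result classes. A sketch of what that could look like:

private void cancelFutures(List<? extends Future<?>> futureList) {
    for (Future<?> future : futureList) {
        if (!future.isDone()) {
            System.out.println("cancelling futures.....");
            future.cancel(true);
        }
    }
}

Both cancelFuturesTask1(futuresList1) and cancelFuturesTask2(futuresList2) then collapse into cancelFutures(futuresList1) and cancelFutures(futuresList2).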
I have a program that performs lots of calculations and reports them to a file frequently. I know that frequent write operations can slow a program down a lot, so to avoid it I'd like to have a second thread dedicated to the writing operations.
Right now I'm doing it with this class I wrote (the impatient can skip to the end of the question):
public class ParallelWriter implements Runnable {
private File file;
private BlockingQueue<Item> q;
private int indentation;
public ParallelWriter( File f ){
file = f;
q = new LinkedBlockingQueue<Item>();
indentation = 0;
}
public ParallelWriter append( CharSequence str ){
try {
CharSeqItem item = new CharSeqItem();
item.content = str;
item.type = ItemType.CHARSEQ;
q.put(item);
return this;
} catch (InterruptedException ex) {
throw new RuntimeException( ex );
}
}
public ParallelWriter newLine(){
try {
Item item = new Item();
item.type = ItemType.NEWLINE;
q.put(item);
return this;
} catch (InterruptedException ex) {
throw new RuntimeException( ex );
}
}
public void setIndent(int indentation) {
try{
IndentCommand item = new IndentCommand();
item.type = ItemType.INDENT;
item.indent = indentation;
q.put(item);
} catch (InterruptedException ex) {
throw new RuntimeException( ex );
}
}
public void end(){
try {
Item item = new Item();
item.type = ItemType.POISON;
q.put(item);
} catch (InterruptedException ex) {
throw new RuntimeException( ex );
}
}
public void run() {
BufferedWriter out = null;
Item item = null;
try{
out = new BufferedWriter( new FileWriter( file ) );
while( (item = q.take()).type != ItemType.POISON ){
switch( item.type ){
case NEWLINE:
out.newLine();
for( int i = 0; i < indentation; i++ )
out.append(" ");
break;
case INDENT:
indentation = ((IndentCommand)item).indent;
break;
case CHARSEQ:
out.append( ((CharSeqItem)item).content );
}
}
} catch (InterruptedException ex){
throw new RuntimeException( ex );
} catch (IOException ex) {
throw new RuntimeException( ex );
} finally {
if( out != null ) try {
out.close();
} catch (IOException ex) {
throw new RuntimeException( ex );
}
}
}
private enum ItemType {
CHARSEQ, NEWLINE, INDENT, POISON;
}
private static class Item {
ItemType type;
}
private static class CharSeqItem extends Item {
CharSequence content;
}
private static class IndentCommand extends Item {
int indent;
}
}
And then I use it by doing:
ParallelWriter w = new ParallelWriter( myFile );
new Thread(w).start();
/// Lots of
w.append(" things ").newLine();
w.setIndent(2);
w.newLine().append(" more things ");
/// and finally
w.end();
While this works perfectly well, I'm wondering:
Is there a better way to accomplish this?
Your basic approach looks fine. I would structure the code as follows:
import java.io.BufferedWriter;
import java.io.File;
import java.io.IOException;
import java.io.Writer;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
public interface FileWriter {
FileWriter append(CharSequence seq);
FileWriter indent(int indent);
void close();
}
class AsyncFileWriter implements FileWriter, Runnable {
private final File file;
private final Writer out;
private final BlockingQueue<Item> queue = new LinkedBlockingQueue<Item>();
private volatile boolean started = false;
private volatile boolean stopped = false;
public AsyncFileWriter(File file) throws IOException {
this.file = file;
this.out = new BufferedWriter(new java.io.FileWriter(file));
}
public FileWriter append(CharSequence seq) {
if (!started) {
throw new IllegalStateException("open() call expected before append()");
}
try {
queue.put(new CharSeqItem(seq));
} catch (InterruptedException ignored) {
}
return this;
}
public FileWriter indent(int indent) {
if (!started) {
throw new IllegalStateException("open() call expected before indent()");
}
try {
queue.put(new IndentItem(indent));
} catch (InterruptedException ignored) {
}
return this;
}
public void open() {
this.started = true;
new Thread(this).start();
}
public void run() {
while (!stopped) {
try {
Item item = queue.poll(100, TimeUnit.MICROSECONDS);
if (item != null) {
try {
item.write(out);
} catch (IOException logme) {
}
}
} catch (InterruptedException e) {
}
}
try {
out.close();
} catch (IOException ignore) {
}
}
public void close() {
this.stopped = true;
}
private static interface Item {
void write(Writer out) throws IOException;
}
private static class CharSeqItem implements Item {
private final CharSequence sequence;
public CharSeqItem(CharSequence sequence) {
this.sequence = sequence;
}
public void write(Writer out) throws IOException {
out.append(sequence);
}
}
private static class IndentItem implements Item {
private final int indent;
public IndentItem(int indent) {
this.indent = indent;
}
public void write(Writer out) throws IOException {
for (int i = 0; i < indent; i++) {
out.append(" ");
}
}
}
}
If you do not want to write in a separate thread (maybe in a test?), you can have an implementation of FileWriter which calls append on the Writer in the caller thread.
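A possible synchronous implementation for that case might look as follows (this is a sketch of my own, not part of the answer above; it writes on the caller thread and reuses the FileWriter interface defined earlier):

class SyncFileWriter implements FileWriter {
    private final Writer out;

    SyncFileWriter(File file) throws IOException {
        this.out = new BufferedWriter(new java.io.FileWriter(file));
    }

    public FileWriter append(CharSequence seq) {
        try {
            out.append(seq);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return this;
    }

    public FileWriter indent(int indent) {
        try {
            // write the indentation immediately on the caller thread
            for (int i = 0; i < indent; i++) {
                out.append(" ");
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return this;
    }

    public void close() {
        try {
            out.close();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}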
One good way to exchange data with a single consumer thread is to use an Exchanger.
You could use a StringBuilder or ByteBuffer as the buffer to exchange with the background thread. The latency incurred can be around one microsecond, it doesn't involve creating any objects, and it is lower than with a BlockingQueue.
Here is the example from the Exchanger Javadoc, which I think is worth repeating:
class FillAndEmpty {
Exchanger<DataBuffer> exchanger = new Exchanger<DataBuffer>();
DataBuffer initialEmptyBuffer = ... a made-up type
DataBuffer initialFullBuffer = ...
class FillingLoop implements Runnable {
public void run() {
DataBuffer currentBuffer = initialEmptyBuffer;
try {
while (currentBuffer != null) {
addToBuffer(currentBuffer);
if (currentBuffer.isFull())
currentBuffer = exchanger.exchange(currentBuffer);
}
} catch (InterruptedException ex) { ... handle ... }
}
}
class EmptyingLoop implements Runnable {
public void run() {
DataBuffer currentBuffer = initialFullBuffer;
try {
while (currentBuffer != null) {
takeFromBuffer(currentBuffer);
if (currentBuffer.isEmpty())
currentBuffer = exchanger.exchange(currentBuffer);
}
} catch (InterruptedException ex) { ... handle ...}
}
}
void start() {
new Thread(new FillingLoop()).start();
new Thread(new EmptyingLoop()).start();
}
}
Using a LinkedBlockingQueue is a pretty good idea. I'm not sure I like some of the style of the code... but the principle seems sound.
I would maybe give the LinkedBlockingQueue a capacity equal to a certain fraction of your total memory, say 10,000 items, so that if the writing falls behind, your worker threads won't keep adding work until the heap is blown.
I know that frequent write operations can slow a program down a lot

Probably not as much as you think, provided you use buffering.
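For a sense of scale, plain buffered writing on the calling thread is worth measuring first (the 64 KB buffer size is an arbitrary choice; java.io.FileWriter is spelled out to avoid a clash with the FileWriter interface defined in the earlier answer):

try (BufferedWriter out = new BufferedWriter(new java.io.FileWriter(myFile), 1 << 16)) {
    out.append(" things ");
    out.newLine();
} catch (IOException e) {
    e.printStackTrace();
}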