I'm trying to send async transactions to my Fabric network using the Java Gateway SDK, but I receive the error Channel [CHANNEL NAME] has been shutdown.
Here is some example code:
Gateway.Builder builder = Gateway.createBuilder()
        .discovery(true)
        .identity(wallet, user.getName())
        .networkConfig([PATH TO CONNECTION PROFILE]);
try (Gateway gateway = builder.connect()) {
    Network channel = gateway.getNetwork(CHANNEL_NAME);
    Contract someChaincode = channel.getContract(CHAINCODE_NAME);
    int coresNumber = Runtime.getRuntime().availableProcessors();
    ExecutorService executor = Executors.newFixedThreadPool(coresNumber);
    for (String element : elements) {
        CompletableFuture<String> future = CompletableFuture.supplyAsync(() -> {
            try {
                // Exception thrown here
                byte[] res = someChaincode.submitTransaction("someFunction", element);
                return new String(res);
            } catch (ContractException e) {
                e.printStackTrace();
                return null;
            }
        }, executor);
    }
} catch (Exception e) {
    // Handle Exception
}
And here is the exception:
java.util.concurrent.ExecutionException: org.hyperledger.fabric.gateway.GatewayRuntimeException: org.hyperledger.fabric.sdk.exception.InvalidArgumentException: Channel [CHANNEL NAME] has been shutdown.
More precisely, the exception is thrown in the checkChannelState() method. I have a feeling that I'm not handling multithreading correctly.
You don't appear to be waiting for completion of the futures you have created in your code snippet. You are scheduling transaction invocations for execution on different threads, but before that code runs you drop out of the try-with-resources block, which closes the Gateway instance you used to connect. Closing the Gateway closes all of its associated resources and connections, including the underlying Channel. So by the time your transaction invocations actually run, you have already closed the connection and resources they need to execute.
You need to get the results from the Future objects you have created before closing the Gateway instance; in other words, before dropping out of the try-with-resources block that creates the Gateway. Something vaguely like this:
Collection<Callable<String>> tasks = elements.stream()
        .map(element -> new Callable<String>() {
            public String call() throws ContractException, TimeoutException, InterruptedException {
                // contract is the Contract obtained from the Network (someChaincode above)
                byte[] result = contract.submitTransaction("someFunction", element);
                return new String(result);
            }
        }).collect(Collectors.toList());

try {
    Collection<String> results = new ArrayList<>();
    // timeout and timeUnit are whatever overall limit you want to allow
    Collection<Future<String>> futures = executor.invokeAll(tasks, timeout, timeUnit);
    for (Future<String> future : futures) {
        try {
            String result = future.get(timeout, timeUnit);
            results.add(result);
        } catch (CancellationException | InterruptedException | ExecutionException | TimeoutException e) {
            e.printStackTrace();
        }
    }
    System.out.println("Results: " + results);
} catch (InterruptedException e) {
    e.printStackTrace();
}
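If you would rather keep the CompletableFuture style from your original snippet, the same principle applies: keep hold of the futures and wait for all of them to finish before the try-with-resources block closes the Gateway. A minimal sketch, reusing the elements, someChaincode and executor variables from the question (wrapping the ContractException in a CompletionException is just one way to propagate the failure):

List<CompletableFuture<String>> futures = elements.stream()
        .map(element -> CompletableFuture.supplyAsync(() -> {
            try {
                byte[] res = someChaincode.submitTransaction("someFunction", element);
                return new String(res);
            } catch (ContractException e) {
                // rethrow so the failure surfaces when the future is joined
                throw new CompletionException(e);
            }
        }, executor))
        .collect(Collectors.toList());

// Block until every transaction has completed (or failed) before the
// try-with-resources block closes the Gateway and its underlying Channel.
CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();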
Related
I'm trying to throw an exception in one of the processes under tasks.add (the exception occurs in the paymentDao.savePayment() method), but that exception never shows up in my logs, even though I see the thrown-exception line being reached in the debugger. I expected the exception to be caught in one of the catches below, but it never reaches them. Can someone explain how a Callable treats an exception that occurs within one of the tasks?
private final ExecutorService service = Executors.newFixedThreadPool(100, namedThreadFactory);

List<Callable<Object>> tasks = new ArrayList<>();
try {
    if (cacheService.isPayment(
            (PaidPending) logProcessor.getCache().asMap().get(fileName), fileName)) {
        tasks.add(
            () -> {
                long startTime = System.currentTimeMillis();
                paymentDao.savePayment(paymentR, fileName);
                log.info(
                    "Time taken by savePaymentSummary Key {} : {}",
                    key,
                    System.currentTimeMillis() - startTime);
                return null;
            });
        service.invokeAll(tasks);
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
    throw new CustomException("Failed to insert payment ", e);
} catch (CustomException e) {
    log.error("Error here {}", e);
    throw new CustomException("Failed to fetch Payment", e);
}
If a Callable task, c, that you submit to a thread pool throws any Throwable object, th, then th will be stored in the Future object, f, that was returned by the submit(c) call. A subsequent call to f.get() will then throw an ExecutionException, exex, and you can call exex.getCause() to obtain the original Throwable, th.
Your example calls service.invokeAll(tasks), which returns a list of Future objects, but you do not bother to save the list.
Try this:
List<Future<ResultType>> futures = new ArrayList<>();
try {
    ...
    futures = service.invokeAll(tasks);
}
catch (...) {
    ...
}

...optionally do something else before awaiting results...

for (Future<ResultType> future : futures) {
    try {
        ResultType result = future.get();
        ...do something with result...
    }
    catch (ExecutionException ex) {
        Throwable originalException = ex.getCause();
        ...do something with originalException...
    }
}
Note: ResultType is a proxy for whatever your Callable tasks return. I am not sure what type that should be, since in your example, the only value returned is null. Maybe ResultType should just be Object.
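To see that mechanism in isolation, here is a small self-contained sketch (the task and its exception are made up for illustration) of a Callable that throws, and of recovering the original exception via ExecutionException.getCause():

import java.util.concurrent.*;

public class CallableExceptionDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService service = Executors.newSingleThreadExecutor();

        // The task throws; nothing is logged unless someone inspects the Future.
        Callable<Object> failingTask = () -> {
            throw new IllegalStateException("savePayment failed");
        };

        Future<Object> future = service.submit(failingTask);
        try {
            future.get(); // rethrows the failure wrapped in an ExecutionException
        } catch (ExecutionException ex) {
            Throwable original = ex.getCause(); // the IllegalStateException thrown above
            System.err.println("Task failed: " + original);
        } finally {
            service.shutdown();
        }
    }
}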
I am trying to refactor code that sequentially waits on multiple futures to complete, to instead jointly wait for completion.
So I try to wait on multiple futures with a single timeout by using
// Example outcomes
final CompletableFuture<String> completedFuture
        = CompletableFuture.completedFuture("hello");
final CompletableFuture<String> failedFuture
        = new CompletableFuture<>();
failedFuture.completeExceptionally(new RuntimeException("Test Stub Exception"));
final CompletableFuture<String> incompleteFuture
        = new CompletableFuture<>();

final AtomicBoolean timeoutHandled = new AtomicBoolean(false);
final CompletableFuture<String> checkedFuture
        = incompleteFuture.whenComplete(
                (x, e) -> timeoutHandled.set(e instanceof TimeoutException));

// this example times out after 1ms
try {
    CompletableFuture
            .allOf(completedFuture, checkedFuture, failedFuture)
            .get(1, TimeUnit.MILLISECONDS);
} catch (final InterruptedException e) {
    Thread.currentThread().interrupt();
} catch (final TimeoutException e) {
    // probably do something here?
}

// but the incomplete future is still pending
assertTrue(checkedFuture.isCompletedExceptionally());
// this still fails even if checkedFuture.completeExceptionally(e) is called
assertTrue(timeoutHandled.get());
However, the assert above fails because, while the collective future timed out, the individual future has not timed out yet. I would like to cancel such individual futures the same way as if they had run into timeouts individually, because they might have individual whenComplete() handlers handling TimeoutExceptions:
Expecting
<CompletableFuture[Incomplete]>
to be completed exceptionally.
Is there a useful/safe pattern by which I can loop over all exceptions and invoke completeExceptionally() to simulate a timeout in each of the futures, and make sure all "exception handlers" have been invoked before moving on?
You can create a varargs method with your try/catch that loops through each CompletableFuture and invokes completeExceptionally().
static void completeFutures(CompletableFuture<?>... completableFutures) throws ExecutionException {
    try {
        CompletableFuture.allOf(completableFutures).get(1, TimeUnit.MILLISECONDS);
    } catch (final InterruptedException e) {
        Thread.currentThread().interrupt();
    } catch (final TimeoutException e) {
        for (CompletableFuture<?> cf : completableFutures) {
            cf.completeExceptionally(e);
        }
    }
}
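A usage sketch with the futures from the question (inside a test method that declares throws ExecutionException, since completeFutures propagates it). Note that the source future, incompleteFuture, has to be passed in rather than the derived checkedFuture, because completing a dependent future exceptionally does not run the whenComplete callback registered on its source:

// The 1 ms get(...) times out, so each future is completed with the TimeoutException.
completeFutures(completedFuture, incompleteFuture, failedFuture);

// Completing incompleteFuture exceptionally ran its whenComplete() handler,
// which also completed checkedFuture, so both assertions now pass.
assertTrue(checkedFuture.isCompletedExceptionally());
assertTrue(timeoutHandled.get());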
I am trying to upload to S3 within my asynchronous Java code
private void submitCallablesWithExecutor()
        throws InterruptedException, ExecutionException, TimeoutException {
    ExecutorService executorService = null;
    try {
        executorService = Executors.newCachedThreadPool();
        Future<String> task1Future = executorService.submit(new Callable<String>() {
            public String call() {
                try {
                    processExportRequest(xmlPutRequest_, customizedRequest_, response_);
                    return "Success";
                } catch (Exception ex) {
                    return ex.getMessage();
                }
            }
        });
    } finally {
        executorService.shutdown();
        try {
            if (!executorService.awaitTermination(800, TimeUnit.MILLISECONDS)) {
                executorService.shutdownNow();
            }
        } catch (InterruptedException e) {
            executorService.shutdownNow();
        }
    }
}
Within processExportRequest I am calling the upload to S3. I have tried both S3Client and S3AsyncClient. In both cases, I am getting the following error:
Failed to upload to S3: java.lang.IllegalStateException: Interrupted waiting to refresh the value.
I don't see anything in my code that calls Thread.interrupt(), and everything else seems to work fine, just not the S3 upload. Maybe the multithreaded nature of Java Future is not compatible with the AWS SDK? Thanks.
I changed Future to CompletableFuture and combined two of them (in sequence):
private CompletableFuture<PutObjectResponse> processExportAndUploadAsync()
        throws IOException {
    CompletableFuture<PutObjectResponse> result = processExportAsync()
            .thenCompose(fileName -> uploadS3Async(fileName));
    return result;
}
It seems to work.
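For reference, a rough sketch of what the two composed steps might look like with the AWS SDK v2 S3AsyncClient (software.amazon.awssdk.services.s3). The s3AsyncClient field, the bucket name, and the export logic are placeholders, not taken from the original code:

private CompletableFuture<String> processExportAsync() {
    return CompletableFuture.supplyAsync(() -> {
        // ... build the export file and return its path (placeholder logic) ...
        return "/tmp/export.xml";
    });
}

private CompletableFuture<PutObjectResponse> uploadS3Async(String fileName) {
    PutObjectRequest request = PutObjectRequest.builder()
            .bucket("my-export-bucket") // placeholder bucket name
            .key(Paths.get(fileName).getFileName().toString())
            .build();
    // S3AsyncClient.putObject already returns a CompletableFuture, so no
    // extra executor or Future.get() is needed here.
    return s3AsyncClient.putObject(request, AsyncRequestBody.fromFile(Paths.get(fileName)));
}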
I have a Java 8 based project which performs a certain function on a URL. I need to modify the code snippet below so that it is capable of killing the running thread/process and running the next instance after a certain period of time, irrespective of the current process's status.
I tried the following techniques to implement the thread kill procedure:
Executor service
Timer Task
Multithreaded thread kill
The code snippet for my most recent attempt is below.
@SuppressWarnings("static-access")
public static void main(String[] args) {
    // fetch urls from the txt file
    List<String> careerUrls = getCareerUrls();
    int a = 0;
    DBConnection ds = null;
    ds = DBConnection.getInstance();
    try (java.sql.Connection con = ds.getConnection()) {
        // read a single Url
        for (String url : careerUrls) {
            int c = a++;
            ExecutorService executor = Executors.newFixedThreadPool(3);
            Future<?> future = executor.submit(new Runnable() {
                @Override
                // <-- job processing
                public void run() {
                    long end_time = System.currentTimeMillis() + 10000;
                    System.out.println("STARTED PROCESSING URL: " + url);
                    jobareaDeciderSample w = new jobareaDeciderSample();
                    w.mainSample(url, c, con);
                }
            });
            // <-- reject all further submissions
            executor.shutdown();
            try {
                future.get(120, TimeUnit.SECONDS); // <-- wait 2 minutes to finish
            } catch (InterruptedException e) { // <-- possible error cases
                System.out.println("job was interrupted");
                future.cancel(true);
                Thread.currentThread().interrupt();
            } catch (ExecutionException e) {
                System.out.println("caught exception: " + e.getCause());
            } catch (TimeoutException e) {
                System.out.println("timeout");
                future.cancel(true);
            }
            // wait all unfinished tasks for 2 sec
            if (!executor.awaitTermination(0, TimeUnit.SECONDS)) {
                // force them to quit by interrupting
                executor.shutdownNow();
            }
        }
    } catch (Exception e) {
        LOGGER.error(e);
    }
}
You are correct with your approach.
Calling cancel(true) on the future is the right way to stop this task.
You have another problem: you cannot just stop a thread. (Well, you can, using stop() in the Thread class, but you should never do this.)
cancel(true) sends the thread a signal that it should stop. Some Java classes respond to this signal and throw InterruptedException, but some don't. You have to modify your task code to check Thread.currentThread().isInterrupted() and, if it is set, stop execution.
This is something you have to do in your own code, which you call via
jobareaDeciderSample w = new jobareaDeciderSample();
w.mainSample(url, c, con);
You should do this check in the long-running, loop-heavy parts of your code. Since you said you do some work with the URL, add it to the while loop where you download information from the web. In other words, do this check where your code spends 99% of its time.
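A minimal sketch of what that cooperative check could look like inside the long-running loop; hasMorePagesToDownload() and downloadNextPage() are invented placeholders for whatever work mainSample() actually does per URL:

public void mainSample(String url, int c, Connection con) {
    while (hasMorePagesToDownload()) {
        // future.cancel(true) interrupts this thread; honour the flag and bail out.
        if (Thread.currentThread().isInterrupted()) {
            System.out.println("Processing of " + url + " was cancelled");
            return;
        }
        downloadNextPage(url); // hypothetical long-running step
    }
}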
Also, you are calling
Thread.currentThread().interrupt();
in your main thread; this does not do anything for you, since if you want to quit the current thread, you can just return.
Recently, I have been finding some BlockingOperationExceptions in my Netty 4 project.
Some people say that calling sync() when starting Netty's ServerBootstrap can cause a deadlock, because sync() invokes the await() method, and there is a method called checkDeadLock() inside await().
But I don't think so. ServerBootstrap uses the EventLoopGroup called bossGroup, and the Channel uses the workerGroup for I/O operations; I don't think they influence each other, since they have different EventExecutors.
And in my experience, the deadlock exception doesn't appear during the Netty startup process; most of the time it occurs after awaiting a Channel's writeAndFlush.
Analyzing the source code, in checkDeadLock() the BlockingOperationException is thrown when the current thread and the executor's thread of execution are the same.
My project code is below:
private void channelWrite(T message) {
    boolean success = true;
    boolean sent = true;
    int timeout = 60;
    try {
        ChannelFuture cf = cxt.write(message);
        cxt.flush();
        if (sent) {
            success = cf.await(timeout);
        }
        if (cf.isSuccess()) {
            logger.debug("send success.");
        }
        Throwable cause = cf.cause();
        if (cause != null) {
            this.fireError(new PushException(cause));
        }
    } catch (LostConnectException e) {
        this.fireError(new PushException(e));
    } catch (Exception e) {
        this.fireError(new PushException(e));
    } catch (Throwable e) {
        this.fireError(new PushException("Failed to send message", e));
    }
    if (!success) {
        this.fireError(new PushException("Failed to send message"));
    }
}
I know the Netty maintainers advise against using the sync() or await() methods, but I want to know what situations cause deadlocks in the process, i.e. when the current thread and the executor's thread of execution are the same.
I changed my project code:
private void pushMessage0(T message) {
    try {
        ChannelFuture cf = cxt.writeAndFlush(message);
        cf.addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) throws PushException {
                if (future.isSuccess()) {
                    logger.debug("send success.");
                } else {
                    throw new PushException("Failed to send message.");
                }
                Throwable cause = future.cause();
                if (cause != null) {
                    throw new PushException(cause);
                }
            }
        });
    } catch (LostConnectException e) {
        this.fireError(new PushException(e));
    } catch (Exception e) {
        this.fireError(new PushException(e));
    } catch (Throwable e) {
        this.fireError(new PushException(e));
    }
}
But I face a new problem: I can't get the PushException from the ChannelFutureListener.
BlockingOperationException will be thrown by Netty if you call sync* or await* on a Future in the same thread that the EventExecutor is using and to which the Future is tied. This is usually the EventLoop that is used by the Channel itself.
Not being able to call await in the I/O thread is understandable. However, there are two points.
1. If you call the code below in a channel handler, no exception is reported, because most of the time the isDone check in await() returns true: you are in the I/O thread and the I/O thread writes data synchronously, so the data has already been written by the time await() is called.
ChannelFuture p = ctx.writeAndFlush(msg);
p.await();
2. If you add a handler in a different EventExecutorGroup, this check is not necessary, since that executor is newly created and is not the same one as the channel's I/O executor.
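As a sketch of that second point, a handler registered with its own EventExecutorGroup runs on a different thread from the channel's EventLoop, so an await() inside it does not trip checkDeadLock(). The group size and SomeBlockingHandler are illustrative only; blocking there still ties up an executor thread, so listeners remain the preferred approach:

EventExecutorGroup blockingGroup = new DefaultEventExecutorGroup(16);

ChannelInitializer<SocketChannel> initializer = new ChannelInitializer<SocketChannel>() {
    @Override
    protected void initChannel(SocketChannel ch) {
        // This handler is executed by blockingGroup, not by the channel's EventLoop,
        // so calling await()/sync() inside it does not throw BlockingOperationException.
        ch.pipeline().addLast(blockingGroup, "blockingHandler", new SomeBlockingHandler());
    }
};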