Asynchronous function - error status 0 - Java

I execute my async function, and if I reload the browser before the result arrives, onFailure(Throwable) is executed with status error code 0.
This happens on both Firefox and Chrome.
Could you tell me what this status code means?
// "do" and "throw" are Java keywords; renamed to doSomething and caught so the snippet compiles
doSomething(new AsyncCallback<Boolean>() {
    @Override
    public void onSuccess(Boolean result) {}

    @Override
    public void onFailure(Throwable caught) {
        do_sth();
    }
});

// an implementation that never returns
Boolean doSomething() { while (true); }
That also returns status error 0.

The 0 status code here means the request has been aborted (it could also indicate a network error or a timed-out request).
See http://www.w3.org/TR/XMLHttpRequest/#the-status-attribute and http://www.w3.org/TR/XMLHttpRequest/#error-flag
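If you want to branch on the code yourself, GWT normally delivers non-OK HTTP responses to onFailure() as a StatusCodeException; a minimal sketch of checking for the aborted/0 case (assuming com.google.gwt.user.client.rpc.StatusCodeException):

public void onFailure(Throwable caught) {
    if (caught instanceof StatusCodeException) {
        int status = ((StatusCodeException) caught).getStatusCode();
        if (status == 0) {
            // request was aborted (e.g. the page was reloaded mid-call) or never completed
            return; // often safe to ignore
        }
    }
    // fall through to your normal error handling
}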

You could always define your onFailure() like this (adapted from the GWT API docs) to handle different kinds of failure cleanly:
public void onFailure(Throwable caught) {
    try {
        throw caught;
    } catch (IncompatibleRemoteServiceException e) {
        // this client is not compatible with the server; cleanup and refresh the browser
    } catch (InvocationException e) {
        // the call didn't complete cleanly
    } catch (YourOwnException e) {
        // take appropriate action
    } catch (Throwable e) {
        // last resort -- a very unexpected exception
    }
}

Related

CompletableFuture.runAsync not executing the number of times it was invoked

I am new to asynchronous coding in Java.
This is my code in spring boot application:
public void func(String someString) {
    if (someString != null) {
        doAsync(someString);
        publish(topic, someString);
    }
}

public void doAsync(String someString) {
    log.info("Inside do Async");
    CompletableFuture.runAsync(() -> {
        try {
            execute(someString);
        } catch (Throwable throwable) {
            log.error("Error");
        }
    }, executorService);
}

private void execute(String someString) {
    try {
        log.info("Inside Execute");
        DBcall(someString);
    } catch (Throwable e) {
        log.info("Error");
    }
}
func() is called around 200k times through events from a queue. In the logs, "Inside do Async" appears 200k times, but "Inside Execute" appears only about 195k times.
I see no errors or exceptions in this flow.
Why is it not running consistently for all 200k events? Am I missing something in the implementation?
The publish() function publishes the same message to another subscriber in the same service, and there are some 10k-11k errors (null pointer exceptions) in that subscriber flow. Could the errors in that flow be the main reason not all of the async calls execute?
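For what it's worth, one way to reproduce exactly this symptom ("Inside do Async" logged more often than "Inside Execute") is an executor whose rejection policy silently discards overflow, since runAsync() then neither runs the task nor throws. This is only an illustration of that failure mode; the actual executorService configuration here may be entirely different:

import java.util.concurrent.*;

public class SilentDropDemo {
    public static void main(String[] args) throws InterruptedException {
        // single worker thread, queue of 1, DiscardPolicy: overflow is dropped with no exception
        ExecutorService executorService = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(1),
                new ThreadPoolExecutor.DiscardPolicy());

        for (int i = 0; i < 10; i++) {
            final int id = i;
            CompletableFuture.runAsync(() -> {
                sleep(100); // simulate the DB call
                System.out.println("Inside Execute " + id);
            }, executorService);
            System.out.println("Inside do Async " + id); // always printed
        }

        executorService.shutdown();
        executorService.awaitTermination(5, TimeUnit.SECONDS);
        // "Inside do Async" prints 10 times, "Inside Execute" only a few times
    }

    private static void sleep(long millis) {
        try { Thread.sleep(millis); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}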

Convert function into a completable with custom Exceptions

I have a Vert.x service for all message-broker related operations. For example, an exchange creation function looks like this:
@Override
public BrokerService createExchange(String exchange,
        Handler<AsyncResult<JsonArray>> resultHandler) {
    try {
        getAdminChannel(exchange).exchangeDeclare(exchange, "topic", true);
        resultHandler.handle(Future.succeededFuture());
    } catch (Exception e) {
        e.printStackTrace();
        resultHandler.handle(Future.failedFuture(e.getCause()));
    }
    return this;
}
I am in the process of converting my entire codebase to RxJava and I would like to convert functions like these into Completables. Something like:
try {
    getAdminChannel(exchange).exchangeDeclare(exchange, "topic", true);
    Completable.complete();
} catch (Exception e) {
    Completable.error(new BrokerErrorThrowable("Exchange creation failed"));
}
Furthermore, I would also like to be able to throw custom errors like Completable.error(new BrokerErrorThrowable("Exchange creation failed")) when things go wrong. This is so that I'll be able to catch these errors and respond with appropriate HTTP responses.
I saw that Completable.fromCallable() is one way to do it, but I haven't found a way to throw these custom exceptions. How do I go about this? Thanks in advance!
I was able to figure it out. All I had to do was this:
@Override
public BrokerService createExchange(String exchange, Handler<AsyncResult<Void>> resultHandler) {
    Completable.fromCallable(() -> {
        try {
            getAdminChannel(exchange).exchangeDeclare(exchange, "topic", true);
            return Completable.complete();
        } catch (Exception e) {
            return Completable.error(new InternalErrorThrowable("Create exchange failed"));
        }
    })
    .subscribe(CompletableHelper.toObserver(resultHandler));
    return this;
}
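One thing to be aware of: the value returned from inside fromCallable() is ignored, so returning Completable.error(...) there does not actually make the resulting Completable fail, and the resultHandler still sees success. A minimal alternative sketch using Completable.fromAction() (RxJava 2), where throwing the custom exception propagates to onError and thus to a failed AsyncResult, assuming InternalErrorThrowable extends Exception:

@Override
public BrokerService createExchange(String exchange, Handler<AsyncResult<Void>> resultHandler) {
    Completable.fromAction(() -> {
        try {
            getAdminChannel(exchange).exchangeDeclare(exchange, "topic", true);
        } catch (Exception e) {
            // surfaces as onError, which CompletableHelper turns into a failed AsyncResult
            throw new InternalErrorThrowable("Create exchange failed");
        }
    })
    .subscribe(CompletableHelper.toObserver(resultHandler));
    return this;
}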

Stopping a Camel route from outside the route

We have a data processing application that runs on Karaf 2.4.3 with Camel 2.15.3.
In this application, we have a bunch of routes that import data. We have a management view that lists these routes and where each route can be started. Those routes do not directly import data, but call other routes (some of them in other bundles, called via direct-vm), sometimes directly and sometimes in a splitter.
Is there a way to also completely stop a route, and thereby stop the entire exchange from being processed any further?
When simply using the stopRoute function like this:
route.getRouteContext().getCamelContext().stopRoute(route.getId());
I eventually get a success message, "Graceful shutdown of 1 routes completed in 10 seconds", but the exchange is still being processed...
So I tried to mimic the behaviour of the StopProcessor by setting the stop property, but that also didn't help:
public void stopRoute(Route route) {
    try {
        Collection<InflightExchange> browse = route.getRouteContext().getCamelContext()
                .getInflightRepository().browse();
        for (InflightExchange inflightExchange : browse) {
            String exchangeRouteId = inflightExchange.getRouteId();
            if ((exchangeRouteId != null) && exchangeRouteId.equals(route.getId())) {
                this.stopExchange(inflightExchange.getExchange());
            }
        }
    } catch (Exception e) {
        Notification.show("Error while trying to stop route", Type.ERROR_MESSAGE);
        LOGGER.error(e, e);
    }
}

public void stopExchange(Exchange exchange) throws Exception {
    AsyncProcessorHelper.process(new AsyncProcessor() {
        @Override
        public void process(Exchange exchange) throws Exception {
            AsyncProcessorHelper.process(this, exchange);
        }

        @Override
        public boolean process(Exchange exchange, AsyncCallback callback) {
            exchange.setProperty(Exchange.ROUTE_STOP, Boolean.TRUE);
            callback.done(true);
            return true;
        }
    }, exchange);
}
Is there any way to completely stop an exchange from being processed from outside the route?
Can you get hold of the exchange?
I use exchange.setProperty(Exchange.ROUTE_STOP, true);
The route then stops the flow and the exchange doesn't continue to the next route.
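A minimal sketch of that approach, with a hypothetical shouldStop() flag and endpoint names, checked by a processor inside the route itself:

from("direct:importData")
    .process(new Processor() {
        @Override
        public void process(Exchange exchange) throws Exception {
            if (shouldStop()) {
                // marks the exchange so Camel skips the remaining steps of this route
                exchange.setProperty(Exchange.ROUTE_STOP, Boolean.TRUE);
            }
        }
    })
    .to("direct-vm:importer");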

RabbitMQ Java client - How to sensibly handle exceptions and shutdowns?

Here's what I know so far (please correct me):
In the RabbitMQ Java client, operations on a channel throw IOException when there is a general network failure (malformed data from broker, authentication failures, missed heartbeats).
Operations on a channel can also throw the ShutdownSignalException unchecked exception, typically an AlreadyClosedException when we tried to perform an action on the channel/connection after it has been shut down.
The shutdown process happens in the event of "network failure, internal failure or explicit local shutdown" (e.g. via channel.close() or connection.close()). The shutdown event propagates down the "topology", from Connection -> Channel -> Consumer, and when the Channel shuts down, the Consumer's handleShutdownSignal() method gets called.
A user can also add a shutdown listener which is called after the shutdown process completes.
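For reference, a minimal sketch of registering such a listener with the standard com.rabbitmq.client API (the logger is assumed to be SLF4J-style):

connection.addShutdownListener(new ShutdownListener() {
    @Override
    public void shutdownCompleted(ShutdownSignalException cause) {
        if (cause.isInitiatedByApplication()) {
            log.info("Connection closed by the application");
        } else {
            // hard errors affect the whole connection, soft errors only a channel
            log.warn("Connection lost, hardError={}", cause.isHardError(), cause);
        }
    }
});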
Here is what I'm missing:
Since an IOException indicates a network failure, does it also initiate a shutdown request?
How does using auto-recovery mode affect shutdown requests? Does it cause channel operations to block while it tries to reconnect to the channel, or will the ShutdownSignalException still be thrown?
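(By auto-recovery mode I mean the client's automatic connection recovery, switched on roughly like this; a minimal sketch of the ConnectionFactory setup:)

ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
// the client re-opens the connection and its channels after a network failure
factory.setAutomaticRecoveryEnabled(true);
factory.setNetworkRecoveryInterval(5000); // retry every 5 seconds
Connection connection = factory.newConnection();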
Here is how I'm handling exceptions at the moment; is this a sensible approach?
My setup is that I'm polling a QueueingConsumer and dispatching tasks to a worker pool. The RabbitMQ client is encapsulated in MyRabbitMQWrapper here. When an exception occurs while polling the queue, I just gracefully shut everything down and restart the client. When an exception occurs in the worker, I just log it and finish the worker.
My biggest worry (related to Question 1): suppose an IOException occurs in the worker; then the task doesn't get acked. If a shutdown does not follow, I now have an un-acked task that will be in limbo forever.
Pseudo-code:
class Main {
    public static void main(String[] args) {
        while (true) {
            run();
            // Easy way to restart the client; the connection has been
            // closed so RabbitMQ will re-queue any un-acked tasks.
            log.info("Shutdown occurred, restarting in 5 seconds");
            Thread.sleep(5000);
        }
    }

    public void run() {
        MyRabbitMQWrapper rw = new MyRabbitMQWrapper("localhost");
        try {
            rw.connect();
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    // Wait for a message on the QueueingConsumer
                    MyMessage t = rw.getNextMessage();
                    workerPool.submit(new MyTaskRunnable(rw, t));
                } catch (InterruptedException | IOException | ShutdownSignalException e) {
                    // Handle all AMQP library exceptions by cleaning up and returning
                    log.warn("Shutting down", e);
                    workerPool.shutdown();
                    break;
                }
            }
        } catch (IOException e) {
            log.error("Could not connect to broker", e);
        } finally {
            try {
                rw.close();
            } catch (IOException e) {
                log.info("Could not close connection");
            }
        }
    }
}
class MyTaskRunnable implements Runnable {
    ....

    public void run() {
        doStuff();
        try {
            rw.ack(...);
        } catch (IOException | ShutdownSignalException e) {
            log.warn("Could not ack task");
        }
    }
}

Netty 3.5.7 Channel.close() throws exception, causing CloseFuture not to be notified

I'm digging into a bug in my Netty program: I use a heartbeat handler between the server and the client. When the client system reboots, the heartbeat handler on the server side should detect the timeout and then close the Channel, but sometimes the listener registered on the Channel's CloseFuture is never notified, which is weird.
After digging through the Netty 3.5.7 source code, I figured out that the only way a Channel's CloseFuture gets notified is through AbstractChannel.setClosed(). Maybe this method is not executed when the Channel is closed; see below:
NioServerSocketPipelineSink:
private static void close(NioServerSocketChannel channel, ChannelFuture future) {
    boolean bound = channel.isBound();
    try {
        if (channel.socket.isOpen()) {
            channel.socket.close();
            Selector selector = channel.selector;
            if (selector != null) {
                selector.wakeup();
            }
        }

        // Make sure the boss thread is not running so that the future
        // is notified after a new connection cannot be accepted anymore.
        // See NETTY-256 for more information.
        channel.shutdownLock.lock();
        try {
            if (channel.setClosed()) {
                future.setSuccess();
                if (bound) {
                    fireChannelUnbound(channel);
                }
                fireChannelClosed(channel);
            } else {
                future.setSuccess();
            }
        } finally {
            channel.shutdownLock.unlock();
        }
    } catch (Throwable t) {
        future.setFailure(t);
        fireExceptionCaught(channel, t);
    }
}
On some platforms channel.socket.close() may throw an IOException, which means channel.setClosed() may never be executed, so the listener registered on the CloseFuture may never be notified.
Here is my question: have you ever encountered this problem? Is this analysis right?
I figured out that my heartbeat handler caused the problem: it never timed out, so it never closed the channel. The code below runs in a timer:
if ((now - lastReadTime > heartbeatTimeout)
        && (now - lastWriteTime > heartbeatTimeout)) {
    getChannel().close();
    stopHeartbeatTimer();
}
where lastReadTime and lastWriteTime are updated like below:
public void writeComplete(ChannelHandlerContext ctx, WriteCompletionEvent e)
        throws Exception {
    lastWriteTime = System.currentTimeMillis();
    super.writeComplete(ctx, e);
}

public void messageReceived(ChannelHandlerContext ctx, MessageEvent e)
        throws Exception {
    lastReadTime = System.currentTimeMillis();
    super.messageReceived(ctx, e);
}
The remote client is Windows XP and the server is Linux, both on JDK 1.6.
I think writeComplete is still being invoked internally after the remote client's system reboots, even though messageReceived is not invoked, and no IOException is thrown during this period.
I will redesign the heartbeat handler: attach a timestamp and a HEART_BEAT flag to the heartbeat packet; when the peer side receives the packet, it sends the packet back with the same timestamp and an ACK_HEART_BEAT flag; when the current side receives this ack packet, it uses the timestamp to update lastWriteTime.
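A minimal sketch of that redesign in the Netty 3.x API, assuming a hypothetical HeartbeatPacket class that carries a flag and a timestamp:

public class HeartbeatAckHandler extends SimpleChannelHandler {

    private volatile long lastAckTime = System.currentTimeMillis();

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
        Object msg = e.getMessage();
        if (msg instanceof HeartbeatPacket) {
            HeartbeatPacket packet = (HeartbeatPacket) msg;
            if (packet.getFlag() == HeartbeatPacket.HEART_BEAT) {
                // echo the same timestamp back so the sender can verify liveness
                ctx.getChannel().write(new HeartbeatPacket(HeartbeatPacket.ACK_HEART_BEAT, packet.getTimestamp()));
            } else if (packet.getFlag() == HeartbeatPacket.ACK_HEART_BEAT) {
                // only an explicit ack from the peer counts as proof the link is alive
                lastAckTime = System.currentTimeMillis();
            }
            return;
        }
        super.messageReceived(ctx, e);
    }

    // called from the timer instead of comparing lastReadTime/lastWriteTime
    public boolean isTimedOut(long heartbeatTimeout) {
        return System.currentTimeMillis() - lastAckTime > heartbeatTimeout;
    }
}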
