InterruptedException not caught on RxJava onError callback? [duplicate]

This question already has an answer here:
How to handle dispose in RxJava without InterruptedException
(1 answer)
Closed 2 years ago.
I have the simple code below:
compositeDisposable.add(Observable.create<Int> { Thread.sleep(1000) }
.subscribeOn(Schedulers.io())
.observeOn(AndroidSchedulers.mainThread())
.subscribe({}, {Log.d("Track", it.localizedMessage)}, {}))
Handler().postDelayed({compositeDisposable.clear()}, 100)
It purposely uses Thread.sleep(1000) just to trigger the InterruptedException. I delay by 100 milliseconds so that the sleep in the chain has definitely started before I dispose it.
(Note: I know using Thread.sleep is not preferred. I'm only writing this code to understand why onError is not called in this scenario, and how to prevent the crash elegantly without needing a try-catch inside the RxJava chain.)
When the dispose happens, the error does not reach onError (i.e. it never hits the Log call). Instead it throws the error below and crashes the app.
io.reactivex.exceptions.UndeliverableException: java.lang.InterruptedException
at io.reactivex.plugins.RxJavaPlugins.onError(RxJavaPlugins.java:366)
at io.reactivex.internal.operators.observable.ObservableCreate$CreateEmitter.onError(ObservableCreate.java:74)
at io.reactivex.internal.operators.observable.ObservableCreate.subscribeActual(ObservableCreate.java:43)
at io.reactivex.Observable.subscribe(Observable.java:11194)
at io.reactivex.internal.operators.observable.ObservableSubscribeOn$SubscribeTask.run(ObservableSubscribeOn.java:96)
at io.reactivex.Scheduler$DisposeTask.run(Scheduler.java:463)
at io.reactivex.internal.schedulers.ScheduledRunnable.run(ScheduledRunnable.java:66)
at io.reactivex.internal.schedulers.ScheduledRunnable.call(ScheduledRunnable.java:57)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:301)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.lang.Thread.run(Thread.java:764)
Caused by: java.lang.InterruptedException
at java.lang.Thread.sleep(Native Method)
at java.lang.Thread.sleep(Thread.java:373)
at java.lang.Thread.sleep(Thread.java:314)
at com.elyeproj.porterduff.AnimateDrawPorterDuffView$startAnimate$1.subscribe(AnimateDrawPorterDuffView.kt:45)
at io.reactivex.internal.operators.observable.ObservableCreate.subscribeActual(ObservableCreate.java:40)
at io.reactivex.Observable.subscribe(Observable.java:11194) 
at io.reactivex.internal.operators.observable.ObservableSubscribeOn$SubscribeTask.run(ObservableSubscribeOn.java:96) 
at io.reactivex.Scheduler$DisposeTask.run(Scheduler.java:463) 
at io.reactivex.internal.schedulers.ScheduledRunnable.run(ScheduledRunnable.java:66) 
at io.reactivex.internal.schedulers.ScheduledRunnable.call(ScheduledRunnable.java:57) 
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:301) 
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167) 
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641) 
at java.lang.Thread.run(Thread.java:764) 
Why wasn't the InterruptedException caught by onError in RxJava?

As pointed out by @akarnokd in https://github.com/ReactiveX/RxJava/wiki/What's-different-in-2.0#error-handling, the chain has already been disposed by the time the exception is thrown, so the error can no longer be delivered to onError and is routed to the global handler instead. To address the issue, just register
RxJavaPlugins.setErrorHandler { e -> /*Do whatever we need with the error*/ }
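For completeness, a minimal sketch (in Java, assuming RxJava 2.x) of a global error handler that swallows the undeliverable InterruptedException raised by dispose while still surfacing anything else:

import io.reactivex.exceptions.UndeliverableException;
import io.reactivex.plugins.RxJavaPlugins;

public final class RxErrorHandler {
    public static void install() {
        RxJavaPlugins.setErrorHandler(e -> {
            // dispose() interrupts the io() thread, so the InterruptedException
            // arrives here wrapped in an UndeliverableException
            Throwable cause = (e instanceof UndeliverableException) ? e.getCause() : e;
            if (cause instanceof InterruptedException) {
                // expected after dispose(): the blocking call was cancelled
                return;
            }
            // anything else is likely a real bug; rethrow so it is not silently lost
            Thread.currentThread().getUncaughtExceptionHandler()
                    .uncaughtException(Thread.currentThread(), cause);
        });
    }
}

Call RxErrorHandler.install() once at app startup (for example in Application.onCreate) before any subscription is created.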

Related

What does Retry mean in context of Java Couchbase SDK?

I am using the Java Couchbase SDK in my application. While setting up the DefaultCouchbaseEnvironment, I came across the RetryStrategy property. I am currently using the default configuration, whose retry strategy is BestEffortRetryStrategy. According to the documentation:
BestEffortRetryStrategy will retry the operation until it either succeeds or the maximum request lifetime is reached
By default the maximum request lifetime is 75 seconds.
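For context, here is a minimal sketch (illustrative only, assuming the Couchbase Java SDK 2.x builder API) of how the environment is set up:

import com.couchbase.client.core.retry.BestEffortRetryStrategy;
import com.couchbase.client.java.env.CouchbaseEnvironment;
import com.couchbase.client.java.env.DefaultCouchbaseEnvironment;

public final class EnvFactory {
    public static CouchbaseEnvironment create() {
        return DefaultCouchbaseEnvironment.builder()
                // default strategy: keep retrying until the request lifetime expires
                .retryStrategy(BestEffortRetryStrategy.INSTANCE)
                // upper bound on how long a request may live, including retries (ms)
                .maxRequestLifetime(75_000)
                .build();
    }
}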
What I want to understand is what retry means here. Does retry mean retrying the request whenever an exception occurs, or does it mean the SDK will keep trying to hand the request to a node that can process it, retrying for up to 75 seconds?
Looking at my application logs for different exceptions, I can see that TemporaryFailureException wasn't retried, and that in some instances RequestCancelledException was thrown after 75 seconds. Is it fair to assume that Couchbase retries a request only while assigning it to a node, and does not retry on exceptions once it reaches the node that will process it?
Stack trace for TemporaryFailureException:
stackTrace: com.couchbase.client.java.error.TemporaryFailureException: null
at com.couchbase.client.java.bucket.api.Mutate$2$1.call(Mutate.java:246)
at com.couchbase.client.java.bucket.api.Mutate$2$1.call(Mutate.java:220)
at rx.internal.operators.OnSubscribeMap$MapSubscriber.onNext(OnSubscribeMap.java:69)
at rx.observers.Subscribers$5.onNext(Subscribers.java:235)
at rx.internal.operators.OnSubscribeDoOnEach$DoOnEachSubscriber.onNext(OnSubscribeDoOnEach.java:101)
at rx.internal.producers.SingleProducer.request(SingleProducer.java:65)
at rx.Subscriber.setProducer(Subscriber.java:211)
at rx.internal.operators.OnSubscribeMap$MapSubscriber.setProducer(OnSubscribeMap.java:102)
at rx.Subscriber.setProducer(Subscriber.java:205)
at rx.internal.operators.OnSubscribeMap$MapSubscriber.setProducer(OnSubscribeMap.java:102)
at rx.Subscriber.setProducer(Subscriber.java:205)
at rx.Subscriber.setProducer(Subscriber.java:205)
at rx.subjects.AsyncSubject.onCompleted(AsyncSubject.java:103)
at com.couchbase.client.core.endpoint.AbstractGenericHandler.completeResponse(AbstractGenericHandler.java:508)
at com.couchbase.client.core.endpoint.AbstractGenericHandler.access$000(AbstractGenericHandler.java:86)
at com.couchbase.client.core.endpoint.AbstractGenericHandler$1.call(AbstractGenericHandler.java:526)
at rx.internal.schedulers.ScheduledAction.run(ScheduledAction.java:55)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java)
at java.lang.Thread.run(Thread.java:748)
Caused by: rx.exceptions.OnErrorThrowable$OnNextValue: OnError while emitting onNext value: com.couchbase.client.core.message.kv.UpsertResponse.class
at rx.exceptions.OnErrorThrowable.addValueAsLastCause(OnErrorThrowable.java:118)
at rx.internal.operators.OnSubscribeMap$MapSubscriber.onNext(OnSubscribeMap.java:73)
... 21 common frames omitted
BestEffortRetryStrategy should retry until the request is cancelled by the timeout.
FailFastRetryStrategy should not retry; it should fail immediately.
If you have a TemporaryFailureException and BestEffortRetryStrategy, that should have been retried. If you had one that was not retried, can you share the stack trace?
Mike

Could crashlytics crash on Android?

Here is an overview of a crash on fabric.io
java.lang.IllegalStateException - Fatal exception thrown on Scheduler.Worker thread
rx.exceptions.OnErrorFailedException - an exception originating somewhere in the depths of rx and ending in SafeSubscriber#_onError.
rx.exceptions.CompositeException - 2 exceptions occurred
Causal chain - that's the interesting part
Caused by rx.exceptions.CompositeException$CompositeExceptionCausalChain: Chain of Causes for CompositeException In Order Received =>
at com.crashlytics.android.core.TrimmedThrowableData.<init>(TrimmedThrowableData.java:19)
at com.crashlytics.android.core.TrimmedThrowableData.<init>(TrimmedThrowableData.java:20)
at com.crashlytics.android.core.TrimmedThrowableData.<init>(TrimmedThrowableData.java:20)
at com.crashlytics.android.core.CrashlyticsController.writeSessionEvent(CrashlyticsController.java:1090)
at com.crashlytics.android.core.CrashlyticsController.writeFatal(CrashlyticsController.java:852)
at com.crashlytics.android.core.CrashlyticsController.access$400(CrashlyticsController.java:59)
at com.crashlytics.android.core.CrashlyticsController$6.call(CrashlyticsController.java:292)
at com.crashlytics.android.core.CrashlyticsController$6.call(CrashlyticsController.java:285)
at java.util.concurrent.FutureTask.run(FutureTask.java:237)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1133)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:607)
at io.fabric.sdk.android.services.common.ExecutorUtils$1$1.onRun(ExecutorUtils.java:75)
at io.fabric.sdk.android.services.common.BackgroundPriorityRunnable.run(BackgroundPriorityRunnable.java:30)
at java.lang.Thread.run(Thread.java:776)
retrofit.RetrofitError - 504 Gateway Timeout - probably supposed to be handled in one of the onError callbacks
The version of Rx is 1.3.4. At first I thought that Rx had missed the actual exception that occurred in one of the onError methods, but I found this issue https://github.com/ReactiveX/RxJava/issues/3679
and it seems to have been resolved. Looking at the source code of SafeSubscriber,
https://github.com/ReactiveX/RxJava/blob/1.x/src/main/java/rx/observers/SafeSubscriber.java
it seems like rx should report the exception from onError correctly.
So there is a CompositeException containing a Retrofit exception, which is supposed to happen from time to time, and a stack trace referencing Crashlytics code with no references to any app code. I wonder what I should make of this.

ERROR Error cleaning broadcast Exception [duplicate]

This question already has answers here:
What are possible reasons for receiving TimeoutException: Futures timed out after [n seconds] when working with Spark [duplicate]
(4 answers)
Closed 5 years ago.
I get the following error while running my Spark Streaming application. We have a large application running multiple stateful (with mapWithState) and stateless operations. It's getting difficult to isolate the error since Spark itself hangs, and the only error we see is in the Spark log, not the application log.
The error happens only after about 4-5 minutes with a micro-batch interval of 10 seconds.
I am using Spark 1.6.1 on an Ubuntu server with Kafka-based input and output streams.
Please note it's not possible for me to provide a minimal reproduction of this bug, as it does not occur in unit test cases and the application itself is very large.
Any direction you can give to solve this issue will be helpful. Please let me know if I can provide any more information.
Error inline below:
[2017-07-11 16:15:15,338] ERROR Error cleaning broadcast 2211 (org.apache.spark.ContextCleaner)
org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [120 seconds]. This timeout is controlled by spark.rpc.askTimeout
at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:48)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:63)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:76)
at org.apache.spark.storage.BlockManagerMaster.removeBroadcast(BlockManagerMaster.scala:136)
at org.apache.spark.broadcast.TorrentBroadcast$.unpersist(TorrentBroadcast.scala:228)
at org.apache.spark.broadcast.TorrentBroadcastFactory.unbroadcast(TorrentBroadcastFactory.scala:45)
at org.apache.spark.broadcast.BroadcastManager.unbroadcast(BroadcastManager.scala:77)
at org.apache.spark.ContextCleaner.doCleanupBroadcast(ContextCleaner.scala:233)
at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1$$anonfun$apply$mcV$sp$2.apply(ContextCleaner.scala:189)
at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1$$anonfun$apply$mcV$sp$2.apply(ContextCleaner.scala:180)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1.apply$mcV$sp(ContextCleaner.scala:180)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1180)
at org.apache.spark.ContextCleaner.org$apache$spark$ContextCleaner$$keepCleaning(ContextCleaner.scala:173)
at org.apache.spark.ContextCleaner$$anon$3.run(ContextCleaner.scala:68)
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [120 seconds]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.Await$.result(package.scala:107)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
Your exception message clearly says that it is an RPC timeout caused by the default configuration of 120 seconds; adjust spark.rpc.askTimeout to a value that suits your workload (a configuration sketch follows the code excerpt below).
Please see the Spark 1.6 configuration documentation.
The lines org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [120 seconds] and
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:76) in your error messages confirm that.
For a better understanding, see the code below from RpcTimeout.scala:
/**
 * Wait for the completed result and return it. If the result is not available within this
 * timeout, throw a [[RpcTimeoutException]] to indicate which configuration controls the timeout.
 * @param awaitable the `Awaitable` to be awaited
 * @throws RpcTimeoutException if after waiting for the specified time `awaitable`
 * is still not ready
 */
def awaitResult[T](awaitable: Awaitable[T]): T = {
  try {
    Await.result(awaitable, duration)
  } catch addMessageIfTimeout
}
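If the timeout does need to be raised, here is a minimal sketch (in Java; the 300s value is purely illustrative, not a recommendation) of setting it on the SparkConf:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public final class TimeoutConfig {
    public static JavaSparkContext createContext() {
        SparkConf conf = new SparkConf()
                .setAppName("streaming-app") // illustrative app name
                // raise the RPC ask timeout (default 120s) if broadcast cleanup
                // regularly times out under heavy load
                .set("spark.rpc.askTimeout", "300s")
                // spark.network.timeout is the fallback for several RPC timeouts
                .set("spark.network.timeout", "300s");
        // build the StreamingContext on top of this context as usual
        return new JavaSparkContext(conf);
    }
}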
Also see my answer in another context

FATAL EXCEPTION: RxCachedThreadScheduler-1 when trigger dispose. Why?

I have the following RxJava 2 code (in Kotlin), which has an Observable that sleeps before emitting:
disposable = Observable.create<String>({
subscriber ->
try {
Thread.sleep(2000)
subscriber.onNext("Test")
subscriber.onComplete()
} catch (exception: Exception) {
subscriber.onError(exception)
}
}).subscribeOn(Schedulers.io())
.observeOn(AndroidSchedulers.mainThread())
.subscribe({ result -> Log.d("Test", "Completed $result") },
{ error -> Log.e("Test", "Completed ${error.message}") })
While it is still in Thread.sleep(2000), I call disposable?.dispose(), and it errors out with:
FATAL EXCEPTION: RxCachedThreadScheduler-1
Process: com.elyeproj.rxstate, PID: 10202
java.lang.InterruptedException
at java.lang.Thread.sleep(Native Method)
at java.lang.Thread.sleep(Thread.java:371)
at java.lang.Thread.sleep(Thread.java:313)
at presenter.MainPresenter$loadData$1.subscribe(MainPresenter.kt:41)
at io.reactivex.internal.operators.observable.ObservableCreate.subscribeActual(ObservableCreate.java:40)
I expected the dispose to cancel the operation silently, or at most to have the error caught by the Log.e in subscribe. However, it just crashes with the error message above.
Why did the exception escape? Isn't dispose supposed to cancel the entire operation silently, without crashing?
There is a combination of factors here:
Disposing a stream that uses subscribeOn also disposes of the thread in use. With Schedulers.io() this involves calling Thread.interrupt(), which is what causes the exception in your case.
InterruptedException is an Exception thrown by Thread.sleep, so it is caught by your code and passed to onError like any other exception.
Calling onError after dispose redirects the error to the global error handler, due to RxJava 2's policy of never throwing away errors. To work around this, check subscriber.isDisposed() before calling onError, or use RxJava 2.1.1's new subscriber.tryOnError (a fuller sketch follows the snippet below).
if (!subscriber.isDisposed()) {
subscriber.onError(exception)
}
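Putting the pieces together, a minimal sketch (in Java for illustration) of the same create block using tryOnError, available since RxJava 2.1.1, instead of the manual isDisposed check:

import android.util.Log;
import io.reactivex.Observable;
import io.reactivex.android.schedulers.AndroidSchedulers;
import io.reactivex.disposables.Disposable;
import io.reactivex.schedulers.Schedulers;

public final class LoadExample {
    public static Disposable load() {
        return Observable.<String>create(emitter -> {
            try {
                Thread.sleep(2000);
                emitter.onNext("Test");
                emitter.onComplete();
            } catch (Exception e) {
                // tryOnError delivers the error only if the stream is still active;
                // after dispose() it is dropped instead of going to the global handler
                emitter.tryOnError(e);
            }
        })
        .subscribeOn(Schedulers.io())
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(
                result -> Log.d("Test", "Completed " + result),
                error -> Log.e("Test", "Failed " + error.getMessage()));
    }
}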
If you are using RxJava 2, add this to your initialisation code:
RxJavaPlugins.setErrorHandler(t -> {
logger.log(Level.SEVERE,null, t);
});
Hope it helps

Netty 4 leak exception at HttpObjectAggregator

I'm getting an exception on my Netty server while it's under heavy load (testing with LOIC etc.). I want to know whether I'm doing something wrong or whether it's a bug in the HttpObjectAggregator.
WARNING: LEAK: ByteBuf was GC'd before being released correctly. The following stack trace shows where the leaked object was created, rather than where you failed to release it.
io.netty.util.ResourceLeakException: io.netty.buffer.CompositeByteBuf#1c0eeb6
at io.netty.util.ResourceLeakDetector$DefaultResourceLeak.<init>(ResourceLeakDetector.java:174)
at io.netty.util.ResourceLeakDetector.open(ResourceLeakDetector.java:116)
at io.netty.buffer.CompositeByteBuf.<init>(CompositeByteBuf.java:60)
at io.netty.buffer.Unpooled.compositeBuffer(Unpooled.java:353)
at io.netty.handler.codec.http.HttpObjectAggregator.decode(HttpObjectAggregator.java:138)
at io.netty.handler.codec.http.HttpObjectAggregator.decode(HttpObjectAggregator.java:50)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
at io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:334)
at io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:320)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:173)
at io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:334)
at io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:320)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:785)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:100)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:497)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:465)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:359)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:101)
at java.lang.Thread.run(Thread.java:722)
Most likely you forgot to release the message after handling it. Did you call release() on it?
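For illustration, a minimal sketch (handler names are made up; assumes a pipeline with HttpServerCodec and HttpObjectAggregator in front) of the two usual ways to make sure the aggregated message is released:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.util.ReferenceCountUtil;

// Option 1: SimpleChannelInboundHandler releases the message for you
// once channelRead0 returns.
class AutoReleaseHandler extends SimpleChannelInboundHandler<FullHttpRequest> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, FullHttpRequest request) {
        // handle the request; no manual release needed
    }
}

// Option 2: with a plain handler, release explicitly when done.
class ManualReleaseHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        try {
            FullHttpRequest request = (FullHttpRequest) msg;
            // handle the request
        } finally {
            ReferenceCountUtil.release(msg); // decrement the ByteBuf reference count
        }
    }
}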
