Google Dataflow: javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure - java

I have a Dataflow pipeline that makes a request to an API to retrieve some data. Recently the ciphers on the API side were updated and the pipeline suddenly started failing. I was using Java 1.8 and Beam SDK 2.19.0. The same code works when run locally.
I tried upgrading to Java 11 and Beam SDK 2.24.0 in case the new ciphers weren't supported by the versions I was using, but I'm getting the same result: it runs locally, but in Dataflow I get the same error.
This is the code I'm using to make the request to the API:
URL url = new URL(urlString);
HttpsURLConnection con = null;
outer: for (int retry = 0; retry <= maxRetries && !connected; retry++) {
    if (retry > 0) {
        logger.info("retry " + retry + "/" + maxRetries);
        Thread.sleep(retryDelayMs);
    }
    logger.info("Creating connection to Customer Master Read API");
    con = (HttpsURLConnection) url.openConnection();
    con.setRequestMethod("GET");
    con.setRequestProperty("Authorization", token);
    con.setRequestProperty("x-client-id", xClientId);
    con.setRequestProperty("Content-Type", "application/json");
    con.setDoOutput(true);
    responseCode = con.getResponseCode();
It fails on this line:
responseCode = con.getResponseCode();
And this is the full error trace:
javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
at com.....gcp.dataflow.dpfw.http.ApiREST.getCustomerRequest(ApiREST.java:521)
at com.....gcp.dataflow.dpfw.transform.GetCustomer$GetCustomerInfo.processElement(GetCustomer.java:105)
at com.....gcp.dataflow.dpfw.transform.AutoValue_GetCustomer_GetCustomerInfo$DoFnInvoker.invokeProcessElement(Unknown Source)
at org.apache.beam.runners.dataflow.worker.repackaged.org.apache.beam.runners.core.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:227)
at org.apache.beam.runners.dataflow.worker.repackaged.org.apache.beam.runners.core.SimpleDoFnRunner.processElement(SimpleDoFnRunner.java:186)
at org.apache.beam.runners.dataflow.worker.SimpleParDoFn.processElement(SimpleParDoFn.java:334)
at org.apache.beam.runners.dataflow.worker.util.common.worker.ParDoOperation.process(ParDoOperation.java:44)
at org.apache.beam.runners.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:49)
at org.apache.beam.runners.dataflow.worker.SimpleParDoFn$1.output(SimpleParDoFn.java:279)
at org.apache.beam.runners.dataflow.worker.repackaged.org.apache.beam.runners.core.SimpleDoFnRunner.outputWindowedValue(SimpleDoFnRunner.java:267)
at org.apache.beam.runners.dataflow.worker.repackaged.org.apache.beam.runners.core.SimpleDoFnRunner.access$900(SimpleDoFnRunner.java:79)
at org.apache.beam.runners.dataflow.worker.repackaged.org.apache.beam.runners.core.SimpleDoFnRunner$DoFnProcessContext.output(SimpleDoFnRunner.java:413)
at org.apache.beam.sdk.transforms.DoFnOutputReceivers$WindowedContextOutputReceiver.output(DoFnOutputReceivers.java:73)
at com.....gcp.dataflow.dpfw.util.FilteringMessage$MessageFilter.processElement(FilteringMessage.java:85)
at com.....gcp.dataflow.dpfw.util.AutoValue_FilteringMessage_MessageFilter$DoFnInvoker.invokeProcessElement(Unknown Source)
at org.apache.beam.runners.dataflow.worker.repackaged.org.apache.beam.runners.core.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:227)
at org.apache.beam.runners.dataflow.worker.repackaged.org.apache.beam.runners.core.SimpleDoFnRunner.processElement(SimpleDoFnRunner.java:186)
at org.apache.beam.runners.dataflow.worker.SimpleParDoFn.processElement(SimpleParDoFn.java:334)
at org.apache.beam.runners.dataflow.worker.util.common.worker.ParDoOperation.process(ParDoOperation.java:44)
at org.apache.beam.runners.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:49)
at org.apache.beam.runners.dataflow.worker.SimpleParDoFn$1.output(SimpleParDoFn.java:279)
at org.apache.beam.runners.dataflow.worker.repackaged.org.apache.beam.runners.core.SimpleDoFnRunner.outputWindowedValue(SimpleDoFnRunner.java:267)
at org.apache.beam.runners.dataflow.worker.repackaged.org.apache.beam.runners.core.SimpleDoFnRunner.access$900(SimpleDoFnRunner.java:79)
at org.apache.beam.runners.dataflow.worker.repackaged.org.apache.beam.runners.core.SimpleDoFnRunner$DoFnProcessContext.output(SimpleDoFnRunner.java:413)
at org.apache.beam.sdk.transforms.DoFnOutputReceivers$WindowedContextOutputReceiver.output(DoFnOutputReceivers.java:73)
at com.....gcp.dataflow.dpfw.transform.PubsubTransform$PubSubMessageTransformation.processElement(PubsubTransform.java:101)
at com.....gcp.dataflow.dpfw.transform.AutoValue_PubsubTransform_PubSubMessageTransformation$DoFnInvoker.invokeProcessElement(Unknown Source)
at org.apache.beam.runners.dataflow.worker.repackaged.org.apache.beam.runners.core.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:227)
at org.apache.beam.runners.dataflow.worker.repackaged.org.apache.beam.runners.core.SimpleDoFnRunner.processElement(SimpleDoFnRunner.java:186)
at org.apache.beam.runners.dataflow.worker.SimpleParDoFn.processElement(SimpleParDoFn.java:334)
at org.apache.beam.runners.dataflow.worker.util.common.worker.ParDoOperation.process(ParDoOperation.java:44)
at org.apache.beam.runners.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:49)
at org.apache.beam.runners.dataflow.worker.SimpleParDoFn$1.output(SimpleParDoFn.java:279)
at org.apache.beam.runners.dataflow.worker.repackaged.org.apache.beam.runners.core.SimpleDoFnRunner.outputWindowedValue(SimpleDoFnRunner.java:267)
at org.apache.beam.runners.dataflow.worker.repackaged.org.apache.beam.runners.core.SimpleDoFnRunner.access$900(SimpleDoFnRunner.java:79)
at org.apache.beam.runners.dataflow.worker.repackaged.org.apache.beam.runners.core.SimpleDoFnRunner$DoFnProcessContext.output(SimpleDoFnRunner.java:413)
at org.apache.beam.sdk.transforms.DoFnOutputReceivers$WindowedContextOutputReceiver.output(DoFnOutputReceivers.java:73)
at org.apache.beam.sdk.transforms.MapElements$1.processElement(MapElements.java:139)
at org.apache.beam.sdk.transforms.MapElements$1$DoFnInvoker.invokeProcessElement(Unknown Source)
at org.apache.beam.runners.dataflow.worker.repackaged.org.apache.beam.runners.core.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:227)
at org.apache.beam.runners.dataflow.worker.repackaged.org.apache.beam.runners.core.SimpleDoFnRunner.processElement(SimpleDoFnRunner.java:186)
at org.apache.beam.runners.dataflow.worker.SimpleParDoFn.processElement(SimpleParDoFn.java:334)
at org.apache.beam.runners.dataflow.worker.util.common.worker.ParDoOperation.process(ParDoOperation.java:44)
at org.apache.beam.runners.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:49)
at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:201)
at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:159)
at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:77)
at org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1365)
at org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.access$1100(StreamingDataflowWorker.java:154)
at org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker$7.run(StreamingDataflowWorker.java:1085)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:128)
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:117)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:308)
at java.base/sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:279)
at java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:181)
at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:164)
at java.base/sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1152)
at java.base/sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1063)
at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:402)
at java.base/sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:567)
at java.base/sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1581)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1509)
at java.base/java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:527)
at java.base/sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:329)
at com.....gcp.dataflow.dpfw.http.ApiREST.getCustomerRequest(ApiREST.java:464)
... 52 more
Any idea why this is failing, and how to solve it?
Thank you in advance!

I spent almost 10 hours fixing the problem. I would like to thank my team lead: when I asked him for a solution, he told me we already had a fix for another one of our dataflows.
Use the enable_conscrypt_security_provider experiment, as below:
@Override
protected void overridePipelineOptions(final PipelineOptions options) {
    options.setJobName(JOB_NAME);
    ((DataflowPipelineOptions) options).setExperiments(Arrays.asList("enable_conscrypt_security_provider"));
}
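If you launch your pipeline in a setup without a hook like overridePipelineOptions, the same experiment can be enabled through Beam's standard --experiments launch flag (a generic assumption about your launch setup, not code from the original answer):

--experiments=enable_conscrypt_security_provider

This experiment makes the Dataflow workers use the Conscrypt security provider for TLS, which typically supports a more modern set of cipher suites than the worker JVM's default provider, so the handshake with the updated API can succeed.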

Related

AsyncIO operation not writing to output directory in Apache Flink

I'm very new to Flink and am trying out the Async I/O operation by following the documentation. I have a text file containing a bunch of integers. I create a stream over the file, then for each line I make an async HTTP request, and finally I store the results in an output file. I created a FastAPI REST endpoint to serve simple GET requests. In the Flink code, I'm using the Java async-http-client library to wrap the HTTP call in an async request. The problem is that when I run the Flink code, it always times out.
My input file looks something like:
-9
42
2
12
15
18
13
9
45
-15
11
...
The FastAPI code goes something like this:
import time
from random import random

from fastapi import FastAPI

app = FastAPI()

@app.get("/temperatures/{temperature}")
async def read_temperature(temperature: int):
    time.sleep(random())
    if temperature <= 0:
        return {"category": "insanely cold"}
    elif temperature <= 15:
        return {"category": "cold"}
    elif temperature <= 25:
        return {"category": "moderate"}
    elif temperature <= 35:
        return {"category": "moderately hot"}
    elif temperature <= 45:
        return {"category": "hot"}
    else:
        return {"category": "insanely hot"}
And finally, this is my Flink code:
import ...

public class AsyncHttpRequest extends RichAsyncFunction<String, Tuple2<String, String>> {

    private transient AsyncHttpClient client;

    @Override
    public void open(Configuration parameters) {
        client = asyncHttpClient();
    }

    @Override
    public void close() throws Exception {
        client.close();
    }

    @Override
    public void asyncInvoke(String key, final ResultFuture<Tuple2<String, String>> resultFuture) throws Exception {
        // issue the asynchronous request, receive a future for the result
        String getURL = String.format("http://localhost:8000/temperatures/%s", key);
        final Future<Response> result = client.executeRequest(get(getURL).build());

        // set the callback to be executed once the request by the client is complete
        // the callback simply forwards the result to the result future
        CompletableFuture.supplyAsync(() -> {
            try {
                JSONObject responseJson = new JSONObject(result.get().getResponseBody());
                return responseJson.getString("category");
            } catch (InterruptedException | ExecutionException e) {
                return null;
            }
        }).thenAccept((String httpResult) -> {
            resultFuture.complete(Collections.singleton(new Tuple2<>(key, httpResult)));
        });
    }

    public static void main(String[] args) throws Exception {
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<String> stream =
            env.readTextFile("file:///Users/me/http_input.txt");
        env.enableCheckpointing(10);

        DataStream<Tuple2<String, String>> resultStream =
            AsyncDataStream.unorderedWait(
                stream, new AsyncHttpRequest(), 60, TimeUnit.SECONDS, 10);

        final StreamingFileSink<Tuple2<String, String>> sink =
            StreamingFileSink.forRowFormat(
                    new Path("file:///Users/me/http_output"),
                    new SimpleStringEncoder<Tuple2<String, String>>("UTF-8"))
                .withRollingPolicy(
                    DefaultRollingPolicy.builder()
                        .withRolloverInterval(TimeUnit.MINUTES.toMillis(1))
                        .withInactivityInterval(TimeUnit.MINUTES.toMillis(5))
                        .withMaxPartSize(1024 * 1024)
                        .build())
                .build();

        resultStream.addSink(sink);
        env.execute("Async Http job");
    }
}
I get the following stack trace when running the Flink job:
Exception in thread "main" org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
at org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$2(MiniClusterJobClient.java:117)
at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:642)
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
at org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:237)
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
at org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:1046)
at akka.dispatch.OnComplete.internal(Future.scala:264)
at akka.dispatch.OnComplete.internal(Future.scala:261)
at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
at org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:73)
at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:68)
at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:284)
at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:284)
at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:284)
at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:573)
at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
at scala.concurrent.Future.$anonfun$andThen$1(Future.scala:532)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:91)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:81)
at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:91)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:118)
at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:80)
at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:233)
at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:224)
at org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:215)
at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:669)
at org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:89)
at org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:447)
at jdk.internal.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:305)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:212)
at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:77)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
at scala.PartialFunction.applyOrElse(PartialFunction.scala:123)
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122)
at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
at akka.actor.Actor.aroundReceive(Actor.scala:517)
at akka.actor.Actor.aroundReceive$(Actor.scala:515)
at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
at akka.actor.ActorCell.invoke(ActorCell.scala:561)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
at akka.dispatch.Mailbox.run(Mailbox.scala:225)
at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
... 4 more
Caused by: java.lang.Exception: Could not complete the stream element: Record # (undef) : 9.
at org.apache.flink.streaming.api.operators.async.AsyncWaitOperator$ResultHandler.completeExceptionally(AsyncWaitOperator.java:383)
at org.apache.flink.streaming.api.functions.async.AsyncFunction.timeout(AsyncFunction.java:97)
at org.apache.flink.streaming.api.operators.async.AsyncWaitOperator.lambda$processElement$0(AsyncWaitOperator.java:197)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invokeProcessingTimeCallback(StreamTask.java:1318)
at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$null$17(StreamTask.java:1309)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:50)
at org.apache.flink.streaming.runtime.tasks.mailbox.Mail.run(Mail.java:90)
at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.yield(MailboxExecutorImpl.java:86)
at org.apache.flink.streaming.api.operators.async.AsyncWaitOperator.waitInFlightInputsFinished(AsyncWaitOperator.java:284)
at org.apache.flink.streaming.api.operators.async.AsyncWaitOperator.endInput(AsyncWaitOperator.java:254)
at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.endOperatorInput(StreamOperatorWrapper.java:91)
at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.lambda$close$0(StreamOperatorWrapper.java:128)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:50)
at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.close(StreamOperatorWrapper.java:128)
at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.close(StreamOperatorWrapper.java:135)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.closeOperators(OperatorChain.java:439)
at org.apache.flink.streaming.runtime.tasks.StreamTask.afterInvoke(StreamTask.java:627)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:589)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:755)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:570)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.util.concurrent.TimeoutException: Async function call has timed out.
... 20 more
Printing the InterruptedException/ExecutionException caught in the lambda shows the following error:
java.util.concurrent.ExecutionException: java.net.ConnectException: executor not accepting a task
at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
at org.asynchttpclient.netty.NettyResponseFuture.get(NettyResponseFuture.java:201)
at com.merlot.data.pipeline.jobs.async.AsyncHttpRequest.lambda$asyncInvoke$0(AsyncHttpRequest.java:53)
at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1700)
at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1692)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
Caused by: java.net.ConnectException: executor not accepting a task
at org.asynchttpclient.netty.channel.NettyConnectListener.onFailure(NettyConnectListener.java:179)
at org.asynchttpclient.netty.channel.NettyChannelConnector$1.onFailure(NettyChannelConnector.java:108)
at org.asynchttpclient.netty.SimpleChannelFutureListener.operationComplete(SimpleChannelFutureListener.java:28)
at org.asynchttpclient.netty.SimpleChannelFutureListener.operationComplete(SimpleChannelFutureListener.java:20)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608)
at io.netty.util.concurrent.DefaultPromise.setFailure(DefaultPromise.java:109)
at io.netty.channel.DefaultChannelPromise.setFailure(DefaultChannelPromise.java:89)
at io.netty.bootstrap.Bootstrap.doResolveAndConnect0(Bootstrap.java:197)
at io.netty.bootstrap.Bootstrap.access$000(Bootstrap.java:46)
at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:180)
at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:166)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:604)
at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetSuccess(AbstractChannel.java:989)
at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:504)
at io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:417)
at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:474)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.IllegalStateException: executor not accepting a task
at io.netty.resolver.AddressResolverGroup.getResolver(AddressResolverGroup.java:61)
at io.netty.bootstrap.Bootstrap.doResolveAndConnect0(Bootstrap.java:194)
... 21 more
I'm not really sure why the async function is timing out, because I can see in the console that the FastAPI endpoint is being queried. The endpoint itself also works fine, as all my Postman requests go through. Any help in resolving the core issue is greatly appreciated.
I'm on macOS Big Sur and using the following third-party libs:
implementation 'org.json:json:20201115'
implementation 'org.apache.httpcomponents:httpclient:4.5.13'
implementation 'org.asynchttpclient:async-http-client:2.12.2'
implementation 'org.apache.flink:flink-core:1.12.2'
implementation 'org.apache.flink:flink-streaming-java_2.12:1.12.2'
implementation 'org.apache.flink:flink-clients_2.12:1.12.2'
Update 1: If I reduce the capacity to 1 (from 1000), then I don't get any error, but the output is still empty.
Update 2: Following the suggestion made by @DavidAnderson, I've enabled checkpointing. Now I'm no longer seeing the timeout error and my job is not terminated abruptly, which is good news. But the output folder is still empty. I've updated my Flink code above to reflect the checkpointing changes.
A common issue with AsyncIO is that every concurrent-request limit in the execution stack needs to be sized appropriately. Some of these limits are implicit: for example, if you don't supply your own thread pool to CompletableFuture.supplyAsync(), it uses the shared ForkJoinPool.commonPool(), which is limited to a very small size; see https://dzone.com/articles/be-aware-of-forkjoinpoolcommonpool.
IIRC I size these so that AsyncDataStream.unorderedWait(capacity) <= HTTP client capacity <= executor capacity. The HTTP client capacity is often bounded by both its connection pool size and its number of connections per host. A sketch of using a dedicated executor follows.
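Here is a minimal sketch (my code, not the poster's; the executor field name, its size of 10 to match the AsyncDataStream capacity above, and the use of the client's toCompletableFuture() are my assumptions) of routing the callback through a dedicated pool instead of commonPool():

private transient ExecutorService callbackExecutor; // hypothetical dedicated pool

@Override
public void open(Configuration parameters) {
    client = asyncHttpClient();
    // size the pool to at least the AsyncDataStream capacity (10 in the job above)
    callbackExecutor = Executors.newFixedThreadPool(10);
}

@Override
public void asyncInvoke(String key, final ResultFuture<Tuple2<String, String>> resultFuture) {
    String getURL = String.format("http://localhost:8000/temperatures/%s", key);
    client.executeRequest(get(getURL).build())
            .toCompletableFuture() // async-http-client's ListenableFuture exposes this
            .thenApplyAsync(
                response -> new JSONObject(response.getResponseBody()).getString("category"),
                callbackExecutor)
            .whenComplete((category, error) -> {
                if (error != null) {
                    resultFuture.completeExceptionally(error);
                } else {
                    resultFuture.complete(Collections.singleton(new Tuple2<>(key, category)));
                }
            });
}

Remember to also shut callbackExecutor down in close(), next to client.close().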

Unable to connect to Cassandra: java.lang.NoSuchMethodError: com.google.common.util.concurrent.Futures.withFallback [duplicate]

This question already has answers here:
java.lang.NoClassDefFoundError: com/google/common/util/concurrent/FutureFallback
(4 answers)
Closed 6 years ago.
I am trying to connect to Cassandra using Java (Hadoop 2), but it throws the error below:
Connecting to IP Address 127.0.0.1:9042...
16/04/12 10:35:13 INFO core.NettyUtil: Found Netty's native epoll transport in the classpath, using it
Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.util.concurrent.Futures.withFallback(Lcom/google/common/util/concurrent/ListenableFuture;Lcom/google/common/util/concurrent/FutureFallback;Ljava/util/concurrent/Executor;)Lcom/google/common/util/concurrent/ListenableFuture;
at com.datastax.driver.core.Connection.initAsync(Connection.java:177)
at com.datastax.driver.core.Connection$Factory.open(Connection.java:731)
at com.datastax.driver.core.ControlConnection.tryConnect(ControlConnection.java:251)
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:199)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:77)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1414)
at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:393)
at cassandra.CassandraConnector.connect(CassandraConnector.java:42)
at cassandra.Main.main(Main.java:20)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Please see the cassandra environment details::
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 2.2.5 | CQL spec 3.3.1 | Native protocol v4]
Jars I am using:
cassandra-driver-core-3.0.0.jar
guava-19.0.jar
netty-all-4.1.0.CR7.jar
I have tried other jars (guava >= 16.01, netty-all-4.0..., cassandra-driver-core-2.2.0), but it always throws a more or less similar error.
Please see below the code snippet used for establishing the connection:
public void connect(final String node, final int port)
{
    this.cluster = Cluster.builder().addContactPoint(node).withPort(port)
            .build();
    final Metadata metadata = cluster.getMetadata();
    ProtocolVersion myCurrentVersion = cluster.getConfiguration()
            .getProtocolOptions()
            .getProtocolVersion();
    System.out.println(myCurrentVersion);
    out.printf("Connected to cluster: %s\n", metadata.getClusterName());
    for (final Host host : metadata.getAllHosts())
    {
        out.printf("Datacenter: %s; Host: %s; Rack: %s\n",
                host.getDatacenter(), host.getAddress(), host.getRack());
    }
    session = cluster.connect();
}

public static void main(String[] args) {
    final CassandraConnector client = new CassandraConnector();
    final String ipAddress = args.length > 0 ? args[0] : "127.0.0.1";
    final int port = args.length > 1 ? Integer.parseInt(args[1]) : 9042;
    out.println("Connecting to IP Address " + ipAddress + ":" + port + "...");
    client.connect(ipAddress, port);
    client.close();
}
I think it might be because of some version conflict, but I am unable to find the correct versions.
I have checked some other similar posts and tried their solutions (using different jars) but could not resolve the issue.
Any help will be highly appreciated.
Yes, you are correct: this issue occurs due to a version conflict.
I recommend keeping a single Guava jar that matches the driver's requirements and removing the other, conflicting jars from the classpath (note that running via Hadoop, as your stack trace shows, often puts Hadoop's own older Guava on the classpath, which can shadow yours).
Also please have a look at this answer; it might help you.
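If you want to confirm which jar the conflicting class is actually loaded from, a quick generic diagnostic (my addition, not from the original answer) is to print its code source at runtime, e.g. at the top of main:

// prints the jar Futures was loaded from; if it is Hadoop's bundled Guava
// instead of your guava-19.0.jar, you have found the conflict
System.out.println(com.google.common.util.concurrent.Futures.class
        .getProtectionDomain().getCodeSource().getLocation());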

Alternatives to sleep when checking if Netty server is up?

I am starting a Netty server with a REST interface. I get this exception:
javax.ws.rs.ProcessingException: RESTEASY004655: Unable to invoke request
at org.jboss.resteasy.client.jaxrs.engines.ApacheHttpClient4Engine.invoke(ApacheHttpClient4Engine.java:287)
at org.jboss.resteasy.client.jaxrs.internal.ClientInvocation.invoke(ClientInvocation.java:436)
at org.jboss.resteasy.client.jaxrs.internal.proxy.ClientInvoker.invoke(ClientInvoker.java:102)
at org.jboss.resteasy.client.jaxrs.internal.proxy.ClientProxy.invoke(ClientProxy.java:64)
at com.sun.proxy.$Proxy20.ping(Unknown Source)
at com.openet.atf.agent.proxy.SlaveRemoteProxy.ping(SlaveRemoteProxy.java:41)
at com.openet.atf.agent.manage.Master.startSlaves(Master.java:86)
at com.openet.acceptance.runner.AcceptanceRunner.main(AcceptanceRunner.java:195)
Caused by: org.apache.http.conn.HttpHostConnectException: Connection to http://ovm1:8889 refused
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:190)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at org.jboss.resteasy.client.jaxrs.engines.ApacheHttpClient4Engine.invoke(ApacheHttpClient4Engine.java:283)
... 7 more
Caused by: java.net.ConnectException: Connection refused
I prevented this from happening with a Thread.sleep(5000). I am looking for a better alternative to sleeping: a fixed sleep always assumes the server will be up within 5 seconds.
A common approach in situations where success could come at any time in the future is the back-off pattern, typically implemented by doubling the wait time every iteration.
Something like this (filling in the connection check with a plain socket probe against the host and port from your stack trace; adjust as needed):
long wait = 50; // ms
boolean connected = false;
while (!connected) {
    Thread.sleep(wait);
    try (Socket socket = new Socket()) {
        // probe the server; replace host/port with your own
        socket.connect(new InetSocketAddress("ovm1", 8889), 1000);
        connected = true;
    } catch (IOException e) {
        wait *= 2; // back off and retry
    }
}
You can sleep until the connection is established.
boolean up = false;
while (!up) {
    try {
        // try to connect here; throw on failure
        up = true;
    } catch (Exception e) {
        Thread.sleep(5000);
    }
}

IndexNotFoundException[no such index]

I was running my first Elasticsearch test case, using Java for the experiment. It works perfectly fine in Eclipse debug mode.
The debug mode result:
{postDate=2016-01-31T10:32:58.952Z, title=Posting, content=today's weather is hot, tags=[hashtag]}
But when I run the application normally (not in debug mode), I get the following exception and have no idea why. Please guide me.
The exception:
8253 [main] INFO org.elasticsearch.node - [Marc Spector] started
8257 [elasticsearch[Marc Spector][clusterService#updateTask][T#1]] DEBUG org.elasticsearch.index.store - [Marc Spector] [facebook] using index.store.throttle.type [none], with index.store.throttle.max_bytes_per_sec [0b]
8273 [elasticsearch[Marc Spector][search][T#4]] DEBUG org.elasticsearch.action.search.type - [Marc Spector] All shards failed for phase: [query]
RemoteTransportException[[Marc Spector][127.0.0.1:9300][indices:data/read/search[phase/query]]]; nested: IndexNotFoundException[no such index];
Caused by: [facebook] IndexNotFoundException[no such index]
at org.elasticsearch.indices.IndicesService.indexServiceSafe(IndicesService.java:310)
at org.elasticsearch.search.SearchService.createContext(SearchService.java:635)
at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:617)
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:368)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:368)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:365)
at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Exception in thread "main" Failed to execute phase [query], all shards failed
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.onFirstPhaseResult(TransportSearchTypeAction.java:228)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$1.onFailure(TransportSearchTypeAction.java:174)
at org.elasticsearch.action.ActionListenerResponseHandler.handleException(ActionListenerResponseHandler.java:46)
at org.elasticsearch.transport.TransportService$DirectResponseChannel.processException(TransportService.java:821)
at org.elasticsearch.transport.TransportService$DirectResponseChannel.sendResponse(TransportService.java:799)
at org.elasticsearch.transport.TransportService$4.onFailure(TransportService.java:361)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
8278 [elasticsearch[Marc Spector][clusterService#updateTask][T#1]] DEBUG org.elasticsearch.index.mapper - [Marc Spector] [facebook] using dynamic[true]
I think showing the source code will make the issue clearer.
Source:
Node node = nodeBuilder().clusterName("testing2").node();
Client client = node.client();

SearchResponse response = client.prepareSearch("facebook")
        .setTypes("Lance")
        .setSearchType(SearchType.QUERY_THEN_FETCH)
        .setQuery(QueryBuilders.matchPhrasePrefixQuery("title", "Pos"))
        .setFrom(0).setSize(60).setExplain(true)
        .execute()
        .actionGet();

SearchHit[] searchResponse = response.getHits().getHits();
for (SearchHit hit : searchResponse) {
    System.out.println(hit.getSource());
}
Before querying your facebook index, you need to create it first:
Settings indexSettings = ImmutableSettings.settingsBuilder()
        .put("number_of_shards", 5)
        .put("number_of_replicas", 1)
        .build();
CreateIndexRequest indexRequest = new CreateIndexRequest("facebook", indexSettings);
client.admin().indices().create(indexRequest).actionGet();
And if you expect to find some results, you need to index your data also:
IndexResponse response = client.prepareIndex("facebook", "Lance", "1")
        .setSource(jsonBuilder()
                .startObject()
                .field("title", "Posting")
                .field("postDate", new Date())
                .field("content", "today's weather is hot")
                .field("tags", Lists.newArrayList("hashtag"))
                .endObject()
        )
        .execute()
        .actionGet();
Then you can search on your index.
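One extra caveat worth knowing (my note, not part of the original answer): Elasticsearch search is near-real-time, so a document indexed immediately before a search may not be visible yet. When indexing and searching in the same test run, forcing a refresh in between makes the result deterministic:

// force the index to refresh so the just-indexed document becomes searchable
client.admin().indices().prepareRefresh("facebook").execute().actionGet();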

RabbitMQ not working with Java

I have RabbitMQ set up on my machine with 3 different queues. One Java program listens on a queue, and the other queues deliver messages to Python programs. The Python programs work fine, but the Java program seems to have a problem with the AMQP connection. The following error occurs:
Exception in thread "main" com.rabbitmq.client.PossibleAuthenticationFailureException: Possibly caused by authentication failure
at com.rabbitmq.client.impl.AMQConnection.start(AMQConnection.java:341)
at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:590)
at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:612)
at com.elki.test.Worker.main(Worker.java:73)
Caused by: com.rabbitmq.client.ShutdownSignalException: connection error
at com.rabbitmq.utility.ValueOrException.getValue(ValueOrException.java:67)
at com.rabbitmq.utility.BlockingValueOrException.uninterruptibleGetValue(BlockingValueOrException.java:33)
at com.rabbitmq.client.impl.AMQChannel$BlockingRpcContinuation.getReply(AMQChannel.java:343)
at com.rabbitmq.client.impl.AMQChannel.privateRpc(AMQChannel.java:216)
at com.rabbitmq.client.impl.AMQChannel.rpc(AMQChannel.java:202)
at com.rabbitmq.client.impl.AMQConnection.start(AMQConnection.java:326)
... 3 more
Caused by: java.io.EOFException
at java.io.DataInputStream.readUnsignedByte(DataInputStream.java:290)
at com.rabbitmq.client.impl.Frame.readFrom(Frame.java:95)
at com.rabbitmq.client.impl.SocketFrameHandler.readFrame(SocketFrameHandler.java:139)
at com.rabbitmq.client.impl.AMQConnection$MainLoop.run(AMQConnection.java:532)
at java.lang.Thread.run(Thread.java:744)
How come there could be an authentication failure with Java but not with Python?
Any help appreciated.
Code:
public static void main(String[] argv)
        throws java.io.IOException,
               java.lang.InterruptedException {

    ConnectionFactory factory = new ConnectionFactory();
    factory.setHost("127.0.0.1");
    factory.setPort(5672);
    com.rabbitmq.client.Connection connection = factory.newConnection();
    Channel channel = connection.createChannel();

    channel.queueDeclare(TASK_QUEUE_NAME, true, false, false, null);
    System.out.println(" [*] Waiting for messages. To exit press CTRL+C");

    channel.basicQos(1);

    QueueingConsumer consumer = new QueueingConsumer(channel);
    channel.basicConsume(TASK_QUEUE_NAME, false, consumer);

    while (true) {
        QueueingConsumer.Delivery delivery = consumer.nextDelivery();
        String message = new String(delivery.getBody());

        System.out.println(" [x] Received '" + message + "'");
        // do some work
        System.out.println(" [x] Done");

        int prefetchCount = 1;
        channel.basicQos(prefetchCount);
        channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
    }
}
I suspect it's because you haven't set the username or the password on the ConnectionFactory object, so it can't authenticate with RabbitMQ. (Perhaps your Python code is passing those in, and can therefore authenticate.)
Try adding this code before calling factory.newConnection:
factory.setUsername(userName);
factory.setPassword(password);
replacing userName and password as needed for your code.
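Equivalently (an alternative I'm adding, not from the original answer), the Java client's ConnectionFactory can take host, port, and credentials in a single AMQP URI:

// adjust the credentials and address to your broker; note that setUri
// declares checked exceptions, so add them to your throws clause
factory.setUri("amqp://userName:password@127.0.0.1:5672");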
I had the same error. The problem was that RabbitMQ was started with its default config, listening on IPv6. I don't know why, but that does not work on the Windows Subsystem for Linux.
Forcing IPv4 helped for me:
cat /etc/rabbitmq/rabbitmq.config
[
  {rabbit, [
    {tcp_listeners, [{"127.0.0.1", 5672}]}
  ]}
].
By default RabbitMQ uses the IPv6 address ::1.
PS: you need to create the config file if it does not exist.
