I'm trying to implement the sample EventHub application given here, but it's giving me errors. I've followed the exact same steps given in the document. I'm on HDInsight 3.5, Storm 1.0.1.2.5.4.0-121
Here's the error for the EventHubReader, as seen in the Storm UI.
com.microsoft.eventhubs.client.EventHubException: org.apache.qpid.amqp_1_0.client.ConnectionErrorException: An AMQP error occurred (condition='amqp:unauthorized-access'). TrackingId:53ca4652535f423e5f0049dc08ef9_G22, SystemTracker:gateway2, Timestamp:2/28/2017 7:51:21 AM
at com.microsoft.eventhubs.client.EventHubReceiver.ensureReceiverCreated(EventHubReceiver.java:112) ~[stormjar.jar:?]
at com.microsoft.eventhubs.client.EventHubReceiver.<init>(EventHubReceiver.java:65) ~[stormjar.jar:?]
at com.microsoft.eventhubs.client.EventHubConsumerGroup.createReceiver(EventHubConsumerGroup.java:56) ~[stormjar.jar:?]
at com.microsoft.eventhubs.client.ResilientEventHubReceiver.initialize(ResilientEventHubReceiver.java:63) ~[stormjar.jar:?]
at org.apache.storm.eventhubs.spout.EventHubReceiverImpl.open(EventHubReceiverImpl.java:74) ~[stormjar.jar:?]
...
AMQP error occurred (condition='amqp:unauthorized-access'). TrackingId:53ca4652535f423e825f0049dc08eff9_G22, SystemTracker:gateway2, Timestamp:2/28/2017 7:51:21 AM
at org.apache.qpid.amqp_1_0.client.Receiver.<init>(Receiver.java:223) ~[stormjar.jar:?]
at org.apache.qpid.amqp_1_0.client.Session.createReceiver(Session.java:281) ~[stormjar.jar:?] ... 11 more
EventHubWriter:
com.microsoft.eventhubs.client.EventHubException: An error occurred while sending data.
at com.microsoft.eventhubs.client.EventHubSender.sendCore(EventHubSender.java:93) ~[stormjar.jar:?]
Caused by: org.apache.qpid.amqp_1_0.client.Sender$SenderCreationException: Peer did not create remote endpoint for link, target: my-event-hub
at org.apache.qpid.amqp_1_0.client.Sender.<init>(Sender.java:191) ~[stormjar.jar:?]
pom.xml
<properties>
<storm.version>1.0.1</storm.version>
<hadoop.version>2.7.3</hadoop.version>
</properties>
...
<dependency>
<groupId>com.microsoft</groupId>
<artifactId>eventhubs</artifactId>
<version>1.0.2</version>
</dependency>
I've made sure that the Event Hub namespace and policy keys in my EventHubs.properties file are correct. I've also opened the .jar artifact and made sure the EventHub classes were included.
Does anyone know how to get it to work?
Answering my own question in case anyone else runs into the same problem: it turns out there's a bug in the storm-eventhubs library.
https://issues.apache.org/jira/browse/STORM-2371?jql=project%20=%20STORM%20AND%20component%20=%20storm-eventhubs%20AND%20resolution%20=%20Unresolved%20ORDER%20BY%20priority%20DESC,%20key%20DESC
In Camel 2.22.1 I used the following Camel route to perform a file operation:
from("sftp://" + sourceUrl + "&preferredAuthentications=password&includeExt=xml&delete=true&disconnect=true&maxMessagesPerPoll=50&preMove=${file:name.noext}.process")
This renames files with the .xml extension to .process, performs further route operations, and finally deletes the .process file from the input folder. However, in Camel 3.9.0 this route started failing, and I got the following error from the Camel file component:
org.apache.camel.component.file.GenericFileOperationFailedException: Cannot delete file: source/do-sfdc-case-import-0/2451165.process
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: com.jcraft.jsch.SftpException: No such file
at org.apache.camel.component.file.remote.SftpOperations.deleteFile(SftpOperations.java:488)
... 22 common frames omitted
org.apache.camel.component.file.GenericFileOperationFailedException: Cannot change directory to: ..
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: com.jcraft.jsch.SftpException:
at org.apache.camel.component.file.remote.SftpOperations.doChangeDirectory(SftpOperations.java:682)
... 11 common frames omitted
Caused by: java.io.IOException: Pipe closed
at com.jcraft.jsch.ChannelSftp.cd(ChannelSftp.java:337)
... 12 common frames omitted
To resolve the issue I tried setting the stepwise=false flag, but then the application became extremely slow. And even though the file mentioned in the stack trace is present in the folder, Camel reports it as not found or is unable to change to the corresponding directory.
Any idea what I'm doing wrong here? I'd appreciate any help or tips.
After several tests and debugging sessions, I concluded that Camel/JSch was not handling multi-threading in my application very well, and after a few Google searches I found this mail thread https://www.mail-archive.com/search?l=users#camel.apache.org&q=subject:%22GenericFileOperationFailedException%22&o=newest&f=1 that supported my theory about Camel/JSch and multi-threading.
There is a timing issue: by the time the route tries to delete the file in the specific folder, the file has already been deleted, which caused the error above. After reading the camel-sftp documentation, I used the synchronous=true flag, which ensures that Camel processes the route strictly synchronously, and the problem was solved for me.
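For reference, here is a minimal sketch of the adjusted route; the sourceUrl value, route id, and the downstream log endpoint are placeholders, and the only change to the original URI is the added synchronous=true option.
import org.apache.camel.builder.RouteBuilder;

public class SftpImportRoute extends RouteBuilder {
    @Override
    public void configure() {
        // placeholder; in the original route this comes from configuration
        String sourceUrl = "user@host:22/source/do-sfdc-case-import-0?password=secret";

        from("sftp://" + sourceUrl
                + "&preferredAuthentications=password"
                + "&includeExt=xml"
                + "&delete=true"
                + "&disconnect=true"
                + "&maxMessagesPerPoll=50"
                + "&preMove=${file:name.noext}.process"
                // process strictly synchronously so the rename/delete happens
                // on the consumer thread, avoiding the timing issue
                + "&synchronous=true")
            .routeId("sftp-import")
            .to("log:processed"); // placeholder for the actual processing steps
    }
}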
I have a strange problem with the Elasticsearch JDBC driver version after packaging.
<dependency>
<groupId>org.elasticsearch.plugin</groupId>
<artifactId>x-pack-sql-jdbc</artifactId>
<version>7.10.0</version>
</dependency>
When I run my code in IDEA to access Elasticsearch, it works normally.
Next, I execute mvn package to get a jar with dependencies.
When I run this jar to access Elasticsearch, the error is as follows:
java.sql.SQLException: Server sent bad type [action_request_validation_exception]. Original type was [Validation Failed: 1: The [0.0.0] version of the [jdbc] client is not compatible with Elasticsearch version [7.10.0];]. [org.elasticsearch.action.ActionRequestValidationException: Validation Failed: 1: The [0.0.0] version of the [jdbc] client is not compatible with Elasticsearch version [7.10.0];
at org.elasticsearch.action.ValidateActions.addValidationError(ValidateActions.java:26)
at org.elasticsearch.xpack.sql.action.AbstractSqlQueryRequest.validate(AbstractSqlQueryRequest.java:239)
at org.elasticsearch.xpack.sql.action.SqlQueryRequest.validate(SqlQueryRequest.java:79)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:144)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:83)
at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:86)
at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:75)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:412)
at org.elasticsearch.xpack.sql.plugin.RestSqlQueryAction.lambda$prepareRequest$0(RestSqlQueryAction.java:116)
at org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:115)
at org.elasticsearch.xpack.security.rest.SecurityRestFilter.handleRequest(SecurityRestFilter.java:88)
at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:258)
at org.elasticsearch.rest.RestController.tryAllHandlers(RestController.java:340)
at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:191)
at org.elasticsearch.http.AbstractHttpServerTransport.dispatchRequest(AbstractHttpServerTransport.java:319)
at org.elasticsearch.http.AbstractHttpServerTransport.handleIncomingRequest(AbstractHttpServerTransport.java:384)
......
I guess there was a problem with the version metadata when packaging, but I haven't found a solution.
I found some source code of Elasticsearch that may be useful.
https://github.com/elastic/elasticsearch/blob/master/x-pack/plugin/sql/sql-client/src/main/java/org/elasticsearch/xpack/sql/client/ClientVersion.java#L112
According to https://github.com/elastic/elasticsearch/blob/master/x-pack/plugin/sql/sql-client/src/main/java/org/elasticsearch/xpack/sql/client/ClientVersion.java#L112, I added the following configuration to pom.xml and the issue was solved:
<manifestEntries>
<X-Compile-Elasticsearch-Version>7.10.0</X-Compile-Elasticsearch-Version>
</manifestEntries>
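For anyone wondering where that snippet goes: the attribute has to end up in the manifest of the repackaged jar, because ClientVersion appears to read it from there (losing it during repackaging is what yields the [0.0.0] client version). Here is a sketch assuming the fat jar is built with the maven-shade-plugin; with the maven-assembly-plugin the equivalent lives under <archive><manifestEntries>.
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.2.4</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <transformers>
          <!-- writes the attribute into the shaded jar's MANIFEST.MF -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
            <manifestEntries>
              <X-Compile-Elasticsearch-Version>7.10.0</X-Compile-Elasticsearch-Version>
            </manifestEntries>
          </transformer>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>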
I'm writing a java streaming pipeline with Apache Beam that reads messages from Google Cloud PubSub and should write them into an ElasticSearch instance. Currently, I'm using the direct runner, but the plan is to deploy the solution on Google Cloud Dataflow.
First of all, I wrote a pipeline that reads from PubSub and writes to text files, and it works. Then I set up the ElasticSearch instance, and that also works: I wrote some documents with curl and it was easy.
Then, when I tried to perform the write with Beam's ElasticSearch connector, I started to get errors. Specifically, I get java.lang.NoSuchMethodError: org.elasticsearch.client.RestClient.performRequest, despite the fact that I added the dependency to my pom.xml file.
What I'm doing is essentially this:
messages.apply(
"TwoMinWindow",
Window.into(FixedWindows.of(new Duration(120*1000)))
).apply(
"ElasticWrite",
ElasticsearchIO.write()
.withConnectionConfiguration(
ElasticsearchIO.ConnectionConfiguration
.create(new String[]{"http://xxx.xxx.xxx.xxx:9200"}, "streaming_data", "string")
.withUsername("xxxx")
.withPassword("xxxxxxxx")
)
);
Using the DirectRunner, I'm able to connect to PubSub, but I get an exception when the pipeline tries to connect with the ElasticSearch instance:
java.lang.NoSuchMethodError: org.elasticsearch.client.RestClient.performRequest(Ljava/lang/String;Ljava/lang/String;[Lorg/apache/http/Header;)Lorg/elasticsearch/client/Response;
at org.apache.beam.sdk.util.UserCodeException.wrap (UserCodeException.java:34)
at org.apache.beam.sdk.io.elasticsearch.ElasticsearchIO$Write$WriteFn$DoFnInvoker.invokeSetup (Unknown Source)
at org.apache.beam.sdk.transforms.reflect.DoFnInvokers.tryInvokeSetupFor (DoFnInvokers.java:50)
at org.apache.beam.runners.direct.DoFnLifecycleManager$DeserializingCacheLoader.load (DoFnLifecycleManager.java:104)
at org.apache.beam.runners.direct.DoFnLifecycleManager$DeserializingCacheLoader.load (DoFnLifecycleManager.java:91)
at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture (LocalCache.java:3528)
at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.loadSync (LocalCache.java:2277)
at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad (LocalCache.java:2154)
at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.get (LocalCache.java:2044)
at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.get (LocalCache.java:3952)
at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.getOrLoad (LocalCache.java:3974)
at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.get (LocalCache.java:4958)
at org.apache.beam.runners.direct.DoFnLifecycleManager.get (DoFnLifecycleManager.java:61)
at org.apache.beam.runners.direct.ParDoEvaluatorFactory.createEvaluator (ParDoEvaluatorFactory.java:129)
at org.apache.beam.runners.direct.ParDoEvaluatorFactory.forApplication (ParDoEvaluatorFactory.java:79)
at org.apache.beam.runners.direct.TransformEvaluatorRegistry.forApplication (TransformEvaluatorRegistry.java:169)
at org.apache.beam.runners.direct.DirectTransformExecutor.run (DirectTransformExecutor.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call (Executors.java:511)
at java.util.concurrent.FutureTask.run (FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:624)
at java.lang.Thread.run (Thread.java:748)
Caused by: java.lang.NoSuchMethodError: org.elasticsearch.client.RestClient.performRequest(Ljava/lang/String;Ljava/lang/String;[Lorg/apache/http/Header;)Lorg/elasticsearch/client/Response;
at org.apache.beam.sdk.io.elasticsearch.ElasticsearchIO.getBackendVersion (ElasticsearchIO.java:1348)
at org.apache.beam.sdk.io.elasticsearch.ElasticsearchIO$Write$WriteFn.setup (ElasticsearchIO.java:1200)
What I added in the pom.xml is:
<dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-sdks-java-io-google-cloud-platform</artifactId>
<version>${beam.version}</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.elasticsearch.client/elasticsearch-rest-client -->
<dependency>
<groupId>org.elasticsearch.client</groupId>
<artifactId>elasticsearch-rest-client</artifactId>
<version>${elastic.version}</version>
</dependency>
I'm stuck with this problem and I don't know how to solve it. If I use a JestClient, I'm able to connect to ElasticSearch without any issue.
Do you have any suggestions?
You are using a newer version of RestClient that no longer has the method performRequest(String, String, Header...). If you look at the latest source code, you can see that performRequest now takes a Request object, whereas older versions had overloads that took Strings and Headers.
Those overloads were deprecated and then removed from the code on September 1, 2018.
Either change your code to use the newer Elasticsearch client API, or specify an older version of the library (it needs to be before 7.0.x, e.g. 6.8.4) that is compatible with your code.
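For illustration, here is a minimal sketch of the newer Request-based API; the host and endpoint are placeholders.
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class RestClientSketch {
    public static void main(String[] args) throws Exception {
        // placeholder endpoint; replace with your Elasticsearch host
        try (RestClient client = RestClient.builder(
                new HttpHost("localhost", 9200, "http")).build()) {

            // newer clients expect a Request object
            Request request = new Request("GET", "/_cluster/health");
            Response response = client.performRequest(request);
            System.out.println(response.getStatusLine());

            // The overload removed in 7.x looked like this, and matches the
            // signature the stack trace shows ElasticsearchIO.getBackendVersion calling:
            // client.performRequest("GET", "/", new org.apache.http.Header[0]);
        }
    }
}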
We have a Kafka Connect project where we rely on a library that fetches data from GitLab. This library depends on Jersey, and Kafka also uses Jersey. When starting our connector, we receive a class cast error that appears to be caused by Jersey's global discovery mechanism clashing when both the server and client modules are on the same classpath.
org.gitlab4j.api.GitLabApiException: org.glassfish.jersey.server.wadl.internal.WadlAutoDiscoverable cannot be cast to org.glassfish.jersey.internal.spi.AutoDiscoverable
at org.gitlab4j.api.AbstractApi.handle(AbstractApi.java:615)
at org.gitlab4j.api.AbstractApi.get(AbstractApi.java:193)
at poc.connector.gitlab.api.ExtendedIssuesApi.getIssues(GitlabExtendedApi.scala:34)
at poc.connector.gitlab.GitLabSourceTask.poll(GitLabSourceTask.scala:49)
at org.apache.kafka.connect.runtime.WorkerSourceTask.poll(WorkerSourceTask.java:244)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:220)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassCastException: org.glassfish.jersey.server.wadl.internal.WadlAutoDiscoverable cannot be cast to org.glassfish.jersey.internal.spi.AutoDiscoverable
at java.util.TreeMap.compare(TreeMap.java:1295)
at java.util.TreeMap.put(TreeMap.java:538)
at java.util.TreeSet.add(TreeSet.java:255)
at java.util.AbstractCollection.addAll(AbstractCollection.java:344)
at java.util.TreeSet.addAll(TreeSet.java:312)
at org.glassfish.jersey.model.internal.CommonConfig.configureAutoDiscoverableProviders(CommonConfig.java:599)
at org.glassfish.jersey.client.ClientConfig$State.configureAutoDiscoverableProviders(ClientConfig.java:403)
at org.glassfish.jersey.client.ClientConfig$State.initRuntime(ClientConfig.java:450)
at org.glassfish.jersey.internal.util.collection.Values$LazyValueImpl.get(Values.java:341)
at org.glassfish.jersey.client.ClientConfig.getRuntime(ClientConfig.java:826)
at org.glassfish.jersey.client.ClientRequest.getConfiguration(ClientRequest.java:285)
at org.glassfish.jersey.client.JerseyInvocation.validateHttpMethodAndEntity(JerseyInvocation.java:143)
at org.glassfish.jersey.client.JerseyInvocation.<init>(JerseyInvocation.java:112)
at org.glassfish.jersey.client.JerseyInvocation.<init>(JerseyInvocation.java:108)
at org.glassfish.jersey.client.JerseyInvocation.<init>(JerseyInvocation.java:99)
at org.glassfish.jersey.client.JerseyInvocation$Builder.method(JerseyInvocation.java:419)
at org.glassfish.jersey.client.JerseyInvocation$Builder.get(JerseyInvocation.java:319)
at org.gitlab4j.api.GitLabApiClient.get(GitLabApiClient.java:382)
at org.gitlab4j.api.GitLabApiClient.get(GitLabApiClient.java:370)
at org.gitlab4j.api.AbstractApi.get(AbstractApi.java:191)
... 11 more
$ #inside of the plugin path of kafka connect:
$ find ./ | grep jersey | grep server
./schema-registry/jersey-server-2.27.jar
./confluent-kafka-mqtt/jersey-server-2.27.jar
./kafka/jersey-server-2.27.jar
./rest-utils/jersey-server-2.27.jar
How would we go about configuring our code so that the wrong class isn't picked up somewhere in our Connect application? Or how do we avoid the cast error in the context of the AutoDiscoverable implementations?
We had a similar issue in one of our Kafka Connect connectors, which we solved by shading org.glassfish in our connector.
We package our connector as an "uber JAR" and place it in a path configured via the plugin.path setting.
See also the Confluent docs for Kafka Connect about this topic. There it is stated that
... a plugin should never contain any libraries that are provided by Kafka Connect's runtime.
We chose to shade instead; you might also be able to solve this by not packaging Jersey in your connector.
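A sketch of what that shading could look like with the maven-shade-plugin; the shaded package prefix and plugin version are placeholders, and depending on your setup Jersey's META-INF/services entries may also need the ServicesResourceTransformer shown below.
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.2.4</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <!-- move the connector's Jersey classes into a private namespace so they
               can't clash with the Jersey shipped by the Connect runtime -->
          <relocation>
            <pattern>org.glassfish</pattern>
            <shadedPattern>poc.connector.shaded.org.glassfish</shadedPattern>
          </relocation>
        </relocations>
        <transformers>
          <!-- rewrites META-INF/services entries to the relocated class names -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>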
I just had exactly the same issue, developing a Kafka source connector for GitLab using gitlab4j.
I fixed it by adding the following entries to the exclude section of the assembly and shade plugins:
<exclude>org.glassfish.jersey.inject</exclude>
<exclude>org.glassfish.jersey.core</exclude>
<exclude>org.glassfish.jersey.connectors</exclude>
I'm trying to use the OpenSSL engine on an HTTP server.
My configuration looks like this:
HttpServerOptions options = new HttpServerOptions()
.setSsl(config.getSsl())
.setSslEngineOptions(new OpenSSLEngineOptions())
.setClientAuth(ClientAuth.REQUEST)
.setKeyStoreOptions(keystoreOptions)
.setTrustStoreOptions(truststoreOptions)
.setEnabledSecureTransportProtocols(enabledSecureTransportProtocols);
I'm using Vert.x 3.6.2, which brings in Netty 4.1.30. I also added this to my pom:
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-tcnative</artifactId>
<version>2.0.20.Final</version>
<classifier>linux-x86_64-fedora</classifier>
</dependency>
This is because my HTTP server is deployed on RHEL 7 with OpenSSL 1.0.1 (I know it's old). However, I'm getting the following error:
io.vertx.core.VertxException: OpenSSL is not available
And as I can see from the logs, the Netty handler tries to load the netty-tcnative native library and can't find it on the classpath:
netty-tcnative not in the classpath; OpenSslEngine will be unavailable.
I don't know how to resolve this issue.
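To dig into why Netty reports OpenSSL as unavailable, here is a minimal diagnostic sketch using Netty's OpenSsl helper, run with the same classpath as the server; it only surfaces the load failure, it doesn't fix it.
import io.netty.handler.ssl.OpenSsl;

public class OpenSslCheck {
    public static void main(String[] args) {
        // Netty records why netty-tcnative failed to load; printing the cause
        // usually shows whether the native library is missing or failed to link.
        System.out.println("OpenSSL available: " + OpenSsl.isAvailable());
        if (!OpenSsl.isAvailable()) {
            OpenSsl.unavailabilityCause().printStackTrace();
        }
    }
}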