Apache Camel Web3j component - java

I am trying to use the Apache camel-web3j component to connect to a local Ganache testnet node: https://github.com/apache/camel/blob/master/components/camel-web3j/src/main/docs/web3j-component.adoc
Even though this component is not officially released yet, I was able to build it locally and include it as a local dependency in my Maven project. When I follow the instructions on the GitHub site, I receive this error:
java.lang.RuntimeException: Provided file socket cannot be opened: 127.0.0.1:7545
at org.web3j.protocol.ipc.UnixDomainSocket.<init>(UnixDomainSocket.java:41)
at org.web3j.protocol.ipc.UnixDomainSocket.<init>(UnixDomainSocket.java:27)
at org.web3j.protocol.ipc.UnixIpcService.getIO(UnixIpcService.java:21)
at org.web3j.protocol.ipc.IpcService.performIO(IpcService.java:50)
at org.web3j.protocol.Service.send(Service.java:31)
at org.web3j.protocol.core.Request.send(Request.java:71)
at org.web3j.protocol.core.filters.BlockFilter.sendRequest(BlockFilter.java:24)
at org.web3j.protocol.core.filters.Filter.run(Filter.java:45)
at org.web3j.protocol.rx.JsonRpc2_0Rx.run(JsonRpc2_0Rx.java:73)
at org.web3j.protocol.rx.JsonRpc2_0Rx.lambda$ethBlockHashObservable$0(JsonRpc2_0Rx.java:46)
at rx.Observable.unsafeSubscribe(Observable.java:10142)
at rx.internal.operators.OnSubscribeMap.call(OnSubscribeMap.java:48)
at rx.internal.operators.OnSubscribeMap.call(OnSubscribeMap.java:33)
at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48)
at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30)
at rx.Observable.subscribe(Observable.java:10238)
at rx.Observable.subscribe(Observable.java:10205)
at rx.Observable.subscribe(Observable.java:10086)
at org.apache.camel.component.web3j.Web3jConsumer.doStart(Web3jConsumer.java:100)
at org.apache.camel.support.ServiceSupport.start(ServiceSupport.java:61)
at org.apache.camel.impl.DefaultCamelContext.startService(DefaultCamelContext.java:3518)
at org.apache.camel.impl.DefaultCamelContext.doStartOrResumeRouteConsumers(DefaultCamelContext.java:3835)
at org.apache.camel.impl.DefaultCamelContext.doStartRouteConsumers(DefaultCamelContext.java:3771)
at org.apache.camel.impl.DefaultCamelContext.safelyStartRouteServices(DefaultCamelContext.java:3691)
at org.apache.camel.impl.DefaultCamelContext.doStartOrResumeRoutes(DefaultCamelContext.java:3455)
at org.apache.camel.impl.DefaultCamelContext.doStartCamel(DefaultCamelContext.java:3309)
at org.apache.camel.impl.DefaultCamelContext.access$000(DefaultCamelContext.java:202)
at org.apache.camel.impl.DefaultCamelContext$2.call(DefaultCamelContext.java:3093)
at org.apache.camel.impl.DefaultCamelContext$2.call(DefaultCamelContext.java:3089)
at org.apache.camel.impl.DefaultCamelContext.doWithDefinedClassLoader(DefaultCamelContext.java:3112)
at org.apache.camel.impl.DefaultCamelContext.doStart(DefaultCamelContext.java:3089)
at org.apache.camel.support.ServiceSupport.start(ServiceSupport.java:61)
at org.apache.camel.impl.DefaultCamelContext.start(DefaultCamelContext.java:3026)
at org.apache.camel.main.Main.doStart(Main.java:129)
at org.apache.camel.support.ServiceSupport.start(ServiceSupport.java:61)
at org.apache.camel.main.MainSupport.run(MainSupport.java:168)
at com.optum.propel.commons.startup.AppInit.startMicroService(AppInit.java:141)
at com.optum.propel.commons.startup.AppInit.lambda$main$0(AppInit.java:81)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: No such file or directory
at jnr.unixsocket.UnixSocketChannel.doConnect(UnixSocketChannel.java:127)
at jnr.unixsocket.UnixSocketChannel.connect(UnixSocketChannel.java:136)
at jnr.unixsocket.UnixSocketChannel.open(UnixSocketChannel.java:68)
at org.web3j.protocol.ipc.UnixDomainSocket.<init>(UnixDomainSocket.java:35)
... 38 common frames omitted
{ "date_time":"2018-06-13 17:04:48,617", "thread":"[Thread-2]", "log_level":"INFO ", "class_name":"Web3jConsumer", "log_message":"Subscribed: org.apache.camel.component.web3j.Web3jConfiguration#1f7fe1b2" }
{ "date_time":"2018-06-13 17:04:48,617", "thread":"[Thread-2]", "log_level":"INFO ", "class_name":"DefaultCamelContext", "log_message":"Route: rsRoute started and consuming from: web3j://127.0.0.1:7545?operation=BLOCK_OBSERVABLE" }
I am confused because it first throws an error, then says it successfully connected. Additionally, when I change the operation to ETH_GET_BLOCK_BY_HASH it throws an unsupported operation exception.
I was wondering if anyone has tried to use this component and seen these issues?
The endpoint I am using is:
web3j://127.0.0.1:7545?operation=BLOCK_OBSERVABLE

Check the integration tests.
The URL should be: http://127.0.0.1:7545
https://github.com/apache/camel/blob/master/components/camel-web3j/src/test/java/org/apache/camel/component/web3j/integration/Web3jProducerGanacheTest.java
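For reference, a minimal sketch of what the corrected consumer route could look like, based on that test (the Main bootstrap and the log step are illustrative additions, and the exact endpoint string is assumed from the test's URL handling):

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.main.Main;

public class GanacheBlockRoute {
    public static void main(String[] args) throws Exception {
        Main main = new Main();
        main.addRouteBuilder(new RouteBuilder() {
            @Override
            public void configure() {
                // Passing the full http:// URL makes web3j use its HTTP transport;
                // a bare host:port is treated as an IPC socket path, which is what
                // caused the "Provided file socket cannot be opened" error above.
                from("web3j://http://127.0.0.1:7545?operation=BLOCK_OBSERVABLE")
                    .log("new block hash: ${body}");
            }
        });
        main.run();
    }
}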

Related

What can cause intermittent javax.net.ssl.SSLHandshakeException: No subject alternative DNS name matching <host> found?

For the past few weeks, I've been getting occasional reports of the following error:
javax.net.ssl.SSLHandshakeException: No subject alternative DNS name matching vassalengine.org found.
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:370)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:313)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:308)
at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.checkServerCerts(CertificateMessage.java:1357)
at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.onConsumeCertificate(CertificateMessage.java:1232)
at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.consume(CertificateMessage.java:1175)
at java.base/sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:396)
at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:480)
at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:458)
at java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:200)
at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:172)
at java.base/sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1500)
at java.base/sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1415)
at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:450)
at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:421)
at java.base/sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:580)
at java.base/sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:183)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1665)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1589)
at java.base/sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:224)
at java.base/java.net.URL.openStream(URL.java:1161)
at VASSAL.tools.version.LiveVersionInfo.getVersion(LiveVersionInfo.java:52)
at VASSAL.tools.version.LiveVersionInfo.getRelease(LiveVersionInfo.java:38)
at VASSAL.tools.version.VersionUtils.compareReportable(VersionUtils.java:38)
at VASSAL.tools.BugDialog$CheckRequest.doInBackground(BugDialog.java:487)
at VASSAL.tools.BugDialog$CheckRequest.doInBackground(BugDialog.java:473)
at java.desktop/javax.swing.SwingWorker$1.call(SwingWorker.java:304)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.desktop/javax.swing.SwingWorker.run(SwingWorker.java:343)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.security.cert.CertificateException: No subject alternative DNS name matching vassalengine.org found.
at java.base/sun.security.util.HostnameChecker.matchDNS(HostnameChecker.java:212)
at java.base/sun.security.util.HostnameChecker.match(HostnameChecker.java:103)
at java.base/sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:452)
at java.base/sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:426)
at java.base/sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:238)
at java.base/sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:132)
at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.checkServerCerts(CertificateMessage.java:1341)
... 28 common frames omitted
The relevant part of VASSAL.tools.version.LiveVersionInfo.getVersion() is this:
try (InputStream in = new URL(url).openStream()) {
    return IOUtils.toString(in, StandardCharsets.UTF_8).trim();
}
where url == "https://vassalengine.org/util/current-release".
If I visit that URL in a browser and check the certificate, the CN is vassalengine.org and the only Subject Alt Name listed is vassalengine.org. If I run the code myself, it succeeds; there is no exception thrown. Hundreds of other users have also successfully run this code. I've also had people who have reported this error run a diagnostic program containing just the code above... and none of them have been able to reproduce the error when they do that.
What could be causing a transient error like this?
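One way to narrow this down would be to have an affected user run a small probe and report the certificate they actually receive. A raw SSLSocket does not perform hostname verification by default, so the handshake completes even when the name would not match, and you can inspect the subject and SANs of whatever certificate was presented. A hedged diagnostic sketch (host and port are the obvious assumptions):

import java.security.cert.X509Certificate;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class CertProbe {
    public static void main(String[] args) throws Exception {
        String host = "vassalengine.org";
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket(host, 443)) {
            socket.startHandshake();
            // The first certificate in the chain is the server's own certificate.
            X509Certificate cert =
                    (X509Certificate) socket.getSession().getPeerCertificates()[0];
            System.out.println("Subject: " + cert.getSubjectX500Principal());
            System.out.println("SANs:    " + cert.getSubjectAlternativeNames());
        }
    }
}

If the SANs reported by an affected user differ from what you see in a browser, something between them and the server (antivirus TLS scanning, a corporate proxy, a captive portal) is probably presenting its own certificate.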

H2 database not flushed because of file lock

I am running a Spring Boot project in IntelliJ IDEA for training that writes a recipe to the database using RESTful services. The IDE runs the server and I use Postman to query it.
My problem is that the H2 database does not persist the data entered. In memory it's fine, but as soon as I shut down and start up again, the data is gone.
Looking at the database files, I see a trace file that seems to show that the DB wasn't updated because there was a lock on it. I have searched for a lock file (locate *.lock.db), stopped all Java processes, and even rebooted, without a change in behavior. I've changed the FILE_LOCK options for H2 but the problem persists.
EDIT: I just noticed that I hadn't tried FILE_LOCK=FILE. I just tried it and the DB still isn't updated; however, no trace file is produced.
EDIT 2: I forgot to say that I'm running on Ubuntu 20.04.
EDIT 3: I am careful to shut down using the "actuator/shutdown" endpoint from Postman. With file locking enabled, I do this:
1. Start the server from my IDE. I can see the lock file being created.
2. Create a new recipe.
3. Make sure I can get it (returns the recipe, no error).
4. Shut down using the actuator. The lock file is removed.
5. Start the server again. A new lock file is created.
6. Try to get the recipe. Returns a 404 error. No trace file.
What could be causing this? Here is my trace.db file [note: when I use FILE_LOCK=FILE, no trace file is created, so maybe the locking problem was a red herring?]:
2021-09-09 07:54:08 database: flush
org.h2.message.DbException: General error: "java.lang.IllegalStateException: The file is locked: nio:/home/knute/IdeaProjects/recipes_db.mv.db [1.4.200/7]" [50000-200]
at org.h2.message.DbException.get(DbException.java:194)
at org.h2.message.DbException.convert(DbException.java:347)
at org.h2.mvstore.db.MVTableEngine$1.uncaughtException(MVTableEngine.java:93)
at org.h2.mvstore.MVStore.handleException(MVStore.java:2877)
at org.h2.mvstore.MVStore.panic(MVStore.java:481)
at org.h2.mvstore.MVStore.<init>(MVStore.java:402)
at org.h2.mvstore.MVStore$Builder.open(MVStore.java:3579)
at org.h2.mvstore.db.MVTableEngine$Store.open(MVTableEngine.java:170)
at org.h2.mvstore.db.MVTableEngine.init(MVTableEngine.java:103)
at org.h2.engine.Database.getPageStore(Database.java:2659)
at org.h2.engine.Database.open(Database.java:675)
at org.h2.engine.Database.openDatabase(Database.java:307)
at org.h2.engine.Database.<init>(Database.java:301)
at org.h2.engine.Engine.openSession(Engine.java:74)
at org.h2.engine.Engine.openSession(Engine.java:192)
at org.h2.engine.Engine.createSessionAndValidate(Engine.java:171)
at org.h2.engine.Engine.createSession(Engine.java:166)
at org.h2.engine.Engine.createSession(Engine.java:29)
at org.h2.engine.SessionRemote.connectEmbeddedOrServer(SessionRemote.java:340)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:173)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:152)
at org.h2.Driver.connect(Driver.java:69)
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138)
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:358)
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206)
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:477)
at com.zaxxer.hikari.pool.HikariPool.access$100(HikariPool.java:71)
at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:725)
at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:711)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.h2.jdbc.JdbcSQLNonTransientException: General error: "java.lang.IllegalStateException: The file is locked: nio:/home/knute/IdeaProjects/recipes_db.mv.db [1.4.200/7]" [50000-200]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:505)
at org.h2.message.DbException.getJdbcSQLException(DbException.java:429)
... 33 more
Caused by: java.lang.IllegalStateException: The file is locked: nio:/home/knute/IdeaProjects/recipes_db.mv.db [1.4.200/7]
at org.h2.mvstore.DataUtils.newIllegalStateException(DataUtils.java:950)
at org.h2.mvstore.FileStore.open(FileStore.java:166)
at org.h2.mvstore.MVStore.<init>(MVStore.java:381)
... 27 more
Caused by: java.nio.channels.OverlappingFileLockException
at java.base/sun.nio.ch.FileLockTable.checkList(FileLockTable.java:229)
at java.base/sun.nio.ch.FileLockTable.add(FileLockTable.java:123)
at java.base/sun.nio.ch.FileChannelImpl.tryLock(FileChannelImpl.java:1154)
at org.h2.store.fs.FileNio.tryLock(FilePathNio.java:121)
at java.base/java.nio.channels.FileChannel.tryLock(FileChannel.java:1165)
at org.h2.mvstore.FileStore.open(FileStore.java:163)
... 28 more
2021-09-09 07:54:08 database: flush [the same exception is logged a second time]
...and the application.properties file [note: in my current iteration, I am using FILE_LOCK=FILE]:
# Required by HyperSkill
server.port=8881
management.endpoints.web.exposure.include=*
management.endpoint.shutdown.enabled=true
spring.datasource.url=jdbc:h2:file:../recipes_db
# Solves file locking problem? No.
#spring.datasource.url=jdbc:h2:file:../recipes_db;DB_CLOSE_ON_EXIT=TRUE;FILE_LOCK=NO
#spring.datasource.url=jdbc:h2:file:../recipes_db;FILE_LOCK=NO
#spring.datasource.url=jdbc:h2:file:../recipes_db;FILE_LOCK=SOCKET
#spring.datasource.url=jdbc:h2:file:../recipes_db;FILE_LOCK=FS
# Needed?
spring.datasource.auto-commit=true
# To remove warning
spring.jpa.open-in-view=true
I found the problem! After adding debug=true to application.properties, I saw this:
Starting delayed evictData of schema as part of SessionFactory shut-down
So Spring (via Hibernate) was dropping my tables on shutdown: with an embedded database like H2 and no explicit setting, spring.jpa.hibernate.ddl-auto defaults to create-drop. Adding this to application.properties stopped that:
spring.jpa.hibernate.ddl-auto=update
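As a side note on the trace: java.nio.channels.OverlappingFileLockException is thrown when the same JVM already holds a lock on the file, not when another process does, so it usually points at two connections (or two datasource URLs that resolve to the same file) inside one application. A tiny self-contained demo of that behaviour, using an arbitrary scratch file as a stand-in:

import java.io.RandomAccessFile;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;

public class OverlappingLockDemo {
    public static void main(String[] args) throws Exception {
        // "demo.db" is just a placeholder standing in for recipes_db.mv.db.
        try (RandomAccessFile a = new RandomAccessFile("demo.db", "rw");
             RandomAccessFile b = new RandomAccessFile("demo.db", "rw")) {
            FileLock first = a.getChannel().tryLock();
            System.out.println("first lock acquired: " + first.isValid());
            try {
                // Same JVM, different channel, same file: tryLock does not return
                // null (null would mean another process holds the lock); it throws.
                b.getChannel().tryLock();
            } catch (OverlappingFileLockException e) {
                System.out.println("second attempt failed: " + e);
            }
            first.release();
        }
    }
}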

SVNKit using ssh throws IOException

I set up a new Subversion repository but I am unable to connect to it.
The old repository works as expected. The new repository throws an exception:
org.eclipse.team.svn.core.connector.SVNConnectorException: svn: E210002: There was a problem while connecting to xn--x7h.example.com:22
at org.polarion.team.svn.connector.svnkit.SVNKitService.handleClientException(SVNKitService.java:59)
at org.polarion.team.svn.connector.svnkit.SVNKitConnector.listEntries(SVNKitConnector.java:1758)
at org.eclipse.team.svn.core.extension.factory.ThreadNameModifier.listEntries(ThreadNameModifier.java:324)
at org.eclipse.team.svn.core.utility.SVNUtility.list(SVNUtility.java:440)
at org.eclipse.team.svn.core.svnstorage.SVNRepositoryContainer.getChildren(SVNRepositoryContainer.java:79)
at org.eclipse.team.svn.core.operation.remote.GetRemoteFolderChildrenOperation.runImpl(GetRemoteFolderChildrenOperation.java:76)
at org.eclipse.team.svn.core.operation.AbstractActionOperation.run(AbstractActionOperation.java:82)
at org.eclipse.team.svn.core.utility.ProgressMonitorUtility.doTask(ProgressMonitorUtility.java:104)
at org.eclipse.team.svn.core.operation.CompositeOperation.runImpl(CompositeOperation.java:99)
at org.eclipse.team.svn.core.operation.AbstractActionOperation.run(AbstractActionOperation.java:82)
at org.eclipse.team.svn.core.operation.LoggedOperation.run(LoggedOperation.java:40)
at org.eclipse.team.svn.core.utility.ProgressMonitorUtility.doTask(ProgressMonitorUtility.java:104)
at org.eclipse.team.svn.core.utility.ProgressMonitorUtility.doTaskExternal(ProgressMonitorUtility.java:90)
at org.eclipse.team.svn.ui.utility.DefaultCancellableOperationWrapper.run(DefaultCancellableOperationWrapper.java:55)
at org.eclipse.team.svn.ui.utility.ScheduledOperationWrapper.run(ScheduledOperationWrapper.java:37)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:63)
Caused by: org.apache.subversion.javahl.ClientException: svn: E210002: There was a problem while connecting to xn--x7h.example.com:22
at org.apache.subversion.javahl.ClientException.fromException(ClientException.java:117)
at org.tmatesoft.svn.core.javahl17.SVNClientImpl.getClientException(SVNClientImpl.java:1539)
at org.tmatesoft.svn.core.javahl17.SVNClientImpl.list(SVNClientImpl.java:189)
at org.polarion.team.svn.connector.svnkit.SVNKitConnector.listEntries(SVNKitConnector.java:1745)
... 14 more
Caused by: org.tmatesoft.svn.core.SVNException: svn: E210002: There was a problem while connecting to xn--x7h.example.com:22
at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:70)
at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:57)
at org.tmatesoft.svn.core.internal.io.svn.SVNSSHConnector.open(SVNSSHConnector.java:145)
at org.tmatesoft.svn.core.internal.io.svn.SVNConnection.open(SVNConnection.java:77)
at org.tmatesoft.svn.core.internal.io.svn.SVNRepositoryImpl.openConnection(SVNRepositoryImpl.java:1273)
at org.tmatesoft.svn.core.internal.io.svn.SVNRepositoryImpl.getLatestRevision(SVNRepositoryImpl.java:172)
at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgRepositoryAccess.getRevisionNumber(SvnNgRepositoryAccess.java:119)
at org.tmatesoft.svn.core.internal.wc2.SvnRepositoryAccess.getLocations(SvnRepositoryAccess.java:195)
at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgRepositoryAccess.createRepositoryFor(SvnNgRepositoryAccess.java:46)
at org.tmatesoft.svn.core.internal.wc2.remote.SvnRemoteList.run(SvnRemoteList.java:36)
at org.tmatesoft.svn.core.internal.wc2.remote.SvnRemoteList.run(SvnRemoteList.java:1)
at org.tmatesoft.svn.core.internal.wc2.SvnOperationRunner.run(SvnOperationRunner.java:21)
at org.tmatesoft.svn.core.wc2.SvnOperationFactory.run(SvnOperationFactory.java:1235)
at org.tmatesoft.svn.core.wc2.SvnOperation.run(SvnOperation.java:294)
at org.tmatesoft.svn.core.javahl17.SVNClientImpl.list(SVNClientImpl.java:187)
... 15 more
Caused by: java.io.IOException: There was a problem while connecting to xn--x7h.example.com:22
at com.trilead.ssh2.Connection.connect(Connection.java:817)
at org.tmatesoft.svn.core.internal.io.svn.ssh.SshHost.openConnection(SshHost.java:225)
at org.tmatesoft.svn.core.internal.io.svn.ssh.SshHost.openSession(SshHost.java:153)
at org.tmatesoft.svn.core.internal.io.svn.ssh.SshSessionPool.openSession(SshSessionPool.java:85)
at org.tmatesoft.svn.core.internal.io.svn.SVNSSHConnector.open(SVNSSHConnector.java:122)
... 27 more
Caused by: java.io.IOException: Key exchange was not finished, connection is closed.
at com.trilead.ssh2.transport.KexManager.getOrWaitForConnectionInfo(KexManager.java:92)
at com.trilead.ssh2.transport.TransportManager.getConnectionInfo(TransportManager.java:231)
at com.trilead.ssh2.Connection.connect(Connection.java:769)
... 31 more
Caused by: java.io.IOException: Cannot negotiate, proposals do not match.
at com.trilead.ssh2.transport.KexManager.handleMessage(KexManager.java:413)
at com.trilead.ssh2.transport.TransportManager.receiveLoop(TransportManager.java:765)
at com.trilead.ssh2.transport.TransportManager$1.run(TransportManager.java:480)
at java.base/java.lang.Thread.run(Thread.java:834)
Any ideas?
I checked on the server side what was going on: running tail -f /var/log/auth.log showed a message that the client and the server were not able to agree on matching key-exchange (kex) algorithms.
My solution was to add the line
KexAlgorithms +diffie-hellman-group1-sha1
to /etc/ssh/sshd_config and restart the SSH daemon.

Azure Blob Storage with Apache Flink 1.10

I am trying to use Azure Blob Storage with Apache Flink 1.10 for checkpointing.
I followed all the instructions in the Flink documentation: https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/filesystems/azure.html
Step 1:
mkdir ./plugins/azure-fs-hadoop
cp ./opt/flink-azure-fs-hadoop-1.10.0.jar ./plugins/azure-fs-hadoop/
Step 2:
This is what I have in flink-conf.yaml:
# Azure Storage Key
fs.azure.account.key.<storage-account>.blob.core.windows.net: xxxxxxxxxxxxxxxxxx
Step 3:
Use Azure Blob storage for checkpointing
This is what I have in my flink job
final StateBackend stateBackend = new FsStateBackend("wasb://flink-blob@<storage-account>.blob.core.windows.net/checkpoint");
I am not sure if I missed anything here, but when I submit the job I get the exception below (an AskTimeoutException from Akka):
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:335)
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:664)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:895)
at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:968)
at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:968)
Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:199)
at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1741)
at org.apache.flink.streaming.api.environment.StreamContextEnvironment.executeAsync(StreamContextEnvironment.java:94)
at org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:63)
at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1620)
at com.example.flink.checkpointing.CheckpointExample.main(CheckpointExample.java:78)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:321)
... 8 more
Caused by: java.util.concurrent.ExecutionException: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1736)
... 17 more
Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
at org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$7(RestClusterClient.java:359)
at java.base/java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:986)
at java.base/java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:970)
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
at java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$8(FutureUtils.java:274)
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
at java.base/java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:610)
at java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1085)
at java.base/java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.flink.runtime.rest.util.RestClientException: [Internal server error., <Exception on server side:
akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://flink/user/dispatcher#75666936]] after [10000 ms]. Message of type [org.apache.flink.runtime.rpc.messages.LocalFencedMessage]. A typical reason for `AskTimeoutException` is that the recipient actor didn't send a reply.
at akka.pattern.PromiseActorRef$.$anonfun$defaultOnTimeout$1(AskSupport.scala:635)
at akka.pattern.PromiseActorRef$.$anonfun$apply$1(AskSupport.scala:650)
at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:205)
at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:870)
at scala.concurrent.BatchingExecutor.execute(BatchingExecutor.scala:109)
at scala.concurrent.BatchingExecutor.execute$(BatchingExecutor.scala:103)
at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:868)
at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:328)
at akka.actor.LightArrayRevolverScheduler$$anon$3.executeBucket$1(LightArrayRevolverScheduler.scala:279)
at akka.actor.LightArrayRevolverScheduler$$anon$3.nextTick(LightArrayRevolverScheduler.scala:283)
at akka.actor.LightArrayRevolverScheduler$$anon$3.run(LightArrayRevolverScheduler.scala:235)
at java.base/java.lang.Thread.run(Thread.java:834)
End of exception on server side>]
at org.apache.flink.runtime.rest.RestClient.parseResponse(RestClient.java:390)
at org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$3(RestClient.java:374)
at java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072)
... 4 more
I think your problem is twofold. The true failure cause is hidden by the AskTimeoutException. That part has been fixed in FLINK-16018, which will be released with Flink 1.10.1; the current timeout is too aggressive, so a long-lasting job submission fails on the client side.
For the true failure cause, I would recommend taking a look at Flink's jobmanager.log. It should contain information about what went wrong. I suspect a misconfiguration of the Azure Blob Storage.
That's correct. Set the account key and point your checkpoints.dir and savepoints.dir at the storage account using the following format:
fs.azure.account.key.<storage-account-name>.blob.core.windows.net: <key>
wasbs://<container>@<storage-account-name>.blob.core.windows.net/<directory>/
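Put together, the job-side setup could look roughly like this (the container name, the account placeholder and the checkpoint interval are assumptions, not taken from the question):

import org.apache.flink.runtime.state.StateBackend;
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class AzureCheckpointExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000); // checkpoint every 60 seconds

        // Note the wasbs:// scheme and the '@' between container and account.
        StateBackend backend = new FsStateBackend(
                "wasbs://flink-blob@<storage-account>.blob.core.windows.net/checkpoint");
        env.setStateBackend(backend);

        env.fromElements(1, 2, 3).print();
        env.execute("azure-checkpoint-example");
    }
}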

java.lang.ExceptionInInitializerError at activator.ActivatorCli$$anonfun$apply$1.apply$mcI$sp(ActivatorCli.scala:21)

I am using Git Bash on Windows. I downloaded and unzipped the Play framework and set the path, but as soon as I run activator new it gives the following error:
$ activator new
java.lang.ExceptionInInitializerError
at activator.ActivatorCli$$anonfun$apply$1.apply$mcI$sp(ActivatorCli.scala:21)
at activator.ActivatorCli$$anonfun$apply$1.apply(ActivatorCli.scala:19)
at activator.ActivatorCli$$anonfun$apply$1.apply(ActivatorCli.scala:19)
at activator.ActivatorCli$.withContextClassloader(ActivatorCli.scala:179)
at activator.ActivatorCli$.apply(ActivatorCli.scala:19)
at activator.ActivatorLauncher.run(ActivatorLauncher.scala:28)
at xsbt.boot.Launch$$anonfun$run$1.apply(Launch.scala:109)
at xsbt.boot.Launch$.withContextLoader(Launch.scala:129)
at xsbt.boot.Launch$.run(Launch.scala:109)
at xsbt.boot.Launch$$anonfun$apply$1.apply(Launch.scala:36)
at xsbt.boot.Launch$.launch(Launch.scala:117)
at xsbt.boot.Launch$.apply(Launch.scala:19)
at xsbt.boot.Boot$.runImpl(Boot.scala:44)
at xsbt.boot.Boot$.main(Boot.scala:20)
at xsbt.boot.Boot.main(Boot.scala)
Caused by: java.lang.RuntimeException: BAD URI: file://f:/Play/activator-1.2.10-minimal
at activator.properties.ActivatorProperties.uriToFilename(ActivatorProperties.java:106)
at activator.properties.ActivatorProperties.ACTIVATOR_HOME_FILENAME(ActivatorProperties.java:113)
at activator.properties.ActivatorProperties.ACTIVATOR_TEMPLATE_LOCAL_REPO(ActivatorProperties.java:179)
at activator.UICacheHelper$.<init>(UICacheHelper.scala:31)
at activator.UICacheHelper$.<clinit>(UICacheHelper.scala)
... 15 more
Caused by: java.lang.IllegalArgumentException: URI has an authority component
at java.io.File.<init>(File.java:397)
at activator.properties.ActivatorProperties.uriToFilename(ActivatorProperties.java:101)
... 19 more
Error during sbt execution: java.lang.ExceptionInInitializerError
Can anyone help with how I should proceed?
It is pretty clear if you read your (own) stack trace carefully:
Caused by: java.lang.RuntimeException: BAD URI: file://f:/Play/activator-1.2.10-minimal
You are trying to access a file via a URI which is malformed.
A correct URI would be either file:/f:/Play/activator-1.2.10-minimal or file:///f:/Play/activator-1.2.10-minimal (see also this answer).
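To see why the two-slash form fails, here is a small self-contained demo (the class name and the catch-and-print are just for illustration): with file://f:/..., the "f:" part is parsed as the URI's authority component, which java.io.File(URI) rejects, while the single-slash form leaves the authority undefined and is accepted:

import java.io.File;
import java.net.URI;

public class FileUriDemo {
    public static void main(String[] args) {
        // Two slashes make "f:" the authority of the URI, so File(URI) throws
        // java.lang.IllegalArgumentException: URI has an authority component.
        try {
            new File(URI.create("file://f:/Play/activator-1.2.10-minimal"));
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }

        // One slash leaves the authority undefined, so this is accepted:
        System.out.println(new File(URI.create("file:/f:/Play/activator-1.2.10-minimal")));
    }
}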
Amit... probably he doesn't know (how to) "play" yet. Check this out:
https://github.com/typesafehub/activator/issues/648
