I installed a new Subversion repository, but I am unable to connect to it. The old repository works as expected; the new repository throws an exception:
org.eclipse.team.svn.core.connector.SVNConnectorException: svn: E210002: There was a problem while connecting to xn--x7h.example.com:22
at org.polarion.team.svn.connector.svnkit.SVNKitService.handleClientException(SVNKitService.java:59)
at org.polarion.team.svn.connector.svnkit.SVNKitConnector.listEntries(SVNKitConnector.java:1758)
at org.eclipse.team.svn.core.extension.factory.ThreadNameModifier.listEntries(ThreadNameModifier.java:324)
at org.eclipse.team.svn.core.utility.SVNUtility.list(SVNUtility.java:440)
at org.eclipse.team.svn.core.svnstorage.SVNRepositoryContainer.getChildren(SVNRepositoryContainer.java:79)
at org.eclipse.team.svn.core.operation.remote.GetRemoteFolderChildrenOperation.runImpl(GetRemoteFolderChildrenOperation.java:76)
at org.eclipse.team.svn.core.operation.AbstractActionOperation.run(AbstractActionOperation.java:82)
at org.eclipse.team.svn.core.utility.ProgressMonitorUtility.doTask(ProgressMonitorUtility.java:104)
at org.eclipse.team.svn.core.operation.CompositeOperation.runImpl(CompositeOperation.java:99)
at org.eclipse.team.svn.core.operation.AbstractActionOperation.run(AbstractActionOperation.java:82)
at org.eclipse.team.svn.core.operation.LoggedOperation.run(LoggedOperation.java:40)
at org.eclipse.team.svn.core.utility.ProgressMonitorUtility.doTask(ProgressMonitorUtility.java:104)
at org.eclipse.team.svn.core.utility.ProgressMonitorUtility.doTaskExternal(ProgressMonitorUtility.java:90)
at org.eclipse.team.svn.ui.utility.DefaultCancellableOperationWrapper.run(DefaultCancellableOperationWrapper.java:55)
at org.eclipse.team.svn.ui.utility.ScheduledOperationWrapper.run(ScheduledOperationWrapper.java:37)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:63)
Caused by: org.apache.subversion.javahl.ClientException: svn: E210002: There was a problem while connecting to xn--x7h.example.com:22
at org.apache.subversion.javahl.ClientException.fromException(ClientException.java:117)
at org.tmatesoft.svn.core.javahl17.SVNClientImpl.getClientException(SVNClientImpl.java:1539)
at org.tmatesoft.svn.core.javahl17.SVNClientImpl.list(SVNClientImpl.java:189)
at org.polarion.team.svn.connector.svnkit.SVNKitConnector.listEntries(SVNKitConnector.java:1745)
... 14 more
Caused by: org.tmatesoft.svn.core.SVNException: svn: E210002: There was a problem while connecting to xn--x7h.example.com:22
at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:70)
at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:57)
at org.tmatesoft.svn.core.internal.io.svn.SVNSSHConnector.open(SVNSSHConnector.java:145)
at org.tmatesoft.svn.core.internal.io.svn.SVNConnection.open(SVNConnection.java:77)
at org.tmatesoft.svn.core.internal.io.svn.SVNRepositoryImpl.openConnection(SVNRepositoryImpl.java:1273)
at org.tmatesoft.svn.core.internal.io.svn.SVNRepositoryImpl.getLatestRevision(SVNRepositoryImpl.java:172)
at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgRepositoryAccess.getRevisionNumber(SvnNgRepositoryAccess.java:119)
at org.tmatesoft.svn.core.internal.wc2.SvnRepositoryAccess.getLocations(SvnRepositoryAccess.java:195)
at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgRepositoryAccess.createRepositoryFor(SvnNgRepositoryAccess.java:46)
at org.tmatesoft.svn.core.internal.wc2.remote.SvnRemoteList.run(SvnRemoteList.java:36)
at org.tmatesoft.svn.core.internal.wc2.remote.SvnRemoteList.run(SvnRemoteList.java:1)
at org.tmatesoft.svn.core.internal.wc2.SvnOperationRunner.run(SvnOperationRunner.java:21)
at org.tmatesoft.svn.core.wc2.SvnOperationFactory.run(SvnOperationFactory.java:1235)
at org.tmatesoft.svn.core.wc2.SvnOperation.run(SvnOperation.java:294)
at org.tmatesoft.svn.core.javahl17.SVNClientImpl.list(SVNClientImpl.java:187)
... 15 more
Caused by: java.io.IOException: There was a problem while connecting to xn--x7h.example.com:22
at com.trilead.ssh2.Connection.connect(Connection.java:817)
at org.tmatesoft.svn.core.internal.io.svn.ssh.SshHost.openConnection(SshHost.java:225)
at org.tmatesoft.svn.core.internal.io.svn.ssh.SshHost.openSession(SshHost.java:153)
at org.tmatesoft.svn.core.internal.io.svn.ssh.SshSessionPool.openSession(SshSessionPool.java:85)
at org.tmatesoft.svn.core.internal.io.svn.SVNSSHConnector.open(SVNSSHConnector.java:122)
... 27 more
Caused by: java.io.IOException: Key exchange was not finished, connection is closed.
at com.trilead.ssh2.transport.KexManager.getOrWaitForConnectionInfo(KexManager.java:92)
at com.trilead.ssh2.transport.TransportManager.getConnectionInfo(TransportManager.java:231)
at com.trilead.ssh2.Connection.connect(Connection.java:769)
... 31 more
Caused by: java.io.IOException: Cannot negotiate, proposals do not match.
at com.trilead.ssh2.transport.KexManager.handleMessage(KexManager.java:413)
at com.trilead.ssh2.transport.TransportManager.receiveLoop(TransportManager.java:765)
at com.trilead.ssh2.transport.TransportManager$1.run(TransportManager.java:480)
at java.base/java.lang.Thread.run(Thread.java:834)
Any ideas?
I checked on the server side what was going on: tail -f /var/log/auth.log revealed a message that the client and the server were unable to agree on matching kex algorithms.
My solution was to add the line
KexAlgorithms +diffie-hellman-group1-sha1
to /etc/ssh/sshd_config and restart the sshd service.
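To confirm such a mismatch before changing the server, you can compare what each side offers. A quick diagnostic sketch (assumes a reasonably recent OpenSSH; note that the failing client above is the Trilead SSH library bundled with SVNKit, so the server-side check is the decisive one):
# List the kex algorithms your OpenSSH client supports
ssh -Q kex
# On the server: dump the kex algorithms sshd is effectively configured with
sudo sshd -T | grep -i kexalgorithms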
Related
My application works fine locally, and I'm able to connect to GCP Datastore from my machine. But when deployed to a server, I get the exception below.
Caused by: com.google.datastore.v1.client.DatastoreException: I/O error
at com.google.datastore.v1.client.RemoteRpc.makeException(RemoteRpc.java:136)
at com.google.datastore.v1.client.RemoteRpc.call(RemoteRpc.java:105)
at com.google.datastore.v1.client.Datastore.beginTransaction(Datastore.java:79)
at com.google.cloud.datastore.spi.v1.HttpDatastoreRpc.beginTransaction(HttpDatastoreRpc.java:153)
... 92 common frames omitted
Caused by: java.io.IOException: Error getting access token for service account: connect timed out
at com.google.auth.oauth2.ServiceAccountCredentials.refreshAccessToken(ServiceAccountCredentials.java:444)
at com.google.auth.oauth2.OAuth2Credentials.refresh(OAuth2Credentials.java:157)
at com.google.auth.oauth2.OAuth2Credentials.getRequestMetadata(OAuth2Credentials.java:145)
at com.google.auth.oauth2.ServiceAccountCredentials.getRequestMetadata(ServiceAccountCredentials.java:603)
at com.google.auth.http.HttpCredentialsAdapter.initialize(HttpCredentialsAdapter.java:91)
at com.google.cloud.http.HttpTransportOptions$1.initialize(HttpTransportOptions.java:159)
at com.google.cloud.http.CensusHttpModule$CensusHttpRequestInitializer.initialize(CensusHttpModule.java:109)
at com.google.cloud.datastore.spi.v1.HttpDatastoreRpc$1.initialize(HttpDatastoreRpc.java:91)
at com.google.datastore.v1.client.RemoteRpc.call(RemoteRpc.java:91)
... 94 common frames omitted
Caused by: java.net.SocketTimeoutException: connect timed out
at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:403)
at java.base/java.net.Socket.connect(Socket.java:591)
at java.base/sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:285)
at java.base/sun.net.NetworkClient.doConnect(NetworkClient.java:177)
at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:474)
at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:569)
at java.base/sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:265)
at java.base/sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:372)
at java.base/sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:191)
at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1181)
at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1075)
at java.base/sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:177)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1356)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1331)
at java.base/sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:241)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:113)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:84)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1012)
at com.google.auth.oauth2.ServiceAccountCredentials.refreshAccessToken(ServiceAccountCredentials.java:441)
Edit: scopes under this credential:
[https://www.googleapis.com/auth/pubsub,
https://www.googleapis.com/auth/spanner.admin,
https://www.googleapis.com/auth/spanner.data,
https://www.googleapis.com/auth/datastore,
https://www.googleapis.com/auth/sqlservice.admin,
https://www.googleapis.com/auth/devstorage.read_only,
https://www.googleapis.com/auth/devstorage.read_write,
https://www.googleapis.com/auth/cloudruntimeconfig,
https://www.googleapis.com/auth/trace.append,
https://www.googleapis.com/auth/cloud-platform,
https://www.googleapis.com/auth/cloud-vision,
https://www.googleapis.com/auth/bigquery,
https://www.googleapis.com/auth/monitoring.write]
Any leads would be appreciated. Thanks in advance!
It turned out to be a connectivity issue: our server (in AWS) didn't have the access required to reach Datastore.
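For anyone debugging the same thing: before touching credentials, it can help to verify raw egress from the server to Google's OAuth token endpoint, since that is the call that times out above. A minimal probe sketch (the endpoint is the standard token URI used by service-account credentials; treat it as an assumption for your environment):
import java.net.HttpURLConnection;
import java.net.URL;

public class TokenEndpointProbe {
    public static void main(String[] args) throws Exception {
        // The Datastore client refreshes service-account tokens against this host;
        // if this connect times out, the problem is network egress, not the app.
        URL url = new URL("https://oauth2.googleapis.com/token");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(5_000);
        conn.setReadTimeout(5_000);
        conn.connect(); // throws SocketTimeoutException when egress is blocked
        System.out.println("Reached token endpoint, HTTP " + conn.getResponseCode());
    }
}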
I am trying to use Azure Blob Storage with Apache Flink 1.10 for checkpointing purposes.
I followed all the instructions in the Flink documentation: https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/filesystems/azure.html
Step 1:
mkdir ./plugins/azure-fs-hadoop
cp ./opt/flink-azure-fs-hadoop-1.10.0.jar ./plugins/azure-fs-hadoop/
Step 2:
This is what I have in flink-conf.yaml:
#Azure Storage Key
fs.azure.account.key.<storage-account>.blob.core.windows.net: xxxxxxxxxxxxxxxxxx
Step 3:
Use Azure Blob Storage for checkpointing.
This is what I have in my Flink job:
final StateBackend stateBackend = new FsStateBackend("wasb://flink-blob@<storage-account>.blob.core.windows.net/checkpoint");
I am not sure if I missed anything here, but when I submit the job I get the exception below (an AskTimeoutException from an Akka actor).
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:335)
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:664)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:895)
at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:968)
at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:968)
Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:199)
at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1741)
at org.apache.flink.streaming.api.environment.StreamContextEnvironment.executeAsync(StreamContextEnvironment.java:94)
at org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:63)
at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1620)
at com.example.flink.checkpointing.CheckpointExample.main(CheckpointExample.java:78)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:321)
... 8 more
Caused by: java.util.concurrent.ExecutionException: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1736)
... 17 more
Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
at org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$7(RestClusterClient.java:359)
at java.base/java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:986)
at java.base/java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:970)
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
at java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$8(FutureUtils.java:274)
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
at java.base/java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:610)
at java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1085)
at java.base/java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.flink.runtime.rest.util.RestClientException: [Internal server error., <Exception on server side:
akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://flink/user/dispatcher#75666936]] after [10000 ms]. Message of type [org.apache.flink.runtime.rpc.messages.LocalFencedMessage]. A typical reason for `AskTimeoutException` is that the recipient actor didn't send a reply.
at akka.pattern.PromiseActorRef$.$anonfun$defaultOnTimeout$1(AskSupport.scala:635)
at akka.pattern.PromiseActorRef$.$anonfun$apply$1(AskSupport.scala:650)
at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:205)
at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:870)
at scala.concurrent.BatchingExecutor.execute(BatchingExecutor.scala:109)
at scala.concurrent.BatchingExecutor.execute$(BatchingExecutor.scala:103)
at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:868)
at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:328)
at akka.actor.LightArrayRevolverScheduler$$anon$3.executeBucket$1(LightArrayRevolverScheduler.scala:279)
at akka.actor.LightArrayRevolverScheduler$$anon$3.nextTick(LightArrayRevolverScheduler.scala:283)
at akka.actor.LightArrayRevolverScheduler$$anon$3.run(LightArrayRevolverScheduler.scala:235)
at java.base/java.lang.Thread.run(Thread.java:834)
End of exception on server side>]
at org.apache.flink.runtime.rest.RestClient.parseResponse(RestClient.java:390)
at org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$3(RestClient.java:374)
at java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072)
... 4 more
I think your problem is twofold. The true failure cause is hidden because of the AskTimeoutException. That part has been solved with FLINK-16018, which will be released with Flink 1.10.1. The problem is that the timeout value is too aggressive, so a long-lasting job submission fails on the client side.
For the true failure cause, I recommend taking a look at Flink's jobmanager.log. It should contain information about what went wrong. I suspect there is a misconfiguration of the Azure Blob Storage.
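Until 1.10.1 is available, one workaround sometimes suggested for the submission timeout itself (an assumption on my side: that the 10000 ms ask timeout in the trace is the REST/web timeout) is to raise web.timeout in flink-conf.yaml:
# flink-conf.yaml - milliseconds; 60000 is an illustrative value
web.timeout: 60000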
That's correct. You must set fs.azure.account.key in flink-conf.yaml and use the wasbs format for state.checkpoints.dir and state.savepoints.dir:
fs.azure.account.key.<storage-account-name>.blob.core.windows.net: <key>
wasbs://<container>@<storage-account-name>.blob.core.windows.net/<directory>/
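Putting the two together, the job-side setup would look roughly like this (a sketch against the Flink 1.10 API used in the question; the names in angle brackets are placeholders, and the account key stays in flink-conf.yaml, not in code):
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class AzureCheckpointingSetup {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // wasbs:// (TLS) with <container>@<storage-account>, matching the format above
        env.setStateBackend(new FsStateBackend(
                "wasbs://<container>@<storage-account>.blob.core.windows.net/checkpoints"));
        env.enableCheckpointing(60_000); // checkpoint every 60 s
        // ... build the pipeline here, then env.execute("job-name");
    }
}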
I'm trying to run one of the examples that comes with the OPC UA Java Stack and I'm getting the following exception.
Exception in thread "main" org.opcfoundation.ua.common.RuntimeServiceResultException: org.opcfoundation.ua.common.ServiceResultException: Bad_CertificateInvalid (code=0x80120000, description="2148663296, java.io.IOException: Duplicate extensions not allowed")
at org.opcfoundation.ua.transport.TransportChannelSettings.getServerCertificate(TransportChannelSettings.java:114)
at org.opcfoundation.ua.transport.tcp.io.TcpConnection.initialize(TcpConnection.java:376)
at org.opcfoundation.ua.transport.tcp.io.SecureChannelTcp.initialize(SecureChannelTcp.java:273)
at org.opcfoundation.ua.transport.tcp.io.SecureChannelTcp.initialize(SecureChannelTcp.java:246)
at org.opcfoundation.ua.application.Client.createSecureChannel(Client.java:640)
at org.opcfoundation.ua.application.Client.createSecureChannel(Client.java:555)
at org.opcfoundation.ua.application.Client.createSessionChannel(Client.java:370)
at org.opcfoundation.ua.application.Client.createSessionChannel(Client.java:345)
at org.opcfoundation.ua.examples.SampleClient.main(SampleClient.java:109)
Caused by: org.opcfoundation.ua.common.ServiceResultException: Bad_CertificateInvalid (code=0x80120000, description="2148663296, java.io.IOException: Duplicate extensions not allowed")
at org.opcfoundation.ua.transport.security.Cert.<init>(Cert.java:143)
at org.opcfoundation.ua.transport.TransportChannelSettings.getServerCertificate(TransportChannelSettings.java:112)
... 8 more
Caused by: java.security.cert.CertificateParsingException: java.io.IOException: Duplicate extensions not allowed
at sun.security.x509.X509CertInfo.<init>(X509CertInfo.java:169)
at sun.security.x509.X509CertImpl.parse(X509CertImpl.java:1804)
at sun.security.x509.X509CertImpl.<init>(X509CertImpl.java:195)
at sun.security.provider.X509Factory.engineGenerateCertificate(X509Factory.java:102)
at java.security.cert.CertificateFactory.generateCertificate(CertificateFactory.java:339)
at org.opcfoundation.ua.utils.CertificateUtils.decodeX509Certificate(CertificateUtils.java:193)
at org.opcfoundation.ua.transport.security.Cert.<init>(Cert.java:136)
... 9 more
Caused by: java.io.IOException: Duplicate extensions not allowed
at sun.security.x509.CertificateExtensions.parseExtension(CertificateExtensions.java:115)
at sun.security.x509.CertificateExtensions.init(CertificateExtensions.java:88)
at sun.security.x509.CertificateExtensions.<init>(CertificateExtensions.java:78)
at sun.security.x509.X509CertInfo.parse(X509CertInfo.java:702)
at sun.security.x509.X509CertInfo.<init>(X509CertInfo.java:167)
... 15 more
I tried running another client (C#) and it works.
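To isolate the problem: the failure happens inside the JDK's X.509 parser, not in the OPC UA stack itself, so you can reproduce it by decoding the server's certificate directly. A small sketch (pass the exported certificate file as the first argument; the file itself is whatever your server presents):
import java.io.FileInputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class CertParseCheck {
    public static void main(String[] args) throws Exception {
        // The JDK rejects certificates carrying the same extension OID twice
        // ("Duplicate extensions not allowed"); this runs just that parse step.
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        try (FileInputStream in = new FileInputStream(args[0])) {
            X509Certificate cert = (X509Certificate) cf.generateCertificate(in);
            System.out.println("Parsed OK: " + cert.getSubjectX500Principal());
        }
    }
}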
I'm loading a large number of nodes and relationships into an embedded Neo4j database. After about 10,000 inserts, it dies. If I stay under that point, then everything works great. Queries return as they should, as do inserts. It looks like somehow a database file is getting deleted in the middle of the inserts, which is causing everything to fall apart. My database builds itself from scratch, so if I completely delete my graphdb folder and restart it, it runs exactly the same every time. So how do you handle large embedded Neo4j databases?
Here are the pertinent errors.
From the Java output side
The transactions start failing to commit:
WorkerThread exception::org.neo4j.graphdb.TransactionFailureException::Unable to commit transaction
org.neo4j.graphdb.TransactionFailureException: Unable to commit transaction
at org.neo4j.kernel.TopLevelTransaction.close(TopLevelTransaction.java:140)
...
at java.lang.Thread.run(Thread.java:745)
Caused by: org.neo4j.graphdb.TransactionFailureException: commit threw exception
at org.neo4j.kernel.impl.transaction.TxManager.commit(TxManager.java:500)
at org.neo4j.kernel.impl.transaction.TxManager.commit(TxManager.java:385)
at org.neo4j.kernel.impl.transaction.TransactionImpl.commit(TransactionImpl.java:123)
at org.neo4j.kernel.TopLevelTransaction.close(TopLevelTransaction.java:124)
... 4 more
Caused by: javax.transaction.xa.XAException
Then it informs me that there's a missing file in the database:
at org.neo4j.kernel.impl.transaction.TransactionImpl.doCommit(TransactionImpl.java:560)
at org.neo4j.kernel.impl.transaction.TxManager.commit(TxManager.java:448)
... 7 more
Caused by: org.neo4j.kernel.impl.nioneo.store.UnderlyingStorageException: java.io.FileNotFoundException: /home/user/graphdb/schema/label/lucene/_1z6.frq (Protocol error)
at org.neo4j.kernel.impl.nioneo.xa.NeoStoreTransaction.updateLabelScanStore(NeoStoreTransaction.java:814)
at org.neo4j.kernel.impl.nioneo.xa.NeoStoreTransaction.applyCommit(NeoStoreTransaction.java:699)
at org.neo4j.kernel.impl.nioneo.xa.NeoStoreTransaction.doCommit(NeoStoreTransaction.java:631)
at org.neo4j.kernel.impl.transaction.xaframework.XaTransaction.commit(XaTransaction.java:327)
at org.neo4j.kernel.impl.transaction.xaframework.XaResourceManager.commitWriteTx(XaResourceManager.java:632)
at org.neo4j.kernel.impl.transaction.xaframework.XaResourceManager.commit(XaResourceManager.java:533)
at org.neo4j.kernel.impl.transaction.xaframework.XaResourceHelpImpl.commit(XaResourceHelpImpl.java:64)
at org.neo4j.kernel.impl.transaction.TransactionImpl.doCommit(TransactionImpl.java:548)
... 8 more
Caused by: java.io.FileNotFoundException: /home/user/graphdb/schema/label/lucene/_1z6.frq (Protocol error)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:241)
at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:441)
at org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:306)
at org.apache.lucene.index.FormatPostingsDocsWriter.<init>(FormatPostingsDocsWriter.java:47)
at org.apache.lucene.index.FormatPostingsTermsWriter.<init>(FormatPostingsTermsWriter.java:33)
at org.apache.lucene.index.FormatPostingsFieldsWriter.<init>(FormatPostingsFieldsWriter.java:51)
at org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:85)
at org.apache.lucene.index.TermsHash.flush(TermsHash.java:113)
at org.apache.lucene.index.DocInverter.flush(DocInverter.java:70)
at org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:60)
at org.apache.lucene.index.DocumentsWriter.flush(DocumentsWriter.java:581)
at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3587)
at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3552)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:450)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:399)
at org.apache.lucene.index.DirectoryReader.doOpenFromWriter(DirectoryReader.java:413)
at org.apache.lucene.index.DirectoryReader.doOpenIfChanged(DirectoryReader.java:432)
at org.apache.lucene.index.DirectoryReader.doOpenIfChanged(DirectoryReader.java:375)
at org.apache.lucene.index.IndexReader.openIfChanged(IndexReader.java:508)
at org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:109)
at org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:57)
at org.apache.lucene.search.ReferenceManager.maybeRefresh(ReferenceManager.java:137)
at org.neo4j.kernel.api.impl.index.LuceneLabelScanStore.refreshSearcher(LuceneLabelScanStore.java:159)
at org.neo4j.kernel.api.impl.index.LuceneLabelScanWriter.close(LuceneLabelScanWriter.java:82)
at org.neo4j.kernel.impl.nioneo.xa.NeoStoreTransaction.updateLabelScanStore(NeoStoreTransaction.java:811)
... 15 more
Then I can no longer get a new transaction:
WorkerThread exception::org.neo4j.graphdb.TransactionFailureException::Unable to get transaction.
org.neo4j.graphdb.TransactionFailureException: Unable to get transaction.
at org.neo4j.kernel.InternalAbstractGraphDatabase.transactionRunning(InternalAbstractGraphDatabase.java:1064)
at org.neo4j.kernel.InternalAbstractGraphDatabase.beginTx(InternalAbstractGraphDatabase.java:1037)
at org.neo4j.kernel.TransactionBuilderImpl.begin(TransactionBuilderImpl.java:43)
at org.neo4j.kernel.InternalAbstractGraphDatabase.beginTx(InternalAbstractGraphDatabase.java:1024)
...
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.transaction.SystemException: Kernel has encountered some problem, please perform neccesary action (tx recovery/restart)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.neo4j.kernel.impl.transaction.KernelHealth.assertHealthy(KernelHealth.java:61)
at org.neo4j.kernel.impl.transaction.TxManager.assertTmOk(TxManager.java:339)
at org.neo4j.kernel.impl.transaction.TxManager.getTransaction(TxManager.java:725)
at org.neo4j.kernel.InternalAbstractGraphDatabase.transactionRunning(InternalAbstractGraphDatabase.java:1060)
... 7 more
Caused by: javax.transaction.xa.XAException
at org.neo4j.kernel.impl.transaction.TransactionImpl.doCommit(TransactionImpl.java:560)
at org.neo4j.kernel.impl.transaction.TxManager.commit(TxManager.java:448)
at org.neo4j.kernel.impl.transaction.TxManager.commit(TxManager.java:385)
at org.neo4j.kernel.impl.transaction.TransactionImpl.commit(TransactionImpl.java:123)
at org.neo4j.kernel.TopLevelTransaction.close(TopLevelTransaction.java:124)
... 4 more
Caused by: org.neo4j.kernel.impl.nioneo.store.UnderlyingStorageException: java.io.FileNotFoundException: /home/user/graphdb/schema/label/lucene/_1z6.frq (Protocol error)
at org.neo4j.kernel.impl.nioneo.xa.NeoStoreTransaction.updateLabelScanStore(NeoStoreTransaction.java:814)
at org.neo4j.kernel.impl.nioneo.xa.NeoStoreTransaction.applyCommit(NeoStoreTransaction.java:699)
at org.neo4j.kernel.impl.nioneo.xa.NeoStoreTransaction.doCommit(NeoStoreTransaction.java:631)
at org.neo4j.kernel.impl.transaction.xaframework.XaTransaction.commit(XaTransaction.java:327)
at org.neo4j.kernel.impl.transaction.xaframework.XaResourceManager.commitWriteTx(XaResourceManager.java:632)
at org.neo4j.kernel.impl.transaction.xaframework.XaResourceManager.commit(XaResourceManager.java:533)
at org.neo4j.kernel.impl.transaction.xaframework.XaResourceHelpImpl.commit(XaResourceHelpImpl.java:64)
at org.neo4j.kernel.impl.transaction.TransactionImpl.doCommit(TransactionImpl.java:548)
... 8 more
Caused by: java.io.FileNotFoundException: /home/user/graphdb/schema/label/lucene/_1z6.frq (Protocol error)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:241)
at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:441)
at org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:306)
at org.apache.lucene.index.FormatPostingsDocsWriter.<init>(FormatPostingsDocsWriter.java:47)
at org.apache.lucene.index.FormatPostingsTermsWriter.<init>(FormatPostingsTermsWriter.java:33)
at org.apache.lucene.index.FormatPostingsFieldsWriter.<init>(FormatPostingsFieldsWriter.java:51)
at org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:85)
at org.apache.lucene.index.TermsHash.flush(TermsHash.java:113)
at org.apache.lucene.index.DocInverter.flush(DocInverter.java:70)
at org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:60)
at org.apache.lucene.index.DocumentsWriter.flush(DocumentsWriter.java:581)
at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3587)
at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3552)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:450)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:399)
at org.apache.lucene.index.DirectoryReader.doOpenFromWriter(DirectoryReader.java:413)
at org.apache.lucene.index.DirectoryReader.doOpenIfChanged(DirectoryReader.java:432)
at org.apache.lucene.index.DirectoryReader.doOpenIfChanged(DirectoryReader.java:375)
at org.apache.lucene.index.IndexReader.openIfChanged(IndexReader.java:508)
at org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:109)
at org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:57)
at org.apache.lucene.search.ReferenceManager.maybeRefresh(ReferenceManager.java:137)
at org.neo4j.kernel.api.impl.index.LuceneLabelScanStore.refreshSearcher(LuceneLabelScanStore.java:159)
at org.neo4j.kernel.api.impl.index.LuceneLabelScanWriter.close(LuceneLabelScanWriter.java:82)
at org.neo4j.kernel.impl.nioneo.xa.NeoStoreTransaction.updateLabelScanStore(NeoStoreTransaction.java:811)
... 15 more
From the messages.log side
It starts off with memory-mapping errors. These actually appear when the database first comes online, but more trickle in before it totally dies:
2015-01-27 21:51:29.112+0000 ERROR [org.neo4j]: [/home/user/graphdb/neostore.nodestore.db] Unable to memory map Unable to map pos=0 recordSize=15 totalSize=1048575
org.neo4j.kernel.impl.nioneo.store.MappedMemException: Unable to map pos=0 recordSize=15 totalSize=1048575
at org.neo4j.kernel.impl.nioneo.store.MappedPersistenceWindow.<init>(MappedPersistenceWindow.java:59)
at org.neo4j.kernel.impl.nioneo.store.PersistenceWindowPool.allocateNewWindow(PersistenceWindowPool.java:656)
at org.neo4j.kernel.impl.nioneo.store.PersistenceWindowPool.expandBricks(PersistenceWindowPool.java:617)
at org.neo4j.kernel.impl.nioneo.store.PersistenceWindowPool.acquire(PersistenceWindowPool.java:144)
at org.neo4j.kernel.impl.nioneo.store.CommonAbstractStore.acquireWindow(CommonAbstractStore.java:546)
at org.neo4j.kernel.impl.nioneo.store.NodeStore.forceGetRecord(NodeStore.java:149)
at org.neo4j.kernel.impl.nioneo.xa.NeoStoreIndexStoreView$NodeStoreScan.run(NeoStoreIndexStoreView.java:327)
at org.neo4j.kernel.impl.api.index.IndexPopulationJob.indexAllNodes(IndexPopulationJob.java:212)
at org.neo4j.kernel.impl.api.index.IndexPopulationJob.run(IndexPopulationJob.java:107)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Invalid argument
at sun.nio.ch.FileChannelImpl.map0(Native Method)
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:875)
at org.neo4j.kernel.impl.nioneo.store.StoreFileChannel.map(StoreFileChannel.java:57)
at org.neo4j.kernel.impl.nioneo.store.MappedPersistenceWindow.<init>(MappedPersistenceWindow.java:53)
... 13 more
Then, and I've figured out that this is when everything falls apart, I get this error in messages.log:
2015-01-27 21:59:41.516+0000 ERROR [org.neo4j]: setting TM not OK. Kernel has encountered some problem, please perform neccesary action (tx recovery/restart) null
javax.transaction.xa.XAException
at org.neo4j.kernel.impl.transaction.TransactionImpl.doCommit(TransactionImpl.java:560)
at org.neo4j.kernel.impl.transaction.TxManager.commit(TxManager.java:448)
at org.neo4j.kernel.impl.transaction.TxManager.commit(TxManager.java:385)
at org.neo4j.kernel.impl.transaction.TransactionImpl.commit(TransactionImpl.java:123)
at org.neo4j.kernel.TopLevelTransaction.close(TopLevelTransaction.java:124)
...
at java.lang.Thread.run(Thread.java:745)
Caused by: org.neo4j.kernel.impl.nioneo.store.UnderlyingStorageException: java.io.FileNotFoundException: /home/user/graphdb/schema/label/lucene/_1z6.frq (Protocol error)
at org.neo4j.kernel.impl.nioneo.xa.NeoStoreTransaction.updateLabelScanStore(NeoStoreTransaction.java:814)
at org.neo4j.kernel.impl.nioneo.xa.NeoStoreTransaction.applyCommit(NeoStoreTransaction.java:699)
at org.neo4j.kernel.impl.nioneo.xa.NeoStoreTransaction.doCommit(NeoStoreTransaction.java:631)
at org.neo4j.kernel.impl.transaction.xaframework.XaTransaction.commit(XaTransaction.java:327)
at org.neo4j.kernel.impl.transaction.xaframework.XaResourceManager.commitWriteTx(XaResourceManager.java:632)
at org.neo4j.kernel.impl.transaction.xaframework.XaResourceManager.commit(XaResourceManager.java:533)
at org.neo4j.kernel.impl.transaction.xaframework.XaResourceHelpImpl.commit(XaResourceHelpImpl.java:64)
at org.neo4j.kernel.impl.transaction.TransactionImpl.doCommit(TransactionImpl.java:548)
... 8 more
Caused by: java.io.FileNotFoundException: /home/user/graphdb/schema/label/lucene/_1z6.frq (Protocol error)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:241)
at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:441)
at org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:306)
at org.apache.lucene.index.FormatPostingsDocsWriter.<init>(FormatPostingsDocsWriter.java:47)
at org.apache.lucene.index.FormatPostingsTermsWriter.<init>(FormatPostingsTermsWriter.java:33)
at org.apache.lucene.index.FormatPostingsFieldsWriter.<init>(FormatPostingsFieldsWriter.java:51)
at org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:85)
at org.apache.lucene.index.TermsHash.flush(TermsHash.java:113)
at org.apache.lucene.index.DocInverter.flush(DocInverter.java:70)
at org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:60)
at org.apache.lucene.index.DocumentsWriter.flush(DocumentsWriter.java:581)
at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3587)
at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3552)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:450)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:399)
at org.apache.lucene.index.DirectoryReader.doOpenFromWriter(DirectoryReader.java:413)
at org.apache.lucene.index.DirectoryReader.doOpenIfChanged(DirectoryReader.java:432)
at org.apache.lucene.index.DirectoryReader.doOpenIfChanged(DirectoryReader.java:375)
at org.apache.lucene.index.IndexReader.openIfChanged(IndexReader.java:508)
at org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:109)
at org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:57)
at org.apache.lucene.search.ReferenceManager.maybeRefresh(ReferenceManager.java:137)
at org.neo4j.kernel.api.impl.index.LuceneLabelScanStore.refreshSearcher(LuceneLabelScanStore.java:159)
at org.neo4j.kernel.api.impl.index.LuceneLabelScanWriter.close(LuceneLabelScanWriter.java:82)
at org.neo4j.kernel.impl.nioneo.xa.NeoStoreTransaction.updateLabelScanStore(NeoStoreTransaction.java:811)
... 15 more
2015-01-27 21:59:41.519+0000 ERROR [org.neo4j]: TM error tx commit commit threw exception
Any ideas on what's causing that .frq file to disappear?
We resolved it in a side conversation: the maximum number of open files was too low (4000), which is also reported at startup.
That causes Lucene to break internally.
After increasing the limit, the OP could import the data successfully.
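For reference, on Linux the limit can be checked and raised like this before starting the JVM (a sketch; the value is illustrative, and making it permanent via /etc/security/limits.conf varies by distribution):
# Show the current per-process open-file limit
ulimit -n
# Raise it for the current shell, then start the importer from this shell
ulimit -n 40000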
I am using Git Bash on Windows. I downloaded and unzipped the Play Framework and set the path as well, but as soon as I run activator new it gives the following error:
$ activator new
java.lang.ExceptionInInitializerError
at activator.ActivatorCli$$anonfun$apply$1.apply$mcI$sp(ActivatorCli.scala:21)
at activator.ActivatorCli$$anonfun$apply$1.apply(ActivatorCli.scala:19)
at activator.ActivatorCli$$anonfun$apply$1.apply(ActivatorCli.scala:19)
at activator.ActivatorCli$.withContextClassloader(ActivatorCli.scala:179)
at activator.ActivatorCli$.apply(ActivatorCli.scala:19)
at activator.ActivatorLauncher.run(ActivatorLauncher.scala:28)
at xsbt.boot.Launch$$anonfun$run$1.apply(Launch.scala:109)
at xsbt.boot.Launch$.withContextLoader(Launch.scala:129)
at xsbt.boot.Launch$.run(Launch.scala:109)
at xsbt.boot.Launch$$anonfun$apply$1.apply(Launch.scala:36)
at xsbt.boot.Launch$.launch(Launch.scala:117)
at xsbt.boot.Launch$.apply(Launch.scala:19)
at xsbt.boot.Boot$.runImpl(Boot.scala:44)
at xsbt.boot.Boot$.main(Boot.scala:20)
at xsbt.boot.Boot.main(Boot.scala)
Caused by: java.lang.RuntimeException: BAD URI: file://f:/Play/activator-1.2.10-minimal
at activator.properties.ActivatorProperties.uriToFilename(ActivatorProperties.java:106)
at activator.properties.ActivatorProperties.ACTIVATOR_HOME_FILENAME(ActivatorProperties.java:113)
at activator.properties.ActivatorProperties.ACTIVATOR_TEMPLATE_LOCAL_REPO(ActivatorProperties.java:179)
at activator.UICacheHelper$.<init>(UICacheHelper.scala:31)
at activator.UICacheHelper$.<clinit>(UICacheHelper.scala)
... 15 more
Caused by: java.lang.IllegalArgumentException: URI has an authority component
at java.io.File.<init>(File.java:397)
at activator.properties.ActivatorProperties.uriToFilename(ActivatorProperties.java:101)
... 19 more
Error during sbt execution: java.lang.ExceptionInInitializerError
Can anyone help with how I should proceed?
It is pretty much clear if you read your (own) stack trace carefully:
Caused by: java.lang.RuntimeException: BAD URI: file://f:/Play/activator-1.2.10-minimal
You are trying to access a file via a URI which is malformed.
A correct URI would be either file:/f:/Play/activator-1.2.10-minimal or file:///f:/Play/activator-1.2.10-minimal (see also this answer).
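To see why the first form fails: in file://f:/... the drive letter lands in the URI's authority slot, which java.io.File refuses. A small demonstration (paths are illustrative only):
import java.io.File;
import java.net.URI;

public class UriToFileDemo {
    public static void main(String[] args) {
        URI bad = URI.create("file://f:/Play/activator-1.2.10-minimal");
        System.out.println("authority = " + bad.getAuthority()); // "f:" - the drive letter
        try {
            new File(bad);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage()); // URI has an authority component
        }
        // With file:/// the authority is empty and the drive stays in the path:
        URI good = URI.create("file:///f:/Play/activator-1.2.10-minimal");
        System.out.println(new File(good)); // f:\Play\activator-1.2.10-minimal on Windows
    }
}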
Amit, check this related Activator issue:
https://github.com/typesafehub/activator/issues/648