HBase reverse scan error - java

This works well:
scan = new Scan(startRow, stopRow);
This sometimes throws an exception:
scan = new Scan(stopRow, startRow);
scan.setReversed(true);
The exception is thrown while traffic is at 100 req/s. There are actually no timeouts; the exception fires immediately for 1-10% of requests.
HBase: 0.98.12-hadoop2
Hadoop: 2.7.0
Cluster in AWS, 10 datanodes: d2.4xlarge
I think it may be related to this issue, but I'm not using any filters: http://apache-hbase.679495.n3.nabble.com/Exception-during-a-reverse-scan-with-filter-td4069721.html
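For reference, a minimal self-contained sketch of the reversed-scan setup above (HBase 0.98 client API; the table name and row keys are hypothetical placeholders):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

HTable table = new HTable(HBaseConfiguration.create(), "my_table");
byte[] startRow = Bytes.toBytes("row-000");  // forward-scan lower bound
byte[] stopRow  = Bytes.toBytes("row-999");  // forward-scan upper bound
// For the reversed scan the bounds are swapped: the scan begins at the
// higher key (stopRow here) and walks down toward startRow.
Scan scan = new Scan(stopRow, startRow);
scan.setReversed(true);
scan.setCaching(100);  // the failing RPC below fetches batches of 100 rows
ResultScanner scanner = table.getScanner(scan);
try {
    for (Result result : scanner) {
        // process rows in descending key order
    }
} finally {
    scanner.close();
    table.close();
}

The full stack trace: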
java.lang.RuntimeException: org.apache.hadoop.hbase.DoNotRetryIOException: Failed after retry of OutOfOrderScannerNextException: was there a rpc timeout?
at org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:94)
at com.socialbakers.broker.client.hbase.htable.AbstractHtableListScanner.scanToList(AbstractHtableListScanner.java:30)
at com.socialbakers.broker.client.hbase.htable.AbstractHtableListSingleScanner.invokeOperation(AbstractHtableListSingleScanner.java:23)
at com.socialbakers.broker.client.hbase.htable.AbstractHtableListSingleScanner.invokeOperation(AbstractHtableListSingleScanner.java:11)
at com.socialbakers.broker.client.hbase.AbstractHbaseApi.endPointMethod(AbstractHbaseApi.java:40)
at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.socialbakers.broker.client.Route.invoke(Route.java:241)
at com.socialbakers.broker.client.handler.EndpointHandler.invoke(EndpointHandler.java:173)
at com.socialbakers.broker.client.handler.EndpointHandler.process(EndpointHandler.java:69)
at com.thetransactioncompany.jsonrpc2.server.Dispatcher.process(Dispatcher.java:196)
at com.socialbakers.broker.client.RejectableRunnable.run(RejectableRunnable.java:38)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: Failed after retry of OutOfOrderScannerNextException: was there a rpc timeout?
at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:430)
at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:333)
at org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:91)
... 15 more
Caused by: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected nextCallSeq: 2 But the nextCallSeq got from client: 1; request=scanner_id: 27700695 number_of_rows: 100 close_scanner: false next_call_seq: 1
at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3231)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:30946)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2093)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)
at sun.reflect.GeneratedConstructorAccessor16.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:287)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:214)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:58)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:115)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:91)
at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:375)
... 17 more
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException): org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected nextCallSeq: 2 But the nextCallSeq got from client: 1; request=scanner_id: 27700695 number_of_rows: 100 close_scanner: false next_call_seq: 1
at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3231)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:30946)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2093)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)
at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1457)
at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1661)
at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1719)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:31392)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:173)
... 21 more

Related

Using the copy_to function in sparklyr takes very long to finish and completes only when it fails

The file I loaded did not seem that big (234 MB), so I was surprised that sparklyr's copy_to() function failed. I would really appreciate help diagnosing this issue.
library(tidyverse)
library(readr)
library(sparklyr)
hdb_df <- read_csv("ALL Prices 1990-2021 mar.csv")
sc <- spark_connect(master = "local", version = "3.3.0")
hdb_tbl <- copy_to(sc, df = hdb_df, name = "hdb")
It fails with the error:
Error: java.lang.IllegalArgumentException: java.lang.IllegalArgumentException: java.lang.IllegalArgumentException: java.lang.IllegalArgumentException: Self-suppression not permitted
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:480)
at java.base/java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:562)
at java.base/java.util.concurrent.ForkJoinTask.reportException(ForkJoinTask.java:591)
at java.base/java.util.concurrent.ForkJoinTask.join(ForkJoinTask.java:672)
at scala.collection.parallel.ForkJoinTasks$WrappedTask.sync(Tasks.scala:379)
at scala.collection.parallel.ForkJoinTasks$WrappedTask.sync$(Tasks.scala:379)
at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.sync(Tasks.scala:440)
at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask.internal(Tasks.scala:174)
at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask.internal$(Tasks.scala:157)
at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.internal(Tasks.scala:440)
at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask.compute(Tasks.scala:150)
at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask.compute$(Tasks.scala:149)
at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.compute(Tasks.scala:440)
at java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
at java.base/java.util.concurrent.ForkJoinTask.awaitDone(ForkJoinTask.java:436)
at java.base/java.util.concurrent.ForkJoinTask.join(ForkJoinTask.java:670)
at scala.collection.parallel.ForkJoinTasks$WrappedTask.sync(Tasks.scala:379)
at scala.collection.parallel.ForkJoinTasks$WrappedTask.sync$(Tasks.scala:379)
at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.sync(Tasks.scala:440)
at scala.collection.parallel.ForkJoinTasks.executeAndWaitResult(Tasks.scala:423)
at scala.collection.parallel.ForkJoinTasks.executeAndWaitResult$(Tasks.scala:416)
at scala.collection.parallel.ForkJoinTaskSupport.executeAndWaitResult(TaskSupport.scala:60)
at scala.collection.parallel.ExecutionContextTasks.executeAndWaitResult(Tasks.scala:555)
at scala.collection.parallel.ExecutionContextTasks.executeAndWaitResult$(Tasks.scala:555)
at scala.collection.parallel.ExecutionContextTaskSupport.executeAndWaitResult(TaskSupport.scala:84)
at scala.collection.parallel.ParIterableLike$ResultMapping.leaf(ParIterableLike.scala:968)
at scala.collection.parallel.Task.$anonfun$tryLeaf$1(Tasks.scala:53)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at scala.util.control.Breaks$$anon$1.catchBreak(Breaks.scala:67)
at scala.collection.parallel.Task.tryLeaf(Tasks.scala:56)
at scala.collection.parallel.Task.tryLeaf$(Tasks.scala:50)
at scala.collection.parallel.ParIterableLike$ResultMapping.tryLeaf(ParIterableLike.scala:963)
at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask.compute(Tasks.scala:153)
at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask.compute$(Tasks.scala:149)
at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.compute(Tasks.scala:440)
at java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
Caused by: java.lang.IllegalArgumentException: java.lang.IllegalArgumentException: java.lang.IllegalArgumentException: Self-suppression not permitted
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:480)
at java.base/java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:562)
at java.base/java.util.concurrent.ForkJoinTask.reportException(ForkJoinTask.java:591)
at java.base/java.util.concurrent.ForkJoinTask.join(ForkJoinTask.java:672)
at scala.collection.parallel.ForkJoinTasks$WrappedTask.sync(Tasks.scala:379)
at scala.collection.parallel.ForkJoinTasks$WrappedTask.sync$(Tasks.scala:379)
at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.sync(Tasks.scala:440)
at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask.internal(Tasks.scala:174)
at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask.internal$(Tasks.scala:157)
at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.internal(Tasks.scala:440)
at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask.compute(Tasks.scala:150)
... 8 more
Caused by: java.lang.IllegalArgumentException: java.lang.IllegalArgumentException: Self-suppression not permitted
... 23 more
Caused by: java.lang.IllegalArgumentException: Self-suppression not permitted
at java.base/java.lang.Throwable.addSuppressed(Throwable.java:1072)
at scala.collection.parallel.Task.mergeThrowables(Tasks.scala:74)
at scala.collection.parallel.Task.mergeThrowables$(Tasks.scala:72)
at scala.collection.parallel.ParIterableLike$Map.mergeThrowables(ParIterableLike.scala:1061)
at scala.collection.parallel.Task.tryMerge(Tasks.scala:69)
at scala.collection.parallel.Task.tryMerge$(Tasks.scala:66)
at scala.collection.parallel.ParIterableLike$Map.tryMerge(ParIterableLike.scala:1061)
at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask.internal(Tasks.scala:177)
at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask.internal$(Tasks.scala:157)
at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.internal(Tasks.scala:440)
at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask.compute(Tasks.scala:150)
at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask.compute$(Tasks.scala:149)
at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.compute(Tasks.scala:440)
at java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
at java.base/java.util.concurrent.ForkJoinPool.helpJoin(ForkJoinPool.java:1883)
at java.base/java.util.concurrent.ForkJoinTask.awaitDone(ForkJoinTask.java:440)
at java.base/java.util.concurrent.ForkJoinTask.join(ForkJoinTask.java:670)
... 15 more
Caused by: java.lang.OutOfMemoryError: Java heap space
This is followed by another error:
Error: java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext.
This stopped SparkContext was created at:

JBoss 7.3 getting java.lang.IllegalArgumentException

I am deploying a WAR file built with Java 11 on JBoss 7.3, and the deployment works OK.
When I send a request to it, it should open the configured login.jsp, but it fails with the error below. This used to work fine when I was using Java 8; the same WAR built with Java 8 works fine, and nothing has changed in the code. Any clues as to what could have gone wrong here?
15:16:11,435 ERROR [io.undertow.request] (default task-1) UT005023: Exception handling request to /lis/authenticated/identity/login: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.IllegalArgumentException
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.spec.RequestDispatcherImpl.includeImpl(RequestDispatcherImpl.java:397)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.spec.RequestDispatcherImpl.setupIncludeImpl(RequestDispatcherImpl.java:326)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.spec.RequestDispatcherImpl.include(RequestDispatcherImpl.java:290)
at deployment.lis.war//com.teamcenter.lis.provider.dispatcher.util.HttpUtils.processLoginFailedAction(Unknown Source)
at deployment.lis.war//com.teamcenter.lis.provider.oauth.filter.OAuthFilter.doFilter(Unknown Source)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.handlers.ServletChain$1.handleRequest(ServletChain.java:68)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
at org.wildfly.extension.undertow#7.3.0.GA-redhat-00004//org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)
at io.undertow.core#2.0.28.SP1-redhat-00001//io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.handlers.RedirectDirHandler.handleRequest(RedirectDirHandler.java:68)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:132)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
at io.undertow.core#2.0.28.SP1-redhat-00001//io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.core#2.0.28.SP1-redhat-00001//io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
at io.undertow.core#2.0.28.SP1-redhat-00001//io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77)
at io.undertow.core#2.0.28.SP1-redhat-00001//io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)
at io.undertow.core#2.0.28.SP1-redhat-00001//io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43)
at io.undertow.core#2.0.28.SP1-redhat-00001//io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at org.wildfly.extension.undertow#7.3.0.GA-redhat-00004//org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
at io.undertow.core#2.0.28.SP1-redhat-00001//io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at org.wildfly.extension.undertow#7.3.0.GA-redhat-00004//org.wildfly.extension.undertow.deployment.GlobalRequestControllerHandler.handleRequest(GlobalRequestControllerHandler.java:68)
at io.undertow.core#2.0.28.SP1-redhat-00001//io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:269)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.handlers.ServletInitialHandler.access$100(ServletInitialHandler.java:78)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:133)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:130)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:48)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
at org.wildfly.extension.undertow#7.3.0.GA-redhat-00004//org.wildfly.extension.undertow.security.SecurityContextThreadSetupAction.lambda$create$0(SecurityContextThreadSetupAction.java:105)
at org.wildfly.extension.undertow#7.3.0.GA-redhat-00004//org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1504)
at org.wildfly.extension.undertow#7.3.0.GA-redhat-00004//org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1504)
at org.wildfly.extension.undertow#7.3.0.GA-redhat-00004//org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1504)
at org.wildfly.extension.undertow#7.3.0.GA-redhat-00004//org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1504)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:249)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:78)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:99)
at io.undertow.core#2.0.28.SP1-redhat-00001//io.undertow.server.Connectors.executeRootHandler(Connectors.java:376)
at io.undertow.core#2.0.28.SP1-redhat-00001//io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:830)
at org.jboss.threads#2.3.3.Final-redhat-00001//org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
at org.jboss.threads#2.3.3.Final-redhat-00001//org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1982)
at org.jboss.threads#2.3.3.Final-redhat-00001//org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1486)
at org.jboss.threads#2.3.3.Final-redhat-00001//org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1377)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.RuntimeException: java.lang.IllegalArgumentException
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.spec.RequestDispatcherImpl.forwardImpl(RequestDispatcherImpl.java:251)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.spec.RequestDispatcherImpl.forwardImplSetup(RequestDispatcherImpl.java:149)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.spec.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:111)
at deployment.lis.war//com.teamcenter.lis.provider.oauth.servlet.JSecurityCheckServlet.doGet(Unknown Source)
at javax.servlet.api#2.0.0.Final-redhat-00001//javax.servlet.http.HttpServlet.service(HttpServlet.java:503)
at javax.servlet.api#2.0.0.Final-redhat-00001//javax.servlet.http.HttpServlet.service(HttpServlet.java:590)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:74)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:81)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.handlers.ServletChain$1.handleRequest(ServletChain.java:68)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
at io.undertow.core#2.0.28.SP1-redhat-00001//io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.handlers.RedirectDirHandler.handleRequest(RedirectDirHandler.java:68)
at io.undertow.core#2.0.28.SP1-redhat-00001//io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.core#2.0.28.SP1-redhat-00001//io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:251)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.handlers.ServletInitialHandler.dispatchToServlet(ServletInitialHandler.java:196)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.spec.RequestDispatcherImpl.includeImpl(RequestDispatcherImpl.java:391)
... 48 more
Caused by: java.lang.IllegalArgumentException
at deployment.lis.war//jersey.repackaged.org.objectweb.asm.ClassReader.<init>(ClassReader.java:171)
at deployment.lis.war//jersey.repackaged.org.objectweb.asm.ClassReader.<init>(ClassReader.java:153)
at deployment.lis.war//jersey.repackaged.org.objectweb.asm.ClassReader.<init>(ClassReader.java:425)
at deployment.lis.war//org.glassfish.jersey.server.internal.scanning.AnnotationAcceptingListener.process(AnnotationAcceptingListener.java:170)
at deployment.lis.war//org.glassfish.jersey.server.ResourceConfig.scanClasses(ResourceConfig.java:915)
at deployment.lis.war//org.glassfish.jersey.server.ResourceConfig._getClasses(ResourceConfig.java:869)
at deployment.lis.war//org.glassfish.jersey.server.ResourceConfig.getClasses(ResourceConfig.java:775)
at deployment.lis.war//org.glassfish.jersey.server.ResourceConfig$WrappingResourceConfig._getClasses(ResourceConfig.java:1147)
at deployment.lis.war//org.glassfish.jersey.server.ResourceConfig.getClasses(ResourceConfig.java:775)
at deployment.lis.war//org.glassfish.jersey.server.ResourceConfig$RuntimeConfig.<init>(ResourceConfig.java:1206)
at deployment.lis.war//org.glassfish.jersey.server.ResourceConfig$RuntimeConfig.<init>(ResourceConfig.java:1178)
at deployment.lis.war//org.glassfish.jersey.server.ResourceConfig.createRuntimeConfig(ResourceConfig.java:1174)
at deployment.lis.war//org.glassfish.jersey.server.ApplicationHandler.<init>(ApplicationHandler.java:345)
at deployment.lis.war//org.glassfish.jersey.servlet.WebComponent.<init>(WebComponent.java:392)
at deployment.lis.war//org.glassfish.jersey.servlet.ServletContainer.init(ServletContainer.java:177)
at deployment.lis.war//org.glassfish.jersey.servlet.ServletContainer.init(ServletContainer.java:369)
at javax.servlet.api#2.0.0.Final-redhat-00001//javax.servlet.GenericServlet.init(GenericServlet.java:180)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.core.LifecyleInterceptorInvocation.proceed(LifecyleInterceptorInvocation.java:117)
at org.wildfly.extension.undertow#7.3.0.GA-redhat-00004//org.wildfly.extension.undertow.security.RunAsLifecycleInterceptor.init(RunAsLifecycleInterceptor.java:78)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.core.LifecyleInterceptorInvocation.proceed(LifecyleInterceptorInvocation.java:103)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.core.ManagedServlet$DefaultInstanceStrategy.start(ManagedServlet.java:305)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.core.ManagedServlet.forceInit(ManagedServlet.java:212)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.handlers.ServletChain.forceInit(ServletChain.java:130)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.handlers.ServletChain$1.handleRequest(ServletChain.java:63)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
at io.undertow.core#2.0.28.SP1-redhat-00001//io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.handlers.RedirectDirHandler.handleRequest(RedirectDirHandler.java:68)
at io.undertow.core#2.0.28.SP1-redhat-00001//io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.core#2.0.28.SP1-redhat-00001//io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:251)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.handlers.ServletInitialHandler.dispatchToPath(ServletInitialHandler.java:186)
at io.undertow.servlet#2.0.28.SP1-redhat-00001//io.undertow.servlet.spec.RequestDispatcherImpl.forwardImpl(RequestDispatcherImpl.java:227)
... 65 more

Apache Ignite: Failed to create string representation of binary object

I am getting the exception below and am not able to figure out what is wrong with the code.
I have simplified the POJO classes that are persisted to the Ignite cache, but some complexity remains.
All my POJOs are serializable, but a few of them hold business logic, DAOs, and application context objects. These objects can't be removed; removing them would require refactoring the whole codebase.
class org.apache.ignite.IgniteException: Failed to create string representation of binary object.
at org.apache.ignite.internal.util.tostring.GridToStringBuilder.toStringImpl(GridToStringBuilder.java:1022)
at org.apache.ignite.internal.util.tostring.GridToStringBuilder.toString(GridToStringBuilder.java:864)
at org.apache.ignite.internal.processors.cache.distributed.near.GridNearSingleGetResponse.toString(GridNearSingleGetResponse.java:317)
at java.lang.String.valueOf(String.java:2994)
at java.lang.StringBuilder.append(StringBuilder.java:131)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.send(GridCacheIoManager.java:1162)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.send(GridCacheIoManager.java:1209)
at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter$6.apply(GridDhtCacheAdapter.java:1003)
at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter$6.apply(GridDhtCacheAdapter.java:938)
at org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:385)
at org.apache.ignite.internal.util.future.GridFutureAdapter.listen(GridFutureAdapter.java:355)
at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.processNearSingleGetRequest(GridDhtCacheAdapter.java:938)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$300(GridDhtAtomicCache.java:135)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$4.apply(GridDhtAtomicCache.java:257)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$4.apply(GridDhtAtomicCache.java:252)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1056)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:581)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:380)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:306)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:101)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:295)
at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1569)
at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1197)
at org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:127)
at org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1093)
at org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:505)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at java.lang.Thread.run(Thread.java:748)
Caused by: class org.apache.ignite.IgniteException: Failed to create string representation of binary object.
at org.apache.ignite.internal.binary.BinaryObjectExImpl.toString(BinaryObjectExImpl.java:189)
at org.apache.ignite.internal.binary.BinaryObjectImpl.toString(BinaryObjectImpl.java:920)
at java.lang.String.valueOf(String.java:2994)
at org.apache.ignite.internal.util.GridStringBuilder.a(GridStringBuilder.java:101)
at org.apache.ignite.internal.util.tostring.SBLimitedLength.a(SBLimitedLength.java:88)
at org.apache.ignite.internal.util.tostring.GridToStringBuilder.toString(GridToStringBuilder.java:939)
at org.apache.ignite.internal.util.tostring.GridToStringBuilder.toStringImpl(GridToStringBuilder.java:1005)
... 27 more
Caused by: class org.apache.ignite.binary.BinaryObjectException: Failed to read field: currentContacts
at org.apache.ignite.internal.binary.BinaryReaderExImpl.wrapFieldException(BinaryReaderExImpl.java:446)
at org.apache.ignite.internal.binary.BinaryReaderExImpl.unmarshalField(BinaryReaderExImpl.java:343)
at org.apache.ignite.internal.binary.BinaryObjectImpl.field(BinaryObjectImpl.java:626)
at org.apache.ignite.internal.binary.BinaryObjectExImpl.toString(BinaryObjectExImpl.java:225)
at org.apache.ignite.internal.binary.BinaryObjectExImpl.toString(BinaryObjectExImpl.java:186)
... 33 more
Caused by: class org.apache.ignite.binary.BinaryObjectException: Failed to unmarshal object with optimized marshaller
at org.apache.ignite.internal.binary.BinaryUtils.doReadOptimized(BinaryUtils.java:1765)
at org.apache.ignite.internal.binary.BinaryUtils.unmarshal(BinaryUtils.java:1971)
at org.apache.ignite.internal.binary.BinaryUtils.unmarshal(BinaryUtils.java:1796)
at org.apache.ignite.internal.binary.BinaryReaderExImpl.unmarshalField(BinaryReaderExImpl.java:340)
... 36 more
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to deserialize object with given class loader: [clsLdr=sun.misc.Launcher$AppClassLoader#764c12b6, err=Failed to deserialize object [typeName=java.util.concurrent.ConcurrentHashMap]]
at org.apache.ignite.internal.marshaller.optimized.OptimizedMarshaller.unmarshal0(OptimizedMarshaller.java:237)
at org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:94)
at org.apache.ignite.internal.binary.BinaryUtils.doReadOptimized(BinaryUtils.java:1762)
... 39 more
Caused by: java.io.IOException: Failed to deserialize object [typeName=java.util.concurrent.ConcurrentHashMap]
at org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObject0(OptimizedObjectInputStream.java:350)
at org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObjectOverride(OptimizedObjectInputStream.java:198)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:425)
at org.apache.ignite.internal.marshaller.optimized.OptimizedMarshaller.unmarshal0(OptimizedMarshaller.java:228)
... 41 more
Caused by: java.io.IOException: java.lang.reflect.InvocationTargetException
at org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readSerializable(OptimizedObjectInputStream.java:607)
at org.apache.ignite.internal.marshaller.optimized.OptimizedClassDescriptor.read(OptimizedClassDescriptor.java:954)
at org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObject0(OptimizedObjectInputStream.java:346)
... 44 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readSerializable(OptimizedObjectInputStream.java:604)
... 46 more
Caused by: java.lang.ClassNotFoundException: com.project.qm.controller.beans.WorkItem
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.ignite.internal.util.IgniteUtils.forName(IgniteUtils.java:8771)
at org.apache.ignite.internal.MarshallerContextImpl.getClass(MarshallerContextImpl.java:349)
at org.apache.ignite.internal.marshaller.optimized.OptimizedMarshallerUtils.classDescriptor(OptimizedMarshallerUtils.java:264)
at org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObject0(OptimizedObjectInputStream.java:341)
at org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObjectOverride(OptimizedObjectInputStream.java:198)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:425)
at java.util.concurrent.ConcurrentHashMap.readObject(ConcurrentHashMap.java:1445)
Any help is appreciated.
In this case you should disable DEBUG logging. DEBUG logging will sometimes call toString() on objects that would otherwise not need to be deserialized (and therefore would not need their classes present on the node).
Change the global log level to INFO to make this issue go away.
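If you cannot change the logger configuration file, the level can also be raised programmatically; a minimal sketch, assuming Log4j 1.x is the backing logger:

import org.apache.log4j.Level;
import org.apache.log4j.Logger;

// Raising the root logger to INFO prevents DEBUG-level message tracing
// from calling toString() on binary objects whose classes (here
// com.project.qm.controller.beans.WorkItem) are not on this node's classpath.
Logger.getRootLogger().setLevel(Level.INFO);

The equivalent change in log4j.properties or log4j.xml is setting the root logger level to INFO.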

Hive LLAP - ORC split generation failed

I am trying to evaluate Hive LLAP on a Hortonworks HDP 2.6 cluster.
Unfortunately, I get java.lang.RuntimeException: ORC split generation failed when trying to execute queries:
ERROR : Status: Failed
ERROR : Vertex failed, vertexName=Map 1, vertexId=vertex_1504166274656_0006_3_00, diagnostics=[Vertex vertex_1504166274656_0006_3_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: gprs_records initializer failed, vertex=vertex_1504166274656_0006_3_00 [Map 1], java.lang.RuntimeException: ORC split generation failed with exception: java.lang.ArrayIndexOutOfBoundsException: 5
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1615)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:1701)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:446)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:569)
at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:196)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:269)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:253)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: java.lang.ArrayIndexOutOfBoundsException: 5
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1609)
... 15 more
Caused by: java.lang.ArrayIndexOutOfBoundsException: 5
at org.apache.orc.OrcFile$WriterVersion.from(OrcFile.java:145)
at org.apache.orc.impl.OrcTail.getWriterVersion(OrcTail.java:73)
at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:383)
at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:63)
at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:89)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.populateAndCacheStripeDetails(OrcInputFormat.java:1419)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.callInternal(OrcInputFormat.java:1305)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.access$2600(OrcInputFormat.java:1104)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator$1.run(OrcInputFormat.java:1285)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator$1.run(OrcInputFormat.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:1282)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:1104)
... 4 more
]
ERROR : Vertex killed, vertexName=Reducer 2, vertexId=vertex_1504166274656_0006_3_01, diagnostics=[Vertex received Kill in INITED state., Vertex vertex_1504166274656_0006_3_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]
ERROR : DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:1
INFO : org.apache.tez.common.counters.DAGCounter:
INFO : AM_CPU_MILLISECONDS: 840
INFO : AM_GC_TIME_MILLIS: 23
ERROR : FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, vertexId=vertex_1504166274656_0006_3_00, diagnostics=[Vertex vertex_1504166274656_0006_3_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: gprs_records initializer failed, vertex=vertex_1504166274656_0006_3_00 [Map 1], java.lang.RuntimeException: ORC split generation failed with exception: java.lang.ArrayIndexOutOfBoundsException: 5
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1615)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:1701)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:446)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:569)
at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:196)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:269)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:253)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: java.lang.ArrayIndexOutOfBoundsException: 5
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1609)
... 15 more
Caused by: java.lang.ArrayIndexOutOfBoundsException: 5
at org.apache.orc.OrcFile$WriterVersion.from(OrcFile.java:145)
at org.apache.orc.impl.OrcTail.getWriterVersion(OrcTail.java:73)
at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:383)
at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:63)
at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:89)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.populateAndCacheStripeDetails(OrcInputFormat.java:1419)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.callInternal(OrcInputFormat.java:1305)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.access$2600(OrcInputFormat.java:1104)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator$1.run(OrcInputFormat.java:1285)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator$1.run(OrcInputFormat.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:1282)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:1104)
... 4 more
]Vertex killed, vertexName=Reducer 2, vertexId=vertex_1504166274656_0006_3_01, diagnostics=[Vertex received Kill in INITED state., Vertex vertex_1504166274656_0006_3_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:1
INFO : Resetting the caller context to HIVE_SSN_ID:2415846c-fb92-480f-b869-240a0b0f30ed
INFO : Completed executing command(queryId=hive_20170831085623_51baf1df-5823-459e-80cf-76fa1f81789f); Time taken: 0.342 seconds
Error: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, vertexId=vertex_1504166274656_0006_3_00, diagnostics=[Vertex vertex_1504166274656_0006_3_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: gprs_records initializer failed, vertex=vertex_1504166274656_0006_3_00 [Map 1], java.lang.RuntimeException: ORC split generation failed with exception: java.lang.ArrayIndexOutOfBoundsException: 5
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1615)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:1701)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:446)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:569)
at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:196)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:269)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:253)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: java.lang.ArrayIndexOutOfBoundsException: 5
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1609)
... 15 more
Caused by: java.lang.ArrayIndexOutOfBoundsException: 5
at org.apache.orc.OrcFile$WriterVersion.from(OrcFile.java:145)
at org.apache.orc.impl.OrcTail.getWriterVersion(OrcTail.java:73)
at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:383)
at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:63)
at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:89)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.populateAndCacheStripeDetails(OrcInputFormat.java:1419)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.callInternal(OrcInputFormat.java:1305)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.access$2600(OrcInputFormat.java:1104)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator$1.run(OrcInputFormat.java:1285)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator$1.run(OrcInputFormat.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:1282)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:1104)
... 4 more
]Vertex killed, vertexName=Reducer 2, vertexId=vertex_1504166274656_0006_3_01, diagnostics=[Vertex received Kill in INITED state., Vertex vertex_1504166274656_0006_3_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:1 (state=08S01,code=2)
Here's some information to complete the picture:
I've enabled LLAP using the Ambari interface, installed HiveServer2 Interactive, and restarted the services. I am using Beeline version 1.2.1000.2.6.0 to connect to it.
The table being queried is in ORC format. ORC files are generated by an ETL pipeline and written by Java code using ORC Core. I have no problem querying it without LLAP using HiveServer2.
Any advice is really appreciated. Thank you all!
The issue seems to be related to several known issues with ORC split generation. Try running the query with the following setting:
hive.exec.orc.split.strategy=BI
See https://community.hortonworks.com/questions/131609/hive-llap-orc-split-generation-failed.html
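The setting can be applied per session, right before the query. A minimal sketch over the Hive JDBC driver (connection URL, credentials, and query are hypothetical placeholders; in Beeline you can issue the same SET statement directly):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

Connection conn = DriverManager.getConnection(
        "jdbc:hive2://llap-host:10500/default", "hive", "");
Statement stmt = conn.createStatement();
// The BI strategy creates splits without reading ORC file footers, which
// sidesteps the WriterVersion lookup that overflows in the trace above.
stmt.execute("SET hive.exec.orc.split.strategy=BI");
ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM gprs_records");
while (rs.next()) {
    System.out.println(rs.getLong(1));
}
conn.close();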
Your ORC files were written by an ORC writer with a newer WriterVersion than your reader understands.
Is your Hive version 2.1.1?
1. Fix the problem by applying this patch file: https://patch-diff.githubusercontent.com/raw/apache/orc/pull/75.patch
2. Recompile the Hive project.
3. Replace hive-exec-2.1.1.jar and hive-orc-2.1.1.jar in the lib directory.

Neo4j kernel crashing when loading large graph

I'm loading a large number of nodes and relationships into an embedded Neo4j database. After about 10,000 inserts, it dies. If I stay under that point, then everything works great. Queries return as they should, as do inserts. It looks like somehow a database file is getting deleted in the middle of the inserts, which is causing everything to fall apart. My database builds itself from scratch, so if I completely delete my graphdb folder and restart it, it runs exactly the same every time. So how do you handle large embedded Neo4j databases?
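For context, this is roughly the insert pattern in question, batched into moderately sized transactions (a minimal sketch against the Neo4j 2.x embedded API; the label, property, and node count are hypothetical):

import org.neo4j.graphdb.DynamicLabel;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

GraphDatabaseService db = new GraphDatabaseFactory()
        .newEmbeddedDatabase("/home/user/graphdb");
int totalNodes = 100000;  // hypothetical load size
int batchSize = 1000;     // commit in batches rather than one huge transaction
for (int i = 0; i < totalNodes; i += batchSize) {
    try (Transaction tx = db.beginTx()) {
        for (int j = i; j < Math.min(i + batchSize, totalNodes); j++) {
            Node node = db.createNode(DynamicLabel.label("Item"));
            node.setProperty("id", j);
        }
        tx.success();  // mark the transaction for commit; close() commits it
    }
}
db.shutdown();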
Here are the pertinent errors.
From the Java output side
The transactions stop committing:
WorkerThread exception::org.neo4j.graphdb.TransactionFailureException::Unable to commit transaction
org.neo4j.graphdb.TransactionFailureException: Unable to commit transaction
at org.neo4j.kernel.TopLevelTransaction.close(TopLevelTransaction.java:140)
...
at java.lang.Thread.run(Thread.java:745)
Caused by: org.neo4j.graphdb.TransactionFailureException: commit threw exception
at org.neo4j.kernel.impl.transaction.TxManager.commit(TxManager.java:500)
at org.neo4j.kernel.impl.transaction.TxManager.commit(TxManager.java:385)
at org.neo4j.kernel.impl.transaction.TransactionImpl.commit(TransactionImpl.java:123)
at org.neo4j.kernel.TopLevelTransaction.close(TopLevelTransaction.java:124)
... 4 more
Caused by: javax.transaction.xa.XAException
Then it informs me that there's a missing file in the database:
at org.neo4j.kernel.impl.transaction.TransactionImpl.doCommit(TransactionImpl.java:560)
at org.neo4j.kernel.impl.transaction.TxManager.commit(TxManager.java:448)
... 7 more
Caused by: org.neo4j.kernel.impl.nioneo.store.UnderlyingStorageException: java.io.FileNotFoundException: /home/user/graphdb/schema/label/lucene/_1z6.frq (Protocol error)
at org.neo4j.kernel.impl.nioneo.xa.NeoStoreTransaction.updateLabelScanStore(NeoStoreTransaction.java:814)
at org.neo4j.kernel.impl.nioneo.xa.NeoStoreTransaction.applyCommit(NeoStoreTransaction.java:699)
at org.neo4j.kernel.impl.nioneo.xa.NeoStoreTransaction.doCommit(NeoStoreTransaction.java:631)
at org.neo4j.kernel.impl.transaction.xaframework.XaTransaction.commit(XaTransaction.java:327)
at org.neo4j.kernel.impl.transaction.xaframework.XaResourceManager.commitWriteTx(XaResourceManager.java:632)
at org.neo4j.kernel.impl.transaction.xaframework.XaResourceManager.commit(XaResourceManager.java:533)
at org.neo4j.kernel.impl.transaction.xaframework.XaResourceHelpImpl.commit(XaResourceHelpImpl.java:64)
at org.neo4j.kernel.impl.transaction.TransactionImpl.doCommit(TransactionImpl.java:548)
... 8 more
Caused by: java.io.FileNotFoundException: /home/user/graphdb/schema/label/lucene/_1z6.frq (Protocol error)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:241)
at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:441)
at org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:306)
at org.apache.lucene.index.FormatPostingsDocsWriter.<init>(FormatPostingsDocsWriter.java:47)
at org.apache.lucene.index.FormatPostingsTermsWriter.<init>(FormatPostingsTermsWriter.java:33)
at org.apache.lucene.index.FormatPostingsFieldsWriter.<init>(FormatPostingsFieldsWriter.java:51)
at org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:85)
at org.apache.lucene.index.TermsHash.flush(TermsHash.java:113)
at org.apache.lucene.index.DocInverter.flush(DocInverter.java:70)
at org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:60)
at org.apache.lucene.index.DocumentsWriter.flush(DocumentsWriter.java:581)
at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3587)
at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3552)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:450)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:399)
at org.apache.lucene.index.DirectoryReader.doOpenFromWriter(DirectoryReader.java:413)
at org.apache.lucene.index.DirectoryReader.doOpenIfChanged(DirectoryReader.java:432)
at org.apache.lucene.index.DirectoryReader.doOpenIfChanged(DirectoryReader.java:375)
at org.apache.lucene.index.IndexReader.openIfChanged(IndexReader.java:508)
at org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:109)
at org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:57)
at org.apache.lucene.search.ReferenceManager.maybeRefresh(ReferenceManager.java:137)
at org.neo4j.kernel.api.impl.index.LuceneLabelScanStore.refreshSearcher(LuceneLabelScanStore.java:159)
at org.neo4j.kernel.api.impl.index.LuceneLabelScanWriter.close(LuceneLabelScanWriter.java:82)
at org.neo4j.kernel.impl.nioneo.xa.NeoStoreTransaction.updateLabelScanStore(NeoStoreTransaction.java:811)
... 15 more
Then I can no longer get a new transaction:
WorkerThread exception::org.neo4j.graphdb.TransactionFailureException::Unable to get transaction.
org.neo4j.graphdb.TransactionFailureException: Unable to get transaction.
at org.neo4j.kernel.InternalAbstractGraphDatabase.transactionRunning(InternalAbstractGraphDatabase.java:1064)
at org.neo4j.kernel.InternalAbstractGraphDatabase.beginTx(InternalAbstractGraphDatabase.java:1037)
at org.neo4j.kernel.TransactionBuilderImpl.begin(TransactionBuilderImpl.java:43)
at org.neo4j.kernel.InternalAbstractGraphDatabase.beginTx(InternalAbstractGraphDatabase.java:1024)
...
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.transaction.SystemException: Kernel has encountered some problem, please perform neccesary action (tx recovery/restart)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.neo4j.kernel.impl.transaction.KernelHealth.assertHealthy(KernelHealth.java:61)
at org.neo4j.kernel.impl.transaction.TxManager.assertTmOk(TxManager.java:339)
at org.neo4j.kernel.impl.transaction.TxManager.getTransaction(TxManager.java:725)
at org.neo4j.kernel.InternalAbstractGraphDatabase.transactionRunning(InternalAbstractGraphDatabase.java:1060)
... 7 more
Caused by: javax.transaction.xa.XAException
at org.neo4j.kernel.impl.transaction.TransactionImpl.doCommit(TransactionImpl.java:560)
at org.neo4j.kernel.impl.transaction.TxManager.commit(TxManager.java:448)
at org.neo4j.kernel.impl.transaction.TxManager.commit(TxManager.java:385)
at org.neo4j.kernel.impl.transaction.TransactionImpl.commit(TransactionImpl.java:123)
at org.neo4j.kernel.TopLevelTransaction.close(TopLevelTransaction.java:124)
... 4 more
Caused by: org.neo4j.kernel.impl.nioneo.store.UnderlyingStorageException: java.io.FileNotFoundException: /home/user/graphdb/schema/label/lucene/_1z6.frq (Protocol error)
at org.neo4j.kernel.impl.nioneo.xa.NeoStoreTransaction.updateLabelScanStore(NeoStoreTransaction.java:814)
at org.neo4j.kernel.impl.nioneo.xa.NeoStoreTransaction.applyCommit(NeoStoreTransaction.java:699)
at org.neo4j.kernel.impl.nioneo.xa.NeoStoreTransaction.doCommit(NeoStoreTransaction.java:631)
at org.neo4j.kernel.impl.transaction.xaframework.XaTransaction.commit(XaTransaction.java:327)
at org.neo4j.kernel.impl.transaction.xaframework.XaResourceManager.commitWriteTx(XaResourceManager.java:632)
at org.neo4j.kernel.impl.transaction.xaframework.XaResourceManager.commit(XaResourceManager.java:533)
at org.neo4j.kernel.impl.transaction.xaframework.XaResourceHelpImpl.commit(XaResourceHelpImpl.java:64)
at org.neo4j.kernel.impl.transaction.TransactionImpl.doCommit(TransactionImpl.java:548)
... 8 more
Caused by: java.io.FileNotFoundException: /home/user/graphdb/schema/label/lucene/_1z6.frq (Protocol error)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:241)
at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:441)
at org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:306)
at org.apache.lucene.index.FormatPostingsDocsWriter.<init>(FormatPostingsDocsWriter.java:47)
at org.apache.lucene.index.FormatPostingsTermsWriter.<init>(FormatPostingsTermsWriter.java:33)
at org.apache.lucene.index.FormatPostingsFieldsWriter.<init>(FormatPostingsFieldsWriter.java:51)
at org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:85)
at org.apache.lucene.index.TermsHash.flush(TermsHash.java:113)
at org.apache.lucene.index.DocInverter.flush(DocInverter.java:70)
at org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:60)
at org.apache.lucene.index.DocumentsWriter.flush(DocumentsWriter.java:581)
at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3587)
at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3552)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:450)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:399)
at org.apache.lucene.index.DirectoryReader.doOpenFromWriter(DirectoryReader.java:413)
at org.apache.lucene.index.DirectoryReader.doOpenIfChanged(DirectoryReader.java:432)
at org.apache.lucene.index.DirectoryReader.doOpenIfChanged(DirectoryReader.java:375)
at org.apache.lucene.index.IndexReader.openIfChanged(IndexReader.java:508)
at org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:109)
at org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:57)
at org.apache.lucene.search.ReferenceManager.maybeRefresh(ReferenceManager.java:137)
at org.neo4j.kernel.api.impl.index.LuceneLabelScanStore.refreshSearcher(LuceneLabelScanStore.java:159)
at org.neo4j.kernel.api.impl.index.LuceneLabelScanWriter.close(LuceneLabelScanWriter.java:82)
at org.neo4j.kernel.impl.nioneo.xa.NeoStoreTransaction.updateLabelScanStore(NeoStoreTransaction.java:811)
... 15 more
From the messages.log side:
It starts with memory-mapping errors. These actually show up first when the database comes online, but more trickle in before it dies completely:
2015-01-27 21:51:29.112+0000 ERROR [org.neo4j]: [/home/user/graphdb/neostore.nodestore.db] Unable to memory map Unable to map pos=0 recordSize=15 totalSize=1048575
org.neo4j.kernel.impl.nioneo.store.MappedMemException: Unable to map pos=0 recordSize=15 totalSize=1048575
at org.neo4j.kernel.impl.nioneo.store.MappedPersistenceWindow.<init>(MappedPersistenceWindow.java:59)
at org.neo4j.kernel.impl.nioneo.store.PersistenceWindowPool.allocateNewWindow(PersistenceWindowPool.java:656)
at org.neo4j.kernel.impl.nioneo.store.PersistenceWindowPool.expandBricks(PersistenceWindowPool.java:617)
at org.neo4j.kernel.impl.nioneo.store.PersistenceWindowPool.acquire(PersistenceWindowPool.java:144)
at org.neo4j.kernel.impl.nioneo.store.CommonAbstractStore.acquireWindow(CommonAbstractStore.java:546)
at org.neo4j.kernel.impl.nioneo.store.NodeStore.forceGetRecord(NodeStore.java:149)
at org.neo4j.kernel.impl.nioneo.xa.NeoStoreIndexStoreView$NodeStoreScan.run(NeoStoreIndexStoreView.java:327)
at org.neo4j.kernel.impl.api.index.IndexPopulationJob.indexAllNodes(IndexPopulationJob.java:212)
at org.neo4j.kernel.impl.api.index.IndexPopulationJob.run(IndexPopulationJob.java:107)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Invalid argument
at sun.nio.ch.FileChannelImpl.map0(Native Method)
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:875)
at org.neo4j.kernel.impl.nioneo.store.StoreFileChannel.map(StoreFileChannel.java:57)
at org.neo4j.kernel.impl.nioneo.store.MappedPersistenceWindow.<init>(MappedPersistenceWindow.java:53)
... 13 more
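For context, the failing call underneath Neo4j's MappedPersistenceWindow is an ordinary FileChannel.map(). Here is a minimal sketch of the same operation (the class name and READ_ONLY mode are mine; the path and totalSize come from the log line above), which fails with the same IOException when the OS rejects the mmap request:
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MmapSketch {
    public static void main(String[] args) throws IOException {
        // Path and totalSize are taken from the error message above.
        try (RandomAccessFile raf = new RandomAccessFile(
                "/home/user/graphdb/neostore.nodestore.db", "r");
             FileChannel channel = raf.getChannel()) {
            // Clamp to the actual file length so the sketch runs on any file.
            long size = Math.min(channel.size(), 1048575);
            // This is the call that MappedPersistenceWindow ultimately makes;
            // FileChannelImpl.map0 throws IOException("Invalid argument") when
            // the OS rejects the mmap request, e.g. because a process resource
            // limit has been exhausted.
            MappedByteBuffer window = channel.map(FileChannel.MapMode.READ_ONLY, 0, size);
            System.out.println("Mapped " + window.capacity() + " bytes");
        }
    }
}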
Then I get this error in messages.log; as far as I can tell, this is the point where everything falls apart:
2015-01-27 21:59:41.516+0000 ERROR [org.neo4j]: setting TM not OK. Kernel has encountered some problem, please perform neccesary action (tx recovery/restart) null
javax.transaction.xa.XAException
at org.neo4j.kernel.impl.transaction.TransactionImpl.doCommit(TransactionImpl.java:560)
at org.neo4j.kernel.impl.transaction.TxManager.commit(TxManager.java:448)
at org.neo4j.kernel.impl.transaction.TxManager.commit(TxManager.java:385)
at org.neo4j.kernel.impl.transaction.TransactionImpl.commit(TransactionImpl.java:123)
at org.neo4j.kernel.TopLevelTransaction.close(TopLevelTransaction.java:124)
...
at java.lang.Thread.run(Thread.java:745)
Caused by: org.neo4j.kernel.impl.nioneo.store.UnderlyingStorageException: java.io.FileNotFoundException: /home/user/graphdb/schema/label/lucene/_1z6.frq (Protocol error)
at org.neo4j.kernel.impl.nioneo.xa.NeoStoreTransaction.updateLabelScanStore(NeoStoreTransaction.java:814)
at org.neo4j.kernel.impl.nioneo.xa.NeoStoreTransaction.applyCommit(NeoStoreTransaction.java:699)
at org.neo4j.kernel.impl.nioneo.xa.NeoStoreTransaction.doCommit(NeoStoreTransaction.java:631)
at org.neo4j.kernel.impl.transaction.xaframework.XaTransaction.commit(XaTransaction.java:327)
at org.neo4j.kernel.impl.transaction.xaframework.XaResourceManager.commitWriteTx(XaResourceManager.java:632)
at org.neo4j.kernel.impl.transaction.xaframework.XaResourceManager.commit(XaResourceManager.java:533)
at org.neo4j.kernel.impl.transaction.xaframework.XaResourceHelpImpl.commit(XaResourceHelpImpl.java:64)
at org.neo4j.kernel.impl.transaction.TransactionImpl.doCommit(TransactionImpl.java:548)
... 8 more
Caused by: java.io.FileNotFoundException: /home/user/graphdb/schema/label/lucene/_1z6.frq (Protocol error)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:241)
at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:441)
at org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:306)
at org.apache.lucene.index.FormatPostingsDocsWriter.<init>(FormatPostingsDocsWriter.java:47)
at org.apache.lucene.index.FormatPostingsTermsWriter.<init>(FormatPostingsTermsWriter.java:33)
at org.apache.lucene.index.FormatPostingsFieldsWriter.<init>(FormatPostingsFieldsWriter.java:51)
at org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:85)
at org.apache.lucene.index.TermsHash.flush(TermsHash.java:113)
at org.apache.lucene.index.DocInverter.flush(DocInverter.java:70)
at org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:60)
at org.apache.lucene.index.DocumentsWriter.flush(DocumentsWriter.java:581)
at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3587)
at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3552)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:450)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:399)
at org.apache.lucene.index.DirectoryReader.doOpenFromWriter(DirectoryReader.java:413)
at org.apache.lucene.index.DirectoryReader.doOpenIfChanged(DirectoryReader.java:432)
at org.apache.lucene.index.DirectoryReader.doOpenIfChanged(DirectoryReader.java:375)
at org.apache.lucene.index.IndexReader.openIfChanged(IndexReader.java:508)
at org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:109)
at org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:57)
at org.apache.lucene.search.ReferenceManager.maybeRefresh(ReferenceManager.java:137)
at org.neo4j.kernel.api.impl.index.LuceneLabelScanStore.refreshSearcher(LuceneLabelScanStore.java:159)
at org.neo4j.kernel.api.impl.index.LuceneLabelScanWriter.close(LuceneLabelScanWriter.java:82)
at org.neo4j.kernel.impl.nioneo.xa.NeoStoreTransaction.updateLabelScanStore(NeoStoreTransaction.java:811)
... 15 more
2015-01-27 21:59:41.519+0000 ERROR [org.neo4j]: TM error tx commit commit threw exception
Any ideas on what's causing that .frq file to disappear?
We resolved it in a side conversation: the maximum number of open files was too low (4000), which Neo4j also reports as a warning at startup.
A limit that low makes Lucene fail internally once it has enough segment files open.
After increasing the limit, the OP could import the data successfully.
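For anyone hitting this later: you can verify the limit the JVM actually sees at runtime. A minimal sketch, assuming a HotSpot/OpenJDK JVM on a Unix-like OS (where the platform OperatingSystemMXBean implements com.sun.management.UnixOperatingSystemMXBean; the class name is mine):
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdLimitCheck {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
            UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
            // getMaxFileDescriptorCount() is the per-process limit that was 4000 here;
            // getOpenFileDescriptorCount() shows how close the JVM is to hitting it.
            System.out.println("Open file descriptors: " + unix.getOpenFileDescriptorCount());
            System.out.println("Max file descriptors:  " + unix.getMaxFileDescriptorCount());
        } else {
            System.out.println("Not a Unix-like platform; check the limit with OS tools instead.");
        }
    }
}
If the maximum is in the low thousands, raise it (for example via ulimit -n or /etc/security/limits.conf) before starting the database. Lucene holds many segment files open while flushing, so a low limit can surface as seemingly unrelated FileNotFoundException or mmap failures like the ones above.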
