How to solve this InvalidClassException when deserializing objects from an InputStream? - java

I am using James 2.3.2 as my mail server, with MySQL 5.5 as the backend.
I got the following exception:
23/06/11 16:39:49 DEBUG mailstore: Exception reading attributes Mail1308818378708-0-to-163.com in spool
java.io.InvalidClassException: [Ljava.lang.StackTraceElement;; enum descriptor has non-zero serialVersionUID: 163864874655243298
at java.io.ObjectStreamClass.readNonProxy(ObjectStreamClass.java:620)
at java.io.ObjectInputStream.readClassDescriptor(ObjectInputStream.java:789)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1534)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1466)
at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1591)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1299)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1910)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1834)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1719)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1305)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:348)
at java.util.HashMap.readObject(HashMap.java:1067)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:592)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:946)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1812)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1719)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1305)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:348)
at org.apache.james.mailrepository.JDBCMailRepository.retrieve(JDBCMailRepository.java:846)
at org.apache.james.mailrepository.JDBCSpoolRepository.accept(JDBCSpoolRepository.java:203)
at org.apache.james.mailrepository.JDBCSpoolRepository.accept(JDBCSpoolRepository.java:126)
at org.apache.james.mailrepository.MailStoreSpoolRepository.accept(MailStoreSpoolRepository.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:592)
at org.apache.avalon.phoenix.components.application.BlockInvocationHandler.invoke(BlockInvocationHandler.java:134)
at $Proxy5.accept(Unknown Source)
at org.apache.james.transport.JamesSpoolManager.run(JamesSpoolManager.java:299)
at java.lang.Thread.run(Thread.java:595)
How to solve this issue?
Thanks!

I have spent three hours testing with different databases, such as MySQL 5.0 and Oracle. It works well on Oracle but not on MySQL, so it seems to be a MySQL issue, not a James one.
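For anyone debugging this: an InvalidClassException complaining that a descriptor "has non-zero serialVersionUID" where none should exist usually means the serialized byte stream itself was corrupted in storage, which would fit MySQL mangling the BLOB (for example through character-set conversion on a non-binary column). Below is a minimal sketch to check this, assuming the default James 2.3 schema where attributes live in the message_attributes BLOB column of the spool table; table, column, URL, and credentials are illustrative and need adjusting to your setup.

import java.io.ByteArrayInputStream;
import java.io.ObjectInputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Sketch: pull the raw attribute bytes straight out of MySQL and try to
// deserialize them outside James. Success means the bytes are intact;
// the same InvalidClassException here confirms the BLOB was corrupted
// on its way into or out of the database.
public class SpoolBlobCheck {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/mail", "user", "pass"); // hypothetical URL/credentials
             PreparedStatement ps = con.prepareStatement(
                 "SELECT message_attributes FROM spool WHERE message_name = ?")) {
            ps.setString(1, "Mail1308818378708-0-to-163.com");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    byte[] raw = rs.getBytes(1);
                    try (ObjectInputStream in =
                             new ObjectInputStream(new ByteArrayInputStream(raw))) {
                        System.out.println("deserialized OK: " + in.readObject());
                    }
                }
            }
        }
    }
}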

Related

Hive select problems: java.lang.IncompatibleClassChangeError

I have Hadoop 3.1.2 with Hive 3.1.2 on Windows 10. Whenever I run a SELECT or other query against a table, I get the errors below and the CLI stops running. Any help is appreciated.
Exception in thread "main" java.lang.IncompatibleClassChangeError: Class com.google.common.collect.ImmutableSortedMap does not implement the requested interface java.util.NavigableMap
at org.apache.calcite.schema.Schemas.gatherLattices(Schemas.java:498)
at org.apache.calcite.schema.Schemas.getLatticeEntries(Schemas.java:492)
at org.apache.calcite.jdbc.CalciteConnectionImpl.init(CalciteConnectionImpl.java:153)
at org.apache.calcite.jdbc.Driver$1.onConnectionInit(Driver.java:109)
at org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:139)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:150)
at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:111)
at org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1414)
at org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedAST(CalcitePlanner.java:1430)
at org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:450)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12164)
at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:330)
at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:285)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:659)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1826)
at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1773)
at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1768)
at org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126)
at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:214)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
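Not an answer, but a quick way to narrow this down: an IncompatibleClassChangeError like this usually means the class was loaded from an older Guava than the one Calcite was compiled against (older Guava releases' ImmutableSortedMap predates NavigableMap support). A small sketch, run on the same classpath Hive uses, to see which jar actually wins; the "stale Guava on the Hadoop side" explanation is an assumption to verify, not a confirmed cause.

import java.util.NavigableMap;
import com.google.common.collect.ImmutableSortedMap;

// Diagnostic sketch: print which jar ImmutableSortedMap was loaded from
// and whether that version implements NavigableMap at runtime.
public class GuavaCheck {
    public static void main(String[] args) {
        System.out.println("loaded from: " + ImmutableSortedMap.class
                .getProtectionDomain().getCodeSource().getLocation());
        System.out.println("implements NavigableMap: "
                + NavigableMap.class.isAssignableFrom(ImmutableSortedMap.class));
    }
}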

DocuSign API java.lang.NoSuchMethodError for createEnvelope

I'm trying to integrate the example code from the DocuSign Java SDK 2.1 into my application.
The authentication process works, but when calling the following code:
EnvelopesApi envelopesApi = new EnvelopesApi(apiClient);
envelopesApi.createEnvelope("1111111", envelopeDefinition);
An exception is received:
Caused by: java.lang.NoSuchMethodError: com.fasterxml.jackson.databind.ObjectWriter.getConfig()Lcom/fasterxml/jackson/databind/SerializationConfig;
at com.fasterxml.jackson.jaxrs.json.JsonEndpointConfig.forWriting(JsonEndpointConfig.java:45)
at com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider._configForWriting(JacksonJsonProvider.java:223)
at com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider._configForWriting(JacksonJsonProvider.java:45)
at com.fasterxml.jackson.jaxrs.base.ProviderBase._configForWriting(ProviderBase.java:481)
at com.fasterxml.jackson.jaxrs.base.ProviderBase._endpointForWriting(ProviderBase.java:694)
at com.fasterxml.jackson.jaxrs.base.ProviderBase.writeTo(ProviderBase.java:565)
at com.sun.jersey.api.client.RequestWriter.writeRequestEntity(RequestWriter.java:300)
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:217)
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:153)
at com.sun.jersey.api.client.Client.handle(Client.java:652)
at com.sun.jersey.api.client.WebResource.handle(WebResource.java:682)
at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
at com.sun.jersey.api.client.WebResource$Builder.post(WebResource.java:570)
at com.docusign.esign.client.ApiClient.getAPIResponse(ApiClient.java:1125)
at com.docusign.esign.client.ApiClient.invokeAPI(ApiClient.java:1158)
at com.docusign.esign.api.EnvelopesApi.createEnvelope(EnvelopesApi.java:764)
at com.docusign.esign.api.EnvelopesApi.createEnvelope(EnvelopesApi.java:714)
at com.hlf.plateforme.webservice.DocusignWrapper.sendEnvelope(DocusignWrapper.java:160)
at com.hlf.plateforme.web.action.demande.EcontratAction.esignature(EcontratAction.java:246)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.struts.actions.DispatchAction.dispatchMethod(DispatchAction.java:270)
... 36 more
Can someone assist?
I'm answering my own question in case it helps someone :)
It turned out to be a version problem with one of the dependencies:
com.fasterxml.jackson.core
I updated it to the most recent version and that solved the problem.
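If someone hits the same NoSuchMethodError, a quick diagnostic sketch: the error means the jackson-jaxrs provider was compiled against a newer jackson-databind than the one your classpath resolves, so printing the effective version and jar location shows which stale dependency to bump.

import com.fasterxml.jackson.databind.ObjectWriter;
import com.fasterxml.jackson.databind.cfg.PackageVersion;

// Sketch: report which jackson-databind version and jar actually won on
// the classpath. If it is older than what jackson-jaxrs expects,
// ObjectWriter.getConfig() will be missing, which is exactly the
// NoSuchMethodError above.
public class JacksonVersionCheck {
    public static void main(String[] args) {
        System.out.println("jackson-databind version: " + PackageVersion.VERSION);
        System.out.println("loaded from: " + ObjectWriter.class
                .getProtectionDomain().getCodeSource().getLocation());
    }
}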

How to resolve "Caused by: org.hibernate.HibernateException: Unable to access lob stream"

I have recently upgraded my application.
Upgrade details:
- Java 6 to Java 8
- Hibernate 3 to Hibernate 3.6.10
- Spring 2.5 to Spring 4
- JBoss EAP 6 to JBoss EAP 7
I am trying to save some values, which include some text values and files (Clob). The data is getting saved, but I get an exception on the next Hibernate operation.
Caused by: org.hibernate.HibernateException: Unable to access lob stream
at org.hibernate.type.descriptor.java.ClobTypeDescriptor.unwrap(ClobTypeDescriptor.java:117)
at org.hibernate.type.descriptor.java.ClobTypeDescriptor.unwrap(ClobTypeDescriptor.java:46)
at org.hibernate.type.descriptor.sql.ClobTypeDescriptor$3$1.doBind(ClobTypeDescriptor.java:83)
at org.hibernate.type.descriptor.sql.BasicBinder.bind(BasicBinder.java:91)
at org.hibernate.type.AbstractStandardBasicType.nullSafeSet(AbstractStandardBasicType.java:283)
at org.hibernate.type.AbstractStandardBasicType.nullSafeSet(AbstractStandardBasicType.java:278)
at org.hibernate.type.AbstractSingleColumnStandardBasicType.nullSafeSet(AbstractSingleColumnStandardBasicType.java:89)
at org.hibernate.persister.entity.AbstractEntityPersister.dehydrate(AbstractEntityPersister.java:2184)
at org.hibernate.persister.entity.AbstractEntityPersister.update(AbstractEntityPersister.java:2559)
at org.hibernate.persister.entity.AbstractEntityPersister.updateOrInsert(AbstractEntityPersister.java:2495)
at org.hibernate.persister.entity.AbstractEntityPersister.update(AbstractEntityPersister.java:2822)
at org.hibernate.action.EntityUpdateAction.execute(EntityUpdateAction.java:113)
at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:273)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:265)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:185)
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:51)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1216)
at com.honeywell.cdd.dao.RptGrpDAOImpl.removeUnConfigGrpMTDT(RptGrpDAOImpl.java:2373)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:190)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
at org.springframework.transaction.interceptor.TransactionInterceptor$1.proceedWithInvocation(TransactionInterceptor.java:98)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:262)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:95)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.orm.hibernate3.HibernateInterceptor.invoke(HibernateInterceptor.java:112)
... 68 more
Caused by: java.sql.SQLException: could not reset reader
at org.hibernate.engine.jdbc.ClobProxy.resetIfNeeded(ClobProxy.java:178)
at org.hibernate.engine.jdbc.ClobProxy.getCharacterStream(ClobProxy.java:89)
at org.hibernate.engine.jdbc.ClobProxy.invoke(ClobProxy.java:121)
at com.sun.proxy.$Proxy133.getCharacterStream(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.hibernate.engine.jdbc.SerializableClobProxy.invoke(SerializableClobProxy.java:74)
at com.sun.proxy.$Proxy132.getCharacterStream(Unknown Source)
at org.hibernate.type.descriptor.java.ClobTypeDescriptor.unwrap(ClobTypeDescriptor.java:114)
... 98 more
I am not able to find out the exact cause of this exception.
> The Same Code is working fine in the older configuration
"The Same Code is working fine in the older configuration"
is not really relevant ... unless you want to submit a but report to someone.
Lets look at what the nested exception:
Caused by: java.sql.SQLException: could not reset reader
The stack trace indicates that something is calling getCharacterStream() on a Clob proxy, and that this fails when the proxy attempts to reset its reader. The failure is (presumably) occurring because the JDBC object wrapped by the proxy doesn't allow its reader to be reset.
So why is this happening?
Again ... presumably ... something in your application has tried to call getCharacterStream() twice on a Clob proxy.
My guess is that you are (implicitly) using Clob objects as if they were in-memory files ... to avoid reading them when you don't need to. Except that sometimes you do need to, and this is interfering with the persisting.
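If that diagnosis is right, one workaround (a sketch, assuming you control where the Clob is created) is to buffer the content into a String and build the Clob from that: a String-backed Clob can hand out a fresh reader on every getCharacterStream() call, whereas an arbitrary Reader cannot be rewound, which is what "could not reset reader" points at.

import java.io.Reader;
import java.sql.Clob;
import org.hibernate.Session;

// Sketch: create the Clob from an in-memory String instead of a live
// Reader, so repeated getCharacterStream() calls (e.g. during a later
// flush) don't need to reset a one-shot stream.
public class ReReadableClobs {
    public static Clob fromReader(Session session, Reader source) throws Exception {
        StringBuilder sb = new StringBuilder();
        char[] buf = new char[4096];
        int n;
        while ((n = source.read(buf)) != -1) {
            sb.append(buf, 0, n);
        }
        return session.getLobHelper().createClob(sb.toString());
    }
}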

java.lang.IllegalArgumentException: There is no queue named default

I'm trying to load data into Pig and dump the same data to the console. I did this without any errors in the Cloudera sandbox, using the following commands:
raw_data = LOAD 'hdfs:/user/cloudera/sampledata' USING PigStorage(',') AS (
custno:chararray,
firstname:chararray,
lastname:chararray,
age:int,
profession:chararray
);
dump raw_data;
It dumps all the data in the sampledata file.
I am trying to do the same on a MapR cluster with the following commands:
raw_data = LOAD '/hdfspath/input' USING PigStorage(',') AS (
custno:chararray,
firstname:chararray,
lastname:chararray,
age:int,
profession:chararray
);
dump raw_data;
I am getting the following error:
(RemoteException): org.apache.hadoop.ipc.RemoteException: java.io.IOException: java.lang.IllegalArgumentException: There is no queue named default
ERROR org.apache.hadoop.ipc.RPC - FailoverProxy: Failing this Call: getQueueAdmins for error(RemoteException): org.apache.hadoop.ipc.RemoteException: java.io.IOException: java.lang.IllegalArgumentException: There is no queue named default
at org.apache.hadoop.mapred.QueueManager.getQueueACL(QueueManager.java:413)
at org.apache.hadoop.mapred.JobTracker.getQueueAdmins(JobTracker.java:5346)
at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:993)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1326)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1322)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1320)
ERROR 2997: Unable to recreate exception from backend error: org.apache.hadoop.ipc.RemoteException: java.io.IOException: java.lang.IllegalArgumentException: There is no queue named default
at org.apache.hadoop.mapred.QueueManager.getQueueACL(QueueManager.java:413)
at org.apache.hadoop.mapred.JobTracker.getQueueAdmins(JobTracker.java:5346)
at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:993)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1326)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1322)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1320)
at org.apache.hadoop.ipc.Client.call(Client.java:1095)
at org.apache.hadoop.ipc.Client.call(Client.java:1041)
at org.apache.hadoop.ipc.RPC$FailoverInvoker.invoke(RPC.java:540)
at org.apache.hadoop.mapred.$Proxy0.getQueueAdmins(Unknown Source)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:939)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:885)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:885)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:859)
at org.apache.hadoop.mapred.jobcontrol.Job.submit(Job.java:378)
at org.apache.hadoop.mapred.jobcontrol.JobControl.startReadyJobs(JobControl.java:247)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.pig.backend.hadoop20.PigJobControl.mainLoopAction(PigJobControl.java:157)
at org.apache.pig.backend.hadoop20.PigJobControl.run(PigJobControl.java:134)
at java.lang.Thread.run(Thread.java:724)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:257)
Any help, please.
Thanks in advance.
Typically this happens if your scheduler has specific queues created without users assigned, and the user submitting the job doesn't specify a queue name.
If the job falls back to the default queue and the user has no permission to use it, you can end up with this error. You can avoid the issue with
export PIG_OPTS="$PIG_OPTS -Dmapred.job.queue.name=my-queue"
or
pig -Dmapreduce.job.queuename=$queue_name -f path/to/script.pig
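The same property can also be set from Java when submitting a MapReduce job directly, which is sometimes easier to verify than environment variables. A sketch follows; mapreduce.job.queuename is the Hadoop 2.x property name, while older clusters use mapred.job.queue.name as in the export above, and the queue name is illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

// Sketch: pin the job to an explicit scheduler queue so it never falls
// back to a "default" queue that may not exist or may deny the user.
public class QueuePinnedJob {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("mapreduce.job.queuename", "my-queue"); // hypothetical queue name
        Job job = Job.getInstance(conf, "queue-pinned-example");
        // ... configure mapper/reducer/input/output as usual, then:
        // System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}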

Apache Crunch Error

I have a command that reads *.avro files and loads them into a table:
loadavro *.avro tablename
When I try to run the code in a Vagrant cluster, it throws the following error:
[root@cdh4-cluster vagrant]# hadoop jar devcenter-store-load-0.2-SNAPSHOT.jar loadavro person.avro person_table
Unexpected exception: Cannot create job output directory /tmp/crunch-1880716375
java.lang.RuntimeException: Cannot create job output directory /tmp/crunch-1880716375
at org.apache.crunch.impl.dist.DistributedPipeline.createTempDirectory(DistributedPipeline.java:281)
at org.apache.crunch.impl.dist.DistributedPipeline.<init>(DistributedPipeline.java:88)
at org.apache.crunch.impl.mr.MRPipeline.<init>(MRPipeline.java:88)
at org.apache.crunch.impl.mr.MRPipeline.<init>(MRPipeline.java:76)
at com.cerner.devcenter.pipeline.crunch.PersonRecordPipeline.getPipelineInstance(PersonRecordPipeline.java:72)
at com.cerner.devcenter.pipeline.crunch.PersonRecordPipeline.writeIntoHBase(PersonRecordPipeline.java:53)
at com.cerner.console.commands.LoadAvroCommand.run(LoadAvroCommand.java:35)
at com.cerner.kepler.commands.Command.run(Command.java:112)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at com.cerner.kepler.commands.BasicCommandRunner.run(BasicCommandRunner.java:61)
at com.cerner.kepler.commands.CommandRunner.run(CommandRunner.java:19)
at com.cerner.console.main.ConsoleMain.main(ConsoleMain.java:26)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/crunch-1880716375. Name node is in safe mode.
The reported blocks 104 needs additional 2 blocks to reach the threshold 0.9990 of total blocks 106. Safe mode will be turned off automatically.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:2982)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:2960)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2938)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:648)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:417)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44096)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
at org.apache.hadoop.ipc.Client.call(Client.java:1225)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
at $Proxy10.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:425)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
at $Proxy11.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2121)
at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2092)
at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:546)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1902)
at org.apache.crunch.impl.dist.DistributedPipeline.createTempDirectory(DistributedPipeline.java:279)
... 16 more
If you look into the exception you can see that the NameNode is in safe mode, so the Crunch job can't write any new files into HDFS. Take the NameNode out of safe mode and run the job again:
hadoop dfsadmin -safemode leave
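For completeness, safe-mode status can also be checked from code before the pipeline starts. This is a sketch against the Hadoop 2.x HDFS API (class and constant names may vary by version); the dfsadmin command above does the same thing.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;

// Sketch: query (not change) safe mode via SAFEMODE_GET and bail out
// early with a clear message instead of failing inside Crunch's
// createTempDirectory call.
public class SafeModeGuard {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        if (fs instanceof DistributedFileSystem) {
            boolean safe = ((DistributedFileSystem) fs)
                    .setSafeMode(HdfsConstants.SafeModeAction.SAFEMODE_GET);
            if (safe) {
                System.err.println("NameNode is in safe mode; try again later "
                        + "or run: hadoop dfsadmin -safemode leave");
            }
        }
    }
}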
