I'm trying to load data into Pig and dump the same data to the console. This worked without any errors in the Cloudera sandbox using the following commands:
raw_data = LOAD 'hdfs:/user/cloudera/sampledata' USING PigStorage(',') AS (
custno:chararray,
firstname:chararray,
lastname:chararray,
age:int,
profession:chararray
);
dump raw_data;
It dumps all the data in the sampledata file.
I'm trying to do the same on a MapR cluster with the following commands:
raw_data = LOAD '/hdfspath/input' USING PigStorage(',') AS (
custno:chararray,
firstname:chararray,
lastname:chararray,
age:int,
profession:chararray
);
dump raw_data;
I get the following error:
ERROR org.apache.hadoop.ipc.RPC - FailoverProxy: Failing this Call: getQueueAdmins for error(RemoteException): org.apache.hadoop.ipc.RemoteException: java.io.IOException: java.lang.IllegalArgumentException: There is no queue named default
at org.apache.hadoop.mapred.QueueManager.getQueueACL(QueueManager.java:413)
at org.apache.hadoop.mapred.JobTracker.getQueueAdmins(JobTracker.java:5346)
at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:993)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1326)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1322)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1320)
ERROR 2997: Unable to recreate exception from backend error: org.apache.hadoop.ipc.RemoteException: java.io.IOException: java.lang.IllegalArgumentException: There is no queue named default
at org.apache.hadoop.mapred.QueueManager.getQueueACL(QueueManager.java:413)
at org.apache.hadoop.mapred.JobTracker.getQueueAdmins(JobTracker.java:5346)
at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:993)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1326)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1322)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1320)
at org.apache.hadoop.ipc.Client.call(Client.java:1095)
at org.apache.hadoop.ipc.Client.call(Client.java:1041)
at org.apache.hadoop.ipc.RPC$FailoverInvoker.invoke(RPC.java:540)
at org.apache.hadoop.mapred.$Proxy0.getQueueAdmins(Unknown Source)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:939)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:885)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:885)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:859)
at org.apache.hadoop.mapred.jobcontrol.Job.submit(Job.java:378)
at org.apache.hadoop.mapred.jobcontrol.JobControl.startReadyJobs(JobControl.java:247)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.pig.backend.hadoop20.PigJobControl.mainLoopAction(PigJobControl.java:157)
at org.apache.pig.backend.hadoop20.PigJobControl.run(PigJobControl.java:134)
at java.lang.Thread.run(Thread.java:724)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:257)
Any help, please. Thanks in advance.
Typically this happens when your scheduler has specific queues defined without users assigned to them, and the user submitting the job doesn't specify a queue name. The job then falls back to the default queue; if that queue doesn't exist, or the user has no permission to use it, you end up with this error. You can avoid the issue with
export PIG_OPTS="$PIG_OPTS -Dmapred.job.queue.name=my-queue"
or
pig -Dmapreduce.job.queuename=$queue_name -f path/to/script.pig
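(mapred.job.queue.name is the classic MR1 property name; mapreduce.job.queuename is its YARN/MR2 equivalent.) If you also submit plain MapReduce jobs, the same property can be set programmatically on the job configuration. A minimal sketch, assuming an MR1-style cluster; "my-queue" is a placeholder for a queue your scheduler actually defines:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class QueuedJobExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Classic MR1 property name; on YARN/MR2 the equivalent is
        // "mapreduce.job.queuename". "my-queue" is a placeholder.
        conf.set("mapred.job.queue.name", "my-queue");
        Job job = new Job(conf, "queued-job-example");
        // ... set mapper, reducer, input/output formats and paths as usual ...
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}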
Related
I have Hadoop 3.1.2 with Hive 3.1.2 on Windows 10. The problem is that whenever I run a SELECT or other commands against tables, I get the errors below and the CLI stops running. Any help is appreciated.
Exception in thread "main" java.lang.IncompatibleClassChangeError: Class com.google.common.collect.ImmutableSortedMap does not implement the requested interface java.util.NavigableMap
at org.apache.calcite.schema.Schemas.gatherLattices(Schemas.java:498)
at org.apache.calcite.schema.Schemas.getLatticeEntries(Schemas.java:492)
at org.apache.calcite.jdbc.CalciteConnectionImpl.init(CalciteConnectionImpl.java:153)
at org.apache.calcite.jdbc.Driver$1.onConnectionInit(Driver.java:109)
at org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:139)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:150)
at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:111)
at org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1414)
at org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedAST(CalcitePlanner.java:1430)
at org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:450)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12164)
at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:330)
at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:285)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:659)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1826)
at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1773)
at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1768)
at org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126)
at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:214)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
I am developing a project in Hadoop using Java. When I run my code (jar) on a local cluster it works fine, but when I run it on a multi-node Amazon cluster it throws an exception.
My code for the MapReduce job:
// Driver, mapper, and reducer classes for the job
job.setJarByClass(ReadActivityDriver.class);
job.setMapperClass(ReadActivityLogMapper.class);
job.setReducerClass(ReadActivityLogReducer.class);
// Map output and final output are both Text/Text
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(Text.class);
job.setInputFormatClass(ColumnFamilyInputFormat.class);
job.setOutputFormatClass(TextOutputFormat.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);
// Cassandra input settings: Thrift port, initial node, partitioner, column family
ConfigHelper.setInputRpcPort(job.getConfiguration(), pro.getProperty("port"));
ConfigHelper.setInputInitialAddress(job.getConfiguration(), pro.getProperty("server"));
ConfigHelper.setInputPartitioner(job.getConfiguration(), "org.apache.cassandra.dht.Murmur3Partitioner");
ConfigHelper.setInputColumnFamily(job.getConfiguration(), keyspace, columnFamily);
// Restrict the input to the requested columns
SlicePredicate predicate = new SlicePredicate().setColumn_names(cn);
ConfigHelper.setInputSlicePredicate(job.getConfiguration(), predicate);
// Clear any previous output, then run the job
FileSystem.get(job.getConfiguration()).delete(new Path("ReadOutput"), true);
FileOutputFormat.setOutputPath(job, new Path("ReadOutput"));
job.waitForCompletion(true);
The exception I am getting:
8020/home/ubuntu/hdfstmp/mapred/staging/ubuntu/.staging/job_201405080944_0010
java.lang.RuntimeException: org.apache.thrift.TApplicationException: Invalid method name: 'describe_local_ring'
at org.apache.cassandra.hadoop.AbstractColumnFamilyInputFormat.getRangeMap(AbstractColumnFamilyInputFormat.java:337)
at org.apache.cassandra.hadoop.AbstractColumnFamilyInputFormat.getSplits(AbstractColumnFamilyInputFormat.java:125)
at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1054)
at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1071)
at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
at com.cassandra.readActivity.ReadActivityDriver.run(ReadActivityDriver.java:117)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at com.cassandra.readActivity.ReadActivityDriver.main(ReadActivityDriver.java:33)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
Caused by: org.apache.thrift.TApplicationException: Invalid method name: 'describe_local_ring'
at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
at org.apache.cassandra.thrift.Cassandra$Client.recv_describe_local_ring(Cassandra.java:1277)
at org.apache.cassandra.thrift.Cassandra$Client.describe_local_ring(Cassandra.java:1264)
at org.apache.cassandra.hadoop.AbstractColumnFamilyInputFormat.getRangeMap(AbstractColumnFamilyInputFormat.java:329)
... 20 more
java.io.FileNotFoundException: File does not exist: /user/ubuntu/ReadOutput/part-r-00000;
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchLocatedBlocks(DFSClient.java:2006)
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1975)
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.<init>(DFSClient.java:1967)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:735)
at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:165)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:436)
at com.cassandra.readActivity.ReadActivityMySql.calculatePoint(ReadActivityMySql.java:65)
at com.cassandra.readActivity.ReadActivityDriver.main(ReadActivityDriver.java:36)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
java.io.FileNotFoundException: File does not exist: /user/ubuntu/ReadOutput/part-r-00000;
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchLocatedBlocks(DFSClient.java:2006)
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1975)
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.<init>(DFSClient.java:1967)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:735)
at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:165)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:436)
at com.cassandra.readActivity.MySqlSavePoint.setSavePoint(MySqlSavePoint.java:66)
at com.cassandra.readActivity.ReadActivityDriver.main(ReadActivityDriver.java:37)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
It looks like the input/output format jars and your cluster are not using the same Cassandra version. You need to either fix the jars or upgrade the AWS Cassandra nodes.
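To confirm the mismatch before swapping jars, one option is to ask a Cassandra node for its Thrift API version and compare it with the version your client jars were generated from. A minimal sketch; the host name and port are placeholders, and it assumes the classic Thrift client from the same cassandra-thrift jar your job uses:
import org.apache.cassandra.thrift.Cassandra;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;

public class ThriftVersionCheck {
    public static void main(String[] args) throws Exception {
        // "cassandra-host" and 9160 (the default Thrift port) are placeholders.
        TFramedTransport transport = new TFramedTransport(new TSocket("cassandra-host", 9160));
        transport.open();
        Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));
        // Even a server too old to know describe_local_ring will answer this,
        // so you can see which API generation the cluster speaks.
        System.out.println("Server Thrift API version: " + client.describe_version());
        transport.close();
    }
}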
I think the problem is your Cassandra partitioner; try the random partitioner:
ConfigHelper.setInputPartitioner(job.getConfiguration(), "org.apache.cassandra.dht.RandomPartitioner");
Finally I got the answer: use the random partitioner
ConfigHelper.setInputPartitioner(job.getConfiguration(), "org.apache.cassandra.dht.RandomPartitioner");
instead of Murmur3:
ConfigHelper.setInputPartitioner(job.getConfiguration(), "org.apache.cassandra.dht.Murmur3Partitioner");
I have a command that reads *.avro files and loads them into a table:
loadavro *.avro tablename
When I try to run the code in the Vagrant cluster, it throws the following error:
[root@cdh4-cluster vagrant]# hadoop jar devcenter-store-load-0.2-SNAPSHOT.jar loadavro person.avro person_table
Unexpected exception: Cannot create job output directory /tmp/crunch-1880716375
java.lang.RuntimeException: Cannot create job output directory /tmp/crunch-1880716375
at org.apache.crunch.impl.dist.DistributedPipeline.createTempDirectory(DistributedPipeline.java:281)
at org.apache.crunch.impl.dist.DistributedPipeline.<init>(DistributedPipeline.java:88)
at org.apache.crunch.impl.mr.MRPipeline.<init>(MRPipeline.java:88)
at org.apache.crunch.impl.mr.MRPipeline.<init>(MRPipeline.java:76)
at com.cerner.devcenter.pipeline.crunch.PersonRecordPipeline.getPipelineInstance(PersonRecordPipeline.java:72)
at com.cerner.devcenter.pipeline.crunch.PersonRecordPipeline.writeIntoHBase(PersonRecordPipeline.java:53)
at com.cerner.console.commands.LoadAvroCommand.run(LoadAvroCommand.java:35)
at com.cerner.kepler.commands.Command.run(Command.java:112)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at com.cerner.kepler.commands.BasicCommandRunner.run(BasicCommandRunner.java:61)
at com.cerner.kepler.commands.CommandRunner.run(CommandRunner.java:19)
at com.cerner.console.main.ConsoleMain.main(ConsoleMain.java:26)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/crunch-1880716375. Name node is in safe mode.
The reported blocks 104 needs additional 2 blocks to reach the threshold 0.9990 of total blocks 106. Safe mode will be turned off automatically.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:2982)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:2960)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2938)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:648)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:417)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44096)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
at org.apache.hadoop.ipc.Client.call(Client.java:1225)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
at $Proxy10.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:425)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
at $Proxy11.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2121)
at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2092)
at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:546)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1902)
at org.apache.crunch.impl.dist.DistributedPipeline.createTempDirectory(DistributedPipeline.java:279)
... 16 more
If you look at the exception you can see that the NameNode is in safe mode, so the Crunch job can't write any new files into HDFS. Take the NameNode out of safe mode and run the job again:
hadoop dfsadmin -safemode leave
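If the NameNode is still catching up on block reports after a restart, it will leave safe mode on its own, and a client can poll for that programmatically instead of forcing it out. A minimal sketch against the Hadoop 2.x HDFS client API (the class and enum locations are version-specific, so treat them as an assumption to adapt):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction;

public class SafeModeWait {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        if (fs instanceof DistributedFileSystem) {
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            // SAFEMODE_GET only queries the current state; it changes nothing.
            while (dfs.setSafeMode(SafeModeAction.SAFEMODE_GET)) {
                System.out.println("NameNode still in safe mode, waiting...");
                Thread.sleep(5000);
            }
        }
        System.out.println("Safe mode is off; jobs can write to HDFS again.");
    }
}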
When running my GWT/Errai app I get this error message:
00:00:00.000 [ERROR] Unable to load module entry point class org.jboss.errai.ioc.client.Container (see associated exception for details)
java.lang.RuntimeException: critical error in IOC container bootstrap
at org.jboss.errai.ioc.client.Container.bootstrapContainer(Container.java:69)
at org.jboss.errai.ioc.client.Container.onModuleLoad(Container.java:34)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at com.google.gwt.dev.shell.ModuleSpace.onLoad(ModuleSpace.java:396)
at com.google.gwt.dev.shell.OophmSessionHandler.loadModule(OophmSessionHandler.java:200)
at com.google.gwt.dev.shell.BrowserChannelServer.processConnection(BrowserChannelServer.java:525)
at com.google.gwt.dev.shell.BrowserChannelServer.run(BrowserChannelServer.java:363)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.AssertionError: This UIObject's element is not set; you may be missing a call to either Composite.initWidget() or UIObject.setElement()
at com.google.gwt.user.client.ui.UIObject.getElement(UIObject.java:527)
at org.jboss.errai.ui.shared.TemplateUtil.compositeComponentReplace(TemplateUtil.java:61)
at org.jboss.errai.ioc.client.BootstrapperImpl$65$1.init(BootstrapperImpl.java:1623)
at org.jboss.errai.ioc.client.BootstrapperImpl$65$1.init(BootstrapperImpl.java:1)
at org.jboss.errai.ioc.client.container.CreationalContext.resolveAllProxies(CreationalContext.java:351)
at org.jboss.errai.ioc.client.container.CreationalContext.finish(CreationalContext.java:312)
at org.jboss.errai.ioc.client.Container.bootstrapContainer(Container.java:59)
at org.jboss.errai.ioc.client.Container.onModuleLoad(Container.java:34)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at com.google.gwt.dev.shell.ModuleSpace.onLoad(ModuleSpace.java:396)
at com.google.gwt.dev.shell.OophmSessionHandler.loadModule(OophmSessionHandler.java:200)
at com.google.gwt.dev.shell.BrowserChannelServer.processConnection(BrowserChannelServer.java:525)
at com.google.gwt.dev.shell.BrowserChannelServer.run(BrowserChannelServer.java:363)
at java.lang.Thread.run(Thread.java:722)
I've already run mvn clean compile package gwt:run.
It probably means that you are using a Composite on which you did not call initWidget(Widget).
Extract of the Javadoc for Composite:
A type of widget that can **wrap** another widget, hiding the wrapped widget's methods.
If you don't call initWidget(), there is no wrapped widget, which leads to this error message.
It can also happen if you extend UIObject without calling setElement(), but that is a rarer use case.
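For illustration, a minimal sketch of a correctly initialized Composite (the widget and class names here are made up):
import com.google.gwt.user.client.ui.Composite;
import com.google.gwt.user.client.ui.FlowPanel;
import com.google.gwt.user.client.ui.Label;

public class GreetingWidget extends Composite {
    public GreetingWidget() {
        FlowPanel panel = new FlowPanel();
        panel.add(new Label("Hello"));
        // Without this call the Composite wraps no widget, and getElement()
        // fails with exactly the AssertionError shown above.
        initWidget(panel);
    }
}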
I am using James 2.3.2 as my mail server with a MySQL 5.5 backend.
I get the following exception:
23/06/11 16:39:49 DEBUG mailstore: Exception reading attributes Mail1308818378708-0-to-163.com in spool
java.io.InvalidClassException: [Ljava.lang.StackTraceElement;; enum descriptor has non-zero serialVersionUID: 163864874655243298
at java.io.ObjectStreamClass.readNonProxy(ObjectStreamClass.java:620)
at java.io.ObjectInputStream.readClassDescriptor(ObjectInputStream.java:789)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1534)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1466)
at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1591)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1299)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1910)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1834)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1719)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1305)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:348)
at java.util.HashMap.readObject(HashMap.java:1067)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:592)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:946)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1812)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1719)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1305)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:348)
at org.apache.james.mailrepository.JDBCMailRepository.retrieve(JDBCMailRepository.java:846)
at org.apache.james.mailrepository.JDBCSpoolRepository.accept(JDBCSpoolRepository.java:203)
at org.apache.james.mailrepository.JDBCSpoolRepository.accept(JDBCSpoolRepository.java:126)
at org.apache.james.mailrepository.MailStoreSpoolRepository.accept(MailStoreSpoolRepository.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:592)
at org.apache.avalon.phoenix.components.application.BlockInvocationHandler.invoke(BlockInvocationHandler.java:134)
at $Proxy5.accept(Unknown Source)
at org.apache.james.transport.JamesSpoolManager.run(JamesSpoolManager.java:299)
at java.lang.Thread.run(Thread.java:595)
How do I solve this issue?
Thanks!
I have spent three hours testing with different databases, such as MySQL 5.0 and Oracle. It works well on Oracle but not on MySQL. It seems to be an issue with MySQL, not James.