java.nio.channels.UnresolvedAddressException when running druid sample app

I am trying out druid.io with ZooKeeper 3.4.6 on Fedora 20 x64. I am following the tutorial [here](http://druid.io/docs/latest/Tutorial:-A-First-Look-at-Druid.html).
After considerable effort I am able to run ZooKeeper. Now when I run the server of the sample Druid app, it gives me the error below. Notice that it says the inventory initialized:
2015-06-21T17:14:03,472 INFO [ServerInventoryView-0] io.druid.client.BatchServerInventoryView - Inventory Initialized
2015-06-21T17:14:03,472 ERROR [main] io.druid.cli.CliBroker - Error when starting up. Failing.
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_45]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_45]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_45]
at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_45]
at com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler.start(Lifecycle.java:331) ~[java-util-0.27.0.jar:?]
at com.metamx.common.lifecycle.Lifecycle.start(Lifecycle.java:250) ~[java-util-0.27.0.jar:?]
at io.druid.guice.LifecycleModule$2.start(LifecycleModule.java:136) ~[druid-api-0.3.8.jar:0.7.3]
at io.druid.cli.GuiceRunnable.initLifecycle(GuiceRunnable.java:71) [druid-services-0.7.3.jar:0.7.3]
at io.druid.cli.ServerRunnable.run(ServerRunnable.java:38) [druid-services-0.7.3.jar:0.7.3]
at io.druid.cli.Main.main(Main.java:88) [druid-services-0.7.3.jar:0.7.3]
Caused by: java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:127) ~[?:1.7.0_45]
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:640) ~[?:1.7.0_45]
at com.ircclouds.irc.api.comms.SocketChannelConnection.open(SocketChannelConnection.java:24) ~[irc-api-1.0-0011.jar:?]
at com.ircclouds.irc.api.AbstractIRCSession.open(AbstractIRCSession.java:104) ~[irc-api-1.0-0011.jar:?]
at com.ircclouds.irc.api.IRCApiImpl.connect(IRCApiImpl.java:99) ~[irc-api-1.0-0011.jar:?]
at io.druid.segment.realtime.firehose.IrcFirehoseFactory.connect(IrcFirehoseFactory.java:116) ~[druid-server-0.7.3.jar:0.7.3]
at io.druid.segment.realtime.firehose.IrcFirehoseFactory.connect(IrcFirehoseFactory.java:59) ~[druid-server-0.7.3.jar:0.7.3]
at io.druid.segment.realtime.FireDepartment.connect(FireDepartment.java:97) ~[druid-server-0.7.3.jar:0.7.3]
at io.druid.segment.realtime.RealtimeManager$FireChief.init(RealtimeManager.java:207) ~[druid-server-0.7.3.jar:0.7.3]
at io.druid.segment.realtime.RealtimeManager.start(RealtimeManager.java:109) ~[druid-server-0.7.3.jar:0.7.3]
... 10 more
So which address is it failing to resolve? I am new to both Druid and ZooKeeper, so I must be missing some architectural understanding.

This problem can occur with the Wikipedia firehose if you are behind a corporate firewall. If you try it at home, do you see the same problem? FWIW, you will get a faster response by posting questions here: https://groups.google.com/forum/#!forum/druid-user
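One way to confirm: the stack trace shows the IRC firehose failing in IrcFirehoseFactory.connect, so you can test whether the IRC host resolves from your machine at all. A minimal sketch, assuming the tutorial's Wikipedia IRC feed at irc.wikimedia.org on port 6667 (substitute whatever host your realtime spec actually points at):

import java.net.InetSocketAddress;

public class ResolveCheck {
    public static void main(String[] args) {
        // Host and port are assumptions; pass your own host as the first argument.
        String host = args.length > 0 ? args[0] : "irc.wikimedia.org";
        InetSocketAddress addr = new InetSocketAddress(host, 6667);
        if (addr.isUnresolved()) {
            System.out.println(host + " did not resolve: DNS for it is blocked or the name is wrong");
        } else {
            System.out.println(host + " resolved to " + addr.getAddress().getHostAddress());
        }
    }
}

If the host reports as unresolved, java.nio throws exactly this UnresolvedAddressException when the firehose tries to connect.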

Related

Hive select problems: java.lang.IncompatibleClassChangeError

I have Hadoop 3.1.2 with Hive 3.1.2 on Windows 10. The problem is that whenever I run a SELECT (or other operations on tables) I get the errors below and the CLI stops running. Any help is appreciated.
Exception in thread "main" java.lang.IncompatibleClassChangeError: Class com.google.common.collect.ImmutableSortedMap does not implement the requested interface java.util.NavigableMap
at org.apache.calcite.schema.Schemas.gatherLattices(Schemas.java:498)
at org.apache.calcite.schema.Schemas.getLatticeEntries(Schemas.java:492)
at org.apache.calcite.jdbc.CalciteConnectionImpl.init(CalciteConnectionImpl.java:153)
at org.apache.calcite.jdbc.Driver$1.onConnectionInit(Driver.java:109)
at org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:139)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:150)
at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:111)
at org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1414)
at org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedAST(CalcitePlanner.java:1430)
at org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:450)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12164)
at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:330)
at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:285)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:659)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1826)
at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1773)
at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1768)
at org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126)
at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:214)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
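An IncompatibleClassChangeError like this usually means two incompatible versions of the same library are on the classpath: here an old Guava jar, in which ImmutableSortedMap did not yet implement java.util.NavigableMap, shadowing the newer one that Hive's Calcite planner expects. As a rough diagnostic sketch (not a fix), you can print which jar the class was actually loaded from:

// Diagnostic sketch: shows which guava-*.jar ImmutableSortedMap came from,
// so you can spot an old Guava on the Hadoop classpath shadowing Hive's copy.
public class GuavaCheck {
    public static void main(String[] args) {
        System.out.println(com.google.common.collect.ImmutableSortedMap.class
                .getProtectionDomain().getCodeSource().getLocation());
    }
}

If this prints a Guava jar from the Hadoop lib directory rather than Hive's, aligning the two installations on a single recent Guava is the usual remedy.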

DocuSign API: java.lang.NoSuchMethodError for createEnvelope

I'm trying to integrate the example code from the DocuSign Java SDK 2.1 into my application.
The authentication process works fine, but when calling the following code:
EnvelopesApi envelopesApi = new EnvelopesApi(apiClient);
envelopesApi.createEnvelope("1111111", envelopeDefinition);
An exception is received:
Caused by: java.lang.NoSuchMethodError: com.fasterxml.jackson.databind.ObjectWriter.getConfig()Lcom/fasterxml/jackson/databind/SerializationConfig;
at com.fasterxml.jackson.jaxrs.json.JsonEndpointConfig.forWriting(JsonEndpointConfig.java:45)
at com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider._configForWriting(JacksonJsonProvider.java:223)
at com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider._configForWriting(JacksonJsonProvider.java:45)
at com.fasterxml.jackson.jaxrs.base.ProviderBase._configForWriting(ProviderBase.java:481)
at com.fasterxml.jackson.jaxrs.base.ProviderBase._endpointForWriting(ProviderBase.java:694)
at com.fasterxml.jackson.jaxrs.base.ProviderBase.writeTo(ProviderBase.java:565)
at com.sun.jersey.api.client.RequestWriter.writeRequestEntity(RequestWriter.java:300)
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:217)
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:153)
at com.sun.jersey.api.client.Client.handle(Client.java:652)
at com.sun.jersey.api.client.WebResource.handle(WebResource.java:682)
at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
at com.sun.jersey.api.client.WebResource$Builder.post(WebResource.java:570)
at com.docusign.esign.client.ApiClient.getAPIResponse(ApiClient.java:1125)
at com.docusign.esign.client.ApiClient.invokeAPI(ApiClient.java:1158)
at com.docusign.esign.api.EnvelopesApi.createEnvelope(EnvelopesApi.java:764)
at com.docusign.esign.api.EnvelopesApi.createEnvelope(EnvelopesApi.java:714)
at com.hlf.plateforme.webservice.DocusignWrapper.sendEnvelope(DocusignWrapper.java:160)
at com.hlf.plateforme.web.action.demande.EcontratAction.esignature(EcontratAction.java:246)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.struts.actions.DispatchAction.dispatchMethod(DispatchAction.java:270)
... 36 more
Can someone assist?
I'm answering my own question in case it can help someone :)
It turned out to be a version problem with one of the dependencies:
com.fasterxml.jackson.core
I updated the version to the most recent one and it solved the problem.
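For anyone hitting the same error: ObjectWriter.getConfig() only exists in reasonably recent jackson-databind 2.x releases, so it is worth confirming which build actually gets loaded at runtime. A small sanity-check sketch:

// Sketch: print the jackson-databind version actually on the classpath and
// the jar it was loaded from; an old databind mixed with a newer
// jackson-jaxrs provider produces exactly this NoSuchMethodError.
public class JacksonCheck {
    public static void main(String[] args) {
        System.out.println(com.fasterxml.jackson.databind.cfg.PackageVersion.VERSION);
        System.out.println(com.fasterxml.jackson.databind.ObjectMapper.class
                .getProtectionDomain().getCodeSource().getLocation());
    }
}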

java.lang.IllegalArgumentException: There is no queue named default

I'm trying to load data into Pig and dump the same data to the console. I did this without any errors in the Cloudera sandbox using the following commands:
raw_data = LOAD 'hdfs:/user/cloudera/sampledata' USING PigStorage(',') AS (
custno:chararray,
firstname:chararray,
lastname:chararray,
age:int,
profession:chararray
);
dump raw_data;
It dumps all the data in the sampledata file.
I'm trying to do the same in a MapR cluster with the following commands:
raw_data = LOAD '/hdfspath/input' USING PigStorage(',') AS (
custno:chararray,
firstname:chararray,
lastname:chararray,
age:int,
profession:chararray
);
dump raw_data;
I'm getting the following error:
(RemoteException): org.apache.hadoop.ipc.RemoteException: java.io.IOException: java.lang.IllegalArgumentException: There is no queue named default
ERROR org.apache.hadoop.ipc.RPC - FailoverProxy: Failing this Call: getQueueAdmins for error(RemoteException): org.apache.hadoop.ipc.RemoteException: java.io.IOException: java.lang.IllegalArgumentException: There is no queue named default
at org.apache.hadoop.mapred.QueueManager.getQueueACL(QueueManager.java:413)
at org.apache.hadoop.mapred.JobTracker.getQueueAdmins(JobTracker.java:5346)
at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:993)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1326)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1322)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1320)
ERROR 2997: Unable to recreate exception from backend error: org.apache.hadoop.ipc.RemoteException: java.io.IOException: java.lang.IllegalArgumentException: There is no queue named default
at org.apache.hadoop.mapred.QueueManager.getQueueACL(QueueManager.java:413)
at org.apache.hadoop.mapred.JobTracker.getQueueAdmins(JobTracker.java:5346)
at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:993)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1326)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1322)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1320)
at org.apache.hadoop.ipc.Client.call(Client.java:1095)
at org.apache.hadoop.ipc.Client.call(Client.java:1041)
at org.apache.hadoop.ipc.RPC$FailoverInvoker.invoke(RPC.java:540)
at org.apache.hadoop.mapred.$Proxy0.getQueueAdmins(Unknown Source)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:939)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:885)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:885)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:859)
at org.apache.hadoop.mapred.jobcontrol.Job.submit(Job.java:378)
at org.apache.hadoop.mapred.jobcontrol.JobControl.startReadyJobs(JobControl.java:247)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.pig.backend.hadoop20.PigJobControl.mainLoopAction(PigJobControl.java:157)
at org.apache.pig.backend.hadoop20.PigJobControl.run(PigJobControl.java:134)
at java.lang.Thread.run(Thread.java:724)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:257)
Any help, please.
Thanks in advance.
Typically this happens when your scheduler has specific queues created without users assigned, and the user submitting the job doesn't specify a queue name.
The job then falls back to the default queue, and if the user has no permission to use it, you end up with this error. You can avoid the issue with
export PIG_OPTS="$PIG_OPTS -Dmapred.job.queue.name=my-queue"
or
pig -Dmapreduce.job.queuename=$queue_name -f path/to/script.pig
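If you are submitting the job from Java instead (for example through an embedded PigServer or a plain MapReduce client), the equivalent is to set the queue on the job configuration. A sketch, assuming the newer mapreduce property name and a placeholder queue called my-queue:

import org.apache.hadoop.conf.Configuration;

public class QueueConfig {
    public static void main(String[] args) {
        // "my-queue" is a placeholder: use a queue your user is allowed to submit to.
        Configuration conf = new Configuration();
        conf.set("mapreduce.job.queuename", "my-queue");
        System.out.println("submitting to queue: " + conf.get("mapreduce.job.queuename"));
    }
}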

java.sql.SQLNonTransientConnectionException: org.apache.thrift.transport.TTransportException: Frame size larger than max length (16384000)!

I have a Java project in IntelliJ with a Cassandra DB, and I am using Maven 3 and Java 7. The Cassandra version is 2.0.6. I have a table with nearly 100,000 rows. When I run the program I get this exception:
java.sql.SQLNonTransientConnectionException: org.apache.thrift.transport.TTransportException: Frame size (16858796) larger than max length (16384000)!
at org.apache.cassandra.cql.jdbc.CassandraStatement.doExecute(CassandraStatement.java:197)
at org.apache.cassandra.cql.jdbc.CassandraStatement.executeQuery(CassandraStatement.java:229)
at ir.categorization.methods.featureselection.DBFeatureSelection.getFeatures(DBFeatureSelection.java:102)
at ir.categorization.methods.test.Classifier.setFeatures(Classifier.java:67)
at ir.categorization.methods.test.Classifier.<init>(Classifier.java:50)
at ir.categorization.methods.test.ClassifierTest.main(ClassifierTest.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
Caused by: org.apache.thrift.transport.TTransportException: Frame size (16858796) larger than max length (16384000)!
at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:137)
at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:362)
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:284)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:191)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at org.apache.cassandra.thrift.Cassandra$Client.recv_execute_cql3_query(Cassandra.java:1562)
at org.apache.cassandra.thrift.Cassandra$Client.execute_cql3_query(Cassandra.java:1547)
at org.apache.cassandra.cql.jdbc.CassandraConnection.execute(CassandraConnection.java:468)
at org.apache.cassandra.cql.jdbc.CassandraConnection.execute(CassandraConnection.java:494)
at org.apache.cassandra.cql.jdbc.CassandraStatement.doExecute(CassandraStatement.java:164)
... 10 more
Exception in thread "main" java.lang.NullPointerException
at java.util.TimSort.sort(TimSort.java:173)
at java.util.Arrays.sort(Arrays.java:659)
at ir.categorization.methods.test.Classifier.setFeatures(Classifier.java:68)
at ir.categorization.methods.test.Classifier.<init>(Classifier.java:50)
at ir.categorization.methods.test.ClassifierTest.main(ClassifierTest.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
I was already using Cassandra 1.2.8 in Eclipse with Java 6, and everything was OK!
P.S.: I raised native_transport_max_frame_size_in_mb in cassandra.yaml from 256 to 512 and thrift_framed_transport_size_in_mb from 15 to 32, but that doesn't fix it.
Can anybody help?
Please use Thrift version 0.9.0.
Change the start_native_transport property in cassandra.yaml to
start_native_transport: true
then try again, and make sure you are using the right port for the operation.
Which API are you using to interact with Cassandra?
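Whichever transport you use, note that the frame in the error (16,858,796 bytes) is the size of a single response, so it is usually safer to keep individual result sets small than to keep raising server limits. A rough sketch with the same CQL JDBC driver the stack trace shows (the driver class name, keyspace, table, and page size here are assumptions):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PagedRead {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; adjust host, port, and keyspace.
        Class.forName("org.apache.cassandra.cql.jdbc.CassandraDriver");
        try (Connection conn = DriverManager.getConnection("jdbc:cassandra://localhost:9160/mykeyspace");
             Statement stmt = conn.createStatement();
             // LIMIT caps the response so no single Thrift frame exceeds the max.
             ResultSet rs = stmt.executeQuery("SELECT * FROM features LIMIT 10000")) {
            while (rs.next()) {
                // process each row here
            }
        }
    }
}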

How to solve this InvalidClassException when deserializing objects from an InputStream?

I am using James 2.3.2 as my mail server, and the backend is MySQL 5.5.
I got an exception like the following:
23/06/11 16:39:49 DEBUG mailstore: Exception reading attributes Mail1308818378708-0-to-163.com in spool
java.io.InvalidClassException: [Ljava.lang.StackTraceElement;; enum descriptor has non-zero serialVersionUID: 163864874655243298
at java.io.ObjectStreamClass.readNonProxy(ObjectStreamClass.java:620)
at java.io.ObjectInputStream.readClassDescriptor(ObjectInputStream.java:789)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1534)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1466)
at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1591)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1299)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1910)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1834)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1719)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1305)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:348)
at java.util.HashMap.readObject(HashMap.java:1067)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:592)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:946)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1812)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1719)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1305)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:348)
at org.apache.james.mailrepository.JDBCMailRepository.retrieve(JDBCMailRepository.java:846)
at org.apache.james.mailrepository.JDBCSpoolRepository.accept(JDBCSpoolRepository.java:203)
at org.apache.james.mailrepository.JDBCSpoolRepository.accept(JDBCSpoolRepository.java:126)
at org.apache.james.mailrepository.MailStoreSpoolRepository.accept(MailStoreSpoolRepository.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:592)
at org.apache.avalon.phoenix.components.application.BlockInvocationHandler.invoke(BlockInvocationHandler.java:134)
at $Proxy5.accept(Unknown Source)
at org.apache.james.transport.JamesSpoolManager.run(JamesSpoolManager.java:299)
at java.lang.Thread.run(Thread.java:595)
How can I solve this issue?
Thanks!
I have spent three hours testing with different databases, such as MySQL 5.0 and Oracle. It works well on Oracle but not on MySQL. It seems to be an issue with MySQL, not James.
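Since the same code works on Oracle, one thing worth checking is whether serialized bytes survive a round trip through the MySQL spool column unchanged: a lossy column type or a charset conversion applied to binary data corrupts the Java serialization stream and yields exactly this kind of InvalidClassException. A sketch with placeholder connection details and a throwaway test table:

import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Arrays;
import java.util.HashMap;

public class BlobRoundTrip {
    public static void main(String[] args) throws Exception {
        // Serialize a simple object, as James does for mail attributes.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(new HashMap<String, String>());
        oos.flush();
        byte[] original = bos.toByteArray();

        // Placeholder URL/credentials; blob_test(data BLOB) is a throwaway table.
        Connection c = DriverManager.getConnection("jdbc:mysql://localhost/mail", "james", "secret");
        PreparedStatement ins = c.prepareStatement("INSERT INTO blob_test (data) VALUES (?)");
        ins.setBytes(1, original);
        ins.executeUpdate();

        ResultSet rs = c.createStatement().executeQuery("SELECT data FROM blob_test LIMIT 1");
        rs.next();
        System.out.println("round trip intact? " + Arrays.equals(original, rs.getBytes(1)));
        c.close();
    }
}

If the bytes come back different, check the column type (it should be a BLOB type, not TEXT) and the JDBC connection's character-set settings.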
