SparkContext: Error initializing SparkContext on MapR Sandbox - java

I tried running this sample project, which uses MapR.
When I executed the class ml.Flight in the sandbox, the following line
val spark: SparkSession = SparkSession.builder().appName("churn").getOrCreate()
produced this error.
[user01#maprdemo ~]$ spark-submit --class ml.Flight --master local[2] spark-ml-flightdelay-1.0.jar
Warning: Unable to determine $DRILL_HOME
18/12/19 05:39:09 WARN Utils: Your hostname, maprdemo.local resolves to a loopback address: 127.0.0.1; using 10.0.3.1 instead (on interface enp0s3)
18/12/19 05:39:09 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
18/12/19 05:39:20 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/12/19 05:39:28 ERROR SparkContext: Error initializing SparkContext.
java.io.IOException: Could not create FileClient
at com.mapr.fs.MapRFileSystem.lookupClient(MapRFileSystem.java:656)
at com.mapr.fs.MapRFileSystem.lookupClient(MapRFileSystem.java:709)
at com.mapr.fs.MapRFileSystem.getMapRFileStatus(MapRFileSystem.java:1419)
at com.mapr.fs.MapRFileSystem.getFileStatus(MapRFileSystem.java:1093)
at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:100)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:522)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2493)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:933)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:924)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:924)
at ml.Flight$.main(Flight.scala:37)
at ml.Flight.main(Flight.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:899)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.IOException: Could not create FileClient
at com.mapr.fs.MapRClientImpl.<init>(MapRClientImpl.java:137)
at com.mapr.fs.MapRFileSystem.lookupClient(MapRFileSystem.java:650)
... 22 more
I'm new to Scala/Spark and any help is welcome. Thanks in advance.

I think you are using or exporting a different spark-submit (for example, one that came with a Python install) rather than the one bundled with MapR. Invoke the MapR spark-submit by its full path, for example:
/opt/mapr/spark/spark-2.3.1/bin/spark-submit
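To see which spark-submit the shell is actually picking up, and to invoke the MapR-bundled one explicitly, a quick check (the 2.3.1 version directory is only an example and may differ on your sandbox):

# Show which spark-submit is first on the PATH and its version
which spark-submit
spark-submit --version

# Call the MapR-bundled launcher by its full path instead
/opt/mapr/spark/spark-2.3.1/bin/spark-submit --class ml.Flight --master local[2] spark-ml-flightdelay-1.0.jar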

Related

Hadoop Kerberos failure on Spark local

I'm getting the following error when running spark-submit on GitLab.
It works locally, but not on GitLab.
21/11/03 05:34:51 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in thread "main" org.apache.hadoop.security.KerberosAuthException: failure to login: javax.security.auth.login.LoginException: java.lang.NullPointerException: invalid null input: name
at jdk.security.auth/com.sun.security.auth.UnixPrincipal.<init>(UnixPrincipal.java:67)
Why would this happen? This is the command I'm running:
pipenv run spark-submit --packages org.apache.spark:spark-avro_2.12:3.2.0 --master local[1] --conf spark.jars.ivy=/tmp/.ivy main.py

ClassNotFoundException: org.apache.hadoop.conf.Configuration Starting Flink SQL Client

I'm trying to set up the Hive integration with Flink as shown here. I have everything configured as mentioned, and all services (Hive, MySQL, Kafka) are running properly. However, when I start the Flink SQL Client on a standalone local Flink cluster with this command:
./bin/sql-client.sh embedded
I get a ClassNotFoundException: org.apache.hadoop.conf.Configuration...
This is the detailed log file with the exception trace:
2020-04-01 11:27:31,458 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.rpc.address, localhost
2020-04-01 11:27:31,459 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.rpc.port, 6123
2020-04-01 11:27:31,459 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.heap.size, 1024m
2020-04-01 11:27:31,460 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.memory.process.size, 1568m
2020-04-01 11:27:31,460 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.numberOfTaskSlots, 1
2020-04-01 11:27:31,460 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: parallelism.default, 1
2020-04-01 11:27:31,460 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.execution.failover-strategy, region
2020-04-01 11:27:31,507 INFO org.apache.flink.core.fs.FileSystem - Hadoop is not in the classpath/dependencies. The extended set of supported File Systems via Hadoop is not available.
2020-04-01 11:27:31,527 WARN org.apache.flink.client.cli.CliFrontend - Could not load CLI class org.apache.flink.yarn.cli.FlinkYarnSessionCli.
java.lang.NoClassDefFoundError: org/apache/hadoop/yarn/exceptions/YarnException
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.flink.client.cli.CliFrontend.loadCustomCommandLine(CliFrontend.java:1076)
at org.apache.flink.client.cli.CliFrontend.loadCustomCommandLines(CliFrontend.java:1030)
at org.apache.flink.table.client.gateway.local.LocalExecutor.<init>(LocalExecutor.java:135)
at org.apache.flink.table.client.SqlClient.start(SqlClient.java:85)
at org.apache.flink.table.client.SqlClient.main(SqlClient.java:178)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.yarn.exceptions.YarnException
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
... 7 more
2020-04-01 11:27:31,543 INFO org.apache.flink.table.client.gateway.local.LocalExecutor - Using default environment file: file:/home/rudip7/flink/conf/sql-client-defaults.yaml
2020-04-01 11:27:31,861 INFO org.apache.flink.table.client.config.entries.ExecutionEntry - Property 'execution.restart-strategy.type' not specified. Using default value: fallback
2020-04-01 11:27:32,776 ERROR org.apache.flink.table.client.SqlClient - SQL Client must stop. Unexpected exception. This is a bug. Please consider filing an issue.
org.apache.flink.table.client.gateway.SqlExecutionException: Could not create execution context.
at org.apache.flink.table.client.gateway.local.ExecutionContext$Builder.build(ExecutionContext.java:753)
at org.apache.flink.table.client.gateway.local.LocalExecutor.openSession(LocalExecutor.java:228)
at org.apache.flink.table.client.SqlClient.start(SqlClient.java:98)
at org.apache.flink.table.client.SqlClient.main(SqlClient.java:178)
Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/conf/Configuration
at org.apache.flink.table.catalog.hive.factories.HiveCatalogFactory.createCatalog(HiveCatalogFactory.java:84)
at org.apache.flink.table.client.gateway.local.ExecutionContext.createCatalog(ExecutionContext.java:371)
at org.apache.flink.table.client.gateway.local.ExecutionContext.lambda$null$4(ExecutionContext.java:547)
at java.util.HashMap.forEach(HashMap.java:1289)
at org.apache.flink.table.client.gateway.local.ExecutionContext.lambda$initializeCatalogs$5(ExecutionContext.java:546)
at org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:240)
at org.apache.flink.table.client.gateway.local.ExecutionContext.initializeCatalogs(ExecutionContext.java:545)
at org.apache.flink.table.client.gateway.local.ExecutionContext.initializeTableEnvironment(ExecutionContext.java:494)
at org.apache.flink.table.client.gateway.local.ExecutionContext.<init>(ExecutionContext.java:159)
at org.apache.flink.table.client.gateway.local.ExecutionContext.<init>(ExecutionContext.java:118)
at org.apache.flink.table.client.gateway.local.ExecutionContext$Builder.build(ExecutionContext.java:742)
... 3 more
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.conf.Configuration
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
... 14 more
Do you have any hint about what I'm missing? I have all my Hadoop libraries on my classpath and I don't understand why this class is missing...
I'm using Flink 1.10 and Java 8.
Also, this line in the log file makes me wonder whether it is really my mistake:
2020-04-01 11:27:32,776 ERROR org.apache.flink.table.client.SqlClient - SQL Client must stop. Unexpected exception. This is a bug. Please consider filing an issue.
Thank you in advance!
If you have not added HADOOP_HOME to your environment, you can export HADOOP_CLASSPATH before running ./bin/sql-client.sh embedded:
export HADOOP_CLASSPATH=`hadoop classpath`
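For example, a minimal sketch of the full sequence, assuming a hadoop binary is on the PATH and the client is started from the Flink distribution directory:

# Make the Hadoop jars (including hadoop-common, which contains
# org.apache.hadoop.conf.Configuration) visible to the SQL client
export HADOOP_CLASSPATH=`hadoop classpath`
echo $HADOOP_CLASSPATH   # sanity check: should list the Hadoop jar directories

# Then start the client from the Flink distribution directory
./bin/sql-client.sh embedded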

Hadoop's command start-dfs.sh is showing a strange error

When I try to run the command below, an error pops up.
Alis-Mac:hadoop-2.7.3 naziaimran$ sbin/start-dfs.sh
Below is the error:
2018-06-05 01:04:31.424 java[1879:21215] Unable to load realm info from SCDynamicStore
18/06/05 01:04:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /Users/naziaimran/Desktop/hadoop-2.7.3/logs/hadoop-naziaimran-namenode-Alis-Mac.out
localhost: Exception in thread "main" java.lang.ExceptionInInitializerError
localhost: at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
localhost: at org.apache.hadoop.hdfs.server.common.HdfsServerConstants$RollingUpgradeStartupOption.getAllOptionString(HdfsServerConstants.java:80)
localhost: at org.apache.hadoop.hdfs.server.namenode.NameNode.<clinit>(NameNode.java:249)
localhost: Caused by: java.lang.StringIndexOutOfBoundsException: begin 0, end 3, length 2
localhost: at java.base/java.lang.String.checkBoundsBeginEnd(String.java:3107)
localhost: at java.base/java.lang.String.substring(String.java:1873)
localhost: at org.apache.hadoop.util.Shell.<clinit>(Shell.java:51)
localhost: ... 3 more
localhost: starting datanode, logging to /Users/naziaimran/Desktop/hadoop-2.7.3/logs/hadoop-naziaimran-datanode-Alis-Mac.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /Users/naziaimran/Desktop/hadoop-2.7.3/logs/hadoop-naziaimran-secondarynamenode-Alis-Mac.out
0.0.0.0: Exception in thread "main" java.lang.ExceptionInInitializerError
0.0.0.0: at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
0.0.0.0: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:667)
0.0.0.0: Caused by: java.lang.StringIndexOutOfBoundsException: begin 0, end 3, length 2
0.0.0.0: at java.base/java.lang.String.checkBoundsBeginEnd(String.java:3107)
0.0.0.0: at java.base/java.lang.String.substring(String.java:1873)
0.0.0.0: at org.apache.hadoop.util.Shell.<clinit>(Shell.java:51)
0.0.0.0: ... 2 more
2018-06-05 01:04:48.170 java[2203:22211] Unable to load realm info from SCDynamicStore
18/06/05 01:04:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
I have been stuck here for days now; any help will be highly appreciated.
Thanks in advance :)
The problem is that Hadoop 2.7 is incompatible with Java 9/10.
I had the same issue and solved it by downgrading to Java 8.
Check the answer by VK321 here, if you are unsure about how to downgrade and get it to work:
https://stackoverflow.com/a/48422257/5181904
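On macOS (which the Alis-Mac prompt suggests), one hedged way to switch the active JDK for the current shell is via /usr/libexec/java_home; this sketch assumes a Java 8 JDK is already installed:

# List the installed JDKs and check which one is currently active
/usr/libexec/java_home -V
java -version

# Point JAVA_HOME at the Java 8 install for this shell, then retry
export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)
sbin/start-dfs.sh

Setting the same JAVA_HOME line in etc/hadoop/hadoop-env.sh makes the change stick for the Hadoop scripts rather than just the current shell.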

Error message when running hadoop wordcount example

I used this command to run the wordcount example in Hadoop.
hadoop jar /usr/local/Cellar/hadoop/3.0.0/libexec/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar wordcount inputWiki/Wiki_data_100MB outputWiki0301
and I got the error message below.
2018-03-01 18:54:14,845 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-03-01 18:54:16,107 INFO beanutils.FluentPropertyBeanIntrospector: Error when creating PropertyDescriptor for public final void org.apache.commons.configuration2.AbstractConfiguration.setProperty(java.lang.String,java.lang.Object)! Ignoring this property.
I used that command to run a similar file before and it worked well. Could anyone help me with this?
Update: results below.
pal-nat186-66-224:bin xujingjing$ hadoop jar /usr/local/Cellar/hadoop/3.0.0/libexec/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar wordcount inputGurtenberg0302/gurtenberg.txt outputGurtenberg0302
2018-03-02 17:23:58,961 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-03-02 17:24:00,164 INFO beanutils.FluentPropertyBeanIntrospector: Error when creating PropertyDescriptor for public final void org.apache.commons.configuration2.AbstractConfiguration.setProperty(java.lang.String,java.lang.Object)! Ignoring this property.
2018-03-02 17:24:00,226 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2018-03-02 17:24:00,396 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2018-03-02 17:24:00,397 INFO impl.MetricsSystemImpl: JobTracker metrics system started
2018-03-02 17:24:00,781 INFO mapreduce.JobSubmitter: Cleaning up the staging area file:/tmp/hadoop/mapred/staging/xujingjing1314852612/.staging/job_local1314852612_0001
org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://localhost:8020/user/xujingjing/inputGurtenberg0302/gurtenberg.txt
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:330)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:272)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:394)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:313)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:330)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:203)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1588)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
pal-nat186-66-224:bin xujingjing$
The error is this:
Input path does not exist: hdfs://localhost:8020/user/xujingjing/inputGurtenberg0302/gurtenberg.txt
So create the input directory in HDFS and copy the file into it:
hdfs dfs -mkdir -p /user/xujingjing/inputGurtenberg0302/
hdfs dfs -copyFromLocal \
/path/to/gurtenberg.txt \
/user/xujingjing/inputGurtenberg0302/
You say "I used that command to run a similar file before", but the line in your initial command uses a completely different file (inputWiki/Wiki_data_100MB, not inputGurtenberg0302/gurtenberg.txt).
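After copying the file, a quick way to verify it is in HDFS and re-run the job (jar path and directory names taken from the question; the output file name is the usual MapReduce default):

# Confirm the file now exists under the HDFS home directory
hdfs dfs -ls /user/xujingjing/inputGurtenberg0302/

# Re-run the example; the relative input path resolves under /user/xujingjing
hadoop jar /usr/local/Cellar/hadoop/3.0.0/libexec/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar wordcount inputGurtenberg0302/gurtenberg.txt outputGurtenberg0302

# When the job finishes, the word counts are typically in part-r-00000
hdfs dfs -cat outputGurtenberg0302/part-r-00000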

ERROR SparkContext: Error initializing SparkContext.. IntelliJ and Scala

I'm using Windows 10, Scala 2.10.2, Spark 1.6.1, and Java 1.8.
Below is the code I'm trying to run.
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf

object WordsCount {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setAppName("SparkWordCount").setMaster("local[2]")
    val sc = new SparkContext(sparkConf)
    val text = sc.textFile("shortTwitter.txt")
    val counts = text.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
    counts.foreach(println)
  }
}
and I got the following error.
"C:\Program Files\Java\jdk1.8.0_101\bin\java" -Didea.launcher.port=7533 "-Didea.launcher.bin.path=C:\Program Files (x86)\JetBrains\IntelliJ IDEA Community Edition 2016.2.3\bin" -Dfile.encoding=UTF-8 -classpath "C:\Program Files\Java\jdk1.8.0_101\jre\lib\charsets.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\deploy.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\ext\access-bridge-64.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\ext\cldrdata.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\ext\dnsns.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\ext\jaccess.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\ext\jfxrt.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\ext\localedata.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\ext\nashorn.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\ext\sunec.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\ext\sunjce_provider.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\ext\sunmscapi.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\ext\sunpkcs11.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\ext\zipfs.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\javaws.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\jce.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\jfr.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\jfxswt.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\jsse.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\management-agent.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\plugin.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\resources.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\rt.jar;C:\Users\Administrator\IdeaProjects\choi_jieun_assignment1\out\production\choi_jieun_assignment1;C:\scala\lib\jline.jar;C:\scala\lib\scalap.jar;C:\scala\lib\diffutils.jar;C:\scala\lib\akka-actors.jar;C:\scala\lib\scala-swing.jar;C:\scala\lib\scala-actors.jar;C:\scala\lib\scala-library.jar;C:\scala\lib\scala-partest.jar;C:\scala\lib\scala-reflect.jar;C:\scala\lib\scala-compiler.jar;C:\scala\lib\typesafe-config.jar;C:\scala\lib\scala-actors-migration.jar;C:\Users\Administrator\.ivy2\cache\org.scala-lang\scala-library\jars\scala-library-2.10.2.jar;C:\Users\Administrator\.ivy2\cache\org.scala-lang\scala-reflect\jars\scala-reflect-2.10.2.jar;C:\spark\lib\datanucleus-core-3.2.10.jar;C:\spark\lib\datanucleus-rdbms-3.2.9.jar;C:\spark\lib\spark-1.6.1-yarn-shuffle.jar;C:\spark\lib\datanucleus-api-jdo-3.2.6.jar;C:\spark\lib\spark-assembly-1.6.1-hadoop2.4.0.jar;C:\spark\lib\spark-examples-1.6.1-hadoop2.4.0.jar;C:\Program Files (x86)\JetBrains\IntelliJ IDEA Community Edition 2016.2.3\lib\idea_rt.jar" com.intellij.rt.execution.application.AppMain WordsCount
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/C:/spark/lib/spark-assembly-1.6.1-hadoop2.4.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/spark/lib/spark-examples-1.6.1-hadoop2.4.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/09/14 16:03:21 INFO SparkContext: Running Spark version 1.6.1
16/09/14 16:03:22 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/09/14 16:03:22 INFO SecurityManager: Changing view acls to: Administrator
16/09/14 16:03:22 INFO SecurityManager: Changing modify acls to: Administrator
16/09/14 16:03:22 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(Administrator); users with modify permissions: Set(Administrator)
16/09/14 16:03:23 INFO Utils: Successfully started service 'sparkDriver' on port 50762.
16/09/14 16:03:23 ERROR SparkContext: Error initializing SparkContext.
java.lang.NoSuchMethodException: akka.remote.RemoteActorRefProvider.<init> (java.lang.String, akka.actor.ActorSystem$Settings, akka.event.EventStream, akka.actor.Scheduler, akka.actor.DynamicAccess)
at java.lang.Class.getConstructor0(Class.java:3082)
at java.lang.Class.getDeclaredConstructor(Class.java:2178)
at akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$2.apply(DynamicAccess.scala:77)
at scala.util.Try$.apply(Try.scala:161)
at akka.actor.ReflectiveDynamicAccess.createInstanceFor(DynamicAccess.scala:74)
at akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$3.apply(DynamicAccess.scala:85)
at akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$3.apply(DynamicAccess.scala:85)
at scala.util.Success.flatMap(Try.scala:200)
at akka.actor.ReflectiveDynamicAccess.createInstanceFor(DynamicAccess.scala:85)
at akka.actor.ActorSystemImpl.<init>(ActorSystem.scala:546)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:111)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:104)
at org.apache.spark.util.AkkaUtils$.org$apache$spark$util$AkkaUtils$$doCreateActorSystem(AkkaUtils.scala:121)
at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:53)
at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:52)
at org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:1988)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1979)
at org.apache.spark.util.AkkaUtils$.createActorSystem(AkkaUtils.scala:55)
at org.apache.spark.SparkEnv$.create(SparkEnv.scala:266)
at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:193)
at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:288)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:457)
at WordsCount$.main(WordsCount.scala:7)
at WordsCount.main(WordsCount.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
16/09/14 16:03:23 INFO SparkContext: Successfully stopped SparkContext
Exception in thread "main" java.lang.NoSuchMethodException: akka.remote.RemoteActorRefProvider.<init>(java.lang.String, akka.actor.ActorSystem$Settings, akka.event.EventStream, akka.actor.Scheduler, akka.actor.DynamicAccess)
at java.lang.Class.getConstructor0(Class.java:3082)
at java.lang.Class.getDeclaredConstructor(Class.java:2178)
at akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$2.apply(DynamicAccess.scala:77)
at scala.util.Try$.apply(Try.scala:161)
at akka.actor.ReflectiveDynamicAccess.createInstanceFor(DynamicAccess.scala:74)
at akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$3.apply(DynamicAccess.scala:85)
at akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$3.apply(DynamicAccess.scala:85)
at scala.util.Success.flatMap(Try.scala:200)
at akka.actor.ReflectiveDynamicAccess.createInstanceFor(DynamicAccess.scala:85)
at akka.actor.ActorSystemImpl.<init>(ActorSystem.scala:546)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:111)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:104)
at org.apache.spark.util.AkkaUtils$.org$apache$spark$util$AkkaUtils$$doCreateActorSystem(AkkaUtils.scala:121)
at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:53)
at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:52)
at org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:1988)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1979)
at org.apache.spark.util.AkkaUtils$.createActorSystem(AkkaUtils.scala:55)
at org.apache.spark.SparkEnv$.create(SparkEnv.scala:266)
at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:193)
at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:288)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:457)
at WordsCount$.main(WordsCount.scala:7)
at WordsCount.main(WordsCount.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
Process finished with exit code 1
I have been struggling for a couple of hours to solve this problem but couldn't figure it out. Can someone help me out? Thanks!
Try installing the Java 7 JDK and you may have better luck. Although the release notes for Scala 2.10.2 seem to be missing, 2.10.4 doesn't support Java 8:
New ByteCode emitter based on ASM
Can target JDK 1.5, 1.6 and 1.7
Emits 1.6 bytecode by default
Old 1.5 backend is deprecated
...and even 2.11.x only has experimental Java 8 support:
The Scala 2.11.x series targets Java 6, with (evolving) experimental support for Java 8. In 2.11.1, Java 8 support is mostly limited to reading Java 8 bytecode and parsing Java 8 source. Stay tuned for more complete (experimental) Java 8 support. The next major release, 2.12, will most likely target Java 8 by default.
Also see: Can I use java 8 in an mixed scala 2.10 / java project built by sbt?
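As a quick sanity check before reinstalling anything, it can help to confirm which JDK and Scala versions are actually on the PATH; note that IntelliJ's Project SDK and Scala SDK are configured separately under File > Project Structure, so they may differ from what the terminal reports:

# Versions the command line sees; the IDE may be configured differently
java -version
scala -version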
