Connecting Apache Spark with Couchbase - Java

I am trying to connect a Spark application to Couchbase. To do this I am using the following code:
double[] val = new double[3];
SparkContext sc = new SparkContext(new SparkConf().setAppName("sql").setMaster("local")
        .set("com.couchbase.nodes", "url").set("com.couchbase.client.bucket", "password"));
SQLContext sql = new SQLContext(sc);
JsonObject content = JsonObject.create().put("mean", val[0]).put("median", val[1])
        .put("standardDeviation", val[2]);
JsonDocument doc = JsonDocument.create("docId", content);
bucket.upsert(doc);
But I am getting the following exception:
Exception in thread "main" java.lang.NoClassDefFoundError: com/couchbase/client/java/document/json/JsonObject
at com.cloudera.sparkwordcount.JavaWordCount.main(JavaWordCount.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: com.couchbase.client.java.document.json.JsonObject
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 10 more
My Maven dependencies are as follows:
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.10</artifactId>
<version>1.6.1</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.10</artifactId>
<version>1.6.1</version>
</dependency>
<dependency>
<groupId>com.databricks</groupId>
<artifactId>spark-csv_2.10</artifactId>
<version>1.4.0</version>
</dependency>
<dependency>
<groupId>com.couchbase.client</groupId>
<artifactId>spark-connector_2.10</artifactId>
<version>1.1.0</version>
</dependency>
<dependency>
<groupId>com.couchbase.client</groupId>
<artifactId>java-client</artifactId>
<version>2.3.4</version>
</dependency>
Please tell me what I am missing.

Below are the minimum dependencies you need to connect to Couchbase using Spark 1.6:
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.10</artifactId>
<version>1.6.2</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.10</artifactId>
<version>1.6.2</version>
</dependency>
<dependency>
<groupId>com.couchbase.client</groupId>
<artifactId>spark-connector_2.10</artifactId>
<version>1.2.1</version>
</dependency>
And here is a sample program that saves a JsonDocument to Couchbase and reads it back. Hope this helps.
import java.util.Arrays;
import java.util.List;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import com.couchbase.client.java.document.JsonDocument;
import com.couchbase.client.java.document.json.JsonObject;
import com.couchbase.spark.japi.CouchbaseDocumentRDD;
import com.couchbase.spark.japi.CouchbaseSparkContext;

public class CouchBaseDemo {
    public static void main(String[] args) {
        // JavaSparkContext
        SparkConf conf = new SparkConf().setAppName("CouchBaseDemo").setMaster("local")
                .set("com.couchbase.bucket.travel-sample", "");
        JavaSparkContext jsc = new JavaSparkContext(conf);
        CouchbaseSparkContext csc = CouchbaseSparkContext.couchbaseContext(jsc);
        // Create and save a JsonDocument
        JsonDocument docOne = JsonDocument.create("docOne", JsonObject.create().put("new", "doc-content"));
        JavaRDD<JsonDocument> jRDD = jsc.parallelize(Arrays.asList(docOne));
        CouchbaseDocumentRDD<JsonDocument> cbRDD = CouchbaseDocumentRDD.couchbaseDocumentRDD(jRDD);
        cbRDD.saveToCouchbase();
        // Fetch the JsonDocument
        List<JsonDocument> doc = csc.couchbaseGet(Arrays.asList("docOne")).collect();
        System.out.println(doc);
    }
}
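To tie this back to the original snippet, here is a minimal sketch (assuming the same imports and the jsc context from the sample above, and that val[] and "docId" come from your own code) of saving the mean/median/standard-deviation document through the connector's RDD API instead of calling bucket.upsert directly:
// Sketch only: reuses jsc from the sample above; val[] and "docId" are assumed
// to be the statistics and document id from the original question.
double[] val = new double[3];
JsonObject stats = JsonObject.create()
        .put("mean", val[0])
        .put("median", val[1])
        .put("standardDeviation", val[2]);
JsonDocument statsDoc = JsonDocument.create("docId", stats);
CouchbaseDocumentRDD
        .couchbaseDocumentRDD(jsc.parallelize(Arrays.asList(statsDoc)))
        .saveToCouchbase();
Whichever approach you use, the Couchbase classes also have to be on the runtime classpath of spark-submit (for example via an uber/fat jar or the --jars option); a NoClassDefFoundError at runtime usually means the dependency was only available at compile time.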

Related

Azure -> Java -> File not found exception when trying to access a file from an Azure container

I am trying to read a Parquet file in Azure, but I am getting a FileNotFoundException even though the file is present in the container.
My use case is to read the Parquet file without downloading it.
I took the approach from this Stack Overflow answer: Read parquet data from Azure Blob container without downloading it locally.
It can easily be reproduced if you have an Azure container with a Parquet file inside it.
My Code:
public static final String storageConnectionString = createConnectionString();

public static void main(String[] args) throws URISyntaxException, StorageException, InvalidKeyException, IOException {
    CloudStorageAccount storageAccount = CloudStorageAccount.parse(storageConnectionString);
    CloudBlobClient blobClient = storageAccount.createCloudBlobClient();
    CloudBlobContainer container = blobClient.getContainerReference(CONTAINER_NAME);
    final List<String> blobItems = new LinkedList<>();
    for (final ListBlobItem listBlobItem : container.listBlobs()) {
        blobItems.add(listBlobItem.getUri().getPath());
    }
    System.out.println("List of files: " + blobItems);
    CloudBlob blob = container.getBlockBlobReference("userdata1.parquet");
    Configuration config = new Configuration();
    config.set("fs.azure", "org.apache.hadoop.fs.azure.NativeAzureFileSystem");
    config.set("fs.azure.sas." + CONTAINER_NAME + "." + ACCOUNT_NAME + ".blob.core.windows.net", YOUR_KEY);
    URI uri = new URI("wasbs://" + CONTAINER_NAME + "#" + ACCOUNT_NAME + ".blob.core.windows.net/" + blob.getName());
    System.out.println("URI is :" + uri);
    // https://javacodevalidation.blob.core.windows.net/testcontainer/userdata1.parquet
    InputFile file = HadoopInputFile.fromPath(new org.apache.hadoop.fs.Path(uri), config);
    ParquetReader<GenericRecord> reader = AvroParquetReader.<GenericRecord>builder(file).build();
    GenericRecord record;
    while ((record = reader.read()) != null) {
        System.out.println(record);
    }
    reader.close();
}
Output:
List of files: [/testcontainer/AEM.txt, /testcontainer/Test/, /testcontainer/idea.png, /testcontainer/jenkin.png, /testcontainer/jmeterResults.jpg, /testcontainer/userdata1.parquet]
URI is :wasbs://testcontainer#javacodevalidation.blob.core.windows.net/testcontainer/userdata1.parquet
log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.io.FileNotFoundException: testcontainer/userdata1.parquet is not found
at org.apache.hadoop.fs.azure.NativeAzureFileSystem.getFileStatusInternal(NativeAzureFileSystem.java:2749)
at org.apache.hadoop.fs.azure.NativeAzureFileSystem.getFileStatus(NativeAzureFileSystem.java:2686)
at org.apache.parquet.hadoop.util.HadoopInputFile.fromPath(HadoopInputFile.java:39)
at com.Parquet.readParquetFile.main(readParquetFile.java:54)
Process finished with exit code 1
Imports:
import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.StorageException;
import com.microsoft.azure.storage.blob.CloudBlob;
import com.microsoft.azure.storage.blob.CloudBlobClient;
import com.microsoft.azure.storage.blob.CloudBlobContainer;
import com.microsoft.azure.storage.blob.ListBlobItem;
import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.conf.Configuration;
import org.apache.parquet.avro.AvroParquetReader;
import org.apache.parquet.hadoop.ParquetReader;
import org.apache.parquet.hadoop.util.HadoopInputFile;
import org.apache.parquet.io.InputFile;
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
import java.security.InvalidKeyException;
import java.util.LinkedList;
import java.util.List;
import static com.constants.ProjectConstants.ACCOUNT_NAME;
import static com.constants.ProjectConstants.CONTAINER_NAME;
import static com.container.containerHelper.createConnectionString;
Pom:
<dependency>
<groupId>com.azure</groupId>
<artifactId>azure-storage-blob</artifactId>
<version>12.15.0</version>
</dependency>
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-lang3</artifactId>
<version>3.11</version>
</dependency>
<!-- https://mvnrepository.com/artifact/com.fasterxml.jackson.core/jackson-annotations -->
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-annotations</artifactId>
<version>2.13.2</version>
</dependency>
<!-- https://mvnrepository.com/artifact/com.fasterxml.jackson.core/jackson-core -->
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-core</artifactId>
<version>2.13.2</version>
</dependency>
<!-- https://mvnrepository.com/artifact/com.fasterxml.jackson.core/jackson-databind -->
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>2.13.2</version>
</dependency>
<!-- https://mvnrepository.com/artifact/com.azure/azure-core -->
<dependency>
<groupId>com.azure</groupId>
<artifactId>azure-core</artifactId>
<version>1.26.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.parquet/parquet-avro -->
<dependency>
<groupId>org.apache.parquet</groupId>
<artifactId>parquet-avro</artifactId>
<version>1.12.2</version>
</dependency>
<dependency>
<groupId>com.microsoft.sqlserver</groupId>
<artifactId>mssql-jdbc</artifactId>
<version>6.2.1.jre8</version>
</dependency>
<dependency>
<groupId>com.microsoft.azure</groupId>
<artifactId>azure-storage</artifactId>
<version>7.0.0</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-common</artifactId>
<version>3.3.2</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-azure -->
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-azure</artifactId>
<version>3.3.2</version>
</dependency>
Java:
corretto-1.8.0_292

Unresolved compilation problem: The method map is ambiguous for the type Dataset<Row>

I am trying to write Java client code for Apache Spark 3.0.1. First, below is my pom.xml.
<dependencies>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>2.10.2</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.dataformat</groupId>
<artifactId>jackson-dataformat-csv</artifactId>
<version>2.11.2</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-client</artifactId>
<version>3.3.0</version>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>2.7.0</version>
</dependency>
<dependency>
<groupId>org.mongodb.spark</groupId>
<artifactId>mongo-spark-connector_2.12</artifactId>
<version>3.0.0</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.datatype</groupId>
<artifactId>jackson-datatype-jsr310</artifactId>
<version>2.12.1</version>
</dependency>
<dependency>
<groupId>org.mongodb</groupId>
<artifactId>mongo-java-driver</artifactId>
<version>3.12.7</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.12</artifactId>
<version>3.1.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.12</artifactId>
<version>3.1.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming_2.12</artifactId>
<version>3.1.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming-kafka-0-10_2.12</artifactId>
<version>3.1.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql-kafka-0-10_2.12</artifactId>
<version>3.1.0</version>
</dependency>
</dependencies>
And here is my Java client code using the Spark Structured Streaming API:
SparkSession spark = SparkSession.builder().master("local[*]").appName("KafkaMongo_StrctStream").getOrCreate();
Dataset<Row> inputDF = spark.read().format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")
        .option("subscribe", "topicForMongoDB")
        .option("startingOffsets", "earliest")
        .load()
        .selectExpr("CAST(value AS STRING)");
Encoder<Document> mongoEncode = Encoders.bean(Document.class);
Dataset<Row> tempDF = inputDF.map(row -> { // map function throws the exception.
    String[] parameters = new String[row.mkString().split(",").length];
    CsvMapper csvMapper = new CsvMapper();
    parameters = csvMapper.readValue(row.mkString(), String[].class);
    DateTimeFormatter formatter = DateTimeFormatter.ISO_DATE;
    EntityMongoDB data = new EntityMongoDB(); //LocalDate.parse(parameters[2], formatter), Float.valueOf(parameters[3]), parameters[4], parameters[5], parameters[6], parameters[7], parameters[8], parameters[9]);
    String jsonInString = csvMapper.writeValueAsString(data);
    Document doc = new Document(Document.parse(jsonInString));
    return doc;
}, mongoEncode).toDF();
But this code cannot run because of the exception below:
Exception in thread "main" java.lang.Error: Unresolved compilation problem:
The method map(Function1<Row,Document>, Encoder<Document>) is ambiguous for the type Dataset<Row>
I cannot see any error in this code, because it worked without exceptions on Apache Spark 2.4. Is this unresolved compilation exception caused by the Apache Spark version change? Kindly let me know how to solve this issue.
= Updated =
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.Properties;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoder;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.bson.Document;
import com.aaa.etl.pojo.EntityMongoDB;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.csv.CsvMapper;
For your information, I also attach the EntityMongoDB class source:
@Data
@AllArgsConstructor
@NoArgsConstructor
public class EntityMongoDB implements Serializable {
    @JsonFormat(pattern="yyyy-MM-dd")
    @JsonDeserialize(using = LocalDateDeserializer.class)
    private LocalDate date;
    private float value;
    private String id;
    private String title;
    private String state;
    private String frequency_short;
    private String units_short;
    private String seasonal_adjustment_short;
}
I was upgrading from Spark 2.x to 3.x. I found this error occurs when moving from Scala 2.11 to 2.12; for example, the artifact below has this problem too.
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.12</artifactId>
<version>2.4.7</version>
</dependency>
The fix I found was to avoid the inline mapping. In the example above you can split it into its own MapFunction class:
public class DocumentMapper implements MapFunction<Row, Document> {
    @Override
    public Document call(Row row) throws Exception {
        String[] parameters = new String[row.mkString().split(",").length];
        CsvMapper csvMapper = new CsvMapper();
        parameters = csvMapper.readValue(row.mkString(), String[].class);
        DateTimeFormatter formatter = DateTimeFormatter.ISO_DATE;
        EntityMongoDB data = new EntityMongoDB(); //LocalDate.parse(parameters[2], formatter), Float.valueOf(parameters[3]), parameters[4], parameters[5], parameters[6], parameters[7], parameters[8], parameters[9]);
        String jsonInString = csvMapper.writeValueAsString(data);
        Document doc = new Document(Document.parse(jsonInString));
        return doc;
    }
}
Then reference it in the map call:
Dataset<Row> tempDF = inputDF.map(new DocumentMapper(), mongoEncode).toDF();
Hope this helps other upgraders out there.
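If you would rather keep the logic inline, another option (a sketch under the same imports, plus org.apache.spark.api.java.function.MapFunction) is to cast the lambda to MapFunction so the Dataset.map overload is no longer ambiguous:
// Sketch: the explicit MapFunction cast tells javac which map(...) overload to pick,
// resolving the ambiguity that appears with the Scala 2.12 builds of Spark.
Dataset<Row> tempDF = inputDF.map((MapFunction<Row, Document>) row -> {
    CsvMapper csvMapper = new CsvMapper();
    String[] parameters = csvMapper.readValue(row.mkString(), String[].class);
    EntityMongoDB data = new EntityMongoDB(); // populate from parameters as needed
    String jsonInString = csvMapper.writeValueAsString(data);
    return new Document(Document.parse(jsonInString));
}, mongoEncode).toDF();
Either way, the key point is to give the compiler an unambiguous MapFunction rather than a bare lambda.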

Spark akka throws a java.lang.NoSuchMethodError: scala.Predef$.ArrowAssoc

I have built Spark using Scala 2.11. I ran the following steps:
./dev/change-scala-version.sh 2.11
mvn -Pyarn -Phadoop-2.4 -Dscala-2.11 -DskipTests clean package
After building Spark successfully, I tried to initialize Spark via the Akka actor model.
So, my Main class looks like:
ActorSystem system = ActorSystem.create("ClusterSystem");
Inbox inbox = Inbox.create(system);
ActorRef sparkActorRef = system.actorOf(SparkActor.props(mapOfArguments), "sparkActor");
inbox.send(sparkActorRef, "start");
The Spark actor looks like:
public class SparkActor extends UntypedActor{
private static Logger logger = LoggerFactory.getLogger(SparkActor.class);
final Map<String,Object> configurations;
final SparkConf sparkConf;
private int sparkBatchDuration;
public static Props props(final Map<String,Object> configurations) {
return Props.create(new Creator<SparkActor>() {
private static final long serialVersionUID = 1L;
@Override
public SparkActor create() throws Exception {
return new SparkActor(configurations);
}
});
}
public SparkActor(Map<String,Object> configurations) {
this.configurations = configurations;
this.sparkConf =initializeSparkConf(configurations);
ActorRef mediator = DistributedPubSub.get(getContext().system()).mediator();
mediator.tell(new DistributedPubSubMediator.Subscribe("data", getSelf()), getSelf());
}
private SparkConf initializeSparkConf(Map<String, Object> mapOfArgs) {
SparkConf conf = new SparkConf();
Configuration sparkConf = (Configuration) mapOfArgs.get(StreamingConstants.MAP_SPARK_CONFIGURATION);
Iterator it = sparkConf.getKeys();
while(it.hasNext()){
String propertyKey = (String)it.next();
String propertyValue = sparkConf.getString(propertyKey);
conf.set(propertyKey.trim(), propertyValue.trim());
}
conf.setMaster(sparkConf.getString(StreamingConstants.SET_MASTER));
return conf;
}
@Override
public void onReceive(Object arg0) throws Exception {
if((arg0 instanceof String) & (arg0.toString().equalsIgnoreCase("start"))){
logger.info("Going to start");
sparkConf.setAppName(StreamingConstants.APP_NAME);
logger.debug("App name set to {}. Beginning spark execution",StreamingConstants.APP_NAME);
Configuration kafkaConfiguration = (Configuration) configurations.get(StreamingConstants.MAP_KAFKA_CONFIGURATION);
sparkBatchDuration = Integer.parseInt((String)configurations.get(StreamingConstants.MAP_SPARK_DURATION));
//Initializing Kafka configurations.
String[] eplTopicsAndThreads = kafkaConfiguration.getString(StreamingConstants.EPL_QUEUE).split(",");
Map<String,Integer> mapofeplTopicsAndThreads = new TreeMap<>();
for (String item : eplTopicsAndThreads){
String topic = item.split(StreamingConstants.EPL_QUEUE_SEPARATOR)[0];
Integer numberOfThreads= Integer.parseInt(item.split(StreamingConstants.EPL_QUEUE_SEPARATOR)[1]);
mapofeplTopicsAndThreads.put(topic, numberOfThreads);
}
//Creating a receiver stream in spark
JavaPairReceiverInputDStream<String,String> receiverStream = null;
JavaStreamingContext ssc = new JavaStreamingContext(sparkConf, Durations.seconds(sparkBatchDuration));
receiverStream = KafkaUtils.createStream(ssc,
kafkaConfiguration.getString(StreamingConstants.ZOOKEEPER_SERVER_PROPERTY),
kafkaConfiguration.getString(StreamingConstants.KAFKA_GROUP_NAME),
mapofeplTopicsAndThreads);
JavaDStream<String> javaRdd = receiverStream.map(new SparkTaskTupleHelper());
javaRdd.foreachRDD(new Function<JavaRDD<String>, Void>() {
@Override
public Void call(JavaRDD<String> jsonData) throws Exception {
//Code to process some data from kafka
}
});
ssc.start();
ssc.awaitTermination();
}
}
}
I start my Spark application as:
./spark-submit --class com.sample.Main --master local[8] ../executables/spark-akka.jar
I get the following exception on startup:
Uncaught error from thread [ClusterSystem-akka.actor.default-dispatcher-3] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[ClusterSystem]
java.lang.NoSuchMethodError: scala.Predef$.ArrowAssoc(Ljava/lang/Object;)Ljava/lang/Object;
at akka.cluster.pubsub.protobuf.DistributedPubSubMessageSerializer.<init>(DistributedPubSubMessageSerializer.scala:42)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$2.apply(DynamicAccess.scala:78)
at scala.util.Try$.apply(Try.scala:161)
at akka.actor.ReflectiveDynamicAccess.createInstanceFor(DynamicAccess.scala:73)
at akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$3.apply(DynamicAccess.scala:84)
at akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$3.apply(DynamicAccess.scala:84)
at scala.util.Success.flatMap(Try.scala:200)
at akka.actor.ReflectiveDynamicAccess.createInstanceFor(DynamicAccess.scala:84)
at akka.serialization.Serialization.serializerOf(Serialization.scala:165)
at akka.serialization.Serialization$$anonfun$3.apply(Serialization.scala:174)
at akka.serialization.Serialization$$anonfun$3.apply(Serialization.scala:174)
at scala.collection.TraversableLike$WithFilter$$anonfun$map$2.apply(TraversableLike.scala:722)
at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:224)
at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
at scala.collection.TraversableLike$WithFilter.map(TraversableLike.scala:721)
at akka.serialization.Serialization.<init>(Serialization.scala:174)
at akka.serialization.SerializationExtension$.createExtension(SerializationExtension.scala:15)
at akka.serialization.SerializationExtension$.createExtension(SerializationExtension.scala:12)
at akka.actor.ActorSystemImpl.registerExtension(ActorSystem.scala:713)
at akka.actor.ExtensionId$class.apply(Extension.scala:79)
at akka.serialization.SerializationExtension$.apply(SerializationExtension.scala:12)
at akka.remote.RemoteActorRefProvider.init(RemoteActorRefProvider.scala:175)
at akka.actor.ActorSystemImpl.liftedTree2$1(ActorSystem.scala:620)
at akka.actor.ActorSystemImpl._start$lzycompute(ActorSystem.scala:617)
at akka.actor.ActorSystemImpl._start(ActorSystem.scala:617)
at akka.actor.ActorSystemImpl.start(ActorSystem.scala:634)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:142)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:119)
at org.apache.spark.util.AkkaUtils$.org$apache$spark$util$AkkaUtils$$doCreateActorSystem(AkkaUtils.scala:121)
at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:53)
at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:52)
at org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:1913)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1904)
at org.apache.spark.util.AkkaUtils$.createActorSystem(AkkaUtils.scala:55)
at org.apache.spark.rpc.akka.AkkaRpcEnvFactory.create(AkkaRpcEnv.scala:253)
at org.apache.spark.rpc.RpcEnv$.create(RpcEnv.scala:53)
at org.apache.spark.SparkEnv$.create(SparkEnv.scala:252)
at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:193)
at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:277)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:450)
at org.apache.spark.streaming.StreamingContext$.createNewSparkContext(StreamingContext.scala:864)
at org.apache.spark.streaming.StreamingContext.<init>(StreamingContext.scala:81)
at org.apache.spark.streaming.api.java.JavaStreamingContext.<init>(JavaStreamingContext.scala:134)
at com.sample.SparkActor.onReceive(SparkActor.java:106)
at akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:167)
at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:97)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
at akka.actor.ActorCell.invoke(ActorCell.scala:487)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
at akka.dispatch.Mailbox.run(Mailbox.scala:220)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
A list of options that I have already tried:
1) Rebuilt Spark with Akka version 2.4.4 and got a NoSuchMethodError for toRootLowerCase.
2) Tried to reuse the inbuilt Akka 2.3.11 and still got the same exception at ClusterSettings.scala.
I have looked at similar errors on Stack Overflow and found that they were due to a Scala version mismatch. But having built everything with 2.11 and using Akka 2.4.4, I thought all jars would be on the same Scala version.
Am I missing any particular step?
My pom file, for your reference:
<packaging>jar</packaging>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<slf4j.version>1.7.6</slf4j.version>
<log4j.version>2.0-rc1</log4j.version>
<commons.cli.version>1.2</commons.cli.version>
<kafka.version>0.8.2.2</kafka.version>
<akka.version>2.4.4</akka.version>
<akka.version.old>2.4.4</akka.version.old>
</properties>
<dependencies>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.11</artifactId>
<version>1.6.1</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.11</artifactId>
<version>1.6.1</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming_2.11</artifactId>
<version>1.6.1</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-mllib_2.11</artifactId>
<version>1.6.1</version>
</dependency>
<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>
<version>2.11.8</version>
</dependency>
<dependency>
<groupId>com.typesafe.akka</groupId>
<artifactId>akka-actor_2.11</artifactId>
<version>${akka.version}</version>
</dependency>
<dependency>
<groupId>com.typesafe.akka</groupId>
<artifactId>akka-cluster_2.11</artifactId>
<version>${akka.version}</version>
</dependency>
<dependency>
<groupId>com.typesafe.akka</groupId>
<artifactId>akka-kernel_2.11</artifactId>
<version>${akka.version}</version>
</dependency>
<dependency>
<groupId>com.typesafe.akka</groupId>
<artifactId>akka-cluster-tools_2.11</artifactId>
<version>${akka.version}</version>
</dependency>
<dependency>
<groupId>com.typesafe.akka</groupId>
<artifactId>akka-remote_2.11</artifactId>
<version>2.4.4</version>
</dependency>
<dependency>
<groupId>com.typesafe.akka</groupId>
<artifactId>akka-slf4j_2.11</artifactId>
<version>2.4.4</version>
</dependency>
If I remove the cluster jars and the DistributedPubSub code and use plain remoting (i.e. akka.tcp), then no errors are shown; it works fine in that scenario. I wish to know why DistributedPubSub throws this error.

java.lang.NoClassDefFoundError: javax/servlet/FilterRegistration

I am using Spark 1.6.0 and I am trying to code a very simple "word count" project. I am getting this error:
java.lang.NoClassDefFoundError: javax/servlet/FilterRegistration
This is my code:
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import java.util.Arrays;
import org.apache.spark.SparkConf;

public class WordCount {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("WordCount").setMaster("local[2]");
        JavaSparkContext sc = new JavaSparkContext(conf);
        JavaRDD<String> lines = sc.textFile("scrittura.txt");
        JavaRDD<Integer> lineLengths = lines.map(s -> s.length());
        int totalLength = lineLengths.reduce((a, b) -> a + b);
        System.out.println("TOTAL: " + totalLength);
        JavaRDD<String> flat = lines
                .flatMap(x -> Arrays.asList(x.replaceAll("[^A-Za-z ]", "").split(" ")));
        JavaPairRDD<String, Integer> map = flat
                .mapToPair(x -> new Tuple2<String, Integer>(x, 1));
        JavaPairRDD<String, Integer> reduce = map
                .reduceByKey((x, y) -> x + y);
        System.out.println(reduce.collect());
        sc.stop();
        sc.close();
    }
}
This is my log:
Exception in thread "main" java.lang.NoClassDefFoundError:
javax/servlet/FilterRegistration at
org.spark-project.jetty.servlet.ServletContextHandler.(ServletContextHandler.java:136)
at
org.spark-project.jetty.servlet.ServletContextHandler.(ServletContextHandler.java:129)
at
org.spark-project.jetty.servlet.ServletContextHandler.(ServletContextHandler.java:98)
at
org.apache.spark.ui.JettyUtils$.createServletHandler(JettyUtils.scala:110)
at
org.apache.spark.ui.JettyUtils$.createServletHandler(JettyUtils.scala:101)
at org.apache.spark.ui.WebUI.attachPage(WebUI.scala:78) at
org.apache.spark.ui.WebUI$$anonfun$attachTab$1.apply(WebUI.scala:62)
at
org.apache.spark.ui.WebUI$$anonfun$attachTab$1.apply(WebUI.scala:62)
at
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.ui.WebUI.attachTab(WebUI.scala:62) at
org.apache.spark.ui.SparkUI.initialize(SparkUI.scala:61) at
org.apache.spark.ui.SparkUI.(SparkUI.scala:74) at
org.apache.spark.ui.SparkUI$.create(SparkUI.scala:190) at
org.apache.spark.ui.SparkUI$.createLiveUI(SparkUI.scala:141) at
org.apache.spark.SparkContext.(SparkContext.scala:466) at
org.apache.spark.api.java.JavaSparkContext.(JavaSparkContext.scala:61)
at WordCount.main(WordCount.java:16) Caused by:
java.lang.ClassNotFoundException: javax.servlet.FilterRegistration at
java.net.URLClassLoader.findClass(URLClassLoader.java:381) at
java.lang.ClassLoader.loadClass(ClassLoader.java:424) at
sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) at
java.lang.ClassLoader.loadClass(ClassLoader.java:357) ... 18 more
This is my pom.xml:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<artifactId>examples</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>jar</packaging>
<name>examples</name>
<url>http://maven.apache.org</url>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>3.8.1</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.10</artifactId>
<version>1.5.1</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-mllib_2.10</artifactId>
<version>1.5.0</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-client</artifactId>
<version>2.6.2</version>
</dependency>
<dependency>
<groupId>org.eclipse.jetty.orbit</groupId>
<artifactId>javax.servlet</artifactId>
<version>3.0.0.v201112011016</version>
</dependency>
</dependencies>
</project>
How can I solve it?
Thank you!

NoClassDefFoundError: kafka/api/OffsetRequest

I am trying to write an application for real-time processing with Apache Storm, Kafka and Trident,
but when initializing TridentKafkaConfig I see this error:
Exception in thread "main" java.lang.NoClassDefFoundError: kafka/api/OffsetRequest
at storm.kafka.KafkaConfig.<init>(KafkaConfig.java:43)
at storm.kafka.trident.TridentKafkaConfig.<init>(TridentKafkaConfig.java:30)
at spout.TestSpout.<clinit>(TestSpout.java:22)
at IOTTridentTopology.initializeTridentTopology(IOTTridentTopology.java:31)
at IOTTridentTopology.main(IOTTridentTopology.java:26)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
Caused by: java.lang.ClassNotFoundException: kafka.api.OffsetRequest
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
... 10 more
My spout class is:
public class TestSpout extends OpaqueTridentKafkaSpout {
    private static TridentKafkaConfig config;
    private static BrokerHosts HOSTS = new ZkHosts(TridentConfig.ZKHOSTS);
    private static String TOPIC = "test";
    private static int BUFFER_SIZE = TridentConfig.BUFFER_SIZE;

    static {
        config = new TridentKafkaConfig(HOSTS, TOPIC);
        config.scheme = new SchemeAsMultiScheme(new RawScheme());
        config.bufferSizeBytes = BUFFER_SIZE;
    }

    public TestSpout(TridentKafkaConfig config) {
        super(config);
    }

    public TestSpout() {
        super(config);
    }
}
Main class:
public static void main(String[] args) {
    initializeTridentTopology();
}

private static void initializeTridentTopology() {
    TridentTopology topology = new TridentTopology();
    TestSpout spout = new TestSpout();
    //////////////// test //////////////////////
    topology.newStream("testspout", spout).each(spout.getOutputFields(), new TestFunction(), new Fields());
    /////////////// end test ///////////////////
    LocalCluster cluster = new LocalCluster();
    Config config = new Config();
    config.setDebug(false);
    config.setMaxTaskParallelism(1);
    config.registerSerialization(storm.kafka.trident.GlobalPartitionInformation.class);
    config.registerSerialization(java.util.TreeMap.class);
    config.setNumWorkers(5);
    config.setFallBackOnJavaSerialization(true);
    cluster.submitTopology("KafkaTrident", config, topology.build());
}
And my pom.xml:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>IOT</groupId>
<artifactId>ver0.1</artifactId>
<version>1.0-SNAPSHOT</version>
<dependencies>
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-core</artifactId>
<version>0.9.3</version>
</dependency>
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-kafka</artifactId>
<version>0.9.3</version>
</dependency>
</dependencies>
I have tried different versions of storm-kafka (0.9.3, 0.9.4, 0.9.5, 0.9.6 and 0.10.0) and storm-core (0.9.3, 0.9.4 and 0.9.6),
but I still see the same error.
By googling I found this link, but ...
ClassNotFoundException: kafka.api.OffsetRequest
After some googling I found this link:
https://github.com/wurstmeister/storm-kafka-0.8-plus-test
and found my answer in its pom.xml file.
By adding the following dependency and finding a compatible version of Kafka, the problem was resolved:
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<version>0.9.0.0</version>
<exclusions>
<exclusion>
<groupId>org.apache.zookeeper</groupId>
<artifactId>zookeeper</artifactId>
</exclusion>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
</exclusions>
</dependency>
If you use LocalCluster to deploy a Storm topology, you need to add the Kafka library to your dependencies (for Storm 0.10.0):
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.9.2</artifactId>
<version>0.8.1.1</version>
</dependency>
The kafka.api.OffsetRequest class is missing because org.apache.kafka is a provided dependency of storm-kafka:
http://mvnrepository.com/artifact/org.apache.storm/storm-kafka/0.10.0. Please see the Provided Dependencies section for details.
