How to save a list of case classes in Scala / Java

I have a case class named Rdv:
case class Rdv(
  id: Option[Int],
  nom: String,
  prénom: String,
  sexe: Int,
  telPortable: String,
  telBureau: String,
  telPrivé: String,
  siteRDV: String,
  typeRDV: String,
  libelléRDV: String,
  numRDV: String,
  étape: String,
  dateRDV: Long,
  heureRDVString: String,
  statut: String,
  orderId: String)
and I would like to save a list of such elements on disk, and reload them later.
I tried with the Java classes (ObjectOutputStream, FileOutputStream, ObjectInputStream, FileInputStream), but I get an error in the retrieval step: the statement
val n2 = ois.readObject().asInstanceOf[List[Rdv]]
always fails with ClassNotFoundException: Rdv, although the correct path is given in the imports.
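The write/read code looked roughly like this (a reconstruction from the description; the file name is made up):
import java.io.{FileInputStream, FileOutputStream, ObjectInputStream, ObjectOutputStream}

// Writing works: case classes extend Serializable out of the box.
val oos = new ObjectOutputStream(new FileOutputStream("rdvs.ser"))
oos.writeObject(liste) // liste: List[Rdv]
oos.close()

// Reading is where ClassNotFoundException: Rdv is thrown.
val ois = new ObjectInputStream(new FileInputStream("rdvs.ser"))
val n2 = ois.readObject().asInstanceOf[List[Rdv]]
ois.close()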
Do you know a workaround to save such an object?
Please provide a little piece of code!
Thanks,
Olivier
PS: I get the same error when using the Marshal class, as in this code:
object Application extends Controller {
  def index = Action {
    //implicit val Rdv2Writes = Json.writes[rdv2]
    def rdvTordv2(rdv: Rdv): rdv2 = new rdv2(
      rdv.nom,
      rdv.prénom,
      rdv.dateRDV,
      rdv.heureRDVString,
      rdv.telPortable,
      rdv.telBureau,
      rdv.telPrivé,
      rdv.siteRDV,
      rdv.typeRDV,
      rdv.libelléRDV,
      rdv.orderId,
      rdv.statut)
    val n = variables.manager.liste_locale
    val out = new FileOutputStream("out")
    out.write(Marshal.dump(n))
    out.close
    val in = new FileInputStream("out")
    val bytes = Stream.continually(in.read).takeWhile(-1 !=).map(_.toByte).toArray
    val bar: List[Rdv] = Marshal.load[List[Rdv]](bytes) // <--------------
    val n3 = bar.map(rdv => rdvTordv2(rdv))
    println("n3:" + n3.size)
    Ok(views.html.Application.olivier2(n3))
  }
}
The error occurs at the line marked with the arrow.
It seems that the conversion to the type List[Rdv] runs into problems, but why? Is it a Play-related problem?
OK, there is indeed a problem with Play:
I created a new plain Scala project with this code:
object Test1 extends App {
  // for testing purposes
  case class Person(name: String, age: Int)
  val liste_locale = List(new Person("paul", 18))
  val n = liste_locale
  val out = new FileOutputStream("out")
  out.write(Marshal.dump(n))
  out.close
  val in = new FileInputStream("out")
  val bytes = Stream.continually(in.read).takeWhile(-1 !=).map(_.toByte).toArray
  val bar: List[Person] = Marshal.load[List[Person]](bytes)
  println(s"bar:size=${bar.size}")
}
and the output is correct ("bar:size=1").
Then I modified my previous code in the Play project's controller class to this:
object Application extends Controller {
  def index = Action {
    // for testing purposes
    case class Person(name: String, age: Int)
    val liste_locale = List(new Person("paul", 18))
    val n = liste_locale
    val out = new FileOutputStream("out")
    out.write(Marshal.dump(n))
    out.close
    val in = new FileInputStream("out")
    val bytes = Stream.continually(in.read).takeWhile(-1 !=).map(_.toByte).toArray
    val bar: List[Person] = Marshal.load[List[Person]](bytes)
    println(s"bar:size=${bar.size}")
    Ok(views.html.Application.olivier2(Nil))
  }
}
and I get an error saying:
play.api.Application$$anon$1: Execution exception[[ClassNotFoundException: controllers.Application$$anonfun$index$1$Person$3]]
Does anyone have the answer?
Edit: I thought the error could come from sbt, so I modified build.scala like this:
import sbt._
import Keys._
import play.Project._

object ApplicationBuild extends Build {
  val appName = "sms_play_2"
  val appVersion = "1.0-SNAPSHOT"
  val appDependencies = Seq(
    // Add your project dependencies here,
    jdbc,
    anorm,
    "com.typesafe.slick" % "slick_2.10" % "2.0.0",
    "com.github.nscala-time" %% "nscala-time" % "0.6.0",
    "org.xerial" % "sqlite-jdbc" % "3.7.2",
    "org.quartz-scheduler" % "quartz" % "2.2.1",
    "com.esotericsoftware.kryo" % "kryo" % "2.22",
    "io.argonaut" %% "argonaut" % "6.0.2")
  val mySettings = Seq(
    (javaOptions in run) ++= Seq("-Dconfig.file=conf/dev.conf"))
  val playCommonSettings = Seq(
    Keys.fork := true)
  val main = play.Project(appName, appVersion, appDependencies).settings(
    Keys.fork in run := true,
    resolvers += Resolver.sonatypeRepo("snapshots")).settings(mySettings: _*)
    .settings(playCommonSettings: _*)
}
but without success; the error is still there (class Person not found).
Can you help me?

Scala Pickling has reasonable momentum and the approach has many advantages (lots of the heavy lifting is done at compile time). There is a pluggable serialization mechanism, and formats like JSON are supported.
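Note that the ClassNotFoundException in the Play test is a direct consequence of declaring Person inside the Action block: it compiles to the mangled inner-class name controllers.Application$$anonfun$index$1$Person$3, which the deserializer then cannot resolve by name. Declaring the case class at the top level of a file sidesteps this, whichever serializer you pick. As a rough sketch of the Pickling approach (assuming scala-pickling 0.10.x; the imports differ in older versions):
import scala.pickling.Defaults._
import scala.pickling.json._

// Top-level case class, not nested inside an Action block.
case class Person(name: String, age: Int)

object PicklingDemo extends App {
  val pickled = List(Person("paul", 18)).pickle // a JSON string under the hood
  val restored = pickled.unpickle[List[Person]]
  println(s"restored size=${restored.size}") // 1
}
Because the picklers for the concrete List[Person] type are generated at compile time, the runtime class lookup that failed here never happens.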


Scala macro can't find java.util.List, java.lang.Object

Update: see the answers below for the solution to this problem. There's a second problem (the macro now can't find Pojo); the question about that second problem is here: Scala macro can't find my java class
I'm creating a Scala macro to automatically generate case classes from POJOs (in order to make working with Avro a little bit nicer).
Everything "works", except that the compiler chokes on built-in Java classes like java.util.List and java.lang.Object.
My question is: how do I generate code in the macro such that the compiler resolves these Java classes?
Example error message:
(without the // comments in Pojo.java)
[info] Compiling 1 Scala source to /Users/marcin/development/repo/problemdemo/target/scala-2.11/classes...
fieldType:java.util.List
fieldType:Int
fieldType:java.util.List
Expr[Any](case class Demo extends scala.Product with scala.Serializable {
<caseaccessor> <paramaccessor> val baz: java.util.List[com.squarefoot.Pojo] = _;
<caseaccessor> <paramaccessor> val bar: Int = _;
<caseaccessor> <paramaccessor> val foo: java.util.List[java.lang.Integer] = _;
def <init>(baz: java.util.List[com.squarefoot.Pojo], bar: Int, foo: java.util.List[java.lang.Integer]) = {
super.<init>();
()
}
})
[error] /Users/marcin/development/repos/problemdemo/src/main/scala/com/squarefoot/converters/problemdemo.scala:5: not found: type java.util.List
[error] @Caseify(classOf[com.squarefoot.Pojo])
[error] ^
[error] one error found
[error] (root/compile:compileIncremental) Compilation failed
[error] Total time: 17 s, completed Dec 11, 2016 12:00:57 PM
(Pojo.java as shown)
[info] Compiling 1 Scala source to /Users/marcin/development/repos/problemdemo/target/scala-2.11/classes...
fieldType:java.lang.Object
fieldType:Int
Expr[Any](case class Demo extends scala.Product with scala.Serializable {
<caseaccessor> <paramaccessor> val qux: java.lang.Object = _;
<caseaccessor> <paramaccessor> val bar: Int = _;
def <init>(qux: java.lang.Object, bar: Int) = {
super.<init>();
()
}
})
[error] /Users/marcin/development/repos/problemdemo/src/main/scala/com/squarefoot/converters/problemdemo.scala:5: not found: type java.lang.Object
[error] @Caseify(classOf[com.squarefoot.Pojo])
[error] ^
[error] one error found
[error] (root/compile:compileIncremental) Compilation failed
[error] Total time: 6 s, completed Dec 11, 2016 12:04:29 PM
Edit: Results of showRaw
showRaw gives output like this, which looks fine to me:
ValDef(Modifiers(DEFERRED), TermName("availablebuildouts"), AppliedTypeTree(Ident(TypeName("java.util.List")), List(Ident(TypeName("com.squarefoot.buildouttype")))), EmptyTree)
problemdemo/avroschemas/src/main/java/com/squarefoot/Pojo.java:
package com.squarefoot;

public class Pojo {
    //public java.util.List<Integer> foo;
    public int bar;
    //public java.util.List<Pojo> baz;
    public java.lang.Object qux;
}
problemdemo/src/main/scala/com/squarefoot/converters/problemdemo.scala:
package com.squarefoot.converters

import com.squarefoot.Pojo

class Foomin {
  val foobar: java.util.List[Int]
}

@Caseify(classOf[com.squarefoot.Pojo])
case class Demo()
problemdemo/macros/src/main/scala/com/squarefoot/converters/Caseify.scala:
package com.squarefoot.converters

import scala.language.experimental.macros
import scala.annotation.StaticAnnotation
import scala.reflect.macros.Context

/**
 * Generate a case class from a POJO, e.g.:
 *   @Caseify(classOf[com.squarefoot.incominglisting])
 *   case class Incominglisting()
 * NOTE that the type parameter to classOf must be provided as a fully
 * qualified name, otherwise the macro code here won't be able to find it.
 *
 * Generates a case class with the same members as the public, non-static
 * members of the POJO.
 *
 * Note that you must have all types used in the POJO in scope where the
 * macro is invoked.
 */
class Caseify[T](source: Class[T]) extends StaticAnnotation {
  def macroTransform(annottees: Any*) = macro CaseifyMacro.expand_impl[T]
}

object CaseifyMacro {
  /** Generate a case class from a POJO. */
  def expand_impl[T](c: Context)(annottees: c.Expr[Any]*) = {
    import c.universe._
    // Evaluate the annotation tree itself to extract the Class[T] parameter.
    val source: Class[T] = c.prefix.tree match {
      case q"new Caseify($param)" => c.eval[Class[T]](c.Expr(param))
    }
    val rm = scala.reflect.runtime.currentMirror
    val vars =
      rm.classSymbol(source).toType.members.map(_.asTerm)
        .filter(_.isVar).filter(_.isPublic)
    lazy val fields = vars.map { f =>
      val fieldName = TermName(f.name.toString)
      val fieldType = TypeName(f.typeSignature.typeConstructor.toString)
      val typeArgs = f.typeSignature.typeArgs.map(a => TypeName(a.toString))
      println("fieldType:" + fieldType.toString)
      if (typeArgs.size > 0)
        q"val $fieldName: $fieldType[..$typeArgs]"
      else
        q"val $fieldName: $fieldType"
    }
    annottees.map(_.tree) match {
      case List(q"case class $newname()") =>
        val expr = c.Expr[Any](
          // Add your own logic here, possibly using arguments on the annotation.
          q"""
          case class $newname(..$fields)
          """)
        println(expr.toString)
        expr
      // Add validation and error handling here.
    }
  }
}
Sbt files:
problemdemo/build.sbt
name := "data-importer"
addCompilerPlugin("org.scalamacros" % "paradise" % "2.1.0" cross CrossVersion.full)
scalaVersion := "2.11.8"
val avroVersion = "1.8.1"
lazy val root =
project.in( file(".") )
.aggregate(avroschemas, macros).dependsOn(macros, avroschemas)
lazy val macros = project.dependsOn(avroschemas)
lazy val avroschemas = project
libraryDependencies ++= Seq(
"org.scala-lang" % "scala-reflect" % scalaVersion.value
)
// better error reporting
scalacOptions in Test ++= Seq("-Yrangepos")
run in Compile := Defaults.runTask(fullClasspath in Compile, mainClass in (Compile, run), runner in (Compile, run))
problemdemo/macros/build.sbt
name := "data-importer-macros"
addCompilerPlugin("org.scalamacros" % "paradise" % "2.1.0" cross CrossVersion.full)
organization := "com.squarefoot"
scalaVersion := "2.11.3"
libraryDependencies ++= Seq(
"org.scala-lang" % "scala-reflect" % scalaVersion.value
)
scalacOptions in Test ++= Seq("-Yrangepos")
problemdemo/avroschemas/build.sbt
name := "data-importer-avroschemas"
addCompilerPlugin("org.scalamacros" % "paradise" % "2.1.0" cross CrossVersion.full)
organization := "com.squarefoot"
scalaVersion := "2.11.8"
// better error reporting
scalacOptions in Test ++= Seq("-Yrangepos")
run in Compile := Defaults.runTask(fullClasspath in Compile, mainClass in (Compile, run), runner in (Compile, run))
Basically, instead of TypeName("java.util.List") you want something like Select(Select(This(TypeName("java")), TypeName("util")), TypeName("List")) (based on the example in http://docs.scala-lang.org/overviews/reflection/symbols-trees-types.html#tree-creation-via-reify; I can't test at the moment). If you do showRaw on your input tree, you should see more precisely. So instead of TypeName(...toString), split on the dots. Or maybe just removing TypeName:
val fieldType = f.typeSignature.typeConstructor
val typeArgs = f.typeSignature.typeArgs
will be enough?
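One wrinkle with splicing the Type directly: the macro above obtains its symbols from the runtime mirror (scala.reflect.runtime.currentMirror), and those types belong to a different universe than c.universe, so they cannot be spliced into the macro's quasiquotes as-is. An untested sketch that sidesteps this (it assumes the POJO is already compiled and visible on the macro classpath) resolves the class through the macro's own mirror instead:
import c.universe._

// Resolve the POJO through the macro's mirror, so the resulting Type
// belongs to c.universe and can be spliced straight into quasiquotes.
val pojoType = c.mirror.staticClass(source.getName).toType
val fields = pojoType.members.sorted.collect {
  case m if m.isTerm && m.asTerm.isVar && m.isPublic =>
    val f = m.asTerm
    // typeSignatureIn keeps applied generics such as java.util.List[Integer],
    // so no TypeName(...toString) round-trip is needed.
    q"val ${TermName(f.name.toString.trim)}: ${f.typeSignatureIn(pojoType)}"
}
With a Type spliced into the tree, the compiler no longer has to re-resolve a dotted name like java.util.List as a single identifier, which is what produced the "not found: type java.util.List" errors.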
So I don't yet have a working macro, but I have solved this problem with the help of Alexey Romanov's answer. The code below now results in the error:
[error] /Users/marcin/development/repos/problemdemo/src/main/scala/com/squarefoot/converters/problemdemo.scala:10: not found: type com.squarefoot.Pojo
[error] @Caseify(classOf[com.squarefoot.Pojo])
I'm about to open a separate question about that issue.
package com.squarefoot.converters

import scala.language.experimental.macros
import scala.annotation.StaticAnnotation
import scala.reflect.macros.Context

/**
 * Generate a case class from a POJO, e.g.:
 *   @Caseify(classOf[com.squarefoot.incominglisting])
 *   case class Incominglisting()
 * NOTE that the type parameter to classOf must be provided as a fully
 * qualified name, otherwise the macro code here won't be able to find it.
 *
 * Generates a case class with the same members as the public, non-static
 * members of the POJO.
 *
 * Note that you must have all types used in the POJO in scope where the
 * macro is invoked.
 */
class Caseify[T](source: Class[T]) extends StaticAnnotation {
  def macroTransform(annottees: Any*) = macro CaseifyMacro.expand_impl[T]
}

object CaseifyMacro {
  /** Generate a case class from a POJO. */
  def expand_impl[T](c: Context)(annottees: c.Expr[Any]*) = {
    import c.universe._
    // Evaluate the annotation tree itself to extract the Class[T] parameter.
    val source: Class[T] = c.prefix.tree match {
      case q"new Caseify($param)" => c.eval[Class[T]](c.Expr(param))
    }
    val rm = scala.reflect.runtime.currentMirror
    val vars =
      rm.classSymbol(source).toType.members.map(_.asTerm)
        .filter(_.isVar).filter(_.isPublic)
    val fields = vars.map { f =>
      val fieldName = TermName(f.name.toString)
      val fieldType = tq"${f.typeSignature.typeConstructor.typeSymbol.fullName}"
      val rawTypeArgs = f.typeSignature.typeArgs.map(a => TypeName(a.toString))
      val typeArgs = tq"${rawTypeArgs}"
      println("typeArgs: " + typeArgs.toString)
      println("fieldType:" + fieldType.getClass.toString + "|" + fieldType.toString)
      println(f.typeSignature.typeSymbol.asType.name.getClass.toString)
      val arraylistname = tq"java.util.ArrayList"
      println("DEBUG:" + tq"${arraylistname}".toString + "|" + f.typeSignature.typeConstructor.typeSymbol.fullName)
      if (rawTypeArgs.nonEmpty) {
        val appliedFieldType = tq"${arraylistname}[..$rawTypeArgs]"
        q"val $fieldName: $appliedFieldType"
      } else
        q"val $fieldName: $fieldType"
    }
    annottees.map(_.tree) match {
      case List(q"case class $newname()") =>
        val expr = c.Expr[Any](
          // Add your own logic here, possibly using arguments on the annotation.
          q"""
          case class $newname(..$fields)
          """)
        println(expr.toString)
        expr
      // Add validation and error handling here.
    }
  }
}

Parsing JSON objects into a java.util.List using Scala

I'm trying to take the JSON output of an analysis tool and turn it into a Java list.
I'm doing this with Scala, and while it doesn't sound that hard, I've run into trouble.
I was hoping the following code would do it:
def returnWarnings(json: JSONObject): java.util.List[String] = {
  var bugs = new ArrayBuffer[String]
  for (warning <- json.getJSONArray("warnings")) { // might need to add .asScala
    bugs += (warning.getString("warning_type") + ": " + warning.getString("message"))
  }
  val rval: java.util.List[String] = bugs.asJava
  rval
}
This block produces two errors when I try to compile it:
Error:(18, 42) value foreach is not a member of org.json.JSONArray
for (warning <- json.getJSONArray("warnings")){ //might need to add .asScala
^
and
Error:(21, 49) value asJava is not a member of scala.collection.mutable.ArrayBuffer[String]
val rval: java.util.List[String] = bugs.asJava
^
I don't know what's wrong with my for loop.
EDIT: with a bit more reading, I figured out what was up with the loop; see https://stackoverflow.com/a/6376083/5843840
The second error is especially baffling, because as far as I can tell it should work. It is really similar to the code from this documentation:
scala> val jul: java.util.List[Int] = ArrayBuffer(1, 2, 3).asJava
jul: java.util.List[Int] = [1, 2, 3]
You should try the following:
import scala.collection.JavaConverters._

def returnWarnings(input: JSONObject): java.util.List[String] = {
  val warningsArray = input.getJSONArray("warnings")
  val output = (0 until warningsArray.length).map { i =>
    val warning = warningsArray.getJSONObject(i)
    warning.getString("warning_type") + ": " + warning.getString("message")
  }
  output.asJava
}
That final conversion could also be done implicitly (without invoking .asJava) by importing scala.collection.JavaConversions._.
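For reference, the second compile error disappears as soon as the converters are in scope; a minimal sketch:
import scala.collection.JavaConverters._
import scala.collection.mutable.ArrayBuffer

val bugs = ArrayBuffer("warning_type: message")
// asJava is an extension method contributed by the JavaConverters import.
val jul: java.util.List[String] = bugs.asJava
Note that the implicit scala.collection.JavaConversions._ route is deprecated as of Scala 2.12; the explicit .asJava style is generally preferred.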

How to trigger source generation with sbt

I have an sbt sub-project which compiles messages.json files into new Java sources. I've set the task up to run before running tests and before compiling the primary project; it can also be run manually via a new command, "gen-messages".
The problem is that the message generation takes some time, it always regenerates all sources, and it runs too often. Some tasks, like running tests with coverage, end up generating and compiling the messages twice!
How can I watch the generator's inputs and only run the source generation if something has changed, or if the expected output Java files are missing?
Secondly, how would I go about running the generator only on changed messages.json files?
Currently the sbt commands I'm using are:
lazy val settingsForMessageGeneration =
  ((test in Test) <<= (test in Test) dependsOn(messageGenerationCommand)) ++
  ((compile in Compile) <<= (compile in Compile) dependsOn(messageGenerationCommand)) ++
  (messageGenerationCommand <<= messageGenerationTask) ++
  (sourceGenerators in Compile += messageGenerationTask.taskValue)

lazy val messageGenerationCommand = TaskKey[scala.collection.Seq[File]]("gen-messages")

lazy val messageGenerationTask = (
  sourceManaged,
  fullClasspath in Compile in messageGenerator,
  runner in Compile in messageGenerator,
  streams
) map { (dir, cp, r, s) =>
  lazy val f = getFileTree(new File("./subProjectWithMsgSources/src/")).filter(_.getName.endsWith("messages.json"))
  f.foreach { te =>
    val messagePackagePath = te.getAbsolutePath().replace("messages.json", "msg").replace("./", "")
    val messagePath = te.getAbsolutePath().replace("./", "")
    val fi = new File(messagePackagePath)
    if (!fi.exists()) {
      fi.mkdirs()
    }
    val ar = List("-d", messagePackagePath, messagePath)
    toError(r.run("com.my.MessageClassGenerator", cp.files, ar, s.log))
  }
  getFileTree(new File("./subProjectWithMsgSources/src/"))
    .filter(_.getName.endsWith("/msg/*.java"))
    .to[scala.collection.Seq]
}
The message generator creates a directory with the newly created Java files; no other content will be in that directory.
You can use sbt.FileFunction.cached to run your source generator only when the input or output files have changed.
The idea is to factor the actual source generation into a function Set[File] => Set[File] and call it via FileFunction.cached.
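In isolation, the wrapper has this shape (a minimal sketch; cacheDir, runGenerator, and jsonInputs are placeholder names, with cacheDir typically being the task's streams cacheDirectory):
// FileFunction.cached persists hashes of the input files under cacheDir and
// only invokes the body when those hashes change.
val cachedGen = FileFunction.cached(cacheDir / "gen-messages", FilesInfo.hash) {
  (in: Set[File]) => runGenerator(in) // Set[File] => Set[File]: returns the generated files
}
cachedGen(jsonInputs.toSet).toSeq
Applied to your settings, that looks like this: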
lazy val settingsForMessageGeneration =
  ((test in Test) <<= (test in Test) dependsOn(messageGenerationCommand)) ++
  ((compile in Compile) <<= (compile in Compile) dependsOn(messageGenerationCommand)) ++
  (messageGenerationCommand <<= messageGenerationTask) ++
  (sourceGenerators in Compile += messageGenerationTask.taskValue)

lazy val messageGenerationCommand = TaskKey[scala.collection.Seq[File]]("gen-messages")

lazy val messageGenerationTask = (
  sourceManaged,
  fullClasspath in Compile in messageGenerator,
  runner in Compile in messageGenerator,
  streams
) map { (dir, cp, r, s) =>
  lazy val f = getFileTree(new File("./subProjectWithMsgSources/src/")).filter(_.getName.endsWith("messages.json"))
  def gen(sources: Set[File]): Set[File] = {
    sources.foreach { te =>
      val messagePackagePath = te.getAbsolutePath().replace("messages.json", "msg").replace("./", "")
      val messagePath = te.getAbsolutePath().replace("./", "")
      val fi = new File(messagePackagePath)
      if (!fi.exists()) {
        fi.mkdirs()
      }
      val ar = List("-d", messagePackagePath, messagePath)
      toError(r.run("com.my.MessageClassGenerator", cp.files, ar, s.log))
    }
    getFileTree(new File("./subProjectWithMsgSources/src/"))
      .filter(_.getName.endsWith("/msg/*.java"))
      .to[scala.collection.immutable.Set]
  }
  val func = FileFunction.cached(s.cacheDirectory / "gen-messages", FilesInfo.hash) { gen }
  func(f.toSet).toSeq
}

Camel Processing JSON Messages from RabbitMQ

I want to post a message in JSON format to RabbitMQ and have that message consumed successfully. I'm attempting to use Camel to integrate producers and consumers. However, I'm struggling to understand how to create a route to make this happen. I'm using JSON Schema to define the interface between the producer and consumer. My application creates JSON, converts it to a byte[], and a Camel ProducerTemplate is used to send the message to RabbitMQ. On the consumer end, the byte[] message needs to be converted to a String, parsed as JSON, and mapped to an object so I can process it. However, the following route doesn't work:
from(startEndpoint).transform(body().convertToString()).marshal().json(JsonLibrary.Jackson, classOf[Payload]).bean(classOf[JsonBeanExample])
It's as if the bean is passed the original byte[] content and not the object created by json(JsonLibrary.Jackson, classOf[Payload]). All the Camel examples I've seen which use the json(..) call seem to be followed by a to(..), which ends the route. Here is the error message:
Caused by: org.apache.camel.InvalidPayloadException: No body available of type: uk.co.techneurons.messaging.Payload but has value: [B@48898819 of type: byte[] on: Message: "{\"id\":1}". Caused by: No type converter available to convert from type: byte[] to the required type: uk.co.techneurons.messaging.Payload with value [B@48898819. Exchange[ID-Tonys-iMac-local-54996-1446407983661-0-2][Message: "{\"id\":1}"]. Caused by: [org.apache.camel.NoTypeConversionAvailableException - No type converter available to convert from type: byte[] to the required type: uk.co.techneurons.messaging.Payload with value [B@48898819]
I don't really want to use Spring, annotations, etc.; I would like to keep the service activation as simple as possible and use Camel as much as possible.
This is the producer:
package uk.co.techneurons.messaging

import org.apache.camel.builder.RouteBuilder
import org.apache.camel.impl.DefaultCamelContext

object RabbitMQProducer extends App {
  val camelContext = new DefaultCamelContext
  val rabbitMQEndpoint: String = "rabbitmq:localhost:5672/advert?autoAck=false&threadPoolSize=1&username=guest&password=guest&exchangeType=topic&autoDelete=false&declare=false"
  val rabbitMQRouteBuilder = new RouteBuilder() {
    override def configure(): Unit = {
      from("direct:start").to(rabbitMQEndpoint)
    }
  }
  camelContext.addRoutes(rabbitMQRouteBuilder)
  camelContext.start
  val producerTemplate = camelContext.createProducerTemplate
  producerTemplate.setDefaultEndpointUri("direct:start")
  producerTemplate.sendBodyAndHeader("{\"id\":1}", "rabbitmq.ROUTING_KEY", "advert.edited")
  camelContext.stop
}
This is the consumer:
package uk.co.techneurons.messaging

import org.apache.camel.builder.RouteBuilder
import org.apache.camel.impl.DefaultCamelContext
import org.apache.camel.model.dataformat.JsonLibrary

object RabbitMQConsumer extends App {
  val camelContext = new DefaultCamelContext
  val startEndpoint = "rabbitmq:localhost:5672/advert?queue=es_index&exchangeType=topic&autoDelete=false&declare=false&autoAck=false"
  val consumer = camelContext.createConsumerTemplate
  val routeBuilder = new RouteBuilder() {
    override def configure(): Unit = {
      from(startEndpoint).transform(body().convertToString()).marshal().json(JsonLibrary.Jackson, classOf[Payload]).bean(classOf[JsonBeanExample])
    }
  }
  camelContext.addRoutes(routeBuilder)
  camelContext.start
  Thread.sleep(1000)
  camelContext.stop
}

case class Payload(id: Long)

class JsonBeanExample {
  def process(payload: Payload): Unit = {
    println(s"JSON ${payload}")
  }
}
For completeness, this is the sbt file for easy replication:
name := """camel-scala"""
version := "1.0"
scalaVersion := "2.11.7"
libraryDependencies ++= {
val scalaTestVersion = "2.2.4"
val camelVersion: String = "2.16.0"
val rabbitVersion: String = "3.5.6"
val slf4jVersion: String = "1.7.12"
val logbackVersion: String = "1.1.3"
Seq(
"org.scala-lang.modules" %% "scala-xml" % "1.0.3",
"org.apache.camel" % "camel-core" % camelVersion,
"org.apache.camel" % "camel-jackson" % camelVersion,
"org.apache.camel" % "camel-scala" % camelVersion,
"org.apache.camel" % "camel-rabbitmq" % camelVersion,
"com.rabbitmq" % "amqp-client" % rabbitVersion,
"org.slf4j" % "slf4j-api" % slf4jVersion,
"ch.qos.logback" % "logback-classic" % logbackVersion,
"org.apache.camel" % "camel-test" % camelVersion % "test",
"org.scalatest" %% "scalatest" % scalaTestVersion % "test")
}
Thanks
I decided that I needed to create a bean and register it (easier said than done: for some as yet unknown reason JNDIRegistry didn't work with DefaultCamelContext, so I used a SimpleRegistry):
val registry: SimpleRegistry = new SimpleRegistry()
registry.put("myBean", new JsonBeanExample())
val camelContext = new DefaultCamelContext(registry)
Then I changed the consuming routeBuilder; it seems I had been over-transforming the message. unmarshal() is the right direction here: marshal() turns an object into a wire format, whereas unmarshal() parses the incoming bytes into an object.
from(startEndpoint).unmarshal.json(JsonLibrary.Jackson, classOf[Payload]).to("bean:myBean?method=process")
I also changed the bean so setter methods were available (Jackson instantiates the target class through its no-arg constructor and populates it via setters, which @BeanProperty generates), and added a toString:
class Payload {
  @BeanProperty var id: Long = _
  override def toString = s"Payload($id)"
}

class JsonBeanExample() {
  def process(payload: Payload): Unit = {
    println(s"received ${payload}")
  }
}
The next problem is to get dead-letter queues working, and to make sure that failures in the bean handler make their way properly back up the stack.

JSON4s can't find constructor w/spark

I've run into an issue when attempting to parse JSON in my Spark job. I'm using Spark 1.1.0, json4s, and the Cassandra Spark Connector, with DSE 4.6. The exception thrown is:
org.json4s.package$MappingException: Can't find constructor for BrowserData org.json4s.reflect.ScalaSigReader$.readConstructor(ScalaSigReader.scala:27)
org.json4s.reflect.Reflector$ClassDescriptorBuilder.ctorParamType(Reflector.scala:108)
org.json4s.reflect.Reflector$ClassDescriptorBuilder$$anonfun$6.apply(Reflector.scala:98)
org.json4s.reflect.Reflector$ClassDescriptorBuilder$$anonfun$6.apply(Reflector.scala:95)
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
My code looks like this:
case class BrowserData(navigatorObjectData: Option[NavigatorObjectData],
                       flash_version: Option[FlashVersion],
                       viewport: Option[Viewport],
                       performanceData: Option[PerformanceData])

// ... other case classes

def parseJson(b: Option[String]): Option[String] = {
  implicit val formats = DefaultFormats
  for {
    browserDataStr <- b
    browserData = parse(browserDataStr).extract[BrowserData]
    navObject <- browserData.navigatorObjectData
    userAgent <- navObject.userAgent
  } yield (userAgent)
}
def getJavascriptUa(rows: Iterable[com.datastax.spark.connector.CassandraRow]): Option[String] = {
  implicit val formats = DefaultFormats
  rows.collectFirst { case r if r.getStringOption("browser_data").isDefined =>
    parseJson(r.getStringOption("browser_data"))
  }.flatten
}

def getRequestUa(rows: Iterable[com.datastax.spark.connector.CassandraRow]): Option[String] = {
  rows.collectFirst { case r if r.getStringOption("ua").isDefined =>
    r.getStringOption("ua")
  }.flatten
}

def checkUa(rows: Iterable[com.datastax.spark.connector.CassandraRow], sessionId: String): Option[Boolean] = {
  for {
    jsUa <- getJavascriptUa(rows)
    reqUa <- getRequestUa(rows)
  } yield (jsUa == reqUa)
}

def run(name: String) = {
  val rdd = sc.cassandraTable("beehive", name).groupBy(r => r.getString("session_id"))
  val counts = rdd.map(r => (checkUa(r._2, r._1)))
  counts
}
I use :load to load the file into the REPL and then call the run function. The failure is happening in the parseJson function, as far as I can tell. I've tried a variety of things to get this to work. From similar posts, I've made sure my case classes are at the top level of the file. I've tried compiling just the case class definitions into a jar and including that jar like this: /usr/bin/dse spark --jars case_classes.jar
I've tried adding them to the conf like this: sc.getConf.setJars(Seq("/home/ubuntu/case_classes.jar"))
And still the same error. Should I compile all of my code into a jar? Is this a Spark issue or a json4s issue? Any help at all appreciated.
