I want to post a message in JSON format to RabbitMQ and have that message consumed successfully. I'm attempting to use Camel to integrate producers and consumers. However, I'm struggling to understand how to create a route to make this happen. I'm using JSON Schema to define the interface between the producer and consumer. My application creates JSON, converts it to a byte[], and a Camel ProducerTemplate is used to send the message to RabbitMQ. On the consumer end, the byte[] message needs to be converted to a String, then to JSON, and then unmarshalled to an object so I can process it. However, the following line of code doesn't work:
from(startEndpoint).transform(body().convertToString()).marshal().json(JsonLibrary.Jackson, classOf[Payload]).bean(classOf[JsonBeanExample])
It's as if the bean is passed the original byte[] content and not the object created by json(JsonLibrary.Jackson, classOf[Payload]). All the Camel examples I've seen that use the json(..) call seem to be followed by a to(..), which is the end of the route? Here is the error message:
Caused by: org.apache.camel.InvalidPayloadException: No body available of type: uk.co.techneurons.messaging.Payload but has value: [B@48898819 of type: byte[] on: Message: "{\"id\":1}". Caused by: No type converter available to convert from type: byte[] to the required type: uk.co.techneurons.messaging.Payload with value [B@48898819. Exchange[ID-Tonys-iMac-local-54996-1446407983661-0-2][Message: "{\"id\":1}"]. Caused by: [org.apache.camel.NoTypeConversionAvailableException - No type converter available to convert from type: byte[] to the required type: uk.co.techneurons.messaging.Payload with value [B@48898819]
I don't really want to use Spring, annotations, etc. I would like to keep service activation as simple as possible and use Camel as much as possible.
This is the producer
package uk.co.techneurons.messaging
import org.apache.camel.builder.RouteBuilder
import org.apache.camel.impl.DefaultCamelContext
object RabbitMQProducer extends App {
  val camelContext = new DefaultCamelContext
  val rabbitMQEndpoint: String = "rabbitmq:localhost:5672/advert?autoAck=false&threadPoolSize=1&username=guest&password=guest&exchangeType=topic&autoDelete=false&declare=false"
  val rabbitMQRouteBuilder = new RouteBuilder() {
    override def configure(): Unit = {
      from("direct:start").to(rabbitMQEndpoint)
    }
  }
  camelContext.addRoutes(rabbitMQRouteBuilder)
  camelContext.start
  val producerTemplate = camelContext.createProducerTemplate
  producerTemplate.setDefaultEndpointUri("direct:start")
  producerTemplate.sendBodyAndHeader("{\"id\":1}", "rabbitmq.ROUTING_KEY", "advert.edited")
  camelContext.stop
}
This is the consumer:
package uk.co.techneurons.messaging
import org.apache.camel.builder.RouteBuilder
import org.apache.camel.impl.DefaultCamelContext
import org.apache.camel.model.dataformat.JsonLibrary
object RabbitMQConsumer extends App {
  val camelContext = new DefaultCamelContext
  val startEndpoint = "rabbitmq:localhost:5672/advert?queue=es_index&exchangeType=topic&autoDelete=false&declare=false&autoAck=false"
  val consumer = camelContext.createConsumerTemplate
  val routeBuilder = new RouteBuilder() {
    override def configure(): Unit = {
      from(startEndpoint).transform(body().convertToString()).marshal().json(JsonLibrary.Jackson, classOf[Payload]).bean(classOf[JsonBeanExample])
    }
  }
  camelContext.addRoutes(routeBuilder)
  camelContext.start
  Thread.sleep(1000)
  camelContext.stop
}

case class Payload(id: Long)

class JsonBeanExample {
  def process(payload: Payload): Unit = {
    println(s"JSON ${payload}")
  }
}
For completeness, this is the sbt file for easy replication:
name := """camel-scala"""
version := "1.0"
scalaVersion := "2.11.7"
libraryDependencies ++= {
  val scalaTestVersion = "2.2.4"
  val camelVersion: String = "2.16.0"
  val rabbitVersion: String = "3.5.6"
  val slf4jVersion: String = "1.7.12"
  val logbackVersion: String = "1.1.3"
  Seq(
    "org.scala-lang.modules" %% "scala-xml" % "1.0.3",
    "org.apache.camel" % "camel-core" % camelVersion,
    "org.apache.camel" % "camel-jackson" % camelVersion,
    "org.apache.camel" % "camel-scala" % camelVersion,
    "org.apache.camel" % "camel-rabbitmq" % camelVersion,
    "com.rabbitmq" % "amqp-client" % rabbitVersion,
    "org.slf4j" % "slf4j-api" % slf4jVersion,
    "ch.qos.logback" % "logback-classic" % logbackVersion,
    "org.apache.camel" % "camel-test" % camelVersion % "test",
    "org.scalatest" %% "scalatest" % scalaTestVersion % "test")
}
Thanks
I decided that I needed to create a bean and register it (easier said than done! For some as yet unknown reason JndiRegistry didn't work with DefaultCamelContext, so I used a SimpleRegistry):
val registry: SimpleRegistry = new SimpleRegistry()
registry.put("myBean", new JsonBeanExample())
val camelContext = new DefaultCamelContext(registry)
Then I changed the consuming routeBuilder; it seems I had been over-transforming the message:
from(startEndpoint).unmarshal.json(JsonLibrary.Jackson, classOf[Payload]).to("bean:myBean?method=process")
I also changed the bean so setter methods were available, and added a toString:
import scala.beans.BeanProperty

class Payload {
  @BeanProperty var id: Long = _
  override def toString = s"Payload($id)"
}
class JsonBeanExample() {
  def process(payload: Payload): Unit = {
    println(s"received ${payload}")
  }
}
The next problem now is to get dead letter queues working, and ensuring that failures in the Bean handler make their way properly back up the stack
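As a starting point for the dead-letter part, Camel's error handler can be pointed at a separate endpoint. This is only a minimal sketch: the advert.dlx exchange name and the redelivery numbers are made-up values, while startEndpoint, Payload and myBean are the ones from the working route above.

import org.apache.camel.builder.RouteBuilder
import org.apache.camel.model.dataformat.JsonLibrary

val routeBuilder = new RouteBuilder() {
  override def configure(): Unit = {
    // Redeliver a few times, then park the original message on a dead-letter endpoint.
    errorHandler(deadLetterChannel("rabbitmq:localhost:5672/advert.dlx?exchangeType=topic&autoDelete=false")
      .maximumRedeliveries(3)
      .redeliveryDelay(1000)
      .useOriginalMessage())
    from(startEndpoint)
      .unmarshal.json(JsonLibrary.Jackson, classOf[Payload])
      .to("bean:myBean?method=process")
  }
}

The route itself is unchanged from the working version; only the error handler is new, so an exception thrown from the bean is redelivered a few times and then forwarded to the dead-letter endpoint instead of being lost.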
I am trying to store a Spark Dataset to HBase in an efficient way. When we tried to do something like that with a lambda in Java:
sparkDF.foreach(l->this.hBaseConnector.persistMappingToHBase(l,"name_of_hBaseTable") );
The function persistMappingToHBase uses the HBase Java client (Put) to store in HBase.
I get an exception: Exception in thread "main" org.apache.spark.SparkException: Task not serializable
Then we tried this:
sparkDF.foreachPartition(partition -> {
    final HBaseConnector hBaseConnector = new HBaseConnector();
    hBaseConnector.connect(hbaseProps);
    while (partition.hasNext()) {
        hBaseConnector.persistMappingToHBase(partition.next());
    }
    hBaseConnector.closeConnection();
});
which seems to work, but still looks quite inefficient, I guess because we create and close a connection for each partition of the dataframe.
What is a good way to store a Spark Dataset to HBase? I saw a connector developed by IBM but have never used it.
The following can be used to save the content to HBase
// Imports needed for this snippet (HBase 1.x client and Hadoop MapReduce APIs)
import java.util.UUID

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.mapreduce.Job

val hbaseConfig = HBaseConfiguration.create
hbaseConfig.set("hbase.zookeeper.quorum", "xx.xxx.xxx.xxx")
hbaseConfig.set("hbase.zookeeper.property.clientPort", "2181")

val job = Job.getInstance(hbaseConfig)
job.setOutputFormatClass(classOf[TableOutputFormat[_]])
job.getConfiguration.set(TableOutputFormat.OUTPUT_TABLE, "test_table")

val result = sparkDF.map(row => {
  // Using UUID as my rowkey, you can use your own rowkey
  val put = new Put(Bytes.toBytes(UUID.randomUUID().toString))
  // setting the value of each row to the Put object
  ....
  ....
  new Tuple2[ImmutableBytesWritable, Put](new ImmutableBytesWritable(), put)
})

// save result to the HBase table
result.saveAsNewAPIHadoopDataset(job.getConfiguration)
I have the following dependencies in my build.sbt file:
libraryDependencies += "org.apache.hbase" % "hbase-common" % "1.3.0"
libraryDependencies += "org.apache.hbase" % "hbase-client" % "1.3.0"
libraryDependencies += "org.apache.hbase" % "hbase-server" % "1.3.0"
I'm using this library, https://github.com/AsyncHttpClient/async-http-client, in my Scala project, and I'm performing some HTTP calls with it, but now on some HTTP calls I need to retry a call up to 3 times if I don't get my expected result.
How should I implement something like this?
thanks
This is an example of a retry function based on Future.recoverWith.
If you run it you can see that it prints "run process" until a Future succeeds, retrying at most 'times' times after the initial attempt.
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.Random

object X extends App {
  type Request = String
  type Response = String

  def retry(request: Request, process: Request => Future[Response], times: Int): Future[Response] = {
    val responseF = process(request)
    if (times > 0)
      responseF.recoverWith {
        case ex =>
          println("fail")
          retry(request, process, times - 1)
      }
    else
      responseF
  }

  def process(s: Request): Future[Response] = {
    println("run process")
    if (Random.nextBoolean()) Future.successful("good") else Future.failed(new Exception)
  }

  val result = retry("", process, 3)

  import scala.concurrent.duration._
  println(Await.result(result, 1.second))
}
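To wire this up with async-http-client, the call first has to be exposed as a Scala Future. The sketch below is only an illustration: it assumes the 2.x line of the library (package org.asynchttpclient; in the 1.x line the same classes live under com.ning.http.client), and the httpGet/expectOk helpers and the 200-status check are made up for the example.

import org.asynchttpclient.{AsyncCompletionHandler, DefaultAsyncHttpClient, Response}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.{Future, Promise}

val client = new DefaultAsyncHttpClient()

// Adapt the client's callback API to a Scala Future.
def httpGet(url: String): Future[Response] = {
  val promise = Promise[Response]()
  client.prepareGet(url).execute(new AsyncCompletionHandler[Response] {
    override def onCompleted(response: Response): Response = {
      promise.success(response)
      response
    }
    override def onThrowable(t: Throwable): Unit = promise.failure(t)
  })
  promise.future
}

// Turn an unexpected result into a failed Future so that recoverWith retries it.
def expectOk(url: String): Future[Response] =
  httpGet(url).flatMap { r =>
    if (r.getStatusCode == 200) Future.successful(r)
    else Future.failed(new Exception(s"unexpected status ${r.getStatusCode}"))
  }

With the Request/Response type aliases from the example above adjusted to String and org.asynchttpclient.Response, this can then be passed to the retry helper, for example retry("http://example.com", expectOk, 3).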
I've run into an issue with attempting to parse json in my spark job. I'm using spark 1.1.0, json4s, and the Cassandra Spark Connector, with DSE 4.6. The exception thrown is:
org.json4s.package$MappingException: Can't find constructor for BrowserData
  org.json4s.reflect.ScalaSigReader$.readConstructor(ScalaSigReader.scala:27)
  org.json4s.reflect.Reflector$ClassDescriptorBuilder.ctorParamType(Reflector.scala:108)
  org.json4s.reflect.Reflector$ClassDescriptorBuilder$$anonfun$6.apply(Reflector.scala:98)
  org.json4s.reflect.Reflector$ClassDescriptorBuilder$$anonfun$6.apply(Reflector.scala:95)
  scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
My code looks like this:
case class BrowserData(navigatorObjectData: Option[NavigatorObjectData],
                       flash_version: Option[FlashVersion],
                       viewport: Option[Viewport],
                       performanceData: Option[PerformanceData])

.... other case classes

def parseJson(b: Option[String]): Option[String] = {
  implicit val formats = DefaultFormats
  for {
    browserDataStr <- b
    browserData = parse(browserDataStr).extract[BrowserData]
    navObject <- browserData.navigatorObjectData
    userAgent <- navObject.userAgent
  } yield (userAgent)
}

def getJavascriptUa(rows: Iterable[com.datastax.spark.connector.CassandraRow]): Option[String] = {
  implicit val formats = DefaultFormats
  rows.collectFirst { case r if r.getStringOption("browser_data").isDefined =>
    parseJson(r.getStringOption("browser_data"))
  }.flatten
}

def getRequestUa(rows: Iterable[com.datastax.spark.connector.CassandraRow]): Option[String] = {
  rows.collectFirst { case r if r.getStringOption("ua").isDefined =>
    r.getStringOption("ua")
  }.flatten
}

def checkUa(rows: Iterable[com.datastax.spark.connector.CassandraRow], sessionId: String): Option[Boolean] = {
  for {
    jsUa <- getJavascriptUa(rows)
    reqUa <- getRequestUa(rows)
  } yield (jsUa == reqUa)
}

def run(name: String) = {
  val rdd = sc.cassandraTable("beehive", name).groupBy(r => r.getString("session_id"))
  val counts = rdd.map(r => (checkUa(r._2, r._1)))
  counts
}
I use :load to load the file into the REPL, and then call the run function. The failure is happening in the parseJson function, as far as I can tell. I've tried a variety of things to get this to work. From similar posts, I've made sure my case classes are at the top level in the file. I've tried compiling just the case class definitions into a jar, and including the jar like this: /usr/bin/dse spark --jars case_classes.jar
I've tried adding them to the conf like this: sc.getConf.setJars(Seq("/home/ubuntu/case_classes.jar"))
And I still get the same error. Should I compile all of my code into a jar? Is this a Spark issue or a json4s issue? Any help is appreciated.
While learning how to use Akka I/O I am trying to implement a simple protocol on top of Akka I/O, and was following the documentation here.
However, in my Gradle file I use version 2.3.9, as shown below:
dependencies {
    compile group: 'org.slf4j', name: 'slf4j-log4j12', version: '1.7.7'
    compile group: 'com.typesafe.akka', name: 'akka-actor_2.11', version: '2.3.9'
    compile group: 'com.typesafe.akka', name: 'akka-contrib_2.11', version: '2.3.9'
    compile group: 'org.scala-lang', name: 'scala-library', version: '2.11.5'
    testCompile group: 'junit', name: 'junit', version: '4.11'
}
Imports of some pipeline-specific classes, such as
import akka.io.SymmetricPipelineStage;
import akka.io.PipelineContext;
import akka.io.SymmetricPipePair;
generate "cannot resolve symbol" errors.
Hence my questions.
Were these removed, or is there some dependency I need to add to my Gradle file?
If they were removed, how should the encode/decode stage be dealt with?
Pipelines were experimental and indeed removed in Akka 2.3.
The removal was documented in the Migration Guide 2.2.x to 2.3.x.
There is also mention of being able to package the "older" pipeline implementation with Akka 2.3 here, though it doesn't appear to be a simple addition of a dependency.
I would wager that Akka Streams is intended to be the better replacement of pipelines, coming in Akka 2.4, but available now as an experimental module. The encode/decode stage or protocol layer can be handled by using Akka Streams in conjunction with Akka I/O.
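For a flavour of what the encode/decode stage looks like with Streams, here is a minimal sketch of a newline-delimited echo protocol. It assumes the akka-stream module (the 2.0 experimental module or later) where Framing.delimiter and ActorMaterializer are available; the object name, host and port are arbitrary.

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Flow, Framing, Tcp}
import akka.util.ByteString

object StreamsFramingSketch extends App {
  implicit val system = ActorSystem("framing")
  implicit val materializer = ActorMaterializer()

  // Decode stage: cut the raw byte stream into newline-delimited frames,
  // then encode the reply by appending the delimiter again.
  val protocol = Flow[ByteString]
    .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 1024, allowTruncation = true))
    .map(_.utf8String)
    .map(line => ByteString(s"echo: $line\n"))

  Tcp().bind("127.0.0.1", 6000).runForeach { connection =>
    connection.handleWith(protocol)
  }
}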
Yes, pipelines were removed without any alternative. I came from the Netty world and don't find pipelines "unintuitive": they accumulate buffers and supply child actors with ready-to-use messages.
Take a look at our solution; it requires "org.scalaz" %% "scalaz-core" % "7.2.14" as a dependency.
The Codec class is a State monad which is called by the actor and produces the output. In our projects we are using Varint32 protobuf encoding, so every message is prepended with a varint32 length field:
import com.google.protobuf.CodedInputStream
import com.trueaccord.scalapb.{GeneratedMessage, GeneratedMessageCompanion, Message}
import com.zeptolab.tlc.front.codecs.Varint32ProtoCodec.ProtoMessage
import scalaz.{-\/, State, \/, \/-}

trait Accumulator

trait Codec[IN, OUT] {

  type Stream = State[Accumulator, Seq[IN]]

  def decode(buffer: Array[Byte]): Throwable \/ IN

  def encode(message: OUT): Array[Byte]

  def emptyAcc: Accumulator

  def decodeStream(data: Array[Byte]): Stream

}

object Varint32ProtoCodec {

  type ProtoMessage[T] = GeneratedMessage with Message[T]

  def apply[IN <: ProtoMessage[IN], OUT <: ProtoMessage[OUT]](protoType: GeneratedMessageCompanion[IN]) = new Varint32ProtoCodec[IN, OUT](protoType)

}

class Varint32ProtoCodec[IN <: ProtoMessage[IN], OUT <: ProtoMessage[OUT]](protoType: GeneratedMessageCompanion[IN]) extends Codec[IN, OUT] {

  import com.google.protobuf.CodedOutputStream

  private case class AccumulatorImpl(expected: Int = -1, buffer: Array[Byte] = Array.empty) extends Accumulator

  override def emptyAcc: Accumulator = AccumulatorImpl()

  override def decode(buffer: Array[Byte]): Throwable \/ IN = {
    \/.fromTryCatchNonFatal {
      val dataLength = CodedInputStream.newInstance(buffer).readRawVarint32()
      val bufferLength = buffer.length
      val dataBuffer = buffer.drop(bufferLength - dataLength)
      protoType.parseFrom(dataBuffer)
    }
  }

  override def encode(message: OUT): Array[Byte] = {
    val messageBuf = message.toByteArray
    val messageBufLength = messageBuf.length
    val prependLength = CodedOutputStream.computeUInt32SizeNoTag(messageBufLength)
    val prependLengthBuffer = new Array[Byte](prependLength)
    CodedOutputStream.newInstance(prependLengthBuffer).writeUInt32NoTag(messageBufLength)
    prependLengthBuffer ++ messageBuf
  }

  override def decodeStream(data: Array[Byte]): Stream = State {
    case acc: AccumulatorImpl =>
      if (data.isEmpty) {
        (acc, Seq.empty)
      } else {
        val accBuffer = acc.buffer ++ data
        val accExpected = readExpectedLength(accBuffer, acc)
        if (accBuffer.length >= accExpected) {
          val (frameBuffer, restBuffer) = accBuffer.splitAt(accExpected)
          val output = decode(frameBuffer) match {
            case \/-(proto) => Seq(proto)
            case -\/(_) => Seq.empty
          }
          val (newAcc, recOutput) = decodeStream(restBuffer).run(emptyAcc)
          (newAcc, output ++ recOutput)
        } else (AccumulatorImpl(accExpected, accBuffer), Seq.empty)
      }
    case _ => (emptyAcc, Seq.empty)
  }

  private def readExpectedLength(data: Array[Byte], acc: AccumulatorImpl) = {
    if (acc.expected == -1 && data.length >= 1) {
      \/.fromTryCatchNonFatal {
        val is = CodedInputStream.newInstance(data)
        val dataLength = is.readRawVarint32()
        val tagLength = is.getTotalBytesRead
        dataLength + tagLength
      }.getOrElse(acc.expected)
    } else acc.expected
  }

}
And the Actor is:
import akka.actor.{Actor, ActorRef, Props}
import akka.event.Logging
import akka.util.ByteString
import com.zeptolab.tlc.front.codecs.{Accumulator, Varint32ProtoCodec}
import com.zeptolab.tlc.proto.protocol.{Downstream, Upstream}

object FrameCodec {
  def props() = Props[FrameCodec]
}

class FrameCodec extends Actor {

  import akka.io.Tcp._

  private val logger = Logging(context.system, this)
  private val codec = Varint32ProtoCodec[Upstream, Downstream](Upstream)
  private val sessionActor = context.actorOf(Session.props())

  def receive = {
    case r: Received =>
      context become stream(sender(), codec.emptyAcc)
      self ! r
    case PeerClosed => peerClosed()
  }

  private def stream(ioActor: ActorRef, acc: Accumulator): Receive = {
    case Received(data) =>
      val (next, output) = codec.decodeStream(data.toArray).run(acc)
      output.foreach { up =>
        sessionActor ! up
      }
      context become stream(ioActor, next)
    case d: Downstream =>
      val buffer = codec.encode(d)
      ioActor ! Write(ByteString(buffer))
    case PeerClosed => peerClosed()
  }

  private def peerClosed() = {
    logger.info("Connection closed")
    context stop self
  }

}
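For completeness, the FrameCodec above is registered as the connection handler by a plain Akka I/O server actor. This is just a sketch: the Server name, bind address and port are made up, while FrameCodec.props() is the one defined above.

import java.net.InetSocketAddress
import akka.actor.{Actor, ActorSystem, Props}
import akka.io.{IO, Tcp}

class Server(port: Int) extends Actor {
  import Tcp._
  import context.system

  // Ask the TCP manager to bind and send connection events to this actor.
  IO(Tcp) ! Bind(self, new InetSocketAddress("0.0.0.0", port))

  def receive = {
    case Bound(local) =>
      println(s"bound to $local")
    case Connected(remote, local) =>
      // Every connection gets its own FrameCodec, which then receives Received/PeerClosed.
      val handler = context.actorOf(FrameCodec.props())
      sender() ! Register(handler)
    case CommandFailed(_: Bind) =>
      context stop self
  }
}

object Server extends App {
  val system = ActorSystem("frontend")
  system.actorOf(Props(new Server(9000)), "server")
}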
I have a case class named Rdv:
case class Rdv(
  id: Option[Int],
  nom: String,
  prénom: String,
  sexe: Int,
  telPortable: String,
  telBureau: String,
  telPrivé: String,
  siteRDV: String,
  typeRDV: String,
  libelléRDV: String,
  numRDV: String,
  étape: String,
  dateRDV: Long,
  heureRDVString: String,
  statut: String,
  orderId: String)
and I would like to save a list of such elements on disk, and reload them later.
I tried with Java classes (ObjectOutputStream, FileOutputStream, ObjectInputStream, FileInputStream) but I get an error in the retrieving step: the statement
val n2 = ois.readObject().asInstanceOf[List[Rdv]]
always gets an error (ClassNotFoundException: Rdv), although the correct path is given in the imports.
Do you know a workaround to save such an object?
Please provide a little piece of code!
thanks
olivier
PS: I have the same error while using the Marshal class, as in this code:
object Application extends Controller {
  def index = Action {
    //implicit val Rdv2Writes = Json.writes[rdv2]
    def rdvTordv2(rdv: Rdv): rdv2 = new rdv2(
      rdv.nom,
      rdv.prénom,
      rdv.dateRDV,
      rdv.heureRDVString,
      rdv.telPortable,
      rdv.telBureau,
      rdv.telPrivé,
      rdv.siteRDV,
      rdv.typeRDV,
      rdv.libelléRDV,
      rdv.orderId,
      rdv.statut)
    val n = variables.manager.liste_locale
    val out = new FileOutputStream("out")
    out.write(Marshal.dump(n))
    out.close
    val in = new FileInputStream("out")
    val bytes = Stream.continually(in.read).takeWhile(-1 !=).map(_.toByte).toArray
    val bar: List[Rdv] = Marshal.load[List[Rdv]](bytes) <--------------
    val n3 = bar.map(rdv =>
      rdvTordv2(rdv))
    println("n3:" + n3.size)
    Ok(views.html.Application.olivier2(n3))
  }
}
The error occurs in the line marked with the arrow.
It seems that the conversion to the type List[Rdv] encounters problems, but why? Is it a Play-related problem?
OK, there's a problem with Play. I created a new Scala project with this code:
object Test1 extends App {
  // for testing purposes
  case class Person(name: String, age: Int)
  val liste_locale = List(new Person("paul", 18))
  val n = liste_locale
  val out = new FileOutputStream("out")
  out.write(Marshal.dump(n))
  out.close
  val in = new FileInputStream("out")
  val bytes = Stream.continually(in.read).takeWhile(-1 !=).map(_.toByte).toArray
  val bar: List[Person] = Marshal.load[List[Person]](bytes)
  println(s"bar:size=${bar.size}")
}
and the display is good ("bar:size=1").
Then I modified my previous code in the Play project, in the controller class, like this:
object Application extends Controller {
  def index = Action {
    // for testing purposes
    case class Person(name: String, age: Int)
    val liste_locale = List(new Person("paul", 18))
    val n = liste_locale
    val out = new FileOutputStream("out")
    out.write(Marshal.dump(n))
    out.close
    val in = new FileInputStream("out")
    val bytes = Stream.continually(in.read).takeWhile(-1 !=).map(_.toByte).toArray
    val bar: List[Person] = Marshal.load[List[Person]](bytes)
    println(s"bar:size=${bar.size}")
    Ok(views.html.Application.olivier2(Nil))
  }
}
and I have an error saying:
play.api.Application$$anon$1: Execution exception[[ClassNotFoundException: controllers.Application$$anonfun$index$1$Person$3]]
Does anyone have the answer?
Edit: I thought the error could come from sbt, so I modified build.scala like this:
import sbt._
import Keys._
import play.Project._
object ApplicationBuild extends Build {

  val appName = "sms_play_2"
  val appVersion = "1.0-SNAPSHOT"

  val appDependencies = Seq(
    // Add your project dependencies here,
    jdbc,
    anorm,
    "com.typesafe.slick" % "slick_2.10" % "2.0.0",
    "com.github.nscala-time" %% "nscala-time" % "0.6.0",
    "org.xerial" % "sqlite-jdbc" % "3.7.2",
    "org.quartz-scheduler" % "quartz" % "2.2.1",
    "com.esotericsoftware.kryo" % "kryo" % "2.22",
    "io.argonaut" %% "argonaut" % "6.0.2")

  val mySettings = Seq(
    (javaOptions in run) ++= Seq("-Dconfig.file=conf/dev.conf"))

  val playCommonSettings = Seq(
    Keys.fork := true)

  val main = play.Project(appName, appVersion, appDependencies).settings(
    Keys.fork in run := true,
    resolvers += Resolver.sonatypeRepo("snapshots")).settings(mySettings: _*)
    .settings(playCommonSettings: _*)
}
but without success; the error is still there (class Person not found).
can you help me?
Scala Pickling has reasonable momentum and the approach has many advantages (lots of the heavy lifting is done at compile time). There is a pluggable serialization mechanism, and formats like JSON are supported.
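A minimal sketch of that round trip through Pickling's JSON format, assuming the scala-pickling 0.10.x API (the imports and the JSONPickle call may differ in other versions); the small Person class stands in for Rdv and the file name is arbitrary:

import java.nio.charset.StandardCharsets
import java.nio.file.{Files, Paths}

import scala.pickling.Defaults._
import scala.pickling.json._

// Keep the case class at the top level of the file, not inside an Action,
// so that it can be found again when unpickling.
case class Person(name: String, age: Int)

object PicklingSketch extends App {
  val people = List(Person("paul", 18), Person("anna", 32))

  // Pickle to a JSON string and write it to disk.
  val jsonText: String = people.pickle.value
  Files.write(Paths.get("people.json"), jsonText.getBytes(StandardCharsets.UTF_8))

  // Read the file back and unpickle into the original type.
  val loaded = new String(Files.readAllBytes(Paths.get("people.json")), StandardCharsets.UTF_8)
  val restored = JSONPickle(loaded).unpickle[List[Person]]
  println(s"restored: $restored")
}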