Were pipelines removed from Akka I/O?

While learning how to use Akka I/O, I am trying to implement a simple protocol on top of Akka I/O, following the documentation here.
However, in my Gradle file I use version 2.3.9, as shown below,
dependencies {
    compile group: 'org.slf4j', name: 'slf4j-log4j12', version: '1.7.7'
    compile group: 'com.typesafe.akka', name: 'akka-actor_2.11', version: '2.3.9'
    compile group: 'com.typesafe.akka', name: 'akka-contrib_2.11', version: '2.3.9'
    compile group: 'org.scala-lang', name: 'scala-library', version: '2.11.5'
    testCompile group: 'junit', name: 'junit', version: '4.11'
}
and imports of pipeline-specific classes such as
import akka.io.SymmetricPipelineStage;
import akka.io.PipelineContext;
import akka.io.SymmetricPipePair;
generate "cannot resolve symbol" errors.
Hence my questions:
Were these removed, or is there some dependency I need to add to my Gradle file?
If they were removed, how should the encode/decode stage be handled?

Pipelines were experimental and indeed removed in Akka 2.3.
The removal was documented in the Migration Guide 2.2.x to 2.3.x.
There is also mention of being able to package the "older" pipeline implementation with Akka 2.3 here, though it doesn't appear to be a simple addition of a dependency.
I would wager that Akka Streams is intended to be the better replacement for pipelines, coming in Akka 2.4 but available now as an experimental module. The encode/decode stage or protocol layer can be handled by using Akka Streams in conjunction with Akka I/O.
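For a flavor of what that looks like, here is a minimal sketch against the later, stabilized akka-stream API (the experimental milestones differed in detail); the host/port and the newline-delimited wire format are assumptions for illustration:
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Flow, Framing, Tcp}
import akka.util.ByteString

object FramedEcho extends App {
  implicit val system = ActorSystem("frames")
  implicit val materializer = ActorMaterializer()

  // decode stage: raw bytes -> newline-delimited frames -> String
  // encode stage: String -> framed bytes (the reverse direction of the old pipeline)
  val protocol = Flow[ByteString]
    .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 1024, allowTruncation = true))
    .map(_.utf8String)
    .map(s => ByteString(s.toUpperCase + "\n"))

  Tcp().bind("127.0.0.1", 8888).runForeach(conn => conn.handleWith(protocol))
}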

Yes, pipelines were removed without any alternative. I came from the Netty world and don't find pipelines "unintuitive" - they accumulate buffers and supply child actors with ready-to-use messages.
Take a look at our solution; it requires "org.scalaz" %% "scalaz-core" % "7.2.14" as a dependency.
The Codec class is a State monad which is called by the actor and produces output. In our projects we use varint32 protobuf encoding, so every message is prepended with a varint32 length field:
import com.google.protobuf.CodedInputStream
import com.trueaccord.scalapb.{GeneratedMessage, GeneratedMessageCompanion, Message}
import com.zeptolab.tlc.front.codecs.Varint32ProtoCodec.ProtoMessage
import scalaz.{-\/, State, \/, \/-}
trait Accumulator

trait Codec[IN, OUT] {
  type Stream = State[Accumulator, Seq[IN]]
  def decode(buffer: Array[Byte]): Throwable \/ IN
  def encode(message: OUT): Array[Byte]
  def emptyAcc: Accumulator
  def decodeStream(data: Array[Byte]): Stream
}
object Varint32ProtoCodec {
  type ProtoMessage[T] = GeneratedMessage with Message[T]

  def apply[IN <: ProtoMessage[IN], OUT <: ProtoMessage[OUT]](protoType: GeneratedMessageCompanion[IN]) =
    new Varint32ProtoCodec[IN, OUT](protoType)
}
class Varint32ProtoCodec[IN <: ProtoMessage[IN], OUT <: ProtoMessage[OUT]](protoType: GeneratedMessageCompanion[IN]) extends Codec[IN, OUT] {

  import com.google.protobuf.CodedOutputStream

  private case class AccumulatorImpl(expected: Int = -1, buffer: Array[Byte] = Array.empty) extends Accumulator

  override def emptyAcc: Accumulator = AccumulatorImpl()

  override def decode(buffer: Array[Byte]): Throwable \/ IN = {
    \/.fromTryCatchNonFatal {
      val dataLength = CodedInputStream.newInstance(buffer).readRawVarint32()
      val bufferLength = buffer.length
      // the payload is the last dataLength bytes; everything before it is the varint prefix
      val dataBuffer = buffer.drop(bufferLength - dataLength)
      protoType.parseFrom(dataBuffer)
    }
  }

  override def encode(message: OUT): Array[Byte] = {
    val messageBuf = message.toByteArray
    val messageBufLength = messageBuf.length
    val prependLength = CodedOutputStream.computeUInt32SizeNoTag(messageBufLength)
    val prependLengthBuffer = new Array[Byte](prependLength)
    CodedOutputStream.newInstance(prependLengthBuffer).writeUInt32NoTag(messageBufLength)
    prependLengthBuffer ++ messageBuf
  }

  override def decodeStream(data: Array[Byte]): Stream = State {
    case acc: AccumulatorImpl =>
      if (data.isEmpty) {
        (acc, Seq.empty)
      } else {
        val accBuffer = acc.buffer ++ data
        val accExpected = readExpectedLength(accBuffer, acc)
        if (accBuffer.length >= accExpected) {
          val (frameBuffer, restBuffer) = accBuffer.splitAt(accExpected)
          val output = decode(frameBuffer) match {
            case \/-(proto) => Seq(proto)
            case -\/(_) => Seq.empty
          }
          // recurse over the remainder in case several frames arrived in one chunk
          val (newAcc, recOutput) = decodeStream(restBuffer).run(emptyAcc)
          (newAcc, output ++ recOutput)
        } else (AccumulatorImpl(accExpected, accBuffer), Seq.empty)
      }
    case _ => (emptyAcc, Seq.empty)
  }

  private def readExpectedLength(data: Array[Byte], acc: AccumulatorImpl) = {
    if (acc.expected == -1 && data.length >= 1) {
      \/.fromTryCatchNonFatal {
        val is = CodedInputStream.newInstance(data)
        val dataLength = is.readRawVarint32()
        val tagLength = is.getTotalBytesRead
        dataLength + tagLength
      }.getOrElse(acc.expected)
    } else acc.expected
  }
}
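A hypothetical usage sketch (chunk contents invented; Upstream is a ScalaPB-generated type) showing how the State threads the partial-frame buffer between network chunks:
// Chunks as they arrived from the socket; TCP may split or coalesce frames arbitrarily.
val chunk1: Array[Byte] = ???
val chunk2: Array[Byte] = ???

val codec = Varint32ProtoCodec[Upstream, Downstream](Upstream)
val (acc1, out1) = codec.decodeStream(chunk1).run(codec.emptyAcc) // out1: frames completed so far
val (acc2, out2) = codec.decodeStream(chunk2).run(acc1)           // acc1 carries the partial tail over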
And the Actor is:
import akka.actor.{Actor, ActorRef, Props}
import akka.event.Logging
import akka.util.ByteString
import com.zeptolab.tlc.front.codecs.{Accumulator, Varint32ProtoCodec}
import com.zeptolab.tlc.proto.protocol.{Downstream, Upstream}
object FrameCodec {
  def props() = Props[FrameCodec]
}

class FrameCodec extends Actor {
  import akka.io.Tcp._

  private val logger = Logging(context.system, this)
  private val codec = Varint32ProtoCodec[Upstream, Downstream](Upstream)
  private val sessionActor = context.actorOf(Session.props())

  def receive = {
    case r: Received =>
      // first chunk: remember the connection actor and start with an empty accumulator
      context become stream(sender(), codec.emptyAcc)
      self ! r
    case PeerClosed => peerClosed()
  }

  private def stream(ioActor: ActorRef, acc: Accumulator): Receive = {
    case Received(data) =>
      val (next, output) = codec.decodeStream(data.toArray).run(acc)
      output.foreach { up =>
        sessionActor ! up
      }
      context become stream(ioActor, next)
    case d: Downstream =>
      val buffer = codec.encode(d)
      ioActor ! Write(ByteString(buffer))
    case PeerClosed => peerClosed()
  }

  private def peerClosed() = {
    logger.info("Connection closed")
    context stop self
  }
}
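For context, a minimal wiring sketch (not part of the original answer): the actor that accepts connections registers a fresh FrameCodec as the handler for each one, which is what makes Received messages arrive at the codec above:
import java.net.InetSocketAddress
import akka.actor.Actor
import akka.io.{IO, Tcp}

class Server extends Actor {
  import Tcp._
  import context.system

  IO(Tcp) ! Bind(self, new InetSocketAddress("0.0.0.0", 9000))

  def receive = {
    case Connected(remote, local) =>
      // All Received/PeerClosed events for this connection go to a fresh FrameCodec.
      sender() ! Register(context.actorOf(FrameCodec.props()))
  }
}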

Related

How to trigger source generation with sbt

I have an sbt sub-project which compiles messages.json files into new java sources. I've set the task up to run before running tests and before compiling the primary project, or run manually via a new command "gen-messages".
The problem is the message generation takes some time, it always generates all sources, and it is running too often. Some tasks like running tests with coverage end up generating and compiling the messages twice!
How can I monitor the generator's input files and run the source generation only if something has changed or the expected output Java files are missing?
Secondly, how would I go about running the generator only on changed messages.json files?
Currently the sbt commands I'm using are:
lazy val settingsForMessageGeneration =
  ((test in Test) <<= (test in Test) dependsOn(messageGenerationCommand)) ++
  ((compile in Compile) <<= (compile in Compile) dependsOn(messageGenerationCommand)) ++
  (messageGenerationCommand <<= messageGenerationTask) ++
  (sourceGenerators in Compile += messageGenerationTask.taskValue)

lazy val messageGenerationCommand = TaskKey[scala.collection.Seq[File]]("gen-messages")

lazy val messageGenerationTask = (
  sourceManaged,
  fullClasspath in Compile in messageGenerator,
  runner in Compile in messageGenerator,
  streams
) map { (dir, cp, r, s) =>
  lazy val f = getFileTree(new File("./subProjectWithMsgSources/src/")).filter(_.getName.endsWith("messages.json"))
  f.foreach({ te =>
    val messagePackagePath = te.getAbsolutePath().replace("messages.json", "msg").replace("./", "")
    val messagePath = te.getAbsolutePath().replace("./", "")
    val fi = new File(messagePackagePath)
    if (!fi.exists()) {
      fi.mkdirs()
    }
    val ar = List("-d", messagePackagePath, messagePath)
    toError(r.run("com.my.MessageClassGenerator", cp.files, ar, s.log))
  })
  getFileTree(new File("./subProjectWithMsgSources/src/"))
    .filter(_.getName.endsWith("/msg/*.java"))
    .to[scala.collection.Seq]
}
The message generator creates a directory with the newly created Java files - no other content will be in that directory.
You can use sbt.FileFunction.cached to run your source generator only when your input files or output files have been changed.
The idea is to factor your actual source generation into a function Set[File] => Set[File], and call it via FileFunction.cached.
lazy val settingsForMessageGeneration =
  ((test in Test) <<= (test in Test) dependsOn(messageGenerationCommand)) ++
  ((compile in Compile) <<= (compile in Compile) dependsOn(messageGenerationCommand)) ++
  (messageGenerationCommand <<= messageGenerationTask) ++
  (sourceGenerators in Compile += messageGenerationTask.taskValue)

lazy val messageGenerationCommand = TaskKey[scala.collection.Seq[File]]("gen-messages")

lazy val messageGenerationTask = (
  sourceManaged,
  fullClasspath in Compile in messageGenerator,
  runner in Compile in messageGenerator,
  streams
) map { (dir, cp, r, s) =>
  lazy val f = getFileTree(new File("./subProjectWithMsgSources/src/")).filter(_.getName.endsWith("messages.json"))
  def gen(sources: Set[File]): Set[File] = {
    sources.foreach({ te =>
      val messagePackagePath = te.getAbsolutePath().replace("messages.json", "msg").replace("./", "")
      val messagePath = te.getAbsolutePath().replace("./", "")
      val fi = new File(messagePackagePath)
      if (!fi.exists()) {
        fi.mkdirs()
      }
      val ar = List("-d", messagePackagePath, messagePath)
      toError(r.run("com.my.MessageClassGenerator", cp.files, ar, s.log))
    })
    // collect the generated .java files under the msg/ directories
    // (getName is the bare file name, so it can never end with "/msg/*.java")
    getFileTree(new File("./subProjectWithMsgSources/src/"))
      .filter(gf => gf.getParentFile.getName == "msg" && gf.getName.endsWith(".java"))
      .to[scala.collection.immutable.Set]
  }
  val func = FileFunction.cached(s.cacheDirectory / "gen-messages", FilesInfo.hash) { gen }
  func(f.toSet).toSeq
}
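As a side note, FilesInfo.hash makes the cache re-run gen only when the content of some messages.json actually changes (or an expected output disappears from the cache's view); if hashing many inputs ever becomes noticeably slow, FilesInfo.lastModified is a cheaper, timestamp-based style accepted by the same FileFunction.cached call.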

Camel Processing JSON Messages from RabbitMQ

I want to post a message in JSON format to RabbitMQ and have that message consumed successfully. I'm attempting to use Camel to integrate producers and consumers. However, I'm struggling to understand how to create a route to make this happen. I'm using JSON Schema to define the interface between the producer and consumer. My application creates JSON, converts it to a byte[], and a Camel ProducerTemplate is used to send the message to RabbitMQ. On the consumer end, the byte[] message needs to be converted to a String, then to JSON, and then unmarshalled to an object so I can process it. The following code line doesn't work, however:
from(startEndpoint).transform(body().convertToString()).marshal().json(JsonLibrary.Jackson, classOf[Payload]).bean(classOf[JsonBeanExample])
It's as if the bean is passed the original byte[] content and not the object created by json(JsonLibrary.Jackson, classOf[Payload]). All the Camel examples I've seen that use the json(..) call seem to be followed by a to(..), which ends the route. Here is the error message:
Caused by: org.apache.camel.InvalidPayloadException: No body available of type: uk.co.techneurons.messaging.Payload but has value: [B@48898819 of type: byte[] on: Message: "{\"id\":1}". Caused by: No type converter available to convert from type: byte[] to the required type: uk.co.techneurons.messaging.Payload with value [B@48898819. Exchange[ID-Tonys-iMac-local-54996-1446407983661-0-2][Message: "{\"id\":1}"]. Caused by: [org.apache.camel.NoTypeConversionAvailableException - No type converter available to convert from type: byte[] to the required type: uk.co.techneurons.messaging.Payload with value [B@48898819]
I don't really want to use Spring, annotations, etc.; I would like to keep service activation as simple as possible and use Camel as much as possible.
This is the producer
package uk.co.techneurons.messaging

import org.apache.camel.builder.RouteBuilder
import org.apache.camel.impl.DefaultCamelContext

object RabbitMQProducer extends App {
  val camelContext = new DefaultCamelContext
  val rabbitMQEndpoint: String = "rabbitmq:localhost:5672/advert?autoAck=false&threadPoolSize=1&username=guest&password=guest&exchangeType=topic&autoDelete=false&declare=false"
  val rabbitMQRouteBuilder = new RouteBuilder() {
    override def configure(): Unit = {
      from("direct:start").to(rabbitMQEndpoint)
    }
  }
  camelContext.addRoutes(rabbitMQRouteBuilder)
  camelContext.start
  val producerTemplate = camelContext.createProducerTemplate
  producerTemplate.setDefaultEndpointUri("direct:start")
  producerTemplate.sendBodyAndHeader("{\"id\":1}", "rabbitmq.ROUTING_KEY", "advert.edited")
  camelContext.stop
}
This is the consumer:
package uk.co.techneurons.messaging

import org.apache.camel.builder.RouteBuilder
import org.apache.camel.impl.DefaultCamelContext
import org.apache.camel.model.dataformat.JsonLibrary

object RabbitMQConsumer extends App {
  val camelContext = new DefaultCamelContext
  val startEndpoint = "rabbitmq:localhost:5672/advert?queue=es_index&exchangeType=topic&autoDelete=false&declare=false&autoAck=false"
  val consumer = camelContext.createConsumerTemplate
  val routeBuilder = new RouteBuilder() {
    override def configure(): Unit = {
      from(startEndpoint).transform(body().convertToString()).marshal().json(JsonLibrary.Jackson, classOf[Payload]).bean(classOf[JsonBeanExample])
    }
  }
  camelContext.addRoutes(routeBuilder)
  camelContext.start
  Thread.sleep(1000)
  camelContext.stop
}

case class Payload(id: Long)

class JsonBeanExample {
  def process(payload: Payload): Unit = {
    println(s"JSON ${payload}")
  }
}
For completeness, this is the sbt file for easy replication:
name := """camel-scala"""

version := "1.0"

scalaVersion := "2.11.7"

libraryDependencies ++= {
  val scalaTestVersion = "2.2.4"
  val camelVersion: String = "2.16.0"
  val rabbitVersion: String = "3.5.6"
  val slf4jVersion: String = "1.7.12"
  val logbackVersion: String = "1.1.3"
  Seq(
    "org.scala-lang.modules" %% "scala-xml" % "1.0.3",
    "org.apache.camel" % "camel-core" % camelVersion,
    "org.apache.camel" % "camel-jackson" % camelVersion,
    "org.apache.camel" % "camel-scala" % camelVersion,
    "org.apache.camel" % "camel-rabbitmq" % camelVersion,
    "com.rabbitmq" % "amqp-client" % rabbitVersion,
    "org.slf4j" % "slf4j-api" % slf4jVersion,
    "ch.qos.logback" % "logback-classic" % logbackVersion,
    "org.apache.camel" % "camel-test" % camelVersion % "test",
    "org.scalatest" %% "scalatest" % scalaTestVersion % "test")
}
Thanks
I decided that I needed to create a bean and register it (easier said than done! For some as yet unknown reason JNDIRegistry didn't work with DefaultCamelContext, so I used a SimpleRegistry):
val registry: SimpleRegistry = new SimpleRegistry()
registry.put("myBean", new JsonBeanExample())
val camelContext = new DefaultCamelContext(registry)
Then I changed the consuming routeBuilder - it seems I had been over-transforming the message.
from(startEndpoint).unmarshal.json(JsonLibrary.Jackson, classOf[Payload]).to("bean:myBean?method=process")
I also changed the bean so setter methods were available, and added a toString:
import scala.beans.BeanProperty

class Payload {
  @BeanProperty var id: Long = _
  override def toString = s"Payload($id)"
}

class JsonBeanExample() {
  def process(payload: Payload): Unit = {
    println(s"received ${payload}")
  }
}
The next problem is to get dead-letter queues working and to ensure that failures in the bean handler make their way properly back up the stack.
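A hedged sketch of the usual Camel approach (the dead-letter endpoint URI below is hypothetical): declare a dead letter channel as the route's error handler, so failures thrown from the bean are redelivered a few times and then parked:
val routeBuilder = new RouteBuilder() {
  override def configure(): Unit = {
    // After 3 failed redeliveries the exchange is routed to the (hypothetical) DLQ endpoint.
    errorHandler(deadLetterChannel("rabbitmq:localhost:5672/advert.dlx?exchangeType=topic&declare=false")
      .maximumRedeliveries(3)
      .redeliveryDelay(1000))
    from(startEndpoint)
      .unmarshal.json(JsonLibrary.Jackson, classOf[Payload])
      .to("bean:myBean?method=process")
  }
}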

JSON4s can't find constructor w/spark

I've run into an issue when attempting to parse JSON in my Spark job. I'm using Spark 1.1.0, json4s, and the Cassandra Spark Connector, with DSE 4.6. The exception thrown is:
org.json4s.package$MappingException: Can't find constructor for BrowserData org.json4s.reflect.ScalaSigReader$.readConstructor(ScalaSigReader.scala:27)
org.json4s.reflect.Reflector$ClassDescriptorBuilder.ctorParamType(Reflector.scala:108)
org.json4s.reflect.Reflector$ClassDescriptorBuilder$$anonfun$6.apply(Reflector.scala:98)
org.json4s.reflect.Reflector$ClassDescriptorBuilder$$anonfun$6.apply(Reflector.scala:95)
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
My code looks like this:
case class BrowserData(navigatorObjectData: Option[NavigatorObjectData],
                       flash_version: Option[FlashVersion],
                       viewport: Option[Viewport],
                       performanceData: Option[PerformanceData])

// ... other case classes

def parseJson(b: Option[String]): Option[String] = {
  implicit val formats = DefaultFormats
  for {
    browserDataStr <- b
    browserData = parse(browserDataStr).extract[BrowserData]
    navObject <- browserData.navigatorObjectData
    userAgent <- navObject.userAgent
  } yield (userAgent)
}
def getJavascriptUa(rows: Iterable[com.datastax.spark.connector.CassandraRow]): Option[String] = {
  implicit val formats = DefaultFormats
  rows.collectFirst { case r if r.getStringOption("browser_data").isDefined =>
    parseJson(r.getStringOption("browser_data"))
  }.flatten
}

def getRequestUa(rows: Iterable[com.datastax.spark.connector.CassandraRow]): Option[String] = {
  rows.collectFirst { case r if r.getStringOption("ua").isDefined =>
    r.getStringOption("ua")
  }.flatten
}

def checkUa(rows: Iterable[com.datastax.spark.connector.CassandraRow], sessionId: String): Option[Boolean] = {
  for {
    jsUa <- getJavascriptUa(rows)
    reqUa <- getRequestUa(rows)
  } yield (jsUa == reqUa)
}

def run(name: String) = {
  val rdd = sc.cassandraTable("beehive", name).groupBy(r => r.getString("session_id"))
  val counts = rdd.map(r => (checkUa(r._2, r._1)))
  counts
}
I use :load to load the file into the REPL, and then call the run function. The failure is happening in the parseJson function, as far as I can tell. I've tried a variety of things to get this to work. From similar posts, I've made sure my case classes are at the top level in the file. I've tried compiling just the case class definitions into a jar, and including the jar like this: /usr/bin/dse spark --jars case_classes.jar
I've tried adding them to the conf like this: sc.getConf.setJars(Seq("/home/ubuntu/case_classes.jar"))
And still the same error. Should I compile all of my code into a jar? Is this a Spark issue or a json4s issue? Any help at all appreciated.
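Not a verified fix, but the usual suspect with :load is that the REPL compiles case classes under mangled wrapper names (something like $iwC$$iwC$...$BrowserData), which json4s's ScalaSig reflection cannot resolve. A commonly suggested arrangement, with illustrative names, is to keep the case classes top level in a separately compiled jar and then import them in the REPL instead of re-declaring them:
// In a separately compiled jar (names here are illustrative), not pasted into the REPL:
package com.example.model

case class NavigatorObjectData(userAgent: Option[String])
case class FlashVersion(major: Option[Int])
case class Viewport(width: Option[Int], height: Option[Int])
case class PerformanceData(loadTime: Option[Long])
case class BrowserData(navigatorObjectData: Option[NavigatorObjectData],
                       flash_version: Option[FlashVersion],
                       viewport: Option[Viewport],
                       performanceData: Option[PerformanceData])
With that jar passed via --jars (as the question already does), an import com.example.model._ in the REPL gives json4s stable constructor signatures to find.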

Delete directory recursively in Scala

I am writing the following (with Scala 2.10 and Java 6):
import java.io._

def delete(file: File) {
  if (file.isDirectory)
    Option(file.listFiles).map(_.toList).getOrElse(Nil).foreach(delete(_))
  file.delete
}
How would you improve it? The code seems to work, but it ignores the return value of java.io.File.delete. Can it be done more easily with scala.io instead of java.io?
The pure Scala + Java way:
import scala.reflect.io.Directory
import java.io.File
val directory = new Directory(new File("/sampleDirectory"))
directory.deleteRecursively()
Note that deleteRecursively() returns false on failure.
Try this code that throws an exception if it fails:
def deleteRecursively(file: File): Unit = {
  if (file.isDirectory) {
    file.listFiles.foreach(deleteRecursively)
  }
  if (file.exists && !file.delete) {
    throw new Exception(s"Unable to delete ${file.getAbsolutePath}")
  }
}
You could also fold or map over the delete if you want to return a value for all the deletes.
Using Scala IO:
import java.io.IOException
import scalax.file.Path

val path = Path.fromString("/tmp/testfile")
try {
  path.deleteRecursively(continueOnFailure = false)
} catch {
  case e: IOException => // some file could not be deleted
}
or better, you could use a Try
val path: Path = Path ("/tmp/file")
Try(path.deleteRecursively(continueOnFailure = false))
which will either result in a Success[Int] containing the number of files deleted, or a Failure[IOException].
From
http://alvinalexander.com/blog/post/java/java-io-faq-how-delete-directory-tree
Using Apache Commons IO:
import org.apache.commons.io.FileUtils;
import org.apache.commons.io.filefilter.WildcardFileFilter;

public void deleteDirectory(String directoryName) throws IOException {
    try {
        FileUtils.deleteDirectory(new File(directoryName));
    } catch (IOException ioe) {
        // log the exception here
        ioe.printStackTrace();
        throw ioe;
    }
}
In Scala you can just do this:
import org.apache.commons.io.FileUtils
import org.apache.commons.io.filefilter.WildcardFileFilter
FileUtils.deleteDirectory(new File(outputFile))
Using the Java NIO.2 API:
import java.io.IOException
import java.nio.file.{Files, Paths, Path, SimpleFileVisitor, FileVisitResult}
import java.nio.file.attribute.BasicFileAttributes

def remove(root: Path): Unit = {
  Files.walkFileTree(root, new SimpleFileVisitor[Path] {
    override def visitFile(file: Path, attrs: BasicFileAttributes): FileVisitResult = {
      Files.delete(file)
      FileVisitResult.CONTINUE
    }
    override def postVisitDirectory(dir: Path, exc: IOException): FileVisitResult = {
      Files.delete(dir)
      FileVisitResult.CONTINUE
    }
  })
}

remove(Paths.get("/tmp/testdir"))
Really, it's a pity that the NIO.2 API has been with us for so many years and yet so few people use it, even though it is really superior to the old File API.
Expanding on Vladimir Matveev's NIO2 solution:
object Util {
  import java.io.IOException
  import java.nio.file.{Files, Paths, Path, SimpleFileVisitor, FileVisitResult}
  import java.nio.file.attribute.BasicFileAttributes

  def remove(root: Path, deleteRoot: Boolean = true): Unit =
    Files.walkFileTree(root, new SimpleFileVisitor[Path] {
      override def visitFile(file: Path, attributes: BasicFileAttributes): FileVisitResult = {
        Files.delete(file)
        FileVisitResult.CONTINUE
      }
      override def postVisitDirectory(dir: Path, exception: IOException): FileVisitResult = {
        if (deleteRoot) Files.delete(dir)
        FileVisitResult.CONTINUE
      }
    })

  def removeUnder(string: String): Unit = remove(Paths.get(string), deleteRoot = false)
  def removeAll(string: String): Unit = remove(Paths.get(string))
  def removeUnder(file: java.io.File): Unit = remove(file.toPath, deleteRoot = false)
  def removeAll(file: java.io.File): Unit = remove(file.toPath)
}
Using Java 6 without dependencies, this is pretty much the only way to do it.
The problem with your function is that it returns Unit (which, by the way, I would note explicitly: def delete(file: File): Unit = { ... }).
I took your code and modified it to return a map from file name to deletion status:
def delete(file: File): Array[(String, Boolean)] = {
  Option(file.listFiles).map(_.flatMap(f => delete(f))).getOrElse(Array()) :+ (file.getPath -> file.delete)
}
To add to Slavik Muz's answer:
def deleteFile(file: File): Boolean = {
  def childrenOf(file: File): List[File] = Option(file.listFiles()).getOrElse(Array.empty).toList

  @annotation.tailrec
  def loop(files: List[File]): Boolean = files match {
    case Nil ⇒ true
    case child :: parents if child.isDirectory && child.listFiles().nonEmpty ⇒
      loop((childrenOf(child) :+ child) ++ parents)
    case fileOrEmptyDir :: rest ⇒
      println(s"deleting $fileOrEmptyDir")
      fileOrEmptyDir.delete()
      loop(rest)
  }

  if (!file.exists()) false
  else loop(childrenOf(file) :+ file)
}
This one uses java.io, but it can delete directories whose names match a wildcard string, whether or not they contain any content:
import java.io.File
import org.apache.commons.io.FileUtils

for (file <- new File("<path as String>").listFiles;
     if file.getName() matches ("[1-9]*")) FileUtils.deleteDirectory(file)
Directory structure e.g. A/1/, A/2/, A/300/ ...; that's why the regex string [1-9]*. I couldn't find a File API in Scala that supports regexes (maybe I missed something).
It's getting a little lengthy, but here's one that combines the recursion of Garrette's solution with the NPE-safety of the original question.
def deleteFile(path: String) = {
  val penultimateFile = new File(path.split('/').take(2).mkString("/"))

  def getFiles(f: File): Set[File] = {
    Option(f.listFiles)
      .map(a => a.toSet)
      .getOrElse(Set.empty)
  }

  def getRecursively(f: File): Set[File] = {
    val files = getFiles(f)
    val subDirectories = files.filter(path => path.isDirectory)
    subDirectories.flatMap(getRecursively) ++ files + penultimateFile
  }

  getRecursively(penultimateFile).foreach(file => {
    if (getFiles(file).isEmpty && file.getAbsoluteFile().exists) file.delete
  })
}
This is a recursive method that cleans everything in a directory and returns the count of deleted files:
import scala.annotation.tailrec

def cleanDir(dir: File): Int = {
  @tailrec
  def loop(list: Array[File], deletedFiles: Int): Int = {
    if (list.isEmpty) deletedFiles
    else {
      if (list.head.isDirectory && list.head.listFiles().nonEmpty) {
        // push the directory's children in front of it, so it is revisited once empty
        loop(list.head.listFiles() ++ list.tail ++ Array(list.head), deletedFiles)
      } else {
        val isDeleted = list.head.delete()
        if (isDeleted) loop(list.tail, deletedFiles + 1)
        else loop(list.tail, deletedFiles)
      }
    }
  }
  loop(dir.listFiles(), 0)
}
What I ended up with
def deleteRecursively(f: File): Boolean = {
  if (f.isDirectory) f.listFiles match {
    case files: Array[File] => files.foreach(deleteRecursively)
    case null =>
  }
  f.delete()
}
os-lib makes it easy to delete recursively with a one-liner:
os.remove.all(os.pwd/"dogs")
os-lib uses java.nio under the hood, but doesn't expose all the Java ugliness. See here for more info on how to use the library.
You can do this by executing external system commands:
import sys.process._

def delete(path: String) = {
  s"""rm -rf ${path}""".!!
}
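One caveat: the string form is tokenized on whitespace (no shell is involved), so a path with spaces turns into several arguments. sys.process also accepts a Seq, which passes each argument through verbatim; either way this only works where rm exists:
import sys.process._

// Safer variant: each argument is passed as-is, no whitespace splitting.
def delete(path: String): String = Seq("rm", "-rf", path).!!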

how to save a list of case classes in scala

I have a case class named Rdv:
case class Rdv(
  id: Option[Int],
  nom: String,
  prénom: String,
  sexe: Int,
  telPortable: String,
  telBureau: String,
  telPrivé: String,
  siteRDV: String,
  typeRDV: String,
  libelléRDV: String,
  numRDV: String,
  étape: String,
  dateRDV: Long,
  heureRDVString: String,
  statut: String,
  orderId: String)
and I would like to save a list of such elements on disk, and reload them later.
I tried with Java classes (ObjectOutputStream, FileOutputStream, ObjectInputStream, FileInputStream), but I get an error in the retrieving step: the statement
val n2 = ois.readObject().asInstanceOf[List[Rdv]]
always fails with a ClassNotFoundException: Rdv, although the correct path is given in the imports.
Do you know a workaround to save such an object?
Please provide a little piece of code!
Thanks,
Olivier
PS: I have the same error while using the Marshal class, as in this code:
object Application extends Controller {
  def index = Action {
    //implicit val Rdv2Writes = Json.writes[rdv2]
    def rdvTordv2(rdv: Rdv): rdv2 = new rdv2(
      rdv.nom,
      rdv.prénom,
      rdv.dateRDV,
      rdv.heureRDVString,
      rdv.telPortable,
      rdv.telBureau,
      rdv.telPrivé,
      rdv.siteRDV,
      rdv.typeRDV,
      rdv.libelléRDV,
      rdv.orderId,
      rdv.statut)
    val n = variables.manager.liste_locale
    val out = new FileOutputStream("out")
    out.write(Marshal.dump(n))
    out.close
    val in = new FileInputStream("out")
    val bytes = Stream.continually(in.read).takeWhile(-1 !=).map(_.toByte).toArray
    val bar: List[Rdv] = Marshal.load[List[Rdv]](bytes) // <-- the failing line
    val n3 = bar.map(rdv => rdvTordv2(rdv))
    println("n3:" + n3.size)
    Ok(views.html.Application.olivier2(n3))
  }
}
in the line with the arrow.
It seems that the conversion to the type List[Rdv] encounters problems, but why? Is it a Play-related problem?
OK, there's a problem with Play:
I created a new scala project with this code:
object Test1 extends App {
  // for testing purposes
  case class Person(name: String, age: Int)

  val liste_locale = List(new Person("paul", 18))
  val n = liste_locale
  val out = new FileOutputStream("out")
  out.write(Marshal.dump(n))
  out.close
  val in = new FileInputStream("out")
  val bytes = Stream.continually(in.read).takeWhile(-1 !=).map(_.toByte).toArray
  val bar: List[Person] = Marshal.load[List[Person]](bytes)
  println(s"bar:size=${bar.size}")
}
and the display is good ("bar:size=1").
Then I modified my previous code in the Play project, in the controller class, like this:
object Application extends Controller {
  def index = Action {
    // for testing purposes
    case class Person(name: String, age: Int)

    val liste_locale = List(new Person("paul", 18))
    val n = liste_locale
    val out = new FileOutputStream("out")
    out.write(Marshal.dump(n))
    out.close
    val in = new FileInputStream("out")
    val bytes = Stream.continually(in.read).takeWhile(-1 !=).map(_.toByte).toArray
    val bar: List[Person] = Marshal.load[List[Person]](bytes)
    println(s"bar:size=${bar.size}")
    Ok(views.html.Application.olivier2(Nil))
  }
}
and I have an error saying:
play.api.Application$$anon$1: Execution exception[[ClassNotFoundException: controllers.Application$$anonfun$index$1$Person$3]]
Does anyone have the answer?
Edit: I thought the error could come from sbt, so I modified build.scala like this:
import sbt._
import Keys._
import play.Project._

object ApplicationBuild extends Build {
  val appName = "sms_play_2"
  val appVersion = "1.0-SNAPSHOT"

  val appDependencies = Seq(
    // Add your project dependencies here,
    jdbc,
    anorm,
    "com.typesafe.slick" % "slick_2.10" % "2.0.0",
    "com.github.nscala-time" %% "nscala-time" % "0.6.0",
    "org.xerial" % "sqlite-jdbc" % "3.7.2",
    "org.quartz-scheduler" % "quartz" % "2.2.1",
    "com.esotericsoftware.kryo" % "kryo" % "2.22",
    "io.argonaut" %% "argonaut" % "6.0.2")

  val mySettings = Seq(
    (javaOptions in run) ++= Seq("-Dconfig.file=conf/dev.conf"))

  val playCommonSettings = Seq(
    Keys.fork := true)

  val main = play.Project(appName, appVersion, appDependencies).settings(
    Keys.fork in run := true,
    resolvers += Resolver.sonatypeRepo("snapshots")).settings(mySettings: _*)
    .settings(playCommonSettings: _*)
}
but without success; the error is still there (class Person not found).
Can you help me?
Scala Pickling has reasonable momentum and the approach has many advantages (lots of the heavy lifting is done at compile time). There is a pluggable serialization mechanism, and formats like JSON are supported.
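For illustration, a minimal sketch of the JSON round trip (assuming roughly the scala-pickling 0.10.x API, e.g. the "org.scala-lang.modules" %% "scala-pickling" % "0.10.1" artifact; details vary between versions). Note the case class stays top level, which sidesteps the ClassNotFoundException above:
import scala.pickling.Defaults._
import scala.pickling.json._

case class Person(name: String, age: Int) // top level, not declared inside an Action

object PicklingDemo extends App {
  val pickled = List(Person("paul", 18)).pickle // pickled.value is the JSON text
  val restored = pickled.unpickle[List[Person]] // back to a List[Person]
  println(s"restored: $restored")
}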
