I am using the following code to compare two image files, but even after downloading the required dependencies/jar files, the CompareCmd class is not recognized and I get a compilation error.
import org.im4java.core.*
import org.im4java.process.*
import org.im4java.utils.*
boolean compareImages(String file1, String file2) {
// This instance wraps the compare command
CompareCmd compare = new CompareCmd() //throws compilation error here
// For metric-output
compare.setErrorConsumer(StandardStream.STDERR)
IMOperation cmpOp = new IMOperation()
// Set the compare metric
cmpOp.metric("mae")
// Add the expected image
cmpOp.addImage(file1)
// Add the current image
cmpOp.addImage(file2)
// This stores the difference
// cmpOp.addImage(diff)
try {
// Do the compare
compare.run(cmpOp);
return true
}
catch (Exception ex) {
return false
}
}
Error:
unable to resolve class CompareCmd
@ line 150, column 20.
CompareCmd compare = new CompareCmd()
^
Can someone tell me if this class still exists in the latest version of im4java, or am I missing something?
Thanks in advance.
I've run into an issue when attempting to parse JSON in my Spark job. I'm using Spark 1.1.0, json4s, and the Cassandra Spark Connector, with DSE 4.6. The exception thrown is:
org.json4s.package$MappingException: Can't find constructor for BrowserData
org.json4s.reflect.ScalaSigReader$.readConstructor(ScalaSigReader.scala:27)
org.json4s.reflect.Reflector$ClassDescriptorBuilder.ctorParamType(Reflector.scala:108)
org.json4s.reflect.Reflector$ClassDescriptorBuilder$$anonfun$6.apply(Reflector.scala:98)
org.json4s.reflect.Reflector$ClassDescriptorBuilder$$anonfun$6.apply(Reflector.scala:95)
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
My code looks like this:
case class BrowserData(navigatorObjectData: Option[NavigatorObjectData],
flash_version: Option[FlashVersion],
viewport: Option[Viewport],
performanceData: Option[PerformanceData])
.... other case classes
def parseJson(b: Option[String]): Option[String] = {
implicit val formats = DefaultFormats
for {
browserDataStr <- b
browserData = parse(browserDataStr).extract[BrowserData]
navObject <- browserData.navigatorObjectData
userAgent <- navObject.userAgent
} yield (userAgent)
}
def getJavascriptUa(rows: Iterable[com.datastax.spark.connector.CassandraRow]): Option[String] = {
implicit val formats = DefaultFormats
rows.collectFirst { case r if r.getStringOption("browser_data").isDefined =>
parseJson(r.getStringOption("browser_data"))
}.flatten
}
def getRequestUa(rows: Iterable[com.datastax.spark.connector.CassandraRow]): Option[String] = {
rows.collectFirst { case r if r.getStringOption("ua").isDefined =>
r.getStringOption("ua")
}.flatten
}
def checkUa(rows: Iterable[com.datastax.spark.connector.CassandraRow], sessionId: String): Option[Boolean] = {
for {
jsUa <- getJavascriptUa(rows)
reqUa <- getRequestUa(rows)
} yield (jsUa == reqUa)
}
def run(name: String) = {
val rdd = sc.cassandraTable("beehive", name).groupBy(r => r.getString("session_id"))
val counts = rdd.map(r => (checkUa(r._2, r._1)))
counts
}
I use :load to load the file into the REPL and then call the run function. The failure is happening in the parseJson function, as far as I can tell. I've tried a variety of things to get this to work. Following similar posts, I've made sure my case classes are at the top level of the file. I've also tried compiling just the case class definitions into a jar and including that jar like this: /usr/bin/dse spark --jars case_classes.jar
I've tried adding them to the conf like this: sc.getConf.setJars(Seq("/home/ubuntu/case_classes.jar"))
And still the same error. Should I compile all of my code into a jar? Is this a Spark issue or a json4s issue? Any help at all is appreciated.
I am writing the following (with Scala 2.10 and Java 6):
import java.io._
def delete(file: File) {
if (file.isDirectory)
Option(file.listFiles).map(_.toList).getOrElse(Nil).foreach(delete(_))
file.delete
}
How would you improve it? The code seems to work, but it ignores the return value of java.io.File.delete. Can it be done more easily with scala.io instead of java.io?
With the pure Scala + Java way:
import scala.reflect.io.Directory
import java.io.File
val directory = new Directory(new File("/sampleDirectory"))
directory.deleteRecursively()
deleteRecursively() returns false on failure.
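If the result matters, here is a minimal sketch that surfaces the Boolean (the directory path is just an example):

import scala.reflect.io.Directory
import java.io.File

val dir = new Directory(new File("/sampleDirectory"))
// deleteRecursively() reports success/failure instead of throwing
if (!dir.deleteRecursively())
  println(s"Could not fully delete $dir")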
Try this code that throws an exception if it fails:
def deleteRecursively(file: File): Unit = {
if (file.isDirectory) {
file.listFiles.foreach(deleteRecursively)
}
if (file.exists && !file.delete) {
throw new Exception(s"Unable to delete ${file.getAbsolutePath}")
}
}
You could also fold or map over the deletes if you want to return a value for all of them.
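A minimal sketch of that idea (the helper name deleteCollecting is mine, not part of the answer): map over the deletes and keep each path together with its result instead of throwing.

import java.io.File

// Deletes children first, then the file/directory itself, collecting (path, deleted?) pairs.
def deleteCollecting(file: File): List[(String, Boolean)] = {
  val children =
    if (file.isDirectory) Option(file.listFiles).map(_.toList).getOrElse(Nil)
    else Nil
  children.flatMap(deleteCollecting) :+ (file.getAbsolutePath -> file.delete())
}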
Using scala-io
import scalax.file.Path
import java.io.IOException
val path = Path.fromString("/tmp/testfile")
try {
path.deleteRecursively(continueOnFailure = false)
} catch {
case e: IOException => // some file could not be deleted
}
Or better, you could use a Try:
import scala.util.Try

val path: Path = Path("/tmp/file")
Try(path.deleteRecursively(continueOnFailure = false))
which will either result in a Success[Int] containing the number of files deleted, or a Failure[IOException].
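For example, the Try can then be inspected like this (a sketch that simply follows the description above of what deleteRecursively yields):

import scala.util.{Failure, Success}

Try(path.deleteRecursively(continueOnFailure = false)) match {
  case Success(result) => println(s"Deleted: $result")
  case Failure(e)      => println(s"Delete failed: ${e.getMessage}")
}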
From
http://alvinalexander.com/blog/post/java/java-io-faq-how-delete-directory-tree
Using Apache Commons IO
import java.io.File;
import java.io.IOException;
import org.apache.commons.io.FileUtils;
import org.apache.commons.io.filefilter.WildcardFileFilter;
public void deleteDirectory(String directoryName)
throws IOException
{
try
{
FileUtils.deleteDirectory(new File(directoryName));
}
catch (IOException ioe)
{
// log the exception here
ioe.printStackTrace();
throw ioe;
}
}
The Scala version can just do this:
import java.io.File
import org.apache.commons.io.FileUtils
import org.apache.commons.io.filefilter.WildcardFileFilter
FileUtils.deleteDirectory(new File(outputFile))
Maven Repo Imports
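For reference, the commons-io dependency might be declared like this in sbt (the version shown is only an example; use a current release):

// build.sbt -- version is illustrative
libraryDependencies += "commons-io" % "commons-io" % "2.11.0"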
Using the Java NIO.2 API:
import java.io.IOException
import java.nio.file.{Files, Paths, Path, SimpleFileVisitor, FileVisitResult}
import java.nio.file.attribute.BasicFileAttributes
def remove(root: Path): Unit = {
Files.walkFileTree(root, new SimpleFileVisitor[Path] {
override def visitFile(file: Path, attrs: BasicFileAttributes): FileVisitResult = {
Files.delete(file)
FileVisitResult.CONTINUE
}
override def postVisitDirectory(dir: Path, exc: IOException): FileVisitResult = {
Files.delete(dir)
FileVisitResult.CONTINUE
}
})
}
remove(Paths.get("/tmp/testdir"))
Really, it's a pity that the NIO.2 API has been with us for so many years and yet so few people use it, even though it is clearly superior to the old File API.
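On Java 8+ the same children-before-parents traversal can also be written with Files.walk; this is only a sketch of the idea, not part of the answer above:

import java.nio.file.{Files, Path, Paths}
import java.util.Comparator
import java.util.function.Consumer

// Sorting in reverse order visits deeper paths first, so children are deleted before parents.
def removeWithWalk(root: Path): Unit = {
  val stream = Files.walk(root)
  try {
    stream
      .sorted(Comparator.reverseOrder[Path]())
      .forEach(new Consumer[Path] { def accept(p: Path): Unit = Files.delete(p) })
  } finally stream.close()
}

removeWithWalk(Paths.get("/tmp/testdir"))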
Expanding on Vladimir Matveev's NIO2 solution:
object Util {
import java.io.IOException
import java.nio.file.{Files, Paths, Path, SimpleFileVisitor, FileVisitResult}
import java.nio.file.attribute.BasicFileAttributes
def remove(root: Path, deleteRoot: Boolean = true): Unit =
Files.walkFileTree(root, new SimpleFileVisitor[Path] {
override def visitFile(file: Path, attributes: BasicFileAttributes): FileVisitResult = {
Files.delete(file)
FileVisitResult.CONTINUE
}
override def postVisitDirectory(dir: Path, exception: IOException): FileVisitResult = {
if (deleteRoot) Files.delete(dir)
FileVisitResult.CONTINUE
}
})
def removeUnder(string: String): Unit = remove(Paths.get(string), deleteRoot=false)
def removeAll(string: String): Unit = remove(Paths.get(string))
def removeUnder(file: java.io.File): Unit = remove(file.toPath, deleteRoot=false)
def removeAll(file: java.io.File): Unit = remove(file.toPath)
}
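Usage then looks like this (directory paths are illustrative):

Util.removeUnder("/tmp/scratch")                  // empties /tmp/scratch but keeps the directory
Util.removeAll(new java.io.File("/tmp/old-dir"))  // removes /tmp/old-dir entirely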
Using Java 6 and no extra dependencies, this is pretty much the only way to do it.
The problem with your function is that it returns Unit (which, by the way, I would make explicit by writing def delete(file: File): Unit = ...).
I took your code and modified it to return a map from file name to deletion status.
import java.io.File

def delete(file: File): Array[(String, Boolean)] = {
Option(file.listFiles).map(_.flatMap(f => delete(f))).getOrElse(Array()) :+ (file.getPath -> file.delete)
}
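That makes it easy to report failures afterwards, for example (the path is just an example):

val results = delete(new File("/tmp/testdir"))
results.filterNot(_._2).foreach { case (path, _) => println(s"could not delete $path") }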
To add to Slavik Muz's answer:
import java.io.File

def deleteFile(file: File): Boolean = {
def childrenOf(file: File): List[File] = Option(file.listFiles()).getOrElse(Array.empty).toList
@annotation.tailrec
def loop(files: List[File]): Boolean = files match {
case Nil ⇒ true
case child :: parents if child.isDirectory && child.listFiles().nonEmpty ⇒
loop((childrenOf(child) :+ child) ++ parents)
case fileOrEmptyDir :: rest ⇒
println(s"deleting $fileOrEmptyDir")
fileOrEmptyDir.delete()
loop(rest)
}
if (!file.exists()) false
else loop(childrenOf(file) :+ file)
}
This one uses java.io (plus Apache Commons IO's FileUtils) and can delete directories whose names match a wildcard string, whether or not they contain anything.
import java.io.File
import org.apache.commons.io.FileUtils

for (file <- new File("<path as String>").listFiles;
     if file.getName matches "[1-9]*") FileUtils.deleteDirectory(file)
Directory structure e.g.
* A/1/, A/2/, A/300/ ... that's why the regex string is [1-9]*. I couldn't find a File API in Scala that supports regexes (maybe I missed something).
Getting a little lengthy, but here's one that combines the recursion of Garrette's solution with the NPE-safety of the original question.
import java.io.File

def deleteFile(path: String) = {
val penultimateFile = new File(path.split('/').take(2).mkString("/"))
def getFiles(f: File): Set[File] = {
Option(f.listFiles)
.map(a => a.toSet)
.getOrElse(Set.empty)
}
def getRecursively(f: File): Set[File] = {
val files = getFiles(f)
val subDirectories = files.filter(_.isDirectory)
subDirectories.flatMap(getRecursively) ++ files + penultimateFile
}
getRecursively(penultimateFile).foreach(file => {
if (getFiles(file).isEmpty && file.getAbsoluteFile().exists) file.delete
})
}
This is a recursive method that cleans everything in a directory and returns the count of deleted files:
import java.io.File
import scala.annotation.tailrec

def cleanDir(dir: File): Int = {
@tailrec
def loop(list: Array[File], deletedFiles: Int): Int = {
if (list.isEmpty) deletedFiles
else {
if (list.head.isDirectory && !list.head.listFiles().isEmpty) {
loop(list.head.listFiles() ++ list.tail ++ Array(list.head), deletedFiles)
} else {
val isDeleted = list.head.delete()
if (isDeleted) loop(list.tail, deletedFiles + 1)
else loop(list.tail, deletedFiles)
}
}
}
loop(dir.listFiles(), 0)
}
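Example usage (the directory is illustrative):

val deleted = cleanDir(new File("/tmp/cache"))
println(s"$deleted files deleted")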
What I ended up with:
def deleteRecursively(f: File): Boolean = {
if (f.isDirectory) f.listFiles match {
case files: Array[File] => files.foreach(deleteRecursively)
case null =>
}
f.delete()
}
os-lib makes it easy to delete recursively with a one-liner:
os.remove.all(os.pwd/"dogs")
os-lib uses java.nio under the hood; it just doesn't expose all the Java ugliness. See the os-lib documentation for more info on how to use the library.
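Assuming an sbt build, the dependency might look like this (the version is only an example):

// build.sbt -- version is illustrative
libraryDependencies += "com.lihaoyi" %% "os-lib" % "0.8.1"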
You can do this by executing external system commands.
import sys.process._
def delete(path: String) = {
s"""rm -rf ${path}""".!!
}
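Note that !! throws a RuntimeException when the command exits with a non-zero status, so you may want to keep the failure as a value instead; a sketch (this obviously only works where an rm binary is available):

import scala.util.Try
import sys.process._

// Returns false instead of throwing when rm exits non-zero.
def deleteQuietly(path: String): Boolean =
  Try(s"rm -rf $path".!!).isSuccess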
I am writing my own database in Scala. To verify that my results are correct, I check against MySQL inside a specs2 specification. I get the right result and everything is just fine. But if I run the test again without any changes, I get a SQLException: No suitable driver found for jdbc:mysql://localhost:3306/DBNAME?user=DBUSER (null:-1). Why is the driver not loaded again?
Edit
import java.sql.{ Connection, DriverManager, ResultSet }
import org.specs2.mutable.Specification
// SDDB imports ...
class DBValidationSpec extends Specification {
"SDDB and MySQl" should {
// SDDB
// ...
// JDBC
val connectionString = "jdbc:mysql://localhost:3306/sddb_test?user=root"
val query = """SELECT content, SUM( duration ) duration
FROM test
WHERE times
BETWEEN '2011-12-08'
AND '2011-12-09'
GROUP BY content"""
classOf[com.mysql.jdbc.Driver]
"give the same result" in {
// ...
//sddbResult
lazy val conn = DriverManager.getConnection(connectionString)
try{
val rs = conn.createStatement().executeQuery(query)
var mysqlResult = Map[List[String], Int]()
while (rs.next) {
mysqlResult += (rs.getString("content") :: Nil) -> rs.getInt("duration")
}
sddbResult == mysqlResult && sddbResult.size == 478 must beTrue
} finally {
conn.close()
}
}
}
}
I left out some parts of my code because they don't belong to the question.
Edit #2
The problem became even weirder. I added a second test case that uses the same connectionString. The exception was only raised once; the second test succeeded. I added sequential to my test definition and saw that only the first test executed raises the exception. Afterwards I traced the classLoader to check whether it is the same one. It is.
I did the following workaround:
trait PreExecuting extends Before {
override def before {
var conn: Option[Connection] = None
try {
val connectionString = "jdbc:mysql://localhost:3306/sddb_test?user=root"
conn = Some(DriverManager.getConnection(connectionString))
} catch {
case _ =>
} finally {
conn map (_.close())
}
}
}
I don't get the exception any more because I suppress it with the PreExecuting trait. But I still wonder what is going wrong here.
I cannot pin the error down to this, but at the very least you should also close the result set and the statement:
val stmt = conn.createStatement()
val rs = stmt.executeQuery(query)
var mysqlResult = Map[List[String], Int]()
while (rs.next) {
mysqlResult += (rs.getString("content") :: Nil) -> rs.getInt("duration")
}
sddbResult == mysqlResult && sddbResult.size == 478 must beTrue
rs.close()
stmt.close()
It seems to be a problem with the driver registration; the driver has to be registered something like this...
DriverManager.registerDriver(new com.mysql.jdbc.Driver());
or something like this...
DriverManager.registerDriver(new DriverWrapper((Driver) Class.forName(props.getProperty("dbg.driver"), true, gcloader).newInstance()));
before calling getConnection. I hope this helps.
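In the Scala test from the question, that registration would amount to something like this (a sketch; the URL mirrors the one in the question):

import java.sql.DriverManager

// Register the MySQL driver explicitly before asking for a connection.
DriverManager.registerDriver(new com.mysql.jdbc.Driver())
val conn = DriverManager.getConnection("jdbc:mysql://localhost:3306/sddb_test?user=root")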
The driver is only loaded once.
No suitable driver usually means that the connection URL syntax is incorrect.
I am having trouble creating an export of my database using an org.dbunit.database.QueryDataSet. When I call org.dbunit.dataset.xml.FlatXmlDataSet.write(IDataSet, OutputStream), I get the following stack trace:
java.lang.IllegalStateException: Did not find column 'MYCOL' for <schema.table> 'MYSCHEMA .MYTABLE' in catalog 'MYDB' because names do not exactly match.
at org.dbunit.database.ResultSetTableMetaData.scrollTo(ResultSetTableMetaData.java:297)
at org.dbunit.database.ResultSetTableMetaData.createColumnFromDbMetaData(ResultSetTableMetaData.java:262)
at org.dbunit.database.ResultSetTableMetaData.createMetaData(ResultSetTableMetaData.java:154)
at org.dbunit.database.ResultSetTableMetaData.createMetaData(ResultSetTableMetaData.java:131)
at org.dbunit.database.ResultSetTableMetaData.<init>(ResultSetTableMetaData.java:97)
at org.dbunit.database.AbstractResultSetTable.<init>(AbstractResultSetTable.java:84)
at org.dbunit.database.AbstractResultSetTable.<init>(AbstractResultSetTable.java:63)
at org.dbunit.database.ForwardOnlyResultSetTable.<init>(ForwardOnlyResultSetTable.java:65)
at org.dbunit.database.CachedResultSetTableFactory.createTable(CachedResultSetTableFactory.java:52)
at org.dbunit.database.AbstractDatabaseConnection.createQueryTable(AbstractDatabaseConnection.java:90)
at org.dbunit.database.AbstractDatabaseConnection.createTable(AbstractDatabaseConnection.java:115)
at org.dbunit.database.QueryTableIterator.getTable(QueryTableIterator.java:143)
at org.dbunit.dataset.stream.DataSetProducerAdapter.produce(DataSetProducerAdapter.java:83)
at org.dbunit.dataset.xml.FlatXmlWriter.write(FlatXmlWriter.java:124)
at org.dbunit.dataset.xml.FlatXmlDataSet.write(FlatXmlDataSet.java:341)
In researching this, I saw that someone else had this problem back in February, and fixed it using a snapshot build of 2.4.4. I am using the regular release build of 2.4.4.
Any ideas?
This seems to be a bug in the DB2 driver.
If you look at your log, there is a stray space in the schema name. This schema name is returned by the DB2 driver. A fix has been implemented for this specific error in the latest DbUnit sources (bug 2838922).
As mentioned in bug 2838922, this might not solve the issue completely.
If this is the case, a workaround for the remaining problem is to implement your own IMetadataHandler, taking the default one as an example and changing the "matches" method as follows:
...
boolean areEqual =
areEqualIgnoreBothNull(catalog, catalogName, caseSensitive) &&
areEqualIgnoreNull(schema, schemaName, caseSensitive) &&
areEqualIgnoreNull(table, tableName, caseSensitive) &&
areEqualIgnoreNull(column, columnName, caseSensitive);
return areEqual;
private boolean areEqualIgnoreBothNull(String value1, String value2,
boolean caseSensitive) {
boolean areEqual = true;
if (value1 != null && value2 != null) {
if (value1.equals("") && value2.equals("")) {
if (caseSensitive) {
areEqual = value1.equals(value2);
} else {
areEqual = value1.equalsIgnoreCase(value2);
}
}
}
return areEqual;
}
(You can also solve the first schema name issue in your own IMetadataHandler by trimming the schema name before the equality test.)
The page http://dbunit.sourceforge.net/properties.html shows how to replace the implementation of IMetadataHandler.
This issue seems to occur only when using DB2 and is fixed in DbUnit 2.4.7.
All you have to do is:
IDatabaseConnection connection = ...
connection.getConfig().setProperty(
    "http://www.dbunit.org/properties/metadataHandler",
    new Db2MetadataHandler()
);