What is the difference between Agitar and Quickcheck property based testing? - java

A number of years ago a Java testing tool called Agitar was popular. It appeared to do something like property-based testing.
Nowadays, property-based testing based on Haskell's QuickCheck is popular. There are a number of ports to Java, including:
quickcheck
jcheck
junit-quickcheck
My question is: What is the difference between Agitar and Quickcheck property based testing?

To me, the key features of Haskell QuickCheck are:
1. It generates random data for testing.
2. If a test fails, it repeatedly "shrinks" the data (e.g., changing numbers to zero, reducing the size of a list) until it finds the simplest test case that still fails. This is very useful, because when you see the simplest test case, you often know exactly where the bug is and how to fix it.
3. It starts testing with simple data, and gradually moves on to more complex data. This is useful because it means that tests fail more quickly. Also, it ensures that edge cases (e.g., empty lists, zeroes) are properly tested.
Quickcheck for Java supports (1), but not (2) or (3). I don't know what features are supported by Agitar, but it would be useful to check.
Additionally, you might look into ScalaCheck. Since Scala is interoperable with Java, you could use it to test your Java code. I haven't used it, so I don't know which features it has, but I suspect it has more features than Java Quickcheck.

It's worth noting that as of version 0.6, junit-quickcheck supports shrinking:
http://pholser.github.io/junit-quickcheck/site/0.6-alpha-3-SNAPSHOT/usage/shrinking.html
The quickcheck project doesn't appear to have had any new releases since 2011:
https://bitbucket.org/blob79/quickcheck
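For illustration, here is roughly what a property looks like with junit-quickcheck's JUnit 4 runner. This is a minimal sketch; the class name and the string round-trip property are made up for the example, so check the details against the docs linked above:

    import com.pholser.junit.quickcheck.Property;
    import com.pholser.junit.quickcheck.runner.JUnitQuickcheck;
    import org.junit.runner.RunWith;
    import static org.junit.Assert.assertEquals;

    // junit-quickcheck generates random arguments for each @Property method;
    // as of 0.6 it also shrinks failing inputs to simpler counterexamples.
    @RunWith(JUnitQuickcheck.class)
    public class StringProperties {

        // Property: reversing a string twice gives back the original string.
        @Property
        public void reverseTwiceIsIdentity(String s) {
            String roundTripped = new StringBuilder(s).reverse().reverse().toString();
            assertEquals(s, roundTripped);
        }
    }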

Related

Randomized Testing in Java - what is it and how to achieve it?

I am confused about randomized testing.
This is cited from the proj1b spec:
"The autograder project 1A largely relies on randomized tests. For
example, our JUnit tests on gradescope simply call random methods of
your LinkedListDeque class and our correct implementation
LinkedListDequeSolution and as soon as we see any disagreement, the
test fails and prints out a sequence of operations that caused the
failure. "
(http://datastructur.es/sp17/materials/proj/proj1b/proj1b.html)
I do not understand what it means to
"call random methods of the tested class and the correct class"
I need to write something really similar to that autograder, but I do not know whether I should test different methods together, using a loop to randomly pick some of them to call. If so, since we can already test all the methods with JUnit, why do we need randomized testing at all? Also, if I combine all the tests together, why would I still call it a JUnit test?
If you do not mind, some examples would make this easier to understand.
Just to elaborate on the "random" testing:
There is a framework called QuickCheck, initially written for the Haskell programming language. It has been ported to many other languages, including Java: there is jqwik for JUnit 5, or the (probably outdated) jcheck.
The idea is "simply":
you describe properties of your methods, like a(b(x)) == b(a(x))
the framework then creates random input for method calls and tries to find examples where a property doesn't hold (see the sketch below)
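As a concrete sketch with jqwik (the class and property names are made up for illustration), here is a property that looks plausible but actually fails: jqwik generates random ints, finds the counterexample Integer.MIN_VALUE, whose absolute value overflows and stays negative, and shrinks the failing input as far as it can:

    import net.jqwik.api.ForAll;
    import net.jqwik.api.Property;

    class AbsProperties {

        // Looks true, but Math.abs(Integer.MIN_VALUE) is negative, because
        // +2147483648 does not fit in an int; jqwik reports the counterexample.
        @Property
        boolean absIsNonNegative(@ForAll int x) {
            return Math.abs(x) >= 0;
        }
    }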
I assume they are talking about Model-Based Testing. For that you'd have to create models - simplified versions of your production behaviour. Then you can list the methods that can be invoked and the dependencies between those methods. After that you'd have to choose a random one and invoke it on both - the model and the app. If the results are the same, then it works right. If the results differ, then there is a bug either in your model or in your app. You can read more in this article.
In Java you can either write this logic on your own or use existing frameworks. The only existing one that I know of in Java is GraphWalker. But I haven't used it and don't know how good it is.
The original frameworks (like QuickCheck) are also able to "shrink" - if it took 50 calls to random methods to find a bug, then they will try to find the exact sequence of a few steps that leads to that bug. I don't know whether the Java frameworks offer this, but it may be worth looking into ScalaCheck if you need a JVM (but not necessarily Java) solution.
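To make the autograder idea concrete, here is a minimal sketch of such a randomized differential test. The operation mix, the fixed seed, and the iteration count are arbitrary choices, and java.util.ArrayDeque stands in as the trusted reference; in the project you would substitute your own LinkedListDeque for the deque under test:

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.Random;

    public class RandomizedDequeTest {
        public static void main(String[] args) {
            Random rng = new Random(42);                  // fixed seed => reproducible failures
            Deque<Integer> tested = new ArrayDeque<>();   // replace with your LinkedListDeque
            Deque<Integer> reference = new ArrayDeque<>();
            StringBuilder log = new StringBuilder();      // sequence of operations so far

            for (int i = 0; i < 10000; i++) {
                int op = rng.nextInt(3);                  // pick a random operation
                if (op == 0) {
                    int v = rng.nextInt(100);
                    tested.addFirst(v);
                    reference.addFirst(v);
                    log.append("addFirst(").append(v).append(")\n");
                } else if (op == 1 && !reference.isEmpty()) {
                    log.append("removeLast()\n");
                    int got = tested.removeLast();
                    int expected = reference.removeLast();
                    if (got != expected) {                // first disagreement: fail, print the trace
                        throw new AssertionError("after:\n" + log + "got " + got + ", expected " + expected);
                    }
                } else {
                    log.append("size()\n");
                    if (tested.size() != reference.size()) {
                        throw new AssertionError("size mismatch after:\n" + log);
                    }
                }
            }
            System.out.println("No disagreement found in 10000 random operations.");
        }
    }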

Is java.awt.geom suitable for discrete calculations?

The package java.awt.geom allows testing whether a point lies within a rectangle, and similar questions. In particular, I need to know whether a rectangle is intersected by a line. All involved values are integers.
However, it appears we cannot make those calculations use integers instead of floating point. As I need a completely consistent and reproducible result (its factual accuracy is actually not as important), I am worried this might be a bad approach. The program will be deployed on Windows, Linux and Android platforms, and I do not have full control over the machines.
I have implemented the required algorithm myself (using pure integer arithmetic), and it meets all my needs. Yet, if possible, I would like to use the preprovided package. Is there some sort of guarantee of its consistency?
Yet, if possible, I would like to use the preprovided package.
It is unlikely the J2SE classes will be available in Android, so stick with your own custom rolled solution.
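If you do stay with your own integer implementation, the standard building block is the sign of a cross product, which stays in exact integer arithmetic throughout (long is wide enough for products of ints). Below is a minimal sketch of a segment-vs-rectangle test along those lines; the class and method names are illustrative and not part of java.awt.geom:

    final class IntGeom {

        // Sign of the cross product (b-a) x (c-a); exact for int inputs,
        // since the products fit comfortably in a long.
        static int orientation(long ax, long ay, long bx, long by, long cx, long cy) {
            return Long.signum((bx - ax) * (cy - ay) - (by - ay) * (cx - ax));
        }

        // Is point (px,py) within the bounding box of segment a-b?
        static boolean onSegment(int ax, int ay, int bx, int by, int px, int py) {
            return Math.min(ax, bx) <= px && px <= Math.max(ax, bx)
                && Math.min(ay, by) <= py && py <= Math.max(ay, by);
        }

        // Do segments a-b and c-d intersect (including touching endpoints)?
        static boolean segmentsIntersect(int ax, int ay, int bx, int by,
                                         int cx, int cy, int dx, int dy) {
            int o1 = orientation(ax, ay, bx, by, cx, cy);
            int o2 = orientation(ax, ay, bx, by, dx, dy);
            int o3 = orientation(cx, cy, dx, dy, ax, ay);
            int o4 = orientation(cx, cy, dx, dy, bx, by);
            if (o1 != o2 && o3 != o4) return true;   // proper crossing
            // collinear cases: an endpoint lies on the other segment
            return (o1 == 0 && onSegment(ax, ay, bx, by, cx, cy))
                || (o2 == 0 && onSegment(ax, ay, bx, by, dx, dy))
                || (o3 == 0 && onSegment(cx, cy, dx, dy, ax, ay))
                || (o4 == 0 && onSegment(cx, cy, dx, dy, bx, by));
        }

        // Does the segment intersect the axis-aligned rectangle [rx1,rx2] x [ry1,ry2]?
        static boolean segmentIntersectsRect(int x1, int y1, int x2, int y2,
                                             int rx1, int ry1, int rx2, int ry2) {
            // a segment entirely inside the rectangle crosses no edge
            if (rx1 <= x1 && x1 <= rx2 && ry1 <= y1 && y1 <= ry2) return true;
            // otherwise it must cross at least one of the four edges
            return segmentsIntersect(x1, y1, x2, y2, rx1, ry1, rx2, ry1)
                || segmentsIntersect(x1, y1, x2, y2, rx2, ry1, rx2, ry2)
                || segmentsIntersect(x1, y1, x2, y2, rx2, ry2, rx1, ry2)
                || segmentsIntersect(x1, y1, x2, y2, rx1, ry2, rx1, ry1);
        }
    }

Since every step is a comparison or exact integer arithmetic, the result is bit-for-bit identical on Windows, Linux and Android.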

How does Java compute the sine and cosine functions?

How does Java find sine and cosine? I'm working on a game, a simple platformer, something like Super Mario or Castlevania. I attempted to make a method that would rotate an image for me and then resize the JLabel to fit that image. I found an algorithm that worked and was able to accomplish my goal. However, all I did was copy and paste the algorithm; anyone can do that, and I want to understand the math behind it. So far I have figured everything out except one part: the sin and cos methods in the Math class. They work and I can use them, but I have no idea how Java gets its numbers.
It would seem there is more than one way to solve this problem; for now I'm interested in how Java does it. I looked into the Taylor series, but I'm not sure that is how Java does it. If Java does use the Taylor series, I would like to know how that algorithm is right all the time (I am aware that it is an approximation). I've also heard of the CORDIC algorithm, but I don't know as much about it as I do about the Taylor series, which I have programmed in Java even though I don't understand it. If CORDIC is how it's done, I would like to know how that algorithm is always right. It also seems possible that the Java methods are system-dependent, meaning that the algorithm or code used differs from system to system. If the methods are system-dependent, then I would like to know how Windows gets sine and cosine. And if it is the CPU itself that computes the answer, I would like to know what algorithm it is using (I run an AMD Turion II Dual-Core Mobile M520 2.29GHz).
I have looked at the source code of the Math class, and it points to the StrictMath class. However, the StrictMath class only has a comment inside it, no code. I have noticed, though, that the methods use the keyword native. A quick Google search suggests that this keyword enables Java to work with other languages and systems, supporting the idea that the methods are system-dependent. I have looked at the Java API for the StrictMath class (http://docs.oracle.com/javase/7/docs/api/java/lang/StrictMath.html) and it mentions something called fdlibm. The link is broken, but I was able to Google it (http://www.netlib.org/fdlibm/).
It seems to be some sort of package written in C. While I know Java, I have never learned C, so I have been having trouble deciphering it. I started looking up some information about the C language in the hope of getting to the bottom of this, but it's a slow process. Of course, even if I did know C, I still wouldn't know which C file Java is using. There seem to be different versions of the C methods for different systems, and I can't tell which one is being used. The API suggests it is the "IEEE 754 core function" version (residing in a file whose name begins with the letter e). But I see no sin method in the e files. I have found one that starts with a k, which I think is short for kernel, and another that starts with an s, which I think is short for standard. The only e files I found that look similar to sin are e_sinh.c and e_asin.c, which I think are different math functions. And that's the story of my quest to find the Java algorithms for sine and cosine.
Somewhere along the line an algorithm is being called to get these numbers, and I want to know what it is and why it works (there is no way Java just gets these numbers out of thin air).
The JDK is not obligated to compute sine and cosine on its own, only to provide you with an interface to some implementation via Math. So the simple answer to your question is: It doesn't; it asks something else to do it, and that something else is platform/JDK/JVM dependent.
All JDKs that I know of pass the burden off to some native code. In your case, you came across a reference to fdlibm, and you'll just have to suck it up and learn to read that code if you want to see the actual implementation there.
Some JVMs can optimize this. I believe HotSpot has the ability to spot Math.cos(), etc. calls and throw in a hardware instruction on systems where it is available, but do not quote me on that.
From the documentation for Math:
By default many of the Math methods simply call the equivalent method in StrictMath for their implementation. Code generators are encouraged to use platform-specific native libraries or microprocessor instructions, where available, to provide higher-performance implementations of Math methods. Such higher-performance implementations still must conform to the specification for Math.
The documentation for StrictMath actually mentions fdlibm (it places the constraint on StrictMath that all functions must produce the same results that fdlibm produces):
To help ensure portability of Java programs, the definitions of some of the numeric functions in this package require that they produce the same results as certain published algorithms. These algorithms are available from the well-known network library netlib as the package "Freely Distributable Math Library," fdlibm. These algorithms, which are written in the C programming language, are then to be understood as executed with all floating-point operations following the rules of Java floating-point arithmetic.
Note, however, that Math is not required to defer to StrictMath. Use StrictMath explicitly in your code if you want to guarantee consistent results across all platforms. Note also that this implies that code generators (e.g. HotSpot) are not given the freedom to optimize StrictMath calls to hardware calls unless the hardware would produce exactly the same results as fdlibm.
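A small illustration of that distinction (the choice of argument is arbitrary; on most desktop JVMs the two calls happen to agree, but only StrictMath guarantees the fdlibm result everywhere):

    public class SinDemo {
        public static void main(String[] args) {
            double x = 1e9;  // a large argument, where implementations are most likely to differ
            System.out.println("Math.sin(x)       = " + Math.sin(x));        // may use an optimized path
            System.out.println("StrictMath.sin(x) = " + StrictMath.sin(x));  // specified to match fdlibm
        }
    }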
In any case, again, Java doesn't have to implement these on its own (it usually doesn't), and this question doesn't have a definitive answer. It depends on the platform, the JDK, and in some cases, the JVM.
As for general computational techniques, there are many; here is a potentially good starting point. C implementations are generally easy to come by. You'll have to search through hardware datasheets and documentation if you want to find out more about the hardware options available on a specific platform (if Java is even using them on that platform).
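For a taste of those techniques, here is a toy Taylor-series sine in Java. This is purely illustrative and is not what fdlibm does; fdlibm first reduces the argument into a small interval around zero with great care, then evaluates a specially constructed polynomial there:

    public class TaylorSin {
        // sin x = x - x^3/3! + x^5/5! - ...; each term is the previous one
        // multiplied by -x^2 / ((2n)(2n+1)).
        static double taylorSin(double x) {
            x = Math.IEEEremainder(x, 2 * Math.PI);  // crude argument reduction
            double term = x;
            double sum = x;
            for (int n = 1; n <= 10; n++) {
                term *= -x * x / ((2 * n) * (2 * n + 1));
                sum += term;
            }
            return sum;
        }

        public static void main(String[] args) {
            System.out.println(taylorSin(1.0));  // ~0.8414709848...
            System.out.println(Math.sin(1.0));   // reference value
        }
    }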

Closures in Java 7 [duplicate]

So is Java 7 finally going to get the closures? What's the latest news?
Yes, closures were included in the release plan for Java 7 (and they were the most significant reason for delaying the release from winter to autumn, expected in September 2010).
The latest news can be found at Project Lambda. You may also be interested in reading the latest specification draft.
There is no official statement on the state of closures at the moment.
Here are some readable examples of how it could work and what it might look like.
If you want to get some insight into what's going on I refer you to the OpenJDK mailing list.
Overview
Basically there is some hope, because code together with some tests has already been committed to a source code branch, and there is at least some halfway-working infrastructure to test it.
The change message from Maurizio Cimadamore reads:
initial lambda push; the current prototype supports the following features:
function types syntax (optionally enabled with -XDallowFunctionTypes)
function types subtyping
full support for lambda expression of type 1 and 2
inference of thrown types/return type in a lambda
lambda conversion using rules specified in v0.1.5 draft
support references to 'this' (both explicit and implicit)
translation using method handles
The modified build script of the langtools repository now generates an additional jarfile called javacrt.jar which contains a helper class to be used during SAM conversion; after the build, the generated javac/java scripts will take care of automatically setting up the required dependencies so that code containing lambda expressions can be compiled and executed.
But this is ongoing work and quite buggy at the moment.
For instance, the compiler sometimes crashes on valid expressions, fails to compile correct closure syntax, or generates illegal bytecode.
On the negative side there are some statements from Neal Gafter:
It's been nearly three months since the 0.15 draft, and it is now less than two weeks before the TL (Tools and Languages) final integration preceding openjdk7 feature complete. If you've made progress on the specification and implementation, we would very much appreciate it being shared with us. If not, perhaps we can help. If Oracle has decided that this feature is no longer important for JDK7, that would be good to know too. Whatever is happening, silence sends the wrong message.
A discussion between Neal Gafter and Jonathan Gibbons:
Great to see this, Maurizio! Unfortunately it arrives a week too late, and in the wrong repository, to be included in jdk7.
I notice that none of the tests show a variable of function type being converted to a SAM type. What are the plans there?
Jonathan Gibbons' response:
Since the published feature list for jdk7 and the published schedule for jdk7 would appear to be at odds, why do you always assume the schedule is correct?
Neal Gafter's answer:
Because I recall repeated discussion to the effect that the feature set would be adjusted based on their completion status with respect to the schedule.
Some people even question whether the whole thing makes sense anymore and suggest moving to another language:
One starts to wonder, why not just move to Scala -- there's much more that needs to be added to Java in order to build a coherent combination of features around lambdas. And now these delays, which affect not just users of ParallelArray but everyone who wants to build neatly refactored, testable software in Java.
Seems like nobody's proposing to add declaration-site variance in Java => means FunctionN<T, ...> interfaces will not subtype the way they should. Nor is there specialization for primitives. (Scala's @specialized is broken for all but toy classes, but at least it's moving in the right direction.)
No JVM-level recognition that an object is a closure, and can hence be eliminated, as it can be with Scala's closure elimination (if the HOF can also be inlined). The JVM seems to add something like an unavoidable machine word access to every polymorphic call site, even if they are supposedly inline-cached and not megamorphic, even inside a loop. The result that I've seen is approximately a 2x slowdown on toy microbenchmarks like "sum an array of integers" if implemented with any form of closures other than something that can be @inline'd in Scala. (And even in Scala, most HOFs are virtual and hence can't be inlined.) I for one would like to see usable inlining in a language that /encourages/ the use of closures in every for loop.
Conclusion
This is just a quick look at the whole problem, and the quotes and statements are by no means exhaustive. At the moment people are still at the stage of "Can closures really be done in Java, and if so, how should it be done and what might it look like?".
There is no simple "OK, we just add closures to Java over the weekend".
Due to the interaction of some design mistakes like varargs as arrays, type erasure ... there are cases which just can't work. Finding all these small problems and deciding if they are fixable is quite hard.
In the end, there might be some surprises.
I'm not sure what that surprise will be, but I guess it will be either:
Closures won't get into Java 7 or
Closures in Java 7 will be what Generics were in Java 5 (Buggy, complex stuff, which looks like it might work, but breaks apart as soon as you push things a bit further)
Personal opinion
I switched to Scala a long time ago. As long as Oracle doesn't do stupid things to the JVM, I don't care anymore. In the evolutionary process of the Java language mistakes were made, partly due to backward compatibility. This created an additional burden with every new change people tried to make.
Basically: Java works, but there will be no evolution of the language anymore. Every change people make increases the cost of making the next change, making changes in the future more and more unlikely.
I don't believe that there will be any changes to the Java language after Java 7 apart from some small syntax improvements like Project Coin.
http://java.dzone.com/news/closures-coming-java-7
The latest news is AFAIK still, as of late Nov 2009, that closures will be in Java 7 in some form. Since this was given as the main reason for a significant delay in the release, it seems pretty unlikely that they'll drop it again.
There's been a whole lot of syntax- and transparency-related debate going on on the lambda-dev mailing list (particularly focusing, it seems, on how hard a currying function is to read in a particular syntax), and there have been a couple of draft proposal iterations from Sun, but I haven't seen much from them on that list in a while.
I'm at a release conference now and the speaker is saying closures are coming to Java 8.

From Static Typing to Dynamic Typing

I have always worked with statically typed languages (C/C++, Java). I have been playing with Clojure and I really like it.
One thing I am worried about is: say that I have a function that takes 3 modules as arguments, and along the way the requirements change and I need to pass another module to it. I would just change the function, and the compiler would complain everywhere I used it. But in Clojure it won't complain until the function is called. I can just do a regex search-and-replace, but it seems there is a chance of missing a call, and it would go unnoticed until that function is actually called. How do you guys deal with this?
This is one of the reasons automated testing/test driven development is even more important in dynamically typed languages. I haven't used Clojure (I mostly use Ruby), so unfortunately I can't recommend a specific testing framework.
The first thing I'd like to mention is that Bruce Eckel has written a very interesting article called Strong Typing vs Strong Testing (the link is down at the moment, unfortunately, but hopefully it will be up soon).
His idea is that when dealing with compiled languages, the compiler is just acting as the first, automatic layer of testing. When making the move to a dynamic language, you lose this first level of automatic testing. But in both cases, this level is just one part of testing, and not even a very important part.
His point is that if you're developing programs properly, i.e. doing some form of testing and regression testing, the lack of a compiler will only force you to add some more, somewhat basic tests anyway, which is why it's no big loss.
So I guess the first answer I'd give you is, focus on your testing, something you should be doing anyway, and such changes shouldn't affect you too badly.
The second thing I'd like to mention is many dynamic languages that I've seen (for example, Python) have much better abilities to change what methods/classes do without breaking existing code.
For example, with Python, if your method used to accept two parameters but now requires a third, you can always add the new parameter with a default value: no existing code breaks, but new code can make use of it. This is a very basic technique, but in Python's case (and I assume in most other dynamic languages as well), these techniques can get much more interesting; since they're dynamic, you can pretty much change the implementation of functions for specific modules, change what variables mean, etc.
I'd suggest looking at which techniques Clojure has that allow similar things, and deciding whether they apply in your situation.
You do the same thing you would do if the method were part of a public interface that you weren't the only user of:
you add a new method with the extra module and change the old one to call the new one with a suitable default.
Oh, and if your program is that big, make sure you have good tests (test-is should make that simpler than in Java).
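Expressed in Java terms, this is the overload-and-delegate pattern. A minimal sketch, where Window, Module and DEFAULT_MODULE are made-up names for illustration:

    interface Module {}

    class Window {
        static final Module DEFAULT_MODULE = new Module() {};

        // New method taking the extra module.
        void show(Module a, Module b, Module c, Module extra) {
            // ... real rendering work would go here ...
        }

        // Old signature kept so existing callers keep compiling and
        // behaving as before; it delegates with a suitable default.
        void show(Module a, Module b, Module c) {
            show(a, b, c, DEFAULT_MODULE);
        }
    }

In Clojure the same idea is usually expressed with a multi-arity function, where the three-argument arity calls the four-argument arity with a default.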
Test coverage is definitely important. But a dynamically typed language will allow you to work in a different way. In a strongly typed language (like Java), a change in the interface requires modifying all the callers. In Ruby, you could do this -- but probably won't. Instead, you'll probably add flexibility to the method in one of a few ways. Namely:
you tend to have very few methods that take as many as three parameters in Ruby (as opposed to Java). Because you don't have Java's strongly typed interfaces, you break the problem down into smaller pieces and steps. It's much more common to write methods that take just one parameter, and then refactor when they become more complex.
it's possible -- and common -- to leave the old behavior in place while adding more arguments. For example, if you have to add a third argument to a two-argument method, you will give it a default value to preserve the old behavior (and save yourself a refactor). If you are familiar with JavaScript libraries like jQuery, they take advantage of this everywhere with "optional" arguments.
similar to optional arguments, methods can grow to take a flexible parameter list. With solid test coverage, you can quite easily add a new behavior to an existing method and safely know you haven't broken the existing code. In Rails, methods like "render" take a wide range of options.
You're not completely without compiler support in Clojure. In the specific example you give, it's the arity of the function that changed, and that would be picked up when the Clojure code is compiled. I'm still making the static -> dynamic typing transition and find this comforting!
You lose some level of refactoring and type safety when you move to dynamic languages. The more information the compiler has, the more it can do at compile time for you.
Tim Bray discusses it here, a critique of it by Cedric is here, and a post on Artima discusses it at length.
If you really need static typing, you can use https://github.com/clojure/core.typed and its Leiningen module to statically check the types you pass around.
