How to call Z3 properly from a Java program?

I want to integrate Z3 into my security tool, which is developed in Java. At the moment I write the formula to check into a file and then call Z3 on it. May I ask how stable the Java API is?
When I look at the Java API example distributed with Z3, it seems there are two ways to solve a formula. The first one is to create a solver:
Solver solver = ctx.MkSolver();
for (BoolExpr a : g.Formulas())
    solver.Assert(a);
if (solver.Check() != Status.SATISFIABLE)
    throw new TestFailedException();
Another way is to use a Tactic. There are examples using the tactics "simplify" and "smt":
ApplyResult ar = ApplyTactic(ctx, ctx.MkTactic("simplify"), g);
if (ar.NumSubgoals() == 1
        && (ar.Subgoals()[0].IsDecidedSat()
            || ar.Subgoals()[0].IsDecidedUnsat()))
    throw new TestFailedException();
My question is: which is the more efficient way to call Z3, the first or the second? Which tactic is good for which kind of problem? And is the tactic "smt" for SMT-LIB1 or SMT-LIB2?
Thanks.

The Z3 Java API is stable in the sense that it will not change any function/structure names until the next release. There may of course be bugfixes and perhaps some added functionality.
Whether it makes more sense to use solvers or tactics entirely depends on the application. However, since you currently use the file-based interface, using the solver-based interface is likely to be sufficient. When this is used, solver.Check() will use a default tactic (which may depend on the logic used) to solve problems.
For more information about tactics, please see the strategies tutorial, which shows how to use goals and tactics from the SMT-LIB file based interface. The same applies for the Java API, and the names of tactics are the same. The "smt" tactic is the SMT solver wrapped in a tactic; this is independent of the input language (SMT1 or SMT2), and is essentially the same as using the default Solver object constructed via ctx.MkSolver().
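For illustration, here is a minimal sketch of both routes using the current Z3 Java bindings (which use lowercase method names such as mkSolver/add/check, unlike the older example above); the formulas and the class name are invented for the example:

import com.microsoft.z3.*;

public class Z3Demo {
    public static void main(String[] args) {
        Context ctx = new Context();
        BoolExpr p = ctx.mkBoolConst("p");
        BoolExpr q = ctx.mkBoolConst("q");

        // Solver-based route: the default solver picks a tactic internally.
        Solver solver = ctx.mkSolver();
        solver.add(ctx.mkAnd(p, ctx.mkOr(ctx.mkNot(p), q)));
        System.out.println(solver.check());       // SATISFIABLE

        // Tactic-based route: wrapping the "smt" tactic in a solver is
        // essentially equivalent to the default solver above.
        Solver tacticSolver = ctx.mkSolver(ctx.mkTactic("smt"));
        tacticSolver.add(ctx.mkAnd(p, ctx.mkNot(p)));
        System.out.println(tacticSolver.check()); // UNSATISFIABLE
    }
}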

Related

Randomized Testing in Java - what is it and how to achieve it?

I am confused about randomized testing.
This is quoted from the proj1b spec:
"The autograder project 1A largely relies on randomized tests. For
example, our JUnit tests on gradescope simply call random methods of
your LinkedListDeque class and our correct implementation
LinkedListDequeSolution and as soon as we see any disagreement, the
test fails and prints out a sequence of operations that caused the
failure. "
(http://datastructur.es/sp17/materials/proj/proj1b/proj1b.html)
I do not understand what it means by:
"call random methods of the tested class and the correct class"
I need to write something really similar to that autograder, but I do not know whether I should write tests for several methods together, using a loop to randomly pick which one to test.
If so, since we can already test every method with plain JUnit tests, why do we need randomized testing?
Also, if I combine all the tests together, why is it still called a JUnit test?
If you do not mind, some examples would make this easier to understand.
Just to elaborate on the "random" testing.
There is a framework called QuickCheck, initially written for the Haskell programming language. It has been ported to many other languages, including Java: there is jqwik for JUnit 5, or the (probably outdated) jcheck.
The idea is "simply":
you describe properties of your methods, like a(b(x)) == b(a(x))
the framework then creates random inputs for method calls and tries to find examples where a property doesn't hold
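To make this concrete, here is a minimal jqwik property (a sketch; the class and property names are invented for the example):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import net.jqwik.api.ForAll;
import net.jqwik.api.Property;

class ReverseProperties {

    // jqwik generates many random lists and tries to falsify the property:
    // reversing a list twice yields the original list.
    @Property
    boolean reverseTwiceIsIdentity(@ForAll List<Integer> xs) {
        List<Integer> copy = new ArrayList<>(xs);
        Collections.reverse(copy);
        Collections.reverse(copy);
        return copy.equals(xs);
    }
}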
I assume they are talking about Model Based Testing. For that you'd have to create models - simplified versions of your production behaviour. Then you can list possible methods that can be invoked and the dependencies between those methods. After that you'd have to choose a random one and invoke both - method of your model and the method of your app. If the results are the same, then it works right. If the results differ - either there is a bug in your model or in your app. You can read more in this article.
In Java you can either write this logic on your own or use an existing framework. The only one that I know of in Java is GraphWalker, but I haven't used it and don't know how good it is.
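A hand-rolled version of this idea can be quite small. The sketch below is in the spirit of the proj1b autograder: java.util.ArrayDeque serves as the trusted model, and LinkedListDeque is a hypothetical class under test whose API is assumed to mirror Deque:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Random;

public class RandomizedDequeTest {
    public static void main(String[] args) {
        Deque<Integer> model = new ArrayDeque<>();               // trusted model
        LinkedListDeque<Integer> impl = new LinkedListDeque<>(); // under test
        Random rng = new Random();
        StringBuilder log = new StringBuilder();                 // operation log

        for (int i = 0; i < 10_000; i++) {
            switch (rng.nextInt(3)) {
                case 0: {                                        // addFirst
                    int v = rng.nextInt(100);
                    model.addFirst(v);
                    impl.addFirst(v);
                    log.append("addFirst(").append(v).append(")\n");
                    break;
                }
                case 1: {                                        // removeFirst
                    if (model.isEmpty()) break;
                    log.append("removeFirst()\n");
                    int expected = model.removeFirst();
                    int actual = impl.removeFirst();
                    if (expected != actual)
                        throw new AssertionError("sequence:\n" + log
                                + "expected " + expected + ", got " + actual);
                    break;
                }
                default: {                                       // size
                    log.append("size()\n");
                    if (model.size() != impl.size())
                        throw new AssertionError("size mismatch after:\n" + log);
                }
            }
        }
    }
}

On any disagreement it reports the sequence of operations that led there, which is exactly what the quoted autograder description says it does.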
The original frameworks (like QuickCheck) are also able to "shrink": if it took 50 calls to random methods to find a bug, they will try to find the exact short sequence of steps that leads to that bug. I don't know whether the Java frameworks offer this, but it may be worth looking into ScalaCheck if you need a JVM (though not necessarily Java) solution.

Is java.awt.geom suitable for discrete calculations?

The package java.awt.geom allows testing if a point lies within a rectangle and similar questions. In particular I need to know if a rectangle is intersected by a line. All involved values are integers.
However, it appears we cannot have those calculations use integers instead of floating point. As I need a completely consistent and reproducible result (its factual accuracy is not as important, actually), I am worried this might be a bad approach. The program will be deployed on Windows, Linux and Android platform, and I do not have full control over the machines.
I have implemented the required algorithm myself (using pure integer arithmetic), and it satisfies all my needs. Yet, if possible, I would like to use the provided package. Is there some sort of guarantee of its consistency?
Yet, if possible, I would like to use the provided package.
It is unlikely the J2SE classes will be available in Android, so stick with your own custom rolled solution.
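For completeness, here is a sketch of how such a test can be done entirely in integer arithmetic, using the standard cross-product orientation test (this is not the asker's code; long arithmetic avoids overflow for int coordinates):

public final class IntGeom {

    // Sign of the cross product (b-a) x (c-a):
    // > 0 left turn, < 0 right turn, == 0 collinear.
    static long cross(long ax, long ay, long bx, long by, long cx, long cy) {
        return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
    }

    // Assumes (px,py) is collinear with segment (a,b); bounding-box check.
    static boolean onSegment(int ax, int ay, int bx, int by, int px, int py) {
        return Math.min(ax, bx) <= px && px <= Math.max(ax, bx)
            && Math.min(ay, by) <= py && py <= Math.max(ay, by);
    }

    // True if segments (p1,p2) and (p3,p4) intersect, endpoints included.
    static boolean segmentsIntersect(int x1, int y1, int x2, int y2,
                                     int x3, int y3, int x4, int y4) {
        long d1 = cross(x3, y3, x4, y4, x1, y1);
        long d2 = cross(x3, y3, x4, y4, x2, y2);
        long d3 = cross(x1, y1, x2, y2, x3, y3);
        long d4 = cross(x1, y1, x2, y2, x4, y4);
        if (((d1 > 0 && d2 < 0) || (d1 < 0 && d2 > 0))
                && ((d3 > 0 && d4 < 0) || (d3 < 0 && d4 > 0)))
            return true;
        return (d1 == 0 && onSegment(x3, y3, x4, y4, x1, y1))
            || (d2 == 0 && onSegment(x3, y3, x4, y4, x2, y2))
            || (d3 == 0 && onSegment(x1, y1, x2, y2, x3, y3))
            || (d4 == 0 && onSegment(x1, y1, x2, y2, x4, y4));
    }

    // True if the segment hits the axis-aligned rectangle [rx,rx+rw]x[ry,ry+rh].
    static boolean segmentIntersectsRect(int x1, int y1, int x2, int y2,
                                         int rx, int ry, int rw, int rh) {
        // Either an endpoint lies inside the rectangle...
        if (rx <= x1 && x1 <= rx + rw && ry <= y1 && y1 <= ry + rh)
            return true;
        // ...or the segment crosses one of the four edges.
        return segmentsIntersect(x1, y1, x2, y2, rx, ry, rx + rw, ry)
            || segmentsIntersect(x1, y1, x2, y2, rx + rw, ry, rx + rw, ry + rh)
            || segmentsIntersect(x1, y1, x2, y2, rx + rw, ry + rh, rx, ry + rh)
            || segmentsIntersect(x1, y1, x2, y2, rx, ry + rh, rx, ry);
    }
}

Because every operation is exact integer arithmetic, the result is bit-for-bit identical on every platform.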

How does Java compute the sine and cosine functions?

How does Java find sine and cosine? I'm working on a game, a simple platformer, something like Super Mario or Castlevania. I attempted to write a method that rotates an image for me and then resizes the JLabel to fit that image. I found an algorithm that worked and accomplished my goal, but all I did was copy and paste it; anyone can do that, and I want to understand the math behind it. So far I have figured everything out except one part: the sin and cos methods in the Math class. They work and I can use them, but I have no idea how Java gets its numbers.
It would seem there is more than one way to solve this problem; for now I'm interested in how Java does it. I looked into the Taylor series, but I'm not sure that is how Java does it. If Java does use the Taylor series, I would like to know how that algorithm is right all the time (I am aware that it is an approximation). I've also heard of the CORDIC algorithm, but I don't know as much about it as I do about the Taylor series, which I have programmed in Java even though I don't fully understand it. If CORDIC is how it's done, I would like to know how that algorithm is always right. It also seems possible that the Java methods are system-dependent, meaning that the algorithm or code used differs from system to system. If the methods are system-dependent, I would like to know how Windows gets sine and cosine. And if it is the CPU itself that computes the answer, I would like to know what algorithm it uses (I run an AMD Turion II Dual-Core Mobile M520 at 2.29 GHz).
I have looked at the source code of the Math class, and it points to the StrictMath class. However, the StrictMath class only contains comments, no code. I have noticed, though, that the methods use the keyword native. A quick Google search suggests that this keyword lets Java work with other languages and systems, supporting the idea that the methods are system-dependent. I have looked at the Java API for the StrictMath class (http://docs.oracle.com/javase/7/docs/api/java/lang/StrictMath.html) and it mentions something called fdlibm. The link is broken, but I was able to Google it (http://www.netlib.org/fdlibm/).
It seems to be some sort of package written in C. While I know Java, I have never learned C, so I have been having trouble deciphering it. I started looking up some information about the C language in the hope of getting to the bottom of this, but it is a slow process. Of course, even if I did know C, I still wouldn't know which C file Java is using. There seem to be different versions of the C methods for different systems, and I can't tell which one is being used. The API suggests it is the "IEEE 754 core function" version (residing in a file whose name begins with the letter e). But I see no sin method in the e files. I have found one that starts with a k, which I think is short for kernel, and another that starts with an s, which I think is short for standard. The only e files I found that look similar to sin are e_sinh.c and e_asin.c, which I think are different math functions. And that's the story of my quest to find the Java algorithms for sine and cosine.
Somewhere along the line an algorithm is being called to produce these numbers, and I want to know what it is and why it works (there is no way Java just gets them out of thin air).
The JDK is not obligated to compute sine and cosine on its own, only to provide you with an interface to some implementation via Math. So the simple answer to your question is: It doesn't; it asks something else to do it, and that something else is platform/JDK/JVM dependent.
All JDKs that I know of pass the burden off to some native code. In your case, you came across a reference to fdlibm, and you'll just have to suck it up and learn to read that code if you want to see the actual implementation there.
Some JVMs can optimize this. I believe HotSpot has the ability to spot Math.cos(), etc. calls and throw in a hardware instruction on systems where it is available, but do not quote me on that.
From the documentation for Math:
By default many of the Math methods simply call the equivalent method in StrictMath for their implementation. Code generators are encouraged to use platform-specific native libraries or microprocessor instructions, where available, to provide higher-performance implementations of Math methods. Such higher-performance implementations still must conform to the specification for Math.
The documentation for StrictMath actually mentions fdlibm (it places the constraint on StrictMath that all functions must produce the same results that fdlibm produces):
To help ensure portability of Java programs, the definitions of some of the numeric functions in this package require that they produce the same results as certain published algorithms. These algorithms are available from the well-known network library netlib as the package "Freely Distributable Math Library," fdlibm. These algorithms, which are written in the C programming language, are then to be understood as executed with all floating-point operations following the rules of Java floating-point arithmetic.
Note, however, that Math is not required to defer to StrictMath. Use StrictMath explicitly in your code if you want to guarantee consistent results across all platforms. Note also that this implies that code generators (e.g. HotSpot) are not given the freedom to optimize StrictMath calls to hardware calls unless the hardware would produce exactly the same results as fdlibm.
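A minimal sketch of the practical difference (the Math result may legally vary across platforms and JVMs, which is exactly the point):

public class SinDemo {
    public static void main(String[] args) {
        double x = 1e10;
        // Math.sin may be a platform-specific intrinsic: fast, and the result
        // may differ (within the spec's accuracy bounds) across platforms.
        System.out.println(Math.sin(x));
        // StrictMath.sin must reproduce fdlibm's result bit-for-bit everywhere.
        System.out.println(StrictMath.sin(x));
    }
}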
In any case, again, Java doesn't have to implement these on its own (it usually doesn't), and this question doesn't have a definitive answer. It depends on the platform, the JDK, and in some cases, the JVM.
As for general computational techniques, there are many; here is a potentially good starting point. C implementations are generally easy to come by. You'll have to search through hardware datasheets and documentation if you want to find out more about the hardware options available on a specific platform (if Java is even using them on that platform).
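Regarding the Taylor series mentioned in the question: production libraries like fdlibm use argument reduction followed by a polynomial approximation on a small interval rather than a plain Taylor expansion, but the flavour is similar. A toy sketch, emphatically not fdlibm's actual algorithm:

public final class ToySin {
    // Toy sine: reduce x to [-pi, pi], then sum the Taylor series
    // x - x^3/3! + x^5/5! - ... until the terms become negligible.
    static double sin(double x) {
        x = x % (2 * Math.PI);                 // crude range reduction
        if (x > Math.PI)  x -= 2 * Math.PI;
        if (x < -Math.PI) x += 2 * Math.PI;
        double term = x, sum = x;
        for (int n = 1; Math.abs(term) > 1e-17; n++) {
            term *= -x * x / ((2 * n) * (2 * n + 1)); // next series term
            sum += term;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sin(1.0));       // close to Math.sin(1.0)
        System.out.println(Math.sin(1.0));
    }
}

Real implementations avoid the naive modulo reduction (which loses precision for huge arguments) and use carefully chosen polynomial coefficients instead of raw factorials.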

What is the difference between Agitar and Quickcheck property based testing?

A number of years ago a Java testing tool called Agitar was popular. It appeared to do something like property-based testing.
Nowadays, property-based testing in the style of Haskell's QuickCheck is popular. There are a number of ports to Java, including:
quickcheck
jcheck
junit-quickcheck
My question is: What is the difference between Agitar and Quickcheck property based testing?
To me, the key features of Haskell QuickCheck are:
(1) It generates random data for testing.
(2) If a test fails, it repeatedly "shrinks" the data (e.g., changing numbers to zero, reducing the size of a list) until it finds the simplest test case that still fails. This is very useful, because when you see the simplest test case, you often know exactly where the bug is and how to fix it.
(3) It starts testing with simple data and gradually moves on to more complex data. This is useful because it means that tests fail more quickly. Also, it ensures that edge cases (e.g., empty lists, zeroes) are properly tested.
Quickcheck for Java supports (1), but not (2) or (3). I don't know what features are supported by Agitar, but it would be useful to check.
Additionally, you might look into ScalaCheck. Since Scala is interoperable with Java, you could use it to test your Java code. I haven't used it, so I don't know which features it has, but I suspect it has more features than Java Quickcheck.
It's worth noting that, as of version 0.6, junit-quickcheck supports shrinking:
http://pholser.github.io/junit-quickcheck/site/0.6-alpha-3-SNAPSHOT/usage/shrinking.html
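For illustration, a minimal junit-quickcheck property (a sketch assuming junit-quickcheck 0.6+ with JUnit 4; the property is deliberately trivial):

import com.pholser.junit.quickcheck.Property;
import com.pholser.junit.quickcheck.runner.JUnitQuickcheck;
import org.junit.runner.RunWith;

import static org.junit.Assert.assertEquals;

@RunWith(JUnitQuickcheck.class)
public class StringProperties {

    // Random strings are generated for a and b; if the assertion ever fails,
    // 0.6+ shrinks the inputs toward a minimal counterexample.
    @Property
    public void concatLength(String a, String b) {
        assertEquals(a.length() + b.length(), (a + b).length());
    }
}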
quickcheck doesn't appear to have had any new releases since 2011:
https://bitbucket.org/blob79/quickcheck

How easily customizable are SAP industry-specific solutions?

First of all, I have a very superficial knowledge of SAP. According to my understanding, they provide a number of industry-specific solutions. The concept seems very interesting, and I work on something similar for the banking industry. The biggest challenge we face is how to adapt our products for different clients. Many concepts are quite similar across enterprises, but there are always some client-specific requirements that have to be resolved through configuration and customization. Often this requires reimplementing and developing customer-specific features.
I wonder how efficient SAP products are in this sense. How much effort has to be spent in order to adapt the product so it satisfies specific customer needs? What are the mechanisms used (configuration, programming, etc.)? How would this compare to developing a custom solution from scratch? Are they capable of leveraging and promoting best practices?
Disclaimer: I'm talking about the ABAP-based part of SAP software only.
Disclaimer 2, ref. PATRY's response: HR is quite a bit different from the rest of the SAP/ABAP world. I do feel rather competent as a general-purpose ABAP developer, but HR programming is so far off my personal beacon that I've never even tried to understand what they're doing there. %-|
According to my understanding, they provide a number of industry specific solutions.
They do - but be careful when comparing your own programs to these solutions. For example, IS-H (SAP for Healthcare) started off as an extension of the SD (Sales & Distribution) system, but has become very much more since then. While you could technically use all of the techniques they use for their IS, you really should ask a competent technical consultant before you do - there are an awful lot of pits to avoid.
The concept seems very interesting and I work on something similar for banking industry.
Note that a SAP for Banking IS already exists. See here for the documentation.
The biggest challenge we face is how to adapt our products for different clients.
I'd rather rephrase this as "The biggest challenge is to know where the product is likely to be adapted and to structurally prepare the product for adaptation." The adaptation techniques are well researched and easily employed once you know where the customer is likely to deviate from your idea of the perfect solution.
How much effort has to be spent in order to adapt the product so it satisfies specific customer needs?
That obviously depends on the deviation of the customer's needs from the standard path - but that won't help you. With a SAP-based system, you always have three choices. You can try to customize the system within its limits. Customizing basically means tweaking settings (think configuration tables, tens of thousands of them) and adding stuff (program fragments, forms, ...) in places that are intended to do so. Technology - see below.
Sometimes customizing isn't enough - you can develop things additionally. A very frequent requirement is some additional reporting tool. With the SAP system, you get the entire development environment delivered - the very same tools that all the standard applications were written with. Your programs can peacefully coexist with the standard programs and even use common routines and data. Of course you can really screw things up, but show me a real programming environment where you can't.
The third option is to modify the standard implementations. Modifications are like a really sharp two-edged kitchen knife - you might be able to cook really cool things in half of the time required by others, but you might hurt yourself really badly if you don't know what you're doing. Even if you don't really intend to modify the standard programs, it's very comforting to know that you could and that you have full access to the coding.
(Note that this is about the application programs only - you have no chance whatsoever to tweak the kernel, but fortunately, that's rarely necessary.)
What are the mechanisms used (configuration, programming etc)?
Configuration is mostly about configuration tables with more or less sophisticated dialog applications. For the programming part of customizing, there's the extension framework - see http://help.sap.com/saphelp_nw70ehp1/helpdata/en/35/f9934257a5c86ae10000000a155106/frameset.htm for details. It's basically a controlled version of dependency injection. As a solution developer, you have to anticipate the extension points, define the interface that has to be implemented by the customer code, and then embed the call in your code. As a project developer, you have to create an implementation that adheres to the interface and activate it. The basic runtime system takes care of gluing the two programs together; you don't have to worry about that.
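The extension framework itself is ABAP-based, but the underlying pattern is ordinary dependency injection. A language-neutral sketch in Java, with all names invented for illustration:

// Defined by the solution developer: the anticipated extension point.
interface PricingExtension {
    // Called after the standard price calculation; may adjust the result.
    double adjustPrice(double standardPrice, String customerId);
}

// Shipped default: leaves the standard behaviour unchanged.
class NoOpPricingExtension implements PricingExtension {
    public double adjustPrice(double standardPrice, String customerId) {
        return standardPrice;
    }
}

// Written by the project developer and "activated" for one customer.
class ProjectPricingExtension implements PricingExtension {
    public double adjustPrice(double standardPrice, String customerId) {
        // Customer-specific rule: VIP customers get 10% off.
        return customerId.startsWith("VIP") ? standardPrice * 0.9 : standardPrice;
    }
}

class PricingService {
    private final PricingExtension extension; // injected by the runtime

    PricingService(PricingExtension extension) {
        this.extension = extension;
    }

    double price(String customerId) {
        double standard = 100.0;              // stand-in for the standard logic
        return extension.adjustPrice(standard, customerId); // the embedded call
    }
}

The solution code never needs to know which implementation is active; the runtime glues the two sides together, as described above.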
How would this compare to developing custom solution from scratch?
IMHO this depends on how much of the solution is the same for all customers and how much of it has to be adapted. It's really hard to be more specific without knowing more about what you want to do.
I can only speak for the Human Resource component, but this is a component where there is a lot of difference between customers, based on a common need.
First, most of the time you set a value for a group, and then associate the object (person, location...) with a group depending on one or two values. This is akin to an indirection, and allows for great flexibility, as you can change the association for a given location without changing the others. In a few cases there is even a 3-level indirection...
Second, there is a lot of customization that is nearly programming. Payroll and administrative operations are first-class examples of this. In the latter case, you get a table with the operation (hiring, for example), the event (creation, modification...), a code for the action (I for test, F to call a function, O for a standard operation) and a text field describing the parameters of a function ("C P0001, begda, endda" to create a structure P0001 with default values).
Third, you can also use such a table to indicate a function or class (ABAP-OO) that will be called dynamically. You get a developer to create this function or class, and then indicate it in the table. This is a way to replace one functionality with another, or to extend it. This is used extensively in ESS/MSS.
Last, there are also extension points or files that you can modify. This is nearly the same as the previous one, except that you don't need to indicate the change: the file is always used (ZXPADU01/02 for HR modifications of infotypes).
Hope this helps.
Guillaume PATRY
