Z3 Java API. Reading SMTLib2 and extending it - java

I am trying to read a common SMTLib2 file by the Java API of Z3 using the following method:
BoolExpr parseSMTLIB2String (String str, Symbol[] sortNames, Sort[] sorts, Symbol[] declNames, FuncDecl[] decls)
The issue is that it seems to read only the assertions and ignore the rest, so one cannot add a new assertion based on a sort that is defined in the file: the sort is unknown and adding the assertion fails.
Is there any way to do this that I'm missing?
If not, it seems that I should directly generate SMTLib2 format instead of using the API.
Thanks for your consideration.

That's correct: this function returns one expression that is the conjunction of all assertions in the file, ignoring (almost) all other file content. There is no function for reading SMT2 commands, as that is usually done outside of Z3.
That said, parseSMTLIB2String takes the parameter sorts, which can be populated by sorts later referred to in the SMT2 file. This can be used such that the SMT2 file and the rest of your infrastructure refer to the same sorts.
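A minimal sketch of that usage (assuming the com.microsoft.z3 Java bindings; the sort and function names are invented, and note that newer Z3 releases return a BoolExpr[] of assertions from parseSMTLIB2String rather than a single conjunction):

```java
import com.microsoft.z3.*;

public class ParseWithSorts {
    public static void main(String[] args) {
        Context ctx = new Context();
        // Declare the sort and function ourselves, so that the SMT2 text
        // and our own expressions refer to the very same objects.
        Sort s = ctx.mkUninterpretedSort("S");
        FuncDecl f = ctx.mkFuncDecl("f", s, ctx.getBoolSort());

        String smt2 = "(assert (exists ((x S)) (f x)))"; // illustrative input

        BoolExpr parsed = ctx.parseSMTLIB2String(
                smt2,
                new Symbol[] { ctx.mkSymbol("S") }, new Sort[] { s },
                new Symbol[] { ctx.mkSymbol("f") }, new FuncDecl[] { f });

        Solver solver = ctx.mkSolver();
        solver.add(parsed);
        // Because we still hold s, we can assert new facts over the same sort:
        Expr c = ctx.mkConst("c", s);
        solver.add((BoolExpr) ctx.mkApp(f, c));
        System.out.println(solver.check());
    }
}
```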


Is it possible to compare two .java files' methods and fields in all cases?

I am currently taking a project management class and the professor gave this assignment: compare two .java files' methods and fields, in all cases, programmatically. I don't think it's actually possible to do, but maybe I am wrong!
The assignment spec is as follows (it's extremely ambiguous, I know):
In this assignment, you are required to write a comparison tool for two
versions of a Java source file.
Your program takes as input two .java files representing those two versions
and reports the following atomic changes:
1. AM: Add a new method
2. DM: Delete a method
3. CM: Change the body of a method (note: you need to handle the case where a method is
relocated within the body of its class)
4. AF: Add a field
5. DF: Delete a field
6. CFI: Change the definition of an instance field initializer (including (i) adding an initialization to a
field, (ii) deleting an initialization of a field, (iii) making changes to the initialized value of a field,
and (iv) making changes to a field modifier, e.g., private to public)
So that's what I am working with and my approach was to use reflection as it allows you to do everything but detect differences in the method body.
I had considered the idea that you could create a parser, but that seemed ridiculous, especially for a 3-credit undergrad class in project management. Tools like BeyondCompare don't list which methods or fields changed, just which lines differ, so they don't meet the requirements.
I turned in this assignment and pretty much the entire class failed it, the reason given being "our code would not work for Java files with external dependencies that are not given or with Java files in different projects". That is completely correct, but also, I'm thinking, impossible to do.
I am trying to nail down a concrete answer as to why this is not actually possible to do or learn something new about why this is possible so any insight would be great.
What you got wrong here is that you started by examining the .class files (using reflection). Some of the information listed above is not even available at that stage (generics, in-lined functions). What you need to do is parse the .java files as text. That is the only way to actually solve the problem. A very high-level solution could be writing a program that:
reads the files
constructs a specific object for each .java file containing all the information that needs to be compared (names of the methods, names of the instance variables, etc.)
compares the constructed objects (for example, computing addedFunctions as the set difference between the two method-name sets) to provide the requested results
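A rough illustration of those steps, using a naive regex to pull out method names (MethodDiff and the regex are my own illustrative choices; a real solution would use a proper Java parser such as JavaParser or the Eclipse JDT, since a regex cannot handle overloads, comments, or nested classes):

```java
import java.util.*;
import java.util.regex.*;

public class MethodDiff {
    // Naive: matches "returnType name(" pairs; real code should use a parser.
    private static final Pattern METHOD =
            Pattern.compile("\\b\\w+\\s+(\\w+)\\s*\\(");

    // Collects the method names found in one source file's text.
    public static Set<String> methodNames(String javaSource) {
        Set<String> names = new TreeSet<>();
        Matcher m = METHOD.matcher(javaSource);
        while (m.find()) {
            names.add(m.group(1));
        }
        return names;
    }

    // AM: methods present in the new version but not the old one.
    public static Set<String> added(String oldSrc, String newSrc) {
        // Work on a copy: removeAll mutates the set it is called on.
        Set<String> result = new TreeSet<>(methodNames(newSrc));
        result.removeAll(methodNames(oldSrc));
        return result;
    }
}
```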
Note: if this is an assignment, you should not be using solutions provided by anybody else, you need to do it on your own. Likely you will not get a single point if you use a library written by somebody else.

What is the best way to calculate the expected_value in JUnit's assertEquals() method?

I'm using the assertEquals() method in JUnit to test whether a certain value equals the actual value the code generates.
/*
calculating the actual_value
*/
int expected_value = 1000; // rows of the set of files, manually calculated
assertEquals(expected_value, actual_value);
I'm wondering, if I do something like the following instead, will that be a problem in terms of standards and conventions?
/*
calculating the actual_value
*/
int expected_value = getRelevantLinesOfFiles(set of files); // rows of the set of files
assertEquals(expected_value, actual_value);
Since it's almost impossible to always find that kind of value manually, I've written a method to read the files and calculate the relevant lines.
My concern is that I'm using the output of a method in assertEquals() testing, but the getRelevantLinesOfFiles() method is not itself tested. If I'm going to test it, then again I have to manually read the files, so it's kind of the same thing again and again.
Is that good practice? Or what is the best way to do this kind of testing?
If those files are also the input that actual_value is calculated from, what you're doing is testing an alternative implementation against the real one. That's valid but requires understanding things up front; e.g. it's usually done with a very simple and easy-to-review test implementation compared against an optimized and more complicated 'production' implementation. If that's not the case, you're doing something wrong.
If the files contain results that actual_value isn't calculated from, then it should be OK, e.g. if you have sets of inputs and matching sets of expected output.
Also, consider whether you can distill at least a few cases of trivial hard-coded input and hard-coded expected output, similar to your first example, not involving files. This may require allowing your interface to work with an abstraction that isn't a File in order to mock input, or to be able to inject an alternative file-reading mechanism in tests that actually serves mock test data.
EDIT: just to provide a concrete example of a great abstraction to use instead of File instances (or filenames, or whatever), consider using Okio and passing in a set of Source instances.
Real implementation: given a list of File instances, create Source instances using Okio.source(file).
Tests: pass a list of Buffer instances containing whatever you want
Buffer b = new Buffer(); // okio.Buffer implements okio.Source
b.writeUtf8("whatever the hell I want in this file");
// can also write bytes or anything else
int actualValue = getRelevantLinesOfFiles(Arrays.asList(b));
If the test files are generated only for tests, I think you should carefully prepare these test files manually; for example, 'file1' has 1 line, 'file0' has 0 lines, and 'fileX' has X lines, etc. There is no need to prepare too many files; just cover the critical cases.
If these files are real data from a production environment, then I suggest you write a method, like the getRelevantLinesOfFiles in your code, to count their line numbers. But first, you should test this method using the approach mentioned above.
It is always good practice to leave "magic numbers" (like 1000) out of your code. Like talex said, test the getRelevantLinesOfFiles() method on files small enough to count by hand. Then you can use it with confidence to test the other parts of your code.
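For reference, counting lines of such hand-prepared files is short with just the standard library (the method name mirrors the question; the non-blank filter is only an illustrative stand-in for whatever "relevant" means in the real code):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class LineCounter {
    // Counts non-blank lines across a set of files; "relevant" is
    // whatever predicate your production code actually uses.
    public static int getRelevantLinesOfFiles(List<Path> files) {
        int total = 0;
        for (Path p : files) {
            try {
                for (String line : Files.readAllLines(p)) {
                    if (!line.trim().isEmpty()) {
                        total++;
                    }
                }
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }
        return total;
    }
}
```

Testing it on a 'file0' with zero lines and a 'file1' with one line, as suggested above, pins down its behavior once and for all.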

Create JUnit test for a method covering all its invocations in Eclipse workspace

Let's assume the following Java class:
class MyClass {
    public String methodA( String data, String expression ) {
        // expression is evaluated using data
        // ...
        // this leads to a result which is returned
        return result;
    }
}
Note, that expression is an instance of an expression language (DSL). Thus, the evaluation of expression using data depends on the combination of both. Normally, expression is a fixed value which does not change often and data can change throughout all invocations.
Well, sometime later a bug is found in MyClass.methodA(String,String). The bug resides in some underlying class and only occurs for a special combination of expression and data. A JUnit test is easily written for this special case, and the bug can be fixed.
Unfortunately, this method is used frequently across the whole project. Eclipse call hierarchy identifies more than 97 other methods where this method is used. I now fear regression, if I just apply the bugfix. To feel more safe, I would like to do some regression testing.
Normally, unit tests should consider all important types of invocations, especially border cases. But, as expression is a DSL which can vary highly, it's not easy to test-cover all potential usage. Additionally, those tests would not identify false usage relying on the bug.
So my idea is to proceed in the following way:
Find all invocations of this method (like using "Call hierarchy" in Eclipse) and extract all the values for expression.
Sample enough real-life values for data (e.g. from database) and cross-evaluate all expressions from the first step on it using the original version of MyClass.methodA(String,String). Save the triples (data, expression, result) to a file.
Implement bugfix.
Test method MyClass.methodA(String,String) using the above file to assert that the results have not changed.
Following questions:
What do you think of this approach?
Using call hierarchy in Eclipse I can only copy and paste the calling methods but not the exact invocation including arguments to clipboard (cf. step 1). I would have to copy the invoking call by hand for each method found. How do I extract the invocations in a convenient way (in the complete Eclipse workspace, thus in several projects)?
IMHO I am only testing one single method, so the test covers only one unit. Is it OK to use JUnit for step 4, or is there something more sophisticated?
As the goal of testing your software is to cover the most circumstances at the lowest cost with the maximum outcome, I would agree with your approach.
Collecting real-life sample values from your software and saving them to a representative file should not be that complex to realize, and seems to me the best way to analyze this, even if you have to copy the invocations manually.
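The record/replay part of the approach (steps 2 and 4) can be glued together with a very small harness; everything below (the Evaluator stand-in, the tab-separated record format) is invented for illustration and should be replaced by the real MyClass.methodA call and whatever file format suits you:

```java
import java.util.ArrayList;
import java.util.List;

public class RecordReplay {
    // Stand-in for MyClass.methodA(data, expression).
    public interface Evaluator {
        String eval(String data, String expression);
    }

    // Step 2: run every (data, expression) pair through the ORIGINAL code
    // and keep the result, one tab-separated record per line.
    public static List<String> record(Evaluator original, List<String[]> pairs) {
        List<String> records = new ArrayList<>();
        for (String[] p : pairs) {
            records.add(p[0] + "\t" + p[1] + "\t" + original.eval(p[0], p[1]));
        }
        return records;
    }

    // Step 4: feed the same pairs to the FIXED code and report differences.
    public static List<String> regressions(Evaluator fixed, List<String> records) {
        List<String> diffs = new ArrayList<>();
        for (String r : records) {
            String[] f = r.split("\t", 3);
            String now = fixed.eval(f[0], f[1]);
            if (!now.equals(f[2])) {
                diffs.add(r + " -> " + now);
            }
        }
        return diffs;
    }
}
```

An empty regressions() result is then a single JUnit assertion, which answers question 3: JUnit is perfectly adequate for step 4.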

Can I call methods from a properties file in Java?

I am reading an Excel file using Java. I have written some methods for my business logic, and I want to drive them from a properties file.
What I want to do is this: I have 10 columns and 10 methods, and I need to declare these methods in a properties file and use them from that file.
Suppose for column A I want only 2 methods; then I can use only those 2 methods. For the next column B, suppose all 10 methods, and so on.
Can I do this, or is there any other way to implement it?
Please suggest one if there is. Thanks in advance.
To prevent misunderstandings: properties files are just text files, so you need to use Java code in order to process them. They'll only provide a String->String (key->value) mapping, so you need to parse and interpret the strings yourself.
That being said, here's a suggestion on what you might (want to) do:
Since your question isn't that clear I'll make a couple of assumptions:
Your properties file contains something like:
column1=method1,method7
column2=method1,method2,method3
etc.
The methods are all declared in one class using a common signature
When parsing your Excel file you might want to apply the methods to the columns based on the properties file.
So, here are some hints on what you could do:
Parse the properties file and create a list of method names per column
When working on a cell/column get the method names for the respective column
Use YourClass.class.getMethod(methodName, parameterTypes) to get the Method instances.
Call Method#invoke(...) to invoke the method.
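Put together, those hints might look like the sketch below; the class, the two sample methods, and the common String -> String signature are all invented for illustration:

```java
import java.lang.reflect.Method;
import java.util.Properties;

public class ColumnDispatcher {
    // The methods the properties file may refer to, all sharing
    // the common signature String -> String.
    public static String method1(String cell) { return cell.trim(); }
    public static String method2(String cell) { return cell.toUpperCase(); }

    // Looks up e.g. "column1=method1,method2" and applies each
    // named method to the cell value in order.
    public static String applyForColumn(Properties props, String column,
                                        String cellValue) throws Exception {
        String value = cellValue;
        for (String name : props.getProperty(column, "").split(",")) {
            if (name.isEmpty()) continue;
            Method m = ColumnDispatcher.class.getMethod(name, String.class);
            value = (String) m.invoke(null, value); // static methods: null receiver
        }
        return value;
    }
}
```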
Edit:
As an alternative to having all methods in one class you could also use the fully qualified class name, e.g. yourpackage.YourClass#method1, split at # to get the class name and the method, then use Class.forName(fqcn) to get the class and finally call getMethod(...) on that.
If the signature differs, you might have to use a more complicated notation and parse the parameter classes as well.
There may be ready-to-use parsers for this, but I don't know of any. However, some Apache Commons projects like EL and Configuration might prove useful for this task.
It's not at all clear what you are trying to do, but in any case you definitely cannot "call" any Java methods from a .properties file (or call from Java to a method contained in a .properties file). A .properties file is not executable, nor is it treated by any standard Java utility as executable/interpretable code. In essence, all it contains is data, and data cannot call or be called.
What you could do however is roll your own framework for doing this, either by using reflection to map the textual method calls in your input file to actual method calls against one or more Java objects, or more simply (but less flexibly/extensibly) by storing a predetermined set of codes which you map directly to method calls using a switch/if-else block, or perhaps by storing your desired code as JavaScript and using an existing Java-based JavaScript interpreter to execute the code.
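The switch-based variant mentioned above is the simplest of the three, at the cost of a fixed set of codes; the codes and operations here are invented for the sketch:

```java
import java.util.function.UnaryOperator;

public class CodeDispatcher {
    // Maps a predetermined code from the input file to an actual operation.
    public static UnaryOperator<String> forCode(String code) {
        switch (code) {
            case "TRIM":  return String::trim;
            case "UPPER": return String::toUpperCase;
            default:
                throw new IllegalArgumentException("unknown code: " + code);
        }
    }
}
```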

Overriding line number in generated Java source

Is there any way to override the line numbers in Java (e.g., by using some kind of preprocessor directive)?
I am "compiling" a high-level language down to Java byte code using Janino. I need compiler errors to report the line from the original file, not the generated Java code.
C# has the #line directive, and I've successfully exploited it to map line numbers from a source file to the compiled result. I need the same for Java.
Thanks!
I've never seen it used for other than JSP, but JSR-45 was designed to be used for this purpose for any source language. The process involves creating a second file in a special format ("SMAP") that maps line numbers in the original source to line numbers in the generated Java source code.
Unfortunately, no, there isn't an equivalent #line directive in Java. The best you can do is modify the source after it's been generated by deleting/inserting newlines to match the real source (or modify the code generator). Or, you could modify the line numbers stored in the binary class files after they've been compiled, but that will likely be even more painful.
Using Janino you can derive from Scanner and override the location() method, which returns a Location object. You can override the read() method to look for annotations that hold line-number information, for instance within comments added during code generation.
You simply pass your scanner to the SimpleCompiler.cook() method and you can control what filename, line and column get reported on error.
Instead of generating Java code as your intermediate language, you could try using JVM assembler. Jasmin has nice syntax, and you are free to insert .line directives at appropriate places in your code. You can also specify the original source file using the .source directive.
Granted, going the assembler route may be more hassle than it's worth :)
There is no simple solution. One workaround would be to generate a line-number map from [your language] to Java when you generate the code. You can then pipe the compiler output through a filter that uses the map to replace Java's line numbers with yours.
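A sketch of that workaround: keep a generated-line to original-line map while emitting the Java code, then rewrite the compiler diagnostics through it. The "file:line:" diagnostic format below is only an illustration; match it to your actual compiler's output:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LineRemapper {
    // generated Java line -> line in the original high-level source
    private final Map<Integer, Integer> map;

    public LineRemapper(Map<Integer, Integer> map) {
        this.map = map;
    }

    // Rewrites diagnostics like "Foo.java:17: error: ..." so that the
    // line number refers to the original source instead.
    public String rewrite(String diagnostic) {
        Matcher m = Pattern.compile("^(.*?):(\\d+):").matcher(diagnostic);
        if (m.find()) {
            int generated = Integer.parseInt(m.group(2));
            Integer original = map.get(generated);
            if (original != null) {
                return m.group(1) + ":" + original + ":"
                        + diagnostic.substring(m.end());
            }
        }
        return diagnostic; // no mapping known: leave untouched
    }
}
```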
