The compiler has access to the format string AND the required types and parameters, so I assume there would be some way to flag missing parameters for the varargs, even if only for a subset of cases. Is there some way for Eclipse or another IDE to indicate that the varargs passed might cause a problem at runtime?
It looks as if FindBugs can solve your problem. There are some warning categories related to format strings.
http://www.google.com/search?q=%2Bjava+%2Bprintf+%2Bfindbugs
http://findbugs.sourceforge.net/bugDescriptions.html#VA_FORMAT_STRING_MISSING_ARGUMENT
The Java compiler doesn't have any built-in semantic knowledge of String.format parameters, so it can't check them at compile time. For all it knows, String is just another class, String.format is just another method, and the given format string is just another string like any other.
But yeah, I feel your pain, having come across these same problems in the past couple of days. What they ought to have done is make it 'less careful' about the number of parameters and just leave trailing %s markers un-replaced.
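For example, a call like the one below compiles without complaint but fails at runtime; it is exactly the kind of thing the VA_FORMAT_STRING_MISSING_ARGUMENT detector flags (the class here is just a made-up illustration):

    public class FormatDemo {
        public static void main(String[] args) {
            // Two %s specifiers but only one argument: compiles fine,
            // throws java.util.MissingFormatArgumentException at runtime.
            String msg = String.format("user %s logged in from %s", "alice");
            System.out.println(msg);
        }
    }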
Related
I'm replacing a group of String constants with an enum, but the constants weren't used everywhere they should have been. So we're replacing a lot of someValue.equals(FOO_CONST) with someValue == MyEnum.FOO. It's easy to fix all the places where they were used--just delete the constants and the compiler tells you where the problems are. However, there are also bits like "foo".equals(someValue), which the compiler can't identify as an error after the change is made.
Is there any way I can detect potential bugs caused by any of these inline literals that get missed during the conversion? (I'm using Eclipse.)
FindBugs reports bugs for calls to equals(Object) when the two objects are not of the same type, which handles this problem nicely.
They will show up in the Bug Explorer under:
Scariest
High confidence
Call to equals() comparing different types
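A minimal illustration of the kind of call it flags; the enum and the literal below are made up:

    public class EqualsDemo {
        enum MyEnum { FOO, BAR }

        public static void main(String[] args) {
            MyEnum someValue = MyEnum.FOO;
            // Always false once someValue is an enum: a String can never equal a MyEnum.
            // The compiler accepts it, but FindBugs reports
            // "Call to equals() comparing different types".
            if ("FOO_CONST".equals(someValue)) {
                System.out.println("never reached");
            }
        }
    }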
When I type an entry for an array in Java this way, Jalopy (an alternative program to Jindent) switches the square bracket to the other side. Am I typing it the wrong way, or what?
Before formatting:
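    int nickFreq[];   // presumably a declaration along these lines; nickFreq and int[] are taken from the answer below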
After formatting:
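    int[] nickFreq;   // Jalopy moves the brackets over to the type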
Putting the square brackets after the variable name is the old C/C++ style, while placing them with the type name is the style recommended for Java. Since Jalopy exists specifically to format Java code, it applies the recommended Java style, hence the change.
Consistency for one, I guess.
And more importantly, to make the types clearer. Java (similar to C, I think) allows the [] to appear after either the type or the identifier (or even both, which is the equivalent of [][]). Putting them after the type makes it very clear what the actual type is, because nickFreq is an int[], not an int.
If you prefer the old C-style, you can configure it in your settings, see here:
http://www.triemax.com/products/jalopy/manual/java.html#ARRAY_BRACKETS_AFTER_IDENT
It gets quite interesting when it comes to multi-dimensional arrays...
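For instance, Java even lets you split the brackets across both positions, which is where the type-first style really pays off. A contrived sketch:

    class ArrayBrackets {
        // All three fields have exactly the same type, int[][]:
        int[][] a;
        int b[][];
        int[] c[];   // half C-style, half Java-style: legal, but confusing
    }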
You can use e.g. JUnit to test the functionality of your library, but how do you test its type-safety with regard to generics and wildcards?
Only testing against code that compiles is "happy path" testing; shouldn't you also test your API against non-type-safe usage and confirm that such code does NOT compile?
// how do you write and verify these kinds of "tests"?
List<Number> numbers = new ArrayList<Number>();
List<Object> objects = new ArrayList<Object>();
objects.addAll(numbers); // expect: this compiles
numbers.addAll(objects); // expect: this does not compile
So how do you verify that your genericized API raises the proper errors at compile time? Do you just build a suite of non-compiling code to test your library against, and consider a compilation error a test success and vice versa? (Of course you have to confirm that the errors are generics-related.)
Are there frameworks that facilitate such testing?
Since this is not testing in the traditional sense (that is - you can't "run" the test), and I don't think such a tool exists, here's what I can suggest:
Make a regular unit-test
Generate code in it - both the right code and the wrong code
Use the Java compiler API to try to compile it and inspect the result
You can make an easy-to-use wrapper for that functionality and contribute it for anyone with your requirements.
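A minimal sketch of that idea using javax.tools; the class and method names here are just placeholders:

    import javax.tools.*;
    import java.net.URI;
    import java.util.Arrays;

    public class CompileCheck {

        // An in-memory "source file" fed straight to the compiler.
        static class StringSource extends SimpleJavaFileObject {
            private final String code;
            StringSource(String className, String code) {
                super(URI.create("string:///" + className.replace('.', '/') + Kind.SOURCE.extension),
                      Kind.SOURCE);
                this.code = code;
            }
            @Override
            public CharSequence getCharContent(boolean ignoreEncodingErrors) {
                return code;
            }
        }

        // Returns true if the snippet compiles; prints diagnostics so you can
        // check that a failure is actually generics-related.
        static boolean compiles(String className, String code) {
            JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
            DiagnosticCollector<JavaFileObject> diagnostics = new DiagnosticCollector<>();
            boolean ok = compiler.getTask(null, null, diagnostics, null, null,
                    Arrays.asList(new StringSource(className, code))).call();
            diagnostics.getDiagnostics().forEach(d -> System.out.println(d.getMessage(null)));
            return ok;
        }

        public static void main(String[] args) {
            String bad = "import java.util.*;\n"
                       + "class Bad { void m(List<Number> n, List<Object> o) { n.addAll(o); } }";
            // Expect false: adding a List<Object> to a List<Number> must not compile.
            System.out.println("compiles? " + compiles("Bad", bad));
        }
    }

A unit test would then assert that compiles(...) is false for the snippets that must be rejected and true for the ones that must be accepted.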
It sounds like you are trying to test the Java compiler to make sure it would raise the right compilation errors if you assign the wrong types (as opposed to testing your own API).
If that is the case, why aren't you also concerned about the compiler not failing when you assign Integers to String fields, and when you call methods on objects that have not been initialized, and the million other things compilers are supposed to check when they compile code?!
I guess your question isn't limited to generics. We can raise the same question about non-generic code. If the tool you described existed, I'd be terrified. There are lots of people very happy to test their getters and setters (and try to enforce that on others). Now they'd be even happier to write new tests to make sure that accesses to their private fields don't compile! Oh, the humanity!
But then, I guess generics are way more complicated, so your question isn't moot. Most programmers are happy if they can finally get their damn generics code to compile. If a piece of generics code doesn't compile, which is the norm during development, they aren't really sure who to blame.
"How do you test the type-safetiness of your genericized API?" IMHO, the short answer to your question should be:
Don't use any @SuppressWarnings
Make sure you compile without warnings (or errors)
The longer answer is that "type safety" is not a property of an API, it is a property of the programming language and its type system. Java 5 generics are type-safe in the sense that they give you the guarantee that you will not have a type error (ClassCastException) at runtime unless it originates from a user-level cast operation (and if you program with generics, you rarely need such casts anymore). The only backdoor is the use of raw types for interoperability with pre-Java 5 code, but for these cases the compiler will issue warnings, such as the infamous "unchecked cast" warning, to indicate that type safety may be compromised. Absent such warnings, however, Java will guarantee your type safety.
So unless you are a compiler writer (or you do not trust the compiler), it seems strange to want to test "type safety". In the code example that you give, if you are the implementor of ArrayList<T>, you should only care to give addAll the most flexible type signature that allows you to write a functionally correct implementation. For example, you could type the argument as Collection<T>, but also as Collection<? extends T>, where the latter is preferred because it is more flexible. While you can over-constrain your types, the programming language and the compiler will make sure that you cannot write something that is not type-safe: for example, you simply cannot write a correct implementation for addAll where the argument has type Collection<?> or Collection<? super T>.
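To make that concrete, here is a rough sketch with a hypothetical container; it is the compiler, not a test, that rejects anything unsound:

    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.List;

    class MyList<T> {
        private final List<T> elements = new ArrayList<T>();

        // Collection<? extends T> is the most flexible signature that still
        // lets us copy every element into a List<T>.
        void addAll(Collection<? extends T> c) {
            for (T t : c) {
                elements.add(t);
            }
        }

        // With Collection<?> the same loop would not compile:
        // void addAllTooWide(Collection<?> c) {
        //     for (T t : c) { ... }   // error: Object cannot be converted to T
        // }
    }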
The only exception I can think of is where you are writing a facade for some unsafe part of the system and want to use generics to enforce some kind of guarantee on the use of that part through the facade. For example, although Java's reflection is not itself controlled by the type system, Java uses generics in things such as Class<T> to allow some reflective operations, such as clazz.newInstance(), to integrate with the type system.
Maybe you can use Collections.checkedList() in your unit test. The following example will compile but will throw a ClassCastException. The example below is adapted from @Simon G.
List<String> stringList = new ArrayList<String>();
List<Number> numberList = Collections.checkedList(new ArrayList<Number>(), Number.class);
stringList.add("a string");
List list = stringList;      // raw type defeats the compile-time check
numberList.addAll(list);     // throws ClassCastException: "a string" is not a Number
System.out.println("Number list is " + numberList);
Testing for compilation failures sounds like barking up the wrong tree, then using a screwdriver to strip the bark off again. Use the right tool for the right job.
I would think you want one or more of:
code reviews (maybe supported by a code review tool like JRT).
static analysis tools (FindBugs/CheckStyle)
switch language to C++, with an implementation that supports concepts (may require also switching universe to one in which such an implementation exists).
If you really needed to do this as a 'test', you could use reflection to enforce any desired rule, say 'any function starting with add must have an argument that is generic'. That's not very different from a custom Checkstyle rule, just clumsier and less reusable.
Well, in C++ they tried to do this with concepts but that got booted from the standard.
Using Eclipse I get a pretty fast turnaround time when something in Java doesn't compile, and the error messages are pretty straightforward. For example, if you expect a type to have a certain method and it doesn't, the compiler tells you what you need to know. Same with type mismatches.
Good luck building compile time concepts into java :P
I have a parser written in the Bigloo Scheme functional language which I need to compile into a Java class. The whole of the parser is written as a single function. Unfortunately this causes the compiler to emit a "Method too large" warning and later fail with a "far label in localvar" error. Is there any way I can circumvent this error? I read somewhere about a DontCompileHugeMethods option; does it work? Splitting the function doesn't seem to be a viable option to me :( !!
Is there any possible way where I can circumvent this error?
Well, the root cause of this compiler error is that there are hard limits in the class file format. In this case, the problem is that a single method can consist of at most 65535 bytes of bytecode (see the JVM spec).
The only workaround is to split the method.
Split the method into related operations, or move utility code out into separate methods.
Well, the case is a bit different here: the method only consists of a single function call. Now this function has a huge parameter list (the whole of the parser, actually!). So I have no clue how to split this!
The way to split up such a beast could be:
define data holder objects for your parameters (put sets of parameters in objects according to the ontology of your data model),
build those data holder objects in their own context
pass the parameter objects to the function
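As a rough sketch of that approach (all names here are hypothetical):

    // Group related parameters into holder objects...
    class ParserInput {
        final String source;
        final int startLine;
        ParserInput(String source, int startLine) {
            this.source = source;
            this.startLine = startLine;
        }
    }

    class ParserOptions {
        final boolean strict;
        ParserOptions(boolean strict) {
            this.strict = strict;
        }
    }

    class Parser {
        // ...so the call site stays small; the holders are built up elsewhere.
        Object parse(ParserInput input, ParserOptions options) {
            // real parsing work, itself split into further private methods
            return null;
        }
    }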
Quick and dirty: assign all your parameters to class variables of the same name (you will have to rename the parameters themselves) at the beginning of your function, then start chopping the function into pieces and put those pieces into separate methods. This should guarantee that your function keeps essentially the same semantics.
But, this will not lead to pretty code!
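A sketch of that quick-and-dirty shape (again, the names are made up):

    class HugeParser {
        private String src;       // was a parameter
        private boolean strict;   // was a parameter

        void parse(String srcParam, boolean strictParam) {
            this.src = srcParam;
            this.strict = strictParam;
            partOne();   // each part is a chunk of the old method body,
            partTwo();   // reading the fields instead of the old parameters
        }

        private void partOne() { /* first chunk */ }
        private void partTwo() { /* second chunk */ }
    }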
[Later: Still can't figure out if Groovy has static typing (seems that it does not) or if the bytecode generated using explicit typing is different (seems that it is). Anyway, on to the question]
One of the main differences between Groovy and other dynamic languages, or at least Ruby, is that you can explicitly declare static types for variables when you want to.
That said, when should you use static typing in Groovy? Here are some possible answers I can think of:
Only when there's a performance problem. Statically typed variables are faster in Groovy. (or are they? some questions about this link)
On public interfaces (methods, fields) for classes, so you get autocomplete. Is this possible/true/totally wrong?
Never, it just clutters up code and defeats the purpose of using Groovy.
Yes when your classes will be inherited or used
I'm not just interested in what YOU do but more importantly what you've seen around in projects coded in Groovy. What's the norm?
Note: If this question is somehow wrong or misses some categories of static-dynamic, let me know and I'll fix it.
In my experience, there is no norm. Some use types a lot, some never use them. Personally, I always try to use types in my method signatures (for params and return values). For example, I always write a method like this:
Boolean doLogin(User user) {
// implementation omitted
}
Even though I could write it like this
def doLogin(user) {
// implementation omitted
}
I do this for these reasons:
Documentation: other developers (and myself) know what types will be provided and returned by the method without reading the implementation
Type Safety: although there is no compile-time checking in Groovy, if I call the statically typed version of doLogin with a non-User parameter it will fail immediately, so the problem is likely to be easy to fix. If I call the dynamically typed version, it will fail some time after the method is invoked, and the cause of the failure may not be immediately obvious.
Code Completion: this is particularly useful when using a good IDE (e.g. IntelliJ), as it can even provide completion for dynamically added methods such as a domain class's dynamic finders
I also use types quite a bit within the implementation of my methods for the same reasons. In fact the only times I don't use types are:
I really want to support a wide range of types. For example, a method that converts a string to a number could also convert a collection or array of strings to numbers
Laziness! If the scope of a variable is very short, I already know which methods I want to call, and I don't already have the class imported, then declaring the type seems like more trouble than it's worth.
BTW, I wouldn't put too much faith in that blog post you've linked to claiming that typed Groovy is much faster than untyped Groovy. I've never heard that before, and I didn't find the evidence very convincing.
I worked on several Groovy projects and we stuck to these conventions:
All types in public methods must be specified.
public int getAgeOfUser(String userName) {
    ...
}
All private variables are declared using the def keyword.
These conventions allow you to achieve many things.
First of all, if you use joint compilation, your Java code will be able to interact with your Groovy code easily. Secondly, such explicit declarations make code in large projects more readable and maintainable. And of course, auto-completion is an important benefit too.
On the other hand, the scope inside a method is usually so small that you don't need to declare types explicitly. By the way, modern IDEs can auto-complete your local variables even if you use def.
I have seen type information used primarily in service classes for public methods. Depending on how complex the parameter list is, even here I usually see just the return type typed. For example:
class WorkflowService {
    ...
    WorkItem getWorkItem(processNbr) throws WorkflowException {
        ...
    }
}
I think this is useful because it explicitly tells the user of the service what type they will be dealing with, and it does help with code assist in IDEs.
Groovy does not support static typing. See it for yourself:
// minimal definitions so the snippet is self-contained
class Foo {}
class Bar {}

public Foo func(Bar bar) {
    return bar   // declared to return a Foo, actually returns a Bar: no complaint from the compiler
}
println("no static typing")
Save that file, compile it, and run it: it compiles and prints the message without complaint, even though func returns a Bar where a Foo is declared.