There seems to be a bug in the Java varargs implementation. Java can't distinguish the appropriate type when a method is overloaded with different types of vararg parameters.
It gives me the error "The method ... is ambiguous for the type ...".
Consider the following code:
public class Test
{
public static void main(String[] args) throws Throwable
{
doit(new int[]{1, 2}); // <- no problem
doit(new double[]{1.2, 2.2}); // <- no problem
doit(1.2f, 2.2f); // <- no problem
doit(1.2d, 2.2d); // <- no problem
doit(1, 2); // <- The method doit(double[]) is ambiguous for the type Test
}
public static void doit(double... ds)
{
System.out.println("doubles");
}
public static void doit(int... is)
{
System.out.println("ints");
}
}
The docs say: "Generally speaking, you should not overload a varargs method, or it will be difficult for programmers to figure out which overloading gets called."
However, they don't mention this error, and it's not the programmers who are finding it difficult, it's the compiler.
thoughts?
EDIT - Compiler: Sun jdk 1.6.0 u18
The problem is that it is ambiguous.
doIt(1, 2);
could be a call to doIt(int ...), or doIt(double ...). In the latter case, the integer literals will be promoted to double values.
I'm pretty sure that the Java spec says that this is an ambiguous construct, and the compiler is just following the rules laid down by the spec. (I'd have to research this further to be sure.)
EDIT - the relevant part of the JLS is "15.12.2.5 Choosing the Most Specific Method", but it is making my head hurt.
I think that the reasoning would be that void doIt(int[]) is not more specific (or vice versa) than void doIt(double[]) because int[] is not a subtype of double[] (and vice versa). Since the two overloads are equally specific, the call is ambiguous.
By contrast, void doItAgain(int) is more specific than void doItAgain(double) because int is a subtype of double according to the JLS. Hence, a call to doItAgain(42) is not ambiguous.
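To see that rule in action, here is a minimal sketch (doItAgain is the method from the paragraph above; the class wrapper is mine):
public class Scalar
{
    public static void doItAgain(int i)
    {
        System.out.println("int");
    }
    public static void doItAgain(double d)
    {
        System.out.println("double");
    }
    public static void main(String[] args)
    {
        doItAgain(42); // compiles fine and prints "int", the more specific overload
    }
}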
EDIT 2 - @finnw is right, it is a bug. Consider this part of 15.12.2.5 (edited to remove non-applicable cases):
One variable arity member method named m is more specific than another variable arity member method of the same name if:
One member method has n parameters and the other has k parameters, where n ≥ k. The types of the parameters of the first member method are T1, . . . , Tn-1, Tn[], the types of the parameters of the other method are U1, . . . , Uk-1, Uk[]. Let Si = Ui, 1 ≤ i ≤ k. Then:
for all j from 1 to k-1, Tj <: Sj, and,
for all j from k to n, Tj <: Sk
Apply this to the case where n = k = 1, and we see that doIt(int[]) is more specific than doIt(double[]).
In fact, there is a bug report for this and Sun acknowledges that it is indeed a bug, though they have prioritized it as "very low". The bug is now marked as Fixed in Java 7 (b123).
There is a discussion about this over at the Sun Forums.
No real resolution there, just resignation.
Varargs (and auto-boxing, which also leads to hard-to-follow behaviour, especially in combination with varargs) were bolted on later in Java's life, and this is one area where it shows. So it is more a bug in the spec than in the compiler.
At least, it makes for good(?) SCJP trick questions.
Interesting. Fortunately, there are a couple of different ways to avoid this problem:
You can use the wrapper types instead in the method signatures:
public static void doit(Double... ds) {
for(Double currD : ds) {
System.out.println(currD);
}
}
public static void doit(Integer... is) {
for(Integer currI : is) {
System.out.println(currI);
}
}
Or, you can use generics:
public static <T> void doit(T... ts) {
for(T currT : ts) {
System.out.println(currT);
}
}
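For what it's worth, here is a sketch of how calls then resolve under the wrapper-type overloads (my reading of the boxing rules; the generic version instead infers T from the arguments):
doit(1, 2);     // ints box to Integer, so doit(Integer...) is chosen
doit(1.2, 2.2); // doubles box to Double, so doit(Double...) is chosen
// An int cannot box to Double, so the call is no longer ambiguous.
// With the generic doit(T...), doit(1, 2) infers T as Integer.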
Related
In my JavaFX application, I attempted to have an ObservableMap<String, String> and a MapChangeListener that listens for key and value changes (adding/removing a key or changing the corresponding value) and then does its job.
For the listener to work, the method to implement is:
void onChanged(MapChangeListener.Change<? extends K,? extends V> change)
Here is what I did first, with a lambda expression; it doesn't generate any error:
map.addListener((MapChangeListener.Change<? extends String, ? extends String> change) -> {
//code here to implement onChange method
});
And here is what I discovered, which still doesn't generate any error:
map.addListener((MapChangeListener<String, String>) change -> {
//code here to implement onChange method
});
Note the position of the round brackets in these two examples. The second seems to me to be a cast, but I really don't understand why this second option works.
Can anyone explain this to me, please?
P.S.: Actually, I came across this because I was dealing with an
ObservableMap<String, List<String>>,
that is, a multimap, and the first "way" of the two above didn't work (with the right adjustments). EDIT: I tried again with the first "way" and it actually does work; there was an error in my code that I didn't notice. END EDIT. Then I tried the second option, it worked, and I was dazed. Then I discovered the same behaviour with a simple Map<String, String>, and this question arose.
These two are equivalent. In the first one, you are declaring the type of the lambda's parameter - note that the brackets enclose the whole change parameter. This allows the compiler to know which overload to match it against.
The second one is simply a cast. You are telling the compiler what kind of method signature to match this lambda against. (MapChangeListener<String, String>) casts the whole lambda expression into a MapChangeListener, so the compiler knows that the call really is addListener(MapChangeListener). Since the lambda's single parameter matches the one declared by MapChangeListener, the compiler doesn't complain that it is wrong either.
Edit
Now that I have a bit more time, I'll give you a concrete example that will help you understand a little more in depth.
public class Foo {
public final void bar(IntfA a) {}
public final void bar(IntfB b) {}
public final void bar(IntfC c) {}
}
@FunctionalInterface
public interface IntfA {
void doSomething(Double a);
}
@FunctionalInterface
public interface IntfB {
void doSomething(Integer a);
}
@FunctionalInterface
public interface IntfC {
void doSomething(Double a);
}
public class Test {
public static void main(String[] args)
{
Foo foo = new Foo();
foo.bar(a -> {}); // Ambiguous
foo.bar((Integer a) -> {}); // Okay, this is IntfB
foo.bar((Double a) -> {}); // Ambiguous between IntfA and IntfC
foo.bar((IntfC) a -> {}); // No longer ambiguous since you specified that it's IntfC
foo.bar((IntfC) (a, b) -> {}); // Method signature does not match IntfC
}
}
Edit 2
It seems like you need a little more help here.
When you define a method bar(IntfA), you are expecting an object of IntfA, regardless of whether IntfA is an interface type or a class type.
Lambda expressions, then, are just convenient compile-time syntax. When I write foo.bar((Integer a) -> {}), the compiler will eventually turn it into Java bytecode (within the .class file) that is equivalent to this:
foo.bar(new IntfB() {
public void doSomething(Integer a) {
}
});
That equivalent form is what we call an anonymous class.
The biggest, and possibly only, difference in using a lambda is that it makes your code shorter. Sometimes it makes your code more readable, sometimes less.
Since a lambda reduces the amount of things you need to type out, it is very easy to write a lambda expression that is ambiguous to the compiler when there are overloaded methods, as in the example. Remember that the compiler first needs to figure out which overload you mean; only then can it instantiate the object for you.
When you write foo.bar((Double a) -> {}), the compiler notices that you have a lambda expression that takes one Double parameter and returns nothing. It then looks at the three overloads of bar(). It notices that both bar(IntfA) and bar(IntfC) take a functional interface, and that both interfaces' methods take one Double parameter and return nothing. At this point, the compiler is not sure which of these two pieces of code it should generate bytecode for:
Choice 1:
foo.bar(new IntfA() {
public void doSomething(Double a) {
}
});
Choice 2:
foo.bar(new IntfC() {
public void doSomething(Double a) {
}
});
If you write foo.bar((IntfC) a -> {}), you are hinting to the compiler that you want it to match the foo.bar(IntfC) overload. The compiler sees that you have one parameter of unknown type, but since you have already told it to match IntfC, it will assume that the parameter is a Double.
Now to the last part: calling foo.bar(IntfA) doesn't automatically call the doSomething(Double a) method specified by IntfA. In my example the bar() methods did nothing, but normally people would write something useful.
Example again:
public final void bar(IntfB obj) {
if (obj == null)
System.out.println("I was waiting for an IntfB object but I got nothing!");
else
obj.doSomething(100);
}
foo.bar((Integer a) -> {
System.out.println("I got " + a + " marks for my exam!");
});
This causes "I got 100 marks for my exam!" to be printed on the console.
In reality, a lambda doesn't require its type to be expressed unless there is an ambiguity.
If you did not give change a type, the lambda would also match addListener(InvalidationListener), which has the same argument count. There are two ways of solving this: either explicitly express the type (your first snippet) or direct the compiler to the correct overload (your second), which has nothing to do with lambda semantics.
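To make that concrete, here is a minimal self-contained sketch of both resolutions, assuming JavaFX is on the classpath (the class name and printed messages are mine):
import javafx.collections.FXCollections;
import javafx.collections.MapChangeListener;
import javafx.collections.ObservableMap;

public class ListenerDemo
{
    public static void main(String[] args)
    {
        ObservableMap<String, String> map = FXCollections.observableHashMap();

        // map.addListener(change -> {}); // would be ambiguous with addListener(InvalidationListener)

        // Option 1: explicitly type the lambda parameter.
        map.addListener((MapChangeListener.Change<? extends String, ? extends String> change) ->
                System.out.println("typed parameter: " + change));

        // Option 2: cast the whole lambda to the intended interface.
        map.addListener((MapChangeListener<String, String>) change ->
                System.out.println("cast lambda: " + change));

        map.put("key", "value"); // triggers both listeners
    }
}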
To reiterate the second point, say you have
void print(String s)
and
void print(Integer i)
calling
print(null) would cause an ambiguity. The solution is print((String) null), which is of course not a runtime conversion, as null has no type, but rather a note to the compiler.
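A minimal sketch of that (the class name is mine):
public class PrintDemo
{
    static void print(String s)  { System.out.println("String"); }
    static void print(Integer i) { System.out.println("Integer"); }

    public static void main(String[] args)
    {
        // print(null);          // compile error: the call is ambiguous
        print((String) null);    // compiles and prints "String"
    }
}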
So, good ol' Deitel states, "All generic method declarations have a type-parameter section delimited by angle brackets (< and >) that precedes the method's return type" (Deitel, 2012, italicized emphasis mine). The example given is as follows:
public static < T > void printArray (T[] inputArray)
{
for (T element : inputArray)
{
System.out.printf("%s", element);
}
}
That makes sense to me. I get that. But, here is my question, not addressed explicitly in the book.
I have a very simple class to demonstrate:
public class Pair<F, S>
{
private F first;
private S second;
}
Now, according to Deitel, "ALL" generic method declarations must contain a type-parameter section. So, naturally, I want to add a get() and set() method to my class example. So, I do this:
public class Pair<F, S>
{
private F first;
private S second;
// Here, I'll do one instead of both for the sake of shortening the code
public < F > F getF()
{
return first;
}
// And the mutator:
public < F > void setF(F first)
{
this.first = first;
}
}
So, here's the deal. The Eclipse IDE gives me a warning ahead of my attempt to compile (the Java version of IntelliSense) that states, "The type parameter F is hiding the type F". Now, I don't particularly trust Deitel for Java - and am growing to understand that they are not particularly reliable (in that they often leave out important distinctions). So, I went to the Oracle documentation for what I am doing and - GUESS WHAT - they mention nothing of the sort, unless you're talking about 'upper-bounded' type parameters.
Here's the question (it's threefold):
Is the difference here the `static' qualifier, i.e. that the method I am writing appears in a class?
What on Earth is Deitel doing, particularly since implementing their suggestion here yields a warning?
By changing the class type parameters, I get rid of the warning. So, conceptually, what is going on to where the method parameter type is "hiding" the class parameter type?
The JLS specifically designates a generic method as one that declares type parameters (JLS §8.4.4). So the confusion here is that Deitel has said that "all generic methods have a type parameter section" but presumably not specifically pointed out that this is their definition. It is clearer to say that "a generic method is one that has a type parameter section".
As noted in a comment, when you have type parameters declared by a class, you do not need to redeclare them at the method. As noted by Eclipse, doing so actually declares new type parameters which hide the ones declared by the class.
When they are declared on the class you can use them directly:
class Pair<F, S> {
F getF() { ... }
S getS() { ... }
void setF(F f) { ... }
void setS(S s) { ... }
}
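Filled out, the Pair from the question might look like this (a sketch; the constructor is mine, added for completeness):
public class Pair<F, S>
{
    private F first;
    private S second;

    public Pair(F first, S second)
    {
        this.first = first;
        this.second = second;
    }

    public F getF() { return first; }  // F here is the class's type parameter
    public S getS() { return second; }
    public void setF(F first) { this.first = first; } // no <F> section, so no hiding warning
    public void setS(S second) { this.second = second; }
}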
The purpose of a generic method is to use it parametrically. The given example is not particularly good for understanding because the generic type is actually unused: the printf overload for Object is called. It can be rewritten without generics with no change to its functionality:
public static void printArray(Object[] arr) {
for(Object o : arr) {
System.out.printf("%s", o);
}
}
The easiest example for understanding the use of a generic method is the implementation of Objects#requireNonNull which is something like this:
public static <T> T requireNonNull(T obj) {
if(obj == null)
throw new NullPointerException();
return obj;
}
It takes any object and conveniently returns it as a T:
// T is inferred
String hello = Objects.requireNonNull("hello world");
// T is provided as a witness (rarely necessary)
Integer five = Objects.<Integer>requireNonNull(5);
It is the simplest generic method.
I came across some advanced Java code (advanced for me :) ) that I need help understanding.
In a class there is a nested class as below:
private final class CoverageCRUDaoCallable implements
Callable<List<ClientCoverageCRU>>
{
private final long oid;
private final long sourceContextId;
private CoverageCRUDaoCallable(long oid, long sourceContextId)
{
this.oid = oid;
this.sourceContextId = sourceContextId;
}
@Override
public List<ClientCoverageCRU> call() throws Exception
{
return coverageCRUDao.getCoverageCRUData(oid, sourceContextId);
}
}
Later in the outer class, there is an instance of the callable class being created.
I have no idea what this is:
ConnectionHelper.<List<ClientCoverageCRU>> tryExecute(coverageCRUDaoCallable);
It doesn't look like Java syntax to me. Could you please elaborate on what's going on in this cryptic syntax? You can see it being used below in the code excerpt.
CoverageCRUDaoCallable coverageCRUDaoCallable = new CoverageCRUDaoCallable(
dalClient.getOid(), sourceContextId);
// use Connection helper to make coverageCRUDao call.
List<ClientCoverageCRU> coverageCRUList = ConnectionHelper
.<List<ClientCoverageCRU>> tryExecute(coverageCRUDaoCallable);
EDITED
added the ConnectionHelper class.
public class ConnectionHelper<T>
{
private static final Logger logger =
LoggerFactory.getLogger(ConnectionHelper.class);
private static final int CONNECTION_RETRIES = 3;
private static final int MIN_TIMEOUT = 100;
public static <T> T tryExecute(Callable<T> command)
{
T returnValue = null;
long delay = 0;
for (int retry = 0; retry < CONNECTION_RETRIES; retry++)
{
try
{
// Sleep before retry
Thread.sleep(delay);
if (retry != 0)
{
logger.info("Connection retry #"+ retry);
}
// make the actual connection call
returnValue = command.call();
break;
}
catch (Exception e)
{
Throwable cause = e.getCause();
if (retry == CONNECTION_RETRIES - 1)
{
logger.info("Connection retries have exhausted. Not trying "
+ "to connect any more.");
throw new RuntimeException(cause);
}
// Delay increased exponentially with every retry.
delay = (long) (MIN_TIMEOUT * Math.pow(2, retry));
String origCause = ExceptionUtils.getRootCauseMessage(e);
logger.info("Connection retry #" + (retry + 1)
+ " scheduled in " + delay + " msec due to "
+ origCause);
}
}
return returnValue;
}
}
You more often think of classes as being generic, but methods can be generic too. A common example is Arrays.asList.
Most of the time, you don't have to use the syntax with angle brackets <...>, even when you're invoking a generic method, because this is the one place in which the Java compiler is actually capable of doing basic type inference in some circumstances. For example, the snippet given in the Arrays.asList documentation omits the type:
List<String> stooges = Arrays.asList("Larry", "Moe", "Curly");
But it's equivalent to this version in which the generic type is given explicitly:
List<String> stooges = Arrays.<String>asList("Larry", "Moe", "Curly");
That is because, until Java 7, generics do not fully support target typing, so you need to help the compiler a little with what is called a type witness like in ConnectionHelper.<List<ClientCoverageCRU>>.
Note however that Java 8 significantly improves target typing and in your specific example the type witness is not required in Java 8.
It's ugly, but valid.
Whatever ConnectionHelper is, it has a static tryExecute method that needs to infer a generic type.
Something like:
public static <T> T tryExecute() { ... }
Edit from updated Question: Java has type inference for generic types. The first <T> in the method signature signifies the type will be inferred when the method is called.
In your updated post you show tryExecute() defined to take a generic argument:
public static <T> T tryExecute(Callable<T> command)
This actually means the use of that syntax is completely redundant and unnecessary; T (the type) is inferred from the command being passed in which has to implement Callable<T>. The method is defined to return something of the inferred type T.
         infer a type
               |
               v
public static <T> T tryExecute(Callable<T> command)
                  ^                     ^
                  |                     |
                  +---- return type ----+
In your example, coverageCRUDaoCallable has to implement Callable<List<ClientCoverageCRU>> because the method returns List<ClientCoverageCRU>.
In my example above you'd have to use the syntax you were asking about, because nothing is being passed in from which to infer the type. T has to be explicitly provided by writing ConnectionHelper.<List<ClientCoverageCRU>>tryExecute().
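Collections.emptyList() is a familiar case of the same thing: there are no arguments to infer from, so older compilers need a witness in argument position. A small sketch (count is a made-up method; Java 8's improved target typing usually removes the need for the witness):
import java.util.Collections;
import java.util.List;

public class WitnessDemo
{
    static int count(List<String> xs)
    {
        return xs.size();
    }

    public static void main(String[] args)
    {
        List<String> a = Collections.emptyList();       // inferred from the assignment target
        int n = count(Collections.<String>emptyList()); // explicit witness; required before Java 8
        System.out.println(n);
    }
}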
From Java Generics and Collections,
List<Integer> ints = Lists.<Integer>toList(); // first example
List<Object> objs = Lists.<Object>toList(1, "two"); // second example
In the first example, without the type parameter there is too little information for the type inference algorithm used by Sun's compiler to infer the correct type. It infers that the argument to toList is an empty array of an arbitrary generic type rather than an empty array of integers, and this triggers the unchecked warning described earlier. (The Eclipse compiler uses a different inference algorithm, and compiles the same line correctly without the explicit parameter.)
In the second example, without the type parameter there is too much information for the type inference algorithm to infer the correct type. You might think that Object is the only type that an integer and a string have in common, but in fact they also both implement the interfaces Serializable and Comparable. The type inference algorithm cannot choose which of these three is the correct type.
In general, the following rule of thumb suffices:
In a call to a generic method, if there are one or more arguments that correspond to a type parameter and they all have the same type, then the type parameter may be inferred; if there are no arguments that correspond to the type parameter, or the arguments belong to different subtypes of the intended type, then the type parameter must be given explicitly.
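For reference, a sketch of the Lists.toList method the book is quoting (this particular implementation is my guess; the book's actual code may differ), together with the inference cases from the rule of thumb:
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class Lists
{
    static <T> List<T> toList(T... items)
    {
        // illustrative implementation
        return new ArrayList<T>(Arrays.asList(items));
    }

    public static void main(String[] args)
    {
        List<Integer> ints = Lists.toList(1, 2, 3);          // T inferred as Integer from the arguments
        List<Integer> none = Lists.<Integer>toList();        // no arguments: give T explicitly
        List<Object> objs = Lists.<Object>toList(1, "two");  // mixed subtypes: give T explicitly
    }
}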
Some points about passing a type parameter:
When a type parameter is passed to a generic method invocation, it appears in angle brackets to the left, just as in the method declaration.
The Java grammar requires that type parameters may appear only in method invocations that use a dotted form. Even if the method toList is defined in the same class that invokes the code, we cannot shorten it as follows:
List<Integer> ints = <Integer>toList(); // compile-time error
This is illegal because it will confuse the parser.
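The fix is simply to qualify the call so that a dot precedes the type parameter (sketched against the toList above):
List<Integer> ints = Lists.<Integer>toList(); // OK: dotted form, qualified with the class name
// For an instance method, this.<Integer>toList() achieves the same thing.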
So basically, the tryExecute() method in ConnectionHelper uses generics. This lets you supply the type explicitly, right after the "dot operator" and before the method name. This is actually shown directly in the Oracle Java tutorials for generics, even though I'd consider it bad practice in a production environment.
You can see an official example of it here.
As you can see in your modified post, the tryExecute() definition is:
public static <T> T tryExecute(Callable<T> command)
By calling it as such (<List<ClientCoverageCRU>> tryExecute), you are forcing T to be a List<ClientCoverageCRU>. A better practice in general, though, would be to let this be inferred from an actual argument of the method. The type can also be inferred from the Callable<T>, so supplying a Callable<List<ClientCoverageCRU>> as an argument would eliminate the need for this confusing usage.
See its usage in the JLS 4.11 - Where Types Are Used:
<S> void loop(S s) { this.<S>loop(s); }
... and the formal definition of why this is allowed in method invocation in JLS 15.12 - Method Invocation Expressions. You can skip down to 15.12.2.7 and 15.12.2.8 for still more specifics. 15.12.2.8 - Inferring Unresolved Type Arguments explains the formal logic by which this functions.
I've recently come across the Java @SafeVarargs annotation. Googling for what makes a variadic function in Java unsafe left me rather confused (heap poisoning? erased types?), so I'd like to know a few things:
What makes a variadic Java function unsafe in the @SafeVarargs sense (preferably explained in the form of an in-depth example)?
Why is this annotation left to the discretion of the programmer? Isn't this something the compiler should be able to check?
Is there some standard one must adhere to in order to ensure a function is indeed varargs safe? If not, what are the best practices to ensure it?
1) There are many examples on the Internet and on StackOverflow about the particular issue with generics and varargs. Basically, it's when you have a variable number of arguments of a type-parameter type:
<T> void foo(T... args);
In Java, varargs are syntactic sugar that undergoes a simple "re-writing" at compile time: a varargs parameter of type X... is converted into a parameter of type X[]; and every time a call is made to this varargs method, the compiler collects all of the "variable arguments" that go in the varargs parameter and creates an array, just like new X[] { ...(arguments go here)... }.
This works well when the varargs type is concrete like String.... When it's a type variable like T..., it also works when T is known to be a concrete type for that call. e.g. if the method above were part of a class Foo<T>, and you have a Foo<String> reference, then calling foo on it would be okay because we know T is String at that point in the code.
However, it does not work when the "value" of T is another type parameter. In Java, it is impossible to create an array of a type-parameter component type (new T[] { ... }). So Java instead uses new Object[] { ... } (here Object is the upper bound of T; if the upper bound were something different, it would be that instead of Object), and then gives you a compiler warning.
So what is wrong with creating new Object[] instead of new T[] or whatever? Well, arrays in Java know their component type at runtime. Thus, the passed array object will have the wrong component type at runtime.
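You can see that runtime knowledge with ordinary arrays in a three-line sketch:
Object[] objs = new String[2]; // legal: String[] is a subtype of Object[]
objs[0] = "fine";
objs[1] = 42;                  // ArrayStoreException: the array knows it is really a String[]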
For probably the most common use of varargs, simply to iterate over the elements, this is no problem (you don't care about the runtime type of the array), so this is safe:
@SafeVarargs
final <T> void foo(T... args) {
for (T x : args) {
// do stuff with x
}
}
However, for anything that depends on the runtime component type of the passed array, it will not be safe. Here is a simple example of something that is unsafe and crashes:
class UnSafeVarargs
{
static <T> T[] asArray(T... args) {
return args;
}
static <T> T[] arrayOfTwo(T a, T b) {
return asArray(a, b);
}
public static void main(String[] args) {
String[] bar = arrayOfTwo("hi", "mom");
}
}
The problem here is that we depend on the type of args to be T[] in order to return it as T[]. But actually the type of the argument at runtime is not an instance of T[].
3) If your method has an argument of type T... (where T is any type parameter), then:
Safe: If your method only depends on the fact that the elements of the array are instances of T
Unsafe: If it depends on the fact that the array is an instance of T[]
Things that depend on the runtime type of the array include: returning it as type T[], passing it as an argument to a parameter of type T[], getting the array type using .getClass(), passing it to methods that depend on the runtime type of the array, like List.toArray() and Arrays.copyOf(), etc.
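If you really do need a T[] back, the usual safe pattern is to have the caller supply the real component type, for instance through an array-constructor reference (a sketch; SafeArrays and this three-argument arrayOfTwo are mine, not part of any standard API):
import java.util.function.IntFunction;

public class SafeArrays
{
    static <T> T[] arrayOfTwo(T a, T b, IntFunction<T[]> generator)
    {
        T[] result = generator.apply(2); // an array with the caller's real component type
        result[0] = a;
        result[1] = b;
        return result;
    }

    public static void main(String[] args)
    {
        String[] bar = arrayOfTwo("hi", "mom", String[]::new); // a genuine String[], no heap pollution
        System.out.println(bar.length);
    }
}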
2) The distinction I described above is too subtle for the compiler to check automatically.
For best practices, consider this.
If you have this:
public <T> void doSomething(A a, B b, T... manyTs) {
// Your code here
}
Change it to this:
public <T> void doSomething(A a, B b, T... manyTs) {
doSomething(a, b, Arrays.asList(manyTs));
}
private <T> void doSomething(A a, B b, List<T> manyTs) {
// Your code here
}
I've found I usually only add varargs to make it more convenient for my callers. It would almost always be more convenient for my internal implementation to use a List<>. So to piggy-back on Arrays.asList() and ensure there's no way I can introduce heap pollution, this is what I do.
I know this only answers your #3. newacct has given a great answer for #1 and #2 above, and I don't have enough reputation to just leave this as a comment. :P
@SafeVarargs is used to indicate that a method will not cause heap pollution.
Heap pollution is when we mix differently parameterized types in a generic array.
For example:
public static <T> T[] unsafe(T... elements) {
return elements;
}
Object[] listOfItems = unsafe("some value", 34, new ArrayList<>());
String stringValue = (String) listOfItems[0]; // some value
String intValue = (String) listOfItems[1]; // ClassCastException
As you can see, such an implementation can easily cause a ClassCastException if we get the type wrong.