I understand what an ArrayStoreException is. My question is: why isn't this caught by the compiler?
This might be an odd example, but say you do this:
HashMap[] h = new LinkedHashMap[4];
h[0] = new PrinterStateReasons();
Why can't the compiler recognize that this isn't valid?
Because the information you've given the compiler allows what you're doing; it's only the runtime state that's invalid.
Your h variable is declared as being a HashMap[], which means that as far as h is concerned, anything implementing HashMap is a valid element. PrinterStateReasons implements HashMap, so h[0] = new PrinterStateReasons(); is a perfectly valid statement. Similarly, since LinkedHashMap implements HashMap, the statement HashMap[] h = new LinkedHashMap[4]; is perfectly valid too.
It's only at runtime that you try to store a PrinterStateReasons object as an element in a LinkedHashMap array, which you can't do as it isn't assignment-compatible.
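A minimal sketch of the same failure, substituting a plain HashMap for PrinterStateReasons so it needs no javax.print imports (the class and method names here are made up for illustration):

```java
import java.util.HashMap;
import java.util.LinkedHashMap;

public class StoreDemo {
    // Returns true if storing a non-LinkedHashMap into the array fails at runtime.
    static boolean storeFails() {
        HashMap[] h = new LinkedHashMap[4]; // legal: a LinkedHashMap[] is a HashMap[]
        try {
            h[0] = new HashMap();           // compiles, but the array's real type rejects it
            return false;
        } catch (ArrayStoreException e) {
            return true;                    // thrown by the runtime store check
        }
    }

    public static void main(String[] args) {
        System.out.println(storeFails()); // true
    }
}
```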
The two statements you've given are contiguous, but of course the generalized reality is far more complex. Consider:
HashMap[] h = foo.getHashMapArray();
h[0] = new PrinterStateReasons();
// ... elsewhere, in some `Foo` class -- perhaps compiled
// completely separately from the code above, perhaps
// even by a completely different team and even a different
// compiler -- and only combined with the code above at runtime...
public HashMap[] getHashMapArray() {
return new LinkedHashMap[4];
}
Well, I suppose that a smart compiler would be able to statically analyse that h can never be anything other than a LinkedHashMap[] at line 2.
But without that kind of analysis (potentially rather complex, though perhaps not in this simple case), the compiler can't really know what is assigned to h. You can assign a PrinterStateReasons to a HashMap, just not to a LinkedHashMap.
What the compiler can't catch is when you refer to a specific type array using a more generic type array:
String[] s = new String[10];
Object[] o = s;
o[0] = new Integer(5);
The compiler can't detect it because it has no way, from the variable declaration, to know the actual type of the array.
Note that you won't have this problem with generic collections, because although a String[] is also an Object[], a List<String> is not a List<Object>. One more reason to prefer collections over arrays.
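A runnable sketch of that contrast: the array version compiles and fails only at runtime, while the equivalent generic assignment is rejected at compile time (shown commented out). The class name is made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class CovarianceDemo {
    static boolean arrayStoreFails() {
        String[] s = new String[10];
        Object[] o = s;                // arrays are covariant: this compiles
        try {
            o[0] = Integer.valueOf(5); // only the runtime knows o is really a String[]
            return false;
        } catch (ArrayStoreException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(arrayStoreFails()); // true

        List<String> strings = new ArrayList<>();
        // List<Object> objects = strings; // does NOT compile: a List<String> is not a List<Object>
        strings.add("safe");
        System.out.println(strings.get(0)); // safe
    }
}
```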
The Java Language Specification says that this is a valid program. A compiler that flagged it as an error would not be implementing the specification, and therefore would not be a compliant Java compiler. The real harm of a non-compliant compiler is that it leads to people writing source code that is not portable; e.g. code that compiles with one compiler but not another.
The best that a compiler could legally do is to warn you that the code will always throw an exception. For instance, some compilers will warn you about null pointer exceptions that will always be thrown. I guess they don't do this in the array index case because the analysis is more complicated, and the mistake is made less often.
Related
I just randomly tried seeing if new String(); would compile and it did (because according to Oracle's Java documentation on "Expressions, Statements, and Blocks", one of the valid statement types is "object creation").
However, new int[0]; is giving me a "not a statement" error.
What's wrong with this? Aren't I creating an array object with new int[0]?
EDIT:
To clarify this question, the following code:
class Test {
void foo() {
new int[0];
new String();
}
}
causes a compiler error on new int[0];, whereas new String(); on its own is fine. Why is one not acceptable and the other one is fine?
The reason is a somewhat overengineered spec.
The idea behind expressions not being valid statements is that they accomplish nothing whatsoever. 5 + 2; does nothing on its own. You must assign it to something, or pass it to something, otherwise why write it?
There are exceptions, however: Expressions which, on their own, will (or possibly will) have side effects. For example, whilst this is illegal:
void foo(int a) {
a + 1;
}
This is not:
void foo(int a) {
a++;
}
That is because, on its own, a++ is not completely useless, it actually changes things (a is modified by doing this). Effectively, 'ignoring the value' (you do nothing with a + 1 in that first snippet) is acceptable if the act of producing the value on its own causes other stuff to happen: After all, maybe that is what you were after all along.
For that reason, invoking methods is also a legitimate expression statement, and in fact it is quite common to invoke methods (even ones that return a value) while ignoring the return value. For void methods it's the only legal way to invoke them, even.
Constructors are technically methods and can have side effects. It is extremely unlikely, and quite bad code style, if this method:
void doStuff() {
new Something();
}
is 'sensible' code, but it could in theory be written, bad as it may be: The constructor of the Something class may do something useful and perhaps that's all you want to do here: Make that constructor run, do the useful thing, and then take the created object and immediately toss it in the garbage. Weird, but, okay. You're the programmer.
Contrast with:
new Something[10];
This is different: The compiler knows what the array 'constructor' does. And what it does is nothing useful - it creates an object and returns a reference to the object, and that is all that happens. If you then instantly toss the reference in the garbage, then the entire operation was a complete waste of time, and surely you did not intend to do nothing useful with such a bizarre statement, so the compiler designers thought it best to just straight up disallow you from writing it.
This 'oh dear that code makes no sense therefore I shall not compile it' is very limited and mostly an obsolete aspect of the original compiler spec; it's never been updated and this is not a good way to trust that code is sensible; there's all sorts of linter tools out there that go vastly further in finding you code that just cannot be right, so if you care about that sort of thing, invest in learning those.
Nevertheless, the Java 1.0 spec had this stuff baked in and there is no particularly good reason to drop this aspect of the Java spec; therefore it remains, and constructing a new array is not a valid ExpressionStatement.
As JLS §14.8 states, specifically, a ClassInstanceCreationExpression is in the list of valid expressionstatements. Click that word to link to the definition of ClassInstanceCreationExpression and you'll find that it specifically refers to invoking constructors, and not to array construction.
Thus, the JLS is specific and requires this behaviour. javac is simply following the spec.
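The distinction can be sketched in a few lines; the bump() method here is hypothetical, just standing in for any side-effecting call whose result is ignored:

```java
public class StatementDemo {
    static int counter = 0;

    static int bump() {          // returns a value, but callers may ignore it
        counter++;
        return counter;
    }

    public static void main(String[] args) {
        new StringBuilder("x");  // ClassInstanceCreationExpression: a legal statement
        bump();                  // method invocation: legal, return value discarded
        counter++;               // increment: legal, it has a side effect
        // counter + 1;          // does NOT compile: "not a statement"
        // new int[0];           // does NOT compile: array creation is not an ExpressionStatement
        System.out.println(counter); // 2
    }
}
```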
I lost 10% for the following decision (instantiating my queue as an Object and not an Integer type) and I'm not sure why. Perhaps someone can see why?
Here I instantiated "myQueue2" as type Object.
Queue<Object> myQueue2 = new LinkedQueue<Object>();
Next I enqueued and dequeued some integers
try {
myQueue2.enqueue(10);
System.out.println(myQueue2);
myQueue2.enqueue(5);
System.out.println(myQueue2);
myQueue2.dequeue();
int total = 0;
while (!myQueue2.isEmpty()) {
total += (int)myQueue2.dequeue();
}
System.out.println("The Queue's remaining elements added to: " + total);
} catch (QueueEmptyException ex) {
System.out.println("Stack Empty Error");
}
The problem according to my grader is that I should have instantiated my queue as an Integer type. At first they argued that it didn't compile because they weren't using Java 7, and this line is illegal before Java 7:
total += (int)myQueue2.dequeue();
After I explained, they still said I should have instantiated the Queue as an Integer type.
However, my logic is that I can enqueue strings, characters and integers by instantiating it as an Object and then casting to (int) in this line (it just works when I try it):
total += (int)myQueue2.dequeue();
I thought my approach was more flexible, no?
Are there any pros and cons to my choice to use Object here that I don't fully get?
Using "blind casting" or instanceof is generally a VERY BAD approach. Using the right types allows your compiler to find bugs before you even run the program.
Also, it helps you easily see what your instances are used for. Imagine that you come back to your code after a year or two and see Queue<Object> myQueue2 = new LinkedQueue<Object>();. How would you know that it is used for integers?
When you pick the type argument for that class, you are setting a limitation. Since everything in Java is an Object (apart from primitives), you will be able to store any type in that queue.
Your teacher only wants you to be able to store integers in that queue. Since nothing prevents you from doing myQueue2.enqueue("Im a string, not an integer");, your teacher docked you those points.
Example:
class Car { }
ArrayList<Car> carlist = new ArrayList<Car>();
If I tried doing carlist.add("Hey");, an error would be thrown, since String does not extend Car. If I had
class Benz extends Car { }
I could do carlist.add(new Benz()); because Benz is a Car.
Every class falls under the hierarchy of Object, so you could put anything in there.
Tip: Some classes like String and Integer are final, so nothing can extend these classes.
You can put in strings in the sense that it compiles, but you will get an exception at runtime when you put in anything other than an Integer and then cast it to int. So you've replaced a compile time error with a runtime error, which is bad.
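To make that concrete, here is a sketch using the standard ArrayDeque in place of the course's LinkedQueue (which isn't shown in the question); the class and method names below are otherwise hypothetical:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class QueueDemo {
    // With Queue<Object>, a stray String compiles fine and fails only on the cast.
    static boolean castFails() {
        Queue<Object> q = new ArrayDeque<>();
        q.add(10);
        q.add("oops, not a number");     // the compiler has no reason to object
        int total = 0;
        try {
            while (!q.isEmpty()) {
                total += (int) q.poll(); // ClassCastException when it reaches the String
            }
            return false;
        } catch (ClassCastException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(castFails()); // true
    }
}
```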
While fiddling around in Java, I initialized a new String array with a negative length.
i.e. -
String[] arr = new String[-1];
To my surprise, the compiler didn't complain about it.
Googling didn't bring up any relevant answers. Can anyone shed some light on this matter?
Many thanks!
The reason is that the JLS allows this, and a compiler that flagged it as a compilation error would be rejecting valid Java code.
It is specified in JLS 15.10.1. Here's the relevant snippet:
"... If the value of any DimExpr expression is less than zero, then a NegativeArraySizeException is thrown."
Now if the Java compiler flagged the code as an error, then that specified behaviour could not occur ... in that specific code.
Furthermore, there's no text that I can find that "authorizes" the compiler to reject this in the "obvious mistake" cases involving compile-time constant expressions like -1. (And who is to say it really was a mistake?)
The next question, of course, is 'why does the JLS allow this?'
You'd need to ask the Java designers. However, I can think of some (mostly) plausible reasons:
This was originally overlooked, and there's no strong case for fixing it. (Noting that fixing it breaks source code compatibility.)
It was considered to be too unusual / edge case to be worth dealing with.
It would potentially cause problems for people writing source code generators. (Imagine, having to write code to evaluate compile-time constant expressions in order that you don't generate non-compilable code. With the current JLS spec, you can simply generate the code with the "bad" size, and deal with the exception (or not) if the code ever gets executed.)
Maybe someone had a plan to add "unarrays" to Java :-)
Other answers have suggested that the compiler could / should "flag" this case. If "flagging" means outputting a warning message, that is certainly permitted by the JLS. However, it is debatable whether the compiler should do this. On the one hand, if the above code was written by mistake, then it would be useful to have that mistake flagged. On the other hand, if it was not a mistake (or the "mistake" was not relevant) then the warning would be noise, or worse. Either way, this is something that you would need to discuss with the maintainer(s) for the respective compiler(s).
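A quick sketch confirming the behaviour the JLS specifies: the negative, compile-time-constant size compiles cleanly and throws only when executed. The class name is made up for illustration:

```java
public class NegativeSizeDemo {
    static boolean negativeSizeFails() {
        try {
            String[] arr = new String[-1]; // compiles without complaint
            return false;
        } catch (NegativeArraySizeException e) {
            return true;                   // thrown at runtime, per JLS 15.10.1
        }
    }

    public static void main(String[] args) {
        System.out.println(negativeSizeFails()); // true
    }
}
```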
I see no reason why this couldn't be flagged up at compile time (at least as a warning), since this unconditionally throws NegativeArraySizeException when executed.
I've done some quick experiments with my compiler, and it seems surprisingly relaxed about this sort of thing. It issues no warning about integer divide by zero in constant expressions, out-of-bounds array access with constant indices etc.
From this I conclude that the general pattern here is to trust the programmer.
The compiler is responsible for checking the language's rules, not the runtime behaviour of your code.
Thus it is reasonable that the compiler does not complain, as there is no syntax or type error in your code at all.
In Java, arrays are allocated at runtime, which is absolutely fine. If they were allocated at compile time, how could the compiler check the following code?
// runtime pass the length, with any value
void t(int length) {
String[] strings = new String[length];
}
When a negative value is passed as the length when constructing the array, a runtime exception will be thrown:
public class Main {
public static void main(String[] args) {
String[] v = new String[-1];
}
}
with error:
Exception in thread "main" java.lang.NegativeArraySizeException
at Main.main(Main.java:5)
Java compiler takes an integer as the length of an array. It can be a variable or a compile-time constant. The length of an array is established when the array is created. After creation, its length is fixed.
The compiler could reasonably flag a negative compile-time constant as the length of an array; it just does not do so. If the length is a negative number you will get a NegativeArraySizeException at run time.
I was learning about generics in Java when I came across this point:
Generic type checks are done only at compile time. It's a bad idea to modify code to insert other instances at runtime.
I do not know enough Java to tweak code to do this at runtime yet. Reflection maybe? Hence, I was not able to try it out to see what happens. So, my question, what is it that would happen if the above is done? Why and how is it bad?
You would get a ClassCastException. Consider this example (of what not to do):
List<Integer> intList = new ArrayList<Integer>();
List asRaw = intList; // Bad! Your compiler will complain / warn. Don't ignore it.
asRaw.add("not a number");
Integer myInt = intList.get(0); // ClassCastException
This is because generics are erased at compile time -- that is, internally, the ArrayList only knows it has a bunch of Objects. (Google for Java erasure for lots more info on this.) The compile-time generics turn into runtime casts, which are guaranteed to work if your generics usage is safe (no raw types, no arrays of parameterized types, no reflection to put in bad values). If your usage is not safe (as in the example above), the casts can fail, causing a ClassCastException. This can actually happen somewhat far from the area where the bad value was put in -- or even far from where it was taken out -- so it can be hard to track down.
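Erasure is easy to observe directly: at runtime a List<String> and a List<Integer> share the same class object, so the only protection is the compile-time checking (and the casts the compiler inserts). The class name below is made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    // After erasure, both lists are plain ArrayLists at runtime.
    static boolean sameRuntimeClass() {
        List<String> strings = new ArrayList<String>();
        List<Integer> ints = new ArrayList<Integer>();
        return strings.getClass() == ints.getClass();
    }

    public static void main(String[] args) {
        System.out.println(sameRuntimeClass());                          // true
        System.out.println(new ArrayList<String>().getClass().getName()); // java.util.ArrayList
    }
}
```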
No need for reflection:
import java.util.*;
public class Test {
public static void main(String[] args) {
List<String> strings = new ArrayList<String>();
Object tmp = strings;
// Unsafe cast
List<Integer> integers = (List<Integer>) tmp;
integers.add(10);
String x = strings.get(0); // Bang! (ClassCastException)
}
}
That does give a warning, but it's still valid code. There may be ways of achieving similar things without a warning, if you're subtle... just don't do it.
The answer to "what is it that would happen" is shown above: you'd get an exception but probably later than you might expect. You'll get the exception when you try to use the rogue element, as that's when there'll be a cast.
Although the examples in this question are in Visual Basic.NET and Java, it's possible that the topic may also apply to other languages that implement type inference.
Up until now, I've only used type inference when using LINQ in VB.NET. However, I'm aware that type inference can be used in other parts of the language, as demonstrated below:
Dim i = 1 'i is inferred to be an Integer
Dim s = "Hoi" 's is inferred to be a string
Dim Temperatures(10) as Double
For T In Temperatures 'T is inferred to be a Double
'Do something
Next
I can see how type inference reduces the amount of typing I would need to write a piece of code and can also make it quicker to change the type of a base variable (such as Temperatures(10) above) as you wouldn't need to change the types of all other variables that access it (such as T). However, I was concerned that the lack of an explicit declaration of type might make the code harder to read, say during a code inspection, as it may not be immediately obvious what type a variable is. For example, by looking at just the For loop above, you might correctly assume that T is some floating point value, but wouldn't be able to tell if it's single- or double-precision without looking back at the declaration of Temperatures. Depending on how the code is written, Temperatures could be declared much earlier and would thus require the reader to go back and find the declaration and then resume reading. Admittedly, when using a good IDE, this is not so much of an issue, as a quick hover of the mouse cursor over the variable name reveals the type.
Also, I would imagine that using type inference in some situations could introduce some bugs when attempting to change the code. Consider the following (admittedly contrived) example, assuming that the classes A and B do completely different things.
Class A
Public Sub DoSomething()
End Sub
End Class
Class B
Public Sub DoSomething()
End Sub
End Class
Dim ObjectList(10) As A
For item In ObjectList
item.DoSomething()
End For
It is possible to change the type of ObjectList from A to B and the code would compile, but would function differently, which could manifest as a bug in other parts of the code.
With some examples of type inference, I would imagine that most people would agree that its use has no negative affect on readability or maintainability. For example, in Java 7, the following can be used:
ArrayList<String> a = new ArrayList<>(); // The constructor type is inferred from the declaration.
instead of
ArrayList<String> a = new ArrayList<String>();
I was curious to find out what people's thoughts are on this matter and if there are any recommendations on how extensively type inference should be used.
Personally, I tend to use implicit type inference only in cases where the type is clear and where it makes the code easier to read:
var dict = new Dictionary<string, List<TreeNode<int>>>();
is easier to read than
Dictionary<string, List<TreeNode<int>>> dict =
new Dictionary<string, List<TreeNode<int>>>();
yet the type is clear.
Here it is not really clear what you get and I would specify the type explicitly:
var something = GetSomeThing();
Here it makes no sense to use var since it does not simplify anything
int i = 0;
double x = 0.0;
string s = "hello";
I think in part it's a matter of coding style. I personally go with the more verbose code given the choice. (Although when I move to a Java 7 project, I'll probably take advantage of the new syntax because everything is still there on one line.)
I also think that being explicit can avoid certain subtle bugs. Take your example:
For T In Temperatures 'T is inferred to be a Double
'Do something
Next
If Do something was a numerical method that relied on T being a Double, then changing Temperatures to single precision could cause an algorithm to fail to converge, or to converge to an incorrect value.
I imagine that I could argue the other way: that being explicit about variable types can in certain cases generate a lot of trivial maintenance work that could have been avoided. Again, I think it's all a matter of style.
The use of generics is to ensure type-safety checking at compile time instead of handling it at run time. If you use generics correctly, they will save you from run-time exceptions as well as from writing casting code when retrieving data.
Java doesn't support this syntax prior to JDK 1.7:
ArrayList<String> a = new ArrayList<>();
Before 1.7 it's:
ArrayList<String> a = new ArrayList();
But in Java the two declarations
ArrayList<String> a = new ArrayList(); and
ArrayList<String> a = new ArrayList<String>();
are considered the same (apart from an unchecked warning on the raw one), because the reference variable holds the compile-time type information, which is String. You cannot add anything other than a String to it, i.e.
a.add(new Integer(5));
It's still type-safe, and you will get a compile error saying that you can only insert Strings.
So you can't do what you are expecting to do.
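A sketch of that point: the declared type of the reference is what the compiler checks against, even if the object was created raw. The wrong insert is shown commented out because it does not compile; the class name is made up for illustration:

```java
import java.util.ArrayList;

public class RawDemo {
    @SuppressWarnings("unchecked")
    static String firstElement() {
        ArrayList<String> a = new ArrayList(); // raw creation: unchecked warning only
        a.add("hello");                        // fine: a is declared ArrayList<String>
        // a.add(Integer.valueOf(5));          // does NOT compile: incompatible types
        return a.get(0);
    }

    public static void main(String[] args) {
        System.out.println(firstElement()); // hello
    }
}
```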