I was trying to understand how the for-each loop works when I make a function call in it. Please see the following code:
public static int[] returnArr()
{
    int[] a = new int[] {1, 2, 3, 4, 5};
    return a;
}

public static void main(String[] args)
{
    // Version 1
    for (int a : returnArr())
    {
        System.out.println(a);
    }

    // Version 2
    int[] myArr = returnArr();
    for (int a : myArr)
    {
        System.out.println(a);
    }
}
In Version 1 I call the returnArr() method directly in the for-each loop; in Version 2 I call returnArr() explicitly, assign the result to an array, and then iterate over it. The result is the same in both scenarios. I would like to know which is more efficient, and why.
I thought Version 2 would be more efficient, as I'm not calling the method on every iteration. But to my surprise, when I debugged the code for Version 1, I saw that the method call happened only once!
Can anyone explain how this actually works? Which is more efficient/better when I code with complex objects?
The Java Language Specification shows the underlying translation:
Let L1 ... Lm be the (possibly empty) sequence of labels immediately
preceding the enhanced for statement.
The enhanced for statement is equivalent to a basic for statement of
the form:
T[] #a = Expression;
L1: L2: ... Lm:
for (int #i = 0; #i < #a.length; #i++) {
{VariableModifier} TargetType Identifier = #a[#i];
Statement
}
where Expression is the right hand side of the : in an enhanced for statement (your returnArr()). In both cases, it gets evaluated only once: in version 1, as part of the enhanced for statement; in version 2, because its result is assigned to a variable which is then used in the enhanced for statement.
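Applied to your Version 1, that translation amounts to roughly the following sketch (arr and i stand in for the compiler-internal #a and #i, which have no source-level names):
int[] arr = returnArr();                  // the expression is evaluated exactly once
for (int i = 0; i < arr.length; i++) {
    int a = arr[i];
    System.out.println(a);
}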
The compiled code calls the method returnArr() only once; the expression of the enhanced for statement is evaluated a single time, not once per iteration. :)
Bytecode:
public static void main(java.lang.String[]);
descriptor: ([Ljava/lang/String;)V
flags: ACC_PUBLIC, ACC_STATIC
Code:
stack=2, locals=6, args_size=1
*** Version 1 start ***
0: invokestatic #20 // Method returnArr:()[I --> called only once.
3: dup
4: astore 4
6: arraylength
7: istore_3
8: iconst_0
9: istore_2
10: goto 28
13: aload 4 --> loop start
15: iload_2
16: iaload
17: istore_1
18: getstatic #22 // Field java/lang/System.out:Ljava/io/PrintStream;
21: iload_1
22: invokevirtual #28 // Method java/io/PrintStream.println:(I)V
25: iinc 2, 1
28: iload_2
29: iload_3
30: if_icmplt 13
*** Version 2 start ***
33: invokestatic #20 // Method returnArr:()[I
36: astore_1
37: aload_1
38: dup
39: astore 5
41: arraylength
42: istore 4
44: iconst_0
45: istore_3
46: goto 64
49: aload 5 --> loop start case 2
51: iload_3
52: iaload
53: istore_2
54: getstatic #22 // Field java/lang/System.out:Ljava/io/PrintStream;
57: iload_2
58: invokevirtual #28 // Method java/io/PrintStream.println:(I)V
61: iinc 3, 1
64: iload_3
65: iload 4
67: if_icmplt 49
70: return
Note: I am using JDK 8.
I'm not going to copy-paste from the Java Language Specification, as one of the previous answers did, but will instead interpret the specification in a readable form.
Consider the following code:
for (T x : expr) {
// do something with x
}
If expr evaluates to an array type like in your case, the language specification states that the resulting bytecode will be the same as:
T[] arr = expr;
for (int i = 0; i < arr.length; i++) {
T x = arr[i];
// do something with x
}
The only difference is that the variables arr and i will not be visible to your code, or, unfortunately, to the debugger. That's why, for development, the second version might be more useful: you have the return value stored in a variable that the debugger can inspect.
In your first version expr is simply the function call, while in the second version you declare another variable and assign the result of the function call to that, then use that variable as expr. I'd expect them to exhibit no measurable difference in performance, as that additional variable assignment in the second version should be optimized away by the JIT compiler, unless you also use it elsewhere.
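If you want to measure it rather than reason about it, a minimal JMH sketch would look something like the following (the class and method names are illustrative, and the JMH dependency is assumed to be available); on any reasonable JVM the two benchmarks should report essentially the same score:
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.infra.Blackhole;

@State(Scope.Thread)
public class ForEachBenchmark {

    static int[] returnArr() {
        return new int[] {1, 2, 3, 4, 5};
    }

    @Benchmark
    public void version1(Blackhole bh) {
        // method call written directly in the enhanced for statement
        for (int a : returnArr()) {
            bh.consume(a);
        }
    }

    @Benchmark
    public void version2(Blackhole bh) {
        // result assigned to a local variable first
        int[] myArr = returnArr();
        for (int a : myArr) {
            bh.consume(a);
        }
    }
}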
A for-each loop over a list internally uses an iterator to traverse it, and yes, there is a difference between the two.
If you just want to traverse the list and have no intention of modifying it, use for-each; if you need to remove elements while iterating, use an explicit Iterator.
for (String s : myList) {
    System.out.println(s);
    myList.remove(s); // ConcurrentModificationException here
}

Iterator<String> it = myList.iterator();
while (it.hasNext()) {
    System.out.println(it.next());
    it.remove(); // no exception
}
Also, if the list you pass to a for-each loop is null, you will get a NullPointerException when the loop implicitly calls iterator() on it.
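Putting the two fragments above together, a self-contained sketch (using a java.util.ArrayList of strings, just for illustration) that reproduces both behaviours:
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class RemoveWhileIterating {
    public static void main(String[] args) {
        List<String> myList = new ArrayList<>(Arrays.asList("a", "b", "c"));

        // Removing through the Iterator is safe.
        Iterator<String> it = myList.iterator();
        while (it.hasNext()) {
            System.out.println(it.next());
            it.remove();                      // no exception
        }

        myList = new ArrayList<>(Arrays.asList("a", "b", "c"));

        // Removing through the list while a for-each loop is iterating over it
        // fails fast with a ConcurrentModificationException.
        for (String s : myList) {
            System.out.println(s);
            myList.remove(s);                 // throws here
        }
    }
}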
Related
I am a master's student researching static analysis.
In one of my tests I came across a problem with the way the Java compiler marks line numbers.
I have the following Java code:
226: String json = "/org/elasticsearch/index/analysis/commongrams/commongrams_query_mode.json";
227: Settings settings = Settings.settingsBuilder()
228: .loadFromStream(json, getClass().getResourceAsStream(json))
229: .put("path.home", createHome())
230: .build();
When compiling this code, and executing the command javap -p -v CLASSNAME, I get a table with the corresponding line of the source code for each instruction in the bytecode.
[Image: bytecode table with the source line mapping]
The problem is that for the call to the .put("path.home", createHome()) method, the compiler generates basically four bytecode instructions:
19: anewarray
24: ldc - String path.home
30: invokespecial - createHome
34: invokevirtual - put
The first two are marked as line 228 (wrong) and the last two as line 229 (correct).
[Image: bytecode table showing the line number mismatch]
This is the original implementation of the .put("path.home", createHome()) method:
public Builder put(Object... settings) {
    if (settings.length == 1) {
        // support cases where the actual type gets lost down the road...
        if (settings[0] instanceof Map) {
            //noinspection unchecked
            return put((Map) settings[0]);
        } else if (settings[0] instanceof Settings) {
            return put((Settings) settings[0]);
        }
    }
    if ((settings.length % 2) != 0) {
        throw new IllegalArgumentException("array settings of key + value order doesn't hold correct number of arguments (" + settings.length + ")");
    }
    for (int i = 0; i < settings.length; i++) {
        put(settings[i++].toString(), settings[i].toString());
    }
    return this;
}
I have already tried compiling the code with Oracle JDK 8 and OpenJDK 16, and the result was the same in both.
I also tested a modified put() method with its parameters removed; when that code was compiled, the line-marking problem did not occur.
Why do the bytecode instructions for line 229, .put("path.home", createHome()), map to lines other than the original line in the Java source code? Does anyone know whether this is done on purpose?
This is connected to the way the line number association is stored in the class file, and to the history of the javac compiler.
The line number table only contains entries associating line numbers with a code location marking its beginning. All instructions after that location are assumed to belong to the same line, up to the next location explicitly mentioned in the table.
Since detailed information will take up space and the specification does not demand a particular precision for the line number table, compiler vendors made different decisions about which details to include.
In the past, i.e. up to Java 7, javac only generated line number table entries for the beginning of statements, so when I compile the following code with Java 7’s javac
String settings = new StringBuilder() // this is line 7 in my .java file
.append('a')
.append(
5
+
"".length())
.toString();
I get something like
stack=3, locals=2, args_size=1
0: new #2 // class java/lang/StringBuilder
3: dup
4: invokespecial #3 // Method java/lang/StringBuilder."<init>":()V
7: bipush 97
9: invokevirtual #4 // Method java/lang/StringBuilder.append:(C)Ljava/lang/StringBuilder;
12: iconst_5
13: ldc #5 // String
15: invokevirtual #6 // Method java/lang/String.length:()I
18: iadd
19: invokevirtual #7 // Method java/lang/StringBuilder.append:(I)Ljava/lang/StringBuilder;
22: invokevirtual #8 // Method java/lang/StringBuilder.toString:()Ljava/lang/String;
25: astore_1
26: return
LineNumberTable:
line 7: 0
line 14: 26
which would cause all instructions belonging to the statement to be associated with line 7 only.
This has been considered to be too little, so starting with Java 8, javac generates additional entries for method invocations within an expression spanning multiple lines. So when I compile the same code with Java 8 or newer, I get
stack=3, locals=2, args_size=1
0: new #2 // class java/lang/StringBuilder
3: dup
4: invokespecial #3 // Method java/lang/StringBuilder."<init>":()V
7: bipush 97
9: invokevirtual #4 // Method java/lang/StringBuilder.append:(C)Ljava/lang/StringBuilder;
12: iconst_5
13: ldc #5 // String
15: invokevirtual #6 // Method java/lang/String.length:()I
18: iadd
19: invokevirtual #7 // Method java/lang/StringBuilder.append:(I)Ljava/lang/StringBuilder;
22: invokevirtual #8 // Method java/lang/StringBuilder.toString:()Ljava/lang/String;
25: astore_1
26: return
LineNumberTable:
line 7: 0
line 8: 9
line 12: 15
line 9: 19
line 13: 22
line 14: 26
Note how each additional entry (compared to the Java 7 version) points to an invocation instruction, to ensure that the method invocations are associated with the correct line number. This greatly improves exception stack traces as well as step debugging.
The non-invocation instructions having no explicit entry will still get associated with their closest preceding code location that has an entry.
Therefore, the bipush 97 instruction corresponding to the 'a' constant gets associated with line 7 as only the subsequent append invocation consuming the constant has an explicit entry associating it with line 8.
The consequences for the next expression, 5 + "".length(), are even more dramatic.
The instructions for pushing the constants, iconst_5 and ldc [""], get associated with line 8, the location of the previous append invocation, whereas the iadd instruction, which actually belongs to the + operator between the 5 and "" constants, gets associated with line 12, as the most recent invocation instruction that got an explicit line number is the length() invocation.
For comparison, this is how Eclipse compiles the same code:
stack=3, locals=2, args_size=1
0: new #20 // class java/lang/StringBuilder
3: dup
4: invokespecial #22 // Method java/lang/StringBuilder."<init>":()V
7: bipush 97
9: invokevirtual #23 // Method java/lang/StringBuilder.append:(C)Ljava/lang/StringBuilder;
12: iconst_5
13: ldc #27 // String
15: invokevirtual #29 // Method java/lang/String.length:()I
18: iadd
19: invokevirtual #35 // Method java/lang/StringBuilder.append:(I)Ljava/lang/StringBuilder;
22: invokevirtual #38 // Method java/lang/StringBuilder.toString:()Ljava/lang/String;
25: astore_1
26: return
LineNumberTable:
line 6: 0
line 7: 7
line 9: 12
line 11: 13
line 9: 18
line 8: 19
line 12: 22
line 6: 25
line 13: 26
The Eclipse compiler doesn’t have javac’s history, but rather has been designed to produce line number entries for expressions in the first place. We can see that it associates the first instruction belonging to an invocation expression (not the invocation instruction) with the right line, i.e. the bipush 97 for append('a') and ldc [""] for "".length().
Further, it has additional entries for iconst_5, iadd, and astore_1, to associate them with the right lines. Of course, this higher precision also results in slightly bigger class files.
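Coming back to the stack-trace improvement mentioned above, here is a small hedged illustration (class and values made up for the example): compiled with Java 8 or newer, the calling frame in the stack trace points at the line of the failing invocation; compiled with Java 7's javac, the whole statement maps to its first line.
public class TraceDemo {
    public static void main(String[] args) {
        // The statement below spans three source lines; substring(10)
        // throws a StringIndexOutOfBoundsException at run time.
        String s = "  hi  "
                .trim()          // Java 7 javac: the trace points at the line where the statement starts
                .substring(10);  // Java 8+ javac: the trace points at this line
        System.out.println(s);
    }
}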
I was curious to see how Java and Scala implement switches on strings:
class Java
{
    public static int java(String s)
    {
        switch (s)
        {
            case "foo": return 1;
            case "bar": return 2;
            case "baz": return 3;
            default: return 42;
        }
    }
}
object Scala {
  def scala(s: String): Int = {
    s match {
      case "foo" => 1
      case "bar" => 2
      case "baz" => 3
      case _ => 42
    }
  }
}
It seems like Java switches on the hashcode and then does a single string comparison:
0: aload_0
1: dup
2: astore_1
3: invokevirtual #16 // Method java/lang/String.hashCode:()I
6: lookupswitch { // 3
97299: 40
97307: 52
101574: 64
default: 82
}
40: aload_1
41: ldc #22 // String bar
43: invokevirtual #24 // Method java/lang/String.equals:(Ljava/lang/Object;)Z
46: ifne 78
49: goto 82
52: aload_1
53: ldc #28 // String baz
55: invokevirtual #24 // Method java/lang/String.equals:(Ljava/lang/Object;)Z
58: ifne 80
61: goto 82
64: aload_1
65: ldc #30 // String foo
67: invokevirtual #24 // Method java/lang/String.equals:(Ljava/lang/Object;)Z
70: ifne 76
73: goto 82
76: iconst_1
77: ireturn
78: iconst_2
79: ireturn
80: iconst_3
81: ireturn
82: bipush 42
84: ireturn
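In Java source form, that dispatch corresponds roughly to the following hand-written sketch of my own (it mirrors the bytecode above: one hash switch, then an equals() check to guard against a different string with the same hash):
static int java(String s) {
    switch (s.hashCode()) {
        case 97299:   // "bar".hashCode()
            if (s.equals("bar")) return 2;
            break;
        case 97307:   // "baz".hashCode()
            if (s.equals("baz")) return 3;
            break;
        case 101574:  // "foo".hashCode()
            if (s.equals("foo")) return 1;
            break;
    }
    return 42;
}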
In contrast, Scala seems to compare against all the cases:
0: aload_1
1: astore_2
2: ldc #16 // String foo
4: aload_2
5: invokevirtual #20 // Method java/lang/Object.equals:(Ljava/lang/Object;)Z
8: ifeq 16
11: iconst_1
12: istore_3
13: goto 47
16: ldc #22 // String bar
18: aload_2
19: invokevirtual #20 // Method java/lang/Object.equals:(Ljava/lang/Object;)Z
22: ifeq 30
25: iconst_2
26: istore_3
27: goto 47
30: ldc #24 // String baz
32: aload_2
33: invokevirtual #20 // Method java/lang/Object.equals:(Ljava/lang/Object;)Z
36: ifeq 44
39: iconst_3
40: istore_3
41: goto 47
44: bipush 42
46: istore_3
47: iload_3
48: ireturn
Is it possible to convince Scala to employ the hashcode trick? I would rather prefer an O(1) solution to an O(n) solution. In my real code, I need to compare against 33 possible keywords.
It definitely seems that this case is a missed optimization in the Scala compiler. Sure, the match construct is much (much much) more powerful than Java's switch/case, and it is a lot harder to optimize, but it could detect these special cases in which a simple hash comparison would apply.
Also, I don't think this case shows up often in idiomatic Scala, because you always match on case classes that have some meaning beyond just holding different values.
I think the problem is that you're thinking about Scala from a Java point of view (I think you're also prematurely optimizing, but hey).
I would think that the solution you want is to instead memoize your mapping.
You've got a function that maps from String -> Int, right? So do this:
class Memoize1[-T, +R](f: T => R) extends (T => R) {
  import scala.collection.mutable
  private[this] val vals = mutable.Map.empty[T, R]

  def apply(x: T): R = {
    if (vals.contains(x)) {
      vals(x)
    }
    else {
      val y = f(x)
      vals += ((x, y))
      y
    }
  }
}

object Memoize1 {
  def apply[T, R](f: T => R) = new Memoize1(f)
}
(This memoizing code is taken from here.)
Then you can memoize your code like this:
object Scala {
  def scala(s: String): Int = {
    s match {
      case "foo" => 1
      case "bar" => 2
      case "baz" => 3
      case _ => 42
    }
  }

  val memoed = Memoize1(Scala.scala)
  val n = memoed("foo")
}
Tada! Now you're doing hash value comparisons. I will add, though, that most memoization examples (this one included) are toys and will not survive most real use cases. Real-world memoization should put an upper limit on the amount you're willing to cache. In your case, where you have a tiny number of valid keys and a huge number of invalid ones, I would consider writing a general class that pre-builds the map and has a specialized lookup that says "in my cache, you win; not in my cache, default". This can be done very easily by tweaking the memoizer to take a List of inputs to precache and changing the "not-in-cache" branch to return a default.
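The pre-built map idea at the end is simple enough to sketch. For illustration, here it is in Java (the names are mine; in Scala the same thing is an immutable Map plus getOrElse):
import java.util.HashMap;
import java.util.Map;

public class KeywordTable {
    // Built once, up front; every lookup afterwards is a single hash probe.
    private static final Map<String, Integer> TABLE = new HashMap<>();
    static {
        TABLE.put("foo", 1);
        TABLE.put("bar", 2);
        TABLE.put("baz", 3);
    }

    public static int lookup(String s) {
        // "in my cache, you win; not in my cache, default"
        return TABLE.getOrDefault(s, 42);
    }
}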
This problem inspired me to learn about Scala macros, and I might as well share my solution.
Here is how I use the macro:
switch(s, 42, "foo", "bar", "baz")
The associated values are counted up automatically. If this is not what you want, you can change the implementation to accept ArrowAssocs instead, but this was way too complicated for me.
And here is how the macro is implemented:
import scala.language.experimental.macros
import scala.reflect.macros.blackbox.Context
import scala.collection.mutable.ListBuffer
object StringSwitch {

  def switch(value: String, default: Long, cases: String*): Long =
    macro switchImpl

  def switchImpl(c: Context)(value: c.Expr[String], default: c.Expr[Long],
                             cases: c.Expr[String]*): c.Expr[Long] = {
    import c.universe._
    val buf = new ListBuffer[CaseDef]
    var i = 0
    for (x <- cases) {
      x match {
        case Expr(Literal(Constant(y))) =>
          i += 1
          buf += cq"${y.hashCode} => if ($x.equals($value)) $i else $default"
        case _ => throw new AssertionError("string literal expected")
      }
    }
    buf += cq"_ => $default"
    c.Expr(Match(q"$value.hashCode", buf.toList))
  }
}
Note that this solution does not handle hash collisions. Since the particular strings I care about in my actual problem do not collide, I didn't cross that particular bridge yet.
I had the idea I would turn some of my if blocks into single lines, using the conditional operator. However, I was wondering if there would be a speed discrepancy. I ran the following test:
static long startTime;
static long elapsedTime;
static String s;

public static void main(String[] args) {
    startTime = System.nanoTime();
    s = "";
    for (int i = 0; i < 1000000000; i++) {
        if (s.equals("")) {
            s = "";
        }
    }
    elapsedTime = System.nanoTime() - startTime;
    System.out.println("Type 1 took this long: " + elapsedTime + " ns");

    startTime = System.nanoTime();
    s = "";
    for (int i = 0; i < 1000000000; i++) {
        s = (s.equals("") ? "" : s);
    }
    elapsedTime = System.nanoTime() - startTime;
    System.out.println("Type 2 took this long: " + elapsedTime + " ns");
}
This is my result:
Type 1 took this long: 3293937157 ns
Type 2 took this long: 2856769127 ns
Am I doing something wrong here?
Assuming s.equals("") necessarily is true, is this a viable way to make your code faster?
Is this a viable way to make your code faster?
You can even make it faster if your String s is a local variable instead of a static field. A static field is slower to access than a local variable when you are referencing it a billion times:
public static void main(String[] args) {
startTime = System.nanoTime();
String s = "";
.
.
}
EDIT:
Why is it faster?
It is because of how the string reference is read from and written to the static field.
You can see it in the bytecode:
0: ldc #23 // String
2: putstatic #25 // Field s:Ljava/lang/String;
5: iconst_0
6: istore_1
7: goto 22
10: getstatic #25 // Field s:Ljava/lang/String;
13: ldc #23 // String
15: invokevirtual #27 // Method java/lang/String.equals:(Ljava/lang/Object;)Z
18: pop
19: iinc 1, 1
22: iload_1
23: ldc #33 // int 1000000000
25: if_icmplt 10
28: return
As you can see, getstatic is executed on every one of the billion iterations: each access has to fetch the reference out of the static field (and putstatic writes a reference back into it when the field is assigned).
getstatic - gets a static field value of a class, where the field is identified by a field reference in the constant pool (index is indexbyte1 << 8 + indexbyte2)
putstatic - sets a static field to a value in a class, where the field is identified by a field reference in the constant pool (index is indexbyte1 << 8 + indexbyte2)
The bit shifting only describes how the constant-pool index is encoded in the instruction; the cost comes from going through the field access instead of a plain local-variable slot.
Also, if you use an instance (member) field, the bytecode will be essentially the same, except that it uses getfield and putfield instead of getstatic and putstatic.
Now let's look at the bytecode for the local-variable version:
0: ldc #21 // String
2: astore_1
3: iconst_0
4: istore_2
5: goto 23
8: aload_1
9: ldc #21 // String
11: invokevirtual #23 // Method java/lang/String.equals:(Ljava/lang/Object;)Z
14: ifeq 20
17: ldc #21 // String
19: astore_1
20: iinc 2, 1
23: iload_2
24: ldc #29 // int 1000000000
26: if_icmplt 8
29: return
As you can see, it only uses astore_1 and aload_1 to store and load the reference held in the local variable, with no extra field access.
This does smell like premature optimization to me. If you still intend to microbenchmark both implementations this way, I suggest using isEmpty() instead since the underlying code for that is more straightforward compared to equals(). By that, I mean any optimization that the compiler/JVM will do for you will be less likely triggered by what's happening in equals(), and more reflective of any minute benefits that one implementation has over the other, assuming that really matters.
Readability should be the deciding factor for whether you use if-else or ? :.
The other answers have useful, relevant information, but none of them addresses the real question: whether the first form is more efficient than the second.
This benchmarking does not provide reliable results since it's not done properly: one important "rule of thumb" in benchmarking Java code is to provide a warm-up. In this case, the first loop provides a warm-up to the second loop.
This answer provides additional instructions for micro-benchmarking as well as some useful links.
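For what it's worth, here is a hedged sketch of how such a comparison could be set up with JMH, which takes care of warm-up, forking and dead-code elimination concerns for you (the class and method names are illustrative, and the JMH dependency is assumed to be on the classpath):
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@Warmup(iterations = 5)
@Measurement(iterations = 5)
@Fork(1)
@State(Scope.Thread)
public class IfVsTernary {

    String s = "";

    @Benchmark
    public String ifForm() {
        if (s.equals("")) {
            s = "";
        }
        return s;   // returning the result keeps the JIT from discarding the work
    }

    @Benchmark
    public String ternaryForm() {
        return s = (s.equals("") ? "" : s);
    }
}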
This is kind of strange, but code speaks more than words, so look at the test to see what I'm doing. In my current setup (Java 7 update 21 on Windows 64-bit) this test fails with an ArrayIndexOutOfBoundsException, but after replacing the test method's code with the commented code, it works. And I wonder if there is any part of the Java specification that would explain why.
It seems to me, as "michael nesterenko" suggested, that the value of the array field is cached on the stack before the method is called, and not updated on return from the call. I can't tell if it's a JVM bug or a documented "optimisation". No multi-threading or "magic" is involved.
import java.util.Arrays;
import org.junit.Test;

public class TestAIOOB {
    private String[] array = new String[0];

    private int grow(final String txt) {
        final int index = array.length;
        array = Arrays.copyOf(array, index + 1);
        array[index] = txt;
        return index;
    }

    @Test
    public void testGrow() {
        //final int index = grow("test");
        //System.out.println(array[index]);
        System.out.println(array[grow("test")]);
    }
}
This is well defined by the Java Language Specification: to evaluate x[y], first x is evaluated, and then y is evaluated. In your case, x evaluates to a String[] with zero elements. Then, y modifies a member variable, and evaluates to 0. Trying to access the 0th element of the already-returned array fails. The fact that the member array changes has no bearing on the array lookup, because we're looking at the String[] that array referenced at the time we evaluated it.
This behavior is mandated by the JLS. Per 15.13.1, "An array access expression is evaluated using the following procedure: First, the array reference expression is evaluated. If this evaluation completes abruptly, then the array access completes abruptly for the same reason and the index expression is not evaluated. Otherwise, the index expression is evaluated. [...]".
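A minimal standalone sketch of the same evaluation-order rule (using a static field and made-up names, just for illustration):
public class EvalOrder {
    static int[] arr = {10};                 // one element

    static int growAndReturnIndex() {
        arr = new int[] {10, 20};            // reassign the field...
        return 1;                            // ...and return an index only valid for the new array
    }

    public static void main(String[] args) {
        // The array reference is evaluated and captured before the index
        // expression runs, so the original one-element array is indexed
        // with 1, and an ArrayIndexOutOfBoundsException is thrown.
        System.out.println(arr[growAndReturnIndex()]);
    }
}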
Compare the compiled Java code by using javap -c TestAIOOB
Uncommented code:
public void testGrow();
Code:
0: getstatic #6; //Field java/lang/System.out:Ljava/io/PrintStream;
3: aload_0
4: getfield #3; //Field array:[Ljava/lang/String;
7: aload_0
8: ldc #7; //String test
10: invokespecial #8; //Method grow:(Ljava/lang/String;)I
13: aaload
14: invokevirtual #9; //Method java/io/PrintStream.println:(Ljava/lang/String;)V
17: return
Commented code:
public void testGrow();
Code:
0: aload_0
1: ldc #6; //String test
3: invokespecial #7; //Method grow:(Ljava/lang/String;)I
6: istore_1
7: getstatic #8; //Field java/lang/System.out:Ljava/io/PrintStream;
10: aload_0
11: getfield #3; //Field array:[Ljava/lang/String;
14: iload_1
15: aaload
16: invokevirtual #9; //Method java/io/PrintStream.println:(Ljava/lang/String;)V
19: return
In the first the getfield happens before the call to grow and in the second it happens after.
In the following:
for (String deviceNetwork : deviceOrganizer.getNetworkTypes(deviceManufacturer)) {
// do something
}
Is it safe to assume that deviceOrganizer.getNetworkTypes(deviceManufacturer) will be called only once?
Yes, absolutely.
From section 14.14.2 of the spec:
If the type of Expression is a subtype of Iterable, then let I be the type of the
expression Expression.iterator(). The enhanced for statement is equivalent to a basic for
statement of the form:
for (I #i = Expression.iterator(); #i.hasNext(); ) {
VariableModifiers_opt Type Identifier = #i.next();
Statement
}
(The alternative deals with arrays.)
Note how Expression is only mentioned in the first part of the for loop expression - so it's only evaluated once.
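Spelled out in source form, and assuming getNetworkTypes() returns an Iterable<String>, your loop is therefore roughly equivalent to the following (the iterator variable is compiler-generated and has no name you can refer to):
for (Iterator<String> it = deviceOrganizer.getNetworkTypes(deviceManufacturer).iterator(); it.hasNext(); ) {
    String deviceNetwork = it.next();
    // do something
}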
Yes, give it a try:
public class ForLoop {
public static void main( String [] args ) {
for( int i : testData() ){
System.out.println(i);
}
}
public static int[] testData() {
System.out.println("Test data invoked");
return new int[]{1,2,3,4};
}
}
Output:
$ java ForLoop
Test data invoked
1
2
3
4
To complement what's been said and verify that the spec is doing what it says, let's look at the generated bytecode for the following class, which implements the old and new style loops to loop over a list returned by a method call, getList():
public class Main {
static java.util.List getList() { return new java.util.ArrayList(); }
public static void main(String[] args) {
for (Object o : getList()) {
System.out.print(o);
}
for (java.util.Iterator itr = getList().iterator(); itr.hasNext(); ) {
Object o = itr.next(); System.out.print(o);
}
}
}
Relevant parts of the output:
0: invokestatic #4; //Method getList
3: invokeinterface #5, 1; //InterfaceMethod java/util/List.iterator
8: astore_1
9: aload_1
10: invokeinterface #6, 1; //InterfaceMethod java/util/Iterator.hasNext
15: ifeq 35
18: aload_1
19: invokeinterface #7, 1; //InterfaceMethod java/util/Iterator.next
24: astore_2
25: getstatic #8; //Field java/lang/System.out
28: aload_2
29: invokevirtual #9; //Method java/io/PrintStream.print
32: goto 9
35: invokestatic #4; //Method getList
38: invokeinterface #10, 1; //InterfaceMethod java/util/List.iterator
43: astore_1
44: aload_1
45: invokeinterface #6, 1; //InterfaceMethod java/util/Iterator.hasNext
50: ifeq 70
53: aload_1
54: invokeinterface #7, 1; //InterfaceMethod java/util/Iterator.next
59: astore_2
60: getstatic #8; //Field java/lang/System.out
63: aload_2
64: invokevirtual #9; //Method java/io/PrintStream.print
67: goto 44
70: return
This shows that the first loop (offsets 0 to 32) and the second (offsets 35 to 67) are identical: the generated bytecode is exactly the same.