What is the difference between "explicitly" and "implicitly" in a programming language? - java

I would like a clear and precise understanding of the difference between the two.
Also, is the this keyword used to reference implicitly or explicitly? That is part of why I want the distinction clarified.
I assume that using the this keyword is an implicit reference (to something within the class), whereas an explicit reference is to something not belonging to the class itself, like a parameter variable being passed into a method.
Of course my assumptions could be wrong, which is why I'm here asking for clarification.

Explicit means done by the programmer.
Implicit means done by the JVM or the tool, not the programmer.
For example:
Java provides a default constructor implicitly. Even if the programmer didn't write a constructor, he can call the default constructor.
Explicit is the opposite of this, i.e. the programmer has to write it.
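A minimal sketch of that example (class names are my own, for illustration):

class ImplicitDemo {
    // No constructor written: the compiler supplies a no-arg
    // default constructor implicitly, so `new ImplicitDemo()` works.
}

class ExplicitDemo {
    // Written explicitly by the programmer.
    ExplicitDemo() {
        System.out.println("explicit constructor");
    }
}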

You already have your answer, but I would like to add a bit more.
Implicit: what is already available in your programming language, like built-in methods, classes, data types, etc.
- Implicit code resolves difficulties for the programmer and saves development time.
- It also provides optimized code, and so on.
Explicit: what is created by the programmer (you) per their (your) requirements, like your app class, or methods like getName() and setName().
Finally, put simply:
Pre-defined code that helps the programmer build their apps and programs is known as implicit, and code that has been written by the programmer to fulfill a requirement is known as explicit. A short sketch of the distinction follows below.
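class Person {
    private String name;

    // Explicit: written by you for your app's requirement.
    String getName() { return name; }
    void setName(String name) { this.name = name; }

    void demo() {
        int n = Integer.parseInt("42"); // Implicit: pre-defined by the platform
        setName(String.valueOf(n));     // String.valueOf is pre-defined too
    }
}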

1: Implicit casting (widening conversion)
A value of a data type of lower size (occupying less memory) is assigned to a data type of higher size. This is done implicitly by the JVM; the lower size is widened to the higher size. This is also called automatic type conversion.
Examples:
int x = 10; // occupies 4 bytes
double y = x; // occupies 8 bytes
System.out.println(y); // prints 10.0
In the above code, a 4-byte int value is assigned to an 8-byte double variable.
2: Explicit casting (narrowing conversion)
A value of a data type of higher size (occupying more memory) cannot be assigned directly to a data type of lower size. This is not done implicitly by the JVM; it requires an explicit cast performed by the programmer. The higher size is narrowed to the lower size.
double x = 10.5; // 8 bytes
int y = x; // 4 bytes ; raises compilation error
In the above code, the 8-byte double value is narrowed to a 4-byte int value. It raises an error. Let us explicitly cast it:
double x = 10.5;
int y = (int) x;
The double x is explicitly converted to the int y. The rule of thumb is that both sides of an assignment should end up with the same data type.

I'll try to provide an example of similar functionality across different programming languages to differentiate between implicit and explicit.
Implicit: when something is available as a feature of the programming language constructs being used, and you have to do nothing but call the respective functionality through the API/interface directly.
For example, garbage collection in Java happens implicitly. The JVM does it for us at an appropriate time.
Explicit: when user/programmer intervention is required to invoke a specific functionality, without which the desired action won't take place.
For example, in C++, freeing memory (read: the garbage-collection equivalent) has to happen by explicitly calling the delete operator or the free function.
Hope this helps you understand the difference clearly.
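In Java terms, a minimal sketch of the implicit side:

byte[] buffer = new byte[1024 * 1024];
buffer = null; // the array is now unreachable; the JVM reclaims the memory
               // whenever the garbage collector runs - no delete/free needed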

This was way more complicated than I think it needed to be (an example from label- vs. position-based indexing):
explicit = the label name of an index (label-based indexing), for example:
df['index label name']
vs.
implicit = the integer position of an index (zero-based indexing):
df[0]


Expression evaluation in C vs Java

int y=3;
int z=(--y) + (y=10);
when executed in C, the value of z evaluates to 20,
but when the same expression is executed in Java, the value of z is 12.
Can anyone explain why this is happening and what the difference is?
when executed in C language the value of z evaluates to 20
No it does not. This is undefined behavior, so z could get any value, including 20. The program could also theoretically do anything, since the standard does not say what the program should do when encountering undefined behavior. Read more here: Undefined, unspecified and implementation-defined behavior
As a rule of thumb, never modify a variable twice in the same expression.
It's not a good duplicate, but this will explain things a bit deeper. The reason for undefined behavior here is sequence points. Why are these constructs using pre and post-increment undefined behavior?
In C, when it comes to arithmetic operators, like + and /, the order of evaluation of the operands is not specified in the standard, so if the evaluation of those has side effects, your program becomes unpredictable. Here is an example:
#include <stdio.h>

int foo(void)
{
    printf("foo()\n");
    return 0;
}

int bar(void)
{
    printf("bar()\n");
    return 0;
}

int main(void)
{
    int x = foo() + bar();
}
What will this program print? Well, we don't know. I'm not entirely sure if this snippet invokes undefined behavior or not, but regardless, the output is not predictable. I asked a question, Is it undefined behavior to use functions with side effects in an unspecified order?, about that, so I'll update this answer later.
Some other operators have a specified (left-to-right) order of evaluation, like || and &&, and this feature is used for short-circuiting. For instance, if we use the above example functions in foo() && bar(), only the foo() function will be executed.
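For what it's worth, the same short-circuiting can be shown in Java (a minimal sketch; foo and bar are adapted to return booleans):

static boolean foo() { System.out.println("foo()"); return false; }
static boolean bar() { System.out.println("bar()"); return true; }

public static void main(String[] args) {
    boolean r = foo() && bar(); // prints only "foo()": bar() is never called
}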
I'm not very proficient in Java, but for completeness, I want to mention that Java basically does not have undefined or unspecified behavior except for very special situations. Almost everything in Java is well defined. For more details, read rzwitserloot's answer
There are 3 parts to this answer:
1. How this works in C (unspecified behaviour)
2. How this works in Java (the spec is clear on how this should be evaluated)
3. Why there is a difference
For #1, you should read klutt's fantastic answer.
For #2 and #3, you should read this answer.
How does it work in java?
Unlike C's, Java's language specification is far more precisely specified. For example, C doesn't even tell you how many bits the data type int is supposed to have, whereas the Java language spec does: 32 bits, even on 64-bit processors and a 64-bit Java implementation.
The Java spec clearly says that x+y is to be evaluated left to right (vs. C's 'in any order you please, compiler'). Thus, first --y is evaluated, which is clearly 2 (with the side effect of making y 2), then y=10 is evaluated, which is clearly 10 (with the side effect of making y 10), and then 2+10 is evaluated, which is clearly 12.
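As a runnable check of that (just the question's code with the steps spelled out):

public class EvalOrder {
    public static void main(String[] args) {
        int y = 3;
        // Left to right: (--y) yields 2 (y becomes 2),
        // then (y = 10) yields 10 (y becomes 10).
        int z = (--y) + (y = 10);
        System.out.println(z); // 12
        System.out.println(y); // 10
    }
}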
Obviously, a language like Java is just better; after all, undefined behaviour is pretty much a bug by definition. What were the C lang spec writers thinking when they introduced this crazy stuff?
The answer is: performance.
In C, your source code is turned into machine code by the compiler, and the machine code is then interpreted by the CPU. A 2-step model.
In Java, your source code is turned into bytecode by the compiler, the bytecode is then turned into machine code by the runtime, and the machine code is then interpreted by the CPU. A 3-step model.
If you want to introduce optimizations, you don't control what the CPU does, so for C there is only 1 step where it can be done: compilation.
So C (the language) is designed to give lots of freedom to C compilers to attempt to produce optimized machine code. This is a cost/benefit scenario: at the cost of having a ton of 'undefined behaviour' in the lang spec, you get the benefit of better-optimizing compilers.
In Java, you get a second step, and that's where Java does its optimizations: at runtime. java.exe does it to class files; javac.exe is quite 'stupid' and optimizes almost nothing. This is on purpose; at runtime you can do a better job (for example, you can use some bookkeeping to track which of two branches is more commonly taken, and thus branch-predict better than a C app ever could). It also means that the cost/benefit analysis now results in: the lang spec should be clear as day.
So Java code never has undefined behaviour?
Not so. Java has a memory model which includes a ton of undefined behaviour:
class X { int a, b; }

X instance = new X();

new Thread() { public void run() {
    int a = instance.a;
    int b = instance.b;
    instance.a = 5;
    instance.b = 6;
    System.out.print(a);
    System.out.print(b);
}}.start();

new Thread() { public void run() {
    int a = instance.a;
    int b = instance.b;
    instance.a = 1;
    instance.b = 2;
    System.out.print(a);
    System.out.print(b);
}}.start();
is undefined in Java. It may print 0056, 0012, 0010, 0002, 5600, 0600, and many, many more possibilities. Something like 5000 (which it could legally print) is hard to imagine: how can the read of a 'work' but the read of b then fail?
For the exact same reason your C code produces arbitrary answers:
Optimization.
The cost/benefit of 'hardcoding' in the spec exactly how this code would behave would have a large cost: you'd take away most of the room for optimization. So Java paid the cost and now has a lang spec that is ambiguous whenever you modify/read the same fields from different threads without establishing so-called 'happens-before' guards, using e.g. synchronized.
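For illustration, a minimal sketch (class and method names are my own) of establishing such a happens-before relationship with synchronized:

class Shared {
    private int a, b;

    // Writers and readers synchronize on the same lock, so a reader
    // that acquires the lock after a writer released it is guaranteed
    // to see both writes - no more arbitrary interleavings of a and b.
    synchronized void write(int newA, int newB) { a = newA; b = newB; }
    synchronized int[] read() { return new int[] { a, b }; }
}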
When executed in C language the value of z evaluates to 20
It is not the truth. The compiler you use happens to evaluate it to 20. Another one can evaluate it a completely different way: https://godbolt.org/z/GcPsKh
This kind of behaviour is called Undefined Behaviour.
In your expression you have two problems.
The order of evaluation (except for the logical operators) is not specified in C (it is unspecified behaviour).
In this expression there is also a problem with sequence points (undefined behaviour).

What is the difference between primitive data types and wrapper classes? Is the use of primitive data types in Java a violation of object-oriented rules? [duplicate]

Since Java 5, we've had boxing/unboxing of primitive types so that int is wrapped to be java.lang.Integer, and so on and so forth.
I see a lot of new Java projects lately (that definitely require a JRE of at least version 5, if not 6) that are using int rather than java.lang.Integer, though it's much more convenient to use the latter, as it has a few helper methods for converting to long values et al.
Why do some still use primitive types in Java? Is there any tangible benefit?
In Joshua Bloch's Effective Java, Item 5: "Avoid creating unnecessary objects", he posts the following code example:
public static void main(String[] args) {
    Long sum = 0L; // uses Long, not long
    for (long i = 0; i <= Integer.MAX_VALUE; i++) {
        sum += i;
    }
    System.out.println(sum);
}
and it takes 43 seconds to run. Changing the Long to the primitive long brings it down to 6.8 seconds... if that's any indication of why we use primitives.
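For reference, the primitive variant the book compares against would look like this (same loop, Long changed to long; the timings quoted are the book's):

public static void main(String[] args) {
    long sum = 0L; // primitive long: no boxing inside the loop
    for (long i = 0; i <= Integer.MAX_VALUE; i++) {
        sum += i;
    }
    System.out.println(sum);
}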
The lack of native value equality is also a concern (.equals() is fairly verbose compared to ==)
For biziclop:
class Biziclop {
    public static void main(String[] args) {
        System.out.println(new Integer(5) == new Integer(5));
        System.out.println(new Integer(500) == new Integer(500));
        System.out.println(Integer.valueOf(5) == Integer.valueOf(5));
        System.out.println(Integer.valueOf(500) == Integer.valueOf(500));
    }
}
Results in:
false
false
true
false
EDIT: Why does (3) return true and (4) return false?
Because they are two different objects. The 256 integers closest to zero [-128, 127] are cached by the JVM, so it returns the same object for those. Beyond that range, though, they aren't cached, so a new object is created. To make things more complicated, the JLS demands that at least 256 of these flyweights be cached. JVM implementers may add more if they desire, meaning this could run on a system where the nearest 1024 are cached and all of them return true... #awkward
Auto-unboxing can lead to hard-to-spot NPEs:
Integer in = null;
...
...
int i = in; // NPE at runtime
In most situations the null assignment to in is a lot less obvious than above.
Boxed types have poorer performance and require more memory.
Primitive types:
int x = 1000;
int y = 1000;
Now evaluate:
x == y
It's true. Hardly surprising. Now try the boxed types:
Integer x = 1000;
Integer y = 1000;
Now evaluate:
x == y
It's false. Probably. Depends on the runtime. Is that reason enough?
Besides the performance and memory issues, I'd like to bring up another issue: the List interface would be broken without int.
The problem is the overloaded remove() method (remove(int) vs. remove(Object)). remove(Integer) would always resolve to calling the latter, so you could not remove an element by index.
On the other hand, there is a pitfall when trying to add and remove an int:
final int i = 42;
final List<Integer> list = new ArrayList<Integer>();
list.add(i); // add(Object)
list.remove(i); // remove(int) - Ouch!
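One way around that pitfall is to box explicitly so the remove(Object) overload is chosen:

list.remove(Integer.valueOf(i)); // remove(Object): removes the value 42, not an index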
Can you really imagine a
for (int i = 0; i < 10000; i++) {
    // do something
}
loop with java.lang.Integer instead? A java.lang.Integer is immutable, so each increment round the loop would create a new Java object on the heap, rather than just incrementing the int on the stack with a single JVM instruction. The performance would be diabolical.
I would really disagree that it's much more convenient to use java.lang.Integer than int. On the contrary. Autoboxing means that you can use int where you would otherwise be forced to use Integer, and the Java compiler takes care of inserting the code to create the new Integer object for you. Autoboxing is all about allowing you to use an int where an Integer is expected, with the compiler inserting the relevant object construction. It in no way removes or reduces the need for the int in the first place. With autoboxing you get the best of both worlds. You get an Integer created for you automatically when you need a heap-based Java object, and you get the speed and efficiency of an int when you are just doing arithmetic and local calculations.
Primitive types are much faster:
int i;
i++;
Integer (like all Number subclasses, and also String) is an immutable type: once created it cannot be changed. If i were an Integer, then i++ would create a new Integer object - much more expensive in terms of memory and processor time.
First and foremost, habit. If you've coded in Java for eight years, you accumulate a considerable amount of inertia. Why change if there is no compelling reason to do so? It's not as if using boxed primitives comes with any extra advantages.
The other reason is to assert that null is not a valid option. It would be pointless and misleading to declare the sum of two numbers or a loop variable as Integer.
There's the performance aspect of it too. While the performance difference isn't critical in many cases (though when it is, it's pretty bad), nobody likes to write code that could be written just as easily in a faster way we're already used to.
By the way, Smalltalk has only objects (no primitives), and yet it optimized its small integers (using not all 32 bits, only 27 or so) to not allocate any heap space, but simply use a special bit pattern. Other common objects (true, false, null) also had special bit patterns there.
So, at least on 64-bit JVMs (with a 64-bit pointer namespace), it should be possible not to have any objects of Integer, Character, Byte, Short, Boolean, Float (and small Long) at all (apart from those created by explicit new ...()), only special bit patterns, which could be manipulated by the normal operators quite efficiently.
I can't believe no one has mentioned what I think is the most important reason:
"int" is so, so much easier to type than "Integer". I think people underestimate the importance of a concise syntax. Performance isn't really a reason to avoid them because most of the time when one is using numbers is in loop indexes, and incrementing and comparing those costs nothing in any non-trivial loop (whether you're using int or Integer).
The other given reason was that you can get NPEs but that's extremely easy to avoid with boxed types (and it is guaranteed to be avoided as long as you always initialize them to non-null values).
The other reason was that (new Long(1000)) == (new Long(1000)) is false, but that's just another way of saying that .equals has no syntactic support for boxed types (unlike the operators <, >, ==, etc.), so we come back to the "simpler syntax" reason.
I think Steve Yegge's non-primitive loop example illustrates my point very well:
http://sites.google.com/site/steveyegge2/language-trickery-and-ejb
Think about this: how often do you use function types in languages that have good syntax for them (like any functional language, Python, Ruby, and even C) compared to Java, where you have to simulate them using interfaces such as Runnable and Callable, and anonymous classes?
A couple of reasons not to get rid of primitives:
Backwards compatibility.
If they were eliminated, old programs wouldn't even run.
JVM rewrite.
The entire JVM would have to be rewritten to support this new thing.
Larger memory footprint.
You'd need to store the value and the reference, which uses more memory. If you have a huge array of bytes, using byte is significantly smaller than using Byte.
Null pointer issues.
Declaring int i and then doing stuff with i would cause no issues, but declaring Integer i and then doing the same would result in an NPE.
Equality issues.
Consider this code:
Integer i1 = 500;
Integer i2 = 500;
i1 == i2; // Currently would be false (500 is outside the Integer cache).
It would be false. Operators would have to be overloaded, and that would result in a major rewrite of stuff.
Slow
Object wrappers are significantly slower than their primitive counterparts.
Objects are much more heavyweight than primitive types, so primitive types are much more efficient than instances of wrapper classes.
Primitive types are very simple: for example an int is 32 bits and takes up exactly 32 bits in memory, and can be manipulated directly. An Integer object is a complete object, which (like any object) has to be stored on the heap, and can only be accessed via a reference (pointer) to it. It most likely also takes up more than 32 bits (4 bytes) of memory.
That said, the fact that Java has a distinction between primitive and non-primitive types is also a sign of age of the Java programming language. Newer programming languages don't have this distinction; the compiler of such a language is smart enough to figure out by itself if you're using simple values or more complex objects.
For example, in Scala there are no primitive types; there is a class Int for integers, and an Int is a real object (that you can call methods on, etc.). When the compiler compiles your code, it uses primitive ints behind the scenes, so using an Int is just as efficient as using a primitive int in Java.
In addition to what others have said, primitive local variables are not allocated from the heap, but instead on the stack. But objects are allocated from the heap and thus have to be garbage collected.
It's hard to know what kind of optimizations are going on under the covers.
For local use, when the compiler has enough information to make optimizations excluding the possibility of the null value, I expect the performance to be the same or similar.
However, arrays of primitives are apparently very different from collections of boxed primitives. This makes sense given that very few optimizations are possible deep within a collection.
Furthermore, Integer has a much higher logical overhead compared with int: now you have to worry about whether or not int a = b + c; throws an exception.
I'd use the primitives as much as possible and rely on the factory methods and autoboxing to give me the more semantically powerful boxed types when they are needed.
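A minimal sketch of the exception case hinted at above (arithmetic on a boxed type unboxes, and unboxing null throws):

Integer b = null;
Integer c = 7;
int a = b + c; // NullPointerException at runtime: b is unboxed via b.intValue()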
int loops = 100000000;

long start = System.currentTimeMillis();
for (Long l = new Long(0); l < loops; l++) {
    //System.out.println("Long: " + l);
}
System.out.println("Milliseconds taken to loop '" + loops + "' times around Long: " + (System.currentTimeMillis() - start));

start = System.currentTimeMillis();
for (long l = 0; l < loops; l++) {
    //System.out.println("long: " + l);
}
System.out.println("Milliseconds taken to loop '" + loops + "' times around long: " + (System.currentTimeMillis() - start));
Milliseconds taken to loop '100000000' times around Long: 468
Milliseconds taken to loop '100000000' times around long: 31
On a side note, I wouldn't mind seeing something like this find its way into Java.
Integer loop1 = new Integer(0);
for (loop1.lessThan(1000)) {
...
}
Where the for loop automatically increments loop1 from 0 to 1000
or
Integer loop1 = new Integer(1000);
for (loop1.greaterThan(0)) {
...
}
Where the for loop automatically decrements loop1 from 1000 to 0.
Primitive types have many advantages:
Simpler code to write
Performance is better since you are not instantiating an object for the variable
Since they do not represent a reference to an object there is no need to check for nulls
Use primitive types unless you need to take advantage of the boxing features.
You need primitives for doing mathematical operations.
Primitives take less memory, as answered above, and perform better.
You should ask why the class/object type is required at all.
The reason for having object types is to make our lives easier when we deal with collections. Primitives cannot be added directly to a List/Map; rather, you need a wrapper class. Ready-made classes like Integer help you here, plus they have many utility methods, like Integer.parseInt(str). A minimal sketch of that collections point follows below.
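import java.util.ArrayList;
import java.util.List;

List<Integer> numbers = new ArrayList<>();
numbers.add(42);                      // autoboxing: int -> Integer
int first = numbers.get(0);           // unboxing: Integer -> int
int parsed = Integer.parseInt("123"); // utility method on the wrapper class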
I agree with the previous answers that using primitive wrapper objects can be expensive.
But, if performance is not critical in your application, you avoid overflows when using objects. For example:
long bigNumber = Integer.MAX_VALUE + 2;
The value of bigNumber is -2147483647, and you would expect it to be 2147483649. It's a bug in the code that would be fixed by doing:
long bigNumber = Integer.MAX_VALUE + 2L; // note that '2' is a long now (it is '2L')
And bigNumber would be 2147483649. These kinds of bugs are sometimes easy to miss and can lead to unknown behavior or vulnerabilities (see CWE-190).
If you use wrapper objects, the equivalent code won't compile.
Long bigNumber = Integer.MAX_VALUE + 2; // Not compiling
So it's easier to stop these kinds of issues by using wrapper objects.
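As an aside (not from the answer above; Math.addExact has been in the JDK since Java 8), you can also keep the primitive and make the overflow fail fast instead of silently wrapping:

int sum = Math.addExact(Integer.MAX_VALUE, 2); // throws ArithmeticException at runtime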
Your question is so well answered already that I'm replying just to add a little more information not mentioned before.
Because Java performs all mathematical operations on primitive types. Consider this example:
public static int sumEven(List<Integer> li) {
    int sum = 0;
    for (Integer i : li)
        if (i % 2 == 0)
            sum += i;
    return sum;
}
Here, the remainder (%) and compound assignment (+=) operations cannot be applied to the Integer (reference) type, so the compiler performs unboxing and then does the operations.
So be aware of how many autoboxing and unboxing operations happen in a Java program, since it takes time to perform these operations.
Generally, it is better to keep arguments of reference type and results of primitive type.
The primitive types are much faster and require much less memory. Therefore, we might want to prefer using them.
On the other hand, the current Java language specification doesn't allow the use of primitive types in parameterized types (generics), in the Java collections, or in the Reflection API.
When our application needs collections with a big number of elements, we should consider using arrays with as "economical" a type as possible.
For detailed info, see the source: https://www.baeldung.com/java-primitives-vs-objects
To be brief: primitive types are faster and require less memory than boxed ones

If a Java method's return type is int, can the method return a value of type byte?

I'm preparing myself for the Java SE 7 Programmer I exam (1Z0-803) by reading a book called OCA Java SE 7 Programmer I Certification Guide. This book has numerous flaws in it, despite the author having 12 years of experience with Java programming and despite a technical proofreader, presumably on a salary.
There is one thing though that makes me insecure. The author says on page 168 that this statement is true:
If the return type of a method is int, the method can return a value
of type byte.
Well I argue differently and need your help. Take this piece of code for an example:
public static void main(String[] args)
{
    // This line won't compile ("possible loss of precision"):
    byte line1 = returnByte();

    // compiles:
    int line2 = returnByte();

    // compiles too, we "accept" the risk of precision loss:
    byte line3 = (byte) returnByte();
}

public static int returnByte()
{
    byte b = 1;
    return b;
}
Obviously, the compiler does not complain about the difference between the return type int in the returnByte() method signature and what we actually return at the end of the method implementation: a byte. The byte uses fewer bits (8) than the int (32) and will be cast to an int without the risk of precision loss. But the returned value is and will always be an integer! Or am I wrong? What would you have answered on the exam?
I'm not totally sure what the actual return type is, since the statement in the book is said to be true. Is the cast happening at the end of our method implementation, or is the cast happening back in the main method just before assignment?
In the real world, this question would not matter as long as one understand the implicit casting and the risk of losing precision. But since this is just one of those questions that might popup on the exam, I'd love to know the technically correct answer to the question.
Clarification!
The majority of the answers seem to think that I want to know whether one can cast a byte to an int and what happens then. Well, that is not the question. I'm asking what the returned type is. In other words, is the author's quoted statement right or wrong? Does the cast to an int happen before or after the returnByte() method actually returns? If this were the real exam and you got the question, what would you have answered?
Please see line1 in my code snippet. If what the author says were right, that line would have compiled, as the returned value would have been a byte. But it does not compile; the rules of type promotion say that we risk losing precision if we try to squeeze an int into a byte. For me, that is proof that the returned value is an int.
Yes, you can do that.
The value of the byte expression will be promoted to int before it is returned.
The actual return type is as declared in the method signature - int.
IMO, what the author of that book wrote is more or less correct. He just left out explaining the bit about the byte-to-int promotion that happens in the return statement when you "return a byte".
Is the cast happening at the end of our method implementation or is the cast happening back in the main method just before assignment?
The cast (promotion) happens in the returnByte method.
In the real world, this question would not matter as long as one understand the implicit casting and the risk of losing precision.
There is no loss of precision in promoting a byte to an int. If the types were different, there could be loss of precision, but (hypothetically) the loss of precision would be the same wherever the promotion is performed.
The section of the JLS that deals with this is JLS 14.17, which says that the return expression must be assignable to the method's declared return type. It doesn't explicitly state that the promotion is done in the method, but it is implied. Furthermore, it is the only practical way to implement this.
If (hypothetically) the conversion was done in the calling method (e.g. main), then:
The compiler would need to know what the return statement is doing. This is not possible, given that Java classes can be compiled separately.
The compiler would need to know which actual method is going to be called. This is not possible if the method is overridden.
If the method contained two (or more) return statements with expressions that have different types, then the compiler needs to know which return will be executed. This is impossible.
If this was the real exam and you would have got the question, what would you have answered?
I would have answered ... "it depends" ... and proceeded to the alternative viewpoints.
Technically speaking the method is returning an int, but the return statement can take any expression whose type can be converted to an int.
But if someone said to me that the method is returning a byte, I would understand what they meant.
Also, as far as I know, methods/functions use stacks for storage etc., and inside those stacks they store return addresses, not what (type) they are returning. So, again, the ambiguity arises (at least for me). Please correct me if I am wrong.
Yes, technically a method doesn't return a type. (Not even if it returns a Type object. That is an object that denotes a type, not the type itself.) But everyone and his dog would happily say "the returnByte method returns type int".
So, if you are going to be pedantic, then yes it is ambiguous. But the solution is to not be pedantic.
I'm fairly certain it's the same as doing (myInt << 24) >> 24: keeping only the last eight bits, and therefore an immediate loss of precision if the integer doesn't fit into a byte (the signed byte range is -128 to 127).
Yes, the statement is valid. A byte will always fit into an int comfortably.
byte => int // no precision loss
int => byte // precision loss
But if you are doing:
byte => int => byte // you won't lose any data
which is what you are doing. Does that help?
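A round-trip sketch of that:

int original = 42;
byte narrowed = (byte) original; // fits in a byte: no data loss
int widened = narrowed;          // implicit widening back to int: still 42

int tooBig = 300;
byte lossy = (byte) tooBig;      // only the low 8 bits survive
System.out.println(lossy);       // prints 44 (300 - 256)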

Use of uninitialized final field - with/without 'this.' qualifier

Can someone explain to me why the first of the following two samples compiles, while the second doesn't? Notice the only difference is that the first one explicitly qualifies the reference to x with 'this.', while the second doesn't. In both cases, the final field x is clearly attempted to be used before being initialized.
I would have thought both samples would be treated completely equally, resulting in a compilation error for both.
1)
public class Foo {
    private final int x;

    private Foo() {
        int y = 2 * this.x;
        x = 5;
    }
}
2)
public class Foo {
    private final int x;

    private Foo() {
        int y = 2 * x;
        x = 5;
    }
}
After a bunch of spec-reading and thought, I've concluded that:
In a Java 5 or Java 6 compiler, this is correct behavior. Chapter 16, "Definite Assignment", of The Java Language Specification, Third Edition says:
Each local variable (§14.4) and every blank final (§4.12.4) field (§8.3.1.2) must have a definitely assigned value when any access of its value occurs. An access to its value consists of the simple name of the variable occurring anywhere in an expression except as the left-hand operand of the simple assignment operator =.
(emphasis mine). So in the expression 2 * this.x, the this.x part is not considered an "access of [x's] value" (and therefore is not subject to the rules of definite assignment), because this.x is not the simple name of the instance variable x. (N.B. the rule for when definite assignment occurs, in the paragraph after the above-quoted text, does allow something like this.x = 3, and considers x to be definitely assigned thereafter; it's only the rule for accesses that doesn't count this.x.) Note that the value of this.x in this case will be zero, per §17.5.2.
In a Java 7 compiler, this is a compiler bug, but an understandable one. Chapter 16, "Definite Assignment", of The Java Language Specification, Java SE 7 Edition says:
Each local variable (§14.4) and every blank final field (§4.12.4, §8.3.1.2) must have a definitely assigned value when any access of its value occurs.
An access to its value consists of the simple name of the variable (or, for a field, the simple name of the field qualified by this) occurring anywhere in an expression except as the left-hand operand of the simple assignment operator = (§15.26.1).
(emphasis mine). So in the expression 2 * this.x, the this.x part should be considered an "access to [x's] value", and should give a compile error.
But you didn't ask whether the first one should compile, you asked why it does compile (in some compilers). This is necessarily speculative, but I'll make two guesses:
Most Java 7 compilers were written by modifying Java 6 compilers. Some compiler-writers may not have noticed this change. Furthermore, many Java-7 compilers and IDEs still support Java 6, and some compiler-writers may not have felt motivated to specifically reject something in Java-7 mode that they accept in Java-6 mode.
The new Java 7 behavior is strangely inconsistent. Something like (false ? null : this).x is still allowed, and for that matter, even (this).x is still allowed; it's only the specific token-sequence this plus . plus the field-name that's affected by this change. Granted, such an inconsistency already existed on the left-hand side of an assignment statement (we can write this.x = 3, but not (this).x = 3), but that's more readily understandable: it's accepting this.x = 3 as a special permitted case of the otherwise forbidden construction obj.x = 3. It makes sense to allow that. But I don't think it makes sense to reject 2 * this.x as a special forbidden case of the otherwise permitted construction 2 * obj.x, given that (1) this special forbidden case is easily worked around by adding parentheses, that (2) this special forbidden case was allowed in previous versions of the language, and that (3) we still need the special rule whereby final fields have their default values (e.g. 0 for an int) until they're initialized, both because of cases like (this).x, and because of cases like this.foo() where foo() is a method that accesses x. So some compiler-writers may not have felt motivated to make this inconsistent change.
Either of these would be surprising — I assume that compiler-writers had detailed information about every single change to the spec, and in my experience Java compilers are usually pretty good about sticking to the spec exactly (unlike some languages, where every compiler has its own dialect) — but, well, something happened, and the above are my only two guesses.
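To make that inconsistency concrete, a sketch based on the (this).x workaround described above (per that reading, this compiles even under the Java 7 rule):

public class Foo {
    private final int x;

    private Foo() {
        // Parenthesized, so it is not the token sequence `this.x`
        // and the definite-assignment rule does not catch it;
        // it reads the field's default value, 0.
        int y = 2 * (this).x;
        x = 5;
    }
}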
When you use this in the constructor, the compiler sees x as a member attribute of this object, which is default-initialized. Since x is an int, it is default-initialized to 0. This makes the compiler happy, and it works fine at run time too.
When you don't use this, the compiler checks the declaration of x directly during its definite-assignment analysis and hence complains that it may not have been initialized (a compile-time phenomenon).
So it is the use of this that makes the compiler treat x as a member access on an object rather than a plain variable access during compilation, resulting in the different compilation behavior.
When used as a primary expression, the keyword this denotes a value that is a reference to the object for which the instance method was invoked (§15.12), or to the object being constructed.
I think the compiler assumes that writing this.x implies 'this' exists, so a constructor has been called (and the final variable has been initialized).
(Note that no RuntimeException actually occurs when you run it; as described above, this.x simply reads the field's default value, 0.)
I assume you refer to the behaviour in Eclipse. (As stated in a comment, compiling with javac works.)
I think this is an Eclipse problem. It has its own compiler and its own set of rules. One of them is that you may not access a field which is not initialized, even though the Java compiler would initialize the variable for you.

Integer to byte casting in Java

In Java we can do
byte b = 5;
But why can't we pass the same argument to a function which accepts a byte?
myObject.testByte(5);
public void testByte(byte b)
{
    System.out.println("Its byte");
}
It gives the following error:
The method testByte(byte) in the type Apple is not applicable for the arguments (int)
PS: Maybe a silly question; I think I need to revise my basics again.
Thanks.
Hard-coded initializer values are somewhat special in Java - they're assumed to have a coercion to the type of the variable you're initializing. Essentially, that first bit of code effectively looks like this:
byte b = (byte) 5;
If you did this...
myObject.testByte((byte) 5);
...you wouldn't get that error, but if you don't do that, then the 5 is treated by default as an int, and not automatically coerced.
The reason is that when you narrow a primitive, you must make an explicit cast - so you acknowledge a possible loss of data.
To illustrate, when casting 5 there is no loss because the value is within the -128...127 byte value range, but consider a larger int value, say 300 - if you cast to byte, you must throw away some bits to make it fit into 8 bits.
The topic is covered in full here.
Normally, converting an int to a byte without an explicit cast is not allowed.
However, if the conversion is part of an assignment, and the value is a statically-known constant that will fit in the destination type, the compiler will perform the conversion automatically.
This special behaviour is described in section 5.2 of the JLS. It is a special case that only applies to assignment; it does not apply to conversions in other contexts.
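A few cases to make that constant-assignment exception concrete (a sketch):

byte a = 5;              // constant in range: implicit narrowing is allowed
// byte b = 300;         // error: constant out of byte range
final int five = 5;
byte c = five;           // compiles: `five` is a compile-time constant
int notConstant = 5;
// byte d = notConstant; // error: not a constant expression, cast required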
Now that I think about it, the lack of auto-narrowing for arguments is probably to avoid issues with overload resolution. If I have methods #foo(short) and #foo(char), it's not clear which one foo(65) should call. You could have special rules to get around this, but it's easier to just require the caller to be explicit in all cases.
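A sketch of that ambiguity argument (hypothetical overloads):

static void foo(short s) { System.out.println("short"); }
static void foo(char c)  { System.out.println("char"); }

// foo(65);         // does not compile today: 65 is an int, and ints
//                  // don't auto-narrow, so neither overload applies
// foo((short) 65); // explicit cast selects foo(short)
// foo('A');        // char literal selects foo(char)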
You must cast your argument 5 to type byte when calling your method testByte. Java looks specifically at the argument type.
Change it to:
myObject.testByte( (byte) 5);
Integer literals are by default int in Java, and in myObject.testByte(5); the 5 is an integer literal which will be treated as an int.
As you all know, Java is a strictly typed language, so it will not allow you to assign an int to a byte without an explicit type cast.
