Integer to byte casting in Java

In Java we can do
byte b = 5;
But why can't we pass same argument to a function which accepts byte
myObject.testByte(5);
public void testByte (byte b)
{
System.out.println("Its byte");
}
It gives following error
The method testByte(byte) in the type Apple is not applicable for the arguments (int)
PS: Maybe a silly question, I think I need to revise my basics again.
Thanks.

Hard-coded initializer values are somewhat special in Java: a constant expression that fits in the target type is implicitly narrowed to the type of the variable you're initializing. Essentially, that first bit of code effectively looks like this:
byte b = (byte) 5;
If you did this...
myObject.testByte((byte) 5);
...you wouldn't get that error, but if you don't do that, then the 5 is created by default as an int, and not automatically coerced.

The reason is that when you are narrowing a primitive, you must explicitly make a cast - so you acknowledge a possible loss of data.
To illustrate, when casting 5 there is no loss because the value is within the -128...127 byte value range, but consider a larger int value, say 300 - if you cast to byte, you must throw away some bits to make it fit into 8 bits.
The topic is covered in full here.
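To make that concrete, here is a small sketch (an addition, not part of the answer above) showing what narrowing actually does to a value that doesn't fit:
byte ok = (byte) 5;       // 5 fits in the byte range -128..127, so the value is preserved
byte lossy = (byte) 300;  // 300 is 1_0010_1100 in binary; only the low 8 bits (0010_1100 = 44) survive
System.out.println(ok + " " + lossy); // prints: 5 44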

Normally, converting an int to a byte without an explicit cast is not allowed.
However, if the conversion is part of an assignment, and the value is a statically-known constant that will fit in the destination type, the compiler will perform the conversion automatically.
This special behaviour is described in section 5.2 of the JLS. It is a special case that only applies to assignment; it does not apply to conversions in other contexts.
Now that I think about it, the lack of auto-narrowing for arguments is probably to avoid issues with overload resolution. If I have methods #foo(short) and #foo(char), it's not clear which one foo(65) should call. You could have special rules to get around this, but it's easier to just require the caller to be explicit in all cases.
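A hedged illustration of that point (the class and foo methods below are hypothetical, just following this answer's naming): with only foo(short) and foo(char) overloads, a bare int argument matches neither, so the caller has to pick the narrowing explicitly:
class NarrowingOverloads {
    static void foo(short s) { System.out.println("short"); }
    static void foo(char c)  { System.out.println("char"); }

    public static void main(String[] args) {
        // foo(65);        // does not compile: no overload accepts an int without narrowing
        foo((short) 65);   // prints "short"
        foo((char) 65);    // prints "char" (65 is 'A')
    }
}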

You must cast the argument 5 to byte when calling testByte; Java looks specifically at the argument type.
Change it to:
myObject.testByte( (byte) 5);

Integer literals are int by default in Java, so in myObject.testByte(5); the literal 5 is treated as an int.
As you know, Java is a strictly typed language, so it will not let you pass an int where a byte is expected. You need an explicit type cast.
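Pulling the answers together, a minimal runnable sketch (reusing the question's Apple and testByte names) of the call that fails and the two forms that compile:
public class Apple {
    public void testByte(byte b) {
        System.out.println("Its byte: " + b);
    }

    public static void main(String[] args) {
        Apple myObject = new Apple();
        // myObject.testByte(5);     // does not compile: 5 is an int literal
        myObject.testByte((byte) 5); // explicit narrowing cast at the call site
        byte b = 5;                  // allowed: constant assignment, value fits in a byte
        myObject.testByte(b);        // a byte variable needs no cast
    }
}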

Related

What is the difference between "Explicitly" and "Implicitly" in programming language?

I would like to have a clear and precise understanding of the difference between the two.
Also, is the this keyword used to reference implicitly or explicitly? This is another reason I want clarification between the two.
I assume that using the this keyword is an implicit reference (to something within the class), whilst an explicit reference is to something not belonging to the class itself, like a parameter variable being passed into a method.
Of course my assumptions could obviously be wrong which is why I'm here asking for clarification.
Explicit means done by the programmer.
Implicit means done by the JVM or the tool, not the programmer.
For example:
Java provides a default constructor implicitly. Even if the programmer didn't write code for a constructor, they can still call the default constructor.
Explicit is the opposite of this, i.e. the programmer has to write it.
You already have your answer, but I would like to add a little more.
Implicit: what is already available in your programming language, like built-in methods, classes, data types, etc.
- Implicit code saves the programmer effort and development time.
- It provides optimised code, and so on.
Explicit: what is created by the programmer (you) as per their (your) requirements, like your app class, or methods like getName() and setName().
Finally, put simply:
pre-defined code that helps the programmer build their apps and programs is known as implicit, and code that has been written by the programmer (you) to fulfil a requirement is known as explicit.
Implicit casting (widening conversion)
A value of a smaller data type (occupying less memory) is assigned to a larger data type. This is done implicitly by the JVM; the smaller size is widened to the larger size. This is also known as automatic type conversion.
Examples:
int x = 10; // occupies 4 bytes
double y = x; // occupies 8 bytes
System.out.println(y); // prints 10.0
In the above code, a 4-byte int value is assigned to an 8-byte double variable.
Explicit casting (narrowing conversion)
A value of a larger data type (occupying more memory) cannot be assigned to a smaller data type without an explicit cast. The JVM does not do this implicitly; the casting operation must be performed by the programmer. The larger size is narrowed to the smaller size.
double x = 10.5; // 8 bytes
int y = x; // 4 bytes ; raises compilation error
In the above code, an 8-byte double value is narrowed to a 4-byte int value, which raises a compilation error. Let us explicitly type cast it:
double x = 10.5;
int y = (int) x;
The double x is explicitly converted to the int y. The rule of thumb is that the same data type should exist on both sides of the assignment.
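For completeness, a short sketch (an addition, not part of the answer above) showing what the narrowed value actually is; the fractional part is simply discarded, not rounded:
double x = 10.5;
int y = (int) x;        // narrowing: truncates toward zero
System.out.println(y);  // prints 10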
I'll try to provide an example of a similar functionality across different programming languages to differentiate between implicit & explicit.
Implicit: When something is available as a feature/aspect of the programming language constructs being used. And you have to do nothing but call the respective functionality through the API/interface directly.
For example Garbage collection in java happens implicitly. The JVM does it for us at an appropriate time.
Explicit: When user/programmer intervention is required to invoke/call a specific functionality, without which the desired action won't take place.
For example, in C++, freeing of the memory (read: Garbage collection version) has to happen by explicitly calling delete and free operators.
Hope this helps you understand the difference clearly.
This was way more complicated than I think it needed to be:
explicit = label names of an index (label-based indexing)
example:
df['index label name']
vs
implicit = integer of index (zero-based indexing)
df[0]

For boolean fields in Java Model class is it better to use Boolean object or primitive boolean field [duplicate]

There are discussions around Integer vs int in Java. The default value of the former is null while in the latter it's 0. How about Boolean vs boolean?
A variable in my application can have 0/1 values. I would like to use boolean/Boolean and prefer not to use int. Can I use Boolean/boolean instead?
Yes, you can use Boolean/boolean instead.
The first one is an object and the second one is a primitive type.
With the first one you get more methods, which can be useful.
The second one is cheaper in terms of memory and will save you memory, so go for it if that matters.
Now choose your way.
Boolean wraps the boolean primitive type. In JDK 5 and upwards, Oracle (or Sun before Oracle bought them) introduced autoboxing/unboxing, which essentially allows you to do this
boolean result = Boolean.TRUE;
or
Boolean result = true;
which the compiler essentially translates to
Boolean result = Boolean.valueOf(true);
So, for your answer, it's YES.
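A small sketch of what that boxing and unboxing looks like in practice (an illustrative addition, assuming a standard JDK where Boolean.valueOf returns the cached constants):
Boolean boxed = true;                       // autoboxing: compiled as Boolean.valueOf(true)
boolean primitive = boxed;                  // auto-unboxing: compiled as boxed.booleanValue()
System.out.println(boxed == Boolean.TRUE);  // true: valueOf(true) returns the cached constant
System.out.println(primitive);              // true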
I am going to extend the provided answers a bit (since so far they concentrate on their "own"/artificial terminology, focusing on a particular programming language rather than the bigger picture behind creating programming languages in general, i.e. where things like type safety vs. memory considerations make the difference):
int is not boolean
Consider
boolean bar = true;
System.out.printf("Bar is %b\n", bar);
System.out.printf("Bar is %d\n", (bar)?1:0);
int baz = 1;
System.out.printf("Baz is %d\n", baz);
System.out.printf("Baz is %b\n", baz);
with output
Bar is true
Bar is 1
Baz is 1
Baz is true
The Java code on the 3rd line, (bar)?1:0, illustrates that bar (a boolean) cannot be implicitly converted (cast) to an int. I bring this up not to illustrate the implementation details behind the JVM, but to point out that in terms of low-level considerations (such as memory size) one sometimes has to prefer values over type safety. Especially if that type safety is not truly/fully used, as with boolean types, where checks are done in the form of
if value ∈ {0,1} then cast to boolean type, otherwise throw an exception.
All just to state that {0,1} is far smaller than {-2^31, ..., 2^31 - 1}. Seems like overkill, right? Type safety is truly important in user-defined types, not in the implicit casting of primitives (although the latter are included in the former).
Bytes are not types or bits
Note that in memory your variable from the range {0,1} will still occupy at least a byte or a word (x bits, depending on the register size) unless it is specially taken care of (e.g. packed nicely in memory, 8 "boolean" bits into 1 byte, back and forth).
By preferring type safety (putting/wrapping the value into a box of a particular type) over extra value packing (e.g. using bit shifts or arithmetic), one effectively chooses writing less code over gaining more memory. (On the other hand, one can always define a custom user type that facilitates all the conversion and is no worse than Boolean.)
keyword vs. type
Finally, your question is about comparing a keyword vs. a type. I believe it is important to explain why or how exactly you gain performance by using/preferring keywords ("marked" as primitive) over types (normal composite user-definable classes declared using another keyword, class),
or in other words
boolean foo = true;
vs.
Boolean foo = true;
The first "thing" (type) can not be extended (subclassed) and not without a reason. Effectively Java terminology of primitive and wrapping classes can be simply translated into inline value (a LITERAL or a constant that gets directly substituted by compiler whenever it is possible to infer the substitution or if not - still fallback into wrapping the value).
Optimization is achieved for a trivial reason:
"Less runtime casting operations => more speed."
That is why, when the actual type inference is done, it may (still) end up instantiating a wrapper class with all the type information, if necessary (or converting/casting into one).
So, the difference between boolean and Boolean is exactly in Compilation and Runtime (a bit far going but almost as instanceof vs. getClass()).
Finally, autoboxing is slower than primitives
Note that the fact that Java can do autoboxing is just "syntactic sugar". It does not speed anything up; it just allows you to write less code. That's it. Casting and wrapping into the type-information container is still performed. For performance reasons, choose arithmetic, which always skips the extra housekeeping of creating class instances carrying type information to implement type safety. Lack of type safety is the price you pay to gain performance. For code with boolean-valued expressions, type safety (which you get when you write less, and hence implicit, code) would be critical, e.g. for if-then-else flow control.
You can use the Boolean constants Boolean.TRUE and Boolean.FALSE instead of 0 and 1. You can declare your variable as type boolean if a primitive is what you are after. This way you won't have to create new Boolean objects.
One observation: (though this can be thought of side effect)
boolean being a primitive can either say yes or no.
Boolean is an object (it can refer to either yes or no or 'don't know' i.e. null)
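A tiny sketch of that three-state difference (illustrative, not from the answer above); note the null check that guards the unboxing:
boolean primitive = false;  // only two states: true or false
Boolean wrapper = null;     // three states: Boolean.TRUE, Boolean.FALSE, or null ("don't know")
if (wrapper != null && wrapper) {
    // safe: the null check prevents a NullPointerException when wrapper is unboxed
}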
Basically, boolean represents a primitive data type, while Boolean represents a reference data type. This story started when Java wanted to become purely object-oriented; the wrapper class concept was provided to overcome the limitations of primitive data types.
boolean b1;
Boolean b2;
b1 and b2 are not same.
You can use Boolean / boolean. Simplicity is the way to go.
If you do not need specific api (Collections, Streams, etc.) and you are not foreseeing that you will need them - use primitive version of it (boolean).
With primitives you guarantee that you will not pass null values. You will not fall in traps like this. The code below throws NullPointerException (from: Booleans, conditional operators and autoboxing):
public static void main(String[] args) throws Exception {
    Boolean b = true ? returnsNull() : false; // NPE on this line.
    System.out.println(b);
}

public static Boolean returnsNull() {
    return null;
}
Use Boolean when you need an object, eg:
Stream of Booleans,
Optional
Collections of Booleans
Boolean is immutable and therefore thread-safe, so you can consider this factor as well, along with all the others listed in the answers.

Java: auto-unboxing combined with casting

Please help me wrap my head around why this doesn't work. (It's not a practical problem, it's a mental exercise for the OCPJP exam.)
public class ImplicitConversions {
    Integer iBoxed;
    short sPrimitive = (short) iBoxed;
}
// compiler error: incompatible types; required: short, found: Integer
I'm assuming the compiler tries to cast first without (or before) unboxing, whereas, for example, an arithmetic operation (iBoxed + iBoxed) will unbox it first. Therefore, is it safe to say that auto-boxing/unboxing has its place in the order of operations (Unary, Arithmetic, Relational, Logical, Conditional, Assignment), and where is it exactly?
I've been reading about casting conversion in source below (to make sure I'm compatible with 1.6), but this one eludes me. Thanks.
http://docs.oracle.com/javase/specs/jls/se5.0/html/conversions.html#20232
This
(short)iBoxed
is a stand-alone expression that doesn't depend on its context. What you are trying to do is cast an Integer reference value to a short primitive value. That's just not a casting context that is allowed. (See the table further down in the chapter.)
Integer has a method shortValue(). Use this instead:
short sPrimitive = iBoxed.shortValue();
An auto-boxing/unboxing expression cannot be combined with a wider- or narrower-range cast.
However, you can double-cast the iBoxed variable:
short sPrimitive = (short) (int) iBoxed;
First the iBoxed variable is auto-unboxed to an int, and then the int is converted to a short.
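Putting both suggestions side by side, a minimal runnable sketch (the class name and the value 70000 are chosen here just to show that the short narrowing really does truncate):
public class UnboxAndNarrow {
    public static void main(String[] args) {
        Integer iBoxed = 70000;                  // deliberately outside the short range
        short viaMethod = iBoxed.shortValue();   // unbox and narrow via the wrapper API
        short viaCasts = (short) (int) iBoxed;   // unbox to int, then narrow explicitly
        System.out.println(viaMethod + " " + viaCasts); // both print 4464
    }
}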

What is the difference between primitive data types and wrapper classes? Is the use of primitive data types in Java a violation of object-oriented rules? [duplicate]

Since Java 5, we've had boxing/unboxing of primitive types so that int is wrapped to be java.lang.Integer, and so on and so forth.
I see a lot of new Java projects lately (that definitely require a JRE of at least version 5, if not 6) that are using int rather than java.lang.Integer, though it's much more convenient to use the latter, as it has a few helper methods for converting to long values et al.
Why do some still use primitive types in Java? Is there any tangible benefit?
In Joshua Bloch's Effective Java, Item 5: "Avoid creating unnecessary objects", he posts the following code example:
public static void main(String[] args) {
    Long sum = 0L; // uses Long, not long
    for (long i = 0; i <= Integer.MAX_VALUE; i++) {
        sum += i;
    }
    System.out.println(sum);
}
and it takes 43 seconds to run. Changing the Long to a primitive long brings it down to 6.8 seconds... if that's any indication of why we use primitives.
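For comparison, a sketch of the primitive version the timing refers to; only the declaration of sum changes:
long sum = 0L; // primitive long: no boxing on every += in the loop
for (long i = 0; i <= Integer.MAX_VALUE; i++) {
    sum += i;
}
System.out.println(sum);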
The lack of native value equality is also a concern (.equals() is fairly verbose compared to ==)
for biziclop:
class Biziclop {
    public static void main(String[] args) {
        System.out.println(new Integer(5) == new Integer(5));
        System.out.println(new Integer(500) == new Integer(500));
        System.out.println(Integer.valueOf(5) == Integer.valueOf(5));
        System.out.println(Integer.valueOf(500) == Integer.valueOf(500));
    }
}
Results in:
false
false
true
false
EDIT Why does (3) return true and (4) return false?
Because they are two different objects. The 256 integers closest to zero [-128; 127] are cached by the JVM, so they return the same object for those. Beyond that range, though, they aren't cached, so a new object is created. To make things more complicated, the JLS demands that at least 256 flyweights be cached. JVM implementers may add more if they desire, meaning this could run on a system where the nearest 1024 are cached and all of them return true... #awkward
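A compact way to see the cache boundary on a default JVM (an illustrative sketch; the exact upper bound can vary, as noted above):
Integer a = 127, b = 127;   // autoboxed via Integer.valueOf, inside the cached range
Integer c = 128, d = 128;   // outside the default cache: two distinct objects
System.out.println(a == b);      // true
System.out.println(c == d);      // false on a default JVM
System.out.println(c.equals(d)); // true: compare boxed values with equals()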
Auto-unboxing can lead to hard-to-spot NPEs
Integer in = null;
...
...
int i = in; // NPE at runtime
In most situations the null assignment to in is a lot less obvious than above.
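A common real-world shape of that trap (a hypothetical example, not from the answer above): the null comes out of a map lookup rather than an explicit assignment:
import java.util.HashMap;
import java.util.Map;

public class UnboxNpe { // hypothetical class name for illustration
    public static void main(String[] args) {
        Map<String, Integer> counts = new HashMap<>();
        // get() returns null for a missing key; unboxing that null throws a NullPointerException
        int n = counts.get("missing"); // NPE at runtime
        System.out.println(n);
    }
}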
Boxed types have poorer performance and require more memory.
Primitive types:
int x = 1000;
int y = 1000;
Now evaluate:
x == y
It's true. Hardly surprising. Now try the boxed types:
Integer x = 1000;
Integer y = 1000;
Now evaluate:
x == y
It's false. Probably. Depends on the runtime. Is that reason enough?
Besides performance and memory issues, I'd like to come up with another issue: The List interface would be broken without int.
The problem is the overloaded remove() method (remove(int) vs. remove(Object)). remove(Integer) would always resolve to calling the latter, so you could not remove an element by index.
On the other hand, there is a pitfall when trying to add and remove an int:
final int i = 42;
final List<Integer> list = new ArrayList<Integer>();
list.add(i); // add(Object)
list.remove(i); // remove(int) - Ouch!
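A hedged note on the usual workaround: box the value explicitly so the remove(Object) overload is chosen instead of remove(int):
list.remove(Integer.valueOf(i)); // removes the element 42, not the element at index 42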
Can you really imagine a
for (int i = 0; i < 10000; i++) {
    // do something
}
loop with java.lang.Integer instead? A java.lang.Integer is immutable, so each increment round the loop would create a new java object on the heap, rather than just increment the int on the stack with a single JVM instruction. The performance would be diabolical.
I would really disagree that it's much more convenient to use java.lang.Integer than int. On the contrary. Autoboxing means that you can use int where you would otherwise be forced to use Integer, and the Java compiler takes care of inserting the code to create the new Integer object for you. Autoboxing is all about allowing you to use an int where an Integer is expected, with the compiler inserting the relevant object construction. It in no way removes or reduces the need for int in the first place. With autoboxing you get the best of both worlds: you get an Integer created for you automatically when you need a heap-based Java object, and you get the speed and efficiency of an int when you are just doing arithmetic and local calculations.
Primitive types are much faster:
int i;
i++;
Integer (like all the Number wrapper classes, and also String) is an immutable type: once created, it cannot be changed. If i were an Integer, then i++ would create a new Integer object - much more expensive in terms of memory and processor time.
First and foremost, habit. If you've coded in Java for eight years, you accumulate a considerable amount of inertia. Why change if there is no compelling reason to do so? It's not as if using boxed primitives comes with any extra advantages.
The other reason is to assert that null is not a valid option. It would be pointless and misleading to declare the sum of two numbers or a loop variable as Integer.
There's the performance aspect of it too. While the performance difference isn't critical in many cases (though when it is, it's pretty bad), nobody likes to write code that could just as easily be written in the faster way we're already used to.
By the way, Smalltalk has only objects (no primitives), and yet it optimizes its small integers (using not all 32 bits, only 27 or so) to not allocate any heap space, but simply use a special bit pattern. Other common objects (true, false, null) also had special bit patterns there.
So, at least on 64-bit JVMs (with a 64-bit pointer namespace), it should be possible to have no objects of Integer, Character, Byte, Short, Boolean, Float (and small Long) at all (apart from those created by an explicit new ...()), only special bit patterns, which could be manipulated by the normal operators quite efficiently.
I can't believe no one has mentioned what I think is the most important reason:
"int" is so, so much easier to type than "Integer". I think people underestimate the importance of a concise syntax. Performance isn't really a reason to avoid them because most of the time when one is using numbers is in loop indexes, and incrementing and comparing those costs nothing in any non-trivial loop (whether you're using int or Integer).
The other given reason was that you can get NPEs but that's extremely easy to avoid with boxed types (and it is guaranteed to be avoided as long as you always initialize them to non-null values).
The other reason was that (new Long(1000))==(new Long(1000)) is false, but that's just another way of saying that ".equals" has no syntactic support for boxed types (unlike the operators <, >, =, etc), so we come back to the "simpler syntax" reason.
I think Steve Yegge's non-primitive loop example illustrates my point very well:
http://sites.google.com/site/steveyegge2/language-trickery-and-ejb
Think about this: how often do you use function types in languages that have good syntax for them (like any functional language, python, ruby, and even C) compared to java where you have to simulate them using interfaces such as Runnable and Callable and nameless classes.
Couple of reasons not to get rid of primitives:
Backwards compatibility.
If it's eliminated, any old programs wouldn't even run.
JVM rewrite.
The entire JVM would have to be rewritten to support this new thing.
Larger memory footprint.
You'd need to store the value and a reference to it, which uses more memory. If you have a huge array of bytes, an array of byte is significantly smaller than an array of Byte.
Null pointer issues.
Declaring int i then doing stuff with i would result in no issues, but declaring Integer i and then doing the same would result in an NPE.
Equality issues.
Consider this code:
Integer i1 = 500;
Integer i2 = 500;
i1 == i2; // Currently would be false (500 is outside the autoboxing cache, so these are distinct objects).
This would be false. Operators would have to be overloaded, and that would result in a major rewrite of stuff.
Slow
Object wrappers are significantly slower than their primitive counterparts.
Objects are much more heavyweight than primitive types, so primitive types are much more efficient than instances of wrapper classes.
Primitive types are very simple: for example an int is 32 bits and takes up exactly 32 bits in memory, and can be manipulated directly. An Integer object is a complete object, which (like any object) has to be stored on the heap, and can only be accessed via a reference (pointer) to it. It most likely also takes up more than 32 bits (4 bytes) of memory.
That said, the fact that Java has a distinction between primitive and non-primitive types is also a sign of age of the Java programming language. Newer programming languages don't have this distinction; the compiler of such a language is smart enough to figure out by itself if you're using simple values or more complex objects.
For example, in Scala there are no primitive types; there is a class Int for integers, and an Int is a real object (that you can call methods on, etc.). When the compiler compiles your code, it uses primitive ints behind the scenes, so using an Int is just as efficient as using a primitive int in Java.
In addition to what others have said, primitive local variables are not allocated from the heap, but instead on the stack. But objects are allocated from the heap and thus have to be garbage collected.
It's hard to know what kind of optimizations are going on under the covers.
For local use, when the compiler has enough information to make optimizations excluding the possibility of the null value, I expect the performance to be the same or similar.
However, arrays of primitives are apparently very different from collections of boxed primitives. This makes sense given that very few optimizations are possible deep within a collection.
Furthermore, Integer has a much higher logical overhead compared with int: now you have to worry about whether or not int a = b + c; throws an exception.
I'd use the primitives as much as possible and rely on the factory methods and autoboxing to give me the more semantically powerful boxed types when they are needed.
int loops = 100000000;

long start = System.currentTimeMillis();
for (Long l = new Long(0); l < loops; l++) {
    //System.out.println("Long: " + l);
}
System.out.println("Milliseconds taken to loop '" + loops + "' times around Long: " + (System.currentTimeMillis() - start));

start = System.currentTimeMillis();
for (long l = 0; l < loops; l++) {
    //System.out.println("long: " + l);
}
System.out.println("Milliseconds taken to loop '" + loops + "' times around long: " + (System.currentTimeMillis() - start));
Milliseconds taken to loop '100000000' times around Long: 468
Milliseconds taken to loop '100000000' times around long: 31
On a side note, I wouldn't mind seeing something like this find its way into Java.
Integer loop1 = new Integer(0);
for (loop1.lessThan(1000)) {
...
}
Where the for loop automatically increments loop1 from 0 to 1000
or
Integer loop1 = new Integer(1000);
for (loop1.greaterThan(0)) {
...
}
Where the for loop automatically decrements loop1 from 1000 to 0.
Primitive types have many advantages:
Simpler code to write
Performance is better since you are not instantiating an object for the variable
Since they do not represent a reference to an object there is no need to check for nulls
Use primitive types unless you need to take advantage of the boxing features.
You need primitives for doing mathematical operations.
Primitives take less memory, as answered above, and perform better.
You should also ask why the class/object type is required at all.
The reason for having the object type is to make our lives easier when we deal with collections. Primitives cannot be added directly to a List/Map; you need a wrapper class instead. Ready-made classes like Integer help you here, and they also have many utility methods, like Integer.parseInt(str).
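A short sketch of that point (illustrative; the class name is hypothetical): autoboxing hides the wrapper when adding to a collection, and the wrapper class supplies utilities like parseInt:
import java.util.ArrayList;
import java.util.List;

class WrapperDemo { // hypothetical class name for illustration
    public static void main(String[] args) {
        List<Integer> numbers = new ArrayList<>();
        numbers.add(42);                      // the int literal 42 is autoboxed to an Integer
        int parsed = Integer.parseInt("123"); // utility method on the wrapper class
        numbers.add(parsed);                  // autoboxed again on the way in
        System.out.println(numbers);          // prints [42, 123]
    }
}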
I agree with the previous answers that using primitive wrapper objects can be expensive.
But if performance is not critical in your application, using objects helps you avoid overflows. For example:
long bigNumber = Integer.MAX_VALUE + 2;
The value of bigNumber is -2147483647, and you would expect it to be 2147483649. It's a bug in the code that would be fixed by doing:
long bigNumber = Integer.MAX_VALUE + 2L; // note that '2' is a long literal now (2L).
And bigNumber would be 2147483649. These kinds of bugs are sometimes easy to miss and can lead to unexpected behavior or vulnerabilities (see CWE-190).
If you use wrapper objects, the equivalent code won't compile.
Long bigNumber = Integer.MAX_VALUE + 2; // Not compiling
So it's easier to catch these kinds of issues by using primitive wrapper objects.
Your question has already been answered thoroughly, so I'm replying just to add a little more information that wasn't mentioned before.
Java performs all mathematical operations on primitive types. Consider this example:
public static int sumEven(List<Integer> li) {
    int sum = 0;
    for (Integer i : li)
        if (i % 2 == 0)
            sum += i;
    return sum;
}
Here, the remainder (%) and compound addition (+=) operations cannot be applied directly to the Integer (reference) type; the compiler performs unboxing and then does the operations.
So be aware of how many autoboxing and unboxing operations happen in a Java program, since these operations take time.
Generally, it is better to keep arguments of the reference type and results of the primitive type.
The primitive types are much faster and require much less memory. Therefore, we might want to prefer using them.
On the other hand, current Java language specification doesn’t allow usage of primitive types in the parameterized types (generics), in the Java collections or the Reflection API.
When our application needs collections with a big number of elements, we should consider using arrays of the most "economical" type possible.
*For detailed info see the source: https://www.baeldung.com/java-primitives-vs-objects
To be brief: primitive types are faster and require less memory than boxed ones

If a Java method's return type is int, can the method return a value of type byte?

I'm preparing myself for the Java SE 7 Programmer I exam (1Z0-803) by reading a book called OCA Java SE 7 Programmer I Certification Guide. This book has numerous flaws in it, despite the author having 12 years of experience with Java programming and despite a technical proofreader, presumably on a salary.
There is one thing though that makes me insecure. The author says on page 168 that this statement is true:
If the return type of a method is int, the method can return a value
of type byte.
Well I argue differently and need your help. Take this piece of code for an example:
public static void main(String[] args)
{
    // This line won't compile ("possible loss of precision"):
    byte line1 = returnByte();
    // compiles:
    int line2 = returnByte();
    // compiles too, we "accept" the risk of precision loss:
    byte line3 = (byte) returnByte();
}

public static int returnByte()
{
    byte b = 1;
    return b;
}
Obviously, the compiler does not complain about the difference between the return type int in the returnByte() method signature and what we actually return at the end of the method implementation: a byte. The byte uses fewer bits (8) than the int (32) and will be cast to an int without risk of precision loss. But the returned value is and will always be an int! Or am I wrong? What would you have answered on the exam?
I'm not totally sure what the actual return type is, since this statement in the book is said to be true. Is the cast happening at the end of our method implementation or is the cast happening back in the main method just before assignment?
In the real world, this question would not matter as long as one understand the implicit casting and the risk of losing precision. But since this is just one of those questions that might popup on the exam, I'd love to know the technically correct answer to the question.
Clarification!
The majority of the answers seem to think that I want to know whether one can cast a byte to an int and what happens then. Well, that is not the question. I'm asking what the returned type is. In other words, is the author's quoted statement right or wrong? Does the cast to an int happen before or after the returnByte() method actually returns? If this was the real exam and you would have got the question, what would you have answered?
Please see line1 in my code snippet. If what the author says were right, that line would have compiled, as the returned value would have been a byte. But it does not compile; the rules of type promotion say that we risk losing precision if we try to squeeze an int into a byte. For me, that is proof that the returned value is an int.
Yes, you can do that.
The value of the byte expression will be promoted to int before it is returned.
The actual return type is as declared in the method signature - int.
IMO, what the author of that book wrote is more or less correct. He just left out explaining the bit about the byte-to-int promotion that happens in the return statement when you "return a byte".
Is the cast happening at the end of our method implementation or is the cast happening back in the main method just before assignment?
The cast (promotion) happens in the returnByte method.
In the real world, this question would not matter as long as one understand the implicit casting and the risk of losing precision.
There is no loss of precision in promoting a byte to an int. If the types were different, there could be loss of precision, but (hypothetically) the loss of precision would be the same wherever the promotion is performed.
The section of the JLS that deals with this is JLS 14.17, which says that the return expression must be assignable to the method's declared return type. It doesn't explicitly state that the promotion is done in the method, but it is implied. Furthermore, it is the only practical way to implement this.
If (hypothetically) the conversion was done in the calling method (e.g. main), then:
The compiler would need to know what the return statement is doing. This is not possible, given that Java classes can be compiled separately.
The compiler would need to know which actual method is going to be called. This is not possible if the method is overridden.
If the method contained two (or more) return statements with expressions that have different types, then the compiler needs to know which return will be executed. This is impossible.
If this was the real exam and you would have got the question, what would you have answered?
I would have answered ... "it depends" ... and proceeded to the alternative viewpoints.
Technically speaking the method is returning an int, but the return statement can take any expression whose type can be converted to an int.
But if someone said to me that the method is returning a byte, I would understand what they meant.
Also, as far as I know, methods/functions use stacks for storage, etc., and those stacks store return addresses, not the type being returned. So, again, the ambiguity arises (at least for me). Please correct me if I am wrong.
Yes, technically a method doesn't return a type. (Not even if it returns a Type object. That is an object that denotes a type, not the type itself.) But everyone and his dog would happily say "the returnByte method returns type int".
So, if you are going to be pedantic, then yes it is ambiguous. But the solution is to not be pedantic.
I'm fairly certain it's the same as doing (myInt << 24) >> 24, keeping only the last eight bits, and therefore a loss of precision whenever the int value falls outside the byte range of -128 to 127.
Yes the statement is valid. A byte will always fit into an int comfortably.
byte => int //no precision loss
int => byte //precision loss
But if you are doing:
byte => int => byte //you won't lose any data
Which is what you are doing. Does that help?
