I know that if we perform arithmetic operations on byte values, they are implicitly promoted to int and the result is an int, and hence we need an explicit cast back to byte in order to store the result in a byte variable.
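For example, a minimal sketch of what I mean:
byte a = 10;
byte b = 20;
// byte c = a + b;          // does not compile: a + b is promoted to int
byte c = (byte) (a + b);    // explicit cast back to byte is required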
But I wanted to ask:
Does the conversion from byte to int happen at the time the variable is declared, or when it is used in an arithmetic operation? I ask because the Java decompiler I am using shows the conversion from byte to int at the point of declaration, so is that a decompiler quirk, or is that really what happens?
And if it really does happen at the time of declaration, then why does storing a value beyond the range of byte produce an error?
E.g., the code is:
public class Hello
{
    public static void main(String[] args)
    {
        byte a = 90;
    }
}
Output from decompiler-
A byte remains a byte; it is statically typed as such by the compiler.
The JVM implements a single byte variable (as opposed to a byte array) in an int-sized slot, and it uses int opcodes for arithmetic.
Assigning a compile-time int constant to a byte is also handled by the compiler, as long as the value is in the byte range -128 .. 127.
Implicit type conversion in Java is a compile-time mechanism, so yes, it will report errors such as a value being out of range. Here is a great SO answer on this topic:
Runtime vs Compile time
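For instance, a minimal sketch of that compile-time range check (the variable names are made up):
byte ok = 90;       // compiles: 90 is inside the byte range -128..127
// byte bad = 200;  // would not compile: 200 is outside the byte range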
The compiler stores values declared on the stack in int-sized slots, cropping them to the declared type, as long as they are within range.
Taken from an InfoQ article:
When loading a value onto the operand stack, the JVM treats primitive types that are smaller than an integer as if they were integers. Consequently, it makes little difference to a program's bytecode representation if a method variable is represented by, for example, a short instead of an int. Instead, the Java compiler inserts additional bytecode instructions that crop a value to the allowed range of a short when assigning values. Using shorts over integers in a method's body can therefore result in additional bytecode instructions rather than optimizing a program.
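A small sketch of the extra narrowing step the article describes, assuming a plain short local variable:
short s = 1;
// s = s + 1;          // does not compile: s + 1 has type int
s += 1;                // compiles: the compound assignment narrows back to short
s = (short) (s + 1);   // the equivalent explicit form of that hidden "cropping" step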
Related
static void method(short x)
{
    // do some stuff
}
When I call the above method from the main method using the following line:
method(1); // compilation failed
I know this call is invalid because the parameter 'x' expects a short and we are passing an int.
I further tested the above concept and coded another method:
static short method()
{
    // do some stuff
    return 1;
}
but the above method compiles fine, even though the return type is short and we are returning an int.
Why does the second method compile?
The return statement (JLS 14.17) is able to use an assignment conversion (JLS 5.2) to convert from the original expression type to the return type.
Assignment conversion includes the ability to convert a constant expression to a narrower type if it's in the range of the target type. So a constant expression of type int can be converted to short when the value is in the range of short.
Method arguments don't go through assignment conversion - they only use method invocation conversion (JLS 5.3) which doesn't include this constant conversion.
In terms of why this happens - I suspect it just makes things simpler to reason about. Assignment conversions always have a single target type - whereas in the case of method arguments, there may be various different overloads to consider, so there'd have to be more rules to determine how specific a constant expression conversion would be. That's just a guess though - and it clearly could be done. (C# allows this, for example.)
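A quick sketch of the asymmetry, reusing the method from the question:
static void method(short x)
{
    // do some stuff
}

static void caller()
{
    short s = 1;         // OK: assignment conversion narrows the constant 1
    // method(1);        // error: method invocation conversion does not narrow constants
    method((short) 1);   // OK: explicit cast
    method(s);           // OK: the argument already has type short
}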
Your value happens to fit in a short. Try returning something that doesn't fit in 16 bits, such as an integer value larger than 32767, and you will get a compile-time error.
There shouldn't be an issue with any number within the short range of -32,768 to 32,767.
Actually, you can use a short to save memory in large arrays, in situations where the memory savings actually matter.
In comparison, the int data type is a 32-bit signed two's complement integer. It has a minimum value of -2,147,483,648 and a maximum value of 2,147,483,647. For integral values, this data type is generally the default choice unless there is a reason (e.g. memory savings) to choose something else. This data type will most likely be large enough for the numbers your program will use, but if you need a wider range of values, use long instead.
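A rough sketch of the array case (the sizes are illustrative):
short[] samples = new short[1_000_000]; // roughly 2 MB of element data
int[] wider = new int[1_000_000];       // roughly 4 MB for the same element count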
Is it possible to index a Java array based on a byte?
i.e. something like
array[byte b] = x;
I have a very performance-critical application which reads b (in the code above) from a file, and I don't want the overhead of converting this to an int. What is the best way to achieve this? Is there a performance-decrease as a result of using this method of indexing rather than an int?
There's no overhead for "converting this to an int." At the Java bytecode level, all bytes are already ints.
In any event, doing array indexing will automatically upcast to an int anyway. None of these things will improve performance, and many will decrease performance. Just leave your code using an int.
The JVM specification, section 2.11.1:
Note that most instructions in Table 2.2 do not have forms for the integral types byte, char, and short. None have forms for the boolean type. Compilers encode loads of literal values of types byte and short using Java virtual machine instructions that sign-extend those values to values of type int at compile-time or runtime. Loads of literal values of types boolean and char are encoded using instructions that zero-extend the literal to a value of type int at compile-time or runtime. Likewise, loads from arrays of values of type boolean, byte, short, and char are encoded using Java virtual machine instructions that sign-extend or zero-extend the values to values of type int. Thus, most operations on values of actual types boolean, byte, char, and short are correctly performed by instructions operating on values of computational type int.
As all integer types in Java are signed, you have to mask off all but the low 8 bits of b's value anyway, provided you expect to read values greater than 0x7F from the file:
byte b;                    // the byte read from the file
byte[] a = new byte[256];
a[b & 0xFF] = x;           // masking gives an index in the range 0..255
No; array index expressions are converted to int (JLS 10.4), so a byte index will simply be promoted - but note that indices must be non-negative, and bytes are signed.
No, there is no performance decrease, because at the moment you read the byte, you store it in a CPU register anyway. Those registers always work with words, which means the byte is always "converted" to an int (or a long, if you are on a 64-bit machine).
So, simply read your byte like this:
int b = (in.readByte() & 0xFF);
If your application is that performance-critical, you should be optimizing elsewhere.
HashMap internally has its own static final variables for its working.
static final int DEFAULT_INITIAL_CAPACITY = 16;
Why can't they use the byte datatype instead of int, since the value is so small?
They could, but it would be a micro-optimization, and the tradeoff would be less readable and maintainable code (Premature optimization, anyone?).
This is a static final variable, so it's allocated only once per classloader. I'd say we can spare those 3 (I'm guessing here) bytes.
I think this is because the capacity of a Map is expressed in terms of an int. When you work with a byte and an int together, the byte will be converted to an int anyway because of the promotion rules. Expressing the default capacity as an int maybe avoids those needless promotions.
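A tiny sketch of the promotion being described (the field names are made up, not the real HashMap source):
static final byte CAPACITY_AS_BYTE = 16;
static final int CAPACITY_AS_INT = 16;

int thresholdFromByte = CAPACITY_AS_BYTE * 3 / 4; // the byte is promoted to int anyway
int thresholdFromInt = CAPACITY_AS_INT * 3 / 4;   // no promotion needed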
Using byte or short for variables and constants instead of int is a premature optimization that has next to no effect.
Most arithmetic and logical instructions of the JVM work only with int, long, float and double, other data types have to be cast to (usually) ints in order for these instructions to be executed on them.
The default type of number literals is int for integral and double for floating point numbers. Using byte, short and float types can thus cause some subtle programming bugs and generally worsens code readability.
A little example from the Java Puzzlers book:
public static void main(String[] args) {
    for (byte b = Byte.MIN_VALUE; b < Byte.MAX_VALUE; b++) {
        if (b == 0x90)
            System.out.print("Joy!");
    }
}
This program doesn't print Joy!, because the hex literal 0x90 is an int with the value 144. Since bytes in Java are signed (which is itself quite inconvenient), the variable b never reaches that value (Byte.MAX_VALUE is 127), so the condition is never satisfied.
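One possible fix, as a sketch, is to force the literal into byte range before comparing:
if (b == (byte) 0x90)        // (byte) 0x90 is -112, a value b actually reaches
    System.out.print("Joy!");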
All in all, the reduction in memory footprint is simply too small (next to none) to justify such micro-optimisation. Generally, explicit numeric types of different sizes are neither necessary nor well suited for higher-level programming. Personally, I think the only case where smaller numeric types are appropriate is byte arrays.
A byte value still takes up the same space inside the JVM, and for practical purposes it will need to be converted to an int anyway, explicitly or implicitly, including for array sizes, indexes, etc.
Converting from a byte to an int (as it needs to be an int in any case) would make the code slower if anything. The cost of the memory is pretty trivial in the overall scheme of things.
Given the default could be any int value, I think int makes sense.
A lot of data can be represented as a series of bytes.
int is the default data type that most users will use when counting or working with whole numbers.
The issue with using byte is that the compiler will not narrow to it implicitly. Any time you try
byte variableName = intVariable;
the assignment won't compile without a cast, whereas a widening assignment such as
double variableName = intVariable;
works fine.
Why do I have to cast 0 to byte when the method argument is byte?
Example:
void foo() {
    bar((byte) 0);
}

void bar(byte val) {}
I know that if the argument is of type long I don't have to cast it, so I'm guessing that Java treats integer literals as ints.
Doesn't it discourage the usage of byte/short?
Because 0 is an int literal, and down-casting to byte from int requires an explicit cast (since there is the possibility of information loss.) You don't need an explicit cast from int to long, since no information could be lost in that conversion.
The literal 0 is an integer, and there is no automatic cast from integer -> byte, due to a potential loss of precision. For example, (byte)1024 is outside the valid range. Perhaps the compiler could be smarter and allow this for small integer literals, but it doesn't...
Widening to long is okay, since every integer can be a long with no loss of information.
And yes, for this reason I would almost never use short or byte in any APIs exposed by my code (although it would be fine for intermediate calculations)
Integer literals are implicitly of type int, unless they are explicitly marked as long through an L suffix. There are no corresponding markers for short or byte.
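A small sketch of the literal types (the suffixes shown in the comments do not exist in Java):
int i = 42;     // integer literal: int by default
long l = 42L;   // the L suffix marks a long literal
// byte b = 42B;  // there is no such suffix for byte
// short s = 42S; // ... or for short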
Doesn't it discourage the usage of byte/short?
Perhaps it does, and rightfully so. There is no advantage in using individual bytes or shorts, and they often cause problems, especially since the introduction of autoboxing.
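One example of the kind of autoboxing problem meant here, as a sketch:
Integer boxed = 1;
short s = 1;
System.out.println(boxed.equals(s)); // false: s autoboxes to Short, not Integer
System.out.println(boxed.equals(1)); // true: 1 autoboxes to Integer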
I'm just wondering why this works (in Java):
byte b = 27;
but with a method declared like this:
public void method(byte b){
    System.out.println(b);
}
This doesn't work:
a.method(27);
It gives a compiler error as follows:
`The method method(byte) in the type App is not applicable for the arguments (int)`
Reading this doesn't give me any clue (probably I am misunderstanding something).
The reason the assignment
byte b = 27;
works is due to section 5.2 of the Java Language Specification (assignment conversion), which includes:
In addition, if the expression is a constant expression (§15.28) of type byte, short, char, or int:
A narrowing primitive conversion may be used if the type of the variable is byte, short, or char, and the value of the constant expression is representable in the type of the variable.
In other words, the language has special provision for this case with assignments. Normally, there's no implicit conversion from int to byte.
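A short sketch of that provision in action (the variable names are made up):
byte a = 27;        // OK: constant expression in the byte range
final int k = 27;
byte b = k;         // OK: k is a constant expression of type int
int i = 27;
// byte c = i;      // error: i is not a constant expression
// byte d = 300;    // error: a constant, but not representable as a byte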
Interestingly, C# works differently in this respect (despite being like Java in so many other aspects of core functionality) - the method call is valid in C#.
Simple answer - a byte is 1 byte of memory, and an int is 4 bytes.
Without an explicit cast from int to byte, there is no implied coercion, because an int value of, say, 10790 would lose information if truncated down to one byte.
The process of taking one numeric value and converting it to another without your help is called a promotion - you are taking, for example, a 1-byte number and making it into a 4-byte number by extending it to fill the extra bytes. This is (generally) safe, since the range of possible values in the target type is greater than in the source type.
When the opposite occurs, such as taking an int and converting it to a byte, it is a "demotion", and it is not automatic; you have to force it, since there is a potential loss of information.
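A small sketch of both directions, reusing the 10790 example from above:
byte small = 42;
int widened = small;           // promotion: automatic, no information lost
byte narrowed = (byte) 10790;  // demotion: cast required; the value is truncated to 38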
What happens here is that 27 is interpreted as an int. The expression itself is an int, and then it gets passed to a method that expects a byte. In other words, the fact that you are putting it inside a call to a method that takes a byte doesn't change the fact that Java considers it to be an int to begin with.
I don't remember exactly how to define byte constants in Java, but generally speaking, a number like 27 is a magic number. You may want to write a constant such as
final byte MY_CONSTANT_THAT_MEANS_27_BUT_WITH_MEANINGFUL_NAME = 27;
and then call your function with that constant instead of the literal 27.
By default, Java will treat a numeric value such as 27 as an int. Think of it in a similar way: if you write 123.45, it will be seen as a double, and "hello" as a String. These are literals, not variables, so you need to cast the value to a byte, like so:
a.method((byte)27);