Assign values to char in Java [duplicate]

This question already has answers here:
In Java, why can't we assign an int to a char directly, but vice versa is true?
(4 answers)
Closed 10 months ago.
I tried
int a = 100;
char c = a; //doesn't work
but
char c = 100; //does work
Does anybody know why?

The statement
char a = 100;
will work, as every number stored in a char stands for a symbol in Unicode, from 0 to 65,535.
For example
char a = 435;
System.out.println("Output: " + a);
gives output
Output: Ƴ
As mentioned in the answer below, a type cast is needed when assigning an int value to a char, because int has a wider value range (from -2147483648 to 2147483647) than char.
For example,
long a = 1;
int b = a;
is also impossible without an explicit cast.

The reason why char c = a; does not work is that char and int are incompatible types.
A char can store 2 bytes, whereas an int can store 4 bytes, so the Java compiler does not allow the assignment: it could result in data loss when the value is converted.
To avoid this, simply cast the int to char before assigning: char c = (char) a;.
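For example, a minimal sketch of the cast in action:
int a = 100;
char c = (char) a;     // explicit narrowing cast: c is 'd' (code point 100)
System.out.println(c); // prints: d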

The compiler won't do 'Widening or Automatic Type Conversion' for int to char.


Java char addition (char1 = char2 + 10) [duplicate]

This question already has answers here:
Why can't you add an int and a char in some cases?
(1 answer)
Java - char, int conversions
(4 answers)
Integer arithmetic in Java with char and integer literal
(6 answers)
Closed 1 year ago.
Why can't I do this?
public class test123 {
    public static void main(String[] args) {
        char c = 34;
        char a = c + 10; // error: possible loss of precision
    }
}
I'm new to Java, so sorry if this question is actually stupid.
When you add numbers, they undergo binary numeric promotion. That is, the operands are widened to allow them to be added.
When you add a char to an int, the char is widened to an int, and the result is an int.
As such, you would have to cast the int back to a char:
char a = (char) (c + 10);
However, even when adding a char to another char, both are widened to int, and the result is again an int, and so cannot be assigned to a char variable. The rules are basically:
If either operand is a double, widen both to double
Else, if either operand is a float, widen both to float
Else, if either operand is a long, widen both to long
Else, widen both to int
So, even if you were adding a byte to a byte, both are widened to int, added, and the result is an int.
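For instance, a minimal sketch of the byte case just described:
byte x = 1;
byte y = 2;
// byte z = x + y;       // error: x + y is promoted to int
byte z = (byte) (x + y); // cast the int result back to byte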
The exception to this is if you made c final:
final char c = 34;
In that case, c has a compile-time constant value, so c + 10 is a compile-time constant expression. Because it's a compile-time constant expression, the compiler knows its value, and knows that it fits into the range of char; so it would be allowed:
final char c = 34;
char a = c + 10;
As per the JLS, int is the narrowest type for arithmetic. Narrower values are widened to int, and the result is int.
You would get the same error even if you coded:
char a = c + c; // error
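As with the char + int example, a cast fixes this too:
char a = (char) (c + c);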
The Java char is a primitive data type. It is used to declare character values, like char char1 = 'a';.
You can add an int to a char, but the result is an int, so you'd have to cast back to char:
char a = 'a';
char b = (char) (a + 4);
System.out.println(b); // prints "e"

How to reconvert from int to ASCII in Java? [duplicate]

This question already has answers here:
Convert int to char in java
(18 answers)
Closed 1 year ago.
What I understand is that if, for example, I have an int a = 49 and want to find the ASCII character equivalent to it, all I need to do is:
int a = 49;
System.out.println((char)a);
which has the output: 1.
But how can I do this in reverse? So let's say I have int a = 1 and I want the output to be 49.
I have already tried stuff like:
int a = 1;
System.out.println((char) a);
But the output here is "." instead of 49.
This:
int a = 49;
System.out.println((char)a);
gives you the char whose internal value is 49, which is to say the unicode character '1'. There is no numerical conversion going on, it's just two ways to look at the same number.
In this:
int a = 1;
System.out.println((char)a);
you get the char whose internal value is 1, which is a unicode control character (ctrl/A, which probably won't be printed in any useful way). There is nothing there that can magically come up with the value 49.
If you had the character '1' then it's easy:
int a = '1';
System.out.println(a);
but this is practically the same as your first case; '1' and 49 are the same value.
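So, to go from the digit 1 as an int to the value 49, a minimal sketch (relying on '0' being 48) would be:
int a = 1;
char digit = (char) ('0' + a);   // '0' + 1 == 49, i.e. the character '1'
System.out.println((int) digit); // prints: 49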

final characters in Java [duplicate]

This question already has answers here:
Why can I not add two bytes and get an int, but I can add two final bytes and get a byte?
(3 answers)
Java - char, int conversions
(4 answers)
Closed 8 years ago.
The following segment of code issues a compile-time error.
char c = 'c';
char d = c + 5;
The error on the second line says,
possible loss of precision
required: char
found: int
The error message is based on the NetBeans IDE.
When the variable c is declared final, as follows,
final char c = 'c';
char d = c + 5;
the compile-time error vanishes.
It is unrelated to the case of final strings.
Why does the final modifier make a difference here?
The reason is that the JLS #5.2 (Assignment conversion) says so:
If the expression is a constant expression (§15.28) of type byte, short, char, or int, a narrowing primitive conversion may be used if the type of the variable is byte, short, or char, and the value of the constant expression is representable in the type of the variable.
In your example, char c = 'c'; is not a constant but final char c = 'c'; is.
The rationale is probably that the addition operator + first converts its operands to int. So the operation could overflow, unless everything is constant, in which case the compiler can prove that there is no overflow.
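A minimal sketch of the difference:
char c1 = 'c';
// char d1 = c1 + 5; // error: c1 + 5 is an int, not a constant expression

final char c2 = 'c';
char d2 = c2 + 5;    // OK: constant expression with value 104, which fits in char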
When you apply the + operator to integral types
Binary numeric promotion is performed on the operands (§5.6.2).
In this case, the char values are promoted to int values.
Here
char c = 'c';
char d = c + 5;
because c is not a constant expression, the compiler cannot determine whether the value of c + 5, which is an int, will fit in a char.
In this
final char c = 'c';
char d = c + 5;
where c is a constant expression, the compiler can determine that the value of c, which is 99, added to 5 gives 104, which does fit in a char. Because of this guarantee, Java can safely perform a narrowing conversion from int to char.
If instead you had
final char a = Character.MAX_VALUE;
char b = (a + 5);
you would see the same behavior as in your first test case: the compiler determines that the int value resulting from a + 5 would not fit in a char.

Can the char type be categorized as an integer?

Just now I read "char is the only unsigned integral primitive type in Java."
Does this mean the char is one of the integral types in Java?
Similarly, in C I recently read that the types include scalar types, function types, union types, and aggregate types; scalar types include pointer types and arithmetic types; arithmetic types include integral types and floating-point types; and the integral types include enumerated types and character types.
Can the char type really be categorized as an integer, both in Java and C?
Yes, a char is an integral type in all the popular languages in which it appears. "Integral" means that its spectrum is discrete and the smallest difference between any two distinct values is 1. The required range of supported values is usually quite small compared to that of other integral types. Computer hardware traditionally treats integers as the fundamental data type; by contrast, arithmetic floating-point types are a more recent and more complicated addition.
Similar to @aioobe's answer:
int n = 5;
char ch = (char) ('0' + n); // '5'
or the reverse
char ch = '9';
int i = ch - '0'; // 9
or the unusual
char ch = 'a';
ch += 1; // 'b';
or the odd
char ch = '0';
ch *= 1.1; // '4' as (char) (48 * 1.1) == '4'
ch %= 16; // `\u0004`
int i = ch; // 4
BTW: From String.hashCode():
int h = 0;
int off = offset; // offset field of the old JDK String class
char val[] = value;
int len = count;
for (int i = 0; i < len; i++) {
    h = 31 * h + val[off++]; // each char participates directly as an int
}
According to the Java Primitive Data Types tutorial:
char: The char data type is a single 16-bit Unicode character. It has a minimum value of '\u0000' (or 0) and a maximum value of '\uffff' (or 65,535 inclusive).
So yes, it is a 16-bit unsigned integer. Whether you use the type to represent a number or a character is up to you...
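For example, the same value can be viewed either way:
char c = 'A';
System.out.println(c);       // prints: A  (viewed as a character)
System.out.println((int) c); // prints: 65 (viewed as an unsigned 16-bit integer)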
Also keep in mind that while the char type is guaranteed to be 16 bits in Java, the only restriction C imposes is that the type must be at least 8 bits. According to the C spec reference from this answer:
number of bits for smallest object that is not a bit-field (byte): CHAR_BIT 8 (a minimum, not a maximum)
So a char in C does not necessarily represent the same range of integer values as a char in Java.
I'm unsure of the formal definition of an integral type, but in short, yes, char is an integral type in Java, since it can be seen as representing an integer.
You can for instance do
char c1 = 'a' + 'b';       // constant expression, fits in char: OK
char c2 = 5;
char c3 = (char) (c2 + 3); // c2 + 3 is an int, so a cast is needed
int i = c3;
char c4 = (char) i;
and so on.
In memory, essentially everything is integral... but yes: char is an integer type in C, C++, and Java.
Char is an "Integer Data Type" in C and its related progeny. As per the K&R Book, "By definition, chars are just small integers". They are used for storing 'Character' data.
The ASCII Table lists 128 characters, and each text character corresponds to an integer value.
The char data type is 1 byte (8 bits), so it can store up to 2^8 = 256 different values. (These 256 values correspond to different character codes.)
For example:
As per ASCII standard, the letter "x" is stored as 01111000 (decimal 120).
For example, you can add a value to a char variable, just like any integer:
#include <stdio.h>

int main()
{
    char a = 'x';
    int b = 3 + a;
    printf("value is:\n%d", b);
    return 0;
}
output: b=123 (3 + 120 for the char 'x')
i.e. chars are just numbers (integers: 0 to 255 when unsigned).
The C specification calls for the char type to be implemented as a 1-byte integer. The specifications for other types are not always so clear. For example, the original Kernighan and Ritchie C book said that, as far as short and long were concerned, among different implementations you could only rely on the fact that short was no longer than long.
Yes, a char type can be categorized as an integer:
int i = 49;
int k = 36;
System.out.print((char) i + (char) k + "$"); // you might expect "1$", but this prints "85$"

Data conversion in Java

char c = 'c';
int i = 10;
double d = 50;
long l = 30;
String s = "Goodbye";
Are these statements valid?
s += i;
i += s;
c += s;
c = c + i;
Can someone explain the logic of converting between data types?
Why don't you give it a try:
bash-3.2$ cat ConversionTest.java
public class ConversionTest {
    public static void main(String[] args) {
        char c = 'c';
        int i = 10;
        double d = 50;
        long l = 30;
        String s = "Goodbye";
        // Are these statements valid?
        s += i;
        i += s;
        c += s;
        c = c + i;
    }
}
bash-3.2$ javac ConversionTest.java
ConversionTest.java:12: incompatible types
found : int
required: java.lang.String
i+=s;
^
ConversionTest.java:13: incompatible types
found : char
required: java.lang.String
c+=s;
^
ConversionTest.java:14: possible loss of precision
found : int
required: char
c=c+i;
^
3 errors
EDIT
The long story:
Basically, all the types in Java have a "shape", if you want to call it that (well, I'm going to call it that for this answer).
For the primitive data types (boolean, byte, short, char, int, float, long, double), the "shape" is the size in bytes it uses (or in bits; here 1 byte = 8 bits):
boolean = true or false
byte = 8 bits
short = 16 bits
char = 16 bits
int = 32 bits
float = 32 bits
long = 64 bits
double = 64 bits
The "shape" of an object varies according to its class.
So, basically, you can assign anything to anything as long as it fits in the "shape".
So you can assign an int to a long (you can think of it as 32 bits fitting into 64 bits), a short (16) into an int (32), etc.
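For instance, a couple of widening assignments that compile without any cast:
short s = 16;
int i = s;  // 16 bits fit into 32 bits
long l = i; // 32 bits fit into 64 bits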
What you can't do is assign something that doesn't fit in the shape.
So
ConversionTest.java:12: incompatible types
found : int
required: java.lang.String
i+=s;
^
You can't assign a String to an int. How would you? Where would the contents go? They are not of the same "shape", nor even a compatible one.
The same goes for String to char:
ConversionTest.java:13: incompatible types
found : char
required: java.lang.String
c+=s;
^
Now, you might try to assign an int (32 bits) to a char (16 bits) or to a short (16 bits). The problem is that if the value needs more than 16 bits (131,071, for instance),
you would lose the bits that do not fit into 16 bits. That's why you get this error:
ConversionTest.java:14: possible loss of precision
found : int
required: char
c=c+i;
However, if you are sure that the value fits (for instance, int i = 65;, which certainly fits into 16 bits), you can cast it, like this:
int i = 65;
char c = (char) i;
By casting, you tell the compiler:
"Hey, I'm the programmer here. I know what I'm doing."
Yes, no, no, no (unless you explicitly perform a typecast). If you were to write up a simple main method, compile it, and execute it, you would see this yourself; these problems are identified by the compiler.
This page on Java primitive data types explains it pretty well.
char c = 'c';
int i = 10;
double d = 50;
long l = 30;
String s = "Goodbye";
s += i;    // legal :)
i += s;    // not legal :( The operator += is undefined for the argument types int, String
c += s;    // not legal :( The operator += is undefined for the argument types char, String
c = c + i; // not legal :( Type mismatch: cannot convert from int to char
The complete explanation of Java data type conversions is long and detailed.
There are two types of conversions: widening conversions and narrowing conversions. Widening conversions are allowed, and Java handles them for you, but narrowing conversions are not. A widening conversion converts a "smaller" value such as an int (32 bits) to a "larger" value such as a long (64 bits). Narrowing conversions, which go the other way, have to be done explicitly.
s+=i;
will require an int to be converted to a String which is allowed.
i+=s;
will require a String to be converted to an int, which is not allowed. The += operator roughly translates to
i = i + s;
and i + s returns a String, which cannot be assigned to an int.
c+=s;
This is not allowed for a similar reason: c + s returns a String, which you are trying to assign to a char.
c=c+i;
will also give an error, because c + i results in an int (32 bits), and assigning it to a char (16 bits) may cause loss of precision.
Each of the operations you tried is actually possible, but you have to tell Java explicitly that you want to do them and will accept the consequences. Having said that, mixed-type operations are frowned upon in strict programming circles, since there are edge cases that can potentially cause problems.
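For illustration, hedged sketches of the explicit versions (the parseInt line assumes the String holds a number, which "Goodbye" does not):
s += i;                      // fine as-is: s becomes "Goodbye10"
i += Integer.parseInt("10"); // a String must be converted to a number explicitly
c = (char) (c + i);          // narrow the int sum back to char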
s += i will concatenate s and the string "10"; this is equivalent to s += Integer.toString(i);.
i += s — I don't think this will work; the types are incompatible.
c += s also shouldn't compile; same thing, incompatible types.
c = c + i should add 10 to the numeric value of c, so c will become the 10th letter after 'c' => 'm', I guess.
EDIT: So in the last case you have to cast the result to char, c = (char) (c + i);, to make it compile.
