I was reading the documentation for StringBuffer, in particular the reverse() method. That documentation mentions something about surrogate pairs. What is a surrogate pair in this context? And what are low and high surrogates?
The term "surrogate pair" refers to a means of encoding Unicode characters with high code-points in the UTF-16 encoding scheme.
In the Unicode character encoding, characters are mapped to values between 0x0 and 0x10FFFF.
Internally, Java uses the UTF-16 encoding scheme to store strings of Unicode text. In UTF-16, 16-bit (two-byte) code units are used. Since 16 bits can only contain the range of characters from 0x0 to 0xFFFF, some additional complexity is used to store values above this range (0x10000 to 0x10FFFF). This is done using pairs of code units known as surrogates.
The surrogate code units are in two ranges known as "high surrogates" and "low surrogates", depending on whether they are allowed at the start or end of the two-code-unit sequence.
Early Java versions represented Unicode characters using the 16-bit char data type. This design made sense at the time, because all Unicode characters had values less than 65,535 (0xFFFF) and could be represented in 16 bits. Later, however, Unicode increased the maximum value to 1,114,111 (0x10FFFF). Because 16-bit values were too small to represent all of the Unicode characters in Unicode version 3.1, 32-bit values, called code points, were adopted for the UTF-32 encoding scheme.
But 16-bit values are preferred over 32-bit values for efficient memory use, so Unicode introduced a new design to allow for the continued use of 16-bit values. This design, adopted in the UTF-16 encoding scheme, assigns 1,024 values to 16-bit high surrogates (in the range U+D800 to U+DBFF) and another 1,024 values to 16-bit low surrogates (in the range U+DC00 to U+DFFF). It uses a high surrogate followed by a low surrogate, a surrogate pair, to represent the 1,024 × 1,024 = 1,048,576 (0x100000) values between 65,536 (0x10000) and 1,114,111 (0x10FFFF).
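As a rough sketch of that arithmetic (the bit manipulation below is one way to derive the pair; Character.toChars and Character.toCodePoint are the standard JDK helpers that do the same thing), to paste into a main method:

int codePoint = 0x1F309;                                       // a supplementary character, above 0xFFFF
char high = (char) (0xD800 + ((codePoint - 0x10000) >> 10));   // high surrogate
char low  = (char) (0xDC00 + ((codePoint - 0x10000) & 0x3FF)); // low surrogate
System.out.printf("%04X %04X%n", (int) high, (int) low);       // D83C DF09

char[] pair = Character.toChars(codePoint);                    // the same two code units
int back = Character.toCodePoint(pair[0], pair[1]);            // 0x1F309 again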
Adding some more info to the above answers from this post.
Tested in Java 12; it should work in all Java versions from 5 onwards.
As mentioned here: https://stackoverflow.com/a/47505451/2987755,
any character whose code point is above U+FFFF is represented as a surrogate pair, which Java stores as a pair of char values, i.e. the single Unicode character is represented as two adjacent Java characters.
As we can see in the following example.
1. Length:
"π".length() //2, Expectations was it should return 1
"π".codePointCount(0,"π".length()) //1, To get the number of Unicode characters in a Java String
2. Equality:
Represent 🌉 as a String using the Unicode escape \ud83c\udf09, as below, and check equality.
"🌉".equals("\ud83c\udf09") // true
Java string literals do not support UTF-32-style escapes; a \u escape takes exactly four hex digits, so "\u1F309" is U+1F30 followed by the character '9'.
"🌉".equals("\u1F309") // false
3. You can convert a Unicode code point to a Java String
"🌉".equals(new String(Character.toChars(0x0001F309))) // true
4. String.substring() does not consider supplementary characters
"ππ".substring(0,1) //"?"
"ππ".substring(0,2) //"π"
"ππ".substring(0,4) //"ππ"
To solve this we can use String.offsetByCodePoints(int index, int codePointOffset)
"ππ".substring(0,"ππ".offsetByCodePoints(0,1) // "π"
"ππ".substring(2,"ππ".offsetByCodePoints(1,2)) // "π"
5. Iterating a Unicode string with BreakIterator (see the sketch after this list)
6. Sorting Strings with Unicode java.text.Collator
7. Character.toUpperCase() and Character.toLowerCase() should not be used; instead, use String.toUpperCase(Locale) and String.toLowerCase(Locale) with the appropriate locale.
8. Character.isLetter(char ch) does not support supplementary characters; use Character.isLetter(int codePoint) instead. For each methodName(char ch) method in the Character class there is a methodName(int codePoint) variant which can handle supplementary characters.
9. Specify the charset explicitly in String.getBytes(), when converting from bytes to a String, and in InputStreamReader and OutputStreamWriter.
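For item 5 above, here is a minimal sketch of iterating user-perceived characters with java.text.BreakIterator (the class name and sample string are just for illustration):

import java.text.BreakIterator;

public class GraphemeIterationDemo {
    public static void main(String[] args) {
        String s = "a\uD83C\uDF09b"; // 'a', 🌉 (U+1F309), 'b'
        BreakIterator it = BreakIterator.getCharacterInstance();
        it.setText(s);
        int start = it.first();
        for (int end = it.next(); end != BreakIterator.DONE; start = end, end = it.next()) {
            System.out.println(s.substring(start, end)); // prints "a", then the emoji, then "b"
        }
    }
}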
Ref:
https://coolsymbol.com/emojis/emoji-for-copy-and-paste.html#objects
https://www.online-toolz.com/tools/text-unicode-entities-convertor.php
https://www.ibm.com/developerworks/library/j-unicode/index.html
https://www.oracle.com/technetwork/articles/javaee/supplementary-142654.html
Other terms worth exploring: normalization, BiDi.
What that documentation is saying is that invalid UTF-16 strings may become valid after calling the reverse method since they might be the reverses of valid strings. A surrogate pair (discussed here) is a pair of 16-bit values in UTF-16 that encode a single Unicode code point; the low and high surrogates are the two halves of that encoding.
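A small sketch of that behavior (the emoji escape \uD83C\uDF09 is just an example surrogate pair), to paste into a main method:

String valid = "a\uD83C\uDF09b";                              // 'a', a surrogate pair, 'b'
String reversed = new StringBuilder(valid).reverse().toString();
System.out.println(reversed.equals("b\uD83C\uDF09a"));        // true: the pair is kept in order

String unpaired = "\uDF09\uD83C";                             // low before high: invalid UTF-16
System.out.println(new StringBuilder(unpaired).reverse()
        .toString().equals("\uD83C\uDF09"));                  // true: the reverse happens to be a valid pair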
Small preface
Unicode represents code points. Each code point can be encoded in 8-, 16,- or 32-bit blocks according to the Unicode standard.
Prior to Version 3.1, the encodings mostly in use were the 8-bit encoding known as UTF-8 and the 16-bit encoding known as UCS-2, or "Universal Character Set coded in 2 octets". UTF-8 encodes Unicode code points as a sequence of 1-byte blocks, while UCS-2 always takes 2 bytes:
A = 41 - one block of 8-bits with UTF-8
A = 0041 - one block of 16-bits with UCS-2
Ω = CE A9 - two blocks of 8-bits with UTF-8
Ω = 03A9 - one block of 16-bits with UCS-2
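These byte sequences can be checked from Java (a sketch; UCS-2 is not exposed as a Java charset, but UTF-16BE produces the same two bytes for BMP characters; assumes import java.nio.charset.StandardCharsets):

for (byte b : "A".getBytes(StandardCharsets.UTF_8))    System.out.printf("%02X ", b & 0xFF); // 41
System.out.println();
for (byte b : "Ω".getBytes(StandardCharsets.UTF_8))    System.out.printf("%02X ", b & 0xFF); // CE A9
System.out.println();
for (byte b : "Ω".getBytes(StandardCharsets.UTF_16BE)) System.out.printf("%02X ", b & 0xFF); // 03 A9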
Problem
The consortium thought that 16 bits would be enough to cover any human-readable language, which gives 2^16 = 65,536 possible code values. This was true for Plane 0, also known as the BMP or Basic Multilingual Plane, which includes 55,445 of those 65,536 code points today. The BMP covers almost every human language in the world, including Chinese-Japanese-Korean (CJK) symbols.
Time passed, new Asian character sets were added, and Chinese symbols alone took more than 70,000 points. Now there are even emoji points as part of the standard. Sixteen new "additional" planes were added. The UCS-2 room was not enough to cover anything beyond Plane 0.
Unicode decision
Limit Unicode to 17 planes × 65,536 characters per plane = 1,114,112 maximum code points.
Introduce UTF-32, formerly known as UCS-4, which holds 32 bits for each code point and covers all planes.
Continue to use UTF-8 as a variable-length ("dynamic") encoding, limited to a maximum of 4 bytes per code point, i.e. from 1 up to 4 bytes per point.
Deprecate UCS-2
Create UTF-16 based on UCS-2. Make UTF-16 dynamic, so it takes 2 bytes or 4 bytes per point. Assign 1,024 points U+D800–U+DBFF, called High Surrogates, to UTF-16; assign 1,024 points U+DC00–U+DFFF, called Low Surrogates, to UTF-16.
With those changes, the BMP is covered with 1 block of 16 bits in UTF-16, while all "supplementary characters" are covered with surrogate pairs of 2 blocks of 16 bits each, giving 1,024 × 1,024 = 1,048,576 points in total.
A high surrogate precedes a low surrogate. Any deviation from this rule is considered bad encoding. For example, a surrogate without a pair is incorrect, and a low surrogate standing before a high surrogate is incorrect.
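The Character class has helpers for exactly these checks; a small sketch using the two halves of U+1D11E:

char high = '\uD834';
char low  = '\uDD1E';
System.out.println(Character.isHighSurrogate(high));           // true
System.out.println(Character.isLowSurrogate(low));             // true
System.out.println(Character.isSurrogatePair(high, low));      // true:  high followed by low
System.out.println(Character.isSurrogatePair(low, high));      // false: wrong order, bad encoding
System.out.printf("U+%X%n", Character.toCodePoint(high, low)); // U+1D11E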
𝄞, 'MUSICAL SYMBOL G CLEF', is encoded in UTF-16 as a pair of surrogates 0xD834 0xDD1E (2 by 2 bytes),
in UTF-8 as 0xF0 0x9D 0x84 0x9E (4 by 1 byte),
in UTF-32 as 0x0001D11E (1 by 4 bytes).
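Those encodings can be verified directly (a sketch; assumes import java.nio.charset.StandardCharsets):

String clef = "\uD834\uDD1E"; // U+1D11E written as a surrogate pair in a Java string literal
for (byte b : clef.getBytes(StandardCharsets.UTF_16BE)) System.out.printf("%02X ", b & 0xFF); // D8 34 DD 1E
System.out.println();
for (byte b : clef.getBytes(StandardCharsets.UTF_8))    System.out.printf("%02X ", b & 0xFF); // F0 9D 84 9E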
Current situation
Although according to the standard the surrogates are specifically assigned only to UTF-16, historically some Windows and Java applications used UTF-8 and UCS-2 code points that are now reserved for the surrogate range.
To support legacy applications with incorrect UTF-8/UTF-16 encodings, a new standard WTF-8, Wobbly Transformation Format, was created. It supports arbitrary surrogate points, such as a non-paired surrogate or an incorrect sequence. Today, some products do not comply with the standard and treat UTF-8 as WTF-8.
The surrogate solution opened some security problems, as well as attempts to use "illegal surrogate pairs".
Many historical details were omitted here to keep to the topic.
The latest Unicode Standard can be found at http://www.unicode.org/versions/latest
Surrogate pairs refer to UTF-16's way of encoding certain characters, see http://en.wikipedia.org/wiki/UTF-16/UCS-2#Code_points_U.2B10000..U.2B10FFFF
A surrogate pair is two 'code units' in UTF-16 that make up one 'code point'. The Java documentation is stating that these 'code points' will still be valid, with their 'code units' ordered correctly, after the reverse. It further states that two unpaired surrogate code units may be reversed and form a valid surrogate pair. Which means that if there are unpaired code units, then there is a chance that the reverse of the reverse may not be the same!
Notice, though, that the documentation says nothing about graphemes, which are multiple code points combined. This means an e and the accent that goes along with it may still be switched, thus placing the accent before the e. And if there is another vowel before the e, it may get the accent that was on the e.
Yikes!
I have a small test example like this:
import java.nio.charset.StandardCharsets;

public class Main {
    public static void main(String[] args) {
        String s = "🇻🇺";
        System.out.println(s);
        System.out.println(s.length());
        System.out.println(s.toCharArray().length);
        System.out.println(s.getBytes(StandardCharsets.UTF_8).length);
        System.out.println(s.getBytes(StandardCharsets.UTF_16).length);
        System.out.println(s.codePointCount(0, s.length()));
        System.out.println(Character.codePointCount(s, 0, s.length()));
    }
}
And result is:
🇻🇺
4
4
8
10
2
2
I cannot understand why one Unicode character, the Vanuatu flag, returns a length of 4, 8 bytes in UTF-8 and 10 bytes in UTF-16. I know Java uses UTF-16 and needs 1 char (2 bytes) for 1 code point, but it confuses me that 1 Unicode character takes 4 chars; I thought it would just need 2 chars, but the result is 4. Can someone explain this fully to help me understand? Many thanks.
Unicode flag emojis are encoded as two code points.
There are 26 Regional Indicator Symbols representing A-Z, and a flag is encoded by spelling out the ISO country code. For example, the Vanuatu flag is encoded as "VU", and the American flag is "US".
The indicators are all in the supplementary plane, so they each require two UTF-16 code units (a surrogate pair). This brings the total up to 4 Java chars per flag.
The purpose of this is to avoid having to update the standard whenever a country gains or loses independence, and it helps the Unicode consortium stay neutral since it doesn't have to be an arbiter of geopolitical claims.
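A sketch that makes the two regional indicator code points visible (the escapes spell U+1F1FB and U+1F1FA, i.e. the Vanuatu flag from the question):

String flag = "\uD83C\uDDFB\uD83C\uDDFA";                 // 🇻🇺
flag.codePoints().forEach(cp -> System.out.printf("U+%X%n", cp));
// U+1F1FB (REGIONAL INDICATOR SYMBOL LETTER V)
// U+1F1FA (REGIONAL INDICATOR SYMBOL LETTER U)
System.out.println(flag.length());                        // 4: two code points, each a surrogate pair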
UTF-8 is a variable-length encoding that uses 1 to 4 bytes per Unicode character. The first byte carries from 3 to 7 bits of the character, and each subsequent byte carries 6 bits. Thus there's from 7 to 21 bits of payload.
The number of bytes needed depends on the particular character.
See this Wikipedia page for the encoding.
UTF-16 uses either one 16-bit unit or two 16-bit units for a Unicode character. Approximately speaking, characters in the first 64K characters are encoded as one unit; characters outside that range need two units.
"Approximately" because, actually, the codes that fit in one 16-bit unit are either in U+0000 to U+D7FF, or U+E000 to U+FFFF. The values in between those two are used for the two-unit format.
The number of 16-bit units needed depends on the particular character.
See this other Wikipedia page.
I have been reading about how Unicode code points have evolved over time, including this article by Joel Spolsky, which says:
Some people are under the misconception that Unicode is simply a 16-bit code where each character takes 16 bits and therefore there are 65,536 possible characters. This is not, actually, correct.
But despite all this reading, I couldn't find the real reason that Java uses UTF-16 for a char.
Isn't UTF-8 far more efficient than UTF-16? For example, if I had a string which contains 1024 letters of ASCII scoped characters, UTF-16 will take 1024 * 2 bytes (2KB) of memory.
But if Java used UTF-8, it would be just 1KB of data. Even if the string has a few characters which need 2 bytes, it will still only take about a kilobyte. For example, suppose in addition to the 1024 characters, there were 10 characters of "字" (code point U+5B57, UTF-8 encoding E5 AD 97). In UTF-8, this will still take only (1024 * 1 byte) + (10 * 3 bytes) = 1KB + 30 bytes.
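A quick sketch to check that arithmetic (assumes Java 11+ for String.repeat and import java.nio.charset.StandardCharsets):

String s = "a".repeat(1024) + "字".repeat(10);
System.out.println(s.getBytes(StandardCharsets.UTF_8).length);    // 1054 = 1024 + 10 * 3
System.out.println(s.getBytes(StandardCharsets.UTF_16BE).length); // 2068 = (1024 + 10) * 2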
So this doesn't answer my question. 1KB + 30 bytes for UTF-8 is clearly less memory than 2KB for UTF-16.
Of course it makes sense that Java doesn't use ASCII for a char, but why does it not use UTF-8, which has a clean mechanism for handling arbitrary multi-byte characters when they come up? UTF-16 looks like a waste of memory in any string which has lots of non-multibyte chars.
Is there some good reason for UTF-16 that I'm missing?
Java used UCS-2 before transitioning over to UTF-16 in 2004/2005. The reason for the original choice of UCS-2 is mainly historical:
Unicode was originally designed as a fixed-width 16-bit character encoding. The primitive data type char in the Java programming language was intended to take advantage of this design by providing a simple data type that could hold any character.
This, and the birth of UTF-16, is further explained by the Unicode FAQ page:
Originally, Unicode was designed as a pure 16-bit encoding, aimed at representing all modern scripts. (Ancient scripts were to be represented with private-use characters.) Over time, and especially after the addition of over 14,500 composite characters for compatibility with legacy sets, it became clear that 16-bits were not sufficient for the user community. Out of this arose UTF-16.
As #wero has already mentioned, random access cannot be done efficiently with UTF-8. So all things weighed up, UCS-2 was seemingly the best choice at the time, particularly as no supplementary characters had been allocated by that stage. This then left UTF-16 as the easiest natural progression beyond that.
Historically, one reason was the performance characteristics of random access or iterating over the characters of a String:
UTF-8 encoding uses a variable number (1-4) bytes to encode a Unicode character. Therefore accessing a character by index: String.charAt(i) would be way more complicated to implement and slower than the array access used by java.lang.String.
Even today, Python uses a fixed-width format for Strings internally, storing either 1, 2, or 4 bytes per character depending on the maximum size of a character in that string.
Of course, this is no longer a pure benefit in Java, since, as nj_ explains, Java no longer uses a fixed-width format. But at the time the language was developed, Unicode was a fixed-width format (now called UCS-2), and this would have been an advantage.
Regarding Java syntax, there is a NumericType which consists of IntegralType and FloatingPointType. IntegralTypes are byte, short, int, long and char.
At the same time, I can assign a single character to char variable.
char c1 = 10;
char c2 = 'c';
So here is my question: why is char a numeric type, and how does the JVM convert 'c' to a number?
Why is char a numeric type...
Using numbers to represent characters as indexes into a table is the standard way text is handled in computers. It's called character encoding and has a long history, going back at least to telegraphs. For a long time personal computers used ASCII (a 7-bit encoding = 127 characters plus NUL) and then "extended ASCII" (8-bit encodings of various forms where the "upper" 128 characters had a variety of interpretations), but these are now obsolete and suitable only for niche purposes thanks to their limited character set. Before personal computers, popular encodings were EBCDIC and its precursor BCD. Modern systems use Unicode (usually by storing one or more of its transformations such as UTF-8 or UTF-16) or various standardized "code pages" such as Windows-1252 or ISO-8859-1.
...and how does the JVM convert 'c' to a number?
Java's numeric char values map to and from characters via Unicode (which is how the JVM knows that 'c' is the value 0x0063, or that 'é' is 0x00E9). Specifically, a char value is a UTF-16 code unit; for characters in the Basic Multilingual Plane it corresponds directly to a Unicode code point, and strings are sequences of these code units.
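For instance, a char converts to and from its numeric value implicitly or with a cast (a minimal sketch):

char c = 'c';
int code = c;                        // 99 (0x63): implicit widening to the numeric value
char next = (char) (code + 1);       // 'd': arithmetic, then a narrowing cast back to char
System.out.println((int) 'é');       // 233 (0x00E9)
System.out.println((char) 0x0063);   // c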
There's quite a lot about the char data type, including why the value is 16 bits wide, in the JavaDoc of the Character class:
Unicode Character Representations
The char data type (and therefore the value that a Character object encapsulates) are based on the original Unicode specification, which defined characters as fixed-width 16-bit entities. The Unicode Standard has since been changed to allow for characters whose representation requires more than 16 bits. The range of legal code points is now U+0000 to U+10FFFF, known as Unicode scalar value. (Refer to the definition of the U+n notation in the Unicode Standard.)
The set of characters from U+0000 to U+FFFF is sometimes referred to as the Basic Multilingual Plane (BMP). Characters whose code points are greater than U+FFFF are called supplementary characters. The Java platform uses the UTF-16 representation in char arrays and in the String and StringBuffer classes. In this representation, supplementary characters are represented as a pair of char values, the first from the high-surrogates range, (\uD800-\uDBFF), the second from the low-surrogates range (\uDC00-\uDFFF).
A char value, therefore, represents Basic Multilingual Plane (BMP) code points, including the surrogate code points, or code units of the UTF-16 encoding. An int value represents all Unicode code points, including supplementary code points. The lower (least significant) 21 bits of int are used to represent Unicode code points and the upper (most significant) 11 bits must be zero. Unless otherwise specified, the behavior with respect to supplementary characters and surrogate char values is as follows:
The methods that only accept a char value cannot support supplementary characters. They treat char values from the surrogate ranges as undefined characters. For example, Character.isLetter('\uD840') returns false, even though this specific value if followed by any low-surrogate value in a string would represent a letter.
The methods that accept an int value support all Unicode characters, including supplementary characters. For example, Character.isLetter(0x2F81A) returns true because the code point value represents a letter (a CJK ideograph).
In the Java SE API documentation, Unicode code point is used for character values in the range between U+0000 and U+10FFFF, and Unicode code unit is used for 16-bit char values that are code units of the UTF-16 encoding. For more information on Unicode terminology, refer to the Unicode Glossary.
Because underneath Java represents chars as Unicode. There is some convenience to this, for example you can run a loop from 'A' to 'Z' and do something. It's important to realize, however, that in Java Strings aren't strictly arrays of characters like they are in some other languages. More info here
Internally, a char is stored as its ASCII (or Unicode) code value, which is an integer. The difference is in how it is processed after it is read from memory.
In C/C++, char and int are very close and are implicitly converted to one another. The similar behavior in Java shows the relationship between C/C++ and Java, as the JVM is written in C/C++.
Besides being able to do arithmetic operations on chars, which sometimes comes in handy (like c >= 'a' && c <= 'z'), I would say it is a design decision driven by the similar approach taken in other languages when Java was invented (primarily C and C++).
The fact that Character does not extend Number (as other numeric primitive wrappers do) somehow indicates that Java designers tried to find some kind of a compromise between numeric and non-numeric nature of characters.
DISCLAIMER I was not able to find any official docs about this.
Difference between UTF-8 and UTF-16?
Why do we need these?
MessageDigest md = MessageDigest.getInstance("SHA-256");
String text = "This is some text";
md.update(text.getBytes("UTF-8")); // Change this to "UTF-16" if needed
byte[] digest = md.digest();
I believe there are a lot of good articles about this around the Web, but here is a short summary.
Both UTF-8 and UTF-16 are variable length encodings. However, in UTF-8 a character may occupy a minimum of 8 bits, while in UTF-16 character length starts with 16 bits.
Main UTF-8 pros:
Basic ASCII characters like digits, Latin characters with no accents, etc. occupy one byte which is identical to US-ASCII representation. This way all US-ASCII strings become valid UTF-8, which provides decent backwards compatibility in many cases.
No null bytes, which allows the use of null-terminated strings; this introduces a great deal of backwards compatibility too.
UTF-8 is independent of byte order, so you don't have to worry about Big Endian / Little Endian issue.
Main UTF-8 cons:
Many common characters have different lengths, which makes indexing by code point and calculating a code point count terribly slow.
Even though byte order doesn't matter, sometimes UTF-8 still has BOM (byte order mark) which serves to notify that the text is encoded in UTF-8, and also breaks compatibility with ASCII software even if the text only contains ASCII characters. Microsoft software (like Notepad) especially likes to add BOM to UTF-8.
Main UTF-16 pros:
BMP (basic multilingual plane) characters, including Latin, Cyrillic, most Chinese (the PRC made support for some codepoints outside BMP mandatory), most Japanese can be represented with 2 bytes. This speeds up indexing and calculating codepoint count in case the text does not contain supplementary characters.
Even if the text has supplementary characters, they are still represented by pairs of 16-bit values, which means that the total length is still divisible by two and allows the use of a 16-bit char as the primitive component of the string.
Main UTF-16 cons:
Lots of null bytes in US-ASCII strings, which means no null-terminated strings and a lot of wasted memory.
Using it as a fixed-length encoding "mostly works" in many common scenarios (especially in US / EU / countries with Cyrillic alphabets / Israel / Arab countries / Iran and many others), often leading to broken support where it doesn't. This means the programmers have to be aware of surrogate pairs and handle them properly in cases where it matters!
It's variable length, so counting or indexing codepoints is costly, though less than UTF-8.
In general, UTF-16 is usually better for in-memory representation because BE/LE is irrelevant there (just use native order) and indexing is faster (just don't forget to handle surrogate pairs properly). UTF-8, on the other hand, is extremely good for text files and network protocols because there is no BE/LE issue and null-termination often comes in handy, as well as ASCII-compatibility.
They're simply different schemes for representing Unicode characters.
Both are variable-length - UTF-16 uses 2 bytes for all characters in the basic multilingual plane (BMP) which contains most characters in common use.
UTF-8 uses between 1 and 3 bytes for characters in the BMP, up to 4 for characters in the current Unicode range of U+0000 to U+1FFFFF, and is extensible up to U+7FFFFFFF if that ever becomes necessary... but notably all ASCII characters are represented in a single byte each.
For the purposes of a message digest it won't matter which of these you pick, so long as everyone who tries to recreate the digest uses the same option.
See this page for more about UTF-8 and Unicode.
(Note that a Java char is a UTF-16 code unit, which can directly represent only code points within the BMP; to represent characters above U+FFFF you need to use surrogate pairs in Java.)
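To illustrate that point, here is a small sketch reusing the question's example: the same text hashed after encoding with UTF-8 versus UTF-16 yields different digests, so both sides must agree on the charset (the class name is just for illustration):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class DigestCharsetDemo {
    public static void main(String[] args) throws Exception {
        String text = "This is some text";
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] utf8Digest  = md.digest(text.getBytes(StandardCharsets.UTF_8));
        byte[] utf16Digest = md.digest(text.getBytes(StandardCharsets.UTF_16));
        System.out.println(Base64.getEncoder().encodeToString(utf8Digest));
        System.out.println(Base64.getEncoder().encodeToString(utf16Digest)); // differs from the line above
    }
}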
Security: Use only UTF-8
Difference between UTF-8 and UTF-16? Why do we need these?
There have been at least a couple of security vulnerabilities in implementations of UTF-16. See Wikipedia for details.
CVE-2008-2938
CVE-2012-2135
WHATWG and W3C have now declared that only UTF-8 is to be used on the Web.
The [security] problems outlined here go away when exclusively using UTF-8, which is one of the many reasons that is now the mandatory encoding for all things.
Other groups are saying the same.
So while UTF-16 may continue being used internally by some systems such as Java and Windows, what little use of UTF-16 you may have seen in the past for data files, data exchange, and such, will likely fade away entirely.
This is unrelated to UTF-8/UTF-16 in general (although it does convert to UTF-16, and the BE/LE part can be set with a single line), yet below is the fastest way to convert a String to byte[]. It is good for exactly the case provided (hash code). String.getBytes(enc) is relatively slow.
import java.nio.ByteBuffer;

static byte[] toBytes(String s) {
    byte[] b = new byte[s.length() * 2];
    ByteBuffer.wrap(b).asCharBuffer().put(s); // copies the chars as big-endian 16-bit values (UTF-16BE, no BOM)
    return b;
}
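For well-formed strings this produces the same bytes as encoding with UTF-16BE and no BOM; a usage sketch, assuming the toBytes method above is in scope:

String s = "hello \uD83C\uDF09";
byte[] fast = toBytes(s);
byte[] viaCharset = s.getBytes(java.nio.charset.StandardCharsets.UTF_16BE);
System.out.println(java.util.Arrays.equals(fast, viaCharset)); // true: same UTF-16BE bytes, no BOM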
A simple way to differentiate UTF-8 and UTF-16 is to start from what they have in common.
Other than sharing the same Unicode number for a given character, each one is its own format.
UTF-8 tries to represent every Unicode code point with one byte (if it is ASCII), else with 2 bytes, else with 3 bytes, and so on, up to a maximum of 4 bytes.
UTF-16 tries to represent every Unicode code point with two bytes to start with. If two bytes are not sufficient, it uses 4 bytes (a surrogate pair).
Theoretically, UTF-16 is more space-efficient, but in practice UTF-8 is more space-efficient, as most of the characters being processed (98% of data) are ASCII, which UTF-8 represents with a single byte and UTF-16 represents with 2 bytes.
Also, UTF-8 is a superset of the ASCII encoding, so ASCII data is also valid UTF-8 and is accepted by any UTF-8 processor. This is not true for UTF-16: UTF-16 is not compatible with ASCII, and this is a big hurdle for UTF-16 adoption.
Another point to note is that all of Unicode as of now fits in a maximum of 4 bytes of UTF-8 (considering all languages of the world). This is the same as UTF-16, so there is no real saving in space compared to UTF-8 ( https://stackoverflow.com/a/8505038/3343801 ).
So, people use UTF-8 wherever possible.
A Java char is 2 bytes (max size of 65,536) but there are 95,221 Unicode characters. Does this mean that you can't handle certain Unicode characters in a Java application?
Does this boil down to what character encoding you are using?
You can handle them all if you're careful enough.
Java's char is a UTF-16 code unit. Characters with code points > 0xFFFF are encoded as 2 chars (a surrogate pair).
See http://www.oracle.com/us/technologies/java/supplementary-142654.html for how to handle those characters in Java.
(BTW, in Unicode 5.2 there are 107,154 assigned characters out of 1,114,112 slots.)
Java uses UTF-16. A single Java char can only represent characters from the basic multilingual plane. Other characters have to be represented by a surrogate pair of two chars. This is reflected by API methods such as String.codePointAt().
And yes, this means that a lot of Java code will break in one way or another when used with characters outside the basic multilingual plane.
To add to the other answers, some points to remember:
A Java char takes always 16 bits.
A Unicode character, when encoded as UTF-16, "almost always" (but not always) takes 16 bits: that's because there are more than 64K Unicode characters. Hence, a Java char is NOT a Unicode character (though it "almost always" is).
"Almost always", above, means the 64K first code points of Unicode, range 0x0000 to 0xFFFF (BMP), which take 16 bits in the UTF-16 encoding.
A non-BMP ("rare") Unicode character is represented as two Java chars (surrogate representation). This applies also to the literal representation as a string: For example, the character U+20000 is written as "\uD840\uDC00".
Corollary: string.length() returns the number of Java chars, not of Unicode characters. A string that has just one "rare" Unicode character (e.g. U+20000) would return length() = 2. The same consideration applies to any method that deals with char sequences.
Java has little intelligence for dealing with non-BMP Unicode characters as a whole. There are some utility methods that treat characters as code points, represented as ints, e.g. Character.isLetter(int codePoint). Those are the real fully-Unicode methods.
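A sketch illustrating those points with U+20000, a CJK ideograph outside the BMP:

String s = "\uD840\uDC00";                                  // the single character U+20000
System.out.println(s.length());                             // 2: two Java chars (a surrogate pair)
System.out.println(s.codePointCount(0, s.length()));        // 1: one Unicode character
System.out.println(Character.isLetter(s.charAt(0)));        // false: a lone surrogate is not a letter
System.out.println(Character.isLetter(s.codePointAt(0)));   // true: the full code point is a letter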
You said:
A Java char is 2 bytes (max size of 65,536) but there are 95,221 Unicode characters.
Unicode grows
Actually, the inventory of characters defined in Unicode has grown dramatically. Unicode continues to grow, and not just because of emojis.
143,859 characters in Unicode 13 (Java 15, release notes)
137,994 characters in Unicode 12.1 (Java 13 & 14)
136,755 characters in Unicode 10 (Java 11 & 12)
120,737 characters in Unicode 8 (Java 9)
110,182 characters in Unicode 6.2 (Java 8)
109,449 characters in Unicode 6.0 (Java 7)
96,447 characters in Unicode 4.0 (Java 5 & 6)
49,259 characters in Unicode 3.0 (Java 1.4)
38,952 characters in Unicode 2.1 (Java 1.1.7)
38,950 characters in Unicode 2.0 (Java 1.1)
34,233 characters in Unicode 1.1.5 (Java 1.0)
char is legacy
The char type is long outmoded, now legacy.
Use code point numbers
Instead, you should be working with code point numbers.
You asked:
Does this mean that you can't handle certain Unicode characters in a Java application?
The char type can address less than half of today's Unicode characters.
To represent any Unicode character, use code point numbers. Never use char.
Every character in Unicode is assigned a code point number. These range from 0 to 1,114,111, over a million values. Doing the math when comparing to the numbers listed above, this means most of the numbers in that range have not yet been assigned to a character. Some of those numbers are reserved as Private Use Areas and will never be assigned.
The String class has gained methods for working with code point numbers, as did the Character class.
Get the code point number for any character in a string, by zero-based index number. Here we get 97 for the letter a.
int codePoint = "Cat".codePointAt( 1 ) ; // 97 = 'a', hex U+0061, LATIN SMALL LETTER A.
For the more general CharSequence rather than String, use Character.codePointAt.
We can get the Unicode name for a code point number.
String name = Character.getName( 97 ) ; // letter `a`
LATIN SMALL LETTER A
We can get a stream of the code point numbers of all the characters in a string.
IntStream codePointsStream = "Cat".codePoints() ;
We can turn that into a List of Integer objects. See How do I convert a Java 8 IntStream to a List?.
List< Integer > codePointsList = codePointsStream.boxed().collect( Collectors.toList() ) ;
Any code point number can be changed into a String of a single character by calling Character.toString.
String s = Character.toString( 97 ) ; // 97 is `a`, LATIN SMALL LETTER A.
a
We can produce a String object from an IntStream of code point numbers. See Make a string from an IntStream of code point numbers?.
IntStream intStream = IntStream.of( 67 , 97 , 116 , 32 , 128_008 ); // 32 = SPACE, 128,008 = CAT (emoji).
String output =
        intStream
                .collect( // Collect the results of processing each code point.
                        StringBuilder :: new ,             // Supplier<R> supplier
                        StringBuilder :: appendCodePoint , // ObjIntConsumer<R> accumulator
                        StringBuilder :: append            // BiConsumer<R, R> combiner
                ) // Returns a `CharSequence` object.
                .toString(); // If you would rather have a `String` than `CharSequence`, call `toString`.
Cat 🐈
You asked:
Does this boil down to what character encoding you are using?
Internally, a String in Java is always using UTF-16.
You only use other character encoding when importing or exporting text in or out of Java strings.
So, to answer your question, no, character encoding is not directly related here. Once you get your text into a Java String, it is in UTF-16 encoding and can therefore contain any Unicode character. Of course, to see that character, you must be using a font with a glyph defined for that particular character.
When exporting text from Java strings, if you specify a legacy character encoding that cannot represent some of the Unicode characters used in your text, you will have a problem. So use a modern character encoding, which nowadays means UTF-8 as UTF-16 is now considered harmful.
Have a look at the Unicode 4.0 support in J2SE 1.5 article to learn more about the tricks invented by Sun to provide support for all Unicode 4.0 code points.
In summary, you'll find the following changes for Unicode 4.0 in Java 1.5:
char is a UTF-16 code unit, not a code point
new low-level APIs use an int to represent a Unicode code point
high level APIs have been updated to understand surrogate pairs
a preference towards char sequence APIs instead of char based methods
Since Java doesn't have 32 bit chars, I'll let you judge if we can call this good Unicode support.
Here's Oracle's documentation on Unicode Character Representations. Or, if you prefer, a more thorough documentation here.
The char data type (and therefore the value that a Character object
encapsulates) are based on the original Unicode specification, which
defined characters as fixed-width 16-bit entities. The Unicode
standard has since been changed to allow for characters whose
representation requires more than 16 bits. The range of legal code
points is now U+0000 to U+10FFFF, known as Unicode scalar value.
(Refer to the definition of the U+n notation in the Unicode standard.)
The set of characters from U+0000 to U+FFFF is sometimes referred to
as the Basic Multilingual Plane (BMP). Characters whose code points
are greater than U+FFFF are called supplementary characters. The Java
2 platform uses the UTF-16 representation in char arrays and in the
String and StringBuffer classes. In this representation, supplementary
characters are represented as a pair of char values, the first from
the high-surrogates range, (\uD800-\uDBFF), the second from the
low-surrogates range (\uDC00-\uDFFF).
A char value, therefore, represents Basic Multilingual Plane (BMP)
code points, including the surrogate code points, or code units of the
UTF-16 encoding. An int value represents all Unicode code points,
including supplementary code points. The lower (least significant) 21
bits of int are used to represent Unicode code points and the upper
(most significant) 11 bits must be zero. Unless otherwise specified,
the behavior with respect to supplementary characters and surrogate
char values is as follows:
The methods that only accept a char value cannot support supplementary characters. They treat char values from the surrogate
ranges as undefined characters. For example,
Character.isLetter('\uD840') returns false, even though this specific
value if followed by any low-surrogate value in a string would
represent a letter.
The methods that accept an int value support all Unicode characters, including supplementary characters. For example,
Character.isLetter(0x2F81A) returns true because the code point value
represents a letter (a CJK ideograph).
From the OpenJDK7 documentation for String:
A String represents a string in the
UTF-16 format in which supplementary
characters are represented by
surrogate pairs (see the section
Unicode Character Representations in
the Character class for more
information). Index values refer to
char code units, so a supplementary
character uses two positions in a
String.