Java encoding Unicode characters

What is the character encoding of String in Java?

I am actually confused about the encoding of strings in Java. I have a couple of questions; please help me if you know the answers:

1) What is the native encoding of Java strings in memory? When I write String a = "Hello", in which format will it be stored? Since Java is machine independent, I don't think the operating system will do the encoding.

2) I read on the net that UTF-16 is the default encoding, but I got confused because, say, when I write int a = 'c' I get the number of the character in the ASCII table. So are ASCII and UTF-16 the same?

3) Also, I wasn't sure what the storage of a string in memory depends on: the OS, the language?

You should consider breaking these out into individual questions, as they are really very different. #2 can probably be answered here: stackoverflow.com/questions/1490218/…

4 Answers

  1. Java stores strings as UTF-16 internally.
  2. "Default encoding" isn't quite right. Java stores strings as UTF-16 internally, but the encoding used externally, the "system default encoding", varies from platform to platform, and can even be altered by things like environment variables on some platforms. ASCII is a subset of Latin-1, which is a subset of Unicode. UTF-16 is a way of encoding Unicode. So if you perform your int i = 'x' test for any character that falls in the ASCII range you'll get the ASCII value (see the sketch below). UTF-16 can represent a lot more characters than ASCII, however.
  3. From the java.lang.Character docs:

The Java 2 platform uses the UTF-16 representation in char arrays and in the String and StringBuffer classes.

The usage of char and char arrays is only defined for the public, external API for String and StringBuffer. The internal storage of the characters is implementation specific.
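A minimal, runnable sketch of the int i = 'x' test described above (the class name is just for illustration):

  public class CharValues {
      public static void main(String[] args) {
          int i = 'x';                 // 'x' is U+0078; the ASCII value is 120
          System.out.println(i);       // prints 120

          char e = '\u00E9';           // é: outside ASCII, still a single char
          System.out.println((int) e); // prints 233, which is not an ASCII value

          // Every char of a String is one UTF-16 code unit:
          System.out.println((int) "Hello".charAt(0)); // prints 72 (U+0048, 'H')
      }
  }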


@jarnbjo The above is a direct quote from the docs. The char datatype in Java represents a UTF-16 code unit (not a character, aka Unicode codepoint) so I think it’s pretty safe to say that Java the language’s representation of text is UTF-16. Yes, conceivably an implementation could choose to do something different under the covers, but in the end they’d have to make it look just like they were using UTF-16.

Since there is no way to access the internal storage of the String and StringBuffer classes, it makes no sense to assume that the statement you quote applies to it.

@HendyIrawan Java doesn't let you access the individual bytes, only the chars (which correspond to UTF-16 code units), so there is no visible endianness. The actual endianness used in memory is JVM/platform dependent, just like the endianness used to store an int in memory.
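Endianness only becomes observable once you encode to bytes and pick a byte order yourself; a small sketch using the standard charsets:

  import java.nio.charset.StandardCharsets;

  public class Endianness {
      public static void main(String[] args) {
          // The in-memory byte order of chars is invisible; it only shows up
          // once you encode to bytes with an explicitly ordered charset.
          byte[] be = "x".getBytes(StandardCharsets.UTF_16BE);
          byte[] le = "x".getBytes(StandardCharsets.UTF_16LE);
          System.out.printf("BE: %02x %02x%n", be[0], be[1]); // BE: 00 78
          System.out.printf("LE: %02x %02x%n", le[0], le[1]); // LE: 78 00
      }
  }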

1) Strings are objects, which typically contain a char array and the string's length. The character array is usually implemented as a contiguous array of 16-bit words, each one containing a Unicode character in native byte order.

2) Assigning a character value to an integer converts the 16-bit Unicode character code into its integer equivalent. Thus 'c', which is U+0063, becomes 0x0063, or 99.

3) Since each String is an object, it contains information other than its class members (e.g., a class descriptor word, a lock/semaphore word, etc.).

ADDENDUM
The object contents depend on the JVM implementation (which determines the inherent overhead associated with each object), and how the class is actually coded (i.e., some libraries may be more efficient than others).

EXAMPLE
A typical implementation will allocate an overhead of two words per object instance (for the class descriptor/pointer, and a semaphore/lock control word); a String object also contains an int length and a char[] array reference. The actual character contents of the string are stored in a second object, the char[] array, which in turn is allocated two words, plus an array length word, plus as many 16-bit char elements as needed for the string (plus any extra chars that were left hanging around when the string was created).
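One way to inspect this per-object overhead for yourself is OpenJDK's jol tool. The sketch below assumes the external org.openjdk.jol:jol-core library is on the classpath (it is not part of the JDK); also note that on JDK 9 and later, compact strings store the contents in a byte[] rather than a char[], so the exact fields vary by version.

  import org.openjdk.jol.info.ClassLayout;

  public class StringLayout {
      public static void main(String[] args) {
          // Prints the header words, fields, and padding of the String object
          // itself; the character contents live in a separate array object.
          System.out.println(ClassLayout.parseInstance("Hello").toPrintable());
      }
  }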

ADDENDUM 2
The case that one char represents one Unicode character is only true in most cases. It would imply the UCS-2 encoding, and it was true before 2005. But by now Unicode has become larger, and strings have to be encoded using UTF-16, where alas a single Unicode character may use two chars in a Java String.
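A short sketch of that pitfall, using the GRINNING FACE emoji U+1F600 (written here as its explicit surrogate pair so the source file's own encoding doesn't matter):

  public class Supplementary {
      public static void main(String[] args) {
          String s = "\uD83D\uDE00"; // U+1F600 as its UTF-16 surrogate pair
          System.out.println(s.length());                      // 2 code units
          System.out.println(s.codePointCount(0, s.length())); // 1 character
          System.out.println(Integer.toHexString(s.codePointAt(0))); // 1f600
      }
  }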


Unicode

Unicode is a computing industry standard designed to consistently and uniquely encode characters used in written languages throughout the world. The Unicode standard uses hexadecimal to express a character. For example, the value 0x0041 represents the Latin character A. The Unicode standard was initially designed using 16 bits to encode characters because the primary machines were 16-bit PCs.

When the specification for the Java language was created, the Unicode standard was accepted and the char primitive was defined as a 16-bit data type, with characters in the hexadecimal range from 0x0000 to 0xFFFF.

Because 16-bit encoding supports 2^16 (65,536) characters, which is insufficient to define all characters in use throughout the world, the Unicode standard was extended to 0x10FFFF, which supports over one million characters. The definition of a character in the Java programming language could not be changed from 16 bits to 32 bits without causing millions of Java applications to no longer run properly. To correct the definition, a scheme was developed to handle characters that could not be encoded in 16 bits.

The characters with values that are outside of the 16-bit range, and within the range from 0x10000 to 0x10FFFF, are called supplementary characters and are defined as a pair of char values.
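A sketch showing how the Character API builds and classifies such a pair of char values (the class name is illustrative):

  public class SurrogatePair {
      public static void main(String[] args) {
          int cp = 0x1F600; // a code point beyond the 16-bit range
          System.out.println(Character.isSupplementaryCodePoint(cp)); // true

          char[] pair = Character.toChars(cp); // the two-char surrogate pair
          System.out.printf("high: %04x, low: %04x%n",
                  (int) pair[0], (int) pair[1]); // high: d83d, low: de00

          // Reassembling the pair yields the original code point:
          System.out.println(new String(pair).codePointAt(0) == cp); // true
      }
  }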

This lesson includes the following sections:

  • Terminology – Code points and other terms are explained.
  • Supplementary Characters as Surrogates – 16-bit surrogates are used to implement supplementary characters, which cannot be implemented as a single primitive char data type.
  • Character and String API – A listing of related API for the Character, String, and related classes.
  • Sample Usage – Several useful code snippets are provided.
  • Design Considerations – Design considerations to keep in mind to ensure that your application will work with any language script.
  • More Information – A list of further resources is provided.


Class Charset

A named mapping between sequences of sixteen-bit Unicode code units and sequences of bytes. This class defines methods for creating decoders and encoders and for retrieving the various names associated with a charset. Instances of this class are immutable.

This class also defines static methods for testing whether a particular charset is supported, for locating charset instances by name, and for constructing a map that contains every charset for which support is available in the current Java virtual machine. Support for new charsets can be added via the service-provider interface defined in the CharsetProvider class.

All of the methods defined in this class are safe for use by multiple concurrent threads.
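A minimal sketch of those static lookup methods:

  import java.nio.charset.Charset;

  public class CharsetLookup {
      public static void main(String[] args) {
          // Test first: forName throws an exception for unsupported names.
          if (Charset.isSupported("UTF-8")) {
              Charset utf8 = Charset.forName("UTF-8");
              System.out.println(utf8.name()); // UTF-8
          }
          // A map of every charset the current JVM supports:
          System.out.println(Charset.availableCharsets().keySet());
      }
  }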

Charset names

Charsets are named by strings composed of the following characters:

  • The uppercase letters 'A' through 'Z' ('\u0041' through '\u005a'),
  • The lowercase letters 'a' through 'z' ('\u0061' through '\u007a'),
  • The digits '0' through '9' ('\u0030' through '\u0039'),
  • The dash character '-' ('\u002d', HYPHEN-MINUS),
  • The plus character '+' ('\u002b', PLUS SIGN),
  • The period character '.' ('\u002e', FULL STOP),
  • The colon character ':' ('\u003a', COLON), and
  • The underscore character '_' ('\u005f', LOW LINE).

Every charset has a canonical name and may also have one or more aliases. The canonical name is returned by the name method of this class. Canonical names are, by convention, usually in upper case. The aliases of a charset are returned by the aliases method.

Some charsets have an historical name that is defined for compatibility with previous versions of the Java platform. A charset’s historical name is either its canonical name or one of its aliases. The historical name is returned by the getEncoding() methods of the InputStreamReader and OutputStreamWriter classes.
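A sketch of the canonical name, aliases, and the historical name together; the exact alias set and the historical spelling ("UTF8" here) can vary by JDK:

  import java.io.ByteArrayInputStream;
  import java.io.IOException;
  import java.io.InputStreamReader;
  import java.nio.charset.Charset;

  public class CharsetNames {
      public static void main(String[] args) throws IOException {
          Charset utf8 = Charset.forName("utf8"); // lookup by alias, case-insensitive
          System.out.println(utf8.name());        // canonical name: UTF-8
          System.out.println(utf8.aliases());     // e.g. [unicode-1-1-utf-8, UTF8]

          // The historical name surfaces through the stream classes:
          try (InputStreamReader r = new InputStreamReader(
                  new ByteArrayInputStream(new byte[0]), utf8)) {
              System.out.println(r.getEncoding()); // typically UTF8
          }
      }
  }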

If a charset listed in the IANA Charset Registry is supported by an implementation of the Java platform then its canonical name must be the name listed in the registry. Many charsets are given more than one name in the registry, in which case the registry identifies one of the names as MIME-preferred. If a charset has more than one registry name then its canonical name must be the MIME-preferred name and the other names in the registry must be valid aliases. If a supported charset is not listed in the IANA registry then its canonical name must begin with one of the strings "X-" or "x-".

The IANA charset registry does change over time, and so the canonical name and the aliases of a particular charset may also change over time. To ensure compatibility it is recommended that no alias ever be removed from a charset, and that if the canonical name of a charset is changed then its previous canonical name be made into an alias.

Standard charsets

Every implementation of the Java platform is required to support the following standard charsets. Consult the release documentation for your implementation to see if any other charsets are supported. The behavior of such optional charsets may differ between implementations.

Description of standard charsets:

  • US-ASCII – Seven-bit ASCII, a.k.a. ISO646-US, a.k.a. the Basic Latin block of the Unicode character set
  • ISO-8859-1 – ISO Latin Alphabet No. 1, a.k.a. ISO-LATIN-1
  • UTF-8 – Eight-bit UCS Transformation Format
  • UTF-16BE – Sixteen-bit UCS Transformation Format, big-endian byte order
  • UTF-16LE – Sixteen-bit UCS Transformation Format, little-endian byte order
  • UTF-16 – Sixteen-bit UCS Transformation Format, byte order identified by an optional byte-order mark

The UTF-8 charset is specified by RFC 2279; the transformation format upon which it is based is specified in Amendment 2 of ISO 10646-1 and is also described in the Unicode Standard.

The UTF-16 charsets are specified by RFC 2781; the transformation formats upon which they are based are specified in Amendment 1 of ISO 10646-1 and are also described in the Unicode Standard.

  • When decoding, the UTF-16BE and UTF-16LE charsets interpret the initial byte-order marks as a ZERO-WIDTH NON-BREAKING SPACE; when encoding, they do not write byte-order marks.
  • When decoding, the UTF-16 charset interprets the byte-order mark at the beginning of the input stream to indicate the byte-order of the stream but defaults to big-endian if there is no byte-order mark; when encoding, it uses big-endian byte order and writes a big-endian byte-order mark.
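A sketch that makes the two bullets above visible by dumping the encoded bytes of the single character "A":

  import java.nio.charset.StandardCharsets;

  public class ByteOrderMarks {
      public static void main(String[] args) {
          dump("UTF-16  ", "A".getBytes(StandardCharsets.UTF_16));   // fe ff 00 41
          dump("UTF-16BE", "A".getBytes(StandardCharsets.UTF_16BE)); // 00 41
          dump("UTF-16LE", "A".getBytes(StandardCharsets.UTF_16LE)); // 41 00
      }

      static void dump(String label, byte[] bytes) {
          StringBuilder sb = new StringBuilder(label + ":");
          for (byte b : bytes) sb.append(String.format(" %02x", b));
          System.out.println(sb); // the BOM appears only for plain UTF-16
      }
  }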

Every instance of the Java virtual machine has a default charset, which may or may not be one of the standard charsets. The default charset is determined during virtual-machine startup and typically depends upon the locale and charset being used by the underlying operating system.

The StandardCharsets class defines constants for each of the standard charsets.
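A short sketch contrasting the guaranteed constants with the startup-dependent default:

  import java.nio.charset.Charset;
  import java.nio.charset.StandardCharsets;

  public class Defaults {
      public static void main(String[] args) {
          // Guaranteed constants: no name lookup, no unchecked exception.
          Charset utf8 = StandardCharsets.UTF_8;
          System.out.println(utf8); // UTF-8

          // Whatever the JVM chose at startup from the OS locale/charset:
          System.out.println(Charset.defaultCharset());
      }
  }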

Terminology

The name of this class is taken from the terms used in RFC 2278. In that document a charset is defined as the combination of one or more coded character sets and a character-encoding scheme. (This definition is confusing; some other software systems define charset as a synonym for coded character set.)

A coded character set is a mapping between a set of abstract characters and a set of integers. US-ASCII, ISO 8859-1, JIS X 0201, and Unicode are examples of coded character sets.

Some standards have defined a character set to be simply a set of abstract characters without an associated assigned numbering. An alphabet is an example of such a character set. However, the subtle distinction between character set and coded character set is rarely used in practice; the former has become a short form for the latter, including in the Java API specification.

A character-encoding scheme is a mapping between one or more coded character sets and a set of octet (eight-bit byte) sequences. UTF-8, UTF-16, ISO 2022, and EUC are examples of character-encoding schemes. Encoding schemes are often associated with a particular coded character set; UTF-8, for example, is used only to encode Unicode. Some schemes, however, are associated with multiple coded character sets; EUC, for example, can be used to encode characters in a variety of Asian coded character sets.

When a coded character set is used exclusively with a single character-encoding scheme then the corresponding charset is usually named for the coded character set; otherwise a charset is usually named for the encoding scheme and, possibly, the locale of the coded character sets that it supports. Hence US-ASCII is both the name of a coded character set and of the charset that encodes it, while EUC-JP is the name of the charset that encodes the JIS X 0201, JIS X 0208, and JIS X 0212 coded character sets for the Japanese language.

The native character encoding of the Java programming language is UTF-16. A charset in the Java platform therefore defines a mapping between sequences of sixteen-bit UTF-16 code units (that is, sequences of chars) and sequences of bytes.
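A round-trip sketch of exactly that mapping, chars to bytes and back (the é is written as '\u00E9' to stay independent of the source file's encoding):

  import java.nio.charset.StandardCharsets;
  import java.util.Arrays;

  public class RoundTrip {
      public static void main(String[] args) {
          String s = "h\u00E9llo"; // five chars, i.e. five UTF-16 code units
          byte[] bytes = s.getBytes(StandardCharsets.UTF_8);       // encode
          String back = new String(bytes, StandardCharsets.UTF_8); // decode
          System.out.println(bytes.length);   // 6: é needs two UTF-8 bytes
          System.out.println(Arrays.toString(bytes));
          System.out.println(s.equals(back)); // true
      }
  }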

