Popularity of text encodings
{{Short description|none}}
{{Lead too short|date=June 2024|reason=The statistics/claims on the actual use of encodings for the context in the article should be present}}
A number of text encodings have historically been used for storing text on the World Wide Web, though by now UTF-8 is dominant, with at least 95% use for every language by some estimates. The same encodings, and historically many more, are also used in local files and databases. Measuring the prevalence of each encoding is not possible for private data (e.g. local files that are not web-accessible), for privacy reasons, but fairly accurate estimates are available for public web sites, and those statistics may or may not reflect use in local files. Attempts at measuring encoding popularity may count the number of (web) documents, or weight the counts by the actual use or visibility of those documents.
The choice of encoding may depend on the language of a document, the locale it comes from, or its purpose. Text may be ambiguous as to its encoding; for instance, pure ASCII text is equally valid as ASCII, ISO-8859-1, CP1252, or UTF-8. Tags may declare a document's encoding, but an incorrect declaration may be silently corrected by display software (for instance, the HTML specification says that the ISO-8859-1 label should be treated as CP1252), so counts of tags may not be accurate.
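The ASCII ambiguity described above can be demonstrated directly: any byte sequence containing only values below 0x80 decodes to the same text under all of these encodings. A minimal illustrative sketch in Python:

```python
# A pure-ASCII byte sequence decodes to identical text under several
# encodings, so the encoding of such a document cannot be determined
# from its contents alone.
data = b"Hello, World!"  # contains only bytes below 0x80

decoded = {enc: data.decode(enc)
           for enc in ("ascii", "iso-8859-1", "cp1252", "utf-8")}
assert len(set(decoded.values())) == 1  # all four decodings agree
```

This is why surveys of declared encodings may over- or under-count: an ASCII-only page labeled ISO-8859-1 is indistinguishable, byte for byte, from the same page labeled UTF-8.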
==Popularity on the World Wide Web==
[[File:Unicode Web growth.svg|thumb|The ASCII-only figure includes all web pages that only contain ASCII characters, regardless of the declared header.]]
UTF-8 has been the most common encoding for the World Wide Web since 2008.{{Cite web |author-first=Mark |author-last=Davis |author-link=Mark Davis (Unicode) |date=2008-05-05|title=Moving to Unicode 5.1 |url=https://googleblog.blogspot.com/2008/05/moving-to-unicode-51.html |access-date=2023-03-13 |website=Official Google Blog |language=en}} {{As of|2025|04}}, UTF-8 is used by 98.6% of surveyed web sites (and 99.2% of the top 100,000 pages); the next-most popular encoding, ISO-8859-1, is used by 1.1% (and by only 15 of the top 1,000 pages).{{Cite web|url=https://w3techs.com/technologies/cross/character_encoding/ranking|title=Usage Survey of Character Encodings broken down by Ranking |website=W3Techs |lang=en |date=April 2025 |access-date=2025-04-15}} Although many pages use only ASCII characters to display content, very few websites now declare their encoding as ASCII rather than UTF-8.{{cite web |url=https://w3techs.com/technologies/details/en-usascii |title = Usage statistics and market share of ASCII for websites | date = November 2024 | website = W3Techs | access-date = 2024-11-20 }}
All countries (and over 97% of the tracked languages) have at least 96% UTF-8 use on the web. The major alternative encodings are discussed below.
The second-most popular encoding varies by locale, and is typically more efficient for the associated language. One such encoding is the Chinese GB 18030 standard, which is a full Unicode Transformation Format; even so, 96% of websites in China and its territories use UTF-8.{{Cite web |title=Distribution of Character Encodings among websites that use China and territories |url=https://w3techs.com/technologies/segmentation/sl-cnter-/character_encoding |access-date=2025-04-15 |website=w3techs.com}}{{Cite web|title=Distribution of Character Encodings among websites that use .cn|url=https://w3techs.com/technologies/segmentation/tld-cn-/character_encoding|website=w3techs.com|access-date=2021-11-01}}{{Cite web|title=Distribution of Character Encodings among websites that use Chinese|url=https://w3techs.com/technologies/segmentation/cl-zh-/character_encoding|website=w3techs.com|access-date=2021-11-01}} The older Chinese standard {{nowrap|GB 2312}} and its extension GBK (both of which web browsers interpret as GB 18030, and which therefore effectively support the same characters as UTF-8) are together the next-most popular encoding there. Big5 is another popular non-UTF encoding for traditional Chinese characters (which GB 18030, as a full UTF, also covers); it is the second-most popular encoding in Taiwan, where UTF-8 use stands at 96.8%, and in Hong Kong, where UTF-8 is even more dominant at 98.4%.{{Cite web |title=Distribution of Character Encodings among websites that use Taiwan |url=https://w3techs.com/technologies/segmentation/sl-tw-/character_encoding |access-date=2025-01-12 |website=w3techs.com}} The single-byte Windows-1251 encoding is twice as efficient as UTF-8 for the Cyrillic script, yet 96.1% of Russian websites use UTF-8{{Cite web|title=Distribution of Character Encodings among websites that use .ru|url=https://w3techs.com/technologies/segmentation/tld-ru-/character_encoding|website=w3techs.com|access-date=2025-04-15}} (the single-byte Greek and Hebrew encodings are likewise twice as efficient, and UTF-8 has over 99% use for those languages).{{Cite web|title=Distribution of Character Encodings among websites that use Greek|url=https://w3techs.com/technologies/segmentation/cl-el-/character_encoding|access-date=2024-01-01|website=w3techs.com}}{{Cite web|title=Distribution of Character Encodings among websites that use Hebrew|url=https://w3techs.com/technologies/segmentation/cl-he-/character_encoding|access-date=2024-02-02|website=w3techs.com}} Korean, Chinese, and Japanese websites have relatively high non-UTF-8 use compared with most other countries: Japanese UTF-8 use is at 98.8%, with the remainder using the legacy EUC-JP and Shift JIS (actually decoded as its superset Windows-31J) encodings in roughly equal measure.{{cite web|url=https://w3techs.com/technologies/history_overview/character_encoding|title=Historical trends in the usage of character encodings|access-date=2024-07-03}}{{cite web |url=https://trends.builtwith.com/encoding/UTF-8 |title=UTF-8 Usage Statistics |publisher=BuiltWith |access-date=2011-03-28}} South Korea has 95% UTF-8 use, with the remaining websites mainly using EUC-KR, which is more efficient for Korean text.
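The "twice as efficient" claim for single-byte encodings can be checked by comparing encoded lengths. A minimal illustrative sketch in Python, using a short Cyrillic string:

```python
# Single-byte legacy encodings use 1 byte per character, while UTF-8
# needs 2 bytes for each Cyrillic (and Greek or Hebrew) character.
text = "Привет"  # Cyrillic, 6 characters

cp1251_len = len(text.encode("cp1251"))  # 1 byte per character -> 6
utf8_len = len(text.encode("utf-8"))     # 2 bytes per character -> 12
assert utf8_len == 2 * cp1251_len
```

The same doubling holds for ISO-8859-7 (Greek) and ISO-8859-8 (Hebrew) relative to UTF-8, which is the efficiency argument the paragraph refers to.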
==Popularity for local text files==
Local storage on computers uses "legacy" single-byte encodings considerably more than the web does. Attempts to migrate to UTF-8 have been hindered by editors that do not display or write UTF-8 unless the first character in a file is a byte order mark, making it impossible for other software to use UTF-8 without being rewritten to ignore the byte order mark on input and to add it on output. UTF-16 files are also fairly common on Windows, but not on other systems.{{Cite web|title=Charset|url=https://developer.android.com/reference/java/nio/charset/Charset|quote=Android note: The Android platform default is always UTF-8.|access-date=2021-01-02|website=Android Developers|language=en}}{{Cite web|last=Galloway|first=Matt|title=Character encoding for iOS developers. Or UTF-8 what now?|url=https://www.galloway.me.uk/2012/10/character-encoding-for-ios-developers-utf8/|quote=in reality, you usually just assume UTF-8 since that is by far the most common encoding.|access-date=2021-01-02|website=www.galloway.me.uk|date=9 October 2012 |language=en}}
==Popularity internally in software==
In the memory of a computer program, usage of UTF-16 is very common, particularly on Windows but also in cross-platform languages and libraries such as JavaScript, Python, and Qt. Compatibility with the Windows API is a major reason for this. Non-Windows libraries written in the early days of Unicode also tend to use UTF-16, such as International Components for Unicode.{{cite web|url=https://unicode-org.github.io/icu/userguide/strings/utf-8.html#utf-8|title=ICU Documentation: UTF-8}}
At one time it was believed by many (and is still believed by some) that having fixed-size code units offers computational advantages, which led many systems, in particular Windows, to use the fixed-size UCS-2 with two bytes per character. This is false: strings are almost never randomly accessed, and sequential access is equally fast in variable- and fixed-size encodings. Moreover, even UCS-2 was not "fixed size" once combining characters are considered, and when Unicode exceeded 65,536 code points it had to be replaced with the variable-size UTF-16 anyway.
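Both caveats about "fixed size" can be seen concretely. A minimal illustrative sketch in Python: a code point above U+FFFF occupies two 16-bit code units (a surrogate pair) in UTF-16, and a combining character makes even one user-perceived character span multiple code points:

```python
# UTF-16 is not fixed-size: code points above U+FFFF need a surrogate
# pair, i.e. two 16-bit code units instead of one.
s = "a\U0001F600"  # 'a' plus an emoji outside the Basic Multilingual Plane

utf16 = s.encode("utf-16-le")
assert len(utf16) == 6  # 2 bytes for 'a' + 4 bytes for the surrogate pair

# Even with fixed-size code points, combining characters break the
# one-unit-per-character assumption:
e = "e\u0301"       # 'e' followed by a combining acute accent
assert len(e) == 2  # one user-perceived character, two code points
```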
Recently it has become clear that the overhead of translating from/to UTF-8 on input and output, and of dealing with potential encoding errors in the input UTF-8, outweighs any benefits {{nowrap|UTF-16}} could offer, so newer software systems are starting to use UTF-8. The default string primitive in newer programming languages, such as Go,{{Cite web|title=The Go Programming Language Specification|url=https://golang.org/ref/spec#Source_code_representation|access-date=2021-02-10}} Julia, Rust and Swift 5,{{Cite web|last=Tsai|first=Michael J.|title=Michael Tsai - Blog - UTF-8 String in Swift 5|url=https://mjtsai.com/blog/2019/03/21/utf-8-string-in-swift-5/|access-date=2021-03-15|language=en}} assumes UTF-8 encoding. PyPy also uses UTF-8 for its strings,{{Cite web|last=Mattip|date=2019-03-24|title=PyPy Status Blog: PyPy v7.1 released; now uses utf-8 internally for unicode strings|url=https://morepypy.blogspot.com/2019/03/pypy-v71-released-now-uses-utf-8.html|access-date=2020-11-21|website=PyPy Status Blog}} and Python is looking into storing all strings in UTF-8.{{Cite web|title=PEP 623 -- Remove wstr from Unicode|url=https://www.python.org/dev/peps/pep-0623/|quote=Until we drop legacy Unicode object, it is very hard to try other Unicode implementation like UTF-8 based implementation in PyPy. |access-date=2020-11-21|website=Python.org|language=en}} Microsoft now recommends the use of UTF-8 for applications using the Windows API, while continuing to maintain a legacy "Unicode" (meaning UTF-16) interface.{{Cite web|title=Use the Windows UTF-8 code page|url=https://docs.microsoft.com/en-us/windows/uwp/design/globalizing/use-utf8-code-page|access-date=2020-06-06|work=UWP applications|publisher=docs.microsoft.com|language=en-us}}
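The trend toward assuming UTF-8 at language boundaries is visible even in languages whose internal representation differs. A minimal illustrative sketch in Python, where `str.encode()` and `bytes.decode()` default to UTF-8 regardless of how strings are stored internally:

```python
# Python's encode/decode default to UTF-8, so no codec name is needed
# at the boundary between text and bytes.
assert "héllo".encode() == "héllo".encode("utf-8")  # same default codec
assert "héllo".encode() == b"h\xc3\xa9llo"          # é -> two UTF-8 bytes
assert b"h\xc3\xa9llo".decode() == "héllo"          # round-trips cleanly
```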