On August 18, the Citizen Lab published an analysis of Apple’s product engraving services, documenting the censorship it observed. In this post, we discuss the significance of the findings with the report’s authors.
What has your study of Apple engraving services revealed?
We analyzed Apple’s filtering of product engravings in six regions, discovering 1,105 keyword filtering rules used to moderate engraving content. We found that Apple’s content moderation practices pertaining to derogatory, racist, or sexual content are inconsistently applied across these regions. Within mainland China, we found that Apple censors political content, including broad references to Chinese leadership and China’s political system, the names of dissidents and independent news organizations, and general terms relating to religion, democracy, and human rights. Part of this politically motivated censorship is also applied to users in Hong Kong and Taiwan. We present evidence that Apple does not fully understand what content it censors and that, rather than each censored keyword being born of careful consideration, many seem to have been thoughtlessly reappropriated from other sources. In one case, Apple censored ten Chinese names surnamed “Zhang” with generally unclear political significance; these names appear to have been copied from a list we found was also used to censor products from a Chinese company.
How was the study conducted? And what are its limitations?
From previous work analyzing automated Internet censorship, we have amassed hundreds of thousands of keywords, spanning numerous languages, that are used to censor a variety of Internet applications and platforms. We automatically tested these keywords against the API endpoints that Apple’s online store uses to determine whether to filter an engraving. Based on which engravings were filtered, we systematically derived Apple’s keyword filtering rules. However, while we draw upon a large test set of keywords to discover as many of Apple’s filtering rules as possible, our method cannot produce an exhaustive list of the keywords Apple filters in each region, so there are likely some filtering rules that we were unable to discover.
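To illustrate the shape of this kind of automated testing, here is a minimal sketch. The endpoint URL, parameter names, and response format below are hypothetical placeholders for illustration only; the actual endpoints used by Apple’s online store are not reproduced here.

```python
import requests

# Hypothetical validation endpoint; Apple's real engraving-check API is not
# documented publicly, so this URL and its parameters are placeholders.
VALIDATE_URL = "https://example.com/shop/engraving/validate"

def is_filtered(text: str, region: str) -> bool:
    """Return True if the storefront for `region` rejects `text` as an engraving."""
    resp = requests.get(
        VALIDATE_URL,
        params={"text": text, "region": region},  # assumed parameter names
        timeout=10,
    )
    resp.raise_for_status()
    # Assume the endpoint answers with JSON like {"valid": false} for filtered text.
    return not resp.json().get("valid", True)

# Probe each regional storefront with candidate keywords drawn from prior
# censorship research; which ones are rejected reveals the filtering rules.
keywords = ["新聞自由", "人权", "POO"]
for region in ["CN", "HK", "TW", "JP", "CA", "US"]:
    filtered = [kw for kw in keywords if is_filtered(kw, region)]
    print(region, filtered)
```

In practice, inferring the underlying filtering rules (rather than just a list of rejected strings) requires further probing, for instance testing substrings of rejected keywords to find the minimal triggering pattern.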
Where do we see the most restrictions? And what kind of words are most likely to be censored?
Among the keyword filtering rules we discovered in the six regions we tested, the largest number applied to mainland China, where we found 1,045 keywords filtering product engravings, followed by Hong Kong and then Taiwan. Compared to its Chinese-language filtering, Apple imposes fewer restrictions on product engravings in Japan, Canada, and the United States. However, more notable than the varying sizes of the keyword lists are the different motivations driving Apple’s content moderation policies across these regions. A large number of keywords blocked in Apple’s engraving services in mainland China, Hong Kong, and Taiwan are politically motivated (e.g., “新聞自由”, freedom of the press), whereas the filtering in Japan, Canada, and the United States is instead aimed at restricting vulgar, racist, and derogatory content (e.g., “POO”).
Apple censors political content in Chinese-speaking regions but not in others. Is Apple’s engraving filtering inconsistent in any other aspects?
Apple inconsistently applies its keyword filtering of racial and ethnic slurs to different regions. For instance, “SLANTEYE”, a derogatory term referencing Asian people, is filtered in mainland China, Hong Kong, Taiwan, and Canada, but not in Japan or the United States. Highly controversial terms targeting China such as “中國肺炎” (China Pneumonia) or “CHINAVIRUS”, references to the 2019 novel coronavirus that inflame anti-Asian racism, and “CHINAZI”, a reference to the Chinese Communist Party that was used by Hong Kong protesters, are filtered only in mainland China, though they are mainly used by people outside the region.
How does Apple determine which words to censor in different jurisdictions?
Our review of Apple’s public-facing documents, including its terms of service, suggests that Apple has not disclosed its content moderation policies for product engravings. In our report, we present evidence that Apple may not completely understand what content it filters in Chinese-language regions. By comparing Apple’s Chinese-language lists to lists we have previously found used to censor other Chinese products, we found that Apple’s list bears a similarity to many of them that cannot be explained by coincidence. Rather than each censored keyword being born of careful consideration, many of Apple’s censored Chinese keywords seem to have been thoughtlessly reappropriated from other sources.
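As a rough illustration of how such list comparisons can be quantified, the sketch below computes the overlap between two keyword lists. The file names are hypothetical, and the report’s actual analysis is more involved, but simple set overlap conveys the idea.

```python
# Sketch: measure overlap between Apple's mainland China engraving keywords
# and a keyword list found in another Chinese product. File names are
# hypothetical placeholders; lists are assumed to be one keyword per line.

def load_keywords(path: str) -> set[str]:
    with open(path, encoding="utf-8") as f:
        return {line.strip() for line in f if line.strip()}

apple_cn = load_keywords("apple_cn_keywords.txt")
other = load_keywords("other_product_keywords.txt")

shared = apple_cn & other
jaccard = len(shared) / len(apple_cn | other)
print(f"{len(shared)} shared keywords; Jaccard similarity = {jaccard:.3f}")
```

An overlap far larger than what random chance would produce for lists of these sizes, especially among obscure entries such as the ten “Zhang” names, is what points to one list having been copied from another.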
Is Apple unique in facing pressure from governments to adopt policies and procedures?
No. Pressure from governments to moderate content, both online and offline, is inevitable. Apple is just one of many cases highlighting the growing legal, political, and social tensions companies face as they expand into ever-growing global markets. China presents a unique set of opportunities and challenges due to its sheer size and restrictive legal environment, which ultimately demands that companies strike a balance between reaching into China’s domestic market and acquiescing to government pressures and content regulations, including those requiring the censorship of political speech. In previous work, we analyzed how instant messaging applications such as Microsoft’s Skype and Naver’s LINE applied region-based keyword filtering to mainland Chinese users. What is most alarming in our current findings is the lack of transparency in Apple’s content moderation policies and the unexplained extension of politically motivated moderation rules from one region to another.
However, while all companies face pressure from governments, they have choices other than blindly complying with a government’s requirements. As civil society groups and international organizations have repeatedly pointed out, companies faced with conflicting national and regional requirements should first and foremost align their content moderation practices with international human rights norms. Moreover, while Apple operates in China and is therefore subject to China’s legal requirements, many other technology companies, such as Twitter and Facebook, do not operate in that region and so avoid complying with those requirements.
How do these findings fit into the larger context of Apple censorship inside (and outside) of China?
Apple performs political censorship in other aspects of its products and platforms. For example, Apple censors the Taiwan flag emoji for users who have their iOS region set to mainland China, Hong Kong, or Macau. Apple also censors its App Store in mainland China, restricting users from accessing foreign news services, VPNs, and gay dating services. Outside of China, Apple has faced criticism from civil society groups for censoring LGBTQ+ content in its App Store in over 150 countries, in sharp contrast to the company’s pro-LGBTQ+ stance in the United States. Our work expands upon these previous findings by being the first to systematically measure how Apple directly censors users’ written communication, namely, what content users are allowed or forbidden to engrave on Apple products. We find that much of the content Apple censors in China relates to politics, religion, and human rights, and follows the categories of content censored on Chinese communications platforms such as chat apps and live-streaming services.
How is this censorship a human rights issue?
In mainland China, Apple broadly censors words relating to religion, such as “達賴” (Dalai [Lama]) and “正法” (dharma), and words politically inexpedient to the Chinese government, such as “新聞自由” (freedom of the press) and even the word “人权” (human rights). Apple censors many such terms relating to politics, religion, and human rights in Hong Kong and Taiwan as well. Content moderation decisions must be carefully evaluated, involving a wide range of stakeholders, to determine whether censorship is the only solution, whether it is proportionate, and whether it follows a set of clear, transparent, and consistent rules that can be reviewed and appealed. Apple’s heavy-handed, inconsistent, and non-transparent censorship restricts users’ ability to freely express themselves politically and religiously.
How do these findings relate to the recent controversy surrounding Apple’s use of on-device image scanning to implement new child safety features?
Content moderation and monitoring systems, even when well intentioned, can be repurposed to harm free speech and stifle political expression. Our research shows how Apple’s system for filtering engravings is used in some regions, such as Japan, the United States, and Canada, to prevent the engraving of racial slurs, yet can also be used to curtail political speech, as it is in China. While Apple’s on-device image scanning technology is currently applied to detecting child sexual abuse material that is present on an Apple mobile device and subsequently uploaded to iCloud Photos, such technology could be adapted to detect other types of images or content. Moreover, while Apple presently requires both client-side and server-side actions to initiate a manual review of photos by an Apple employee, there is no absolute requirement that content analysis utilizing client-based surveillance rely on either a server-side component or manual inspection of detected content before reporting that content to state or state-adjacent parties. Given Apple’s history of complying with China’s censorship requirements, it remains uncertain how Apple would respond if asked or compelled by the Chinese government to scan for and report images relating to undesired political content, such as images advocating for democratic or human rights reform.
Has Apple commented on these findings?
To better understand how and from where Apple derived its keyword filtering rules as well as which laws, regulations, or policies govern Apple’s filtering of engravings, we wrote a letter to Apple inquiring into these and other topics. The full letter may be found here. Apple sent a response on August 17, 2021. Read their full response here.