How Editing Companies Are Exploiting Non-Anglophone Scholars with AI: Analyzing the Ethical Implications and Impact on Academic Publishing


In the academic world, language discrimination is a critical issue that non-anglophone scholars routinely face. Natalia Kucirkova, a professor in Norway, shed light on this problem in a recent Times Higher Education article, describing the stress caused by insensitive referee comments and the significant time and money required to prepare articles for journal submission. Kucirkova argued that AI “bots” could level the publication playing field for these scholars. In an unfortunate twist, however, AI systems are instead being used to exploit non-anglophone scholars by appropriating their intellectual property.


The Exploitation of Non-Anglophone Scholars


Many academic publishers collaborate with large, private editing firms to provide “author services,” including English-language editing. With the advent of AI, these firms saw an opportunity to monetize two resources they already held: research papers uploaded in digital formats and well-trained editors. They began using client papers to train specialized AI large language models (LLMs) that recognize and correct the characteristic mistakes made by non-anglophone authors from around the world. Human editors then proofread the machine-generated text and feed their corrections back into the system, further enhancing the AI’s capabilities.

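No editing company has published its training pipeline, but the process described above is straightforward to sketch. The snippet below is a minimal, hypothetical illustration using the open-source Hugging Face libraries: paired original and corrected sentences, exactly what an editing workflow produces, are used to fine-tune a small sequence-to-sequence model. The model choice (t5-small), the output directory, and the example pair are assumptions made purely for illustration, not details disclosed by any firm.

```python
# A minimal, hypothetical sketch of the training loop described above.
# Model choice, names, and the example pair are illustrative assumptions;
# no editing company has disclosed its actual pipeline.
from datasets import Dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Trainer, TrainingArguments)

# Each client manuscript yields pairs like this one: the author's
# original sentence and the human editor's correction.
pairs = [
    {"source": "The experiment were conducted on twenty participant.",
     "target": "The experiment was conducted on twenty participants."},
]

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

def tokenize(example):
    # Encode the uncorrected sentence as the input and the edited
    # sentence as the training target.
    model_inputs = tokenizer(example["source"], truncation=True)
    labels = tokenizer(text_target=example["target"], truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

dataset = Dataset.from_list(pairs).map(
    tokenize, remove_columns=["source", "target"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="editing-model", num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()  # the model now encodes patterns learned from client text
```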

Understanding Large Language Models (LLMs)


Large language models, like the ones powering ChatGPT and Copilot for Microsoft 365, have come a long way from simple predictive-text systems: they can now generate hundreds of words at a time. Editing companies use LLMs to encode not just editorial corrections but everything within a manuscript. When researchers upload their work, their intellectual property, including original ideas, innovative variations on established theories, and newly coined terms, is appropriated by the company. This information is then used to “predict” and generate text in similar papers edited by the service or by anyone using the company’s editing tools.

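To make the prediction mechanism concrete, here is a toy demonstration using the openly available GPT-2 model as a stand-in (the models editing companies actually train are proprietary, so this is an assumption for illustration). Given the opening of a sentence, the model repeatedly predicts the next most likely token, and a sufficiently distinctive phrase absorbed during training can resurface in its output.

```python
# A toy demonstration of next-token prediction, the mechanism described
# above. GPT-2 stands in for proprietary editing-company models; the
# prompt is an invented example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "In this paper we introduce the concept of"
inputs = tokenizer(prompt, return_tensors="pt")

# The model assigns a probability to every possible next token, and the
# generate loop samples from that distribution one token at a time.
# A sufficiently distinctive phrase seen in training (say, a newly
# coined term from an uploaded manuscript) can resurface here.
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=40, do_sample=True)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```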

The Lack of Transparency


Surprisingly, few scholars have noticed this fundamental transformation of academic editing. Publishers rarely name the editing firms to which they outsource work, while editing companies boast about their AI advances when marketing new tools but stay silent about them when advertising their editing services. Researchers are led to believe that their papers will be edited entirely by humans when, in reality, human editors are increasingly marginalized by AI systems.


Furthermore, journals, publishers, and editing companies all guarantee research confidentiality, yet their data-protection and privacy policies conveniently omit any mention of AI. While this is misleading, it is not illegal: current legislation protects the confidentiality of personal data but does not regulate or prohibit the use of anonymized academic work.


Pushing Back Against AI Exploitation


Prominent victims of AI exploitation are beginning to push back. In December, The New York Times sued OpenAI and Microsoft for using millions of Times articles to train the automated chatbots that now compete with the newspaper as a source of reliable information. In June, the National Institutes of Health prohibited scientific peer reviewers from using AI tools to analyze or critique grant applications and R&D contract proposals, citing concerns about data privacy and security.


The Society of Authors highlights the complex ethical and moral issues surrounding profit-driven AI development. These extend beyond copyright infringement to possible violations of moral rights and data-protection laws, invasions of privacy, and acts of passing off. The society calls on publishers and editing companies to embrace transparency and the fundamental academic principle of informed consent.


Embracing Transparency and Informed Consent


To address the exploitation of non-anglophone scholars and protect their intellectual property, publishers and editing companies must prioritize transparency. Editing-service providers should disclose the AI-based systems and tools they apply to client work, explain clearly how LLMs operate, and offer scholars a genuine choice: for example, compensating authors for the loss of rights by pricing hybrid human/AI editing below fully confidential, human-only editing.


Publishers who offer branded author services should also name the editing companies they outsource work to, allowing researchers to make an informed choice. This transparency is crucial to protect publishers from potential lawsuits and authors from exploitation.


Future Laws and Regulations


While new laws and regulations around AI training are on the horizon, scholars must take steps to protect their intellectual property in the present. By familiarizing themselves with the basics of AI, reading the fine print, and thoroughly questioning editing services, scholars can safeguard their work and rights, even when working with trusted firms and publishers.


As the academic community becomes more aware of the exploitation happening in the realm of AI-driven editing, it is crucial to advocate for ethical practices, transparency, and informed consent. Only through collective action can we ensure that non-anglophone scholars are not unjustly exploited and that their intellectual property is protected.


Disclaimer: The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy or position of any academic institution, publisher, or editing company.

Keywords: AI, non-anglophone scholars, intellectual property, editing companies, large language models, academic editing, transparency, informed consent.

