January 18, 2023 By Bart Leonard

OpenAI Used Kenyan Workers On Less Than $2/hour To Filter X-rated ChatGPT Content

The recent controversy involving OpenAI and the Kenyan workers hired to label explicit content used to train its AI chatbot, ChatGPT, has brought the ethical implications of artificial intelligence into the spotlight. The workers, reportedly paid less than $2 per hour, were exposed to graphic descriptions of bestiality and fan-fiction involving rape in the course of labeling the content. This raises the question of whether it is ethical to rely on poorly paid, vulnerable laborers to train AI systems.

The use of AI technology has skyrocketed in recent years, with companies such as OpenAI leading the charge. OpenAI, founded in 2015 by tech industry figures including Elon Musk and Sam Altman, has made a name for itself in the AI space with its chatbot ChatGPT. The chatbot, released in November 2022, quickly gained traction, with more than 1 million users signing up for its free research preview within days of launch.

However, the recent news that OpenAI outsourced the labeling of this explicit content to Kenyan workers has put a spotlight on the ethical implications of how such systems are built. Repeated exposure to graphic descriptions of abuse could be seen as exploitative, and it is especially concerning given that the workers were reportedly paid less than $2 per hour for their services.

The ethical implications of AI technology have long been a source of debate. While proponents argue that AI can improve people's lives, opponents counter that it is too easily abused by those in power. The case of OpenAI and the Kenyan laborers is a stark reminder that harm can arise not only from how AI is used but also from how it is built.

In order to ensure that AI technology is developed and used ethically and responsibly, companies must take steps to protect the people involved in the process. This includes paying fair compensation for the work and providing adequate safeguards and support for those whose jobs require reviewing potentially harmful content. Furthermore, companies must put measures in place to ensure that the data used to train AI systems is collected ethically and is not used in ways that could harm those who provide it.

The episode should serve as a wake-up call for the industry: companies must take the necessary steps to ensure that AI technology is built and used ethically and responsibly, and that the people involved in the process are not exploited along the way.