Kenyan Moderators Speak Out Against Devastating Impact of AI Training

The images pop up in Mophat Okinyi’s mind when he is alone or about to sleep. Okinyi, a former content moderator for OpenAI’s ChatGPT in Nairobi, Kenya, is one of four people in that role who have petitioned the Kenyan government, calling for an investigation into the exploitative conditions faced by contractors who review the content that powers artificial intelligence programs.

Okinyi described the job as damaging to his mental health. He reviewed up to 700 text passages a day, many depicting graphic sexual violence. The work took a toll: he began avoiding people and projecting paranoid narratives onto those around him. Last year his wife, then pregnant, left him, citing a change in his behavior.

The petition centers on a contract between OpenAI and Sama, a California-based data annotation services company. The moderators allege that while working for Sama in Nairobi in 2021 and 2022, reviewing content for OpenAI, they suffered psychological trauma, received low pay, and were dismissed abruptly. They say they were not properly warned about the brutal nature of the material they would be reviewing and were not provided with adequate psychological support. They were paid between $1.46 and $3.74 per hour. When OpenAI terminated the contract eight months early, the moderators were left without an income and had to cope with the trauma on their own.

OpenAI declined to comment on the matter. Sama said its moderators had round-the-clock access to licensed mental health therapists and received medical benefits covering the cost of psychiatrists. Sama also said it gave employees proper notice of the ChatGPT project’s termination and offered them the opportunity to join another project.

The human labor behind the AI boom has been overshadowed by concerns about automation. Yet the work of feeding large language models like ChatGPT with examples of hate speech, violence, and sexual abuse is a growing industry, expected to exceed $14 billion by 2030. The labor is often performed in countries like Kenya, where workers will do the job for lower wages. Nairobi has become a hub for this type of work thanks to its economic crisis, high rate of English speakers, and mix of international workers, which allowed Sama to recruit young, educated Kenyans looking for work.

The moderators say the content they were tasked with reviewing became increasingly disturbing and traumatizing over time, and that they received little psychological support from Sama. They are calling for new legislation to regulate how harmful technology work is outsourced in Kenya and to recognize exposure to harmful content as an occupational hazard. They also want an investigation into how the Ministry of Labor failed to protect Kenyan youth from outsourcing companies. OpenAI and other tech companies bear significant responsibility in this matter, the moderators argue, because they often distance themselves from the harsh working conditions that content moderators endure.