The AI Training Toll | Turtles AI

The AI Training Toll
  In an article in The Guardian, Kenyan content moderators who reviewed disturbing material to train AI systems like ChatGPT are speaking out about the psychological trauma and low pay they endured in that work. Former contractors for Sama, a California-based AI training company, have filed a petition with the Kenyan government decrying their treatment. Sama held a contract with OpenAI, the maker of ChatGPT, to supply content moderators who reviewed texts and images used to train its AI systems.

  The Kenyan moderators describe viewing graphic content, including sexual violence, murder, and child abuse, for long hours and low pay, with little psychological support. Some say the trauma has left them withdrawn, paranoid, and struggling with mental health issues; one man says his wife left him while pregnant because of his personality changes. The workers were paid between $1.46 and $3.74 per hour. Sama disputes the allegations of poor working conditions, saying moderators had access to therapists and benefits. When Sama ended its OpenAI contract early, the petition says, workers felt abandoned while still dealing with trauma. OpenAI did not comment.

  The content moderation industry is expected to boom along with AI, but the workers doing the difficult job of feeding disturbing examples to algorithms are often far from Silicon Valley tech firms, in places like Kenya where labour is cheaper. Activists say tech companies must take more responsibility for the conditions of outsourced workers.

  Turtles AI's point: this report highlights the human toll behind training artificial intelligence systems like ChatGPT. While AI promises exciting advances, the well-being of the real people doing difficult content moderation work must not be overlooked. Companies relying on cheap overseas labour should consider their ethical obligations. What can be done to improve conditions for these workers?
I invite readers to join the discussion on ensuring humane treatment across the AI supply chain.

Highlights:
  • Kenyan moderators describe psychological trauma from reviewing graphic content to train AI
  • Workers say they endured long hours and low pay with little support
  • The petition alleges lack of warnings about disturbing material and inadequate mental health care
  • OpenAI contract ended early, leaving workers abandoned while dealing with trauma