
“DoorDash Introduces ‘SafeChat+’: A Revolutionary AI Feature for Identifying Verbal Harassment”

DoorDash hopes to reduce verbally abusive and inappropriate interactions between consumers and delivery people with a new AI-powered feature that automatically detects offensive language. Dubbed “SafeChat+,” the feature uses AI to review in-app conversations and determine whether a customer or Dasher is being harassed. It is an upgrade from SafeChat, under which DoorDash’s Trust & Safety team manually screens chats for verbal abuse. The company tells TechCrunch that SafeChat+ is “the same concept [as SafeChat] but backed by even better, even more sophisticated technology. It can understand subtle nuances and threats that don’t match any specific keywords.”

“We know that verbal abuse or harassment represents the largest type of safety incident on our platform.”
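To illustrate the difference the company is describing, here is a minimal sketch, not DoorDash’s actual implementation, of flagging a chat message with a learned text classifier instead of a keyword list. It assumes the Hugging Face `transformers` package and uses an off-the-shelf toxicity model purely as an example; the keyword set, threshold, and label check are illustrative assumptions.

```python
# Sketch only: contrast keyword matching with a learned toxicity classifier.
from transformers import pipeline

# Hypothetical model choice; any toxicity/abuse classifier could stand in here.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

KEYWORDS = {"idiot", "stupid"}  # the older keyword-screening approach, for contrast


def keyword_flag(message: str) -> bool:
    # Only fires when a listed word literally appears in the message.
    return any(word in message.lower() for word in KEYWORDS)


def model_flag(message: str, threshold: float = 0.8) -> bool:
    # The classifier scores the whole sentence, so hostile phrasing that
    # avoids any listed keyword can still be flagged.
    result = classifier(message)[0]
    # Label names vary by model; this example model reports "toxic" for abuse.
    return result["label"] == "toxic" and result["score"] >= threshold


message = "You'd better watch yourself next time you order."
print(keyword_flag(message))  # False: no keyword matches
print(model_flag(message))    # may be True: the model scores intent, not exact words
```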

Intrinsic: Leveraging Y Combinator Support to Revolutionize Trust and Safety Infrastructure

“Intrinsic is a fully customizable AI content moderation platform,” Mellata said. Intrinsic, he explained, lets customers “ask” it about mistakes it makes in content moderation decisions and offers explanations of its reasoning. The platform also hosts manual review and labeling tools that allow customers to fine-tune moderation models on their own data. “Most conventional trust and safety solutions aren’t flexible and weren’t built to evolve with abuse,” Mellata said, adding: “The broader slowdown in tech is driving more interest in automation for trust and safety, which places Intrinsic in a unique position.”
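Intrinsic’s actual API is not described in the article, so the following is a hypothetical sketch of the workflow Mellata outlines: every decision carries an explanation a customer can ask for, and corrections from manual review are collected as labeled data for fine-tuning. The class, method, and policy names are invented for illustration.

```python
# Hypothetical sketch of an explainable moderation loop; not Intrinsic's real API.
from dataclasses import dataclass


@dataclass
class Decision:
    content_id: str
    label: str        # e.g. "harassment" or "allowed"
    rationale: str    # plain-language explanation of the decision


class ModerationClient:
    def __init__(self, policy: str):
        self.policy = policy
        self.labeled_corrections: list[Decision] = []  # fine-tuning data

    def moderate(self, content_id: str, text: str) -> Decision:
        # A real platform would call a hosted model here; this stub only
        # illustrates that each decision is returned with its reasoning.
        if "worthless" in text.lower():
            return Decision(content_id, "harassment",
                            f"Hostile wording violates policy '{self.policy}'.")
        return Decision(content_id, "allowed", "No policy violation found.")

    def explain(self, decision: Decision) -> str:
        # "Asking" the platform why it made a call, as the article describes.
        return decision.rationale

    def submit_correction(self, decision: Decision, correct_label: str) -> None:
        # Manual review and relabeling feed customer-specific fine-tuning.
        decision.label = correct_label
        self.labeled_corrections.append(decision)


client = ModerationClient(policy="community-harassment-v1")
decision = client.moderate("msg-123", "You are worthless.")
print(client.explain(decision))
client.submit_correction(decision, "harassment")
```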