A few years ago, Karine Mellata and Michael Lin crossed paths while working on Apple’s fraud engineering and algorithmic risk team. Both engineers, they helped tackle online abuse, including spam, botting, account security issues, and developer fraud, for Apple’s growing customer base.
Despite their tireless efforts to develop new models to combat the constantly evolving patterns of abuse, Mellata and Lin couldn’t help but feel that they were always one step behind. They found themselves stuck rebuilding core components of their trust and safety infrastructure to keep up with the changing landscape.
“As regulations become stricter and teams are expected to centralize their trust and safety responses, we saw an opportunity to modernize this industry and create a safer internet for everyone,” Mellata explained in an email interview with TechCrunch.
“We dreamed of a system that could adapt as quickly as the abuse itself.”
So the two co-founded Intrinsic, a startup building tools to help safety teams prevent abusive behavior on their platforms. Intrinsic recently raised $3.1 million in a seed round with participation from Urban Innovation Fund, Y Combinator, 645 Ventures, and Okta.
Intrinsic’s platform moderates both user- and AI-generated content. It provides the infrastructure that lets customers, mainly social media companies and e-commerce marketplaces, detect content that violates their policies and take action against it.
According to Mellata, Intrinsic focuses on integrating safety products and automating tasks such as banning users and flagging content for review.
“Our platform is a fully customizable AI content moderation platform,” Mellata said. “For instance, we can help a publishing company avoid giving financial advice in their marketing materials, which can pose legal liabilities. We can also help marketplaces detect illegal listings, such as brass knuckles, which are banned in California but not in Texas.”
Mellata argues that there are no “off-the-shelf” classifiers for these nuanced categories, and even well-resourced trust and safety teams would require several weeks or even months of engineering time to add new automated detection categories in-house.
Asked about competing platforms such as Spectrum Labs, Azure, and Cinder (a near-direct competitor), Mellata said Intrinsic sets itself apart in two key ways: the platform’s “explainability” and its extensive tooling.
Customers can ask Intrinsic to explain the reasoning behind its content moderation decisions. The platform also hosts manual review and labeling tools that let customers fine-tune moderation models on their own data.
“Most conventional trust and safety solutions lack flexibility and were not designed to evolve with abuse,” Mellata stated. “Trust and safety teams with limited resources are increasingly turning to vendors for help, and they are looking to reduce moderation costs while maintaining high safety standards.”
Without a third-party audit, it is hard to gauge how accurate a vendor’s moderation models are, or whether they are susceptible to the same kinds of biases that plague content moderation models elsewhere. But Intrinsic is gaining traction, with large, established enterprise customers signing contracts in the six-figure range on average.
In the near future, Intrinsic plans to expand its team of three and extend its moderation technology to cover not only text and images but also video and audio.
“The current slowdown in the tech market has increased interest in automation for trust and safety, which puts Intrinsic in a unique position,” Mellata concluded. “COOs are looking to cut costs, and chief compliance officers are looking to reduce risk. Intrinsic addresses both concerns by providing a cheaper, faster, and more efficient solution that catches significantly more abuse than existing vendors or equivalent in-house solutions.”