
Kevin Guo cofounded Hive in 2014 as a consumer app and later pivoted the business to selling its internal content moderation AI software to customers.
AI-generated content, from illegal images of child sexual abuse material to misleading political deepfakes, is infecting the internet, and Hive CEO Kevin Guo believes his company's content moderation systems are a "modern antivirus."
Hive's systems, used by social media platforms like Reddit and Bluesky, rely on machine learning (ML) models to flag harmful content. Now the startup will be better equipped to identify and remove child sexual abuse material from its customers' websites, thanks to a new partnership with the Internet Watch Foundation (IWF), a U.K.-based child safety nonprofit, Hive announced Thursday.
Hive will integrate the organization's datasets, which include a regularly updated list of about 8,000 websites that host confirmed images of both real and AI-generated CSAM, into its models. The dataset also includes a list of unique phrases and cryptic keywords that offenders use to conceal CSAM and evade moderation. As part of the deal, Hive's customers can get access to IWF's "hashes"—digital fingerprints of millions of known CSAM images and videos. The partnership builds on Hive's 2024 collaboration with Thorn, a national nonprofit that builds CSAM detection technology, to expand its reach and purge more violating content from its customers' platforms.
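The core idea behind hash matching is simple: compute a fingerprint of an uploaded file and check it against a list of fingerprints of known abusive material. The sketch below is a simplified illustration, not Hive's or IWF's actual pipeline; it uses an exact cryptographic hash (SHA-256), whereas production systems typically use perceptual hashes designed to survive resizing and re-encoding, and all names here are hypothetical.

```python
import hashlib

def file_fingerprint(data: bytes) -> str:
    # SHA-256 as a stand-in for the perceptual hashes real
    # matching systems use; an exact hash only catches
    # byte-identical copies of a known file.
    return hashlib.sha256(data).hexdigest()

def is_known_match(data: bytes, known_hashes: set[str]) -> bool:
    # Membership test against a blocklist of known fingerprints.
    return file_fingerprint(data) in known_hashes

# Hypothetical blocklist built from previously flagged files.
known = {file_fingerprint(b"previously flagged image bytes")}

print(is_known_match(b"previously flagged image bytes", known))  # True
print(is_known_match(b"new, unseen image bytes", known))         # False
```

Because the blocklist stores only fingerprints, a platform can screen uploads without ever hosting or redistributing the underlying illegal images themselves.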
The new partnership aims to stem the flood of AI-generated child sexual abuse images, of which offenders created tens of thousands in 2024. Generative AI tools have only made it easier to produce illicit imagery; in 2023, IWF flagged over 275,000 web pages containing CSAM to law enforcement, a record-setting number for the organization.
“CSAM previously was reasonably difficult to obtain. The content's not that common,” Guo told Forbes. “These AI generation engines, image and video both, unlock a very different world here with the explosion of content.”
Founded in 2014 as a social media app, Hive pivoted in 2017 to sell its internal moderation tools to companies instead. Now, in addition to detecting toxic content, the company’s AI models can also identify logos, recognize tens of thousands of celebrities and spot copies of movies and TV shows being shared online. The San Francisco-based company has raised $120 million in venture capital from the likes of General Catalyst and was valued at $2 billion back in 2021.
The wide prevalence of AI-generated content has translated into business growth for Hive, where revenue has multiplied 30 times since 2020 (Guo declined to disclose current figures). The company processes 10 billion pieces of content each month from its 400 customers, which include the streaming platform Kick, which has some 50 million users, and the Pentagon. Last month, Hive secured a $2.4 million contract with the U.S. Department of Defense to ensure that the audio, video and text-based content its staff receives from different sources is real and trustworthy.
There’s also increasing interest from document verification companies and insurance companies, which have reported an influx of fake claims, “where people have taken a picture of their bumper in a car and the AI generated a scratch,” Guo said.
Most recently, in the wake of a ban on TikTok, alternative platforms like Clapper and Favorited signed up for Hive’s automated content moderation systems to prepare for an onrush of “TikTok refugees,” Guo said. “They've also proactively been onboarding onto our CSAM offerings, because they're very afraid of that becoming a high profile issue.”
Guo isn’t concerned about President Donald Trump’s hands-off approach to regulating AI. Even though Trump has repealed the Biden administration’s executive order on AI, which outlined measures to deal with deepfakes, online child safety is a “fairly bipartisan issue,” Guo said. “We don't think this part is going to go away.”
-----
Source: Forbes