- “Unitary has emerged as a clear early leader in the important AI field of content safety, and we’re so excited to back this exceptional team as they continue to accelerate and innovate in content classification technology.” “From the start, Unitary had some of the most powerful AI for classifying harmful content.”
- Lately, social media companies have signaled a move away from stronger moderation policies; fact-checking organizations are losing momentum; and questions remain about the ethics of moderation when it comes to harmful content.
- “In an online world, there’s an immense need for a technology-driven approach to identify harmful content,” said Christopher Steed, chief investment officer, Paladin Capital Group, in a statement. Still, it’s a crowded space.
- “Rather than analyzing just a series of frames, in order to understand the nuance and whether a video is [for example] artistic or violent, you need to be able to simulate the way a human moderator watches the video.”
- But also, it may refer to Haco’s previous career: unitary operators are used in describing a quantum state, which in itself is complicated and unpredictable — just like online content and humans.
- Unitary thus straddles that interesting intersection of cutting-edge research and real-world application. “We first met Sasha and James two years ago and have been incredibly impressed,” said Gemma Bloemen, a principal at Creandum and board member, in a statement.
Content moderation continues to be a contentious topic in the world of online media. New regulations and public concern are likely to keep it as a priority for many years to come.