Investing in: Authentic(AI)tion
One liner - Systems that ensure transparency and prevent the weaponization of AI
Description
A lot of anti-AI ink has been spilled on the risk of societal collapse. Historically this argument found a home in robotic enslavement by Skynet, paperclip-driven eradication, or an unemployment dystopia. But more recently, vocal dissension has emerged from inside the technologist camp, from people who see the real manifestation of AI danger: weaponized disinformation.
Content moderation, spanning both factual accuracy and trust & safety, is not a new discipline. Social media and gaming platforms have wrestled with it for years. While there has been great advancement, notably within abuse prevention, today’s best-in-class methods are heavily human dependent and it’s brutal work. I’ve long been interested in this topic and have been exploring investment options for years. Even if most people are inherently good, there are cruel souls in the world and the internet gives that darkness scale. There are countless articles documenting the traumatic reality of being a moderator, not to mention the compliance costs required and the reputational damage felt when those systems fail. If AI’s promise is to free us from undesirable work, internet moderation should be first on the list.
So far, however, I’ve failed to see sufficient market pull because the costs are negatively externalized and highly fragmented. Social media platforms are perversely incentivized to host controversial (though not illegal) content to drive engagement. And in most cases the benefit of preventing malicious content is greatly outweighed by the damage of being seen as a censor.
But Generative AI changes all of that. The releases of the last ~9 months have completely reframed what was thought possible. AI-derived communication is virtually indistinguishable from our own, and convincing audio / video recreations are potentially just a few months away. The Turing test is dead, and in its wake are some very plausible threats, both overt and subtle:
Clandestine political influence
Synthetic evidence within litigation
Advanced financial engineering (e.g. evading AML controls)
The single greatest motivator across human history has been a common enemy, and AI could soon become exactly that. The recent Senate hearing on AI safety made it clear that politicians have a new target in their sights. How this will all unfold remains unclear. Companies may view this as a security issue and adopt systems out of risk prevention. AI safety could become a consumer wedge issue that drives purchasing decisions. Or any number of regulatory measures could be rolled out with varying degrees of success. Whatever the forcing function, I expect solutions to take forms like:
Synthetic / artificially-generated content identification
User-determined filtering (i.e. “I only want to see human-derived….”)
Clinical trial-inspired approval procedures
3rd party ratings (e.g. nutrition labels, credit scoring)
I have few strong beliefs, strongly held, in this space, but one is that the solution(s) can only come from new, nimble, creative software startups that do not yet exist. Governmental oversight alone can’t hope to keep up with the development frenzy, and self-regulation is not a viable path forward. This much change in a short period of time creates opportunity, and I’m here for it.
Interesting Companies
Safety Systems: Lakera (Fly portfolio), Unitary, Checkstep, Oterlu, Hive Moderation, Truepic
Synthetic media: Synthesia, Sonatic, Tavus, Rephrase.ai, Coqui (yes, I know I’m missing literally hundreds)
ML Observability: Evidently AI (Fly portfolio), Fiddler, Arize, Why Labs
Related Reading
Catching bad content in the age of AI (MIT Tech Review)
The rise of synthetic media (Bessemer)
2022 Code of Practice on Disinformation (European Union)
Why Geoff Hinton left Google AI (MIT Tech Review)
Did the neurons read your book? (Imperial College London)
Other Thoughts
While AI is quickly becoming a common enemy, the second, arguably larger and certainly better understood, elephant in the room is China. Even the most ardent AI doomers will quickly acknowledge the threat of a China lead in AI, and it’s uncontroversial to recognize that any pause / slowdown taken by the West accelerates this risk. It’s unclear how the China threat will warp AI safety policy, but I’m watching.
Parallels to the nuclear age are drawn frequently, but there the requirement of enriched fissile material provided a viable chokepoint for governments. For a time, compute limits might offer something similar, but that will give way to ever more efficient models.