“Interpretability Will Not Reliably Find Deceptive AI” by Neel Nanda
EA Forum Podcast (All audio) - A podcast by EA Forum Team

(Disclaimer: Post written in a personal capacity. These are personal hot takes and do not in any way represent my employer's views.)

TL;DR: **I do not think we will produce high-reliability methods to evaluate or monitor the safety**[1] **of superintelligent systems** via current research paradigms, with interpretability or otherwise[2]. Interpretability seems a valuable tool here and remains worth investing in, as it will hopefully increase the reliability we can achieve. However, interpretability should be viewed as part of an overall portfolio of defences: a layer in a defence-in-depth strategy. It is not the one thing that will save us, and it still won't be enough for high reliability.

Introduction

There's a common, often implicit, argument made in AI safety discussions: interpretability is presented as the only reliable path forward for detecting deception in advanced AI - notably argued in Dario Amodei's recent “The Urgency of [...]

---

Outline:

(00:58) Introduction
(02:59) High Reliability Seems Unattainable
(05:16) Why Won't Interpretability be Reliable?
(07:50) The Potential of Black-Box Methods
(08:52) The Role of Interpretability
(12:07) Conclusion

The original text contained 5 footnotes which were omitted from this narration.

---

First published: May 4th, 2025

Source: https://forum.effectivealtruism.org/posts/Th4tviypdKzeb59GN/interpretability-will-not-reliably-find-deceptive-ai

---

Narrated by TYPE III AUDIO.