Safety Measures Strengthened: OpenAI Grants Board Final Authority over Risky AI

OpenAI is expanding its internal safety processes to fend off the threat of harmful AI. Models already in production are governed by a "safety systems" team, which handles, say, systematic abuses of ChatGPT that can be mitigated with API restrictions or tuning. Frontier models still in development fall to the "preparedness" team, which tries to identify and quantify risks before a model is released. Under the framework, only models whose post-mitigation risk rates "medium" or lower may be deployed, and only those rating "high" or lower may be developed further; in other words, only medium and high risks are tolerated one way or the other. On top of the technical side, OpenAI is creating a cross-functional Safety Advisory Group that will review the teams' reports and make recommendations from a higher vantage point.
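
To make the gating rule concrete, here is a minimal sketch of the deploy/develop thresholds as described above. It is purely illustrative: the risk levels mirror the framework's public categories (low, medium, high, critical), but the `Risk` enum and function names are assumptions for this example, not OpenAI's actual tooling.

```python
from enum import IntEnum


class Risk(IntEnum):
    """Post-mitigation risk levels, ordered from least to most severe."""
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3


def can_deploy(post_mitigation_risk: Risk) -> bool:
    # Deployment is allowed only at "medium" risk or below.
    return post_mitigation_risk <= Risk.MEDIUM


def can_continue_development(post_mitigation_risk: Risk) -> bool:
    # Further development is allowed only at "high" risk or below;
    # "critical" blocks development outright.
    return post_mitigation_risk <= Risk.HIGH


if __name__ == "__main__":
    for level in Risk:
        print(f"{level.name:<8} deploy={can_deploy(level)} "
              f"develop={can_continue_development(level)}")
```

Run as-is, the sketch prints that a "high"-rated model may keep being developed but not shipped, while a "critical" rating halts both, which is the asymmetry the paragraph above summarizes as risks being tolerated "one way or the other."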